Microsoft starts force installing Windows 10 21H2 on more devices

Microsoft has started the forced rollout of Windows 10, version 21H2 to more devices approaching the end of service (EOS) as part of a first machine learning (ML) training phase.

The automated feature update rollout phase comes after Windows 10 20H2 became available for broad deployment to everyone in May 2021 via Windows Update.

Windows 10 21H2 is also rolling out to seekers (users who manually check for updates) on Windows 10 2004 or newer through a fast update experience similar to a monthly update.

Forced deployment on devices reaching EOS

Microsoft started the forced rollout of Windows 10 20H2 using the same machine learning-based rollout process to devices running Windows 10 1909 and earlier in March 2021.

Now, computers running Windows 10 versions approaching their end of service will be automatically upgraded to Windows 10 21H2.

“[We] started the first phase in our rollout for machine learning (ML) training, targeting devices on Windows 10, version 20H2 that are approaching end of servicing to update automatically to Windows 10, version 21H2,” Microsoft says.

“We will continue to train our machine learning model through all phases to intelligently rollout new versions of Windows 10, and deliver a smooth update experience.”

According to the Windows 10, version 20H2 health dashboard, devices running Windows 10, version 20H2 (Home, Pro, Pro Education, and Pro for Workstations) will be the ones to get automatically upgraded in the coming months as they reach the end of servicing on May 10, 2022.

Microsoft is automatically deploying this feature update to ensure that it can adequately service these devices with the latest updates, security updates, and improvements.

No compatibility holds currently in place

Windows Servicing and Delivery Director of Program Management John Cable revealed that Microsoft usually starts “this machine learning (ML)-based rollout process several months in advance of the end of service date to provide adequate time for a smooth update process.”

If you are running a Windows 10 version nearing end of service and you have not been offered the update automatically, you can manually check for Windows 10 21H2 via the Windows Update dialog.

“As always, we recommend that you update your devices to the latest version of Windows 10 or upgrade eligible devices to Windows 11 to take advantage of the latest features and advanced protections from the latest security threats,” the company added.

You can use this Windows support document to troubleshoot Windows 10 21H2 update problems or follow this guided walk-through to fix any errors you encounter.

Researchers found security pitfalls in IBM’s cloud infrastructure

Security researchers recently probed IBM Cloud’s database-as-a-service infrastructure and found several security issues that granted them access to the internal server used to build database images for customer deployments. The demonstrated attack highlights some common security oversights that can lead to supply chain compromises in cloud infrastructure.

Developed by researchers from security firm Wiz, the attack combined a privilege escalation vulnerability in the IBM Cloud Databases for PostgreSQL service with plaintext credentials scattered around the environment and overly permissive internal network access controls that allowed for lateral movement inside the infrastructure.

PostgreSQL is an appealing target in cloud environments

Wiz’s audit of IBM Cloud Databases for PostgreSQL was part of a larger research project that analyzed PostgreSQL deployments across major cloud providers that offer this database engine as part of their managed database-as-a-service solutions. Earlier this year, the Wiz researchers also found and disclosed vulnerabilities in the PostgreSQL implementations of Microsoft Azure and Google Cloud Platform (GCP).

The open-source PostgreSQL relational database engine has been in development for over 30 years with an emphasis on stability, high-availability and scalability. However, this complex piece of software was not designed with a permission model suitable for multi-tenant cloud environments where database instances need to be isolated from each other and from the underlying infrastructure.

PostgreSQL has powerful features through which administrators can alter the server file system and even execute code through database queries, but these operations are unsafe and need to be restricted in shared cloud environments. Meanwhile, other admin operations such as database replication, creating checkpoints, installing extensions and event triggers need to be available to customers for the service to be functional. That’s why cloud service providers (CSPs) had to come up with workarounds and make modifications to PostgreSQL’s permission model to enable these capabilities even when customers only operate with limited accounts.

Privilege escalation through SQL injection

While analyzing IBM Cloud’s PostgreSQL implementation, the Wiz researchers looked at the Logical Replication mechanism that’s available to users. This feature was implemented using several database functions, including one called create_subscription that is owned and executed by a database superuser called ibm.

When they inspected the code of this function, the researchers noticed an SQL injection vulnerability caused by improper sanitization of the arguments passed to it. This meant they could pass arbitrary SQL queries to the function, which would then execute those queries as the ibm superuser. The researchers exploited this flaw via the PostgreSQL COPY statement to execute arbitrary commands on the underlying virtual machine that hosted the database instance and opened a reverse shell.
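
The core of the flaw, as described, is attacker-controlled text being concatenated into SQL that runs with superuser rights. The following Python sketch is purely illustrative (the function and argument names are placeholders, not IBM's actual code): it contrasts unsafe string interpolation with a parameterized query and shows how a crafted argument can smuggle in a COPY ... TO PROGRAM statement, the PostgreSQL superuser feature that runs an operating system command on the database host.

# Hypothetical illustration of the injection class described above; the names
# create_subscription and subscription_name are placeholders, not IBM's code.

def build_query_unsafe(subscription_name: str) -> str:
    # Vulnerable pattern: the caller's argument is pasted straight into SQL
    # that a superuser-owned function will execute.
    return f"SELECT create_subscription('{subscription_name}', 'conninfo');"

def build_query_parameterized(subscription_name: str):
    # Safer pattern: the value travels as a bound parameter, never as SQL text.
    return "SELECT create_subscription(%s, %s);", (subscription_name, "conninfo")

if __name__ == "__main__":
    print(build_query_unsafe("my_sub"))  # benign input, expected statement

    # A malicious argument closes the string literal, terminates the call, and
    # appends a COPY ... TO PROGRAM statement that executes a shell command,
    # which is why injection into a superuser context leads to code execution.
    payload = "x', ''); COPY (SELECT '') TO PROGRAM 'id > /tmp/pwned'; --"
    print(build_query_unsafe(payload))
    print(build_query_parameterized(payload))  # here the payload stays inert data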

With a shell on the Linux system, they started doing reconnaissance to understand their environment: listing running processes, checking active network connections, inspecting the contents of the /etc/passwd file, which lists the system’s users, and running a port scan on the internal network to discover other servers. The broad port scan caught the attention of the IBM security team, who reached out to the Wiz team to ask about their activities.

“After discussing our work and sharing our thoughts with them, they kindly gave us permission to pursue our research and further challenge security boundaries, reflecting the organization’s healthy security culture,” the Wiz team said.

Stored credentials lead to supply chain attack

The gathered information, such as environment variables, told the researchers they were in a Kubernetes (K8s) pod container. After searching the file system, they found a K8s API access token stored locally in a file called /var/run/secrets/kubernetes.io/serviceaccount/token. The API token allowed them to gather more information about the K8s cluster, but it turned out that all the pods they could see were associated with their account and operated under the same namespace. This wasn’t a dead end, however.
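
That token path is the standard location where Kubernetes mounts a pod's service account credentials, so this kind of enumeration needs nothing more than the mounted files and the in-cluster API endpoint. A minimal sketch of the idea, assuming the pod's service account is allowed to list pods (illustrative only, not the researchers' tooling):

# Minimal sketch: query the Kubernetes API from inside a pod using the service
# account token and CA bundle that Kubernetes mounts into containers by default.
import requests  # third-party; pip install requests

SA_DIR = "/var/run/secrets/kubernetes.io/serviceaccount"
API = "https://kubernetes.default.svc"  # standard in-cluster API server address

token = open(f"{SA_DIR}/token").read().strip()
namespace = open(f"{SA_DIR}/namespace").read().strip()

resp = requests.get(
    f"{API}/api/v1/namespaces/{namespace}/pods",
    headers={"Authorization": f"Bearer {token}"},
    verify=f"{SA_DIR}/ca.crt",  # cluster CA is mounted alongside the token
)
resp.raise_for_status()

# Pod specs reference imagePullSecrets and environment variables, which is
# where registry credentials and other secrets often surface.
for pod in resp.json().get("items", []):
    pulls = [s["name"] for s in pod["spec"].get("imagePullSecrets", [])]
    print(pod["metadata"]["name"], pulls)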

K8s is a container orchestration system used for software deployment. Containers are usually deployed from images: prebuilt packages that contain all the files needed for a container and its preconfigured services to operate. These images are normally stored on a container registry server, which can be public or private. In IBM Cloud’s case, it was a private container registry that required authentication.

The researchers used the API token to read the configurations of the pods in their namespace and found the access key for four different internal container registries in those configuration files. The description of this newly found key in IBM Cloud’s identity and access management (IAM) API suggested it had both read and write privileges to the container registries, which would have given the researchers the ability to overwrite existing images with rogue ones.

However, it turned out that the key description was inaccurate and they could only download images. This level of access had security implications, but it did not pose a direct threat to other IBM Cloud customers, so the researchers pushed forward.

Container images can contain a lot of sensitive information that’s used during deployment and later gets deleted, including source code, internal scripts referencing additional services in the infrastructure, as well as credentials needed to access them. Therefore, the researchers decided to download all images from the registry service and use an automated tool to scan them for secrets, such as credentials and API tokens.

“In order to comprehensively scan for secrets, we unpacked the images and examined the combination of files that made up each image,” the researchers said. “Container images are based on one or more layers; each may inadvertently include secrets. For example, if a secret exists in one layer but is deleted from the following layer, it would be completely invisible from within the container. Scanning each layer separately may therefore reveal additional secrets.”
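
A rough sketch of that per-layer approach, assuming the image has already been pulled and unpacked so that each layer is available locally as a tar archive (the secret patterns below are simplified examples, not Wiz's actual scanner):

# Sketch: scan every file in every unpacked image layer for secret-like strings.
# Assumes layer tarballs were extracted locally, e.g. from `docker save` output.
import re
import sys
import tarfile
from pathlib import Path

PATTERNS = [  # simplistic examples; real scanners match far more
    re.compile(rb"AKIA[0-9A-Z]{16}"),                           # AWS access key ID
    re.compile(rb"(?i)(password|passwd|secret)\s*[=:]\s*\S+"),  # key=value secrets
    re.compile(rb"-----BEGIN (RSA|EC|OPENSSH) PRIVATE KEY-----"),
]

def scan_layer(layer_tar: Path) -> None:
    with tarfile.open(layer_tar) as tar:
        for member in tar:
            if not member.isfile():
                continue
            data = tar.extractfile(member).read()
            for pattern in PATTERNS:
                if pattern.search(data):
                    print(f"{layer_tar.name}:{member.name} matches {pattern.pattern!r}")

if __name__ == "__main__":
    # Scanning each layer separately catches files that a later layer deletes.
    for layer in sorted(Path(sys.argv[1]).glob("**/*.tar")):
        scan_layer(layer)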

The JSON manifest files of container images have a “history” section that lists historical commands that were executed during the build process of every image. In several such files, the researchers found commands that had passwords passed to them as command line arguments. These included passwords for an IBM Cloud internal FTP server and a build artifact repository.
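
Those “history” entries live in the image's config JSON, which is part of the standard Docker/OCI image format, so checking for this kind of leak only requires parsing one file. A small illustrative sketch (the suspicious patterns are assumptions, not the exact strings Wiz matched):

# Sketch: flag build-history commands that appear to pass secrets as arguments.
# sys.argv[1] should point at a saved image's config JSON (the file referenced
# by manifest.json), whose "history" array records the build commands.
import json
import re
import sys

SUSPICIOUS = re.compile(r"(?i)(--password|passwd=|password=|token=|secret=)")

with open(sys.argv[1]) as f:
    config = json.load(f)

for entry in config.get("history", []):
    cmd = entry.get("created_by", "")
    if SUSPICIOUS.search(cmd):
        print("possible secret in build command:", cmd)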

Finally, the researchers tested whether they could access those servers from within their container, and it turned out that they could. This overly permissive network access, combined with the extracted credentials, allowed them to overwrite arbitrary files in the build artifact repository that the automated IBM Cloud build process uses to create container images. Those images are then used in customer deployments, opening the door to a supply chain attack.

“Our research into IBM Cloud Databases for PostgreSQL reinforced what we learned from other cloud vendors, that modifications to the PostgreSQL engine effectively introduced new vulnerabilities to the service,” the researchers said. “These vulnerabilities could have been exploited by a malicious actor as part of an extensive exploit chain culminating in a supply-chain attack on the platform.”

Lessons for other organizations

While all of these issues have already been privately reported to and fixed by the IBM Cloud team, they are not unique to IBM. According to the Wiz team, the “scattered secrets” issue is common across all cloud environments.

Automated build and deployment workflows often leave secrets behind in various places such as configuration files, Linux bash history, journal files and so on that developers forget to wipe when deployment is complete. Furthermore, some developers accidentally upload their whole .git and CircleCI configuration files to production servers. Forgotten secrets commonly found by the Wiz team include cloud access keys, passwords, CI/CD credentials and API access tokens.

Another prevalent issue that played a critical role in the IBM Cloud attack is the lack of strict access controls between production servers and internal CI/CD systems. This often allows attackers to move laterally and gain a deeper foothold into an organization’s infrastructure.

Finally, private container registries can provide a wealth of information to attackers that goes beyond credentials. They can reveal information about critical servers inside the infrastructure or can contain code that reveals additional vulnerabilities. Organizations should make sure their container registry solutions enforce proper access controls and scoping, the Wiz team said.

Software projects face supply chain security risk due to insecure artifact downloads via GitHub Actions

The way build artifacts are stored by the GitHub Actions platform could enable attackers to inject malicious code into software projects with CI/CD (continuous integration and continuous delivery) workflows that don’t perform sufficient filtering when downloading artifacts. Cybersecurity researchers have identified several popular artifacts download scripts used by thousands of repositories that are vulnerable to this issue.

“We have discovered that when transferring artifacts between different workflows, there is a major risk for artifact poisoning — a technique in which attackers replace the content of a legitimate artifact with a modified malicious one and thereby initiate a supply chain attack,” researchers from supply chain security firm Legit Security said in an analysis of the issue.

To attack a vulnerable project’s CI/CD pipeline that downloads and uses artifacts generated by other workflows, attackers only need to fork the repositories containing those workflows, modify them in their local copies so that they produce rogue artifacts, and then open pull requests back to the original repositories; those requests do not even have to be accepted.

A logic flaw in artifact storage APIs

GitHub Actions is a CI/CD platform for automating the building and testing of software code. The service is free for public repositories and includes free minutes of worker run time and storage space for private repositories. It’s widely adopted by projects that use GitHub to host and manage their source code repositories.

GitHub Actions workflows are automated processes defined in .yml files using YAML syntax that get executed when certain triggers or events occur, such as when new code gets committed to the repository. Build artifacts are compiled binaries, logs and other files that result from the execution of a workflow and its individual jobs. These artifacts are saved inside storage buckets, with each workflow run being assigned a particular bucket to which it can upload files and from which it can later download them.

The reference “action” (script) for downloading artifacts that’s provided by GitHub doesn’t support cross-workflow artifact downloads, but reusing artifacts generated by different workflows as input for follow-up build steps is a common use case for software projects. That’s why developers have created their own custom scripts that rely on the GitHub Actions API to download artifacts using more complex filtering, such as artifacts created by a specific workflow file, a specific user, a specific branch and so on.

The problem that Legit Security found is that the API doesn’t differentiate between artifacts uploaded by forked repositories and base repositories. If a download script filters artifacts generated by a particular workflow file from a particular repository, the API will serve the latest version of the artifact generated by that file, but this could be a malicious version generated automatically via a pull request action from a forked version of the repository.
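
The risky pattern boils down to “download the newest artifact with the right name, whichever run produced it.” The sketch below is a simplified illustration of that logic against the GitHub REST API (it is not the code of any of the vulnerable community actions, and the endpoint and field names are as publicly documented at the time of writing); note that nothing in the filter asks which repository or event produced the artifact.

# Simplified illustration of the risky cross-workflow download pattern:
# "newest artifact with this name", with no check of where it came from.
import requests  # pip install requests; add an Authorization header if needed

OWNER, REPO = "example-org", "example-repo"   # placeholders
ARTIFACT_NAME = "libfoo.so"                   # placeholder artifact name
URL = f"https://api.github.com/repos/{OWNER}/{REPO}/actions/artifacts"

resp = requests.get(URL, headers={"Accept": "application/vnd.github+json"})
resp.raise_for_status()

candidates = [a for a in resp.json()["artifacts"] if a["name"] == ARTIFACT_NAME]
latest = max(candidates, key=lambda a: a["created_at"])

# Nothing above distinguishes an artifact uploaded by a fork's pull-request run
# from one built by the base repository, so the "latest" artifact selected here
# may be attacker-supplied.
print("would download:", latest["archive_download_url"])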

“To put it simply: in a vulnerable workflow, any GitHub user can create a fork that builds an artifact,” the researchers said. “Then inject this artifact into the original repository build process and modify its output. This is another form of a software supply chain attack, where the build output is modified by an attacker.”

The researchers found four custom actions developed by the community for downloading artifacts that were all vulnerable. One of them was listed as a dependency for over 12,000 repositories.

The Rust example

One of the repositories that used such a custom script in one of its workflows was the official repository for the Rust programming language. The vulnerable workflow, called ci.yml, was responsible for building and testing the repository’s code and used the custom action to download an artifact called libgccjit.so, a Linux library file, that was generated by a workflow in a third-party repository.

All attackers had to do was fork the third-party repository, modify the workflow in that repository to generate a malicious version of the library, and issue a pull request to the original repository to generate the artifact. If Rust’s workflow had then pulled in the poisoned version of the library, it would have given the attackers the ability to execute malicious code within the Rust repository with the workflow’s privileges.

“Upon exploitation, the attacker could modify the repository branches, pull requests, issues, releases, and all of the entities that are available for the workflow token permissions,” the researchers said.

Users need to enforce stricter filtering for artifact downloads

GitHub responded to Legit’s report by adding more filtering capabilities to the API, which developers can use to better identify artifacts created by a specific run instance of the workflow (workflow run id). However, this change cannot be forced onto existing implementations without breaking workflows, so it’s up to users to update their workflows with stricter filtering in order to be protected.

Another mitigation is to filter the downloaded artifacts by the hash value of the commits that generated them, or to exclude artifacts created by pull requests entirely using the exclude_pull_requests option. Legit Security also contacted the authors of the vulnerable custom artifact download scripts they found.
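
As a hedged sketch of what such stricter filtering can look like against the REST API (the repository, workflow, and artifact names below are placeholders, and the endpoints and fields are as publicly documented at the time of writing), the logic accepts an artifact only if it came from a successful, push-triggered run of a known workflow in the base repository itself, and it can additionally be pinned to a trusted commit hash.

# Sketch of stricter artifact selection: only accept artifacts produced by a
# successful push-triggered run of a known workflow in the base repository.
import requests  # pip install requests

OWNER, REPO = "example-org", "example-repo"   # placeholders
WORKFLOW_FILE = "build.yml"                   # the workflow expected to build it
ARTIFACT_NAME = "libfoo.so"
BASE = f"https://api.github.com/repos/{OWNER}/{REPO}"
HEADERS = {"Accept": "application/vnd.github+json"}  # add an auth token if needed

runs = requests.get(
    f"{BASE}/actions/workflows/{WORKFLOW_FILE}/runs",
    params={"event": "push", "status": "success", "per_page": 20},
    headers=HEADERS,
).json()["workflow_runs"]

for run in runs:
    # Reject anything that did not originate from the base repository (forks).
    if run["head_repository"]["full_name"] != f"{OWNER}/{REPO}":
        continue
    artifacts = requests.get(
        f"{BASE}/actions/runs/{run['id']}/artifacts", headers=HEADERS
    ).json()["artifacts"]
    match = next((a for a in artifacts if a["name"] == ARTIFACT_NAME), None)
    if match:
        # For an even tighter pin, compare run["head_sha"] against a commit
        # hash you trust before downloading match["archive_download_url"].
        print("trusted artifact from run", run["id"], match["archive_download_url"])
        break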

“In supply chain security, the focus has been on preventing people from contributing malicious code, so every time you do a change in a repository, create a pull request or do a change request, GitHub has a lot of built-in verification controls,” Liav Caspi, CTO of Legit Security, tells CSO. “Somebody has to approve your code, somebody has to merge it, so there’s a person involved. What we’ve been trying to find are techniques that exploit a logic problem that any person could influence without review and I think this is one of them. If someone would have known about it, they could have injected the artifact without any approval.”

Typically, CI pipelines have workflows that run automatically on pull requests to test the code before it’s manually reviewed, and if the pull request contains any artifact that needs to be built, the workflow will build it, Caspi said. A sophisticated attacker could open a pull request to get the artifact built and then delete the request by closing the submission; with all the activity noise in source code repositories today, chances are it would go unnoticed, he said.

8 things to consider amid cybersecurity vendor layoffs

2022 has been a heavy year for layoffs in the technology sector. Whether due to budget restraints, mergers and acquisitions, streamlining, or economic reasons, TrueUp’s tech layoff tracker has recorded over 1000 rounds of layoffs at tech companies globally so far, affecting more than 182,000 people. Some of the biggest tech companies in the world have announced significant staff cuts, including Amazon, Twitter, Meta, and Salesforce. Although perhaps less severely affected, cybersecurity vendors haven’t been immune. Popular security firms including Snyk, Malwarebytes, Tripwire, Cybereason, and Lacework have made notable workforce cuts this year, albeit for varying reasons from shifting business strategies to increasing cash runway.

In total, 34 security firms have announced layoffs or workforce restructuring since the start of 2022, according to layoff tracking site Layoffs.FYI. The most commonly cited driving forces behind the cuts were a tightening market and the need to protect business longevity. While there’s little evidence to suggest 2023 will see wide-sweeping cybersecurity vendor workforce cuts of unprecedented scale in a tech sector that is faring relatively well, increasingly uncertain economic times mean that nothing is off the table. Momentum Cyber’s Cybersecurity Market Review Q3 2022 found that cybersecurity stock prices decreased 7.2% during Q3 2022, underperforming the NASDAQ at -5.0% and the S&P 500 at -6.3%. Meanwhile, the 2023 State of IT Report found that 83% of companies are concerned about a recession in 2023, with 50% planning precautionary measures for an economic slowdown, which could see a significant portion of them hunker down on cybersecurity purchases and services.

These are not monumental shifts or predictions, but they do reflect the ambiguous economic situation. They are also the types of trends that can cause cybersecurity businesses to assess and adapt their strategic positions which, as 2022 has shown, can involve staffing cuts. Reasoning aside, cybersecurity vendor layoffs raise several issues for CISOs and customers, not least security and risk-related factors. If you find yourself in the position where your cybersecurity vendor has announced cuts, here are 8 things to consider to put yourself and your business in the best position to weather the potential storm:

Can vendors provide the same level of support and communication?

First and foremost is the concern that security vendor cuts could impact a vendor’s ability to provide the same level of service support, Frank Dickson, group VP for IDC’s security and trust research practice, tells CSO. “Support is really underappreciated. When we do surveys of people who like their vendors, support always comes out as the most important feature, and it’s a huge differentiator. Does that support change? Is your field service engineer, the person that you worked with, going to change? What about new cloud configuration, scalability, those kinds of things?”

Netskope CISO EMEA Neil Thacker agrees. “When a security vendor announces significant layoffs, customers should be most concerned about reduced engagement and communication,” he tells CSO. “Security vendors and customers should have an open and clear channel of communication to discuss any issues, challenges, and new requirements. If the ability to engage and communicate with a security vendor becomes difficult, it’s a clear sign that the layoffs have affected the organization in problematic ways.”

CISOs should talk with their account managers or even senior leadership about how a vendor is managing layoffs, adds Ed Skoudis, president of SANS Technology Institute. “Businesses should be asking vendors a number of key questions: What are they doing to protect their portion of the supply chain? How can we be sure they don’t take their eye off the ball, but continue to protect us?” Honesty and transparency are vital, and amid challenging times, clear and decisive messaging from your vendor should reassure you that they’re positioned to support your business needs despite layoffs, he says.

Where are vendor cuts being made?

Next to consider is precisely where cuts are being made and whether they’re tied directly to the security product or service that’s being offered, Forrester senior analyst Jess Burn tells CSO. “The personnel that are being let go might be redundant in the eyes of the leaders, but they might have played a pretty vital role in a security process or function that you actually depend on from that vendor. That means whoever is left is going to have more on their plates, and they’re going to be doing more with less.”

Layoffs of engineers and developers should be the most concerning for CISOs and security teams, Burn adds, describing them as the “canary in the coalmine” when it comes to spotting and fixing security threats. “Often, when we see some of these early layoffs, they impact recruitment or marketing staff, but that shouldn’t concern you really.”

However, if you’re looking on LinkedIn and seeing engineers or developers being laid off, that should give you pause for thought, Burn says. Dickson concurs, adding that sales or marketing cuts are unlikely to affect the ability to get security value from the vendor, but cuts to key service or engineering staff could well do just that. For Thacker, the biggest risks to customers would come from a reduction in DevSecOps staffing, “which would potentially bring about a reduction in security oversight, feature updates, and even impact upon the general availability of the service,” while Yuval Wollman, chief cyber officer and managing director of UST, thinks cuts to innovation and research staff could have a direct impact on a product’s efficiency and reliability as the threat landscape evolves and changes.

CISOs should therefore feel comfortable asking their vendors for details about where cuts are being made and how they relate to vital security functions – and vendors should be happy to provide such information.  “A reduced security workforce will impact innovation. Your particular mix of vendors and service providers might be best of breed right now, but with staff stretched thinner, new innovations may slow down, allowing attackers to gain the upper hand as they continue to innovate their attack strategies,” warns Skoudis.

What is driving the vendor’s layoffs?

Another key factor to consider if your security vendor is laying off staff is what is driving the cuts, Dickson says. “The complexity we have is that some layoffs are not necessarily driven by a lack of revenue. Clearly, macroeconomic factors aren’t good, but you can’t necessarily take layoffs by a vendor as an indictment of their business model.”

There are numerous high-flying, almost “unicorn-type” security startups that identify a need, get funding, and all of a sudden get massive growth, Dickson adds. “The goal of this growth is to achieve some sort of IPO event, funding revenue growth with venture equity. As long as they are showing revenue growth and there’s a lot of venture funding available, they can do that. What happens when the economy goes south? Venture funding goes south.” If these vendors then have to sustain the same rate of revenue growth without that funding, they have to keep revenue equal to expenses – i.e., continue to grow while keeping cash flow neutral. “Sometimes you’ll see layoffs associated with that and it’s important you look at this equity and the layoffs at a vendor, asking whether it’s because they were funding revenue growth with venture capital, or if it’s an indictment of their business model. You must take each one on a case-by-case basis.”

You can also investigate whether the company is simply experiencing an exodus of staff who are moving voluntarily, often a sign of internal unrest, adds Wollman. “Speak to other people in the market, and demand clarity from your vendor on what’s happening.”

What security service does the vendor provide?

It’s also important to assess the security service your vendor provides amid staff layoffs, Dickson says. “If you’re talking about a vendor that just secures your on-premises infrastructure, that’s kind of a known commodity. We know what a firewall does. We know what a secure web gateway buys us – we’ve done this for 20 years now.”

This could make any layoff-impacted operations or services easier to augment or replace (if required). However, if the service is more complex, less practiced or provides protection against newer, less predictable threats such as those impacting AWS built-in Kubernetes, then risks could be more significant. This could also be particularly troubling if an MSSP is involved, Skoudis adds. “Their SOCs are usually run without a lot of extra people, and fewer eyes and brains analyzing events from your network could mean that particularly devious attackers will go unnoticed longer.” As for SaaS technology, reduced headcount could raise questions about whether bugs and vulnerabilities are being found, patched, and fixed to the same standard.

The best way to mitigate risks here is to be aware of what controls the security vendor provides, and who is responsible for what, Thacker says. “The shared responsibility model should be mapped for every critical security vendor, and a review of these controls should take place on a regular basis.”

Could security vendor layoffs create sabotage risks?

A disgruntled employee who just lost their job could retaliate against their employer or the customers, Skoudis warns. If not addressed, this could open businesses to notably heightened security risks. “They could build backdoors into systems, steal sensitive information for sale on the dark web, blind detection capabilities, or commit all kinds of other mischief in products and services. In some ways, the ultimate supply chain attack is when the insiders in an organization undermine their own product or service by backdooring it or otherwise sabotaging it.”

According to one study, 45% of employees save, download, or send company data outside of the network before they leave a role, says Wollman. “In the case of a disgruntled ex-employee, the process of saving or downloading data could look like intentional data leakage or destruction, but even if the parting is amicable, organizations need to think about files being deleted or damaged, or intellectual property being stolen or misused.”

CISOs should seek reassurance from vendors that they are handling any layoffs sensitively and securely, and proof of clear and effective off-boarding processes is something to ask for. “Software development integrity controls and code checking are super important in light of sabotage-related supply chain attacks, and during times of lay-offs, it’s particularly important for companies letting people go to really focus and do this carefully, lest they subject their customers to increased risk,” Skoudis says. Vendors could be asked to review and prove their own security posture during and after layoffs.

Could layoffs put a security vendor in breach of contract?

Security vendors have a responsibility to meet contractual obligations regarding the service they provide, and if staffing cuts hamper their ability to do so, a business could find itself involved in a legal dispute, Burn points out. “If they’re not able to prove that their solution is going to keep a company safe despite layoffs, then they could be in violation of the terms of a contract and subscription. So, you might have to get a little bit legal, and that’s where you might need to line up a replacement solution too.”

When should you consider switching security vendors?

Dickson advocates caution for those considering switching vendors, even if there are concerns about the immediate impacts of layoffs. “Don’t just think about today or even three months from now. Consider the vendor and where they will be in two years from now. Might you be in a better spot if you stay with a vendor? Are you in a better spot if you switch?”

Wollman advises considering the business impact of any vendor change. “Thoroughly investigate what it would look like to switch to a new product or vendor. Ask yourself: ‘What is the financial cost of a switch of vendor, or of losing this vendor if they go out of business? What will the operative impact be of both scenarios?’ Weigh up the situation from every angle before you make any final decisions.”

What’s the silver lining of security vendor layoffs?

Among the potentially troublesome issues security vendor layoffs raise, there are some theoretical upsides. “In some cases, layoffs may be a good sign of a security vendor who is streamlining and cutting inefficiencies, especially as we come out of a period of high growth, where companies may have onboarded new staff too quickly,” Wollman says.

Burn urges CISOs and businesses not to overlook the opportunity to benefit from staffing cuts, in that a vendor’s loss of skilled security people could be their gain. “You could recruit them. Security vendors have always recruited away from end-user organizations. Now there is an opportunity for CISOs, because there is still a massive staffing shortage.”

As a security leader, you could find yourself being able to staff up internally with people who have been laid off if they happen to be folks that are in engineering or some other security-type role, she adds. “There is an opportunity, in the racket, to come out on the right side out of this, because I know firms are still having a terrible time recruiting and retaining security talent, specifically because they’re in such high demand.”
