
12 steps to take when there’s an active adversary on your network



CISOs know they must respond quickly and effectively to an incident, yet surveys point to continuing challenges to deliver on that goal.

The State of Incident Response 2021 report, from tech companies Kroll, Red Canary and VMware, surveyed more than 400 information security professionals and 100 legal and compliance leaders and found that 45% of them identified inadequacies in detection and response resources. Additionally, 55% wanted to improve time to containment and incident response automation.

There are compelling reasons for investing in improved incident response.

Consider the findings from tech company Cisco, which in its December 2021 Security Outcomes Study Volume 2 report identified five key drivers of cybersecurity program success. Among them are the ability to detect threats early and accurately, the ability to respond quickly to incidents, and the capacity to recover promptly from disasters.

CISOs need detailed cyber incident response plans to deliver on those three points. They need to practice them to identify any deficits that could hinder their performance should hackers strike. And they need to drill regularly so they can perform as best they can in a real event.

“An active incident is not the time to go figure all that out,” says Joe McMann, global cybersecurity portfolio lead for Capgemini.

To be well prepared, enterprise cybersecurity teams need to have accurate asset inventories and visibility into all areas of their IT environment; they need to know their organization’s mission-critical systems; and they must understand how to respond if they detect hackers trying to disrupt any of that.

The key steps they’ll need to take—quickly and nearly simultaneously—if there’s an active adversary in their network are as follows:

1. Sound the alarm

Security teams face an average of 11,047 alerts a day, according to the 2021 State of Security Operations report from Forrester Consulting and Palo Alto Networks.

Of course, many of those alerts are false positives or indicate low-priority risks, but others point to bigger problems that must be quickly escalated.

“You need to know when to break the glass. People are afraid to pull that trigger, to reach that mode, because it’s hard to take it back if you do. There’s oversight and costs, and people are afraid to spin it up sometimes,” McMann says.

Given that, teams must have good guidelines to know when and how to escalate situations.

“That decision point will be unique to each organization, but the escalation path, who to call, when to engage legal, [etc.] should be clearly documented,” says Nick Biasini, head of outreach for Cisco Talos, a threat intelligence organization.

Clear guidelines prevent delays that could give hackers more time to do damage, while also preventing costly responses to minor incidents or false alarms.

2. Scope the situation and triage

“Take stock of what you know and what you don’t know: These are the facts, the alerts being generated, information I’m receiving from peers, how big is this, how big could the impact be, those are some of the questions that have to be answered initially, so you can prioritize, make smart decisions and take actions,” McMann says.

This requires CISOs to have in place good asset management and visibility into systems, he adds, as security logs, application logs, transactional data and other such data help teams assess the situation, then triage and formulate the right response.

3. Bring in the business

CISOs should be looping in the business side during the triage process, security leaders say, a point that’s often overlooked during active responses. As part of this, security teams need to immediately identify which impacted components are critical for conducting business, who owns those components and who controls them.

As J. Wolfgang Goerlich, advisory CISO with Cisco Secure, says: “This is a business problem. But in a security breach, a very technical person will be thinking, ‘I have to remediate.’ However, one of the things that CISOs need to remember is that a breach is a business problem not a technical problem. So there should be a secondary process that’s running business continuity and disaster recovery so that the business can keep doing what it needs to be doing.”

4. Staunch the bleeding

As that’s all happening, security teams need to focus on egress routes to make sure that nothing is getting out, says Steven Graham, senior vice president for EC-Council, a cybersecurity technical certification body.

“If there’s an active adversary in the network, they probably set up as many backdoors as they could. Identify what egress points exist within the network so you can stop the effect of the attack,” he says.

5. And find the points of entry

At the same time, security teams need to figure out how the hackers got in and where they went. “Investigate the breadcrumbs, what was their path in, what did they do next, what is everything they touched. It’s an additional step of triage,” Graham says, adding that good network monitoring is a must here. Then close those vulnerabilities so the attackers, or anyone else, can’t get in again.

6. Assemble the troops

As the scope of the incident comes into focus, CISOs should be assembling the full team they’ll need to respond: all the executives needed to make decisions; the security and IT practitioners with the skills needed for response; the right representatives from communications, human resources, legal and other functional areas; and any external resources required. CISOs also need to know whether and when to bring in law enforcement, and which agencies to involve, another element that should be outlined in advance so there’s no scrambling during the incident, says Randy Trzeciak, director of the Master of Science in Information Security Policy & Management (MSISPM) program at Carnegie Mellon University.

7. Track your actions

Notes on the investigation, priorities, accomplished tasks, ongoing activities, unresolved needs and other details must be effectively documented and efficiently disseminated, McMann says, adding that Word docs or emails typically aren’t good vehicles for such information-sharing and archiving.

He stresses the need for a good knowledge management system or communications platform for sharing and recording data during the incident response—another point that he says is often overlooked during a real event.

“You have to have a platform that collects and stores information and the findings, all the open questions. That has to be collected, organized, and made sense of because that information levels up to the incident coordinators so CISOs or their deputies can distill the information and make decisions,” he says.

8. Coordinate the counterattack

As investigation turns to action, CISOs need to coordinate their moves against the hackers—whether that means booting them out right away or taking time to monitor their activities before striking against them, Biasini says.

“They’re going to have more than one foothold, so you want to kick them out of all the footholds at once. Be as thorough as you can, so you’re not playing Whac-A-Mole,” he says.

9. Work the plan

CISOs, other executives, and all responding teams need to stick to the incident response plan, and resist taking over tasks outside their assigned roles, experts say.

“You have a playbook. Make sure that’s being run and you don’t take it over. Your plan as a leader is not to step in unless it’s assigned to you,” warns Jeff Pollard, vice president and principal analyst at Forrester Research.

Leaders in crisis are often tempted to jump into the trenches, but they, like everyone else, can best contribute by focusing on their own work. CISOs who start reviewing log data or jumping on keyboards actually create bottlenecks in the response and delay other critical tasks that only they can do, such as communicating to the board.

10. But adapt as needed

Even the most detailed and practiced incident response plan can’t account for every potential scenario, a new threat or a novel technique, so CISOs and organizations as a whole must know when to pivot and be able to adapt their response to the realities they’re facing during the actual event.

“There’s always a curve ball,” Pollard says.

He points out that ransomware attacks at one point morphed, with hacker groups not only holding encrypted data hostage but then, after getting paid the ransom, threatening to release stolen data if another ransom isn’t handed over. Somewhere there’s a CISO who was first to see that and had to figure out on the fly how best to counter—which, Pollard says, confirms the need for security leaders to be agile.

11. Alert others

CISOs won’t be able to hide an incident, and in many cases they can’t legally try to do so. That means they must work with their legal and communications teams to plan what they should say, when to say it, how to deliver the message and to whom.

“Know your points of contact, create a clear concise story, and get everyone on the same page,” Graham says.

CISOs should also alert other internal and external security officials, Pollard adds.

“When there’s an attack, sharing becomes an afterthought or it’s a concern because of the possibility of litigation, but find out what you can share, so you can let your team know what they can and can’t talk about, and let others know they should check their environments even if you’re not able [to say that you’ve been breached],” he says.

Not only does that help prepare other CISOs, Pollard adds, it helps the responding CISO quickly know whether the vulnerability or attack is unique to his or her organization or part of a larger issue.

12. Stay calm; tend to staff needs

Security professionals will know the gravity of the situation, so angry or frantic reactions won’t get anyone to work harder or scare off the adversaries. In fact, such reactions can do more harm than good. As Pollard says: “It’s going to be a crisis, but it doesn’t have to be chaos. We can work a crisis; no one can work effectively in chaos.”

He and others say CISOs and their executive colleagues are better served by being attentive to their workers and their needs.

Goerlich says he has seen teams “run themselves into the ground” by working long hours without breaks, sometimes going a day or more without sleep. Although that grueling schedule shows dedication, it’s likely to lead to mistakes.

“People get into their zones and work well beyond the times that they should,” Goerlich says, noting that CISOs should plan for clear lines of communications, caps for work hours, staggered schedules, and post-event time off. He adds: “As much as possible, organizations should think out in advance how to handle the human elements.”

Copyright © 2022 IDG Communications, Inc.


Improving Cyber Hygiene with Multi-Factor Authentication and Cyber Awareness



Using multi-factor authentication (MFA) is one of the key components of an organization’s Identity and Access Management (IAM) program to maintain a strong cybersecurity posture. Having multiple layers to verify users is important, but MFA fatigue is also real and can be exploited by hackers.

Enabling MFA for all accounts is a best practice for all organizations, but the specifics of how it is implemented are significant because attackers are developing workarounds. That said, when done correctly – and with the right pieces in place – MFA is an invaluable tool in the cyber toolbox and a key piece of proper cyber hygiene. This is a primary reason why MFA was a key topic for this year’s cybersecurity awareness month. For leaders and executives, the key is to ensure employees are trained to understand the importance of the security tools – like MFA – available to them while also making the process easy for them.

MFA is still an important piece of the cyber hygiene puzzle

Multi-factor authentication (MFA) helps to provide extra layers of security throughout your organization. This quick verification serves as a tool that allows organizations to confirm identity before allowing users to access company data. This can look like prompting employees to use mobile tokens and/or to enter a specific code they’ve been texted or emailed before logging on to certain devices and websites. 

MFA fatigue is rising, and hackers are noticing

Even though MFA should be a basic requirement these days, it’s not a foolproof tactic. Attackers are finding new ways around this security layer with what are called MFA fatigue attacks.

As employees try to access work applications, they are often prompted to verify their identity in some way established by the IT security team. This typically involves notifications to their smartphones. Anyone who has been trying to complete their work in a timely manner knows the irritation of constantly having to take action on these notifications. This is the basis of the MFA fatigue attack.

Attackers excel at finding ways to gain entry to their chosen target, and they seem to know a good bit about human psychology. They are now spamming employees whose credentials they have compromised with MFA authorization requests, sometimes dozens of times in an hour, until the employees get so irritated that they approve the request in their authentication apps. Or victims may assume there is a system malfunction and accept a request just to make the notifications stop.

A simple, effective MFA strategy for long-term success

Getting MFA right is a balance between being strict enough so that the security measure maintains integrity and lax enough so that employees don’t grow tired of it and get tripped up.

If sessions are invalidated too frequently, employees may grow irritated and see the MFA prompts as excessive. On the other hand, if the policy is too lenient, authenticated sessions can last too long, IP changes won’t trigger new prompts, new MFA device enrollments won’t trigger alerts, and enterprises risk not being informed when, for instance, an authentication token that has already passed the MFA check gets stolen.

Most employees have never heard of MFA fatigue attacks, so they don’t know to look for or report them. To counter this, organizations need to educate employees so they’re prepared to spot these attacks.

Organizations need to place controls on MFA to lower the potential for MFA abuse. The most effective control is to not use methods that allow simple approvals of notifications – a scenario that contributes to MFA fatigue. All approvals should mandate responses that prove the user has the authenticated device. Number matching, for instance, is a technique that requires the user to enter a series of numbers they can see on their screen.
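The number-matching idea can be sketched in a few lines of Python. This is purely an illustrative fragment, not any vendor's implementation; the function names are hypothetical.

```python
import hmac
import secrets

def issue_challenge():
    """Server side: pick the two-digit number displayed on the login screen."""
    return f"{secrets.randbelow(100):02d}"

def verify_challenge(shown, entered):
    """Authenticator side: approve the request only if the user types the
    number they can actually see on the login screen. compare_digest avoids
    leaking information through timing differences."""
    return hmac.compare_digest(shown, entered)
```

Because an attacker spamming push requests never sees the victim's screen, they cannot supply the matching number, so a blind "Approve" tap is no longer enough.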

There’s also the effective one-time passcode (OTP) method of approval where the user gets information from the authentication request and has to enter it for verification. This requires a little more work on the user’s part, but it helps reduce the risk of MFA fatigue.
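For reference, the standard time-based variant of the one-time-passcode approach (TOTP, RFC 6238) fits in a short function. The sketch below uses only the Python standard library and is meant for illustration, not production use.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at=None, step=30, digits=6):
    """Compute an RFC 6238 time-based one-time password (HMAC-SHA-1,
    30-second steps): HMAC a counter derived from the clock, then apply
    the standard dynamic-truncation step to get a short decimal code."""
    key = base64.b32decode(secret_b32)
    counter = int((time.time() if at is None else at) // step)
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                       # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Against RFC 6238's published test vector (the ASCII secret "12345678901234567890", base32-encoded), `totp(..., at=59, digits=8)` produces 94287082.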

Another useful tool is an endpoint privilege management solution, which helps stop the theft of session cookies. If attackers get hold of those cookies, they can bypass MFA controls. This solution adds a robust layer to the protection of user credentials.

It’s important to set thresholds and alert the security operations center (SOC) when they are exceeded. The SOC can use user behavior analytics to create context-based triggers that alert the security team when unusual behavior occurs. It can also prohibit user authentication from dubious IP addresses.
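A sliding-window prompt counter is one simple way to implement such a threshold. The class below is a minimal sketch; the name and the default of 10 prompts per hour are illustrative assumptions, not a product feature.

```python
import time
from collections import defaultdict, deque

class MfaPromptMonitor:
    """Count MFA push prompts per user within a sliding time window;
    a burst of prompts is the signature of an MFA fatigue attack."""

    def __init__(self, threshold=10, window_s=3600):
        self.threshold = threshold
        self.window_s = window_s
        self.events = defaultdict(deque)

    def record_prompt(self, user, ts=None):
        """Record one prompt; return True when the SOC should be alerted."""
        ts = time.time() if ts is None else ts
        window = self.events[user]
        window.append(ts)
        # Drop prompts that have aged out of the window.
        while window and window[0] <= ts - self.window_s:
            window.popleft()
        return len(window) >= self.threshold
```

In practice the alert side would feed a SIEM or SOAR pipeline rather than a return value, but the windowing logic is the same.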

Outsmarting cyber criminals with the right security solutions and training

MFA prevents unauthorized access from cyber criminals, yet they have found a way to circumvent it by using its own premise of trust and authentication against users. That’s why organizations must use a two-pronged approach of educating employees about MFA fatigue attacks and setting up appropriate guardrails to reduce the likelihood of these attacks succeeding. Solutions like Fortinet’s FortiAuthenticator, FortiToken and FortiTrust Identity further protect organizations and strengthen their security posture. At the same time, cybersecurity awareness training, like Fortinet’s Security Awareness and Training service, can help ensure that employees are aware of all threat methods, as well as the importance of properly using all the security tools available to them.

Find out more about how Fortinet’s Training Advancement Agenda (TAA) and Training Institute programs—including the NSE Certification program, Academic Partner program, and Education Outreach program—are increasing access to training to help solve the cyber skills gap.



Researchers found security pitfalls in IBM’s cloud infrastructure



Security researchers recently probed IBM Cloud’s database-as-a-service infrastructure and found several security issues that granted them access to the internal server used to build database images for customer deployments. The demonstrated attack highlights some common security oversights that can lead to supply chain compromises in cloud infrastructure.

Developed by researchers from security firm Wiz, the attack combined a privilege escalation vulnerability in the IBM Cloud Databases for PostgreSQL service with plaintext credentials scattered around the environment and overly permissive internal network access controls that allowed for lateral movement inside the infrastructure.

PostgreSQL is an appealing target in cloud environments

Wiz’s audit of IBM Cloud Databases for PostgreSQL was part of a larger research project that analyzed PostgreSQL deployments across major cloud providers that offer this database engine as part of their managed database-as-a-service solutions. Earlier this year, the Wiz researchers also found and disclosed vulnerabilities in the PostgreSQL implementations of Microsoft Azure and Google Cloud Platform (GCP).

The open-source PostgreSQL relational database engine has been in development for over 30 years with an emphasis on stability, high availability and scalability. However, this complex piece of software was not designed with a permission model suited to multi-tenant cloud environments, where database instances need to be isolated from each other and from the underlying infrastructure.

PostgreSQL has powerful features through which administrators can alter the server file system and even execute code through database queries, but these operations are unsafe and need to be restricted in shared cloud environments. Meanwhile, other admin operations such as database replication, creating checkpoints, installing extensions and event triggers need to be available to customers for the service to be functional. That’s why cloud service providers (CSPs) had to come up with workarounds and make modifications to PostgreSQL’s permission model to enable these capabilities even when customers only operate with limited accounts.

Privilege escalation through SQL injection

While analyzing IBM Cloud’s PostgreSQL implementation, the Wiz researchers looked at the Logical Replication mechanism that’s available to users. This feature was implemented using several database functions, including one called create_subscription that is owned and executed by a database superuser called ibm.

When they inspected the code of this function, the researchers noticed an SQL injection vulnerability caused by improper sanitization of the arguments passed to it. This meant they could pass arbitrary SQL queries to the function, which would then execute those queries as the ibm superuser. The researchers exploited this flaw via the PostgreSQL COPY statement to execute arbitrary commands on the underlying virtual machine that hosted the database instance and opened a reverse shell.
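Wiz has not published the function's source, but the bug class is easy to demonstrate. The sqlite3 sketch below (standing in for PostgreSQL) contrasts a query built by string interpolation, the pattern behind the create_subscription flaw, with a parameterized one; the table and data are invented for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE subscriptions (name TEXT, owner TEXT)")
cur.executemany("INSERT INTO subscriptions VALUES (?, ?)",
                [("alpha", "ibm"), ("beta", "customer")])

def find_unsafe(owner):
    # Vulnerable: the argument is spliced straight into the SQL text,
    # so a crafted value rewrites the query itself.
    return cur.execute(
        f"SELECT name FROM subscriptions WHERE owner = '{owner}'").fetchall()

def find_safe(owner):
    # Parameterized: the driver treats the argument strictly as data.
    return cur.execute(
        "SELECT name FROM subscriptions WHERE owner = ?", (owner,)).fetchall()

payload = "nobody' OR '1'='1"
print(find_unsafe(payload))  # every row leaks: [('alpha',), ('beta',)]
print(find_safe(payload))    # []
```

In the IBM Cloud case the same splicing flaw, running under a superuser account, escalated from leaking rows to executing arbitrary SQL, which PostgreSQL's COPY statement then turned into OS command execution.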

With a shell on the Linux system, they started reconnaissance of their environment: listing running processes, checking active network connections, inspecting the contents of the /etc/passwd file, which lists the system’s users, and running a port scan on the internal network to discover other servers. The broad port scan caught the attention of the IBM security team, who reached out to the Wiz team to ask about their activities.

“After discussing our work and sharing our thoughts with them, they kindly gave us permission to pursue our research and further challenge security boundaries, reflecting the organization’s healthy security culture,” the Wiz team said.

Stored credentials lead to supply chain attack

The gathered information, such as environment variables, told the researchers they were in a Kubernetes (K8s) pod container, and after searching the file system they found a K8s API access token stored locally in a file under /var/run/secrets/. The API token allowed them to gather more information about the K8s cluster, but it turned out that all the pods were associated with their own account and were operating under the same namespace. But this wasn’t a dead end.

K8s is a container orchestration system used for software deployment, where containers are usually deployed from images: prebuilt packages that contain all the files needed for a container and its preconfigured services to operate. These images are normally stored on a container registry server, which can be public or private. In IBM Cloud’s case, it was a private container registry that required authentication.

The researchers used the API token to read the configurations of the pods in their namespace and found the access key for four different internal container registries in those configuration files. The description of this newly found key in IBM Cloud’s identity and access management (IAM) API suggested it had both read and write privileges to the container registries, which would have given the researchers the ability to overwrite existing images with rogue ones.

However, it turned out that the key description was inaccurate and they could only download images. This level of access had security implications, but it did not pose a direct threat to other IBM Cloud customers, so the researchers pushed forward.

Container images can contain a lot of sensitive information that’s used during deployment and later gets deleted, including source code, internal scripts referencing additional services in the infrastructure, as well as credentials needed to access them. Therefore, the researchers decided to download all images from the registry service and use an automated tool to scan them for secrets, such as credentials and API tokens.

“In order to comprehensively scan for secrets, we unpacked the images and examined the combination of files that made up each image,” the researchers said. “Container images are based on one or more layers; each may inadvertently include secrets. For example, if a secret exists in one layer but is deleted from the following layer, it would be completely invisible from within the container. Scanning each layer separately may therefore reveal additional secrets.”
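A layer-by-layer secret scan of the kind described can be approximated with a short script. The patterns below are a small illustrative subset; real scanners such as the one Wiz used cover far more credential formats.

```python
import re
from pathlib import Path

SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                       # AWS access key ID
    re.compile(r"(?i)(?:password|passwd|pwd)\s*[=:]\s*\S+"),
    re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
]

def scan_text(text):
    """Return every substring of `text` that looks like a secret."""
    return [m.group(0) for pat in SECRET_PATTERNS for m in pat.finditer(text)]

def scan_layer(root):
    """Walk one unpacked image layer and report (file, match) pairs.
    Each layer is scanned separately: a secret deleted in a later layer
    is invisible in the merged filesystem but still present here."""
    hits = []
    for path in Path(root).rglob("*"):
        if path.is_file():
            try:
                for match in scan_text(path.read_text(errors="ignore")):
                    hits.append((str(path), match))
            except OSError:
                continue
    return hits
```

Running scan_layer over each unpacked layer directory in turn reproduces the "scan each layer separately" approach the researchers describe.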

The JSON manifest files of container images have a “history” section that lists historical commands that were executed during the build process of every image. In several such files, the researchers found commands that had passwords passed to them as command line arguments. These included passwords for an IBM Cloud internal FTP server and a build artifact repository.
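That check is straightforward to reproduce: an OCI image config is JSON, and each history entry's created_by field records the build command for one layer. The regex below is a simplified illustration that only catches a couple of common password-flag shapes.

```python
import json
import re

# Matches `--password=X`, `--password X`, or `PASSWORD=X` style arguments.
PASSWORD_ARG = re.compile(r"(?i)(?:--password[= ]|\bPASSWORD=)(\S+)")

def passwords_in_history(image_config_json):
    """Return (command, password) pairs found in an image config's
    'history' list of layer build commands."""
    config = json.loads(image_config_json)
    hits = []
    for entry in config.get("history", []):
        cmd = entry.get("created_by", "")
        for match in PASSWORD_ARG.finditer(cmd):
            hits.append((cmd, match.group(1)))
    return hits
```

Passing secrets on a command line bakes them permanently into image metadata, which is why build tools recommend secret mounts or environment injection instead.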

Finally, the researchers tested if they could access those servers from within their container and it turned out that they could. This overly permissive network access combined with the extracted credentials allowed them to overwrite arbitrary files in the build artifact repository that’s used by the automated IBM Cloud build process to create container images. Those images are then used in customer deployments, opening the door to a supply chain attack.

“Our research into IBM Cloud Databases for PostgreSQL reinforced what we learned from other cloud vendors, that modifications to the PostgreSQL engine effectively introduced new vulnerabilities to the service,” the researchers said. “These vulnerabilities could have been exploited by a malicious actor as part of an extensive exploit chain culminating in a supply-chain attack on the platform.”

Lessons for other organizations

While all of these issues have already been privately reported to and fixed by the IBM Cloud team, they are not unique to IBM. According to the Wiz team, the “scattered secrets” issue is common across all cloud environments.

Automated build and deployment workflows often leave secrets behind in various places such as configuration files, Linux bash history, journal files and so on that developers forget to wipe when deployment is complete. Furthermore, some developers accidentally upload their whole .git and CircleCI configuration files to production servers. Forgotten secrets commonly found by the Wiz team include cloud access keys, passwords, CI/CD credentials and API access tokens.

Another prevalent issue that played a critical role in the IBM Cloud attack is the lack of strict access controls between production servers and internal CI/CD systems. This often allows attackers to move laterally and gain a deeper foothold into an organization’s infrastructure.

Finally, private container registries can provide a wealth of information to attackers that goes beyond credentials. They can reveal information about critical servers inside the infrastructure or can contain code that reveals additional vulnerabilities. Organizations should make sure their container registry solutions enforce proper access controls and scoping, the Wiz team said.


Software projects face supply chain security risk due to insecure artifact downloads via GitHub Actions



The way build artifacts are stored by the GitHub Actions platform could enable attackers to inject malicious code into software projects with CI/CD (continuous integration and continuous delivery) workflows that don’t perform sufficient filtering when downloading artifacts. Cybersecurity researchers have identified several popular artifacts download scripts used by thousands of repositories that are vulnerable to this issue.

“We have discovered that when transferring artifacts between different workflows, there is a major risk for artifact poisoning — a technique in which attackers replace the content of a legitimate artifact with a modified malicious one and thereby initiate a supply chain attack,” researchers from supply chain security firm Legit Security said in an analysis of the issue.

To attack a vulnerable project’s CI/CD pipeline that downloads and uses artifacts generated by other workflows, attackers only need to fork the repositories containing those workflows, modify them in their local copies so they produce rogue artifacts and then make pull requests back to the original repositories without those requests having to be accepted.

A logic flaw in artifact storage APIs

GitHub Actions is a CI/CD platform for automating the building and testing of software code. The service is free for public repositories and includes free minutes of worker run time and storage space for private repositories. It’s widely adopted by projects that use GitHub to host and manage their source code repositories.

GitHub Actions workflows are automated processes defined in .yml files using YAML syntax that get executed when certain triggers or events occur, such as when new code gets committed to the repository. Build artifacts are compiled binaries, logs and other files that result from the execution of a workflow and its individual jobs. These artifacts are saved in storage buckets, with each workflow run assigned a particular bucket to which it can upload files and from which it can later download them.

The reference “action” (script) for downloading artifacts that’s provided by GitHub doesn’t support cross-workflow artifact downloads, but reusing artifacts generated by different workflows as input for follow-up build steps is a common use case for software projects. That’s why developers have created their own custom scripts that rely on the GitHub Actions API to download artifacts using more complex filtering, such as artifacts created by a specific workflow file, a specific user, a specific branch and so on.

The problem Legit Security found is that the API doesn’t differentiate between artifacts uploaded by forked repositories and those uploaded by base repositories. If a download script filters for artifacts generated by a particular workflow file in a particular repository, the API serves the latest version of the artifact generated by that file, which could be a malicious version generated automatically via a pull request from a forked copy of the repository.

“To put it simply: in a vulnerable workflow, any GitHub user can create a fork that builds an artifact,” the researchers said. “Then inject this artifact into the original repository build process and modify its output. This is another form of a software supply chain attack, where the build output is modified by an attacker.”

The researchers found four custom actions developed by the community for downloading artifacts that were all vulnerable. One of them was listed as a dependency for over 12,000 repositories.

The Rust example

One of the repositories that used such a custom script in one of its workflows was the official repository for the Rust programming language. The vulnerable workflow, called ci.yml, was responsible for building and testing the repository’s code, and used the custom action to download an artifact, a Linux library file, that was generated by a workflow in a third-party repository.

All attackers had to do was fork the third-party repository, modify the workflow in that copy to generate a malicious version of the library, and issue a pull request to the original repository to generate the artifact. Had Rust’s workflow then pulled in the poisoned version of the library, it would have given the attackers the ability to execute malicious code within the Rust repository with the workflow’s privileges.

“Upon exploitation, the attacker could modify the repository branches, pull requests, issues, releases, and all of the entities that are available for the workflow token permissions,” the researchers said.

Users need to enforce stricter filtering for artifact downloads

GitHub responded to Legit’s report by adding more filtering capabilities to the API which developers can use to better identify artifacts created by a specific run instance of the workflow (workflow run id). However, this change cannot be forced onto existing implementations without breaking workflows, so it’s up to users to update their workflows with stricter filtering in order to be protected.

Another mitigation is to filter the downloaded artifacts by the hash value of the commits that generated them, or to exclude artifacts created by pull requests entirely using the exclude_pull_requests option. Legit Security also contacted the authors of the vulnerable custom artifact download scripts they found.
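Following those recommendations, a safer download script checks which workflow run produced an artifact before trusting it. The sketch below queries GitHub's REST API with the standard library; pick_artifact and the exact fields checked are illustrative, and a real script would also handle pagination and errors.

```python
import json
import urllib.request

API = "https://api.github.com"

def artifacts_for_run(owner, repo, run_id, token):
    """List artifacts produced by one specific workflow run, rather than
    'the latest artifact with a matching name', which could have come
    from a fork's pull request."""
    req = urllib.request.Request(
        f"{API}/repos/{owner}/{repo}/actions/runs/{run_id}/artifacts",
        headers={"Authorization": f"Bearer {token}",
                 "Accept": "application/vnd.github+json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["artifacts"]

def pick_artifact(artifacts, name, expected_head_sha):
    """Accept an artifact only if its name matches AND the run that
    produced it was triggered by the commit we expect."""
    for artifact in artifacts:
        run = artifact.get("workflow_run", {})
        if artifact["name"] == name and run.get("head_sha") == expected_head_sha:
            return artifact
    return None
```

Pinning the expected commit hash (or the workflow run ID itself) removes the ambiguity the attack exploits: an artifact built from a fork's pull request carries a different head_sha and is rejected.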

“In supply chain security, the focus has been on preventing people from contributing malicious code, so every time you do a change in a repository, create a pull request or do a change request, GitHub has a lot of built-in verification controls,” Liav Caspi, CTO of Legit Security tells CSO. “Somebody has to approve your code, somebody has to merge it, so there’s a person involved. What we’ve been trying to find are techniques that exploit a logic problem that any person could influence without review and I think this is one of them. If someone would have known about it, they could have injected the artifact without any approval.”

Typically, CI pipelines have workflows that run automatically on pull requests to test the code before it’s manually reviewed and if the pull request contains any artifact that needs to be built, the workflow will build it, Caspi said. A sophisticated attacker could create the pull request to get the artifact built and then delete the request by closing the submission and chances are with all the activity noise that exists in source code repositories today, it would go unnoticed, he said.
