Cyber Security

EPSS explained: How does it compare to CVSS?



The Common Vulnerability Scoring System (CVSS) is the most frequently cited rating system for assessing the severity of security vulnerabilities. It has been criticized, however, as inappropriate for assessing and prioritizing the risk those vulnerabilities pose. For this reason, some have called for using the Exploit Prediction Scoring System (EPSS) or combining CVSS and EPSS to make vulnerability metrics more actionable and efficient. Like CVSS, EPSS is governed by the Forum of Incident Response and Security Teams (FIRST).

EPSS definition

EPSS prides itself on being an open and data-driven effort that aims to estimate the probability that a software vulnerability will be exploited in the wild. CVSS focuses on the innate characteristics of vulnerabilities culminating in a severity score. The severity score alone doesn’t indicate a likelihood of exploitation, which is critical information for vulnerability management professionals who need to prioritize their vulnerability remediation and mitigation efforts to maximize their impact on reducing organizational risk.

EPSS has a special interest group (SIG) that is open to the public for those interested in participating in the effort. EPSS is volunteer driven and led by researchers, security practitioners, academics, and government personnel. Despite this collaborative, industry-driven approach, FIRST owns the rights to update the model and the associated guidance as the organization sees fit. The group's chairs and creators come from organizations such as RAND, Cyentia, Virginia Tech, and Kenna Security, among many members from a variety of organizations. EPSS has several related papers that dive into associated topics such as attack prediction, vulnerability modeling and disclosure, and software exploitation.

The EPSS model 

EPSS aims to help security practitioners and their organizations improve vulnerability prioritization efforts. The number of vulnerabilities in today's digital landscape is growing rapidly, driven by factors such as the increased digitization of systems and society, greater scrutiny of digital products, and improved research and reporting capabilities.

Organizations generally can only fix between 5% and 20% of vulnerabilities each month, EPSS claims, while fewer than 10% of published vulnerabilities are ever known to be exploited in the wild. Longstanding workforce issues are also at play: the annual ISC2 Cybersecurity Workforce Study shows a global shortage of more than two million cybersecurity professionals. These factors warrant a coherent and effective approach to prioritizing the vulnerabilities that pose the highest risk, so organizations avoid wasting limited resources and time.

The EPSS model aims to provide some support by producing the probability that a vulnerability will be exploited in the next 30 days; scores range from 0 to 1, or 0% to 100%. To produce these scores, EPSS uses data from sources such as the MITRE CVE list, metadata about CVEs such as days since publication, and observations of exploitation-in-the-wild activity from security vendors such as AlienVault and Fortinet.
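FIRST publishes current EPSS scores through a public API. The sketch below shows one way to query and parse it; the endpoint shape reflects FIRST's documented API, but the sample CVE in the usage note is just an example, and error handling is omitted for brevity.

```python
import json
import urllib.request

EPSS_API = "https://api.first.org/data/v1/epss"

def fetch_epss(cve_id: str) -> dict:
    """Query FIRST's public EPSS API for a single CVE."""
    with urllib.request.urlopen(f"{EPSS_API}?cve={cve_id}") as resp:
        return json.load(resp)

def parse_epss(payload: dict) -> list[tuple[str, float, float]]:
    """Extract (cve, exploitation probability, percentile) from a response."""
    return [
        (row["cve"], float(row["epss"]), float(row["percentile"]))
        for row in payload.get("data", [])
    ]
```

A call like `parse_epss(fetch_epss("CVE-2021-44228"))` would return the current probability and percentile for that CVE.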

The EPSS team published data to support their approach of combining CVSS scores with EPSS data to make vulnerability remediation efforts more effective. For example, many organizations mandate that vulnerabilities at or above a specific CVSS score, such as 7, must be remediated. However, this prioritizes remediation based only on the CVSS score, not on whether the vulnerability is actually being exploited. Coupling EPSS with CVSS is more effective because it prioritizes vulnerabilities based on both their severity rating and whether they are known to be actively exploited. This lets organizations address the CVEs that pose the greatest risk.
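One way to operationalize this coupling is to filter by a severity floor and then sort by exploitation probability. The sketch below is hypothetical: the thresholds and the data structure are illustrative choices, not a FIRST recommendation.

```python
from dataclasses import dataclass

@dataclass
class Vuln:
    cve: str
    cvss: float   # severity score, 0.0-10.0
    epss: float   # exploitation probability, 0.0-1.0

def prioritize(vulns: list[Vuln], cvss_floor: float = 7.0,
               epss_floor: float = 0.1) -> list[Vuln]:
    """Remediate first what is both severe and likely to be exploited."""
    urgent = [v for v in vulns if v.cvss >= cvss_floor and v.epss >= epss_floor]
    # Within the urgent set, highest exploitation probability first
    return sorted(urgent, key=lambda v: (v.epss, v.cvss), reverse=True)
```

Note that a critical-severity CVE with a near-zero EPSS score drops below a high-severity CVE that is being actively exploited, which is exactly the reordering the EPSS team argues for.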

EPSS focuses on two core metrics: efficiency and coverage. Efficiency is the percentage of remediated vulnerabilities that were actually exploited; it indicates how well an organization's remediation resources are being spent. EPSS points out that it is more efficient to spend most of an organization's resources remediating known-exploited vulnerabilities, as opposed to vulnerabilities selected only by CVSS severity score. Coverage is the percentage of exploited vulnerabilities that were remediated.
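Given the set of vulnerabilities an organization remediated and the set known to be exploited, the two metrics reduce to simple set arithmetic; a minimal sketch:

```python
def efficiency(remediated: set[str], exploited: set[str]) -> float:
    """Fraction of remediated vulnerabilities that were actually exploited."""
    if not remediated:
        return 0.0
    return len(remediated & exploited) / len(remediated)

def coverage(remediated: set[str], exploited: set[str]) -> float:
    """Fraction of exploited vulnerabilities that were remediated."""
    if not exploited:
        return 0.0
    return len(remediated & exploited) / len(exploited)
```

The two pull in different directions: remediating only one known-exploited CVE gives perfect efficiency but poor coverage, while remediating everything gives perfect coverage but poor efficiency.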

To show the efficiency of their proposed approach, EPSS conducted a study in 2021 evaluating CVSS v3 base scores and EPSS v1 and EPSS v2 data over a 30-day period to determine the total number of CVEs, the number of remediated CVEs, and the number of exploited CVEs.

Initially, the study showed that most CVEs aren’t remediated. Secondly, the number of exploited CVEs that are remediated is just a subset of the total remediated CVEs. This means that organizations don’t remediate most CVEs, and among those they do, many aren’t actively known to be exploited and potentially don’t pose the greatest risk.

The study also demonstrates that EPSS v2 further improves the efficiency of vulnerability remediation by maximizing the percentage of exploited vulnerabilities that are remediated. When organizations face staffing challenges, it is crucial to maximize return on investment by focusing practitioners on the vulnerabilities that pose the greatest risk. Ultimately, EPSS is trying to help organizations make more efficient use of their limited resources and become more effective at driving down organizational risk.

EPSS shortcomings

Like CVSS, EPSS has its critics in industry and academia. One article, titled Probably Don't Rely on EPSS Yet, comes from Carnegie Mellon University's Software Engineering Institute (SEI) blog. SEI had earlier published a paper titled Towards Improving CVSS, which laid out some sharp criticisms of CVSS; EPSS emerged shortly after that publication.

The primary criticisms leveled by the article include EPSS's opacity as well as issues with its data and outputs. The article notes that it isn't clear how EPSS's development process is governed or who its intended audience is. EPSS relies on pre-existing CVE IDs, meaning it wouldn't be helpful for entities such as software suppliers, incident response teams, or bug bounty groups, because many of the vulnerabilities these groups deal with don't have CVE IDs yet and might never receive them. Nor would EPSS be helpful with zero-day vulnerabilities, which gain visibility only as exploitation is underway and have no CVE ID.

The blog author also raises concerns about the openness and transparency of EPSS. While EPSS dubs itself an open and data-driven effort and has a public SIG, it and FIRST retain the right to change the site and model at any time without explanation. Even SIG members have no access to the code or data the underlying EPSS model uses. The SIG itself has no oversight or governance of the model, and the process by which the model is updated or modified isn’t transparent to the public, let alone SIG members. The article points out that the EPSS model and data could also be pulled back from the public domain given it is governed and managed by FIRST. 

The article notes that EPSS focuses on the probability that a vulnerability will be exploited in the next 30 days, but a few fundamental things must exist for that projection to be made: an existing CVE ID in the NVD with an associated CVSS v3 vector, an IDS signature tied to an active attempted exploit of the CVE, exploitation data contributed by AlienVault or Fortinet, and a prediction window fixed at the next 30 days.

As the author pointed out, only 10% of vulnerabilities with CVE IDs have accompanying IDS signatures, meaning 90% of vulnerabilities with CVE IDs may go undetected for exploitation. This also creates a dependency on Fortinet and AlienVault with regards to IDS sensors and associated data. This could be mitigated to some extent by further involvement from the broader security vendor community. While data from Fortinet and AlienVault is useful, it doesn’t represent the entire threat landscape or perspectives of the other major security vendors that could contribute to vulnerability exploitability probability.

While these are valid critiques, using EPSS gives organizations an opportunity to make the most of their scarce security resources to drive down organizational risk. Focusing on vulnerabilities with the highest probability of exploitation lets organizations make investments that have the highest chance to mitigate malicious actors and minimize friction on development teams.

Copyright © 2022 IDG Communications, Inc.


When blaming the user for a security breach is unfair – or just wrong



In his career in IT security leadership, Aaron de Montmorency has seen a lot — an employee phished on their first day by someone impersonating the CEO, an HR department head asked to change the company’s direct deposit information by a bogus CFO, not to mention multichannel criminal engagement with threat actors attacking from social media to email to SMS text.

In these cases, the users almost fell for it, but something didn’t feel right. So, they manually verified by calling the executives who were being impersonated. De Montmorency, director of IT, security, and compliance with Tacoma, Washington-based Elevate Health, praises the instincts that stopped the attacks from causing financial or reputational damage. Yet, he contends that expecting users to be the frontline defense against rampant phishing, pharming, whaling, and other credential-based attacks increasingly taking place over out-of-band channels is a recipe for disaster.

“Of course, train your staff. The human element is the weakest link here. But don’t rely on training alone — or technology alone — to protect the organization. What you’re looking for is a balance,” de Montmorency says.

Protecting out-of-band usage

As attackers go after employees over out-of-band channels such as Zoom, Slack, and Teams, user education and technical controls must follow. Enterprises need visibility into what their users are clicking, downloading, uploading, or linking to in what can be dozens of collaborative platforms, he adds. His company uses SafeGuard Cyber to monitor east-west traffic and detect malicious activities being attempted over these channels. The tool is agentless and only requires a single user sign-on to access their platforms of choice, making it frictionless to users, which is one of the key criteria in getting user buy-in to necessary security controls.

A recent report by the University of Wisconsin–Madison dissects the many ways business collaboration platforms (BCPs) can be leveraged for app-to-app delegation attacks, user-to-app interaction hijacking, and app-to-user confidentiality violations. In their tests, researchers were able to send arbitrary emails on behalf of victims, merge code requests, launch fake video calls with loose security settings, steal private messages, and maintain a malicious presence even after app uninstallation. Using homemade scraping tools, researchers estimated that 1,493 (61%) of the 2,460 Slack apps analyzed and 427 (33%) of 1,304 Microsoft Teams apps analyzed were vulnerable to delegation attacks. Additionally, 1,266 (51%) of Slack apps use slash commands, which are vulnerable to both user-to-app and app-to-user violations.

These trends show how social engineering attacks have moved to where employees are working over collaborative platforms. That means security awareness education and security controls need to work in tandem to protect users, their devices, and their credentials from these ever-evolving threats, no matter where they’re working from. And they need to do so in a way that enables collaboration, rather than blocking it, according to experts.

Security as a matter of psychology

“In the past, we’ve treated our employees as extensions of our computers. We would lay down the law: ‘No, you cannot go to this site or use this social platform.’ But people are still human, and their number one priority is getting their work done, so they will go around draconian rules and blocking if they have to,” says Russell Spitler, co-founder and CEO of Nudge Security, which recently commissioned a study of 900 users titled Debunking the Stupid User Myth.

In the study, 67% of participants said they would not comply with these types of blocking interventions and would instead look for a workaround if blocking got in the way of doing their jobs. Conversely, the report states that if organizations empower their users to make more educated decisions, they could achieve twice the compliance rate of blocking interventions.

“If you can remove the psychological reasons not to do something with respect to security, you can hope the person will be an ally,” says Dr. Aaron Kay, a professor of management and psychology and neuroscience at Duke University who advised on the report. 

Kristofer Laxdal, CISO of Mississauga, Ontario-based Prophix, a financial performance management platform with more than 500 employees, agrees that punitive training methods and restrictive security controls cause more harm than good because phishing, pharming, whaling, and other social engineering attacks aimed at leveraging privileged access are only getting worse. He also feels the industry is at an inflection point where security can improve user experience rather than getting in the way.

Make security frictionless for users

For example, Laxdal cites zero trust and passwordless computing as technological controls that take the onus off the employee while improving security and reducing risk. Onerous controls such as screen locks and timeouts, on the other hand, are causing users to covertly install mouse jigglers and keypress generators to keep their screens from locking up when their computers are idle because they don’t have time to keep logging in, he adds.

“Security practitioners have thrown in layers of technical controls and security awareness training. Yet, phishing, IP theft, pharming, and ransomware have gone on far too long,” he says. “So, while there is indeed a human component to security, the controls themselves need to be frictionless, because users are tired of inputting multiple logins to multiple systems. They’re also experiencing multifactor authentication fatigue. Technical controls need to be put in place to remove the issues that the end user is experiencing.”

Know thy users

The best place to start is understanding employee roles, resources, and access habits, Laxdal says. For example, financial workers should understand the specific risks to business accounts and social engineering attempts such as BEC scams that may target them. Development departments will have different risk areas to focus on; for example, their IP on hosted servers or malware hidden in public open-source libraries. HR, on the other hand, is dealing with PII (financial, banking, and healthcare information) that shouldn’t be shared over any channel, particularly given that anyone can impersonate a CEO and request files or transfers.

“All of these vectors are being used globally against information assets and are overwhelmingly credential-based attacks that are perpetrated through phishing. Users need to understand why and be part of that discussion with real-world examples,” Spitler explains. “Sit down with your employees, ask about their typical day and access requirements. And understand each functional area of the business so you can design controls and training for their business.”

His company uses Ninjio, which combines behavioral analytics with security awareness training done in an anime style that, he says, is made compelling and engaging by the use of real-world hacks to show users what could happen if a user takes a dangerous action. Nudge Security also deploys analytics to identify when users stray off their approved platforms, engages the user by asking questions, and even assists them to securely set up the new platform with two-factor authentication and other secure enablement.

“If you want happy, compliant workers, they need to have agency in their decisions and feel that they are trusted and respected by their organizations,” adds Kay. “Deliver messages that facilitate that feeling. Be transparent about the reasons for your programs. And don’t frustrate users by ordering them to do things they don’t understand.”

Trust degrades over time

But just what is trust? What do you base it on, and how do you apply that to users, asks Winn Schwartau, an early infowar and security awareness pioneer. “I trust them not to steal from me based on what criteria? I trust that they’ve got the best interest of my company, based on what? I trust them not to click on malicious links or attachments based upon what? Because I trained them? I’ve been in this business a long time and I can tell you training doesn’t move the bar as much as people would like to believe.”

Schwartau believes that user education and training should be part of a holistic security program that starts with critical asset identification physically and electronically, limiting user access to only those systems they need, and “detection-in-depth” monitoring for abuse. He suggests employing a high-speed OODA loop for examining user behavior and to align technical controls. 

This, he adds, can help determine a soft initial level of trust. But scammers and social engineers continue to hone their tactics, so CISOs need to adapt and uplevel their user education and technical controls. Over time, trust goes down and risk increases in any environment, which he mathematically details in his book Analogue Network Security. “In many ways, it’s a total paradox,” he adds. “Employees are your greatest asset. And yet, employees are also your weakest link.”


Improving Cyber Hygiene with Multi-Factor Authentication and Cyber Awareness



Using multi-factor authentication (MFA) is a key component of an organization's identity and access management (IAM) program and helps maintain a strong cybersecurity posture. Having multiple layers to verify users is important, but MFA fatigue is also real and can be exploited by hackers.

Enabling MFA for all accounts is a best practice for all organizations, but the specifics of how it is implemented are significant because attackers are developing workarounds. That said, when done correctly – and with the right pieces in place – MFA is an invaluable tool in the cyber toolbox and a key piece of proper cyber hygiene. This is a primary reason why MFA was a key topic for this year’s cybersecurity awareness month. For leaders and executives, the key is to ensure employees are trained to understand the importance of the security tools – like MFA – available to them while also making the process easy for them.

MFA is still an important piece of the cyber hygiene puzzle

Multi-factor authentication (MFA) helps to provide extra layers of security throughout your organization. This quick verification serves as a tool that allows organizations to confirm identity before allowing users to access company data. This can look like prompting employees to use mobile tokens and/or to enter a specific code they’ve been texted or emailed before logging on to certain devices and websites. 

MFA fatigue is rising, and hackers are noticing

Even though MFA should be a basic requirement these days, it’s not a foolproof tactic. Attackers are finding new ways around this security layer with what are called MFA fatigue attacks.

As employees try to access work applications, they are often prompted to verify their identity in some way established by the IT security team. This typically involves notifications to their smartphones. Anyone who has been trying to complete their work in a timely manner knows the irritation of constantly having to take action on these notifications. This is the basis of the MFA fatigue attack.

Attackers excel at finding ways to gain entry to their chosen target, and they seem to know a good bit about human psychology. Attackers who hold compromised credentials now spam employees with MFA authorization requests, sometimes dozens of times in an hour, until the employees get so irritated that they approve a request using their authentication apps. Or employees might assume there is a system malfunction and accept the notification just to make the notifications stop.

A simple, effective MFA strategy for long-term success

Getting MFA right is a balance between being strict enough so that the security measure maintains integrity and lax enough so that employees don’t grow tired of it and get tripped up.

Employees may grow irritated or think that MFA prompts are excessive as a result of frequently invalidating sessions. On the other hand, if too lenient, authenticated sessions can last too long, IP changes won’t result in new prompts, new MFA device enrollments won’t result in alerts, and enterprises run the risk of not being informed when, for instance, an authentication token that has already passed the MFA check gets stolen.

Most employees have never heard of MFA fatigue attacks, so they don't know to look for or report them. To counter this, organizations need to educate employees to make sure they're prepared to spot these attacks.

Organizations need to place controls on MFA to lower the potential for MFA abuse. The most effective control is to not use methods that allow simple approvals of notifications – a scenario that contributes to MFA fatigue. All approvals should mandate responses that prove the user has the authenticated device. Number matching, for instance, is a technique that requires the user to enter a series of numbers they can see on their screen.
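A number-matching flow can be sketched in a few lines: the sign-in screen displays a short code, and the push is approved only if the user types that same code into the authenticator. This is a simplified illustration, not any vendor's actual protocol.

```python
import secrets

def issue_challenge() -> str:
    """Generate the two-digit code displayed on the sign-in screen."""
    return f"{secrets.randbelow(100):02d}"

def approve(challenge: str, user_entry: str) -> bool:
    """Approve the push only if the user typed the displayed code."""
    # compare_digest avoids timing side channels on the comparison
    return secrets.compare_digest(challenge, user_entry)
```

The point of the design is that blindly tapping "approve" no longer works; the user must be looking at the legitimate login screen to know the code.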

There’s also the effective one-time passcode (OTP) method of approval where the user gets information from the authentication request and has to enter it for verification. This requires a little more work on the user’s part, but it helps reduce the risk of MFA fatigue.

Another useful tool is an endpoint privilege management solution, which helps to stop the theft of cookies. If attackers get a hold of those cookies, they can bypass MFA controls. This solution is a robust layer in the protection of user credentials.

It’s important to set thresholds and send alerts to the SOC if certain thresholds are exceeded. The SOC can use user behavior analytics to create context-based triggers that alert the security team if any unusual behavior occurs. It can also prohibit user authentication from dubious IP addresses.
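A context-based trigger of this kind can be as simple as counting MFA pushes per user in a sliding window; a hypothetical sketch, where the one-hour window and threshold of 10 are arbitrary values an SOC would tune:

```python
from collections import defaultdict, deque

class MfaPushMonitor:
    """Alert when a user receives too many MFA pushes in a time window."""

    def __init__(self, threshold: int = 10, window_seconds: int = 3600):
        self.threshold = threshold
        self.window = window_seconds
        self._events: dict[str, deque] = defaultdict(deque)

    def record_push(self, user: str, timestamp: float) -> bool:
        """Record one push; return True if the user crossed the threshold."""
        q = self._events[user]
        q.append(timestamp)
        # Drop events that fell out of the sliding window
        while q and q[0] <= timestamp - self.window:
            q.popleft()
        return len(q) > self.threshold
```

In practice the returned alert would feed a SIEM or SOC ticketing queue rather than be consumed inline, but the windowing logic is the same.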

Outsmarting cyber criminals with the right security solutions and training

MFA prevents unauthorized access by cyber criminals, yet they have found a way to circumvent it by using its own premise of trust and authentication against users. That's why organizations must use a two-pronged approach: educating employees about MFA fatigue attacks and setting up appropriate guardrails to reduce the likelihood of these attacks succeeding. Solutions like Fortinet's FortiAuthenticator, FortiToken, and FortiTrust Identity further protect organizations and strengthen their security posture. At the same time, cybersecurity awareness training, like Fortinet's Security Awareness and Training service, can help ensure that employees are aware of all threat methods, as well as the importance of properly using all the security tools available to them.

Find out more about how Fortinet's Training Advancement Agenda (TAA) and Training Institute programs, including the NSE Certification, Academic Partner, and Education Outreach programs, are increasing access to training to help solve the cyber skills gap.



Researchers found security pitfalls in IBM’s cloud infrastructure



Security researchers recently probed IBM Cloud’s database-as-a-service infrastructure and found several security issues that granted them access to the internal server used to build database images for customer deployments. The demonstrated attack highlights some common security oversights that can lead to supply chain compromises in cloud infrastructure.

Developed by researchers from security firm Wiz, the attack combined a privilege escalation vulnerability in the IBM Cloud Databases for PostgreSQL service with plaintext credentials scattered around the environment and overly permissive internal network access controls that allowed for lateral movement inside the infrastructure.

PostgreSQL is an appealing target in cloud environments

Wiz's audit of IBM Cloud Databases for PostgreSQL was part of a larger research project analyzing PostgreSQL deployments across major cloud providers that offer this database engine as part of their managed database-as-a-service solutions. Earlier this year, the Wiz researchers also found and disclosed vulnerabilities in the PostgreSQL implementations of Microsoft Azure and Google Cloud Platform (GCP).

The open-source PostgreSQL relational database engine has been in development for over 30 years with an emphasis on stability, high availability, and scalability. However, this complex piece of software was not designed with a permission model suitable for multi-tenant cloud environments, where database instances need to be isolated from each other and from the underlying infrastructure.

PostgreSQL has powerful features through which administrators can alter the server file system and even execute code through database queries, but these operations are unsafe and need to be restricted in shared cloud environments. Meanwhile, other admin operations such as database replication, creating checkpoints, installing extensions and event triggers need to be available to customers for the service to be functional. That’s why cloud service providers (CSPs) had to come up with workarounds and make modifications to PostgreSQL’s permission model to enable these capabilities even when customers only operate with limited accounts.

Privilege escalation through SQL injection

While analyzing IBM Cloud’s PostgreSQL implementation, the Wiz researchers looked at the Logical Replication mechanism that’s available to users. This feature was implemented using several database functions, including one called create_subscription that is owned and executed by a database superuser called ibm.

When they inspected the code of this function, the researchers noticed an SQL injection vulnerability caused by improper sanitization of the arguments passed to it. This meant they could pass arbitrary SQL queries to the function, which would then execute those queries as the ibm superuser. The researchers exploited this flaw via the PostgreSQL COPY statement to execute arbitrary commands on the underlying virtual machine that hosted the database instance and opened a reverse shell.
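The general pattern behind such a flaw can be reproduced with any SQL engine: when a privileged function splices its argument into the query text instead of binding it as a parameter, the argument can carry its own SQL. A simplified illustration using SQLite follows; the table names and payload are invented for the demo and are unrelated to IBM's actual code.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE subscriptions (name TEXT);
    CREATE TABLE secrets (value TEXT);  -- stands in for privileged data
    INSERT INTO secrets VALUES ('superuser-password');
""")

def create_subscription_unsafe(name: str) -> None:
    # Vulnerable: the argument is spliced straight into the SQL text
    conn.execute(f"INSERT INTO subscriptions (name) VALUES ('{name}')")

def create_subscription_safe(name: str) -> None:
    # Parameterized: the driver treats the argument strictly as data
    conn.execute("INSERT INTO subscriptions (name) VALUES (?)", (name,))

# A payload that smuggles a subquery into the privileged statement
payload = "x'||(SELECT value FROM secrets)||'"
```

With the unsafe version, the payload's subquery executes with the function owner's privileges and the secret ends up in the attacker-readable table; the parameterized version stores the payload as inert text.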

With a shell on the Linux system, they started doing reconnaissance to understand their environment: listing running processes, checking active network connections, inspecting the contents of the /etc/passwd file, which lists the system's users, and running a port scan on the internal network to discover other servers. The broad port scan caught the attention of the IBM security team, who reached out to the Wiz team to ask about their activities.

“After discussing our work and sharing our thoughts with them, they kindly gave us permission to pursue our research and further challenge security boundaries, reflecting the organization’s healthy security culture,” the Wiz team said.

Stored credentials lead to supply chain attack

The gathered information, such as environment variables, told the researchers they were in a Kubernetes (K8s) pod container, and after searching the file system they found a K8s API access token stored locally in a file under /var/run/secrets/. The API token allowed them to gather more information about the K8s cluster, but it turned out that all the pods were associated with their account and were operating under the same namespace. This wasn't a dead end, though.

K8s is a container orchestration system used for software deployment where containers are usually deployed from images: prebuilt packages that contain all the files needed for a container and its preconfigured services to operate. These images are normally stored on a container registry server that can be public or private. In IBM Cloud's case, it was a private container registry that required authentication.

The researchers used the API token to read the configurations of the pods in their namespace and found the access key for four different internal container registries in those configuration files. The description of this newly found key in IBM Cloud’s identity and access management (IAM) API suggested it had both read and write privileges to the container registries, which would have given the researchers the ability to overwrite existing images with rogue ones.

However, it turned out that the key description was inaccurate and they could only download images. This level of access had security implications, but it did not pose a direct threat to other IBM Cloud customers, so the researchers pushed forward.

Container images can contain a lot of sensitive information that’s used during deployment and later gets deleted, including source code, internal scripts referencing additional services in the infrastructure, as well as credentials needed to access them. Therefore, the researchers decided to download all images from the registry service and use an automated tool to scan them for secrets, such as credentials and API tokens.

“In order to comprehensively scan for secrets, we unpacked the images and examined the combination of files that made up each image,” the researchers said. “Container images are based on one or more layers; each may inadvertently include secrets. For example, if a secret exists in one layer but is deleted from the following layer, it would be completely invisible from within the container. Scanning each layer separately may therefore reveal additional secrets.”

The JSON manifest files of container images have a “history” section that lists historical commands that were executed during the build process of every image. In several such files, the researchers found commands that had passwords passed to them as command line arguments. These included passwords for an IBM Cloud internal FTP server and a build artifact repository.
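The kind of scan the researchers describe can be approximated with a handful of regular expressions run over each layer's files and the manifest's history entries. The sketch below is a minimal illustration; the patterns and the `created_by` field name follow the OCI image-config convention, and real secret scanners use far larger rule sets.

```python
import re

# Illustrative patterns only; production scanners ship hundreds of rules
SECRET_PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "password_cli_arg":  re.compile(r"--password[=\s]+\S+"),
    "private_key_block": re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),
}

def scan_text(text: str) -> list:
    """Return (rule name, matched text) pairs for every hit in the input."""
    hits = []
    for rule, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(text):
            hits.append((rule, match.group(0)))
    return hits

def scan_manifest_history(manifest: dict) -> list:
    """Scan an image manifest's build-history commands for leaked secrets."""
    hits = []
    for entry in manifest.get("history", []):
        hits.extend(scan_text(entry.get("created_by", "")))
    return hits
```

Scanning each layer separately, as the researchers note, matters because a secret deleted in a later layer is invisible inside the running container but still present in the earlier layer's files.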

Finally, the researchers tested if they could access those servers from within their container and it turned out that they could. This overly permissive network access combined with the extracted credentials allowed them to overwrite arbitrary files in the build artifact repository that’s used by the automated IBM Cloud build process to create container images. Those images are then used in customer deployments, opening the door to a supply chain attack.

“Our research into IBM Cloud Databases for PostgreSQL reinforced what we learned from other cloud vendors, that modifications to the PostgreSQL engine effectively introduced new vulnerabilities to the service,” the researchers said. “These vulnerabilities could have been exploited by a malicious actor as part of an extensive exploit chain culminating in a supply-chain attack on the platform.”

Lessons for other organizations

While all of these issues have already been privately reported to and fixed by the IBM Cloud team, they are not unique to IBM. According to the Wiz team, the “scattered secrets” issue is common across all cloud environments.

Automated build and deployment workflows often leave secrets behind in various places such as configuration files, Linux bash history, journal files and so on that developers forget to wipe when deployment is complete. Furthermore, some developers accidentally upload their whole .git and CircleCI configuration files to production servers. Forgotten secrets commonly found by the Wiz team include cloud access keys, passwords, CI/CD credentials and API access tokens.

Another prevalent issue that played a critical role in the IBM Cloud attack is the lack of strict access controls between production servers and internal CI/CD systems. This often allows attackers to move laterally and gain a deeper foothold into an organization’s infrastructure.

Finally, private container registries can provide a wealth of information to attackers that goes beyond credentials. They can reveal information about critical servers inside the infrastructure or can contain code that reveals additional vulnerabilities. Organizations should make sure their container registry solutions enforce proper access controls and scoping, the Wiz team said.
