How to Best Protect Multi-cloud and Hybrid Environments

After a recent prolonged AWS outage (which was followed by “aftershocks” on subsequent days), a CNBC story encapsulated one of the significant conversations stirred by the event: Can big businesses rely on a single vendor, or do they need to spread their workloads in case something like this happens again?

Our answer is that organizations are already spreading their workloads across multiple environments. The more pressing question is: How can enterprises best protect those multi-cloud and hybrid environments as they evolve and change?

Every multi-cloud environment is unique and complex

A rapidly growing number of organizations worldwide are decommissioning their traditional data centers and moving applications to multiple cloud-hosted environments. This makes each organization’s security architecture uniquely complex, because what’s needed to secure one cloud will differ from the requirements of another. At the same time, threat actors — well aware of these rapid migrations to colocation facilities and the public cloud — have no shortage of attack techniques and tools (think booters and DDoS-for-hire services) to target vulnerabilities introduced by inconsistent security policies and requirements.

This inconsistency is impossible to avoid. Security controls (WAF, DDoS, bot management, API protection, etc.) are unique to each environment. So as customers attempt to reduce risk, improve performance, or gain specific features by spreading their workloads across multiple clouds, they’ll inevitably end up with multiple security solutions, increasing the likelihood of misconfiguration and mismanagement — a leading cause of compromised data. Another layer of difficulty (both frustrating and costly) arises when enterprise IT starts troubleshooting across a disparate and fragmented cloud-hosted infrastructure. 

(If you’re thinking you’ll avoid this complexity by sticking with one solution, in our view this makes the least sense; it’s wastefully expensive and introduces unnecessary performance issues and points of failure.)

More significantly, troubleshooting across a multi-cloud environment is sometimes impossible. Many cloud-hosted IPs fall outside of an enterprise’s direct control, leaving it vulnerable — as we clearly saw on December 7 — to a successful DDoS attack. (Read more in our ebook DDoS Defense in a Hybrid Cloud World.)

Considering the increasing intensity and variety of cyberattacks, and the inevitability of further migrations to multi-cloud and hybrid environments, enterprises are best protected by taking cloud security into their own hands.

Cobbling together CSP solutions is less secure and more expensive

If your organization uses multiple public cloud providers, in addition to hosting on-premises workloads, you need flexible DDoS attack protection across hybrid architectures — especially since responsibility for security within public cloud environments can be inconsistent from provider to provider. Making a false assumption about who’s responsible can leave you exposed to huge risk.

In general, the customer is ultimately responsible for application security in the public cloud, as you can see in this shared responsibility model from AWS, which is similar to that of other public cloud providers. That responsibility includes DDoS protection, but also extends to higher-level security controls like protecting against data exfiltration, hacking, and bots.

Hyperscale cloud providers offer some of the required security controls, but not all. Web application firewalls, security lists, API protection, IP reputation, and bot management solutions are available to varying extents, but they are additional purchases that generally operate independently of one another. Relying on this “click-to-add” architecture instead of a single, purpose-built security platform does three problematic things: It adds another layer of complexity, increases cloud costs, and reduces the overall security of the application. In addition, this scenario forces IT staff to dedicate time to managing security, which further adds to the overall cost.

When enterprises must integrate, deploy, and manage DDoS defenses within each cloud service provider’s (CSP) unique environment — with many internet-facing assets located across multiple clouds — operational complexity quickly compounds. Adding to the pressure, many CSP in-house DDoS mitigation solutions fall short in providing what enterprises most need to protect themselves:

  • Reporting and visibility into events before and after they happen, including post-attack analysis
  • A time-to-mitigate service level agreement (most only offer service credits to the affected organization after a breach or outage)
  • On-demand access to support from a 24/7 global security operations center (SOC)

The last point, proper support, is critical to maintaining business continuity and mitigating impact. Because staffing security positions is increasingly difficult (a challenge that spans global regions), many enterprises have no in-house experts to turn to, and most CSP security solutions don’t include a security services support option.

How to best protect your multi-cloud and hybrid environments? At the edge.

Your mitigation strategy should empower your cloud strategy, not be at the mercy of it. Akamai’s purpose-built security solution protects applications and stops malicious bots and account fraud at the edge instantly, before they reach applications, data centers, and infrastructure. It offers four layers of defense in a single platform, fine-tuned to the specific requirements of your web applications or internet-based services.

Edge defense: The Akamai edge CDN delivers and accelerates web traffic using HTTP and HTTPS protocols. Every Akamai edge server operates as a reverse proxy, forwarding legitimate HTTP/S traffic on ports 80 and 443 and dropping all other traffic at the network edge. This means every Akamai customer inherently gets instant mitigation of all network-layer DDoS attacks — built into their web delivery. This points to another advantage of edge-security solutions: you won’t need to maintain a separate CDN, and you also get out-of-the-box egress savings via caching.
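To make that filtering concrete, here is a toy sketch (illustrative assumptions only, not Akamai’s actual rules) of why a proxy that serves only HTTP/S can discard network-layer floods outright:

```python
# Toy sketch: an edge node that only serves HTTP/S has no reason to
# forward anything else, so all other traffic is dropped at the edge.
ALLOWED_PORTS = {80, 443}

def admit_at_edge(dst_port: int, is_wellformed_http: bool) -> bool:
    # A SYN flood on a random port or a UDP reflection flood fails this
    # check immediately and never reaches the origin infrastructure.
    return dst_port in ALLOWED_PORTS and is_wellformed_http
```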

DNS defense: The same technology applies to Akamai’s authoritative DNS service, Edge DNS, which instantly drops all traffic not on port 53. Unlike other DNS solutions, Akamai specifically architected Edge DNS for availability and resiliency against DDoS attacks (in addition to improved performance) with architectural redundancies at multiple levels, including name servers, points of presence, networks, and even segmented IP anycast clouds.

Cloud scrubbing defense: Our Prolexic solution protects entire data centers and internet-facing infrastructure from DDoS attacks — across all ports and protocols. By routing both legitimate and malicious traffic through Prolexic, we are able to build both positive and negative security models that proactively and instantly mitigate DDoS attacks with high accuracy.

Human defense: Akamai Security Operations Command Center (SOCC) experts act as an extension of an enterprise’s incident response team to balance automated detection and response with human engagement. This layer of defense adds huge benefits to business, including:

  • Proactive monitoring of behavioral anomalies for early threat detection
  • Expert-crafted defense with scalable protection
  • Visibility into existing and emerging threats, so you can mitigate them faster
  • Enhanced security intelligence to address the growing attack surface

Finally, responding at the edge also reduces the actual cost of fighting a DDoS attack, because scaling up is not necessary. Roll-your-own solutions (like a WAF AMI from AWS or a ModSecurity-based setup) run on compute nodes, which means the bigger the attack, the more they have to scale up to fight it. And the more they scale up, the higher the costs.

Think you’re a low-risk target? No such thing in multi-cloud environments.

According to IDC, DDoS attacks are expected to grow at an 18% CAGR through 2023 — a clear indicator that it’s time to increase investment in robust mitigation controls. And while some organizations may believe they’re low-risk targets for a DDoS attack, the AWS outage makes one thing clear: Our growing reliance on internet connectivity to power business-critical services and applications leaves everyone exposed to downtime and diminished performance — if their environments are too complex to manage, protect, and troubleshoot. Learn more about security at the edge.

Does your company make life better for your customers through innovative digital experiences? We want to hear about it! Enter the Future of Life Online Challenge for your chance to win up to $1 million worth of Akamai cybersecurity and edge technology solutions. 

About the author

Pavel Despot, Senior Product Marketer for the cloud in Akamai’s Edge Technologies Group, has more than 20 years of experience designing and deploying large-scale cloud solutions for global carriers, financial institutions, and other enterprises. Previously, as Principal Solutions Engineer at Akamai, he designed secure and fault-tolerant cloud solutions. He holds two patents in mobile network design and has held various leadership roles on the CTIA Wireless Internet Caucus, the CDMA Developers Group, and the Interactive Advertising Bureau. Pavel lives in Boston.

Improving Cyber Hygiene with Multi-Factor Authentication and Cyber Awareness

Using multi-factor authentication (MFA) is one of the key components of an organization’s Identity and Access Management (IAM) program for maintaining a strong cybersecurity posture. Having multiple layers to verify users is important, but MFA fatigue is also real and can be exploited by hackers.

Enabling MFA for all accounts is a best practice for all organizations, but the specifics of how it is implemented are significant because attackers are developing workarounds. That said, when done correctly – and with the right pieces in place – MFA is an invaluable tool in the cyber toolbox and a key piece of proper cyber hygiene. This is a primary reason why MFA was a key topic for this year’s Cybersecurity Awareness Month. For leaders and executives, the key is to ensure employees are trained to understand the importance of the security tools – like MFA – available to them, while also making the process easy for them.

MFA is still an important piece of the cyber hygiene puzzle

MFA provides extra layers of security throughout your organization. This quick verification step lets organizations confirm a user’s identity before granting access to company data. In practice, that can mean prompting employees to use mobile tokens and/or enter a specific code they’ve been texted or emailed before logging on to certain devices and websites.

MFA fatigue is rising, and hackers are noticing

Even though MFA should be a basic requirement these days, it’s not a foolproof tactic. Attackers are finding new ways around this security layer with what are called MFA fatigue attacks.

As employees try to access work applications, they are often prompted to verify their identity in some way established by the IT security team. This typically involves notifications to their smartphones. Anyone who has been trying to complete their work in a timely manner knows the irritation of constantly having to take action on these notifications. This is the basis of the MFA fatigue attack.

Attackers excel at finding ways to gain entry to their chosen target, and they seem to know a good bit about human psychology. Armed with compromised credentials, attackers now spam employees with MFA authorization requests, sometimes dozens of times in an hour, until the victims get so irritated that they approve the request in their authentication apps. Or the victims might assume there is a system malfunction and accept the notification just to make the alerts stop.

A simple, effective MFA strategy for long-term success

Getting MFA right is a balance: the security measure must be strict enough to maintain its integrity, yet lenient enough that employees don’t grow tired of it and get tripped up.

If sessions are invalidated too frequently, employees may grow irritated and view MFA prompts as excessive. If the policy is too lenient, authenticated sessions can last too long, IP changes won’t trigger new prompts, new MFA device enrollments won’t generate alerts, and enterprises risk not being informed when, for instance, an authentication token that has already passed the MFA check gets stolen.

Most employees have never heard of MFA fatigue attacks, so they don’t know to look for or report them. Organizations need to educate employees so they’re prepared to spot and report these attacks.

Organizations need to place controls on MFA to lower the potential for MFA abuse. The most effective control is to not use methods that allow simple approvals of notifications – a scenario that contributes to MFA fatigue. All approvals should mandate responses that prove the user has the authenticated device. Number matching, for instance, is a technique that requires the user to enter a series of numbers they can see on their screen.
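A minimal sketch of the idea (hypothetical code, not any particular vendor’s implementation): the sign-in screen shows a short number, and the push approval succeeds only if the user types that same number into the authenticator app.

```python
import hmac
import secrets

def new_number_match_challenge() -> str:
    # Short number displayed on the sign-in screen.
    return f"{secrets.randbelow(100):02d}"

def approve_push(challenge_shown: str, number_entered: str) -> bool:
    # A blind "Approve" tap can no longer satisfy the prompt: the user
    # must prove they can actually see the sign-in screen. The
    # constant-time comparison avoids leaking digits through timing.
    return hmac.compare_digest(challenge_shown, number_entered)
```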

There’s also the effective one-time passcode (OTP) method of approval where the user gets information from the authentication request and has to enter it for verification. This requires a little more work on the user’s part, but it helps reduce the risk of MFA fatigue.

Another useful tool is an endpoint privilege management solution, which helps stop the theft of cookies. If attackers get hold of those cookies, they can bypass MFA controls. This solution is a robust layer in the protection of user credentials.

It’s important to set thresholds and send alerts to the SOC if certain thresholds are exceeded. The SOC can use user behavior analytics to create context-based triggers that alert the security team if any unusual behavior occurs. It can also prohibit user authentication from dubious IP addresses.

Outsmarting cyber criminals with the right security solutions and training

MFA prevents unauthorized access from cyber criminals, yet they have found a way to circumvent it by turning its own premise of trust and authentication against users. That’s why organizations must use a two-pronged approach: educating employees about MFA fatigue attacks and setting up appropriate guardrails to reduce the likelihood of these attacks succeeding. Solutions like Fortinet’s FortiAuthenticator, FortiToken and FortiTrust Identity further protect organizations and strengthen their security posture. At the same time, cybersecurity awareness training, like Fortinet’s Security Awareness and Training service, can help ensure that employees are aware of all threat methods, as well as the importance of properly using all the security tools available to them.

Find out more about how Fortinet’s Training Advancement Agenda (TAA) and Training Institute programs—including the NSE Certification program, Academic Partner program, and Education Outreach program—are increasing access to training to help solve the cyber skills gap.

Researchers found security pitfalls in IBM’s cloud infrastructure

Security researchers recently probed IBM Cloud’s database-as-a-service infrastructure and found several security issues that granted them access to the internal server used to build database images for customer deployments. The demonstrated attack highlights some common security oversights that can lead to supply chain compromises in cloud infrastructure.

Developed by researchers from security firm Wiz, the attack combined a privilege escalation vulnerability in the IBM Cloud Databases for PostgreSQL service with plaintext credentials scattered around the environment and overly permissive internal network access controls that allowed for lateral movement inside the infrastructure.

PostgreSQL is an appealing target in cloud environments

Wiz’ audit of the IBM Cloud Databases for PostgreSQL was part of a larger research project that analyzed PostgreSQL deployments across major cloud providers who offer this database engine as part of their managed database-as-a-service solutions. Earlier this year, the Wiz researchers also found and disclosed vulnerabilities in the PostgreSQL implementations of Microsoft Azure and the Google Cloud Platform (GCP).

The open-source PostgreSQL relational database engine has been in development for over 30 years with an emphasis on stability, high availability and scalability. However, this complex piece of software was not designed with a permission model suitable for multi-tenant cloud environments, where database instances need to be isolated from each other and from the underlying infrastructure.

PostgreSQL has powerful features through which administrators can alter the server file system and even execute code through database queries, but these operations are unsafe and need to be restricted in shared cloud environments. Meanwhile, other admin operations such as database replication, creating checkpoints, installing extensions and event triggers need to be available to customers for the service to be functional. That’s why cloud service providers (CSPs) had to come up with workarounds and make modifications to PostgreSQL’s permission model to enable these capabilities even when customers only operate with limited accounts.

Privilege escalation through SQL injection

While analyzing IBM Cloud’s PostgreSQL implementation, the Wiz researchers looked at the Logical Replication mechanism that’s available to users. This feature was implemented using several database functions, including one called create_subscription that is owned and executed by a database superuser called ibm.

When they inspected the code of this function, the researchers noticed an SQL injection vulnerability caused by improper sanitization of the arguments passed to it. This meant they could pass arbitrary SQL queries to the function, which would then execute those queries as the ibm superuser. The researchers exploited this flaw via the PostgreSQL COPY statement to execute arbitrary commands on the underlying virtual machine that hosted the database instance and opened a reverse shell.
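The underlying bug class is easy to illustrate. Here is a hypothetical sketch (not IBM’s actual function; the table and column names are invented) contrasting string-built SQL with a parameterized query:

```python
# Hypothetical illustration of the bug class, not IBM's actual code.
import psycopg2

def lookup_vulnerable(cur, owner: str):
    # BAD: the argument is pasted straight into the query text. A value
    # like  x'; COPY t FROM PROGRAM 'id'; --  appends extra statements,
    # and COPY ... FROM PROGRAM runs shell commands when the query
    # executes as a superuser (such as the ibm role described above).
    cur.execute(f"SELECT * FROM subscriptions WHERE owner = '{owner}'")

def lookup_safer(cur, owner: str):
    # BETTER: the driver binds the value, so it cannot terminate the
    # statement or smuggle in new ones.
    cur.execute("SELECT * FROM subscriptions WHERE owner = %s", (owner,))

if __name__ == "__main__":
    conn = psycopg2.connect("dbname=example")   # assumed local database
    lookup_safer(conn.cursor(), "alice")
```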

With a shell on the Linux system, they began reconnaissance to understand their environment: listing running processes, checking active network connections, inspecting the contents of the /etc/passwd file, which lists the system’s users, and running a port scan on the internal network to discover other servers. The broad port scan caught the attention of the IBM security team, who reached out to the Wiz team to ask about their activities.

“After discussing our work and sharing our thoughts with them, they kindly gave us permission to pursue our research and further challenge security boundaries, reflecting the organization’s healthy security culture,” the Wiz team said.

Stored credentials lead to supply chain attack

The gathered information, such as environment variables, told the researchers they were in a Kubernetes (K8s) pod container. After searching the file system, they found a K8s API access token stored locally in a file called /var/run/secrets/kubernetes.io/serviceaccount/token. The API token allowed them to gather more information about the K8s cluster, but it turned out that all the pods were associated with their own account and operated under the same namespace. This wasn’t a dead end, however.
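The token path and in-cluster API hostname below are standard Kubernetes defaults (not details specific to IBM’s environment); the sketch shows how any process inside a pod can query the API with its mounted service account credentials:

```python
# Sketch: querying the Kubernetes API with the pod's service account.
import requests

SA = "/var/run/secrets/kubernetes.io/serviceaccount"

with open(f"{SA}/token") as f:
    token = f.read().strip()
with open(f"{SA}/namespace") as f:
    namespace = f.read().strip()

resp = requests.get(
    f"https://kubernetes.default.svc/api/v1/namespaces/{namespace}/pods",
    headers={"Authorization": f"Bearer {token}"},
    verify=f"{SA}/ca.crt",   # the cluster CA is mounted next to the token
)
resp.raise_for_status()
for pod in resp.json()["items"]:
    # Pod specs include container images and environment configuration,
    # which is where credentials like registry keys can surface.
    print(pod["metadata"]["name"], pod["spec"]["containers"][0]["image"])
```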

K8s is a container orchestration system used for software deployment, where containers are usually deployed from images — prebuilt packages that contain all the files needed for a container and its preconfigured services to operate. These images are normally stored on a container registry server, which can be public or private. In IBM Cloud’s case, it was a private container registry that required authentication.

The researchers used the API token to read the configurations of the pods in their namespace and found the access key for four different internal container registries in those configuration files. The description of this newly found key in IBM Cloud’s identity and access management (IAM) API suggested it had both read and write privileges to the container registries, which would have given the researchers the ability to overwrite existing images with rogue ones.

However, it turned out that the key description was inaccurate and they could only download images. This level of access had security implications, but it did not pose a direct threat to other IBM Cloud customers, so the researchers pushed forward.

Container images can contain a lot of sensitive information that’s used during deployment and later gets deleted, including source code, internal scripts referencing additional services in the infrastructure, as well as credentials needed to access them. Therefore, the researchers decided to download all images from the registry service and use an automated tool to scan them for secrets, such as credentials and API tokens.

“In order to comprehensively scan for secrets, we unpacked the images and examined the combination of files that made up each image,” the researchers said. “Container images are based on one or more layers; each may inadvertently include secrets. For example, if a secret exists in one layer but is deleted from the following layer, it would be completely invisible from within the container. Scanning each layer separately may therefore reveal additional secrets.”

The JSON manifest files of container images have a “history” section that lists historical commands that were executed during the build process of every image. In several such files, the researchers found commands that had passwords passed to them as command line arguments. These included passwords for an IBM Cloud internal FTP server and a build artifact repository.
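A rough sketch of this kind of scan (the regex patterns are assumptions; Wiz used its own automated tooling), covering both per-layer file contents and the build history recorded in an image’s config JSON:

```python
import json
import re
import tarfile

# Assumed patterns; real secret scanners ship far larger rule sets.
SECRET_RE = re.compile(
    rb"(password|passwd|secret|api[_-]?key|token)\s*[=:]\s*\S+", re.I
)

def scan_layer(layer_tar_path: str) -> None:
    # Scan each layer on its own: a file deleted in a later layer is
    # invisible in the running container but still present here.
    with tarfile.open(layer_tar_path) as tar:
        for member in tar.getmembers():
            if not member.isfile():
                continue
            data = tar.extractfile(member).read()
            for m in SECRET_RE.finditer(data):
                print(layer_tar_path, member.name,
                      m.group().decode(errors="replace"))

def scan_history(config_json_path: str) -> None:
    # Build commands recorded in the image config can embed plaintext
    # passwords passed as command-line arguments.
    with open(config_json_path) as f:
        config = json.load(f)
    for entry in config.get("history", []):
        cmd = entry.get("created_by", "")
        if SECRET_RE.search(cmd.encode()):
            print("history:", cmd)
```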

Finally, the researchers tested if they could access those servers from within their container and it turned out that they could. This overly permissive network access combined with the extracted credentials allowed them to overwrite arbitrary files in the build artifact repository that’s used by the automated IBM Cloud build process to create container images. Those images are then used in customer deployments, opening the door to a supply chain attack.

“Our research into IBM Cloud Databases for PostgreSQL reinforced what we learned from other cloud vendors, that modifications to the PostgreSQL engine effectively introduced new vulnerabilities to the service,” the researchers said. “These vulnerabilities could have been exploited by a malicious actor as part of an extensive exploit chain culminating in a supply-chain attack on the platform.”

Lessons for other organizations

While all of these issues have already been privately reported to and fixed by the IBM Cloud team, they are not unique to IBM. According to the Wiz team, the “scattered secrets” issue is common across all cloud environments.

Automated build and deployment workflows often leave secrets behind in various places, such as configuration files, Linux bash history and journal files, that developers forget to wipe when deployment is complete. Furthermore, some developers accidentally upload their whole .git directories and CircleCI configuration files to production servers. Forgotten secrets commonly found by the Wiz team include cloud access keys, passwords, CI/CD credentials and API access tokens.

Another prevalent issue that played a critical role in the IBM Cloud attack is the lack of strict access controls between production servers and internal CI/CD systems. This often allows attackers to move laterally and gain a deeper foothold into an organization’s infrastructure.

Finally, private container registries can provide a wealth of information to attackers that goes beyond credentials. They can reveal information about critical servers inside the infrastructure or can contain code that reveals additional vulnerabilities. Organizations should make sure their container registry solutions enforce proper access controls and scoping, the Wiz team said.

Software projects face supply chain security risk due to insecure artifact downloads via GitHub Actions

The way build artifacts are stored by the GitHub Actions platform could enable attackers to inject malicious code into software projects with CI/CD (continuous integration and continuous delivery) workflows that don’t perform sufficient filtering when downloading artifacts. Cybersecurity researchers have identified several popular artifact download scripts, used by thousands of repositories, that are vulnerable to this issue.

“We have discovered that when transferring artifacts between different workflows, there is a major risk for artifact poisoning — a technique in which attackers replace the content of a legitimate artifact with a modified malicious one and thereby initiate a supply chain attack,” researchers from supply chain security firm Legit Security said in an analysis of the issue.

To attack a vulnerable project’s CI/CD pipeline that downloads and uses artifacts generated by other workflows, attackers only need to fork the repositories containing those workflows, modify them in their local copies so they produce rogue artifacts and then make pull requests back to the original repositories without those requests having to be accepted.

A logic flaw in artifact storage APIs

GitHub Actions is a CI/CD platform for automating the building and testing of software code. The service is free for public repositories and includes free minutes of worker run time and storage space for private repositories. It’s widely adopted by projects that use GitHub to host and manage their source code repositories.

GitHub Actions workflows are automated processes defined in .yml files using YAML syntax that get executed when certain triggers or events occur, such as when new code gets committed to the repository. Build artifacts are compiled binaries, logs and other files that result from the execution of a workflow and its individual jobs. These artifacts are saved inside storage buckets, with each workflow run being assigned a particular bucket where it can upload files and later download them.

The reference “action” (script) for downloading artifacts that’s provided by GitHub doesn’t support cross-workflow artifact downloads, but reusing artifacts generated by different workflows as input for follow-up build steps is a common use case for software projects. That’s why developers have created their own custom scripts that rely on the GitHub Actions API to download artifacts using more complex filtering, such as artifacts created by a specific workflow file, a specific user, a specific branch and so on.

The problem Legit Security found is that the API doesn’t differentiate between artifacts uploaded by forked repositories and base repositories. If a download script filters for artifacts generated by a particular workflow file from a particular repository, the API will serve the latest version of the artifact generated by that file, and this could be a malicious version generated automatically via a pull request action from a forked version of the repository.

“To put it simply: in a vulnerable workflow, any GitHub user can create a fork that builds an artifact,” the researchers said. “Then inject this artifact into the original repository build process and modify its output. This is another form of a software supply chain attack, where the build output is modified by an attacker.”

The researchers found four custom actions developed by the community for downloading artifacts that were all vulnerable. One of them was listed as a dependency for over 12,000 repositories.

The Rust example

One of the repositories that used such a custom script in one of its workflows was the official repository for the Rust programming language. The vulnerable workflow, called ci.yml, was responsible for building and testing the repository’s code and used the custom action to download an artifact called libgccjit.so — a Linux library file — that was generated by a workflow in a third-party repository.

All attackers had to do was fork the third-party repository, modify the workflow in that fork to generate a malicious version of the library, and issue a pull request to the original repository to generate the artifact. Had Rust’s workflow then pulled in the poisoned version of the library, it would have given the attackers the ability to execute malicious code within the Rust repository with the workflow’s privileges.

“Upon exploitation, the attacker could modify the repository branches, pull requests, issues, releases, and all of the entities that are available for the workflow token permissions,” the researchers said.

Users need to enforce stricter filtering for artifact downloads

GitHub responded to Legit’s report by adding more filtering capabilities to the API, which developers can use to better identify artifacts created by a specific run instance of the workflow (workflow run id). However, this change cannot be forced onto existing implementations without breaking workflows, so it’s up to users to update their workflows with stricter filtering in order to be protected.

Another mitigation is to filter the downloaded artifacts by the hash value of the commits that generated them, or to exclude artifacts created by pull requests entirely using the exclude_pull_requests option. Legit Security also contacted the authors of the vulnerable custom artifact download scripts they found.
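As an illustration, here is a rough sketch of stricter filtering against GitHub’s artifact-listing REST endpoint. The workflow_run fields used below (head_repository_id, head_sha) reflect our reading of the API and should be verified against GitHub’s documentation:

```python
import requests

API = "https://api.github.com"

def find_trusted_artifact(owner: str, repo: str, artifact_name: str,
                          expected_sha: str, token: str):
    """Return a download URL only for an artifact produced by the base
    repository itself at a known commit; fork-built artifacts are rejected."""
    headers = {"Authorization": f"Bearer {token}"}
    base_repo_id = requests.get(
        f"{API}/repos/{owner}/{repo}", headers=headers).json()["id"]
    resp = requests.get(
        f"{API}/repos/{owner}/{repo}/actions/artifacts", headers=headers)
    resp.raise_for_status()
    for artifact in resp.json()["artifacts"]:    # first page, for brevity
        run = artifact.get("workflow_run") or {}
        if artifact["name"] != artifact_name:
            continue
        if run.get("head_repository_id") != base_repo_id:
            continue   # built from a fork's copy of the workflow
        if run.get("head_sha") != expected_sha:
            continue   # pin to an expected commit instead of "latest"
        return artifact["archive_download_url"]
    return None
```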

“In supply chain security, the focus has been on preventing people from contributing malicious code, so every time you do a change in a repository, create a pull request or do a change request, GitHub has a lot of built-in verification controls,” Liav Caspi, CTO of Legit Security, tells CSO. “Somebody has to approve your code, somebody has to merge it, so there’s a person involved. What we’ve been trying to find are techniques that exploit a logic problem that any person could influence without review and I think this is one of them. If someone would have known about it, they could have injected the artifact without any approval.”

Typically, CI pipelines have workflows that run automatically on pull requests to test the code before it’s manually reviewed, and if the pull request contains any artifact that needs to be built, the workflow will build it, Caspi said. A sophisticated attacker could create the pull request to get the artifact built, then delete the request by closing the submission, and chances are that with all the activity noise in source code repositories today, it would go unnoticed, he said.
