
How chaos engineering can help DevSecOps teams find vulnerabilities


The words “chaos” and “engineering” aren’t usually found together. After all, good engineers keep chaos at bay. Yet lately software developers have been deploying what they loosely call “chaos” in careful amounts to strengthen their computer systems by revealing hidden flaws. The results aren’t perfect – anything chaotic can’t offer guarantees – but the techniques are effective often enough to be worthwhile.

The process can be especially useful for security analysts because their job is to find undocumented and unanticipated backdoors. Chaotic testing can’t identify every security failure, but it can reveal dangerous, unpatched vulnerabilities that the developers never imagined. Good chaos engineering can help both DevSecOps and DevOps teams, because problems of reliability or resilience are sometimes security weaknesses as well. The same coding mistake can often either crash the system or open the door to an intrusion.

What is chaos engineering?

The term is a neologism meant to unify several techniques that have found some success. Some use words like “fuzzing” or “glitching” to describe how they tweak a computer system, perhaps throwing it off balance or even crashing it. Testers inject random behavior to stress the software while watching carefully for malfunctions or bugs to appear. These are often failure modes that might take years to reveal themselves in regular use.
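
To make the idea concrete, here is a minimal fuzzing sketch in Python. The target function, its seeded bug, and all names are invented for illustration; real fuzzers such as AFL or libFuzzer are far more sophisticated.

    import random
    import string

    def target_parse(data: str) -> None:
        """Stand-in for the code under test; swap in the real parser."""
        # A seeded example bug: an opening brace without a closing one
        # crashes this toy "parser".
        if data.startswith("{") and not data.endswith("}"):
            raise ValueError("unbalanced braces")

    def fuzz(rounds: int = 10_000) -> list:
        """Throw random printable strings at the target, keep the crashers."""
        crashers = []
        for _ in range(rounds):
            payload = "".join(
                random.choices(string.printable, k=random.randint(0, 64))
            )
            try:
                target_parse(payload)
            except Exception:
                crashers.append(payload)  # save the input for later triage
        return crashers

    print(f"found {len(fuzz())} crashing inputs")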

John Gilmore, one of the founders of the Electronic Frontier Foundation (EFF) and a member of the development teams behind several key open-source projects, says that coding is a process of continual refinement and chaos engineering is one way to speed up the search for all possible execution paths. “The real value of long-running code is that most of the bugs have been shaken out of it by the first 10 million people to run it, the first 20 compilers that have compiled it, the first five operating systems it runs on. Ones that have then been tested by fuzzing and penetration tests (e.g., Google Project Zero) have many fewer unexplored code paths than any new piece of code,” he explains.

Gilmore likes to tell a story from the 1970s, when he worked for Data General, an early minicomputer manufacturer. He found that flipping a power switch at random times would leave the operating system state in disarray. “Rather than fixing the problem, the operating system engineers claimed that flipping the breaker wasn’t a valid test,” Gilmore says, before adding, “As a result, Data General is dead now.”


JSON-based SQL injection attacks trigger need to update web application firewalls


Security researchers have developed a generic technique for SQL injection that bypasses multiple web application firewalls (WAFs). At the core of the issue was WAF vendors’ failure to support JSON syntax inside SQL statements, which allowed potential attackers to easily hide their malicious payloads.

The bypass technique, discovered by researchers from Claroty’s Team82, was confirmed to work against WAFs from Palo Alto Networks, Amazon Web Services (AWS), Cloudflare, F5, and Imperva. These vendors have released patches, so customers should update their WAF deployments. However, the technique might work against WAF solutions from other vendors as well, so users should ask their providers if they can detect and block such attacks.

“Attackers using this novel technique could access a backend database and use additional vulnerabilities and exploits to exfiltrate information via either direct access to the server or over the cloud,” the Claroty researchers said in their report. “This is especially important for OT and IoT platforms that have moved to cloud-based management and monitoring systems. WAFs offer a promise of additional security from the cloud; an attacker able to bypass these protections has expansive access to systems.”

Bypass found while investigating other vulnerabilities

The Claroty researchers developed this attack technique while investigating vulnerabilities they found in a wireless device management platform from Cambium Networks called cnMaestro that can be deployed on premises and in the cloud. The cloud service operated by Cambium provides a separate isolated instance of the cnMaestro server for each customer and uses AWS on the backend.

The team found seven vulnerabilities in cnMaestro including a SQL injection (SQLi) flaw that allowed them to exfiltrate users’ sessions, SSH keys, password hashes, tokens, and verification codes from the server database. SQL injection is one of the most common and dangerous web application vulnerabilities and allows attackers to inject arbitrary SQL queries into requests that the application would then execute against the database with its own privileges.
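
As a minimal illustration of this flaw class (not the cnMaestro vulnerability itself), the Python snippet below shows how building SQL from raw user input lets a classic payload rewrite the query, and how parameter binding prevents it. The table and data are invented for the demo.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, password_hash TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', '5f4dcc3b...')")

    user_input = "' OR '1'='1"  # classic injection payload

    # Vulnerable: the payload becomes part of the SQL statement itself.
    query = "SELECT * FROM users WHERE name = '" + user_input + "'"
    print(conn.execute(query).fetchall())  # dumps every row

    # Safe: parameter binding keeps the payload as plain data.
    print(conn.execute(
        "SELECT * FROM users WHERE name = ?", (user_input,)
    ).fetchall())  # returns nothing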

After confirming their exploit worked against an on-premises deployment of cnMaestro, the researchers attempted it against a cloud-hosted instance. From the server response, they realized that the request was likely blocked by AWS’s web application firewall, which detected it as malicious.

Instead of giving up, the researchers decided to investigate how the AWS WAF recognizes SQL injection attempts, so they created their own vulnerable application hosted on AWS and sent malicious requests to it. Their conclusion was that the WAF uses two primary methodologies for identifying SQL syntax: searching for specific words in the request that it recognizes as part of SQL syntax and attempting to parse different parts of the request as valid SQL syntax.

“While most WAFs will use a combination of both methodologies in addition to anything unique the WAF does, they both have one common weakness: They require the WAF to recognize the SQL syntax,” the researchers said. “This triggered our interest and raised one major research question: What if we could find SQL syntax that no WAF would recognize?”
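
A toy version of the first methodology – keyword matching – shows why unrecognized syntax matters. The token list and logic below are invented for illustration; production WAFs are far more elaborate.

    import re

    # Naive signature: flag anything containing common SQL keywords.
    SQL_TOKENS = re.compile(
        r"\b(select|union|insert|update|delete|drop|sleep)\b", re.I
    )

    def looks_like_sqli(param: str) -> bool:
        return bool(SQL_TOKENS.search(param))

    print(looks_like_sqli("1 UNION SELECT password FROM users"))  # True
    print(looks_like_sqli("'{\"a\":1}' -> '$.a'"))  # False: JSON syntax unmodeled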

WAF vendors overlooked JSON in SQL

Starting around 10 years ago, database engines started to add support for working with JSON (JavaScript Object Notation) data. JSON is a data formatting and exchange standard that’s widely used by web applications and web APIs when talking to each other. Since applications already exchange data in JSON format, relational database engine creators found it useful to allow developers to directly use this data inside SQL operations without additional processing and modification.

PostgreSQL added this capability back in 2012, with other major database engines following over the years: MySQL in 2015, MSSQL in 2016 and SQLite in 2022. Today all these engines have JSON support turned on by default. WAF vendors, however, did not follow suit, probably because they still considered the feature new and not well known.
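
The snippet below shows what that support looks like in practice, using SQLite’s built-in JSON functions via Python (assuming a build with JSON support, which recent versions include by default).

    import sqlite3

    conn = sqlite3.connect(":memory:")

    # json_extract evaluates a JSON path inside an ordinary SQL expression.
    value = conn.execute(
        "SELECT json_extract('{\"a\": 2}', '$.a')"
    ).fetchone()[0]
    print(value)  # 2 – plain SQL to the database, JSON to everything else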

“From our understanding of how a WAF could flag requests as malicious, we reached the conclusion that we need to find SQL syntax the WAF will not understand,” the Claroty researchers said. “If we could supply a SQLi payload that the WAF will not recognize as valid SQL, but the database engine will parse it, we could actually achieve the bypass. As it turns out, JSON was exactly this mismatch between the WAF’s parser and the database engine. When we passed valid SQL statements that used less prevalent JSON syntax, the WAF actually did not flag the request as malicious.”
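
The sketch below reproduces the spirit of that mismatch with an invented payload (not one taken verbatim from the Team82 report): the injected always-true condition is expressed through a JSON function, so a filter tuned to classic SQL tautologies may not flag it, while the database happily evaluates it.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice')")

    # The always-true condition hides inside JSON syntax; the trailing
    # comment marker absorbs the query's dangling quote.
    payload = "' OR json_extract('{\"a\": 1}', '$.a') = 1 -- "
    query = "SELECT * FROM users WHERE name = '" + payload + "'"
    print(conn.execute(query).fetchall())  # every row comes back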

After confirming that the AWS WAF was vulnerable and that they could use JSON to hide their SQLi exploit, the researchers wondered whether other WAFs might have the same loophole. Testing WAFs from several major vendors proved their suspicion correct: they could use JSON syntax to bypass SQLi defenses with only minor modifications from vendor to vendor.

The researchers reported the issue to the vendors they found vulnerable but also contributed their technique to SQLMap, an open-source penetration testing tool that automates SQL injection attacks. This means the bypass technique is now publicly available and can be used by anyone.

“Team82 disclosed its findings to five of the leading WAF vendors, all of which have added JSON syntax support to their products,” the researchers said. “We believe that other vendors’ products may be affected, and that reviews for JSON support should be carried out.”


In-house vs. Outsourced Security: Understanding the Differences


Cybersecurity is not optional for businesses today. Ignoring security can result in a devastating breach or a productivity-sapping attack on the organization. But for many small- and medium-sized businesses (SMBs), the debate often revolves around whether to hire a third party or assemble an in-house security operations team.

Both options have their own pros and cons, but SMBs should weigh several factors to make the best decision for their own unique security needs. An in-house team, a managed security services provider (MSSP), or even a hybrid approach can make sense for various reasons.

Before choosing to build an in-house security team or outsource to an MSSP, businesses must first evaluate their unique needs to ensure the choice lays a foundation for future success.

Weighing control vs. costs

The obvious reason for assembling your own security team is control and immediate knowledge of what goes into your security operations.

“Handling security internally means you will sometimes have better visibility and centralized management,” says Scott Barlow, vice president of global MSP and cloud alliances at Sophos. “That said, if you outsource with the right service provider, visibility into what is going on should not be an issue.”

For many smaller organizations, the cost of running an in-house security program is prohibitive. Hiring skilled security specialists is expensive, and they are often difficult to find. They require regular training, and certifications must be kept fresh – typically at a cost to the employer.

“When you outsource to an MSSP, you will be paying a lot less than paying a senior security executive,” Barlow says. “I suggest that organizations conduct a cost analysis of outsourcing compared to paying salaries. Much of the time, it’s better to outsource.”
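
A back-of-the-envelope version of that cost analysis might look like the sketch below; every figure is an invented placeholder to be replaced with real quotes and salary data.

    # Annual cost of a small in-house program vs. an MSSP contract.
    in_house = {
        "senior_security_salary": 180_000,
        "training_and_certifications": 10_000,
        "tooling_and_licenses": 40_000,
    }
    mssp_monthly_fee = 8_000  # hypothetical contract price

    in_house_total = sum(in_house.values())
    mssp_total = mssp_monthly_fee * 12

    print(f"In-house: ${in_house_total:,} per year")
    print(f"MSSP:     ${mssp_total:,} per year")
    print(f"Delta:    ${in_house_total - mssp_total:,} per year")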

There are also technology and license costs to consider. Keeping software licenses up to date can consume both time and money, whereas working with an MSSP means access to the latest technology without worrying about license costs.

If both are important, try a hybrid model

Of course, some large organizations might need an in-house security presence.

“Generally, the larger you become, the more you need someone internally. That is where a co-managed model makes the most sense,” Barlow says.

In a hybrid model, companies tap outside support to collaborate with an internal security executive or team. This approach allows for more scalability while also providing the business with plenty of expertise through their relationship with the MSSP.

“Maybe you want to outsource a portion of the services because you can’t cover 24-7. Or maybe you need coverage on weekends,” Barlow says.

One major benefit to tapping outside support: your in-house team will have more time to focus on mission-critical objectives.

“With a hybrid approach, the internal IT and security teams can pivot to focus on more revenue generating activities,” Barlow says.


Prevention or Detection: Which Is More Important for Defending Your Network?


When it comes to physically protecting a building, you have two primary defenses: prevention and detection. You can either prevent people from entering your property without your permission, or you can detect when they have already trespassed onto it. Most people would prefer to prevent any trespassing, but a determined adversary is always going to be able to gain access to your building, given enough time and resources. In this scenario, detection becomes the only alternative.

The same holds true for protecting assets in the digital world. We have the same two primary defenses: prevention and detection. And just like in the physical world, a determined adversary is going to gain access to your digital assets, given enough time and resources. The question will be: How quickly are you able to determine that an adversary has penetrated your network?

If you can’t prevent, you must discover

This is where detection comes in. Do you have the right tools and procedures in place to find attacks quickly when they are occurring? Most businesses do not. It takes days, weeks, or even months before an attack is discovered. The gap between breach and discovery is known as dwell time, which is estimated to be more than 200 days in most cases and, according to IBM, as many as 280 days in some instances. If it takes this long to discover that an attack is in progress, it may be impossible to determine the root cause without enough historical data to review.

Therefore, it is just as important – and maybe more important – to invest in your ability to detect that a breach has occurred as it is to catch a breach while it is actively occurring or to confirm that specific firewall (FW) or intrusion detection system (IDS) rules have prevented an attack. New attacks are taking place all the time, and bad actors are constantly coming up with new ways of infiltrating networks. At some point, a bad actor is going to get through and penetrate your network. What is vitally important is whether you see the attack as it takes place, or shortly after, or only weeks or months after the fact. In the latter case, do you have enough historical data to go back and determine when the attack started, or will that data be long gone by the time you notice something is wrong?

Saving the data you need

It is important to have several months’ worth of data so that you can go back and determine the initial compromise on your network. Having an advanced network detection and response (NDR) tool such as NETSCOUT’s Omnis Cyber Intelligence (OCI) can ensure that you have the data you need. OCI stores all of the relevant information, including layer 2-7 metadata and packets that you need to determine the root cause of an attack—not just flow data that won’t help in this situation.

How much historical network traffic are you storing? Do you have enough data to go back and research the start of an attack if it occurred 200 days ago? Or are you going to rely on catching bad actors faster than the industry average? It is important to understand the need for leveraging both prevention and detection capabilities and ensuring that you have enough storage to thoroughly investigate an attack when it occurs.
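
A rough sizing exercise shows why retention planning matters; the throughput and retention figures below are placeholders to swap for your own measurements.

    # How much storage would N days of captured traffic require?
    avg_throughput_gbps = 1.0   # sustained average, not peak
    retention_days = 200        # roughly the dwell times cited above

    bytes_per_day = avg_throughput_gbps * 1e9 / 8 * 86_400
    total_tb = bytes_per_day * retention_days / 1e12
    print(f"~{total_tb:,.0f} TB for {retention_days} days "
          f"at {avg_throughput_gbps} Gbps")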
