
Cyber Security

Firefox Relay gets added to disposable email blocklist, angers users



The maintainers of a “disposable email service” blocklist have decided to add Firefox Relay to the list, leaving many users of the service upset.

Firefox Relay is a privacy-centric email service that enables users to protect their real email addresses and hence limit spam.

Firefox Relay to go into disposable email blocklist

Launched in November 2021, Firefox Relay was created with the goal of helping users safeguard their privacy and limit the amount of email spam directed at them. 

Available as a free and premium offering, the service hides the user’s real email address to help protect their identity by giving them an alias to use.

Disposable email address services work by providing users with a temporary, intermediate email address that “relays” mail to their real inbox.

Users signing up for Firefox Relay are assigned an “@mozmail.com” email alias which forwards their mail to their actual email address.

Firefox Relay’s mozmail domain in use (Mozilla)

Although disposable email services might provide users with peace of mind when signing into free Wi-Fi portals that require an email address, and services with a high probability of sending marketing emails to users, they can also become a nuisance for service providers.

For example, mission-critical sites providing e-commerce and online banking services may become susceptible to abuse by threat actors if they allow the use of disposable emails.

Therefore, blocklists of domains used by burner email services are compiled and maintained by third parties.

These can be referred to by online service providers from time to time to deny account signups to users presenting a disposable email address.
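A provider-side check against such a list can be a simple set-membership test on the address’s domain. A minimal sketch in Python (the helper names and the inline domain set are illustrative stand-ins, not the repository’s actual contents or API):

```python
def load_blocklist(path):
    """Load a blocklist file with one domain per line (comments/blanks ignored)."""
    with open(path) as f:
        return {line.strip().lower() for line in f if line.strip()}

def is_disposable(email, blocklist):
    """Return True if the email address's domain appears on the blocklist."""
    domain = email.rsplit("@", 1)[-1].lower()
    return domain in blocklist

# Hypothetical in-memory list standing in for a real blocklist file.
blocklist = {"mailinator.com", "guerrillamail.com", "10minutemail.com"}
print(is_disposable("someone@mailinator.com", blocklist))  # True
print(is_disposable("someone@example.org", blocklist))     # False
```

A signup form would run this check server-side and reject the registration when it returns True.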

As seen by BleepingComputer today, the “disposable-email-domains” list, hosted in a GitHub repository of the same name, contains known burner email services like 10minutemail, GuerrillaMail, and Mailinator.

Alongside these domains, mozmail.com was also proposed for addition as of a few days ago:

Firefox Relay domain added to blocklist (GitHub)

It isn’t clear how many service providers reference the “disposable-email-domains” list when checking whether a provided email address is a burner.

But note, we did not yet see the domain on the list itself: “mozmail.com” is the functional domain used by email aliases generated by Firefox Relay.

Back in November 2021, Firefox Relay’s team lead had requested that the maintainer of a separate burner email list, “burner-email-providers,” exempt the domain from that blocklist:

“We are operating Relay with a number of features that I think mitigate the risks that these aliases pose,” Mozilla’s privacy and security engineer Luke Crouch explained in November.

Firstly, if an alias is disabled by the user, any emails sent to the alias are not bounced back but instead discarded, with a 404 error message returned by the service’s HTTP webhook, stated Crouch.

Secondly, he explained, the anti-abuse protections built into Relay limit free users to a total of five aliases, and further rate-limit premium customers so they cannot abuse the service by creating large-scale throw-away aliases for, say, automated signups to web services.
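As an illustration only, the two mitigations Crouch describes could be modeled along these lines (the function names and data shapes are hypothetical, not Relay’s actual code):

```python
FREE_ALIAS_CAP = 5  # per the article: free users are limited to five aliases

def can_create_alias(user):
    """Allow a new alias unless a free user has reached the cap.

    Premium users are rate-limited instead (not modeled here).
    """
    if user["premium"]:
        return True
    return len(user["aliases"]) < FREE_ALIAS_CAP

def handle_inbound(alias):
    """Return an HTTP status for the forwarding webhook: forward or discard.

    Disabled aliases discard mail with a 404 rather than bouncing it,
    so senders cannot probe which aliases exist.
    """
    if not alias["enabled"]:
        return 404
    return 200  # forward to the user's real address
```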

With that reasoning, mozmail.com was swiftly removed from that blocklist. And it appears the creators of “disposable-email-domains” have also honored the exemption, for now.

Users upset at the decision

The move to propose the addition of Firefox Relay’s main domain to the disposable email providers blocklist left many users confused and displeased, prompting the list’s maintainers to lock the GitHub discussion before it got “too heated.”

“Well, nice pickle. Why are you doing this to us Firefox? Among other things this throws a wrench in the original (not really rock-solid) reasoning about domain levels from here — so it breaks our CI even if in the correct order,” asked software developer Martin Cech, who is one of the contributors to the blocklist’s repository.

“My reasoning on including this is that an email with a mozmail domain is never going to be a primary email and is always going to forward to some other address,” responded the list’s co-maintainer, Dustin Ingram, who is also a Google open source security team member.

But one pseudonymous GitHub user, worldofgeese, cautioned that such blocklists could strip users of “one of the few defenses they have” against their email address leaking and against threat actors waiting to flood their mailboxes with spam.

“Can you not do this? You look like extremely bad actors. Please don’t contribute to an unsafe internet,” wrote worldofgeese.

“I use Private Relay to protect my personal mail address, not as a tool for spam. I’m not even sure how a user would use Private Relay for spam, as users cannot begin email chains with a Relay address, only respond to mails delivered to those addresses.”

Another GitHub user urged that the decision to blocklist Firefox Relay be reconsidered as the service is one of the safeguards that prevent personal email addresses from turning up in data breaches and being spammed.

Interestingly, privacy-focused email services like Fastmail allow creation of both real and randomly generated email aliases via their primary domain, fastmail.com.

“Good luck blocking the hundreds of thousands of Fastmail users by trying to block the minority using masked addresses,” challenged a Hacker News commentator.

As seen by BleepingComputer, fastmail.com is present on the allowlist within the “disposable-email-domains” repo.

Some surmised that, with additional effort, malicious actors could choose to abuse legitimate email providers like Gmail just as well, rather than turning to a service like Firefox Relay, thereby rendering such blocklists futile.

And the divide seems stark between those who swear by the efficacy of Firefox Relay and disposable email services, and those tasked with the painful job of maintaining anti-spam blocklists.

“The reason disposable email addresses exist and are popular is because services have abused users’ trust to not use these emails for shady ad revenue and marketing schemes,” writes a user on Hacker News.

“It’s further compounded by shoddy security that leads to leaks and exposure of people’s personal email addresses to pwned compromised lists. People don’t want to give up their personal email addresses so that they can be spammed or hacked. Until services do better (ie don’t sell me out for cheap) I’ll keep using the latest disposable email address to sign up for your user-hostile websites.”

Whether the privacy afforded by email relay services outweighs the risks posed by their abuse remains an ongoing debate.



JSON-based SQL injection attacks trigger need to update web application firewalls



Security researchers have developed a generic technique for SQL injection that bypasses multiple web application firewalls (WAFs). At the core of the issue was WAF vendors failing to add support for JSON inside SQL statements, allowing potential attackers to easily hide their malicious payloads.

The bypass technique, discovered by researchers from Claroty’s Team82, was confirmed to work against WAFs from Palo Alto Networks, Amazon Web Services (AWS), Cloudflare, F5, and Imperva. These vendors have released patches, so customers should update their WAF deployments. However, the technique might work against WAF solutions from other vendors as well, so users should ask their providers if they can detect and block such attacks.

“Attackers using this novel technique could access a backend database and use additional vulnerabilities and exploits to exfiltrate information via either direct access to the server or over the cloud,” the Claroty researchers said in their report. “This is especially important for OT and IoT platforms that have moved to cloud-based management and monitoring systems. WAFs offer a promise of additional security from the cloud; an attacker able to bypass these protections has expansive access to systems.”

Bypass found while investigating other vulnerabilities

The Claroty researchers developed this attack technique while investigating vulnerabilities they found in a wireless device management platform from Cambium Networks called cnMaestro that can be deployed on premises and in the cloud. The cloud service operated by Cambium provides a separate isolated instance of the cnMaestro server for each customer and uses AWS on the backend.

The team found seven vulnerabilities in cnMaestro including a SQL injection (SQLi) flaw that allowed them to exfiltrate users’ sessions, SSH keys, password hashes, tokens, and verification codes from the server database. SQL injection is one of the most common and dangerous web application vulnerabilities and allows attackers to inject arbitrary SQL queries into requests that the application would then execute against the database with its own privileges.
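The mechanics of SQL injection are easy to demonstrate with Python’s built-in sqlite3 module; the table and payload below are purely illustrative and have nothing to do with cnMaestro itself:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT, secret TEXT)")
db.executemany("INSERT INTO users VALUES (?, ?)",
               [("alice", "s1"), ("bob", "s2")])

def lookup_unsafe(name):
    # Vulnerable: attacker-controlled input is pasted into the SQL string,
    # so a crafted value can change the query's logic.
    query = f"SELECT secret FROM users WHERE name = '{name}'"
    return db.execute(query).fetchall()

def lookup_safe(name):
    # Parameterized: the driver treats the input as data, never as syntax.
    return db.execute("SELECT secret FROM users WHERE name = ?",
                      (name,)).fetchall()

print(lookup_unsafe("alice"))        # one row, as intended
print(lookup_unsafe("' OR '1'='1"))  # injection dumps every secret
print(lookup_safe("' OR '1'='1"))    # no rows: the payload is inert
```

The tautology `' OR '1'='1` turns the WHERE clause into a condition that is true for every row, which is why the unsafe variant leaks the whole table.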

After confirming their exploit worked against an on-premises deployment of cnMaestro, the researchers attempted it against a cloud-hosted instance. From the server response, they realized that the request was likely blocked by AWS’s web application firewall, which detected it as malicious.

Instead of giving up, the researchers decided to investigate how the AWS WAF recognizes SQL injection attempts, so they created their own vulnerable application hosted on AWS and sent malicious requests to it. Their conclusion was that the WAF uses two primary methodologies for identifying SQL syntax: searching for specific words in the request that it recognizes as part of SQL syntax and attempting to parse different parts of the request as valid SQL syntax.

“While most WAFs will use a combination of both methodologies in addition to anything unique the WAF does, they both have one common weakness: They require the WAF to recognize the SQL syntax,” the researchers said. “This triggered our interest and raised one major research question: What if we could find SQL syntax that no WAF would recognize?”

WAF vendors overlooked JSON in SQL

Starting around 10 years ago, database engines started to add support for working with JSON (JavaScript Object Notation) data. JSON is a data formatting and exchange standard that’s widely used by web applications and web APIs when talking to each other. Since applications already exchange data in JSON format, relational database engine creators found it useful to allow developers to directly use this data inside SQL operations without additional processing and modification.

PostgreSQL added this capability back in 2012, with other major database engines following over the years: MySQL in 2015, MSSQL in 2016 and SQLite in 2022. Today all these engines have JSON support turned on by default. However, WAF vendors did not follow suit, probably because they still considered this feature as being new and not well known.

“From our understanding of how a WAF could flag requests as malicious, we reached the conclusion that we need to find SQL syntax the WAF will not understand,” the Claroty researchers said. “If we could supply a SQLi payload that the WAF will not recognize as valid SQL, but the database engine will parse it, we could actually achieve the bypass. As it turns out, JSON was exactly this mismatch between the WAF’s parser and the database engine. When we passed valid SQL statements that used less prevalent JSON syntax, the WAF actually did not flag the request as malicious.”

After confirming that the AWS WAF firewall was vulnerable and they could use JSON to hide their SQLi exploit, the researchers wondered if other WAFs might have the same loophole. Testing of WAFs from several major vendors proved that their suspicion was correct, and they could use JSON syntax to bypass SQLi defenses with only minimal modifications among vendors.

The researchers reported the issue to the vendors they found vulnerable but also contributed their technique to SQLMap, an open-source penetration testing tool that automates SQL injection attacks. This means the bypass technique is now publicly available and can be used by anyone.

“Team82 disclosed its findings to five of the leading WAF vendors, all of which have added JSON syntax support to their products,” the researchers said. “We believe that other vendors’ products may be affected, and that reviews for JSON support should be carried out.”

Copyright © 2022 IDG Communications, Inc.



In-house vs. Outsourced Security: Understanding the Differences



Cybersecurity is not optional for businesses today. Ignoring security can result in a devastating breach or a productivity-sapping attack on the organization. But for many small- and medium-sized businesses (SMBs), the debate often revolves around whether to hire a third party or assemble an in-house security operations team.

Both options have their own pros and cons, but SMBs should weigh several factors to make the best decision for their own unique security needs. An in-house team, a managed security services provider (MSSP), or even a hybrid approach can make sense for various reasons.

Before choosing to build an in-house security team or outsource to an MSSP, businesses must first evaluate their unique needs to ensure the choice lays a foundation for future success.

Weighing control vs. costs

The obvious reason for assembling your own security team is control and immediate knowledge of what goes into your security operations.

“Handling security internally means you will sometimes have better visibility and centralized management,” says Scott Barlow, vice president of global MSP and cloud alliances at Sophos. “That said, if you outsource with the right service provider, visibility into what is going on should not be an issue.”

For many smaller organizations, the cost of running an in-house security program is prohibitive. Hiring skilled security specialists is expensive, and they are often difficult to find. They require regular training, and certifications must be kept fresh – typically at a cost to the employer.

“When you outsource to an MSSP, you will be paying a lot less than paying a senior security executive,” Barlow says. “I suggest that organizations conduct a cost analysis of outsourcing compared to paying salaries. Much of the time, it’s better to outsource.”

There are also technology and license costs to consider. Keeping software licenses up to date can consume both time and money, whereas working with an MSSP means access to the latest technology without worrying about license costs.
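The cost analysis Barlow suggests can start as a back-of-the-envelope comparison; every figure below is hypothetical and should be replaced with your own payroll data and vendor quotes:

```python
def annual_inhouse_cost(salaries, training_per_head, tooling_licenses):
    """Rough yearly cost of an in-house team: pay + training + licenses."""
    return sum(salaries) + training_per_head * len(salaries) + tooling_licenses

def annual_mssp_cost(monthly_fee):
    """Yearly cost of outsourcing at a flat monthly retainer."""
    return monthly_fee * 12

# Illustrative figures only -- not market salary or MSSP pricing data.
inhouse = annual_inhouse_cost(salaries=[140_000, 110_000],
                              training_per_head=5_000,
                              tooling_licenses=40_000)
mssp = annual_mssp_cost(monthly_fee=8_000)
print(f"in-house: ${inhouse:,}  MSSP: ${mssp:,}")
```

Even a crude model like this makes the trade-off concrete before factoring in softer considerations such as control, visibility, and 24-7 coverage.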

If both are important, try a hybrid model

Of course, some large organizations might need an in-house security presence.

“Generally, the larger you become, the more you need someone internally. That is where a co-managed model makes the most sense,” Barlow says.

In a hybrid model, companies tap outside support to collaborate with an internal security executive or team. This approach allows for more scalability while also providing the business with plenty of expertise through their relationship with the MSSP.

“Maybe you want to outsource a portion of the services because you can’t cover 24-7. Or maybe you need coverage on weekends,” Barlow says.

One major benefit to tapping outside support: your in-house team will have more time to focus on mission-critical objectives.

“With a hybrid approach, the internal IT and security teams can pivot to focus on more revenue generating activities,” Barlow says.





Prevention or Detection: Which Is More Important for Defending Your Network?



When it comes to physically protecting a building, you have two primary defenses: prevention and detection. You can either prevent people from entering your property without your permission, or you can detect when they have already trespassed onto your property. Most people would prefer to prevent any trespassing, but a determined adversary is always going to be able to gain access to your building, given enough time and resources. In this scenario, detection becomes the only alternative.

The same holds true for protecting assets in the digital world. We have the same two primary defenses: prevention and detection. And just like in the physical world, a determined adversary is going to gain access to your digital assets, given enough time and resources. The question will be: How quickly are you able to determine that an adversary has penetrated your network?

If you can’t prevent, you must discover

This is where detection comes in. Do you have the right tools and procedures in place to find attacks quickly when they are occurring? Most businesses do not. It takes days, weeks, and often even months before an attack is discovered. The gap between breach and discovery is known as dwell time, which is estimated to be more than 200 days in most cases and, according to IBM, as many as 280 days in some instances. If it takes this long to discover that an attack is in process, it may be impossible to determine the root cause if you don’t have enough historical data to review.

Therefore, it is just as important, and maybe even more important, to invest in your ability to detect that a breach has occurred as it is to invest in the firewall (FW) or intrusion detection system (IDS) rules that actively prevent attacks. New attacks are taking place all the time, and bad actors are constantly coming up with new ways of infiltrating your network. It is important to understand that, at some point, a bad actor is going to get through and penetrate your network. What will be vitally important is whether you are able to see the attack when it is taking place, or shortly after, or whether the attack will instead be discovered weeks or months after the fact. In the latter case, do you have enough historical data to go back and determine when the attack started, or will that data be long gone by the time you notice something is wrong?

Saving the data you need

It is important to have several months’ worth of data so that you can go back and determine the initial compromise on your network. Having an advanced network detection and response (NDR) tool such as NETSCOUT’s Omnis Cyber Intelligence (OCI) can ensure that you have the data you need. OCI stores all of the relevant information, including layer 2-7 metadata and packets that you need to determine the root cause of an attack—not just flow data that won’t help in this situation.

How much historical network traffic are you storing? Do you have enough data to go back and research the start of an attack if it occurred 200 days ago? Or are you going to rely on catching bad actors faster than the industry average? It is important to understand the need for leveraging both prevention and detection capabilities and ensuring that you have enough storage to thoroughly investigate an attack when it occurs.
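A rough way to answer the storage question is a back-of-the-envelope retention estimate; the formula and every figure below are illustrative assumptions, not NETSCOUT sizing guidance:

```python
def retention_storage_tb(avg_mbps, days, keep_ratio=1.0):
    """Storage (in terabytes) needed to retain `days` of captured traffic.

    avg_mbps:   average captured traffic in megabits per second
    keep_ratio: fraction of traffic actually stored (metadata-only
                capture keeps well under 1.0)
    """
    bytes_per_day = avg_mbps / 8 * 1e6 * 86_400  # Mbit/s -> bytes per day
    return bytes_per_day * days * keep_ratio / 1e12

# Hypothetical example: retain 200 days (roughly the dwell-time estimate
# cited above) of a 1 Gbps link, keeping ~10% as metadata and select packets.
print(round(retention_storage_tb(avg_mbps=1000, days=200, keep_ratio=0.1), 1))
```

Running the numbers this way makes it obvious why covering a 200-day dwell time demands deliberate storage planning rather than whatever disk happens to be left over.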

Watch this video to see how NETSCOUT can help your back-in-time investigation.

