Cyber Security

Tor Project appeals Russian court’s decision to block access to Tor



US-based Tor Project and Russian digital-rights protection org RosKomSvoboda are appealing a Russian court’s decision to block access to public Tor nodes and the project’s website.

The non-profit Tor Project operates the Tor decentralized network, which runs on top of the Internet, allowing users to bypass censorship, access websites anonymously, and visit special Onion URLs (.onion) accessible only over Tor.

Tor also hosts what is commonly referred to as the dark web: website developers can create special onion services that are only accessible via the Tor network and that provide anonymity to the hidden service’s operator.

Russian court blocks website and Tor nodes

In December, the Tor Project announced that Russia blocked their website and various public Tor nodes used to connect to the decentralized network in regions of Russia.

“Russia is the country with the second-largest number of Tor users, with more than 300,000 daily users or 15% of all Tor users. As it seems this situation could quickly escalate to a country-wide Tor block, it’s urgent that we respond to this censorship! We need your help NOW to keep Russians connected to Tor!” read a Tor blog post at the time.

Today, in coordinated announcements, RosKomSvoboda and the Tor Project explain that the Saratov district court in Russia ordered the block after the prosecutor’s office learned that the Tor network allows access to the “Federal List of Extremist Materials,” a list of works banned in Russia.

“The formal reason was the decision of the Saratov district court of 2017 in accordance with Art. 15.1 of the Law ‘On Information,’ ” RosKomSvoboda explained in an announcement on the appeal.

“This decision does not apply to any specific content, it is based on a review by the prosecutor’s office, which found that the Tor project website has the ability to ‘download an anonymizing browser program for subsequent visits to sites that host materials included in the Federal List of Extremist Materials.’”

RosKomSvoboda and Tor believe that the court’s decision to block the Tor Project’s website and infrastructure is illegal for the following reasons:

  • The case was considered without the participation of the representatives of Tor, which violated their procedural rights and the adversarial nature of the process;
  • The decision violates the constitutional right to freely provide, receive and disseminate information and protect privacy.

A Tor Project press release shared with BleepingComputer explains that Russia is the second-largest country using the Tor network, with more than 300,000 daily users.

However, since the Saratov court’s decision, the Tor Project has seen a sharp decline in Russian users accessing the service, as illustrated in the graph below.


“With the help of attorneys from RosKomSvoboda, Darbinyan Sarkis and Abashina Ekaterina, we are appealing the court decision and we hope to revert this situation and help create a precedent in Russia for digital rights,” said Isabela Bagueros, Executive Director of the Tor Project.

For now, Russian users can bypass the country’s censorship and reach the Tor Project’s website through a mirror site hosted by the Electronic Frontier Foundation.

Volunteers have also contributed over 1,000 additional Tor bridges that are not currently blocked, allowing Russian people to access the Tor network and counter government censorship.


When blaming the user for a security breach is unfair – or just wrong



In his career in IT security leadership, Aaron de Montmorency has seen a lot — an employee phished on their first day by someone impersonating the CEO, an HR department head asked to change the company’s direct deposit information by a bogus CFO, not to mention multichannel criminal engagement with threat actors attacking from social media to email to SMS text.

In these cases, the users almost fell for it, but something didn’t feel right. So, they manually verified by calling the executives who were being impersonated. De Montmorency, director of IT, security, and compliance with Tacoma, Washington-based Elevate Health, praises the instincts that stopped the attacks from causing financial or reputational damage. Yet, he contends that expecting users to be the frontline defense against rampant phishing, pharming, whaling, and other credential-based attacks increasingly taking place over out-of-band channels is a recipe for disaster.

“Of course, train your staff. The human element is the weakest link here. But don’t rely on training alone — or technology alone — to protect the organization. What you’re looking for is a balance,” de Montmorency says.

Protecting out-of-band usage

As attackers go after employees over out-of-band channels such as Zoom, Slack, and Teams, user education and technical controls must follow. Enterprises need visibility into what their users are clicking, downloading, uploading, or linking to in what can be dozens of collaborative platforms, he adds. His company uses SafeGuard Cyber to monitor east-west traffic and detect malicious activities being attempted over these channels. The tool is agentless and only requires a single user sign-on to access their platforms of choice, making it frictionless to users, which is one of the key criteria in getting user buy-in to necessary security controls.

A recent report by the University of Wisconsin-Madison dissects the many ways business collaboration platforms (BCPs) can be leveraged for app-to-app delegation attacks, user-to-app interaction hijacking, and app-to-user confidentiality violations. In their tests, researchers were able to send arbitrary emails on behalf of victims, merge code requests, launch fake video calls with loose security settings, steal private messages, and maintain a malicious presence even after app uninstallation. Using homemade scraping tools, researchers estimated that 1,493 (61%) of the 2,460 Slack apps analyzed and 427 (33%) of 1,304 Microsoft Teams apps analyzed were vulnerable to delegation attacks. Additionally, 1,266 (51%) of Slack apps use slash commands, which are vulnerable to both user-to-app and app-to-user violations.

These trends show how social engineering attacks have moved to where employees are working over collaborative platforms. That means security awareness education and security controls need to work in tandem to protect users, their devices, and their credentials from these ever-evolving threats, no matter where they’re working from. And they need to do so in a way that enables collaboration, rather than blocking it, according to experts.

Security as a matter of psychology

“In the past, we’ve treated our employees as extensions of our computers. We would lay down the law: ‘No, you cannot go to this site or use this social platform.’ But people are still human, and their number one priority is getting their work done, so they will go around draconian rules and blocking if they have to,” says Russell Spitler, co-founder and CEO of Nudge Security, which recently commissioned a study of 900 users titled Debunking the Stupid User Myth.

In the study, 67% of participants said they would not comply with these types of blocking interventions and would instead look for a workaround if blocking got in the way of doing their jobs. Inversely, the report states that if organizations empower their users to make more educated decisions, they could achieve two times the compliance rate of blocking intervention.

“If you can remove the psychological reasons not to do something with respect to security, you can hope the person will be an ally,” says Dr. Aaron Kay, a professor of management and psychology and neuroscience at Duke University who advised on the report. 

Kristofer Laxdal, CISO of Mississauga, Ontario-based Prophix, a financial performance management platform with more than 500 employees, agrees that punitive training methods and restrictive security controls cause more harm than good because phishing, pharming, whaling, and other social engineering attacks aimed at leveraging privileged access are only getting worse. He also feels the industry is at an inflection point where security can improve user experience rather than getting in the way.

Make security frictionless for users

For example, Laxdal cites zero trust and passwordless computing as technological controls that take the onus off the employee while improving security and reducing risk. Onerous controls such as screen locks and timeouts, on the other hand, are causing users to covertly install mouse jigglers and keypress generators to keep their screens from locking up when their computers are idle because they don’t have time to keep logging in, he adds.

“Security practitioners have thrown in layers of technical controls and security awareness training. Yet, phishing, IP theft, pharming, and ransomware have gone on far too long,” he says. “So, while there is indeed a human component to security, the controls themselves need to be frictionless, because users are tired of inputting multiple logins to multiple systems. They’re also experiencing multifactor authentication fatigue. Technical controls need to be put in place to remove the issues that the end user is experiencing.”

Know thy users

The best place to start is understanding employee roles, resources, and access habits, Laxdal says. For example, financial workers should understand the specific risks to business accounts and social engineering attempts such as BEC scams that may target them. Development departments will have different risk areas to focus on; for example, their IP on hosted servers or malware hidden in public open-source libraries. HR, on the other hand, is dealing with PII (financial, banking, and healthcare information) that shouldn’t be shared over any channel, particularly given that anyone can impersonate a CEO and request files or transfers.

“All of these vectors are being used globally against information assets and are overwhelmingly credential-based attacks that are perpetrated through phishing. Users need to understand why and be part of that discussion with real-world examples,” Spitler explains. “Sit down with your employees, ask about their typical day and access requirements. And understand each functional area of the business so you can design controls and training for their business.”

His company uses Ninjio, which combines behavioral analytics with security awareness training done in an anime style that, he says, is made compelling and engaging by the use of real-world hacks to show users what could happen if a user takes a dangerous action. Nudge Security also deploys analytics to identify when users stray off their approved platforms, engages the user by asking questions, and even assists them to securely set up the new platform with two-factor authentication and other secure enablement.

“If you want happy, compliant workers, they need to have agency in their decisions and feel that they are trusted and respected by their organizations,” adds Kay. “Deliver messages that facilitate that feeling. Be transparent about the reasons for your programs. And don’t frustrate users by ordering them to do things they don’t understand.”

Trust degrades over time

But just what is trust? What do you base it on, and how do you apply that to users, asks Winn Schwartau, an early infowar and security awareness pioneer. “I trust them not to steal from me based on what criteria? I trust that they’ve got the best interest of my company, based on what? I trust them not to click on malicious links or attachments based upon what? Because I trained them? I’ve been in this business a long time and I can tell you training doesn’t move the bar as much as people would like to believe.”

Schwartau believes that user education and training should be part of a holistic security program that starts with critical asset identification physically and electronically, limiting user access to only those systems they need, and “detection-in-depth” monitoring for abuse. He suggests employing a high-speed OODA loop for examining user behavior and to align technical controls. 

This, he adds, can help determine a soft initial level of trust. But scammers and social engineers continue to hone their tactics, so CISOs need to adapt and uplevel their user education and technical controls. Over time, trust goes down and risk increases in any environment, which he mathematically details in his book Analogue Network Security. “In many ways, it’s a total paradox,” he adds. “Employees are your greatest asset. And yet, employees are also your weakest link.”

Copyright © 2022 IDG Communications, Inc.


Improving Cyber Hygiene with Multi-Factor Authentication and Cyber Awareness



Using multi-factor authentication (MFA) is one of the key components of an organization’s Identity and Access Management (IAM) program to maintain a strong cybersecurity posture. Having multiple layers to verify users is important, but MFA fatigue is also real and can be exploited by hackers.

Enabling MFA for all accounts is a best practice for all organizations, but the specifics of how it is implemented are significant because attackers are developing workarounds. That said, when done correctly – and with the right pieces in place – MFA is an invaluable tool in the cyber toolbox and a key piece of proper cyber hygiene. This is a primary reason why MFA was a key topic for this year’s cybersecurity awareness month. For leaders and executives, the key is to ensure employees are trained to understand the importance of the security tools – like MFA – available to them while also making the process easy for them.

MFA is still an important piece of the cyber hygiene puzzle

Multi-factor authentication (MFA) helps to provide extra layers of security throughout your organization. This quick verification serves as a tool that allows organizations to confirm identity before allowing users to access company data. This can look like prompting employees to use mobile tokens and/or to enter a specific code they’ve been texted or emailed before logging on to certain devices and websites. 

MFA fatigue is rising, and hackers are noticing

Even though MFA should be a basic requirement these days, it’s not a foolproof tactic. Attackers are finding new ways around this security layer with what are called MFA fatigue attacks.

As employees try to access work applications, they are often prompted to verify their identity in some way established by the IT security team. This typically involves notifications to their smartphones. Anyone who has been trying to complete their work in a timely manner knows the irritation of constantly having to take action on these notifications. This is the basis of the MFA fatigue attack.

Attackers excel at finding ways to gain entry to their chosen target, and they seem to know a good bit about human psychology. After compromising an employee’s credentials, attackers now spam that employee with MFA authorization requests – sometimes dozens of times in an hour – until the victim gets so irritated that they approve the request in their authentication app. Or the victim may assume there is a system malfunction and accept the notification just to make the prompts stop.

A simple, effective MFA strategy for long-term success

Getting MFA right is a balance between being strict enough so that the security measure maintains integrity and lax enough so that employees don’t grow tired of it and get tripped up.

If the policy is too strict, frequently invalidated sessions leave employees irritated and convinced that MFA prompts are excessive. If it is too lenient, authenticated sessions can last too long, IP changes won’t result in new prompts, new MFA device enrollments won’t result in alerts, and enterprises run the risk of not being informed when, for instance, an authentication token that has already passed the MFA check gets stolen.

Most employees have never heard of MFA fatigue attacks, so they don’t know to look for or report them. In order to cope, organizations need to educate employees to make sure they’re prepared to spot these attacks.

Organizations need to place controls on MFA to lower the potential for MFA abuse. The most effective control is to not use methods that allow simple approvals of notifications – a scenario that contributes to MFA fatigue. All approvals should mandate responses that prove the user has the authenticated device. Number matching, for instance, is a technique that requires the user to enter a series of numbers they can see on their screen.
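The exact number-matching flow varies by vendor, but the core idea can be sketched in a few lines of Python (a minimal illustration, not any particular vendor’s implementation): the login screen displays a short number, and the approval is only valid if the user types that same number into the authenticator.

```python
import hmac
import secrets

def issue_challenge() -> str:
    """Server side: display this number on the login screen; the user
    must type the same number into their authenticator app to approve."""
    return f"{secrets.randbelow(100):02d}"

def verify_response(displayed: str, typed: str) -> bool:
    # A reflexive "Approve" tap cannot answer this prompt: an attacker
    # spamming push requests never sees the number shown on the victim's
    # own login screen, so blind approvals fail the match.
    return hmac.compare_digest(displayed, typed)
```

The constant-time comparison is incidental here; the security property comes from requiring information only the legitimate login screen displays.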

There’s also the effective one-time passcode (OTP) method of approval where the user gets information from the authentication request and has to enter it for verification. This requires a little more work on the user’s part, but it helps reduce the risk of MFA fatigue.
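The most common OTP scheme is TOTP (RFC 6238), which derives short-lived codes from a shared secret and the current time. A minimal sketch using only the Python standard library:

```python
import base64
import hmac
import struct
import time

def hotp(key: bytes, counter: int, digits: int = 6) -> str:
    """HMAC-based one-time password (RFC 4226)."""
    digest = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = digest[-1] & 0x0F  # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret_b32: str, period: int = 30) -> str:
    """Time-based variant (RFC 6238): the counter is the current
    30-second window, so codes expire quickly."""
    key = base64.b32decode(secret_b32, casefold=True)
    return hotp(key, int(time.time()) // period)
```

Because the user must read the current code and enter it, a push-spam campaign alone cannot complete the login.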

Another useful tool is an endpoint privilege management solution, which helps to stop the theft of cookies. If attackers get a hold of those cookies, they can bypass MFA controls. This solution is a robust layer in the protection of user credentials.

It’s important to set thresholds and send alerts to the SOC if certain thresholds are exceeded. The SOC can use user behavior analytics to create context-based triggers that alert the security team if any unusual behavior occurs. It can also prohibit user authentication from dubious IP addresses.
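A threshold of this kind can be sketched simply (a simplified illustration; real SOC tooling would feed a SIEM and add behavioral context such as geolocation and device history):

```python
from collections import defaultdict, deque

class MfaPromptMonitor:
    """Track MFA push prompts per user and flag bursts that suggest
    an MFA fatigue attack in progress."""

    def __init__(self, max_prompts: int = 5, window_seconds: int = 600):
        self.max_prompts = max_prompts
        self.window = window_seconds
        self._events = defaultdict(deque)

    def record_prompt(self, user: str, ts: float) -> bool:
        """Record one prompt; return True if the threshold is exceeded
        and the SOC should be alerted."""
        q = self._events[user]
        q.append(ts)
        while q and ts - q[0] > self.window:  # age out old events
            q.popleft()
        return len(q) > self.max_prompts
```

A sliding window like this catches the "dozens of prompts in an hour" pattern described above while ignoring normal login activity.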

Outsmarting cyber criminals with the right security solutions and training

MFA prevents unauthorized access from cyber criminals, yet they have found a way to circumvent it by using its own premise of trust and authentication against users. That’s why organizations must use a two-pronged approach of educating employees about MFA fatigue attacks and setting up appropriate guardrails to reduce the likelihood of these attacks succeeding. Solutions like Fortinet’s FortiAuthenticator, FortiToken and FortiTrust Identity further protect organizations and strengthen their security posture. At the same time, cybersecurity awareness training, like Fortinet’s Security Awareness and Training service, can help ensure that employees are aware of all threat methods, as well as the importance of properly using all the security tools available to them.

Find out more about how Fortinet’s Training Advancement Agenda (TAA) and Training Institute programs—including the NSE Certification program, Academic Partner program, and Education Outreach program—are increasing access to training to help solve the cyber skills gap.



Researchers found security pitfalls in IBM’s cloud infrastructure



Security researchers recently probed IBM Cloud’s database-as-a-service infrastructure and found several security issues that granted them access to the internal server used to build database images for customer deployments. The demonstrated attack highlights some common security oversights that can lead to supply chain compromises in cloud infrastructure.

Developed by researchers from security firm Wiz, the attack combined a privilege escalation vulnerability in the IBM Cloud Databases for PostgreSQL service with plaintext credentials scattered around the environment and overly permissive internal network access controls that allowed for lateral movement inside the infrastructure.

PostgreSQL is an appealing target in cloud environments

Wiz’s audit of the IBM Cloud Databases for PostgreSQL service was part of a larger research project that analyzed PostgreSQL deployments across major cloud providers that offer this database engine as part of their managed database-as-a-service solutions. Earlier this year, the Wiz researchers also found and disclosed vulnerabilities in the PostgreSQL implementations of Microsoft Azure and the Google Cloud Platform (GCP).

The open-source PostgreSQL relational database engine has been in development for over 30 years with an emphasis on stability, high availability, and scalability. However, this complex piece of software was not designed with a permission model suitable for multi-tenant cloud environments where database instances need to be isolated from each other and from the underlying infrastructure.

PostgreSQL has powerful features through which administrators can alter the server file system and even execute code through database queries, but these operations are unsafe and need to be restricted in shared cloud environments. Meanwhile, other admin operations such as database replication, creating checkpoints, installing extensions and event triggers need to be available to customers for the service to be functional. That’s why cloud service providers (CSPs) had to come up with workarounds and make modifications to PostgreSQL’s permission model to enable these capabilities even when customers only operate with limited accounts.

Privilege escalation through SQL injection

While analyzing IBM Cloud’s PostgreSQL implementation, the Wiz researchers looked at the Logical Replication mechanism that’s available to users. This feature was implemented using several database functions, including one called create_subscription that is owned and executed by a database superuser called ibm.

When they inspected the code of this function, the researchers noticed an SQL injection vulnerability caused by improper sanitization of the arguments passed to it. This meant they could pass arbitrary SQL queries to the function, which would then execute those queries as the ibm superuser. The researchers exploited this flaw via the PostgreSQL COPY statement to execute arbitrary commands on the underlying virtual machine that hosted the database instance and opened a reverse shell.
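Wiz has not published the vulnerable function’s source, but the underlying pattern — attacker-controlled input concatenated into SQL that then runs with the owner’s elevated privileges — can be sketched with a hypothetical Python/SQLite example (the table names and function are illustrative, not IBM’s actual code):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE subscriptions (name TEXT);
    CREATE TABLE secrets (value TEXT);
    INSERT INTO secrets VALUES ('superuser-only data');
""")

def create_subscription_unsafe(name: str) -> None:
    # UNSAFE: user input is concatenated into the statement, which
    # executes with the privileges of the function's owner.
    conn.executescript(f"INSERT INTO subscriptions VALUES ('{name}');")

def create_subscription_safe(name: str) -> None:
    # Parameterized queries keep the input as data, never as SQL.
    conn.execute("INSERT INTO subscriptions VALUES (?)", (name,))

# Crafted input closes the intended statement and injects a second one.
payload = "x'); DELETE FROM secrets; --"
create_subscription_unsafe(payload)
```

After the unsafe call, the injected `DELETE` has run with the owner’s rights; the parameterized version stores the same payload harmlessly as a literal string.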

With a shell on the Linux system, they began reconnaissance to understand their environment: listing running processes, checking active network connections, inspecting the contents of the /etc/passwd file, which lists the system’s users, and running a port scan on the internal network to discover other servers. The broad port scan caught the attention of the IBM security team, who reached out to the Wiz team to ask about their activities.

“After discussing our work and sharing our thoughts with them, they kindly gave us permission to pursue our research and further challenge security boundaries, reflecting the organization’s healthy security culture,” the Wiz team said.

Stored credentials lead to supply chain attack

The gathered information, such as environment variables, told the researchers they were in a Kubernetes (K8s) pod container. After searching the file system, they found a K8s API access token stored locally in a file under /var/run/secrets/. The API token allowed them to gather more information about the K8s cluster, but it turned out that all the pods were associated with their account and were operating under the same namespace. But this wasn’t a dead end.

K8s is a container orchestration system used for software deployment where containers are usually deployed from images — prebuilt packages that contain all the files needed for a container and its preconfigured services to operate. These images are normally stored on a container registry server, which can be public or private. In IBM Cloud’s case, it was a private container registry that required authentication.

The researchers used the API token to read the configurations of the pods in their namespace and found the access key for four different internal container registries in those configuration files. The description of this newly found key in IBM Cloud’s identity and access management (IAM) API suggested it had both read and write privileges to the container registries, which would have given the researchers the ability to overwrite existing images with rogue ones.

However, it turned out that the key description was inaccurate and they could only download images. This level of access had security implications, but it did not pose a direct threat to other IBM Cloud customers, so the researchers pushed forward.

Container images can contain a lot of sensitive information that’s used during deployment and later gets deleted, including source code, internal scripts referencing additional services in the infrastructure, as well as credentials needed to access them. Therefore, the researchers decided to download all images from the registry service and use an automated tool to scan them for secrets, such as credentials and API tokens.

“In order to comprehensively scan for secrets, we unpacked the images and examined the combination of files that made up each image,” the researchers said. “Container images are based on one or more layers; each may inadvertently include secrets. For example, if a secret exists in one layer but is deleted from the following layer, it would be completely invisible from within the container. Scanning each layer separately may therefore reveal additional secrets.”
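A toy version of such a per-layer scanner might look like the following (the patterns are illustrative; production scanners like the one Wiz used ship far larger rule sets):

```python
import re
from pathlib import Path

# A few illustrative secret patterns; real tools use hundreds of rules.
SECRET_PATTERNS = [
    re.compile(rb"(?i)password\s*[=:]\s*\S+"),
    re.compile(rb"AKIA[0-9A-Z]{16}"),              # AWS-style access key ID
    re.compile(rb"(?i)api[_-]?key\s*[=:]\s*\S+"),
]

def scan_layer(layer_dir: str) -> list:
    """Scan every file in one unpacked image layer for secret-like
    strings. Each layer must be scanned separately: a later layer may
    delete a file whose secret still lives in the earlier layer."""
    hits = []
    for path in Path(layer_dir).rglob("*"):
        if not path.is_file():
            continue
        data = path.read_bytes()
        for pattern in SECRET_PATTERNS:
            for match in pattern.finditer(data):
                hits.append((str(path), match.group(0).decode(errors="replace")))
    return hits
```

Scanning the unpacked layers rather than the assembled container filesystem is exactly what surfaces the “deleted in a later layer” secrets the researchers describe.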

The JSON manifest files of container images have a “history” section that lists historical commands that were executed during the build process of every image. In several such files, the researchers found commands that had passwords passed to them as command line arguments. These included passwords for an IBM Cloud internal FTP server and a build artifact repository.
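Mining those history entries is easy to automate. A minimal sketch (the field names follow the Docker/OCI image configuration format; the flag patterns are illustrative):

```python
import json
import re

# Flags that commonly carry credentials on build command lines.
PASSWORD_FLAG = re.compile(r"(?i)(--password|--pass|-p)[ =]\S+")

def flag_history_secrets(image_config_json: str) -> list:
    """Return 'created_by' build commands from an image config whose
    history entries appear to embed a password argument."""
    config = json.loads(image_config_json)
    return [
        entry.get("created_by", "")
        for entry in config.get("history", [])
        if PASSWORD_FLAG.search(entry.get("created_by", ""))
    ]
```

Running a check like this against a private registry’s images is a cheap way to find the kind of FTP and artifact-repository passwords the researchers recovered.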

Finally, the researchers tested if they could access those servers from within their container and it turned out that they could. This overly permissive network access combined with the extracted credentials allowed them to overwrite arbitrary files in the build artifact repository that’s used by the automated IBM Cloud build process to create container images. Those images are then used in customer deployments, opening the door to a supply chain attack.

“Our research into IBM Cloud Databases for PostgreSQL reinforced what we learned from other cloud vendors, that modifications to the PostgreSQL engine effectively introduced new vulnerabilities to the service,” the researchers said. “These vulnerabilities could have been exploited by a malicious actor as part of an extensive exploit chain culminating in a supply-chain attack on the platform.”

Lessons for other organizations

While all of these issues have already been privately reported to and fixed by the IBM Cloud team, they are not unique to IBM. According to the Wiz team, the “scattered secrets” issue is common across all cloud environments.

Automated build and deployment workflows often leave secrets behind in various places such as configuration files, Linux bash history, journal files and so on that developers forget to wipe when deployment is complete. Furthermore, some developers accidentally upload their whole .git and CircleCI configuration files to production servers. Forgotten secrets commonly found by the Wiz team include cloud access keys, passwords, CI/CD credentials and API access tokens.

Another prevalent issue that played a critical role in the IBM Cloud attack is the lack of strict access controls between production servers and internal CI/CD systems. This often allows attackers to move laterally and gain a deeper foothold into an organization’s infrastructure.

Finally, private container registries can provide a wealth of information to attackers that goes beyond credentials. They can reveal information about critical servers inside the infrastructure or can contain code that reveals additional vulnerabilities. Organizations should make sure their container registry solutions enforce proper access controls and scoping, the Wiz team said.
