Russia-linked cyberattacks on Ukraine: A timeline

On Saturday night, January 15, Microsoft shook the cybersecurity world with a report that destructive wiper malware had penetrated dozens of government, non-profit, and IT organizations in Ukraine. This news capped a week of mounting apprehension that cyberattacks in Ukraine could presage or accompany a real-world Russian military invasion of the country.

Since January 11, several possibly interconnected developments related to Russia’s cybersecurity posture paint a complex and unclear portrait of what’s happening in Ukraine. The following is a timeline of these increasingly high-stakes developments:

January 11: U.S. releases cybersecurity advisory

The Cybersecurity and Infrastructure Security Agency (CISA), the Federal Bureau of Investigation (FBI), and the National Security Agency (NSA) released a joint cybersecurity advisory (CSA) providing an overview of Russian state-sponsored cyber operations. It covered commonly observed tactics, techniques, and procedures. The advisory also provided detection actions, incident response guidance, and mitigations.

CISA also recommended that network defenders review CISA’s Russia Cyber Threat Overview and Advisories page for more information on Russian state-sponsored malicious cyber activity. The agencies seemingly released the CSA as part of an occasional series of joint cybersecurity advisories.

January 13 to 14: Ukrainian websites defaced

Following a breakdown of diplomatic talks between Russia and the West intended to forestall a threatened Russian invasion of Ukraine, hackers launched defacement attacks that brought down dozens of Ukrainian government websites, including the Ministry of Foreign Affairs, the Ministry of Education, and others. The hackers posted a message that said, “Be afraid and expect the worst.”

The message also warned Ukrainians that “All your personal data has been sent to a public network. All data on your computer is destroyed and cannot be recovered,” and raised historical grievances between Poland and Ukraine. Ukraine’s State Bureau of Investigations (SBI) press service said that no data was stolen in the attack.

Although Ukraine did not attribute the attacks to Russia definitively, the European Union’s chief diplomat Josep Borrell hinted that Russia was the culprit. Serhiy Demedyuk, deputy secretary of Ukraine’s national security and defense council, preliminarily pinned the attacks on a hacker group linked to Belarusian intelligence known as UNC1151. Belarus is a close ally of Russia.

The European Union (EU) condemned the attacks and said it stands “ready to provide additional, direct, technical assistance to Ukraine to remediate this attack and further support Ukraine against any destabilizing actions, including by further building up its resilience against hybrid and cyber threats.” NATO Secretary-General Jens Stoltenberg said that his cyber experts in Brussels were exchanging information with their Ukrainian counterparts on the malicious cyber activities and would sign an agreement on enhanced cyber cooperation.

January 14: Russia takes down REvil ransomware group

In what appeared to be a surprise demonstration of U.S.-Russian collaboration, Russia’s FSB domestic intelligence service said that it dismantled the ransomware crime group REvil at the request of the United States in an operation that resulted in the arrest of the group’s members. The announcement was made even as the attacks on the Ukrainian websites were underway.

A senior administration official notably stopped short of confirming that the arrests were made at the administration’s request. The official did say they were the product of the “president’s commitment to diplomacy and the channel that he established and the work that has been underway in sharing information and in discussing the need for Russia to take action.”

January 15: Microsoft reveals discovery of malware on Ukrainian websites

Microsoft said it observed destructive malware disguised as ransomware in systems belonging to dozens of Ukrainian government agencies and organizations that work closely with the Ukrainian government. Microsoft didn’t specify which agencies and organizations were targeted but said they “provide critical executive branch or emergency response functions.” The targets also included an IT firm that manages websites for public and private sector clients, including government agencies whose websites were recently defaced.

If activated by the attacker, the wiper malware would render the infected computer system inoperable. Microsoft’s Threat Intelligence Center (MSTIC) issued a technical post outlining the malware, saying that while it was designed to look like ransomware, it lacked a ransom recovery mechanism and was intended to be destructive rather than to obtain a ransom.

MSTIC found no notable associations between the observed activity, tracked as DEV-0586, and other known activity groups. Microsoft has implemented protections to detect this malware family, known as WhisperGate, via Microsoft Defender Antivirus and Microsoft Defender for Endpoint.

January 16: Ukraine blames Russia for attack on Ukrainian websites

Ukraine’s Ministry of Digital Transformation said that all the evidence points to the fact that Russia is behind the recent attacks on Ukraine’s government websites. “The latest cyberattack is one of the manifestations of Russia’s hybrid war against Ukraine, which has been going on since 2014,” the ministry said.

Speaking on the CBS news program Face the Nation, Jake Sullivan, U.S. National Security Advisor, said the attacks on Ukrainian websites are “part of the Russian playbook, so it would not surprise me one bit if it ended up being attributed to Russia.” Separately, NATO made good on its promise to sign a deal to bolster its cybersecurity support for Ukraine.

Unanswered questions regarding Russia’s cyber activity in Ukraine

Many unknowns surround this flurry of Russia-related cyber activity. These are the key unanswered questions:

Who are the attackers? The unknowns start with the absence of solid attribution of who the Ukraine attackers are. Despite the claims, no definitive research that confirms attribution has been released.

Is the REvil take-down related to the cyber incidents in Ukraine? It’s also unclear if the timing of Russia’s arrests of the REvil gang members is connected with the cyber incidents in Ukraine. Chris Painter, the former coordinator for cyber issues at the U.S. State Department, tells CSO that “just like the U.S., Russia can walk and chew gum at the same time, but the timing is very interesting and could be seen as a message saying, look, we can cooperate on things, but if you sanction us further, then all bets are off, and you can forget about it.”

He also doubts that Russia can sustain its cooperation on cybersecurity matters. “Russia doesn’t have a great track record in cooperating on even criminal cases for many years, and I’ve dealt with them for many years on this.”

Are the website defacements and the destructive malware that Microsoft discovered linked? Ukraine’s Demedyuk said the defacement attackers were “just a cover for more destructive actions that were taking place behind the scenes and the consequences of which we will feel in the near future.” Some cybersecurity experts speculate that there is such a connection but that the attacks were poorly executed in a superficial “combined arms operation” between two different actors, leading to synchronization failures.

Painter thinks it’s possible that the malicious actor launched the defacement attacks to tie up precious cybersecurity resources while surreptitiously launching the more severe malware attacks. “It could be that they did the defacements to divert resources so that it’d be harder for people to respond to the more serious stuff,” he says.

Are more attacks coming? Another unknown is whether the defacement and malware attacks are just opening salvos in what might be more disruptive cyber incidents in Ukraine and elsewhere. Microsoft warns that “It is possible more organizations have been infected with this malware, and the number of impacted organizations could grow.” Painter says, “I suspect that there probably are other intrusions if this is a prelude to a physical attack, or even if not.”

All organizations should immediately investigate

Concerns over malicious Russian activity are not limited to Ukraine. In its technical advisory, Microsoft said, “We strongly encourage all organizations to immediately conduct a thorough investigation and to implement defenses using the information provided in this post.”

Painter advises all organizations to heed the joint alert by CISA, the NSA, and the FBI. Cybersecurity personnel should “absolutely follow the guidance and the warnings that CISA and FBI put out,” he says. “The government is saying this one’s particularly important.”

Copyright © 2022 IDG Communications, Inc.

When blaming the user for a security breach is unfair – or just wrong

In his career in IT security leadership, Aaron de Montmorency has seen a lot — an employee phished on their first day by someone impersonating the CEO, an HR department head asked to change the company’s direct deposit information by a bogus CFO, not to mention multichannel criminal engagement with threat actors attacking from social media to email to SMS text.

In these cases, the users almost fell for it, but something didn’t feel right. So, they manually verified by calling the executives who were being impersonated. De Montmorency, director of IT, security, and compliance with Tacoma, Washington-based Elevate Health, praises the instincts that stopped the attacks from causing financial or reputational damage. Yet, he contends that expecting users to be the frontline defense against rampant phishing, pharming, whaling, and other credential-based attacks increasingly taking place over out-of-band channels is a recipe for disaster.

“Of course, train your staff. The human element is the weakest link here. But don’t rely on training alone — or technology alone — to protect the organization. What you’re looking for is a balance,” de Montmorency says.

Protecting out-of-band usage

As attackers go after employees over out-of-band channels such as Zoom, Slack, and Teams, user education and technical controls must follow. Enterprises need visibility into what their users are clicking, downloading, uploading, or linking to in what can be dozens of collaborative platforms, he adds. His company uses SafeGuard Cyber to monitor east-west traffic and detect malicious activities being attempted over these channels. The tool is agentless and only requires a single user sign-on to access their platforms of choice, making it frictionless to users, which is one of the key criteria in getting user buy-in to necessary security controls.

A recent report by researchers at the University of Wisconsin-Madison dissects the many ways business collaboration platforms (BCPs) can be leveraged for app-to-app delegation attacks, user-to-app interaction hijacking, and app-to-user confidentiality violations. In their tests, researchers were able to send arbitrary emails on behalf of victims, merge code requests, launch fake video calls with loose security settings, steal private messages, and maintain a malicious presence even after app uninstallation. Using homemade scraping tools, the researchers estimated that 1,493 (61%) of the 2,460 Slack apps analyzed and 427 (33%) of 1,304 Microsoft Teams apps analyzed were vulnerable to delegation attacks. Additionally, 1,266 (51%) of Slack apps use slash commands, which are vulnerable to both user-to-app and app-to-user violations.

These trends show how social engineering attacks have moved to where employees are working over collaborative platforms. That means security awareness education and security controls need to work in tandem to protect users, their devices, and their credentials from these ever-evolving threats, no matter where they’re working from. And they need to do so in a way that enables collaboration, rather than blocking it, according to experts.

Security as a matter of psychology

“In the past, we’ve treated our employees as extensions of our computers. We would lay down the law: ‘No, you cannot go to this site or use this social platform.’ But people are still human, and their number one priority is getting their work done, so they will go around draconian rules and blocking if they have to,” says Russell Spitler, co-founder and CEO of Nudge Security, which recently commissioned a study of 900 users titled Debunking the Stupid User Myth.

In the study, 67% of participants said they would not comply with these types of blocking interventions and would instead look for a workaround if blocking got in the way of doing their jobs. Conversely, the report states that if organizations empower their users to make more educated decisions, they could achieve twice the compliance rate of blocking interventions.

“If you can remove the psychological reasons not to do something with respect to security, you can hope the person will be an ally,” says Dr. Aaron Kay, a professor of management and psychology and neuroscience at Duke University who advised on the report. 

Kristofer Laxdal, CISO of Mississauga, Ontario-based Prophix, a financial performance management platform with more than 500 employees, agrees that punitive training methods and restrictive security controls cause more harm than good because phishing, pharming, whaling, and other social engineering attacks aimed at leveraging privileged access are only getting worse. He also feels the industry is at an inflection point where security can improve user experience rather than getting in the way.

Make security frictionless for users

For example, Laxdal cites zero trust and passwordless computing as technological controls that take the onus off the employee while improving security and reducing risk. Onerous controls such as screen locks and timeouts, on the other hand, are causing users to covertly install mouse jigglers and keypress generators to keep their screens from locking up when their computers are idle because they don’t have time to keep logging in, he adds.

“Security practitioners have thrown in layers of technical controls and security awareness training. Yet, phishing, IP theft, pharming, and ransomware have gone on far too long,” he says. “So, while there is indeed a human component to security, the controls themselves need to be frictionless, because users are tired of inputting multiple logins to multiple systems. They’re also experiencing multifactor authentication fatigue. Technical controls need to be put in place to remove the issues that the end user is experiencing.”

Know thy users

The best place to start is understanding employee roles, resources, and access habits, Laxdal says. For example, financial workers should understand the specific risks to business accounts and social engineering attempts such as BEC scams that may target them. Development departments will have different risk areas to focus on; for example, their IP on hosted servers or malware hidden in public open-source libraries. HR, on the other hand, is dealing with PII (financial, banking, and healthcare information) that shouldn’t be shared over any channel, particularly given that anyone can impersonate a CEO and request files or transfers.

“All of these vectors are being used globally against information assets and are overwhelmingly credential-based attacks that are perpetrated through phishing. Users need to understand why and be part of that discussion with real-world examples,” Spitler explains. “Sit down with your employees, ask about their typical day and access requirements. And understand each functional area of the business so you can design controls and training for their business.”

His company uses Ninjio, which combines behavioral analytics with security awareness training delivered in an anime style. The training, he says, is made compelling and engaging by using real-world hacks to show users what could happen if they take a dangerous action. Nudge Security also deploys analytics to identify when users stray off their approved platforms, engages the user by asking questions, and even helps them securely set up the new platform with two-factor authentication and other secure enablement.

“If you want happy, compliant workers, they need to have agency in their decisions and feel that they are trusted and respected by their organizations,” adds Kay. “Deliver messages that facilitate that feeling. Be transparent about the reasons for your programs. And don’t frustrate users by ordering them to do things they don’t understand.”

Trust degrades over time

But just what is trust? What do you base it on, and how do you apply that to users, asks Winn Schwartau, an early infowar and security awareness pioneer. “I trust them not to steal from me based on what criteria? I trust that they’ve got the best interest of my company, based on what? I trust them not to click on malicious links or attachments based upon what? Because I trained them? I’ve been in this business a long time and I can tell you training doesn’t move the bar as much as people would like to believe.”

Schwartau believes that user education and training should be part of a holistic security program that starts with identifying critical assets, both physical and electronic, limiting user access to only those systems they need, and monitoring for abuse with “detection-in-depth.” He suggests employing a high-speed OODA (observe, orient, decide, act) loop for examining user behavior and aligning technical controls.

This, he adds, can help determine a soft initial level of trust. But scammers and social engineers continue to hone their tactics, so CISOs need to adapt and uplevel their user education and technical controls. Over time, trust goes down and risk increases in any environment, which he mathematically details in his book Analogue Network Security. “In many ways, it’s a total paradox,” he adds. “Employees are your greatest asset. And yet, employees are also your weakest link.”

Copyright © 2022 IDG Communications, Inc.

Improving Cyber Hygiene with Multi-Factor Authentication and Cyber Awareness

Using multi-factor authentication (MFA) is one of the key components of an organization’s Identity and Access Management (IAM) program for maintaining a strong cybersecurity posture. Having multiple layers to verify users is important, but MFA fatigue is also real and can be exploited by hackers.

Enabling MFA for all accounts is a best practice for all organizations, but the specifics of how it is implemented are significant because attackers are developing workarounds. That said, when done correctly – and with the right pieces in place – MFA is an invaluable tool in the cyber toolbox and a key piece of proper cyber hygiene. This is a primary reason why MFA was a key topic for this year’s cybersecurity awareness month. For leaders and executives, the key is to ensure employees are trained to understand the importance of the security tools – like MFA – available to them while also making the process easy for them.

MFA is still an important piece of the cyber hygiene puzzle

Multi-factor authentication (MFA) helps to provide extra layers of security throughout your organization. This quick verification serves as a tool that allows organizations to confirm identity before allowing users to access company data. This can look like prompting employees to use mobile tokens and/or to enter a specific code they’ve been texted or emailed before logging on to certain devices and websites. 
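To make the “extra layer” concrete, the following is a minimal sketch of verifying a time-based one-time password (TOTP), one common form of the codes described above, using the open-source pyotp library. The enrollment step, variable names, and login flow are illustrative assumptions, not any particular product’s implementation.

    import pyotp

    # Generated once at MFA enrollment and stored with the user's account
    # (illustrative; real systems protect this secret carefully).
    user_totp_secret = pyotp.random_base32()
    totp = pyotp.TOTP(user_totp_secret)

    # The user's authenticator app computes the same code from the shared secret.
    code_from_authenticator_app = totp.now()

    # At login, the server checks the submitted code; valid_window=1 tolerates
    # small clock drift between the server and the user's device.
    if totp.verify(code_from_authenticator_app, valid_window=1):
        print("Second factor accepted")
    else:
        print("Second factor rejected")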

MFA fatigue is rising, and hackers are noticing

Even though MFA should be a basic requirement these days, it’s not a foolproof tactic. Attackers are finding new ways around this security layer with what are called MFA fatigue attacks.

As employees try to access work applications, they are often prompted to verify their identity in some way established by the IT security team. This typically involves notifications to their smartphones. Anyone who has been trying to complete their work in a timely manner knows the irritation of constantly having to take action on these notifications. This is the basis of the MFA fatigue attack.

Attackers excel at finding ways to gain entry to their chosen target, and they seem to know a good bit about human psychology. Attackers who have compromised an employee’s credentials now spam that employee with MFA authorization requests – sometimes dozens of times in an hour – until the employee gets so irritated that they approve a request in their authentication app. Or the employee might assume there is a system malfunction and accept the notification just to make the prompts stop.

A simple, effective MFA strategy for long-term success

Getting MFA right is a balance: the security measure must be strict enough to maintain its integrity, yet lenient enough that employees don’t grow tired of it or get tripped up by it.

If sessions are invalidated too frequently, employees may grow irritated or come to see MFA prompts as excessive. On the other hand, if the policy is too lenient, authenticated sessions can last too long, IP changes won’t trigger new prompts, new MFA device enrollments won’t generate alerts, and enterprises run the risk of not being informed when, for instance, an authentication token that has already passed the MFA check gets stolen.

Most employees have never heard of MFA fatigue attacks, so they don’t know to look for or report them. To counter these attacks, organizations need to educate employees and make sure they’re prepared to spot them.

Organizations need to place controls on MFA to lower the potential for MFA abuse. The most effective control is to avoid methods that allow simple approvals of notifications – a scenario that contributes to MFA fatigue. All approvals should mandate responses that prove the user has the authenticated device. Number matching, for instance, is a technique that requires the user to enter into their authenticator app a number displayed on the login screen.
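A minimal sketch of the number-matching idea follows, assuming a simple two-digit challenge and an in-app entry step; the flow is illustrative and not any vendor’s actual push protocol.

    import secrets

    def new_push_challenge():
        # The short number displayed on the login screen alongside the push prompt.
        return f"{secrets.randbelow(100):02d}"

    def approve_push(challenge_shown, number_entered_in_app):
        # The approval only counts if the user types the number shown on the
        # login screen into their authenticator app, proving they initiated
        # the login rather than blindly tapping "approve".
        return secrets.compare_digest(challenge_shown, number_entered_in_app)

    challenge = new_push_challenge()
    print(f"Login screen displays: {challenge}")
    print("Approved" if approve_push(challenge, challenge) else "Denied")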

There’s also the effective one-time passcode (OTP) method of approval where the user gets information from the authentication request and has to enter it for verification. This requires a little more work on the user’s part, but it helps reduce the risk of MFA fatigue.

Another useful tool is an endpoint privilege management solution, which helps stop the theft of session cookies. If attackers get hold of those cookies, they can bypass MFA controls. This solution is a robust layer in the protection of user credentials.

It’s important to set thresholds on authentication activity and send alerts to the security operations center (SOC) when they are exceeded. The SOC can use user behavior analytics to create context-based triggers that alert the security team if any unusual behavior occurs. It can also prohibit user authentication from dubious IP addresses.
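As a rough illustration of such a threshold trigger, the sketch below counts MFA prompts per user in a sliding window and raises a SOC alert when the count gets unusually high. The window size, threshold, and alerting hook are assumptions for illustration, not a specific product’s configuration.

    from collections import defaultdict
    from datetime import datetime, timedelta

    PROMPT_THRESHOLD = 10        # MFA prompts per user...
    WINDOW = timedelta(hours=1)  # ...within this sliding window

    prompt_log = defaultdict(list)

    def alert_soc(user, count):
        # Stand-in for a real paging or ticketing integration.
        print(f"SOC alert: {count} MFA prompts for {user} in the last hour")

    def record_mfa_prompt(user, now=None):
        now = now or datetime.utcnow()
        events = [t for t in prompt_log[user] if now - t <= WINDOW]
        events.append(now)
        prompt_log[user] = events
        if len(events) > PROMPT_THRESHOLD:
            alert_soc(user, len(events))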

Outsmarting cyber criminals with the right security solutions and training

MFA prevents unauthorized access by cyber criminals, yet they have found a way to circumvent it by using its own premise of trust and authentication against users. That’s why organizations must use a two-pronged approach of educating employees about MFA fatigue attacks and setting up appropriate guardrails to reduce the likelihood of these attacks succeeding. Solutions like Fortinet’s FortiAuthenticator, FortiToken, and FortiTrust Identity further protect organizations and strengthen their security posture. At the same time, cybersecurity awareness training, like Fortinet’s Security Awareness and Training service, can help ensure that employees are aware of all threat methods, as well as the importance of properly using all the security tools available to them.

Find out more about how Fortinet’s Training Advancement Agenda (TAA) and Training Institute programs—including the NSE Certification program, Academic Partner program, and Education Outreach program—are increasing access to training to help solve the cyber skills gap.

Copyright © 2022 IDG Communications, Inc.

Researchers found security pitfalls in IBM’s cloud infrastructure

Security researchers recently probed IBM Cloud’s database-as-a-service infrastructure and found several security issues that granted them access to the internal server used to build database images for customer deployments. The demonstrated attack highlights some common security oversights that can lead to supply chain compromises in cloud infrastructure.

Developed by researchers from security firm Wiz, the attack combined a privilege escalation vulnerability in the IBM Cloud Databases for PostgreSQL service with plaintext credentials scattered around the environment and overly permissive internal network access controls that allowed for lateral movement inside the infrastructure.

PostgreSQL is an appealing target in cloud environments

Wiz’s audit of the IBM Cloud Databases for PostgreSQL service was part of a larger research project that analyzed PostgreSQL deployments across major cloud providers that offer this database engine as part of their managed database-as-a-service solutions. Earlier this year, the Wiz researchers also found and disclosed vulnerabilities in the PostgreSQL implementations of Microsoft Azure and the Google Cloud Platform (GCP).

The open-source PostgreSQL relational database engine has been in development for over 30 years with an emphasis on stability, high availability, and scalability. However, this complex piece of software was not designed with a permission model suitable for multi-tenant cloud environments, where database instances need to be isolated from each other and from the underlying infrastructure.

PostgreSQL has powerful features through which administrators can alter the server file system and even execute code through database queries, but these operations are unsafe and need to be restricted in shared cloud environments. Meanwhile, other admin operations such as database replication, creating checkpoints, installing extensions and event triggers need to be available to customers for the service to be functional. That’s why cloud service providers (CSPs) had to come up with workarounds and make modifications to PostgreSQL’s permission model to enable these capabilities even when customers only operate with limited accounts.

Privilege escalation through SQL injection

While analyzing IBM Cloud’s PostgreSQL implementation, the Wiz researchers looked at the Logical Replication mechanism that’s available to users. This feature was implemented using several database functions, including one called create_subscription that is owned and executed by a database superuser called ibm.

When they inspected the code of this function, the researchers noticed an SQL injection vulnerability caused by improper sanitization of the arguments passed to it. This meant they could pass arbitrary SQL queries to the function, which would then execute those queries as the ibm superuser. The researchers exploited this flaw via the PostgreSQL COPY statement to execute arbitrary commands on the underlying virtual machine that hosted the database instance and opened a reverse shell.
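The general shape of such a flaw can be sketched as follows. The function name create_subscription comes from the research described above, but its real signature and the exact payload were not published, so the call and injected argument below are hypothetical and simplified. COPY ... TO PROGRAM is a standard PostgreSQL feature that lets a superuser run operating-system commands, which is why escalating to the superuser context matters.

    import psycopg2

    # Connection details are placeholders for a limited customer account.
    conn = psycopg2.connect("postgresql://limited_user:password@db.example:5432/demo")
    cur = conn.cursor()

    # The argument is passed safely as a parameter here; the weakness is that
    # the (hypothetical) vulnerable function concatenates it into dynamic SQL
    # executed as its superuser owner, so the embedded statement runs with
    # superuser privileges.
    injected_arg = "x'; COPY (SELECT '') TO PROGRAM 'id > /tmp/proof'; --"

    cur.execute("SELECT create_subscription(%s, %s);", ("sub_name", injected_arg))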

With a shell on the Linux system, they started doing reconnaissance to understand their environment: listing running processes, checking active network connections, inspecting the contents of the /etc/passwd file, which lists the system’s users, and running a port scan on the internal network to discover other servers. The broad port scan caught the attention of the IBM security team, who reached out to the Wiz team to ask about their activities.

“After discussing our work and sharing our thoughts with them, they kindly gave us permission to pursue our research and further challenge security boundaries, reflecting the organization’s healthy security culture,” the Wiz team said.

Stored credentials lead to supply chain attack

The gathered information, such as environment variables, told the researchers they were in a Kubernetes (K8s) pod container. After searching the file system, they found a K8s API access token stored locally in a file called /var/run/secrets/kubernetes.io/serviceaccount/token. The API token allowed them to gather more information about the K8s cluster, but it turned out that all the pods they could see were associated with their own account and operated under the same namespace. This wasn’t a dead end, however.

K8s is a container orchestration system used for software deployment in which containers are usually deployed from images — prebuilt packages that contain all the files needed for a container and its preconfigured services to operate. These images are normally stored on a container registry server, which can be public or private. In IBM Cloud’s case, it was a private container registry that required authentication.

The researchers used the API token to read the configurations of the pods in their namespace and found the access key for four different internal container registries in those configuration files. The description of this newly found key in IBM Cloud’s identity and access management (IAM) API suggested it had both read and write privileges to the container registries, which would have given the researchers the ability to overwrite existing images with rogue ones.
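A sketch of how a mounted service-account token can be used from inside a pod follows. The token path and API endpoint are standard Kubernetes conventions; the idea that registry credentials show up in pod configurations reflects the specific environment described above rather than anything guaranteed by Kubernetes itself.

    import requests

    SA_DIR = "/var/run/secrets/kubernetes.io/serviceaccount"
    with open(f"{SA_DIR}/token") as f:
        token = f.read()
    with open(f"{SA_DIR}/namespace") as f:
        namespace = f.read()

    # List the pods visible to this service account in its own namespace.
    resp = requests.get(
        f"https://kubernetes.default.svc/api/v1/namespaces/{namespace}/pods",
        headers={"Authorization": f"Bearer {token}"},
        verify=f"{SA_DIR}/ca.crt",
    )

    for pod in resp.json().get("items", []):
        spec = pod["spec"]
        # Pod specs can expose more than intended: image pull secrets,
        # container environment variables, mounted volumes, and so on.
        print(pod["metadata"]["name"],
              spec.get("imagePullSecrets"),
              [c.get("env") for c in spec.get("containers", [])])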

However, it turned out that the key description was inaccurate and they could only download images. This level of access had security implications, but it did not pose a direct threat to other IBM Cloud customers, so the researchers pushed forward.

Container images can contain a lot of sensitive information that’s used during deployment and later gets deleted, including source code, internal scripts referencing additional services in the infrastructure, as well as credentials needed to access them. Therefore, the researchers decided to download all images from the registry service and use an automated tool to scan them for secrets, such as credentials and API tokens.

“In order to comprehensively scan for secrets, we unpacked the images and examined the combination of files that made up each image,” the researchers said. “Container images are based on one or more layers; each may inadvertently include secrets. For example, if a secret exists in one layer but is deleted from the following layer, it would be completely invisible from within the container. Scanning each layer separately may therefore reveal additional secrets.”
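A minimal version of that per-layer scan can be sketched as follows, assuming the image has already been exported to a local tar archive (for example with docker save); the secret patterns are deliberately tiny compared with the rule sets real scanners use.

    import json
    import re
    import tarfile

    SECRET_PATTERNS = [
        re.compile(rb"password\s*[=:]\s*\S+", re.IGNORECASE),
        re.compile(rb"api[_-]?key\s*[=:]\s*\S+", re.IGNORECASE),
    ]

    def scan_image_tar(path):
        with tarfile.open(path) as image:
            manifest = json.load(image.extractfile("manifest.json"))
            # Scan every layer separately, so a secret that a later layer
            # "deletes" is still found in the layer that introduced it.
            for layer_name in manifest[0]["Layers"]:
                with tarfile.open(fileobj=image.extractfile(layer_name)) as layer:
                    for member in layer.getmembers():
                        if not member.isfile():
                            continue
                        data = layer.extractfile(member).read()
                        if any(p.search(data) for p in SECRET_PATTERNS):
                            print(f"possible secret in {layer_name}:{member.name}")

    scan_image_tar("image.tar")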

The JSON manifest files of container images have a “history” section that lists historical commands that were executed during the build process of every image. In several such files, the researchers found commands that had passwords passed to them as command line arguments. These included passwords for an IBM Cloud internal FTP server and a build artifact repository.
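The same exported archive also contains the image’s config JSON, whose history section records those build commands. A short sketch of checking it, again with an illustrative pattern rather than a production rule set:

    import json
    import re
    import tarfile

    with tarfile.open("image.tar") as image:
        manifest = json.load(image.extractfile("manifest.json"))
        config = json.load(image.extractfile(manifest[0]["Config"]))

    for entry in config.get("history", []):
        command = entry.get("created_by", "")
        # Build steps that received a password or token on the command line
        # stay visible here even if the file that used them was later deleted.
        if re.search(r"(password|passwd|token)\S*[=\s]\S+", command, re.IGNORECASE):
            print("credential-looking build step:", command)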

Finally, the researchers tested if they could access those servers from within their container and it turned out that they could. This overly permissive network access combined with the extracted credentials allowed them to overwrite arbitrary files in the build artifact repository that’s used by the automated IBM Cloud build process to create container images. Those images are then used in customer deployments, opening the door to a supply chain attack.

“Our research into IBM Cloud Databases for PostgreSQL reinforced what we learned from other cloud vendors, that modifications to the PostgreSQL engine effectively introduced new vulnerabilities to the service,” the researchers said. “These vulnerabilities could have been exploited by a malicious actor as part of an extensive exploit chain culminating in a supply-chain attack on the platform.”

Lessons for other organizations

While all of these issues have already been privately reported to and fixed by the IBM Cloud team, they are not unique to IBM. According to the Wiz team, the “scattered secrets” issue is common across all cloud environments.

Automated build and deployment workflows often leave secrets behind in places that developers forget to wipe when deployment is complete, such as configuration files, Linux bash history, and journal files. Furthermore, some developers accidentally upload their whole .git and CircleCI configuration files to production servers. Forgotten secrets commonly found by the Wiz team include cloud access keys, passwords, CI/CD credentials, and API access tokens.

Another prevalent issue that played a critical role in the IBM Cloud attack is the lack of strict access controls between production servers and internal CI/CD systems. This often allows attackers to move laterally and gain a deeper foothold into an organization’s infrastructure.

Finally, private container registries can provide a wealth of information to attackers that goes beyond credentials. They can reveal information about critical servers inside the infrastructure or can contain code that reveals additional vulnerabilities. Organizations should make sure their container registry solutions enforce proper access controls and scoping, the Wiz team said.

Copyright © 2022 IDG Communications, Inc.
