Kaspersky official blog https://www.kaspersky.com/blog The Official Blog from Kaspersky covers information to help protect you against viruses, spyware, hackers, spam & other forms of malware.

Zoomers at work: how scammers target this demographic | Kaspersky official blog https://www.kaspersky.com/blog/polyworking-genz-scams/54010/ Thu, 31 Jul 2025 12:36:36 +0000

The stereotype of Gen Z as lazy, uncommitted employees averse to hard work, and prone to job-hopping is quite common. But the statistics tell a different story. Nearly half of Zoomers juggle multiple gigs: a full-time job, freelancing, and various side hustles. And cybercriminals have identified these polyworking young professionals as convenient targets.

Our experts dug into this trend and uncovered some non-obvious threats. This article explores how Gen Z can navigate their multi-job lifestyles without putting their cybersecurity at risk.

More apps, more problems

The core issue stems from the sheer number of corporate apps and accounts Gen Z has to juggle. Think about it: Zoom for one job, Slack for another, and Notion for tasks across the board. And the more applications they use, the larger the attack surface for cybercriminals. Scammers constantly send phishing emails that convincingly impersonate employers, and distribute malware disguised as business software. They can even send fake assignments, pretending to be your boss.

From mid-2024 to mid-2025, Kaspersky experts recorded six million attacks involving fake collaboration platforms. Most often, attackers imitated the “golden trio” of corporate applications: Zoom, Microsoft Excel, and Microsoft Outlook.

Here’s how it might play out: an attacker sends an email seemingly from Zoom asking you to update the app. The email contains a link that leads to a phishing site mimicking the real Zoom page. This fake site then immediately downloads a bogus application to your device. The imposter app could then steal your contacts’ data or even gain access to your entire work environment — the potential scenarios are numerous.

Phishing site urging the user to install a “Zoom update”

How scammers are deceiving job-seeking Gen Z

If you’ve ever seen a message in a neighborhood chat like, “URGENT: remote work, $60 an hour!” — it’s likely a scam. But these days scammers have grown much more sophisticated. They’re posting what look like legitimate job openings on popular job platforms, detailing the terms so thoroughly that the positions appear genuine. In reality, even the most well-crafted job posting can turn out to be completely fake.

Fake SMM job posting

Cybercriminals may even conduct fake interviews to make their schemes appear more convincing. One common form of extortion targets Gen Z through fake “interviews”, where victims are told to log out of their personal Apple ID and access a purported “company” account. If the victim complies, the scammers activate Lost Mode, effectively bricking the applicant’s iPhone. Naturally, they then demand a hefty sum to unlock it.

Freelance opportunities also deserve a close look. The search for freelance work is often less formal than traditional job hunting: all communication happens through messaging apps, and payments might even come from a client’s personal account. It’s incredibly easy to imitate this casual communication style, and scammers exploit this. In a worst-case scenario, instead of landing a new gig, you could end up with a bricked phone, malware infection, compromised personal accounts, or even losing all your money to the “client”.

It’s impossible to list every single red flag when you’re looking for a new job, but here are the main things to watch out for.

  • Urgency and easy money. If someone wants something done yesterday and is promising a ton of cash for it, you’re likely dealing with scammers.
  • Third-party payments. Stick to payment methods you trust.
  • Sign-in/sign-out requests. Be extremely wary if someone asks you to sign in or out of any accounts — especially your personal Apple ID.
  • Paid training. If they’re asking you to pay for training upfront with the promise of reimbursement later — simply ignore them.
  • Excessive personal data. Applying to be a dog walker, but they’re asking for copies of every page of your passport? No way, José.

Why Gen Z is being targeted, and how to fight back

Some companies have adopted BYOD (bring your own device) policies, asking employees to use their personal tech for work. The problem is, these are often the same devices used for everything else: gaming, downloading files from the internet, and chatting with friends. Do we even need to say that downloading torrents on the laptop used for work is a dubious idea?

Many Gen Zers also make a costly mistake when using a large number of applications: they use one password for everything. Just a single data breach (and they happen all the time!), and cybercriminals can gain access to all your messaging apps, calendars, email clients, and other work-specific applications. Of course, coming up with and remembering complex passwords every time is a challenge. That’s why we recommend using a password manager that can generate strong, unique passwords, and securely store them for you.
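To give a sense of what “strong and unique” means in practice, here’s a minimal sketch of the kind of generation a password manager automates for you. The alphabet, length, and function name are illustrative choices, not recommendations from this article.

```typescript
// Minimal sketch: generating a strong, unique password with a CSPRNG (Node.js).
// The alphabet and length are illustrative; a password manager also solves the
// harder problem of storing a different password for every account.
import { randomInt } from "node:crypto";

const ALPHABET =
  "ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz23456789!@#$%^&*";

function generatePassword(length = 20): string {
  let password = "";
  for (let i = 0; i < length; i++) {
    password += ALPHABET[randomInt(ALPHABET.length)]; // cryptographically random pick
  }
  return password;
}

console.log(generatePassword()); // a fresh password for every account
```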

What else can you do to avoid falling victim to cybercriminals while you’re job searching?

  • Boost your cybersecurity knowledge by playing Case 404.
  • Always enable two-factor authentication wherever possible. By the way, you can store your 2FA tokens in our password manager.
  • Avoid downloading apps or updates from suspicious websites.
  • Install Kaspersky Premium on your personal devices. This application can prevent you from opening phishing links, and significantly improve your personal security.

Cybersecurity cheat-sheet for polyworkers:

Passkey support in business applications | Kaspersky official blog https://www.kaspersky.com/blog/passkey-enterprise-issues-and-threats/54003/ Wed, 30 Jul 2025 14:29:56 +0000

The transition to passkeys promises organizations a cost-effective path toward robust employee authentication, increased productivity, and regulatory compliance. We’ve already covered all the pros and cons of this business solution in a separate, in-depth article. However, the success of the transition — and even its feasibility — really hinges on the technical details and implementation specifics across numerous corporate systems.

Passkey support in identity management systems

Before tackling organizational hurdles and drafting policies, you’ll have to determine if your core IT systems are ready for the switch to passkeys.

Microsoft Entra ID (Azure AD) fully supports passkeys, letting admins set them as the primary sign-in method. For hybrid deployments with on-premises resources, Entra ID can generate Kerberos tickets (TGTs), which your Active Directory domain controller can then process.

However, Microsoft doesn’t yet offer native passkey support for RDP, VDI, or on-premises-only AD sign-ins. That said, with a few workarounds, organizations can store passkeys on a hardware token like a YubiKey. This kind of token can simultaneously support both the traditional PIV (smart cards) technology and FIDO2 (passkeys). There are also third-party solutions for these scenarios, but you’ll need to evaluate how using them impacts your overall security posture and regulatory compliance.

Good news for Google Workspace and Google Cloud users: they offer full passkey support.

Popular identity management systems like Okta, Ping, Cisco Duo, and RSA IDplus also support FIDO2 and all major forms of passkeys.
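Under the hood, all of these systems register a passkey through the same WebAuthn call in the user’s browser. The sketch below is illustrative only: the relying-party ID, user details, and challenge handling are placeholders, and in a real deployment the identity provider generates the challenge server-side and verifies the attestation response.

```typescript
// Browser-side passkey registration sketch (all values are placeholders).
// Assumes the page is served from https://sso.corp.example and that `challenge`
// and `userId` were fetched from the identity provider beforehand.
async function registerPasskey(challenge: Uint8Array, userId: Uint8Array) {
  const credential = (await navigator.credentials.create({
    publicKey: {
      rp: { id: "sso.corp.example", name: "Example Corp SSO" },
      user: { id: userId, name: "j.doe@corp.example", displayName: "Jane Doe" },
      challenge, // random bytes issued by the server for this registration
      pubKeyCredParams: [
        { type: "public-key", alg: -7 },   // ES256
        { type: "public-key", alg: -257 }, // RS256
      ],
      authenticatorSelection: {
        residentKey: "required",      // discoverable credential, i.e. a passkey
        userVerification: "required", // unlocked with biometrics or a PIN
      },
      attestation: "none",
    },
  })) as PublicKeyCredential;

  // The response goes back to the identity provider, which stores the public key.
  return credential.response as AuthenticatorAttestationResponse;
}
```

Where the resulting private key ends up (Windows Hello, iCloud Keychain, Google Password Manager, or a hardware token) is decided by the platform and the authenticatorSelection settings, which is exactly where the compatibility questions discussed in the next section come from.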

Passkey support on client devices

We have a detailed post on the subject. All modern operating systems from Google, Apple, and Microsoft support passkeys. However, if your company uses Linux, you’ll likely need extra tools, and overall support is still limited.

Also, while all major operating systems might appear to offer full support on the surface, there’s a lot of variety in how passkeys are stored, and that can lead to compatibility headaches. Mixed environments, such as Windows computers paired with Android smartphones, are the most problematic. You might create a passkey on one device and then find you can’t access it on another. For companies with a strictly managed device fleet, there are a couple of ways to tackle this. For example, you could have employees generate a separate passkey for each company device they use. This means a bit more initial setup: employees will need to go through the same process of creating a passkey on every device. However, once that’s done, signing in takes minimal time. Plus, if they lose one device, they won’t be completely locked out of their work data.
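One practical way to gauge device readiness during an inventory or a pilot is a quick browser-side capability check like the sketch below. It only reports whether the current device has a built-in (platform) authenticator; it says nothing about how the resulting passkey would be synced across that user’s other devices.

```typescript
// Browser-side check: can this device create and use passkeys locally?
async function checkPasskeySupport(): Promise<void> {
  if (!("PublicKeyCredential" in window)) {
    console.log("WebAuthn isn't supported in this browser at all.");
    return;
  }
  const hasPlatformAuthenticator =
    await PublicKeyCredential.isUserVerifyingPlatformAuthenticatorAvailable();
  console.log(
    hasPlatformAuthenticator
      ? "Built-in authenticator available (Windows Hello, Touch ID, screen lock, etc.)."
      : "No built-in authenticator: a hardware key or another device will be needed.",
  );
}

checkPasskeySupport();
```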

Another option is to use a company-approved password manager to store and sync passkeys across all employees’ devices. This is also a must for companies using Linux computers, as Linux can’t natively store passkeys. Just a heads-up: this approach might add some complexity when it comes to regulatory compliance audits.

If you’re looking for a solution with almost no sync or cross-platform issues, hardware passkeys like the YubiKey are the way to go. The catch is that they can be significantly more expensive to deploy and manage.

Passkey support in business applications

The ideal scenario for bringing passkeys into your business apps is to have all your applications sign in through single sign-on (SSO). That way, you only need to implement passkey support in your corporate SSO solution, such as Entra ID or Okta. However, if some of your critical business applications don’t support SSO, or if that support isn’t part of your contract (which, unfortunately, happens), you’ll have to issue individual passkeys for users to sign in to each separate system. Hardware tokens can store anywhere from 25 to 100 passkeys, so your main extra cost here would be on the administrative side.

Popular business systems that fully support passkeys include Adobe Creative Cloud, AWS, GitHub, Google Workspace, HubSpot, Office 365, Salesforce, and Zoho. Some SAP systems also support passkeys.

Employee readiness

Rolling out passkeys means getting your team up to speed regardless of the scenario. You don’t want them scratching their heads trying to figure out new interfaces. The goal is for everyone to feel confident using passkeys on every single device. Here are the key things your employees will need to understand.

  • Why passkeys beat passwords (they’re much more secure, faster to sign in with, and don’t need to be rotated)
  • How biometrics work with passkeys (the biometric data never leaves the device, and isn’t stored or processed by the employer)
  • How to get their very first passkey (for example, Microsoft has a Temporary Access Pass feature, and third-party IAM systems often send an onboarding link; the process needs to be thoroughly documented, though)
  • What to do if their device doesn’t recognize their passkey
  • What to do if they lose a device (sign in from another device that has its own passkey, or use an OTP, perhaps given to them in a sealed envelope for just such an emergency)
  • How to sign in to work systems from other computers (if the company’s policies permit it)
  • What a passkey-related phishing attempt might look like

Passkeys are no silver bullet

Moving to passkeys doesn’t mean your cybersecurity team can just cross identity threats off their risk list. Sure, it makes things tougher for attackers, but they can still do the following:

  • Target systems that haven’t switched to passkeys
  • Go after systems that still have fallback login methods like passwords and OTPs
  • Steal authentication tokens from devices infected with infostealers
  • Use special techniques to bypass passkey protections

While it’s impossible to phish the passkey itself, attackers can set up fake web infrastructure to trick a victim into authenticating and validating a malicious session on a corporate service.
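The credential itself stays out of reach because the browser scopes every passkey to its relying-party ID. The sketch below shows the sign-in call with an illustrative rpId; on a look-alike domain the browser either rejects the mismatched rpId outright or simply has no matching passkey to offer.

```typescript
// Browser-side sign-in sketch; the rpId and challenge handling are illustrative.
async function signInWithPasskey(challenge: Uint8Array) {
  try {
    // The browser only honors an rpId that matches the page's own origin,
    // so a phishing page on another domain can't request these credentials.
    const assertion = (await navigator.credentials.get({
      publicKey: {
        challenge, // issued by the legitimate server for this sign-in attempt
        rpId: "sso.corp.example",
        userVerification: "required",
      },
    })) as PublicKeyCredential;
    return assertion;
  } catch (err) {
    console.error("No usable passkey for this site:", err);
    return null;
  }
}
```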

A recent example of this kind of AiTM attack was documented in the U.S. The victim was lured to a fake authentication page for a corporate service, where the attackers first phished their username and password, and then went after the session confirmation by having them scan a QR code. In that incident the security policies were configured correctly, so scanning the QR code did not result in successful authentication. But because cross-device sign-in with passkeys works this way, the attackers are betting that somewhere it will be configured incorrectly, with no check that the device performing the authentication and the device storing the key are actually in physical proximity.

Ultimately, switching to passkeys requires detailed policy configuration. This includes both authentication policies (such as disabling passwords when a passkey is available, or banning physical tokens from unknown vendors) and monitoring policies (such as logging passkey registrations or cross-device scenarios from suspicious locations).

What to do if you get a phishing email | Kaspersky official blog https://www.kaspersky.com/blog/how-to-deal-with-email-phishing/53990/ Tue, 29 Jul 2025 13:24:53 +0000

Phishing emails typically end up in the spam folder, because today’s security systems easily recognize most of them. However, these systems aren’t completely reliable: some phishing messages still reach the inbox, while some bona fide email messages land in the junk folder instead. This article explains how to detect phishing emails, and what to do about them.

Signs of a phishing email

There are several markers that are widely believed to indicate a message sent by scammers. Below are some examples.

  • Catchy subject line. A phishing message will likely represent a fraction of all the mail landing in your inbox. This is why scammers usually try to make their subject lines stand out by using trigger words like “urgent”, “prize”, “cash”, “giveaway”, or similar, designed to prompt you to open the message as quickly as possible.
  • Call to action. You can bet the message will encourage you to do at least one of the following: click a link, pay for something you don’t really need, or check the details in an attachment. The attackers’ primary goal is to lure victims away from their email and into unsafe spaces where they’re tricked into spending money or surrendering access to their accounts.
  • Expiring timer. The message might feature a timer that says, “Follow this link. It expires in 24 hours.” All these tricks are just nonsense. Scammers want to rush you so you start to panic and stop thinking carefully about your money.
  • Mistakes in the email body. In the past year, there’s been an increase in phishing emails sent in multiple languages at once, often with some odd mistakes.
  • Suspicious sender address. If you live in, say, Brazil, and you get an email message from an Italian address, that’s a red flag and a good reason to completely ignore its contents.

An impersonal greeting like “Dear %username%” used to be a sure sign of a phishing email, but scammers have moved on from that. Targeted messages addressing the victim by name are becoming increasingly common. Ignore those too.
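To make a couple of these signs concrete, here’s a toy sketch that flags trigger words in the subject line and a sender domain that doesn’t match the brand the message claims to come from. The word list, brand-to-domain map, and sample addresses are all illustrative; think of it as a picture of the heuristics, not a replacement for a real spam filter.

```typescript
// Toy heuristic illustrating two of the phishing signs described above.
// The trigger words and the brand-to-domain map are illustrative examples.
const TRIGGER_WORDS = ["urgent", "prize", "cash", "giveaway"];
const BRAND_DOMAINS: Record<string, string[]> = {
  zoom: ["zoom.us"],
  ozon: ["ozon.ru"],
};

function phishingSigns(subject: string, fromAddress: string): string[] {
  const signs: string[] = [];
  const lowerSubject = subject.toLowerCase();

  if (TRIGGER_WORDS.some((word) => lowerSubject.includes(word))) {
    signs.push("trigger word in the subject line");
  }

  const senderDomain = fromAddress.split("@").pop()?.toLowerCase() ?? "";
  for (const [brand, domains] of Object.entries(BRAND_DOMAINS)) {
    const claimsBrand = lowerSubject.includes(brand);
    const domainMatches = domains.some(
      (domain) => senderDomain === domain || senderDomain.endsWith(`.${domain}`),
    );
    if (claimsBrand && !domainMatches) {
      signs.push(`mentions ${brand} but was sent from ${senderDomain}`);
    }
  }
  return signs;
}

// "URGENT: your Ozon prize" sent from an unrelated foreign address: two red flags.
console.log(phishingSigns("URGENT: your Ozon prize", "info@language-school.example"));
```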

What to do if you get a phishing email

If you’ve managed to spot one using the signs described above, well done — you’re awesome! You can go ahead and delete it without even opening it. And if you want to do your good deed for the day, report the phishing attempt via Outlook or Gmail to make this world a tiny bit safer. We understand that spotting phishing in your email right away isn’t easy — so here’s a short list of don’ts to help with detection.

Don’t open attachments

Scammers can hide malware inside various types of email attachments: images, HTML files, and even voice messages. Here’s a recent example: you get an email with an attachment that appears to be a voice message with the SVG extension, but that’s typically an image format… To listen to the recording, you have to open the attachment, and what do you know — you find yourself on a phishing site that masquerades as Google Voice! And no, you don’t hear any audio. Instead, you’re redirected to another website where you’ll be prompted to enter the login and password for your email account. If you’re interested in learning more, here’s a Securelist blog post on this.

It seems that voice messages are sent more often through messengers than by email

This and other stories just go to show you shouldn’t open attachments. Any attachments. At all. Especially if you weren’t expecting the message in the first place.

Don’t open links

This is a golden rule that will help keep your money and accounts safe. A healthy dose of caution is exactly what everyone needs when using the internet. Let’s take a look at this phishing message.

An "exciting win-win", but only the scammers benefit

An “exciting win-win”, but only the scammers benefit

Does this look odd? It’s written in two languages: Russian and Dutch. It shows the return address of a language school in the Netherlands, yet it references the Russian online marketplace Ozon. The message body congratulates the recipient: “You are one of our few lucky clients who get a chance to compete for uncredible prizes.” “Competing for prizes” is easy: just click the link, which has been thoughtfully included twice.

A week later, another message landed in the same inbox. Again, it came in two languages: Italian and Russian. This one came from a real Italian email address associated with the archive of Giovanni Korompay’s works. The artist passed away in 1988. No, this wasn’t an offer to commemorate the painter. Most likely, hackers breached the archive’s email account and are now sending phishing mail about soccer betting that pretends to be from that source. All of that looks rather fishy.

Another email in two languages

These messages have a lot in common. One thing we didn’t mention is how phishing links are disguised. Scammers deliberately use the TinyURL link shortener to make links look as legitimate as possible. But the truth is, a link that starts with tinyurl.com could point to anything: from the Kaspersky Daily blog to something malicious.
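One practical way to see where a shortened link really leads is to resolve it without visiting it. The sketch below relies on the fetch API available in Node.js 18+ with redirects disabled; the tinyurl.com address is a placeholder, not an actual link from these emails.

```typescript
// Resolve a shortened link without following it to the destination (Node.js 18+).
// The URL below is a placeholder; substitute the link you want to inspect.
async function peekRedirect(shortUrl: string): Promise<string | null> {
  const response = await fetch(shortUrl, { redirect: "manual" });
  // Link shorteners answer with a 301/302 status and a Location header that
  // points at the real target; we read the header instead of going there.
  return response.headers.get("location");
}

peekRedirect("https://tinyurl.com/some-shortened-code").then((target) => {
  console.log("This link actually points to:", target ?? "(no redirect)");
});
```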

Don’t believe everything you read

Scammers come up with all sorts of tricks: pretending to be Nigerian princes, sending fake Telegram Premium subscriptions, or congratulating people on winning fake giveaways. Every week, I get email with text like this: “Congratulations! You can claim your personal prize.” Sometimes they even add the amount of the supposed winnings to make sure I open the message. And once, I did.

The scammers were too lazy to shorten this link

Inside, it’s all by the book: a flashy headline, congratulations, and calls to click the link. To make it seem even more convincing, the email is supposedly signed by a representative from the “Prize Board of the Fund”. What fund? What prize board? And how could I possibly have won something I never even entered? That part is unclear.

You may have noticed the unusual design of this message: it clearly stands out from the previous examples. To add credibility, the scammers used Google Forms, Google’s official service for surveys and polls. The scheme is a simple one: they create a survey, set it up to send response copies to the email addresses of their future victims, and collect their answers. Read Beware of Google Forms bearing crypto gifts to find out what happens if you open a link like that.

The bottom line

Following these rules will protect you from many — but not all — of the tricks that attackers might come up with. That’s why we recommend trusting a reliable solution: Kaspersky Premium. Every year, our products undergo testing by the independent Austrian organization AV-Comparatives to evaluate their ability to detect phishing threats. We described the testing procedure in a post a year ago. In June 2025, Kaspersky Premium for Windows successfully met the certification criteria again and received the Approved certificate, a mark of quality in protecting users from phishing.

Important clarification: at Kaspersky, we use a unified stack of security technologies, which is what the experts tested. This means the Kaspersky Premium for Windows award also applies to our other products for home users (Kaspersky Standard, Kaspersky Plus, and Kaspersky Premium) and for businesses (such as Kaspersky Endpoint Security for Business and Kaspersky Small Office Security).

More about phishing:

Are passkeys enterprise-ready? | Kaspersky official blog https://www.kaspersky.com/blog/passkey-enterprise-readiness-pros-cons/53986/ Mon, 28 Jul 2025 15:21:28 +0000

Every major tech giant touts passkeys as an effective, convenient password replacement that can end phishing and credential leaks. The core idea is simple: you sign in with a cryptographic key that’s stored securely in a special hardware module on your device, and you unlock that key with biometrics or a PIN. We’ve already covered the current state of passkeys for home users in detail across two articles (on terminology and basic use cases, and on more complex scenarios). However, businesses have entirely different requirements and approaches to cybersecurity. So, how good are passkeys and FIDO2 WebAuthn in a corporate environment?

Reasons for companies to switch to passkeys

As with any large-scale migration, making the switch to passkeys requires a solid business case. On paper, passkeys tackle several pressing problems at once:

  • Lower the risk of breaches caused by stolen legitimate credentials — phishing resistance is the top advertised benefit of passkeys.
  • Strengthen defenses against other identity attacks, such as brute-forcing and credential stuffing.
  • Help with compliance. In many industries, regulators mandate the use of robust authentication methods for employees, and passkeys usually qualify.
  • Reduce costs. If a company opts for passkeys stored on laptops or smartphones, it can achieve a high level of security without the extra expense of USB devices, smart cards, and their associated management and logistics.
  • Boost employee productivity. A smooth, efficient authentication process saves every employee time daily and reduces failed login attempts. Switching to passkeys usually goes hand in hand with getting rid of the universally loathed regular password changes.
  • Lighten the helpdesk workload by decreasing the number of tickets related to forgotten passwords and locked accounts. (Of course, other types of issues pop up instead, such as lost devices containing passkeys.)

How widespread is passkey adoption?

A FIDO Alliance report suggests that 87% of surveyed organizations in the US and UK have either already transitioned to using passkeys or are currently in the process of doing so. However, a closer look at the report reveals that this impressive figure also includes the familiar enterprise options like smart cards and USB tokens for account access. Although some of these are indeed based on WebAuthn and passkeys, they’re not without their problems. They’re quite expensive and create an ongoing burden on IT and cybersecurity teams related to managing physical tokens and cards: issuance, delivery, replacement, revocation, and so on. As for the heavily promoted solutions based on smartphones and even cloud sync, 63% of respondents reported using such technologies, but the full extent of their adoption remains unclear.

Companies that transition their entire workforce to the new tech are few and far between. The process can get both organizationally challenging and just plain expensive. More often than not, the rollout is done in phases. Although pilot strategies may vary, companies typically start with those employees who have access to intellectual property (39%), IT system admins (39%), and C-suite executives (34%).

Potential obstacles to passkey adoption

When an organization decides to transition to passkeys, it will inevitably face a host of technical challenges. These alone could warrant their own article. But for this piece, let’s stick to the most obvious issues:

  • Difficulty (and sometimes outright impossibility) of migrating to passkeys when using legacy and isolated IT systems — especially on-premises Active Directory
  • Fragmentation of passkey storage approaches within the Apple, Google, and Microsoft ecosystems, complicating the use of a single passkey across different devices
  • Additional management difficulties if the company allows the use of personal devices (BYOD), or, conversely, has strict prohibitions such as banning Bluetooth
  • Ongoing costs for purchasing or leasing tokens and managing physical devices
  • Specific requirement of non-syncable hardware keys for high-assurance-with-attestation scenarios (and even then, not all of them qualify — the FIDO Alliance provides specific recommendations on this)
  • Necessity to train employees and address their concerns about the use of biometrics
  • Necessity to create new, detailed policies for IT, cybersecurity, and the helpdesk to address issues related to fragmentation, legacy systems, and lost devices (including issues related to onboarding and offboarding procedures)

What do regulators say about passkeys?

Despite all these challenges, the transition to passkeys may be a foregone conclusion for some organizations if required by a regulator. Major national and industry regulators generally support passkeys, either directly or indirectly:

The NIST SP 800-63 Digital Identity Guidelines permit the use of “syncable authenticators” (a definition that clearly implies passkeys) for Authenticator Assurance Level 2, and device-bound authenticators for Authenticator Assurance Level 3. Thus, the use of passkeys confidently checks the boxes during ISO 27001, HIPAA, and SOC 2 audits.

In its commentary on DSS 4.0.1, the PCI Security Standards Council explicitly names FIDO2 as a technology that meets its criteria for “phishing-resistant authentication”.

The EU Payment Services Directive 2 (PSD2) is written in a technology-agnostic manner. However, it requires Strong Customer Authentication (SCA) and the use of Public Key Infrastructure based devices for important financial transactions, as well as dynamic linking of payment data with the transaction signature. Passkeys support these requirements.

The European directives DORA and NIS2 are also technology-agnostic, and generally only require the implementation of multi-factor authentication — a requirement that passkeys certainly satisfy.

In short, choosing passkeys specifically isn’t mandatory for regulatory compliance, but many organizations find it to be the most cost-effective path. Among the factors tipping the scales in favor of passkeys are the extensive use of cloud services and SaaS, an ongoing rollout of passkeys for customer-facing websites and apps, and a well-managed fleet of corporate computers and smartphones.

Enterprise roadmap for transitioning to passkeys

  1. Assemble a cross-functional team. This includes IT, cybersecurity, business owners of IT systems, tech support, HR, and internal communications.
  2. Inventory your authentication systems and methods. Identify where WebAuthn/FIDO2 is already supported, which systems can be upgraded, where single sign-on (SSO) integration can be implemented, where a dedicated service needs to be created to translate new authentication methods into ones your systems support, and where you’ll have to continue using passwords — under beefed-up SOC monitoring.
  3. Define your passkey strategy. Decide whether to use hardware security keys or passkeys stored on smartphones and laptops. Plan and configure your primary sign-in methods, as well as emergency access options such as temporary access passcodes (TAP).
  4. Update your corporate information security policies to reflect the adoption of passkeys. Establish detailed sign-up and recovery rules. Establish protocols for cases where transitioning to passkeys isn’t on the cards (for example, because the user must rely on a legacy device that has no passkey support). Develop auxiliary measures to ensure secure passkey storage, such as mandatory device encryption, biometrics use, and unified endpoint management or enterprise mobility management device health checks.
  5. Plan the rollout order for different systems and user groups. Set a long timeline to identify and fix problems step-by-step.
  6. Enable passkeys in access management systems such as Entra ID and Google Workspace, and configure allowed devices.
  7. Launch a pilot, starting with a small group of users. Collect feedback, and refine your instructions and approach.
  8. Gradually connect systems that don’t natively support passkeys using SSO and other methods.
  9. Train your employees. Launch a passkey adoption campaign, providing users with clear instructions and working with “champions” on each team to speed up the transition.
  10. Track progress and improve processes. Analyze usage metrics, login errors, and support tickets. Adjust access and recovery policies accordingly.
  11. Gradually phase out legacy authentication methods once their usage drops to single-digit rates. First and foremost, eliminate one-time codes sent through insecure communication channels, such as text messages and email.
Hijacking Discord invite links to install malware | Kaspersky official blog https://www.kaspersky.com/blog/hijacked-discord-invite-links-for-multi-stage-malware-delivery/53955/ Fri, 25 Jul 2025 10:07:54 +0000

Attackers are using expired and deleted Discord invite links to distribute two strains of malware: AsyncRAT for taking remote control of infected computers, and Skuld Stealer for stealing crypto wallet data. They do this by exploiting a vulnerability in Discord’s invite link system to stealthily redirect users from trusted sources to malicious servers.

The attack leverages the ClickFix technique, multi-stage loaders and deferred execution to bypass defenses and deliver malware undetected. This post examines in detail how attackers exploit the invite link system, what ClickFix is and why they use it, and, most importantly, how not to fall victim to this scheme.

How Discord invite links work

First, let’s look at how Discord invite links work and how they differ from each other. By doing so, we’ll gain an insight into how the attackers learned to exploit the link creation system in Discord.

Discord invite links are special URLs that users can use to join servers. They are created by administrators to simplify access to communities without having to add members manually. Invite links in Discord can take two alternative formats:

  • https://discord.gg/{invite_code}
  • https://discord.com/invite/{invite_code}

Having more than one format, with one that uses a “meme” domain, is not the best solution from a security viewpoint, as it sows confusion in the users’ minds. But that’s not all. Discord invite links also have three main types, which differ significantly from each other in terms of properties:

  • Temporary invite links
  • Permanent invite links
  • Custom invite links (vanity URLs)

Links of the first type are what Discord creates by default. Moreover, in the Discord app, the server administrator has a choice of fixed invite expiration times: 30 minutes, 1 hour, 6 hours, 12 hours, 1 day or 7 days (the default option). For links created through the Discord API, a custom expiration time can be set — any value up to 7 days.
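For reference, creating an invite through the Discord API looks roughly like the sketch below. The channel ID and bot token are placeholders; max_age is specified in seconds, and a value of 0 corresponds to an invite that never expires.

```typescript
// Sketch: creating a Discord channel invite with a custom lifetime via the API.
// CHANNEL_ID and the bot token are placeholders; max_age is in seconds,
// and 0 means the invite never expires (a permanent link).
const CHANNEL_ID = "123456789012345678";
const BOT_TOKEN = process.env.DISCORD_BOT_TOKEN;

async function createInvite(maxAgeSeconds: number): Promise<string> {
  const response = await fetch(
    `https://discord.com/api/v10/channels/${CHANNEL_ID}/invites`,
    {
      method: "POST",
      headers: {
        Authorization: `Bot ${BOT_TOKEN}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ max_age: maxAgeSeconds, max_uses: 0, unique: true }),
    },
  );
  const invite = await response.json();
  return `https://discord.gg/${invite.code}`; // the code itself is randomly generated
}

// Three days: a value the app UI doesn't offer among its fixed choices.
createInvite(3 * 24 * 60 * 60).then(console.log);
```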

Codes for temporary invite links are randomly generated and usually contain 7 or 8 characters, including uppercase and lowercase letters, as well as numbers. Examples of a temporary link:

  • https://discord.gg/a7X9pLd
  • https://discord.gg/Fq5zW2cn

To create a permanent invite link, the server administrator must manually select Never in the Expire After field. Permanent invite codes consist of 10 random characters — uppercase and lowercase letters, and numbers, as before. Example of a permanent link:

  • https://discord.gg/hT9aR2kLmB

Lastly, custom invite links (vanity links) are available only to Discord Level 3 servers. To reach this level, a server must get 14 boosts, which are paid upgrades that community members can buy to unlock special perks. That’s why popular communities with an active audience — servers of bloggers, streamers, gaming clans or public projects — usually attain Level 3.

Custom invite links allow administrators to set their own invite code, which must be unique among all servers. The code can contain lowercase letters, numbers and hyphens, and can be almost arbitrary in length — from 2 to 32 characters. A server can have only one custom link at any given time.

Such links are always permanent — they do not expire as long as the server maintains Level 3 perks. If the server loses this level, its vanity link becomes available for reuse by another server with the required level. Examples of a custom invite link:

  • https://discord.gg/alanna-titterington
  • https://discord.gg/best-discord-server-ever
  • https://discord.gg/fq5zw2cn

From this last example, attentive readers may guess where we’re heading.
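Those format rules are distinctive enough that you can roughly tell the invite types apart from the code alone. The sketch below simply encodes the patterns described above; it’s a heuristic for illustration, not an official Discord rule.

```typescript
// Rough classification of a Discord invite code based on the formats above.
// Heuristic only: it encodes the patterns described in this article.
function classifyInviteCode(code: string): string[] {
  const candidates: string[] = [];
  if (/^[A-Za-z0-9]{7,8}$/.test(code)) {
    candidates.push("temporary (7-8 letters and digits)");
  }
  if (/^[A-Za-z0-9]{10}$/.test(code)) {
    candidates.push("permanent (10 letters and digits)");
  }
  if (/^[a-z0-9-]{2,32}$/.test(code)) {
    candidates.push("custom / vanity (lowercase letters, digits, hyphens)");
  }
  return candidates.length > 0 ? candidates : ["unknown format"];
}

console.log(classifyInviteCode("Fq5zW2cn")); // temporary only
console.log(classifyInviteCode("fq5zw2cn")); // temporary AND vanity: that overlap is
                                             // exactly what the next section is about
```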

How scammers exploit the invite system

Now that we’ve looked at the different types of Discord invite links, let’s see how malicious actors weaponize the mechanism. Note that when a regular, non-custom invite link expires or is deleted, the administrator of a legitimate server cannot get the same code again, since all codes are generated randomly.

But when creating a custom invite link, the server owner can manually enter any available code, including one that matches the code of a previously expired or deleted link.

It is this quirk of the invite system that attackers exploit: they track legitimate expiring codes, then register them as custom links on their servers with Level 3 perks.

As a result, scammers can use:

  • Any expired temporary invite links (even if the expired code contained capital letters, the system automatically redirects users to the scammers’ all-lowercase vanity URL)
  • Permanent invite links deleted from servers, if the code consisted solely of lowercase letters and numbers (no redirection here)
  • Custom invite links, if the original server has lost Level 3 perks and its link is available for re-registration

What does this substitution lead to? Attackers get the ability to direct users who follow links previously posted on wholly legitimate resources (social networks, websites, blogs and forums of various communities) to their own malicious servers on Discord.

What’s more, the legitimate owners of these resources may not even realize that the old invite links now point to fake Discord servers set up to distribute malware. This means they can’t even warn users that a link is dangerous, or delete messages in which it appears.

How ClickFix works in Discord-based attacks

Now let’s talk about what happens to users who follow hijacked invite links received from trusted sources. After joining the attackers’ Discord server, the user sees that all channels are unavailable to them except one, called verify.

Malicious Discord server

On the attackers’ Discord server, users who followed the hijacked link have access to only one channel, verify Source

This channel features a bot named Safeguard that offers full access to the server. To get this, the user must click the Verify button, which is followed by a prompt to authorize the bot.

Authorization window of the Safeguard bot

On clicking the Authorize button, the user is automatically redirected to the attackers’ external site, where the next and most important phase of the attack begins. Source

After authorization, the bot gains access to profile information (username, avatar, banner), and the user is redirected to an external site: https://captchaguard[.]me. Next, the user goes through a chain of redirects and ends up on a well-designed web page that mimics the Discord interface, with a Verify button in the center.

Fake verification screen on an external site

Redirection takes the user to a fake page styled to look like the Discord interface. Clicking the Verify button activates malicious JavaScript code that copies a PowerShell command to the clipboard Source

Clicking the Verify button activates JavaScript code that copies a malicious PowerShell command to the clipboard. The user is then given precise instructions on how to “pass the check”: open the Run window (Win + R), paste the clipboarded text (Ctrl + V), and press Enter.

The ClickFix technique implemented by Discord link hijackers

Next comes the ClickFix technique: the user is instructed to paste and run the malicious command copied to the clipboard in the previous step. Source

The site does not ask the user to download or run any files manually, thereby removing the typical warning signs. Instead, users essentially infect themselves by running a malicious PowerShell command that the site slips onto the clipboard. All these steps are part of an infection tactic called ClickFix, which we’ve already covered in depth on our blog.

AsyncRAT and Skuld Stealer malware

The user-activated PowerShell script is the first step in the multi-stage delivery of the malicious payload. The attackers’ next goal is to install two malicious programs on the victim’s device — let’s take a closer look at each of them.

First, the attackers download a modified version of AsyncRAT to gain remote control over the infected system. This tool provides a wide range of capabilities: executing commands and scripts, intercepting keystrokes, viewing the screen, managing files, and accessing the remote desktop and camera.

Next, the cybercriminals install Skuld Stealer on the victim’s device. This crypto stealer harvests system information, siphons off Discord login credentials and authentication tokens saved in the browser, and, crucially, steals seed phrases and passwords for Exodus and Atomic crypto wallets by injecting malicious code directly into their interface.

Skuld sends all collected data via a Discord webhook — a one-way HTTP channel that allows applications to automatically send messages to Discord channels. This gives the attackers a convenient, low-profile way to exfiltrate stolen information directly through Discord without the need for a sophisticated command-and-control infrastructure.

As a result, all data — from passwords and authentication tokens to crypto wallet seed phrases — is automatically published in a private channel set up in advance on the attackers’ Discord server. Armed with the seed phrases, the attackers can recover all the private keys of the hijacked wallets and gain full control over all cryptocurrency assets of their victims.

How to avoid falling victim?

Unfortunately, Discord’s invite system lacks transparency and clarity. And this makes it extremely difficult, especially for newbies, to spot the trick before clicking a hijacked link and during the redirection process.

Nevertheless, there are some security measures that, if done properly, should fend off the worst outcome — a malware-infected computer and financial losses:

  • Never paste code into the Run window if you don’t know exactly what it does. Doing this is extremely dangerous, and normal sites will never give such an instruction.
  • Configure Discord privacy and security by following our detailed guide. This will not guard against hijacked invite links, but will minimize other risks associated with Discord.
  • Use a reliable security solution that gives advance warning of danger and prevents the download of malware. It’s best to install it on all devices, but especially on ones where you use crypto wallets and other financial software.

Malicious actors often target Discord to steal cryptocurrency, game accounts and assets, and generally cause misery for users. Check out our posts for more examples of Discord scams:

How to protect yourself from Google Forms scams | Kaspersky official blog https://www.kaspersky.com/blog/google-forms-scam/53909/ Thu, 24 Jul 2025 07:59:31 +0000

You’ve probably filled out a Google Forms survey at least once — likely signing up for an event, taking a poll, or leaving your contact details for someone. No wonder you did — this is a convenient and easy-to-use service backed by a tech giant. This simplicity and trust have become the perfect cover for a new wave of online scams. Fraudsters have figured out how to use Google Forms to hide their schemes, luring victims with promises of free cryptocurrency. And all the victim has to do to fall into the trap is click a link.

Free crypto exists only in a scammer’s trap

Just like parents tell their kids not to take candy from strangers, we recommend being cautious about offers that seem too good to be true. Today’s story is exactly about that. Our researchers have uncovered a new wave of scam attacks exploiting Google Forms. Scammers use this Google service to send potential victims emails offering free cryptocurrency.

"The transaction for the transfer has been verified"

“The transaction for the transfer has been verified”

As is often the case, the scam is wrapped in a flashy, tempting package: victims are lured with promises of cashing out a large sum of cryptocurrency. But before you can get your payout, the scammers ask you to pay a fee — though not right away. First, you have to click a link in the email, land on a fake website, and enter your crypto wallet details and your email address (a nice bonus for the scammers). And just like that, you wave goodbye to your money.

The scammers are counting on victims finding an offer of 1.275 BTC too hard to resist

If we take a closer look at these emails, we’ll see that they don’t exactly win any awards for looking legit. That’s because, while Google Forms is a free tool that allows anyone, including scammers, to create professional-looking emails, these emails have a very specific look that’s pretty hard to pass off as a real crypto platform notification. So why do scammers use Google Forms?

Because it allows the message to slip through email filters, and there’s a good reason for that: email messages like these are sent from Google’s own mail servers and include links to the domain forms.gle. The links look legit to spam filters, so there’s a good chance these messages will make it into your inbox. This is how scammers exploit the good reputation of this online service.

Google Forms scams are on the rise. According to some experts, the number of these scams increased by 63% in 2024 and likely continues to grow in 2025. That means one thing: you need to share this post right now with your loved ones who are just starting to explore the internet. Tell them about the most common types of scams today and how to protect themselves.

Protecting yourself from Google Forms scams

The easiest and most effective approach is to rely on a trusted security tool that alerts you whenever you try to visit a phishing website. What are some other things you can do?

  • Avoid following links in emails you weren’t expecting. Chances are, there’s nothing good behind them.
  • Avoid entering your personal information on suspicious websites. If your curiosity gets the better of you and you do click a link in an email, be absolutely sure not to enter any payment or personal information.
  • Remember: there’s no such thing as a free lunch. Watch out for offers promising payments or prizes — especially if they ask you to pay a commission upfront.
  • Learn how other types of scams operate and share the news of the latest threats with your loved ones.

If you’ve grown tired of all the Google Forms scams, you can set up a filter for the phrase “Create your own Google Form” in your email client. Every single Google Forms email contains that phrase, so the filter will move any messages with the text right to the spam folder. The problem with this approach is that you might miss legitimate emails from Google Forms. Here’s how to block these emails in Gmail and Outlook.

Read about other tricks that scammers have up their sleeves:

How to set up security and privacy in Garmin apps | Kaspersky official blog https://www.kaspersky.com/blog/garmin-privacy-settings/53920/ Wed, 23 Jul 2025 12:17:13 +0000

Sports smartwatches continue to be a prime target for cybercriminals, offering a wealth of sensitive information about potential victims. We’ve previously discussed how fitness tracking apps collect and share user data: most of them publicly display your workout logs, including precise geolocation, by default.

It turns out that smartwatches themselves take a similarly lax approach to protecting their owners’ personal data. In late June 2025, all COROS smartwatches were found to have serious vulnerabilities that exposed not only the watches themselves but also user accounts. By exploiting them, malicious actors can gain full access to the data in the victim’s account, intercept sensitive information like notifications, change or factory-reset device settings, and even interrupt workout tracking, leading to the loss of all that data.

What’s particularly frustrating is that COROS was notified of these issues back in March 2025, yet fixes aren’t expected until the end of the year.

Similar vulnerabilities were discovered in 2022 in devices from arguably one of the most popular manufacturers of sports smartwatches and fitness gadgets, Garmin, although these issues were promptly patched.

In light of these kinds of threats, it’s natural to want to maximize your privacy by properly configuring the security settings in your sports apps. Today, we’ll break down how to protect your data within Garmin Connect and the Connect IQ Store — two online services in one of the most widely used sports gadget ecosystems.

How to find privacy settings in Garmin Connect

The privacy settings are located in different sections of the menu depending on whether you’re using the mobile app or the web version.

In the Garmin Connect mobile app:

  1. Open Garmin Connect on your smartphone.
  2. Tap the three dots (More section) in the bottom right corner.
  3. Select Settings.
  4. Locate Profile & Privacy.
How to find the privacy settings in Garmin Connect for iOS — the process is essentially the same in the Android version of the app

In the web version of Garmin Connect:

  1. Open the Garmin Connect website in a browser.
  2. Click the profile icon in the top right corner.
  3. Select Account Settings.
  4. Navigate to Privacy Settings.
How to find the privacy settings in the web version of Garmin Connect

There, you can adjust the visibility of your profile, activities, and steps, and even decide who can see your badges. For the highest level of privacy, we recommend selecting Only me. This ensures that your personal information, workout stats, and other data are visible only to you.

How to hide your workout locations in Garmin Connect

Revealing your routes is one of the most significant privacy risks. This could allow malicious actors to track you in near real-time.

Analysis of publicly available geodata has repeatedly revealed leaks of highly confidential information — from the locations of secret U.S. military bases exposed by anonymized heatmaps of service members’ activity, to the routes of head-of-state motorcades, pieced together from their bodyguards’ smartwatch tracking data. All this data ended up publicly accessible, not because of a hack, but due to incorrect privacy settings within the app itself, which broadcasts all of the owner’s movements online by default.

These leaks clearly showed that data from wearable sensors can cause a lot of problems for their wearers. Even if you’re not guarding top government officials, training maps can reveal your home address, workplace, and other frequently visited locations.

Garmin’s tactical watch models include a Stealth mode feature, designed specifically for military personnel. In their line of work, a lack of privacy can be a matter of life and death. However, with Garmin Connect, you can set up your own privacy zones for almost every Garmin gadget.

Setting up privacy zones:

  1. Open your Garmin Connect profile in a browser (the feature isn’t available in the mobile app).
  2. Navigate to Privacy Zones.
  3. Tap + Add New Zone.
  4. Enter your home address or some other place you want to hide.
  5. Set a zone radius — we recommend at least 500 meters.
How to set up privacy zones in Garmin Connect

Garmin’s Privacy Zones are quite similar to a feature Strava introduced back in 2013. They automatically hide the start and end points of your workouts if these fall within a designated area. And even if you share your workout with the whole world, it’ll be impossible to see your exact location — for example, your home.

Just a bit further up in that same section, it’s worth checking out other ways your movement data might be used: for instance, to create heatmaps based on user routes. You can opt out of sharing this kind of data. To understand what each function does and how to adjust it, simply tap Edit directly below it. A description will pop up, explaining what data is collected and how it’s used.

How to adjust advanced data collection and sharing settings in Garmin Connect

How to change the visibility of past activities in Garmin Connect

Changing your privacy settings won’t retroactively apply to activities you’ve already saved in Garmin Connect. Even if you crank up your privacy to the max right now, all your past recordings will still show up with the visibility settings they had when you first created them. So if you’ve been using Garmin for a while and you’re just now getting around to tweaking your privacy, you’ll want to update your previously saved activities as well.

  1. Sign in to the web version of Garmin Connect.
  2. Select Account Settings → Privacy Settings.
  3. Locate Update Past Activities, select a new level of privacy for all past workouts, and confirm your changes.
You can only change the privacy settings for your previously saved activities in the web version of Garmin Connect.

How to delete individual activities in Garmin Connect

You can remove specific saved activities so no one can see them.

  1. Open the Garmin Connect mobile app.
  2. Navigate to More → Activities → All Activities.
  3. Select the workout you want to delete.
  4. Tap the three dots in the top right corner.
  5. Tap Delete Activity.
How to remove individual workout records from Garmin Connect

If you need to wipe all your previously saved activities, and you have a lot of them, it might be easier to delete your old account and create a new one. However, keep in mind that deleting your account will result in the loss of all your workout data and health metrics.

How to monitor connected devices and services in Garmin Connect

Another potential source of personal data leaks comes from devices and services that have access to your Garmin Connect account. If you frequently switch out your sports gadgets, make sure you remove them from your account.

  1. Tap the device icon in the top right corner of Garmin Connect.
  2. The Devices section will open.
  3. Remove any unfamiliar or unused devices by swiping left on them.

Next, check the list of third-party apps that have access to your account:

  1. Open Settings.
  2. Navigate to Connected Apps, and remove those you no longer use.
How to remove old devices and connected apps from Garmin Connect

How to protect yourself from vulnerabilities in Connect IQ

It’s not just incorrect privacy settings in Garmin Connect that can expose your data. Vulnerabilities in apps and watch faces available through the Connect IQ Store marketplace can also lead to data leaks. In 2022, security researcher Tao Sauvage found that the Connect IQ API developer platform contained 13 vulnerabilities. These could potentially be exploited to bypass permissions and compromise your watch.

Some of these vulnerabilities have been lurking in the Connect IQ API since its very first release back in 2015. Over a hundred models of Garmin devices were at risk, including fitness watches, outdoor navigators, and cycling computers. Fortunately, these vulnerabilities were patched in 2023, but if you haven’t updated your device since before then (or you purchased a used gadget), it’s crucial to update its firmware to the latest version.

Even though these specific vulnerabilities have been fixed, the Connect IQ Store remains a potential entry point for future threats. Because of this, we recommend the following:

  1. Avoid installing third-party watch faces and apps from unknown developers in the Connect IQ Store.
  2. Stick to official Garmin watch faces built into your device.
  3. Make sure to regularly update your Garmin devices. You can do this through Garmin Express on your desktop, or by using Garmin Connect on your smartphone.
  4. Turn off automatic app downloads from the Connect IQ Store in the settings.

General recommendations

In an era of increasing cyberthreats to IoT devices, properly configuring the privacy settings on your wearables is crucial. Your digital security doesn’t just depend on device vendors; it also relies on the steps you take to protect your personal data.

  1. Use unique passwords for all accounts, including Garmin Connect. Read more on how to create a strong and easy-to-remember password.
  2. Turn on two-factor authentication wherever possible.
  3. Double-check the privacy settings after every app update to avoid any unwelcome surprises.
  4. Curb your connections on the Garmin Connect social network.
  5. Ignore connection requests from strangers.

To manage privacy for popular apps and gadgets, be sure to use our free service, Privacy Checker. And to stay on top of the latest cyberthreats and respond quickly, subscribe to our Telegram channel. Finally, the specialized privacy protection modes in Kaspersky Premium ensure maximum security for your personal information and help prevent data theft across all your devices.

Below are detailed instructions on how to configure security and privacy for the most popular running trackers.

Common mistakes in using CVSS | Kaspersky official blog https://www.kaspersky.com/blog/cvss-rbvm-vulnerability-management/53912/ Tue, 22 Jul 2025 19:46:22 +0000

When you first encounter CVSS (Common Vulnerability Scoring System), it’s easy to think this is the perfect tool for triaging and prioritizing vulnerabilities. A higher score must mean a more critical vulnerability, right? In reality, that approach doesn’t quite work out. Every year, we see an increasing number of vulnerabilities with high CVSS scores. Security teams just can’t patch them all in time, but the vast majority of these flaws are never actually exploited in real-world attacks. Meanwhile, attackers are constantly leveraging less flashy vulnerabilities with lower scores. There are other hidden pitfalls too — ranging from purely technical issues like conflicting CVSS scores to conceptual ones like a lack of business context.

These aren’t necessarily shortcomings of the CVSS itself. Instead, this highlights the need to use the tool correctly, as part of a more sophisticated and comprehensive vulnerability management process.

CVSS discrepancies

Do you ever notice how the same vulnerability might have different severity scores depending on the available source? One score from the cybersecurity researcher who found it, another from the vendor of the vulnerable software, and yet another from a national vulnerability database? It’s not always just a simple mistake. Sometimes, different experts can disagree on the context of exploitation. They might have different ideas about the privileges with which a vulnerable application runs, or whether it’s internet-facing. For instance, a vendor might base its assessment on its recommended best practices, while a security researcher might consider how applications are typically configured in real-world organizations. One researcher might rate the exploit complexity as high, while another deems it low. This isn’t an uncommon occurrence. A 2023 study by Vulncheck found that 20% of vulnerabilities in the National Vulnerability Database (NVD) had two CVSS3 scores from different sources, and 56% of those paired scores were in conflict with each other.
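To see how two defensible sets of assumptions translate into two different numbers, here is a small sketch of the CVSS v3.1 base score formula (scope-unchanged vectors only; the two vectors below are made-up examples, not scores for any real CVE).

```typescript
// Sketch of the CVSS v3.1 base score calculation for Scope: Unchanged vectors,
// showing how different assumptions about the same bug yield different scores.
const AV: Record<string, number> = { N: 0.85, A: 0.62, L: 0.55, P: 0.2 };
const AC: Record<string, number> = { L: 0.77, H: 0.44 };
const PR: Record<string, number> = { N: 0.85, L: 0.62, H: 0.27 }; // Scope: Unchanged values
const UI: Record<string, number> = { N: 0.85, R: 0.62 };
const CIA: Record<string, number> = { H: 0.56, L: 0.22, N: 0 };

// CVSS "round up to one decimal place" (with a small tolerance for float error).
const roundup = (x: number): number => Math.ceil(x * 10 - 1e-9) / 10;

function baseScore(vector: string): number {
  const m = Object.fromEntries(
    vector.replace("CVSS:3.1/", "").split("/").map((part) => part.split(":")),
  ) as Record<string, string>;
  if (m.S !== "U") throw new Error("This sketch only handles Scope: Unchanged");

  const iss = 1 - (1 - CIA[m.C]) * (1 - CIA[m.I]) * (1 - CIA[m.A]);
  const impact = 6.42 * iss;
  const exploitability = 8.22 * AV[m.AV] * AC[m.AC] * PR[m.PR] * UI[m.UI];
  return impact <= 0 ? 0 : roundup(Math.min(impact + exploitability, 10));
}

// A researcher assumes a network-facing service and no privileges required...
console.log(baseScore("CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H")); // 8.8
// ...while the vendor assumes local access, high complexity, and low privileges.
console.log(baseScore("CVSS:3.1/AV:L/AC:H/PR:L/UI:R/S:U/C:H/I:H/A:H")); // 6.7
```

Neither number is wrong on its own; each encodes a different set of assumptions, which is why the vector string matters more than the headline score.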

Common mistakes when using CVSS

For over a decade, FIRST has advocated for the methodologically correct application of CVSS. Yet organizations that use CVSS ratings in their vulnerability management processes continue to make typical mistakes:

  1. Using the CVSS base score as the primary risk indicator. CVSS measures the severity of a vulnerability, not the likelihood that it will be exploited or the impact exploitation would have on the organization under attack. Sometimes a critical vulnerability is harmless within a specific company’s environment because it resides in insignificant, isolated systems. Conversely, a large-scale ransomware attack might begin with a seemingly innocuous information-leak vulnerability with a CVSS score of 6.
  2. Using the CVSS Base score without Threat/Temporal and Environmental adjustments. The availability of patches, public exploits, and compensatory measures significantly influences how and how urgently a vulnerability should be addressed.
  3. Focusing only on vulnerabilities above a certain score. This approach is sometimes mandated by government or industry regulators (“remediate vulnerabilities with CVSS score above 8 within one month”). As a result, cybersecurity teams face a continuously growing workload that, in reality, doesn’t make their infrastructure more secure. The number of vulnerabilities with high CVSS scores identified annually has been rapidly increasing over the past 10 years.
  4. Using CVSS to assess the likelihood of exploitation. These metrics are poorly correlated: only 17% of critical vulnerabilities are ever exploited in attacks.
  5. Using only the CVSS rating. The standardized vector string was introduced in CVSS precisely so that defenders could understand the details of a vulnerability and independently calculate its importance within their own organization (a small parsing sketch follows this list). CVSS 4.0 was specifically revised to make it easier to account for business context through additional metrics. Any vulnerability management effort based solely on a numerical rating will be largely ineffective.
  6. Ignoring additional sources of information. Relying on a single vulnerability database and analyzing only CVSS is insufficient. The absence of data on patches, working proofs of concept, and real-world exploitation cases makes it difficult to decide how to address vulnerabilities.
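To show why the vector string is more useful than the bare number, here is a minimal Python sketch that splits a CVSS 3.1 vector into its individual metrics. The three checks at the end are just illustrative examples of details a single score hides.

def parse_cvss_vector(vector: str) -> dict[str, str]:
    """Split a CVSS 3.1 vector string into a {metric: value} mapping."""
    parts = vector.split("/")
    if not parts[0].startswith("CVSS:"):
        raise ValueError("Not a CVSS vector string")
    return dict(part.split(":", 1) for part in parts[1:])

vector = "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H"
metrics = parse_cvss_vector(vector)

# A few illustrative checks that a bare numerical score would hide
print("Reachable over the network:", metrics["AV"] == "N")
print("No privileges required:", metrics["PR"] == "N")
print("No user interaction needed:", metrics["UI"] == "N")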

What CVSS doesn’t tell you about a vulnerability

CVSS is the industry standard for describing a vulnerability’s severity, the conditions under which it can be exploited, and its potential impact on a vulnerable system. However, beyond this description (and the CVSS Base score), there’s a lot it doesn’t cover:

  • Who found the vulnerability? Was it the vendor, an ethical researcher who reported the flaw and waited for a patch, or was it a malicious actor?
  • Is there an exploit publicly available? In other words, is there readily available code to exploit the vulnerability?
  • How practical is it to exploit in real-world scenarios?
  • Is there a patch? Does it cover all vulnerable software versions, and what are the potential side effects of applying it?
  • Should the organization address the vulnerability? Or does it affect a cloud service (SaaS) where the provider will automatically fix the defects?
  • Are there signs of exploitation in the wild?
  • If there are none, what’s the likelihood attackers will leverage this vulnerability in the future?
  • Which specific systems within your organization are vulnerable?
  • Is the exploitation practically accessible to an attacker? For example, a system might be a corporate web server accessible to anyone online, or it could be a vulnerable printer physically connected to a single computer that has no network access. A more complex example might be a vulnerability in a software component’s method, where the specific business application using that component never actually calls the method.
  • What would happen if the vulnerable systems were compromised?
  • What’s the financial cost of such an event to the business?

All these factors significantly influence the decision of when and how to remediate a vulnerability — or even if remediation is necessary at all.
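One practical way to keep track of these factors is to attach them to each vulnerability as an enrichment record. Below is a hypothetical Python structure (the field names are ours, not part of any standard) that mirrors the questions above.

from dataclasses import dataclass, field

@dataclass
class EnrichedVulnerability:
    """Hypothetical enrichment record covering what the CVSS score alone leaves open."""
    cve_id: str
    cvss_base: float
    exploit_public: bool = False          # is exploit code publicly available?
    exploited_in_wild: bool = False       # e.g., listed in CISA KEV
    epss_probability: float = 0.0         # estimated likelihood of exploitation
    patch_available: bool = False
    patch_side_effects: str = ""          # known regressions, downtime, etc.
    affected_assets: list[str] = field(default_factory=list)
    reachable_by_attacker: bool = False   # e.g., internet-facing vs. isolated
    estimated_business_impact: str = ""   # e.g., "web store outage, regulatory fines"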

How to fill the gaps in CVSS? RBVM has the answer!

Many factors that are often hard to account for within the confines of CVSS are central to a popular approach known as risk-based vulnerability management (RBVM).

RBVM is a holistic, cyclical process, with several key phases that repeat regularly:

  • Inventorying all IT assets of your business. This includes everything from computers, servers and software, to cloud services and IoT devices.
  • Prioritizing assets by importance: identifying your crown jewels.
  • Scanning assets for known vulnerabilities.
  • Enriching the vulnerability data. This includes refining CVSS-B and CVSS-BT ratings, incorporating threat intelligence, and assessing the likelihood of exploitation. Two popular aids for gauging exploitability are EPSS (another FIRST metric, which estimates the probability that a given vulnerability will be exploited in the wild) and the CISA KEV catalog, which lists vulnerabilities known to be actively exploited by attackers.
  • Defining the business context: understanding the potential impact of an exploit on vulnerable systems, considering their configurations and how they’re used within your organization.
  • Determining how the vulnerability can be neutralized through either patches or compensatory measures.
  • The most exciting part: assessing the business risk and setting priorities based on all the gathered data. Vulnerabilities with the highest probability of exploitation and the most significant potential impact on your key IT assets come first. To rank vulnerabilities, you can either calculate CVSS-BTE, incorporating all the collected data into the Threat and Environmental metrics, or use an alternative ranking methodology (a simplified ranking sketch follows this list). Regulatory requirements also influence prioritization.
  • Setting deadlines for each vulnerability’s resolution based on its risk level and operational considerations, such as the most convenient time for updates. If updates or patches aren’t available, or if their implementation introduces new risks and complexities, compensatory measures are adopted instead of direct remediation. Sometimes, the cost of fixing a vulnerability outweighs the risk it poses, and a decision might be made not to remediate it at all. In such cases, the business consciously accepts the risks of the vulnerability being exploited.
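To make the ranking step concrete, here is a deliberately simplified Python sketch of risk-based prioritization. The formula, weights, and sample numbers are invented for illustration only; a real RBVM program would use CVSS-BTE or its own validated scoring model.

from dataclasses import dataclass

@dataclass
class Finding:
    cve_id: str
    cvss_base: float        # 0.0-10.0
    epss: float             # probability of exploitation, 0.0-1.0
    in_kev: bool            # listed in CISA KEV?
    asset_criticality: int  # 1 (low) to 5 (crown jewels), set during asset inventory

def risk_score(f: Finding) -> float:
    """Toy risk score: severity x likelihood x asset importance (illustrative only)."""
    likelihood = 1.0 if f.in_kev else f.epss
    return f.cvss_base * likelihood * f.asset_criticality

findings = [
    Finding("CVE-A", cvss_base=9.8, epss=0.02, in_kev=False, asset_criticality=1),
    Finding("CVE-B", cvss_base=6.5, epss=0.85, in_kev=True, asset_criticality=5),
]

# The 6.5 on an actively exploited crown-jewel system outranks the isolated 9.8
for f in sorted(findings, key=risk_score, reverse=True):
    print(f"{f.cve_id}: risk={risk_score(f):.1f}")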

In addition to what we’ve discussed, it’s crucial to periodically analyze your company’s vulnerability landscape and IT infrastructure. Following this analysis, you need to introduce cybersecurity measures that prevent entire classes of vulnerabilities from being exploited or significantly boost the overall security of specific IT systems. These measures can include network micro-segmentation, least privilege implementation, and adopting stricter account management policies.

A properly implemented RBVM process drastically reduces the burden on IT and security teams. They spend their time more effectively as their efforts are primarily directed at flaws that pose a genuine threat to the business. To grasp the scale of these efficiency gains and resource savings, consider this FIRST study. Prioritizing vulnerabilities using EPSS alone allows you to focus on just 3% of vulnerabilities while achieving 65% efficiency. In stark contrast, prioritizing by CVSS-B requires addressing a whopping 57% of vulnerabilities with a dismal 4% effectiveness. Here, “efficiency” refers to successful remediation of vulnerabilities that have actually been exploited in the wild.

Update Microsoft SharePoint ASAP | Kaspersky official blog https://www.kaspersky.com/blog/toolshell-is-back-cve-2025-53771-53770/53905/ Mon, 21 Jul 2025 11:56:29 +0000 https://www.kaspersky.com/blog/?p=53905 Unknown attackers are actively targeting companies that use SharePoint Server 2016, SharePoint Server 2019, and SharePoint Server Subscription Edition. By chaining two vulnerabilities, CVE-2025-53770 (CVSS 9.8) and CVE-2025-53771 (CVSS 6.3), attackers can execute malicious code on the server remotely. The severity of the situation is underscored by the fact that Microsoft released the patches late on a Sunday night. To protect your infrastructure, researchers recommend installing the updates as soon as possible.

The attack via CVE-2025-53770 and CVE-2025-53771

Exploiting this pair of vulnerabilities allows unauthenticated attackers to take control of SharePoint servers, which means they can not only access all the information stored on them, but also use the servers as a foothold to attack the rest of the infrastructure.

Researchers at Eye Security state that even before the Microsoft bulletins were published, they had seen two waves of attacks using this vulnerability chain, resulting in dozens of compromised servers. Attackers install web shells on vulnerable SharePoint servers and steal cryptographic keys that can later allow them to impersonate legitimate services or users. This lets them regain access to compromised servers even after the vulnerability has been patched and the malware removed.

Our solutions proactively detected and blocked malicious activity linked to the ToolShell attack. Our telemetry data shows exploitation attempts worldwide, including in Africa, Asia, the Middle East, and Russia. A detailed investigation of the attack and associated vulnerabilities, along with indicators of compromise, is available in a Securelist blog post.

Relationship to CVE-2025-49704 and CVE-2025-49706 vulnerabilities (ToolShell chain)

Researchers noticed that exploitation of the CVE-2025-53770 and CVE-2025-53771 vulnerability chain is very similar to the ToolShell chain of two other vulnerabilities, CVE-2025-49704 and CVE-2025-49706, demonstrated in May at the Pwn2Own hacking competition in Berlin. Those two were patched by earlier updates, but evidently not completely.

By all indications, the new pair of vulnerabilities is an updated ToolShell chain, or rather a bypass of the patches that fix it. This is confirmed by Microsoft’s remarks in the description of the new vulnerabilities: “Yes, the update for CVE-2025-53770 includes more robust protections than the update for CVE-2025-49704. The update for CVE-2025-53771 includes more robust protections than the update for CVE-2025-49706.”

How to stay safe?

The first thing to do is install the patches. Before rolling out the emergency updates released yesterday, you should install the regular July updates, KB5002741 and KB5002744. At the time of writing, there were no patches for SharePoint 2016, so if you’re still running that version of the server, you’ll have to rely on compensating measures.

You should also make sure that robust protective solutions are installed on the servers and that the Antimalware Scan Interface (AMSI), which helps Microsoft applications and services to interact with running cybersecurity products, is enabled.

Researchers recommend replacing machine keys in ASP.NET on vulnerable SharePoint servers (you can read how to do this in Microsoft’s recommendations), as well as other cryptographic keys and credentials that may have been accessed from the vulnerable server.

If you have reason to suspect that your SharePoint servers have been attacked, it is recommended that you check them for indicators of compromise, primarily the presence of the malicious spinstall0.aspx file.
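If you want to automate that first check, here is a rough Python sketch that walks a directory tree and reports any file named spinstall0.aspx. The default path below is an assumption based on the standard SharePoint "16 hive" location; adjust it to your installation, and treat any hit as a reason to investigate further, not as definitive proof of compromise.

import os
import sys

# Assumed default SharePoint "16 hive" LAYOUTS path; adjust for your installation
DEFAULT_ROOT = r"C:\Program Files\Common Files\microsoft shared\Web Server Extensions\16\TEMPLATE\LAYOUTS"
IOC_NAME = "spinstall0.aspx"

def find_ioc(root: str) -> list[str]:
    """Return paths of files matching the known IoC file name under root."""
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if name.lower() == IOC_NAME:
                hits.append(os.path.join(dirpath, name))
    return hits

if __name__ == "__main__":
    root = sys.argv[1] if len(sys.argv) > 1 else DEFAULT_ROOT
    matches = find_ioc(root)
    print(f"{len(matches)} suspicious file(s) found under {root}")
    for path in matches:
        print(" ", path)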

If your internal incident response team lacks the in-house resources to identify indicators of compromise or remediate the incident, we advise you to contact third-party experts.

HR guidelines phishing email | Kaspersky official blog https://www.kaspersky.com/blog/employee-handbook-phishing-scheme/53836/ Fri, 18 Jul 2025 08:00:13 +0000 https://www.kaspersky.com/blog/?p=53836 We’ve been seeing attempts at using spear-phishing tricks on a mass scale for quite a while now. These efforts are typically limited to slightly better than usual email styling that mimics a specific company, faking a corporate sender via ghost spoofing, and personalizing the message, which, at best, means addressing the victim by name. However, in March of this year, we began noticing a particularly intriguing campaign in which not only the email body but also the attached document was personalized. The scheme itself was also a bit unusual: it tried to trick victims into entering their corporate email credentials under the pretense of HR policy changes.

A fake request to review new HR guidelines

Here’s how it works. The victim receives an email, seemingly from HR, addressing them by name. The email informs them of changes to HR policy regarding remote work protocols, available benefits, and security standards. Naturally, any employee would be interested in these kinds of changes, so their cursor drifts toward the attached document, which, incidentally, also features the recipient’s name in its title. What’s more, the email has a convincing banner stating that the sender is verified and the message came from a safe-sender list. As experience shows, this is precisely the kind of email that deserves extra scrutiny.

An email asking the recipient to review HR guidelines

A phishing email message designed to lure victims with fake HR policy updates

For starters, the entire email content — including the reassuring green banner and the personalized greeting — is an image. You can easily check this by trying to highlight any part of the text with your mouse. A legitimate sender would never send an email this way; it’s simply impractical. Imagine an HR department having to save and send individual images to every single employee for such a widespread announcement! The only reason to embed text as an image is to bypass email antispam or antiphishing filters.
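For those who prefer a programmatic check, here is a rough Python sketch that parses a saved .eml file and flags messages whose readable text is negligible while an image is present. It’s a narrow heuristic for this specific trick, not a general phishing detector, and the file name at the end is just a placeholder.

import re
from email import policy
from email.parser import BytesParser

def looks_like_image_only_email(eml_path: str, min_text_chars: int = 40) -> bool:
    """Heuristic: flag emails that carry an image but almost no readable text."""
    with open(eml_path, "rb") as f:
        msg = BytesParser(policy=policy.default).parse(f)

    text_chars = 0
    has_image = False
    for part in msg.walk():
        ctype = part.get_content_type()
        if ctype == "text/plain":
            text_chars += len(part.get_content().strip())
        elif ctype == "text/html":
            # Crude tag stripping: we only care roughly how much visible text there is
            text_chars += len(re.sub(r"<[^>]+>", " ", part.get_content()).strip())
        elif ctype.startswith("image/"):
            has_image = True

    return has_image and text_chars < min_text_chars

print(looks_like_image_only_email("suspicious_message.eml"))  # placeholder file name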

There are other, more subtle clues in the email that can give away the attackers. For example, the name and even the format of the attached document don’t match what’s mentioned in the email body. But compared to the “picturesque” email, these are minor details.

An attachment that imitates HR guidelines

Of course, the attached document doesn’t contain any actual HR guidelines. What you’ll find is a title page with a small company logo and a prominent “Employee Handbook” header. It also includes a table of contents with items highlighted in red as if to indicate changes, followed by a page with a QR code (as if to access the full document). Finally, there’s a very basic instruction on how to scan QR codes with your phone. The code, of course, leads to a page where the user is asked to enter corporate credentials, which is what the authors of the scheme are after.

A document pretending to highlight updates to the HR guidelines

The scammers’ document used as a lure

The document is peppered with phrases designed to convince the victim it’s specifically for them. Even their name is mentioned twice: once in the greeting and again in the line “This letter is intended for…” that precedes the instruction. Oh, and yes, the file name also includes their name. But the first question this document should raise is: what’s the point?

Realistically, all this information could have been presented directly in the email without creating a personalized, four-page file. Why would an HR employee go to such lengths and create these seemingly pointless documents for each employee? Honestly, we initially doubted that scammers would bother with such an elaborate setup. But our tools confirm that all the phishing emails in this campaign indeed contain different attachments, each unique to the recipient’s name. We’re likely seeing the work of a new automated mailing mechanism that generates a document and an email image for each recipient… or perhaps just some extremely dedicated phishers.

How to stay safe

A specialized security solution can block most phishing email messages at the corporate mail server. In addition, all devices used by company employees for work, including mobile phones, should also be protected.

We also recommend educating employees about modern scam tactics — for example, by sharing resources from our blog — and continually raising their overall cybersecurity awareness. This can be achieved through platforms like Kaspersky Automated Security Awareness.
