Our experts dug into this trend and uncovered some non-obvious threats. This article explores how Gen Z can navigate their multi-job lifestyles without putting their cybersecurity at risk.
The core issue stems from the sheer number of corporate apps and accounts Gen Z has to juggle. Think about it: Zoom for one job, Slack for another, and Notion for tasks across the board. And the more applications they use, the larger the attack surface for cybercriminals. Scammers constantly send phishing emails that convincingly impersonate employers, and distribute malware disguised as business software. They can even send fake assignments, pretending to be your boss.
From mid-2024 to mid-2025, Kaspersky experts recorded six million attacks involving fake collaboration platforms. Most often, attackers imitated the “golden trio” of corporate applications: Zoom, Microsoft Excel, and Microsoft Outlook.
Here’s how it might play out: an attacker sends an email seemingly from Zoom asking you to update the app. The email contains a link that leads to a phishing site mimicking the real Zoom page. This fake site then immediately downloads a bogus application to your device. The imposter app could then steal your contacts’ data or even gain access to your entire work environment — the potential scenarios are numerous.
If you’ve ever seen a message in a neighborhood chat like, “URGENT: remote work, $60 an hour!” — it’s likely a scam. But these days scammers have grown much more sophisticated. They’re posting what look like legitimate job openings on popular job platforms, detailing the terms so thoroughly that the positions appear genuine. In reality, even the most well-crafted job posting can turn out to be completely fake.
Cybercriminals may even conduct fake interviews to make their schemes appear more convincing. One common form of extortion targets Gen Z through fake “interviews” where victims are told to sign out of their personal Apple ID and sign in to a purported “company” account. If the victim complies, the scammers activate Lost Mode, effectively bricking the applicant’s iPhone. Naturally, they then demand a hefty sum to unlock it.
Freelance opportunities also deserve a close look. The search for freelance work is often less formal than traditional job hunting: all communication happens through messaging apps, and payments might even come from a client’s personal account. It’s incredibly easy to imitate this casual communication style, and scammers exploit this. In a worst-case scenario, instead of landing a new gig, you could end up with a bricked phone, malware infection, compromised personal accounts, or even losing all your money to the “client”.
It’s impossible to list every single red flag when you’re looking for a new job, but here are the main things to watch out for.
Some companies have adopted BYOD policies, asking employees to use their personal tech for work. The problem is, these are often the same devices used for everything else: gaming, downloading files from the internet, and chatting with friends. Do we even need to say that downloading torrents on the laptop used for work is a dubious idea?
Many Gen Zers also make a costly mistake when using a large number of applications: they use one password for everything. Just a single data breach (and they happen all the time!), and cybercriminals can gain access to all your messaging apps, calendars, email clients, and other work-specific applications. Of course, coming up with and remembering complex passwords every time is a challenge. That’s why we recommend using a password manager that can generate strong, unique passwords, and securely store them for you.
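Under the hood, a password manager’s generator does something like this minimal Python sketch. The character classes and default length here are illustrative choices, not any specific product’s policy:

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Generate a random password drawn from letters, digits, and symbols."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    while True:
        password = "".join(secrets.choice(alphabet) for _ in range(length))
        # Re-draw until all three character classes are present,
        # a common minimum-strength requirement.
        if (any(c.islower() for c in password)
                and any(c.isupper() for c in password)
                and any(c.isdigit() for c in password)):
            return password
```

The key point is using the `secrets` module (a cryptographically secure source of randomness) rather than `random`, and never reusing the result across accounts.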
What else can you do to avoid falling victim to cybercriminals while you’re job searching?
Cybersecurity cheat-sheet for polyworkers:
Before tackling organizational hurdles and drafting policies, you’ll have to determine if your core IT systems are ready for the switch to passkeys.
Microsoft Entra ID (Azure AD) fully supports passkeys, letting admins set them as the primary sign-in method. For hybrid deployments with on-premises resources, Entra ID can generate Kerberos tickets (TGTs), which your Active Directory domain controller can then process.
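As a rough illustration, enabling passkeys tenant-wide in Entra ID comes down to a single Microsoft Graph PATCH against the FIDO2 authentication method policy. The endpoint and property names follow Graph’s documented `fido2AuthenticationMethodConfiguration` resource, but the helper below is a hypothetical sketch that only assembles the request:

```python
import json

# Sending this request requires an access token with the
# Policy.ReadWrite.AuthenticationMethod permission.
GRAPH_URL = ("https://graph.microsoft.com/v1.0/policies/"
             "authenticationMethodsPolicy/"
             "authenticationMethodConfigurations/Fido2")

def build_fido2_patch(enforce_attestation: bool = True) -> dict:
    """Assemble (but don't send) the PATCH that enables passkeys tenant-wide."""
    body = {
        "@odata.type": "#microsoft.graph.fido2AuthenticationMethodConfiguration",
        "state": "enabled",
        "isAttestationEnforced": enforce_attestation,
        # Optionally restrict which security-key vendors are allowed, by AAGUID:
        "keyRestrictions": {"isEnforced": False,
                            "enforcementType": "block",
                            "aaGuids": []},
    }
    return {"method": "PATCH", "url": GRAPH_URL, "body": json.dumps(body)}
```

The `keyRestrictions` block is where an organization can allow or block specific hardware token models, which becomes relevant once policies get stricter.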
However, Microsoft doesn’t yet offer native passkey support for RDP, VDI, or on-premises-only AD sign-ins. That said, with a few workarounds, organizations can store passkeys on a hardware token like a YubiKey. This kind of token can simultaneously support both the traditional PIV (smart cards) technology and FIDO2 (passkeys). There are also third-party solutions for these scenarios, but you’ll need to evaluate how using them impacts your overall security posture and regulatory compliance.
Good news for Google Workspace and Google Cloud users: both services offer full passkey support.
Popular identity management systems like Okta, Ping, Cisco Duo, and RSA IDplus also support FIDO2 and all major forms of passkeys.
We have a detailed post on the subject. All modern operating systems from Google, Apple, and Microsoft support passkeys. However, if your company uses Linux, you’ll likely need extra tools, and overall support is still limited.
Also, while all major operating systems may appear to offer full support on the surface, passkeys are stored in a variety of ways, and that can lead to compatibility headaches. Mixed-ecosystem setups, such as a Windows computer paired with an Android smartphone, are the most problematic: you might create a passkey on one device and then find you can’t access it on another. For companies with a strictly managed device fleet, there are a couple of ways to tackle this. For example, you could have employees generate a separate passkey for each company device they use. This means a bit more initial setup: employees will need to go through the same passkey-creation process on every device. However, once that’s done, signing in takes minimal time. Plus, if they lose one device, they won’t be completely locked out of their work data.
Another option is to use a company-approved password manager to store and sync passkeys across all employees’ devices. This is also a must for companies using Linux computers, as Linux can’t natively store passkeys. Just a heads-up: this approach might add some complexity when it comes to regulatory compliance audits.
If you’re looking for a solution with almost no issues with sync and multiple platforms, hardware passkeys like the YubiKey are the way to go. The catch is that they can be significantly more expensive to deploy and manage.
The ideal scenario for bringing passkeys into your business apps is to have all your applications sign in through single sign-on (SSO). That way, you only need to implement passkey support in your corporate SSO solution, such as Entra ID or Okta. However, if some of your critical business applications don’t support SSO, or if that support isn’t part of your contract (which, unfortunately, happens), you’ll have to issue individual passkeys for users to sign in to each separate system. Hardware tokens can store anywhere from 25 to 100 passkeys, so your main extra cost here would be on the administrative side.
Popular business systems that fully support passkeys include Adobe Creative Cloud, AWS, GitHub, Google Workspace, HubSpot, Office 365, Salesforce, and Zoho. Some SAP systems also support passkeys.
Rolling out passkeys means getting your team up to speed regardless of the scenario. You don’t want them scratching their heads trying to figure out new interfaces. The goal is for everyone to feel confident using passkeys on every single device. Here are the key things your employees will need to understand.
Moving to passkeys doesn’t mean your cybersecurity team can just cross identity threats off their risk list. Sure, it makes things tougher for attackers, but they can still do the following:
While it’s impossible to phish the passkey itself, attackers can set up fake web infrastructure to trick a victim into authenticating and validating a malicious session on a corporate service.
A recent example of this kind of AiTM attack was documented in the U.S. The victim was lured to a fake authentication page for a corporate service, where the attackers first phished the username and password, and then the session confirmation by having the victim scan a QR code. In that incident, the security policies were configured correctly, so scanning the QR code did not result in successful authentication. But because such a cross-device passkey mechanism exists, attackers count on finding deployments where it’s configured incorrectly, and where the physical proximity of the authenticating device and the device storing the key is not verified.
Ultimately, switching to passkeys requires detailed policy configuration. This includes both authentication policies (such as disabling passwords when a passkey is available, or banning physical tokens from unknown vendors) and monitoring policies (such as logging passkey registrations or cross-device scenarios from suspicious locations).
There are several markers that are widely believed to indicate a message sent by scammers. Below are some examples.
An impersonal greeting like “Dear %username%” used to be a sure sign of a phishing email, but scammers have moved on from that. Targeted messages addressing the victim by name are becoming increasingly common. Treat those with the same suspicion.
If you’ve managed to spot one using the signs described above, well done — you’re awesome! You can go ahead and delete it without even opening it. And if you want to do your good deed for the day, report the phishing attempt via Outlook or Gmail to make the world a tiny bit safer. We understand that spotting phishing in your inbox right away isn’t easy, so here’s a short list of don’ts to help with detection.
Scammers can hide malware inside various types of email attachments: images, HTML files, and even voice messages. Here’s a recent example: you get an email with an attachment that appears to be a voice message with the SVG extension, but that’s typically an image format… To listen to the recording, you have to open the attachment, and what do you know — you find yourself on a phishing site that masquerades as Google Voice! And no, you don’t hear any audio. Instead, you’re redirected to another website where you’ll be prompted to enter the login and password for your email account. If you’re interested in learning more, here’s a Securelist blog post on this.
This and other stories just go to show you shouldn’t open attachments. Any attachments. At all. Especially if you weren’t expecting the message in the first place.
This is a golden rule that will help keep your money and accounts safe. A healthy dose of caution is exactly what everyone needs when using the internet. Let’s take a look at this phishing message.
Does this look odd? It’s written in two languages: Russian and Dutch. It shows the return address of a language school in the Netherlands, yet it references the Russian online marketplace Ozon. The message body congratulates the recipient: “You are one of our few lucky clients who get a chance to compete for uncredible prizes.” “Competing for prizes” is easy: just click the link, which has been thoughtfully included twice.
A week later, another message landed in the same inbox. Again, it came in two languages: Italian and Russian. This one came from a real Italian email address associated with the archive of Giovanni Korompay’s works. The artist passed away in 1988. No, this wasn’t an offer to commemorate the painter. Most likely, hackers breached the archive’s email account and are now sending phishing mail about soccer betting from that source. All of that looks rather fishy.
These messages have a lot in common. One thing we didn’t mention is how phishing links are disguised. Scammers deliberately use the TinyURL link shortener to make links look as legitimate as possible. But the truth is, a link that starts with tinyurl.com could point to anything: from the Kaspersky Daily blog to something malicious.
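A first line of defense is simply flagging shortened links before clicking. Here’s a minimal sketch of the idea; the shortener list is illustrative and far from complete:

```python
from urllib.parse import urlparse

# Non-exhaustive list of popular link-shortening domains; extend as needed.
SHORTENERS = {"tinyurl.com", "bit.ly", "goo.gl", "t.co", "is.gd", "rb.gy"}

def is_shortened(url: str) -> bool:
    """Return True if the link points at a known URL-shortening service."""
    host = (urlparse(url).hostname or "").lower()
    return host.removeprefix("www.") in SHORTENERS
```

A check like this can’t tell you where a shortened link ultimately leads, but it tells you that you *can’t know* without expanding it first, which is exactly the caution these messages call for.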
Scammers come up with all sorts of tricks: pretending to be Nigerian princes, sending fake Telegram Premium subscriptions, or congratulating people on winning fake giveaways. Every week, I get emails with text like this: “Congratulations! You can claim your personal prize.” Sometimes they even add the amount of the supposed winnings to make sure I open the message. And once, I did.
Inside, it’s all by the book: a flashy headline, congratulations, and calls to click the link. To make it seem even more convincing, the email is supposedly signed by a representative from the “Prize Board of the Fund”. What fund? What prize board? And how could I possibly have won a contest I never even entered? That part is unclear.
You may have noticed the unusual design of this message: it clearly stands out from the previous examples. To add credibility, the scammers used Google Forms, Google’s official service for surveys and polls. The scheme is a simple one: they create a survey, set it up to send response copies to the email addresses of their future victims, and collect their answers. Read Beware of Google Forms bearing crypto gifts to find out what happens if you open a link like that.
Following these rules will protect you from many — but not all — of the tricks that attackers might come up with. That’s why we recommend trusting a reliable solution: Kaspersky Premium. Every year, our products undergo testing by the independent Austrian organization AV-Comparatives to evaluate their ability to detect phishing threats. We described the testing procedure in a post a year ago. In June 2025, Kaspersky Premium for Windows successfully met the certification criteria again and received the Approved certificate, a mark of quality in protecting users from phishing.
Important clarification: at Kaspersky, we use a unified stack of security technologies, which is what the experts tested. This means the Kaspersky Premium for Windows award also applies to our other products for home users (Kaspersky Standard, Kaspersky Plus, and Kaspersky Premium) and for businesses (such as Kaspersky Endpoint Security for Business and Kaspersky Small Office Security).
More about phishing:
As with any large-scale migration, making the switch to passkeys requires a solid business case. On paper, passkeys tackle several pressing problems at once:
A FIDO Alliance report suggests that 87% of surveyed organizations in the US and UK have either already transitioned to using passkeys or are currently in the process of doing so. However, a closer look at the report reveals that this impressive figure also includes the familiar enterprise options like smart cards and USB tokens for account access. Although some of these are indeed based on WebAuthn and passkeys, they’re not without their problems. They’re quite expensive and create an ongoing burden on IT and cybersecurity teams related to managing physical tokens and cards: issuance, delivery, replacement, revocation, and so on. As for the heavily promoted solutions based on smartphones and even cloud sync, 63% of respondents reported using such technologies, but the full extent of their adoption remains unclear.
Companies that transition their entire workforce to the new tech are few and far between. The process can get both organizationally challenging and just plain expensive. More often than not, the rollout is done in phases. Although pilot strategies may vary, companies typically start with those employees who have access to IP (39%), IT system admins (39%), and C-suite executives (34%).
When an organization decides to transition to passkeys, it will inevitably face a host of technical challenges. These alone could warrant their own article. But for this piece, let’s stick to the most obvious issues:
Despite all these challenges, the transition to passkeys may be a foregone conclusion for some organizations if required by a regulator. Major national and industry regulators generally support passkeys, either directly or indirectly:
The NIST SP 800-63 Digital Identity Guidelines permit the use of “syncable authenticators” (a definition that clearly implies passkeys) for Authenticator Assurance Level 2, and device-bound authenticators for Authenticator Assurance Level 3. Thus, the use of passkeys confidently checks the boxes during ISO 27001, HIPAA, and SOC 2 audits.
In its commentary on DSS 4.0.1, the PCI Security Standards Council explicitly names FIDO2 as a technology that meets its criteria for “phishing-resistant authentication”.
The EU Payment Services Directive 2 (PSD2) is written in a technology-agnostic manner. However, it requires Strong Customer Authentication (SCA) and the use of Public Key Infrastructure based devices for important financial transactions, as well as dynamic linking of payment data with the transaction signature. Passkeys support these requirements.
The European directives DORA and NIS2 are also technology-agnostic, and generally only require the implementation of multi-factor authentication — a requirement that passkeys certainly satisfy.
In short, choosing passkeys specifically isn’t mandatory for regulatory compliance, but many organizations find it to be the most cost-effective path. Among the factors tipping the scales in favor of passkeys are the extensive use of cloud services and SaaS, an ongoing rollout of passkeys for customer-facing websites and apps, and a well-managed fleet of corporate computers and smartphones.
The attack leverages the ClickFix technique, multi-stage loaders, and deferred execution to bypass defenses and deliver malware undetected. This post examines in detail how attackers exploit the invite link system, what ClickFix is and why attackers use it, and, most importantly, how not to fall victim to this scheme.
First, let’s look at how Discord invite links work and how they differ from each other. By doing so, we’ll gain an insight into how the attackers learned to exploit the link creation system in Discord.
Discord invite links are special URLs that let users join servers. Administrators create them to simplify access to communities without having to add members manually. Invite links in Discord can take two formats:
Having more than one format, one of which uses a “meme” domain, is not ideal from a security standpoint, as it sows confusion among users. But that’s not all. Discord invite links also come in three main types, which differ significantly from each other in their properties:
Links of the first type are what Discord creates by default. In the Discord app, the server administrator can choose from fixed invite expiration times: 30 minutes, 1 hour, 6 hours, 12 hours, 1 day, or 7 days (the default option). For links created through the Discord API, a custom expiration time can be set — any value up to 7 days.
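For illustration, the REST call behind an API-created invite looks roughly like this. The channel ID is supplied by the caller, the bot token is a placeholder, and per Discord’s API documentation `max_age` is given in seconds (0 would make the invite permanent):

```python
import json

def build_invite_request(channel_id: str, max_age_s: int = 3 * 3600) -> dict:
    """Assemble the Discord REST call that creates an invite with a
    custom lifetime (here, three hours by default)."""
    return {
        "method": "POST",
        "url": f"https://discord.com/api/v10/channels/{channel_id}/invites",
        "headers": {"Authorization": "Bot <token>",
                    "Content-Type": "application/json"},
        # max_uses: 0 means the invite can be used an unlimited number of times.
        "body": json.dumps({"max_age": max_age_s, "max_uses": 0}),
    }
```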
Codes for temporary invite links are randomly generated and usually contain 7 or 8 characters, including uppercase and lowercase letters, as well as numbers. Examples of temporary links:
To create a permanent invite link, the server administrator must manually select Never in the Expire After field. Permanent invite codes consist of 10 random characters — uppercase and lowercase letters, and numbers, as before. Example of a permanent link:
Lastly, custom invite links (vanity links) are available only to Discord Level 3 servers. To reach this level, a server must get 14 boosts, which are paid upgrades that community members can buy to unlock special perks. That’s why popular communities with an active audience — servers of bloggers, streamers, gaming clans or public projects — usually attain Level 3.
Custom invite links allow administrators to set their own invite code, which must be unique among all servers. The code can contain lowercase letters, numbers and hyphens, and can be almost arbitrary in length — from 2 to 32 characters. A server can have only one custom link at any given time.
Such links are always permanent — they do not expire as long as the server maintains Level 3 perks. If the server loses this level, its vanity link becomes available for reuse by another server with the required level. Examples of custom invite links:
From this last example, attentive readers may guess where we’re heading.
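The three code formats described above can be summed up in a rough classifier. Note that this is a heuristic based on the observed conventions, not a documented guarantee; an all-lowercase random code is inherently ambiguous and can look exactly like a vanity code:

```python
import re

def classify_invite(code: str) -> str:
    """Best-effort guess at a Discord invite code's type, by format."""
    if re.fullmatch(r"[A-Za-z0-9]{7,8}", code):
        return "temporary"   # 7-8 mixed-case alphanumeric characters
    if re.fullmatch(r"[A-Za-z0-9]{10}", code):
        return "permanent"   # 10 mixed-case alphanumeric characters
    if re.fullmatch(r"[a-z0-9-]{2,32}", code):
        return "vanity"      # 2-32 lowercase letters, digits, hyphens
    return "unknown"
```

The overlap between these patterns is part of the problem: a user glancing at an invite URL has no reliable way to tell which type of link, and therefore which kind of server, they’re dealing with.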
Now that we’ve looked at the different types of Discord invite links, let’s see how malicious actors weaponize the mechanism. Note that when a regular, non-custom invite link expires or is deleted, the administrator of a legitimate server cannot get the same code again, since all codes are generated randomly.
But when creating a custom invite link, the server owner can manually enter any available code, including one that matches the code of a previously expired or deleted link.
It is this quirk of the invite system that attackers exploit: they track legitimate expiring codes, then register them as custom links on their servers with Level 3 perks.
As a result, scammers can use:
What does this substitution lead to? Attackers get the ability to direct users who follow links previously posted on wholly legitimate resources (social networks, websites, blogs and forums of various communities) to their own malicious servers on Discord.
What’s more, the legal owners of these resources may not even realize that the old invite links now point to fake Discord servers set up to distribute malware. This means they can’t even warn users that a link is dangerous, or delete messages in which it appears.
Now let’s talk about what happens to users who follow hijacked invite links received from trusted sources. After joining the attackers’ Discord server, the user sees that all channels are unavailable to them except one, called verify.
On the attackers’ Discord server, users who followed the hijacked link have access to only one channel, verify
This channel features a bot named Safeguard that offers full access to the server. To get this, the user must click the Verify button, which is followed by a prompt to authorize the bot.
On clicking the Authorize button, the user is automatically redirected to the attackers’ external site, where the next and most important phase of the attack begins.
After authorization, the bot gains access to profile information (username, avatar, banner), and the user is redirected to an external site: https://captchaguard[.]me. Next, the user goes through a chain of redirects and ends up on a well-designed web page that mimics the Discord interface, with a Verify button in the center.
Redirection takes the user to a fake page styled to look like the Discord interface. Clicking the Verify button activates malicious JavaScript code that copies a PowerShell command to the clipboard
Clicking the Verify button activates JavaScript code that copies a malicious PowerShell command to the clipboard. The user is then given precise instructions on how to “pass the check”: open the Run window (Win + R), paste the clipboard contents (Ctrl + V), and press Enter.
Next comes the ClickFix technique: the user is instructed to paste and run the malicious command copied to the clipboard in the previous step.
The site does not ask the user to download or run any files manually, thereby removing the typical warning signs. Instead, users essentially infect themselves by running a malicious PowerShell command that the site slips onto the clipboard. All these steps are part of an infection tactic called ClickFix, which we’ve already covered in depth on our blog.
The user-activated PowerShell script is the first step in the multi-stage delivery of the malicious payload. The attackers’ next goal is to install two malicious programs on the victim’s device — let’s take a closer look at each of them.
First, the attackers download a modified version of AsyncRAT to gain remote control over the infected system. This tool provides a wide range of capabilities: executing commands and scripts, intercepting keystrokes, viewing the screen, managing files, and accessing the remote desktop and camera.
Next, the cybercriminals install Skuld Stealer on the victim’s device. This crypto stealer harvests system information, siphons off Discord login credentials and authentication tokens saved in the browser, and, crucially, steals seed phrases and passwords for Exodus and Atomic crypto wallets by injecting malicious code directly into their interface.
Skuld sends all collected data via a Discord webhook — a one-way HTTP channel that allows applications to automatically post messages to Discord channels. This gives the attackers a simple, low-profile way to exfiltrate stolen information directly through Discord, with no need for a sophisticated command-and-control infrastructure.
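To see why, consider what delivering a message through a webhook actually takes: a single HTTP POST with no bot account or OAuth flow. The sketch below only assembles the request, and the webhook URL is a placeholder:

```python
import json

WEBHOOK_URL = "https://discord.com/api/webhooks/<id>/<token>"  # placeholder

def build_webhook_request(content: str) -> dict:
    """Assemble the POST that delivers a message via a Discord webhook.
    Anyone holding the URL can post to the channel; no other
    authentication is required."""
    return {
        "url": WEBHOOK_URL,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"content": content}),
    }
```

Sending it is one call with any HTTP client, which is exactly why webhooks make such a low-effort exfiltration channel: the traffic blends in with ordinary Discord API requests.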
As a result, all data — from passwords and authentication tokens to crypto wallet seed phrases — is automatically published in a private channel set up in advance on the attackers’ Discord server. Armed with the seed phrases, the attackers can recover all the private keys of the hijacked wallets and gain full control over all cryptocurrency assets of their victims.
Unfortunately, Discord’s invite system lacks transparency and clarity. And this makes it extremely difficult, especially for newbies, to spot the trick before clicking a hijacked link and during the redirection process.
Nevertheless, there are some security measures that, if done properly, should fend off the worst outcome — a malware-infected computer and financial losses:
Malicious actors often target Discord to steal cryptocurrency, game accounts and assets, and generally cause misery for users. Check out our posts for more examples of Discord scams:
Just like parents tell their kids not to take candy from strangers, we recommend being cautious about offers that seem too good to be true. Today’s story is exactly about that. Our researchers have uncovered a new wave of scam attacks exploiting Google Forms. Scammers use this Google service to send potential victims emails offering free cryptocurrency.
As is often the case, the scam is wrapped in a flashy, tempting package: victims are lured with promises of cashing out a large sum of cryptocurrency. But before you can get your payout, the scammers ask you to pay a fee — though not right away. First, you have to click a link in the email, land on a fake website, and enter your crypto wallet details and your email address (a nice bonus for the scammers). And just like that, you wave goodbye to your money.
If we take a closer look at these emails, we’ll see that they don’t exactly win any awards for looking legit. Google Forms is a free tool that lets anyone, scammers included, create polished survey emails, but those emails have a very distinctive look that’s hard to pass off as a genuine notification from a crypto platform. So why do scammers use Google Forms anyway?
Because Google Forms helps their messages slip through email filters. Messages like these are sent from Google’s own mail servers and contain links to the forms.gle domain. Spam filters see such links as legitimate, so there’s a good chance the messages will land in your inbox. This is how scammers exploit the service’s good reputation.
Google Forms scams are on the rise. According to some experts, the number of these scams increased by 63% in 2024 and likely continues to grow in 2025. That means one thing: you need to share this post right now with your loved ones who are just starting to explore the internet. Tell them about the most common types of scams today and how to protect themselves.
The easiest and most effective approach is to rely on a trusted security tool that alerts you whenever you try to visit a phishing website. What are some other things you can do?
If you’ve grown tired of all the Google Forms scams, you can set up a filter for the phrase “Create your own Google Form” in your email client. Every single Google Forms email contains that phrase, so the filter will move any messages with the text right to the spam folder. The problem with this approach is that you might miss legitimate emails from Google Forms. Here’s how to block these emails in Gmail and Outlook.
Read about other tricks that scammers have up their sleeves:
It turns out that smartwatches continue that lax approach to protecting their owners’ personal data. In late June 2025, all COROS smartwatches were found to have serious vulnerabilities that exposed not only the watches themselves but also user accounts. By exploiting them, malicious actors can gain full access to the data in the victim’s account, intercept sensitive information like notifications, change or factory-reset device settings, and even interrupt workout tracking, causing the loss of all session data.
What’s particularly frustrating is that COROS was notified of these issues back in March 2025, yet fixes aren’t expected until the end of the year.
Similar vulnerabilities were discovered in 2022 in devices from arguably one of the most popular manufacturers of sports smartwatches and fitness gadgets, Garmin, although these issues were promptly patched.
In light of these kinds of threats, it’s natural to want to maximize your privacy by properly configuring the security settings in your sports apps. Today, we’ll break down how to protect your data within Garmin Connect and the Connect IQ Store — two online services in one of the most widely used sports gadget ecosystems.
The privacy settings are located in different sections of the menu depending on whether you’re using the mobile app or the web version.
How to find the privacy settings in Garmin Connect for iOS — the process is essentially the same in the Android version of the app
There, you can adjust the visibility of your profile, activities, and steps, and even decide who can see your badges. For the highest level of privacy, we recommend selecting Only me. This ensures that your personal information, workout stats, and other data are visible only to you.
Revealing your routes is one of the most significant privacy risks. This could allow malicious actors to track you in near real-time.
Analysis of publicly available geodata has repeatedly revealed leaks of highly confidential information — from the locations of secret U.S. military bases exposed by anonymized heatmaps of service members’ activity, to the routes of head-of-state motorcades, pieced together from their bodyguards’ smartwatch tracking data. All this data ended up publicly accessible, not because of a hack, but due to incorrect privacy settings within the app itself, which broadcasts all of the owner’s movements online by default.
These leaks clearly showed that data from wearable sensors can cause a lot of problems for their wearers. Even if you’re not guarding top government officials, training maps can reveal your home address, workplace, and other frequently visited locations.
Garmin’s tactical watch models include a Stealth mode feature, designed specifically for military personnel. In their line of work, a lack of privacy can be a matter of life and death. However, with Garmin Connect, you can set up your own privacy zones for almost every Garmin gadget.
Garmin’s Privacy Zones are quite similar to a feature Strava introduced back in 2013. They automatically hide the start and end points of your workouts if these fall within a designated area. And even if you share your workout with the whole world, it’ll be impossible to see your exact location — for example, your home.
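The idea behind such zones is simple enough to sketch in a few lines of Python: drop every track point that falls within a set radius of a protected coordinate. This is an illustration of the concept, not Garmin’s or Strava’s actual algorithm:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in meters."""
    r = 6_371_000  # mean Earth radius
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def trim_track(points, zone_center, radius_m):
    """Drop GPS points that fall inside a privacy zone."""
    return [
        (lat, lon) for lat, lon in points
        if haversine_m(lat, lon, *zone_center) > radius_m
    ]
```

With a couple of hundred meters of radius around your home, a published workout still shows the route, just not the doorstep it started from.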
Just a bit further up in that same section, it’s worth checking out other ways your movement data might be used: for instance, to create heatmaps based on user routes. You can opt out of sharing this kind of data. To understand what each function does and how to adjust it, simply tap Edit directly below it. A description will pop up, explaining what data is collected and how it’s used.
Changing your privacy settings won’t retroactively apply to activities you’ve already saved in Garmin Connect. Even if you crank up your privacy to the max right now, all your past recordings will still show up with the visibility settings they had when you first created them. So if you’ve been using Garmin for a while and you’re just now getting around to tweaking your privacy, you’ll want to update your previously saved activities as well.
You can only change the privacy settings for your previously saved activities in the web version of Garmin Connect.
You can remove specific saved activities so no one can see them.
If you need to wipe all your previously saved activities, and you have a lot of them, it might be easier to delete your old account and create a new one. However, keep in mind that deleting your account will result in the loss of all your workout data and health metrics.
Another potential source of personal data leaks comes from devices and services that have access to your Garmin Connect account. If you frequently switch out your sports gadgets, make sure you remove them from your account.
Next, check the list of third-party apps that have access to your account:
It’s not just incorrect privacy settings in Garmin Connect that can expose your data. Vulnerabilities in apps and watch faces available through the Connect IQ Store marketplace can also lead to data leaks. In 2022, security researcher Tao Sauvage found that the Connect IQ API developer platform contained 13 vulnerabilities. These could potentially be exploited to bypass permissions and compromise your watch.
Some of these vulnerabilities have been lurking in the Connect IQ API since its very first release back in 2015. Over a hundred models of Garmin devices were at risk, including fitness watches, outdoor navigators, and cycling computers. Fortunately, these vulnerabilities were patched in 2023, but if you haven’t updated your device since before then (or you purchased a used gadget), it’s crucial to update its firmware to the latest version.
Even though these specific vulnerabilities have been fixed, the Connect IQ Store remains a potential entry point for future threats. Because of this, we recommend the following:
In an era of increasing cyberthreats to IoT devices, properly configuring the privacy settings on your wearables is crucial. Your digital security doesn’t just depend on device vendors; it also relies on the steps you take to protect your personal data.
To manage privacy for popular apps and gadgets, be sure to use our free service, Privacy Checker. And to stay on top of the latest cyberthreats and respond quickly, subscribe to our Telegram channel. Finally, the specialized privacy protection modes in Kaspersky Premium ensure maximum security for your personal information and help prevent data theft across all your devices.
Below are detailed instructions on how to configure security and privacy for the most popular running trackers.
These aren’t necessarily shortcomings of the CVSS itself. Instead, this highlights the need to use the tool correctly, as part of a more sophisticated and comprehensive vulnerability management process.
Have you ever noticed how the same vulnerability can have different severity scores depending on the source? One score from the cybersecurity researcher who found it, another from the vendor of the vulnerable software, and yet another from a national vulnerability database. It’s not always a simple mistake. Sometimes experts genuinely disagree about the context of exploitation: the privileges a vulnerable application runs with, or whether it’s internet-facing. For instance, a vendor might base its assessment on its recommended best practices, while a security researcher might consider how applications are typically configured in real-world organizations. One researcher might rate the exploit complexity as high, while another deems it low. This isn’t uncommon. A 2023 study by VulnCheck found that 20% of vulnerabilities in the National Vulnerability Database (NVD) had two CVSS v3 scores from different sources, and 56% of those paired scores conflicted with each other.
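To make such disagreements concrete, here’s a small Python sketch that pinpoints exactly where two assessments of the same vulnerability diverge. The vectors below are hypothetical, not taken from a real CVE:

```python
def parse_cvss_vector(vector):
    """Split a CVSS v3.1 vector string into a {metric: value} dict."""
    parts = vector.split("/")
    return dict(p.split(":") for p in parts[1:])  # skip the "CVSS:3.1" prefix

def score_disagreements(v1, v2):
    """Return the metrics on which two assessments of the same CVE differ."""
    m1, m2 = parse_cvss_vector(v1), parse_cvss_vector(v2)
    return {k: (m1[k], m2[k]) for k in m1 if m2.get(k) != m1[k]}

# Hypothetical case: the researcher rates attack complexity low and
# privileges required none; the vendor disagrees on both metrics.
researcher = "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H"
vendor     = "CVSS:3.1/AV:N/AC:H/PR:L/UI:N/S:U/C:H/I:H/A:H"
conflicts = score_disagreements(researcher, vendor)
print(conflicts)  # {'AC': ('L', 'H'), 'PR': ('N', 'L')}
```

Two metric values are enough to move a score from critical territory to a much lower rating, which is why diffing the vectors, not just comparing the final numbers, is the useful exercise.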
For over a decade, FIRST has advocated for the methodologically correct application of CVSS. Yet organizations that use CVSS ratings in their vulnerability management processes continue to make typical mistakes:
CVSS is the industry standard for describing a vulnerability’s severity, the conditions under which it can be exploited, and its potential impact on a vulnerable system. However, beyond this description (and the CVSS Base score), there’s a lot it doesn’t cover:
All these factors significantly influence the decision of when and how to remediate a vulnerability — or even if remediation is necessary at all.
Many factors that are often hard to account for within the confines of CVSS are central to a popular approach known as risk-based vulnerability management (RBVM).
RBVM is a holistic, cyclical process, with several key phases that repeat regularly:
In addition to what we’ve discussed, it’s crucial to periodically analyze your company’s vulnerability landscape and IT infrastructure. Following this analysis, you need to introduce cybersecurity measures that prevent entire classes of vulnerabilities from being exploited or significantly boost the overall security of specific IT systems. These measures can include network micro-segmentation, least privilege implementation, and adopting stricter account management policies.
A properly implemented RBVM process drastically reduces the burden on IT and security teams. They spend their time more effectively as their efforts are primarily directed at flaws that pose a genuine threat to the business. To grasp the scale of these efficiency gains and resource savings, consider this FIRST study. Prioritizing vulnerabilities using EPSS alone allows you to focus on just 3% of vulnerabilities while achieving 65% efficiency. In stark contrast, prioritizing by CVSS-B requires addressing a whopping 57% of vulnerabilities with a dismal 4% efficiency. Here, “efficiency” refers to successful remediation of vulnerabilities that have actually been exploited in the wild.
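The triage step described above can be sketched in a few lines of Python. The CVE identifiers, scores, and the 0.1 EPSS cutoff are all made-up assumptions for illustration, not recommended values:

```python
def prioritize(vulns, epss_threshold=0.1):
    """Risk-based triage sketch: patch first what is actually likely to be
    exploited (high EPSS probability), not just what has a high CVSS base
    score. Each vuln is a dict with "cve", "cvss", and "epss" keys."""
    urgent = [v for v in vulns if v["epss"] >= epss_threshold]
    return sorted(urgent, key=lambda v: v["epss"], reverse=True)

# Hypothetical scores: a "critical" CVSS rating alone doesn't make the cut
# if the probability of exploitation in the wild is negligible.
vulns = [
    {"cve": "CVE-A", "cvss": 9.8, "epss": 0.02},
    {"cve": "CVE-B", "cvss": 7.5, "epss": 0.89},
    {"cve": "CVE-C", "cvss": 6.1, "epss": 0.31},
]
order = [v["cve"] for v in prioritize(vulns)]
print(order)  # ['CVE-B', 'CVE-C']
```

A real RBVM pipeline would of course also weigh asset criticality, exposure, and compensating controls, but even this toy filter shows how the work queue shrinks once likelihood of exploitation enters the picture.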
Exploitation of this pair of vulnerabilities allows unauthenticated attackers to take control of SharePoint servers — not only gaining access to all the information stored on them, but also using the servers to spread the attack to the rest of the infrastructure.
Researchers at EYE Security state that even before the Microsoft bulletins were published, they had seen two waves of attacks using this vulnerability chain, resulting in dozens of servers being compromised. Attackers install web shells on vulnerable SharePoint servers and steal cryptographic keys that can later allow them to impersonate legitimate services or users. This way they can regain access to compromised servers even after the vulnerability has been patched and the malware removed.
Our solutions proactively detected and blocked malicious activity linked to the ToolShell attack. Our telemetry data shows exploitation attempts worldwide, including in Africa, Asia, the Middle East, and Russia. A detailed investigation of the attack and associated vulnerabilities, along with indicators of compromise, is available in a Securelist blog post.
Researchers noticed that the exploitation of the CVE-2025-53770 and CVE-2025-53771 vulnerability chain is very similar to ToolShell — a chain of two other vulnerabilities, CVE-2025-49704 and CVE-2025-49706, demonstrated in May as part of the Pwn2Own hacking competition in Berlin. Those two were patched by previously released updates — but apparently not perfectly.
By all indications, the new pair of vulnerabilities is an updated ToolShell chain, or rather a bypass of the patches that fix it. This is confirmed by Microsoft’s remarks in the description of the new vulnerabilities: “Yes, the update for CVE-2025-53770 includes more robust protections than the update for CVE-2025-49704. The update for CVE-2025-53771 includes more robust protections than the update for CVE-2025-49706.”
The first thing to do is install the patches. Before rolling out the emergency updates released yesterday, you should install the regular July updates, KB5002741 and KB5002744. At the time of writing, there were no patches for SharePoint 2016, so if you’re still using this version of the server, you’ll have to rely on compensating measures.
You should also make sure that robust protective solutions are installed on the servers and that the Antimalware Scan Interface (AMSI), which helps Microsoft applications and services to interact with running cybersecurity products, is enabled.
Researchers recommend replacing machine keys in ASP.NET on vulnerable SharePoint servers (you can read how to do this in Microsoft’s recommendations), as well as other cryptographic keys and credentials that may have been accessed from the vulnerable server.
If you have reason to suspect that your SharePoint servers have been attacked, it is recommended that you check them for indicators of compromise, primarily the presence of the malicious spinstall0.aspx file.
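As a rough illustration, a sweep for that file name could look like the Python sketch below. The SharePoint layouts path is an assumption about a typical installation, and real incident response should look for far more than a single file name:

```python
from pathlib import Path

# Known ToolShell indicator of compromise (file name only); extend this
# set with other IoCs from the Securelist write-up as needed.
IOC_NAMES = {"spinstall0.aspx"}

def find_iocs(root):
    """Recursively list files whose names match known indicators."""
    return [p for p in Path(root).rglob("*") if p.name.lower() in IOC_NAMES]

if __name__ == "__main__":
    # Assumed default SharePoint install location; adjust for your server
    layouts = r"C:\Program Files\Common Files\microsoft shared\Web Server Extensions"
    for hit in find_iocs(layouts):
        print("Possible compromise:", hit)
```

A hit from such a scan is a reason to start a full investigation, not the end of one: the attackers may already have stolen keys that survive file cleanup, as noted above.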
If your internal incident response team lacks the in-house resources to identify indicators of compromise or remediate the incident, we advise you to contact third-party experts.
Here’s how it works. The victim receives an email, seemingly from HR, addressing them by name. The email informs them of changes to HR policy regarding remote work protocols, available benefits, and security standards. Naturally, any employee would be interested in these kinds of changes, so their cursor drifts toward the attached document, which, incidentally, also features the recipient’s name in its title. What’s more, the email carries a convincing banner stating that the sender is verified and the message came from a safe-sender list. As experience shows, this is precisely the kind of email that deserves extra scrutiny.
For starters, the entire email content — including the reassuring green banner and the personalized greeting — is an image. You can easily check this by trying to highlight any part of the text with your mouse. A legitimate sender would never send an email this way; it’s simply impractical. Imagine an HR department having to save and send individual images to every single employee for such a widespread announcement! The only reason to embed text as an image is to bypass email antispam or antiphishing filters.
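This check can even be automated. Below is a minimal sketch using Python’s standard email library; the message built here is a synthetic stand-in for the campaign email, and production mail filters naturally rely on many more signals than this one heuristic:

```python
from email.message import EmailMessage

def looks_image_only(msg):
    """Heuristic: flag a message whose only content parts are images —
    the 'entire email is a picture' trick described above."""
    types = [part.get_content_type() for part in msg.walk()
             if not part.is_multipart()]
    return bool(types) and all(t.startswith("image/") for t in types)

# Build a synthetic phishing email whose whole body is a single PNG
phish = EmailMessage()
phish["Subject"] = "Updated HR Policy"
phish.add_attachment(b"\x89PNG...", maintype="image", subtype="png")
print(looks_image_only(phish))  # True
```

A legitimate announcement with a text body and an attached image would pass this check, while an email that is nothing but pictures gets flagged for closer inspection.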
There are other, more subtle clues in the email that can give away the attackers. For example, the name and even the format of the attached document don’t match what’s mentioned in the email body. But compared to the “picturesque” email, these are minor details.
Of course, the attached document doesn’t contain any actual HR guidelines. What you’ll find is a title page with a small company logo and a prominent “Employee Handbook” header. It also includes a table of contents with items highlighted in red as if to indicate changes, followed by a page with a QR code supposedly leading to the full document. Finally, there’s a very basic instruction on how to scan QR codes with your phone. The code, of course, leads to a page where the user is asked to enter corporate credentials — which is what the authors of the scheme are after.
The document is peppered with phrases designed to convince the victim it’s specifically for them. Even their name is mentioned twice: once in the greeting and again in the line “This letter is intended for…” that precedes the instruction. Oh, and yes, the file name also includes their name. But the first question this document should raise is: what’s the point?
Realistically, all this information could have been presented directly in the email without creating a personalized, four-page file. Why would an HR employee go to such lengths and create these seemingly pointless documents for each employee? Honestly, we initially doubted that scammers would bother with such an elaborate setup. But our tools confirm that all the phishing emails in this campaign indeed contain different attachments, each unique to the recipient’s name. We’re likely seeing the work of a new automated mailing mechanism that generates a document and an email image for each recipient… or perhaps just some extremely dedicated phishers.
A specialized security solution can block most phishing email messages at the corporate mail server. In addition, all devices used by company employees for work, including mobile phones, should also be protected.
We also recommend educating employees about modern scam tactics — for example, by sharing resources from our blog — and continually raising their overall cybersecurity awareness. This can be achieved through platforms like Kaspersky Automated Security Awareness.