Introduction — A Tale of Two Adams
This week, I'm taking another detour from my usual walkthrough format to try something a little different. This is another entry in my StumbleSOC Stories series, a collection of field notes and real‑world anecdotes. It's essentially my working diary, where I recount interesting incidents with actionable insights and takeaways.
This particular story is adapted from two real incidents separated by about ten months, both involving Microsoft Teams, voice phishing, and a suspiciously helpful individual named Adam. It's less a tale of deep technical chops and more a reminder of how effective social engineering can be when it shows up in a familiar place, at the wrong time, wearing a friendly display name.
Rather than walk through this as a formal incident response case study, I'm taking more of an inside‑out approach. We'll stumble through what tipped us off, how we scoped what happened, where our controls worked, and, more importantly, where they didn't. Along the way, I'll highlight the adjustments the organization made and how small changes in visibility, reporting, and detection made a real difference the second time around.
The goal here isn't to document a perfect response or to shame a user who did everything "wrong." It's to show how quickly trust can be established with ubiquitous, real‑time collaboration tools, how easily that trust can be exploited, and how defenders can adapt without cloistering themselves off from tools the business relies on every day.
This is A Tale of Two Adams. Let's go!
"Hello, this is Adam from your IT Department."
Back in the summer of 2025, I had my first close encounter with Microsoft Teams voice phishing. I know, fashionably late to the vishing party. Bear with me. A user in an organization I was supporting received a call with the display name "IT Support (Working From Home)". You can probably guess where this is going.
The security team picked up on this when alerts started popping for strange artifacts blocked by the IPS, but nothing surfaced from the EDR tooling, and there was no report from the user.
After a quick triage, the team pieced together that the user had attempted to open Quick Assist on their work laptop, a preinstalled remote support utility in Windows. That activity was blocked by the EDR network sensor due to a pre‑existing URL block indicator. A key detail here is that these informational‑level events tend to be pretty noisy in that environment, so they were suppressed and never surfaced in the queue.
From there, we identified that the user had also received a suspicious email containing instructions and a link to download and install another remote support tool, Zoho Assist. Shortly after, there was an attempt to download a ZIP file via curl. That connection attempt is ultimately what the IPS picked up and blocked.
Once this activity was confirmed, we pumped the brakes, isolated the device, and reached out to the user for more context:
"OMG, someone named Adam called me from 'IT Support (Working From Home) (External)' and ran something on my computer. After the call I was like… I hope that was legit."
It wasn't. The user had their suspicions too (maybe the EXTERNAL tag helped) and had already rebooted the device, cutting off the remote access. We mostly just confirmed what they already suspected. The attacker only had access to the device for a few minutes, but in that short window there was already a lot to learn.
Then, almost on cue, as we were discussing the situation, Adam called back to "resolve the connection issues."
Adam seemed like an initial access-type actor, not the man with the plan. He explained that he was totally stumped as to why the commands he ran hadn't worked (thanks, IPS) and that the issue needed to be escalated. Before that could happen though, he needed to get back on the system to adjust the power‑saving settings so it wouldn't go to sleep. The device, he insisted, had to be left on overnight.
That was the cue to shut down the party, bounce the bad guy, and clean up. While no damage was done, it warranted a hard look at what hadn't gone well. One immediate outcome was the decision to throw together a detection rule to flag Quick Assist activity so there'd be an early signal if this happened again. It wasn't elegant, but it got the job done, and it's an important detail for later.
I'd almost forgotten about this incident until…
10 months later: "Hello, this is (a different) Adam from your IT Department."
One Friday afternoon, right before a holiday weekend, an unexpected helpdesk ticket came in. A user reported that they'd missed a Microsoft Teams call from a suspicious caller with the display name "HELP DESK."
Then, only about five minutes later, that rarely seen and almost forgotten Quick Assist detection alert from way back in the summer popped on another workstation.
Could it be another Adam?
This time, we were ready.
When the team contacted the user immediately, they explained they had been speaking with a purported internal sysadmin and, wouldn't you believe it, also named Adam. The timing couldn't have been better for the attacker. With a holiday weekend looming and the user trying to wrap things up before heading out, New Adam's call landed at exactly the wrong moment. The user just wanted to be helpful and cross one last thing off their list.
As instructed, the victim tried the classic Windows + Ctrl + Q shortcut to open Quick Assist. Blocked.
New Adam pivoted, instructing them to download another remote management tool, AnyDesk. Also blocked.
At that point, the user recalled that Adam sounded disappointed and abruptly hung up. The effort, it seemed, was now too great.
"After asking for 2 tools he gave up. And he sounded disappointed when they came up the way they did."
The user headed into their last meeting of the day, not even registering that Adam had just been an attacker trying to gain remote access to their laptop.
Fortunately, the earlier incident had given the team a pretty clear view of the playbook. This time, preventive controls and detection did exactly what they were supposed to do. No access was granted. The lessons from the earlier stumble paid off, turning this into a tidy containment and a perfect teachable moment.
The user genuinely couldn't believe it. Teams showed Helpdesk, plain as day, and Adam sounded like he wanted to help, right? The thing is, if they'd paused and done even a basic directory search in their Teams client, they would've quickly discovered what we already knew. There was no Adam.
A name or picture in Teams doesn't prove who the caller is, and a tone doesn't prove intent. Yet this is exactly how trust gets exploited.
It might seem obvious in hindsight, but vishing can be just as effective as email‑based phishing. This isn't a new tactic, but it remains powerful because, much like classic tech support scams, it preys on implicit trust and urgency. In most organizations, IT is seen as the steward of the device. Users get conditioned to a familiar break‑fix rhythm. People want to help, and they want to be helped.
Most security awareness training, including in this case, still focuses heavily on email threats. It's often so fixated on blockbuster phishing statistics that the idea that something like a Teams voice call could be an entry point doesn't even cross a user's mind.
But that's the job, isn't it? Keep learning. Keep improving. Keep protecting.
So, what's the point of this story?
The Tech Recs
Let's talk options. What can we actually do to impose cost on the next would‑be Adams and defend against these attacks in an organization that can't simply wall itself in by blocking Teams calls from untrusted externals altogether?
This is very much a Microsoft‑focused perspective, but a lot of the thinking applies more broadly. The goal here isn't to solve vishing with a single control or to block everything outright. It's to add friction, make these attacks harder to pull off, noisier, and more likely to stumble before they go anywhere useful.
Much of what follows is grounded in Microsoft's guidance for protecting Teams through Microsoft Defender for Office 365, which provides a solid baseline. From there, the emphasis shifts to what actually helps in practice and how relatively small changes can turn the tide back in the defender's favor.
These recommendations pull from Microsoft SecOps documentation, sprinkled with firsthand observations from incidents like the ones above. I'm intentionally tying together ideas that Microsoft's documentation often treats in isolation and shaping them into something more prescriptive and practical.
If you work in the Microsoft ecosystem, a quick heads‑up: some of these capabilities are still in preview, and others evolve quickly as Teams and Defender continue to change. Depending on when you're reading this, specific features, settings, or behaviors may look a little different.
From here on out, we'll get concrete.
User-Initiated Reporting of Suspicious Teams Calls or Messages
Remember when I said one of the pitfalls was training fixated on email-based phishing?
The first, and arguably easiest, improvement here, especially in a straight‑up Microsoft shop, is empowering users to report more than just email. Suspicious Teams messages and calls should fall into that same muscle memory. And yes, there's a reasonably frictionless way to do this without forcing users to open a helpdesk ticket. Instead, those reports can flow directly into your incident queue.
Microsoft has clearly put more attention on this threat over the past year and has improved its offerings as a result. As this attack surface continues to grow, it makes sense to lean on your user base and take advantage of what's probably already been drilled into them ad nauseam. If users know how to report phishing emails, reporting suspicious Teams activity shouldn't feel like a new behavior.
From an admin perspective, this starts in the Teams Admin Center. Depending on whether you're using the classic policy experience or the newer unified policy pages, the paths look a little different.
For Messages:
- Classic: Teams Admin → Messaging → Messaging Policies → Report a security concern
- Unified policy page: https://admin.teams.microsoft.com/one-policy/settings/messaging → Report a security concern

For Voice Calls (Preview):
This capability is still in Preview at the time of writing, but it's worth enabling where available:
- Classic: Teams Admin → Voice → Calling Policies → Report a call
- Unified policy page: https://admin.teams.microsoft.com/one-policy/settings/calling → Report a call

Microsoft Defender Portal Enablement
Once reporting is enabled in the Teams Admin Center, there's a corresponding step on the Defender side. In the Microsoft Defender portal, you'll want to validate:
Microsoft Defender → Settings → Email & collaboration → User reported settings → Monitor reported items in Microsoft Teams

NOTE: "The value of this setting is meaningful only if reporting is turned on in the Teams admin center as described in the previous section." Meaning only turn it on if you've also enabled reporting in Teams, because it won't work otherwise.
While you're in there, it's also worth jumping down to the Microsoft Teams protection blade and confirming that Zero‑hour auto purge (ZAP) for Teams chats and channels is enabled. Specifically, "Protect Teams chats and channels using retroactive content scanning and removal" should be set to On, with appropriate quarantine policies behind it.

Once everything is set up, user‑initiated reporting gives you signals very similar to traditional phishing workflows, including alerts like:
- Teams message reported by user as security risk
- Teams message reported by user as not security risk
- Teams call reported by user as a security risk
- Teams call reported by user as not a security risk
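If you'd rather triage these from Advanced Hunting than fish them out of the alert queue, you can pivot on the alert titles. Here's a minimal sketch; the title strings below are assumptions based on the alert names above and may shift as the feature matures, so verify them in your own tenant:
// Surface user-reported Teams alerts and the accounts involved.
// Title strings are assumptions; confirm the exact values in your tenant.
AlertInfo
| where Timestamp > ago(30d)
| where Title has_any ("Teams message reported by user", "Teams call reported by user")
| join kind=leftouter (AlertEvidence | project AlertId, EntityType, AccountUpn) on AlertId
| project Timestamp, AlertId, Title, Severity, ServiceSource, EntityType, AccountUpn
A query like this can also feed a workbook or a scheduled report, which is a cheap way to spot a cluster of reports landing in a short window.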
The most important part, though, isn't the toggle. It's making sure this shows up in end‑user training. If users don't know it exists, they can't use it, and the whole control adds no value.
And if someone does fall for a vishing call? Turn them into a security champion. Let them share what happened with their team. Peer accountability and humility are often far more effective than yet another reminder from IT about "being vigilant."
Blocking External Contacts
Another obvious, quick win here is blocking the malicious external contact outright. It's not the most painful control on the pyramid of pain, but it does stop the immediate threat. In both of the incidents above, and based on broader industry reporting, these attackers seem to favor [.]onmicrosoft.com domains, which makes this option particularly useful.
What's changed recently, and for the better, is Microsoft's decision to let security teams participate directly through the Tenant Allow/Block List (TABL) on the Defender side.
This capability nicely complements the user reporting feature. More importantly, it gives security teams a way to act quickly when admin roles are split across functions. If your SOC sees a suspicious call artifact and needs to act on it, having to wait on a separate admin team can feel like unnecessary friction.
On the Teams side, the setting to block external users (like our two Adams calling from [.]onmicrosoft.com domains) lives here:
Teams Admin Center → External Collaboration → External Access → Organizational settings → Block specific users from communicating with people in my organization

NOTE: For this to work, the Allow or block external domain option must be set to either Allow all external domains or Block only specific external domains.
Now the fun part: Traditionally, maintaining this list required a Teams admin role. To better support least privilege, Microsoft added the option to allow the security team to manage blocked users and domains directly through Defender. Enabling "Allow my security team to manage blocked domains and blocked users" hands day‑to‑day action to the people already doing triage.

Once that setting is flipped, security teams can manage blocked Teams senders directly from the Defender portal using the Tenant Allow/Block List, without jumping between consoles or roles. It's not exciting, but it reduces response time, and in incidents like these, that's important.

Pull Audit Logs to Confirm Who Else Received a Call
Going back to the stories of the two Adams, one important step after blocking the contact was figuring out who else they might have reached. Scoping impact matters, and in this case we needed to understand whether this activity stopped with a single user or if others had also been contacted.
At the time of writing, this is a bit more painful than it should be. There isn't a relevant Advanced Hunting table available for Teams calls yet.
I emphasize yet because Microsoft recently announced in a blog post, "From Impersonation Calls to Transparent Reporting: Defending the New Front Door of Attacks" that this gap is closing. Among several updates, Microsoft highlighted that "Microsoft Defender has turned Teams calling from a blind spot into a first‑class SOC signal," including the ability to investigate Teams calling activity at scale via Advanced Hunting. Hooray!
That's a welcome change, but it didn't help us when we actually needed to identify who else the Adams had contacted. So for this incident, and for this blog, we had to fall back to what's available today and leverage the Unified Audit Log instead.
From a Defender perspective, this lives under:
Microsoft Defender → System → Audit
To scope activity, we might structure the search using the following parameters:
Workload
- MicrosoftTeams
Activities (Operations)
- CallParticipantDetail (friendly name: Added information about call participants)
- ChatCreated (friendly name: Created a chat)
Keyword search (optional)
- The attacker's email address, often from a [.]onmicrosoft.com domain, if it was reported by the user or visible in their Teams client

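As a side note, depending on your connectors, some of the same chat operations can also surface in the CloudAppEvents Advanced Hunting table (via the Microsoft 365 app connector in Defender for Cloud Apps). It's no substitute for the audit log, especially for call records, but it can make for a faster first sweep. A rough sketch, with a hypothetical attacker domain as a placeholder:
// Quick sweep for Teams chat activity tied to a suspected attacker domain.
// Requires the Microsoft 365 connector; call operations may not appear here.
CloudAppEvents
| where Timestamp > ago(7d)
| where Application == "Microsoft Teams"
| where ActionType in ("ChatCreated", "MessageSent")
| where tostring(RawEventData) has "attacker.onmicrosoft.com" // hypothetical placeholder
| project Timestamp, ActionType, AccountDisplayName, AccountId, IPAddress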
So, what's the downside of leaning on the audit log? Lag.
If you've spent any time in the Microsoft ecosystem, this won't be surprising. There's a delay between when an event occurs and when it becomes searchable in the audit log. Microsoft is fairly explicit about this, noting that for core services like Teams, "audit records typically become available 60 to 90 minutes after the event," and sometimes longer.
Still, it's better than nothing. In these cases, it helped confirm that only the original reporter and the second victim had any contact with the second Adam.
The takeaway here is simple. Knowing which logs exist, where to find them, and what their limitations are is critical when responding to an incident. Even imperfect visibility can make the difference between guessing and confidently scoping the impact.
Disable & Detect Quick Assist:
The last recommendation for this piece is to better control and monitor the remote management and support tools in your environment. I'm not going to deep‑dive into RMM tooling as a whole here, since this varies a lot by organization, tooling, and maturity. That said, understanding which tools are normal and approved, which aren't, and which are commonly abused by threat actors can go a long way toward detecting potentially malicious activity.
In this story, Quick Assist was already blocked, but it didn't raise an alert (learning moment #1). The second RMM tool, Zoho Assist, wasn't blocked at all (learning moment #2). That combination presented an opportunity to reassess how RMM tools were controlled overall, which directly led to successfully blocking AnyDesk during the second incident. It also highlighted something more interesting. Even a blocked tool like Quick Assist can still provide detection value and act as a kind of canary in the coal mine.
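Before narrowing in, a broad sweep for well-known remote-access binaries can help baseline what's actually launching in your environment. A rough sketch; the file names below are illustrative examples, not an exhaustive or authoritative list, so tune them to your approved tooling:
// Baseline sweep for common remote-access/RMM executables.
// File names are illustrative; extend or trim to match your environment.
DeviceProcessEvents
| where Timestamp > ago(30d)
| where FileName in~ ("quickassist.exe", "anydesk.exe", "za_connect.exe", "teamviewer.exe")
| summarize Launches = count(), FirstSeen = min(Timestamp), LastSeen = max(Timestamp) by DeviceName, FileName
| sort by LastSeen desc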
So rather than trying to solve RMM tooling holistically in this post, let's focus on that first learning moment: Quick Assist. It's built into Windows, frequently abused in social engineering attacks, and a good place to impose some friction.
In a Microsoft Defender for Endpoint environment, Microsoft's own guidance recommends disabling Quick Assist by blocking its service endpoint using a URL‑based indicator. Specifically, this means blocking traffic to:
https://remoteassistance.support.services.microsoft.com
While this approach won't uninstall Quick Assist (which is also a valid option), it prevents the application from establishing a session. More importantly for us, it preserves telemetry. That detection signal turned out to be really valuable compared to simply removing the tool entirely.
NOTE: "Blocking the endpoint will disrupt the functionality of Remote Help, as it relies on this endpoint for operation."
Detection Rules
Blocking alone is helpful, but pairing it with detection gives you early warning.
After disabling Quick Assist via a Microsoft Defender for Endpoint URL block indicator, creating a custom detection can help tip you off to suspicious behavior. Sure, you might see the occasional fat‑finger hotkey / accidental launch, but you might also catch the start of a vishing attack before it goes any further. That trade‑off is usually worth it.
You might be wondering why not just check the Create alert option when configuring the indicator. In my experience, that doesn't work reliably enough to depend on. Instead, creating a custom detection rule through Advanced Hunting has been far more consistent.
Below is a simple example query that looks for blocked connection attempts to the Quick Assist endpoint, where the initiating process contains quickassist.exe:
DeviceEvents
| where ActionType == "ExploitGuardNetworkProtectionBlocked"
| where RemoteUrl contains "remoteassistance.support.services.microsoft.com"
| where InitiatingProcessCommandLine has "quickassist.exe"
| summarize arg_max(Timestamp, *) by DeviceId
| project Timestamp, DeviceId, ReportId, DeviceName, RemoteUrl, InitiatingProcessFileName, InitiatingProcessCommandLine, InitiatingProcessId, ActionType

Once the query is in place, you can use the Create detection rule option directly from the KQL window, fill in the required fields, and set an appropriate rule frequency. This gives you visibility into when Quick Assist is being launched (or abused) in your environment, even though the session itself never succeeds.
In my experience, the value of being tipped off to a potential vishing attempt far outweighs the small number of false positives this kind of detection generates.
Key Takeaways
Let's wrap this up.
I'm always a little surprised by how easy it is to gain a victim's trust with something as simple as a display name. There's nothing particularly fancy going on here. Just "Help Desk" calling at the right moment and sounding plausible enough to gain remote access.
Looking back at the two Adams, a few things stand out.
First, maximize the tools you already have, and keep up with how they evolve. Microsoft Teams isn't just messaging and meetings anymore. It's an attack surface, and it's always changing. For security operations, capabilities like user reporting, tenant‑level blocking, and improved visibility into call activity didn't meaningfully exist not that long ago. Staying current on what your platform can actually do matters, especially when attackers are happy to take advantage of whatever is least defended.
Second, training only works if users are empowered to act. In both incidents, the users weren't reckless. They were trying to be helpful, and they were fooled. The difference the second time around wasn't intuition, it was having a simple, familiar way to report something that felt off. If you want users to be part of the defense, make the response path easy and predictable so they don't have to guess what to do under pressure.
Third, be honest about what didn't go as planned and improve it. This part is important. In the first incident, Quick Assist was blocked, but silent. That wasn't a failure, but it was a missed opportunity. Instead of ignoring it, we adjusted and turned that quiet block event into usable signal the next time around.
Finally, don't wait for perfect detection from a single tool. No one alert told this story end to end. Some signal came from the IPS. Some came from audit logs. Some came from users. Each piece on its own was incomplete, but together it was enough to understand scope, confirm access, and respond.
That's really the point of this whole post. Defending against these attacks is hard. You need visibility where you can get it, the knowledge to act on what you see, and a willingness to close the gaps when you stumble.
Thanks for reading!

References:
Microsoft Learn — Security Operations Guide for Teams protection in Microsoft Defender for Office 365: https://learn.microsoft.com/en-us/defender-office-365/mdo-support-teams-sec-ops-guide#enable-secops-to-hunt-for-threats-and-detections-in-microsoft-teams
Microsoft Learn — User reported settings in Microsoft Teams: https://learn.microsoft.com/en-us/defender-office-365/submissions-teams#turn-off-or-turn-on-user-reporting-of-teams-messages-in-the-defender-portal
Microsoft Learn — End user reporting for Teams Calling (Preview): https://learn.microsoft.com/en-us/microsoftteams/end-user-reporting-teams-calling
Cynet — "Emerging Threat: Microsoft Teams Vishing Campaign Continues": https://www.cynet.com/blog/emerging-threat-microsoft-teams-vishing-campaign-continues/#:~:text=The%20attacker%20contacts%20the%20targeted,to%20Office%20365%20for%20Business.
Microsoft Learn — Block domains and addresses in Microsoft Teams using the Tenant Allow/Block List: https://learn.microsoft.com/en-us/defender-office-365/tenant-allow-block-list-teams-domains-configure
Microsoft Tech Community — From Impersonation Calls to Transparent Reporting: Defending the New Front Door of Attacks: https://techcommunity.microsoft.com/blog/microsoftdefenderforoffice365blog/from-impersonation-calls-to-transparent-reporting-defending-the-new-front-door-o/4503050
Microsoft Learn — Search the audit log: https://learn.microsoft.com/en-us/purview/audit-search#:~:text=Microsoft%20doesn't%20guarantee%20a,commit%20to%20a%20specific%20time.
Microsoft Learn — Audit Log Activities (Teams): https://learn.microsoft.com/en-us/purview/audit-log-activities#teams-activities
Microsoft Learn — Disable Quick Assist within your organization: https://learn.microsoft.com/en-us/windows/client-management/client-tools/quick-assist#disable-quick-assist-within-your-organization
Microsoft Learn — Create custom detection rules: https://learn.microsoft.com/en-us/defender-xdr/custom-detection-rules
MITRE ATT&CK — Software — Quick Assist (S1209): https://attack.mitre.org/software/S1209/