Tycoon 2FA phishing empire dismantled in global cybercrime crackdown

A sprawling cybercrime platform that helped thousands of attackers bypass modern authentication protections has been disrupted in a coordinated global operation led by technology companies, cybersecurity researchers and law enforcement agencies.
The takedown targeted Tycoon 2FA, one of the most prolific phishing-as-a-service platforms in operation in recent years, and underscores how identity-based attacks have become a central battleground in modern cybersecurity.
As organizations move more operations into cloud platforms and rely on digital identities to manage access, phishing campaigns that compromise those identities can provide attackers with direct entry into critical systems.
Through a combination of legal action, infrastructure seizures and cross-border intelligence sharing, the coalition dismantled key parts of the service’s technical backbone and seized hundreds of domains used to support its campaigns.
The operation illustrates both the scale of the modern phishing economy and the increasingly coordinated efforts required to disrupt it.
At the center of the disruption effort was a partnership involving Microsoft, Europol, and a broad set of cybersecurity companies and nonprofit organizations.
The coalition included firms such as Trend Micro, Cloudflare, Intel 471, Proofpoint, SpyCloud and Coinbase, along with intelligence-sharing groups and law enforcement agencies from multiple European countries.
The combined effort targeted the infrastructure powering Tycoon 2FA’s operations, including the domains used to host phishing pages and administrative panels.
As part of the operation, investigators seized roughly 330 domains that formed the core of the service’s infrastructure. These domains hosted control panels used by cybercriminals as well as fake login pages designed to harvest credentials from victims.
The seizures were carried out under a court order in the United States and supported by coordinated enforcement actions in several European jurisdictions.
Since appearing around 2023, Tycoon 2FA had grown into one of the most significant drivers of phishing activity worldwide. The platform allowed criminals to conduct sophisticated attacks that could defeat multi-factor authentication, the security measure widely adopted by organizations to protect accounts beyond a simple password.
By mid-2025, the service was responsible for about 62 percent of the phishing attempts blocked by Microsoft’s systems, with some months seeing more than 30 million malicious emails sent through the infrastructure.
The impact was substantial. Researchers estimate that the service has been linked to roughly 96,000 victims globally, including tens of thousands of Microsoft customers whose accounts were targeted or compromised.
Healthcare organizations, schools and universities were among the hardest hit sectors, with phishing campaigns disrupting operations and exposing sensitive data.
The platform’s success lay in its design. Tycoon 2FA operated as an adversary-in-the-middle phishing system, a technique that intercepts communication between a victim and a legitimate service during the login process.
When a user entered their credentials and responded to authentication prompts, the system relayed that information in real time to the actual service while simultaneously capturing passwords, authentication codes and session cookies.
Those stolen session tokens allowed attackers to log in to accounts even if the password was later changed, unless all active sessions were revoked. This approach effectively undermined traditional multi-factor authentication protections, which were designed to stop attackers who only possess a password.
By capturing authentication tokens generated during a valid login session, the Tycoon 2FA infrastructure allowed attackers to assume the identity of legitimate users and move through systems without triggering many security alerts.
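The session-token mechanics described above can be illustrated with a toy server-side session store. This is a hypothetical sketch (all names and logic invented for illustration), not how any real identity provider is implemented, but it shows why a stolen token survives a password reset:

```python
import secrets

class SessionStore:
    """Toy model: session validity is checked against issued tokens,
    not against the password, so a stolen token outlives a password reset."""

    def __init__(self):
        self.passwords = {}  # user -> password
        self.sessions = {}   # session token -> user

    def login(self, user, password):
        if self.passwords.get(user) != password:
            return None
        token = secrets.token_hex(16)  # the value an AiTM proxy would capture
        self.sessions[token] = user
        return token

    def is_valid(self, token):
        return token in self.sessions  # the password is never re-checked here

    def change_password(self, user, new_password):
        self.passwords[user] = new_password  # existing sessions untouched

    def revoke_all_sessions(self, user):
        self.sessions = {t: u for t, u in self.sessions.items() if u != user}

store = SessionStore()
store.passwords["victim@example.com"] = "old-secret"
stolen = store.login("victim@example.com", "old-secret")

store.change_password("victim@example.com", "new-secret")
print(store.is_valid(stolen))  # True: a password reset alone does not evict the attacker

store.revoke_all_sessions("victim@example.com")
print(store.is_valid(stolen))  # False: revoking sessions is what invalidates the token
```

This is why incident-response guidance for adversary-in-the-middle phishing pairs the password reset with revocation of all active sessions.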
The service also lowered the barrier to entry for cybercrime. Tycoon 2FA operated as a subscription-based phishing-as-a-service platform, meaning criminals could rent access to the toolkit without needing deep technical skills.
The system provided prebuilt phishing templates that mimicked widely used services such as Microsoft 365 and Google Workspace, along with hosting infrastructure and dashboards for managing campaigns and viewing stolen credentials.
This model reflects a broader trend in the cybercrime ecosystem where specialized services are sold or leased in underground markets to enable large-scale attacks.
Instead of building tools themselves, attackers can purchase ready-made capabilities including phishing kits, malware distribution services, hosting infrastructure and stolen credentials. The result is an interconnected economy that functions much like a legitimate technology supply chain.
Investigators said Tycoon 2FA fit squarely within that ecosystem. The service was reportedly marketed and managed through encrypted messaging platforms such as Telegram and supported by partners responsible for payments, marketing and technical support.
Other illicit services handled mass email distribution or provided the servers used to host phishing infrastructure, allowing the entire operation to scale quickly. Trend Micro researchers who tracked the platform say its infrastructure included thousands of domains and supported a global network of operators.
The service generated enormous volumes of phishing traffic, delivering campaigns targeting enterprises, governments and individuals across multiple continents.
Analysis of victim data also illustrates the breadth of the threat. Intelligence gathered from exposed Tycoon 2FA panels revealed hundreds of thousands of captured credentials and authentication records.
Most of the compromised accounts were tied to corporate email domains rather than free consumer email providers, underscoring the platform’s focus on enterprise environments where access to a single account can open pathways into larger organizational systems.
For attackers, those compromised accounts often served as the starting point for broader intrusions.
Once inside an organization’s email or cloud collaboration environment, criminals could conduct business email compromise scams, steal sensitive data, or use the account to launch additional phishing campaigns targeting colleagues and partners. In some cases, access obtained through phishing operations later facilitated ransomware deployments.
The takedown effort also demonstrates how technology companies increasingly use civil litigation alongside traditional law enforcement methods to disrupt cybercrime infrastructure.
In this case, Microsoft’s Digital Crimes Unit filed a civil complaint in federal court to obtain legal authority to seize domains associated with the platform. The action was supported by threat intelligence gathered by private security companies and shared with international law enforcement agencies.
Europol played a central coordinating role through its Cyber Intelligence Extension Program, which is designed to move beyond intelligence sharing toward direct operational collaboration between governments and the private sector.
Authorities in countries including Latvia, Lithuania, Portugal, Poland, Spain and the United Kingdom participated in enforcement actions connected to the case.
Cybersecurity researchers emphasize that while such operations can significantly disrupt cybercrime infrastructure, they rarely eliminate it entirely. Platforms like Tycoon 2FA are part of a broader ecosystem in which new tools quickly emerge to replace those that are shut down.
Nonetheless, investigators say the dismantling of widely used services can have cascading effects by forcing attackers to rebuild infrastructure and raising the cost and complexity of their operations.
AI fraud pushing pace on need for advanced deepfake detection tools

A blog post for GetReal Security by Dr. Edward Amoroso, CEO of TAG Infosphere and research professor at NYU, looks at how and why information security executives can justify investment in deepfake detection.
“Executives fund technologies based on initiatives that demonstrate measurable risk reduction, economic value, and alignment with governance and compliance objectives,” Amoroso says. “Deepfake and continuous identity protection programs must therefore be framed not as experimental controls, but as ROI-driven investments.”
Amoroso says traditional cyber risk frameworks such as FAIR and ISO/IEC 27005 are applicable to deepfake and synthetic media threats, “even though the attack vector is new.”
“In both frameworks, risk is defined as a function of loss event frequency and loss magnitude. Deepfakes map cleanly into these structures once they are properly categorized as identity-based loss events rather than media anomalies.”
This reframing allows chief information security officers (CISOs) to “integrate deepfake risk into existing enterprise risk registers, rather than treating it as a parallel or experimental concern.”
Moreover, it makes economic sense. “On the investment side, deepfake protection costs are typically modest relative to these potential losses,” Amoroso says. “Detection platforms, continuous identity assurance tooling, and integration into collaboration environments represent a fraction of what organizations already spend on IAM, SOC operations, or fraud prevention. The economic argument becomes one of loss avoidance rather than productivity enhancement.”
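The loss-avoidance arithmetic described above can be sketched in a few lines using the FAIR-style definition of risk as loss event frequency times loss magnitude. The scenario and every dollar figure below are invented purely for illustration:

```python
def annualized_loss_exposure(loss_event_frequency, loss_magnitude):
    """FAIR-style point estimate: expected annual loss equals
    loss event frequency (events/year) times loss magnitude ($/event)."""
    return loss_event_frequency * loss_magnitude

# Hypothetical deepfake-enabled fraud scenario (illustrative numbers only):
baseline = annualized_loss_exposure(loss_event_frequency=2.0,
                                    loss_magnitude=400_000)   # $800,000/year
# Suppose detection tooling cuts event frequency by 80% and costs $150,000/year:
residual = annualized_loss_exposure(loss_event_frequency=0.4,
                                    loss_magnitude=400_000)   # $160,000/year
net_benefit = baseline - residual - 150_000
print(net_benefit)  # 490000.0 -> the loss-avoidance case for the spend
```

A real FAIR analysis would use ranges and Monte Carlo simulation rather than point estimates, but the framing is the same: the control is justified when avoided loss exceeds its cost.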
There is an increasing need to maintain agile compliance postures and auditability. But, “to sustain executive support, CISOs must define metrics that move beyond simple detection counts. Useful measures include detection accuracy across voice and video channels, coverage across high-risk workflows, false positive rates, and mean time to response for identity anomalies.”
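Rolling up the metrics named above from incident records is straightforward. The event schema and sample values here are hypothetical, chosen only to show the computation:

```python
def detection_metrics(events):
    """Hypothetical rollup of the metrics listed above.
    Each event: {"is_synthetic": ground truth, "flagged": detector verdict,
                 "minutes_to_response": response time if flagged, else None}"""
    tp = sum(e["is_synthetic"] and e["flagged"] for e in events)
    tn = sum(not e["is_synthetic"] and not e["flagged"] for e in events)
    fp = sum(not e["is_synthetic"] and e["flagged"] for e in events)
    response_times = [e["minutes_to_response"] for e in events
                      if e["flagged"] and e["minutes_to_response"] is not None]
    return {
        "accuracy": (tp + tn) / len(events),
        "false_positive_rate": fp / (fp + tn) if (fp + tn) else 0.0,
        "mean_time_to_response_min": sum(response_times) / len(response_times),
    }

sample = [
    {"is_synthetic": True,  "flagged": True,  "minutes_to_response": 12},
    {"is_synthetic": True,  "flagged": False, "minutes_to_response": None},
    {"is_synthetic": False, "flagged": True,  "minutes_to_response": 30},
    {"is_synthetic": False, "flagged": False, "minutes_to_response": None},
]
print(detection_metrics(sample))
# {'accuracy': 0.5, 'false_positive_rate': 0.5, 'mean_time_to_response_min': 21.0}
```

A production program would additionally break these out per channel (voice versus video) and per high-risk workflow, as the quoted guidance suggests.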
Amoroso says the call to action is clear. “Enterprise security teams should begin to treat identity authenticity as a measurable control objective. When CISOs can quantify identity protection, they can justify it, and when they justify it, they can finally defend it at scale.”
DataVisor identifies AI readiness gap between concern and defense
DataVisor’s 2026 Fraud & AML Executive Report reveals a readiness gap between concern over AI-driven fraud and financial institutions’ ability to defend against it.
According to a release, 74 percent of surveyed senior fraud and AML leaders across banks, credit unions, fintechs and digital payments platforms cite AI-driven fraud as a top threat, yet 67 percent say their organizations lack the infrastructure to deploy effective AI defenses.
Fragmented data, legacy detection models, organizational silos and outdated governance compromise defenses.
“Financial institutions are facing attackers that operate at machine speed, but many defenses still operate at legacy operational speed,” says Yinglian Xie, CEO of DataVisor. “Closing the AI Readiness Gap requires modern foundations – unified data, adaptive machine learning, adoption of LLM-based AI agents, and operational models designed for continuous, real-time response. The organizations that modernize their infrastructure and workflows will be best positioned to stay ahead.”
Paper points to future in which AI cloning tools make voice biometrics obsolete
The Bloomsbury Intelligence and Security Institute (BISI) has collaborated with CyberWomen Groups C.I.C. on the publication of “When Voice Is No Longer Proof: AI Vocal Cloning and the Limits of Voice-Based Authentication,” a paper by Hannah-Rose Shearman, a student researcher in cybersecurity and digital forensics.
CyberWomen Groups C.I.C. is “a student-led initiative dedicated to diversifying STEM by supporting and connecting university students interested in or studying cybersecurity, regardless of gender identity.”
Shearman’s paper looks at how the increasing accessibility and realism of synthetic voice technology undermines the effectiveness of voice biometrics, creating significant vulnerabilities in voice-based verification, and driving high-risk sectors such as finance and public services away from voice as a reliable biometric identifier.
“Academic and industry research has demonstrated that modern voice cloning tools can replicate vocal characteristics with high fidelity and bypass traditional authentication controls,” Shearman writes.
“Techniques that were previously resource-intensive and difficult to scale can now be performed with minimal technical expertise, owing to the increasing accessibility of commercial voice synthesis platforms such as ElevenLabs, thereby weakening long-standing assumptions about the uniqueness and reliability of voice as a standalone proof of identity.”
In short, voice used to be considered secure. But the tools for replicating voices, or generating fake ones, have gotten good enough that voices can no longer be trusted. And with social media, there is ample reference material for cloning.
That has major implications for the “political, operational, security, and economic risk landscape for institutions that rely on voice-based authentication.”
“Elevated fraud activity results in direct financial losses and increased costs for investigation, remediation, and customer support,” says Shearman. Eventually, these outweigh the benefits of using voice authentication in the first place.
“At a political and regulatory level, the growing divergence between institutional authentication practices and emerging AI capabilities exposes gaps in consumer protection and accountability frameworks.”
“While organisations continue to deploy voice biometrics as a trusted control, victims of AI-enabled impersonation encounter limited avenues for redress in jurisdictions where legal frameworks have not yet adapted to address synthetic voice misuse.”
The paper concludes with a forecast, which calls for higher volumes of impersonation attempts and related pressures over the next 12 months as AI voice cloning continues to develop – and, over the long term, continued “erosion of trust in voice as a primary identity control.”
‘It’s Physical, Not Intellectual’: The Ethics of Correcting Assumptions About Disability
Graduate Highly Commended paper in the 2026 National Uehiro Oxford Essay Prize in Practical Ethics. By James Forsdyke. ORCID: https://orcid.org/0009-0008-2446-4586
In this paper, I will discuss ethical considerations when it comes to correcting assumptions about people with physical disabilities. In particular, I will discuss the assumption some individuals make that physical disabilities are necessarily accompanied…
The post ‘It’s Physical, Not Intellectual’: The Ethics of Correcting Assumptions About Disability first appeared on Practical Ethics.
The Cinema of Societal Collapse
Vikram Murthi

This year’s Oscar-nominated international feature films—especially The Secret Agent and Sirāt—tackle what it means to live and die under tyranny.
The post The Cinema of Societal Collapse appeared first on The Nation.
Humanoid home robots are on the market – but do we really want them?
Courtesy of 1X. By Eduardo B. Sandoval, UNSW Sydney. Last year, Norwegian-US tech company 1X announced a strange new product: “the world’s first consumer-ready humanoid robot designed to transform life at home”. Standing 168 centimetres tall and weighing in at 30 kilograms, the US$20,000 Neo bot promises to automate common household chores such as folding […]