A powerful iPhone-hacking toolkit, once thought to be the exclusive domain of highly resourced actors, is now reportedly in the hands of foreign spies and cybercriminals, raising serious questions about mobile security worldwide. Yet the true origin, ownership, and path of Coruna, the toolkit in question, remain hotly debated among researchers and industry insiders.
A recent Google security briefing introduces Coruna, a sophisticated iPhone exploit kit designed to silently install malware when a user loads a website containing its code. It reportedly leverages 23 iOS vulnerabilities and comprises five complete exploit chains capable of bypassing iPhone defenses, including protections in the WebKit browser engine. Such breadth and depth suggest development by a well-resourced, likely state-affiliated group.
Google’s investigation traces Coruna’s lineage through several stages. First, components appeared in February of last year in a form attributed to a “customer of a surveillance company.” Five months later, a fuller version emerged in an espionage operation linked to a suspected Russian intelligence group, which embedded the exploit in a common visitor-counting module on Ukrainian sites. More recently, Coruna surfaced again in a for-profit campaign aimed at stealing cryptocurrency from Chinese-language users visiting crypto and gambling sites.
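Compromising one widely embedded widget, such as a visitor counter, turns every site that loads it into a delivery point for the exploit. Defenders often catch this kind of supply-chain injection by auditing which third-party script hosts a page actually references. The sketch below is purely illustrative (the page content, hostnames, and allowlist are hypothetical and not tied to the Ukrainian campaign):

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

# Hypothetical policy: the site normally loads only its own scripts and a
# known visitor-counter widget; any other script host is flagged for review.
ALLOWED_HOSTS = {"example-news.ua", "counter.example-widgets.com"}

class ScriptAuditor(HTMLParser):
    """Collects external <script src=...> references from unexpected hosts."""

    def __init__(self):
        super().__init__()
        self.suspicious = []

    def handle_starttag(self, tag, attrs):
        if tag != "script":
            return
        src = dict(attrs).get("src")  # inline scripts (no src) are skipped
        if src and urlparse(src).hostname not in ALLOWED_HOSTS:
            self.suspicious.append(src)

PAGE = """
<html><body>
<script src="https://counter.example-widgets.com/count.js"></script>
<script src="https://exploit.example-attacker.net/payload.js"></script>
</body></html>
"""

auditor = ScriptAuditor()
auditor.feed(PAGE)
print(auditor.suspicious)  # only the unexpected script host is flagged
```

In practice the harder case is the one Google describes: the injected code lives inside the trusted widget itself, so host allowlisting alone is not enough and content integrity checks (such as Subresource Integrity hashes) are needed as well.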
A notable gap in Google’s report is the absence of a named origin for the surveillance-company customer. iVerify, a mobile-security firm that analyzed a variant found on infected Chinese sites, speculates that Coruna may have begun life as a toolkit developed for or purchased by the U.S. government. Both Google and iVerify highlight that Coruna contains modules previously seen in the Triangulation operation, a malware framework Russia has associated with the NSA, though the U.S. government did not address that claim.
iVerify’s assessment adds another layer: the code appears to have been written by English-speaking developers and is unusually polished, modular, and capable of high-level exploitation. Rocky Cole, iVerify’s cofounder, even suggests this could be among the first clear examples of U.S. government tooling escaping into adversaries’ hands and becoming a tool for criminals as well.
“An EternalBlue moment for mobile malware” is how some researchers describe Coruna’s trajectory. EternalBlue was a notorious Windows exploit leaked from the NSA in 2017 that fueled major global attacks. If Coruna originated as a U.S. government tool, the concern is that highly capable exploits can leak and proliferate, undermining mobile security as adversaries gain access to advanced techniques.
Apple has patched the vulnerabilities Coruna exploited in updates spanning iOS 13 through iOS 17.2.1. Current evidence suggests the toolkit targeted the WebKit layer, potentially affecting Safari users on older iOS versions, with no confirmed indication that other browser engines were compromised. Importantly, Coruna checks for Lockdown Mode, Apple's strongest security setting, and refrains from hacking devices where it is enabled.
Despite these safeguards, iVerify estimates that Coruna infected tens of thousands of devices. Their measurements, based on traffic to a command-and-control server associated with the cybercriminal variant infecting Chinese-language sites, suggest around 42,000 devices may have been compromised in that campaign alone. The scope of infections among Ukrainians visiting compromised sites in the Russian espionage operation remains unclear, as Google declined further comment and Apple did not respond to inquiries.
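A figure like 42,000 is an inference from server-side telemetry, not a device census. As a rough illustration of that kind of measurement (the log format, field names, and data are hypothetical and do not represent iVerify's actual methodology), a lower-bound estimate can be made by counting distinct device identifiers seen beaconing to the command-and-control server:

```python
import csv
from io import StringIO

# Hypothetical C2 access log: one row per beacon, with a per-device ID.
# Real-world estimates must also discount researcher traffic, internet
# scanners, and duplicate identifiers before trusting the count.
SAMPLE_LOG = """timestamp,device_id,src_ip
2025-01-02T10:00:00Z,ab12,203.0.113.5
2025-01-02T10:05:00Z,ab12,203.0.113.5
2025-01-02T11:00:00Z,cd34,198.51.100.9
2025-01-03T09:30:00Z,ef56,192.0.2.44
"""

def estimate_compromised(log_text: str) -> int:
    """Lower-bound estimate: number of distinct device IDs seen beaconing."""
    reader = csv.DictReader(StringIO(log_text))
    return len({row["device_id"] for row in reader})

print(estimate_compromised(SAMPLE_LOG))  # 3 distinct devices in the sample
```

The repeated beacon from device `ab12` is deduplicated, which is why such counts are usually reported as estimates of devices rather than raw connection volumes.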
In iVerify’s forensic analysis of the cybercriminal iteration, the malware’s core was found to be well-crafted and highly modular, with additional payloads aimed at draining cryptocurrency wallets, exfiltrating photos, and sometimes harvesting emails. The added components appeared less polished than the underlying toolkit, implying separate authors or later integrations by the criminals who obtained the code.
A key question is whether Coruna's sophisticated architecture points to a single origin or whether it represents a mosaic assembled from multiple sources. Cole notes that while some Triangulation-like components exist, many Coruna elements appear novel and unique, hinting at a cohesive original author or a tightly choreographed development effort. He cautions that the kit's overall coherence makes a simple, piecemeal assembly less likely.
If Coruna did originate as U.S. government tooling, the central mystery becomes how such tools slipped into foreign and criminal hands. Zero-day exploits traded through brokers can fetch tens of millions of dollars, and there is growing evidence that these tools are not restricted to one side of the geopolitical spectrum. This reality has sparked debate about whether government-developed capabilities should be controlled in ways that prevent leakage or misuse.
One notable real-world parallel is the case of a Trenchant executive who was sentenced to seven years’ imprisonment for selling exploits to various buyers, including a Russian broker, illustrating how zero-day tools can travel from legitimate institutions to the open market. Industry insiders describe exploit brokers as primarily profit-driven and often lacking exclusivity, making leaks and cross-border transfers more likely.
Bottom line: Coruna’s journey from potential U.S. government origins to foreign and criminal use underscores a broader vulnerability in modern mobile security. The tech underpinning these exploits—remote code execution, stealthy persistence, and modular design—remains a potent threat that low- and high-resource attackers alike can pursue.
A question to consider: should governments invest in offensive cybersecurity tools if they fear leakage, or should they pivot toward stronger defensive strategies and stricter export controls? What responsibilities do researchers and vendors bear when they discover that highly advanced exploits can circulate beyond their intended users? And where should the line be drawn between national security interests and the broader risk to civilian users?