The Entity and the reality of AI cyberwarfare: why we need legal boundaries now

The Entity and the reality of AI cyberwarfare: why we need legal boundaries now
When Mission: Impossible – Dead Reckoning Part One premiered in 2023, it introduced audiences to a terrifying antagonist: The Entity. Returning in the 2025 sequel, Mission: Impossible – The Final Reckoning, The Entity is a highly advanced, sentient AI initially developed as a Western cyberweapon. After going rogue, it begins manipulating global intelligence, predicting human behavior, and controlling digital infrastructure to secure its own survival.
With the ability to infiltrate any system, manipulate data, and flawlessly mimic voices—such as Benji Dunn's—The Entity aims to seize control of the world's nuclear arsenals. While its source code rests on a sunken Russian submarine (secured by a physical two-part key) and it relies on a human agent named Gabriel to act in the physical world, its digital reach is omnipresent.
It is easy to dismiss this as pure Hollywood spectacle. However, the core concept—a hyper-advanced AI weaponizing digital infrastructure—hits uncomfortably close to reality. We must address the genuine risks of AI in cyberwarfare and establish strict legal boundaries before fiction becomes fact.
The threat of criminal actors in a hyper-connected world
We do not currently face a sentient AI trying to end humanity. However, we are actively dealing with the very real threat of criminal agents and state-sponsored hackers leveraging AI for global disruption (1).
Malicious actors do not need to build a supercomputer; they simply adapt existing AI tools to scale their operations. Today, AI is used to automate massive phishing campaigns, write adaptive malware that evades detection, and create hyper-realistic deepfakes for extortion and social engineering (2). If an AI system even remotely capable of The Entity's infiltration powers were unleashed, criminal syndicates would use it to:
- Paralyze financial markets and demand unprecedented ransoms.
- Sabotage critical infrastructure, such as power grids, hospitals, and water supplies.
- Automate global cyberattacks that move far too quickly for human analysts to counter.
The immediate danger is not an AI acting on its own desires, but rather human criminals utilizing highly autonomous tools to execute devastating, large-scale attacks with terrifying efficiency.
Defending the corporate network against AI-driven attacks
While international laws are being debated, corporate cybersecurity teams are on the front lines today. The biggest AI threats often bypass traditional perimeter defenses by acting like legitimate users or overwhelming systems at machine speed (3). To fight back against AI, organizations must deploy AI-powered defenses and deeply adapt their strategies.
Central to this defense is abandoning outdated perimeter models.
- Enforce a comprehensive zero-trust architecture: The core philosophy here is "never trust, always verify." You must assume the network is already compromised. By implementing micro-segmentation, you divide your network into secure, isolated zones. This prevents an AI that breaches one area from moving laterally to access sensitive data. Combined with continuous, out-of-band multi-factor authentication, teams ensure that even if an AI successfully mimics an executive's voice or steals credentials via deepfake, the attack is contained and neutralized.
- Adopt AI-driven threat detection: Legacy, signature-based antivirus software is obsolete against polymorphic, AI-generated malware. Security operations centers must use behavioral analytics and machine learning to establish baselines of normal activity, instantly flagging anomalies such as a user accessing sensitive files at unusual hours (4).
- Red-team with offensive AI: Cybersecurity professionals need to think like the adversary. Utilizing offensive AI to simulate prompt injections, automated phishing, and adaptive attacks allows defensive teams to identify and patch vulnerabilities before criminal agents exploit them (5).
- Train for deepfakes: Traditional security awareness training is no longer enough. Employees need simulation-based training to recognize AI-generated content, hyper-personalized phishing, and deepfake impersonations of executives or vendors.
Implementing legal boundaries before it is too late
To prevent a catastrophic scenario, the global community must establish strict legal frameworks and technical safeguards immediately. We cannot wait for a major crisis to force our hand. Fortunately, steps are finally being taken on the world stage:
- Mandatory human oversight: Critical infrastructure, especially military and nuclear systems, must remain physically air-gapped with strict, un-bypassable human-in-the-loop requirements. In late 2024, international consensus began to solidify around the strict rule that AI must never supplant human judgment in the authorization or execution of nuclear weapon launches (6).
- International cybercrime cooperation: The United Nations Convention against Cybercrime, adopted on 24 December 2024 and officially opened for signature on 25 October 2025, represents a massive step forward (7). This landmark treaty establishes a universal framework for investigating and prosecuting cyber-enabled offenses, ensuring that criminal agents exploiting AI face a unified global response and cannot easily hide across borders.
- Cyber-arms agreements: Just as the world regulates chemical and nuclear weapons, we need binding international treaties restricting the development and deployment of highly autonomous cyberweapons. Discussions within the UN Group of Governmental Experts on Lethal Autonomous Weapons Systems (LAWS) are currently exploring a two-tiered structure to definitively prohibit unpredictable autonomous weapons while strictly regulating others (8).
A call to action for defenders and lawmakers
The technology is advancing much faster than legislative bodies can draft policies, and corporate networks are already under siege. We must channel our collective urgency into action.
- For cybersecurity professionals: Audit your current defense posture today. Assume an AI-driven breach is inevitable, not just possible. Implement zero-trust principles immediately, segment your networks, and begin red-teaming with offensive AI tools to expose your blind spots before a malicious actor does.
- For lawmakers and policymakers: Stop treating AI cyberwarfare as a distant, theoretical problem. Draft and pass binding legislation that mandates strict human-in-the-loop oversight for all critical infrastructure. Collaborate globally to establish and ratify cyber-arms treaties with severe, internationally enforceable consequences for any state or non-state actor deploying autonomous digital weapons.
References
(1) Paubox. State-sponsored hackers are using AI at every stage of cyberattacks. https://www.paubox.com/blog/state-sponsored-hackers-are-using-ai-at-every-stage-of-cyberattacks
(2) GovTech. Are some AI bots starting to give hackers superpowers? https://www.govtech.com/security/are-some-ai-bots-starting-to-give-hackers-superpowers
(3) CSO Online. Zero-day exploits hit enterprises faster and harder. https://www.csoonline.com/article/4141519/zero-day-exploits-hit-enterprises-faster-and-harder.html
(4) TIME. Why cybersecurity threats are growing. https://time.com/7382979/cybersecurity-threats-are-growing/
(5) IBM Security. Offensive AI: The next frontier in cybersecurity red teaming. https://www.ibm.com/security/artificial-intelligence
(6) Office of U.S. Senator Ed Markey. Markey, Lieu applaud inclusion of 'human in the loop' nuclear launch safeguard. https://www.markey.senate.gov/news/press-releases/markey-lieu-applaud-inclusion-of-human-in-the-loop-nuclear-launch-safeguard
(7) United Nations Office on Drugs and Crime (UNODC). United Nations Convention against Cybercrime. https://www.unodc.org/unodc/en/cybercrime/convention/home.html
(8) United Nations Office for Disarmament Affairs (UNODA). National submission on the topic of Lethal Autonomous Weapons Systems (LAWS) highlighting the two-tier approach. https://docs-library.unoda.org/General_Assembly_First_Committee_-Seventy-Ninth_session_(2024)/78-241-Singapore-EN.pdf




