2024-03-23
Cornell researchers develop AI-powered computer worm
A team of researchers from Cornell Institute of Technology has unveiled a pioneering yet deeply concerning advancement – a computer worm that harnesses the power of generative artificial intelligence to autonomously replicate and spread across AI systems.
Dubbed "Moris II," in a nod to the infamous Morris worm that wrought havoc on the internet three decades ago, this malware represents a new frontier in cyber threats. Its ability to infiltrate AI systems, steal personal data from emails, and self-propagate raises alarms about the vulnerabilities inherent in modern AI technologies.
In an exclusive interview with Wired magazine, Ben Nassi, one of the study's authors, sounded the alarm bells. "Now you have the opportunity to conduct or carry out a new type of cyber attack that has never been seen before," he cautioned gravely.
The Worm's Modus Operandi At the core of Moris II's insidious nature lies a technique termed "self-replicating enemy hint." By feeding the AI system a carefully crafted instruction, or "hint," the researchers were able to coax the system into generating a new hint in response – a response that could then be used to infect other AI systems via messaging platforms.
To demonstrate their creation's capabilities, the Cornell team developed an experimental email system powered by cutting-edge generative AI models like OpenAI's ChatGPT, Google's Gemini, and the open-source LLaVA. They then seeded the system's database with a self-replicating query, allowing the AI to generate responses based on this "poisoned" data.
The results were alarming. Each AI-generated response contained the seeds of a new infection vector, capable of spreading to other systems when shared through messaging apps or email – a self-propagating cycle of digital contagion.
Extending beyond mere text, Moris II can even embed itself within images as a hidden hint, insidiously infecting email systems through seemingly innocuous visual content.
A Grave Threat to Privacy However, the implications of this research extend far beyond the spread of the worm itself. Nassi revealed that Moris II could extract a wide array of confidential information from emails, including "names, phone numbers, credit card numbers, national insurance numbers — anything that is considered confidential."
This revelation casts a harsh light on the potential consequences of such an attack, which could lead to privacy violations, financial fraud, and untold harm to end-users on a massive scale.
A Wake-Up Call for AI Security While the demonstration by the Cornell researchers is undoubtedly a technical tour de force, it also serves as a sobering wake-up call for the AI industry. As Nassi emphasized, the study's primary objective is not to critique the vulnerabilities of existing AI models, but rather to underscore the urgent need for enhanced security measures.
"As AI becomes more accessible and its understanding expands, the likelihood of its use with malicious intent increases," Nassi warned, a sentiment that should resonate deeply within the tech community.
The team has already reported their findings to industry giants OpenAI and Google, but the onus now falls on AI developers and cybersecurity experts to fortify their defenses against this new breed of threat.
In an era where artificial intelligence is rapidly permeating every aspect of our digital lives, the specter of Moris II serves as a stark reminder that we must remain vigilant, proactive, and ever-mindful of the need to balance technological progress with robust safeguards against those who would exploit it for nefarious ends.
Share with friends:
Write and read comments can only authorized users
Last news