AI-assisted bioterrorism is a risk worth considering
The opinions expressed in this article are those of the authors and do not in any way represent the editorial position of Euronews.
As the EU experienced with the AI Act, striking a balance between innovation and risk takes time. But the specific risks of AI-enabled bioterrorism can be tackled now, Kevin Esvelt and Ben Mueller write.
AI regulation has become an area of inter-state competition. The EU has just reached a deal on the AI Act, the US has issued a far-reaching executive order on AI, and the UK has convened political and industry leaders at its AI Safety Summit.
Across these discussions, one risk is attracting growing attention: AI-assisted bioterrorism, the prospect that individuals could cause a catastrophe by using AI tools to gain access to a pandemic-capable virus.
We recently showed that this is a risk worth considering. In an informal experiment, we tasked participants with using an open-source large language model, stripped of its safeguards, to help them obtain a pathogen capable of causing a pandemic.
Within three hours, participants identified many of the steps required to start a potentially catastrophic outbreak.
Particularly concerning was that the model advised participants on how to access the viral DNA, the blueprint for creating the pathogen, while evading existing screening methods. How much current models aid bioterrorism merely by summarising information already available online remains unclear.
However, current capabilities aside, the findings suggest that in the absence of robust safeguards, more advanced future models might provide malicious individuals with streamlined and accessible information on how to access, construct, and release a pandemic virus.
A technology open to malicious actors
The DNA constructs required to build a virus from scratch can be ordered online: many gene synthesis providers will manufacture thousand-base-pair pieces of DNA for under €200, a task that only a few decades ago took researchers thousands of hours and earned a Nobel Prize.
The advent of custom gene synthesis is now a pillar of the biological sciences, allowing researchers to rapidly iterate on the design of mRNA-based vaccines and therapeutics, among many other benefits. However, just as advances in gene synthesis enable new discoveries and treatments, the technology can also be misused by malicious actors to access pandemic pathogens.
Many companies have taken the important step of screening orders to detect DNA from dangerous pathogens.
But not all of them do. Language-based AI models can already advise individuals on how to identify and exploit these loopholes to obtain the DNA of pandemic-capable pathogens.
Echoing this concern, the CEO of Anthropic, a leading AI company, recently warned US lawmakers that within two years, next-generation AI systems could enable large-scale bioterrorism unless appropriate guardrails are put in place.
Yoshua Bengio, one of the “Godfathers of AI”, has voiced similar concerns. As EU policymakers consider how to respond to AI’s rapid progress, one simple guardrail deserves closer scrutiny: a legal requirement to screen all gene synthesis orders for hazardous sequences.
Fewer than one hundred highly specialised companies provide DNA synthesis services, and this concentration offers an ideal policy lever: if the EU requires companies to screen all orders against an up-to-date database of known pandemic pathogens and to implement know-your-customer checks, bad actors could no longer access the building blocks required to seed the next pandemic.
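To make the mechanism concrete, the sketch below shows the basic shape of such a check in Python. It is our toy illustration, not any provider’s actual system: the 30-base window length, the function names, and the data structures are all assumptions made purely for exposition.

```python
# Toy sketch of synthesis-order screening. The 30-base window length
# and all names here are illustrative assumptions, not any provider's
# actual pipeline.

K = 30  # matching-window length, chosen here purely for illustration


def build_hazard_index(hazard_sequences: list[str], k: int = K) -> set[str]:
    """Index every k-base window of each known hazardous sequence."""
    return {
        seq[i:i + k].upper()
        for seq in hazard_sequences
        for i in range(len(seq) - k + 1)
    }


def order_is_flagged(order: str, hazard_index: set[str], k: int = K) -> bool:
    """Flag an order for review if any of its windows matches the index."""
    order = order.upper()
    return any(order[i:i + k] in hazard_index for i in range(len(order) - k + 1))
```

In practice, a match would plausibly trigger human review and a know-your-customer check rather than an automatic refusal, since legitimate researchers routinely order sequences from dangerous pathogens.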
The case for mandatory gene synthesis screening
The US has already taken steps to secure gene synthesis. The recent Executive Order on AI requires federally funded entities to follow the government’s new biological synthesis screening guidelines.
The EU itself has not yet passed regulations on the issue. Given the growing risk of pandemic-level bioterrorism, a mandate for gene synthesis screening would be a first, meaningful step.
Providers representing over 80% of the industry are in favour of mandates. Others have legitimate worries about the costs of screening and the protection of intellectual property, but the imminent release of free, privacy-preserving screening tools should mitigate these issues.
A mandated screening system should fulfil several criteria. Orders should be encrypted to preserve privacy and subjected to screening that detects genuine hazards with negligibly few false alarms, while authorised laboratories should be able to obtain permitted DNA sequences seamlessly and without delay.
Furthermore, all screening systems should verifiably check orders against a database of hazardous sequences that is updated as soon as new potential pandemic pathogens are identified. Strong incentives should push providers towards best-in-class screening, as determined by “red-teaming” exercises that test how readily hazardous sequences can be obtained without detection.
Finally, new DNA synthesis and assembly devices, such as benchtop synthesisers, should incorporate built-in screening that meets the above criteria. Implementing rigorous screening protocols based on these criteria is vital to fully realise biotechnology’s far-reaching benefits while securing it against misuse.
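The privacy criterion above can be illustrated by extending the earlier sketch so that windows are hashed before comparison and the screening service never handles plaintext sequence. This, too, is a simplified assumption: a bare SHA-256 digest stands in here for the stronger cryptographic protocols that production-grade, privacy-preserving screening tools would use.

```python
import hashlib

# Extends the earlier toy sketch to the privacy criterion: windows are
# hashed before comparison, so the screener only sees digests. A bare
# SHA-256 is shown for illustration; real privacy-preserving designs
# rely on stronger cryptographic protocols.

K = 30  # same illustrative window length as before


def _hash_window(window: str) -> str:
    """Digest one k-base window; only digests leave the customer's side."""
    return hashlib.sha256(window.upper().encode()).hexdigest()


def hashed_hazard_index(hazard_sequences: list[str], k: int = K) -> set[str]:
    """Pre-hash every k-base window of each hazardous reference sequence."""
    return {
        _hash_window(seq[i:i + k])
        for seq in hazard_sequences
        for i in range(len(seq) - k + 1)
    }


def order_is_flagged(order: str, index: set[str], k: int = K) -> bool:
    """Flag an order if any of its hashed windows appears in the index."""
    return any(
        _hash_window(order[i:i + k]) in index
        for i in range(len(order) - k + 1)
    )
```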
Safeguarding biology’s promise
Progress in both biotechnology and artificial intelligence will drive revolutionary advances in the life sciences and medicine.
Custom gene synthesis is a fundamental enabler of these remarkable benefits. But the harm caused by SARS-CoV-2, a single and historically mild pandemic virus, demonstrates that misuse of the technology, made more likely by advances in generative AI, could do harm on a scale that outweighs all of these benefits.
As the European Union experienced with the AI Act, striking a balance between fostering innovation and reducing risk takes time.
But the specific risks of AI-enabled bioterrorism, acknowledged both by industry leaders and biosecurity professionals, can be tackled now.
By mandating gene synthesis screening, the European Union can lower these risks substantially, safeguarding the promise of ever-advancing biotechnology.
Kevin Esvelt is an associate professor at the Massachusetts Institute of Technology’s Media Lab, where he directs the Sculpting Evolution group. He co-founded the SecureDNA Foundation, a Switzerland-based nonprofit. Ben Mueller is a research scientist at MIT and COO of SecureBio, an organisation that works on policies and technologies to safeguard against future pandemics.