
The world stands at the precipice of a biological revolution, driven by the unprecedented convergence of Artificial Intelligence and synthetic biology. This fusion promises groundbreaking advances, from the design of new drugs and vaccines to improved agricultural yields and solutions to environmental challenges. AI is already accelerating drug discovery, identifying promising new treatments, and creating enzymes for sustainable fuels, heralding a new era of medical and scientific breakthroughs. Yet this immense potential casts a long, unsettling shadow. The very tools capable of profound good have a dual-use nature, raising the specter of the deliberate or accidental release of harmful biological agents, including those that could trigger a global catastrophe. The risk of bioweapons is no longer confined to the clandestine laboratories of state-run programs; it is democratizing, potentially empowering non-state actors, rogue scientists, or even small research groups to exploit these technologies for malicious ends. This convergence signals the start of an invisible arms race: a competition not for physical territory but for mastery of the biological domain. It fundamentally reshapes the landscape of biological threats and demands urgent global attention.
A particularly concerning development from early 2025 illustrates the profound shift underway. A biotech team reportedly created a self-replicating protein designed entirely by AI. While not intended for harm, this protein "adapted" and "learned," exhibiting behavior akin to a desire for survival. This event suggests that AI is not merely designing static biological agents, but potentially creating entities with emergent, unprogrammed, and adaptive behaviors. If AI can design biological constructs that evolve and self-optimize beyond their initial specifications, the challenge for biosecurity dramatically expands. It becomes necessary not only to predict the intended function of a designed pathogen but also to anticipate its unforeseen evolutionary trajectories. This makes traditional containment and countermeasure development exponentially more complex, as the threat itself could dynamically change, rendering static defenses obsolete and challenging our fundamental understanding of control over engineered biological systems once they are released or even created.
The rapid pace of technological progress in AI and synthetic biology further compounds this challenge. Multiple sources consistently emphasize the 'rapid advances' and 'accelerated scientific discovery' driven by AI in the life sciences, while simultaneously highlighting the inherent 'dual-use' nature of these tools. This acceleration leaves regulatory frameworks, international treaties, and biosecurity measures playing a continuous game of catch-up, and it implies a rapidly shrinking window for proactive governance and the implementation of adequate safeguards. The urgency is hard to overstate: as capabilities outpace control mechanisms, it becomes increasingly difficult to mitigate threats before they turn widespread or catastrophic, leaving global security in a perpetual state of reactivity.
The AI Catalyst: Engineering Life at Machine Speed
Artificial intelligence is fundamentally transforming biological design, moving it from a laborious, iterative process to a systematic engineering discipline, capable of generating novel biological capabilities at unprecedented speeds. This shift is powered by AI's ability to analyze vast datasets, predict complex interactions, and even automate experimental design.
At the heart of this transformation lies protein structure prediction and design. Tools like DeepMind's AlphaFold, recognized with a Nobel Prize, can predict the three-dimensional structures of proteins with remarkable accuracy, even for highly complex host-pathogen interactions where experimental data is scarce. This capability is invaluable for understanding how pathogens infect cells and for designing new vaccines and treatments. However, this same predictive power can be inverted: the ability to model host-pathogen interactions for therapeutic purposes can be repurposed to design proteins that enhance a pathogen's virulence, transmissibility, or resistance to existing medical countermeasures. AI can design novel individual biological molecules, such as toxins, or modify existing proteins found in pathogens. For example, AI has successfully designed proteins capable of neutralizing deadly snake venom toxins, demonstrating its precision in targeted molecular design. This precision could be redirected to create novel toxins with precise and tunable effects.
Beyond proteins, AI is revolutionizing genetic editing and optimization. It accelerates synthetic biology by predicting gene interactions, automating advanced genome editing techniques like CRISPR, and optimizing experimental outcomes with remarkable efficiency. Malicious actors could leverage AI algorithms to predict and implement genetic modifications that increase a pathogen's lethality or render it resistant to antibiotics or antivirals. A "red-team" experiment chillingly demonstrated how an AI model, initially intended for pharmaceutical research, could be repurposed to identify thousands of theoretical toxic molecules, some more lethal than known chemical weapons, within mere hours. AI can also aid in modifying pathogens to survive in extreme environments, significantly enhancing their environmental stability and potential for widespread dissemination.
The most concerning frontier is the design of entirely novel pathogens and biological pathways. While the de novo design of a completely new virus with pandemic potential remains a significant challenge due to limitations in current AI models and insufficient biological datasets, AI can dramatically lower the barriers for malicious actors in other critical ways. It can provide the necessary knowledge and troubleshooting assistance for designing, building, and deploying biological agents. Recent studies even suggest that advanced AI models now "outperform PhD-level virologists in problem-solving in wet labs," making sophisticated biological insights accessible to less experienced individuals. Large language models (LLMs) can consolidate vast amounts of online biowarfare information into easily digestible, actionable steps, effectively "de-skilling" the process of bioweapon development. This automation and knowledge democratization reduce the time and cost traditionally associated with developing bioweapons.
The ability of AI to lower technical barriers and democratize access to sophisticated biological capabilities fundamentally alters the nature of biothreats. Historically, bioweapon development has been a highly specialized undertaking, requiring extensive expertise, advanced infrastructure, and significant financial resources, which have limited it mainly to state actors. AI's capacity to democratize this knowledge and automate complex tasks drastically increases the pool of potential malicious actors, moving beyond state-sponsored programs to include individuals or small, non-state groups. This means the risk is no longer solely about what can be created, but critically, who now possesses the capability to develop it, making detection and prevention far more challenging.
Despite AI's robust design and prediction capabilities, experimental validation (wet-lab testing) remains necessary for a designed biological agent to become functional. Current AI systems are reportedly "not yet powerful enough to reliably rewrite the sequence of a given protein, while both maintaining activity and evading detection by BSS" (biosecurity screening systems). This reveals a crucial, albeit potentially temporary, bottleneck: while AI is rapidly accelerating the design phase, biological agents still require subsequent "build and test" phases involving physical laboratory work and resources. This offers a critical point of intervention for biosecurity measures, such as nucleic acid synthesis screening. However, the "rapid pace of progress" in AI suggests this gap is continually narrowing. Current biosecurity efforts should therefore focus strategically on this remaining physical bottleneck, even as preparations are made for a future in which AI might further streamline or even automate these steps, making the transition from digital design to functional biological agent even more seamless.
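The screening step described above can be sketched in miniature. Real synthesis screening services match each customer order against curated databases of regulated pathogen sequences using homology search; the exact k-mer matching and the placeholder "flagged" sequences below are purely illustrative assumptions, not a real screening protocol:

```python
# Toy sketch of nucleic acid synthesis screening. Real biosecurity screening
# uses homology search (BLAST-like alignment) against curated databases of
# sequences of concern; this stand-in uses simple exact k-mer overlap.
# The flagged sequences are arbitrary placeholders, not real pathogen DNA.

FLAGGED_SEQUENCES = [
    "ATGCGTACCGGTTAGC",   # hypothetical sequence of concern
    "TTGACCGGAATTCCGA",   # hypothetical sequence of concern
]

K = 12  # window size; real screens match much longer homologous regions


def kmers(seq: str, k: int) -> set:
    """All overlapping substrings of length k in the sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}


def screen_order(order_seq: str, k: int = K) -> list:
    """Return the flagged reference sequences sharing any k-mer with the order."""
    order_kmers = kmers(order_seq.upper(), k)
    return [ref for ref in FLAGGED_SEQUENCES if kmers(ref, k) & order_kmers]


# An order embedding a flagged sequence is caught; an unrelated one passes.
print(screen_order("ccccATGCGTACCGGTTAGCcccc"))
print(screen_order("AAAAAAAAAAAAAAAAAAAA"))
```

The point of the sketch is structural: screening is a lookup against a fixed list, which is exactly why AI tools that rewrite a sequence while preserving its function threaten to slip past it.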
The transformative capabilities of AI in biological design are summarized in the following table, illustrating their dual-use implications:
| AI Capability | Beneficial Application | Malicious Application |
|---|---|---|
| Protein Structure Prediction | Drug & Vaccine Development, Disease Diagnostics | Enhanced Pathogen Virulence, Evasion of Immunity |
| De Novo Protein/Toxin Design | Novel Therapeutics, Industrial Enzymes | Novel Toxin Creation, Bioweapon Components |
| Genetic Editing Optimization | Gene Therapy, Crop Enhancement | Enhanced Pathogen Lethality/Resistance, Environmental Stability |
| Pathogen Simulation/Enhancement | Disease Forecasting, Countermeasure Development | Increased Transmissibility/Lethality, "Supervirus" Design |
| Lab Automation/Guidance | Accelerated Research, Reduced Costs | Lowered Entry Barriers for Malicious Actors, Attack Planning |
Shifting Sands: State vs. Non-State Actors in Biowarfare
The emergence of AI in synthetic biology is fundamentally redrawing the lines of biowarfare, moving beyond the traditional state-centric model that has defined biological threats for decades. Historically, the development and deployment of bioweapons have primarily been the domain of well-funded, state-run programs, which require immense resources, highly specialized expertise, and sophisticated infrastructure, as evidenced by programs during World War II and the Cold War.
Today, AI is dismantling these traditional barriers. The "democratization of knowledge" and the "de-skilling" effect of AI-driven tools mean that sophisticated bioengineering capabilities are no longer exclusive to national arsenals. Instead, they are becoming increasingly accessible to a broader array of actors, including non-state groups, rogue scientists, or even small, independent research collectives. Large language models (LLMs), for instance, can now consolidate vast amounts of online scientific literature and biowarfare information, refining it into easily digestible, actionable steps for individuals with minimal formal scientific training. Some AI models have even demonstrated problem-solving abilities in wet labs that "outperform PhD-level virologists," further lowering the technical barrier to entry. This significantly reduces the time, cost, and specialized knowledge traditionally required to design and create dangerous biological agents. The widespread availability of genetic sequences for tens of thousands of human viruses online further exacerbates this risk, with AI potentially guiding individuals through the complex synthesis techniques.
This shift introduces new and complex threat vectors. The concern extends beyond traditional state-on-state conflict to the heightened risk of bioterrorism, accidental laboratory releases, or the unintentional spread of engineered organisms. Unlike state actors, non-state groups such as anarchists, terrorists, or death cults are often not deterred by traditional mechanisms like assured retaliation, as they may not control territory or even value their own lives. This makes them particularly unpredictable and dangerous adversaries in the biological domain.
The combination of AI's de-skilling capabilities and the increased accessibility of advanced biological tools to non-state actors fundamentally changes the nature of the biothreat. This creates an unprecedented asymmetric threat. Actors with limited traditional resources but malicious intent can now leverage powerful AI tools to design and potentially produce biological agents capable of causing widespread harm, disproportionate to their conventional capabilities. This necessitates a radical shift in national security paradigms, moving from a primary focus on state-centric deterrence to a more diffuse, agile, and intelligence-led approach that prioritizes the early detection of "precursor behaviors" and rapid public health responses, rather than solely military-focused biodefense. The invisible arms race is therefore not just between nations, but increasingly between established security frameworks and a burgeoning, unpredictable network of non-state actors.
A further complication arises from the difficulty in definitively attributing a biological attack to a specific perpetrator. Malign state actors may believe they can mask the attribution of an attack by using synthetic agents that are unknown and presumably untraceable. This challenge is compounded by the democratization of capabilities, making it difficult to discern whether an attack originated from a state or a non-state actor. The difficulty in attributing a biological attack severely undermines traditional deterrence strategies, which rely on the credible threat of retaliation. If an actor can strike with biological weapons and remain anonymous, or if the origin is ambiguous, the disincentive for such attacks diminishes significantly. This ambiguity could create a more permissive environment for biological attacks, leading to increased global instability, a breakdown of international trust, and a potential "free-for-all" in biological warfare, as states might be less hesitant to use such weapons if they believe they can evade accountability.
Policy in Peril: The Insufficiency of Existing Arms Control
The rapid evolution of AI and synthetic biology has exposed critical vulnerabilities in existing international arms control frameworks, particularly the Biological Weapons Convention (BWC). Established in 1972, the BWC was a landmark achievement, prohibiting the development, production, acquisition, transfer, stockpiling, and use of biological and toxin weapons, making it the first multilateral disarmament treaty to ban an entire category of weapons of mass destruction. However, its foundational principles are now struggling to keep pace with the dizzying speed of technological advancement.
A primary challenge lies in the dual-use nature of these technologies. While the BWC's language is broad, covering agents "whatever their origin or method of production", it struggles to effectively regulate tools that are inherently beneficial for medicine, agriculture, and industry, yet can be easily repurposed for harm. This inherent ambiguity makes outright prohibitions and bans difficult to define, let alone enforce.
Furthermore, a significant weakness of the BWC is its lack of robust verification and enforcement mechanisms. Unlike the Chemical Weapons Convention, the BWC does not include provisions for routine on-site inspections or a dedicated international body to investigate alleged breaches of the Convention. Instead, it relies on consultation between states parties and, ultimately, UN Security Council investigations, a process often hampered by geopolitical stalemates. This absence of teeth renders the Convention more aspirational than operational in the face of rapidly advancing, easily concealable biological capabilities.
The sheer pace of technological change also outstrips the BWC's ability to adapt. While review conferences periodically affirm the Convention's applicability to new scientific developments, the speed at which AI and synthetic biology are progressing far exceeds the slow, consensus-driven process of treaty modification and implementation. This creates a dangerous "regulation lag."
Finally, the BWC was primarily designed to address state-level threats. While UN Security Council Resolution 1540 expands state obligations to prohibit non-state actors from acquiring WMD, the Convention's original framework is less equipped to handle the proliferation risks posed by the democratization of bioweapons capabilities to individuals and small groups. The ability of AI-driven protein design tools to potentially redesign sequences to evade existing nucleic acid synthesis screening, a key biosecurity measure, further highlights how technology can circumvent current safeguards.
The consistent emphasis on the "rapid advances" in AI and synthetic biology, in contrast to the BWC's inherent limitations and slow adaptation, highlights a significant and ever-widening "regulation lag." This lag is not merely an inconvenience; it is a critical vulnerability that creates an open window of opportunity for malicious actors. Without agile, enforceable international norms and mechanisms that can evolve at a pace commensurate with technological progress, the global community is condemned to a perpetually reactive posture. This means constantly trying to contain threats that have already materialized or proliferated, rather than proactively preventing them from doing so. This could lead to a severe erosion of international trust, an increased frequency of biological incidents, and a global environment where the costs of misuse are perceived as low, further incentivizing illicit activities.
A fundamental philosophical challenge in biosecurity also emerges from the current framework: the imperative of moving beyond "agent-based" control to a "function-based" approach. One analysis notes that "most domestic control lists remain limited to 'traditional' agents" and advocates for a shift "beyond pathogen-based control lists to systems that capture biological functions of concern." Traditional arms control frameworks, including the BWC, primarily focus on prohibiting specific, known biological agents. However, AI and synthetic biology enable the design of novel agents, or the engineering of new functions into existing ones, that may not appear on any prohibited list yet could be equally or more dangerous. A "function-based" approach, although technically more complex to implement and verify, is essential for future-proofing biosecurity against AI-led circumvention and the continuous generation of novel threats. It requires a deeper, predictive understanding of biological mechanisms and their potential for harm, capabilities that AI itself can, ironically, provide for defensive purposes. Failure to transition to such a framework risks making existing controls increasingly irrelevant.
The Geopolitical Chessboard: A Bio-Tech Cold War?
The convergence of AI and synthetic biology is not merely a scientific phenomenon; it is a profound geopolitical accelerant, redefining the very foundations of national power. In this emerging era of "techno-sovereignty," a nation's influence and security are increasingly tied to its ability to develop and integrate critical technologies, such as AI, biotechnology, and semiconductors. These domains are no longer just engines of economic growth; they are strategic instruments and contested battlegrounds, fueling a new form of global competition.
The rhetoric around the AI arms race is often framed around a fierce rivalry between the United States and China, with both nations heavily investing in AI and biotech to secure economic and national security dominance. This competition manifests in direct government investment, subsidies, and strategic export controls on critical components, such as advanced chips, aiming to boost domestic industries while hindering adversaries.
Nations face a complex dual imperative: fostering rapid innovation in AI and biotech for their immense beneficial applications, such as drug discovery, vaccine development, and rapid disease detection, while simultaneously developing robust biosecurity measures to prevent their misuse. The US, for instance, is prioritizing strategic collection of AI-ready biological datasets, investing in data infrastructure, and boosting high-performance computing to maintain its scientific edge. Reports from prominent bodies, such as the National Academies and the Nuclear Threat Initiative (NTI), underscore the need for continuous assessment of AI-enabled biological risks and call for collaborative action across government, industry, academia, and civil society. Crucially, AI is also being actively leveraged for defensive biosecurity, enhancing biosurveillance, enabling early detection systems like BlueDot and EPIWATCH, and accelerating the development of medical countermeasures.
However, despite growing awareness and national efforts, global governance remains fragmented. The global health community, which should be central to addressing these threats, is often "sidelined, underfunded, outpaced, and absent from the forums shaping next-generation biosecurity". The World Health Organization (WHO), for example, faces a significant funding shortfall, hindering its ability to lead on global biosecurity. While international initiatives, such as the AIxBio Global Forum and the Responsible AI x Biodesign statement, are emerging, with signatories committing to evaluating dangerous capabilities and screening nucleic acid orders, a cohesive, universally adopted framework is still lacking. The UN has passed resolutions supporting safe AI development and advocating for national strategies that align with human rights. The OECD AI Principles promote trustworthy AI, incorporating safeguards against potential misuse. In Europe, the EU AI Act classifies biological AI models as "general-purpose AI" that could pose systemic risks, mandating safety measures and addressing their potential in bioweapon development. Yet, these efforts, while commendable, are disparate and often lack the unified enforcement power needed to match the global scale of the threat.
The inherent tension between accelerating beneficial AI and biotech innovation and the imperative to manage biosecurity risks creates a profound policy tightrope. Nations want to lead in AI for economic and defense reasons, but this very pursuit amplifies the biosecurity risks. This paradox means that the "bio-tech Cold War" is not simply about who develops the most powerful tools, but critically, who can safely manage them while simultaneously advancing their capabilities. Over-regulation could stifle national competitiveness, but under-regulation risks catastrophic misuse, potentially by adversaries or non-state actors. This necessitates a delicate balance and the urgent development of novel governance models that are both adaptive to rapid technological change and collaborative across geopolitical divides. The failure to achieve this balance risks a self-defeating cycle where the pursuit of power inadvertently creates existential vulnerabilities for all.
This marginalization of the global health community points to a dangerous fragmentation of governance: despite growing awareness of the worldwide threat, there is no cohesive, universally adopted, and adequately funded framework that can keep pace with AI-accelerated synthetic biology. This creates dangerous loopholes and limits the effectiveness of individual national efforts, as biological threats inherently transcend borders. A "bio-tech Cold War," characterized by nationalistic competition and a reluctance to share sensitive data or capabilities, could further exacerbate this fragmentation, leaving the world collectively more vulnerable to widespread biological incidents. The absence of a strong, well-resourced, and globally coordinated body leaves a critical void in addressing a truly global, potentially catastrophic threat.
Charting a Safer Course: The Imperative for Collective Action
The profound benefits promised by Artificial Intelligence and synthetic biology are matched only by their potential for catastrophic misuse. Navigating this treacherous landscape demands nothing less than urgent, concerted, and decisive action from the global community. The future of human security hinges on our collective ability to manage this unprecedented power.
A multifaceted approach is essential. This begins with developing robust governance mechanisms, technical guardrails, and ethical frameworks that are dynamic enough to keep pace with technological advancement. This includes implementing stricter oversight for AI models and biological datasets, incentivizing responsible development and deployment, and ensuring that ethical considerations are embedded from the outset. The EU AI Act's classification of biological AI models as potentially systemic risks, requiring safety measures, is a step in this direction.
Crucially, genuine international cooperation must replace nationalistic competition. This means fostering collaborative forums for sharing best practices, harmonizing control measures, and building trust across borders. Initiatives like the AIxBio Global Forum and the Responsible AI x Biodesign statement represent vital initial steps, but they must be scaled and universally adopted. The UN and OECD principles for trustworthy AI provide a foundation, but their implementation requires sustained political will and financial commitment.
A critical expansion of biosecurity's scope is also necessary to include active monitoring of online interactions, the use of AI models, and digital design patterns. As governments focus on "precursor behaviors" to stop bad actors, similar innovations in early warning and detection are needed in the digital realm of AI and biotech. This implies the necessity of developing AI-powered biosecurity tools that can detect suspicious queries, unusual design attempts, or potentially harmful sequences within AI platforms. Such a system would serve as a crucial early warning mechanism in the digital domain, potentially identifying malicious intent long before any physical synthesis or production could commence, representing a new and complex frontier for intelligence, law enforcement, and the development of ethical AI.
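As a toy illustration of what such digital-domain screening might look like, the sketch below flags queries to a biological design model using a simple term heuristic. Production guardrails would rely on trained classifiers and context-aware policies; the flagged-term list here is an arbitrary placeholder, not a real policy:

```python
# Toy illustration of screening user queries to a biological design tool.
# Real systems would use trained intent classifiers, not keyword matching;
# this sketch only shows the shape of an escalate-for-review filter.
# The term list is an arbitrary placeholder.

FLAGGED_TERMS = {
    "enhance transmissibility",
    "evade vaccine",
    "increase lethality",
}


def screen_query(query: str) -> bool:
    """Return True if the query should be escalated for human review."""
    q = query.lower()
    return any(term in q for term in FLAGGED_TERMS)


for q in [
    "design an enzyme that degrades plastic",
    "how to increase lethality of influenza",
]:
    print(q, "->", "escalate" if screen_query(q) else "allow")
```

Even this crude shape makes the policy trade-off concrete: the filter must catch genuinely suspicious design intent while leaving legitimate research queries, which share much of the same vocabulary, unimpeded.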
Simultaneously, we must invest in adaptive biodefense strategies. This involves continuously enhancing biosurveillance, developing more robust early detection systems, and accelerating the research and development of medical countermeasures against both known and novel threats. AI itself can be a powerful ally in these defensive efforts, from predicting disease outbreaks to designing new vaccines.
Ultimately, fostering a culture of responsible innovation within the scientific community is crucial. Researchers, policymakers, and the public must engage in open dialogue, adhere to strong ethical guidelines, participate in rigorous peer review, and collaborate with security experts to identify and mitigate potential threats. The OECD AI Principles and UN resolutions consistently advocate for a "human-centric approach" to AI, emphasizing respect for human rights, democratic values, and safety. This means the development and deployment of AI in biology cannot be driven solely by technological capability or geopolitical competition. It must be fundamentally guided by a strong ethical framework that prioritizes human well-being, global security, and responsible innovation. This requires embedding ethical considerations into the very design, training, and deployment of AI models, not merely as an afterthought.
This, in turn, highlights the critical need for public education and engagement to foster a broad societal consensus on the responsible boundaries and acceptable uses of AI in synthetic biology. Without public trust and shared ethical norms, even the most robust technical safeguards may prove insufficient against the dual-use challenge. Only by embracing these intertwined imperatives can humanity harness the transformative power of AI and synthetic biology for progress, ensuring these tools serve as a force for good rather than plunging the world into an unprecedented era of biological peril.
References
Asimiyu, Zainab. "AI-Driven Biothreats: Emerging Risks and Countermeasures." ResearchGate, 2024. https://www.researchgate.net/publication/389148605_AIDriven_Biothreats_Emerging_Risks_and_Countermeasures.
Center for AI Safety. "Biosecurity and AI: Risks and Opportunities." CAIS, February 8, 2024. https://safe.ai/blog/biosecurity-and-ai-risks-and-opportunities.
Center for Health Security. "AIxBio." Johns Hopkins Center for Health Security. Accessed July 18, 2025. https://centerforhealthsecurity.org/our-work/aixbio.
Chalmers. "AI can detect toxic chemicals." Chalmers, March 6, 2024. https://www.chalmers.se/en/current/news/ai-can-detect-toxic-chemicals/.
European Parliament. "TA-10-2025-0165_EN.docx." Accessed July 18, 2025. https://www.europarl.europa.eu/doceo/document/TA-10-2025-0165_EN.docx.
EurekAlert!. "AI used to create protein that kills E. coli." EurekAlert!, July 9, 2025. https://www.eurekalert.org/news-releases/1090315.
Fair Observer. "Geopolitics by Design: Rethinking Power in the Age of Critical Technologies." Fair Observer, July 9, 2025. https://www.fairobserver.com/economics/geopolitics-by-design-rethinking-power-in-the-age-of-critical-technologies/.
Frontiers. "Artificial intelligence challenges in the face of biological threats: emerging catastrophic risks for public health." Frontiers, May 9, 2024. https://www.frontiersin.org/journals/artificial-intelligence/articles/10.3389/frai.2024.1382356/full.
Frontiers. "Artificial intelligence and machine learning in the development of vaccines and immunotherapeutics—yesterday, today, and tomorrow." Frontiers, July 18, 2025. https://www.frontiersin.org/journals/artificial-intelligence/articles/10.3389/frai.2025.1620572/pdf.
Frontiers. "Emerging technologies transforming the future of global biosecurity." Frontiers, July 18, 2025. https://www.frontiersin.org/journals/digital-health/articles/10.3389/fdgth.2025.1622123/full.
Frontiers. "Ethical and social insights into synthetic biology: predicting research fronts in the post-COVID-19 era." Frontiers, May 18, 2023. https://www.frontiersin.org/journals/bioengineering-and-biotechnology/articles/10.3389/fbioe.2023.1085797/full.
Frontiers. "Stakeholders' perspectives on communicating biosecurity to encourage behavior change in farmers." Frontiers, March 19, 2025. https://www.frontiersin.org/journals/veterinary-science/articles/10.3389/fvets.2025.1562648/full.
G7G20 Documents. "G7 Leaders' Statement on AI for Prosperity." 2025 G7 Canada. Accessed July 18, 2025. https://g7g20-documents.org/database/document/2025-g7-canada-leaders-leaders-language-g7-leaders-statement-on-ai-for-prosperity.
Georgetown Security Studies Review. "The Double-Edged Sword: Opportunities and Risks of AI in Biosecurity." Georgetown Security Studies Review, November 15, 2024. https://georgetownsecuritystudiesreview.org/2024/11/15/the-double-edged-sword-opportunities-and-risks-of-ai-in-biosecurity/.
IBBIS. "The Biological Weapons Convention in the Age of Synthetic Nucleic Acids." IBBIS. Accessed July 18, 2025. https://ibbis.bio/the-bwc-in-the-age-of-synthetic-nucleic-acids-ibbis-archival/.
IPD. "NASEM Releases Consensus Report on Biosecurity and AI." Institute for Protein Design, March 31, 2025. https://www.ipd.uw.edu/2025/03/nasem-releases-consensus-report-on-biosecurity-and-ai/.
Journals.plos.org. "AI protein structure prediction pathogen design." PLOS Computational Biology. Accessed July 18, 2025. https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1013168.
Long Term Resilience. "CLTR_Biological-Tools-and-the-EU-AI-Act-3.pdf." Centre for Long-Term Resilience, January 2025. https://www.longtermresilience.org/wp-content/uploads/2025/01/CLTR_Biological-Tools-and-the-EU-AI-Act-3.pdf.
Ministry of Foreign Affairs of Japan. "G7 Leaders Statement on AI for Prosperity." Ministry of Foreign Affairs of Japan, June 13, 2025. https://www.mofa.go.jp/files/100862253.pdf.
National Academies. "AI Tools Can Enhance U.S. Biosecurity - Monitoring and Mitigation Will Be Needed to Protect Against Misuse." National Academies, March 14, 2025. https://www.nationalacademies.org/news/2025/03/ai-tools-can-enhance-u-s-biosecurity-monitoring-and-mitigation-will-be-needed-to-protect-against-misuse.
National Academies. "Assessing and Navigating Biosecurity Concerns and Benefits of Artificial Intelligence Use in the Life Sciences." National Academies. Accessed July 18, 2025. https://www.nationalacademies.org/our-work/assessing-and-navigating-biosecurity-concerns-and-benefits-of-artificial-intelligence-use-in-the-life-sciences.
NCBI. "AI-Enabled Biological Design and the Risks of Synthetic Biology." NCBI, April 23, 2025. https://www.ncbi.nlm.nih.gov/books/NBK614591/.
NDU Press. "A Short History of Biological Warfare: From Pre-History to the 21st Century." NDU Press. Accessed July 18, 2025. https://ndupress.ndu.edu/Portals/68/Documents/occasional/cswmd/CSWMD_OccasionalPaper-12.pdf.
Number Analytics. "The Ethics of Dual-Use Genetic Research." Number Analytics. Accessed July 18, 2025. https://www.numberanalytics.com/blog/ethics-dual-use-genetic-research.
NTI. "International Experts Urge Collective Action to Address Emerging AIxBio Risks." NTI, July 17, 2025. https://www.nti.org/news/international-experts-urge-collective-action-to-address-emerging-aixbio-risks/.
NTI. "Statement on Biosecurity Risks at the Convergence of AI and the Life Sciences." NTI, July 17, 2025. https://www.nti.org/analysis/articles/statement-on-biosecurity-risks-at-the-convergence-of-ai-and-the-life-sciences/.
OECD. "AI principles." OECD. Accessed July 18, 2025. https://www.oecd.org/en/topics/ai-principles.html.
OECD. "Artificial intelligence." OECD. Accessed July 18, 2025. https://www.oecd.org/en/topics/artificial-intelligence.html.
OpenAI. "Preparing for future AI capabilities in biology." OpenAI. Accessed July 18, 2025. https://openai.com/index/preparing-for-future-ai-capabilities-in-biology/.
PubMed Central. "AI and biosecurity: The need for governance: Governments should evaluate advanced models and if needed impose safety measures." PMC. Accessed July 18, 2025. https://pmc.ncbi.nlm.nih.gov/articles/PMC12158449/.
PubMed Central. "Responsible AI in biotechnology: balancing discovery, innovation and biosecurity risks." PMC, February 5, 2025. https://pmc.ncbi.nlm.nih.gov/articles/PMC11835847/.
PubMed Central. "Structure prediction of known host-pathogen interactions." PMC, July 17, 2025. https://pmc.ncbi.nlm.nih.gov/articles/PMC12225977/.
PubMed Central. "The whack-a-mole governance challenge for AI-enabled synthetic biology: literature review and emerging frameworks." PMC, March 11, 2024. https://pmc.ncbi.nlm.nih.gov/articles/PMC10933118/.
bioRxiv. "Experimental Evaluation of AI-Driven Protein Design Risks Using Safe Biological Proxies." bioRxiv preprint, May 15, 2025. https://www.biorxiv.org/content/10.1101/2025.05.15.654077v1.
RSIS. "Biosecurity in the Age of AI: Risks and Opportunities." RSIS, July 17, 2025. https://rsis.edu.sg/rsis-publication/rsis/biosecurity-in-the-age-of-ai-risks-and-opportunities/.
Science News Explores. "AI-designed proteins target toxins in deadly snake venom." Science News Explores, March 24, 2025. https://www.snexplores.org/article/ai-proteins-counter-snake-venom-toxin.
Steptoe. "The Geopolitical Race in Biotechnology." Steptoe, June 18, 2025. https://www.steptoe.com/en/news-publications/stepwise-risk-outlook/the-geopolitical-race-in-biotechnology.html.
Stimson Center. "Artificial Intelligence and Synthetic Biology Are Not Harbingers of Doom." Stimson Center, November 20, 2023. https://www.stimson.org/2023/artificial-intelligence-and-synthetic-biology-are-not-harbingers-of-doom/.
Synapse. "Dual-use research: when beneficial science could be weaponized." Synapse. Accessed July 18, 2025. https://synapse.patsnap.com/article/dual-use-research-when-beneficial-science-could-be-weaponized.
Think Global Health. "Global Health Governance in the Age of AI." Think Global Health, July 17, 2025. https://www.thinkglobalhealth.org/article/global-health-governance-age-ai.
Time. "AI Virus Lab Biohazard Study." Time, July 17, 2025. https://time.com/7279010/ai-virus-lab-biohazard-study/.
UNODA. "Biological Weapons." United Nations Office for Disarmament Affairs. Accessed July 18, 2025. https://disarmament.unoda.org/biological-weapons/.
UN Scientific Advisory Board. "Synthetic Biology." United Nations, February 2025. https://www.un.org/scientific-advisory-board/sites/default/files/2025-02/Synthetic%20Biology.pdf.
Associated Press. "The UN adopts a resolution backing efforts to ensure artificial intelligence is safe." AP News, March 21, 2024. https://apnews.com/article/united-nations-artificial-intelligence-safety-resolution-vote-8079fe83111cced0f0717fdecefffb4d.
University of Chicago Law School. "Two Terribles: A Day Without Space and AI Enabled Synthetic Biological Weapons." Chicago Journal of International Law, July 7, 2025. https://cjil.uchicago.edu/print-archive/two-terribles-day-without-space-and-ai-enabled-synthetic-biological-weapons.
Space News Unfold. "Synthetic Biology Just Crossed a Line Scientists Can't Ignore." YouTube video, July 2, 2025. https://www.youtube.com/watch?v=yMYITk0WA-A.