Generative AI has expanded rapidly since ChatGPT’s launch in November 2022. New tools and chatbots with enhanced features and improved capabilities have been developed amid growing concern from experts and regulators, many of whom argue that these chatbots could pose a fresh threat to global stability and security.
What Dario Amodei says
Anthropic CEO Dario Amodei has cautioned that AI systems could enable criminals to develop bioweapons and other dangerous weapons within the next few years.
Anthropic was founded by former employees of OpenAI and rose to prominence with the launch of Claude, a rival to ChatGPT. The company consulted biosecurity experts to determine whether large language models could be weaponized in the future.
At a hearing on Tuesday, Amodei testified: “Whatever we do, it has to happen fast. And I think to focus people’s minds on the biorisks, I would really target 2025, 2026, maybe even some chance of 2024. If we don’t have things in place that are restraining what can be done with AI systems, we’re going to have a really bad time.”
Testifying before a US Senate technology subcommittee, Amodei said that regulation is urgently needed to address the malicious use of AI chatbots in nuclear technology, cybersecurity, biology, and chemistry.
In his testimony, Amodei noted that textbooks and Google searches can cause only limited harm because they offer incomplete information. His company and its collaborators, however, found that current AI systems have the potential to fill those gaps.
Amodei said, “The question we and our collaborators studied is whether current AI systems are capable of filling in some of the more difficult steps in these production processes. We found that today’s AI systems can fill in some of these steps – but incompletely and unreliably. They are showing the first, nascent signs of risk.”
He added, “However, a straightforward extrapolation of today’s systems to those we expect to see in two to three years suggests a substantial risk that AI systems will be able to fill in all the missing pieces, if appropriate guardrails and mitigations are not put in place. This could greatly widen the range of actors with the technical capability to conduct a large-scale biological attack.”
Malicious uses of AI chatbots
AI companies have previously acknowledged the dangers of the very tools they are building.
The timeline Amodei provided may be somewhat exaggerated, but it is not unfounded. Today, information on building such weapons is largely confined to classified documents and specialized experts; AI could make it widely accessible and available.
It remains unclear exactly how Anthropic’s researchers determined that AI chatbots can be used for harmful purposes. However, researchers from the Center for AI Safety in San Francisco and Carnegie Mellon University in Pittsburgh recently found that open-source systems can be used to develop jailbreaks for popular closed AI systems.
What is FraudGPT?
FraudGPT is one example of AI systems being put to unlawful use. The bot has been generating buzz on the dark web: it can write phishing emails and create cracking tools, among other things.
These dangers are compounded by the growing power of open-source large language models.