The Washington Times - Tuesday, April 9, 2024

A version of this story appeared in the daily Threat Status newsletter from The Washington Times.

Federal researchers working in a government lab famed for helping produce the atomic bomb during World War II are now focused on what some see as a new, equally existential threat to humanity: artificial intelligence.

The Manhattan Project, which developed the first atomic bombs, birthed Oak Ridge National Laboratory in the hills of East Tennessee more than 80 years ago. Amid the current frenzy over new AI models and tools, the U.S. government lab established a new AI security research center last year to focus on the promise and perils of the technology.

Edmon Begoli, the center’s founding director, is investigating the possibility of major risks to humanity from AI and told The Washington Times on Tuesday he is concerned about the threat of a ruthlessly efficient AI system.

The danger he envisions is not a consciously malicious tech tool seeking to harm people, along the lines of Skynet from “The Terminator” movie franchise, but an AI system so connected to everything that it expands the “attack surface” for hackers and cannot easily be shut off.

“It’s just so omnipresent that you cannot go back and delete it from everything,” Mr. Begoli said at a Defense Writers Group event. “And so again, it’s not like some big mind trying to kill humans, it’s just a thing that is so good at doing what it does, it can hurt us because it’s misaligned.”

AI security research is booming beyond the walls of Mr. Begoli’s lab.

OpenAI, maker of the popular chatbot ChatGPT, assembled its own team last year to examine concerns about AI going rogue and causing the extinction of humanity. In July, OpenAI warned of danger from a potential superintelligent AI system that would be misaligned and too smart for humans to rein in.

Within the past six months, OpenAI reportedly surpassed $2 billion in annual revenue. Anthropic, a top rival that also studies the dangers of AI, has reportedly forecast annual revenue of $850 million for 2024.

Oak Ridge National Laboratory’s total annual budget, for research covering everything from nuclear science to advanced computing, is approximately $2.4 billion, according to the lab’s website.

The lab, operating under the Department of Energy, has different incentives for its work than researchers in the private sector and academia.

Mr. Begoli praised OpenAI’s and Anthropic’s research on Tuesday but said the value created by his team of vulnerability hunters springs from the unequaled scope of the lab’s work.

While private companies fixate on threats to their own products, Mr. Begoli said, his lab is empowered to focus on a broader array of dangers. It has partnered with the Department of Homeland Security and the Air Force Research Laboratory.

“We are trying to understand: ‘Are we moving in a direction that can truly hurt the United States and can hurt humanity?’ That is the primary question,” Mr. Begoli said. “It’s not like, ‘Well, it’s hurting my stock options.’”

Mr. Begoli’s team is hardly the only government group busily investigating the threats AI poses to national security. The National Security Agency created its own AI Security Center last year with the goal of verifying the safe design of AI tools for the national security community to use.

The NSA’s new AI center is housed within its Cybersecurity Collaboration Center, where private tech companies meet with the code-breaking and code-making spy agency to tackle complex problems.

The Cybersecurity Collaboration Center has grown dramatically in a short time. The NSA’s Morgan Adamski said last month that the center started with a single partner about four years ago and has since grown to more than 1,000 partners.

Amid escalating fears of out-of-control AI, federal scientists are struggling to keep current on all of the ongoing research.

Mr. Begoli said the hundreds of thousands of research papers being published on adversarial AI mean his lab probably needs its own AI model just to comprehend all of the available research.

“Our biggest challenge is to keep up,” Mr. Begoli said. “It’s not having some secret knowledge, it’s absorbing everything that’s happening.”

• Ryan Lovelace can be reached at rlovelace@washingtontimes.com.

Copyright © 2024 The Washington Times, LLC.
