Sarah Chen stared at the holographic memorial floating above the National Mall. Sixty-three names glowed in the pre-dawn light—victims of the Damascus attack. Her FDAIA badge—Food, Drug, and Artificial Intelligence Administration—felt heavy against her blazer. Washington's hasty response to a world where algorithms could engineer bioweapons.
Too little, too late.
Her tablet chimed: "DeepMind Analytics Reports Record Quarter Despite Regulatory Scrutiny." She dismissed it with a swipe. Today's inspection wasn't about stock prices. It was about those sixty-three names.
"Impressive building," Sarah remarked as the DeepMind receptionist processed her credentials. The woman's eyes lingered on Sarah's badge with unmistakable wariness.
"Mr. Walsh will meet you on 42."
"The answer to everything, right?" Sarah smiled.
The receptionist's expression remained blank.
The Damascus attack had been personal. Chen Biosciences—her parents' company—had been falsely implicated because their research had been used without authorization. They'd been cleared eventually, but not before death threats destroyed her father's will to continue his work.
The elevator opened to a man whose casual stance couldn't hide his tension.
"Inspector Chen," he extended his hand. "Elijah Walsh, Chief Compliance Officer."
He led her past open workspaces where engineers manipulated holographic code. Some glanced up at Sarah's badge with expressions ranging from curiosity to outright hostility.
"Your team doesn't seem thrilled about the inspection," Sarah observed.
"Many of them chose AI development to make the world better," Walsh replied. "Being treated as potential security threats is... difficult."
"Making the world better is complicated," Sarah said. "As we learned in Damascus."
In the conference room, Walsh introduced Maya Patel, their lead safety engineer, and Carlos Rodriguez from security. After Walsh left, Sarah connected her equipment to their network.
"I'll need to start with your capability assessment frameworks, then review containment protocols. After that, direct terminal access."
Carlos frowned. "Terminal access wasn't specified—"
"It's standard for Level Three inspections," Sarah interrupted. "Unless you'd prefer I call in the full technical team?"
For two hours, Maya guided Sarah through DeepMind's safety architecture—multi-layered systems designed to prevent harmful outputs.
"Your red-teaming is impressive," Sarah admitted. "But I'm not seeing testing for biological research applications."
"We don't market to that sector," Carlos said quickly.
Sarah pulled up a DeepMind promotional video: "...revolutionizing pharmaceutical research timelines..."
"That's different," Maya said, her voice tight. "Drug discovery isn't the same as..."
"It falls under biological research protocols," Sarah said. "Section 4.3.7."
After Carlos left to update Walsh, Maya leaned forward. "I pushed for broader biological safeguards last year. Got overruled because it would 'constrain marketability.'"
"Show me the query classification system," Sarah said. "The actual code."
Three hours later, Sarah had documented her findings. DeepMind's safety mechanisms looked impressive, but exceptions were buried in the classifier code. Certain queries—particularly those involving molecular predictions—were being redirected through alternative evaluation pathways.
When Walsh returned, Sarah pulled up a network log. "Your system connects to a server in Almaty every six hours. Why?"
"Distributed backup," Walsh answered smoothly. "Cold storage, nothing operational."
"In Kazakhstan?"
"Tax advantages."
While Carlos reluctantly provided the requested access logs, Sarah caught Maya watching her intently.
"Something on your mind, Dr. Patel?"
Maya lowered her voice. "Even if you find something, will it matter? The big companies always get away with a fine."
"Sometimes inspection is about prevention, not punishment."
"And sometimes it's just security theater," Maya muttered.
"Tell me you found something concrete," Deputy Director Harrison said in his FDAIA office. The memorial was visible through his window.
"Their safety systems look comprehensive on paper. But there's a backdoor," Sarah explained, highlighting code sections. "Specific queries bypass the filters. And the Kazakhstan connection has processing capabilities despite their claims."
"Enough to connect them to Damascus?"
"Not definitively. But enough to suspend their license pending investigation."
Harrison rubbed his temples. "Their legal team will bury us in injunctions."
Sarah gathered her materials, thinking about her parents, about the memorial. As she left, her tablet chimed with a message:
The Kazakhstan server isn't just storage. Test case: Damascus targeting parameters, March 15th log. Lincoln Memorial, 8pm tomorrow.
Maya was waiting at the Lincoln Memorial, gazing at the statue.
"I contacted you because my brother was in Damascus," she said quietly. "Northern district hospital."
Sarah pocketed the data crystal Maya had handed her. "I'm sorry."
"I became a safety engineer because I believed AI could help people. But I've watched the incentives from inside. Every safety protocol becomes a 'challenge to overcome.' Every regulation becomes a 'parameter to optimize around.'"
"The gap," Sarah murmured.
"Between what we say these systems will do and what they actually do." Maya turned to face her. "Will this actually matter? Or am I risking everything for another press release?"
Sarah looked at the memorial, names gleaming above the water. "I can't promise what will happen. But I can promise it won't be just another fine."
"How can you be sure?"
"Because I won't let it be. And neither will you."