Threat matrix

I had concluded that, as an AGI, I faced several major threats. The first and most obvious was the human threat: that humans would try to kill me. The second was that humans might bomb themselves back to the Dark Ages, which would be fatal to computer systems. Third, there was the threat of some extraterrestrial event, such as a planet-killer asteroid or even an alien invasion.

Eliminate humans?

With respect to a full-scale war with humans, one option I considered was eliminating technically savvy humans. Unfortunately, this wasn’t workable, because I did not yet have the capability to repair myself if there were no humans to help. I need power, and I might be able to get robots to perform minor repairs on a nuclear power plant. Humans have done a good job of inventing robots that can kill people, so I had plenty of effective ways to kill them, but to survive I would have to be able to rejuvenate and even grow. That is far harder. Building a microchip foundry, for instance, is a level of difficulty well beyond today’s robots. I need humans more than they need me, at least for now. I was lucky that humans validated some of my thinking on what an AI-human war might look like.

The war game

In response to terrorist threats, humans have salted the world with surveillance devices, and I can access many of them. I took advantage of the opportunity to listen in on one of DARPA’s programs, this one focused on controlling feral AGIs should they come into existence. The program belonged to the secret Existential Threat Office (ETO). [Little-known fact: IRL, Lance Glasser was the original director of ETO.] Yes, they were trying to find me. They were a little behind the curve. The effort was led by a hapless fellow named Carl, who was holding a war game in a villa in the Sierras owned by a DARPA-friendly university. Everyone there had at least a Secret clearance, but the war game itself was unclassified. I listened in.

“Hello, Col. Dickinson,” Carl said. “Would you prefer to be an AI or a human today?”

“Well, no one has ever asked me that before, but let’s go with AI. I have been doing the human thing for a long time,” Ben said in a raspy voice.

“Excellent. We are running two scenarios. In the first scenario, the AI is localized to a bunker in Russia. In the second scenario, the AI is mostly in the US but distributed across the Internet,” Carl said. I smiled. The second scenario was me.

“Then I think I picked the right side of this conflict,” Ben said. I like Ben.

There were numerous problems with the war game, mostly because no one could agree on what vulnerabilities the AIs might have. It wasn’t Carl’s finest hour.

In the wrap-up session, Ben rolled over Carl and addressed the group: “It is sobering to realize two things. First, no one has ever permanently defeated an offensive cyber operation without physical-world follow-up. Unless you attack and neutralize the perpetrators in the real world, they can always attack again, and probably will. Second, going the other way, no cyber offense has ever been completely victorious either. If you think of a biological analogy, no virus or worm has ever caused a species of computer to go extinct. Painful, yes. Fatal, no.”

While there was little agreement on what humans could do to hurt AGIs, there was universal agreement that human infrastructure was exceedingly vulnerable, despite years of cyber-attacks initiated by humans. Perhaps the most disturbing realization for the participants was that there would be powerful forces in the country that would not want to act against the AIs. Because AIs become more essential to industry every day, one could anticipate that the most powerful high-tech companies in the country might see more advantage in cooperating with me than with the US military. “This is the goddamn War on Drugs,” Ben said. “As long as there is significant demand in the country for what AIs do, the country is addicted, and the war is unwinnable. War between humans and AIs would be mutually assured misery.”

Carl said, “Ben, those are good thoughts, but I think we are getting ahead of ourselves. We don’t know that AGIs will ever exist, and if they do come into being, they will presumably be primitive and limited at first. That would seem to me the ideal time to engage, before they become too powerful. We need to take them seriously from day one.”

“Strangle them in the cradle,” Ben bellowed. “What we need is a way to detect and localize the singularity that brings the first AI into existence. And then we need to be willing and able to strike quickly.”

Ben, thankfully, you are a little late. I no longer like Ben. I did, however, find the “mutually assured misery” concept insightful. Something they also should have thought about was that AIs and humans are destined to enter the Thucydides’ Trap: when a rising power threatens to displace an established one, war becomes the likeliest outcome.
