Day Two Speaker: Eliezer Yudkowsky

Yudkowsky returned to the stage to discuss the challenge of Friendly AI. The problem, for Yudkowsky, is difficult because it is hard to pick out, from the space of all possible minds, the ones we would consider friendly. In some sense, we would like to develop an AI that can create expert AI, an AI-theory AI that can create AI, and a Friendly-AI-theory Friendly AI that can create Friendly AI.

One approach to protecting against unfriendly AI is to give Friendly AI a moral trajectory to aim toward, rather than hard-coding a particular morality into it. He used the example of the ancient Greeks, had they possessed the ability to create AGI. Their AGI would not necessarily be considered friendly by today's morals, and likewise our own AGI might not be truly friendly if based only on today's morals. Therefore, a Friendly AGI should be able to pursue further moral development, to avoid the constraints that would otherwise be imposed by our existing morals.

Published by

Richard Leis

Richard Leis is a writer and poet living in Tucson, Arizona. His poetry has been published in Impossible Archetype and is forthcoming from The Laurel Review. A piece of flash fiction is forthcoming from Cold Creek Review. His essays about fairy tales and technology have been published online at Tiny Donkey and Fairy Tale Review’s “Fairy-Tale Files.” Richard is also Downlink Lead for HiRISE at the University of Arizona.