Day Two Speaker: Eliezer Yudkowsky

Yudkowsky returned to the stage to discuss the challenge of Friendly AI. The problem, as Yudkowsky frames it, is hard because it is difficult to pick out, from the space of all possible minds, the ones we would consider friendly. In some sense, we would like to develop an AI that can create expert AI, an AI versed in AI theory that can create better AI, and a Friendly AI versed in Friendly-AI theory that can create Friendly AI.

One approach to protecting against unfriendly AI is to give Friendly AI a moral trajectory to aim toward, rather than hard-coding a particular morality into it. He used the example of the ancient Greeks: had they been able to create AGI, their AGI would not necessarily be considered friendly by today's morals, and our own AGI might not be truly friendly if based only on today's morals. Therefore, Friendly AGI should be able to pursue further moral development, rather than being constrained by the limits of our existing morals.