
ISS007-E-10807 (21 July 2003) --- This view of Earth's horizon as the sun sets over the Pacific Ocean was taken by an Expedition 7 crewmember onboard the International Space Station (ISS). Anvil tops of thunderclouds are also visible. Credit: Earth Science and Remote Sensing Unit, NASA Johnson Space Center


Speaker: Wendell Wallach


Wendell Wallach is an expert in the emerging discipline of “machine ethics”, a topic he discussed in the context of AGI and the Technological Singularity.

Wallach presented a number of obstacles that make these technologies difficult to develop, including the sheer complexity of the task, thresholds that must first be crossed, and bioethical concerns. Can Moore’s Law be equated with the development of minds? Do synapses really relate to bits in any useful sense? Among other obstacles, Wallach mentioned the primitiveness of current brain-scanning technology as well as our poor understanding of semantics, vision, and locomotion.

Should these obstacles be overcome, Wallach said, integrating the component technologies into an AGI would introduce yet another level of difficulty.

These difficulties aside, there are risks and concerns involved in AGI development and a potential Singularity. Wallach predicted that within the next few years a major catastrophe caused by autonomous expert systems would occur, leading to a surge in fears of AI. Popular sentiment could swing from indifference to calls for banning or curtailing further AGI research.

The field of machine ethics explores moral decision-making facilities in artificial agents and the host of questions the advent of AGI will pose. Do we need these facilities? Do we want computers to make ethical decisions? On whose morality should these artificial moral agents be based? How can ethics be made computable?

Wallach proposed two approaches to developing a moral decision-making facility in artificial agents:

  • top-down: parse explicitly stated ethical principles into code
  • bottom-up: let the artificial agent acquire ethics through evolution, development, learning, or fine-tuning (a rough sketch of the contrast between the two follows)
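
Wallach did not present either approach in code, but a minimal, hypothetical sketch may make the distinction concrete. In the top-down version below, prohibitions taken from an explicit ethical statement are hard-coded as rules; in the bottom-up version, the agent learns a crude preference from human feedback on labelled examples. All rule names, features, and numbers here are invented for illustration.

```python
# Hypothetical sketch contrasting top-down and bottom-up machine ethics.

# --- Top-down: explicit ethical statements parsed into code as hard rules ---
FORBIDDEN = {"deceive_user", "withhold_safety_warning"}  # invented rule set

def top_down_permitted(action: str) -> bool:
    """An action is permitted unless an explicit rule forbids it."""
    return action not in FORBIDDEN

# --- Bottom-up: the agent learns a preference from labelled feedback ---
# Each example is (feature vector, human verdict: +1 acceptable / -1 not),
# with invented features such as (expected benefit, expected harm).
examples = [
    ((1.0, 0.0), +1),
    ((0.2, 0.9), -1),
    ((0.7, 0.1), +1),
    ((0.1, 0.8), -1),
]

weights = [0.0, 0.0]

def score(features):
    return sum(w * x for w, x in zip(weights, features))

# Simple perceptron-style updates: nudge the weights toward the human verdicts.
for _ in range(20):
    for features, verdict in examples:
        if score(features) * verdict <= 0:  # misclassified example
            weights = [w + 0.1 * verdict * x for w, x in zip(weights, features)]

def bottom_up_permitted(features) -> bool:
    """Judge a new action by the learned preference rather than fixed rules."""
    return score(features) > 0

print(top_down_permitted("deceive_user"))  # False: blocked by an explicit rule
print(bottom_up_permitted((0.8, 0.2)))     # True: learned preference approves it
```

The trade-off the sketch illustrates is the one Wallach's two approaches imply: hand-written rules are transparent but brittle outside the cases their authors anticipated, while a learned preference can generalise but inherits whatever biases and gaps exist in its feedback.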

Finally, Wallach listed some reasons why artificial moral agents might apply ethics and morality better than humans do. While human morality rests on biochemical processes, machines would be built as logical platforms, and their calculated morality might stand in contrast to our own. For one thing, machines would be able to evaluate many more moral options, and do so faster, before choosing their actions. Without greed or emotion, machines might make better choices than humans.
