News and commentary about the Great Frontiers

ISS007-E-10807 (21 July 2003) --- This view of Earth's horizon as the sun sets over the Pacific Ocean was taken by an Expedition 7 crewmember onboard the International Space Station (ISS). Anvil tops of thunderclouds are also visible. Credit: Earth Science and Remote Sensing Unit, NASA Johnson Space Center


Day Two Speaker: James Hughes


James Hughes is the executive director of the Institute for Ethics and Emerging Technologies and host of Changesurfer Radio. He discussed preparation for the future and how futurists can have an impact on public policy.

Hughes listed a series of assumptions he makes about AGI and the Technological Singularity, such as that AGI is likely, but probably dangerous and radically alien. While he does not adhere to either a utopian or dystopian view of the Singularity, Hughes did suggest that there are real risks involved, and that our assumptions about AGI may need to be analyzed in some detail.

For example, do we really want a perfect utilitarian AGI, or a perfectly truthful AGI? Such AGIs might not behave in ways that are beneficial to humans.

While AGI is likely, Hughes suggested that we might also want to augment human intelligence so that we can keep pace with AGI and other technological advances, putting us in a better position to deal with any potential problems and consequences.

Other considerations Hughes highlighted included robot/AI rights and the possibility that we will need to regulate and license these superintelligences to better control their use.
