This is not an uncommon question in the futurist sphere, especially among transhumanists and singularitarians. Bruce Klein, President of Novamente, a company pursuing Artificial General Intelligence by way of virtual agents, asked this question on the company’s blog and invited people to respond. The question appears straightforward enough. Whether or not you follow progress in artificial intelligence, it is easy enough to parse what is being asked. A layperson might reply with “never” or “hundreds of years” or, if they know something about the field, maybe a date some time this century.
What the layperson probably will not do is contentiously set up their answer by deconstructing the original question. Leave it to the experts to do that. A sampling of response introductions:
- “Bruce, I generally think the question is misphrased.”
- “I can’t answer the question because too many of the terms aren’t clear. There is a limited number of things that can be usefully said while standing on one foot.”
- “Depends on how we define things.”
- “I sometimes hate the relative opacity of this popular question, yet always love the motivations behind asking it!”
- “In a way computers are already smarter than people.”
- “Essentially, I don’t think that is a well-founded question. When it comes to playing chess or balancing accounts, AI has already exceeded human intelligence. Operations can be performed orders of magnitude faster and more accurately.”
In most of the responses, a series of dates are eventually provided, along with assumptions being made, rationalizations, personal beliefs, and sometimes sources. Not only the question but the asking of the question could be examined further. Who is asking the question and why? Who is answering and why? What is the context – historical, social – in which this question is being asked? How does this question-answer process work in cyberspace, via email and blog post, as opposed to other media? How might people who do not speak or write English pose the same question, and how might they respond? Why have only men responded so far?
This then is intellectual discourse, an activity that might lead some laypeople to scratch their heads and ask why the responders did not just provide a year and a brief explanation for their selection. Let us look back at the opening statement of the original post by Klein: “A ~~question~~ very simply crafted poll I’m asking a few friends to gain a better perspective on the time-frame for when we may see greater-than-human level AI.”
The edit says a lot about the responses that were received. Sometimes it is interesting, even useful, to just ask a simple question and expect a simple answer. We know what is being asked without becoming facetious about it.
My answer after my own verbose setup? Next decade (2010 – 2019).
Why (uh-oh)? Trends and technologies converge. Too often people examine individual trends and ignore convergence and surprises, all the while keeping their biases in place, including human-centric biases. The substrate from which human-level artificial intelligence will arise is a matrix of computing hardware, software, communications technology, progress in our understanding of the human brain, experiments in social networking and the metaverse, robotics, economics (the cost of human labor versus automation, robotics, and AI; military, government, and private investment), and so on. This substrate is all but in place.
To a historian, new technology might appear to have arrived suddenly, as if one day Technology X did not exist and the next day it did. In the days that follow, Technology X loses its luster and becomes just another part of the background noise of our technological existence, another piece of the substrate, a historical footnote. Convergence occurs and technologies appear to vanish into one another.
We – proponents and critics alike – place human-level artificial intelligence on a pedestal. That this was a hard problem, or is not a hard problem at all, will be all but forgotten with the advent of AGI. Human-level intelligence itself is only as miraculous or mundane as we individually and subjectively choose to view it.
Whether or not we as laypeople or we as experts define our terms, we make assumptions about intelligence from our interactions with other humans. When those assumptions about intelligence match our interactions with other sentient beings, AI will have surpassed human-level intelligence. Yes, surpassed, not just equaled. That will happen within the next decade, when the substrate is appropriate for it.