Each year, the artificial intelligence community convenes to administer the famous -- and famously controversial -- Turing test, pitting sophisticated software programs against humans to determine if a computer can "think." The machine that most often fools the judges wins the Most Human Computer Award. But there is also a prize, strange and intriguing, for the "Most Human Human." Brian Christian, a young poet with degrees in computer science and philosophy, was chosen to participate in a recent competition. This playful, profound book is not only a testament to his efforts to be deemed more human than a computer, but also a rollicking exploration of what it means to be human in the first place.
Most researchers you speak to these days predict that the current boom in neural networks for machine learning will lead to A.G.I. (artificial general intelligence), then soon after to A.H.I. (artificial human intelligence), and finally to A.S.I. (artificial super intelligence). While this seems like the most logical path, does that mean we should follow it rigidly? The artificial human intelligence step in particular has many downsides, both in its implementation details (which can be overcome) and in its implications for the step that follows, artificial super intelligence.
My colleague Ahmed shared an article yesterday about a new record for ASR performance: Cortana had reached an ultra-low 6.3% error rate. Human voice dictation error sits in the 4–6% range, which means we're just on the cusp. In May, I predicted at SpeechTek that within the next twelve months we'd surpass human error rates in ASR. The caveat, of course, is that this would be a well-trained and tuned ASR system for a specific individual.
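For context, the "error rate" quoted in these benchmarks is word error rate (WER): the word-level edit distance between the recognizer's output and a reference transcript, divided by the reference length. A minimal sketch of the standard dynamic-programming computation is below; the function name and example sentences are my own illustration, not from the article.

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: (substitutions + deletions + insertions) / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between the first i reference words
    # and the first j hypothesis words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i  # delete all i reference words
    for j in range(len(hyp) + 1):
        dp[0][j] = j  # insert all j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(
                dp[i - 1][j] + 1,         # deletion
                dp[i][j - 1] + 1,         # insertion
                dp[i - 1][j - 1] + cost,  # substitution or match
            )
    return dp[len(ref)][len(hyp)] / len(ref)

# One dropped word out of six reference words -> WER of about 16.7%
print(wer("the cat sat on the mat", "the cat sat on mat"))
```

A 6.3% WER means roughly one word in sixteen is wrong relative to the reference, which is why it sits so close to the 4–6% range measured for human transcribers.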