Twitter has been aflame with a pronouncement from Elon Musk. According to the visionary entrepreneur, the odds are very high that we're all living in a version of the Matrix. The logic goes something like this: if it's possible to create consciousness on a computer, someone, human or alien, will have done it. If they've done it, they've probably done it many times (after all, a computer can run any number of programs). And if that's true, the number of simulated consciousnesses is probably much higher -- even orders of magnitude higher -- than the number of consciousnesses that aren't simulated.
Most researchers you speak to these days predict that, after the current boom in neural networks and machine learning, we will reach A.G.I. (artificial general intelligence), then soon A.H.I. (artificial human intelligence), and finally A.S.I. (artificial super intelligence). While this seems like the most logical path, does that mean we should rigidly follow it? The artificial human intelligence step in particular has serious downsides, both in its implementation details (which can be overcome) and in its implications for the step that follows, artificial super intelligence.

Our Brains Are Not Computers

Recently I came across a truly awesome article called The Empty Brain, in which Robert Epstein explains that our brains are not like computers at all, even though that is mostly the model we adhere to these days. His most convincing arguments come from his brief rundown of how humans have perceived their own brains throughout history.
Because navigation produces readily observable actions, it provides an important window into how perception and reasoning support intelligent behavior. This paper summarizes recent results on navigation from the perspectives of cognitive neuroscience, cognitive psychology, and cognitive robotics. Together, these results argue for the significance of a learned spatial cognitive model. The feasibility of such a model for navigation is demonstrated, and important issues are raised for a standard model of the mind.
Will learner-centred AI be banned from classrooms like smartphones? I was struck by a claim made in a promotional video for IBM's Watson AI technology. In the 30 or so years I have spent working with digital platforms across the education and creative sectors, I've noticed that these sorts of claims appear every time a new bit of tech arrives. Watson, of course, is very smart technology. It hasn't passed the Turing test, but it did beat the human champions on the TV trivia game show Jeopardy!, achieving this with some impressive computing power.