Human-like Reasoning


Proof AI coming alive? Microsoft says its GPT-4 is already 'showing signs of human reasoning'

Daily Mail - Science & tech

Fears about artificial intelligence coming alive could soon be validated, as a new study finds that OpenAI's latest version of ChatGPT shows human-like reasoning. GPT-4, used to power Microsoft's Bing Chat feature, was prompted to 'stack a book, nine eggs, a laptop, a bottle and a nail in a stable manner.' The system arranged the items so the eggs would not break, detailing how each should be placed on the others, starting with the book and ending with the nail. It also commented on arranging the items so the eggs do not crack, something the researchers suggest only humans could fully understand. Microsoft's research may fuel concerns that AI is progressing at a speed that will make it uncontrollable by humans, a point known as the Singularity, which some predict will arrive by 2045.


Deep Learning is Human, Through and Through

#artificialintelligence

Bengio and LeCun see no reason why deep learning systems cannot be made to reason. Said Bengio, "Humans also use some kind of neural nets in their brains, and I believe that there are ways to get to human-like reasoning with deep learning architectures." It was 10 years ago, in 2012, that deep learning made its breakthrough, when an innovative algorithm for classifying images with multi-layered neural networks suddenly performed spectacularly better than every algorithm before it. That breakthrough led to deep learning's adoption in domains such as speech and image recognition, automatic translation and transcription, and robotics. As deep learning was embedded into ever more everyday applications, more and more examples of what can go wrong also surfaced: artificial intelligence (AI) systems that discriminate, confirm stereotypes, make inscrutable decisions, and require a lot of data and sometimes a huge amount of energy.


Bringing human-like reasoning to driverless car navigation

#artificialintelligence

With the aim of bringing more human-like reasoning to autonomous vehicles, MIT researchers have created a system that uses only simple maps and visual data to enable driverless cars to navigate routes in new, complex environments. Human drivers are exceptionally good at navigating roads they haven't driven on before, using observation and simple tools: we simply match what we see around us to what we see on our GPS devices to determine where we are and where we need to go. Today's driverless cars, by contrast, struggle with this kind of reasoning. In every new area, the cars must first map and analyze all the new roads, which is very time-consuming. The systems also rely on complex maps, usually generated by 3-D scans, which are computationally intensive to generate and process on the fly.


The cognitive AI breakthrough: Real human-like reasoning in business AI solutions

#artificialintelligence

Conventional, data-crunching artificial intelligence, which is the foundation of deep learning, isn't enough on its own; the human-like reasoning of symbolic artificial intelligence is fascinating, but on its own it isn't enough either. The hybrid combination of the two, numeric data-analytics techniques (statistical analysis, modeling, and machine learning) plus the explainability and transparency of symbolic artificial intelligence, is now termed "cognitive AI." It is a significant breakthrough: the ability to perceive, understand, correlate, learn, teach, reason, and solve problems faster than existing AI solutions, in a human-like way. Key components of this technology were at the core of NASA's wildly successful Mars rover mission. Alone and 150 million miles from Earth, the rover was able to adapt to conditions without direct instruction. After a dust storm, it taught itself to rotate its solar panels and shake off accumulated dust that was blocking essential solar energy absorption.

