AI tools like ChatGPT and Google's Gemini are 'irrational' and prone to making simple mistakes, study finds
While you might expect AI to be the epitome of cold, logical reasoning, researchers now suggest that these systems may be even more illogical than humans. Researchers from University College London put seven of the leading AI models through a series of classic tests designed to probe human reasoning. Even the best-performing models were found to be irrational and prone to simple mistakes, with most getting the answer wrong more than half the time. However, the researchers also found that these models weren't irrational in the same way as humans, and some even refused to answer logic questions on 'ethical grounds'. Olivia Macmillan-Scott, a PhD student at UCL and lead author of the paper, says: 'Based on the results of our study and other research on Large Language Models, it's safe to say that these models do not "think" like humans yet.'
Jun-4-2024, 23:01:55 GMT