Geoffrey Hinton, dubbed the 'Godfather of AI,' warns technology will be smarter than humans in five years
The 'Godfather of AI' has warned the tech will be smarter than humans in some ways by the end of the decade - and he believes it will ultimately destroy humanity. In a doom-laden interview with 60 Minutes, Geoffrey Hinton, 75, predicted that within five years the systems will surpass human intelligence, leading to the rise of 'killer robots,' fake news and a boom in unemployment. Hinton is a former Google executive credited with creating the technology that became the bedrock of systems like ChatGPT and Google Bard. He recently revealed his fears that the technology could go rogue and write its own code, allowing it to modify itself. While the scientist fears many aspects of the technology, he said AI has huge benefits in healthcare, such as designing drugs and recognizing medical issues.
- Information Technology > Artificial Intelligence > Issues > Social & Ethical Issues (0.70)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.60)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.60)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.60)
Apocalypse not now? AI's benefits may yet outweigh its very real dangers
Stephen Cave has considerable experience of well-intentioned actions that have unhappy consequences. A former senior diplomat in the Foreign Office during the New Labour era, he was involved in treaty negotiations which later - and unexpectedly - unravelled, triggering several international events that included Brexit. "I know the impact of well-meant global events that have gone wrong," he admits. His experience could prove valuable, however. The former diplomat, now a senior academic, is about to head a new Cambridge University institute which will investigate all aspects of artificial intelligence, in a bid to pinpoint the intellectual perils we face from the growing prowess of computers and to highlight its positive uses.
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.61)
- North America > United States > Massachusetts (0.05)
- Government > Regional Government > Europe Government > United Kingdom Government (0.36)
- Media (0.31)
Mind-reading tech 'must include neurodivergent people to avoid bias'
Mind-reading technologies pose a "real danger" of discrimination and bias, the Information Commissioner's Office has warned, as it develops specific guidance for companies working in the sci-fi field of neurodata. The use of technology to monitor information coming directly from the brain and nervous system "will become widespread over the next decade", the ICO said, as it moves from a highly regulated medical advancement to a more general purpose technology. It is already being explored for potential applications in personal wellbeing, sport and marketing, and even for workplace monitoring. The current state-of-the-art in the field is demonstrated by individuals like Gert-Jan Oskam, a 40-year-old Dutch man who was paralysed in a cycling accident 12 years ago. In May, electronic implants in his brain gave him the ability to walk. "To many, the idea of neurotechnology conjures up images of science fiction films, but this technology is real and it is developing rapidly," said Stephen Almond, the ICO's executive director of regulatory risk.
- Information Technology > Security & Privacy (0.52)
- Health & Medicine > Therapeutic Area > Neurology (0.34)
The Benefits, Future and some Real Dangers of Artificial Intelligence
Artificial Intelligence (AI) can best be described as a combination of algorithms embedded in an automated machine, enabling it to exhibit the same level of "thinking intelligence" as human beings. It is deemed one of the most important revolutions in technology since computing was invented. AI is projected to change everything (and is already doing so in many industries). There is no single definition of artificial intelligence accepted by all experts, first because it is a new, changing and experimental science.
- Asia > China (0.10)
- North America > United States (0.05)
Op-Ed: The real reason we're afraid of robots
It helps drive your car, recognizes your face at the airport's immigration checkpoint, interprets your CT scans, reads your resume, traces your interactions on social media, and even vacuums your carpet. As AI encroaches on every aspect of our lives, people watch with a mixture of fascination, bewilderment and fear. AI's overthrow of humanity is a familiar trope in popular culture, from Isaac Asimov's "I, Robot" to the "Terminator" movies and "The Matrix." Some scholars express similar concerns. The Oxford philosopher Nick Bostrom worries that artificial intelligence poses a greater threat to humanity than climate change, and the bestselling historian Yuval Noah Harari warns that the history of tomorrow may belong to the cult of Dataism, in which humanity willingly merges itself into the flow of information controlled by artificial systems.
5 Real Dangers of AI
In the past few years, AI has advanced our technology at an incredible rate. From completely automating labor-intensive jobs to diagnosing lung cancer, AI has achieved feats previously thought impossible. However, in the wrong hands, an algorithm can be a destructive weapon. To ensure that malicious actors don't wreak havoc in our society, there are several key challenges which we have to solve. The real danger of AI is not the rise of a sentient algorithm like SkyNet taking over the world.
- North America > United States (0.29)
- Asia > China (0.05)
- Government (0.73)
- Health & Medicine > Therapeutic Area (0.56)
- Information Technology > Security & Privacy (0.48)
Global Big Data Conference
In my last post, "It's Time to Demystify Machine Learning," I shared an easy explanation of machine learning: teaching computers to learn by repeatedly correcting models derived from data until the machine correctly applies the model rapidly to datasets. Now, to take a look at the other side of this, I'll share three things machine learning can't do well. Our brain works by collecting sensory data around us and encoding it in pairs at junctions called synapses. The more often these experiences repeat, the stronger the synapses' chemical bonds become, enabling us to practice improving our skills. Our brains also take in live data and use past experiences as a filter to quickly and effortlessly assess and understand what's going on around us.
- Information Technology > Artificial Intelligence > Machine Learning (0.89)
- Information Technology > Data Science > Data Mining > Big Data (0.40)
Forget rampant killer robots: AI's real danger is far more insidious
WHEN I was growing up, nobody promised me a flying car. But I was promised an AI apocalypse. Those shiny machines were going to crush our skulls underfoot, and we were all going to welcome our new robot overlords. Many people still seem to think it is likely to happen. But we might still get a deadly AI nightmare.
- Media (0.50)
- Information Technology (0.37)
- Government > Military (0.31)
- Banking & Finance > Trading (0.31)
Richard Socher: The real danger of AI is human bias, not evil robots
He's the founder of MetaMind, an artificial intelligence (AI) startup that raised more than $8 million in venture capital backing from Khosla Ventures and others before being acquired by Salesforce in 2016, and he previously served as adjunct professor in Stanford's computer science department, where he also received his Ph.D. (He earned his bachelor's degree at Leipzig University and his master's at Saarland University.) In 2007, Socher was part of the team that won first place in the semantic robot vision challenge. And he was instrumental in assembling ImageNet, a publicly available database of annotated images used to test, train, and validate computer vision models. Socher -- who's now Salesforce's chief data scientist -- has long been attracted to the field of natural language processing, a subfield of computer science concerned with interactions between computers and human languages. His dissertation demonstrated that deep learning -- layered mathematical functions loosely modeled on neurons in the human brain -- could solve several different natural language processing tasks simultaneously, obviating the need to develop multiple models.
- Banking & Finance (0.50)
- Information Technology > Security & Privacy (0.31)
The real danger of deepfake videos is that we may question everything
FAKE videos created by artificial intelligence, known as deepfakes, are becoming incredibly convincing. They show people saying or doing things they never said or did, and recent technological leaps have made producing realistic ones easier than ever (see "AI can make high-definition fake videos from just a simple sketch"). Although having fakes masquerade as the genuine article is a risk, it may not be the main problem. Instead it could be that with such convincing fakes around, it is easier for someone to falsely dispute the authenticity of the real deal. A stark illustration of this can be found in the US, where possession of computer-generated images of child sexual abuse is treated more leniently by the courts than the real thing.
- Information Technology > Security & Privacy (0.65)
- Law Enforcement & Public Safety > Crime Prevention & Enforcement (0.63)
- Law (0.63)