2022-09


Is artificial intelligence the pill that health care needs?

#artificialintelligence

Scheduling nurses in the emergency department of St. Michael's Hospital used to be a painful four-hour-a-day job. Now it's done in 15 minutes thanks to an automated program built by data scientists at Unity Health, where a team of more than 25 employees is harnessing artificial intelligence and machine learning to improve care. Unity Health includes St. Mike's, St. Joseph's Health Centre and Providence Healthcare. The team has also created an early warning system that alerts doctors and nurses if a patient is at risk of going to the ICU or dying. The programs are just two of more than 40 that have gone live since 2019, when the analytics department was founded, an initiative driven largely by Dr. Tim Rutledge, Unity's CEO, who believes the technology can dramatically change health care.


Why AI will never rule the world

#artificialintelligence

Call it the Skynet hypothesis, Artificial General Intelligence, or the advent of the Singularity -- for years, AI experts and non-experts alike have fretted over (and, for a small group, celebrated) the idea that artificial intelligence may one day become smarter than humans. According to the theory, advances in AI -- specifically of the machine learning type that's able to take on new information and rewrite its code accordingly -- will eventually catch up with the wetware of the biological brain. In this interpretation of events, every AI advance from Jeopardy-winning IBM machines to the massive AI language model GPT-3 is taking humanity one step closer to an existential threat. Except that it will never happen. Co-authors Barry Smith, a philosophy professor at the University at Buffalo, and Jobst Landgrebe, founder of the German AI company Cognotekt, argue that human intelligence won't be overtaken by "an immortal dictator" any time soon -- or ever.


Alexa Can Speak in Your Dead Grandmother's Voice. Thanks, We Hate It

#artificialintelligence

In the very near future, Amazon's famed voice assistant, Alexa, may sound quite different from the dutiful (and impersonal) voice you've grown accustomed to since it rolled out in 2014. At least, that's what Rohit Prasad, Amazon's senior vice president and head scientist for Alexa, announced at Amazon's re:MARS conference, a global artificial intelligence (AI) event that Amazon founder and executive chair Jeff Bezos hosted over the summer. With just a one-minute audio sample, the technology could bring a loved one's voice bounding through an Echo device's speakers. Prasad used a short presentation to show the audience how the new speech-synthesizer technology could help us forge lasting memories of our deceased relatives. "Alexa, can grandma finish reading me The Wizard of Oz?" a young boy asked a cute Echo speaker with big panda eyes.


Robot navigates indoors by tracking anomalies in magnetic fields

New Scientist - News

A robot can autonomously navigate inside a building using nothing but a magnetometer and a detailed map of local magnetic anomalies. The technique could provide a means for people and robots to find their way around large buildings, but the technology may be some way off commercial application because of the hefty cost and the size of the sensors. Satellite navigation systems like Russia's GLONASS, the European Union's Galileo and China's BeiDou can provide accurate location information all …
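The core idea, matching a short run of magnetometer readings against a pre-surveyed map of local magnetic anomalies, can be illustrated in a few lines. The grid values, path offsets and least-squares matching below are illustrative assumptions, not the system covered in the article, which would also fuse odometry and proper filtering.

```python
import numpy as np

# Minimal sketch of magnetic-fingerprint localisation, assuming a
# pre-surveyed grid map of field-magnitude anomalies (values are
# hypothetical). Real systems fuse odometry and use particle filters;
# this only illustrates the matching step.

# Hypothetical anomaly map: field magnitude (microtesla) on a 1 m grid.
anomaly_map = np.array([
    [48.2, 47.9, 51.3, 49.0],
    [50.1, 46.5, 52.8, 48.7],
    [49.4, 45.9, 53.5, 47.2],
])

def localise(readings, path_offsets, grid):
    """Return the grid cell whose neighbourhood best matches a short
    sequence of magnetometer readings taken along known offsets
    (e.g. from wheel odometry)."""
    best, best_err = None, np.inf
    rows, cols = grid.shape
    for r in range(rows):
        for c in range(cols):
            try:
                predicted = [grid[r + dr, c + dc] for dr, dc in path_offsets]
            except IndexError:
                continue  # candidate path leaves the mapped area
            err = np.sum((np.asarray(predicted) - np.asarray(readings)) ** 2)
            if err < best_err:
                best, best_err = (r, c), err
    return best, best_err

# Three readings taken while moving one cell east per step.
measured = [46.4, 52.9, 48.6]
offsets = [(0, 0), (0, 1), (0, 2)]
print(localise(measured, offsets, anomaly_map))  # -> ((1, 1), small error)
```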


When AI Asks Dumb Questions, It Gets Smart Fast

#artificialintelligence

Many AI systems become smarter by relying on a brute-force method called machine learning: they find patterns in data to, say, figure out what a chair looks like after analyzing thousands of pictures of furniture. New research suggests patiently correcting artificial intelligence (AI) when it asks dumb questions may be key to helping the technology learn. Stanford University scientists trained a machine learning AI to identify gaps in its knowledge, as well as to formulate often-stupid questions about images that strangers would answer. When people responded, the system received feedback prompting it to adjust its inner mechanisms to behave similarly in the future; the researchers also "rewarded" the AI for writing smart questions to which humans responded. The AI absorbed lessons in language and social norms over time, refining its ability to compose sensible and easily answerable queries.
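The loop described here, ask a question, see whether a person bothers to answer, and reinforce the kinds of questions that get answered, can be sketched as a simple reward-driven update. The question templates, the 0/1 reward and the bandit-style update below are assumptions for illustration only; the Stanford system trains a full question-generation model rather than scoring a handful of templates.

```python
import math
import random

# Sketch of the reward loop: the agent asks a question, a (simulated)
# person either answers or ignores it, and the agent nudges its
# preference for that style of question. Templates, reward and update
# rule are invented for illustration.

templates = [
    "What colour is the {obj}?",
    "Is there a {obj} in this picture?",
    "How does the {obj} feel about Mondays?",   # ill-posed on purpose
]
preferences = [0.0] * len(templates)            # learned scores
LEARNING_RATE = 0.5

def softmax(scores):
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def ask_and_learn(obj, human_answered):
    """Sample a question, then reinforce or penalise its template
    depending on whether the human answered."""
    probs = softmax(preferences)
    idx = random.choices(range(len(templates)), weights=probs)[0]
    question = templates[idx].format(obj=obj)
    reward = 1.0 if human_answered(question) else 0.0
    # Bandit-style update: move the chosen template toward its reward.
    preferences[idx] += LEARNING_RATE * (reward - probs[idx])
    return question, reward

def answers(question):
    # Simulated annotator: ignores nonsensical questions.
    return "Mondays" not in question

for _ in range(200):
    ask_and_learn("chair", answers)
print({t: round(p, 2) for t, p in zip(templates, preferences)})
```

After a few hundred rounds the ill-posed template's score drops while the answerable ones rise, which is the qualitative behaviour the article describes.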


Hidden Malware Ratchets Up Cybersecurity Risks

Communications of the ACM

The ability to peer into computing devices and spot malware has become nothing less than critical. Every day, in every corner of the world, cybersecurity software from an array of vendors scans systems in search of tiny pieces of code that could do damage--and, in a worst-case scenario, destroy an entire business.


Neurosymbolic AI

Communications of the ACM

The ongoing revolution in artificial intelligence (AI)--in image recognition, natural language processing and translation, and much more--has been driven by neural networks, specifically many-layer versions known as deep learning. These systems have well-known weaknesses, but their capability continues to grow, even as they demand ever more data and energy. At the same time, other critical applications need much more than just powerful pattern recognition, and deep learning does not provide the sorts of performance guarantees that are customary in computer science. To address these issues, some researchers favor combining neural networks with older tools for artificial intelligence. In particular, neurosymbolic AI incorporates the long-studied symbolic representation of objects and their relationships.
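As a rough illustration of the pattern, the sketch below pairs a stubbed-out "neural" perception step with a small set of explicit symbolic rules. The scene, predicates and rules are invented; a real neurosymbolic system would replace the stub with a trained network and a proper reasoner.

```python
# Minimal neurosymbolic sketch: a neural module handles perception
# (stubbed out here with fixed outputs), and a symbolic layer applies
# explicit rules over the resulting facts.

def neural_perception(image):
    """Stand-in for a trained detector: returns (subject, predicate,
    object, confidence) facts. A real system would run a neural net."""
    return [("cup", "on", "table", 0.92),
            ("table", "made_of", "glass", 0.88)]

def symbolic_reasoning(facts, threshold=0.8):
    """Apply hand-written rules to high-confidence facts:
    made_of glass -> fragile; X on fragile Y -> X at risk if Y breaks."""
    accepted = [f[:3] for f in facts if f[3] >= threshold]
    fragile = {s for (s, p, o) in accepted if p == "made_of" and o == "glass"}
    derived = {(s, "is", "fragile") for s in fragile}
    for (s, p, o) in accepted:
        if p == "on" and o in fragile:
            derived.add((s, "at_risk_if_breaks", o))
    return derived

facts = neural_perception(image=None)  # no real image in this sketch
print(symbolic_reasoning(facts))
# {('table', 'is', 'fragile'), ('cup', 'at_risk_if_breaks', 'table')}
```

The division of labour is the point: the network supplies uncertain perceptual facts, while the symbolic layer contributes explicit, inspectable inference of the kind deep learning alone does not guarantee.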


Truly autonomous cars may be impossible without helpful human touch

#artificialintelligence

An operator controls a Fetch driverless car from the office of Imperium Drive during driverless car trials in Milton Keynes, Britain, June 8, 2022.

MILTON KEYNES, England (Reuters) - Autonomous vehicle (AV) startups have raised tens of billions of dollars based on promises to develop truly self-driving cars, but industry executives and experts say remote human supervisors may be needed permanently to help robot drivers in trouble. The central premise of autonomous vehicles – that computers and artificial intelligence will dramatically reduce accidents caused by human error – has driven much of the research and investment. But there is a catch: Making robot cars that can drive more safely than people is immensely tough because self-driving software systems simply lack humans' ability to predict and assess risk quickly, especially when encountering unexpected incidents or "edge cases." "Well, my question would be, 'Why?'" said Kyle Vogt, CEO of Cruise, a unit of General Motors (NYSE:GM), when asked if he could see a point where remote human overseers should be removed from operations.


The robots are here. And they are making you fries.

General News Tweet Watch

You could see it coming. Flippy started acting weird, jerking and hitching. The worker on the fry station had witnessed this behavior before. Even Joe Garcia, the Miso Robotics "robot support specialist" assigned to troubleshoot at Jack in the Box, had seen it. Garcia, a mechanical engineering graduate from Loyola Marymount University who one day wants to work for NASA, is spending his days swooping in when Flippy occasionally loses his mind as he encounters tacos.


A third of scientists working on AI say it could cause global disaster

New Scientist - News

More than one-third of artificial intelligence researchers around the world agree that AI decisions could cause a catastrophe as bad as all-out nuclear war in this century. The findings come from a survey covering the opinions of 327 researchers who had recently co-authored papers on AI research in natural language processing.