Home screening test for oral or throat cancer has 90 per cent accuracy

New Scientist

A new diagnostic tool uses artificial intelligence to detect oral and throat cancers from saliva samples with more than 90 per cent accuracy. Estimates suggest there will be 54,000 new cases of oral cancer and 20,640 new cases of oesophageal cancer in the US alone this year. The respective 5-year survival rates for these cancers are 68 and 20.6 per cent, but when detected early, those numbers jump to more than 86 and 47 per cent. The issue is that most oral and throat cancers aren't detected early. Current screening methods rely on visual examinations by a healthcare provider.


Is 'data labeling' the new blue-collar job of the AI era?

#artificialintelligence

Last year, a factory in China replaced 90% of its workers with robots. In call centers across the world, AI voices are replacing human customer service agents. Eventually, taxi and Uber drivers could be replaced by self-driving cars. The displacement of workers by technological advances is nothing new. Media theorist Douglas Rushkoff's new book Throwing Rocks at the Google Bus traces the origins of "digital industrialism," which has increasingly removed humans from the equation, granting power to corporations and stakeholders instead.


The New Jobs

Communications of the ACM

Rarely does a day go by without more news predicting the end of work. After all, autonomous vehicles are all but certain to replace truckers and taxi drivers in the coming decades, and robots have already taken over many jobs in factories and warehouses, and will continue to expand their reach beyond heavy industry as they become smarter and ever more affordable. Perhaps most frighteningly, even professional services no longer seem safe from the encroachment of increasingly sophisticated artificial intelligence (AI). Law firms, for example, employ electronic-discovery software, which uses natural language processing to sift through reams of documents faster and more cheaply than the entry-level lawyers who used to do this tedious work. Deep-learning image recognition tools can flag and classify worrisome tumors in digital scans as well as, or better than, experienced radiologists.


How Smart Can AI Get? - Future of Life Institute

#artificialintelligence

Capability Caution Principle: There being no consensus, we should avoid strong assumptions regarding upper limits on future AI capabilities.

A major change is coming, over unknown timescales but across every segment of society, and the people playing a part in that transition have a huge responsibility and opportunity to shape it for the best. What will trigger this change? The 23 Asilomar AI Principles offer a framework to help artificial intelligence benefit as many people as possible. But, as AI expert Toby Walsh said of the Principles, "Of course, it's just a start. The Principles represent the beginning of a conversation, and now we need to follow up with broad discussion about each individual principle." You can read the weekly discussions about previous principles here.

One of the greatest questions facing AI researchers is: just how smart and capable can artificial intelligence become? In recent years, the development of AI has accelerated in leaps and bounds. DeepMind's AlphaGo surpassed human performance in the challenging, intricate game of Go, and the company has created AI that can quickly learn to play Atari video games with much greater prowess than a person. We've also seen breakthroughs in language translation, self-driving vehicles, and even the creation of new medicinal molecules. But how much more advanced can AI become? Will it continue to excel only in narrow tasks, or will it develop broader learning skills that allow a single AI to outperform a human in most tasks? How do we prepare for an AI more intelligent than we can imagine? Some experts think human-level or even super-human AI could be developed in a matter of a couple of decades, while others don't think anyone will ever accomplish the feat.

The Capability Caution Principle argues that, until we have concrete evidence of what an AI can someday achieve, it is safer to assume that there are no upper limits – that is, for now, anything is possible and we need to plan accordingly. The Principle drew both consensus and disagreement from the experts. While everyone I interviewed generally agreed that we shouldn't assume upper limits for AI, their reasoning varied and some raised concerns. Stefano Ermon, an assistant professor at Stanford, and Roman Yampolskiy, an associate professor at the University of Louisville, both took a better-safe-than-sorry approach. Ermon turned to history as a reminder of how difficult future predictions are. He explained, "It's always hard to predict the future."


DT10: Artificial Intelligence. Is the AI apocalypse a tired Hollywood trope, or human destiny?

#artificialintelligence

Why is it that every time humans develop a really clever computer system in the movies, it seems intent on killing every last one of us at its first opportunity? In Stanley Kubrick's masterpiece, 2001: A Space Odyssey, HAL 9000 starts off as an attentive, if somewhat creepy, custodian of the astronauts aboard the USS Discovery One, before famously turning homicidal and trying to kill them all. In The Matrix, humanity's invention of AI promptly results in human-machine warfare, leading to humans enslaved as a biological source of energy by the machines. In Daniel H. Wilson's book Robopocalypse, computer scientists finally crack the code on the AI problem, only to have their creation develop a sudden and deep dislike for its creators. And you're not an especially sentient being yourself if you haven't heard the story of Skynet (see The Terminator, T2, T3, etc.).

The simple answer is that -- movies like Wall-E, Short Circuit, and Chappie notwithstanding -- Hollywood knows that nothing guarantees box office gold quite like an existential threat to all of humanity. Whether that threat is likely in real life is decidedly beside the point. How else can one explain the endless march of zombie flicks, not to mention those pesky, shark-infested tornadoes?

The reality of AI is nothing like the movies. Siri, Alexa, Watson, Cortana -- these are our HAL 9000s, and none seems even vaguely murderous. Yet the technology has taken leaps and bounds in the last decade, and seems poised to finally match the vision our artists have depicted in film for decades. Is Siri just a few upgrades away from killing you in your sleep, or is Hollywood running away with a tired idea? Looking back at the last decade of AI research helps to paint a clearer picture of a sometimes frightening, sometimes enlightened future.
An increasing number of prominent voices are being raised about the real dangers of humanity's continuing work on so-called artificial intelligence.


IBM bets big on Watson-branded cognitive computing

AITopics Original Links

IBM sees cognitive computing as the new frontier of computing and is positioning its Watson architecture as the way forward in this new landscape, for both the company and its customers. At a New York event Thursday to launch the organization's new Watson business unit, IBM CEO Ginni Rometty touted the 2011 Watson victory on the "Jeopardy" game show as nothing less than a harbinger of a new era in computing. Today we are in the "programmable era" of computers, in which all the possible actions that a computer can take must be programmed in advance, she explained. In contrast, Watson is "a new species," Rometty said. Watson "is taught--it is not programmed. It runs by experience and from interaction. By design, it gets smarter over time and gives better judgments over time," Rometty said.


The Myths and Legends of Artificial Intelligence

#artificialintelligence

Artificial intelligence, or AI, is the hot and trending topic everyone's been talking about these days. Industries like healthcare, manufacturing, transportation and customer service are already seeing the benefits of embracing this type of technology and what it can do to make better, more efficient processes. By 2020, 85 percent of customer interactions will be managed without a human. As AI starts to touch the industries of recruiting and HR, many experts have a lot to say about it before actively embracing what it can do. But, what is actually true about this type of technology and what is myth?


Should IBM be your AI and machine learning platform? - ZDNet

#artificialintelligence

Of all the tech giants throwing their weight behind artificial intelligence (AI) and machine learning, few receive the kind of attention garnered by IBM. After its seminal Jeopardy win in 2011, IBM Watson became synonymous with technologies such as cognitive computing and AI. Upon losing to Watson, former Jeopardy champion Ken Jennings famously wrote "I, for one, welcome our new computer overlords" under one of his responses. All of a sudden, Watson was a household name, igniting conversations about what could be accomplished with AI. SEE: IBM Watson: The smart person's guide (TechRepublic) While Watson is a major part of IBM's approach to AI solutions, it's only a piece of the puzzle.

