An artificial-intelligence-enabled robot has passed the written test of China's national medical licensing examination for the first time, marking another milestone in the quest for AI technology to match or surpass human intelligence. The robot, named Xiaoyi and developed by Tsinghua University and the Chinese information technology firm iFlytek, achieved a score of 456, 96 points higher than the required mark of 360, according to a company announcement. To pass the test, Xiaoyi was required to memorize and understand the contents of one million medical images, 53 medical books, two million medical records, and 400,000 pieces of medical literature and medical reports, a task that normally takes a medical student five years of study. The robot reportedly failed an earlier attempt to pass the test. "Xiaoyi's successful pass in the written exam represents a significant development in the field of cognitive intelligence," said iFlytek in the announcement.
Electropherograms are produced in great numbers in forensic DNA laboratories as part of everyday criminal casework. Before the results of these electropherograms can be used, they must be scrutinised by analysts to determine what the data tells us about the underlying DNA sequences and what is purely an artefact of the DNA profiling process. A technique that lends itself well to such a classification task in the face of vast amounts of data is the artificial neural network. These networks, inspired by the workings of the human brain, have been increasingly successful at analysing large datasets, performing medical diagnoses, identifying handwriting, playing games, and recognising images. In this work we demonstrate an artificial neural network that we train to 'read' electropherograms, and show that it can generalise to unseen profiles.
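The classification idea above can be sketched in miniature. The following is a hypothetical toy example, not the authors' method: it trains a small one-hidden-layer network by gradient descent to label synthetic "scan windows" as either a clean allele-like peak or a ragged noise artefact. The data generator, window size, and labels are all invented stand-ins for real electropherogram features.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in data: each "window" is 16 fluorescence readings.
# True allele peaks are tall and symmetric; artefacts are low, ragged noise.
def make_window(is_allele):
    x = np.linspace(-1, 1, 16)
    if is_allele:
        signal = 5.0 * np.exp(-(x / 0.3) ** 2)      # clean Gaussian peak
    else:
        signal = 1.5 * np.abs(rng.normal(size=16))  # irregular noise spikes
    return signal + 0.1 * rng.normal(size=16)       # measurement noise

X = np.array([make_window(i % 2 == 0) for i in range(400)])
y = np.array([1.0 if i % 2 == 0 else 0.0 for i in range(400)])

# One hidden layer (tanh) and a sigmoid output, trained with plain
# gradient descent on the cross-entropy loss.
W1 = 0.1 * rng.normal(size=(16, 8)); b1 = np.zeros(8)
W2 = 0.1 * rng.normal(size=8);       b2 = 0.0
lr = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(1000):
    h = np.tanh(X @ W1 + b1)            # hidden activations
    p = sigmoid(h @ W2 + b2)            # predicted P(allele)
    grad_out = (p - y) / len(y)         # cross-entropy output gradient
    grad_h = np.outer(grad_out, W2) * (1.0 - h ** 2)  # backprop to hidden
    W2 -= lr * (h.T @ grad_out); b2 -= lr * grad_out.sum()
    W1 -= lr * (X.T @ grad_h);   b1 -= lr * grad_h.sum(axis=0)

acc = float(np.mean((p > 0.5) == (y == 1)))
print(f"training accuracy: {acc:.2f}")
```

On this deliberately easy synthetic task the network separates the two classes almost immediately; the point is only to illustrate the train-then-classify workflow, since real electropherogram analysis must cope with far messier signals.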
The rise of robots could lead to 'unprecedented' change and wipe out over a third of jobs in some areas by the 2030s, a new report warns. A 'heat map' of Britain shows the areas most at risk of automation, with workers in the ex-industrial heartlands of the North and Midlands most likely to lose their jobs. The upheaval thrown up by 'supercharged' technological change over the next 15 years could make the industrial revolution pale in comparison, the study says. The report, The Impact of AI in UK Constituencies, by the think-tank Future Advocacy, slams the government for failing to prepare for the rapid change looming. Researchers said the results are 'startling' and told ministers to urgently look at new education and training to help the country adapt to the challenge.
About 4,000 people listened to Cuban as he kicked off his shoes--literally--and explained how AI will change the game for companies, educators, and future developments. He's also keeping his eyes peeled for smaller companies in machine learning and AI, and already has at least three companies in his investment portfolio. "[Software writing] skill sets won't be nearly as valuable as being able to take a liberal arts education … and applying those [skills] in assisting and developing networks." But in order for the country to advance to that future, AI and robotics need to become core competencies in the U.S., and not just in the business world, Cuban said.
And in an increasingly data-driven industry, medical education hasn't kept pace. "Medical education does little to train doctors in the data science, statistics, or behavioral science required to develop, evaluate, and apply algorithms in clinical practice." But that's only possible if medical teams adopt new members: clinicians well-versed in computer science who can interpret analytics designed to "systematically analyze every heartbeat" and treat "tens of thousands of Americans who might otherwise drop dead unexpectedly in any given year." And while developers and computer scientists are often frustrated with the industry's slow adoption of technology, validated clinical trials are a critical part of preventing the use of potentially harmful tools.
Sign language translators are scarce. Three engineering students from the University of Antwerp have a novel solution: cheap, 3D-printed humanoids that can translate into sign language on the fly. It's a solution that has only become possible with the convergence of 3D printing, the massive popularity of microcontrollers like the Arduino Due, and falling prices for robotics components. ASLAN is an acronym for "Antwerp's Sign Language Actuating Node."
A little more conservative, but just as eager to please, is virtual personal assistant Amy Ingram, the brainchild of New York start-up X.ai. Dr Ileana Stigliani, assistant professor of design and innovation at London's Imperial College Business School, says the answer is a resounding yes. The school runs an MBA [Master of Business Administration] programme that considers the social impact of AI and how it can address fundamental human needs. "Teaching the robot to ignore the bad ideas is critical," says Kriti Sharma, vice-president of bots and AI at financial services firm Sage Group.
Or -- and this one is real -- a robot arm that can perform rudimentary sign language. Their solution is "Antwerp's Sign Language Actuating Node," or ASLAN. It's a robotic hand and forearm that can perform sign language letters and numbers. It could also be used to help teach sign language -- a robot doesn't get tired of repeating a gesture for you to learn.
Both the DeepMind and CMU approaches use deep reinforcement learning, popularized by DeepMind's Atari-playing AI. A neural network is fed raw pixel data from a virtual environment and uses rewards, such as points in a computer game, to learn by trial and error (see "10 Breakthrough Technologies 2017: Reinforcement Learning"). By running through millions of training scenarios at accelerated speeds, both AI programs learned to associate words with particular objects and characteristics, which let them follow the commands. The millions of training runs required mean Domingos is not convinced pure deep reinforcement learning will ever crack the real world.
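The reward-driven trial-and-error loop described above can be illustrated in its simplest form. This is a hypothetical sketch, not the DeepMind or CMU systems: it uses tabular Q-learning on a tiny five-cell corridor, where deep reinforcement learning would instead replace the lookup table with a neural network reading raw pixels. The environment, reward, and hyperparameters are all invented for illustration.

```python
import random

random.seed(0)

# A 5-cell corridor: the agent starts at cell 0 and earns +1 reward
# only on reaching cell 4. Actions are step left (-1) or right (+1).
N, GOAL = 5, 4
Q = {(s, a): 0.0 for s in range(N) for a in (-1, +1)}
alpha, gamma, eps = 0.5, 0.9, 0.2   # learning rate, discount, exploration

for episode in range(200):
    s = 0
    while s != GOAL:
        # Epsilon-greedy: mostly exploit the current estimates,
        # occasionally explore a random action.
        if random.random() < eps:
            a = random.choice((-1, +1))
        else:
            a = max((-1, +1), key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N - 1)           # walls clamp the move
        r = 1.0 if s2 == GOAL else 0.0           # reward signal
        best_next = 0.0 if s2 == GOAL else max(Q[(s2, -1)], Q[(s2, +1)])
        # Q-learning update: nudge the estimate toward reward plus
        # discounted value of the best next action.
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2

# After training, the greedy policy at every non-goal cell should be
# "step right" (+1), i.e. head straight for the reward.
policy = [max((-1, +1), key=lambda act: Q[(s, act)]) for s in range(GOAL)]
print(policy)
```

Even this toy agent needs many episodes of blind exploration before the reward propagates back to the start state, which hints at why pixel-scale versions of the same idea demand millions of training runs.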