It's been 15 years since a degenerative eye disease forced Erich Manser to stop driving. Today, he commutes to his job as an accessibility consultant via commuter trains and city buses, but he sometimes has trouble locating empty seats and must ask strangers for guidance. A step toward solving Manser's predicament could arrive as soon as next year. Manser's employer, IBM, and an independent carmaker called Local Motors are developing a self-driving, electric shuttle bus that combines artificial intelligence, augmented reality, and smartphone apps to serve people with vision, hearing, physical, and cognitive disabilities. The buses, dubbed "Olli," are designed to transport people around neighborhoods at speeds below 35 miles per hour and will be sold to cities, counties, airports, companies, and universities.
IBM said Anaconda will also integrate with the PowerAI software distribution for machine learning and deep learning so enterprises can take advantage of PowerAI's performance and GPU (graphics processing unit) optimization for data-intensive cognitive workloads. The agreement gives IBM another leg up in the fight to win business from data scientists and developers working on deep learning applications. Anaconda said it already has more than 16 million downloads. Offering the platform through IBM's Cognitive Systems unit will allow clients to quickly scale up the deep learning applications they develop using Anaconda. The advent of big data has been a bonanza for IT providers aiming to give enterprises and other organizations the tools to identify patterns in large data sets and convert information into actionable intelligence.
A new patent application has revealed that Disney is looking into the development of robotic versions of its animated characters. The document describes "soft body" robots built specifically for "physical interaction with humans". It doesn't mention any specific characters, but the images alongside the filing show off a bulbous torso resembling that of Big Hero 6's Baymax. The entertainment firm's application repeatedly stresses the importance of safety. It says the robots would have a "rigid support element" and soft, deformable body parts that could potentially be filled with a gas or fluid.
Computer vision is ready for its next big test: seeing in 3D. The ImageNet Challenge, which has boosted the development of image-recognition algorithms, will be replaced by a new competition next year that aims to help robots see the world in all its depth. Since 2010, researchers have trained image recognition algorithms on the ImageNet database, a go-to set of more than 14 million images hand-labelled with information about the objects they depict. The algorithms learn to classify the objects in the photos into different categories, such as house, steak or Alsatian. Almost all computer vision systems are trained like this before being fine-tuned on a more specific set of images for different tasks.
Most consumers don't really know what artificial intelligence (AI) does, and that basic misunderstanding has left some fearful of the technology. A Pegasystems study released this week, based on a survey of 6,000 consumers in six countries, found that consumers are hesitant to embrace AI devices and services. Some 36% said they are comfortable engaging with businesses that use AI, even if it results in a better customer experience. About 72% said they have some sort of fear about AI, with 24% worried about robots taking over the world. Only 34% of survey respondents thought they had directly experienced AI, yet when asked about the technologies in their lives, 84% reported using at least one AI-powered service or device, such as a virtual home assistant, an intelligent chatbot, or predictive product suggestions.
With Siri, users can command the digital assistant to perform many tasks. However, the mechanism by which Google Assistant's rival responds to voice commands has proven to be flawed. For one thing, Siri is designed to respond to commands from anyone, so people other than the device's owner can ask the AI assistant to do things, including access personal data. Fortunately, Apple appears to be working on a solution already. A new patent application from the Cupertino giant details how Samsung's biggest rival plans to make Siri more secure.
Medable announced today the launch of Cerebrum, a new cloud-based machine learning tool for healthcare apps, including HealthKit-, ResearchKit-, and CareKit-compatible apps. In recent years, we've seen a number of healthcare-focused developers emerge that provide HIPAA-compliant health app development as well as cloud-based data management and analytics. We've covered some of Medable's work with a ResearchKit app focused on patients with LVADs as well as a virtual care clinic. They also recently launched Axon, a do-it-yourself platform for developing ResearchKit apps. As health apps collect ever-increasing types and volumes of data on individuals, a core challenge is how to analyze that data and generate actionable insights that can improve patient care.
When I tell people that I work at an AI company, they often follow up with, "So, what kind of machine learning/deep learning do you do?" This isn't surprising, as most of the market attention (and hype) in and around AI has centered on machine learning and its high-profile subset, deep learning, and on natural language processing with the rise of chatbots and virtual assistants. But while machine learning is a core component of artificial intelligence, AI is, in fact, more than just ML. So, what does it really mean for an application to be "intelligent"? What does it take to create a system that is artificially intelligent?
Depression is a simple-sounding condition with complex origins that aren't fully understood. Now, machine learning may enable scientists to unpick some of its mysteries in order to provide better treatment. For patients to be diagnosed with Major Depressive Disorder, which is thought to be the result of a blend of genetic, environmental, and psychological factors, they have to display several of a long list of symptoms, such as fatigue or lack of concentration. Once diagnosed, they may receive cognitive behavioral therapy or medication to help ease their condition. But not every treatment works for every patient, as symptoms can vary widely.
The 78-video playlist above comes from a course called Neural Networks for Machine Learning, taught by Geoffrey Hinton, a computer science professor at the University of Toronto. The videos were created for a larger course taught on Coursera, which gets re-offered on a fairly regular basis. Neural Networks for Machine Learning will teach you about "artificial neural networks and how they're being used for machine learning, as applied to speech and object recognition, image segmentation, modeling language and human motion, etc." The course emphasizes "both the basic algorithms and the practical tricks needed to get them to work well." It's geared toward intermediate-level learners – those comfortable with calculus and with programming experience (in Python).