A fruit sorting robot built by a British consulting firm may not sound like a riveting leap forward. Later this month, Cambridge Consultants will demonstrate its fruit sorting robot at the AgriTechnica show in Germany. "Our world-class industrial sensing and control team has combined high-powered image-processing algorithms with low-cost sensors and commodity hardware to allow 'soft' control of robots when the task is not rigidly defined," says Chris Roberts, head of industrial robotics at Cambridge Consultants. It was just a matter of deploying existing solutions intelligently and strategically.
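Cambridge Consultants hasn't published its algorithms, but the "soft control" idea -- classify each item from cheap sensor data and choose an action per item, rather than following a rigidly scripted program -- can be sketched in a few lines. Everything below (thresholds, labels, bin names) is hypothetical, not the firm's actual pipeline:

```python
# Hypothetical sketch of per-item "soft" control: classify a fruit by mean
# colour from a low-cost RGB sensor, then pick an action for that item.
# Thresholds and labels are invented for illustration.

def classify_fruit(rgb):
    """Classify a fruit reading as 'ripe', 'unripe', or 'reject' by colour.

    rgb: (r, g, b) tuple of mean pixel values in 0..255.
    """
    r, g, b = rgb
    if r > 150 and g < 100:   # strongly red -> ripe
        return "ripe"
    if g > 150 and r < 120:   # strongly green -> unripe
        return "unripe"
    return "reject"           # ambiguous colour -> route to manual inspection

# Soft control: an action is chosen per item, not scripted in advance.
actions = {"ripe": "bin_A", "unripe": "bin_B", "reject": "bin_C"}

def sort_action(rgb):
    return actions[classify_fruit(rgb)]
```

The real system would replace the hand-set thresholds with image processing over camera frames, but the perceive-classify-act structure is the same.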
A group of Swiss researchers from the Dalle Molle Institute for Artificial Intelligence, the University of Zurich, and NCCR Robotics has developed Artificial Intelligence software to teach a small quadrocopter to recognize and follow forest trails all by itself, staying low enough to avoid tree canopies. The Swiss team solved the problem using a so-called Deep Neural Network, a computer algorithm that learns to solve complex tasks from a set of "training examples," much like a brain learns from experience. Prof. Luca Maria Gambardella, director of the Dalle Molle Institute for Artificial Intelligence in Lugano, explains: "Many technological issues must be overcome before the most ambitious applications can become a reality. One day robots will work side by side with human rescuers to make our lives safer: this is a small but important step in that direction."
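The team's deep network maps a forward camera view to a steering decision learned from labelled training examples. As a hedged stand-in for that pipeline, the toy model below trains a one-layer softmax classifier on a single hand-made feature (the trail's horizontal position in the image) to choose between left, straight, and right; the feature, data, and model size are all illustrative, far smaller than the real convolutional network:

```python
# Toy illustration of learning a steering class from training examples.
# A single feature stands in for the camera image used in the real work.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "training examples": feature = horizontal trail position in the
# image, in [-1, 1]; label 0 = turn left, 1 = go straight, 2 = turn right.
x = rng.uniform(-1, 1, size=(300, 1))
y = np.where(x[:, 0] < -0.33, 0, np.where(x[:, 0] > 0.33, 2, 1))

# One-layer softmax classifier trained by gradient descent.
W = np.zeros((1, 3))
b = np.zeros(3)
onehot = np.eye(3)[y]
for _ in range(2000):
    logits = x @ W + b
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    grad = (p - onehot) / len(x)          # gradient of cross-entropy loss
    W -= 1.0 * (x.T @ grad)
    b -= 1.0 * grad.sum(axis=0)

def steer(trail_position):
    """Map a trail position in [-1, 1] to 'left', 'straight', or 'right'."""
    logits = np.array([trail_position]) @ W + b
    return ["left", "straight", "right"][int(np.argmax(logits))]
```

The point mirrors the article: nothing is hand-programmed about where to turn; the mapping from observation to action is learned entirely from labelled examples.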
The Toyota Research Institute (TRI) just announced its technology leadership team at CES. In November Toyota announced an initial five-year, $1 billion investment in TRI, a research and development enterprise designed to bridge the gap between fundamental research in robotics and artificial intelligence and product development. Some of TRI's specific mandates are to enhance the safety of automobiles, with the ultimate goal of creating a car that is incapable of causing a crash; to increase access to cars for those who otherwise cannot drive, including the handicapped and the elderly; to help translate outdoor mobility technology into products for indoor mobility; and to accelerate scientific discovery by applying techniques from artificial intelligence and machine learning. James Kuffner's research concerns path planning for obstacle avoidance, balance control, self-collision detection, and integrated sensor feedback systems.
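Kuffner is best known for sampling-based motion planners such as RRT-Connect, which operate in high-dimensional robot configuration spaces. The core problem those planners solve -- find a collision-free path from a start to a goal around obstacles -- can be illustrated with a far simpler breadth-first search on a toy 2D grid (everything below is illustrative, not TRI code):

```python
# Toy path planning with obstacle avoidance: breadth-first search on a grid.
# Real planners (e.g. RRT variants) work in continuous, high-dimensional
# spaces; the grid version just shows the find-a-collision-free-path problem.
from collections import deque

def plan_path(grid, start, goal):
    """Return a list of (row, col) cells from start to goal avoiding cells
    marked 1 in `grid`, or None if no collision-free path exists."""
    rows, cols = len(grid), len(grid[0])
    queue = deque([start])
    came_from = {start: None}        # backpointers for path reconstruction
    while queue:
        cell = queue.popleft()
        if cell == goal:             # goal reached: walk backpointers
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in came_from):
                came_from[(nr, nc)] = cell
                queue.append((nr, nc))
    return None

# 0 = free space, 1 = obstacle.
grid = [
    [0, 0, 0, 0],
    [1, 1, 1, 0],
    [0, 0, 0, 0],
    [0, 1, 1, 1],
]
```

Because breadth-first search expands cells in order of distance, the first path it finds around the wall is also a shortest one.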
Thanks to some truly extraordinary tires, riders flirt with the terminal edge of physics at screaming speeds during race laps, leaning their bikes far enough to scrape knees, shoulders, and elbows. The humanoid, which sits atop a motorcycle just like a human rider, has six actuators that enable it to operate a motorcycle's basic controls: steering, throttle, front brake, rear brake, clutch, and gearshift pedal. Based on data for vehicle speed, engine rpm, machine attitude, etc., MOTOBOT controls its actuators to autonomously operate the vehicle. Yamaha hopes this project will enable its engineers to visualize data about human motorcycle operation, deduce the relationship between rider input and machine behavior, and then use the resulting know-how in developing better, more responsive vehicles.
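Yamaha hasn't detailed MOTOBOT's control laws, but the loop the article describes -- read the vehicle's state, compute actuator commands -- is classic feedback control. Below is a deliberately simple proportional throttle controller run against a toy plant model; the gain, dynamics, and units are invented for illustration and bear no relation to Yamaha's actual design:

```python
# Hypothetical sketch of closed-loop actuator control: a proportional
# controller adjusts throttle from the measured speed error.

def throttle_command(target_speed, measured_speed, kp=0.05):
    """Proportional control: throttle fraction in [0, 1] from speed error."""
    error = target_speed - measured_speed
    return max(0.0, min(1.0, kp * error))

def simulate(target_speed, steps=200, dt=0.1):
    """Crude plant model: acceleration proportional to throttle, minus drag."""
    speed = 0.0
    for _ in range(steps):
        throttle = throttle_command(target_speed, speed)
        accel = 8.0 * throttle - 0.02 * speed   # toy dynamics, invented units
        speed += accel * dt
    return speed
```

A pure proportional controller settles slightly below the target (steady-state error); the real system would need integral action, plus coupled controllers for brakes, clutch, gearshift, and balance.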
A scanning robot from 4D Retail Technology can scan an entire grocery store in about an hour. Case in point: A company called 4D Retail Technology wants to ensure that no retail jockey will ever again have to endure the indignity of the scanner gun walk of shame. The company just announced something it's calling the 4D Space Genius, a robotic imaging platform powered by Segway that can scan any store in less than an hour, imaging every product and barcode in every aisle in ultra-high resolution and 3D. I'm spitballing here, but if a store manager uploaded a fresh scan of her store every morning and linked it with an online retail system, customers could theoretically go shopping online.
SAS on Tuesday marked the general release of SAS Factory Miner, an automated tool that uses machine learning techniques to develop, test and identify hundreds of best-fit predictive models within minutes. It helps companies with fine-grained segmentation by automating model building across hundreds of segments and, potentially, thousands of sub-segments. In addition, all models generate in-database/in-Hadoop-compatible score code, so they can be efficiently deployed and maintained with the help of integrated tools including SAS Model Manager and SAS Decision Manager. This company relies on automated Teradata digital marketing apps to ease the burden of all that campaign management and measurement, but a separate analytics staff develops and maintains all those models.
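Factory Miner's internals are proprietary, but the per-segment automation described above -- fit several candidate models to each segment, score them, keep the best -- is easy to sketch. The candidate models (a mean predictor and a least-squares line), the scoring metric, and the segments below are all illustrative:

```python
# Rough sketch of automated per-segment model selection: for each segment,
# fit every candidate model and keep the one with the lowest training error.

def fit_mean(xs, ys):
    """Baseline candidate: always predict the segment's mean response."""
    m = sum(ys) / len(ys)
    return lambda x: m

def fit_linear(xs, ys):
    """Candidate: ordinary least squares for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx if sxx else 0.0
    b = my - a * mx
    return lambda x: a * x + b

def sse(model, xs, ys):
    """Sum of squared errors of a fitted model on the segment's data."""
    return sum((model(x) - y) ** 2 for x, y in zip(xs, ys))

def best_fit_per_segment(segments):
    """segments: {name: (xs, ys)} -> {name: (model_name, fitted_model)}."""
    out = {}
    for name, (xs, ys) in segments.items():
        candidates = {"mean": fit_mean(xs, ys), "linear": fit_linear(xs, ys)}
        winner = min(candidates, key=lambda k: sse(candidates[k], xs, ys))
        out[name] = (winner, candidates[winner])
    return out
```

The real product adds many more model families, holdout-based scoring rather than training error, and generated score code for in-database deployment, but the select-the-champion-per-segment loop is the core idea.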
In particular, the researchers want to improve the performance of neural networks -- computational models for artificial intelligence inspired by the central nervous systems of animals. Artificial neural nets process information in one direction, from input nodes to output nodes. The CMU-led team will collaborate with another MICrONS team at the Wyss Institute for Biologically Inspired Engineering, led by George Church, professor of genetics at Harvard Medical School. In this MICrONS project, CMU researchers and their collaborators in other universities will use these massive databases to evaluate a number of computational and learning models as they improve their understanding of the brain's computational principles and reverse-engineer the data to build better computer algorithms for learning and pattern recognition.
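The one-directional flow mentioned above is the defining trait of a feedforward network: each layer's output feeds the next, with no cycles back toward the input. A minimal hand-wired example, with weights chosen by hand (purely for illustration) so the network computes XOR:

```python
# Minimal feedforward pass: input -> hidden layer (ReLU) -> output.
# Information flows one way only; weights here are fixed by hand.
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

def forward(x, W1, b1, W2, b2):
    """One forward pass through a 2-layer network."""
    hidden = relu(x @ W1 + b1)   # hidden-layer activations
    return hidden @ W2 + b2      # output-layer values

# A 2-input, 2-hidden, 1-output network wired to compute XOR.
W1 = np.array([[1.0, 1.0],
               [1.0, 1.0]])
b1 = np.array([0.0, -1.0])
W2 = np.array([[1.0],
               [-2.0]])
b2 = np.array([0.0])
```

In a trained network these weights would be learned from data rather than set by hand; the brain-inspired models the MICrONS teams study add feedback and recurrent connections that this strictly one-way architecture lacks.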
The company posted record revenue for the quarter, and once again credited strong sales of its GPUs and deep learning technology for the boost to its balance sheet. "We have made significant investments over the past five years to evolve our entire GPU computing stack for deep learning," the company said. Non-GAAP earnings were 53 cents per share on revenue of $1.43 billion, up 24 percent year-over-year. Last spring, CEO Jen-Hsun Huang unveiled several new technologies for advancing deep learning at the GPU Technology Conference.
Nvidia CEO Jen-Hsun Huang said the partnership illustrates the commitment both companies have made to advancing the use-cases of AI. The partnership combines Nvidia's self-driving computing platform with Baidu's cloud and mapping technology to develop an algorithm-based operating system capable of powering complex navigation systems in autonomous vehicles. "Baidu has already built a strong team in Silicon Valley to develop autonomous driving technologies, and being able to do road tests will greatly accelerate our progress," said Wang Jing, general manager of Baidu's Autonomous Driving Unit, in a statement.
Not surprisingly, the company's approach has largely been informed by the impressive open-source robotics pedigree of Wise, who got her start at Willow Garage, developer of the now-ubiquitous open-source Robot Operating System (ROS). Used in conjunction with Fetch's autonomous cart, nicknamed "Freight," the system can automate pick-and-place processes in fulfillment warehouses without requiring costly reconfiguration or setup. That's a similar approach to the one Wise used at Willow Garage when she helped develop TurtleBot, an open-source mobile robotic platform designed to be developed into a useful system by the end users -- in essence, a blank slate. New growth in the industrial robotics market, which is set to reach $44.45B in sales by 2020, will be driven not by companies that make quality robots for different sectors, but by companies making highly flexible robots that empower outside developers to experiment and adapt systems to new industries.