Nvidia's plan to turn data from 500 million cameras into AI gold

#artificialintelligence

Video is the world's largest source of data, generated every day by more than 500 million cameras worldwide, a number slated to double by 2020. The potential, if we could actually analyze that data, is off the charts: it comes from government property and public transit, commercial buildings, roadways, traffic stops, retail locations, and more. The result would be what NVIDIA calls AI Cities: a thinking robot with billions of eyes trained on residents and programmed to help keep people safe.


This Robotics Startup Wants to Be the Boston Dynamics of China

IEEE Spectrum Robotics Channel

Of all the legged robots built in labs all over the world, few inspire more awe and reverence than Boston Dynamics' quadrupeds. Chinese roboticist Xing Wang has long been a fan of BigDog, AlphaDog, Spot, SpotMini, and the other robots Boston Dynamics has famously introduced over the years. "Marc Raibert … is my idol," Wang once told us about the founder and president of Boston Dynamics. Now Wang, with funding from a Chinese angel investor, has founded his own robotics company, Unitree Robotics, based in Hangzhou, near Shanghai. Wang says his plan is to make legged robots as popular and affordable as smartphones and drones.


Banking on Big Data -- Environmental Protection

#artificialintelligence

Sophisticated tools capable of collecting and analyzing massive data sets, then displaying the results in visual form, are no longer an option; they are becoming a necessity. On a daily basis, thousands upon thousands of monitoring stations around the world collect vast quantities of air quality data for use in spotting pollution problems, analyzing air quality trends, and guiding effective responses. To date, these monitoring stations have served as digital eyes and ears trained on the planet's atmosphere. But all of that seems likely to change in the not-too-distant future: evolving networks of air sensors, just now beginning to be deployed around the globe, will produce an avalanche of data with the very real potential to overwhelm those trying to make sense of it.


Collaborative robots on track to generate $9.27 billion by 2025

ZDNet

According to a new analysis by Inkwood Research, the global market for collaborative robots is on track to generate a net revenue of about $9.27 billion by 2025. Many so-called cobots cost around $30K. There's increasing demand for small, flexible robotic platforms in numerous industries. Other factors responsible for the surging market and stiffening competition include widening applications of collaborative robots, falling sensor and platform prices, and heavy investments by robotics companies over the past decade in research & development.


My summer project: a rock-paper-scissors machine built on TensorFlow (Google Cloud Big Data and Machine Learning Blog, Google Cloud Platform)

@machinelearnbot

It runs a very simple machine learning (ML) algorithm that detects your hand posture through an Arduino microcontroller connected to the glove. The Arduino module converts the input signal voltage (0 V - 5 V) to numbers ranging from 0 to 1023. A linear model can transform raw input data into a feature space with a different axis for each feature you want to capture, so that the transformed data is much easier to handle. In this example, we supplied glove sensor data (glove1, 2, 3) along with the expected results (rock, paper, or scissors).
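As a rough sketch of what such a linear model could look like in TensorFlow (an illustrative reconstruction, not the project's actual code; the sensor values and labels below are made up):

    # Illustrative sketch, not the project's actual code: a linear (single
    # softmax layer) classifier mapping three glove-sensor readings to
    # rock/paper/scissors. Training data below is invented for the example.
    import numpy as np
    import tensorflow as tf

    # Each row is (glove1, glove2, glove3): the Arduino's 0-1023 ADC counts
    # scaled into [0, 1]. High values = finger curled, low = finger open.
    X = np.array([[0.95, 0.93, 0.96],   # all fingers curled      -> rock
                  [0.05, 0.04, 0.06],   # all fingers open        -> paper
                  [0.06, 0.05, 0.94]],  # two open, one curled    -> scissors
                 dtype=np.float32)
    y = np.array([0, 1, 2])             # 0 = rock, 1 = paper, 2 = scissors

    # A single dense layer with softmax is the "linear model" described above:
    # it projects the raw sensor vector into a space where classes separate.
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(3,)),
        tf.keras.layers.Dense(3, activation="softmax"),
    ])
    model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.5),
                  loss="sparse_categorical_crossentropy")
    model.fit(X, y, epochs=500, verbose=0)

    print(model.predict(X).argmax(axis=1))  # expected: [0 1 2]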


Why swallowable robots could be the future of healthcare

ZDNet

The idea grew out of a conversation between professor of chemical engineering Mikhail Shapiro and professor of electrical engineering Azita Emami. The pair's ATOMS (addressable transmitters operated as magnetic spins) devices borrow principles from magnetic resonance imaging, better known as MRI. The integrated chips that power ATOMS devices resonate at different frequencies under a magnetic field gradient, revealing their location as they move through the body. Among the challenges in translating that idea into a fully functional ATOMS device was making it small enough to move through the tiniest structures of the human body yet big enough to pack in all the features it needs.
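To make the MRI analogy concrete, here is a toy illustration (my own sketch, with assumed field values, not the Caltech design) of how a field gradient turns resonance frequency into position:

    # Toy illustration (assumed field values, not the Caltech design): under a
    # magnetic field gradient, resonance frequency encodes position -- the MRI
    # principle that ATOMS devices borrow.
    GAMMA = 42.58e6   # proton gyromagnetic ratio, Hz per tesla
    B0 = 1.0          # baseline field strength in tesla (assumption)
    G = 0.01          # field gradient in tesla per metre (assumption)

    def resonant_frequency(x: float) -> float:
        """Larmor frequency of a device sitting x metres along the gradient."""
        return GAMMA * (B0 + G * x)

    def locate(freq: float) -> float:
        """Invert the relation: recover position from a measured frequency."""
        return (freq / GAMMA - B0) / G

    f = resonant_frequency(0.05)        # device 5 cm along the gradient
    print(f"{locate(f) * 100:.1f} cm")  # -> 5.0 cm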


AI, data, sensors & robotics set to shape manufacturing industry. Fri 13 Oct: SMAS Future Manufacturing Conference 2017

#artificialintelligence

Find out what the future of manufacturing looks like at the SMAS Future Manufacturing Conference 2017, and get practical support and advice on how you can start to leverage the opportunities of Industry 4.0. With manufacturing at a crossroads of challenges and opportunities, how will these smart technologies give your business competitive advantage, improve productivity, shorten product development cycles and produce products as efficiently and cost effectively as possible? What will it mean for your workforce as the number of routine jobs decreases while the number of higher-value jobs grows? The SMAS Future Manufacturing Conference 2017 will focus on four key themes during the packed one-day programme: Technology in Manufacturing – investing in a factory fit for the future with automation, augmented & virtual reality, sensors, additive manufacturing and cloud computing.


Waymo is the first company to give a detailed self-driving safety report to federal officials

Los Angeles Times

The National Highway Traffic Safety Administration has suggested a set of 28 "behavioral competencies," or basic things an autonomous vehicle should be able to do. Some are exceedingly basic ("detect and respond to stopped vehicles," "navigate intersections and perform turns"); others, more intricate ("respond to citizens directing traffic after a crash"). "This overview of our safety program reflects the important lessons learned through the 3.5 million miles Waymo's vehicles have self-driven on public roads, and billions of miles of simulated driving, over the last eight years," Waymo Chief Executive John Krafcik said in a letter Thursday to U.S. Transportation Secretary Elaine Chao. "You can't expect to program the car for everything you're possibly going to see," said Ron Medford, Waymo's safety director and a former senior National Highway Traffic Safety Administration official.


How do we use AI for our automated vehicles? Very carefully.

#artificialintelligence

Specifically, how does Artificial Intelligence (AI) play a part in the development of autonomous vehicle technology (AVT)? There are three main types of sensors: vision (cameras), radar, and LiDAR. You can't write a rule for every situation the car will encounter, and AI helps solve those corner cases. This hybrid approach, using AI-based modules within a deterministic framework, gives us the best of both worlds: clear, generalized rules and policies governing the overall behavior of the vehicle, and AI-based algorithms to help solve the most complex corner cases.
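A minimal sketch of what such a hybrid arrangement could look like (my own illustration of the idea, with invented rule and scene names, not any vehicle's actual software): deterministic rules are consulted first, and a learned model handles only the scenes no rule covers.

    # Minimal sketch (invented names, not any vehicle's actual software) of the
    # hybrid approach described above: deterministic rules govern overall
    # behavior; a learned model is consulted only for unhandled corner cases.
    from typing import Optional

    def rule_based_policy(scene: dict) -> Optional[str]:
        """Hand-written rules: clear, auditable behavior for known situations."""
        if scene.get("red_light"):
            return "stop"
        if scene.get("pedestrian_in_path"):
            return "brake"
        return None  # no rule covers this scene

    def learned_policy(scene: dict) -> str:
        """Placeholder for an ML model trained on corner cases."""
        return "slow_and_assess"

    def decide(scene: dict) -> str:
        action = rule_based_policy(scene)
        return action if action is not None else learned_policy(scene)

    print(decide({"red_light": True}))       # -> stop (deterministic rule)
    print(decide({"debris_on_road": True}))  # -> slow_and_assess (learned model)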


The Difference Between Artificial Intelligence, Machine Learning, and Deep Learning (IoT For All)

#artificialintelligence

The difference between artificial intelligence, machine learning, and deep learning can be very unclear. I'll begin with a quick explanation of what Artificial Intelligence (AI), Machine Learning (ML), and Deep Learning (DL) actually mean and how they differ. Our brains take in that data and make sense of it, turning light into recognizable objects and sounds into understandable speech. As mentioned above, machine learning and deep learning require massive amounts of data to work, and that data is being collected by the billions of sensors coming online in the Internet of Things.