When we come into this world, it takes us three years or less to build up in our brain a model box with which to reconstruct in our mind real-time representations of the reality immediately surrounding us. We are then able to speak and be conscious of that reality, act purposefully, and start laying down a personal history in memory. The raw amount of sensory information we have taken in by that time is less than 200 million sensory patterns -- counting a handful per waking second. A decent nursery could probably be recreated by a compact virtual-reality system no bigger than the human genome. Compare this to the flood of information needed to train a deep learning system on a single task such as classifying objects in photographs, each into one of a thousand classes.
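The 200 million figure is a back-of-the-envelope estimate; a minimal sketch of the arithmetic, assuming roughly 16 waking hours per day over three years and taking "a handful" to mean three patterns per second (both assumptions, not stated in the original):

```python
# Back-of-the-envelope check of the "< 200 million sensory patterns" claim.
# Assumed (not from the source): 16 waking hours/day, 3 patterns/second.
years = 3
waking_hours_per_day = 16
patterns_per_second = 3  # "a handful"

waking_seconds = years * 365 * waking_hours_per_day * 3600
total_patterns = waking_seconds * patterns_per_second

print(f"{total_patterns:,}")  # roughly 189 million, under the 200 million bound
```

Under these assumptions the total comes out just below 200 million, consistent with the claim; a more generous "handful" (four or five per second) would push the estimate somewhat above it.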
Google's strategic move into selling its own branded mobile phones is another step in the merging of software and hardware that Apple, Microsoft, Amazon and, more recently, Facebook have pursued in shaping the "Internet of Things" era. The critical issue is not just providing the software and operating system but capturing more of the value in the devices that become the interface to the customer: the smartphone, the tablet/laptop hybrid of Microsoft Surface, the smart speaker of Amazon Echo with Alexa, and the Facebook Oculus Rift and Microsoft HoloLens headsets that are the new foundations of natural-language speech recognition services and of the virtual reality (VR) and augmented reality (AR) wave breaking now and into 2017 and beyond. Google's long-term market is changing: advertising revenue from search engines, while still strong, now faces new ways to search via speech, image recognition and virtual interaction. Google has perhaps been late to realize that the shift to combined software and hardware is where the Internet of Things may be shaping the market, with the connected home, connected car and connected workplace reached through these devices. It is all about market making beyond the big cloud data centers and big data analytics: how to build out the edge of the cloud network with potentially billions of connected sensors and devices. If the mobile phone is becoming the "remote control to this world" and platforms the "fabric of social networks and connected experiences," then Google, like others, is rushing into this space with stronger combined software and hardware offerings.
CARY – Oliver Schabenberger, to use a military term embraced by industry, is "dual hatted." As chief operating officer as well as chief technology officer at SAS, he serves as No. 2 to CEO and co-founder Jim Goodnight. So no one other than Goodnight has a better hands-on, inside view of the 2018 landscape and beyond for the global software firm that is one of the world's biggest big data juggernauts, with an industry-leading emphasis on analytics. And a big part of that future is the Internet of Things: SAS is moving to create a division focused on IoT, which is transforming tech around the world and is forecast as a multi-trillion-dollar opportunity by Cisco as well as other firms.
Data analysis and visualization startup Virtualitics LLC just announced the launch of a new tool that will allow researchers to better understand data using virtual reality and augmented reality. The new tool, unveiled Thursday, combines VR and AR with machine learning and big data analysis to let data scientists immerse themselves in their data and discover insights hidden in complex data sets. The company also announced that it just closed a $3 million seed funding round from angel investors. "Big Data is worthless if we cannot extract actionable knowledge from it," said Michael Amori, chief executive officer of Virtualitics. "Visualization can reveal the knowledge hidden in data, but traditional 2D and 3D data visualizations are inadequate for large and complex data sets."
SAN ANTONIO--In the warzones of the future, medics touching down amid heavy battlefield casualties will know who to treat first, how to approach every injury, and even who is most likely to live or die -- all before looking at a single wounded soldier. That's the vision of Col. (and Dr.) Jerome Buller, who leads the U.S. Army Institute of Surgical Research. Buller says biometric data gleaned from soldier-borne sensors, combined with in-depth medical and training data and augmented reality lenses, will help medics in combat evaluate the battlefield and everyone in it from a safe distance. They will make their most important decisions before even seeing their patients. "Imagine that [the hypothetical future] medic is able to scan the battlefield and instead of seeing rubble, he's seeing red or green dots, or amber dots, and he knows where to apply resources or not," Buller said during the Defense One and NextGov Genius Machines event here on Wednesday.