If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
Tinder is adding a 'panic button' to its app that will allow people to alert the police if they feel unsafe while out on a date. It will be rolled out to users of the dating service in the USA from the end of January, according to a Wall Street Journal report. The feature is built by Noonlight, a company whose technology tracks users' locations and notifies authorities of any safety issues. Tinder has not said when, or if, the service will be rolled out to the rest of the world. 'You should run a dating business as if you are a mom,' Mandy Ginsberg, CEO of Tinder parent company Match Group, told the Wall Street Journal.
Anthony Levandowski makes an unlikely prophet. Dressed Silicon Valley-casual in jeans and flanked by a PR rep rather than cloaked acolytes, the engineer known for self-driving cars, and for triggering a notorious lawsuit, could be unveiling his latest startup instead of laying the foundations for a new religion. But he is doing just that. Artificial intelligence has already inspired billion-dollar companies, far-reaching research programs, and scenarios of both transcendence and doom. Now Levandowski is creating its first church.
Uber AI Labs has developed an algorithm called Generative Teaching Networks (GTN) that produces synthetic training data for neural networks, allowing them to be trained faster than when using real data. Using this synthetic data, Uber sped up its neural architecture search (NAS) deep-learning optimization process by 9x. In a paper published on arXiv, the team described the system and a series of experiments. GTN is motivated by NAS, which trains many different deep-learning model structures and selects the one that performs best on a set of test data. A typical approach trains each model on the full data set for multiple iterations (or epochs), which is time-consuming and expensive.
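The selection loop behind NAS, and the cheap-proxy idea GTN exploits (rank candidates on a small amount of learned synthetic data, then fully train only the winner), can be sketched in a few lines of plain Python. Everything here is illustrative: the "architectures" are just hidden widths, and `train_and_evaluate` is a toy stand-in, not anything from Uber's paper.

```python
import random

random.seed(0)

def train_and_evaluate(width, epochs):
    # Toy proxy for validation accuracy: improves with model width and
    # training epochs, with diminishing returns and a little noise.
    base = 1.0 - 1.0 / (1 + 0.1 * width * epochs)
    return base + random.uniform(-0.005, 0.005)

def naive_nas(candidates, epochs=10):
    # Typical NAS: fully train every candidate, then keep the best one.
    scores = {w: train_and_evaluate(w, epochs) for w in candidates}
    return max(scores, key=scores.get), epochs * len(candidates)

def gtn_style_nas(candidates, proxy_epochs=1, full_epochs=10):
    # GTN-style shortcut: a cheap proxy (here, far fewer training steps,
    # standing in for training on compact synthetic data) ranks the
    # candidates; only the winner gets a full training run.
    scores = {w: train_and_evaluate(w, proxy_epochs) for w in candidates}
    best = max(scores, key=scores.get)
    return best, proxy_epochs * len(candidates) + full_epochs

candidates = [8, 16, 32, 64]
best_full, cost_full = naive_nas(candidates)     # cost: 40 epoch-units
best_proxy, cost_proxy = gtn_style_nas(candidates)  # cost: 14 epoch-units
print(best_full, cost_full)
print(best_proxy, cost_proxy)
```

The point of the sketch is the cost accounting: both loops pick the same architecture, but the proxy-based loop spends a fraction of the training budget, which is where the reported 9x speedup comes from.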
The world's first self-driving electric-powered ride-sharing vehicle is here, but no word on when you'll actually be able to app-hail this robotaxi. Cruise, the self-driving car division of General Motors, unveiled the Origin on Tuesday night in a former Honda dealership just south of downtown. The six-passenger vehicle looks a bit like a small bus, has no steering wheel or pedals, and offers a cavernous area where two rows of three passengers face each other. In introducing the vehicle, Cruise CEO Dan Ammann, a former president of GM, told a crowd made up mostly of company employees that the Origin "is a production vehicle," adding that an announcement about where and when manufacturing will begin is coming soon. Kyle Vogt, Cruise's co-founder who sold the company to GM in 2016 for $1 billion and now serves as chief technology officer, said that being the first automotive or tech company to introduce a dedicated autonomous ride-sharing car doesn't guarantee success.
Artificial intelligence (AI) has been an atypical technology trend. In a traditional technology cycle, innovation typically begins with startups trying to disrupt industry incumbents. In the case of AI, most of the innovation in the space has come from the big corporate labs of companies like Google, Facebook, Uber, and Microsoft. Those companies are not only leading impressive research programs but also regularly open sourcing new frameworks and tools that streamline the adoption of AI technologies. In that context, Uber has emerged as one of the most active contributors to open source AI technologies in the current ecosystem.
Public streets could get a fresh look via the world of autonomous vehicles in roughly the next two years. Self-driving technology supplier Mobileye CEO Amnon Shashua told CNBC Friday that the first phase of autonomous driving will come in the form of cab services. "Robotaxi is not that far away," he said in a "Mad Money" interview. "We are targeting early 2022." The roll-out of self-driving cars must be marketed to fleet operators before it will be available to the general public, Shashua explained to show host Jim Cramer.
With the likes of Uber, Amazon, and Deliveroo changing the way we live, shop, work and consume content, innovation is happening faster than ever before. In light of economic uncertainty, it's become even more vital for businesses to deploy cutting-edge technology to maintain competitiveness. Over the course of the next year, board-level conversations will be dominated by ways to ensure a seamless customer experience, formulating tactics to embrace disruptive technologies, as well as grappling with the implications of the future workplace. Consumers can now order a meal, book a taxi and do their shopping with a few clicks of a button, without even leaving their living rooms. As a result, customers increasingly expect services to be 'Apple Easy' and 'Google Fast' in all aspects of their lives, demanding quick and seamless experiences across the board.
In January 2019, Uber introduced Manifold, a model-agnostic visual debugging tool for machine learning that we use to identify issues in our ML models. To give other ML practitioners the benefits of this tool, today we are excited to announce that we have released Manifold as an open source project. Manifold helps engineers and scientists identify performance issues across ML data slices and models, and diagnose their root causes by surfacing feature distribution differences between subsets of data. At Uber, Manifold has been part of our ML platform, Michelangelo, and has helped various product teams at Uber analyze and debug ML model performance. Since highlighting this project on the Uber Eng Blog earlier this year, we have received a lot of feedback from the community regarding its potential in general purpose ML model debugging scenarios.
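The core diagnostic idea described above, surfacing how a feature's distribution differs between a low-performing and a high-performing slice of the data, can be illustrated with a small stand-alone sketch. This is not Manifold's actual API; the slices, the feature values, and the binning are all hypothetical, and a simple total variation distance stands in for Manifold's visual comparison.

```python
from collections import Counter

def distribution(values, bins):
    # Histogram of feature values in [0, 1) as normalized bin frequencies.
    counts = Counter(min(int(v * bins), bins - 1) for v in values)
    total = len(values)
    return [counts.get(b, 0) / total for b in range(bins)]

def total_variation(p, q):
    # Total variation distance between two histograms:
    # 0 = identical distributions, 1 = completely disjoint.
    return 0.5 * sum(abs(a - b) for a, b in zip(p, q))

# Hypothetical values of one feature for two slices of the evaluation data:
# rows where the model did well vs. rows where it did poorly.
good_slice = [0.1, 0.15, 0.2, 0.25, 0.3, 0.35]
bad_slice = [0.7, 0.75, 0.8, 0.85, 0.9, 0.95]

p = distribution(good_slice, bins=4)
q = distribution(bad_slice, bins=4)
print(round(total_variation(p, q), 3))  # large gap -> feature worth inspecting
```

A large distance between the two histograms flags the feature as a candidate root cause of the performance gap, which is the kind of signal Manifold presents graphically across slices and models.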
This terrible accident happened late in the night on March 18, 2018. An Uber self-driving car, running in autonomous mode with a safety driver behind the wheel, hit and killed a woman in Tempe, Arizona. You can find the detailed investigation results here. Dash-cam and internal driver-seat camera footage show that the accident happened on a poorly lit road with a speed limit of 40 mph, and that the safety driver was watching her cellphone (possibly watching Hulu) right before the car hit the woman.
According to a study carried out in the United States by the McKinsey Global Institute, an estimated 30% of the work that humans currently do will be replaced by machines with artificial intelligence (AI) by 2030. In other words, the development of artificial intelligence would represent the loss of 70 million jobs in the United States alone. Machines are expected to be better than humans at countless tasks, both manual and cognitive. Automation will produce far fewer errors, substantially improving the productivity and quality of goods and services sold by companies. The McKinsey Global Institute researchers estimate that, by the same year, demand for administrative staff will fall by 20% and demand for personnel performing physical work by 30%, especially in the manufacturing, construction, and food industry sectors, among others.