If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
DAVOS, Switzerland (Reuters) - Sundar Pichai, the CEO of Alphabet Inc and its Google subsidiary, said on Wednesday that healthcare offers the biggest potential over the next five to 10 years for using artificial intelligence to improve outcomes, and vowed that the technology giant will heed privacy concerns. U.S. lawmakers have raised questions about Google's access to the health records of tens of millions of Americans. Ascension, which operates 150 hospitals and more than 50 senior living facilities across the United States, is one of Google's biggest cloud computing customers in healthcare. "When we work with hospitals, the data belongs to the hospitals," Pichai told a conference panel at the World Economic Forum in Davos, Switzerland. "But look at the potential here. Cancer is often missed and the difference in outcome is profound. In lung cancer, for example, five experts agree this way and five agree the other way. We know we can use artificial intelligence to make it better," Pichai added.
The algorithm lets robots find the shortest route in unfamiliar environments, opening the door to robots that can work inside homes and offices. The news: A team at Facebook AI has created a reinforcement learning algorithm that lets a robot find its way in an unfamiliar environment without using a map. Using just a depth-sensing camera, GPS, and compass data, the algorithm gets a robot to its goal 99.9% of the time along a route that is very close to the shortest possible path, which means no wrong turns, no backtracking, and no exploration. This is a big improvement over previous best efforts. Why it matters: Mapless route-finding is essential for next-gen robots like autonomous delivery drones or robots that work inside homes and offices.
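The setup described above can be illustrated with a toy point-goal task: an agent in an unmapped grid world observes only its offset to the goal (a stand-in for GPS plus compass) and learns where to move by reinforcement. This is a minimal, stdlib-only sketch using tabular Q-learning; Facebook's actual system trains a deep neural policy on depth-camera images, and the grid size, rewards, and state encoding here are all illustrative assumptions.

```python
import random
from collections import defaultdict

# Toy point-goal navigation: the agent never sees a map of the 8x8
# grid, only its relative offset to the goal (like GPS + compass).
ACTIONS = [(1, 0), (-1, 0), (0, 1), (0, -1)]  # +x, -x, +y, -y
SIZE = 8

def train(episodes=3000, alpha=0.5, gamma=0.95, eps=0.2, seed=0):
    rng = random.Random(seed)
    q = defaultdict(float)  # ((goal_dx, goal_dy), action) -> value
    for _ in range(episodes):
        pos = (rng.randrange(SIZE), rng.randrange(SIZE))
        goal = (rng.randrange(SIZE), rng.randrange(SIZE))
        for _ in range(4 * SIZE):  # per-episode step budget
            if pos == goal:
                break
            state = (goal[0] - pos[0], goal[1] - pos[1])
            if rng.random() < eps:  # epsilon-greedy exploration
                a = rng.randrange(4)
            else:
                a = max(range(4), key=lambda i: q[(state, i)])
            dx, dy = ACTIONS[a]
            nxt = (min(max(pos[0] + dx, 0), SIZE - 1),
                   min(max(pos[1] + dy, 0), SIZE - 1))
            nstate = (goal[0] - nxt[0], goal[1] - nxt[1])
            reward = 10.0 if nxt == goal else -1.0  # -1 per step
            best_next = max(q[(nstate, i)] for i in range(4))
            q[(state, a)] += alpha * (reward + gamma * best_next
                                      - q[(state, a)])
            pos = nxt
    return q

def run(q, start, goal, max_steps=64):
    """Follow the learned greedy policy; return (reached, steps)."""
    pos, steps = start, 0
    while pos != goal and steps < max_steps:
        state = (goal[0] - pos[0], goal[1] - pos[1])
        a = max(range(4), key=lambda i: q[(state, i)])
        dx, dy = ACTIONS[a]
        pos = (min(max(pos[0] + dx, 0), SIZE - 1),
               min(max(pos[1] + dy, 0), SIZE - 1))
        steps += 1
    return pos == goal, steps

q = train()
reached, steps = run(q, (0, 0), (7, 7))
```

The step penalty is what pushes the learned policy toward short routes; "no backtracking" in the real system corresponds to the policy taking near-shortest paths without the exploration detours this toy agent needs during training.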
KNIME, a unified software platform for creating and productionizing data science, announced the availability of KNIME on AWS, its commercial offering for productionizing artificial intelligence (AI)/machine learning (ML) solutions on Amazon Web Services (AWS). KNIME on AWS is designed to allow customers to assemble and deploy ML solutions across the enterprise at scale and securely on AWS and to gain tangible value quickly. The offering is now featured in AWS Marketplace, including free trials. Many enterprises seek to create value by deploying ML and AI solutions but can lack the data scientists, data platform engineers, experience, money and time necessary to make a meaningful impact quickly. The result is that teams and individuals lacking this set of highly technical skills are left out of the innovation loop and are unable to realize the potential that their data offers.
Facebook has scored an impressive feat: AI that can navigate without any map. Facebook's wish for bragging rights, although it said it has a way to go, was evident in its blog post, "Near-perfect point-goal navigation from 2.5 billion frames of experience." Long story short, Facebook has delivered an algorithm that, quoting MIT Technology Review, "lets robots find the shortest route in unfamiliar environments, opening the door to robots that can work inside homes and offices." In the same plain-and-simple vein, Ubergizmo's Tyler Lee remarked: "Facebook believes that with this new algorithm, it will be capable of creating robots that can navigate an area without the need for maps...in theory, you could place a robot in a room or an area without a map and it should be able to find its way to its destination." In the Jan. 21 Facebook post, Erik Wijmans and Abhishek Kadian said one of the technology's key challenges is "teaching these systems to navigate through complex, unfamiliar real-world environments to reach a specified destination--without a preprovided map." Facebook has taken on the challenge. The two announced that Facebook AI created a large-scale distributed reinforcement learning algorithm called DD-PPO, short for decentralized distributed proximal policy optimization, "which has effectively solved the task of point-goal navigation using only an RGB-D camera, GPS, and compass data," they wrote. This is what Facebook is using to train agents, and the results in virtual environments such as houses and office buildings were encouraging. The bloggers pointed out that "even failing 1 out of 100 times is not acceptable in the physical world, where a robot agent might damage itself or its surroundings by making an error." Beyond DD-PPO, the authors credited Facebook AI's open-source AI Habitat platform for its "state-of-the-art speed and fidelity."
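The "decentralized distributed" part of DD-PPO means there is no central parameter server: every worker computes a gradient from its own rollouts, the gradients are averaged across workers (an all-reduce), and each worker applies the identical averaged update. Here is a stdlib-only sketch of that pattern; the gradients are made-up numbers standing in for real PPO gradients, and real systems would use a collective-communication library rather than plain lists.

```python
# Sketch of decentralized gradient averaging: workers sync by
# averaging gradients with each other, not via a central server.

def allreduce_mean(grads):
    """Elementwise average of per-worker gradients (simulated all-reduce)."""
    n = len(grads)
    return [sum(col) / n for col in zip(*grads)]

def train_step(params, worker_grads, lr=0.1):
    """Every worker applies the same averaged gradient, so all
    replicas of the policy stay in sync without a coordinator."""
    g = allreduce_mean(worker_grads)
    return [p - lr * gi for p, gi in zip(params, g)]

params = [1.0, -2.0]  # stand-in policy parameters
# Pretend four workers each produced a gradient from local rollouts.
worker_grads = [[0.2, -0.4], [0.4, -0.2], [0.1, -0.3], [0.3, -0.1]]
params = train_step(params, worker_grads)
```

Because the update is symmetric, adding workers scales up experience collection without changing the learning logic, which is how training on billions of frames becomes feasible.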
AI Habitat was open-sourced last year as a simulation platform for training embodied agents, such as virtual robots, in photo-realistic 3-D environments. Facebook said it was part of "Facebook AI's ongoing effort to create systems that are less reliant on large annotated data sets used for supervised training." InfoQ noted in July that the technology takes a different approach from the static data sets other researchers have traditionally relied on, and that Facebook open-sourced it to move the subfield forward. Jon Fingas in Engadget looked at how the team worked toward AI navigation (and this is where that 2.5 billion number comes in). "Previous projects tend to struggle without massive computational power.
MADS East 2019 was a two-day conference in December that gave attendees endless opportunities to explore new ideas in data science for marketing. This year's conference perks included tables for one-on-one networking, a half-hour off-the-record roundtable with seven industry leaders, two unique tracks per day, buffet-style lunches, breakfasts, snacks, a refreshing cocktail break at the Opening Night Party, and NYC Times Square views. This article is my summary of the Day 1 presentations I was able to attend, including lessons and reminders from the speakers. Aside from staying up to date on industry trends, MADS East has also proven itself a valuable opportunity for data and marketing people looking to engage with professionals at varying career levels. I expected to be the only attendee with little background in data or extended industry experience, but to my surprise there was a decent balance of early-, mid- and late-career attendees.
Way back in May 2011, Eric Schmidt, who was then the executive chairman of Google, said that the rapid development of facial recognition technology had been one of the things that had surprised him most in a long career as a computer scientist. But its "surprising accuracy" was "very concerning". Questioned about this, he said that a database using facial recognition technology was unlikely to be a service that the company would create, but went on to say that "some company … is going to cross that line". As it happens, Dr Schmidt was being economical with the actualité, as the MP Alan Clark used to say. He must surely have known that a few months earlier Facebook had announced that it was using facial recognition in the US to suggest names while tagging photos.
Training an artificial intelligence agent to do something like navigate a complex 3D world is computationally expensive and time-consuming. In order to better create these potentially useful systems, Facebook engineers derived huge efficiency benefits from, essentially, leaving the slowest of the pack behind. It's part of the company's new focus on "embodied AI," meaning machine learning systems that interact intelligently with their surroundings. That could mean lots of things -- responding to a voice command using conversational context, for instance, but also more subtle things like a robot knowing it has entered the wrong room of a house. Exactly why Facebook is so interested in that I'll leave to your own speculation, but the fact is they've recruited and funded serious researchers to look into this and related domains of AI work.
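"Leaving the slowest of the pack behind" refers to straggler preemption: once a set fraction of workers have finished collecting their rollout quota, the remaining slow workers are told to stop and contribute whatever partial rollouts they have, so the whole group is never gated on the slowest machine. This is a stdlib-only simulation of that cutoff; the worker speeds, quota, and threshold are illustrative numbers, not Facebook's settings.

```python
# Simulate straggler preemption: fast workers finish their quota,
# and once a threshold fraction has finished, the rest stop early.

def collect_with_preemption(steps_per_sec, quota=128, threshold=0.6):
    """Return how many steps each worker actually collects.

    steps_per_sec: simulated collection speed of each worker.
    threshold: fraction of workers that must finish their quota
               before the stragglers are preempted.
    """
    finish_times = sorted(quota / s for s in steps_per_sec)
    n_done = max(1, int(len(steps_per_sec) * threshold))
    cutoff = finish_times[n_done - 1]  # moment the threshold is hit
    return [min(quota, int(s * cutoff)) for s in steps_per_sec]

# Seven fast workers and one very slow straggler.
speeds = [100, 100, 100, 100, 100, 100, 100, 10]
collected = collect_with_preemption(speeds)
```

Without preemption, every synchronization step would take as long as the slowest worker (12.8 seconds here instead of 1.28), so cutting stragglers off trades a small amount of experience for a large wall-clock speedup.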
It's one thing to develop a working machine learning model; it's another to put it to work in an application. Cortex Labs is an early-stage startup with some open-source tooling designed to help data scientists take that last step. The company's founders were students at Berkeley when they observed that one of the problems around creating machine learning models was finding a way to deploy them. While there was a lot of open-source tooling available, data scientists are not experts in infrastructure. CEO Omer Spillinger says that infrastructure was something the four members of the founding team -- himself, CTO David Eliahu, head of engineering Vishal Bollu and head of growth Caleb Kaiser -- understood well.
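The gap such tooling fills can be sketched with the predictor-class pattern: the data scientist supplies a small class that loads a model and answers prediction requests, and the deployment tool wraps it in a served API and handles the infrastructure. The class and method names below follow that common pattern but are illustrative assumptions, not a guaranteed match for Cortex's exact interface, and the hard-coded linear "model" is a stand-in for real trained weights.

```python
# Illustrative predictor-class pattern for model deployment tools:
# __init__ loads the model once, predict() handles one request.

class PythonPredictor:
    def __init__(self, config):
        # A real predictor would load weights from, e.g.,
        # config["model_path"]; here a tiny linear scorer stands in.
        self.weights = config.get("weights", [0.5, -0.2])
        self.bias = config.get("bias", 0.1)

    def predict(self, payload):
        # payload plays the role of a parsed JSON request body.
        features = payload["features"]
        score = sum(w * x for w, x in zip(self.weights, features))
        score += self.bias
        return {"label": "positive" if score > 0 else "negative",
                "score": score}

predictor = PythonPredictor({})
result = predictor.predict({"features": [1.0, 2.0]})
```

The appeal of this split is that the data scientist only writes the two methods above; provisioning servers, autoscaling, and exposing the HTTP endpoint are left to the tool.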