Events may soon see robots serving food and managing their delivery routes by themselves. A company based at the University of California, Berkeley, is using machine learning to teach its delivery robots how to cross the road safely, without any human intervention. This technology could be a helping hand for event catering services, whose usual challenge is managing serving staff during events. And it is not just caterers: the robots could also carry food plates alongside serving staff, who are often seen juggling stacks of dishes.
While BERT is a significant improvement in how computers 'understand' human language, it is still far from understanding language and context the way humans do. We should, however, expect BERT to have a significant impact on many understanding-focused NLP initiatives. The General Language Understanding Evaluation (GLUE) benchmark is a collection of datasets used for training, evaluating, and analyzing NLP models relative to one another. The datasets are designed to test a model's language understanding and are useful for evaluating models like BERT. As the GLUE results show, BERT enables models to outperform humans even on comprehension tasks where that was previously thought impossible for computers.
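GLUE summarizes a model's performance as a single leaderboard number by averaging its per-task metrics. A minimal sketch of that aggregation, where the task names are real GLUE tasks but the scores are made-up illustrative numbers, not any model's actual results:

```python
# GLUE-style aggregation sketch: the leaderboard score is (roughly) the
# unweighted average of per-task metrics. Scores below are invented for
# illustration and do not reflect any real model.
def glue_score(task_scores):
    """Average per-task scores into one leaderboard-style number."""
    return sum(task_scores.values()) / len(task_scores)

scores = {
    "CoLA": 60.5, "SST-2": 94.9, "MRPC": 89.3,
    "QQP": 72.1, "MNLI": 86.7, "QNLI": 92.7, "RTE": 70.1,
}
print(round(glue_score(scores), 1))  # → 80.9
```

Averaging across such varied tasks is deliberate: it rewards models that understand language broadly rather than excelling at a single dataset.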
In a plain factory building in the San Marcos hills, north of San Diego in California, a technological revolution is under way. There, a team of AI experts are developing a new brand of woman that can smile, flutter her eyelids, make small-talk and remember the names of your siblings. Harmony – for that is her name – is a cut above your average sex doll. More than merely a masturbatory aid, she is a friend, lover and potential life partner. In Sex Robots & Vegan Meat, Jenny Kleeman examines the innovations that promise to change the way we love, eat, reproduce and die in the future. "What you are about to read is not science fiction," she warns in her preface.
In this episode of the McKinsey on AI podcast miniseries, McKinsey's David DeLallo speaks with McKinsey Global Institute partner Michael Chui and associate partner Bryce Hall about the latest trends in business adoption of artificial intelligence (AI). They discuss where the technology is being used most across industries, companies, and business functions; the keys to getting impact from AI investments; and what lies ahead. There's no shortage of predictions about how AI could fundamentally change the way we live and work. Over the past few years, companies around the world have been figuring out exactly how AI technologies can improve their performance in a number of areas across their business. But is AI actually delivering significant results? Moreover, what can we expect to see as we move into a new decade of AI use and development? To answer some of these questions today, I'm joined by Michael Chui, a McKinsey partner with the McKinsey Global Institute, who is based in our San Francisco office, and associate partner Bryce Hall from our Washington, DC, office.
Law enforcement in America is facing a day of reckoning over its systemic, institutionalized racism and ongoing brutality against the people it was designed to protect. Virtually every aspect of the system is now under scrutiny, from budgeting and staffing levels to the data-driven prevention tools it deploys. A handful of local governments have already placed moratoriums on facial recognition systems in recent months and on Wednesday, Santa Cruz, California became the first city in the nation to outright ban the use of predictive policing algorithms. While it's easy to see the privacy risks that facial recognition poses, predictive policing programs have the potential to quietly erode our constitutional rights and exacerbate existing racial and economic biases in the law enforcement community. Simply put, predictive policing technology uses algorithms to pore over massive amounts of data to predict when and where future crimes will occur.
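At its simplest, place-based prediction of this kind ranks areas by their historical incident counts. A toy sketch, with an invented grid and invented data, not the algorithm of PredPol or any real vendor:

```python
from collections import Counter

# Toy place-based "predictive policing": rank grid cells by historical
# incident counts. Deliberately simplified illustration; the incident
# coordinates below are made up.
incidents = [(1, 2), (1, 2), (0, 0), (3, 4), (1, 2), (3, 4)]

def hotspot_ranking(incidents):
    """Return grid cells ordered from most to least historical incidents."""
    counts = Counter(incidents)
    return [cell for cell, _ in counts.most_common()]

print(hotspot_ranking(incidents)[0])  # → (1, 2), the most-flagged cell
```

Even this toy version exposes the feedback-loop risk critics describe: cells that were policed more heavily in the past accumulate more recorded incidents, and so keep being flagged for more policing.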
Today, we are bombarded by messages about the ways in which artificial intelligence (AI) is changing our world and its future promises and perils. But today's AI, called machine learning, is very different from much of AI in the past. From the 1970s until the 1990s, a very different approach, called "expert systems," appeared poised to radically change society in many of the same ways that today's machine learning now seems poised to. Expert systems seek to encode into software the experience and understanding of the finest human specialists in everything from diagnosing an infectious disease to identifying the sonar fingerprint of enemy submarines, and then to have these systems suggest reasoned decisions and conclusions in new, real-world cases. Today, many such expert systems are commonplace in everything from maintenance and repair systems to automated customer support systems of various sorts.
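The core of such systems is forward chaining: if-then rules fired over known facts until no new conclusions emerge. A minimal sketch, with rules invented for illustration (real systems like MYCIN had hundreds of rules plus certainty factors):

```python
# Minimal forward-chaining rule engine in the spirit of 1970s-90s expert
# systems. Rules and facts are invented for illustration only.
RULES = [
    ({"fever", "cough"}, "respiratory_infection"),
    ({"respiratory_infection", "chest_pain"}, "suspect_pneumonia"),
]

def infer(facts, rules=RULES):
    """Fire rules repeatedly until no new facts can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(infer({"fever", "cough", "chest_pain"}))  # derives both conclusions
```

Note how the second rule only fires after the first supplies an intermediate conclusion; that chaining of encoded expertise is what distinguished these systems from simple lookup tables.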
Smoke & Mirrors: How Hype Obscures the Future and How To See Past It • By Gemma Milne • Little, Brown Book Group • 322 pages • ISBN 978-1-4721-4366-2 • £14.99 There was a story that made the rounds in the middle of the dot-com bust. As share prices of tech companies -- both good and bad -- cratered, someone asked a bunch of Silicon Valley types these two questions: Was the internet hyped? How many thought that in five years the internet would be bigger than it was then? Even at the time, if you were spending any time online you knew that the internet wasn't hyped -- but many internet businesses were.
The brain of a human child is remarkable. Even in a previously unknown situation, it makes a decision based on its prior knowledge. Depending on the outcome, it learns and remembers the best choices to make in that particular scenario. At a high level, this process of learning can be understood as 'trial and error', where the brain tries to maximise the occurrence of positive outcomes.
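That trial-and-error loop is also the core idea behind reinforcement learning. A minimal sketch, using a two-armed bandit with made-up reward probabilities and a simple epsilon-greedy rule (all numbers here are illustrative assumptions):

```python
import random

# Trial-and-error learning sketch: an epsilon-greedy agent tries two
# actions, keeps a running average reward per action, and gradually
# favours the better one. Reward probabilities are invented (0.2 vs 0.8)
# and hidden from the agent.
random.seed(0)
REWARD_PROB = [0.2, 0.8]
values = [0.0, 0.0]   # running average reward per action
counts = [0, 0]       # how often each action was tried

def choose(epsilon=0.1):
    if random.random() < epsilon:                  # explore: random action
        return random.randrange(2)
    return max(range(2), key=lambda a: values[a])  # exploit: best so far

for _ in range(2000):
    a = choose()
    reward = 1.0 if random.random() < REWARD_PROB[a] else 0.0
    counts[a] += 1
    values[a] += (reward - values[a]) / counts[a]  # incremental mean

print(values[1] > values[0])  # the agent has learned the better action
```

After enough trials, the estimated values approach the true reward probabilities, and the agent mostly picks the better action while still occasionally exploring, much like the child's brain remembering which choices led to positive outcomes.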
In January 2017, a group of artificial intelligence researchers gathered at the Asilomar Conference Grounds in California and developed 23 principles for artificial intelligence, later dubbed the Asilomar AI Principles. The sixth principle states that "AI systems should be safe and secure throughout their operational lifetime, and verifiably so where applicable and feasible." Thousands of people in both academia and the private sector have since signed on to these principles, but, more than three years after the Asilomar conference, many questions remain about what it means to make AI systems safe and secure. Verifying safety and security is complicated by the field's rapid development and by highly complex deployments in health care, financial trading, transportation, and translation, among others. Much of the discussion to date has centered on how beneficial machine learning algorithms may be for identifying and defending against computer-based vulnerabilities and threats by automating the detection of and response to attempted attacks.1 Conversely, concerns have been raised that using AI for offensive purposes may make cyberattacks increasingly difficult to block or defend against by enabling rapid adaptation of malware to adjust to restrictions imposed by countermeasures and security controls.2
In computer vision, one key property we expect of an intelligent artificial model, agent, or algorithm is that it should be able to correctly recognize the type, or class, of objects it encounters. This is critical in numerous important real-world scenarios--from biomedicine, where an intelligent system might be tasked with distinguishing between cancerous cells and healthy ones, to self-driving cars, where being able to discriminate between pedestrians, other vehicles, and road signs is crucial to successfully and safely navigating roads. Deep learning is one of the most significant tools for state-of-the-art systems in computer vision, and its use has resulted in models that have reached or can even exceed human-level performance in important and challenging real-world image classification tasks. Despite their successes, these models still have difficulty generalizing, or adapting to tasks in testing or deployment scenarios that don't closely resemble the tasks they were trained on. For example, a visual system trained under typical weather conditions in Northern California may fail to properly recognize pedestrians in Quebec because of differences in weather, clothes, demographics, and other features.
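The failure mode in that last example is distribution shift: a decision rule fit on one domain can degrade badly when the test distribution moves. A toy illustration with a one-dimensional threshold "classifier" and entirely synthetic data (the domain names are just labels echoing the example above):

```python
import random

random.seed(1)

# Toy domain-shift demo: a 1-D threshold classifier separates two classes
# by a single feature. Fit on one domain, it fails on a shifted domain
# where the same threshold no longer separates the classes. All data is
# synthetic.
def sample(mean_neg, mean_pos, n=500):
    """Draw n labelled points; each class is Gaussian around its mean."""
    xs, ys = [], []
    for _ in range(n):
        y = random.random() < 0.5
        xs.append(random.gauss(mean_pos if y else mean_neg, 1.0))
        ys.append(y)
    return xs, ys

def accuracy(xs, ys, threshold):
    return sum((x > threshold) == y for x, y in zip(xs, ys)) / len(xs)

train_x, train_y = sample(0.0, 3.0)   # "training" domain
threshold = 1.5                       # chosen midway between the means
shift_x, shift_y = sample(3.0, 6.0)   # shifted domain: feature values higher

print(accuracy(train_x, train_y, threshold))  # high, around 0.93
print(accuracy(shift_x, shift_y, threshold))  # near chance, around 0.5
```

On the shifted domain nearly every point exceeds the old threshold, so the classifier calls almost everything positive; the decision rule was never wrong about its training data, only brittle outside it, which is exactly the generalization gap described above.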