In the food industry, it seems, the robot revolution is well underway, with machines mastering skilled tasks that have always been performed by people. In Boston, robots have replaced chefs and are creating complex bowls of food for customers. In Prague, machines that customers operate through an app are displacing bartenders and servers. Robots are even making the perfect loaf of bread these days, taking charge of an art that has remained in human hands for thousands of years. Now comes Briggo, a company that has created a fully automated, robotic brewing machine that can push out 100 cups of coffee in a single hour -- equaling the output of three to four baristas, according to the company.
It might not be the first place you imagine when you think about robots. But in the Renaissance splendour of the Vatican, thousands of miles from Silicon Valley, scientists, ethicists and theologians gathered to discuss the future of robotics. The ideas go to the heart of what it means to be human and could define future generations on the planet. The workshop, "Roboethics: Humans, Machines and Health," was hosted by the Pontifical Academy for Life. The Academy was created 25 years ago by Pope John Paul II in response to rapid changes in biomedicine.
To celebrate the German composer's birthday (March 21, 1685), Google's Doodle lets users compose a melody in Bach's style. The interactive Doodle is the product of collaboration between Google's Magenta – which helps people make their own music and art through machine learning – and Google's PAIR – which builds tools that make machine learning accessible to everyone. A machine-learning model called Coconet made it all possible. Developed by Google, Coconet was trained on 306 of Bach's chorale harmonizations. "His chorales always have four voices: each carries their own melodic line, creating a rich harmonic progression when played together," writes Google.
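The four-voice chorale texture Google describes can be pictured with a toy data structure. The sketch below is not Coconet -- which is a convolutional model that repeatedly masks and re-predicts notes -- just a naive fixed-interval "harmonizer" that illustrates the four-voice representation; all names, pitches and intervals are illustrative assumptions, not Google's implementation.

```python
# Toy illustration of a four-voice chorale representation: each voice is a
# parallel list of MIDI pitches. A real model like Coconet would predict the
# three lower voices; here we just transpose the soprano by fixed intervals.

def harmonize(soprano):
    """Return a dict of four voices given a soprano melody (MIDI pitches)."""
    offsets = {"soprano": 0, "alto": -4, "tenor": -9, "bass": -16}
    return {voice: [p + off for p in soprano] for voice, off in offsets.items()}

melody = [72, 74, 76, 72]  # a short soprano line (C5, D5, E5, C5)
chorale = harmonize(melody)
```

A learned model replaces the fixed offsets with context-dependent predictions, which is what lets the Doodle harmonize an arbitrary user melody "in Bach's style."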
A new area in artificial intelligence involves using algorithms to automatically design machine-learning systems known as neural networks, which can be more accurate and efficient than those developed by human engineers. But this so-called neural architecture search (NAS) technique is computationally expensive. A state-of-the-art NAS algorithm recently developed by Google to run on a squad of graphics processing units (GPUs) took 48,000 GPU hours to produce a single convolutional neural network, which is used for image classification and detection tasks. Google has the wherewithal to run hundreds of GPUs and other specialized hardware in parallel, but that's out of reach for many others. In a paper being presented at the International Conference on Learning Representations in May, MIT researchers describe an NAS algorithm that can directly learn specialized convolutional neural networks (CNNs) for target hardware platforms -- when run on a massive image dataset -- in only 200 GPU hours, which could enable far broader use of these types of algorithms.
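To make the idea concrete, here is a minimal sketch of NAS as random search over a tiny space of CNN hyperparameters. Real NAS systems, including the MIT work described above, use far more sophisticated strategies (for example, gradient-based search over a weight-shared supernetwork); the `evaluate` stub below is a stand-in for actually training each candidate network, and every name and number in it is an illustrative assumption.

```python
import random

# Search space: a toy menu of CNN hyperparameters.
SEARCH_SPACE = {
    "depth": [4, 8, 12],    # number of convolutional layers
    "width": [16, 32, 64],  # channels per layer
    "kernel": [3, 5, 7],    # convolution kernel size
}

def sample_architecture(rng):
    """Draw one random candidate from the search space."""
    return {name: rng.choice(options) for name, options in SEARCH_SPACE.items()}

def evaluate(arch):
    # Stub: pretend deeper/wider networks score higher, with a penalty on
    # large kernels (a crude proxy for a hardware latency constraint).
    return arch["depth"] * arch["width"] - 10 * arch["kernel"]

def random_search(trials=20, seed=0):
    """Return the best-scoring architecture out of `trials` random samples."""
    rng = random.Random(seed)
    return max((sample_architecture(rng) for _ in range(trials)), key=evaluate)

best = random_search()
```

The expense the article describes comes from `evaluate`: in real NAS each call means training a network, so reducing how many full trainings are needed (or sharing weights across candidates) is what cuts 48,000 GPU hours down toward 200.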
A swarm of robots inspired by living cells can squeeze through gaps and keep moving even if many of its parts fail. Living cells gather together and collectively migrate under certain conditions, such as when inflammatory cells travel through the bloodstream to a wound site to help the healing process. To mimic this, Hod Lipson at Columbia University in New York and his colleagues created 25 disc-shaped robots that can join together. Each is equipped with cogs that cause the robot's outer shell to expand and contract and magnets around its perimeter that let it stick to neighbouring bots. Individually, the bots can't move, but once stuck together, the swarm can slither across a surface by making individual bots expand and contract at different times.
The world's smallest bears copy one another's facial expressions as a means of communication. A team at the University of Portsmouth, UK, studied 22 sun bears at the Bornean Sun Bear Conservation Centre in Malaysia. In total, 21 matched the open-mouthed expressions of their playmates during face-to-face interactions. When they were facing each other, 13 bears made the expressions within 1 second of observing a similar expression from their playmate. "Mimicking the facial expressions of others in exact ways is one of the pillars of human communication," says Marina Davila-Ross, who was part of the team.
For most people who talk to their technology -- whether it's Amazon's Alexa, Apple's Siri or the Google Assistant -- the voice that talks back sounds female. Some people do choose to hear a male voice. Now, researchers have unveiled a new gender-neutral option: Q. "One of our big goals with Q was to contribute to a global conversation about gender, and about gender and technology and ethics, and how to be inclusive for people that identify in all sorts of different ways," says Julie Carpenter, an expert in human behavior and emerging technologies who worked on developing Project Q. The voice of Q was developed by a team of researchers, sound designers and linguists in conjunction with the organizers of Copenhagen Pride week, technology leaders in an initiative called Equal AI and others. They first recorded dozens of voices of people -- those who identify as male, female, transgender or nonbinary.
Expert System is making enhancements to Cogito, its artificial intelligence platform that understands textual information and automatically processes natural language, delivering key updates in the areas of knowledge graphs, machine learning, and RPA. Cogito 14.4 enables users to more easily customize its Knowledge Graph of approximately 350,000 concepts connected by 2.8 million relationships and lets them import targeted knowledge from any source (such as company repositories, Wikipedia or Geonames) in only a few clicks, enabling the platform to resolve references to real-world entities (such as people, companies and locations) and to link them to knowledge repositories using standardized identifiers. Cogito 14.4 also extends its natural language processing (NLP) extraction pipeline with a new active learning workflow that accelerates machine-learning-based analytics projects: through an intuitive web application, end users can visualize the quality of extraction and provide feedback to the engine, which is iteratively retrained to reach the user's quality goals, reducing the amount of manual annotation needed. Finally, Cogito 14.4 includes a Robotic Process Automation (RPA) connector that extends RPA bots into process automation that leverages knowledge (not only structured data) and requires human-like judgement; the Cogito RPA Connector uses deep contextual understanding to extract precise data from unstructured business documents.
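The core idea behind an active learning workflow like the one described -- the model flags the examples it is least certain about, a human labels those, and the model is retrained -- can be sketched in a few lines. The "model" below is a trivial stand-in for Cogito's extraction engine, and every function name is an illustrative assumption, not Expert System's API.

```python
# Uncertainty sampling: pick the unlabeled texts the model is least sure about,
# so human annotation effort goes where it helps the model most.

def uncertainty(score):
    # Scores near 0.5 are the least certain; 0.0 or 1.0 are the most certain.
    return 1.0 - abs(score - 0.5) * 2

def select_for_annotation(pool, model, k=2):
    """Return the k texts from `pool` with the least confident predictions."""
    ranked = sorted(pool, key=lambda text: uncertainty(model(text)), reverse=True)
    return ranked[:k]

# Stand-in "model": a crude score for whether a text mentions a named entity,
# based on how many capitalized words it contains.
def toy_model(text):
    return min(1.0, 0.3 * sum(word.istitle() for word in text.split()))

pool = ["acme corp earnings rose", "The Board met Tuesday", "rain expected today"]
batch = select_for_annotation(pool, toy_model)
```

In a production loop, the labels collected for `batch` would be fed back into retraining, and the select-label-retrain cycle repeats until the extraction quality meets the user's goal.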
The internet is full of lies. That maxim has become an operating assumption for any remotely skeptical person interacting anywhere online, from Facebook and Twitter to phishing-plagued inboxes to spammy comment sections to online dating and disinformation-plagued media. Now one group of researchers has suggested the first hint of a solution: They claim to have built a prototype for an "online polygraph" that uses machine learning to detect deception from text alone. But what they've actually demonstrated, according to a few machine learning academics, is the inherent danger of overblown machine learning claims. In last month's issue of the journal Computers in Human Behavior, Florida State University and Stanford researchers proposed a system that uses automated algorithms to separate truths from lies -- what they refer to as the first step toward "an online polygraph system--or a prototype detection system for computer-mediated deception when face-to-face interaction is not available."
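A text-only "deception classifier" of the kind the paper describes generally reduces to features extracted from the words plus a learned scorer. The toy below uses bag-of-words counts and label-weighted word scores on a handful of made-up sentences -- which is precisely the failure mode the critics point to: high accuracy on a small, narrow dataset says little about detecting deception in the wild. Nothing here reflects the actual FSU/Stanford system.

```python
from collections import Counter

def featurize(text):
    # Bag-of-words: word counts, ignoring order and case.
    return Counter(text.lower().split())

def train(examples):
    """examples: list of (text, label), label +1 truthful / -1 deceptive.
    Returns per-word weights from simple label-weighted counts."""
    weights = Counter()
    for text, label in examples:
        for word, count in featurize(text).items():
            weights[word] += label * count
    return weights

def predict(weights, text):
    score = sum(weights[w] * n for w, n in featurize(text).items())
    return 1 if score >= 0 else -1

# A tiny, made-up training set -- far too small to generalize.
train_set = [
    ("i was at home all night", 1),
    ("honestly i swear i never saw it", -1),
    ("the meeting ended at noon", 1),
    ("to be honest i did nothing wrong", -1),
]
model = train(train_set)
```

The model "works" on sentences resembling its training data, but it has only memorized which words co-occurred with which label -- exactly the gap between benchmark accuracy and a real polygraph that the skeptical academics highlight.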
Oceanographers studying the physics of the global ocean have long found themselves facing a conundrum: Fluid dynamical balances can vary greatly from point to point, rendering it difficult to make global generalizations. Factors like the wind, local topography, and meteorological exchanges make it difficult to compare one area to another. To add to the complexity, one would have to analyze billions of data points for numerous parameters -- temperature, salinity, velocity, how things change with depth, whether there is a trend present -- to pinpoint what physics are most dominant in a given region. "You would have to look at an overwhelming number of different global maps and mentally match them up to figure out what matters most where," says Maike Sonnewald, a postdoc working in the MIT Department of Earth, Atmospheric and Planetary Sciences (EAPS) and a member of the EAPS Program in Atmospheres, Oceans and Climate (PAOC). Sonnewald, who has a background in physical oceanography and data science, uses computers to reveal connections and patterns in the ocean that would otherwise be beyond human capability.
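One way a computer can "match up the maps" is unsupervised clustering: grouping grid points whose dynamical balances look alike, so a handful of regimes stands in for billions of raw data points. The sketch below is a tiny from-scratch k-means on one-dimensional values; the actual research would cluster multi-dimensional balance terms per grid cell, and the example values are invented for illustration.

```python
import random

def kmeans_1d(values, k=2, iters=10, seed=0):
    """Cluster 1-D values into k groups by iteratively reassigning points to
    the nearest center and moving each center to its cluster mean."""
    rng = random.Random(seed)
    centers = rng.sample(values, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            nearest = min(range(k), key=lambda i: abs(v - centers[i]))
            clusters[nearest].append(v)
        # Keep a center in place if its cluster happens to be empty.
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers, clusters

# Two made-up regimes: weak vs. strong influence of some balance term.
values = [0.1, 0.2, 0.15, 5.0, 5.2, 4.9]
centers, clusters = kmeans_1d(values, k=2)
```

Once each grid point carries a regime label, a single map of labels replaces the "overwhelming number of different global maps" Sonnewald describes, and the dominant physics of each region can be read off from its cluster's typical balance.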