Machine Learning: AI-Alerts


Two Startups Use Processing in Flash Memory for AI at the Edge

IEEE Spectrum Robotics Channel

Irvine, Calif.-based Syntiant thinks it can use embedded flash memory to greatly reduce the amount of power needed to perform deep-learning computations. Austin, Tex.-based Mythic thinks it can use embedded flash memory to greatly reduce the amount of power needed to perform deep-learning computations. They both might be right. A growing crowd of companies is hoping to deliver chips that accelerate otherwise onerous deep-learning applications, and to some degree they all have similarities because "these are solutions that are created by the shape of the problem," explains Mythic founder and CTO Dave Fick. When executed in a CPU, that problem is shaped like a traffic jam of data.
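
To make the "traffic jam" concrete: the workload both startups target is dominated by multiplying stored weight matrices against incoming activations, and on a conventional processor every weight must be hauled out of memory before it can be used. The sketch below illustrates that bottleneck in plain NumPy; it is not a description of either company's architecture, and the layer sizes are arbitrary.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical layer sizes, for illustration only.
weights = rng.standard_normal((1024, 1024)).astype(np.float32)   # stored parameters
activations = rng.standard_normal(1024).astype(np.float32)       # incoming data

# One fully connected layer: roughly a million multiply-accumulates,
# each of which needs a weight value fetched from memory on a CPU.
output = weights @ activations

# Bytes of weight traffic a conventional processor moves for this single layer;
# compute-in-flash designs aim to do the multiply-accumulate where the weights live.
print(weights.nbytes, "bytes of weights fetched for one layer pass")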


Artificial intelligence model "learns" from patient data to make cancer treatment less toxic

MIT News

MIT researchers are employing novel machine-learning techniques to improve the quality of life for patients by reducing toxic chemotherapy and radiotherapy dosing for glioblastoma, the most aggressive form of brain cancer. Glioblastoma is a malignant tumor that appears in the brain or spinal cord, and the prognosis for adults is no more than five years. Patients must endure a combination of radiation therapy and multiple drugs taken every month. Medical professionals generally administer maximum safe drug doses to shrink the tumor as much as possible. But these strong pharmaceuticals still cause debilitating side effects in patients.
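
The article frames the problem as a trade-off: shrink the tumor as much as possible while sparing the patient as much of the toxic regimen as possible. As a loose, hypothetical sketch of that trade-off (not the MIT group's actual formulation), a learned dosing policy might be scored with a reward like the following, where every number and name is illustrative:

def dosing_reward(tumor_size_before, tumor_size_after, dose_fraction, toxicity_weight=0.5):
    """Reward = tumor shrinkage minus a penalty proportional to the dose given.

    dose_fraction is the administered dose as a fraction of the maximum safe
    dose (0.0 to 1.0); toxicity_weight is a hypothetical tuning knob.
    """
    shrinkage = tumor_size_before - tumor_size_after
    return shrinkage - toxicity_weight * dose_fraction

# Example: a half dose that achieves most of the shrinkage can score higher
# than a full dose that shrinks the tumor only slightly more.
print(dosing_reward(10.0, 7.0, dose_fraction=0.5))    # 3.0 shrinkage - 0.25 penalty = 2.75
print(dosing_reward(10.0, 6.875, dose_fraction=1.0))  # 3.125 shrinkage - 0.5 penalty = 2.625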


Can Silicon Valley workers rein in Big Tech from within? | Ben Tarnoff

The Guardian

An unprecedented wave of rank-and-file rebellion is sweeping Big Tech. At one company after another, employees are refusing to help the US government commit human rights abuses at home and abroad. At Google, workers organized to shut down Project Maven, a Pentagon project that uses machine learning to improve targeting for drone strikes – and won. At Amazon, workers are pushing Jeff Bezos to stop selling facial recognition to police departments and government agencies, and to cut ties with Immigration and Customs Enforcement (Ice). At Microsoft, workers are demanding the termination of a $19.4m cloud deal with Ice.


Magical thinking about machine learning won't bring the reality of AI any closer | John Naughton

#artificialintelligence

"Any sufficiently advanced technology," wrote the sci-fi eminence grise Arthur C Clarke, "is indistinguishable from magic." This quotation, endlessly recycled by tech boosters, is possibly the most pernicious utterance Clarke ever made because it encourages hypnotised wonderment and disables our critical faculties. For if something is "magic" then by definition it is inexplicable. There's no point in asking questions about it; just accept it for what it is, lie back and suspend disbelief. Currently, the technology that most attracts magical thinking is artificial intelligence (AI).


Machine Learning And AI Will Disrupt All Careers According To Dell's Roese

#artificialintelligence

Machine learning (ML) and artificial intelligence (AI) represent one of the biggest disruptions to your career, according to John Roese, CTO of Dell Technologies. During the Dell Technology World keynote, Roese made this bold but accurate statement. Despite the hype, AI is real and can't be ignored. Leading businesses are using machine learning to deliver quantifiable business value today. For example, Google used the AI knowledge gathered from its DeepMind acquisition to improve its cooling systems, saving the company hundreds of millions of dollars.


AI-driven robot hand spent a hundred years teaching itself to rotate a cube

#artificialintelligence

AI researchers have demonstrated a self-teaching algorithm that gives a robot hand remarkable new dexterity. Their creation taught itself to manipulate a cube with uncanny skill by practicing for the equivalent of a hundred years inside a computer simulation (though only a few days in real time). The robotic hand is still nowhere near as agile as a human one, and far too clumsy to be deployed in a factory or a warehouse. Even so, the research shows the potential for machine learning to unlock new robotic capabilities. It also suggests that someday robots might teach themselves new skills inside virtual worlds, which could greatly speed up the process of programming or training them.
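
The "hundred years of practice in a few days" figure implies many copies of the simulated hand running in parallel, each faster than real time. The arithmetic below is a back-of-the-envelope illustration of that compression; the speed-up and day counts are assumptions, not numbers reported by the researchers.

SIM_YEARS = 100         # simulated practice described in the article
WALL_CLOCK_DAYS = 3     # "a few days" of real time, per the article
SPEEDUP_PER_SIM = 10    # assumption: each simulated hand runs 10x faster than real time

sim_days_needed = SIM_YEARS * 365
parallel_sims = sim_days_needed / (WALL_CLOCK_DAYS * SPEEDUP_PER_SIM)

print(f"~{parallel_sims:,.0f} parallel simulations under these assumptions")
# prints: ~1,217 parallel simulations under these assumptions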


Amazon's Facial Recognition System Mistakes Members of Congress for Mugshots

WIRED

Amazon touts its Rekognition facial recognition system as "simple and easy to use," encouraging customers to "detect, analyze, and compare faces for a wide variety of user verification, people counting, and public safety use cases." And yet, in a study released Thursday by the American Civil Liberties Union, the technology managed to confuse photos of 28 members of Congress with publicly available mug shots. Given that Amazon actively markets Rekognition to law enforcement agencies across the US, that's simply not good enough. The ACLU study also illustrated the racial bias that plagues facial recognition today. "Nearly 40 percent of Rekognition's false matches in our test were of people of color, even though they make up only 20 percent of Congress," wrote ACLU attorney Jacob Snow.
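
The comparison at the heart of the ACLU test boils down to submitting two face images and getting back a similarity score, with a customer-chosen threshold deciding what counts as a match. Below is a minimal sketch of that kind of call using the real boto3 CompareFaces API; the image files and threshold are placeholders, and the ACLU's exact methodology may have differed.

import boto3

client = boto3.client("rekognition", region_name="us-east-1")

# Placeholder image files; any face photo and any candidate photo would do.
with open("member_photo.jpg", "rb") as source, open("mugshot.jpg", "rb") as target:
    response = client.compare_faces(
        SourceImage={"Bytes": source.read()},
        TargetImage={"Bytes": target.read()},
        SimilarityThreshold=80,  # face pairs scoring below this are not returned as matches
    )

for match in response["FaceMatches"]:
    print(f"Reported match, similarity {match['Similarity']:.1f}%")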


'The discourse is unhinged': how the media gets AI alarmingly wrong

The Guardian

In June of last year, five researchers at Facebook's Artificial Intelligence Research unit published an article showing how bots can simulate negotiation-like conversations. While for the most part the bots were able to maintain coherent dialogue, the researchers found that the software agents would occasionally generate strange sentences like: "Balls have zero to me to me to me to me to me to me to me to." On seeing these results, the team realized that they had failed to include a constraint limiting the bots to sentences within the parameters of spoken English, and that the bots had consequently developed a type of machine-English patois to communicate between themselves. These findings were considered fairly interesting by other experts in the field, but not totally surprising or groundbreaking. A month after this initial research was released, Fast Company published an article entitled "AI Is Inventing Language Humans Can't Understand."


Helping computers perceive human emotions

MIT News

MIT Media Lab researchers have developed a machine-learning model that takes computers a step closer to interpreting our emotions as naturally as humans do. In the growing field of "affective computing," robots and computers are being developed to analyze facial expressions, interpret our emotions, and respond accordingly. Applications include, for instance, monitoring an individual's health and well-being, gauging student interest in classrooms, helping diagnose signs of certain diseases, and developing helpful robot companions. A challenge, however, is that people express emotions quite differently, depending on many factors. General differences can be seen among cultures, genders, and age groups.


From Imitation Games To The Real Thing: A Brief History Of Machine Learning

#artificialintelligence

Hephaestus, the Greek god of blacksmiths, metalworking and carpenters, was said to have fashioned artificial beings in the form of golden robots. Myth finally moved toward truth in the 20th century, as AI developed in a series of fits and starts, finally gaining major momentum and reaching a tipping point by the turn of the millennium. Here's how the modern history of AI and ML unfolded, starting in the years just following World War II. In 1950, while working at the University of Manchester, legendary code breaker Alan Turing (subject of the 2014 movie The Imitation Game) released a paper titled "Computing Machinery and Intelligence." It became famous for positing what became known as the "Turing test."