There's a new AI that can guess how you feel just by watching you walk

#artificialintelligence

So is it possible to interpret how someone is feeling based on their gait alone? That's exactly what scientists at the University of North Carolina at Chapel Hill and the University of Maryland at College Park have taught a computer to do. Using deep learning, their software can analyze a video of someone walking, turn it into a 3D model, and extract their gait. A neural network then determines the dominant motion and how it matches up to a particular feeling, based on the data on which it's trained. According to their research paper, published in June on arXiv, their deep learning model can guess four different emotions--happy, sad, angry, and neutral--with 80% accuracy.
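As a rough illustration of the pipeline described, and not the researchers' actual code, the sketch below classifies a walk, represented as a sequence of 3D joint positions, into one of the four emotions using a few hand-crafted gait features and a generic classifier; the feature choices, data shapes, and toy data are assumptions made for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

EMOTIONS = ["happy", "sad", "angry", "neutral"]

def gait_features(joints):
    """Summarize a gait clip; `joints` has shape (frames, num_joints, 3)."""
    velocities = np.diff(joints, axis=0)               # frame-to-frame joint motion
    speed = np.linalg.norm(velocities, axis=2).mean()  # average movement speed
    bounce = joints[:, :, 2].std()                     # vertical variation of the body
    spread = joints.std(axis=(0, 1)).mean()            # how expanded the posture is
    return np.array([speed, bounce, spread])

# Toy stand-in data: random pose sequences with random labels, in place of
# real labelled gait clips extracted from video.
rng = np.random.default_rng(0)
X = np.stack([gait_features(rng.normal(size=(120, 16, 3))) for _ in range(200)])
y = rng.integers(0, len(EMOTIONS), size=200)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(EMOTIONS[clf.predict(X[:1])[0]])  # predicted emotion for one clip
```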


Thoughtful Machine Learning with Python - Programmer Books

#artificialintelligence

Gain the confidence you need to apply machine learning in your daily work. With this practical guide, author Matthew Kirk shows you how to integrate and test machine learning algorithms in your code, without the academic subtext. With graphs and highlighted code examples throughout, the book includes tests built with Python's NumPy, pandas, scikit-learn, and SciPy data science libraries. If you're a software engineer or business analyst interested in data science, this book will help you.
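In the spirit of the book's focus on testing, here is a small example, not taken from the book, of a unit-style test that checks a scikit-learn model against a trivial baseline on held-out data; the dataset and the margin are arbitrary illustrative choices.

```python
from sklearn.datasets import load_iris
from sklearn.dummy import DummyClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def test_model_beats_baseline():
    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, random_state=42, stratify=y
    )
    baseline = DummyClassifier(strategy="most_frequent").fit(X_train, y_train)
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    # The model should clearly outperform always predicting the majority class.
    assert model.score(X_test, y_test) > baseline.score(X_test, y_test) + 0.2

if __name__ == "__main__":
    test_model_beats_baseline()
    print("ok: model comfortably beats the baseline")
```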


Artificial Intelligence – Implementation of GAN - Amazing Images and Artwork

#artificialintelligence

Artificial Intelligence (AI) is no longer just an emerging technology with a bright future; it is a robust, growing platform that is impacting several industries and touching numerous spheres of life. AI algorithms need enormous volumes of data to be trained appropriately, after which a system can not only decipher pictures, such as recognizing that a dog is a dog or differentiating a chair from a table, but also generate original images and create remarkable artwork of a quality associated with Picasso or Michelangelo. The AI model that makes this possible, the generative adversarial network (GAN), has matured substantially in recent years; it produces excellent output for certain applications but still needs refinement in others. Computer scientists have spent around two decades teaching, training, and building machines that can visualize the world around them, a skill humans take for granted yet one that is highly challenging to teach a machine, and artificial intelligence has finally made it possible. Two major ground-breaking improvements in AI image processing have been facial-recognition technology, in both retail and security, and image generation across the fields of art. The commercial use of facial-recognition technology is to improve the sales and marketing of products, including more efficient audience targeting.
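For readers curious about what an "implementation of GAN" can look like in code, below is a bare-bones sketch in PyTorch, one common way to implement the idea rather than the specific model behind the artwork discussed above: a generator learns to mimic a simple one-dimensional Gaussian while a discriminator acts as its critic.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
LATENT, BATCH, STEPS = 8, 64, 2000

# Tiny generator and discriminator; real image GANs use convolutional nets.
G = nn.Sequential(nn.Linear(LATENT, 32), nn.ReLU(), nn.Linear(32, 1))
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

real_data = lambda n: torch.randn(n, 1) * 1.5 + 4.0  # "real" samples from N(4, 1.5)

for step in range(STEPS):
    # Train the discriminator to separate real samples from generated ones.
    real = real_data(BATCH)
    fake = G(torch.randn(BATCH, LATENT)).detach()
    loss_d = bce(D(real), torch.ones(BATCH, 1)) + bce(D(fake), torch.zeros(BATCH, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Train the generator to fool the discriminator.
    fake = G(torch.randn(BATCH, LATENT))
    loss_g = bce(D(fake), torch.ones(BATCH, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

samples = G(torch.randn(1000, LATENT))
print(f"generated mean={samples.mean().item():.2f}, std={samples.std().item():.2f} (target 4.00, 1.50)")
```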


Artificial Intelligence Inching Closer to Deciphering Long Lost Languages

#artificialintelligence

With new technology available to us, we're inching closer to the end of the days when deciphering ancient languages is a painstaking task filled with frustration and confusion. Nifty machines following complex algorithms are helping researchers around the globe as they take on the often monumental task of understanding ancient texts and lost languages. Big Think reports that linguistic experts estimate there have been approximately 31,000 languages spoken throughout human history. Many of them are now dead and forgotten, but a new AI project may be part of the answer to deciphering the writing of ancient languages. "While languages change, many of the symbols and how the words and characters are distributed stay relatively constant over time. Because of that, you could attempt to decode a long-lost language if you understood its relationship to a known progenitor language."
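To make the quoted intuition concrete, here is a toy illustration, not the researchers' actual system: if symbol distributions stay relatively stable across related languages, an undeciphered text should look statistically closer to its progenitor than to an unrelated language. The sample sentences below are placeholders, and comparing character frequencies is a drastic simplification of real decipherment models.

```python
from collections import Counter
import math

def char_profile(text):
    """Relative frequency of each alphabetic character in the text."""
    counts = Counter(c for c in text.lower() if c.isalpha())
    total = sum(counts.values())
    return {c: n / total for c, n in counts.items()}

def cosine(p, q):
    """Cosine similarity between two frequency profiles (higher = closer)."""
    dot = sum(p.get(k, 0.0) * q.get(k, 0.0) for k in set(p) | set(q))
    norm_p = math.sqrt(sum(v * v for v in p.values()))
    norm_q = math.sqrt(sum(v * v for v in q.values()))
    return dot / (norm_p * norm_q)

unknown = char_profile("puella aquam portat et agricola terram arat")  # stand-in "lost" text
candidates = {
    "latin":   char_profile("agricola puellam videt et aquam ad villam portat"),
    "english": char_profile("the farmer sees the girl and carries water to the house"),
}
for name, profile in candidates.items():
    print(name, round(cosine(unknown, profile), 3))
```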


Professor Patrick Winston, former director of MIT's Artificial Intelligence Laboratory, dies at 76

Robohub

Patrick Winston, a beloved professor and computer scientist at MIT, died on July 19 at Massachusetts General Hospital in Boston. A professor at MIT for almost 50 years, Winston was director of MIT's Artificial Intelligence Laboratory from 1972 to 1997 before it merged with the Laboratory for Computer Science to become MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL). A devoted teacher and cherished colleague, Winston led CSAIL's Genesis Group, which focused on developing AI systems that have human-like intelligence, including the ability to tell, perceive, and comprehend stories. He believed that such work could help illuminate aspects of human intelligence that scientists don't yet understand. "My principal interest is in figuring out what's going on inside our heads, and I'm convinced that one of the defining features of human intelligence is that we can understand stories," said Winston, the Ford Professor of Artificial Intelligence and Computer Science, in a 2011 interview for CSAIL.


Automated system generates robotic parts for novel tasks

Robohub

An automated system developed by MIT researchers designs and 3-D prints complex robotic parts called actuators that are optimized according to an enormous number of specifications. In short, the system does automatically what is virtually impossible for humans to do by hand. In a paper published today in Science Advances, the researchers demonstrate the system by fabricating actuators -- devices that mechanically control robotic systems in response to electrical signals -- that show different black-and-white images at different angles. One actuator, for instance, portrays a Vincent van Gogh portrait when laid flat. Tilted at an angle when it's activated, however, it portrays the famous Edvard Munch painting "The Scream."


Robots in Depth with Federico Pecora

Robohub

In this episode of Robots in Depth, Per Sjöborg speaks with Federico Pecora about AI and robotics. Federico Pecora is an Associate Professor in Computer Science at the Center for Applied Autonomous Sensor Systems at Örebro University, Sweden.


#291: Medieval Automata and Cathartic Objects: Modern Robots Inspired by History, with Michal Luria

Robohub

In this episode, Lauren Klein interviews Michal Luria, a PhD candidate in the Human-Computer Interaction Institute at Carnegie Mellon University, about research that explores the boundaries of Human-Robot Interaction. Michal draws inspiration from medieval times for her project to test how historical automata can inform modern robotics. She also discusses her work with cathartic objects to support emotional release. Michal is advised by Professors Jodi Forlizzi and John Zimmerman. Prior to her PhD, she studied Interactive Communication at the Interdisciplinary Center Herzliya in Israel.


Honor CEO Seth Sternberg: 'We're Using the Past to Predict the Future' - Home Health Care News

#artificialintelligence

Home care is often singled out for being slow to embrace and implement technology, but as the demand for care services grows, providers are forced to think outside the box when it comes to curbing caregiver turnover. San Francisco-based home care startup Honor understands this all too well, according to CEO Seth Sternberg. The company is using insights gleaned from machine learning to examine and address turnover internally and with its network of home care partners. Honor, which has raised $115 million since launching in 2014, teams up with independently owned and operated agencies by taking over caregiver recruiting, onboarding and training, in addition to day-to-day logistics. Currently, the company operates in Arizona, California, New Mexico and Texas.


Free Trial Signup - Gather Twitter Data DiscoverText

#artificialintelligence

Use the Twitter data you gather to train machine-learning classifiers to recognize relevant text and social media data. Jump into the data with the interactive word CloudExplorer, or build a mini topic dictionary using "defined" search.
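As a rough sketch of the general workflow such a tool supports, and not of DiscoverText's internal implementation, the snippet below trains a classifier to flag "relevant" posts from a handful of hand-labelled examples; the example posts and labels are made up.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny hand-labelled corpus: 1 = relevant to our topic, 0 = not relevant.
posts = [
    "new deep learning model beats the benchmark",
    "our robot learned to grasp objects",
    "great pizza place downtown",
    "weekend hiking photos",
]
labels = [1, 1, 0, 0]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression()).fit(posts, labels)
print(clf.predict(["deep learning beats a new benchmark", "pizza downtown this weekend"]))  # likely [1 0]
```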