New Finding


Will AI Ever Become Conscious?

#artificialintelligence

One example of a sci-fi struggle to define AI consciousness is AMC's "Humans." At this point in the series, human-like machines called Synths have become self-aware; as they band together in communities to live independent lives and define who they are, they must also battle for acceptance and survival against the hostile humans who created and used them. But what exactly might "consciousness" mean for artificial intelligence (AI) in the real world, and how close is AI to reaching that goal? Philosophers have described consciousness as having a unique sense of self coupled with an awareness of what's going on around you. And neuroscientists have offered their own perspective on how consciousness might be quantified, through analysis of a person's brain activity as it integrates and interprets sensory data.


Woman says her Echo device recorded and sent a private conversation

Daily Mail

Be careful what you say around your Echo devices. A Portland woman was shocked to discover that her Echo had recorded a private conversation and sent the audio to one of her contacts without her knowledge, according to KIRO 7. The woman, identified only as Danielle, said her family had installed the popular voice-activated speakers throughout their home. It wasn't until the contact called to let them know that he'd received a call from Alexa that they realized their device had mistakenly transmitted a private conversation. The contact, one of her husband's employees, told the woman to 'unplug your Alexa devices right now.' 'We unplugged all of them and he proceeded to tell us that he had received audio files of recordings from inside our house,' the woman said.


All-terrain microbot moves by tumbling over complex topography

@machinelearnbot

The "microscale magnetic tumbling robot," or μTUM (microTUM), is about 400 by 800 microns, or millionths of a meter, smaller than the head of a pin. A continuously rotating magnetic field propels the microbot in an end-over-end or sideways tumbling motion, which helps the microbot traverse uneven surfaces such as bumps and trenches, a difficult feat for other forms of motion. "The μTUM is capable of traversing complex terrains in both dry and wet environments," said David Cappelleri, an associate professor in Purdue University's School of Mechanical Engineering and director of Purdue's Multi-Scale Robotics and Automation Lab. Findings are detailed in a research paper published online Feb. 3 in the journal Micromachines. The paper was authored by Purdue graduate student Chenghao Bi; postdoctoral research associate Maria Guix; doctoral student Benjamin V. Johnson; Wuming Jing, an assistant professor of mechanical engineering at Lawrence Technological University; and Cappelleri.


Using machine learning tools to gain new insights from earthquake data - Tech Explorist

#artificialintelligence

Scientists at Columbia University have discovered a totally new way to study earthquakes. Using machine learning algorithms, they picked out different types of earthquakes from three years of recordings. According to them, these machine learning methods pick out very subtle differences in the raw data that we're just learning to interpret. The scientists focused on earthquake recordings from The Geysers in California, one of the world's oldest and largest geothermal fields, assembling a catalog of 46,000 earthquake recordings, each represented as energy waves in a seismogram.


Researchers use AI to turn smartphones into lab-grade microscopes

#artificialintelligence

Today we are on the verge of being able to give hundreds of millions of people around the world who still have no formal access to primary or secondary healthcare powerful smartphone-based detection and diagnostic tools. With nothing more than a smartphone and a free app, these tools could help detect the onset of everything from dementia and general disease to heart conditions, inherited genetic disorders, pancreatic cancers, and skin cancer. As a result, the breakthrough could have big implications, especially for healthcare in remote regions, and even for environmental and pollution monitoring. It opens up the possibility of one day soon giving people in deprived, poor or remote areas, who don't have access to labs or lab-grade microscopy, an easy way to analyse environmental and medical samples. It also feeds the trend of democratising access to primary and secondary healthcare, where smartphones and apps can be used to diagnose a wide range of conditions and produce "better healthcare outcomes." The team's technique is low cost and simple, using attachments that can be inexpensively produced with a 3D printer for less than $100 apiece, versus the thousands of dollars it would cost to buy lab-grade equipment that produces images of similar quality. Cameras on today's smartphones are designed to photograph people and scenery, not to produce high-resolution microscopic images, so the researchers had to develop an attachment that could be placed over the smartphone's lens to increase the resolution and the visibility of tiny details in the photos it takes, down to a scale of approximately one millionth of a meter.



Understanding the AI Skills Gap

#artificialintelligence

However, it's become clear that there simply aren't enough people skilled in AI to meet demand, a state that's increasingly being viewed as a crisis. According to a recent study by EY, 80 percent of experts believe there's a shortage of AI talent, even though the benefits of AI have never been clearer or more attainable. How can tech keep up? Although demand for AI continues to grow, EY's research shows a lack of clarity from the top: 53 percent of respondents point to a lack of AI insight in current business practices, while 48 percent believe a lack of managerial understanding and sponsorship plays a role in slow adoption. The tech field, by and large, is nimble, and developers are happy to adapt to change.


Social media posts may signal whether a protest will become violent

#artificialintelligence

A USC-led study of violent protest has found that moral rhetoric on Twitter may signal whether a protest will turn violent. The researchers also found that people are more likely to endorse violence when they moralize the issue that they are protesting--and when they believe that others in their social network moralize that issue, too. "Extreme movements can emerge through social networks," said the study's corresponding author, Morteza Dehghani, a researcher at the Brain and Creativity Institute at USC. "We have seen several examples in recent years, such as the protests in Baltimore and Charlottesville, where people's perceptions are influenced by the activity in their social networks. People identify others who share their beliefs and interpret this as consensus. In these studies, we show that this can have potentially dangerous consequences."



The human brain got so big because life was tough in ancient Africa

Daily Mail

Coping with harsh conditions, rather than social challenges, was chiefly responsible for boosting the size of our brains, a new study has found. The research found 'ecological' challenges like finding food and lighting fires boosted the capacity of our ancestors to think ahead. The finding may settle a decades-long debate on the origins of human intelligence and our social relationships, scientists said. According to the new research, the human brain got so big because life was tough on the African savannah around two million years ago. The human brain has tripled in size compared to that of our ancestor Australopithecus afarensis, which roamed the Earth more than 3 million years ago.