News concerning Artificial Intelligence (AI) abounds again. The progress with deep learning techniques is quite remarkable, with demonstrations such as self-driving cars, Watson winning at Jeopardy, and systems beating human Go players. This rate of progress has led some notable scientists and business people to warn about the potential dangers of AI as it approaches a human level. Exascale computers now under consideration would approach the raw computational capacity that many believe corresponds to that level. However, many questions about how the human brain works remain unanswered, notably the hard problem of consciousness with its integrated subjective experiences.
Artificial intelligence is taking image recognition tips from a real expert: the human brain. Using fMRI brain activity scans as a training tool has boosted the ability of machine learning algorithms to recognise objects. The technique could improve face recognition systems or help autonomous vehicles better understand their surroundings. Machine learning still lags far behind humans at tasks like object recognition, says David Cox at Harvard University. So his group trained algorithms to process images more like we do.
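The general idea described here, using measured brain activity as an auxiliary training signal, can be sketched as a toy loss function. Everything below (the data shapes, the penalty weight, the use of a simple mean-squared term) is a hypothetical illustration, not the actual method used by Cox's group: the model is penalized both for misclassifying an image and for internal features that diverge from fMRI-derived representations of the same image.

```python
import numpy as np

def combined_loss(logits, labels, features, fmri_features, weight=0.1):
    """Toy loss: softmax cross-entropy on class logits, plus a penalty
    pulling the model's internal features toward fMRI-derived
    representations of the same images (hypothetical 'neural' term)."""
    # Numerically stable softmax cross-entropy.
    shifted = logits - logits.max(axis=1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    ce = -log_probs[np.arange(len(labels)), labels].mean()
    # Mean squared distance between model features and fMRI features.
    neural = ((features - fmri_features) ** 2).mean()
    return ce + weight * neural

# Hypothetical batch: 4 images, 3 classes, 8-dimensional feature vectors.
rng = np.random.default_rng(0)
logits = rng.normal(size=(4, 3))
labels = np.array([0, 1, 2, 1])
features = rng.normal(size=(4, 8))
fmri = rng.normal(size=(4, 8))
print(combined_loss(logits, labels, features, fmri))
```

During training the gradient of this combined loss would nudge the network toward representations that agree with the brain data, which is the reported mechanism for the improved recognition.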
Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We'll also be posting a weekly calendar of upcoming robotics events for the next two months (send us your events!). Let us know if you have suggestions for next week, and enjoy today's videos. Japan recently announced a major robotics event for next year: the World Robot Summit will feature a series of competitions, talks, and exhibits.
In celebration of our 5th anniversary, this month we're publishing a series of interviews with innovative leaders about what the next five years hold. To learn more about this series, read our editor Nilay Patel's introduction here. Few subsidiaries at Alphabet Inc. inspire as much curiosity as Google X, now called simply "X." X is the company's innovation lab, where ambitious but far-fetched tech ideas are pitched, tested, and either come to life or are ultimately killed. It's where Google's self-driving car concept was developed, where giant internet access balloons were conceived, where glucose-monitoring contact lenses were first experimented with, and where burrito-delivering drones are part of a beta test for bigger things. And while more than 250 employees are behind these ambitious projects, for the past five years the face of X has been Astro Teller, the so-called "Captain of Moonshots."
We've written a lot about artificial intelligence (AI) here at Nanalyze, and just when we feel like there's not much more we can add to the topic, we find loads more interesting companies to write about. There has been a lot of talk lately about how machines just won't be able to capture that "human element" of emotions, or "emotional intelligence" as it is often called. The act of building an emotional quotient, or EQ, as a layer on top of AI is referred to as affective computing, a topic we covered before. The first step towards AI demonstrating emotional intelligence is perception: it needs to see emotions in our behaviour, hear them in our voices, and sense our anxieties. To do this, AI must be able to extract emotional cues or data from us through conventional means like eye tracking, galvanic skin response, voice and written word analysis, brain activity via EEG, facial mapping, and even gait analysis.
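The cue-extraction step can be illustrated with a deliberately simple fusion sketch. The modality names, weights, and thresholds below are all hypothetical; real affective-computing systems derive these per-modality signals from the sensors listed above (eye tracking, galvanic skin response, voice, EEG, and so on) before combining them.

```python
# Toy sketch of multimodal emotional-cue fusion (all values hypothetical).

def fuse_cues(cues, weights=None):
    """Combine per-modality arousal scores (0.0-1.0) into one weighted average."""
    weights = weights or {name: 1.0 for name in cues}
    total = sum(weights[name] for name in cues)
    return sum(cues[name] * weights[name] for name in cues) / total

def label_arousal(score, calm_below=0.35, anxious_above=0.65):
    """Map the fused score onto a coarse emotional label."""
    if score < calm_below:
        return "calm"
    if score > anxious_above:
        return "anxious"
    return "neutral"

# Hypothetical readings: dilated pupils, elevated skin response, mid-level voice stress.
cues = {"eye_tracking": 0.8, "galvanic_skin": 0.9, "voice": 0.5}
score = fuse_cues(cues, weights={"eye_tracking": 1.0, "galvanic_skin": 2.0, "voice": 1.0})
print(label_arousal(score))  # weighted average 0.775 -> "anxious"
```

A deployed system would replace the fixed weights with a learned model, but the pipeline shape is the same: per-modality scores in, a fused emotional estimate out.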