If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
London, United Kingdom -- Twenty-nine-year-old Angela* had not had more than an hour's sleep in two days when she heard a knock on her front door. Opening it, she was surprised to find a large parcel. "I haven't ordered anything," she told the deliveryman, who stood at a distance with his mask and gloves on. "It's from your son's school," he responded. Inside the parcel was an assortment of fresh and nonperishable food: pasta, lentils, chili con carne and long-life milk.
Multifunction glasses that can monitor your health, let you play video games with your eyes and still work as sunglasses have been developed by South Korean scientists. The groundbreaking new wearable tech, built at Korea University, Seoul, can provide more advanced personal health data than devices like Fitbits or smart watches. Devices that measure electrical signals from the brain or eyes can help to diagnose conditions like epilepsy and sleep disorders -- as well as to control computers. A long-running challenge in measuring these electrical signals, however, has been developing devices that can maintain the needed steady physical contact between the wearable's sensors and the user's skin. The researchers overcame this issue by integrating soft, conductive electrodes into their glasses that can wirelessly monitor the electrical signals.
'Passive' visual experiences play a key part in our early learning and should be replicated in AI vision systems, according to neuroscientists. Italian researchers argue there are two types of learning -- passive and active -- and both are crucial in the development of our vision and understanding of the world. Who we become as adults depends on these two types of stimulus in the first years of life: 'passive' observations of the world around us and 'active' learning of what we are taught explicitly. In experiments, the scientists demonstrated the importance of passive experience for the proper functioning of key nerve cells involved in our ability to see. This could lead to direct improvements in new visual rehabilitation therapies or in machine learning algorithms employed by artificial vision systems, they claim.
Amazon today announced the general availability of Multi-Capability Skills for Alexa, a way to combine smart home and custom Alexa apps into single, unified voice apps. Starting this week, developers can publish and maintain an Alexa app that enables both internet of things and third-party features for their devices, extending built-in smart home commands with custom voice interaction models to support nearly any feature without forcing customers to enable and invoke two separate apps. Before the advent of Multi-Capability Skills, Alexa developers had to publish and maintain multiple apps to enable custom features: a smart home app to leverage built-in smart home capabilities and a custom app to support capabilities not included in the Alexa smart home API. Now, they don't -- and customers don't have to remember two different app names. In this way, Multi-Capability Skills make it easier for developers to create better Alexa experiences.
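Concretely, the combination lives in a skill's manifest: a single skill can declare both the smart home and custom API interfaces instead of being split across two skills. The sketch below shows the general shape of such a manifest; the Lambda ARNs are placeholders and the fragment omits the publishing and privacy sections a real, publishable skill definition would need:

```json
{
  "manifest": {
    "apis": {
      "smartHome": {
        "endpoint": { "uri": "arn:aws:lambda:us-east-1:123456789012:function:ExampleSmartHome" }
      },
      "custom": {
        "endpoint": { "uri": "arn:aws:lambda:us-east-1:123456789012:function:ExampleCustom" },
        "interfaces": []
      }
    }
  }
}
```

With both entries in one manifest, built-in smart home utterances and the skill's custom interaction model are invoked under a single skill name, which is the customer-facing benefit the announcement describes.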
Headbands developed by BrainCo measure electrical signals from neurons in the brain and translate them into an attention score using an algorithm. These days, many students at Jinhua Xiaoshun Primary School in eastern China begin their lessons not by opening textbooks, but by putting on headbands. The headbands, developed by startup BrainCo Inc. of Somerville, Mass., use three electrodes -- one on the forehead and two behind the ears -- to detect electrical activity in the brain, sending the data to a teacher's computer. Software generates real-time alerts about students' attention levels and gives an analysis at the end of each class. The pilot project, designed to help teachers keep tabs on and improve students' attentiveness, offers a glimpse into an artificial-intelligence boom in classrooms across China.
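BrainCo does not publish its algorithm, but attention indices in the EEG literature are commonly built from band-power ratios (more beta activity relative to theta is read as greater engagement). The sketch below illustrates that general idea only; the function name `attention_score`, the band limits, and the synthetic test signal are all illustrative assumptions, not BrainCo's method:

```python
import numpy as np

def attention_score(eeg, fs=256):
    """Illustrative attention index for one window of single-channel EEG.

    Computes the beta/(beta + theta) band-power ratio, a common
    engagement proxy, and scales it to a 0-100 score.
    """
    eeg = eeg - np.mean(eeg)                      # remove DC offset
    spectrum = np.abs(np.fft.rfft(eeg)) ** 2      # power spectrum
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)

    def band_power(lo, hi):
        mask = (freqs >= lo) & (freqs < hi)
        return spectrum[mask].sum()

    theta = band_power(4, 8)    # associated with drowsiness/mind-wandering
    beta = band_power(13, 30)   # associated with focused attention
    ratio = beta / (theta + beta + 1e-12)
    return round(100 * ratio)

# One second of synthetic EEG dominated by 20 Hz (beta) activity,
# with a weaker 6 Hz (theta) component mixed in.
t = np.linspace(0, 1, 256, endpoint=False)
signal = np.sin(2 * np.pi * 20 * t) + 0.3 * np.sin(2 * np.pi * 6 * t)
print(attention_score(signal))
```

A real headband would compute something like this on short sliding windows so the teacher's dashboard can update in near real time.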
Researchers have developed an algorithm that can detect and identify different types of brain injuries. The team, from the University of Cambridge, Imperial College London and CONICET, clinically validated and tested their method on large sets of CT scans and found that it was able to detect, segment, quantify and differentiate different types of brain lesions. Their results, reported in The Lancet Digital Health, could be useful in large-scale research studies and for developing more personalised treatments for head injuries; with further validation, the method could also be applied in certain clinical scenarios, such as those where radiological expertise is at a premium. Head injury is a huge public health burden around the world, affecting up to 60 million people each year, and is the leading cause of mortality in young adults.
According to the World Health Organization, more than one billion people worldwide have disabilities. The field of disability studies defines disability through a social lens; people are disabled to the extent that society creates accessibility barriers. AI technologies offer the possibility of removing many accessibility barriers; for example, computer vision might help people who are blind better sense the visual world, speech recognition and translation technologies might offer real-time captioning for people who are hard of hearing, and new robotic systems might augment the capabilities of people with limited mobility. Considering the needs of users with disabilities can help technologists identify high-impact challenges whose solutions can advance the state of AI for all users; however, ethical challenges such as inclusivity, bias, privacy, error, expectation setting, simulated data, and social acceptability must be considered. The inclusivity of AI systems refers to whether they are effective for diverse user populations.
"This is a crazy idea," the review read. Closing my laptop lid, I added in my mind "and ... it will never work," as a lump welled in my throat. What we were proposing to do was simple yet ambitious: using functional magnetic resonance imaging to better understand what goes on in the minds of programmers as they read and understand code. We had performed pilot experiments with a neurobiologist, obtained promising results, and received encouraging words from colleagues and reviewers.
Peter Souza's employer trains adults on the autism spectrum for tasks and roles in IT. Years ago, Michael Fieldhouse had a dinner party and friends attended with their young son Andrew, who is autistic, non-verbal, and low-functioning. At one point, Fieldhouse noticed Andrew, who was five or six years old at the time, outside dropping pebbles into an urn in a Japanese garden. "I was curious about that and I started timing him," recalls Fieldhouse. "I noted there were perfect intervals between every stone. He did that for at least an hour."
Users can perform a function by pressing an Action Block. Google has announced a slew of updates to its suite of accessibility apps to celebrate Global Accessibility Awareness Day. One of the updates is the release of Action Blocks, which lets users create customisable home-screen buttons for relatively complex actions that typically require multiple steps, like playing music or calling somebody -- tasks that may be difficult for people with limited mobility or a cognitive disability. "For people with cognitive disabilities or age-related cognitive conditions, it can be difficult to learn and remember each of these steps. For others, it can be time consuming and cumbersome -- especially if you have limited mobility," Google said.