If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
Pretrained artificial neural networks used to work like a black box: you hand them an input and they predict an output with a certain probability, but without us knowing the internal process by which they arrived at that prediction. A neural network for recognizing images usually consists of around 20 neuron layers, trained on millions of images to tweak the network's parameters until it gives high-quality classifications. The layers consist of neurons trained to forward information only if they recognize one specific image feature, producing an action potential that serves as an input for the neurons of the next, deeper layer. Each layer receives the information of the previous layer and supplies information to the next one, until the output layer states the network's prediction. How many neurons of a given layer fire their action potentials indicates how strongly that layer recognized its trained features in the provided image.
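The layer-by-layer flow described above can be sketched in a few lines. This is a minimal toy model, not a real image classifier: the four-layer sizes, random untrained weights, and the use of ReLU as the "fires or stays silent" rule are all illustrative assumptions, but the mechanics mirror the paragraph — each layer passes its activations to the next, and we can count how many neurons in each layer "fired."

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy network: 64 input features -> two hidden layers -> 3 output classes.
# (Real image networks have ~20 layers and learned, not random, weights.)
layer_sizes = [64, 32, 16, 3]
weights = [rng.normal(0, 0.5, (m, n))
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(x):
    """Propagate an input through the layers, recording how many neurons fire."""
    fired_per_layer = []
    for w in weights[:-1]:
        x = np.maximum(0, x @ w)              # ReLU: forward info only if activated
        fired_per_layer.append(int((x > 0).sum()))
    logits = x @ weights[-1]
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                      # softmax: the network's prediction
    return probs, fired_per_layer

probs, fired = forward(rng.normal(size=64))
print("class probabilities:", probs.round(3))
print("neurons fired per hidden layer:", fired)
```

The `fired` counts are the toy analogue of the paragraph's closing point: stronger feature recognition shows up as more neurons in a layer emitting their action potential.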
It's a bright April day in Boston, and Gabi Zijderveld, a pioneer in the field of emotional artificial intelligence, is trying to explain why teaching robots to feel is as important as teaching them to think. "We live in a world surrounded by all these super-advanced technologies, hyper-connected devices, AI systems with super cognitive abilities -- or, as I like to say, lots of IQ but absolutely no EQ," says Zijderveld, chief marketing officer of Affectiva, the startup that spun out of the MIT Media Lab 10 years ago to build emotionally intelligent machines. "Just like humans that are successful in business and in life -- they have high emotional intelligence and social skills -- we should expect the same with technology, especially for these technologies that are designed to interact with humans." Giving machines a soul has been a dream of scientists, and sci-fi writers, for decades. But until recently, the idea of robots with heart was the stuff of moviemaking.
The treatment of mental health conditions appears to have received a boost with a recently announced research collaboration between digital mental health company SilverCloud Health and Microsoft Research. The partnership was designed to further enhance the former's online offering with artificial intelligence. A little background: During the past 18 months, the two have worked in tandem on research that marries Microsoft's machine learning and AI technologies with SilverCloud, which specializes in the digital delivery of evidence-based mental healthcare to improve outcomes. Ken Cahill, CEO of Boston-based SilverCloud, said the technology enables "very tailored support" for each patient, meaning "more responsive and reactive care." He called that process a "big departure" from existing digital delivery that's generic or one-size-fits-all and doesn't account for factors such as behavior, engagement, and effectiveness.
Artificial intelligence (AI) solutions are bringing about a renaissance in people's daily lives and in business operations globally. AI is designed to be fast and efficient and to surpass human abilities in ways that will simplify the tasks, activities and issues that users and corporations come across on a daily basis. But is this kind of new "intelligence" merely a technology, or can it take on characteristics besides reason and logic that set humans apart? More specifically, what will be the role of emotion in the way the technology operates, and will it ever catch up with the human ability to sense and feel? It is no secret that AI is built upon the concepts of pattern recognition and training, which allows it to take over more mundane, time-consuming and low-involvement tasks.
According to the Centers for Disease Control and Prevention (CDC), an estimated 50 million adults in the U.S. suffered from chronic pain in 2016, and according to the Substance Abuse and Mental Health Services Administration (SAMHSA), an estimated 10.3 million people in the U.S. ages 12 and older misused opioids in 2018. As such, the National Institutes of Health (NIH) has announced the awarding of $945 million in research grants to tackle the national opioid crisis through the NIH HEAL Initiative (Helping to End Addiction Long-term Initiative). The UC San Francisco Department of Radiology and Biomedical Imaging is pleased to announce that one such project is the Back Pain Consortium (BACPAC) Research Program, of which Sharmila Majumdar, PhD, vice chair for Research, is a part. At this time, chronic low back pain is one of the most common forms of chronic pain in adults, and current treatments are ineffective, leading to increased use of opioids. This research will also lay the foundation for NIH-funded research at the newly established Center for Intelligent Imaging, using artificial intelligence-fueled algorithms for fast image acquisition, data analysis, quantitative sensory assessments, brain imaging, and biomechanical evaluation of the spine.
One of the biggest issues with Artificial Intelligence and Data Science is the integrity of our data. Even if we did all the right things in our models and our testing, and the data conformed to some technical standard of "cleanliness," there might still be biases in our data as well as "common sense" issues. With Big Data, it is difficult to reach a certain granularity of data validity without proper real-world testing. By real-world testing, we mean that when data is being used to make decisions, we — as consumers, as testers, as programmers, as data scientists — look at groups of scenarios to see if the decisions made conform to a kind of "common sense" standard. This is when we discover the most important biases in our data.
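The "groups of scenarios" check described above can be sketched concretely. Everything here is a hypothetical stand-in — the scenario records, the group labels, and the 0.25 divergence threshold are illustrative assumptions, not a production fairness test — but it shows the basic move: compare decision outcomes across groups and flag the ones that fail a common-sense standard.

```python
from collections import defaultdict

# Hypothetical decision records: each scenario carries a group label and the
# decision a model made. In real-world testing these would come from live use.
scenarios = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "B", "approved": False},
    {"group": "B", "approved": True},
    {"group": "B", "approved": False},
]

def approval_rates(rows):
    """Per-group approval rate: group -> approved / total."""
    counts = defaultdict(lambda: [0, 0])
    for r in rows:
        counts[r["group"]][0] += r["approved"]
        counts[r["group"]][1] += 1
    return {g: a / t for g, (a, t) in counts.items()}

rates = approval_rates(scenarios)
overall = sum(r["approved"] for r in scenarios) / len(scenarios)

# Flag groups whose outcomes diverge sharply from the overall rate —
# candidates for a closer "common sense" review of the underlying data.
flagged = {g: rate for g, rate in rates.items() if abs(rate - overall) > 0.25}
print("rates:", rates, "flagged:", flagged)
```

A divergence flag is not proof of bias; it is the trigger for exactly the kind of human scenario review the passage calls for.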
Ieso's senior VP for artificial intelligence, Valentin Tablan, talks about the challenges of adopting technology in mental health and how things are changing through Ieso's Eight Billion Minds program. Compared to physical medicine, mental health has traditionally been slow in its adoption of technology. There are multiple reasons for this, some psychological, some organisational, and some technological. Physical medicine has seen a lot of progress in the last century as advances in science and technology have led to a better understanding of diseases and patients. CAT and MRI scanners and advanced lab tests make it easier to diagnose, treat and design personalised interventions for medical conditions.
This post is offered as a concise overview of important advances in artificial intelligence that will soon impact the way mental health care is practiced in day-to-day clinical settings. The result will be more individualized treatment incorporating both conventional and evidence-based complementary and alternative medicine (CAM) modalities, more effective and more cost-effective treatments of many common mental health problems, and improved outcomes. To have practical clinical utility in medicine and mental health care, an AI system must encompass machine-learning software capable of processing very large volumes of structured data, and natural language processing (NLP) software capable of mining unstructured data such as narrative text in electronic health records and medical imaging data. To assist health-care providers with clinical decision-making, the AI system must be 'trained' to a requisite level of expertise within a particular domain of medical knowledge. Following completion of training, it is vital to keep the supply of pertinent medical data current.
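The two data paths described above — structured fields processed directly, and unstructured narrative text mined with NLP — can be sketched as follows. This is a deliberately simplified assumption-laden toy: the term list, the record fields, and the keyword-matching "NLP" step are hypothetical stand-ins for the trained, domain-expert models the post describes.

```python
import re

# Hypothetical vocabulary a clinical NLP component might be trained to detect.
RISK_TERMS = {"insomnia", "anhedonia", "hopelessness", "panic"}

# Hypothetical patient record mixing structured data and narrative text.
record = {
    "age": 34,
    "phq9_score": 14,   # structured field: a depression screening score
    "note": "Patient reports insomnia and feelings of hopelessness.",
}

def mine_note(text):
    """Toy NLP step: extract known clinical terms from unstructured narrative text."""
    tokens = set(re.findall(r"[a-z]+", text.lower()))
    return sorted(tokens & RISK_TERMS)

def features(rec):
    """Merge structured fields with terms mined from the unstructured note."""
    return {
        "age": rec["age"],
        "phq9_score": rec["phq9_score"],
        "note_terms": mine_note(rec["note"]),
    }

print(features(record))
```

A real system would replace `mine_note` with trained NLP models and feed the merged features into decision-support logic; the point here is only the shape of the pipeline: both data types end up in one representation a clinician-facing model can use.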