If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
Researchers at Yale University have developed a method of leveraging artificial intelligence (AI) neural networks to reveal patterns of activity in individual cells drawn from multiple individuals. The AI neural network, called SAUCIE (Sparse Autoencoder for Clustering, Imputation, and Embedding), can reveal minute cellular differences within individuals, as well as broader patterns that describe how the body functions. The new method will allow researchers to identify larger clusters of cellular activity that could shed light on how a host responds to pathogens. For example, the team used SAUCIE to analyze 20 million cells from 60 patients and identify rare Gamma-Delta T cell types that regulate how the body responds to the virus that causes Dengue fever.
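To make the underlying idea concrete: a sparse autoencoder compresses each cell's measurements into a small embedding, and a sparsity penalty on the hidden units encourages cluster-like structure. The sketch below is illustrative only, not the SAUCIE implementation; the data, architecture, and hyperparameters are stand-ins chosen for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for single-cell data: 200 "cells" x 10 "markers".
X = rng.normal(size=(200, 10))

n_hidden = 3            # embedding dimension (illustrative)
lr = 0.01               # gradient-descent learning rate
sparsity_weight = 1e-3  # L1 penalty on hidden activations

# Tied-weight linear autoencoder: encode h = X W, decode X_hat = h W^T.
W = rng.normal(scale=0.1, size=(10, n_hidden))

def loss(W):
    h = X @ W
    X_hat = h @ W.T
    return np.mean((X - X_hat) ** 2) + sparsity_weight * np.abs(h).mean()

initial = loss(W)
for _ in range(1000):
    h = X @ W
    X_hat = h @ W.T
    err = X_hat - X
    # Gradient of the mean-squared reconstruction term w.r.t. the
    # tied weights (W appears in both encoder and decoder).
    grad = (X.T @ err @ W + err.T @ X @ W) * (2 / X.size)
    # Subgradient of the L1 sparsity penalty on the embedding.
    grad += sparsity_weight * (X.T @ np.sign(h)) / h.size
    W -= lr * grad

final = loss(W)
embedding = X @ W  # per-cell embedding, usable for clustering
```

In the real method the embedding (and the sparse hidden activations) would then feed clustering and visualization; here the point is only how a sparsity-penalized autoencoder turns high-dimensional per-cell measurements into a compact representation.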
It is undeniable that our lives have been made better by artificial intelligence (AI). AI technology allows us to get almost anything, anytime, anywhere in the world at the click of a button; prevent disease epidemics and keep them from spiralling out of control; and generally make day-to-day life a bit easier by helping us save energy, book a babysitter, and manage our cash and our health, all at very low cost. AI's penetration into systems and processes in virtually all sectors of business and life has been rapid and global. The speed and scale at which AI is proliferating does, however, raise the question of how at-risk we may be that the AI we are building for good could also be introducing damaging bias at scale. In this two-part series, I explore the issues with AI constructs — the good, the bad and the ugly — and how we can think about shaping a future through AI in financial services that helps lift people up rather than scaling problems up.
The U.S. continues to see a rise in the number of sexually transmitted diseases, according to health officials -- and in Hawaii, the increase is believed to be linked to online dating. Health officials in the Aloha State have reported a significant increase in chlamydia, gonorrhea and syphilis. All three of the infections were at or near their highest rates in about 30 years.
Recent advancements in the field of computer vision (CV) have led to new applications that could benefit people globally, and especially those in developing countries. To bring the CV community closer to tasks, data sets, and applications that can have a global impact, Facebook AI launched the Computer Vision for Global Challenges (CV4GC) initiative earlier this year. Through a series of academic programs, mentorships, sponsorships, and events, CV4GC brings together field experts from around the world to discuss potential CV applications to address issues that affect developing regions. One such program is the CV4GC request for proposals, a research award opportunity that launched in February with the goal of supporting research that aligns with CV4GC's mission. We were particularly interested in proposals that extended CV technology to achieve global development priorities, especially those captured in the United Nations' Sustainable Development Goals.
Of the 10 million people in the world diagnosed with tuberculosis (TB), including drug-resistant strains, in 2017, 2.7 million live in India, making it the country with the highest burden of the disease, according to the World Health Organization (WHO). What's worse, many remain undiagnosed, and those who are detected with TB are often diagnosed only weeks after they contract it. With this delay, these unsuspecting carriers spread the disease to others in their homes or workplaces. This is a particular risk for young children. Using a contact-free sensor placed under the mattress, Dozee tracks and analyzes heart health, respiration, sleep quality, stress levels and more.
For the first time ever, a human drug has been created entirely by artificial intelligence (AI). This news comes from a team at Flinders University in Australia, which claims to have created an enhanced influenza vaccine using an AI program known as the Search Algorithm for Ligands (SAM). Though computers have been used to make drugs before, this was the first time it was done independently by an AI system. The researchers described this drug as a flu vaccine with an added compound that better stimulates the human immune system. This addition causes more antibodies to be formed against the flu virus than with the traditional vaccination, increasing the vaccine's efficacy.
The U.S. Centers for Disease Control and Prevention has named Carnegie Mellon University as an Influenza Forecasting Center of Excellence, a five-year designation that includes $3 million in research funding. For four of the past five years, Carnegie Mellon's forecasting efforts have proven the most accurate of all the research groups participating in the CDC's FluSight Network. In addition to expanding CMU's existing forecasting research, the new funding will enable CMU to initiate studies on how to best communicate forecast information to the public and to leaders. It will also support efforts to determine how forecasting techniques might apply to pandemics -- the rare occasions when a truly novel strain of flu is prevalent around the world. Roni Rosenfeld, head of CMU's Machine Learning Department and leader of its epidemic forecasting efforts, said the designation of CMU and the University of Massachusetts at Amherst as the first two CDC flu forecasting centers of excellence marks a coming of age for the epidemic forecasting community.
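For context on what "flu forecasting" involves: groups in the CDC's FluSight effort predict weekly influenza-like-illness (ILI) activity ahead of time. As a hedged illustration only — this is a generic seasonal-naive baseline, not CMU's actual models, and the numbers are made up — here is the kind of simple reference forecast that real models must beat:

```python
import statistics

# Hypothetical weekly ILI rates (%) for three past seasons, 4 weeks each.
past_seasons = [
    [1.2, 1.8, 2.5, 1.9],
    [1.0, 1.6, 2.8, 2.0],
    [1.4, 2.0, 2.6, 1.8],
]

def seasonal_naive_forecast(seasons):
    """Forecast each week of the coming season as the mean of that
    week's value across past seasons -- a standard baseline."""
    return [statistics.mean(week_vals) for week_vals in zip(*seasons)]

forecast = seasonal_naive_forecast(past_seasons)
```

A baseline like this captures only the recurring seasonal shape; the value of machine-learning forecasters lies in beating it, especially in unusual seasons — which is exactly where the pandemic question raised above becomes hard, since a truly novel strain has no past seasons to average over.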
I. Is AI doing any good at all? Researchers, entrepreneurs, and policy-makers are increasingly using AI to tackle development challenges. In other words, using AI for a greater good is a real thing. However, it is becoming clear that AI poses as many threats as benefits, although the former are usually neglected. I do not want to get into trust, accountability, or safety issues in this short piece (if you want, there is more here), but avoiding the negative effects of AI is why incorporating a set of ethical principles into our technology development process is paramount. Ethics plays a key role by ensuring that regulations of AI harness its potential while mitigating its risks (Taddeo and Floridi, 2018), and it helps us understand how to responsibly use the power this technology brings.
Scott Sanchez used to have a hard time deciding what to eat, especially when he was traveling. The 42-year-old wanted to lose weight and found he needed to dissect a menu with the waiter before he could order. It was a challenge, he says, but one that gave birth to The Fit, a menu personalization platform that uses artificial intelligence to give restaurant brands and their customers the option to customize their menu and food choices. At least 32 million Americans -- including 5 million children -- have food allergies, according to nonprofit Food Allergy Research & Education. Whenever they eat out, they need to make sure there are no ingredients in the food that could trigger an allergic reaction.
By the end of this decade, a milestone is reached in artificial intelligence, with computers now routinely passing the Turing Test. This test is conducted by a human judge who engages in a natural language conversation with one human and one machine, each of which tries to appear human. Participants are placed in isolated locations. For several decades, information technology had seen exponential growth – leading to vast improvements in computer processing power, memory, bandwidth, voice recognition, image recognition, deep learning and other software algorithms. By the end of the 2020s, it has reached the stage where an independent judge is literally unable to tell which is the real human and which is not. Answers to certain "obscure" questions posed by the judge may appear childlike coming from the AI – but they are humanlike nonetheless.