If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
After decades of a heavy slog with no promise of success, quantum computing is suddenly buzzing! Nearly two years ago, IBM made a quantum computer available to the world: a 5-quantum-bit (qubit) resource now called the IBM Q Experience. It was more like a toy for researchers than a way of getting any serious number crunching done. But 70,000 users worldwide have registered for it, and the qubit count in this resource has now quadrupled.
Biopharmas are warming up to artificial intelligence (AI), but a series of challenges will need to be addressed before it becomes widely used by drug developers, a panel of industry executives agreed. Speaking at the 2019 Annual Meeting of NewYorkBIO in New York City yesterday, panelists identified those challenges as finding more and better data, integrating data from multiple sources, and creating partnerships to gather and analyze that data. The panel also cited challenges that go beyond data, such as attracting a new generation of professionals capable of applying AI and related technologies such as machine learning -- and adapting biopharmas to the new technologies. Those observations are in line with a study released today by The Pistoia Alliance, a global not-for-profit organization of more than 150 members established by executives from AstraZeneca, GlaxoSmithKline (GSK), Novartis, and Pfizer. The Alliance surveyed 190 life sciences professionals in the US and Europe, with 52% citing access to data, and 44% a lack of skills, as the two key barriers to adoption of AI and machine learning.
Adobe, the company behind the ubiquitous photo-editing program Photoshop, just unveiled a new artificial intelligence tool capable of spotting whether images have been manipulated. The research, which sprang from a partnership with scientists from UC Berkeley and funding from DARPA, focuses on edits made with Photoshop's "liquify" tool, which can subtly reshape and touch up parts of an image, according to an Adobe blog post. While Adobe doesn't plan to release the tool to the public, reports The Verge, it's a sign that the company is taking seriously the propagation of digitally altered, misleading media. To train the edit-detecting neural net, the Adobe scientists fed it pairs of images -- an undoctored photo of someone's face and a version that had been tweaked with the liquify tool. After enough training, the neural net could spot the edited face 99 percent of the time.
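The training setup described above -- pairs of an untouched image and an edited counterpart, with a model learning to tell them apart -- can be sketched in miniature. Everything below is hypothetical: a hand-made "roughness" feature and a learned threshold stand in for Adobe's convolutional network, and a simple pixel perturbation stands in for the liquify tool.

```python
import math
import random

random.seed(0)

def roughness(img):
    """High-frequency energy: sum of absolute adjacent-pixel differences."""
    return sum(abs(a - b) for a, b in zip(img, img[1:]))

def toy_edit(img):
    """Hypothetical stand-in for a liquify-style retouch: small local
    perturbations of pixel values."""
    return [p + random.uniform(-0.2, 0.2) for p in img]

# Smooth 1-D "photos" plus their edited counterparts form the training pairs,
# mirroring the undoctored/tweaked pairs fed to Adobe's network.
originals = [[math.sin((i + k) / 8.0) for i in range(64)] for k in range(40)]
edited = [toy_edit(img) for img in originals]

# "Training" here is just learning a decision threshold on the feature.
mean = lambda xs: sum(xs) / len(xs)
threshold = (mean([roughness(o) for o in originals])
             + mean([roughness(e) for e in edited])) / 2

def looks_edited(img):
    return roughness(img) > threshold

# Accuracy over both halves of the paired training set.
acc = mean([looks_edited(e) for e in edited] +
           [not looks_edited(o) for o in originals])
```

The real system learns its features from pixels rather than using a hand-picked statistic, but the contract is the same: paired examples in, a yes/no edit verdict out.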
"Reference resolution" is a considerable challenge in natural language processing -- in the context of AI assistants like Alexa, it entails correctly associating a word like "their" in an utterance like "play their latest album" with a given musician. Scientists at Amazon have previously addressed it by tapping AI that maps correspondences between variables used by different services, but these mappings tend to be application-specific and not particularly scalable. That's why now, researchers at the Seattle company are actively exploring a technique that rewrites commands in natural language by substituting names and other data for references (for instance, rewriting "Play their latest album" as "Play Imagine Dragons' latest album"). Given each word of an input sequence, their contextual query rewrite engine adds a word to an output sequence according to probabilities computed by the machine learning algorithm. They describe it in a paper ("Scaling Multi-Domain Dialogue State Tracking via Query Reformulation") that's scheduled to be presented at the conference of the North American Chapter of the Association for Computational Linguistics (NAACL).
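The rewriting step can be sketched as a toy substitution function. This is hypothetical and far simpler than Amazon's engine, which computes a probability at each output step of copying the input word versus emitting an entity name; here a hand-written context table stands in for those learned probabilities.

```python
def rewrite_query(utterance, context_entities):
    """Rewrite referring words using entities resolved from earlier turns.

    context_entities maps a referring word (e.g. "their") to the entity it
    resolves to -- a stand-in for the model's learned copy/substitute choice.
    """
    out = []
    for word in utterance.split():
        # At each step the real engine chooses between copying the input
        # word and emitting an entity name; here the table decides.
        out.append(context_entities.get(word.lower(), word))
    return " ".join(out)

# Entity resolved from an earlier dialogue turn (hypothetical example
# matching the one in the text).
history = {"their": "Imagine Dragons'"}
print(rewrite_query("Play their latest album", history))
# -> Play Imagine Dragons' latest album
```

The appeal of this formulation is that downstream services never see the pronoun at all: they receive an ordinary, fully specified query, so no per-application variable mapping is needed.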
We are surrounded by surveillance cameras that record us at every turn. But for the most part, while those cameras are watching us, no one is watching what those cameras observe or record, because no one will pay for the armies of security guards that would be required for such a time-consuming and monotonous task. But imagine that all that video were being watched -- that millions of security guards were monitoring it all 24/7. Imagine this army is made up of guards who don't need to be paid, who never get bored, who never sleep, who never miss a detail, and who have total recall for everything they've seen. Such an army of watchers could scrutinize every person they see for signs of "suspicious" behavior.
According to the National Academy of Sciences, a sixth mass extinction is underway. Animal species are disappearing at 1,000 to 10,000 times the natural rate. We recently reported on scientists using artificial intelligence to analyze photos to help track at-risk species such as giraffes and whale sharks. Now AI is being used to analyze sound to help protect forest elephants in central Africa. Mainly due to poachers and habitat destruction, the number of forest elephants went from an estimated 100,000 in 2011 to fewer than 40,000 today.
An interdisciplinary center that integrates artificial intelligence, data science and genomic screening--the first of its kind in New York City--is slated to open in late 2021. Mount Sinai's Icahn School of Medicine announced on Tuesday the launch of the Hamilton and Amabel James Center for Artificial Intelligence and Human Health, which will be staffed by about 40 principal investigators and 250 graduate students, postdoctoral fellows and computer scientists. "Mount Sinai has consistently been at the forefront of advancing healthcare across medical disciplines and this initiative represents our next step forward in building on that legacy," says Kenneth Davis, MD, Mount Sinai Health System's president and CEO. "We see the potential of artificial intelligence to radically transform the care that patients receive, and we want to shape and lead this effort." Davis added that the new center will serve as a "hub where our talented researchers can collaborate in unprecedented ways and bring forward ideas and innovative technologies that achieve better outcomes for our patients."
To answer critical questions about the climate and how it's changing, scientists are pressing sophisticated Earth system models to solve increasingly complex equations. The result is more detailed simulations -- and also more demand for the scarce supercomputing resources needed to run them. NCAR scientists used machine learning to emulate the results of the "bin microphysics" parameterization, a package of equations used to simulate the formation and evolution of clouds inside a climate model. While bin microphysics gives a more realistic representation of clouds than the simpler "bulk microphysics" scheme, scientists often cannot afford the computing resources needed to run bin microphysics for long periods of time. The use of machine learning may allow scientists to approximate the results of bin microphysics in a computationally efficient way.
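The emulation idea itself is straightforward to sketch: sample the expensive scheme offline, fit a cheap surrogate to those samples, and call the surrogate inside the model loop. The sketch below is illustrative only -- a one-dimensional stand-in function and piecewise-linear interpolation take the place of the real bin-microphysics equations and NCAR's trained emulator.

```python
import math

def expensive_scheme(x):
    """Hypothetical stand-in for a costly parameterization (NOT the real
    bin-microphysics equations): some nonlinear function of its input."""
    return math.tanh(2.0 * x) + 0.1 * x

def train_emulator(n_samples=201, lo=-2.0, hi=2.0):
    """Sample the expensive scheme once, offline, and return a cheap
    surrogate. A lookup table with linear interpolation stands in for
    a trained neural network."""
    xs = [lo + (hi - lo) * i / (n_samples - 1) for i in range(n_samples)]
    ys = [expensive_scheme(x) for x in xs]

    def emulator(x):
        # Clamp to the training range, find the bracketing samples,
        # and interpolate linearly between them.
        x = min(max(x, lo), hi)
        t = (x - lo) / (hi - lo) * (n_samples - 1)
        i = min(int(t), n_samples - 2)
        frac = t - i
        return ys[i] * (1 - frac) + ys[i + 1] * frac

    return emulator

emu = train_emulator()
# Check the surrogate at points that are not on the training grid.
worst = max(abs(emu(x) - expensive_scheme(x)) for x in (-1.53, -0.31, 0.77, 1.9))
```

The payoff mirrors the article's point: the expensive scheme is evaluated only during training, while the long climate run calls the fast surrogate -- trading a small approximation error for a large saving in compute.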
A new neural network developed by researchers from the Massachusetts Institute of Technology is capable of constructing a rough approximation of an individual's face based solely on a snippet of their speech, a paper published on the preprint server arXiv reports. The team trained the artificial intelligence tool--a machine learning algorithm programmed to "think" much like the human brain--with the help of millions of online clips capturing more than 100,000 different speakers. Dubbed Speech2Face, the neural network used this dataset to determine links between vocal cues and specific facial features; as the scientists write in the study, age, gender, the shape of one's mouth, lip size, bone structure, language, accent, speed and pronunciation all factor into the mechanics of speech. According to Gizmodo's Melanie Ehrenkranz, Speech2Face draws on associations between appearance and speech to generate photorealistic renderings of front-facing individuals with neutral expressions. Although these images are too generic to identify as a specific person, the majority of them accurately pinpoint speakers' gender, race and age.
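The supervised setup the article describes -- learning associations between vocal cues and attributes from paired examples -- can be illustrated in miniature. This is emphatically not Speech2Face's architecture (a deep network trained on raw audio/face pairs); here a single learned threshold on a hypothetical average-pitch feature predicts a coarse "vocal register" label.

```python
def train_threshold(pairs):
    """pairs: (pitch_hz, label) tuples with labels "low"/"high".
    Pick the split point that makes the fewest training errors."""
    best_t, best_err = None, len(pairs) + 1
    for t in sorted(p for p, _ in pairs):
        err = sum((p >= t) != (label == "high") for p, label in pairs)
        if err < best_err:
            best_t, best_err = t, err
    return best_t

# Hypothetical paired training data: a vocal cue and the attribute it
# co-occurs with, standing in for millions of audio/face clip pairs.
training_pairs = [(110, "low"), (125, "low"), (210, "high"), (230, "high")]
t = train_threshold(training_pairs)
predict = lambda pitch_hz: "high" if pitch_hz >= t else "low"
print(t, predict(150), predict(220))
# -> 210 low high
```

The contrast with the real system is scale: Speech2Face learns thousands of such correlations jointly from raw signal, which is also why its outputs capture broad demographic attributes rather than an identifiable face.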
AI is rapidly changing everything. It is transforming society, the very way we live and act as social creatures, how we behave, how we do business, and even the very fabric of our own human identity. Cities are managed through machine-driven big data gathered by sensors that together constitute what we know as the Internet of Things. An escalating digital transformation is converting our vast amounts of paper ledgers into digital records, from traffic to finance to medical files. These records, which hold most of the data gathered about us, are now processed in the cloud, augmented with machine learning's many tools, and increasingly stored in blockchain distributed ledgers.