If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
The mammalian neocortex is one of the most intricate entities found in nature, both in structure and in function. It is the brain region responsible for high-order functions, including sensory perception, motor control, cognition, and speech. Its development is equally complex: it requires that millions to billions of neurons (depending on the species) assemble into distinct layers and connect with exquisite precision to perform complicated information-processing operations. During embryonic development, formation of the cerebral cortex involves the migration of excitatory neurons generated in the ventricular zone toward the cortical plate, where they establish their final positions in six well-defined horizontal layers, each with distinct neuron types and architecture. During this migratory phase, developing neurons undergo a transition from a multipolar to a bipolar morphology.
Open any newspaper, on-screen or off, and you'll find that scientific controversy underlies many of the day's most hotly debated issues. The arguments surrounding genetically modified organisms, the threat of artificial intelligence to human existence, and stem cell research are prime examples. Science, a domain that we might naively expect to provide objective knowledge and definitive answers, has always been contested and will remain so. What is the non-expert--that is, most of us--to do? For most issues, interpreting research findings or parsing the academic debate is infeasible.
Before Josh McDermott was a neuroscientist, he was a club DJ in Boston and Minneapolis. He saw first-hand how music could unite people in sound, rhythm, and emotion. "One of the reasons it was so fun to DJ is that, by playing different pieces of music, you can transform the vibe in a roomful of people," he says. With his club days behind him, McDermott now investigates the effects of sound and music in his lab at the Massachusetts Institute of Technology, where he is an assistant professor in the Department of Brain and Cognitive Sciences. In 2015, he, his post-doctoral colleague Sam Norman-Haignere, and Nancy Kanwisher, a professor of cognitive neuroscience at MIT, made news by locating a neural pathway activated by music and music alone.
Every year, more than a billion people around the world celebrate Chinese New Year and engage in a subtle linguistic dance with luck. You can think of it as a set of holiday rituals that resemble a courtship. To lure good fortune into their lives, they may decorate their homes and doors with paper cutouts of lucky words or phrases. Those who need a haircut make sure to get one before the New Year, as the word for "hair" (fa) sounds like the word for "prosperity"--and who wants to snip away prosperity, even if it's just a trim? The menu of food served at festive meals often includes fish, because its name (yu) sounds the same as the word for "surplus"; a type of algae known as fat choy because in Cantonese it sounds like "get rich"; and oranges, because in certain regions their name sounds like the word for "luck."
The brain is complex; in humans it consists of about 100 billion neurons, making on the order of 100 trillion connections. It is often compared with another complex system that has enormous problem-solving power: the digital computer. Both the brain and the computer contain a large number of elementary units--neurons and transistors, respectively--that are wired into complex circuits to process information conveyed by electrical signals. At a global level, the architectures of the brain and the computer resemble each other, consisting of largely separate circuits for input, output, central processing, and memory [1]. Which has more problem-solving power--the brain or the computer? Given the rapid advances in computer technology in the past decades, you might think that the computer has the edge.
Nanosize pores can turn semimetallic graphene into a semiconductor and transform this impermeable material into the most efficient molecular-sieve membrane. However, scaling the pores down to the nanometer, while fulfilling the tight structural constraints imposed by applications, represents an enormous challenge for present top-down strategies. Here we report a bottom-up method to synthesize nanoporous graphene comprising an ordered array of pores separated by ribbons, which can be tuned down to the 1-nanometer range. The size, density, morphology, and chemical composition of the pores are defined with atomic precision by the design of the molecular precursors. Our electronic characterization further reveals a highly anisotropic electronic structure, where orthogonal one-dimensional electronic bands with an energy gap of 1 electron volt coexist with confined pore states, making the nanoporous graphene a highly versatile semiconductor for simultaneous sieving and electrical sensing of molecular species.
Machine learning methods are becoming integral to scientific inquiry in numerous disciplines. We demonstrated that machine learning can be used to predict the performance of a synthetic reaction in multidimensional chemical space using data obtained via high-throughput experimentation. We created scripts to compute and extract atomic, molecular, and vibrational descriptors for the components of a palladium-catalyzed Buchwald-Hartwig cross-coupling of aryl halides with 4-methylaniline in the presence of various potentially inhibitory additives. Using these descriptors as inputs and reaction yield as output, we showed that a random forest algorithm provides significantly improved predictive performance over linear regression analysis. The random forest model was also successfully applied to sparse training sets and out-of-sample prediction, suggesting its value in facilitating adoption of synthetic methodology.
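The modeling comparison described above can be illustrated with a minimal sketch. This is not the paper's actual pipeline or dataset: the descriptors and "yield" surface below are synthetic, invented purely to show why a random forest can outperform linear regression when the descriptor-yield relationship is nonlinear.

```python
# Hypothetical sketch: random forest vs. linear regression on synthetic
# "reaction descriptor" data (not the actual Buchwald-Hartwig dataset).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)

# 500 hypothetical reactions, each described by 8 numeric descriptors
# (standing in for computed atomic charges, vibrational frequencies, etc.).
X = rng.normal(size=(500, 8))
# A nonlinear "yield" surface: an interaction term plus an absolute-value
# term and noise -- structure a purely linear model cannot capture.
y = 50 + 10 * X[:, 0] * X[:, 1] - 5 * np.abs(X[:, 2]) + rng.normal(scale=2, size=500)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

linear = LinearRegression().fit(X_train, y_train)
forest = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)

print("linear R^2:", r2_score(y_test, linear.predict(X_test)))
print("forest R^2:", r2_score(y_test, forest.predict(X_test)))
```

Because the synthetic yield depends on a product of descriptors, the linear model's test R^2 stays near zero, while the tree ensemble recovers much of the signal; the same qualitative gap is what the study reports on real high-throughput data.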
The recent development of single-cell genomic techniques allows us to profile gene expression at the single-cell level easily, although many of these methods have limited throughput. Rosenberg et al. describe a strategy called split-pool ligation-based transcriptome sequencing, or SPLiT-seq, which uses combinatorial barcoding to profile single-cell transcriptomes without requiring the physical isolation of each cell. The authors used their method to profile 100,000 single-cell transcriptomes from mouse brains and spinal cords at 2 and 11 days after birth. Comparisons with in situ hybridization data on RNA expression from Allen Institute atlases linked these transcriptomes with spatial mapping, from which developmental lineages could be identified.
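The arithmetic behind combinatorial barcoding is simple and worth making explicit: each split-pool round multiplies the number of possible barcode combinations, so capacity grows exponentially with rounds. The sketch below uses hypothetical well counts and round numbers (real SPLiT-seq rounds differ in barcode counts) to show how the combination space scales and why collisions between cells become rare.

```python
# Illustrative arithmetic for split-pool combinatorial barcoding.
# Well counts and round numbers here are hypothetical examples.

def barcode_space(wells_per_round, rounds):
    """Distinct barcode combinations after the given split-pool rounds."""
    return wells_per_round ** rounds

def expected_collisions(n_cells, space):
    """Approximate number of cell pairs sharing a full barcode,
    via the birthday-problem estimate n*(n-1) / (2 * space)."""
    return n_cells * (n_cells - 1) / (2 * space)

space3 = barcode_space(96, 3)   # 3 rounds in 96-well plates
print(space3)                   # 884736 combinations

# For 100,000 cells, estimate how many pairs would collide.
print(expected_collisions(100_000, space3))

# Adding a fourth round multiplies capacity by 96, so collisions
# drop by roughly two orders of magnitude.
space4 = barcode_space(96, 4)
print(expected_collisions(100_000, space4))
```

This is why no physical isolation of cells is needed: with enough rounds, the probability that two cells walk the same path through the wells becomes negligible.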
For more than half a century, U.S. government officials have considered disaster scenarios, such as the consequences of a nuclear bomb going off in Washington, D.C. Only now, instead of following fixed story lines and predictions assembled ahead of time, they are using computers to play what-if with an entire artificial society: an advanced type of computer simulation called an agent-based model. Today's version of the nuclear attack model includes a digital simulation of every building in the area affected by the bomb, as well as every road, power line, hospital, and even cell tower. The model includes weather data to simulate the fallout plume. And the scenario is peopled with some 730,000 agents.
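The core idea of an agent-based model is that global behavior emerges from many simple agents updated step by step, rather than from a fixed storyline. The toy sketch below is hypothetical and vastly simpler than the 730,000-agent government simulations described above: a population of agents, a few seeded with an "exposed" state, spreading it through random contacts each tick. Changing the parameters and rerunning is the "what-if" play the article describes.

```python
# Minimal agent-based model sketch (a toy, not the fallout model above):
# agents hold a state and interact through random contacts each tick.
import random

random.seed(0)

N = 1000
states = ["S"] * N                     # all agents start susceptible
for i in random.sample(range(N), 5):
    states[i] = "E"                    # seed 5 initially exposed agents

def step(states, contacts=10, p_transmit=0.1):
    """One tick: each exposed agent meets `contacts` random agents,
    exposing each susceptible one with probability `p_transmit`."""
    new = list(states)
    for i, s in enumerate(states):
        if s == "E":
            for j in random.sample(range(N), contacts):
                if states[j] == "S" and random.random() < p_transmit:
                    new[j] = "E"
    return new

for t in range(20):
    states = step(states)

print("exposed after 20 ticks:", states.count("E"))
```

Even this toy shows the key property of agent-based models: the outcome is not scripted anywhere in the code; it emerges from repeated local interactions, and rerunning with different contact rates or transmission probabilities answers a different what-if question.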