Here are two sets of statements from opposite poles of the climate change debate. The first is from Naomi Klein, who in her book This Changes Everything paints a bleak picture of a global socioeconomic system gone wrong: "There is a direct and compelling relationship between the dominance of the values that are intimately tied to triumphant capitalism and the presence of anti-environment views and behaviors." The second is from Larry Bell, professor of architecture and climate skeptic, whom Klein quotes in her book. He argues that climate change "has little to do with the state of the environment and much to do with shackling capitalism and transforming the American way of life ...". Let us put aside whether we agree or disagree with these statements or are offended by them.
Reprinted with permission from Quanta Magazine's Abstractions blog. "It's very easy to break things in biology," said Loren Frank, a neuroscientist at the University of California, San Francisco. "It's really hard to make them work better." Yet against the odds, researchers at the New York University School of Medicine reported earlier this summer that they had improved the memory of lab animals by tinkering with the length of a dynamic signal in their brains--a signal that has fascinated neuroscientists like Frank for decades. The feat is exciting in its own right, with the potential to enhance recall in people someday, too. But it also points to a more comprehensive way of thinking about memory, and it identifies an important clue, rooted in the duration of a neural event, that could pave the way to a greater understanding of how memory works. Since the 1980s, scientists have been tuning in to short bursts of synchronized neural activity in the brain area called the hippocampus.
Artificial intelligence, it seems, is now everywhere. Text translation, speech recognition, book recommendations, even your spam filter is now "artificially intelligent." But just what do scientists mean by "artificial intelligence," and what is artificial about it? Artificial intelligence is a term that was coined in the 1950s, and today's research on the topic has many facets. But most of the applications we now see are calculations done with neural networks.
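To make "calculations done with neural networks" concrete, here is a toy sketch of the arithmetic such a network performs: weighted sums passed through a nonlinearity, layer after layer. The weights below are made up purely for illustration; a real network learns them from data.

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: a weighted sum of the inputs,
    squashed through a sigmoid 'activation' into the range (0, 1)."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-z))

# A tiny two-input network: two hidden neurons feeding one output neuron.
# All weights and biases here are arbitrary illustration values.
inputs = [0.5, 0.8]
hidden = [
    neuron(inputs, [0.1, -0.4], 0.2),
    neuron(inputs, [0.7, 0.3], -0.1),
]
output = neuron(hidden, [1.2, -0.9], 0.05)
print(output)  # a single number between 0 and 1
```

Everything a modern image recognizer or spam filter does is, at bottom, this same calculation repeated across millions of neurons.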
In some ways, the history of science is the history of a philosophical resistance to mythical explanations of reality. In the ancient world, when we asked "Where did the world come from?" we were told creation myths. In the modern world, we are instead told a convincing scientific story: Big Bang theory, first proposed in 1927 by the Belgian Roman Catholic priest Georges Lemaître. It is based on observations that galaxies appear to be flying apart from one another, suggesting that the universe is expanding. Tracing this expansion back in space and time brings us nearly to the point of origin: the single primeval atom from which all the universe emerged nearly 14 billion years ago.
What is it exactly that makes humans so smart? In his seminal 1950 paper, "Computing Machinery and Intelligence," Alan Turing argued human intelligence was the result of complex symbolic reasoning. The cognitive scientist Marvin Minsky, cofounder of the artificial intelligence lab at the Massachusetts Institute of Technology, also maintained that reasoning--the ability to think in a multiplicity of hierarchical ways--was what made humans human. Patrick Henry Winston begged to differ. "I think Turing and Minsky were wrong," he told me in 2017. "We forgive them because they were smart and mathematicians, but like most mathematicians, they thought reasoning is the key, not the byproduct." Winston, a professor of computer science at MIT, and a former director of its AI lab, was convinced the key to human intelligence was storytelling. "My belief is the distinguishing characteristic of humanity is this keystone ability to have descriptions with which we construct stories. I think stories are what make us different from chimpanzees and Neanderthals. And if story-understanding is really where it's at, we can't understand our intelligence until we understand that aspect of it."
An efficient pattern recognition of a lion makes perfect evolutionary sense. If you see a large feline shape moving in some nearby brush, it is unwise to wait until you see the yellows of the lion's eyes before starting to run up the nearest tree. You need a brain that quickly detects entire shapes from fragments of the total picture and provides you with a powerful sense of the accuracy of this recognition. One need only think of the recognition of a new pattern that is so profound that it triggers an involuntary "a-ha!" to understand the degree of pleasure that can be associated with learning. It's no wonder that once a particular pattern-recognition-reward relationship is well grooved into our circuitry, it is hard to shake.
"Language," the Beat writer William S. Burroughs supposedly once exclaimed, "is a virus from outer space." Burroughs was making a metaphorical extrapolation about the ways in which words, phrases, idioms, sentences, lines, and narratives can seemingly rewire our brains; how literature has the power to reprogram a mind just as a virus can alter the DNA of its host. Such a concept holds that language is more than just a simple means of expressing and communicating ideas: it is its own potent agent, a force that actually has the ability to shape the world, often in ways that we're unconscious of and with an almost autonomous sense of itself. As with something biological, language is capable of infecting, of propagating and spreading, of indelibly marking its host. In Burroughs' characteristically experimental 1962 novel The Ticket That Exploded, he writes that "Word is an organism… a parasitic organism that invades and damages."
Suppose we scan 1 million similar women, and we tell everyone who tests positive that they have cancer. Then we will have correctly told all 10,000 women with cancer that they have it. Of the remaining 990,000 women whose lumps were benign, we will incorrectly tell 49,500 women that they have cancer. Therefore, of the women we identify as having cancer, about 83 percent will have been incorrectly diagnosed. Imagine you or a loved one received a positive test result.
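The arithmetic above can be checked directly. The passage implies a 1 percent cancer rate (10,000 women in 1,000,000), a 5 percent false-positive rate (49,500 of the 990,000 benign cases), and the simplifying assumption that the test misses no true cases:

```python
from fractions import Fraction

total = 1_000_000
with_cancer = 10_000                 # 1% prevalence
benign = total - with_cancer         # 990,000 benign lumps

true_positives = with_cancer         # assume every true case is caught
false_positives = benign * 5 // 100  # 5% of benign lumps flagged: 49,500

positives = true_positives + false_positives
wrong = Fraction(false_positives, positives)
print(f"{float(wrong):.0%} of positive results are incorrect")  # prints "83% ..."
```

The counterintuitive 83 percent arises because the benign group is so much larger than the cancer group: even a small error rate applied to 990,000 women swamps the 10,000 genuine cases.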
Ada Lovelace was an English mathematician who lived in the first half of the 19th century. In 1842, Lovelace was tasked with translating an article from French into English for Charles Babbage, the "Grandfather of the Computer." The piece was about Babbage's Analytical Engine, a revolutionary new automatic calculating machine. Although originally retained solely to translate the article, Lovelace also scribbled extensive ideas about the machine into the margins, adding her unique insight: she saw that the Analytical Engine could be used to manipulate symbols and to make music, art, and graphics. Her notes, which included a method for calculating the sequence of Bernoulli numbers as well as what would become known as the "Lovelace objection," were the first computer programs on record, even though the machine could not actually be built at the time.1 Though never formally trained as a mathematician, Lovelace was able to see beyond the limitations of Babbage's invention and imagine the power and potential of programmable computers. She did so, moreover, as a woman at a time when women were typically not seen as suited for this type of career; Lovelace had to sign her work with just her initials because women weren't thought of as proper authors at the time.2 Still, she persevered,3 and her work, which would eventually be considered the world's first computer algorithm, later earned her the title of the first computer programmer.
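Lovelace's famous Note G laid out a table of operations for having the engine compute Bernoulli numbers. As a modern illustration only, and not a reconstruction of her actual procedure, the numbers can be generated from a standard recurrence, sum over j from 0 to m of C(m+1, j)·B_j = 0, in a few lines of Python:

```python
from fractions import Fraction
from math import comb

def bernoulli(n):
    """Return the Bernoulli numbers B_0..B_n as exact fractions
    (using the convention B_1 = -1/2)."""
    B = [Fraction(1)]  # B_0 = 1
    for m in range(1, n + 1):
        # Solve the recurrence for B_m given all earlier values.
        s = sum(comb(m + 1, j) * B[j] for j in range(m))
        B.append(Fraction(-1, m + 1) * s)
    return B

print(bernoulli(8))  # B_2 = 1/6, B_4 = -1/30, odd-index values beyond B_1 are 0
```

A machine stepping through this loop is doing exactly the kind of general symbol manipulation Lovelace foresaw: the same engine that grinds out numbers could, given different instructions, grind out anything expressible in symbols.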
The Canadian poet Dennis Lee once wrote that the consolations of existence might be improved if we thought, worked, and lived as though we were inhabiting "the early days of a better civilization." The test of this would be whether humans, separately and together, are able to generate and make better choices. This is as much a question about wisdom as it is about science. We don't find it too hard to imagine continued progress in science and technology. We can extrapolate from the experiences of the last century toward a more advanced civilization that simply knows more, can control more, and is less vulnerable to threats.