20th century


Steven Pinker's new book shows how he's become a contradictory figure

New Scientist

Steven Pinker's new book, When Everyone Knows That Everyone Knows, makes a compelling case for common knowledge, and it perfectly encapsulates what a contradictory figure he has become. Much of it is a clear, fascinating explanation of a major psychological phenomenon. But then he starts telling you what he thinks about current affairs. Pinker is a psychologist at Harvard University who has written a string of popular science books. Some, like Words and Rules, are rooted in his own research and are a good read.


We have run out of new visions of the future. This needs to change

New Scientist

The 20th century was a famously fertile time for visions of the future, but the 21st century has failed to inspire them in the same way. Science fiction writer William Gibson, author of the prescient cyberpunk novel Neuromancer, has called this "future fatigue", pointing out that we barely ever make reference to the 22nd century. One reason for this apparent stasis is that most of the ideas of the future that captured people's imaginations in the 20th century have mutated since then. For example, plastic was billed as the material of the future. It has become an abundant material resource that is durable and versatile, just as its manufacturers promised.


Why we should thank pigeons for our AI breakthroughs

MIT Technology Review

People looking for precursors to artificial intelligence often point to science fiction by authors like Isaac Asimov or thought experiments like the Turing test. But an equally important, if surprising and less appreciated, forerunner is the psychologist B.F. Skinner's research with pigeons in the middle of the 20th century. Skinner believed that association--learning, through trial and error, to link an action with a punishment or reward--was the building block of every behavior, not just in pigeons but in all living organisms, including human beings. His "behaviorist" theories fell out of favor with psychologists and animal researchers in the 1960s but were taken up by computer scientists who eventually provided the foundation for many of the artificial-intelligence tools from leading firms like Google and OpenAI. These companies' programs increasingly incorporate a kind of machine learning whose core concept--reinforcement--is taken directly from Skinner's school of psychology. Its main architects, the computer scientists Richard Sutton and Andrew Barto, won the 2024 Turing Award, an honor widely considered the Nobel Prize of computer science.
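The trial-and-error reward association the article describes can be sketched in a few lines. This is a minimal illustration, not code from any of the systems mentioned: a hypothetical "pigeon" repeatedly chooses among three keys with different payoff rates, mostly exploiting whichever key has paid off best so far while occasionally exploring the others (the function and variable names are invented for this example).

```python
import random

def train_bandit(reward_probs, steps=5000, epsilon=0.1, seed=0):
    """Trial-and-error learning: estimate each action's value from observed
    rewards, usually exploiting the best-known action, sometimes exploring."""
    rng = random.Random(seed)
    values = [0.0] * len(reward_probs)   # running estimate of reward per action
    counts = [0] * len(reward_probs)     # how often each action was tried
    for _ in range(steps):
        if rng.random() < epsilon:                      # explore a random action
            action = rng.randrange(len(reward_probs))
        else:                                           # exploit the best so far
            action = max(range(len(values)), key=values.__getitem__)
        reward = 1.0 if rng.random() < reward_probs[action] else 0.0
        counts[action] += 1
        # incremental average: nudge the estimate toward the observed reward
        values[action] += (reward - values[action]) / counts[action]
    return values

# Three "keys" the learner can peck, paying off 20%, 50%, and 80% of the time.
estimates = train_bandit([0.2, 0.5, 0.8])
```

With enough trials the learner's value estimates converge toward the true payoff rates, and it spends most of its time on the most rewarding key, which is the Skinnerian reinforcement idea in its simplest computational form.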


What if L.A.'s so-called flaws were underappreciated assets rather than liabilities?

Los Angeles Times

In the wake of January's horrific fires, detractors of Los Angeles -- an urban reality often seen as a toxic mixture of unsustainable resource planning and structurally poor governance systems -- are having a field day. But Los Angeles knows how to weather a crisis -- or two or three -- and Angelenos are tapping into that resilience, striving to build a city for everyone. The detractors' criticism is not new: For most of the 20th century -- and certainly for the last five decades or so -- Los Angeles has been seen by many urbanists as less city and more cautionary tale -- a smoggy expanse of subdivisions and spaghetti junctions, where ambition came with a two-hour commute. Planners shuddered, while architects looked away, even as they accepted handsome commissions to build some of L.A.'s -- if not the world's -- most iconic buildings.


Homo Ratiocinator (Reckoning Human)

Communications of the ACM

Homo Sapiens, "wise human" in Latin, is the taxonomic species name for modern humans. But observing the current state of the world and its trajectory, it is hard for me to accept the description "wise." I am not the first to object to the "sapiens" descriptor. The French philosopher Henri-Louis Bergson argued in 1911 that a better term would be Homo Faber, referring to human tool-making ability. This ability goes back to early humans, about three million years ago. Most importantly, human tools got better and better due to innovation and cultural transmission.


Elon Musk and the problem with immortality - by Ginger Liu

#artificialintelligence

Interactive internet-based technologies are transforming the way in which we understand death, grieving, and coping with loss. Online communication together with changes in social and religious attitudes in western society has created a space where the individual is part of the collective. The transition from analog to digital combines the private with the public and the real with the virtual. Feeding the digital afterlife zeitgeist are tech giants who are eager to build a synthetic heaven where big egos go to die. The idea of a synthetic heaven is offensive to many with long-standing religious beliefs even though those same beliefs are as synthetic as digital data. We are living in an AI-powered Matrix future and the richest man in the world agrees.


Does Working With AI Help Or Hinder Employees?

#artificialintelligence

As we have gained a greater understanding of just what AI can and cannot do, there is a growing sense that it will augment the work humans do rather than replace them. For this augmentation to be effective, however, will quite probably require a rethinking of the processes we use at work so that the capabilities of the technology are fully capitalized on. Research from the University of Georgia finds, however, that this ability to work effectively alongside humans may be hindered by our perceptions of just what is good in the workplace. The study argues that the characteristics we typically value at work, such as conscientiousness, are also areas in which AI excels, creating an unhelpful overlap of strengths. It's commonly assumed that conscientious employees are strong performers at work, thanks to their attention to detail and generally strong work ethic.


AI And The Limits Of Language

#artificialintelligence

Jacob Browning is a postdoc in NYU's Department of Computer Science working on the philosophy of AI. Yann LeCun is a Turing Award-winning machine learning researcher and an NYU Silver professor. When a Google engineer recently declared Google's AI chatbot a person, pandemonium ensued. The chatbot, LaMDA, is a large language model (LLM) designed to predict the likely next words for whatever text it is given. Since many conversations are somewhat predictable, these systems can infer how to keep a conversation going productively. LaMDA did this so impressively that the engineer, Blake Lemoine, began to wonder whether there was a ghost in the machine.
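The "predict the likely next word" mechanism at the heart of an LLM can be illustrated at toy scale. This is only a sketch of the principle: a bigram model that counts which word follows which in a tiny corpus and then predicts the most frequent follower (real LLMs use neural networks over vastly larger contexts; the corpus and function names here are invented for illustration).

```python
from collections import Counter, defaultdict

def train_bigram(text):
    """Count, for each word, which words follow it and how often."""
    words = text.lower().split()
    followers = defaultdict(Counter)
    for a, b in zip(words, words[1:]):
        followers[a][b] += 1
    return followers

def predict_next(model, word):
    """Return the most frequent next word seen after `word` in training."""
    counts = model[word.lower()]
    return counts.most_common(1)[0][0] if counts else None

corpus = "the cat sat on the mat . the cat ate . the dog sat on the rug ."
model = train_bigram(corpus)
```

Calling `predict_next(model, "sat")` yields "on", because "on" is what most often followed "sat" during training. Scaled up enormously, that same statistical idea is why an LLM can keep a predictable conversation going without any understanding behind it.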


Top Programming Languages 2022 - IEEE Spectrum

#artificialintelligence

As Verne understood, the U.S. Civil War (during which 60,000 amputations were performed) inaugurated the modern prosthetics era in the United States, thanks to federal funding and a wave of design patents filed by entrepreneurial prosthetists. The two World Wars solidified the for-profit prosthetics industry in both the United States and Western Europe, and the ongoing War on Terror helped catapult it into a US $6 billion industry across the globe. This recent investment is not, however, a result of a disproportionately large number of amputations in military conflict: Around 1,500 U.S. soldiers and 300 British soldiers lost limbs in Iraq and Afghanistan. Limb loss in the general population dwarfs those figures. A much smaller subset--between 1,500 and 4,500 children each year--are born with limb differences or absences, myself included.


Alan Turing: Tech Ideas that Revolutionized the 20th Century

#artificialintelligence

Over his lifetime, Turing produced a series of groundbreaking papers and works. In this write-up, I am going to delve into four such major works: "On Computable Numbers, with an Application to the Entscheidungsproblem" (1936); Bombe and Spider, Banburismus (1940–41); Computing Machinery and Intelligence (1950); Solvable and Unsolvable Problems (1954). David Hilbert and Wilhelm Ackermann discussed the Entscheidungsproblem in their 1928 book, Principles of Mathematical Logic. They posed the questions of universal validity and satisfiability, customarily referred to as the decision problem (Hilbert et al., 1950). The decision problem can hence loosely be defined as finding an algorithm that takes a statement as input and replies Yes or No depending on whether the statement is universally valid.
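The flavor of such a decision procedure can be shown for a case where one does exist. Propositional validity is decidable by brute-force truth-table search, whereas Turing's 1936 paper (and Church's, independently) showed no such algorithm exists for full first-order logic. The sketch below is illustrative only; a formula is represented as a Python function of booleans, and the names are invented for this example.

```python
from itertools import product

def is_tautology(formula, num_vars):
    """Decide, by exhaustive truth-table search, whether a propositional
    formula (a function of `num_vars` booleans) is true under every
    assignment, i.e. universally valid. Always answers Yes or No in
    finitely many steps -- 2**num_vars rows at most."""
    return all(formula(*row) for row in product([True, False], repeat=num_vars))

# The law of the excluded middle (p or not p) is valid; (p or q) is not,
# since it fails when both p and q are false.
excluded_middle = is_tautology(lambda p: p or not p, 1)
p_or_q = is_tautology(lambda p, q: p or q, 2)
```

The key contrast is that here the search space is finite, so the procedure always halts with a verdict; for first-order statements the space of interpretations is infinite, and Turing proved that no algorithm can deliver a correct Yes/No verdict for every statement.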