Immigration Agents Are Killing and Abusing People. So Civilians Are Turning to a Controversial Tool to Find Justice.
Civilians Are Using A.I. to Unmask ICE Agents. Websites like ICEList are attempting to hold federal agents accountable--but it's unclear whether they make the system safer or more dangerous. After federal immigration officers shot Alex Pretti in Minneapolis, social media users called for the unmasking of the agents responsible. On X, users shared photos of the agents involved. It didn't take long before A.I.-generated pictures made their appearance: One user posted a seemingly deepfaked picture of a masked ICE agent, writing, "This is one of the soulless lowlife ghouls who executed Alex Pretti in cold blood!"
- North America > United States > Minnesota > Hennepin County > Minneapolis (0.62)
- North America > United States > New York (0.05)
- North America > United States > Missouri > Greene County > Springfield (0.05)
- (2 more...)
The Download: pigeons' role in developing AI, and Native artists' tech interpretations
People looking for precursors to artificial intelligence often point to science fiction by authors like Isaac Asimov or thought experiments like the Turing test. But an equally important, if surprising and less appreciated, forerunner is American psychologist B.F. Skinner's research with pigeons in the middle of the 20th century. Skinner believed that association--learning, through trial and error, to link an action with a punishment or reward--was the building block of every behavior, not just in pigeons but in all living organisms, including human beings. His "behaviorist" theories fell out of favor with psychologists and animal researchers in the 1960s but were taken up by computer scientists who eventually provided the foundation for many of the artificial-intelligence tools from leading firms like Google and OpenAI. This story is from our forthcoming print issue, which is all about security.
Why we should thank pigeons for our AI breakthroughs
People looking for precursors to artificial intelligence often point to science fiction by authors like Isaac Asimov or thought experiments like the Turing test. But an equally important, if surprising and less appreciated, forerunner is American psychologist B.F. Skinner's research with pigeons in the middle of the 20th century. Skinner believed that association--learning, through trial and error, to link an action with a punishment or reward--was the building block of every behavior, not just in pigeons but in all living organisms, including human beings. His "behaviorist" theories fell out of favor with psychologists and animal researchers in the 1960s but were taken up by computer scientists who eventually provided the foundation for many of the artificial-intelligence tools from leading firms like Google and OpenAI. These companies' programs are increasingly incorporating a kind of machine learning whose core concept--reinforcement--is taken directly from Skinner's school of psychology and whose main architects, the computer scientists Richard Sutton and Andrew Barto, won the 2024 Turing Award, an honor widely considered to be the Nobel Prize of computer science.
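The reinforcement idea the article describes--linking an action to a reward through trial and error--can be sketched as a minimal multi-armed bandit. The "pigeon" pecks one of several keys, observes a reward, and updates its estimate of that key's value. The reward probabilities, exploration rate, and function name below are illustrative assumptions, not details from the article.

```python
import random

def run_bandit(reward_probs, steps=5000, epsilon=0.1, seed=0):
    """Trial-and-error learner: pick a 'key' to peck, observe a 0/1
    reward, and update the estimated value of that key."""
    rng = random.Random(seed)
    values = [0.0] * len(reward_probs)   # estimated value of each key
    counts = [0] * len(reward_probs)     # times each key was tried
    for _ in range(steps):
        if rng.random() < epsilon:       # explore: peck a random key
            action = rng.randrange(len(reward_probs))
        else:                            # exploit: peck the best-looking key
            action = max(range(len(reward_probs)), key=values.__getitem__)
        reward = 1.0 if rng.random() < reward_probs[action] else 0.0
        counts[action] += 1
        # incremental running average of observed rewards for this key
        values[action] += (reward - values[action]) / counts[action]
    return values

# Key 2 pays off most often, so its estimated value ends up highest.
est = run_bandit([0.2, 0.5, 0.8])
```

The incremental-average update is the simplest form of the value estimation that Sutton and Barto's reinforcement-learning framework builds on; modern systems replace the table of values with a neural network, but the reward-driven update loop is the same shape.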
How to use AI to get a job interview and nail it – along with the salary you deserve
The fear that artificial intelligence (AI) will replace millions of jobs is widespread. But equally, in today's tough job market, not using AI wisely as part of your search could mean you miss out. You can use AI models such as ChatGPT and Perplexity to research employers, competitors and industry trends before applying for a job. Hannah Salton, a careers coach, says some of her clients have successfully used AI to find out more about companies, allowing them to "gain insights into culture, competitors and market positioning. It can also help identify SMEs [small and medium-sized enterprises] to apply to or network with."
- Information Technology > Artificial Intelligence > Applied AI (0.56)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.32)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.32)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.32)
Large Language Models and the Rationalist Empiricist Debate
To many, Chomsky's debates with Quine and Skinner are an updated version of the rationalist-empiricist debates of the 17th century, and the consensus is that Chomsky's rationalism was victorious. This dispute has reemerged with the advent of large language models, with some arguing that LLMs vindicate rationalism because of the necessity of building in innate biases to make them work. That necessity is taken to prove that empiricism lacks the conceptual resources to explain linguistic competence. Such claims depend on the nature of the empiricism one is endorsing. Externalized empiricism has no difficulty with innate apparatus once it is determined empirically (Quine 1969); thus, externalized empiricism is not refuted by the need to build innate biases into LLMs. Furthermore, the relevance of LLMs to the rationalist-empiricist debate in relation to humans is dubious. For any claim about whether LLMs learn in an empiricist manner to be relevant to humans, it needs to be shown that LLMs and humans learn in the same way. Two key features distinguish humans from LLMs: humans learn despite a poverty of stimulus, while LLMs learn because of an incredibly rich stimulus; and human linguistic outputs are grounded in sensory experience, while LLMs' are not. These differences in how the two learn indicate that they use different underlying competencies to produce their output. Therefore, claims about whether LLMs learn in an empiricist manner are not relevant to whether humans learn in an empiricist manner.
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.05)
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.04)
- North America > United States > New York > New York County > New York City (0.04)
- (3 more...)
An Essay concerning machine understanding
Herbert L. Roitblat

ABSTRACT: Artificial intelligence systems exhibit many useful capabilities, but they appear to lack understanding. This essay describes how we could go about constructing a machine capable of understanding. As John Locke (1689) pointed out, words are signs for ideas, which we can paraphrase as thoughts and concepts. To understand a word is to know, and be able to work with, the underlying concepts for which it is an indicator. Understanding between a speaker and a listener occurs when the speaker casts his or her concepts into words and the listener recovers approximately those same concepts. Current models rely on the listener to construct any potential meaning. The diminution of behaviorism as a psychological paradigm and the rise of cognitivism provide examples of many experimental methods that can be used to determine whether, and to what extent, a machine might understand, and to suggest how that understanding might be instantiated.

"I know there are not words enough in any language to answer all the variety of ideas that enter into men's discourses and reasonings. But this hinders not but that when any one uses any term, he may have in his mind a determined idea, which he makes it the sign of, and to which he should keep it steadily annexed during that present discourse." --John Locke, 1689

Artificial intelligence systems exhibit many useful capabilities, but, as has often been said, they lack "understanding," which would be a critical capability for general intelligence. The transformer architecture on which current systems are based takes one string of tokens and produces another string of tokens (one token at a time) based on the aggregated statistics of the associations among tokens. The representation mediating between the inputs (e.g., prompts) and their production is purely one of statistical relations among the word tokens.
In the case of large language models, we know these facts to be true because this is how the models were designed and they were trained on a kind of fill-in-the-blank test to guess the next word. What exactly would it mean for an artificial intelligence system to understand? How would we know that it does?
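The "fill-in-the-blank" prediction the essay describes can be illustrated, in a drastically simplified form, by a bigram counter: the sketch below guesses the next word purely from co-occurrence statistics, with no concept behind any token. The toy corpus and function names are illustrative assumptions, not anything from the essay itself.

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count how often each word follows each other word."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.split()
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += 1
    return counts

def predict_next(counts, word):
    """Guess the most frequent continuation -- pure token statistics,
    with no underlying 'idea' in Locke's sense."""
    if word not in counts:
        return None
    return counts[word].most_common(1)[0][0]

corpus = [
    "the pigeon pecks the key",
    "the pigeon eats the grain",
    "the key releases the grain",
]
model = train_bigrams(corpus)
print(predict_next(model, "key"))  # prints "releases" -- the only continuation seen for "key"
```

A transformer differs enormously in scale and mechanism (learned embeddings and attention rather than literal counts), but the training signal is the same kind of thing: predict the next token from statistical regularities in the token stream.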
- Europe > Austria > Vienna (0.14)
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.04)
- North America > United States > California > Santa Clara County > Palo Alto (0.04)
Hitting the Books: During World War II, even our pigeons joined the fight
In the years leading up to, and through, World War II, animal behavior researchers thoroughly embraced motion picture technology as a means to better capture the daily experiences of their test subjects -- whether exploring the nuances of contemporary chimpanzee society or running macabre rat-eat-rat survival experiments to determine the Earth's "carrying capacity." However, once the studies had run their course, much of that scientific content was simply shelved. In his new book, The Celluloid Specimen: Moving Image Research into Animal Life, Seattle University Assistant Professor of Film Studies Dr. Ben Schultz-Figueroa pulls these historic archives out of the vacuum of academic research to examine how they have influenced America's scientific and moral compasses since. In the excerpt below, Schultz-Figueroa recounts the Allied war effort to guide precision aerial munitions toward their targets using live pigeons as onboard targeting reticles. Excerpted from The Celluloid Specimen: Moving Image Research into Animal Life by Ben Schultz-Figueroa, published by the University of California Press.
- North America > United States > California (0.61)
- North America > United States > Illinois > Cook County > Chicago (0.05)
- Europe > Denmark (0.05)
- Government > Regional Government > North America Government > United States Government (0.49)
- Government > Military > Navy (0.47)
'Extinction is on the table': Jaron Lanier warns of tech's existential threat to humanity
Jaron Lanier, the eminent American computer scientist, composer and artist, is no stranger to skepticism around social media, but his current interpretations of its effects are becoming darker and his warnings more trenchant. Lanier, a dreadlocked free-thinker credited with coining the term "virtual reality", has long sounded dire sirens about the dangers of a world over-reliant on the internet and at the increasing mercy of tech lords, their social media platforms and those who work for them. Nothing about the last few weeks – of chaos on Twitter and the ever-increasing spread of conspiracy theory and disinformation – has changed that. The current state of the tech industry is rife with danger and poses an existential threat, he believes. "People survive by passing information between themselves," Lanier, 61, told the Guardian in an interview.
- Information Technology (0.50)
- Media (0.36)
- Government (0.31)
The AI in a jar
The "brain in a jar" is a thought experiment of a disembodied human brain living in a jar of sustenance. The thought experiment explores human conceptions of reality, mind, and consciousness. This article will explore a metaphysical argument against artificial intelligence on the grounds that a disembodied artificial intelligence, or a "brain" without a body, is incompatible with the nature of intelligence.[1] The brain in a jar is a different inquiry than traditional questions about artificial intelligence. The brain in a jar asks whether thinking requires a thinker. The possibility of artificial intelligence primarily revolves around what is necessary to make a computer (or a computer program) intelligent.
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.14)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.05)
- North America > United States > New York > New York County > New York City (0.04)
- (5 more...)