Inside Jeffrey Epstein's Forgotten AI Summit
In 2002, artificial intelligence was still in winter. Despite decades of effort, dreams of bestowing computers with human-like cognition and real-world understanding had not materialized. To look for a way forward, a small group of scientists gathered for "The St. Thomas Common Sense Symposium." AI pioneer Marvin Minsky was the central presence, along with his protégé Pushpinder Singh. After the symposium, Minsky, Singh, and renowned philosopher Aaron Sloman published a paper on the group's ideas for how to reach human-like AI.
- North America > United States > Virginia (0.05)
- North America > United States > New York (0.05)
Is a Chat with a Bot a Conversation?
You are at the Princess's ball, and she is telling you a secret, but her orchestra of bears is making such a fearful lot of noise you cannot hear what she is saying. What do you say, dear? I'd lean in closer and say, "Could you repeat that? The bear-itone section is a bit too enthusiastic tonight!" In 1958, the year the illustrated children's book "What Do You Say, Dear?" appeared, the leaders of a field newly dubbed "artificial intelligence" spoke at a conference in Teddington, England, on "The Mechanisation of Thought Processes." Marvin Minsky, of M.I.T., talked about heuristic programming; John McCarthy presented "Programs with Common Sense"; and Grace Hopper assessed the state of computer languages. A few years later, in 1961, scientists at Bell Labs débuted a computer that could synthesize human speech by having it sing "Daisy Bell" ("Daisy, Daisy, give me your answer, do . . .").
- Europe > United Kingdom > England (0.24)
- Europe > Germany > Bavaria > Upper Bavaria > Munich (0.04)
- Europe > France (0.04)
Investigating AI's Challenges in Reasoning and Explanation from a Historical Perspective
This paper provides an overview of the intricate relationship between social dynamics, technological advancements, and pioneering figures in the fields of cybernetics and artificial intelligence. It explores the impact of collaboration and interpersonal relationships among key scientists, such as McCulloch, Wiener, Pitts, and Rosenblatt, on the development of cybernetics and neural networks. It also discusses the contested attribution of credit for important innovations like the backpropagation algorithm and the potential consequences of unresolved debates within emerging scientific domains. It emphasizes how interpretive flexibility, public perception, and the influence of prominent figures can shape the trajectory of a new field. It highlights the role of funding, media attention, and alliances in determining the success and recognition of various research approaches. Additionally, it points out the missed opportunities for collaboration and integration between symbolic AI and neural network researchers, suggesting that a more unified approach may be possible in today's era without the historical baggage of past debates.
- North America > United States > New York > Bronx County > New York City (0.04)
- North America > United States > New Hampshire > Grafton County > Hanover (0.04)
- North America > United States > Massachusetts > Suffolk County > Boston (0.04)
- (2 more...)
- Research Report (0.82)
- Overview (0.54)
- Health & Medicine > Therapeutic Area > Neurology (1.00)
- Government > Military (0.93)
- Education (0.93)
Artificial Intelligence History (Chapter 1: AI Handbook)
This is the first of a multi-part series on AI that I will be writing. At the end of this post, you'll find a link to the list where I'll save all future posts. Artificial intelligence (AI) has a long and storied history that can be traced back to the earliest days of computing, and arguably further, to mechanical automata. These early automata were built entirely from mechanical parts and could carry out only the fixed sequences of actions their makers had built into them.
History of Artificial Intelligence
Of the myriad technological advances of the 20th and 21st centuries, one of the most influential is undoubtedly artificial intelligence (AI). From search-engine algorithms reinventing how we look for information to Amazon's Alexa in the consumer sector, AI has become a major technology driving the entire tech industry forward. According to a study from Grand View Research, the global AI industry was valued at $93.5 billion in 2021. AI exploded in prominence in the 2000s and 2010s, but it has existed in some form since at least 1950, and arguably stretches back even further than that.
- North America > United States (0.29)
- Europe > Greece (0.04)
- Asia > Japan (0.04)
- Information Technology (1.00)
- Leisure & Entertainment > Games (0.96)
A Personal Tribute to Patrick Henry Winston
Patrick Henry Winston was, by all standards, a rock star in the field of Artificial Intelligence. In 1970, Patrick wrote his Ph.D. thesis, in which he explored -- under the improvisational supervision of his advisor, Marvin Minsky -- the theoretical difficulties of learning, and wrote in Lisp a blocks-world program that could perceive blocks and structures built from blocks (e.g., an arch). That computer program was able to generalize its existing knowledge when comparing a baseline example with a new positive example, and to specialize its existing knowledge when comparing a baseline example with a near miss. That was one of the first efforts ever in making machines learn things in ways that resemble how humans learn things. Some say that was "real" Machine Learning, quite unlike statistical and neural-net Machine Learning, in which programmers have their computers slavishly crunch through hundreds of billions of data points -- nothing like how people learn new things. The latter approaches have become popular because the theory behind them is better understood and far easier to implement, and because the tremendous computing power we have today makes this kind of big-data crunching practical.
- North America > United States > Illinois > Tazewell County > East Peoria (0.05)
- North America > United States > Illinois > Peoria County > Peoria (0.05)
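The generalize/specialize loop described above can be sketched in a few lines. This is my own illustration, not Winston's actual Lisp (his program used richer structural descriptions); here a concept is simply a pair of sets: properties every instance must have, and properties it must not have. A positive example drops required properties it lacks (generalization); a near miss forbids a property that made it fail (specialization).

```python
# Hypothetical sketch of Winston-style near-miss learning. The domain,
# property names, and update heuristic are my own simplifications.

def learn(model, example, is_near_miss):
    """Update a concept model from one labeled example.

    model: (must_have, must_not_have) -- two sets of property names.
    example: set of properties observed in this instance.
    """
    must, must_not = model
    if is_near_miss:
        # Specialize: forbid one property the near miss has
        # that the current model did not already require.
        extra = sorted(example - must)
        if extra:
            must_not = must_not | {extra[0]}
    else:
        # Generalize: drop required properties this positive example lacks.
        must = must & example
    return must, must_not

# Toy arch-like domain: each example is a set of structural properties.
model = ({"two_supports", "lintel_on_top", "supports_touch"}, set())

# Positive example whose supports do not touch: that requirement is dropped.
model = learn(model, {"two_supports", "lintel_on_top"}, is_near_miss=False)

# Near miss with the lintel on the ground: that property becomes forbidden.
model = learn(model, {"two_supports", "lintel_on_ground"}, is_near_miss=True)

print(model)
```

The key contrast with statistical learning is visible even in this toy: each single example changes the model in a directed, explainable way, rather than nudging millions of weights.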
Singularity Is Fast Approaching, and It Will Happen First in the Metaverse
Recently there has been a lot of discussion around singularity and whether we will soon be entering a phase where artificial general intelligence becomes reality. However, before we delve into the philosophical and ethical implications of singularity, we have to understand what it really is, what its actual limitations are, and why it may happen differently than anticipated. Singularity is the notion that the exponential acceleration of technological development will lead to a situation where artificial intelligence supersedes human intelligence and eventually escapes our control. Some even predict catastrophic consequences for humanity, with machines becoming the dominant species on this planet. This may seem a bit far-fetched, at least for the near future, given that advancements in hardware and robotics have not kept pace with software when it comes to artificial general intelligence.
A brief history of AI: how to prevent another winter (a critical review)
Toosi, Amirhosein, Bottino, Andrea, Saboury, Babak, Siegel, Eliot, Rahmim, Arman
The field of artificial intelligence (AI), regarded as one of the most enigmatic areas of science, has witnessed exponential growth in the past decade including a remarkably wide array of applications, having already impacted our everyday lives. Advances in computing power and the design of sophisticated AI algorithms have enabled computers to outperform humans in a variety of tasks, especially in the areas of computer vision and speech recognition. Yet, AI's path has never been smooth, having essentially fallen apart twice in its lifetime ('winters' of AI), both after periods of popular success ('summers' of AI). We provide a brief rundown of AI's evolution over the course of decades, highlighting its crucial moments and major turning points from inception to the present. In doing so, we attempt to learn, anticipate the future, and discuss what steps may be taken to prevent another 'winter'.
- North America > United States > California > San Francisco County > San Francisco (0.14)
- North America > Canada > Ontario > Toronto (0.14)
- Asia > Japan > Honshū > Tōhoku > Fukushima Prefecture > Fukushima (0.04)
- (14 more...)
- Leisure & Entertainment > Games > Chess (1.00)
- Information Technology > Security & Privacy (1.00)
- Health & Medicine > Health Care Technology (1.00)
- (8 more...)
114 Milestones In The History Of Artificial Intelligence (AI)
In an expanded edition of "Perceptrons" published in 1988, Minsky and Papert responded to claims that their 1969 conclusions significantly reduced funding for neural network research: "Our version is that progress had already come to a virtual halt because of the lack of adequate basic theories… by the mid-1960s there had been a great many experiments with perceptrons, but no one had been able to explain why they were able to recognize certain kinds of patterns and not others."
- Europe (0.93)
- North America > United States (0.68)
- Media > Film (0.93)
- Leisure & Entertainment > Games > Chess (0.69)
- Information Technology (0.69)
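The patterns perceptrons could and could not recognize have a crisp modern explanation: a single-layer perceptron can only learn linearly separable functions, and Minsky and Papert's best-known example of a non-separable one is XOR. A minimal sketch (my own illustration, not from the article) shows the same perceptron rule mastering AND but failing on XOR:

```python
# Train a classic single-layer perceptron on 2-input binary functions.
# AND is linearly separable and is learned perfectly; XOR is not
# linearly separable, so no weight setting can classify all four cases.

def train_perceptron(samples, epochs=100, lr=0.1):
    """Rosenblatt's perceptron learning rule on 2-input binary data."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - pred
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def accuracy(samples, w, b):
    correct = 0
    for (x1, x2), target in samples:
        pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        correct += (pred == target)
    return correct / len(samples)

AND_data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
XOR_data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

w, b = train_perceptron(AND_data)
print(accuracy(AND_data, w, b))  # 1.0: AND is linearly separable

w, b = train_perceptron(XOR_data)
print(accuracy(XOR_data, w, b))  # below 1.0: XOR is not separable
```

Adding a hidden layer removes the limitation, which is why the later rediscovery of backpropagation for multi-layer networks reopened the field.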