How Long Will Hot AI Summer Last? – MetaDevo
Over the past year I've posted some skepticism about the new AI models that are getting all the press--and all the money. I said in "When AI Phones It In" that a lot of the fear of jobs being taken away is vaporous. And in "The AI Winter Shit-Winds Are Coming" I suggested we might be heading for an AI crash. That would be unfortunate, since the markets have already been riding out crashes for a couple of years. Last month, in "Smells a little bit like AI winter?", writer/scientist Gary Marcus asked whether the simultaneous "implosion" of AI efforts at Tesla, Google, and Microsoft could lead to an AI winter.
Enactive Interface Perception
What are these, and could they overlap? The key element of the enactive approach to perception--aka enactivism--is that sensorimotor knowledge and skills are a required part of perception (Noë, 2004). Enactivism diverges from tradition, for example in the case of vision: the norm is to treat vision separately from the other senses and from sensorimotor abilities, and to treat it as a reconstruction program (inverse optics). The enactive approach suggests that visual perception is not simply a transformation of 2D pictures into a 3D representation, and that vision depends on sensorimotor skills.
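The claim that perception depends on sensorimotor skill can be made concrete with a toy sketch (my own illustration, not from the post; all names are hypothetical): a single-pixel sensor cannot tell a horizontal bar from a vertical bar from any one fixation, but by moving the sensor and relating motion to sensation, the agent can discriminate them.

```python
# Toy illustration of action-dependent perception: one static reading is
# ambiguous; a movement policy plus its sensory consequences is not.

def make_bar(orientation, size=5):
    """Return a size x size bitmap containing a centered bar."""
    mid = size // 2
    if orientation == "horizontal":
        return [[1 if r == mid else 0 for c in range(size)] for r in range(size)]
    return [[1 if c == mid else 0 for c in range(size)] for r in range(size)]

def sense(image, r, c):
    """The agent's entire sensory apparatus: one pixel at a time."""
    return image[r][c]

def classify_by_scanning(image, size=5):
    """Sweep the sensor along the center row. A horizontal bar stays 'on'
    for the whole sweep; a vertical bar lights up only once."""
    mid = size // 2
    readings = [sense(image, mid, c) for c in range(size)]
    return "horizontal" if sum(readings) > 1 else "vertical"
```

At the center pixel both bars give the identical reading `1`; the category only exists in the relation between the scanning action and the resulting stream of sensations, which is the flavor of point enactivists make about sensorimotor contingencies.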
Trends in Analog and Neural Computation
Cognitive science and AI typically subscribe to computationalism--the mind is a form of computation in the brain (or the overall nervous system, including the brain). In the 1940s, explaining cognition as the brain computing was new; it caught on in what would become computer science and AI, and eventually, to some degree, neuroscience. But many researchers were modeling the brain using what you could call analog math (Piccinini). And there were actual analog computers, many of them used by the U.S. military starting in World War II. Nowadays, most people use digital computers for research and AI work--and pretty much everything else. But what happened to the non-digital theories, and why aren't there analog computers anymore to experiment with them on?
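As a hedged sketch of what "analog math" means here (my example, not from the post): the analog style models a neuron as a continuous dynamical system rather than a logic gate. Below, a leaky integrator governed by dV/dt = (-V + I)/tau is stepped with Euler's method on a digital machine; an analog computer would instead realize the same differential equation directly in hardware, e.g. with an RC circuit.

```python
# Leaky-integrator neuron model: a continuous ODE simulated digitally.
# dV/dt = (-V + I) / tau  -- the membrane "voltage" V leaks toward the
# input current I with time constant tau.

def leaky_integrator(current, tau=10.0, dt=0.1, steps=200, v0=0.0):
    """Euler-integrate the leaky integrator; returns the voltage trace."""
    v = v0
    trace = []
    for _ in range(steps):
        v += dt * (-v + current) / tau  # one Euler step of the ODE
        trace.append(v)
    return trace

trace = leaky_integrator(current=1.0)
# The voltage relaxes exponentially toward the steady state V = I.
```

The digital simulation discretizes time; the historical analog machines the post mentions computed such trajectories in continuous time, which is exactly the modeling idiom that faded as digital hardware took over.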
AI Will Make These 5 Jobs Extinct in the Next 20 Years
There won't be much of a web soon, so we won't need web developers. Everything will move into Meta's Metaverse and Microsoft Mesh. Some of you could transfer your skills to these new "mixed reality" platforms, but even there AI will be doing a lot of the work: it will handle procedural generation and help users create the bulk of the content.
Miscellaneous #7
Why don't AIs have anything like brain waves? AI co-writer: Making Room for the Past: How forgetting the past can lead to losing sight of who we are; an A.I. newsletter. Will Our Appliances Think Someday? "Doing the actual A.I. is the easiest part of these projects. The hardest part is helping everyone involved understand what the problem is that we're trying to solve." "In 1956, for a bet and while drunk, Thomas Fitzpatrick stole a small plane from New Jersey and landed it perfectly on a narrow Manhattan street in front of the bar he had been drinking in."
AI Don't Know Jack?
Think your AI understands the meanings of words? Or understands anything at all? Guess again. There's a big issue inherent in trying to make artificial minds that understand the way a human does. It's called the Symbol Grounding Problem (S.). TL;DR: how can understanding in an AI be made intrinsic to the system, rather than just parasitic on the meanings in the minds of its developers and trainers?
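A toy illustration of why symbol-only "meaning" never bottoms out (my own hypothetical mini-lexicon, not from the post; the problem is due to Stevan Harnad, who likened it to trying to learn a language from a dictionary written entirely in that language): if every symbol is defined only in terms of other symbols, chasing definitions just goes around in circles.

```python
# Hypothetical mini-lexicon where every word is defined by other words in
# the same lexicon -- no definition ever reaches anything non-symbolic.
definitions = {
    "zebra": ["horse", "stripes"],
    "horse": ["animal"],
    "stripes": ["pattern"],
    "animal": ["thing"],
    "pattern": ["thing"],
    "thing": ["animal"],   # circular: loops back into the lexicon
}

def ground(symbol, seen=None):
    """Chase definitions looking for something outside the lexicon.
    Returns False when every path cycles back into the dictionary."""
    seen = set() if seen is None else seen
    if symbol in seen:
        return False            # already visited: this path is a loop
    if symbol not in definitions:
        return True             # would be a grounded (non-symbolic) primitive
    seen.add(symbol)
    return any(ground(s, seen) for s in definitions[symbol])
```

In this lexicon `ground("zebra")` comes back `False`: every lookup path cycles. The point of the grounding problem is that some symbols must connect to something non-symbolic--e.g. sensorimotor categories--for the system's meanings to be intrinsic rather than borrowed from its designers.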