Philosophy
The Thinking Machine: Jensen Huang, Nvidia and the World's Most Coveted Microchip – review
This is the latest confirmation that the "great man" theory of history continues to thrive in Silicon Valley. As such, it joins a genre that includes Walter Isaacson's twin tomes on Steve Jobs and Elon Musk, Brad Stone's book on Jeff Bezos, Michael Becraft's on Bill Gates, Max Chafkin's on Peter Thiel and Michael Lewis's on Sam Bankman-Fried. Notable characteristics of the genre include a tendency towards founder worship, discreet hagiography and a Whiggish interpretation of the life under examination. The great man under Stephen Witt's microscope is Jensen Huang, co-founder and chief executive of Nvidia, a chip design company that went from being a small but plucky purveyor of graphics processing units (GPUs) for computer gaming to its current position as the third most valuable company in the world. Two things drove this astonishing transition.
Largest mammalian brain map ever could unpick what makes us human
The largest and most comprehensive 3D map of a mammalian brain to date offers an unprecedented insight into how neurons connect and function. The new map, which captures a cubic millimetre of a mouse's visual cortex, will allow scientists to study brain function in extraordinary detail, potentially revealing crucial insights into how neural activity shapes behaviour, how complex traits like consciousness arise, and even what it means to be human. "Our behaviours ultimately arise from activity in the brain, and brain tissue shares very similar properties in all mammals," says team member Forrest Collman at the Allen Institute for Brain Science in Seattle. "This is one reason we believe insights about the mouse cortex can generalise to humans." The achievement – something that biologist Francis Crick said in 1979 was "impossible" – took seven years to complete and involved 150 researchers from three institutions.
Mira Murati Launches Thinking Machines Lab to Make AI More Accessible
Last September, Mira Murati unexpectedly left her job as chief technology officer of OpenAI, saying, "I want to create the time and space to do my own exploration." The rumor in Silicon Valley was that she was stepping down to start her own company. Today she announced that she is indeed the CEO of a new public benefit corporation called Thinking Machines Lab. Its mission is to develop top-notch AI with an eye toward making it useful and accessible. Murati believes there's a serious gap between rapidly advancing AI and the public's understanding of the technology.
Why Is Anything Conscious?
Michael Timothy Bennett, Sean Welsh, Anna Ciaunica
We tackle the hard problem of consciousness, taking the naturally selected, embodied organism as our starting point. We provide a formalism describing how biological systems self-organise to hierarchically interpret unlabelled sensory information according to valence. Such interpretations imply behavioural policies which are differentiated from each other only by the qualitative aspect of information processing. Natural selection favours systems that intervene in the world to achieve homeostatic and reproductive goals. Quality is a property arising in such systems to link cause to affect and so motivate interventions. This produces interoceptive and exteroceptive classifiers and determines priorities. In formalising the seminal distinction between access and phenomenal consciousness, we claim that access consciousness at the human level requires the ability to hierarchically model i) the self, ii) the world/others and iii) the self as modelled by others, and that this requires phenomenal consciousness. Phenomenal without access consciousness is likely common, but the reverse is implausible. To put it provocatively: death grounds meaning, and Nature does not like zombies. We then describe the multilayered architecture of self-organisation from rocks to Einstein, illustrating how our argument applies. Our proposal lays the foundation of a formal science of consciousness, closer to human fact than zombie fiction.
Agnosticism About Artificial Consciousness
Could an AI have conscious experiences? Any answer to this question should conform to Evidentialism - that is, it should be based not on intuition, dogma or speculation but on solid scientific evidence. I argue that such evidence is hard to come by and that the only justifiable stance on the prospects of artificial consciousness is agnosticism. In the current debate, the main division is between biological views that are sceptical of artificial consciousness and functional views that are sympathetic to it. I argue that both camps make the same mistake of over-estimating what the evidence tells us. Scientific insights into consciousness have been achieved through the study of conscious organisms. Although this has enabled cautious assessments of consciousness in various creatures, extending this to AI faces serious obstacles. AI thus presents consciousness researchers with a dilemma: either reach a verdict on artificial consciousness but violate Evidentialism; or respect Evidentialism but offer no verdict on the prospects of artificial consciousness. The dominant trend in the literature has been to take the first option while purporting to follow the scientific evidence. I argue that if we truly follow the evidence, we must take the second option and adopt agnosticism.
AI Consciousness is Inevitable: A Theoretical Computer Science Perspective
We look at consciousness through the lens of Theoretical Computer Science, a branch of mathematics that studies computation under resource limitations. From this perspective, we develop a formal machine model for consciousness. The model is inspired by Alan Turing's simple yet powerful model of computation and Bernard Baars' theater model of consciousness. Though extremely simple, the model aligns at a high level with many of the major scientific theories of human and animal consciousness, supporting our claim that machine consciousness is inevitable.
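To make the flavour of such a model concrete, the following is a minimal Python sketch of a Baars-style global-workspace ("theater") loop, in which specialised processors compete for the stage and the winning content is broadcast back to all of them. The processor names, the random salience scoring and the loop structure are illustrative assumptions, not the authors' formal machine model.

import random

# Minimal, illustrative global-workspace ("theater") loop: specialised
# processors propose content, one wins the stage, and its content is
# broadcast to every processor. All names and scoring are invented here
# purely for illustration.

class Processor:
    def __init__(self, name):
        self.name = name
        self.received = []          # broadcasts seen so far

    def propose(self, stimulus):
        # Each processor offers a chunk of content with a salience score.
        salience = random.random()
        return salience, f"{self.name} interprets {stimulus!r}"

    def receive(self, chunk):
        self.received.append(chunk) # the broadcast reaches everyone

def workspace_cycle(processors, stimulus):
    proposals = [p.propose(stimulus) for p in processors]
    _score, winner_chunk = max(proposals, key=lambda pair: pair[0])
    for p in processors:            # winner-takes-the-stage broadcast
        p.receive(winner_chunk)
    return winner_chunk

if __name__ == "__main__":
    procs = [Processor(n) for n in ("vision", "audition", "memory")]
    for step in range(3):
        print(workspace_cycle(procs, stimulus=f"input-{step}"))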
Consciousness defined: requirements for biological and artificial general intelligence
Consciousness is notoriously hard to define in objective terms. An objective definition of consciousness is critically needed so that we might accurately understand how consciousness and resultant choice behaviour may arise in biological or artificial systems. Many theories have integrated neurobiological and psychological research to explain how consciousness might arise, but few, if any, outline what is fundamentally required to generate consciousness. To identify such requirements, I examine current theories of consciousness and the corresponding scientific research to generate a new definition of consciousness from first principles. Critically, consciousness is the apparatus that provides the ability to make decisions, but it is not defined by the decision itself. As such, a definition of consciousness does not require choice behaviour or an explicit awareness of temporality, despite both being well-characterised outcomes of conscious thought. Rather, the requirements for consciousness include: at least some capability for perception; a memory in which to store that perceptual information, which in turn provides a framework for an imagination; and a sense of self that can use that imagination to make decisions based on possible and desired futures (see the sketch following this entry). Thought experiments and observable neurological phenomena demonstrate that these components are fundamentally required for consciousness: the loss of any one of them removes the capability for conscious thought. Identifying these requirements provides a new definition by which we can objectively determine consciousness in any conceivable agent, such as non-human animals and artificially intelligent systems.
Introduction
The study of consciousness requires the integration of many fields of research, including but not limited to neuroscience, psychology, philosophy, physics and artificial general intelligence (AGI). Yet definitions of consciousness remain disconnected from the fundamental principles required to generate it. A common mistake, for example, is to conflate "awareness" with consciousness, likely because the phrase "to be conscious of something" is synonymous with being aware of that "something". As Crick and Koch wrote in their paper "Towards a neurobiological theory of consciousness," they deliberately avoided defining consciousness, explaining that "it is better to avoid a precise definition of consciousness because of the dangers of premature definition." More than three decades later, it is past time to produce a precise definition of consciousness and its requirements, free of subjective biases.
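To make the component structure named in the abstract above concrete, here is a toy Python sketch in which perception feeds a memory, an imagination simulates possible futures, and a minimal "self" chooses the action whose imagined outcome best matches a desired future. The Agent class, its one-dimensional world and its scoring rule are invented for illustration and are not drawn from the paper.

from dataclasses import dataclass, field

# Toy sketch of the listed requirements: perception -> memory ->
# imagination -> self choosing actions against a desired future.
# Everything here (the grid world, the goal, the distance score) is an
# invented illustration, not the paper's formal proposal.

@dataclass
class Agent:
    position: int = 0                 # minimal "self" state
    goal: int = 3                     # a desired future
    memory: list = field(default_factory=list)

    def perceive(self, observation: int) -> None:
        self.memory.append(observation)          # perception stored in memory

    def imagine(self, action: int) -> int:
        # Imagination: simulate a possible future without acting.
        return self.position + action

    def decide(self) -> int:
        # Choose the action whose imagined outcome is closest to the goal.
        actions = (-1, 0, +1)
        return min(actions, key=lambda a: abs(self.imagine(a) - self.goal))

agent = Agent()
for step in range(5):
    agent.perceive(agent.position)
    agent.position += agent.decide()
print(agent.position, agent.memory)   # ends at the desired future, 3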
Neuromorphic Correlates of Artificial Consciousness
The concept of neural correlates of consciousness (NCC), which suggests that specific neural activities are linked to conscious experiences, has gained widespread acceptance. This acceptance is based on a wealth of evidence from experimental studies, brain imaging techniques such as fMRI and EEG, and theoretical frameworks like integrated information theory (IIT) within neuroscience and the philosophy of mind. This paper explores the potential for artificial consciousness by merging neuromorphic design and architecture with brain simulations. It proposes the Neuromorphic Correlates of Artificial Consciousness (NCAC) as a theoretical framework. Because the debate on artificial consciousness remains contentious, owing to our incomplete grasp of consciousness, this work may raise eyebrows and invite criticism. Nevertheless, this optimistic and forward-thinking approach is fueled by insights from the Human Brain Project, advancements in brain imaging like EEG and fMRI, and recent strides in AI and computing, including quantum and neuromorphic designs. Additionally, this paper outlines how machine learning can play a role in crafting artificial consciousness, aiming to realise machine consciousness and awareness in the future.
Can a Machine be Conscious? Towards Universal Criteria for Machine Consciousness
Nur Aizaan Anwar, Cosmin Badea
As artificially intelligent systems become more anthropomorphic and pervasive, and their potential impact on humanity more urgent, discussions about the possibility of machine consciousness have significantly intensified, and it is sometimes seen as 'the holy grail'. Many concerns have been voiced about the ramifications of creating an artificial conscious entity. This is compounded by a marked lack of consensus around what constitutes consciousness and by an absence of a universal set of criteria for determining consciousness. By going into depth on the foundations and characteristics of consciousness, we propose five criteria for determining whether a machine is conscious, which can also be applied more generally to any entity. This paper aims to serve as a primer and stepping stone for researchers of consciousness, be they in philosophy, computer science, medicine, or any other field, to further pursue this holy grail of philosophy, neuroscience and artificial intelligence.
5 extraordinary ideas about the mind and what it means to be conscious
Two years after opening our bureau in New York, we are delighted to share that New Scientist is launching a new live event series in the US. This kicks off on 22 June in New York with a one-day masterclass on the science of the brain and human consciousness. To celebrate, we have unlocked access to five in-depth features exploring mysteries of the human mind. There is perhaps no bigger puzzle of human experience than consciousness. In the simplest terms, it is awareness of our existence. It is our experience of ourselves and the world.