New computational algorithms make it possible to build neural networks with many input nodes and many layers; this scale is what distinguishes "deep learning" from previous work on artificial neural networks.
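The "many layers" idea can be made concrete with a toy forward pass: each layer applies a linear map followed by a nonlinearity, and depth simply means stacking more such layers. The sketch below is purely illustrative (the layer sizes, weight values, and the ReLU nonlinearity are arbitrary choices, not drawn from any particular study):

```python
# Minimal sketch of a deep feedforward pass in pure Python.
# Each layer computes relu(W @ x); "deep" just means many stacked layers.

def matvec(W, v):
    """Multiply matrix W (a list of rows) by vector v."""
    return [sum(w * x for w, x in zip(row, v)) for row in W]

def relu(v):
    """Elementwise rectified-linear nonlinearity."""
    return [max(x, 0.0) for x in v]

def forward(v, layers):
    """Push input v through every weight matrix in turn."""
    for W in layers:
        v = relu(matvec(W, v))
    return v

# Three hidden layers of width 4 acting on a 4-dimensional input.
layers = [[[0.5 if i == j else 0.1 for j in range(4)] for i in range(4)]
          for _ in range(3)]
x = [1.0, 2.0, 3.0, 4.0]
y = forward(x, layers)
print(len(y))  # 4
```

Adding more matrices to `layers` deepens the network without changing any other code, which is the structural point behind the term "deep".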
Our brains are incredibly good at processing faces, and even have specific regions specialized for this function. But what face dimensions are we observing? Do we observe general properties first, then look at the details? Or are dimensions such as gender or other identity details decoded interdependently? In a study published in Nature Communications, neuroscientists at the McGovern Institute for Brain Research measured the response of the brain to faces in real-time, and found that the brain first decodes properties such as gender and age before drilling down to the specific identity of the face itself.
Mini brains have been grown in a lab by scientists striving to cure motor neuron disease. The tiny organoid - approximately the size of a lentil - was made of connected human brain cells. It was then able to create connections with nearby spinal cord and muscular tissue. Scientists say they were able to see it spontaneously merge with the spinal cord of the animal while also contracting the muscles.
High-tech chips implanted in the brain could soon give humans an intelligence boost. Researchers have been working to develop minimally invasive methods to hack the human brain and squeeze out more of its potential. Recent technological advancements could make this possible within the next five years, Northwestern University neuroscientist Dr. Moran Cerf told CBS – but, he warns, the move could also create new forms of social inequality.
We are a small (but growing) community of people interested in computational neuroscience, from laymen to students, PhDs, and researchers. Most posts revolve around new papers in the field and related resources. Want to discuss a new connectomics paper? If you've ever been interested in computational neuroscience, check us out!
Research groups at KAIST, the University of Cambridge, Japan's National Institute for Information and Communications Technology, and Google DeepMind argue that our understanding of how humans make intelligent decisions has reached a critical point: robot intelligence can now be significantly enhanced by mimicking the strategies the human brain uses in everyday decision-making. In our rapidly changing world, both humans and autonomous robots constantly need to learn and adapt to new environments. The difference is that humans can tailor their decisions to the situation at hand, whereas robots still rely on predetermined data to make decisions. Despite rapid progress in strengthening the physical capabilities of robots, their central control systems, which govern what a robot decides to do at any one time, remain inferior to those of humans. In particular, robots often rely on pre-programmed instructions to direct their behavior and lack the hallmark of human behavior: the flexibility and capacity to learn and adapt quickly.
People's interactions with machines, from robots that throw tantrums when they lose a colour-matching game against a human opponent to the bionic limbs that could give us extra abilities, are not just revealing more about how our brains are wired – they are also altering them. Emily Cross is a professor of social robotics at the University of Glasgow in Scotland who is examining the nature of human-robot relationships and what they can tell us about human cognition. She defines social robots as machines designed to engage with humans on a social level – from online chatbots to machines with a physical presence, for example, those that check people into hotel rooms. According to Prof. Cross, as robots can be programmed to perform and replicate specific behaviours, they make excellent tools for shedding light on how our brains work, unlike humans, whose behaviour varies. 'The central tenets to my questions are, can we use human-robot interaction to better understand the flexibility and fundamental mechanisms of social cognition and the human brain,' she said.
As automation devastates the fundamental economic model of human civilization, we are faced with the option of augmenting the human brain so that ALL unskilled, low-IQ humans become upgraded. I wrote on this topic last year: Man and Superman: Can Displaced Blue Collar Workers Become Doctors? Since then, Elon Musk has made public statements about Neuralink, his venture to connect the human brain to computers. And in China, researcher He Jiankui gene-edited a pair of twin girls last November. All this has far-ranging consequences.
Humans retrieve the memory of an event in the reverse order to how they perceived it, according to a report published today. Instead of constructing a past memory by building up a picture from the details of the event, the brain first forms an overall 'gist' of what happened. It then fills out the story by retrieving more detail. This process appears to be the opposite of how the brain works when first encountering an event. The latest findings may give scientists greater insight into the reliability and accuracy of memory, including witness accounts of incidents such as crimes.
A research collaboration headed up at the National University of Singapore (NUS) has successfully employed machine learning to investigate the cellular architecture of the human brain. The approach uses functional MRI (fMRI) data to automatically estimate brain parameters, enabling neuroscientists to infer the cellular properties of different brain regions without having to surgically probe the brain. The researchers say that their technique could potentially be used to assess treatment of neurological disorders or develop new therapies (Science Advances 10.1126/sciadv.aat7854). "The underlying pathways of many diseases occur at the cellular level, and many pharmaceuticals operate at the microscale level," explains team leader Thomas Yeo. "To know what really happens at the innermost levels of the human brain, it is crucial for us to develop methods that can delve into the depths of the brain non-invasively." Currently, most human brain studies employ non-invasive approaches such as MRI, which limits examination of the brain at a cellular level.
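The general idea behind this kind of work, estimating hidden model parameters from a measured signal, can be sketched without any neuroimaging machinery. In the toy example below, a single decay parameter of a simple dynamical model is recovered by choosing the candidate value whose simulated trace best matches an "observed" one. The model, the parameter names, and the grid-search fit are all illustrative assumptions, not the NUS team's actual method:

```python
# Illustrative parameter estimation by grid search: recover the decay
# rate of a toy signal model from an "observed" trace.
# NOT the fMRI model from the study -- only the fitting idea.

def simulate(decay, n=50):
    """Generate a signal that shrinks by `decay` at every step."""
    x, trace = 1.0, []
    for _ in range(n):
        trace.append(x)
        x *= (1.0 - decay)
    return trace

observed = simulate(0.3)  # stand-in for a real measurement

def sse(candidate):
    """Sum of squared errors between a candidate's simulation and the data."""
    return sum((a - b) ** 2 for a, b in zip(simulate(candidate), observed))

# Pick the candidate parameter whose simulated trace best fits the data.
best = min([0.1, 0.2, 0.3, 0.4], key=sse)
print(best)  # 0.3
```

Real model-fitting pipelines replace the grid search with gradient-based or Bayesian optimization and use far richer models, but the principle of comparing simulated output against measured data is the same.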
Brain development is a remarkable self-organization process in which cells proliferate, differentiate, migrate, and wire to form functional neural circuits. In humans, this process takes place over a long fetal phase and continues into the postnatal period, but it is largely inaccessible for direct, functional investigation at a cellular level. Therefore, the features that make the human central nervous system unique and the sequence of molecular and cellular events underlying brain disorders remain largely uncharted. Human pluripotent stem (hPS) cells, including those obtained by reprogramming somatic cells, have the ability to self-organize and differentiate when grown in three-dimensional (3D) aggregates rather than in direct contact with a flat plastic surface (1). Such 3D neural cultures, also known as organoids and organ spheroids, recapitulate many aspects of human brain development in vitro (1) and have the potential to accelerate progress in human neurobiology.