Neuroscientists have constructed a network map of connections between cortical neurons, traced from a 100-terabyte 3D data set. The data were acquired with an electron microscope at nanoscopic resolution, allowing every one of the "wires" to be seen, along with their connections. Some of the neurons are color-coded according to their activity patterns in the living brain. It is the largest network of connections between cortical neurons published to date, produced by an international team of researchers from the Allen Institute for Brain Science, Harvard Medical School, and Neuro-Electronics Research Flanders (NERF). In the course of the study, the researchers developed new tools that will be useful for "reverse engineering the brain by discovering relationships between circuit wiring and neuronal and network computations," says Wei-Chung Lee, Ph.D., Instructor in Neurobiology at Harvard Medical School and lead author of a paper published this week in the journal Nature.
The Chinese government produces 488 million 'fake' social media posts a year to distract citizens from news critical of the Communist Party, a new study has revealed. According to the study, written by Harvard University professor Gary King, the goal of the secretive army of commenters is to "distract the public and change the subject" in online discussions which paint the government in a negative light. The study is reportedly the first of its kind to show concrete evidence of the existence of the '50 Cent Party', a name which references the 50 cents each worker is thought to be paid per post. In the study, co-authored by Stanford University's Jennifer Pan and UC San Diego's Margaret E. Roberts, machine learning techniques were used to analyse millions of social media posts, drawing on leaked emails and databases that detail the group's work. The research revealed co-ordinated commenting efforts, usually timed to coincide with government announcements or patriotic public holidays.
A research team from Beth Israel Deaconess Medical Center (BIDMC) and Harvard Medical School (HMS) has developed an artificial intelligence (AI) method aimed at training computers to interpret pathology images. The team trained a computer to distinguish between cancerous tumor regions and normal regions using a deep multi-layer convolutional network. In an objective evaluation in which researchers were given slides of lymph node cells and asked to determine whether or not they contained cancer, the team's automated diagnostic method proved accurate approximately 92 percent of the time. One of the researchers, Aditya Khosla, said, "This nearly matched the success rate of a human pathologist, whose results were 96 percent accurate." "In our approach, we started with hundreds of training slides for which a pathologist had labeled regions of cancer and regions of normal cells," said Dayong Wang.
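The idea described above — label regions on training slides, then classify each region as tumor or normal — can be sketched with a toy stand-in. The sketch below swaps the team's deep convolutional network for two hand-picked patch statistics and a simple logistic classifier; the synthetic "slide" patches and every parameter are assumptions made purely for illustration, not the BIDMC/HMS pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

def features(patches):
    """Two crude per-patch statistics standing in for learned CNN features."""
    flat = patches.reshape(len(patches), -1)
    return np.column_stack([flat.mean(axis=1), flat.std(axis=1)])

# Synthetic 8x8 "slide" patches: tumor regions are assumed darker and more
# textured, purely so the toy classes are separable.
normal = rng.normal(0.8, 0.05, size=(64, 8, 8))
tumor = rng.normal(0.4, 0.15, size=(64, 8, 8))
X = features(np.concatenate([normal, tumor]))
y = np.array([0] * 64 + [1] * 64)  # 0 = normal, 1 = tumor

# Logistic regression trained by plain gradient descent on log-loss.
w, b = np.zeros(X.shape[1]), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted tumor probability
    g = p - y                               # gradient of the log-loss
    w -= 0.1 * (X.T @ g) / len(y)
    b -= 0.1 * g.mean()

pred = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(int)
accuracy = (pred == y).mean()
```

In a real system the two hand-picked statistics would be replaced by features learned end-to-end by the convolutional network, and evaluation would of course be on held-out slides rather than the training patches.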
New research, led by the University of Southampton, has demonstrated that a nanoscale device, called a memristor, could be used to power artificial systems that can mimic the human brain. Artificial neural networks (ANNs) exhibit learning abilities and can perform tasks which are difficult for conventional computing systems, such as pattern recognition, online learning and classification. Practical ANN implementations are currently hampered by the lack of efficient hardware synapses, a key component that every ANN requires in large numbers. In the study, published in Nature Communications, the Southampton research team experimentally demonstrated an ANN that used memristor synapses supporting sophisticated learning rules to carry out reversible learning of noisy input data. Memristors are electrical components that limit or regulate the flow of electrical current in a circuit, can remember the amount of charge that has flowed through them, and retain that state even when the power is turned off.
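The behaviour described above — a resistance that depends on how much charge has passed through the device, and that persists when the bias is removed — can be illustrated with the linear ion-drift memristor model. The sketch below is a minimal Euler integration of that model; the device parameters are illustrative assumptions, not values from the Southampton study.

```python
# Linear ion-drift memristor model (in the spirit of the HP Labs model).
# All device parameters below are illustrative assumptions.
R_ON, R_OFF = 100.0, 16e3   # ohms: resistance when fully doped / undoped
D = 10e-9                   # m: device thickness
MU = 1e-14                  # m^2 / (V*s): dopant mobility
DT = 1e-6                   # s: Euler integration step

def memristance(w):
    """Resistance as a weighted mix of the doped and undoped regions."""
    return R_ON * (w / D) + R_OFF * (1.0 - w / D)

def step(w, v):
    """Advance the doped-region width w by one step under applied voltage v."""
    i = v / memristance(w)          # current through the device
    w += MU * (R_ON / D) * i * DT   # dw/dt is proportional to current
    return min(max(w, 0.0), D)      # state is bounded by the device geometry

w = 0.1 * D
m0 = memristance(w)
for _ in range(20000):   # positive bias drives dopants in, lowering resistance
    w = step(w, 1.0)
m1 = memristance(w)

# "Power off": with zero bias no current flows, so the state is retained.
w_held = step(w, 0.0)
```

Because the state variable only moves when current flows, the resistance after the zero-bias step is unchanged — the device's charge history is "remembered", which is what makes memristors candidates for nonvolatile hardware synapses.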
The idea that we have brains hardwired with a mental template for learning grammar--famously espoused by Noam Chomsky of the Massachusetts Institute of Technology--has dominated linguistics for almost half a century. Recently, though, cognitive scientists and linguists have abandoned Chomsky's "universal grammar" theory in droves because of new research examining many different languages--and the way young children learn to understand and speak the tongues of their communities. That work fails to support Chomsky's assertions. The research suggests a radically different view, in which learning of a child's first language does not rely on an innate grammar module. Instead the new research shows that young children use various types of thinking that may not be specific to language at all--such as the ability to classify the world into categories (people or objects, for instance) and to understand the relations among things. These capabilities, coupled with a unique human ability to grasp what others intend to communicate, allow language to happen. The new findings indicate that if researchers truly want to understand how children, and others, learn languages, they need to look outside of Chomsky's theory for guidance.