

Why Your Brain Hates Other People - Issue 49: The Absurd

Nautilus

To simplify, this can be revealed with the Implicit Association Test, where subjects look at pictures of humans or trolls, coupled with words with positive or negative connotations. Recent work, adapting the Implicit Association Test to another species, suggests that even other primates have implicit negative associations with Others. Monkeys would look longer at pairings discordant with their biases (e.g., pictures of members of their own group paired with pictures of spiders). Thus, the strength of Us/Them-ing is shown by the speed and minimal sensory stimuli required for the brain to process group differences; the tendency to group according to arbitrary differences, and then imbue those differences with supposedly rational power; the unconscious automaticity of such processes; and the rudiments of it in other primates.


Tech Metaphors Are Holding Back Brain Research

WIRED

If memory works the way most neuroscientists think it does--by altering the strength of connections between neurons--storing all that information would be way too energy-intensive, especially if memories are encoded as Shannon information: high-fidelity signals encoded in binary. That assumption leads some scientists--mind-body dualists--to argue that we won't learn much by studying the physical brain. Over time, our memories are physically encoded in our brains in spidery networks of neurons--software building new hardware, in a way. That's because the street-lamp infrastructure in the two halves of the city remains different to this day--West Berlin's street lamps use bright white mercury bulbs, while East Berlin's use tea-stained sodium-vapor bulbs.
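For readers unfamiliar with the term, "Shannon information" simply measures how many bits a signal carries. The short sketch below is a standard textbook definition, not code from the article: it shows that a fair binary signal carries a full bit per symbol, while a heavily biased one carries far less, which is part of why storing memories as high-fidelity binary signals would be so expensive.

import math

def shannon_entropy(probabilities):
    # Average information, in bits per symbol, of a source with these symbol probabilities.
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

print(shannon_entropy([0.5, 0.5]))    # fair binary signal: 1.0 bit per symbol
print(shannon_entropy([0.99, 0.01]))  # heavily biased signal: ~0.08 bits per symbol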


Modeling the way towards Artificial General Intelligence

#artificialintelligence

The potential to improve the system becomes realizable because the Google researchers have made their work available in the Tensor2Tensor library, allowing more researchers to work on and improve the algorithm. Although the algorithm is not yet as powerful as DeepMind's work on networks that only have to perform individual tasks, this work could become a further step towards making artificial neural networks work like our own natural neural networks. Memory capability that allows human-like learning, linked with Google's MultiModel algorithm, will make it possible for future AI algorithms and systems to be trained on less training data. This cross-pollination of intellectual work will allow strides to be made in more varied tasks, which will overall allow systems to handle multiple tasks and multiple contexts, paving the way towards artificial general intelligence and eventually artificial superintelligence as these systems become more than just human-like.
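MultiModel's core idea, a single shared trunk of parameters feeding several task-specific output heads, can be illustrated with a minimal sketch. The code below is hypothetical (it is not the Tensor2Tensor implementation, and all names and sizes are made up), but it shows how one shared representation can serve multiple tasks at once.

import numpy as np

rng = np.random.default_rng(0)

def shared_encoder(x, W_shared):
    # One trunk of parameters reused by every task (the multi-task idea).
    return np.tanh(x @ W_shared)

def task_head(h, W_task):
    # A small task-specific output layer on top of the shared representation.
    return h @ W_task

W_shared = rng.normal(size=(16, 32)) * 0.1           # hypothetical shared encoder weights
heads = {
    "translation": rng.normal(size=(32, 8)) * 0.1,   # hypothetical task-specific heads
    "image_captioning": rng.normal(size=(32, 4)) * 0.1,
}

x = rng.normal(size=(1, 16))           # one input example
h = shared_encoder(x, W_shared)        # representation shared across tasks
outputs = {task: task_head(h, W) for task, W in heads.items()}
print({task: out.shape for task, out in outputs.items()})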


Helping or hacking? Engineers and ethicists must work together on brain-computer interface technology

Robohub

Using just an individual's brain activity – specifically, their P300 response – we could determine a subject's preferences for things like favorite coffee brand or favorite sport. The potential ability to determine individuals' preferences and personal information using their own brain signals has spawned a number of difficult but pressing questions: Should we be able to keep our neural signals private? Putting ethicists in labs alongside engineers – as we have done at the CSNE – is one way to ensure that the privacy and security risks of neurotechnology, as well as other ethically important issues, are an active part of the research process rather than an afterthought. The goal should be for ethical standards and the technology to mature together, so that future BCI users are confident their privacy is being protected as they use these kinds of devices.
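To make the underlying idea concrete, a P300-based preference readout averages EEG epochs time-locked to each stimulus and compares the response in roughly the 250-500 ms post-stimulus window. The sketch below is a textbook-style illustration, not the CSNE pipeline; the data and names are made up.

import numpy as np

def p300_score(epochs, fs=250):
    # Mean amplitude in the 250-500 ms post-stimulus window, averaged over trials.
    # epochs has shape (n_trials, n_samples) for a single stimulus category.
    erp = epochs.mean(axis=0)                  # averaging suppresses trial-to-trial noise
    start, stop = int(0.25 * fs), int(0.50 * fs)
    return erp[start:stop].mean()

rng = np.random.default_rng(1)
fs, n_samples = 250, 200                       # 0.8-second epochs sampled at 250 Hz
epochs_by_brand = {                            # hypothetical recordings per coffee-brand stimulus
    "brand_A": rng.normal(0, 5, size=(40, n_samples)),
    "brand_B": rng.normal(0, 5, size=(40, n_samples)),
}

# The stimulus evoking the strongest response in the P300 window is read out as the preferred one.
preferred = max(epochs_by_brand, key=lambda b: p300_score(epochs_by_brand[b], fs))
print(preferred)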


The Future of Artificial Intelligence and Cybernetics

#artificialintelligence

For example, if the robot brain has roughly the same number of human neurons as a typical human brain, then could it, or should it, have rights similar to those of a person? Also, if such robots have far more human neurons than a typical human brain--for example, a million times more--would they, rather than humans, make all future decisions? In those cases, the situation isn't straightforward, as patients receive abilities that normal humans don't have--for example, the ability to move a cursor on a computer screen using nothing but neural signals. It's clear that connecting a human brain with a computer network via an implant could, in the long term, open up the distinct advantages of machine intelligence, communication, and sensing abilities to the individual receiving the implant.


Storytelling with Data: Our Brains Crave Structure Love Oddballs

@machinelearnbot

So you can potentially activate parts of your brain involved in motor control or your sense of touch. When creating your own stories, remember that the brain craves structure and loves oddballs. The brain processes information by using what it already knows to infer what a new piece of information might be. Now that you have some basic understanding of brain anatomy and neuroscience, try applying these lessons to your data stories.


Beyond the Five Senses

The Atlantic

The world we experience is not the real world. Which raises the question: How would our world change if we had new and different senses? More recently, researchers in the emerging field of "sensory enhancement" have begun developing tools to give people additional senses--ones that imitate those of other animals, or that add capabilities nature never imagined. Researchers are working on other technologies that could restore sight or touch to those who lack it.


AI (Deep Learning) explained simply

#artificialintelligence

Machine learning (ML), a subset of AI, makes machines learn from experience, from examples of the real world: the more data it sees, the more it learns. In ML, instead, we only feed in data samples of the problem to be solved: lots of spam and non-spam emails, cancer and non-cancer photos, etc., all first sorted, polished, and labeled by humans. ML systems that look at screenshots of web pages or apps can write code producing similar pages or apps. An ML system trained to win at poker learned to bluff, handling missing and potentially fake, misleading information.
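The labeled-examples workflow described here looks roughly like this in practice. The sketch below uses scikit-learn with a tiny made-up dataset; it is only meant to show the loop in which humans label the data and the machine learns from those examples.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Tiny made-up dataset: humans have already sorted and labeled each email.
emails = [
    "win a free prize now", "cheap meds limited offer",
    "meeting agenda for tomorrow", "lunch with the project team",
]
labels = ["spam", "spam", "not spam", "not spam"]

# Turn the text into word-count features, then learn from the labeled examples.
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(emails)
classifier = MultinomialNB().fit(X, labels)

# The more labeled data the classifier sees, the better it generalizes to new emails.
print(classifier.predict(vectorizer.transform(["free prize offer"])))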


Top 10 Most Promising Toronto AI Startups

#artificialintelligence

In an effort to retain talent and make Toronto a global supplier of AI capability, the University of Toronto gathered a team of globally renowned researchers and founded the Vector Institute. Google and Uber are also investing in their own artificial intelligence hubs: Google Brain Toronto, the second Google Brain satellite office based in Canada, and the Toronto division of Uber's Advanced Technologies Group. According to the team: "Meta is a tool that helps researchers understand what is happening globally in science and shows them where science is headed." An Artificial Intelligence-based tenant screening platform.


Looking to the human brain to improve artificial intelligence

#artificialintelligence

This process has not been well understood, and the lack of clarity is one of the barriers to replicating it in computer systems, where the aim is to advance artificial intelligence. This area, which is in the primary visual cortex (or visual area one), receives connections from the thalamus. As Professor Tatyana Sharpee explains: "Understanding how the brain recognizes visual objects is important not only for the sake of vision, but also because it provides a window on how the brain works in general." Finally, by combining the responses of similarly oriented neurons, the brain pieces together the information to create a scene.
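The orientation-selective responses described above are commonly modeled with Gabor filters. The sketch below is a standard textbook model of a V1 simple cell, not Professor Sharpee's code: a filter tuned to vertical edges responds far more strongly to a vertical edge in a toy image patch than a filter tuned to horizontal edges does.

import numpy as np

def gabor_filter(size=15, theta=0.0, wavelength=6.0, sigma=3.0):
    # Oriented Gabor filter: a sinusoidal grating under a Gaussian envelope,
    # a common model of a V1 simple cell's receptive field.
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x_rot = x * np.cos(theta) + y * np.sin(theta)
    y_rot = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(x_rot**2 + y_rot**2) / (2 * sigma**2))
    grating = np.cos(2 * np.pi * x_rot / wavelength)
    return envelope * grating

patch = np.zeros((15, 15))
patch[:, 8:] = 1.0    # toy image patch containing a vertical edge

vertical_response = abs(np.sum(gabor_filter(theta=0.0) * patch))
horizontal_response = abs(np.sum(gabor_filter(theta=np.pi / 2) * patch))
print(vertical_response, horizontal_response)   # the vertically tuned "neuron" responds most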