If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
For many expectant parents, the first opportunity to "meet" their baby comes at 20 weeks of pregnancy. The ultrasound scan performed at that time gives the parents a sense of the health of the growing fetus. The images produced in this important exam reveal the shape and structure of the head and brain, which are of particular interest because severe brain problems may become visible at this stage in the pregnancy. As the brain develops, maternal-fetal specialists keep a close eye on the cerebellum – the part of the brain that coordinates and regulates muscular activity. A healthy-looking cerebellum can typically help rule out fetal complications such as spina bifida – a neural tube defect in which the spinal cord fails to develop properly.
My name is Andrew Zaldivar, and I am a Developer Advocate for Google AI in San Francisco. Specifically, I work on a research-based team focused on developing and promoting socio-technical strategies that can advance positive long-term outcomes from AI. In my role, I act as a servant to the public's interest in developing ethical AI systems. I completed my doctorate degree in cognitive neuroscience, but my studies were complemented with informatics, psychology and data science, which helped prepare me to examine the interplay of people and technology and what it means for our future. To help developers take on the challenge of building fairness into their machine learning models, I helped develop a short, self-study Fairness Module that is part of our Machine Learning Crash Course.
Columbia neuroscientists have revealed that a simple brain region, known for processing basic sensory information, can also guide complex feats of mental activity. The new study involving mice demonstrated that cells in the somatosensory cortex, the brain area responsible for touch, also play a key role in reward learning, the sophisticated type of learning that allows the brain to associate an action with a pleasurable outcome. It is the basis for how we connect our work in the office to that paycheck, or that A to the studying we did in preparation for the test. The new research, published today in Cell Reports, provides evidence that learning and memory are not relegated to a few select regions, but instead may permeate the brain. "Our brains are masterful at making connections, or associations, between seemingly disparate pieces of information, but where those associations are stored has remained an unresolved question," said Randy Bruno, PhD, a principal investigator at Columbia's Mortimer B. Zuckerman Mind Brain Behavior Institute and the paper's senior author.
The human brain is capable of communicating in a way scientists previously thought was impossible. Brain cells can create an electrical field that triggers nearby neurons to pass on a message without any physical or chemical connections. Slow and mysterious waves produced by the brain, which have long been known to exist but whose function has been a long-standing mystery, are responsible. The discovery is so unusual that the scientific journal that made the findings public demanded the experiments be repeated before it was willing to publish. 'It was a jaw-dropping moment, for us and for every scientist we told about this so far,' said Dominique Duran, a professor at the Case School of Engineering in Cleveland, Ohio.
Learning new things is a huge part of life -- we should always be striving to learn and grow. But it takes time, and time is precious. So how can you make the most of your time by speeding up the learning process? Thanks to neuroscience, we now have a better understanding of how we learn and the most effective ways our brains process and hold on to information. If you want to get a jump start on expanding your knowledge, here are 10 proven ways you can start learning faster today.
Alita: Battle Angel is an interesting and wild ride, jam-packed full of concepts around cybernetics, dystopian futures and cyberpunk themes. The film – in cinemas now – revolves around Alita (Rosa Salazar), a female cyborg (with an original human brain) who is recovered by cybernetic doctor Dyson Ido (Christoph Waltz) and brought into the world of the future (the film is set in 2563). Hundreds of years after a catastrophic war, called "The Fall", the population of Earth now resides in a wealthy sky city called Zalem and a sprawling junkyard called Iron City where the detritus from Zalem is dumped. We follow Alita's story as she makes friends and enemies, and discovers more about her past. Her character is great – she has many of the mannerisms of a teenage girl combined with a determination and overarching sense of what is right – "I do not stand by in the presence of evil."
LONDON, Feb 12: Researchers say they have developed a machine learning algorithm for drug discovery which is twice as efficient as the industry standard, and could accelerate the process of developing new treatments for diseases such as Alzheimer's. The team led by researchers at the University of Cambridge in the UK used the algorithm to identify four new molecules that activate a protein thought to be relevant for symptoms of Alzheimer's disease and schizophrenia. A key problem in drug discovery is predicting whether a molecule will activate a particular physiological process, according to the study published in the journal PNAS. It is possible to build a statistical model by searching for chemical patterns shared among molecules known to activate that process, but the data to build these models is limited because experiments are costly and it is unclear which chemical patterns are statistically significant. Machine learning is an application of artificial intelligence (AI) that provides systems the ability to automatically learn and improve from experience without being explicitly programmed. "Machine learning has made significant progress in areas such as computer vision where data is abundant," said Alpha Lee from Cambridge's Cavendish Laboratory.
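To make the idea of "searching for chemical patterns shared among molecules" concrete, here is a minimal, purely illustrative sketch: molecules are represented as sets of substructure names (toy stand-ins for real chemical fingerprints, not drawn from the study), and a candidate is scored by its Tanimoto similarity to known active molecules.

```python
# Illustrative sketch only: toy "fingerprints" as sets of substructure
# names, scored against known actives by Tanimoto similarity. The
# feature names and molecules are invented, not from the Cambridge study.

def tanimoto(a, b):
    """Tanimoto similarity between two feature sets: |A ∩ B| / |A ∪ B|."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

def score_candidate(candidate, known_actives):
    """Score a candidate by its best similarity to any known active."""
    return max(tanimoto(candidate, active) for active in known_actives)

# Hypothetical molecules known to activate the target process.
known_actives = [
    {"amide", "aromatic_ring", "halogen"},
    {"amide", "aromatic_ring", "hydroxyl"},
]

# A new candidate sharing two of the three patterns with each active.
candidate = {"amide", "aromatic_ring", "methyl"}
print(round(score_candidate(candidate, known_actives), 2))  # 0.5
```

The limitation the article mentions shows up directly in a model like this: with few known actives and no statistics on which shared patterns actually matter, a high similarity score may reflect coincidence rather than true activity.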
It began about a decade ago at Syracuse University, with a set of equations scrawled on a blackboard. Marc Howard, a cognitive neuroscientist now at Boston University, and Karthik Shankar, who was then one of his postdoctoral students, wanted to figure out a mathematical model of time processing: a neurologically computable function for representing the past, like a mental canvas onto which the brain could paint memories and perceptions. "Think about how the retina acts as a display that provides all kinds of visual information," Howard said. "That's what time is, for memory. And we want our theory to explain how that display works."
When it comes to the future of healthcare, perhaps the only technology more powerful than CRISPR is artificial intelligence. Over the past five years, healthcare AI startups around the globe raised over $4.3 billion across 576 deals, topping all other industries in AI deal activity. During this same period, the FDA has given 70 AI healthcare tools and devices 'fast-tracked approval' because of their ability to save both lives and money. The pace of AI-augmented healthcare innovation is only accelerating. In Part 3 of this blog series on longevity and vitality, I cover the different ways in which AI is augmenting our healthcare system, enabling us to live longer and healthier lives.
When the mathematician Alan Turing posed the question "Can machines think?" in the first line of his seminal 1950 paper that ushered in the quest for artificial intelligence (AI) (1), the only known systems carrying out complex computations were biological nervous systems. It is not surprising, therefore, that scientists in the nascent field of AI turned to brain circuits as a source for guidance. One path that was taken since the early attempts to perform intelligent computation by brain-like circuits (2), and which led recently to remarkable successes, can be described as a highly reductionist approach to model cortical circuitry. In its basic current form, known as a "deep network" (or deep net) architecture, this brain-inspired model is built from successive layers of neuron-like elements, connected by adjustable weights, called "synapses" after their biological counterparts (3). The application of deep nets and related methods to AI systems has been transformative.
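The deep net architecture described above can be sketched in a few lines: successive layers of neuron-like units, each joined to the previous layer by adjustable weights ("synapses"), with a simple nonlinearity applied at every unit. The layer sizes and random weights below are illustrative assumptions, not details from the text.

```python
import random

random.seed(0)

def make_layer(n_in, n_out):
    """One layer: a weight row per output unit, plus a trailing bias.

    The weights are the adjustable "synapses" of the brain-inspired model;
    here they are random placeholders that training would normally adjust.
    """
    return [[random.uniform(-1, 1) for _ in range(n_in)] + [0.0]
            for _ in range(n_out)]

def forward(layer, inputs):
    """Each unit takes a weighted sum of its inputs, then a nonlinearity."""
    return [max(0.0, sum(w * x for w, x in zip(row[:-1], inputs)) + row[-1])
            for row in layer]

# Three successive layers: 4 inputs -> 5 hidden units -> 3 -> 2 outputs.
network = [make_layer(4, 5), make_layer(5, 3), make_layer(3, 2)]

activations = [0.5, -0.2, 0.1, 0.9]
for layer in network:
    activations = forward(layer, activations)

print(len(activations))  # 2 output units
```

The "deep" in deep network refers simply to stacking many such layers; the transformative applications mentioned above come from training the weights of much larger stacks on large datasets.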