Spending time on the internet is reducing our ability to focus on one task at a time, and it means we no longer store facts in our brains. Gaining access to infinite amounts of information at the touch of a button has changed our lives forever, but it has changed the way our brains work, too. A new review from researchers in the UK, US, and Australia examining the online world's effect on our brain functions has drawn a number of surprising conclusions. The review focused on the world wide web's influence in three areas: attention spans, memory, and social cognition. It notes that the internet is now 'unavoidable, ubiquitous, and a highly functional aspect of modern living' before diving into how it has changed our society.
Jimmy Curran controls the TV with his eyes through this web-based Comcast remote. Most TV viewers take for granted the ability to change the channel from their couches with a remote control. That task may be nearly impossible for viewers with the most severe physical challenges. On Monday, Comcast launches a free web-based remote on tablets and computers that lets Xfinity X1 customers with spinal cord injuries, ALS (Lou Gehrig's disease) or other disabilities change channels on the TV, set recordings, launch the program guide and search for a show with their eyes. The free X1 eye control works with whatever eye gaze hardware and software system the customer is using, as well as "sip-and-puff" switches and other assistive technologies.
Breast cancer is the second leading cancer-related cause of death among women in the US. Early detection, through routine annual screening mammography, is the best first line of defense against breast cancer. However, these screening mammograms require interpretation by expert radiologists. A radiologist can spend up to 10 hours a day working through these mammograms, experiencing both eye strain and mental fatigue in the process. Modern computer vision models, built principally on Convolutional Neural Networks (CNNs), have seen incredible progress in recent years.
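CNNs build image understanding out of one simple primitive: a small filter slid across the image, multiplying and summing at each position. A minimal NumPy sketch of that operation (the filter values here are illustrative edge-detector weights, not learned parameters from any mammography model):

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D convolution (strictly, cross-correlation, as in most
    deep learning libraries): slide the kernel over the image and sum the
    elementwise products at each position."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A toy 4x4 "image" with a vertical boundary, and a vertical-edge kernel
image = np.array([[0, 0, 9, 9],
                  [0, 0, 9, 9],
                  [0, 0, 9, 9],
                  [0, 0, 9, 9]], dtype=float)
kernel = np.array([[-1, 1],
                   [-1, 1]], dtype=float)

response = conv2d(image, kernel)  # peaks where the intensity jumps
```

A CNN stacks many such filters, with the weights learned from data rather than hand-chosen, which is what lets these models pick out subtle patterns in images like mammograms.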
I just briefly wanted to say a little bit about my background. I studied Math and Computer Science in college and then did a Ph.D. in Math. I worked as a quant in Energy Trading and that's where I first started working with data. I was an early data scientist and backend developer at Uber. I taught full stack software development at Hackbright. I really love teaching and I think I'll always return to teaching in some form. And then two years ago, together with Jeremy Howard, I started fast.ai with the goal of making deep learning more accessible and easier to use. I just have one slide about fast.ai. We have this, as William mentioned, a totally free course, "Practical Deep Learning for Coders." The only prerequisite is one year of coding experience. It's distinctive in that there are no advanced math prerequisites, yet it takes you to the state-of-the-art. We've had a lot of success. We've had students get jobs at Google Brain, have their work featured on HBO and in Forbes, launch new companies, get new jobs. I wanted to let you know that this is out here, and this was a partnership between fast.ai, which is a non-profit research lab, and the University of San Francisco's Data Institute.
Doctors could soon get some help from an artificial intelligence tool when diagnosing brain aneurysms -- bulges in blood vessels in the brain that can leak or burst open, potentially leading to stroke, brain damage or death. The AI tool, developed by researchers at Stanford and detailed in a paper published June 7 in JAMA Network Open, highlights areas of a brain scan that are likely to contain an aneurysm. "There's been a lot of concern about how machine learning will actually work within the medical field," said Allison Park, a graduate student in statistics and co-lead author of the paper. "This research is an example of how humans stay involved in the diagnostic process, aided by an artificial intelligence tool." This tool, which is built around an algorithm called HeadXNet, improved clinicians' ability to correctly identify aneurysms at a level equivalent to finding six more aneurysms in 100 scans that contain aneurysms.
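Tools of this kind typically turn a model's per-pixel probability map into a visual highlight by thresholding it into a mask and overlaying that mask on the scan. A toy NumPy sketch of that post-processing step (the probability values and the 0.5 cutoff are illustrative assumptions, not HeadXNet's actual outputs or settings):

```python
import numpy as np

# Hypothetical per-pixel aneurysm probabilities from a segmentation model
prob_map = np.array([[0.05, 0.10, 0.08],
                     [0.12, 0.91, 0.87],
                     [0.07, 0.83, 0.09]])

THRESHOLD = 0.5  # illustrative cutoff, not the paper's setting

# Binary mask of the regions to highlight for the clinician
highlight = prob_map >= THRESHOLD

# Overlay: brighten flagged pixels on a stand-in grayscale slice
scan = np.full(prob_map.shape, 100.0)
overlaid = np.where(highlight, 255.0, scan)
```

The clinician then reviews the highlighted regions rather than the raw probabilities, which is how the human stays in the diagnostic loop.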
A machine learning algorithm can detect signs of anxiety and depression in the speech patterns of young children, potentially providing a fast and easy way of diagnosing conditions that are difficult to spot and often overlooked in young people, according to new research published in the Journal of Biomedical and Health Informatics. Around one in five children suffer from anxiety and depression, collectively known as "internalizing disorders." But because children under the age of eight can't reliably articulate their emotional suffering, adults need to be able to infer their mental state and recognise potential mental health problems. Waiting lists for appointments with psychologists, insurance issues, and failure by parents to recognise the symptoms all contribute to children missing out on vital treatment. "We need quick, objective tests to catch kids when they are suffering," says Ellen McGinnis, a clinical psychologist at the University of Vermont Medical Center's Vermont Center for Children, Youth and Families and lead author of the study.
A machine-learning method discovered a hidden clue in people's language predictive of the later emergence of psychosis -- the frequent use of words associated with sound. The findings, by scientists at Emory University and Harvard University, were published in the journal npj Schizophrenia. The researchers also developed a new machine-learning method to more precisely quantify the semantic richness of people's conversational language, a known indicator for psychosis. Their results show that automated analysis of the two language variables -- more frequent use of words associated with sound and speaking with low semantic density, or vagueness -- can predict whether an at-risk person will later develop psychosis with 93 percent accuracy. Even trained clinicians had not noticed how people at risk for psychosis use more words associated with sound than average, although abnormal auditory perception is a pre-clinical symptom.
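Semantic-density measures are usually built on word embeddings: each word is mapped to a vector, and the method asks how much meaning an utterance packs in. As a very rough illustration (not the authors' actual algorithm), one crude proxy is how widely an utterance's word vectors spread around their mean: vague, repetitive speech clusters tightly, while varied, contentful speech spreads out. The three-dimensional embeddings below are hand-made toy values; real systems use pretrained vectors with hundreds of dimensions:

```python
import numpy as np

def semantic_spread(word_vectors):
    """Crude proxy for semantic density: the mean distance of an
    utterance's word vectors from their centroid. Tightly clustered
    (vague) speech scores low; varied speech scores high."""
    vecs = np.array(word_vectors, dtype=float)
    centroid = vecs.mean(axis=0)
    return float(np.linalg.norm(vecs - centroid, axis=1).mean())

# Hypothetical 3-d embeddings for two utterances
vague  = [[1.0, 0.1, 0.0], [0.9, 0.2, 0.0], [1.1, 0.1, 0.1]]  # near-synonyms
varied = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]  # distinct concepts

low_density = semantic_spread(vague)
high_density = semantic_spread(varied)
```

A predictive model would then combine a density score like this with other features, such as the frequency of sound-related words, to flag at-risk speech.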
Biopharmas are warming up to artificial intelligence (AI), but a series of challenges will need to be addressed before it becomes widely used by drug developers, a panel of industry executives agreed. Speaking at the 2019 Annual Meeting of NewYorkBIO in New York City yesterday, panelists identified those challenges as finding more and better data, integrating data from multiple sources, and creating partnerships to gather and analyze that data. The panel also cited challenges that go beyond data, such as attracting a new generation of professionals capable of applying AI and related technologies such as machine learning, and adapting biopharmas to the new technologies. Those observations are in line with a study released today by The Pistoia Alliance, a global not-for-profit organization of more than 150 members established by executives from AstraZeneca, GlaxoSmithKline (GSK), Novartis, and Pfizer. The Alliance surveyed 190 life sciences professionals in the US and Europe, with 52% citing access to data, and 44% a lack of skills, as the two key barriers to adoption of AI and machine learning.
A trio of researchers have developed an experimental machine learning method that allows AI to listen for the early whispers of a psychotic break that humans can't hear. The team, consisting of Neguine Rezaii of Harvard Medical School and Emory School of Medicine, and Elaine Walker and Philipp Wolff from Emory University's Department of Psychology, set out to see if there was any way to use language as an indicator of impending latent-onset psychosis. They developed a machine learning method that looks for specific indicators long thought to be associated with psychosis, especially schizophrenia. The team then spent two years observing study volunteers, a significant portion of whom went on to experience a psychotic break (the first occurrence of a fully psychotic episode). The results of the study were striking.