At least 40 people have died and more than 1,000 have tested positive for swine flu since the beginning of the year in a western Indian state popular with foreigners, according to authorities. The highly contagious H1N1 virus, which spreads from human to human, killed around 1,100 people and infected 15,000 across the country last year. "Total deaths are 40 and positive cases are 1,036 as from January 1 to 17 in Rajasthan. One of the deaths occurred on Thursday," a statement by the Rajasthan health department said on Friday. Rajasthan's Jodhpur district recorded the highest death toll, with 16 fatalities and 225 people testing positive.
Doctors in intensive care units face a continual dilemma: Every blood test they order could yield critical information, but also adds costs and risks for patients. To address this challenge, researchers from Princeton University are developing a computational approach to help clinicians more effectively monitor patients' conditions and make decisions about the best opportunities to order lab tests for specific patients. Using data from more than 6,000 patients, graduate students Li-Fang Cheng and Niranjani Prasad worked with Associate Professor of Computer Science Barbara Engelhardt to design a system that could both reduce the frequency of tests and improve the timing of critical treatments. The team presented their results on Jan. 6 at the Pacific Symposium on Biocomputing in Hawaii. The analysis focused on four blood tests measuring lactate, creatinine, blood urea nitrogen and white blood cells.
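The Princeton team's actual model is not detailed here, but the core trade-off it navigates can be sketched in a few lines: order a lab test only when the expected value of the information it would provide outweighs its cost and risk. Everything in this toy illustration, including the function name, thresholds, and numbers, is invented for demonstration.

```python
# Toy illustration (not the Princeton team's actual model): order a lab
# test only when uncertainty about a patient's value outweighs its cost.
# All names, numbers, and thresholds here are hypothetical.

def should_order_test(predicted_std, test_cost, risk_weight=1.0):
    """Order the test if the expected value of the information
    (proxied here by predictive uncertainty) exceeds its cost."""
    information_value = risk_weight * predicted_std
    return information_value > test_cost

# A patient whose lactate forecast is very uncertain warrants a test;
# a stable, well-predicted patient does not.
print(should_order_test(predicted_std=4.0, test_cost=1.5))  # True
print(should_order_test(predicted_std=0.3, test_cost=1.5))  # False
```

In the real system, the uncertainty estimate would come from a predictive model fit to patient time series rather than a single number, but the decision logic, test when information is worth more than it costs, is the same shape.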
Ideally, Face2Gene would be able to correctly identify a disorder every time. To get closer to that goal, the FDNA team needs more training data, which it hopes to generate by making the app available to healthcare professionals for free. It also needs that training data to include more non-Caucasian faces -- a 2017 study using Face2Gene to identify Down syndrome found the healthcare app was 80 percent accurate in its diagnosis if a photo featured a white Belgian child, but only 37 percent accurate if it featured a black Congolese child. Even at its current rate of accuracy, though, the app has already impressed at least one rare disease specialist: the University of Oxford's Christoffer Nellåker, who was not associated with the research. "The real value here is that for some of these ultra-rare diseases, the process of diagnosis can be many, many years," he told New Scientist.
A group of scientists from the Massachusetts Institute of Technology (United States) has created a machine learning system that processes sounds the way people do. The model can understand the meaning of a word and classify a song by genre or style: classical, jazz, pop, rock, blues, soul, hip hop, techno, house, etc. It is the first invention of this type that mimics the way the brain works, and as the experiments carried out at MIT show, it can compete with humans in accuracy. The research, published in the journal Neuron, is based on deep neural networks, that is, structures inspired by brain cells that analyze information in layers.
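"Analyzing information in layers" means the raw input is transformed repeatedly, with each layer building a more abstract representation before a final classification. A minimal NumPy sketch of that forward pass, using random placeholder weights rather than anything from the MIT model, and an invented 10-genre output:

```python
import numpy as np

# Toy sketch of layered processing in a deep neural network. Weights are
# random placeholders, not the MIT model's parameters; the 128-dimensional
# "audio features" and 10 output genres are hypothetical.

rng = np.random.default_rng(0)

def relu(x):
    # Nonlinearity applied after each layer
    return np.maximum(0.0, x)

def forward(x, layer_sizes):
    """Pass input through a stack of randomly initialized dense layers."""
    for n_out in layer_sizes:
        w = rng.standard_normal((x.shape[0], n_out)) / np.sqrt(x.shape[0])
        x = relu(w.T @ x)  # each layer re-represents the previous one
    return x

audio_features = rng.standard_normal(128)             # stand-in for a sound clip
genre_scores = forward(audio_features, [64, 32, 10])  # 10 hypothetical genres
print(genre_scores.shape)  # (10,)
```

A trained network would learn the weights from labeled audio instead of drawing them at random, but the layer-by-layer structure is what the "inspired by brain cells" description refers to.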
In a hilarious turn of events last month, a Russian robot named Boris was unmasked as a man in a robot suit. Likewise, state-run media in China unveiled its AI reporter in November, and to this day it's not clear if this is an actual AI system boiling down news stories or just a synthesized voice with an avatar. More fabricated robotic theatrics appeared to be on display this week at the Consumer Electronics Show in Las Vegas, where LG CTO I.P. Park delivered the opening CES keynote address. Park was accompanied onstage for the hour-long presentation by CLOi, a conceptual robot line perhaps best known for failing during a live demo at CES a year ago. This year, however, CLOi did a bit of everything: The robot acted as co-host, cracked jokes, delivered some LG HomeBrew beer, and even helped some guy who hates blind dates find true love.
In a video uploaded to YouTube on Friday, Shitty Robots creator Simone Giertz elaborated on what she disclosed on Twitter last week: that her non-cancerous brain tumor, for which she had brain surgery last year, has grown enough that she now plans to undergo radiation treatment. Giertz explained that her doctors decided to leave a piece of the tumor behind in a section of the brain where performing surgery was particularly risky. That's the piece that, over the past eight months, has grown. "We knew that there was a risk it would grow down the line, which it has," she said. "It's just that the 'line' in 'down the line' was a little bit shorter than I had anticipated, because it's only been eight months [since the procedure]."
An artificial intelligence program that's better than human doctors at recommending treatment for sepsis may soon enter clinical trials in London. The machine learning model is part of a new way of practicing medicine that mines electronic medical-record data for more effective ways of diagnosing and treating difficult medical problems, including sepsis, a blood infection that kills an estimated 6 million people worldwide each year. The discovery of a promising treatment strategy for sepsis didn't come about the regular way, through lengthy, carefully controlled experiments. Instead, it emerged during a freewheeling hackathon in London in 2015. In a competition bringing together engineers and health care professionals, one team hit on a better way to treat sepsis patients in the intensive-care unit, using MIT's open-access MIMIC database.
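The basic idea behind mining retrospective records for treatment strategies can be sketched crudely: group past (state, action) pairs and prefer the action with the better observed outcomes in each patient state. Real systems trained on MIMIC are far more sophisticated (typically reinforcement learning over many vital signs and dosing actions); the records, states, and actions below are invented for illustration.

```python
from collections import defaultdict

# Crude sketch of learning a treatment preference from retrospective
# records: for a given patient state, pick the action with the highest
# observed survival rate. All data below is invented.

records = [
    # (patient_state, action, survived)
    ("low_bp", "fluids", 1),
    ("low_bp", "fluids", 0),
    ("low_bp", "vasopressor", 1),
    ("low_bp", "vasopressor", 1),
]

def best_action(records, state):
    totals = defaultdict(lambda: [0, 0])  # action -> [survivors, count]
    for s, a, outcome in records:
        if s == state:
            totals[a][0] += outcome
            totals[a][1] += 1
    # Highest observed survival rate wins
    return max(totals, key=lambda a: totals[a][0] / totals[a][1])

print(best_action(records, "low_bp"))  # vasopressor
```

A production model must also correct for confounding, since sicker patients tend to receive more aggressive treatment in the historical data, which is a large part of why such systems still need clinical trials.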
Doctors Are Confident That AI Won't Replace Them

Companies such as DeepMind and IBM have been at the forefront of introducing AI into healthcare, with their smart algorithms proving adept at finding patterns in large quantities of data that enable the machines to make accurate predictions in the diagnosis of a range of conditions. There have been projects to provide more accurate and earlier diagnoses for mental health, dementia, Parkinson's, skin cancer, Alzheimer's, arthritis and, well, you get the picture. The reporting of many of these projects has been accompanied by breathless claims that doctors' work will soon be automated, with these claims seeming to be supported by the growing number of AI-based triage systems on the market that claim to be able to accurately diagnose patients after hearing of their various symptoms. One might imagine that given this technological onslaught, doctors are a bit edgy about their long-term prospects, but that doesn't appear to be the case at all. Indeed, a recent study asked doctors working in primary care across the United Kingdom how they felt about AI technologies and whether they believed their job was at risk, and the answer was an overwhelming no.
Artificial intelligence, or AI for short, is one of the most highly anticipated digital healthcare technologies. While the concept of AI may still seem futuristic to some, the era of machine learning is already here. Uptake in pharma has been relatively slow compared with other industries. However, this is gradually changing. AI is developing at a rapid rate, and pharma will need to adapt if it wants to stay relevant.