SINGAPORE: By 2022, people living in Singapore will be able to report municipal issues via a chatbot that asks for details in real time and automatically identifies the correct government agency in charge. This will be made possible by artificial intelligence (AI), which is also set to power a tool that helps detect diabetic eye disease and an automated marking system for English in primary and secondary education by the same year. More initiatives tapping on AI technologies, such as machine learning and computer vision, are in the pipeline over the next decade, with five projects unveiled on Wednesday (Nov 13) as part of Singapore's new "National AI Strategy". The new strategy, which maps out how Singapore will develop and use AI to transform the economy and improve people's lives, was announced by Deputy Prime Minister Heng Swee Keat on the final day of the "Singapore FinTech Festival (SFF) x the Singapore Week of Innovation and TeCHnology (SWITCH) Conference". Describing it as the next step in Singapore's Smart Nation journey, Mr Heng said: "Countries will need to keep pace with technology, and harness it to tackle common challenges and national priorities."
Dr Peter B Scott-Morgan has just turned from Peter 1.0 to Peter 2.0, to use his own term, and has become the world's first full cyborg. He's real and you can see his posts on Twitter. Dr Scott-Morgan is a scientist with a muscle-wasting disease that has now taken its toll on his body. In other words, he is terminally ill with a motor neurone disease. As the muscles in his body lose their power completely, only his brain will remain alive.
In the golden age of Artificial Intelligence, healthcare is the new frontier of research and development. Surgeons are routinely using robotic assistance to operate with less invasiveness and more precision. Gene sequencing and gene editing aided by AI are transforming the way scientists develop cures for diseases. But, most notably, research is underway to allow AI to transform the way doctors diagnose patients. You have symptoms of a cold.
Elon Musk's Neuralink has been on a hiring spree since summer. Tesla and SpaceX CEO Elon Musk doesn't often publicly talk about his low-profile side hustle at biotech startup Neuralink. But when he does, the news is usually far more exciting than any of his updates on electric cars or rockets. In July, Neuralink published a white paper about an implantable brain chip it had been working on, which Musk said would help "merge biological intelligence with machine intelligence." This week, speaking on the Artificial Intelligence podcast hosted by MIT research scientist Lex Fridman, Musk shared a more detailed explanation of how things are unfolding at Neuralink and his ultimate vision for the sci-fi-sounding device that's in the making.
Elon Musk believes his neural technology company Neuralink will be able to "solve" schizophrenia and autism. Speaking on the Artificial Intelligence podcast with Lex Fridman, published Tuesday, Musk was asked what he thinks are the most exciting impacts he foresees for his company Neuralink. Neuralink's goal is to develop an AI-enabled chip that could be implanted in a person's brain, where it would be able to both record brain activity and potentially stimulate it. "So Neuralink, I think at first will solve a lot of brain-related diseases. So could be anything from like autism, schizophrenia, memory loss -- like everyone experiences memory loss at certain points in age. Parents can't remember their kids' names and that kind of thing," replied Musk.
So, how do I see the future of healthcare using AI? Well, let's just face it: AI is set to transform medicine and, in some areas, even replace human medical workers. Every year we see newer and more advanced solutions appear. This brings a whole slew of advantages, one of the most important being a shorter time to diagnosis, which allows medical workers to better prioritize patient cases.
Our nation is understandably grieving with each suicide, prompting our collective and tireless pursuit of evidence-based clinical interventions and expansion of community prevention strategies to reach each Veteran. As part of recent efforts to support Veterans in crisis, VA is using artificial intelligence (AI) capabilities, informed by customer feedback industry best practices, in partnership with Booz Allen Hamilton, Deloitte, Medallia, and Halfaker, to detect and respond to Veterans in crisis. Starting in fall 2017, VA began digitally collecting customer feedback from Veterans receiving VA services and using VA digital properties in the Veterans Signals (VSignals) program. Since then, Veterans have responded with more than 4.2 million surveys, including more than 1.6 million free-text comments. This feedback is accessible to VA employees across the country for action, often prompting customer service efforts and influencing VA decision making.
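The basic pattern described here is scanning free-text survey comments for signals that warrant urgent human follow-up. A minimal sketch of that idea, assuming an invented keyword list (VA's actual VSignals models are not described in this piece, and a real system would use trained NLP models and route flags to trained responders):

```python
# Illustrative sketch only: flag free-text comments for human review
# based on a small, invented list of crisis-indicative phrases.
CRISIS_TERMS = {"hopeless", "end it all", "no reason to live", "give up"}

def flag_for_review(comment: str) -> bool:
    """Return True if the comment contains any crisis-indicative phrase."""
    text = comment.lower()
    return any(term in text for term in CRISIS_TERMS)

comments = [
    "The clinic staff were helpful and kind.",
    "I feel hopeless and don't know where to turn.",
]
flagged = [c for c in comments if flag_for_review(c)]
print(flagged)
```

In practice, keyword matching is only a baseline; the value of a learned model is catching the many phrasings a fixed list misses while keeping false alarms manageable for responders.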
Is there anything today that can't possibly be done by Artificial Intelligence? From self-driving cars, 3D printing, sex robots that can breathe, and many other AI innovations, AI can do just about everything. To that end, researchers from Pennsylvania healthcare provider Geisinger have trained an AI to predict which patients are at risk of dying within the course of a year, reports New Scientist. Artificial Intelligence can reportedly determine when a person will die based on their heart test results, even if these results look normal to doctors. Dr Brandon Fornwalt at Geisinger trained the AI on 1.77 million electrocardiogram (ECG) results from almost 400,000 people to identify patterns that signal future cardiac issues.
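The setup the report describes is ordinary supervised learning: ECG records labeled with a one-year outcome, and a model trained to separate the two classes. A minimal sketch of that kind of setup, using invented synthetic features (heart-rate variability and QT interval) and an invented risk rule in place of real waveforms and outcomes:

```python
import math
import random

random.seed(0)

def sigmoid(z):
    z = max(min(z, 60.0), -60.0)  # clamp to avoid overflow in exp
    return 1.0 / (1.0 + math.exp(-z))

def make_synthetic_data(n=500):
    """Hypothetical stand-in for labeled ECGs: two summary features per
    record plus a synthetic one-year outcome label."""
    data = []
    for _ in range(n):
        hrv = random.gauss(50, 15)   # heart-rate variability, ms
        qt = random.gauss(400, 30)   # QT interval, ms
        # Invented ground-truth rule: low HRV and long QT raise risk.
        risk = sigmoid(-0.05 * (hrv - 50) + 0.02 * (qt - 400))
        label = 1 if random.random() < risk else 0
        data.append(((hrv - 50, qt - 400), label))  # centered features
    return data

def train_logistic(data, lr=0.001, epochs=200):
    """Fit a logistic-regression classifier with plain SGD."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x0, x1), y in data:
            err = sigmoid(w[0] * x0 + w[1] * x1 + b) - y
            w[0] -= lr * err * x0
            w[1] -= lr * err * x1
            b -= lr * err
    return w, b

data = make_synthetic_data()
w, b = train_logistic(data)
```

The trained weights should recover the synthetic risk pattern (negative on HRV, positive on QT). Geisinger's actual model works on full ECG waveforms with deep learning, which is what lets it pick up risk patterns that look normal to doctors reading the same traces.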
Increases in the number of cell therapies in the preclinical and clinical phases have prompted the need for reliable and non-invasive assays to validate transplant function in clinical biomanufacturing. We developed a robust characterization methodology composed of quantitative bright-field absorbance microscopy (QBAM) and deep neural networks (DNNs) to non-invasively predict tissue function and cellular donor identity. The methodology was validated using clinical-grade induced pluripotent stem cell-derived retinal pigment epithelial cells (iPSC-RPE). QBAM images of iPSC-RPE were used to train DNNs that predicted iPSC-RPE monolayer transepithelial resistance, predicted polarized vascular endothelial growth factor (VEGF) secretion, and matched iPSC-RPE monolayers to the stem cell donors. DNN predictions were supplemented with traditional machine learning algorithms that identified shape and texture features of single cells that were used to predict tissue function and iPSC donor identity.
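The pipeline the abstract describes, image-derived features feeding a predictor of tissue function, can be sketched in miniature. Everything below is illustrative: the "images" are synthetic grids, the two features (mean absorbance and pixel variance) are crude stand-ins for the shape and texture features the authors used, and a plain SGD linear regressor stands in for their DNNs and classical models:

```python
import random

random.seed(1)

def make_image(mean_abs, roughness):
    """Hypothetical stand-in for a QBAM absorbance image: an 8x8 grid."""
    return [[random.gauss(mean_abs, roughness) for _ in range(8)] for _ in range(8)]

def features(img):
    """Two toy texture features: mean absorbance and pixel variance."""
    pixels = [p for row in img for p in row]
    mean = sum(pixels) / len(pixels)
    var = sum((p - mean) ** 2 for p in pixels) / len(pixels)
    return mean, var

# Synthetic labeled dataset: "resistance" rises with mean absorbance and
# falls with roughness (this rule is invented purely for the sketch).
samples = []
for _ in range(300):
    m = random.uniform(0.2, 0.8)
    r = random.uniform(0.05, 0.4)
    resistance = 400 * m - 300 * r ** 2 + random.gauss(0, 5)
    samples.append((features(make_image(m, r)), resistance))

# Standardize the features so the SGD regressor trains stably.
feats = [f for f, _ in samples]
mu = [sum(f[i] for f in feats) / len(feats) for i in (0, 1)]
sd = [(sum((f[i] - mu[i]) ** 2 for f in feats) / len(feats)) ** 0.5 for i in (0, 1)]
data = [(((f[0] - mu[0]) / sd[0], (f[1] - mu[1]) / sd[1]), y) for f, y in samples]

# Linear regression by SGD: stand-in for the paper's DNN / classical models.
w = [0.0, 0.0]
b = 0.0
lr = 0.01
for _ in range(500):
    for (x0, x1), y in data:
        err = (w[0] * x0 + w[1] * x1 + b) - y
        w[0] -= lr * err * x0
        w[1] -= lr * err * x1
        b -= lr * err
```

The regressor should learn the invented relationship (positive weight on mean absorbance, negative on variance). The point of the real methodology is the same mapping at scale: predict transepithelial resistance and VEGF secretion from images alone, so function can be validated without touching the tissue.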
Technique key to scale up manufacturing of therapies from induced pluripotent stem cells. Researchers used artificial intelligence (AI) to evaluate stem cell-derived "patches" of retinal pigment epithelium (RPE) tissue for implanting into the eyes of patients with age-related macular degeneration (AMD), a leading cause of blindness. The proof-of-principle study helps pave the way for AI-based quality control of therapeutic cells and tissues. The method was developed by researchers at the National Eye Institute (NEI) and the National Institute of Standards and Technology (NIST) and is described in a report appearing online today in the Journal of Clinical Investigation. NEI is part of the National Institutes of Health.