James Loy has more than five years of expert experience in data science in the finance and healthcare industries. He has worked with the largest bank in Singapore to drive innovation and improve customer loyalty through predictive analytics. He also has experience in the healthcare sector, where he applied data analytics to improve decision-making in hospitals. He has a master's degree in computer science from Georgia Tech, with a specialization in machine learning. His research interests include deep learning and applied machine learning, as well as developing computer-vision-based AI agents for industrial automation.
It is becoming increasingly clear that for most working people, a proportion of the working tasks they currently perform will be either completely replaced by machines (AI if the tasks are cognitive, robots if they are manual) or augmented by a human-machine interface. While there is less clarity about the types of tasks that will remain within the human domain, we can make some predictions. We know that, right now and in the foreseeable future, machines are generally poor at understanding a person's mood, at sensing the situation around them, and at developing trusting relationships. So as the World Economic Forum report on future skills argued, it is human "soft skills" that will become increasingly valuable -- skills such as empathy, context sensing, collaboration, and creative thinking. That means that millions of people across the world will have to make the transition toward becoming a great deal better versed in these soft skills.
These days it seems that nearly every product and startup boasts some kind of A.I. capability, but when it comes to advancing this domain beyond simplistic machine learning, technologists at MIT Technology Review's Future Compute conference say A.I. will need to become more human. When discussing A.I. during the conference's first day on December 2nd, speakers focused on two distinct paths for this technology: more human-like A.I.s as well as more computer-like humans. This dual approach was presented as a potential future for human-machine symbiosis. But what exactly does that all mean, and is it even a good thing? Catherine Schuman, a research scientist at Oak Ridge National Laboratory, began the conversation by presenting her work on neuromorphic computing.
Thanks to advances in artificial intelligence, computers can now assist doctors in diagnosing disease and help monitor patient vital signs from hundreds of miles away. Now, CU Boulder researchers are working to apply machine learning to psychiatry, with a speech-based mobile app that can categorize a patient's mental health status as well as or better than a human can. "We are not in any way trying to replace clinicians," says Peter Foltz, a research professor at the Institute of Cognitive Science and co-author of a new paper in Schizophrenia Bulletin that lays out the promise and potential pitfalls of AI in psychiatry. "But we do believe we can create tools that will allow them to better monitor their patients." Nearly one in five U.S. adults lives with a mental illness, many in remote areas where access to psychiatrists or psychologists is scarce.
With the help of computational pathology firm Paige, healthcare technology giant Royal Philips is bringing clinical artificial intelligence to pathology laboratories to help improve pathologists' workflows and treatment planning for patients. According to a joint news release Thursday, this strategic collaboration will start with Paige Prostate, which helps pathologists quantify and characterize cancer in tissue samples and make precise and efficient diagnoses. The release noted the need for more advanced cancer diagnosis technology as the number of cancer cases rises. Glass slide-based laboratory workflows are being converted to digital using solutions like those offered by Philips. Once digital images are created, the CE-marked Paige Prostate software is applied automatically to detect and localize prostate cancer, providing pathologists with valuable information they can use to evaluate prostate biopsies.
Nvidia unveiled a new federated learning edge computing reference application for radiology to help hospitals crunch medical data for better disease detection while protecting patient privacy. Called Clara Federated Learning, the system relies on Nvidia EGX, a computing platform announced earlier in 2019. It uses the Jetson Nano, a low-wattage computer that can provide up to one-half trillion operations per second for tasks like image recognition. EGX enables low-latency artificial intelligence at the edge to act on data, in this case images from MRIs, CT scans and more. Nvidia announced Clara on Sunday at the Radiological Society of North America conference in Chicago.
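The privacy-preserving idea behind federated learning is that each hospital trains on its own data locally and shares only model parameters, which a central server averages. The following is a minimal sketch of federated averaging on a toy linear model; it is an illustration of the general technique, not Nvidia's implementation, and all names and data in it are hypothetical.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One hospital's local training: gradient descent on a linear
    model using only its private data, which never leaves the site."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # mean-squared-error gradient
        w -= lr * grad
    return w

def federated_average(weights, sites, sizes):
    """Central server step: collect locally trained weights and
    average them, weighted by each site's sample count."""
    updates = [local_update(weights, X, y) for X, y in sites]
    total = sum(sizes)
    return sum((n / total) * w for n, w in zip(sizes, updates))

# Two simulated hospitals with private data drawn from one true model
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
sites, sizes = [], [40, 60]
for n in sizes:
    X = rng.normal(size=(n, 2))
    sites.append((X, X @ true_w))

w = np.zeros(2)
for _ in range(20):  # communication rounds
    w = federated_average(w, sites, sizes)
```

After a few rounds the globally averaged weights approach the true model even though no raw records were ever pooled, which is the property that makes the approach attractive for medical imaging.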
The Department of Veterans Affairs, the nation's largest integrated healthcare system, is centralizing the agency's efforts to advance its artificial intelligence research and development capabilities. The VA on Thursday announced the establishment of the National Artificial Intelligence Institute, a joint initiative by the Office of Research and Development and the Office of the Secretary's Center for Strategic Partnerships. "VA has a unique opportunity to be a leader in artificial intelligence," said VA Secretary Robert Wilkie in a written statement. "VA's artificial intelligence institute will usher in new capabilities and opportunities that will improve health outcomes for our nation's heroes." The National Artificial Intelligence Institute will solicit, develop and execute flagship AI research and development projects--with veteran input--focusing on deep learning, explainable AI, privacy-preserving AI as well as AI for multi-scale time series.
I'm trying to explain to Arthur I. Miller why artworks generated by computers don't quite do it for me. The works aren't a portal into another person's mind, where you can wander in a warren of intention, emotion, and perception, feeling life being shaped into form. What's more, it often seems, people just ain't no good, so it's transcendent to be reminded they can be. Art is one of the few human creations that can do that. No matter how engaging the songs or poems that a computer generates may be, they ultimately feel empty.
While this column has covered many areas of artificial intelligence (AI), it hasn't said much about augmented reality (AR). This is a technology made famous, and then forgotten by many, via the overblown hype around Google Glass earlier this decade. The concept was that general-use AR was just around the corner. What has slowly happened instead is that AR providers have focused on narrow applications, in the same way early intelligent systems focused on specific domains rather than general intelligence. As with other aspects of AI, including vision and robotics, one area of AR focus is medicine.
Every year, our team of futurists, analysts, and consultants at Frost & Sullivan's Transformational Healthcare Group comes together to brainstorm and predict the themes, technologies, and global forces that will define the next 12 to 18 months for the healthcare industry. We also look back at how we did each year, and each year our predictions are becoming more accurate. Of the 2019 predictions released in November 2018, six out of eight were realized as anticipated, while the remaining two did not pan out exactly the way we thought. The new vision for healthcare for 2020 and beyond will focus not just on access, quality, and affordability but also on predictive, preventive, and outcome-based care models promoting social and financial inclusion. As we stand on the verge of a new decade of global change, 2020 will be a reality check for long-pending national healthcare policies and regulatory reforms that must reinvigorate future strategies.