In this third part of my series on the future of work, I want to deal with the impact of automation, in particular robots and artificial intelligence (AI), on jobs. I have covered the relationship between human labour and machines, including robots and AI, before. But is there anything new to find after the COVID slump? The leading American mainstream expert on the impact of automation on future jobs is Daron Acemoglu, Institute Professor at MIT. In testimony to the US Congress, Acemoglu started by reminding Congress that automation is not a recent phenomenon.
In the past decade, the application of machine learning (ML) to healthcare has helped drive the automation of physician tasks as well as enhancements in clinical capabilities and access to care. This progress has emphasized that, from model development to model deployment, data play central roles. In this Review, we provide a data-centric view of the innovations and challenges that are defining ML for healthcare. We discuss deep generative models and federated learning as strategies to augment datasets for improved model performance, as well as the use of the more recent transformer models for handling larger datasets and enhancing the modelling of clinical text. We also discuss data-focused problems in the deployment of ML, emphasizing the need to efficiently deliver data to ML models for timely clinical predictions and to account for natural data shifts that can deteriorate model performance.
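The federated learning strategy mentioned above can be illustrated with a minimal sketch of federated averaging (FedAvg): each site trains on its own private data and only model weights, never patient records, are shared and averaged. The linear model and the three synthetic "hospital" datasets here are illustrative stand-ins, not anything from the Review itself.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1, epochs=5):
    """Run a few gradient-descent steps on one site's private data."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # MSE gradient
        w -= lr * grad
    return w

# Three "hospitals", each holding private data drawn from the same
# underlying relationship y = 3*x0 - 2*x1 (plus noise).
true_w = np.array([3.0, -2.0])
sites = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    sites.append((X, y))

# Federated rounds: each site trains locally, a central server
# averages the resulting weights into the next global model.
global_w = np.zeros(2)
for _ in range(20):
    local_ws = [local_update(global_w, X, y) for X, y in sites]
    global_w = np.mean(local_ws, axis=0)
# global_w converges toward true_w without any raw data leaving a site
```

In a real clinical deployment the local update would train a deep model and the averaging would be weighted by site dataset size, but the data-stays-local structure is the same.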
We all know technology is a driver of change. Much of the development and many of the improvements we see in the healthcare industry today, compared with 20, 10, or even five years ago, have been a direct result of technological innovation. As technology continues to get smarter, faster and more reliable, the possibilities seem endless. Artificial intelligence (AI), and especially machine learning (ML), are likely to have a tremendous impact on the future of the healthcare industry for patients, physicians and medical researchers. For example, a 2016 Johns Hopkins study identified medical errors, stemming from individual and system-level mistakes, as the third-leading cause of death in the United States.
I am delighted to present my new blog, AI Business Transformation Playbook for Executives, originally posted here. I get into the nuts and bolts of AI systems solutioning in this rather lengthy post, but the "First Ten Plays" section at the end summarizes the key steps. I look forward to your thoughts and comments. – PG
For once, algorithms that predict crime might be used to uncover bias in policing instead of reinforcing it. A group of social and data scientists developed a machine learning tool they hoped would better predict crime. The scientists say they succeeded, but their work also revealed inferior police protection in poorer neighborhoods in eight major U.S. cities, including Los Angeles. Instead of justifying more aggressive policing in those areas, however, the hope is that the technology will lead to "changes in policy that result in more equitable, need-based resource allocation," including sending officials other than law enforcement to certain kinds of calls, according to a report published Thursday in the journal Nature Human Behaviour. The tool, developed by a team led by University of Chicago professor Ishanu Chattopadhyay, forecasts crime by spotting patterns amid vast amounts of public data on property crimes and crimes of violence, learning from the data as it goes.
The IEEE International Conference on Robotics and Automation (ICRA) has given researchers, industry, students and enthusiasts many opportunities over the years to network and collaborate. This year, 2022, was no different, with a great number of ways to get involved and engage, including networking events. A week before the conference, the IEEE Robotics and Automation Society, Women in Engineering (RAS WiE) organized a free virtual event for enthusiasts in robotics research to learn about and discuss becoming a plenary or keynote speaker at an international robotics conference. Three extraordinary robotics researchers took part: Dr. Vandi Verma of NASA Jet Propulsion Laboratory, USA; Dr. Katherine Kuchenbecker, Director of the Haptic Intelligence Department at the Max Planck Institute for Intelligent Systems, Germany; and Prof. Lydia Kavraki, Greek-American computer scientist and Noah Harding Professor of Computer Science, with professorships in bioengineering, electrical and computer engineering, and mechanical engineering at Rice University. They discussed their career paths, the opportunities and difficulties they have faced along their journeys as women in engineering, mentoring, STEM promotion and work-life balance. The panelists also shared invaluable personal experience and discussed the importance of learning together. There were many in-depth discussions during the workshop.
Since humans invented art, sometime in the Paleolithic era, they've produced lots of pictures: "The Starry Night," some memes, that photo of Donald Trump staring at the eclipse. What does it all add up to? A few years ago, a company called OpenAI fed a large share of those images, along with text descriptions, into the neural network of an artificial intelligence named DALL-E. DALL-E was being trained to create original art of its own, in any style, depicting in uncanny detail almost anything desired, based on written prompts. But a mastery of the entire universe of human imagery makes for difficult choices.
Scientists are looking for a way to predict crime using, you guessed it, artificial intelligence. Plenty of studies show that using AI to predict crime produces consistently racist outcomes. For instance, one AI crime-prediction model that the Chicago Police Department tried out in 2016 was meant to remove racial bias but had the opposite effect. It predicted who might be most at risk of being involved in a shooting, and 56% of Black men in the city aged 20 to 29 appeared on the list. Despite it all, scientists are still trying to use such tools to find out when, and where, crime might occur.
Meta AI has partnered with the University of Texas to open source three new models based on audio-visual perception that can help improve AR/VR experiences. The release is another step in Meta's shift toward a virtual universe. The first model, the Visual Acoustic Matching model or AViTAR, can transform the acoustics of an audio clip to make it sound as if it were recorded in the space shown in a target image. For instance, an audio clip that sounds like it was recorded in an empty room could be matched with an image of a crowded restaurant, producing audio that sounds as if it were recorded in that restaurant. The second model, called Visually-Informed Dereverberation or VIDA, as the name suggests, performs the opposite function.
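The signal-processing idea underneath acoustic matching is that a room's reverberation can be modelled as convolving "dry" audio with that room's impulse response (IR). AViTAR learns to infer such a transformation from an image; the sketch below just applies a hand-made toy IR directly, and the IR values are purely illustrative, not from any real measurement or from Meta's models.

```python
import numpy as np

def apply_room_acoustics(dry_audio, impulse_response):
    """Simulate a room by convolving a dry signal with its impulse response."""
    return np.convolve(dry_audio, impulse_response)

# A short dry signal and a toy IR: a direct path (lag 0)
# plus two decaying echoes at lags 2 and 4.
dry = np.array([1.0, 0.5, 0.25])
ir = np.array([1.0, 0.0, 0.3, 0.0, 0.1])

wet = apply_room_acoustics(dry, ir)
# wet is longer than dry: len(dry) + len(ir) - 1 samples,
# because the echo tail extends past the end of the signal
```

Dereverberation, as VIDA performs, is the harder inverse problem: recovering `dry` from `wet` when the IR is unknown, which is why a learned, visually informed model helps.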