 safety net


DrugGPT: new AI tool could help doctors prescribe medicine in England

The Guardian

Drugs are a cornerstone of medicine, but sometimes doctors make mistakes when prescribing them and patients don't take them properly. A new AI tool developed at Oxford University aims to tackle both those problems. DrugGPT offers a safety net for clinicians when they prescribe medicines and gives them information that may help their patients better understand why and how to take them. Doctors and other healthcare professionals who prescribe medicines will be able to get an instant second opinion by entering a patient's conditions into the chatbot. Prototype versions respond with a list of recommended drugs and flag up possible adverse effects and drug-drug interactions.


Trump's Budget Is Awful if You're a Worker, Great if You're a Robot

#artificialintelligence

When the robots rise up, they won't take your life. They'll take your job, particularly those in fields primed for automation, like manufacturing, trucking, and customer service. Technologists, economists, and policymakers believe this future is all but inevitable, and say it's time to begin thinking seriously about how to ensure artificial intelligence advances humanity--and improves the economy--without leaving the middle class behind. Two economists who recently left Washington say the answer lies in ensuring the government provides enough of a safety net to help middle-class Americans navigate the coming transition. Jason Furman and Gene Sperling--former chief economic advisors to President Obama--prefer to think of it as a bridge, not a net, that will help people reach the future.


The AI reliability paradox

#artificialintelligence

Here comes the million-dollar question. Which worker is more dangerous to your business? On a high-stakes task, the answer could be Ronnie Reliable… but perhaps not for the first reason that comes to mind. In another article, I've pointed out that ultra-reliable workers can be dangerous when the decision-maker is deranged. They "just follow orders" even if those orders are terrible, so they can amplify incompetence (or malice).


Introducing Multiple ModelCheckpoint Callbacks

#artificialintelligence

When training a model, there is always a chance that something might fail unexpectedly. Proper checkpointing provides a safety net during failures that enables users to restore the state of the model and trainer from a checkpoint file. In Lightning, checkpointing is a core feature in the Trainer and is turned on by default to create a checkpoint after each epoch. But checkpointing provides more than just a safety net in case of failure. Often we care about keeping track of the "best" model weights encountered during the course of training, because in practice not every new epoch leads to an improved generalization error (unstable optimization, overfitting).


Josh Hawley Gets One Thing Right About the Plight of Men

Slate

During his recent keynote address to the National Conservatism Conference, Republican Sen. Josh Hawley of Missouri brought attention to the crisis of a marginalized and long-forgotten group: men. "Over the last 30 years and more, government policy has helped destroy the kind of economy that gave meaning to generations of men," he said, describing low wages and corporate consolidation brought on by globalization. The result, he said, is "more and more men are withdrawing into the enclaves of idleness, and pornography, and video games." Hawley's remarks were immediately met with derision, criticism, and exasperation: Here was another conservative--a presidential hopeful no less--hand-wringing over pornography, another traditionalist subscribing to outdated gender norms by saying "a man is a father, a man is a husband, a man is someone who takes responsibility," and another male politician cautioning that a supposed liberal attack on manhood was at the root of this rot. Here's the issue: Hawley is partially right.


The AI road not taken

#artificialintelligence

Does this have to be the way? Artificial intelligence was supposed to boost productivity and create better futures in medicine, transportation, and workplaces. Instead, AI research and development has focused on only a few sectors, ones that are having a net negative impact on humanity, MIT economist Daron Acemoglu argues in "Redesigning AI," a Boston Review book. "Our current trajectory automates work to an excessive degree while refusing to invest in human productivity; further advances will displace workers and fail to create new opportunities," Acemoglu writes. AI also threatens "democracy and individual freedoms," he writes.


'Safety nets' built by army ants could help engineers design self-healing robot swarms

Daily Mail - Science & tech

Teamwork isn't just a human characteristic: colonies of army ants will form living 'scaffolding' to protect members from falling. The insects are blind and have no designated leader but, according to new research, they're able to use simple behavioral rules to develop these safety structures without the need for direct communication. Once a scaffold was built, worker ants were almost 100 percent protected from falling off steep inclines. Understanding how they design such complex structures could help engineers develop self-healing materials and swarm robotics, researchers said. Army ants in Central American rainforests will build scaffolds out of their bodies to help them traverse steep terrain.


Machine Learning: The Great Stagnation - AI Summary

#artificialintelligence

This blog post generated a lot of discussion on Hacker News -- many people have reached out to me with more examples of the stagnation and more examples of projects avoiding it. Maybe I'll add to this article or maybe I'll write a new one; let's see what happens. In the meantime, if you can't wait for me to stop staring at the ceiling and write something new, I'm pretty sure you'll enjoy my e-book at robotoverlordmanual.com. Academics think of themselves as trailblazers, explorers -- seekers of the truth. Any fundamental discovery involves a significant degree of risk. If an idea is guaranteed to work, then it moves from the realm of research to engineering.


Putting patients first - How LucidHealth is utilizing AI as a safety net

#artificialintelligence

Since the onset of the global COVID-19 pandemic, improving the quality and efficiency of patient care is more critical than ever. Learn how LucidHealth is supporting its patients by utilizing AI as a safety net to ensure shortened turnaround times and high clinical quality. This webinar will take you into a real-life AI-driven radiology workflow, where you'll learn best practices in utilizing AI, as well as the outcomes in quality and efficiency that the Aidoc and LucidHealth partnership provides. This session will feature Dr. Peter Lafferty, Chief Physician Integration Officer at LucidHealth, and Elad Walach, CEO of Aidoc. Join this session to: 1. learn how AI contributed to the quality of LucidHealth's radiology workflow; 2. see examples of cases that benefited from the presence of AI; and 3. understand how "always-on" AI works in a clinical setting. The live webinar will take place on Thursday, April 23rd at 11:30 AM EST. Ask the Experts is a quarterly webcast series introducing new trends and real use cases in the world of AI and radiology.