
AI uncovers Eli Lilly's rheumatoid arthritis drug Olumiant as potential Alzheimer's treatment

#artificialintelligence

Could Janus kinase (JAK) inhibitors like Eli Lilly's rheumatoid arthritis drug Olumiant be repurposed to treat Alzheimer's disease? Researchers at Harvard University and Massachusetts General Hospital have set out to find the answer to that question with a new clinical trial that was born from artificial intelligence. The researchers used a type of AI called machine learning to identify existing drugs that might be able to prevent neuronal death in Alzheimer's. The screen pulled up a list of 15 FDA-approved drugs as candidates for repurposing in Alzheimer's, and five of them were JAK inhibitors, they reported in the journal Nature Communications. JAK proteins fuel inflammation and have long been suspected to play a role in Alzheimer's.


Artificial intelligence reveals current drugs that may help combat Alzheimer's disease

#artificialintelligence

New treatments for Alzheimer's disease are desperately needed, but numerous clinical trials of investigational drugs have failed to generate promising options. Now a team at Massachusetts General Hospital (MGH) and Harvard Medical School (HMS) has developed an artificial intelligence-based method to screen currently available medications as possible treatments for Alzheimer's disease. The method could represent a rapid and inexpensive way to repurpose existing therapies into new treatments for this progressive, debilitating neurodegenerative condition. Importantly, it could also help reveal new, unexplored targets for therapy by pointing to mechanisms of drug action. "Repurposing FDA-approved drugs for Alzheimer's disease is an attractive idea that can help accelerate the arrival of effective treatment--but unfortunately, even for previously approved drugs, clinical trials require substantial resources, making it impossible to evaluate every drug in patients with Alzheimer's disease," explains Artem Sokolov, Ph.D., director of Informatics and Modeling at the Laboratory of Systems Pharmacology at HMS. "We therefore built a framework for prioritizing drugs, helping clinical studies to focus on the most promising ones."


Parkland parents create artificial intelligence video of slain son to spur voters

FOX News

FORT LAUDERDALE, Fla. -- Wearing his signature hoodie and beanie, an earbud casually hanging from one ear, passionate Parkland teen Joaquin Oliver urges his peers to vote for lawmakers who will end gun violence in a new video released Friday. Next month's election would have been his first chance to vote. The 17-year-old's mannerisms and vernacular "yo, it's me" are shockingly lifelike, but it is just a mirage -- a realistic, almost eerie artificial intelligence re-creation of the teen who was among the 17 killed in the 2018 Valentine's Day massacre at Marjory Stoneman Douglas High School in Florida, one of the deadliest school shootings in U.S. history. From the grave, the teen is now begging his peers to cast the vote that he will never cast. "I've been gone for two years and nothing's changed, bro. People are still getting killed by guns," he implores in the video created by his parents' charity to end gun violence.


Why are Artificial Intelligence systems biased?

#artificialintelligence

A machine-learned AI system used to assess recidivism risks in Broward County, Fla., often gave higher risk scores to African Americans than to whites, even when the latter had criminal records. The popular sentence-completion facility in Google Mail was caught assuming that an "investor" must be male. A celebrated natural language generator called GPT, with an uncanny ability to write polished-looking essays for any prompt, produced seemingly racist and sexist completions when given prompts about minorities. Amazon found, to its consternation, that an automated AI-based hiring system it built didn't seem to like female candidates. Commercial gender-recognition systems put out by industry heavyweights, including Amazon, IBM and Microsoft, have been shown to suffer from high misrecognition rates for people of color. Another commercial face-recognition technology that Amazon tried to sell to government agencies has been shown to have significantly higher error rates for minorities. And a popular selfie lens by Snapchat appears to "whiten" people's faces, apparently to make them more attractive. These are not just academic curiosities.


The benefits of implementing RPA in finance

#artificialintelligence

RPA works well for simple processes that operate in relatively high transaction volumes -- and finance and accounting are rife with them, said Craig Le Clair, vice president and principal analyst at Forrester Research. "One bank that I interviewed had 1,400 people closing the books monthly, quarterly, end of year, and they felt they could automate [the work of] about a third of those full-time employees with RPA." Imran Sabir, the senior manager of RPA at OZ, a consulting company based in Fort Lauderdale, Fla., agreed that RPA can improve an organization's end-of-year closing, which is the most hectic time for finance. The financial close and reporting process encompasses numerous tasks that involve many systems, departments and individuals, from closing out subledgers to creating and delivering financial filings to regulatory bodies, Sabir said. The process requires posting data from sources such as Microsoft Excel to these subledgers -- a tedious undertaking that RPA can handle efficiently. Reporting is another common use case for RPA in finance, according to Sabir.


Former PPPL intern honored for outstanding machine learning poster

#artificialintelligence

The American Physical Society (APS) has recognized a summer intern at the U.S. Department of Energy's (DOE) Princeton Plasma Physics Laboratory (PPPL) for producing an outstanding research poster at the worldwide APS Division of Plasma Physics (DPP) meeting last October. The student, Marco Miller, a senior at Columbia University majoring in applied physics, used machine learning to accelerate a leading PPPL computer code known as XGC as a participant in the DOE's Summer Undergraduate Laboratory Internship (SULI) program in 2019. The modifications, which will enable the XGC code to calculate more quickly, could help expand the physics included in detailed simulations of the plasma that fuels fusion reactions. The poster, prepared under the mentorship of PPPL physicist Michael Churchill, showed how Miller used machine learning techniques in his research and was presented at the APS-DPP conference in Fort Lauderdale, Florida. "It felt great to get the award," Miller said.