Collaborating Authors

 hagerty


Congress weighs ban on government contracts for 'adversarial biotech companies' like China's BGI

FOX News

The Senate version of the National Defense Authorization Act could include a House-authored provision that prohibits the United States government and its contractors from buying equipment from "adversarial biotech companies" that work to "exploit" Americans' genetic information for "malign purposes," Fox News Digital has learned. The Senate and the House of Representatives are currently conferencing to negotiate final NDAA text that can be passed by both chambers. The provision, which was passed in the original House bill, was introduced by House China Select Committee Chairman Mike Gallagher, R-Wis. It prohibits the purchase of biotechnology equipment or services from all United States adversaries, including North Korea, Russia, Iran and China.


GOP senators push for hearings after Russia downs US drone

FOX News

EXCLUSIVE: Several Republican senators are calling on Congress to exercise its oversight authority after a Russian fighter jet downed a U.S. drone over international waters earlier this week – an incident that has many worried about more direct conflict between the two superpowers as the Ukraine war enters its second year. "We need a hearing on it. We've asked the Pentagon for hearing, of course, they're probably a little bit busy right now," Sen. Tommy Tuberville, R-Ala., a member of the Armed Services Committee, told Fox News Digital this week. When asked if he was concerned about Russian officials' announcement that Moscow will try to retrieve the debris, Tuberville said, "Yeah, we should all be."


Amazon.com: Introducing HR Analytics with Machine Learning: Empowering Practitioners, Psychologists, and Organizations: 9783030676254: Rosett, Christopher M., Hagerty, Austin: Books

#artificialintelligence

This book directly addresses the explosion of literature about leveraging analytics with employee data and how organizational psychologists and practitioners can harness new information to help guide positive change in the workplace. For today's organizational psychologists to work successfully with their partners, they must go beyond behavioral science into the realms of computing and business acumen. Similarly, today's data scientists must appreciate the unique aspects of behavioral data and the special circumstances that surround HR data and HR systems. Finally, traditional HR professionals must become familiar with research methods, statistics, and data systems in order to collaborate with these new specialized partners and teams. Despite the increasing importance of this diverse skill set, many organizations are still unprepared to build teams with the comprehensive skills necessary for high-performing HR Analytics functions.


Use of AI to fight COVID-19 risks harming "disadvantaged groups", experts warn

AIHub

Rapid deployment of artificial intelligence and machine learning to tackle coronavirus must still go through ethical checks and balances, or we risk harming already disadvantaged communities in the rush to defeat the disease. This is according to researchers at the University of Cambridge's Leverhulme Centre for the Future of Intelligence (CFI) in two articles published in the British Medical Journal, cautioning against blinkered use of AI for data-gathering and medical decision-making as we fight to regain normalcy in 2021. "Relaxing ethical requirements in a crisis could have unintended harmful consequences that last well beyond the life of the pandemic," said Dr Stephen Cave, Director of CFI and lead author of one of the articles. "The sudden introduction of complex and opaque AI, automating judgments once made by humans and sucking in personal information, could undermine the health of disadvantaged groups as well as long-term public trust in technology." In a further paper, co-authored by CFI's Dr Alexa Hagerty, researchers highlight potential consequences arising from the AI now making clinical choices at scale – predicting deterioration rates of patients who might need ventilation, for example – if it does so based on biased data.


Scientists create online games to show risks of AI emotion recognition

The Guardian

It is a technology that has been frowned upon by ethicists: now researchers are hoping to unmask the reality of emotion recognition systems in an effort to boost public debate. Technology designed to identify human emotions using machine learning algorithms is a huge industry, with claims it could prove valuable in myriad situations, from road safety to market research. But critics say the technology not only raises privacy concerns, but is inaccurate and racially biased. A team of researchers has created a website – emojify.info – with games that demonstrate the technology. One game focuses on pulling faces to trick the technology, while another explores how such systems can struggle to read facial expressions in context. Their hope, the researchers say, is to raise awareness of the technology and promote conversations about its use.


Use of artificial intelligence to tackle coronavirus must go through ethical checks, say experts

#artificialintelligence

Rapid deployment of artificial intelligence and machine learning to tackle coronavirus must still go through ethical checks and balances, or we risk harming already disadvantaged communities in the rush to defeat the disease. This is according to researchers at the University of Cambridge's Leverhulme Centre for the Future of Intelligence (CFI) in two articles, published today in the British Medical Journal, cautioning against blinkered use of AI for data-gathering and medical decision-making as we fight to regain some normalcy in 2021. "Relaxing ethical requirements in a crisis could have unintended harmful consequences that last well beyond the life of the pandemic," said Dr Stephen Cave, Director of CFI and lead author of one of the articles. "The sudden introduction of complex and opaque AI, automating judgments once made by humans and sucking in personal information, could undermine the health of disadvantaged groups as well as long-term public trust in technology." In a further paper, co-authored by CFI's Dr Alexa Hagerty, researchers highlight potential consequences arising from the AI now making clinical choices at scale – predicting deterioration rates of patients who might need ventilation, for example – if it does so based on biased data. Datasets used to "train" and refine machine-learning algorithms are inevitably skewed against groups that access health services less frequently, such as minority ethnic communities and those of "lower socioeconomic status". "COVID-19 has already had a disproportionate impact on vulnerable communities."


Movement rises to keep humans, not robots, in the driver's seat

#artificialintelligence

Hagerty, the largest insurer of classic cars, wants to save driving as more automakers push to bring self-driving cars to the roads. McKeel Hagerty stands with his 1967 Porsche 911S, which he bought for $500 when he was 13 and restored in the garage with his dad. It was his first car, and he still owns it 37 years later. Car enthusiast McKeel Hagerty's future changed in March 2017. He was at a car event in Vancouver, British Columbia, when a stranger involved in developing self-driving cars took Hagerty by the elbow, looked him in the eye, and laid out the future.