Deep neural networks are being mustered by U.S. military researchers to marshal new technology forces on the Internet of Battlefield Things. U.S. Army and industry researchers said this week they have developed a "confidence metric" for assessing the reliability of AI and machine learning algorithms used in deep neural networks. The metric seeks to boost reliability by limiting predictions strictly to what the system has learned from its training. The goal is to develop AI-based systems that are less prone to deception when presented with information beyond their training. SRI International has been working since 2018 with the Army Research Laboratory as part of the service's Internet of Battlefield Things Collaborative Research Alliance.
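The article does not detail how the ARL/SRI metric works, but the idea of limiting predictions to what a model learned in training can be illustrated with a common, generic pattern: score each prediction's confidence and abstain below a threshold. The `softmax` model, logits, and threshold here are hypothetical illustrations, not the researchers' actual method.

```python
import math

def softmax(logits):
    """Convert raw model scores into probabilities."""
    exps = [math.exp(x - max(logits)) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def predict_with_confidence(logits, threshold=0.9):
    """Return (class_index, confidence), or (None, confidence) when the
    model is not confident enough and should abstain."""
    probs = softmax(logits)
    conf = max(probs)
    label = probs.index(conf)
    return (label, conf) if conf >= threshold else (None, conf)

# One logit dominates: the model answers.
label, conf = predict_with_confidence([8.0, 0.5, 0.2])
print(label, round(conf, 3))

# Near-uniform logits (input unlike the training data): the model abstains.
label, conf = predict_with_confidence([1.0, 0.9, 1.1])
print(label, round(conf, 3))
```

Abstaining on low-confidence inputs is one simple way to make a system "less prone to deception" on out-of-distribution data; production systems typically combine this with calibration or dedicated out-of-distribution detectors.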
A paper coauthored by over 112 researchers across 160 data and social science teams found that AI and statistical models, when used to predict six life outcomes for children, parents, and households, weren't very accurate even when trained on 13,000 data points from over 4,000 families. The authors argue that the work is a cautionary tale about the use of predictive modeling, especially in the criminal justice system and social support programs. "Here's a setting where we have hundreds of participants and a rich data set, and even the best AI results are still not accurate," said study co-lead author Matt Salganik, a professor of sociology at Princeton and interim director of the Center for Information Technology Policy at the Woodrow Wilson School of Public and International Affairs. "These results show us that machine learning isn't magic; there are clearly other factors at play when it comes to predicting the life course." The study, which was published this week in the journal Proceedings of the National Academy of Sciences, is the fruit of the Fragile Families Challenge, a multi-year collaboration that recruited researchers to tackle a common predictive task: forecasting the same outcomes from the same data.
Like many agencies, the Census Bureau looks for reductions in expenses and workloads when it makes decisions about machine learning. But the agency has discovered another advantage in the technology: it can find data that employees never knew they needed. More than 100 different surveys are handled by siloed programs within the Census Bureau, and the capture, instrumentation, processing, and summation of the resulting data is "really hard to manage," said Zachary Whitman, chief data officer, at an AFCEA Bethesda event Wednesday. The bureau's dissemination branch exports data into a consolidated system where discovery and preparation are "difficult" for employees, Whitman said. So the agency is piloting ML that flags valuable information employees may not have even been searching for originally. "How do you get people to information they might not know about but would be very valuable to them?" Whitman said.
WASHINGTON: The National Geospatial-Intelligence Agency (NGA) will announce plans in May to contract with commercial companies to analyze satellite and other imagery data of military targets, says David Gauthier, head of NGA's new(ish) Commercial and Business Operations Group. While the first contracts will be small, the move is a big step toward the spy agency's goal of creating a "hybrid" pool of data that combines commercial imagery, which is low-resolution but has high revisit rates, with the traditional high-resolution but less timely Intelligence Community imagery provided by the National Reconnaissance Office (NRO) and others. "We do foresee in the future a hybrid architecture, where we definitely require both national systems for their capabilities, and commercial systems for their capabilities," he said. While Gauthier wouldn't provide a budget for the new effort, he told me earlier this week that the plan is to evaluate the capabilities of a number of commercial companies to meet NGA's needs. "I don't want to discuss numbers at this time, but we are still operating at small scale and plan on contracting with multiple vendors to compare and contrast their capabilities," he said.
Given the outsized hold Artificial Intelligence (AI) technology has acquired on the public imagination of late, it comes as no surprise that many are wondering what AI can do for the public health crisis wrought by the COVID-19 coronavirus. A casual search of AI and COVID-19 already returns a plethora of news stories, many of them speculative. While AI technology is not ready to help with the magical discovery of a new vaccine, there are important ways it can assist in this fight. Controlling epidemics is, in large part, based on laborious contact tracing and using that information to predict the spread. We live in a time in which we constantly leave digital footprints through our daily lives and interactions.
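The contact tracing the passage describes can be framed as a graph problem: co-location records become edges between people, and finding those potentially exposed to a confirmed case is a breadth-first search out to some number of hops. The data, names, and two-hop cutoff below are hypothetical illustrations, not any specific deployed system.

```python
from collections import defaultdict, deque

def build_contact_graph(contacts):
    """contacts: iterable of (person_a, person_b) co-location pairs."""
    graph = defaultdict(set)
    for a, b in contacts:
        graph[a].add(b)
        graph[b].add(a)
    return graph

def exposed_within(graph, case, max_hops=2):
    """Breadth-first search: everyone within max_hops contacts of `case`."""
    seen = {case}
    frontier = deque([(case, 0)])
    exposed = set()
    while frontier:
        person, hops = frontier.popleft()
        if hops == max_hops:
            continue  # don't expand beyond the tracing horizon
        for contact in graph[person]:
            if contact not in seen:
                seen.add(contact)
                exposed.add(contact)
                frontier.append((contact, hops + 1))
    return exposed

contacts = [("ana", "ben"), ("ben", "cho"), ("cho", "dee"), ("eve", "dee")]
graph = build_contact_graph(contacts)
print(sorted(exposed_within(graph, "ana")))  # direct and second-hop contacts
```

Real systems layer timestamps, exposure duration, and privacy protections on top of this skeleton, but the graph-traversal core is the same.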
I joined Infosys in June of 2019. The reason I came here is that we have the unique intersection of being able to build an executable strategy. Many services firms love to do strategy work and then fail at execution. Some are great at execution but make everything about price. What drew me to Infosys is that we make it about realized value.
The impact of Artificial Intelligence (AI) goes back to 1950, when the computer programming industry was just starting to boom. For several years, many healthcare sectors have used AI, mobile apps, analytics algorithms, and data visualization tools to try to get ahead of the virus, or at least keep up with it. Through these technologies, experts have the potential to track where the disease will go next, as well as identify drugs that may be effective. So today let's discuss how these technologies have been able to provide help in this global pandemic. As the world grows more cautious about the COVID-19 pandemic, organizations are brainstorming new ideas to handle the situation.
You'd think flying in a plane would be more dangerous than driving a car. In reality it's much safer, partly because the aviation industry is heavily regulated. Airlines must stick to strict standards for safety, testing, training, policies and procedures, auditing and oversight. And when things do go wrong, we investigate and attempt to rectify the issue to improve safety in the future. Other industries where things can go very badly wrong, such as pharmaceuticals and medical devices, are also heavily regulated.
The COVID-19 outbreak has spurred considerable news coverage about the ways artificial intelligence (AI) can combat the pandemic's spread. Unfortunately, much of it has failed to be appropriately skeptical about the claims of AI's value. Like many tools, AI has a role to play, but its effect on the outbreak is probably small. While this may change in the future, technologies like data reporting, telemedicine, and conventional diagnostic tools are currently far more impactful than AI. Still, various news articles have dramatized the role AI is playing in the pandemic by overstating what tasks it can perform, inflating its effectiveness and scale, neglecting the level of human involvement, and being careless in consideration of related risks. In fact, the COVID-19 AI hype has been diverse enough to cover the greatest hits of exaggerated claims around AI.
One subject never fails to light up the eyes of senior bankers and regulators when they're questioned about their efforts to end the money laundering-related scandals that have spread across northern Europe over the last two years: technology. There can be no more damning indictment of the integrity of a bank, or its host nation, than the public revelation that a licensed institution is being used as a laundromat for ill-gotten gains. And what is more enlivening for money-laundering supervisors and bank-compliance officers than showing your firm and country is at the forefront of a technology that could make these troubles disappear? Some of the biggest actors in Europe's financial sector are converts. The UK's Financial Conduct Authority is particularly enthusiastic about using technology to fight money laundering.