US Government


Talent and data top DOD's challenges for AI, chief data officer says - FedScoop

#artificialintelligence

The Pentagon has made big plans to adopt artificial intelligence across the department, but two very large hurdles stand in the way of that goal, its new chief data officer said Wednesday: structuring data and recruiting the talent to manage it. "You can't feed the algorithms if you don't have data. Solid, clean data in large volumes, well-tagged and well organized," Michael Conlin said at the ACT-IAC Artificial Intelligence and Intelligent Automation event. "People will tell you that the machine learning algorithms, AI technologies can clean the data for you." The Department of Defense has no shortage of data to pull from, but for it to be of any use to AI capabilities, the department has to make sure that data is recorded in consistent, machine-readable formats for accuracy and to ensure it doesn't present the algorithms with unintended bias, Conlin said. "The more data you have to train your algorithms, the more accurate the algorithms are and the faster you get your results," he said. As an example, he detailed the department's efforts to improve the flight readiness of aircraft by tracking the lifecycle of parts that are replaced frequently versus those that can be sustained for longer, dubbed "lemons" and "peaches," respectively. Conlin said the department tracked the serial numbers for the parts from aircraft maintenance records and could determine with 99.9 percent accuracy which parts were lemons and which were peaches after nine maintenance stops on each part. The problem stemmed from the data itself, however. Because department officials used the serial numbers to identify the lemons versus the peaches, Conlin said, only 25 percent of the data was useful. "Some [records] had a blank or a 'To be completed later,' or 'I don't know,' or something that wasn't the serial number," he said. "So you couldn't connect the maintenance records together in order to be able to identify to nine consistent maintenance activities."
Considering that Silicon Valley is focused more on delivering AI solutions tailored for specific use cases than on enterprise-wide applications, as well as the growing importance of edge computing, the quality of the structured data becomes that much more important. Equally important is the Pentagon's need for data scientists to help oversee the AI systems, Conlin said. But the challenge is that the current federal workforce structure isn't designed for the job. "We don't train data scientists in the government."
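Conlin's "lemons and peaches" example is, at bottom, a record-linkage problem: maintenance visits can only be chained into a per-part history when the serial-number field actually holds an identifier, and placeholder values like a blank or "To be completed later" silently break the chain. A minimal Python sketch of that filtering step, with all field names and record contents invented for illustration:

```python
# Hypothetical illustration of the data-quality issue Conlin describes:
# maintenance records link across visits only via a valid serial number.
PLACEHOLDERS = {"", "To be completed later", "I don't know"}

def usable_records(records):
    """Keep only records whose serial field is a real identifier."""
    return [r for r in records if r["serial"].strip() not in PLACEHOLDERS]

def stops_per_part(records):
    """Count maintenance stops per serial number, so parts with enough
    history (e.g. nine stops) can be classified as lemons or peaches."""
    counts = {}
    for r in usable_records(records):
        counts[r["serial"]] = counts.get(r["serial"], 0) + 1
    return counts

# Invented sample data mirroring the failure modes quoted in the article.
records = [
    {"serial": "SN-001", "action": "replaced"},
    {"serial": "", "action": "inspected"},                     # blank field
    {"serial": "To be completed later", "action": "replaced"}, # placeholder
    {"serial": "SN-001", "action": "inspected"},
]

usable = usable_records(records)
print(len(usable) / len(records))  # fraction of linkable records: 0.5
print(stops_per_part(records))     # {'SN-001': 2}
```

In this toy data only half the records are linkable; in the department's case it was 25 percent, which is why the classification needed nine stops' worth of connected history per part before it could be trusted.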


Facial Recognition in Law Enforcement – 6 Current Applications Emerj - Artificial Intelligence Research and Insight

#artificialintelligence

According to the US Government Accountability Office, the Federal Bureau of Investigation's database contains over 30 million mugshots of criminals and ID card images from 16 states. This is just one of many law enforcement databases that also contain further identity information, including fingerprints and text data. With a need to improve investigation times and streamline the task of matching suspect images against a pool of numerous identities, government officials, law enforcement offices, and commercial vendors are researching how AI, specifically computer vision, can be used to improve facial recognition. Through our research, we aim to offer insights into how various law enforcement agencies and companies are implementing facial recognition technologies. Readers interested in AI for law enforcement may also be interested in our founder's presentation at a joint INTERPOL-UN conference on AI in law enforcement, given in the summer of 2018.


2018 AI review: A year of innovation

#artificialintelligence

This past year has seen a number of remarkable technological advancements in several arenas, AI not the least among them. In particular, it has been both an exciting and busy year at HPE, with advancements made in AI and supercomputing that have already benefited several industries. Take a look through this 2018 AI review to discover more about HPE's recent advancements in the AI space. One of the most prominent beneficiaries of these AI advances is the U.S. military. Back in February 2018, HPE announced it had been selected by the US Department of Defense (DoD) to provide supercomputers for its High Performance Computing Modernization Program (HPCMP).


How India can harness 'globotics' revolution in artificial intelligence

#artificialintelligence

The revolutionary development in artificial intelligence and machine learning, and its dramatic consequences, is a global economic upheaval and one that will both provide opportunities for and challenge India dramatically. Discussing India's response to this global upheaval, Martin Wolf, associate editor and chief economics commentator, Financial Times, London, said, "India needs to devote careful thought to the domestic implications of the revolution in artificial intelligence." He explained, "As a country with a growing population and labour force and huge employment in services, the implications might be very radical, both creating and destroying opportunities on a massive scale." Wolf, who delivered the seventh NCAER CD Deshmukh Memorial Lecture 2019 in New Delhi on Tuesday, spoke on the theme of Challenges for India from the Global Economic Upheavals. The other upheavals he addressed were the rapid economic rise of Asia, the strategic rivalry between the US and China, growing protectionism in the US and the associated erosion of the liberal global economic order and the threat of climate change.


Moving Graph Analytics Testing On Supercomputers Forward

#artificialintelligence

If it's the SC18 supercomputing conference, then there must be lists. The twice-yearly show is most famous for the Top500 list of the world's fastest supercomputers, ranked by the Linpack parallel Fortran benchmark. The list helps the industry gauge progress in performance, the growing influence of new technologies like GPU accelerators from Nvidia and AMD, and the rise of new architectures, marked this year by the introduction of the first supercomputer on the list powered by Arm-based processors. That "Astra" supercomputer, built by Hewlett Packard Enterprise and deployed at Sandia National Laboratories, runs on 125,328 Cavium ThunderX2 cores and now sits in the number 205 slot. The list also helps fuel the ongoing global competition for supercomputer supremacy: the United States this year finally retook the top spot from China's Sunway TaihuLight in July with the Summit system, based on IBM Power9 and Nvidia Volta compute engines, and then Sierra, a similarly architected machine, took the number-two slot at this week's SC18 show in Dallas, pushing TaihuLight to number three. However, China now claims 227 systems, or about 45 percent of the total, on the Top500 list, while the United States has dropped to an all-time low of 109, or 22 percent.


U.S. regulators have met to discuss imposing a record-setting fine against Facebook for some of its privacy violations

Washington Post

U.S. regulators have met to discuss imposing a record-setting fine against Facebook for violating a legally binding agreement with the government to protect the privacy of its users' personal data, according to three people familiar with the deliberations but not authorized to speak on the record. The fine under consideration at the Federal Trade Commission, a privacy and security watchdog that began probing Facebook last year, would mark the first major punishment levied against Facebook in the United States since reports emerged in March that Cambridge Analytica, a political consultancy, accessed personal information on about 87 million Facebook users without their knowledge. The penalty is expected to be much larger than the $22.5 million fine the agency imposed on Google in 2012. That fine set a record for the greatest penalty for violating an agreement with the FTC to improve its privacy practices. The FTC's exact findings in its Facebook investigation and the total amount of the fine, which the agency's five commissioners have discussed at a private meeting in recent weeks, have not been finalized, two of the people said.


AI, the law, and our future

MIT News

Scientists and policymakers converged at MIT on Tuesday to discuss one of the hardest problems in artificial intelligence: How to govern it. The first MIT AI Policy Congress featured seven panel discussions sprawling across a variety of AI applications, and 25 speakers -- including two former White House chiefs of staff, former cabinet secretaries, homeland security and defense policy chiefs, industry and civil society leaders, and leading researchers. Their shared focus: how to harness the opportunities that AI is creating -- across areas including transportation and safety, medicine, labor, criminal justice, and national security -- while vigorously confronting challenges, including the potential for social bias, the need for transparency, and missteps that could stall AI innovation while exacerbating social problems in the United States and around the world. "When it comes to AI in areas of public trust, the era of moving fast and breaking everything is over," said R. David Edelman, director of the Project on Technology, the Economy, and National Security (TENS) at the MIT Internet Policy Research Initiative (IPRI), and a former special assistant to the president for economic and technology policy in the Obama White House. Added Edelman: "There is simply too much at stake for all of us not to have a say."


NASA reveals four options for its future flagship telescope

Daily Mail

NASA's next flagship telescope is the James Webb Space Telescope, but the long-term direction of NASA's research remains uncertain. America's space agency has now turned to a team of expert astronomers to choose its eventual successor, which will be built and sent into space in the 2030s. Four vastly different designs have been put forward, intended to look for alien life, distant Earth-like worlds, black holes, the birth of new galaxies, and high-energy gas disks. All four of the proposed missions look vastly different, and the momentous decision will likely shape NASA's research for decades to come. The Great Observatories programme, which dates to the 1970s, gave the scientific community, and the wider world at large, access to the entire spectrum of electromagnetic light, from gamma rays to infrared radiation. LUVOIR would continue a mission similar to that carried out over the last two decades by Hubble, studying the first stars of the universe to find signs of life and the creation of worlds.


The FBI Says Its Photo Analysis is Scientific Evidence. Scientists Disagree.

Mother Jones

This story was originally published by ProPublica. At the FBI Laboratory in Quantico, Virginia, a team of about a half-dozen technicians analyzes pictures down to their pixels, trying to determine if the faces, hands, clothes or cars of suspects match images collected by investigators from cameras at crime scenes. The unit specializes in visual evidence and facial identification, and its examiners can aid investigations by making images sharper, revealing key details in a crime or ruling out potential suspects. But the work of image examiners has never had a strong scientific foundation, and the FBI's endorsement of the unit's findings as trial evidence troubles many experts and raises anew questions about the role of the FBI Laboratory as a standard-setter in forensic science. FBI examiners have tied defendants to crime pictures in thousands of cases over the past half-century using unproven techniques, at times giving jurors baseless statistics to say the risk of error was vanishingly small. Much of the legal foundation for the unit's work is rooted in a 22-year-old comparison of bluejeans. Studies on several photo comparison techniques, conducted over the last decade by the FBI and outside scientists, have found they are not reliable. Since those studies were published, there's no indication that lab officials have checked past casework for errors or inaccurate testimony. Image examiners continue to use disputed methods in an array of cases to bolster prosecutions against people accused of robberies, murder, sex crimes and terrorism. The work of image examiners is a type of pattern analysis, a category of forensic science that has repeatedly led to misidentifications at the FBI and other crime laboratories. Before the discovery of DNA identification methods in the 1980s, most of the bureau's lab worked in pattern matching, which involves comparing features from items of evidence to the suspect's body and belongings. 
Examiners had long testified in court that they could determine what fingertip left a print, what gun fired a bullet, which scalp grew a hair "to the exclusion of all others." Research and exonerations by DNA analysis have repeatedly disproved these claims, and the U.S. Department of Justice no longer allows technicians and scientists from the FBI and other agencies to make such unequivocal statements, according to new testimony guidelines released last year. Though image examiners rely on similarly flawed methods, they have continued to testify to and defend their exactitude, according to a review of court records and examiners' written reports and published articles.