Over the last 15 years, the United States military has developed a new addition to its arsenal. The weapon is deployed around the world, largely invisible, and grows more powerful by the day. That weapon is a vast database, packed with millions of images of faces, irises, fingerprints, and DNA data -- a biometric dragnet of anyone who has come in contact with the U.S. military abroad. The 7.4 million identities in the database range from suspected terrorists in active military zones to allied soldiers training with U.S. forces. "Denying our adversaries anonymity allows us to focus our lethality. It's like ripping the camouflage netting off the enemy ammunition dump," wrote Glenn Krizay, director of the Defense Forensics and Biometrics Agency, in notes obtained by OneZero.
Significant advances have been achieved over the past decade in language processing for information extraction from unstructured multilingual text (see, e.g., trec.nist.gov). However, the advent of increasingly large collections of audio (e.g., iTunes), imagery (e.g., Flickr), and video (e.g., YouTube), together with rapid and widespread growth and innovation in new information services (e.g., blogging, podcasting, media editing), is driving the need not only for multimedia retrieval but also for information extraction from and across media. Scientists and engineers are tackling new challenges such as multimodal sentiment analysis, multimodal summarization, and collaborative multimedia editing. While largely independent research communities have addressed extracting information from individual media (e.g., text, imagery, audio), to date there has been no forum focused exclusively on cross-media information extraction. The AAAI Fall Symposium presents a unique opportunity to move toward an integrated view of media information extraction.
HOUSTON – An arriving passenger uses a biometric scanner at George H. W. Bush Intercontinental Airport February 1, 2008 in Houston, Texas. Under President Donald Trump, technology companies have started cashing in on a little-noticed government push to ramp up the use of biometric tools -- such as fingerprinting and iris scanners -- to track people who enter and exit the country. Silicon Valley firms that specialize in data collection are taking advantage of a provision tucked into Mr. Trump's executive order on immigration, which included his controversial travel ban, that called for the completion of a "Biometric Entry-Exit Tracking System" for screening travelers entering and leaving the United States. The tracking system was mandated in a 1996 immigration law passed by Congress but never fully implemented under Trump's three predecessors. In Trump's first months in office, federal courts blocked the sections of his original and revised immigration orders that called for a temporary travel ban on visitors from seven majority-Muslim countries.
The next time you have trouble accessing a mission-critical application and need to prove your identity, you may be making your case not to network administrators or IT support but to a machine learning algorithm. The oft-discussed machine learning model has already taken root in the information security industry, as several vendors have embraced the technology to improve malware and threat detection and displace traditional signature-based detection. But now machine learning is making its way into identity and access management (IAM) to make authentication and authorization decisions. Several experts at the 2017 Cloud Identity Summit this week discussed machine learning in cybersecurity applications for identity management systems, as well as the risks and rewards of such applications. The appeal of machine learning in cybersecurity is straightforward: IAM increasingly relies on a growing number of factors -- from physical and behavioral biometrics to geolocation data -- to determine the identity and authorizations of an individual, and companies are turning to algorithms to process and judge those factors for IAM systems.
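The factor-weighing approach described above can be sketched in a few lines. This is a minimal, hypothetical illustration -- the factor names, weights, and thresholds are invented for this sketch, not drawn from any vendor's product -- showing how an IAM system might combine several risk signals into one score and map it to an access decision:

```python
import math

def risk_score(factors, weights, bias=0.0):
    """Combine weighted factor signals through a logistic function,
    yielding a risk estimate between 0 (benign) and 1 (suspicious).
    In a real system the weights would come from a trained model."""
    z = bias + sum(weights[name] * value for name, value in factors.items())
    return 1.0 / (1.0 + math.exp(-z))

def decide(score, deny_above=0.8, challenge_above=0.5):
    """Map the risk score to an access decision, with a step-up
    authentication challenge in the uncertain middle band."""
    if score > deny_above:
        return "deny"
    if score > challenge_above:
        return "challenge"  # e.g., prompt for a second factor
    return "allow"

# Invented example weights: a geolocation mismatch is weighted more
# heavily than an anomaly in typing rhythm (a behavioral biometric).
weights = {"typing_rhythm_anomaly": 1.5, "geo_mismatch": 3.0, "new_device": 2.0}

# A login attempt from an unfamiliar location on an unfamiliar device:
login = {"typing_rhythm_anomaly": 0.2, "geo_mismatch": 1.0, "new_device": 1.0}
score = risk_score(login, weights, bias=-4.0)
print(decide(score))  # this combination lands in the "challenge" band
```

The middle "challenge" band reflects the risk-reward trade-off the article mentions: rather than hard allow/deny rulings, an uncertain score can trigger step-up authentication instead of locking a legitimate user out.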
A leading research centre has called for new laws to restrict the use of emotion-detecting tech. The AI Now Institute says the field is "built on markedly shaky foundations". Despite this, systems are on sale to help vet job seekers, test criminal suspects for signs of deception, and set insurance prices. It wants such software to be banned from use in important decisions that affect people's lives and/or determine their access to opportunities. The US-based body has found support in the UK from the founder of a company developing its own emotional-response technologies -- but it cautioned that any restrictions would need to be nuanced enough not to hamper all work being done in the area.