If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
What do we mean when we say 'context'? In essence, context is the information that frames something to give it meaning. Taken on its own, a shout could be anything from an expression of joy to a warning. In the context of a structured piece of on-stage Grime, it's what made Stormzy's appearance at Glastonbury the triumph it was. The problem is that context doesn't come free – it has to be discovered.
The use of facial recognition in the United States public sector has received a great deal of press lately, and most of it isn't positive. There's a lot of concern over how state and federal government agencies are using this technology and how the resulting biometric data will be used. Many fear that the use of this technology will lead to a Big Brother state. Unfortunately, these concerns are not without merit. We're already seeing damaging results where this technology is prevalent in countries like China, Singapore, and even the United Kingdom where London authorities recently fined a man for disorderly conduct for covering his face to avoid surveillance on the streets.
A police department in Orlando has terminated its trial of Amazon's AI-powered facial recognition for the second time, citing costs and complexity. According to a report from Orlando Weekly, the department ended its trial of the technology, called Rekognition, after 15 months of glitches and concerns over whether the technology was actually working. 'At this time, the city was not able to dedicate the resources to the pilot to enable us to make any noticeable progress toward completing the needed configuration and testing,' Orlando's Chief Administrative Office said in a memo to City Council, as reported by Orlando Weekly.
With images aggregated from social media platforms, dating sites, or even CCTV footage of a trip to the local coffee shop, companies could be using your face to train sophisticated facial recognition software. As reported by the New York Times, among the sometimes massive data sets that researchers use to teach artificially intelligent software to recognize faces is a database collected by Stanford researchers called Brainwash: more than 10,000 images of customers at a San Francisco cafe, collected in 2014 without their knowledge. That same database was then made available to other academics, including some in China at the National University of Defense Technology. OKCupid and photo-sharing platforms like Flickr are among the sources for researchers looking to load their databases up with images that help train facial recognition software.
Recent studies demonstrate that machine learning algorithms can discriminate based on classes like race and gender. In this work, we present an approach to evaluate bias present in automated facial analysis algorithms and datasets with respect to phenotypic subgroups. Using the dermatologist-approved Fitzpatrick Skin Type classification system, we characterize the gender and skin type distribution of two facial analysis benchmarks, IJB-A and Adience. We find that these datasets are overwhelmingly composed of lighter-skinned subjects (79.6% for IJB-A and 86.2% for Adience) and introduce a new facial analysis dataset that is balanced by gender and skin type. We evaluate three commercial gender classification systems using our dataset and show that darker-skinned females are the most misclassified group (with error rates of up to 34.7%).
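The kind of audit described above boils down to disaggregating a classifier's error rate by phenotypic subgroup rather than reporting a single overall accuracy. A minimal sketch of that computation follows; the function name and the sample records are illustrative placeholders, not data or code from the study itself.

```python
from collections import defaultdict

def subgroup_error_rates(records):
    """Compute the misclassification rate for each subgroup.

    records: iterable of (subgroup, true_label, predicted_label) tuples.
    Returns a dict mapping subgroup -> fraction of misclassified examples.
    """
    totals = defaultdict(int)
    errors = defaultdict(int)
    for subgroup, truth, pred in records:
        totals[subgroup] += 1
        if truth != pred:
            errors[subgroup] += 1
    return {group: errors[group] / totals[group] for group in totals}

# Hypothetical predictions from a gender classifier, tagged by skin type.
sample = [
    ("darker_female", "F", "M"),   # misclassified
    ("darker_female", "F", "F"),
    ("lighter_male", "M", "M"),
    ("lighter_male", "M", "M"),
]
rates = subgroup_error_rates(sample)
# rates["darker_female"] -> 0.5, rates["lighter_male"] -> 0.0
```

A single aggregate accuracy over this sample would be 75%, hiding the fact that one subgroup sees half of its examples misclassified — which is exactly the disparity the benchmark above is designed to surface.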
Roundup Hello, here are a few announcements from the world of machine learning beyond what we've already covered this week. AlphaStar is coming out to play: AlphaStar, the StarCraft II-playing bot built by DeepMind researchers, will be facing human players in a series of 1v1 games online. StarCraft II players can enter the open competition league set up by Blizzard Entertainment, the creators of the popular battle strategy game, and opt in to play against AlphaStar. Nobody will know if they're facing the bot, however, because it'll be entering the matches anonymously. Characters in StarCraft II come from one of three species: Terran, Zerg, or Protoss.
Washington D.C. [USA], July 14 (ANI): Researchers developed a new artificial intelligence (AI) tool for detecting unfair discrimination on the basis of attributes such as race or gender. Preventing unfair treatment of individuals on the basis of race, gender or ethnicity, for example, has been a long-standing concern of civilized societies. However, detecting such discrimination resulting from decisions, whether by human decision-makers or automated AI systems, can be extremely challenging. This challenge is further exacerbated by the wide adoption of AI systems to automate decisions in many domains including policing, consumer finance, higher education, and business. "Artificial intelligence systems such as those involved in selecting candidates for a job or for admission to a university are trained on large amounts of data. But if these data are biased, they can affect the recommendations of AI systems," said Vasant Honavar, one of the researchers of the study presented at the meeting of The Web Conference.
Preventing unfair treatment of individuals on the basis of race, gender or ethnicity, for example, has been a long-standing concern of civilized societies. However, detecting such discrimination resulting from decisions, whether by human decision makers or automated AI systems, can be extremely challenging. This challenge is further exacerbated by the wide adoption of AI systems to automate decisions in many domains -- including policing, consumer finance, higher education and business. "Artificial intelligence systems -- such as those involved in selecting candidates for a job or for admission to a university -- are trained on large amounts of data," said Vasant Honavar, Professor and Edward Frymoyer Chair of Information Sciences and Technology, Penn State. "But if these data are biased, they can affect the recommendations of AI systems."
A New Mexico man was arrested for allegedly beating his girlfriend after their Amazon device alerted police. Eduardo Barros, 28, was with his girlfriend and her daughter at a residence in Tijeras, outside of Albuquerque, on July 2. The pair got into an argument and the confrontation became physical, according to the Bernalillo County Sheriff Department's spokesperson, Deputy Felicia Romero. It is understood Barros allegedly became angered because of a text message that the woman received, and he accused her of cheating on him. He was allegedly in possession of a firearm and threatened to kill his unidentified girlfriend, saying to her, 'Did you call the sheriffs?' A smart speaker, which was connected to a surround sound system inside the house, recognized the comment as a voice command and called 911, Romero told the New York Post.