Judge Napolitano's Chambers: Judge Andrew Napolitano breaks down why the Fourth Amendment is an intentional obstacle to government, an obstacle history has shown to be necessary to curtail tyrants. A trial in Great Britain has just concluded with potentially dangerous implications for personal freedom in the U.S. Great Britain is currently one of the most watched countries in the Western world – watched, that is, by its own police forces. In London alone, one study found more than 420,000 surveillance cameras in public places in 2017. What do the cameras capture? Everything done and seen in public.
What began as a way to increase public safety has turned into a civil rights concern. Some residents of San Diego, California are demanding the removal of some 4,000 'Smart Streetlights', which they claim are an invasion of privacy. The devices use sensor nodes to gather a range of information, such as weather and parking counts, but also use facial recognition technology to count pedestrians. The San Diego City Council approved the installation of the Smart Streetlights in December 2016 - and now approximately 4,200 are in place.
British police officers are among those concerned that the use of artificial intelligence in fighting crime is raising the risk of profiling bias, according to a report commissioned by government officials. The paper warned that algorithms might judge people from disadvantaged backgrounds as "a greater risk" since they were more likely to have contact with public services, thus generating more data that in turn could be used to train the AI. "Police officers themselves are concerned about the lack of safeguards and oversight regarding the use of algorithms in fighting crime," researchers from the defence think-tank the Royal United Services Institute said. The report acknowledged that emerging technology including facial recognition had "many potential benefits". But it warned that assessment of long-term risks was "often missing".
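The feedback loop the RUSI report describes, in which groups with more contact with public services generate more records and a naive model then reads sheer data volume as elevated risk, can be illustrated with a toy sketch. The data and scoring rule below are entirely hypothetical and stand in for no real policing system:

```python
# Toy illustration of the feedback loop: two groups with identical underlying
# behaviour, but group A interacts with public services twice as often, so it
# is over-represented in the records a model would be trained on.
from collections import Counter

records = ["A"] * 200 + ["B"] * 100  # hypothetical contact logs

counts = Counter(records)
total = sum(counts.values())

# A naive frequency-based "risk" score mistakes record volume for risk.
risk_score = {group: counts[group] / total for group in counts}
print(risk_score)  # group A appears "riskier" purely from data volume
```

The point of the sketch is that the score difference arises entirely from how often each group was recorded, not from any difference in behaviour, which is precisely the bias the report warns about.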
Advances in AI and computer graphics over the last several years are now being harnessed to create, modify, and disseminate modified or fabricated images, audio, and video content, often referred to broadly as synthetic media. These new content generation and modification capabilities have significant, global implications for the legitimacy of information online, the quality of public discourse, the safeguarding of human rights and civil liberties, and the health of democratic institutions, especially given that some of these techniques may be used maliciously as a source of misinformation, manipulation, harassment, and persuasion. The ability to create synthetic or manipulated content that is difficult to discern from real events frames the urgent need for developing new capabilities for detecting such content, and for authenticating trusted media and news sources. AI techniques are being developed to detect and defend against synthetic and modified content. However, further investment and collaboration will be required for the advancement and application of these techniques, and for strengthening capacity in organizations and communities affected by these developments.
How are you supposed to react when a robot calls you a "gook"? At first glance, ImageNet Roulette seems like just another viral selfie app – those irresistible 21st-century magic mirrors that offer a simulacrum of insight in exchange for a photograph of your face. Want to know what you will look like in 30 years? If you were a dog, what breed would you be? That one went viral in 2016.
An artificial-intelligence art project has been criticised for using racist and sexist tags to classify its users. When someone shares a selfie with ImageNet Roulette, the web app matches it to the images it most closely resembles in an enormous library of profile photos. It then reveals the most popular tag assigned to the matching pictures by human workers, drawing on categories from the WordNet data set. These tags include racial slurs, "first offender", "rape suspect", "spree killer", "newsreader", and "Batman". The workers responsible for assigning the tags to the library pictures were recruited via a service offered by Amazon, called Mechanical Turk, which pays workers around the world pennies to perform small, monotonous tasks.
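At heart, the matching step described above is a nearest-neighbour lookup: embed the selfie, find the most similar library image, and return that image's tag. A minimal sketch of the idea, using made-up three-dimensional vectors and a few of the article's tags as stand-ins for real image embeddings:

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors, a standard similarity score."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def nearest_label(query, reference_embeddings, labels):
    """Return the tag of the reference image most similar to the query."""
    sims = [cosine_similarity(query, r) for r in reference_embeddings]
    return labels[int(np.argmax(sims))]

# Hypothetical embeddings standing in for real face/image features.
refs = [np.array([1.0, 0.0, 0.0]),
        np.array([0.0, 1.0, 0.0]),
        np.array([0.0, 0.0, 1.0])]
labels = ["newsreader", "first offender", "spree killer"]

query = np.array([0.9, 0.1, 0.0])
print(nearest_label(query, refs, labels))  # prints "newsreader"
```

The sketch makes the article's concern concrete: whatever tag happens to sit on the nearest library image is what the user sees, so biased or offensive crowd-sourced tags pass straight through the system.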
A colossal online image database used as the blueprint for artificial intelligence systems has been labelled "racist" and "cruel" after an online tool revealed strange and disturbing results. ImageNet, a trove of 14 million images hand-labelled by humans as a training guide for AIs, has been credited with kickstarting the modern AI boom and has become a benchmark against which new image recognition systems are measured. This week a public internet tool called ImageNet Roulette, created as part of an art exhibition, went viral on social media as hundreds of people uploaded pictures of their own faces to be classified by an AI. Users were perturbed to find the system tagging their faces with labels...
Many readers will remember The Jetsons – a futuristic world in which sophisticated robots in both the home and the workplace had the ability to do, think, learn, and interact with humans. While The Jetsons' rendering of the "future" has not come to fruition, robots and artificial intelligence (AI) have made and continue to make their way into the modern workplace at breakneck speed, creating unprecedented opportunities and challenges for employers in nearly every sector of the economy. This series will explore those challenges, a topic of considerable importance to employers but one that has been overshadowed by the cost savings and potentially positive economic impact that robots and AI can bring to a workplace. As the use of robots and AI in the workplace has increased and will continue to do so, employers must be proactive about identifying, understanding, and mitigating risks and areas of potential exposure. The future is coming, and in many ways is already here.
Concern at the use of facial recognition technology continues as California lawmakers ban its use for the body cameras used by state and local law enforcement officers. It comes after the US civil rights campaign group the ACLU ran a picture of every California state legislator through a facial-recognition program that matches facial pictures to a database of 25,000 criminal mugshots. The test saw the facial recognition program falsely flag 26 legislators as criminals. And to make matters worse, more than half of the falsely matched lawmakers were people of colour, according to the ACLU. Officials in San Francisco have already banned the use of facial recognition technology, meaning that local agencies, such as the local police force and other city agencies such as transportation, would not be able to utilise the technology in any of their systems.
Facial recognition technology is all around us – it's at concerts, airports, and apartment buildings. But its use by law enforcement agencies and courtrooms raises particular concerns about privacy, fairness, and bias, according to Jennifer Lynch, the Surveillance Litigation Director at the Electronic Frontier Foundation. Studies have shown that several of the major facial recognition systems are inaccurate. Amazon's software misidentified 28 members of Congress and matched them with criminal mugshots. These inaccuracies tend to be far worse for people of color and women.
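The one-to-many nature of these searches helps explain the false matches reported above: each probe photo is scored against every image in a large gallery, so even a tiny per-comparison error rate compounds, and the match threshold controls the trade-off. A toy sketch using random unit vectors as stand-ins for face embeddings; this is not a real recogniser, and the numbers (120 probes, a 25,000-image gallery) merely echo the scale of the ACLU test:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in embeddings: random unit vectors, NOT real face features.
n_probes, n_gallery, dim = 120, 25_000, 128
probes = rng.normal(size=(n_probes, dim))
gallery = rng.normal(size=(n_gallery, dim))
probes /= np.linalg.norm(probes, axis=1, keepdims=True)
gallery /= np.linalg.norm(gallery, axis=1, keepdims=True)

# One-to-many search: every probe is scored against every gallery face,
# and a probe "matches" if its single best score clears the threshold.
best = (probes @ gallery.T).max(axis=1)

def false_match_rate(threshold):
    """Fraction of probes flagged as a match at the given threshold."""
    return float(np.mean(best >= threshold))

# No probe is actually in the gallery, so every flagged match is false.
# Loosening the threshold can only keep or raise the false-match rate.
loose, strict = false_match_rate(0.30), false_match_rate(0.45)
print(loose, strict)
```

Even with purely random data, searching one face against tens of thousands of others gives chance alone many opportunities to clear a loose threshold, which is why one-to-many deployments like mugshot searches are especially prone to the kind of false flags the ACLU observed.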