Civil Rights & Constitutional Law


Why Businesses Should Adopt an AI Code of Ethics -- Now - InformationWeek

#artificialintelligence

The issue of ethical development and deployment of applications using artificial intelligence (AI) technologies is rife with nuance and complexity. Because humans are diverse -- different genders, races, values and cultural norms -- AI algorithms and automated processes won't work with equal acceptance or effectiveness for everyone worldwide. What most people agree upon is that these technologies should be used to improve the human condition. There are many AI success stories with positive outcomes in fields from healthcare to education to transportation. But there have also been unexpected problems with several AI applications, including facial recognition, and unintended bias in numerous others.


How AI systems can learn and unlearn to beat Internet censorship - Express Computer

#artificialintelligence

Internet censorship by authoritarian governments prohibits free and open access to information for millions of people around the world. Attempts to evade such censorship have turned into a continually escalating race to keep up with ever-changing, increasingly sophisticated internet censorship. Censoring regimes have had the advantage in that race, because researchers must manually search for ways to circumvent censorship, a process that takes considerable time. New work led by University of Maryland computer scientists could shift the balance of the censorship race. The researchers developed a tool called Geneva (short for Genetic Evasion), which automatically learns how to circumvent censorship.


New artificial intelligence system automatically evolves to evade internet censorship

#artificialintelligence

New work led by University of Maryland computer scientists could shift the balance of the censorship race. The researchers developed a tool called Geneva (short for Genetic Evasion), which automatically learns how to circumvent censorship. Tested in China, India and Kazakhstan, Geneva found dozens of ways to circumvent censorship by exploiting gaps in censors' logic and finding bugs that the researchers say would have been virtually impossible for humans to find manually. The researchers will introduce Geneva during a peer-reviewed talk at the Association for Computing Machinery's 26th Conference on Computer and Communications Security in London on November 14, 2019. "With Geneva, we are, for the first time, at a major advantage in the censorship arms race," said Dave Levin, an assistant professor of computer science at UMD and senior author of the paper.
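Geneva's actual code is not reproduced in the article. As a rough illustration of the genetic-evasion idea it describes -- evolving packet-manipulation strategies until one slips past a censor without breaking the connection -- here is a minimal sketch against a toy keyword censor. The censor model, the action set and all names are invented for illustration; real censors and Geneva's strategy grammar are far more complex.

```python
import random

random.seed(0)

FORBIDDEN = b"blockedword"

def censor_blocks(packets):
    # Toy censor: blocks only if the forbidden keyword appears intact
    # inside a single packet (real censors are far more sophisticated).
    return any(FORBIDDEN in p for p in packets)

def server_accepts(packets):
    # Toy server: reassembles the stream and checks the request arrived whole.
    return b"".join(packets) == b"GET /" + FORBIDDEN

ACTIONS = ["split_front", "split_mid", "noop"]

def apply_strategy(strategy, payload):
    # A strategy is a short sequence of packet manipulations applied
    # to the front of the outgoing packet queue.
    packets = [payload]
    for action in strategy:
        if action.startswith("split") and packets:
            p = packets.pop(0)
            cut = 2 if action == "split_front" else len(p) // 2
            packets = [p[:cut], p[cut:]] + packets
    return [p for p in packets if p]

def fitness(strategy):
    payload = b"GET /" + FORBIDDEN
    packets = apply_strategy(strategy, payload)
    # A good strategy evades the censor without breaking the connection.
    return int(server_accepts(packets) and not censor_blocks(packets))

def evolve(pop_size=20, generations=30, strategy_len=3):
    # Standard genetic loop: keep the fittest half, mutate to refill.
    population = [[random.choice(ACTIONS) for _ in range(strategy_len)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        if fitness(population[0]) == 1:
            return population[0]
        survivors = population[: pop_size // 2]
        children = []
        for parent in survivors:
            child = parent[:]
            child[random.randrange(strategy_len)] = random.choice(ACTIONS)
            children.append(child)
        population = survivors + children
    return population[0]

best = evolve()
```

The search discovers, without being told, that splitting the payload through the middle of the keyword defeats this particular censor -- the same kind of logic gap Geneva is reported to find automatically.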


Microsoft will honor California's CCPA privacy law across the U.S.

#artificialintelligence

Microsoft said in a blog post on Monday that it would honor California's privacy law throughout the United States, expanding the impact of a strict set of rules meant to protect consumers and their data. Microsoft said in the post it was a "strong supporter" of the California Consumer Privacy Act, known as CCPA, which will go into effect on Jan. 1. The California law is widely expected to harm profits over the long term for technology companies, retailers, advertising firms and other businesses dependent on collecting consumer data to track users and increase sales. The law raised fears among companies of a rise in a patchwork of state laws and prompted efforts in Washington to write federal legislation that would pre-empt state efforts. In September, Reuters was first to report that the federal privacy bill is not likely to come before Congress this year as lawmakers disagreed over several issues.


Former Intelligence Professionals Use AI To Uncover Human Trafficking

#artificialintelligence

Facial recognition (FR) appears to be one of the fastest-growing applications of artificial intelligence so far. As ZDNet notes, companies like Microsoft have already developed FR technology that can recognize facial expressions with the use of emotion tools. But the limiting factor so far has been that these tools were limited to eight so-called core states -- anger, contempt, fear, disgust, happiness, sadness, surprise or neutral. Now Japanese tech developer Fujitsu steps in with AI-based technology that takes facial recognition one step further in tracking expressed emotions. The existing FR technology is based, as ZDNet explains, on "identifying various action units (AUs) -- that is, certain facial muscle movements we make and which can be linked to specific emotions."
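To make the action-unit approach concrete, here is a simplified sketch mapping detected AUs to a few of the core states named above. The AU numbers follow the Facial Action Coding System convention (e.g., AU12 is the lip corner puller), but the rules and thresholds here are purely illustrative -- this is not Microsoft's or Fujitsu's actual model.

```python
# Illustrative AU-to-emotion rules (FACS-style AU numbers).
EMOTION_RULES = {
    "happiness": {6, 12},      # cheek raiser + lip corner puller
    "surprise":  {1, 2, 26},   # inner/outer brow raisers + jaw drop
    "anger":     {4, 5, 23},   # brow lowerer, upper lid raiser, lip tightener
    "sadness":   {1, 4, 15},   # brow raiser, brow lowerer, lip corner depressor
}

def classify_emotion(detected_aus):
    """Return the best-matching core state, or 'neutral' if nothing matches."""
    best, best_overlap = "neutral", 0
    for emotion, required in EMOTION_RULES.items():
        overlap = len(required & detected_aus)
        # Require a majority of the rule's AUs to be present.
        if overlap > best_overlap and overlap * 2 > len(required):
            best, best_overlap = emotion, overlap
    return best
```

For example, `classify_emotion({6, 12})` returns `"happiness"`, while an empty or unrecognized AU set falls back to `"neutral"`.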


Will AI promote Gender Equality or make it worse?

#artificialintelligence

In a world where inequality between men and women persists in many sectors of activity, the power of AI could help identify, address and possibly solve those inequalities. Only 22% of AI professionals globally and only 12% of leading machine-learning researchers are female, according to recent international reports. Because algorithms learn from real-world data, AI can potentially adopt and reinforce existing social biases. Developers could unconsciously integrate gender biases into their AI systems and perpetuate them in recruiting tools, search engines, face recognition systems, medical diagnosis and loan approval tools. AI digital assistants, obedient and obliging machines that pretend to be women, are entering our homes, cars and offices, providing a powerful illustration of gender biases coded into mass-market products.


Technology dominates our lives – that's why we should teach human rights law to software engineers

#artificialintelligence

Artificial Intelligence (AI) is finding its way into more and more aspects of our daily lives. It is in the algorithms designed to improve our health diagnostics. And it is in the predictive policing tools police use to fight crime. Each of these examples throws up potential problems for the protection of our human rights. Predictive policing, if not correctly designed, can lead to discrimination based on race, gender or ethnicity.


Big Tech Tries to Fight Racist and Sexist Data

#artificialintelligence

The fact that AI can pass on bias and prejudice is now widely recognized, probably because recent incidents of apparently racist or sexist algorithms involved big companies like Google and Amazon. A better understanding of how bad data gets encoded might make it easier to prevent. The large-scale machine learning AI that undergirds most recent advances relies on immense quantities of data. As the system feeds on the data provided, thousands of small adjustments are made to internal parameters to tweak how the data will be categorized. So, if the original training data is biased, the training is biased and the results will be biased.
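The mechanism described above -- biased training data producing biased parameters -- can be seen in a model small enough to inspect by hand. In this sketch, "training" is just estimating per-group approval frequencies, a stand-in for the thousands of parameter adjustments a large model makes; the group names and the deliberately skewed data are invented for illustration.

```python
from collections import defaultdict

# Synthetic, deliberately skewed "historical decisions": group A was
# approved 80% of the time, group B only 20% of the time.
training_data = (
    [("A", 1)] * 80 + [("A", 0)] * 20 +
    [("B", 1)] * 20 + [("B", 0)] * 80
)

def train(examples):
    # Estimate the approval rate for each group from the historical labels.
    counts = defaultdict(lambda: [0, 0])   # group -> [approvals, total]
    for group, label in examples:
        counts[group][0] += label
        counts[group][1] += 1
    return {g: approved / total for g, (approved, total) in counts.items()}

model = train(training_data)

def predict(group):
    # The fitted model simply reproduces the historical rate for each group:
    # biased inputs yield biased outputs.
    return model[group] >= 0.5
```

Nothing in the training code mentions the groups explicitly, yet `predict("A")` approves and `predict("B")` rejects -- the skew in the data becomes skew in the model, exactly as the excerpt describes.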


Artificial Intelligence And Human Rights Issues In Cyberspace – Techno Legal Tele Law And E-Lawyering Services By PTLB

#artificialintelligence

Human rights and civil liberties issues are rarely considered in their true perspective anywhere in the world. Traditionally, governments across the world have invested heavily in knowing more and more about their citizens and residents. This hunger to know everything could have been catastrophic if civil liberties activists were not so active. Nevertheless, we are slowly moving towards a totalitarian and Orwellian world, thanks to super-pervasive and intrusive technologies. We anticipated this trend back in 2009, when we started discussing Human Rights Protection In Cyberspace.