Civil Rights & Constitutional Law

The Charter and Human Rights in the Digital Age


When the Canadian Charter of Rights and Freedoms became part of the Constitution Act in 1982, it created an authoritative set of rules that circumscribed state authority and embedded certain rights into the very fabric of Canadian society. In doing this, the Charter also created a powerful moral imperative that reflected the core of Canadian values. These include fundamental freedoms around expression and religion; democratic rights; mobility rights, such as the right to leave and re-enter the country; legal rights, including the right not to be arbitrarily detained, and the rights to life, liberty and security of the person; and equality rights, including the right to equal protection and equal benefit of the law without discrimination. The Charter has long been considered a "living tree," meaning that it must be understood within the context of an ever-changing society, allowing for a "progressive interpretation, [that] accommodates and addresses the realities of modern life," as noted in the Supreme Court's 2004 judgment on same-sex marriage. However, it is unlikely that the drafters of the Charter could have foreseen the extent to which technology would change the Canadian economy, political system and society in the 35 years that followed.

Critics call NYPD's drone deployment 'a serious threat to privacy'


Drones are coming to New York City, and that should worry you. So argues the New York Civil Liberties Union, which in a Dec. 7 statement blasts the forthcoming NYPD deployment of the flying surveillance bots as "a serious threat to privacy." The 14 police drones, which the New York Times reports had been acquired by city police in June, are ostensibly to be used for tasks like keeping an eye on large crowds or hostage situations. However, critics see the deployment as the start of a very slippery, privacy-eroding slope. After all, large crowds of people often gather together to lawfully protest something like, say, police brutality.

Microsoft Pushes Urgency of Regulating Facial-Recognition Technology

Brad Smith, Microsoft's president and chief legal officer, dialed up the urgency on Thursday, arguing that delays to enacting new rules could "exacerbate societal issues." Society is ill-served "by a commercial race to the bottom, with tech companies forced to choose between social responsibility and market success," he wrote in a blog post. Mr. Smith also was scheduled to speak about Microsoft's position Thursday at the Brookings Institution in Washington, D.C., the same day a group of tech leaders from Microsoft and other companies visited the White House for a summit on issues including artificial intelligence. Microsoft's advocacy of regulation underlines the ambivalence over powerful new technologies enabled by advances in AI. Adoption of facial recognition is proceeding quickly--especially in China, where the government uses it extensively for surveillance--stirring concerns about potential misuse.

AWS CEO discusses machine learning ethics at AWS re:Invent


Amazon Web Services (AWS) has been heavily focused on machine learning over the past few years, releasing a number of products and features that showcase how effective the technology can be for organisations and consumers. But while the technology – much like its parent field, artificial intelligence – can do a lot of good, there are always questions about what it means for humanity in the long term, both in terms of job losses and in how these products and services can be put to unethical uses. In a press Q&A last week at AWS re:Invent in Las Vegas, AWS CEO Andy Jassy fielded several questions about how the company intends to ensure its machine learning capabilities are used ethically by customers. In response, Jassy cited use cases such as reducing human trafficking and reuniting children with parents where machine learning has already had a positive influence.

Opinion: Chatbots Are a Danger to Democracy


As we survey the fallout from the midterm elections, it would be easy to miss the longer-term threats to democracy that are waiting around the corner. Perhaps the most serious is political artificial intelligence in the form of automated "chatbots," which masquerade as humans and try to hijack the political process. Chatbots are software programs that are capable of conversing with human beings on social media using natural language. Increasingly, they take the form of machine learning systems that are not painstakingly "taught" vocabulary, grammar and syntax but rather "learn" to respond appropriately using probabilistic inference from large data sets, together with some human guidance. Some chatbots, like the award-winning Mitsuku, can hold passable levels of conversation.

Why Australia is quickly developing a technology-based human rights problem


Artificial intelligence (AI) might be technology's Holy Grail, but Australia's Human Rights Commissioner Edward Santow has warned about the need for responsible innovation and an understanding of the challenges new technology poses for basic human rights. "AI is enabling breakthroughs right now: Healthcare, robotics, and manufacturing; pretty soon we're told AI will bring us everything from the perfect dating algorithm to interstellar travel -- it's easy in other words to get carried away, yet we should remember AI is still in its infancy," Santow told the Human Rights & Technology conference in Sydney in July. Santow was launching the Human Rights and Technology Issues Paper, described as the beginning of a major project by the Human Rights Commission to protect the rights of Australians in a new era of technological change. The paper [PDF] poses questions centred on what protections are needed when AI is used in decisions that affect people's basic rights. It also asks what is required from lawmakers, governments, researchers, developers, and tech companies big and small. Pointing to Microsoft's AI Twitter bot Tay, which in March 2016 reflected the ugly side of humanity -- at least as present on social media -- Santow cited it as a key example of why AI must be got right before it is unleashed on the public.

Google attacked over reported plans to launch secret, censored search engine in China called 'Dragonfly'

The Independent

Google has been attacked over reported plans to launch a "censored" search engine in China. Amnesty International has launched a petition against the plans, arguing that the planned launch should be cancelled. Human rights campaigners claim that developing a specifically censored search engine would conflict with the company's values and would limit freedom of expression. They also point out that Google's own staff appear to disagree with the plans.

Protests against Google's 'dystopian' CENSORED search engine for China

Daily Mail

Amnesty International is holding protests across the globe today calling for an end to Google's plan to censor its search engine in China. Demonstrations will take place outside Google's offices in the United States, the United Kingdom, Australia, Canada, Germany, Hong Kong, the Netherlands and Spain. It was revealed that Google secretly built the censored search engine, code-named Dragonfly, to blacklist certain terms such as 'human rights' and 'student protest'. Amnesty has launched a petition to stop work on the 'dystopian' platform, which is said to launch in China between January and April 2019. The human rights group says that the move would 'set a dangerous precedent for tech companies enabling rights abuses by governments.'

Hundreds of Employees Demand Google Stop Work on Censored Search Engine for China


Hundreds of Google employees have signed an open letter published Tuesday on Medium demanding that the company cease work on Project Dragonfly, which is aimed at creating a search engine that the Chinese government would be able to control to censor certain results and surveil users. "International human rights organizations and investigative reporters have also sounded the alarm, emphasizing serious human rights concerns and repeatedly calling on Google to cancel the project," the letter reads in part. "So far, our leadership's response has been unsatisfactory." Google has kept much of Project Dragonfly under wraps, but news outlets like the Intercept have obtained documents revealing some of the details. The search engine reportedly would block websites having to do with democracy and political dissidents and also blacklist terms like "human rights."

'We're Taking A Stand': Google Workers Protest Plans For Censored Search In China


A security guard stands in front of Google's booth at the China International Import Expo earlier this month in Shanghai. Several Google employees have gone public with their opposition to the tech giant's plans for building a search engine tailored to China's censorship demands. The project, code-named Dragonfly, would block certain websites and search terms determined by the Chinese government -- a move that, according to a growing number of workers at Google, is tantamount to enabling "state surveillance." "We are among thousands of employees who have raised our voices for months. International human rights organizations and investigative reporters have also sounded the alarm, emphasizing serious human rights concerns and repeatedly calling on Google to cancel the project," said the letter's signatories, whose group initially numbered nine employees but has ballooned since its publication on Medium.