Artificial intelligence (AI) has already had a profound impact on business and society. Applied AI and machine learning (ML) are creating safer workplaces, more accurate health diagnoses and better access to information for global citizens. The Fourth Industrial Revolution will represent a new era of partnership between humans and AI, with potentially positive global impact. According to the World Economic Forum (WEF), AI advancements can help society address problems such as income inequality and food insecurity, creating a more "inclusive, human-centred future". The potential of AI innovation is nearly limitless, which is both exciting and frightening.
Researchers at the University of Michigan have been exploring the need to set ethics standards and policies for the use of artificial intelligence, and they now have their own place to do so. The university has created a new Center for Ethics, Society and Computing (ESC) that will focus on AI, data usage, augmented and virtual reality, privacy, open data and identity. According to the center's website, the name and abbreviation allude to the "ESC" key on a computer keyboard, which was added to interrupt a program when it produced unwanted results. "In the same way, the Center for Ethics, Society and Computing (ESC -- pronounced 'escape') is dedicated to intervening when digital media and computing technologies reproduce inequality, exclusion, corruption, deception, racism or sexism," the center's mission statement reads. The center will bring together scholars who are committed to "feminist, justice-focused, inclusive and interdisciplinary approaches to computing," the university said in a news release.
The Metropolitan police will start using live facial recognition, Britain's biggest force has announced. The decision to deploy the controversial technology, which has been dogged by privacy concerns and questions over its lawfulness, was immediately condemned by civil liberties groups, who described the move as "a breathtaking assault on our rights". But the Met said that after two years of trials, it was ready to use the cameras within a month. The force said it would deploy the technology overtly and only after consulting communities in which it is to be used. Nick Ephgrave, an assistant commissioner, said: "As a modern police force, I believe that we have a duty to use new technologies to keep people safe in London. Independent research has shown that the public support us in this regard."
As artificial intelligence, or AI, finds its way into more of our everyday lives, its potential to interfere with human rights grows more serious. There are several lenses through which experts examine artificial intelligence. Applying international human rights law, with its well-developed standards and institutions, to the examination of AI systems can add to the conversations already occurring, and provide a universal vocabulary and established forums for addressing power differentials. Moreover, human rights law contributes a framework for remedies. General remedies fall into four broad categories: data protection rules to safeguard rights in the data sets used to build and train AI systems; special safeguards for government uses of AI; safeguards for private sector use of AI systems; and investment in further research into the future of AI and its potential interference with human rights.
Google and Alphabet CEO Sundar Pichai supports a temporary ban on facial recognition technology in the European Union. Activists and technologists have called the controversial technology racially biased, and voiced concerns about privacy, regarding its use by governments and law enforcement. "I think it is important that governments and regulations tackle it sooner rather than later and give a framework for it," Pichai told a conference in Brussels, according to Reuters. Alphabet is Google's parent company.
Data science consultant Cathy O'Neil helps companies audit their algorithms for a living. And when it comes to how algorithms and artificial intelligence can enable bias in the job hiring process, she said the biggest issue isn't even with the employers themselves. A new Illinois law that aims to help job seekers understand how AI tools are used to evaluate them in video interviews recently resurfaced the debate over AI's role in recruiting. But O'Neil believes the law tries to tackle bias too late in the process. "The problem actually lies before the application comes in. The problem lies in the pipeline to match job seekers with jobs," said O'Neil, founder and CEO of O'Neil Risk Consulting & Algorithmic Auditing.
Last year, communities banded together to prove that they can--and will--defend their privacy rights. As part of ACLU-led campaigns, three California cities--San Francisco, Berkeley, and Oakland--as well as three Massachusetts municipalities--Somerville, Northampton, and Brookline--banned government use of face recognition in their communities. Following another ACLU effort, the state of California blocked police body cam use of the technology, forcing San Diego's police department to shutter its massive face surveillance flop. And in New York City, tenants successfully fended off their landlord's efforts to install face surveillance. Even the private sector demonstrated it had a responsibility to act in the face of the growing threat of face surveillance.
New York City is hiring. The city earlier this month unveiled a description of its new Algorithms Management Policy Officer role. But some worry the creation of a procedural position forced to maneuver within an arguably flawed bureaucratic structure only perpetuates the city's imperfect approach to developing policy for government AI use. "It appears this role will simply provide a rubber stamp to current and future use of [Automated Decision Systems] without evaluating or even attempting to address known concerns with ADS currently used by city agencies," Rashida Richardson, director of policy research at the AI Now Institute at NYU and a critic of the city's task force, told RedTail. "This role is unique in urban governance and is intended to help provide protocols and information about the systems and tools City agencies use to make decisions," the city said in a statement.