Machine Learning, OSS & Ethical Conduct

#artificialintelligence

"Machine learning is the subfield of computer science that "gives computers the ability to learn without being explicitly programmed" (Arthur Samuel, 1959).[1] Evolved from the study of pattern recognition and computational learning theory in artificial intelligence,[2] machine learning explores the study and construction of algorithms that can learn" or be trained to make predictions on based on input "data[3]" such algorithms are not bound by static program instructions, instead making their predictions or decisions according to models they themselves build from sample inputs. "Machine learning is employed in a range of computing tasks where designing and programming explicit algorithms is unfeasible; example applications include spam filtering, optical character recognition (OCR),[5] search engines and computer vision." Machine learning is also employed experimentally to detect patterns and linkages in seemingly random or unrelated data. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.


Racist artificial intelligence? Maybe not, if computers explain their 'thinking'

#artificialintelligence

Growing concerns about how artificial intelligence (AI) makes decisions have inspired U.S. researchers to make computers explain their "thinking." "Computers are going to become increasingly important parts of our lives, if they aren't already, and the automation is just going to improve over time, so it's increasingly important to know why these complicated systems are making the decisions that they are," Sameer Singh, an assistant professor of computer science at the University of California, Irvine, told CTV's Your Morning on Tuesday. Singh explained that, in almost every application of machine learning and AI, there are cases where the computers do something completely unexpected. "Sometimes it's a good thing, it's doing something much smarter than we realize," he said. In other cases the surprise is far less welcome, as with the Microsoft AI chatbot Tay, which became racist in less than a day.
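As a purely illustrative sketch of what it can mean for a computer to "explain its thinking" (not a depiction of the researchers' actual system), the snippet below trains a simple linear text classifier and reports which words pushed a particular prediction up or down. The toy data and the linear model are assumptions made for the example.

```python
# Illustrative sketch: explain a prediction by listing each feature's contribution.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

messages = ["win free prize", "free cash offer", "team meeting notes", "lunch plans"]
labels = [1, 1, 0, 0]  # 1 = flagged, 0 = not flagged (toy labels)

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(messages)
model = LogisticRegression().fit(X, labels)

# "Explain" a new decision by reporting each word's contribution (count * weight).
new_message = "free prize meeting"
x = vectorizer.transform([new_message]).toarray()[0]
words = vectorizer.get_feature_names_out()
contributions = x * model.coef_[0]
for word, score in sorted(zip(words, contributions), key=lambda p: -abs(p[1])):
    if score != 0:
        print(f"{word}: {score:+.2f}")
```

Listing which inputs drove a decision is one simple form of explanation; it does not show that a system is fair, but it makes unexpected behaviour easier to spot and question.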


Rights group files federal complaint against AI-hiring firm HireVue, citing 'unfair and deceptive' practices

#artificialintelligence

A prominent rights group is urging the Federal Trade Commission to take on the recruiting-technology company HireVue, arguing the firm has turned to unfair and deceptive trade practices in its use of face-scanning technology to assess job candidates' "employability." The Electronic Privacy Information Center, known as EPIC, on Wednesday filed an official complaint calling on the FTC to investigate HireVue's business practices, saying the company's use of unproven artificial-intelligence systems that scan people's faces and voices constituted a wide-scale threat to American workers. HireVue's "AI-driven assessments," which more than 100 employers have used on more than a million job candidates, use video interviews to analyze hundreds of thousands of data points related to a person's speaking voice, word selection and facial movements. The system then creates a computer-generated estimate of the candidates' skills and behaviors, including their "willingness to learn" and "personal stability." Candidates aren't told their scores, but employers can use those reports to decide whom to hire or disregard.


Google's new principles on AI need to be better at protecting human rights

#artificialintelligence

There are growing concerns about the potential risks of AI – and mounting criticism of technology giants. In the wake of what has been called an AI backlash or "techlash", states and businesses are waking up to the fact that the design and development of AI have to be ethical, benefit society and protect human rights. In the last few months, Google has faced protests from its own staff against the company's AI work with the US military. The US Department of Defense contracted Google to develop AI for analysing drone footage in what is known as "Project Maven". A Google spokesperson was reported to have said: "the backlash has been terrible for the company" and "it is incumbent on us to show leadership".


Artificial Intelligence Is Not The Future Of Work; It's Already Here

#artificialintelligence

Business pundits trumpet AI as the future for U.S. employment, but a large-scale survey of U.S. workers indicates that more than 32% are already exposed to some form of AI in their jobs. An additional 6% of workers will begin using AI tools for the first time in 2019. Optimized Workforce – a crowd-sourced think tank that studies the intersection of technology and employment – surveyed more than 10,000 U.S. workers to understand the time they spend on specific tasks, the technologies they work with, and the technologies they will deploy next year to help with those tasks. The survey sampled workers from 19 of the 20 Census Bureau NAICS codes and all of the Bureau of Labor Statistics' top-level occupational codes. The findings, released in a report available on the think tank's Web site, titled "AI Opportunity Report 2018: Which Industries Are Investing in AI? Which Ones Should Be?" reveal that AI-enabled document classification and document creation technologies lead all AI penetration and will continue to see strong investment in 2019.