According to a new Princeton study, though, the engineers responsible for teaching these AI programs about humans are also teaching them how to be racist, sexist assholes. The study, published in today's edition of the journal Science by Aylin Caliskan, Joanna J. Bryson, and Arvind Narayanan, focuses on machine learning, the process by which AI programs learn to think by making associations based on patterns observed in mass quantities of data. In a completely neutral vacuum, this would mean that AI would learn to provide responses based solely on objective, data-driven facts; in practice, the programs also absorb the human biases embedded in that data. To demonstrate this, Caliskan and her team created a modified version of the Implicit Association Test, an exercise that asks participants to quickly associate concrete concepts, like people of color and women, with abstract concepts like goodness and evil.
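The kind of association measurement the study adapts can be sketched in a few lines: score a word by how much closer its vector sits to one attribute set (e.g., "pleasant" words) than to another. The vectors below are toy three-dimensional values invented purely for illustration; the actual study used real word embeddings trained on web text.

```python
import math

def cosine(u, v):
    # Cosine similarity between two vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def association(word_vec, pleasant, unpleasant):
    # Mean similarity to the "pleasant" set minus mean similarity
    # to the "unpleasant" set: positive means the word leans pleasant.
    return (sum(cosine(word_vec, p) for p in pleasant) / len(pleasant)
            - sum(cosine(word_vec, u) for u in unpleasant) / len(unpleasant))

# Hypothetical toy embeddings, not real trained vectors.
pleasant = [[0.9, 0.1, 0.0], [0.8, 0.2, 0.1]]
unpleasant = [[0.1, 0.9, 0.0], [0.0, 0.8, 0.2]]
flower = [0.85, 0.15, 0.05]
insect = [0.05, 0.85, 0.10]

print(association(flower, pleasant, unpleasant) > 0)  # flower leans "pleasant"
print(association(insect, pleasant, unpleasant) < 0)  # insect leans "unpleasant"
```

The study's finding, in these terms, is that words associated with some social groups systematically score closer to negative attribute sets than others, purely as a consequence of the training data.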
Likewise, in 2007 scientists completed Chinook, a computer program that cannot be beaten at checkers. In earlier victories, the program "memorized" every potential move and mathematically calculated the odds of success for each. Using this "brute force" technique, the computer could quickly determine the outcome of every potential move and choose the one that leads to success. Deep Blue, the chess machine that defeated Garry Kasparov, worked similarly, processing 200 million board positions per second when determining its next move.
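The brute-force idea, enumerate every reachable position and score it, is the classic minimax search. Checkers is far too large to show here, but the same technique applied to tic-tac-toe is a faithful miniature:

```python
# Minimax on tic-tac-toe: exhaustively evaluate every continuation.
WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in WIN_LINES:
        if board[a] and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    # Score a position: +1 if X wins, -1 if O wins, 0 for a draw,
    # assuming both sides play perfectly from here on.
    w = winner(board)
    if w == 'X':
        return 1
    if w == 'O':
        return -1
    moves = [i for i, cell in enumerate(board) if cell is None]
    if not moves:
        return 0  # board full: draw
    scores = []
    for m in moves:
        board[m] = player
        scores.append(minimax(board, 'O' if player == 'X' else 'X'))
        board[m] = None  # undo the move
    return max(scores) if player == 'X' else min(scores)

# Under perfect play, tic-tac-toe is a draw:
print(minimax([None] * 9, 'X'))  # 0
```

Chinook's achievement was exactly this kind of exhaustive analysis, just carried out over roughly 5 x 10^20 checkers positions instead of a few hundred thousand tic-tac-toe ones.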
We describe a variety of projects developed as part of a course in Artificial Intelligence at the University of Minnesota. The projects cover navigation of small mobile robots and learning to accomplish simple tasks, and require a variety of approaches, from neural networks to genetic programming to reactive behaviors. The projects have all been implemented on real robots. We discuss how the combination of robotics with Artificial Intelligence adds value to the learning of AI concepts, and how the fun of building and programming a robot is a highly motivating force for the learning process.

1 Introduction

The major goal of this paper is to describe examples of the integration of real robotics projects into a course in Artificial Intelligence. The examples presented here are some of the class projects done by students taking a course in Artificial Intelligence at the University of Minnesota. The course is intended for senior undergraduate and first-year graduate students. The textbook we use i...
Similarly, we might ask ourselves where we draw the line on what we find ethically acceptable in artificial intelligence (AI) as it relates to composition and creation in the worlds of art, writing, the performing arts, and music, as well as in liberal arts education. Most of us are aware of music streaming services that select songs for us based on data about users' listening preferences. Although we can recognize that AI cannot adequately replace human teachers and tutors, that recognition shouldn't mean total rejection of any sort of adaptive technology, such as adaptive learning software or learning management systems. Consider, for example, the potential of adaptive learning software to provide an affordable alternative to textbooks.
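The preference-based selection that streaming services perform can be sketched very simply: recommend a song favored by listeners whose play history overlaps with yours. The listener names, song names, and play counts below are all invented for illustration; real services use far richer models.

```python
from collections import Counter

# Hypothetical play counts per user (all names invented).
listens = {
    "ana":  {"jazz_a": 12, "jazz_b": 7, "rock_a": 1},
    "ben":  {"jazz_a": 10, "jazz_b": 9, "jazz_c": 4},
    "cara": {"rock_a": 8, "rock_b": 6},
}

def recommend(user, listens):
    """Suggest the unheard song most played by users with overlapping taste."""
    own = listens[user]
    scores = Counter()
    for other, plays in listens.items():
        if other == user:
            continue
        # How much listening this user shares with `other`.
        overlap = sum(min(own.get(song, 0), count) for song, count in plays.items())
        # Credit the other user's songs we haven't heard, weighted by overlap.
        for song, count in plays.items():
            if song not in own:
                scores[song] += overlap * count
    return scores.most_common(1)[0][0] if scores else None

print(recommend("ana", listens))  # "jazz_c": ben shares ana's jazz taste
```

Adaptive learning software works on the same principle, substituting exercise performance for play counts when deciding what material to serve next.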
A recent project entitled 'Your face is big data' saw an art school student photograph people who happened to sit across from him on the subway, then use FindFace, a facial recognition app built on neural-network technology, to track them down on the Russian social media site VK. The FindFace service was designed for users of Russia's largest social network, VKontakte, and is based on face recognition technology developed by N-Tech.Lab. According to a report by PC World, the Rodchenko Art School student said it was ridiculously easy to find 60 to 70 percent of the subjects aged between 18 and 35, and that along the way he learned a lot about the lives of complete strangers. "My point in this art project is to show how technology breaks down the possibility of private life," he said.