In this guest post, Jacqueline M. Kory Westlund, a researcher in the Personal Robots Group at the MIT Media Lab, describes her projects and explorations to understand children's relationships with social robots. Which design features of the robots affect children's learning, such as the expressivity of the robot's voice, the robot's social contingency, or whether it provides personalized feedback? When I tell people about the Media Lab's work with robots for children's education, a common question is: "Are you trying to replace teachers?" Despite all the research that seems to point to the conclusion that "robots can be like people," there are also studies showing that children learn more from human tutors than from robot tutors.
It would be very desirable to inject domain knowledge into machine learning models, which is something deep learning methods currently cannot do well. We already know quite a bit about English grammar and sentence construction; why, then, can't our latest and greatest deep-learning-based language model be guaranteed to obey those rules? I don't see deep learning completely overshadowing the rest of machine learning five years down the road. Bio: Zeeshan Zia researches computer vision and machine learning solutions at Microsoft.
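One classic way to get the guarantee the excerpt asks for is to combine the two approaches: let a model score candidate continuations, but let hand-written grammar rules veto anything illegal. The sketch below is a toy illustration of that idea, not any particular system's method; the tag set, the allowed-transition table, and the scores (which stand in for a trained model's output) are all invented for the example.

```python
# Toy grammar: which parts of speech may follow each other.
# In a real system these constraints would come from a parser or
# formal grammar, and the scores from a trained language model.
ALLOWED_NEXT = {
    "DET":  {"NOUN", "ADJ"},
    "ADJ":  {"NOUN", "ADJ"},
    "NOUN": {"VERB"},
    "VERB": {"DET", "NOUN"},
}

def constrained_step(prev_tag, model_scores):
    """Pick the highest-scoring next tag that the grammar allows."""
    legal = {t: s for t, s in model_scores.items()
             if t in ALLOWED_NEXT[prev_tag]}
    if not legal:
        raise ValueError("grammar rules out every candidate")
    return max(legal, key=legal.get)

# The model may prefer an ungrammatical continuation ("DET DET"),
# but the constraint masks it out before sampling.
scores = {"DET": 0.9, "NOUN": 0.6, "ADJ": 0.3, "VERB": 0.1}
print(constrained_step("DET", scores))  # prints "NOUN"
```

The model still supplies the preferences, but the symbolic rules make the grammatical guarantee; this "constrained decoding" pattern is one of the simpler ways to inject domain knowledge into a learned model's output.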
The 34-year-old assistant professor of physics at the Massachusetts Institute of Technology is the architect of a new theory called "dissipative adaptation," which has helped to explain how complex, life-like function can self-organize and emerge from simpler things, including inanimate matter. Wittgenstein argued that a word's meaning depends on its context, a context determined by the people who are using it. "In the beginning, God created the heavens and the earth ..." Here, the Hebrew word for "create" is bara, the word for "heavens" is shamayim, and the word for "earth" is aretz; but their true meanings, England says, only come into view through their context in the following verses.
This conversation occurred between two AI agents developed inside Facebook. As these two agents competed to get the best deal, a very effective bit of AI-versus-AI sparring akin to what researchers elsewhere call a "generative adversarial network," neither was offered any incentive for speaking as a normal person would. As Dauphin points out, machines might not think as you or I do, but tokens allow them to exchange incredibly complex thoughts through the simplest of symbols. In other words, machines allowed to speak and generate their own machine languages could, somewhat ironically, let us communicate with (and even control) machines better, simply because they would be predisposed to have a better understanding of the words we speak.
It is this glaring flaw of computers that has allowed CAPTCHAs, computer-generated tests, to reliably tell humans apart from computers: preventing machine-based attacks on a network, stopping robots from creating fake accounts, and blocking large-scale spamming attacks on websites. Artificial neural networks involve a large number of simple processing units working in tandem, arranged in a way loosely inspired by the human brain. It wasn't really the programmers with fancy computer science degrees who powered Google Translate's ability to translate languages; it was ordinary users, whose incessant poking at its inability to accurately render one language in another finally gave it a more uncanny, human-like accuracy. Facial recognition software has allowed computers to identify a person from a photo.
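"Simple processing units working in tandem" is concrete enough to show in a few lines. The sketch below is a minimal, untrained two-layer network: each unit takes a weighted sum of its inputs and squashes it through a sigmoid. The weights are arbitrary numbers chosen for the example; a real network would learn them from data.

```python
import math

def neuron(inputs, weights, bias):
    """One unit: weighted sum of its inputs, squashed by a sigmoid."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))

def layer(inputs, weight_rows, biases):
    """A layer is many such units reading the same inputs in parallel."""
    return [neuron(inputs, w, b) for w, b in zip(weight_rows, biases)]

# Two inputs -> hidden layer of three units -> single output unit.
x = [0.5, -1.0]
hidden = layer(x, [[0.1, 0.4], [-0.3, 0.8], [0.5, 0.2]], [0.0, 0.1, -0.2])
output = neuron(hidden, [0.7, -0.6, 0.9], 0.05)
print(round(output, 3))  # a value strictly between 0 and 1
```

Everything interesting about a real network, from face detection to translation, comes from stacking many such layers and tuning the weights by training, but the per-unit arithmetic is exactly this simple.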
In the past, we looked at the top messages across all Facebook bots. We also looked at the most common languages in Facebook bots, based on the percentage of bots having users with a particular language setting. Where it gets more interesting is looking at the most common messages based on the user's language. We also see equivalents of common English words in other languages: "شكرا" is "thank you" in Arabic, and "謝謝" is "thank you" in Chinese.
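The per-language breakdown described above is essentially a group-and-count: bucket each message by the sender's language setting, then rank within each bucket. A minimal sketch with made-up data (the log format and the sample messages are hypothetical):

```python
from collections import Counter, defaultdict

# (user_language, message) pairs, as might come from a bot's logs.
messages = [
    ("en", "hi"), ("en", "hi"), ("en", "thanks"),
    ("ar", "شكرا"), ("ar", "شكرا"), ("ar", "مرحبا"),
    ("zh", "謝謝"), ("zh", "你好"), ("zh", "謝謝"),
]

by_language = defaultdict(Counter)
for lang, text in messages:
    by_language[lang][text] += 1

# Top message per language setting.
for lang, counts in sorted(by_language.items()):
    top, n = counts.most_common(1)[0]
    print(lang, top, n)
```

The same two-level structure (language, then message) also answers the earlier question directly: the percentage of bots per language is just the outer keys' counts divided by the total.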
Apple just launched a blog focused on sharing the company's machine learning research. The Apple Machine Learning Journal is a bit empty right now, as the company has shared only one post, about turning synthetic images into realistic ones in order to train neural networks. According to the paper, Apple needs to train its neural networks to detect faces and other objects in photos. But instead of assembling huge libraries with hundreds of millions of sample photos to train those networks, Apple creates synthetic images of computer-generated characters and applies a refining filter to make the synthetic images look real.
The voice component of Samsung's Bixby assistant has been a long time coming. At last, though, it's becoming widely available: Samsung is officially rolling out Bixby's voice features to S8 and S8 Plus owners in the US. It can also read out the latest text messages in Samsung's official app, so you can catch up on conversations while your hands are busy. All told, this makes Bixby's voice assistant much more accessible.
Good teachers meet their students where they are, and they adapt their methods accordingly. Tutoring systems, language learning apps, and educational games are likewise designed to change our mental abilities. It's when we consider what it takes to change mental abilities or behaviors that things start to get interesting: it isn't just that people adapt to technology; technology also adapts to people.
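"Technology adapts to people" can be as simple as a tutoring system nudging its difficulty level up or down based on a learner's recent answers. The sketch below is one hypothetical such feedback loop, not any particular product's algorithm; the thresholds and the 1-to-10 scale are arbitrary choices for illustration.

```python
def adjust_difficulty(level, recent_correct):
    """Raise difficulty after a streak of successes, lower it when the
    learner is mostly failing, and otherwise hold steady.

    recent_correct is a list of 1s (correct) and 0s (incorrect) for
    the last few exercises; levels are clamped to the range 1..10.
    """
    rate = sum(recent_correct) / len(recent_correct)
    if rate >= 0.8:          # hot streak: make it harder
        level += 1
    elif rate <= 0.4:        # struggling: make it easier
        level -= 1
    return max(1, min(10, level))

# A learner on a hot streak gets harder problems...
print(adjust_difficulty(3, [1, 1, 1, 1, 0]))  # prints 4
# ...and one who is struggling gets easier ones.
print(adjust_difficulty(3, [0, 1, 0, 0, 0]))  # prints 2
```

Production tutoring systems use far richer learner models, but this loop captures the core of the mutual adaptation the excerpt describes: the learner's behavior changes the system, and the system's response changes what the learner practices next.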
The thesis of strong Artificial Intelligence (AI), defended by philosophers and neuroscientists such as the American Daniel Dennett, argues that if a computer exhibits intelligent behavior (interacting with the environment, reasoning, solving problems, planning, learning, communicating, etc.), then it genuinely possesses a mind. Philosopher John Searle devised a curious thought experiment, called the "Chinese room," to refute strong AI. By analogy, Searle concludes that a program cannot give a computer understanding or consciousness, regardless of how intelligently it makes the computer behave. As Dennett puts it in his book Consciousness Explained, Searle's argument that the Chinese room is not conscious at all misses a key factor: complexity.