Human intuition as a defense against attribute inference
Marcin Waniek, Navya Suri, Abdullah Zameek, Bedoor AlShebli, Talal Rahwan
Attribute inference - the process of analyzing publicly available data in order to uncover hidden information - has become a major threat to privacy, given the recent technological leap in machine learning. One way to tackle this threat is to strategically modify one's publicly available data in order to keep one's private information hidden from attribute inference. We evaluate people's ability to perform this task, and compare it against algorithms designed for this purpose. We focus on three attributes: the gender of the author of a piece of text, the country in which a set of photos was taken, and the link missing from a social network. For each of these attributes, we find that people's effectiveness is inferior to that of AI, especially when it comes to hiding the attribute in question. Moreover, when people are asked to modify the publicly available information in order to hide these attributes, they are less likely to make high-impact modifications compared to AI. This suggests that people are unable to recognize the aspects of the data that are critical to an inference algorithm. Taken together, our findings highlight the limitations of relying on human intuition to protect privacy in the age of AI, and emphasize the need for algorithmic support to protect private information from attribute inference.
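The abstract's threat model — an algorithm inferring a hidden attribute from public data, and a defender editing that data to block the inference — can be illustrated with a minimal sketch. Everything here is hypothetical (the word list, the weights, and the scoring rule are illustrative, not the models used in the study); the point is only that the edit that best hides the attribute targets the token with the highest impact on the inference, which is not necessarily the token a person would notice.

```python
from collections import Counter

# Hypothetical indicator weights; a real attacker would learn these from data.
WEIGHTS = {"soccer": 0.9, "makeup": -0.9, "coffee": 0.1, "lovely": -0.4}

def infer_score(text):
    """Sum indicator weights over the tokens; the sign gives the inferred attribute."""
    counts = Counter(text.lower().split())
    return sum(w * counts[tok] for tok, w in WEIGHTS.items())

def impact(text, token):
    """How much deleting every copy of `token` moves the inference score."""
    edited = " ".join(t for t in text.split() if t.lower() != token)
    return abs(infer_score(text) - infer_score(edited))

text = "lovely coffee and soccer soccer"
# The high-impact edit removes the repeated, heavily weighted token,
# not the one that merely looks salient to a human reader.
best_edit = max(WEIGHTS, key=lambda tok: impact(text, tok))
print(best_edit)  # → soccer
```

Under this toy model, deleting "soccer" shifts the score by 1.8 while deleting "lovely" shifts it by only 0.4 — a concrete instance of the paper's finding that people tend to miss the high-impact modifications an algorithm would make.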
Humans won't be able to control a superintelligent AI, according to a study
It may not be theoretically possible to predict the actions of artificial intelligence, according to researchers from the Max-Planck Institute for Humans and Machines. "A super-intelligent machine that controls the world sounds like science fiction," said Manuel Cebrian, co-author of the study and leader of the research group. "But there are already machines that perform certain important tasks independently without programmers fully understanding how they learned it [sic]." Our society is moving increasingly towards a reliance on artificial intelligence -- from AI-run interactive job interviews to AI-generated music and even memes, AI is already very much part of everyday life. According to the research group's study, published in the Journal of Artificial Intelligence Research, predicting an AI's actions would require a simulation of that exact superintelligence.
Crowdsourcing Moral Machines
Robots and other artificial intelligence (AI) systems are transitioning from performing well-defined tasks in closed environments to becoming significant physical actors in the real world. No longer confined within the walls of factories, robots will permeate the urban environment, moving people and goods around, and performing tasks alongside humans. Perhaps the most striking example of this transition is the imminent rise of automated vehicles (AVs). They are expected to increase the efficiency of transportation, and free up millions of person-hours of productivity. Even more importantly, they promise to drastically reduce the number of deaths and injuries from traffic accidents.12,30 Indeed, AVs are arguably the first human-made artifact to make autonomous decisions with potential life-and-death consequences on a broad scale. This marks a qualitative shift in the consequences of design choices made by engineers. The decisions of AVs will generate indirect negative consequences, such as consequences affecting the physical integrity of third parties not involved in their adoption--for example, AVs may prioritize the safety of their passengers over that of pedestrians.
Ethics, efficiency, and artificial intelligence - The Boston Globe
In 2018, Google unveiled Duplex, an artificial intelligence-powered assistant that sounds eerily human-like, complete with 'umms' and 'ahs' designed to make the conversation more natural. The demo had Duplex call a salon to schedule a haircut and then call a restaurant to make a reservation. As Google's CEO Sundar Pichai demonstrated the system at Google's I/O (input/output) developer conference, the crowd cheered, hailing the technological achievement. Indeed, this represented a big leap toward developing AI voice assistants that can pass the "Turing Test," which requires machines to be able to hold conversations while being completely indistinguishable from humans. But not everyone was so enthusiastic.
Bots Outperform Humans if They Impersonate Us
"How can I help you?" "Hi, I'm calling to book a women's haircut for a client." "Sure, give me one second." "For what time are you looking for around?" The machine assistant never identified itself as a bot in the demo. And Google got a lot of flak for that. They later clarified that they would only launch the tech with "disclosure built in." But therein lies a dilemma, because a new study in the journal Nature Machine Intelligence suggests that a bot is most effective when it hides its machine identity -- that is, if it is allowed to pose as human. Talal Rahwan is a computational social scientist at New York University's campus in Abu Dhabi. His team recruited nearly 700 online volunteers to play the prisoner's dilemma -- a classic game of negotiation, trust and deception -- against either humans or bots. Half the time, the human players were told the truth about who they were matched up against. The other half, they were told they were playing a bot when they were actually playing a human, or that they were battling a human when, in fact, it was only a bot. And the scientists found that bots actually did remarkably well in this game of negotiation -- if they impersonated humans. "When the machine is reported to be human, it outperforms humans themselves."
Study Suggests Robots Are More Persuasive When They Pretend To Be Human
Advances in artificial intelligence have created bots and machines that can potentially pass as humans if they interact with people exclusively through a digital medium. Recently, a team of computer science researchers has studied how robots/machines and humans interact when the humans believe that the robots are also human. As reported by ScienceDaily, the results of the study found that people find robots/chatbots more persuasive when they believe the bots are human. Talal Rahwan, associate professor of Computer Science at NYU Abu Dhabi, has recently led a study that examined how robots and humans interact with each other. The results of the experiment were published in Nature Machine Intelligence in a report called "Transparency-Efficiency Tradeoff in Human-Machine Cooperation." During the course of the study, test subjects were instructed to play a cooperative game with a partner, and the partner could be either a human or a bot.
New Research Suggests Robots Appear More Persuasive When Pretending to be Human
Recent technological breakthroughs in artificial intelligence have made it possible for machines, or bots, to pass as humans. A team of researchers led by Talal Rahwan, associate professor of Computer Science at NYU Abu Dhabi, conducted an experiment to study how people interact with bots that they believe to be human, and how such interactions are affected once bots reveal their identity. The researchers found that bots are more efficient than humans at certain human–machine interactions, but only if they are allowed to hide their non-human nature. In their paper titled "Behavioral Evidence for a Transparency-Efficiency Tradeoff in Human-Machine Cooperation," published in Nature Machine Intelligence, the researchers presented their experiment in which participants were asked to play a cooperation game with either a human associate or a bot associate. This game, called the Iterated Prisoner's Dilemma, was designed to capture situations in which each of the interacting parties can either act selfishly in an attempt to exploit the other, or act cooperatively in an attempt to attain a mutually beneficial outcome.
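The incentive structure the paragraph describes is easy to reproduce. Below is a minimal simulation of the Iterated Prisoner's Dilemma; the payoff values and the two strategies are the textbook illustrative ones, not those of the study, and are included only to show why cooperation is mutually beneficial while exploitation is tempting.

```python
# Standard prisoner's dilemma payoffs: (my payoff, their payoff),
# indexed by (my move, their move). "C" = cooperate, "D" = defect.
PAYOFFS = {
    ("C", "C"): (3, 3),  # mutual cooperation
    ("C", "D"): (0, 5),  # I am exploited
    ("D", "C"): (5, 0),  # I exploit
    ("D", "D"): (1, 1),  # mutual defection
}

def play(strategy_a, strategy_b, rounds=10):
    """Iterate the dilemma; each strategy sees only the opponent's previous move."""
    score_a = score_b = 0
    last_a = last_b = None
    for _ in range(rounds):
        move_a, move_b = strategy_a(last_b), strategy_b(last_a)
        pa, pb = PAYOFFS[(move_a, move_b)]
        score_a, score_b = score_a + pa, score_b + pb
        last_a, last_b = move_a, move_b
    return score_a, score_b

# Two classic strategies: reciprocate, or always exploit.
tit_for_tat = lambda opp_last: "C" if opp_last in (None, "C") else "D"
always_defect = lambda opp_last: "D"

print(play(tit_for_tat, tit_for_tat))    # → (30, 30)
print(play(always_defect, tit_for_tat))  # → (14, 9)
```

Mutual cooperation earns both players 30 over ten rounds; the defector gains one round of exploitation and then locks both players into the low mutual-defection payoff — the tension between selfish and cooperative play that the experiment's game is built around.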
Holding Algorithms Accountable
Artificial intelligence programs are extremely good at finding subtle patterns in enormous amounts of data, but don't understand the meaning of anything. Whether you are searching the Internet on Google, browsing your news feed on Facebook, or finding the quickest route on a traffic app like Waze, an algorithm is at the root of it. Algorithms have permeated our daily lives; they help to simplify, distill, process, and provide insights from massive amounts of data. According to Ernest Davis, a professor of computer science at New York University's Courant Institute of Mathematical Sciences whose research centers on the automation of common-sense reasoning, the technologies that currently exist for artificial intelligence (AI) programs are extremely good at finding subtle patterns in enormous amounts of data. "One way or another," he says, "that is how they work."
The Anthropologist of Artificial Intelligence
How do new scientific disciplines get started? For Iyad Rahwan, a computational social scientist with self-described "maverick" tendencies, it happened on a sunny afternoon in Cambridge, Massachusetts, in October 2017. Rahwan and Manuel Cebrian, a colleague from the MIT Media Lab, were sitting in Harvard Yard discussing how to best describe their preferred brand of multidisciplinary research. The rapid rise of artificial intelligence technology had generated new questions about the relationship between people and machines, which they had set out to explore. Rahwan, for example, had been exploring the question of ethical behavior for a self-driving car -- should it swerve to avoid an oncoming SUV, even if it means hitting a cyclist?