In the past five years, Machine Learning has come a long way. You might have noticed that Siri, Alexa, and Google Assistant are way better than they used to be, or that automatic translation on websites, while still fairly spotty, is hugely improved from where it was a few years ago. But many still don't quite grasp how far we've come, and how fast. Recently, two images made the rounds that underscore the huge advances machine learning has made -- and show why we're in for a new age of mischief and online fakery. The first was put together by Ian Goodfellow, the director of machine learning at Apple's Special Projects Group and a leader in the field.
'Divorced from reality,' says one critical law professor. Are "virtuous sex robots" the way of the future? University researchers suggest that robots created for human pleasure should be designed so that they can grant or withhold consent, as well as teach sex education. Anco Peeters, a doctoral student at Australia's University of Wollongong, and Pim Haselager, associate professor at The Netherlands' Radboud University, published "Designing Virtuous Sex Robots" in the International Journal of Social Robotics last month. The paper examined four areas: "virtue ethics and social robotics," "Contra instrumentalist accounts," "Consent practice through sex robots" and "Implications of virtuous sex robots." The authors do not focus on child sex robots or sex robots that play into rape fantasies, but rather on "the potential positive aspects of intimate human–robot interactions through the cultivation of virtues."
Financial crimes continue to plague the global economy. As nefarious actors become smarter, the costs of money laundering and associated crimes have reached the trillions. At the same time, the disparities between those who can access financial services – and those who cannot – threaten to widen. To help close these gaps, IBM Research is building new AI and data encryption tools to help keep data safe, keep cybercrime at bay, and make financial services more accessible. The onus to spot financial crimes falls on banking and financial institutions, which can face enormous fines for compliance failures and for failing to detect, report, and pre-empt criminal activity.
The American criminal justice system has never been great for minorities. But in 2011, it got a lot worse. This was the year that the tech industry innovated its way into policing. It began with a group of researchers at the University of California, Los Angeles, who developed a system for predicting in which areas of a city crimes were most likely to occur. Police could then flood these areas with officers in order to prevent offenses from being committed, or so the thinking went.
Please understand the truth about Falun Gong. Please understand the truth about Falun Gong and the brutal persecution of Falun Gong in China. Please do not believe the Chinese Communist Party's lies. Falun Gong (Falun Dafa) teaches 'Truthfulness, Compassion, Tolerance', it teaches us to be a GOOD person, and makes us HEALTHY! And it is embraced in 114 nations!
"Less than a minute after finishing the call with Johannes, the fake Johannes rang again. His voice was identical but as soon as I asked who was calling, the line went dead." The criminals have yet to be identified, the company's insurer, Euler Hermes, said. Philipp Amann, head of strategy at the cybercrime centre at Europol, said that similar frauds may already have been committed but gone undetected. Experts have raised concerns in the past year about the rapid acceleration of the technology, but it had been believed that only video footage could be mimicked with such accuracy.
AI has been used to create deep fake images, voices and videos. Researchers believe that it may soon be impossible to tell the difference between a real person and a fake. "Criminals used artificial intelligence-based software to impersonate a chief executive's voice and demand a fraudulent transfer of €220,000 ($243,000) in March in what cybercrime experts described as an unusual case of artificial intelligence being used in hacking. The CEO of a U.K.-based energy firm thought he was speaking on the phone with his boss, the chief executive of the firm's German parent company, who asked him to send the funds to a Hungarian supplier. The caller said the request was urgent, directing the executive to pay within an hour, according to the company's insurance firm, Euler Hermes Group SA. Euler Hermes declined to name the victim companies. Law enforcement authorities and AI experts have predicted that criminals would use AI to automate cyberattacks. Whoever was behind this incident appears to have used AI-based software to successfully mimic the German executive's voice by phone. The U.K. CEO recognized his boss' slight German accent and the melody of his voice on the phone, said Rüdiger Kirsch, a fraud expert at Euler Hermes, a subsidiary of Munich-based financial services company Allianz SE. Several officials said the voice-spoofing attack in Europe is the first cybercrime they have heard of in which criminals clearly drew on AI. Euler Hermes, which covered the entire amount of the victim company's claim, hasn't dealt with other claims seeking to recover losses from crimes involving AI, according to Mr. Kirsch."
In the United States, it seems we never have to go more than a few weeks without hearing about another mass shooting. With each new incident comes renewed calls to strengthen gun control laws, expand federal background checks, and get rid of assault rifles. Though the opposing faction promptly dismisses each appeal by citing 2nd Amendment rights, other discussions of practicality often emerge. Specifically, the efficacy of such laws is often called into question. How do we know which laws work and which ones don't?
In our case, the data was provided by Safecity India, a platform launched in 2012 that crowdsources personal stories of sexual harassment and abuse in public spaces. They have collected over 10,000 stories from over 50 cities in India, Kenya, Cameroon, and Nepal. More specifically, they provided us with a .csv file. In addition to the focal tasks of this project, and as part of the NLP channel, we decided to automate category classification based on the sexual harassment case descriptions. Performing this classification task manually is time-consuming, and leaving it entirely in the hands of the victim could introduce ambiguity in distinguishing between the categories.
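The category-classification idea above can be sketched as a standard text-classification pipeline. This is a minimal illustrative sketch, not the project's actual code: the category labels and the toy descriptions below are invented stand-ins for rows that would normally be read from the provided .csv file, whose real column names and categories are not specified here.

```python
# Hypothetical sketch: TF-IDF features + logistic regression to assign a
# harassment category to a free-text incident description.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy stand-ins for the crowdsourced descriptions and their categories
# (assumed labels, not the real Safecity taxonomy).
descriptions = [
    "a man kept staring at me on the bus",
    "someone touched me inappropriately in the crowd",
    "a stranger made lewd comments as I walked past",
    "he stared at me for the whole train ride",
    "I was groped while boarding the metro",
    "catcalled on my way to work",
]
categories = ["staring", "touching", "comments",
              "staring", "touching", "comments"]

# One pipeline object handles both vectorization and classification.
model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(descriptions, categories)

# Predict the category of an unseen description.
print(model.predict(["a man was staring at me at the station"])[0])
```

In practice the real dataset would be loaded from the CSV (e.g. with pandas), and class imbalance and multi-label cases would need handling, but the pipeline shape stays the same.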
In October 2017, we published an article on how legal Artificial Intelligence systems had turned out to be as biased as we are. One of the cases that had made headlines was the COMPAS system, risk assessment software used to predict the likelihood of somebody being a repeat offender. It turned out the system had a double racial bias: one in favour of white defendants, and one against black defendants. To this day, the problems persist. By now, other cases have come to light.