Police in Texas investigating a Tesla crash in which two men died will serve search warrants on the company to determine whether the vehicle's Autopilot mode was engaged at the time of the incident. However, Tesla's CEO, Elon Musk, has said the self-driving feature was not in use, citing an internal probe by the company. The two men, both in their 50s, were killed when their 2019 Tesla Model S crashed into a tree and caught fire. According to police reports, the car was travelling at high speed and failed to negotiate a curve in the road. Texas police noted that nobody was in the driver's seat at the time of impact, raising questions about whether the car's Autopilot mode was involved.
Eyes are important, don't get me wrong. So are ears, noses, tongues, fingers, balance calibration organs and everything else that feeds that massive brain of yours. Salinity detectors in narwhals, electrical sensors in freshwater bottom feeders, and echolocation in bats all provide sensory input that humans couldn't adequately process. Every beast has its own senses relevant to its own living conditions. Even your smartphone has cameras, microphones, gyroscopes, an accelerometer, a magnetometer, and interfaces for phone/GPS/Bluetooth/WiFi, and some have a barometer, proximity sensors, and ambient light sensors. Biometric sensing equipment in today's phones can include optical, capacitive or ultrasonic fingerprint readers and an infrared map sensor for faces.
As cases of violence against women and girls have surged in South Asia in recent years, authorities have introduced harsher penalties and expanded surveillance networks, including facial recognition systems, to prevent such crimes. Police in the north Indian city of Lucknow earlier this year said they would install cameras with emotion recognition technology to spot women being harassed, while in Pakistan, police have launched a mobile safety app after a gang rape. But the use of these technologies, with no evidence that they help reduce crime and no data protection laws in place, has raised alarm among privacy experts and women's rights activists, who say the increased surveillance can hurt women even more. "The police does not even know if this technology works," said Roop Rekha Verma, a women's rights activist in Lucknow in Uttar Pradesh state, which had the highest number of reported crimes against women in India in 2019. "Our experience with the police does not give us the confidence that they will use the technology in an effective and empathetic manner. If it is not deployed properly, it can lead to even more harassment, including from the police," she said.
In 2012, in Santa Cruz in the United States, a company called PredPol Inc developed software that promised to predict future criminal activity by analysing past criminal records and identifying patterns. This simple idea of "predictively policing" an unsuspecting population aimed to change the face of law and order in the US. Police departments in major US cities began to use such predictive technology in their efforts to curb crime. In India too, such artificial intelligence tools are increasingly being put to use. For instance, during his annual press briefing in February, the Delhi police commissioner said that 231 of the 1,818 people arrested for their alleged role in the 2020 Delhi riots had been identified using technological tools.
The year is 2029, and you wake up one morning living in a community called Hope, a dystopian dictatorship. "Everyone here wears the same outfit, lives the same repetitive routine, and is happy … For many, Hope is their entire universe. They are uninterested in the outside world. However, you are different--you have the ability to choose." This is how you are introduced to the game Name of the Will on Kickstarter.
As financial institutions push out more digital products focused on speed and convenience, they create additional points of vulnerability that fraudsters can exploit online. As a result, financial institutions are also expected to stay agile and deploy the latest technologies to protect their customers. In fact, the Movement Control Order (MCO) period last year presented a case study of what could happen as more financial transactions move online. Globally, a record-high number of scam and phishing sites were detected in 2020, according to Atlas VPN. "Propelled by the pandemic, there has been a significant shift towards digital transactions and real-time payments. This new normal has brought [not only] unprecedented efficiency and convenience but also an increase in payment-related fraud," says Abrar A Anwar, managing director and CEO of Standard Chartered Malaysia.
"Framing" involves the positive or negative presentation of an argument or issue depending on the audience and goal of the speaker (Entman 1983). Differences in lexical framing, the focus of our work, can have large effects on people's opinions and beliefs. To make progress towards reframing arguments for positive effects, we create a dataset and method for this task. We use a lexical resource for "connotations" to create a parallel corpus and propose a method for argument reframing that combines controllable text generation (positive connotation) with a post-decoding entailment component (same denotation). Our results show that our method is effective compared to strong baselines along the dimensions of fluency, meaning, and trustworthiness/reduction of fear.
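The two-stage idea above — generate a rewrite with a more positive connotation, then keep it only if the denotation is preserved — can be sketched in miniature. This is not the paper's actual system (which uses controllable neural generation and a trained entailment model); the connotation lexicon and the entailment stub below are purely illustrative assumptions:

```python
# Toy sketch of lexical argument reframing:
# 1) swap negatively connoted words for neutral/positive counterparts
#    using a small hand-made connotation lexicon (illustrative only);
# 2) keep the rewrite only if a (stub) entailment check says the
#    denotation is preserved, otherwise fall back to the original.

CONNOTATION_LEXICON = {
    # negative term -> neutral/positive counterpart (same denotation)
    "regime": "government",
    "scheme": "plan",
    "invade": "enter",
}

def reframe(sentence: str) -> str:
    """Replace negatively connoted words with their counterparts."""
    return " ".join(
        CONNOTATION_LEXICON.get(w.lower(), w) for w in sentence.split()
    )

def entails(original: str, rewrite: str) -> bool:
    """Stub entailment filter: accept only word-for-word lexicon
    substitutions (a real system would use an NLI model here)."""
    o, r = original.split(), rewrite.split()
    if len(o) != len(r):
        return False
    return all(
        a == b or CONNOTATION_LEXICON.get(a.lower()) == b
        for a, b in zip(o, r)
    )

def reframe_with_filter(sentence: str) -> str:
    """Generate a positively connoted candidate, then filter."""
    candidate = reframe(sentence)
    return candidate if entails(sentence, candidate) else sentence
```

For example, `reframe_with_filter("the regime announced a new scheme")` yields "the government announced a new plan". The post-decoding filter is the key design choice: generation alone can drift in meaning, so rewrites that fail the entailment check are discarded.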
Artificial Intelligence (AI) is a beautiful piece of technology made to seamlessly augment our everyday experience. It is widely utilized in everything from marketing to traffic light moderation in cities like Pittsburgh. However, a sword has two edges, and AI is no different. There are a fair number of upsides as well as downsides that follow such technological advancements. One way or another, the technology is moving quickly while education about the risks, and about the safeguards that are in place, is falling behind for the vast majority of the population. The whole situation is as much of a blessing for humankind as it is a curse.
Daniel Zhang, Saurabh Mishra, Erik Brynjolfsson, John Etchemendy, Deep Ganguli, Barbara Grosz, Terah Lyons, James Manyika, Juan Carlos Niebles, Michael Sellitto, Yoav Shoham, Jack Clark, and Raymond Perrault
Welcome to the fourth edition of the AI Index Report. This year we significantly expanded the amount of data available in the report, worked with a broader set of external organizations to calibrate our data, and deepened our connections with the Stanford Institute for Human-Centered Artificial Intelligence (HAI). The AI Index Report tracks, collates, distills, and visualizes data related to artificial intelligence. Its mission is to provide unbiased, rigorously vetted, and globally sourced data for policymakers, researchers, executives, journalists, and the general public to develop intuitions about the complex field of AI. The report aims to be the most credible and authoritative source for data and insights about AI in the world.
Recently, significant advancements have been made in face recognition technologies using Deep Neural Networks. As a result, companies such as Microsoft, Amazon, and Naver offer highly accurate commercial face recognition web services for diverse applications to meet end-user needs. Naturally, however, such technologies are persistently threatened, as virtually any individual can quickly implement impersonation attacks. In particular, these attacks can be a significant threat to authentication and identification services, which rely heavily on the accuracy and robustness of their underlying face recognition technologies. Despite its gravity, the issue of deepfake abuse against commercial web APIs and their robustness has not yet been thoroughly investigated. This work provides a measurement study on the robustness of black-box commercial face recognition APIs against Deepfake Impersonation (DI) attacks, using celebrity recognition APIs as an example case study. We use five deepfake datasets, two of which we created and plan to release. More specifically, we measure attack performance under two scenarios (targeted and non-targeted) and further analyze the differing system behaviors using fidelity, confidence, and similarity metrics. Accordingly, we demonstrate how vulnerable face recognition technologies from popular companies are to DI attacks, achieving maximum success rates of 78.0% and 99.9% for targeted (i.e., precise match) and non-targeted (i.e., match with any celebrity) attacks, respectively. Moreover, we propose practical defense strategies to mitigate DI attacks, reducing the attack success rates to as low as 0% and 0.02% for targeted and non-targeted attacks, respectively.
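The targeted/non-targeted distinction above reduces to two different success criteria over the API's predictions. The sketch below illustrates that scoring logic with mocked API responses; the `Prediction` shape, field names, and confidence threshold are assumptions for illustration, not the study's exact protocol:

```python
# Illustrative scoring of Deepfake Impersonation (DI) attacks against a
# celebrity-recognition API. API calls are mocked with precomputed
# predictions; names and the 0.5 threshold are assumptions.

from dataclasses import dataclass

@dataclass
class Prediction:
    label: str         # celebrity name returned by the API ("" if none)
    confidence: float  # API confidence score in [0, 1]

def targeted_success(pred: Prediction, target: str, thresh: float = 0.5) -> bool:
    """Targeted attack succeeds if the deepfake is recognized as the
    specific impersonated celebrity with sufficient confidence."""
    return pred.label == target and pred.confidence >= thresh

def untargeted_success(pred: Prediction, thresh: float = 0.5) -> bool:
    """Non-targeted attack succeeds if the deepfake is recognized as
    *any* celebrity with sufficient confidence."""
    return pred.label != "" and pred.confidence >= thresh

def success_rate(flags) -> float:
    """Fraction of attacks that succeeded."""
    flags = list(flags)
    return sum(flags) / len(flags) if flags else 0.0

# Mocked API responses for three deepfakes all targeting "Alice":
preds = [Prediction("Alice", 0.91), Prediction("Bob", 0.75), Prediction("", 0.0)]
targeted = success_rate(targeted_success(p, "Alice") for p in preds)
untargeted = success_rate(untargeted_success(p) for p in preds)
```

Here the non-targeted rate (2/3) exceeds the targeted rate (1/3), mirroring the paper's finding that matching *any* celebrity is far easier than matching a precise identity.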