TikTok Has Started Collecting Your 'Faceprints' and 'Voiceprints.' Here's What It Could Do With Them

TIME - Tech

Recently, TikTok made a change to its U.S. privacy policy, allowing the company to "automatically" collect new types of biometric data, including what it describes as "faceprints" and "voiceprints." TikTok's unclear intent, the permanence of the biometric data and potential future uses for it have caused concern among experts who say users' security and privacy could be at risk. On June 2, TikTok updated the "Information we collect automatically" portion of its privacy policy to include a new section called "Image and Audio Information," giving itself permission to gather certain physical and behavioral characteristics from its users' content. The increasingly popular video sharing app may now collect biometric information such as "faceprints and voiceprints," but the update doesn't define these terms or what the company plans to do with the data. "Generally speaking, these policy changes are very concerning," Douglas Cuthbertson, a partner in Lieff Cabraser's Privacy & Cybersecurity practice group, tells TIME.


Expect an Orwellian future if AI isn't kept in check, Microsoft exec says

#artificialintelligence

Artificial intelligence could lead to an Orwellian future if laws to protect the public aren't enacted soon, according to Microsoft President Brad Smith. Smith made the comments to the BBC news program "Panorama" on May 26, during an episode focused on the potential dangers of artificial intelligence (AI) and the race between the United States and China to develop the technology. The warning comes about a month after the European Union released draft regulations attempting to set limits on how AI can be used. There are few similar efforts in the United States, where legislation has largely focused on limiting regulation and promoting AI for national security purposes. "I'm constantly reminded of George Orwell's lessons in his book '1984,'" Smith said.


AI warning: Life will be like Orwell's 1984 'without curbs on AI'

#artificialintelligence

Life could become like George Orwell's 1984 within three years if laws aren't introduced to protect the public from artificial intelligence, Microsoft president Brad Smith has warned. Smith predicts that the kind of controlled, mass-surveillance society portrayed by Orwell in his 1949 dystopian novel 'could come to pass in 2024' if more isn't done to curb the spread of AI. It is going to be difficult for lawmakers to catch up with rapidly advancing artificial intelligence and surveillance technology, he told BBC Panorama during a special exploring China's increasing use of AI to monitor its citizens. The Microsoft president said: 'If we don't enact the laws that will protect the public in the future, we are going to find the technology racing ahead.' Facial recognition software works by matching real-time images to a previous photograph of a person.
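
The article's one-line description of facial recognition, matching a real-time image against a stored photograph, can be pictured with a minimal sketch. Modern matchers typically compare embedding vectors produced by a face-recognition model; the model itself is omitted here, and the 128-dimensional vectors and 0.6 threshold below are illustrative assumptions, not any particular vendor's pipeline.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face-embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def is_match(live: np.ndarray, enrolled: np.ndarray, threshold: float = 0.6) -> bool:
    """Declare a match when the live capture's embedding is close enough
    to the embedding of the previously stored photograph."""
    return cosine_similarity(live, enrolled) >= threshold

# Illustrative usage: random vectors stand in for a real model's outputs.
rng = np.random.default_rng(seed=42)
stored_photo = rng.normal(size=128)                             # enrolled photograph
live_capture = stored_photo + rng.normal(scale=0.05, size=128)  # noisy live frame
print(is_match(live_capture, stored_photo))                     # True: same "face"
```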


Cybersecurity 101: Protect your privacy from hackers, spies, and the government

#artificialintelligence

"I have nothing to hide" was once the standard response to surveillance programs utilizing cameras, border checks, and casual questioning by law enforcement. Privacy used to be considered a concept generally respected in many countries with a few changes to rules and regulations here and there often made only in the name of the common good. Things have changed, and not for the better. China's Great Firewall, the UK's Snooper's Charter, the US' mass surveillance and bulk data collection -- compliments of the National Security Agency (NSA) and Edward Snowden's whistleblowing -- Russia's insidious election meddling, and countless censorship and communication blackout schemes across the Middle East are all contributing to a global surveillance state in which privacy is a luxury of the few and not a right of the many. As surveillance becomes a common factor of our daily lives, privacy is in danger of no longer being considered an intrinsic right. Everything from our web browsing to mobile devices and the Internet of Things (IoT) products installed in our homes have the potential to erode our privacy and personal security, and you cannot depend on vendors or ever-changing surveillance rules to keep them intact. Having "nothing to hide" doesn't cut it anymore. We must all do whatever we can to safeguard our personal privacy. Taking the steps outlined below can not only give you some sanctuary from spreading surveillance tactics but also help keep you safe from cyberattackers, scam artists, and a new, emerging issue: misinformation. Data is a vague concept and can encompass such a wide range of information that it is worth briefly breaking down different collections before examining how each area is relevant to your privacy and security. A roundup of the best software and apps for Windows and Mac computers, as well as iOS and Android devices, to keep yourself safe from malware and viruses. Known as PII, this can include your name, physical home address, email address, telephone numbers, date of birth, marital status, Social Security numbers (US)/National Insurance numbers (UK), and other information relating to your medical status, family members, employment, and education. All this data, whether lost in different data breaches or stolen piecemeal through phishing campaigns, can provide attackers with enough information to conduct identity theft, take out loans using your name, and potentially compromise online accounts that rely on security questions being answered correctly. In the wrong hands, this information can also prove to be a gold mine for advertisers lacking a moral backbone.


The Impact of Artificial Intelligence on the IC

#artificialintelligence

Ian Fitzgerald is an M.A. student in International Security at George Mason University with research interests in Great Power Competition, Cyber Warfare, Emerging Technologies, Russia and China. ACADEMIC INCUBATOR -- The explosion of data available to today's analysts creates a compelling need to integrate artificial intelligence (AI) into intelligence work. The objective of the Intelligence Community (IC) is to analyze, connect, apply context, infer meaning, and, ultimately, make analytical judgments based on that data. The data explosion offers an incredible source of potential information, but it also creates issues for the IC. Today's intelligence analysts find themselves moving from an information-scarce environment to one with an information surplus.


The Challenges and Opportunities of Human-Centered AI for Trustworthy Robots and Autonomous Systems

arXiv.org Artificial Intelligence

The trustworthiness of Robots and Autonomous Systems (RAS) has gained a prominent position on many research agendas towards fully autonomous systems. This research systematically explores, for the first time, the key facets of human-centered AI (HAI) for trustworthy RAS. In this article, five key properties of trustworthy RAS are first identified. RAS must be (i) safe in uncertain and dynamic surrounding environments; (ii) secure, protecting themselves from cyber-threats; (iii) healthy, with fault tolerance; (iv) trusted and easy to use, to allow effective human-machine interaction (HMI); and (v) compliant with the law and ethical expectations. The challenges of implementing trustworthy autonomous systems are then analytically reviewed with respect to these five properties, and the roles of AI technologies in ensuring the trustworthiness of RAS in terms of safety, security, health, and HMI are explored, while reflecting the requirements of ethics in the design of RAS. While applications of RAS have mainly focused on performance and productivity, the risks posed by advanced AI in RAS have not received sufficient scientific attention. Hence, a new acceptance model of RAS is provided as a framework for human-centered AI requirements and for implementing trustworthy RAS by design. This approach promotes human-level intelligence to augment humans' capacity, while focusing on contributions to humanity.


Understanding and Avoiding AI Failures: A Practical Guide

arXiv.org Artificial Intelligence

With current AI technologies, harm done by AIs is limited to the power that we put directly in their control. As said in [59], "For Narrow AIs, safety failures are at the same level of importance as in general cybersecurity, but for AGI it is fundamentally different." Despite AGI (artificial general intelligence) still being well out of reach, the nature of AI catastrophes has already changed in the past two decades. Automated systems are now not only malfunctioning in isolation; they are interacting with humans and with each other in real time. This shift has made traditional systems analysis more difficult, as AI has more complexity and autonomy than software has had before. In response, we analyze how risks associated with complex control systems have been managed historically, and we examine the patterns in contemporary AI failures, to determine what kinds of risks are created by the operation of any AI system. We present a framework, based on conventional systems analysis, open systems theory, and AI safety principles, for analyzing AI systems before they fail, in order to understand how they change the risk landscape of the systems they are embedded in. Finally, we present suggested measures that should be taken based on an AI system's properties. Several case studies from different domains are given as examples of how to use the framework and interpret its results.
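
The final step of the abstract, deriving suggested measures from an AI system's properties, can be pictured as a simple decision rule. The sketch below is a toy stand-in, not the paper's actual framework: the property names, thresholds, and recommended measures are all invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class AISystem:
    autonomy: float      # 0.0 (passive tool) .. 1.0 (fully autonomous)
    complexity: float    # 0.0 (simple, inspectable) .. 1.0 (opaque, many interactions)
    human_in_loop: bool  # can an operator intervene in real time?

def suggested_measures(system: AISystem) -> list[str]:
    """Toy property-based triage: more autonomy and complexity, and the
    absence of a human in the loop, call for stronger safeguards."""
    measures = ["logging and post-incident review"]
    if system.complexity > 0.5:
        measures.append("staged rollout with shadow-mode testing")
    if system.autonomy > 0.5:
        measures.append("hard operational limits and a kill switch")
    if not system.human_in_loop:
        measures.append("automated anomaly detection with a safe fallback state")
    return measures

# Example: a highly autonomous, complex system with no operator oversight.
print(suggested_measures(AISystem(autonomy=0.8, complexity=0.7, human_in_loop=False)))
```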


Machine Learning, Ethics, and Open Source Licensing (Part I/II)

#artificialintelligence

The unprecedented interest, investment, and deployment of machine learning across many aspects of our lives in the past decade has come at a cost. Although there has been some movement towards moderating machine learning where it has been genuinely harmful, it is becoming increasingly clear that existing approaches suffer from significant shortcomings. Nevertheless, new directions still exist that hold potential for meaningfully addressing the harms of machine learning. In particular, new approaches to licensing the code and models that underlie these systems have the potential to meaningfully change how they affect our world. This is Part I of a two-part essay.


Attack of the drones: the mystery of disappearing swarms in the US midwest

The Guardian

At twilight on New Year's Eve, 2020, Placido Montoya, 35, a plumber from Fort Morgan, Colorado, was driving to work. Ahead of him he noticed blinking lights in the sky. He'd heard rumours of mysterious drones, whispers in his local community, but now he was seeing them with his own eyes. In the early morning gloom, it was hard to make out how big the lights were and how many were hovering above him. But one thing was clear to Montoya: he needed to give chase.