AI Regulation Is Coming
For most of the past decade, public concerns about digital technology have focused on the potential abuse of personal data. People were uncomfortable with the way companies could track their movements online, often gathering credit card numbers, addresses, and other critical information. They found it creepy to be followed around the web by ads that had clearly been triggered by their idle searches, and they worried about identity theft and fraud. Those concerns led to the passage of measures in the United States and Europe guaranteeing internet users some level of control over their personal data and images--most notably, the European Union's 2018 General Data Protection Regulation (GDPR).

Such regulation has its critics. Some argue that curbing data collection will hamper the economic performance of Europe and the United States relative to less restrictive countries, notably China, whose digital giants have thrived with the help of ready, lightly regulated access to personal information of all sorts. They also point to evidence that tighter regulation has put smaller European companies at a considerable disadvantage to deeper-pocketed U.S. rivals such as Google and Amazon.

But the debate is entering a new phase. As companies increasingly embed artificial intelligence in their products, services, processes, and decision-making, attention is shifting to how data is used by the software--particularly by complex, evolving algorithms that might diagnose a cancer, drive a car, or approve a loan.
Sep-7-2021, 12:54:43 GMT
- AI-Alerts:
- 2021 > 2021-09 > AAAI AI-Alert Ethics for Sep 28, 2021 (1.00)
- Country:
- Asia > China (0.34)
- Europe (1.00)
- North America > United States
- New York (0.04)
- Genre:
- Research Report (0.94)
- Industry:
- Government > Regional Government
- Europe Government (0.34)
- Information Technology > Security & Privacy (1.00)
- Law (1.00)