Trust, Regulation, and Human-in-the-Loop AI
Artificial intelligence (AI) systems employ learning algorithms that adapt to their users and environment: learning is either completed before deployment (pre-trained) or allowed to continue in the field. Because an AI system can optimize its own behavior, a unit's behavior can diverge from its factory model after release, often at the perceived expense of safety, reliability, and human controllability. Since the Industrial Revolution, trust has ultimately resided in regulatory systems established by governments and standards bodies. Research into human interaction with autonomous machines reveals a shift in the locus of trust: we must now trust non-deterministic systems such as AI to self-regulate, albeit within boundaries. This radical shift is one of the biggest challenges facing the deployment of AI in the European region.
Mar-20-2022, 19:48:12 GMT