Elon Musk claims we only have a 10% chance of making AI safe
Elon Musk has put a lot of thought into the harsh realities and wild possibilities of artificial intelligence (AI). These considerations have left him convinced that we need to merge with machines if we're to survive, and he's even created a startup dedicated to developing the brain-computer interface (BCI) technology needed to make that happen. But despite the fact that his very own lab, OpenAI, has created an AI capable of teaching itself, Musk recently said that efforts to make AI safe have only "a five to 10 percent chance of success." Musk shared these less-than-stellar odds with the staff at Neuralink, the aforementioned BCI startup, according to a recent Rolling Stone article. Despite Musk's heavy involvement in the advancement of AI, he has openly acknowledged that the technology brings with it not only the potential for, but the promise of, serious problems.
Ten recommendations to make AI safe for humanity
A year ago, the AI Now Institute released its inaugural report on the near-future social and economic consequences of AI, drawing on input from a diverse expert panel representing a spectrum of disciplines; now it has released a followup with ten clear recommendations for AI implementations in the public and private sectors. The first of these is: "Core public agencies, such as those responsible for criminal justice, healthcare, welfare, and education (e.g., 'high stakes' domains) should no longer use 'black box' AI and algorithmic systems." This includes the unreviewed or unvalidated use of pre-trained models, AI systems licensed from third-party vendors, and algorithmic processes created in-house. The use of such systems by public agencies raises serious due process concerns; at a minimum, such systems should be available for public auditing, testing, and review, and subject to accountability standards. The remaining recommendations deal with operational details, like examining training data for bias and validating the performance of models to ensure they aren't misfiring, and with areas where work needs to be done, like evaluating the impact of AI on hiring and HR, setting data-set quality standards, bringing cross-disciplinary expertise to bias evaluation, and actively including women, minorities, and other marginalized populations in systems design and evaluation.