AI is already causing unintended harm. What happens when it falls into the wrong hands?

By David Evan Harris
Earlier this year, Facebook's parent company, Meta, granted a researcher access to incredibly potent artificial intelligence software – and he leaked it to the world. As a former researcher on Meta's civic integrity and responsible AI teams, I am terrified by what could happen next.

Though the leak violated Meta's terms, the company came out a winner: researchers and independent coders are now racing to improve on, or build on the back of, LLaMA (Large Language Model Meta AI – Meta's branded version of a large language model, or LLM, the type of software underlying ChatGPT), with many sharing their work openly with the world.

This could position Meta as owner of the centrepiece of the dominant AI platform, much as Google controls the open-source Android operating system that device manufacturers globally build on and adapt. If Meta were to secure this central position in the AI ecosystem, it would have leverage to shape the direction of AI at a fundamental level, controlling both the experiences of individual users and the limits on what other companies could and couldn't do.
Jun-16-2023, 07:00:38 GMT