Can We Enhance AI Safety By Teaching AI To Love Humans And Learning How To Love AI?
Large language models (LLMs) based on transformer architectures have taken the world by storm, with ChatGPT quickly becoming a household name. While the concept of generative AI is not new and can be traced back to Jürgen Schmidhuber's (now at KAUST) work in the 1990s and even earlier, it was Ian Goodfellow's generative adversarial networks (GANs), introduced in 2014, and Google's transformer architecture, published in 2017, that enabled the development and industrialization of multi-purpose AI.

My teams have worked in this area since 2015, in both generative biology and generative chemistry, with AI-generated drugs now in human clinical trials and the most advanced departments of pharmaceutical companies using our software, and we have used LLMs almost since they were first published. OpenAI's GPT has also been available to the public since 2020. Yet the public release and consumerization of ChatGPT took the world by surprise, triggering a new cycle of hyper-investment in and productization of LLMs that is now propagating into the search market.

Although both recurrent neural network (RNN) and transformer-based LLMs, as well as multimodal LLMs, are surprisingly good at language understanding and generation, I believe they are still as far from human-level consciousness as a calculator.
Apr-1-2023, 14:40:42 GMT