Can Artificial Intelligence be Open Sourced?
At what was billed as a "fireside chat" at Tel Aviv University in June 2023, the very first question posed from the audience to OpenAI CEO Sam Altman and chief scientist Ilya Sutskever was, "Could open source LLMs (large language models) potentially match GPT-4's abilities without additional technical advances, or is there a 'secret sauce' in GPT-4 unknown to the world that sets it apart from the other models?" After nervous laughter and applause, Sutskever said, "You don't want to think about it in binary black-and-white terms where there is a secret sauce that will never be rediscovered," adding that perhaps someday an open source model would reproduce GPT-4, "but when it will be, there will be a much more powerful model in the companies, so there will always be a gap between the open source models and the private models, and this gap may even be increasing."

In the ensuing months, despite Sutskever's caution that binary thinking about future AI development is too simplistic, numerous published opinions have staked out diametrically opposed positions on open sourcing AI, particularly generative AI: that it is an imperative social necessity to counter corporate concentration, or that it opens an existentially threatening Pandora's box of anarchic instructions for making weapons or promulgating disinformation at massive scale. Examples of these seemingly incompatible opinions include "Make No Mistake – AI Is Owned by Big Tech," published in MIT Technology Review, and "Open-Source AI Is Uniquely Dangerous," published in IEEE Spectrum. The real question about the complex and nuanced reality of open source AI, especially in the context of large language models, however, is not whether it will emerge as a powerful force.
Jul-1-2024, 17:07:37 GMT