The Unavoidable Problem of Self-Improvement in AI: An Interview with Ramana Kumar, Part 1 - Future of Life Institute
The second option, then, is to permit only limited forms of self-improvement that have been deemed sufficiently safe, such as software updates or processor and memory upgrades. Yet, Kumar explains that even vetting these limited forms of self-improvement is exceedingly complicated. In fact, he says that reliably preventing the construction of just one specific kind of unsafe modification would "require such a deep understanding of what self-improvement involves that it will likely be enough to solve the full safe self-improvement problem."
Dec-5-2019, 22:34:09 GMT