Even if artificial general intelligence (AGI) could be achieved, a problem looms: the more complex a system is, the more can go wrong. If a computer could really match human thinking, a great deal could go wrong. In "When AI goes wrong" (podcast 160), Walter Bradley Center director Robert J. Marks is joined once again by members of his research group, Justin Bui and Samuel Haug, a PhD student in computer and electrical engineering. The topic: what happens if AI starts behaving in bizarre and unpredictable ways? A partial transcript, Show Notes, and Additional Resources follow.

Robert J. Marks: I want to start out with Paul Harvey's The Rest of the Story. Either Sam or Justin, have you ever heard of Paul Harvey?

Justin Bui: I have not.

Sam Haug: No, I have not.