Google Decision Scientist Splits AI Science From Science Fiction
What's really going on at the moment is a process of human understanding: we are trying to work out what the machine brains we are building are actually capable of. But to understand what AI software engines can do, we first need to understand how they learn.

AI has been called the process of automating the ineffable, i.e. creating technology that can digitize those things we humans find too great or too subtle to express or describe in words. Does accepting this fundamental idea help to explain what contemporary AI really is and what it can do?

Cassie Kozyrkov, chief decision scientist for Google Cloud, explains that traditional software programming relies on a developer's ability to express the instructions for a task explicitly. Machine learning, by contrast, infers those instructions from examples.
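The contrast between explicitly programmed instructions and learned ones can be sketched in a few lines of code. The example below is hypothetical and not from Kozyrkov: a hand-written rule classifies words as "long", while a second function learns an equivalent length threshold from labeled examples instead of having a human write it in.

```python
# Traditional programming: the developer expresses the rule explicitly.
def is_long_word_explicit(word: str) -> bool:
    return len(word) > 5  # the threshold 5 is hand-chosen by a human

# ML-style approach: the threshold is *learned* from labeled examples.
def learn_length_threshold(examples):
    """Pick the length cutoff that best separates the labeled examples."""
    best_threshold, best_correct = 0, -1
    for threshold in range(1, 20):
        correct = sum((len(word) > threshold) == label
                      for word, label in examples)
        if correct > best_correct:
            best_threshold, best_correct = threshold, correct
    return best_threshold

# Labeled training data: (word, is_it_long?)
training_data = [("cat", False), ("dog", False), ("elephant", True),
                 ("hippopotamus", True), ("ox", False), ("crocodile", True)]

threshold = learn_length_threshold(training_data)
print(threshold)  # the machine found its own cutoff from the data
```

In the explicit version a human decided the rule; in the learned version the program only saw examples and outcomes, which is the shift Kozyrkov's distinction points at.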
Feb-8-2019, 02:15:39 GMT