Some tasks, such as fighting spam and content moderation, by their very nature require an online system. Offline systems, by contrast, don't need to run in real time: they can be built to process a batch of inputs efficiently at once and can take advantage of approaches like transductive learning. Some online systems are reactive and can even do their learning in an online fashion (aka online learning), but many online systems are built and deployed with a periodic offline model build that is pushed to production. Systems built with online learning need to be especially robust to adversarial environments, since an attacker can feed them inputs that steer the model as it updates.
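The online-learning setting described above can be illustrated with a minimal sketch. This is pure Python with hypothetical spam-style features, not any particular production system: the model updates its weights one example at a time as labeled events stream in, rather than retraining on a batch.

```python
import math

class OnlineLogisticRegression:
    """Minimal online learner: weights are updated per example via SGD."""

    def __init__(self, n_features, lr=0.1):
        self.w = [0.0] * n_features
        self.b = 0.0
        self.lr = lr

    def predict_proba(self, x):
        z = self.b + sum(wi * xi for wi, xi in zip(self.w, x))
        return 1.0 / (1.0 + math.exp(-z))

    def learn_one(self, x, y):
        """Single SGD step on one (features, label) pair as it arrives."""
        err = self.predict_proba(x) - y  # gradient of log-loss w.r.t. z
        self.b -= self.lr * err
        self.w = [wi - self.lr * err * xi for wi, xi in zip(self.w, x)]

# Hypothetical event stream: [link_count, caps_ratio] -> is_spam label.
stream = [([3.0, 0.9], 1), ([0.0, 0.1], 0), ([4.0, 0.8], 1), ([1.0, 0.2], 0)]
model = OnlineLogisticRegression(n_features=2)
for x, y in stream:
    model.learn_one(x, y)  # model adapts immediately; no offline rebuild
```

The adversarial concern follows directly from this structure: because every incoming example nudges the weights, poisoned examples in the stream shift future predictions, which is why such systems need input validation and monitoring.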
Nvidia announced its DGX-1 Deep Learning System at the 2016 GPU Technology Conference. That might not mean much to the average PC fan, but for context, that is over twelve times the graphics performance of the Nvidia Titan X, its most expensive and most powerful graphics card on the market. The Tesla GP100 is built on TSMC's 16nm FinFET manufacturing process and uses second-generation High Bandwidth Memory (HBM2) for the first time. Nvidia is the first to adopt both features, ahead of Intel or AMD, though Samsung has been shipping chips on its own 14nm FinFET process since late 2015. Rather than use the new manufacturing process to shrink the GPU, Nvidia has packed far more transistors onto the card.
As spatial query systems such as EQS (Unreal Engine 4), TPS (CryEngine), and PQS (Luminous Studio) have matured, auto-generated spatial queries are increasingly relied upon for robust dynamic position selection. We present a series of techniques and extensions to these systems used by Square Enix to produce novel behaviors and improve position selection in our current generation of AAA RPG titles. In addition, we have expanded UE4's Environment Query System to serve as a general-purpose utility system; we show how minor modifications allowed the team to use EQS to coordinate combat, reduce behavior tree complexity with a hybrid behavior-tree/utility-system (BT/US) approach, and increase character AI quality across a range of tasks such as action and target selection. Attendees will learn how to get the most out of modern spatial query systems through a combination of new techniques and best practices that maximize quality and extend these systems to new areas.
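The core pattern shared by these query systems can be sketched generically. This is an illustrative model, not UE4's actual EQS API: generate candidate points, discard those failing hard filters, then score the survivors with a weighted sum of normalized tests and select the best.

```python
import math

def spatial_query(candidates, filters, scorers):
    """Generic EQS-style query: filter candidates, then rank by weighted score.

    filters: predicates; a candidate failing any one is discarded.
    scorers: (weight, fn) pairs; each fn maps a candidate into [0, 1].
    """
    valid = [c for c in candidates if all(f(c) for f in filters)]
    if not valid:
        return None
    return max(valid, key=lambda c: sum(w * fn(c) for w, fn in scorers))

# Hypothetical combat positioning: stay near the player, but keep standoff range.
player = (0.0, 0.0)

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

candidates = [(x, y) for x in range(-5, 6) for y in range(-5, 6)]  # grid generator
best = spatial_query(
    candidates,
    filters=[lambda c: dist(c, player) >= 2.0],               # hard minimum range
    scorers=[(1.0, lambda c: 1.0 / (1.0 + dist(c, player)))], # prefer closer
)
```

Repurposing such a system as a general utility system, as described above, amounts to scoring abstract options (actions, targets) with the same weighted-test machinery instead of spatial points.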
Past progress in deep learning has concentrated mostly on learning from static datasets, largely for perception and other System 1 tasks, which humans perform intuitively and unconsciously. However, in recent years, a shift in research direction and new tools such as soft attention and progress in deep reinforcement learning are opening the door to novel deep architectures and training frameworks for addressing System 2 tasks (which are done consciously), such as reasoning, planning, capturing causality, and obtaining systematic generalization in natural language processing and other applications. Such an expansion of deep learning from System 1 tasks to System 2 tasks is important to achieve the old deep learning goal of discovering high-level abstract representations, because we argue that System 2 requirements will put pressure on representation learning to discover the kind of high-level concepts which humans manipulate with language. We argue that towards this objective, soft attention mechanisms constitute a key ingredient to focus computation on a few concepts at a time (a "conscious thought"), as per the consciousness prior and its associated assumption that many high-level dependencies can be approximately captured by a sparse factor graph. We also discuss how the agent perspective in deep learning can help put more constraints on the learned representations to capture affordances, causal variables, and model transitions in the environment.
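The soft attention mechanism invoked above can be illustrated with a minimal sketch. This is pure Python over hypothetical toy vectors, not the paper's actual architecture: a query scores each candidate concept, a softmax turns the scores into weights summing to one, and the output is the weighted combination of the concepts, so that computation concentrates on a few elements at a time.

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def softmax(scores):
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]  # shift by max for stability
    total = sum(exps)
    return [e / total for e in exps]

def soft_attention(query, keys, values):
    """Softly select among `values`: weights favor keys aligned with the
    query, approximating a sparse focus over a few concepts at a time."""
    weights = softmax([dot(query, k) for k in keys])
    dim = len(values[0])
    out = [sum(w * v[i] for w, v in zip(weights, values)) for i in range(dim)]
    return out, weights

# Toy example: the query aligns with the first concept, so nearly all of the
# attention mass lands there.
keys = [[2.0, 0.0], [0.0, 2.0], [-2.0, 0.0]]
values = [[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0]]
out, weights = soft_attention(query=[3.0, 0.0], keys=keys, values=values)
```

Because the weights are differentiable, where to focus can itself be learned by gradient descent, which is what makes soft attention usable inside deep architectures.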