Noah Schwartz, Co-Founder & CEO of Quorum – Interview Series


Noah is an AI systems architect. Prior to founding Quorum, Noah spent 12 years in academic research, first at the University of Southern California and most recently at Northwestern as the Assistant Chair of Neurobiology. His work focused on information processing in the brain, and he has translated his research into products in augmented reality, brain-computer interfaces, computer vision, and embedded robotics control systems.

Your interest in AI and robotics started when you were a little boy. How were you first introduced to these technologies?

The SAL Integrated Cognitive Architecture

AAAI Conferences

Over the last two decades, the complementary properties of symbolic and connectionist systems have led to a number of attempts at hybridizing the two approaches to leverage their strengths and alleviate their shortcomings. The fact that those attempts have generally fallen short of their goals largely reflects the difficulties of integrating computational paradigms of a very different nature without sacrificing their key properties in the process. In this paper, we propose that biological plausibility can serve as a powerful constraint to guide the integration of hybrid intelligent systems. We introduce a hybrid cognitive architecture called SAL, for "Synthesis of ACT-R and Leabra". ACT-R and Leabra are cognitive architectures in the symbolic and connectionist tradition, respectively. Despite widely different origins and levels of abstraction, they have evolved considerable commonalities in response to a joint set of constraints, including behavioral, physiological, and brain imaging data. We introduce the ACT-R and Leabra cognitive architectures and their similarities in structure and concepts, then describe one possible instantiation of the SAL architecture based on a modular composition of its constituent architectures. We illustrate the benefits of the integration by describing an application of the architecture to autonomous navigation in a virtual environment, and discuss future research directions.
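The modular composition the abstract describes can be caricatured in a few lines: a connectionist module turns raw input into a symbolic percept, and a symbolic production system selects an action from that percept. This is only an illustrative sketch of the hybrid pattern, not ACT-R or Leabra code; all names and weights below are hypothetical.

```python
# Hypothetical sketch of a symbolic/connectionist hybrid, in the spirit of SAL.

def connectionist_percept(pixels, weights, bias):
    """A one-layer 'network' mapping raw input to a symbolic percept chunk."""
    activation = sum(p * w for p, w in zip(pixels, weights)) + bias
    return "obstacle" if activation > 0 else "clear"

# Symbolic layer: production rules fire on the percept chunk.
PRODUCTIONS = {
    "obstacle": "turn-left",
    "clear": "move-forward",
}

def hybrid_step(pixels, weights=(-1.0, 2.0, 1.0), bias=-0.5):
    percept = connectionist_percept(pixels, weights, bias)
    return PRODUCTIONS[percept]

print(hybrid_step((0.0, 1.0, 0.0)))  # activation 1.5 > 0  -> "turn-left"
print(hybrid_step((1.0, 0.0, 0.0)))  # activation -1.5 <= 0 -> "move-forward"
```

The division of labor mirrors the navigation application: perception is handled sub-symbolically, while action selection remains inspectable rule-based knowledge.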

Unity: A General Platform for Intelligent Agents

Machine Learning

Recent advances in Deep Reinforcement Learning and Robotics have been driven by the presence of increasingly realistic and complex simulation environments. Many of the existing platforms, however, provide either unrealistic visuals, inaccurate physics, low task complexity, or a limited capacity for interaction among artificial agents. Furthermore, many platforms lack the ability to flexibly configure the simulation, hence turning the simulation environment into a black-box from the perspective of the learning system. Here we describe a new open source toolkit for creating and interacting with simulation environments using the Unity platform: Unity ML-Agents Toolkit. By taking advantage of Unity as a simulation platform, the toolkit enables the development of learning environments which are rich in sensory and physical complexity, provide compelling cognitive challenges, and support dynamic multi-agent interaction. We detail the platform design, communication protocol, set of example environments, and variety of training scenarios made possible via the toolkit.
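The interaction pattern the toolkit enables — reset an environment, read observations, send actions, collect rewards — can be sketched without a Unity build. The stub environment below stands in for a compiled Unity scene driven over the toolkit's communication protocol; the class and function names are hypothetical, not the toolkit's actual API.

```python
import random

class StubEnv:
    """Stand-in for a simulation environment: emits observation vectors,
    accepts actions, and reports rewards. In the real toolkit this role is
    played by a Unity scene communicating with the Python learning code."""
    def __init__(self, episode_len=5):
        self.episode_len = episode_len
        self.t = 0

    def reset(self):
        self.t = 0
        return [random.random() for _ in range(3)]  # observation vector

    def step(self, action):
        self.t += 1
        obs = [random.random() for _ in range(3)]
        reward = 1.0 if action == 1 else 0.0      # reward action 1
        done = self.t >= self.episode_len
        return obs, reward, done

def run_episode(env, policy):
    """The canonical agent-environment loop."""
    obs = env.reset()
    total, done = 0.0, False
    while not done:
        obs, reward, done = env.step(policy(obs))
        total += reward
    return total

# A trivial fixed policy: always choose action 1.
print(run_episode(StubEnv(), policy=lambda obs: 1))  # 5 steps x 1.0 -> 5.0
```

Swapping the stub for a real environment changes only the environment object, not the loop — which is the flexibility the abstract argues for.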

How Google's AI breakthroughs are putting us on a path to narrow AI - TechRepublic


While IBM's Deep Blue computer mastered chess in the mid-1990s, and in more recent years a system built by Google's DeepMind lab has beaten humans at classic 70s arcade games, Go was a different matter. Go has roughly 200 possible moves per turn, compared with about 20 per turn in chess. Over the course of a game of Go there are so many possible moves that searching through each of them to identify the best play is too costly from a computational point of view. Now a system developed by Google DeepMind has beaten European Go champion and elite player Fan Hui. Rather than being explicitly programmed with how to play the game, the AlphaGo system learned to do so using two deep neural networks and an advanced tree search.
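The branching-factor point can be made concrete with a little arithmetic: a naive search that examines every line of play to depth d visits roughly b^d positions, where b is the average number of legal moves per turn.

```python
# Approximate game-tree size after d plies for average branching factor b.
def tree_size(b, d):
    return b ** d

for depth in (2, 4, 6):
    chess = tree_size(20, depth)   # ~20 moves per turn in chess
    go = tree_size(200, depth)     # ~200 moves per turn in Go
    print(f"depth {depth}: chess ~{chess:,} nodes, Go ~{go:,} nodes "
          f"({go // chess:,}x larger)")
```

Even at depth 6 the Go tree is a million times larger than the chess tree, which is why AlphaGo prunes with learned neural networks rather than searching exhaustively.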

Neuromorphic Chipsets - Industry Adoption Analysis


Von Neumann Architecture vs. Neuromorphic Architecture

Neuromorphic architectures address challenges prevalent in the traditional von Neumann architecture, such as high power consumption, low speed, and other efficiency-related bottlenecks.

Architecture bottleneck: The von Neumann design is limited by the bus connecting the CPU and memory. Neuromorphic architectures integrate processing and storage, eliminating this bottleneck.

Encoding scheme and signals: Where the von Neumann architecture uses binary encoding with sudden highs and lows, neuromorphic chips offer a continuous analog transition in the form of spiking signals.

Devices and components: Von Neumann systems are built from CPUs, memory, logic gates, and similar components; neuromorphic systems use artificial neurons and synapses, which are more complex than logic gates.

Neuromorphic Chipsets vs. GPUs

Basic operation: Neuromorphic chips are based on emulating the biological behavior of neurons on a chip; GPUs use parallel processing to perform mathematical operations.

Parallelism: Neuromorphic chips have inherent parallelism enabled by neurons and synapses; GPUs require architectures developed for parallel processing to handle multiple tasks simultaneously.

Data processing: High for both.

Power: Neuromorphic chips are low power; GPUs are power-intensive.

Accuracy: Low for neuromorphic chips; high for GPUs.

Industry adoption: Neuromorphic chips are still in the experimental stage; GPUs are more accessible.

Software: New tools and methodologies need to be developed for programming neuromorphic hardware; GPUs are easier to program than neuromorphic silicon.

Memory: Neuromorphic chips integrate memory and neural processing; GPUs use external memory.

Limitations: Neuromorphic chips are not suitable for precise calculations, pose programming-related challenges, and are difficult to build due to the complexity of their interconnections; GPUs are thread-limited and suboptimal for massively parallel structures.

Neuromorphic chipsets are at an early stage of development and would take approximately 20 years to reach the same level of maturity as GPUs.
The asynchronous operation of neuromorphic chips makes them more efficient than other processing units.
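The spiking, event-driven encoding described above can be sketched with a leaky integrate-and-fire neuron, the standard textbook model behind many neuromorphic designs. The parameters here are arbitrary illustration values, not those of any particular chip.

```python
def lif_spikes(inputs, leak=0.9, threshold=1.0):
    """Leaky integrate-and-fire neuron: the membrane potential decays each
    step, accumulates input current, and emits a spike (then resets) when
    it crosses the threshold -- an event-driven, analog-style encoding,
    unlike a clocked binary signal."""
    v = 0.0
    spikes = []
    for current in inputs:
        v = v * leak + current   # leak, then integrate the input current
        if v >= threshold:
            spikes.append(1)     # fire
            v = 0.0              # reset after the spike
        else:
            spikes.append(0)     # stay silent; no energy spent on output
    return spikes

print(lif_spikes([0.5, 0.5, 0.5, 0.0, 0.9]))  # -> [0, 0, 1, 0, 0]
```

Because the neuron only produces output when an event (a spike) occurs, computation is sparse in time — the intuition behind the efficiency claim for asynchronous neuromorphic hardware.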