AI programs are constructed within a complex framework that includes a computer's hardware and operating system, programming languages, and often general frameworks for representing and reasoning.
Babak Hodjat is the CTO for AI at Cognizant, where he leads a team of developers and researchers bringing advanced AI solutions to businesses. Babak is the former co-founder and CEO of Sentient, responsible for the core technology behind the world's largest distributed artificial intelligence system. Babak was also the founder of the world's first AI-driven hedge fund, Sentient Investment Management. Babak is a serial entrepreneur, having started a number of Silicon Valley companies as main inventor and technologist. Prior to co-founding Sentient, Babak was senior director of engineering at Sybase iAnywhere, where he led mobile solutions engineering.
Today we're joined by friend of the show Timnit Gebru, the founder and executive director of DAIR, the Distributed Artificial Intelligence Research Institute. In our conversation with Timnit, we discuss her journey to create DAIR, their goals, and some of the challenges she's faced along the way. We start in the obvious place: Timnit being "resignated" from Google after writing and publishing a paper detailing the dangers of large language models, the fallout from that paper and her firing, and the eventual founding of DAIR. We discuss the importance of the "distributed" nature of the institute, how they're going about figuring out what is in scope and out of scope for the institute's research charter, and what building an institution means to her. We also explore the importance of independent alternatives to traditional research structures, whether we should be pessimistic about the impact of internal ethics and responsible AI teams in industry given the overwhelming power the companies wield, examples she looks to of what not to do when building out the institute, and much, much more!
Matrix AI Network employed AI optimization to create a secure, high-performance, open-source blockchain. MANAS is a distributed AI service platform built on the MATRIX Mainnet. Its functions include AI model training, AI algorithmic model authentication, algorithmic model transactions, paid access to algorithmic models through an API, and more. We aim to build a distributed AI network where everyone can build, share, and profit from AI services. Matrix AI continues to build in every field where artificial intelligence is needed.
Neural architecture search is the task of automatically finding one or more architectures for a neural network that will yield models with good results (low losses), relatively quickly, for a given dataset. Neural architecture search is currently an emerging area: there is a lot of ongoing research, there are many different approaches to the task, and there is no single best method in general, or even for a specialized kind of problem such as object identification in images. Neural architecture search is an aspect of AutoML, along with feature engineering, transfer learning, and hyperparameter optimization. It is probably the hardest machine learning problem currently under active research; even the evaluation of neural architecture search methods is hard.
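At its simplest, the task described above can be framed as a search loop: sample candidate architectures from a space, score each one, keep the best. The following is a minimal sketch of random-search NAS; the search space, field names, and the proxy scoring function are assumptions for illustration (a real loop would train each candidate and return validation accuracy).

```python
import random

# Illustrative search space for a small feed-forward network
# (these fields and values are assumptions, not from any specific system).
SEARCH_SPACE = {
    "num_layers": [1, 2, 3, 4],
    "width": [16, 32, 64, 128],
    "activation": ["relu", "tanh", "gelu"],
}

def sample_architecture(rng):
    """Sample one candidate architecture from the search space."""
    return {key: rng.choice(values) for key, values in SEARCH_SPACE.items()}

def toy_score(arch):
    """Stand-in for training + validation. In a real NAS loop this would
    train the candidate model and return validation accuracy (or -loss)."""
    return arch["num_layers"] * 0.1 + arch["width"] / 256.0

def random_search(n_trials=20, seed=0):
    """Evaluate n_trials random candidates and return the best one."""
    rng = random.Random(seed)
    best_arch, best_score = None, float("-inf")
    for _ in range(n_trials):
        arch = sample_architecture(rng)
        score = toy_score(arch)
        if score > best_score:
            best_arch, best_score = arch, score
    return best_arch, best_score
```

Random search is only the baseline; the many competing approaches the paragraph mentions (evolutionary search, reinforcement learning, differentiable NAS) differ mainly in how they choose the next candidate, while the evaluate-and-keep-best skeleton stays the same.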
Imagine that while you are enjoying your morning bowl of Cheerios, a spider drops from the ceiling and plops into the milk. Years later, you still can't get near a bowl of cereal without feeling overcome with disgust. Researchers have now directly observed what happens inside a brain learning that kind of emotionally charged response. In a new study published in January in the Proceedings of the National Academy of Sciences, a team at the University of Southern California was able to visualize memories forming in the brains of laboratory fish, imaging them under the microscope as they bloomed in beautiful fluorescent greens. From earlier work, they had expected the brain to encode the memory by slightly tweaking its neural architecture. Instead, the researchers were surprised to find a major overhaul in the connections.
Computers have been taught to use data to establish patterns where possible. And while the delegation of these activities to machines has helped mankind in many ways, bias still exists in technologies such as artificial intelligence. For instance, there are biases in facial recognition systems, according to Alex Hanna (pictured), director of research at The Distributed AI Research Institute. "The fact remains that facial recognition is used and is disproportionally deployed on marginalized populations," she said. "So in the U.S., that means black and brown communities. That's where facial recognition is used disproportionately."
Deep neural networks (NNs) perform well in various tasks (e.g., computer vision), largely thanks to convolutional neural networks (CNNs). However, the difficulty of gathering quality data in industrial settings hinders the practical use of NNs. To cope with this issue, the concept of transfer learning (TL) has emerged, which leverages the fine-tuning of NNs trained on large-scale datasets in data-scarce situations. Therefore, this paper suggests a two-stage architectural fine-tuning method for image classification, inspired by the concept of neural architecture search (NAS). One of the main ideas of our proposed method is mutation with base architectures, which reduces the search cost by using the given architectural information. Moreover, early stopping is also considered, which directly reduces NAS costs. Experimental results verify that our proposed method reduces computational and search costs by up to 28.2% and 22.3%, respectively, compared to existing methods.
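The two cost-saving ideas named in the abstract, mutating a known base architecture rather than searching from scratch, and stopping the search early when candidates stop improving, can be sketched as follows. The field names, choice sets, and proxy score here are assumptions for illustration and are not the paper's method or code.

```python
import random

# Assumed base architecture and mutation choices (illustrative only).
BASE_ARCH = {"depth": 18, "width_mult": 1.0, "dropout": 0.1}
CHOICES = {
    "depth": [18, 34, 50],
    "width_mult": [0.5, 1.0, 2.0],
    "dropout": [0.0, 0.1, 0.3],
}

def mutate(arch, rng):
    """Change exactly one field of a given architecture. Starting from a
    known-good base keeps candidates close to proven designs, shrinking
    the effective search space."""
    child = dict(arch)
    key = rng.choice(sorted(child))
    child[key] = rng.choice(CHOICES[key])
    return child

def proxy_score(arch):
    """Stand-in for fine-tuning + validation accuracy (made up for the sketch)."""
    return arch["depth"] / 50 + arch["width_mult"] - arch["dropout"]

def search(base, patience=5, max_trials=100, seed=0):
    """Hill-climb by mutation; stop early after `patience` trials
    with no improvement, cutting search cost."""
    rng = random.Random(seed)
    best, best_score, stale = base, proxy_score(base), 0
    for _ in range(max_trials):
        candidate = mutate(best, rng)
        score = proxy_score(candidate)
        if score > best_score:
            best, best_score, stale = candidate, score, 0
        else:
            stale += 1
            if stale >= patience:  # early stopping
                break
    return best, best_score
```

The early-stopping budget (`patience`) trades search thoroughness against cost, which is the same trade-off the reported 28.2% and 22.3% savings quantify for the paper's actual method.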
A large number of neural network models of associative memory have been proposed in the literature. These include the classical Hopfield networks (HNs), sparse distributed memories (SDMs), and more recently the modern continuous Hopfield networks (MCHNs), which possess close links with self-attention in machine learning. In this paper, we propose a general framework for understanding the operation of such memory networks as a sequence of three operations: similarity, separation, and projection. We derive all these memory models as instances of our general framework with differing similarity and separation functions. We extend the mathematical framework of Krotov et al. (2020) to express general associative memory models using neural network dynamics with only second-order interactions between neurons, and derive a general energy function that is a Lyapunov function of the dynamics. Finally, using our framework, we empirically investigate the use of different similarity functions for these associative memory models, beyond the dot-product similarity measure, and demonstrate that Euclidean or Manhattan distance similarity metrics perform substantially better in practice on many tasks, enabling more robust retrieval and higher memory capacity than existing models.
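The three-operation view (similarity, separation, projection) can be sketched as a single retrieval step in the MCHN style. The softmax separation function, the function name, and the specific metrics shown are assumptions of this sketch, not the paper's code; note how swapping the similarity metric (dot product vs. Euclidean or Manhattan distance) changes only the first step.

```python
import numpy as np

def retrieve(memories, query, beta=8.0, metric="dot"):
    """One associative-memory retrieval step as similarity -> separation
    -> projection. `memories` has one stored pattern per row."""
    # 1) Similarity: score the query against every stored pattern.
    if metric == "dot":
        sims = memories @ query
    elif metric == "euclidean":
        sims = -np.linalg.norm(memories - query, axis=1)
    elif metric == "manhattan":
        sims = -np.abs(memories - query).sum(axis=1)
    else:
        raise ValueError(f"unknown metric: {metric}")
    # 2) Separation: sharpen the scores so the best match dominates
    #    (softmax here, as in modern continuous Hopfield networks).
    weights = np.exp(beta * (sims - sims.max()))
    weights /= weights.sum()
    # 3) Projection: map the sharpened weights back into pattern space.
    return weights @ memories
```

With a large `beta` the softmax approaches a hard argmax and the output snaps to the single nearest stored pattern; smaller values return a blend, which is where spurious retrievals and capacity limits come from.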
We are interested in mixed human and agent systems in the context of networked computer games. These games require a fully distributed computer system. State changes must be transmitted by network messages subject to possibly significant latency. The system then is composed of agents' mutually inconsistent views of the world state that cannot be reconciled because no single agent's state is naturally more correct than another's. The paper discusses the implications of this inconsistency for distributed AI systems. While our example is computer games, we argue the implications affect a much larger class of human/AI problems.
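The inconsistency the paragraph describes can be made concrete with a toy simulation: two agents receive the same two state updates, but per-link latency reorders delivery, so each agent ends up with a different, equally "correct" view of the world. The class and field names are assumptions for this sketch.

```python
import heapq

class Agent:
    """An agent applies state updates in the order it *receives* them.
    Because latency differs per network link, two agents can apply the
    same set of updates in different orders."""
    def __init__(self, name):
        self.name = name
        self.inbox = []   # priority queue keyed by arrival time
        self.state = []   # sequence of applied updates

    def deliver(self, arrival_time, update):
        heapq.heappush(self.inbox, (arrival_time, update))

    def drain(self):
        """Apply all queued updates in arrival order."""
        while self.inbox:
            _, update = heapq.heappop(self.inbox)
            self.state.append(update)

# Both updates are sent at the same moment; only the latency differs.
a, b = Agent("a"), Agent("b")
a.deliver(arrival_time=5, update="player moved")  # fast link to a
a.deliver(arrival_time=9, update="door opened")
b.deliver(arrival_time=9, update="player moved")  # slow link to b
b.deliver(arrival_time=5, update="door opened")
a.drain()
b.drain()
# a and b now hold the same updates in different orders: mutually
# inconsistent views, with no principled way to call one more correct.
```

This is exactly why the paper argues the views "cannot be reconciled": absent a global clock or an authoritative server, neither ordering has a better claim to being the true world state.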