How artificial intelligence is aiming to speed pharmaceutical development

#artificialintelligence

A new drug typically takes more than a decade to develop, at a cost of almost $3 billion. That's because about 90% of experimental medicines fail during the various stages of chemical engineering, or during animal or human trials. So drugmakers and investors are spending billions of dollars to turbocharge the search for new treatments using artificial intelligence. Scientists are looking to discover breakthrough medicines by rapidly identifying new compounds and modeling complex mechanisms in the body, and by automating what used to be manual processes. So far only a trickle of treatments created with the much-hyped technology have reached the testing stage.


Modeling of gate tunable synaptic device for neuromorphic applications

#artificialintelligence

Emerging memories are strong candidates for establishing neuromorphic computing, which challenges the von Neumann architecture. Emerging non-volatile resistive random-access memory (RRAM) has recently attracted considerable attention for its low power consumption and high storage density. To date, research on the tunability of the On/Off ratio and the switching window of RRAM devices remains scarce. In this work, the underlying mechanisms of gate-tunable RRAMs are investigated. The principle of such a device consists of controlling the filament evolution in the resistive layer using graphene and an electric field. A physics-based stochastic simulation was employed to reveal the mechanisms that link the filament size and growth speed to the back-gate bias. The simulations demonstrate the influence of a negative gate voltage on the device current, which in turn leads to better characteristics for neuromorphic computing applications. Moreover, a high-accuracy (94.7%) neural network for handwritten digit classification has been realized using the 1-transistor 1-memristor (1T1R) crossbar cell structure and our stochastic simulation method, which demonstrates the optimization of the gate-tunable synaptic device.
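The paper's simulator is not reproduced here, but the crossbar idea it builds on can be sketched: each synaptic weight is stored as a device conductance, and a layer's matrix-vector product is read out as column currents. The NumPy sketch below uses illustrative conductance ranges and quantization levels, not values from the paper.

```python
import numpy as np

# Minimal sketch of how a 1T1R crossbar implements a neural-network layer:
# weights are stored as conductances G, and the column currents I = V · G
# realize the layer's matrix-vector product. Conductance range and number
# of programmable levels are illustrative assumptions.
G_MIN, G_MAX = 1e-6, 1e-4      # assumed Off/On conductances (siemens)
LEVELS = 16                    # assumed number of programmable levels

def weights_to_conductances(w):
    """Map signed weights onto a differential pair of quantized conductances."""
    w_norm = w / np.max(np.abs(w))                  # scale weights to [-1, 1]
    levels = np.round((w_norm + 1) / 2 * (LEVELS - 1)) / (LEVELS - 1)
    g_pos = G_MIN + levels * (G_MAX - G_MIN)        # "positive" column devices
    g_neg = G_MIN + (1 - levels) * (G_MAX - G_MIN)  # "negative" column devices
    return g_pos, g_neg

def crossbar_forward(x, g_pos, g_neg, v_read=0.2):
    """Differential column currents give the (scaled) layer pre-activation."""
    v = x * v_read                                   # encode inputs as read voltages
    return v @ g_pos - v @ g_neg

# Toy usage: a 784 -> 10 layer, as for handwritten-digit classification.
rng = np.random.default_rng(0)
w = rng.normal(size=(784, 10))
g_pos, g_neg = weights_to_conductances(w)
x = rng.random(784)                                  # a flattened input image
print(crossbar_forward(x, g_pos, g_neg).shape)       # (10,)
```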


Gandiva: Introspective Cluster Scheduling for Deep Learning

#artificialintelligence

Nowadays, there is a significant and growing trend toward Artificial Intelligence (AI), especially Machine Learning (ML) and Deep Learning (DL). DL applications (e.g., voice and image recognition) can be seen in the services offered by technology leaders like Google. These applications have a remarkable influence on businesses. Hence, DL has become a vital workload in cloud data centers. At the same time, DL is compute-hungry and, as a result, reliant on powerful GPUs.


Accelerator-Level Parallelism

Communications of the ACM

While past information technology (IT) advances have transformed society, future advances hold great additional promise. For example, we have only just begun to reap the changes from artificial intelligence--especially machine learning--with profound advances expected in medicine, science, education, commerce, and government. All too often forgotten, underlying the IT impact are the dramatic improvements in the programmable hardware. Hardware improvements deliver performance that unlocks new capabilities. However, unlike in the 1990s and early 2000s, tomorrow's performance aspirations must be achieved with much less technological advancement (Moore's Law and Dennard scaling).


Comprehensive Guide to Transformers

#artificialintelligence

You have a piece of paper with text on it, and you want to build a model that can translate this text to another language. How do you approach this? The first problem is the variable size of the text: there is no linear-algebra model that can handle vectors of varying dimension. The default way of dealing with such problems is to use the bag-of-words model (1).
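As a concrete illustration of the bag-of-words workaround for variable-length text, here is a minimal sketch; scikit-learn's CountVectorizer is an assumed choice, not one prescribed by the article.

```python
# Bag-of-words sketch: texts of any length are mapped to fixed-size count
# vectors over a shared vocabulary, giving every document the same dimension.
from sklearn.feature_extraction.text import CountVectorizer

docs = [
    "the cat sat on the mat",
    "a much longer sentence about a cat and a dog sitting together",
]
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(docs)            # sparse matrix, shape (2, vocab_size)

print(X.shape)                                # both texts now share one dimension
print(vectorizer.get_feature_names_out()[:5]) # a peek at the learned vocabulary
```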


From DL to Agent Based Modelling

#artificialintelligence

Deep learning has seen a lot of recent success in tackling difficult problems that require extracting useful information from large amounts of data. Such work has shown promising results for learning difficult tasks in image recognition, natural language processing, time-series forecasting, and more. Typically, these networks have millions of parameters that are learned using an optimization algorithm. Optimization informs the parameters how to update so as to capture the features of the input relevant to the task at hand. While these models are often well suited to the tasks on which they are applied, they have not yet shown the ability to bootstrap a priori knowledge for novel tasks. Even the limited approaches that show some transfer of previously learned knowledge do not scale, in terms of resources, the way biological brains do.
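To make the "parameters learned using an optimization algorithm" point concrete, here is a minimal gradient-descent sketch on a toy linear model; the model, data, and learning rate are illustrative assumptions, not taken from the article.

```python
import numpy as np

# Gradient descent nudges the parameters in the direction that reduces the
# loss -- the "update" that captures task-relevant features of the input.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                 # toy inputs
true_w = np.array([1.5, -2.0, 0.5])
y = X @ true_w + 0.1 * rng.normal(size=100)   # toy targets

w = np.zeros(3)                               # parameters to be learned
lr = 0.1                                      # learning rate (assumed)

for _ in range(200):
    grad = 2 * X.T @ (X @ w - y) / len(y)     # gradient of mean squared error
    w -= lr * grad                            # parameter update step

print(w)                                      # close to true_w after training
```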


AI Don't Know Jack? – MetaDevo

#artificialintelligence

Think your AI understands the meanings of words? Or understands anything at all? Guess again. There's a big issue inherent in trying to make artificial minds that understand the way a human does. It's called the Symbol Grounding Problem. TL;DR: how can understanding in an AI be made intrinsic to the system, rather than merely parasitic on the meanings in the minds of its developers and trainers?


The Impact of AI on the Finance Industry

#artificialintelligence

A race towards digitization is bringing a revolution to the financial and FinTech sectors. At the core of this digitization lies the availability of vast amounts of data (Big Data), advancements in affordable computing technologies, and the advent of intelligent technologies such as Machine Learning and Artificial Intelligence. AI has been around for nearly 70 years, and its practicality and intelligence have increased over time. Today, AI has become an integral part of the industrial landscape as well as the lives of ordinary people. Examples can be seen in the voice assistants in smartphones, AI robots in supply-chain logistics, self-driving cars, movie recommendations on Netflix, and more.


Sarus just released DP-XGBoost

#artificialintelligence

XGBoost is one of the most popular gradient-boosted trees libraries and is featured in many winning solutions in Kaggle competitions. It's written in C++ and usable from many languages: Python, R, Java, Julia, and Scala. It can run on major distributed environments (Kubernetes, Apache Spark, or Dask) to handle datasets with billions of examples. XGBoost is often used to train models on sensitive data. Since it comes with no privacy guarantee, one can show that personal information may remain in the model weights.
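To make concrete the kind of training that DP-XGBoost adds privacy guarantees to, here is a minimal sketch of ordinary (non-private) XGBoost training in Python; the dataset and hyperparameters are illustrative, and the differentially private API from the Sarus release is not shown.

```python
# Plain XGBoost training on a public dataset. Without differential privacy,
# the learned trees can memorize aspects of individual training records.
import xgboost as xgb
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

dtrain = xgb.DMatrix(X_train, label=y_train)
dtest = xgb.DMatrix(X_test, label=y_test)

params = {"objective": "binary:logistic", "max_depth": 4, "eta": 0.1}
booster = xgb.train(params, dtrain, num_boost_round=100)

preds = booster.predict(dtest)                 # predicted probabilities in [0, 1]
accuracy = ((preds > 0.5) == y_test).mean()
print(f"test accuracy: {accuracy:.3f}")
```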


Self-Supervised Learning

#artificialintelligence

Machine learning is broadly divided into supervised, unsupervised, semi-supervised, and reinforcement learning problems. Machine learning has enjoyed the majority of its success by tackling supervised learning problems...