General Binding Affinity Guidance for Diffusion Models in Structure-Based Drug Design
Jian, Yue, Wu, Curtis, Reidenbach, Danny, Krishnapriyan, Aditi S.
Structure-Based Drug Design (SBDD) focuses on generating valid ligands that strongly and specifically bind to a designated protein pocket. Several methods use machine learning for SBDD to generate these ligands in 3D space, conditioned on the structure of a desired protein pocket. Recently, diffusion models have shown success here by modeling the underlying distributions of atomic positions and types. While these methods are effective in considering the structural details of the protein pocket, they often fail to explicitly consider binding affinity. Binding affinity characterizes how tightly a ligand binds to the protein pocket and is measured by the change in free energy associated with the binding process. It is one of the most crucial metrics for benchmarking the effectiveness of the interaction between a ligand and a protein pocket. To address this, we propose BADGER: Binding Affinity Diffusion Guidance with Enhanced Refinement. BADGER is a general guidance method to steer the diffusion sampling process towards improved protein-ligand binding, allowing us to adjust the distribution of the binding affinity between ligands and proteins. Our method is enabled by using a neural network (NN) to model the energy function, which is commonly approximated by AutoDock Vina (ADV). ADV estimates the affinity based on the interactions between a ligand and its target protein receptor, but its energy function is non-differentiable. By using a NN as a differentiable proxy for the energy function, we can use the gradient of our learned energy function as a guidance method on top of any trained diffusion model. We show that our method improves the binding affinity of generated ligands to their protein receptors by up to 60%, significantly surpassing previous machine learning methods. We also show that our guidance method is flexible and can easily be applied to other diffusion-based SBDD frameworks.
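The guidance idea in the abstract can be sketched in a few lines. This is a minimal, self-contained illustration, not the authors' code: a simple analytic quadratic energy stands in for the paper's learned NN proxy of ADV's energy function, and the names `guided_step` and `guidance_scale` are assumed for illustration. Each reverse-diffusion step is shifted down the gradient of a differentiable energy, so samples drift toward higher predicted affinity.

```python
import numpy as np

def proxy_energy(coords, pocket_center):
    """Toy differentiable energy: lower when ligand atoms sit near the pocket.
    (Stand-in for the learned NN proxy of ADV's energy function.)"""
    return np.sum((coords - pocket_center) ** 2)

def proxy_energy_grad(coords, pocket_center):
    """Analytic gradient of the toy energy w.r.t. atom coordinates."""
    return 2.0 * (coords - pocket_center)

def guided_step(x_prev, pocket_center, guidance_scale=0.1):
    """Shift a denoised sample down the energy gradient (toward higher affinity)."""
    return x_prev - guidance_scale * proxy_energy_grad(x_prev, pocket_center)

rng = np.random.default_rng(0)
pocket = np.zeros(3)
x = rng.normal(size=(8, 3))      # 8 ligand atoms in 3D, as output by a denoising step
x_guided = guided_step(x, pocket)
```

In the actual method the gradient would come from automatic differentiation through the trained energy network, evaluated at every denoising step of a pretrained diffusion model.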
Hummingbirds have two amazing ways to fly through tiny gaps
High-speed cameras have revealed how hummingbirds negotiate their way through tiny gaps while in flight, which happens much too quickly for the human eye to properly see. The findings could inform new techniques for flying robots. Hummingbirds feed on nectar and have to fly through tiny gaps in cluttered foliage as they flit from flower to flower. Marc Badger at the University of California, Berkeley, says it was while watching hummingbirds from his window that he decided to investigate how they achieve this. "When a dominant male would come and chase an intruder away, that intruder would fly through a bush," he says.
Badgers: generating data quality deficits with Python
Siebert, Julien, Seifert, Daniel, Kelbert, Patricia, Kläs, Michael, Trendowicz, Adam
Generating context-specific data quality deficits is necessary to experimentally assess the data quality of data-driven (artificial intelligence (AI) or machine learning (ML)) applications. In this paper we present badgers, an extensible open-source Python library for generating data quality deficits (outliers, imbalanced data, drift, etc.) across different modalities (tabular data, time series, text, etc.). The documentation is available at https://fraunhofer-iese.github.io/badgers/ and the source code at https://github.com/Fraunhofer-IESE/badgers.
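As a concrete illustration of one such deficit generator, here is a hedged sketch of injecting extreme-value outliers into tabular data. It deliberately does not use badgers' actual API; the function and parameter names (`inject_outliers`, `fraction`, `scale`) are invented for illustration, and the sketch only shows the kind of controlled corruption a library like this produces.

```python
import numpy as np

def inject_outliers(X, fraction=0.05, scale=10.0, seed=0):
    """Replace a random fraction of rows with values far outside the
    per-column distribution; return the corrupted copy and the affected rows.
    (Illustrative sketch only, not badgers' real API.)"""
    rng = np.random.default_rng(seed)
    X_out = X.copy()
    n_outliers = max(1, int(fraction * len(X)))
    idx = rng.choice(len(X), size=n_outliers, replace=False)
    # Push the chosen rows ~`scale` standard deviations away from the column mean.
    X_out[idx] = X.mean(axis=0) + scale * X.std(axis=0)
    return X_out, idx

X = np.random.default_rng(1).normal(size=(100, 4))   # clean tabular data
X_corrupt, outlier_idx = inject_outliers(X)
```

A downstream model or data-validation pipeline can then be evaluated on `X_corrupt` versus `X` to measure its sensitivity to this particular deficit.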
What Would It Take to Imagine a Truly Alien Alien?
If you buy something using links in our stories, we may earn a commission. This helps support our journalism. Thomas Nagel's essay "What Is It Like to Be a Bat?" unfortunately does not endeavor to answer its titular question. But Nagel is not even interested in questions of batness. His project is to interrogate "the mind–body problem," the struggle in philosophy or psychology to reduce the mind and consciousness to objective, physical terms. But around the edges of Nagel's project, like tasty crumbs, we can grab at some useful ideas for imagining minds even stranger than bats: the minds of intelligent aliens.
Homemade 'DIY' Weapons Boost Ukraine War Arsenal
In a metal workshop in the industrial city of Kryvyi Rih in southern Ukraine, a homemade anti-drone system waits to be mounted on a military pick-up truck. The contraption -- a heavy machine gun welded to steel tubes -- is one of several do-it-yourself weapons that are proving to be valuable additions to the Ukraine war effort. "We have the skills and the equipment, and we don't lack ideas," said Sergey Bondarenko in the workshop near the southern front. The well-built 39-year-old with a long black beard is a local leader of the territorial defence, a unit of the Ukrainian army. The device will be accompanied by shock absorbers, for more stability and precision, Bondarenko told AFP beside the anti-drone prototype.
SVB study: Industry 4.0 advances, but manufacturing jobs at risk
Silicon Valley Bank, which has helped fund more than 30,000 startups, yesterday released a report on "The Future of Robotics: An Inside View on Innovation in Robotics." It described trends in production, business models, and the adoption of robotics reflecting the increasing maturity of Industry 4.0. The report also addressed concerns about automation displacing jobs and public-policy reactions. Overall, the free Silicon Valley Bank (SVB) report was cautiously optimistic about the prospects for industrial automation. It cited rising U.S. productivity, maturing technologies and suppliers supporting a variety of applications, and a steady climb for robotics deployments, particularly in Asia.
BADGER: Learning to (Learn [Learning Algorithms] through Multi-Agent Communication)
Rosa, Marek, Afanasjeva, Olga, Andersson, Simon, Davidson, Joseph, Guttenberg, Nicholas, Hlubuček, Petr, Poliak, Martin, Vítku, Jaroslav, Feyereisl, Jan
An architecture and a learning procedure in which an agent is made up of many experts; all experts share the same communication policy (the expert policy) but have different internal memory states; and there are two levels of learning, an inner loop (with a communication stage) and an outer loop. In the inner loop, the agent's behavior and adaptation emerge as a result of experts communicating with each other: experts send messages (of any complexity) to each other and update their internal states based on observations/messages and their internal state from the previous time-step. The expert policy is fixed and does not change during the inner loop, and the inner-loop loss need not even be a proper loss function; it can be any kind of structured feedback guiding adaptation during the agent's lifetime. In the outer loop, an expert policy is discovered over generations of agents, ensuring that strategies that find solutions to problems in diverse environments can quickly emerge in the inner loop. The agent's objective is to adapt fast to novel tasks. The architecture exhibits the following novel properties: roles of experts and connectivity among them are assigned dynamically at inference time; a learned communication protocol with context-dependent messages of varied complexity; generalization to different numbers and types of inputs/outputs; and the ability to handle variations in architecture during both training and testing. Initial empirical results show generalization and scalability along the spectrum of learning types.
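The two-loop scheme described above can be illustrated with a toy, self-contained sketch. The scalar "experts", the target-tracking task, and all names here are illustrative assumptions, not the authors' implementation: the inner loop adapts only the experts' internal states under a frozen shared policy, while the outer loop searches over the shared policy itself across generations of agents.

```python
import numpy as np

def inner_loop(policy_w, n_experts=4, steps=20, target=1.0):
    """Experts exchange messages and update private states; the shared
    expert policy `policy_w` is frozen here, as in the paper's inner loop."""
    states = np.zeros(n_experts)
    for _ in range(steps):
        messages = policy_w * states            # shared expert policy
        broadcast = messages.mean()             # communication stage
        # Structured feedback (need not be a proper loss): nudge states
        # so the agent's aggregate output tracks the target.
        states = states + 0.5 * (target - broadcast)
    return abs(target - (policy_w * states).mean())

def outer_loop(candidates, generations=50, seed=0):
    """Random search over the shared policy across generations of agents;
    policies that let the inner loop adapt fast survive."""
    rng = np.random.default_rng(seed)
    best_w, best_err = None, float("inf")
    for _ in range(generations):
        w = rng.choice(candidates)
        err = inner_loop(w)
        if err < best_err:
            best_w, best_err = w, err
    return best_w, best_err

best_w, best_err = outer_loop([-1.0, 0.5, 1.0, 2.0])
```

Even this toy version shows the division of labor: a bad shared policy (e.g. a negative weight) makes the inner loop diverge, while a good one lets the experts adapt to the task within a few communication steps.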
Simion Zoo: A Workbench for Distributed Experimentation with Reinforcement Learning for Continuous Control Tasks
Fernandez-Gauna, Borja, Graña, Manuel, Zimmermann, Roland S.
In recent years, Reinforcement Learning (RL) has become a very popular area of research because of the almost exponential increase in computing power due to the advent of dedicated GPUs, which has empowered researchers to tackle previously unaffordable problems. In particular, the successful applications of Deep Reinforcement Learning (DRL) to produce master videogame players [10, 7] have created great expectations about the potential of DRL, even outside the academic research community. As a result of this popularity boost, the number of RL software packages has grown significantly. Nevertheless, these projects are mostly oriented towards the research community, i.e. they assume sophisticated programming users with powerful computing resources to run the software. Even for sophisticated programmers, these packages impose a steep learning curve that hinders the user experience. This is in stark contrast with the de facto user standards for Supervised Learning (SL) software, which customarily allow users to design and run experiments, and to analyze the results, in an intuitive Graphical User Interface (GUI) that affords a swift learning curve. Users without programming skills who intend to design and run RL experiments quickly on inexpensive, commonly available hardware will obviously appreciate such facilities.