Large pre-trained language models have repeatedly shown their ability to produce fluent text. Yet even when starting from a prompt, generation can continue in many plausible directions. Current decoding methods that aim to control generation, e.g., to ensure specific words are included, either require additional models or fine-tuning, or work poorly when the task at hand is semantically unconstrained, e.g., story generation. In this work, we present a plug-and-play decoding method for controlled language generation that is so simple and intuitive it can be described in a single sentence: given a topic or keyword, we shift the probability distribution over the vocabulary towards semantically similar words. We show how annealing this distribution can impose hard constraints on language generation, something no other plug-and-play method can currently do with SOTA language generators. Despite the simplicity of this approach, it works remarkably well in practice: decoding from GPT-2 leads to diverse and fluent sentences while guaranteeing the appearance of given guide words. We perform two user studies, revealing that (1) our method outperforms competing methods in human evaluations, and (2) forcing the guide words to appear in the generated text has no impact on its fluency.
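The one-sentence method above can be sketched in a few lines. The embedding matrix, logits, and annealing schedule below are illustrative stand-ins, not the paper's actual GPT-2 setup:

```python
import numpy as np

rng = np.random.default_rng(0)
vocab_size, dim = 100, 16
embeddings = rng.normal(size=(vocab_size, dim))      # toy token embeddings
embeddings /= np.linalg.norm(embeddings, axis=1, keepdims=True)

def shifted_logits(logits, guide_id, strength):
    """Shift next-token logits toward words similar to the guide word."""
    sim = embeddings @ embeddings[guide_id]          # cosine similarities
    return logits + strength * sim

logits = rng.normal(size=vocab_size)                 # stand-in model logits
# Annealing: raising `strength` over the course of generation lets the
# guide word eventually dominate the distribution, which is how a hard
# inclusion constraint can be enforced with no extra model or fine-tuning.
for strength in (0.0, 1.0, 100.0):
    probs = np.exp(shifted_logits(logits, guide_id=7, strength=strength))
    probs /= probs.sum()
```

At `strength = 0` this reduces to ordinary decoding; as it grows, probability mass concentrates on the guide word and its semantic neighbors.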
A multidisciplinary team from the Idaho and Argonne National Laboratories, Kairos Power, and Curtiss-Wright, with support from academics, has developed digital twin nuclear reactors. Using a US$5.2 million grant from the Department of Energy's Advanced Research Projects Agency-Energy, the scientists and engineers applied a physics-based machine learning process to construct, and later maintain, the digital twin reactors. Because the machine learning algorithm is grounded in actual physics, the artificial intelligence model generates predictions that are more robust and reliable than those of more abstract models. This approach provides two layers of problem-solving simultaneously. First, a machine learning-driven predictive maintenance system actively avoids unexpected outages while optimizing maintenance, predicting mechanical failure before typical mechanical stress indicators would reveal it.
Vibraimage is a digital system that quantifies a subject's mental and emotional state by analysing video footage of the movements of their head. Vibraimage is used by police, nuclear power station operators, airport security and psychiatrists in Russia, China, Japan and South Korea, and has been deployed at an Olympic Games, FIFA World Cup, and G7 Summit. Yet there is no reliable evidence that the technology is actually effective; indeed, many claims made about its effects seem unprovable. What exactly does vibraimage measure, and how has it acquired the power to penetrate the highest profile and most sensitive security infrastructure across Russia and Asia? I first trace the development of the emotion recognition industry, before examining attempts by vibraimage's developers and affiliates to scientifically legitimate the technology, concluding that the disciplining power and corporate value of vibraimage are generated through its very opacity, in contrast to increasing demands across the social sciences for transparency. I propose the term 'suspect AI' to describe the growing number of systems like vibraimage that algorithmically classify suspects/non-suspects, yet are themselves deeply suspect. Popularising this term may help resist such technologies' reductivist approaches to 'reading' -- and exerting authority over -- emotion, intentionality and agency.
A research collaboration between LBNL, PNNL, Brown University, and NVIDIA has achieved exaflop (half-precision) performance on the Summit supercomputer with a deep learning application used to model subsurface flow in the study of nuclear waste remediation. Their achievement, which will be presented during the "Deep Learning on Supercomputers" workshop at SC19, demonstrates the promise of physics-informed generative adversarial networks (GANs) for analyzing complex, large-scale science problems. "In science we know the laws of physics and observation principles – mass, momentum, energy, etc.," said George Karniadakis, professor of applied mathematics at Brown and co-author on the SC19 workshop paper. "The concept of physics-informed GANs is to encode prior information from the physics into the neural network. This allows you to go well beyond the training domain, which is very important in applications where the conditions can change." GANs have been applied to model human face ...
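The core idea of physics-informed training, penalizing candidate outputs that violate a known physical law alongside an ordinary data-fitting loss, can be sketched as follows. The toy conservation law and all values here are illustrative assumptions, not the SC19 subsurface-flow setup:

```python
import numpy as np

def physics_residual(field, conserved_total=1.0):
    """Mean squared violation of a toy conservation law:
    values along each row must sum to a fixed total."""
    return float(np.mean((field.sum(axis=1) - conserved_total) ** 2))

def total_loss(field, target, lam=10.0):
    data_loss = float(np.mean((field - target) ** 2))  # ordinary fit term
    return data_loss + lam * physics_residual(field)   # physics penalty

rng = np.random.default_rng(0)
target = rng.random((4, 3))
target /= target.sum(axis=1, keepdims=True)            # target obeys the law

candidate = rng.random((4, 3))                         # generic candidate output
# Projecting the candidate onto the law zeroes its physics penalty:
projected = candidate / candidate.sum(axis=1, keepdims=True)
```

In a physics-informed GAN the penalty term is added to the generator's objective, steering it toward outputs consistent with the encoded physics even outside the training domain.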
Rolnick, David, Donti, Priya L., Kaack, Lynn H., Kochanski, Kelly, Lacoste, Alexandre, Sankaran, Kris, Ross, Andrew Slavin, Milojevic-Dupont, Nikola, Jaques, Natasha, Waldman-Brown, Anna, Luccioni, Alexandra, Maharaj, Tegan, Sherwin, Evan D., Mukkavilli, S. Karthik, Kording, Konrad P., Gomes, Carla, Ng, Andrew Y., Hassabis, Demis, Platt, John C., Creutzig, Felix, Chayes, Jennifer, Bengio, Yoshua
Climate change is one of the greatest challenges facing humanity, and we, as machine learning experts, may wonder how we can help. Here we describe how machine learning can be a powerful tool in reducing greenhouse gas emissions and helping society adapt to a changing climate. From smart grids to disaster management, we identify high impact problems where existing gaps can be filled by machine learning, in collaboration with other fields. Our recommendations encompass exciting research questions as well as promising business opportunities. We call on the machine learning community to join the global effort against climate change.
Most current game-theoretic demand-side management methods focus primarily on the scheduling of home appliances, with the related numerical experiments analyzed under various scenarios to reach the corresponding Nash equilibrium (NE) and optimal results. However, little work addresses academic or commercial buildings, and the methods for optimizing academic buildings are distinct from those for home appliances. In this study, we present a novel methodology for controlling the operation of a heating, ventilation, and air conditioning (HVAC) system. With the development of artificial intelligence and computer technologies, reinforcement learning (RL) can be applied in many realistic scenarios to help solve real-world problems. RL builds a bridge between agents and environments through a Markov decision process or a neural network, and has seldom been used in power systems. The appeal of RL is that once a simulator for a specific environment is built, the algorithm can keep learning from that environment. RL is therefore capable of handling constantly changing inputs such as power demand, the condition of the power system, and outdoor temperature. Compared with existing distribution power system planning mechanisms and the related game-theoretic methodologies, our proposed algorithm can plan and optimize hourly energy usage, and can operate over even shorter time windows if needed.
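The simulate-and-learn loop described above can be illustrated with a minimal tabular Q-learning sketch on a toy thermostat model. The room dynamics, comfort penalty, and energy cost below are made-up assumptions for illustration, not the study's HVAC simulator:

```python
import random

random.seed(0)
TEMPS = list(range(15, 31))        # discretized indoor temperature, in °C
ACTIONS = (0, 1)                   # 0 = HVAC off, 1 = cooling on

def step(temp, action, outdoor=30):
    """Toy dynamics: the room drifts toward the outdoor temperature,
    and running the unit pulls it back down."""
    if temp < outdoor:
        temp += 1
    temp -= 2 * action
    temp = max(TEMPS[0], min(TEMPS[-1], temp))
    comfort = -abs(temp - 22)      # penalty for straying from a 22 °C setpoint
    cost = -0.5 * action           # energy cost of running the unit
    return temp, comfort + cost

Q = {(t, a): 0.0 for t in TEMPS for a in ACTIONS}
alpha, gamma, eps = 0.1, 0.9, 0.1  # learning rate, discount, exploration

for _ in range(3000):              # episodes from random starting temperatures
    temp = random.choice(TEMPS)
    for _ in range(30):
        if random.random() < eps:  # epsilon-greedy exploration
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: Q[(temp, x)])
        nxt, r = step(temp, a)
        best_next = max(Q[(nxt, b)] for b in ACTIONS)
        Q[(temp, a)] += alpha * (r + gamma * best_next - Q[(temp, a)])
        temp = nxt
```

Because the agent only interacts with the simulator, swapping in time-varying inputs (demand, grid state, outdoor temperature) changes the environment, not the learning algorithm.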
Artificial intelligence is no different than the cotton gin, telecommunication satellites or nuclear power plants. It's a technology, one with the potential to vastly improve the lives of every human on Earth, transforming the way that we work, learn and interact with the world around us. But like nuclear science, AI technology also carries the threat of being weaponized -- a digital cudgel with which to beat down the working class and enshrine the current capitalist status quo. Just look at how Amazon's automated facial recognition system is being marketed to law enforcement and government agencies, despite its obvious racial biases, or Wisconsin's automated sentencing tool, Compas, which determines a defendant's prison time via a proprietary and secret algorithm. It just so happens to sentence black and brown defendants to longer terms than their white counterparts for similar crimes.
A small robot is roving around a massive U.S. nuclear waste site to gather critical samples of potential air and water contamination after an emergency was declared Tuesday. The machine was deployed after a tunnel that stores rail cars filled with radioactive waste partially collapsed at Hanford Nuclear Reservation in Washington state. The mishap raised fears of a radiation leak at the nation's most contaminated nuclear site, though officials said there was no actual indication of a release of plutonium radiation as of 2:20 p.m. PDT. The air- and soil-sampling robot is now monitoring the scene for any changes, sampling contamination in the air and on the ground.
A large body of research describes the importance of adaptability for systems to be resilient in the face of disruptions. However, adaptive processes can be fallible, either because systems fail to adapt in situations requiring new ways of functioning, or because the adaptations themselves produce undesired consequences. A central question is then: how can systems better manage their capacity to adapt to perturbations, and constitute intelligent adaptive systems? Based on studies conducted in different high-risk domains (healthcare, mission control, military operations, urban firefighting), we have identified three basic patterns of adaptive failures or traps: (1) decompensation – when a system exhausts its capacity to adapt as disturbances and challenges cascade; (2) working at cross-purposes – when sub-systems or roles exhibit behaviors that are locally adaptive but globally maladaptive; (3) getting stuck in outdated behaviors – when a system over-relies on past successes although conditions of operation change. The identification of such basic patterns then suggests ways in which a work organization, as an example of a complex adaptive system, needs to behave in order to see and avoid or recognize and escape the corresponding failures. The paper will present how expert practitioners exhibit such resilient behaviors in high-risk situations, and how adverse events can occur when systems fail to do so. We will also explore how various efforts in research related to complex adaptive systems provide fruitful directions to advance both the necessary theoretical work and the development of concrete solutions for improving systems’ resilience.