The implementation optimizes the acquisition function, and the posterior mean, by sampling a dense grid of points, and uses a gradient-based optimizer to further optimize the single best point. Thus, only acquisition function setup and acquisition function optimization are considered as part of the runtime. For the synthetic test functions, 100 sampled optimal pairs are used for each acquisition function. GP hyperparameters are marginalized over for these tasks, so an equal number of optimal pairs are sampled for each hyperparameter set. The hyperparameters are re-sampled on a fixed schedule throughout the run.
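The two-stage scheme above (dense grid, then gradient refinement of the single best point) can be sketched as follows. The toy acquisition function, grid size, and use of SciPy's L-BFGS-B are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from scipy.optimize import minimize

def optimize_acqf(acqf, bounds, n_grid=4096, seed=0):
    """Two-stage sketch: evaluate a dense sampled grid, then refine only
    the single best grid point with a gradient-based optimizer."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    # Stage 1: score the acquisition function on a dense random grid.
    grid = rng.uniform(lo, hi, size=(n_grid, len(lo)))
    vals = np.array([acqf(x) for x in grid])
    x0 = grid[np.argmax(vals)]
    # Stage 2: gradient-based refinement of the best point found.
    res = minimize(lambda x: -acqf(x), x0,
                   bounds=list(zip(lo, hi)), method="L-BFGS-B")
    return res.x

# Toy acquisition surface with a known maximizer at x = 0.3 (illustrative).
acq = lambda x: -np.sum((x - 0.3) ** 2)
best = optimize_acqf(acq, (np.zeros(2), np.ones(2)))
```

The grid stage supplies a good starting point so the local optimizer only has to polish one candidate rather than escape poor basins.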
A Broader impact
Our work proposes a novel acquisition function for Bayesian optimization. The approach is foundational and does not have direct societal or ethical consequences. Table 2: Hyperparameters for the generated GP sample tasks.
Vanilla Bayesian Optimization Performs Great in High Dimensions
Hvarfner, Carl, Hellsten, Erik Orm, Nardi, Luigi
High-dimensional problems have long been considered the Achilles' heel of Bayesian optimization algorithms. Spurred by the curse of dimensionality, a large collection of algorithms aim to make it more performant in this setting, commonly by imposing various simplifying assumptions on the objective. In this paper, we identify the degeneracies that make vanilla Bayesian optimization poorly suited to high-dimensional tasks, and further show how existing algorithms address these degeneracies through the lens of lowering the model complexity. Moreover, we propose an enhancement to the prior assumptions that are typical to vanilla Bayesian optimization algorithms, which reduces the complexity to manageable levels without imposing structural restrictions on the objective. Our modification - a simple scaling of the Gaussian process lengthscale prior with the dimensionality - reveals that standard Bayesian optimization works drastically better than previously thought in high dimensions, clearly outperforming existing state-of-the-art algorithms on multiple commonly considered real-world high-dimensional tasks.
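A minimal sketch of how a dimensionality-scaled lengthscale prior might look, assuming a log-normal prior whose median grows like sqrt(d); the constants below are illustrative, not the paper's exact choices.

```python
import numpy as np

def lengthscale_prior_params(d, ell0=np.sqrt(2.0), sigma=np.sqrt(3.0)):
    """Parameters of a log-normal lengthscale prior whose location grows
    with dimension d. Scaling the median lengthscale like sqrt(d) keeps
    the model's effective complexity manageable as d grows (the base
    lengthscale ell0 and spread sigma are illustrative assumptions)."""
    mu = np.log(ell0) + 0.5 * np.log(d)  # median lengthscale = ell0 * sqrt(d)
    return mu, sigma

for d in (2, 20, 200):
    mu, _ = lengthscale_prior_params(d)
    print(f"d={d:4d}  median lengthscale = {np.exp(mu):.2f}")
```

The point of the scaling is that longer expected lengthscales in higher dimensions counteract the growth in model complexity, without restricting the objective's structure.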
A General Framework for User-Guided Bayesian Optimization
Hvarfner, Carl, Hutter, Frank, Nardi, Luigi
The optimization of expensive-to-evaluate black-box functions is prevalent in various scientific disciplines. Bayesian optimization is an automatic, general and sample-efficient method to solve these problems with minimal knowledge of the underlying function dynamics. However, the ability of Bayesian optimization to incorporate prior knowledge or beliefs about the function at hand in order to accelerate the optimization is limited, which reduces its appeal for knowledgeable practitioners with tight budgets. To allow domain experts to customize the optimization routine, we propose ColaBO, the first Bayesian-principled framework for incorporating prior beliefs beyond the typical kernel structure, such as the likely location of the optimizer or the optimal value. The generality of ColaBO makes it applicable across different Monte Carlo acquisition functions and types of user beliefs. We empirically demonstrate ColaBO's ability to substantially accelerate optimization when the prior information is accurate, and to retain approximately default performance when it is misleading.
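As a toy illustration of the general idea of biasing acquisition toward a user's belief about the optimizer's location (not ColaBO's actual Monte Carlo formulation), one might add a log-prior term to a base acquisition function; all names below are hypothetical.

```python
import numpy as np

def prior_weighted_acquisition(acqf, log_prior, X):
    """Toy sketch: bias acquisition scores toward regions the user believes
    contain the optimizer. `acqf` and `log_prior` are hypothetical callables;
    ColaBO itself incorporates such beliefs into Monte Carlo acquisition
    functions rather than via this simple additive reweighting."""
    return acqf(X) + log_prior(X)

# User belief: the optimum lies near x = 0.8 (Gaussian belief, sd 0.1).
log_prior = lambda X: -0.5 * ((X - 0.8) / 0.1) ** 2
acqf = lambda X: np.ones_like(X)  # flat base acquisition, for illustration
X = np.linspace(0, 1, 101)
scores = prior_weighted_acquisition(acqf, log_prior, X)
best_x = X[np.argmax(scores)]
```

With an accurate belief the search concentrates near the true optimum; with a flat or weak prior the base acquisition dominates, which matches the paper's description of retaining approximately default performance under misleading priors.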
Amazon Games VP Christoph Hartmann explains how past failures helped fuel 'Lost Ark's' success
Hartmann noted that "Crucible" was well into development when he joined the company in August 2018. The game, which was first announced in 2016, contained a battle royale mode meant to compete with the likes of "PUBG" and "Fortnite," as well as teamfighting modes inspired by elements of "League of Legends" and "Dota 2." Hartmann said that "competition in the genre was fierce" for "Crucible," and the studio applied what it learned from the experience to its work on "New World," another Amazon Games MMO that launched in 2021, and, eventually, "Lost Ark."
Toward AI-enhanced online-characterization and shaping of ultrashort X-ray free-electron laser pulses
Dingel, Kristina, Otto, Thorsten, Marder, Lutz, Funke, Lars, Held, Arne, Savio, Sara, Hans, Andreas, Hartmann, Gregor, Meier, David, Viefhaus, Jens, Sick, Bernhard, Ehresmann, Arno, Ilchen, Markus, Helml, Wolfram
X-ray free-electron lasers (XFELs) as the world's most brilliant light sources provide ultrashort X-ray pulses with durations typically on the order of femtoseconds. Recently, they have approached and entered the attosecond regime, which holds new promises for single-molecule imaging and studying nonlinear and ultrafast phenomena like localized electron dynamics. The technological evolution of XFELs toward well-controllable light sources for precise metrology of ultrafast processes was, however, hampered by the diagnostic capabilities for characterizing X-ray pulses at the attosecond frontier. In this regard, the spectroscopic technique of photoelectron angular streaking has successfully proven how to non-destructively retrieve the exact time-energy structure of XFEL pulses on a single-shot basis. By using artificial intelligence algorithms, in particular convolutional neural networks, we here show how this technique can be leveraged from its proof-of-principle stage toward routine diagnostics at XFELs, thus enhancing and refining their scientific access in all related disciplines.
Researchers want to revolutionise AI by combining Quantum computers and Neural Networks – Fanatical Futurist by International Keynote Speaker Matthew Griffin
A new research project led by researchers at Heriot-Watt University in the UK aims to harness the power of quantum computers, computers that can operate over 100 million times faster than today's computers, to build a new type of "Quantum Neural Network" that the researchers say could usher in the next generation of Artificial Intelligence (AI), and the first generation of Quantum Artificial Intelligence (QAI) – a type of AI that could have mind-blowing implications for almost every industry on Earth. "My colleagues and I instead hope to build the first dedicated neural network computer, using the latest 'quantum' technology rather than AI software as it's done today," wrote Michael Hartmann, a professor at the university who's leading the research, in a new essay for The Conversation. "By combining these two branches of computing, we hope to produce a breakthrough which leads to AI that operates at unprecedented speed, automatically making very complex decisions in a very short time." A neural network is a type of machine learning algorithm loosely modelled on the human brain that learns from examples in order to deal with new inputs, while quantum computers take advantage of subatomic particles that can exist in more than one state at a time to circumvent the limitations of old-fashioned binary computers, which helps them operate hundreds of millions of times faster than today's traditional silicon-based logic systems. By combining the two, Hartmann believes, his team will be able to jump-start a new era in AI research that could manage extraordinarily complex problems like directing traffic flow for an entire city in real time.
Scientists are building a quantum computer that "acts like a brain"
A new research project aims to harness the power of quantum computers to build a new type of neural network -- work the researchers say could usher in the next generation of artificial intelligence. "My colleagues and I instead hope to build the first dedicated neural network computer, using the latest 'quantum' technology rather than AI software," wrote Michael Hartmann, a professor at Heriot-Watt University who's leading the research, in a new essay for The Conversation. "By combining these two branches of computing, we hope to produce a breakthrough which leads to AI that operates at unprecedented speed, automatically making very complex decisions in a very short time." A neural network is a type of machine learning algorithm loosely modeled on a biological brain, which learns from examples in order to deal with new inputs. Quantum computers take advantage of subatomic particles that can exist in more than one state at a time to circumvent the limitations of old-fashioned binary computers.
Phase Transitions of the Typical Algorithmic Complexity of the Random Satisfiability Problem Studied with Linear Programming
Schawe, Hendrik, Bleim, Roman, Hartmann, Alexander K.
Here we study the NP-complete $K$-SAT problem. Although the worst-case complexity of NP-complete problems is conjectured to be exponential, there exist parametrized random ensembles of problems where solutions can typically be found in polynomial time for suitable ranges of the parameter. In fact, random $K$-SAT, with $\alpha = M/N$ as control parameter, can be solved quickly for small enough values of $\alpha$. It shows a phase transition between a satisfiable phase and an unsatisfiable phase. For branch-and-bound algorithms, which operate in the space of feasible Boolean configurations, the empirically hardest problems are located only close to this phase transition. Here we study $K$-SAT ($K=3,4$) and the related optimization problem MAX-SAT by a linear programming approach, which is widely used for practical problems and allows for polynomial run time. In contrast to branch and bound, it operates outside the space of feasible configurations; on the other hand, finding a solution within polynomial time is not guaranteed. We investigated several variants, such as including artificial objective functions, so-called cutting-plane approaches, and a mapping to the NP-complete vertex-cover problem. We observed several easy-hard transitions, from regions where the problems are typically solvable in polynomial time by the given algorithms to regions where they are not. For the related vertex-cover problem on random graphs, these easy-hard transitions can be identified with structural properties of the graphs, such as percolation transitions. For the present random $K$-SAT problem we have investigated numerous structural properties that also exhibit clear transitions, but they appear not to be correlated with the easy-hard transitions observed here. This renders the behaviour of random $K$-SAT more complex than, e.g., that of the vertex-cover problem.
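A minimal sketch of the LP relaxation underlying such an approach, assuming each clause is relaxed to a linear constraint over fractional assignments in [0, 1]; the tiny instance and function names are illustrative, not the study's setup.

```python
import numpy as np
from scipy.optimize import linprog

def sat_lp_relaxation(n_vars, clauses):
    """LP relaxation of K-SAT (illustrative sketch). Each clause is a list
    of signed literals (+i / -i for variable i, 1-indexed). A clause with
    positive literals P and negative literals N relaxes to
        sum_{i in P} x_i + sum_{i in N} (1 - x_i) >= 1,
    written below in linprog's A_ub @ x <= b_ub form."""
    A, b = [], []
    for clause in clauses:
        row = np.zeros(n_vars)
        neg = 0
        for lit in clause:
            i = abs(lit) - 1
            if lit > 0:
                row[i] -= 1.0   # -x_i on the <= side
            else:
                row[i] += 1.0   # +x_i on the <= side
                neg += 1
        A.append(row)
        b.append(neg - 1)
    # Zero objective: we only test feasibility of the relaxed polytope.
    res = linprog(c=np.zeros(n_vars), A_ub=A, b_ub=b,
                  bounds=[(0, 1)] * n_vars)
    return res.status == 0, res.x  # status 0: a feasible point was found

# Toy instance: (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
feasible, x = sat_lp_relaxation(3, [[1, 2], [-1, 3], [-2, -3]])
```

When the LP returns an integral feasible point it directly yields a satisfying assignment; fractional solutions are where cutting-plane variants, as mentioned above, come into play.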
Want a Robot That Can Really Feel? Give It Whiskers
Among the many reasons humans are bizarre among mammals (the dearth of body hair, the bipedalism, the fact that someone invented the turducken) is a sad shortcoming: You and I don't have sensory whiskers. Cats, dogs, raccoons, sea lions--you name a mammal and it's probably got special hairs sprouting out of its face. After all, whiskers are immensely useful. Rats use them to navigate the darkness, for instance, while a seal's whiskers detect the movements of fishy prey. Whiskers are all the rage in nature, so why not give them to robots?