Expectation Propagation for t-Exponential Family Using q-Algebra

Neural Information Processing Systems

Exponential family distributions are highly useful in machine learning because calculations with them can be performed efficiently through natural parameters. The exponential family has recently been extended to the t-exponential family, which contains Student-t distributions as family members and thus allows us to handle noisy data well. However, since the t-exponential family is defined through the deformed exponential, efficient learning algorithms such as expectation propagation (EP) cannot be derived in the same way as for the ordinary exponential family. In this paper, we borrow the mathematical tools of q-algebra from statistical physics and show that the pseudo-additivity of distributions allows us to perform calculations with t-exponential family distributions through natural parameters. We then develop an EP algorithm for the t-exponential family, which provides a deterministic approximation to the posterior or predictive distribution with simple moment matching. We finally apply the proposed EP algorithm to the Bayes point machine and Student-t process classification, and demonstrate their performance numerically.
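
For readers unfamiliar with the q-algebra the abstract invokes: the deformed exponential and the q-product come from Tsallis statistics, and pseudo-additivity is the identity exp_q(a) ⊗_q exp_q(b) = exp_q(a + b), which restores the "products become sums" property that natural-parameter calculations rely on. A minimal numerical sketch of that identity (illustrative only, not code from the paper):

```python
import numpy as np

def exp_q(x, q):
    """Deformed (Tsallis) q-exponential; reduces to exp(x) as q -> 1."""
    if q == 1.0:
        return np.exp(x)
    base = 1.0 + (1.0 - q) * x  # cutoff convention: defined where base > 0
    return np.maximum(base, 0.0) ** (1.0 / (1.0 - q))

def log_q(x, q):
    """Deformed q-logarithm, the inverse of exp_q on its support."""
    if q == 1.0:
        return np.log(x)
    return (x ** (1.0 - q) - 1.0) / (1.0 - q)

def q_product(x, y, q):
    """q-product x (*)_q y; satisfies exp_q(a) (*)_q exp_q(b) = exp_q(a + b)."""
    if q == 1.0:
        return x * y
    base = x ** (1.0 - q) + y ** (1.0 - q) - 1.0
    return np.maximum(base, 0.0) ** (1.0 / (1.0 - q))

q, a, b = 1.5, 0.3, 0.4
lhs = q_product(exp_q(a, q), exp_q(b, q), q)
rhs = exp_q(a + b, q)
print(np.isclose(lhs, rhs))  # True: pseudo-additivity holds
```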


You Can Approximate Pi by Dropping Needles on the Floor

WIRED

Who needs a supercomputer when you can calculate pi with a box of sewing needles? Happy Pi Day! March 14 is the date on which otherwise rational people celebrate this irrational number, because 3/14 contains the first three digits of pi. And hey, pi deserves a day. By definition, it's the ratio of a circle's circumference to its diameter, but it shows up in all kinds of places that seem to have nothing to do with circles, from music to quantum mechanics. Pi is an infinitely long decimal number that never repeats.
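
The needle trick in the headline is the classic Buffon's needle experiment: drop a needle of length L on a floor ruled with parallel lines a distance d apart (with L ≤ d), and it crosses a line with probability 2L/(πd), so counting crossings lets you back out pi. A quick simulation of that experiment, sketched here for illustration rather than taken from the article:

```python
import math
import random

def estimate_pi(n_drops, length=1.0, spacing=1.0):
    """Buffon's needle: P(cross) = 2*length / (pi*spacing) for length <= spacing,
    so pi is roughly 2*length*n_drops / (spacing*crossings)."""
    crossings = 0
    for _ in range(n_drops):
        y = random.uniform(0.0, spacing / 2.0)      # center's distance to nearest line
        theta = random.uniform(0.0, math.pi / 2.0)  # acute angle to the lines
        # (wink: we use math.pi to draw the angle; a purist would
        # sample the needle's direction by rejection instead)
        if y <= (length / 2.0) * math.sin(theta):
            crossings += 1
    return 2.0 * length * n_drops / (spacing * crossings)

print(estimate_pi(1_000_000))  # typically prints something near 3.14
```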


Chemistry may not be the 'killer app' for quantum computers after all

New Scientist

Quantum chemistry calculations that could advance drug development or agriculture have recently emerged as a promising "killer application" of quantum computers, but a new analysis suggests this is unlikely to be the case. Progress in building quantum computers has greatly accelerated in recent years, but it remains an open question which uses are most likely to justify the ongoing investment in the technology. One popular contender is solving problems in quantum chemistry, such as calculating the energy levels of molecules relevant to biomedicine or industry. This requires accounting for the behavior of many quantum particles - the electrons in the molecule - simultaneously, so it seems like a good match for computers made from many quantum parts. However, Xavier Waintal at CEA Grenoble in France and his colleagues have now shown that two leading quantum computing algorithms for this task may actually have, at best, limited use.


Appendix for Bayesian Active Causal Discovery with Multi-Fidelity Experiments

Neural Information Processing Systems

We then calculate the constraint part. The algorithm for the Licence method in the single-target intervention scenario is shown in Algorithm 1. The experimental baselines are as follows. AIT [11] is an active learning method that utilizes the f-score to select intervention queries. REAL fidelity means the model always chooses the highest fidelity to conduct experiments.
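
As a purely hypothetical illustration of the score-based query selection described above - the scoring function and candidate set below are placeholder stubs, not the paper's or AIT's actual implementation:

```python
import numpy as np

def select_intervention(candidates, score_fn):
    """Pick the single-target intervention with the highest acquisition score.
    `score_fn` stands in for an acquisition criterion such as AIT's f-score;
    both it and `candidates` are illustrative placeholders."""
    scores = np.array([score_fn(c) for c in candidates])
    return candidates[int(np.argmax(scores))]

# Hypothetical usage: candidate targets are node indices; the score is a stub.
targets = list(range(10))
best = select_intervention(targets, score_fn=lambda node: np.random.rand())
print(best)
```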


A Supplementary Analysis

Neural Information Processing Systems

To evaluate TSLD's efficiency, we detail training speeds and GPU memory consumption for various models. Our analysis of confidence disparity in token predictions, detailed in Section 4.2, extends beyond a single model; in fact, the observed trend is consistently present across various GLM models. Quantization errors are visualized with heatmap plots (Fig. A2, top; from left to right: OPT-6.7B, LLaMA-7B, and LLaMA-2-7B). For the OPT-6.7B model, quantization error is measured at the 5th and 15th layers; for the LLaMA-7B model, quantization errors are depicted for input sequence lengths of 128 and 512. However, as we delve deeper into the layers of OPT-6.7B or feed longer input sequences to LLaMA-7B, this phenomenon becomes less pronounced.
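
As a rough sketch of how such per-layer weight quantization error might be measured - assuming a generic symmetric round-to-nearest quantizer and toy layer shapes, which may differ from the paper's actual scheme:

```python
import numpy as np

def quantize_rtn(w, n_bits=4):
    """Symmetric round-to-nearest quantization of a weight matrix
    (an assumed scheme for illustration, not the paper's quantizer)."""
    qmax = 2 ** (n_bits - 1) - 1
    scale = np.abs(w).max() / qmax
    return np.clip(np.round(w / scale), -qmax - 1, qmax) * scale

def layer_quant_error(w, n_bits=4):
    """Mean-squared quantization error for one layer's weights."""
    return float(np.mean((w - quantize_rtn(w, n_bits)) ** 2))

# Toy stand-ins for two transformer layers (e.g. the 5th and 15th):
rng = np.random.default_rng(0)
for name in ("layer_5", "layer_15"):
    w = rng.normal(scale=0.02, size=(4096, 4096)).astype(np.float32)
    print(name, layer_quant_error(w, n_bits=4))
```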