- North America > Canada > British Columbia > Vancouver (0.04)
- North America > Canada > Quebec > Montreal (0.04)
- Europe > Italy (0.04)
- Europe > France (0.04)
- Overview (0.67)
- Research Report (0.46)
Robust and Heavy-Tailed Mean Estimation Made Simple, via Regret Minimization
We study the problem of estimating the mean of a distribution in high dimensions when either the samples are adversarially corrupted or the distribution is heavy-tailed. Recent developments in robust statistics have established efficient and (near) optimal procedures for both settings. However, the algorithms developed on each side tend to be sophisticated and do not directly transfer to the other, with many of them having ad-hoc or complicated analyses. In this paper, we provide a meta-problem and a duality theorem that lead to a new unified view on robust and heavy-tailed mean estimation in high dimensions. We show that the meta-problem can be solved either by a variant of the Filter algorithm from the recent literature on robust estimation or by the quantum entropy scoring scheme (QUE), due to Dong, Hopkins and Li (NeurIPS '19). By leveraging our duality theorem, these results translate into simple and efficient algorithms for both robust and heavy-tailed settings. Furthermore, the QUE-based procedure has run-time that matches the fastest known algorithms on both fronts. Our analysis of Filter is through the classic regret bound of the multiplicative weights update method. This connection allows us to avoid the technical complications in previous works and improve upon the run-time analysis of a gradient-descent-based algorithm for robust mean estimation by Cheng, Diakonikolas, Ge and Soltanolkotabi (ICML '20).
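The abstract's connection between the Filter algorithm and multiplicative weights can be illustrated with a hedged one-dimensional sketch (the paper works in high dimensions with spectral scores; the step size, iteration count, and scoring here are illustrative, not the paper's algorithm):

```python
# 1-D toy of "filter via multiplicative weights": repeatedly downweight the
# points that contribute most to the variance, then re-estimate the mean.

def filtered_mean(xs, iters=50, eta=0.5):
    w = [1.0] * len(xs)
    for _ in range(iters):
        total = sum(w)
        mu = sum(wi * x for wi, x in zip(w, xs)) / total
        scores = [(x - mu) ** 2 for x in xs]   # outlier scores
        smax = max(scores) or 1.0
        # multiplicative weights update: high-score points lose weight fastest
        w = [wi * (1.0 - eta * s / smax) for wi, s in zip(w, scores)]
    total = sum(w)
    return sum(wi * x for wi, x in zip(w, xs)) / total

data = [1.0, 1.1, 0.9, 1.05, 0.95, 100.0]   # one gross outlier
print(filtered_mean(data))  # stays near 1.0 instead of being dragged toward 100
```

The gross outlier keeps the maximum score in every round, so its weight decays geometrically while the inliers' weights are barely touched; in high dimensions the analogous score is the projection onto the top eigenvector of the weighted covariance.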
POp-GS: Next Best View in 3D-Gaussian Splatting with P-Optimality
Joey Wilson, Marcelino Almeida, Sachit Mahajan, Martin Labrie, Maani Ghaffari, Omid Ghasemalizadeh, Min Sun, Cheng-Hao Kuo, Arnab Sen
In this paper, we present a novel algorithm for quantifying uncertainty and information gain within 3D Gaussian Splatting (3D-GS) through P-Optimality. While 3D-GS has proven to be a useful world model with high-quality rasterizations, it does not natively quantify uncertainty. Quantifying uncertainty in the parameters of 3D-GS is necessary to understand the information gained from acquiring new images, as in active perception, or to identify redundant images that can be removed from memory under the resource constraints of online 3D-GS SLAM. We propose to quantify uncertainty and information gain in 3D-GS by reformulating the problem through the lens of optimal experimental design, a classical framework for measuring information gain. By restructuring information quantification of 3D-GS through optimal experimental design, we arrive at multiple solutions, of which T-Optimality and D-Optimality perform the best quantitatively and qualitatively as measured on two popular datasets. Additionally, we propose a block-diagonal approximation of the 3D-GS uncertainty, which provides a measure of correlation for computing more accurate information gain, at the expense of a greater computation cost.
- Information Technology > Artificial Intelligence > Vision (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning (1.00)
- Information Technology > Artificial Intelligence > Cognitive Science > Problem Solving (0.34)
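The optimal-experimental-design scores the abstract names can be sketched on a toy information matrix. This is a hedged illustration, not the paper's implementation: `J` stands in for a per-view Jacobian, and taking T-Optimality as the trace and D-Optimality as the log-determinant of the information matrix is an assumption about the scoring here.

```python
# Toy 2x2 optimal-experimental-design scores on H = J^T J,
# where J is an illustrative per-view Jacobian (list of 2-column rows).
import math

def info_matrix(J):
    h00 = sum(r[0] * r[0] for r in J)
    h01 = sum(r[0] * r[1] for r in J)
    h11 = sum(r[1] * r[1] for r in J)
    return [[h00, h01], [h01, h11]]

def t_optimality(H):
    # trace of the information matrix
    return H[0][0] + H[1][1]

def d_optimality(H):
    # log-determinant of the information matrix
    det = H[0][0] * H[1][1] - H[0][1] * H[1][0]
    return math.log(det)

J = [[1.0, 0.0], [0.0, 2.0], [1.0, 1.0]]
H = info_matrix(J)
print(t_optimality(H), d_optimality(H))  # candidate views would be ranked by these scores
```

In a next-best-view loop, each candidate view contributes its own `H`, and the view maximizing the chosen optimality criterion is acquired next.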
Reviews: Simple, Distributed, and Accelerated Probabilistic Programming
In this submission, the authors describe the design, implementation, and performance of Edward2, a low-level probabilistic programming language that integrates seamlessly with TensorFlow, in particular TensorFlow Distributions. The key concept in Edward2 is the random variable, which, in the context of Edward2, should be understood as a general Python function possibly containing random choices. Also, continuing the design decision of its first version, Edward2 follows the principle of exposing inference to users while providing enough components and combinators to make building custom inference routines easy. This differs from the principle behind other high-level probabilistic programming systems, which is to hide or automate inference away from their users. The submission explains a wide range of benefits of exposing inference, such as a substantial boost in the scalability of inference engines and support for non-standard inference tasks.
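The review's point that a random variable is "a Python function possibly with random choices" can be illustrated with a plain-Python toy (this sketch uses only the standard library, not the actual Edward2 API; the model and its parameters are illustrative):

```python
# A probabilistic "program" is just a function containing random choices;
# each call draws a fresh sample, like evaluating a random variable.
import random

def coin_model(bias=0.7, n=1000):
    # count of heads in n flips of a biased coin
    return sum(1 for _ in range(n) if random.random() < bias)

random.seed(0)
heads = coin_model()
print(heads / 1000)  # close to the true bias of 0.7
```

Edward2 builds on this view by making such functions first-class, traceable objects so that inference code can intercept and transform the random choices.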
Advanced EDA Made Simple Using Pandas Profiling
Originally published on Towards AI, the World's Leading AI and Technology News and Media Company.
Predator Movies Should Keep It Simple
The recent Hulu movie Prey, a prequel to the 1987 sci-fi horror film Predator, pits a young Comanche woman against a brutal alien hunter. Science fiction author Zach Chapman loved the new movie. "It's definitely my favorite Predator film," Chapman says in Episode 524 of the Geek's Guide to the Galaxy podcast. "I think it's the only one in the franchise that has a theme, or at least that commits to a theme in a meaningful way, and the action is super awesome." Prey has been a hit with audiences and critics alike, a much-needed boost for the franchise after flops like The Predator and Alien vs. Predator: Requiem.
- Media > Film (1.00)
- Leisure & Entertainment (1.00)
Math is a Language. This is how you should learn it.
One of the hardest things about building a strong career in Artificial Intelligence, Data Science, or Machine Learning is developing your skills in Math. Unfortunately, Math is one of those fields that scares a lot of people. Not learning Math properly will seriously compromise your problem-solving skills. For more details, check out my article "Why You Need Math for Machine Learning." It focuses on Machine Learning, but the principles apply to many other domains.
Science Made Simple: What Is Machine Learning?
Machine learning is the process of using computers to detect patterns in massive datasets and then make predictions based on what the computer learns from those patterns. This makes machine learning a specific and narrow type of artificial intelligence. Full artificial intelligence involves machines that can perform abilities we associate with the minds of human beings and intelligent animals, such as perceiving, learning, and problem-solving. All machine learning is based on algorithms. In general, algorithms are sets of specific instructions that a computer uses to solve problems.
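The "detect a pattern, then predict" loop described above can be shown in a few lines. This is a minimal sketch with made-up data: we fit a line through the origin to three example points by least squares, then use the learned slope to predict an unseen input.

```python
# Learn a pattern from data, then make a prediction from it.

def fit_slope(xs, ys):
    # least-squares slope for a line through the origin: w = sum(x*y) / sum(x*x)
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

xs = [1.0, 2.0, 3.0]
ys = [2.0, 4.0, 6.0]   # the hidden pattern: y = 2x

w = fit_slope(xs, ys)
print(w * 4.0)  # predicts 8.0 for the unseen input x = 4
```

Real machine learning systems do the same thing at scale: many more parameters, far larger datasets, and the same split between a learning step and a prediction step.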
GitHub - uber/fiber: Distributed Computing for AI Made Simple
This project is experimental and the APIs are not considered stable. Fiber is a Python distributed computing library for modern computer clusters. It was originally developed to power large-scale parallel scientific computation projects like POET, and it has been used to power similar projects within Uber. To use Fiber, simply import it in your code; it works very similarly to multiprocessing. Note that the if __name__ == '__main__': guard is necessary because Fiber uses the spawn method to start new processes.
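Since the README says Fiber mirrors the multiprocessing API, the usage pattern can be sketched with the standard multiprocessing module itself; per the README, swapping the import for Fiber should run the same code on a cluster (that swap is an assumption here, not tested):

```python
# Standard multiprocessing pattern that Fiber's API is designed to mirror.
import multiprocessing

def square(x):
    return x * x

if __name__ == '__main__':
    # The __main__ guard matters because spawned child processes re-import
    # this module; without it, each child would try to start its own pool.
    with multiprocessing.Pool(2) as pool:
        print(pool.map(square, [1, 2, 3, 4]))  # [1, 4, 9, 16]
```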