Uncertainty


The Bayesian vs frequentist approaches: implications for machine learning – Part two

#artificialintelligence

Sampled from a distribution: Many machine learning algorithms assume that the data is sampled from a particular probability distribution. For example, linear regression assumes that the errors follow a Gaussian distribution, and logistic regression assumes that the labels are sampled from a Bernoulli distribution.
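
To make the distinction concrete, here is a minimal Python sketch (with illustrative toy data and parameter names, not the article's own code) showing that fitting linear regression amounts to maximizing a Gaussian likelihood, while fitting logistic regression amounts to maximizing a Bernoulli likelihood:

```python
import numpy as np

# Illustrative toy data; the true slope of 2.0 is an arbitrary choice.
rng = np.random.default_rng(0)
x = rng.normal(size=200)
y_lin = 2.0 * x + rng.normal(scale=0.5, size=200)                     # Gaussian noise
y_log = (rng.random(200) < 1 / (1 + np.exp(-2.0 * x))).astype(float)  # Bernoulli labels

def gaussian_nll(w, sigma=0.5):
    # Negative Gaussian log-likelihood of the linear model y = w*x + noise;
    # up to constants, this is the squared-error loss of linear regression.
    return 0.5 * np.sum((y_lin - w * x) ** 2) / sigma**2

def bernoulli_nll(w):
    # Negative Bernoulli log-likelihood of the logistic model,
    # i.e. the cross-entropy loss of logistic regression.
    p = 1 / (1 + np.exp(-w * x))
    return -np.sum(y_log * np.log(p) + (1 - y_log) * np.log(1 - p))

ws = np.linspace(-5, 5, 1001)
print(ws[np.argmin([gaussian_nll(w) for w in ws])])   # close to 2.0
print(ws[np.argmin([bernoulli_nll(w) for w in ws])])  # close to 2.0
```

Under either assumption, the fitted parameter minimizes the corresponding negative log-likelihood, which is exactly squared-error loss in the Gaussian case and cross-entropy loss in the Bernoulli case.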


Calculate Maximum Likelihood Estimator with Newton-Raphson Method using R

#artificialintelligence

In statistical modeling, we have to calculate an estimator to determine the equation of the model. The problem is that the estimator itself can be difficult to calculate, especially when it involves distributions like the Beta, Gamma, or even the Gompertz distribution. Maximum likelihood estimation (MLE) is one of many methods for calculating the estimator for those distributions. In this article, I will give you some examples of calculating the MLE with the Newton-Raphson method using R. The Newton-Raphson method is an iterative procedure for finding the roots of a function f. The goal of the method is to bring the approximated result as close as possible to the exact result (that is, the roots of the function).
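
As a minimal sketch of the idea (in Python rather than R, and with a Poisson likelihood chosen because its MLE has a closed form to check against; this is not the article's own code), Newton-Raphson for MLE finds a root of the score function, the first derivative of the log-likelihood:

```python
import numpy as np

# Newton-Raphson finds a root of f via x_new = x - f(x)/f'(x). For MLE,
# f is the score (first derivative of the log-likelihood), so each
# update also uses the second derivative.
rng = np.random.default_rng(42)
x = rng.poisson(lam=3.5, size=500)

def score(lam):    # first derivative of the Poisson log-likelihood
    return x.sum() / lam - len(x)

def hessian(lam):  # second derivative of the Poisson log-likelihood
    return -x.sum() / lam**2

lam = 1.0          # initial guess
for _ in range(100):
    step = score(lam) / hessian(lam)
    lam -= step    # Newton-Raphson update on the score
    if abs(step) < 1e-10:
        break

print(lam, x.mean())  # the Poisson MLE is the sample mean, so these agree
```

For the Beta, Gamma, or Gompertz likelihoods mentioned above, only `score` and `hessian` change; the update loop is identical.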


The Basics of Statistics for Data Science By Statisticians - Statanalytica

#artificialintelligence

Data science has boomed in the current industry and is one of the most popular technologies today. Many statistics students want to learn data science, because statistics is a building block of machine learning algorithms. But most students don't know how much statistics they need to know before starting data science.


Helping decision-makers manage resilience under different climate change scenarios: global vs local

AIHub

The Intergovernmental Panel on Climate Change (IPCC) fifth assessment report states that warming of the climate system is unequivocal and notes that each of the last three decades has been successively warmer at the Earth's surface than any preceding decade since 1850. The IPCC report's projections of future global temperature change range from 1.1 to 4 °C, but temperature increases of more than 6 °C cannot be ruled out [1]. This wide range of values reflects our limited ability to make accurate projections of the future climate change produced by different potential pathways of greenhouse gas (GHG) emissions. The sources of uncertainty that prevent us from obtaining better precision are diverse. One of them relates to the computer models used to project future climate change.


200+ Machine Learning Interview Questions and Answer for 2021

#artificialintelligence

A Machine Learning interview calls for a rigorous process in which candidates are judged on various aspects such as technical and programming skills, knowledge of methods, and clarity of basic concepts. If you aspire to apply for machine learning jobs, it is crucial to know what kinds of interview questions recruiters and hiring managers typically ask. This is an attempt to help you crack machine learning interviews at major product-based companies and start-ups. Usually, machine learning interviews at major companies require a thorough knowledge of data structures and algorithms. In the upcoming series of articles, we shall start from the basics and build upon these concepts to solve major interview questions. Machine learning interviews comprise many rounds, beginning with a screening test. This involves solving questions either on a whiteboard or on online platforms like HackerRank and LeetCode. Here, we have compiled a list of ...


2020 in Review With Brian Tse

#artificialintelligence

In 2020, Synced covered many memorable moments in the AI community, such as the current situation of women in AI, the birth of GPT-3, AI's fight against COVID-19, heated debates around AI bias, MT-DNN surpassing human baselines on GLUE, AlphaFold cracking a 50-year-old biology challenge, and more. To close the chapter on 2020 and look forward to 2021, we are introducing a year-end special issue, following Synced's tradition, to look back at current AI achievements and explore possible trends in future AI with leading AI experts. Here, we invite Mr. Brian Tse to share his insights about the current development and future trends of artificial intelligence. Brian Tse focuses on researching and improving cooperation over AI safety, governance, and stability between great powers. He is a Policy Affiliate at the University of Oxford's Center for the Governance of AI, Coordinator at the Beijing AI Academy's AI4SDGs Cooperation Network, and Senior Advisor at the Partnership on AI.


Selection of Summary Statistics for Network Model Choice with Approximate Bayesian Computation

arXiv.org Machine Learning

Approximate Bayesian Computation (ABC) now serves as one of the major strategies to perform model choice and parameter inference on models with intractable likelihoods. An essential component of ABC involves comparing a large amount of simulated data with the observed data through summary statistics. To avoid the curse of dimensionality, summary statistic selection is of prime importance, and becomes even more critical when applying ABC to mechanistic network models. Indeed, while many summary statistics can be used to encode network structures, their computational complexity can be highly variable. For large networks, computation of summary statistics can quickly create a bottleneck, making the use of ABC difficult. To reduce this computational burden and make the analysis of mechanistic network models more practical, we investigated two questions in a model choice framework. First, we studied the utility of cost-based filter selection methods to account for different summary costs during the selection process. Second, we performed selection using networks generated with a smaller number of nodes to reduce the time required for the selection step. Our findings show that computationally inexpensive summary statistics can be efficiently selected with minimal impact on classification accuracy. Furthermore, we found that networks with a smaller number of nodes can only be employed to eliminate a moderate number of summaries. While this latter finding is network specific, the former is general and can be adapted to any ABC application.
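
For context, here is a minimal rejection-ABC sketch of the model-choice machinery that the paper's summary-statistic selection feeds into. The two network generators (Erdős–Rényi vs Barabási–Albert), the cheap summaries, the tolerance, and the uniform prior are all illustrative assumptions, not the paper's models:

```python
import numpy as np
import networkx as nx

def summaries(G):
    # Two computationally cheap summary statistics (both O(|V| + |E|)).
    deg = np.array([d for _, d in G.degree()])
    return np.array([deg.mean(), deg.max()])

n = 100
observed = summaries(nx.barabasi_albert_graph(n, 3, seed=0))  # pretend data

rng = np.random.default_rng(1)
accepted = []
for _ in range(1000):
    model = int(rng.integers(2))     # uniform prior over the two candidate models
    G = (nx.erdos_renyi_graph(n, 0.06) if model == 0
         else nx.barabasi_albert_graph(n, 3))
    # Accept the simulation if its summaries are close to the observed ones.
    if np.linalg.norm(summaries(G) - observed) < 2.0:
        accepted.append(model)

# Posterior model probabilities are estimated by acceptance frequencies.
print("accepted:", len(accepted), " P(BA | data) ~", np.mean(accepted))
```

Every simulation requires evaluating the summaries, which is why, for large networks, selecting inexpensive yet informative statistics (the paper's subject) dominates the practicality of the whole procedure.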


Estimating and Evaluating Regression Predictive Uncertainty in Deep Object Detectors

arXiv.org Machine Learning

Predictive uncertainty estimation is an essential next step for the reliable deployment of deep object detectors in safety-critical tasks. In this work, we focus on estimating predictive distributions for bounding box regression output with variance networks. We show that in the context of object detection, training variance networks with negative log likelihood (NLL) can lead to high-entropy predictive distributions regardless of the correctness of the output mean. We propose to use the energy score as a non-local proper scoring rule and find that when used for training, the energy score leads to better calibrated and lower entropy predictive distributions than NLL. We also address the widespread use of non-proper scoring metrics for evaluating predictive distributions from deep object detectors by proposing an alternate evaluation approach founded on proper scoring rules. Using the proposed evaluation tools, we show that although variance networks can be used to produce high-quality predictive distributions, the ad hoc approaches used by seminal object detectors for choosing regression targets during training do not provide wide enough data support for reliable variance learning. We hope that our work helps shift evaluation in probabilistic object detection to better align with predictive uncertainty evaluation in other machine learning domains. Deep object detectors are increasingly deployed as perception components in safety-critical robotics and automation applications. For reliable and safe operation, subsequent tasks using detectors as sensors require meaningful predictive uncertainty estimates correlated with their outputs. For example, overconfident incorrect predictions can lead to sub-optimal decision making in planning tasks, while underconfident correct predictions can lead to under-utilizing information in sensor fusion. This paper investigates probabilistic object detectors: extensions of standard object detectors that estimate predictive distributions for output categories and bounding boxes simultaneously. This paper aims to identify the shortcomings of recent trends followed by state-of-the-art probabilistic object detectors and proposes theoretically founded solutions for the identified issues.
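
For intuition, here is a minimal numpy sketch (univariate and illustrative; not the authors' implementation) of the two scoring rules being compared, evaluated on a Gaussian predictive distribution N(mu, var) for a regression target y:

```python
import numpy as np

def gaussian_nll(mu, var, y):
    # Negative log-likelihood, the usual variance-network training loss.
    return 0.5 * np.log(2 * np.pi * var) + 0.5 * (y - mu) ** 2 / var

def energy_score(mu, var, y, n_samples=100_000, seed=0):
    # Monte-Carlo estimate of the energy score
    #   ES = E|X - y| - 0.5 * E|X - X'|,  X, X' ~ N(mu, var) i.i.d.,
    # a proper scoring rule that can also be used as a training loss.
    rng = np.random.default_rng(seed)
    x = rng.normal(mu, np.sqrt(var), n_samples)
    x2 = rng.normal(mu, np.sqrt(var), n_samples)
    return np.abs(x - y).mean() - 0.5 * np.abs(x - x2).mean()

# Fix a mean error |mu - y| = 1 and vary the predicted variance to see
# how each rule trades sharpness (low entropy) against that error.
for var in (0.1, 0.5, 1.0, 4.0, 10.0):
    print(f"var={var:5.1f}  NLL={gaussian_nll(0.0, var, 1.0):6.3f}  "
          f"ES={energy_score(0.0, var, 1.0):6.3f}")
```

Both rules are proper for this Gaussian family; the paper's finding concerns their differing behaviour as training losses for deep detectors, where NLL tends to yield high-entropy (large-variance) predictive distributions.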


Paraconsistent Foundations for Quantum Probability

arXiv.org Artificial Intelligence

The mathematics of quantum mechanics has been viewed and analyzed from a huge variety of different perspectives, each shedding light on different subtleties of its underlying structure and its connection to our everyday reality. Here we add an additional thread to this conceptual polyphony, demonstrating a close connection between fuzzy paraconsistent logic and quantum probabilities. This connection suggests new variations on existing interpretations of quantum reality and measurement. It also provides some tantalizing connections between the probabilistic and fuzzy logic used in modern AI systems and quantum probabilistic reasoning, which may have implications for quantum-computing implementations of logical inference based AI. The ideas here arose as a spinoff from the work reported in [Goe21], which uses a variety of paraconsistent intuitionistic logic called Constructible Duality (CD) Logic as a means for giving a rigorous logic foundation to the PLN (Probabilistic Logic Networks) logic [GIGH08] that has been used in the OpenCog AI project [GPG13a, GPG13b] for well over a decade now.


Motor-Imagery-Based Brain Computer Interface using Signal Derivation and Aggregation Functions

arXiv.org Artificial Intelligence

Brain Computer Interface (BCI) technologies are popular methods of communication between the human brain and external devices. One of the most popular approaches to BCI is Motor Imagery (MI). In BCI applications, electroencephalography (EEG) is a very popular measurement of brain dynamics because of its non-invasive nature. Although there is high interest in the BCI topic, the performance of existing systems is still far from ideal, due to the difficulty of performing pattern recognition tasks on EEG signals. BCI systems are composed of a wide range of components that perform signal pre-processing, feature extraction, and decision making. In this paper, we define a BCI framework, named Enhanced Fusion Framework, in which we propose three different ideas to improve existing MI-based BCI frameworks. Firstly, we include an additional pre-processing step: a differentiation of the EEG signal that makes it time-invariant. Secondly, we add an additional frequency band as a feature for the system and show its effect on performance. Finally, we make an in-depth study of how to make the final decision in the system. We propose the usage of up to six different types of classifiers and a wide range of aggregation functions (including classical aggregations, Choquet and Sugeno integrals and their extensions, and overlap functions) to fuse the information given by the considered classifiers. We tested this new system on a dataset of 20 volunteers performing motor imagery-based brain-computer interface experiments, on which the new system achieved an accuracy of 88.80%. We also propose an optimized version of our system that reaches up to 90.76%. Furthermore, we find that the pair of Choquet/Sugeno integrals and overlap functions provides the best results.
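
As a small illustration of the fusion step, here is a sketch of Choquet-integral aggregation of per-class classifier confidences. The cardinality-based fuzzy measure mu(A) = (|A|/n)^q and the scores are illustrative stand-ins, not the paper's exact construction:

```python
import numpy as np

def choquet(scores, q=0.5):
    # Discrete Choquet integral of the scores with respect to the
    # symmetric fuzzy measure mu(A) = (|A|/n)**q:
    #   C = sum_i (x_(i) - x_(i-1)) * mu({elements with the n-i+1 largest values})
    x = np.sort(np.asarray(scores, dtype=float))  # ascending x_(1)..x_(n)
    n = len(x)
    prev, total = 0.0, 0.0
    for i in range(n):
        total += (x[i] - prev) * ((n - i) / n) ** q
        prev = x[i]
    return total

# Fuse six hypothetical classifier confidences for two classes and pick
# the class with the larger fused score.
left = choquet([0.61, 0.55, 0.72, 0.58, 0.66, 0.70])
right = choquet([0.39, 0.45, 0.28, 0.42, 0.34, 0.30])
print(left, right, "-> predict", "left" if left > right else "right")
```

With q = 1 this reduces to the arithmetic mean, while q < 1 weights the larger confidences more heavily, which is one way such non-additive aggregations differ from simple averaging of classifier outputs.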