Rescuing neural spike train models from bad MLE
Arribas, Diego M., Zhao, Yuan, Park, Il Memming
The standard approach to fitting an autoregressive spike train model is to maximize the likelihood for one-step prediction. This maximum likelihood estimation (MLE) often leads to models that perform poorly when generating samples recursively for more than one time step. Moreover, the generated spike trains can fail to capture important features of the data and can even show diverging firing rates. To alleviate this, we propose to directly minimize the divergence between recorded and model-generated spike trains using spike train kernels. We develop a method that stochastically optimizes the maximum mean discrepancy induced by the kernel. Experiments performed on both real and synthetic neural data validate the proposed approach, showing that it leads to well-behaved models. Using different combinations of spike train kernels, we show that we can control the trade-off between different features, which is critical for dealing with model mismatch.
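The maximum mean discrepancy (MMD) objective mentioned in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the kernel below is a toy Gaussian kernel on binned spike counts standing in for a proper spike train kernel, and the function names (`spike_count_kernel`, `mmd2`) are my own. It shows the standard unbiased MMD² estimator that such a method would stochastically minimize.

```python
import numpy as np

def spike_count_kernel(x, y, sigma=1.0):
    # Toy stand-in for a spike train kernel: a Gaussian kernel on
    # vectors of binned spike counts.
    d = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    return np.exp(-d @ d / (2.0 * sigma**2))

def mmd2(X, Y, kernel=spike_count_kernel):
    """Unbiased estimate of squared MMD between sample sets X and Y.

    X, Y: lists of binned spike trains (count vectors of equal length).
    Minimizing this quantity over model parameters (with X drawn from
    data and Y generated by the model) matches the model's sample
    distribution to the data distribution in the kernel's feature space.
    """
    m, n = len(X), len(Y)
    # Within-sample terms exclude the diagonal to keep the estimate unbiased.
    kxx = sum(kernel(X[i], X[j]) for i in range(m)
              for j in range(m) if i != j) / (m * (m - 1))
    kyy = sum(kernel(Y[i], Y[j]) for i in range(n)
              for j in range(n) if i != j) / (n * (n - 1))
    kxy = sum(kernel(x, y) for x in X for y in Y) / (m * n)
    return kxx + kyy - 2.0 * kxy
```

In a fitting loop, one would repeatedly draw a batch of recorded spike trains and a batch of model-generated ones, evaluate `mmd2`, and take a stochastic gradient step on the model parameters.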
Maximum Uncertainty Procedures for Interval-Valued Probability Distributions
Measures of uncertainty and divergence are introduced for interval-valued probability distributions and are shown to have desirable mathematical properties. A maximum uncertainty inference procedure for marginal interval distributions is presented. A technique for reconstructing interval distributions from projections is developed based on this inference procedure. Interval distributions may represent collections of confidence intervals derived from frequency data, imprecisely stated subjective probabilities, known linear equality or inequality constraints, etc. Thus, interval distributions sometimes provide a more realistic characterization of uncertainty than do real-valued probability distributions.
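A small sketch may help make the "maximum uncertainty" idea concrete. This is not the paper's procedure: it is an illustrative maximum-entropy selection over a finite interval-valued distribution, where each outcome has probability bounds [l_i, u_i] and we pick the point distribution inside those bounds with the highest Shannon entropy. The function name `max_entropy_point` and the bisection approach are my own; the KKT conditions for this problem imply the solution clips a common level to each interval.

```python
def max_entropy_point(lowers, uppers):
    """Maximum-entropy distribution p with l_i <= p_i <= u_i and sum(p) = 1.

    Assumes the intervals are feasible: sum(lowers) <= 1 <= sum(uppers).
    By the KKT conditions for maximizing -sum(p_i log p_i) over a box
    intersected with the simplex, the optimum has the form
    p_i = clip(c, l_i, u_i) for a common level c, found here by bisection.
    """
    def clipped_sum(c):
        return sum(min(max(c, l), u) for l, u in zip(lowers, uppers))

    lo, hi = 0.0, 1.0
    for _ in range(200):  # bisect on the common level c
        c = (lo + hi) / 2.0
        if clipped_sum(c) < 1.0:
            lo = c
        else:
            hi = c
    return [min(max(c, l), u) for l, u in zip(lowers, uppers)]
```

For example, with bounds [0.5, 1.0] and [0.0, 0.2] on two outcomes, the entropy-maximizing distribution is forced to (0.8, 0.2); with slack bounds it returns the uniform distribution.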