
### What You Get When You Get Zest Explainability

Real explainability is essential during model development, when it's time to figure out which variables most influence the prediction and how they do so. The chart below was generated by our ZAML Explain software and shows how an applicant's traditional credit score, in combination with other pieces of information, affects that applicant's model score (a higher score means a lower likelihood of default). Each dot in the chart represents a single credit applicant. The horizontal axis is the traditional credit score (from below 500 to over 700). The vertical axis measures the impact that credit score has on an applicant's model score.

### How To Know if Your Machine Learning Model Has Good Performance

After you develop a machine learning model for your predictive modeling problem, how do you know whether its performance is any good? This is a common question I am asked by beginners, and as a beginner you often seek a definitive answer to it. In this post, you will discover how to answer this question for yourself and know whether your model's skill is good or not. Your predictive modeling problem is unique.
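A common way to make this question answerable, sketched below with illustrative numbers rather than anything from the post, is to compare the model's error against a naive baseline on the same data: the model has demonstrable skill only if it beats that baseline.

```python
import numpy as np

# Illustrative sketch: all data here is synthetic, generated for the example.
rng = np.random.default_rng(0)
y_true = rng.normal(loc=50.0, scale=10.0, size=200)

# Naive baseline: always predict the mean of the targets.
baseline_pred = np.full_like(y_true, y_true.mean())
baseline_mae = np.abs(y_true - baseline_pred).mean()

# Stand-in "model" predictions: the truth plus small noise.
model_pred = y_true + rng.normal(scale=2.0, size=y_true.shape)
model_mae = np.abs(y_true - model_pred).mean()

# Skill is relative: the model is only "good" if it beats the baseline.
has_skill = model_mae < baseline_mae
```

The absolute error value alone says little; the comparison against the naive reference is what turns "is 0.8 good?" into a question with a concrete answer for your specific problem.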

### Generative Modeling by Estimating Gradients of the Data Distribution

This blog post focuses on a promising new direction for generative modeling. We can learn score functions (gradients of log probability density functions) on a large number of noise-perturbed data distributions, then generate samples with Langevin-type sampling. The resulting generative models, often called score-based generative models, have several important advantages over existing model families: GAN-level sample quality without adversarial training, flexible model architectures, exact log-likelihood computation, and inverse problem solving without re-training models. In this blog post, we will show you in more detail the intuition, basic concepts, and potential applications of score-based generative models. Existing generative modeling techniques can largely be grouped into two categories based on how they represent probability distributions: likelihood-based models and implicit generative models. Both, however, have significant limitations.
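The Langevin-type sampling mentioned above can be sketched in a few lines. The sketch below assumes the target is a standard 1-D Gaussian, whose score is known in closed form (s(x) = -x); in a real score-based model, a neural network would estimate the score instead.

```python
import numpy as np

def score(x):
    # Closed-form score of a standard Gaussian: d/dx log N(x; 0, 1) = -x.
    # In practice this is the only piece a learned model would replace.
    return -x

def langevin_sample(n_samples=5000, n_steps=200, step=0.05, seed=0):
    """Draw samples by following the score plus injected Gaussian noise."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-4.0, 4.0, size=n_samples)  # arbitrary initialization
    for _ in range(n_steps):
        noise = rng.normal(size=n_samples)
        # Langevin update: half-step along the score, plus scaled noise.
        x = x + 0.5 * step * score(x) + np.sqrt(step) * noise
    return x

samples = langevin_sample()
```

After enough steps, the samples are approximately distributed according to the target density (here, mean near 0 and standard deviation near 1), even though the sampler only ever touched the score function, never the density itself.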

### Free energy score space

Score functions induced by generative models extract fixed-dimension feature vectors from different-length data observations by subsuming the process of data generation, projecting them into highly informative spaces called score spaces. In this way, standard discriminative classifiers have been shown to achieve higher performance than a solely generative or discriminative approach. In this paper, we present a novel score space that exploits the free energy associated with a generative model through a score function. This function aims to capture both the uncertainty of model learning and the local compliance of data observations with respect to the generative process. Theoretical justifications and convincing comparative classification results on various generative models demonstrate the effectiveness of the proposed strategy.
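The general score-space idea, fixed-dimension features from variable-length observations, can be illustrated with a classical Fisher score under a toy Gaussian model. This is a sketch of the generic mechanism only, not the paper's free-energy score; the model choice and length normalization are assumptions made for clarity.

```python
import numpy as np

def fisher_score(x, mu=0.0, var=1.0):
    """Map a sequence of any length to a fixed 2-D feature vector.

    The features are the gradients of the Gaussian log-likelihood with
    respect to the model parameters (mean and variance), normalized by
    sequence length so short and long inputs live on a comparable scale."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    d_mu = np.sum(x - mu) / var                                  # dlogL/dmu
    d_var = np.sum(-0.5 / var + (x - mu) ** 2 / (2 * var ** 2))  # dlogL/dvar
    return np.array([d_mu, d_var]) / n

short_feat = fisher_score([0.1, -0.2, 0.3])        # 3 observations
long_feat = fisher_score(np.linspace(-1, 1, 50))   # 50 observations
```

Both inputs land in the same 2-D score space, which is what lets a standard discriminative classifier be trained on top of observations of differing length.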

### Statistical Piano Reduction Controlling Performance Difficulty

We present a statistical-modelling method for piano reduction, i.e. converting an ensemble score into a piano score, that can control performance difficulty. While previous studies have focused on describing the conditions for a piano score to be playable, playability depends on the player's skill and can change continuously with tempo. We thus computationally quantify performance difficulty as well as musical fidelity to the original score, and formulate the problem as optimization of musical fidelity under constraints on difficulty values. First, performance difficulty measures are developed by means of probabilistic generative models for piano scores, and the relation to the rate of performance errors is studied. Second, to describe musical fidelity, we construct a probabilistic model integrating a prior piano-score model and a model representing how ensemble scores are likely to be edited. An iterative optimization algorithm for piano reduction is developed based on statistical inference of the model. We confirm the effect of the iterative procedure; we find that subjective difficulty and musical fidelity monotonically increase with controlled difficulty values; and we show that incorporating sequential dependence of pitches and fingering motion in the piano-score model improves the quality of reduction scores in high-difficulty cases.
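The problem framing, maximize musical fidelity subject to a difficulty constraint, can be illustrated on a deliberately simplified toy model. Everything below (the note importances, the note-count proxy for difficulty, the greedy selection) is an illustrative assumption; the paper itself uses probabilistic score models and statistical inference, not this heuristic.

```python
from dataclasses import dataclass

@dataclass
class Note:
    pitch: int         # MIDI pitch number
    importance: float  # toy contribution to musical fidelity if kept

def reduce_chord(notes, max_notes):
    """Keep the most important notes within a difficulty budget.

    Difficulty is crudely proxied by the number of simultaneous notes
    (max_notes); fidelity is the summed importance of the kept notes."""
    ranked = sorted(notes, key=lambda n: n.importance, reverse=True)
    kept = sorted(ranked[:max_notes], key=lambda n: n.pitch)
    fidelity = sum(n.importance for n in kept)
    return kept, fidelity

chord = [Note(60, 1.0), Note(64, 0.4), Note(67, 0.7), Note(72, 0.9)]
easy, f_easy = reduce_chord(chord, max_notes=2)  # stricter difficulty limit
hard, f_hard = reduce_chord(chord, max_notes=4)  # relaxed difficulty limit
```

Even in this toy form, the trade-off the paper studies is visible: relaxing the difficulty budget monotonically increases the achievable fidelity, which matches the reported finding that fidelity rises with the controlled difficulty values.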