There are several reasons why everyone isn't using Bayesian methods for regression modeling. One reason is that Bayesian modeling requires more thought: you need pesky things like priors, and you can't assume that the answers are valid just because a procedure runs without throwing an error. A second reason is that MCMC sampling -- the bedrock of practical Bayesian modeling -- can be slow compared to closed-form or MLE procedures. A third reason is that existing Bayesian solutions have either been highly specialized (and thus inflexible) or have required knowing how to use a general-purpose tool like BUGS, JAGS, or Stan. This third reason has recently been shattered in the R world by not one but two packages: brms and rstanarm.
Recent advances in Natural Language Processing and Machine Learning provide us with the tools to build predictive models that can be used to unveil patterns driving judicial decisions. This can be useful to both lawyers and judges as an assisting tool for rapidly identifying cases and extracting the patterns that lead to certain decisions. This paper presents the first systematic study on predicting the outcome of cases tried by the European Court of Human Rights based solely on textual content. We formulate a binary classification task where the input to our classifiers is the textual content extracted from a case and the target output is the actual judgment as to whether there has been a violation of an article of the Convention on Human Rights. Textual information is represented using contiguous word sequences, i.e. N-grams.
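The classification task described above can be sketched as follows. This is only a toy illustration, not the paper's actual pipeline: the case texts and labels are invented placeholders, and the choice of a TF-IDF N-gram representation with a linear SVM in scikit-learn is an assumption.

```python
# Toy sketch: binary violation/no-violation classification of case text
# using contiguous word sequences (N-grams) as features.
# Texts and labels are invented placeholders, not real ECHR case data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

texts = [
    "the applicant complained of degrading treatment in detention",
    "the court found no interference with the right to private life",
    "conditions of detention amounted to inhuman and degrading treatment",
    "the restriction pursued a legitimate aim and was proportionate",
]
labels = [1, 0, 1, 0]  # 1 = violation of an article, 0 = no violation

# Unigrams through trigrams as features, weighted by TF-IDF
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 3)), LinearSVC())
model.fit(texts, labels)

print(model.predict(["the treatment in detention was degrading"]))
```

In practice, the input text would be the full extracted content of a case and the model would be evaluated with cross-validation rather than fit on a handful of examples.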
Adversarial learning allows us to free our models of any constraints or limitations in our understanding of the problem domain -- there is no preconception of what to learn and the model is free to explore the data. In the next post we will see how we can utilize the representations learned by our generator for image classification.
The history of Artificial Intelligence is long, but only recently have technology companies and markets begun to get excited about it… Why? After a few decades of exploring symbolic AI methods, the field shifted toward statistical approaches, which have lately started working across a broad array of tasks thanks to the explosion of data and computing power. This in turn has led to machine learning and, most importantly, enabled deep learning. This is great news for the tech industry. The downside is that there aren't enough data scientists who understand deep learning. For those who do, there is huge demand for their services.
Technological advances and artificial intelligence (AI) are going to totally transform the way healthcare is delivered over the next five to ten years. This is the view of Tony Young, National Clinical Director for Innovation at NHS England. But he warns that with the advent of life-changing technologies, we must not lose sight of what it means to be human. Like the arrival of the printing press 500 years ago, which gave everyone access to the written word, medicine today is having its own "Gutenberg moment". Technology such as smartphones and wearables is giving patients access to medical knowledge and empowering them to take charge of their health and well-being.