Using docker to run old GPU-accelerated deep learning models

Deep learning models are wonderful, and we always want to use the newest cutting-edge solutions to get the best results. But once in a while you stumble upon a whitepaper that looks relevant to the task at hand, even though it was written a few years ago. And a few years is an eternity for deep learning projects: old versions of frameworks, CUDA, Python, etc. -- none of that is easy to just install and launch on a modern system. The usual answer would be Anaconda, but it doesn't provide enough isolation when it comes to GPU-accelerated models. My way of dealing with this problem will come as no surprise to most: containerisation, in other words -- Docker.
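To make the idea concrete, here is a minimal sketch of what such a setup can look like. It assumes the NVIDIA Container Toolkit is installed on the host so that `docker run --gpus` works; the base image tag and the pinned package versions are illustrative examples of an "old" environment, not a recommendation -- you would pick whatever CUDA and framework versions the paper's code actually requires.

```dockerfile
# Example: reproduce an old environment inside a container.
# The base image bundles a legacy CUDA + cuDNN, isolated from the host's drivers.
# (Tag is an example; pick the CUDA version the old project needs.)
FROM nvidia/cuda:10.0-cudnn7-devel-ubuntu18.04

# Install the Python that was current when the project was written.
RUN apt-get update && apt-get install -y --no-install-recommends \
        python3 python3-pip \
    && rm -rf /var/lib/apt/lists/*

# Pin the framework version to match the whitepaper's code
# (version numbers here are placeholders).
RUN pip3 install tensorflow-gpu==1.13.1

WORKDIR /workspace

# Build and run (on the host, with NVIDIA Container Toolkit installed):
#   docker build -t old-model .
#   docker run --gpus all -v "$PWD":/workspace old-model python3 train.py
```

The key point is that only the NVIDIA driver lives on the host; CUDA, cuDNN, and the framework are all frozen inside the image, so the host system can stay fully up to date.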