The School of Chemistry at the University of Bristol is at the forefront of applying computing to chemistry, from simulating complex materials and biomolecular systems on supercomputers and developing workflows for robotic chemical synthesis, to using modern machine learning algorithms and advanced visualisation to understand and predict chemical behaviour. To get the most out of scientific computing we need a new type of scientist, one who combines a firm grounding in chemistry with strong skills in computing and a clear understanding of what can be achieved by merging them. Our new degrees will address this emerging skills gap, allowing students to apply their enthusiasm for computing in chemistry, whether that is building machine learning frameworks for predicting spectra, scripting automation workflows or running quantum chemical calculations. In all of this we keep chemistry at the core, enhancing it with the breadth of modern scientific computing: coding and software engineering, visualisation and virtual reality, data analysis, machine learning, deep learning and AI, and modern hardware and computing resources such as cloud computing, GPUs and high-performance computing architectures. With these skills, our graduates will be well placed in a future job market where employers are ever more focussed on this combination of skills and experience.
Machine learning has been around for a while, with the earliest techniques developed in the 1950s. It is currently enjoying a particularly high profile, thanks to a whole range of applications from self-driving cars to Go-playing computers. But what exactly is it? I've just finished reading Josefin Rosen's blog post describing how machine learning makes for a smarter life, and asked her to put ML in context. You're welcome to join my discussion with Josefin.
With the rapid growth of data volumes and the increasing complexity of computational models in cloud computing, how to handle users' requests by scheduling computational jobs and assigning resources in the data center has become an important topic. To gain a better understanding of computing jobs and their resource requests, we analyse their characteristics and focus on predicting and classifying them with machine learning approaches. Specifically, we apply an LSTM neural network to predict the arrival of jobs and their aggregated requests for computing resources. We evaluate this approach on the Google Cluster dataset and show improved accuracy compared with existing methods. Additionally, to better understand the computing jobs, we apply an unsupervised hierarchical clustering algorithm, BIRCH, to classify them and obtain interpretable results for computing centers.
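The BIRCH clustering step described above can be sketched with scikit-learn's implementation (an assumption; the abstract does not name a library). The synthetic two-class job data, the feature choice (CPU and memory request per job), and the threshold value are all illustrative stand-ins for real trace data such as the Google Cluster dataset:

```python
import numpy as np
from sklearn.cluster import Birch

# Hypothetical per-job resource requests (CPU fraction, memory fraction),
# loosely mimicking two job classes; real values would come from a
# cluster trace rather than being sampled here.
rng = np.random.default_rng(0)
small_jobs = rng.normal(loc=[0.1, 0.2], scale=0.02, size=(200, 2))
large_jobs = rng.normal(loc=[0.7, 0.8], scale=0.05, size=(200, 2))
X = np.vstack([small_jobs, large_jobs])

# BIRCH builds a clustering-feature (CF) tree in a single pass over the
# data, which is what makes it attractive for large job logs; `threshold`
# bounds the radius of each subcluster, and `n_clusters` merges the
# resulting subclusters into a final set of job classes.
model = Birch(threshold=0.1, n_clusters=2)
labels = model.fit_predict(X)

print(len(set(labels)))  # number of job classes found
```

The single-pass CF-tree construction is the design point that distinguishes BIRCH from classic agglomerative clustering, which would need the full pairwise-distance matrix for every job in the trace.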
In my last article, I discussed the evolution of Cloud Computing technology and how the Cloud has been a paradigm shift for Digital Transformation. The Cloud provides businesses with unparalleled flexibility, offering greater versatility and inexpensive solutions for managing IT systems at a time when technological developments are happening at a phenomenal pace and the landscape is more dynamic than ever before.