Oliver Hofmann and his research group at the Institute of Solid State Physics at TU Graz are working on the optimization of modern electronics. Interface properties of hybrid materials, consisting of organic and inorganic components and used, for example, in OLED displays or organic solar cells, play a key role in their research. The team simulates these interface properties with machine-learning-based methods, and the results feed into the development of new materials that improve the efficiency of electronic components. The researchers have now turned their attention to the phenomenon of long-range charge transfer.
For movie buffs, the work that the factory machines do in Charlie Chaplin's 1936 classic, Modern Times, may have seemed too futuristic for its time. Fast forward eight decades, and the colossal changes that Artificial Intelligence is catalyzing around us will most likely give the same impression to future generations. There is one crucial difference, though: while those advancements were confined to the movies, what we are seeing today is real. A question that seems to be on everyone's mind is: what is Artificial Intelligence? The pace at which AI is moving, as well as the breadth and scope of the areas it encompasses, ensures that it is going to change our lives profoundly.
Posted by Josh Gordon, Developer Advocate. A new HarvardX TinyML course on edX.org: Prof. Vijay Janapa Reddi of Harvard, the TensorFlow Lite Micro team, and the edX online learning platform are sharing a series of short TinyML courses this fall that you can audit for free, or sign up to take and receive a certificate. In this article, I'll share a bit about TinyML, what you can do with it, and the …
Ensemble techniques, in which a model is composed of multiple (possibly weaker) models, are prevalent nowadays within the field of machine learning (ML). Well-known methods such as bagging, boosting, and stacking are ML mainstays, widely (and fruitfully) deployed on a daily basis. Generally speaking, ensemble methods fall into two types: those that generate models sequentially, e.g., AdaBoost, and those that generate them in parallel, e.g., random forests and evolutionary algorithms. AdaBoost (Adaptive Boosting) is an ML meta-algorithm that is used in conjunction with other learning algorithms to improve performance. The outputs of so-called "weak learners" are combined into a weighted sum that represents the final output of the boosted classifier.
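To make the weighted-sum idea concrete, here is a minimal from-scratch sketch of AdaBoost with threshold stumps on one-dimensional data. The function names and the toy setup are illustrative, not from the article; a production system would use a library implementation such as scikit-learn's `AdaBoostClassifier`.

```python
import numpy as np

def fit_adaboost(X, y, n_rounds=10):
    """AdaBoost with threshold stumps on 1-D data; labels y are in {-1, +1}."""
    n = len(X)
    w = np.full(n, 1.0 / n)              # start with uniform sample weights
    learners = []                         # list of (threshold, polarity, alpha)
    for _ in range(n_rounds):
        # exhaustively pick the stump (threshold, polarity) with lowest weighted error
        best = None
        for thr in X:
            for pol in (1, -1):
                pred = pol * np.sign(X - thr + 1e-12)
                err = w[pred != y].sum()
                if best is None or err < best[0]:
                    best = (err, thr, pol)
        err, thr, pol = best
        err = max(err, 1e-10)            # avoid log(0) for a perfect stump
        alpha = 0.5 * np.log((1 - err) / err)   # weight of this weak learner
        pred = pol * np.sign(X - thr + 1e-12)
        w *= np.exp(-alpha * y * pred)   # up-weight the samples it got wrong
        w /= w.sum()
        learners.append((thr, pol, alpha))
    return learners

def predict(learners, X):
    # final output: sign of the alpha-weighted sum of the weak learners' outputs
    score = sum(a * p * np.sign(X - t + 1e-12) for t, p, a in learners)
    return np.sign(score)
```

No single stump can separate a "low, high, low" pattern, but the weighted sum of a few stumps can, which is exactly the boosting effect described above.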
The success of deep learning over the last decade, particularly in computer vision, has depended greatly on large training data sets. Even though progress in this area boosted the performance of many tasks such as object detection, recognition, and segmentation, the main bottleneck for future improvement is the need for more labeled data. Self-supervised learning is among the best alternatives for learning useful representations from the data. In this article, we will briefly review the self-supervised learning methods in the literature and discuss the findings of a recent self-supervised learning paper from ICLR 2020. The hope is that many learning problems can be tackled by combining a small amount of clean labels with large quantities of data exploited in an unsupervised way.
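One of the simplest pretext tasks in the self-supervised literature is rotation prediction: each unlabeled image is rotated by a multiple of 90 degrees, and a network is trained to predict which rotation was applied, so the labels come for free. A minimal sketch of the data side of that task, assuming square numpy images (the function name is illustrative):

```python
import numpy as np

def make_rotation_task(images):
    """Turn a batch of unlabeled images into a 4-way classification task:
    every image is rotated by 0/90/180/270 degrees, and the rotation
    index becomes the (free) supervised label."""
    xs, ys = [], []
    for img in images:
        for k in range(4):                 # k quarter-turns
            xs.append(np.rot90(img, k))
            ys.append(k)
    return np.stack(xs), np.array(ys)
```

A network trained on this surrogate task must learn object shape and orientation cues, and its intermediate representations can then be reused for the real downstream task with far fewer human labels.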
For any large-scale computer vision application, one of the critical criteria for success is the quality and quantity of the training dataset used to train the relevant machine learning model. Open-source datasets such as ImageNet are sufficient to train machine learning models for computer vision applications that do not require high accuracy or are not too complicated. But for more complex use cases, such as autonomous driving, safety monitoring systems, and medical image diagnosis, obtaining a large amount of high-quality training data can be quite challenging. In this article, we take a look at how to quickly create (including collection, labelling, and quality inspection) high-quality training datasets for various computer vision scenarios. Different types of machine learning modelling methods may use different types of training data. The main difference between data types is the degree to which the data is labelled.
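To make the "degree of labelling" concrete: an image-level tag is the cheapest annotation, a bounding box is costlier, and a per-pixel segmentation mask is the costliest, and the coarser forms can always be derived from the finer ones but never the reverse. A minimal numpy sketch of one such derivation (the function name is ours, not from the article):

```python
import numpy as np

def mask_to_bbox(mask):
    """Derive an [x, y, width, height] bounding box from a binary
    segmentation mask. The opposite direction (box -> pixel mask)
    is impossible without extra human labelling effort."""
    ys, xs = np.nonzero(mask)            # coordinates of labelled pixels
    x, y = xs.min(), ys.min()
    return [int(x), int(y), int(xs.max() - x + 1), int(ys.max() - y + 1)]
```

This asymmetry is why annotation cost, not collection cost, usually dominates the budget of a complex computer vision dataset: teams pay for the finest granularity the model actually needs.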
Radiant Earth Foundation has released "LandCoverNet," a human-labelled global land cover classification training dataset. This release contains data across Africa, which accounts for one fifth of the global dataset. Available for download on Radiant MLHub, the open geospatial library, LandCoverNet will enable accurate and regular land cover mapping for timely insights into natural and anthropogenic impacts on the Earth. Global land cover maps derived from Earth observations are not new, but the influx of open-access, high-spatial-resolution Earth observations, such as those from the European Space Agency's Sentinel missions, coupled with improved computing power, has encouraged the development of advanced algorithms. Machine learning models applied to high-resolution remotely sensed imagery can classify land cover more accurately and faster, given the availability of high-quality training data.
Richard Harmon, Managing Director of Financial Services at Cloudera, discusses the importance of relevant machine learning models in today's age, and how the financial sector can prepare for future changes. The past six months have been turbulent. Business disruptions and closures are happening at an unprecedented scale and impacting the economy in a profound way. In the financial services sector, S&P Global estimates that UK bank credit losses could quadruple this year. The economic uncertainty in the UK is heightened by Brexit, which will see the UK complete its departure from the European Union at the start of 2021.