Collaborating Authors

Machine Learning

Convolutional Neural Network


Description Artificial intelligence is a broad field comprising many techniques for making machines behave intelligently. In this course, we investigate how human intelligence can be mimicked by machines by introducing a modern artificial intelligence algorithm, the convolutional neural network (CNN), a deep learning technique that enables computers to learn and become expert. We present an overview of deep learning, introducing the notion and classification of convolutional neural networks, and give the definition and advantages of CNNs. We also provide practical guidance for designing your own CNN architecture, along with the hardware and software used to build a CNN model. Finally, we present the limitations and future challenges of CNNs.
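As a concrete illustration of the operation that gives CNNs their name, here is a minimal NumPy sketch of a 2D convolution (strictly, cross-correlation, as most deep learning frameworks implement it). The function name, the image, and the edge-detection kernel are illustrative assumptions, not material from the course:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation, the core operation of a CNN layer."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    oh, ow = ih - kh + 1, iw - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # Element-wise product of the kernel with the patch it covers
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A 3x3 vertical-edge kernel slid over a 5x5 image yields a 3x3 feature map
image = np.arange(25, dtype=float).reshape(5, 5)
kernel = np.array([[1., 0., -1.],
                   [1., 0., -1.],
                   [1., 0., -1.]])
feature_map = conv2d(image, kernel)
print(feature_map.shape)  # (3, 3)
```

In a trained CNN the kernel weights are learned rather than hand-specified, and many such kernels run in parallel to produce a stack of feature maps.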

Understanding interfaces of hybrid materials with machine learning


Using machine learning methods, researchers at TU Graz can predict the structure formation of functionalized molecules at the interfaces of hybrid materials. Now they have also succeeded in looking behind the driving forces of this structure formation. The production of nanomaterials involves self-assembly processes of functionalized (organic) molecules on inorganic surfaces. This combination of organic and inorganic components is essential for applications in organic electronics and other areas of nanotechnology. Until now, certain desired surface properties were often achieved on a trial-and-error basis. Molecules were chemically modified until the best result for the desired surface property was found.

Journey to the center of the neuron


Every single one of your thoughts is made possible by your biological neurons. And behind many of the most useful A.I. architectures is an entity inspired by them. Neurons are at the epicenter of the processing that underpins the complexity produced by intelligent systems. Curious to know more about the engine of your thoughts and how it compares to its artificial counterpart? A.I. neurons were originally inspired by our biological ones, yet they are very different.

AI Augmentation in Healthcare and Life Sciences


Artificial Intelligence (AI) can provide humans with great relief from numerous repetitive tasks, with automation increasing productivity. Furthermore, AI-powered machines and devices are fast and efficient: learning, predicting, and deciding with superhuman accuracy. Concurrently, the use cases for augmentative AI are expanding, with numerous industries and organizations looking to tap into its potential. This is clearly the situation in two key Healthcare and Life Sciences (HCLS) areas: neuroscience and radiology. In both areas, radiologists, MRI technicians, and physicians must spend hours sifting through images, searching for markers and anomalies that may spur a disease diagnosis.

Hive's cloud-hosted machine learning models draw $85M


While cloud computing continues to gain favor, only a limited number of companies have embraced machine learning based in the cloud. Hive wants to change this by allowing enterprises to access hosted machine learning models via APIs. Hive has had particular success in the area of content moderation, thanks to its deep learning models that help companies interpret unstructured data, like images, videos, and audio. But it's also expanding into areas like advertising and sponsorship measurement as it seeks to find other areas that would benefit from intelligent automation. In an interview with VentureBeat, Hive CEO Kevin Guo said the company kept relatively quiet as it sought to prove its models work.

Are medical AI devices evaluated appropriately?


In just the last two years, artificial intelligence has become embedded in scores of medical devices that offer advice to ER doctors, cardiologists, oncologists, and countless other health care providers. The Food and Drug Administration has approved at least 130 AI-powered medical devices, half of them in the last year alone, and the numbers are certain to surge far higher in the next few years. Several AI devices aim at spotting and alerting doctors to suspected blood clots in the lungs. Some analyze mammograms and ultrasound images for signs of breast cancer, while others examine brain scans for signs of hemorrhage. Cardiac AI devices can now flag a wide range of hidden heart problems.

How To Ensure Your Machine Learning Models Aren't Fooled - InformationWeek


All neural networks are susceptible to "adversarial attacks," where an attacker provides an example intended to fool the neural network. Any system that uses a neural network can be exploited. Luckily, there are known techniques that can mitigate or even prevent adversarial attacks completely. The field of adversarial machine learning is growing rapidly as companies realize the dangers of adversarial attacks. We will look at a brief case study of face recognition systems and their potential vulnerabilities.
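To make the notion of an adversarial example concrete, here is a hedged sketch of the classic fast gradient sign method (FGSM) against a toy logistic-regression "network" in NumPy. The model weights, the input, and the perturbation budget are made-up assumptions for illustration, not from the article:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """Fast gradient sign method: nudge x in the direction that increases loss.

    For logistic regression with cross-entropy loss, the gradient of the
    loss w.r.t. the input is (p - y) * w, where p = sigmoid(w.x + b).
    """
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

# Toy model that classifies x correctly before the attack
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])           # w.x + b = 1.5 -> p ~ 0.82 -> class 1
y = 1.0
x_adv = fgsm(x, y, w, b, eps=1.0)  # each coordinate moves against the model

p_clean = sigmoid(w @ x + b)
p_adv = sigmoid(w @ x_adv + b)
print(p_clean > 0.5, p_adv > 0.5)  # correct before, fooled after
```

The same idea scales to deep networks, where the input gradient comes from backpropagation; defenses such as adversarial training work by folding examples like `x_adv` back into the training set.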

Companies are Investing in Machine Learning in 2021. Why?


According to reports, machine learning engineer is one of the most sought-after jobs of 2021, with companies in high demand for engineers who can build algorithms that enable business growth and efficiency. Disruptive technology is no longer a stranger: companies are pouring money into the development and deployment of cutting-edge technologies and automation, and are adopting business intelligence and automation to improve their services and gain deeper insight into their business.

Tilted empirical risk minimization


Classical ERM (t → 0) minimizes the average loss and is shown in pink. As t → −∞ (blue), TERM finds a line of best fit while ignoring outliers. In some applications, these 'outliers' may correspond to minority samples that should not be ignored. As t → +∞ (red), TERM recovers the min-max solution, which minimizes the worst loss. This can ensure the model is a reasonable fit for all samples, reducing unfairness related to representation disparity.
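The tilted objective itself is short. Assuming the standard form from the TERM paper, R(t; θ) = (1/t) log((1/N) Σᵢ exp(t · ℓᵢ(θ))), here is a minimal NumPy sketch showing how the tilt parameter t interpolates between average, max, and min loss; the loss values are made-up numbers for illustration:

```python
import numpy as np

def tilted_loss(losses, t):
    """Tilted empirical risk: (1/t) * log(mean(exp(t * losses))).

    t -> 0    recovers the average loss (classical ERM),
    t -> +inf approaches the max (worst-case) loss,
    t -> -inf approaches the min loss.
    """
    losses = np.asarray(losses, dtype=float)
    # log-sum-exp trick for numerical stability
    m = np.max(t * losses)
    return (m + np.log(np.mean(np.exp(t * losses - m)))) / t

losses = [0.1, 0.2, 0.3, 5.0]          # one outlier with a large loss

avg_like = tilted_loss(losses, 1e-6)   # ~ mean(losses) = 1.4
max_like = tilted_loss(losses, 100.0)  # ~ max(losses)  = 5.0
min_like = tilted_loss(losses, -100.0) # ~ min(losses)  = 0.1
```

Negative t downweights the outlier's contribution (the blue fit above), while large positive t makes the worst-case sample dominate (the red min-max fit).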

[D] Complexity of Time Series Models: ARIMA vs. LSTM


Does this concept of VC dimension carry over to models in time series analysis? Is it possible to show that LSTMs have a higher VC dimension than ARIMA-style models? Supposedly, neural-network-based time series models were developed because models like ARIMA were unable to provide reliable estimates for larger and more complex datasets. Mathematically speaking, what allows an LSTM to capture more variation and complexity in a dataset compared to ARIMA? And as a general question: in what instances would it be better to use a CNN for time series forecasting rather than an LSTM?