Sen, Jaydip, Mehtab, Sidra, Sen, Rajdeep, Dutta, Abhishek, Kherwa, Pooja, Ahmed, Saheel, Berry, Pranay, Khurana, Sahil, Singh, Sonali, Cadotte, David W. W, Anderson, David W., Ost, Kalum J., Akinbo, Racheal S., Daramola, Oladunni A., Lainjo, Bongs
Recent times are witnessing rapid development in machine learning systems, especially in reinforcement learning, natural language processing, computer and robot vision, image processing, and speech and emotion processing and understanding. In tune with the increasing importance and relevance of machine learning models, algorithms, and their applications, and with the emergence of more innovative use cases of deep learning and artificial intelligence, the current volume presents a few innovative research works and their real-world applications, such as stock trading, medical and healthcare systems, and software automation. The chapters in the book illustrate how machine learning and deep learning algorithms and models are designed, optimized, and deployed. The volume will be useful for advanced graduate and doctoral students, researchers, faculty members of universities, practicing data scientists and data engineers, and professionals and consultants working in the broad areas of machine learning, deep learning, and artificial intelligence.
Artificial intelligence (AI) has become a part of everyday conversation and our lives. It is considered the new electricity that is revolutionizing the world. Both industry and academia are investing heavily in AI. However, there is also a lot of hype in the current AI debate. AI based on so-called deep learning has achieved impressive results on many problems, but its limits are already visible. AI has been under research since the 1940s, and the field has seen many ups and downs driven by over-expectations and the disappointments that have followed. The purpose of this book is to give a realistic picture of AI: its history, its potential, and its limitations. We believe that AI is a helper, not a ruler of humans. We begin by describing what AI is and how it has evolved over the decades. After the fundamentals, we explain the importance of massive data for the current mainstream of artificial intelligence. The most common representations and methods for AI, including machine learning, are covered. In addition, the main application areas are introduced. Computer vision has been central to the development of AI. The book provides a general introduction to computer vision and includes an exposure to the results and applications of our own research. Emotions are central to human intelligence, but little use has been made of them in AI. We present the basics of emotional intelligence and our own research on the topic. We discuss super-intelligence that transcends human understanding, explaining why such an achievement seems impossible on the basis of present knowledge, and how AI could be improved. Finally, we summarize the current state of AI and what remains to be done. In the appendix, we look at the development of AI education, especially from the perspective of curriculum contents at our own university.
The TriRhenaTech alliance presents the accepted papers of the 'Upper-Rhine Artificial Intelligence Symposium' held on October 27th, 2021 in Kaiserslautern, Germany. Topics of the conference are applications of Artificial Intelligence in the life sciences, intelligent systems, Industry 4.0, mobility, and others. The TriRhenaTech alliance is a network of universities in the Upper-Rhine Trinational Metropolitan Region comprising the German universities of applied sciences in Furtwangen, Kaiserslautern, Karlsruhe, Offenburg, and Trier, the Baden-Wuerttemberg Cooperative State University Loerrach, the French university network Alsace Tech (comprising 14 'grandes écoles' in the fields of engineering, architecture, and management), and the University of Applied Sciences and Arts Northwestern Switzerland. The alliance's common goal is to reinforce the transfer of knowledge, research, and technology, as well as the cross-border mobility of students.
Electric vehicles have the potential to substantially reduce carbon emissions, but car companies are running out of materials to make batteries. One crucial component, nickel, is projected to cause supply shortages as early as the end of this year. Scientists recently discovered four new materials that could potentially help--and what may be even more intriguing is how they found these materials: the researchers relied on artificial intelligence to pick out useful chemicals to test from a list of more than 300 options. And they are not the only humans turning to A.I. for scientific inspiration. Creating hypotheses has long been a purely human domain.
Gupta, Abhishek, Royer, Alexandrine, Wright, Connor, Khan, Falaah Arif, Heath, Victoria, Galinkin, Erick, Khurana, Ryan, Ganapini, Marianna Bergamaschi, Fancy, Muriam, Sweidan, Masa, Akif, Mo, Butalid, Renjie
The 3rd edition of the Montreal AI Ethics Institute's The State of AI Ethics captures the most relevant developments in AI Ethics since October 2020. It aims to help anyone, from machine learning experts to human rights activists and policymakers, quickly digest and understand the field's ever-changing developments. Through research and article summaries, as well as expert commentary, this report distills the research and reporting surrounding various domains related to the ethics of AI, including: algorithmic injustice, discrimination, ethical AI, labor impacts, misinformation, privacy, risk and security, social media, and more. In addition, The State of AI Ethics includes exclusive content written by world-class AI Ethics experts from universities, research institutes, consulting firms, and governments. Unique to this report is "The Abuse and Misogynoir Playbook," written by Dr. Katlyn Turner (Research Scientist, Space Enabled Research Group, MIT), Dr. Danielle Wood (Assistant Professor, Program in Media Arts and Sciences; Assistant Professor, Aeronautics and Astronautics; Lead, Space Enabled Research Group, MIT) and Dr. Catherine D'Ignazio (Assistant Professor, Urban Science and Planning; Director, Data + Feminism Lab, MIT). The piece (and accompanying infographic) is a deep dive into the historical and systematic silencing, erasure, and revision of Black women's contributions to knowledge and scholarship in the United States, and globally. Exposing and countering this Playbook has become increasingly important following the firing of AI Ethics expert Dr. Timnit Gebru (and several of her supporters) at Google. This report should be used not only as a point of reference and insight on the latest thinking in the field of AI Ethics, but also as a tool for introspection as we aim to foster a more nuanced conversation regarding the impacts of AI on the world.
AI - Artificial Intelligence
AGI - Artificial General Intelligence
ANN - Artificial Neural Network
ANOVA - Analysis of Variance
ANT - Actor Network Theory
API - Application Programming Interface
APX - Amsterdam Power Exchange
AVE - Average Variance Extracted
BU - Business Unit
CART - Classification and Regression Tree
CBMV - Crowd-based Business Model Validation
CR - Composite Reliability
CT - Computed Tomography
CVC - Corporate Venture Capital
DR - Design Requirement
DP - Design Principle
DSR - Design Science Research
DSS - Decision Support System
EEX - European Energy Exchange
FsQCA - Fuzzy-Set Qualitative Comparative Analysis
GUI - Graphical User Interface
HI-DSS - Hybrid Intelligence Decision Support System
HIT - Human Intelligence Task
IoT - Internet of Things
IS - Information System
IT - Information Technology
MCC - Matthews Correlation Coefficient
ML - Machine Learning
OCT - Opportunity Creation Theory
OGEMA 2.0 - Open Gateway Energy Management 2.0
OS - Operating System
R&D - Research & Development
RE - Renewable Energies
RQ - Research Question
SVM - Support Vector Machine
SSD - Solid-State Drive
SDK - Software Development Kit
TCP/IP - Transmission Control Protocol/Internet Protocol
TCT - Transaction Cost Theory
UI - User Interface
VaR - Value at Risk
VC - Venture Capital
VPP - Virtual Power Plant
Artificial Intelligence (AI) has recently shown its capabilities in almost every field of life. Machine Learning, a subset of AI, is a 'hot' topic for researchers, and it outperforms classical forecasting techniques in almost all natural applications. It is a crucial part of modern research. However, modern Machine Learning algorithms are hungry for big data, and with small datasets researchers may be reluctant to use them. To tackle this issue, the main purpose of this survey is to illustrate and demonstrate related studies on the significance of a semi-parametric Machine Learning framework called Grey Machine Learning (GML). This kind of framework is capable of handling large as well as small datasets for time series forecasting. This survey presents a comprehensive overview of the existing semi-parametric machine learning techniques for time series forecasting and provides a primer on the GML framework for researchers. To allow an in-depth understanding for readers, a brief description of Machine Learning, as well as of various forms of conventional grey forecasting models, is given. Moreover, the importance of the GML framework is briefly described.
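The conventional grey forecasting models the survey builds on are well illustrated by the classic GM(1,1) model, which is designed precisely for the small-sample setting mentioned above. The sketch below is a minimal, generic GM(1,1) implementation (not the survey's own code): it accumulates the series, fits the development coefficient and grey input by least squares, and inverts the accumulation to forecast.

```python
import numpy as np

def gm11_forecast(x0, n_ahead=1):
    """Fit a GM(1,1) grey model to a short series x0 and forecast n_ahead steps."""
    x0 = np.asarray(x0, dtype=float)
    n = len(x0)
    x1 = np.cumsum(x0)                       # accumulated generating operation (AGO)
    z1 = 0.5 * (x1[1:] + x1[:-1])            # background values (adjacent means)
    # Least-squares estimate of the grey equation x0(k) + a*z1(k) = b
    B = np.column_stack([-z1, np.ones(n - 1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    # Time-response function of the whitened equation, then inverse AGO
    k = np.arange(n + n_ahead)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    x0_hat = np.empty_like(x1_hat)
    x0_hat[0] = x1_hat[0]
    x0_hat[1:] = np.diff(x1_hat)
    return x0_hat[n:]                        # only the out-of-sample forecasts
```

Because GM(1,1) assumes near-exponential growth in the accumulated series, it fits geometric trends from as few as four or five observations, which is exactly why grey models complement data-hungry machine learning methods.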
Uncertainty quantification (UQ) plays a pivotal role in reducing uncertainties during both optimization and decision-making processes. It can be applied to a variety of real-world problems in science and engineering. Bayesian approximation and ensemble learning techniques are the two most widely used UQ methods in the literature. In this regard, researchers have proposed different UQ methods and examined their performance in a variety of applications such as computer vision (e.g., self-driving cars and object detection), image processing (e.g., image restoration), medical image analysis (e.g., medical image classification and segmentation), natural language processing (e.g., text classification, social media texts, and recidivism risk-scoring), bioinformatics, etc. This study reviews recent advances in UQ methods used in deep learning. Moreover, we also investigate the application of these methods in reinforcement learning (RL). Then, we outline a few important applications of UQ methods. Finally, we briefly highlight the fundamental research challenges faced by UQ methods and discuss future research directions in this field.
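The ensemble-learning branch of UQ mentioned above can be illustrated with a toy sketch (not from the survey itself): fit an ensemble of models on bootstrap resamples of the data, then read the spread of their predictions as an uncertainty estimate. The data and model here are illustrative assumptions chosen for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: y = 2x + 1 plus Gaussian noise
x = np.linspace(0, 1, 50)
y = 2 * x + 1 + rng.normal(0, 0.1, size=x.size)

# Ensemble of linear fits, each trained on a bootstrap resample
preds = []
for _ in range(100):
    idx = rng.integers(0, x.size, x.size)    # sample with replacement
    coef = np.polyfit(x[idx], y[idx], deg=1)
    preds.append(np.polyval(coef, x))
preds = np.array(preds)

mean = preds.mean(axis=0)   # ensemble prediction
std = preds.std(axis=0)     # disagreement across members = uncertainty estimate
```

Deep ensembles apply the same idea with neural networks in place of the linear fits; Bayesian approximations such as Monte Carlo dropout instead sample predictions from one stochastic network.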
The amount of data for processing and categorization grows at an ever-increasing rate. At the same time, the demand for collaboration and transparency in organizations, government, and businesses drives the release of data from internal repositories to the public or third-party domain. This, in turn, increases the potential for sharing sensitive information. The leak of sensitive information can be very costly, both financially for organizations and for individuals. In this work we address the important problem of sensitive information detection, focusing specifically on detection in unstructured text documents. We show that simplistic, brittle rule sets for detecting sensitive information find only a small fraction of the actual sensitive information. Furthermore, we show that previous state-of-the-art approaches have been implicitly tailored to such simplistic scenarios and thus fail to detect actual sensitive content. We develop a novel family of sensitive information detection approaches that assumes only access to labeled examples, rather than unrealistic assumptions such as access to a set of generating rules or descriptive topical seed words. Our approaches are inspired by the current state-of-the-art for paraphrase detection, and we adapt deep learning approaches over recursive neural networks to the problem of sensitive information detection. We show that our context-based approaches significantly outperform the family of previous state-of-the-art, so-called keyword-based approaches for sensitive information detection on real-world data with human-labeled examples of sensitive and non-sensitive documents.
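The brittleness of keyword-based baselines that the abstract critiques is easy to demonstrate. The sketch below is a hypothetical rule set (the keywords and example sentences are illustrative, not taken from the paper): it flags a document only when an exact keyword appears, so any paraphrase of sensitive content slips through.

```python
import re

# Illustrative brittle rule set: a document is "sensitive" only if it
# contains one of these exact keywords.
SENSITIVE_KEYWORDS = {"salary", "diagnosis", "password", "ssn"}

def keyword_detector(text):
    """Flag text as sensitive iff an exact keyword token occurs in it."""
    tokens = set(re.findall(r"[a-z]+", text.lower()))
    return bool(tokens & SENSITIVE_KEYWORDS)
```

A sentence like "Her annual compensation was disclosed to the press" conveys the same sensitive fact as one containing "salary", yet the rule set misses it; context-based models trained on labeled examples are meant to close exactly this gap.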
A panel of judges at the International Conference for High Performance Computing, Networking, Storage and Analysis (SC19) on Thursday presented a multi-institutional team led by Lawrence Livermore National Laboratory computer scientists with the conference's Best Paper award. The paper, entitled "Massively Parallel Infrastructure for Adaptive Multiscale Simulations: Modeling RAS Initiation Pathway for Cancer," describes the workflow driving a first-of-its-kind multiscale simulation for predictively modeling the dynamics of RAS proteins -- a family of proteins whose mutations are linked to more than 30 percent of all human cancers -- and their interactions with lipids, the organic compounds that help make up cell membranes. Developed as part of the Pilot 2 project in the Joint Design of Advanced Computing for Cancer program, a collaboration between the Department of Energy (DOE) and National Cancer Institute (NCI), the research resulted in a Multiscale Machine-Learned Modeling Infrastructure (MuMMI) that investigators found was scalable to next-generation heterogeneous supercomputers such as LLNL's Sierra and Oak Ridge's Summit. Working for more than two years on the pilot project, which is funded by the National Nuclear Security Administration's Advanced Simulation and Computing program, the multidisciplinary team, composed of more than 20 computational scientists, biophysicists, chemists and statisticians from LLNL, Los Alamos National Laboratory, NCI/Frederick National Laboratory for Cancer Research, Oak Ridge National Laboratory (ORNL) and IBM, ran nearly 120,000 simulations on Sierra, using 5.6 million GPU hours of compute time and generating a massive 320 terabytes of data. "I can't begin to describe how happy I am for our team -- it's been a lot of hard work, and to have it recognized at this level is just amazing," said Francesco Di Natale, LLNL computer scientist and the paper's lead author.