Nexen Tire America Inc. has announced the development of an Artificial Intelligence (AI) and big-data-driven methodology aimed at reducing tire noise. The big data research for Noise, Vibration and Harshness (NVH) was jointly conducted with Hyundai-Kia Automotive Group and Inha University in Korea. Since 2018, Nexen Tire has conducted the joint research with long-standing partner Hyundai-Kia Automotive Group to increase customer satisfaction and improve the environment by reducing noise levels. More importantly, the research is set to make an impact that can help reduce research and development (R&D) time and costs. AI technology and big data are becoming an integral part of the Fourth Industrial Revolution, the future of mobility, and the fast-changing automotive industry.
Digital transformation is one of the top priorities for industrial companies. The largest players are already moving in this direction, having worked continuously for many years to improve production efficiency and launch large-scale optimisation programs. These programs go by names such as advanced analytics or digital innovation, but at their core the technology can be summarised as artificial intelligence. In all cases, the efforts to utilise AI models or data analytics systems are part of a larger digital transformation effort within these companies. In an industrial context, such strategies for cost-saving and process optimisation often start from pilot projects or are guided by top-management directives for digital change. In general, changes in processes or investments in capital-intensive and competitive industries require large sums of money. Traditional capital expenditures usually stretch over a long period, so a company's current financial standing may not allow for a complete physical overhaul of its plants or facilities. These high costs drive the search for cheaper alternatives.
Contemporary process-aware information systems can record the activities generated during process execution. To leverage these fine-granular, process-specific data, process mining has recently emerged as a promising research discipline. As an important branch of process mining, predictive business process management pursues the objective of generating forward-looking, predictive insights to shape business processes. In this study, we propose a conceptual framework that seeks to establish and promote understanding of the decision-making environment, the underlying business processes, and the nature of user characteristics for developing explainable business process prediction solutions. Consequently, with regard to the theoretical and practical implications of the framework, this study proposes a novel local post-hoc explanation approach for a deep learning classifier that is expected to help domain experts justify the model's decisions. In contrast to popular perturbation-based local explanation approaches, this study defines the local regions from the validation dataset by using the intermediate latent space representations learned by the deep neural network. To validate the applicability of the proposed explanation method, real-life process log data delivered by Volvo IT Belgium's incident management system are used. The adopted deep learning classifier achieves good performance, with an Area Under the ROC Curve of 0.94. The generated local explanations are also visualized and presented with relevant evaluation measures that are expected to increase users' trust in the black-box model.
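The latent-space idea behind this explanation approach can be illustrated with a minimal sketch. This is not the authors' implementation: the toy network, the `local_explanation` helper, and every parameter below are illustrative assumptions. The gist is to embed both the query case and the validation cases with the classifier's hidden layer, take the nearest validation cases in that latent space as the local region, and fit a simple linear surrogate to the black-box outputs over that region.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a trained deep classifier: one hidden layer, fixed weights.
W1 = rng.normal(size=(5, 8)); b1 = rng.normal(size=8)
W2 = rng.normal(size=(8, 1)); b2 = rng.normal(size=1)

def latent(x):
    """Intermediate latent-space representation (hidden-layer activations)."""
    return np.tanh(x @ W1 + b1)

def predict_proba(x):
    """Black-box output of the classifier."""
    return 1.0 / (1.0 + np.exp(-(latent(x) @ W2 + b2)))

# Validation set from which local regions are drawn.
X_val = rng.normal(size=(200, 5))

def local_explanation(x_query, k=25):
    """Define the local region as the k validation cases nearest to the
    query in latent space, then fit a linear surrogate there."""
    z_q = latent(x_query[None, :])
    z_val = latent(X_val)
    idx = np.argsort(np.linalg.norm(z_val - z_q, axis=1))[:k]
    X_loc = X_val[idx]
    y_loc = predict_proba(X_loc).ravel()        # black-box outputs, not labels
    A = np.hstack([X_loc, np.ones((k, 1))])     # add an intercept column
    coef, *_ = np.linalg.lstsq(A, y_loc, rcond=None)
    return coef[:-1]                            # one local weight per feature

weights = local_explanation(rng.normal(size=5))
```

The per-feature weights of the surrogate then serve as the local explanation for the query case; defining the neighbourhood in latent rather than input space is what distinguishes this from perturbation-based methods.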
As Artificial Intelligence (AI) technology advances and increasingly large amounts of data become readily available via various Industrial Internet of Things (IIoT) projects, we evaluate the state of the art of predictive maintenance approaches and propose an innovative framework to improve current practice. The paper first reviews the evolution of reliability modelling technology over the past 90 years and discusses major technologies developed in industry and academia. We then introduce the next-generation maintenance framework, Intelligent Maintenance, and discuss its key components. This AI- and IIoT-based Intelligent Maintenance framework is composed of (1) the latest machine learning algorithms, including probabilistic reliability modelling with deep learning; (2) real-time data collection, transfer, and storage through wireless smart sensors; (3) Big Data technologies; (4) continuous integration and deployment of machine learning models; and (5) mobile devices and AR/VR applications for faster and better decision-making in the field. In particular, we propose a novel probabilistic deep learning reliability modelling approach and demonstrate it on the Turbofan Engine Degradation Dataset.
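The core of probabilistic reliability modelling is predicting a distribution over remaining useful life (RUL) rather than a point estimate. The sketch below is a simplified stand-in for the authors' approach: it uses synthetic degradation data (not the real Turbofan dataset) and a linear model with two heads instead of a deep network, trained by minimising the Gaussian negative log-likelihood so that the predicted uncertainty grows as units age.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for run-to-failure data: RUL shrinks with usage,
# and the noise grows as units age (heteroscedastic degradation).
n = 500
usage = rng.uniform(0, 1, size=(n, 1))
rul = 100 * (1 - usage[:, 0]) + rng.normal(scale=5 + 10 * usage[:, 0])

# Standardise the target so gradient descent on the NLL stays stable.
y = (rul - rul.mean()) / rul.std()
Xb = np.hstack([usage, np.ones((n, 1))])    # feature plus bias column

# Probabilistic model: predict the mean and log-std of a Gaussian over RUL
# and minimise the negative log-likelihood by gradient descent.
w_mu = np.zeros(2)                          # parameters of the mean head
w_ls = np.zeros(2)                          # parameters of the log-std head
lr = 0.05
for _ in range(5000):
    mu = Xb @ w_mu
    inv_var = np.exp(-2 * (Xb @ w_ls))
    resid = y - mu
    # analytic gradients of the mean Gaussian NLL w.r.t. both heads
    w_mu -= lr * (Xb.T @ (-resid * inv_var)) / n
    w_ls -= lr * (Xb.T @ (1 - resid**2 * inv_var)) / n

mu = Xb @ w_mu
sigma = np.exp(Xb @ w_ls)                   # predictive uncertainty per unit
```

In a deep-learning version the two linear heads would be replaced by a shared neural network with mean and log-std outputs, but the loss and the interpretation of `sigma` as per-unit predictive uncertainty carry over unchanged.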
This study introduces SECODA, a novel general-purpose unsupervised non-parametric anomaly detection algorithm for datasets containing continuous and categorical attributes. The method is guaranteed to identify cases with unique or sparse combinations of attribute values. Continuous attributes are discretized repeatedly in order to correctly determine the frequency of such value combinations. The concepts of constellations, exponentially increasing weights, and discretization cut points, as well as a pruning heuristic, are used to detect anomalies in an optimal number of iterations. Moreover, the algorithm has a low memory footprint and its runtime performance scales linearly with the size of the dataset. An evaluation with simulated and real-life datasets shows that the algorithm is able to identify many different types of anomalies, including complex multidimensional instances. An evaluation in terms of a data quality use case with a real dataset demonstrates that SECODA can bring relevant and practical value to real-world settings.
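The frequency-of-combinations idea at the heart of the method can be sketched in a few lines. This is only a simplified illustration, not the actual SECODA algorithm: it performs a single equal-width discretization pass and omits the repeated discretization, constellations, exponential weighting, and pruning described above. Cases whose joint combination of discretized and categorical values is rare receive low frequency scores and are therefore flagged as anomalous.

```python
import numpy as np

def combination_frequency_scores(X_cont, X_cat, n_bins=5):
    """Score each case by the frequency of its joint combination of
    discretized continuous values and categorical values.
    Low scores indicate rare (anomalous) combinations."""
    # Equal-width discretization of each continuous column.
    binned = np.empty(X_cont.shape, dtype=int)
    for j in range(X_cont.shape[1]):
        col = X_cont[:, j]
        edges = np.linspace(col.min(), col.max(), n_bins + 1)
        binned[:, j] = np.digitize(col, edges[1:-1])   # bin index 0..n_bins-1
    # Join discretized and categorical attributes into combination keys.
    keys = [tuple(binned[i]) + tuple(X_cat[i]) for i in range(len(X_cont))]
    counts = {}
    for k in keys:
        counts[k] = counts.get(k, 0) + 1
    return np.array([counts[k] for k in keys])

rng = np.random.default_rng(0)
X_cont = rng.normal(size=(300, 2))
X_cat = rng.choice(["a", "b"], size=(300, 1))
X_cont[0] = [8.0, -8.0]            # inject an obvious multidimensional outlier
scores = combination_frequency_scores(X_cont, X_cat)
```

The injected outlier ends up alone in the extreme bins of both continuous attributes, so its combination frequency is 1, the lowest possible score; the full algorithm refines such scores over multiple iterations with progressively finer cut points.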
For some time now, the United Arab Emirates (UAE) has been adopting artificial intelligence (AI) in the public and business sectors. This is part of the Gulf country's economic diversification strategy, aimed at transforming the UAE from an oil-dependent economy into a knowledge-based one. AI is generally conceived as human intelligence processes simulated by computer systems, including learning, reasoning, problem-solving, planning, predictive analytics, and advanced robotics. Like other Arab states, the UAE has advanced a public discourse based on a dominant narrative of nationalism, which is meant to solidify its image while reinforcing the Emirati rulers' power and legitimacy. Its foundational theme is made up of different frames such as diversity, tolerance, moderation, international cooperation, humanitarianism, and modernity. State leaders use narratives not only to persuade and influence national and international audiences of the state's image and self-perception, but also as a means of defining its place and purpose in the international system.
According to a Nielsen report, brick-and-mortar alcohol dollar sales were up 21% in April 2020 compared to the same period a year ago. Online alcohol sales skyrocketed by 234% over the same period in 2019. However, despite the increase, global sales are decreasing due to the shutdowns in restaurants, bars, live events and travel. Next Century Spirits is a liquor technology startup with $9.6 M in funding. The company uses big data and machine learning to create and filter bespoke distilled spirits.
The implementation of robust, stable, and user-centered data analytics and machine learning models faces numerous challenges in production and manufacturing. A systematic approach is therefore required to develop, evaluate, and deploy such models. The data-driven knowledge discovery framework provides an orderly partition of the data-mining process to ensure the practical implementation of data analytics and machine learning models. However, the practical application of robust, industry-specific data-driven knowledge discovery models faces multiple data- and model-development-related issues. These issues should be carefully addressed through a flexible, customized, and industry-specific knowledge discovery framework; in our case, this takes the form of the cross-industry standard process for data mining (CRISP-DM). This framework is designed to ensure active cooperation between the different phases so that data- and model-related issues are adequately addressed. In this paper, we review several extensions of the CRISP-DM model and various data-robustness- and model-robustness-related problems in machine learning, a field that currently lacks proper cooperation between data experts and business experts because of the limitations of data-driven knowledge discovery models.
Over the past decade terms such as "Data Science", "Big Data", "Data Lake", "Machine Learning", "AI" and so forth have risen to the forefront (and sometimes fallen back again) of the everyday vocabulary used in the widest variety of industries. I do not wish to engage in an extended argument on consistent nomenclature, but there are two frequently used terms that are of particular interest to me: "Data Scientist" and "Machine Learning Engineer". In the broadest possible sense, both of these terms could be understood as referring to "technically skilled people who build machine learning solutions". "Data Scientist" is a term that over the years has become associated with a sort of generalist mathematician or statistician who can also code a bit and knows how to interpret and visualise data. More recently, the term "Machine Learning Engineer" has become associated with software developers who have picked up some mathematics along the way.