
What are the Most In-demand Skills in Artificial Intelligence?


Artificial Intelligence (AI) is intelligence exhibited by machines. In Computer Science, AI research deals with how to create computers that are capable of intelligent behavior. AI has been defined in numerous ways, but in general it can be described as a way of making a computer system "smart": able to understand complex tasks and carry out complex commands. The principal benefit of AI is that it can help humans make better decisions by providing insights and recommendations informed by data. AI is being employed in a growing number of industries, including healthcare, finance, manufacturing, and transportation. Some of its most remarkable applications are in robotics, where AI is used to create machines that can carry out complex tasks.

Graph Neural Network -- Attention Mechanism Applied and Computational Improvement


"Graph" is a special data structure that has become ubiquitous in the internet era. For instance, people's relationships on social networks can be viewed as a graph. Even the interaction dynamics of athletes, say fighters in a match, can be viewed as a graph, with the fighters as nodes and their engagements as edges. Whenever we consider interactions among different entities, a graph is a natural representation: the entities become nodes, and the interactions become the edges between them.
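As a minimal sketch of the idea (not the paper's model), the following toy example represents a graph as an adjacency list and aggregates each node's neighbor features with attention weights, in the spirit of graph attention networks. The graph, features, and dot-product scoring rule are illustrative assumptions, not taken from the article.

```python
import numpy as np

# Toy graph as an adjacency list: node -> list of neighbors.
# Edges model interactions, e.g. friendships on a social network.
adjacency = {0: [1, 2], 1: [0, 2], 2: [0, 1]}

# One 2-dimensional feature vector per node.
features = np.array([[1.0, 0.0],
                     [0.0, 1.0],
                     [1.0, 1.0]])

def attention_aggregate(node):
    """Aggregate neighbor features, weighted by a softmax over
    dot-product attention scores (a simplified GAT-style layer)."""
    neigh = adjacency[node]
    scores = np.array([features[node] @ features[j] for j in neigh])
    weights = np.exp(scores) / np.exp(scores).sum()  # softmax over neighbors
    return (weights[:, None] * features[neigh]).sum(axis=0)

h0 = attention_aggregate(0)  # updated representation of node 0
```

Here node 0 attends more strongly to node 2, whose features are more similar to its own, which is exactly the behavior an attention mechanism adds over plain neighborhood averaging.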

AI is 'Better Than' Humans and That is Ok


Remember how, in 2017, Elon Musk said that artificial intelligence would replace humanity within the next five years? While working on artificial intelligence for Tesla cars, he concluded that society had reached the moment when artificial intelligence could become significantly smarter than people. "People should not underestimate the power of the computer," Musk said. "This is pride and an obvious mistake." He should know what he is talking about, being one of the early investors in DeepMind, the Google subsidiary that developed AI systems capable of beating humans at Go and chess. AI is genuinely good at many "human" tasks: diagnosing diseases, translating languages, and serving customers.

The New Intelligence Game


The relevance of the video is that the browser identified the application being used by the IAI as Google Earth. According to the OSC 2006 report, the Arabic-language caption reads "Islamic Army in Iraq/The Military Engineering Unit – Preparations for Rocket Attack", and the video was recorded on 5/1/2006; we provide, in Appendix A, a reproduction of the screenshot made available in the OSC report. Prior to the release of this video demonstration of the use of Google Earth to plan attacks, according to the OSC 2006 report, discussions had already taken place in the OSC-monitored online forums on the use of Google Earth as a GEOINT tool for terrorist planning. On August 5, 2005, the user "Al-Illiktrony" posted a message to the Islamic Renewal Organization forum titled "A Gift for the Mujahidin, a Program To Enable You to Watch Cities of the World Via Satellite". In this post the author dedicated Google Earth to the mujahidin brothers and to Shaykh Muhammad al-Mas'ari. The post was answered in the forum by "Al-Mushtaq al-Jannah", who warned that Google programs retain complete information about their users. This is a relevant issue, but there are two caveats. First, given the number of Google Earth users, it may be difficult for Google to flag a jihadist using the functionality in time to prevent an attack plan. One possible solution would be for Google to flag computers based on searched websites and locations (for instance, computers that visit certain critical sites), but this breaks down when landmarks are used. Second, an attacker need not use his own computer to perform the search, and may even mask the IP address.
On October 3, 2005, as described in the OSC 2006 report, in a reply to a posting by Saddam Al-Arab on the Baghdad al-Rashid forum requesting the identification of a roughly sketched map, "Almuhannad" posted a link to a site that provided a free download of Google Earth, suggesting that the satellite imagery from Google's service could help identify the sketch.

Technology Ethics in Action: Critical and Interdisciplinary Perspectives Artificial Intelligence

This special issue interrogates the meaning and impacts of "tech ethics": the embedding of ethics into digital technology research, development, use, and governance. In response to concerns about the social harms associated with digital technologies, many individuals and institutions have articulated the need for a greater emphasis on ethics in digital technology. Yet as more groups embrace the concept of ethics, critical discourses have emerged questioning whose ethics are being centered, whether "ethics" is the appropriate frame for improving technology, and what it means to develop "ethical" technology in practice. This interdisciplinary issue takes up these questions, interrogating the relationships among ethics, technology, and society in action. This special issue engages with the normative and contested notions of ethics itself, how ethics has been integrated with technology across domains, and potential paths forward to support more just and egalitarian technology. Rather than starting from philosophical theories, the authors in this issue orient their articles around the real-world discourses and impacts of tech ethics--i.e., tech ethics in action.

Fractional SDE-Net: Generation of Time Series Data with Long-term Memory Machine Learning

Time series data appears in various areas, and its modeling, which enables us to understand more about the phenomena behind time evolution or to describe forthcoming scenarios, is a fundamental issue. In general, time series data is a series of data points indexed in time order and taken at successive, equally spaced points in time. Recently, generative models specializing in time series data using deep neural networks (DNNs) have been gathering attention. For learning a population distribution, the generative adversarial network (GAN) advocated in Goodfellow et al. [2014] is a basic approach, given the great success of GANs with images (Gui et al. [2021]). GANs have also been used for generating time series data: for instance, beginning with models based on recurrent neural networks (RNN-GAN) (Mogren [2016]), there are TimeGAN, which reflects time series structure (Yoon et al. [2019]), QuantGAN, which focuses on financial time series such as stock prices or exchange rates (Wiese et al. [2020]), and SigGAN, which uses the signature as a characteristic feature of time-series paths (Ni et al. [2020]).
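The "long-term memory" in the paper's title refers to processes like fractional Brownian motion, whose increments remain correlated over long horizons when the Hurst exponent H exceeds 0.5. As a hedged illustration (not the paper's Fractional SDE-Net), the sketch below samples such a path exactly from its covariance via a Cholesky factorisation; the function name and parameters are my own choices.

```python
import numpy as np

def fbm(n, hurst, T=1.0, seed=0):
    """Sample a fractional Brownian motion path on [0, T] by Cholesky
    factorisation of its covariance matrix (exact, but O(n^3))."""
    t = np.linspace(T / n, T, n)
    s, u = np.meshgrid(t, t)
    # Cov(B_H(s), B_H(u)) = 0.5 * (s^2H + u^2H - |s - u|^2H)
    cov = 0.5 * (s**(2 * hurst) + u**(2 * hurst) - np.abs(s - u)**(2 * hurst))
    L = np.linalg.cholesky(cov)
    z = np.random.default_rng(seed).standard_normal(n)
    return np.concatenate([[0.0], L @ z])  # path starts at 0

path = fbm(200, hurst=0.7)  # H > 0.5 gives positively correlated increments
```

For large n, faster O(n log n) circulant-embedding methods are usually preferred; the Cholesky version is shown only because it maps directly onto the covariance formula.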

The Turing Trap: The Promise & Peril of Human-Like Artificial Intelligence Artificial Intelligence

In 1950, Alan Turing proposed an imitation game as the ultimate test of whether a machine was intelligent: could a machine imitate a human so well that its answers to questions were indistinguishable from a human's? Ever since, creating intelligence that matches human intelligence has implicitly or explicitly been the goal of thousands of researchers, engineers, and entrepreneurs. The benefits of human-like artificial intelligence (HLAI) include soaring productivity, increased leisure, and perhaps most profoundly, a better understanding of our own minds. But not all types of AI are human-like. In fact, many of the most powerful systems are very different from humans. So an excessive focus on developing and deploying HLAI can lead us into a trap. As machines become better substitutes for human labor, workers lose economic and political bargaining power and become increasingly dependent on those who control the technology. In contrast, when AI is focused on augmenting humans rather than mimicking them, humans retain the power to insist on a share of the value created. Furthermore, augmentation creates new capabilities and new products and services, ultimately generating far more value than merely human-like AI. While both types of AI can be enormously beneficial, there are currently excess incentives for automation rather than augmentation among technologists, business executives, and policymakers.

Lévy-Induced Stochastic Differential Equation Equipped with Neural Network for Time Series Forecasting Machine Learning

With the fast development of modern deep learning techniques, the study of dynamical systems and neural networks is increasingly benefiting each other in many different ways. Since uncertainties often arise in real-world observations, SDEs (stochastic differential equations) come to play an important role. To be more specific, in this paper we use a collection of SDEs equipped with neural networks to predict the long-term trend of noisy time series that exhibits large jumps and pronounced distribution shift. Our contributions are, first, that we explore SDEs driven by α-stable Lévy motion to model the time series data and solve the problem through neural network approximation. Second, we theoretically prove the convergence of the model and obtain the convergence rate. Finally, we illustrate our method by applying it to stock market time series prediction and find the convergence order of the error.
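To make the α-stable driving noise concrete, here is a minimal simulation sketch (an assumption-laden illustration, not the paper's neural SDE): symmetric α-stable increments are drawn with the standard Chambers-Mallows-Stuck method and fed into a plain Euler scheme, where the paper would instead learn the drift with a neural network. The linear drift and all parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

def stable_increment(alpha, size):
    """Symmetric alpha-stable samples via Chambers-Mallows-Stuck
    (valid for 0 < alpha < 2, alpha != 1)."""
    V = rng.uniform(-np.pi / 2, np.pi / 2, size)
    W = rng.exponential(1.0, size)
    return (np.sin(alpha * V) / np.cos(V)**(1 / alpha)
            * (np.cos(V - alpha * V) / W)**((1 - alpha) / alpha))

def euler_levy(x0, drift, alpha, T, n):
    """Euler scheme for dX = drift(X) dt + dL_alpha; the jump
    increments scale as dt**(1/alpha) rather than sqrt(dt)."""
    dt = T / n
    x = np.empty(n + 1)
    x[0] = x0
    dL = dt**(1 / alpha) * stable_increment(alpha, n)
    for k in range(n):
        x[k + 1] = x[k] + drift(x[k]) * dt + dL[k]
    return x

# Mean-reverting toy drift; alpha = 1.8 gives heavy-tailed jumps.
path = euler_levy(1.0, lambda x: -0.5 * x, alpha=1.8, T=1.0, n=500)
```

The key difference from a Brownian-driven SDE is the dt**(1/α) scaling and the occasional large jump in the increments, which is what makes such models attractive for series with distribution shift.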

How Technology Is Reshaping The Fashion Industry - fashionabc


Estimated to be worth $3T by the end of the decade, per CB Insights' Industry Analyst Consensus, the fashion industry is growing at a fast pace, led by cutting-edge technologies. Robots that sew and cut fabric, AI algorithms that predict style trends, VR mirrors in dressing rooms, shopping straight off the runway, and a number of other innovations show how technology is automating and evolving the industry. In 2016, Google collaborated with the online fashion platform Zalando and the production company Stinkdigital to launch a predictive design engine, Project Muze. The algorithm used a set of aesthetic parameters to train a neural network to comprehend colours, textures, and styles derived from the Google Fashion Trends Report and data sourced by Zalando, creating designs in sync with the style preferences identified by the network. Amazon is taking an algorithmic approach to fashion as well.

Forecasting: theory and practice Machine Learning

Forecasting has always been at the forefront of decision making and planning. The uncertainty that surrounds the future is both exciting and challenging, with individuals and organisations seeking to minimise risks and maximise utilities. The large number of forecasting applications calls for a diverse set of forecasting methods to tackle real-life challenges. This article provides a non-systematic review of the theory and the practice of forecasting. We provide an overview of a wide range of theoretical, state-of-the-art models, methods, principles, and approaches to prepare, produce, organise, and evaluate forecasts. We then demonstrate how such theoretical concepts are applied in a variety of real-life contexts. We do not claim that this review is an exhaustive list of methods and applications. However, we hope that our encyclopedic presentation will offer a point of reference for the rich work that has been undertaken over the last decades, with some key insights for the future of forecasting theory and practice. Given its encyclopedic nature, the intended mode of reading is non-linear. We offer cross-references to allow the readers to navigate through the various topics. We complement the theoretical concepts and applications covered by large lists of free or open-source software implementations and publicly-available databases.
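As a small taste of the classical methods such a review covers, here is simple exponential smoothing, one of the oldest practical forecasting techniques: the one-step-ahead forecast is a geometrically weighted average of past observations. The demand series and smoothing parameter below are illustrative, not from the article.

```python
def ses_forecast(series, alpha=0.3):
    """Simple exponential smoothing: recursively update the level
    l_t = alpha * y_t + (1 - alpha) * l_{t-1}; the final level is
    the one-step-ahead forecast."""
    level = series[0]  # initialise the level at the first observation
    for y in series[1:]:
        level = alpha * y + (1 - alpha) * level
    return level

demand = [100, 102, 101, 105, 107, 106]
forecast = ses_forecast(demand)  # one-step-ahead forecast, ~104.25
```

Larger alpha makes the forecast react faster to recent observations; smaller alpha smooths more aggressively, the basic bias-variance trade-off that more elaborate forecasting models refine.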