Future of AI Part 5: The Cutting Edge of AI

#artificialintelligence

Edmond de Belamy is a portrait painting created in 2018 with a Generative Adversarial Network by the Paris-based arts collective Obvious; it sold for $432,500 at Sotheby's in October 2018.
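Since the snippet names a Generative Adversarial Network, a minimal sketch of how a GAN is trained may help: a generator maps random noise to images while a discriminator learns to tell generated images from real ones, and the two are optimized against each other. The network sizes and the random stand-in data below are illustrative assumptions, not the pipeline Obvious actually used.

```python
# Minimal GAN training loop sketch (PyTorch). Networks, sizes, and data are
# illustrative placeholders, not the pipeline behind Edmond de Belamy.
import torch
import torch.nn as nn

latent_dim, img_dim = 64, 28 * 28

G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, img_dim), nn.Tanh())
D = nn.Sequential(nn.Linear(img_dim, 128), nn.LeakyReLU(0.2), nn.Linear(128, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real_batch = torch.rand(32, img_dim) * 2 - 1  # stand-in for a batch of real portraits

for step in range(100):
    # Discriminator step: label real images 1 and generated images 0.
    z = torch.randn(32, latent_dim)
    fake = G(z).detach()  # detach so this step only updates D
    d_loss = bce(D(real_batch), torch.ones(32, 1)) + bce(D(fake), torch.zeros(32, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: try to make D classify generated images as real.
    z = torch.randn(32, latent_dim)
    g_loss = bce(D(G(z)), torch.ones(32, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

The adversarial pairing is the key design point: the generator never sees real images directly, it only receives gradient signal through the discriminator's judgments.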


Future of AI Part 2

#artificialintelligence

This part of the series looks at the future of AI, with much of the focus on the period after 2025. Leading AI researcher Geoff Hinton has stated that it is very hard to predict what advances AI will bring beyond five years, noting that exponential progress makes the uncertainty too great. This article will therefore consider both the opportunities and the challenges that we will face along the way across different sectors of the economy. It is not intended to be exhaustive. Machine Learning is defined as the field of AI that applies statistical methods to enable computer systems to learn from data toward an end goal; the term was introduced by Arthur Samuel in 1959. Deep Learning refers to the field of Neural Networks with several hidden layers; such a neural network is often referred to as a deep neural network. Neural Networks are biologically inspired networks that extract abstract features from the data in a hierarchical fashion. Deep Reinforcement Learning will be considered in greater detail in Part 3 of this series. For the purpose of this article, I will consider AI to cover Machine Learning and Deep Learning. Narrow AI is the field of AI where the machine is designed to perform a single task and gets very good at performing that particular task.
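To make the definitions above concrete, here is a minimal sketch of a deep neural network in exactly that sense: several hidden layers, each transforming the previous layer's activations so that features become progressively more abstract with depth. The layer sizes, random weights, and input are arbitrary illustrative choices.

```python
# Minimal deep (multi-hidden-layer) neural network forward pass in NumPy.
# Sizes and input are arbitrary; this only illustrates the definition above.
import numpy as np

rng = np.random.default_rng(0)
layer_sizes = [10, 32, 16, 8, 2]  # input, three hidden layers, output

weights = [rng.standard_normal((m, n)) * 0.1 for m, n in zip(layer_sizes, layer_sizes[1:])]
biases = [np.zeros(n) for n in layer_sizes[1:]]

def forward(x):
    """Each hidden layer re-represents the previous layer's output,
    which is the hierarchical feature extraction described above."""
    for W, b in zip(weights[:-1], biases[:-1]):
        x = np.maximum(0.0, x @ W + b)  # ReLU hidden layers
    return x @ weights[-1] + biases[-1]  # linear output layer

print(forward(rng.standard_normal(10)))
```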


Guide to Interpretable Machine Learning

#artificialintelligence

If you can't explain it simply, you don't understand it well enough. Disclaimer: This article draws and expands upon material from (1) Christoph Molnar's excellent book on Interpretable Machine Learning, which I definitely recommend to the curious reader, (2) a deep learning visualization workshop from Harvard ComputeFest 2020, and (3) material from CS282R at Harvard University taught by Ike Lage and Hima Lakkaraju, who are both prominent researchers in the field of interpretability and explainability. This article is meant to condense and summarize the field of interpretable machine learning for the average data scientist and to stimulate interest in the subject. Machine learning systems are increasingly employed in complex, high-stakes settings such as medicine. Despite this increased utilization, there is still a lack of techniques for explaining and interpreting the decisions of these deep learning algorithms.
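As one concrete example of what this field offers, permutation feature importance is among the model-agnostic methods covered in Molnar's book: shuffle one feature's values and measure how much the model's score drops. Here is a minimal sketch with scikit-learn; the synthetic dataset and random-forest model are illustrative assumptions.

```python
# Permutation feature importance: a model-agnostic interpretability method.
# Shuffling a feature and measuring the score drop estimates how much the
# model relies on that feature. The data here is synthetic.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=6, n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

for i, (mean, std) in enumerate(zip(result.importances_mean, result.importances_std)):
    print(f"feature {i}: importance {mean:.3f} +/- {std:.3f}")
```

Because the importance is computed on held-out data, it reflects what the trained model actually uses, regardless of the model's internal structure.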


Data Science in Economics

arXiv.org Machine Learning

School of the Built Environment, Oxford Brookes University, Oxford, OX3 0BP, UK. Abstract: This paper provides the state of the art of data science in economics. Advances in data science are investigated through a novel taxonomy of applications and methods, across three classes of models: deep learning models, ensemble models, and hybrid models. Application domains include the stock market, marketing, e-commerce, corporate banking, and cryptocurrency. The PRISMA method, a systematic literature review methodology, is used to ensure the quality of the survey. The findings reveal a trend toward hybrid models, with more than 51% of the reviewed articles applying them; based on the RMSE accuracy metric, hybrid models also achieved higher prediction accuracy than the other algorithms, although the trend is expected to shift toward deep learning models. Abbreviations: LSDL, Large-Scale Deep Learning; LSTM, Long Short-Term Memory; LWDNN, List-Wise Deep Neural Network; MACN, Multi-Agent Collaborated Network; MB-LSTM, Multivariate Bidirectional LSTM; MDNN, Multilayer Deep Neural Network; MFNN, Multi-Filters Neural Network; MLP, Multi-Layer Perceptron; NNRE, Neural Network Regression Ensemble; O-LSTM, Optimal Long Short-Term Memory; PCA, Principal Component Analysis; pSVM, Proportion Support Vector Machines; RBFNN, Radial Basis Function Neural Network; RBM, Restricted Boltzmann Machine; REP, Reduced Error Pruning; RF, Random Forest; RFR, Random Forest Regression; RNN, Recurrent Neural Network; SAE, Stacked Autoencoders; SLR, Stepwise Linear Regressions; SN-CFM, Similarity, Neighborhood-Based Collaborative Filtering Model; STI, Stock Technical Indicators; SVM, Support Vector Machine; SVR, Support Vector Regression; SVRE, Support Vector Regression Ensemble; TDFA, Time-Driven Feature-Aware; TS-GRU, Two-Stream GRU; WA, Wavelet Analysis; WT, Wavelet Transforms. 1. Introduction: The application of data science in different disciplines is increasing exponentially, because data science has made tremendous progress in the analysis and use of data. Like other disciplines, economics has benefited from these advancements, which have been progressive and have recorded promising results in the literature.
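The survey ranks model families by RMSE. As a toy illustration of that kind of comparison (not the paper's experiments), the sketch below fits a single SVR, a random forest, and a simple hybrid (stacked) model on synthetic regression data and reports each model's RMSE; every model, dataset, and parameter here is an illustrative assumption.

```python
# Toy RMSE comparison of single models vs. a simple hybrid (stacked) model,
# illustrating the survey's accuracy metric. Synthetic stand-in data only.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor, StackingRegressor
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR

X, y = make_regression(n_samples=400, n_features=8, noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

models = {
    "SVR": SVR(),
    "RF": RandomForestRegressor(random_state=0),
    # "Hybrid" here means stacking two base learners under a linear meta-model.
    "hybrid": StackingRegressor(
        estimators=[("svr", SVR()), ("rf", RandomForestRegressor(random_state=0))],
        final_estimator=Ridge(),
    ),
}

for name, model in models.items():
    pred = model.fit(X_train, y_train).predict(X_test)
    rmse = np.sqrt(mean_squared_error(y_test, pred))  # lower is better
    print(f"{name}: RMSE = {rmse:.2f}")
```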


Alphabet's Next Billion-Dollar Business: 10 Industries To Watch - CB Insights Research

#artificialintelligence

Alphabet is using its dominance in the search and advertising spaces -- and its massive size -- to find its next billion-dollar business. From healthcare to smart cities to banking, here are 10 industries the tech giant is targeting. With growing threats from its big tech peers Microsoft, Apple, and Amazon, Alphabet's drive to disrupt has become more urgent than ever before. The conglomerate is leveraging the power of its first moats -- search and advertising -- and its massive scale to find its next billion-dollar businesses. To protect its current profits and grow more broadly, Alphabet is edging its way into industries adjacent to the ones where it has already found success and entering new spaces entirely to find opportunities for disruption. Evidence of Alphabet's efforts is showing up in several major industries. For example, the company is using artificial intelligence to understand the causes of diseases like diabetes and cancer and how to treat them. Those learnings feed into community health projects that serve the public, and also help Alphabet's effort to build smart cities. Elsewhere, Alphabet is using its scale to build a better virtual assistant and own the consumer electronics software layer. It's also leveraging that scale to build a new kind of Google Pay-operated checking account. In this report, we examine how Alphabet and its subsidiaries are currently working to disrupt 10 major industries -- from electronics to healthcare to transportation to banking -- and what else might be on the horizon. Within the world of consumer electronics, Alphabet has already found dominance with one product: Android. Globally, mobile operating system market share is dominated by Android, the Linux-based OS that Google acquired in 2005 to fend off Microsoft and Windows Mobile. Today, however, Alphabet's consumer electronics strategy is being driven by its work in artificial intelligence. Google is building some of its own hardware under the Made by Google line -- including the Pixel smartphone, the Chromebook, and the Google Home -- but the company is doing more important work on hardware-agnostic software products like Google Assistant (which is even available on iOS).


On the Convergence of Artificial Intelligence and Distributed Ledger Technology: A Scoping Review and Future Research Agenda

arXiv.org Artificial Intelligence

Developments in Artificial Intelligence (AI) and Distributed Ledger Technology (DLT) are currently the subject of lively debate in academia and practice. AI processes data to perform tasks that were previously thought possible only for humans. DLT acts in uncertain environments to create consensus over data among a group of participants. Recent articles show that the two technologies complement each other: examples include the design of secure distributed ledgers and the creation of allied learning systems distributed across multiple nodes. This can lead to technological convergence, which in the past has paved the way for major IT product innovations. Previous work highlights several potential benefits of the convergence of AI and DLT but provides only a limited theoretical framework for describing upcoming real-world integration cases of the two technologies. We aim to contribute by conducting a systematic literature review of previous work and by providing rigorously derived future research opportunities. Our analysis identifies how AI and DLT exchange data and how these integration principles can be used to build new systems. Based on that, we present open questions for future research. This work helps researchers active in AI or DLT to overcome current limitations in their field, and engineers to develop systems that build on the convergence of these technologies.
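To make the "AI and DLT exchange data" integration principle concrete, here is a toy sketch of one direction of that exchange: appending model outputs to a hash-linked ledger so they become tamper-evident. The `Block` structure, function names, and fake "prediction" payload are illustrative assumptions, not a design taken from the paper.

```python
# Toy illustration of one AI-to-DLT data flow: recording model outputs on a
# hash-linked ledger. The structure and payload are illustrative only.
import hashlib
import json
import time
from dataclasses import dataclass, field

@dataclass
class Block:
    index: int
    payload: dict
    prev_hash: str
    timestamp: float = field(default_factory=time.time)

    def hash(self) -> str:
        # Deterministic serialization so the hash is reproducible.
        blob = json.dumps(
            {"i": self.index, "p": self.payload, "prev": self.prev_hash, "t": self.timestamp},
            sort_keys=True,
        )
        return hashlib.sha256(blob.encode()).hexdigest()

chain = [Block(0, {"genesis": True}, "0" * 64)]

def record_prediction(model_id: str, prediction: float) -> None:
    """Append an AI model's output as a new block linked to the previous one."""
    prev = chain[-1]
    chain.append(Block(prev.index + 1, {"model": model_id, "pred": prediction}, prev.hash()))

def verify() -> bool:
    """Tampering with any earlier payload breaks the hash links downstream."""
    return all(chain[i].prev_hash == chain[i - 1].hash() for i in range(1, len(chain)))

record_prediction("demo-model", 0.87)
record_prediction("demo-model", 0.91)
print(verify())  # True
```

A real DLT adds consensus among multiple participants on top of this chaining; the sketch shows only the tamper-evidence half of the idea.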


Artificial Intelligence for Social Good: A Survey

arXiv.org Artificial Intelligence

AI's impact is drastic and real: YouTube's AI-driven recommendation system would present sports videos for days if one happens to watch a live baseball game on the platform [1]; email writing becomes much faster with machine learning (ML) based auto-completion [2]; and many businesses have adopted natural language processing based chatbots as part of their customer service [3]. AI has also greatly advanced human capabilities in complex decision-making processes, ranging from determining how to allocate security resources to protect airports [4] to games such as poker [5] and Go [6]. All such tangible and stunning progress suggests that an "AI summer" is happening. As some put it, "AI is the new electricity" [7]. Meanwhile, in the past decade, an emerging theme in the AI research community has been so-called "AI for social good" (AI4SG): researchers aim to develop AI methods and tools to address problems at the societal level and improve the well-being of society.


Tech's Biggest Leaps From the Last 10 Years, and Why They Matter

#artificialintelligence

As we enter the third decade of the 21st century, it seems appropriate to reflect on how technology developed and note the breakthroughs achieved in the last 10 years. The 2010s saw IBM's Watson win at Jeopardy!, ushering in mainstream awareness of machine learning, and DeepMind's AlphaGo defeat the world's Go champion. It was the decade in which industrial tools like drones, 3D printers, genetic sequencing, and virtual reality (VR) all became consumer products. And it was a decade in which some alarming trends related to surveillance, targeted misinformation, and deepfakes came online. For better or worse, the past decade was a breathtaking era in human history, in which the idea of exponential growth in information technologies powered by computation became a mainstream concept.


China's robotics market: Analyst looks ahead to 2020

#artificialintelligence

The Chinese robotics market is growing strongly, but not without growing pains. Trade tensions and a global economic slowdown, particularly in automotive manufacturing, have affected demand in the Chinese robotics market. However, interest in supply chain automation and political support for domestic innovation could encourage growth in 2020. This is Part 2 of The Robot Report's Q&A with Georg Stieler, managing director for Asia at international consulting firm STM Stieler. In Part 1, he discussed the state of the robotics market in China, looking at causes of the current slowdown and what types of robots are in demand.


Wayve raises $20 million to give autonomous cars better AI brains

#artificialintelligence

Wayve, a U.K.-based startup that's developing artificial intelligence (AI) that teaches cars to drive autonomously using reinforcement learning, simulation, and computer vision, has raised $20 million in a series A round of funding led by Palo Alto venture capital (VC) firm Eclipse Ventures, with participation from Balderton Capital, Compound Ventures, Fly Ventures, and First Minute Capital. Several notable angel investors also participated in the round, including Uber's chief scientist Zoubin Ghahramani and Pieter Abbeel, a UC Berkeley robotics professor and pioneer of deep reinforcement learning. Founded out of Cambridge, U.K., in 2017, Wayve's core premise is that the big breakthrough in self-driving cars will come from better AI brains rather than more sensors or "hand-coded" rules. The company said that it trains its autonomous driving system using simulated environments and then transfers that knowledge into the real world, where it emulates how humans adapt to conditions in real time. Wayve's systems learn from each safety driver intervention to understand why the driver had to intervene, bypassing HD maps, lidar, and other sensors that have become synonymous with the burgeoning autonomous vehicle movement.
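The article says Wayve's system learns from each safety-driver intervention but does not describe the algorithm. A DAgger-style imitation loop is one standard way to learn from interventions; the toy 1-D lane-keeping sketch below assumes that approach, with a proportional controller standing in for the safety driver. Everything here (the task, dynamics, and model) is an illustrative assumption, not Wayve's method.

```python
# Toy DAgger-style loop: learn a steering policy from safety-driver
# interventions on a 1-D lane-keeping task. Not Wayve's actual algorithm.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

def expert_steer(offset: float) -> float:
    """Stand-in safety driver: steer proportionally back toward lane center."""
    return -0.5 * offset

# Seed dataset of expert demonstrations (state = lateral offset from center).
X = rng.uniform(-1, 1, size=(20, 1))
y = np.array([expert_steer(s) for s in X.ravel()])
policy = LinearRegression().fit(X, y)

for episode in range(5):
    offset, interventions = rng.uniform(-1, 1), 0
    for step in range(50):
        action = policy.predict([[offset]])[0]
        if abs(offset) > 0.8:  # drifting out of lane: the driver intervenes
            action = expert_steer(offset)
            X = np.vstack([X, [[offset]]])  # record the corrected example
            y = np.append(y, action)
            interventions += 1
        offset += action + rng.normal(0, 0.05)  # simple dynamics plus noise
    policy = LinearRegression().fit(X, y)  # retrain on the aggregated data
    print(f"episode {episode}: {interventions} interventions")
```

The design point this illustrates is the one in the article: every intervention becomes a labeled training example, so the policy improves precisely in the states where it previously failed.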