The Future of AI Part 1


It was reported that venture capital investment in AI-related startups rose sharply in 2018, jumping 72% compared to 2017, even as the number of startups funded fell to 466 from 533 the year before. The PwC MoneyTree report stated that seed-stage deal activity among AI-related companies in the US rose to 28% of deals in the fourth quarter of 2018, compared to 24% in the three months prior, while expansion-stage deal activity jumped to 32% from 23%. International rivalry over global leadership in AI will continue to intensify. President Putin of Russia was quoted as saying that "the nation that leads in AI will be the ruler of the world", and billionaire Mark Cuban was reported by CNBC as stating that "the world's first trillionaire would be an AI entrepreneur".

Why Deep Learning DevCon Comes At The Right Time


The Association of Data Scientists (ADaSci) recently announced Deep Learning DEVCON, or DLDC 2020, a two-day virtual conference that aims to bring machine learning and deep learning practitioners and experts from industry onto a single platform to share and discuss recent developments in the field. Scheduled for 29th and 30th October, the conference comes at a time when deep learning, a subset of machine learning, has become one of the fastest-advancing technologies in the world. From natural language processing to self-driving cars, it has come a long way. In fact, reports suggest that the deep learning market is expected to grow at a CAGR of 25% through 2024. It can thus be argued that advancements in deep learning have only just begun and have a long road ahead.
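A 25% CAGR compounds quickly. A minimal sketch of what that rate implies, using a purely hypothetical base market size (the report's actual figure is not given here; only the 25% rate comes from the text):

```python
# Compound annual growth rate: size_t = size_0 * (1 + r) ** t.
# The base size and horizon below are illustrative assumptions.
def project(size_0, rate, years):
    """Project a market size forward at a fixed annual growth rate."""
    return size_0 * (1 + rate) ** years

base = 10.0  # hypothetical market size in $B
for t in range(5):
    print(f"year {t}: {project(base, 0.25, t):.2f} $B")
```

At 25% per year, the hypothetical market roughly doubles every three years.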

Report: State of Artificial Intelligence in India - 2020


Artificial Intelligence or AI is a field of Data Science that trains computers to learn from experience, adjust to inputs, and perform tasks of certain cognitive levels. Over the last few years, AI has emerged as a significant data science function and, by utilizing advanced algorithms and computing power, AI is transforming the functional, operational, and strategic landscape of various business domains. AI algorithms are designed to make decisions, often using real-time data. Using sensors, digital data, and even remote inputs, AI algorithms combine information from a variety of different sources, analyze the data instantly, and act on the insights derived from the data. Most AI technologies – from advanced recommendation engines to self-driving cars – rely on diverse deep learning models. By utilizing these complex models, AI professionals are able to train computers to accomplish specific tasks by recognizing patterns in the data. Analytics India Magazine (AIM), in association with Jigsaw Academy, has developed this study on the Artificial Intelligence market to understand the developments of the AI market in India, covering the market in terms of Industry and Company Type. Moreover, the study delves into the market size of the different categories of AI and Analytics startups / boutique firms. As a part of the broad Data Science domain, the Artificial Intelligence technology function has so far been classified as an emerging technology segment. Moreover, the AI market in India has, till now, been dominated by the MNC Technology and the GIC or Captive firms. Domestic firms, Indian startups, and even International Technology startups across various sectors have, so far, not made a significant investment, in terms of operations and scale, in the Indian AI market. Additionally, IT services and Boutique AI & Analytics firms had not, till a couple of years ago, developed full-fledged AI offerings in India for their clients.

Council Post: Symbolism Versus Connectionism In AI: Is There A Third Way?


It's an essential prerequisite for deciding how we want critical decisions about our health and well-being to be made -- possibly for a very long time to come. To understand why the "how" behind AI functionality is so important, we first have to appreciate the fact that there have historically been two very different approaches to AI. The first is symbolism, which deals with semantics and symbols. Many early AI advances took a symbolic approach to AI programming, striving to create smart systems by modeling relationships and using symbols and programs to convey meaning. But it soon became clear that one weakness of these semantic networks and this "top-down" approach was that true learning was relatively limited.
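The symbolic, "top-down" approach described above can be made concrete with a toy semantic network. All facts and relations below are illustrative; the point is that knowledge is hand-coded as explicit symbols and inference follows hand-written rules, with no learning involved:

```python
# A toy semantic network: facts are explicit (symbol, relation, symbol)
# triples, and inference follows hand-written rules over those symbols.
facts = {
    ("canary", "is_a", "bird"),
    ("bird", "is_a", "animal"),
    ("bird", "can", "fly"),
}

def is_a(x, y):
    """Follow is_a links transitively (e.g. canary -> bird -> animal)."""
    if (x, "is_a", y) in facts:
        return True
    parents = [b for (a, r, b) in facts if a == x and r == "is_a"]
    return any(is_a(p, y) for p in parents)

def can(x, ability):
    """Properties are inherited along is_a links."""
    if (x, "can", ability) in facts:
        return True
    parents = [b for (a, r, b) in facts if a == x and r == "is_a"]
    return any(can(p, ability) for p in parents)
```

Here `is_a("canary", "animal")` and `can("canary", "fly")` both hold by inference, but the system knows nothing it was not explicitly told, which is exactly the limitation the article points to.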

Image Classification Model


Image classification is one of the most important applications of computer vision. Its applications range from classifying objects in self-driving cars to identifying blood cells in the healthcare industry, and from spotting defective items in manufacturing to building systems that can tell whether a person is wearing a mask. Image classification is used in one way or another in all of these industries. Which framework do they use? You have probably read a lot about the differences between deep learning frameworks, including TensorFlow, PyTorch, Keras, and many more.
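Whichever framework is chosen, the core of an image classification model is the same. A framework-agnostic NumPy sketch (untrained, randomly initialized parameters; the 28x28 input and 10-class output are illustrative assumptions): flatten the image, apply a learned linear map, and turn scores into class probabilities with softmax. Frameworks like TensorFlow, PyTorch, and Keras wrap this same pattern with convolutional layers and automatic training.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def classify(image, weights, bias):
    """image: (H, W) array; returns a probability per class."""
    x = image.reshape(-1)           # flatten H*W pixels into a vector
    logits = x @ weights + bias     # linear scoring layer
    return softmax(logits)

# Untrained parameters for a 28x28 input and 10 classes.
W = rng.normal(scale=0.01, size=(28 * 28, 10))
b = np.zeros(10)
probs = classify(rng.random((28, 28)), W, b)
```

The predicted class is simply `probs.argmax()`; training adjusts `W` and `b` so that the correct class gets the highest probability.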

State-of-the-art Techniques in Deep Edge Intelligence Artificial Intelligence

The potential held by the gargantuan volumes of data being generated across networks worldwide has been truly unlocked by machine learning techniques and, more recently, Deep Learning. The advantages offered by the latter have seen it rapidly become a framework of choice for various applications. However, the centralization of computational resources and the need for data aggregation have long been limiting factors in the democratization of Deep Learning applications. Edge Computing is an emerging paradigm that aims to utilize the hitherto untapped processing resources available at the network periphery. Edge Intelligence (EI) has quickly emerged as a powerful alternative to enable learning using the concepts of Edge Computing. Deep Learning-based Edge Intelligence, or Deep Edge Intelligence (DEI), lies in this rapidly evolving domain. In this article, we provide an overview of the major constraints in operationalizing DEI. The major research avenues in DEI have been consolidated under Federated Learning, Distributed Computation, Compression Schemes and Conditional Computation. We also present some of the prevalent challenges and highlight prospective research avenues.
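Of the research avenues listed, Federated Learning illustrates the edge-intelligence idea most directly: clients compute model updates on their private data and the server aggregates only parameters, never raw data. A minimal sketch of Federated Averaging (FedAvg) on a toy linear-regression task; the model, data, and learning rate are all illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def local_step(w, X, y, lr=0.1):
    """One gradient step on a client's private data (squared loss)."""
    grad = 2 * X.T @ (X @ w - y) / len(y)
    return w - lr * grad

def fedavg_round(w_global, clients):
    """Each client trains locally; server averages weighted by data size."""
    sizes = np.array([len(y) for _, y in clients])
    local_ws = [local_step(w_global.copy(), X, y) for X, y in clients]
    return sum(s * lw for s, lw in zip(sizes, local_ws)) / sizes.sum()

# Three clients, each holding private samples of the same linear task.
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(20, 2))
    clients.append((X, X @ true_w))

w = np.zeros(2)
for _ in range(200):
    w = fedavg_round(w, clients)
```

After enough rounds the aggregated model recovers the shared underlying parameters, even though the server never sees any client's data.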

A new way to train AI systems could keep them safer from hackers


The context: One of the biggest unsolved flaws of deep learning is its vulnerability to so-called adversarial attacks. When added to the input of an AI system, these perturbations, seemingly random or undetectable to the human eye, can make things go completely awry. Stickers strategically placed on a stop sign, for instance, can trick a self-driving car into seeing a 45-mph speed limit sign, while stickers on a road can confuse a Tesla into drifting into the wrong lane. Why it matters for safety: most adversarial research focuses on image recognition systems, but deep-learning-based image reconstruction systems are vulnerable too. This is especially troubling in healthcare, where the latter are often used to reconstruct medical images like CT or MRI scans from X-ray data.
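The mechanics of such perturbations can be shown on a toy model in the style of the fast gradient sign method (FGSM): nudge every input feature a small step in the direction that increases the model's loss. The "model" below is a fixed logistic-regression classifier with made-up weights, not any real system:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.array([1.0, -2.0, 0.5])   # hypothetical trained weights
b = 0.0

def predict(x):
    """Probability that x belongs to class 1."""
    return sigmoid(x @ w + b)

x = np.array([2.0, 0.5, 1.0])    # clean input, confidently class 1
y = 1.0

# Gradient of the logistic loss with respect to the input is (p - y) * w.
grad_x = (predict(x) - y) * w
eps = 0.5
x_adv = x + eps * np.sign(grad_x)  # small, structured per-feature change
```

On the clean input the model is over 80% confident in class 1; after the structured nudge the same model's prediction drops below 50%, even though each feature moved by only `eps`.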

Explainable Artificial Intelligence: a Systematic Review Artificial Intelligence

This has led to the development of a plethora of domain-dependent and context-specific methods for dealing with the interpretation of machine learning (ML) models and the formation of explanations for humans. Unfortunately, this trend is far from over, with an abundance of knowledge in the field that is scattered and needs organisation. The goal of this article is to systematically review research works in the field of XAI and to try to define some boundaries in the field. From several hundred research articles focused on the concept of explainability, about 350 have been considered for review by using the following search methodology. In a first phase, Google Scholar was queried to find papers related to "explainable artificial intelligence", "explainable machine learning" and "interpretable machine learning". Subsequently, the bibliographic section of these articles was thoroughly examined to retrieve further relevant scientific studies. The first noticeable thing, as shown in figure 2 (a), is the distribution of the publication dates of selected research articles: sporadic in the 70s and 80s, receiving preliminary attention in the 90s, showing rising interest in the 2000s and becoming a recognised body of knowledge after 2010. The first research concerned the development of an explanation-based system and its integration in a computer program designed to help doctors make diagnoses [3]. Some of the more recent papers focus on work devoted to the clustering of methods for explainability, motivating the need for organising the XAI literature [4, 5, 6].

AI Research Considerations for Human Existential Safety (ARCHES) Artificial Intelligence

Framed in positive terms, this report examines how technical AI research might be steered in a manner that is more attentive to humanity's long-term prospects for survival as a species. In negative terms, we ask what existential risks humanity might face from AI development in the next century, and by what principles contemporary technical research might be directed to address those risks. A key property of hypothetical AI technologies is introduced, called "prepotence", which is useful for delineating a variety of potential existential risks from artificial intelligence, even as AI paradigms might shift. A set of contemporary research directions is then examined for their potential benefit to existential safety. Each research direction is explained with a scenario-driven motivation, and examples of existing work from which to build. The research directions present their own risks and benefits to society that could occur at various scales of impact, and in particular are not guaranteed to benefit existential safety if major developments in them are deployed without adequate forethought and oversight. As such, each direction is accompanied by a consideration of potentially negative side effects.

Ranger: Boosting Error Resilience of Deep Neural Networks through Range Restriction Machine Learning

With the emerging adoption of deep neural networks (DNNs) in the HPC domain, the reliability of DNNs is also growing in importance. As prior studies demonstrate the vulnerability of DNNs to hardware transient faults (i.e., soft errors), there is a compelling need for an efficient technique to protect DNNs from soft errors. While the inherent resilience of DNNs can tolerate some transient faults (which would not affect the system's output), prior work has found there are critical faults that cause safety violations (e.g., misclassification). In this work, we exploit the inherent resilience of DNNs to protect the DNNs from critical faults. In particular, we propose Ranger, an automated technique to selectively restrict the ranges of values in particular DNN layers, which can dampen the large deviations typically caused by critical faults to smaller ones. Such reduced deviations can usually be tolerated by the inherent resilience of DNNs. Ranger can be integrated into existing DNNs without retraining, and with minimal effort. Our evaluation on 8 DNNs (including two used in self-driving car applications) demonstrates that Ranger can achieve significant resilience boosting without degrading the accuracy of the model, and incurring negligible overheads.
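The range-restriction idea behind Ranger can be sketched in a few lines: clamp each layer's activations to value ranges observed during fault-free runs, so a soft error that produces a huge activation is dampened instead of propagating. The bounds and the single dense layer below are illustrative assumptions, not Ranger's actual profiling procedure:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def layer(x, W, lo=0.0, hi=6.0):
    """Dense layer whose output is restricted to a profiled range [lo, hi]."""
    return np.clip(relu(x @ W), lo, hi)

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(8, 4))
x = rng.random(8)

clean = layer(x, W)

x_faulty = x.copy()
x_faulty[3] = 1e6                # a bit flip producing a huge corrupted value
protected = layer(x_faulty, W)   # deviation is capped at the range bound
```

Without the `np.clip`, the corrupted value would flow through as an activation on the order of 1e5; with it, the deviation is bounded by `hi`, small enough for the network's inherent resilience to absorb, and no retraining is needed.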