How do I start a career as a deep learning engineer? What are some of the key tools and frameworks used in AI? How do I learn more about ethics in AI? Everyone has questions, but the most common questions in AI always return to this: how do I get involved? A group of AI professionals who gathered at NVIDIA's GTC conference this spring cut through the hype to share fundamental principles for building a career in AI, offering what may be the best place to start. The panelists, in conversation with NVIDIA's Louis Stewart, head of strategic initiatives for the developer ecosystem, came to the industry from very different places. But the speakers -- Katie Kallot, NVIDIA's former head of global developer relations and emerging areas; David Ajoku, founder of startup aware.ai;
This article is part of our Summer reads series. Visit our collection to discover "The Economist reads" guides, guest essays and more seasonal distractions. IN RECENT years artificial intelligence (AI) has undergone a revolution. After decades of modest progress that never quite lived up to its promise, a different approach--relying on big data and stats, not clever algorithms--made huge strides in solving real-world problems like voice- and image-recognition and self-driving cars. Also in the past ten years, a lot of books have been published that aim to explain what AI is, where it's going and why it matters.
Machine learning is often used to build predictive models by extracting patterns from large datasets. These models are used in predictive data analytics applications including price prediction, risk assessment, predicting customer behavior, and document classification. This introductory textbook offers a detailed and focused treatment of the most important machine learning approaches used in predictive data analytics, covering both theoretical concepts and practical applications. Technical and mathematical material is augmented with explanatory worked examples, and case studies illustrate the application of these models in the broader business context. This second edition covers recent developments in machine learning, especially in a new chapter on deep learning, and two new chapters that go beyond predictive analytics to cover unsupervised learning and reinforcement learning.
Generative adversarial networks (GANs), the main method of adversarial learning, have achieved great success and popularity by exploiting a minimax learning concept in which two networks compete with each other during the learning process. Their key capability is to generate new data and replicate available data distributions, which is needed in many practical applications, particularly in computer vision and signal processing. This book provides a collection of recent research works addressing theoretical issues in improving the learning process and the generalization of GANs, as well as state-of-the-art applications of GANs to various domains of real life. The book is intended for academics, practitioners, and research students in artificial intelligence looking to stay up to date with the latest advances in GANs' theoretical development and their applications.
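The minimax concept described above can be illustrated numerically without any neural-network machinery. The sketch below is an illustrative assumption, not an example from the book: one-dimensional "real" and "fake" samples, a toy logistic discriminator, and a grid search standing in for gradient ascent. The discriminator D is chosen to maximize the value function V(D, G) = E_x[log D(x)] + E_z[log(1 − D(G(z)))], which the generator in turn tries to minimize.

```python
import numpy as np

rng = np.random.default_rng(0)
real = rng.normal(0.0, 1.0, 1000)   # samples from the "real" data distribution
fake = rng.normal(3.0, 1.0, 1000)   # generator output, initially far from the data

def value_fn(d, real, fake):
    # V(D, G) = E_x[log D(x)] + E_z[log(1 - D(G(z)))]
    return np.mean(np.log(d(real))) + np.mean(np.log(1.0 - d(fake)))

# Toy logistic discriminator: scores samples below threshold t as more "real".
def make_discriminator(t):
    return lambda x: 1.0 / (1.0 + np.exp(x - t))

# Discriminator step: maximize V over t (grid search stands in for gradients).
ts = np.linspace(-5.0, 5.0, 201)
d_best = max(ts, key=lambda t: value_fn(make_discriminator(t), real, fake))

# The best threshold falls between the two distributions' means (0 and 3);
# a generator step would then adjust `fake` to drive this same V back down.
print(d_best)
```

In an actual GAN the two steps alternate: each network's parameters are updated by gradient descent/ascent on V, which is what makes the training a two-player minimax game.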
In 2022, Artificial Intelligence is the hottest and most in-demand field; many engineers want to build their careers in AI, Data Science & Data Analytics. Working through reliable resources is the surest way to learn, so here is a list of the best AI books on the market today. Artificial Intelligence is the field of study that simulates the processes of human intelligence on computer systems. These processes include acquiring information, using it, and approximating conclusions. Research topics in AI include problem-solving, reasoning, planning, natural language, programming, and machine learning. Automation, robotics, and sophisticated computer software and programs characterize a career in Artificial Intelligence.
This could be the first stop on your brand-new machine learning journey. I personally like how technical concepts are translated into plain English – each chapter starts with a high-level overview of an ML algorithm or methodology, concise and clear, followed by lots of visual examples and real-world scenarios. I can guarantee you won't get lost halfway. The book focuses on introducing you to ML with minimal math. But if you want to grasp more of the math, the next book I recommend is waiting for you.
It is possible to design and deploy advanced machine learning algorithms that are essentially math-free and stats-free. The people working on these are typically professional mathematicians. These algorithms are not necessarily simpler. See for instance a math-free regression technique with prediction intervals, here, or supervised classification as an alternative to t-SNE, here. Interestingly, this latter math-free machine
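The linked technique itself is not reproduced above, but the general idea of attaching prediction intervals to a regression without distributional formulas can be sketched with a split-and-calibrate approach. Everything below (the nearest-neighbour "model", the toy data, the 90% level) is an illustrative assumption, not the author's method: fit on one half of the data, then set the interval half-width to an empirical quantile of absolute errors on the held-out half.

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.uniform(0.0, 10.0, 400)
y = 2.0 * x + rng.normal(0.0, 1.5, 400)   # toy data: linear signal plus noise

# Split: fit on one half, calibrate the interval width on the other.
x_fit, y_fit = x[:200], y[:200]
x_cal, y_cal = x[200:], y[200:]

# "Model": a nearest-neighbour average, i.e. lookups rather than formulas.
def predict(x0, xs, ys, k=15):
    idx = np.argsort(np.abs(xs - x0))[:k]
    return ys[idx].mean()

# Calibration: the 90th percentile of absolute errors on held-out points
# becomes the half-width of a 90% prediction interval.
preds_cal = np.array([predict(v, x_fit, y_fit) for v in x_cal])
q = np.quantile(np.abs(y_cal - preds_cal), 0.9)

x0 = 5.0
p = predict(x0, x_fit, y_fit)
interval = (p - q, p + q)
print(interval)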
Artists, throughout history, have engaged—and in many cases developed—technologies. Indeed, the distinction between artist and technologist is largely a modern creation. Engaging emerging industrial processes has been a characteristic of art in Western culture for well over a century, photography and cinema being prime examples. Throughout that century, artist/technologists have developed new media, new practices, and new technological genres: photography, cinema, radio, television, video, electronics, welded metal sculpture, and so on. Each of these has generated new, radically interdisciplinary communities that grappled with the aesthetic, philosophical, and technical issues (all at the same time) in new and complex interdisciplinary discourses. Over the last 30 years, and in some cases longer, artists have engaged computational techniques and computing generally, as well as biotech (bioart), and so on. During the 1990s in particular, the computational arts community was a theoretical maelstrom, with practitioners from the plastic arts, from photography, film, and video, from critical theory and media studies, and from engineering and computer science, all crossing swords in a joyous and generative discursive chaos. Computer and digital arts began almost with the first computers. The history of computer gaming might be said to begin with Christopher Strachey's draughts (checkers) program, first written for the Pilot ACE computer in 1950. Over the period of consumer commercialisation of computing—beginning with the "desktop revolution" of the later 1980s—and the somewhat later development of graphics software, the vast majority of practitioners have utilised such technologies as tools—often in digital emulations of predigital practices: digital painting, video, animation, and so on.
A much smaller community has explored the potential of computing and programming as medium, and the creation of computational systems as artworks with varying degrees of autonomy or sense-making, including sensor-based systems and robotics (early examples being Gordon Pask and Robin McKinnon-Wood's Musicolour of the early 1950s, Nicolas Schöffer's CYSP robots of the mid-1950s, and Edward Ihnatowicz's Senster, debuted in 1970). Among such artist-researchers, the question of learning and adaptation was always begged, but remained largely technically intractable. That is not to say that there has not been a long and diverse—if not well known—range of practices in generative art, such as the biomorphic virtual sculpture of William Latham (for instance Biogenesis, 1994), the interactive installation works of Stocker et al. (2009), the biomorphic animations of Jon McCormack, and so many others. (Audry might have devoted a little more time to elucidating this history with respect to his topic.) In more recent years, the development of machine learning has provided some new approaches to these questions, and, as Audry explains, a small community of artists has pursued its potential. There are, we might propose, three ways in which one might approach a new medium or new technology. There is technical mastery: to understand the technology, to become practically adept—to be able to say, "I know how this works and I know how to make it." This is the tight analytic focus of technical design—the mode of the engineer. Alternatively, one might attempt to position the technology historically and socially—the big-picture mode of the philosopher or cultural theorist. When approaching the question from an inventive/creative position, the challenge is knowing what to make, in the present moment, that speaks the language of the technocultural zeitgeist, or of that which is on the horizon, such that it constitutes "art," or comes to constitute a new understanding of what art can be.
This is the synthetic mode of the artist. All of these approaches have value. Those who can combine all three have special leverage on their subject, and we see such authoritative voices in each emerging technological milieu. In my view, Sofian Audry is such a person in the very contemporary realm of machine learning. Audry's quarry in this book is to explore what machine learning can be as a component of artworks and art practices, or what one can do with machine learning that we might regard as "artistic"—recognizing all the while the fungibility of the concept "art." He asks the right questions, big questions, such as these in the introduction, which provide an armature for much that follows: "As machine learning is likely to become one of the most important industrial technologies of the twenty-first century, how can artists engage in the material and intellectual debates that it brings forward?" (p. 15), and "How can [artists] approach algorithms that are largely meant for problem-solving and optimizing—both of which that (sic) have little to do with the arts? […] how can they relate to a field that has everything to do with engineering, science, and business and seems utterly disconnected from contemporary forms of artistic expression?" (p. 16). Such questions regarding the place of art practice with respect to industrial capitalism have underscored work and structured discourse in the art-and-technology community for generations. For instance, while Experiments in Art and Technology (EAT) garnered support from corporate research campuses like Bell Labs in the late 1960s and early 1970s, Maurice Tuchman's similar enterprise, the Art & Technology Program at the Los Angeles County Museum of Art (1967–1971), induced protests and boycotts by LA artists due to its cosy relationship with corporations arming the United States in the Vietnam War. Such issues structured discourse in these communities but are seldom broached in other corners of the art world.
The general public—who see such work often through the lens of technological spectacle—are mostly oblivious to them. Simon Schaffer notes in the opening lines of Mechanical Marvels: Clockwork Dreams (a film on seventeenth-century automata), "It's often said that if you really want to understand something then what you should do is build it" (Stacey, 2013). It is a way to confirm to yourself that you understand it, or you don't. Thomas Edison is reported to have remarked, while attempting to develop the lightbulb, "I have not failed 10,000 times—I've successfully found 10,000 ways that will not work" (Dyer and Martin, 1910). Audry is a maker, and can claim that intimate, pragmatic way of knowing: he has, no doubt, found ways that don't work. But Audry is a thinking maker, who asks reflexive questions about the practice in aesthetic, theoretical, and historical frames of reference. Audry knows the contemporary technics, but unlike so many techno-jockeys, he has a deep understanding of the history of the field. He correctly identifies the roots of machine learning in the mid-twentieth-century (predigital) period of cybernetics, and follows this history through the period of symbolic Artificial Intelligence (AI; 1970s and 1980s) and the blossoming of Artificial Life (1990s) that followed the perceived failure of symbolic AI to achieve anything like animal "intelligence" (Penny, 2017). The theoretical and historical significance of this latter movement is lost on many (least of all the audience of this journal). Audry does good historical work in reminding his readers of how wildly interdisciplinary and generative the 1990s period of Artificial Life was. The rapid advancement of technologies of computing, data storage, and network communication facilitated the computational simulation of biological phenomena, motivated by a resurgence of interest in biological and neurological metaphors (approaches suppressed in the symbolic AI period).
This all created a context for the development of machine learning techniques that have become (for better or worse) ubiquitous in contemporary life. The capacity to learn, adapt, even innovate or "create," was central to cybernetic thinking. Such ideas were somewhat eclipsed in the period of symbolic AI, but re-emerged as central questions in Artificial Life, and have become central in machine learning art. Audry plumbs the theoretical (and ethical) dimensions of his subject in his deep dive into matters of behavior, adaptivity, and metamorphosis in computational systems, asking questions such as, "Does it even make sense to maintain the anthropocentric notion that only humans can make art, when we know that machines cannot be decoupled from the humans that made them?" (p. 164). He reflects, "Machine learning technologies displace and reconfigure the creative agencies involved in the artistic process, thereby nurturing new human-machine relationships as part of creative endeavors" (p. 159), and concludes that "In the hands of artists, machine learning systems become a new material whose autonomy resists artistic control" (p. 164). Here he posits a creative symbiosis that destabilises the humanist-individualist and human-exceptionalist assumptions that linger strongly in the art world, and also defuses apocalyptic fears about AI. Throughout the book he draws on a range of examples from his own and others' work as they bear upon the key questions of the book. One of Audry's interesting reflections is to compare the behavior of machine learning systems to the exploratory, experimental, and intuitive practices of artists, and to contrast both of these with the reductive, prescriptive logic of symbolic AI. This book introduces a somewhat obscure field of practice to a wider audience.
It situates machine learning art in historical, cultural, and technological context, elucidating its motivations and concerns, exploring its aesthetics, and explaining the technology, illustrating with salient examples. Audry has the capacity to explain what is important about the technology and the ideas to a nontechnical audience with precision, while avoiding vapid gloss. The book is well structured—the arguments are laid out, evidence is brought to bear in an orderly way, and conclusions are drawn (one does not find oneself thinking: "Wait, what's this about? What is at stake? What are they talking about?"—a situation one finds oneself in regrettably often in some genres of critical writing). It is a well-written and very relevant read for anyone interested in cutting-edge developments in media arts and provides, for inquiring technologists, insight into artists' approaches to the technology. It will serve as a useful text for suitably advanced media arts courses and programs.
Machine Learning (ML) methods are increasingly being used across a variety of fields and have led to the discovery of intricate relationships between variables. Here we apply ML methods to predict and interpret life satisfaction using data from the UK British Cohort Study. We discuss the application first of Penalized Linear Models and then of one non-linear method, Random Forests. We present two key model-agnostic interpretative tools for the latter method: Permutation Importance and Shapley Values. With a parsimonious set of explanatory variables, neither Penalized Linear Models nor Random Forests produce major improvements over the standard Non-penalized Linear Model. However, once we consider a richer set of controls, these methods do produce a non-negligible improvement in predictive accuracy. Although marital status and emotional health continue to be the most important predictors of life satisfaction, as in the existing literature, gender becomes insignificant in the non-linear analysis.
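Permutation Importance, one of the two interpretative tools the abstract names, is model-agnostic and simple to sketch: fit any model, shuffle one feature column, re-score, and read the drop in fit as that feature's importance. A minimal illustration on synthetic data follows; the variable names and the least-squares model are illustrative stand-ins, not the paper's Random Forest or the cohort data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
# Hypothetical stand-ins for survey predictors (names are illustrative only).
emotional_health = rng.normal(size=n)
married = rng.integers(0, 2, n).astype(float)
irrelevant = rng.normal(size=n)                 # no true effect on the outcome
X = np.column_stack([emotional_health, married, irrelevant])
y = 1.5 * emotional_health + 0.8 * married + rng.normal(0.0, 0.5, n)

def r2(coefs, X, y):
    pred = X @ coefs
    return 1.0 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)

# Fit a plain least-squares model and record its baseline fit.
coefs, *_ = np.linalg.lstsq(X, y, rcond=None)
baseline = r2(coefs, X, y)

# Permutation importance: shuffle one column at a time, measure the R^2 drop.
importances = []
for j in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])
    importances.append(baseline - r2(coefs, Xp, y))

print(importances)
```

On this synthetic setup the strongly predictive feature shows the largest drop, the weaker one a smaller drop, and the irrelevant one a drop near zero, which is exactly the ranking logic applied to marital status, emotional health, and gender in the study.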