Continuing Education


HOUDINI: Lifelong Learning as Program Synthesis

Neural Information Processing Systems

We present a neurosymbolic framework for the lifelong learning of algorithmic tasks that mix perception and procedural reasoning. Reusing high-level concepts across domains and learning complex procedures are key challenges in lifelong learning. We show that a program synthesis approach that combines gradient descent with combinatorial search over programs can be a more effective response to these challenges than purely neural methods. Our framework, called HOUDINI, represents neural networks as strongly typed, differentiable functional programs that use symbolic higher-order combinators to compose a library of neural functions. Our learning algorithm consists of: (1) a symbolic program synthesizer that performs a type-directed search over parameterized programs and, while learning a sequence of tasks, decides which library functions to reuse and which architectures to combine them in; and (2) a neural module that trains these programs using stochastic gradient descent.
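The symbolic half of this approach can be illustrated with a toy sketch. The module names (`perceive`, `inc`) and the single-argument chain restriction are hypothetical simplifications; the actual HOUDINI system enumerates typed functional programs built from higher-order combinators and trains their neural parameters by SGD, which this sketch omits:

```python
# Toy library of typed "modules" (stand-ins for learned neural functions).
# Each entry maps a name to ((argument_type,), return_type).
LIBRARY = {
    "perceive": (("img",), "digit"),    # e.g. a digit classifier
    "inc":      (("digit",), "digit"),  # e.g. a learned successor function
}

def compose_chains(in_type, out_type, max_depth):
    """Enumerate chains of library functions mapping in_type to out_type,
    in order of increasing length (the type-directed symbolic search)."""
    frontier = [((), in_type)]
    for _ in range(max_depth):
        next_frontier = []
        for chain, t in frontier:
            for name, ((arg,), ret) in LIBRARY.items():
                if arg == t:  # types must line up for composition
                    new_chain = chain + (name,)
                    if ret == out_type:
                        yield new_chain
                    next_frontier.append((new_chain, ret))
        frontier = next_frontier
```

In the full framework, each enumerated candidate would then be handed to the neural module for gradient-based training, and the best-performing program's components would be added back to the library for reuse on later tasks.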


Lifelong Learning with Weighted Majority Votes

Neural Information Processing Systems

Better understanding of the potential benefits of information transfer and representation learning is an important step towards the goal of building intelligent systems that are able to persist in the world and learn over time. In this work, we consider a setting where the learner encounters a stream of tasks but is able to retain only limited information from each encountered task, such as a learned predictor. In contrast to most previous works analyzing this scenario, we do not make any distributional assumptions on the task generating process. Instead, we formulate a complexity measure that captures the diversity of the observed tasks. We provide a lifelong learning algorithm with error guarantees for every observed task (rather than on average).
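The classical weighted-majority scheme that the title alludes to can be sketched as follows. This is the textbook multiplicative-weights construction over retained per-task predictors, not necessarily the paper's exact algorithm:

```python
def weighted_majority(predictors, weights, x):
    """Predict by a weighted vote over the predictors retained so far."""
    tally = {}
    for h, w in zip(predictors, weights):
        label = h(x)
        tally[label] = tally.get(label, 0.0) + w
    return max(tally, key=tally.get)

def update_weights(predictors, weights, x, y, beta=0.5):
    """Classic multiplicative update: downweight predictors that err on (x, y)."""
    return [w * (beta if h(x) != y else 1.0) for h, w in zip(predictors, weights)]
```

The appeal in a lifelong setting is that only one predictor (plus a scalar weight) needs to be retained per task, matching the limited-memory constraint described in the abstract.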


Lifelong Learning with Non-i.i.d. Tasks

Neural Information Processing Systems

In this work we aim at extending the theoretical foundations of lifelong learning. Previous work analyzing this scenario is based on the assumption that the tasks are sampled i.i.d. Instead, we study two scenarios in which lifelong learning is possible even though the observed tasks do not form an i.i.d. sample. In the first case we prove a PAC-Bayesian theorem, which can be seen as a direct generalization of the analogous previous result for the i.i.d. case. For the second scenario we propose to learn an inductive bias in the form of a transfer procedure.
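For context, one standard form of the classical i.i.d. PAC-Bayesian bound that such results generalize (Maurer's refinement of McAllester's theorem; the paper's non-i.i.d. statement will differ in its details) reads:

```latex
With probability at least $1-\delta$ over an i.i.d. sample of size $m$,
for all posterior distributions $Q$ over hypotheses,
\[
  L(Q) \;\le\; \hat{L}(Q)
  + \sqrt{\frac{\mathrm{KL}(Q\,\|\,P) + \ln\frac{2\sqrt{m}}{\delta}}{2m}},
\]
where $P$ is a prior fixed before seeing the data, $L(Q)$ is the expected
true risk under $Q$, and $\hat{L}(Q)$ the corresponding empirical risk.
```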


As AI infiltrates work, employers pay a premium for soft skills

#artificialintelligence

The researchers' findings confirm what employers have reported in recent months: The demand for soft skills will only increase as automation takes hold. In a Cengage survey released earlier this year, employers said they most needed workers who could listen, pay attention to detail, communicate effectively, think critically, demonstrate interpersonal skills and learn new skills. Similarly, a recent Workhuman report concluded that, despite tech's ever-increasing presence in the workplace, the future of work will be people-focused, not machine-centered. In fact, that future will create more opportunities for employers to "leverage the previously untapped creativity and innovation of people -- to prioritize humanity and emotional intelligence at work," Workhuman said. Employers may well be recognizing that opportunity: they identified "soft skills" as their top training priority in a LinkedIn poll last year.


Lifelong learning machines (L2M) - Hava Siegelmann keynote at HLAI

#artificialintelligence

Hava Siegelmann, Program Manager in DARPA's Microsystems Technology Office, gives a keynote at the Human-Level AI Conference in Prague in August 2018. The conference combined three major conferences (AGI, BICA, and NeSy) and was organized by the AI research and development company GoodAI.


A Unified Framework for Lifelong Learning in Deep Neural Networks

arXiv.org Machine Learning

Humans can learn a variety of concepts and skills incrementally over the course of their lives while exhibiting an array of desirable properties, such as non-forgetting, concept rehearsal, forward transfer and backward transfer of knowledge, few-shot learning, and selective forgetting. Previous approaches to lifelong machine learning can only demonstrate subsets of these properties, often by combining multiple complex mechanisms. In this Perspective, we propose a powerful unified framework that can demonstrate all of the properties by utilizing a small number of weight consolidation parameters in deep neural networks. In addition, we are able to draw many parallels between the behaviours and mechanisms of our proposed framework and those surrounding human learning, such as memory loss or sleep deprivation. This Perspective serves as a conduit for two-way inspiration to further understand lifelong learning in machines and humans.
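A common way to realize weight consolidation in practice is the elastic-weight-consolidation (EWC) style penalty, sketched below under the assumption that per-weight importance estimates (e.g. diagonal Fisher information) are available; the paper's own "small number of weight consolidation parameters" may be organized differently:

```python
import numpy as np

def consolidated_loss(task_loss, params, anchors, importances, lam=1.0):
    """EWC-style weight consolidation: add a quadratic penalty that keeps
    important weights near the values (anchors) learned on earlier tasks,
    discouraging forgetting while leaving unimportant weights free to move."""
    penalty = sum(
        float(np.sum(f * (p - a) ** 2))
        for p, a, f in zip(params, anchors, importances)
    )
    return task_loss + 0.5 * lam * penalty
```

Raising `lam` trades plasticity on the new task for retention of old ones; setting an importance entry to zero corresponds to selectively allowing that weight to be forgotten.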


Learn new languages with the help of artificial intelligence - Komando.com

#artificialintelligence

Learning a new language can be difficult, especially if you approach it the wrong way. Instead of cracking open a textbook or settling in for stuffy video lessons, why not gamify your learning experience? There are several apps that offer to teach you a new language, but few offer speech recognition software and a brand new augmented reality (AR) feature to help drive the lessons home. Much like the supplementary language learning app Memrise, this new app offers a simple and fun way to learn over 33 languages.


Why traditional Agile/DevOps models aren't good enough for AI production

#artificialintelligence

The need for convergence of people, process, and technology in modern business has ignited the evolution of newer engineering methodologies. Artificial Intelligence (AI) is no exception; it demands even greater interaction between human and non-human resources in the production process. AI solutions are built on an algorithm, data, and a continuous learning process. Constantly growing data has enriched the quality of the knowledge, and increased computing power has extended machine learning into deep learning; together, these have improved our collective ability to evolve an AI solution quickly.


Artificial Intelligence in Healthcare: The Hope, The Hype, The Promise, The Peril - Stanford Center for Continuing Medical Education - Continuing Education (CE)

#artificialintelligence

Registration for this conference is now closed. The conference is anchored in and builds on a preview of the special National Academy of Medicine (NAM) publication "Artificial Intelligence in Healthcare: The Hope, The Hype, The Promise, The Peril," co-led by Michael Matheny and Sonoo Thadaney Israni. Registration includes course materials, a certificate of participation, breakfast, and lunch. CME certificate fee: $25.00. Note: if you would like to receive CE credit for your attendance, a $25.00 fee option will be available after the conference evaluation is completed and your conference attendance is verified. Your email address is used for critical information, including registration confirmation, evaluation, and certificate delivery.


12 must-watch TED Talks on artificial intelligence - QAT Global

#artificialintelligence

For all of you who are technology lovers, AI enthusiasts, or casual consumers with piqued interest, don't miss your chance to learn about the newest advancements in artificial intelligence and to join the discussion on the ethics, logistics, and reality of super-intelligent machines. Explore the possibilities of super-intelligence improving our world and our everyday lives as you dive into this great list of TED Talks on artificial intelligence. We have compiled a list of the best TED Talks on AI, providing you with the information you seek on AI technological developments, innovation, and the future of AI. We hope you enjoy our list!