If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
We recover a video of the motion taking place in a hidden scene by observing changes in indirect illumination in a nearby uncalibrated visible region. We solve this problem by factoring the observed video into a matrix product between the unknown hidden scene video and an unknown light transport matrix. This task is extremely ill-posed, as any non-negative factorization will satisfy the data. Inspired by recent work on the Deep Image Prior, we parameterize the factor matrices using randomly initialized convolutional neural networks trained in a one-off manner, and show that this results in decompositions that reflect the true motion in the hidden scene. Papers published at the Neural Information Processing Systems Conference.
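The core of the setup above is a blind factorization: the observed video is modeled as the product of an unknown hidden-scene video and an unknown light transport matrix, and any non-negative factorization fits the data. Below is a minimal sketch of that factorization on synthetic data, using classic Lee–Seung multiplicative updates on randomly initialized non-negative factors rather than the paper's CNN-parameterized Deep Image Prior; all dimensions and variable names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "observed video" Y: T frames x P pixels, generated as the product
# of a hidden scene video H_true (T x K) and a light transport matrix
# L_true (K x P). In the paper both factors are unknown.
T, K, P = 50, 4, 30
H_true = rng.random((T, K))
L_true = rng.random((K, P))
Y = H_true @ L_true

# Randomly initialized non-negative factors, refined with multiplicative
# updates (the paper instead parameterizes the factors with randomly
# initialized CNNs, which is what regularizes the ill-posed problem).
H = rng.random((T, K))
L = rng.random((K, P))
eps = 1e-9
for _ in range(1000):
    H *= (Y @ L.T) / (H @ L @ L.T + eps)
    L *= (H.T @ Y) / (H.T @ H @ L + eps)

err = np.linalg.norm(H @ L - Y) / np.linalg.norm(Y)
print(f"relative reconstruction error: {err:.4f}")
```

Note that a small reconstruction error here does not mean the factors match `H_true` and `L_true`: the decomposition is non-unique, which is exactly the ill-posedness the Deep Image Prior parameterization is meant to address.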
We provide statistical and computational analysis of sparse Principal Component Analysis (PCA) in high dimensions. The sparse PCA problem is highly nonconvex in nature. Consequently, though its global solution attains the optimal statistical rate of convergence, such solution is computationally intractable to obtain. Meanwhile, although its convex relaxations are tractable to compute, they yield estimators with suboptimal statistical rates of convergence. In this paper, we propose a two-stage sparse PCA procedure that attains the optimal principal subspace estimator in polynomial time.
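To make the estimation problem concrete, here is a minimal sketch of one well-known sparse PCA heuristic, the truncated power method, on a toy spiked covariance with a planted sparse leading direction. This is not the authors' two-stage procedure; the dimensions and the spike strength are illustrative assumptions.

```python
import numpy as np

def sparse_pc(S, k, iters=100):
    """Leading k-sparse principal component of covariance S via the
    truncated power method: ordinary power iteration, except that after
    each step only the k largest-magnitude coordinates are retained."""
    d = S.shape[0]
    v = np.ones(d) / np.sqrt(d)
    for _ in range(iters):
        v = S @ v
        keep = np.argsort(np.abs(v))[-k:]   # coordinates to retain
        mask = np.zeros(d)
        mask[keep] = 1.0
        v *= mask                           # enforce k-sparsity
        v /= np.linalg.norm(v)
    return v

# Toy spiked covariance with a planted 3-sparse leading direction u.
d, k = 20, 3
u = np.zeros(d)
u[:k] = 1.0 / np.sqrt(k)
S = 5.0 * np.outer(u, u) + np.eye(d)

v = sparse_pc(S, k)
print(np.flatnonzero(v))   # → [0 1 2], the planted support
print(abs(v @ u))          # overlap with the planted direction
```

The sparsity constraint is what makes the problem nonconvex: the top-k truncation is a projection onto a nonconvex set, which is why global optimality guarantees are hard to obtain even when such heuristics work well in practice.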
Heuristic tools from statistical physics have been used in the past to compute the optimal learning and generalization errors in the teacher-student scenario in multi-layer neural networks. In this contribution, we provide a rigorous justification of these approaches for a two-layer neural network model called the committee machine. We also introduce a version of the approximate message passing (AMP) algorithm for the committee machine that allows one to perform optimal learning in polynomial time for a large set of parameters. We find that there are regimes in which a low generalization error is information-theoretically achievable while the AMP algorithm fails to deliver it, strongly suggesting that no efficient algorithm exists for those cases and unveiling a large computational gap.
For massive and heterogeneous modern data sets, it is of fundamental interest to provide guarantees on the accuracy of estimation when computational resources are limited. In the application of learning to rank, we provide a hierarchy of rank-breaking mechanisms ordered by the complexity of the resulting sketch of the data. This allows the number of data points collected to be gracefully traded off against the computational resources available, while guaranteeing the desired level of accuracy. Theoretical guarantees on the proposed generalized rank-breaking implicitly provide such trade-offs, which can be explicitly characterized under certain canonical scenarios on the structure of the data.
How are computational tools changing filmmaking, and how will they change the video content of the future? To explore these topics we welcome Genevieve Patterson, Chief Scientist and Co-Founder of TRASH, to the show. Tools like those offered by TRASH, Genevieve Patterson's software that uses AI to make and share video, are beginning to edit video automagically for people. While these are currently limited to short, simple, social media-style videos, the underlying machine learning technologies are building toward something far more ambitious.
My five- and seven-year-olds constantly fight over who gets the iPad first. We have one, and they get to use it in tiny doses, usually when I'm at my wit's end. They love to code, like the good little 21st-century humanoids they are. They love coding so much, and I am so unwilling to give them their own devices, that I decided to try something new. It's also something that sounds so counterintuitive it actually might work: screen-free coding.
Generate the Base Learners: Choose any combination of base learners, based on accuracy and diversity. Each base learner can produce more than one predictive model, if you change variables such as case weights, guidance parameters, or input space partitions. The result is a computational "average" of sorts (which is much more complex than the regular arithmetic average).
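The recipe of generating multiple models from one base learner and combining them can be sketched in a few lines. The example below is a toy illustration, not any particular library's API: the base learner is a one-split regression stump, and diversity comes from bootstrap resampling (one way of varying case weights); the data and all names are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression data: y = x plus noise.
x = np.linspace(-1, 1, 200)
y = x + rng.normal(0, 0.1, size=x.size)

def fit_stump(xs, ys):
    """Base learner: a one-split regression stump (threshold + two means)."""
    best = (np.inf, None)
    for t in np.linspace(-0.9, 0.9, 19):
        left, right = ys[xs <= t], ys[xs > t]
        if left.size == 0 or right.size == 0:
            continue
        sse = ((left - left.mean())**2).sum() + ((right - right.mean())**2).sum()
        if sse < best[0]:
            best = (sse, (t, left.mean(), right.mean()))
    return best[1]

def predict(stump, xs):
    t, pl, pr = stump
    return np.where(xs <= t, pl, pr)

# Generate many models from the same base learner by bootstrap
# resampling, then combine them by averaging their predictions.
stumps = [fit_stump(x[idx], y[idx])
          for idx in (rng.integers(0, x.size, x.size) for _ in range(50))]
ensemble_pred = np.mean([predict(s, x) for s in stumps], axis=0)
single_pred = predict(fit_stump(x, y), x)

print("single stump MSE vs truth:", np.mean((single_pred - x)**2))
print("ensemble MSE vs truth:    ", np.mean((ensemble_pred - x)**2))
```

Averaging the 50 stumps yields a stepped approximation that is smoother than any single stump, which is the "computational average" the recipe describes.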
So, this is my third column delving into what I believe is the over-hyped and often misunderstood subject of artificial intelligence (AI). First, I'll briefly paraphrase what I said in my previous posts: what we understand today as AI is nothing more than clever programming and smart technology. In my first post, I noted that engineers have developed software and hardware to alleviate the mundaneness of routine activities, such as on the factory floor; created smart sensors to determine the best time to plant seeds and harvest, along with sensors that can predict weather patterns; and built technology that can assess patient images better than physicians when screening for breast cancer, for example. As a former software engineer, I would similarly develop algorithms and functions that sought patterns in data and, based on the data processed, defined expected behaviors, outcomes, and actions. That is not "intelligence" – it's just clever programming and smart technology. So how do we create an intelligent entity?
Theoretical results in machine learning mainly deal with a type of inductive learning called supervised learning. In supervised learning, an algorithm is given samples that are labeled in some useful way. For example, the samples might be descriptions of mushrooms, and the labels could be whether or not the mushrooms are edible. The algorithm takes these previously labeled samples and uses them to induce a classifier. This classifier is a function that assigns labels to samples, including samples that the algorithm has never seen before.
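The mushroom example above can be made concrete with a toy sketch. The features, values, and the choice of a 1-nearest-neighbor rule are all invented for illustration; the point is only the shape of the problem: labeled samples in, a classifier (a function from samples to labels) out.

```python
import numpy as np

# Toy labeled samples: each mushroom described by two invented
# features (cap width in cm, stem height in cm); label 1 = edible.
samples = np.array([[4.0, 6.0], [5.0, 7.0], [9.0, 3.0], [10.0, 2.5]])
labels = np.array([1, 1, 0, 0])

def induce_classifier(X, y):
    """Induce a classifier from labeled samples (here, 1-nearest-neighbor:
    label a query with the label of its closest training sample)."""
    def classify(q):
        distances = np.linalg.norm(X - q, axis=1)
        return y[np.argmin(distances)]
    return classify

classify = induce_classifier(samples, labels)

# The induced classifier assigns labels even to samples it has
# never seen before.
print(classify(np.array([4.5, 6.5])))   # → 1 (near the edible samples)
print(classify(np.array([9.5, 2.8])))   # → 0 (near the inedible samples)
```

The nested-function design mirrors the description in the text: `induce_classifier` is the learning algorithm, and the `classify` function it returns is the classifier.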
Informatics for All is a coalition whose aim is to establish informatics as a fundamental discipline to be taken by all students in school. Informatics should be seen as being as important as mathematics, the sciences, and the various languages. It should be recognized by all as a truly foundational discipline that plays a significant role in education for the 21st century. In Europe, education is a matter left to the individual states. However, education, competencies, and preparedness of the workforce are all important matters for the European Union (EU).