Day 23: BST Level-Order Traversal

#artificialintelligence

Check out the Tutorial tab for learning materials and an instructional video! Task: A level-order traversal, also known as a breadth-first search, visits each level of a tree's nodes from left to right, top to bottom. You are given a pointer to the root of a binary search tree. Complete the levelOrder function provided in your editor so that it prints the level-order traversal of the binary search tree. Hint: You'll find a queue helpful in completing this challenge.
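A minimal Python sketch of the queue-based idea (the node class and function signature in HackerRank's editor may differ): push the root, then repeatedly pop the front node, record its value, and enqueue its children left to right.

```python
from collections import deque

class Node:
    def __init__(self, data):
        self.data = data
        self.left = None
        self.right = None

def levelOrder(root):
    """Print node values in level order (breadth-first), separated by spaces."""
    if root is None:
        return
    queue = deque([root])
    values = []
    while queue:
        node = queue.popleft()          # visit the node at the front of the queue
        values.append(str(node.data))
        if node.left:                   # enqueue children left to right
            queue.append(node.left)
        if node.right:
            queue.append(node.right)
    print(" ".join(values))
```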


Now Google will display why it's showing you its search results

Mashable

The man behind the Google Search curtain is coming out to explain a few things. On Thursday, Google expanded the information that it attaches to search results to show users why they're getting the website recommendations they receive. This includes the "matching keywords" and "related terms" associated with your search that show up in the result, as well as whether other pages reference that link, and if it makes sense for your local area. Google doesn't make a secret of the factors that go into its search rank algorithm -- it spells everything out here. But showing how it applies that criteria to your specific query gives users a new, practical look under the Google hood.


Implementing Custom GridSearchCV and RandomSearchCV without scikit-learn

#artificialintelligence

Grid Search can be thought of as an exhaustive search for selecting a model. In Grid Search, the data scientist sets up a grid of hyperparameter values and, for each combination, trains a model and scores it on held-out data. Because every combination of hyperparameter values is tried, this approach can be very inefficient. For example, searching 20 different values for each of 4 parameters requires 160,000 trials of cross-validation, which equates to 1,600,000 model fits and 1,600,000 predictions if 10-fold cross-validation is used.
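A minimal sketch of that exhaustive loop without scikit-learn, assuming a generic estimator class with fit/predict methods and a user-supplied scoring function (all names here are illustrative, not the article's actual implementation):

```python
import itertools
import numpy as np

def custom_grid_search(estimator_cls, param_grid, X, y, score_fn, n_folds=10):
    """Try every hyperparameter combination with k-fold cross-validation."""
    X, y = np.asarray(X), np.asarray(y)
    folds = np.array_split(np.arange(len(X)), n_folds)
    best_params, best_score = None, -np.inf

    # itertools.product enumerates every combination in the grid
    for values in itertools.product(*param_grid.values()):
        params = dict(zip(param_grid.keys(), values))
        scores = []
        for i in range(n_folds):
            val_idx = folds[i]
            train_idx = np.concatenate([folds[j] for j in range(n_folds) if j != i])
            model = estimator_cls(**params)
            model.fit(X[train_idx], y[train_idx])        # one model fit per fold
            scores.append(score_fn(y[val_idx], model.predict(X[val_idx])))
        mean_score = float(np.mean(scores))
        if mean_score > best_score:
            best_params, best_score = params, mean_score
    return best_params, best_score
```

A randomized variant would replace the exhaustive `itertools.product` loop with a fixed number of draws sampled from each parameter's range, trading completeness for far fewer model fits.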


YouTube Algorithm Directs Viewers to False, Sexualized Videos, Study Finds

WSJ.com: WSJD - Technology

YouTube has instituted many changes over the past year to limit the problematic videos it recommends to viewers. A new study suggests the repairs have a way to go. Software nonprofit Mozilla Foundation found that YouTube's powerful recommendation engine continues to direct viewers to videos that participants said showed false claims and sexualized content, with the platform's algorithms suggesting 71% of the videos that participants found objectionable. The study highlights the continuing challenge Alphabet Inc. subsidiary YouTube faces as it tries to police the user-generated content that turned it into the world's leading video service. It is emblematic of the struggle roiling platforms from Facebook Inc. to Twitter Inc., which soared to prominence by encouraging people to share information but which now face regulatory and social pressure to police divisive, misleading and dangerous content without censoring diverse points of view. For YouTube, it also shows gaps in its efforts to steer users to videos that should be of interest based on viewership patterns, as opposed to those that are going viral for other reasons.


How AI Revolutionised the Ancient Game of Chess

#artificialintelligence

I have come to the personal conclusion that while all artists are not chess players, all chess players are artists. Originally called Chaturanga, the game was played on an 8x8 Ashtāpada board and already had two fundamental features that still distinguish the game today: different pieces subject to different rules of movement, and a single king piece whose fate determines the outcome. But it was not until the 15th century, with the introduction of the queen piece and the popularization of various other rules, that we saw the game develop into the form we know today. The emergence of international chess competition in the late 19th century meant that the game took on a new geopolitical importance.


Leveraging Language to Learn Program Abstractions and Search Heuristics

#artificialintelligence

Inductive program synthesis, or inferring programs from examples of desired behavior, offers a general paradigm for building interpretable, robust, and generalizable machine learning systems. Effective program synthesis depends on two key ingredients: a strong library of functions from which to build programs, and an efficient search strategy for finding programs that solve a given task. We introduce LAPS (Language for Abstraction and Program Search), a technique for using natural language annotations to guide joint learning of libraries and neurally-guided search models for synthesis. When integrated into a state-of-the-art library learning system (DreamCoder), LAPS produces higher-quality libraries and improves search efficiency and generalization on three domains – string editing, image composition, and abstract reasoning about scenes – even when no natural language hints are available at test time.


Optimal personalised treatment computation through in silico clinical trials on patient digital twins

arXiv.org Artificial Intelligence

In Silico Clinical Trials (ISCT), i.e., clinical experimental campaigns carried out by means of computer simulations, hold the promise of decreasing the time and cost of safety and efficacy assessment of pharmacological treatments, reducing the need for animal and human testing, and enabling precision medicine. In this paper we present methods and an algorithm that, by means of extensive computer simulation-based experimental campaigns (ISCT) guided by intelligent search, optimise a pharmacological treatment for an individual patient (precision medicine). We show the effectiveness of our approach on a case study involving a real pharmacological treatment, namely the downregulation phase of a complex clinical protocol for assisted reproduction in humans.


QUBO transformation using Eigenvalue Decomposition

arXiv.org Artificial Intelligence

Quadratic Unconstrained Binary Optimization (QUBO) is a general-purpose modeling framework for combinatorial optimization problems and is a requirement for quantum annealers. This paper utilizes the eigenvalue decomposition of the underlying Q matrix to alter and improve the search process by extracting information from dominant eigenvalues and eigenvectors to implicitly guide the search towards promising areas of the solution landscape. Computational results on benchmark datasets illustrate the efficacy of our routine, demonstrating significant performance improvements on problems with dominant eigenvalues.
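As an illustration only, not the paper's actual routine: the sketch below eigendecomposes a symmetric Q matrix and thresholds the eigenvector of the most negative eigenvalue into a binary vector, one crude way dominant eigen-information could seed a search over x in {0,1}^n minimizing x^T Q x. The helper name and thresholding rule are assumptions made for the sketch.

```python
import numpy as np

def dominant_eigen_seed(Q):
    """Round the dominant eigenvector of a symmetric QUBO matrix into a
    binary starting point for a local search (illustrative heuristic)."""
    Q = (Q + Q.T) / 2.0                     # symmetrize Q
    eigvals, eigvecs = np.linalg.eigh(Q)    # eigendecomposition of Q
    v = eigvecs[:, np.argmin(eigvals)]      # eigenvector of the most negative eigenvalue
    x = (v < 0).astype(int)                 # threshold components into {0, 1}
    candidates = [x, 1 - x]                 # the complementary assignment is equally natural
    return min(candidates, key=lambda b: b @ Q @ b)

# Example: seed a search on a small random QUBO instance
rng = np.random.default_rng(0)
Q = rng.normal(size=(8, 8))
x0 = dominant_eigen_seed(Q)
print(x0, x0 @ ((Q + Q.T) / 2) @ x0)
```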


Nearly Minimax Optimal Adversarial Imitation Learning with Known and Unknown Transitions

arXiv.org Artificial Intelligence

This paper is dedicated to designing provably efficient adversarial imitation learning (AIL) algorithms that directly optimize policies from expert demonstrations. Firstly, we develop a transition-aware AIL algorithm named TAIL with an expert sample complexity of $\tilde{O}(H^{3/2} |S|/\varepsilon)$ under the known transition setting, where $H$ is the planning horizon, $|S|$ is the state space size and $\varepsilon$ is the desired policy value gap. This improves upon the previous best bound of $\tilde{O}(H^2 |S| / \varepsilon^2)$ for AIL methods and matches the lower bound of $\tilde{\Omega} (H^{3/2} |S|/\varepsilon)$ in [Rajaraman et al., 2021] up to a logarithmic factor. The key ingredient of TAIL is a fine-grained estimator for the expert state-action distribution, which explicitly utilizes the transition function information. Secondly, considering practical settings where the transition functions are usually unknown but environment interaction is allowed, we accordingly develop a model-based transition-aware AIL algorithm named MB-TAIL. In particular, MB-TAIL builds an empirical transition model by interacting with the environment and performs imitation under the recovered empirical model. The interaction complexity of MB-TAIL is $\tilde{O} (H^3 |S|^2 |A| / \varepsilon^2)$, which improves the best known result of $\tilde{O} (H^4 |S|^2 |A| / \varepsilon^2)$ in [Shani et al., 2021]. Finally, our theoretical results are supported by numerical evaluation and detailed analysis on two challenging MDPs.

