Fast Bayesian Estimation of Point Process Intensity as Function of Covariates
In this paper, we tackle the Bayesian estimation of point process intensity as a function of covariates. We propose a novel augmentation of the permanental process, called the augmented permanental process: a doubly-stochastic point process that uses a Gaussian process on covariate space to describe the Bayesian a priori uncertainty in the square root of the intensity. We derive a fast Bayesian estimation algorithm that scales linearly with data size without relying on either domain discretization or Markov chain Monte Carlo computation. The proposed algorithm rests on the non-trivial finding that the representer theorem, one of the most desirable mathematical properties for machine learning problems, holds for the augmented permanental process, which provides many significant computational advantages. We evaluate our algorithm on synthetic and real-world data, and show that it outperforms state-of-the-art methods in predictive accuracy while being substantially faster than a conventional Bayesian method.
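The abstract's two key ingredients can be sketched briefly: a squared link guarantees a nonnegative intensity, and a representer-theorem result lets the latent function be written as a finite kernel expansion over the observed points. The names, kernel choice, and weights below are illustrative stand-ins, not the paper's actual estimator:

```python
import math

def rbf(x, y, ell=1.0):
    """Gaussian (RBF) kernel on scalar covariates."""
    return math.exp(-0.5 * (x - y) ** 2 / ell ** 2)

def intensity(x, points, alphas, ell=1.0):
    """lambda(x) = (sum_i alpha_i * k(x_i, x))^2 — nonnegative by construction."""
    f = sum(a * rbf(xi, x, ell) for xi, a in zip(points, alphas))
    return f * f

observed = [0.0, 0.5, 1.0]      # observed covariate values
alphas = [1.0, -0.5, 0.8]       # illustrative weights; in practice fit by the algorithm
lam = intensity(0.25, observed, alphas)
assert lam >= 0.0               # the squared link yields a valid intensity everywhere
```

The finite expansion is what makes the cost scale with the number of data points rather than with a discretization of the domain.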
Word2Fun: Modelling Words as Functions for Diachronic Word Representation
Word meaning may change over time as a reflection of changes in human society. Therefore, modeling time in word representations is necessary for some diachronic tasks. Most existing diachronic word representation approaches train embeddings separately for each pre-grouped time-stamped corpus and then align these embeddings, e.g., by orthogonal projections, vector initialization, temporal referencing, or a compass. However, word meaning not only changes over short periods; it may also evolve over long timespans, resulting in a unified continuous process. A recent approach called `DiffTime' models semantic evolution as functions over time parameterized by multi-layer nonlinear neural networks. In this paper, we carry this line of work forward by learning an explicit function over time for each word. Our approach, called `Word2Fun', reduces the space complexity from $\mathcal{O}(TVD)$ to $\mathcal{O}(kVD)$, where $k$ is a small constant ($k \ll T$). In particular, a specific instance based on polynomial functions can provably approximate any function modeling word evolution to within a given negligible error, thanks to the Weierstrass Approximation Theorem. The effectiveness of the proposed approach is evaluated on diverse tasks including time-aware word clustering, temporal analogy, and semantic change detection.
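The space-complexity claim can be illustrated with a toy polynomial instance: each embedding dimension of a word is a degree-$(k-1)$ polynomial in time, so only $k$ coefficients per dimension are stored instead of one embedding per time step. The coefficients below are made-up values, not learned ones:

```python
def embed(coeffs, t):
    """Evaluate a per-dimension polynomial embedding at (normalized) time t."""
    return [sum(c * t ** j for j, c in enumerate(dim)) for dim in coeffs]

# One word, D=2 dimensions, k=3 coefficients per dimension: O(kD) storage
# for this word, versus O(TD) if we kept a separate vector per time step.
word_coeffs = [[0.9, -0.5, 0.1],   # dimension 1: c0 + c1*t + c2*t^2
               [0.1,  0.7, 0.0]]   # dimension 2
v_1900 = embed(word_coeffs, 0.0)   # t = 0.0 ~ earliest time
v_2000 = embed(word_coeffs, 1.0)   # t = 1.0 ~ latest time
assert v_1900 != v_2000            # the representation drifts continuously with t
```

Because the embedding is a continuous function of $t$, the model can be queried at arbitrary times, not just at the pre-grouped corpus timestamps.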
Divide-and-Conquer Meets Consensus: Unleashing the Power of Functions in Code Generation
Despite recent progress made by large language models in code generation, they still struggle with programs that must meet complex requirements. Recent work uses plan-and-solve decomposition to reduce complexity and leverages self-tests to refine the generated program. Yet planning for deeply nested requirements in advance can be challenging, and the tests must be accurate to enable self-improvement. To this end, we propose FunCoder, a code generation framework that combines a divide-and-conquer strategy with functional consensus. Specifically, FunCoder recursively branches off sub-functions as smaller goals during code generation, represented as a tree hierarchy. These sub-functions are then composed to attain more complex objectives.
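A minimal sketch of the "functional consensus" idea: several candidate implementations of the same sub-function are sampled, each is run on a few probe inputs, and the behavior shared by the largest group wins. The candidate bodies here are hand-written stand-ins for LLM samples, and the probe set is illustrative:

```python
from collections import Counter

candidates = [
    lambda n: n * (n + 1) // 2,    # closed-form sum 0..n
    lambda n: sum(range(n + 1)),   # loop-style sum 0..n
    lambda n: n * n,               # buggy sample
]

probes = [0, 1, 4, 10]
# Each candidate's "behavioral signature" is its tuple of outputs on the probes.
signatures = [tuple(f(p) for p in probes) for f in candidates]
consensus_sig, votes = Counter(signatures).most_common(1)[0]
chosen = candidates[signatures.index(consensus_sig)]

assert votes == 2       # two candidates agree on every probe
assert chosen(10) == 55 # the majority behavior is selected
```

Voting on behavior rather than on source text sidesteps the problem that correct programs can look syntactically different while buggy ones can look plausible.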
Reviews: Consistent Kernel Mean Estimation for Functions of Random Variables
Quantifying the price of dependent expansion points is an interesting mathematical question. However, this paper could do more to motivate general readers from the machine learning community to take an interest in the broader topic of kernel mean embedding. The authors attempt to do this in the last paragraph of Section 4, relating it to probabilistic programming systems, which seems a weak connection. As a reader, I was curious where such techniques of KME and reduced set expansions can potentially be used, or are currently used, to solve application-specific problems. Another disappointing aspect of the paper is that the authors did not delve deeper into the question of multiple arguments in Section 3. What is currently provided is a direct corollary of Theorem 2, and the paper devotes too much space to something that adds little information over what is already said.
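For readers unfamiliar with the technique the review discusses, the core object is simple: the kernel mean embedding of $f(X)$ can be estimated from samples $x_1, \dots, x_n$ by averaging kernel functions centered at $f(x_i)$. The kernel, function, and constants below are illustrative choices, not the paper's setup:

```python
import math
import random

def rbf(a, b, ell=1.0):
    """Gaussian (RBF) kernel on the reals."""
    return math.exp(-0.5 * (a - b) ** 2 / ell ** 2)

def kme_of_fX(samples, f, z):
    """Empirical mean embedding of f(X), evaluated at test point z:
    mu_hat(z) = (1/n) * sum_i k(f(x_i), z)."""
    return sum(rbf(f(x), z) for x in samples) / len(samples)

random.seed(0)
xs = [random.gauss(0.0, 1.0) for _ in range(2000)]
mu_at_0 = kme_of_fX(xs, lambda x: x * x, 0.0)   # embedding of X^2, evaluated at z = 0
assert 0.0 < mu_at_0 < 1.0
```

Consistency here means the estimate converges to the true embedding of $f(X)$ as $n$ grows; the paper's question is what happens when the expansion points themselves are dependent.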
Reviews: Randomized Prior Functions for Deep Reinforcement Learning
Summary: This paper studies uncertainty-based exploration in RL. First, the authors compare several previously published RL exploration methods and identify their drawbacks (including illustrative toy experiments). Then, they extend a particular previous method, bootstrapped DQN [1] (which uses bootstrap uncertainty estimates), by adding random prior functions. This extension is motivated by Bayesian linear regression and transferred to the case of deep nonlinear neural networks. Experimental results on Chain, CartPole swing-up, and Montezuma's Revenge show improved performance over a previous baseline, the bootstrapped DQN method.
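The mechanism under review can be sketched in a few lines: each ensemble member $k$ predicts $f_k(s) + \beta\, p_k(s)$, where $p_k$ is a fixed random "prior" function that is never trained. The trainable part here is a trivial tabular lookup fit on one observed state, and all constants are illustrative:

```python
import random

random.seed(1)
BETA = 3.0   # prior scale
K = 10       # ensemble size
STATES = range(5)

# Fixed random priors (never updated) and trainable residuals (initialized to 0).
priors = [{s: random.gauss(0.0, 1.0) for s in STATES} for _ in range(K)]
learned = [{s: 0.0 for s in STATES} for _ in range(K)]

# "Train" on state 0 with target 1.0: each member fits the residual,
# so trained predictions agree there regardless of its prior.
for k in range(K):
    learned[k][0] = 1.0 - BETA * priors[k][0]

def predict(k, s):
    return learned[k][s] + BETA * priors[k][s]

def spread(s):
    """Ensemble disagreement (std. dev.) at state s — the uncertainty signal."""
    preds = [predict(k, s) for k in range(K)]
    m = sum(preds) / K
    return (sum((p - m) ** 2 for p in preds) / K) ** 0.5

assert spread(0) < 1e-9       # no disagreement where data was seen
assert spread(3) > spread(0)  # untrained states keep prior-driven uncertainty
```

The point of the fixed prior is exactly this last assertion: plain bootstrapped ensembles can collapse to agreement on unvisited states, while the untrained prior term keeps them apart there.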
Artificial Intelligence in Oil & Gas Market Research Report by Function, Component, Application, Region - Global Forecast to 2027 - Cumulative Impact of COVID-19
Market Statistics: The report provides market sizing and forecasts across 7 major currencies - USD, EUR, JPY, GBP, AUD, CAD, and CHF. This helps organization leaders make better decisions when currency exchange data is readily available. In this report, the years 2018 to 2020 are treated as historical years, 2021 as the base year, 2022 as the estimated year, and the years from 2023 to 2027 as the forecast period. Market Segmentation & Coverage: This research report categorizes the Artificial Intelligence in Oil & Gas market to forecast revenues and analyze trends in each of the following sub-markets: Based on Function, the market was studied across Field Services, Material Movement, Predictive Maintenance & Machine Inspection, Production Planning, Quality Control, and Reclamation. Based on Component, the market was studied across Hardware, Services, and Software.
Why Creativity Is Now More Important Than Intelligence
Machines can now do what you could call IQ-style thinking – covering what 'multiple intelligences' theorist Howard Gardner would call visual-spatial, verbal-linguistic and logical-mathematical intelligence – pretty darn well. Artificial Intelligence (AI) is here and it's getting more sophisticated every day. But AC – Artificial Creativity – barely exists. AI has been unsettling the human world for quite some time. Can you believe it was 1997 when IBM's Deep Blue computer triumphed over chess colossus Garry Kasparov?
marcotcr/lime
This project is about explaining what machine learning classifiers (or models) are doing. At the moment, we support explaining individual predictions for text classifiers or classifiers that act on tables (numpy arrays of numerical or categorical data), with a package called lime (short for local interpretable model-agnostic explanations). Lime is based on the work presented in this paper. Our plan is to add more packages that help users understand and interact meaningfully with machine learning. Lime is able to explain any black-box text classifier with two or more classes.
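The following is not the lime package API, just a stdlib illustration of the idea it implements: perturb the input, observe how a black-box classifier's score changes, and attribute the change to the perturbed parts. The "classifier" is a toy keyword scorer standing in for any black box:

```python
def black_box(text):
    """Toy sentiment score; any opaque model returning a probability would do."""
    words = text.split()
    return sum(w == "great" for w in words) / max(len(words), 1)

def explain(text):
    """Score each word by the drop in output when that word is removed."""
    words = text.split()
    base = black_box(text)
    contribs = {}
    for i, w in enumerate(words):
        reduced = " ".join(words[:i] + words[i + 1:])
        contribs[w] = base - black_box(reduced)
    return contribs

expl = explain("great movie terrible ending")
top = max(expl, key=expl.get)
assert top == "great"   # removing "great" hurts the score the most
```

The real package does this more carefully (many random perturbations, proximity weighting, and a sparse linear surrogate model), but the model-agnostic principle is the same: only the classifier's inputs and outputs are touched.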
Artificial Intelligence for Microcomputers If you would like to develop an expert system or knowledge-based system on a microcomputer, you might want to read Artificial Intelligence for Microcomputers by Mickey Williamson. This nontechnical book is easy to understand, written for the unsophisticated microcomputer user. The first chapters provide a brief history of artificial intelligence (AI) and an introduction to natural language query systems. They explain what knowledge-based systems and expert systems are and how they work. Discussions are also provided of the two major AI programming languages, Lisp and Prolog, including their strengths and weaknesses. The remainder of the book is devoted to a review of some of the existing AI software products for microcomputers, such as natural language query systems, decision support systems, expert system development shells, and AI programming languages.
Sparse Distributed Memory
Restricting the number of potential readers is unfortunate because an interdisciplinary view of the world around us must be developed. This book should have been written to show a scientist with a good mathematics background how to do modeling and simulation. Scientific research needs more people trained in system concepts, people trained to understand and apply the Weltanschauung of system theory. Indeed, the recent recommendation for science education that came out of the Science for All Americans study, sponsored by the American Association for the Advancement of Science, emphasized an interdisciplinary approach to scientific concepts. By limiting the technical accessibility of this book, the author has not helped us address the need for training scientists in the use of interdisciplinary tools in scientific research.