While they are very useful for diagnosing typical cases, they have difficulty diagnosing complicated cases. Various approaches, such as deeper knowledge representation and case-based reasoning, have therefore been proposed to overcome this problem. However, they are not sufficient to solve it completely. One reason they fall short is that they lack an important element of diagnosis that medical experts employ when they meet complicated cases. In this paper, we introduce a combination of reasoning, planning and learning methods in order to address this difficulty.
The coding of information by neural populations depends critically on the statistical dependencies between neuronal responses. However, there is no simple model that combines the observations that (1) marginal distributions over single-neuron spike counts are often approximately Poisson; and (2) joint distributions over the responses of multiple neurons are often strongly dependent. Here, we show that both marginal and joint properties of neural responses can be captured using Poisson copula models. Copulas are joint distributions that allow random variables with arbitrary marginals to be combined while incorporating arbitrary dependencies between them. Different copulas capture different kinds of dependencies, allowing for a richer and more detailed description of dependencies than traditional summary statistics, such as correlation coefficients. We explore a variety of Poisson copula models for joint neural response distributions, and derive an efficient maximum likelihood procedure for estimating them. We apply these models to neuronal data collected in macaque motor cortex, and quantify the improvement in coding accuracy afforded by incorporating the dependency structure between pairs of neurons.
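The copula construction described above can be illustrated with a minimal sketch (not from the paper; the Gaussian copula and all parameter values are chosen purely for illustration): correlated standard normals are pushed through the normal CDF to obtain dependent uniforms, which are then mapped through each Poisson inverse CDF, yielding dependent spike counts with exactly Poisson marginals.

```python
import math
import random

def gauss_cdf(x):
    # standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def poisson_ppf(u, lam):
    # smallest k with Poisson(lam) CDF(k) >= u, via the pmf recurrence
    k, pmf = 0, math.exp(-lam)
    cdf = pmf
    while cdf < u:
        k += 1
        pmf *= lam / k
        cdf += pmf
        if pmf == 0.0:  # numerical guard for u extremely close to 1
            break
    return k

def sample_poisson_gaussian_copula(lam1, lam2, rho, n, seed=0):
    # Draw correlated standard normals (correlation rho), map them through
    # the normal CDF to dependent uniforms (the Gaussian copula), then
    # through each Poisson inverse CDF to get dependent Poisson counts.
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        z1 = rng.gauss(0.0, 1.0)
        z2 = rho * z1 + math.sqrt(1.0 - rho ** 2) * rng.gauss(0.0, 1.0)
        samples.append((poisson_ppf(gauss_cdf(z1), lam1),
                        poisson_ppf(gauss_cdf(z2), lam2)))
    return samples
```

Because the marginal transforms are the exact Poisson inverse CDFs, each coordinate is marginally Poisson regardless of the dependence injected by the copula, which is the key property the abstract relies on.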
We discover the patterns of autistic reasoning under conditions requiring a change in the representation of domain knowledge. The formalism of nonmonotonic default logic is used to simulate autistic decision-making while learning to adjust an action to an environment that forces a new representation structure. Our main finding is that while autistic reasoners may be able to process single default rules, they have a characteristic difficulty in cases with nontrivial representation changes, where multiple default rules conflict. We evaluate our hypothesis that the skill of representation adjustment can be advanced by learning default reasoning patterns via a set of exercises.
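The contrast between single and conflicting defaults can be sketched as follows (a toy illustration, not the paper's formalism: normal defaults over propositional literals, with the classic Nixon-diamond example supplying the conflict). A single applicable default yields one extension; two conflicting defaults yield multiple extensions, which is the situation the abstract identifies as difficult.

```python
from itertools import permutations

def neg(lit):
    # negate a propositional literal written as "p" or "~p"
    return lit[1:] if lit.startswith("~") else "~" + lit

def extensions(facts, defaults):
    # defaults: list of normal defaults (prerequisite, conclusion).
    # Apply defaults in every order; a default fires if its prerequisite
    # holds and its conclusion is consistent with the current state.
    # Distinct fixed points correspond to distinct extensions.
    results = set()
    for order in permutations(defaults):
        state = set(facts)
        changed = True
        while changed:
            changed = False
            for pre, concl in order:
                if pre in state and neg(concl) not in state and concl not in state:
                    state.add(concl)
                    changed = True
        results.add(frozenset(state))
    return results
```

With facts {quaker, republican} and the defaults "quakers are pacifists" and "republicans are not pacifists", the search finds two incompatible extensions, whereas either default alone yields exactly one.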
An important area of KDD research involves the development of techniques which transform raw data into forms more useful for prediction or explanation. We present an approach to automating the search for "indicator functions" which mediate such transformations. The fitness of a function is measured as its contribution to discerning different classes of data. Genetic programming techniques are applied to the search for and improvement of the programs which make up these functions. Rough set theory is used to evaluate the fitness of functions. Rough set theory provides a unique evaluator in that it allows the fitness of each function to depend on the combined performance of a population of functions. This is desirable in applications which need a population of programs that perform well in concert, and contrasts with traditional genetic programming applications whose goal is to find a single program which performs well. This approach has been applied to a small database of iris flowers, with the goal of learning to predict the species of the flower given the values of four iris attributes, and to a larger breast cancer database, with the goal of predicting whether remission will occur within a five-year period.
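The population-level fitness idea can be sketched with the standard rough-set dependency measure (a minimal illustration under our own assumptions; the paper's exact evaluator may differ): objects are grouped into indiscernibility classes by the joint outputs of all indicator functions, and fitness is the fraction of objects whose class is unambiguous given those outputs.

```python
from collections import defaultdict

def positive_region_fraction(indicators, objects, labels):
    # Group objects by the signature of all indicator outputs; these
    # groups are the indiscernibility classes induced by the population.
    # An object lies in the positive region if every object sharing its
    # signature has the same class label.
    groups = defaultdict(list)
    for obj, lab in zip(objects, labels):
        signature = tuple(f(obj) for f in indicators)
        groups[signature].append(lab)
    positive = sum(len(labs) for labs in groups.values()
                   if len(set(labs)) == 1)
    return positive / len(objects)
```

Note that the measure scores the whole set of indicators jointly: a function that discerns nothing on its own can still raise the score by splitting a mixed indiscernibility class, which is exactly why this evaluator rewards programs that perform well in concert.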
Over the last few decades, many distinct lines of research aimed at automating mathematics have been developed, including computer algebra systems (CASs) for mathematical modelling, automated theorem provers for first-order logic, SAT/SMT solvers aimed at program verification, and higher-order proof assistants for checking mathematical proofs. More recently, some of these lines of research have started to converge in complementary ways. One success story is the combination of SAT solvers and CASs (SAT+CAS) aimed at resolving mathematical conjectures. Many conjectures in pure and applied mathematics are not amenable to traditional proof methods. Instead, they are best addressed via computational methods that involve very large combinatorial search spaces. SAT solvers are powerful methods to search through such large combinatorial spaces---consequently, many problems from a variety of mathematical domains have been reduced to SAT in an attempt to resolve them. However, solvers traditionally lack deep repositories of mathematical domain knowledge that can be crucial to pruning such large search spaces. By contrast, CASs are deep repositories of mathematical knowledge but lack efficient general search capabilities. By combining the search power of SAT with the deep mathematical knowledge in CASs we can solve many problems in mathematics that no other known methods seem capable of solving. We demonstrate the success of the SAT+CAS paradigm by highlighting many conjectures that have been disproven, verified, or partially verified using our tool MathCheck. These successes indicate that the paradigm is positioned to become a standard method for solving problems requiring both a significant amount of search and deep mathematical reasoning. For example, the SAT+CAS paradigm has recently been used by Heule, Kauers, and Seidl to find many new algorithms for $3\times3$ matrix multiplication.
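The interaction between the two components can be sketched as a simple counterexample-guided loop (a toy illustration only; MathCheck's actual architecture, encodings, and CAS interface are far more sophisticated): the SAT side proposes assignments satisfying the combinatorial constraints, the "CAS" side checks the mathematical property, and failing candidates are blocked with a learned clause.

```python
from itertools import product

def sat_plus_cas(n_vars, clauses, math_check):
    # CEGAR-style loop: enumerate assignments satisfying the CNF clauses
    # (literals are signed ints, as in DIMACS), hand each candidate to the
    # mathematical predicate, and block failing candidates with a learned
    # clause before continuing the search.
    learned = []
    for bits in product([False, True], repeat=n_vars):
        assign = dict(enumerate(bits, start=1))
        def satisfied(clause):
            return any(assign[abs(lit)] == (lit > 0) for lit in clause)
        if all(satisfied(c) for c in clauses) and all(satisfied(c) for c in learned):
            if math_check(assign):
                return assign
            # learn a clause falsified only by this exact assignment
            learned.append([-v if assign[v] else v for v in assign])
    return None
```

Here exhaustive enumeration stands in for a real CDCL solver, and an arbitrary Python predicate stands in for the CAS; the point is only the division of labor the abstract describes, with combinatorial search on one side and domain knowledge pruning candidates on the other.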