Automatic Programming
AR&A techniques have been used to solve a variety of tasks, including automatic programming, constraint satisfaction, design, diagnosis, machine learning, search, planning, reasoning, game playing, scheduling, and theorem proving. The primary purpose of AR&A techniques in such settings is to overcome computational intractability. In addition, AR&A techniques are useful for accelerating learning and summarizing sets of solutions. The Fifth Symposium on Abstraction, Reformulation, and Approximation (SARA-2002) was held from 2 to 4 August 2002, directly after the Eighteenth National Conference on Artificial Intelligence (AAAI-2002). It was chaired by Sven Koenig from the Georgia Institute of Technology and Robert Holte from the University of Alberta (Canada) and held at Kananaskis Mountain Lodge, Kananaskis Village, Alberta (Canada) between Calgary and Banff in the Rocky Mountains.
A Perspective on Automatic Programming
Most work in automatic programming has focused primarily on the roles of deduction and programming knowledge. However, the role played by knowledge of the task domain seems to be at least as important, both for the usability of an automatic programming system and for the feasibility of building one that works on nontrivial problems. This perspective has evolved during the course of a variety of studies over the last several years, including a detailed examination of existing software for a particular domain (quantitative interpretation of oil well logs) and the implementation of an experimental automatic programming system for that domain. The importance of domain knowledge has two important implications: a primary goal of automatic programming research should be to characterize the programming process for specific domains, and a crucial issue to be addressed in these characterizations is the interaction of domain and programming knowledge during program synthesis. For example, the work of Green (1969) and Waldinger and Lee (1969) in the late 1960s was concerned with the use of a theorem prover to produce programs. This deductive paradigm continues to be the basis for much research in automatic programming (e.g., Manna & Waldinger 1980; Smith 1983). In the mid-1970s, work on the PSI project (Barstow 1979; Green 1977; Kant 1981) and on the Programmer's Apprentice (Rich 1981) was fundamentally concerned with the codification of knowledge about programming techniques and the use of that knowledge in program synthesis and analysis. Work within the knowledge-based paradigm is also continuing (e.g., Barstow 1982; Waters 1981). This article is concerned with the role played by knowledge of the task domain, a role which seems to be at least as important.
Automatic Programming Assessments
The industry today is on a constant lookout for good programmers. In this new age of digital services and products, programming skills are at a premium. Whenever a friend asks me to refer a good programmer to his company, I tell him: why would I refer her to you when I could hire her for my own team? But what does having programming skills really mean? What do we look for when we hire programmers?
JIT native code generation for TensorFlow computation graphs using Python and LLVM
One of the most amazing components of the TensorFlow architecture is the computation graph, which can be serialized using Protocol Buffers. This computation graph follows a well-defined format (specified in TensorFlow's proto files) and describes the computation that you specify (it can be a deep learning model like a CNN, a simple logistic regression, or any computation you want). As you can see, this is a very simple computation graph. First, we define the placeholder that will hold the input tensor, and after that we specify the computation that should happen using this input tensor as input data. Here we also define two important nodes of this graph: one is called "input" (the aforementioned placeholder), and the other is called "output", which will hold the result of the final computation.
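A minimal sketch of such a graph, assuming the TensorFlow 1.x API (tf.compat.v1 in newer releases); the node names, shape, and the arithmetic itself are illustrative, not taken from the original post:

```python
# Build a tiny computation graph with an "input" placeholder and an "output"
# node, then serialize its GraphDef Protocol Buffer message.
import tensorflow as tf

with tf.Graph().as_default() as graph:
    # Placeholder that will hold the input tensor at execution time.
    x = tf.placeholder(tf.float32, shape=[None, 2], name="input")
    # The computation applied to the input; its result is the "output" node.
    y = tf.add(tf.multiply(x, 2.0), 1.0, name="output")

# graph_def is the protobuf message describing the graph; it can be
# serialized to bytes (or written to disk) and handed to another tool,
# such as a JIT code generator.
graph_def = graph.as_graph_def()
serialized = graph_def.SerializeToString()
print(len(serialized), "bytes of serialized GraphDef")
```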
Automatic Programming
Robert Elschlager and Jorge Phillips, Handbook of Artificial Intelligence
Automatic programming (AP) is a new, dynamic, and not precisely defined area of artificial intelligence. This overview discusses the definitions, history, motivating forces, and goals of automatic programming and includes a brief description of the basic characteristics and central issues of AP systems. The article begins with a section discussing the various possible definitions of automatic programming, the background in which it has come into existence, and some of its general motivating forces and goals. The next section describes four characteristics of all AP systems: the method by which a user of such a system specifies or describes the desired program, the target language in which the system writes the program, the problem or application area to which the system is addressed, and the approach or operational method employed by the system. Next, a section discusses four basic issues, one or more of which concern all AP systems: the representation and processing of partial or incomplete information; the transformation of structures, and especially the transformation of program descriptions into other descriptions (in this chapter, the term program description includes the user's specification of the desired program, any internal representations of the program, as well as the target language implementation); the efficiency of the target language implementation; and the system's capabilities for aiding in the understanding of the program.
Automatic Programming: A Tutorial on Formal Methodologies
Alan W. Biermann
Automatic computer programming, or automatic programming, occurs whenever a machine aids in the programming process. The amount of automatic programming that is occurring is a variable quantity that depends on how much aid the human is given. There are a number of dimensions along which the level of help can be measured, including the level of the language used by the human, the amount of informality allowed, the degree to which the system is told what to do rather than how to do it, and the efficiency of the resulting code. Thus we usually say that there is a higher degree of automatic programming whenever a higher-level language is used, less precision is required of the human, the input instructions are more declarative and less procedural, and the quality of the object code is better. The technologies of automatic programming thus include the fields that help move the programming experience along any of these dimensions: algorithm synthesis, programming language research, compiler theory, human factors, and others. This paper concentrates on only the first of these topics, formal methodologies for the automatic construction of algorithms from fragmentary information. The formal methodologies have been separated into two categories: synthesis from formal specifications and synthesis from examples. In the former case, it is assumed that a specification is given for the target program with adequate domain information so that the target program can be derived in a series of logical steps.
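To make the second category concrete, here is a small hypothetical sketch (not from Biermann's tutorial) of synthesis from examples: a program is recovered from input/output pairs by enumerating compositions of primitives in a tiny DSL until one is consistent with every example.

```python
# Enumerative synthesis from examples over a toy DSL of unary integer functions.
import itertools

PRIMITIVES = {
    "inc":    lambda x: x + 1,
    "dec":    lambda x: x - 1,
    "double": lambda x: x * 2,
    "square": lambda x: x * x,
}

def synthesize(examples, max_depth=3):
    """Return a sequence of primitive names whose composition fits all examples."""
    for depth in range(1, max_depth + 1):
        for names in itertools.product(PRIMITIVES, repeat=depth):
            def program(x, names=names):
                for name in names:
                    x = PRIMITIVES[name](x)
                return x
            if all(program(i) == o for i, o in examples):
                return list(names)
    return None  # no program of the allowed size fits the examples

# Fragmentary information about the target f(x) = 2x + 1, given as examples.
print(synthesize([(1, 3), (2, 5), (5, 11)]))   # -> ['double', 'inc']
```

The search is exhaustive and only practical for very small programs, which is precisely why the formal methodologies surveyed in the tutorial bring in stronger specifications and domain information.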
The Fifth Symposium on Abstraction, Reformulation, and Approximation (SARA-2002)
The Fifth International Symposium on Abstraction, Reformulation, and Approximation (SARA-2002) was held from 2 to 4 August 2002 in Kananaskis, Alberta, Canada. This interdisciplinary conference brought together researchers from around the world to present recent progress on, and exchange ideas about, how abstraction, reformulation, and approximation techniques can be used in areas such as automatic programming, constraint satisfaction, design, diagnosis, machine learning, search, planning, reasoning, game playing, scheduling, and theorem proving.
Domain-Based Program Synthesis Using Planning and Derivational Analogy
In my Ph.D. dissertation (Bhansali 1991), I develop an integrated knowledge-based framework for efficiently synthesizing programs by bringing together ideas from the fields of software engineering (software reuse, domain modeling) and AI (hierarchical planning, analogical reasoning). Based on this framework, I constructed a prototype system, APU, that can synthesize UNIX shell scripts from a high-level specification of problems typically encountered by novice shell programmers. An empirical evaluation of the system's performance points to certain criteria that determine the feasibility of the derivational analogy approach in the automatic programming domain when the cost of detecting analogies and recovering from wrong analogs is considered.