[T]hree fundamental questions that must be addressed in the design of any automatic programming system: What does the user see? How does the system work? What does the system know? --- from Approaches to Automatic Programming
"Software is a messy business. ... The situation has triggered interest in using computer programs to generate other programs automatically. The benefits of automatic software are compelling. ... 'If a programmer can sit down, specify what you want and push a button, you end up much more productive,' says Doug Smith, a researcher at the Kestrel Institute, a nonprofit R&D center in Palo Alto, California. 'It's the next stage in the evolution of computer programming.' Smith and his colleagues at Kestrel have developed a program that translates a description of a problem into guidelines a computer can understand. ... One automatic programming tool has already made it into the financial marketplace. SciComp, based in Austin, Texas, has developed a product that helps investment banks design programs to price financial derivatives. ... Researchers at NASA hope to be able to generate programs on the fly during emergencies."
By Charles Rich and Richard C. Waters. Advances in Computers, Volume 37, M. C. Yovits, ed., Academic Press, 1993. Available from MERL (Mitsubishi Electric Research Laboratories). "This paper is an overview of current approaches to automatic programming organized around three fundamental questions that must be addressed in the design of any automatic programming system: What does the user see? How does the system work? What does the system know? As an example of a research effort in this area, we focus on the Programmer's Apprentice project. ... Much of what was originally conceived of as automatic programming was achieved long ago. Today, no one would call an assembler or a compiler automatic programming. However, when these devices were first invented in the 1950s the term was quite appropriate."
Software defect prediction aims to reduce software testing effort by guiding testers to the defect-prone sections of software systems. Defect predictors are widely used in organizations to predict defects in order to save time and effort, as an alternative to other techniques such as manual code reviews. Using a defect prediction model in a real-life setting is difficult because it requires software metrics and defect data from past projects to predict the defect-proneness of new projects. It is, on the other hand, very practical because it is easy to apply, can detect defects in less time and reduces the testing effort. We have built a learning-based defect prediction model for a telecommunication company over the course of one year. In this study, we briefly explain our model, present its pay-off and describe how we implemented the model in the company. Furthermore, we compare the performance of our model with that of another testing strategy applied in a pilot project that implemented a new process called Team Software Process (TSP). Our results show that defect predictors can predict 87 percent of code defects, decrease inspection effort by 72 percent and, hence, reduce post-release defects by 44 percent. Furthermore, they can be used as complementary tools for a new process implementation whose effects on testing activities are limited.
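The study's actual model and metrics are not reproduced here, but the core idea -- learn from labeled modules of past projects, then flag defect-prone modules in a new project -- can be sketched with a simple nearest-neighbor classifier over hypothetical static code metrics (lines of code, cyclomatic complexity, fan-out are assumed features for illustration):

```python
# A minimal sketch of metrics-based defect prediction (illustrative only;
# not the model described in the study above). Each past module is a vector
# of static code metrics labeled with whether it turned out to be defective.

import math

def euclidean(a, b):
    """Distance between two metric vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def predict_defect_prone(history, new_module, k=3):
    """Majority vote over the k most similar past modules' defect labels."""
    neighbors = sorted(history, key=lambda m: euclidean(m[0], new_module))[:k]
    votes = sum(1 for _, defective in neighbors if defective)
    return votes * 2 > k

# Hypothetical training data: ((loc, cyclomatic complexity, fan-out), was_defective)
history = [
    ((120, 4, 2), False),
    ((980, 35, 14), True),
    ((150, 6, 3), False),
    ((760, 28, 11), True),
    ((90, 2, 1), False),
]

print(predict_defect_prone(history, (800, 30, 12)))  # a large, complex module
print(predict_defect_prone(history, (100, 3, 2)))    # a small, simple module
```

In practice such predictors are trained on far richer metric sets and validated against actual post-release defect data, but the workflow -- measure, train, rank modules for inspection -- is the same.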
By Nick Gibson. Builder AU (August 10, 2007). "He's one of the fathers of modern software practices. In the late sixties while working at Ericsson he invented both sequence diagrams and use cases, and in later years worked on the SDL, UML and the RUP. We caught up with Dr Ivar Jacobson to hear his thoughts on where the industry is today, and where it will head in the future. ... [Q] What do you think is the most important thing to change? [A] To really automate and remove no brain work you need to find some identifiable patterns and apply these patterns over and over again with slightly different inputs -- that makes it hard because it's not reusable code only. I think reusable code has increased, we definitely use more reusable code than we did 20 years ago -- not as much as we could -- but we use more. When it comes to patterns you need to have different parameters based upon the context. They're very context centred. So what programmers do is use patterns, but they have to add the context themselves. That is what makes it so slow. I have since 1981 described a vision where we are assisted by intelligent agents. An intelligent agent that understands what you're doing in software. The only real difference between this kind of software and normal software, so to speak, is that its rule driven. ... These rules trigger based on context, and then a pattern is applied. ... [Q] Do you think then that using artificial intelligence is a way we can increase productivity?...
By Kimberly Patch, Technology Research News (March 23 / 30, 2005). "Writing software has been relatively difficult since people began programming computers in the mid-1900s. Although programming a computer is eminently useful -- it gives you fine control of a powerful tool -- it requires learning a programming language. Researchers from the Massachusetts Institute of Technology are aiming to remove this requirement. They have taken a step toward that goal with a language-to-code visualizer dubbed Metafor. The visualizer uses natural language instructions to sketch the outlines of a program. It can be used as a programming learning tool and to provide rough drafts of programming projects, and could lead to more complete programming-by-natural-language methods. ... Metafor organizes a natural-language description of a program into the skeleton of a program by mapping the inherent structure of English -- parts of speech, syntax, and subject-verb-object roles -- into a basic programmatic structure of class objects, properties, functions, and if-then rules, said [Hugo] Liu."
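Metafor's actual pipeline runs a full natural-language parser; as a toy illustration of the mapping idea Liu describes -- subject-verb-object roles becoming classes and functions -- the sketch below assumes the sentence has already been parsed into triples (all names here are invented for the example):

```python
# Toy illustration of mapping parsed subject-verb-object triples to a
# program skeleton, in the spirit of language-to-code visualizers.
# This is NOT Metafor's implementation, only the general shape of the idea.

def skeleton_from_triples(triples):
    classes = {}  # class name -> ordered list of method names
    for subject, verb, obj in triples:
        cls = subject.capitalize()          # subject -> class
        method = f"{verb}_{obj}"            # verb + object -> method stub
        classes.setdefault(cls, [])
        if method not in classes[cls]:
            classes[cls].append(method)
    lines = []
    for cls, methods in classes.items():
        lines.append(f"class {cls}:")
        for m in methods:
            lines.append(f"    def {m}(self): ...")
    return "\n".join(lines)

# "The bartender makes drinks and serves customers."
print(skeleton_from_triples([("bartender", "make", "drinks"),
                             ("bartender", "serve", "customers")]))
```

The result is only a rough draft of a program -- exactly the role the article describes for Metafor.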
By Robert L. Akers, Ion Bica, Elaine Kant, Curt Randall, and Robert L. Young. AI Magazine 22(2): Summer 2001, 27-42. This paper is based on the authors' presentation at the Twelfth Innovative Applications of Artificial Intelligence Conference (IAAI-2000). Abstract: "The SciFinance software synthesis system, licensed to major investment banks, automates programming for financial risk-management activities -- from algorithms research to production pricing to risk control. SciFinance's high-level, extensible specification language, aspen, lets quantitative analysts generate code from concise model descriptions written in application-specific and mathematical terminology; typically, a page or less produces thousands of lines of C. aspen's abstractions help analysts focus on their primary tasks -- model description, validation, and analysis -- rather than on programming details. Compared with manual programming, automation produces codes that are more sophisticated, accurate, and consistent. Analysts develop models within a day that previously took weeks or were not even attempted. SciFinance extends a system that generates scientific computing codes in a variety of target languages. The implementation integrates an object-oriented knowledge base, refinement and optimization rules, computer algebra, and a planning system. The shared knowledge base is used by the specification checker, synthesis system, and information portal."
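The refinement-rule approach the abstract mentions can be sketched in miniature: a nested specification term is rewritten, rule by rule, into target-language code. The rules and operators below are invented for illustration and bear no relation to the actual aspen language or SciFinance internals:

```python
# A toy sketch of rule-based refinement: expanding a concise spec term
# into code by recursively applying rewrite rules. (Illustrative only.)

REFINEMENT_RULES = {
    # spec operator -> template for the generated target expression
    "sum":     lambda arg: f"sum({arg})",
    "squares": lambda arg: f"(x * x for x in {arg})",
}

def refine(spec):
    """Recursively rewrite a nested spec term into target-language code."""
    if isinstance(spec, str):      # a variable name is already code
        return spec
    op, arg = spec
    return REFINEMENT_RULES[op](refine(arg))

code = refine(("sum", ("squares", "xs")))
print(code)   # sum((x * x for x in xs))
xs = [1, 2, 3]
print(eval(code))  # 14
```

A two-node spec expands into a longer expression; in a real synthesis system, each rule application can expand a single abstraction into many lines of code, which is how a page of specification becomes thousands of lines of C.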
The entire text of this book is available online.
By Stu Burton, Kent Swanson, and Lisa Leonard. AI Magazine 14(4): Winter 1993, 43-50. "Celite Corporation and Andersen Consulting have developed an advanced approach to traditional software development called the application software factory (ASF). The approach is an integration of technology and total quality 'management' techniques that includes the use of an expert system to guide module design and perform 'module programming.' The expert system component is called the knowledge-based design assistant and its inclusion in the ASF methodology" has significantly reduced module development time, training time, and module and communication errors.
"Our goal is automatic generation of computer programs from specifications that are much smaller and easier to write than ordinary programs."
"Automatic Programming is defined as the synthesis of a program from a specification. If automatic programming is to be useful, the specification must be smaller and easier to write than the program would be if written in a conventional programming language."
"a non-profit computer science research institute focusing on formal and knowledge-based methods for incremental automation of the software process. Kestrel's research efforts are applicable to the construction of the intelligent software design and engineering environment of the future that provides automated support for all activities in the software life-cycle. ... Our staff of researchers combines expertise in program synthesis, software engineering, machine intelligence, knowledge-base management, logic, automated reasoning systems, software environments, programming languages and compilers."
"This report describes some experiments in constructing a compiler that makes use of heuristic problem-solving techniques such as those incorporated in the General Problem Solver (GPS). The experiments were aimed at the dual objectives of throwing light on some of the problems of constructing more powerful programming languages and compilers, and of testing whether the task of writing a computer program can be regarded as a 'problem' in the sense in which that term is used in GPS. The present paper is concerned primarily with the second objective -- with analyzing some of the problem-solving processes that are involved in writing computer programs. At the present stage of their development, no claims will be made for the heuristic programming procedures described here as practical approaches to the construction of compilers. Their interest lies in what they teach us about the nature of the programming task."
See also: Artificial intelligence and self-organizing systems: Experiments with a Heuristic Compiler
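Treating program-writing as a GPS-style "problem" means searching for a sequence of operators that reduces the difference between the current state and the goal. The greedy forward sketch below is only loosely GPS-flavored (GPS itself reasons backward over differences), and the fact and operator names are invented for illustration:

```python
# A minimal difference-reducing planner in the spirit of GPS: repeatedly
# apply an operator that contributes a missing fact toward the goal.
# (A toy sketch; the Heuristic Compiler's actual representation was richer.)

# Operators: (name, preconditions, facts added, facts deleted)
OPERATORS = [
    ("load-accumulator", {"value-in-memory"}, {"value-in-accumulator"}, set()),
    ("add-accumulator",  {"value-in-accumulator"}, {"sum-in-accumulator"}, set()),
    ("store-result",     {"sum-in-accumulator"}, {"sum-in-memory"}, set()),
]

def plan(state, goal):
    """Greedily chain operators until the goal facts all hold."""
    steps = []
    state = set(state)
    while not goal <= state:
        for name, pre, add, delete in OPERATORS:
            if pre <= state and (add - state):   # applicable and useful
                steps.append(name)
                state = (state - delete) | add
                break
        else:
            return None  # no operator reduces the remaining difference
    return steps

print(plan({"value-in-memory"}, {"sum-in-memory"}))
```

The "program" that comes out is the operator sequence itself, which is the sense in which compiling code can be cast as problem solving.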
This paper presents a framework for characterizing automatic programming systems in terms of how a task is communicated to the system, the method and time at which the system acquires the knowledge to perform the task, and the characteristics of the resulting program to perform that task. It describes one approach in which both tasks and knowledge about the task domain are stated in natural language in the terms of that domain. All knowledge of computer science necessary to implement the task is internalized inside the