The reader-friendly Algorithm Design Manual provides straightforward access to combinatorial algorithms technology, stressing design over analysis. The first part, Techniques, provides accessible instruction on methods for designing and analyzing computer algorithms. The second part, Resources, is intended for browsing and reference, and comprises the catalog of algorithmic resources, implementations, and an extensive bibliography.
Deep learning models require extensive architecture design exploration and hyperparameter optimization to perform well on a given task. The exploration of the model design space is often performed by a human expert and optimized using a combination of grid search and search heuristics over a large space of possible choices. Neural Architecture Search (NAS) is a Reinforcement Learning approach that has been proposed to automate architecture design. NAS has been successfully applied to generate Neural Networks that rival the best human-designed architectures. However, NAS requires sampling, constructing, and training hundreds to thousands of models to achieve well-performing architectures, and this procedure must be executed from scratch for each new task: NAS currently lacks a way to transfer generalizable knowledge across tasks. In this paper, we present the Multitask Neural Model Search (MNMS) controller. Our goal is to learn a generalizable framework that can condition model construction on successful model searches for previously seen tasks, thus significantly speeding up the search for new tasks. We demonstrate that MNMS can conduct an automated architecture search for multiple tasks simultaneously while still learning well-performing, specialized models for each task. We then show that pre-trained MNMS controllers can transfer learning to new tasks. By leveraging knowledge from previous searches, we find that pre-trained MNMS models start from a better location in the search space and reduce search time on unseen tasks, while still discovering models that outperform published human-designed models.
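The idea of a controller that samples architectures per task and is rewarded by child-model performance can be sketched in a few lines. This is a minimal toy illustration, not the paper's method: the real MNMS controller is a learned network trained with policy gradients over a large search space, whereas here `SEARCH_SPACE`, `toy_reward`, and the tabular per-task preference update are all invented assumptions used only to show the sample-evaluate-update loop.

```python
import math
import random

random.seed(0)

# Hypothetical toy search space: each decision slot picks one option.
SEARCH_SPACE = {
    "num_layers": [1, 2, 4],
    "hidden_units": [32, 64, 128],
    "activation": ["relu", "tanh"],
}

class MultitaskController:
    """Toy stand-in for a multitask NAS controller: one preference vector
    per (task, decision slot). Sampling is a softmax over preferences;
    rewards nudge the chosen options upward (a REINFORCE-like update,
    greatly simplified)."""

    def __init__(self, tasks, lr=0.5):
        self.lr = lr
        self.prefs = {t: {slot: [0.0] * len(opts)
                          for slot, opts in SEARCH_SPACE.items()}
                      for t in tasks}

    @staticmethod
    def _softmax(xs):
        m = max(xs)
        exps = [math.exp(x - m) for x in xs]
        s = sum(exps)
        return [e / s for e in exps]

    def sample(self, task):
        # Draw one option index per decision slot, conditioned on the task.
        return {slot: random.choices(range(len(opts)),
                                     weights=self._softmax(self.prefs[task][slot]))[0]
                for slot, opts in SEARCH_SPACE.items()}

    def update(self, task, arch, reward, baseline=0.0):
        # Reinforce the sampled choices in proportion to the advantage.
        adv = reward - baseline
        for slot, idx in arch.items():
            self.prefs[task][slot][idx] += self.lr * adv

def toy_reward(task, arch):
    # Stand-in for "train the child model and measure accuracy":
    # each task prefers a different depth, so the controller must specialize.
    target = {"task_a": 0, "task_b": 2}[task]
    return 1.0 if arch["num_layers"] == target else 0.0

ctrl = MultitaskController(["task_a", "task_b"])
for step in range(200):
    for task in ("task_a", "task_b"):
        arch = ctrl.sample(task)
        ctrl.update(task, arch, toy_reward(task, arch))
```

After a few hundred samples the controller's preferences for the two tasks diverge, which is the multitask property the abstract describes: one shared search procedure, specialized outcomes per task.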
Autodesk recently announced the availability of a shape-based search capability in A360. A blog article titled How Machine Learning Will Transform 3D Engineering describes the new capability, called Design Graph, as a "Google search-like functionality for the world of 3D models." Google search functionality is probably the wrong metaphor for 3D search. Web search is fundamentally text based, whereas searching for a part or a design requires a combination of textual and geometric terms and attributes, and sufficiently deep domain semantics. In fact, the blog article makes the very same argument later, describing Design Graph's purpose to "identify and understand designs based on their inherent characteristics--their shape and structure--rather than by any labeling (tags) or metadata."
Regli, William C. (Drexel University) | Kopena, Joseph B. (Drexel University) | Grauer, Michael (Drexel University) | Simpson, Timothy W. (Penn State University) | Stone, Robert B. (Oregon State University) | Lewis, Kemper (University at Buffalo - SUNY) | Bohm, Matt R. (Oregon State University) | Wilkie, David (Drexel University) | Piecyk, Martin (Drexel University) | Osecki, Jordan (Drexel University)
This article introduces the challenge of digital preservation in the area of engineering design and manufacturing and presents a methodology to apply knowledge representation and semantic techniques to develop Digital Engineering Archives. This work is part of an ongoing, multiuniversity effort to create cyberinfrastructure-based engineering repositories for undergraduates (CIBER-U) to support engineering design education. The technical approach is to use knowledge representation techniques to create formal models of engineering data elements, workflows, and processes. With these formal models, engineering knowledge and processes can be captured and preserved with some guarantee of long-term interpretability.
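A formal model of an engineering data element, in the sense described above, pairs the raw artifact with the semantic context needed to interpret it long after the authoring tools are gone. The sketch below is illustrative only: the class names, predicates, and identifiers are invented for this example and do not reflect CIBER-U's actual schema, which the article develops with knowledge representation formalisms rather than plain Python.

```python
from dataclasses import dataclass, field

@dataclass
class FormatSpec:
    """Explicit record of the file format and the standard that governs it,
    so the artifact stays interpretable even if the format falls out of use."""
    name: str        # e.g. a neutral CAD exchange format
    spec_uri: str    # pointer to the governing specification (placeholder here)

@dataclass
class EngineeringArtifact:
    """Hypothetical formal model of one engineering data element, with the
    workflow context (creator, process step, dependencies) preserved."""
    identifier: str
    title: str
    fmt: FormatSpec
    created_by: str
    process_step: str                                 # where in the workflow it arose
    depends_on: list = field(default_factory=list)    # upstream artifact ids

    def to_triples(self):
        """Flatten to subject-predicate-object triples, the representation
        most semantic-web tooling consumes."""
        s = self.identifier
        triples = [
            (s, "dc:title", self.title),
            (s, "ex:format", self.fmt.name),
            (s, "ex:formatSpec", self.fmt.spec_uri),
            (s, "dc:creator", self.created_by),
            (s, "ex:processStep", self.process_step),
        ]
        triples += [(s, "ex:dependsOn", d) for d in self.depends_on]
        return triples

# Example element from a hypothetical student design project.
cad = EngineeringArtifact(
    identifier="ex:bracket-v2",
    title="Mounting bracket, revision 2",
    fmt=FormatSpec("STEP AP203", "urn:example:step-ap203-spec"),
    created_by="student-team-7",
    process_step="detailed-design",
    depends_on=["ex:bracket-v1"],
)
```

Emitting triples rather than an opaque binary is the point: the workflow links (`ex:dependsOn`, `ex:processStep`) make the design history queryable by future archive tooling.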