cleanlab 2.0: Automatically Find Errors in ML Datasets
Distributed ML is an active area of work in both academia and industry, and has been for some time; companies like Google were doing distributed machine learning decades ago. For some use cases, libraries like scikit-learn are entirely adequate, while for others, e.g. training sophisticated models that require a lot of compute, or training over datasets that don't fit on a single node, distributed computing is essential. On the topic of data storage: in some cases, system builders do co-design the data storage and data processing layers, and such co-design can give performance gains.
- Information Technology > Artificial Intelligence > Machine Learning (0.65)
- Information Technology > Data Science > Data Mining > Big Data (0.45)
Workshop Report
The 28th International Workshop on Qualitative Reasoning (QR-15) presented advances toward reasoning tractably with massive qualitative and quantitative models, automatically learning and reasoning about continuous processes, and representing knowledge about space, causation, and uncertainty. The technical track included two invited talks, 11 oral presentations, and 5 poster presentations.
- Health & Medicine (0.39)
- Education (0.33)
Where Are the Semantics in the Semantic Web? The most widely accepted defining feature of the semantic web is machine-usable content. By this definition, the semantic web is already manifest in shopping agents that automatically access and use web content to find the lowest air fares or book prices. However, where are the semantics? Most people regard the semantic web as a vision, not a reality--so shopping agents should not "count."
- Information Technology > Artificial Intelligence > Representation & Reasoning > Ontologies (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Logic & Formal Reasoning (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Agents (1.00)
Helping Novices Avoid the Hazards of Data: Leveraging Ontologies to Improve Model Generalization Automatically with Online Data Sources
The infrastructure and tools necessary for large-scale data analytics, formerly the exclusive purview of experts, are increasingly available. Whereas a knowledgeable data miner or domain expert can rightly be expected to exercise caution where required (for example, around misleading conclusions supposedly supported by the data), the nonexpert may benefit from some judicious assistance. This article describes an end-to-end learning framework that allows a novice to create models from data easily by helping structure the model-building process and capturing extended aspects of domain knowledge. By treating the whole modeling process interactively and exploiting high-level knowledge in the form of an ontology, the framework is able to aid the user in a number of ways, including helping to avoid pitfalls such as data dredging. Prudence must be exercised to avoid these hazards, as certain conclusions may only be supported if, for example, there is extra knowledge that gives reason to ...
A Prototype Expert System
During the past year, a prototype expert system for tactical data fusion has been under development. This computer program combines various messages concerning electronic intelligence (ELINT) to aid decision making concerning enemy actions and intentions. The prototype system is written in Prolog, a language that has proved to be very powerful and easy to use for problem/rule development. The resulting prototype system (called EXPRS - Expert PRolog System) uses English-like rule constructs of Prolog code. This approach enables the system to generate answers automatically to "why" a rule fired and "how" that rule fired. In addition, a rule clause construct is provided which allows direct access to Prolog code routines. This paper describes the structure of the rules used and provides typical user interactions. In the modern military environment, multiple sensor inputs need to be interpreted in a timely manner to assess developing battlefield conditions. The high volume of data from such sensor systems, as well as their high rate of data transfer, make this timely interpretation difficult and very demanding of human resources. The sensor data is inherently probabilistic as well as time-varying and nonmonotonic. The fusion process can also require numerical analysis to be done on the raw sensor data. This "number crunching" analysis is best done (and is currently being done) with languages such as ... This form of representation is very general, offering good future growth potential for the system. (AI Magazine, Summer 1984)
- Government > Military (1.00)
- Information Technology > Software (0.73)
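The "why"/"how" tracing that EXPRS offers can be sketched outside Prolog. Below is a minimal Python illustration of backward-chaining rule firing with a "how" trace; the rules, facts, and explanations are invented for illustration and are not the actual EXPRS rule base:

```python
# Minimal sketch of "why"/"how" rule tracing (hypothetical rules and facts;
# the real EXPRS system was written in Prolog, not Python).
RULES = {
    "hostile_intent": (["emitter_active", "closing_course"],
                       "an active emitter on a closing course suggests hostile intent"),
    "closing_course": (["bearing_constant", "range_decreasing"],
                       "constant bearing with decreasing range means a closing course"),
}

def prove(goal, facts, trace):
    """Try to derive `goal` from `facts`, recording HOW each rule fired."""
    if goal in facts:
        return True
    rule = RULES.get(goal)
    if rule and all(prove(subgoal, facts, trace) for subgoal in rule[0]):
        trace.append(f"HOW {goal}: {rule[1]}")
        return True
    return False

facts = {"emitter_active", "bearing_constant", "range_decreasing"}
trace = []
if prove("hostile_intent", facts, trace):
    for line in trace:
        print(line)
```

Because subgoals are proved before their parent rule fires, the trace reads bottom-up, mirroring the derivation order a user would see when asking "how" a conclusion was reached.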
The operation of a human organization requires dozens of everyday tasks: ensuring coherence in organizational activities, monitoring the status of such activities, gathering information relevant to the organization, keeping everyone in the organization informed, and so on. Teams of software agents can aid humans in accomplishing these tasks, facilitating the organization's coherent functioning and rapid response to crises while reducing the burden on humans. These activities are often well suited for software agents, which can devote significant resources to them. Indeed, teams of such software agents, including proxy agents that act on behalf of humans, would enable organizations to act coherently, attain their mission goals robustly, react to crises swiftly, and adapt to events dynamically. Such agent teams could assist all kinds of organizations, including the military, civilian disaster response, corporations, and universities and research institutions.
- Information Technology (1.00)
- Consumer Products & Services (1.00)
An AI Framework for the Automatic Assessment of e-Government Forms
This article describes the architecture and AI technology behind an XML-based AI framework designed to streamline e-government form processing. The framework performs several crucial assessment and decision-support functions, including workflow case assignment, automatic assessment, follow-up action generation, precedent case retrieval, and learning of current practices. To implement these services, several AI techniques were used, including rule-based processing, schema-based reasoning, AI clustering, case-based reasoning, data mining, and machine learning. The primary objective of using AI for e-government form processing is, of course, to provide faster and higher-quality service, as well as to ensure that all forms are processed fairly and accurately. With AI, all relevant laws and regulations, as well as current practices, are guaranteed to be considered and followed.
- Information Technology > Artificial Intelligence > Representation & Reasoning > Rule-Based Reasoning (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Expert Systems (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Case-Based Reasoning (1.00)
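The rule-based assessment and follow-up-action generation described above can be illustrated with a small sketch. The field names, thresholds, and actions below are hypothetical stand-ins, not taken from the actual framework:

```python
# Hypothetical sketch of rule-based form assessment: each rule inspects a
# submitted form (a dict) and may add an assessment or follow-up action,
# loosely mirroring automatic assessment and workflow case assignment.
def assess(form):
    actions = []
    if not form.get("signature"):
        actions.append("reject: missing signature")
    if form.get("amount", 0) > 10_000:
        actions.append("route: senior officer review")   # workflow case assignment
    if form.get("late"):
        actions.append("follow-up: request explanation for late filing")
    return actions or ["approve"]

print(assess({"signature": True, "amount": 500}))   # routine case, auto-approved
print(assess({"signature": False, "amount": 25_000, "late": True}))
```

A real framework would combine such rules with the case-based retrieval and learned practices mentioned above, so that past precedents inform how borderline forms are routed.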
Automatically Generating Game Tactics through Evolutionary Learning
The decision-making process of computer-controlled opponents in video games is called game AI. Adaptive game AI can improve the entertainment value of games by allowing computer-controlled opponents to fix weaknesses automatically in the game AI and to respond to changes in human-player tactics. Dynamic scripting is a reinforcement learning approach to adaptive game AI that learns, during gameplay, which game tactics an opponent should select to play effectively. In previous work, the tactics used by dynamic scripting were designed manually. We introduce the evolutionary state-based tactics generator (ESTG), which uses an evolutionary algorithm to generate tactics automatically.
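The evolutionary generation of tactics can be sketched as a toy selection-and-mutation loop. The action set and fitness function below are invented stand-ins; a system like ESTG would instead score each candidate tactic by simulated gameplay:

```python
import random
random.seed(0)

# Toy sketch of evolutionary tactic generation: a "tactic" is a fixed-length
# sequence of action choices, scored by a hypothetical fitness function
# standing in for gameplay evaluation.
ACTIONS = ["attack", "defend", "heal", "flee"]
TACTIC_LEN = 6

def fitness(tactic):
    # Stand-in for gameplay evaluation: reward aggression, partially reward healing.
    return tactic.count("attack") + 0.5 * tactic.count("heal")

def mutate(tactic):
    t = list(tactic)
    t[random.randrange(len(t))] = random.choice(ACTIONS)  # point mutation
    return t

population = [[random.choice(ACTIONS) for _ in range(TACTIC_LEN)] for _ in range(20)]
for generation in range(50):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]                                 # elitist selection
    population = survivors + [mutate(random.choice(survivors)) for _ in range(10)]

best = max(population, key=fitness)
print(best, fitness(best))
```

Elitist selection keeps the best tactics across generations while mutation explores variants, which is the core loop behind evolutionary tactic generators.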
Automating machine learning puts analytical models on autopilot
More than basic machine learning models are becoming automated. Software now exists that can handle more complicated tasks, such as natural language processing and generation, without requiring the user to write code. In the future, even AI itself could be automated for enterprises. For example, San Antonio-based USAA is using a natural language tool from Narrative Science that runs machine learning algorithms to automatically generate verbal descriptions of the metrics included in business intelligence reports. Luke Horgan, director of digital channel analytics at the insurance company, said this enables his team to answer a broader array of questions than they were previously capable of tackling.
Log Analytics With Deep Learning and Machine Learning - XenonStack
Deep learning is a class of neural network algorithms that takes data as input and processes it through several layers of nonlinear transformations to compute an output. A distinctive feature of these algorithms is automatic feature extraction: the algorithm learns on its own which features are relevant to solving the problem, reducing the burden on the programmer to select features explicitly. Deep learning can be applied to supervised, unsupervised, or semi-supervised problems.
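The stacked nonlinear transformations described above can be shown as a minimal forward pass. The weights below are fixed toy values rather than learned parameters, and the two-layer shape is chosen purely for illustration:

```python
# Minimal sketch of a forward pass through stacked nonlinear layers
# (the core computation of a deep network); weights are toy values.
def relu(x):
    # Elementwise nonlinearity: negative pre-activations are zeroed.
    return [max(0.0, v) for v in x]

def dense(x, weights, bias):
    # One fully connected layer: y_j = sum_i x_i * w_j[i] + b_j,
    # where `weights` is a list of per-output weight vectors.
    return [sum(xi * w for xi, w in zip(x, col)) + b
            for col, b in zip(weights, bias)]

x = [1.0, 2.0]                                               # raw input features
h = relu(dense(x, [[0.5, 1.0], [-1.0, 0.3]], [0.0, 0.1]))    # hidden layer
y = dense(h, [[1.0, 1.0]], [0.0])                            # output layer
print(y)
```

The second hidden unit's pre-activation is negative (-0.3) and gets zeroed by the ReLU, which is exactly the nonlinearity that lets stacked layers learn features a single linear map cannot.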