Case-Based Reasoning


Artificial Intelligence Can Now Identify A Bird Just By Looking At A Photo Digital Trends

#artificialintelligence

Artificial intelligence technology has proven itself useful in many different areas, and now birdwatching has gotten the A.I. treatment. A new A.I. tool can identify up to 200 different species of birds just by looking at one photo. The technology comes from a team at Duke University that used over 11,000 photos of 200 bird species to teach a machine to differentiate them. The tool was shown birds from ducks to hummingbirds and was able to pick out specific patterns that match a particular species of bird. "Along the way, it spits out a series of heat maps that essentially say: 'This isn't just any warbler. It's a hooded warbler, and here are the features -- like its masked head and yellow belly -- that give it away,'" wrote Robin Smith, senior science writer in Duke's communications department, in a blog post about the new technology.
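
The appeal of the Duke tool is that its evidence is visual: it matches image patches against learned part prototypes and reports where each prototype fired. Below is a minimal, hedged sketch of that general prototype-matching idea in PyTorch; the shapes, prototype count, and log-ratio similarity are assumptions for illustration, not the Duke team's actual implementation.

import torch
import torch.nn.functional as F

# Illustrative prototype-based scoring: compare convolutional feature
# patches against learned class prototypes and keep, per prototype, a
# spatial similarity map that can be upsampled into a heat map.
# All shapes and names here are assumptions for the sketch.

def prototype_heatmaps(features, prototypes, eps=1e-4):
    """features: (B, C, H, W) conv feature map of the bird photo.
    prototypes: (P, C) learned prototype vectors (e.g. 'masked head').
    Returns similarities: (B, P, H, W), one heat map per prototype."""
    B, C, H, W = features.shape
    P = prototypes.shape[0]
    # Squared L2 distance between every spatial patch and every prototype.
    f = features.permute(0, 2, 3, 1).reshape(B, H * W, C)          # (B, HW, C)
    d2 = torch.cdist(f, prototypes.unsqueeze(0).expand(B, -1, -1)) ** 2
    # Map distance to similarity: small distance -> large activation.
    sim = torch.log((d2 + 1.0) / (d2 + eps))                       # (B, HW, P)
    return sim.permute(0, 2, 1).reshape(B, P, H, W)

feats = torch.randn(1, 512, 7, 7)     # stand-in for a CNN backbone output
protos = torch.randn(10, 512)         # 10 hypothetical part prototypes
maps = prototype_heatmaps(feats, protos)
# Evidence per prototype: max similarity anywhere in the image,
# i.e. "this patch looks like that prototype".
scores = maps.flatten(2).max(dim=2).values                         # (1, 10)
heat = F.interpolate(maps[:, :1], size=(224, 224), mode="bilinear")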


Training the Shadow AI of Killer Instinct (2013) AI and Games

#artificialintelligence

Killer Instinct: Definitive Edition is now available on Amazon for Xbox One. You can even fight my shadows, shown in this video. The shadow system is capable of replicating a player's performance in a non-player character after only three matches in the shadow dojo. We take a look at how this system records and acts upon data, as well as the challenges faced in creating fighting game AI.


The Killer Groove: The Shadow AI of Killer Instinct

#artificialintelligence

The 2013 reboot of fighting game Killer Instinct on Xbox One has proven a popular game in the modern e-sports arena. While it successfully revives the long-dormant franchise from the Super Nintendo and Nintendo 64 era, it also introduces a new approach to fighting game AI. The Shadow Mode system -- a free update released in season 2 of Killer Instinct in 2015 -- allows you to construct AI-driven fighters that are designed to match your own capability when playing a given character. After spending three training sessions in the dojo, your shadow AI is capable of replicating a portion of your behaviour, with many of your strategic decisions mapped onto the non-player character. Once established, your shadow can be sent off to fight your friends or other players on Xbox Live.
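
Recording gameplay and then retrieving the closest stored situation is, at heart, case-based reasoning, which would explain how the shadow system can learn from only three matches. A toy sketch of that retrieve-and-replay loop follows; the state encoding and action names are entirely hypothetical, not Killer Instinct's.

import math

# A hedged sketch of a case-based "shadow" fighter: record the player's
# (game state, action) pairs during training matches, then act by
# retrieving the nearest recorded state and replaying its action.

class ShadowAI:
    def __init__(self):
        self.cases = []  # list of (state_vector, action) pairs

    def record(self, state, action):
        """Called every decision frame while the player trains in the dojo."""
        self.cases.append((state, action))

    def act(self, state):
        """Replay the action of the most similar recorded situation."""
        best_state, best_action = min(self.cases,
                                      key=lambda c: math.dist(c[0], state))
        return best_action

# Hypothetical state: (distance to opponent, own health, opponent health,
# opponent airborne?). Actions are whatever the player actually did.
shadow = ShadowAI()
shadow.record((1.2, 0.9, 0.8, 0.0), "forward_medium_kick")
shadow.record((0.3, 0.5, 0.6, 1.0), "anti_air_uppercut")

print(shadow.act((0.4, 0.6, 0.7, 1.0)))  # -> "anti_air_uppercut"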


r/MachineLearning - [Project] pgANN Fast Approximate Nearest Neighbor (ANN) searches with a PostgreSQL database.

#artificialintelligence

Hi, we did experiment with ES, using range queries on the vectors and boolean-querying them, and also tried using LSH/MinHash to save a signature for each vector. Did you have a different approach in mind? Also, you're correct about L1 & L2 distances being poor metrics at this dimensionality, but our goal was to fetch a subset of (say) a few thousand "good enough" results - from a pool of tens of millions - that can then be re-ranked with cosine or a similar metric. Unfortunately, there are no easy wins in ANN, and this works well enough for us. We hope others can benefit as well.
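
For readers unfamiliar with the pattern, the two-stage scheme in this reply looks roughly like the numpy sketch below: a cheap coarse filter (plain L2 here, standing in for pgANN's PostgreSQL-side fetch) pulls a few thousand "good enough" candidates, and cosine similarity re-ranks them. Corpus size and dimensions are illustrative.

import numpy as np

rng = np.random.default_rng(0)
corpus = rng.normal(size=(100_000, 128)).astype(np.float32)
query = rng.normal(size=128).astype(np.float32)

# Stage 1: coarse L2 filter -> candidate subset. As noted above, L2 is a
# weak metric at this dimensionality, but it only needs to be "good enough".
d2 = np.sum((corpus - query) ** 2, axis=1)
candidates = np.argpartition(d2, 5000)[:5000]

# Stage 2: exact cosine re-rank of the small candidate pool.
sub = corpus[candidates]
cos = (sub @ query) / (np.linalg.norm(sub, axis=1) * np.linalg.norm(query))
top10 = candidates[np.argsort(-cos)[:10]]
print(top10)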


AI-based Analytics: The key to business-led eDiscovery Casepoint

#artificialintelligence

Another common eDiscovery pitfall is the use of standard approaches for every case. Rather than dig in and work out data-minimization and cost estimates for each case, many practitioners fall back on generic formulas. Dubious tenets like "every stage of large cases goes to law firms" or "law firms always manage review for us" still rule the day. Teams automatically slap boilerplate project-planning timelines onto every eDiscovery project: 0 to 6 months for early case assessment (ECA), 6 to 12 months for full-blown eDiscovery, and 12 to 24 months to finish eDiscovery, motions, and trial preparation.


Supervised Learning Approach to Approximate Nearest Neighbor Search

arXiv.org Machine Learning

Approximate nearest neighbor search is a classic algorithmic problem where the goal is to design an efficient index structure for fast approximate nearest neighbor queries. We show that it can be framed as a classification problem and solved by training a suitable multi-label classifier and using it as an index. Compared to the existing algorithms, this supervised learning approach has several advantages: it enables adapting an index to the query distribution when the query distribution and the corpus distribution differ; it allows using training sets larger than the corpus; and in principle it enables using any multi-label classifier for approximate nearest neighbor search. We demonstrate these advantages on multiple synthetic and real-world data sets by using a random forest and an ensemble of random projection trees as the base classifiers.

Introduction: In k-nearest neighbor (k-NN) search, the k points that are nearest to the query point are retrieved from the corpus. Approximate nearest neighbor search is used to speed up k-NN search in applications where fast response times are critical, such as in computer vision, robotics, and recommendation systems. Traditionally, approximate nearest neighbor search is approached as a problem in algorithms and data structures. Space-partitioning methods -- trees, hashing, and quantization -- divide the space according to a geometric criterion. For instance, k-d trees (Bentley 1975) and principal component trees (McNames 2001) are grown by hierarchically partitioning the space along the maximum variance directions of the corpus.
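
A rough sketch of the paper's framing using scikit-learn: training queries are labeled with the indices of their true nearest neighbors in the corpus, a multi-label random forest learns that mapping, and at query time the predicted label probabilities yield a candidate set that exact search re-ranks. All sizes and hyperparameters below are illustrative, not the authors' setup.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
corpus = rng.normal(size=(300, 16))                 # points to be indexed
train_q = rng.normal(scale=1.1, size=(3000, 16))    # training queries
k = 5

def true_knn(q, pts, k):
    return np.argsort(np.sum((pts - q) ** 2, axis=1))[:k]

# Multi-label targets: label j is on iff corpus point j is a true
# k-NN of the training query.
Y = np.zeros((len(train_q), len(corpus)), dtype=int)
for i, q in enumerate(train_q):
    Y[i, true_knn(q, corpus, k)] = 1

index = RandomForestClassifier(n_estimators=50, n_jobs=-1).fit(train_q, Y)

def ann_query(q, n_candidates=20):
    # predict_proba returns one (n_samples, n_classes) array per label;
    # take P(label=1) per corpus point, guarding labels never seen "on".
    probas = index.predict_proba(q.reshape(1, -1))
    p1 = np.array([p[0, 1] if p.shape[1] == 2 else 0.0 for p in probas])
    cand = np.argsort(-p1)[:n_candidates]           # candidate set
    return cand[true_knn(q, corpus[cand], k)]       # exact re-rank

query = rng.normal(size=16)
print("approximate:", ann_query(query))
print("exact:      ", true_knn(query, corpus, k))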


SEC selects Casepoint for its cloud-based e-discovery pilot

#artificialintelligence

Casepoint was selected over a number of other e-discovery technology solutions after undergoing a rigorous multi-step evaluation process. The SEC's evaluation factors included software features and functionality, cybersecurity, management and key personnel, past performance, and a competitive proof-of-concept process. The SEC receives approximately 3,500 new productions, totaling nearly 10TB after processing, each month. The SEC's existing e-discovery systems contain almost 1PB of data across 7,300 cases and must support 600 users in its Washington, D.C. headquarters and 1,200 additional users across its 11 regional offices nationwide. It was imperative that the SEC choose an enterprise-scale solution that is fast, powerful, and easy to use.


IBM Research Launches Explainable AI Toolkit

#artificialintelligence

Explainability or interpretability of AI is a huge deal these days, especially given the number of enterprises depending on decisions made by machine learning and deep learning. Naturally, stakeholders want some transparency into how the algorithms arrived at their recommendations. The so-called "black box" of AI is increasingly being questioned. For this reason, I was encouraged to learn of IBM's recent efforts in this area. The company's research arm just launched a new open-source AI toolkit, "AI Explainability 360," consisting of state-of-the-art algorithms that support the interpretability and explainability of machine learning models.
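
AI Explainability 360 packages many such algorithms. As a generic illustration of the kind of post-hoc transparency these toolkits automate (deliberately not AIX360's own API, which readers should consult directly), here is permutation importance with scikit-learn: shuffle one feature at a time and measure how far the black-box model's accuracy falls.

import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Train an opaque model, then explain it by feature permutation:
# a large accuracy drop means the model leans heavily on that feature.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

result = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
for i in np.argsort(-result.importances_mean)[:5]:
    print(f"{X.columns[i]:30s} drop in accuracy: {result.importances_mean[i]:.3f}")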


Method for the semantic indexing of concept hierarchies, uniform representation, use of relational database systems and generic and case-based reasoning

arXiv.org Artificial Intelligence

This paper presents a method for semantic indexing and describes its application in the field of knowledge representation. The starting point of the semantic indexing is knowledge represented by concept hierarchies. The goal is to assign keys to nodes (concepts) that are hierarchically ordered and syntactically and semantically correct. The indexing algorithm computes keys such that concepts are partially unifiable with all more specific concepts, and only semantically correct concepts are allowed to be added. The keys represent terminological relationships. Correctness and completeness of the underlying indexing algorithm are proven. The use of classical relational database systems for the storage of instances is described. Because of the uniform representation, inference can be performed using case-based reasoning and generic problem-solving methods.
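
As a toy illustration of the indexing idea as described in the abstract: give each concept a key that extends its parent's key, so "more specific than" becomes a string-prefix test that an ordinary relational database can answer. The key scheme and schema below are assumptions for the sketch, not the paper's algorithm.

import sqlite3

# Toy semantic indexing of a concept hierarchy: each node's key extends
# its parent's key, so subsumption reduces to a prefix test that a
# relational database can evaluate with a LIKE query.

hierarchy = {                 # child -> parent
    "animal": None,
    "bird": "animal",
    "warbler": "bird",
    "hooded_warbler": "warbler",
    "mammal": "animal",
}

def assign_keys(hierarchy):
    keys, counters = {}, {}
    def key_of(node):
        if node in keys:
            return keys[node]
        parent = hierarchy[node]
        prefix = "" if parent is None else key_of(parent) + "."
        counters[parent] = counters.get(parent, 0) + 1
        keys[node] = f"{prefix}{counters[parent]}"
        return keys[node]
    for node in hierarchy:
        key_of(node)
    return keys

keys = assign_keys(hierarchy)          # e.g. hooded_warbler -> "1.1.1.1"

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE concept (name TEXT, key TEXT)")
db.executemany("INSERT INTO concept VALUES (?, ?)", keys.items())

# All concepts more specific than 'bird': keys sharing bird's prefix.
rows = db.execute("SELECT name FROM concept WHERE key LIKE ?",
                  (keys["bird"] + ".%",)).fetchall()
print(rows)   # [('warbler',), ('hooded_warbler',)]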