Measuring Intelligence


Accenture launches artificial intelligence testing services

#artificialintelligence

IT services and consulting company Accenture is launching new services for testing artificial intelligence systems, helping companies build their own AI-driven products and services hosted locally or in the cloud.



Accenture Launches New Artificial Intelligence Testing Services

#artificialintelligence

Accenture Launches New Artificial Intelligence Testing Services. Powered by a "Teach and Test" methodology, the new services help companies validate the safety, reliability and transparency of their artificial intelligence systems. NEW YORK; Feb. 20, 2018 – Accenture (NYSE: ACN) has launched new services for testing artificial intelligence (AI) systems, powered by a unique "Teach and Test" methodology designed to help companies build, monitor and measure reliable AI systems within their own infrastructure or in the cloud. Accenture's "Teach and Test" methodology ensures that AI systems produce the right decisions in two phases. The "Teach" phase focuses on the choice of data, models and algorithms used to train machine learning. This phase experiments with and statistically evaluates different models to select the best-performing model to deploy into production, while avoiding gender, ethnic and other biases, as well as ethical and compliance risks. During the "Test" phase, AI system outputs are compared with key performance indicators and assessed for whether the system can explain how a decision or outcome was determined.
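The announcement gives no implementation details, but the general shape of a "Teach and Test" style workflow can be sketched roughly as follows. This is a minimal, hypothetical illustration using synthetic data and scikit-learn models, not Accenture's actual tooling: the "Teach" step compares candidate models by cross-validation and applies a crude group-disparity check, and the "Test" step compares held-out outputs against a KPI threshold.

```python
# Hypothetical sketch of a "Teach and Test"-style workflow (not Accenture's implementation).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score, train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))            # synthetic features
group = rng.integers(0, 2, size=1000)     # hypothetical protected attribute
y = (X[:, 0] + 0.5 * rng.normal(size=1000) > 0).astype(int)

X_train, X_test, y_train, y_test, g_train, g_test = train_test_split(
    X, y, group, test_size=0.3, random_state=0)

# "Teach": statistically compare candidate models and keep the best performer.
candidates = {"logreg": LogisticRegression(max_iter=1000),
              "forest": RandomForestClassifier(n_estimators=100, random_state=0)}
scores = {name: cross_val_score(m, X_train, y_train, cv=5).mean()
          for name, m in candidates.items()}
best_name = max(scores, key=scores.get)
best = candidates[best_name].fit(X_train, y_train)

# Crude bias check: difference in positive-prediction rates between groups.
preds = best.predict(X_test)
rate_gap = abs(preds[g_test == 0].mean() - preds[g_test == 1].mean())

# "Test": compare system outputs against a key performance indicator.
accuracy = (preds == y_test).mean()
KPI_THRESHOLD = 0.80                       # example KPI, chosen arbitrarily
print(f"best={best_name} cv={scores[best_name]:.3f} acc={accuracy:.3f} "
      f"rate_gap={rate_gap:.3f} kpi_met={accuracy >= KPI_THRESHOLD}")
```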


The Role of Artificial Intelligence in Testing: An Interview with Jason Arbon

#artificialintelligence

Josiah Renaudin: Welcome back to another TechWell interview. I'm joined by Jason Arbon, the CEO of Appdiff and a speaker at this year's STAR WEST. First, could you tell us a bit about where you worked at before you started Appdiff? Jason Arbon: Hi, Josiah, nice to chat with you again. Later while I was at Google, I worked on test automation for the Chrome browser and ran a team doing personalized web search.


It's a game changer: recruiters make a play for ideal jobseeker

The Guardian

Welcome aboard the Starship Comet – a virtual spaceship in the smartphone game Cosmic Cadet, which asks players to complete six levels of interstellar challenges in 30 minutes. The game may look and feel like Angry Birds, but it is testing more than your ability to swipe and aim. It is a psychometric assessment, which its creators believe will revolutionise the recruitment industry. Measuring cognitive processes such as resilience and problem-solving, the game collects data on how job candidates instinctively respond to given situations, thereby helping employers gain a better understanding of how they would perform in the role and whether they are a good fit for the company. Cosmic Cadet is one of three games available for iPhone and Android users.


On the influence of intelligence in (social) intelligence testing environments

arXiv.org Artificial Intelligence

This paper analyses the influence of including agents of different degrees of intelligence in a multiagent system. The goal is to better understand how we can develop intelligence tests that can evaluate social intelligence. We analyse several reinforcement learning algorithms in several contexts of cooperation and competition. Our experimental setting is inspired by the recently developed Darwin-Wallace distribution.
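The abstract does not give the paper's implementation, but the flavour of such an experiment can be sketched in a few lines: two agents of very different ability (here a simple Q-learner and a random baseline, hypothetical stand-ins for the paper's reinforcement learning agents) play a repeated coordination game, and we measure how each agent's average reward depends on who shares the environment with it.

```python
# Toy sketch (not the paper's actual setup): agents of different ability interact
# in a repeated 2x2 matrix game and we record the average reward each earns.
import random

# Coordination game: matching actions rewards both agents, mismatching penalises both.
PAYOFF = {(0, 0): (1, 1), (1, 1): (1, 1), (0, 1): (-1, -1), (1, 0): (-1, -1)}

class QAgent:
    """Q-learning over own actions, conditioned on the opponent's last move."""
    def __init__(self, eps=0.1, alpha=0.2):
        self.q = {}  # (opponent_last_action, action) -> value estimate
        self.eps, self.alpha = eps, alpha
    def act(self, opp_last):
        if random.random() < self.eps:
            return random.choice([0, 1])
        return max([0, 1], key=lambda a: self.q.get((opp_last, a), 0.0))
    def learn(self, opp_last, action, reward):
        key = (opp_last, action)
        self.q[key] = self.q.get(key, 0.0) + self.alpha * (reward - self.q.get(key, 0.0))

class RandomAgent:
    """A 'less intelligent' baseline that ignores the other agent entirely."""
    def act(self, opp_last):
        return random.choice([0, 1])
    def learn(self, *args):
        pass

def run(agent_a, agent_b, steps=5000):
    last_a = last_b = 0
    total_a = total_b = 0.0
    for _ in range(steps):
        a, b = agent_a.act(last_b), agent_b.act(last_a)
        ra, rb = PAYOFF[(a, b)]
        agent_a.learn(last_b, a, ra)
        agent_b.learn(last_a, b, rb)
        total_a, total_b, last_a, last_b = total_a + ra, total_b + rb, a, b
    return total_a / steps, total_b / steps

print(run(QAgent(), RandomAgent()))  # learner paired with a random partner
print(run(QAgent(), QAgent()))       # two learners: does coordination emerge?
```

The point of the sketch is the same as the paper's question: the measured "performance" of an agent changes with the intelligence of the agents it is evaluated alongside, which matters for designing social intelligence tests.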


Analysis of first prototype universal intelligence tests: evaluating and comparing AI algorithms and humans

arXiv.org Artificial Intelligence

Today, available methods for assessing AI systems focus on using empirical techniques to measure the performance of algorithms on specific tasks (e.g., playing chess, solving mazes or landing a helicopter). However, these methods are not appropriate if we want to evaluate the general intelligence of AI and, even less so, if we want to compare it with human intelligence. The ANYNT project has designed a new evaluation method that tries to assess AI systems using well-known computational notions and problems which are as general as possible. This new method serves to assess general intelligence (which allows us to learn how to solve any new kind of problem we face) and not only to evaluate performance on a set of specific tasks. The method not only focuses on measuring the intelligence of algorithms, but also aims to assess any intelligent system (human beings, animals, AI, aliens?, ...), letting us place their results on the same scale and, therefore, compare them. This new approach will allow us (in the future) to evaluate and compare any kind of intelligent system, known or yet to be built or found, be it artificial or biological. This master's thesis aims to ensure that the new method provides consistent results when evaluating AI algorithms; this is done through the design and implementation of prototypes of universal intelligence tests and their application to different intelligent systems (AI algorithms and human beings). From the study we analyse whether the results obtained by two different intelligent systems are properly located on the same scale, and we propose changes and refinements to these prototypes in order to, in the future, be able to achieve a truly universal intelligence test.
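The ANYNT prototypes are built on formal, complexity-weighted environments; the sketch below is only a loose, hypothetical illustration of the underlying idea of a common scale. Two very different toy "agents" are scored on the same battery of randomly generated sequence-prediction tasks, so each ends up with a single comparable number between 0 and 1.

```python
# Minimal sketch (not the ANYNT prototypes): place two different systems on one scale
# by scoring them on the same battery of randomly generated prediction tasks.
import random

def make_task(length=50):
    """A toy 'environment': a repeating symbol sequence with a random period."""
    period = random.randint(1, 4)
    pattern = [random.choice("ABC") for _ in range(period)]
    return [pattern[i % period] for i in range(length)]

def frequency_agent(history):
    """Predicts the symbol seen most often so far (a weak but non-trivial learner)."""
    return max(set(history), key=history.count) if history else "A"

def random_agent(history):
    """Baseline with no ability to learn the task."""
    return random.choice("ABC")

def score(agent, n_tasks=200):
    """Average per-step prediction accuracy over the whole battery (0..1 scale)."""
    total = 0.0
    for _ in range(n_tasks):
        seq = make_task()
        hits = sum(agent(seq[:i]) == seq[i] for i in range(len(seq)))
        total += hits / len(seq)
    return total / n_tasks

print("frequency agent:", round(score(frequency_agent), 3))
print("random agent:   ", round(score(random_agent), 3))
```

Because both scores come from the same task battery and the same scale, they can be compared directly, which is the property the thesis wants to verify for its prototype tests applied to AI algorithms and human subjects.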


A Program for the Solution of a Class of Geometric-Analogy Intelligence-Test Questions

Classics

Ph.D. dissertation, M.I.T., June 1963. Proceedings of the Spring Joint Computer Conference, 1964, pp. 327-338. Reprinted in Minsky, Marvin L. (ed.), Semantic Information Processing, pp. 271 ff.