Artificial Intelligence System


Experiment could lead to machines learning without humans

Daily Mail

Machines that can think for themselves - and perhaps turn on their creators as a result - have long been a fascination of science fiction. And creating robots that can learn without any input from humans is moving ever closer, thanks to the latest developments in artificial intelligence. One such project pits the wits of two AI algorithms against each other, with results that could one day lead to the emergence of such intelligent machines. Researchers have set AI algorithms against each other to create more realistic 'imaginings' of the real world. Google's Generative Adversarial Network works by pitting two algorithms against each other in an attempt to create convincing representations of the real world.
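The adversarial setup the article describes - one network generating data, another trying to tell it apart from the real thing - can be illustrated on a toy 1-D problem. This is a minimal NumPy sketch, not Google's system: the "generator" is just a scale-and-shift of noise, the "discriminator" a logistic regression, and the gradients are derived by hand.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D GAN: generator g(z) = wg*z + bg tries to match real data ~ N(3, 1);
# discriminator D(x) = sigmoid(wd*x + bd) tries to tell real from fake.
wg, bg = 1.0, 0.0          # generator parameters
wd, bd = 0.0, 0.0          # discriminator parameters
lr, batch = 0.05, 64

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

for step in range(3000):
    real = rng.normal(3.0, 1.0, batch)
    z = rng.normal(0.0, 1.0, batch)
    fake = wg * z + bg

    # Discriminator step: descend the negated log-likelihood
    # -log D(real) - log(1 - D(fake)).
    d_real, d_fake = sigmoid(wd * real + bd), sigmoid(wd * fake + bd)
    grad_wd = np.mean(-(1 - d_real) * real) + np.mean(d_fake * fake)
    grad_bd = np.mean(-(1 - d_real)) + np.mean(d_fake)
    wd -= lr * grad_wd
    bd -= lr * grad_bd

    # Generator step: descend the non-saturating loss -log D(fake).
    d_fake = sigmoid(wd * fake + bd)
    grad_fake = -(1 - d_fake) * wd      # dLoss/dfake
    wg -= lr * np.mean(grad_fake * z)
    bg -= lr * np.mean(grad_fake)

fake = wg * rng.normal(0.0, 1.0, 1000) + bg
print(f"generated mean after training: {fake.mean():.2f} (real mean 3.0)")
```

Because the generator only ever sees the discriminator's gradient, it is pushed toward the real distribution without any labelled examples - the "learning without humans" angle the headline gestures at.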


Racial, sexist bias may sneak into AI systems: Study

#artificialintelligence

Washington: Artificial Intelligence systems can acquire our cultural, racial or gender biases when trained with ordinary human language available online, scientists including one of Indian origin have found. In debates over the future of artificial intelligence, many experts think of the new systems as coldly logical and objectively rational. However, researchers have demonstrated how machines can be reflections of us, their creators, in potentially problematic ways. Common machine learning programs, when trained with ordinary human language available online, can acquire cultural biases embedded in the patterns of wording, researchers found. These biases range from the morally neutral, like a preference for flowers over insects, to objectionable views of race and gender.


Biased bots: Human prejudices sneak into artificial intelligence systems

#artificialintelligence

In debates over the future of artificial intelligence, many experts think of the new systems as coldly logical and objectively rational. But in a new study, researchers have demonstrated how machines can be reflections of us, their creators, in potentially problematic ways. Common machine learning programs, when trained with ordinary human language available online, can acquire cultural biases embedded in the patterns of wording, the researchers found. These biases range from the morally neutral, like a preference for flowers over insects, to objectionable views of race and gender. Identifying and addressing possible bias in machine learning will be critically important as we increasingly turn to computers for processing the natural language humans use to communicate, for instance in doing online text searches, image categorization and automated translations.
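The kind of bias measurement the researchers describe can be illustrated with an implicit-association-style score over word vectors. The sketch below uses tiny hand-made 2-D vectors rather than real trained embeddings, and borrows the article's flowers-versus-insects example: a positive score means the "flower" words sit closer to pleasant words than the "insect" words do.

```python
import numpy as np

# Hand-made toy 2-D "embeddings"; real studies use vectors trained on
# large web corpora (e.g. word2vec or GloVe).
emb = {
    "rose": np.array([0.9, 0.1]), "tulip": np.array([0.8, 0.2]),
    "ant": np.array([0.1, 0.9]), "wasp": np.array([0.2, 0.8]),
    "lovely": np.array([1.0, 0.0]), "pleasant": np.array([0.95, 0.1]),
    "nasty": np.array([0.0, 1.0]), "awful": np.array([0.1, 0.95]),
}

def cos(u, v):
    # Cosine similarity between two vectors.
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def assoc(w, A, B):
    # How much more word w resembles attribute set A than set B.
    return np.mean([cos(emb[w], emb[a]) for a in A]) - \
           np.mean([cos(emb[w], emb[b]) for b in B])

def bias_score(X, Y, A, B):
    # Positive score: target set X leans toward A, Y toward B.
    return np.mean([assoc(x, A, B) for x in X]) - \
           np.mean([assoc(y, A, B) for y in Y])

score = bias_score(["rose", "tulip"], ["ant", "wasp"],
                   ["lovely", "pleasant"], ["nasty", "awful"])
print(f"flower-vs-insect pleasantness association: {score:.2f}")
```

The same differential-association score, applied to embeddings trained on ordinary online text, is how such studies surface associations ranging from the benign (flowers/pleasant) to the objectionable (race and gender stereotypes).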


Are public services ready to exploit artificial intelligence?

#artificialintelligence

Governments are already using data and analytics in a number of ways to become better informed and provide superior services for their citizens. For both central and local governments, an increasing number of back-end processing and citizen-engagement opportunities are emerging for smart use of artificial intelligence and its many subfields. The biggest area for potential quick wins will be the vast processing that occurs in various administration tasks, including improving awareness of patterns in data to create new theses and models. Bringing together data from different areas and using algorithms that learn can create new insights.


What Does the Future of Mobile Marketing Look Like?

#artificialintelligence

The outcomes of the future are uncertain eventualities. Yet we as humans strive to predict future outcomes and scenarios in a variety of ways, from the complete guesswork of choosing your lottery numbers to the more rigorous, science- and technology-based efforts of meteorologists predicting future weather patterns. Needless to say, human beings have an innate desire to analyze patterns and contemplate the unknown. When it comes to the future of the mobile marketing industry, things are a little more certain than next week's big numbers.


Making AI systems that see the world as humans do

#artificialintelligence

A Northwestern University team developed a new computational model that performs at human levels on a standard intelligence test. This work is an important step toward making artificial intelligence systems that see and understand the world as humans do. "The model performs in the 75th percentile for American adults, making it better than average," said Northwestern Engineering's Ken Forbus. "The problems that are hard for people are also hard for the model, providing additional evidence that its operation is capturing some important properties of human cognition." The new computational model is built on CogSketch, an artificial intelligence platform previously developed in Forbus' laboratory.


A 'Babelfish' could be the web's next big thing, says AI expert

AITopics Original Links

Though the idea of the "Babelfish" - a thing able to translate between any two languages on the fly - was created by the author Douglas Adams as a handy solution to the question of how intergalactic travellers could understand each other, it could be reality within 25 years. At least, that is, for human language. Prof Nigel Shadbolt, a close associate of the web inventor Sir Tim Berners-Lee, says that the idea of automatic machine translation "on the fly" is achievable before the world wide web turns 50. Shadbolt also forecasts that future changes to the web will mean people will be "connected all the time" to medical diagnostic systems – but also that search companies including Google and China's Baidu may face challenges as web use shifts from the desktop to handheld and mobile devices. Having first used the web in 1993, via an early version of the Mosaic browser while on a visit to Canada, Shadbolt now thinks that it opens up huge possibilities for artificial intelligence systems built by connecting computers across the web - so-called cloud computing - that will be able to enhance daily life.


Developing artificial intelligence systems that can interpret images

AITopics Original Links

Like many kids, Antonio Torralba began playing around with computers when he was 13 years old. Unlike many of his friends, though, he was not playing video games, but writing his own artificial intelligence (AI) programs. Growing up on the island of Majorca, off the coast of Spain, Torralba spent his teenage years designing simple algorithms to recognize handwritten numbers, or to spot the verb and noun in a sentence. But he was perhaps most proud of a program that could show people how the night sky would look from a particular direction. "Or you could move to another planet, and it would tell you how the stars would look from there," he says.


Precrime: Artificial intelligence system can predict data theft by scanning email

AITopics Original Links

Workers who may be tempted to sell confidential corporate data should think twice about what they write in an email - an AI-based monitoring system could be watching. Tokyo-based data analysis company UBIC has developed an artificial intelligence system that scans messages for signs of potential plans to purloin data. A risk prediction function is being added to an existing product from the company that audits email for signs of activity such as price fixing. The Lit i View Email Auditor has been used in electronic discovery procedures in U.S. lawsuits. The artificial intelligence system, dubbed Virtual Data Scientist, can sift through messages and identify senders whose writing suggests they are in financial straits or disgruntled about how their employer treats them.