Digging Deep Into Artificial Intelligence (AI): What It Means to Mining and Geologists


Imagine a network of mine sites operated remotely--drilling, analysing core samples, collecting and interpreting data wirelessly from machine to machine, and transmitting real-time information to the cloud, entirely without physical human touch. This is fast becoming reality in an industry that's increasingly powered by artificial intelligence, a.k.a. AI. When we think of AI, we think of robots and machines capable of independent thought or autonomous movement. These are possibilities, and even realities, in today's world where practically anything can be automated. AI, however, goes beyond hardware, and its applications reach farther than we can perhaps imagine.

Let's talk about Artificial Intelligence


The term artificial intelligence (AI) was coined by John McCarthy at a summer workshop at Dartmouth College in 1956, which many of the world's leading researchers in computing attended [1]. The main purpose of this workshop was to re-create in machines the mechanisms of the human brain, aiming to emulate human intelligence. AI researchers soon realized that this is not a trivial task, although the ideas presented at the time proved very important for later rule-based expert systems. Simulating human intelligence in computer systems is a very difficult task, firstly because we don't have a strong grasp of how the brain works in its entirety. Furthermore, human intelligence is not just about the brain; other factors, such as education, memory, motivation, and emotions, are essential parts of our intelligence. Even today, the brain is considered the main inspiration for the AI field, much as birds were the inspiration for the first airplanes.
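The rule-based expert systems mentioned above work by repeatedly applying if-then rules to a set of known facts until nothing new can be derived. As a rough illustration only, here is a minimal forward-chaining sketch in Python; the specific rules and facts are invented for the example and do not come from any real system.

```python
# A minimal sketch of forward chaining, the inference style behind early
# rule-based expert systems. The rules and facts below are hypothetical.

# Each rule is (set of required conditions, conclusion to add).
rules = [
    ({"has_feathers", "lays_eggs"}, "is_bird"),
    ({"is_bird", "cannot_fly"}, "maybe_penguin"),
]

def forward_chain(facts, rules):
    """Fire every rule whose conditions hold, adding its conclusion,
    and repeat until no new fact can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"has_feathers", "lays_eggs", "cannot_fly"}, rules))
```

Real expert systems of the era (such as MYCIN-style shells) added conflict resolution and certainty factors on top of this basic loop, but the derive-until-fixed-point idea is the same.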

What Is Artificial Intelligence (AI)?


In September 1955, John McCarthy, a young assistant professor of mathematics at Dartmouth College, boldly proposed that "every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it." McCarthy called this new field of study "artificial intelligence," and suggested that a two-month effort by a group of 10 scientists could make significant advances in developing machines that could "use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves." At the time, scientists optimistically believed we would soon have thinking machines doing any work a human could do. Now, more than six decades later, advances in computer science and robotics have helped us automate many of the tasks that previously required the physical and cognitive labor of humans. But true artificial intelligence, as McCarthy conceived it, continues to elude us.

Let me into your home: artist Lauren McCarthy on becoming Alexa for a day

The Guardian

In a gallery in downtown Manhattan, people are huddling around four laptops, taking turns to control the apartments of 14 complete strangers. They watch via live video feeds, and respond whenever the residents ask "Someone" to help them. They switch the lights on and off, boil the kettle, put some music on – whatever they can do to oblige. The project, called Someone, is the latest in a series exploring our ever more complicated relationship with technology. It's by the American artist Lauren McCarthy and is a sort of outsourcing of Lauren, an earlier work in which she acted as a real-life Alexa, remotely watching over a home 24 hours a day, responding to its occupants' questions and needs like a flesh and blood version of Amazon's voice-operated virtual assistant.

AI explained


In this lecture, I will offer you a definition of artificial intelligence, or AI, and give you a brief overview of its history from its inception in the 1950s. Let's start by saying what AI isn't. AI is not machines that think, or even computers that work the way the brain works. AI is what machines do, not how they do it. The authors of a leading textbook on AI have offered eight possible definitions of the term.

The languages of AI


Artificial intelligence (AI) evolved alongside the languages available for its development. In 1959, Arthur Samuel developed a self-learning checkers program at IBM on an IBM 701 computer using the native instructions of the machine (quite a feat given search trees and alpha-beta pruning). But today, AI is developed using various languages, from Lisp to Python to R. This article explores the languages that evolved for AI and machine learning. The programming languages used to build AI and machine learning applications vary: each application has its own constraints and requirements, and some languages are better than others in particular problem domains.
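To make the alpha-beta pruning mentioned above concrete, here is a short sketch in Python (chosen because the article names it among AI languages). The toy game tree is a made-up nested list, not a real checkers position, and this is an illustration of the technique rather than Samuel's actual implementation.

```python
# Minimax search with alpha-beta pruning over a toy game tree.
# Leaves are numbers (static evaluations); internal nodes are lists of children.

def alphabeta(node, depth, alpha, beta, maximizing):
    """Return the minimax value of `node`, skipping branches that
    cannot change the final decision (alpha/beta cutoffs)."""
    if depth == 0 or not isinstance(node, list):
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, depth - 1, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:      # beta cutoff: the minimizer won't allow this line
                break
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alphabeta(child, depth - 1, alpha, beta, True))
            beta = min(beta, value)
            if beta <= alpha:      # alpha cutoff: the maximizer has a better option
                break
        return value

tree = [[3, 5], [6, [9, 1]], [2, 0]]
print(alphabeta(tree, 4, float("-inf"), float("inf"), True))  # prints 6
```

The cutoffs are what made deep game-tree search feasible on 1950s hardware: whole subtrees are discarded as soon as it is clear the opponent would never let play reach them.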

Four people are allowing strangers to control their smart homes


For the next seven weeks, anyone who's inclined can go to 205 Hudson Street in New York City and take over someone else's apartment. Smart devices like the kettles, lighting and speakers of four homes connect directly to laptops in the corner of an art gallery. Cameras are trained on bathrooms, kitchens and living areas. Visitors can sit down and become a human Alexa, playing music, eavesdropping on conversations through microphones and communicating with the inhabitants via text-to-speech. Each home -- three in Brooklyn, one in San Francisco -- will be "live" for two hours a day.

A Small Talk on Artificial Intelligence – Data Driven Investor – Medium


Artificial beings that behave like humans have long been common in fiction stories, TV serials and movies. As a child, I was a big fan of the Japanese TV serial "The Giant Robot" and Karel Capek's R.U.R. (Rossum's Universal Robots). Technological inventions and scientific discoveries have turned many wild human fantasies into reality. Today, artificial intelligence is one of the hot topics on the table. Websites and magazines constantly bombard us with news and articles on AI, which seems too good to be true to me.

A new customer experience: How AI is changing marketing


Content provided by IBM with Insider Studios. In the summer of 1956, 10 scientists and mathematicians gathered at New Hampshire's Dartmouth College to brainstorm a new concept that Assistant Professor John McCarthy called "artificial intelligence." According to the original proposal for the research project, McCarthy--along with fellow organizers from Harvard, Bell Labs and IBM--wanted to explore the idea of programming machines to use language and solve problems for humans while improving over time. It would be years before these lofty objectives were met, but the summer workshop is credited with launching the field of artificial intelligence (AI). Sixty years later, cognitive scientists, data analysts, UX designers and countless others are doing everything those pioneering scientists hoped for--and more.

Google chief executive Sundar Pichai set to testify to Congress in December

Washington Post - Technology News

Google chief executive Sundar Pichai is set to testify to Congress in December, facing off against lawmakers for the first time at a hearing that could subject the search giant to the same harsh political spotlight that has faced its tech peers all year. The scheduled Dec. 5 hearing before the House Judiciary Committee, confirmed by three sources familiar with the plan but not authorized to speak on record, comes in response to some Republicans who claim that Google is biased against conservatives. A spokesman for the panel's GOP leader, Virginia Rep. Bob Goodlatte, did not immediately respond to a request for comment. Google also did not immediately respond. Led by House Majority Leader Kevin McCarthy (Calif.),