John McCarthy coined the term "Artificial Intelligence" (AI) in 1956 at the Dartmouth Conference, along with four founding colleagues: Marvin Minsky, Oliver Selfridge, Ray Solomonoff, and Trenchard More. McCarthy's original framing of AI was that "every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves." Put simply, AI is a term for simulated intelligence in machines: the machines are programmed to mimic the cognitive functions of the human brain.
In September 1955, John McCarthy, a young assistant professor of mathematics at Dartmouth College, boldly proposed that "every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it." McCarthy called this new field of study "artificial intelligence," and suggested that a two-month effort by a group of 10 scientists could make significant advances in developing machines that could "use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves." At the time, scientists optimistically believed we would soon have thinking machines doing any work a human could do. Now, more than six decades later, advances in computer science and robotics have helped us automate many of the tasks that previously required the physical and cognitive labor of humans. But true artificial intelligence, as McCarthy conceived it, continues to elude us.
A prominent researcher of machine learning and artificial intelligence has argued that his field has strayed out of the bounds of science and engineering and into "alchemy." Ali Rahimi, who works on AI at Google, said he thinks his field has made amazing progress, but suggested there is something rotten in the way it has developed. In machine learning, a computer "learns" via a process of trial and error. The problem, Rahimi suggested in a talk presented at an AI conference, covered recently by Matthew Hutson for Science magazine, is that researchers who work in the field not only don't understand exactly how their algorithms learn, but also don't understand how the techniques they use to build those algorithms work. Back in 2017, Rahimi had already sounded the alarm on the mystical side of artificial intelligence: "We produce stunningly impressive results," he wrote in a blog post.
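The "trial and error" learning Rahimi refers to can be made concrete with a minimal sketch: repeatedly guess, measure the error, and nudge the guess to reduce it. The example below fits the toy relationship y = 2x by gradient descent; the target function, learning rate, and step count are illustrative assumptions, not details from the article.

```python
# Minimal sketch of learning by trial and error: fit y = 2x with
# gradient descent. All numbers here are illustrative assumptions.
data = [(x, 2.0 * x) for x in range(1, 6)]  # samples of y = 2x
w = 0.0    # initial guess for the weight
lr = 0.01  # learning rate (how big each corrective nudge is)

for step in range(200):
    # Trial: predict with the current weight and measure the average
    # gradient of the squared error over the data.
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    # Error correction: nudge the weight against the gradient.
    w -= lr * grad

print(round(w, 2))  # converges near the true weight, 2.0
```

Even in this tiny example, the point of Rahimi's criticism is visible: the loop clearly works, but *why* a given learning rate or number of steps works well on a real, high-dimensional problem is often established empirically rather than understood theoretically.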
Artificial intelligence has been a trending technology for quite a few years now. You have probably heard a lot about it in tech news and blogs. There are various predictions about the future of artificial intelligence, but have you ever been curious about its early stages? Today, AI, along with its subsets machine learning and deep learning, is driving innovation in the software industry. In fact, 41 percent of consumers expect AI to change their lives in the future.
AI is everywhere; often without our noticing, it has worked its way into every aspect of our lives. It has changed the way we live by simplifying routine activities such as shopping, traveling, and human-machine interaction. AI now shapes many of our choices: it influences what we buy by showing ads and recommendations while we shop, and AI trip advisors suggest travel destinations and the best vacation packages for our budgets. AI helps businesses and financial institutions serve their customers better through automated question-and-answer chatbots. AI also curates our social media feeds: how many of your Facebook friends have stopped showing up on your wall, even though they are still active? That is because AI knows what, and whom, you are interested in.