Artificial Intelligence: A Free Online Course from MIT

#artificialintelligence

To paraphrase Amazon's Jeff Bezos, artificial intelligence (AI) is "not just in the first inning of a long baseball game, but at the stage where the very first batter comes up." Look around, and you will find AI everywhere--in self-driving cars, Siri on your phone, online customer support, movie recommendations on Netflix, fraud detection for your credit cards, etc. To be sure, there's more to come. Featuring 30 lectures, MIT's course "introduces students to the basic knowledge representation, problem solving, and learning methods of artificial intelligence." It includes interactive demonstrations designed to "help students gain intuition about how artificial intelligence methods work under a variety of circumstances."


Should you become a data scientist?

#artificialintelligence

There is no shortage of articles attempting to lay out a step-by-step process for becoming a data scientist. Are you a recent graduate? Do this… Are you changing careers? Do that… And make sure you're focusing on the top skills: coding, statistics, machine learning, storytelling, databases, big data… Need resources? Check out Andrew Ng's Coursera ML course… Although these are important things to consider once you have made up your mind to pursue a career in data science, I hope to answer the question that should come before all of this. It's the question that should be on every aspiring data scientist's mind: "should I become a data scientist?" This question addresses the why before you try to answer the how. What is it about the field that draws you in and will keep you in it and excited for years to come? In order to answer this question, it's important to understand how we got here and where we are headed, because by having a full picture of the data science landscape, you can determine whether data science makes sense for you. Before computer science, data technology, visualization, mathematics, and statistics converged into what we call data science today, these fields existed in silos -- independently laying the groundwork for the tools and products we are now able to develop, things like Oculus, Google Home, Amazon Alexa, self-driving cars, and recommendation engines. The foundational ideas have been around for decades... early scientists dating back to the pre-1800s, coming from a wide range of backgrounds, worked on developing our first computers, calculus, probability theory, and algorithms like CNNs, reinforcement learning, and least squares regression. With the explosion in data and computational power, we are able to resurrect these decades-old ideas and apply them to real-world problems. In 2009 and 2012, McKinsey and the Harvard Business Review published articles hyping up the role of the data scientist, showing how data scientists were revolutionizing the way businesses operate and how they would be critical to future business success. They not only saw the advantage of a data-driven approach, but also the importance of using predictive analytics to remain competitive and relevant. Around the same time, in 2011, Andrew Ng came out with a free online course on machine learning, and the curse of AI FOMO (fear of missing out) kicked in. Companies began the search for highly skilled individuals to help them collect, store, visualize, and make sense of all their data. "You want the title and the high pay?


Mossberg: The Disappearing Computer

#artificialintelligence

The biggest hardware and software arrival since the iPad in 2010 has been Amazon's Echo voice-controlled intelligent speaker, powered by its Alexa software assistant. But just because you're not seeing amazing new consumer tech products on Amazon, in the app stores, or at the Apple Store or Best Buy, that doesn't mean the tech revolution is stuck or stopped. The technologies driving the next wave are: artificial intelligence / machine learning, augmented reality, virtual reality, robotics and drones, smart homes, self-driving cars, and digital health / wearables. Google has changed its entire corporate mission to be "AI first" and, with Google Home and Google Assistant, to perform tasks via voice commands and eventually hold real, unstructured conversations.


As AI moves to the chip, mobile devices are about to get much smarter

#artificialintelligence

The branch of artificial intelligence called deep learning has given us new wonders such as self-driving cars and instant language translation on our phones. Now it's about to inject smarts into every other object imaginable. That's because makers of silicon processors, from giants such as Intel Corp. and Qualcomm Technologies Inc. to a raft of smaller companies, are starting to embed deep learning software into their chips, particularly for mobile vision applications. In fairly short order, that's likely to lead to much smarter phones, drones, robots, cameras, wearables and more. "Consumers will be genuinely amazed at the capabilities of these devices," said Cormac Brick, vice president of machine learning for Movidius Ltd., a maker of vision processor chips in San Mateo, Calif.


DeepMind Has Simple Tests That Might Prevent Elon Musk's AI Apocalypse

#artificialintelligence

You don't have to agree with Elon Musk's apocalyptic fears of artificial intelligence to be concerned that, in the rush to apply the technology in the real world, some algorithms could inadvertently cause harm. This type of self-learning software powers Uber's self-driving cars, helps Facebook identify people in social-media posts, and lets Amazon's Alexa understand your questions. Now DeepMind, the London-based AI company owned by Alphabet Inc., has developed a simple test to check if these new algorithms are safe.