Our readers loved these deals most this week. (Photo: Amazon / iLife) Who doesn't love getting a good deal on something great? This week, we saw some really fantastic deals on products we love, and the good news is that almost all the best deals are still available now. We took a closer look at wh...
The 10 most popular things our readers bought on Amazon in January (Photo: Reviewed.com) If you make a purchase by clicking one of our links, we may earn a small share of the revenue. However, our picks and opinions are independent from USA TODAY's newsroom and any business incentives. If you think people take a break from shopping after the holidays, guess again. January was a productive month for our readers and their wallets.
In the first half of our two-part conversation with Wikibon lead analyst James Kobielus (@jameskobielus), he discussed the incredible impact of machine learning in helping organizations make better business decisions and be more productive. In today's Part 2, he addresses what aspects of machine learning should be keeping data scientists up at night. Developing these algorithms is not without its challenges, Kobielus says. The first major challenge is finding data. Algorithms can't do magic unless they've been "trained."
Google just announced significant enhancements to its machine-learning-as-a-service (MLaaS) offerings, attempting to close the competitive gap that, in my opinion, Microsoft has enjoyed for the last year or so. Not to be left out, Amazon's AWS announced its own new MLaaS tools and services at AWS re:Invent last November, trying to court AI application developers to build their smart apps on the AWS cloud. MLaaS is still in its infancy today, but it may become a dominant AI platform for enterprises that would prefer to leave the messy details to someone else and rent AI services by the click. This article summarizes each company's strategies and tactics and tries to size up the winners and losers.
This week, a video surfaced of a Harvard professor, Steven Pinker, which appeared to show him lauding members of a racist movement. The clip, which was pulled from a November event at Harvard put on by Spiked magazine, showed Mr. Pinker referring to "the often highly literate, highly intelligent people who gravitate to the alt-right" and calling them "internet savvy" and "media savvy." The neo-Nazi Daily Stormer website ran an article headlined, in part, "Harvard Jew Professor Admits the Alt-Right Is Right About Everything." A tweet of the video published by the self-described "Right-Wing Rabble-Rouser" Alex Witoslawski got hundreds of retweets, including one from the white-nationalist leader Richard Spencer. "Steven Pinker has long been a darling of the white supremacist 'alt-right,'" noted the lefty journalist Ben Norton.
Gary Marcus has recently published a detailed, rather extensive critique of Deep Learning. While many of Dr. Marcus's points are well known among those deeply familiar with the field and have been somewhat well publicized for years, these discussions haven't yet reached many who are newly involved in decision-making in this space. Overall, the discussion the critique has generated seems clarifying and useful. I have decided to write up my thoughts because, while I think Dr. Marcus's critique is thoughtful, necessary, and often justified, I disagree with some of the conclusions. To start, Dr. Marcus's assessment that Deep Learning, as originally defined, is merely a statistical technique for classifying patterns is spot on in my opinion.
Dear Editor: ... May I also take this opportunity to praise the staff of AI Magazine for a most informative and professional journal, and one which I find increasingly important for acquainting me with the latest progress in American research. I look forward to the continuing success of the Association in all its activities. Yours sincerely, Marten E. Bennett, Gillingham, Kent, UK. Dear Sir, I would like to comment on something disturbing that appeared to be revealed at the recent IJCAI conference at Karlsruhe. The background to it is the "Marietta affair." At the industrial exhibition associated with the conference, a German company, Marietta, was due to mount an exhibit.
Genetic Epistemology Editor: In his recent article in AI Magazine, "AI Prepares for 2001," Nils Nilsson put forward a paradigm of AI based on a declarative representation of knowledge with semantic attachments to problem-specific procedures and data structures. The author discussed various research strategies for AI, and specifically a computer-individual project was introduced as an efficient way of stimulating research and advances in the basic science of AI. The undertaking of such a project immediately raises some classical psychological questions. Besides the deductive-versus-inductive or declarative-versus-procedural controversies, problems related to knowledge representation and evolution in an interactive environment must be considered. I would like to present some ideas and concepts stemming from current research in Genetic Epistemology (GE), initiated by Jean Piaget, as possible contributions to AI research fields. Knowledge is a common preoccupation for both GE and AI.