"Today's expert systems deal with domains of narrow specialization. For expert systems to perform competently over a broad range of tasks, they will have to be given very much more knowledge. ... The next generation of expert systems ... will require large knowledge bases. How will we get them?"
– Edward Feigenbaum, Pamela McCorduck, H. Penny Nii, from The Rise of the Expert Company. New York: Times Books, 1988.
In collaboration with BigML partner INFORM GmbH, we're pleased to bring the BigML community a new educational webinar: Machine Learning Fights Financial Crime. This FREE virtual event will take place on October 28, 2020, at 8:00 AM PDT, and it's the ideal learning opportunity for financial institutions, banking sector professionals, credit professionals, risk advisers, crime fighters, fraud professionals, and anyone interested in the latest financial crime-fighting and risk analysis strategies and trends. Financial institutions must innovate to stop the onslaught of fraudulent transactions. Machine Learning is trending as a tool for fraud detection, and combining it with existing intelligent, dynamic rule sets produces a sustainable strategy to address this challenge.
Since it was unveiled earlier this year, the new AI-based language-generating software GPT-3 has attracted much attention for its ability to produce passages of writing that are convincingly human-like. Some have even suggested that the program, created by OpenAI, the company co-founded by Elon Musk, exhibits something like artificial general intelligence (AGI): the ability to understand or perform any task a human can. This breathless coverage reveals a natural yet aberrant conflation in people's minds between the appearance of language and the capacity to think. Language and thought, though obviously not the same, are strongly and intimately related, and some people tend to assume that language is the ultimate sign of thought.
It is non-trivial to design engaging and balanced sets of game rules. Modern chess has evolved over centuries, but without a similar recourse to history, the consequences of rule changes to game dynamics are difficult to predict. AlphaZero provides an alternative in silico means of game balance assessment. It is a system that can learn near-optimal strategies for any rule set from scratch, without any human supervision, by continually learning from its own experience. In this study we use AlphaZero to creatively explore and design new chess variants.
There are two different types of AI in wide use today. Recent developments have focused on data-driven machine learning, but over the last decades most AI applications in education (AIEd) have been based on representational, knowledge-based AI. Data-driven AI uses a programming paradigm that is new to most computing professionals: it requires competences different from traditional programming and computational thinking, and it opens up new ways to use computing and digital devices. But the computational demands of developing state-of-the-art AI are now starting to exceed the capacity of even the largest AI developers, so the recent rapid progress in data-driven AI may not be sustainable. The impact of AI in education will depend on how learning and competence needs change as AI becomes widely used across society and the economy.
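The contrast between the two paradigms can be made concrete with a toy sketch. All names and data below are invented for illustration: the same task, flagging a message as "urgent", is solved once with a hand-written expert rule (knowledge-based AI) and once with cue words estimated from labeled examples (data-driven AI).

```python
from collections import Counter

# Knowledge-based (representational) AI: a human expert writes the rule.
def urgent_by_rule(message):
    keywords = {"urgent", "asap", "immediately"}
    return any(word in keywords for word in message.lower().split())

# Data-driven AI: the cue words are estimated from labeled training data.
def learn_urgent_words(examples):
    urgent, normal = Counter(), Counter()
    for text, label in examples:
        (urgent if label else normal).update(text.lower().split())
    # Keep words seen more often in urgent messages than in normal ones.
    return {w for w, c in urgent.items() if c > normal.get(w, 0)}

training = [("reply asap now", True), ("urgent fix needed", True),
            ("lunch at noon", False)]
cues = learn_urgent_words(training)

def urgent_by_data(message):
    return any(word in cues for word in message.lower().split())
```

The point of the contrast is the shift in where the knowledge lives: in the first function it is authored explicitly; in the second it is a by-product of the data, which is what makes the competences required so different.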
The fundamental challenge of natural language processing (NLP) is resolving the ambiguity in the meaning of, and the intent carried by, natural language. To resolve ambiguity within a text, algorithms use knowledge from the context in which the text appears. For example, the presence of the sentence "I visited the zoo." before the sentence "I saw a bat." can be used to conclude that bat refers to an animal and not a wooden club. While in many situations neighboring text is sufficient to reduce ambiguity, it typically is not when dealing with text from specialized domains. Processing domain-specific text requires an understanding of a large number of domain-specific concepts and processes that NLP algorithms cannot glean from neighboring text alone.
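The zoo/bat example can be sketched as a simplified Lesk-style disambiguator: pick the sense whose definition shares the most words with the surrounding context. The sense inventory below is invented for illustration; a real system would draw glosses from a lexical resource such as WordNet.

```python
# Toy sense inventory (invented glosses, for illustration only).
SENSES = {
    "bat": {
        "animal": "nocturnal flying mammal often seen at a zoo or cave",
        "club": "wooden club used to hit a ball in baseball or cricket",
    }
}

def disambiguate(word, context):
    """Pick the sense whose gloss overlaps most with the context words."""
    context_words = set(context.lower().split())
    def overlap(sense):
        return len(context_words & set(SENSES[word][sense].split()))
    return max(SENSES[word], key=overlap)

# The word "zoo" in the preceding sentence tips the balance.
print(disambiguate("bat", "I visited the zoo . I saw a bat"))  # → animal
```

With a sports context instead ("he swung the wooden bat at the baseball"), the same function picks the club sense, which is exactly the context-dependence the paragraph describes; the paragraph's caveat is that for specialized domains no such overlap with nearby text may exist.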
As more and more industries bring ML use cases to production, the need for consistent practices for managing ML in production and optimizing ML lifecycle iteration has grown rapidly. Last year, a few of us partnered with USENIX to drive the first-ever industry/academic conference dedicated to the challenges of, and innovations in, managing ML in production. OpML 2019 was a great success, bringing together experts, practitioners, engineers, and researchers to discuss the latest and greatest in ML Ops. You can find a summary of OpML 2019 here. This year, due to COVID-19, OpML 2020 became a virtual conference with video presentations and open discussions on Slack.
Recently, in an official announcement, Google launched an OpenCL-based mobile GPU inference engine for Android. The tech giant claims that the inference engine offers up to a 2x speedup over the OpenGL backend on neural networks that include enough workload for the GPU. This GPU inference engine is currently available in the latest version of the TensorFlow Lite (TFLite) library. The Open Graphics Library, or OpenGL, is an API for rendering 2D and 3D vector graphics through which a client application can control the graphics system. It is a popular software interface that allows a programmer to communicate with graphics hardware.
Association rule learning is a rule-based machine learning method for discovering interesting relations between variables in large databases. It is intended to identify strong rules discovered in databases using measures of interestingness. Association rule algorithms first find all sets of items (itemsets) whose support exceeds a minimum support threshold, and then use those large itemsets to generate rules whose confidence exceeds a minimum confidence threshold. The lift of a rule X → Y is the ratio of the observed support of X and Y together to the support expected if X and Y were independent. A typical and widely used application of association rules is market basket analysis.
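The three measures can be computed directly from a transaction list. The basket data below is invented for illustration; a real miner (e.g. Apriori) would also enumerate candidate itemsets efficiently, which this sketch skips.

```python
# Toy market-basket data (invented for illustration).
transactions = [
    {"bread", "milk"},
    {"bread", "butter"},
    {"bread", "milk", "butter"},
    {"milk"},
]

def support(itemset):
    """Fraction of transactions containing every item in the itemset."""
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(x, y):
    """P(Y in basket | X in basket): support of the union over support of X."""
    return support(x | y) / support(x)

def lift(x, y):
    """Observed joint support over the support expected under independence."""
    return support(x | y) / (support(x) * support(y))

bread, milk = frozenset({"bread"}), frozenset({"milk"})
print(round(confidence(bread, milk), 3))  # → 0.667 (2 of 3 bread baskets)
```

Here support({bread, milk}) = 2/4 = 0.5, support({bread}) = 0.75, and support({milk}) = 0.75, so the rule bread → milk has confidence 0.5/0.75 ≈ 0.667 and lift 0.5/(0.75 × 0.75) ≈ 0.889; a lift below 1 means the items co-occur slightly less often than independence would predict.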
Many of us have had the feeling that technology, which continues to change at an ever-dizzying pace, may be leaving us behind. That was embodied this past week during a Congressional hearing, nominally convened to investigate antitrust concerns of four big tech titans: Amazon, Apple, Facebook and Google. While the five-and-a-half-hour inquiry touched on a range of topics, from pesky spam filters and search results to how companies approached acquisitions, the House Judiciary subcommittee hearing laid one thing bare: a sizable disconnect appears to exist between the technology Americans are using and depending on in their daily lives and the knowledge base of the people with the power and responsibility to decide its future and regulation. "Consumers and investors walk away feeling like a lot of these lawmakers don't really understand the business models to an extent that they could then navigate them and put laws in place that will dictate the future of where they go," said Daniel Ives, an analyst with Wedbush Securities. The antitrust subcommittee hearing had been convened to look into the tech giants' market dominance.
The global Artificial Intelligence company Expert System announced the release of the expert.ai NL API, the cloud-based Natural Language API that enables data scientists, computational linguists, knowledge engineers and developers to easily embed advanced Natural Language Understanding and Natural Language Processing capabilities (NLU / NLP) into their applications. This release is the first step in executing on the company's strategy to become the global platform of reference for AI-based Natural Language problem solving. The growing need for accessible and accurate AI-based NLU / NLP applications in the enterprise places increased demand on the developer ecosystem to bring speed, scale and precision to linguistic analysis. According to Gartner, "during recent years, advances in the application of machine learning (including neural networks) and knowledge graphs to natural language processing have enabled machine-based attribution that diminishes the need for human oversight. Application of the technology is broadening as well as deepening -- across industries and functional domains, and into use cases -- pushing this innovation from many years in the Trough of Disillusionment toward the Slope of Enlightenment."