If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
The Swarm AI platform presents groups with a question and places potential answers in different corners of their screen. Users control a virtual magnet with their mouse and engage in a tug of war to drag an ice hockey puck to the answer they think is correct. The system's algorithm analyses how each user interacts with the puck – for instance, how much conviction they drag it with, or how quickly they waver when they're in the minority – and uses this information to determine where the puck moves. This creates feedback loops in which each user is influenced by the choices and conviction of the others, allowing the puck to end up at the answer that best reflects the collective wisdom of the group. The platform's effectiveness is backed up by several academic papers and by the high-profile clients who use the product.
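The tug-of-war dynamic described above can be sketched as a toy simulation. Everything below is illustrative: the corner layout, the force model, and the `swarm_decision` function and its parameters are assumptions for the sketch, not the actual Swarm AI algorithm.

```python
import random

def swarm_decision(pulls, steps=100, noise=0.05, rate=0.05):
    """Toy tug-of-war: each user pulls the puck toward one corner.

    pulls: list of (corner_index, conviction in [0, 1]) tuples, one per user.
    Returns the index of the corner the puck ends up nearest to.
    """
    # Corners of a unit square, one per candidate answer.
    corners = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
    puck = [0.5, 0.5]  # start in the centre
    for _ in range(steps):
        fx = fy = 0.0
        for corner, conviction in pulls:
            cx, cy = corners[corner]
            # Each user's pull is proportional to their conviction.
            fx += conviction * (cx - puck[0])
            fy += conviction * (cy - puck[1])
        # Move the puck under the combined pull, plus a little jitter.
        puck[0] += rate * fx + random.gauss(0, noise)
        puck[1] += rate * fy + random.gauss(0, noise)
    # Winner is the corner nearest the puck's final position.
    return min(range(4),
               key=lambda i: (corners[i][0] - puck[0]) ** 2
                           + (corners[i][1] - puck[1]) ** 2)
```

In this toy model a confident majority dominates, while a hesitant minority barely moves the puck – a crude stand-in for the conviction-weighted feedback loops the real platform builds.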
I like to watch rugby, even though I know very little about it. The commentators rightly believe they're talking to people who watch rugby a lot, so they feel no need to address me, personally, with rugby-for-dummies spiels that might give me an appreciation for the game. But emerging technology could soon solve my problem. Some companies are working on AI that will generate custom sports commentary, which means I could potentially tune into a streaming rugby game and listen to a human-sounding, AI-driven robot commentator that already understands my level of rugby savvy. Maybe my robot commentator will patiently explain the difference between a blood bin and a tight head.
To continuously improve quality and reflect changes in data, machine learning-based services have to regularly re-train and update their core models. In the setting of language models, we show that a comparative analysis of model snapshots before and after an update can reveal a surprising amount of detailed information about the changes in the data used for training before and after the update. We discuss the privacy implications of our findings, propose mitigation strategies and evaluate their effect.
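The snapshot-comparison idea can be illustrated with a deliberately simplified sketch: two unigram models stand in for the language model snapshots, and the tokens whose probability rises most after the update point at the newly added training data. The function names and the add-one smoothing are choices made for this sketch, not the paper's actual method.

```python
import math
from collections import Counter

def unigram_logprob(corpus, vocab):
    """Token -> log probability under a unigram model with add-one smoothing."""
    counts = Counter(t for doc in corpus for t in doc.split())
    total = sum(counts.values())
    # Smoothing keeps unseen tokens at a small but nonzero probability.
    return {t: math.log((counts[t] + 1) / (total + len(vocab))) for t in vocab}

def snapshot_diff(old_corpus, new_corpus, top_k=3):
    """Rank tokens by how much more likely the updated snapshot makes them."""
    vocab = {t for doc in old_corpus + new_corpus for t in doc.split()}
    old_lp = unigram_logprob(old_corpus, vocab)
    new_lp = unigram_logprob(new_corpus, vocab)
    return sorted(vocab, key=lambda t: new_lp[t] - old_lp[t], reverse=True)[:top_k]
```

Even in this toy setting, tokens that appear only in the post-update data rise to the top of the ranking, which is the intuition behind treating a pair of model snapshots as a privacy leak.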
In a classic start-up setting -- in a former textile plant four miles from where the first hockey match was played a century and a half ago -- a group of high-tech computer engineers are changing Canada's most revered pastime. There -- in sterile cubicles amid lots of wood and windows, with a jelly-bean dispenser and the inevitable dog, all planted in a gentrifying Jewish section of Montreal where Mordecai Richler set his landmark 1970 novel "St. Urbain's Horseman" -- they examine the 4,000 motions they detect players make in the course of each 60-minute game. The result is millions of data points unavailable to fans in the stands, but indispensable for coaches and, ultimately, players. The work being done here is changing the world of sport.
Accurately learning from user data while providing quantifiable privacy guarantees provides an opportunity to build better ML models while maintaining user trust. This paper presents a formal approach to carrying out privacy-preserving text perturbation using the notion of dx-privacy, originally designed to achieve geo-indistinguishability in location data. Our approach applies carefully calibrated noise to the vector representations of words in a high-dimensional space as defined by word embedding models. We present a privacy proof that satisfies dx-privacy, where the privacy parameter epsilon provides guarantees with respect to a distance metric defined by the word embedding space. We demonstrate how epsilon can be selected by analyzing plausible deniability statistics, backed up by a large-scale analysis of GloVe and fastText embeddings. We conduct privacy audit experiments against two baseline models and utility experiments on three datasets to demonstrate the tradeoff between privacy and utility for varying values of epsilon on different task types. Our results demonstrate practical utility (< 2% utility loss for training binary classifiers) while providing better privacy guarantees than the baseline models.
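A minimal sketch of this kind of mechanism, under the assumption that noise with density proportional to exp(-epsilon * ||z||) is added to a word's embedding and the noisy vector is snapped back to the nearest vocabulary word. The toy embedding table and the `dx_perturb` name are illustrative, not the paper's implementation.

```python
import numpy as np

def dx_perturb(word, embeddings, epsilon, rng):
    """Perturb a word via calibrated noise in embedding space.

    Samples noise with density proportional to exp(-epsilon * ||z||):
    a uniform direction on the unit sphere with magnitude drawn from
    Gamma(d, 1/epsilon), then returns the nearest vocabulary word.
    """
    words = list(embeddings)
    vecs = np.array([embeddings[w] for w in words])
    d = vecs.shape[1]
    # Uniform direction on the unit sphere.
    direction = rng.normal(size=d)
    direction /= np.linalg.norm(direction)
    # Magnitude ~ Gamma(d, 1/epsilon): larger epsilon -> smaller noise.
    magnitude = rng.gamma(shape=d, scale=1.0 / epsilon)
    noisy = embeddings[word] + magnitude * direction
    # Post-process: snap back to the closest vocabulary word.
    return words[int(np.argmin(np.linalg.norm(vecs - noisy, axis=1)))]
```

The key property this sketch mirrors is that the guarantee scales with distance in the embedding space: with a large epsilon the word usually survives unchanged, while a small epsilon increasingly swaps it for semantic neighbours.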
One of the most immediate ways that organizations are seeing value in artificial intelligence is in the use of chatbots and conversational interfaces, one of the seven fundamental patterns of AI. Chatbots have been in use for decades, but only recently have they had sufficient intelligence to handle conversations with a wide range of vocabulary, accents, and conversational styles. Now we have chatbots that can be developed to engage in very diverse interactions and handle many different conversational patterns. Chatbots have proven to be very valuable in many use cases ranging from customer support to conversational commerce. As a result, companies and organizations of all types are investing in chatbots and conversational systems.
Summer always comes with a lull in U.S. pro sports between the NBA and NHL championships in mid-June and when the highly anticipated NFL season kicks off in September. Baseball, America's favorite pastime, reliably fills this void, bolstered every four years by the Olympics and the FIFA World Cup. This year, the Women's World Cup drew massive attention as the U.S. National Team won its fourth title -- a world record for women's pro soccer. In fact, the American audience for the Women's World Cup final was 20% higher than the 2018 men's final and generated massive social media engagement from players, fans and celebrities -- a hard act to follow for the MLB All-Star Game a few days later. The same goes for games like the Super Bowl.
Look, nobody knows what analytics actually is anyway, so why are we still talking about it? At its most basic, analytics is simply a tool. As the old saying goes, when all you have is a hammer, everything looks like a nail. Yes, analytics is simply a way to draw meaning out of data, but just because you finally figured out how to apply gradient boosting to your ridge regression model doesn't mean you should. Once you think of analytics as a tool, a means to an end, then it's much easier to see that it's not just a tool, but an entire toolbox.
The ability to reason beyond data fitting is essential if deep learning systems are to make a leap forward towards artificial general intelligence. Much effort has been made to model neural-based reasoning as an iterative decision-making process based on recurrent networks and reinforcement learning. Instead, inspired by the consciousness prior proposed by Yoshua Bengio, we explore reasoning with the notion of attentive awareness from a cognitive perspective, and formulate it in the form of attentive message passing on graphs, called neural consciousness flow (NeuCFlow). Aiming to bridge the gap between deep learning systems and reasoning, we propose an attentive computation framework with a three-layer architecture, consisting of an unconsciousness flow layer, a consciousness flow layer, and an attention flow layer. We implement the NeuCFlow model with graph neural networks (GNNs) and conditional transition matrices. Our attentive computation greatly reduces the complexity of vanilla GNN-based methods, making it capable of running on large-scale graphs. We validate our model for knowledge graph reasoning by solving a series of knowledge base completion (KBC) tasks. The experimental results show that NeuCFlow significantly outperforms previous state-of-the-art KBC methods, including embedding-based and path-based approaches. The reproducible code can be found at the link below.
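As a rough illustration of attentive message passing on a graph, here is a single generic attention-weighted aggregation round (a GAT-style sketch; the function, its parameters, and the scoring rule are assumptions for illustration, not the actual NeuCFlow architecture):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attentive_message_pass(h, edges, W, a):
    """One round of attentive message passing.

    h: (n, d) node states; edges: list of (src, dst) pairs;
    W: (d, d) message transform; a: (2d,) attention parameters.
    Each node aggregates its neighbours' messages, weighted by a
    softmax over learned attention scores.
    """
    n, _ = h.shape
    msgs = h @ W  # transform node states into messages
    out = np.zeros_like(msgs)
    for v in range(n):
        nbrs = [u for (u, w) in edges if w == v]
        if not nbrs:
            out[v] = msgs[v]  # no incoming edges: keep own message
            continue
        # Score each neighbour against the receiving node.
        scores = np.array([a @ np.concatenate([msgs[v], msgs[u]]) for u in nbrs])
        alpha = softmax(scores)  # attention concentrates on relevant edges
        out[v] = sum(al * msgs[u] for al, u in zip(alpha, nbrs))
    return out
```

The attention weights play the role the abstract assigns to the attention flow layer: rather than every edge contributing equally, as in a vanilla GNN, the aggregation is concentrated on the edges the scores deem relevant.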