A Brief Introduction to Edge Computing and Deep Learning

#artificialintelligence

Welcome to my first blog on topics in artificial intelligence! Here I will introduce the topic of edge computing, with context in deep learning applications. This blog is largely adapted from a survey paper written by Xiaofei Wang et al.: Convergence of Edge Computing and Deep Learning: A Comprehensive Survey. If you're interested in learning more about any topic covered here, there are plenty of examples, figures, and explanations in the full 35-page survey: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=8976180 Now, before we begin, I'd like to take a moment to motivate why edge computing and deep learning can be so powerful when combined: deep learning is an increasingly capable branch of machine learning that allows computers to detect objects, recognize speech, translate languages, and make decisions. More machine learning problems are solved every day with the advanced techniques that researchers continue to discover.


Deep Learning Optimized Sparse Antenna Activation for Reconfigurable Intelligent Surface Assisted Communication

arXiv.org Artificial Intelligence

To capture the communications gain of the massive radiating elements with low power cost, the conventional reconfigurable intelligent surface (RIS) usually works in passive mode. However, due to the cascaded channel structure and the lack of signal processing ability, it is difficult for RIS to obtain the individual channel state information and optimize the beamforming vector. In this paper, we add signal processing units for a few antennas at RIS to partially acquire the channels. To solve the crucial active antenna selection problem, we construct an active antenna selection network that utilizes the probabilistic sampling theory to select the optimal locations of these active antennas. With this active antenna selection network, we further design two deep learning (DL) based schemes, i.e., the channel extrapolation scheme and the beam searching scheme, to enable the RIS communication system. The former utilizes the selection network and a convolutional neural network to extrapolate the full channels from the partial channels received by the active RIS antennas, while the latter adopts a fully-connected neural network to achieve the direct mapping between the partial channels and the optimal beamforming vector with maximal transmission rate. Simulation results are provided to demonstrate the effectiveness of the designed DL-based schemes.
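The crucial step the abstract describes, choosing which few RIS antennas get signal processing units by probabilistic sampling over learned selection probabilities, can be illustrated with a minimal sketch. This is an illustrative stand-in, not the paper's actual selection network; the function name and inputs are assumptions:

```python
import numpy as np

def select_active_antennas(probs, k, rng=None):
    """Sample k active antenna locations without replacement according to
    learned selection probabilities (a stand-in for the probabilistic
    sampling step of an active antenna selection network)."""
    rng = rng or np.random.default_rng()
    probs = np.asarray(probs, dtype=float)
    probs = probs / probs.sum()  # normalize to a valid distribution
    # Antennas with higher learned probability are more likely to be
    # equipped with signal processing units.
    return np.sort(rng.choice(len(probs), size=k, replace=False, p=probs))
```

In the paper's pipeline, the channels observed at the selected antennas would then feed either a CNN that extrapolates the full channel or a fully-connected network that maps directly to a beamforming vector.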


GPT-3 Creative Fiction

#artificialintelligence

"What if I told a story here, how would that story start?" Thus, the summarization prompt: "My second grader asked me what this passage means: …" When a given prompt isn't working and GPT-3 keeps pivoting into other modes of completion, that may mean that one hasn't constrained it enough by imitating a correct output, and one needs to go further; writing the first few words or sentence of the target output may be necessary.


Disney's new AI is facial recognition for animation

#artificialintelligence

Disney's massive archive spans nearly a century of content, which can turn any search for specific characters, scenes or on-screen objects within it into a significant undertaking. However, a team of researchers from Disney's Direct-to-Consumer & International Organization (DTCI) has built a machine learning platform to help automate the digital archival of all that content. They call it the Content Genome. The CG platform is built to populate knowledge graphs with content metadata, akin to what you see in Google results if you search for Steve Jobs. From there, AI applications can leverage that data to enhance search, discovery and personalization features or, as Anthony Accardo, Director of Research and Development at DTCI, told Engadget, help animators find specific shots and sequences from within Disney's archive.


How Product Placement Works In 2020 - With AI, Deep Learning And More

#artificialintelligence

Lucy Hale at The CW's Summer 2019 TCA Party, sponsored by BEN (Photo by Jean Baptiste Lacroix/WireImage). Product placement just ain't what it used to be--and boy, is that a good thing. What originated as an old-school Hollywood function to beat housewives over the head with brand names and 'must-have' products has transformed into a modern, data-driven marketing tool that works for everyone involved. For years, BEN, a Bill Gates-owned product placement agency, has been at the forefront. As one of the first companies to utilize AI to identify, match and facilitate product placement opportunities across film, TV, music videos and social media, their work is everywhere--whether you notice it or not. Historically, they're behind some of the most iconic placements in Austin Powers, Forrest Gump, and ET, but more recent placements range from the adorably obvious (the family eating Cheerios in The Marvelous Mrs. Maisel) to the unexpectedly inconspicuous (see: flipped Chevrolets in Deadpool or Microsoft phones in Get Out).


Semi-supervised acoustic and language model training for English-isiZulu code-switched speech recognition

arXiv.org Machine Learning

We present an analysis of semi-supervised acoustic and language model training for English-isiZulu code-switched ASR using soap opera speech. Approximately 11 hours of untranscribed multilingual speech was transcribed automatically using four bilingual code-switching transcription systems operating in English-isiZulu, English-isiXhosa, English-Setswana and English-Sesotho. These transcriptions were incorporated into the acoustic and language model training sets. Results showed that the TDNN-F acoustic models benefit from the additional semi-supervised data and that even better performance could be achieved by including additional CNN layers. Using these CNN-TDNN-F acoustic models, a first iteration of semi-supervised training achieved an absolute mixed-language WER reduction of 3.4%, and a further 2.2% after a second iteration. Although the languages in the untranscribed data were unknown, the best results were obtained when all automatically transcribed data was used for training and not just the utterances classified as English-isiZulu. Despite reducing perplexity, the semi-supervised language model was not able to improve the ASR performance.
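The iterative semi-supervised recipe the abstract describes (train a system, automatically transcribe the untranscribed speech, fold those transcriptions into the training set, and retrain) can be sketched generically. The trainer and recognizer below are hypothetical stand-ins, not the paper's TDNN-F/CNN-TDNN-F pipeline:

```python
def self_training(labeled, unlabeled, train, transcribe, iterations=2):
    """Generic self-training loop: each iteration trains a model on the
    current labeled pool, auto-transcribes the unlabeled utterances, and
    adds the automatic (utterance, transcription) pairs to the training set."""
    model = train(labeled)
    for _ in range(iterations):
        pseudo = [(utt, transcribe(model, utt)) for utt in unlabeled]
        # Notably, the paper found the best results came from using *all*
        # automatically transcribed data, not just the utterances
        # classified as English-isiZulu.
        model = train(labeled + pseudo)
    return model
```

The two iterations here mirror the paper's setup, where the first iteration gave a 3.4% absolute mixed-language WER reduction and the second a further 2.2%.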


Alphabet's Next Billion-Dollar Business: 10 Industries To Watch - CB Insights Research

#artificialintelligence

Alphabet is using its dominance in the search and advertising spaces -- and its massive size -- to find its next billion-dollar business. From healthcare to smart cities to banking, here are 10 industries the tech giant is targeting. With growing threats from its big tech peers Microsoft, Apple, and Amazon, Alphabet's drive to disrupt has become more urgent than ever before. The conglomerate is leveraging the power of its first moats -- search and advertising -- and its massive scale to find its next billion-dollar businesses. To protect its current profits and grow more broadly, Alphabet is edging its way into industries adjacent to the ones where it has already found success and entering new spaces entirely to find opportunities for disruption. Evidence of Alphabet's efforts is showing up in several major industries. For example, the company is using artificial intelligence to understand the causes of diseases like diabetes and cancer and how to treat them. Those learnings feed into community health projects that serve the public, and also help Alphabet's effort to build smart cities. Elsewhere, Alphabet is using its scale to build a better virtual assistant and own the consumer electronics software layer. It's also leveraging that scale to build a new kind of Google Pay-operated checking account. In this report, we examine how Alphabet and its subsidiaries are currently working to disrupt 10 major industries -- from electronics to healthcare to transportation to banking -- and what else might be on the horizon.

Within the world of consumer electronics, Alphabet has already found dominance with one product: Android. Mobile operating system market share globally is controlled by the Linux-based OS that Google acquired in 2005 to fend off Microsoft and Windows Mobile. Today, however, Alphabet's consumer electronics strategy is being driven by its work in artificial intelligence. Google is building some of its own hardware under the Made by Google line -- including the Pixel smartphone, the Chromebook, and the Google Home -- but the company is doing more important work on hardware-agnostic software products like Google Assistant (which is even available on iOS).


Why math is easy for AI but gardening is not: Moravec's paradox

#artificialintelligence

Artificial intelligence (AI) systems, powered by massive data and sophisticated algorithms -- including but not limited to -- deep neural networks and statistical machine learning (ML) (support vector machines, clustering, random forests, etc.), are having a profound and transformative impact on our daily lives as they make their way into everything from finance to healthcare, from retail to transportation. Think of Netflix's movie recommender, Amazon's product predictions, Facebook's uncanny ability to show what you may like, Google's assistant, DeepMind's AlphaGo, or Stanford's AI beating human doctors. Machine learning is eating software. However, one common feature of these powerful algorithms is that they utilize sophisticated mathematics to do their job -- to classify and segment an image, to arrive at key decisions, to make a product recommendation, to model a complex phenomenon, or to extract and visualize a hidden pattern from a deluge of data. All of these mathematical processes are, quite simply, beyond the scope of a single human (or a team) to perform manually (even on a computer) or inside their head.


Conditional Self-Attention for Query-based Summarization

arXiv.org Artificial Intelligence

Self-attention mechanisms have achieved great success on a variety of NLP tasks due to their flexibility in capturing dependency between arbitrary positions in a sequence. For problems such as query-based summarization (Qsumm) and knowledge graph reasoning, where each input sequence is associated with an extra query, explicitly modeling such conditional contextual dependencies can lead to a more accurate solution, which however cannot be captured by existing self-attention mechanisms. In this paper, we propose conditional self-attention (CSA), a neural network module designed for conditional dependency modeling. CSA works by adjusting the pairwise attention between input tokens in a self-attention module with the matching score of the inputs to the given query. Thereby, the contextual dependencies modeled by CSA will be highly relevant to the query. We further study variants of CSA defined by different types of attention. Experiments on Debatepedia and HotpotQA benchmark datasets show CSA consistently outperforms the vanilla Transformer and previous models for the Qsumm problem.
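The core idea, reweighting pairwise self-attention by each token's matching score with the query, can be sketched with a minimal single-head NumPy example. This is one plausible instantiation (biasing the attention logits by the log of the matching scores), not the paper's exact formulation; all names are illustrative:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def conditional_self_attention(X, q):
    """X: (n, d) token embeddings; q: (d,) pooled query embedding.
    Pairwise self-attention logits are biased by each token's matching
    score with the query, so attention concentrates on query-relevant
    tokens."""
    n, d = X.shape
    scores = X @ X.T / np.sqrt(d)           # (n, n) pairwise attention logits
    match = softmax(X @ q / np.sqrt(d))     # (n,) query-token matching scores
    cond = scores + np.log(match)[None, :]  # bias logits toward matched tokens
    weights = softmax(cond, axis=-1)        # rows normalize over key positions
    return weights @ X                      # (n, d) query-conditioned outputs
```

In a full model this would use learned query/key/value projections and multiple heads; the sketch keeps only the conditioning mechanism.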


Netflix open-sources Metaflow, its Python framework for building and managing data science projects | Packt Hub

#artificialintelligence

Yesterday, the Netflix team announced the open-sourcing of Metaflow, a Python library that helps scientists and engineers build and manage real-life data science projects. The Netflix team writes, "Over the past two years, Metaflow has been used internally at Netflix to build and manage hundreds of data-science projects from natural language processing to operations research." Metaflow was developed by Netflix to boost the productivity of data scientists who work on a wide variety of projects, from classical statistics to deep learning. It provides a unified API to the infrastructure stack required to execute data science projects, from prototype to production. Models are only a small part of an end-to-end data science project.