Welcome to episode 222 of the AI in Action podcast, the show where we break down the hype and explore the impact that Data Science, Machine Learning and Artificial Intelligence are making on our everyday lives. Powered by Alldus International, our goal is to share the insights of technologists and data science enthusiasts and to showcase the excellent work being done within AI in the United States and Europe. Today's guest is Rameez Tase, Co-Founder & CEO at Antenna in New York. Antenna is an early-stage data analytics startup that is shaking up how subscription businesses across industries access the insightful metrics they need to make strategic decisions in today's hyper-competitive environment. They help define the metrics that matter, benchmark success standards against the competition, and illuminate the strategies required to build winning direct-to-consumer subscription businesses.
Algorithms now determine how much things cost. It's called dynamic pricing, and it adjusts prices according to current market conditions in order to increase profits. The rise of e-commerce has made pricing algorithms an everyday occurrence, whether you're shopping on Amazon, booking a flight or hotel, or ordering an Uber. In this continuation of our series on automation and your wallet, we explore what happens when a machine determines the price you pay. This episode was reported by Anthony Green and produced by Jennifer Strong and Emma Cillekens. We're edited by Mat Honan and our mix engineer is Garret Lang, with sound design and music by Jacob Gorski. Jennifer: Alright, so I'm in an airport just outside New York City, looking at the departures board and seeing all these flights going different places… It makes me think about how we decide how much something should cost… like a ticket for one of these flights. Because where the plane is going is just part of the puzzle. The price of airfare is highly personalized.
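To make the idea of dynamic pricing concrete, here is a minimal sketch of how such a rule might combine a demand signal with a scarcity signal, as in airfare. All names, factors, and clamping values are hypothetical illustrations, not any real airline's or retailer's formula.

```python
def dynamic_price(base_price, demand_ratio, remaining_inventory, total_inventory,
                  max_multiplier=2.0, min_multiplier=0.8):
    """Adjust a base price using current demand and scarcity signals (toy model)."""
    # Demand signal: demand_ratio > 1.0 means demand exceeds the expected level.
    demand_factor = demand_ratio ** 0.5  # square root dampens wild swings
    # Scarcity signal: price rises as inventory (e.g., seats) runs out.
    scarcity_factor = 1.0 + (1.0 - remaining_inventory / total_inventory)
    multiplier = demand_factor * scarcity_factor
    # Clamp the adjustment so the price stays within a sane band.
    multiplier = max(min_multiplier, min(max_multiplier, multiplier))
    return round(base_price * multiplier, 2)

# A half-full flight under normal demand: scarcity alone raises the fare.
print(dynamic_price(200.0, demand_ratio=1.0, remaining_inventory=50, total_inventory=100))
```

Real systems add personalization and competitor signals on top, but the core loop is the same: observe market conditions, recompute a multiplier, reprice.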
LoopMe, an outcomes-based platform, has launched a measurement solution, PurchaseLoop Measurement, which provides real-time consumer brand lift measurement and analytics for OOH advertising, according to a company press release. Designed for agencies, brands and publishers to measure media effectiveness throughout their campaigns, PurchaseLoop Measurement also provides analytics across media channels, including digital, CTV and digital audio. "Consumer behavior has changed rapidly over the past 18 months, creating new challenges as brands try to measure whether their advertising is moving the needle," Chris Swarbrick, managing partner - ad technology strategy and coordination at OMG UK, said in the release. "LoopMe provides a unique offering that leverages real-time machine learning and artificial intelligence to find consumers where they spend the most time, on their mobile devices, and measures advertising effectiveness across brand metrics in a quick and scalable way, even as the market becomes more fragmented. LoopMe is paving the way for the next wave of brand measurement solutions that secure a true 1-1 connection between brands and consumers."
This is the first part of a 2-part series on the growing importance of teaching Data and AI literacy to our students. This content will be included in a module I am teaching at Menlo College, but I wanted to share the blog first to help validate the content before presenting it to my students. Apple plans to introduce new iPhone software that uses artificial intelligence (AI) to churn through the vast collection of photos that people have taken with their iPhones to detect and report child sexual abuse. See the Wall Street Journal article "Apple Plans to Have iPhones Detect Child Pornography, Fueling Priva..." for more details on Apple's plan. Apple has a strong history of working to protect its customers' privacy.
Modeling the tap or click sequences of users on a mobile device can improve our understanding of interaction behavior and offers opportunities for UI optimization, such as recommending the next element a user might want to click. We analyzed a large-scale dataset of over 20 million clicks from more than 4,000 mobile users who opted in. We then designed a deep learning model that predicts the next element the user will click, given the user's click history, the structural information of the UI screen, and the current context, such as the time of day. We thoroughly investigated the deep model by comparing it with a set of baseline methods on the dataset. The experiments show that our model achieves 48% top-1 and 71% top-3 accuracy for predicting next clicks on a held-out dataset of test users, significantly outperforming all the baseline methods. We discuss a few scenarios for integrating the model into mobile interaction and how users can potentially benefit from it.
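The deep model itself is not reproduced in this abstract, but the kind of click-history baseline it is compared against is easy to sketch. The following is a hypothetical first-order frequency baseline over click sequences; element names and data shapes are illustrative only, and it ignores the screen-structure and time-of-day features the full model uses.

```python
from collections import Counter, defaultdict

class NextClickBaseline:
    """Frequency baseline: predict the next UI element from the previous click.

    The deep model described in the text additionally encodes UI screen
    structure and context (e.g., time of day); this sketch uses history only.
    """
    def __init__(self):
        # For each element, count which elements were clicked right after it.
        self.transitions = defaultdict(Counter)

    def fit(self, click_sequences):
        for seq in click_sequences:
            for prev, nxt in zip(seq, seq[1:]):
                self.transitions[prev][nxt] += 1

    def predict_top_k(self, last_click, k=3):
        # Rank candidate next elements by observed transition frequency,
        # mirroring the top-1 / top-3 accuracy setup in the evaluation.
        return [elem for elem, _ in self.transitions[last_click].most_common(k)]

sessions = [["home", "search", "results"],
            ["home", "search", "filters"],
            ["home", "profile"]]
model = NextClickBaseline()
model.fit(sessions)
print(model.predict_top_k("home"))  # successors of "home", most frequent first
```

Top-k accuracy is then simply the fraction of held-out clicks whose true next element appears in `predict_top_k` of the preceding click.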
We present HelpViz, a tool for generating contextual visual mobile tutorials from text-based instructions that are abundant on the web. HelpViz transforms text instructions to graphical tutorials in batch, by extracting a sequence of actions from each text instruction through an instruction parsing model, and executing the extracted actions on a simulation infrastructure that manages an array of Android emulators. The automatic execution of each instruction produces a set of graphical and structural assets, including images, videos, and metadata such as clicked elements for each step. HelpViz then synthesizes a tutorial by combining parsed text instructions with the generated assets, and contextualizes the tutorial to user interaction by tracking the user's progress and highlighting the next step. Our experiments with HelpViz indicate that our pipeline improved tutorial execution robustness and that participants preferred tutorials generated by HelpViz over text-based instructions. HelpViz promises a cost-effective approach for generating contextual visual tutorials for mobile interaction at scale.
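The first stage of the pipeline described above, parsing a text instruction into a sequence of executable actions, can be illustrated with a toy rule-based parser. HelpViz uses a learned instruction parsing model; the patterns, action names, and tuple format below are hypothetical stand-ins for that component.

```python
import re

# Hypothetical, simplified stand-in for HelpViz's learned instruction parser:
# map each clause of a text instruction to an (action, target) pair that an
# emulator driver could then execute on an Android emulator.
ACTION_PATTERNS = [
    (re.compile(r"^(?:tap|click|press)\s+(?:on\s+)?(.+)$", re.I), "click"),
    (re.compile(r"^open\s+(.+)$", re.I), "open"),
    (re.compile(r"^(?:type|enter)\s+(.+)$", re.I), "type"),
]

def parse_instruction(text):
    """Split a text instruction into executable (action, target) steps."""
    steps = []
    # Break the instruction into clauses on punctuation and "then" connectors.
    for clause in re.split(r"[.,]\s*(?:then\s+)?", text.strip(". ")):
        for pattern, action in ACTION_PATTERNS:
            match = pattern.match(clause.strip())
            if match:
                steps.append((action, match.group(1).strip()))
                break
    return steps

print(parse_instruction("Open Settings, then tap Wi-Fi, then tap the network name."))
```

Executing each resulting step in an emulator and capturing a screenshot per step is what yields the graphical assets (images, videos, clicked-element metadata) that the tutorial synthesizer combines with the original text.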
Mobile User Interface Summarization generates succinct language descriptions of mobile screens, conveying the important content and functionality of a screen, which can be useful for many language-based application scenarios. We present Screen2Words, a novel screen summarization approach that automatically encapsulates the essential information of a UI screen in a coherent language phrase. Summarizing mobile screens requires a holistic understanding of the multi-modal data of mobile UIs, including text, images, structure, and UI semantics, motivating our multi-modal learning approach. We collected and analyzed a large-scale screen summarization dataset annotated by human workers. Our dataset contains more than 112k language summaries across ~22k unique UI screens. We then experimented with a set of deep models with different configurations. Our evaluation of these models, with both automatic accuracy metrics and human ratings, shows that our approach can generate high-quality summaries for mobile screens. We demonstrate potential use cases of Screen2Words and open-source our dataset and model to lay the foundation for further bridging language and user interfaces.
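The multi-modal fusion idea behind this kind of summarizer can be sketched in a few lines: embed each modality (text, image, structure) separately, then fuse the embeddings before decoding a summary. Screen2Words uses learned neural encoders and attention; the toy embeddings and concatenation below are purely illustrative assumptions.

```python
# Toy sketch of multi-modal fusion for screen summarization. Each modality
# is embedded separately, then fused into one feature vector for a decoder.

def embed_text(texts, dim=4):
    # Toy text embedding: character counts folded into `dim` buckets.
    # A real model would use a learned text encoder.
    vec = [0.0] * dim
    for t in texts:
        for ch in t.lower():
            vec[ord(ch) % dim] += 1.0
    return vec

def fuse(*embeddings):
    # Early fusion by concatenation; a learned model would instead
    # attend across modalities to weigh their contributions.
    fused = []
    for e in embeddings:
        fused.extend(e)
    return fused

text_emb = embed_text(["Sign in", "Forgot password?"])  # on-screen strings
structure_emb = [2.0, 1.0]   # e.g., counts of buttons and text fields
image_emb = [0.3, 0.7]       # e.g., pooled pixel features of the screenshot
features = fuse(text_emb, structure_emb, image_emb)
print(len(features))  # 4 + 2 + 2 = 8 fused features feed the decoder
```

A language decoder conditioned on the fused vector would then emit a phrase such as "login screen for an account", which is the kind of summary the human-annotated dataset supervises.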
Coded Bias, directed by Shalini Kantayya, is a documentary about the way Artificial Intelligence trails human data with the assistance of algorithms incorporated into sophisticated Machine Learning models. Although many of the algorithms used today were created in the 80s, we have since digitised our lives, producing data on a scale never before so accessible in the history of humankind. Add to that the increase in computer processing power and the wireless exchange of information over 5G, and AI is probably the most powerful technology ever designed. It already has the capacity to individualise strategies that nudge people toward behaviours desired by a third party. Such targeting is visible only to the targeted person, leaves no traces, and remains almost unregulated, with few exceptions such as the GDPR (General Data Protection Regulation).
"Big tech is banking heavily on AI, Cloud and 5G technologies to retain customers and drive growth." A global emergency can smother your business, government lawsuits can break your company, and competitors with trillion-dollar market values can wipe your organisation off the map. But what happens when all three come together in the same year? The pandemic brought the world to a standstill. The internet giants, however, came out of it unscathed. Apple, Amazon, Google and Facebook, popularly known as the big four, have not only survived a combination of calamities but registered profits and left Wall Street analysts dumbfounded.
Artificial intelligence (AI) has witnessed substantial breakthroughs in a variety of Internet of Things (IoT) applications and services, spanning from recommendation systems to robotics control and military surveillance. This is driven by easier access to sensory data and the enormous scale of pervasive/ubiquitous devices that generate zettabytes (ZB) of real-time data streams. Designing accurate models from such data streams, to predict future insights and revolutionize the decision-making process, establishes pervasive systems as a worthy paradigm for a better quality of life. The confluence of pervasive computing and artificial intelligence, Pervasive AI, has expanded the role of ubiquitous IoT systems from mainly data collection to executing distributed computations, offering a promising alternative to centralized learning while presenting various challenges. In this context, wise cooperation and resource scheduling must be envisaged among IoT devices (e.g., smartphones, smart vehicles) and infrastructure (e.g., edge nodes and base stations) to avoid communication and computation overheads and ensure maximum performance. In this paper, we conduct a comprehensive survey of the recent techniques developed to overcome these resource challenges in pervasive AI systems. Specifically, we first present an overview of pervasive computing, its architecture, and its intersection with artificial intelligence. We then review the background, applications and performance metrics of AI, particularly Deep Learning (DL) and online learning, running in a ubiquitous system. Next, we provide a deep literature review of communication-efficient techniques, from both algorithmic and system perspectives, for distributed inference, training and online learning tasks across combinations of IoT devices, edge devices and cloud servers. Finally, we discuss our future vision and research challenges.
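One of the scheduling decisions the survey covers, cooperating between device and infrastructure to avoid communication and computation overheads, can be sketched as a simple per-request offloading policy: run inference locally or ship the input to an edge node, whichever has the lower estimated end-to-end latency. The cost model and all parameter values below are simplified assumptions, not a technique from the paper.

```python
# Toy device-vs-edge placement decision for one inference request.
# Latency is modeled as compute time plus (for offloading) uplink transfer;
# the downlink of the small prediction result is ignored for simplicity.

def local_latency_ms(flops, device_flops_per_ms):
    # All computation stays on the IoT device.
    return flops / device_flops_per_ms

def offload_latency_ms(input_bytes, bandwidth_bytes_per_ms, flops, edge_flops_per_ms):
    # Pay the uplink transfer, then compute on the faster edge node.
    return input_bytes / bandwidth_bytes_per_ms + flops / edge_flops_per_ms

def choose_placement(flops, input_bytes, device_flops_per_ms,
                     edge_flops_per_ms, bandwidth_bytes_per_ms):
    local = local_latency_ms(flops, device_flops_per_ms)
    offload = offload_latency_ms(input_bytes, bandwidth_bytes_per_ms,
                                 flops, edge_flops_per_ms)
    return "edge" if offload < local else "device"

# A heavy model on a slow device with a fast link favors the edge node:
print(choose_placement(flops=1e9, input_bytes=1e5,
                       device_flops_per_ms=1e6, edge_flops_per_ms=1e8,
                       bandwidth_bytes_per_ms=1e4))
```

Richer schedulers in the literature extend this with energy budgets, partitioning a model across device, edge and cloud, and queueing at shared edge nodes, but the trade-off they balance is the same compute-versus-communication one shown here.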