ReverseAds Announces The World's First True Alternative To Search Advertising

#artificialintelligence

ReverseAds announced the launch of its reverse-engineered search advertising solution that uses big data, A.I., and predictive modeling to help brands serve intuitive ads everywhere buyers go online after their initial search. ReverseAds addresses the need for predictive multi-channel ad campaigns that provide total visibility into the buyer's journey, allowing brands to move beyond underperforming search ads. This approach to digital advertising prioritizes ROI and CPA over the CPC bidding model offered by Google. With ReverseAds, considered-purchase brands gain access to unprecedented amounts of intent data and an Assignment Algorithm with a provisional patent approved by the USPTO. The algorithm uses predictive-learning A.I. to determine which keywords will drive a business's highest total conversions.


Artificial Intelligence Chip Market Analysis

#artificialintelligence

This report studies the Artificial Intelligence Chip Market across many aspects of the industry, including market size, status, trends, and forecast. It also provides brief information on the competitors and on specific growth opportunities alongside the key market drivers. Find the complete Artificial Intelligence Chip Market analysis segmented by companies, region, type, and applications in the report. The market continues to evolve and expand in the number of companies, products, and applications, which illustrates its growth prospects. The report also covers the product range and applications with SWOT analysis and CAGR values, adding the essential business analytics. The research analysis identifies the latest trends and the primary factors responsible for market growth, enabling organizations to flourish with greater exposure to the markets.


AI web scraping augments data collection

#artificialintelligence

Web scraping involves writing a software robot that can automatically collect data from various webpages. Simple bots might get the job done, but more sophisticated bots use AI to find the appropriate data on a page and copy it to the appropriate data field to be processed by an analytics application. Use cases for AI-based web scraping include e-commerce, labor research, supply chain analytics, enterprise data capture, and market research, said Sarah Petrova, co-founder at Techtestreport. These kinds of applications rely heavily on data and on the syndication of data from different parties. Commercial applications use web scraping to perform sentiment analysis on new product launches, curate structured data sets about companies and products, simplify business process integration, and predictively gather data.
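As a minimal sketch of the bot described above (the page markup, class names, and products here are all hypothetical), a scraper can walk a page's HTML and copy each product's name and price into structured fields; production bots typically fetch live pages and use richer parsers:

```python
from html.parser import HTMLParser  # stdlib; real bots often pair requests with BeautifulSoup

# Hypothetical product-listing snippet a bot might have fetched.
PAGE = """
<div class="product"><span class="name">Widget A</span><span class="price">$9.99</span></div>
<div class="product"><span class="name">Widget B</span><span class="price">$19.50</span></div>
"""

class ProductScraper(HTMLParser):
    """Copies each product's name and price into a structured record."""
    def __init__(self):
        super().__init__()
        self.records, self._field = [], None

    def handle_starttag(self, tag, attrs):
        cls = dict(attrs).get("class")
        if cls == "product":
            self.records.append({})          # start a new record per product
        elif cls in ("name", "price"):
            self._field = cls                # remember which field comes next

    def handle_data(self, data):
        if self._field and self.records:
            self.records[-1][self._field] = data.strip()
            self._field = None

scraper = ProductScraper()
scraper.feed(PAGE)
print(scraper.records)
```

An AI-assisted scraper would replace the hard-coded class names with a model that locates the relevant fields on unfamiliar page layouts; the downstream record structure stays the same.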


Six Areas Where AI Is Improving Customer Experiences - Enterprise Irregulars

#artificialintelligence

Bottom Line: This year's hard reset is amplifying how vital customer relationships are and how much potential AI has to find new ways to improve them. The hard reset every company is going through today is making senior management teams re-evaluate every line item and expense, especially in marketing. Spending on customer experience is being re-evaluated, as is spending on the supporting AI, analytics, business intelligence (BI), and machine learning projects. Marketers who can quantify their contributions to revenue gains are the most successful at defending their budgets, yet knowing if, and by how much, CX initiatives and strategies are paying off has been elusive.



AI is the Most Disruptive Marketing Trend Since the Printing Press

#artificialintelligence

The market for big data and AI is surging: one recent study found that the global market for these technologies will be worth $229 billion within the next five years. Many industries benefit from implementing AI; healthcare, finance, communications, retail, and even art companies are making use of the technology. In the marketing industry, AI is revolutionizing the way corporations use data, interact with customers, and grow their reach. James Paine, the founder of West Realty Advisors, compiled a list of case studies on companies using big data and AI to get more value from their marketing campaigns.


Estimation Bias in Multi-Armed Bandit Algorithms for Search Advertising

Neural Information Processing Systems

In search advertising, the search engine needs to select the most profitable advertisements to display, which can be formulated as an instance of online learning with partial feedback, also known as the stochastic multi-armed bandit (MAB) problem. In this paper, we show that the naive application of MAB algorithms to search advertising for advertisement selection will produce sample selection bias that harms the search engine by decreasing expected revenue and "estimation of the largest mean" (ELM) bias that harms the advertisers by increasing game-theoretic player-regret. We then propose simple bias-correction methods with benefits to both the search engine and the advertisers.
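The ELM bias the abstract refers to can be seen in a toy simulation (a generic sketch, not the paper's setup): when every ad has the same true click rate, the empirical mean of whichever arm happens to look best still over-estimates the truth, because the max of noisy estimates is biased upward.

```python
import random

random.seed(0)

K, PULLS, RUNS = 5, 10, 2000      # arms, samples per arm, repetitions
TRUE_MEAN = 0.5                   # every arm has the same true click rate

bias_total = 0.0
for _ in range(RUNS):
    # Empirical click rate of each arm from a handful of Bernoulli pulls.
    means = [sum(random.random() < TRUE_MEAN for _ in range(PULLS)) / PULLS
             for _ in range(K)]
    # Naively pick the arm with the largest sample mean, as a greedy ad
    # selector would, and trust that sample mean as the value estimate.
    bias_total += max(means) - TRUE_MEAN

print(f"average over-estimate of the best arm: {bias_total / RUNS:.3f}")
```

The positive average gap is exactly the kind of systematic over-estimation that the paper's bias-correction methods are designed to remove.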


Online Joint Bid/Daily Budget Optimization of Internet Advertising Campaigns

arXiv.org Machine Learning

Pay-per-click advertising includes various formats (e.g., search, contextual, social) with a total investment of more than 200 billion USD per year worldwide. An advertiser is given a daily budget to allocate over several, even thousands of, campaigns, mainly distinguished by ad, target, or channel. Furthermore, publishers choose the ads to display and how to allocate them employing auctioning mechanisms, in which every day the advertisers set, for each campaign, a bid corresponding to the maximum amount of money per click they are willing to pay and the fraction of the daily budget to invest. In this paper, we study the problem of automating the online joint bid/daily budget optimization of pay-per-click advertising campaigns over multiple channels. We formulate our problem as a combinatorial semi-bandit problem, which requires solving a special case of the Multiple-Choice Knapsack problem every day. Furthermore, for every campaign, we capture the dependency of the number of clicks on the bid and daily budget by Gaussian Processes, thus requiring mild assumptions on the regularity of these functions. We design four algorithms and show that they suffer from a regret that is upper bounded with high probability as O(√T), where T is the time horizon of the learning process. We experimentally evaluate our algorithms with synthetic settings generated from real data from Yahoo!, and we present the results of the adoption of our algorithms in a real-world application with a daily average spend of 1,000 euros for more than one year.
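The daily subproblem mentioned above, a Multiple-Choice Knapsack, can be sketched with a small dynamic program (a generic illustration with invented numbers, not the paper's algorithm): each campaign offers several (daily budget, expected clicks) options, exactly one option must be chosen per campaign, and total spend must stay within the overall daily budget.

```python
def multiple_choice_knapsack(campaigns, total_budget):
    """Pick exactly one (cost, value) option per campaign, total cost <= budget.

    campaigns: list of option lists, each option an (integer cost, value) pair;
    returns the maximum achievable total value (e.g. expected clicks).
    """
    NEG = float("-inf")
    best = [0.0] * (total_budget + 1)           # no campaigns yet: value 0
    for options in campaigns:
        new = [NEG] * (total_budget + 1)
        for b in range(total_budget + 1):
            for cost, value in options:
                if cost <= b and best[b - cost] > NEG:
                    new[b] = max(new[b], best[b - cost] + value)
        best = new                               # one option per campaign enforced
    return max(best)

# Two campaigns, each with two (daily budget, expected clicks) options.
plans = [[(1, 10), (3, 25)], [(2, 15), (4, 30)]]
print(multiple_choice_knapsack(plans, 5))  # 40: spend 1+4 or 3+2
```

In the paper's setting, the option values would come from the Gaussian Process estimates of clicks as a function of bid and budget rather than from fixed numbers.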


Dynamic Incentive-aware Learning: Robust Pricing in Contextual Auctions

arXiv.org Machine Learning

Motivated by pricing in ad exchange markets, we consider the problem of robust learning of reserve prices against strategic buyers in repeated contextual second-price auctions. Buyers' valuations for an item depend on the context that describes the item. However, the seller is not aware of the relationship between the context and buyers' valuations, i.e., buyers' preferences. The seller's goal is to design a learning policy to set reserve prices via observing the past sales data, and her objective is to minimize her regret for revenue, where the regret is computed against a clairvoyant policy that knows buyers' heterogeneous preferences. Given the seller's goal, utility-maximizing buyers have the incentive to bid untruthfully in order to manipulate the seller's learning policy. We propose learning policies that are robust to such strategic behavior. These policies use the outcomes of the auctions, rather than the submitted bids, to estimate the preferences while controlling the long-term effect of the outcome of each auction on the future reserve prices. When the market noise distribution is known to the seller, we propose a policy called Contextual Robust Pricing (CORP) that achieves a T-period regret of $O(d\log(Td) \log (T))$, where $d$ is the dimension of the contextual information. When the market noise distribution is unknown to the seller, we propose two policies whose regrets are sublinear in $T$.
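To make the role of the reserve price concrete, here is a minimal model of one round of the repeated auctions (a generic second-price rule, not the CORP policy itself, which additionally learns the reserve from contexts and outcomes): the item sells only if the top bid clears the reserve, and the winner pays the larger of the runner-up bid and the reserve.

```python
def second_price_revenue(bids, reserve):
    """Seller revenue in one second-price auction with a reserve price."""
    bids = sorted(bids, reverse=True)
    if not bids or bids[0] < reserve:
        return 0.0                       # no sale: top bid below the reserve
    runner_up = bids[1] if len(bids) > 1 else 0.0
    return max(runner_up, reserve)       # winner pays max(2nd bid, reserve)

print(second_price_revenue([5.0, 3.0], reserve=4.0))  # 4.0: reserve lifts the price
print(second_price_revenue([5.0, 3.0], reserve=6.0))  # 0.0: reserve set too high, no sale
```

The trade-off visible here (a higher reserve raises the price when the item sells but risks losing the sale) is what makes learning good reserve prices from data valuable, and what gives strategic buyers an incentive to shade their bids.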


Targeted display advertising: the case of preferential attachment

arXiv.org Machine Learning

An average adult is exposed to hundreds of digital advertisements daily (https://www.mediadynamicsinc.com/uploads/files/PR092214-Note-only-150-Ads-2mk.pdf), making the digital advertisement industry a classic example of a big-data-driven platform. As such, the ad-tech industry relies on historical engagement logs (clicks or purchases) to identify potentially interested users for the advertisement campaign of a partner (a seller who wants to target users for its products). The number of advertisements that are shown for a partner, and hence the historical campaign data available for a partner depends upon the budget constraints of the partner. Thus, enough data can be collected for the high-budget partners to make accurate predictions, while this is not the case with the low-budget partners. This skewed distribution of the data leads to "preferential attachment" of the targeted display advertising platforms towards the high-budget partners. In this paper, we develop "domain-adaptation" approaches to address the challenge of predicting interested users for the partners with insufficient data, i.e., the tail partners. Specifically, we develop simple yet effective approaches that leverage the similarity among the partners to transfer information from the partners with sufficient data to cold-start partners, i.e., partners without any campaign data. Our approaches readily adapt to the new campaign data by incremental fine-tuning, and hence work at varying points of a campaign, and not just the cold-start. We present an experimental analysis on the historical logs of a major display advertising platform (https://www.criteo.com/). Specifically, we evaluate our approaches across 149 partners, at varying points of their campaigns. Experimental results show that the proposed approaches outperform the other "domain-adaptation" approaches at different time points of the campaigns.
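The warm-start-then-fine-tune idea can be sketched in a toy example (this is an invented illustration, not the paper's models or data): a classifier trained on a data-rich "head" partner initializes the model for a cold-start "tail" partner, and a few incremental gradient steps on the tail partner's small dataset adapt it further.

```python
import math

def sgd_logistic(data, w, b, lr=0.5, epochs=50):
    """A few SGD passes of 1-D logistic regression, starting from (w, b)."""
    for _ in range(epochs):
        for x, y in data:
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))
            w -= lr * (p - y) * x
            b -= lr * (p - y)
    return w, b

def log_loss(data, w, b):
    eps = 1e-12
    total = 0.0
    for x, y in data:
        p = 1.0 / (1.0 + math.exp(-(w * x + b)))
        total -= y * math.log(p + eps) + (1 - y) * math.log(1 - p + eps)
    return total / len(data)

# "Head" partner with plenty of campaign data: users click when x > 0.
head = [(x / 10, 1 if x > 0 else 0) for x in range(-50, 50)]
w_global, b_global = sgd_logistic(head, 0.0, 0.0)

# Cold-start "tail" partner with only a handful of similar examples.
tail = [(-0.4, 0), (-0.1, 0), (0.2, 1), (0.5, 1)]
before = log_loss(tail, w_global, b_global)          # warm start is already usable
w_ft, b_ft = sgd_logistic(tail, w_global, b_global)  # incremental fine-tuning
after = log_loss(tail, w_ft, b_ft)
print(f"tail loss: {before:.3f} warm-started -> {after:.3f} after fine-tuning")
```

The warm-started model serves the cold-start partner immediately, and the same fine-tuning step can be re-run as new campaign data arrives, mirroring the incremental adaptation the paper describes.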