In recent years, players within Canada's financial services industry, from banks to Fintech startups, have shown early and innovative adoption of artificial intelligence ("AI") and machine learning ("ML") within their organizations and services. With the ability to review and analyze vast amounts of data, AI algorithms and ML help financial services organizations improve operations, safeguard against financial crime, sharpen their competitive edge and better personalize their services. As the industry continues to implement more AI and build upon its existing applications, it should ensure that such systems are used responsibly and designed to account for any unintended consequences. Below we provide a brief overview of current considerations, as well as anticipated future shifts, in respect of the use of AI in Canada's financial services industry. At a high level, Canadian banks and many bank-specific activities are matters of federal jurisdiction.
When Tony Stark needs to travel to space in the original Iron Man movie, he asks his artificially intelligent (AI) assistant J.A.R.V.I.S. to make a suit that can survive harsh conditions. As AI specialist Kamal Choudhary explains: "The way I see it, what J.A.R.V.I.S. did is, it had a database of materials, scanned the database, found a suitable material, tested it, then synthesized an alloy that could survive space conditions. That's what we want our system to do, and that's why we called it JARVIS."

Choudhary, a researcher at the National Institute of Standards and Technology (NIST), is the founder and developer of JARVIS (Joint Automated Repository for Various Integrated Simulations), an open dataset designed to automate materials discovery and optimization. Writing in npj Computational Materials in December 2021, Choudhary and Brian DeCost (NIST) described the latest enhancements to JARVIS, which apply AI to speed discovery. Combining graph neural networks with chemical and structural knowledge about materials, their Atomistic Line Graph Neural Network (ALIGNN) outperforms previously reported models on atomistic prediction tasks with very high accuracy and better or comparable model training speed. "ALIGNN can predict characteristics in seconds instead of months," Choudhary said.

Beyond the inspiration from Iron Man, there was the Materials Genome Initiative. Launched in 2011 under President Obama, the initiative is a multi-agency federal effort to discover, manufacture, and deploy advanced materials twice as fast and at a fraction of the cost of traditional methods. NIST's original contribution to the initiative was the creation of a database of materials and their characteristics, obtained rigorously using standardized, cutting-edge computing methods.
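The "line graph" at the heart of ALIGNN's name can be illustrated without any machine-learning machinery: each bond in the atom graph becomes a node, and two bonds that share an atom (i.e., a bond angle) become connected. A minimal sketch, where the `line_graph` function and the toy molecule are illustrative assumptions, not part of JARVIS or ALIGNN:

```python
from itertools import combinations

def line_graph(bonds):
    """Build the line graph of an undirected bond graph.

    Each bond (an edge of the atom graph) becomes a node; two bonds are
    connected when they share an atom, which corresponds to a bond angle.
    """
    # Map each atom to the bonds it participates in.
    atom_to_bonds = {}
    for bond in bonds:
        for atom in bond:
            atom_to_bonds.setdefault(atom, []).append(bond)
    # Every pair of bonds meeting at a common atom is an edge (an "angle").
    angles = set()
    for incident in atom_to_bonds.values():
        for b1, b2 in combinations(incident, 2):
            angles.add((b1, b2))
    return sorted(angles)

# A toy water-like molecule: atom 0 bonded to atoms 1 and 2.
bonds = [(0, 1), (0, 2)]
angles = line_graph(bonds)
print(angles)  # the single angle formed at atom 0
```

ALIGNN passes messages over both the atom graph and this derived graph, which is how angle information enters the model.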
Several such databases have been established, but "what's particular about the JARVIS database is that it contains modules for various kinds of computational approaches," according to David Vanderbilt, professor of physics at Rutgers University, member of the National Academy of Sciences, and a contributor to the project. "There are many different theoretical levels on which you can approach the field."
Ranking, recommendation, and retrieval systems are widely used in online platforms and other societal systems, including e-commerce, media streaming, admissions, gig platforms, and hiring. In recent years, a large "fair ranking" research literature has developed around making these systems fair to the individuals, providers, or content being ranked. Most of this literature defines fairness for a single instance of retrieval, or as a simple additive notion for multiple instances of retrieval over time. This work provides a critical overview of this literature, detailing the often context-specific concerns that such an approach misses: the gap between high ranking placements and true provider utility, spillovers and compounding effects over time, induced strategic incentives, and the effect of statistical uncertainty. We then provide a path forward for a more holistic and impact-oriented fair ranking research agenda, including methodological lessons from other fields and the role of the broader stakeholder community in overcoming data bottlenecks and designing effective regulatory environments.
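The "simple additive notion" of fairness over multiple retrievals that the authors critique can be made concrete: sum each provider's position-based exposure across rankings and compare the totals. A hedged sketch, assuming a standard logarithmic position discount; the function names are illustrative:

```python
import math
from collections import defaultdict

def position_exposure(rank):
    # Standard logarithmic position discount: top slots get most attention.
    return 1.0 / math.log2(rank + 1)

def cumulative_exposure(rankings):
    """Sum each provider's exposure over a sequence of rankings.

    This is the simple additive notion: fairness is judged by comparing
    these totals, ignoring spillovers, timing, and realized utility.
    """
    totals = defaultdict(float)
    for ranking in rankings:
        for rank, provider in enumerate(ranking, start=1):
            totals[provider] += position_exposure(rank)
    return dict(totals)

# Two retrieval instances over the same three providers.
rankings = [["a", "b", "c"], ["b", "a", "c"]]
print(cumulative_exposure(rankings))
```

The paper's point is precisely that equal totals under this metric need not translate into equal realized utility for providers.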
Recommender systems play an important role in helping people find information and make decisions in today's increasingly digitalized societies. However, the wide adoption of such machine learning applications also causes concerns in terms of data privacy. These concerns are addressed by the recent "General Data Protection Regulation" (GDPR) in Europe, which requires companies to delete personal user data upon request when users enforce their "right to be forgotten". Many researchers argue that this deletion obligation does not only apply to the data stored in primary data stores such as relational databases but also requires an update of machine learning models whose training set included the personal data to delete. We explore this direction in the context of a sequential recommendation task called Next Basket Recommendation (NBR), where the goal is to recommend a set of items based on a user's purchase history. We design efficient algorithms for incrementally and decrementally updating a state-of-the-art next basket recommendation model in response to additions and deletions of user baskets and items. Furthermore, we discuss an efficient, data-parallel implementation of our method in the Spark Structured Streaming system. We evaluate our implementation on a variety of real-world datasets, where we investigate the impact of our update techniques on several ranking metrics and measure the time to perform model updates. Our results show that our method provides constant update time efficiency with respect to an additional user basket in the incremental case, and linear efficiency in the decremental case where we delete existing baskets. With modest computational resources, we are able to update models with a latency of around 0.2 milliseconds regardless of the history size in the incremental case, and less than one millisecond in the decremental case.
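To see why adding or deleting a single basket can be cheap, consider a toy counter-based stand-in for the frequency statistics such models maintain. This is an illustrative sketch, not the authors' actual model or their Spark implementation:

```python
from collections import Counter, defaultdict

class IncrementalNBR:
    """Toy next-basket recommender keeping per-user item counts.

    Adding or deleting one basket touches only that user's counters, so
    both updates run in time proportional to the basket size, independent
    of the length of the purchase history.
    """
    def __init__(self):
        self.counts = defaultdict(Counter)

    def add_basket(self, user, basket):     # incremental update
        self.counts[user].update(basket)

    def delete_basket(self, user, basket):  # decremental update
        self.counts[user].subtract(basket)
        self.counts[user] += Counter()      # drop zero/negative entries

    def recommend(self, user, k=3):
        return [item for item, _ in self.counts[user].most_common(k)]

model = IncrementalNBR()
model.add_basket("u1", ["milk", "bread"])
model.add_basket("u1", ["milk", "eggs"])
print(model.recommend("u1", k=2))  # milk ranked first
model.delete_basket("u1", ["milk", "eggs"])  # "right to be forgotten"
```

A real NBR model also maintains cross-user neighborhood structure, which is where the linear cost of the decremental case in the paper comes from.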
The technology boom amidst the pandemic has already hit 2022, creating another record with Apple. This week, the company was valued at $3 trillion, the first US company to reach that milestone. This follows Apple's tremendous market growth: its value has risen by 38% since the start of 2021 and tripled in under four years. The Guardian estimated the valuation is equivalent to the combined value of Boeing, Coca-Cola, Disney, Exxon-Mobil, McDonald's, Netflix and Walmart. While that valuation has not been sustained, the company has surely been a disruptor in the technology market, with breakthrough innovations spanning decades.
As AI systems demonstrate increasingly strong predictive performance, their adoption has grown in numerous domains. However, in high-stakes domains such as criminal justice and healthcare, full automation is often not desirable due to safety, ethical, and legal concerns, yet fully manual approaches can be inaccurate and time-consuming. As a result, there is growing interest in the research community to augment human decision making with AI assistance. Besides developing AI technologies for this purpose, the emerging field of human-AI decision making must embrace empirical approaches to form a foundational understanding of how humans interact and work with AI to make decisions. To invite and help structure research efforts towards a science of understanding and improving human-AI decision making, we survey the recent literature of empirical human-subject studies on this topic. We summarize the study design choices made in over 100 papers in three important aspects: (1) decision tasks, (2) AI models and AI assistance elements, and (3) evaluation metrics. For each aspect, we summarize current trends, discuss gaps in current practices of the field, and make a list of recommendations for future research. Our survey highlights the need to develop common frameworks to account for the design and research spaces of human-AI decision making, so that researchers can make rigorous choices in study design, and the research community can build on each other's work and produce generalizable scientific knowledge. We also hope this survey will serve as a bridge for the HCI and AI communities to work together to mutually shape the empirical science and computational technologies for human-AI decision making.
Recent advances in path-based explainable recommendation systems have attracted increasing attention thanks to the rich information provided by knowledge graphs. Most existing explainable recommender systems utilize only static knowledge graphs and ignore dynamic user-item evolutions, leading to less convincing and inaccurate explanations. Although some works realize that modelling users' temporal sequential behaviour could boost the performance and explainability of recommender systems, most of them either focus only on modelling users' sequential interactions within a path, or model them independently and separately of the recommendation mechanism. In this paper, we propose a novel Temporal Meta-path Guided Explainable Recommendation leveraging Reinforcement Learning (TMER-RL), which utilizes reinforcement-learning-based item-item path modelling between consecutive items with attention mechanisms to sequentially model dynamic user-item evolutions on a dynamic knowledge graph for explainable recommendation. Compared with existing works that use heavy recurrent neural networks to model temporal information, we propose simple but effective neural networks to capture users' historical item features and path-based context to characterize the next purchased item. Extensive evaluations of TMER-RL on two real-world datasets show state-of-the-art performance compared against recent strong baselines.
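The attention over path-based context can be sketched in a few lines of NumPy. This is a generic soft-attention illustration under assumed shapes and names, not the authors' exact architecture:

```python
import numpy as np

def attend(query, path_vectors):
    """Soft attention over path-based context vectors.

    Scores each path embedding against the query (e.g. the user's current
    state), then returns the softmax weights and the weighted combination
    used to characterize the next item.
    """
    scores = path_vectors @ query             # one score per path
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                  # softmax normalization
    return weights, weights @ path_vectors

rng = np.random.default_rng(0)
query = rng.normal(size=4)
paths = rng.normal(size=(3, 4))  # three candidate item-item paths
weights, context = attend(query, paths)
```

The weights themselves are what make the recommendation explainable: they indicate which meta-path contributed most to the predicted next item.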
Most existing recommender systems are based only on rating data, and they ignore other sources of information that might increase the quality of recommendations, such as textual reviews or user and item characteristics. Moreover, the majority of those systems are applicable only to small datasets (with thousands of observations) and are unable to handle large datasets (with millions of observations). We propose a recommender algorithm that combines a rating modelling technique (i.e., Latent Factor Model) with a topic modelling method based on textual reviews (i.e., Latent Dirichlet Allocation), and we extend the algorithm so that it allows adding extra user- and item-specific information to the system. We evaluate the performance of the algorithm using Amazon.com datasets of different sizes, corresponding to 23 product categories. After comparing the built model to four other models, we found that combining textual reviews with ratings leads to better recommendations. Moreover, we found that adding extra user and item features to the model increases its prediction accuracy, which is especially true for medium and large datasets.
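The combination of a latent factor model with review topics can be sketched as a single prediction rule: biases plus an interaction between user factors and the item's topic distribution. All numbers below, and the tying of item factors to LDA topics, are illustrative assumptions rather than the paper's exact formulation:

```python
import numpy as np

def predict_rating(mu, b_u, b_i, user_factors, item_topics):
    """Hybrid prediction: global mean, user and item biases, plus an
    interaction between the user's latent factors and the item's topic
    distribution.

    In a full system, item_topics would come from running LDA on the
    item's reviews; here it is a fixed toy mixture.
    """
    return mu + b_u + b_i + float(user_factors @ item_topics)

mu = 3.6                                   # global mean rating
user_factors = np.array([0.4, -0.1, 0.2])  # user affinity per topic
item_topics = np.array([0.7, 0.2, 0.1])    # topic mixture from reviews
rating = predict_rating(mu, 0.2, -0.1, user_factors, item_topics)
```

Because each prediction is a dot product over a small number of topics, this form scales to the millions of observations the abstract targets.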
Research on recommender systems has grown over the last decade, and such systems now provide valuable services that increase companies' revenue. Several approaches exist for building recommender systems. While most rely either on a content-based approach or a collaborative approach, there are hybrid approaches that can improve recommendation accuracy by combining both. Even though many algorithms using such methods have been proposed, there is still room for improvement. In this paper, we propose a recommender system method using a graph-based model associated with the similarity of users' ratings, in combination with users' demographic and location information. By utilizing the advantages of Autoencoder feature extraction, we extract new features based on all combined attributes. Using the new set of features for clustering users, our proposed approach (GHRS) gains a significant improvement, dominating other methods' performance on the cold-start problem. The experimental results on the MovieLens dataset show that the proposed algorithm outperforms many existing recommendation algorithms on recommendation accuracy.
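The pipeline described above, autoencoder feature extraction over combined user attributes followed by clustering on the new features, can be sketched with a linear autoencoder. This is a simplification of GHRS under toy assumptions: random data, a tied-weight linear encoder, and a bare-bones k-means:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy combined user attributes (demographics, location, rating similarity).
X = rng.normal(size=(100, 8))

# Linear autoencoder with tied weights: encode with W, decode with W.T.
# Gradient descent on the reconstruction error yields a compact
# 3-dimensional feature vector per user.
W = rng.normal(scale=0.1, size=(8, 3))
lr = 1e-4
initial_loss = float(((X @ W @ W.T - X) ** 2).sum())
for _ in range(500):
    err = X @ W @ W.T - X
    W -= lr * 2 * (X.T @ err @ W + err.T @ X @ W)
final_loss = float(((X @ W @ W.T - X) ** 2).sum())

Z = X @ W  # learned user features

def kmeans(points, k, steps=20):
    """Bare-bones k-means on the extracted features."""
    centroids = points[:k].copy()
    for _ in range(steps):
        dists = ((points[:, None] - centroids[None]) ** 2).sum(-1)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centroids[j] = points[labels == j].mean(axis=0)
    return labels

labels = kmeans(Z, k=4)
```

Clustering on learned features rather than raw ratings is what lets the approach place a new user into a group, and hence make recommendations, before any ratings exist: the essence of its cold-start advantage.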