In today's world, the power of artificial intelligence is everywhere. From agriculture to healthcare, from shopping to dating, from the vehicles we drive to the way we do business, our experiences are increasingly shaped by AI. This is true even when it comes to whiskey tasting, although in this case the intelligence is driven by our senses and our reasoning rather than sophisticated algorithms. This is a topic that is close to my heart, given that I'm a director of AI, data analytics and high performance computing sales who moonlights as a whiskey sommelier. I often have occasion to reflect on the amazing parallels between the principles of AI and the process of tasting whiskey.
Food and beverage manufacturers are still on the fence about artificial intelligence adoption. Many believe it isn't useful or simply do not know how to use it. According to research firm Mordor Intelligence, the artificial intelligence in food and beverage market was valued at U.S. $3.07 billion in 2020 and is expected to reach $29.94 billion by 2026, a CAGR of 45.77% over the forecast period, 2021-2026. The Association for Advancing Automation (A3) is made up of three principal daughter associations, covering the robotics, machine vision and motion control/motor industries, with AI acting as a bridge across those technologies. One of the roles of Robert Huschka, vice president of education strategies at A3, is liaison to the strategic advisory committee on artificial intelligence.
Coca-Cola probably isn't the first name that comes to mind when you're thinking about investing in Artificial Intelligence (AI). But even without AI, this company has a lot going for it. Founded over 134 years ago, the company is Lindy: it has bounced back from setbacks (New Coke) and holds the "mind share" of billions of people. With products sold in over 200 countries, and nearly 2 billion servings a day, the complexity of local markets makes Coca-Cola an ideal candidate for adopting AI and Machine Learning. One such place is social media.
Dutch brewing company Heineken is one of the largest beer producers in the world, with more than 70 production facilities globally. From small breweries to mega-plants, its logistics and production processes are increasingly complex and its machinery ever more advanced. The global beer giant therefore began looking for robotics solutions to make its breweries safer and more attractive for employees while enabling a more flexible organisation. The environment is constantly changing and the robot has to be able to respond immediately, automatically adapting to the situation. Dennis van der Plas, senior global lead packaging lines at Heineken, says, "We are becoming a high-tech company and attracting more and more technically trained staff. Repetitive tasks, like picking up fallen bottles from the conveyor belt, will not provide them job satisfaction."
Today I will present a guided tutorial for applying Kemp & Tenenbaum's brilliant "form discovery" algorithm to a wine dataset. Ultimately, this provides a data-driven map from which to choose wines, based on our tastes. If you are, like me, fond of data science, machine learning, or cognition, and/or are a wine lover, then you might find this post interesting. Actually, if you know of ways it could be improved, I'd love to hear them! First of all, like every recipe, we'll start with a list of things we need. Essentially, in their work Kemp & Tenenbaum created an algorithm that finds the best structural representation for a dataset, without any prior assumption about, or indication of, what that structure should be.
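The full form-discovery algorithm searches over many structural forms (trees, rings, grids, and so on) with Bayesian model selection, which is too much for a short snippet. As a much smaller stand-in, the sketch below builds just one candidate structure, a tree, over a made-up wine feature matrix using ordinary agglomerative clustering; the wine names and feature scores are all hypothetical.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

# Toy stand-in for a wine dataset: rows are wines, columns are
# hypothetical tasting features (body, acidity, tannin, sweetness),
# each scored 0-1. Names and values are illustrative only.
wines = {
    "Cabernet":   [0.9, 0.4, 0.9, 0.1],
    "Merlot":     [0.7, 0.4, 0.6, 0.2],
    "Pinot Noir": [0.6, 0.5, 0.5, 0.1],
    "Chardonnay": [0.5, 0.5, 0.1, 0.3],
    "Riesling":   [0.3, 0.8, 0.0, 0.6],
}
names = list(wines)
X = np.array([wines[n] for n in names])

# Agglomerative clustering builds one candidate structure: a tree
# (dendrogram) over the wines, merging the closest pairs first.
Z = linkage(pdist(X), method="average")

# Cut the tree into two groups; with these toy features, the reds
# and the whites separate.
labels = fcluster(Z, t=2, criterion="maxclust")
for name, lab in zip(names, labels):
    print(name, lab)
```

Unlike form discovery proper, this fixes the structural form (a tree) in advance; the point here is only to show what a data-driven map over wines can look like.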
A taxonomy is a hierarchically structured knowledge graph that plays a crucial role in machine intelligence. The taxonomy expansion task aims to find a position for a new term in an existing taxonomy, so as to capture emerging knowledge in the world and keep the taxonomy dynamically updated. Previous taxonomy expansion solutions neglect valuable information carried by the hierarchical structure and evaluate merely the correctness of an added edge, which downgrades the problem to node-pair scoring or mini-path classification. In this paper, we propose the Hierarchy Expansion Framework (HEF), which fully exploits the hierarchical structure's properties to maximize the coherence of the expanded taxonomy. HEF makes use of the taxonomy's hierarchical structure in multiple aspects: i) HEF utilizes subtrees containing the most relevant nodes as self-supervision data for a complete comparison of parental and sibling relations; ii) HEF adopts a coherence modeling module to evaluate the coherence of a taxonomy's subtree by integrating hypernymy relation detection and several tree-exclusive features; iii) HEF introduces the Fitting Score for position selection, which explicitly evaluates both path and level selections and takes full advantage of parental relations to interchange information for disambiguation and self-correction. Extensive experiments show that by better exploiting the hierarchical structure and optimizing the taxonomy's coherence, HEF vastly surpasses the prior state of the art on three benchmark datasets, with an average improvement of 46.7% in accuracy and 32.3% in mean reciprocal rank.
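To make the task concrete, here is a toy sketch of taxonomy expansion using exactly the node-pair scoring baseline the abstract argues is insufficient: attach the new term under whichever existing node scores highest. The taxonomy, the new term, and the similarity scores are all invented for illustration; this is not HEF's model.

```python
# Toy illustration of the taxonomy expansion task: attach a new term
# to the existing node with the highest (hypothetical) pair score.
# This is the simple node-pair scoring baseline, not HEF itself.

taxonomy = {            # child -> parent; None marks the root
    "beverage": None,
    "beer": "beverage",
    "wine": "beverage",
    "lager": "beer",
    "ale": "beer",
}

# Pretend similarity between each candidate parent and the new term,
# e.g. from a hypernymy detector. Hand-set here for the demo.
similarity = {"beverage": 0.3, "beer": 0.4, "wine": 0.9,
              "lager": 0.1, "ale": 0.1}

def expand(taxonomy, new_term, score):
    """Insert new_term under the highest-scoring existing node."""
    parent = max(taxonomy, key=score.get)
    taxonomy[new_term] = parent
    return parent

parent = expand(taxonomy, "riesling", similarity)
print(parent)           # -> wine
```

Note that this baseline inspects only one edge at a time; HEF's point is to instead score how coherent the whole surrounding subtree remains after insertion.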
Feature attribution is widely used in interpretable machine learning to explain how influential each measured input feature value is for an output inference. However, measurements can be uncertain, and it is unclear how awareness of input uncertainty can affect trust in explanations. We propose and study two approaches to help users manage their perception of uncertainty in a model explanation: 1) transparently show uncertainty in feature attributions to allow users to reflect on it, and 2) suppress attribution to features with uncertain measurements and shift attribution to other features by regularizing with an uncertainty penalty. Through simulation experiments, qualitative interviews, and quantitative user evaluations, we identified the benefits of moderately suppressing attribution uncertainty, and concerns regarding showing attribution uncertainty. This work adds to the understanding of handling and communicating uncertainty for model interpretability.
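As a minimal illustration of approach 1), showing uncertainty in feature attributions: assume a toy linear model in which the attribution of feature i is simply w[i]·x[i], and propagate Gaussian measurement noise through it by Monte Carlo. The weights, inputs, and noise levels below are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear model: attribution of feature i is simply w[i] * x[i].
w = np.array([2.0, -1.0, 0.5])
x = np.array([1.0, 3.0, 2.0])          # measured input values
sigma = np.array([0.0, 0.0, 1.0])      # measurement std per feature

# Monte Carlo over input uncertainty: sample plausible inputs and
# see how much each feature's attribution varies as a result.
samples = x + sigma * rng.standard_normal((10_000, 3))
attributions = samples * w             # per-sample, per-feature attribution

mean_attr = attributions.mean(axis=0)
std_attr = attributions.std(axis=0)

# Features measured exactly (sigma = 0) have zero attribution spread;
# the uncertain third feature has spread about |w[2]| * sigma[2] = 0.5,
# which an interface could display as an error bar on the attribution.
print(mean_attr, std_attr)
```

Approach 2) would instead add a penalty on attribution to high-sigma features during training, pushing the model to rely on well-measured inputs.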
The most common representation of words in NLP tasks is One Hot Encoding. Below we can see an example of One Hot Encoding for the words "Cat" and "Dog". As we can see, these two vectors are independent, since their inner product is 0, and their Euclidean distance is \(\sqrt{2}\). Notice that this applies to every pair in the vocabulary, meaning that every pair of words is independent, and their distance is \(\sqrt{2}\). For example, the words below are considered independent, and the distance, and hence the similarity, between any pair of words is the same. This is an issue for NLP tasks, since we want to be able to capture the relations between words.
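These claims are easy to check numerically. A quick sketch with NumPy, using an arbitrary four-word vocabulary:

```python
import numpy as np

# One-hot vectors over a toy 4-word vocabulary: word i maps to the
# i-th standard basis vector. The word list is arbitrary.
vocab = ["cat", "dog", "car", "tree"]
one_hot = {w: np.eye(len(vocab))[i] for i, w in enumerate(vocab)}

cat, dog = one_hot["cat"], one_hot["dog"]
print(cat @ dog)                      # inner product: 0.0
print(np.linalg.norm(cat - dog))      # Euclidean distance: sqrt(2) ~ 1.414
```

The same 0 and \(\sqrt{2}\) come out for any pair of distinct words, regardless of vocabulary size, which is exactly why one-hot vectors cannot encode that "cat" is closer to "dog" than to "car".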
In my first article on Time Series, I hope to introduce the basic ideas and definitions required to understand basic Time Series analysis. We will start with the essential and key mathematical definitions, which are required to implement more advanced models. The information will be introduced in a similar manner as in a McGill graduate course on the subject, and following the style of the textbook by Brockwell and Davis. A 'Time Series' is a collection of observations indexed by time. Each observation occurs at some time t, where t belongs to the set of allowed times, T. Note: T can be discrete, in which case we have a discrete time series, or continuous, in which case we have a continuous time series.
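As a concrete example of a discrete time series, we can index a handful of observations by daily timestamps, here with pandas; the values are arbitrary.

```python
import pandas as pd

# A discrete time series: observations x_t indexed by times t in T.
# Here T is a set of five daily timestamps; the values are made up.
t = pd.date_range("2021-01-01", periods=5, freq="D")
x = pd.Series([1.2, 0.8, 1.5, 1.1, 0.9], index=t)

print(x.index.min(), x.index.max())   # the range of allowed times T
print(x.loc["2021-01-03"])            # the observation at a given t
```

Here T is discrete (five distinct days); a continuous time series would instead have an observation defined for every instant t in an interval, as with an analog signal.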