Is AI the Secret Behind Indian Start-ups Making it into Unicorns Club?

#artificialintelligence

India is a breeding ground for many industries. A growing educated population and the push for growth have brought technology into the country, and today technology is a core element of growth in the Indian ecosystem. While well-established companies are embracing artificial intelligence for further improvement, Indian start-ups are expanding like never before. The technology-based Indian start-up landscape has evolved to become the third largest in the world.


The Future of AI in Insurance - Insurance Thought Leadership

#artificialintelligence

Organizations hoping to deploy artificial intelligence have to know what problems they're solving -- no vague questions allowed. Artificial intelligence (AI) and machine learning have come a long way, both in terms of adoption across the broader technology landscape and in the insurance industry specifically. That said, there is still much more territory to cover, helping key employees like claims adjusters do their jobs better, faster, and more easily. Data science is currently being used to uncover insights that claims representatives wouldn't have found otherwise, which can be extremely valuable. It identifies patterns within amounts of data too large for humans to comprehend on their own; machines can then alert users to relevant, actionable insights that improve claim outcomes and drive operational efficiency.


Does Your AI Model Know What It's Talking About? Here's One Way To Find Out.

#artificialintelligence

In Season 4 of the show Silicon Valley, Jian-Yang creates an app called SeeFood that uses an AI algorithm to identify any food it sees--but since the algorithm has only been trained on images of hot dogs, every food winds up being labeled "hot dog" or "not hot dog." While Jian-Yang's creation may seem absurd, in fact his app displays an intelligence that most AI models in use today do not: it only gives an answer that it knows is 100% accurate. In real life, when you ask most machine learning algorithms a question, they are programmed to give you an answer, even when they are somewhat or entirely unqualified to do so. The data on which these models are trained may have nothing to do with the specific question being asked, but the model delivers an answer anyway -- and as a result, that answer is often wrong. It's as if SeeFood tried to identify every food based only on a knowledge of hot dogs. This issue, known as "model overconfidence," is a key reason why many AI deployments fail to meet their business objectives.
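
One common, if imperfect, mitigation is to let a model abstain when its confidence is low rather than always returning a label. Below is a minimal sketch; the softmax threshold and the hot-dog labels are illustrative assumptions, not anything from the article, and raw softmax confidence can itself be overconfident without further calibration.

```python
import numpy as np

def predict_or_abstain(logits, labels, threshold=0.9):
    """Return the top label only if softmax confidence clears the
    threshold; otherwise abstain instead of guessing."""
    shifted = logits - logits.max()              # numerical stability
    probs = np.exp(shifted) / np.exp(shifted).sum()
    best = int(np.argmax(probs))
    if probs[best] >= threshold:
        return labels[best], float(probs[best])
    return "abstain", float(probs[best])

labels = ["hot dog", "not hot dog"]
print(predict_or_abstain(np.array([4.0, 0.2]), labels))  # confident answer
print(predict_or_abstain(np.array([0.6, 0.5]), labels))  # abstains
```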


Automatic Intent-Slot Induction for Dialogue Systems

arXiv.org Artificial Intelligence

Automatically and accurately identifying user intents and filling the associated slots from spoken language are critical to the success of dialogue systems. Traditional methods require manually defining the DOMAIN-INTENT-SLOT schema and asking many domain experts to annotate the corresponding utterances, upon which neural models are trained. This procedure brings challenges of hindered information sharing, out-of-schema intents, and data sparsity in open-domain dialogue systems. To tackle these challenges, we explore a new task of automatic intent-slot induction and propose a novel domain-independent tool. That is, we design a coarse-to-fine three-step procedure comprising Role-labeling, Concept-mining, And Pattern-mining (RCAP): (1) role-labeling: extracting keyphrases from users' utterances and classifying them into a quadruple of coarsely-defined intent-roles via sequence labeling; (2) concept-mining: clustering the extracted intent-role mentions and naming them as abstract fine-grained concepts; (3) pattern-mining: applying the Apriori algorithm to mine intent-role patterns and automatically inferring intent-slots from these coarse-grained intent-role labels and fine-grained concepts. Empirical evaluations on both real-world in-domain and out-of-domain datasets show that: (1) our RCAP can generate a satisfactory SLU schema and outperforms the state-of-the-art supervised learning method; (2) our RCAP can be directly applied to out-of-domain datasets and gains at least a 76% improvement in F1-score on intent detection and a 41% improvement in F1-score on slot filling; (3) our RCAP exhibits its power in generic intent-slot extraction with less manual effort, which opens pathways for schema induction on new domains and unseen intent-slot discovery for generalizable dialogue systems.
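
The pattern-mining step relies on the classic Apriori algorithm for frequent itemsets. Here is a small self-contained sketch, assuming hypothetical coarse intent-role labels as transactions; the labels and the support threshold are illustrative, not the paper's data.

```python
from itertools import combinations
from collections import Counter

def apriori(transactions, min_support=0.4):
    """Tiny Apriori: find all itemsets whose support (fraction of
    transactions containing them) meets min_support."""
    n = len(transactions)
    frequent, size = {}, 1
    candidates = [frozenset([i]) for t in transactions for i in t]
    candidates = sorted(set(candidates))
    while candidates:
        counts = Counter(c for t in transactions for c in candidates if c <= t)
        kept = {c: v / n for c, v in counts.items() if v / n >= min_support}
        frequent.update(kept)
        size += 1
        # Next-size candidates come only from items in surviving itemsets.
        pool = sorted({i for c in kept for i in c})
        candidates = [frozenset(c) for c in combinations(pool, size)]
    return frequent

# Hypothetical coarse intent-role labels per utterance (not the paper's data).
utterances = [
    {"action:book", "object:flight", "time"},
    {"action:book", "object:hotel", "time"},
    {"action:cancel", "object:flight"},
    {"action:book", "object:flight", "location"},
]
for pattern, support in sorted(apriori(utterances).items(), key=lambda x: -x[1]):
    print(set(pattern), round(support, 2))
```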


KGSynNet: A Novel Entity Synonyms Discovery Framework with Knowledge Graph

arXiv.org Artificial Intelligence

Entity synonym discovery is crucial for entity-leveraging applications. However, existing studies suffer from several critical issues: (1) the input mentions may be out-of-vocabulary (OOV) and may come from a different semantic space than the entities; (2) the connection between mentions and entities may be hidden and cannot be established by surface matching; and (3) some entities rarely appear due to the long-tail effect. To tackle these challenges, we leverage knowledge graphs and propose a novel entity synonym discovery framework, named KGSynNet. Specifically, we pre-train subword embeddings for mentions and entities using a large-scale domain-specific corpus while learning the knowledge embeddings of entities via a joint TransC-TransE model. More importantly, to obtain a comprehensive representation of entities, we employ a specifically designed fusion gate to adaptively absorb the entities' knowledge information into their semantic features. We conduct extensive experiments to demonstrate the effectiveness of KGSynNet in leveraging the knowledge graph. The experimental results show that KGSynNet improves on the state-of-the-art methods by 14.7% in terms of hits@3 in the offline evaluation and outperforms the BERT model by 8.3% in the positive feedback rate of an online A/B test on the entity linking module of a question answering system.
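
A fusion gate of this kind is essentially a learned, per-dimension interpolation between an entity's semantic embedding and its knowledge-graph embedding. The PyTorch sketch below shows the general idea; the dimensions and exact parameterization are assumptions, and the paper's formulation may differ.

```python
import torch
import torch.nn as nn

class FusionGate(nn.Module):
    """Gated fusion of a semantic embedding with a knowledge embedding.
    A generic sketch of the concept, not the paper's exact module."""
    def __init__(self, sem_dim, kg_dim):
        super().__init__()
        self.proj = nn.Linear(kg_dim, sem_dim)       # align embedding spaces
        self.gate = nn.Linear(sem_dim * 2, sem_dim)  # per-dimension gate

    def forward(self, sem, kg):
        kg = self.proj(kg)
        g = torch.sigmoid(self.gate(torch.cat([sem, kg], dim=-1)))
        return g * sem + (1 - g) * kg  # adaptively absorb knowledge features

fuse = FusionGate(sem_dim=128, kg_dim=64)
out = fuse(torch.randn(8, 128), torch.randn(8, 64))
print(out.shape)  # torch.Size([8, 128])
```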


Modeling Weather-induced Home Insurance Risks with Support Vector Machine Regression

arXiv.org Machine Learning

The insurance industry is one of the sectors most vulnerable to climate change. Assessing the future number of claims and incurred losses is critical for disaster preparedness and risk management. In this project, we study the effect of precipitation on the joint dynamics of weather-induced home insurance claims and losses. We discuss the utility and limitations of machine learning procedures such as Support Vector Machines and Artificial Neural Networks for forecasting future claim dynamics and evaluating the associated uncertainties. We illustrate our approach by applying it to attribution analysis and forecasting of weather-induced home insurance claims in a mid-sized city in the Canadian Prairies.
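
For readers unfamiliar with Support Vector Machine regression in this setting, here is a minimal sketch on synthetic precipitation-vs-claims data; the feature, kernel, and hyperparameters are illustrative assumptions, not the paper's setup.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)

# Synthetic stand-in data: weekly precipitation (mm) vs. claim counts.
precip = rng.gamma(shape=2.0, scale=10.0, size=(200, 1))
claims = (0.3 * precip.ravel()
          + 5 * np.sin(precip.ravel() / 15)
          + rng.normal(0, 2, 200))

# RBF-kernel SVR; scaling matters because the kernel is distance-based.
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.5))
model.fit(precip, claims)

# Predicted claim levels at three rainfall intensities.
print(model.predict([[5.0], [40.0], [80.0]]))
```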


AI in the Professional Services Industry

#artificialintelligence

Business organizations look to professional services firms to offload existing processes such as payroll, claims processing, and other clerical tasks. Consequently, rather than push the innovation curve as early adopters of emerging technology, professional services firms have traditionally followed well-established procedures and used conventional tools. However, much of the work they take on involves processes that are well suited to optimization through AI, and many corporations are investigating the benefits of AI for streamlining workflows and cutting operational expenses. A KPMG report predicts that enterprises will increase their spending on intelligent automation from $12.4 billion in 2019 to $232 billion in 2025, almost 19 times as much in just six years. A McKinsey report estimates that 20 percent of the cyclical tasks of a typical finance unit can be fully automated and almost 50 percent can be mostly automated.


Autocalibration and Tweedie-dominance for Insurance Pricing with Machine Learning

arXiv.org Machine Learning

Boosting techniques and neural networks are particularly effective machine learning methods for insurance pricing. In practice, there are nevertheless endless debates about the choice of the right loss function for training the machine learning model, as well as about the appropriate metric for assessing the performance of competing models. Also, the sum of fitted values can depart from the observed totals to a large extent, which often confuses actuarial analysts. The lack of balance inherent in training models by minimizing deviance outside the familiar GLM-with-canonical-link setting has been empirically documented by Wüthrich (2019, 2020), who attributes it to the early stopping rule in gradient descent methods for model fitting. The present paper aims to further study this phenomenon when learning proceeds by minimizing Tweedie deviance. It is shown that minimizing deviance involves a trade-off between the integral of weighted differences of lower partial moments and the bias measured on a specific scale. Autocalibration is then proposed as a remedy. This new method to correct for bias adds an extra local GLM step to the analysis. Theoretically, it is shown that it implements the autocalibration concept in pure premium calculation and ensures that balance also holds on a local scale, not only at the portfolio level as with existing bias-correction techniques. The convex order appears to be the natural tool for comparing competing models, shedding new light on the diagnostic graphs and associated metrics proposed by Denuit et al. (2019).
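
A rough numerical sketch of the balance issue follows, assuming synthetic data and using re-averaging within quantile bins of the score as a crude stand-in for the paper's extra local GLM step.

```python
import numpy as np
from sklearn.linear_model import TweedieRegressor
from sklearn.metrics import mean_tweedie_deviance

rng = np.random.default_rng(1)

# Synthetic pure-premium-style data (illustrative only).
X = rng.normal(size=(5000, 3))
mu = np.exp(0.2 * X[:, 0] - 0.3 * X[:, 1])
y = rng.gamma(shape=1.0, scale=mu)                 # skewed, nonnegative losses

model = TweedieRegressor(power=1.5, alpha=0.01, max_iter=10000)
model.fit(X, y)
pred = model.predict(X)

print("Tweedie deviance:", mean_tweedie_deviance(y, pred, power=1.5))
print("global balance  :", y.sum() / pred.sum())   # 1.0 means totals match

# Crude local correction: re-average the response within quantile bins
# of the score, so that balance also holds bin by bin.
edges = np.quantile(pred, np.linspace(0, 1, 11))
bins = np.clip(np.digitize(pred, edges[1:-1]), 0, 9)
calibrated = np.array([y[bins == b].mean() for b in range(10)])[bins]
for b in range(3):
    raw = y[bins == b].sum() / pred[bins == b].sum()
    cal = y[bins == b].sum() / calibrated[bins == b].sum()
    print(f"bin {b}: raw balance {raw:.3f} -> calibrated {cal:.3f}")
```

By construction, the binned re-averaging drives each bin's balance ratio to 1; the paper's autocalibration achieves the analogous local guarantee with a local GLM step rather than hard bins.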


Machine Learning Can Help The Insurance Industry Throughout The Process Lifecycle

#artificialintelligence

Insurance works with large amounts of data, about many individuals, many instances requiring insurance, and many factors involved in settling claims. To add to the complexity, not all insurance is alike. Life insurance and automobile insurance are not (as far as I know) the same thing. Many processes are similar, but the data and workflows can differ. Machine learning (ML) is being applied to multiple aspects of insurance practice.


Deep Learning Market Growth Projection to 2025

#artificialintelligence

Market Study Report has recently added a report on the Deep Learning Market that provides a succinct analysis of the market size, revenue forecast, and regional landscape of this industry. The report also highlights the major challenges and the current growth strategies adopted by the prominent companies in the dynamic competitive spectrum of this business sphere. The deep learning market has been segmented on the basis of offerings, applications, end-user industries, and geographies. In terms of offerings, software holds the largest share of the deep learning market. Also, the market for services is expected to grow at the highest CAGR from 2018 to 2023.