It can be difficult to design and develop artificial intelligence systems that meet specific quality standards. AI systems are often designed to be "as good as possible" rather than to meet particular targets. Using the Design for Six Sigma quality methodology, an automated insurance underwriting expert system was designed, developed, and fielded. This methodology enabled the system to meet the high quality expectations required for deployment.
These four new solution accelerators help financial services and insurance firms solve complex business challenges by discovering meaningful relationships between events that impact one another (correlation) and cause a future event to happen (causation). Following the success of Synechron's AI Automation Program – Neo, Synechron's AI Data Science experts have developed a powerful set of accelerators that allow financial firms to address business challenges related to investment research generation, predicting the next best action to take with a wealth management client, handling high-priority customer complaints, and better predicting credit risk in mortgage lending. The Accelerators combine Natural Language Processing (NLP), Deep Learning algorithms, and Data Science to solve these complex business challenges, and rely on a powerful Spark and Hadoop platform to ingest and run correlations across massive amounts of data, test hypotheses, and predict future outcomes. The Data Science Accelerators are the fifth Accelerator program Synechron has launched in the last two years through its Financial Innovation Labs (FinLabs), which operate in 11 key global financial markets across North America, Europe, the Middle East, and APAC, including: New York, Charlotte, Fort Lauderdale, London, Paris, Amsterdam, Serbia, Dubai, Pune, Bangalore and Hyderabad. With this, Synechron's Global Accelerator programs now include over 50 Accelerators for Blockchain, AI Automation, InsurTech, RegTech, and AI Data Science, and a dedicated team of over 300 employees globally.
The most recent financial upheavals have cast doubt on the adequacy of some conventional quantitative risk management strategies, such as VaR (Value at Risk), in many common situations. Consequently, there has been an increasing need for realistic financial stress testing, namely simulating and analyzing financial portfolios in extreme, albeit rare, scenarios. Unlike conventional risk management, which exploits statistical correlations among financial instruments, here we focus our analysis on the notion of probabilistic causation, as embodied by Suppes-Bayes Causal Networks (SBCNs); SBCNs are probabilistic graphical models with many attractive features for more accurate causal analysis when generating financial stress scenarios. In this paper, we present a novel approach to stress testing of financial portfolios based on SBCNs in combination with classical machine learning classification tools. The resulting method is shown to be capable of correctly discovering the causal relationships among the financial factors that affect the portfolios and thus of simulating stress testing scenarios with higher accuracy and lower computational complexity than conventional Monte Carlo simulations.
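For reference, the conventional baseline this abstract critiques, historical VaR, reduces to an empirical quantile of portfolio losses. Below is a minimal sketch (the function name, return sample, and confidence level are illustrative, not taken from the paper):

```python
import numpy as np

def historical_var(returns, alpha=0.99):
    """Historical Value at Risk at confidence level alpha:
    the loss threshold exceeded with probability (1 - alpha)
    under the empirical distribution of portfolio returns."""
    losses = -np.asarray(returns, dtype=float)  # losses are negated returns
    return np.quantile(losses, alpha)

# Hypothetical daily returns for an illustrative portfolio
rng = np.random.default_rng(0)
returns = rng.normal(loc=0.0005, scale=0.01, size=2500)
var_99 = historical_var(returns, alpha=0.99)  # tail loss threshold
```

Because the quantile is taken over historically observed returns, extreme-but-rare scenarios are underrepresented by construction, which is precisely the gap the SBCN-based stress testing is meant to address.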
Sparsity-promoting priors have become increasingly popular in recent years due to the growing number of regression and classification applications involving a large number of predictors. In time series applications, where observations are collected over time, it is often unrealistic to assume that the underlying sparsity pattern is fixed. We propose here an original class of flexible Bayesian linear models for dynamic sparsity modelling. The proposed class of models expands upon the existing Bayesian literature on sparse regression using generalized multivariate hyperbolic distributions. The properties of the models are explored through both analytic results and simulation studies. We demonstrate the model in a financial application, showing that it accurately represents the patterns seen in the analysis of stock and derivative data and is able to detect major events by filtering an artificial portfolio of assets.
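The idea of a sparsity pattern that changes over time can be illustrated with a much simpler frequentist analogue: fitting an $\ell_1$-penalized regression (lasso, via ISTA) on successive time windows and observing which coefficients are driven to zero in each. This is only a hedged sketch of the concept; the paper's actual models are Bayesian, built on generalized multivariate hyperbolic priors, not the lasso:

```python
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def lasso_ista(X, y, lam, n_iter=1000):
    """ISTA for the lasso: minimize 0.5*||y - X b||^2 + lam*||b||_1.
    The l1 penalty drives small coefficients toward exactly zero,
    producing a sparsity pattern; refitting per time window makes
    that pattern time-varying (a crude stand-in for dynamic sparsity)."""
    L = np.linalg.norm(X, 2) ** 2  # Lipschitz constant of the gradient
    b = np.zeros(X.shape[1])
    for _ in range(n_iter):
        b = soft_threshold(b + X.T @ (y - X @ b) / L, lam / L)
    return b

# Two windows in which different predictors are active
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 3))
y_early = X[:100] @ np.array([3.0, 0.0, 0.0]) + 0.01 * rng.standard_normal(100)
y_late  = X[100:] @ np.array([0.0, 2.0, 0.0]) + 0.01 * rng.standard_normal(100)
b_early = lasso_ista(X[:100], y_early, lam=0.5)
b_late  = lasso_ista(X[100:], y_late,  lam=0.5)
```

Per-window refits ignore temporal smoothness and carry no uncertainty quantification, which is exactly what a fully Bayesian dynamic sparsity model provides.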
Max-convolution is an important problem closely resembling standard convolution; as such, it occurs frequently across many fields. Here we extend the method with the fastest known worst-case runtime, which can be applied to nonnegative vectors by numerically approximating the Chebyshev norm $\| \cdot \|_\infty$, and use this approach to derive two numerically stable methods based on the idea of computing $p$-norms via fast convolution. The first proposed method, with runtime in $O( k \log(k) \log(\log(k)) )$ (which is less than $18 k \log(k)$ for any vectors that can be practically realized), uses the $p$-norm as a direct approximation of the Chebyshev norm. The second proposed approach, with runtime in $O( k \log(k) )$ (although in practice the two perform similarly), uses a novel null space projection method, which extracts information from a sequence of $p$-norms to estimate the maximum value in the vector (this is equivalent to querying a small number of moments of a distribution of bounded support in order to estimate its maximum). The $p$-norm approaches are compared to one another and are shown to compute an approximation of the Viterbi path in a hidden Markov model whose transition matrix is a Toeplitz matrix; the runtime of approximating the Viterbi path is thus reduced from $O( n k^2 )$ steps to $O( n k \log(k) )$ steps in practice, as demonstrated by inferring the U.S. unemployment rate from the S&P 500 stock index.
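The core $p$-norm trick is compact enough to sketch: raise both vectors to the $p$-th power, convolve once via the FFT, and take the $p$-th root, so that each output entry becomes the $p$-norm of the terms $u_i v_{k-i}$, which upper-bounds and approximates their maximum. The sketch below illustrates only the first (direct $p$-norm) method, not the null space projection variant, and the function names are ours:

```python
import numpy as np

def max_convolve_exact(u, v):
    """Brute-force max-convolution: r[k] = max_i u[i] * v[k-i]."""
    n = len(u) + len(v) - 1
    r = np.zeros(n)
    for k in range(n):
        for i in range(max(0, k - len(v) + 1), min(k + 1, len(u))):
            r[k] = max(r[k], u[i] * v[k - i])
    return r

def max_convolve_pnorm(u, v, p=16.0):
    """Approximate max-convolution of nonnegative vectors in O(k log k):
    conv(u**p, v**p)[k] ** (1/p) is the p-norm of the terms u[i]*v[k-i],
    which approaches their maximum as p grows."""
    u = np.asarray(u, dtype=float)
    v = np.asarray(v, dtype=float)
    n = len(u) + len(v) - 1
    conv = np.fft.irfft(np.fft.rfft(u ** p, n) * np.fft.rfft(v ** p, n), n)
    conv = np.clip(conv, 0.0, None)  # guard against tiny negative FFT error
    return conv ** (1.0 / p)
```

Since the $p$-norm dominates the Chebyshev norm, this approximation always overestimates, by a factor of at most $m^{1/p}$ when $m$ terms contribute to an entry; choosing $p$ so that the result stays numerically stable is the crux of the methods in the abstract.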