These four new solution accelerators help financial services and insurance firms solve complex business challenges by discovering meaningful relationships between events that impact one another (correlation) and that cause future events to happen (causation). Following the success of Synechron's AI Automation Program, Neo, Synechron's AI Data Science experts have developed a powerful set of accelerators that allow financial firms to address business challenges related to investment research generation, predicting the next best action to take with a wealth management client, handling high-priority customer complaints, and better predicting credit risk in mortgage lending. The Accelerators combine Natural Language Processing (NLP), Deep Learning algorithms, and Data Science to solve these complex business challenges, and rely on a powerful Spark and Hadoop platform to ingest and run correlations across massive amounts of data to test hypotheses and predict future outcomes. The Data Science Accelerators are the fifth Accelerator program Synechron has launched in the last two years through its Financial Innovation Labs (FinLabs), which operate in 11 key global financial markets across North America, Europe, the Middle East, and APAC, including New York, Charlotte, Fort Lauderdale, London, Paris, Amsterdam, Serbia, Dubai, Pune, Bangalore, and Hyderabad. With this, Synechron's Global Accelerator programs now include over 50 Accelerators for Blockchain, AI Automation, InsurTech, RegTech, and AI Data Science, and a dedicated team of over 300 employees globally.
It can be difficult to design and develop artificial intelligence systems that meet specific quality standards. Often, AI systems are designed to be "as good as possible" rather than to meet particular targets. Using the Design for Six Sigma quality methodology, an automated insurance underwriting expert system was designed, developed, and fielded. This methodology enabled the system to meet the high quality expectations required for deployment.
The most recent financial upheavals have cast doubt on the adequacy of some of the conventional quantitative risk management strategies, such as VaR (Value at Risk), in many common situations. Consequently, there has been an increasing need for verisimilar financial stress testing, namely simulating and analyzing financial portfolios in extreme, albeit rare, scenarios. Unlike conventional risk management, which exploits statistical correlations among financial instruments, here we focus our analysis on the notion of probabilistic causation, which is embodied by Suppes-Bayes Causal Networks (SBCNs); SBCNs are probabilistic graphical models with many attractive features in terms of more accurate causal analysis for generating financial stress scenarios. In this paper, we present a novel approach for conducting stress testing of financial portfolios based on SBCNs in combination with classical machine learning classification tools. The resulting method is shown to be capable of correctly discovering the causal relationships among the financial factors that affect the portfolios and thus of simulating stress-testing scenarios with higher accuracy and lower computational complexity than conventional Monte Carlo simulations.
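The core idea of causal stress testing can be illustrated with a toy example. The sketch below is a minimal illustration, not the paper's actual SBCN construction: it assumes a hypothetical three-factor causal chain (rate shock, spread widening, portfolio loss) with made-up conditional probabilities, and shows how forcing a root cause "on" and propagating its effects forward yields a stressed loss estimate that differs from the baseline.

```python
import random

# Hypothetical causal chain (illustrative probabilities only):
#   rate_shock -> spread_widening -> portfolio_loss
def simulate(rate_shock_prob, n=100_000, seed=0):
    rng = random.Random(seed)
    losses = 0
    for _ in range(n):
        shock = rng.random() < rate_shock_prob
        # spreads widen with prob 0.7 under a rate shock, 0.1 otherwise
        spread = rng.random() < (0.7 if shock else 0.1)
        # the portfolio takes a loss with prob 0.6 if spreads widen
        loss = rng.random() < (0.6 if spread else 0.05)
        losses += loss
    return losses / n

baseline = simulate(rate_shock_prob=0.05)  # ordinary market conditions
stressed = simulate(rate_shock_prob=1.0)   # forced rate shock (stress scenario)
```

Because the causal structure is explicit, the stress scenario is generated by intervening on a single root cause rather than by resampling the entire joint distribution, which is what gives graph-based approaches their efficiency edge over brute-force Monte Carlo.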
Max-convolution is an important problem closely resembling standard convolution; as such, max-convolution occurs frequently across many fields. Here we extend the method with the fastest known worst-case runtime, which can be applied to nonnegative vectors by numerically approximating the Chebyshev norm $\| \cdot \|_\infty$, and use this approach to derive two numerically stable methods based on the idea of computing $p$-norms via fast convolution. The first method proposed, with runtime in $O( k \log(k) \log(\log(k)) )$ (which is less than $18 k \log(k)$ for any vectors that can be practically realized), uses the $p$-norm as a direct approximation of the Chebyshev norm. The second approach proposed, with runtime in $O( k \log(k) )$ (although in practice both perform similarly), uses a novel null space projection method, which extracts information from a sequence of $p$-norms to estimate the maximum value in the vector (this is equivalent to querying a small number of moments from a distribution of bounded support in order to estimate the maximum). The $p$-norm approaches are compared to one another and are shown to compute an approximation of the Viterbi path in a hidden Markov model where the transition matrix is a Toeplitz matrix; the runtime of approximating the Viterbi path is thus reduced from $O( n k^2 )$ steps to $O( n k \log(k) )$ steps in practice, and is demonstrated by inferring the U.S. unemployment rate from the S&P 500 stock index.
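The $p$-norm trick underlying both methods can be sketched concisely. For nonnegative vectors, each entry of the max-convolution $u[m] = \max_{i+j=m} x[i]\,y[j]$ is the Chebyshev norm of a column of products, which the $p$-norm approximates from above as $p$ grows; crucially, the $p$-norm of every column can be computed at once via one standard convolution of elementwise $p$-th powers. The sketch below is a minimal illustration of that idea (it uses `np.convolve` for clarity; a fast FFT-based convolution would be substituted to reach the stated runtimes, and it omits the paper's null space projection refinement).

```python
import numpy as np

def max_convolve_exact(x, y):
    # Brute-force max-convolution: u[m] = max over i+j=m of x[i]*y[j].
    n = len(x) + len(y) - 1
    u = np.zeros(n)
    for i, xi in enumerate(x):
        for j, yj in enumerate(y):
            u[i + j] = max(u[i + j], xi * yj)
    return u

def max_convolve_pnorm(x, y, p=64):
    # p-norm approximation: convolve elementwise p-th powers, then take
    # the p-th root.  As p grows, the p-norm of each column of products
    # approaches its Chebyshev (max) norm from above.
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    s = max(x.max(), y.max())  # rescale to [0, 1] for numerical stability
    conv = np.convolve((x / s) ** p, (y / s) ** p)
    return s * s * conv ** (1.0 / p)
```

The approximation overestimates each entry by at most a factor of $k^{1/p}$ (the column has at most $k$ terms), which is where the tension between accuracy (large $p$) and numerical stability (small $p$) that motivates the paper's two methods comes from.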
Rockville, MD 20850

Colleen McClintock, Infinite Intelligence, Inc., 1155 Connecticut Avenue, #500, Washington 20036

Jacqueline Sobieski, Fannie Mae, 3900 Wisconsin Avenue, Washington 20016

Abstract

Business policy can be defined as the guidelines and procedures by which an organization conducts its business. Organizations depend on their information systems to implement their business policy. It is important that any implementation of business policy allows faster application development and better quality management and also provides a balance between flexibility and centralized control. This paper views business rules as atomic units of business policy that can be used to define or constrain different aspects of the business. It then argues that business rules provide an excellent representation for business policy. KARMA was developed and deployed at Fannie Mae.

1 Introduction

Business policy can be defined as the guidelines and procedures by which an organization conducts its business. Business policy is often documented in manuals and business guidelines and is reflected in an organization's information systems. Organizations depend on their information systems to implement this policy.
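The notion of business rules as atomic units of policy can be made concrete with a small sketch. This is a hypothetical illustration, not Fannie Mae's actual KARMA rule base: each rule is assumed to be a named, independent predicate over a loan application, and policy evaluation is simply the collection of violated rules, which is what makes rules easy to add, remove, or audit centrally.

```python
# Hypothetical business rules (illustrative thresholds only): each rule
# is an atomic, independently maintainable unit of policy.
RULES = [
    ("ltv_under_95", lambda app: app["loan"] / app["value"] <= 0.95),
    ("dti_under_45", lambda app: app["debt"] / app["income"] <= 0.45),
]

def evaluate(app):
    # Return the names of violated rules; an empty list means the
    # application satisfies the policy.
    return [name for name, rule in RULES if not rule(app)]

app = {"loan": 180_000, "value": 200_000, "debt": 30_000, "income": 80_000}
# loan-to-value = 0.90 and debt-to-income = 0.375, so no rules fire
```

Because each rule is self-contained, changing policy means editing one entry in the rule list rather than rewriting application code, which is the flexibility/centralized-control balance the paper argues for.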