Artificial intelligence has given the world of banking, and the financial industry as a whole, a way to meet the demands of customers who want smarter, more convenient and safer ways to access, spend, save and invest their money. We've put together a rundown of how AI is being used in finance and the companies leading the way. A recent study found that 77% of consumers preferred paying with a debit or credit card, compared to only 12% who favored cash. But easier payment options aren't the only reason the availability of credit is important to consumers. Having good credit helps in securing favorable financing options, landing a job and renting an apartment, to name a few examples.
China's once scorching tech sector is cooling off. "Winter is coming," laughs Lin Liu, a 29-year-old Shanghai tech worker. Electric vehicles, industrial robots, and microchip production all slowed recently. Big firms like Alibaba, Tencent, and search engine Baidu have slashed jobs. Overall, one in five Chinese tech companies plans to cut recruitment, says jobs site Liepin.com.
Artificial Intelligence (AI) applications are being used to predict and assess behaviour in multiple domains, such as criminal justice and consumer finance, which directly affect human well-being. However, if AI is to improve people's lives, then people must be able to trust AI, which means being able to understand what the system is doing and why. Even though transparency is often seen as a requirement here, in practice it might not always be possible or desirable, whereas the need to ensure that the system operates within set moral bounds remains. In this paper, we present an approach to evaluate the moral bounds of an AI system based on the monitoring of its inputs and outputs. We place a "glass box" around the system by mapping moral values into explicit, verifiable norms that constrain inputs and outputs, in such a way that if these remain within the box we can guarantee that the system adheres to the value. The focus on inputs and outputs allows for the verification and comparison of vastly different intelligent systems, from deep neural networks to agent-based systems. The explicit transformation of abstract moral values into concrete norms brings great benefits in terms of explainability: stakeholders know exactly how the system is interpreting and employing relevant abstract moral human values, and can calibrate their trust accordingly. Moreover, by operating at this higher level we can check the compliance of the system with different interpretations of the same value. These advantages will have an impact on the well-being of AI systems' users at large, building their trust and providing them with concrete knowledge of how systems adhere to moral values.
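The glass-box idea above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the model, the norms and all names here are hypothetical, and real norms would be derived systematically from stakeholder values.

```python
def opaque_model(application):
    """Stand-in for any black-box system, e.g. a loan scorer."""
    return {"approved": application["income"] > 30000}

# Norms: predicates over inputs and outputs that operationalise a value.
input_norms = [
    # e.g. non-discrimination: a protected attribute must not reach the model
    lambda x: "ethnicity" not in x,
]
output_norms = [
    # e.g. decisions must be explicit booleans, never missing
    lambda y: isinstance(y.get("approved"), bool),
]

def glass_box(model, x):
    """Run the model while checking inputs and outputs against the norms."""
    violations = [f"input norm {i}" for i, n in enumerate(input_norms) if not n(x)]
    y = model(x)
    violations += [f"output norm {i}" for i, n in enumerate(output_norms) if not n(y)]
    return y, violations

y, v = glass_box(opaque_model, {"income": 45000})
# an empty violation list means the system stayed inside the box for this call
```

Because the check touches only inputs and outputs, the same wrapper works unchanged whether `opaque_model` is a rule set, a neural network, or an agent-based system.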
Fear, uncertainty and doubt are tainting the banking industry's views of artificial intelligence. There's so much noise about AI that it's reminiscent of irrational fears about electricity or even the microwave: it's going to take away our jobs, it's more dangerous than nuclear weapons, it will harm our cities. From my perspective, it's important to be cautious when we evaluate new technologies, but I'm an optimist at heart. I believe in the power of technology to create value and transform lives. The individuals who are responsible for AI have the capacity to create guardrails and ensure that these new approaches to data science do not have a negative impact.
Although "black box" models such as Artificial Neural Networks, Support Vector Machines, and ensemble approaches continue to show superior performance in many disciplines, their adoption in sensitive disciplines (e.g., finance, healthcare) is questionable due to the lack of interpretability and explainability of the model. In fact, future adoption of "black box" models is difficult because of the European Union's recent "right to explanation" rule, under which a user can ask for an explanation behind an algorithmic decision, and the newly proposed bill by the US government, the "Algorithmic Accountability Act", which would require companies to assess their machine learning systems for bias and discrimination and take corrective measures. Top bankruptcy prediction models are AI-based and in need of better explainability: the extent to which the internal working mechanisms of an AI system can be explained in human terms. Although explainable artificial intelligence is an emerging field of research, infusing domain knowledge for better explainability might be a possible solution. In this work, we demonstrate a way to collect and infuse domain knowledge into a "black box" model for bankruptcy prediction. Our understanding from the experiments reveals that infused domain knowledge makes the output from the black box model more interpretable and explainable.
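One common way to infuse domain knowledge, sketched below, is to feed the black box named financial ratios from bankruptcy theory rather than raw balance-sheet fields, so every input maps to a concept an analyst can explain. This is an illustrative assumption, not necessarily the paper's method; the ratios, field names and toy scoring rule are all hypothetical.

```python
def domain_features(firm):
    """Map raw balance-sheet fields to named ratios from bankruptcy theory."""
    return {
        "working_capital_to_assets":
            (firm["current_assets"] - firm["current_liabilities"]) / firm["total_assets"],
        "debt_to_assets": firm["total_debt"] / firm["total_assets"],
        "return_on_assets": firm["net_income"] / firm["total_assets"],
    }

def black_box_score(features):
    """Stand-in for the black box; in practice a neural net or ensemble."""
    # Higher leverage pushes the score toward distress; profitability
    # and liquidity pull it back.
    return (features["debt_to_assets"]
            - features["return_on_assets"]
            - features["working_capital_to_assets"])

firm = {"current_assets": 50.0, "current_liabilities": 30.0,
        "total_assets": 100.0, "total_debt": 70.0, "net_income": 5.0}
feats = domain_features(firm)
score = black_box_score(feats)
# each contribution to the score is now attributable to a named ratio
```

Even when the scorer itself stays opaque, its inputs are now human-meaningful, which is the explainability gain the abstract describes.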
The financial services sector is pouring money into artificial intelligence (AI), with banks, for example, expected to spend $5.6 billion on AI in 2019 – second only to the retail sector. Until now, the vast majority of AI projects have remained pilots, and in many cases those projects led to tech deployments without a clear business use. Simply put, it's been trendy. Most AI projects today are aimed at improving customer service efficiency and security by introducing chatbot technology, or by deploying machine learning to uncover trends in customer behavior and needs across business lines. "It's about ensuring banks are able to retain the memory of a customer's journey across bank services," said Sankar Narayanan, chief practice officer at analytics service provider Fractal Analytics.
This is not a blog about Old MacDonald or his farm. Instead it is about Artificial Intelligence (AI) in the mortgage industry. And we will NOT allow any sarcastic, caustic or offhand remarks about the mortgage industry needing some kind of intelligence. First of all, exactly what is artificial intelligence, at least as it is described of late? One thing it is not is fake intelligence (not related to fake news … and you might like this site that helps YOU create your own fake news … but I digress, and so soon ... sorry).
Post-hoc explanations of machine learning models are crucial for people to understand and act on algorithmic predictions. An intriguing class of explanations is through counterfactuals, hypothetical examples that show people how to obtain a different prediction. We posit that effective counterfactual explanations should satisfy two properties: feasibility of the counterfactual actions given user context and constraints, and diversity among the counterfactuals presented. To this end, we propose a framework for generating and evaluating a diverse set of counterfactual explanations based on average distance and determinantal point processes. To evaluate the actionability of counterfactuals, we provide metrics that enable comparison of counterfactual-based methods to other local explanation methods. We further address necessary tradeoffs and point to causal implications in optimizing for counterfactuals. Our experiments on three real-world datasets show that our framework can generate a set of counterfactuals that are diverse and well approximate local decision boundaries.
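The search described above can be illustrated with a toy sketch. This is not the paper's formulation (which uses determinantal point processes); here diversity is approximated by a greedy max-spacing rule, and the classifier, features and thresholds are all hypothetical.

```python
import random

def model(x):
    """Stand-in classifier: approve when income minus twice debt exceeds 10."""
    return x["income"] - 2 * x["debt"] > 10

def distance(a, b):
    """L1 distance as a crude proxy for the effort a change requires."""
    return abs(a["income"] - b["income"]) + abs(a["debt"] - b["debt"])

def counterfactuals(x, n_samples=2000, k=3, seed=0):
    rng = random.Random(seed)
    candidates = []
    for _ in range(n_samples):
        # feasible perturbations only: income can rise, debt can fall
        c = {"income": x["income"] + rng.uniform(0, 20),
             "debt": max(0.0, x["debt"] - rng.uniform(0, 10))}
        if model(c) != model(x):              # must flip the prediction
            candidates.append(c)
    candidates.sort(key=lambda c: distance(c, x))   # prefer small changes
    chosen = []
    for c in candidates:                      # greedy diversity (DPP stand-in)
        if all(distance(c, p) > 5 for p in chosen):
            chosen.append(c)
        if len(chosen) == k:
            break
    return chosen

x = {"income": 20.0, "debt": 8.0}             # rejected: 20 - 16 = 4 <= 10
cfs = counterfactuals(x)
# each counterfactual flips the decision via a different feasible change
```

The two properties from the abstract show up directly: the sort enforces feasibility (closeness to the user's situation), while the greedy spacing rule enforces diversity among the returned examples.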
A new study or survey is released almost daily that suggests that artificial intelligence (AI) and machine learning (ML) will revolutionize our lives. This past summer, the Treasury Department released a report in which the agency recommended facilitating the development of AI due to the potential it holds for financial services companies and the overall economy. The agency also found that AI was one of the three biggest areas of investment for financial services companies last year. However, it's not just the Treasury Department that is backing AI and machine learning. The Federal Reserve has recognized the two concepts, as has the Financial Industry Regulatory Authority (FINRA), which noted that AI could help banks prevent money laundering and improve data management and customer service.
Artificial Intelligence (AI) is quickly evolving into the go-to technology for companies across the world looking to redefine their services and offerings. The technology itself is getting better and smarter by the day, driving adoption in newer industries. There is huge interest in AI in banking and other financial sectors, a domain showing very high adoption rates. The rudimentary applications of AI include introducing smarter chatbots for customer service, placing AI robots for self-service at banks and personalising services for individuals. AI also enables banks to bring more efficiency to their back office in a bid to reduce fraud and security risks.