Artificial intelligence (AI) applications have attracted considerable ethical attention, for good reason. Although AI models might advance human welfare in unprecedented ways, progress will not occur without substantial risks. This article considers three such risks: system malfunctions, breaches of privacy, and the repurposing of data without consent. To meet these challenges, traditional risk managers will likely need to collaborate intensively with computer scientists, bioinformaticians, information technologists, and data privacy and security experts. This essay speculates on the degree to which these AI risks might be embraced or dismissed by risk management.
Artificial intelligence-based tools continue to be used by only a very small percentage of law firms, according to the ABA's 2020 Legal Technology Survey Report, released this month. Just 7% of respondents to the ABA Legal Technology Resource Center's survey reported that their firms use AI-based tools, a decrease of one percentage point from a year ago. Meanwhile, 23% of respondents said their firms were not interested in purchasing AI-based tools, and nearly 34% said they did not know enough about AI to answer the question about their firms' current or planned use of such tools. Alexander Paykin, a Legal Technology Resource Center board member, thinks the legal industry has been slow to adopt AI-based tools because the available products have yet to demonstrate that they can consistently produce the results vendors promise. He points to his experience with the AI-based legal research offerings he has tried in recent years to back up this point.
Since the dawn of Bronze Age civilizations more than 5,000 years ago, humans have been creating norms of societal governance, a process that continues with many imperfections. Of late, artificial intelligence (AI) has been gaining influence over decision-making processes in human lives, and the expectation is that AI will follow similar or better norms. Principles that govern the behaviour of responsible AI systems are being established: all AI systems should be fair in dealing with people and inclusive in coverage.
Insurance companies are continually subjected to questionable claims, whether outright fraud, waste, or simple abuse. In the U.S. alone, insurance fraud costs insurers an estimated USD 32 billion per year in property and casualty (P&C) lines and USD 84 billion in health care. Each carrier processes tens or even hundreds of thousands of claims, yet fraudulent claims make up only a small fraction of the total. The result is highly unbalanced, sparse datasets that make fraud detection especially hard. Combine that with the fact that new schemes constantly emerge, with no ground truth available until well after a scheme has been successfully run, and insurance companies are left at a disadvantage.
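The imbalance problem can be made concrete with a small sketch. The figures below (1,000 claims, a 1% fraud rate, and a hypothetical detector) are invented for illustration, not real claims data; the point is only that raw accuracy rewards a detector that never flags anything, which is why precision and recall are the metrics that matter here.

```python
# Illustrative sketch of why accuracy misleads on imbalanced fraud data.
# All figures (1,000 claims, 1% fraud rate, detector behaviour) are
# made-up assumptions for demonstration, not real claims data.

def precision_recall_accuracy(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    accuracy = (tp + tn) / len(y_true)
    return precision, recall, accuracy

# 10 fraudulent claims (label 1) hidden among 990 legitimate ones.
y_true = [1] * 10 + [0] * 990

# A "detector" that never flags anything still scores 99% accuracy
# while catching zero fraud.
never_flag = [0] * 1000
_, recall_naive, acc_naive = precision_recall_accuracy(y_true, never_flag)

# A detector that flags 20 claims and catches 7 of the 10 frauds has
# lower accuracy, yet recovers 70% of the fraud.
flag_some = [1] * 7 + [0] * 3 + [1] * 13 + [0] * 977
prec, rec, acc = precision_recall_accuracy(y_true, flag_some)
```

The naive detector scores 99% accuracy with zero recall; the useful one drops to 98.4% accuracy while catching most of the fraud, which is exactly the inversion that makes accuracy the wrong target on such data.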
Justpoint, a New York-based startup that uses artificial intelligence to analyse individual medical malpractice claims, has secured $1 million in a seed funding round. Founded by Victor Bornstein, Justpoint is an AI-first medical malpractice company offering consumers and law firms a way to understand the legal merits of a claim as well as an instant prediction of the likely settlement amount. Harry Langenberg of Optima Tax Relief said, "Justpoint has identified a big inefficient market in medical claims and malpractice that is ripe for disruption. Leveraging their deep experience in healthcare and technology, they have put together a brilliant team of engineers and scientists to turn their vision into reality. Their ability to leverage technologies such as AI, machine learning, and predictive analytics will add tremendous efficiencies and cut wasteful processes across the value chain, improving payouts and transparency for consumers and reducing search times and costs for law firms".
Precision health leverages information from various sources, including omics, lifestyle, environment, social media, medical records, and medical insurance claims, to enable personalized care, prevent and predict illness, and deliver precise treatments. It relies extensively on sensing technologies (e.g., electronic health monitoring devices), computation (e.g., machine learning), and communication (e.g., interaction between health data centers). Because health data contain sensitive private information, including the identity and medical conditions of the patient and carer, proper care is required at all times. Leakage of this private information can disrupt personal life, leading to bullying, higher insurance premiums, and loss of employment on account of one's medical history. The security and privacy of, and trust in, such information are therefore of the utmost importance; moreover, government legislation and ethics committees demand that the security and privacy of healthcare data be protected. In light of precision health data security, privacy, and ethical and regulatory requirements, finding the best methods and techniques for utilizing health data, and thus realizing precision health, is essential. To that end, this paper first explores regulations and ethical guidelines around the world as well as domain-specific needs, then presents the requirements and investigates the associated challenges. Second, it investigates secure and privacy-preserving machine learning methods suitable for computing on precision health data, along with their usage in relevant health projects. Finally, it illustrates the best available techniques for precision health data security and privacy through a conceptual system model that enables compliance, ethics clearance, consent management, medical innovation, and further development in the health domain.
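One of the standard privacy-preserving techniques surveyed in this space is differential privacy; a minimal sketch of its Laplace mechanism applied to a count query over patient records is shown below. The record format, predicate, and epsilon value are illustrative assumptions; the one fixed fact the sketch relies on is that a counting query has sensitivity 1, so noise drawn from Laplace(0, 1/epsilon) yields epsilon-differential privacy.

```python
import math
import random

# Minimal sketch of the Laplace mechanism from differential privacy,
# applied to a count query over hypothetical patient records. Adding or
# removing one record changes a count by at most 1 (sensitivity 1), so
# the noise scale is 1/epsilon.

def laplace_sample(scale, rng):
    """Inverse-CDF sampling from Laplace(0, scale)."""
    u = rng.random() - 0.5          # uniform on [-0.5, 0.5)
    sign = -1.0 if u < 0 else 1.0
    return -scale * sign * math.log(1.0 - 2.0 * abs(u))

def dp_count(records, predicate, epsilon, rng):
    """Return a noisy, epsilon-DP count of records matching predicate."""
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_sample(1.0 / epsilon, rng)

# Illustrative records: (patient_id, has_condition)
records = [(i, i % 7 == 0) for i in range(100)]
noisy = dp_count(records, lambda r: r[1], epsilon=0.5, rng=random.Random(42))
```

Smaller epsilon means stronger privacy and more noise; the analyst sees only the perturbed count, never the raw one.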
Thousands of students in England are angry about the controversial use of an algorithm to determine this year's GCSE and A-level results. They were unable to sit exams because of lockdown, so the algorithm used data about schools' results in previous years to determine grades. It meant about 40% of this year's A-level results came out lower than predicted, which has a huge impact on what students are able to do next. GCSE results are due out on Thursday. There are many examples of algorithms making big decisions about our lives, without us necessarily knowing how or when they do it.
The drive to improve data-driven practice in insurance has created a desire to gather more data than was traditionally available. In addition to underwriting characteristics such as age, gender, and address, technology now allows the collection of many more variables: dynamic data from sensors measuring driving behaviour in vehicles or appliance and electrical usage in homes, and static data from external databases on traffic violations, crime scores, or credit scores. High-dimensional models arise when modelling sensor data at multiple time points or the individual variables that make up summary scores. Reasoning with a large number of variables can become unnecessarily complex without actuarial judgment; for example, it may not be necessary to include hundreds of rating factors as predictors if many of them are known to be related or unnecessary. This discussion proposes graph theory as a means of translating intuitive reasoning into mathematical properties. This is done via graphical models, which use graph theory to formulate probabilistic models (Lauritzen, 1996). The approach has been applied to medical expert systems (Franklin et al., 1989), natural language processing (Blei et al., 2003), image processing, bioinformatics, and other fields (Wainwright and Jordan, 2008).
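A minimal sketch of the graphical-model idea: the toy network below encodes the assumption that a claim depends directly on two rating factors, which are themselves independent. The variable names and every probability are invented for illustration and carry no actuarial meaning; the point is how the graph's factorisation shrinks the number of parameters needed.

```python
# Toy Bayesian network over three binary variables, illustrating how a
# graph encodes conditional-independence assumptions:
#
#   credit_risky  ->  claim  <-  driving_aggressive
#
# The joint factorises as P(c, d, a) = P(c) * P(d) * P(a | c, d),
# so only 1 + 1 + 4 numbers are needed instead of 2**3 - 1 = 7.
# All probabilities below are invented for illustration.

P_CREDIT_RISKY = 0.3
P_DRIVING_AGGRESSIVE = 0.2
P_CLAIM_GIVEN = {            # keyed by (credit_risky, driving_aggressive)
    (True, True): 0.5,
    (True, False): 0.2,
    (False, True): 0.3,
    (False, False): 0.05,
}

def joint(c, d, a):
    """P(credit=c, driving=d, claim=a) under the factorisation above."""
    pc = P_CREDIT_RISKY if c else 1 - P_CREDIT_RISKY
    pd = P_DRIVING_AGGRESSIVE if d else 1 - P_DRIVING_AGGRESSIVE
    pa = P_CLAIM_GIVEN[(c, d)] if a else 1 - P_CLAIM_GIVEN[(c, d)]
    return pc * pd * pa

def marginal_claim():
    """P(claim) by summing the joint over both parent variables."""
    return sum(joint(c, d, True) for c in (True, False) for d in (True, False))
```

With hundreds of rating factors the same principle applies: edges are drawn only where actuarial judgment says a direct dependence exists, and everything absent from the graph is a conditional-independence assumption that keeps the model tractable.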
You cannot learn to play the piano by going to concerts. A compass [will] point you True North from where you're standing, but it's got no advice about the swamps and deserts and chasms that you'll encounter along the way. If, in pursuit of your destination, you plunge ahead, heedless of obstacles, and achieve nothing more than to sink in a swamp... what's the use of knowing True North? The practice of surgery often forces unique, ad hoc decisions based on contextual intricacies of the moment, which are not typically captured in broad, top-down, or committee-approved guidelines. Surgical ethics are principled, of course, but also pragmatic. They are also replete with moral contradictions and uncertainties, and the introduction of novel technology into this environment can potentially increase those challenges. "The essential element that distinguishes an ethical problem from a tragic situation is the element of choice." Moreover, choosing between options often involves identifying factors by which those options are not exactly equal, and the method one uses to weigh those factors can draw upon a set of ethical frameworks that are themselves somewhat incongruous. At their core, artificial intelligence (AI) systems, and machine learning (ML) more specifically, are also designed to make choices, often by categorizing some input among a set of nominal categories. In the past, the choices these systems made could only be evaluated by their correctness: their accuracy in applying the same categorical labels that a human would to previously unseen inputs, such as whether or not an image contains a tumour.
Smart IT systems are now calculating claims costs and attributing fault for accidents without any human involvement, speeding up the resolution of claims. Technology is set to transform motor insurance in the next five to ten years, revolutionising both the claims process and repair. Artificial intelligence (AI) is enabling insurers to evaluate vehicle damage at the scene of a collision, without the need for a claims handler or loss adjuster. By analysing millions of photos of vehicle damage and cross-referencing them with actual repairs, programmers have been able to create algorithms that assess the scale of the damage and produce a full estimate, including recommended repair, paint and parts costs, and labour hours. The system can determine, for example, whether body panels can be repaired or need replacing, and in worst-case scenarios it ensures that total losses are not sent to bodyshops.
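A hedged sketch of the final decision step such a pipeline might take, once a damage class has been predicted for each panel. Every part name, cost, labour figure, and the 70% total-loss threshold below is an illustrative assumption, not any insurer's actual rule book; real systems would derive these from repair databases and policy terms.

```python
# Hypothetical sketch: turning per-panel damage predictions into a repair
# estimate and a total-loss flag. All part names, costs, labour figures,
# and the total-loss threshold are illustrative assumptions.

PANEL_COSTS = {  # panel -> (repair cost, replacement cost), assumed GBP
    "front_bumper": (120, 400),
    "door": (180, 650),
    "bonnet": (200, 700),
}
LABOUR_RATE = 45                      # assumed GBP per hour
LABOUR_HOURS = {"repair": 3, "replace": 2}

def build_estimate(damage, vehicle_value, total_loss_ratio=0.7):
    """damage: list of (panel, action) pairs, action 'repair' or 'replace'.

    Returns the estimate total and a flag marking the vehicle a total
    loss when repair cost exceeds the assumed fraction of its value,
    so the claim is settled rather than sent to a bodyshop.
    """
    parts = sum(PANEL_COSTS[panel][0 if action == "repair" else 1]
                for panel, action in damage)
    labour = sum(LABOUR_HOURS[action] * LABOUR_RATE for _, action in damage)
    total = parts + labour
    return {"total": total,
            "total_loss": total > total_loss_ratio * vehicle_value}

estimate = build_estimate([("door", "repair"), ("front_bumper", "replace")],
                          vehicle_value=6000)
```

In a production pipeline the (panel, action) pairs would come from the image model's predictions, and the threshold check is what keeps uneconomical repairs out of bodyshops.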