
Collaborating Authors

 Babic, Aleksandar


Rethinking Synthetic Data definitions: A privacy driven approach

arXiv.org Artificial Intelligence

Synthetic data is emerging as a cost-effective solution to meet the increasing data demands of AI development and can be generated either from existing knowledge or derived from real data. The traditional classification of synthetic data into hybrid, partially or fully synthetic datasets has limited value and does not reflect the ever-growing range of methods used to generate synthetic data. The characteristics of synthetic data are greatly shaped by the generation method and the data source, which in turn determine its practical applications. We suggest a different approach to grouping synthetic data types that better reflects privacy perspectives. This is a crucial step towards improved regulatory guidance for the generation and processing of synthetic data. This approach to classification provides flexibility for new advancements such as deep generative methods and offers a more practical framework for future applications.
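As a rough illustration of the two sources the abstract distinguishes, the sketch below contrasts knowledge-driven generation with generation fitted to real records; the column names, distributions and sizes are hypothetical assumptions, not the paper's method, and the point is only that the privacy implications differ by source.

# Illustrative sketch only: two ways of producing synthetic data.
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)

# (a) Generated from existing knowledge: values sampled from rules/priors,
#     never touching real records.
knowledge_based = pd.DataFrame({
    "age": rng.integers(18, 90, size=1000),          # plausible adult age range
    "systolic_bp": rng.normal(120, 15, size=1000),   # textbook population prior
})

# (b) Derived from real data: a generator is fitted to real records, so the
#     output inherits (and may leak) properties of those records.
real = pd.DataFrame({
    "age": rng.integers(18, 90, size=500),
    "systolic_bp": rng.normal(125, 18, size=500),
})
mean, cov = real.mean().values, real.cov().values
data_derived = pd.DataFrame(
    rng.multivariate_normal(mean, cov, size=1000), columns=real.columns
)

# (a) carries no direct link to individuals, while the privacy risk of (b)
# depends on how much the fitted generator memorises its source data.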


Artificial intelligence to improve clinical coding practice in Scandinavia: a crossover randomized controlled trial

arXiv.org Artificial Intelligence

Codes from the International Statistical Classification of Diseases and Related Health Problems, tenth revision (ICD-10) [1], play an important role in healthcare. All hospitals in Scandinavia record their activity by summarizing patient encounters into ICD-10 codes. Clinical coding directly affects how health institutions function on a daily basis because they are partially reimbursed based on the codes they report. The same codes are used to measure both volume and quality of care, thereby providing an important foundation of knowledge for decision makers at all levels in the healthcare service. Clinical coding is a highly complex and challenging task that requires a deep understanding of both medical terminology and intricate clinical documentation. Coders must accurately translate detailed patient records into standardized codes, navigating inherently complex medical language, which makes this task prone to errors and inconsistencies.
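For context, a minimal and purely hypothetical sketch of automated ICD-10 code suggestion from free-text notes is given below; the model choice, codes and note snippets are illustrative assumptions and do not describe the system evaluated in the trial.

# Hypothetical ICD-10 code suggestion from clinical free text (toy example).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: note snippets paired with ICD-10 codes (assumed labels).
notes = [
    "patient admitted with community acquired pneumonia, treated with antibiotics",
    "type 2 diabetes mellitus, poorly controlled, metformin adjusted",
    "acute myocardial infarction, PCI performed",
]
codes = ["J18.9", "E11.9", "I21.9"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(notes, codes)

new_note = "elevated HbA1c, started insulin for type 2 diabetes"
print(model.predict([new_note])[0])  # suggested code; a human coder still reviews it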


Implementing a Nordic-Baltic Federated Health Data Network: a case report

arXiv.org Artificial Intelligence

Background: Centralized collection and processing of healthcare data across national borders pose significant challenges, including privacy concerns, data heterogeneity and legal barriers. To address some of these challenges, we formed an interdisciplinary consortium to develop a federated health data network, comprising six institutions across five countries, to facilitate Nordic-Baltic cooperation on secondary use of health data. The objective of this report is to offer early insights into our experiences developing this network. Methods: We used a mixed-method approach, combining experimental design and implementation science, to evaluate the factors affecting the implementation of our network. Results: Technically, our experiments indicate that the network functions without significant performance degradation compared to centralized simulation. Conclusion: While the use of interdisciplinary approaches holds potential to solve challenges associated with establishing such collaborative networks, our findings turn the spotlight on an uncertain regulatory landscape playing catch-up and on significant operational costs.
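As a generic illustration of why a federated setup can approach centralized performance, the sketch below runs a FedAvg-style loop in which only model weights leave each simulated institution; the report does not specify this algorithm, and the number of nodes, model and data here are assumptions for illustration only.

# FedAvg-style sketch: updates, not patient-level data, cross institutional borders.
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """Plain gradient descent on a local linear model; data never leaves the node."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
# Six hypothetical institutions, each holding only its own simulated records.
nodes = []
for _ in range(6):
    X = rng.normal(size=(200, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=200)
    nodes.append((X, y))

global_w = np.zeros(2)
for _ in range(20):
    local_ws = [local_update(global_w, X, y) for X, y in nodes]
    # The server aggregates only the weights (here an unweighted mean).
    global_w = np.mean(local_ws, axis=0)

print(global_w)  # approaches true_w without pooling raw data centrally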


Can I trust my fake data -- A comprehensive quality assessment framework for synthetic tabular data in healthcare

arXiv.org Artificial Intelligence

Ensuring safe adoption of AI tools in healthcare hinges on access to sufficient data for training, testing and validation. In response to privacy concerns and regulatory requirements, the use of synthetic data (SD) has been suggested. Synthetic data is created by training a generator on real data to produce a dataset with similar statistical properties. Competing metrics with differing taxonomies for quality evaluation have been suggested, resulting in a complex landscape. Optimising quality entails balancing considerations that make the data fit for use, yet relevant dimensions are left out of existing frameworks. We performed a comprehensive literature review on the use of quality evaluation metrics for SD, scoped to tabular healthcare data and SD produced with deep generative methods. Based on this review and our team's collective experience, we developed a conceptual framework for quality assurance. Its applicability was benchmarked against a practical case from the Dutch National Cancer Registry. We present a conceptual framework for quality assurance of SD for AI applications in healthcare that aligns diverging taxonomies, expands the common quality dimensions to include Fairness and Carbon footprint, and proposes the stages necessary to support real-life applications. Building trust in synthetic data by increasing transparency and reducing safety risk will accelerate the development and uptake of trustworthy AI tools for the benefit of patients. Despite the growing emphasis on algorithmic fairness and carbon footprint, these metrics were scarce in the literature review. The overwhelming focus was on statistical similarity using distance metrics, while detection of sequential logic was rarely addressed. A consensus-backed framework that includes all relevant quality dimensions can provide assurance for safe and responsible real-life applications of SD.
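The statistical-similarity checks that dominated the reviewed literature can be illustrated with per-column distance metrics between a real and a synthetic table; the sketch below uses hypothetical columns and values and is not the proposed framework itself.

# Per-column statistical-similarity check between real and synthetic data (sketch).
import numpy as np
import pandas as pd
from scipy.stats import ks_2samp, wasserstein_distance

rng = np.random.default_rng(1)
real = pd.DataFrame({"age": rng.normal(62, 12, 2000),
                     "tumour_size_mm": rng.gamma(2.0, 9.0, 2000)})
synthetic = pd.DataFrame({"age": rng.normal(60, 14, 2000),
                          "tumour_size_mm": rng.gamma(2.1, 8.5, 2000)})

for col in real.columns:
    ks = ks_2samp(real[col], synthetic[col])
    wd = wasserstein_distance(real[col], synthetic[col])
    print(f"{col}: KS={ks.statistic:.3f}, Wasserstein={wd:.3f}")

# Such similarity scores say nothing about fairness, carbon footprint or
# sequential logic, which is why the framework adds those dimensions.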


Unpacking Human-AI Interaction in Safety-Critical Industries: A Systematic Literature Review

arXiv.org Artificial Intelligence

Ensuring quality human-AI interaction (HAII) in safety-critical industries is essential. Failure to do so can lead to catastrophic and deadly consequences. Despite this urgency, the limited research on HAII is fragmented and inconsistent. We present here a survey of that literature and recommendations for research best practices that will improve the field. We divided our investigation into the following research areas: (1) terms used to describe HAII, (2) primary roles of AI-enabled systems, (3) factors that influence HAII, and (4) how HAII is measured. Additionally, we described the capabilities and maturity of the AI-enabled systems used in safety-critical industries discussed in these articles. We found that no single term is used across the literature to describe HAII and some terms have multiple meanings. According to the reviewed literature, five factors influence HAII: user characteristics and background (e.g., user personality, perceptions), AI interface and features (e.g., interactive UI design), AI output (e.g., accuracy, actionable recommendations), explainability and interpretability (e.g., level of detail, user understanding), and usage of AI (e.g., heterogeneity of environments and user needs). HAII is most commonly measured with user-related subjective metrics (e.g., user perception, trust, and attitudes), and AI-assisted decision-making is the most common primary role of AI-enabled systems. Based on this review, we conclude that there are substantial research gaps in HAII. Researchers and developers need to codify HAII terminology, involve users throughout the AI lifecycle (especially during development), and tailor HAII in safety-critical industries to the users and environments.