
Loop2Net: Data-Driven Generation and Optimization of Airfoil CFD Meshes from Sparse Boundary Coordinates

Fan, Lushun, Xia, Yuqin, Li, Jun, Jenkins, Karl

arXiv.org Artificial Intelligence

In this study, an intelligent mesh-quality optimisation system based on a deep convolutional neural network architecture is proposed to achieve mesh generation and optimisation. The core of the study is the Loop2Net generator and its loss functions: the generator predicts a mesh from the given wing coordinates, and the model's performance is continuously optimised by two key loss functions during training. By disciplining the network with additional penalties, the goal of mesh generation is finally reached.
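
The abstract describes the training pattern but not the implementation; the sketch below is one plausible reading of it, assuming a hypothetical PyTorch generator (`MeshGenerator`) that maps sparse boundary coordinates to a dense mesh, trained with a composite loss in which a reconstruction term is disciplined by a penalty. The names, layer sizes, and the smoothness penalty are illustrative assumptions, not the paper's actual architecture or losses.

```python
import torch
import torch.nn as nn

class MeshGenerator(nn.Module):
    """Hypothetical CNN generator: sparse airfoil boundary coords -> dense mesh grid."""
    def __init__(self, n_points=128):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Linear(n_points * 2, 512), nn.ReLU(),
            nn.Linear(512, 128 * 8 * 8), nn.ReLU(),
        )
        # Upsample an 8x8 latent grid to a 64x64 mesh with 2 channels (x, y node positions).
        self.decode = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 2, 4, stride=2, padding=1),
        )

    def forward(self, coords):                      # coords: (B, n_points, 2)
        z = self.encode(coords.flatten(1))
        return self.decode(z.view(-1, 128, 8, 8))   # (B, 2, 64, 64)

def composite_loss(pred, target, penalty_weight=0.1):
    """Reconstruction loss plus a penalty that disciplines poor-quality meshes."""
    recon = nn.functional.mse_loss(pred, target)
    # Penalise abrupt jumps between neighbouring mesh nodes (a proxy for skewed cells).
    dx = (pred[:, :, :, 1:] - pred[:, :, :, :-1]).pow(2).mean()
    dy = (pred[:, :, 1:, :] - pred[:, :, :-1, :]).pow(2).mean()
    return recon + penalty_weight * (dx + dy)
```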


Accelerating Discovery in Natural Science Laboratories with AI and Robotics: Perspectives and Challenges from the 2024 IEEE ICRA Workshop, Yokohama, Japan

Cooper, Andrew I., Courtney, Patrick, Darvish, Kourosh, Eckhoff, Moritz, Fakhruldeen, Hatem, Gabrielli, Andrea, Garg, Animesh, Haddadin, Sami, Harada, Kanako, Hein, Jason, Hübner, Maria, Knobbe, Dennis, Pizzuto, Gabriella, Shkurti, Florian, Shrestha, Ruja, Thurow, Kerstin, Vescovi, Rafael, Vogel-Heuser, Birgit, Wolf, Ádám, Yoshikawa, Naruki, Zeng, Yan, Zhou, Zhengxue, Zwirnmann, Henning

arXiv.org Artificial Intelligence

Fundamental breakthroughs across many scientific disciplines are becoming increasingly rare (1). At the same time, challenges related to the reproducibility and scalability of experiments, especially in the natural sciences (2,3), remain significant obstacles. For years, automating scientific experiments has been viewed as the key to solving this problem. However, existing solutions are often rigid and complex, designed to address specific experimental tasks with little adaptability to protocol changes. With advancements in robotics and artificial intelligence, new possibilities are emerging to tackle this challenge in a more flexible and human-centric manner.


Large Multimodal Model based Standardisation of Pathology Reports with Confidence and their Prognostic Significance

Alzaid, Ethar, Pergola, Gabriele, Evans, Harriet, Snead, David, Minhas, Fayyaz

arXiv.org Artificial Intelligence

Pathology reports are rich in clinical and pathological detail but are often presented in free-text format. The unstructured nature of these reports presents a significant challenge, limiting the accessibility of their content. In this work, we present a practical approach based on large multimodal models (LMMs) for automatically extracting information from scanned images of pathology reports, with the goal of generating a standardised report that specifies the value of each field along with an estimated confidence in the accuracy of the extraction. The proposed approach overcomes a limitation of existing methods, which do not assign confidence scores to extracted fields, restricting their practical use. The framework uses two stages of prompting a Large Multimodal Model (LMM): one for information extraction and one for validation. It generalises to textual reports from multiple medical centres as well as scanned images of legacy pathology reports. We show that the estimated confidence is an effective indicator of the accuracy of the extracted information and can be used to select only accurately extracted fields. We also show the prognostic significance of structured and unstructured data from pathology reports and demonstrate that the automatically extracted field values carry significant prognostic value for patient stratification. The framework is available for evaluation via the URL: https://labieb.dcs.warwick.ac.uk/.
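
As a rough illustration of the two-stage prompting pattern the abstract describes, the sketch below separates extraction from validation and keeps only fields whose estimated confidence clears a threshold. The field list, prompt wording, threshold, and the `call_lmm` placeholder are all assumptions; the paper's actual prompts and model are not specified here.

```python
import json

CONFIDENCE_THRESHOLD = 0.8  # keep only fields the validation stage is confident about
FIELDS = ["tumour_type", "grade", "margin_status", "lymph_nodes_examined"]  # hypothetical

def call_lmm(prompt, image=None):
    """Placeholder for any large multimodal model API; returns the model's text reply."""
    raise NotImplementedError("plug in your LMM client here")

def extract_report(report_image):
    # Stage 1: information extraction from the scanned report.
    extraction_prompt = (
        "Read the scanned pathology report and return JSON with these fields: "
        + ", ".join(FIELDS)
    )
    extracted = json.loads(call_lmm(extraction_prompt, image=report_image))

    # Stage 2: validation -- ask the model to score its own extraction per field.
    validation_prompt = (
        "Given the report image and this extraction, return a confidence in [0, 1] "
        f"for each field as JSON: {json.dumps(extracted)}"
    )
    confidence = json.loads(call_lmm(validation_prompt, image=report_image))

    # Select only fields whose estimated confidence clears the threshold.
    return {f: v for f, v in extracted.items()
            if confidence.get(f, 0.0) >= CONFIDENCE_THRESHOLD}
```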


Exploring Federated Deep Learning for Standardising Naming Conventions in Radiotherapy Data

Haidar, Ali, Mouiee, Daniel Al, Aly, Farhannah, Thwaites, David, Holloway, Lois

arXiv.org Artificial Intelligence

Standardising structure volume names in radiotherapy (RT) data is necessary to enable data mining and analyses, especially across multi-institutional centres. This process is time- and resource-intensive, which highlights the need for new automated and efficient approaches to handle the task. Several machine learning-based methods have been proposed and evaluated to standardise nomenclature. However, no studies have considered that RT patient records are distributed across multiple data centres. This paper introduces a method that emulates real-world environments to establish standardised nomenclature by integrating decentralised real-time data and federated learning (FL). A multimodal deep artificial neural network is proposed to standardise RT data in federated settings. Three types of attributes were extracted from the structures to train the deep learning models: tabular, visual, and volumetric. Simulated experiments were carried out to train the models across several scenarios, including multiple data centres, input modalities, and aggregation strategies. The models were compared against models developed with single modalities in federated settings, as well as models trained in centralised settings. Categorical classification accuracy was calculated on hold-out samples to assess model performance. Our results highlight the need to fuse multiple modalities when training such models, with better performance reported for tabular-volumetric models. In addition, the federated models achieved accuracy comparable to models built in centralised settings, demonstrating the suitability of FL for the standardisation task. Additional ablation analyses showed that the total number of samples in the data centres and the number of data centres strongly affect the training process and should be carefully considered when building standardisation models.
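
The abstract does not name the aggregation strategies it compares, but a minimal sketch of the standard FedAvg baseline shows how model weights from multiple data centres could be combined, with each centre weighted by its sample count. The function and the toy weights below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def federated_average(centre_weights, centre_sizes):
    """FedAvg-style aggregation: average model weights across data centres,
    weighting each centre by the number of training samples it holds.

    centre_weights: one list of np.ndarray layer weights per data centre
    centre_sizes:   number of training samples held by each centre
    """
    total = float(sum(centre_sizes))
    n_layers = len(centre_weights[0])
    return [
        sum((size / total) * weights[layer]
            for weights, size in zip(centre_weights, centre_sizes))
        for layer in range(n_layers)
    ]

# One simulated round with three data centres holding unequal sample counts:
w_a = [np.ones((4, 4)), np.zeros(4)]
w_b = [2 * np.ones((4, 4)), np.ones(4)]
w_c = [3 * np.ones((4, 4)), 2 * np.ones(4)]
global_weights = federated_average([w_a, w_b, w_c], centre_sizes=[100, 300, 600])
# global_weights would be broadcast back to every centre before the next local round.
```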


Considering Fundamental Rights in the European Standardisation of Artificial Intelligence: Nonsense or Strategic Alliance?

Ho-Dac, Marion

arXiv.org Artificial Intelligence

However, these texts do not provide any guidelines that specify or detail the relationship between AI standards and fundamental rights, or its meaning and implications. This chapter aims to clarify this critical regulatory blind spot. The main issue tackled is whether the adoption of AI harmonised standards, based on the future AI Act, should take fundamental rights into account. In our view, the response is yes. The high risks posed by certain AI systems relate in particular to infringements of fundamental rights. Therefore, mitigating such risks involves fundamental rights considerations, and this is what future harmonised standards should reflect. At the same time, valid criticisms of the European standardisation process have to be addressed. Finally, the practical incorporation of fundamental rights considerations into the ongoing European standardisation of AI systems is discussed.


An adaptive standardisation model for Day-Ahead electricity price forecasting

Sebastián, Carlos, González-Guillén, Carlos E., Juan, Jesús

arXiv.org Artificial Intelligence

The study of Day-Ahead prices in the electricity market is one of the most popular problems in time series forecasting. Previous research has focused on employing increasingly complex learning algorithms to capture the sophisticated dynamics of the market. However, there is a threshold beyond which increased complexity fails to yield substantial improvements. In this work, we propose an alternative approach: an adaptive standardisation that mitigates the effects of the dataset shifts which commonly occur in the market. By doing so, learning algorithms can prioritise uncovering the true relationship between the target variable and the explanatory variables. We investigate four distinct markets, including two novel datasets previously unexplored in the literature. These datasets provide a more realistic representation of the current market context than conventional datasets can offer. The results demonstrate a significant improvement across all four markets using learning algorithms that are less complex yet widely accepted in the literature. This advancement opens up new lines of research in this field, highlighting the potential of adaptive transformations to enhance the performance of forecasting models.
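
The paper's exact transformation is not reproduced here, but a minimal sketch of one common form of adaptive standardisation, a rolling z-score whose statistics are computed from a trailing window only, conveys the idea: as the market's level and volatility drift, the transform drifts with them. The one-week window length and the pandas implementation are assumptions.

```python
import pandas as pd

def adaptive_standardise(prices: pd.Series, window: int = 168) -> pd.Series:
    """Rolling z-score: standardise each observation using only a trailing window,
    so the transform tracks shifts in the market's level and volatility.
    window=168 assumes hourly day-ahead prices, i.e. one week of history."""
    mean = prices.rolling(window, min_periods=window).mean().shift(1)  # exclude today
    std = prices.rolling(window, min_periods=window).std().shift(1)
    return (prices - mean) / std

# A forecaster is then trained on the standardised series; its predictions are mapped
# back to the price scale with the most recent rolling mean and std.
```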


The Digital World: Shaping global standards for Artificial Intelligence - Express Computer

#artificialintelligence

Despite being viewed as a technology of the future, artificial intelligence (AI) has already impacted our daily lives in several ways. Right from the time we wake up till we go to bed, AI is constantly a part of our lives in forms like voice assistants, online banking, OTT, and face IDs, among others. A number of standards covering significant AI issues are now being developed by the ISO/IEC committee for artificial intelligence, including ISO/IEC DIS 42001 (Information technology -- Artificial intelligence -- Management system). The ISO/IEC 42001 standard, which is being developed by 50 countries, will be essential for improving AI governance and accountability globally. ISO/IEC standardisation brings together the opinions of all relevant stakeholder groups, including SMEs, academia, civil society, and many more.


UK launches new AI Standards Hub for the development of AI best practices

#artificialintelligence

In January 2022, DLA Piper reported on an announcement of a new initiative, as part of the UK's National AI Strategy, to shape the way organisations and regulators develop technical standards for artificial intelligence ("AI"). The initiative, the AI Standards Hub ("Hub"), was highlighted as a collaborative effort between the Alan Turing Institute, the British Standards Institution, and the National Physical Laboratory, in partnership with the UK Government, to lead the way in developing standards that could be used across all sectors and jurisdictions. On 12 October, in their latest update, the Alan Turing Institute announced that the hard work of the collaborators was finally complete and that the Hub was ready for interaction. While still early in its use, the Hub already contains an array of resources that will allow its users to understand and help shape the role of standards in the development of AI and best practices. The primary goal of the Hub is to advance trustworthy and responsible AI through a focus on standards that can be used as part of governance and innovation tools and mechanisms.


Scrutinising AI requires holistic, end-to-end system audits

#artificialintelligence

Organisations must conduct end-to-end audits that consider both the social and technical aspects of artificial intelligence (AI) to fully understand the impacts of any given system, but a lack of understanding around how to conduct holistic audits, and of the limitations of the process, is holding back progress, say algorithmic auditing experts. At the inaugural International Algorithmic Auditing Conference, hosted in Barcelona on 8 November by algorithmic auditing firm Eticas, experts had a wide-ranging discussion on what a "socio-technical" audit for AI should entail, as well as the various challenges associated with the process. Attended by representatives from industry, academia and the third sector, the conference aimed to create a shared forum for experts to discuss developments in the field and help establish a roadmap for how organisations can manage their AI systems responsibly. Those involved in this first-of-its-kind gathering will go on to Brussels to meet with European Union (EU) officials and other representatives from digital rights organisations, so they can share their collective thinking on how AI audits can and should be regulated. Gemma Galdon-Clavell, conference chair and director of Eticas, said: "Technical systems, when they're based on personal data, are not just technical, they are socio-technical, because the data comes from social processes."


Patenting the AI pipeline: intellectual property for AI before standardisation

#artificialintelligence

Over the past few years, and after decades as little more than a mathematical curiosity, useful industrial applications of AI have become commonplace. AI is now recognised as one of the primary drivers of computing development. In 2018, Yoshua Bengio, Geoffrey Hinton and Yann LeCun, the 'godfathers of AI', received the Turing Award, computing's highest honour, for their foundational work on deep learning. The International Data Corporation forecasts that worldwide revenues for the AI market will grow to nearly $330 billion in 2021 and exceed $550 billion by 2024 (IDC Semiannual AI Tracker, January 2021). Driven and enabled by the extraordinary growth of data globally, this surge in the AI industry has also spurred a flood of AI-related patenting.