
Artificial Intelligence -- Application in Life Sciences and Beyond. The Upper Rhine Artificial Intelligence Symposium UR-AI 2021

arXiv.org Artificial Intelligence

The TriRhenaTech alliance presents the accepted papers of the 'Upper-Rhine Artificial Intelligence Symposium' held on October 27th, 2021 in Kaiserslautern, Germany. Topics of the conference include applications of Artificial Intelligence in life sciences, intelligent systems, industry 4.0, mobility, and others. The TriRhenaTech alliance is a network of universities in the Upper-Rhine Trinational Metropolitan Region comprising the German universities of applied sciences in Furtwangen, Kaiserslautern, Karlsruhe, Offenburg and Trier, the Baden-Wuerttemberg Cooperative State University Loerrach, the French university network Alsace Tech (comprised of 14 'grandes écoles' in the fields of engineering, architecture and management) and the University of Applied Sciences and Arts Northwestern Switzerland. The alliance's common goal is to reinforce the transfer of knowledge, research, and technology, as well as the cross-border mobility of students.


Artificial Intelligence: From Research to Application; the Upper-Rhine Artificial Intelligence Symposium (UR-AI 2019)

arXiv.org Artificial Intelligence

The TriRhenaTech alliance universities and their partners presented their competences in the field of artificial intelligence and their cross-border cooperations with industry at the tri-national conference 'Artificial Intelligence: From Research to Application' on March 13th, 2019 in Offenburg. The TriRhenaTech alliance is a network of universities in the Upper Rhine Trinational Metropolitan Region comprising the German universities of applied sciences in Furtwangen, Kaiserslautern, Karlsruhe, and Offenburg, the Baden-Wuerttemberg Cooperative State University Loerrach, the French university network Alsace Tech (comprised of 14 'grandes écoles' in the fields of engineering, architecture and management) and the University of Applied Sciences and Arts Northwestern Switzerland. The alliance's common goal is to reinforce the transfer of knowledge, research, and technology, as well as the cross-border mobility of students.


Where is my hand? Deep hand segmentation for visual self-recognition in humanoid robots

arXiv.org Artificial Intelligence

The ability to distinguish between the self and the background is of paramount importance for robotic tasks. Hands, as the end effectors of a robotic system that most often come into contact with other elements of the environment, must be perceived and tracked with precision to execute the intended tasks with dexterity and without colliding with obstacles. They are fundamental for several applications, from human-robot interaction to object manipulation. Modern humanoid robots are characterized by a high number of degrees of freedom, which makes their forward kinematic models very sensitive to uncertainty. Vision sensing can therefore be the only viable way to endow these robots with a good perception of the self, allowing them to localize their body parts with precision. In this paper, we propose the use of a Convolutional Neural Network (CNN) to segment the robot hand from an image in an egocentric view. CNNs are known to require a huge amount of training data. To overcome the challenge of labeling real-world images, we propose the use of simulated datasets exploiting domain randomization techniques. We fine-tuned the Mask R-CNN network for the specific task of segmenting the hand of the humanoid robot Vizzy. We focus our attention on developing a methodology that requires small amounts of data to achieve reasonable performance, while giving detailed insight into how to properly generate variability in the training dataset. Moreover, we analyze the fine-tuning process within the complex model of Mask R-CNN, understanding which weights should be transferred to the new task of segmenting robot hands. Our final model was trained solely on synthetic images and achieves an average IoU of 82% on synthetic validation data and 56.3% on real test data. These results were achieved with only 1000 training images and 3 hours of training time using a single GPU.
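The abstract describes fine-tuning Mask R-CNN for a binary background/hand task while deciding which weights to transfer. As a rough, hypothetical sketch of that kind of setup (not the authors' actual code; the frozen backbone, class count, and optimizer settings below are illustrative assumptions), one could adapt torchvision's Mask R-CNN like this:

```python
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

# Load Mask R-CNN pre-trained on COCO; we adapt it to two classes:
# background vs. robot hand.
model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
num_classes = 2

# Replace the box-classification head for the new class count.
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

# Replace the mask head likewise (256 hidden channels is an assumption).
in_features_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels
model.roi_heads.mask_predictor = MaskRCNNPredictor(in_features_mask, 256,
                                                   num_classes)

# Freeze the backbone so only the heads adapt to the synthetic data;
# unfreezing more layers is the knob the paper's transfer analysis turns.
for p in model.backbone.parameters():
    p.requires_grad = False

params = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.SGD(params, lr=0.005, momentum=0.9)
```

In this spirit, unfreezing different parts of the backbone and heads corresponds to the paper's question of which weights are worth transferring to the robot-hand task.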


Action in Mind: A Neural Network Approach to Action Recognition and Segmentation

arXiv.org Artificial Intelligence

Recognizing and categorizing human actions is an important task with applications in various fields such as human-robot interaction, video analysis, surveillance, video retrieval, health care, and the entertainment industry. This thesis presents a novel computational approach to human action recognition through different implementations of multi-layer architectures based on artificial neural networks. Each system-level development is designed to solve a different aspect of the action recognition problem, including online real-time processing, action segmentation, and the involvement of objects. The analysis of the experimental results is presented in six articles. The proposed action recognition architecture is composed of several processing layers: a preprocessing layer, an ordered vector representation layer, and three layers of neural networks. It utilizes self-organizing neural networks such as Kohonen feature maps and growing grids as the main neural network layers. The architecture thus presents a biologically plausible approach with features such as topographic organization of the neurons, lateral interactions, semi-supervised learning, and the ability to represent a high-dimensional input space in lower-dimensional maps. At each level of development, the system is trained with input data consisting of consecutive 3D body postures and tested with generalized input data that it has never encountered before. The experimental results of the different system-level developments show that the system performs well, recognizing human actions with quite high accuracy.
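Since the architecture centers on self-organizing maps over consecutive 3D postures, a minimal sketch may help fix ideas. The grid size, input dimensionality, and training schedule below are hypothetical, and the thesis stacks several such layers plus a supervised classifier; this only shows how one Kohonen layer maps posture vectors to map coordinates:

```python
import numpy as np

# Minimal Kohonen self-organizing map (SOM); sizes and schedule are
# illustrative assumptions, not the thesis's actual settings.
rng = np.random.default_rng(0)

def best_matching_unit(weights, x):
    # Grid coordinates of the node whose weight vector is closest to x.
    d = np.linalg.norm(weights - x, axis=2)
    return np.unravel_index(np.argmin(d), d.shape)

def train_som(weights, data, epochs=20, lr0=0.5, sigma0=3.0):
    gh, gw, _ = weights.shape
    ii, jj = np.meshgrid(np.arange(gh), np.arange(gw), indexing="ij")
    for t in range(epochs):
        lr = lr0 * np.exp(-t / epochs)        # decaying learning rate
        sigma = sigma0 * np.exp(-t / epochs)  # shrinking neighbourhood
        for x in rng.permutation(data):
            bi, bj = best_matching_unit(weights, x)
            # Gaussian neighbourhood around the winner (lateral interaction).
            h = np.exp(-((ii - bi) ** 2 + (jj - bj) ** 2) / (2 * sigma**2))
            weights += lr * h[..., None] * (x - weights)  # in-place update

dim = 45  # e.g. 15 joints x 3D coordinates per posture frame (hypothetical)
weights = rng.normal(size=(10, 10, dim))
postures = rng.normal(size=(200, dim))        # stand-in for real mocap data
train_som(weights, postures)

# A sequence of postures becomes a trajectory of map coordinates, i.e. the
# kind of ordered vector representation a next layer could consume.
trajectory = [best_matching_unit(weights, p) for p in postures[:10]]
```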


Artificial Intelligence in Surgery

arXiv.org Artificial Intelligence

Artificial Intelligence (AI) is gradually changing the practice of surgery with the advanced technological development of imaging, navigation and robotic intervention. In this article, the recent successful and influential applications of AI in surgery are reviewed, from preoperative planning and intra-operative guidance to the integration of surgical robots. We end by summarizing the current state, emerging trends, and major challenges in the future development of AI in surgery. Advances in surgery have made a significant impact on the management of both acute and chronic diseases, prolonging life and continuously extending the boundary of survival. These advances are underpinned by continuing technological developments in diagnosis, imaging, and surgical instrumentation. Complex surgical navigation and planning are made possible through the use of both pre- and intra-operative imaging techniques such as ultrasound, Computed Tomography (CT), and Magnetic Resonance Imaging (MRI). Many terminal illnesses have been transformed into clinically manageable chronic lifelong conditions, and surgery is increasingly focused on the system-level impact on patients, avoiding isolated surgical treatment or anatomical alteration and carefully considering the metabolic, haemodynamic and neurohormonal consequences that can influence quality of life. Among recent advances in medicine, AI has played an important role in clinical decision support since the early years of the MYCIN system [5]. AI is now increasingly used for risk stratification, genomics, imaging and diagnosis, precision medicine, and drug discovery. The introduction of AI in surgery is more recent, with strong roots in imaging and navigation; early techniques focused on feature detection and computer-assisted intervention for both preoperative planning and intra-operative guidance. Over the years, supervised algorithms such as active shape models, atlas-based methods and statistical classifiers have been developed [1]. With the recent success of AlexNet [6], deep learning methods, especially Deep Convolutional Neural Networks (DCNNs), in which multiple convolutional layers are cascaded, have enabled automatically learned, data-driven descriptors, rather than ad hoc handcrafted features, to be used for image understanding with improved robustness and generalizability.
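Since the review closes by contrasting learned DCNN descriptors with handcrafted features, a toy example may make the idea concrete. The following sketch is purely illustrative (the layer sizes, single-channel input, and two-class head are assumptions, not taken from any cited system): cascaded convolutional layers produce a learned descriptor that feeds a classifier.

```python
import torch
import torch.nn as nn

# Toy DCNN of the kind the review contrasts with handcrafted descriptors.
# All sizes are arbitrary illustrative choices.
class TinyDCNN(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(  # cascaded convolutional layers
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x):
        f = self.features(x).flatten(1)  # learned, data-driven descriptor
        return self.classifier(f)

# e.g. a single-channel 128x128 image patch (hypothetical input shape)
logits = TinyDCNN()(torch.randn(1, 1, 128, 128))
```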