Machine Learning for Accurate Intraoperative Pediatric Middle Ear Effusion Diagnosis

#artificialintelligence

OBJECTIVES: Misdiagnosis of acute and chronic otitis media in children can result in significant consequences from either undertreatment or overtreatment. Our objective was to develop and train an artificial intelligence algorithm to accurately predict the presence of middle ear effusion in pediatric patients presenting to the operating room for myringotomy and tube placement. METHODS: We trained a neural network to classify images as “normal” (no effusion) or “abnormal” (effusion present) using tympanic membrane images from children taken to the operating room with the intent of performing myringotomy and possible tube placement for recurrent acute otitis media or otitis media with effusion. Model performance was evaluated on held-out cases and with fivefold cross-validation. RESULTS: The mean training time for the neural network model was 76.0 (SD ± 0.01) seconds. Our approach achieved a mean image classification accuracy of 83.8% (95% confidence interval [CI]: 82.7–84.8). In support of this classification accuracy, the model produced an area under the receiver operating characteristic curve of 0.93 (95% CI: 0.91–0.94) and an F1-score of 0.80 (95% CI: 0.77–0.82). CONCLUSIONS: Artificial intelligence–assisted diagnosis of acute or chronic otitis media in children may generate value for patients, families, and the health care system by improving point-of-care diagnostic accuracy. With a small training data set composed of intraoperative images obtained at the time of tympanostomy tube insertion, our neural network was accurate in predicting the presence of a middle ear effusion in pediatric ear cases. This diagnostic accuracy is considerably higher than the human-expert otoscopy-based diagnostic performance reported in previous studies.
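
As a rough illustration only (not the authors' model or data pipeline), the sketch below shows how the reported metrics (accuracy, AUROC, and F1) could be computed under fivefold cross-validation for a generic binary classifier; the scikit-learn MLP stand-in and the pre-extracted feature matrix are assumptions.

```python
# Hedged sketch: fivefold cross-validated accuracy, AUROC, and F1 for a binary
# effusion classifier. The MLP and feature matrix are placeholders, not the
# authors' neural network or tympanic membrane images.
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import accuracy_score, roc_auc_score, f1_score
from sklearn.neural_network import MLPClassifier  # stand-in for the image model

def evaluate(X, y, n_splits=5, seed=0):
    """X: (n_images, n_features) image features; y: 0 = normal, 1 = effusion."""
    accs, aucs, f1s = [], [], []
    for train_idx, test_idx in StratifiedKFold(n_splits, shuffle=True, random_state=seed).split(X, y):
        clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=seed)
        clf.fit(X[train_idx], y[train_idx])
        prob = clf.predict_proba(X[test_idx])[:, 1]
        pred = (prob > 0.5).astype(int)
        accs.append(accuracy_score(y[test_idx], pred))
        aucs.append(roc_auc_score(y[test_idx], prob))
        f1s.append(f1_score(y[test_idx], pred))
    return np.mean(accs), np.mean(aucs), np.mean(f1s)
```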


3D Spatial Recognition without Spatially Labeled 3D

arXiv.org Artificial Intelligence

We introduce WyPR, a Weakly-supervised framework for Point cloud Recognition, requiring only scene-level class tags as supervision. WyPR jointly addresses three core 3D recognition tasks: point-level semantic segmentation, 3D proposal generation, and 3D object detection, coupling their predictions through self- and cross-task consistency losses. We show that, in conjunction with standard multiple-instance learning objectives, WyPR can detect and segment objects in point cloud data without access to any spatial labels at training time. We demonstrate its efficacy on the ScanNet and S3DIS datasets, where it outperforms the prior state of the art on weakly-supervised segmentation by more than 6% mIoU. In addition, we set up the first benchmark for weakly-supervised 3D object detection on both datasets, where WyPR outperforms standard approaches and establishes strong baselines for future work.
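
To make the scene-level supervision concrete, here is a minimal sketch (assuming PyTorch) of a standard multiple-instance learning objective in which per-point logits are pooled into a scene-level prediction and trained against the scene tags; WyPR's actual self- and cross-task consistency losses are not reproduced here.

```python
# Hedged sketch of a MIL objective under scene-level tags; not WyPR's full loss.
import torch
import torch.nn.functional as F

def mil_scene_loss(point_logits: torch.Tensor, scene_tags: torch.Tensor) -> torch.Tensor:
    """point_logits: (num_points, num_classes); scene_tags: (num_classes,) multi-hot."""
    # Smooth max pooling (log-sum-exp) over points gives one score per class for the scene.
    scene_logits = torch.logsumexp(point_logits, dim=0)
    return F.binary_cross_entropy_with_logits(scene_logits, scene_tags.float())
```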


Negative Selection Algorithm Research and Applications in the last decade: A Review

arXiv.org Artificial Intelligence

The Negative Selection Algorithm (NSA) is one of the important methods in the field of Immunological Computation (also known as Artificial Immune Systems). Over the years, progress has been made that has turned NSA into an efficient approach for solving problems in different domains. This review takes that progress over the last decade into account and categorizes it by characteristics and performance. Our study shows that NSA's evolution can be grouped into four directions, highlighting the most notable NSA variations and their limitations in different application domains. We also present alternative approaches to NSA for comparison and analysis. The evidence indicates that NSA performs better for nonlinear representations than most of the other methods, and that it can outperform neural-based models in computation time. We summarize NSA's development and highlight challenges in NSA research in comparison with other similar models.
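
For readers unfamiliar with NSA, the following is a minimal sketch of the classic real-valued negative selection idea: random candidate detectors are kept only if they do not match any "self" sample, and the surviving detectors flag non-self inputs. The matching rule and radii are illustrative assumptions, not a specific published variant.

```python
# Hedged sketch of real-valued negative selection; radii are illustrative.
import numpy as np

def train_detectors(self_samples, n_detectors=100, self_radius=0.1, seed=0):
    """self_samples: (n, dim) array of normal data scaled to [0, 1]."""
    rng = np.random.default_rng(seed)
    dim = self_samples.shape[1]
    detectors = []
    while len(detectors) < n_detectors:
        candidate = rng.random(dim)
        # Keep the candidate only if it does NOT match any self sample.
        if np.min(np.linalg.norm(self_samples - candidate, axis=1)) > self_radius:
            detectors.append(candidate)
    return np.array(detectors)

def is_anomalous(x, detectors, detector_radius=0.1):
    """True if any detector matches x, i.e. x looks non-self."""
    return bool(np.min(np.linalg.norm(detectors - x, axis=1)) <= detector_radius)
```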


An efficient projection neural network for $\ell_1$-regularized logistic regression

arXiv.org Machine Learning

$\ell_1$ regularization has been used in logistic regression to circumvent overfitting and to use the estimated sparse coefficients for feature selection. However, the challenge with such regularization is that the $\ell_1$ norm is not differentiable, making standard algorithms for convex optimization inapplicable to this problem. This paper presents a simple projection neural network for $\ell_1$-regularized logistic regression. In contrast to many available solvers in the literature, the proposed neural network requires neither extra auxiliary variables nor any smooth approximation, and its complexity is almost identical to that of gradient descent for logistic regression without $\ell_1$ regularization, thanks to the projection operator. We also investigate the convergence of the proposed neural network using Lyapunov theory and show that it converges to a solution of the problem from any arbitrary initial value. The proposed neural solution significantly outperforms state-of-the-art methods with respect to execution time and is competitive in terms of accuracy and AUROC.
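
The role of the projection in handling the non-differentiable $\ell_1$ term can be illustrated with a related (but distinct) proximal-gradient sketch, in which a soft-thresholding step follows each gradient step on the logistic loss; this is not the paper's projection neural network, whose dynamics are defined differently.

```python
# Hedged sketch: proximal gradient (soft-thresholding) for l1-regularized
# logistic regression; illustrative of the non-smooth handling only.
import numpy as np

def soft_threshold(w, t):
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

def l1_logistic_regression(X, y, lam=0.1, lr=0.01, n_iter=1000):
    """X: (n, d) features; y in {0, 1}; returns a sparse weight vector."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ w))            # sigmoid predictions
        grad = X.T @ (p - y) / n                     # gradient of the logistic loss
        w = soft_threshold(w - lr * grad, lr * lam)  # proximal step for the l1 penalty
    return w
```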


Selective Probabilistic Classifier Based on Hypothesis Testing

arXiv.org Artificial Intelligence

In this paper, we propose a simple yet effective method to deal with violations of the Closed-World Assumption for a classifier. Previous works tend to apply a threshold either to the classification scores or to the loss function to reject inputs that violate the assumption. However, these methods cannot achieve the low False Positive Ratio (FPR) required in safety applications. The proposed method is a rejection option based on hypothesis testing with probabilistic networks. With probabilistic networks, it is possible to estimate a distribution of outcomes instead of a single output. By applying a Z-test to the mean and standard deviation for each class, the proposed method can estimate the statistical significance of the network's certainty and reject uncertain outputs. The proposed method was evaluated on different configurations of the COCO and CIFAR datasets, and its performance is compared with the Softmax Response, a known top-performing method. It is shown that the proposed method can achieve a broader range of operation and cover a lower FPR than the alternative.
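
A minimal sketch of this style of rejection rule, assuming the per-class distribution comes from repeated stochastic forward passes (e.g., Monte Carlo dropout); the exact test statistic below is an illustrative choice rather than the paper's formulation.

```python
# Hedged sketch: reject a prediction when the top class is not statistically
# separated from the runner-up, using means and stds over repeated passes.
import numpy as np
from scipy.stats import norm

def predict_or_reject(prob_samples: np.ndarray, alpha: float = 0.05):
    """prob_samples: (n_passes, n_classes) class probabilities from repeated passes."""
    mean = prob_samples.mean(axis=0)
    std = prob_samples.std(axis=0) + 1e-8
    order = np.argsort(mean)
    top, runner_up = order[-1], order[-2]
    # Z-statistic for the gap between the two highest class means.
    z = (mean[top] - mean[runner_up]) / np.sqrt(std[top] ** 2 + std[runner_up] ** 2)
    p_value = 1.0 - norm.cdf(z)
    return int(top) if p_value < alpha else None  # None signals rejection
```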


Click-Through Rate Prediction Using Graph Neural Networks and Online Learning

arXiv.org Artificial Intelligence

Recommendation systems have been studied extensively in the literature and are ubiquitous in online advertisement, shopping and e-commerce, query suggestions in search engines, and friend recommendation in social networks; restaurant, music, product, movie, news, and app recommendations are only a few of the applications of a recommender system. Click-Through-Rate (CTR) prediction is a special version of a recommender system in which the goal is to predict whether or not a user will click on a recommended item, and even a small percentage improvement in CTR prediction accuracy has been reported to add millions of dollars of revenue to the advertisement industry. A content-based recommendation approach takes into account the user's past behavior, i.e., the recommended products and the user's reactions to them, so a personalized model that recommends the right item to the right user at the right time is key. On the other hand, the so-called collaborative filtering approach incorporates the click history of users who are very similar to a particular user, thereby helping the recommender make a more confident prediction for that user by leveraging the wider knowledge of users who share their taste in a connected network of users. In this project, we build a CTR predictor using Graph Neural Networks complemented by an online learning algorithm that models such dynamic interactions. By framing the problem as a binary classification task, we evaluate this system both with offline models (GNN, Deep Factorization Machines), with a test AUC of 0.7417, and with the online learning model, with a test AUC of 0.7585, using a sub-sampled version of the public Criteo dataset consisting of 10,000 data points.
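
As a small, hedged illustration of the online-learning side of this framing (not the project's GNN or DeepFM models), a logistic model can be updated one click event at a time and scored with AUC:

```python
# Hedged sketch: per-event online logistic regression for CTR, evaluated with AUC.
import numpy as np
from sklearn.metrics import roc_auc_score

class OnlineLogisticCTR:
    def __init__(self, dim, lr=0.05):
        self.w = np.zeros(dim)
        self.lr = lr

    def predict(self, x):
        return 1.0 / (1.0 + np.exp(-x @ self.w))

    def update(self, x, clicked):
        # One SGD step on the log loss for a single (features, click) event.
        self.w -= self.lr * (self.predict(x) - clicked) * x

# Usage: for each streamed event, record predict(x) before calling update(x, y),
# then compute roc_auc_score(all_labels, all_predictions) over the stream.
```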


Accelerating Entrepreneurial Decision-Making Through Hybrid Intelligence

arXiv.org Artificial Intelligence

AI - Artificial Intelligence; AGI - Artificial General Intelligence; ANN - Artificial Neural Network; ANOVA - Analysis of Variance; ANT - Actor Network Theory; API - Application Programming Interface; APX - Amsterdam Power Exchange; AVE - Average Variance Extracted; BU - Business Unit; CART - Classification and Regression Tree; CBMV - Crowd-based Business Model Validation; CR - Composite Reliability; CT - Computed Tomography; CVC - Corporate Venture Capital; DR - Design Requirement; DP - Design Principle; DSR - Design Science Research; DSS - Decision Support System; EEX - European Energy Exchange; FsQCA - Fuzzy-Set Qualitative Comparative Analysis; GUI - Graphical User Interface; HI-DSS - Hybrid Intelligence Decision Support System; HIT - Human Intelligence Task; IoT - Internet of Things; IS - Information System; IT - Information Technology; MCC - Matthews Correlation Coefficient; ML - Machine Learning; OCT - Opportunity Creation Theory; OGEMA 2.0 - Open Gateway Energy Management 2.0; OS - Operating System; R&D - Research & Development; RE - Renewable Energies; RQ - Research Question; SVM - Support Vector Machine; SSD - Solid-State Drive; SDK - Software Development Kit; TCP/IP - Transmission Control Protocol/Internet Protocol; TCT - Transaction Cost Theory; UI - User Interface; VaR - Value at Risk; VC - Venture Capital; VPP - Virtual Power Plant


The Challenges and Opportunities of Human-Centered AI for Trustworthy Robots and Autonomous Systems

arXiv.org Artificial Intelligence

The trustworthiness of Robots and Autonomous Systems (RAS) has gained a prominent position on many research agendas moving towards fully autonomous systems. This research systematically explores, for the first time, the key facets of human-centered AI (HAI) for trustworthy RAS. In this article, five key properties of trustworthy RAS are first identified. RAS must be (i) safe in any uncertain and dynamic surrounding environment; (ii) secure, protecting themselves from cyber-threats; (iii) healthy, with fault tolerance; (iv) trusted and easy to use, to allow effective human-machine interaction (HMI); and (v) compliant with the law and ethical expectations. Then, the challenges in implementing trustworthy autonomous systems are analytically reviewed with respect to the five key properties, and the roles of AI technologies in ensuring the trustworthiness of RAS are explored with regard to safety, security, health, and HMI, while reflecting the requirements of ethics in the design of RAS. While applications of RAS have mainly focused on performance and productivity, the risks posed by advanced AI in RAS have not received sufficient scientific attention. Hence, a new acceptance model of RAS is provided as a framework for human-centered AI requirements and for implementing trustworthy RAS by design. This approach promotes human-level intelligence to augment human capacity, while focusing on contributions to humanity.


Biased Edge Dropout for Enhancing Fairness in Graph Representation Learning

arXiv.org Machine Learning

Graph representation learning has become a ubiquitous component in many scenarios, ranging from social network analysis to energy forecasting in smart grids. In several applications, ensuring the fairness of the node (or graph) representations with respect to some protected attributes is crucial for their correct deployment. Yet, fairness in graph deep learning remains under-explored, with few solutions available. In particular, the tendency of similar nodes to cluster together in several real-world graphs (i.e., homophily) can dramatically worsen the fairness of these procedures. In this paper, we propose a biased edge dropout algorithm (FairDrop) to counteract homophily and improve fairness in graph representation learning. FairDrop can be plugged easily into many existing algorithms, is efficient and adaptable, and can be combined with other fairness-inducing solutions. After describing the general algorithm, we demonstrate its application on two benchmark tasks: with a random-walk model for producing node embeddings, and with a graph convolutional network for link prediction. We prove that the proposed algorithm can successfully improve the fairness of all models with at most a small or negligible drop in accuracy, and that it compares favourably with existing state-of-the-art solutions. In an ablation study, we demonstrate that our algorithm can flexibly interpolate between biasing towards fairness and an unbiased edge dropout. Furthermore, to better evaluate the gains, we propose a new dyadic group definition to measure the bias of a link prediction task when paired with group-based fairness metrics. In particular, we extend the metric used to measure the bias in the node embeddings to take the graph structure into account.
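
A minimal sketch of a biased edge dropout in this spirit: edges whose endpoints share the sensitive attribute (homophilic edges) are dropped with higher probability than heterophilic ones. The drop probabilities and edge-list representation are illustrative assumptions, not FairDrop's exact parameterization.

```python
# Hedged sketch of homophily-biased edge dropout; not FairDrop's exact scheme.
import numpy as np

def biased_edge_dropout(edges, sensitive, p_homo=0.6, p_hetero=0.2, seed=0):
    """edges: (num_edges, 2) int array; sensitive: per-node attribute array."""
    rng = np.random.default_rng(seed)
    same_group = sensitive[edges[:, 0]] == sensitive[edges[:, 1]]
    drop_prob = np.where(same_group, p_homo, p_hetero)
    keep = rng.random(len(edges)) >= drop_prob
    return edges[keep]
```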


Explainable Artificial Intelligence Reveals Novel Insight into Tumor Microenvironment Conditions Linked with Better Prognosis in Patients with Breast Cancer

arXiv.org Artificial Intelligence

We investigated the data-driven relationship between features of the tumor microenvironment (TME) and overall and 5-year survival in triple-negative breast cancer (TNBC) and non-TNBC (NTNBC) patients using Explainable Artificial Intelligence (XAI) models. We used clinical information from patients with invasive breast carcinoma from The Cancer Genome Atlas and from two studies in cBioPortal, the PanCanAtlas project and the GDAC Firehose study. In this study, we used a normalized RNA sequencing data-driven cohort of 1,015 breast cancer patients, alive or deceased, from the UCSC Xena data set and performed integrated deconvolution with the EPIC method to estimate the percentages of seven different immune and stromal cell types from RNA sequencing data. Novel insights derived from our XAI model showed that CD4+ T cells and B cells are more critical than other TME features for enhanced prognosis in both TNBC and NTNBC patients. Our XAI model revealed the critical inflection points (i.e., threshold fractions) of CD4+ T cells and B cells above or below which 5-year survival rates improve. Subsequently, we ascertained the conditional probabilities of $\geq$ 5-year survival in both TNBC and NTNBC patients under specific conditions inferred from the inflection points. In particular, the XAI models revealed that a B-cell fraction exceeding 0.018 in the TME could ensure 100% 5-year survival for NTNBC patients. The findings from this research could lead to more accurate clinical predictions, enhanced immunotherapies, and the design of innovative strategies to reprogram the TME of breast cancer patients.
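
To make the threshold analysis concrete, here is a hedged sketch of estimating the conditional probability of at least 5-year survival for patients whose B-cell fraction exceeds an inflection point; the DataFrame and column names are illustrative placeholders, not the study's data.

```python
# Hedged sketch: conditional 5-year survival above a cell-fraction threshold.
import pandas as pd

def survival_given_threshold(df: pd.DataFrame, feature: str, threshold: float) -> float:
    """df needs a cell-fraction column `feature` and a boolean `survived_5yr` column."""
    subset = df[df[feature] > threshold]
    return subset["survived_5yr"].mean() if len(subset) else float("nan")

# e.g. survival_given_threshold(cohort, "b_cell_fraction", 0.018)
```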