Neural Information Processing Systems

After investigation, those losses are due to hyperparameter settings. Hence, for both versions, Screenkhorn is more competitive than Greenkhorn. The final version of the paper will include all suggested modifications.



Devising a solution to the problems of Cancer awareness in Telangana

Avhad, Priyanka, Kshirsagar, Vedanti, Ranjan, Urvi, Nakhua, Mahek

arXiv.org Artificial Intelligence

According to the data, the percentages of women screened for cervical, breast, and oral cancer in Telangana in 2020 were 3.3 percent, 0.3 percent, and 2.3 percent respectively. Although early detection is the only way to reduce morbidity and mortality, awareness of cervical and breast cancer signs, symptoms, and screening practices remains very low. We developed ML classification models to predict whether a person is susceptible to breast or cervical cancer based on demographic factors, and we devised a system that suggests the nearest hospitals or cancer treatment centres based on the user's location or address. In addition, a health card can be integrated to maintain the medical records of all individuals, and awareness drives and campaigns can be conducted. For the ML classification models, we used a decision tree classifier for cervical cancer susceptibility and a support vector classifier for breast cancer susceptibility. By devising this solution, we come one step closer to our goal of spreading cancer awareness, thereby decreasing cancer mortality and increasing cancer literacy among the people of Telangana.
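As a rough illustration of the kind of demographic classifier described above, the sketch below hand-builds a tiny decision-tree-style rule. The feature names and split thresholds are hypothetical, not the paper's actual survey schema or trained model (which fits a decision tree / SVC on real data).

```python
# Hypothetical decision-tree-style rule for cancer-susceptibility triage.
# Feature names and thresholds are illustrative only; the actual system
# trains decision tree / support vector classifiers on demographic data.

def susceptibility_rule(age, family_history, ever_screened):
    """Return 'high' or 'low' susceptibility from hand-built splits."""
    if family_history:                       # first split: family history
        return "high"
    if age >= 35 and not ever_screened:      # older and never screened
        return "high"
    return "low"

print(susceptibility_rule(age=40, family_history=False, ever_screened=False))  # -> high
print(susceptibility_rule(age=25, family_history=False, ever_screened=True))   # -> low
```

A trained tree learns such splits from data instead of hard-coding them, but the prediction path at inference time has exactly this if/else shape.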


Safe Screening Rules for Group SLOPE

Bao, Runxue, Lu, Quanchao, Zhang, Yanfu

arXiv.org Machine Learning

Variable selection is a challenging problem in high-dimensional sparse learning, especially when group structures exist. Group SLOPE performs well for the adaptive selection of groups of predictors. However, the block non-separable group effects in Group SLOPE make existing methods either invalid or inefficient. Consequently, Group SLOPE tends to incur significant computational costs and memory usage in practical high-dimensional scenarios. To overcome this issue, we introduce a safe screening rule tailored for the Group SLOPE model, which efficiently identifies inactive groups with zero coefficients by addressing the block non-separable group effects. By excluding these inactive groups during training, we achieve considerable gains in computational efficiency and memory usage. Importantly, the proposed screening rule can be seamlessly integrated into existing solvers for both batch and stochastic algorithms. Theoretically, we establish that our screening rule can be safely employed with existing optimization algorithms, ensuring the same results as the original approaches. Experimental results confirm that our method effectively detects inactive feature groups and significantly boosts computational efficiency without compromising accuracy.
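The flavor of such a rule can be sketched with a generic sphere-style safe test: if the dual optimum is known to lie within a ball around a feasible point, a group whose worst-case correlation over that ball stays below its threshold must have zero coefficients. The bound and per-group thresholds below are a simplified illustration, not the paper's exact rule for the sorted SLOPE weights.

```python
import numpy as np

def screen_inactive_groups(X, theta, groups, thresholds, radius):
    """Discard group g when an upper bound on ||X_g^T theta'||_2, over all
    dual points theta' within `radius` of the feasible point `theta`,
    stays below that group's threshold. Simplified illustration of safe
    screening under a non-separable grouped penalty."""
    inactive = []
    for g, cols in enumerate(groups):
        Xg = X[:, cols]
        # ||X_g^T theta'||_2 <= ||X_g^T theta||_2 + radius * ||X_g||_2
        bound = np.linalg.norm(Xg.T @ theta) + radius * np.linalg.norm(Xg, 2)
        if bound < thresholds[g]:
            inactive.append(g)
    return inactive

# Toy example: the second group's columns are uncorrelated with theta.
X = np.zeros((4, 4))
X[0, 0] = X[1, 1] = 1.0
theta = np.array([1.0, 0.0, 0.0, 0.0])
print(screen_inactive_groups(X, theta, [[0, 1], [2, 3]], [1.0, 1.0], 0.1))  # -> [1]
```

Screened groups can then simply be dropped from the design matrix before running the solver, which is where the computational and memory savings come from.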


Safe Screening Rules for Group OWL Models

Bao, Runxue, Lu, Quanchao, Zhang, Yanfu

arXiv.org Machine Learning

Group Ordered Weighted $L_{1}$-Norm (Group OWL) regularized models have emerged as a useful tool for high-dimensional sparse multi-task learning with correlated features. Proximal gradient methods are the standard approach to solving Group OWL models, but these models incur heavy computational costs and memory usage when the feature dimension is large. To address this challenge, we propose the first safe screening rule for Group OWL models, which tackles the structured non-separable penalty and quickly identifies inactive features whose coefficients are zero across all tasks. By removing these inactive features during training, we achieve substantial computational gains and memory savings. Moreover, the proposed screening rule can be directly integrated with existing solvers in both batch and stochastic settings. Theoretically, we prove that our screening rule is safe and can be applied to existing iterative optimization algorithms without changing their solutions. Our experimental results demonstrate that the screening rule effectively identifies inactive features and yields significant computational speedups without any loss of accuracy.
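To make the non-separability concrete, the sketch below implements a deliberately conservative candidate test: because OWL weights are assigned to features only after sorting, a feature cannot be tested against "its" weight in isolation, but the smallest weight is always a safe threshold. This is an illustrative simplification, not the paper's tighter rule.

```python
import numpy as np

def owl_screening_candidates(X, R, weights, radius):
    """X: (n, p) design; R: (n, T) multi-task residual at a feasible dual
    point; weights: OWL weights sorted in decreasing order; radius: dual
    ball radius. Feature j is a candidate for elimination across all T
    tasks when its bound falls below the smallest (loosest) OWL weight."""
    corr = np.linalg.norm(X.T @ R, axis=1)             # per feature, over tasks
    bound = corr + radius * np.linalg.norm(X, axis=0)  # enlarge by ball radius
    return np.where(bound < weights[-1])[0]

# Toy example: feature 2 is a zero column, so it is screened out.
X = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0]])
R = np.ones((3, 1))
print(owl_screening_candidates(X, R, np.array([2.0, 1.0, 0.5]), 0.1))  # -> [2]
```

A sharper rule would compare the sorted bounds jointly against the sorted weight sequence; the point of the sketch is only that any per-feature test must account for the sorting that couples all features.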


Amid concerns about Biden's mental acuity, experts reveal how cognitive tests work and what they reveal

FOX News

After President Biden's lackluster debate performance sparked renewed concerns about his mental acuity, both sides of the political spectrum have been clamoring for him to take a cognitive test. Biden has not seen a neurologist, but did undergo his annual physical exam in February, Dr. Kevin O'Connor, physician for the president, said in a July 8 statement from the White House. The doctor reiterated that Biden's physical exam did not reveal concerns about a neurological disorder. In a recent interview with George Stephanopoulos, Biden remained noncommittal about formal cognitive testing, noting, "I have a cognitive test every single day" -- meaning by performing his duties as president of the United States. Many Americans, however, have wanted greater transparency.


Learning Sparse Representations of High Dimensional Data on Large Scale Dictionaries

Neural Information Processing Systems

Learning sparse representations on data adaptive dictionaries is a state-of-the-art method for modeling data. But when the dictionary is large and the data dimension is high, it is a computationally challenging problem. We explore three aspects of the problem. First, we derive new, greatly improved screening tests that quickly identify codewords that are guaranteed to have zero weights. Second, we study the properties of random projections in the context of learning sparse representations. Finally, we develop a hierarchical framework that uses incremental random projections and screening to learn, in small stages, a hierarchically structured dictionary for sparse representations. Empirical results show that our framework can learn informative hierarchical sparse representations more efficiently.
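A basic sphere screening test in this spirit, for an $\ell_1$-regularized coding problem with unit-norm codewords, can be sketched as follows. The sphere below is the classic one obtained from the feasibility of $x/\lambda_{\max}$ in the dual; the paper's improved tests are sharper refinements of this idea.

```python
import numpy as np

def sphere_screening(D, x, lam):
    """Screen codewords for lasso-type sparse coding of x over dictionary D
    (unit-norm columns). The dual optimum is the projection of x/lam onto
    the dual feasible set, and x/lam_max is feasible, so the optimum lies
    within `radius` of x/lam. Codeword j is guaranteed a zero weight when
    the maximum of |d_j^T theta| over that sphere stays below 1."""
    lam_max = np.max(np.abs(D.T @ x))    # smallest lam with all-zero code
    center = x / lam
    radius = np.linalg.norm(x) * (1.0 / lam - 1.0 / lam_max)
    bound = np.abs(D.T @ center) + radius * np.linalg.norm(D, axis=0)
    return np.where(bound < 1.0)[0]      # indices guaranteed zero weight

# Toy example: the second codeword is orthogonal to x and gets screened.
D = np.array([[1.0, 0.0],
              [0.0, 1.0]])
x = np.array([1.0, 0.0])
print(sphere_screening(D, x, lam=0.9))  # -> [1]
```

The screened codewords can be removed from the dictionary before solving, which is what makes screening attractive when the dictionary is large and the data dimension is high.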



Evaluating large language models' ability to understand metaphor and sarcasm using a screening test for Asperger syndrome

Yakura, Hiromu

arXiv.org Artificial Intelligence

Metaphors and sarcasm are precious fruits of our highly evolved social communication skills. However, children with Asperger syndrome are known to have difficulty comprehending sarcasm, even when they possess verbal IQ sufficient for understanding metaphors. For this reason, a screening test that scores the ability to understand metaphor and sarcasm has been used to differentiate Asperger syndrome from other conditions that exhibit similar external behaviors (e.g., attention-deficit/hyperactivity disorder). This study uses that standardized test to examine the capability of recent large language models (LLMs) to understand nuanced human communication. The results revealed that, whereas the models' ability to comprehend metaphors improved as the number of model parameters increased, no corresponding improvement was observed for sarcasm understanding. This implies that an alternative approach is needed to imbue LLMs with the capacity to grasp sarcasm, which in humans has been associated with the amygdala, a cerebral region pivotal for emotional learning.
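The per-category scoring underlying such an evaluation can be sketched as below; the items, answer labels, and category split are invented placeholders, since the standardized test's actual items are not reproduced here.

```python
from collections import defaultdict

def category_accuracy(model_answers, gold_answers, categories):
    """Score responses separately per item category
    (here: 'metaphor' vs 'sarcasm' items)."""
    hits, totals = defaultdict(int), defaultdict(int)
    for pred, gold, cat in zip(model_answers, gold_answers, categories):
        totals[cat] += 1
        hits[cat] += int(pred == gold)
    return {cat: hits[cat] / totals[cat] for cat in totals}

# Invented toy items: two metaphor and two sarcasm questions.
cats = ["metaphor", "metaphor", "sarcasm", "sarcasm"]
gold = ["A", "B", "A", "B"]
pred = ["A", "B", "A", "A"]
print(category_accuracy(pred, gold, cats))  # metaphor 1.0, sarcasm 0.5
```

Reporting the two categories separately is what allows the divergence described above (metaphor scores rising with model size while sarcasm scores stay flat) to be observed at all.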