The best new science fiction books of October 2025
Science fiction legend Ursula K. Le Guin is honoured with a new collection out this month, and sci-fi fans can also look forward to fiction from astronaut Chris Hadfield and award-winning authors Ken Liu and Mary Robinette Kowal.

Like many of you, no doubt, I count Ursula K. Le Guin among my favourite sci-fi writers. So I am really excited about a collection out this month that brings together the maps she would draw when starting a story, and also celebrates her brilliant and wise writing. Not least because we've just read one of her classic novels with the New Scientist Book Club: do come and join us and share your thoughts on it with fellow fans! The sci-fi out this month looks forward as well as back, though. Ken Liu brings us a thriller set in the near future, and I'm keen to read Megha Majumdar's tale of a flooded Kolkata and a desperate mother.
- Asia > India > West Bengal > Kolkata (0.25)
- Asia > Russia (0.15)
- North America > United States > Utah (0.05)
- (6 more...)
- Government (0.98)
- Health & Medicine > Therapeutic Area (0.49)
AI-based modular warning machine for risk identification in proximity healthcare
Razzetta, Chiara, Noei, Shahryar, Barbarossa, Federico, Spairani, Edoardo, Roascio, Monica, Barbi, Elisa, Ciacci, Giulia, Sommariva, Sara, Guastavino, Sabrina, Piana, Michele, Lenge, Matteo, Arnulfo, Gabriele, Magenes, Giovanni, Maranesi, Elvira, Amabili, Giulio, Massone, Anna Maria, Benvenuto, Federico, Jurman, Giuseppe, Sona, Diego, Campi, Cristina
"DHEAL-COM - Digital Health Solutions in Community Medicine" is a research and technology project funded by the Italian Department of Health for the development of digital solutions of interest in proximity healthcare. The activity within the DHEAL-COM framework allows scientists to gather a substantial amount of multi-modal data that can be interpreted by means of machine learning algorithms. The present study illustrates a general automated pipeline comprising numerous unsupervised and supervised methods that can ingest such data, provide predictive results, and facilitate model interpretation via feature identification.
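The abstract does not include code, but a minimal supervised sketch of such a pipeline (feature scaling, a classifier, and importance-based feature identification) might look like the following. This is an illustration under stated assumptions, not the project's actual pipeline; the data is synthetic and all names are made up.

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for multi-modal patient features:
# 100 samples, 5 features; only the first two actually drive the label
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

# Automated pipeline: normalization followed by a supervised model
pipe = Pipeline([
    ("scale", StandardScaler()),
    ("clf", RandomForestClassifier(n_estimators=50, random_state=0)),
])
pipe.fit(X, y)

# Feature identification: impurity-based importances from the fitted model
importances = pipe.named_steps["clf"].feature_importances_
```

Here the importance vector recovers that the informative features dominate, which is the kind of model interpretation the abstract alludes to.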
- North America > United States > California > Santa Clara County > Santa Clara (0.04)
- Europe > Italy > Trentino-Alto Adige/Südtirol > Trentino Province > Trento (0.04)
- Europe > Italy > Marche > Ancona Province > Ancona (0.04)
- Africa > Comoros > Grande Comore > Moroni (0.04)
- Health & Medicine > Therapeutic Area > Neurology (0.69)
- Health & Medicine > Diagnostic Medicine (0.68)
- Health & Medicine > Health Care Technology (0.67)
- Government > Regional Government > Europe Government > Italy Government (0.35)
Evaluation of Hate Speech Detection Using Large Language Models and Geographical Contextualization
Zahid, Anwar Hossain, Roy, Monoshi Kumar, Das, Swarna
The proliferation of hate speech on social media is a serious issue with far-reaching impacts on society: escalating violence, discrimination, and social fragmentation. Detecting hate speech is intrinsically multifaceted due to cultural, linguistic, and contextual complexities and adversarial manipulation. In this study, we systematically investigate the performance of LLMs on detecting hate speech across multilingual datasets and diverse geographic contexts. Our work presents a new evaluation framework along three dimensions: binary classification of hate speech, geography-aware contextual detection, and robustness to adversarially generated text. Using a dataset of 1,000 comments from five diverse regions, we evaluate three state-of-the-art LLMs: Llama2 (13b), Codellama (7b), and DeepSeekCoder (6.7b). Codellama had the best binary classification recall at 70.6% and an F1-score of 52.18%, whereas DeepSeekCoder had the best geographic sensitivity, correctly detecting 63 out of 265 locations. The tests for adversarial robustness also showed significant weaknesses; Llama2 misclassified 62.5% of manipulated samples. These results bring to light the trade-offs between accuracy, contextual understanding, and robustness in current LLMs. This work thus sets the stage for developing contextually aware, multilingual hate speech detection systems by underlining key strengths and limitations, offering actionable insights for future research and real-world applications.
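For readers less familiar with the reported metrics, recall and F1 for binary hate-speech detection follow directly from confusion counts. A minimal sketch with made-up gold labels and predictions (1 = hate speech, 0 = not; the numbers are illustrative, not from the paper):

```python
# Hypothetical gold labels and model predictions
gold = [1, 1, 1, 0, 0, 1, 0, 1, 0, 0]
pred = [1, 0, 1, 0, 1, 1, 0, 1, 1, 0]

# Confusion counts for the positive (hate speech) class
tp = sum(1 for g, p in zip(gold, pred) if g == 1 and p == 1)
fp = sum(1 for g, p in zip(gold, pred) if g == 0 and p == 1)
fn = sum(1 for g, p in zip(gold, pred) if g == 1 and p == 0)

recall = tp / (tp + fn)                 # share of true hate speech caught
precision = tp / (tp + fp)              # share of flagged items that are hate speech
f1 = 2 * precision * recall / (precision + recall)  # harmonic mean of the two
```

On this toy data the model catches 4 of 5 hateful comments (recall 0.8) but also flags 2 benign ones, which is why F1 sits below recall, the same asymmetry reported for Codellama above.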
- Asia (0.70)
- North America > United States (0.30)
- Europe (0.28)
Reviews: A Primal-Dual link between GANs and Autoencoders
Summary: The purpose of this article is to establish links between the two most popular generative models today, namely GANs and (W)AEs. Theorem 8 states an (in)equality linking the two criteria, which tends to explain the performance similarities between the models. An f-WAE model is introduced and thoroughly analyzed; it is linked to an f-divergence or a Wasserstein distance depending on the weighting used. Finally, the authors use the findings of Theorem 8 to derive generalization bounds on WAEs. Major Comments: - The paper is well written, related work is properly discussed, and notions are well introduced, making the submission self-contained although quite dense. The fact that Theorem 8 can be viewed as a generalization of Lemma 4 is particularly striking and attests to the soundness of the approach.
Export Reviews, Discussions, Author Feedback and Meta-Reviews
Thank you for the fruitful comments. We would like to rebut this criticism on the following two points. We would like to emphasize that this restriction is equivalent to assuming that the loss function is strongly convex. A huge body of theoretical work on convex empirical risk minimization has been devoted to problems with strongly convex loss functions. If the reviewers claim that the scope of our work is narrow, the same criticism should apply to those past works targeting strongly convex loss functions.
The authors prove that variational inference in LDA converges to the ground truth model, in polynomial time, for two different case studies with different underlying assumptions about the structure of the data. In this analysis, the authors employ "thresholded" EM updates which estimate the per-topic word distribution based on the subset of documents in which a given topic dominates. The proofs, which are provided in a 35-page supplement, require assumptions about the number of words in a document that are uniquely associated with each topic, the number of topics per document, and the number of documents in which a given word exclusively identifies a topic. I am not enough of a specialist to evaluate the provided proofs in detail, so I will restrict myself to relatively high-level comments. Empirically speaking, variational inference can and does get stuck in local maxima.
Reviews: Universal consistency and minimax rates for online Mondrian Forests
Summary: This paper proposes a modification of the Mondrian Forest, a variant of Random Forest that takes a majority vote of decision trees. The authors show that the modified algorithm has the consistency property, while the original algorithm does not. In particular, when the conditional probability function is Lipschitz, the proposed algorithm achieves the minimax error rate, for which the lower bound was previously known. Comments: The technical contribution is to refine the original version of the Mondrian Forest and prove its consistency. The theoretical results are nice and solid. The main idea comes from the original algorithm, so the originality of the paper is somewhat incremental.
Reviews: Trimmed Density Ratio Estimation
Summary: This paper proposes a "trimmed" estimator that robustly (to outliers) estimates the ratio of two densities, assuming an exponential family model. This robustness is important, as density ratios can inherently be very unstable when the denominator is small. The proposed model is based on an optimization problem, motivated by minimizing the KL divergence between the two densities in the ratio, and is made more computationally tractable by re-expressing it as an equivalent saddle-point/max-min formulation. Similar to the one-class SVM, this formulation explicitly discards a portion (determined by a tuning parameter) of "outlier" samples. The density-ratio estimator is shown to be consistent in two practical settings: one in which the data contains a small portion of explicit outliers, and another in which the estimand is intrinsically unstable.
Reviews: Preference Based Adaptation for Learning Objectives
Summary: The authors consider the problem of optimizing a linear combination of multiple objective functions, where these objective functions are typically surrogate loss functions for machine learning tasks. In the problem setting, the decision maker explores while exploiting the linear combination in a dueling bandit setting: in each time step, the decision maker tests two hypotheses generated from two linear combinations, and then receives feedback on whether the first or the second hypothesis is better. The main contribution of the paper is the proposal of online algorithms for this dueling bandit problem, where the preference between the two tested hypotheses is modeled by a binary logistic choice model. In order to avoid retraining the hypothesis for every different linear combination, the authors adapt the boosting algorithm, which focuses on optimizing a mixture of K different hypotheses, where each hypothesis stems from optimizing one surrogate function. Major Comment: I find the paper quite interesting in terms of the problem model and the analysis, and I am more inclined towards acceptance than rejection.
How to plot a box plot using the pandas Python library? - The Security Buddy
Using a box plot, one can see the spread and skewness of data. It is a standardized way of displaying the five-number summary of the data:

- The minimum
- The maximum
- The median
- The first quartile, or 25th percentile
- The third quartile, or 75th percentile

A box plot usually includes two parts. It includes a […]
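The excerpt is cut off before the code, but a minimal sketch of drawing a box plot with pandas could look like the following; the data, column names, and output file name are made up for illustration.

```python
import pandas as pd
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so the script runs headless
import matplotlib.pyplot as plt

# Made-up exam scores; any numeric columns work
df = pd.DataFrame({
    "math": [55, 62, 71, 71, 80, 85, 90, 94, 99],
    "english": [48, 59, 60, 67, 72, 75, 81, 88, 95],
})

# The five-number summary that each box visualizes
summary = df["math"].quantile([0.0, 0.25, 0.5, 0.75, 1.0])

# One box per numeric column
ax = df.plot.box()
ax.set_ylabel("score")
plt.savefig("boxplot.png")
```

`df.boxplot()` is an equivalent pandas entry point; both delegate to matplotlib, so the usual axis-styling calls apply.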
- Information Technology > Security & Privacy (1.00)
- Energy > Oil & Gas > Upstream (0.85)