Completion of partial structures using Patterson maps with the CrysFormer machine learning model
Pan, Tom, Dramko, Evan, Miller, Mitchell D., Kyrillidis, Anastasios, Phillips, George N. Jr
Protein structure determination has long been one of the primary challenges of structural biology, to which deep machine learning (ML)-based approaches have increasingly been applied. However, these ML models generally do not directly incorporate experimental measurements, such as X-ray crystallographic diffraction data. To this end, we explore an approach that more tightly couples traditional crystallographic and recent ML-based methods, by training a hybrid 3-d vision transformer and convolutional network on inputs from both domains. We make use of two distinct input constructs: Patterson maps, which are directly obtainable from crystallographic data, and "partial structure" template maps derived from predicted structures deposited in the AlphaFold Protein Structure Database with subsequently omitted residues. With these, we predict electron density maps that are then post-processed into atomic models through standard crystallographic refinement processes. Introducing an initial dataset of small protein fragments taken from Protein Data Bank entries and placed in hypothetical crystal settings, we demonstrate that our method is effective at improving the phases of the crystallographic structure factors, completing the regions missing from partial structure templates, and improving the agreement of the electron density maps with the ground-truth atomic structures. This work has been accepted in Acta Crystallographica Section D.
- North America > United States > Texas > Harris County > Houston (0.05)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.04)
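The Patterson-map input the abstract describes is computable directly from measured intensities, with no phase information. A minimal numpy sketch (illustrative toy data, not the paper's code) of that relationship:

```python
import numpy as np

# A Patterson map is the inverse Fourier transform of the squared
# structure-factor amplitudes |F(h)|^2, i.e. the autocorrelation of the
# electron density, so it is obtainable from diffraction intensities alone.

# Toy "crystal": five point atoms on a 32^3 unit-cell grid (hypothetical positions).
n = 32
density = np.zeros((n, n, n))
for x, y, z in [(5, 8, 12), (20, 6, 9), (11, 25, 3), (7, 14, 28), (22, 19, 17)]:
    density[x, y, z] = 1.0

F = np.fft.fftn(density)                        # structure factors (complex, phased)
intensities = np.abs(F) ** 2                    # what the experiment actually measures
patterson = np.real(np.fft.ifftn(intensities))  # phase-free Patterson map

# As an autocorrelation, the map's origin peak equals the sum of squared
# densities; here that is the number of atoms.
print(round(patterson[0, 0, 0], 6))  # -> 5.0
```

The off-origin peaks of such a map encode interatomic difference vectors, which is what makes the construct informative for a model that must recover the phased density.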
Plug-and-Play Dramaturge: A Divide-and-Conquer Approach for Iterative Narrative Script Refinement via Collaborative LLM Agents
Xie, Wenda, Guo, Chao, Wang, Yanqing, Jing, Junle, Lv, Yisheng, Wang, Fei-Yue
Although LLMs have been widely adopted for creative content generation, a single-pass process often struggles to produce high-quality long narratives. How to effectively revise and improve long narrative scripts like scriptwriters do remains a significant challenge, as it demands a comprehensive understanding of the entire context to identify global structural issues and local detailed flaws, as well as coordinating revisions at multiple granularities and locations. Direct modifications by LLMs typically introduce inconsistencies between local edits and the overall narrative requirements. To address these issues, we propose Dramaturge, a task- and feature-oriented divide-and-conquer approach powered by hierarchical multiple LLM agents. It consists of a Global Review stage to grasp the overall storyline and structural issues, a Scene-level Review stage to pinpoint detailed scene and sentence flaws, and a Hierarchical Coordinated Revision stage that coordinates and integrates structural and detailed improvements throughout the script. The top-down task flow ensures that high-level strategies guide local modifications, maintaining contextual consistency. The review and revision workflow follows a coarse-to-fine iterative process, continuing through multiple rounds until no further substantive improvements can be made. Comprehensive experiments show that Dramaturge significantly outperforms all baselines in terms of script-level overall quality and scene-level details. Our approach is plug-and-play and can be easily integrated into existing methods to improve the generated scripts.
- Research Report (0.81)
- Workflow (0.54)
- Leisure & Entertainment (1.00)
- Media > Film (0.46)
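The three-stage, coarse-to-fine loop the abstract describes can be sketched as a plain orchestration skeleton. All agent functions below are stubs with hypothetical names and trivial logic; in a real system each would be backed by an LLM call:

```python
# Sketch of the iterative pipeline: Global Review -> Scene-level Review ->
# Hierarchical Coordinated Revision, repeated until no substantive change.

def global_review(script):
    # Stub: return high-level structural notes (an LLM agent in practice).
    return [] if "(revised)" in script else ["tighten act 2"]

def scene_review(script, global_notes):
    # Stub: scene/sentence-level flaws, conditioned on the global notes so
    # local findings stay aligned with the overall strategy.
    return [note + ": scene 3 drags" for note in global_notes]

def coordinated_revision(script, global_notes, scene_notes):
    # Stub: apply edits top-down so local changes respect global structure.
    if scene_notes:
        return script.replace("act 2", "act 2 (revised)")
    return script

def dramaturge_loop(script, max_rounds=5):
    for _ in range(max_rounds):
        g = global_review(script)
        s = scene_review(script, g)
        revised = coordinated_revision(script, g, s)
        if revised == script:   # no further substantive improvement -> stop
            break
        script = revised
    return script

final = dramaturge_loop("opening ... act 2 ... finale")
print(final)  # -> opening ... act 2 (revised) ... finale
```

The convergence check on `revised == script` is a stand-in for the paper's "no further substantive improvements" stopping criterion, which an LLM judge would decide in practice.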
The reviewer seems to have clearly understood the approach and our intended contribution. We feel that the reviewer's objections mostly involve technical issues that we did not explain or justify clearly enough, but which do not undermine the novelty or soundness of the basic results. We apologize for not explaining and justifying these points more carefully, and feel they will be straightforward to address in the revision. We thank the reviewer for raising them.
How Signal's Meredith Whittaker Remembers SignalGate: 'No Fucking Way'
The Signal Foundation president recalls where she was when she heard Trump cabinet officials had added a journalist to a highly sensitive group chat. In March of this year, Meredith Whittaker was at her kitchen table in Paris when Signal, the encrypted messaging service she runs, suddenly became an international headline. A colleague sent their group chat the story ricocheting across the globe: "The Trump Administration Accidentally Texted Me Its War Plans." Of course, you know the rest: In the piece, The Atlantic's editor in chief, Jeffrey Goldberg, detailed how he'd been added to a Signal chat about an upcoming military operation in Yemen. Over the following days and weeks, the incident would become known as "SignalGate"--and created a legitimate risk that the fallout would cause people to question Signal's security, instead of pointing their fingers at the profoundly dubious op-sec of senior-level Trump officials. In fact, Signal's user numbers grew by leaps and bounds, both in the US and around the world. It's growth that, Whittaker thinks, is coming at a time when "people are feeling in a much deeper, much more personal way why privacy might be important." On this week's episode, I talked to Whittaker, who also cofounded the AI Now Institute, about the aftermath of SignalGate, the trajectory of artificial intelligence, and the tech industry's current relationship with politics. Nice to see you, Katie. Nice to see you, too. Brace yourself, we always start these conversations with a little warmup, so I'm going to ask you some very fast questions. I knew you were gonna say that. What's the weirdest AI application you've ever seen? A chatbot that pretends to be your friend.
- Asia > Middle East > Yemen (0.24)
- South America (0.14)
- North America > United States > California (0.04)
- (4 more...)
- Information Technology > Security & Privacy (1.00)
- Government (1.00)
- Information Technology > Services (0.89)
- Information Technology > Communications > Mobile (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.48)
Accelerating Atomic Fine Structure Determination with Graph Reinforcement Learning
Ding, M., Darvariu, V. -A., Ryabtsev, A. N., Hawes, N., Pickering, J. C.
Atomic data determined by analysis of observed atomic spectra are essential for plasma diagnostics. For each low-ionisation open d- and f-subshell atomic species, around $10^3$ fine structure level energies can be determined through years of analysis of $10^4$ observable spectral lines. We propose the automation of this task by casting the analysis procedure as a Markov decision process and solving it by graph reinforcement learning using reward functions learned on historical human decisions. In our evaluations on existing spectral line lists and theoretical calculations for Co II and Nd II-III, hundreds of level energies were computed within hours, agreeing with published values in 95% of cases for Co II and 54-87% for Nd II-III. As the current efficiency in atomic fine structure determination struggles to meet growing atomic data demands from astronomy and fusion science, our new artificial intelligence approach sets the stage for closing this gap.
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.14)
- North America > United States > Maryland > Montgomery County > Gaithersburg (0.04)
- North America > United States > District of Columbia > Washington (0.04)
- (3 more...)
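Underlying the analysis task is the Ritz combination principle: each assigned line's wavenumber equals the energy difference of its upper and lower levels, so level energies follow from the assignments by least squares. A hedged numpy sketch with invented level labels and line data (the paper's RL agent addresses the hard part, choosing the assignments; the solve below is the routine step):

```python
import numpy as np

# Ritz principle: observed wavenumber = E_upper - E_lower. With assignments
# fixed, level energies come from a linear least-squares fit, the ground
# level pinned at 0. Labels and wavenumbers here are hypothetical.

levels = ["ground", "a", "b", "c"]

# Assigned lines: (upper level, lower level, observed wavenumber / cm^-1).
lines = [("a", "ground", 1000.2), ("b", "ground", 2499.8),
         ("b", "a", 1500.1), ("c", "a", 3099.9), ("c", "b", 1600.0)]

free = levels[1:]                       # all levels except the fixed ground
A = np.zeros((len(lines), len(free)))   # design matrix: +1 upper, -1 lower
y = np.zeros(len(lines))
for i, (up, lo, wn) in enumerate(lines):
    if up in free:
        A[i, free.index(up)] += 1.0
    if lo in free:
        A[i, free.index(lo)] -= 1.0
    y[i] = wn

E, *_ = np.linalg.lstsq(A, y, rcond=None)
for name, e in zip(free, E):
    print(f"{name}: {e:.1f} cm^-1")
```

Because the system is overdetermined (five lines, three unknowns), the fit also averages down the measurement noise, which is why more confirmed line assignments yield more precise level energies.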
The Precautionary Principle and the Innovation Principle: Incompatible Guides for AI Innovation Governance?
In policy debates concerning the governance and regulation of Artificial Intelligence (AI), both the Precautionary Principle (PP) and the Innovation Principle (IP) are advocated by their respective interest groups. Do these principles offer wholly incompatible and contradictory guidance? Does one necessarily negate the other? I argue here that, provided attention is restricted to weak-form PP and IP, the answer to both of these questions is "No." The essence of these weak formulations is the requirement to fully account for the type-I error costs arising from erroneously preventing the innovation's diffusion through society (i.e. mistaken regulatory red-lighting) as well as the type-II error costs arising from erroneously allowing the innovation to diffuse through society (i.e. mistaken regulatory green-lighting). Within the Signal Detection Theory (SDT) model developed here, weak-PP red-light (weak-IP green-light) determinations are optimal for sufficiently small (large) ratios of expected type-I to type-II error costs. For intermediate expected cost ratios, an amber-light 'wait-and-monitor' policy is optimal. Regulatory sandbox instruments allow AI testing and experimentation to take place within a structured environment of limited duration and societal scale, whereby the expected cost ratio falls within the 'wait-and-monitor' range. Through sandboxing, regulators and innovating firms learn more about the expected cost ratio, and what respective adaptations -- of regulation, of technical solution, of business model, or combination thereof, if any -- are needed to keep the ratio out of the weak-PP red-light zone. Nevertheless, AI foundation models are ill-suited for regulatory sandboxing as their general-purpose nature precludes credible identification of misclassification costs.
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.14)
- North America > United States > New York > New York County > New York City (0.04)
- North America > United States > District of Columbia > Washington (0.04)
- (27 more...)
- Law > Statutes (1.00)
- Law > Environmental Law (1.00)
- Information Technology > Security & Privacy (1.00)
- (6 more...)
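The abstract's decision rule reduces to a threshold test on the expected type-I/type-II error-cost ratio. A toy numeric sketch (the cut-off values are assumed for illustration, not taken from the paper):

```python
# Map the expected cost ratio to a regulatory determination:
# small ratio  -> weak-PP red-light (wrongly blocking is cheap, wrongly
#                 allowing is expensive, so restrict),
# large ratio  -> weak-IP green-light (permit diffusion),
# intermediate -> amber 'wait-and-monitor' (e.g. a regulatory sandbox).

def governance_signal(cost_ratio, low=0.5, high=2.0):
    """cost_ratio = expected type-I error cost / expected type-II error cost.

    low/high are illustrative thresholds; in the SDT model they would be
    derived from the detection problem, not fixed constants.
    """
    if cost_ratio < low:
        return "red"      # weak-PP determination
    if cost_ratio > high:
        return "green"    # weak-IP determination
    return "amber"        # wait-and-monitor / sandbox

print(governance_signal(0.1), governance_signal(1.0), governance_signal(5.0))
# -> red amber green
```

Sandboxing, on this reading, is an instrument for learning where the true ratio lies while the determination is still amber, before committing to red or green.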
A Novel Method to Determine Total Oxidant Concentration Produced by Non-Thermal Plasma Based on Image Processing and Machine Learning
Sancak, Mirkan Emir, Sen, Unal, Keris-Sen, Ulker Diler
Accurate determination of total oxidant concentration ([Ox]_{tot}) in non-thermal plasma (NTP)-treated aqueous systems remains a critical challenge due to the transient nature of reactive oxygen and nitrogen species and the subjectivity of conventional titration methods used for [Ox]_{tot} determination. This study introduces a novel, color-based computer analysis (CBCA) method that integrates advanced image processing with machine learning (ML) to quantify colorimetric shifts in potassium iodide (KI) solutions during oxidation. First, a custom-built visual data acquisition system captured high-resolution video of the color transitions in a KI solution during oxidation with an NTP system. The change in [Ox]_{tot} during the experiments was monitored with a standard titrimetric method. Second, the captured frames were processed using a robust image processing pipeline to extract RGB, HSV, and Lab color features. The extracted features were statistically evaluated, and the results revealed strong linear correlations with the measured [Ox]_{tot} values, particularly in the saturation (HSV), a and b (Lab), and blue (RGB) channels. Subsequently, the [Ox]_{tot} measurements and the extracted color features were used to train and validate five ML models. Among them, linear regression and gradient boosting models achieved the highest predictive accuracy (R^2 > 0.990). It was also found that reducing the feature set from nine to four resulted in comparable performance with improved prediction efficiency, especially for gradient boosting. Finally, comparison of the model predictions with real titration measurements revealed that the CBCA system successfully predicts the [Ox]_{tot} in KI solution with high accuracy (R^2 > 0.998) even with a reduced number of features.
- Asia > Middle East > Republic of Türkiye (0.04)
- Europe > Switzerland (0.04)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (0.68)
- Materials > Chemicals (1.00)
- Health & Medicine (0.93)
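The core of the CBCA pipeline (per-frame colour features regressed against titrated concentrations) can be sketched with synthetic data. This is a hedged stand-in, not the paper's code: only mean RGB is extracted, whereas the study also uses HSV and Lab channels, and the "video" here is simulated:

```python
import numpy as np

rng = np.random.default_rng(1)

def mean_rgb(frame):
    # frame: H x W x 3 array in [0, 1]; feature = per-channel mean colour.
    return frame.reshape(-1, 3).mean(axis=0)

# Synthetic frames: the KI solution darkens as oxidant concentration rises,
# so each channel's mean drops linearly in [Ox]_tot (plus pixel noise).
ox_true = np.linspace(0.0, 1.0, 40)                 # arbitrary units
frames = [np.clip(rng.normal([0.9 - 0.5 * c, 0.8 - 0.6 * c, 0.7 - 0.6 * c],
                             0.01, size=(16, 16, 3)), 0, 1)
          for c in ox_true]

X = np.array([mean_rgb(f) for f in frames])         # feature matrix (40 x 3)
X1 = np.hstack([X, np.ones((len(X), 1))])           # add intercept column
w, *_ = np.linalg.lstsq(X1, ox_true, rcond=None)    # ordinary least squares

pred = X1 @ w
r2 = 1 - ((ox_true - pred) ** 2).sum() / ((ox_true - ox_true.mean()) ** 2).sum()
print(f"R^2 = {r2:.3f}")
```

Averaging each channel over the whole frame is what makes the features robust to pixel-level noise; the same fit-and-score pattern extends to the paper's gradient-boosting models and reduced feature sets.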
Health Insurance Coverage Rule Interpretation Corpus: Law, Policy, and Medical Guidance for Health Insurance Coverage Understanding
U.S. health insurance is complex, and inadequate understanding and limited access to justice have dire implications for the most vulnerable. Advances in natural language processing present an opportunity to support efficient, case-specific understanding, and to improve access to justice and healthcare. Yet existing corpora lack context necessary for assessing even simple cases. We collect and release a corpus of reputable legal and medical text related to U.S. health insurance. We also introduce an outcome prediction task for health insurance appeals designed to support regulatory and patient self-help applications, and release a labeled benchmark for our task, and models trained on it.
- North America > United States > California (0.05)
- North America > United States > Ohio (0.04)
- North America > United States > New York (0.04)
- (4 more...)
- Law (1.00)
- Health & Medicine > Therapeutic Area > Psychiatry/Psychology (1.00)
- Health & Medicine > Therapeutic Area > Oncology (1.00)
- (5 more...)