NIS3D: A Completely Annotated Benchmark for Dense 3D Nuclei Image Segmentation

Neural Information Processing Systems

Despite rapid advances in large-volume 3D image acquisition and the emergence of sophisticated nuclei segmentation algorithms in recent years, a benchmark with every cell completely annotated is still missing, making it hard to accurately assess and further improve algorithm performance. Existing nuclei segmentation benchmarks either cover 2D data only or annotate a small number of 3D cells, likely due to the high cost of 3D annotation for large-scale data. To fill this critical need, we constructed NIS3D, a 3D, high-cell-density, large-volume, and completely annotated Nuclei Image Segmentation benchmark, built with the assistance of our newly designed semi-automatic annotation software. NIS3D provides more than 22,000 cells across several of the species most commonly used in this field. Each cell is labeled by three independent annotators, so the variability of each annotation can be measured.


Inside the wild experiments physicists would do with zero limits

New Scientist

From a particle smasher encircling the moon to an "impossible" laser, five scientists reveal the experiments they would run in a world powered purely by imagination In physics, breakthroughs are rare. Experiments are slow, expensive and often end up refining, rather than rewriting, our understanding of the universe. But what if the only constraint on scientific ambition were imagination? We asked five physicists to describe the kind of experiment they would do if they didn't have to worry about budgets, engineering limitations or political realities. Not because we expect any of it to happen soon - though in a few cases, momentum is building - but because it is revealing to see where their minds go when the usual boundaries are stripped away. One researcher wants to launch radio telescopes deep into space to probe dark matter with cosmic energy flashes.


Short-Context Dominance: How Much Local Context Natural Language Actually Needs?

Vakilian, Vala, Wang, Zimeng, Rawat, Ankit Singh, Thrampoulidis, Christos

arXiv.org Artificial Intelligence

We investigate the short-context dominance hypothesis: that for most sequences, a small local prefix suffices to predict their next tokens. Using large language models as statistical oracles, we measure the minimum context length (MCL) needed to reproduce accurate full-context predictions across datasets with sequences of varying lengths. For sequences with 1-7k tokens from long-context documents, we consistently find that 75-80% require at most the last 96 tokens. Given the dominance of short-context tokens, we then ask whether it is possible to detect challenging long-context sequences for which a short local prefix does not suffice for prediction. We introduce a practical proxy to MCL, called Distributionally Aware MCL (DaMCL), that does not require knowledge of the actual next token and is compatible with sampling strategies beyond greedy decoding. Our experiments validate that simple thresholding of the metric defining DaMCL performs well at distinguishing long-context from short-context sequences. Finally, to counter the bias that short-context dominance induces in LLM output distributions, we develop an intuitive decoding algorithm that leverages our detector to identify and boost tokens that are long-range-relevant. Across Q&A tasks and model architectures, we confirm that mitigating this bias improves performance.
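The MCL idea described in the abstract can be sketched as a simple suffix search: find the smallest local suffix of the context whose next-token prediction agrees with the full-context prediction. The sketch below is a minimal illustration, not the paper's method; `toy_oracle` is a hypothetical stand-in for an LLM next-token predictor, and the candidate suffix lengths are arbitrary.

```python
# Hedged sketch of a minimum-context-length (MCL) search. In the paper an LLM
# serves as the oracle; here a toy predictor stands in so the example runs
# without any model. The toy oracle predicts the token that followed the most
# recent earlier occurrence of the last context token.

def toy_oracle(context):
    """Toy next-token predictor (hypothetical stand-in for an LLM)."""
    last = context[-1]
    for i in range(len(context) - 2, -1, -1):
        if context[i] == last:
            return context[i + 1]
    return last  # no repeat found: just echo the last token

def minimum_context_length(tokens, oracle, lengths=(1, 2, 4, 8, 16)):
    """Smallest candidate suffix length whose prediction matches the
    full-context prediction; falls back to the full length otherwise."""
    full_pred = oracle(tokens)
    for k in lengths:
        if k >= len(tokens):
            break
        if oracle(tokens[-k:]) == full_pred:
            return k
    return len(tokens)

print(minimum_context_length(["a", "a", "a"], toy_oracle))          # short context suffices
print(minimum_context_length("x y x y x".split(), toy_oracle))      # needs a longer suffix
```

With a real model, the agreement check would compare token distributions rather than single greedy predictions, which is the gap DaMCL's distribution-aware proxy addresses.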