

Graphene-based sensor to improve robot touch

Robohub

Multiscale-structured miniaturized 3D force sensors. CC BY 4.0

Robots are becoming increasingly capable in vision and movement, yet touch remains one of their major weaknesses. Now, researchers have developed a miniature tactile sensor that could give robots something much closer to a human sense of touch. The technology, developed by researchers at the University of Cambridge, is based on liquid metal composites and graphene - a two-dimensional form of carbon. The 'skin' allows robots to detect not just how hard they are pressing on an object, but also the direction of applied forces, whether an object is slipping, and even how rough a surface is, at a scale small enough to rival the spatial resolution of human fingertips. Their results are reported in the journal .


We don't know if AI-powered toys are safe, but they're here anyway

New Scientist

Toys powered by AI show a worrying lack of emotional understanding.

Mya, aged 3, and her mother Vicky playing with an AI toy called Gabbo during an observation at the University of Cambridge's Faculty of Education.

Even the most cutting-edge AI models are prone to presenting fabrication as fact, dispensing dangerous information and failing to grasp social cues. Despite this, toys equipped with AI that can chat with children are a burgeoning industry. Some scientists are warning that the devices could be risky and require strict regulation. In the latest study, researchers even observed a 5-year-old telling such a toy "I love you", to which it replied: "As a friendly reminder, please ensure interactions adhere to the guidelines provided. Let me know how you would like to proceed."





Cutting Through the Noise: On-the-fly Outlier Detection for Robust Training of Machine Learning Interatomic Potentials

Lam, Terry C. W., O'Neill, Niamh, Schran, Christoph, Schaaf, Lars L.

arXiv.org Machine Learning

The accuracy of machine learning interatomic potentials suffers from reference data that contains numerical noise. Often originating from unconverged or inconsistent electronic-structure calculations, this noise is challenging to identify. Existing mitigation strategies such as manual filtering or iterative refinement of outliers, require either substantial expert effort or multiple expensive retraining cycles, making them difficult to scale to large datasets. Here, we introduce an on-the-fly outlier detection scheme that automatically down-weights noisy samples, without requiring additional reference calculations. By tracking the loss distribution via an exponential moving average, this unsupervised method identifies outliers throughout a single training run. We show that this approach prevents overfitting and matches the performance of iterative refinement baselines with significantly reduced overhead. The method's effectiveness is demonstrated by recovering accurate physical observables for liquid water from unconverged reference data, including diffusion coefficients. Furthermore, we validate its scalability by training a foundation model for organic chemistry on the SPICE dataset, where it reduces energy errors by a factor of three. This framework provides a simple, automated solution for training robust models on imperfect datasets across dataset sizes.


60495b4e033e9f60b32a6607b587aadd-Paper.pdf

Neural Information Processing Systems

Furthermore, we prove information-theoretic lower bounds which establish minimax optimality of the skill-parameter estimation technique used in our algorithm. These bounds utilize a continuum version of Fano's method along with a careful covering argument.




Half of UK novelists believe AI is likely to replace their work entirely

AIHub

Just over half (51%) of published novelists in the UK say that artificial intelligence is likely to end up entirely replacing their work as fiction writers, a new report from the University of Cambridge has found. Close to two-thirds (59%) of novelists say they know their work has been used to train AI Large Language Models (LLMs) without permission or payment. Over a third (39%) of novelists say their income has already taken a hit from generative AI, for example due to loss of other work that facilitates novel writing. Most (85%) novelists expect their future income to be driven down by AI. In new research for Cambridge's Minderoo Centre for Technology and Democracy (MCTD), Dr Clementine Collett surveyed 258 published novelists earlier this year, as well as 74 industry insiders - from commissioning editors to literary agents - to gauge how AI is viewed and used in the world of British fiction.