Laser


The El Paso No-Fly Debacle Is Just the Beginning of a Drone Defense Mess

WIRED

Fears of a drug cartel drone over Texas sparked a recent airspace shutdown in El Paso and New Mexico, highlighting just how tricky it can be to deploy anti-drone weapons near cities. A shocking but ultimately brief airspace closure over El Paso, Texas, and parts of New Mexico last week is stoking unease among pilots and the broader public about the status of United States anti-drone defenses. As low-cost UAV equipment proliferates around the world, analysts have repeatedly warned that destructive attacks perpetrated using drones are inevitable. It is challenging to develop nimble and safe countermeasures, though, given that measures like jamming or attempting to shoot down a drone are difficult--or even impossible--to carry out safely in populated areas, much less densely populated cities. In the case of the El Paso incident, the Federal Aviation Administration originally set the airspace closure to last 10 days, but ultimately lifted it after eight hours.


Are lasers the future of anti-drone warfare?

Al Jazeera

Are lasers the future of anti-drone warfare? A drone appears on the grainy, grayscale image of a thermal camera. This is the type of drone used by groups such as Hezbollah, Hamas and the Yemeni Houthis. Seconds later, the wing of the drone snaps off, sending it tumbling down and exploding when it hits the ground. This is a video shared by the Israeli Ministry of Defence and arms producer Rafael, a hint at the future of anti-drone warfare.


Inside the wild experiments physicists would do with zero limits

New Scientist

From a particle smasher encircling the moon to an "impossible" laser, five scientists reveal the experiments they would run in a world powered purely by imagination. In physics, breakthroughs are rare. Experiments are slow, expensive and often end up refining, rather than rewriting, our understanding of the universe. But what if the only constraint on scientific ambition were imagination? We asked five physicists to describe the kind of experiment they would do if they didn't have to worry about budgets, engineering limitations or political realities. Not because we expect any of it to happen soon - though in a few cases, momentum is building - but because it is revealing to see where their minds go when the usual boundaries are stripped away. One researcher wants to launch radio telescopes deep into space to probe dark matter with cosmic energy flashes. Others are dreaming of completely new kinds of particle accelerator, or lasers that push at the bounds of the possible.


Towards a Safer and Sustainable Manufacturing Process: Material classification in Laser Cutting Using Deep Learning

Salem, Mohamed Abdallah, Ashur, Hamdy Ahmed, Elshinnawy, Ahmed

arXiv.org Artificial Intelligence

Laser cutting is a widely adopted technology in material processing across various industries, but it generates a significant amount of dust, smoke, and aerosols during operation, posing a risk to both the environment and workers' health. Speckle sensing has emerged as a promising method to monitor the cutting process and identify material types in real time. This paper proposes a deep-learning-based material classification technique that uses a speckle pattern of the material's surface to monitor and control the laser cutting process. The proposed method involves training a convolutional neural network (CNN) on a dataset of laser speckle patterns to recognize distinct material types for safe and efficient cutting. Previous methods for material classification using speckle sensing may face issues when the color of the laser used to produce the speckle pattern is changed. Experiments conducted in this study demonstrate that the proposed method achieves high accuracy in material classification, even when the laser color is changed. The model achieved an accuracy of 98.30% on the training set and 96.88% on the validation set. Furthermore, the model was evaluated on a set of 3000 new images spanning 30 different materials, achieving an F1-score of 0.9643. The proposed method provides a robust and accurate solution for material-aware laser cutting using speckle sensing.
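The paper's headline metric is an F1-score of 0.9643 across 30 material classes. As a reminder of what that number aggregates, here is a minimal sketch of a macro-averaged F1 computation; the material labels are hypothetical examples, not drawn from the paper's dataset:

```python
def macro_f1(y_true, y_pred):
    """Macro-averaged F1: compute per-class F1, then take the unweighted mean."""
    classes = sorted(set(y_true) | set(y_pred))
    f1s = []
    for c in classes:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * precision * recall / (precision + recall)
                   if precision + recall else 0.0)
    return sum(f1s) / len(f1s)

# hypothetical material labels for illustration
y_true = ["steel", "wood", "acrylic", "steel", "wood"]
y_pred = ["steel", "wood", "steel", "steel", "wood"]
score = macro_f1(y_true, y_pred)
```

Macro averaging weights every material class equally, so rare materials count as much as common ones -- a sensible choice when misclassifying any material can compromise cutting safety.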


Artificial intelligence approaches for energy-efficient laser cutting machines

Salem, Mohamed Abdallah, Ashour, Hamdy Ahmed, Elshenawy, Ahmed

arXiv.org Artificial Intelligence

This research addresses the significant challenges of energy consumption and environmental impact in laser cutting by proposing novel deep learning (DL) methodologies to achieve energy reduction. Recognizing the current lack of adaptive control and the open-loop nature of CO2 laser suction pumps, this study utilizes closed-loop configurations that dynamically adjust pump power based on both the material being cut and the smoke level generated. To implement this adaptive system, diverse material classification methods are introduced, including techniques leveraging lens-less speckle sensing with a customized Convolutional Neural Network (CNN) and an approach using a USB camera with transfer learning via the pre-trained VGG16 CNN model. Furthermore, a separate DL model for smoke level detection is employed to simultaneously refine the pump's power output. This integration allows the exhaust suction pump to halt automatically during inactive times and to adjust power dynamically during operation, leading to experimentally proven energy savings: results show a 20% to 50% reduction in the smoke suction pump's energy consumption, contributing substantially to sustainable development in the manufacturing sector.
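The closed-loop behavior described above (halt when idle, scale pump power with detected smoke) can be sketched as a simple control law. The power range, clamping, and linear mapping below are illustrative assumptions, not the controller reported in the paper:

```python
def pump_power(cutting_active: bool, smoke_level: float,
               p_min: float = 0.2, p_max: float = 1.0) -> float:
    """Closed-loop suction-pump duty fraction in [0, 1].

    p_min/p_max and the linear smoke-to-power mapping are illustrative
    placeholders; the paper's DL models would supply smoke_level.
    """
    if not cutting_active:
        return 0.0  # pump halts automatically during inactive times
    level = min(max(smoke_level, 0.0), 1.0)  # clamp sensor reading to [0, 1]
    return p_min + (p_max - p_min) * level

# light smoke draws low power, heavy smoke ramps toward full power
low = pump_power(True, 0.1)
high = pump_power(True, 0.9)
```

The energy saving comes from the two branches: zero draw while idle, and only as much power as the current smoke level demands while cutting, instead of a fixed open-loop maximum.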


TumorMap: A Laser-based Surgical Platform for 3D Tumor Mapping and Fully-Automated Tumor Resection

Ma, Guangshen, Prakash, Ravi, Schleupner, Beatrice, Everitt, Jeffrey, Mishra, Arpit, Chen, Junqin, Mann, Brian, Chen, Boyuan, Bridgeman, Leila, Zhong, Pei, Draelos, Mark, Eward, William C., Codd, Patrick J.

arXiv.org Artificial Intelligence

Surgical resection of malignant solid tumors is critically dependent on the surgeon's ability to accurately identify pathological tissue and remove the tumor while preserving surrounding healthy structures. However, building an intraoperative 3D tumor model for subsequent removal faces major challenges due to the lack of high-fidelity tumor reconstruction, difficulties in developing generalized tissue models to handle the inherent complexities of tumor diagnosis, and the natural physical limitations of bimanual operation, physiologic tremor, and fatigue creep during surgery. To overcome these challenges, we introduce "TumorMap", a surgical robotic platform to formulate intraoperative 3D tumor boundaries and achieve autonomous tissue resection using a set of multifunctional lasers. TumorMap integrates a three-laser mechanism (optical coherence tomography, laser-induced endogenous fluorescence, and a cutting laser scalpel) combined with deep learning models to achieve fully-automated and noncontact tumor resection. We validated TumorMap in murine osteosarcoma and soft-tissue sarcoma tumor models, and established a novel histopathological workflow to estimate sensor performance. With submillimeter laser resection accuracy, we demonstrated multimodal sensor-guided autonomous tumor surgery without any human intervention.


Promoting Sustainable Web Agents: Benchmarking and Estimating Energy Consumption through Empirical and Theoretical Analysis

Krupp, Lars, Geißler, Daniel, Banwari, Vishal, Lukowicz, Paul, Karolus, Jakob

arXiv.org Artificial Intelligence

Web agents, like OpenAI's Operator and Google's Project Mariner, are powerful agentic systems pushing the boundaries of Large Language Models (LLMs). They can autonomously interact with the internet at the user's behest, such as navigating websites, filling search masks, and comparing price lists. Though web agent research is thriving, the sustainability issues it induces remain largely unexplored. To highlight the urgency of this issue, we provide an initial exploration of the energy and $CO_2$ cost associated with web agents from both a theoretical perspective (via estimation) and an empirical one (via benchmarking). Our results show how different philosophies in web agent creation can severely impact the associated energy expenditure, and that consuming more energy does not necessarily equate to better results. We highlight the lack of transparency around disclosing model parameters and processes used by some web agents as a limiting factor when estimating energy consumption. Our work contributes towards a change in how we evaluate web agents, advocating for dedicated metrics measuring energy consumption in benchmarks.
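The theoretical side of such an analysis is essentially a back-of-envelope chain from token counts to grams of $CO_2$. The sketch below shows the shape of that estimate; every parameter value is an assumption to be measured or looked up, not a figure from the paper:

```python
def agent_co2_grams(llm_calls: int, tokens_per_call: int,
                    joules_per_token: float, grid_gco2_per_kwh: float) -> float:
    """Rough CO2 footprint (grams) of one web-agent task.

    All four inputs are assumptions: call count and tokens depend on the
    agent's design, joules/token on the model and hardware, and the grid
    carbon intensity on where the datacenter runs.
    """
    joules = llm_calls * tokens_per_call * joules_per_token
    kwh = joules / 3.6e6  # 1 kWh = 3.6 MJ
    return kwh * grid_gco2_per_kwh

# hypothetical task: 20 LLM calls of 1500 tokens each
estimate = agent_co2_grams(20, 1500, 2.0, 400)
```

The estimate scales linearly in each factor, which is why agent "philosophy" matters so much: an agent that makes many screenshot-heavy multimodal calls per page can cost an order of magnitude more than one issuing a few text-only calls.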


Compress to Impress: Efficient LLM Adaptation Using a Single Gradient Step on 100 Samples

Sreeram, Shiva, Maalouf, Alaa, Sharma, Pratyusha, Rus, Daniela

arXiv.org Artificial Intelligence

Recently, Sharma et al. suggested a method called Layer-SElective-Rank reduction (LASER), which demonstrated that pruning high-order components of carefully chosen LLM weight matrices can boost downstream accuracy -- without any gradient-based fine-tuning. Yet LASER's exhaustive, per-matrix search (each step requiring full-dataset forward passes) makes it impractical for rapid deployment. We demonstrate that this overhead can be removed and find that: (i) Only a small, carefully chosen subset of matrices needs to be inspected, eliminating the layer-by-layer sweep; (ii) The gradient of each matrix's singular values pinpoints which matrices merit reduction; (iii) Increasing the factorization search space by allowing a matrix's rows to cluster around multiple subspaces and then decomposing each cluster separately further reduces overfitting on the original training data and lifts accuracy by up to 24.6 percentage points; and finally, (iv) we discover that evaluating on just 100 samples rather than the full training data -- both for computing the indicative gradients and for measuring the final accuracy -- suffices to further reduce the search time; we explain this by noting that adaptation to downstream tasks is dominated by prompting style, not dataset size. As a result, we show that combining these findings yields a fast and robust adaptation algorithm for downstream tasks. Overall, with a single gradient step on 100 examples and a quick scan of the top candidate layers and factorization techniques, we can adapt LLMs to new datasets -- entirely without fine-tuning.
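The core LASER operation, replacing a weight matrix by a low-rank approximation that drops its high-order singular components, can be sketched with a truncated SVD. The matrix size and target rank below are arbitrary toy values standing in for a real LLM weight matrix:

```python
import numpy as np

def laser_reduce(W: np.ndarray, rank: int) -> np.ndarray:
    """Best rank-`rank` approximation of W via truncated SVD:
    keep the top `rank` singular components, drop the rest."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank]

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 8))   # stand-in for one LLM weight matrix
W_lr = laser_reduce(W, rank=2)    # high-order components pruned away
```

In the original method this replacement is tried matrix by matrix with full-dataset evaluation after each attempt; the paper's contribution is choosing *which* matrices to reduce (via singular-value gradients) and validating on only ~100 samples, so the expensive sweep disappears.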


LaSeR: Reinforcement Learning with Last-Token Self-Rewarding

Yang, Wenkai, Liu, Weijie, Xie, Ruobing, Guo, Yiju, Wu, Lulu, Yang, Saiyong, Lin, Yankai

arXiv.org Artificial Intelligence

Reinforcement Learning with Verifiable Rewards (RLVR) has recently emerged as a core paradigm for enhancing the reasoning capabilities of Large Language Models (LLMs). To address the lack of verification signals at test time, prior studies incorporate training of the model's self-verification capability into the standard RLVR process, thereby unifying reasoning and verification capabilities within a single LLM. However, previous practice requires the LLM to sequentially generate solutions and self-verifications using two separate prompt templates, which significantly reduces efficiency. In this work, we theoretically reveal that the closed-form solution to the RL objective of self-verification can be reduced to a remarkably simple form: the true reasoning reward of a solution equals its last-token self-rewarding score, computed as the difference between the policy model's next-token log-probability assigned to any pre-specified token at the solution's last token and a pre-calculated constant, scaled by the KL coefficient. Based on this insight, we propose LaSeR (Reinforcement Learning with Last-Token Self-Rewarding), an algorithm that simply augments the original RLVR loss with an MSE loss that aligns the last-token self-rewarding scores with verifier-based reasoning rewards, jointly optimizing the reasoning and self-rewarding capabilities of LLMs. The optimized self-rewarding scores can be utilized in both training and testing to enhance model performance. Notably, our algorithm derives these scores from the predicted next-token probability distribution of the last token immediately after generation, incurring only the minimal extra cost of one additional token inference. Experiments show that our method not only improves the model's reasoning performance but also equips it with a remarkable self-rewarding capability, thereby boosting its inference-time scaling performance.
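The closed-form score described above fits in a few lines: a KL coefficient times the gap between a log-probability and a pre-calculated constant. The numeric values in the sketch are illustrative placeholders, not figures from the paper:

```python
import math

def self_reward(logprob_token: float, const: float, kl_coeff: float) -> float:
    """Last-token self-rewarding score: the policy's next-token
    log-probability of a pre-specified token at the solution's last
    position, minus a pre-calculated constant, scaled by the KL
    coefficient. Inputs here are illustrative placeholders."""
    return kl_coeff * (logprob_token - const)

# placeholder numbers: the pre-specified token gets probability 0.9,
# the pre-calculated constant corresponds to probability 0.5
score = self_reward(logprob_token=math.log(0.9),
                    const=math.log(0.5),
                    kl_coeff=0.1)
```

The appeal is the cost profile: the log-probability is already available from the next-token distribution immediately after generation, so scoring a solution costs one extra token inference instead of a second verification pass with its own prompt template.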


This Watch Brand Has Made a Completely New Kind of Strap Using Lasers

WIRED

It looks like fabric, feels like metal, and is as light as rubber. A watch fan looking to tick all of those boxes would normally need to be a dab hand with a spring bar removal tool, swapping straps to experience each quality individually, but a new strap developed by Malaysian independent brand Ming appears to offer the best of all worlds. The one strap to rule them all has been dubbed the Polymesh; it is 3D-printed from grade five titanium and comprises 1,693 interconnected pieces (including the buckle) held together without any pins or screws. The only additional parts requiring assembly are the quick-release spring bars at each end that attach it to the watch--the articulated pin buckle is formed in the same printing process. Ming says that the strap, which is made up of rows of 15 equilateral triangles meshed together and bookended by larger end pieces, "has more motion engineered into the radial axis than the lateral one," leading to a supple end result that drapes like fabric yet retains the strength of titanium.