CFaults: Model-Based Diagnosis for Fault Localization in C Programs with Multiple Test Cases

Orvalho, Pedro, Janota, Mikoláš, Manquinho, Vasco

arXiv.org Artificial Intelligence

Debugging is one of the most time-consuming and expensive tasks in software development. Several formula-based fault localization (FBFL) methods have been proposed, but they either fail to guarantee a set of diagnoses across all failing tests or produce redundant diagnoses that are not subset-minimal, particularly for programs with multiple faults. This paper introduces CFaults, a novel fault localization approach for C programs with multiple faults. CFaults leverages Model-Based Diagnosis (MBD) with multiple observations and aggregates all failing test cases into a unified MaxSAT formula. Consequently, our method guarantees consistency across observations and simplifies the fault localization procedure. Experimental results on two benchmark sets of C programs, TCAS and C-Pack-IPAs, show that CFaults is faster than other FBFL approaches such as BugAssist and SNIPER. Moreover, CFaults generates only subset-minimal diagnoses of faulty statements, whereas the other approaches tend to enumerate redundant diagnoses.
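The subset-minimality property the abstract emphasizes can be illustrated with a toy model-based diagnosis routine. This is not CFaults' actual MaxSAT encoding: it hypothetically represents each failing test by a conflict set of suspect statement IDs and brute-forces the subset-minimal hitting sets, i.e. the minimal sets of statements that explain every failing test at once.

```python
from itertools import combinations

# Toy model: each failing test is "explained" by a set of suspect
# statement IDs (its conflict set). A diagnosis must intersect the
# conflict set of every failing test (a hitting set); only the
# subset-minimal hitting sets are reported.
failing_tests = {
    "t1": {1, 3},   # statements that could explain t1's failure
    "t2": {3, 5},
    "t3": {1, 5},
}

def minimal_diagnoses(conflicts):
    universe = sorted(set().union(*conflicts.values()))
    found = []
    # Enumerate candidates by increasing size, so any candidate that
    # contains an already-found diagnosis is a non-minimal superset.
    for k in range(1, len(universe) + 1):
        for cand in combinations(universe, k):
            s = set(cand)
            if all(s & c for c in conflicts.values()):
                if not any(prev <= s for prev in found):
                    found.append(s)
    return found

print(minimal_diagnoses(failing_tests))
# → [{1, 3}, {1, 5}, {3, 5}]
```

A MaxSAT-based tool reaches the same answers without exponential enumeration by encoding statements as soft clauses and test behaviour as hard clauses; the brute force here only makes the "subset-minimal across all failing tests" criterion concrete.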


Driven from city life to jungle insurgency

The Japan Times

On jungle crests about 1 mile from the front lines in eastern Myanmar, a former hotel banquet coordinator slipped his index finger onto the trigger of an assault rifle. A dentist recalled picking larvae from a young fighter's infected bullet wound. A marketing manager described the adapted commercial drones she is directing to foil the enemy. More than a year after Myanmar's military seized full control in a coup -- imprisoning the nation's elected leaders, killing more than 1,700 civilians and arresting at least 13,000 more -- the country is at war, with some unlikely combatants in the fray. On one side is a military junta that, apart from a brief interlude of semidemocratic governance, has ruled with brutal force for a half-century.


Scale Normalized Image Pyramids with AutoFocus for Object Detection

Singh, Bharat, Najibi, Mahyar, Sharma, Abhishek, Davis, Larry S.

arXiv.org Artificial Intelligence

We present an efficient foveal framework to perform object detection. A scale normalized image pyramid (SNIP) is generated that, like human vision, only attends to objects within a fixed size range at each scale. Restricting object sizes during training in this way affords better learning of object-sensitive filters and therefore yields better accuracy. However, the use of an image pyramid increases the computational cost. Hence, we propose an efficient spatial sub-sampling scheme that only operates on fixed-size sub-regions likely to contain objects (as object locations are known during training). The resulting approach, referred to as Scale Normalized Image Pyramid with Efficient Resampling or SNIPER, yields up to a 3x speed-up during training. Unfortunately, as object locations are unknown during inference, the entire image pyramid still needs processing. To this end, we adopt a coarse-to-fine approach and predict the locations and extent of object-like regions, which are then processed at successive scales of the image pyramid. Intuitively, this is akin to active human vision, which first skims the field of view to spot interesting regions and then recognizes objects at the right resolution. The resulting algorithm, referred to as AutoFocus, yields a 2.5-5x speed-up during inference when used with SNIP.
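The core SNIP idea of attending only to objects within a fixed size range at each pyramid scale can be sketched as a simple filter. The scales and valid ranges below are illustrative placeholders, not the paper's actual values:

```python
# Hypothetical pyramid scales mapped to valid object-size ranges
# (in pixels, measured at the resampled resolution). The paper's
# actual ranges differ; these just illustrate the mechanism.
SCALES = {
    0.5: (120, 1e9),   # coarse scale: attend only to large objects
    1.0: (40, 160),    # medium scale
    2.0: (0, 80),      # fine scale: attend only to small objects
}

def valid_objects(boxes, scale, lo, hi):
    """Return (w, h) boxes whose size at this scale falls in [lo, hi]."""
    out = []
    for (w, h) in boxes:
        size = max(w, h) * scale  # object size after rescaling to this level
        if lo <= size <= hi:
            out.append((w, h))
    return out

boxes = [(30, 20), (100, 80), (400, 300)]
for scale, (lo, hi) in SCALES.items():
    print(scale, valid_objects(boxes, scale, lo, hi))
```

Each object is thus trained at exactly the resolution where it looks "normal-sized", which is what lets the detector share scale-specific filters across the pyramid.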


Army launches massive new 'What's Your Warrior?' ad campaign

FOX News

Maneuvering attack helicopters through mountains, firing lasers from ground vehicles, commanding autonomous robot sensors, waging cyberattacks and creating microscopic explosions splitting cells in a laboratory -- these are all images designed to capture a growing realm of Army experiences depicted in the service's massive new ad campaign, "What's Your Warrior?" Beginning with images of Apache helicopters weaving through rocky cliffs amid dust, wind and high-risk combat, the Army-colored greenish-yellow video animation balances a nuanced message, blending individual soldier specialties with the platforms, networks and advanced weapons used by "teamed" groups of soldiers. The ads show snipers buried in tall grass battling high winds, paratroopers descending in groups through morphing yellowish clouds and cyberwarriors typing feverishly while satellites, sensors and command-and-control technology operate in tandem. Seeking to appeal to a sense of identity, profession and purpose within the cyber-savvy, information-age Generation Z, the Army's new recruiting advertisements intend to take new steps beyond the famous "Be All You Can Be" ads by, among other things, expanding the definition of Warrior. Networking weapons from space, guiding ground missile targeting from drones in the air, jamming enemy networks with EW, using AI to organize armored vehicle sensor data and pushing the frontiers of scientific discovery in laboratories -- these are all skill sets now increasingly in demand by Army recruiters. While the fundamentals of mechanized warfare, including the Army's Combined Arms Maneuver, are needed as much as or more than at any time in modern history, the Army is, of course, expanding its mission scope to encompass space, cyber, EW and AI-driven weapons systems.


Noise-cancelling headsets worn by soldiers can reveal the position of a sniper after a single shot

Daily Mail - Science & tech

The locations of enemy snipers may soon be revealed instantly on the smartphones of ambushed troops. Cutting-edge audio technology is being developed that uses microphones in soldiers' ears to track two distinctive sounds from a bullet: the supersonic shockwave ahead of the bullet and the blast as it leaves the muzzle. By combining these two sounds, the system can trace the shot back to the location it was fired from. The location data is then relayed to the handsets of the beleaguered troops to help them identify and neutralise the threat. The audio experts who developed the technology say it builds on existing systems and could be deployed on the battlefield within two years.
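The physics behind the two-sound approach can be illustrated with a deliberately simplified range estimate. Assuming the bullet travels roughly toward the listener at a constant (assumed) speed, the supersonic crack arrives after d/v_bullet and the muzzle blast after d/v_sound, so the gap between them grows with distance. Real systems fuse microphone arrays and angle-of-arrival measurements, not just this single gap:

```python
# Simplified shooter-range estimate from the time gap between the
# ballistic crack (shockwave) and the muzzle blast.
# Assumptions: bullet heading roughly at the listener, constant
# bullet speed, still air. Both speeds below are illustrative.
V_SOUND = 343.0     # m/s, speed of sound in air at ~20 C
V_BULLET = 900.0    # m/s, assumed average bullet speed

def shooter_range(dt):
    """dt: seconds between hearing the crack and the muzzle blast."""
    # crack arrives after d / V_BULLET, blast after d / V_SOUND,
    # so dt = d * (1/V_SOUND - 1/V_BULLET); solve for d.
    return dt / (1.0 / V_SOUND - 1.0 / V_BULLET)

print(round(shooter_range(0.5)), "m")  # ~277 m for a half-second gap
```

The longer the gap between crack and blast, the farther the shooter; a gap near zero means the shot was fired at close range.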


SNIPER: Efficient Multi-Scale Training

Singh, Bharat, Najibi, Mahyar, Davis, Larry S.

Neural Information Processing Systems

We present SNIPER, an algorithm for performing efficient multi-scale training in instance-level visual recognition tasks. Instead of processing every pixel in an image pyramid, SNIPER processes context regions around ground-truth instances (referred to as chips) at the appropriate scale. For background sampling, these context regions are generated using proposals extracted from a region proposal network trained with a short learning schedule. Hence, the number of chips generated per image during training adapts to the scene complexity. SNIPER processes only 30% more pixels than the commonly used single-scale training at 800x1333 pixels on the COCO dataset, yet it also observes samples from extreme resolutions of the image pyramid, such as 1400x2000 pixels. As SNIPER operates on resampled low-resolution chips (512x512 pixels), it can use a batch size as large as 20 on a single GPU even with a ResNet-101 backbone. Therefore, it can benefit from batch normalization during training without synchronizing batch-normalization statistics across GPUs. SNIPER brings training of instance-level recognition tasks like object detection closer to the protocol for image classification and suggests that the commonly accepted guideline that it is important to train on high-resolution images for instance-level visual recognition tasks might not be correct. Our implementation, based on Faster-RCNN with a ResNet-101 backbone, obtains an mAP of 47.6% on the COCO dataset for bounding-box detection and can process 5 images per second during inference on a single GPU. Code is available at https://github.com/MahyarNajibi/SNIPER/ .
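The chip-sampling step can be sketched in a few lines. This is a greedy simplification under stated assumptions: one fixed-size chip centred on each ground-truth box and clamped to the image; SNIPER's real sampler works per pyramid scale, merges overlapping chips, and adds negative chips from RPN proposals:

```python
CHIP = 512  # chip side length in pixels (at the resampled scale)

def chips_for_boxes(boxes, img_w, img_h):
    """Greedy sketch: one CHIP x CHIP window centred on each
    ground-truth box (x, y, w, h), clamped to the image bounds."""
    chips = []
    for (x, y, w, h) in boxes:
        cx, cy = x + w / 2, y + h / 2
        # centre the chip on the box, then clamp so it stays in-image
        x0 = min(max(0, cx - CHIP / 2), max(0, img_w - CHIP))
        y0 = min(max(0, cy - CHIP / 2), max(0, img_h - CHIP))
        chips.append((int(x0), int(y0), CHIP, CHIP))
    return chips

print(chips_for_boxes([(100, 100, 50, 40), (900, 600, 200, 150)], 1333, 800))
# → [(0, 0, 512, 512), (744, 288, 512, 512)]
```

Training then runs only on these 512x512 windows rather than the full pyramid levels, which is what makes the large single-GPU batch sizes mentioned above feasible.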