A glimpse of future airpower on display at biennial China airshow

Al Jazeera

A squadron of six Chinese Chengdu J-10 jets took off towards an overcast sky in front of thousands of spectators at an airfield in southern China's coastal city of Zhuhai in mid-November. Flying low in a close V-shaped formation, the jets circled back, and as they approached a cluster of buildings near the spectators, trails of red, blue, yellow and white smoke suddenly poured from each plane, bringing a cheer from onlookers that was almost as loud as the roar of the warplanes' engines. Seconds later, the J-10s broke their close formation to show off a series of even more impressive acrobatic manoeuvres. But the aerial show by the seasoned pilots was far from the only demonstration of prowess at the China International Aviation & Aerospace Exhibition, better known as Airshow China or the Zhuhai Airshow, which is held biennially in Zhuhai, the southern Chinese city after which it is named. A wide array of new equipment and aircraft available to the Chinese military – known as the People's Liberation Army (PLA) – was unveiled for the first time at the airshow, held from November 12 to 17.


I Bet You Did Not Mean That: Testing Semantic Importance via Betting

Teneggi, Jacopo, Sulam, Jeremias

arXiv.org Machine Learning

Recent works have extended notions of feature importance to semantic concepts that are inherently interpretable to the users interacting with a black-box predictive model. Yet, precise statistical guarantees, such as false positive rate control, are needed to communicate findings transparently and to avoid unintended consequences in real-world scenarios. In this paper, we formalize the global (i.e., over a population) and local (i.e., for a sample) statistical importance of semantic concepts for the predictions of opaque models by means of conditional independence, which allows for rigorous testing. We use recent ideas of sequential kernelized testing (SKIT) to induce a ranking of importance across concepts, and showcase the effectiveness and flexibility of our framework on synthetic datasets as well as on image classification tasks using vision-language models such as CLIP.
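The core mechanism behind sequential tests of this kind is "testing by betting": payoffs with zero mean under the null drive a nonnegative wealth martingale, and Ville's inequality lets one reject at any stopping time once wealth exceeds 1/alpha. The sketch below is a deliberately minimal illustration of that idea for a two-sample setting, not the paper's SKIT procedure: the paper uses adaptive betting fractions and kernelized witness functions, whereas here the witness point `z`, the fixed betting fraction `lam`, and all function names are this sketch's own simplifying assumptions.

```python
import math

def gaussian_kernel(a, b, sigma=1.0):
    """Gaussian (RBF) kernel; values lie in (0, 1]."""
    return math.exp(-((a - b) ** 2) / (2.0 * sigma ** 2))

def sequential_betting_test(xs, ys, z, alpha=0.05, lam=0.5, sigma=1.0):
    """Toy sequential two-sample test by betting.

    Under the null (xs and ys drawn i.i.d. from the same distribution),
    each payoff g_t = k(x_t, z) - k(y_t, z) has mean zero, so the wealth
    process is a nonnegative martingale; by Ville's inequality the
    probability that wealth ever reaches 1/alpha is at most alpha.
    Rejecting as soon as wealth >= 1/alpha is therefore an anytime-valid
    level-alpha test.
    """
    wealth = 1.0
    for t, (x, y) in enumerate(zip(xs, ys), start=1):
        # Payoff is bounded in [-1, 1]; with lam <= 0.5 wealth stays positive.
        g = gaussian_kernel(x, z, sigma) - gaussian_kernel(y, z, sigma)
        wealth *= 1.0 + lam * g  # bet a fixed fraction lam of current wealth
        if wealth >= 1.0 / alpha:
            return True, t  # reject the null at step t
    return False, min(len(xs), len(ys))

# Clearly separated streams: wealth compounds and the test rejects quickly.
rejected, step = sequential_betting_test([0.0] * 100, [2.0] * 100, z=0.0)
# Identical streams: payoffs are zero, wealth never grows, no rejection.
not_rejected, _ = sequential_betting_test([1.0] * 100, [1.0] * 100, z=0.0)
```

The fixed betting fraction keeps the sketch short; the sequential-testing literature typically tunes it online (e.g., with online Newton steps) so that wealth grows near-optimally against the unknown alternative.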