AI Safety Meets the War Machine

WIRED

Anthropic doesn't want its AI used in autonomous weapons or government surveillance. Those carve-outs could cost it a major military contract. When Anthropic last year became the first major AI company cleared by the US government for classified use--including military applications--the news didn't make a major splash. But this week a second development hit like a cannonball: The Pentagon is reconsidering its relationship with the company, including a $200 million contract, ostensibly because the safety-conscious AI firm objects to participating in certain deadly operations. The so-called Department of War might even designate Anthropic as a "supply chain risk," a scarlet letter usually reserved for companies that do business with countries scrutinized by federal agencies, like China, which means the Pentagon would not do business with firms using Anthropic's AI in their defense work.


Musk cuts Starlink access for Russian forces - giving Ukraine an edge at the front

BBC News

Evidence is mounting that Elon Musk's decision to deny Russian forces access to his Starlink satellite-based internet service has blunted Moscow's advance, caused confusion among Russian soldiers and handed an advantage to Ukraine's defenders. And what can Ukraine's military achieve in the meantime? "The Russians lost their ability to control the field," a Ukrainian drone operator who goes by the callsign Giovanni told us. "I think they lost 50% of their capacity for offence," he said. "That's what the numbers show."


ReMaX: Relaxing for Better Training on Efficient Panoptic Segmentation

Neural Information Processing Systems

This paper presents a new mechanism to facilitate the training of mask transformers for efficient panoptic segmentation, democratizing its deployment. We observe that the high complexity of the panoptic segmentation training objective inevitably leads to much heavier penalization of false positives.