

A defense official reveals how AI chatbots could be used for targeting decisions

MIT Technology Review

Though the US military's big data initiative Maven has sped up the planning of strikes for years, the comments suggest that generative AI is now adding a new interpretative layer to such deliberations. The US military might use generative AI systems to rank lists of targets and make recommendations--which would be vetted by humans--about which to strike first, according to a Defense Department official with knowledge of the matter. The disclosure about how the military may use AI chatbots comes as the Pentagon faces scrutiny over a strike on an Iranian school, which it is still investigating. A list of possible targets might be fed into a generative AI system that the Pentagon is fielding for classified settings. Then, said the official, who requested to speak on background to discuss sensitive topics, humans might ask the system to analyze the information and prioritize the targets while accounting for factors like where aircraft are currently located. Humans would then be responsible for checking and evaluating the results and recommendations.


Dario Amodei's Oppenheimer Moment

The Atlantic - Technology

It came earlier than expected. More than a year before his recent standoff with the Pentagon, Dario Amodei, the chief executive of Anthropic, published a 15,000-word manifesto describing a glorious AI future. Its title, "Machines of Loving Grace," is borrowed from a Richard Brautigan poem, but as Amodei acknowledged, with some embarrassment, its utopian vision bears some resemblance to science fiction. According to Amodei, we will soon create the first polymath AIs with abilities that surpass those of Nobel Prize winners in "most relevant fields," and we'll have millions of them, a "country of geniuses," all packed into the glowing server racks of a data center, working together. With access to tools that operate directly on our physical world, these AIs would be able to get up to a great deal of dangerous mischief, but according to Amodei, if they're developed--or "grown," as staffers at Anthropic are fond of saying--in the correct way, they will decide to greatly improve our lives. Amodei does not explain precisely how the AIs will accomplish this.


AI Safety Meets the War Machine

WIRED

Anthropic doesn't want its AI used in autonomous weapons or government surveillance. Those carve-outs could cost it a major military contract. When Anthropic last year became the first major AI company cleared by the US government for classified use--including military applications--the news didn't make a major splash. But this week a second development hit like a cannonball: The Pentagon is reconsidering its relationship with the company, including a $200 million contract, ostensibly because the safety-conscious AI firm objects to participating in certain deadly operations. The so-called Department of War might even designate Anthropic as a "supply chain risk," a scarlet letter usually reserved for companies that do business with countries scrutinized by federal agencies, like China. The designation would mean the Pentagon would not do business with firms using Anthropic's AI in their defense work.


Trump's Board of Peace faces its first test on Gaza

Al Jazeera

Members of Donald Trump's Board of Peace have expressed optimism about peace and rebuilding in Gaza at its inaugural session. Despite multibillion-dollar pledges, there are doubts about how it will manage the enclave's unresolved issues.


Venezuela signs amnesty law as families await prison releases

Al Jazeera

Venezuela's acting president Delcy Rodriguez signed an amnesty law that could free hundreds of people jailed over protests and political unrest dating back decades. The law marks a shift for the country, which has long denied holding any political prisoners.


Macron defends EU AI rules and vows crackdown on child 'digital abuse'

The Guardian

Emmanuel Macron told delegates at the AI summit: 'Europe is not blindly focused on regulation.' Emmanuel Macron has hit back at US criticism of Europe's efforts to regulate AI, vowing to protect children from "digital abuse" during France's presidency of the G7. Speaking at the AI Impact summit in Delhi, the French president called for tougher safeguards after global outrage over Elon Musk's Grok chatbot being used to generate tens of thousands of sexualised images of children, and amid mounting concern about the concentration of AI power in a handful of companies. His remarks were echoed by António Guterres, the UN secretary general, who told delegates - including several US tech billionaires - that "no child should be a test subject for unregulated AI". "The future of AI cannot be decided by a few countries or left to the whims of a few billionaires," Guterres said. "AI must belong to everyone".


How Nick Land Became Silicon Valley's Favorite Doomsayer

The New Yorker

Nick Land believes that digital superintelligence is going to kill us all. In San Francisco, his followers ask: What if, instead of trying to stop an A.I. takeover, you work to bring it on as fast as possible? In the spring of 1994, at a philosophy conference called "Virtual Futures," held on a run-down modernist campus in the English Midlands, a group of academics, media theorists, artists, hackers, and d.j.s gathered to hear a young professor give a talk. It was ten o'clock in the morning, and most of the attendees were wiped out from a rave that had taken place in the student union the night before. But the talk--titled "Meltdown"--was highly anticipated. The professor, Nick Land, was tenured in the philosophy department at the University of Warwick, at the time one of the top philosophy programs in the U.K. Land had gained a cult following for his radical anti-humanism, his wild predictions about the future of technology, and his erratic teaching style. Soon, his academic presentations would become increasingly "experimental"; at a conference in 1996, he lay on the floor, reciting cut-up poetry in what an attendee described as a "demon voice" while jungle music played in the background.




Maduro raid questions trigger Pentagon review of top AI firm as potential 'supply chain risk'

FOX News
