Dell wants to be your one-stop shop for AI infrastructure

ZDNet

Michael Dell is pitching a "decentralized" future for artificial intelligence that his company's devices will make possible. "The future of AI will be decentralized, low-latency, and hyper-efficient," predicted the Dell Technologies founder, chairman, and CEO in his Dell World keynote, which you can watch on YouTube. "AI will follow the data, not the other way around," Dell said at Monday's kickoff of the company's four-day customer conference in Las Vegas. Dell is betting that the complexity of deploying generative AI on-premise is driving companies to embrace a vendor with all of the parts, plus 24-hour-a-day service and support, including monitoring. On day two of the show, Dell chief operating officer Jeffrey Clarke noted that Dell's survey of enterprise customers shows 37% want an infrastructure vendor to "build their entire AI stack for them," adding, "We think Dell is becoming an enterprise's 'one-stop shop' for all AI infrastructure."


Google releases its asynchronous Jules AI agent for coding - how to try it for free

ZDNet

The race to deploy AI agents is heating up. At its annual I/O developer conference yesterday, Google announced that Jules, its new AI coding assistant, is now available worldwide in public beta. The launch marks the company's latest effort to corner the burgeoning market for AI agents, widely regarded across Silicon Valley as essentially a more practical and profitable form of chatbot. Virtually every other major tech giant -- including Meta, OpenAI, and Amazon, just to name a few -- has launched its own agent product in recent months. Originally unveiled by Google Labs in December, Jules is positioned as a reliable, automated coding assistant that can manage a broad suite of time-consuming tasks on behalf of human users. The model is "asynchronous," which, in programming-speak, means it can kick off several tasks and let them run in the background, rather than blocking until each one finishes.
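To make the "asynchronous" idea concrete, here is a minimal Python sketch (not Jules's actual implementation; the task names are invented for illustration) showing how several long-running tasks can be launched at once, with none of them blocking the others:

```python
import asyncio

async def run_task(name: str, seconds: float) -> str:
    # Simulate a long-running coding task; "await" yields control
    # so the other tasks can make progress in the meantime.
    await asyncio.sleep(seconds)
    return f"{name} done"

async def main() -> list[str]:
    # Start three tasks concurrently; gather collects their results
    # in input order once all of them have finished.
    return await asyncio.gather(
        run_task("fix-bug", 0.02),
        run_task("write-tests", 0.01),
        run_task("update-deps", 0.015),
    )

print(asyncio.run(main()))
```

The total wall-clock time is roughly that of the slowest task, not the sum of all three, which is the practical payoff of asynchronous execution.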


A Appendix

Neural Information Processing Systems

A.1 Illustration of group actions

This section is intended to provide a visual, more intuitive understanding of the different group actions on the tensors of our network. We begin with a visualization of the group action for the input space. We illustrate it with the sequence GGACT, whose reverse complement is AGTCC. The representation with arbitrary P can mix an arbitrary number of channels together with the group action. Cohen et al. [11, Theorem 3.3] gives a general result about linear equivariant mappings.
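The reverse-complement action from the example above can be sketched in a few lines of Python (a hypothetical helper, not the paper's code): each base is swapped for its Watson-Crick complement and the sequence is reversed.

```python
# Base-pairing table for DNA
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(seq: str) -> str:
    # Complement each base, then reverse the whole sequence
    return "".join(COMPLEMENT[b] for b in reversed(seq))

print(reverse_complement("GGACT"))  # AGTCC, as in the example
```

Note that applying the map twice returns the original sequence, which is exactly why the reverse complement generates a group action of Z/2 on the input space.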


A Proof of the Lower Bounds

Neural Information Processing Systems

A.1 Lower Bound on the Exploration Cost: Proof of Theorem 1

Let us denote, for any model µ ∈ R^(K×M) and agent m, k*_m = arg max_k µ_{m,k}. Assume that the stopping time τ is almost surely finite under µ for algorithm A. Consider an event E; under the alternative model, arm k is optimal for agent m. For cumulative regret, the change-of-distribution lemma becomes an asymptotic result, stated below.

Lemma 6. Fix µ ∈ R^(K×M). The proof uses a change of distribution, following a technique proposed by [16]. The conclusion follows from some elementary real analysis showing that the right-hand side of the inequality is larger than (1 − ε) log(T) for T large enough (how large depends in a complex way on µ, λ, ε, and the algorithm). At this point, we would really like to select the alternative model λ that leads to the tightest inequality in Lemma 6. However, this choice of alternative λ depends on T, so we cannot apply Lemma 6, which is asymptotic in T and holds only for a fixed λ.
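For context, change-of-distribution arguments of this kind are typically built on the following standard inequality (a sketch of the usual single-agent form; the exact event E and the indexing over agents used in this paper depend on its own definitions):

```latex
% Standard change-of-distribution inequality (sketch).
% N_k(\tau): number of draws of arm k up to the stopping time \tau;
% kl(x,y) = x\log(x/y) + (1-x)\log((1-x)/(1-y)) is the binary relative entropy.
\sum_{k} \mathbb{E}_{\mu}\!\left[N_k(\tau)\right]\,
  \mathrm{KL}\!\left(\mu_k, \lambda_k\right)
\;\ge\;
\mathrm{kl}\!\left(\mathbb{P}_{\mu}(E), \mathbb{P}_{\lambda}(E)\right)
```

Intuitively, if an algorithm behaves very differently under µ and under an alternative λ (the right-hand side is large), it must have drawn the arms on which the two models differ sufficiently often (the left-hand side).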


A Gaming YouTuber Says an AI-Generated Clone of His Voice Is Being Used to Narrate 'Doom' Videos

WIRED

On a little-known YouTube channel, a breezy British narrator is explaining the ins and outs of Doom: The Dark Ages' story. Though not named, his voice may be familiar to video game fans as that of Mark Brown. The trouble is, Brown had nothing to do with the video. Brown, who goes by Game Maker's Toolkit, is a content creator and developer who's covered video game design for over a decade. His channel has 220 videos, broadcast to over 1.65 million subscribers, where he gives in-depth explanations of things like puzzle mechanics in Blue Prince or addresses UI problems in The Legend of Zelda: Echoes of Wisdom.


Unpacking the Flaws of Techbro Dreams of the Future

Mother Jones

Cutaway view of a fictional space colony concept painted by artist Rick Guidice as part of a NASA art program in the 1970s. This story was originally published by Undark and is reproduced here as part of the Climate Desk collaboration. Elon Musk once joked: "I would like to die on Mars, just not on impact." Musk is, in fact, deadly serious about colonizing the Red Planet. Part of his motivation is the idea of having a "back-up" planet in case some future catastrophe renders the Earth uninhabitable. Musk has suggested that a million people may be calling Mars home by 2050 -- and he's hardly alone in his enthusiasm. Venture capitalist Marc Andreessen believes the world can easily support 50 billion people, and more than that once we settle other planets. And Jeff Bezos has spoken of exploiting the resources of the moon and the asteroids to build giant space stations. "I would love to see a trillion humans living in the solar system," he has said. Not so fast, cautions science journalist Adam Becker.


Collaborative Video Diffusion: Consistent Multi-video Generation with Camera Control

Neural Information Processing Systems

Research on video generation has recently made tremendous progress, enabling high-quality videos to be generated from text prompts or images. Adding control to the video generation process is an important goal moving forward and recent approaches that condition video generation models on camera trajectories make strides towards it. Yet, it remains challenging to generate a video of the same scene from multiple different camera trajectories. Solutions to this multi-video generation problem could enable large-scale 3D scene generation with editable camera trajectories, among other applications. We introduce collaborative video diffusion (CVD) as an important step towards this vision. The CVD framework includes a novel cross-video synchronization module that promotes consistency between corresponding frames of the same video rendered from different camera poses using an epipolar attention mechanism. Trained on top of a state-of-the-art camera-control module for video generation, CVD generates multiple videos rendered from different camera trajectories with significantly better consistency than baselines, as shown in extensive experiments.



AR-Pro: Counterfactual Explanations for Anomaly Repair with Formal Properties

Neural Information Processing Systems

Anomaly detection is widely used for identifying critical errors and suspicious behaviors, but current methods lack interpretability. We leverage common properties of existing methods and recent advances in generative models to introduce counterfactual explanations for anomaly detection. Given an input, we generate its counterfactual as a diffusion-based repair that shows what a non-anomalous version should have looked like. A key advantage of this approach is that it enables a domain-independent formal specification of explainability desiderata, offering a unified framework for generating and evaluating explanations. We demonstrate the effectiveness of our anomaly explainability framework, AR-Pro, on vision (MVTec, VisA) and time-series (SWaT, WADI, HAI) anomaly datasets. The code used for the experiments is accessible at: https://github.com/xjiae/arpro.


Ghost kitchen delivery drivers have overrun an Echo Park neighborhood, say frustrated residents

Los Angeles Times

As soon as Echo Park Eats opened on the corner of Sunset Boulevard and Douglas Street in the fall of 2023, Sandy Romero said her neighborhood became overrun with delivery drivers. "The first day that they opened business it was chaotic, unorganized and it's just such a nuisance now," she said. Echo Park Eats is a ghost kitchen, a meal preparation hub for app-based delivery orders. It rents its kitchens to 26 different food vendors. The facility is part of CloudKitchens, led by Travis Kalanick, co-founder of Uber Technologies, which has kitchen locations across the nation including 11 in Los Angeles County.