Musk v. Altman week 1: Elon Musk says he was duped, warns AI could kill us all, and admits that xAI distills OpenAI's models
Musk kept his cool, and OpenAI's lawyer bulldozed him with piercing questions about his motivations for suing the company. In the first week of the landmark trial between Elon Musk and OpenAI, Musk took the stand in a crisp black suit and tie and argued that OpenAI CEO Sam Altman and president Greg Brockman had deceived him into bankrolling the company. Along the way, he warned that AI could destroy us all and sat through revelations that he had poached OpenAI employees for his own companies. He even confessed, to some audible gasps in the courtroom, that his own AI company, xAI, which makes the chatbot Grok, uses OpenAI's models to train its own. The federal courthouse in Oakland, California, was packed with armies of lawyers carrying boxes of exhibits, journalists typing away at their laptops, and a handful of concerned OpenAI employees. Outside, protesters lined the streets, carrying signs urging people to quit ChatGPT, boycott Tesla, or both.
- Law > Litigation (0.35)
- Media > News (0.34)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (1.00)
Musk accuses OpenAI lawyer of trying to 'trick' him in combative testimony
In his second day on the stand, Elon Musk was at times combative under questioning by OpenAI's lawyer, whom he accused of asking overly complicated questions. "Your questions are not simple," he told lawyer William Savitt at one point. "They're designed to trick me, essentially." Musk is suing fellow OpenAI co-founder Altman and the AI firm, alleging they misled him by shifting the organisation away from its non-profit roots toward a for-profit model. OpenAI says Musk is motivated by jealousy and regret for walking away from the company in 2018. It has also accused Musk, head of xAI, of trying to derail one of his key rivals.
- Europe > United Kingdom (0.50)
- North America > United States (0.30)
- Law > Litigation (0.54)
- Leisure & Entertainment > Sports (0.43)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.99)
Musk says basis of charitable giving at stake in OpenAI lawsuit
A trial pitting two founders of OpenAI - Sam Altman and Elon Musk - against each other has opened in California, with the sides presenting duelling narratives about the company's history and obligations to consumers. Musk, wearing a dark suit and tie, was asked by one of his lawyers what the lawsuit was about when he took the stand. "It's actually very simple," he said. "It's not okay to steal a charity... If it's okay to loot a charity, the entire foundation of charitable giving will be destroyed."
- Leisure & Entertainment (1.00)
- Law > Litigation (0.92)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.64)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.50)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.39)
Musk v Altman: The most toxic row in tech goes on trial
The bitter feud between Elon Musk and OpenAI boss Sam Altman has raged for years, but has mostly played out online in the form of accusations, counter-accusations and jibes. But starting on Tuesday, the beef between the two tech billionaires will shift to a much higher-profile forum: a federal courtroom in California, where their row will be the focus of a month-long trial. At issue is Musk's claim that Altman - with whom he founded OpenAI - has swindled him out of millions of dollars and reneged on the ChatGPT-maker's original non-profit mission. Musk and Altman themselves will be among those to testify in a case in which the future of AI could be at stake. And while one will presumably emerge the winner, it's plausible that neither will emerge from the saga unscathed.
- Leisure & Entertainment > Sports (0.52)
- Law > Litigation (0.34)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.88)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.57)
CRPS-Optimal Binning for Univariate Conformal Regression
We propose a method for non-parametric conditional distribution estimation based on partitioning covariate-sorted observations into contiguous bins and using the within-bin empirical CDF as the predictive distribution. Bin boundaries are chosen to minimise the total leave-one-out Continuous Ranked Probability Score (LOO-CRPS), which admits a closed-form cost function with $O(n^2 \log n)$ precomputation and $O(n^2)$ storage; the globally optimal $K$-partition is recovered by a dynamic programme in $O(n^2 K)$ time. Minimisation of within-sample LOO-CRPS turns out to be inappropriate for selecting $K$ as it results in in-sample optimism. We instead select $K$ by $K$-fold cross-validation of test CRPS, which yields a U-shaped criterion with a well-defined minimum. Having selected $K^*$ and fitted the full-data partition, we form two complementary predictive objects: the Venn prediction band and a conformal prediction set based on CRPS as the nonconformity score, which carries a finite-sample marginal coverage guarantee at any prescribed level $\varepsilon$. The conformal prediction is transductive and data-efficient, as all observations are used for both partitioning and p-value calculation, with no need to reserve a hold-out set. On real benchmarks against split-conformal competitors (Gaussian split conformal, CQR, CQR-QRF, and conformalized isotonic distributional regression), the method produces substantially narrower prediction intervals while maintaining near-nominal coverage.
- North America > United States > Virginia > Virginia Beach (0.04)
- Europe > France > Occitanie > Haute-Garonne > Toulouse (0.04)
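The abstract above describes recovering a globally optimal contiguous K-partition of sorted observations by dynamic programming over a precomputed bin-cost function. A minimal sketch of that dynamic programme, under stated assumptions: the paper's closed-form LOO-CRPS bin cost is replaced here by a stand-in (within-bin sum of squared deviations), and the function name `optimal_k_partition` is illustrative, not from the paper.

```python
import math

def optimal_k_partition(values, K):
    """Split sorted `values` into K contiguous bins minimising total cost.

    The cost of a bin is the within-bin sum of squared deviations, a
    stand-in for the paper's closed-form LOO-CRPS bin cost.
    """
    n = len(values)
    # Prefix sums let each bin cost be evaluated in O(1).
    prefix = [0.0] * (n + 1)
    prefix_sq = [0.0] * (n + 1)
    for i, v in enumerate(values):
        prefix[i + 1] = prefix[i] + v
        prefix_sq[i + 1] = prefix_sq[i] + v * v

    def cost(i, j):
        # Sum of squared deviations of values[i:j] from their mean.
        m = j - i
        s = prefix[j] - prefix[i]
        return (prefix_sq[j] - prefix_sq[i]) - s * s / m

    # dp[k][j]: best total cost of splitting the first j points into k bins.
    INF = math.inf
    dp = [[INF] * (n + 1) for _ in range(K + 1)]
    back = [[0] * (n + 1) for _ in range(K + 1)]
    dp[0][0] = 0.0
    for k in range(1, K + 1):
        for j in range(k, n + 1):
            for i in range(k - 1, j):
                c = dp[k - 1][i] + cost(i, j)
                if c < dp[k][j]:
                    dp[k][j], back[k][j] = c, i

    # Backtrack to recover the bin boundaries as (start, end) index pairs.
    bounds, j = [], n
    for k in range(K, 0, -1):
        i = back[k][j]
        bounds.append((i, j))
        j = i
    return dp[K][n], bounds[::-1]
```

With an O(1) bin cost this runs in the O(n^2 K) time the abstract quotes for the dynamic programme; the paper's O(n^2 log n) precomputation step concerns its closed-form LOO-CRPS cost, which is not reproduced here.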
Learning Sparse Gaussian Graphical Models with Overlapping Blocks
We present a novel framework, called GRAB (GRaphical models with overlApping Blocks), to capture densely connected components in a network estimate. GRAB takes as input a data matrix of p variables and n samples, and jointly learns both a network among p variables and densely connected groups of variables (called 'blocks'). GRAB has four major novelties as compared to existing network estimation methods: 1) It does not require the blocks to be given a priori.
- North America > United States > California > San Diego County > San Diego (0.04)
- Europe > Middle East > Republic of Türkiye > Istanbul Province > Istanbul (0.04)
- Europe > Italy (0.04)
- Research Report > Experimental Study (1.00)
- Research Report > New Finding (0.92)
- Information Technology (0.46)
- Education (0.45)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Search (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Optimization (1.00)
- Information Technology > Artificial Intelligence > Natural Language (0.67)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.45)
fd78f2f65881c1c7ce47e26b040cf48f-Supplemental-Datasets_and_Benchmarks.pdf
License: We release the code used to build our benchmark and perform our experiments under the MIT License (https://mit-license.org/), whereas we release data we created, including the performance metrics collected by us, the splits used to train, validate and test our surrogate models, and our surrogate models, under the CC BY 4.0 License (https://creativecommons.

Compute resources: We trained the configurations on a large SLURM-based cluster with approximately 300,000 CPU cores available in parallel. This ensures that all three data splits retain all or most of the statistical properties, including any biases, of the original performance dataset. Whereas fitting XGBoost used mean squared error as a regression metric, quality of fit for hyperparameters was judged using Kendall's tau rank correlation values.

Task                   Speedup over HPO-only   Speedup over NAS-only
CIFAR-10               54.7                    33.7
Colorectal-Histology   75.2                    20.1
Fashion-MNIST          8.5                     34.6
Geometric mean         32.7                    28.6

resource consumption for our experiments performed on an Intel(R) Xeon(R) Gold 6242 CPU @ 2.80GHz to be 1.75 CPU-core-hours.
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
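The supplement above judges hyperparameter fit using Kendall's tau rank correlation. A minimal pure-Python illustration of the tau-a variant (no tie correction; the variant the supplement used is not stated, so this choice is an assumption, and `kendall_tau` is an illustrative name):

```python
from itertools import combinations

def kendall_tau(x, y):
    """Kendall's tau-a rank correlation between two equal-length sequences.

    Counts concordant and discordant pairs; ties contribute to neither,
    and the denominator is the total number of pairs (tau-a convention).
    """
    n = len(x)
    concordant = discordant = 0
    for i, j in combinations(range(n), 2):
        s = (x[i] - x[j]) * (y[i] - y[j])
        if s > 0:
            concordant += 1
        elif s < 0:
            discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)
```

In practice a library implementation such as `scipy.stats.kendalltau` (which defaults to the tie-corrected tau-b) would be used on surrogate-model predictions versus observed performance values.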