Collaborating Authors

 baron


Ex-Washington Post chief blasts 'gutless' Bezos as paper rocked by major layoffs

FOX News

Former Washington Post executive editor Marty Baron offered a blistering statement in reaction to the sweeping layoffs hitting the paper, taking aim at its owner, Jeff Bezos.



Can tracking make my sleep worse? The quiet torment of sleep tech.

Popular Science

You know the ticking tyranny of 2 a.m. after you climbed into bed, responsibly, at 11. As the minutes go by, all you can think about is the importance of good sleep for function, mood, and productivity. What's worse, the big white letters of your sleep score will read "poor," like a grade on a middle school quiz. And while health-tracking devices have helped many people gain insight into their bodies, hyperfixation on sleep metrics can backfire.


Sampling Bag of Views for Open-Vocabulary Object Detection

Choi, Hojun, Choe, Junsuk, Shim, Hyunjung

arXiv.org Artificial Intelligence

Existing open-vocabulary object detection (OVD) methods handle unseen categories by aligning object region embeddings with corresponding VLM features. A recent study leverages the idea that VLMs implicitly learn compositional structures of semantic concepts within an image. Instead of using an individual region embedding, it uses a bag of region embeddings as a new representation that incorporates compositional structures into the OVD task. However, this approach often fails to capture the contextual concepts of each region, leading to noisy compositional structures. This results in only marginal performance improvements and reduced efficiency. To address this, we propose a novel concept-based alignment method that samples a more powerful and efficient compositional structure. Our approach groups contextually related "concepts" into a bag and adjusts the scale of concepts within the bag for more effective embedding alignment. Combined with Faster R-CNN, our method achieves improvements of 2.6 box AP50 and 0.5 mask AP over prior work on novel categories in the open-vocabulary COCO and LVIS benchmarks. Furthermore, our method reduces CLIP computation in FLOPs by 80.3% compared to previous research, significantly enhancing efficiency. Experimental results demonstrate that the proposed method outperforms previous state-of-the-art models on the OVD datasets.
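
The bag-based alignment the abstract describes can be illustrated with a minimal sketch. The function names, the per-concept scaling weights, and the aggregation rule below are hypothetical simplifications for illustration, not the authors' actual method:

```python
import numpy as np

def l2_normalize(x, axis=-1):
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def bag_alignment_score(region_embs, scales, text_emb):
    """Aggregate a bag of region ('concept') embeddings into one vector,
    then score it against a VLM text embedding by cosine similarity.

    region_embs: (n, d) embeddings of contextually related regions
    scales:      (n,)   per-concept weights (hypothetical scaling step)
    text_emb:    (d,)   CLIP-style text embedding for one category
    """
    region_embs = l2_normalize(region_embs)
    bag_emb = (scales[:, None] * region_embs).sum(axis=0)  # weighted bag
    bag_emb = l2_normalize(bag_emb)
    return float(bag_emb @ l2_normalize(text_emb))

# Toy example: three related regions scored against one category embedding.
rng = np.random.default_rng(0)
regions = rng.normal(size=(3, 512))
weights = np.array([0.5, 0.3, 0.2])  # concepts scaled within the bag
text = rng.normal(size=512)
print(bag_alignment_score(regions, weights, text))
```

Scoring one aggregated bag vector per group, rather than one vector per region, is also what drives the efficiency claim: fewer embeddings need to be compared against the CLIP text features.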


Global Optimization: A Machine Learning Approach

Bertsimas, Dimitris, Margaritis, Georgios

arXiv.org Artificial Intelligence

Many approaches for addressing Global Optimization problems typically rely on relaxations of nonlinear constraints over specific mathematical primitives. This is restrictive in applications whose constraints are black-box, implicit, or consist of more general primitives. To address such limitations, Bertsimas and Ozturk (2023) proposed OCTHaGOn as a way of solving black-box global optimization problems by approximating the nonlinear constraints using hyperplane-based decision trees and then using those trees to construct a unified mixed-integer optimization (MIO) approximation of the original problem. We extend this approach by (i) approximating the original problem using other MIO-representable ML models besides decision trees, such as Gradient Boosted Trees, Multi-Layer Perceptrons, and Support Vector Machines, (ii) proposing adaptive sampling procedures for more accurate machine-learning-based constraint approximations, (iii) utilizing robust optimization to account for the uncertainty of the sample-dependent training of the ML models, and (iv) leveraging a family of relaxations to address infeasibilities of the final MIO approximation. We then test the enhanced framework on 81 Global Optimization instances. We show improvements in solution feasibility and optimality in the majority of instances. We also compare against BARON, showing improved optimality gaps or solution times in 11 instances.
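
A minimal sketch of the core idea: learn a black-box constraint from samples with a tree whose splits can later be encoded in an MIO model. The constraint g, the sampling box, and the classifier settings are illustrative assumptions, and sklearn's axis-aligned trees stand in for the hyperplane-based trees the paper uses:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Hypothetical black-box constraint: x is feasible iff g(x) <= 0.
def g(x):
    return np.sin(x[0]) * x[1] + 0.1 * x[0] ** 2 - 1.0

# Sample the domain and label each point's feasibility.
rng = np.random.default_rng(0)
X = rng.uniform(-3.0, 3.0, size=(2000, 2))
y = np.array([g(x) <= 0 for x in X])

# A shallow tree is MIO-representable: each leaf is a conjunction of
# linear inequalities, selectable with binary variables in the MIO.
tree = DecisionTreeClassifier(max_depth=4).fit(X, y)
print("training accuracy:", tree.score(X, y))
```

The adaptive sampling the abstract mentions would then concentrate new samples near the learned decision boundary, where the approximation of g is least reliable, before retraining.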


Projection-Free Online Convex Optimization via Efficient Newton Iterations

Gatmiry, Khashayar, Mhammedi, Zakaria

arXiv.org Artificial Intelligence

This paper presents new projection-free algorithms for Online Convex Optimization (OCO) over a convex domain $\mathcal{K} \subset \mathbb{R}^d$. Classical OCO algorithms (such as Online Gradient Descent) typically need to perform Euclidean projections onto the convex set $\mathcal{K}$ to ensure feasibility of their iterates. Alternative algorithms, such as those based on the Frank-Wolfe method, swap potentially expensive Euclidean projections onto $\mathcal{K}$ for linear optimization over $\mathcal{K}$. However, such algorithms have sub-optimal regret in OCO compared to projection-based algorithms. In this paper, we consider a third type of algorithm that outputs approximate Newton iterates using a self-concordant barrier for the set of interest. The use of a self-concordant barrier automatically ensures feasibility without the need for projections. However, computing the Newton iterates requires a matrix inverse, which can still be expensive. As our main contribution, we show how the stability of the Newton iterates can be leveraged to compute the inverse Hessian only on a vanishing fraction of the rounds, leading to a new efficient projection-free OCO algorithm with a state-of-the-art regret bound.
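
To make the barrier idea concrete, here is a minimal sketch of a damped Newton step on a log-barrier for a box domain; the update rule, the damping, and the naive "compute the inverse once, then reuse it" caching are simplified assumptions for illustration, not the paper's algorithm:

```python
import numpy as np

def barrier_grad_hess(x, lo, hi):
    """Gradient and Hessian of the log-barrier for the box lo <= x <= hi:
    phi(x) = -sum(log(x - lo)) - sum(log(hi - x)), which is self-concordant."""
    g = -1.0 / (x - lo) + 1.0 / (hi - x)
    h = np.diag(1.0 / (x - lo) ** 2 + 1.0 / (hi - x) ** 2)
    return g, h

def newton_step(x, loss_grad, lo, hi, eta=0.1, H_inv=None):
    """One damped Newton iterate on eta * <loss_grad, x> + phi(x).
    Reusing a cached H_inv across rounds mimics (crudely) exploiting the
    stability of the iterates to avoid recomputing the matrix inverse."""
    bg, bh = barrier_grad_hess(x, lo, hi)
    if H_inv is None:
        H_inv = np.linalg.inv(bh)       # the expensive step, done rarely
    d = H_inv @ (eta * loss_grad + bg)  # approximate Newton direction
    lam = np.sqrt(d @ (bh @ d))         # Newton decrement
    return x - d / (1.0 + lam), H_inv   # damping keeps x strictly feasible

lo, hi = np.zeros(3), np.ones(3)
x, H_inv = np.full(3, 0.5), None
for t in range(5):
    g_t = np.random.default_rng(t).normal(size=3)  # adversarial loss gradient
    x, H_inv = newton_step(x, g_t, lo, hi, H_inv=H_inv)
print(x)
```

Because every iterate stays strictly inside the box, no projection is ever needed; the paper's contribution is making the H_inv refresh schedule rigorous so it happens only on a vanishing fraction of rounds.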


Google invented the AI version of a Hallmark card

#artificialintelligence

I don't have the time, energy, or attention span to give every email a thoughtful reply. It's a problem Google has been trying to solve with a Gmail feature called Smart Replies, the automatically generated, prewritten responses that pop up when you're composing an email. But I worry these simple responses will make us lazy and our language homogeneous. Email's terrible, but do I now need to worry about it destroying language and cratering our relationships, too? Most short email responses aren't carefully written as it is, so we aren't exactly losing out on poetry, says Naomi Baron, a professor of linguistics emerita at American University and author of Words Onscreen: The Fate of Reading in a Digital World. "We like to assume that we're more creative than we actually are," she says.


AI Is Less Of A Threat Than Some Suggest

#artificialintelligence

While robotics and artificial intelligence (AI) promise great advances in productivity, mostly they seem to worry people. Commentators talk and write endlessly about how these marvelous technologies will steal jobs from both workers and the managerial class, creating a large unemployed population. If history has anything to say, however, and it does, such fears are not only exaggerated but off the mark entirely. Ultimately, AI will create more new jobs than it destroys, likely in occupations heretofore nonexistent. Popular commentary on this matter maintains an almost universally downbeat tone.


Monitoring with Artificial Intelligence and Machine Learning · Baron Schwartz's Blog

#artificialintelligence

Artificial intelligence and machine learning (AI and ML) are so over-hyped today that I usually don't talk about them. But there are real and valid uses for these technologies in monitoring and performance management. Some companies have already been employing ML and AI with good results for a long time. VividCortex's own adaptive fault detection uses ML, a fact we don't generally publicize. AI and ML aren't magic, and I think we need a broader understanding of this.
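
As a flavor of the kind of statistical baseline such monitoring systems build on, here is a toy rolling z-score detector; it is a generic illustration of anomaly flagging on a metric stream, not VividCortex's actual adaptive fault detection:

```python
import numpy as np

def rolling_zscore_alerts(series, window=60, threshold=4.0):
    """Flag points that deviate from a trailing mean by more than
    `threshold` standard deviations (a toy detector, for illustration)."""
    alerts = []
    for i in range(window, len(series)):
        hist = series[i - window:i]
        mu, sigma = hist.mean(), hist.std()
        if sigma > 0 and abs(series[i] - mu) > threshold * sigma:
            alerts.append(i)
    return alerts

# Toy latency metric with an injected spike at t = 200.
rng = np.random.default_rng(1)
latency = rng.normal(100.0, 5.0, size=300)
latency[200] += 60.0
print(rolling_zscore_alerts(latency))
```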


Should we worry about rigged priors? A long discussion.

#artificialintelligence

Today's discussion starts with Stuart Buck, who came across a post by John Cook linking to my post, "Bayesian statistics: What's it all about?". Cook wrote about the benefit of prior distributions in making assumptions explicit. Buck shared Cook's post with Jon Baron, who wrote: My concern is that if researchers are systematically too optimistic (or even self-deluded) about the prior evidence, which I think is usually the case, then using prior distributions as the basis for their new study can lead to too much statistical confidence in the study's results, and so could compound the problem. My response to Jon is that I think all aspects of a model should be justified.
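
Jon Baron's worry can be made concrete with a small worked example; the data and the prior parameters below are illustrative numbers I chose, not anything from the discussion:

```python
from scipy import stats

# New study: 30 successes out of 50 trials of a treatment.
k, n = 30, 50

# A skeptical prior centered at 0.5 vs. an "optimistic" prior encoding
# the researcher's preexisting belief in a large effect. With a binomial
# likelihood, the Beta(a, b) prior updates to Beta(a + k, b + n - k).
for name, a, b in [("skeptical Beta(10, 10)", 10, 10),
                   ("optimistic Beta(20, 5)", 20, 5)]:
    post = stats.beta(a + k, b + n - k)
    print(f"{name}: P(rate > 0.5 | data) = {1 - post.cdf(0.5):.3f}")
```

The same data yield markedly higher posterior confidence in an effect under the optimistic prior, which is exactly the compounding Baron describes when the prior itself reflects wishful reading of the earlier evidence.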