
Underdamped Langevin MCMC with third order convergence

Scott, Maximilian, O'Kane, Dáire, Jelinčič, Andraž, Foster, James

arXiv.org Machine Learning

In this paper, we propose a new numerical method for the underdamped Langevin diffusion (ULD) and present a non-asymptotic analysis of its sampling error in the 2-Wasserstein distance when the $d$-dimensional target distribution $p(x)\propto e^{-f(x)}$ is strongly log-concave and has varying degrees of smoothness. Precisely, under the assumptions that the gradient and Hessian of $f$ are Lipschitz continuous, our algorithm achieves a 2-Wasserstein error of $\varepsilon$ in $\mathcal{O}(\sqrt{d}/\varepsilon)$ and $\mathcal{O}(\sqrt{d}/\sqrt{\varepsilon})$ steps, respectively. Therefore, our algorithm has similar complexity to other popular Langevin MCMC algorithms under matching assumptions. However, if we additionally assume that the third derivative of $f$ is Lipschitz continuous, then our algorithm achieves a 2-Wasserstein error of $\varepsilon$ in $\mathcal{O}(\sqrt{d}/\varepsilon^{\frac{1}{3}})$ steps. To the best of our knowledge, this is the first gradient-only method for ULD with third order convergence. To support our theory, we perform Bayesian logistic regression across a range of real-world datasets, where our algorithm achieves competitive performance compared to an existing underdamped Langevin MCMC algorithm and the popular No U-Turn Sampler (NUTS).
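For readers unfamiliar with ULD-based samplers, the diffusion being discretized is $\mathrm{d}x = v\,\mathrm{d}t$, $\mathrm{d}v = -\gamma v\,\mathrm{d}t - \nabla f(x)\,\mathrm{d}t + \sqrt{2\gamma}\,\mathrm{d}W$. The sketch below is a plain Euler–Maruyama discretization for illustration only; it is first order and is not the higher-order method proposed in the paper. The step size `h` and friction `gamma` are illustrative choices.

```python
import numpy as np

def uld_euler_step(x, v, grad_f, gamma, h, rng):
    """One Euler-Maruyama step of underdamped Langevin dynamics:
       dx = v dt,  dv = -gamma*v dt - grad_f(x) dt + sqrt(2*gamma) dW."""
    x_new = x + h * v
    v_new = (v - h * (gamma * v + grad_f(x))
             + np.sqrt(2.0 * gamma * h) * rng.standard_normal(x.shape))
    return x_new, v_new

# Toy target: f(x) = ||x||^2 / 2, so p(x) is the standard Gaussian.
rng = np.random.default_rng(0)
x, v = np.zeros(2), np.zeros(2)
samples = []
for i in range(20000):
    x, v = uld_euler_step(x, v, grad_f=lambda x: x, gamma=2.0, h=0.05, rng=rng)
    if i > 5000:  # discard burn-in
        samples.append(x.copy())
samples = np.asarray(samples)
```

After burn-in, the empirical mean and variance of `samples` should be close to those of the standard Gaussian target, up to $\mathcal{O}(h)$ discretization bias.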


Major League Wrestling champ Alex Kane wants Matt Cardona to take him seriously as PPV nears

FOX News

Fox News Flash top sports headlines are here. Check out what's clicking on Foxnews.com. Alex Kane has been on a roll in the second half of the year with Major League Wrestling. The Georgia native became the MLW World Heavyweight Champion in July with a win over Alex Hammerstone at "Never Say Never" and has successfully defended the title three times since the victory with his faction – the Bomaye Fight Club – behind him. Thursday night will be one of Kane's toughest matches yet as he steps into the ring against the "Indy God" Matt Cardona at "One Shot" in New York City.


Researchers can't say if they can fully remove AI hallucinations: 'inherent' part of 'mismatched' use

FOX News

Former litigator Jacqueline Schafer, the CEO and founder of Clearbrief, said AI is frequently used in courtrooms, and she created Clearbrief to fact-check citations and court docs created by generative AI. Some researchers are increasingly convinced they will not be able to remove hallucinations from artificial intelligence (AI) models, which remain a considerable hurdle for large-scale public acceptance. "We currently do not understand a lot of the black box nature of how machine learning comes to its conclusions," Kevin Kane, CEO of quantum encryption company American Binary, told Fox News Digital. "Under the current approach to walking this path of AI, it's not clear how we would do that. We'd have to change how they work a lot."


The End of Recommendation Letters

The Atlantic - Technology

I was lunching with a group of fellow professors, and, as happens these days when we assemble, generative artificial intelligence was discussed. Are your students using it? What are you doing to prevent cheating? Heads were shaken in chagrin as iced teas were sipped for comfort. But then, one of my colleagues wondered: Could he use AI to generate a reference letter for a student?


Gaussian Mean Testing Made Simple

Diakonikolas, Ilias, Kane, Daniel M., Pensia, Ankit

arXiv.org Artificial Intelligence

We study the following fundamental hypothesis testing problem, which we term Gaussian mean testing. Given i.i.d. samples from a distribution $p$ on $\mathbb{R}^d$, the task is to distinguish, with high probability, between the following cases: (i) $p$ is the standard Gaussian distribution, $\mathcal{N}(0,I_d)$, and (ii) $p$ is a Gaussian $\mathcal{N}(\mu,\Sigma)$ for some unknown covariance $\Sigma$ and mean $\mu \in \mathbb{R}^d$ satisfying $\|\mu\|_2 \geq \epsilon$. Recent work gave an algorithm for this testing problem with the optimal sample complexity of $\Theta(\sqrt{d}/\epsilon^2)$. Both the previous algorithm and its analysis are quite complicated. Here we give an extremely simple algorithm for Gaussian mean testing with a one-page analysis. Our algorithm is sample optimal and runs in sample-linear time.
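To make the testing problem concrete, one classical (illustrative) statistic is the inner product of the means of two disjoint halves of the sample: its expectation is $0$ under $\mathcal{N}(0, I_d)$ and $\|\mu\|_2^2$ under mean $\mu$. This sketch is not the algorithm from the paper, only a simple baseline for the same task; the dimensions, sample sizes, and threshold are arbitrary choices.

```python
import numpy as np

def split_mean_statistic(samples):
    """Inner product of the two half-sample means.
       E[T] = 0 under N(0, I_d); E[T] = ||mu||^2 under mean mu."""
    n = len(samples) // 2
    return float(np.dot(samples[:n].mean(axis=0), samples[n:2 * n].mean(axis=0)))

rng = np.random.default_rng(1)
d, n = 50, 4000
mu = np.full(d, 0.5 / np.sqrt(d))         # ||mu||_2 = 0.5

null_samples = rng.standard_normal((n, d))       # case (i)
alt_samples = rng.standard_normal((n, d)) + mu   # case (ii), Sigma = I_d

t_null = split_mean_statistic(null_samples)  # concentrates near 0
t_alt = split_mean_statistic(alt_samples)    # concentrates near 0.25
```

Thresholding the statistic (e.g. at $\|\mu\|_2^2 / 2$) then separates the two cases with high probability once the sample size is large enough.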


Ford is ready for the autonomous car. Are drivers?

AITopics Original Links

The auto industry has already developed all the technology necessary to create truly autonomous vehicles, Ford (s f) engineers claim. The reason there aren't driverless cars all over the road today is in part a cost issue -- the sensors and automated intelligence required aren't cheap -- but mainly one of driver mindset. Your typical commuter isn't quite ready to take the sizable leap from cruise control to completely automated driving. "There is no technology barrier from going where we are now to the autonomous car," said Jim McBride, a Ford Research and Innovation technical expert who specializes in autonomous vehicle technologies. "There are affordability issues, but the big barrier to overcome is customer acceptance."


The Sample Complexity of Robust Covariance Testing

Diakonikolas, Ilias, Kane, Daniel M.

arXiv.org Machine Learning

We study the problem of testing the covariance matrix of a high-dimensional Gaussian in a robust setting, where the input distribution has been corrupted in Huber's contamination model. Specifically, we are given i.i.d. samples from a distribution of the form $Z = (1-\epsilon) X + \epsilon B$, where $X$ is a zero-mean and unknown covariance Gaussian $\mathcal{N}(0, \Sigma)$, $B$ is a fixed but unknown noise distribution, and $\epsilon>0$ is an arbitrarily small constant representing the proportion of contamination. We want to distinguish between the cases that $\Sigma$ is the identity matrix versus $\gamma$-far from the identity in Frobenius norm. In the absence of contamination, prior work gave a simple tester for this hypothesis testing task that uses $O(d)$ samples. Moreover, this sample upper bound was shown to be best possible, within constant factors. Our main result is that the sample complexity of covariance testing dramatically increases in the contaminated setting. In particular, we prove a sample complexity lower bound of $\Omega(d^2)$ for $\epsilon$ an arbitrarily small constant and $\gamma = 1/2$. This lower bound is best possible, as $O(d^2)$ samples suffice to even robustly {\em learn} the covariance. The conceptual implication of our result is that, for the natural setting we consider, robust hypothesis testing is at least as hard as robust estimation.
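The Huber contamination model in the abstract is easy to simulate directly, which can help build intuition for why a small constant fraction of adversarial points matters. The sketch below draws from $Z = (1-\epsilon) X + \epsilon B$ with an arbitrary choice of noise distribution $B$; it illustrates only the sampling model, not the paper's lower-bound construction.

```python
import numpy as np

def sample_contaminated(n, d, eps, sigma, noise_sampler, rng):
    """Draw n i.i.d. samples from Z = (1 - eps) * N(0, Sigma) + eps * B."""
    corrupt = rng.random(n) < eps             # which draws come from B
    L = np.linalg.cholesky(sigma)
    z = rng.standard_normal((n, d)) @ L.T     # clean Gaussian component
    z[corrupt] = noise_sampler(int(corrupt.sum()), d)
    return z, corrupt

rng = np.random.default_rng(2)
d, n, eps = 5, 100000, 0.1
z, corrupt = sample_contaminated(
    n, d, eps, np.eye(d),
    # An arbitrary heavy noise distribution B, chosen for illustration.
    noise_sampler=lambda m, k: 10.0 * rng.standard_normal((m, k)),
    rng=rng,
)
```

Roughly an $\epsilon$ fraction of the rows of `z` come from $B$; the paper's result says that, even for arbitrarily small constant $\epsilon$, distinguishing $\Sigma = I$ from $\Sigma$ far from $I$ now requires $\Omega(d^2)$ samples.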


Digital Banking Strategies Hampered By AI Talent Gap

#artificialintelligence

In research done by the Digital Banking Report, financial organizations of all sizes indicate a low level of data maturity despite an increasing array of AI solutions offered by third-party vendors. In the research, only 12% of organizations believed they were "very effective" or "extremely effective" at using data and advanced analytics. This is lower than prior to the onset of COVID-19. While legacy systems are cited as the primary reason for the shortfall, the second most cited challenge is the lack of expertise within the organization to deploy AI technology effectively. In other words, while banks and credit unions can purchase sophisticated AI solutions, there usually isn't a defined path to achieving strategic goals or increasing business value.


How Voice AI is Disrupting Industries: Recap of a VUX World Podcast Episode

#artificialintelligence

A recent VUX World podcast took a deep dive into how voice AI is disrupting multiple industries. Special guest Mike Zagorsek, VP of product marketing at SoundHound Inc., spoke with hosts Kane Simms and Dustin Coates of VUX World about the Houndify Voice AI platform. He highlighted how some of our partners (Mercedes-Benz, Pandora, and Mastercard) are using voice to create deeper relationships with their customers and extending the functionality and convenience of their products and services. The following is a recap of some of that conversation. You can watch and listen to the podcast in its entirety here.


Harvard researchers developed an AI to determine how medical treatments affect life spans

#artificialintelligence

A new AI system that predicts the health spans of mice could help develop life-extension interventions for humans, according to the tool's inventors. The system analyzes established measures of frailty to gauge a mouse's chronological age and its so-called biological age -- the condition of its physical and mental functions. It was created by researchers from Harvard Medical School's Sinclair Lab, who say theirs is the first study to track a mouse's frailty for the duration of its life. They plan to use the predictions to quickly test interventions intended to extend the mice's lives and move towards doing the same in humans. "It can take up to three years to complete a longevity study in mice to see if a particular drug or diet slows the aging process," said study co-first author Alice Kane, a research fellow in genetics at Harvard Medical School's Sinclair Lab.