
A Extension to k-Means and (k, p)-Clustering

Neural Information Processing Systems

The lower bound on opt(U) given in Lemma B.10 holds for ρ-metric spaces with no modifications. By making the appropriate modifications to the proof of Theorem C.1, we can extend this theorem to ρ-metric spaces; in particular, we can obtain a proof of Theorem A.5 by taking the proof of Theorem C.1 and adding extra ρ factors whenever the triangle inequality is applied. We first prove Lemma B.1, which bounds the sizes of the sets U. Henceforth, we fix some positive ξ and a sufficiently large α such that Lemma B.3 holds; the claim then follows by applying Lemmas B.2 and B.4. The remaining inequalities follow from Lemmas B.1, B.7, and B.8, together with an averaging argument and the choice of parameters (one supporting lemma is proven in [25]). The proof of Lemma 3.3 shows that the claim holds with high probability. Finally, by Theorem D.1, any algorithm with O(poly(k)) query time must have Ω(k) amortized update time.
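For context, the "extra ρ factors" above come from the relaxed triangle inequality. A common definition of a ρ-metric space (our assumption here; the paper's exact convention may differ) is a distance function d satisfying

$$ d(x, z) \;\le\; \rho \bigl( d(x, y) + d(y, z) \bigr) \qquad \text{for all } x, y, z, \text{ with } \rho \ge 1. $$

Each application of this inequality in a proof multiplies the resulting bound by a factor of ρ, which is precisely why the extension of Theorem C.1 accumulates extra ρ factors.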


Fully Dynamic k-Clustering in Õ(k) Update Time

Neural Information Processing Systems

Clustering is a fundamental problem in unsupervised learning with several practical applications. In clustering, one is interested in partitioning elements into different groups (i.e., clusters) so that similar elements end up in the same group.
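To make the objective concrete, here is a minimal static k-means sketch (Lloyd's algorithm). This is illustrative only: the paper's contribution is a fully dynamic algorithm that maintains centers under point insertions and deletions, which this static sketch does not attempt. The point data below is made up.

```python
import random

def lloyd_kmeans(points, k, iters=20, seed=0):
    """Minimal static k-means (Lloyd's algorithm) sketch.

    points: list of equal-length numeric tuples.
    Returns k centers after alternating assignment and update steps.
    """
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        # Assignment step: each point joins its nearest center's cluster
        # (squared Euclidean distance).
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda i: sum((a - b) ** 2
                                      for a, b in zip(p, centers[i])))
            clusters[j].append(p)
        # Update step: move each center to the mean of its cluster.
        for j, cl in enumerate(clusters):
            if cl:
                centers[j] = tuple(sum(c) / len(cl) for c in zip(*cl))
    return centers

# Two well-separated groups of toy points.
points = [(0.0, 0.0), (0.1, 0.2), (5.0, 5.0), (5.2, 4.9)]
centers = lloyd_kmeans(points, k=2)
```

A fully dynamic algorithm must keep a comparable solution up to date after each single-point change, rather than rerunning iterations like these from scratch.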




How to reset your terrible streaming recommendations

Popular Science

The best streaming services have vast libraries of content, and that's where recommendations can be useful--guiding you towards the movies and shows you're most likely to enjoy, based on what you've already seen. Maybe someone else (a younger member of the family, perhaps) has been using your account and skewed the recommended titles in a direction you don't like. Maybe your recommendations aren't particularly helpful, or maybe you just want a fresh start away from everything you've watched in the past. In those scenarios and others, resetting your recommendations can help--and it's not difficult to do, no matter the streaming services you use.


Google's best AI research tool is now on your phone

Popular Science

Amidst the flurry of AI announcements and product reveals from Google in recent months, you might have missed one of the most useful AI-powered apps in the whole collection: NotebookLM (that LM stands for Language Model). Perhaps NotebookLM has gone largely under the radar because it was originally launched as more of an academic research tool when it first appeared back in 2023. Its user interface lacks some of the slickness and accessibility of Google Gemini, and it's not quite as obvious how you're supposed to use it, or what it can do. However, NotebookLM is gradually becoming better known amongst consumers, with official apps for Android and iOS now available, alongside the web app.


How to get real-time translations on your phone

Popular Science

Mobile translation apps have improved substantially in recent years--with a little help on speech recognition from AI. Most apps can now keep up with real-time conversations, if your phone has a strong enough internet connection (so the audio can be processed and converted in the cloud). It means if you're trying to hold a conversation with someone in a language you don't know, you no longer need to spend time typing out words and phrases, or trying to figure out spellings and pronunciations. Instead, simply place your phone between you and the other person, and start chatting. There are several apps that can do this for you, but here we'll focus on the free translation apps on your Pixel phone, Galaxy phone, or iPhone.


How to get Gemini to remember (or forget) everything you've said

Popular Science

The upgrades being pushed out for AI chatbots aren't slowing down, and one of the latest improvements added to Google Gemini is an ability for the AI to remember previous conversations. This allows you to refer back to something you've said the previous day, the previous week, or whenever it was. But do you want that? "Gemini can now recall your past chats to provide more helpful responses," explains Google. "Whether you're asking a question about something you've already discussed, or asking Gemini to summarize a previous conversation, Gemini now uses information from relevant chats to craft a response." For now, this is exclusive to Gemini Advanced subscribers and those using Gemini in English, though it may roll out to other users in the future.


How to use tasks and reminders inside ChatGPT

Popular Science

We've seen numerous new features added to ChatGPT in recent months, including updated models, web search capabilities, and the ability to remember what you say to it--and the latest software upgrade added to the AI bot by OpenAI makes it more useful as a general-purpose digital assistant. Beginning in beta form, and available initially to paying subscribers--the feature will reach everyone eventually, OpenAI says--ChatGPT Tasks lets you ask the AI chatbot to perform actions regularly on an automated schedule, or remind you about something in the future. Here's everything you need to know about it. "In this early beta, you can create scheduled tasks that enable ChatGPT to run automated prompts and proactively reach out to you on a scheduled basis," explains OpenAI. Tasks are available on the web, in the mobile apps, and in the macOS desktop app; OpenAI says the feature will reach the Windows desktop app soon.


Training on the Edge of Stability Is Caused by Layerwise Jacobian Alignment

Lowell, Mark, Kastner, Catharine

arXiv.org Machine Learning

During neural network training, the sharpness of the Hessian matrix of the training loss rises until training is on the edge of stability. As a result, even nonstochastic gradient descent does not accurately model the underlying dynamical system defined by the gradient flow of the training loss. We use an exponential Euler solver to train the network without entering the edge of stability, so that we accurately approximate the true gradient descent dynamics. We demonstrate experimentally that the increase in the sharpness of the Hessian matrix is caused by the layerwise Jacobian matrices of the network becoming aligned, so that a small change in the network preactivations near the inputs of the network can cause a large change in the outputs of the network. We further demonstrate that the degree of alignment scales with the size of the dataset by a power law with a coefficient of determination between 0.74 and 0.98.
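The abstract reports that alignment scales with dataset size by a power law, with a coefficient of determination between 0.74 and 0.98. As a hedged illustration of how such a fit is typically computed (this is not the authors' code, and the data points below are invented), one can least-squares fit y = a·x^b in log-log space and report R² there:

```python
import math

def fit_power_law(xs, ys):
    """Fit y = a * x**b by least squares in log-log space.

    Returns (a, b, r2), where r2 is the coefficient of
    determination of the linear fit in log-log coordinates.
    """
    lx = [math.log(x) for x in xs]
    ly = [math.log(y) for y in ys]
    n = len(lx)
    mx = sum(lx) / n
    my = sum(ly) / n
    # Slope of the log-log regression line is the power-law exponent b.
    b = (sum((u - mx) * (v - my) for u, v in zip(lx, ly))
         / sum((u - mx) ** 2 for u in lx))
    a = math.exp(my - b * mx)
    # R^2 = 1 - SS_res / SS_tot, computed in log space.
    pred = [math.log(a) + b * u for u in lx]
    ss_res = sum((v - p) ** 2 for v, p in zip(ly, pred))
    ss_tot = sum((v - my) ** 2 for v in ly)
    return a, b, 1 - ss_res / ss_tot

# Hypothetical alignment scores at increasing dataset sizes.
sizes = [1000, 2000, 4000, 8000, 16000]
scores = [0.10, 0.135, 0.19, 0.26, 0.37]
a, b, r2 = fit_power_law(sizes, scores)
```

An R² near 1 in log-log space indicates the data are well described by a power law; values like the paper's 0.74-0.98 range indicate a good but not perfect fit across experimental settings.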