Thinking Outside the Ball: Optimal Learning with Gradient Descent for Generalized Linear Stochastic Convex Optimization
We consider linear prediction with a convex Lipschitz loss, or more generally, stochastic convex optimization problems of generalized linear form, i.e.~where each instantaneous loss is a scalar convex function of a linear predictor. We show that in this setting, early-stopped Gradient Descent (GD), without any explicit regularization or projection, ensures excess error at most $\varepsilon$ (compared to the best possible with unit Euclidean norm) with an optimal, up to logarithmic factors, sample complexity of $\tilde{O}(1/\varepsilon^2)$ and only $\tilde{O}(1/\varepsilon^2)$ iterations. This contrasts with general stochastic convex optimization, where $\Omega(1/\varepsilon^4)$ iterations are needed (Amir et al., 2021). The lower iteration complexity is ensured by leveraging uniform convergence rather than stability. But instead of uniform convergence in a norm ball, which we show can only guarantee suboptimal learning using $\Theta(1/\varepsilon^4)$ samples, we rely on uniform convergence in a distribution-dependent ball.
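The procedure the abstract analyzes can be sketched in a few lines: run plain GD on a generalized linear objective from the origin, with no projection or regularizer, and simply stop after a fixed number of iterations. The sketch below is an illustration of that setup only, not the paper's exact construction: the logistic loss stands in for a generic convex Lipschitz scalar loss, and the step size, iteration count, and synthetic data are assumptions chosen for the demo.

```python
import numpy as np

def logistic_loss_grad(w, X, y):
    """Gradient of the average logistic loss for labels y in {-1, +1}.

    The loss log(1 + exp(-y <w, x>)) is a convex, 1-Lipschitz scalar
    function of the linear predictor <w, x>, matching the generalized
    linear form considered in the abstract.
    """
    margins = y * (X @ w)
    # d/dm log(1 + exp(-m)) = -1 / (1 + exp(m))
    coeffs = -y / (1.0 + np.exp(margins))
    return (X.T @ coeffs) / len(y)

def early_stopped_gd(X, y, step_size=0.1, num_iters=200):
    """Unprojected, unregularized GD from the origin, stopped after a
    fixed number of iterations; returns the averaged iterate.
    (Step size and iteration count here are illustrative choices.)"""
    w = np.zeros(X.shape[1])
    avg = np.zeros_like(w)
    for _ in range(num_iters):
        w -= step_size * logistic_loss_grad(w, X, y)
        avg += w
    return avg / num_iters

# Illustrative synthetic data: labels generated by a unit-norm direction,
# so the comparator class of unit-Euclidean-norm predictors is natural.
rng = np.random.default_rng(0)
n, d = 500, 20
w_star = rng.normal(size=d)
w_star /= np.linalg.norm(w_star)
X = rng.normal(size=(n, d))
y = np.sign(X @ w_star)

w_hat = early_stopped_gd(X, y)
```

Note that nothing constrains `w_hat` to the unit ball during the run; the point of the title is that the analysis "thinks outside the ball," using uniform convergence over a distribution-dependent set rather than a fixed norm ball.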
Continuations by Albert Wenger: Thinking About AI: Part 3 - Existential Risk...
Now we are getting to the biggest and weirdest risk of AI: a superintelligence emerging and wiping out humanity in pursuit of its own goals. To a lot of people this seems like a totally absurd idea, held only by a tiny fringe of people who appear weird and borderline culty. It seems so far out there, and also so huge, that most people wind up dismissing it and/or forgetting about it shortly after hearing it. There is a big similarity here to the climate crisis, where the more extreme views are widely dismissed. In case you have not encountered the argument yet, let me give a very brief summary (Nick Bostrom has an entire book on the topic and Eliezer Yudkowsky has been blogging about it for two decades, so this will be super compressed by comparison): a superintelligence, when it emerges, will be pursuing its own set of goals.
The practical application of 'Thinking' Artificial Intelligence
The power of AI – providing simple solutions to complex business problems Fountech design, develop and integrate AI into the core of your business, often by releasing the untapped potential of Big Data. Sometimes that's data you'll already have, sometimes we'll enable you to find it. We regard ourselves as an AI think-tank, rather than just a development company. Our approach can turn your business ideas into tangible results using targeted AI. That's why our core philosophy is: 'you don't just learn Artificial Intelligence - you need to think it'.
Are You Completely Underestimating AI? – The Startup – Medium
The industrial revolution allowed us to build products faster than ever before and to scale up our creations to sizes never previously possible. Just as machines are hundreds, if not thousands, of times stronger than we are, AI's intelligence will be hundreds, if not thousands, of times greater than ours. Physical problems will be solved thousands of times faster than humans could solve them. Machines removed humans' physical constraints and freed us to pursue more intellectual paths like the information industry, and AI will remove our mental constraints.