Collaborating Authors: vaswani


Fast and Sample Efficient Multi-Task Representation Learning in Stochastic Contextual Bandits

Lin, Jiabin, Moothedath, Shana, Vaswani, Namrata

arXiv.org Machine Learning

We study how representation learning can improve the learning efficiency of contextual bandit problems. We consider the setting where we play T contextual linear bandits of dimension d simultaneously, and these T bandit tasks collectively share a common linear representation of dimension r much smaller than d. We present a new algorithm, based on alternating projected gradient descent (GD) and a minimization estimator, to recover a low-rank feature matrix. Using the proposed estimator, we present a multi-task learning algorithm for linear contextual bandits and prove a regret bound for our algorithm. We also present experiments comparing the performance of our algorithm against benchmark algorithms.


Noisy Low Rank Column-wise Sensing

Singh, Ankit Pratap, Vaswani, Namrata

arXiv.org Artificial Intelligence

This letter studies the AltGDmin algorithm for solving the noisy low rank column-wise sensing (LRCS) problem. Our sample complexity guarantee improves upon the best existing one by a factor $\max(r, \log(1/\epsilon))/r$ where $r$ is the rank of the unknown matrix and $\epsilon$ is the final desired accuracy. A second contribution of this work is a detailed comparison of guarantees from all work that studies the exact same mathematical problem as LRCS, but refers to it by different names.
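An AltGDmin-style loop for the column-wise sensing model y_k = A_k x_k can be sketched as follows. The Gaussian sensing matrices, the spectral-style initialization, the step size, and all names are illustrative assumptions, not details from the letter:

```python
import numpy as np

rng = np.random.default_rng(1)
n, q, r, m = 20, 15, 2, 100   # rows, columns, rank, measurements per column

U_true = np.linalg.qr(rng.normal(size=(n, r)))[0]
X_true = U_true @ rng.normal(size=(r, q))            # rank-r matrix to recover
A = rng.normal(size=(q, m, n)) / np.sqrt(m)          # independent sensing matrix per column
Y = np.einsum('kmn,nk->mk', A, X_true)               # y_k = A_k x_k, column-wise measurements
Y += 0.005 * rng.normal(size=Y.shape)                # small additive noise (the "noisy" setting)

# Spectral-style init: stack back-projected columns, take top-r left singular vectors.
X0 = np.stack([A[k].T @ Y[:, k] for k in range(q)], axis=1)
U = np.linalg.svd(X0)[0][:, :r]

eta = 1.0 / q
for _ in range(200):
    # Min step: least squares for each column's coefficient vector b_k given U.
    B = np.stack([np.linalg.lstsq(A[k] @ U, Y[:, k], rcond=None)[0]
                  for k in range(q)], axis=1)        # r x q
    # GD step on U for the squared measurement residual, then re-orthonormalize.
    grad = np.zeros((n, r))
    for k in range(q):
        res = A[k] @ U @ B[:, k] - Y[:, k]
        grad += np.outer(A[k].T @ res, B[:, k])
    U = np.linalg.qr(U - eta * grad)[0]

# Subspace error between the estimated and true column spans.
err = np.linalg.norm((np.eye(n) - U @ U.T) @ U_true, 2)
print(round(err, 3))
```

The min step decouples across columns (each b_k is a small least-squares problem), which is what makes this family of methods fast; the GD step over the shared basis U is what the guarantees in the letter analyze.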


Was Linguistic A.I. Created by Accident?

The New Yorker

In the spring of 2017, in a room on the second floor of Google's Building 1965, a college intern named Aidan Gomez stretched out, exhausted. It was three in the morning, and Gomez and Ashish Vaswani, a scientist focussed on natural language processing, were working on their team's contribution to the Neural Information Processing Systems conference, the biggest annual meeting in the field of artificial intelligence. Along with the rest of their eight-person group at Google, they had been pushing flat out for twelve weeks, sometimes sleeping in the office, on couches by a curtain that had a neuron-like pattern. They were nearing the finish line, but Gomez didn't have the energy to go out to a bar and celebrate. He couldn't have even if he'd wanted to: he was only twenty, too young to drink in the United States.


ChatGPT Spawns Investor Gold Rush in AI

WSJ.com: WSJD - Technology

Before their startup had customers, a business plan or even a formal name, former Google AI researchers Niki Parmar and Ashish Vaswani were fielding interest from investors eager to back the next big thing in artificial intelligence. At Google, Ms. Parmar and Mr. Vaswani were among the co-authors of a seminal 2017 paper that helped pave the way for the boom in so-called generative AI. Earlier this year, only weeks after striking out on their own, they raised funds that valued their fledgling company--now called Essential AI--at around $50 million, people familiar with the company said.


"Attention Is All You Need": USC Alumni Paved Path for ChatGPT - USC Viterbi

#artificialintelligence

Niki Parmar and Ashish Vaswani co-authored a seminal paper that set the groundwork for ChatGPT and other generative AI models. ChatGPT has taken the world by storm, but the seeds of the groundbreaking technology were sown at the USC Viterbi School of Engineering. The seminal paper "Attention Is All You Need," which laid the foundation for ChatGPT and other generative AI systems, was co-authored by Ashish Vaswani, a computer science PhD graduate ('14), and Niki Parmar, a computer science master's graduate ('15). The landmark paper was presented at the 2017 Conference on Neural Information Processing Systems (NeurIPS), one of the top conferences in AI and machine learning. In the paper, the researchers introduced the transformer architecture, a powerful type of neural network that has become widely used for natural language processing tasks, from text classification to language modeling.


What Is a Transformer Model?

#artificialintelligence

If you want to ride the next big wave in AI, grab a transformer. A transformer model is a neural network that learns context and thus meaning by tracking relationships in sequential data like the words in this sentence. Transformer models apply an evolving set of mathematical techniques, called attention or self-attention, to detect subtle ways even distant data elements in a series influence and depend on each other. First described in a 2017 paper from Google, transformers are among the newest and most powerful classes of models invented to date. They're driving a wave of advances in machine learning some have dubbed transformer AI.
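The scaled dot-product self-attention the article refers to, as defined in the 2017 paper, can be written in a few lines; the toy dimensions and random weights below are purely illustrative:

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])          # pairwise token affinities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax: each row sums to 1
    return weights @ V                               # each output mixes all tokens

rng = np.random.default_rng(0)
T, d = 4, 8                                          # sequence length, model dimension
X = rng.normal(size=(T, d))                          # toy token embeddings
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)
```

The softmax weights are exactly the "relationships in sequential data" the article describes: every token's output is a learned, data-dependent weighted average over every other token, which is why distant elements can influence each other directly.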


Why Researchers Can't Agree on AI Consciousness

#artificialintelligence

The idea of conscious artificial intelligence (AI) conjures images of machines taking over the world, but experts disagree over whether to take the concept seriously. A top AI researcher recently claimed that AI is already smarter than we think. Ilya Sutskever, the chief scientist of the OpenAI research group, tweeted that "it may be that today's large neural networks are slightly conscious." But other AI experts say that it's far too soon to determine anything of the sort. "To be conscious, an entity needs to be aware of its existence in its environment and that actions it takes will impact its future," Charles Simon, the CEO of FutureAI, told Lifewire in an email interview.


Are we Shadoks?

#artificialintelligence

The Shadoks were "anthropomorphic creatures with the appearance of chubby birds, with long, filiform legs, tiny and prehensile wings, and original hair." Living on a planet with uncertain contours, their main life goal was to build a rocket to land on the earth. To achieve this, they invented the "Cosmopump" intended to pump the "Cosmogol 999" to fuel their rocket. Let's forget the Shadoks for a moment and focus on the effort of reading that led us to these lines. Our brains consumed energy during this turmoil.


Miko 2 and robots like it want to be friends

#artificialintelligence

It was almost ten years ago when Sherry Turkle warned that the world was headed for a place where humans would be interacting socially with machines, like robots. Turkle is an MIT professor and social scientist who has been working on human-technology interaction and what it will mean for the human race. She is the author of several books, including Alone Together and Reclaiming Conversation, which explore the impact of technology on some of the aspects that actually make humans human. Over the years, through her books and numerous talks, Sherry Turkle has explained the dangers of people trying to replace one another with machines, including the smartphone and robots, but the world seems to have taken little heed, as today we see companies inventing robots for all sorts of tasks and even for human relationships. Remember the Chinese inventor who married his female robot in 2017?


Miko 2 Robot Brings The Magic Of Playful Learning To North America - Markets Insider

#artificialintelligence

PLEASANTON, Calif., Oct. 29, 2019 /PRNewswire-PRWeb/ -- For the last four years, a friendly little robot has been engaging, entertaining and educating children across Asia. Now, the bestselling Miko 2 robot is bringing the marvel of conversational, play-based learning to kids in North America for the first time--at a special limited time price of $299 guaranteed to make the holiday season even more magical. Miko 2 can be pre-ordered at Miko.ai in 3 attractive colors and will ship in December with guaranteed arrival for the holidays. Miko 2 is the brainchild of Miko, an advanced consumer robotics innovation lab home to global educators, engineers and psychologists. "As a father, Miko's mission is close to my heart. We hope to see children learn and grow with our product and are thrilled to bring it to North American families, especially after the interest and support that we've received in Asia," said CEO Sneh Vaswani.