Collaborating Authors

Wojciech Zaremba


Inside OpenAI's Plan to Make AI More 'Democratic'

TIME - Tech

Colin Megill was surrounded by seven staff from the world's leading artificial intelligence lab, which had launched ChatGPT a few months earlier. One of them was Wojciech Zaremba, an OpenAI co-founder. For over a decade, Megill had been toiling in relative obscurity as the co-founder of Polis, a nonprofit open-source tech platform for carrying out public deliberations. Democracy, in Megill's view, had barely evolved in hundreds of years even as the world around it had transformed unrecognizably. Each voter has a multitude of beliefs they must distill down into a single signal: one vote, every few years. The heterogeneity of every individual gets lost and distorted, with the result that democratic systems often barely reflect the will of the people and tend toward polarization.


Meet the Genius Behind GPT-4

#artificialintelligence

OpenAI has released the latest version of its language model, GPT-4, which it calls a "milestone in our effort in scaling up deep learning". While the company credits the achievement to a team effort, for OpenAI's co-founder Sam Altman one person stands out as a driving force behind the pretraining effort: Jakub Pachocki. "GPT-4 was truly a team effort from our entire company, but the overall leadership and technical vision of Jakub Pachocki for the pretraining effort was remarkable and we wouldn't be here without it," Altman said. Pachocki has been with OpenAI since 2017, and his technical vision and leadership played a crucial role in the development of GPT-4. In a recent interview with MIT, he said "That fundamental formula has not changed much for years," speaking of the evolution of GPT models since the first version was released in 2018.


OpenAI upgrades its natural language AI coder Codex and kicks off private beta

#artificialintelligence

OpenAI has already made some big changes to Codex, the AI-powered coding assistant the company announced last month. The system now accepts commands in plain English and outputs live, working code, letting someone build a game or web app without so much as naming a variable. A few lucky coders (and, one assumes, non-coders) will be able to kick the tires on this new Codex API in a free private beta. Codex is best thought of as OpenAI's versatile language engine, GPT-3, but trained only on code instead of ordinary written material. That lets it do things like complete lines of code or entire sections, but when it was announced it wasn't really something a non-coder would be able to easily interact with.
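For readers curious what this looked like in practice, here is a minimal sketch of a Codex call through OpenAI's Python client of that era; the `davinci-codex` engine name matches the original beta, but the prompt and parameters here are illustrative assumptions, not taken from the article:

```python
# Minimal sketch: asking Codex for code from a plain-English instruction.
# Assumes the historical `openai` Python client and the `davinci-codex`
# engine from the private beta; prompt and parameters are illustrative.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

response = openai.Completion.create(
    engine="davinci-codex",
    prompt='"""Create a Python function that reverses a string."""\n',
    max_tokens=100,
    temperature=0,  # deterministic sampling tends to work better for code
)

print(response.choices[0].text)
```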


OpenAI can translate English into code with its new machine learning software Codex

#artificialintelligence

AI research company OpenAI is releasing a new machine learning tool that translates the English language into code. The software is called Codex and is designed to speed up the work of professional programmers, as well as help amateurs get started coding. In demos of Codex, OpenAI shows how the software can be used to build simple websites and rudimentary games using natural language, as well as translate between different programming languages and tackle data science queries. Users type English commands into the software, like "create a webpage with a menu on the side and title at the top," and Codex translates this into code. The software is far from infallible and takes some patience to operate, but could prove invaluable in making coding faster and more accessible. "We see this as a tool to multiply programmers," OpenAI's CTO and co-founder Greg Brockman told The Verge.
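In day-to-day use, the English command is typically written as a comment that Codex then completes. A hypothetical before-and-after in Python (the completion below is the kind of code Codex might return, not actual model output):

```python
# What the user types: a plain-English instruction as a comment.
# Create a function that counts word frequencies in a text file.

# The kind of completion Codex returns (illustrative, not real output):
from collections import Counter

def word_frequencies(path):
    """Count how often each word appears in the file at `path`."""
    with open(path) as f:
        words = f.read().lower().split()
    return Counter(words)

print(word_frequencies("example.txt").most_common(5))
```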


Why Did OpenAI Disband Its Robotics Team?

#artificialintelligence

Last month, on a Weights & Biases podcast, OpenAI cofounder Wojciech Zaremba said the company had disbanded its robotics team. "I was actually working for several years on robotics. Recently, we changed the focus at OpenAI. I disbanded the robotics team. There are actually plenty of domains that are very rich with data. Ultimately that [the lack of data] was holding us back, in the case of robotics," said Zaremba.


OpenAI shuts down robotics team because it doesn't have enough data yet

#artificialintelligence

OpenAI has disbanded its AI robotics team and is no longer trying to apply machine learning to physical machines. Wojciech Zaremba, the OpenAI co-founder who led the robotics group, confirmed that the company recently broke up the team to focus on more promising areas of artificial general intelligence research. "Here's a reveal ... as of recently we changed the focus at OpenAI, and I actually disbanded the robotics team," he said during an episode of the Weights & Biases podcast. Zaremba said a lack of training data was holding the robotics research back: there wasn't enough information on hand to teach the systems to the level of intelligence desired. "From the perspective of what we want to achieve, which is to build AGI, I think there was actually some components missing," he added.


OpenAI disbands its robotics research team

#artificialintelligence

OpenAI has disbanded its robotics team after years of research into machines that can learn to perform tasks like solving a Rubik's Cube. Company cofounder Wojciech Zaremba quietly revealed on a podcast hosted by startup Weights & Biases that OpenAI has shifted its focus to other domains, where data is more readily available. "So it turns out that we can make a gigantic progress whenever we have access to data, and all our machine learning, unsupervised, and reinforcement learning -- they work extremely well, and there [are] actually plenty of domains that are very, very rich with data. And ultimately that was holding us back in terms of robotics," Zaremba said.


Improving Language Modelling with Noise Contrastive Estimation

Liza, Farhana Ferdousi (University of Kent, UK) | Grzes, Marek (University of Kent, UK)

AAAI Conferences

Neural language models do not scale well when the vocabulary is large. Noise contrastive estimation (NCE) is a sampling-based method that allows for fast learning with large vocabularies. Although NCE has shown promising performance in neural machine translation, its full potential has not been demonstrated in the language modelling literature. A thorough investigation of the hyperparameters of NCE-based neural language models has been missing. In this paper, we show that NCE can be a very successful approach to neural language modelling when the hyperparameters of the neural network are tuned appropriately. We introduce the `search-then-converge' learning rate schedule for NCE and design a heuristic that specifies how to use this schedule. We also demonstrate the impact of other important hyperparameters, such as the dropout rate and the weight initialisation range. On a popular benchmark, we show that an appropriately tuned NCE-based neural language model outperforms state-of-the-art single-model methods based on standard dropout and standard LSTM recurrent neural networks.
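To make the abstract's two main ingredients concrete, here is a minimal PyTorch sketch of an NCE output layer for a language model together with the classic form of a search-then-converge learning-rate schedule. All names, the schedule constants, and the loss bookkeeping are illustrative assumptions, not the paper's code:

```python
# A minimal sketch of noise contrastive estimation (NCE) as the output
# layer of a neural language model, plus a classic search-then-converge
# learning-rate schedule. Illustrative assumptions throughout.
import torch
import torch.nn as nn
import torch.nn.functional as F

class NCEOutput(nn.Module):
    """Replaces the softmax: a binary classifier separates the true next
    word from k samples drawn from a noise (e.g. unigram) distribution."""
    def __init__(self, vocab_size, hidden_dim, noise_probs, k=10):
        super().__init__()
        self.out_emb = nn.Embedding(vocab_size, hidden_dim)  # output word vectors
        self.bias = nn.Parameter(torch.zeros(vocab_size))
        self.register_buffer("noise_probs", noise_probs)     # (vocab_size,)
        self.k = k

    def forward(self, hidden, targets):
        # hidden: (batch, hidden_dim) context vectors; targets: (batch,) ids
        batch = targets.size(0)
        noise = torch.multinomial(
            self.noise_probs, batch * self.k, replacement=True
        ).view(batch, self.k)
        # Unnormalised model score s(w, h) = h . e_w + b_w; NCE treats
        # exp(s) as a probability without computing the softmax partition.
        pos = (hidden * self.out_emb(targets)).sum(-1) + self.bias[targets]
        neg = torch.bmm(self.out_emb(noise), hidden.unsqueeze(2)).squeeze(2) \
              + self.bias[noise]
        # Classifier posterior is sigmoid(s - log(k * p_noise(w))).
        pos_logit = pos - torch.log(self.k * self.noise_probs[targets])
        neg_logit = neg - torch.log(self.k * self.noise_probs[noise])
        pos_loss = F.binary_cross_entropy_with_logits(
            pos_logit, torch.ones_like(pos_logit))
        neg_loss = F.binary_cross_entropy_with_logits(
            neg_logit, torch.zeros_like(neg_logit), reduction="none"
        ).sum(1).mean()
        return pos_loss + neg_loss

def search_then_converge(step, lr0=1.0, tau=5000):
    """Roughly constant rate while step << tau ("search"), then a ~1/step
    decay ("converge"). The paper's heuristic for choosing when and how to
    apply this schedule with NCE is not reproduced here."""
    return lr0 / (1.0 + step / tau)
```

The schedule keeps the step size near lr0 early on and decays roughly as lr0 * tau / step afterwards; the paper's contribution is the heuristic for using such a schedule with NCE, which this sketch does not attempt to capture.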


Inside OpenAI, Elon Musk's Wild Plan to Set Artificial Intelligence Free

#artificialintelligence

The Friday afternoon news dump, a grand tradition observed by politicians and capitalists alike, is usually supposed to hide bad news. So it was a little weird that Elon Musk, founder of electric car maker Tesla, and Sam Altman, president of famed tech incubator Y Combinator, unveiled their new artificial intelligence company at the tail end of a weeklong AI conference in Montreal this past December. But there was a reason they revealed OpenAI at that late hour. It wasn't that no one was looking. It was that everyone was looking. When some of Silicon Valley's most powerful companies caught wind of the project, they began offering tremendous amounts of money to OpenAI's freshly assembled cadre of artificial intelligence researchers, intent on keeping these big thinkers for themselves. The last-minute offers, some made at the conference itself, were large enough to force Musk and Altman to delay the announcement of the new startup.