New computational algorithms make it possible to build neural networks with many input nodes and many layers; the term "deep learning" distinguishes work on these large networks from earlier work on artificial neural nets.
The rise of "generative" artificial intelligence is all about scaling: the idea that adding more computing resources to a program yields better results. As OpenAI co-founder and chief scientist Ilya Sutskever has remarked, "I had a very strong belief that bigger is better" when he founded the company that would create ChatGPT. That pursuit of ever-bigger compute has led to a race to develop the most powerful chips for AI, involving not only new GPUs from Nvidia but also Intel's Habana Labs, which has posted impressive results in benchmark tests; Advanced Micro Devices; and startups such as Cerebras Systems. That rush to develop chips has created a very practical problem: How are developers supposed to write software for an expanding universe of chips, each with unique capabilities and unique programming environments?
When computer scientists hang out at cocktail parties, they're apt to chat, among other things, about the single most important unsolved problem in computer science: Does P = NP? Formulated nearly 50 years ago, the question of whether P equals NP is a deep meditation on what can ultimately be achieved with computers. The question, which has implications for fields such as cryptography and quantum computing, has resisted a convincing answer despite decades of intense study. Now, that effort has enlisted the help of generative AI. In a paper titled "Large Language Model for Science: A Study on P vs. NP," lead author Qingxiu Dong and colleagues prompt OpenAI's GPT-4 large language model using what they call a Socratic Method: several turns of chat with GPT-4. The team's method amounts to taking arguments from a prior paper and spoon-feeding them to GPT-4 to prompt useful responses.
The first thrilling days of OpenAI's public release of ChatGPT last winter brought with them evidence of the program's ability to generate computer code, something that was a revelation to developers. It seemed at the outset that ChatGPT was so good at code, in fact, that suddenly even people with little coding knowledge could use it to generate powerful software, even software powerful enough to serve as malware threatening computer networks. Many months of experience, and formal research into the matter, have revealed that ChatGPT and other such generative AI cannot really develop programs, per se. The best they can do is offer baby steps, mostly for simple coding problems, which may or may not be helpful to human coders. "What generative [AI] has opened everyone's eyes to is the fact that I can almost have a partner when I'm doing a task that essentially gives me suggestions that move me past creative roadblocks," said Naveen Rao, co-founder and CEO of AI startup MosaicML, which was acquired in August by Databricks. At the same time, said Rao, the level of assistance for coding remains low.
Welcome to our September 2023 monthly digest, where you can catch up with any AIhub stories you may have missed, peruse the latest news, find out about recent events, and more. This month, we dive into the layers of deep-learning models, check out the buzz around pollination strategies, and listen to the AI Song Contest entries. In their work "Uncovering unique concept vectors through latent space decomposition," Mara Graziani and colleagues focus on understanding how representations are organized by the intermediate layers of complex deep learning models. In this interview, Mara tells us about the team's proposed framework for concept discovery. The need to support pollinator abundance is well known, and many countries have pollinator strategies, which are informed by a variety of experts.
Meta is starting to make good on its promise to bring generative AI to all of its products. At the company's Connect event, it revealed new AI image editing and sticker-creation features for Instagram. A tool called "restyle" is a bit like a supercharged generative AI filter. It allows users to remix their existing photos into different looks. "Think of typing a descriptor like 'watercolor' or a more detailed prompt like 'collage from magazines and newspapers, torn edges' to describe the new look and feel of the image you want to create," the company explained.
Artificial intelligence (AI) has been part of the workplace for decades, from deep learning in voice assistants to new features in enterprise software. But generative AI is only now being widely integrated into the workplace, stirring up fears about what it could mean for the job market. Generation Z comprises the youngest professionals in the workforce, and they are largely unthreatened by generative AI: most Gen Z respondents (59%) say they're not concerned about generative AI replacing their jobs, but only 48% feel prepared for their employer to adopt generative AI into everyday work. The post-pandemic job market has welcomed many young professionals from Generation Z to start their careers as Boomers retired. Adobe just published its Future Workforce Study, which collected responses from 1,011 Gen Z workers in the US who have worked at a medium-to-large company for up to three years.
It's hard to miss how much attention AI image generators alone have attracted in recent months, and with good reason: they demonstrate the progress of deep learning models in a vivid and playful way. From the chaotic random images generated with neural networks, which Google made accessible to the general public with Deep Dream in 2015, the journey has led to the almost photo-realistic images of generators such as DALL-E 2 from OpenAI, Midjourney, and Stability AI's DreamStudio, built on Stable Diffusion. Generators are now available not only in the cloud but also for your own PC, provided it has enough power.
The quickest way to second-guess a decision to major in English is this: have an extended family full of Salvadoran immigrants and pragmatic midwesterners. The ability to recite Chaucer in the original Middle English was unlikely to land me a job that would pay off my student loans and help me save for retirement, they suggested when I was a college freshman still figuring out my future. I stuck with English, but when my B.A. eventually spat me out into the thick of the Great Recession, I worried that they'd been right. After all, computer-science degrees, certainly not English ones, have long been sold to college students as among the safest paths toward 21st-century job security. Coding jobs are plentiful across industries, and the pay is good, even after the tech layoffs of the past year.
Earlier this year, the stock-photo service provider Getty Images sued Stability AI over what Getty said was the misuse of more than 12 million Getty photos in training Stability's AI photo-generation tool, Stable Diffusion. Now Getty Images is releasing its own AI photo-generation tool, which will be available to its commercial customers. And it's bringing in the big dog to do it: Nvidia. Called simply Generative AI by Getty Images, the tool is paywalled on Getty.com. It will also be available through an API, so Getty customers can plug it into other apps.
Everyone's favorite chatbot can now see, hear, and speak. Users can now have voice conversations or share images with ChatGPT in real time. Audio and multimodal features have become the next phase in the fierce generative AI competition. Meta recently launched AudioCraft for generating music with AI, and Google Bard and Microsoft Bing have both deployed multimodal features for their chat experiences. Just last week, Amazon previewed a revamped version of Alexa that will be powered by its own LLM (large language model), and even Apple is experimenting with AI-generated voice with Personal Voice.