New computational algorithms make it possible to build neural networks with many input nodes and many layers; the term "deep learning" distinguishes these networks from previous work on artificial neural nets.
The uncanny ability of artificial intelligence to spot patterns in large amounts of data could finally unravel some of the thorniest mysteries of the ancient world. Researchers working with companies such as IBM and Google's DeepMind are on the brink of deciphering ancient texts once thought unreadable - and even 'cracking' an unknown language from almost two millennia before the birth of Christ. AI allows researchers to sift through images far faster than human beings, and the techniques could answer fundamental questions about the history of language and potentially uncover lost works by Greek and Roman writers. A mysterious unknown language, 'Linear A', discovered on tablets in Crete in 1900, has never been deciphered - but AI might be able to crack the code. Among the world's most famous examples of unknown languages, the stones and tablets written in the strange 'Linear A' script are considered the main writing system of the Minoan civilization, a Bronze Age kingdom led by King Minos.
Tens of millions of people are using AI-powered 'nudify' apps, according to a new analysis that shows the dark side of the technology. More than 24 million people visited nudify AI websites in September. These sites digitally alter images, primarily of women, to make the subjects appear naked, using deep-learning algorithms. The algorithms are trained on existing images of women, which allows them to overlay realistic images of nude body parts regardless of whether the photographed person is clothed. Spam ads across major platforms directing people to the sites and apps have increased by more than 2,000 percent since the beginning of 2023. The rise in nudify-promoting ads is particularly prevalent on social media, including Google's YouTube, Reddit, and X - and 52 Telegram groups were also found to be used to access non-consensual intimate imagery (NCII) services.
Microsoft Corp.'s partnership with OpenAI Inc. is facing the potential of a full-blown UK antitrust investigation three weeks after a mutiny at the ChatGPT creator laid bare deep ties between the two companies. The Competition and Markets Authority said Friday it was gathering information from stakeholders to determine whether the collaboration between the two firms threatens competition in the UK, home of Google's AI research lab DeepMind. Microsoft fell 0.7% in premarket trading. Microsoft has benefited richly from its investments, totaling as much as $13 billion, in OpenAI. By integrating OpenAI's products into virtually every corner of its core businesses, the software giant very quickly established itself as the undisputed leader in AI among big tech firms.
A new training model, dubbed "KnowNo," aims to address this problem by teaching robots to ask for our help when orders are unclear. At the same time, it ensures they seek clarification only when necessary, minimizing needless back-and-forth. The result is a smart assistant that tries to make sure it understands what you want without bothering you too much. Andy Zeng, a research scientist at Google DeepMind who helped develop the new technique, says that while robots can be powerful in many specific scenarios, they are often bad at generalized tasks that require common sense. For example, when asked to bring you a Coke, the robot needs to first understand that it needs to go into the kitchen, look for the refrigerator, and open the fridge door.
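The clarify-only-when-necessary behavior described above can be illustrated with a minimal sketch: keep every candidate action whose score clears a confidence threshold, act if exactly one survives, and ask the user otherwise. This is loosely inspired by the KnowNo idea, not Google DeepMind's actual implementation; the function name, scores, and the 0.25 threshold are all illustrative assumptions.

```python
# Minimal sketch of uncertainty-aware clarification (illustrative only).
def plan_or_ask(option_scores, threshold=0.25):
    """Keep every option whose score clears the threshold.

    If exactly one option survives, act on it; otherwise ask the
    user to disambiguate among the plausible options.
    """
    plausible = [opt for opt, s in option_scores.items() if s >= threshold]
    if len(plausible) == 1:
        return ("act", plausible[0])
    return ("ask", plausible)

# "Bring me a Coke" when the fridge holds two candidate items:
print(plan_or_ask({"regular coke": 0.48, "diet coke": 0.44, "water": 0.08}))
# → ('ask', ['regular coke', 'diet coke'])

# An unambiguous request needs no back-and-forth:
print(plan_or_ask({"regular coke": 0.90, "diet coke": 0.06, "water": 0.04}))
# → ('act', 'regular coke')
```

The threshold is what keeps needless questions to a minimum: the robot only interrupts when more than one interpretation is genuinely plausible.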
With the number of large language models (LLMs) in the market expected to grow and branch out, businesses will need a governance framework to manage their generative artificial intelligence (AI) applications. Organizations will require layers of intelligence that pull together internal and external capabilities, said Frederic Giron, Forrester's vice president and senior research director. This approach will encompass the use of paid and open-source LLMs from third parties, such as OpenAI's ChatGPT, Anthropic's Claude, and Meta's Llama, and embedded AI tools, such as Salesforce Einstein GPT. Organizations will also have their own AI models, including using generative AI, tapping general-purpose and specialized LLMs, and running various AI applications alongside key processes, policies, and business rules. The approach will be underpinned by structured and unstructured data, with the latter expected to double amid the adoption of generative AI as companies deploy more conversational experiences for customers and employees, said Giron, who was speaking at the research firm's 2024 predictions briefing this week.
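One way such a governance layer might work in practice is a thin routing function that applies auditable business rules before a request ever reaches a model. The sketch below is purely hypothetical; the model names and the single PII rule are illustrative assumptions, not Forrester's framework or any vendor's API.

```python
# Hypothetical sketch of a governance layer routing requests between an
# in-house open-source model and a paid third-party API (names invented).
ROUTING_POLICY = {
    "contains_pii": "internal-llama",  # keep sensitive data in-house
    "default": "hosted-gpt",           # general queries go to a paid API
}

def route(request):
    """Pick a model according to simple, auditable business rules."""
    if request.get("contains_pii"):
        return ROUTING_POLICY["contains_pii"]
    return ROUTING_POLICY["default"]

print(route({"text": "Summarize this contract", "contains_pii": True}))
# → internal-llama
print(route({"text": "Draft a marketing tagline"}))
# → hosted-gpt
```

Keeping the rules in one declarative table is what makes the layer governable: policies can be reviewed and changed without touching application code.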
In a year awash with groundbreaking technological leaps and profound ethical debates, we have witnessed AI's unprecedented influence in unexpected areas -- including some indelible marks on entertainment. From the debut of cutting-edge large language models (LLMs) to the innovative Humane AI Pin and the awe-inspiring creation of an entirely new Beatles song, this year has demonstrated AI's rapid evolution and expansive reach. AI has now integrated itself into the fabric of our lives, shaping our technology and profoundly impacting our culture and the arts. AI's profound transformation this year was marked by advancements in open-source AI, licensing debates, and the emergence of powerful generative AI models. Open-source AI development soared to unprecedented heights, reshaping the AI framework and model landscape.
The history of artificial intelligence has been punctuated by periods of so-called "AI winter," when the technology seemed to meet a dead end and funding dried up. Each one has been accompanied by proclamations that making machines truly intelligent is just too darned hard for humans to figure out. Google's release of Gemini, claimed to be a fundamentally new kind of AI model and the company's most powerful to date, suggests that a new AI winter isn't coming anytime soon. In fact, although the 12 months since ChatGPT launched have been a banner year for AI, there is good reason to think that the current AI boom is only getting started. OpenAI didn't have high expectations when it launched the "low key research preview" called ChatGPT in November 2022.
Ever get bogged down by confusing AI terms? In the past year, countless AI-infused products and services have become available, offering a dizzying variety of features frequently wrapped in hard-to-discern jargon. With this handy glossary, you'll know the difference between AI and AGI, what really happens when ChatGPT "hallucinates," and what it means when you hear GPT-4 described as an LLM with a transformer model built using deep neural networks. An agent, in the context of AI, is a model or software program that can autonomously perform some kind of task. Examples of agents range from smart home devices that control temperature and lighting, to sensors in robot vacuums and driverless cars, to chatbots like ChatGPT that learn and respond to user prompts. Autonomous agents that carry out complex tasks are often cited as examples of what the next leap forward in AI might look like.
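The glossary's definition of an agent, something that senses its environment and acts on it without human input, can be made concrete with a toy example like the smart thermostat mentioned above. This is purely didactic; the class, target, and deadband values are invented, not any product's control logic.

```python
# Toy "agent": a thermostat that perceives a temperature reading and
# autonomously decides an action (illustrative example only).
class ThermostatAgent:
    def __init__(self, target=21.0):
        self.target = target  # desired temperature in Celsius

    def act(self, sensed_temp):
        # Decide an action from the percept alone, with no human input.
        if sensed_temp < self.target - 0.5:
            return "heat"
        if sensed_temp > self.target + 0.5:
            return "cool"
        return "idle"

agent = ThermostatAgent()
print(agent.act(18.0))  # → heat
print(agent.act(21.2))  # → idle
```

The essential agent ingredients are all here in miniature: a percept (the sensed temperature), a goal (the target), and an autonomous action chosen from the two.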
Hype about Gemini, Google DeepMind's long-rumored response to OpenAI's GPT-4, has been building for months. Now, the company has finally revealed what it has been working on in secret all this time. Gemini is Google's biggest AI launch yet--its push to take on competitors OpenAI and Microsoft in the race for AI supremacy. There is no doubt that the model is pitched as best-in-class across a wide range of capabilities--an "everything machine." Judging from its demos, it does many things very well--but few things that we haven't seen before.
The biggest fight of the generative AI revolution is headed to the courtroom--and no, it's not about the latest boardroom drama at OpenAI. Book authors, artists, and coders are challenging the practice of teaching AI models to replicate their skills using their own work as a training manual. As image generators and other tools have proven able to impressively mimic works in their training data, and the scale and value of training data has become clear, creators are increasingly crying foul. At LiveWIRED in San Francisco, the 30th anniversary event for WIRED magazine, two leaders of that nascent resistance sparred with a defender of the rights of AI companies to develop the technology unencumbered: WIRED senior writer Kate Knibbs discussed creators' rights and AI with Mike Masnick, Mary Rasenberger, and Matthew Butterick.