Perplexity opens up its Personal Computer AI assistant to all Mac users
Last month, Perplexity sought to better compete with the likes of Claude Cowork and get out ahead of Apple's delayed, generative AI-powered version of Siri by bringing Personal Computer to macOS. The AI assistant was previously available only to those on Perplexity's $200 per month Max plan, but the company has now opened it up to all Mac users. The company says everyone can download the new Perplexity macOS app and use Personal Computer for everyday queries, attachments and dictation, with usage tied to the credit limits of the Pro and Max plans. Personal Computer can run tasks across local files, other apps, the web and Perplexity's own servers, according to the company.
- Information Technology > Artificial Intelligence > Representation & Reasoning > Personal Assistant Systems (1.00)
- Information Technology > Artificial Intelligence > Natural Language (0.78)
- Information Technology > Communications > Mobile (0.74)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.39)
Generalized Discrete Diffusion from Snapshots
Zekri, Oussama, Uscidda, Théo, Boullé, Nicolas, Korba, Anna
We introduce Generalized Discrete Diffusion from Snapshots (GDDS), a unified framework for discrete diffusion modeling that supports arbitrary noising processes over large discrete state spaces. Our formulation encompasses all existing discrete diffusion approaches, while allowing significantly greater flexibility in the choice of corruption dynamics. The forward noising process relies on uniformization and enables fast arbitrary corruption. For the reverse process, we derive a simple evidence lower bound (ELBO) based on snapshot latents, instead of the entire noising path, that allows efficient training of standard generative modeling architectures with a clear probabilistic interpretation. Our experiments on large-vocabulary discrete generation tasks suggest that the proposed framework outperforms existing discrete diffusion methods in terms of training efficiency and generation quality, and beats autoregressive models for the first time at this scale. We provide the code along with a blog post on the project page: https://oussamazekri.fr/gdds.
- Asia > Middle East > Saudi Arabia (0.04)
- Asia > Middle East > Syria (0.04)
- North America > United States > Illinois (0.04)
- (11 more...)
- Information Technology > Communications > Social Media (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning (0.87)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.68)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks (0.68)
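The GDDS abstract above says its forward noising process relies on uniformization. As background, the general uniformization trick turns a continuous-time Markov chain into a Poisson number of jumps under a discrete kernel. A minimal NumPy sketch with a toy rate matrix (illustrative only, not the paper's actual noising process):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy continuous-time Markov chain over a small discrete state space,
# with an arbitrary rate matrix Q (rows sum to zero).
S = 4                                  # number of states (hypothetical)
Q = rng.random((S, S))
np.fill_diagonal(Q, 0.0)
np.fill_diagonal(Q, -Q.sum(axis=1))

# Uniformization: pick lambda >= max_i |Q_ii| and form the jump kernel
# P = I + Q / lambda, which is a proper stochastic matrix.
lam = np.max(-np.diag(Q))
P = np.eye(S) + Q / lam

def corrupt(x0: int, t: float) -> int:
    """Sample x_t given x_0: draw a Poisson(lam * t) number of jumps,
    then apply the uniformized kernel P that many times."""
    n_jumps = rng.poisson(lam * t)
    x = x0
    for _ in range(n_jumps):
        x = rng.choice(S, p=P[x])
    return x

samples = [corrupt(x0=0, t=2.0) for _ in range(2000)]
```

Because only a Poisson count and repeated categorical draws are needed, this yields fast sampling of arbitrary corruption dynamics.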
Google Shakes Up Its Browser Agent Team Amid OpenClaw Craze
As Silicon Valley obsesses over a new wave of AI coding agents, Google and other AI labs are shifting their bets. Google is shaking up the team behind Project Mariner, its AI agent that can navigate the Chrome browser and complete tasks on a user's behalf, WIRED has learned. In recent months, some Google Labs staffers who worked on the research prototype have moved on to higher-priority projects, according to two people familiar with the matter. A Google spokesperson confirmed the changes, but said the computer use capabilities developed under Project Mariner will be incorporated into the company's agent strategy moving forward. Google has already folded some of these capabilities into other agent products, including the recently launched Gemini Agent, the spokesperson added.
- North America > United States > California > San Francisco County > San Francisco (0.05)
- Europe > Slovakia (0.05)
- Europe > Czechia (0.05)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Agents (0.90)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.76)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.74)
LightRNN: Memory and Computation-Efficient Recurrent Neural Networks
Recurrent neural networks (RNNs) have achieved state-of-the-art performance in many natural language processing tasks, such as language modeling and machine translation. However, when the vocabulary is large, the RNN model becomes very big (e.g., possibly beyond the memory capacity of a GPU device) and its training becomes very inefficient. In this work, we propose a novel technique to tackle this challenge. The key idea is to use a 2-Component (2C) shared embedding for word representations. We allocate every word in the vocabulary into a table, each row of which is associated with a vector, and each column with another vector.
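The table allocation above is what shrinks the parameter count: a word at position (r, c) is represented by its row vector plus its column vector, so only 2·√V component vectors are stored instead of V full embeddings. A small NumPy sketch of the idea (dimensions and the concatenation choice are illustrative, not LightRNN's training code):

```python
import numpy as np

V = 10_000                           # vocabulary size (hypothetical)
side = int(np.ceil(np.sqrt(V)))      # words live in a side x side table
d = 256                              # per-component embedding dimension

row_emb = np.random.randn(side, d)   # shared by every word in a row
col_emb = np.random.randn(side, d)   # shared by every word in a column

def embed(word_id: int) -> np.ndarray:
    """A word at table position (r, c) is represented by the pair
    (row vector r, column vector c), here simply concatenated."""
    r, c = divmod(word_id, side)
    return np.concatenate([row_emb[r], col_emb[c]])

# Parameter count: 2 * side * d instead of V * d for a full table.
full_params = V * d
shared_params = 2 * side * d
```

For V = 10,000 this stores 200 component vectors rather than 10,000 word vectors, which is the memory saving the abstract describes.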
Neural Architecture Optimization
Automatic neural architecture design has shown its potential in discovering powerful neural network architectures. Existing methods, whether based on reinforcement learning or evolutionary algorithms (EA), conduct architecture search in a discrete space, which is highly inefficient. In this paper, we propose a simple and efficient method for automatic neural architecture design based on continuous optimization. We call this new approach neural architecture optimization (NAO). There are three key components in our proposed approach: (1) an encoder embeds/maps neural network architectures into a continuous space.
- North America > United States > Oregon > Multnomah County > Portland (0.04)
- North America > Canada > Quebec > Montreal (0.04)
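The NAO abstract's central move, searching in a continuous embedding of architectures instead of a discrete space, can be caricatured in a few lines. The sketch below is a deliberately simplified stand-in (one-hot "encoder", linear performance surrogate, argmax "decoder"), not the paper's learned encoder-predictor-decoder:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical tiny search space: an architecture is a sequence of
# seq_len discrete operator choices, each from n_ops options.
n_ops, seq_len = 4, 3

def encode(arch):
    """One-hot encode the option sequence and flatten: a stand-in for a
    learned encoder into a continuous space."""
    z = np.zeros((seq_len, n_ops))
    z[np.arange(seq_len), arch] = 1.0
    return z.ravel()

# Linear performance surrogate: predicted score = w @ z.
w = rng.standard_normal(seq_len * n_ops)

def decode(z):
    """Map a continuous code back to the nearest valid architecture:
    argmax per position."""
    return z.reshape(seq_len, n_ops).argmax(axis=1)

arch = [0, 1, 2]
z = encode(arch)
z_new = z + 0.5 * w          # one gradient-ascent step on the surrogate
better_arch = decode(z_new)  # candidate architecture to evaluate next
```

The point of the continuous relaxation is exactly this gradient step: the surrogate's gradient gives a search direction that a purely discrete method would have to find by trial and error.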
Perplexity's Retreat From Ads Signals a Bigger Strategic Shift
The AI search startup once predicted advertising would be a massive business. Perplexity is abandoning plans to put ads in its AI search product as the industry looks for sustainable business models that won't hurt user trust. The changes are part of a larger strategic shift for the company, which has long focused on disrupting Google Search's business. "Google is changing to be like Perplexity more than Perplexity is trying to take on Google," said a Perplexity executive at a press briefing on Tuesday. Executives spoke to the press on the condition of anonymity.
- North America > United States > California (0.15)
- Europe > Slovakia (0.05)
- Europe > Czechia (0.05)
- Asia > China (0.05)
- Information Technology > Communications (0.97)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.75)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.52)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.52)
Search for Efficient Large Language Models
Large Language Models (LLMs) have long held sway in the realms of artificial intelligence research. Numerous efficient techniques, including weight pruning, quantization, and distillation, have been embraced to compress LLMs, targeting memory reduction and inference acceleration, which underscore the redundancy in LLMs. However, most model compression techniques concentrate on weight optimization, overlooking the exploration of optimal architectures. Besides, traditional architecture search methods, limited by the elevated complexity with extensive parameters, struggle to demonstrate their effectiveness on LLMs. In this paper, we propose a training-free architecture search framework to identify optimal subnets that preserve the fundamental strengths of the original LLMs while achieving inference acceleration. Furthermore, after generating subnets that inherit specific weights from the original LLMs, we introduce a reformation algorithm that utilizes the omitted weights to rectify the inherited weights with a small amount of calibration data. Compared with SOTA training-free structured pruning works that can generate smaller networks, our method demonstrates superior performance across standard benchmarks. Furthermore, our generated subnets can directly reduce the usage of GPU memory and achieve inference acceleration.
- North America > United States > Minnesota > Hennepin County > Minneapolis (0.14)
- Europe > Belgium > Brussels-Capital Region > Brussels (0.04)
- Asia > Middle East > Republic of Türkiye > Karaman Province > Karaman (0.04)
- Information Technology (0.67)
- Government (0.46)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.67)
- Information Technology > Artificial Intelligence > Representation & Reasoning (0.67)
- North America > United States > Ohio > Cuyahoga County > Cleveland (0.04)
- Europe > Slovenia > Drava > Municipality of Maribor > Maribor (0.04)
- North America > United States > Georgia > Fulton County > Atlanta (0.04)
- (3 more...)
- Education (0.92)
- Government > Regional Government > Asia Government > North Korea Government (0.46)
- Government > Regional Government > North America Government > United States Government (0.45)
- Information Technology > Artificial Intelligence > Representation & Reasoning (1.00)
- Information Technology > Artificial Intelligence > Natural Language (1.00)
- Information Technology > Artificial Intelligence > Vision (0.68)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.46)
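The two-stage recipe in the efficient-LLM abstract above (training-free subnet selection, then weight rectification on calibration data) can be illustrated on a single linear layer. All sizes are hypothetical, the magnitude-based selection is a generic proxy, and the least-squares "reformation" is one plausible reading of rectifying inherited weights, not the paper's exact algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

d_in, d_out, kept, n_calib = 64, 32, 40, 256    # hypothetical sizes
W = rng.standard_normal((d_in, d_out))          # original linear layer
X = rng.standard_normal((n_calib, d_in))        # calibration activations

# Stage 1: training-free selection -- keep the input channels with the
# largest weight norms (a simple importance proxy, no gradient updates).
scores = np.linalg.norm(W, axis=1)
keep = np.sort(np.argsort(scores)[-kept:])

# Stage 2: reformation -- rectify the inherited weights so the subnet
# reproduces the full layer's outputs on the calibration data:
#   min_{W'} || X[:, keep] @ W' - X @ W ||_F
W_prime, *_ = np.linalg.lstsq(X[:, keep], X @ W, rcond=None)

# The rectified weights can only match the full layer better than the
# raw inherited slice W[keep], since W[keep] is one feasible candidate.
err_inherited = np.linalg.norm(X[:, keep] @ W[keep] - X @ W)
err_reformed = np.linalg.norm(X[:, keep] @ W_prime - X @ W)
```

Because the least-squares solve folds information from the omitted channels (via X @ W) into the kept ones, it captures the spirit of using omitted weights to rectify inherited ones without any retraining.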