Mobile


Release date for Apple's first FOLDABLE iPhone leaks online - and it suggests fans don't have long to wait at all

Daily Mail - Science & tech

It is one of the world's leading tech companies but, unlike its rivals, Apple is yet to reveal its own folding phone design. Now, a possible release date for the long-rumoured foldable iPhone has leaked online - and it suggests tech fans don't have long to wait. According to reports from Apple analysts, the foldable iPhone could be launched before the end of 2026. The rumours also suggest that Apple's latest innovation won't come cheap, with an expected price tag of $2,299. That would make the 'iPhone Fold' almost twice the price of Apple's current most expensive smartphone, the iPhone 16 Pro Max, which starts at $1,199 (UK price £1,199).


Faster Algorithms for User-Level Private Stochastic Convex Optimization

Neural Information Processing Systems

We study private stochastic convex optimization (SCO) under user-level differential privacy (DP) constraints. In this setting, there are n users (e.g., cell phones), each possessing m data items (e.g., text messages), and we need to protect the privacy of each user's entire collection of data items. Existing algorithms for user-level DP SCO are impractical in many large-scale machine learning scenarios because: (i) they make restrictive assumptions on the smoothness parameter of the loss function and require the number of users to grow polynomially with the dimension of the parameter space; or (ii) they are prohibitively slow, requiring at least (mn)^{3/2} gradient computations for smooth losses and (mn)^3 computations for non-smooth losses. To address these limitations, we provide novel user-level DP algorithms with state-of-the-art excess risk and runtime guarantees, without stringent assumptions. First, we develop a linear-time algorithm with state-of-the-art excess risk (for a non-trivial linear-time algorithm) under a mild smoothness assumption.
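
For readers unfamiliar with the user-level setting, the distinction from ordinary (item-level) DP-SGD is easiest to see in code: the per-user average gradient, rather than each item's gradient, is clipped and noised, so one user's entire collection has bounded influence on the released gradient. The sketch below is a generic illustration of that aggregation step, not the paper's algorithm; all names and constants are hypothetical.

```python
import numpy as np

def user_level_dp_gradient(per_user_item_grads, clip_norm=1.0, noise_mult=1.0, rng=None):
    """Aggregate a gradient with user-level clipping and Gaussian noise.

    per_user_item_grads: list with one (m_i, d) array per user. The user's
    *average* gradient is clipped, so their entire collection of items has
    bounded influence on the result (user-level, not item-level).
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    clipped = []
    for grads in per_user_item_grads:
        g = grads.mean(axis=0)                                  # average over the user's m items
        scale = min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
        clipped.append(g * scale)                               # user-level clipping
    total = np.sum(clipped, axis=0)
    noise = rng.normal(0.0, noise_mult * clip_norm, size=total.shape)
    return (total + noise) / len(per_user_item_grads)

# Toy usage: n=5 users, each holding m=10 items with 3-dimensional gradients.
rng = np.random.default_rng(1)
grads = [rng.normal(size=(10, 3)) for _ in range(5)]
print(user_level_dp_gradient(grads, rng=rng))
```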


Real-time Core-Periphery Guided ViT with Smart Data Layout Selection on Mobile Devices

Neural Information Processing Systems

Mobile devices have become essential enablers for AI applications, particularly in scenarios that require real-time performance. Vision Transformer (ViT) has become a fundamental cornerstone in this regard due to its high accuracy. Recent efforts have been dedicated to developing various transformer architectures that offer improved accuracy while reducing the computational requirements. However, existing research primarily focuses on reducing the theoretical computational complexity through methods such as local attention and model pruning, rather than considering realistic performance on mobile hardware. Although these optimizations reduce computational demands, they either introduce additional overheads related to data transformation (e.g., Reshape and Transpose) or irregular computation/data-access patterns.
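
The layout point generalises beyond any particular runtime: a pure data transformation such as a transpose-and-copy can cost a noticeable fraction of the surrounding matrix multiply if it is paid on every call. The numpy sketch below only illustrates that general effect on a CPU; it is not the paper's layout-selection method, and the sizes are arbitrary.

```python
import time
import numpy as np

def avg_seconds(fn, reps=20):
    """Average wall-clock seconds of fn() over `reps` runs."""
    t0 = time.perf_counter()
    for _ in range(reps):
        fn()
    return (time.perf_counter() - t0) / reps

n = 2048
x = np.random.rand(n, n).astype(np.float32)
w = np.random.rand(n, n).astype(np.float32)

matmul_t   = avg_seconds(lambda: x @ w)                      # the "useful" compute
relayout_t = avg_seconds(lambda: np.ascontiguousarray(x.T))  # pure layout change: transpose + copy

print(f"matmul: {matmul_t*1e3:.1f} ms | transpose-to-contiguous copy: {relayout_t*1e3:.1f} ms "
      f"(~{100*relayout_t/matmul_t:.0f}% extra if paid on every layer call)")
```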


Google is officially replacing Assistant with Gemini - and there's only one way to keep it

ZDNet

After just nine years, Google is moving Assistant to the graveyard. Last fall, Google announced it was officially replacing Assistant, the default Android assistant on your phone, with Gemini. You still had the option to use Assistant, but now that option is ending and users must move to Gemini. In a recent blog post, Google announced it's officially winding down Assistant. "Later this year," the post read, "the classic Google Assistant will no longer be accessible on most mobile devices or available for new downloads on mobile app stores."


Mobile-Agent-v2: Mobile Device Operation Assistant with Effective Navigation via Multi-Agent Collaboration

Neural Information Processing Systems

Mobile device operation tasks are increasingly becoming a popular multi-modal AI application scenario. Current Multi-modal Large Language Models (MLLMs), constrained by their training data, lack the capability to function effectively as operation assistants. Instead, MLLM-based agents, which enhance capabilities through tool invocation, are gradually being applied to this scenario. However, the two major navigation challenges in mobile device operation tasks -- task progress navigation and focus content navigation -- are difficult to effectively solve under the single-agent architecture of existing work. This is due to the overly long token sequences and the interleaved text-image data format, which limit performance.
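
One way to picture the split the abstract describes is a loop in which a planning role keeps a short running summary of task progress while a decision role picks each UI action from only the goal, that summary, and the current screen, so no single prompt carries the full interleaved text-image history (the paper's design also includes a reflection step, omitted here). The sketch below is a schematic illustration with stub functions; observe_screen, plan_update, and decide_action are hypothetical stand-ins, not the paper's API.

```python
from dataclasses import dataclass, field

@dataclass
class TaskState:
    goal: str
    progress_summary: str = ""          # maintained by the "planning" role
    actions: list = field(default_factory=list)

def observe_screen(step):
    """Hypothetical stand-in for a screenshot plus OCR/icon description."""
    return f"screen description at step {step}"

def plan_update(summary, observation, action):
    """Planning role: fold the latest step into a short progress summary."""
    return (summary + f" | did '{action}' on [{observation}]").strip(" |")

def decide_action(goal, summary, observation, step):
    """Decision role: choose the next UI action from goal + summary + current screen only."""
    if step >= 3:
        return "STOP"
    return f"tap(element_matching('{goal}'))"

def run_task(goal, max_steps=6):
    state = TaskState(goal=goal)
    for step in range(max_steps):
        obs = observe_screen(step)
        action = decide_action(state.goal, state.progress_summary, obs, step)
        if action == "STOP":
            break
        state.actions.append(action)
        state.progress_summary = plan_update(state.progress_summary, obs, action)
    return state

state = run_task("open settings and enable dark mode")
print(state.progress_summary)
print(state.actions)
```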


PopSign ASL v1.0: An Isolated American Sign Language Dataset Collected via Smartphones

Neural Information Processing Systems

PopSign is a smartphone-based bubble-shooter game that helps hearing parents of deaf infants learn sign language. To help parents practice their ability to sign, PopSign is integrating sign language recognition as part of its gameplay. For training the recognizer, we introduce the PopSign ASL v1.0 dataset that collects examples of 250 isolated American Sign Language (ASL) signs using Pixel 4A smartphone selfie cameras in a variety of environments. It is the largest publicly available, isolated sign dataset by number of examples and is the first dataset to focus on one-handed, smartphone signs. We collected over 210,000 examples at 1944x2592 resolution made by 47 consenting Deaf adult signers for whom American Sign Language is their primary language.
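
To give a feel for how an isolated-sign dataset like this is typically consumed, the sketch below builds a label index from a hypothetical directory layout with one folder per sign and one video clip per example. The path and file naming are assumptions for illustration, not the dataset's actual release format.

```python
from collections import Counter
from pathlib import Path

def index_isolated_signs(root):
    """Build (video path, sign label) records, assuming root/<sign>/<clip>.mp4."""
    root = Path(root)
    if not root.is_dir():
        return []
    examples = []
    for sign_dir in sorted(p for p in root.iterdir() if p.is_dir()):
        for clip in sorted(sign_dir.glob("*.mp4")):
            examples.append({"path": clip, "label": sign_dir.name})
    return examples

if __name__ == "__main__":
    examples = index_isolated_signs("popsign_asl_v1")   # hypothetical local path
    counts = Counter(ex["label"] for ex in examples)
    print(f"{len(examples)} examples across {len(counts)} signs")
    print(counts.most_common(5))
```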


Google's officially retiring Assistant

Mashable

In a blog post announcement, Google wrote: "Over the coming months, we're upgrading more users on mobile devices from Google Assistant to Gemini; and later this year, the classic Google Assistant will no longer be accessible on most mobile devices or available for new downloads on mobile app stores." Assistant will remain only on phones running Android 9 or earlier that don't have at least 2 GB of RAM. Google has recently made a concerted effort to get people to use Gemini. Last month, it pulled the tool from its search app, for instance, and redirected users to the standalone Gemini app. And it's not just phones that'll be migrating to Gemini.


Google is removing Assistant from most phones this year

Engadget

Google Assistant's days are numbered. Google announced Friday that all Android devices are switching to Gemini as their default assistant and "the classic Google Assistant will no longer be accessible on most mobile devices." The company says it's working to convert more mobile devices from Google Assistant to Gemini in 2025, and plans on "upgrading tablets, cars and devices that connect to your phone, such as headphones and watches" to the new AI assistant. That presumably includes other platforms like iOS, as well. While smart home devices don't seem to be a focus at Google as of late, the company also reaffirmed plans to use Gemini to power a new experience on speakers, displays, and streaming boxes.


Iran using drones and apps to enforce women's dress code, UN says

BBC News

At Tehran's Amirkabir University, authorities installed facial recognition software at its entrance gate to also find women not wearing the hijab, the report said. Surveillance cameras on Iran's major roads are also being used to search for uncovered women. Investigators also said they obtained the "Nazer" mobile phone app offered by Iranian police, which allows "vetted" members of the public and the police to report on uncovered women in vehicles, including ambulances, buses, metro cars and taxis. "Users may add the location, date, time and the licence plate number of the vehicle in which the alleged mandatory hijab infraction occurred, which then 'flags' the vehicle online, alerting the police," the report said. According to the report, a text message is then sent to the registered owner of the vehicle, warning them they had been found in violation of the mandatory hijab laws.


Apple might add live language translation to AirPods this year - how that'll work

ZDNet

As a fan of Star Trek: Deep Space Nine, I've always loved the episode Little Green Men, in which Quark, Rom, and Nog are accidentally transported back to 20th-century Earth. In one interesting scene, we learn that the Ferengi insert tiny universal translators into their large ears so they can converse with people in different languages. Though I'm not a Ferengi, I'm looking forward to a similar feature that reportedly will reach Apple AirPods later this year. In a Bloomberg story published Thursday, tech reporter Mark Gurman said that Apple plans to update its AirPods with a live translation feature that would let you hear other languages translated into your own. Citing information from "people with knowledge of the matter," Gurman said that the new feature will be tied to iOS 19 and arrive as part of an AirPods software upgrade due later this year.