iPhone 14
Apple has quietly DISCONTINUED three popular devices - as concerned shoppers find they're sold out around the world
After months of anticipation, Apple finally unveiled its latest smartphone to the world last night. The iPhone 16e is Apple's latest 'budget' smartphone, with prices starting at £599/$599. The new device runs Apple Intelligence features, including a ChatGPT integration with smart assistant Siri. It also includes a 6.1-inch display, a two-in-one camera system, an 'extraordinary' battery life, and the return of the 'notch' at the top of the display. While the focus has been on the new device, several eagle-eyed Apple fans have noticed that three popular devices have been quietly discontinued.
- Asia > China (0.06)
- North America > United States > California > Santa Clara County > Cupertino (0.05)
- North America > United States > California > San Bernardino County > San Bernardino (0.05)
- Leisure & Entertainment (0.97)
- Media > Music (0.71)
- Information Technology > Communications > Mobile (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.89)
Apple's Tim Cook reveals the release date for a brand new device - and there's not long to wait
From iPhones to Apple Watches, Apple is known for its incredible range of gadgets. Now, the tech giant is about to launch a brand new product - and there's not long to wait to see it. Apple CEO Tim Cook has revealed that a new device is coming on Wednesday 19 February. He posted a video of a holographic Apple logo on X (formerly Twitter), writing: 'Get ready to meet the newest member of the family.' While further details are yet to be announced, the new device is widely rumoured to be Apple's latest budget iPhone SE.
- Information Technology > Communications > Mobile (1.00)
- Information Technology > Artificial Intelligence (1.00)
Apple Intelligence: What devices and features will actually be supported?
Apple Intelligence is coming, but not to every iPhone out there. In fact, you'll need a device with an A17 Pro processor or M-series chip to use many of the features unveiled during the Apple Intelligence portion of WWDC 2024. That means only iPhone 15 Pro owners (and those with an M-series iPad) will get the iOS 18-related Apple Intelligence (AI?) updates like Genmoji, Image Playground, the redesigned Siri and Writing Tools. It's not evident exactly why older devices using an A16 chip (like the iPhone 14 Pro) won't work with Apple Intelligence, given that its neural engine seems more than capable compared to the M1. A closer look at the spec sheets of those two processors shows that the main differences appear to be in memory and GPU prowess.
MELTing point: Mobile Evaluation of Language Transformers
Laskaridis, Stefanos, Katevas, Kleomenis, Minto, Lorenzo, Haddadi, Hamed
Transformers have revolutionized the machine learning landscape, gradually making their way into everyday tasks and equipping our computers with "sparks of intelligence". However, their runtime requirements have prevented them from being broadly deployed on mobile. As personal devices become increasingly powerful and prompt privacy becomes an ever more pressing issue, we explore the current state of mobile execution of Large Language Models (LLMs). To achieve this, we have created our own automation infrastructure, MELT, which supports the headless execution and benchmarking of LLMs on device across different models, devices and frameworks, including Android, iOS and Nvidia Jetson devices. We evaluate popular instruction fine-tuned LLMs and leverage different frameworks to measure their end-to-end and granular performance, tracing their memory and energy requirements along the way. Our analysis is the first systematic study of on-device LLM execution, quantifying performance, energy efficiency and accuracy across various state-of-the-art models, and showcases the state of on-device intelligence in the era of hyperscale models. Results highlight the performance heterogeneity across targets and corroborate that LLM inference is largely memory-bound. Quantization drastically reduces memory requirements and renders execution viable, but at a non-negligible accuracy cost. Drawing from its energy footprint and thermal behavior, the continuous execution of LLMs remains elusive, as both factors negatively affect user experience. Last, our experience shows that the ecosystem is still in its infancy, and algorithmic as well as hardware breakthroughs can significantly shift the execution cost. We expect NPU acceleration and framework-hardware co-design to be the biggest bet towards efficient standalone execution, with the alternative of offloading tailored towards edge deployments.
- Europe > United Kingdom > England > Greater London > London (0.14)
- North America > United States > Minnesota > Hennepin County > Minneapolis (0.14)
- North America > United States > California > San Francisco County > San Francisco (0.14)
- (12 more...)
- Energy (1.00)
- Information Technology > Hardware (0.34)
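The MELT abstract's finding that LLM inference is largely memory-bound, and that quantization "drastically reduces memory requirements", can be illustrated with back-of-envelope arithmetic. A minimal sketch, assuming a hypothetical 7B-parameter model and counting weight storage only (the parameter count and bit widths are illustrative assumptions, not figures from the paper):

```python
def model_memory_gib(n_params: float, bits_per_weight: int) -> float:
    """Approximate weight-storage footprint in GiB.

    Counts weights only; activations and the KV cache add
    further memory on top of this during inference.
    """
    return n_params * bits_per_weight / 8 / 2**30

n = 7e9  # hypothetical 7B-parameter model
fp16 = model_memory_gib(n, 16)  # ~13.0 GiB: beyond most phones' RAM
int4 = model_memory_gib(n, 4)   # ~3.3 GiB: plausible on recent devices
print(f"fp16: {fp16:.1f} GiB, 4-bit: {int4:.1f} GiB")
```

Because the workload is memory-bound, shrinking the weights like this helps latency as well as fit, which is consistent with the abstract's observation that quantization "renders execution viable" at some accuracy cost.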
Overview of the VLSP 2023 -- ComOM Shared Task: A Data Challenge for Comparative Opinion Mining from Vietnamese Product Reviews
Le, Hoang-Quynh, Can, Duy-Cat, Nguyen, Khanh-Vinh, Tran, Mai-Vu
This paper presents a comprehensive overview of the Comparative Opinion Mining from Vietnamese Product Reviews shared task (ComOM), held as part of the 10th International Workshop on Vietnamese Language and Speech Processing (VLSP 2023). The primary objective of this shared task is to advance the field of natural language processing by developing techniques that proficiently extract comparative opinions from Vietnamese product reviews. Participants are challenged to propose models that adeptly extract a comparative "quintuple" from a comparative sentence, encompassing Subject, Object, Aspect, Predicate, and Comparison Type Label. We construct a human-annotated dataset comprising 120 documents, encompassing 7427 non-comparative sentences and 2468 comparisons within 1798 sentences. Participating models undergo evaluation and ranking based on the exact-match macro-averaged quintuple F1 score.
- North America > United States > Illinois > Cook County > Chicago (0.04)
- Asia > Vietnam > Hanoi > Hanoi (0.04)
- Asia > India > Karnataka > Bengaluru (0.04)
- Overview (0.68)
- Research Report (0.50)
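The comparative "quintuple" that the ComOM overview above asks models to extract can be sketched as a simple record type. A hypothetical illustration: the field names and the example label value are this sketch's own, not the shared task's official schema:

```python
from dataclasses import dataclass

@dataclass
class ComparativeQuintuple:
    """One comparison extracted from a review sentence.

    Mirrors the five elements named in the task description:
    Subject, Object, Aspect, Predicate, and Comparison Type Label.
    """
    subject: str          # entity being compared
    object: str           # entity compared against
    aspect: str           # property under comparison
    predicate: str        # comparative expression in the sentence
    comparison_type: str  # label, e.g. a polarity/direction tag

# A made-up example (not from the shared-task data):
q = ComparativeQuintuple(
    subject="phone A",
    object="phone B",
    aspect="battery life",
    predicate="lasts longer than",
    comparison_type="COM+",  # label value is an assumption
)
```

A single sentence may yield several such quintuples, which is why the dataset counts 2468 comparisons across only 1798 comparative sentences.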
Apple is crowned the world's biggest phonemaker - overtaking rival Samsung for the first time in 12 years
It's one of the most well-known companies in the world, and now Apple has officially been crowned the world's biggest phonemaker. While Samsung has taken the top spot every year since 2010, it was finally knocked off its pedestal by Apple in 2023. Figures released by the International Data Corporation (IDC) reveal how Apple took 20.1 per cent of the market share last year – a 3.7 per cent increase on 2022. 'The biggest winner is clearly Apple,' said Nabila Popal, research director with IDC's Worldwide Tracker team. 'Not only is Apple the only player in the Top 3 to show positive growth annually, but also bags the number 1 spot annually for the first time ever.'
- Asia > China (0.07)
- North America > United States > California > Santa Clara County > Cupertino (0.05)
- North America > United States > California > San Bernardino County > San Bernardino (0.05)
- Leisure & Entertainment (0.97)
- Media > Music (0.71)
- Semiconductors & Electronics (0.63)
- Information Technology > Communications > Mobile (0.88)
- Information Technology > Artificial Intelligence (0.73)
Microphone Conversion: Mitigating Device Variability in Sound Event Classification
Ryu, Myeonghoon, Oh, Hongseok, Lee, Suji, Park, Han
In this study, we introduce a new augmentation technique to enhance the resilience of sound event classification (SEC) systems against device variability through the use of CycleGAN. We also present a unique dataset to evaluate this method. As SEC systems become increasingly common, it is crucial that they work well with audio from diverse recording devices. Our method addresses limited device diversity in training data by enabling unpaired training to transform input spectrograms as if they had been recorded on a different device. Our experiments show that our approach outperforms existing methods in generalization by 5.2%-11.5% in weighted F1 score. Additionally, it surpasses current methods in adaptability across diverse recording devices, achieving a 6.5%-12.8% improvement in weighted F1 score.
- North America > United States > New York (0.04)
- North America > United States > California > San Diego County > San Diego (0.04)
- Asia > South Korea > Seoul > Seoul (0.04)
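The weighted F1 score used to report the gains above is the per-class F1 averaged with weights proportional to each class's true support. A minimal pure-Python sketch of the standard metric (not the authors' evaluation code):

```python
from collections import Counter

def weighted_f1(y_true, y_pred):
    """Per-class F1, averaged weighted by each class's true support."""
    support = Counter(y_true)
    n_total = len(y_true)
    score = 0.0
    for cls, n_cls in support.items():
        tp = sum(t == p == cls for t, p in zip(y_true, y_pred))
        pred_pos = sum(p == cls for p in y_pred)
        precision = tp / pred_pos if pred_pos else 0.0
        recall = tp / n_cls
        denom = precision + recall
        f1 = 2 * precision * recall / denom if denom else 0.0
        score += (n_cls / n_total) * f1  # weight by class frequency
    return score

# Toy labels (hypothetical, not from the paper's dataset):
y_true = ["dog", "dog", "car", "car", "car"]
y_pred = ["dog", "car", "car", "car", "dog"]
print(weighted_f1(y_true, y_pred))  # 0.6
```

Weighting by support makes the metric robust to class imbalance, which is common in sound-event datasets where a few event types dominate.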
A Performance Evaluation of a Quantized Large Language Model on Various Smartphones
Çöplü, Tolga, Loedi, Marc, Bendiken, Arto, Makohin, Mykhailo, Bouw, Joshua J., Cobb, Stephen
This paper explores the feasibility and performance of on-device large language model (LLM) inference on various Apple iPhone models. Amidst the rapid evolution of generative AI, on-device LLMs offer solutions to privacy, security, and connectivity challenges inherent in cloud-based models. Leveraging existing literature on running multi-billion parameter LLMs on resource-limited devices, our study examines the thermal effects and interaction speeds of a high-performing LLM across different smartphone generations. We present real-world performance results, providing insights into on-device inference capabilities.
ComOM at VLSP 2023: A Dual-Stage Framework with BERTology and Unified Multi-Task Instruction Tuning Model for Vietnamese Comparative Opinion Mining
Van Thin, Dang, Hao, Duong Ngoc, Nguyen, Ngan Luu-Thuy
The ComOM shared task aims to extract comparative opinions from product reviews written in Vietnamese. There are two sub-tasks: (1) Comparative Sentence Identification (CSI) and (2) Comparative Element Extraction (CEE). The first task is to identify whether the input is a comparative review, and the purpose of the second task is to extract the quintuples mentioned in the comparative review. To address this task, our team proposes a two-stage system based on fine-tuning a BERTology model for the CSI task and unified multi-task instruction tuning for the CEE task. In addition, we apply a simple data-augmentation technique to increase the size of the dataset for training our model in the second stage. Experimental results show that our approach outperforms the other competitors and achieved the top score on the official private test set.
- North America > United States > Minnesota > Hennepin County > Minneapolis (0.14)
- North America > Canada > Ontario > Toronto (0.04)
- Asia > Vietnam > Hồ Chí Minh City > Hồ Chí Minh City (0.04)
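The two-stage control flow described in the abstract above (gate each sentence through CSI, then run CEE only on sentences flagged as comparative) can be sketched structurally. `is_comparative` and `extract_quintuples` are hypothetical stand-ins for the authors' fine-tuned BERTology classifier and instruction-tuned extractor:

```python
def is_comparative(sentence: str) -> bool:
    """Stage 1 (CSI): placeholder for a fine-tuned classifier.
    Here, a crude keyword heuristic, purely for illustration."""
    cues = ("better than", "worse than", "more than", "less than")
    return any(cue in sentence.lower() for cue in cues)

def extract_quintuples(sentence: str) -> list[dict]:
    """Stage 2 (CEE): placeholder for the multi-task
    instruction-tuned extractor; a sentence may yield
    several quintuples."""
    return [{"subject": "?", "object": "?", "aspect": "?",
             "predicate": "?", "comparison_type": "?"}]

def pipeline(sentences: list[str]) -> list[dict]:
    """Stage 1 gates stage 2: non-comparative sentences are
    filtered out before any extraction is attempted."""
    results = []
    for s in sentences:
        if is_comparative(s):
            results.extend(extract_quintuples(s))
    return results
```

Gating cheaply at stage 1 matters here because the dataset is dominated by non-comparative sentences (7427 of them versus 1798 comparative ones), so most inputs never reach the heavier extraction model.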
41 Best Amazon Prime Day Deals Under $50 (2023)
Prime Day is here again. Actually, Amazon's second coming of Prime Day is now called Prime Big Deal Days, a bold choice, to say the least. We'll still be calling it Amazon Prime Day, and you hereby have permission from this humble WIRED writer to do the same. We've rounded up the best Prime Day deals under $50. Nothing feels more like a deal than when it's affordable, but it can be hard to tell what's a good cheap deal and what isn't. We did the work for you (you're welcome!). Updated October 10: We've added more deals ranging from the Anker PowerWave to the SanDisk Extreme Pro. We test products year-round and handpicked these deals.
- Information Technology > Artificial Intelligence (0.71)
- Information Technology > Communications > Mobile (0.30)