suitcase
The Best Black Friday Travel Gear Deals (2025)
Some of our favorite carry-on suitcases, packing cubes, and other travel gear have deep discounts for Black Friday this year. If your 2026 dreams include traveling across the world for overseas adventures, reliable travel gear is crucial. As it turns out, Black Friday this year is a great time to buy some of our favorite carry-on luggage, packing cubes, and toiletry bags, all of which are on sale. So, here are the best Black Friday travel gear deals on products we've tested and loved. One of the most impressive discounts is on the Travelpro Platinum Elite, one of our favorite carry-on rollaboard suitcases.
- North America > United States > California (0.05)
- Europe > Slovakia (0.05)
- Europe > Czechia (0.05)
- Asia > Philippines (0.05)
Gear News of the Week: Honor Teases a Bizarre Robot Phone, and Kohler Debuts a Toilet Sensor
Plus: Omega Moon watches land and Coros has a new mountain watch, July unveils a trackable suitcase, Fujifilm has a new Instax, GrapheneOS will work on non-Pixel phones soon, and Roku leans into AI. All products featured on WIRED are independently selected by our editors. However, we may receive compensation from retailers and/or from purchases of products through these links. Honor, a Chinese phone brand that primarily sells its devices in Europe and Asia, announced a new smartphone in its Magic series this week, dubbed the Magic8. It's notable because it's one of the first phones to be powered by the recently unveiled Qualcomm Snapdragon 8 Elite Gen 5, the flagship processor that will power many of the top Android phones in 2026.
- Europe (0.89)
- North America > United States (0.47)
- Information Technology (0.70)
- Leisure & Entertainment (0.69)
- Health & Medicine (0.69)
- (2 more...)
- Information Technology > Communications > Mobile (1.00)
- Information Technology > Artificial Intelligence (1.00)
The secrets of lost luggage auctions: I bought four bags for £100. What would I find inside?
A yellow suitcase draws me in like a beacon. It is stacked on a dark shelf at the back of Greasby's auction house in Tooting, south London, and looks brand new, with a hard exterior and wheels that Richard Stacey, a Greasby's regular who is dressed in shorts, a plaid shirt and a cream bucket hat, tells me to test. So I test them – and they work. If I was just buying a bag, that is all I would need to know. But this isn't just a bag: the zip is locked and when I lift it, it is heavy.
- Europe > United Kingdom > England > Greater London > London (0.14)
- North America > United States > New York (0.04)
- Transportation > Air (0.95)
- Consumer Products & Services > Travel (0.86)
- Transportation > Passenger (0.70)
Efficient Learning for Product Attributes with Compact Multimodal Models
Image-based product attribute prediction in e-commerce is a crucial task with numerous applications. Supervised fine-tuning of Vision Language Models (VLMs) faces significant scale challenges due to the cost of manual or API-based annotation. In this paper, we investigate label-efficient semi-supervised fine-tuning strategies for compact VLMs (2B-3B parameters) that leverage unlabeled product listings through Direct Preference Optimization (DPO). Beginning with a small set annotated via API, we first employ PEFT to train low-rank adapter modules. To update the adapter weights with unlabeled data, we generate multiple reasoning-and-answer chains per unlabeled sample and segregate these chains into preferred and dispreferred sets based on self-consistency. We then fine-tune the model with the DPO loss and use the updated model for the next iteration. By combining PEFT fine-tuning with DPO, our method achieves efficient convergence with minimal compute overhead. On a dataset spanning twelve e-commerce verticals, DPO-based fine-tuning, which uses only unlabeled data, demonstrates a significant improvement over the supervised model. Moreover, experiments show that accuracy with DPO training improves with more unlabeled data, indicating that a large pool of unlabeled samples can be effectively leveraged to improve performance.
- North America > United States (0.40)
- Europe > Switzerland (0.04)
- Information Technology > Artificial Intelligence > Machine Learning > Unsupervised or Indirectly Supervised Learning (0.78)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.47)
- Information Technology > Artificial Intelligence > Machine Learning > Inductive Learning (0.47)
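The semi-supervised loop described in the abstract above (sample several reasoning-and-answer chains, split them by self-consistency, then apply a DPO update) can be sketched roughly as follows. The majority-vote rule, function names, and `beta` value are illustrative assumptions, not the paper's code; only the DPO loss formula is the standard published one:

```python
import math
from collections import Counter

def split_by_self_consistency(chains):
    """Given (reasoning, answer) chains sampled for one unlabeled item,
    treat chains ending in the majority answer as preferred and the
    rest as dispreferred -- one plausible reading of the paper's
    self-consistency segregation step."""
    majority, _ = Counter(ans for _, ans in chains).most_common(1)[0]
    preferred = [c for c in chains if c[1] == majority]
    dispreferred = [c for c in chains if c[1] != majority]
    return preferred, dispreferred

def dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """Standard DPO loss for one (preferred, dispreferred) pair:
    -log sigmoid(beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))),
    where logp_* are sequence log-probs under the policy and ref_logp_*
    under the frozen reference model."""
    margin = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))
```

With zero margin the loss is log 2; it falls below that as the policy prefers the winning chain more than the reference does, which is what drives the adapter update in each iteration.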
ChatGPT can plan your dream getaway--if you know how to ask
Planning a trip takes time, and often it's more of a hassle than you'd like. If you don't feel like spending hours researching, you can simply outsource the first draft of your holiday plans to ChatGPT. The chatbot suggests travel destinations, creates daily plans, compares means of transport, reminds you of charging devices, and even virtually packs your suitcase. But how reliable are these suggestions? And can it actually save you money?
AI suitcase for visually impaired to be tested at expo
A demonstration of an artificial intelligence-powered suitcase, designed to assist visually impaired individuals as a robotic alternative to guide dogs, will be conducted at the Osaka Expo, set to open on Sunday. The latest model incorporates generative AI technology, enabling it to describe the surrounding environment through voice feedback. Equipped with a built-in camera and sensors, the suitcase can analyze its surroundings and provide real-time guidance to users. In late January, an AI suitcase was demonstrated at the National Museum of Emerging Science and Innovation, known as Miraikan, in Tokyo. Resembling a regular suitcase, the device activated when Chieko Asakawa, the museum's chief executive director and a key member of the development team, grasped its handle at hip level.
VIKSER: Visual Knowledge-Driven Self-Reinforcing Reasoning Framework
Zhang, Chunbai, Wang, Chao, Zhou, Yang, Peng, Yan
Visual reasoning refers to the task of solving questions about visual information. Current visual reasoning methods typically employ pre-trained vision-language model (VLM) strategies or deep neural network approaches. However, existing efforts are constrained by limited reasoning interpretability and hindered by the phenomenon of underspecification in the question text. Additionally, the absence of fine-grained visual knowledge limits the precise understanding of subject behavior in visual reasoning tasks. To address these issues, we propose VIKSER (Visual Knowledge-Driven Self-Reinforcing Reasoning Framework). Specifically, VIKSER, trained using knowledge distilled from large language models, extracts fine-grained visual knowledge with the assistance of visual relationship detection techniques. Subsequently, VIKSER utilizes this fine-grained visual knowledge to paraphrase underspecified questions. Additionally, we design a novel prompting method called Chain-of-Evidence (CoE), which leverages the power of ``evidence for reasoning'' to endow VIKSER with interpretable reasoning capabilities. Meanwhile, the integration of self-reflection technology empowers VIKSER with the ability to learn and improve from its mistakes. Experiments conducted on widely used datasets demonstrate that VIKSER achieves new state-of-the-art (SOTA) results in relevant tasks.
- Europe > Austria > Vienna (0.14)
- Asia > China > Shanghai > Shanghai (0.04)
- Asia > Middle East > Republic of Türkiye > Karaman Province > Karaman (0.04)
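The Chain-of-Evidence idea above, prompting the model with explicit fine-grained visual facts before asking the question, might be assembled roughly like this. The template wording and function name are our assumptions; the paper does not publish this exact prompt:

```python
def chain_of_evidence_prompt(question, visual_facts):
    """Illustrative Chain-of-Evidence-style prompt builder: list
    fine-grained visual facts as numbered evidence items, then ask
    for an answer that cites the evidence supporting each step."""
    evidence = "\n".join(
        f"Evidence {i + 1}: {fact}" for i, fact in enumerate(visual_facts)
    )
    return (
        f"{evidence}\n"
        f"Question: {question}\n"
        "Answer the question, citing which evidence items support each step."
    )
```

Making the evidence explicit is what gives the downstream reasoning an interpretable, checkable trail.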
Diverse and Accurate Image Description Using a Variational Auto-Encoder with an Additive Gaussian Encoding Space
Liwei Wang, Alexander Schwing, Svetlana Lazebnik
This paper explores image caption generation using conditional variational autoencoders (CVAEs). Standard CVAEs with a fixed Gaussian prior yield descriptions with too little variability. Instead, we propose two models that explicitly structure the latent space around K components corresponding to different types of image content, and combine components to create priors for images that contain multiple types of content simultaneously (e.g., several kinds of objects). Our first model uses a Gaussian Mixture model (GMM) prior, while the second one defines a novel Additive Gaussian (AG) prior that linearly combines component means. We show that both models produce captions that are more diverse and more accurate than a strong LSTM baseline or a "vanilla" CVAE with a fixed Gaussian prior, with AG-CVAE showing particular promise.
- North America > United States > Illinois (0.04)
- North America > United States > California > Los Angeles County > Long Beach (0.04)
- Europe > Spain > Catalonia > Barcelona Province > Barcelona (0.04)
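The Additive Gaussian prior in the abstract above linearly combines component means for whatever content types an image contains. A minimal sketch of that combination, with variable names and the normalization step as our assumptions rather than the authors' code:

```python
def ag_prior_mean(component_means, weights):
    """Additive-Gaussian-style prior mean: a linear combination of the
    K per-content-type component means, weighted by each content type's
    (normalized) presence in the image.

    component_means: list of K mean vectors (lists of floats).
    weights: K non-negative presence weights; normalized here to sum to 1.
    """
    total = sum(weights)
    norm = [w / total for w in weights]
    dim = len(component_means[0])
    return [
        sum(norm[k] * component_means[k][d] for k in range(len(norm)))
        for d in range(dim)
    ]
```

An image containing two content types in equal measure gets a prior centered midway between their component means, which is what lets the latent space cover multi-object images without a single fixed Gaussian.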
Consolidating Trees of Robotic Plans Generated Using Large Language Models to Improve Reliability
The inherent probabilistic nature of Large Language Models (LLMs) introduces an element of unpredictability, raising concerns about potential discrepancies in their output. This paper introduces an innovative approach that aims to generate correct and optimal robotic task plans for diverse real-world demands and scenarios. LLMs have been used to generate task plans, but they are unreliable and may contain wrong, questionable, or high-cost steps. The proposed approach uses an LLM to generate a number of task plans as trees and amalgamates them into a graph by removing questionable paths. Then an optimal task tree can be retrieved to circumvent questionable and high-cost nodes, thereby improving planning accuracy and execution efficiency. The approach is further improved by incorporating a large knowledge network. Leveraging GPT-4 further, the high-level task plan is converted into a low-level Planning Domain Definition Language (PDDL) plan executable by a robot. Evaluation results highlight the superior accuracy and efficiency of our approach compared to previous methodologies in the field of task planning.
- North America > United States > Florida > Hillsborough County > Tampa (0.14)
- South America > Uruguay > Maldonado > Maldonado (0.04)
- Asia > Middle East > Republic of Türkiye > Karaman Province > Karaman (0.04)
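The consolidation step above (merge several sampled plans into one graph, prune questionable transitions, then retrieve a lowest-cost path) can be sketched as follows. The support threshold, cost model, and all names are illustrative assumptions; the paper's actual consolidation is more elaborate:

```python
import heapq
from collections import defaultdict

def consolidate_plans(plans, min_support=2):
    """plans: each plan is a list of (step_name, step_cost) tuples from
    one LLM sample. Merges consecutive-step transitions into a weighted
    graph, drops 'questionable' edges seen in fewer than min_support
    plans, and returns (total_cost, step_sequence) from START to END
    via Dijkstra, or None if no supported path survives."""
    support = defaultdict(int)
    edge_cost = {}
    for plan in plans:
        nodes = [("START", 0.0)] + list(plan) + [("END", 0.0)]
        for (a, _), (b, c) in zip(nodes, nodes[1:]):
            support[(a, b)] += 1
            edge_cost[(a, b)] = c  # cost of entering step b

    graph = defaultdict(list)
    for (a, b), n in support.items():
        if n >= min_support:
            graph[a].append((b, edge_cost[(a, b)]))

    # Dijkstra over the pruned graph.
    heap, settled = [(0.0, "START", ["START"])], {}
    while heap:
        d, node, path = heapq.heappop(heap)
        if node in settled:
            continue
        settled[node] = (d, path)
        for nxt, c in graph[node]:
            if nxt not in settled:
                heapq.heappush(heap, (d + c, nxt, path + [nxt]))
    return settled.get("END")
```

A transition proposed by only one of the sampled plans is treated as questionable and pruned, so the retrieved path sticks to steps the samples agree on.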
Investigating Prompting Techniques for Zero- and Few-Shot Visual Question Answering
Awal, Rabiul, Zhang, Le, Agrawal, Aishwarya
In this paper, we explore effective prompting techniques to enhance zero- and few-shot Visual Question Answering (VQA) performance in contemporary Vision-Language Models (VLMs). Central to our investigation is the role of question templates in guiding VLMs to generate accurate answers. We identify that specific templates significantly influence VQA outcomes, underscoring the need for strategic template selection. Another pivotal aspect of our study is augmenting VLMs with image captions, providing them with additional visual cues alongside direct image features in VQA tasks. Surprisingly, this augmentation significantly improves the VLMs' performance in many cases, even though VLMs "see" the image directly! We explore chain-of-thought (CoT) reasoning and find that while standard CoT reasoning causes drops in performance, advanced methods like self-consistency can help recover it. Furthermore, we find that text-only few-shot examples enhance VLMs' alignment with the task format, particularly benefiting models prone to verbose zero-shot answers. Lastly, to mitigate the challenges associated with evaluating free-form open-ended VQA responses using string-matching based VQA metrics, we introduce a straightforward LLM-guided pre-processing technique to adapt the model responses to the expected ground-truth answer distribution. In summary, our research sheds light on the intricacies of prompting strategies in VLMs for VQA, emphasizing the synergistic use of captions, templates, and pre-processing to enhance model efficacy.
- North America > Canada > Quebec > Montreal (0.04)
- Europe > France > Île-de-France > Paris > Paris (0.04)
- Africa (0.04)
- Research Report > New Finding (0.66)
- Research Report > Experimental Study (0.48)
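Two of the techniques in the abstract above, self-consistency over sampled CoT answers and pre-processing free-form responses toward the ground-truth answer format, can be sketched together. The normalization here is a toy stand-in for the paper's LLM-guided pre-processing, and all names are our assumptions:

```python
from collections import Counter

def normalize(answer):
    """Toy stand-in for LLM-guided pre-processing: lowercase, strip
    trailing punctuation, and drop articles so free-form answers can
    match short ground-truth strings under string-matching metrics."""
    words = answer.lower().strip(" .!?").split()
    return " ".join(w for w in words if w not in {"a", "an", "the"})

def self_consistency_vote(sampled_answers):
    """Majority vote over normalized answers sampled from several
    chain-of-thought reasoning paths."""
    counts = Counter(normalize(a) for a in sampled_answers)
    return counts.most_common(1)[0][0]
```

Voting over several sampled paths is how self-consistency recovers the accuracy that a single noisy CoT sample can lose.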