hue
Major Philips Hue leak reveals 'Pro' hub with a killer feature
Philips Hue appears to be teeing up a new, more powerful hub that can turn Hue bulbs into motion sensors, according to leaked details and images that briefly appeared on Philips Hue's own website. The unannounced products, which have since been yanked from the "New on Hue" page, included the "faster" Hue Bridge Pro as well as a wired video doorbell, a refreshed and more efficient A19 bulb, permanent and globe-style versions of Hue's Festavia outdoor string lights, a gradient light strip, and the ability to control your Hue lights with the Sonos voice assistant. No pricing details were included in the leak, which was live on the Hue website for several hours Wednesday. The leaked products were initially spotted by users on Reddit. Reached by TechHive, a Philips Hue spokesperson declined to comment.
- Information Technology > Communications > Networks (0.39)
- Information Technology > Artificial Intelligence > Speech > Speech Recognition (0.37)
Scientists recreate lost recipes for a 5,000-year-old Egyptian blue dye
Despite being the world's oldest known synthetic pigment, Egyptian blue's original recipes remain a mystery. The approximately 5,000-year-old dye wasn't a single color, but instead encompassed a range of hues, from deep blues to duller grays and greens. Artisans first crafted Egyptian blue during the Fourth Dynasty (roughly 2613 to 2494 BCE) from recipes reliant on calcium-copper silicate. These techniques were later adopted by the Romans in lieu of more expensive materials like lapis lazuli and turquoise.
Is Philips Hue about to unleash its first video doorbell?
Philips Hue appears ready to expand its line of home security devices, with clues to the manufacturer's plans hidden within its own app. The leak details an unannounced product that would round out Hue's existing catalog of security cameras, floodlights, and motion and contact sensors: a video doorbell, a logical next step for the brand's smart security lineup. As HueBlog.com reports, intel about the purported doorbell was discovered by a reader doing a deep dive into the Philips Hue app. Details about the device remain sketchy, but we can be reasonably sure it's in the pipeline. For starters, it appears the doorbell offers both Bluetooth and Wi-Fi connectivity, with the former designed to aid discovery during setup.
Exploratory Data Analysis
Exploratory Data Analysis (EDA) is an approach used by data scientists to analyze datasets and summarize their main characteristics, often with the help of data visualization methods. It helps data scientists discover patterns and trends, test hypotheses, and check assumptions. The main purpose of EDA is to look at the data before making any assumptions. It can help identify the trends, patterns, and relationships within the data, and data scientists can use it to ensure the results they produce are valid and applicable to the desired business outcomes and goals.
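As a minimal illustration of the workflow described above, the sketch below runs a few standard EDA summaries with pandas on a small invented dataset (the column names and values are hypothetical, chosen only for demonstration):

```python
# Minimal EDA sketch with pandas; the dataset is invented for illustration.
import pandas as pd

df = pd.DataFrame({
    "price": [10.0, 12.5, 11.0, 95.0, 12.0],   # one suspicious outlier
    "units": [100, 120, 115, 3, 118],
})

print(df.describe())     # summary statistics reveal the unusual price spread
print(df.isna().sum())   # count missing values per column
print(df.corr())         # pairwise correlations hint at relationships
```

Inspecting summaries like these before modeling is exactly the "look at the data before making assumptions" step: here, the 95.0 price paired with only 3 units would prompt a check for a data-entry error before any downstream analysis.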
ABANICCO: A New Color Space for Multi-Label Pixel Classification and Color Segmentation
Nicolás-Sáenz, Laura, Ledezma, Agapito, Pascau, Javier, Muñoz-Barrutia, Arrate
In any computer vision task involving color images, a necessary step is classifying pixels according to color and segmenting the respective areas. However, the development of methods able to successfully complete this task has proven challenging, mainly due to the gap between human color perception, linguistic color terms, and digital representation. In this paper, we propose a novel method combining geometric analysis of color theory, fuzzy color spaces, and multi-label systems for the automatic classification of pixels according to 12 standard color categories (Green, Yellow, Light Orange, Deep Orange, Red, Pink, Purple, Ultramarine, Blue, Teal, Brown, and Neutral). Moreover, we present a robust, unsupervised, unbiased strategy for color naming based on statistics and color theory. ABANICCO was tested against the state of the art in color classification and with the standardized ISCC-NBS color system, providing accurate classification and a standard, easily understandable alternative for hue naming recognizable by humans and machines. We expect this solution to become the base to successfully tackle a myriad of problems in all fields of computer vision, such as region characterization, histopathology analysis, fire detection, product quality prediction, object description, and hyperspectral imaging.
- Europe > Spain > Galicia > Madrid (0.04)
- South America (0.04)
- North America > United States > California (0.04)
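To make the underlying idea concrete, the sketch below bins the HSV hue angle into coarse linguistic color categories. This is not the ABANICCO method itself (which uses geometric analysis, fuzzy color spaces, and multi-label assignment); it only illustrates the basic hue-to-name mapping the paper builds on, and the bin edges are assumptions:

```python
# Simplified hue-to-color-name mapping; NOT the ABANICCO algorithm.
# Bin edges (in degrees on the hue wheel) are illustrative assumptions.
import colorsys

HUE_BINS = [  # (upper bound in degrees, color name)
    (15, "Red"), (45, "Orange"), (70, "Yellow"), (170, "Green"),
    (200, "Teal"), (260, "Blue"), (290, "Purple"), (330, "Pink"), (360, "Red"),
]

def name_pixel(r, g, b):
    """Return a coarse color name for an RGB pixel (channels in 0..1)."""
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    if s < 0.15 or v < 0.15:          # low saturation or value -> achromatic
        return "Neutral"
    deg = h * 360.0
    for upper, name in HUE_BINS:
        if deg <= upper:
            return name
    return "Neutral"

print(name_pixel(1.0, 0.0, 0.0))  # -> Red
print(name_pixel(0.0, 0.0, 1.0))  # -> Blue
print(name_pixel(0.5, 0.5, 0.5))  # -> Neutral (gray has zero saturation)
```

Hard bin edges like these are precisely where such naive schemes fail near category boundaries, which is the motivation for the fuzzy color spaces ABANICCO employs.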
HUE: Pretrained Model and Dataset for Understanding Hanja Documents of Ancient Korea
Yoo, Haneul, Jin, Jiho, Son, Juhee, Bak, JinYeong, Cho, Kyunghyun, Oh, Alice
Historical records in Korea before the 20th century were primarily written in Hanja, an extinct language based on Chinese characters and not understood by modern Korean or Chinese speakers. Historians with expertise in this time period have been analyzing the documents, but that process is very difficult and time-consuming, and language models would significantly speed it up. Toward building and evaluating language models for Hanja, we release the Hanja Understanding Evaluation dataset consisting of chronological attribution, topic classification, named entity recognition, and summary retrieval tasks. We also present BERT-based models with continued training on the two major corpora from the 14th to the 19th centuries: the Annals of the Joseon Dynasty and Diaries of the Royal Secretariats. We compare the models with several baselines on all tasks and show significant improvements gained by training on the two corpora. Additionally, we run zero-shot experiments on the Daily Records of the Royal Court and Important Officials (DRRI). The DRRI dataset has not been studied much by historians, and not at all by the NLP community.
- Asia > China (0.04)
- Asia > South Korea (0.04)
- North America > United States > New York (0.04)
- (2 more...)
DCCF: Deep Comprehensible Color Filter Learning Framework for High-Resolution Image Harmonization
Xue, Ben, Ran, Shenghui, Chen, Quan, Jia, Rongfei, Zhao, Binqiang, Tang, Xing
Image color harmonization algorithms aim to automatically match the color distribution of foreground and background images captured in different conditions. Previous deep learning based models neglect two issues that are critical for practical applications, namely high-resolution (HR) image processing and model comprehensibility. In this paper, we propose a novel Deep Comprehensible Color Filter (DCCF) learning framework for high-resolution image harmonization. Specifically, DCCF first downsamples the original input image to its low-resolution (LR) counterpart, then learns four human-comprehensible neural filters (i.e. hue, saturation, value, and attentive rendering filters) in an end-to-end manner, and finally applies these filters to the original input image to get the harmonized result. Benefiting from the comprehensible neural filters, we can provide a simple yet efficient handler for users to cooperate with the deep model to get the desired results with very little effort when necessary. Extensive experiments demonstrate the effectiveness of the DCCF learning framework: it outperforms the state-of-the-art post-processing method on the iHarmony4 dataset at full resolution, achieving 7.63% and 1.69% relative improvements on MSE and PSNR respectively.
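The pipeline above can be sketched in miniature: predict simple, human-readable HSV adjustments (cheaply, at low resolution) and then apply them to the full-resolution image. The filter parameters below are hard-coded stand-ins for what the network would learn, and the real DCCF's attentive rendering filter is omitted entirely:

```python
# Sketch of DCCF-style "comprehensible" filters: global HSV adjustments.
# The parameter values here are assumptions standing in for learned outputs.
import colorsys

def apply_hsv_filters(pixels, hue_shift=0.02, sat_scale=1.1, val_scale=0.95):
    """Apply hue/saturation/value filters to a list of RGB pixels (0..1)."""
    out = []
    for r, g, b in pixels:
        h, s, v = colorsys.rgb_to_hsv(r, g, b)
        h = (h + hue_shift) % 1.0       # hue filter: rotate the hue wheel
        s = min(1.0, s * sat_scale)     # saturation filter: scale chroma
        v = min(1.0, v * val_scale)     # value filter: scale brightness
        out.append(colorsys.hsv_to_rgb(h, s, v))
    return out

foreground = [(0.8, 0.4, 0.2), (0.1, 0.5, 0.9)]
print(apply_hsv_filters(foreground))
```

Because each filter is a global parametric operation, it can be estimated on a downsampled image and then applied at full resolution without any loss, which is the key to the framework's HR processing; the parameters are also directly interpretable, so a user can nudge them by hand.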
Real Time Video based Heart and Respiration Rate Monitoring
Pourbemany, Jafar, Essa, Almabrok, Zhu, Ye
In recent years, research on monitoring vital signs with smartphones has grown significantly. Dedicated sensors such as electrocardiogram (ECG) and photoplethysmography (PPG) sensors can detect heart rate (HR) and respiration rate (RR). Smartphone cameras can also measure HR by detecting and processing imaging photoplethysmographic (iPPG) signals from the video of a user's face; indeed, the variation in the intensity of the green channel can be measured from such signals. This study aimed to provide a method to extract heart rate and respiration rate from video of individuals' faces. The proposed method is based on measuring fluctuations in the Hue, and can therefore extract both HR and RR from the video of a user's face. The method was evaluated on 25 healthy individuals; for each subject, a 20-second video of their face was recorded. Results show that the proposed approach of measuring iPPG using Hue gives more accurate rates than the green channel.
- North America > United States > Ohio > Cuyahoga County > Cleveland (0.04)
- Europe > Sweden > Östergötland County > Linköping (0.04)
- Europe > Sweden > Skåne County > Malmö (0.04)
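The core signal-processing step can be sketched as follows: given the mean Hue of the face region in each frame, find the dominant frequency in the cardiac band and convert it to beats per minute. Face detection, Hue extraction, and the paper's filtering details are omitted; the synthetic signal below (an assumed 30 fps camera and an injected 1.2 Hz cardiac component) stands in for a real recording:

```python
# Hedged sketch of hue-based HR estimation: dominant frequency of the mean-
# hue-per-frame time series within the heart-rate band. Signal is synthetic.
import math
import cmath

FPS = 30.0   # assumed camera frame rate
N = 600      # 20 seconds of video, as in the study

# Synthetic hue trace: 1.2 Hz cardiac component (72 bpm) plus a slow drift.
hue = [0.35 + 0.01 * math.sin(2 * math.pi * 1.2 * n / FPS)
       + 0.002 * math.sin(2 * math.pi * 0.25 * n / FPS) for n in range(N)]

def heart_rate_bpm(signal, fps, lo_hz=0.7, hi_hz=3.0):
    """Return the strongest DFT frequency in [lo_hz, hi_hz], in bpm."""
    n = len(signal)
    mean = sum(signal) / n
    centered = [x - mean for x in signal]        # remove the DC offset
    best_k, best_mag = None, -1.0
    for k in range(1, n // 2):
        f = k * fps / n
        if lo_hz <= f <= hi_hz:                  # restrict to the cardiac band
            coeff = sum(centered[t] * cmath.exp(-2j * math.pi * k * t / n)
                        for t in range(n))
            if abs(coeff) > best_mag:
                best_k, best_mag = k, abs(coeff)
    return best_k * fps / n * 60.0

print(heart_rate_bpm(hue, FPS))  # ~72 bpm, matching the injected component
```

The slow 0.25 Hz drift falls below the 0.7 Hz band edge and is ignored; in the same spirit, respiration rate would be read from a lower frequency band of the same hue signal.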
This Looks Like That, Because ... Explaining Prototypes for Interpretable Image Recognition
Nauta, Meike, Jutte, Annemarie, Provoost, Jesper, Seifert, Christin
Image recognition with prototypes is considered an interpretable alternative to black-box deep learning models. Classification depends on the extent to which a test image "looks like" a prototype. However, perceptual similarity for humans can differ from the similarity learnt by the model. A user is unaware of the underlying classification strategy and does not know which image characteristic (e.g., color or shape) is dominant for the decision. We address this ambiguity and argue that prototypes should be explained. Only visualizing prototypes can be insufficient for understanding what a prototype exactly represents, and why a prototype and an image are considered similar. We improve interpretability by automatically enhancing prototypes with extra information about visual characteristics considered important by the model. Specifically, our method quantifies the influence of color hue, shape, texture, contrast, and saturation in a prototype. We apply our method to the existing Prototypical Part Network (ProtoPNet) and show that our explanations clarify the meaning of a prototype which might otherwise have been interpreted incorrectly. We also reveal that visually similar prototypes can have the same explanations, indicating redundancy. Because of the generality of our approach, it can improve the interpretability of any similarity-based method for prototypical image recognition.
- Oceania > Australia > New South Wales > Sydney (0.14)
- Europe > Netherlands (0.04)
- North America > United States > Louisiana > Orleans Parish > New Orleans (0.04)
- (4 more...)
- Information Technology > Sensing and Signal Processing > Image Processing (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Pattern Recognition > Image Matching (0.83)
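The influence-quantification idea in the last abstract can be sketched generically: perturb one visual characteristic (here, hue) in an image and measure how much the prototype similarity score drops. The similarity function below is a toy stand-in, not ProtoPNet's learned metric, and the patches are invented:

```python
# Hedged sketch: estimate hue's influence on a prototype by perturbation.
# `similarity` is a toy RGB metric, NOT the model's learned similarity.
import colorsys

def shift_hue(pixels, amount):
    """Rotate the hue of RGB pixels (channels in 0..1) by `amount` (0..1)."""
    out = []
    for r, g, b in pixels:
        h, s, v = colorsys.rgb_to_hsv(r, g, b)
        out.append(colorsys.hsv_to_rgb((h + amount) % 1.0, s, v))
    return out

def similarity(a, b):
    """Toy similarity: negative mean squared RGB distance (higher = closer)."""
    return -sum((x - y) ** 2
                for pa, pb in zip(a, b)
                for x, y in zip(pa, pb)) / len(a)

def hue_importance(prototype, image, amount=0.25):
    """Drop in similarity when the image's hue is rotated: larger = hue matters."""
    return similarity(prototype, image) - similarity(prototype, shift_hue(image, amount))

proto = [(0.9, 0.1, 0.1)] * 4   # a "red" prototype patch (hypothetical)
img   = [(0.85, 0.15, 0.1)] * 4 # a similar red image patch
print(hue_importance(proto, img))  # positive: hue shift hurts this match
```

Repeating the same perturb-and-compare probe for shape, texture, contrast, and saturation yields a per-characteristic importance profile for each prototype, which is the kind of explanation the paper attaches to ProtoPNet's prototypes.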