raindrop
Unsupervised Raindrop Removal from a Single Image using Conditional Diffusion Models
Fazry, Lhuqita, Vito, Valentino
Raindrop removal is a challenging task in image processing. Removing raindrops while relying solely on a single image further increases the difficulty of the task. Common approaches include the detection of raindrop regions in the image, followed by performing a background restoration process conditioned on those regions. While various methods can be applied for the detection step, the most common architecture used for background restoration is the Generative Adversarial Network (GAN). Recent advances in the use of diffusion models have led to state-of-the-art image inpainting techniques. In this paper, we introduce a novel technique for raindrop removal from a single image using diffusion-based image inpainting.
- North America > United States > Louisiana > Orleans Parish > New Orleans (0.04)
- Asia > South Korea > Seoul > Seoul (0.04)
- Asia > Indonesia > Java > West Java > Depok (0.04)
- Asia > China > Yunnan Province > Kunming (0.04)
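The abstract above does not spell out how the inpainting is conditioned on the detected raindrop regions. A common recipe (RePaint-style conditioning, an illustrative assumption here rather than the authors' exact method) is to resample known background pixels from the forward process at each reverse step, while the sampler generates only the masked raindrop pixels. A minimal NumPy sketch of that conditioning step:

```python
import numpy as np

def inpaint_condition_step(x_t, x_known, mask, alpha_bar_t, rng):
    """Mask-guided conditioning for one reverse-diffusion step:
    raindrop pixels (mask == 1) keep the current sample x_t, while
    known background pixels are re-noised from the clean image to the
    same noise level, keeping both regions statistically consistent."""
    noise = rng.standard_normal(x_known.shape)
    # Forward-diffuse the known background to noise level t.
    x_known_t = np.sqrt(alpha_bar_t) * x_known + np.sqrt(1.0 - alpha_bar_t) * noise
    return mask * x_t + (1.0 - mask) * x_known_t

rng = np.random.default_rng(0)
background = rng.random((8, 8))        # clean pixels outside the raindrops
sample = rng.standard_normal((8, 8))   # current reverse-process sample
mask = np.zeros((8, 8))
mask[2:5, 2:5] = 1.0                   # detected raindrop region
out = inpaint_condition_step(sample, background, mask, alpha_bar_t=0.9, rng=rng)
```

In a full sampler this step would run after each denoiser update; the denoiser network itself is omitted from the sketch.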
REHEARSE-3D: A Multi-modal Emulated Rain Dataset for 3D Point Cloud De-raining
Raisuddin, Abu Mohammed, Holmblad, Jesper, Haghighi, Hamed, Poledna, Yuri, Drechsler, Maikol Funk, Donzella, Valentina, Aksoy, Eren Erdal
Sensor degradation poses a significant challenge in autonomous driving. During heavy rainfall, the interference from raindrops can adversely affect the quality of LiDAR point clouds, resulting in, for instance, inaccurate point measurements. This, in turn, can potentially lead to safety concerns if autonomous driving systems are not weather-aware, i.e., if they are unable to discern such changes. In this study, we release a new, large-scale, multi-modal emulated rain dataset, REHEARSE-3D, to promote research advancements in 3D point cloud de-raining. Distinct from the most relevant competitors, our dataset is unique in several respects. First, it is the largest point-wise annotated dataset, and second, it is the only one with high-resolution LiDAR data (LiDAR-256) enriched with 4D Radar point clouds logged in both daytime and nighttime conditions in a controlled weather environment. Furthermore, REHEARSE-3D involves rain-characteristic information, which is of significant value not only for sensor noise modeling but also for analyzing the impact of weather at a point level. Leveraging REHEARSE-3D, we benchmark raindrop detection and removal in fused LiDAR and 4D Radar point clouds. Our comprehensive study further evaluates the performance of various statistical and deep-learning models. Upon publication, the dataset and benchmark models will be made publicly available at: https://sporsho.github.io/REHEARSE3D.
- Europe > Sweden > Halland County > Halmstad (0.04)
- Europe > Germany > Bavaria > Upper Bavaria > Ingolstadt (0.04)
- North America > United States > Michigan (0.04)
- Europe > United Kingdom > England > West Midlands > Coventry (0.04)
- Transportation > Ground > Road (0.55)
- Information Technology > Robotics & Automation (0.55)
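The specific statistical models benchmarked in the paper are not listed here, but a classic point-cloud de-raining baseline of the kind such a benchmark evaluates flags sparse, isolated returns (typical of airborne raindrops) via statistical outlier removal on k-nearest-neighbour distances. An illustrative stand-in, not the paper's method:

```python
import numpy as np

def statistical_outlier_mask(points, k=4, std_ratio=1.0):
    """Flag isolated points whose mean distance to their k nearest
    neighbours exceeds the global mean by std_ratio standard deviations.
    Returns a boolean mask: True = keep, False = likely rain noise."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)                 # ignore self-distance
    knn_mean = np.sort(d, axis=1)[:, :k].mean(axis=1)
    thresh = knn_mean.mean() + std_ratio * knn_mean.std()
    return knn_mean <= thresh

# Dense cluster of surface returns plus one isolated "raindrop" return.
rng = np.random.default_rng(1)
cluster = rng.random((20, 3))
pts = np.vstack([cluster, [[10.0, 10.0, 10.0]]])
keep = statistical_outlier_mask(pts)
```

The O(N²) distance matrix is only for clarity; a real implementation would use a KD-tree.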
Gradient-Guided Parameter Mask for Multi-Scenario Image Restoration Under Adverse Weather
Guo, Jilong, Yang, Haobo, Zhou, Mo, Zhang, Xinyu
Removing adverse weather conditions such as rain, raindrops, and snow from images is critical for various real-world applications, including autonomous driving, surveillance, and remote sensing. However, existing multi-task approaches typically rely on augmenting the model with additional parameters to handle multiple scenarios. While this enables the model to address diverse tasks, the introduction of extra parameters significantly complicates its practical deployment. In this paper, we propose a novel Gradient-Guided Parameter Mask for Multi-Scenario Image Restoration under adverse weather, designed to effectively handle image degradation under diverse weather conditions without additional parameters. Our method segments model parameters into common and specific components by evaluating the gradient variation intensity during training for each specific weather condition. This enables the model to precisely and adaptively learn relevant features for each weather scenario, improving both efficiency and effectiveness without compromising on performance. This method constructs specific masks based on gradient fluctuations to isolate parameters influenced by other tasks, ensuring that the model achieves strong performance across all scenarios without adding extra parameters. We demonstrate the state-of-the-art performance of our framework through extensive experiments on multiple benchmark datasets. Specifically, our method achieves PSNR scores of 29.22 on the Raindrop dataset, 30.76 on the Rain dataset, and 29.56 on the Snow100K dataset. Code is available at: \href{https://github.com/AierLab/MultiTask}{https://github.com/AierLab/MultiTask}.
- Europe > Germany > Bavaria > Upper Bavaria > Munich (0.04)
- Asia > China > Heilongjiang Province > Harbin (0.04)
- Transportation (0.67)
- Information Technology (0.49)
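The abstract describes partitioning parameters into common and task-specific sets by the intensity of gradient variation across weather conditions. A toy NumPy sketch of that idea (the variance criterion and quantile threshold are illustrative assumptions; the paper's exact masking rule may differ):

```python
import numpy as np

def split_common_specific(task_grads, quantile=0.7):
    """Partition parameters by gradient-variation intensity across tasks:
    high cross-task variance means a parameter is pulled in conflicting
    directions by different weather conditions -> mark it task-specific."""
    variation = np.var(task_grads, axis=0)   # per-parameter variance over tasks
    cutoff = np.quantile(variation, quantile)
    specific = variation > cutoff
    return ~specific, specific               # (common mask, specific mask)

# Toy setting: gradients from 3 weather tasks (rain / raindrop / snow)
# for 10 scalar parameters; parameter 7 receives conflicting updates.
rng = np.random.default_rng(0)
grads = rng.normal(0.0, 0.1, size=(3, 10))
grads[:, 7] = [1.0, -1.0, 0.5]
common, specific = split_common_specific(grads)
```

During training, the specific mask would gate which parameters each weather task is allowed to update.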
ERIC: Estimating Rainfall with Commodity Doorbell Camera for Precision Residential Irrigation
Liu, Tian, Jin, Liuyi, Stoleru, Radu, Haroon, Amran, Swanson, Charles, Feng, Kexin
Current state-of-the-art residential irrigation systems, such as WaterMyYard, rely on rainfall data from nearby weather stations to adjust irrigation amounts. However, the accuracy of rainfall data is compromised by the limited spatial resolution of rain gauges and the significant variability of hyperlocal rainfall, leading to substantial water waste. To improve irrigation efficiency, we developed a cost-effective irrigation system, dubbed ERIC, which employs machine learning models to estimate rainfall from commodity doorbell camera footage and optimizes irrigation schedules without human intervention. Specifically, we: a) designed novel visual and audio features with lightweight neural network models to infer rainfall from the camera at the edge, preserving user privacy; b) built a complete end-to-end irrigation system on Raspberry Pi 4, costing only \$75. We deployed the system across five locations (collecting over 750 hours of video) with varying backgrounds and light conditions. Comprehensive evaluation validates that ERIC achieves state-of-the-art rainfall estimation performance ($\sim$ 5mm/day), saving 9,112 gallons/month of water, translating to \$28.56/month in utility savings. Data and code are available at https://github.com/LENSS/ERIC-BuildSys2024.git
- North America > United States > California (0.14)
- North America > United States > Texas > Brazos County > College Station (0.05)
- Asia > Middle East > Yemen > Amran Governorate > Amran (0.04)
- (5 more...)
- Information Technology (1.00)
- Food & Agriculture > Agriculture (0.93)
- Government > Regional Government > North America Government > United States Government (0.93)
Strong and Controllable Blind Image Decomposition
Zhang, Zeyu, Han, Junlin, Gou, Chenhui, Li, Hongdong, Zheng, Liang
Blind image decomposition aims to decompose all components present in an image, typically used to restore a multi-degraded input image. While fully recovering the clean image is appealing, in some scenarios, users might want to retain certain degradations, such as watermarks, for copyright protection. To address this need, we add controllability to the blind image decomposition process, allowing users to specify which types of degradation to remove or retain. We design an architecture named controllable blind image decomposition network. Inserted in the middle of a U-Net structure, our method first decomposes the input feature maps and then recombines them according to user instructions. Advantageously, this functionality is implemented at minimal computational cost: decomposition and recombination are all parameter-free. Experimentally, our system excels in blind image decomposition tasks and can output partially or fully restored images that well reflect user intentions. Furthermore, we evaluate and configure different options for the network structure and loss functions. This, combined with the proposed decomposition-and-recombination method, yields an efficient and competitive system for blind image decomposition, compared with current state-of-the-art methods.
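The parameter-free decompose-and-recombine step can be illustrated with a toy sketch (the component names and additive recombination below are assumptions for illustration, not the paper's exact feature-map operation):

```python
import numpy as np

def recombine(clean, components, keep):
    """Parameter-free recombination: start from the clean feature map and
    add back only the degradation components the user asked to retain
    (e.g. keep a watermark for copyright, drop the rain)."""
    out = clean.copy()
    for name, fmap in components.items():
        if name in keep:
            out = out + fmap
    return out

clean = np.ones((1, 4, 4))
components = {
    "rain": np.full((1, 4, 4), 0.5),
    "watermark": np.full((1, 4, 4), 0.25),
}
# User instruction: retain the watermark, remove the rain.
restored = recombine(clean, components, keep={"watermark"})
```

Because selection and summation involve no learned weights, the controllability indeed costs no extra parameters, matching the abstract's claim.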
RIPPLE: Concept-Based Interpretation for Raw Time Series Models in Education
Asadi, Mohammad, Swamy, Vinitra, Frej, Jibril, Vignoud, Julien, Marras, Mirko, Käser, Tanja
Time series is the most prevalent form of input data for educational prediction tasks. The vast majority of research using time series data focuses on hand-crafted features, designed by experts for predictive performance and interpretability. However, extracting these features is labor-intensive for humans and computers. In this paper, we propose an approach that utilizes irregular multivariate time series modeling with graph neural networks to achieve comparable or better accuracy with raw time series clickstreams in comparison to hand-crafted features. Furthermore, we extend concept activation vectors for interpretability in raw time series models. We analyze these advances in the education domain, addressing the task of early student performance prediction for downstream targeted interventions and instructional support. Our experimental analysis on 23 MOOCs with millions of combined interactions over six behavioral dimensions shows that models designed with our approach can (i) beat state-of-the-art educational time series baselines with no feature extraction and (ii) provide interpretable insights for personalized interventions. Source code: https://github.com/epfl-ml4ed/ripple/.
- Research Report > New Finding (0.93)
- Instructional Material > Course Syllabus & Notes (0.93)
- Education > Educational Setting > Online (1.00)
- Education > Educational Technology > Educational Software > Computer Based Training (0.52)
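Concept activation vectors (CAVs) can be sketched briefly. TCAV proper fits a linear classifier between activations of concept examples and random counter-examples and uses its normal vector; the difference-of-means direction below is a common lightweight stand-in, and all names and toy data are assumptions for illustration:

```python
import numpy as np

def concept_activation_vector(concept_acts, random_acts):
    """Normalised direction separating concept-example activations from
    random counter-example activations (difference-of-means variant)."""
    v = concept_acts.mean(axis=0) - random_acts.mean(axis=0)
    return v / np.linalg.norm(v)

def concept_sensitivity(logit_grads, cav):
    """Directional derivative of the prediction along the concept
    direction; the fraction of positive values is the TCAV score."""
    s = logit_grads @ cav
    return (s > 0).mean()

rng = np.random.default_rng(0)
concept = rng.normal(1.0, 0.1, size=(50, 8))  # activations for concept inputs
random_ = rng.normal(0.0, 0.1, size=(50, 8))  # activations for random inputs
cav = concept_activation_vector(concept, random_)
grads = rng.normal(0.2, 0.1, size=(100, 8))   # toy gradients of the output
score = concept_sensitivity(grads, cav)
```

For raw time-series models, the activations would come from an intermediate layer of the sequence encoder rather than an image network.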
5 free tech tools for staying organized
If you're struggling to stay on top of your tasks or keep track of your notes, maybe what you need are some new tools. I'm always looking for better ways to stay organized. When I find a new app that sounds promising, I pit it against my existing tools in a game of survival of fittest, leaving only the ones that work best for me. These are currently the five services I rely on the most for note-taking, bookmarking, and task management. As we head into the new year, perhaps they'll provide just the kind of fresh inspiration you're looking for.
Potential Auto-driving Threat: Universal Rain-removal Attack
Hu, Jincheng, Li, Jihao, Hou, Zhuoran, Jiang, Jingjing, Liu, Cunjia, Zhang, Yuanjian
The problem of robustness in adverse weather conditions is considered a significant challenge for computer vision algorithms in the application of autonomous driving. Image rain removal algorithms are a general solution to this problem. They find a deep connection between raindrops/rain-streaks and images by mining the hidden features and restoring information about the rain-free environment based on the powerful representation capabilities of neural networks. However, previous research has focused on architecture innovations and has yet to consider the vulnerability issues that already exist in neural networks. This research gap hints at a potential security threat geared toward the intelligent perception of autonomous driving in the rain. In this paper, we propose a universal rain-removal attack (URA) that exploits the vulnerability of image rain-removal algorithms by generating a non-additive spatial perturbation that significantly reduces the similarity and image quality of scene restoration. Notably, this perturbation is difficult for humans to recognise and is also the same for different target images. Thus, URA could be considered a critical tool for the vulnerability detection of image rain-removal algorithms. It could also be developed as a real-world artificial intelligence attack method. Experimental results show that URA can reduce the scene repair capability by 39.5% and the image generation quality by 26.4%, targeting the state-of-the-art (SOTA) single-image rain-removal algorithms currently available.
- Europe > United Kingdom > England > Leicestershire > Loughborough (0.04)
- Europe > Germany (0.04)
- Information Technology > Security & Privacy (1.00)
- Automobiles & Trucks (1.00)
- Transportation > Ground > Road (0.89)
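URA itself learns a non-additive spatial perturbation; the additive sketch below (the toy restorer and all names are illustrative assumptions) only conveys the "one perturbation for every image" principle behind universal attacks:

```python
import numpy as np

def universal_perturbation(images, grad_fn, steps=50, lr=0.05, eps=0.2):
    """Gradient-ascent sketch of a *universal* attack: one perturbation
    delta, shared by every image, is pushed up the restoration-error
    gradient, then clipped to an L-infinity ball of radius eps."""
    delta = np.zeros_like(images[0])
    for _ in range(steps):
        g = np.mean([grad_fn(x, delta) for x in images], axis=0)
        delta = np.clip(delta + lr * np.sign(g), -eps, eps)
    return delta

# Toy differentiable "restorer" that halves its input, so the error
# gradient w.r.t. delta has the closed form restore(x + delta) - x.
restore = lambda x: 0.5 * x
grad_fn = lambda x, d: restore(x + d) - x

rng = np.random.default_rng(0)
images = [rng.random((4, 4)) for _ in range(8)]
delta = universal_perturbation(images, grad_fn)
baseline = np.mean([((restore(x) - x) ** 2).sum() for x in images])
attacked = np.mean([((restore(x + delta) - x) ** 2).sum() for x in images])
```

Against a real rain-removal network, `grad_fn` would be obtained by backpropagation through the model rather than in closed form.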
"Hello, It's Me": Deep Learning-based Speech Synthesis Attacks in the Real World
When sunlight strikes raindrops in the air, they act like a prism and form a rainbow. The rainbow is a division of white light into many beautiful colors. These take the shape of a long round arch, with its path high above, and its two ends apparently beyond the horizon. There is, according to legend, a boiling pot of gold at one end. People look but no one ever finds it.
Fast Image-Anomaly Mitigation for Autonomous Mobile Robots
Fumagalli, Gianmario, Huber, Yannick, Dymczyk, Marcin, Siegwart, Roland, Dubé, Renaud
Camera anomalies like rain or dust can severely degrade image quality and its related tasks, such as localization and segmentation. In this work we address this important issue by implementing a pre-processing step that can effectively mitigate such artifacts in a real-time fashion, thus supporting the deployment of autonomous systems with limited compute capabilities. We propose a shallow generator with aggregation, trained in an adversarial setting to solve the ill-posed problem of reconstructing the occluded regions. We add an enhancer to further preserve high-frequency details and image colorization. We also produce one of the largest publicly available datasets to train our architecture and use realistic synthetic raindrops to obtain an improved initialization of the model. We benchmark our framework on existing datasets and on our own images obtaining state-of-the-art results while enabling real-time performance, with up to 40x faster inference time than existing approaches.
- Europe > Switzerland > Zürich > Zürich (0.04)
- North America > United States > California > San Diego County > San Diego (0.04)
- Europe > Italy > Calabria > Catanzaro Province > Catanzaro (0.04)
- Asia > Middle East > Jordan (0.04)
- Information Technology > Sensing and Signal Processing > Image Processing (1.00)
- Information Technology > Artificial Intelligence > Robots (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (1.00)
- Information Technology > Artificial Intelligence > Vision (0.98)
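The synthetic-raindrop initialization mentioned above can be given a crude flavour: overlay soft circular occlusions on clean images before pre-training an inpainting model. This alpha-blended Gaussian disc (names, radii, and blend targets are illustrative assumptions, far simpler than realistic refraction-based raindrop rendering) sketches the idea:

```python
import numpy as np

def add_synthetic_raindrops(image, centers, radius=3, alpha=0.6):
    """Overlay a soft bright disc at each centre by alpha-blending toward
    white -- a crude stand-in for realistic synthetic raindrops used to
    pre-train a reconstruction model on known occlusion locations."""
    h, w = image.shape
    yy, xx = np.mgrid[0:h, 0:w]
    out = image.copy()
    for cy, cx in centers:
        d2 = (yy - cy) ** 2 + (xx - cx) ** 2
        weight = alpha * np.exp(-d2 / (2 * radius ** 2))  # soft disc falloff
        out = (1.0 - weight) * out + weight * 1.0         # blend toward white
    return out

img = np.zeros((16, 16))
corrupted = add_synthetic_raindrops(img, centers=[(8, 8)])
```

A benefit of synthesis is that the occlusion mask is known exactly, giving dense supervision for the reconstruction loss.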