Baxter Permutation Process
In this paper, a Bayesian nonparametric (BNP) model for Baxter permutations (BPs), termed the BP process (BPP), is proposed and applied to relational data analysis. BPs are a well-studied class of permutations, and it has been demonstrated that there is a one-to-one correspondence between BPs and several interesting objects, including floorplan partitioning (FP), which constitutes a subset of rectangular partitioning (RP). Accordingly, the BPP can be used as an FP model. We combine the BPP with a multi-dimensional extension of the stick-breaking process, called the {\it block-breaking process}, to fill the gap between FP and RP and obtain a stochastic process on arbitrary RPs. Compared with conventional BNP models for arbitrary RPs, the proposed model is simpler and has a high affinity with Bayesian inference.
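The block-breaking process in this abstract is a multi-dimensional extension of the standard stick-breaking construction. As background, the one-dimensional stick-breaking construction of Dirichlet-process weights can be sketched as follows; this is a minimal illustration of the base construction only, not the paper's block-breaking model, and the function name is our own:

```python
import random

def stick_breaking(alpha, num_weights, rng=random.Random(0)):
    """Sample the first num_weights weights of a stick-breaking
    construction: pi_k = v_k * prod_{j<k}(1 - v_j), v_k ~ Beta(1, alpha)."""
    weights = []
    remaining = 1.0  # length of the stick not yet broken off
    for _ in range(num_weights):
        v = rng.betavariate(1.0, alpha)  # fraction of the remainder to break off
        weights.append(remaining * v)
        remaining *= 1.0 - v
    return weights

weights = stick_breaking(alpha=2.0, num_weights=10)
# The truncated weights are nonnegative and sum to less than 1;
# the leftover mass belongs to the infinitely many remaining breaks.
```

A larger `alpha` breaks off smaller fractions on average, spreading mass over more components.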
COLI: A Hierarchical Efficient Compressor for Large Images
Wang, Haoran, Pei, Hanyu, Lyu, Yang, Zhang, Kai, Li, Li, Fan, Feng-Lei
The escalating adoption of high-resolution, large-field-of-view imagery amplifies the need for efficient compression methodologies. Conventional techniques frequently fail to preserve critical image details, while data-driven approaches exhibit limited generalizability. Implicit Neural Representations (INRs) present a promising alternative by learning continuous mappings from spatial coordinates to pixel intensities for individual images, thereby storing network weights rather than raw pixels and avoiding the generalization problem. However, INR-based compression of large images faces challenges including slow compression speed and suboptimal compression ratios. To address these limitations, we introduce COLI (Compressor for Large Images), a novel framework leveraging Neural Representations for Videos (NeRV). First, recognizing that INR-based compression constitutes a training process, we accelerate its convergence through a pretraining-finetuning paradigm, mixed-precision training, and reformulation of the sequential loss into a parallelizable objective. Second, capitalizing on INRs' transformation of image storage constraints into weight storage, we implement Hyper-Compression, a novel post-training technique to substantially enhance compression ratios while maintaining minimal output distortion. Evaluations across two medical imaging datasets demonstrate that COLI consistently achieves competitive or superior PSNR and SSIM metrics at significantly reduced bits per pixel (bpp), while accelerating NeRV training by up to 4 times.
- Information Technology > Sensing and Signal Processing > Image Processing (1.00)
- Information Technology > Artificial Intelligence > Vision (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.67)
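The COLI abstract reports quality and rate with PSNR and bits per pixel (bpp). As a minimal sketch of how these two metrics are computed (our own helper functions, not COLI's implementation), with an INR the "compressed size" would be the size of the stored network weights:

```python
import math

def psnr(original, reconstructed, max_val=255.0):
    """Peak signal-to-noise ratio between two equal-length pixel lists."""
    mse = sum((a - b) ** 2 for a, b in zip(original, reconstructed)) / len(original)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)

def bits_per_pixel(compressed_size_bytes, width, height):
    """bpp: total compressed bits divided by the number of pixels."""
    return compressed_size_bytes * 8 / (width * height)

# Toy example: an 8x8 image whose weights take 16 bytes to store.
orig = [100] * 64
recon = [101] * 64                      # off by one intensity level everywhere
print(round(psnr(orig, recon), 1))      # MSE = 1 -> PSNR ~ 48.1 dB
print(bits_per_pixel(16, 8, 8))         # 2.0 bpp
```

Higher PSNR at lower bpp is the goal the abstract's evaluation targets.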
Compressing Sensor Data for Remote Assistance of Autonomous Vehicles using Deep Generative Models
Bogdoll, Daniel, Jestram, Johannes, Rauch, Jonas, Scheib, Christin, Wittig, Moritz, Zöllner, J. Marius
In the foreseeable future, autonomous vehicles will require human assistance in situations they cannot resolve on their own. In such scenarios, remote assistance from a human can provide the required input for the vehicle to continue its operation. Typical sensors used in autonomous vehicles include camera and lidar sensors. Due to the massive volume of sensor data that must be sent in real time, highly efficient data compression is essential to prevent an overload of the network infrastructure. Sensor data compression using deep generative neural networks has been shown to outperform traditional compression approaches for both image and lidar data, in terms of compression rate as well as reconstruction quality. However, there is a lack of research on the performance of generative-neural-network-based compression algorithms for remote assistance. To gain insights into the feasibility of deep generative models for use in remote assistance, we evaluate state-of-the-art algorithms regarding their applicability and identify potential weaknesses. Further, we implement an online pipeline for processing sensor data and demonstrate its performance for remote assistance using the CARLA simulator.
- North America > United States > California > Los Angeles County > Long Beach (0.14)
- North America > United States > California > San Francisco County > San Francisco (0.14)
- North America > Canada > Quebec > Montreal (0.04)
- (7 more...)
- Transportation (0.47)
- Information Technology (0.47)
- Automobiles & Trucks (0.47)
- Information Technology > Artificial Intelligence > Natural Language (1.00)
- Information Technology > Artificial Intelligence > Vision (0.94)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning (0.93)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.69)
Appendix A: Notation
Second, N and the function's computational complexity are bounded by a polynomial in M. If there is an algorithm which generates an approximate sample from an RBM's distribution ... Note that Mψ(M) is still polynomial in M. However, Theorem 1 states that there is no such polynomial-time algorithm if RP ≠ NP. An example is visualized in Figure 1. Another possibility to obtain Gaussian noise would be the following. The soft-rounding function is differentiable everywhere.
For significance, we are encouraged that all reviewers
We thank the reviewers for their thoughtful and positive feedback. We will answer R3's concern about the difference between our model and the conventional ... We will answer R2's main concern ... We address some specific comments below and will incorporate all the feedback in the revised paper. For R1 - We again appreciate your positive feedback. ": We would like to consider a variant ... BBP) can naturally lead to such steps. ": No, each model is allowed to have an unlimited number ... For our MCMC inference, the number of active blocks is allowed to vary.