Review for NeurIPS paper: Log-Likelihood Ratio Minimizing Flows: Towards Robust and Quantifiable Neural Distribution Alignment

Neural Information Processing Systems

Weaknesses:
- Central parts of the paper are unclear, e.g., in line 80, \log P_M(X; \theta) should be the negative cross entropy.
- The only quantitative results are on adaptation from USPS to MNIST (line 268). However, prior work [1] achieves 96.5% accuracy, compared to the 55% accuracy achieved by the proposed method.
- It would be desirable to evaluate the proposed approach on the more complex Facades/Maps/Cityscapes benchmarks using the MSE metric, to facilitate comparison with AlignFlow and [1].
- It is unclear how the inductive bias from each of the datasets influences the shared space.
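The cross-entropy point the reviewer raises is a standard identity, stated here for reference: the expected model log-likelihood under the data distribution P_D equals the negative cross entropy between P_D and the model P_M,

```latex
\mathbb{E}_{X \sim P_D}\!\left[\log P_M(X;\theta)\right]
  \;=\; -H(P_D, P_M)
  \;=\; -H(P_D) \;-\; D_{\mathrm{KL}}\!\left(P_D \,\|\, P_M\right),
```

so maximizing the log-likelihood is equivalent to minimizing the KL divergence from the data distribution to the model.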


An In-Depth Analysis of Adversarial Discriminative Domain Adaptation for Digit Classification

Choi, Eugene, Rodriguez, Julian, Young, Edmund

arXiv.org Artificial Intelligence

Domain adaptation is an active area of research driven by the growing demand for robust machine learning models that perform well on real-world data. Adversarial learning for deep neural networks (DNNs) has emerged as a promising approach to improving generalization ability, particularly for image classification. In this paper, we implement a specific adversarial learning technique known as Adversarial Discriminative Domain Adaptation (ADDA) and replicate digit classification experiments from the original ADDA paper. We extend their findings by examining a broader range of domain shifts and provide a detailed analysis of in-domain classification accuracy post-ADDA. Our results demonstrate that ADDA significantly improves accuracy across certain domain shifts with minimal impact on in-domain performance. Furthermore, we provide qualitative analysis and propose potential explanations for ADDA's limitations in less successful domain shifts. Code is available at https://github.com/eugenechoi2004/COS429_FINAL.
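At the core of ADDA is a domain discriminator trained to tell source-encoder features from target-encoder features, while the target encoder is trained to fool it. The following is a minimal numpy sketch of the discriminator's binary cross-entropy objective on toy features; the function and variable names are illustrative and are not taken from the authors' code:

```python
import numpy as np

def domain_discriminator_loss(src_feats, tgt_feats, w, b):
    """Binary cross-entropy loss for a linear domain discriminator.

    Source features are labeled 1 and target features 0, mirroring the
    ADDA-style domain-classification objective.
    """
    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    p_src = sigmoid(src_feats @ w + b)   # P(domain = source)
    p_tgt = sigmoid(tgt_feats @ w + b)
    eps = 1e-12                          # numerical floor for the logs
    return -(np.log(p_src + eps).mean() + np.log(1.0 - p_tgt + eps).mean())

# Toy "source" and "target" feature batches with a deliberate domain shift
rng = np.random.default_rng(0)
src = rng.normal(0.0, 1.0, size=(64, 8)) + 1.0
tgt = rng.normal(0.0, 1.0, size=(64, 8)) - 1.0
w = rng.normal(size=8)
b = 0.0
loss = domain_discriminator_loss(src, tgt, w, b)
print(loss)
```

In the full adversarial loop, the discriminator parameters descend this loss while the target encoder is updated to increase the discriminator's error on target features.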


Discovery of Small Ultra-short-period Planets Orbiting KG Dwarfs in Kepler Survey Using GPU Phase Folding and Deep Learning Detection System

Wang, Kaitlyn, Ge, Jian, Willis, Kevin, Wang, Kevin, Zhao, Yinan

arXiv.org Artificial Intelligence

Since the discovery of the first hot Jupiter orbiting a solar-type star, 51 Peg, in 1995, more than 4000 exoplanets have been identified using various observational techniques. The formation process of these sub-Earths remains elusive, and acquiring additional samples is essential for investigating this unique population. In our study, we employ a novel GPU Phase Folding algorithm combined with a Convolutional Neural Network, termed the GPFC method, on Kepler photometry data. This method enhances the transit search speed significantly over the traditional Box-fitting Least Squares method, allowing a complete search of the known KOI photometry data within hours using a commercial GPU card. To date, we have identified five promising sub-Earth short-period candidates: K00446.c, K01821.b, K01522.c, K03404.b, and K04978.b. A closer analysis reveals the following characteristics: K00446.c orbits a K dwarf on a 0.645091-day period. With a radius of $0.461R_\oplus$, it ranks as the second smallest USP discovered to date. K01821.b is a sub-Earth with a radius of $0.648R_\oplus$, orbiting a G dwarf over a 0.91978-day period. It is the second smallest USP among all confirmed USPs orbiting G dwarfs in the NASA Archive. K01522.c has a radius of $0.704 R_\oplus$ and completes an orbit around a Sun-like G dwarf in 0.64672 days; K03404.b, with a radius of $0.738 R_\oplus$, orbits a G dwarf on a 0.68074-day period; and K04978.b, with its planetary radius of $0.912 R_\oplus$, orbits a G dwarf, completing an orbit every 0.94197 days. Three of our finds, K01821.b, K01522.c and K03404.b, rank as the smallest planets among all confirmed USPs orbiting G dwarfs in the Kepler dataset. The discovery of these small exoplanets underscores the promising capability of the GPFC method for searching for small, new transiting exoplanets in photometry data from Kepler, TESS, and future space transit missions.
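The phase-folding step at the heart of such transit searches is simple to state: fold the light curve at a trial period and bin it in phase, so that a periodic dip stacks up in a few bins. A minimal numpy sketch of that single step (illustrative only, not the GPU-accelerated GPFC pipeline):

```python
import numpy as np

def phase_fold(times, flux, period, n_bins=50):
    """Fold a light curve at a trial period and bin it in phase.

    Returns the mean flux per phase bin; a transit at the trial period
    shows up as a dip concentrated in one or a few bins.
    """
    phase = (times % period) / period                     # phase in [0, 1)
    bins = np.minimum((phase * n_bins).astype(int), n_bins - 1)
    binned = np.full(n_bins, np.nan)
    for b in range(n_bins):
        mask = bins == b
        if mask.any():
            binned[b] = flux[mask].mean()
    return binned

# Toy light curve: flat flux with a shallow transit every 0.645 days
rng = np.random.default_rng(1)
t = np.arange(0.0, 30.0, 0.001)
f = 1.0 + 1e-4 * rng.normal(size=t.size)
in_transit = ((t % 0.645) / 0.645) < 0.02                 # 2% duty-cycle dip
f[in_transit] -= 0.001
folded = phase_fold(t, f, 0.645)
print(int(np.nanargmin(folded)))                          # bin holding the dip
```

A search scans many trial periods and scores each folded curve; GPFC replaces the hand-crafted score with a CNN applied to the folded light curve.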


Fast OT for Latent Domain Adaptation

Roheda, Siddharth, Panahi, Ashkan, Krim, Hamid

arXiv.org Artificial Intelligence

Such a shift in data distribution is seen and addressed in almost every field, ranging from Natural Language Processing (NLP) to Object Recognition. Given labeled samples from a source domain, any Domain Adaptation (DA) approach can be classified into one of two groups: i) semi-supervised DA, where some samples in the target domain are labeled, or ii) unsupervised DA, where none of the samples in the target domain are labeled.

The discriminator, on the other hand, attempts to discriminate between a real data sample and one from the generator. Both models are approximated by neural networks. When trained alternately, the generator learns to produce random samples from the data distribution which are very close to the real data samples. Following this, Conditional Generative Adversarial Networks (CGANs) were proposed in [8].
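Optimal transport between source and target distributions is typically computed with entropic regularization via Sinkhorn iterations. A minimal numpy sketch of a generic Sinkhorn solver (a standard OT building block, not the paper's fast latent-space method):

```python
import numpy as np

def sinkhorn(a, b, C, reg=0.1, n_iters=200):
    """Entropy-regularized OT plan between histograms a and b with cost C.

    Alternately rescales rows and columns of the Gibbs kernel so the
    plan's marginals match a and b.
    """
    K = np.exp(-C / reg)                 # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iters):
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]   # transport plan

# Toy example: two identical 3-point distributions, squared-distance cost
x = np.array([0.0, 1.0, 2.0])
y = np.array([0.0, 1.0, 2.0])
C = (x[:, None] - y[None, :]) ** 2
a = np.full(3, 1 / 3)
b = np.full(3, 1 / 3)
P = sinkhorn(a, b, C)
print(P.round(3))
```

With identical supports the plan concentrates near the diagonal; the transport cost `(P * C).sum()` then serves as the alignment objective between domains.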


Data Analytics Key to U.S. Postal Service Digital Transformation

#artificialintelligence

Over the past decade, technological innovation has advanced at an increasingly fast pace, creating both opportunities and disruptions in virtually every industry. The postal industry is no exception. According to the report, "Step into Tomorrow: The U.S. Postal Service (USPS) and Emerging Technology," the Postal Service collects massive quantities of data on an ongoing basis. A challenge is putting this data to its most valued use to improve the customer experience. Data-driven advanced algorithms and analytics can play a critical role in the design of these new, last-mile solutions.


USPS gets ahead of missing packages with AI edge computing

#artificialintelligence

The Postal Service is rolling out artificial intelligence tools across 195 of its processing centers to give the agency greater visibility into the terabytes of data it already captures from incoming packages each day. USPS uses the algorithms to categorize packages and to troubleshoot anomalies with packages in its delivery network. AI algorithms can also cut the time to locate missing packages down from several days to a few hours. Todd Schimmel, USPS's manager of letter mail technology, oversaw the agency's partnership with NVIDIA to stand up its Edge Computing Infrastructure Program (ECIP). Each of the four edge servers that are part of the program handles 20 terabytes of package images.


Sharpening Its Edge: U.S. Postal Service Opens AI Apps on Edge Network

#artificialintelligence

In 2019, the U.S. Postal Service had a need to identify and track items in its torrent of more than 100 million pieces of daily mail. A USPS AI architect had an idea. Ryan Simpson wanted to expand an image analysis system a postal team was developing into something much broader that could tackle this needle-in-a-haystack problem. With edge AI servers strategically located at its processing centers, he believed USPS could analyze the billions of images each center generated. The resulting insights, expressed in a few key data points, could be shared quickly over the network.


How robots would help the Post Office -- GCN

#artificialintelligence

Congress should pass reform legislation that would establish a Technology Innovation Fund for the U.S. Postal Service (USPS) to enable robotic last-mile postal delivery, a new report states. "Of particular promise are sorting and delivery robots, which could sort mail, including into local delivery orders, deliver mail to homes, or both," according to "A New Vision for Postal Reform in the E-commerce Age," a Feb. 11 report from the Information Technology and Innovation Foundation (ITIF). "One could imagine a postal worker driving to particular routes with a fleet of 10 or so robots, letting each one off to 'walk' a particular mail route, and then picking them back up at the end of the route." This funding would help support innovation at USPS, the report states, likening the approach to those at the Defense Department and NASA, which get federal funding for automation and robotics research. Although robotics is not yet sophisticated or inexpensive enough to sort and deliver mail, progress is happening.


Invertible Manifold Learning for Dimension Reduction

Li, Siyuan, Lin, Haitao, Zang, Zelin, Wu, Lirong, Xia, Jun, Li, Stan Z.

arXiv.org Artificial Intelligence

It is widely believed that a nonlinear dimension reduction (NLDR) process inevitably drops information in most practical scenarios, and even under the manifold assumption, most existing methods are unable to preserve the structure of data after DR due to this loss of information, especially in high-dimensional cases. In the context of manifold learning, we think a good low-dimensional representation should preserve the topological and geometric properties of the data manifold. To achieve this, the invertibility of an NLDR transformation is required such that the learned representation is reconstructible via its inverse transformation. In this paper, we propose a novel method, called invertible manifold learning (inv-ML), to tackle this problem. A locally isometric smoothness (LIS) constraint for preserving local geometry is applied to a two-stage inv-ML algorithm. First, a homeomorphic sparse coordinate transformation is learned to find the low-dimensional representation without loss of topological information. Second, a linear compression is performed on the learned sparse coding, trading off the target dimension against the incurred information loss. Experiments are conducted on seven datasets, whose results demonstrate that the proposed inv-ML not only achieves better invertible NLDR in comparison with typical existing methods but also reveals the characteristics of the learned manifolds through linear interpolation in latent space. Moreover, we find that the reliability of the tangent space approximated by its local neighborhood on real-world datasets is a key to the success of manifold-based DR algorithms. The code will be made available soon.
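The second stage's trade-off between target dimension and information loss can be illustrated with plain linear compression and its linear inverse (an illustrative PCA-style sketch under toy assumptions, not the inv-ML algorithm itself):

```python
import numpy as np

def linear_compress(X, d):
    """Project centered data onto its top-d principal directions and
    reconstruct; returns (codes, reconstruction, mean-squared error)."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    W = Vt[:d]                       # d x D projection matrix
    Z = Xc @ W.T                     # low-dimensional codes
    X_rec = Z @ W + X.mean(axis=0)   # linear "inverse" of the compression
    mse = np.mean((X - X_rec) ** 2)
    return Z, X_rec, mse

# Data lying near a 2-D plane embedded in 5-D space: compressing to 2 dims
# loses almost nothing, while 1 dim discards a whole signal direction.
rng = np.random.default_rng(2)
Z_true = rng.normal(size=(200, 2))
A = rng.normal(size=(2, 5))
X = Z_true @ A + 1e-3 * rng.normal(size=(200, 5))
_, _, mse2 = linear_compress(X, 2)
_, _, mse1 = linear_compress(X, 1)
print(mse2, mse1)
```

Here the reconstruction error jumps once the target dimension falls below the intrinsic dimension, which is exactly the trade-off the abstract describes for the linear-compression stage.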