Amazon Kendra: Enterprise-Level Search with the AWS S3 Bucket Connector
Many companies implement machine learning to improve the product experience. One such service from AWS for enterprise-level search is Amazon Kendra, a machine-learning-based service that continuously makes its search engine smarter. Enterprise use cases are often complex and span multiple data sources, so Kendra keeps learning and updating its algorithm to provide more personalized and refined results. While customizing Kendra and connecting data sources such as an S3 bucket, we should always focus on what sort of data we are trying to surface for customers: Kendra works out what you are actually trying to find and then guides you toward more precise content.
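As a rough sketch of how an application might query a Kendra index that an S3 connector feeds, the boto3 `query` call can be assembled as below. The index ID and query text are hypothetical placeholders, not values from the original article:

```python
# Sketch of querying an Amazon Kendra index via boto3.
# The index ID and query text below are hypothetical placeholders.

def build_query(index_id, query_text, page_size=5):
    """Assemble the keyword arguments for kendra.query()."""
    return {
        "IndexId": index_id,
        "QueryText": query_text,
        "PageSize": page_size,
    }

params = build_query("00000000-0000-0000-0000-000000000000",
                     "How do I rotate S3 access keys?")
print(sorted(params))  # the parameter names we will send

# With AWS credentials configured, the request would be sent like this:
# import boto3
# kendra = boto3.client("kendra", region_name="us-east-1")
# response = kendra.query(**params)
# for item in response["ResultItems"]:
#     print(item["DocumentTitle"]["Text"])
```

The actual network call is left commented out since it requires a provisioned index and AWS credentials.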
Exploring SageMaker Canvas
Building machine learning models takes knowledge, experience, and a lot of time. Sometimes personas such as business analysts or other technologists who have no ML experience have an ML use case they want to address, but lack the expertise to do so. Even ML engineers and data scientists who have ML experience may want a model built quickly. This brings us to the domain of AutoML. Nowadays we're seeing a plethora of AutoML solutions, from open-source APIs to dedicated services and platforms geared toward automating the ML workflow.
The CLEAR Benchmark: Continual LEArning on Real-World Imagery
Lin, Zhiqiu, Shi, Jia, Pathak, Deepak, Ramanan, Deva
Continual learning (CL) is widely regarded as a crucial challenge for lifelong AI. However, existing CL benchmarks, e.g. Permuted-MNIST and Split-CIFAR, make use of artificial temporal variation and do not align with or generalize to the real world. In this paper, we introduce CLEAR, the first continual image classification benchmark dataset with a natural temporal evolution of visual concepts in the real world that spans a decade (2004-2014). We build CLEAR from existing large-scale image collections (YFCC100M) through a novel and scalable low-cost approach to visio-linguistic dataset curation. Our pipeline makes use of pretrained vision-language models (e.g. CLIP) to interactively build labeled datasets, which are further validated with crowd-sourcing to remove errors and even inappropriate images (hidden in the original YFCC100M). The major strength of CLEAR over prior CL benchmarks is the smooth temporal evolution of visual concepts with real-world imagery, including both high-quality labeled data and abundant unlabeled samples per time period for continual semi-supervised learning. We find that a simple unsupervised pre-training step can already boost state-of-the-art CL algorithms that only utilize fully-supervised data. Our analysis also reveals that mainstream CL evaluation protocols that train and test on iid data artificially inflate the performance of CL systems. To address this, we propose novel "streaming" protocols for CL that always test on the (near) future. Interestingly, streaming protocols (a) can simplify dataset curation since today's test set can be repurposed as tomorrow's training set and (b) can produce more generalizable models with more accurate estimates of performance since all labeled data from each time period is used for both training and testing (unlike classic iid train-test splits).
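The "streaming" protocol described in the abstract (train on everything up to period t, always test on period t+1) can be sketched with a toy loop. The per-period label buckets and the majority-class "model" below are illustrative stand-ins, not the paper's actual pipeline:

```python
from collections import Counter

# Toy data: one labeled bucket per time period (features omitted;
# only labels matter for this majority-class stand-in model).
buckets = [
    ["cat", "cat", "dog"],                 # period t=0
    ["cat", "dog", "dog", "dog"],          # t=1
    ["dog", "dog", "dog", "cat"],          # t=2
]

def majority_label(labels):
    """Stand-in 'training': remember the most common label."""
    return Counter(labels).most_common(1)[0][0]

def streaming_eval(buckets):
    """Train on all data up to period t, test on period t+1."""
    accuracies = []
    for t in range(len(buckets) - 1):
        train = [y for b in buckets[: t + 1] for y in b]
        model = majority_label(train)
        test = buckets[t + 1]
        accuracies.append(sum(y == model for y in test) / len(test))
    return accuracies

print(streaming_eval(buckets))  # → [0.25, 0.75]
```

Note how every labeled bucket serves as a test set once and then joins the training pool, which is exactly the data-reuse property the abstract highlights.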
Learning Geometrically-Constrained Hidden Markov Models for Robot Navigation: Bridging the Topological-Geometrical Gap
Hidden Markov models (HMMs) and partially observable Markov decision processes (POMDPs) provide useful tools for modeling dynamical systems. They are particularly useful for representing the topology of environments such as road networks and office buildings, which are typical for robot navigation and planning. The work presented here describes a formal framework for incorporating readily available odometric information and geometrical constraints into both the models and the algorithm that learns them. By taking advantage of such information, learning HMMs/POMDPs can be made to generate better solutions and require fewer iterations, while being robust in the face of data reduction. Experimental results, obtained from both simulated and real robot data, demonstrate the effectiveness of the approach.