"... the research area that studies the operation and design of systems that recognize patterns in data." It includes statistical methods such as discriminant analysis, feature extraction, error estimation, and cluster analysis.
– Pattern Recognition Laboratory at Delft University of Technology
Google's Google Cloud division today announced it has made generally available two search functions that rely on machine learning techniques to help retailers who use its cloud service. Called Vision API Product Search and Recommendations AI, the two services are part of what Google has unveiled as a suite of functions called Product Discovery Solutions for Retail. The vision search function lets a retailer's customers submit a picture and receive ranked results of products that match the picture in either appearance or semantic similarity. Recommendations, said Google, is "able to piece together the history of a customer's shopping journey and serve them with customized product recommendations." Both are generally available now to retailers.
How can auto manufacturers apply ML and AI algorithms to enhance image analytics on the factory floor and ensure higher product quality? Despite its great potential for quality control, vision inspection is far from reaching its full potential in manufacturing. Both manual inspection and traditional computer vision methods are prone to error and are often unable to uncover the root cause of problems. In search of a solution to optimize its welding process, a leading powertrain manufacturer turned to OptimalPlus. Using advanced image algorithms, the OptimalPlus platform extracts key features from images, analyzes them, and informs MES decisions in near real-time.
Cisco announced a bevy of new features within its Webex portfolio on Tuesday, as the company ramps up its efforts to compete with rival collaboration players such as Zoom, Slack, Google Meet and Microsoft Teams. The updates, released as part of Cisco's WebexOne virtual event, come as Webex and other video conferencing and cloud collaboration tools experience huge spikes in usage, with many workers staying productive remotely through the COVID-19 pandemic. The new feature updates -- which are designed to position Webex as a key platform for the future of work -- include new capabilities for its meetings, calling, messaging, and contact center services, as well as new devices. All told, Cisco debuted more than 50 product updates aimed at providing seamless collaboration and smart hybrid work experiences. At the center of the updates is what Cisco says is an all-new Webex experience.
This paper proposes an end-to-end deep network to recognize different accents within the same language, where we develop and transfer deep architectures from the speaker-recognition area to the accent classification task to learn utterance-level accent representations. Compared with the individual-level features used in speaker recognition, accent recognition poses a more challenging problem: acquiring compact group-level features for speakers who share the same accent, so a highly discriminative accent feature space is desired. Our deep framework adopts a multitask-learning mechanism and mainly consists of three modules: a shared CNN- and RNN-based front-end encoder, a core accent recognition branch, and an auxiliary speech recognition branch, with speech spectrograms as input. More specifically, with the sequential descriptors learned from the shared encoder, the accent recognition branch first condenses all descriptors into an embedding vector, and then explores different discriminative loss functions, popular in the face recognition domain, to enhance embedding discrimination. Additionally, because accent is a speaking-related timbre, adding the speech recognition branch effectively curbs over-fitting in accent recognition during training. We show that our network, without any data-augmentation preprocessing, is significantly ahead of the baseline system on the accent classification track of the Accented English Speech Recognition Challenge 2020 (AESRC2020), where the state-of-the-art loss function Circle Loss achieves the best discriminative optimization for the accent representation.
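The Circle Loss named above comes from the face recognition literature. As a rough illustration of why it sharpens embedding discrimination, here is a minimal pure-Python sketch of its pairwise form (within-class similarities `sp`, between-class similarities `sn`); the margin `m` and scale `gamma` defaults are illustrative, and this is not the paper's actual training code.

```python
import math

def circle_loss(sp, sn, m=0.25, gamma=64.0):
    """Pairwise Circle Loss over cosine similarities: within-class scores sp
    are pushed toward 1, between-class scores sn toward -1 (hedged sketch)."""
    op, on = 1.0 + m, -m   # optima for positive / negative similarities
    dp, dn = 1.0 - m, m    # relaxed decision margins
    # Each score gets its own adaptive weight (distance from its optimum),
    # so poorly-optimized pairs dominate the gradient.
    pos = sum(math.exp(-gamma * max(op - s, 0.0) * (s - dp)) for s in sp)
    neg = sum(math.exp(gamma * max(s - on, 0.0) * (s - dn)) for s in sn)
    return math.log(1.0 + neg * pos)
```

Well-separated embeddings (high within-class, low between-class similarity) yield a near-zero loss, while overlapping classes are penalized heavily.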
Managing inputs that are novel, unknown, or out-of-distribution is critical as an agent moves from the lab to the open world. Novelty-related problems include being tolerant to novel perturbations of the normal input, detecting when the input includes novel items, and adapting to novel inputs. While significant research has been undertaken in these areas, a noticeable gap exists: there is no formalized definition of novelty that transcends problem domains. As a team of researchers spanning multiple research groups and different domains, we have seen, first-hand, the difficulties that arise from ill-specified novelty problems, as well as inconsistent definitions and terminology. Therefore, we present the first unified framework for formal theories of novelty and use the framework to formally define a family of novelty types. Our framework can be applied across a wide range of domains, from symbolic AI to reinforcement learning, and beyond to open world image recognition. Thus, it can be used to help kick-start new research efforts and accelerate ongoing work on these important novelty-related problems. This extended version of our AAAI 2021 paper includes more details and examples in multiple domains.
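One of the simplest instantiations of novelty detection mentioned above is distance-based: treat an input as novel when it lies far from everything seen before. This sketch is a generic illustration under that assumption, not the paper's formal framework (which defines novelty types more broadly); the function names and threshold are hypothetical.

```python
def novelty_score(x, known, dist):
    """Distance from an input to its nearest known example."""
    return min(dist(x, k) for k in known)

def is_novel(x, known, dist, threshold):
    """Flag the input as novel when even its nearest known
    example is farther away than `threshold`."""
    return novelty_score(x, known, dist) > threshold
```

The same skeleton applies whether `dist` compares image embeddings, RL states, or symbolic descriptions; the hard part the paper addresses is defining what "distance from the known" should mean across domains.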
While photovoltaic (PV) systems are installed at an unprecedented rate, reliable information at the installation level remains scarce. As a result, automatically created PV registries are a timely contribution to optimize grid planning and operations. This paper demonstrates how aerial imagery and three-dimensional building data can be combined to create an address-level PV registry, specifying area, tilt, and orientation angles. We demonstrate the benefits of this approach for PV capacity estimation. In addition, this work presents, for the first time, a comparison between automated and officially created PV registries. Our results indicate that our enriched automated registry proves to be useful to validate, update, and complement official registries.
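The role of the 3D building data can be sketched with simple geometry: aerial imagery shows only the horizontal projection of a tilted roof, so the roof tilt is needed to recover the true module area, from which a capacity estimate follows. The formulas and the 20% efficiency figure below are generic illustrative assumptions, not the paper's calibrated model.

```python
import math

def panel_area(footprint_area_m2, tilt_deg):
    """True module area from the horizontal footprint visible in aerial
    imagery: divide by cos(tilt), since the image shows the projection."""
    return footprint_area_m2 / math.cos(math.radians(tilt_deg))

def capacity_kwp(area_m2, efficiency=0.20, irradiance_kw_m2=1.0):
    """Peak capacity under standard test conditions (1 kW/m^2 irradiance);
    the module efficiency here is an assumed round number."""
    return area_m2 * efficiency * irradiance_kw_m2
```

For example, a 10 m² footprint on a 60°-tilted roof corresponds to 20 m² of modules, roughly 4 kWp under these assumptions.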
This article is part 11 (and the final part) of a series reviewing selected papers from Altmetric's list of the top 100 most-discussed scholarly works of 2019. Deep neural networks (DNNs) are a key pattern recognition technology used in artificial intelligence (AI). A DNN finds the correct mathematical manipulation to turn the input into the output, whether it be a linear relationship or a non-linear relationship [1]. For example, in the context of facial recognition, a DNN creates a range of outputs correctly corresponding to the range of different facial inputs. However, research shows that DNNs can be easily fooled [2].
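A toy example makes the "linear or non-linear relationship" point concrete: XOR is a mapping no single linear layer can represent, but a two-layer network with a ReLU non-linearity computes it exactly. The weights here are hand-set for illustration; a real DNN would learn them from data.

```python
def xor_net(x1, x2):
    """A hand-set two-layer ReLU network computing XOR,
    a non-linear input-output mapping."""
    h1 = max(0.0, x1 + x2)        # hidden unit 1: fires when any input is 1
    h2 = max(0.0, x1 + x2 - 1.0)  # hidden unit 2: fires only when both are 1
    return h1 - 2.0 * h2          # output layer: yields 0, 1, 1, 0
```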
Pattern mining is well established in data mining research, especially for mining binary datasets. Surprisingly, there is much less work on numerical pattern mining, and this research area remains under-explored. In this paper, we propose Mint, an efficient MDL-based algorithm for mining numerical datasets. The MDL principle is a robust and reliable framework widely used in pattern mining, as well as in subgroup discovery. In Mint we reuse MDL for discovering useful patterns, returning a set of non-redundant overlapping patterns with well-defined boundaries that cover meaningful groups of objects. Mint is not alone in the category of MDL-based numerical pattern miners. In the experiments presented in the paper, we show that Mint outperforms its competitors, among them Slim and RealKrimp.
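The MDL principle invoked here can be summarized as: the best pattern set is the one that compresses the data best, balancing the cost of describing the patterns against the cost of describing the data with them. The sketch below shows that two-part score in its generic Shannon-code form; Mint's actual encoding of numerical patterns is considerably more involved, so treat this as an illustration of the principle only.

```python
import math

def code_length(usage_counts):
    """Shannon-optimal length, in bits, of the data encoded as a sequence
    of pattern usages: L(D|M) = -sum_i c_i * log2(c_i / total)."""
    total = sum(usage_counts)
    return sum(-c * math.log2(c / total) for c in usage_counts if c > 0)

def mdl_score(model_bits, usage_counts):
    """Two-part MDL: cost of the pattern set itself plus the cost of the
    data given the patterns; smaller totals mean better compression."""
    return model_bits + code_length(usage_counts)
```

Note that skewed usage compresses better than uniform usage (e.g. counts `[7, 1]` cost fewer bits than `[4, 4]`), which is what rewards pattern sets that capture dominant regularities in the data.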
I find them incredibly irritating: those images you have to click on to prove that you are not a robot. When you are just one click away from booking a nice weekend away, you first have to figure out which of 16 tiny, fuzzy squares contain traffic lights. Google puts these puzzle-solving attempts to good use. For one thing, the company uses your answers to train its image recognition software.