Detection of abnormal events in videos

#artificialintelligence

The rapid advancement of closed-circuit television cameras and their underlying infrastructure has led to an enormous number of surveillance cameras being deployed globally, estimated to exceed 1 billion by the end of 2021. Given the massive volume of video generated in real time, manual analysis by human operators becomes inefficient, expensive, and nearly impossible, creating strong demand for automated and intelligent methods for efficient video surveillance. An important task in video surveillance is anomaly detection, which refers to the identification of events that do not conform to expected behavior. Because abnormal events are by nature sudden, catching them as they happen has traditionally required operators to watch monitoring screens for long stretches, which is both tiring and prone to missing inconspicuous events. The automatic detection and recognition of abnormal events in surveillance video of complex scenes, as a core subject of intelligent video surveillance systems, is therefore receiving increasing attention from researchers.
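As a minimal illustration of frame-level anomaly scoring (a generic sketch, not a method described in the article), the Python snippet below flags frames that deviate sharply from a running-average background model; the update rate, threshold, and synthetic data are all assumptions made for the example.

```python
import numpy as np

def anomaly_scores(frames, alpha=0.05, threshold=10.0):
    """Score each frame by its deviation from a running-average
    background model; large deviations suggest abnormal events."""
    background = None
    scores, flags = [], []
    for frame in frames:
        frame = frame.astype(np.float32)
        if background is None:
            background = frame.copy()
        score = float(np.abs(frame - background).mean())
        scores.append(score)
        flags.append(score > threshold)
        # Slowly adapt the background to gradual scene changes.
        background = (1 - alpha) * background + alpha * frame
    return scores, flags

# Synthetic example: a sudden bright object appears in frame 50.
rng = np.random.default_rng(0)
frames = [rng.normal(100, 5, (64, 64)) for _ in range(100)]
frames[50][20:40, 20:40] += 120  # simulated abnormal event
scores, flags = anomaly_scores(frames)
print(flags[50], flags[10])  # True, False with these parameters
```

Real systems replace the background model with learned models of normal behavior, but the structure is the same: score each frame against a model of "expected," then flag the outliers.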


Full Stack Web Developer - Computer Vision

#artificialintelligence

Team Description: Our computer vision team is a leader in the creation of cutting-edge algorithms and software for automated image and video analysis. Our solutions embrace deep learning and add measurable value to government agencies, commercial organizations, and academic institutions worldwide. We understand the difficulties in extracting, interpreting, and utilizing information across images, video, metadata, and text, and we recognize the need for robust, affordable solutions. We seek to advance the fields of computer vision and deep learning through research and development and through collaborative projects that build on our open source software platform, the Kitware Image and Video Exploitation and Retrieval (KWIVER) toolkit. About the projects: Kitware's employees have unique opportunities to interact and collaborate directly with customers, visit interesting customer sites, and participate in live field tests and demonstrations.


Locust

#artificialintelligence

"Just as athletes can't win without a sophisticated mixture of strategy, form, attitude, tactics, and speed, performance engineering requires a good collection of metrics and tools to deliver the desired business results."-- The current trend of leveraging the powers of ML in business has made data scientists and engineers design innovative solutions/services and one such service have been Model As A Service (MaaS). We have used many of these services without the knowledge of how it was built or served on web, some examples include data visualization, facial recognition, natural language processing, predictive analytics and more. In short, MaaS encapsulates all the complex data, model training & evaluation, deployment, etc, and lets customers consume it for their purpose. As simple as it feels to use these services, there are many challenges in building such a service e.g.: how do we maintain the service?


How to detect online trends without web scraping

#artificialintelligence

To extract text from the content of each screenshot, we will apply text recognition to these images. Our goal is to obtain not only the words used on the page but also their weights (understood as a measure of their relevance or importance). With these, we can generate a word cloud in which word size signals how prominent a word was on the site. Pytesseract is an optical character recognition (OCR) tool for Python: it will recognize and "read" the text embedded in screenshots.
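A minimal sketch of that pipeline, assuming a local Tesseract installation, a screenshot.png file, and the pytesseract and wordcloud packages; raw word frequency stands in here for the article's notion of weight:

```python
from collections import Counter

from PIL import Image
import pytesseract
from wordcloud import WordCloud

# OCR the screenshot into plain text.
text = pytesseract.image_to_string(Image.open("screenshot.png"))

# Use raw frequency as a simple proxy for each word's on-page weight.
words = [w.lower() for w in text.split() if w.isalpha() and len(w) > 3]
weights = Counter(words)

# Render a word cloud where size reflects the computed weights.
cloud = WordCloud(width=800, height=400).generate_from_frequencies(weights)
cloud.to_file("wordcloud.png")
```

A more faithful weight could also factor in font size or position on the page; pytesseract's image_to_data output includes bounding boxes that could refine the frequencies.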


Alteryx announces new AutoML product and Intelligence Suite

ZDNet

Alteryx, the public company best known in the self-service data preparation and pipeline realm, has long had significant AI/machine learning (ML) capabilities as part of its Designer platform. But today, at its Virtual Global Inspire event, the company is announcing notable new AI/ML capabilities that should resonate with business users and power users alike. ZDNet was briefed on the new products by Alteryx's Chief Data and Analytics Officer (CDAO), Alan Jacobson, who joined the company two years ago from his post as director of global analytics at Ford Motor Company. Alteryx's Intelligence Suite brings Machine Learning and Text Mining tabs into Designer, adding natural language processing (NLP) and text mining; computer vision capabilities for image-based data and optical character recognition (OCR); as well as topic modeling and sentiment analysis. Jacobson described this set of features as the "Pythonic" equivalent of Alteryx's longstanding predictive capabilities based on the R programming language.


AI perspectives in Smart Cities and Communities to enable road vehicle automation and smart traffic control

arXiv.org Artificial Intelligence

Smart Cities and Communities (SCC) constitute a new paradigm in urban development. SCC envisions a data-centered society that improves efficiency by automating and optimizing activities and utilities. Information and communication technology, along with the internet of things, enables data collection, and with the help of artificial intelligence (AI), situation awareness can be obtained to feed SCC actors with enriched knowledge. This paper describes AI perspectives in SCC and gives an overview of AI-based technologies used in traffic to enable road vehicle automation and smart traffic control. Perception, Smart Traffic Control, and Driver Modelling are described, along with open research challenges and standardization, to help introduce advanced driver assistance systems and automated vehicle functionality in traffic. To fully realize the potential of SCC and create a holistic view at the city level, the availability of data from different stakeholders is needed. Further, though AI technologies provide accurate predictions and classifications, there is ambiguity regarding the correctness of their outputs, which can make it difficult for the human operator to trust the system. Today there are no methods that can be used to match function requirements with the level of detail in data annotation needed to train an accurate model. Another challenge related to trust is explainability: as long as models have difficulty explaining how they arrive at a certain conclusion, it is difficult for humans to trust them.


Game Plan: What AI can do for Football, and What Football can do for AI

Journal of Artificial Intelligence Research

The rapid progress in artificial intelligence (AI) and machine learning has opened unprecedented analytics possibilities in various team and individual sports, including baseball, basketball, and tennis. More recently, AI techniques have been applied to football, due to a huge increase in data collection by professional teams, increased computational power, and advances in machine learning, with the goal of better addressing new scientific challenges involved in the analysis of both individual players’ and coordinated teams’ behaviors. The research challenges associated with predictive and prescriptive football analytics require new developments and progress at the intersection of statistical learning, game theory, and computer vision. In this paper, we provide an overarching perspective highlighting how the combination of these fields, in particular, forms a unique microcosm for AI research, while offering mutual benefits for professional teams, spectators, and broadcasters in the years to come. We illustrate that this duality makes football analytics a game changer of tremendous value, in terms of not only changing the game of football itself, but also in terms of what this domain can mean for the field of AI. We review the state-of-the-art and exemplify the types of analysis enabled by combining the aforementioned fields, including illustrative examples of counterfactual analysis using predictive models, and the combination of game-theoretic analysis of penalty kicks with statistical learning of player attributes. We conclude by highlighting envisioned downstream impacts, including possibilities for extensions to other sports (real and virtual).
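To make the game-theoretic strand concrete, here is a minimal worked example of a 2x2 zero-sum penalty-kick game solved for its mixed-strategy equilibrium; the scoring probabilities are illustrative assumptions, not estimates from the paper.

```python
import numpy as np

# Kicker's scoring probability: rows = kick side (L, R),
# columns = goalkeeper dive (L, R). Illustrative numbers only.
A = np.array([[0.60, 0.90],
              [0.95, 0.55]])

# In a 2x2 zero-sum game without a saddle point, each player mixes
# so that the opponent is indifferent between their two actions.
denom = A[0, 0] - A[0, 1] - A[1, 0] + A[1, 1]
p_kick_left = (A[1, 1] - A[1, 0]) / denom
q_dive_left = (A[1, 1] - A[0, 1]) / denom
value = (A[0, 0] * A[1, 1] - A[0, 1] * A[1, 0]) / denom

print(f"kicker kicks left with p = {p_kick_left:.2f}")   # 0.57
print(f"keeper dives left with q = {q_dive_left:.2f}")   # 0.50
print(f"equilibrium scoring probability = {value:.2f}")  # 0.75
```

The statistical-learning side then supplies the matrix entries, estimating per-player scoring probabilities from tracking and event data.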


fAshIon after fashion: A Report of AI in Fashion

arXiv.org Artificial Intelligence

In this independent report, fAshIon after fashion, we examine the development of fAshIon (artificial intelligence (AI) in fashion) and explore its potential to become a major disruptor of the fashion industry in the near future. To do this, we investigate the AI technologies used in the fashion industry through several lenses. We summarise fAshIon studies conducted over the past decade and categorise them into seven groups: Overview, Evaluation, Basic Tech, Selling, Styling, Design, and Buying. The datasets mentioned in fAshIon research have been consolidated on one GitHub page for ease of use. We analyse the authors' backgrounds and the geographic regions treated in these studies to map the landscape of fAshIon research. The results of our analysis are presented with the aim of providing researchers with a holistic view of research in fAshIon. As part of our primary research, we also review a wide range of cases of applied fAshIon in the fashion industry and analyse their impact on the industry, markets, and individuals. We also identify the challenges presented by fAshIon and suggest that these may form the basis for future research. Finally, we show that many opportunities exist for applying AI in fashion, with the potential to transform the industry and boost profits.


Pervasive AI for IoT Applications: Resource-efficient Distributed Artificial Intelligence

arXiv.org Artificial Intelligence

Artificial intelligence (AI) has achieved substantial breakthroughs in a variety of Internet of Things (IoT) applications and services, spanning from recommendation systems to robotics control and military surveillance. This is driven by easier access to sensory data and the enormous scale of pervasive/ubiquitous devices that generate zettabytes (ZB) of real-time data streams. Designing accurate models from such data streams, to predict future insights and revolutionize decision-making, establishes pervasive systems as a worthy paradigm for a better quality of life. The confluence of pervasive computing and artificial intelligence, Pervasive AI, has expanded the role of ubiquitous IoT systems from mere data collection to executing distributed computations, offering a promising alternative to centralized learning while presenting various challenges. In this context, careful cooperation and resource scheduling are needed among IoT devices (e.g., smartphones, smart vehicles) and infrastructure (e.g., edge nodes and base stations) to avoid communication and computation overheads and ensure maximum performance. In this paper, we conduct a comprehensive survey of the recent techniques developed to overcome these resource challenges in pervasive AI systems. Specifically, we first present an overview of pervasive computing, its architecture, and its intersection with artificial intelligence. We then review the background, applications, and performance metrics of AI, particularly Deep Learning (DL) and online learning, running in a ubiquitous system. Next, we provide a deep literature review of communication-efficient techniques, from both algorithmic and system perspectives, for distributed inference, training, and online learning tasks across combinations of IoT devices, edge devices, and cloud servers. Finally, we discuss our future vision and research challenges.
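As one concrete instance of the communication-efficient distributed training the survey covers, here is a minimal sketch of federated averaging on a synthetic linear-regression task; it is a generic illustration, not an algorithm from the paper, and all parameters and data are assumptions.

```python
import numpy as np

def local_sgd(w, X, y, lr=0.1, epochs=5):
    """A few epochs of least-squares gradient descent on one device."""
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

def federated_averaging(devices, w, rounds=20):
    """Each round, devices train locally; only model weights (never
    raw data) are communicated and averaged, weighted by data size."""
    for _ in range(rounds):
        sizes = np.array([len(y) for _, y in devices])
        local_ws = [local_sgd(w.copy(), X, y) for X, y in devices]
        w = sum(n * lw for n, lw in zip(sizes, local_ws)) / sizes.sum()
    return w

# Synthetic example: three devices hold shards of the same linear task.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
devices = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    devices.append((X, X @ true_w + rng.normal(scale=0.1, size=50)))

w = federated_averaging(devices, np.zeros(2))
print(w)  # approaches [2, -1] without centralizing any raw data
```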


Bidding adieu to manual document processing

#artificialintelligence

Traditional document processing units required staff members to manually read and key in relevant information from purchase orders, quotes, invoices, remittances, and other documents, every day, year after year. This process lowered both staff morale and productivity, and often led to unwanted errors and increased costs. Intelligent document processing (IDP) is a next-generation approach that uses automation to quickly extract information from business documents. Here are 10 things you need to know about IDP and how it can enable end-to-end process automation for your organization. The first wave of IDP was driven by template-based optical character recognition (OCR) technology.
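As a minimal sketch of what that first, template-based wave looked like in practice (the field names and patterns below are illustrative, not a real template), extraction amounted to running OCR and then matching fixed patterns against the recognized text:

```python
import re

# Text as it might come back from an OCR engine for one invoice.
ocr_text = """
INVOICE NO: INV-20413
DATE: 2021-06-01
TOTAL DUE: $1,482.50
"""

# A "template": one fixed pattern per field, tied to this exact layout.
TEMPLATE = {
    "invoice_number": r"INVOICE NO:\s*(\S+)",
    "date": r"DATE:\s*([\d-]+)",
    "total": r"TOTAL DUE:\s*\$([\d,.]+)",
}

fields = {}
for name, pattern in TEMPLATE.items():
    match = re.search(pattern, ocr_text)
    fields[name] = match.group(1) if match else None

print(fields)
# {'invoice_number': 'INV-20413', 'date': '2021-06-01', 'total': '1,482.50'}
```

The brittleness is visible in the code: change the vendor's layout or wording and the patterns silently stop matching, which is exactly what pushed IDP toward learning-based extraction.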