Remote Computer Vision Engineer openings in California on August 14, 2022 – Data Science Jobs

#artificialintelligence

No required months of experience are specified for this role. Samsara (NYSE: IOT) is the pioneer of the Connected Operations Cloud, which allows businesses that depend on physical operations to harness IoT (Internet of Things) data to develop actionable business insights and improve their operations. Founded in San Francisco in 2015, we now employ more than 1,800 people globally and have over 1.5 million active devices. Samsara also went public in December 2021 and we're just getting started. Recent awards we've won include:

• #2 in the Financial Times' Fastest Growing Companies in Americas list 2021
• Named as a Best Place to Work in Built In 2022
• #19 in the Forbes Cloud 100 2021
• IoT Analytics Company of the Year in 2022's IoT Breakthrough Winners
• Forbes Advisor named us the Best Solution for Large Companies – Fleet management software for 2022!

We're driving change in industries that are yet to fully embrace digital transformation. Physical operations make up a massive slice of the global economy but haven't benefited from innovation and actionable information in the way that other sectors have.


Intro to MLOps using Amazon SageMaker

#artificialintelligence

Originally published on Towards AI, the World's Leading AI and Technology News and Media Company. If you are building an AI-related product or service, we invite you to consider...


LTC-SUM: Lightweight Client-driven Personalized Video Summarization Framework Using 2D CNN

arXiv.org Artificial Intelligence

This paper proposes a novel lightweight thumbnail container-based summarization (LTC-SUM) framework for full feature-length videos. This framework generates a personalized keyshot summary for concurrent users by using the computational resources of the end-user device. State-of-the-art methods that acquire and process entire video data to generate video summaries are highly computationally intensive. In this regard, the proposed LTC-SUM method uses lightweight thumbnails to handle the complex process of detecting events. This significantly reduces computational complexity and improves communication and storage efficiency by resolving computational and privacy bottlenecks in resource-constrained end-user devices. These improvements were achieved by designing a lightweight 2D CNN model to extract features from thumbnails, which helped select and retrieve only a handful of specific segments. Extensive quantitative experiments on a set of 18 full feature-length videos (approximately 32.9 h in duration) showed that the proposed method is significantly more computationally efficient than state-of-the-art methods on the same end-user device configurations. Joint qualitative assessments by 56 participants showed that they gave higher ratings to the summaries generated using the proposed method. To the best of our knowledge, this is the first attempt at designing a fully client-driven personalized keyshot video summarization framework using thumbnail containers for feature-length videos.
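
The abstract does not give the network's exact architecture, so the following is only a rough sketch of the kind of lightweight 2D CNN thumbnail classifier it describes; the layer sizes, the binary "event / no event" head, and the 0.5 selection threshold are assumptions for illustration (PyTorch).

```python
# Hypothetical sketch of a lightweight 2D CNN thumbnail classifier.
# Layer sizes and the binary event head are assumptions; the paper's
# exact architecture is not given in the abstract.
import torch
import torch.nn as nn

class ThumbnailEventCNN(nn.Module):
    def __init__(self, num_event_classes: int = 1):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # downsample small thumbnails cheaply
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),              # global pooling keeps the head tiny
        )
        self.head = nn.Linear(32, num_event_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = self.features(x).flatten(1)
        return self.head(z)                       # event logits per thumbnail

# Score thumbnails on the client and keep only segments whose thumbnails
# look event-like, instead of downloading and decoding the full video.
model = ThumbnailEventCNN().eval()
thumbnails = torch.rand(8, 3, 128, 128)           # dummy batch of decoded thumbnails
with torch.no_grad():
    scores = torch.sigmoid(model(thumbnails)).squeeze(1)
selected_segments = (scores > 0.5).nonzero(as_tuple=True)[0].tolist()
```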


A Taxonomy of Information Attributes for Test Case Prioritisation: Applicability, Machine Learning

arXiv.org Artificial Intelligence

Most software companies have extensive test suites and re-run parts of them continuously to ensure that recent changes have no adverse effects. Since test suites are costly to execute, industry needs methods for test case prioritisation (TCP). Recent TCP methods use machine learning (ML) to exploit the information known about the system under test (SUT) and its test cases. However, the value added by ML-based TCP methods should be critically assessed against the cost of collecting the information. This paper analyses two decades of TCP research and presents a taxonomy of 91 information attributes that have been used. The attributes are classified with respect to their information sources and the characteristics of their extraction process. Based on this taxonomy, TCP methods validated with industrial data and those applying ML are analysed in terms of information availability, attribute combination, and definition of data features suitable for ML. Relying on a high number of information attributes, assuming easy access to SUT code, and assuming simplified testing environments are identified as factors that might hamper the industrial applicability of ML-based TCP. The TePIA taxonomy provides a reference framework to unify terminology and evaluate alternatives considering the cost-benefit of the information attributes.
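
As a rough illustration of how information attributes can feed an ML-based TCP method, the sketch below trains a classifier on historical runs and orders the current test suite by predicted failure probability; the three feature columns are hypothetical attributes (recent failures, execution time, changed-code coverage), not entries from the TePIA taxonomy, and the data is synthetic.

```python
# Illustrative sketch of ML-based test case prioritisation (TCP):
# learn failure likelihood from historical runs, then order the suite by it.
# Feature columns are hypothetical information attributes, data is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Historical data: one row per (test case, past run); columns stand in for
# attributes such as recent failure count, execution time, changed-code coverage.
X_hist = rng.random((500, 3))
y_hist = (X_hist[:, 0] + 0.5 * X_hist[:, 2] + rng.normal(0, 0.1, 500) > 0.9).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_hist, y_hist)

# Attribute values for the test cases in the current cycle.
test_ids = [f"test_{i}" for i in range(10)]
X_now = rng.random((10, 3))

# Run the tests most likely to fail first.
fail_prob = model.predict_proba(X_now)[:, 1]
priority_order = [test_ids[i] for i in np.argsort(-fail_prob)]
print(priority_order)
```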


Systems Challenges for Trustworthy Embodied Systems

arXiv.org Artificial Intelligence

A new generation of increasingly autonomous and self-learning systems, which we call embodied systems, is about to be developed. When deploying these systems into a real-life context we face various engineering challenges, as it is crucial to coordinate the behavior of embodied systems in a beneficial manner, ensure their compatibility with our human-centered social values, and design verifiably safe and reliable human-machine interaction. We argue that traditional systems engineering is reaching a climacteric in the shift from embedded to embodied systems, and in assuring the trustworthiness of dynamic federations of situationally aware, intent-driven, explorative, ever-evolving, largely non-predictable, and increasingly autonomous embodied systems in uncertain, complex, and unpredictable real-world contexts. We also identify a number of urgent systems challenges for trustworthy embodied systems, including robust and human-centric AI, cognitive architectures, uncertainty quantification, trustworthy self-integration, and continual analysis and assurance.


Forecasting: theory and practice

arXiv.org Machine Learning

Forecasting has always been at the forefront of decision making and planning. The uncertainty that surrounds the future is both exciting and challenging, with individuals and organisations seeking to minimise risks and maximise utilities. The large number of forecasting applications calls for a diverse set of forecasting methods to tackle real-life challenges. This article provides a non-systematic review of the theory and the practice of forecasting. We provide an overview of a wide range of theoretical, state-of-the-art models, methods, principles, and approaches to prepare, produce, organise, and evaluate forecasts. We then demonstrate how such theoretical concepts are applied in a variety of real-life contexts. We do not claim that this review is an exhaustive list of methods and applications. However, we hope that our encyclopedic presentation will offer a point of reference for the rich work that has been undertaken over the last decades, with some key insights for the future of forecasting theory and practice. Given its encyclopedic nature, the intended mode of reading is non-linear. We offer cross-references to allow readers to navigate through the various topics. We complement the theoretical concepts and applications covered with large lists of free or open-source software implementations and publicly available databases.
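
As a small, self-contained example of the kind of method and open-source tooling the review catalogues, the sketch below fits additive Holt-Winters exponential smoothing to a synthetic monthly series with statsmodels; the data and model choice are illustrative only, not taken from the article.

```python
# Minimal example of one classical forecasting method (Holt-Winters
# exponential smoothing) using the open-source statsmodels implementation.
# The monthly series is synthetic and purely illustrative.
import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

# Synthetic monthly series with a linear trend and yearly seasonality.
idx = pd.date_range("2015-01-01", periods=96, freq="MS")
values = (100 + 0.5 * np.arange(96)
          + 10 * np.sin(2 * np.pi * np.arange(96) / 12)
          + np.random.default_rng(1).normal(0, 2, 96))
series = pd.Series(values, index=idx)

# Fit additive trend and seasonality, then forecast the next 12 months.
model = ExponentialSmoothing(series, trend="add", seasonal="add",
                             seasonal_periods=12).fit()
forecast = model.forecast(12)
print(forecast.round(1))
```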


Investigating Neighborhood Modeling and Asymmetry Preservation in Digraph Representation Learning

arXiv.org Artificial Intelligence

Graph Neural Networks (GNNs) traditionally exhibit poor performance on directed graphs (digraphs) due to notable challenges in (1) modeling neighborhoods and (2) preserving asymmetry. In this paper, we address these challenges in traditional GNNs by leveraging hyperbolic collaborative learning from multi-ordered and partitioned neighborhoods, together with regularizers inspired by socio-psychological factors. Our resulting formalism, the Digraph Hyperbolic Network (D-HYPR), learns node representations in hyperbolic space to avoid the structural and semantic distortion of real-world digraphs. We conduct comprehensive experimentation on 4 tasks: link prediction, node classification, sign prediction, and embedding visualization. D-HYPR outperforms the current state of the art, with statistical significance, on a majority of tasks and datasets, while achieving competitive performance otherwise. Our code and data will be available.
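
The following is not the D-HYPR model, only a minimal sketch of one ingredient named in the abstract: embedding nodes in hyperbolic space. It maps Euclidean node features into the Poincaré ball via the exponential map at the origin and scores a candidate link by negative geodesic distance; the curvature, feature values, and scoring rule are assumptions for illustration and omit D-HYPR's neighborhood modeling and asymmetry-preserving components.

```python
# Sketch of hyperbolic node embeddings (Poincare ball, curvature -1),
# not the D-HYPR architecture itself. Node features are dummy data.
import torch

def expmap0(v: torch.Tensor, eps: float = 1e-7) -> torch.Tensor:
    """Exponential map at the origin of the unit Poincare ball."""
    norm = v.norm(dim=-1, keepdim=True).clamp_min(eps)
    return torch.tanh(norm) * v / norm

def poincare_dist(x: torch.Tensor, y: torch.Tensor, eps: float = 1e-7) -> torch.Tensor:
    """Geodesic distance on the unit Poincare ball."""
    sq = (x - y).pow(2).sum(-1)
    denom = (1 - x.pow(2).sum(-1)).clamp_min(eps) * (1 - y.pow(2).sum(-1)).clamp_min(eps)
    return torch.acosh(1 + 2 * sq / denom)

# Dummy Euclidean features for a 5-node digraph, pushed onto the ball.
feats = torch.randn(5, 16) * 0.1
emb = expmap0(feats)

# Illustrative link score: closer pairs in hyperbolic space are more likely
# to be linked; D-HYPR's actual scoring additionally preserves edge direction.
score_0_to_3 = -poincare_dist(emb[0], emb[3])
print(float(score_0_to_3))
```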


Big Data Industry Predictions for 2022 - insideBIGDATA

#artificialintelligence

As a result, all major cloud providers are either offering or promising to offer Kubernetes options that run on-premises and in multiple clouds. While Kubernetes is making the cloud more open, cloud providers are trying to become "stickier" with more vertical integration. From database-as-a-service (DBaaS) to AI/ML services, the cloud providers are offering options that make it easier and faster to code. Organizations should not take a "one size fits all" approach to the cloud. For applications and environments that can scale quickly, Kubernetes may be the right option. For stable applications, leveraging DBaaS and built-in AI/ML could be the perfect solution. For infrastructure services, SaaS offerings may be the optimal approach. The number of options will increase, so create basic business guidelines for your teams.


Artificial Intelligence -- Application in Life Sciences and Beyond. The Upper Rhine Artificial Intelligence Symposium UR-AI 2021

arXiv.org Artificial Intelligence

The TriRhenaTech alliance presents the accepted papers of the 'Upper-Rhine Artificial Intelligence Symposium' held on October 27th, 2021 in Kaiserslautern, Germany. Topics of the conference are applications of Artificial Intelligence in life sciences, intelligent systems, Industry 4.0, mobility, and others. The TriRhenaTech alliance is a network of universities in the Upper-Rhine Trinational Metropolitan Region comprising the German universities of applied sciences in Furtwangen, Kaiserslautern, Karlsruhe, Offenburg, and Trier, the Baden-Wuerttemberg Cooperative State University Loerrach, the French university network Alsace Tech (comprising 14 'grandes écoles' in the fields of engineering, architecture, and management), and the University of Applied Sciences and Arts Northwestern Switzerland. The alliance's common goal is to reinforce the transfer of knowledge, research, and technology, as well as the cross-border mobility of students.


Cyberphysical Sequencing for Distributed Asset Management with Broad Traceability

arXiv.org Artificial Intelligence

Cyber-physical systems (CPS) have complex lifecycles involving multiple stakeholders, and the transparency of both hardware and software components' supply chains is opaque at best. This raises concerns for stakeholders who may not trust that what they receive is what was requested. There is an opportunity to build a cyberphysical titling process offering universal traceability and the ability to differentiate systems based on provenance. Today, RFID tags and barcodes address some of these needs, though they are easily manipulated because they are not linked to an object or system's intrinsic characteristics. We propose cyberphysical sequencing as a low-cost, lightweight, and pervasive means of adding track-and-trace capabilities to any asset by tying a system's physical identity to a unique and invariant digital identifier. CPS sequencing offers benefits similar to Digital Twins' for identifying and managing the provenance and identity of an asset throughout its life, with far fewer computational and other resources. Across domains, manufactured and assembled system complexity is increasing. Constituent components require compliance with stringent specifications, must have low defect rates, and increasingly require known provenance relating to origin and interaction histories. At the same time, economic and other constraints affecting production and assembly may necessitate involving diverse and untrusted vendors: a vehicle's parts may be made abroad and assembled domestically, while a medication might be compounded in one country before being shipped to another for packaging and a third for distribution. Power generation plant components might be manufactured globally but require certification in the country of use, while electronics manufacturing for a globally-distributed device may require trust-related integrated circuits to be provided and validated by a single-source vendor.
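
The abstract does not spell out a concrete mechanism, so the sketch below only illustrates the general idea in a hedged way: derive a digital identifier from a component's intrinsic physical characteristics and hash-chain custody events so the provenance record is tamper-evident. All field names and the hashing scheme are hypothetical, not the paper's protocol.

```python
# Hedged illustration of an intrinsic-characteristics identifier plus a
# tamper-evident, hash-chained custody record. Field names are hypothetical.
import hashlib
import json

def asset_id(intrinsic: dict) -> str:
    """Identifier derived from measured, hard-to-forge physical characteristics."""
    canonical = json.dumps(intrinsic, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()

def append_event(chain: list, event: dict) -> list:
    """Append a custody event linked to the previous record's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    record = {"event": event, "prev": prev_hash}
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    return chain + [record]

# Example: a part manufactured abroad, then assembled domestically.
part = asset_id({"mass_g": 412.7, "alloy": "Al-6061", "surface_signature": "a1b2c3"})
history = []
history = append_event(history, {"asset": part, "actor": "vendor_A", "step": "manufactured"})
history = append_event(history, {"asset": part, "actor": "oem_B", "step": "assembled"})
print(part, len(history))
```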