

Deep learning for activity recognition

VideoLectures.NET

Human activity recognition (HAR) plays an important role in people's daily lives by learning and identifying high-level knowledge about human activity from raw sensor inputs. Conventional pattern recognition approaches have made tremendous progress on HAR tasks by adopting machine learning algorithms such as decision trees, random forests, or support vector machines, but the rapid development of deep learning has now surpassed the accuracy of these traditional machine learning methods. This seminar focuses on deep learning applied to HAR with wearable sensors: it explains the architectures currently in use and how to implement them to achieve good results, and it also discusses their limitations and the new challenges they raise.
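As a concrete illustration of the kind of model such work tends to use (not the architecture presented in the seminar), here is a minimal sketch of a small 1D convolutional network that classifies fixed-length windows of raw wearable-sensor signals; the channel count, window length, and number of activity classes are placeholder assumptions.

```python
# Illustrative sketch only: a small 1D CNN mapping fixed-length windows of
# multi-channel IMU data (e.g. accelerometer + gyroscope) to activity classes.
import torch
import torch.nn as nn

class HarCnn(nn.Module):
    def __init__(self, n_channels=6, n_classes=6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),          # global average pooling over time
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):                     # x: (batch, channels, time)
        return self.classifier(self.features(x).squeeze(-1))

model = HarCnn()
dummy = torch.randn(8, 6, 128)                # 8 windows of 6-axis sensor data
print(model(dummy).shape)                     # -> torch.Size([8, 6])
```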


Amazon Web Services & MXNet

VideoLectures.NET

This repo contains an incremental sequence of notebooks designed to teach deep learning, Apache MXNet (incubating), and the gluon interface. Our goal is to leverage the strengths of Jupyter notebooks to present prose, graphics, equations, and code together in one place. If we're successful, the result will be a resource that could be simultaneously a book, course material, a prop for live tutorials, and a source of useful code to plagiarise (with our blessing). To our knowledge, there's no resource out there that either (1) teaches the full breadth of concepts in modern deep learning or (2) interleaves an engaging textbook with runnable code. We'll find out by the end of this venture whether or not that void exists for a good reason.
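For readers unfamiliar with gluon, here is a minimal sketch, separate from the notebooks themselves, of how the interface lets you define a small network imperatively and run a forward pass; the layer sizes and input shape are arbitrary.

```python
# Minimal gluon sketch (illustrative, not taken from the course notebooks).
from mxnet import nd
from mxnet.gluon import nn

net = nn.Sequential()
net.add(nn.Dense(64, activation='relu'),  # hidden layer (arbitrary size)
        nn.Dense(10))                     # output layer, e.g. 10 classes
net.initialize()                          # default initializer

x = nd.random.uniform(shape=(4, 20))      # dummy batch of 4 examples
print(net(x).shape)                       # -> (4, 10)
```

Calling net(x) on the first batch triggers gluon's deferred shape inference, so the input dimension of each Dense layer never has to be spelled out.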


META: A Unifying Framework for the Management and Analysis of Text Data

VideoLectures.NET

Recent years have seen a dramatic growth of natural language text data, including web pages, news articles, scientific literature, emails, enterprise documents, and social media such as blog articles, forum posts, product reviews, and tweets. This has led to an increasing demand for powerful software tools to help people manage and analyze vast amounts of text data effectively and efficiently. Unlike data generated by a computer system or by sensors, text data are usually generated directly by humans for humans, and this has two important consequences. First, since text data are generated by people, they are especially valuable for discovering knowledge about human opinions and preferences, in addition to the many other kinds of knowledge that we encode in text. Second, since text is written for consumption by humans, humans play a critical role in any text data application system, and a text management and analysis system must involve them in the loop of text analysis.


Cloud-based Data Mining Tools for Storage, Distributed Processing, and Machine Learning Systems for Scientific Data

VideoLectures.NET

This hands-on training is intended to familiarize researchers and data scientists with the services Azure offers to aid them in their research, especially with regard to high-performance computing, big-data analysis, and analyzing data streaming from Internet-of-Things (IoT) devices.


Using R for Scalable Data Science: Single Machines to Hadoop Spark Clusters

VideoLectures.NET

In this tutorial, we will demonstrate how to create scalable, end-to-end data analysis processes in R on single machines as well as in-database in SQL Server and on Hadoop clusters running Spark. We will provide hands-on exercises as well as code in a public GitHub repository for attendees to adopt in their data science practice. In particular, attendees will see how to build, persist, and consume machine learning models using distributed machine learning functions in R. R is one of the most widely used languages in the data science, statistics, and machine learning (ML) community. Although open-source R (the CRAN library) now offers in excess of 10,000 packages and functions for statistics and ML, when it comes to scalable analysis in R, or deployment of trained models into production, many data scientists are blocked or hindered by (a) the limitations of the available functions for handling large datasets efficiently, and (b) a lack of knowledge about the appropriate computing environments for scaling R scripts from desktop analysis to elastic and distributed cloud services. In this tutorial, we will discuss how to create end-to-end data science solutions that utilize distributed compute resources.
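The tutorial's own code is in R; purely as an illustration of the build / persist / consume pattern it describes, here is a rough PySpark analogue. The file paths, column names, and model choice are made up for this sketch and are not taken from the tutorial's repository.

```python
# Rough PySpark analogue of "build, persist, consume" on a Spark cluster.
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import LogisticRegression, LogisticRegressionModel

spark = SparkSession.builder.appName("scalable-demo").getOrCreate()
df = spark.read.csv("hdfs:///data/train.csv", header=True, inferSchema=True)

# Build: assemble features and fit a model on the cluster.
assembler = VectorAssembler(inputCols=["x1", "x2", "x3"], outputCol="features")
model = LogisticRegression(labelCol="label").fit(assembler.transform(df))

# Persist: write the fitted model to shared storage.
model.write().overwrite().save("hdfs:///models/lr_demo")

# Consume: reload later (or elsewhere) and score new data.
reloaded = LogisticRegressionModel.load("hdfs:///models/lr_demo")
reloaded.transform(assembler.transform(df)).select("prediction").show(5)
```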


AAAI videos online!

VideoLectures.NET

After some fine-tuning of our storage repository, we are back with exciting new content provided by the Association for the Advancement of Artificial Intelligence (AAAI), a nonprofit scientific society devoted to advancing the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines.


Convex and Combinatorial Optimization for Dynamic Robots in the Real World

VideoLectures.NET

Humanoid robots walking across intermittent terrain, robotic arms grasping multifaceted objects, or UAVs darting left or right around a tree ... many of the dynamics and control problems we face today have both rich nonlinear dynamics and an inherently combinatorial structure. In this talk, Tedrake will review some recent work on planning and control methods which address these two challenges simultaneously.
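To make "inherently combinatorial structure" concrete, here is a toy sketch, not taken from the talk, of a mixed-integer program in which a binary variable chooses whether to pass an obstacle on the left or on the right while a continuous position variable minimizes distance to a goal; the obstacle interval, goal, and big-M constant are illustrative.

```python
# Toy mixed-integer linear program: binary z picks the side of an obstacle
# (z = 0 left, z = 1 right); continuous x gets as close to the goal as it can.
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

goal, obs_lo, obs_hi, M = 2.0, -1.0, 3.0, 100.0

# Decision vector [x, t, z]; minimize t, an upper bound on |x - goal|.
c = np.array([0.0, 1.0, 0.0])

constraints = [
    # obs_hi - M <= x - M*z <= obs_lo encodes "x <= obs_lo or x >= obs_hi".
    LinearConstraint([[1.0, 0.0, -M]], lb=obs_hi - M, ub=obs_lo),
    # t >= x - goal and t >= goal - x make t bound the absolute distance.
    LinearConstraint([[-1.0, 1.0, 0.0]], lb=-goal, ub=np.inf),
    LinearConstraint([[1.0, 1.0, 0.0]], lb=goal, ub=np.inf),
]

bounds = Bounds(lb=[-10.0, 0.0, 0.0], ub=[10.0, np.inf, 1.0])
res = milp(c=c, constraints=constraints, integrality=[0, 0, 1], bounds=bounds)
x_opt, _, z_opt = res.x
print(f"pass on the {'right' if z_opt > 0.5 else 'left'}, x = {x_opt:.2f}")
```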


31st AAAI Conference on Artificial Intelligence, San Francisco 2017

VideoLectures.NET

The Thirty-First AAAI Conference on Artificial Intelligence (AAAI-17) was held February 4–9 in San Francisco, California. The purpose of the AAAI conference is to promote research in artificial intelligence (AI) and scientific exchange among AI researchers, practitioners, scientists, and engineers in affiliated disciplines.


We just added videos from the Deep Learning and Reinforcement Learning Summer School

VideoLectures.NET

Don't miss out on any of the newest findings in these rapidly growing fields of research, presented at the Deep Learning (DLSS) and Reinforcement Learning (RLSS) Summer School.


Doing text analytics for Digital Humanities and Social Sciences with CLARIN (LDK tutorial), Galway 2017

VideoLectures.NET

Text is a basic material, a primary data layer, in many areas of the humanities and social sciences. If we want to move forward with the agenda that the fields of digital humanities and computational social sciences are projecting, it is vital to bring together the technical areas that deal with automated text processing and the scholars in the humanities and social sciences. To foster new areas of research, it is necessary to understand not only what is out there in terms of proven technologies and infrastructures such as CLARIN, but also how the developers of text analytics can work with researchers in the humanities and social sciences so that each side better understands the challenges in the other's field. What are the research questions of the researchers working on the texts?