

The business LMS – from basic requirement to learning ecosystem – MATRIX Blog

#artificialintelligence

Learning management systems are not new to corporate learning; they have been around for quite some time, and more are released each year. At its core, an LMS hosts, distributes, records, and reports on all learning that goes on within an organization. Beyond that, companies today ask for and expect many additional features. Probably the most difficult one to incorporate is tracking all informal learning and using that information to provide highly personalized learning. The LMS is the critical component of the entire e-learning program, acting both as the foundation (by incorporating all the modules) and as the engine (by providing the environment in which learners access them and by suggesting topics based on curriculum and personal interest).


Introduction to PyTorch for Deep Learning

#artificialintelligence

In this tutorial, you'll get an introduction to deep learning using the PyTorch framework, and by its conclusion, you'll be comfortable applying it to your deep learning models. Facebook launched PyTorch 1.0 early this year with integrations for Google Cloud, AWS, and Azure Machine Learning. In this tutorial, I assume that you're already familiar with Scikit-learn, Pandas, NumPy, and SciPy. These packages are important prerequisites for this tutorial. Deep learning is a subfield of machine learning with algorithms inspired by the working of the human brain.
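As a taste of what the tutorial covers, here is a minimal, hypothetical sketch (not taken from the tutorial itself) of defining and training a small feed-forward network in PyTorch; the model name, synthetic data, and hyperparameters are placeholders.

```python
# Minimal PyTorch sketch (illustration only): a small feed-forward classifier
# trained for a few steps on synthetic data.
import torch
import torch.nn as nn

class TinyNet(nn.Module):  # hypothetical example model
    def __init__(self, in_dim=10, hidden=32, out_dim=2):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Linear(in_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, out_dim),
        )

    def forward(self, x):
        return self.layers(x)

# Synthetic data stands in for a real dataset.
X = torch.randn(256, 10)
y = torch.randint(0, 2, (256,))

model = TinyNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.4f}")
```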


Modular Materialisation of Datalog Programs

arXiv.org Artificial Intelligence

The seminaïve algorithm can materialise all consequences of arbitrary datalog rules, and it also forms the basis for incremental algorithms that update a materialisation as the input facts change. Certain (combinations of) rules, however, can be handled much more efficiently using custom algorithms. To integrate such algorithms into a general reasoning approach that can handle arbitrary rules, we propose a modular framework for materialisation computation and its maintenance. We split a datalog program into modules that can be handled using specialised algorithms, and handle the remaining rules using the seminaïve algorithm. We also present two algorithms for computing the transitive and the symmetric-transitive closure of a relation that can be used within our framework. Finally, we show empirically that our framework can handle arbitrary datalog programs while outperforming existing approaches, often by orders of magnitude.
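To make the idea concrete, here is an illustrative Python sketch (not the paper's code) of seminaïve evaluation specialised to transitive closure: each round joins only the facts derived in the previous round (the "delta") with the edge relation, rather than re-deriving everything.

```python
# Illustrative sketch: seminaive evaluation of the transitive-closure rules
#   T(x,y) :- E(x,y)
#   T(x,z) :- T(x,y), E(y,z)
# Only facts derived in the previous round are joined with E each iteration.

def transitive_closure(edges):
    edges = set(edges)
    closure = set(edges)   # T initialised from the base rule
    delta = set(edges)     # newly derived facts from the last round
    while delta:
        new = {(x, z) for (x, y) in delta
                      for (y2, z) in edges if y == y2}
        delta = new - closure   # keep only genuinely new consequences
        closure |= delta
    return closure

print(sorted(transitive_closure({(1, 2), (2, 3), (3, 4)})))
# [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]
```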


Modular Networks: Learning to Decompose Neural Computation

arXiv.org Artificial Intelligence

Scaling model capacity has been vital in the success of deep learning. For a typical network, necessary compute resources and training time grow dramatically with model size. Conditional computation is a promising way to increase the number of parameters with a relatively small increase in resources. We propose a training algorithm that flexibly chooses neural modules based on the data to be processed. Both the decomposition and modules are learned end-to-end. In contrast to existing approaches, training does not rely on regularization to enforce diversity in module use. We apply modular networks both to image recognition and language modeling tasks, where we achieve superior performance compared to several baselines. Introspection reveals that modules specialize in interpretable contexts.
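As a rough illustration of the idea (not the paper's training algorithm), the sketch below shows a layer built from several candidate modules plus a small controller that picks one module per input example. In the paper both the decomposition and the modules are learned end-to-end; the hard argmax choice here is only to show the conditional-computation pattern.

```python
# Hedged sketch: a layer of expert modules with a controller that routes each
# input example to exactly one module.
import torch
import torch.nn as nn

class ModularLayer(nn.Module):
    def __init__(self, dim=16, n_modules=4):
        super().__init__()
        self.experts = nn.ModuleList(
            [nn.Sequential(nn.Linear(dim, dim), nn.ReLU()) for _ in range(n_modules)]
        )
        self.controller = nn.Linear(dim, n_modules)  # scores each module per example

    def forward(self, x):
        choice = self.controller(x).argmax(dim=-1)   # hard choice: one module per example
        out = torch.zeros_like(x)
        for i, expert in enumerate(self.experts):
            mask = choice == i
            if mask.any():                           # run only the selected modules
                out[mask] = expert(x[mask])
        return out

layer = ModularLayer()
print(layer(torch.randn(8, 16)).shape)  # torch.Size([8, 16])
```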


Shape-shifting machine can switch between a delivery drone and an arm that can lift and move objects

Daily Mail

A shape-shifting robot that can autonomously reconfigure itself into a variety of different shapes has been developed by scientists. The device can perceive its own surroundings, make decisions and autonomously assume different shapes, they say. That means the shape-shifting machine can easily switch between a delivery drone and an arm that lifts and moves objects. It is hoped that similar gadgets will one day be used in search and rescue operations and to explore distant planets. The shape-shifting robot is composed of wheeled, cube-shaped modules that can detach and reattach to form new shapes.


Klue Expands Artificial Intelligence Platform with Offering for Type 1 Diabetes – dLife

#artificialintelligence

Klue, a digital health company focused on behavior tracking and positive change, announces today that it plans to expand its platform to type 1 diabetes. The start-up is known for its gesture-sensing and analytics technology that promotes mindful eating and proper hydration. Now the company aims to use its patented technology to create "never-seen-before" capabilities in treating type 1 diabetes. Using software, it plans to unlock solutions that will be life-changing for patients who have had a lifetime of daily injections and blood sugar monitoring. Klue's fine-motor artificial intelligence technology detects high-impact moments and allows individuals to better manage their health, all based on insights captured automatically from analyzing their wrist movements.


These transformer robots are autonomous

ZDNet

Scientists at Cornell University and University of Pennsylvania have created mobile robots that can autonomously change their shape in order to complete various tasks. The robots are made of 3-inch cubes that can configure themselves into different shapes by connecting with magnets. Here's how it's related to artificial intelligence, how it works and why it matters. The "brain" of the transforming robot is a central sensor module that uses a 3-D camera to perceive and create a 3-D map of the environment in real time, the researchers explain in a paper in the newest issue of Science Robotics. The central module uses a set of algorithms to decide what shape the whole cluster of cubes should take, depending on environmental factors and the task at hand.


This Robot Transforms Itself to Navigate an Obstacle Course

IEEE Spectrum Robotics Channel

When you've got a hammer, everything looks like a nail, but the world starts to look more interesting if your hammer can change shape. For the builders of a class of robots called modular self-reconfigurable robots (MSRR), shape-shifting is the first step toward endowing robots with an animal-like adaptability to unknown situations. "The question of autonomy becomes more complicated, more interesting," when robots can change themselves to meet changing circumstances, said roboticist Hadas Kress-Gazit of Cornell University. The key to achieving adaptability for robots rests in centralized sensory processing, environmental perception, and decision-making software, Kress-Gazit and colleagues report this week in a new paper in Science Robotics. The authors claim their new work represents the first time a modular robot has autonomously solved problems by reconfiguring in response to a changing environment.


Power BI and Azure ML: make them work with Power Query

#artificialintelligence

In this blog post, I am going to show how to use an Azure ML web service in Power BI (Power Query). First, you need to create a model in Azure ML Studio and create a web service for it. The traditional example is predicting whether a passenger on the Titanic survived or not. Open Azure ML Studio and follow the steps to create a model for this prediction. Next, we start creating an experiment in Azure ML by clicking on Experiment on the left side of the window.
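For readers who want to see what the web service actually receives, here is a hedged sketch in Python (rather than the post's Power Query M) of the plain REST call behind an Azure ML Studio request-response endpoint. The URL, API key, and column names are placeholders and must match your experiment's input schema.

```python
# Hedged illustration: calling an Azure ML Studio (classic) web service
# endpoint over HTTP. All identifiers below are placeholders.
import json
import requests

url = "https://<region>.services.azureml.net/workspaces/<ws>/services/<id>/execute?api-version=2.0"
api_key = "<your-api-key>"

payload = {
    "Inputs": {
        "input1": {
            "ColumnNames": ["Pclass", "Sex", "Age"],   # must match the experiment's input
            "Values": [["3", "male", "22"]],           # one passenger to score
        }
    },
    "GlobalParameters": {},
}

resp = requests.post(
    url,
    headers={"Authorization": f"Bearer {api_key}",
             "Content-Type": "application/json"},
    data=json.dumps(payload),
)
print(resp.json())  # scored labels/probabilities are returned in the response body
```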


Gated Hierarchical Attention for Image Captioning

arXiv.org Artificial Intelligence

Attention modules connecting the encoder and decoder have been widely applied in object recognition, image captioning, visual question answering, and neural machine translation, and significantly improve performance. In this paper, we propose a bottom-up gated hierarchical attention (GHA) mechanism for image captioning. Our proposed model employs a CNN as the decoder, which is able to learn different concepts at different layers, and different concepts correspond to different areas of an image. Therefore, we develop GHA, in which low-level concepts are merged into high-level concepts and, simultaneously, low-level attended features are passed to the top to make predictions. Our GHA significantly improves the performance of a model that applies only one level of attention; for example, the CIDEr score increases from 0.923 to 0.999, which is comparable to state-of-the-art models that employ attribute boosting and reinforcement learning (RL). We also conduct extensive experiments to analyze the CNN decoder and our proposed GHA, and we find that deeper decoders do not obtain better performance: when the convolutional decoder becomes deeper, the model is likely to collapse during training.
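As a loose illustration only (not the paper's exact GHA architecture), the sketch below shows one gated merge step in which attended features from a lower layer and a higher layer are combined through a learned sigmoid gate before being passed upward.

```python
# Hedged sketch: a gated merge of low-level and high-level attended features.
import torch
import torch.nn as nn

class GatedMerge(nn.Module):
    def __init__(self, dim=512):
        super().__init__()
        self.gate = nn.Linear(2 * dim, dim)

    def forward(self, low, high):
        g = torch.sigmoid(self.gate(torch.cat([low, high], dim=-1)))
        return g * low + (1.0 - g) * high  # gated mix of the two levels

merge = GatedMerge()
low = torch.randn(4, 512)    # attended features from a lower layer
high = torch.randn(4, 512)   # attended features from a higher layer
print(merge(low, high).shape)  # torch.Size([4, 512])
```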