Visvesvaraya Technological University will teach Artificial Intelligence, Machine Learning

#artificialintelligence

From this academic year (2019-20), aspiring engineering candidates in Karnataka will have the opportunity to study two of the most in-demand subjects: Artificial Intelligence (AI) and Machine Learning (ML). At its Executive Council meeting on May 30, the Visvesvaraya Technological University (VTU) resolved to introduce a Bachelor of Engineering (BE) in AI and ML with effect from the academic year 2019-20. The eligibility for admission to this course remains the same as for other BE and B.Tech programs at VTU. As of now, several colleges offer modules in Machine Vision, Robot Programming, and Artificial Intelligence in the third semester of the Instrumentation Engineering course, but offering AI and ML as a degree course in its own right is a first for VTU.


Say "Hello" to the SparkFun Artemis

#artificialintelligence

Measuring just 10.5 × 15.5 mm including the antenna, the SparkFun Artemis module is intended to bridge the gap from "maker to market," and from prototype to product. The module has all of the support circuitry needed to make use of the Apollo 3 processor, but has been designed so that routing to the module can be done with lower-cost 2-layer PCBs with an 8 mil trace clearance. That means it can be easily integrated into maker projects, with a short run of circuit boards sourced from somewhere like OSH Park, or picked up in tape-and-reel quantities for use in a production product. Today's release is the 'engineering' version of the module and comes without FCC approval or a CE mark; however, a fully FCC/CE-approved version of the module with an RF shield is set to ship in tape-and-reel quantities as soon as next month. For SparkFun, traditionally known as a hobbyist supplier, the new Artemis module is a big departure.


SEntNet: Source-aware Recurrent Entity Network for Dialogue Response Selection

arXiv.org Artificial Intelligence

Dialogue response selection is an important part of Task-oriented Dialogue Systems (TDSs); it aims to predict an appropriate response given a dialogue context. Obtaining key information from a complex, long dialogue context is challenging, especially when different sources of information are available, e.g., the user's utterances, the system's responses, and results retrieved from a knowledge base (KB). Previous work ignores the type of information source and merges sources for response selection. However, accounting for the source type may lead to remarkable differences in the quality of response selection. We propose the Source-aware Recurrent Entity Network (SEntNet), which is aware of different information sources for the response selection process. SEntNet achieves this by employing source-specific memories to exploit differences in the usage of words and syntactic structure from different information sources (user, system, and KB). Experimental results show that SEntNet obtains 91.0% accuracy on the Dialog bAbI dataset, outperforming prior work by 4.7%. On the DSTC2 dataset, SEntNet obtains an accuracy of 41.2%, beating source-unaware recurrent entity networks by 2.4%.
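
As a rough illustration of the core idea, the sketch below (not the authors' released code) keeps a separate gated memory per information source, so that a user turn only updates the user memory; the dimensions, gating, and encoder-free setup are all simplifying assumptions.

```python
# Minimal sketch of source-specific memories (an assumption-laden toy,
# not the authors' SEntNet implementation).
import torch
import torch.nn as nn

class SourceAwareMemory(nn.Module):
    def __init__(self, dim=64, sources=("user", "system", "kb")):
        super().__init__()
        # one memory vector and one update transform per source
        self.memory = nn.ParameterDict(
            {s: nn.Parameter(torch.zeros(dim)) for s in sources})
        self.update = nn.ModuleDict(
            {s: nn.Linear(2 * dim, dim) for s in sources})

    def forward(self, utterance_vec, source):
        # gated, entity-network-style update of this source's memory only
        mem = self.memory[source]
        candidate = torch.tanh(
            self.update[source](torch.cat([mem, utterance_vec])))
        gate = torch.sigmoid((utterance_vec * mem).sum())
        return gate * candidate + (1 - gate) * mem

net = SourceAwareMemory()
user_turn = torch.randn(64)        # stands in for an encoded user utterance
user_mem = net(user_turn, "user")  # system and KB memories are untouched
```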


Code free Data Science with Microsoft Azure Machine Learning Studio

#artificialintelligence

Now that we have trained our model, we can use our validation set to see how well the model is doing. We can do this by first making predictions using the Score Model module and then using the Evaluate Model module to get our accuracy and loss metrics. To make predictions on the validation set, we connect the trained model to the left input of the Score Model module and the right output node of the Split Data module to the right input of the Score Model module. When visualizing the output, we can see two new columns. The Scored Labels column contains the predicted labels, represented as integers of either 0 or 1.
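
For readers who prefer code, here is a rough scikit-learn analogue of the Split Data → train → Score Model → Evaluate Model flow described above; the toy data and logistic-regression learner are stand-ins, since the Studio experiment itself is built visually.

```python
# Rough scikit-learn analogue of the visual pipeline described above:
# Split Data -> train -> Score Model -> Evaluate Model. Toy data only.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, log_loss
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, random_state=0)
# "Split Data" module: hold out a validation set
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# "Score Model" module: scored labels (0 or 1) plus scored probabilities
scored_labels = model.predict(X_val)
scored_probs = model.predict_proba(X_val)

# "Evaluate Model" module: accuracy and loss metrics
print("accuracy:", accuracy_score(y_val, scored_labels))
print("log loss:", log_loss(y_val, scored_probs))
```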


Linear vs Polynomial Regression Walk-Through

#artificialintelligence

Fish get bigger as they get older. How well can fish length (cm) be predicted with age (yr) as the explanatory variable? Is the relationship best fit with a linear regression? First, let's bring in the data and a few important modules for the analysis. There are 77 instances in the data set. Now let's visualize the scatter plot.
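
A minimal sketch of that comparison is shown below; the article's 77-row fish data set is not reproduced here, so synthetic age/length data stands in for it.

```python
# Sketch of the linear-vs-polynomial comparison; synthetic stand-in data,
# since the article's 77-instance fish data set is not reproduced here.
import matplotlib.pyplot as plt
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)
age = rng.uniform(0.5, 6.0, 77).reshape(-1, 1)                    # yr
length = 10 + 14 * np.log1p(age[:, 0]) + rng.normal(0, 1.5, 77)   # cm

linear = LinearRegression().fit(age, length)
poly = make_pipeline(PolynomialFeatures(degree=2),
                     LinearRegression()).fit(age, length)

grid = np.linspace(0.5, 6.0, 200).reshape(-1, 1)
plt.scatter(age, length, s=12, label="data")
plt.plot(grid, linear.predict(grid),
         label=f"linear, R^2={linear.score(age, length):.2f}")
plt.plot(grid, poly.predict(grid),
         label=f"quadratic, R^2={poly.score(age, length):.2f}")
plt.xlabel("age (yr)")
plt.ylabel("length (cm)")
plt.legend()
plt.show()
```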


Cognitive Knowledge Graph Reasoning for One-shot Relational Learning

arXiv.org Machine Learning

Inferring new facts from existing knowledge graphs (KG) with explainable reasoning processes is a significant problem and has received much attention recently. However, few studies have focused on relation types unseen in the original KG, given only one or a few instances for training. To bridge this gap, we propose CogKR for one-shot KG reasoning. The one-shot relational learning problem is tackled through two modules: the summary module summarizes the underlying relationship of the given instances, based on which the reasoning module infers the correct answers. Motivated by the dual process theory in cognitive science, in the reasoning module, a cognitive graph is built by iteratively coordinating retrieval (System 1, collecting relevant evidence intuitively) and reasoning (System 2, conducting relational reasoning over collected information). The structural information offered by the cognitive graph enables our model to aggregate pieces of evidence from multiple reasoning paths and explain the reasoning process graphically. Experiments show that CogKR substantially outperforms previous state-of-the-art models on one-shot KG reasoning benchmarks, with relative improvements of 24.3%-29.7% on MRR. The source code is available at https://github.com/THUDM/CogKR.
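
The loop the abstract describes can be sketched roughly as follows; this is a toy schematic, not the released CogKR code, and the evidence-counting "System 2" stands in for the learned relational reasoner.

```python
# Toy schematic of the iterative retrieve-then-reason loop (not the
# released CogKR code); kg maps entity -> list of (relation, neighbor).
def cogkr_style_inference(kg, support_pair, query_entity, max_steps=3):
    # support_pair is the one-shot (head, tail) instance; the real summary
    # module would encode it, but this toy scorer ignores it
    graph = {}                # the cognitive graph, grown step by step
    frontier = {query_entity}
    evidence = {}             # crude per-candidate evidence count
    for _ in range(max_steps):
        next_frontier = set()
        for node in frontier:
            # "System 1": intuitively collect relevant edges around node
            for rel, neighbor in kg.get(node, []):
                graph.setdefault(node, []).append((rel, neighbor))
                evidence[neighbor] = evidence.get(neighbor, 0) + 1
                next_frontier.add(neighbor)
        frontier = next_frontier
    # "System 2" placeholder: CogKR learns to reason over the graph;
    # here we simply return the candidate with the most evidence
    return max(evidence, key=evidence.get) if evidence else None

toy_kg = {"Paris": [("capital_of", "France")],
          "France": [("in_continent", "Europe")]}
print(cogkr_style_inference(toy_kg, ("Paris", "France"), "Paris"))
```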


E3: Entailment-driven Extracting and Editing for Conversational Machine Reading

arXiv.org Artificial Intelligence

Conversational machine reading systems help users answer high-level questions (e.g., determine if they qualify for particular government benefits) when they do not know the exact rules by which the determination is made (e.g., whether they need certain income levels or veteran status). The key challenge is that these rules are only provided in the form of a procedural text (e.g., guidelines from a government website) which the system must read to figure out what to ask the user. We present a new conversational machine reading model that jointly extracts a set of decision rules from the procedural text while reasoning about which are entailed by the conversational history and which still need to be edited to create questions for the user. On the recently introduced ShARC conversational machine reading dataset, our Entailment-driven Extract and Edit network (E3) achieves a new state of the art, outperforming existing systems as well as a new BERT-based baseline. In addition, by explicitly highlighting which information still needs to be gathered, E3 provides a more explainable alternative to prior work. We release source code for our models and experiments at https://github.com/vzhong/e3.
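
To make the extract-then-entail idea concrete, here is a toy sketch (not the released E3 code): the clause-splitting "extractor" and keyword-overlap "entailment" check below are crude placeholders for the learned, BERT-based components.

```python
# Toy sketch of extract-then-entail (not the released E3 code): split a
# procedural text into candidate rules, skip the ones loosely "entailed"
# by the dialogue history, and ask about the first remaining rule.
def next_question(procedural_text, history):
    # extraction placeholder: treat each clause as a candidate rule
    rules = [c.strip() for c in procedural_text.split(";") if c.strip()]
    history_text = " ".join(history).lower()
    for rule in rules:
        # entailment placeholder: crude keyword overlap with the history
        entailed = any(word in history_text for word in rule.lower().split())
        if not entailed:
            return f"Do you meet this condition: {rule}?"
    return "You appear to qualify."

print(next_question("income below $20,000; veteran status",
                    ["I earn about $15,000 of income a year"]))
# -> Do you meet this condition: veteran status?
```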


Autonomous Goal Exploration using Learned Goal Spaces for Visuomotor Skill Acquisition in Robots

arXiv.org Artificial Intelligence

The automatic, efficient, and unsupervised discovery of skills for long-lived autonomous agents remains a challenge for Artificial Intelligence. Intrinsically Motivated Goal Exploration Processes give learning agents a human-inspired mechanism for sequentially selecting goals to achieve. This approach offers a new perspective on the lifelong learning problem, with promising results in both simulated and real-world experiments. Until recently, those algorithms were restricted to domains requiring experimenter knowledge, since the goal space used by the agents was built on engineered feature extractors. Recent advances in deep representation learning enable new ways of designing those feature extractors, using the agent's experience directly. Recent work has shown the potential of those methods on simple yet challenging simulated domains. In this paper, we present recent results showing the applicability of those principles to a real-world robotic setup, where a 6-joint robotic arm learns to manipulate a ball inside an arena by choosing goals in a space learned from its past experience.
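
In code, an intrinsically motivated goal exploration loop with a learned goal space looks roughly like the sketch below; the encoder, environment, and nearest-neighbour action reuse are illustrative stand-ins for the paper's actual robotic setup.

```python
# Minimal goal-exploration loop with a learned goal space; the encoder,
# environment, and action-reuse policy are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def encode(observation):
    # stands in for a learned representation (e.g., a VAE encoder)
    return observation[:2]

def environment(action):
    # stands in for the robot arm and arena; returns a final observation
    return np.tanh(action) + 0.05 * rng.normal(size=action.shape)

memory = []  # (goal, action, outcome) triples
for episode in range(200):
    goal = rng.uniform(-1, 1, size=2)  # sample a goal in the goal space
    if memory:
        # reuse and perturb the action whose outcome came closest to goal
        _, best_action, _ = min(
            memory, key=lambda m: np.linalg.norm(encode(m[2]) - goal))
        action = best_action + 0.1 * rng.normal(size=4)
    else:
        action = rng.uniform(-1, 1, size=4)
    outcome = environment(action)
    memory.append((goal, action, outcome))
```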


Project Thyia: A Forever Gameplayer

arXiv.org Artificial Intelligence

The space of Artificial Intelligence entities is dominated by conversational bots. Some of them fit in our pockets and we take them everywhere we go, or we allow them to become part of our homes. Siri and Alexa are recognised as present in our world. But a lot of games research is restricted to the separate realm of software. We enter different worlds when playing games, but those worlds cease to exist once we quit. Similarly, AI game-players are run once on a game (or perhaps for longer periods of time, in the case of learning algorithms which need some, still limited, period for training), and they cease to exist once the game ends. But what if they didn't? What if there existed artificial game-players that continuously played games, learned from their experiences and kept getting better? What if they interacted with the real world and with us, humans: live-streaming games, chatting with viewers, accepting suggestions for strategies or games to play, forming opinions on popular game titles? In this paper, we introduce the vision behind a new project called Thyia, which focuses on creating a present, continuous, 'always-on', interactive game-player.


Introduction to PyTorch

#artificialintelligence

The world is changing, and so is the technology serving it. It's crucial for everyone to keep up with the rapid changes in technology, and one of the domains witnessing the fastest and largest evolution is Artificial Intelligence. We are training our machines to learn, and the results are getting better and better. There are GANs that can generate new images, Deep Learning models for translating sign language into text, and much more!