Introducing HOWLX TOKEN.

#artificialintelligence

A utility token that gives other tokens a use case through gaming. The idea of building an independent game in which existing meme tokens can be used came from days and nights of searching for a way to contribute to the acceptance and mass adoption of cryptocurrency. To kick-start the idea, I will be reaching out to the top two trending meme tokens, giving each token's community the opportunity to contribute to the creation of the game itself. A vote will then be held to decide which meme token the game will use, and the token with the most votes and contributions will become the game's sole currency.


Angel Token

#artificialintelligence

Angel Token is a highly relevant project that combines simplicity with its own distinctive characteristics, including the following:
* Consumer trust: the Angel Token project strives to give investors comfort and confidence, doing everything it can to earn their trust.
* Competition: although the Angel Token project is still new, it is bold enough to compete in a large market against older, well-established projects.


Building a Question and Answer System for News Domain

arXiv.org Artificial Intelligence

This project attempts to build a Question-Answering system for the news domain, where the passages are news articles and anyone can ask a question against them. We have built a span-based model using an attention mechanism, where the model predicts the answer to a question as the positions of the start and end tokens in a paragraph. For training, we used the Stanford Question Answering Dataset (SQuAD 2.0) [1]. To do well on SQuAD 2.0, a system must not only answer questions when possible but also determine when no answer is supported by the paragraph and abstain from answering. Our model architecture comprises three layers: an Embedding Layer, an RNN Layer, and an Attention Layer. For the Embedding Layer, we used GloVe and the Universal Sentence Encoder. For the RNN Layer, we built variations including a bi-LSTM and a stacked LSTM, and for the Attention Layer we used Context-to-Question Attention and also experimented with a Bidirectional Attention Layer. Our best-performing model, which combines GloVe embeddings with a bi-LSTM and Context-to-Question Attention, achieved an F1 score of 33.095 and an EM of 33.094. We also leveraged transfer learning and built a Transformer-based model using BERT, which achieved an F1 score of 57.513 and an EM of 49.769. We concluded that the BERT model is superior across all types of questions.
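As a rough illustration of the span-based architecture the abstract describes (GloVe-style embeddings, a bi-LSTM encoder, and Context-to-Question attention feeding start and end position logits), here is a minimal PyTorch sketch. It is not the authors' code; the class name, the dimensions, and the randomly initialized stand-in for pretrained GloVe vectors are illustrative assumptions.

```python
# Minimal span-based QA sketch: embeddings -> bi-LSTM -> Context-to-Question
# attention -> start/end logits. Sizes and names are illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpanQAModel(nn.Module):
    def __init__(self, vocab_size=10000, embed_dim=100, hidden_dim=128):
        super().__init__()
        # Stand-in for pretrained GloVe vectors (typically loaded and frozen).
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.encoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True,
                               bidirectional=True)
        # Predict start/end logits from [context; attended question] features.
        self.start_head = nn.Linear(4 * hidden_dim, 1)
        self.end_head = nn.Linear(4 * hidden_dim, 1)

    def forward(self, context_ids, question_ids):
        c, _ = self.encoder(self.embedding(context_ids))   # (B, Tc, 2H)
        q, _ = self.encoder(self.embedding(question_ids))  # (B, Tq, 2H)
        # Context-to-Question attention: each context token attends over
        # question tokens via dot-product similarity.
        scores = torch.bmm(c, q.transpose(1, 2))           # (B, Tc, Tq)
        attn = F.softmax(scores, dim=-1)
        attended = torch.bmm(attn, q)                      # (B, Tc, 2H)
        features = torch.cat([c, attended], dim=-1)        # (B, Tc, 4H)
        start_logits = self.start_head(features).squeeze(-1)  # (B, Tc)
        end_logits = self.end_head(features).squeeze(-1)       # (B, Tc)
        return start_logits, end_logits

model = SpanQAModel()
context = torch.randint(0, 10000, (2, 50))   # batch of 2 contexts, 50 tokens
question = torch.randint(0, 10000, (2, 12))  # batch of 2 questions, 12 tokens
start_logits, end_logits = model(context, question)
print(start_logits.shape, end_logits.shape)  # torch.Size([2, 50]) for each
```

At inference time, the predicted answer is the (start, end) pair that maximizes the sum of the two logits; for SQuAD 2.0's unanswerable questions, systems typically compare this score against a no-answer threshold and abstain when it falls below.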


Contextual Word Representations

Communications of the ACM

This article aims to tell the story of how we put words into computers. It is part of the story of the field of natural language processing (NLP), a branch of artificial intelligence.


How to Participate in Gladius Public Pre-sale – Gladius Network – Medium

@machinelearnbot

This post details how to participate in the Gladius public pre-sale, which runs from November 24 (02:00 UTC / 9:00am EST) to December 30 (22:00 UTC / 5:00pm EST).