A utility token that gives other tokens a use case through gaming. The idea of creating an independent game in which existing meme tokens can be used came from days and nights of searching for a way to contribute to the acceptance and mass adoption of cryptocurrency. To kick-start this idea, I will be reaching out to the top two trending meme tokens, giving their communities the opportunity to contribute to the creation of the game itself. A vote will then be held to decide which meme token will be used in the game, and the token with the most votes and contributions will become the game's sole currency.
Angel Token is a relevant project, notable for its simplicity and its own distinct characteristics, which include the following: * Consumers: the Angel Token project strives to provide comfort and trust to investors, and we always try to do as much as we can toward that goal. * Competition: although the Angel Token project is still new, it is bold enough to compete in a large market against projects that are long established and popular.
This project builds a Question-Answering system in the news domain, where the passages are news articles and anyone can ask a question against them. We have built a span-based model using an attention mechanism, in which the model predicts the answer to a question as the positions of the start and end tokens in a paragraph. For training our model, we used the Stanford Question Answering Dataset (SQuAD 2.0). To do well on SQuAD 2.0, a system must not only answer questions when possible but also determine when no answer is supported by the paragraph and abstain from answering. Our model architecture comprises three layers: an Embedding Layer, an RNN Layer, and an Attention Layer. For the Embedding Layer, we used GloVe and the Universal Sentence Encoder. For the RNN Layer, we built variations including a bi-LSTM and a stacked LSTM, and for the Attention Layer we built a Context-to-Question attention mechanism and also experimented with a Bidirectional Attention Layer. Our best-performing model, which combines GloVe embeddings with a bi-LSTM and Context-to-Question attention, achieved an F1 score of 33.095 and an EM of 33.094. We also leveraged transfer learning and built a Transformer-based model using BERT, which achieved an F1 score of 57.513 and an EM of 49.769. We concluded that the BERT model is superior across the various types of questions we evaluated.
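To illustrate the span-prediction idea described above, here is a minimal NumPy sketch (not the project's actual implementation): per-token hidden states are projected into start and end logits, the highest-scoring valid span is selected, and a SQuAD 2.0-style "null span" at the first token stands in for abstention. The hidden states and the projection vectors `w_start`/`w_end` are random placeholders for what a trained encoder and learned weights would produce.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy per-token hidden states for a 6-token paragraph (hidden dim 8).
# Index 0 plays the role of the [CLS]/no-answer token, as in SQuAD 2.0 setups.
H = rng.normal(size=(6, 8))

# Hypothetical learned projection vectors producing one logit per token.
w_start = rng.normal(size=8)
w_end = rng.normal(size=8)

start_logits = H @ w_start
end_logits = H @ w_end


def best_span(start_logits, end_logits, max_len=4):
    """Return the (start, end) pair maximizing start+end logit, start <= end."""
    best, best_score = (0, 0), -np.inf
    n = len(start_logits)
    for i in range(n):
        for j in range(i, min(n, i + max_len)):
            score = start_logits[i] + end_logits[j]
            if score > best_score:
                best_score, best = score, (i, j)
    return best, best_score


span, span_score = best_span(start_logits, end_logits)

# Abstention: if the null span (token 0) outscores the best answer span,
# the system declines to answer, as SQuAD 2.0 requires.
null_score = start_logits[0] + end_logits[0]
answer = None if null_score > span_score else span
```

In a trained model, the same argmax-over-spans decoding is applied to the logits that the attention/BERT layers produce; only the scoring function changes.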