Summarizing Economic Bulletin Documents with tf-idf


In this example, a selection of economic bulletins in PDF format from 2018 to 2019 is analysed in order to gauge economic sentiment. The bulletins in question are sourced from the European Central Bank website. As a disclaimer, the examples below are used solely to illustrate natural language processing techniques for educational purposes; they are not intended as a formal economic summary in any business context. First, pdf2txt is used to convert the PDF files into text format from a Linux shell.
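Once the bulletins are in plain text, the tf-idf weighting itself can be sketched in a few lines. The snippet below is a minimal, from-scratch illustration of the weighting step; the three short "documents" are invented stand-ins, not text from the ECB bulletins.

```python
import math
import re
from collections import Counter

def tokenize(text):
    """Lowercase a document and split it into word tokens."""
    return re.findall(r"[a-z']+", text.lower())

def tfidf_scores(documents):
    """Compute a tf-idf weight for each term in each document.

    tf  = raw count of the term in the document
    idf = log(N / df), where df is the number of documents
          that contain the term at least once.
    """
    tokenized = [tokenize(doc) for doc in documents]
    n_docs = len(tokenized)
    df = Counter()
    for tokens in tokenized:
        df.update(set(tokens))  # count each term once per document
    scores = []
    for tokens in tokenized:
        tf = Counter(tokens)
        scores.append({
            term: count * math.log(n_docs / df[term])
            for term, count in tf.items()
        })
    return scores

# Invented bulletin-style snippets, for illustration only.
docs = [
    "growth in the euro area remains solid",
    "inflation in the euro area is projected to rise",
    "trade tensions weigh on growth",
]
weights = tfidf_scores(docs)
```

Terms concentrated in a single bulletin (such as "trade" above) receive the highest weights, which is what makes tf-idf useful for picking out the distinctive sentences of each document.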

Supervised Learning for Document Classification with Scikit-Learn - QuantStart


This is the first article in what will become a set of tutorials on how to carry out natural language document classification, for the purposes of sentiment analysis and, ultimately, automated trade filter or signal generation. This particular article will make use of Support Vector Machines (SVM) to classify text documents into mutually exclusive groups. Since this is the first article written in 2015, I feel it is now time to move on from Python 2.7.x and make use of the latest 3.4.x releases. Hence all code in this article will be written with 3.4.x in mind. There are a significant number of steps to carry out between viewing a text document on a website, say, and using its content as an input to an automated trading strategy to generate trade filters or signals. In this particular article we will avoid discussing how to download multiple articles from external sources, and will instead make use of a given dataset that already comes with its own labels. This will allow us to concentrate on the implementation of the "classification pipeline", rather than spend a substantial amount of time obtaining and tagging documents.
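The classification pipeline described above can be sketched with scikit-learn's `Pipeline`, chaining a tf-idf vectoriser into a linear-kernel SVM. The four training documents and their labels below are toy stand-ins for a real labelled dataset, assuming scikit-learn is installed.

```python
# Minimal sketch of a tf-idf + SVM classification pipeline.
# The documents and labels are invented for illustration.
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

train_docs = [
    "stocks rallied as earnings beat expectations",
    "the central bank raised interest rates again",
    "the striker scored twice in the final",
    "the team won the championship on penalties",
]
train_labels = ["finance", "finance", "sport", "sport"]

# Vectorise with tf-idf, then fit a linear-kernel SVM on top.
clf = Pipeline([
    ("tfidf", TfidfVectorizer()),
    ("svm", LinearSVC()),
])
clf.fit(train_docs, train_labels)

print(clf.predict(["the bank cut rates to boost stocks"]))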

Intuitive Understanding of Word Embeddings: Count Vectors to Word2Vec


Before we start, have a look at the examples below. What do they have in common? You probably guessed it: text processing. All three scenarios deal with huge amounts of text and perform a range of tasks: clustering in the Google search example, classification in the second, and machine translation in the third. Humans can deal with text quite intuitively, but with millions of documents being generated every single day, we cannot have humans performing these three tasks; it is neither scalable nor effective. So how do we make today's computers perform clustering, classification and so on over text data, given that they are generally inefficient at handling and processing strings or text for any fruitful output?
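The simplest answer is to turn each document into a vector of numbers. A count vector, the starting point on the road to Word2Vec, does this by building a shared vocabulary and recording how often each term occurs in each document. The sketch below implements this from scratch on two invented sentences:

```python
from collections import Counter

def count_vectors(corpus):
    """Build a shared vocabulary and represent each document
    as a vector of raw term counts (the count-vector model)."""
    tokenized = [doc.lower().split() for doc in corpus]
    vocab = sorted({tok for tokens in tokenized for tok in tokens})
    index = {term: i for i, term in enumerate(vocab)}
    vectors = []
    for tokens in tokenized:
        vec = [0] * len(vocab)
        for term, count in Counter(tokens).items():
            vec[index[term]] = count
        vectors.append(vec)
    return vocab, vectors

corpus = [
    "he is a lazy boy she is lazy too",
    "neeraj is a lazy person",
]
vocab, vectors = count_vectors(corpus)
```

Each document becomes a row of counts over the same vocabulary, so any standard machine learning algorithm can now operate on the text. The weakness, which motivates embeddings, is that these vectors grow with the vocabulary and treat every pair of distinct words as equally unrelated.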

Tough Times for German Savers: Who's Responsible for Eurozone's Negative Interest Rates?

Der Spiegel International

As the vice president of the European Central Bank (ECB), Vítor Constâncio always had to be cautious. Every sentence and word he uttered about the economy or interest rates had the potential to move financial markets by billions of euros. But now that Constâncio has been in retirement for close to a year, he can speak freely. At the moment, Germany's resistance to debt spending is troubling him. The economist finds it distressing that the German government is sticking to its mantra of a balanced budget despite negative interest rates on sovereign bonds.

DarkEmbed: Exploit Prediction With Neural Language Models

AAAI Conferences

Software vulnerabilities can expose computer systems to attacks by malicious actors. With the number of vulnerabilities discovered in recent years surging, creating timely patches for every vulnerability is not always feasible. At the same time, not every vulnerability will be exploited by attackers; hence, prioritizing vulnerabilities by assessing the likelihood that they will be exploited has become an important research problem. Recent works used machine learning techniques to predict exploited vulnerabilities by analyzing discussions about vulnerabilities on social media. These methods relied on traditional text processing techniques, which represent statistical features of words but fail to capture their context. To address this challenge, we propose DarkEmbed, a neural language modeling approach that learns low-dimensional distributed representations, i.e., embeddings, of darkweb/deepweb discussions to predict whether vulnerabilities will be exploited. By capturing linguistic regularities of human language, such as syntactic and semantic similarity and logical analogy, the learned embeddings are better able to classify discussions about exploited vulnerabilities than traditional text analysis methods. Evaluations demonstrate the efficacy of the learned embeddings on both structured text (such as security blog posts) and unstructured text (darkweb/deepweb posts). DarkEmbed outperforms state-of-the-art approaches on the exploit prediction task with an F1-score of 0.74.
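The core idea behind this approach, representing a discussion by combining low-dimensional word vectors so that semantically similar posts land close together, can be sketched as follows. The tiny hand-made embedding table is purely illustrative; DarkEmbed itself learns its embeddings from darkweb/deepweb corpora rather than using fixed vectors like these.

```python
import math

# Toy 3-dimensional word embeddings, invented for illustration.
# Real systems learn these vectors from large text corpora.
EMBEDDINGS = {
    "exploit": [0.9, 0.1, 0.0],
    "payload": [0.8, 0.2, 0.1],
    "patch":   [0.1, 0.9, 0.2],
    "update":  [0.2, 0.8, 0.1],
}

def post_vector(post):
    """Represent a post as the average of its word embeddings,
    skipping out-of-vocabulary words."""
    vecs = [EMBEDDINGS[w] for w in post.lower().split() if w in EMBEDDINGS]
    return [sum(dim) / len(vecs) for dim in zip(*vecs)]

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

attack = post_vector("new exploit payload released")
fix = post_vector("vendor patch update available")
other = post_vector("working exploit payload")
```

Posts about exploitation end up far more similar to each other than to patching discussions, and these dense post vectors can then be fed to a downstream classifier that predicts whether the vulnerability under discussion will be exploited.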