
Sentiment Analysis with KNIME - KDnuggets

#artificialintelligence

Sentiment analysis of free-text documents is a common task in the field of text mining. In sentiment analysis, predefined sentiment labels, such as "positive" or "negative", are assigned to texts. Texts (here called documents) can be reviews about products or movies, articles, tweets, etc. In this article, we show you how to assign predefined sentiment labels to documents using the KNIME Text Processing extension in combination with traditional KNIME learner and predictor nodes. A set of 2000 documents has been sampled from the training set of the Large Movie Review Dataset v1.0.
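KNIME builds this pipeline visually with nodes rather than code, but the underlying pattern -- document vectors fed to a traditional learner and predictor -- is easy to see in an analogous Python sketch. This is illustrative only: the toy reviews and the choice of classifier are assumptions, not the article's actual KNIME workflow.

```python
# Analogous sentiment pipeline in Python (the article builds the equivalent
# visually with KNIME Text Processing + learner/predictor nodes).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Tiny stand-in for the 2000 sampled movie reviews (hypothetical data).
docs = ["a gripping, wonderful film", "dull plot and wooden acting",
        "one of the best movies this year", "a complete waste of time"]
labels = ["positive", "negative", "positive", "negative"]

X_train, X_test, y_train, y_test = train_test_split(
    docs, labels, test_size=0.5, stratify=labels, random_state=42)

# Bag-of-words features plus a traditional classifier, mirroring the
# document-vector and learner/predictor stages of the KNIME workflow.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(X_train, y_train)
print(model.predict(X_test))
```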


Learning Deep Learning. A tutorial on KNIME Deeplearning4J Integration

@machinelearnbot

The aim of this blog post is to highlight some of the key features of the KNIME Deeplearning4J (DL4J) integration and to help newcomers to either Deep Learning or KNIME take their first steps with Deep Learning in KNIME Analytics Platform. With a little bit of patience, you can run the example provided in this blog post on your laptop, since it uses a small dataset and only a few neural net layers. However, Deep Learning is a poster child for using GPUs to accelerate expensive computations. Fortunately, DL4J includes GPU acceleration, which can be enabled within KNIME Analytics Platform. If you don't happen to have a good GPU available, a particularly easy way to get access to one is to use a GPU-enabled KNIME Cloud Analytics Platform, the cloud version of KNIME Analytics Platform.


Improving OCR Results with Basic Image Processing - PyImageSearch

#artificialintelligence

We still have a long way to go before our image is ready to OCR, so let's see what comes next: Line 25 applies a distance transform to our thresh image using a maskSize of 5 x 5 -- a calculation that determines the distance from each pixel to the nearest 0 pixel (black) in the input image. Subsequently, we normalize and scale the dist to the range [0, 255] (Lines 30 and 31). The distance transform starts to reveal the digits themselves, since there is a larger distance from the foreground pixels to the background. The distance transform has the added benefit of cleaning up much of the noise in the image's background. For more details on this transform, refer to the OpenCV docs. From there, we apply Otsu's thresholding method again, but this time to the dist map (Lines 35 and 36); the results are shown in Figure 4. Notice that we are not using the inverse binary threshold (we've dropped the _INV part of the flag) because we want the text to remain in the foreground (white). Let's continue to clean up our foreground: applying an opening morphological operation (i.e., erosion followed by dilation) disconnects connected blobs and removes noise (Lines 41 and 42). Figure 5 demonstrates that our opening operation effectively disconnects the "1" character from the blob at the top of the image (magenta circle).
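A minimal sketch of the steps described above, assuming an input image binarized with inverse Otsu thresholding first; the filename and the 2 x 2 opening kernel are illustrative assumptions, not the tutorial's exact code:

```python
import cv2

image = cv2.imread("input.png")  # hypothetical input path
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
thresh = cv2.threshold(gray, 0, 255,
                       cv2.THRESH_BINARY_INV | cv2.THRESH_OTSU)[1]

# Distance transform with a 5x5 mask: distance from each pixel to the
# nearest 0 (black) pixel, then normalize and scale to [0, 255].
dist = cv2.distanceTransform(thresh, cv2.DIST_L2, 5)
dist = cv2.normalize(dist, None, 0, 1.0, cv2.NORM_MINMAX)
dist = (dist * 255).astype("uint8")

# Otsu again, this time *without* the inverse flag, so text stays white.
dist = cv2.threshold(dist, 0, 255,
                     cv2.THRESH_BINARY | cv2.THRESH_OTSU)[1]

# Opening (erosion followed by dilation) disconnects blobs and removes noise.
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (2, 2))
opening = cv2.morphologyEx(dist, cv2.MORPH_OPEN, kernel)
```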


OCR Passports with OpenCV and Tesseract - PyImageSearch

#artificialintelligence

To learn how to OCR a passport using OpenCV and Tesseract, just keep reading. So far in this course, we've relied on the Tesseract OCR engine to detect the text in an input image. However, as we discovered in a previous tutorial, sometimes Tesseract needs a bit of help before we can actually OCR the text. This tutorial will explore this idea more, demonstrating that computer vision and image processing techniques can localize text regions in a complex input image. Once the text is localized, we can extract the text ROI from the input image and then OCR it using Tesseract.
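The localize-then-OCR idea can be sketched as follows; this is a simplified stand-in for the tutorial's pipeline (the filename, the closing-kernel size, and the largest-contour heuristic are all assumptions), but it shows the core pattern of cropping a text ROI before handing it to Tesseract:

```python
import cv2
import pytesseract

image = cv2.imread("passport.png")  # hypothetical input
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Threshold, then close with a wide rectangular kernel so characters in a
# text line merge into one connected region we can localize.
thresh = cv2.threshold(gray, 0, 255,
                       cv2.THRESH_BINARY_INV | cv2.THRESH_OTSU)[1]
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (25, 7))
closed = cv2.morphologyEx(thresh, cv2.MORPH_CLOSE, kernel)

# Take the largest contour as the candidate text region (an assumption).
cnts, _ = cv2.findContours(closed, cv2.RETR_EXTERNAL,
                           cv2.CHAIN_APPROX_SIMPLE)
c = max(cnts, key=cv2.contourArea)
x, y, w, h = cv2.boundingRect(c)

# Extract the text ROI and let Tesseract OCR just that region.
roi = image[y:y + h, x:x + w]
print(pytesseract.image_to_string(roi))
```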


OCR a document, form, or invoice with Tesseract, OpenCV, and Python - PyImageSearch

#artificialintelligence

In this tutorial, you will learn how to OCR a document, form, or invoice using Tesseract, OpenCV, and Python. On the left, we have our template image (i.e., a form from the United States Internal Revenue Service). The middle figure is our input image that we wish to align to the template (thereby allowing us to match fields from the two images together). And finally, the right figure shows the output of aligning the two images together. At this point, we can associate text fields in the form with each corresponding field in the template, meaning that we know which locations of the input image map to the name, address, EIN, etc. fields of the template. Knowing where and what the fields are then allows us to OCR each individual field and keep track of the results for further processing, such as automated database entry.
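The alignment step can be sketched with keypoint matching and a RANSAC homography, which is the general approach the tutorial describes; the filenames, the ORB feature count, and the keep-top-20% match filter here are illustrative parameters, not the article's exact values:

```python
import cv2
import numpy as np

template = cv2.imread("form_template.png")  # hypothetical paths
image = cv2.imread("scanned_form.png")
grayT = cv2.cvtColor(template, cv2.COLOR_BGR2GRAY)
grayI = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Detect ORB keypoints/descriptors in both images and match them.
orb = cv2.ORB_create(500)
kpsT, descT = orb.detectAndCompute(grayT, None)
kpsI, descI = orb.detectAndCompute(grayI, None)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(descI, descT), key=lambda m: m.distance)

# Keep the best matches and build corresponding point arrays.
matches = matches[:int(len(matches) * 0.2)]
ptsI = np.float32([kpsI[m.queryIdx].pt for m in matches])
ptsT = np.float32([kpsT[m.trainIdx].pt for m in matches])

# Estimate a homography and warp the input onto the template; fields in
# the aligned image now sit at the template's known coordinates.
H, _ = cv2.findHomography(ptsI, ptsT, method=cv2.RANSAC)
h, w = template.shape[:2]
aligned = cv2.warpPerspective(image, H, (w, h))
```

Once `aligned` is computed, each field can be cropped at its template-defined coordinates and passed to Tesseract individually.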