

Customizing The SentenceDetector In Spark NLP - AI Summary

#artificialintelligence

There are many Natural Language Processing (NLP) tasks that require text to be split into chunks of varying granularity. A task to extract the names and addresses of a person is almost impossible under these conditions, simply because the data preparation stage was not up to it. Subject-specific technical terms are sometimes abbreviated in a way that is otherwise not generally used (German legal references: "Putzo ZPO 39. …"). So, let's take the German legal reference example from above and apply Spark NLP's extended capabilities on a sample project (with a series of CoLab notebooks) to see how this helps us split text correctly into sentences. We make the first 1000 rulings available as a separate JSON file (since handling larger data collections is otherwise difficult with a normal CoLab license). I developed a command-line tool called unsplit to parse the text from the German legal court rulings and split sentences at a period, except when the period character belonged to one of the known abbreviations in the previously curated list (the unsplit tool is a C#/.NET command-line program which I can publish on GitHub if people are interested). But honestly, I use this as a hint towards the quality of a model; I tend to say "the proof of the pudding is in the eating" and trust a real-world test more than any KPIs. I'll be looking forward to comments about things that could be improved in the data preparation stage of this sentence detection modelling task, or other items you might find worth giving me feedback about.
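Since the unsplit tool itself is not yet published, here is a minimal Python re-sketch of the rule it describes: split at a period followed by whitespace, unless the period belongs to a known abbreviation. The abbreviation list below is illustrative (the author's curated list is not public), and the extra rule for German ordinals such as "4." is my own assumption:

import re

# Illustrative abbreviation list; the actual curated list is not published.
ABBREVIATIONS = {"Aufl.", "Abs.", "Art.", "Nr.", "Vgl.", "Dr."}

def split_sentences(text: str) -> list[str]:
    """Split at '. ' boundaries, except when the period belongs to a
    known abbreviation or a German ordinal number (e.g. "4." = "4th")."""
    sentences, start = [], 0
    for match in re.finditer(r"\.\s+", text):
        # The whitespace-delimited token ending with this period,
        # e.g. "Aufl." in "ZPO 4. Aufl. Das Gericht ..."
        token = text[start:match.start() + 1].split()[-1]
        if token in ABBREVIATIONS or re.fullmatch(r"\d+\.", token):
            continue  # abbreviation or ordinal: not a sentence boundary
        sentences.append(text[start:match.end()].strip())
        start = match.end()
    if start < len(text):
        sentences.append(text[start:].strip())
    return sentences

# The legal reference stays intact instead of being split at "4." or "Aufl.":
print(split_sentences("Vgl. Schütze ZPO 4. Aufl. Das Gericht entschied."))
# -> ['Vgl. Schütze ZPO 4. Aufl. Das Gericht entschied.']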

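The sentence detection model itself can then be trained with Spark NLP's trainable SentenceDetectorDLApproach annotator. A minimal training sketch, assuming the prepared rulings were exported with one correctly split sentence per line (the file name and epoch count are assumptions; the notebooks' actual settings may differ):

import sparknlp
from sparknlp.base import DocumentAssembler
from sparknlp.annotator import SentenceDetectorDLApproach
from pyspark.ml import Pipeline

spark = sparknlp.start()

# One sentence per line, as produced by the abbreviation-aware splitter.
training_data = spark.read.text("rulings_sentences.txt").toDF("text")

document_assembler = DocumentAssembler() \
    .setInputCol("text") \
    .setOutputCol("document")

sentence_detector = SentenceDetectorDLApproach() \
    .setInputCols(["document"]) \
    .setOutputCol("sentences") \
    .setEpochsNumber(5)

pipeline = Pipeline(stages=[document_assembler, sentence_detector])
model = pipeline.fit(training_data)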

Customizing the SentenceDetector in Spark NLP

#artificialintelligence

There are many Natural Language Processing (NLP) tasks that require text to be split into chunks of varying granularity:

1. Document
2. Sentence
3. Token
4. etc.

This post focuses on splitting text into sentences in order to facilitate later downstream tasks such as Named Entity Recognition (NER), Text Classification or Sentiment Analysis. Splitting a sentence correctly can be crucial for the success of the downstream task, as we can see in the following example. Suppose we (wrongly) split a German legal reference like: "Schütze ZPO 4. Aufl. …". Now you might say this is special subject matter and there are always exotic cases. But this issue also occurs in daily life when you want to extract common things. Consider, for example, an (invented) German address (with correct syntax for zip code and so forth): "Dr. …"
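To make the failure mode concrete, here is a minimal sketch of a baseline Spark NLP pipeline using the rule-based SentenceDetector (the sample text extends the reference above and is purely illustrative). Run over text containing such a reference, the default splitter is likely to break it at the abbreviation periods:

import sparknlp
from sparknlp.base import DocumentAssembler
from sparknlp.annotator import SentenceDetector
from pyspark.ml import Pipeline

spark = sparknlp.start()

document_assembler = DocumentAssembler() \
    .setInputCol("text") \
    .setOutputCol("document")

# The default rule-based detector splits at sentence-ending punctuation
# and does not know German legal abbreviations such as "Aufl."
sentence_detector = SentenceDetector() \
    .setInputCols(["document"]) \
    .setOutputCol("sentence")

pipeline = Pipeline(stages=[document_assembler, sentence_detector])

data = spark.createDataFrame(
    [["Vgl. Schütze ZPO 4. Aufl. Das Gericht entschied anders."]]
).toDF("text")

result = pipeline.fit(data).transform(data)
result.selectExpr("explode(sentence.result) as sentence").show(truncate=False)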