Tokenization of Arabic words using NLTK

Hady Elsahar · Oct 23, 2012 · Viewed 12k times

I'm using NLTK's word_tokenize to split a sentence into words.

I want to tokenize this sentence:

في_بيتنا كل شي لما تحتاجه يضيع ...ادور على شاحن فجأة يختفي ..لدرجة اني اسوي نفسي ادور شيء 

The code I'm writing is:

# -*- coding: utf-8 -*-
import nltk

lex = u"في_بيتنا كل شي لما تحتاجه يضيع ...ادور على شاحن فجأة يختفي ..لدرجة اني اسوي نفسي ادور شيء"

wordsArray = nltk.word_tokenize(lex)
print " ".join(wordsArray)

The problem is that word_tokenize doesn't split on words. Instead, it splits on letters, so the output is:

"ف ي _ ب ي ت ن ا ك ل ش ي ل م ا ت ح ت ا ج ه ي ض ي ع ... ا د و ر ع ل ى ش ا ح ن ف ج أ ة ي خ ت ف ي .. ل د ر ج ة ا ن ي ا س و ي ن ف س ي ا د و ر ش ي ء"

Any ideas?

What I've reached so far:

When I tried the text in an online demo, it also appeared to be tokenized by letters; other tokenizers, however, tokenized it correctly. Does that mean word_tokenize is for English only? Does that go for most NLTK functions?

Answer

Jacob · Oct 24, 2012

I always recommend nltk.tokenize.wordpunct_tokenize, which is a simple regex-based tokenizer and therefore not tied to any particular language. You can try out many of the NLTK tokenizers at http://text-processing.com/demo/tokenize/ and see for yourself.
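A minimal sketch of that suggestion on the sentence from the question (Python 3; wordpunct_tokenize is driven by the regex \w+|[^\w\s]+, so runs of Arabic letters stay together instead of being split character by character):

```python
# -*- coding: utf-8 -*-
from nltk.tokenize import wordpunct_tokenize

# wordpunct_tokenize is a pure regex tokenizer (r"\w+|[^\w\s]+"),
# so it needs no trained model and is not English-specific.
lex = u"في_بيتنا كل شي لما تحتاجه يضيع ...ادور على شاحن فجأة يختفي ..لدرجة اني اسوي نفسي ادور شيء"

words = wordpunct_tokenize(lex)
print(" ".join(words))
# The Arabic words come out whole: the token list starts with
# "في_بيتنا" (the underscore counts as \w), "كل", "شي", ...
```

Note that the punctuation runs like "..." and ".." come out as separate tokens, matched by the [^\w\s]+ half of the pattern.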