I am trying to feed an entire paragraph into my word processor so that it is split into sentences first and then into words. I tried the following code, but it does not work:
```python
# text is the paragraph input
sent_text = sent_tokenize(text)
tokenized_text = word_tokenize(sent_text.split)
tagged = nltk.pos_tag(tokenized_text)
print(tagged)
```
However, this does not work and gives me errors. How do I tokenize a paragraph into sentences and then into words?
An example paragraph:
This thing seemed to overpower and astonish the little dark-brown dog, and wounded him to the heart. He sank down in despair at the child's feet. When the blow was repeated, together with an admonition in childish sentences, he turned over upon his back, and held his paws in a peculiar manner. At the same time with his ears and his eyes he offered a small prayer to the child.
**WARNING:** This is just a random text from the internet; I do not own the above content.
You probably intended to loop over `sent_text`:
```python
import nltk

sent_text = nltk.sent_tokenize(text)  # this gives us a list of sentences
# now loop over each sentence and tokenize it separately
for sentence in sent_text:
    tokenized_text = nltk.word_tokenize(sentence)
    tagged = nltk.pos_tag(tokenized_text)
    print(tagged)
```
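If these calls raise a `LookupError`, the tokenizer and tagger models probably still need to be downloaded. Below is a minimal sketch of that setup, plus one way to collect the results into a list instead of printing them; the variable names (`tagged_sentences`) and the sample `text` are my own, and the exact resource names can vary with your NLTK version:

```python
import nltk

# one-time downloads: 'punkt' backs sent_tokenize/word_tokenize,
# 'averaged_perceptron_tagger' backs pos_tag (names may differ in newer NLTK releases)
nltk.download('punkt')
nltk.download('averaged_perceptron_tagger')

# hypothetical sample paragraph for illustration
text = ("This thing seemed to overpower and astonish the little dark-brown dog, "
        "and wounded him to the heart. He sank down in despair at the child's feet.")

# one [(word, tag), ...] list per sentence
tagged_sentences = [nltk.pos_tag(nltk.word_tokenize(sentence))
                    for sentence in nltk.sent_tokenize(text)]

print(tagged_sentences[0][:5])  # first five (word, tag) pairs of the first sentence
```

Keeping the per-sentence results in a list like this makes it easier to do further processing per sentence rather than per paragraph.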