I created a DataFrame with sentences to be stemmed. I would like to use a SnowballStemmer to obtain higher accuracy with my classification algorithm. How can I achieve this?
import pandas as pd
from nltk.stem.snowball import SnowballStemmer
# Use English stemmer.
stemmer = SnowballStemmer("english")
# Sentences to be stemmed.
data = ["programers program with programing languages", "my code is working so there must be a bug in the interpreter"]
# Create the Pandas dataFrame.
df = pd.DataFrame(data, columns = ['unstemmed'])
# Split the sentences to lists of words.
df['unstemmed'] = df['unstemmed'].str.split()
# Make sure we see the full column.
pd.set_option('display.max_colwidth', None)  # -1 is deprecated.
# Print dataframe.
df
+----+---------------------------------------------------------------+
| | unstemmed |
|----+---------------------------------------------------------------|
|  0 | ['programers', 'program', 'with', 'programing', 'languages']  |
| 1 | ['my', 'code', 'is', 'working', 'so', 'there', 'must', |
| | 'be', 'a', 'bug', 'in', 'the', 'interpreter'] |
+----+---------------------------------------------------------------+
Apply the stemmer to each word of each list and store the result in a new "stemmed" column:
df['stemmed'] = df['unstemmed'].apply(lambda x: [stemmer.stem(y) for y in x]) # Stem every word.
df = df.drop(columns=['unstemmed']) # Get rid of the unstemmed column.
df # Print dataframe.
+----+--------------------------------------------------------------+
| | stemmed |
|----+--------------------------------------------------------------|
| 0 | ['program', 'program', 'with', 'program', 'languag'] |
| 1 | ['my', 'code', 'is', 'work', 'so', 'there', 'must', |
| | 'be', 'a', 'bug', 'in', 'the', 'interpret'] |
+----+--------------------------------------------------------------+
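Note that most feature extractors expect whole strings rather than lists of tokens, so for the classification step you may want to join the stemmed words back into sentences. A minimal sketch (the use of scikit-learn's `CountVectorizer` afterwards is an assumption about your pipeline):

```python
import pandas as pd
from nltk.stem.snowball import SnowballStemmer

stemmer = SnowballStemmer("english")

data = ["programers program with programing languages",
        "my code is working so there must be a bug in the interpreter"]
df = pd.DataFrame(data, columns=["unstemmed"])

# Stem each word, then join the tokens back into one string per row,
# so the column can be fed straight to a text vectorizer / classifier.
df["stemmed"] = df["unstemmed"].apply(
    lambda s: " ".join(stemmer.stem(w) for w in s.split())
)

print(df["stemmed"][0])  # program program with program languag
```

From here you could pass `df["stemmed"]` to e.g. `CountVectorizer().fit_transform(df["stemmed"])` and train your classifier on the result.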