I have a pandas DataFrame called 'data_stem' with a column named 'TWEET_SENT_1' that contains strings like the ones below (50 rows):
TWEET_SENT_1
the mack daddy of kiss cross
i liked that video body party
I want to apply the Porter stemmer to the 'TWEET_SENT_1' column (to every word in each row). I tried the code below and it gives an error. Could you please help me overcome this?
from nltk.stem import PorterStemmer, WordNetLemmatizer
porter_stemmer = PorterStemmer()
data_stem[' TWEET_SENT_1 '] = data_stem[' TWEET_SENT_1 '].apply(lambda x: [porter_stemmer.stem(y) for y in x])
Below is the error:
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-412-c16b1beddfb5> in <module>()
1 from nltk.stem import PorterStemmer, WordNetLemmatizer
2 porter_stemmer = PorterStemmer()
----> 3 data_stem[' TWEET_SENT_1 '] = data_stem[' TWEET_SENT_1 '].apply(lambda x: [porter_stemmer.stem(y) for y in x])
C:\Users\SampathR\Anaconda2\envs\dato-env\lib\site-packages\pandas\core\series.pyc in apply(self, func, convert_dtype, args, **kwds)
2058 values = lib.map_infer(values, lib.Timestamp)
2059
-> 2060 mapped = lib.map_infer(values, f, convert=convert_dtype)
2061 if len(mapped) and isinstance(mapped[0], Series):
2062 from pandas.core.frame import DataFrame
pandas\src\inference.pyx in pandas.lib.map_infer (pandas\lib.c:58435)()
<ipython-input-412-c16b1beddfb5> in <lambda>(x)
1 from nltk.stem import PorterStemmer, WordNetLemmatizer
2 porter_stemmer = PorterStemmer()
----> 3 data_stem[' TWEET_SENT_1 '] = data_stem[' TWEET_SENT_1 '].apply(lambda x: [porter_stemmer.stem(y) for y in x])
TypeError: 'NoneType' object is not iterable
Applying three different operations to a series with millions of rows is a very expensive operation. Instead, apply them all at once:
from nltk.stem import PorterStemmer
porter_stemmer = PorterStemmer()

def stem_sentences(sentence):
    # split into tokens, stem each one, then re-join into a single string
    tokens = sentence.split()
    stemmed_tokens = [porter_stemmer.stem(token) for token in tokens]
    return ' '.join(stemmed_tokens)

data_stem['TWEET_SENT_1'] = data_stem['TWEET_SENT_1'].apply(stem_sentences)
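For a quick sanity check, something like the snippet below (a throwaway DataFrame named demo, used purely for illustration) shows the effect on the two sample rows:

import pandas as pd
from nltk.stem import PorterStemmer

porter_stemmer = PorterStemmer()
demo = pd.DataFrame({'TWEET_SENT_1': ['the mack daddy of kiss cross',
                                      'i liked that video body party']})
# same idea as stem_sentences above: split, stem each token, re-join
demo['TWEET_SENT_1'] = demo['TWEET_SENT_1'].apply(
    lambda s: ' '.join(porter_stemmer.stem(t) for t in s.split()))
print(demo['TWEET_SENT_1'].tolist())
# Porter stemming changes e.g. 'liked' -> 'like', 'body' -> 'bodi', 'daddy' -> 'daddi'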
(Note: This is just a modified version of the accepted answer)
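Side note: the 'NoneType' object is not iterable in your traceback suggests the column contains missing values (None/NaN). If that is the case, a small guard inside the function (a sketch, assuming porter_stemmer is created as above and that you want to leave such rows unchanged) avoids the same crash with this approach:

def stem_sentences(sentence):
    # leave None/NaN (non-string) entries untouched; only stem real strings
    if not isinstance(sentence, str):
        return sentence
    tokens = sentence.split()
    return ' '.join(porter_stemmer.stem(token) for token in tokens)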