I am parsing a long string of text and counting the number of times each word occurs in Python. I have a function that works, but I am looking for advice on whether there are ways I can make it more efficient (in terms of speed), and whether there are Python library functions that could do this for me so I'm not reinventing the wheel.

Can you suggest a more efficient way to calculate the most common words that occur in a long string (usually over 1000 words in the string)?

Also, what's the best way to sort the dictionary into a list where the first element is the most common word, the second element is the second most common word, and so on?
test = """abc def-ghi jkl abc
abc"""
def calculate_word_frequency(s):
    # Post: return a list of words ordered from the most
    # frequent to the least frequent
    words = s.split()
    freq = {}
    for word in words:
        if freq.has_key(word):
            freq[word] += 1
        else:
            freq[word] = 1
    return sort(freq)
def sort(d):
    # Post: sort dictionary d into list of words ordered
    # from highest freq to lowest freq
    # eg: For {"the": 3, "a": 9, "abc": 2} should be
    # sorted into the following list ["a","the","abc"]
    # I have never used lambdas so I'm not sure this is correct
    return d.sort(cmp = lambda x,y: cmp(d[x],d[y]))

print calculate_word_frequency(test)
Use collections.Counter:
>>> from collections import Counter
>>> test = 'abc def abc def zzz zzz'
>>> Counter(test.split()).most_common()
[('abc', 2), ('zzz', 2), ('def', 2)]
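If you only want the words themselves in frequency order, as the question asks, you can drop the counts from most_common() with a list comprehension. A minimal sketch (the sample string here is made up to avoid ties, since tie order among equal counts can vary by Python version):

>>> from collections import Counter
>>> test = 'abc abc abc def def zzz'
>>> # most_common() yields (word, count) pairs, highest count first
>>> [word for word, count in Counter(test.split()).most_common()]
['abc', 'def', 'zzz']

And if you already have a plain dict of counts, sorted() with a key function gives the same ordering without Counter; this matches the expected output from the question's docstring:

>>> freq = {"the": 3, "a": 9, "abc": 2}
>>> # sort the keys by their counts, descending
>>> sorted(freq, key=freq.get, reverse=True)
['a', 'the', 'abc']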