I'm working on a project at the moment where I need to pick out the most common phrases in a huge body of text. For example, say we have three sentences like the following:
From the above example, I would want to extract "the dog jumped" as it is the most common phrase in the text. At first I thought, "oh, let's use a directed graph [with repeated nodes]":
directed graph diagram: http://img.skitch.com/20091218-81ii2femnfgfipd9jtdg32m74f.png
EDIT: Apologies, I made a mistake while making this diagram: "over", "into" and "up" should all link back to "the".
I was going to maintain, in each node object, a count of how many times its word occurred ("the" would be 6; "dog" and "jumped", 3; etc.), but, setting aside its many other problems, the main one appeared when we added a few more examples like the following (please ignore the bad grammar :-)):
We now have a problem: "dog" would start a new root node (at the same level as "the"), so we would not identify "dog jumped" as now being the most common phrase. So now I am thinking maybe I could use an undirected graph to map the relationships between all the words and eventually pick out the common phrases, but I'm not sure how that would work either, as an undirected graph loses the important ordering relationship between the words.
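To make the counting idea concrete, here is a rough sketch in R (the language used in the answer below), with flat count tables standing in for node objects; the three sentences are hypothetical stand-ins for the missing example:

```r
# Hypothetical stand-ins for the elided example sentences.
sentences <- c("the dog jumped over the wall",
               "the dog jumped into the pond",
               "the dog jumped up the steps")

word_counts <- new.env()  # word          -> occurrence count (per node)
edge_counts <- new.env()  # "word1 word2" -> directed-edge count

bump <- function(key, env) {
  assign(key, get0(key, envir = env, ifnotfound = 0) + 1, envir = env)
}

for (s in sentences) {
  w <- strsplit(tolower(s), "\\s+")[[1]]
  for (i in seq_along(w)) {
    bump(w[i], word_counts)                               # "the" ends up at 6
    if (i < length(w)) bump(paste(w[i], w[i + 1]), edge_counts)
  }
}

word_counts$the              # 6
edge_counts[["dog jumped"]]  # 3, no matter which word starts the sentence
```

Note that counting the edges ("dog jumped" = 3) already sidesteps the root-node problem, since an edge is counted wherever it occurs rather than only on paths from a root.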
So, does anyone have any general ideas on how to identify common phrases in a large body of text, and which data structure I should use?
Thanks, Ben
Check out this related question: What techniques/tools are there for discovering common phrases in chunks of text? This is also related to the longest common substring problem.
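The simplest such technique is sliding-window n-gram counting: tabulate every n-word window and sort. A minimal base-R sketch (the sentences are again hypothetical stand-ins, with the third one deliberately not starting with "the"):

```r
# All n-word windows of a sentence.
phrases <- function(sentence, n) {
  w <- strsplit(tolower(sentence), "\\s+")[[1]]
  if (length(w) < n) return(character(0))
  vapply(seq_len(length(w) - n + 1),
         function(i) paste(w[i:(i + n - 1)], collapse = " "),
         character(1))
}

sentences <- c("the dog jumped over the wall",
               "the dog jumped into the pond",
               "dog jumped up the steps")

counts <- sort(table(unlist(lapply(sentences, phrases, n = 2))),
               decreasing = TRUE)
head(counts, 1)  # "dog jumped": 3
```

Because every window position is counted, "dog jumped" is found even when it does not start a sentence, which is exactly where the root-anchored graph above broke down.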
I've posted this before, but I use R for all of my data-mining tasks, and it's well suited to this kind of analysis. In particular, look at the tm package. Here are some relevant links:
More generally, there are a large number of text-mining packages listed in the Natural Language Processing task view on CRAN.
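For this particular task, a hedged sketch of what it could look like with tm, following the bigram-tokenizer pattern from the tm FAQ (the input vector is a hypothetical stand-in):

```r
library(tm)
library(NLP)

docs <- c("the dog jumped over the wall",
          "the dog jumped into the pond")  # hypothetical input
corpus <- VCorpus(VectorSource(docs))

# Tokenize each document into word bigrams instead of single words.
BigramTokenizer <- function(x)
  unlist(lapply(ngrams(words(x), 2), paste, collapse = " "),
         use.names = FALSE)

tdm <- TermDocumentMatrix(corpus, control = list(tokenize = BigramTokenizer))
findFreqTerms(tdm, lowfreq = 2)  # phrases that occur at least twice
```

Swap the 2 in the tokenizer for a 3 to count trigrams like "the dog jumped" instead.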