Fuzzy search algorithm (approximate string matching algorithm)

Yahya Uddin · Sep 1, 2015 · Viewed 39.1k times

I wish to create a fuzzy search algorithm. However, after hours of research I am really struggling.

I want to create an algorithm that performs a fuzzy search on a list of names of schools.

This is what I have looked at so far:

Most of my research keeps pointing to "string metrics" on Google and Stack Overflow, such as:

  • Levenshtein distance
  • Damerau-Levenshtein distance
  • Needleman–Wunsch algorithm

However, these just give a score of how similar two strings are. The only way I can think of implementing them as a search algorithm is to perform a linear search, executing the string metric algorithm for each string and returning the strings with scores above a certain threshold. (Originally I had my strings stored in a trie, but this obviously won't help me here!)
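To make that concrete, here's a minimal sketch of the linear-scan idea in Python (a textbook dynamic-programming Levenshtein; the lowercasing and the threshold of 2 are arbitrary illustrative choices):

    def levenshtein(a, b):
        # Textbook dynamic-programming edit distance
        # (insertions, deletions, substitutions).
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            cur = [i]
            for j, cb in enumerate(b, 1):
                cur.append(min(prev[j] + 1,                 # deletion
                               cur[j - 1] + 1,              # insertion
                               prev[j - 1] + (ca != cb)))   # substitution
            prev = cur
        return prev[-1]

    def naive_fuzzy_search(query, names, threshold=2):
        # Linear scan: one full metric evaluation per name in the list.
        return [name for name in names
                if levenshtein(query.lower(), name.lower()) <= threshold]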

Although this linear scan is not such a bad idea for small lists, it would be problematic for lists with, let's say, 100,000 names, especially if the user performs many queries.

Another algorithm I looked at is the spell-checker method, where you generate every potential misspelling of the query within a given edit distance and look each one up. However, this is also highly inefficient, as it requires generating more than 75,000 candidate strings for a word of length 7 and an error count of just 2.
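For reference, the candidate generation behind the spell-checker method looks roughly like this (a standard Norvig-style sketch; the exact counts depend on word length and alphabet size):

    import string

    def edits1(word):
        # All strings one edit away: deletes, transposes, replaces, inserts.
        letters = string.ascii_lowercase
        splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
        deletes = [L + R[1:] for L, R in splits if R]
        transposes = [L + R[1] + R[0] + R[2:] for L, R in splits if len(R) > 1]
        replaces = [L + c + R[1:] for L, R in splits if R for c in letters]
        inserts = [L + c + R for L, R in splits for c in letters]
        return set(deletes + transposes + replaces + inserts)

    def edits2(word):
        # Two edits away: already tens of thousands of candidates
        # for a 7-letter word, which is the blow-up described above.
        return {e2 for e1 in edits1(word) for e2 in edits1(e1)}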

What I need

Can someone please suggest a good, efficient fuzzy search algorithm, with:

  • Name of the algorithm
  • How it works or a link to how it works
  • Pros and cons, and when it's best used (optional)

I understand that all algorithms will have their pros and cons and there is no best algorithm.

Answer

Jim Mischel · Sep 1, 2015

Considering that you're trying to do a fuzzy search on a list of school names, I don't think you want to go for traditional string similarity like Levenshtein distance. My assumption is that you're taking a user's input (either keyboard input or spoken over the phone), and you want to quickly find the matching school.

Distance metrics tell you how similar two strings are based on substitutions, deletions, and insertions. But those algorithms don't really tell you anything about how similar the strings are as words in a human language.

Consider, for example, the words "smith," "smythe," and "smote". I can go from "smythe" to "smith" in two steps:

smythe -> smithe -> smith

And from "smote" to "smith" in two steps:

smote -> smite -> smith

So the two have the same distance as strings, but as words, they're significantly different. If somebody told you (spoken language) that he was looking for "Symthe College," you'd almost certainly say, "Oh, I think you mean Smith." But if somebody said "Smote College," you wouldn't have any idea what he was talking about.
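You can check those distances with any edit-distance implementation, for example the Levenshtein sketch in the question above:

    # Equal edit distances, but very different plausibility as words:
    assert levenshtein("smythe", "smith") == 2   # smythe -> smithe -> smith
    assert levenshtein("smote", "smith") == 2    # smote -> smite -> smith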

What you need is a phonetic algorithm like Soundex or Metaphone. Basically, those algorithms break a word down into phonemes and create a representation of how the word is pronounced in spoken language. You can then compare the result against a known list of words to find a match.
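To make that concrete, here is a minimal sketch of simplified American Soundex (real implementations handle more edge cases, and Metaphone uses considerably richer rules):

    def soundex(word):
        # Simplified American Soundex: first letter plus three digits.
        # Assumes a non-empty ASCII word.
        codes = {}
        for letters, digit in [("BFPV", "1"), ("CGJKQSXZ", "2"), ("DT", "3"),
                               ("L", "4"), ("MN", "5"), ("R", "6")]:
            for ch in letters:
                codes[ch] = digit
        word = word.upper()
        result = [word[0]]
        prev = codes.get(word[0], "")
        for ch in word[1:]:
            code = codes.get(ch, "")
            if code and code != prev:
                result.append(code)
            if ch not in "HW":   # H and W don't separate duplicate codes
                prev = code
        return "".join(result)[:4].ljust(4, "0")

With this sketch, soundex("Smith") and soundex("Smythe") both come out as S530, so they collide exactly as you'd want. (Being vowel-insensitive, Soundex also maps "Smote" to S530; Metaphone's richer rules would keep it separate, which is one reason to prefer it.)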

Such a system would be much faster than using a distance metric. Consider that with a distance metric, you need to compare the user's input with every word in your list to obtain the distance. That is computationally expensive and the results, as I demonstrated with "smith" and "smote", can be laughably bad.

Using a phonetic algorithm, you create the phoneme representation of each of your known words and place it in a dictionary (a hash map or possibly a trie). That's a one-time startup cost. Then, whenever the user inputs a search term, you create the phoneme representation of his input and look it up in your dictionary. That is a lot faster and produces much better results.
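Here is a sketch of that structure, reusing the soundex() function above (indexing each word of a name separately is my own illustrative choice, so that "Smith" in a query can match "Smith College"):

    from collections import defaultdict

    def build_phonetic_index(school_names):
        # One-time startup cost: phonetic code -> school names containing it.
        index = defaultdict(set)
        for name in school_names:
            for word in name.split():
                index[soundex(word)].add(name)
        return index

    def phonetic_lookup(index, query):
        # Per query: a few hash lookups, no scan over the full name list.
        matches = set()
        for word in query.split():
            matches |= index.get(soundex(word), set())
        return matches

For example, soundex("Symthe") is also S530, so a query for "Symthe" lands in the same bucket as the word "Smith" and pulls back any "Smith ..." school without scanning the list.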

Consider also that when people misspell proper names, they almost always get the first letter right, and more often than not the misspelling, when pronounced, sounds like the actual word they were trying to spell. If that's the case, then phonetic algorithms are definitely the way to go.