Cosine Similarity of Vectors of different lengths?

erikcw · Jun 25, 2010 · Viewed 19.7k times

I'm trying to use TF-IDF to sort documents into categories. I've calculated the tf_idf for some documents, but now when I try to calculate the Cosine Similarity between two of these documents I get a traceback saying:

#len(u)==201, len(v)==246

cosine_distance(u, v)
ValueError: objects are not aligned

#this works though:
cosine_distance(u[:200], v[:200])
>> 0.52230249969265641

Is slicing the vector so that len(u)==len(v) the right approach? I would think that cosine similarity would work with vectors of different lengths.

I'm using this function:

import math
import numpy

def cosine_distance(u, v):
    """
    Returns the cosine of the angle between vectors v and u. This is
    equal to u.v / |u||v|.
    """
    return numpy.dot(u, v) / (math.sqrt(numpy.dot(u, u)) * math.sqrt(numpy.dot(v, v)))

Also -- is the order of the tf_idf values in the vectors important? Should they be sorted -- or is it of no importance for this calculation?

Answer

Ken Bloom · Jun 30, 2010

You need to multiply the entries for corresponding words in the two vectors, so there must be a global order for the words. This means that, in theory, your vectors should be the same length.

In practice, if one document was seen before the other, words from the second document may have been added to the global order after the first document's vector was built. The vectors still follow the same order, but the first one may be shorter, because it has no entries for the words that appeared only later.

Document 1: The quick brown fox jumped over the lazy dog.

Global order:     The quick brown fox jumped over the lazy dog
Vector for Doc 1:  1    1     1    1     1     1    1   1   1

Document 2: The runner was quick.

Global order:     The quick brown fox jumped over the lazy dog runner was
Vector for Doc 1:  1    1     1    1     1     1    1   1   1
Vector for Doc 2:  1    1     0    0     0     0    0   0   0    1     1
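A minimal sketch of how such vectors can be built against a shared global order (not the asker's code; it lowercases the text, so the two occurrences of "the" in Document 1 collapse into one entry with count 2, slightly unlike the illustrative all-ones table above):

```python
doc1 = "The quick brown fox jumped over the lazy dog".lower().split()
doc2 = "The runner was quick".lower().split()

# Global order: words in order of first appearance across both documents.
global_order = []
for word in doc1 + doc2:
    if word not in global_order:
        global_order.append(word)

# Count vectors over the full global order (Doc 1 gets explicit zeroes
# for "runner" and "was").
vec1 = [doc1.count(word) for word in global_order]
vec2 = [doc2.count(word) for word in global_order]
```

If Document 1's vector had been built before Document 2 was seen, it would simply stop after the first eight words of `global_order`; the trailing entries are implicitly zero.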

In this case, in theory you need to pad the Document 1 vector with zeroes at the end. In practice, when computing the dot product, you only need to multiply elements up to the end of the shorter vector: omitting the extra elements of vector 2 gives exactly the same result as multiplying them by zero, and visiting them is just slower.

Then you can compute the magnitude of each vector separately, and for that the vectors don't need to be of the same length.
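Put together, a sketch of the whole approach might look like this (an assumption-laden variant of the asker's `cosine_distance`, valid only when the shorter vector is a prefix of the longer one under the shared global order):

```python
import math
import numpy

def cosine_similarity(u, v):
    """Cosine similarity for vectors that may differ in length, where the
    shorter vector's missing trailing entries are implicitly zero."""
    u = numpy.asarray(u, dtype=float)
    v = numpy.asarray(v, dtype=float)
    # Dot product only over the overlapping prefix: the extra entries of
    # the longer vector would be multiplied by implicit zeroes anyway.
    n = min(len(u), len(v))
    dot = numpy.dot(u[:n], v[:n])
    # Magnitudes are computed over each full vector separately, so the
    # lengths don't need to match here.
    return dot / (math.sqrt(numpy.dot(u, u)) * math.sqrt(numpy.dot(v, v)))
```

This avoids the `ValueError` without silently discarding information, unlike slicing both vectors to a common length, which drops the tail of the longer vector from its own magnitude.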