Understanding Bloodhound.tokenizers.obj.whitespace

Joe.wang · Oct 28, 2015

All, I was trying to apply Twitter typeahead and Bloodhound to my project based on a working sample, but I can't understand the code below.

datumTokenizer: Bloodhound.tokenizers.obj.whitespace('songs'),
queryTokenizer: Bloodhound.tokenizers.whitespace,

The original code looks like below.

var songlist = new Bloodhound({
    datumTokenizer: Bloodhound.tokenizers.obj.whitespace('songs'),
    queryTokenizer: Bloodhound.tokenizers.whitespace,
    limit: 10,
    remote: '/api/demo/GetSongs?searchTerm=%QUERY'
});
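For context on the two lines in question: `Bloodhound.tokenizers.obj.whitespace('songs')` does not tokenize anything by itself; it returns a function that reads the `songs` property of each datum and splits it on whitespace, while `Bloodhound.tokenizers.whitespace` splits the raw query string the same way. A minimal sketch of that behavior in plain JavaScript (an illustration of the idea, not the library's source):

```javascript
// Stand-in for Bloodhound.tokenizers.whitespace: split a string on
// runs of whitespace, dropping empty tokens.
function whitespace(str) {
  return String(str).split(/\s+/).filter(Boolean);
}

// Stand-in for Bloodhound.tokenizers.obj.whitespace: a factory that,
// given a property name, returns a datum tokenizer for that property.
function objWhitespace(key) {
  return function (datum) {
    return whitespace(datum[key]);
  };
}

var datumTokenizer = objWhitespace('songs');
datumTokenizer({ songs: 'Hey Jude' });  // → ['Hey', 'Jude']
```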

The official documentation just says:

datumTokenizer – A function with the signature (datum) that transforms a datum into an array of string tokens. Required.

queryTokenizer – A function with the signature (query) that transforms a query into an array of string tokens. Required.

What does this mean? Could someone explain it in more detail so that I have a better understanding?

Answer

davew · Apr 20, 2017

I found some helpful information here:

https://github.com/twitter/typeahead.js/blob/master/doc/migration/0.10.0.md#tokenization-methods-must-be-provided

The most common tokenization methods split a given string on whitespace or non-word characters. Bloodhound provides implementations for those methods out of the box:

  // returns ['one', 'two', 'twenty-five']
  Bloodhound.tokenizers.whitespace('  one two  twenty-five');

  // returns ['one', 'two', 'twenty', 'five']
  Bloodhound.tokenizers.nonword('  one two  twenty-five');
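Under the hood these are simple regular-expression splits. A plain-JavaScript approximation (an illustration, not the library's source):

```javascript
// Approximation of Bloodhound.tokenizers.whitespace:
// split on runs of whitespace.
function whitespace(str) {
  return String(str).split(/\s+/).filter(Boolean);
}

// Approximation of Bloodhound.tokenizers.nonword:
// split on runs of non-word characters, so 'twenty-five'
// becomes two tokens.
function nonword(str) {
  return String(str).split(/\W+/).filter(Boolean);
}

whitespace('  one two  twenty-five');  // → ['one', 'two', 'twenty-five']
nonword('  one two  twenty-five');     // → ['one', 'two', 'twenty', 'five']
```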

For query tokenization, you'll probably want to use one of the methods above. For datum tokenization, you may want to do something a bit more advanced.

For datums, sometimes you want tokens to be derived from more than one property. For example, if you were building a search engine for GitHub repositories, it'd probably be wise to have tokens derived from the repo's name, owner, and primary language:

  var repos = [
    { name: 'example', owner: 'John Doe', language: 'JavaScript' },
    { name: 'another example', owner: 'Joe Doe', language: 'Scala' }
  ];

  function customTokenizer(datum) {
    var nameTokens = Bloodhound.tokenizers.whitespace(datum.name);
    var ownerTokens = Bloodhound.tokenizers.whitespace(datum.owner);
    var languageTokens = Bloodhound.tokenizers.whitespace(datum.language);
    
    return nameTokens.concat(ownerTokens).concat(languageTokens);
  }
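To see what this produces, here is a runnable version with a stand-in for `Bloodhound.tokenizers.whitespace`, so the example doesn't depend on the library:

```javascript
// Stand-in whitespace tokenizer so the example runs without Bloodhound.
function whitespace(str) {
  return String(str).split(/\s+/).filter(Boolean);
}

// Combine tokens from several datum properties into one token list.
function customTokenizer(datum) {
  var nameTokens = whitespace(datum.name);
  var ownerTokens = whitespace(datum.owner);
  var languageTokens = whitespace(datum.language);
  return nameTokens.concat(ownerTokens).concat(languageTokens);
}

customTokenizer({ name: 'another example', owner: 'Joe Doe', language: 'Scala' });
// → ['another', 'example', 'Joe', 'Doe', 'Scala']
```

With the real library you would pass `customTokenizer` as the `datumTokenizer` option when constructing the `Bloodhound` instance, in place of `Bloodhound.tokenizers.obj.whitespace('songs')` from the question.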

There may also be a scenario where you want datum tokenization to be performed on the backend. The best way to do that is to add a property to your datums that contains those tokens, and then provide a tokenizer that simply returns the already-existing tokens:

  var sports = [
    { value: 'football', tokens: ['football', 'pigskin'] },
    { value: 'basketball', tokens: ['basketball', 'bball'] }
  ];

  function customTokenizer(datum) { return datum.tokens; }

There are plenty of other ways you could go about tokenizing datums; it really just depends on what you are trying to accomplish.

It seems unfortunate that this information wasn't easier to find from the main documentation.