Extracting nationalities and countries from text

user6453258 · Jun 17, 2016 · Viewed 8.7k times

I want to extract all country and nationality mentions from text using nltk. I used POS tagging followed by NE chunking to pull out all GPE-labelled chunks, but the results were not satisfying.

 abstract="Thyroid-associated orbitopathy (TO) is an autoimmune-mediated orbital inflammation that can lead to disfigurement and blindness. Multiple genetic loci have been associated with Graves' disease, but the genetic basis for TO is largely unknown. This study aimed to identify loci associated with TO in individuals with Graves' disease, using a genome-wide association scan (GWAS) for the first time to our knowledge in TO.Genome-wide association scan was performed on pooled DNA from an Australian Caucasian discovery cohort of 265 participants with Graves' disease and TO (cases) and 147 patients with Graves' disease without TO (controls). "

  import nltk

  # tokenize, POS-tag, then run NLTK's named-entity chunker
  sent = nltk.tokenize.wordpunct_tokenize(abstract)
  pos_tags = nltk.pos_tag(sent)
  nes = nltk.ne_chunk(pos_tags)

  # keep the tokens of every chunk labelled GPE (geo-political entity)
  places = []
  for ne in nes:
      if type(ne) is nltk.tree.Tree and ne.label() == 'GPE':
          places.append(u' '.join([i[0] for i in ne.leaves()]))
  if len(places) == 0:
      places.append("N/A")

The results obtained are:

['Thyroid', 'Australian', 'Caucasian', 'Graves']

Some of these are nationalities, but others are just nouns.

So what am I doing wrong, or is there another way to extract such information?

Answer

user6453258 · Jun 22, 2016

So after the fruitful comments, I dug deeper into different NER tools to find the one best at recognizing nationality and country mentions, and found that spaCy has a NORP entity label that extracts nationalities reliably. https://spacy.io/docs/usage/entity-recognition