#100DaysOfNLP Day 4 (Paper Summary): Zero-Shot Learning Classifiers from Declarative Language
On Day 4 of my 100 Days of NLP journey, I decided to read a research paper written by my mentor for the summer, Dr. Shashank Srivastava. As you can tell from the title of this blog post, the paper is Zero-Shot Learning Classifiers from Declarative Language, by Shashank Srivastava, Igor Labutov, and Tom Mitchell. The paper focuses on training a computer to classify things by giving it natural language statements. In particular, it looks at how quantifiers like “all” and “few” can tell the machine how strongly a described property is tied to a category. The authors use language to train classifiers in three domains: emails, birds, and shapes.
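To give a flavor of the core idea, here is a tiny sketch of how a quantified statement might be turned into a number a classifier can use. This is my own illustration, not the paper's exact mapping; the probability values and the helper function are assumptions.

```python
# Each quantifier is read as a rough probability estimate, so a statement like
# "most spam emails mention money" becomes a soft numeric constraint.
# The specific values below are my own guesses, not the paper's calibration.
QUANTIFIER_PROBS = {"all": 0.95, "most": 0.75, "some": 0.40, "few": 0.10, "none": 0.05}

def constraint_from_statement(quantifier, feature, label):
    """Map 'QUANTIFIER of LABEL items have FEATURE' to a target probability."""
    return {"feature": feature, "label": label, "target_prob": QUANTIFIER_PROBS[quantifier]}

print(constraint_from_statement("most", "mentions_money", "spam"))
# {'feature': 'mentions_money', 'label': 'spam', 'target_prob': 0.75}
```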
The primary challenge in building a model that can learn from language is mapping natural language to quantitative constraints that the model can train on. The authors handle this by using semantic parsing to turn sentences into quantitative constraints, and posterior regularization to estimate the classifier's parameters from those constraints. They also find that a human teacher's average performance is significantly worse than the model's, which they suggest could be a result of fatigue. Regardless, the topics discussed in this paper are incredibly interesting, and I would encourage you to read the complete paper. If you're interested in keeping up with my #100DaysOfNLP journey, consider subscribing to my newsletter.
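To make the training idea a bit more concrete, here is a toy, self-contained sketch of a constraint-based training loop in the spirit of posterior regularization. This is my own simplified illustration, not the authors' actual algorithm: the synthetic data, the quantifier-derived targets, and the squared-error penalty are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy unlabeled data: 200 examples, 3 binary features (no labels at all).
X = rng.integers(0, 2, size=(200, 3)).astype(float)

# Quantifier-derived constraints (assumed values, as in the sketch above):
# "most spam emails mention money"   -> P(spam | feature 0 present) ~ 0.75
# "few spam emails have attachments" -> P(spam | feature 2 present) ~ 0.10
constraints = [(0, 0.75), (2, 0.10)]

w = np.zeros(3)
b = 0.0
lr = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(500):
    p = sigmoid(X @ w + b)               # model's posterior P(label=1 | x)
    grad_w = np.zeros_like(w)
    grad_b = 0.0
    for feat, target in constraints:
        mask = X[:, feat] == 1
        gap = p[mask].mean() - target     # expected posterior vs. quantifier target
        # gradient of 0.5 * gap^2 w.r.t. the parameters (chain rule through sigmoid)
        dp = p[mask] * (1 - p[mask]) / mask.sum()
        grad_w += gap * (dp @ X[mask])
        grad_b += gap * dp.sum()
    w -= lr * grad_w
    b -= lr * grad_b

print("learned weights:", np.round(w, 2))
```

The point of the sketch is only that no labeled examples are needed: the classifier's parameters are pushed so that its average predictions on unlabeled data match the probabilities implied by the quantifiers.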