Shallow parsing (also chunking or light parsing) is an analysis of a sentence which first identifies the constituent parts of the sentence (nouns, verbs, adjectives, etc.) and then links them to higher-order units that have discrete grammatical meanings (noun groups or phrases, verb groups, etc.). While the most elementary chunking algorithms simply link constituent parts on the basis of elementary search patterns (e.g., as specified by regular expressions), approaches that use machine learning techniques (classifiers, topic modeling, etc.) can take contextual information into account and thus compose chunks in such a way that they better reflect the semantic relations between the basic constituents.[1] That is, these more advanced methods get around the problem that combinations of elementary constituents can have different higher-level meanings depending on the context of the sentence.
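
The pattern-based approach can be illustrated with a minimal sketch. The library choice here (NLTK's RegexpParser with its default English part-of-speech tagger) is an assumption, and the single grammar rule is a deliberately simple example rather than a complete chunking grammar:

    import nltk

    # Assumes the NLTK data packages for tokenization and tagging are installed
    # (e.g., via nltk.download('punkt') and nltk.download('averaged_perceptron_tagger')).

    # Tokenize and part-of-speech tag the sentence (the "constituent parts").
    sentence = "The quick brown fox jumps over the lazy dog"
    tagged = nltk.pos_tag(nltk.word_tokenize(sentence))

    # An elementary search pattern over the tag sequence: a noun phrase (NP) is
    # an optional determiner, any number of adjectives, and a noun.
    grammar = "NP: {<DT>?<JJ>*<NN>}"
    chunker = nltk.RegexpParser(grammar)

    # Group the tagged tokens into higher-order units (NP chunks) and print the tree.
    tree = chunker.parse(tagged)
    tree.pprint()

With typical tagger output, the pattern groups "The quick brown fox" and "the lazy dog" into NP chunks and leaves the remaining tokens ungrouped; a machine-learned chunker would instead predict chunk boundaries from features of the surrounding context.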

Shallow parsing is widely used in natural language processing and is similar in concept to lexical analysis for computer languages. Under the name "shallow structure hypothesis", it is also used as an explanation for why second language learners often fail to parse complex sentences correctly.[2]

References

Citations

  1. Jurafsky, Daniel; Martin, James H. (2000). Speech and Language Processing. Singapore: Pearson Education Inc. pp. 577–586.
  2. Clahsen, Harald; Felser, Claudia (2006). "Grammatical Processing in Language Learners". Applied Psycholinguistics. 27: 3–42. doi:10.1017/S0142716406060024. S2CID 15990215.
