Abstract

Word sense disambiguation is an important problem in learning by reading. This paper introduces analogical word sense disambiguation, which uses human-like analogical processing over structured, relational representations to disambiguate word senses. Cases are automatically constructed from representations produced by natural language analysis of sentences and include both conceptual and linguistic information. Learning occurs by processing cases with the SAGE model of analogical generalization, which constructs probabilistic relational representations from cases that are sufficiently similar, while also storing outliers. Disambiguation is performed via analogical retrieval over these generalizations and stored examples, which provides evidence for the senses of new word occurrences based on prior experience. We present experiments demonstrating that analogical word sense disambiguation, using representations suitable for learning by reading, yields accuracies comparable to traditional algorithms operating over feature-based representations.