SSiW


[Please note that only examples are available here. The full resources are freely available for education, research and other non-commercial purposes; please contact me to obtain access to the complete sources.
Note also that similar but more recent empirical information can be found in the subcategorisation database.]

Head-Lexicalised Probabilistic Context-Free Grammars (HeadLex-PCFGs) are a lexicalised extension of PCFGs that incorporates lexical heads into the grammar rules, cf. Charniak (1997) and Carroll and Rooth (1998). At the core of a HeadLex-PCFG is a context-free grammar whose rules mark the head among the children. The parameters of the probabilistic grammar versions - the unlexicalised PCFG, a lexicalisation bootstrapping step, and the lexicalised HeadLex-PCFG - are then estimated in an unsupervised training procedure, using the Expectation-Maximization algorithm (Baum, 1972). The algorithm iteratively improves the model parameters by alternating between estimating expected rule frequencies under the current model and re-estimating the rule probabilities from these frequencies.
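
To make the alternation concrete, the following Python sketch runs the two EM steps on a toy set of hand-enumerated analyses. It only illustrates the mechanics; the grammar, sentences and rule notation are invented, and a real trainer such as LoPar computes the expectations over packed parse forests with the Inside-Outside algorithm rather than by enumerating analyses.

    from collections import defaultdict

    # Schematic illustration of the EM alternation for estimating PCFG rule
    # probabilities. Each analysis of a sentence is represented simply as the
    # list of rules it uses (toy example; not the real training setup).
    corpus = [
        # Sentence 1: ambiguous PP attachment (verb vs. noun attachment).
        [
            [("VP", ("V", "NP", "PP")), ("PP", ("P", "NP"))],
            [("VP", ("V", "NP")), ("NP", ("NP", "PP")), ("PP", ("P", "NP"))],
        ],
        # Sentence 2: unambiguous plain transitive clause.
        [
            [("VP", ("V", "NP"))],
        ],
    ]

    # Start from uniform rule probabilities per left-hand-side category.
    rules = {rule for sentence in corpus for analysis in sentence for rule in analysis}
    prob = {}
    for lhs in {r[0] for r in rules}:
        same_lhs = [r for r in rules if r[0] == lhs]
        for r in same_lhs:
            prob[r] = 1.0 / len(same_lhs)

    for iteration in range(10):
        expected = defaultdict(float)
        for sentence in corpus:
            # E-step: weight each analysis by its probability under the current
            # model, normalise, and accumulate expected rule frequencies.
            weights = []
            for analysis in sentence:
                w = 1.0
                for rule in analysis:
                    w *= prob[rule]
                weights.append(w)
            total = sum(weights)
            for analysis, w in zip(sentence, weights):
                for rule in analysis:
                    expected[rule] += w / total

        # M-step: re-estimate rule probabilities by normalising the expected
        # frequencies within each left-hand-side category.
        lhs_totals = defaultdict(float)
        for rule, freq in expected.items():
            lhs_totals[rule[0]] += freq
        for rule in prob:
            prob[rule] = expected[rule] / lhs_totals[rule[0]]

    for (lhs, rhs), p in sorted(prob.items()):
        print(f"{lhs} -> {' '.join(rhs)}: {p:.3f}")

In this toy run, the unambiguous transitive sentence gradually shifts the probability mass towards the noun-attachment analysis of the ambiguous sentence, which is exactly the kind of reallocation the E/M alternation performs on real corpora.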

We used the statistical parser LoPar (Schmid, 2000) to perform the parameter training. The trained grammar model provides lexicalised rules and head-head co-occurrences, an empirical resource for inducing quantitative lexical properties at the syntax-semantics interface. This lexical information can be used for lexical acquisition and for modelling linguistic phenomena. This page provides lexical information for German and for English.
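
As a rough illustration of the kind of quantitative lexical information meant here, the sketch below turns hypothetical head-head co-occurrence counts into relative frequencies for subcategorisation frames and their nominal slot fillers. The verbs, frame labels and counts are invented and do not follow the actual resource format.

    from collections import defaultdict

    # Hypothetical illustration: lexicalised head-head co-occurrence frequencies
    # (verb, subcategorisation frame, nominal head of an argument slot) turned
    # into relative frequencies. All entries are invented for the example.
    cooc = {
        ("backen", "na", "Kuchen"): 12.0,   # 'bake' + accusative object 'cake'
        ("backen", "na", "Brot"): 9.0,      # 'bake' + accusative object 'bread'
        ("backen", "n", None): 4.0,         # intransitive use
        ("essen", "na", "Kuchen"): 7.0,     # 'eat' + accusative object 'cake'
    }

    frame_freq = defaultdict(float)   # f(verb, frame)
    verb_freq = defaultdict(float)    # f(verb)
    for (verb, frame, head), freq in cooc.items():
        frame_freq[(verb, frame)] += freq
        verb_freq[verb] += freq

    # p(frame | verb): the verb's preferences for its subcategorisation frames.
    p_frame = {(v, f): freq / verb_freq[v] for (v, f), freq in frame_freq.items()}

    # p(head | verb, frame): selectional preferences of the argument slot.
    p_head = {
        (v, f, h): freq / frame_freq[(v, f)]
        for (v, f, h), freq in cooc.items() if h is not None
    }

    print(p_frame)
    print(p_head)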

References:

Glenn Carroll, Mats Rooth (1998)
Valence Induction with a Head-Lexicalized PCFG
In: Proceedings of the 3rd Conference on Empirical Methods in Natural Language Processing. Granada, Spain.

Eugene Charniak (1997)
Statistical Parsing with a Context-Free Grammar and Word Statistics
In: Proceedings of the 14th National Conference on Artificial Intelligence. Menlo Park, CA.

Helmut Schmid (2000)
LoPar: Design and Implementation
Arbeitspapiere des Sonderforschungsbereichs 340 Linguistic Theory and the Foundations of Computational Linguistics, No. 149. Institut für Maschinelle Sprachverarbeitung, Universität Stuttgart.

Sabine Schulte im Walde, Helmut Schmid, Mats Rooth, Stefan Riezler, Detlef Prescher (2001)
Statistical Grammar Models and Lexicon Acquisition
In: Christian Rohrer, Antje Rossdeutscher and Hans Kamp (eds)
Linguistic Form and its Computation. CSLI Publications, Stanford, CA.


Lexical Acquisition from the Huge German Corpus (HGC)

The German HeadLex-PCFG was trained on 35 million words of the Huge German Corpus (HGC), a collection of newspaper corpora from the 1990s. We provide the unlexicalised and lexicalised grammar files for parsing, empirical word frequencies, and lexical data on various linguistic phenomena.

Grammar Sources (for Parsing):

Statistical Corpus Frequencies:

Lexical Data:

  ... for verbs:
  ... for nouns:
  ... for adjectives:
  ... for prepositions:
Reference:

Sabine Schulte im Walde (2003)
Experiments on the Automatic Induction of German Semantic Verb Classes
PhD Thesis. Institut für Maschinelle Sprachverarbeitung, Universität Stuttgart. Published as AIMS Report 9(2).
[Chapter 3]


Lexical Acquisition from the British National Corpus (BNC)

The English HeadLex-PCFG was trained on approximately half of the BNC (50 million words). The resulting grammar model was then applied to obtain Viterbi parses for the whole corpus (117 million words). From the Viterbi parses we extracted lexical information on verbs, their subcategorisation frames and their arguments.
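
As a rough illustration of this extraction step, the sketch below counts verb + subcategorisation frame pairs in a bracketed parse tree. It assumes NLTK is available for reading the tree, uses an invented Penn-style example, and the category labels and frame notation are assumptions rather than the actual parse format.

    from collections import Counter
    from nltk.tree import Tree   # assumes NLTK is available

    # Schematic sketch of counting verb + subcategorisation frame pairs in a
    # bracketed parse tree. The tree is an invented Penn-style example; the
    # actual Viterbi parses use the grammar's own category inventory.
    parse = Tree.fromstring(
        "(S (NP (NNP Kim)) (VP (VBD gave) (NP (DT the) (NN dog)) (NP (DT a) (NN bone))))"
    )

    frame_counts = Counter()
    for vp in parse.subtrees(lambda t: t.label() == "VP"):
        verbs = [c for c in vp if isinstance(c, Tree) and c.label().startswith("VB")]
        if not verbs:
            continue
        verb = verbs[0].leaves()[0].lower()
        # Record the frame as the sequence of non-verbal sister categories.
        frame = "-".join(
            c.label() for c in vp
            if isinstance(c, Tree) and not c.label().startswith("VB")
        )
        frame_counts[(verb, frame or "intrans")] += 1

    print(frame_counts)   # Counter({('gave', 'NP-NP'): 1})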

Parses:

Lexical Data:

Reference:

Sabine Schulte im Walde (1998)
Automatic Semantic Classification of Verbs According to their Alternation Behaviour
Diploma Thesis (Diplomarbeit). Institut für Maschinelle Sprachverarbeitung, Universität Stuttgart.
[mainly: Section 2.1 and Appendix A]