Extracting finite structure from infinite language
dc.contributor.author | Hopgood, Adrian A. | en |
dc.contributor.author | McQueen, T. A. | en |
dc.contributor.author | Allen, T. J. | en |
dc.contributor.author | Tepper, J. A. | en |
dc.date.accessioned | 2008-11-24T13:24:17Z | |
dc.date.available | 2008-11-24T13:24:17Z | |
dc.date.issued | 2005-08-01 | en |
dc.description | This paper presents a novel unsupervised neural network model for learning the finite-state properties of an input language from a set of positive examples. The model is demonstrated to learn the Reber grammar perfectly from a randomly generated training set and to generalize to sequences beyond the length of those found in the training set. Crucially, it does not require negative examples. In 30% of the tests, the model yielded a perfect grammar recognizer, compared with only 2% reported by other authors for simple recurrent networks. The paper was initially presented at the AI-2004 conference, where it won the Best Technical Paper award. | en |
dc.identifier.citation | McQueen, T. et al. (2005) Extracting finite structure from infinite language. Knowledge-Based Systems, 18(4-5), pp. 135-141. | |
dc.identifier.doi | https://doi.org/10.1016/j.knosys.2004.10.010 | |
dc.identifier.issn | 0950-7051 | en |
dc.identifier.uri | http://hdl.handle.net/2086/196 | |
dc.language.iso | en | en |
dc.publisher | Elsevier | en |
dc.researchgroup | Centre for Computational Intelligence | |
dc.subject | RAE 2008 | |
dc.subject | UoA 23 Computer Science and Informatics | |
dc.subject | artificial neural networks | |
dc.subject | grammar induction | |
dc.subject | natural language processing | |
dc.subject | self-organizing map | |
dc.subject | STORM (Spatio-Temporal Self-Organizing Recurrent Map) | 
dc.title | Extracting finite structure from infinite language | en |
dc.type | Article | en |