The interdisciplinary linguistic attractor model portrays language processing as linked sequences of fractal sets, and examines the changing dynamics of such sets for individuals as well as for the speech community they comprise. Its motivation stems from human anatomical constraints and from several artificial neural network approaches. It uses general computation theory to: (1) demonstrate the capacity of Cantor-like fractal sets to perform as Turing machines; (2) better distinguish between models that simply match outputs (emulation) and models that match both outputs and internal dynamics (simulation); and (3) relate language processing to essential computation steps executed in parallel. Measure and information theory highlight the key variables driving linguistic dynamics, while catastrophe and game theory help predict the possible topologies of language change.
The model introduces techniques to isolate and measure attractors, and to interpret their stability and relative content within a system. Important results include the ability to establish the sequence of related sound changes, and to make point-to-point comparisons of different texts using common metrics. Other techniques yield quantifiable ambiguity landscapes that illustrate the forces propelling different languages in different directions.
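The attractor-measurement techniques mentioned above draw on standard fractal geometry. As a loose illustration only (not taken from the book), the following Python sketch estimates the box-counting dimension of the classic middle-thirds Cantor set, the simplest example of the Cantor-like sets the model invokes; the function names and the choice of scales are assumptions for this sketch:

```python
import math

def cantor_midpoints(depth):
    """Midpoints of the surviving intervals of the middle-thirds
    Cantor set after `depth` removal steps."""
    intervals = [(0.0, 1.0)]
    for _ in range(depth):
        nxt = []
        for a, b in intervals:
            third = (b - a) / 3.0
            nxt.append((a, a + third))      # keep the left third
            nxt.append((b - third, b))      # keep the right third
        intervals = nxt
    return [(a + b) / 2.0 for a, b in intervals]

def box_counting_dimension(points, scales):
    """Least-squares slope of log N(eps) versus log(1/eps), where
    N(eps) is the number of boxes of side eps containing a point."""
    xs, ys = [], []
    for eps in scales:
        occupied = {math.floor(p / eps) for p in points}
        xs.append(math.log(1.0 / eps))
        ys.append(math.log(len(occupied)))
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

points = cantor_midpoints(10)
scales = [3.0 ** -k for k in range(1, 8)]
dim = box_counting_dimension(points, scales)
print(round(dim, 3))  # theoretical value: ln 2 / ln 3 ≈ 0.631
```

Because the construction removes the middle third at every step, each refinement doubles the surviving intervals while shrinking them by a factor of three, so the estimated slope recovers ln 2 / ln 3.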