Review article published in: Vol. 4:2 (2005), pp. 235–248
Clark, Herbert H.
(1996) Using language. Cambridge: Cambridge University Press.
Engberg-Pedersen, Elisabeth
(1993) Space in Danish Sign Language. Hamburg: Signum.
Goldin-Meadow, Susan
(2003) Hearing gesture: How our hands help us think. Cambridge, MA: The Belknap Press.
Hockett, Charles F. & Stuart A. Altmann
(1968) A note on design features. In Thomas A. Sebeok (Ed.), Animal communication (pp. 61–72). Bloomington: Indiana University Press.
Kendon, Adam
(1988) How gestures can become like words. In Fernando Poyatos (Ed.), Cross-cultural perspectives in nonverbal communication (pp. 131–141). Toronto: Hogrefe.
Levinson, Stephen C.
(1997) Deixis. In Peter V. Lamarque (Ed.), Concise encyclopedia of philosophy of language (pp. 214–219). Oxford: Elsevier.
Liddell, Scott K.
(2003) Grammar, gesture, and meaning in American Sign Language. Cambridge: Cambridge University Press.
McNeill, David
(1985) So you think gestures are nonverbal? Psychological Review, 92 (3), 271–295.
(1992) Hand and mind: What the hands reveal about thought. Chicago: University of Chicago Press.
Quine, Willard V. O.
(1960) Word and object. Cambridge, MA: MIT Press.
Tomasello, Michael & Luigia Camaioni
(1997) A comparison of the gestural communication of apes and human infants. Human Development, 40 (1), 7–24.
Cited by 4 other publications

Gross, Stephanie, Brigitte Krenn & Matthias Scheutz
2016. Multi-modal referring expressions in human-human task descriptions and their implications for human-robot interaction. Interaction Studies. Social Behaviour and Communication in Biological and Artificial Systems 17:2, pp. 180 ff.
Salle, Alexander, Barbara Schmidt-Thieme, Axel Schulz & Elke Söbbeke
2023. Darstellen und Darstellungen verwenden [Representing and using representations]. In Handbuch der Mathematikdidaktik [Handbook of mathematics didactics], pp. 429 ff.
Yeamkuan, Suparat & Kosin Chamnongthai
2021. 3D Point-of-Intention Determination Using a Multimodal Fusion of Hand Pointing and Eye Gaze for a 3D Display. Sensors 21:4, pp. 1155 ff.
Yeamkuan, Suparat, Kosin Chamnongthai & Wudthipong Pichitwong
2022. A 3D Point-of-Intention Estimation Method Using Multimodal Fusion of Hand Pointing, Eye Gaze and Depth Sensing for Collaborative Robots. IEEE Sensors Journal 22:3, pp. 2700 ff.

This list is based on CrossRef data as of 30 April 2024. Please note that it may not be complete. Sources presented here have been supplied by the respective publishers. Any errors therein should be reported to them.