Communication is multi-modal

Professor Chloë Marshall presented at the 2018 Microtia UK London Family Day.

Below, she outlines her talk, and argues for the importance and benefit of early exposure to language for deaf babies and children – whether the language is spoken, signed, or both.


My aim in this article is to tease apart three things – speech, language and communication – which are obviously closely related, but not identical. I also want to make two main points: the first is that communication is not confined to spoken language, but is instead “multi-modal”; the second is that too often the focus with deaf children is on developing their speech rather than on developing all the other resources that make up communication. I argue that this focus needs to be challenged.


Let’s look first at the distinction between speech, language, and communication. Communication is the broadest of these categories. Communication encompasses language, but it encompasses other things too. When communicating face-to-face, people use hand gestures, eye gaze and facial expressions to convey information. They might use gestures that are representational in that they represent or depict the things that they are talking about; for example, they might gesture a circular motion to show how they stirred the mixture for a cake. They might point to something using their index finger, or use a conventional gesture such as the “thumbs up” gesture to show that something is good. They might make eye contact with someone in order to communicate that they want to engage in conversation with that person. They might also nod or shake their head to express agreement or disagreement, and use a range of facial expressions, including smiles and frowns.


We’ll come back to communication later, but let’s turn to language now. It’s obvious what language is – it’s English, French, Polish, Arabic, or Urdu, for example. These are spoken languages, meaning that they are conveyed using speech, and these particular languages can also be written down. But languages don’t have to use speech at all: signed languages are conveyed in the visual-gestural domain using the hands and other body parts – eyes, eyebrows, mouth, tilt of the torso, for example. It might be tempting to think that because they make use of the same resources that speakers use to accompany speech – the hands, eyes, and face – signed languages are just hand-waving, and not as “language-like” as spoken languages. And because signed languages don’t have a written form, it might be tempting to think that they’re deficient in some way compared to spoken languages.


Nevertheless, decades of research have shown that this impression is wrong: signed languages are just as complex, real, and language-like as spoken languages. They’re even processed using the same areas of the brain. In fact, the brain doesn’t actually care whether it’s confronted with spoken language or signed language – it just recognises “language”, processes it and learns it.


If we think more carefully about what makes language language, we’re likely to think about rules. Different languages have different rules governing how words are combined into sentences, and different rules governing how words change their form depending on their exact meaning. For example, in English the sentences “the dog bit the man” and “the man bit the dog” do not mean the same thing – although the words are the same, the fact that they are in a different order changes the meaning.


Signed languages are also structured according to specific rules, and therefore also have a grammar. Whereas in English what we call “wh” question words – such as “what”, “who”, and “why”– need to come at the beginning of the sentence (as in “what’s your name?” or “who is she?”), in British Sign Language they come at the end of the question (the word order is “name you what?” or “she who?”). Furthermore, different signed languages have different rules: British and American Sign Language are not the same, for example.


In summary, signed languages are just as real, complex and rule-governed as spoken languages. The only difference is that they don’t use speech. Speech is a subset of language, which is a subset of communication, as illustrated in the diagram below.


I hope I’ve convinced you of my first point, which is that communication is multi-modal, by which I mean it draws on different modes of transmission. Communication isn’t confined to spoken languages, and signed languages offer a means of communication that is just as effective. And even when we speak, we make use of gestural resources – our hands, our eye gaze, our facial expressions. We are all multi-modal communicators.


The second point I want to make is that too often the focus with deaf children is on speech rather than on all the other resources that make up communication. What we know from research is that early exposure to language – whether that language is spoken or signed – is really important for ALL children, whether they are deaf or hearing. Children who hear or see lots of language, and who encounter rich and varied language with lots of different words and with lots of people in lots of different contexts, are children who are going to acquire language more quickly.

And this language will form the foundation of their learning at school. Children cannot learn to read or write without having language, or do maths without language, or really learn anything at school without language. It’s also very difficult for children to understand their emotions and to regulate their emotions and their behaviour without language.


Deaf children are, of course, at risk of not acquiring language as well as hearing children do, for the very obvious reason that their hearing impairment limits their access to speech. Even hearing aids and cochlear implants don’t “cure” deafness by turning a deaf child into a hearing one. Much of the professional support that deaf children get with their language development will focus on their listening skills and their ability to pronounce speech clearly. Indeed, I’ve had parents of deaf children say to me that their child’s audiologist or speech and language therapist has advised them not to use sign language because it will damage their chances of learning to speak.


But this is wrong: there is absolutely no evidence that using a sign language alongside a spoken language – either as two separate languages (such as English and British Sign Language) or as Sign-Supported English (where some key signs are used in addition to spoken English) – will delay or damage deaf children’s speech in any way. Research has also shown that the earlier a child learns language the better, and that the younger a child starts learning a sign language the easier it will be for them to learn a spoken language when they’re older, and vice versa.


Therefore, if you don’t do so already, you might want to try adding some gestures to your communication. You might even want to try learning British Sign Language. In fact, it’s very natural for parents to draw on multi-modal resources when communicating with their young children – deaf or hearing. Even when talking with hearing children we tend to gesture a lot and use lots of facial expressions, so we’re already good multi-modal communicators. When we’re communicating with deaf children, let’s think about making good use of the multi-modal resources that we have: it’s only going to enhance, not hold back, their language development.


About the author:

Chloë Marshall, PhD, is Professor of Psychology, Language and Education at UCL’s Institute of Education. Her first career was as a Montessori early-years teacher and teacher-trainer. She currently carries out research into deaf children’s language development and learning, and is particularly interested in how deaf children learn British Sign Language. She is also collaborating with colleagues at UCL, the University of Brighton, and the University of Chicago on a project titled “The role of iconicity in word learning”, which investigates whether iconic communication (in the form of onomatopoeia, intonation and gestures) by caregivers when talking to their hearing 2–3-year-old children helps those children to learn words.


Editor’s note:

If you would like to use sign language and/or more gestures with your child, here are some resources to get you started:
– list of links to advice
– NDCS family signing website
– baby signing book