Tag Archives: tone

Celebrating Our First Rule of Tone Writing

We took this photo at the end of the day today, to celebrate our first tone rule, which gave us our first rule about how to write tone in this language!

It took us a while to get there together, but I think it was worth it. We found eight different tone melodies in nouns of form CVCVC (where C is a consonant and V is a vowel). In isolation, each of the melodies falls on the second syllable. The same thing happens when you put a high tone or a low tone before the word.

But when you put either a high or a low tone after the word, none of them fall anymore. This happens whether you're adding one syllable or two.

The short version is that the last syllable of the phrase falls. So it looks like the words have a falling tone in isolation, but that's just because they end the phrase. The same thing happens with the possessive pronouns (high and low after the noun); they fall because they end the phrase, not (necessarily) because something in those words makes them fall.

So people will certainly be tempted to write this fall, as it is easy to hear. But as it is clearly attached to the phrase (rather than to any of these words), it shouldn't need to be written, except perhaps with a period.

For those interested in what “phrase” means here, so am I. 😅 This may be an utterance, a phonological phrase, or a syntactic unit. We’ll need to investigate some longer utterances to find out.
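If it helps to see the rule spelled out, here is a little sketch in Python of the idea; the melody labels are made up for illustration, not actual data from the language. The point is that the fall belongs to the end of the phrase, so it shows up on whatever word happens to come last.

```python
# A minimal sketch of the proposed rule: the audible fall attaches to the
# end of the PHRASE, not to any particular word. Word melodies here are
# made-up placeholders (H = high, L = low), not actual data.

def apply_phrase_final_fall(phrase):
    """Take a phrase (a list of words, each a list of syllable tones) and
    return surface tones, with a fall added only on the last syllable."""
    surface = [list(word) for word in phrase]   # copy the underlying tones
    surface[-1][-1] += "!"                      # mark the phrase-final fall
    return surface

# The same noun surfaces differently depending on what follows it:
noun = ["H", "H"]                               # hypothetical CVCVC melody
print(apply_phrase_final_fall([noun]))          # [['H', 'H!']]  fall in isolation
print(apply_phrase_final_fall([noun, ["L"]]))   # [['H', 'H'], ['L!']]  fall lands on the following word
```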

Out and About in Cameroon

The last post focused on workshop issues; this one will have more about the rest of life. This was the longest workshop I’ve done in Cameroon, and my first time flying within the country. Among other things, we stayed at a Benedictine monastery, so we were able to buy fresh milk from the cow above and a small number of other cows on the compound.

Downtown Yaoundé from the air

For those looking for some perspective on the capital city of Cameroon (where we live), here it is from the air. You can see the taller buildings and larger roads converging on a central area in the top third of the above photo.

I also got a photo of our neighborhood, complete with our house, the CABTAL building, the soccer field where we get to exercise (even in isolation), and even one end of our local church building!

Our neighborhood

Back to the monastery: apparently this order likes to keep busy, and to make things to sell to the community. This is where they make essential oils (from lots of things, with lots of cryptic names; cinnamon was the only one I recognized):

They also have a place behind the building where we stayed, where they microbrew a beer made from locally available ingredients. But as in many places, innovation, industry, and tradition go hand in hand. They also have a talking drum prominently displayed at the monastery entrance:

Talking drum at the monastery
Hear the two tones of the drum

I didn’t ever hear anyone play it (other than me, in the above video clip), but these drums (found across Africa) are dear to my heart. They probably make no sense to most English speakers, but when you speak a tonal language, these drums put out the information that you normally use to make words. So the fact that these drums are used to communicate language, which is then understood at a great distance, is a testimony to the importance of tone in these languages. Imagine you had a drum you could hit that made the ‘p’ sound, and another that could make a ‘b’ or ‘k’ sound, and you could just pound out letters (on a drum carved from a tree, no less!), and so beat out the sounds of a word. Anyway, I think it is cool that something so uniquely African exists that recognizes the unique value of tone in African languages.

Chufie’ workshop

several of us from the workshop
Hanging out at the end of the workshop

I just got back from a longer workshop where we tested out AZT, and things went well. I say “longer” because it was supposed to be three weeks, but we had to isolate after the first day, because of a COVID-19 exposure (the first in our whole community in months). But we got tested:

Our first (negative) test

And then again:

Our second (negative) test

Anyway, it was good to get back to the workshop:

guys working

When we debriefed the workshop, I had two main questions for the guys. First, was the tool easy enough to use? One guy responded that he didn’t really know how to use computers, but this tool was easy to use. So that was great news. I had suspected this, and worked for it, but it was good to hear we’re hitting that target.

The other question was about engagement and involvement: did the guys feel like they were actively taking a real part in the work? Again, they answered yes. In the picture above, the guys are talking through a decision, before telling the computer “This word is like that other one” or “this word is different from each word on this list”. Framing the question this way is important, because it is a question people can discuss and answer meaningfully without knowing much about linguistics. If we were to ask them to tell us whether a phrase had a floating tone in it (yup, those are real), we would be asking them to guess and make up an answer, since they would have no idea what the question meant, probably just like most people reading this post. :-) But floating tones are important, and we need to analyze them correctly; we just want to get at them in a way that enables the fullest participation of the people who speak the language.

I didn’t come up with this on my own; far from it. I’m standing on the shoulders of giants, who pioneered how to engage people meaningfully in the analysis of their own language. What’s new here is that these methods are modeled within a computer program, so the user is clicking buttons instead of moving pieces of paper around on a table. Buttons are not in themselves better than paper, but when we work on the computer, each decision is recorded immediately, and each change is immediately reflected in the next task, unlike pen-and-paper methods, where you work with a piece of paper full of (often multiple) crossed-out notes, which then need to be added to a database later.

The other major advantage of this tool is the facilitation of recordings. Typically, organizing recordings can be even more work than typing data from cards into a database, and it can easily be procrastinated, leaving the researcher with a partially processed body of recordings. But this tool takes each sorted word (e.g., ‘corn’ and ‘mango’), in each frame where it is sorted (e.g., ‘I sell __’ and ‘the __ is ripe’), and offers the user a button to record that phrase. Once done, the recording is immediately given a name with the word form and meaning, etc. (so we can find it easily in the file system), and a link is added to the database, so the correct dictionary entry can show where to find it. Having the computer do this on the spot is a clear advantage over a researcher spending hours over weeks and months processing this data.
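To give a feel for what that on-the-spot bookkeeping might look like, here is a small sketch in Python. The file-naming scheme and the fields are hypothetical illustrations of the idea, not the actual AZT implementation:

```python
# A minimal sketch of on-the-spot recording bookkeeping. The naming scheme
# and fields here are hypothetical illustrations, not AZT's actual format.
import json
from pathlib import Path

def file_recording(word_form, gloss, frame, wav_bytes, out_dir="recordings"):
    """Save a just-made recording under a findable name and return a
    database-style entry linking the dictionary word to the sound file."""
    Path(out_dir).mkdir(exist_ok=True)
    safe_frame = frame.replace("__", "X").replace(" ", "_")   # e.g. 'I sell __' -> 'I_sell_X'
    path = Path(out_dir) / f"{word_form}_{gloss}_{safe_frame}.wav"
    path.write_bytes(wav_bytes)                               # the audio just recorded
    return {"lexeme": word_form, "gloss": gloss, "frame": frame, "audio": str(path)}

entry = file_recording("mango", "mango", "I sell __", b"")    # placeholder audio bytes
print(json.dumps(entry, indent=2))
```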

Once the sorting and recording are done (the same day you sorted, remember? not months later), you can also produce an XML-to-PDF report (standing again on the giant shoulders of XLingPaper), with organized examples ready to copy and paste into a report or paper, with clickable links pointing to the sound files.

Anyway, I don’t know if the above communicates my excitement, but thinking through all these things and saying “This is the right thing to do” came before “Huh, I think I could actually make some of this happen”. And this last week, we actually saw it happen: people who speak their language, but don’t know much about linguistics, meaningfully engaged in the analysis of their language, in a process that results in a database of those decisions, including organized recordings for linguists to pick apart later, and cool reports!

Screenshot of PDF (which has clickable links, though not visible in this screenshot)

The Function of Tone in Ndaka

In an earlier post I mentioned work I was doing to show the importance of tone in the Bantu D30 languages. Here I’d like to go through the conjugation of one verb in one language, to show how tone works in relation to consonants and vowels. To start with, here is one verb conjugated two ways:

NDK Conjugation 1
If you have studied another language before, you might recognize this kind of listing of the forms of a verb for each of the people who do the action. In English, this kind of thing is boring:

  • I walk
  • you walk
  • he walks
  • we walk
  • you all walk
  • they walk

The only thing of any interest in the English is the final ‘s’ on ‘he walks’; everything else is the same on the verb. But that’s not the case in many languages, including the languages I’m working with. In these languages, there are lots of differences in form, and you can correlate the differences in form with differences in meaning. If you line up the verbs as below, you can separate the part that remains the same from the part that changes. You can also notice that in the meanings on the right, there is a part that remains the same, and a part that changes. This is the case for both conjugations:
NDK Conjugation 2

So with a conjugation paradigm like this, we can deduce that for each line in the paradigm, the part of the form that is different is related to the part of the meaning that is different (e.g., k- = “we” and ɓ- = “they”). Likewise, the part of the word forms that stays the same is related to the part of the meaning that stays the same (e.g., otoko = “will spit”).
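If you like seeing that deduction spelled out, here is a tiny Python sketch, using just the forms reconstructed from the example above (k- = “we”, ɓ- = “they”, otoko = “will spit”). The point is only the logic of lining up what changes in the form with what changes in the meaning:

```python
# A toy illustration of the deduction above: the part of the form that
# changes lines up with the part of the meaning that changes. The two
# forms are reconstructed from the example in the text; the segmentation
# itself is what the code shows.

paradigm = {
    "kotoko": "we will spit",
    "ɓotoko": "they will spit",
}

shared_form = "otoko"        # stays the same down the column of forms
shared_gloss = "will spit"   # stays the same down the column of meanings

for form, gloss in paradigm.items():
    changing_form = form[: -len(shared_form)]                  # what varies: the prefix
    changing_gloss = gloss.replace(shared_gloss, "").strip()   # what varies: the subject
    print(f"{changing_form}- = '{changing_gloss}'  |  -{shared_form} = '{shared_gloss}'")

# k- = 'we'  |  -otoko = 'will spit'
# ɓ- = 'they'  |  -otoko = 'will spit'
```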

But, you might ask, this logic gives us otoko = “(did/have) spit” in the first conjugation, but otoko = “will spit” in the second. Which is it? In fact, if you compare each line of the two conjugations, you will see that the consonants and vowels are the same for each first line, for each second line, and so on down to the sixth line. So whatever form indicates the difference between “will” and “did/have” is not found in the consonants or vowels. Where is that difference indicated? In the tone. If you compare the second column for each line of the two conjugations, you will see that the lines representing pitch for each word form are not the same between the two conjugations.

A similar problem exists for the prefixes that refer to subjects. That is, n- is used for both “I” and “you all”, and the absence of a prefix is used for both “you” and “he”. But looking at this last one first, we can see a difference in the tone:

NDK Conjugation 3

So even though there is no difference in the consonants or vowels to indicate a difference in meaning, there is a difference in tone which does. The same is found for “I” versus “you all”, circled here:

NDK Conjugation 4

So the bottom line is that for (almost) every difference in meaning, there is a difference in form that indicates that difference. Sometimes that difference is in the consonants or vowels, as we might expect in languages more closely related to English (and even in English, with the -s above), but sometimes that difference is only in the tone.

But the story is a bit more complex than that, since the tone doesn’t do just one thing. We saw above that tone indicates the difference between “will” and “did/have” in these conjugations. But tone also indicates the difference between “you” and “he”, as well as that between “I” and “you all”. That means “you will”, “you did”, “he will”, and “he did” all have the same consonants and vowels, and are only distinguished one from another by the tone. And there’s another quadruplet with “I” and “you all”, and these quadruplets exist for almost every verb in the language: this is a systematic thing.

So with two minimal quadruplets for each verb in the language, it makes sense to ask what the contribution of each meaningful word part is to the tone, and how those contributions come together. For instance, what is the contribution of “you”, as opposed to “he”, on the one hand, and what is the contribution of “will”, as opposed to “did/have”, on the other? And how do these different bits of tonal information combine to form the tone patterns we hear on full words? (Hint: they are a lot harder to chop up than the consonant prefixes above.)

Anyway, that’s the essence of what I do, in brief. By looking at the actual pronunciations of words in a system, we can deduce what the contribution of each meaningful word part is, then make hypotheses about how they come together, and test those hypotheses until we come up with a coherent system.
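For the curious, here is a schematic Python sketch of that hypothesize-and-test loop. The underlying tones, the combination rule, and the “observed” forms are all invented placeholders; only the shape of the reasoning is the point:

```python
# A schematic of the hypothesis-testing loop described above. The underlying
# tones, the combination rule, and the "observed" surface melodies are all
# invented placeholders, not data from any of these languages.

subjects = {"you": "L", "he": "H"}    # hypothesized tonal contribution of each subject
tenses = {"will": "H", "did": "L"}    # hypothesized tonal contribution of each tense

def combine(subject_tone, tense_tone):
    """One candidate hypothesis: the word's melody is just the two tones in order."""
    return subject_tone + tense_tone

observed = {("you", "will"): "LH", ("you", "did"): "LL",
            ("he", "will"): "HH", ("he", "did"): "H"}   # pretend field data

for (subj, tense), surface in observed.items():
    predicted = combine(subjects[subj], tenses[tense])
    status = "ok" if predicted == surface else "revise the hypothesis"
    print(f"{subj} + {tense}: predicted {predicted}, heard {surface} -> {status}")
```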


Consonants

For those of you that have followed my analysis of sound systems in (so far) unwritten languages, I’m sure you’ve already heard enough about tone and vowels. So today, I thought I’d write about consonants!

Language sound systems generally store information in three places. We know consonants (with obstructed airflow) and vowels (with shaped, but not obstructed airflow) from English, but probably about half the world’s languages also use tone (and some estimate 80% of those in Africa). Other languages (which are more like English) use contrastive stress, meaning that the stress on a word changes not only the pronunciation, but the meaning. If I say emPHAsis instead of EMphasis, you get what I mean, though it sounds wrong. But CONvert and conVERT are two different words, the first being a noun, and the second being a verb. We don’t do this kind of thing much, but this is just one of the several ways languages communicate the difference between one word and another.

So you know that tone is like stress (though more complicated, and used a lot in Africa, but not really in English). And you know about ATR, which gives some African languages interesting vowel harmony patterns (and more vowels than Spanish, but fewer than English). But what about consonants? You might think that I don’t work with consonants much, since I’m studying tone, but that’s just not so. First of all, almost every word has consonants, so they can’t be avoided.

Secondly, and slightly more importantly, consonants make small, meaningless (i.e., not changing word meaning) but potentially distracting changes to pitch, as in the spikes circled in the following picture:

calculated pitch spikes surrounding voiceless consonants

It would be easy to look at those quick jumps and drops in pitch and say “wow, something’s going on there”. But there isn’t. These are just a result of the vocal folds starting and stopping vibrating as they go between vowels (with vocal fold vibration) and voiceless consonants (where the vocal folds are relaxed). So as I look at pitch traces with these effects, it’s important to understand what they are, and to abstract them away, rather than pay much attention to them.
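For those who like to see the nuts and bolts, here is one simple way to abstract those perturbations away, sketched in Python with made-up numbers and hand-marked consonant intervals (not my actual workflow): just ignore the pitch samples that fall inside voiceless consonants.

```python
# A sketch of one way to keep consonant perturbations from distracting the
# analysis: blank out pitch samples that fall inside (hand-marked) voiceless
# consonant intervals. Times, pitch values, and intervals are made up.
import numpy as np

times = np.linspace(0.0, 1.0, 11)                    # seconds
pitch = np.array([120, 122, 121, 160, 119, 118, 117, 90, 116, 115, 114], float)  # Hz, with spikes
voiceless = [(0.25, 0.35), (0.65, 0.75)]             # consonant intervals, marked by hand

masked = pitch.copy()
for start, end in voiceless:
    masked[(times >= start) & (times <= end)] = np.nan   # ignore these samples

print(np.nanmean(pitch), np.nanmean(masked))          # mean pitch with and without the spikes
```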

There is a third reason, which is more important to my research. Not only do I work with tone, but the languages I’m working on now have what we call Consonant-Tone Interaction. That is, the tone of these languages is actually affected by the consonants around it. So it’s important to understand what the consonants are in each word.

Normal consonants (in these languages) have a slight negative pressure (sucking) before release, and these consonants don’t impact tone. Those where the airstream is more like typical English pronunciation are less common, but they do impact the tone. So how do we tell the difference? There are many ways, but one I’d like to show you can be seen in the following picture:

Find the egressive, implosive, and voiceless consonants

I originally developed this image as an exercise, so rather than just go and give you the answer, I’ll pose the question, and you can submit answers in the comments. 😉

I’ll help you out with a few points:

  • The three categories are named and described in the key on the right
  • The vowels are the dark vertical bands; the consonants are between those. 🙂
  • Most of the vertical space for consonants is blank/white, but there is a small dark band at the bottom for some, which indicates voicing.
  • If you look at pitch, recall that tone is relative pitch, so compare the drop over a consonant to the pitches over the vowels on each side, which may not be the same.

So, which consonant types can you find? How many of each, and in what order?


The Importance of Tone

I presented a poster earlier this term at the Metroplex Linguistics Conference, a conference for linguists throughout the Dallas/Fort Worth area. This year it was sponsored by the Graduate Institute of Applied Linguistics (G.I.A.L.), where a lot of our colleagues either teach or get training before heading to the field. My poster was on the functional load of tone:

Functional load of tone in Bantu D30 (poster, redacted)

While much of that detail may not make much sense to you, the main point is that tone is important for conveying meaning in tonal languages, but not necessarily to the same extent, or in the same ways, from one language to another. So this poster took four of the (tonal) languages that I’m working on, and compared them to each other, alongside Swahili, a non-tonal language used to communicate between people groups where these languages are spoken.

One interesting thing I found was that the importance of tone in a given language can’t be determined by the number of consonants or vowels in the language. Each of these languages has about the same number of consonants (27-33) and vowels (7-9), but they use tone in very different ways. This can be seen in the conjugation of verbs, where subjects, for example, are indicated by consonants, vowels, and tone in each of these languages (this is like the ‘s’ in ‘He walks’, which is not there in ‘I walk’). But in some of these languages, the consonants and vowels are enough to tell who is doing the action, so at least that part of the writing system could work without writing the tone. In one of those languages (Bɨra), there are letters for each kind of subject. For another (Bʉdʉ), one of the subjects has no consonants or vowels (like the agreement on ‘I walk_’), but it is still clear who is doing the action, since there is only one such subject. But in the other two languages, there are two subject pronouns that have no consonants or vowels, so you can only tell them apart by tone. And in one of those languages (Ndaka), there is another pair of pronouns, which are both ‘n-‘, so they are also distinguished only by tone.

So each language places progressively more importance on tone; it becomes harder and harder to convey all the language’s meaning without indicating tone. In addition to the above, Ndaka also has verb root minimal pairs. For instance, the difference between ‘cook’ and ‘become tired’ is only in the tone; they have the same consonants and vowels. That leads to the following set of eight words, at least six of which are distinguished by tone:

  1. ɔjana. [˨˨˨˨ ˨˧˦ ˧˨˩] You were tired.
  2. ɔjana. [˦˦˦˦ ˦˦˦˦ ˨˨˨˨] He/she/it was tired.
  3. ɔjana. [˨˨˨˨ ˦˦˦˦ ˦˦˦˦] You will be tired.
  4. ɔjana. [˦˦˦˦ ˦˦˦˦ ˦˦˦˦] He/she/it will be tired.
  5. ɔjana. [˨˨˨˨ ˨˧˦ ˧˨˩] You prepared food.
  6. ɔjana. [˦˦˦˦ ˦˦˦˦ ˨˨˨˨] He/she/it prepared food.
  7. ɔjana. [˨˨˨˨ ˨˨˨˨ ˦˦˦˦] You will prepare food.
  8. ɔjana. [˥˥˥˥ ˨˨˨˨ ˦˦˦˦] He/she/it will prepare food.

If you want to pronounce these, in the International Phonetic Alphabet j is pronounced like ‘y’ in ‘you’, so these might be pronounced something like “oh-yawn-ah”, though with different pitches, [˥˥˥˥] being higher and [˨˨˨˨] being lower.
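If you want to see how the “at least six” works out, here is a short Python sketch that groups the eight forms above by their consonants and vowels alone, and then by segments plus tone. The data is copied from the list; only the counting is added:

```python
# Group the eight forms above by their consonants and vowels alone, and then
# by segments plus tone, to see how much work the tone is doing.
from collections import defaultdict

forms = [
    ("ɔjana", "˨˨˨˨ ˨˧˦ ˧˨˩", "You were tired."),
    ("ɔjana", "˦˦˦˦ ˦˦˦˦ ˨˨˨˨", "He/she/it was tired."),
    ("ɔjana", "˨˨˨˨ ˦˦˦˦ ˦˦˦˦", "You will be tired."),
    ("ɔjana", "˦˦˦˦ ˦˦˦˦ ˦˦˦˦", "He/she/it will be tired."),
    ("ɔjana", "˨˨˨˨ ˨˧˦ ˧˨˩", "You prepared food."),
    ("ɔjana", "˦˦˦˦ ˦˦˦˦ ˨˨˨˨", "He/she/it prepared food."),
    ("ɔjana", "˨˨˨˨ ˨˨˨˨ ˦˦˦˦", "You will prepare food."),
    ("ɔjana", "˥˥˥˥ ˨˨˨˨ ˦˦˦˦", "He/she/it will prepare food."),
]

by_segments = defaultdict(list)
by_segments_and_tone = defaultdict(list)
for segments, tone, gloss in forms:
    by_segments[segments].append(gloss)
    by_segments_and_tone[(segments, tone)].append(gloss)

print(len(by_segments))            # 1: all eight are identical without tone
print(len(by_segments_and_tone))   # 6: tone separates six distinct forms
# (two pairs still share both segments and tone, hence "at least six")
```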

One of the things I’m doing now is developing this material into a presentation for the Annual Conference on African Linguistics (ACAL) next spring. That will further develop my analysis of the tone in these languages, which will form the majority of my dissertation.