Tag Archives: participatory research methods

Chufie’ workshop

several of us from the workshop
Hanging out at the end of the workshop

I just got back from a longer workshop where we tested out AZT, and things went well. I say “longer” because it was supposed to be three weeks, but we had to isolate after the first day because of a COVID-19 exposure (the first in our whole community in months). But we got tested:

Our first (negative) test

And then again:

Our second (negative) test

Anyway, it was good to get back to the workshop:

guys working

When we debriefed the workshop, I had two main questions for the guys. First, was the tool easy enough to use? One guy responded that he didn’t really know how to use computers, but that this tool was easy to use. So that was great news. I had suspected this, and worked toward it, but it was good to hear we’re hitting that target.

The other question was about engagement and involvement: did the guys feel like they were actively taking a real part in the work? Again, they answered yes. In the picture above, the guys are talking through a decision before telling the computer “This word is like that other one” or “This word is different from each word on this list”. Framing the question this way is important, because it is one that people can discuss and answer meaningfully without knowing much about linguistics. If we were to ask them whether a phrase had a floating tone in it (yup, those are real), we would be asking them to guess and make up an answer, since they would have no idea what the question meant —probably just like most people reading this post. :-) But floating tones are important, and we need to analyze them correctly; we just want to get at them in a way that enables the fullest participation of the people who speak the language.

I didn’t come up with this on my own; far from it, I’m standing on the shoulders of giants who pioneered how to engage people meaningfully in the analysis of their own language. What’s new here is that these methods are modeled within a computer program, so the user is clicking buttons instead of moving pieces of paper around on a table. Buttons are not in themselves better than paper, but when we work on the computer, each decision is recorded immediately, and each change is immediately reflected in the next task —unlike pen-and-paper methods, where you work with a sheet covered in (often multiple) crossed-out notes, which then need to be added to a database later.
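To make the record-as-you-go idea concrete, here is a minimal sketch of how such a tool might store each sorting decision the moment the button is clicked, so the next task is always built from the up-to-date data. This is not AZT’s actual code; the file name, functions, and group labels are all hypothetical.

```python
import json
from pathlib import Path

DB = Path("sorting_decisions.json")  # hypothetical on-disk store

def load_db():
    """Read the decision database, or start an empty one."""
    return json.loads(DB.read_text()) if DB.exists() else {"groups": {}}

def record_decision(word, group):
    """Record 'this word is like that other one' the moment it is made."""
    db = load_db()
    db["groups"].setdefault(group, []).append(word)
    DB.write_text(json.dumps(db, indent=2))  # saved immediately, not months later
    return db

def next_task(db):
    """The next comparison task reflects every decision already made:
    one representative word from each group sorted so far."""
    return [words[0] for words in db["groups"].values()]

record_decision("mango", group="tone_melody_A")
db = record_decision("corn", group="tone_melody_B")
print(next_task(db))  # one exemplar per group, e.g. ['mango', 'corn']
```

The point of the sketch is the contrast with paper: every click both persists the decision and changes what the next sorting screen shows.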

The other major advantage of this tool is how it facilitates recordings. Typically, organizing recordings can be even more work than typing data from cards into a database, and it is easily procrastinated, leaving the researcher with a partially processed body of recordings. But this tool takes each sorted word (e.g., ‘corn’ and ‘mango’), in each frame in which it is sorted (e.g., ‘I sell __’ and ‘the __ is ripe’), and offers the user a button to record that phrase. Once done, the recording is immediately given a name with the word form, meaning, etc. (so we can find it easily in the file system), and a link is added to the database, so the correct dictionary entry can show where to find it. Having the computer do this on the spot is a clear advantage over a researcher spending hours over weeks and months processing this data.
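As an illustration of the naming-and-linking step, a sketch along these lines could build a self-describing filename from the word form, gloss, and frame, and attach it to the dictionary entry right away. Again, this is not AZT’s actual implementation; the naming scheme, the example word, and the `media` field are hypothetical.

```python
from pathlib import Path

def recording_filename(form, gloss, frame_gloss, ext="wav"):
    """Build a filename that carries the form and meaning (hypothetical
    scheme), so the recording is findable in the file system by eye."""
    safe = lambda s: s.replace(" ", "_").replace("/", "-")
    return f"{safe(form)}_{safe(gloss)}_{safe(frame_gloss)}.{ext}"

def link_recording(entry, form, gloss, frame_gloss, media_dir="recordings"):
    """Attach the recording's path to the dictionary entry on the spot."""
    name = recording_filename(form, gloss, frame_gloss)
    entry.setdefault("media", []).append(str(Path(media_dir) / name))
    return entry

# Hypothetical word form and gloss, for illustration only:
entry = {"lexeme": "ɛkaa", "gloss": "corn"}
link_recording(entry, "ɛkaa", "corn", "I sell")
print(entry["media"])  # ['recordings/ɛkaa_corn_I_sell.wav']
```

Because the link is written into the entry at recording time, there is no separate pile of anonymous sound files to sort through later.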

Once the above is done (the same day you sorted, remember? not months later), you can also produce an XML-to-PDF report (standing again on the giant shoulders of XLingPaper) with organized examples ready to copy and paste into a report or paper, with clickable links pointing to the sound files.
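The shape of the idea —structured example data carrying a clickable link to its recording— can be sketched with a toy XML element. To be clear, this is not XLingPaper’s actual schema; the element and attribute names here are made up purely to show the data-plus-media-link pattern.

```python
import xml.etree.ElementTree as ET

def example_xml(form, gloss, sound_path):
    """Toy element pairing an example with its recording (invented
    tag names, NOT the real XLingPaper markup)."""
    ex = ET.Element("example")
    ET.SubElement(ex, "form").text = form
    ET.SubElement(ex, "gloss").text = gloss
    ET.SubElement(ex, "media", href=f"file://{sound_path}")
    return ex

ex = example_xml("ɛkaa", "corn", "recordings/ɛkaa_corn_I_sell.wav")
print(ET.tostring(ex, encoding="unicode"))
```

A PDF renderer can then turn each `href` into a clickable link next to its example.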

Anyway, I don’t know if the above communicates my excitement, but thinking through all these things and saying “This is the right thing to do” came before “Huh, I think I could actually make some of this happen”. And this last week, we actually saw it happen —people who speak their language but don’t know much about linguistics, meaningfully engaged in the analysis of that language, in a process that results in a database of their decisions, including organized recordings for linguists to pick apart later —and cool reports!

Screenshot of PDF (which has clickable links, though not visible in this screenshot)

Participatory Research Methods

I just realized I don’t have an article to refer to on this topic, while I’ve been using and talking about these methods for some years, so I’ll briefly describe what I mean here.

The term comes from “Participatory Research in Linguistics”, by Constance Kutsch Lojenga (1996). Others have used it, but the basic idea is to involve people in the analysis of their own language, as much as possible.

While this may seem a weird thing to have to say, many Field Methods courses in linguistics involve asking a naive speaker how to say things (or if saying something is grammatical), while the researcher takes notes. Those notes are not typically shared with the speaker, and it is relatively unimportant whether the speaker has any idea what is going on.

This is the paradigm we are turning on its head when we seek to involve as many community members in the analysis as possible.

Involving as many people as possible is good for our data, because it means we aren’t basing our analysis of the language on what just one person says. I join many in believing that language is a community property, not that of a single person. Yet it is not uncommon to have claims about a language made on the basis of a single person’s production. Involving more people can only increase our confidence that our data represents the language as a whole.

Involving as many people as possible is good for our analysis, as well. When I sit on the other side of a clipboard, and leave the “naive speaker” out of my thinking entirely, I’m looking at only half the problem. Sure, I can see how things look from the outside (etic), but I cannot see how things look from the inside (emic) anywhere near as well as a native speaker of the language can. Even if my analysis could be completely right without that inside perspective, its presence can confirm the rightness of that analysis. And working from both inside and outside the language gives us more perspective, pushing the work forward faster and on surer footing.

Involving as many people as possible is also good for the community of people who speak the language. I have no interest in finding out a lot of cool things about a language, publishing them and becoming famous (as if), and leaving the people who speak that language ignorant of the work. On the contrary, I think the community is best served by being as involved in the work as possible, so that as the work progresses, those who are most closely involved in the work can explain it to those around them —and typically in terms that might escape my attempts to do so. This accomplishes two things: it builds a cadre of people who are able to teach the analysis to others, and it increases the number and kind of people within reach of that teaching.

Consider the implications for literacy work. The above might not mean much to you if I’m dealing with some obscure syntactic phenomenon that you couldn’t even point out in English, like “Successive Cyclic Movement and Island Repair” (which is a real topic of conversation between some linguists, btw). But if I’m producing a booklet that should help literacy teachers teach people how to read, but no one understands the booklet, how will they teach people to read? On the other hand, when I finish a workshop, anywhere from three to fifteen people have a good idea what we’ve done, and could explain it to someone else. Maybe they’re not ready to be literacy teachers yet, but they are at least on their way there.

So involving as many people as possible is good for our data, for our analysis, and for the people we work with. Because I am strongly invested in all of these, I use these methods almost exclusively.

There is a caveat: I’ve included those “as much as possible” hedges above intentionally. I have a couple of graduate degrees in linguistics, and I shouldn’t assume that everyone can understand everything I have figured out in a language, or even what I’m trying to figure out, or why. There are times when I have to accept the limits of the people I’m working with, and use what I can get from participatory methods. There are lots of things in my dissertation that I wouldn’t bring up with almost anybody without some serious background conversation (and for some, not even then). Rather, as I consider what is possible, I seek ways to simplify and explain what we’re doing so that a subsistence farmer might be able to grasp it. This is why we use papers in workshops, rather than computers. This is why we stack them in piles, organizing them visually on a surface to show the differences between them. Inviting and enabling more participation will only increase the value of our work.




Africa Night!

Have you ever wondered how people make their first alphabet?

Starting this summer, we are giving people a taste of Bible Translation work in Africa, through small group meetings designed to be interactive and engaging. We introduce people to the language work we do with Wycliffe Bible Translators, in three parts (total 90 minutes):

  1. An interactive exercise for anyone who can read short English words. See what it’s like to discover your vowels for the first time!
  2. African foods typical to many of the places we have worked
  3. Testimonies, videos, slides and information from Wycliffe Bible Translators and our own experience. Q&A as time allows.

We have worked so far with groups of 6–25 people; we’d like to keep them small enough to allow everyone to participate. If you have a small group or Sunday school class that would be interested, or if you would like to join or host a group, please let us know, and we’ll get you on our calendar.

That said, if you have any questions about Africa, Wycliffe Bible Translators or our work, please don’t be shy; we’d love to discuss it over coffee, too. 😉 🙂