
Typhoon Halong highlights questions around the use of AI for translating Indigenous languages


After Typhoon Merbok slammed into Western Alaska in 2022, some Yup’ik and Iñupiaq language speakers were confronted with unintelligible translations in federal disaster assistance materials. The faulty translations, brought to light through reporting at KYUK, made the Federal Emergency Management Agency (FEMA) the subject of a civil rights investigation and led the agency to make policy changes in the ways it handles Indigenous languages.

Three years later, Typhoon Halong has brought new questions about how emergency managers convey information to Indigenous language speakers. KYUK’s Sage Smiley spoke with Annie Rosenthal, a fellow with media outlet High Country News, about her reporting on the use of artificial intelligence for translation and its implications for tribal data sovereignty.

------------------------------------

Smiley (KYUK): Thanks so much for joining me today, Annie. You recently published an article looking at the potential use of AI in translating Indigenous languages, specifically in times of disaster. Can you give us the elevator pitch? What's this reporting about? Where did it come from?

Rosenthal: Yeah, thanks for having me, Sage. This reporting actually came out of a story that KYUK did a few years back, right after Typhoon Merbok. The reporter, Emily Schwing, was looking at the translation of FEMA disaster aid after that series of storms and found, alongside translators at KYUK, that the translations the federal government had contracted a California company to provide for Alaska Native communities were, in fact, totally unintelligible nonsense. That scandal prompted a civil rights investigation and got FEMA to change some of its policies around working with Alaska Native communities to try to do a better job. So I was curious to see what the agency was doing after this most recent storm.

This time, I noticed something interesting: a Minneapolis-based company called Prisma International was hiring Yup’ik and Iñupiaq language speakers after this storm to help with disaster translation. FEMA wouldn't say specifically whether it is working with Prisma, and Prisma didn't respond to requests for comment. But what raised questions is that Prisma uses AI and human translators together, and so some of the local translators I spoke with, alongside experts in Native law and sovereignty, had concerns about what that might mean, both for the accuracy of translations and for data sovereignty, the control that Native communities have over how their cultural knowledge is used.

KYUK: Yeah, so how have AI translation models been used with Indigenous languages before, and to what success? Do you know of cases where combining Indigenous languages and AI has had positive outcomes, and then, on the flip side, where has it gone wrong?

Rosenthal: Yeah, this is a very new and emerging field, and one that I think people are both excited and a little nervous about. A lot of the experts I spoke with were excited about the potential for AI to help specifically with language preservation for endangered languages. There are Indigenous software engineers doing really cool work to protect languages: an Anishinaabe roboticist made a robot to help kids learn her language, and a Choctaw computer scientist is doing something similar. But there are also concerns we've seen around AI and Native languages. AI relies on large amounts of data to do its translation, and for Indigenous languages that data often isn't available, so the translations end up spitting out made-up words and inaccurate sentences. There are also questions about how well AI translations can convey the nuances of Indigenous culture and ways of knowing. And the experts I talked to were concerned about how that data use could happen without the input of the Native communities who share their cultural information, which then gets used by outside companies to whatever end.

KYUK: So it sounds like there's a two-part concern. There are many questions when it comes to AI broadly, but what I'm hearing is, first, there are concerns about the control Indigenous communities have over their language, how it's being used, and how these tools are being developed, since there may be input from other data sources. And then there's also the accuracy of the disaster materials themselves: if this kind of combined model were used to translate, for example, Yugtun, the Yup’ik language, there could be inaccuracies in the way information is communicated.

Rosenthal: Yeah, that sounds right to me. I think there are two buckets of concern. One, like you're saying, is this question of accuracy in conveying really important information for people who are trying to find out how to get financial assistance for rebuilding after a disaster, and making sure that information is both literally accurate and culturally right and respectful. And then there's this other series of questions around sovereignty, how Native people stay in control of how their cultural information is used, and making sure there aren't opportunities for exploitation there.

KYUK: So where does it go from here? FEMA didn't directly answer whether or not they're working with this company that's using the combo of AI and translators. But what are you looking at or continuing to follow as you think about this broader issue of Indigenous languages and the potential use of AI in translating materials?

Rosenthal: Again, all of these are such new questions, and both the technology and the legal precedents around it are evolving really quickly; that's one thing I heard from the experts I talked to. So one thing I'll be watching is how this concept of data sovereignty, which has been coming up increasingly in questions of international governance and Indigenous communities, becomes expressed, or not, in U.S. federal government policy, and how these agencies make sure to take into account the questions the technology raises around sovereignty. FEMA, again, has taken some steps to improve its translations following what happened after Typhoon Merbok, but this is a whole other area of questions where the agency was less forthcoming about how it regulates its use of AI. And as AI becomes a bigger part of our day-to-day lives in all these different ways, it'll be really important to see how people keep an eye on that.

KYUK: Beyond what we've been able to touch on, is there anything else about this kind of convergence of AI and Indigenous languages that you think is important for people here in the [Yukon-Kuskokwim] Delta to know?

Rosenthal: I think one thing that's important to remember about AI is that even this company, in its model, seemingly always uses AI alongside human translators. Particularly around Indigenous languages, but in a lot of arenas where we're starting to see AI used, AI is not perfect. It's essentially replicating and guessing what it thinks real people would do. So be a cautious consumer when you're getting information that might be coming from AI, and make sure it matches up with what's coming from real people in your community.

KYUK: Well, thank you so much for sharing about your reporting.

KYUK's Evan Erickson contributed to this story.

Sage Smiley is an editor for KYUK. She was KYUK's news director from 2023 to 2025.