Who We Are
Roya Pakzad, the creator of the Multilingual AI Lab, is a Senior Fellow at the Mozilla Foundation and the founder of Taraaz, a technology and human rights non-profit. Previously, she was a Data & Democracy Fellow at the University of Virginia and held positions at Stanford University and Advanced Micro Devices (AMD). With a background in electrical engineering and human rights, she brings expertise spanning digital technologies, corporate accountability, and human rights-centered design. Born in Iran, she now resides in California and writes the newsletter Humane AI. She can be reached at rpakzad@taraazresearch.org.
The Inspiration
As a technology and human rights researcher, a multilingual internet user, and a first-generation Iranian immigrant to the United States, Roya Pakzad has long viewed language not only as a form of identity, cultural representation, and belonging but also as a source of power asymmetry, exclusion, and discrimination. In her work assessing the human rights impacts of digital technologies and red-teaming AI systems, she noticed a recurring pattern: while issues of trust, safety, and human rights are deeply relevant to non-US and non-English-speaking users, the attention and resources devoted to them are often insufficient, delayed, or treated merely as reputation management.
The Multilingual AI Safety Evaluation Lab was created to help change that. Where a lack of tools and data makes it difficult for AI developers to identify and measure these issues, the Lab provides a structured framework for doing so. Where government agencies need better information to evaluate the usability of AI tools for their constituents or to vet vendors responsibly, the Lab equips them with that capability. And where researchers and civil society organizations need empirical evidence to strengthen their advocacy efforts, this platform makes that possible.
With the support of the Mozilla Foundation, Roya built this initiative to confront the persistent neglect of global and multilingual communities in the AI safety ecosystem.
Collaborate with Us
We are actively seeking partners to collaborate on multilingual AI evaluations. Whether you’re a government agency assessing AI tools for public use, a UN or humanitarian organization leveraging technology for social good, a civil society group advocating for accountability in AI systems, a researcher studying AI safety, or an AI lab testing model performance, we’d love to connect and help you make meaningful use of the Multilingual AI Evaluation Platform.