A new tool called wâsikan kisewâtisiwin aims to use artificial intelligence to make the internet a safer place for Indigenous people. The project, led by Métis entrepreneur Shani Gwin and her Edmonton-based Indigenous communications firm pipikwan pêhtâkwan, is being developed in collaboration with the Alberta Machine Intelligence Institute (Amii). The tool has a dual purpose: helping Indigenous people navigate online spaces more safely, and helping non-Indigenous Canadians recognize and reduce racism, hate speech, and bias online.
One of the tool's main functions is moderating online spaces such as comment sections. While the internet has been a powerful advocacy tool for Indigenous people, it is also frequently an unsafe space for communities facing discrimination. Gwin emphasized that a single hateful comment is enough to make an online space toxic. The tool works by flagging hateful comments and providing sample responses, while also documenting each instance for future reporting.
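To illustrate the flag-respond-log workflow described above, here is a minimal Python sketch. It is an illustration built on assumptions, not the project's actual code: `flag_comment` and `suggest_response` are hypothetical placeholders for the tool's models, whose real interfaces have not been published.

```python
# A minimal sketch of the moderation workflow: flag hateful comments,
# attach a suggested response, and log each incident for later reporting.
# flag_comment and suggest_response are hypothetical placeholders.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Incident:
    comment: str
    suggested_response: str
    flagged_at: str

def flag_comment(comment: str) -> bool:
    """Hypothetical classifier call; True if the comment is hateful."""
    raise NotImplementedError("Backed by a language model in practice.")

def suggest_response(comment: str) -> str:
    """Hypothetical generator for a sample reply a moderator could post."""
    raise NotImplementedError("Backed by a language model in practice.")

def moderate(comments: list[str]) -> list[Incident]:
    """Flag hateful comments and document each instance for reporting."""
    log: list[Incident] = []
    for comment in comments:
        if flag_comment(comment):
            log.append(Incident(
                comment=comment,
                suggested_response=suggest_response(comment),
                flagged_at=datetime.now(timezone.utc).isoformat(),
            ))
    return log
```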
The tool's second function is a writing plug-in, similar to Grammarly. It is intended to help Canadians recognize their own biases by flagging writing that may be biased against Indigenous people, explaining why, and suggesting a more inclusive and respectful rewording.
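As described, the plug-in could be sketched as a function that returns the flagged passage, an explanation, and a suggested rewording. The names and structure below are assumptions for illustration only, not the project's published design.

```python
# A sketch of the writing plug-in flow: flag potentially biased phrasing,
# explain why it may be harmful, and suggest a respectful rewording.
# BiasSuggestion and review_writing are hypothetical names for illustration.
from dataclasses import dataclass

@dataclass
class BiasSuggestion:
    flagged_text: str   # the phrase the model considers biased
    explanation: str    # why the phrasing may be harmful
    rewording: str      # a more inclusive, respectful alternative

def review_writing(text: str) -> list[BiasSuggestion]:
    """Hypothetical entry point an editor plug-in calls as the user types."""
    raise NotImplementedError("Backed by a language model in practice.")
```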
Ayman Qroon, an associate machine learning scientist with Amii, explained that the system works much like the AI chatbot ChatGPT: advanced software trained to understand and generate human language. Qroon instructs the language model to classify comments as hate speech or not and to provide a rationale for each decision.
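In practice, "instructing" a language model this way usually means prompting it to return a label along with its reasoning. The sketch below shows one common pattern, assuming a generic chat-style API behind a placeholder `call_language_model`; the project's actual prompts and model are not public.

```python
# A sketch of prompting a language model to classify a comment and explain
# its decision. call_language_model is a hypothetical stand-in for whatever
# API the project actually uses; the real prompts are not public.
import json

CLASSIFY_PROMPT = """\
Classify the following comment as HATE_SPEECH or NOT_HATE_SPEECH and explain
your reasoning. Respond in JSON with keys "label" and "rationale".

Comment: {comment}
"""

def call_language_model(prompt: str) -> str:
    """Hypothetical placeholder for a chat/completions API call."""
    raise NotImplementedError("Wire this to a model provider.")

def classify_comment(comment: str) -> dict:
    """Return the model's label plus a human-readable rationale."""
    raw = call_language_model(CLASSIFY_PROMPT.format(comment=comment))
    result = json.loads(raw)
    return {
        "comment": comment,
        "is_hate_speech": result["label"] == "HATE_SPEECH",
        "rationale": result["rationale"],
    }
```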
Gwin highlighted the importance of involving racialized communities, including Indigenous people, in the development of AI to ensure that it does not perpetuate biases. She emphasized that AI is currently designed through the lens of Canada’s dominant culture, and without input from marginalized communities, it cannot analyze and produce culturally safe and respectful content.
Qroon acknowledged that AI can perpetuate bias when the data it learns from is biased. During training, he said, the model would sometimes try to minimize the tragedies that Indigenous people have experienced, which is why it was crucial to involve the Indigenous community in the development process and to seek their perspectives and guidance.
The project has been selected as a semi-finalist for the Massachusetts Institute of Technology’s Solve 2024 Indigenous Communities Fellowship. Gwin expressed her hope that the tool will help take the emotional labor of education off Indigenous people, allowing them to focus on other important tasks besides moderating comment sections. She emphasized that the tool is not meant to replace Indigenous people but to do the work that they may not want to do, while also working to change the hearts and minds of Canadians about who Indigenous people are.