Players of Valorant, the free-to-play first-person shooter developed by Riot Games, began having their voice chat recorded on July 13. Riot says the recordings will help it develop language models for evaluating user reports across all of its games in the future.
Riot is using the data gathered from these recordings to build the beta version of the system, which it hopes to release later this year. For now, only English-speaking Valorant players in North America will be analyzed. The only way to avoid the system is to turn off voice chat or use a separate app such as Discord.
Voice evaluation during this period will not be used for disruptive behavior reports; that will only begin with the future beta. "We know that before we can even think of expanding this tool, we'll have to be confident it's effective, and if mistakes happen, we have systems in place to make sure we can correct any false positives (or negatives for that matter)," the company says.
Once the system is implemented, there will be no active monitoring of in-game communications; voice logs will be reviewed only if you are reported for disruptive behavior. As with text-chat reports, the company says it will erase the recordings once the issue has been resolved. Even so, like the always-on Vanguard anti-cheat system, which tracks activity both inside and outside of Valorant, the program is certain to raise privacy concerns for some players.
Voice monitoring is not the only way Valorant handles toxic players. Earlier this year, Riot began letting Valorant players add specific words and phrases to a muted-words list designed to filter abusive language out of chat.