YouTube’s toxic comment sections have long been a source of frustration for the platform’s creators and viewers. The company has tried to address the problem before with measures like a pop-up warning shown to users just before they post, nudging them to be more mindful of others. Now a new tool is coming to the platform that will more forcefully discourage abusive comments and back that up with stronger penalties.
When comments that violate YouTube’s Community Guidelines are removed, the account holders will now be notified. If a user continues to post abusive remarks after receiving the warning, the service will block them from commenting for 24 hours. The company says testing conducted prior to today’s release showed that the combination of notifications and timeouts was highly effective.
Detection of hateful comments is currently available only for English-language comments, though YouTube plans to expand to additional languages in the near future. Notably, the pre-posting warning is already offered in both English and Spanish.
Users who believe their comment was removed in error can submit feedback. However, YouTube has not said whether it will reinstate comments after reviewing that feedback.
YouTube has also been working to improve its AI-powered detection systems, according to a forum post published by the company. It says it removed 1.1 billion “spammy” comments in the first half of 2022, and that it has improved its ability to identify and remove bots from live chats.
Automated detection is one way that YouTube and other social platforms have tackled spam and abusive content. However, abusers frequently evade detection with alternate wording or deliberate misspellings, and hostile remarks in languages other than English remain harder to identify.