Dozens of rights groups have asked Zoom to stop developing emotion-monitoring technology that would analyze the emotional responses of its users. The concept seems intrusive right off the bat, but the underlying practice is hardly unique: employers can already track everything their remote employees do, from mouse clicks to browsing activity.
Even though these products draw polarizing reactions, the monitoring of user activity already happens in many ways and from many sources. A recent analysis found that data mining and advertising are common practice among mental health and meditation applications. The pandemic accelerated the expansion of user-behavior monitoring, often disguised as an assessment of employee availability and attention. While Zoom has been a huge success story in this period of rapid technological change, it now appears the company was also working on a worrying project of its own.
27 human rights groups, including Fight for the Future, have signed an open letter asking Zoom to stop developing AI-driven emotion-monitoring tools that would study video call participants' facial expressions. The organizations describe the technology as discriminatory, manipulative, and potentially harmful, arguing that it rests on the erroneous premise that speech patterns, facial gestures, and body language carry the same meaning for everyone. The criticism is well-founded: facial recognition systems and other AI-based technologies have repeatedly been shown to be error-prone when processing images of people of color and people with diverse body types.
The concept of AI assessing the emotions of someone on a video call is unsettling, regardless of how cautiously Zoom's executives frame it. Lawmakers in the United States have been advocating for the regulation of artificial intelligence for years, and the concerns are well-founded.