Have you ever experienced digital bullying or harassment in the workplace? With tools such as Zoom, Slack, and Microsoft Teams powering the #FutureOfWork, everyone is always on and always connected.
Messages can be deleted, but everything is recorded in the cloud, and emotionally charged statements can be shared and regretted later.
As a result of the passive, “snowflake-filled” culture pervading the tech industry, what you say can live on as a permanent digital record for the rest of your life.
You have heard the stories: Kevin Spacey, Justin Caldbeck, Harvey Weinstein, Kevin Hart, and Louis CK, among others, have made charged remarks or written derogatory comments about women, gay men, and female startup founders.
Most harassment stories, however, never make prime-time news headlines.
Workplace Harassment is Real
Buried beneath the stories that led to class actions at Uber and other tech companies lies a harder question: how has the shift to #DigitalFirst working and personal relationships created an industry where everyone is quick to fire off an opinion from the keyboard without thinking of its impact on other humans?
Sure, there is #FakeNews, there are bad actors, and there is paid promotion to rally emotional support when it comes to politics in US and international elections. Yet, contrary to popular belief, the workplace is also a strategic game of politics, one that unfortunately often produces both winners and losers.
In this digital age, ill intent runs high both online and in the workplace, and it often translates into harassment.
Since we all want to work and socialize in a safe environment free of digital bullying, the right technology, such as #ArtificialIntelligence and #MachineLearning, is needed to achieve meaningful productivity.
Statistics on Bullying at the Workplace
Unfortunately, bullying and harassment could get worse: a recent survey¹ found increasing cases in the workplace despite intervention from management. Women were the most affected, accounting for 66% of reported bullying, versus 34% for men.
These statistics point to a glaring truth about harassment and bullying, whose impact on our social health has been detrimental. Companies that let employees experience harassment see both their reputation and their productivity dented.
On to some good news: artificial intelligence experts² are working on bots to identify sexual harassment, and this could be a game changer as bullying claims rise every day.
Dubbed #MeTooBots, they can monitor communication among employees, including email, chat, and social media, flagging suspect conversations and subsequently reporting them.
Our daily interactions generate an enormous volume of conversations, and tracking negative comments across them poses hurdles that AI programmers aim to eliminate with #MeTooBots.
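To make the idea concrete, here is a deliberately minimal sketch of the kind of filter such a bot might apply before escalating a conversation to a human reviewer. The function names and the phrase list are hypothetical illustrations, not taken from any real product; production systems use trained language models rather than a fixed word list.

```python
# Toy sketch of a #MeTooBot-style message filter (illustrative only).
# FLAGGED_PHRASES is a made-up stand-in for a trained classifier.
FLAGGED_PHRASES = {"know your place", "sweetheart"}

def flag_message(text: str) -> bool:
    """Return True if the message contains a phrase worth escalating."""
    lowered = text.lower()
    return any(phrase in lowered for phrase in FLAGGED_PHRASES)

def review_channel(messages: list[str]) -> list[str]:
    """Collect the messages a human reviewer (e.g. HR) should see."""
    return [m for m in messages if flag_message(m)]
```

A simple phrase list like this misses sarcasm, context, and coded language, which is exactly the limitation discussed in the next section.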
Teaching Bots to Spot Harassment and Bullying
Despite the breakthrough, teaching computers the nature of harassment is challenging, since they have to account for subtleties and grey areas.
As much as machines can take instructions and filter information, bullying takes many forms that computers may struggle to distinguish.
The research paper: An Introduction to Artificial Intelligence and Solutions to the Problems of Algorithmic Discrimination³ explores bias by assessing the viability of algorithmic solutions in data sets.
Nevertheless, programmers have developed algorithms to detect harassment and bullying, with flagged information sent to HR departments for further investigation.
According to NexLP, an AI firm based in Chicago¹, the analysis works by using indicators in communication patterns to spot anomalies, training computers to detect and report bullying.
Bots consider the timing and frequency of communication when examining harassment claims across large volumes of data. But as bots address harassment problems, they have limitations; humans can spot bullying far more easily.
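As an illustration of the frequency signal mentioned above, the sketch below flags sender–recipient pairs whose message volume is a statistical outlier. The z-score threshold and message fields are assumptions for illustration, not NexLP's actual method.

```python
# Hedged sketch: surface sender->recipient pairs with anomalously
# high message counts, one of the pattern indicators described above.
from collections import Counter
from statistics import mean, stdev

def frequency_outliers(messages, z_threshold=2.0):
    """Flag (sender, recipient) pairs whose message volume is an outlier."""
    counts = Counter((m["sender"], m["recipient"]) for m in messages)
    values = list(counts.values())
    if len(values) < 2:
        return []  # not enough pairs to establish a baseline
    mu, sigma = mean(values), stdev(values)
    if sigma == 0:
        return []  # all pairs identical; nothing stands out
    return [pair for pair, c in counts.items()
            if (c - mu) / sigma > z_threshold]
```

A flagged pair is only a lead for a human investigator; high volume alone says nothing about the content of the messages, which is why such signals are combined with text analysis in practice.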
Prof. Subirana of Harvard⁴ echoes similar sentiments, criticizing bots for their inability to match people at detecting and reporting anomalies in communication.
#AI-based technologies cannot perform at an optimum level because of the training sets they rely on, and this could limit their ability to address harassment.
Socialization and communication are unpredictable and hard for computers to understand, leading to inaccurate results. Manipulation is also a risk: people can alter how these bots function, breeding mistrust among staff members.
Bots can only cover the indicators they were trained on, meaning offenders can switch to other communication channels or alter their behavior. Gaps remain in deploying bots to address harassment claims within conversations.
Using AI Tools to Curb Harassment and Bullying
In Britain alone, a third of women in the legal sector report having experienced harassment, and the situation keeps worsening.
The same applies in the United States, where over 50% of women⁵ working in Silicon Valley face harassment. Deploying AI tools to address the problem is already in full gear, with companies such as Alert-a-buddy² offering employers real-time updates.
Machine-learning algorithms are being applied in favor of employees, where they report bullying and give victims a safe haven from this social nightmare.
Spot³ is an AI technology that enables employees to recount harassment and record the information for further analysis. The bot relays the message privately, which facilitates fair examination of all harassment claims.
Botler AI⁴ helps people identify the underlying issues in their cases and assess the strength of their legal arguments. The technology has been used in Canada and the US, where people can send information to agencies or even their HR managers for review.
Botler AI reports an accuracy of 90%⁶, suggesting the technology can hold up in the real world, where bullying is prevalent.
Bots for Monitoring Communication
Privacy is a sensitive issue, especially in the communication space where people expect their information to remain secure without exposure to third parties.
There have been concerns that bots can leak employee data, damaging reputations and creating mistrust, according to a recent Bloomberg⁷ survey.
Organizations have standard procedures for communication, and delegating this task to bots risks leaving employee data in the wrong hands.
An organization's communication needs to stay well organized, with nothing leaking into the public domain and everything remaining within the company.
Critics argue that using communication bots weakens the control structure and interferes with organizational culture.
Staff members expect their information to remain secure; even with concerns about the ability of bots to safeguard it, technology solutions remain the most viable option.
Companies should decide on best practices for their communication and technology to create cohesion.
Today, organizations familiar with artificial intelligence recognize the importance of educating their staff on how the technology works.
As employees realize their communication patterns are being checked, the Hawthorne effect⁸ kicks in, leading to fewer instances of harassment as people understand the consequences.
Reviewing past communication trends is another way to use bots without causing controversy within the company, since employees worry less about live surveillance.
Either way, we should see bots as technology assistants, not perfect judges who can sift through our conversations and detect bullying or harassment without error. We need to be realistic about technology's limitations; working together to address these claims is the best alternative.
Human Interpretation of Bot Findings
Bots are not yet perfect at handling bullying claims, even as tech firms work to address the problem. Startups need to understand how this technology behaves in social contexts and find better ways to generate accurate results.
Bots carry their own risks, such as exposing company information to the outside world, which calls for a cautious approach when deploying the technology.
The #MeTooBots idea is a noble one, and programmers need to examine the underlying issues to make the technology work for people.
Computers can only see and judge situations based on the algorithms they were trained on; the rest depends on how humans interpret the findings to address bullying and harassment.