Ever wondered about the risks of exposing algorithms? Companies grapple with the idea of data transparency and fairness, given their importance to business sustainability and market reputation, among other factors. Few realize the risks carried by the information they handle, which calls for best practices in #DataScience such as accountability and ethical guidelines.

As #ArtificialIntelligence continues to generate more information, let’s not forget the prevailing risks. From #Amazon and its faulty AI hiring algorithm that discriminated against women, to allegations of sexism against #Apple’s credit card, there is no doubt that we need to understand the dynamics of data systems management. The big question is: how are organizations managing AI risks? A second question concerns the information acquired about these risks, and how organizations will distribute and safeguard it.

The concerted efforts by academics and practitioners in the AI field toward transparency in AI models are a step in the right direction, as they will curtail trust issues, discrimination, and unfairness. At the same time, evidence suggests that #AI disclosures could cause more harm than good: the exposed information can increase vulnerability to attacks, leading to costly lawsuits for organizations.

On a personal note, I have attended many forums on the #FutureofWork and #Privacy where conversations about transparency in AI continue to gain momentum, and I couldn’t agree more with these concerns.

1. Research Studies

A recent study from Harvard University sheds light on the dark side of AI disclosures by demonstrating how to attack black-box algorithms through their explanations. The researchers focused on techniques such as #LIME and #SHAP, showing how an unstable image classifier detected various objects within an image: for example, it recognized a guitar from its fretboard and bridge, and a Labrador Retriever from the shape of the dog’s face. The bottom line is that tools such as #LIME and #ExplainableAI render algorithms once considered opaque more transparent. The study also shows how much room for manipulation exists both in the model itself and in its explanation layer.
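To see why these explanation tools matter here, it helps to recall how #LIME works: it perturbs an input, queries the black box on the perturbed samples, and fits a simple weighted linear model around the original point. The sketch below illustrates that local-surrogate idea with NumPy; the toy `black_box` function and all names are illustrative, not code from the study.

```python
import numpy as np

# Toy "black box": a nonlinear model we can only query, not inspect.
def black_box(X):
    return (np.sin(X[:, 0]) + X[:, 1] ** 2 > 1.0).astype(float)

def lime_like_explanation(x, n_samples=5000, width=0.5, seed=0):
    """Fit a weighted linear surrogate around the single point x."""
    rng = np.random.default_rng(seed)
    # 1. Perturb the instance to probe the black box locally.
    X = x + rng.normal(scale=width, size=(n_samples, x.size))
    y = black_box(X)
    # 2. Weight samples by proximity to x (closer = more influence).
    w = np.exp(-np.sum((X - x) ** 2, axis=1) / (2 * width ** 2))
    # 3. Weighted least squares yields interpretable local coefficients.
    A = np.hstack([X, np.ones((n_samples, 1))])
    sw = np.sqrt(w)[:, None]
    coef, *_ = np.linalg.lstsq(A * sw, y * sw.ravel(), rcond=None)
    return coef[:-1]  # per-feature local importance (intercept dropped)

x = np.array([0.2, 1.1])
print(lime_like_explanation(x))
```

The point for this discussion is that the surrogate is built entirely from query access, which is exactly the interface an adversary also has: anything that makes the local behavior clearer to a user makes it clearer to an attacker too.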

The same concern appears in research on the dangers of AI transparency by Reza Shokri, who studied machine learning algorithms and the implications of exposing them. Along similar lines, researchers from the University of California, Berkeley found that theft of ML algorithms becomes easier when an explainable model is exposed.
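The extraction risk these researchers describe can be sketched in a few lines: an attacker who can only query a model can still fit a surrogate that replicates it. The toy example below targets a hypothetical linear scorer (the names and model are illustrative, not from the cited work); explanations such as feature attributions would only give the attacker more signal per query.

```python
import numpy as np

rng = np.random.default_rng(42)

# Secret model weights the attacker cannot inspect, only query.
SECRET_W = np.array([2.0, -1.0, 0.5])

def query_api(X):
    """Black-box prediction endpoint exposed to the attacker."""
    return X @ SECRET_W

# Attacker: send chosen inputs, record outputs, fit a surrogate.
X_probe = rng.normal(size=(200, 3))
y_probe = query_api(X_probe)
stolen_w, *_ = np.linalg.lstsq(X_probe, y_probe, rcond=None)

# With enough clean queries the surrogate matches the secret weights.
print(np.max(np.abs(stolen_w - SECRET_W)))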

From the above research studies, one thing is clear: disclosing an algorithm comes with liabilities, such as reduced security. Data carries many risks that organizations should be aware of in order to avert a crisis, and much of it comes down to AI disclosures. These studies underline the importance of adopting ethics, accountability, and transparency in #AI.

2. Key Take-Aways

Transparency is one of the biggest challenges organizations face in the digital world. Upgrading #AI systems and their implementation is one way companies can pursue optimization while safeguarding their data from threats. As the #Futureofwork beckons, the least companies can do is adapt their AI systems to the current technological landscape.

Transparency comes with its own challenges, and organizations must commit to understanding these variables in order to succeed. For example, they should evaluate the cost options and analyze the underlying issues in their data frameworks. They should also adopt a model that incorporates the costs of transparency while developing an engagement strategy around #ExplainableAI, and examine how much model information ends up exposed in the public domain.

3. The Way Forward

There is a need to act on bugs and vulnerabilities at the organizational level, considering the value of security in modern #AI applications. Otherwise, these threats will continue to hamper the development of AI. Re-assessing the legal aspects is also critical as companies adopt and implement AI, since clearer rules create transparency and reduce miscommunication.

An audio version of this Medium article is available on Apple Podcasts and Spotify.