
XAI and Data Governance: The winning combo for ethical and transparent AI!

23 February 2023

Louis Perrodo

Cybersecurity Governance Protection Data Ethics

Artificial Intelligence (AI) is at the heart of our society's concerns today: platform recommendation systems, autonomous driving, and generative conversational AIs such as ChatGPT and Google Bard. However, these AIs often act as black boxes, making it difficult to understand their reasoning when they make a decision or prediction. That is why it is essential to promote ethical principles in the governance of the data used to train these models, so as to protect users' security and integrity. It is just as essential to be able to "open the black box" and make their inferences explainable.

EXPLAINABLE ARTIFICIAL INTELLIGENCE (XAI)

Explainable Artificial Intelligence (XAI) is an approach that makes it possible to understand the results and conclusions of machine learning algorithms, to characterize their accuracy and potential biases, and to build user confidence. "Black box" AI models, which are highly complex to interpret, make it hard to understand how an algorithm arrived at a result. And AI algorithms have already exhibited biases in production, often related to ethnicity, gender, or religion.

For example, in 2022, BlenderBot3, the chatbot developed by Meta, began making racist remarks after only a few days of public use. How did this happen? The data it encountered in production differed from its training data.
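To illustrate the kind of gap that tripped up BlenderBot3, here is a minimal sketch in Python of one common way to flag such train/production drift: a two-sample Kolmogorov-Smirnov test from SciPy applied to a single numeric feature. The data below is synthetic and purely illustrative; Meta's actual pipeline is not public.

```python
import numpy as np
from scipy.stats import ks_2samp

def detect_drift(train_col: np.ndarray, prod_col: np.ndarray, alpha: float = 0.05) -> bool:
    """Two-sample Kolmogorov-Smirnov test: flags a feature whose
    production distribution differs significantly from training."""
    _, p_value = ks_2samp(train_col, prod_col)
    return p_value < alpha  # True -> distributions likely differ

# Hypothetical example: training data centred at 0, production data shifted
rng = np.random.default_rng(42)
train = rng.normal(loc=0.0, scale=1.0, size=5_000)
prod = rng.normal(loc=0.8, scale=1.0, size=5_000)

print("Drift detected:", detect_drift(train, prod))  # Drift detected: True
```

In practice such a check would run per feature on every production batch, so that a model is retrained or taken offline before drifted inputs produce harmful outputs.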

The benefits of explainable AI are significant:

- Increased reliability: By explaining the characteristics and the justification behind the AI's output, humans are more likely to trust the model, which strengthens the AI's reliability.
- Regulatory compliance: In sectors such as finance and health, integrating machine learning models into complex risk assessment strategies may be necessary to comply with regulatory requirements for risk management.
- Ethical justification and bias removal: Because XAI is transparent and easier to debug, unconscious biases can be caught and avoided, and ethical decisions can be explained.
- Actionable and robust information: AI and machine learning already deliver actionable, robust insights, but XAI lets humans understand how and why the algorithm reached the decision it judged best.

XAI has strong potential in areas where it fosters trust in the use of artificial intelligence:

- Healthcare: establishing diagnoses transparently by tracing the decision-making behind patient care.
- Financial services: strengthening confidence in its use for loan or credit approval.
- Justice and police investigations: accelerating decision-making on DNA analyses while making the elements that justify them understandable.
- Autonomous driving: understanding the AI's decisions in the event of a road accident, for example.

THE THREE KEY TECHNIQUES OF EXPLAINABILITY TO UNDERSTAND HOW AI WORKS

XAI can be divided into three categories of techniques: global explainability, local explainability, and cohort explainability.

- Global explainability aims to explain the behavior of the model as a whole, identifying the features that most influence its predictions overall. It gives stakeholders insight into which characteristics the model relies on when making decisions, for instance in a recommendation model, to understand which features are most engaging for customers.
- Local explainability helps to understand the model's behavior at the level of an individual prediction, determining how much each feature contributed to that specific output. It is particularly useful for diagnosing a problem in production or discovering which elements most influenced a given decision, which makes it especially valuable in sectors such as finance and health, where individual cases matter.
- Cohort explainability sits between global and local explainability and compares groups or populations to understand how the model behaves in specific cases.

A minimal sketch of all three techniques follows the list.
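The open-source SHAP library is one common way to produce all three kinds of explanation from a single trained model. In the sketch below, the dataset, model, and cohort split are illustrative choices, not prescribed by any vendor; it assumes shap and scikit-learn are installed.

```python
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Illustrative model: a random forest on scikit-learn's diabetes dataset
X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)   # model-specific explainer for tree ensembles
shap_values = explainer.shap_values(X)  # one contribution per feature per prediction

# Global explainability: average impact of each feature across the whole dataset
shap.summary_plot(shap_values, X)

# Local explainability: why the model produced this output for one specific row
shap.force_plot(explainer.expected_value, shap_values[0], X.iloc[0])

# Cohort explainability: the same global view, restricted to a subgroup of rows
cohort = X["sex"] > 0  # hypothetical cohort split on a (standardized) feature
shap.summary_plot(shap_values[cohort.values], X[cohort])
```

The same SHAP values feed all three views; what changes is only the scope over which the per-prediction contributions are aggregated.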

DATA GOVERNANCE: A MAJOR CHALLENGE FOR ENSURING RESPONSIBLE USE OF AI

Data governance plays a crucial role in ensuring the quality, protection, and accountability of data use. It ensures that data is used ethically and responsibly, in compliance with laws and regulations. Controlling data is also important for maintaining users' trust in the companies that use it. However, AI algorithms are often criticized for their lack of transparency and accountability, which can lead to negative consequences for users.

Data governance thus becomes a major prerequisite for generating training data for models.

WHAT CHALLENGES FOR MORE ETHICAL AI?

To make AI systems more ethical, data governance must address several challenges and concerns, promoting the accountability and transparency of these tools in users' daily lives.

- Data quality: Data used to train AI models must be of high quality and free from bias. Incomplete, inaccurate, or biased data undermines the accuracy and reliability of the resulting model (a minimal balance check is sketched after this list).
- Confidentiality and data security: Data used for AI can contain sensitive personal information. Its confidentiality and security must be guaranteed throughout the data lifecycle, from collection to destruction.
- Data ownership: Ownership can be complex, especially when data is collected from third parties or in shared data environments. Ownership rights must be clarified to avoid conflicts.
- Governance and regulation: Companies must comply with data protection regulations, such as the GDPR in Europe or the CCPA in California, and must establish data governance policies and procedures to ensure responsible and ethical use of data.
- Transparency and explainability: AI models can be opaque, making their decisions hard to understand. Making models more transparent and explainable builds trust and eases the adoption of AI.
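As a concrete example of the first point, a governance team might run a simple pre-training balance check on a sensitive attribute. The following sketch uses pandas with hypothetical column names and data; the 0.2 threshold is an illustrative policy choice, not a standard.

```python
import pandas as pd

def check_group_balance(df: pd.DataFrame, sensitive_col: str, label_col: str,
                        max_rate_gap: float = 0.2) -> pd.Series:
    """Positive-label rate per group; warns if the rates diverge too much."""
    rates = df.groupby(sensitive_col)[label_col].mean()
    gap = rates.max() - rates.min()
    if gap > max_rate_gap:
        print(f"Warning: positive rate varies by {gap:.2f} across "
              f"'{sensitive_col}' groups; audit or rebalance before training.")
    return rates

# Hypothetical loan-approval dataset with a sensitive attribute
df = pd.DataFrame({
    "gender":   ["F", "F", "M", "M", "M", "F", "M", "F"],
    "approved": [0,   1,   1,   1,   1,   0,   1,   0],
})
print(check_group_balance(df, "gender", "approved"))
# Warns: approval rate is 0.25 for F vs 1.00 for M in this toy sample
```

A check like this catches only one narrow kind of imbalance, but embedding it in a governance pipeline makes the "free from bias" requirement something that is measured rather than assumed.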

DATA GOVERNANCE AND EXPLAINABLE AI, AN INDISPENSABLE DUO?

Explainable AI and data governance are closely related. Explainable AI depends heavily on the quality and availability of data. Data governance can play a key role in providing reliable, high-quality data for AI systems, and in turn, explainable AI can help improve data governance. Explainable AI can identify the data characteristics that matter most to the decisions an AI model makes; data governance can then ensure that those characteristics are properly collected, stored, and processed, guaranteeing the reliability and accuracy of the models.
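One way to picture this loop: a model-agnostic global technique such as scikit-learn's permutation importance can rank the features a model actually relies on, and governance teams can then focus their quality and lineage controls on those fields. A minimal sketch, with an illustrative dataset and model:

```python
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Illustrative data and model; any fitted estimator would do
X, y = load_diabetes(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X_train, y_train)

# Shuffle each feature and measure how much the score degrades
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# The top-ranked features are the ones whose collection, storage, and
# processing most deserve strict governance controls
ranking = sorted(zip(X.columns, result.importances_mean), key=lambda t: -t[1])
for name, score in ranking[:5]:
    print(f"{name}: {score:.3f}")
```

The output is a prioritized list that turns the abstract "most important data characteristics" into concrete fields a data steward can act on.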

In conclusion, explainable AI and data governance are key to ensuring that AI plays a responsible and ethical role in our modern society. AI that can explain its reasoning and guarantee the confidentiality of the data it uses would go a long way toward keeping us from sliding blindly into "Big Brother" scenarios that no longer seem so far from a near-future reality!