Ethical AI — Applying the 7 Principles

Abhinav Ajmera
8 min read · Feb 18, 2022


Ethical AI is based on seven principles, as described here. However, to abide by these seven principles while creating next-gen AI applications, it is important to understand how to apply them. In this article, we will first look at the key components involved in building an AI application, and then see which of the seven principles can be applied to which component during development.

In a nutshell, the development of an AI application is described below:

At the core, we use DATA to train the ALGORITHM using COMPUTE. TRAINING uses the underlying compute, and once training is complete, we get a trained MODEL, which is leveraged by the AI APPLICATION to make predictions. The creation and maintenance of the model is managed via the MODEL LIFECYCLE.

Let us look at each of these components in detail and the role each can play in aligning with the principles of Ethical AI.

1. Data — Data is the basic building block for creating an AI model. All AI models learn from data, and a model is only as good as the data it is trained on. There are various types of data (structured, semi-structured, unstructured), which can be either labelled [for supervised learning] or unlabelled [for unsupervised learning]. If the data has underlying biases, then a model trained on that data will inherit the same biases. Say we are creating a model to predict whether a given customer will default on a personal loan. If one of the features is ‘ethnicity’ and the data we use to train the model is biased against a particular ethnic community (i.e. for that community, the data shows a very high default rate), the model created from this data will carry the same bias when predicting loan defaults.

Creating models with such data violates the ‘Fairness’ principle of Ethical AI. We can apply the following treatments to ensure alignment with the fairness principle; a code sketch follows the list.

a. Remove features that identify race, ethnicity, religion, or community while training the models.

b. Sample the data using techniques such as stratified sampling, so that samples are drawn proportionately from each sub-group.

c. Before considering data for training, identify its skew using statistical analysis and fix it (e.g. by adding more data for under-represented groups).
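As a minimal sketch of treatments (a) and (b) using pandas and scikit-learn; the file name and the ‘ethnicity’/‘defaulted’ columns are hypothetical, for illustration only:

```python
import pandas as pd
from sklearn.model_selection import train_test_split

# Hypothetical loan dataset; file and column names are assumptions.
df = pd.read_csv("loans.csv")

# (a) Drop features that identify protected attributes before training.
X = df.drop(columns=["ethnicity", "defaulted"])
y = df["defaulted"]

# (b) Stratify the split on the sub-group column so the training set
# preserves each sub-group's proportion in the original data.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=df["ethnicity"], random_state=42
)

# (c) Inspect skew first: compare default rates across sub-groups.
print(df.groupby("ethnicity")["defaulted"].mean())
```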

2. Algorithms [Models] — It is the algorithm that learns from the training data to create a model. Basically, based on the training data, the algorithm tweaks its internal parameters. There are various kinds of models, such as linear regression, k-Nearest Neighbours, Random Forest, Artificial Neural Networks, Convolutional Neural Networks, Recurrent Neural Networks, etc. As it is the trained model that provides AI predictions, the explainability of a prediction (which is one of the principles of Ethical AI) is a function of the complexity of the model.

At a broad level, models can be divided into three categories. First, parametric models, e.g. linear regression and classification models: these have an equation at the core, and the coefficients of the equation are tuned/calculated during training. They provide very good explanations of their predictions, but their accuracy on complex AI problems tends to be lower. Second, tree-based and other non-parametric models such as Random Forest, Decision Tree, k-Nearest Neighbours, Support Vector Machines, etc.: these can tackle complex machine learning problems on structured data, but their explainability is slightly lower than that of parametric models. Finally, there are Neural Network based models: these can be very complex, with millions of parameters to be tuned, but can solve very complicated problems with both structured and unstructured data. Neural Network based models are more like black boxes and are the lowest in explainability. To increase the explainability of models, we can use SHapley Additive exPlanations (SHAP) values to identify how much each predictor contributes, both at a global level [for the overall model] and for each individual prediction. The shap library is available for commonly used languages such as Python. It has methods such as:

a) TreeExplainer() — for getting SHAP values for tree-based models such as Random Forest, Decision Tree, etc.

b) DeepExplainer() — for getting SHAP values for Neural Networks

c) KernelExplainer() — for other non-parametric models such as k-Nearest Neighbours.

Publishing the SHAP values as a practice for each trained model can ensure alignment with the ‘Explainable AI’ principle of Ethical AI.
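As a minimal sketch, assuming the hypothetical X_train/X_test split from the earlier snippet, SHAP values for a tree-based model can be computed and summarised like this:

```python
import shap
from sklearn.ensemble import RandomForestClassifier

# Assumes X_train, y_train, X_test from the earlier (hypothetical) split.
model = RandomForestClassifier(random_state=42).fit(X_train, y_train)

# TreeExplainer() suits tree-based models such as Random Forest.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)

# Global view: mean absolute contribution of each predictor to predictions.
shap.summary_plot(shap_values, X_test)
```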

There is another principle of Ethical AI, Sustainable AI, which can also be addressed through algorithms. Training each model consumes energy (and data), and as model complexity increases (Parametric < Tree/Non-Parametric < Neural Network), the data and energy required to train the model also increase. While we can address complex patterns with complex models, this comes at the cost of more data and energy for training. Hence, while creating the model, an earnest effort should be made to perform feature engineering in consultation with domain experts, which can reduce the complexity of the target machine learning model [by reducing the number of features required]. At times, we should also weigh the increase in accuracy gained by adopting a complex model, and if the business permits, go with a simpler model [albeit at slightly lower accuracy], as sketched below.
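A minimal sketch of that accuracy-versus-complexity check, reusing the hypothetical split from above; the 2% tolerance is an assumed business threshold, not a standard:

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

# Train a simple parametric model and a more complex ensemble on the same data.
simple = LogisticRegression(max_iter=1000).fit(X_train, y_train)
complex_model = RandomForestClassifier(random_state=42).fit(X_train, y_train)

acc_simple = simple.score(X_test, y_test)
acc_complex = complex_model.score(X_test, y_test)

# Prefer the simpler, cheaper-to-train model unless the complex one
# is clearly better; the tolerance value is an assumption.
TOLERANCE = 0.02
chosen = complex_model if (acc_complex - acc_simple) > TOLERANCE else simple
print(f"simple={acc_simple:.3f} complex={acc_complex:.3f} chosen={type(chosen).__name__}")
```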

3. Training — Training is the process where we train the algorithm [model] by feeding it the data using the compute. The inputs to the training process are the algorithm [model] and the data, and the output is a trained model. It is during training that we decide how much hyper-parameter tuning is done for the model, how many cycles of training to perform, and what the acceptable and achievable accuracy for the trained model is. Selection of the final algorithm, by comparing the accuracies of different models, also happens in the training process. Given that the amount of training is directly proportional to the energy consumed, the principle of Sustainable AI can be addressed at this layer. Training should proceed in a step-wise fashion, with emphasis first on selecting the right features, then on selecting the model, and finally on hyper-parameter tuning. The number of features, the data available for training, and the target accuracy should be taken into consideration while creating a subset of algorithms and hyper-parameters to evaluate. Advanced techniques like Bayesian hyper-parameter optimisation should be leveraged to arrive at the target hyper-parameters with minimal iterations, as opposed to an exhaustive search. Another approach for reducing training is Transfer Learning, where you do not train a model from scratch but take a pre-trained model and train it further with the data for the problem at hand. However, during Transfer Learning, the bias of the pre-trained model should be taken into consideration.
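As one possible sketch of Bayesian-style hyper-parameter optimisation, using the Optuna library (its default sampler is a sequential model-based optimiser); the search space and trial budget are assumptions for illustration:

```python
import optuna
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def objective(trial):
    # Hypothetical search space for illustration.
    model = RandomForestClassifier(
        n_estimators=trial.suggest_int("n_estimators", 50, 300),
        max_depth=trial.suggest_int("max_depth", 2, 12),
        random_state=42,
    )
    return cross_val_score(model, X_train, y_train, cv=3).mean()

# 25 guided trials instead of an exhaustive sweep over the whole grid.
study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=25)
print(study.best_params)
```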

4. Compute — We need underlying compute to train the machine learning model. With cloud computing, different types of compute are available, e.g. varying numbers of cores, GPUs, etc. However, it is also important to consider energy efficiency while selecting the underlying compute. Selecting highly energy-efficient compute ensures that we use less energy. This becomes even more important since training is not a one-time process: it is an integral part of the overall model lifecycle and keeps happening even once the model is deployed into production, in order to keep the model relevant. By choosing energy-efficient compute, we can align with the principle of ‘Sustainable AI’.

5. Model (Trained) — Once the model is trained, it is fit to be used by AI applications for making predictions. There is a prediction accuracy associated with every trained model, which can be leveraged to decide what kinds of applications the trained model is fit for. For mission-critical applications, the trained model’s accuracy should be very high. If the trained model has a high rate of false positives, then applications should use it only alongside additional checks, or should limit its usage in certain fields. For example, a trained model with a high rate of false positives is not suitable for applications that identify potential criminals. Similarly, for automated vehicles, you need a model with very high accuracy to prevent accidents. Thus, the principle of Robust and Safe AI can be addressed by the trained models.
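A minimal sketch of such a fitness gate, computing the false-positive rate from a confusion matrix; the 5% limit is an assumed threshold, and model/X_test/y_test come from the earlier hypothetical sketches:

```python
from sklearn.metrics import confusion_matrix

# Binary case: ravel() returns tn, fp, fn, tp in that order.
tn, fp, fn, tp = confusion_matrix(y_test, model.predict(X_test)).ravel()
fpr = fp / (fp + tn)

MAX_FPR = 0.05  # assumed limit for a sensitive use case
if fpr > MAX_FPR:
    print(f"False-positive rate {fpr:.2%} too high: add checks or restrict usage")
```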

6. Model Lifecycle — Once a model is put into production, its accuracy can degrade over time due to changes in external circumstances or in the environment in which the model is used. This decrease in model performance (accuracy) over time can reduce its robustness and compromise safety, depending upon its usage. The principle of Robust and Safe AI can therefore also be addressed at the Model Lifecycle layer. The model lifecycle takes care of keeping the model relevant and keeps track of different versions of the model, linking each version to the data used for training. It is in the model lifecycle that we decide how often the model will be retrained and deployed into production, and how often feedback on the model’s performance in production will be gathered, followed by corrective actions to maintain the model’s relevancy. Using advanced tools and processes such as MLOps, we can build robust and safe AI applications by keeping models relevant and ensuring that model accuracy is maintained and does not deteriorate over time.
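A minimal, self-contained sketch of that feedback loop: compare live accuracy against the accuracy recorded at deployment, and flag retraining when it drifts (both accuracy figures and the threshold are illustrative assumptions):

```python
def needs_retraining(deployed_acc: float, live_acc: float,
                     drift_threshold: float = 0.05) -> bool:
    """Return True when live accuracy has drifted beyond the tolerance."""
    return (deployed_acc - live_acc) > drift_threshold

# Accuracy recorded at release vs. accuracy on recent labelled traffic.
if needs_retraining(deployed_acc=0.91, live_acc=0.84):
    print("Performance drift detected: trigger the retraining pipeline")
```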

7. Application — The application is the interface of the trained AI model with the external world. Applications use the trained models, feed them data, and share the predictions with the consumer [an end user or other applications]. Whether the model is used for society’s welfare or for some malign purpose is completely dependent upon the application. For example, suppose there is an AI model that can detect whether an individual has a cataract just from an image of the individual’s eyes. It is up to the application to restrict irresponsible use of the model. The application can limit the chances of identifying an individual’s medical condition without consent (e.g. from images available off the internet) by ensuring that it works only with an active camera feed. This greatly reduces the chances of misuse without the individual’s consent. Hence, the Application layer, along with the owner of the application, addresses the Respectful of Privacy and Data Protection principle. It is also at the application layer where accountability can be fixed: either on the one creating the application or on the one putting it into use. The impact of the AI is likewise determined at the Application layer. Since a model by itself cannot decide its usage (it is the application that uses the model), the application layer, to a large extent [along with the user of the application], can address the principles of Carefully Delimited Impact, Controllable and Clear Accountability, as well as Respectful of Privacy and Data Protection.

There is one more principle that can be taken care of by the Application layer: Robust and Safe AI, by performing basic rule-based checks on the model predictions. For example, if the output values are expected within a range, then the application can discard model predictions that fall outside that range and also provide feedback into the model lifecycle.
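A minimal sketch of such a rule-based guard-rail; the valid range here is an assumed business rule:

```python
def guard_prediction(prediction: float, low: float = 0.0, high: float = 1.0):
    """Accept a prediction only if it falls inside the expected range."""
    if low <= prediction <= high:
        return prediction
    # Out-of-range output: discard it and surface feedback to the model lifecycle.
    print(f"Rejected out-of-range prediction: {prediction}")
    return None

print(guard_prediction(0.42))  # accepted
print(guard_prediction(7.3))   # discarded
```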

The table below summarises which of the seven principles is addressed by which component/layer:

Component/Layer       Ethical AI Principle(s) Addressed
Data                  Fairness
Algorithms [Models]   Explainable AI; Sustainable AI
Training              Sustainable AI
Compute               Sustainable AI
Model (Trained)       Robust and Safe AI
Model Lifecycle       Robust and Safe AI
Application           Respectful of Privacy and Data Protection; Carefully Delimited Impact; Controllable and Clear Accountability; Robust and Safe AI
