In a recent address, Reserve Bank of India Deputy Governor M Rajeshwar Rao highlighted the critical concerns and expectations for financial institutions implementing Artificial Intelligence (AI) in their operations. Rao categorized the challenges into three main areas: data bias and robustness, governance, and transparency.
Data Bias and Robustness: Rao emphasized that an AI model’s effectiveness is intrinsically linked to the quality of its training data, and that it inherits whatever biases or errors that data contains. He contrasted AI’s learning mechanism with human decision-making, which draws on diverse life experience and is subject to institutional checks and balances that mitigate bias. Rao also cautioned that many AI models are opaque, making them difficult to audit and supervise and exposing institutions to risks such as data poisoning, unexpected behavior, and biased predictions.
Governance Challenges: Rao pointed out the novel governance challenges posed by AI, particularly where it replaces human judgment and oversight. He noted that issues such as prompt injection and toxic output could undermine governance frameworks in financial institutions. Rao suggested that keeping a ‘human in the loop’ and establishing comprehensive governance structures for AI systems could address these issues, and he emphasized the need for regular audits and assessments to ensure compliance with laws and regulatory standards.
Transparency in AI Models: Rao raised concerns about the complexity and opacity of AI models, which could lead to discriminatory outcomes and biases over time. He underscored the difficulty financial institutions may face in explaining adverse decisions made by AI models. Rao proposed ten principles for designing AI solutions in financial institutions, covering fairness, transparency, accuracy, consistency, data privacy, explainability, accountability, robustness, regular monitoring, and human oversight.
The RBI Deputy Governor outlined ten key aspects for financial institutions deploying AI models, aiming to balance innovation with responsible use of the technology:
Fairness: Algorithms must not discriminate on the basis of unethical or illegal attributes. Regular audits and techniques to identify and correct biases are essential; a minimal audit sketch follows this list.
Transparency: Stakeholders, including regulators and consumers, should be able to understand an algorithm’s inputs and how it reaches its decisions.
Accuracy: AI should use accurate and appropriate training data to minimize decision-making errors. Continuous efforts are needed to reduce false positives and negatives.
Consistency: Algorithms must be applied consistently across situations to avoid bias and ensure fair outcomes; frequent parameter changes to suit specific interests should be avoided.
Data Privacy: Adherence to data protection regulations is crucial. AI models must handle personal information securely and responsibly.
Explainability: Entities must clearly explain the factors influencing AI decisions to build trust. Channels for customer queries and disputes should be established.
Accountability: Clear responsibility for AI outcomes should be established, with comprehensive governance frameworks including audits and reviews.
Robustness: AI algorithms should be rigorously tested for performance under different conditions and should not be overly sensitive to small changes in input data; a simple sensitivity check is sketched after this list. Regular updates to training data are necessary.
Monitoring and Updating: Continuous monitoring and updating of AI engines are required to adapt to market changes and risks. This includes monitoring the evolution of self-learning algorithms.
Human Oversight: Human intervention should be built in for complex or ambiguous cases, so that ethical considerations are weighed and unintended consequences and governance issues are addressed promptly; a routing sketch follows below.
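On the fairness principle, one common way to audit a deployed model is to compare its decision rates across groups defined by a protected attribute. The Python sketch below is illustrative only and not drawn from the speech; the sample records, group labels, and loan-approval framing are hypothetical, and the disparate-impact ratio is just one of several fairness metrics an audit might use.

```python
# Illustrative fairness audit: compare approval rates across groups and
# compute a disparate-impact ratio. All data and names are hypothetical.
from collections import defaultdict

def approval_rates(records):
    """Approval rate per group from (group, approved) pairs."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in records:
        totals[group] += 1
        approvals[group] += int(approved)
    return {g: approvals[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Lowest group approval rate divided by the highest; values well
    below 1.0 flag a potential bias that should be investigated."""
    return min(rates.values()) / max(rates.values())

if __name__ == "__main__":
    # Hypothetical audit sample: (group label, did the model approve?)
    sample = [("A", True), ("A", True), ("A", False),
              ("B", True), ("B", False), ("B", False)]
    rates = approval_rates(sample)
    print(rates)                          # ~0.67 for group A vs ~0.33 for group B
    print(disparate_impact_ratio(rates))  # ~0.5, low enough to warrant review
```

A low ratio does not prove discrimination on its own, but it is exactly the kind of signal the regular audits Rao calls for are meant to surface.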
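On robustness, a basic way to test whether a model is overly sensitive to input changes is to perturb each feature slightly and measure how often its decision flips. This is a minimal sketch under stated assumptions: score_model is a hypothetical stand-in for whatever model an institution deploys, and the weights, noise level, and trial count are arbitrary.

```python
# Illustrative sensitivity check: perturb inputs slightly and count how
# often the decision flips. score_model and all parameters are hypothetical.
import random

def score_model(features):
    """Stand-in model: approve when a weighted score clears 0.5."""
    weights = {"income": 0.6, "credit_history": 0.4}
    return sum(weights[k] * features[k] for k in weights) >= 0.5

def flip_rate(cases, noise=0.02, trials=100, seed=0):
    """Fraction of perturbed evaluations whose decision differs from the
    unperturbed one. High values suggest the model is fragile."""
    rng = random.Random(seed)
    flips = 0
    for case in cases:
        base = score_model(case)
        for _ in range(trials):
            perturbed = {k: v + rng.uniform(-noise, noise) for k, v in case.items()}
            flips += (score_model(perturbed) != base)
    return flips / (len(cases) * trials)

if __name__ == "__main__":
    cases = [{"income": 0.5, "credit_history": 0.5},   # near the decision boundary
             {"income": 0.9, "credit_history": 0.8}]   # comfortably above it
    print(flip_rate(cases))  # boundary cases account for almost all flips
```

In practice such checks would run against the production model and representative data, but the idea is the same: quantify how much a small, plausible change in inputs can change the outcome.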
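On human oversight, one widely used pattern is to auto-apply only high-confidence AI decisions and route the rest to a human reviewer. The sketch below is again hypothetical: the Decision fields, the 0.8 confidence threshold, and the review queue are illustrative choices, not anything prescribed in the speech.

```python
# Illustrative human-in-the-loop routing: confident decisions are applied
# automatically, ambiguous ones wait for a human reviewer. All names and
# thresholds are hypothetical.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Decision:
    case_id: str
    approved: bool
    confidence: float  # model's own confidence estimate, 0.0 to 1.0

@dataclass
class ReviewQueue:
    threshold: float = 0.8
    pending: List[Decision] = field(default_factory=list)

    def route(self, decision: Decision) -> str:
        """Auto-apply confident decisions; queue the rest for a human."""
        if decision.confidence >= self.threshold:
            return "auto"            # applied without manual review
        self.pending.append(decision)
        return "human_review"        # held until a reviewer signs off

if __name__ == "__main__":
    queue = ReviewQueue()
    print(queue.route(Decision("loan-001", True, 0.95)))   # auto
    print(queue.route(Decision("loan-002", False, 0.55)))  # human_review
    print([d.case_id for d in queue.pending])               # ['loan-002']
```

Routing by model confidence is only one possible trigger; an institution could equally route cases by transaction size, customer complaints, or any of the governance criteria Rao describes.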
