Best Practices for Ethical and Efficient Deployment of AI in Fintech

Author: Ramesh Mohan, CISA, CISSP
Date Published: 29 January 2025

The use of artificial intelligence (AI) technologies has exploded within the financial services sector. Driven by applications such as identity verification, credit scoring, and anti-money laundering (AML), the market value of AI in fintech is estimated to reach US$19 billion by 2025 and grow to US$70.1 billion by 2033.1

The deployment of AI technologies does present some implementation challenges, however, especially for many fintech enterprises working in diversified geographical landscapes. Despite these challenges, AI technologies will play a pivotal role in performing what is traditionally considered back-office work. Automated data collection and entry can be performed by advanced algorithms that enhance the accuracy and efficiency of creditworthiness assessments. Moreover, AI can facilitate communication and documentation processes for quicker turnaround and a better customer experience.

Although many of the direct regulatory requirements of AI differ from country to country, universally applicable best practices could help provide a guiding light for fintech enterprises hoping to harness AI technologies in a responsible manner.2

It is critical to explore the importance of such best practices with the help of use case examples from the fintech industry that have brought to the fore pressing reasons for ethics-based action and efficient deployment.3

Fairness

To ensure fairness, AI applications should be developed so that they do not discriminate against any group, thereby preserving trust and moral integrity. For example, payroll cards that offer an alternative means of disbursing wages must be designed with this requirement in mind. A fintech enterprise can develop an AI model to simulate a variety of fee structures and predict their impact on different categories of employees, helping ensure that no group becomes severely disadvantaged. Consider the service charges on an employee’s payroll card (e.g., a monthly fee, transaction fee, inactivity fee, replacement fee, balance inquiry fee). When applying such charges, a fintech enterprise can use AI to deploy dynamic service charge analytics algorithms that study transaction data and service charge patterns across employee groups (e.g., full-time vs. part-time). This can reveal whether certain employee groups incur disproportionately higher service charges.
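
A minimal sketch of such an analysis in Python follows. The fee records, group labels, and disparity threshold are illustrative assumptions rather than production values; a real system would draw on the enterprise’s transaction data store.

```python
from collections import defaultdict

# Illustrative records: (employee_group, fee_type, fee_amount).
transactions = [
    ("full_time", "monthly", 2.00), ("full_time", "atm", 1.50),
    ("part_time", "monthly", 2.00), ("part_time", "inactivity", 4.00),
    ("part_time", "balance_inquiry", 0.75), ("part_time", "atm", 1.50),
]

def fee_burden_by_group(records):
    """Average service charge per fee event, by employee group."""
    totals, counts = defaultdict(float), defaultdict(int)
    for group, _fee_type, amount in records:
        totals[group] += amount
        counts[group] += 1
    return {group: totals[group] / counts[group] for group in totals}

burden = fee_burden_by_group(transactions)
baseline = min(burden.values())

# Flag any group whose average charge exceeds the lowest-burden group
# by more than 50% (the threshold is an assumption for illustration).
for group, avg in burden.items():
    if avg > 1.5 * baseline:
        print(f"Review fee structure: {group} averages {avg:.2f} vs. baseline {baseline:.2f}")
```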

Another application of fairness is in the development of AI-driven chatbots, which can simplify financial tasks; help users with account management, bill payments, and checking balances; and provide personalized financial advice. These services can help users gain financial independence. However, such interfaces must be accessible to people with disabilities—for example, through voice command capabilities, specialized screen readers, and other assistive devices. Inclusion must be ensured so that AI robots interact equally with all users, regardless of physical ability.

Transparency

Institutions implementing AI must make their decision-making processes transparent so that users can scrutinize them, which in turn builds trust. Given the complexity of AI algorithms, demand for transparency regarding their inputs and decision making has grown among stakeholders, including regulators and consumers.

In particular, fintech enterprises can enhance the trust of regulators through complete disclosure of the architecture and parameters used in creating an algorithm, clear documentation of the data used in training the algorithm, and an explanation of the predictions made by the model.4
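
One lightweight way to operationalize such disclosure is a machine-readable model card kept alongside the model. The sketch below is hypothetical; its field names and values are assumptions, loosely inspired by the technical-documentation items in Article 11 of the EU AI Act cited above, not a formal standard.

```python
import json

# Illustrative model card capturing the disclosures discussed above:
# architecture, parameters, training data provenance, and what the
# model predicts. All field names and values are assumptions.
model_card = {
    "model_name": "credit_risk_scorer",
    "version": "1.4.0",
    "architecture": "gradient-boosted decision trees",
    "hyperparameters": {"n_estimators": 300, "max_depth": 4, "learning_rate": 0.05},
    "training_data": {
        "source": "internal loan-application records, 2019-2023",
        "rows": 1_200_000,
        "known_limitations": "underrepresents thin-file applicants",
    },
    "prediction": "probability of default within 12 months",
    "intended_use": "decision support; final approval requires human review",
}

with open("model_card.json", "w") as f:
    json.dump(model_card, f, indent=2)
```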

On the customer front, fintech enterprises can provide in-depth transparency into trading processes and market dynamics so that investors can make more informed decisions and support their choices with comprehensive data. Fintech organizations must also ensure that customers are informed when they are interacting with an AI-driven system.5 If a customer believes that they are interacting with a human being while using an AI-driven trading app, they may not question the quality of the advice or information they are receiving. A lack of transparency, therefore, can result in poor trading decisions based on incomplete or simplistic AI-generated insights. This also raises ethical issues that could even lead to legal liabilities in cases where a customer, acting upon AI guidance without realizing the limitations of the AI system, incurs significant financial losses.

Accuracy

The predictions an AI system makes must be accurate to maintain financial integrity and credibility. AI-driven pricing models are significantly transforming long-established approaches to lending, to insuring products6 and enterprises, and even to trading traditional assets. These advanced models use machine learning (ML) to ingest large amounts of data, identify complex patterns, and adapt to new information, which are major advantages compared to traditional methods.

These complex AI models come with their own problems, however, especially where the accuracy of predictions is concerned. Heavy dependence on historical data can lead to overfitting, and a model fitted too closely to past patterns may break down under shifted economic conditions such as a pandemic, recession, or inflation, which can lead to significant financial losses on a large scale. Fintech enterprises can fine-tune the accuracy of AI systems by adopting robust data management strategies built on improved data quality and advanced augmentation methods.

To lend even greater effectiveness to AI models, fintech organizations should deploy precise feature engineering, bespoke algorithm selection, and sophisticated ensemble techniques that make use of deep learning for unstructured data, regularization to avoid overfitting, and model assessment and system transparency using explainability tools.
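
As a minimal illustration of one safeguard named above, the sketch below applies L2 regularization to a credit-style classifier and uses cross-validation to estimate generalization. The data is synthetic and the hyperparameters are assumptions; it is a sketch of the technique, not a credit model.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for applicant features and default labels.
X, y = make_classification(n_samples=5000, n_features=20, random_state=0)

# L2 regularization (strength set via C) constrains the coefficients so
# the model does not memorize historical quirks; one guard against
# overfitting to past data.
model = make_pipeline(StandardScaler(), LogisticRegression(C=0.1, max_iter=1000))

# Cross-validation estimates how the model generalizes beyond the
# historical sample it was fitted on.
scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print(f"Mean ROC AUC across folds: {scores.mean():.3f} (+/- {scores.std():.3f})")
```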

Moreover, fintech enterprises need to provide strong feedback mechanisms that enable consumers to report any perceived bias. This feedback informs continual refinement of the algorithm, helping ensure that all applicants receive fair credit opportunities.

Data Privacy

Protecting user data is important, and compliance with data privacy regulations enhances trust. Fintech enterprises are at the forefront of changing client onboarding practices by developing and implementing AI-based identity verification systems that use technologies such as optical character recognition (OCR) and biometric or facial recognition.7 These methods greatly reduce the time required for user verification and allow clients to be onboarded remotely, for example through video calls or verification via a selfie from a reliable source. AI systems can streamline the onboarding process, eliminating some of the need for burdensome paperwork and in-person visits.

These AI-driven models are trained on very large datasets that greatly improve their ability to identify forged documents and significantly reduce human error in document verification. The use of AI in identity verification has its own pros and cons, however. Overreliance raises significant concerns for privacy and security, which could lead to customer dissatisfaction and, ultimately, a decline in trust, in addition to potential legal complications. Data privacy is critical in these cases because onboarding relies on processing customers’ selfies and biometric data. Fintech enterprises must ensure that the photos and biometric data captured are secured and encrypted. A security breach presents opportunities for identity theft and can bring about serious financial consequences for users. In addition, regulators mandate technical protection measures for securing personal information, and any gaps can result in regulatory fines. By paying attention to data privacy, fintech enterprises keep customer trust and improve their reputations, resulting in customer retention and loyalty.
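
As a minimal sketch of the encryption requirement, the example below uses symmetric encryption from the widely used Python cryptography package to protect a captured selfie at rest. The key handling is deliberately simplified; in practice the key would live in a hardware security module or managed key vault, and the image bytes shown are placeholders.

```python
from cryptography.fernet import Fernet

# Simplification for illustration: production keys come from an HSM or
# managed key vault, never generated and held alongside the data.
key = Fernet.generate_key()
cipher = Fernet(key)

def store_selfie(image_bytes: bytes) -> bytes:
    """Encrypt a captured selfie before it is written to storage."""
    return cipher.encrypt(image_bytes)

def load_selfie(token: bytes) -> bytes:
    """Decrypt only at the moment of verification."""
    return cipher.decrypt(token)

raw = b"\x89PNG...placeholder image bytes..."
encrypted = store_selfie(raw)
assert load_selfie(encrypted) == raw  # round trip; storage never sees plaintext
```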

Explainability

AI decisions should be accompanied by clear explanations that lead to better user understanding and trust. Fintech insurance enterprises are increasingly using explainable AI (XAI) algorithms to achieve efficiency in the claim approval or denial process.8 Such algorithms must clearly explain the reasoning behind their decisions so that policyholders understand why their claims were approved or denied. This transparency helps establish much-needed trust between the insurer and the insured.

In addition, XAI significantly aids in the identification and subsequent correction of any potential biases in claim-processing models. Using XAI, fintech insurance organizations can ensure fair and equitable treatment for all policyholders. This technological progress not only increases efficiency but also strengthens integrity in the processing of claims.
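
One common XAI technique is SHAP feature attribution, which assigns each input feature a contribution to an individual decision. The sketch below applies it to a synthetic stand-in for a claims classifier; the data, model, and features are assumptions used to show the mechanics, not an actual claims system.

```python
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

# Synthetic stand-in for claim features (claim amount, policy age,
# prior claims, ...) with approve/deny labels.
X, y = make_classification(n_samples=2000, n_features=8, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

# TreeExplainer attributes each decision to individual input features,
# the raw material for the plain-language explanation owed to the
# policyholder whose claim was approved or denied.
explainer = shap.TreeExplainer(model)
contributions = explainer.shap_values(X[:1])  # one row: one claim

print(contributions[0])  # per-feature contribution to this single decision
```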

Accountability

Institutions are accountable for the results of their AI systems and must rectify any defects. AI in credit scoring applies modern ML-based methods to evaluate data such as financial transactions and borrowing history, considerably enhancing the accuracy and effectiveness of the credit evaluation process compared to conventional methods.9

Accountability in AI credit scoring can help ensure fairness and transparency, eliminating biases and increasing confidence in automated decision making. Developers and operators are held to high standards for accuracy and ethics consistent with regulatory criteria. Through AI’s accountability mechanism, errors can be detected and remedied, thereby improving the constant development of AI systems.

Accountability also ensures the reliability of AI for both the user and the regulator with respect to conformance to legal and ethical requirements. For instance, lenders may pass credit scoring costs to clients when checking for loan approval. This practice can be countered by devising ways to compensate customers quickly for charges that the system might have incorrectly imposed, thus maintaining accountability.

Another application of the accountability principle is recording the decisions that AI-driven chatbots make. This allows for future audits of the automated decision-making process.
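
A minimal sketch of such decision recording follows, using only the Python standard library. The field names and log destination are assumptions; a production system would write to tamper-evident, access-controlled storage.

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(filename="chatbot_decisions.log", level=logging.INFO,
                    format="%(message)s")

def record_decision(session_id: str, user_intent: str, model_version: str,
                    action: str, confidence: float) -> None:
    """Append a structured record of each automated decision so the
    decision trail can be reconstructed in a future audit."""
    logging.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "session_id": session_id,
        "user_intent": user_intent,
        "model_version": model_version,
        "action": action,
        "confidence": confidence,
    }))

record_decision("sess-4821", "dispute_transaction", "v2.3",
                "escalate_to_agent", 0.64)
```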

Monitoring and Updating

Constant performance monitoring and regular updating of AI systems are required to identify new challenges. For example, AI-based AML systems aid in the detection and prevention of illegal activity by analyzing large volumes of financial data and identifying patterns and anomalies indicative of such activity taking place.10

AML regulations change frequently, and criminals continually adopt new laundering techniques. Frequent AI system refreshes help prevent model drift and make room for new technological innovations, which tend to provide incremental improvements in the accuracy of detecting suspicious activities.

Failure to consistently upgrade AML algorithms implies a reduced detection capability, an increase in the generation of false positives and negatives, potential regulatory noncompliance, and the potential to incur large fines and reputational damage. This risk can be mitigated if institutions have a well-anchored model management framework that fosters the continued refinement of AML systems in view of evolving threats.
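
A standard way to watch for model drift is the population stability index (PSI), which compares a feature’s live distribution with the distribution the model was trained on. The sketch below computes PSI on synthetic transaction amounts; the distributions and the 0.25 alert threshold are illustrative conventions, not regulatory values.

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """PSI between the training-time (expected) and live (actual)
    distributions of a feature; larger values signal drift."""
    cuts = np.percentile(expected, np.linspace(0, 100, bins + 1))
    # Widen the outer edges so every live value falls into a bin.
    cuts[0] = min(cuts[0], actual.min()) - 1e-9
    cuts[-1] = max(cuts[-1], actual.max()) + 1e-9
    e_pct = np.histogram(expected, cuts)[0] / len(expected)
    a_pct = np.histogram(actual, cuts)[0] / len(actual)
    e_pct, a_pct = np.clip(e_pct, 1e-6, None), np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
training_amounts = rng.lognormal(3.0, 1.0, 50_000)  # amounts at training time
live_amounts = rng.lognormal(3.4, 1.1, 50_000)      # shifted live distribution

psi = population_stability_index(training_amounts, live_amounts)
# Common rule of thumb: PSI above 0.25 indicates significant drift.
print(f"PSI = {psi:.3f}" + (" -> investigate and retrain" if psi > 0.25 else ""))
```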

Human Judgment

Human judgment is required when making decisions that align with ethical considerations and social values. The hallucination phenomenon—whereby AI models produce information that is either deceptive or entirely false—has significant implications in fintech, where the accuracy of data is of prime importance. Fintech applications use AI models heavily in stock trading, credit scoring, and fraud detection, and are very vulnerable to the negative effects of hallucinations. When AI systems hallucinate, they can invent patterns or data that are not there, leading to flawed responses and predictions. These inaccuracies can then lead to poor or even catastrophic financial decisions. This further stresses the importance of a sound validation process for AI within the fintech industry.

To reduce hallucination risk, human supervision is required during the AI integration stage to guard against errors or biases that could result in unethical or otherwise damaging decision making if systems were unchecked.11 Human oversight is useful not only in averting risk but also in ensuring transparency and accountability in the decision-making process. Human supervision also assists financial institutions in complying with regional regulatory requirements. There must be a balance between technological capabilities and human judgment to protect consumer interests and sustain consumer trust in financial services.
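
A simple expression of this balance is a human-review gate that automates only clear-cut model outputs and escalates ambiguous ones. The sketch below illustrates the routing logic; the band boundaries are assumptions that in practice would be set from validation data and regulatory guidance.

```python
def route_decision(default_probability: float,
                   low: float = 0.20, high: float = 0.80) -> str:
    """Automate only clear-cut cases; ambiguous scores go to a human.

    The band boundaries are illustrative assumptions, not policy.
    """
    if default_probability <= low:
        return "auto_approve"
    if default_probability >= high:
        return "auto_decline"
    return "human_review"

# Mid-range scores, where hallucinated or flawed model output does the
# most damage, are escalated rather than acted on automatically.
for p in (0.05, 0.55, 0.92):
    print(p, "->", route_decision(p))
```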

Reliability

To be reliable, AI systems must act consistently across many contexts, behaving predictably and instilling confidence in the many users who rely on them. Fintech organizations depend heavily on advanced algorithmic models, for example, to determine borrower creditworthiness and set the terms of a loan. In this context, the algorithms must perform consistently across varied scenarios to prevent the introduction of any form of bias that could lead to prejudiced outcomes. It is very important that the AI parameters be stable and not misrepresent results.

Fintech enterprises must commit to regular audits to ensure consistency in the advanced algorithms their AI systems use. An audit should involve inspection of the datasets that train the algorithms, along with rigorous testing and analysis of outcomes to rule out any possibility that an algorithm discriminates against a particular demographic.
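
One simple outcome test such an audit can include is the four-fifths (disparate impact) check, which compares approval rates across demographic groups. The sketch below runs it on synthetic decisions; the group labels, rates, and 0.8 threshold follow a common convention and are assumptions for illustration.

```python
import numpy as np

def disparate_impact_ratio(approved: np.ndarray, group: np.ndarray) -> float:
    """Ratio of approval rates between the least- and most-approved
    demographic groups; the four-fifths rule flags ratios under 0.8."""
    rates = {g: approved[group == g].mean() for g in np.unique(group)}
    return min(rates.values()) / max(rates.values())

rng = np.random.default_rng(1)
group = rng.choice(["A", "B"], size=10_000)  # synthetic demographic label
# Synthetic decisions with a built-in gap so the check has something to find.
approved = rng.random(10_000) < np.where(group == "A", 0.62, 0.48)

ratio = disparate_impact_ratio(approved, group)
print(f"Disparate impact ratio = {ratio:.2f}"
      + (" -> investigate for bias" if ratio < 0.8 else ""))
```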

In addition, fintech organizations should develop strong feedback mechanisms, empowering their customers to report any perceived biases, which will further inform the ongoing refinements to the algorithms in use and help ensure equitable credit opportunities for all applicants.

Robustness

AI systems should be robustly tested against external threats to ensure dependability. AI in fraud detection utilizes a variety of models to expose anomalies in customer behavior and detect patterns consistent with fraudulent characteristics. The models should have built-in resilience to withstand variations, noise, and unforeseen inputs during their operation to ensure that they can continue to be reliable despite changing conditions.

This resilience should include the ability to manage edge cases or adversarial attacks that were created to fool the system. A high-quality AI system can maintain its performance quality and provide outputs users can rely on even when working with imperfect data or in a different environment than it was trained in.

AI-based fraud detection systems must be subjected to very rigorous testing under stress conditions and security protocols to ensure that cyberthreats are properly detected and addressed, thereby making financial transactions reliable and safe.
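
A basic robustness probe, short of a full adversarial evaluation, is to perturb inputs with small amounts of noise and measure how often the model’s verdict flips. The sketch below does this on a synthetic stand-in for a fraud classifier; the model, features, and noise scales are assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for transaction features with fraud labels.
X, y = make_classification(n_samples=5000, n_features=12, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Perturb inputs with increasing noise and record the share of verdicts
# that flip. A fragile model flips on noise a determined fraudster
# could easily introduce.
rng = np.random.default_rng(0)
baseline = model.predict(X)
flip_rates = {}
for scale in (0.01, 0.05, 0.10):
    perturbed = X + rng.normal(0, scale, X.shape)
    flip_rates[scale] = float((model.predict(perturbed) != baseline).mean())

print(flip_rates)  # flip rate per noise scale; spikes warrant investigation
```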

Cybersecurity and Operational Resilience

AI has become critical to cybersecurity and operational resilience, given that more stringent cybersecurity measures are necessary to defend against the steady increase in data breaches and cyberthreats. A modern enterprise security strategy should encompass encryption technologies, frequent security audits, and conformance to regulatory and legal requirements. Fintech enterprises must pay close attention to operational resilience, given the rise in regulatory scrutiny of this area.12 Consider a fintech payment platform that experiences a significant increase in transactions during a Black Friday sale. If the fintech enterprise fails to establish adequate infrastructure protections or perform stress testing, it may result in a disruption to service during peak traffic times. This lack of operational resilience could result in regulatory penalties. To avoid this, fintech enterprises must deploy various strategies ahead of major sales, such as capacity planning, scaling infrastructure, disaster recovery testing, and regular stress testing.
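
As a minimal illustration of stress testing, the sketch below fires concurrent requests at a staging health endpoint using only the Python standard library and reports success rate and tail latency. The URL, concurrency, and request counts are hypothetical; real capacity testing would use dedicated load-testing tooling against an isolated environment.

```python
import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

TARGET = "https://staging.example-payments.internal/health"  # hypothetical endpoint
CONCURRENCY, REQUESTS = 50, 500  # illustrative load parameters

def hit(_):
    """Issue one request and record success plus latency."""
    start = time.perf_counter()
    try:
        with urlopen(TARGET, timeout=5) as resp:
            ok = resp.status == 200
    except Exception:
        ok = False
    return ok, time.perf_counter() - start

with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
    results = list(pool.map(hit, range(REQUESTS)))

latencies = sorted(latency for ok, latency in results if ok)
success_rate = len(latencies) / REQUESTS
p95 = latencies[int(0.95 * len(latencies))] if latencies else float("nan")
print(f"success rate: {success_rate:.1%}, p95 latency: {p95:.3f}s")
```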

Risk Management and Audit

The in-house mechanisms for risk management should be enhanced so that the employed technologies are on par with the regulations applicable to AI. A fintech enterprise with an AI lending platform may use an advanced algorithm for creditworthiness assessments. This increases the fraud and regulatory compliance risk for its growing operations. An effective risk management strategy helps with the identification and mitigation of threats and risk associated with fraudulent loan applications and loan defaulters. The goal of risk management is to minimize the potential risk from new technologies. Also, conducting scheduled audits will ensure that fintech technologies and associated processes are in line with standing laws and regulations.

To achieve regulatory awareness, it is necessary to educate, inform, and persuade stakeholders regarding the importance and complexity of regulatory technical controls. All levels of personnel in the fintech organization should be educated about current regulations as they apply to any emerging technologies being used. This will help ensure smooth transitions when regulations change and make it easier to keep customers informed about how their data is being used.

Conclusion

While implementing AI, fintech enterprises must adhere to key best practices such as fairness, transparency, and data privacy to build customer trust and comply with regulatory requirements.

Ethical and efficient deployment of AI requires a proactive role from audit professionals. When assessing an AI-enabled credit scoring system in a fintech enterprise, auditors must follow a structured risk assessment approach, highlighting risk that may stem from algorithmic bias, data privacy gaps, and system vulnerabilities. Ensuring diversity and representativeness in the data fed into AI models will help prevent biased credit evaluations. The decision-making capabilities of the algorithm can be strengthened by conducting stress tests across a variety of scenarios to monitor performance under different conditions.

AI systems in fintech must adhere to regulations aimed at protecting personal information. A framework should be implemented that allows for periodic monitoring and real-time assessment, adapting as new risk emerges and old risk diminishes over time.

Emphasis on these aspects will make risk assurance professionals more effective in their oversight of AI implementation within fintech enterprises.

Endnotes

1 Dimension Market Research, Artificial Intelligence (AI) in Fintech Market by Type (Solutions and Services), by Deployment, by Application, by End User - Global Industry Outlook, Key Companies (Microsoft, Google LLC, Salesforce Inc., and others), Trends and Forecast 2024-2033, April 2024
2 Frank, E.; “Balancing Innovation and Compliance in Fintech AI,” EasyChair, 18 May 2024
3 Rao, R.; “Innovations in Banking―The Emerging Role for Technology and AI,” BIS, 10 January 2024
4 EU Artificial Intelligence Act, “Article 11: Technical Documentation,” European Union, 13 June 2024
5 EU Artificial Intelligence Act, “Article 13: Transparency and Provision of Information to Deployers,” European Union, 13 June 2024; EU AI Act, “Key Issue: Transparency Obligations”
6 FinTech Global, “How AI-Backed Dynamic Pricing Is Transforming Insurance Sales,” 22 April 2024
7 Champion, K.; “The Crucial Role of Identity Verification for Insurance Companies,” 2 May 2024
8 Owens, E.; Sheehan, B.; et al.; “Explainable Artificial Intelligence (XAI) in Insurance,” Risks, 1 December 2022
9 Addy, W.; Ajayi-Nifise, A.; et al.; “AI in Credit Scoring: A Comprehensive Review of Models and Predictive Analytics,” Global Journal of Engineering and Technology Advances, vol. 18, iss. 2, 2024
10 AML Watcher, “7 Use Cases of Artificial Intelligence in Anti-Money Laundering,” 13 May 2024
11 Baker McKenzie, “Europe: Legal Safeguarding Against AI Hallucination Pitfalls,” 25 January 2024
12 EUR-Lex, “Regulation (EU) 2022/2554 of the European Parliament and of the Council on Digital Operational Resilience for the Financial Sector,” 14 December 2022

RAMESH MOHAN | CISA, CISSP

Is an information systems and technology audit manager at e& Group with 18 years of internal audit and cybersecurity consulting experience in the banking, insurance, fintech, and telecommunication sectors.
