Long Read  

AI a double-edged sword when fighting fraud

As a result, version control of GenAI models – using rigorous, repeatable evaluation to demonstrate that the next production model genuinely outperforms the previous one – is becoming an increasing priority.
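As a rough sketch of what such a promotion gate might look like – the metric names, thresholds and figures below are illustrative assumptions, not any vendor's standard – a candidate model would only replace the incumbent when it wins on a fixed, held-out evaluation set:

```python
# Minimal sketch of a promotion gate for GenAI model versions.
# Metric names, thresholds and figures are illustrative assumptions.

def should_promote(candidate: dict, production: dict,
                   fp_tolerance: float = 0.005) -> bool:
    """Promote the candidate only if it catches at least as much fraud
    as the production model without materially raising false positives,
    measured on the same fixed, held-out evaluation set."""
    return (candidate["fraud_recall"] >= production["fraud_recall"]
            and candidate["false_positive_rate"]
            <= production["false_positive_rate"] + fp_tolerance)

# Hypothetical scores from the shared evaluation set.
production = {"fraud_recall": 0.91, "false_positive_rate": 0.020}
candidate = {"fraud_recall": 0.93, "false_positive_rate": 0.022}
print(should_promote(candidate, production))  # True: better recall, FPs within tolerance
```

The key design choice is that the evaluation set stays fixed between versions, so comparisons are like-for-like rather than drifting with the data.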

Fourth, hallucination by GenAI models is also a problem. This boils down to the model producing fictitious or factually incorrect results, whether because it has misunderstood the input or because it has been improperly tuned.

Beyond generating misleading information for users, these failure modes can cause the GenAI model to learn incorrect patterns and keep producing false or misleading outputs.

Fifth, GenAI models are particularly vulnerable to malicious data injection, which can have significant, long-term consequences – and which has the potential to wipe out any benefit of using the tool.

If training data is intentionally tampered with, or malicious data is injected into the model, the GenAI model can be left permanently generating incorrect or highly offensive responses.

Lastly, governance of GenAI models – for example, testing them for prejudice and bias – is crucial, yet it remains a challenge to build the processes and technical controls needed to manage model versions in a manner that is both agile and fully version-tracked.
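As one minimal illustration of a bias test – the groups, data and comparison below are hypothetical assumptions, not a regulatory standard – a governance team might compare how often the model flags customers across a protected attribute:

```python
from collections import defaultdict

# Illustrative fairness spot-check: compare how often the model flags
# customers across a protected attribute. Data and groups are hypothetical.
def flag_rates_by_group(decisions):
    """decisions: iterable of (group, was_flagged) pairs."""
    counts = defaultdict(lambda: [0, 0])  # group -> [flagged, total]
    for group, flagged in decisions:
        counts[group][0] += int(flagged)
        counts[group][1] += 1
    return {g: flagged / total for g, (flagged, total) in counts.items()}

decisions = [("group_a", True), ("group_a", False), ("group_a", False),
             ("group_b", True), ("group_b", True), ("group_b", False)]
print(flag_rates_by_group(decisions))
# {'group_a': 0.333..., 'group_b': 0.666...} – a gap worth escalating
```

In practice such checks would sit alongside formal fairness metrics and human review, but even a simple rate comparison can surface disparities worth investigating.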

Tackling GenAI fraud management

To protect against the above risks, it is imperative that enterprises – including financial institutions – equip their existing security and data architectures to handle GenAI functions.

First, to protect against GenAI fraud, it is crucial that enterprises have monitoring and policies in place that reduce the chance of inadvertently sharing the company’s intellectual property or personally identifiable information with GenAI tools.
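As a minimal sketch of what such a policy control might look like in practice – the patterns and the screen_prompt helper below are illustrative assumptions, not a specific product – a pre-submission filter can block prompts containing obvious PII before they reach an external GenAI service:

```python
import re

# Illustrative pre-submission screen for prompts bound for an external
# GenAI service. Patterns are deliberately simple assumptions; a real
# deployment would rely on a proper DLP or PII-detection service.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "uk_nino": re.compile(r"\b[A-Z]{2}\d{6}[A-D]\b"),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the PII categories detected; an empty list means clear to send."""
    return [name for name, pattern in PII_PATTERNS.items()
            if pattern.search(prompt)]

hits = screen_prompt("Summarise the complaint from jane.doe@example.com")
if hits:
    print("Blocked - prompt appears to contain: " + ", ".join(hits))
```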

Being aware of, and staying apprised of, external factors that could dictate, affect or infiltrate the training data is also key.

Second, enterprises should obtain evidence of GenAI model explainability from the vendor so they can defend the model’s output – for example, by requesting that the vendor provide reason codes to support each of the model’s decisions.
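To make the idea concrete, a vendor-supplied decision with reason codes might take a shape like the following – the field names and codes here are hypothetical, as real vendors define their own schemas:

```python
from dataclasses import dataclass, field

# Hypothetical shape of an explainable fraud decision. Field names and
# codes are illustrative; real vendors define their own schemas.
@dataclass
class FraudDecision:
    score: float               # model risk score between 0 and 1
    flagged: bool
    reason_codes: list = field(default_factory=list)  # drivers of the score

decision = FraudDecision(
    score=0.87,
    flagged=True,
    reason_codes=["R03: payee never seen on this account",
                  "R11: amount far above the customer's baseline"],
)
# Reason codes give investigators and regulators a reviewable rationale.
for code in decision.reason_codes:
    print(code)
```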

Lastly, Forrester recommends implementing GenAI as part of a larger tool set. Indeed, using GenAI as only one of the tools available for fraud management and anti-money-laundering policy authoring will significantly reduce the opportunity for malicious activity.

In summary

It is important to be mindful of third-party risk management when implementing GenAI.

All too often we see GenAI users focus too heavily on how the technology will boost productivity, while failing to recognise the threats to security and regulatory compliance.

For example, we see increasing numbers of employees feeding sensitive data into GenAI models such as ChatGPT, which could jeopardise the company’s security.

Businesses looking to partner with or utilise third-party GenAI applications must implement thorough, well-disseminated security protocols before employees start using them.