
Interview with Timothée Raymond for Finyear


Interview with Timothée Raymond, Head of Innovation and Technology at Linedata.

Can you give us an update on the use of AI in the banking sector?
The finance sector as a whole has stayed away from AI that does not yet enjoy broad consensus. Far from complex, scalable autonomous systems, it therefore mainly relies on operational, automated algorithms that merely simulate human intelligence to perform simple, repetitive tasks. We use them at Linedata in several of our solutions, for leasing products in particular. According to Gartner's 2021 analysis, machine learning remains the most widely adopted artificial intelligence technology in the banking and finance sector. The main drivers of AI adoption are improved customer experience (43%), cost optimisation initiatives (24%) and improved risk management (19%).

We hear a lot about trusted AI. What is it?
On 21 April 2021, the European Commission proposed the first-ever legal framework on artificial intelligence, to accompany the development of this technology and ensure that its fields of application are well defined and better understood by the general public. This is excellent news for individuals in terms of respect for fundamental rights, but also for European companies which, in order to innovate and invest wisely, need clarity about the regulations and practices surrounding AI (commercial, competitive, ethical, etc.). In a context of heightened global competition and still-opaque uses of data, it is all the more vital. The notion of trusted AI in fact gathers several requirements that converge towards the same objective: making AI an objective, reliable and human-controlled tool. Among these requirements, supported by the European Commission, are the necessary transparency of algorithms on the one hand, and explainability and pedagogy on the other. It has become essential to know when AI is used, how it works and how it reaches the final result it delivers, especially as many algorithms operate like black boxes, in which nothing can be observed. To these initial requirements is added a desire to integrate more ethics, to make sure that discriminatory biases have not been reproduced or introduced during the analysis.
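To make the transparency requirement concrete, here is a minimal sketch (all feature names and weights are invented for illustration) of the difference between a black-box score and an explainable one: a simple linear model can report exactly how much each feature contributed to the final result, which is the kind of traceability the framework asks for.

```python
# Toy credit-scoring model -- illustrative only, with invented features and
# weights. A linear score can be decomposed feature by feature, so a reviewer
# can see exactly how the result was reached, unlike a black box.

WEIGHTS = {"income_ratio": 0.5, "years_employed": 0.3, "existing_debt": -0.4}

def score(applicant: dict) -> float:
    """Final score: weighted sum of the applicant's features."""
    return sum(WEIGHTS[f] * applicant[f] for f in WEIGHTS)

def explain(applicant: dict) -> dict:
    """Per-feature contribution to the score; the contributions sum to score()."""
    return {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}

applicant = {"income_ratio": 3.0, "years_employed": 5.0, "existing_debt": 2.0}
print(score(applicant))    # 2.2
print(explain(applicant))  # {'income_ratio': 1.5, 'years_employed': 1.5, 'existing_debt': -0.8}
```

Real systems would use far richer models, but the principle stands: if the model family does not decompose naturally, a post-hoc explanation layer has to be added to meet the same requirement.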

What does this mean in practice?
Europe has chosen to base the degree of ethical requirement placed on an AI on the risk associated with it, so all algorithms are potentially concerned. It seems obvious that military or medical AI will be particularly scrutinized, but in reality many AI systems used on a daily basis can have negative consequences for humans if they are not sufficiently supervised. In the HR sector, for example, Europe explicitly mentions algorithms used in recruitment. In finance, the algorithms used to establish credit scores or to grant credit will have to be re-evaluated. Imagine a credit applicant with all the necessary guarantees but living in a geographical area where the inhabitants' credit applications are often rejected. Even if those applications were rejected for legitimate reasons, a poorly trained AI could interpret this statistical coincidence as a systematic rule and de facto discriminate against any new application from that location without good reason. Until now, algorithms were not designed to explain their results, so such differences in treatment might never be detected. It is therefore excellent news that requirements for ethics, transparency and explainability have been introduced. This will help professionals in the sector make better decisions and will ensure that their clients receive consistent, non-discriminatory treatment.
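The geographic-bias scenario described above can be sketched in a few lines (all data and rules here are invented for illustration): a naive model that turns a region's historical rejection rate into a decision rule ends up rejecting a well-qualified applicant purely because of where they live, while a rule based on the individual file does not.

```python
# Illustrative sketch with invented data: how a statistical coincidence
# (many past rejections in region "B") becomes a discriminatory rule.

history = [
    # (region, rejected?) -- past decisions, each individually justified
    ("A", False), ("A", False), ("A", True),
    ("B", True), ("B", True), ("B", False), ("B", True),
]

def rejection_rate(region: str) -> float:
    """Share of past applications from this region that were rejected."""
    decisions = [rejected for reg, rejected in history if reg == region]
    return sum(decisions) / len(decisions)

def naive_decision(applicant: dict) -> bool:
    """Approve only if the region's historical rejection rate is below 50%:
    the coincidence has been turned into a systematic rule."""
    return rejection_rate(applicant["region"]) < 0.5

def fair_decision(applicant: dict) -> bool:
    """Decide on the applicant's own guarantees, ignoring region."""
    return applicant["guarantees_ok"]

strong_applicant = {"region": "B", "guarantees_ok": True}
print(naive_decision(strong_applicant))  # False: rejected purely by location
print(fair_decision(strong_applicant))   # True: approved on individual merit
```

Comparing the two decision functions on the same applicant is exactly the kind of check that explainability requirements make possible: without access to the rule, the difference in treatment would never surface.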

Do you think this could lead to further developments?
Of course, because this upcoming regulation will encourage the modernization of the AI systems in use. They will be more in line with the reality of the cases at hand and will make it possible to reach more refined, more individualized results. In the specific context of credit applications, for example, the particular attention paid to eligibility criteria could be reviewed. Thus, to obtain guarantees on the sustainability of an applicant's income, we could imagine that contract types other than permanent contracts would be better taken into account by the bank's AI in its preliminary analysis of the documents in the loan file.

As for companies' day-to-day commitment to trusted AI, numerous initiatives are already emerging, in France as in the rest of Europe, to help guarantee the highest standards in AI use. In particular, this involves setting up internal evaluation grids for the AI solutions developed, with several levels of measurement (code used, training required of the team, observations of business practices, cross-checks, etc.). Other companies integrate an ethical approach right from the development of algorithms, in close collaboration with the teams involved, whether in consulting or control roles: ML or automation project managers, innovation managers, developers, IT directors, CISOs, DPOs, etc.

More generally, many companies participate in the development of representative bodies for their sector and/or dedicated educational programs. This is our case: the Linedata group will take part in a deep dive on AI practices with Syntec on 11 June.

Just as we are already seeing the emergence of Security by design (which consists of integrating security from the very design of a program) and Privacy by default (which guarantees respect for personal data), we should think about implementing Ethics by default.

Read the article in French