
The discrimination inherent in AI

5.10.2023

CEO of Cashflows, Hannah Fitzsimons, at the Women in Credit Panel on 'Does your AI discriminate?'

Does your AI discriminate? This question has become a point of concern for businesses across multiple industries as they begin to implement AI-based systems across their workstreams and processes, particularly systems that make decisions with a significant impact on people’s lives and businesses, such as insurance or credit. It is, however, perhaps the wrong question to be asking. Artificial intelligence cannot discriminate by itself. The instances of bias cropping up in AI systems and decision-making tools instead raise a larger question about the discrimination and bias inherent in data and in society as a whole, which is being passed on to the AI systems we have created. The question instead becomes: how do we reduce bias as much as possible when creating and implementing AI? What this conversation around AI is doing is flagging existing bias that already affects decision-making and data across our businesses, which is both eye-opening and invaluable.

When it comes to assessing discrimination, we first need to work out how to measure bias. How do you tell if the AI system you have implemented is producing biased results and decisions? This will depend very much on the business, but looking at profit margins or pricing for different customers and examining the commonalities between them is a good place to start. You then need to make sure that differences in pricing, or in the decisions being made, are justifiable and valid rather than being based on unconscious bias. The challenge then becomes how to prevent bias without ignoring risk. Bias is an integral part of all risk decision-making – for example, women are given lower premiums on car insurance because men statistically have more accidents – and it is the unconscious or unjustifiable biases that we need to be vigilant about. We must ensure that the biases we choose to implement are chosen consciously, are warranted, and are applied with consideration. What counts as pricing risk fairly, and not overcharging customers, will vary between industries, and businesses should look to industry averages to get an understanding of this.
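
As a rough illustration of that first check, the sketch below groups a hypothetical decision log by a customer attribute and compares average prices and approval rates, then computes a simple ratio between the lowest and highest approval rates. The file name and column names (group, price, approved) are assumptions for the example, not a description of any particular system.

```python
import pandas as pd

# Hypothetical decision log: one row per customer, with the quoted price,
# whether the application was approved, and the attribute being audited.
decisions = pd.read_csv("decision_log.csv")  # assumed columns: group, price, approved

# Average price and approval rate per group.
summary = decisions.groupby("group").agg(
    avg_price=("price", "mean"),
    approval_rate=("approved", "mean"),
    customers=("price", "size"),
)
print(summary)

# A disparate-impact-style ratio: lowest group approval rate divided by the
# highest. A value well below 1.0 flags a gap that needs a valid justification.
ratio = summary["approval_rate"].min() / summary["approval_rate"].max()
print(f"Approval-rate ratio across groups: {ratio:.2f}")
```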

We also have to look at the people behind the technology to assess the kinds of biases that are likely to be taught to and implemented in AI. I believe there is currently not enough transparency around the building, development, and implementation of AI and the data it is created from. We need to question who the data scientists, the coders, and the product creators are. Who does the data being used in AI creation represent, how is the data being used, and by whom? One of the main challenges we face is ensuring representation across the tech industry, and particularly in AI. This is an issue I am very familiar with: at Cashflows, we work incredibly hard to ensure equal representation of gender and race across the business. Several branches within Cashflows’ tech operations (including risk, settlement operations, compliance, and QA testing) now consist of 50% or more women within their respective teams. It does, however, require input from business leaders for diversity to be recognised as important to the success of the company and to be built into strategy. The unfortunate truth is that, when hiring, the talent pool is still not representative, so creating a diverse workforce often means taking more time and effort when searching for new employees. Only 26% of the tech workforce is female, and even fewer are women from ethnic minorities1. In AI specifically, 78% of global professionals with AI skills are male2. The creators of AI are predominantly white and male, so we need to be aware that this will affect the technology being built. Even when AI is created by a diverse team, unconscious biases can slip through, so it is imperative that we look for and eradicate any potential for discrimination that may be passed on.

So, what can we do to prevent biases from seeping into our AI? IBM has had success using a three-level ranking system that helps determine whether the data being used is bias-free. Their goal, as should be the goal of everyone working in this field, is to reduce the bias of AI. IBM has created an ethics board, developed policies around AI, and works with trusted partners. Open-source data is also proving one of the most promising ways of minimising AI bias, as it enables collaboration, trust, and transparency. Engineers benefit from the perspectives, insights, and contributions of others working in AI, and the public, regulators, and businesses implementing the technology gain a clearer picture of what is being used and how. Essentially, it always comes down to proper oversight. To create fair AI, businesses should put the following in place:

  • Receiving feedback: Consider releasing surveys and asking your end users to fill them out. Through these insights, companies can better understand what is missing, what needs to be changed, and how the AI system can be modelled better.
  • Reviewing the data: Looking over the data that goes into the machine learning model ensures the AI is provided with the right information. Too few samples and data sets, as well as unrepresentative samples, can cause AI systems to be biased.
  • Real-time monitoring: Once the system is built, companies should keep an eye on the algorithmic processes, as reviewing results in real time helps ensure consistency (a minimal sketch of this kind of check follows this list).
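
As one way of picturing that real-time monitoring step, the sketch below keeps a rolling window of recent decisions per customer group and prints an alert when approval rates drift apart. The window size, minimum sample, and alert threshold are illustrative assumptions rather than recommendations, and the grouping attribute would depend on what a business is auditing.

```python
from collections import defaultdict, deque

# Rolling window of recent outcomes per group; sizes and thresholds are
# illustrative assumptions, not recommendations.
WINDOW = 500
MIN_SAMPLE = 50
ALERT_GAP = 0.15

recent = defaultdict(lambda: deque(maxlen=WINDOW))

def record_decision(group: str, approved: bool) -> None:
    """Store one outcome, then check whether group-level approval rates diverge."""
    recent[group].append(1 if approved else 0)
    rates = {g: sum(d) / len(d) for g, d in recent.items() if len(d) >= MIN_SAMPLE}
    if len(rates) >= 2 and max(rates.values()) - min(rates.values()) > ALERT_GAP:
        print(f"Review needed: approval rates diverging -> {rates}")
```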

AI has its limitations and, in my opinion, will always need human oversight, which was a key factor in how we implemented AI decision-making at Cashflows. Because AI is based on historical information, its predictions about the future, whether in terms of decisions around risk or creditworthiness, will always be limited in nuance, especially when it comes to vulnerable people or situations that call for empathy. There is also the concern that, without human oversight, the presence of bias and discrimination will be overlooked, compounded, and allowed to grow, having a very real impact on people’s lives and businesses. Currently, we are not approaching the opportunities that AI presents with sufficient fear and restraint. Businesses are rushing into implementation because AI technology is seen as the holy grail of productivity and efficiency, without first understanding the risks and potential side effects it could have.
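
To make the human-oversight point concrete, here is a minimal sketch of a human-in-the-loop routing rule under assumed names and thresholds: only clear-cut, non-vulnerable cases are handled automatically, and everything else goes to a person. It is an illustration of the principle, not a description of Cashflows’ actual process.

```python
from dataclasses import dataclass

# Illustrative threshold; a real system would tune this per product and risk appetite.
AUTO_APPROVE = 0.90

@dataclass
class Application:
    applicant_id: str
    model_score: float     # model's estimated probability of an acceptable risk
    vulnerable_flag: bool  # e.g. set by an agent or an earlier screening step

def route(app: Application) -> str:
    """Auto-approve only clear-cut, non-vulnerable cases; route the rest to a person."""
    if app.vulnerable_flag or app.model_score < AUTO_APPROVE:
        return "human_review"
    return "auto_approve"

print(route(Application("A-102", 0.72, vulnerable_flag=False)))  # -> human_review
```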

Fortunately, in November, international governments, leading AI companies, and research experts will unite in the UK for crucial talks on the safe development and use of frontier AI technology. The conference will consider the risks of AI, especially at the frontier of development, and discuss how they can be mitigated through internationally coordinated action. It is an essential first step in ensuring that the regulation of AI is done in the right way with the right structure. All voices need to be heard in the process, and this cannot be left solely to Silicon Valley and big tech companies. AI is going to be one of the most difficult technologies to regulate. There is no simple solution; however, we need to ensure that the decisions made consider a range of perspectives. In particular, we need to think about bias, both in AI and in decision-making around AI regulation. Different stakeholders will have different interests, and it is crucial that we have a balance of voices in the regulation room and the right people at the table engaging with this.

AI will revolutionise the way we work and the world as we know it - of this, I have no doubt. We have the opportunity now to shape what this will look like and to do our best to eradicate the biases that currently permeate our society at every level. We have the opportunity to create systems and technology that work to benefit everyone and have the potential to be more egalitarian than human decision-makers, influenced as they are by their own unconscious biases. The alternative is that we could end up unwittingly implementing systems that exacerbate discrimination and compound it. Businesses must tread carefully and with consideration to ensure they are part of the former rather than the latter.

 


Sources:

1 Women in Tech

2 Forbes