AI systems are increasingly being used by organisations to make decisions
The Information Commissioner’s Office (ICO) has released its first draft regulatory guidance into the use of artificial intelligence (AI) systems in organisations.
The guidance, which has been prepared in collaboration with the Alan Turing Institute, warns that organisations planning to use AI systems in their work must be able to clearly explain the decisions those systems make to the individuals affected by them. Organisations must also ensure that their use of AI is ‘transparent and accountable’.
We’d like your views on our first draft regulatory guidance into the use of #artificalintelligence. Co-authored with the @turinginst, ‘Explaining decisions made with AI’ is out for consultation until January 24 2020. Read more & submit your comments here: https://t.co/RZMM0DQZWD
— ICO (@ICOnews) December 2, 2019
Many firms in the UK have started using AI systems to aid decision-making. For example, HR departments at many companies use such systems to shortlist job applicants based on analysis of their CVs. Similarly, insurance firms now use algorithms to handle claims.
According to New Scientist, a survey recently conducted across the UK showed that nearly half of the people in the country feel worried about AI systems making decisions that humans would not be able to explain.
“This is purely about explainability,” says Simon McDougall, executive director for technology policy and innovation at the ICO.
“It does touch on the whole issue of black box explainability, but it’s really driving at what rights do people have to an explanation. How do you make an explanation about an AI decision transparent, fair, understandable and accountable to the individual?”
The guidance discusses how organisations can explain the services, processes and decisions assisted or delivered by AI to affected people, in an easy-to-understand form.
The guidance consists of three sections:
- The basics of explaining AI
- Explaining AI in practice
- What explaining AI means for your organisation
According to the ICO, organisations may find some parts of the guidance more relevant to them than others, depending on their make-up and level of expertise.
“We want to ensure this guidance is practically applicable in the real world, so organisations can easily utilise it when developing AI systems. This is why we are requesting feedback,” the ICO said.
The ICO will accept comments on the draft guidance until 24 January 2020, although McDougall encouraged industry experts to respond before then.
Earlier this year, Rice University statistician Dr Genevera Allen claimed that the results produced by machine learning algorithms are often misleading or wrong, and are causing a ‘crisis’ in scientific research.
Allen suggested that researchers must keep questioning the reproducibility of the predictions or the findings made by machine learning techniques until new computational systems are developed, which are able to critique their own results.