Experts concerned over the ethics of predictive AI

An AI prediction system used in UK social care has experts questioning whether its benefits outweigh the ethical problems it raises.

The system produces predictive analysis that can help determine whether a child is at risk of harm, The Guardian reports. It can also suggest preventive actions.

The predictive attributes used by one UK council include data on the history of domestic abuse, youth offending and truancy. Families are flagged to social workers as potential candidates for its “Troubled Families” programme. Thurrock Council, to the east of London, is working with predictive analytics company Xantura to better identify those needing support and to reduce statutory interventions.
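
To make that mechanism concrete, the sketch below shows how attributes of this kind might feed a simple risk-scoring model. It is a minimal illustration only: the feature names echo those reported, but every weight, the baseline and the threshold are invented, and nothing here reflects how Xantura's actual system works.

```python
# Hypothetical sketch of feature-based risk scoring. All weights, the
# baseline and the threshold are invented for illustration; they do
# not reflect Xantura's model.
import math

WEIGHTS = {
    "domestic_abuse_incidents": 0.8,   # invented weight
    "youth_offending_records": 0.6,    # invented weight
    "days_truant": 0.05,               # invented weight
}
BASELINE = -3.0      # baseline log-odds, chosen arbitrarily
THRESHOLD = 0.5      # flag families scoring above this probability

def risk_score(family: dict) -> float:
    """Logistic model: squash a weighted sum of features into [0, 1]."""
    z = BASELINE + sum(WEIGHTS[k] * family.get(k, 0) for k in WEIGHTS)
    return 1 / (1 + math.exp(-z))

family = {"domestic_abuse_incidents": 2,
          "youth_offending_records": 1,
          "days_truant": 20}
p = risk_score(family)
print(f"risk={p:.2f}", "-> flag for review" if p > THRESHOLD else "-> no flag")
```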

However, Maris Stratulis, national director of the British Association of Social Workers England, has raised concerns over whether predictive analytics is the right way to engage with children and families. Beyond data privacy, there is a risk of oversampling underprivileged groups and basing decisions on skewed data. A social services department will naturally hold more data on poor families than on wealthy ones, building that imbalance into any model trained on its records.
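
A toy simulation makes this sampling effect visible. Assume, purely for illustration, that concerning incidents occur at the same underlying rate in every income group, but that low-income families simply have more contact with services. A naive rule that flags any recorded incident then flags poor families far more often. Every rate and threshold below is invented.

```python
# Toy simulation of sampling bias: the true incident rate is identical
# across groups, but low-income families are observed more often, so
# their files accumulate more recorded incidents and a naive
# records-based rule flags them disproportionately. All numbers are
# invented for illustration.
import random

random.seed(0)
TRUE_INCIDENT_RATE = 0.1                         # same for everyone
CONTACTS = {"low_income": 10, "high_income": 2}  # visibility differs
FLAG_AT = 1                 # flag at this many recorded incidents or more

def flag_rate(group: str, n: int = 10_000) -> float:
    flagged = 0
    for _ in range(n):
        # Incidents only enter the record when the family is observed,
        # so more contacts mean more recorded incidents.
        recorded = sum(random.random() < TRUE_INCIDENT_RATE
                       for _ in range(CONTACTS[group]))
        flagged += recorded >= FLAG_AT
    return flagged / n

for group in CONTACTS:
    print(f"{group}: flagged {flag_rate(group):.0%} of families")
```

Despite identical underlying behaviour, the low-income group ends up flagged at roughly three times the rate of the high-income group, because the data reflects who is watched rather than who is riskier.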

Predictive analytics should be used alongside other knowledge to assist social workers, not in isolation, says Xantura CEO Wajid Shafiq. For example, it can flag children who might otherwise have been missed, putting them on a social worker's radar so that action can be taken.

Although some councils, such as Hillingdon in London, have reported success with predictive analytics systems, the next steps need to focus on building ethical trust in how data is used and on safeguarding privacy. Child protection is a minefield of ethical dilemmas, and predictive analytics companies like Xantura, together with the councils they serve, need to be 100% confident in their strategy for using data.