Dr Beth Singler is the Homerton College junior research fellow in artificial intelligence.
With our lives increasingly affected by artificial intelligence (AI), there’s a need for a big conversation that reaches beyond technologists.
I’m a huge science fiction fan, but I recognise it when I’m watching or reading it. The more I research artificial intelligence, the more I’m concerned about the blurring of the line between fact and fiction.
As an anthropologist, I find AI fascinating. That’s partly because it’s such a slippery term, treated differently in different contexts. To those who work in the field, it can mean a very specific, narrow tool. For the general public, it can mean many different things, not least the assumptions driven by science-fiction narratives.
The press often hypes up even the most banal AI story and illustrates it with Terminator pictures, which gives the impression that AI has possibly malevolent capabilities already.
On the other hand, I once took a taxi with a very chatty driver who asked me what I do. I replied that I work in AI and his response was ‘oh, artificial insemination’.
But while we’re being distracted by such narratives and misunderstandings, actual applications right now are potentially quite dangerous.
When we focus on big, scary futures and the robo-apocalypse, we’re not thinking about the big, scary present: the personal robo-apocalypses and the less visible forms of AI that are already being implemented and are affecting people.
Losing trust
We’ve seen the influence of AI on social media and democracy. Now we’re seeing the problem of deep fakes, which will further erode trust.
But issues of trust don’t just lie in deliberate manipulation. Unconscious bias is also having some very specific demographic impacts. There’s the well-known example of people trying to use a soap dispenser whose sensor doesn’t recognise their skin colour, because the people who built the technology weren’t from a variety of ethnic backgrounds.
Part of the problem is that the stereotypes about tech companies do hold true in a lot of instances: they’re often white, male, of certain generations – and that can limit perspectives.
While there is pushback against that, with more efforts to welcome people from different backgrounds – and while some larger tech companies have been good at forming connections with universities that have arts and humanities scholars – it’s not always apparent how much they’re listened to. Sometimes, it’s simply ‘ethics washing’.
Biased neutrality
While unconscious bias is an issue, whether an algorithm can ever be fair or unbiased is a difficult question because our definition of fair and unbiased is, in itself, never unbiased.
You can say an algorithm is being neutral – but how do we define neutrality? Who gets to define what is a neutral response? Absolutely everything that goes into an algorithm – every dataset, every formulation of the algorithm – comes with our assumptions.
Amazon built an AI-based CV-screening tool for its recruitment process and tried to make that process gender neutral. But the dataset consisted of past successful applicants, and those successful applicants tended to be men, who tended to play ‘hockey’ at university rather than ‘women’s hockey’. So even though the process never asked whether candidates were male or female, candidates who had played women’s hockey were more likely to have the word ‘women’s’ in their CV, and the algorithm picked up on this and penalised them.
It was biased because it was built on human presumptions that we’d already fed into the data without even knowing.
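To make that mechanism concrete, here is a minimal hypothetical sketch – not Amazon’s actual system, and the CV snippets and hiring labels are invented – showing how a text-based screening model can learn a gendered proxy even when gender is never supplied as a feature. Because the word ‘women’s’ happens to correlate with biased historical outcomes in the toy data, the model learns to penalise CVs that contain it.

```python
# Hypothetical sketch of proxy bias in CV screening (invented data, not Amazon's system).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Toy historical data: past hires (label 1) were mostly CVs without "women's".
cvs = [
    "captain of the hockey team, led the debating society",
    "hockey team member, maths olympiad finalist",
    "captain of the women's hockey team, led the debating society",
    "women's hockey team member, maths olympiad finalist",
]
hired = [1, 1, 0, 0]  # reflects biased past outcomes, not candidate quality

# Gender is never a feature; only the words in the CV are.
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(cvs)
model = LogisticRegression().fit(X, hired)

# Inspect the learned weight for the proxy token "women".
weights = dict(zip(vectorizer.get_feature_names_out(), model.coef_[0]))
print(weights["women"])  # negative: the proxy word alone lowers a CV's score
```

The point of the sketch is that removing the explicit gender field does nothing: the bias re-enters through whatever words in the data happen to correlate with it.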
AI in education
With AI in education there’s a balance, as in almost any application of AI, between opportunities and risk.
In the UK, the opportunity seems to be the personalisation of education. We’re at a crisis stage of underfunding, where teachers are faced with classrooms of 30-plus children and not enough time to give dedicated personal attention to every single learner, all of whom have very different needs. It makes sense to automate what can be automated so that children’s needs and requirements are picked up more effectively.
Meanwhile, some AI edtech companies, such as Squirrel AI in China, are focused on personalised pathways for education. In these products, the AI recognises each module that interests the student and then suggests the next module and the next. So the syllabus is less teacher-driven or even state-driven: it is a personalised syllabus.
My concern is that siloing children’s interests, based on them showing interest in one topic, could be detrimental. One of the wonderful things about schools and universities is the opportunities they offer to explore new subjects and find new areas of interest that you didn’t even know you could have.
This kind of AI-based recommendation system can also go terribly wrong. Or it can just be poor technology.
For example, on Amazon, the system sometimes recommends things similar to something you’ve already bought … well, I don’t want 20 rugs. I’ve just bought one rug, so why would I still be looking at rugs?
Voices in the room
The answer to all of this is having a variety of voices in the room. You cannot leave it to one group of people, because it will have an impact on many different types of people: the integration of AI into our lives, day to day, is more than just a technological application.
It’s going to impact people’s choices, their lives and the directions they take their lives in. It’s not possible to reflect on that impact purely from a technological standpoint. That doesn’t take on board the human element.
We need anthropologists and social scientists, historians and people from the arts and humanities to be part of this conversation.
Speaking as an anthropologist, we’re particularly useful because we’re so engaged with human communities and ideas. We can see some of the repercussions and, sometimes in advance, spot when the knowledge isn’t there in the technological sphere to say what an application will do in a community.
That’s not prediction. It’s having a cultural understanding of interactions between humans that may not be immediately apparent in the application of technology.
Time for the conversation
We know that humans are the creators of bias. We’ve relied on human judgment without AI for centuries and we know it’s flawed.
But if we get enough humans into the conversation, we can try to find the least bad solution. We can stop blindly relying on the output of any algorithm and instead critique it, deal with AI’s black box issues, and ask how it came up with the decision that emerged.
What are the elements in the data that have created this decision? If those elements are collectively decided to be bad in our current society and we don’t want to see that bias, we should push back against the algorithmic decision, using critical thinking, cultural interactions and common sense.
Are we ready for AI? We have to be: a lot of the applications are already here, embedded and having an impact on our society. So instead of asking that question, we’ve got to keep talking about it and making it visible when it’s invisible.
We have to engage the people who, like my taxi driver, don’t even think about AI as a topic. We have to spread that conversation wider – and we’re absolutely ready for that.
About this post: This article first appeared on https://www.jisc.ac.uk/blog/ and is reproduced by kind permission of UKSPA’s Digital Partner JISC.
Beth Singler is a keynote speaker at Networkshop48, 15-17 April 2020. Dr Beth Singler is the Homerton College junior research fellow in artificial intelligence. She has produced a series of short documentaries – the first of which, Pain in the Machine, won the 2017 AHRC Best Research Film of the Year Award. In 2017, she spoke at the Hay Festival as one of the ‘Hay 30’ shortlist of best speakers to watch. She was also included in the Evening Standard’s Progress 1000 list of influential people in both 2017 and 2018.