Religious conflict can occur when a group of people who share the same belief or religion feel threatened by another expanding group.
This xenophobia then becomes violent in 20 per cent of cases when people start to question their own beliefs and ideals.
An international team of researchers used artificial intelligence to create a virtual world with various ‘agents’ that interacted with one another.
This is the first time this form of AI has been used in research, and the scientists claim the findings could help governments address and prevent terrorism and social conflict.
The software is a psychologically realistic model of a human that mimics how we think and identify with particular groups.
It showed that people are a peaceful species by nature but humans endorse violence when others go against the core beliefs which define their identity.
Research was conducted by experts from the University of Oxford, Boston University and the University of Agder, Norway.
They combined computer modelling and cognitive psychology to create an AI system that mimics how people act in times of religious unrest.
This included cases of religious violence such as the 2013 Boston Marathon bombing and the 2007 London terror attacks.
Using a combination of several theories the team of psychologists mimicked how a human being would think and process this information.
The research, published in The Journal for Artificial Societies and Social Simulation, focused on two specific periods of extreme xenophobia that led to physical violence.
These were the Northern Ireland Troubles, which saw Protestants and Catholics warring for three decades, and the 2002 Gujarat riots in India.
Justin Lane, one of the paper’s authors, said: ‘Our study uses something called multi-agent AI to create a psychologically realistic model of a human, for example – how do they think, and particularly how do we identify with groups?
‘Why would someone identify as Christian, Jewish or Muslim etc. Essentially how do our personal beliefs align with how a group defines itself?’
‘Psychologically realistic AI agents’ is the research team’s term for agents built from a variety of theories, brought together in one model, that produce human-like responses to different stimuli.
They built this by creating a simulated environment and filling it with hundreds of human model agents.
All these ‘agents’ – or characters – had slightly different traits, such as age, race and skin colour.
It works on the simple assumption that there is a probability that each agent will interact with another agent at some point.
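That assumption can be sketched as a minimal agent-based simulation in Python. Everything here is illustrative – the trait names, the two stand-in groups and the interaction probability are assumptions for the sketch, not values from the researchers’ model:

```python
import random

random.seed(42)

# Each agent carries a few simple traits, echoing the simulated
# environment described above; trait names and values are illustrative.
def make_agent(agent_id):
    return {
        "id": agent_id,
        "age": random.randint(18, 80),
        "group": random.choice(["A", "B"]),  # stand-in for religious identity
    }

agents = [make_agent(i) for i in range(100)]

# Simple assumption: on each step, every agent has some probability
# of interacting with one randomly chosen other agent.
INTERACTION_PROB = 0.3

def step(agents):
    interactions = []
    for agent in agents:
        if random.random() < INTERACTION_PROB:
            other = random.choice([a for a in agents if a["id"] != agent["id"]])
            interactions.append((agent["id"], other["id"]))
    return interactions

pairs = step(agents)
print(f"{len(pairs)} interactions this step")
```

Running repeated steps of a loop like this is what lets such models watch group-level patterns emerge from many small pairwise encounters.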
Escalating tensions surrounding race sprang up when social hazards – such as outsiders who deny the group’s core beliefs or sacred values – overwhelmed people to the point that they could no longer deal with them.
According to the models, people only felt anxious or agitated by the outsiders when encounters forced them to challenge their own beliefs.
When their core belief system, and their commitment to it, was challenged, people could turn violent.
This, according to the researchers, only happened in 20 per cent of cases.
All of these instances were triggered by people going against the group’s core beliefs and identity.
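The escalation mechanism described above can be sketched as a toy threshold model. The threshold value and the per-encounter anxiety increment are assumptions made up for this sketch, and it makes no attempt to reproduce the 20 per cent figure:

```python
ANXIETY_THRESHOLD = 1.0    # illustrative: point at which hazards "overwhelm" an agent
CHALLENGE_INCREMENT = 0.4  # illustrative: anxiety added when core beliefs are denied

def simulate_agent(encounters):
    """Return True if the agent turns violent over a series of encounters.

    Each encounter is True if an out-group member denies the agent's
    core beliefs, False otherwise. Anxiety only rises on belief-denying
    encounters, matching the finding that people stay peaceful unless
    their identity-defining beliefs are challenged.
    """
    anxiety = 0.0
    for denies_beliefs in encounters:
        if denies_beliefs:
            anxiety += CHALLENGE_INCREMENT
        if anxiety > ANXIETY_THRESHOLD:
            return True  # hazards have overwhelmed the agent
    return False

# An agent who never faces belief-denying encounters stays peaceful.
peaceful = simulate_agent([False] * 10)

# Repeated challenges to core beliefs can push an agent past the threshold.
provoked = simulate_agent([True, True, True])

print(peaceful, provoked)  # False True
```

The design choice here mirrors the article’s claim: violence is not a default behaviour but the result of accumulated challenges to identity-defining beliefs crossing a tolerance threshold.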
Mr Lane said: ‘Ultimately, to use AI to study religion or culture, we have to look at modelling human psychology because our psychology is the foundation for religion and culture, so the root causes of things like religious violence rest in how our minds process the information that our world presents it.’
The team claim this is the first time a model of this kind has been used in real-world research.
They did this by looking at how humans process information related to their own personal experiences.
The AI model then brought together people who had had positive experiences with people of other faiths and those who had had negative interactions.
They claim they did this to understand the escalation and de-escalation of violence over time, and how it can, or cannot, be managed.