Dorset cyber security expert warns us to be wary of AI content
Research shows many of us are worried about the lack of legislation
A cybersecurity company in Dorset is urging people to verify online content after it found only a third of people in Wiltshire could tell if a video had been manipulated by AI.
Cybersecurity firm ESET, which has a base in Bournemouth, found in a survey that over two-thirds (67%) of people in Wiltshire have concerns over the regulation of AI-generated political content.
It also found 40% of people had come across a deepfake online, with a further three in 10 unsure but believing it likely they had.
Jake Moore, an ex-Dorset Police officer and cybersecurity advisor at ESET, says so-called deepfakes can be weaponised by criminal organisations to manipulate events such as elections.
He told us that the use of AI to exploit people is a major problem.
"For example, with the election coming up, you could see some criminal groups want to take on the likeness of Rishi Sunak, for example.
"They could make him say absolutely anything and these can be quite influential," he said.
Regulation around AI-generated content is being assessed at the moment and would follow the Online Safety Act, which became law in October 2023.
But with legislation lagging behind the speed of technological growth, the responsibility falls on the public to be aware of fake content and how to spot it.
Think before engaging with content
While AI-generated content can be convincing, Jake explained some of the potential giveaways that an online video or picture isn't what it appears to be.
"There are sometimes some inconsistencies with the movements with the person or the background, maybe some lighting issues or blurred edges, maybe even some audio sync issues that you could just spot.
"However, if something just looks or even feels a bit odd, that feeling shouldn't be ignored and this is where people need to be thinking twice before they interact with this, especially on social media as something as simple as a video can spread like wildfire on social media," he told us.
Jake stressed the importance of checking where content is coming from to ensure misinformation and disinformation aren't spread.
He said: "Extra research is absolutely key. We spend so much time on social media and on our phones and we haven't always got that much time to do the research, but it's so important to make sure we know that these videos and audio clips have got the backing of verification."
He added: "It's so important before we interact with it or comment on it or share it with our friends to make sure we really do trust it, because as soon as we lose that trust, we could start losing so much in what we are learning about and these days."