So if you've not heard, the OSA, or Online Safety Act 2023, was recently introduced in the UK. And, well, it's unsurprisingly not fantastically popular, except among those who don't entirely understand what it does.
I'm giving a very basic description here, but the law essentially requires platforms that might host content that could harm children (a category left broadly vague beyond pornographic and violent content) to introduce protections ensuring children can't access such content.
The recommendations being broadly pushed here are either verification via some form of government-issued ID (such as a driving licence), or a photograph of the individual's face that can be passed to a facial age estimation solution to determine whether the individual is old enough.
This also extends solutions that are already in place. Voluntary ones, mind, but applied by all major operators in the UK.
Mobile Internet Blocking

UK mobile phone operators began filtering Internet content in 2004 when Ofcom published a "UK code of practice for the self-regulation of new forms of content on mobiles". [...] All major UK operators now voluntarily filter content by default.

ISP Default Network Blocking
Internet customers in the UK are prohibited from accessing a range of web sites by default, because they have their Internet access filtered by their ISPs. The filtering programme has applied to new ISP customers since the end of 2013, and has been extended to existing users on a rolling basis. A voluntary code of practice agreed by all four major ISPs means that customers have to 'opt out' of the ISP filtering to gain access to the blocked content.
These on their own already have their share of problems, such as being overzealous in some cases and blocking people from accessing informative websites unless they verify that they're 18 or older. I've had to deal with this myself. But on the plus side, you only need to do it once, and then you can pretty much forget about it.
Three big questions here, though. If parents are concerned about their child viewing inappropriate content, then who is disabling this filtering? How many children are actually bypassing the filter on their phones, and how? And why did the government feel it necessary to add another layer of protection here?
We often mock China for its Great Firewall, but ironically we essentially have the same thing in the UK. Fortunately, for now, you can opt out. But how long is that going to be the case?
What's bad and what's not?
The primary idea is for porn and violence to be hidden from children. Simple enough, right?
Mind that "children" here means anyone up to the age of 18. I think this distinction is actually important, as much as we keep hearing the word "children", because as far as age ratings for published media go, we divide them into 12, 15 and 18 (at least in this country).
If any content is deemed either inappropriate or potentially harmful, then verification now needs to be in place to ensure that an individual is 18 years of age or older before accessing that content.
This creates some weirdness. It's obviously not practical to attribute an age rating to every bit of content on the web, so if content isn't appropriate for a 7-year-old, it's now treated as if it's not appropriate for a 17-year-old either, despite a 10-year difference.
The age of consent in the UK is 16. There are sex advice resources that might not be appropriate for younger children. Well, now they're treated as inappropriate for 16- and 17-year-olds too. Oops.
We also just gave 16-year-olds the right to vote, but now it's not possible for them to view some content on Gaza, Ukraine and other events transpiring in the world. So we have a group that can vote, but can't stay informed?
Per BBC News...
Among the restricted content identified by BBC Verify was a video post on X which showed a man in Gaza looking for the dead bodies of his family buried among the rubble of destroyed buildings. The post was restricted despite not showing any graphic imagery or bodies at any point in the clip.
[...]
The same warning was experienced by users who attempted to view a video of a Shahed drone destroyed mid-flight in Ukraine. The Iranian-made drones, which are widely used by Russia in the full-scale invasion, are unmanned and nobody was injured or killed in the clip. [...]
Unsurprisingly this is extending to historical content too.
[...] Another post restricted on X shared an image of Francisco de Goya's 19th-century painting entitled Saturn Devouring His Son. The striking work depicts the Greek myth of the Titan Cronus - known as Saturn by the Romans - eating one of his children in fear of a prophecy that one would overthrow him and has been described as depicting "utter male fury". [...]
There was another interesting quote in there.
[...] Professor Sonia Livingstone - an expert in children's digital rights at the London School of Economics - said that companies might "get better over time at not blocking public interest content while also protecting children" as the law beds in over time. [...]
With fines of £18 million at minimum, there is no room for even taking the risk. It's £18 million or 10% of global revenue, whichever is greater. This is not going to get better; it's going to get worse.
We're even seeing this start to apply to music, even though Spotify already provides a separate platform for under-13s called Spotify Kids. The act inadvertently treats everything that's possibly suggestive, explicit or inappropriate as content you need to verify your age to access.
There are discussions about whether LGBT resources are appropriate for children. There is a high probability that we could see Reform utilise the OSA to block such content for anyone under the age of 18, perhaps even sooner due to mounting pressure from parents.
Again, the term being used is "children", but we're actually talking about several different age groups; because of the way the law works, you're essentially pushed to assume you need to protect the youngest potential audience.
How long before this extends to episodes of Tom & Jerry due to depictions of smoking and violence? Other items of history blocked for being too explicit? A documentary on the atrocities of World War II, such as Auschwitz, for being too graphic?
Verification
There is no verification solution provided by the UK government, so instead it falls upon third-party services to provide sufficient solutions.
Users don't get to pick which verification service they use; that decision is left to the website being accessed. The two most popular services in use, Persona and Kids Web Services, are both operated by for-profit US companies.
Do you want to give some US company you know nothing about a photo of your face? Here's a quote from a BBC News article regarding a recent breach of a dating app which required photo ID for verification.
Tea Dating Advice, a US-based women-only app with 1.6 million users, said there had been "unauthorised access" to 72,000 images submitted by women.
Some included images of women holding photo identification for verification purposes, which Tea's own privacy policy promises are "deleted immediately" after authentication.
We've already got a documented case in which a large and popular mainstream app didn't follow its own policy. So why am I expected to just trust any other website to follow through with this when there's absolutely zero oversight?
I've seen a lot of talk about this from the adult perspective, where the obvious issues are potential ID theft, blackmail and so on. But let's look at this issue from a different perspective.
Imagine a young teen attempts to submit a photo of their own face to some random website to try and get past the verification. Because I guarantee you, some are going to be stupid enough to do this.
Now imagine that website doesn't delete those photos but retains them, sells them on, or uses them for nefarious purposes. Well, now we've got some nefarious website holding photos of who knows how many children it can potentially use for blackmail, never mind just adults.
Imagine the potential harm that could have.
As we drive these websites further and further underground, and people end up on ever more nefarious sites that ask for verification, this issue is only going to get worse. It's just waiting to happen.
There's a lot more that could be said about driving this content underground, as is inevitably going to happen, but I'll leave that for another time. Critically, though, it's ironically going to get harder and harder to monitor and regulate this content, and young people will end up exposed to worse material.
Personally, I've since started using a VPN to protect my identity from being stolen or retained by any of these websites, and I'd honestly suggest that others concerned about identity theft consider doing the same. Ironically, there's certainly a chance that a VPN might be retaining information about you, so I'd suggest avoiding anything free; if it's free, you're likely the product.
Ignorance of Criticism
Currently, it seems any criticism of the act results in accusations of paedophilia? That's not an accusation that should be made lightly, and yet we've now heard of two instances of this happening.
Nigel (fuck face) Farage was critical of the act, and as a result was compared to Jimmy Savile (per the Guardian). I'm not even sure where to begin with this, as nothing in the act would've prevented what happened with Jimmy Savile.
Another case, which I can't find the article for, was apparently an accusation made behind closed doors during the previous government, against an IT specialist, if I'm remembering correctly.
It worries me deeply that the government is choosing to dig its heels in on this, appears unwilling to listen to any criticism, and is using this kind of language in response. It's absolutely unacceptable.
Final Thoughts
I was debating whether or not I wanted to go into more depth here, but the further comments I have lean a little beyond general criticism and more into areas I think should be looked at to solve these issues properly if this act is repealed.
Obviously I was a young teen once upon a time, so I've got my own personal experiences I'd eventually like to share. They seem to match up with discussions I've had with others, and I think they expose how harmful these approaches can inadvertently be for some people growing up and discovering themselves.
But again, I'll save that for another time. Critically, though, I think this act needs either to be repealed or at least heavily revised, in particular putting more responsibility on the parent rather than the platform holders.
This would've been longer, but it's generally depressing to write about. As much as I've seen people mocking the UK for this, we're already starting to see others (Australia and the EU in particular) look at implementing similar systems.
I strongly believe it should be up to parents to judge whether their child is mature enough to view different types of content, guided by the advisory information available (age ratings or otherwise). This is an education and information problem, not something governments should take it upon themselves to enforce.
As a final note, if anyone knows how I could get directly in touch with someone involved with this act, I'd really like to interview them and put forward some of my concerns. I would like to better understand the problems and the thought process behind it, and perhaps debate solutions.