Update: Age-restricting online services

8 min read

Update

Published:

10 Apr 2026

Last updated:

10 Apr 2026


TL;DR:

An unexpected development in the legislation to age-restrict online services: the Commons has tabled a new version of the proposed regulations with the aim of compromising with the Lords. The new clause includes a specific focus on harm, and flags a possible intent to age-restrict text or voice chat and/or live streaming.

"Commons proposes compromise on online age restriction regulations"

We’ll guide you through what the changes are and why they might have been offered. If you find this article helpful and want deeper analysis or tailored advice and support, then get in touch – we’d be happy to discuss how our expertise can meet your needs. Even if you’re just curious about it all, we'd love to chat.


What’s going on?

Parliament has spent the last couple of months arguing between the Commons and the Lords (essentially Labour vs Conservatives) about how to age-restrict certain online services. The Lords are insisting that the Children’s Wellbeing and Schools Bill should have a blanket clause banning under-16s from accessing social media, but the Commons are in favour of a clause that allows the government to make broader, more flexible regulations as and when necessary (and across a far wider range of online services).


The most recent debate has been in the Lords, where they rejected the Commons version and stuck their own one back in. The Commons now needs to vote on whether to accept this rejection, and the expectation was that it would be a straightforward yes/no vote, with the government’s position winning out. Due to conventions around financial privilege, this would likely have been an end to the matter.


However, the government has unexpectedly tabled a revised proposal that makes some concessions. We combed through the old and new versions to outline the key changes for you.


What’s changing? 

The new clause includes: 
  • A focus on harm 
  • Specific references to content 
  • The need to account for different risks at different ages 
  • Restricting riskier comms features 
  • A commitment to review in 6 months 

A focus on harm

The previous government proposal just said “the government can make some laws to age restrict stuff on the internet” (we’re paraphrasing, but that’s the gist). Now it says:


“The Secretary of State may, for the purpose of protecting relevant children from a risk of harm (including harm presented by content), make provision by regulations requiring providers of specified internet services [to age restrict stuff]”


In human speak, they’re now binding themselves to a specific purpose – any restrictions they create must be with the aim of preventing harm to children.


This is a good move. Previously, there were concerns about how wide-ranging the powers would be, since the original wording didn’t specify a purpose, meaning any current or future government could ban children from online services for any reason. As this point was hammered on repeatedly in the Lords, it’s not altogether surprising that the government has made this concession.


Reference to content

In the above, the mention of ‘harm’ is specifically intended to include harm from content. This initially appears to be a strange development, since the Online Safety Act already covers content. However, because the scope of the proposed powers is ‘internet services’* rather than the narrower ‘user-to-user services’ in the OSA, it makes sense to ensure that the same principle applies.


This is almost certainly due to the debates in both Houses constantly raising concerns about content on social media and the need to demonstrate that those concerns have been very clearly recognised. Tbh, it’s probably not going to have a huge impact because almost all of those concerns relate to services that are already covered by the OSA, and specifying that harm can be caused by content isn’t particularly necessary.


* this phrase caused slight panic in the telecoms industry, since ‘internet service providers’ is usually used to refer to the companies that sell you your broadband connection. However, in the context of this legislation, it means “things that you do on the internet”. We think “online services” might have been a better phrase, which is why we tend to use it.


Accounting for age/risk

While the previous wording would have allowed the government to make different age restrictions for different services and features, the new clause actively binds them to considering “the fact that children of different ages may be affected by an internet service, or a feature or functionality of an internet service, in different ways”. This is another welcome change, since it means a one-size-fits-all approach has to be justifiable.


Riskier comms features

This is the really, REALLY big one. The government has written out a list of functions that it ‘could’ age restrict under the powers. While the clause doesn’t require them to do this, the degree of specificity (and curious similarity to some of the topics in the open consultation…) is a significant indication that they intend to legislate.


The new clause says that the government can require providers to restrict (notably, not ‘prevent’) access to a service or to a feature or functionality of a service through which:


(i) a user of the service could receive unsolicited contact from a person who is not known to the user;


(ii) a user of the service could encounter live oral communications or live video generated directly on the service, or uploaded to or shared on the service, by a person who is not known to the user;


(iii) a person who is not known to a user of the service could encounter live oral communications or live video generated directly on the service, or uploaded to or shared on the service, by the user;


(iv) a person who is not known to a user of the service could identify the actual or approximate location of the user.


We already know from the consultation that there are concerns about ‘stranger pairing’ (i.e. allowing contact between users who don’t know each other) and that this is particularly the case for video games, because of fears that playing alongside strangers normalises such contact.


The fact that this is about ‘restricting’ rather than ‘preventing’ access means there’s potentially scope for providers to use approaches that allow limited types of access, particularly where features are low-risk and ancillary. It could also be a flag that regulations will be more focused on dealing with the features of a service rather than banning a whole service because of a single feature.


Text chat

Being so broad, (i) is really significant. Text chat between users is ubiquitous across online video games, particularly when playing team games or competing against other players, and is also common during live streaming (see below). Entirely preventing contact with unknown users would effectively mean that children can’t take part in (or potentially even see) text chat. The fact that regulations will only ‘restrict’ access does potentially allow for situations where children can be involved, which is likely to be dependent on the level of risk. If that approach is taken, heavy moderation and proximity to a limited gameplay context could be examples of relevant factors. It also depends on what “unsolicited contact” means, and whether this is literally any message from a stranger, even in a group context.


Voice chat

Given that (ii) and (iii) in combination could cover ‘letting people talk to each other’, there’s a significant possibility that live voice chat is in the crosshairs. It remains to be seen what “a person who is not known to a user” means in practice and how this would be managed. While it could be possible for voice chat to still be available to kids if it’s with people they already know, it seems hard to imagine how that would work in practice.


It’s also going to be important for the government to recognise that not every form of voice/video chat is the same – if it’s heavily moderated, ephemeral, and solely to facilitate gameplay, that’s a much lower level of risk than the ‘roulette’ chat/video services. Where risk levels differ, it’s reasonable to reflect these in regulatory development.


Live streaming

The other function that’s clearly in scope (and almost certain to be age-gated) is live streaming. (ii) and (iii) between them cover both streaming and watching streaming, meaning that the government could specifically seek to restrict children from doing either/both.


What’s interesting here is that the wording of the clause is about restricting children from accessing the service/feature/function; in combination with the possible prohibition on even watching a live stream, this could cause problems for streaming platforms where you can watch without signing in. However, as the wording of the clause is to ‘restrict’ access, rather than ‘prevent’, it’s more likely that this will be used to prohibit children from making accounts, rather than viewing content. Bit like putting up a barbed wire fence instead of a brick wall.


Commitment to action

Finally, the last major addition is a clause committing the government to reporting on progress within six months of the Bill becoming law. As part of this, they will not only have to report on what they’re doing, they will also have to provide a timeline for creating the regulations. This is undoubtedly in response to repeated criticism of the government’s preferred “wait for evidence” approach and the accusations that the consultation is intended to kick the ban down the road.


What does this mean overall?

It significantly changes the context of the current consultation. We now know more about which actions are likely to be a priority, whether in terms of timeframe or boldness of action. This is miles away from the situation at the beginning of the consultation, where the vibe was far more circumspect – action was inevitable, but far less pressured and with more room to breathe and take stock. That’s not the case anymore.


It was always unlikely that there would be another proper chance to influence government policy after the current consultation closes – the nature of parliamentary process means that there is no requirement to consult on specific regulations, and the current exercise is comprehensive. With the additional pressure on the government, both from the 6-month reporting period and from the message sent by their attempts to compromise on their original position, there is now even less of an expectation that further meaningful engagement opportunities will be available.

How can Flux help me?  

Over the last few months, the regulatory environment in this area has shifted considerably. Realistically, the only guaranteed opportunity to have your say on this vital issue is the current consultation. Part of the reason that regulation could be a poor fit for the video games sector is the sheer breadth and diversity of the games and platforms on offer – there’s a real risk that some types of games, platforms, functions, or businesses could be in trouble if the nuances of the industry aren’t understood. Video game companies really need to engage with the consultation process so that the regulations built from this legislation rest on a foundation of knowledge and understanding. With risk, proportionality, and feasibility all varying from game to game, and company to company, the more voices in the room the better.


If you require advice on how this affects you, we can help. If you want to team up with some friends in the same boat and respond together, we can help. If you want us to write your whole consultation response, we can help.


And even if you just want to talk this through a bit, we’re happy to chat, with no strings attached.

Author: Dr Celia Pontin, Director of Public Policy and Public Affairs
