Education Committee — Oral Evidence (HC 1839)
Welcome everybody to this oral evidence session of the Education Committee. This is the second of two evidence sessions that we are holding on the topic of screentime and social media. The purpose of these two sessions is to inform the contribution that the Committee wants to submit to the Government’s consultation on screentime and social media, which is currently under way. We are very grateful to our witnesses for being with us this morning, particularly as some of them are sharing direct personal experience of the impact that the lack of regulation of screentime and social media has had on their lives to date. I invite our witnesses in turn to introduce yourselves to us and, if you could, to make any relevant declarations about the funding that you or your organisation receives for the work you are doing in this area. I hope you will understand why transparency on that is important for this debate.
Good morning, everyone. I am Rani Govender. I am an associate head of policy and public affairs at the NSPCC, and I lead our child safety online team. On the financial matter, at the NSPCC around 85% of our income comes from supporters. We also receive funding from corporate partnerships, trusts and statutory sources. Our research and work on child safety online in particular is supported by the Oak Foundation, whose support funds a substantial part of our research in this space. I also want to note that the NSPCC has a corporate partnership with Vodafone. That partnership helps to fund our resources for professionals and some of our youth participation work. It does not fund our policy research into child safety online, and it has no impact on our policy development in this space.
My name is Esther Ghey. I am the founder and director of the Brianna Ghey Legacy Project. We receive funding from philanthropic sources, along with public fundraising and local business support in Warrington.
Good morning. I am Andy Burrows, chief exec of the Molly Rose Foundation. Thank you for enabling me to appear virtually. MRF funding comes from three main sources. We are heavily reliant on trusts and foundations, including the Oak Foundation and the likes of the Royal Foundation. We are also reliant on public fundraising and individual donations. I would draw to the Committee’s attention that we have received two anonymous donations, and there have been media reports linking those to Meta and Pinterest. I would like to offer some additional context on that, as much as I can, because the Committee might be aware that there have been attempts to question our independence and at times to impugn the motives of the foundation and the family in respect of the ongoing debates. Meta and Pinterest were the two companies that the coroner in Molly’s inquest ruled played a direct contributory role in her death. The family have been clear that they would not ever want to benefit financially as a result of Molly’s death, and that instead of pursuing legal action, they want to pursue change through the Molly Rose Foundation. I hope that gives you some context on some of that reporting.
Thank you very much indeed. I will begin with a question for Esther. You founded the Brianna Ghey Legacy Project in memory of your daughter and launched the Phone Free Education campaign following her murder in 2023. Can you tell the Committee about the role harmful online content played in your daughter’s death?
I will start by telling the Committee how it actually impacted the last two years of her life. Brianna really struggled with her mental health, which was a result of her smartphone and social media addiction. She was self-harming and had an eating disorder, to the point where she was hospitalised due to the amount of weight that she lost. It was after her death that I found out that she had been accessing content online that was encouraging her to do this. She was increasingly isolated due to being excluded from school because of her phone use. She just could not put her phone away. She would refuse to put it away. She would go to the toilet to film TikTok videos. She would be texting other children in class. Because of this, she would not engage with health professionals or youth groups either. There were times when I tried to get her involved in youth activities outside school, such as an organisation called TAGS, which is a trans and gay youth organisation in Warrington. She went a couple of times, but she refused to engage. Then she went home into her room, completely isolated, and back on social media to the people who were doing her harm in the first place.

Just to say, Brianna’s case is not isolated. I have been working closely with the NASUWT, and its members are increasingly telling us that social media is a key factor behind worsening behaviour in schools, with 59% in its 2025 survey identifying it as the main driver. In secondary schools, around two thirds of teachers say it affects pupil behaviour. Even in primary schools, nearly half report the same, despite children being below the age of consent for most of these platforms. Teachers are also saying that children arrive tired and irritable, struggling to follow the rules and finding it harder to communicate. Many also link excessive social media use to a loss of interest in learning, and warn that the disrespectful and sometimes cruel behaviour that young people see online is becoming normalised in how they treat each other.

I will read three quotes from teachers quickly; they are only short—I do not want to take up all the time. Two are from secondary schools, and the third is from a primary school: “Pupils feeling extreme social pressures. Pupils with extremely low self-esteem and constant reliance on positive feedback via social media”; “Students go down rabbit holes and end up on sites that offer detrimental advice, spread hate and misinformation”. This is one that really stood out to me: “Children are watching violent and sexual behaviour on the internet. This behaviour is being seen replayed in schools. The number of pupils displaying sexualised behaviour has increased hugely. I teach six and seven-year-olds.”

It is not only me and it is not only Brianna who have been impacted by this—it is every child in our country.
Thank you very much. You might have seen that last week we had some of the social media companies in front of us, and we will be hearing from another one later this morning. Do you think social media companies are doing enough to protect children from online harms?
No. I do not think that they are admitting that there are online harms or that their platforms are addictive. There is a complete lack of any moral responsibility.
Esther, you published a statement this week about your engagement—or lack of it—with the Government, criticising the Prime Minister for not having met you. What difference would such a meeting make to you at this time?
It is not just for me; it is for other bereaved families as well. Thirteen other families have signed the letter. I met Sir Keir Starmer before he became Prime Minister, and it would be really useful for him to hear from bereaved families about their experiences, to really understand what is going on in the country.
Andy, the Molly Rose Foundation campaigns for the removal of harmful content from social media following the tragic death of Molly Russell in 2017. How did harmful online content contribute to the circumstances surrounding Molly’s suicide?
Molly was 14 when she died in November 2017. It took almost five years for Molly’s inquest to take place, largely because of delays from tech companies in providing information. However, we know that Molly had been algorithmically recommended substantial amounts of harmful material, including content that promotes and glorifies suicide. We know that she was exposed to at least 2,000 posts on Instagram in the six months before she died. In reality, that figure is likely to be far higher. The impact was cumulative in nature. As someone who has subsequently spent time on accounts opened in the guise of a 15-year-old, I can tell you that being bombarded by that content, so that your feed is post after post of material that is recommending suicide, promoting suicide and self-harm methods or feeding a sense of helplessness and despair, is deeply claustrophobic.

At the inquest, it was ruled that social media played a not insignificant contributory role in Molly’s death. It is, as far as we are aware, the first time that a coroner’s court has made that finding of fact. It was a tragedy, but eight years later, it is one that we know continues to take place. We lose a young person to suicide, where technology plays a role, every week in the UK.

The research that MRF has undertaken has shown that very little has changed. In particular on platforms such as Instagram and TikTok, children are still being exposed to a continual barrage of harmful content. In many respects, that content is more impactful and dangerous than it was eight years ago. Having heard the evidence from the tech companies last week, where Instagram cited its teen accounts, the reality is that you can open an account as a 15-year-old and continue to be algorithmically recommended substantial amounts of harmful content. Our research has shown that two thirds of the measures that Instagram talks about in its teen accounts are either substantially ineffective or could not be triggered at all when tested.
That is really helpful to hear. You have already mentioned the evidence that the Committee took from social media companies last week, where they asserted that they have already done a huge amount to protect children online and that their algorithms are designed to promote positive content, not just whatever is emotionally stimulating. Do you accept those points? More broadly, do you think that social media companies are presently doing enough to protect children from online harms?
Social media companies have demonstrably failed to take children’s safety and basic safeguarding norms into account. There is an utter absence of safety by design. Fundamentally, that is because the incentives are not there for tech platforms to prioritise child safety and wellbeing—whether that is in terms of reducing exposure to harm or, more positively, in terms of being able to offer a safer and more nourishing experience. Further research that we undertook just before the Online Safety Act came into force showed that 48% of girls—one in two—aged 13 to 17 had been exposed to harmful content in the previous week: that is suicide, self-harm, intense depression or eating disorder content. The evidence that the companies gave last week was really what we have seen for many years: an exercise in PR lines and performative steps rather than meaningful action. I am sure that we will get on to what the next steps should be. However, ultimately, the Online Safety Act is not doing enough right now to force companies to meaningfully change their design choices and commercial decisions. Whatever policy solutions we adopt, until we set incentives that meaningfully force a change in the culture, design decisions and commercial motivations of these companies, child safety will always be a secondary priority for them.
This is a very good time to ask this question. As you know, representatives from TikTok, Meta and Roblox told this Committee last week that a social media ban for under-16s would be difficult to enforce and might push young people into darker, unregulated online spaces. Andy, the Molly Rose Foundation has previously argued that a blanket ban on social media for under-16s would not be the best option. What alternatives to a ban would provide sufficient protection for children online, and what evidence would support that view?
I would first say that we can see from Australia and our research there that this is a matter of the platforms demonstrating malign compliance. The age assurance technology is there to do a far better job than the companies have done so far.

Universally, parents say two things. First, they want to see decisive action. It is clear that parents’ patience has snapped, and rightly so. Secondly, they want to have confidence that the next steps in terms of policy solutions will be decisive and will work. That is why we are concerned that a ban will not deliver. In Australia, three in five children still have access to at least one active account, half of children say that they do not feel any safer and one in seven children say that they feel less safe. For us, the reality is that if we cannot have confidence that a ban will be watertight, children will continue to be on these platforms. Until and unless we make sure that these platforms are safe by design, children will continue to come to the same harms that we see now: the algorithmic recommendation of harmful content, and child sexual abuse and grooming, which continues to escalate markedly.

Only if we fundamentally shift the incentives on platforms, so that they have a clear, overarching responsibility, will meaningful action be taken. We are talking about some of the largest, most cash-rich companies in the world. The Online Safety Act indicated that, to adjust the algorithm in respect of suicide and self-harm content, platforms should be spending the equivalent of what Meta spent on a couple of weeks of advertising on the side of the IMAX on the South Bank in Waterloo. Until we regulate these companies effectively—the clear parallel is with the way we regulated banks after the 2008 crash—and until the financial incentives and the broader stack are there to make platforms recognise that regulation is not just something to pay lip service to, we will regrettably just see more of the same.
To push that point, what specifically do you want to see in place of a ban? A ban is pretty clear and unambiguous. If, as you say, tech companies are subverting it in Australia, despite it being clear and specific, they will surely just subvert your suggestions.
That is exactly the point. In Australia, this is a signature policy of the Albanese Government and a world-leading experiment, but the companies have still been able to subvert it through malign compliance. That is because the sanctions are frankly not there. We need a significantly strengthened Act, where the regime applies with tough sanctions and where we learn lessons from financial services—I am thinking of the senior managers regime—so that there are clear personal incentives for senior decision makers in the platforms. Until we change the culture and the incentives, whether we are looking at regulation or a ban, we cannot expect to get the outcomes that we need. These companies are just too big for sticking-plaster solutions.
Thank you very much. Esther, thank you so much for your evidence and for coming to the Committee; we really appreciate it. You have specifically campaigned for age restrictions on social media to protect children from online harm. How do you think an age restriction would help? What are your thoughts on the concerns that organisations raised last week about the limitations and risks of a social media ban?
I would like to see age ranges. We were having a discussion before, and the NSPCC—sorry, I am speaking on your behalf—is suggesting different age ranges for different social media platforms. I think I can agree with that. However, at the moment, the minimum age for most of the social media platforms that we have needs to be raised to 16.

To expand on what Andy was saying, in March 2026 in Australia, the eSafety Commissioner reported that two thirds of parents of teens on social media said that no age verification process was required, and complaints indicated that platforms often fail to act when underage accounts are reported. We know that that is an issue. Again, it is social media companies not complying with the regulations in Australia. That is something that we need to do better, but it does not mean that we should not do it. Early non-compliance is not unusual; think of the initial resistance to seatbelt laws or smoking restrictions. Although social media companies have not been fully compliant, as expected, a March 2026 YouGov survey of parents of children aged 16 and under shows that the ban is already benefiting young people. Sixty-one percent observed two to four positive behavioural changes. Among the improvements, 43% reported an increase in in-person social interaction, 38% reported greater presence and engagement, and 38% reported improved parent-child relationships, so there are benefits already. It is not perfect, but we would not expect it to be perfect straight away.

Last night, I was looking through the App Store. As I mentioned before, different apps require different ages. A couple of apps that Brianna was on were Omegle and Bigo Live. I would highly recommend having a look at them. Bigo Live has a tiny dinosaur cartoon that looks really cute. If you were a parent spot-checking your child’s phone, you would not look twice at it, but it involves livestreaming and meeting strangers. It is basically like a dating site. It does say that it is for 18-plus, but after looking at those apps, I was recommended Monkey live video chat, Twitch livestreaming and ChatHub, all of which said that you need parental guidance, so they are not 18-plus. Snapchat is absolutely shocking. On the App Store, it says parental guidance. There is no way that Snapchat is suitable for anybody under the age of 18.

If you wouldn’t mind, I would like to share a story from a parent I know really well. I am very close friends with this person, and it is shocking for me to hear something so close to home. Her child—an 11-year-old girl—was contacted on Snapchat by somebody posing as another young person. She said that “over time, this person built trust and formed what seemed like a friendship. It later became clear that this was an adult, who went on to send explicit sexual videos and invited my child to a group chat with other children, where sexual images of the children were shared. It was a shocking and deeply distressing experience for our family, and it had a lasting impact. We reported everything to the police, but even they face huge challenges. Fake accounts, disappearing messages and encrypted platforms make it extremely difficult to trace who is responsible. The police were investigating but have now said that they can’t trace the owner of the account, so other children are still being targeted and this is still going on.” It is shocking what is happening on a daily basis.
Thank you both for your answers. I really appreciate it.
I have a question for Esther and Andy; I will go to Andy first. Last week, as you will be aware, we had witnesses from Meta and TikTok in front of the Committee. Depressingly, although probably not surprisingly, they both rejected the idea that their platforms are addictive, although they did accept that there was a risk of over-consumption. How would you describe the relationship of children and young people with social media, and how would you respond to their claims that their social media platforms are not addictive?
The relationship right now is exploitative. The evidence about population-level harms is obviously mixed, but in essence we are probably looking in the wrong direction, because we should be focusing on intent. It is very clear that platforms like Instagram are built on a business model that is engagement-based and designed to keep eyeballs on screens for as long as possible. In that respect, there is no differentiation between the experience for teens and for adults. These platforms are inherently driven by personalised algorithmic recommendations. You just have to look at the recent earnings report from Meta, where you see time and again that they are boasting about improvements in “time spent” metrics as a result of their investment in AI. The idea that they are not building these platforms to maximise time spent and ensure that they are addictive is frankly absurd. It speaks to the lack of incentives in place that the platforms are continuing to be able to build platforms that are addictive by design, and then they have the chutzpah to come in front of your Committee and deny the glaringly obvious.
Thank you. Esther?
Exactly what Andy said—I don’t have anything else to add.
My question is to Rani to begin with. The NSPCC has said that a ban on social media for under-16s is preferable to inaction, but you have stopped short of supporting a statutory ban. Could you share the key reasons for that position, please?
Just for context, at the NSPCC we are of course deeply concerned by the harms that children experience online. In 2024-25, Childline delivered over 3,300 sessions with children and young people about the harms that they experience on online platforms. It is clear that the status quo is not working and that urgent change is needed.

At the NSPCC, we are proposing something that we think is more ambitious and, importantly, more comprehensive than a specific social media ban. We want to see the introduction of risk-based minimum age limits, so that we have new, clear criteria and standards that set out what a service must comply with for it to be 13-plus or 16-plus, and what cannot be available and accessible to children at all online. That is similar to the approach that we take in gaming and with films. In those sectors, we recognise that the breadth of what children can see and experience means that we have to think about a child’s age and stage of development in terms of what they can access. We don’t see any reason why the online world should work any differently. We want these new standards to be introduced and robustly enforced.

In terms of what would go in those standards, we would absolutely need to look at harmful content. We also welcome the fact that the Government’s current consultation is looking at harmful functionalities—things like livestreaming and location sharing on social media. It is crucial that they are turned off for under-16s. As we have just said, we should switch off addictive design features like auto-play and endless scrolling, which keep children using those platforms.

As I have said, we want this to go beyond social media. It would be a shame, and would let children down, if the only focus at the end of this consultation was on the harm of social media. We absolutely need to address that, but we need to go further. We need to look at gaming, messaging apps and AI chatbots. We need to ensure that whatever clear standards we end up with, in terms of what age-appropriate experiences should be, apply across the online world. I guess our approach can be summarised by saying that we want to incentivise safety. Where platforms cannot meet these strict standards, they should not be able to offer their services to children.

When we are talking about the bans and what is going on across the world at the minute, although we welcome the focus on the age at which children should be able to go online and do certain things, it is important to recognise that there is no yes or no, on or off answer. All the approaches that have been taken across the world look a bit different. In Australia, they have a broad definition of social media, but compliance is really focused on 10 services. In Denmark, they are setting it at 15, but with parental consent, so children can go online younger. In Indonesia, they have a definition of high-risk platforms, so they are looking beyond social media, and they are focusing compliance on eight companies. We have an opportunity in the UK to think about how we can be as bold and ambitious as possible for children, to look across the online world and all the harms that children are experiencing, and to put something in place that reflects the risks that children experience now but is also future-proof and can react as new risks and services emerge over time.
We have to recognise that some of this is a matter of language—let us take location sharing, for example. A restriction on location sharing is a ban on location sharing for certain people. It is just that some people react to the word “ban” as being more sensationalist. That is probably worth noting. You have outlined a number of other measures. Most people who argue for one measure would accept that there is no single one that will solve the issue. Are you confident that measures other than a statutory ban will be more effective than a ban?
I do not see our position as in opposition to the discussions around a ban. It goes back to my point about what other countries are doing: there is no single model of a ban. Whatever the Government choose to do, they made it clear last night that there will be restrictions at the age of 16 on children’s experiences online. That is going to require some definition in terms of where we draw those lines. What we are advocating is that we should make those expectations really clear. We think some things, as you say, should be banned for under-16s, like location sharing or livestreaming. But we think there is space to have safer online services for children, and where we can incentivise that safety, we support that happening as well. We are certainly not in opposition to strong restrictions coming in on what is harmful for children. Like you say, a lot of that has become semantics. What we are saying is that there is no easy route out of this. This is complex, and the enforcement measures that come with it will have to be really robust. We want to see the Government really grasping the challenge with both hands and putting something comprehensive in place. I do not think this is an either/or. It is about looking across these measures and really pushing services to stop offering harmful functionalities and services to children, and where services can be made safer, making sure that that happens as well.
On the issue of effectiveness, I have a quick supplementary for Andy. The MRF put out its response to the first few months of the Australia ban, and your conclusion was that the Australian social media ban is not effective. Is that a hasty conclusion to come to after 92 days? Isn’t it a matter of perspective? Your own figures say that in those 92 days, 39% of users are off completely. By platform, most have halved. In one case, usage has reduced by 60%. Some might see that as quite an effective thing to achieve in 92 days. Do you have any thoughts on how strongly you have gone in with, “This is not effective”, so that we can look at other things?
I should say that we were also clear that it is very early days. A large number of systematic reviews are under way, and in 12 to 18 months’ time we will have a much clearer sense of the harm-to-benefit ratio. We felt that it was important to commission that data, given the speed at which public policy discussions are taking place in the UK, but it is clearly only an interim snapshot. One thing that is important, if we do proceed down the route of an Australia-style ban, is that we are ready to learn the lessons of what is working and what is not. Many proponents of an outright Australia-style ban here in the UK have advocated for it as a decisive firebreak, and our data suggests that it is not that. That is not to say that there cannot be advantages that come from it. This is really an attempt to ensure that we can have a nuanced, evidence-based discussion of what the implications of a ban will look like and, if we do go down that road, that appropriate lessons are learned. To go back to the issue of malign compliance from the platforms, one of my concerns here in the UK would be that we have two regulators, Ofcom and the ICO, neither of which has been prepared or willing to set out outcome-based measures for what highly effective age assurance looks like. There is no figure that says highly effective age assurance means 90% or 95% of children, or, wherever you draw that line, what that looks like. If we do go down the route of a ban, which I think would be regrettable, we then have to make sure that it is watertight. The evidence from Australia shows us that there are clear imperatives about how any ban might be brought forward.
We have seen from the experience of an outright ban in Australia that there are positives and negatives, and that the ban there is clearly not working as comprehensively as the lawmakers might have hoped when they came up with it. Do you think that a slightly more nuanced approach, along the lines of what Rani was saying, that picks out different types of harm and different types of functionality and content, and bans those for particular age groups, would be more effective? For example, it could involve saying that if you have access to a chatroom with strangers, the platform has to be for over-16s—or location sharing, or algorithmic content, or whatever it might be. Do you think that would work better than just banning a platform per se?
That is absolutely the approach we would like to see, and I agree absolutely with what Rani said. It is important, because it means we can then start to meaningfully incentivise the tech companies; if they want to offer products to younger children, they have to take out that high-risk functionality. It also means we have measures that can apply across the stack—so not just to social media, but to gaming companies. When I look at the risks—particularly the criminal risks around children being groomed into acts of suicide and self-harm—they are taking place on the likes of Roblox and Discord rather than on social media. That functionality-based approach is risk-based and harm-based, and it means that it can apply not just across social media but across other platforms, such as gaming services and AI chatbots, where many of these risks now also sit.
And it incentivises better behaviour by the tech companies, ultimately.
Exactly.
Following on from some of the previous questions, let’s take this opportunity to home in and focus on the Australian Government’s social media platform ban for under-16s. Obviously, I will give all of the panel an opportunity to express their views. We have heard, even in this session, some contradictory and inconsistent data coming out of Australia at this time. We have heard that children are finding it relatively easy to circumvent the ban. We hear that two thirds of young people are still accessing some form of social media, yet we also hear that over 60% of parents report positive behavioural changes by their children. My question is: has it been effective so far, and what lessons can the UK Government learn?
For me, the statistics that I have presented to you around families and parents show success. As a parent, the main thing for me with Brianna was the struggle in our relationship; we were just constantly battling over her phone use and over social media. I hear from parents a lot that they just want to be able to say to their children, “No, you are not allowed on this,” and I think that works to an extent. A lot of parents are able to keep their children off social media until the age of 13, because they use that as an age guide. I believe that it is successful, but we also need to wait a bit longer to see the success from Australia. That does not mean that we have to delay. There are already things to learn from there, and the UK could do a better job, because we have the benefit of hindsight as well.
Part of the lessons that you mentioned earlier was around compliance by the social media platforms.
Yes, and hopefully I will stick around to hear from Snapchat on the third panel. I would also like to offer them my book, because they need to know and they need to take moral responsibility. I am completely disappointed in social media companies, and I think it is about time that they also take responsibility.
Thank you. Rani?
As you set out, it’s a mixed picture from Australia at the minute. The eSafety Commissioner there shared that 4.7 million accounts belonging to children have been removed since December, and that is not inconsequential. At the same time, we are also seeing stats showing that around seven in 10 parents say their children still have social media accounts, and the commissioner has set out significant concerns about the steps that services are taking to comply.

What we really need to learn from, though, is the scope of the Australian social media ban. They have technically defined the scope broadly in terms of how social media works, but compliance comes down to looking at 10 key platforms; there is little to no focus on what other services are doing under that system. I have a couple of key concerns with that. One is that we know that the harms children face extend well beyond those 10 services. Even if you do get really effective restrictions on those, it is not about children moving to dark or unregulated spaces; it is about them using the other platforms they use day in, day out—platforms like Roblox, Discord and those that are out of scope. That is why a key part of our recommendation is that this has to go beyond just looking at social media; it has to look at the risk services pose and use that risk as the basis for whether children should have access or not.

I also think it poses a challenge in terms of future-proofing. If those 10 platforms start to be used less—as should, hopefully, be the intention in Australia—and other versions emerge and pop up, will the legislation be adaptive enough to tackle those harms? That is why we think this focus on the risk services pose, set out as a framework with clear age limits, could be much more robust in capturing the range of harms now and much further into the future.

My last point is to pick up the issue of enforcement again. I will not go into it, but I want to reiterate the point that all the solutions we are looking at will require a better understanding of who children are online, where they are and what they are doing. Services have completely failed to do that effectively at the minute, and we need much stricter legal powers to force them to do it properly. We also need not only the use of fines, but senior manager liability and the quick use of business disruption measures where services are simply ignoring their duties to children.
Andy, is there anything you would like to add?
I have two very quick points around enforcement. We have to be realistic about Ofcom’s approach so far. It has taken them 13 months and an investigation that still has not concluded to take action against a pro-suicide forum directly linked to at least 164 UK deaths. There are big questions about how we see the step change in enforcement that is required, so that we do not end up in a situation where we have a ban, but then see a similar set of impacts to what we are seeing in Australia—where the picture is messy—and it takes Ofcom an eternity to take action on enforcement.

I would also point to an example from South Korea. In 2011, the South Korean Government banned online gaming between midnight and 6 am in response to concerns about excess use and addiction, and the impact of pupils turning up to school tired and that affecting their educational performance. In the first few months of that curfew coming into force, we saw immediate benefits, with time spent online dropping and some positive indicators. But by year four, we saw that children were actually spending more time online and gaming than they were beforehand. Internet addiction had decreased by just 0.7%, and the average child had benefited by an extra 90 seconds of sleep. If we go ahead with a ban, we need to be open-minded about its limitations; we cannot conclude that it is a magic bullet. We also need to recognise that the impacts may improve over time—we may see benefits over time—but that those may start to ebb away, as we have seen in South Korea. We need to monitor this; just assuming that a ban can achieve everything would be misplaced.
We are under heavy time pressure this morning—it is quite unusual for us to have three panels—so I need to ask our witnesses to be as concise as possible in answering our questions, and the same goes for members of the Committee in their questioning.
In July last year, new age verification requirements were introduced to protect under-18s from seeing harmful content online, including hate speech, pornography and violence. How effective have those measures been in reducing children’s exposure to harmful content? Can you first focus on social media platforms and then on other platforms that children might use?
We can see that there is highly effective age assurance at 18, and then age assurance to determine the ages of children below that. There clearly have been real gains around age verification at 18, and that is very welcome. Our concern would be that we are not seeing age assurance deployed effectively when it comes to identifying children under 18, to offer them a suitable and age-appropriate experience and to ensure that we are seeing compliance with the codes of practice. That comes back to the fact that we have regulators—both Ofcom and the ICO—that are very reluctant to put a number on it. In the absence of clear, outcome-based measures—wherever you draw that line, a line needs to be drawn—we do not have a focus, which means we are seeing a very messy picture.
Ofcom reported that when it looked across the major pornography sites, the use of age verification did lead to a drop in users accessing them, and it is fair to assume that some of those users will have been children. So, as Andy says, with those hard measures we are seeing improvements, and we also know that children are reporting content on social media being blocked, blurred and so on. I do think, though, that that is not going far enough in terms of efficacy. Some of this links back to how services are assessing the risks on their platforms. Ofcom has been transparent that services are not taking their own risk levels seriously enough. The Online Safety Act allows for very strong mitigations to be put in place, but at the minute that is not happening. This comes back to the point that services are not accurately assessing the risks on their services. So we have seen improvements. Where age verification is being used, we have seen some really significant changes, but it does come back to the issue of enforcement and how companies assess their own risks so that we can get the right mitigations in place.
We have heard a fair bit about VPNs and other workarounds being used by children to bypass age verification requirements. To what extent do you think that that is happening at scale? How should we approach the issue of VPN use?
Again, if we look back to last July, when these restrictions were brought in, there was a lot of discussion about VPNs. There was also huge take-up of these age checks. On the day that the restrictions came in, it was estimated that well over 5 million age checks took place. Data collected since by the likes of Internet Matters and others suggests that VPN use by children has remained consistent over this time and that there has not been a serious spike. It is really important that we continue to monitor this and to look at data protection, education and support for children, but the general evidence suggests that there has not been a huge take-up. Where services do not know the location of their users, and where they are coming from, they should apply the strongest protections possible until they have a better idea of who their users are. Clearer guidance from Ofcom on companies needing to do that would be welcome, but I do not think the evidence suggests that people are bypassing these measures en masse.
Andy, do you have anything to add?
I would agree with that, and would just add that in Australia we are not seeing a big uptick in VPN use, largely because accounts aren’t being taken down.
The NSPCC, as you said earlier, Rani, has proposed a risk-based approach to children’s access to social media. You have explained how some of that will work in practice, but I am interested in understanding more about effective enforcement by platforms and regulators, because we have heard quite a bit of criticism of Ofcom. Of course, the current Online Safety Act is meant to be a risk-based system, but it seems that Ofcom is not able to use the business disruption measures to which it does have access to enforce the law, particularly where companies are based overseas. What more should be given to regulators so that they can enforce what we have in our current law, let alone future regulations?
We do have a welcome opportunity to strengthen the legislation here. Something that has been a challenge with the Online Safety Act is the safe harbour provision, where services set out mitigations to address the risks they have found and Ofcom says they have done enough, or services set out alternative ways of keeping children safe and take a different approach from the one Ofcom sets out. I think we need to remove the wiggle room and the leeway here. We have to have clear, watertight standards that set out, pretty much in black and white, what has to happen, with no room for alternatives or taking a different approach: “This is the way that it has to be done.” On the way that that is then enforced, I agree that the challenge is that Ofcom does have a huge number of powers at the minute. I would be interested to see the Government use this opportunity not just to strengthen the rules but to strengthen Ofcom’s mandate and the expectation that it acts rapidly and uses those powers. The problem at the minute is that we are working through fines and lots of other processes before we get to the point where we consider using business disruption. We should be looking to say that if a service is not meeting these clear safety duties for children, business disruption should be an immediate lever that Ofcom can pull. There has been lots of discussion in the sector around how you do that so that businesses are disrupted while investigations happen, and we are not waiting for the outcomes of lengthy investigations before measures kick in. I would like to see clarity and a push from Government on using that power as well.
Thank you all for coming to participate in today’s discussion. I say that particularly to Esther, because this is a really emotional topic that is hugely important to all of us. I am not going to ask Andy this question, because I think he has answered it. But, Esther and Rani, do you think that Ofcom and the Information Commissioner’s Office have the right approach to regulating age assurance?
Sorry; I don’t know enough about that to answer. I’m really sorry.
That’s all right; it’s okay. Rani?
We support the approach the regulators are taking, of not mandating one specific way as the only way to do this. With something like age assurance, there has to be a range of options, particularly as we are looking at children using these services more. As we know, children might not have the hard forms of ID verification that adults are much more likely to have. We therefore support the idea that it is tech-neutral. What needs to be really clear is that these services meet Ofcom’s principles, so they are accurate, robust and reliable, and that they are clearly in compliance with the ICO and data protection. I do not think that that has been reported on clearly enough, and I think that that has impacted public trust. I would like to see much better public communication around how these mechanisms are complying. But, as we look to have greater age assurance, and children using these systems more, we have to think about how these systems work flexibly so that they are accessible and safe for all children to use.
Thank you. My next question is about the Government’s announcement last week about putting existing guidance on restricting mobile phones on a statutory footing. When I speak to schools in my constituency of Harlow, there is a mixed response on whether that needs to be statutory or not. Some schools say they do it very well anyway; other schools say, “It’s all well and good making it statutory, but how is it going to be enforced?” What difference do you think putting it on a statutory footing will make? Also, potentially, what additional support will schools and educational establishments need to ensure that this is actually meaningful on the ground?
I welcome this statutory ban coming into law. From speaking to teachers, and from working with the NASUWT, who speak on behalf of 300,000 members, I would like to see the out-of-sight policy removed from the guidance, and funding for headteachers to implement this. We know that the out-of-sight policy does not work; I know from a personal perspective that it does not work, because of how Brianna was in school. I hear from teachers all the time; in fact, I actually have some quotes—sorry, I’ve put my phone away because I did not realise we were going to go into this. The NASUWT’s “Big Question Survey” found that, when members were asked which pupil behaviour problems caused the most concern on a day-to-day basis, more than a third cited distraction from mobile phones in the classroom. One quote said: “Pupils believe it is their right to access their mobile phones throughout the day, interrupting learning, causing confrontations, damaging their ability to concentrate due to their growing addiction to phone use.” That is what I am hearing when I go to teachers’ conferences. There are so many horrific stories, and I just feel that teachers should not be dealing with this in the classroom; they are there to teach. I would also just say that sometimes headteachers are not in the classrooms; they are not on the frontline, so they do not see the issues that mobile phones are causing. But, yes, I really do welcome this statutory ban.
I declare an interest, in that I used to be a teacher and I absolutely recognise what you are talking about. My response would be that we can put it on a statutory basis, but do you agree that there still needs to be support? You talked about funding, for example—I am looking at Daniel over your head as I say this. You said that it was important, but what else do you think the Government can do to support—
We are moving on to school representatives in just a second, who will perhaps be better able to answer that question. We are getting towards the end of this session now, but you can come in really briefly.
Just two things: the out-of-sight policy should be removed, and the guidance should be clearer. The other points in the guidance are okay, but the out-of-sight policy should be removed, and there needs to be funding for schools to implement it.
That is really helpful; thank you very much. We are hugely grateful to all three of you for coming to give evidence to us this morning. One of the takeaways from this session is that there is more agreement between the three of you and your organisations than is perhaps evident from the public debate on this topic. It has been very helpful for us to be able to drill into that, so thank you very much indeed. We will move on to our second panel because we are pressed for time, but as I always say to witnesses, if there is anything you felt you were not able to get across in the time we had, please do write to the Committee after the session. We would be very glad to hear from you.

Witnesses: Daniel Kebede, Darren Northcott and Tom Middlehurst.
Welcome to the second panel of our session on screentime and social media. I am glad to welcome three representatives from unions representing members of the teaching profession to give evidence. Before I ask you to introduce yourselves, at least one member of the Committee would like to make a declaration of interest.
I declare that I am the chair of the APPG for schools, learning and assessment, for which the NEU is the secretariat.
I am an officer on the same APPG.
I was a paid member of the ASCL, the NEU and the NASUWT in my former life as a teacher and deputy headteacher.
Lovely. I invite our witnesses to introduce themselves.
My name is Darren Northcott. I am the national official for education at the NASUWT.
I am Tom Middlehurst. I am the deputy director of policy at the Association of School and College Leaders.
I am Daniel Kebede. I am the general secretary of the National Education Union.
I want to talk to you about mobile phones in schools. We know that most schools already have some kind of policy that prohibits phone use during the school day. Could you tell us how you think the current approach is working in schools?
Currently, our feedback is that it is variable. In some schools it seems to be working well, and in others it does not. In part, that comes down to the issue of whether the guidance is statutory or non-statutory. The fact that the guidance is non-statutory sometimes causes problems in schools that would like to enforce more restrictions on mobile phones and connected devices, but do not feel confident enough because the statutory underpinning is not there. In terms of the experience, it varies. We have just seen the announcement from the Government, and our judgment is that statutory guidance would provide further support to schools in making sure they have appropriate restrictions on access to mobile phones and other devices during the school day. To answer your core question, it is variable: some schools have good practice, but in others it is less effective.
I agree completely with what Darren said. The effectiveness of those policies is variable because of the ambiguity around the status of that guidance. Sometimes that is not helped by the Government. In January, the Secretary of State wrote to all headteachers saying that she wanted all schools to have due regard to that non-statutory guidance. That letter went to headteachers at 6 am. Later that afternoon, the DfE were putting on webinars about the new app for year 11s to download their results in the summer—which you can only do by scanning a QR code that your teacher has access to, meaning that you must have your mobile phone out in school at some point in the school day. There is a lack of alignment within the same Department and across Government. That, in itself, creates confusion.
Absolutely.
Tom makes a really important point. The starting point is that schools have had to deal with mobile phone use for decades, and many have really good coherent and appropriate policies in place. However, we are seeing technology play a much greater role in education. Many of our members are talking to us about this need for flexibility so that young people can use their phones at certain points. What underpins that is that budgets in schools are incredibly tight and there is a deficit of appropriate technology for young people to access. That means that their own personal mobile phone is something that schools are wanting to use from time to time for the reasons that Tom pointed out.
I want to go back to what you were saying, Darren, about the guidance becoming statutory. Quite a lot of schools have a “not seen, not heard” policy. That means that you turn your phone off, put it in your bag, pretend it is not there and do not get distracted by it or look at it at break or at lunchtime—you just carry it around all day in your bag. Personally, if somebody told me to do that with my phone all day, I would find it a bit of a challenge not to look at it. Therefore, I do not think that that is the most effective way to do this. However, a headteacher in my patch said to me yesterday that the fact that it has now become statutory would make no difference at all—he could still carry on with the put-it-in-your-bag, “not seen, not heard” policy, because that was still within the letter of the law, and so nothing needed to change. Do you think that the Government have gone far enough, or should they have specified that phones be handed in at the beginning of the school day and given back at the end, rather than being carried around in bags?
I think that is exactly right. Changing the guidance from non-statutory to statutory is a step in the right direction. However, maybe it is worth leaving the bunting in the loft for now, because there are other steps that would need to be taken. One of the issues with the non-statutory guidance that will be made statutory is that it was not developed in consultation with trade unions or the workforce more generally. It does not necessarily reflect the reality that the headteacher in your constituency was perhaps talking about. Statutory guidance is important because it provides a consistent underpinning, but what that statutory guidance says and does not say is the key next step.
Do the other witnesses have anything to add?
indicated dissent.
I think our members would welcome a degree of flexibility in how that statutory guidance is implemented. Where a “not seen, not heard” policy is already working, and has been for a long time, it would seem foolish to have to change that policy and potentially invest quite a lot of money in pouches or whatever. We might come on later to the role of Ofsted in all of this; there is a process by which Ofsted can check whether a policy is effective or not. Of course, if it transpires that a policy is not working and pupils are going in and checking social media in the toilets when they are allowed to go at break time and lunchtime, that is obviously a problem. Yet if it is effective, there does not need to be that further investment. I think that is a role for the inspectorate.
I agree with your point, though, about pupils being expected to use tech and then not to have tech. It is quite confusing.
I believe that all three teaching unions have welcomed the statutory guidance, but also made the point that more funding is needed to support schools to implement measures. What funding do you believe schools would actually need and what would this cost?
We need to look at funding in the round. There has been a complete erosion of school funding over many years, which needs to be corrected and reversed. For example, a pouch system is incredibly costly for a school. You are looking at about £20 per unit; if you have 1,000 children in your school, that is a £20,000 outlay to get going. If we are going to do this seriously, there needs to be appropriate funding associated with it.
I would echo Daniel’s points. Depending on exactly what the statutory guidance looks like, and the extent to which the Government are clear about whether a “not seen, not heard” policy meets the new statutory requirement, a lot of schools will require investment that they just do not have the cash for, as Daniel alluded to. Our members tell us that it is between £20,000 and £40,000 to implement that type of policy, which schools do not have at the moment.
Darren, is there anything you wish to add on that point?
The approach that we think is the most encouraging is the lockable pouches approach. As Daniel says, that is not necessarily a huge amount of investment per unit, but it is investment that schools would find it difficult to meet from their own budgets. Some national public investment in that would be helpful, and perhaps necessary, to get the best possible practice in place. It is not something we can expect schools to meet universally from their own budgets.
With Ofsted now examining mobile phone policies and how effective they are when judging behaviour during inspections, do you welcome that development? I will start with Tom as you mentioned Ofsted earlier.
I think we need to distinguish between Ofsted and what they are doing at the moment, while the guidance is non-statutory, and what might happen in September, when that guidance becomes statutory. We have a real issue with Ofsted. The word the Government used was “reinforcing” non-statutory expectations, and that leads to the sort of ambiguity that Darren is talking about. Is it statutory or not? If Ofsted are going to report on it, even if it is not, it feels very much like it is to school and college leaders. When the guidance becomes statutory, it is absolutely right that we use the inspectorate to check that that statutory requirement is being followed through at school level. So I think we should distinguish between those two things. In terms of how Ofsted suggest going about looking at that, it seems proportionate and sensible, notwithstanding the huge concerns we have about the current framework and the number of judgments that Ofsted are having to make in a very short amount of time. It is worth distinguishing between those two different situations that schools will find themselves in, depending on when the ban comes in.
I think inspection of provisions such as restrictions on access to mobile phones and connected devices is, in principle, reasonable. It would be hard to argue with. But on the practicality, Tom has touched on some of the points that are important to acknowledge. For example, if we are enforcing compliance with guidance through what is a very punitive and high-stakes accountability regime, then it is incumbent on the Government, as a minimum, to make sure that schools get the support they need to comply with those expectations. I think our members would not be entirely confident that that would be the case in practice.
I certainly think the inspectorate has enough sticks to beat schools and teachers with at the moment, if I am being brutally honest. The issue around behaviour and social media in schools is a really important one. I do not want us to be distracted from what real action needs to be taken. We have 95% of our members saying that social media is leading to a real increase in behavioural issues in school. If you talk to any teacher at the moment, social media is a real issue. I have witnessed young people threatening to stab each other on Snapchat. It is a really difficult climate out there at the moment. What we need to focus on is statutory bans and real legislation on social media for young people. Schools are largely doing a very good job in incredibly difficult circumstances.
What would each of you assess is the impact of screentime on how students interact with each other in all settings, but also how they interact with your staff?
I touched on it. It is an incredibly difficult climate in which to be a teacher or to be a young person at the moment. Young people are spending around 35 hours a week on social media, so being online is like a job, and the content that they are viewing is extreme, misogynistic, racist and homophobic. We have done an algorithm experiment through our campaign, “Big Tech’s Little Victims”—do get a copy of that—and found that within three minutes of scrolling, young people are witnessing extreme content. They are being exposed to one piece of extreme content every minute that they scroll. We would be absolutely insane to think that that does not shape their interactions with each other and with teachers. Female teachers in particular are experiencing a huge increase in misogyny and misogynistic language. What young people, particularly boys, are experiencing online is completely distorting their view of women, sex and relationships, and we should all be alarmed.
I would echo exactly what Daniel said. The key point is that even with a social media ban, whatever that looks like, and a ban on mobile phones in school during the day, young people will still access this content at some point in their lives. Schools are having to navigate how they prepare young people for when they do see that content—how they navigate and respond to it, and how they use digital literacy to make sense of harmful content where extreme views might be expressed. As well as dealing with the day-to-day consequences of pupils accessing that content, schools are thinking about how they can prepare young people to meet those challenges in the future. It is not just a case of behaviour and attitudes, but about the skill of understanding and making sense of that content.
Understood. And Darren?
Daniel and Tom’s points were very well made. Another point that our members raise is about the amount of screentime that young people are having. There is screentime out of school, but increasingly, there is screentime in school as well. Can school be, to some extent, a haven from endless screentime, whatever the nature of the content, and just being in front of a screen rather than doing things offline, in the real world? That is an important debate to have, particularly with regard to our curriculum, and teaching and learning in schools. I know that there is a live debate about the policy that has been adopted in Sweden, the changes that they have made to their curriculum and whether that was right or wrong—I think it is too early to say. But that is an important part of the debate: the question of how much time in a day children spend not looking at a screen and doing something worthwhile that is not screen-based.
I was going to come to this question later, but it follows on beautifully from there, so I will ask it now. As a former maths teacher—I declare that I was a member of Daniel’s union and Darren’s union—I found some of the homework platforms incredibly useful. We have talked about harmful algorithms this morning. I can tell you that those were some useful algorithms that identified what students were not good at, gave them content to improve on it and kept giving that to them week after week until they got better. However, that of course meant that they were spending time on screens at home. How do we reconcile the statutory ban on phones in schools with the fact that some of these platforms are genuinely helpful to our students?
That is a really good point. It is difficult, and I do not think there is a straightforward answer to it. Of course, there are online platforms that are incredibly useful for young people, particularly educationally. Part of the answer lies in thinking about what a child spends their time doing over the whole day. Part of the solution is making sure that in schools, there are sufficient experiences that are not in front of screens—so by all means use screens, but use them in a proportionate way so that there is time for children to engage in activities and learning that is not on a screen, but structured and formatted in a different way. That is not to discount the importance of those platforms—they are important—but we should make sure that we have a balance between that kind of learning and other forms of non-screen-based learning.
Thank you. Tom or Daniel, do you have a point on this?
A lot of the debate has been about the quantity of screentime, and little has focused on the quality of screentime and the quality of the experience that young people are getting. We could do more to understand what that looks like. That is why I think the point that Rani made during the previous panel is so important: taking a risk-based approach to some platforms not only future-proofs what might happen later with different tools, but allows for those quality tools and apps that help young people to learn and foster a sense of belonging, because they can help young people flourish if used in a proportionate way.
Obviously the use of online platforms for learning must align with the reality of the classroom. Schools have different policies on how to use them, and lots of that will depend on their financial ability to purchase devices. On some platforms, such as Google Classroom, devices can be set up so that pupils can access that portal while everything else is excluded. We need to take that sort of approach, really.
Finally, and very briefly from me, have your members identified any specific social media platforms that are causing more harm than others?
I think they identify a range. The names I would share would probably be the names you will anticipate. I do not think there is one that is seen as being particularly more damaging than another. We heard about Snapchat earlier and you hear quite a lot about it, I have to say, but I would not want to say that there is a significant emphasis on Snapchat compared with the other social media platforms that are generally used.
That is fine, we have got Snapchat next so we can ask them.
But also largely ones that allow those sorts of group chats. When we were at school, we could have a falling out with somebody in the day, go home and things would cool off in the evening. Now, if there are inter-pupil disagreements or bullying or whatever, it is 24/7 for them because of the nature of these group chats.
We have not talked a lot about parents in this at all, but parent groups—they are not on Snapchat; it is often Facebook or WhatsApp groups for parents—are particularly problematic and quite insidious sometimes for school and college leaders.
Thank you all for your answers.
Can I drill in a little bit more on what a good and appropriate approach to the availability of technology within schools would look like? You have been saying that pupils’ use of their own personal devices, for all the reasons that we know about and the accessibility of social media platforms on those devices, is problematic in schools. You support the ban that the Government are introducing on those uses, but there are platforms that are increasingly deployed within schools, including some of those platforms being recommended by the Government. For a long time, we have not had a particularly adequate articulation of what an appropriate and fit for purpose approach to IT looks like in schools. Schools have often lagged behind where the rest of the population is—where workplaces are—in relation to access to technology. From your members and the work that you undertake as trade unions, what does “good” look like in relation to schools’ access to technology? How can it be ringfenced and closed, and appropriate for the kinds of platforms that can help children’s education within the school day?
There is a lot in that question, but you are alighting on an important point. One issue around technology—and maybe this goes back to the point about the use of apps and other platforms in learning—is that sometimes the experience is technology-led rather than learning-led. Technology gets adopted because it is technology, because it is new and has different types of functionality, whereas teachers and leaders would probably advocate an approach that puts learning first: what learning objectives do we want children and young people to achieve, and how can technology be brought in to help them achieve those aims and objectives? That is, I think, a better way of thinking about it. You do not always see it; schools can be put under a lot of pressure to adopt technology before they necessarily have the time—and time is something schools are very short of—to think about what technology is going to be useful in helping their pupils achieve and make progress with their learning. That would be one aspect of a good policy.
To add to that, the Government are currently drafting a new national curriculum that, for the first time in a long time, all state-funded schools will have to follow from September 2028. Explicitly within that is teaching about digital literacy. Once we establish as a country what all young people will learn about digital literacy, we can then work out what the best tools are to help them learn it—exactly as Darren was saying. Again, the Committee might want to look at the implementation of that curriculum when it is published at Easter next year.
We have heard a lot about the quite extreme things that young people see online: the harmful content and misogyny, which is really disturbing. I visited a school in my constituency a couple of weeks ago, and one of the girls raised her concern about disinformation online. That can sometimes be very obvious—the things we spoke about, which are obvious to us—but sometimes it can be quite subtle. Tom, you have touched on digital and online literacy education. I declare an interest as a former maths teacher; I worry about putting more on teachers’ plates without taking anything off, but do you think that schools need to do more on digital and online literacy education? I am thinking about dealing with that understanding of bias and disinformation, even when it is quite subtle. Also, do you think teachers are currently equipped with the right knowledge and training? Equally, when it comes to political disinformation, for example, do you think they have the confidence to talk to young people about that? During my time in school, I was very careful to be politically neutral, for obvious reasons, but that is a skill that you need as an educator.
Again, there is an awful lot in that issue. It is important. There are certainly issues around political impartiality; there is a statutory duty on schools to make sure that everyone working in a school is politically impartial. Most people would think that that is right, and that schools should be politically impartial spaces. But in that context, you have to think about how you address those really important points around disinformation. You are right about children and young people being exposed to bad information and being concerned about it. Our concerns would reach beyond the child and young person population, though, to the population as a whole, in terms of disinformation and its impact on our society. Education has a role to play, and I am not sure that teachers would report that the training and support they get is sufficient to enable them to play their part in addressing those issues. However, one consistent message that you would get from teachers and leaders is that schools cannot solve all the problems of society on their own. They are not isolated from society; they reflect the societies in which they are located. As part of a strategy to address the pernicious impact of misinformation online, schools have a role to play. Maybe they are not supported to play that role as much as they should be, but it is important to look at the other policy levers we have to address what is a really profound issue.
In terms of digital literacy and our members’ confidence to educate around disinformation, we surveyed our members on that issue, and about one in 10 primary teachers, in particular, feels that they lack the confidence to teach about that effectively. There needs to be adequate training provided and the associated funding.
Sorry, does one in 10 feel confident or not feel confident?
They do not feel confident in delivering lessons about how to discern misinformation online, so we have to consider that and bring about the appropriate investment and training.
Two things occur to me on the point about disinformation. One is that, for the first time, citizenship is being put into the primary national curriculum. We really welcome that; we think that is good, but I don’t think the support that schools will need for that has been recognised yet. The national curriculum will be the first step in setting out what we want young people to learn, but there is a huge amount to do on the implementation of that. The other point around disinformation is that I am looking forward to reading the inquiry from the Committee that Mr Swallow chairs on schools preparing young people for votes at 16. I think that will touch on some of this in a really useful way.
The Government have published some guidance on screentime for children under the age of five. I just wanted to get a sense of how necessary you think that is and what impact it will have on children, particularly when they start school.
It is helpful; you would rather have it than not have it, so it is a step in the right direction. At the very least, if it is encouraging families to talk about screentime in the home, how much time very young children are spending in front of screens and whether we have that balance right, that is helpful. It would be interesting to see the impact of that on families, and whether the Government are planning to monitor and perhaps research that impact. My sense is that this is not just an issue for the very youngest children. As I think we touched on, it is an issue for older children as well. We should think about developing that guidance in a way that means it is applicable to older children. Particularly when children are reaching school age, it helps the Government to make better policy if they engage with teachers and leaders on what that guidance should look like. It would be a shame to develop guidance for school-age children on screentime outside of school without drawing on the expertise, knowledge and understanding of teachers and leaders. So yes, it is a good step, but there are other steps that could be taken as well.
We might come back to some of that.
We welcome that guidance, and we would like to see it used in a practical way—given to all parents as part of the Bounty pack they receive when they become new parents. We think the fact that Sure Start centres were decimated over the last 20 years was a real shame. The Government’s commitment to introducing new Best Start centres is really positive. We should be not just producing this guidance, but engaging parents with it, showing them how they can use it, and showing them how, if they already have children, they can implement it retrospectively. We know that that is a big problem, because if a child is used to screentime, taking it away from them is harder than starting from scratch. We think that is all really positive.
Of course, it is very useful. It needs expanding. Your average 12-year-old is spending 21 hours a week on a phone. According to the OECD, 70% of 10-year-olds already have a smartphone. We do a state of education survey of thousands of teachers, and particularly in the early years, what is coming back is this concern for the under-fives around their parents’ smartphone and online time. Because of that, these under-fives are having less interaction, less time outside and more exposure to inappropriate content, particularly if there is influence from older siblings in the room.
What is the ideal role for schools in all of that? You mentioned the contextual factors of parental use, as well as early intervention and education from Best Start family hubs and so on. What do you see as the optimum role for schools in that jigsaw of support?
That is a really important question. We spent a lot of time previously talking about technology in schools. I think it was Darren who made the point about schools being a haven from technology and screentime. That is something we should pursue. In fact, I worry about a direction of travel where you have state-funded schools that are quite technology-heavy, but those who go to the fee-paying schools are in environments that are quite technology-light but have lots of arts and creativity. What is emerging is quite a class-based issue.
Also, schools are very much the interface between families and Government policy, so schools inevitably have a role in sharing and making sense of that information and guidance. The key point we need to make is that, if Government want schools to play their role, they have to fund them to do that. We talked a bit about Ofsted earlier. As Darren and Daniel said, there are already so many sticks with which Ofsted can beat school and college leaders. We do not want this to become yet another one—for it to become an expectation that schools play this function in society and then to hold schools accountable for that without adequately funding them to perform it.
Schools are developing their approaches to helping children apply more critical thinking to screentime and what they see online, but schools also spend a lot of time engaging with parents about what children are learning in school. As I think Tom was saying, that might be a helpful way for parents to learn what children themselves are learning. That could prompt the kind of conversations within families about, “How much screentime are we accessing? What’s the quality of that screentime? As a family, do we need to think about how much time we are spending in front of screens?” It could be a useful contributor to those conversations that families might have.
Darren, you mentioned that there is talk of guidance for older children—for over-fives. What would be at the top of your list for what should be included in guidance for that phase of life?
This goes back to what we were just discussing. One of the key messages that schools would want to develop on the part of pupils is the critical thinking around, “What am I seeing? Can I believe it? How do I test it? How do I explore alternatives to the view that is being presented to me online?” Those basic principles of what good digital literacy looks like should certainly inform schools’ approaches, but there are very transferable principles from that school context to guidance that might be provided for parents and families.
Tom, is there anything different that you would put top of the list?
That focus not just on the quantity, which is very much in the guidance for younger children, but on the quality of the screentime.
Can I ask about the role that screens and mobile phones play in your members’ relationship with parents? We can talk about guidance, a statutory ban and a whole range of policy changes that the Government are introducing, but ultimately it is your members who are on the ground negotiating with children and parents about what is in the best interests of children. Is there anything you can tell us about your members’ experience of that? Is there friction? Is everybody pulling in the same direction, in the interests of the child? Is an adequate understanding of what is in the interests of the child shared between professionals and parents? Are there issues that you can foresee as your members seek to navigate the implementation of this new guidance?
Part of the discussion that schools have with parents around, for example, access to mobile phones and other connected devices on site is about why such restrictions are being put in place. Some of the work that we were just talking about—engagement with families, and encouraging conversations about mobile phone use and social media access within families—is part of helping them engage with this in a more constructive way than you sometimes see. I do not want to overstate that, but it does sometimes happen. The statutory guidance is really important in supporting those conversations, because schools are able to say to a parent who may be pushing back against restrictions on access to mobile phones and connected devices on site, “But this is a national expectation. There is no school in England you can go to that will permit what you are asking us to permit.” That is very helpful for schools. The absence of that national expectation is certainly something that some schools have raised with us as a barrier to encouraging parents to support what the school is doing, rather than challenging it.
Further to Darren’s point, that is the guidance around mobile phones in school, but because there is no guidance on screentime for older children, as Dan was talking about, lots of schools will write their own guidance for parents on it. They will talk about being “secondary ready”, being school ready and not having mobile phones in the bedroom at night, but schools are writing it. They are drawing on research and evidence, but if they could point to guidance that comes from the Government to say, “Look, this is not just us saying that; this is everyone. We can help you implement it as a school,” that would be so much more useful than every school doing their own thing.
I have nothing to add. Parents and schools want it on a statutory footing to take pressure off everyone.
Would your members support a ban on social media for under-16s? What would you say regarding the argument that at age 16, young people will face a cliff edge? They are likely to have exams just around the corner.
I think around 90% of our members would support a ban on social media for under-16s. Their experience is that young people having access to social media is incredibly detrimental to them and to their experience as teachers. What exists at the moment is like a wild west in which we have big tech giants harvesting young people’s attention in the pursuit of profit. Our members would absolutely like to see this ban. Our frustration is that this is something we could have led the world on. We are behind the curve on real action on big tech. I hope that we can see things move at much more pace.
Thank you. Tom?
We welcome the ban, but not a ban alone. We do not think that it is enough. There is a genuine risk that, when the ban comes in, we see it as, “Okay, that has been done now. We have done that ban,” and it is almost a get out of jail free card for these tech companies. It has to be a ban and something, and I think we are more interested in what that “and” is. It is all the stuff we have been talking about—the risk-based approach that Rani talked about, the literacy and the education that comes around all of that—but we cannot think that we get a ban through Parliament and then that’s done.
We recently adopted a policy that supports a ban on access to social media for children under the age of 16, for some of the reasons that we have just heard about and which were discussed in the previous panel. It was a balanced decision. The union looked at both sides of the argument and understood that some arguments against our position had some force. But I think that, in the final analysis, it does seem to be the most appropriate and effective way of protecting children from some of the harms that we have heard about. One issue that sometimes gets raised—and this is not an irrational point—is, “Is there a cliff edge?”: children are not accessing social media until the age of 16, and then suddenly they are able to access it. That argument perhaps overlooks the work that schools can do. We just talked about digital literacy—becoming more confident and effective in online engagement, developing those skills and that understanding through education. So it is not a cliff edge. It is not that you know nothing about social media and then you are 16 and completely exposed to the whole world of it. What you are doing, before children reach the age of 16, is hopefully helping them to develop the skills so that when they have the ability to access social media, they have the toolkit that enables them to do so and to keep themselves as safe as possible.
Can I pick up on that? The cliff edge argument is, I think, a nonsense. There is a cliff edge at 18 when children are able to access alcohol. We don’t serve them little doses of wine at lunchtime to get them used to it. What we can do, of course, is teach them about social media; we need that digital literacy space. I would also go further on what else we need—we need a windfall tax on these big tech giants and to invest that in child mental health services.
A new concern included in the Government’s consultation “Growing up in the online world” was, rightly, the growth of AI chatbots and children forming excessive reliance on and connection to those, including using chatbots to ask for advice. Have your members come across these problems, and what are your views on possible age or time-limit restrictions on the use of chatbots by children and young people?
Our membership and our profession are incredibly concerned about young people and the use of AI chatbots, not only in young people’s engagement with them, but in what it means for teaching and learning broadly. There is a real problem in that the world is moving at such pace that we have not got the space to really educate around it effectively at the moment. I think that is largely down to how schools are measured. If we want to allow the space for effective education on digital literacy and AI, we have to change how we do accountability.
I come back to the point around digital literacy and education. I think this is about how you use those platforms in a healthy, sensible way, because they certainly have value. Again, taking a risk-based approach to all of this would mitigate the fact that these things are moving more quickly than legislation will allow for. These platforms will change and iterate in a way that laws just cannot keep up with, so we have to take that risk-based approach, coupled with education of young people in how to use them safely, effectively and ethically.
I think that part of what drives the use of AI chatbots for that purpose is that a child cannot identify an alternative source of advice, so they go to whatever AI platform they use and ask it. It does raise concerns, and it is something that our members have flagged. Part of the answer is asking how we make sure that children have an effective source of advice if they have something on their mind or are concerned or anxious. That is why we have supported moves to have school-based counsellors available in every school, so that there is a trained counsellor whom children can approach to discuss their concerns. The use of chatbots may be driven by the fact that many children have no source of advice—no one they can talk to in confidence—other than an AI chatbot. That is quite distressing.
We have talked about social media platforms and AI chatbots. The Committee recently hosted a group of primary school children in Parliament to talk to us about reading—we have a separate inquiry about reading for pleasure. Most of them were really enthusiastic readers, and it was lovely to talk to them about their experience of reading. But part way through that session, we were talking about what some of the barriers to reading for pleasure might be, and the teacher said to them, “Well, there is something, isn’t there, children, that a lot of you spend a lot of time on, which might mean that you are not always reading at home?” What she was getting at was Roblox. Can you say a little about your members’ experience of the impact on children’s education of gaming platforms that enable them to interact with each other but are also enormously absorbing of their time? As we are considering our response to the Government’s consultation on banning social media, I wonder whether the definition of what is within scope needs to be broader than what we would classify narrowly as social media platforms alone.
I think this is about identifying criteria that you can apply to different platforms to see whether they should be in scope or not. Children do spend a lot of time on those gaming platforms, so whether they should be in scope is a legitimate question. The more fundamental question, which perhaps reflects what we were saying before, is how much time children spend in front of a screen anyway, whether it is Roblox or something else. There is a wider debate about the balance of screentime versus non-screentime in children’s lives. But yes, I think that should be in scope, and it certainly should be considered.
Almost certainly, yes. We don’t want to end up in a situation like Australia, where they are just looking at 10 platforms. It has to be broader than that.
Yes, absolutely—bring it into scope. Young people are accessing things like that at a younger age, and there is very little regulation or control.
Thank you. That is really helpful.
As well as speaking to hundreds of adults in my constituency and my local schools, I have tried to speak to as many young people as I can about this issue. What strikes me is that they actually raise huge concerns about social media and their own consumption of it. Although the balance of supporting a ban is probably slightly less strong in that age group, it is fair to say that they nevertheless have concerns. They also raise issues around their rights to access information—the freedom to enjoy their time and interests. How do we strike the right balance between giving young people agency and supporting their independence and their ability to engage in current affairs with safeguarding them from online harms?
Those are always balanced judgments with any rights framework. Let’s take, for example, the United Nations convention on the rights of the child. A lot of those provisions are engaged when we have these discussions about balancing children’s safety against their rights. Proportionality is quite important. Our view would be that article 3 of that convention, which is about the state’s responsibility to act in “the best interests of the child”, has particular weight, but children do have rights around data protection and privacy, and those issues are engaged around age assurance. For example, if children are sharing data to have their age assured, how is that data protected? So yes, there are balances of rights that need to be considered. However, I think that to some extent we have not placed enough focus on that fundamental duty in the UNCRC, in the Children Act 1989 and in all subsequent legislation, which is that the first consideration has to be, “How do we keep children safe and how do we protect them, to a reasonable extent, from harms and risks?”
You are referring to the digital age of consent, which is 13 at the moment. Obviously, however, it is fairly clear that social media platforms frequently allow young people to access their services, or do not sufficiently block them from doing so.
There is a legitimate question around whether that age of consent was identified in a different world from the world that we are living in now.
I have nothing to add.
I met quite a large group of students in Northern Ireland last week and they were asking me questions. They asked me about social media and what I thought we should do. I said, “I think we should ban it for under-16s,” and there was this huge gasp. We have to remember that these are young people and that we are the adults. Like tobacco or alcohol, social media is something that needs regulating because it is harmful, and it is incredibly harmful to young people. When I ask young people what they like about social media, they say that it provides them with some sort of community. I think what we need to do is invest much more in making the real world their community, and that includes youth clubs. I was speaking to a young LGBT person who said, “I’ve found social media useful, because I’ve been able to meet other LGBT young people.” We need to make the real world that safe space, rather than having these unregulated online spaces.
I will come in specifically on that point, because as a gay man who grew up at the start of the social media age, this is something I think about a lot. There are challenges around creating that vision, particularly in rural communities and less connected communities, for smaller groups of individuals, such as young people who are questioning their sexual or gender identity. I take the point you are making, but is that always realistic and achievable?
I know of a number of schools that have LGBT clubs and so on. I welcome that. Youth clubs have been so eroded. Actually, one thing that was fascinating about Northern Ireland was that all those young people still went to a youth club on a night-time. We do not have that in England and Wales, largely. These are the spaces we need to create for young people, rather than these isolated and atomised online worlds.
Thank you very much. It has been really interesting for the Committee to hear your evidence this morning. If you have any further thoughts or there is anything you were not able to convey to us this morning, we would love it if you would write to the Committee afterwards. That would be great. Examination of witness Witness: Jacqueline Beauchere.
Welcome back to the Education Committee’s evidence session on screentime and social media. We are, unusually, hearing evidence from three panels of witnesses this morning, and the third panel is just one witness from Snapchat. The reason for that is that the witness was unable to attend the session that we held last week when we heard from other social media platforms. We are grateful to Jacqueline Beauchere for taking the time to join us online. Can I ask you to introduce yourself to the Committee?
Certainly. Thank you, Chair, and hello all. My name is Jacqueline Beauchere, and I serve as the global head of platform safety at Snap Inc.—the company behind the popular visual messaging application, Snapchat. Thank you for convening today’s session and for the Committee’s flexibility and accommodation, given last week’s unforeseen circumstances that prevented Snap from participating with the other companies. My participation was confirmed one week ago, and we are happy to join today. I look forward to providing our perspective on these critical issues.
Thank you very much. Our Prime Minister met one of your colleagues from Snap Inc. very recently at Downing Street, along with other social media providers, to discuss children’s use of social media. He told them that where social media use comes with real risks, looking the other way is not an option. Do you agree with that statement from our Prime Minister?
I do.
What is your company’s assessment of the balance of harms that young people experience and encounter on your platform relative to any benefits that they derive from it?
Perhaps I could start, Chair, with a general explanation. Online safety and digital wellbeing for young people in particular are core priorities for us at Snap. They have been from the beginning, which is evidenced by our safety by design approach. Snap was deliberately architected differently from inception, and that was nearly 15 years ago. Snapchat is primarily a private, friend-to-friend messaging service, and the majority of time spent on Snapchat is in private messaging between and among friends, or those who typically know each other in the physical world. Snapchat was designed to enhance friendships with people we already know in real life. It is not an ideal app for meeting new people and for sharing experiences with the entire world. In the UK, we reach 23 million people, and that includes 90% of 13 to 24-year-olds. That is critical, because we know that Snap’s focus on interpersonal communications plays a positive role in young people’s lives. Research conducted by King’s College London, which we supported, specifically examined exactly what young people are doing on our platform. It is clear that Snapchat plays a vital role in strengthening social bonds between and among young people. I will give a couple of examples from that research. Eighty per cent of young people aged 13 to 24 say that they stay in touch with friends on Snapchat who they do not often see in real life. It also helps 78% stay connected with close friends who they do see in real life, and 69% say that they use the platform to stay connected with family members. The bottom line here is that not all platforms are the same, and Snapchat is in fact different from traditional social media.
My question was about the balance of harms and benefits. Can you talk about your company’s understanding and articulation of the harms that young people encounter on your platform?
As I mentioned, we made these deliberate design choices from inception to help protect teens particularly from unwanted contact. We conduct research every year in six countries, including the UK. That is not research particular to Snapchat, but across platforms, services and devices. We ask 13 to 24-year-olds, and parents of teenagers, what kinds of risks they face online, and unwanted contact is something that comes up every year in every country. Unwanted contact is what we are trying to help protect teens from, and we have several features that help with that. In particular, we require mutual friend acceptance. Teens cannot receive a message from someone that they have not affirmatively accepted as a friend or that they do not already have in their phone contacts. On Snapchat, friend lists are private. There are no public profiles for under-16s. There are no public likes and public followers. These things prevent people from coming into our lives without invitation or permission. All of this is going to help with the issues of unwanted contact. I would also offer that in multiple instances, we show in-conversation warnings to teens about accepting new friends, because we know that risk can come from the people that we meet and the content that we are consuming and sharing. Specifically, if a teen accepts a friend request from someone who, for instance, they might not share many mutual friends in common with, or might be outside their geographic area or has been blocked or reported by others for non-egregious offences, we ask that teen if they can trust this person and if they still want to be friends with them. It is important to note that in a given week, we display these in-app warnings more than 14.5 million times in the UK to 3 million UK users, and that results in nearly 45,000 blocks and 200 reports. Finally, another piece that we have is our parental supervision tools. We offer parents and trusted adults visibility into who their teens are connecting with on Snapchat. Again, we know that the risks can come from the people that we meet, the people we are interacting with and the content that we are sharing and consuming. We allow those parents to see who their teens are connecting with and who is connecting with them in this Family Centre suite of tools. We also—
Forgive me. I have asked you twice now about your company’s understanding of the harms that young people experience on your site, and you have only spoken about the measures that you have in place to protect them. Is it for us to conclude, therefore, that Snapchat does not accept that young people come to harm on your site? If that is not the case, please could you answer the question?
I would like to point out the difference between risk and harm. Anywhere in the world, in any population, there are going to be people who are up to no good, and that goes to online platforms as well, but there is a difference between risk and encountering risk, and facing actual harm. The risks range—and our research shows this—from anything from misinformation or disinformation to hate speech, cyber-bullying, harassment, sexual harms and even suicide and self-harm content being available. That is the case on all platforms, all services, all devices. If I could finish the point about the parental tools, we have made available a new online safety learning programme for teens and their parents, and it is now embedded into the platform in these parental supervision tools. It is called The Keys. A lot of platforms have online safety awareness-raising and educational materials, but this one is different, because it is leaning in particularly to those harms that young people could face online today, and some of the most challenging issues. They are on Snapchat; they are on all platforms. Chair, these are whole-of-society issues that we are dealing with. They are not particular to technology, but technology is certainly manifesting and amplifying them. The unique harms that we are dealing with in this learning programme are illicit drug activity, financial sextortion, the consequences and risks of sharing nudes and intimate imagery online, and cyber-bullying and harassment. It is not by any means the be-all and end-all, but it is something that we are leaning into to make sure that we are arming young people, and arming their parents, with what they need to know about these issues.
We are going to drill into some more of the detail of the experiences of young people on your platforms, which this Committee has heard about from parents and some of the independent research organisations.
I am dealing with a case involving one of my constituents who has received disgusting, distressing, misogynistic messages on your platform, Snapchat. It was difficult for the parent-carers to report the messages, and I think we are going to look into that. For me, Snapchat does not feel like a safe place for our children and young people. What are you doing to proactively—the key word here is “proactively”—identify abuse and harmful content like that that my constituent has been affected by, and how do you evaluate the effectiveness of your measures, if you have any?
Thank you for raising the constituent matter. Our trust and safety team investigated and took appropriate actions on the account involved—
I just want to add that my constituent is 13.
We briefed your office earlier today, and, to protect the privacy of those impacted, particularly minors, we think that further details are best handled offline, but I want to emphasise that we take reports of harm extremely seriously, and we act swiftly to address them. On your other question, sadly, there is no ensuring the safety of users anywhere online, any more than anyone’s safety can be ensured in the physical world. That is why at Snap we embrace these twin objectives of risk mitigation and harm reduction. We have already noted the safety by design approach that we assume from inception. That is exactly when safety by design is appropriate—that is the exact application: when a platform is being designed. We also employ proactive and reactive measures because we are determined to make Snapchat a hostile environment for any kind of illegal activity or activity violating our community guidelines. We conduct, or we have in place, as I mentioned, the cross-industry annual research. We have aggressive policies that specifically detail what is permitted on Snapchat and what is not, and they are enforced consistently. We have these innovative detection approaches because we are trying to get upstream; we are trying to confront these bad actors and identify this type of potential harm before it can become an issue. We have awareness-raising and educational efforts under way, which I also shared. We have these in-app warnings. We have prompts. We have tools for blocking and reporting and removing. We make Family Centre available for parents and trusted adults. We also support law enforcement in their efforts to bring bad actors to justice.
And despite all those measures that you are talking to us about extensively, young people are still coming to harm on Snapchat. Jess has some statistics, which we would like to ask you about, that evidence that.
Analysis by the NSPCC, which is a major child-protection charity here in the UK, looked at police data for offences of sexual communication with a child. It ranked Snapchat as clearly first among platforms being used. In 2024-25, of 2,111 offences where the police identified a platform, 40% were on Snapchat. You said just now that Snap adopted a safety by design approach from the very start. If that was the case, how on earth is child sexual abuse material being asked for from children, with children being groomed to produce child sexual abuse material for offenders? How has that ever been possible on your platform if it takes a safety by design approach? How can the parents, the public and the children themselves using your platform have confidence that they can be safe in the face of that devastating data?
I am well aware of the NSPCC’s efforts and this research that they do on an annual basis. We have been in discussion with them for a couple of years now, actually looking into the methodology as to how they survey police forces and what police forces are identifying as Snap being involved in some of these cases. I posit that perhaps they are not always looking at Snap as the platform at issue, but as the platform that could be implicated in some other way. There are often—
You are denying that this police data exists on sexual communication with a child offences being related to anything to do with Snap.
I am not denying that. I am just saying that I would like a little closer examination of the data to make sure that it is, in fact, what it purports to be. I am not saying that this does not take place on Snapchat; we understand that these issues arise, and—
We literally heard a witness just earlier, Esther Ghey, talk about a close friend of hers whose child was invited to a forum where there were children who were nude—it was illegal child sexual abuse material. We literally heard that today. Even one case is too many, surely.
Absolutely. I 100% agree, one case is too many, but I would—
Why don’t you just ensure that your app, when it is used by children, bans nudes by default? Why do you not just ban any kind of sharing or receiving of nude pictures? We know that that is technologically possible, so why don’t you just put that on right here, right now, so that it is just not possible for a child to do that?
We have a number of efforts under way that we will be able to share, hopefully, later on—in the coming months or what have you—but I would like to say that there is a difference between what is going on in private chat versus the more public side of the app. The private chat portion of Snapchat is really the 21st-century equivalent of a telephone call, and we cannot dive into what is happening in that private chat any more than we can listen in on what is going on in a telephone call. We really do rely on people reporting to us if there are bad actors, if there is bad behaviour going on, on the platform.
The difference, of course, is that, if we are talking about a telephone call on a landline, that would happen in a child’s home with other people around, so doesn’t that statement that you just made about Snap being unable to police communications between your users in private directly contradict your statement that it is safe by design for children?
Safety by design measures are incorporated from inception. That means trying to determine how a platform could be misused or abused. Those considerations were taken into account 15 years ago. I hope that the Committee would agree that the online safety risk landscape has evolved markedly over the past 15 years. I, for one, have been involved for more than 25 years, and it has evolved markedly. I would love to go back to the days when it was just spam and phishing that we were contending with. Now there are some very real and serious issues that we need to confront as a total society. Yes, platforms are presenting problems, because we are providing that instrumentality. We take these issues very seriously and we take our responsibility very seriously, but these bad actors have to be addressed in other aspects and other areas of society as well. This is very much a joint responsibility.
I know plenty of colleagues have questions to ask, but may I interrupt? It is absurd to say that 15 years ago the sharing of child sexual abuse material was not a problem. It was. There were issues involving illegal child sexual abuse material right from the very design of the internet and then with social media platforms. These issues were raised consistently by child protection organisations and by security organisations like the police. Your safety by design from the start ignored those very serious issues of protecting children. I am sorry, but it is just absurd that you think that this is something that has appeared just in the past few years. We know that is simply not the case. You cannot have a safety by design system that does not think about child sexual abuse.
Unfortunately, child sexual exploitation and abuse is the thread that has been consistent throughout my career in child protection. Those are things for which we are employing all the resources that are available. We use PhotoDNA to detect known child sexual abuse material, and we do this in private chat. When someone uploads to the camera roll, from the camera roll to Snapchat, we make sure that that is not child sexual exploitation and abuse imagery. We also employ Google’s content safety API and Google’s CSAI Match. We employ all of these technologies to try to determine if those images exist and to get them off the platform. They have no place on Snapchat, no place in the online world, and we want to protect children.
Thank you.
Again, these are things that we have done from the beginning and since those technologies have been available.
The reason Snap was not able to provide a witness at last week’s Committee session was that it was laying off 16% of its workforce and aiming to use AI to reduce costs by £368 million a year. What assurances can you give this Committee that Snap will not be reducing staffing in its safeguarding teams as part of those workforce changes?
To my knowledge, the trust and safety teams were not impacted by those changes.
You are in charge of platform safety at the global level. Could you explain the structure of Snap here in the UK and what team you have here in the UK that works in-country to provide safety on your platform, so that when, for example, families raise concerns, they are being handled in-country?
Thank you for the question. We have a trust and safety team based in London. I cannot tell you exactly how many people it is, but it is a fairly large team. Every time I am in London, I sit with that team in the queues to see what kind of reports are coming in and to make sure we review them accordingly. We have a dedicated team. We have a follow-the-sun model at Snap, as most companies do, and we make sure we have trust and safety teams around the world working 24/7 around the clock around the globe. But we do have a dedicated team in London.
You say that you cannot give us exact numbers and you cannot give us complete reassurance that the team will not be affected by workforce changes. Could you write to the Committee, please, on both of those points to confirm how large the team is here in the UK and whether it is being affected by any workforce changes?
Again, to my knowledge, it is not being affected. It has not been affected, but we can give you the exact numbers of the folks in London working on trust and safety. I would like to point out that it is not just a specific trust and safety team that works on safety issues. This work is a fundamental part of Snapchat, involving people across disciplines and across functions at the company, whether that is legal, communications, outreach, advocacy, product and engineering, trust and safety, or operations and outreach—numerous functions and disciplines. We work in a cross-functional way across a cross-functional team, and safety is a part of everyone’s role at Snap.
There are now 16% fewer people working on that. On the point around AI, your co-founder and CEO said, “a new way of working that is faster and more efficient” is now needed—presumably referring to AI and other solutions. What assurances can you give us that the safety of children can be made faster and more efficient through the use of AI and other technologies? You have already been pressed on, for example, making sure that technology is implemented to identify intimate images. What other technologies are you looking to introduce, and how will you make sure that relying on AI and other systems will make your systems more safe and not less safe?
Our trust and safety teams are employing AI in a variety of ways, particularly to help moderate content. On the public side of the app, they use moderation and AI to determine if something is in violation of our community guidelines. It is also moderated before it can reach a large audience. That is why having content go viral on Snapchat is not something you often hear about. We are also using AI on the private side of the app to look into various keywords, phrases and other material, which are used to identify potentially violating content. All of that will continue, and will be ramped up with AI agents and other things. It has been happening for some time now, but we are confident. It is important to note that we always have a human in the loop. It is not that AI systems taking over entirely; there are always humans in the loop to review this material, particularly more serious or egregious cases.
I do not want to press you on this point, but it is really important. We have been referring a lot to intimate images. Given the number of young people on your platform, you will appreciate why this is particularly important. You mention using AI to search for keywords and other linguistic markers in private messaging. Is it the case, today, that no checks are going on of the images being shared in private messaging?
Sorry, the keywords, phrases, emojis and emoji strings are leveraged in the public side of the app; if I said something else, I misspoke. On the private side of the app, as I mentioned earlier, we do leverage PhotoDNA with the images. We are looking for known illegal images, and we are doing that through hash matching, of course, through PhotoDNA. That is going on in the private side of the app in terms of media, to make sure that known child sexual exploitation and abuse material is not being shared.
Your rules for using the platform presumably apply equally in both public and private parts of your app.
The community guidelines, yes. There are more community guidelines specific to what we call Snap Stars—influencers and others on the platform. On the public side, they have stricter rules to adhere to.
But you seem to be setting out to the Committee that the way you enforce those community guidelines—the way you ensure that the content being served on your platform is both within those guidelines and, more broadly, within the law—is much more strongly monitored and enforced on the public side of your app than on the private side. Is that fair?
Again, as I represented to the previous Member, the private side of the app is very much a hard line for Snapchat. Privacy is very important. The app was built around privacy—to mimic and simulate real-life conversations—but in private chat we rely heavily on users reporting to us any inappropriate content being shared or any inappropriate behaviour. We do also employ certain techniques and detection mechanisms; if the Committee would like specific details, we can follow up in writing.
I do not want to take too much of the Committee’s time, but this is really important and quite alarming. Regardless of whether the user is under 16, under 18 or any age, your position is that privacy trumps safety.
There is no trumping. These are two domains and two issues that work hand in glove. Privacy and safety are basically two sides of the same coin: one cannot exist without the other. We do not see them as in tension with one another; we see them as complementary. That is why privacy by design and safety by design were so instrumental in Snapchat’s formation.
Thank you. I think we might be struggling with the logic of that when it comes to children and their ability to exchange messages in private, but we will go to Chris Vince for the next question.
The UK communications regulator, Ofcom, reported that 13 and 14-year-olds in the UK who use Snapchat spend an average of two and a quarter hours a day on your platform. When we heard from other social media companies last week, they said they had specific tools, like curfews, and you have previously mentioned some parental controls. We heard about, for example, warning messages that remind you that you have been on an app for x amount of time, that it is night-time and that you should probably go to bed. Do you have controls like that on your platform?
I would posit that time spent on an app is a bit of a false indicator. A teen, for example, might have a really bad day and spend a couple of hours in a video chat with a friend—very similar to a telephone call for older generations—and then the next day those two teens might exchange just a few messages. This is underscored by some recent research from the University of Manchester. Experts have repeatedly told us that what is most important is how teens spend time online, what they do and who they do it with, and not so much the time spent. It is basically quality over quantity.
Is it not about when as well? For example, would you not agree that, if that was two hours at two o’clock in the morning and they have school the next day, that would be an issue? I really want to press on the idea of curfews and saying, “Actually, you shouldn’t be on Snapchat at two o’clock in the morning.” Do you have anything like that?
We do have the ability for parents to see in Family Centre where their teen is spending time on the app and how much time is being spent on which portion of it. I would sound one note of caution about curfews. We have been counselled by external experts, particularly those who work in suicide and self-injury prevention, that if there is a curfew at 10 o’clock and a child is at risk in a situation, that might be the very time they need to reach out to someone to talk to. I would bring that other dimension into the conversation; it needs to be discussed with nuance and at a finer level. For probably the majority of young people, it is the case that lights should be out, phones should be off and sleep should be had, but—
So you have no specific tool on Snapchat for that. You agree with the premise, but there is no specific tool that says to a young person at a set hour, as you put it, “Your lights should be out.”
As I said, that is because the experience on Snapchat is very different from that on some of the other platforms. We do not have the endless scroll of seemingly unending feeds of unmoderated content. A teen might pick up the phone several times a day for snapping and messaging, but is not necessarily going to stay on for hours at a time. We do, again, have that feature in Family Centre where parents and trusted adults can see who the teen is connecting with and how much time they are spending on any particular portion of the app, whether that is the camera or messaging. That is where the bulk of the time is spent for the bulk of our users, as opposed to some of the other features.
On the point we have circled around, a number of social media companies we spoke to last week were in denial about how addictive their apps were. Having been a teacher and worked with young people, I know the desperation that young people feel to get their streaks up. I had one young person who was frustrated that they had a detention because they were not going to get their streak in that day. Would you recognise that there is an element of addictiveness to Snapchat, and do you think that it is potentially damaging to young people?
If you would like to talk about streaks, I am happy to address them. You should know that they are only visible to the two people in the streak. They are wholly optional, and they are intended as a means of celebrating and maintaining friendships. Friendships need care and feeding, and these kinds of regular interactions help to build and deepen those friendships. We have heard directly from our community that streaks create a shared sense of commitment and add a fun element when it comes to celebrating friendships that they know they need to nourish on a daily basis. In North America, we have thousands of Snapchat users who have streaks of more than 2,500 days, which means that they have communicated with the same friend every day for nearly seven years. You should also note that we do not send push notifications for streaks, and we have extended the window for when streaks expire.
But I understand that you have faced legal action in New Mexico in the USA over accusations that your platform is designed to be addictive. How do you respond to that?
We do not see it like that. It was not designed to be addictive; it was designed to enhance friendships between people who know each other in real life—both friends and family members. Almost any activity could rise to that level of overuse, but that is not how we look at the platform. I am not a medical professional, but I do work with our 19-member global safety advisory board—I lead and manage that effort specifically—and its medical professionals, academics and researchers counsel us on these types of issues. We know that scientists and researchers urge caution when using the term “addiction”.
Can I put it to you that time, for all of us, is a limited resource? Two hours, 13 minutes a day spent on Snapchat by 13 and 14-year-olds living in the UK is two hours, 13 minutes that those children are not spending outdoors, not enjoying exercise, not spending time in real life with their friends, not reading for pleasure and not interacting with their parents and other family members. Do you really think that two hours, 13 minutes a day on your platform is the best possible use of that limited resource for an average 13 or 14-year-old Snapchat user in the UK?
Again, I would emphasise quality as opposed to quantity. We have been told, even by members—
I’m really sorry, but we are talking specifically about the quantity here. I am talking specifically about the two hours, 13 minutes that UK 13 and 14-year-olds who are Snapchat users are spending on your platform and not doing other activities that we know are beneficial for them. I am asking you a direct question with a yes or no answer: do you think those two hours, 13 minutes on Snapchat are the best possible use of time for those 13 and 14-year-olds? The quantity, not the quality.
Listen, I can’t answer that for every young person out there. We hear from our teen council—we have a council across Europe that includes members from the UK—that it is underappreciated and unknown by adults that the bulk of their time on Snapchat is spent doing homework. We would really have to dive into the individual cases; we cannot just make a blanket statement that two hours, 13 minutes is good or bad or otherwise. These are highly nuanced issues, and I would really encourage the Committee to address them as such.
Ms Beauchere, can I just say that as a former teacher—
I am sorry, but because of time, I am going to go to Mark for his next question.
Thanks, Jacqueline, for your answers so far. How would you assess the proposals to introduce a ban on social media for under-16s in the UK, which would almost certainly include your platform, Snapchat?
When it comes to bans and restrictions, it is important to keep in mind that, as I said, not all platforms are architected the same. As we were just discussing, not all teens of the same age are at the same maturity level. I went to Australia in August 2024, when they were considering their action. I argued at the time, and I would still say here today, that decisions about a teen’s readiness for social media are best left to families and should really consider a variety of factors: age, maturity level, executive functioning skills and the family’s values. In fact, there is nothing right now precluding parents and families from holding off on giving their teens access to social media, if they so choose. The most important question to ask about a blanket ban is, “Will it truly improve safety for teens?”, and many online safety experts are against blanket bans. More than 150 such organisations in Australia came out in opposition to a ban, but still Australia was determined to be the world first mover. I think they hastened the law’s passage and its implementation, and early indications suggest it is proving difficult to enforce and might even be encouraging circumvention, thereby [Inaudible] unregulated spaces.
One of the reasons why I think people are finding it difficult to enforce, at least at first, is that your organisation has been accused of malicious compliance with the law in Australia. We have heard evidence that your organisation is not doing anything to get young people below the age of 16 off Snapchat; in fact, we know for certain that you are being investigated by the Australian eSafety Commissioner for non-compliance with that ban. What would you say to that, and what reassurance can you give this Committee that, if the UK brings in the same ban, you will actually comply with it?
That is wholly untrue. We are taking all due measures. In Australia, as you know, we as a platform have to take reasonable steps. We have so far, to my knowledge, removed 450,000 under-16s from the platform. They are circumventing the rules; they are coming back and trying to get back on the platform. We now have another effort under way in which another 147,000 are going to be removed. Half of those were removed already and have to be removed again, because they are circumventing these types of interventions. No ban will ever be 100% effective unless we have reliable, predictable age assurance and age verification technology. That does not exist now; there are many efforts under way around age verification and age assurance, but it is not as easy as it sounds, and young people are circumventing these measures.
I appreciate that you have rejected or pushed back on many of the statistics and pieces of evidence we have used in our questioning, but Australia’s eSafety Commissioner published a report that found that, prior to 10 December 2025, nearly 70% of parents were reporting that their children were still on the platform. That suggests that you are not doing enough to get children off there; in your earlier answer you said that was not true, but that finding suggests otherwise.
We are in close communication with the Australian eSafety Commissioner and her office. What we are doing is monitored on a regular basis, and we are constantly evolving our techniques and our processes. Again, the baseline was reasonable steps, and I would posit that we are going far beyond reasonable steps. If you want to compare data, we also saw data from the eSafety Commissioner some time ago showing that many underage teens were on these platforms with the consent and permission of their parents.
All I will say is that I hope when or if such a ban comes into force in the UK, you will comply with it fully and make every effort to get every child under the age of 16 off your platform. Darren has a question to follow up on this.
Ms Beauchere, you have described a number of measures and brought us back to the intent and design that you claim make Snapchat safe for children. You are asking us to believe that there is a balance, and that we can say that, rather than being unsafe or having significant unsafe elements, Snapchat is safe. If that is the case, can you explain why countries around the world, from Malaysia to France to this country, are considering or implementing restrictions on most children having access to such sites? Whatever we think of Governments, they generally do not have the time, resources or energy to go around banning things that do not need banning, or restricting things where that would be a waste of time. What is your view on why Governments and nation states are taking action on precisely the kind of sites you run?
I would like to get away from the words “safe” and “100% safe”. I talked earlier about having to make the distinction between risk mitigation and harm reduction, which are our main goals. We want to distinguish between actual risks and actual harms. Risk exists in every facet of life and, unfortunately, so does harm. There will never be any absolute state of being 100% safe, online or offline, so we have to focus on risk mitigation and harm reduction. On the other part of your question, about the various Governments looking into this, there is a campaign of sorts by individuals who are coming out against social media and online platforms and who are voicing the idea of bans and so forth. It also has to do with the notion of problematic use—what some might call “addiction”, although, again, scientists and researchers urge caution around that term. There is an active campaign in this regard. Personally, I do not think that a blanket ban is the way to go; there are other ways to address these issues, which have to be taken forward in a collaborative way across a variety of sectors and actors. For instance, I do not think we do enough to really instil critical thinking in young people. They have to have critical thinking; they have to have that executive function. They will encounter risk, online and in real life, and they have to know what to do with it once they encounter it.
Thank you. I am going to move on to a final few questions about the Government’s consultation, which asks respondents about restricting functions such as disappearing messages and location settings. Those are both quite fundamental features of Snapchat; if such restrictions were introduced in the UK, would they not completely undercut your platform’s operation?
I am sorry, could you repeat that?
If it was not permissible to have disappearing messages and location sharing on Snapchat, how would your platform work?
It is primarily a messaging platform. Location sharing on the Snap Map is off by default for all users. If users did not want to share their location, that is simply something they would not opt in to. I do not think it destroys the notion of the platform because, again, it is primarily about private messaging between individuals.
And disappearing messages?
Again, Snapchat was designed to mimic real life. Messages that delete by default do not create a verbatim transcript, and by that we are demonstrating the value that we place on private communications. We have 24-hour retention, and we also now have an infinite retention option for message preservation. So I don’t think that is an issue.
At the moment, if snaps are being used to bully, harass or groom children on your platform, but they disappear, how can they be made available to the police, education authorities, parents or anybody else who wants to look at them in detail and investigate exactly what harms children are coming to?
We always tell young people, parents and others that disappearing messages do not necessarily equate to disappearing evidence. It is always important to reach out to Snapchat—we tell this to law enforcement as well—to see if we have data or information that could help with a particular case or situation. That is why we really encourage people to report it to us if something is going on in a private chat that we are not aware of, and to make it known to us where there is a bad actor, bad conduct or what have you.
You have heard from members of this Committee about their direct experiences of getting in touch and nothing happening.
I would want to know that they did get in touch and what they were asking. Sometimes information and data will be available; sometimes they will not. Again, just because the messages disappear, that does not mean the evidence necessarily does.
Finally, on location sharing: we know that location sharing can be turned off, as you have said, and it might be off by default, but that is not the norm. In my experience as a parent of a teen, and in the experience of lots of parents of teens, it is not the norm for teen users to have location sharing turned off. It is common for our teens to experience profound distress—for example, from being able to see that their friends are gathered in a location and they were not invited, or that something is happening that they do not know about. Aside from the very serious types of harm that we have talked about—children encountering unwanted content, sexual content and so on—I want to ask about that fundamental feature, and whether you think it is a good thing for the mental health and wellbeing of our children and young people that they can see information that might cause them distress. It is not necessarily harmful content, but it is a fundamental feature of the way they use your platform.
Again, the location sharing on the Snap Map is off by default for all users, including under-18s. It is also a double opt-in feature, so you have to opt in at the device level and at the app level. We sometimes hear that young people might see where their friends are gathering when they are not there and were not invited; that happens in real life as well—you are invited to a party, or you are not invited to a party.
But it really doesn’t. If you are at home and you are not online, you don’t know what your friends who are at their homes or away from you are doing. You are relieved of that emotional pressure at the weekend or when you are not together with your friends. You don’t know—that is the whole point. On Snapchat, children can see things that cause them distress when they are in the private space of their own home. It is one of the things that gives us cause to believe that platforms such as yours are eroding the mental health and wellbeing of our children and young people, because they are subject to precisely all those social pressures, 24 hours a day, with no respite.
Snapchat was designed to relieve those social pressures. Our founders were among the first people to grow up with social media, and they were not enamoured of what they were seeing. That is why they wanted to make a more private app and a more private experience, where you could share things in the moment with the people who matter most: family and friends. There may be particular smaller features of Snapchat that some take issue with, but they are by no means the totality of the app.
I think there will be parents listening to this evidence session across the country whose experience with their own children and young people is profoundly at odds with the arguments you have made today, but that brings us to the end of our evidence session. We are grateful to you for coming, and particularly for the early start you have had on the other side of the Atlantic. Thank you very much for that.