Education Committee — Oral Evidence (HC 1839)

21 Apr 2026
Chair

Welcome to this oral evidence session of the Education Committee. Our session this morning is the first of two oral evidence sessions that the Committee is holding to look at issues around screentime and social media in the context of the Government’s consultation on future policy in this area. Our predecessor Committee undertook some important work in this area, and we want to refresh the evidence we have taken in public so that we can make a contribution to the Government’s consultation. This sits alongside a longer-term piece of work that we are undertaking about the role of AI and edtech in children’s lives and the education system, which will take a bit longer to conclude. We are grateful to our witnesses for joining us today. I need to put on record that we are extremely disappointed that Snapchat, which was due to come to give evidence today, withdrew from that commitment at quite short notice. We are hoping very much to hear from Snapchat on its own at our meeting this time next week, but I should put on record that we have taken the formal decision today that we will use our powers to summon a witness from Snapchat in the event that Snapchat is not co-operative with the Committee in coming to give evidence. This is important given the prevalence of children’s use of Snapchat and its relevance to the debate we are having as a nation and in this Committee at the moment. I am grateful to the three witnesses who have joined us today. Can I invite you to introduce yourselves for the Committee? Alistair Law: I fully share your view on the importance of being able to give evidence in this debate. I am Alistair Law. I am director of public policy for northern Europe at TikTok.

Rebecca Stimson

Good morning. I am Rebecca Stimson. I am the director of public policy UK for Meta.

Chair

Thank you. Joining us virtually, we have Laura Higgins.

Laura Higgins

Thank you for inviting me and facilitating my joining virtually. I am Laura Higgins. I am a senior director of community safety and civility at Roblox, an immersive gaming and creation platform.

Chair

Thank you very much. I will begin our questioning this morning. The Prime Minister met some of the most senior representatives of each of your companies, along with some other companies, at Downing Street last Thursday to discuss children’s use of social media. He told those representatives that “things cannot go on like this”. Do you agree with that?

Rebecca Stimson

We were really pleased to be invited to that conversation—a very important, timely conversation, as is this one today. We welcomed the opportunity to lay out the steps that we have taken, particularly around 13 to 18-year-olds on our platforms—I am sure we will get into that in the conversation this morning. We absolutely agree that, unfortunately, this is not a job that is finished and done, and that there is always more to do to ensure safety, particularly as technology evolves. We recognise the real concerns that were expressed at that roundtable, and we look forward to working with the Government and Parliament as we continue through the consultation process on what next steps might be the best idea. Alistair Law: I represented TikTok at that meeting in my capacity as director of the TikTok UK board, and I was very pleased to do so. I share the concerns that the Prime Minister set out; there were a number of different concerns, which I am sure we will talk through in more detail. I was pleased to be able to give an overview of the age-appropriate experience that TikTok provides under-16s on our platform, but I completely agree with the notion that safety is a race that is never run. This is an area where we will continue to review the evidence, continue to make changes and continue to invest in ensuring that children on our platform are protected.

Laura Higgins

I understand that the meeting held by the Prime Minister was specifically in relation to social media platforms, so we were not invited to that conversation. We are, however, very pleased to be part of this broader conversation. We share the Committee’s commitment to keeping children safe online. We want to work with the Committee and the Government to ensure that we are all helping to contribute to this conversation so that any rules that come out of the process are proportionate and well targeted. As a platform that is used widely by children, we recognise that it carries real responsibility, and we do not take that lightly. We are looking forward to the conversation and sharing some of the work that we are doing.

Chair

The tone of those initial answers is very much that there is work to do, but that this is a managed, business-as-usual progression, with some monitoring and possibly some tweaks. Alistair Law, the Committee has been briefed today on a report, coming from a police investigation, that children on your platform are being groomed into sexual activity and are selling themselves through that grooming, and that content of that nature via TikTok is ending up on the dark web, in the hands of predators. Do you think monitoring, engaging in a debate and taking some further steps are enough in that context? Alistair Law: In the context that you mention just there, we absolutely abhor that kind of activity. It has no place on our platform. When we became aware of the report that you mentioned, which was first reported in the Telegraph, we immediately contacted the Home Office and the National Crime Agency, and within a couple of days we were talking directly to the police force itself. We have a number of different steps that prevent and act against the kind of activity you have just described, which I am happy to go into, but we took that report and tried to make sure that we identified the direct harm and learned lessons from it. We were speaking to the police force within a couple of days. We have law enforcement teams that are designed to engage directly with police forces, so that if any violative activity takes place on our platform, we can respond to it in real time. We can then continue to build our learning because, as you say, the predators and objectionable bad actors in this situation are always looking to use whatever means they can to achieve their objectives. We have a whole set of things we do on our platform—use of content moderation, taking down and preventing any livestreaming by somebody under the age of 18, and additional checks as to how old they should be—but when we identify activity that is seeking to circumvent that, we want to know about that as quickly as possible, and take action and close any gaps as quickly as possible as well. We are working with the police unit OCCIT to further understand what it has learnt, particularly in an off-platform environment, because, as you say, sometimes we find that people are trying to direct people off-platform. The more we can learn about what takes place in closed spaces, be it the dark web or encrypted messaging services, the more we can strengthen our measures. To completely agree with you, that needs robust direct action, and that is action we are taking.

Again, that might be a reasonable and credible response if this were an exceptional type of activity and you were reacting immediately to something occurring sometimes. The report says, “Abuse and the sexualisation of children is frequently noted taking place on the platform itself. Within just a few days of reviewing offender behaviours on the platform, OCCIT noted hundreds of accounts dedicated to the sexualisation of children—many of which specifically focussed on those from the UK.” So this is normal on your platform; it is not exceptional, harmful behaviour that is happening occasionally. Why are you not stopping it? Alistair Law: As I say, we are very grateful for the report and, in particular, being able to learn the ways that bad actors try to evolve their approach—

Why did you need the police to tell you what is happening on your own platform? Alistair Law: We constantly look at trying to make sure that we enforce the rules we have with all the different content moderation approaches we have. To set it out, any time a video is uploaded to TikTok, it is scanned on upload by auto-moderating technologies that look for a range of different harms and anything that breaches our community guidelines. There are certain things that AI models have been pretty effective at identifying. Nudity is a great example—obviously leaving aside CSAM and the most abhorrent images. It is prohibited on our platform in general, and our AI models, because they have good training data, are very effective at blocking it on upload. We take other measures, such as ensuring that under-16s do not have access at all to direct messaging. Where direct messaging is allowed when you are over the age of 16, we still ensure that we proactively scan any images for known or novel CSAM as well. None of that means that our work in preventing that activity is complete, because people evolve their approaches. We have met with OCCIT and are continuing to learn more about the evolutions that predators are undertaking to try to circumvent approaches and evolve their behaviour. Maybe they are using key words discussed on the dark web; we need to understand those key words so that we can block them and prevent people from off-platforming on that basis. It is a constant level of evolution, but I want to assure the Committee that we are dedicating priority resources on a significant level to tackling it.
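The pipeline described there (scan every upload automatically, block high-confidence violations before anyone sees them, and route uncertain cases to human moderators) can be pictured as a simple gate. The following sketch is illustrative only; the names and thresholds are hypothetical rather than TikTok's actual system.

```python
from dataclasses import dataclass

# Illustrative sketch of an upload-moderation gate of the kind described
# in evidence: every video is scanned on upload, high-confidence violations
# are blocked automatically, and uncertain cases go to human review.
# All names and thresholds here are hypothetical.

@dataclass
class ScanResult:
    label: str         # e.g. "nudity", "violence", "none"
    confidence: float  # model confidence in [0, 1]

BLOCK_THRESHOLD = 0.95   # hypothetical: block without human involvement
REVIEW_THRESHOLD = 0.60  # hypothetical: route to a human moderator

def moderate_upload(scan: ScanResult) -> str:
    """Decide what happens to a video at upload time."""
    if scan.label == "none":
        return "publish"
    if scan.confidence >= BLOCK_THRESHOLD:
        return "block"          # taken down with zero views, no human needed
    if scan.confidence >= REVIEW_THRESHOLD:
        return "human_review"   # held pending a moderator decision
    return "publish_and_monitor"

print(moderate_upload(ScanResult("nudity", 0.99)))  # -> block
```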

I want to drill into that example early in this evidence session, because it demonstrates that, with the best will in the world, you cannot control it and you have not been controlling it. We cannot accept that that level of harmful activity is normal across the platform and that you are making your best endeavours to ensure that it does not happen, when it continues to happen. Do you agree that the approaches you are deploying so far are not working, that a platform such as yours simply cannot get a grip on this, and that we therefore need different approaches to the ones that have been taken to keep our children safe? Alistair Law: Any single example of harm relating to that kind of activity occurring on the platform is obviously a failure and something that we need to directly address. At scale, the kinds of moderation approaches I was talking about mean that, of the violative content on our platform in general, 99% is taken down proactively through our own activity and close to 90% is taken down with zero views through pure automation—not even involving humans. Is that a complete and total delivery of what we need to do in this area? No. We need to continue to work on that. However, from a scale perspective, the models, the investment that we have in AI and our content moderation approach are doing huge work to ensure that our community is kept safe. Our responsibility is to continue to enforce and invest in improvements in that enforcement and, as I say, to work with partners like this police force and the NCA to make sure that, as bad actors evolve their techniques, we are aware of that as soon as possible and can put our own measures in place to address them.

Chris Vince (Labour, Harlow)

Thank you all for your time today. The previous iteration of this Committee, in its 2023 report, was distinctly concerned about the harms of screentime. In fact, it specifically said that the “overwhelming weight of evidence submitted to us suggests that the harms of screentime and social media use significantly outweigh the benefits for young children”. Do you agree with that assessment and, whether you do or do not, what benefits do you see young people gaining from using your service?

Rebecca Stimson

Of course. There are lots of benefits that people gain. You can see on our platforms, on Instagram in particular, people following things like BBC Bitesize, some of Britain’s best institutions and museums. There is great tutoring, school and educational support, which I know would be particularly relevant to this Committee. We have generally found that screentime can clearly be a problem if someone is passively scrolling. Over-consumption of anything can be a problem—it depends a bit on what you are doing. Because of that, we have this thing called teen accounts, where we have defaulted all 13 to 18-year-olds into a much more restricted experience. We give teens a nudge after an hour of being on the platform; it is muted overnight; and we also have parental controls, so they can set the right screentime for their family, which can be as little as 15 minutes a day on the app. There is a mixed picture about screentime concerns. We absolutely hear it, and we have tried to respond to that by giving people choices.

Chris Vince (Labour, Harlow)

Can I come back on that really quickly? It is interesting what you said about the different experience you give to teens. I have got Facebook because I am that age, right? There is a real danger of spending a lot of time scrolling because the algorithm is specifically designed to give you more content that you want—I get a lot of stand-up comedy, because that is what I am interested in. The problem with that, I think, is that it can extend the amount of time people spend on a screen to levels that are not actually good for them. Have you done anything for younger users of your accounts to tackle that potentially addictive algorithm problem?

Rebecca Stimson

As I said, our recommender system is designed to offer you connections with your friends, family and the things you enjoy. That can also, as you say, lead to it giving you more comedy if you like comedy, and keep doing so. Thirteen to 18-year-olds, who have been defaulted into a more restrictive set of settings, get told after an hour to leave the app. The app is muted overnight, between 10 pm and 7 am. That account is managed by a parent, who has full control over the time. If they felt they wanted to restrict their child’s time online, it can be reduced to as little as 15 minutes a day. We have not enforced that centrally, because there are lots of examples of young people making really great use of these apps to connect with issues they care about—activism, hobbies and interests, and education. We think it is better to give some control to families. However, like I said, we interrupt after an hour.

Chris Vince (Labour, Harlow)

On the idea of having a teen account, my concern would be whether there is a danger that young people and teenagers set up an adult account. There is probably an answer to this—and I do not know what it is—but how do you ensure that people under the age of 18 or 16 do not just set up an adult account? How do you control that?

Rebecca Stimson

We have a whole range of ways in which we try to mitigate that. We recognise that absolutely accurate age assurance is an industry-wide challenge, but we take a multifaceted approach to ensuring that people are not lying about their age. One thing that AI has really helped us to advance recently is that, if you lie and say you were born in 1975, it can scan what you are doing, your friends, what you are posting and images. It has a very good way of telling that you are not that age and that you have lied. You are then defaulted into that experience. We recognise that challenge, but we try as much as possible to prevent people from setting up fake accounts where they are claiming to be an adult when they are not. As I said, in this new restricted experience, it is linked to a parent’s account, which can then have a great deal of autonomy and control over what is happening for that teen on our platforms.

Chris Vince (Labour, Harlow)

Ali, the same question to you on the general point about screentime. And on Rebecca’s point, what is TikTok doing to deal with the issue of age verification? Alistair Law: I want to go back to the first part of your question about the positive elements of our platform, which is the ability to create, discover and express yourself. Very much from a learning perspective, we introduced on the home page a STEM feed in 2024. This is curated content specifically on science, technology, engineering and maths. It is available to all under-18s by default. We find that a little under a third of them visit on a weekly basis. TikTok has been a place where communities like BookTok have created huge levels of new interest in reading, which is something else this Committee has looked at previously. I think the benefits of connection, community and expressions of creativity are meaningful and material, but of course we recognise what you say about the concerns around screentime. We also have an age-appropriate experience. The experience that you get on TikTok if you are 15 is very different from when you are 25. That starts by ensuring that you have a default one-hour screentime cap, so you will not be able to use the app after an hour. Within that, we also have screentime break recommenders. If you have been using it for half an hour straight, a pop-up will come up saying, “You have been using this for 30 minutes.” You can snooze that, you can dismiss it or choose to take a different view. We also have a notification curfew so that nothing comes through as a notification on the app from 9 pm overnight. If you are using the app at 10 pm, we have a thing called sleep hours reminder, which is a full-screen takeover that prompts people that it might be time to put down the phone, move away and go to bed. It actually takes them through a meditative breathing exercise as a way to kind of shift their energy as well. So, there are huge amounts of individual nudges. I have not mentioned the features that are restricted, like direct messaging and going live. I think we can leave that for safety discussions. But if you are 15 on the app, there is a whole host of things that act together to present a balanced and healthy relationship with the app.

On the hour cap that you mentioned, does the app close completely after an hour? If that is the case, how long after that can you open up the app again? Alistair Law: By default, it is an hour cumulative over a 24-hour period. And yes, it will tell you that you have reached your limit and it will close the app. It is not completely mandatory. When we designed it, we did so with the Boston Children’s Hospital digital wellness taskforce. There is not an awful lot of academic research out there that says what a level of screentime should be and is applicable to people between the ages of 13 and 15. As part of their research, they alighted on an hour as a starting point. As Rebecca said, there is the ability to go beyond that or lower than that. We also have parental tools, so that if you have linked with your child on our family pairing feature, you have additional levels of control. You can set lower screentime caps and you can set blocks of time away if you do not want people to access it during school hours, for example. But our focus is very much on thinking about the multifaceted different ways that people might be using the app and might be nudged into considering their use of it so that it is balanced.
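The cap behaviour described, a cumulative hour over 24 hours by default, adjustable up or down by a linked parent, with optional blocked periods such as school hours, reduces to straightforward counter logic. A minimal sketch with hypothetical names; this is illustrative, not TikTok's implementation:

```python
from datetime import datetime, time

# Illustrative sketch of a default daily screentime cap with parental
# overrides, as described in evidence. Names and defaults are hypothetical.

DEFAULT_CAP_MINUTES = 60  # the default described for under-16s

def may_use_app(minutes_used_today: int,
                now: datetime,
                parental_cap_minutes: int | None = None,
                blocked_windows: tuple[tuple[time, time], ...] = ()) -> bool:
    """Return True if the app may stay open right now."""
    cap = DEFAULT_CAP_MINUTES if parental_cap_minutes is None else parental_cap_minutes
    if minutes_used_today >= cap:
        return False  # cumulative daily limit reached; the app closes
    for start, end in blocked_windows:
        if start <= now.time() <= end:
            return False  # e.g. a parent-set block during school hours
    return True

# A parent lowers the cap below the default and blocks school hours:
school_hours = ((time(9, 0), time(15, 30)),)
print(may_use_app(45, datetime(2026, 4, 21, 19, 0),
                  parental_cap_minutes=30,
                  blocked_windows=school_hours))  # False: 45 >= 30
```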

I will go back to the initial part of the question. The previous Committee found that social media and screentime do pose harm. Would you agree with that? You have put that hour cap in place for a reason, so you must have concerns about a young person using social media for a longer period of time in a 24-hour period. Alistair Law: As Rebecca said, the overconsumption of anything is something that we would be concerned about. We will always be led by the evidence, but the evidence is contested. There are UNICEF findings as well. Screentime as a whole and the idea of a particular limit has not been firmly established from an academic perspective—I know you are hearing from academics later on. From our perspective, putting in place a range of different measures that act together gives people agency as well. You can think of the interventions that we have in three buckets. Some stuff is just on and you can’t change it. The notification after 9 pm that I talked about and the screen hours report that you are given at the end of the week are just on. Then there are things that are on by default. You could opt out if you want, but I mentioned, for example, the full screen takeover meditation and breathing exercise. We found that 98% of under-16s kept that on, and as a result we extended that to under-18s as a whole. Then you have things that you can opt into, such as the family pairing parental control. All of them are little nudges, little features, that collectively add up to something quite powerful.
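The three "buckets" just described (features that are always on, features on by default that a user may switch off, and opt-in features) amount to a small settings-resolution model. A minimal sketch with hypothetical feature names taken from the testimony; this is illustrative, not TikTok's configuration system:

```python
from enum import Enum

# Illustrative model of the three intervention "buckets" described in
# evidence: always-on, on-by-default (user may opt out), and opt-in.
# The feature names are examples drawn from the testimony; the code is
# a sketch, not any platform's real configuration system.

class Bucket(Enum):
    ALWAYS_ON = "always_on"    # cannot be changed by the user
    DEFAULT_ON = "default_on"  # on unless the user opts out
    OPT_IN = "opt_in"          # off unless the user opts in

FEATURES = {
    "notification_curfew_9pm": Bucket.ALWAYS_ON,
    "weekly_screen_time_report": Bucket.ALWAYS_ON,
    "sleep_hours_reminder": Bucket.DEFAULT_ON,
    "family_pairing": Bucket.OPT_IN,
}

def is_active(feature: str, user_choice: bool | None) -> bool:
    """Resolve whether a feature is active given the user's choice (if any)."""
    bucket = FEATURES[feature]
    if bucket is Bucket.ALWAYS_ON:
        return True
    if user_choice is not None:
        return user_choice
    return bucket is Bucket.DEFAULT_ON  # fall back to the default

print(is_active("sleep_hours_reminder", None))  # True: on by default
print(is_active("family_pairing", None))        # False: requires opt-in
```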

I take your point about evidence. I always mention this in every Select Committee, but I used to be a teacher. One of the things that I am concerned about, which Rebecca touched on briefly in her statement, is the amount of time that young people are spending on these platforms late into the night. You mentioned the 9 pm curfew—I will call it that—but that is not mandatory, so there is a danger that young people will ignore it. You said that 98% did not change that function, but they could still then go back on it afterwards, presumably. Do you think there is more you could do to ensure that young people are not sitting up and scrolling through TikTok, or whatever it is, till 2 o’clock in the morning, because that will have a significant impact on their education? Alistair Law: There are a couple of different things in there. There is the 9 pm notification cut-off. After that point, you will not get anything that buzzes via our app to say that you have received a message or anything like that. Then there is the 10 pm sleep hours reminder. Yes, you are right that you can dismiss it. It will then come back—I think, either 10 or 20 minutes later—as another takeover as well. It is not mandatory at the moment. I think this goes back to my point: we will be led by evidence. It is something that we introduced probably about a year ago, and we saw a good deal of take-up. We are constantly working with our partners—I mentioned the Boston Children’s Hospital, but we also have our own safety advisory councils, including a global youth safety council—to evaluate what more we can do. What young people tell us primarily is that they want agency. They want the ability to set their experience, and that is why we have both these default settings and also a whole range of additional tools that you can opt into. We have a digital wellbeing centre—I have not touched on that, but I can go into more detail on it later—through which we create missions; people can earn badges on the basis of educating themselves about the features that they have, such as the screentime cap and the ability to set limits. There is a whole panoply of different ways that we are thinking about trying to come together, but of course, as more evidence comes out and as we learn more, we will do more.

This is my last question for you, before I come to Rebecca. We have talked about the evidence, but the reality is that social media has grown so quickly. When I was at school it was Myspace, which has now died off. Do you think part of the issue is that these social media platforms are being developed before the evidence of their impact has fully been understood? Do you think that is a potential challenge that young people and other users face? Alistair Law: You are absolutely right that it is a very fast-moving industry. In part, that is one of the reasons why we are continuing to invest in the features that I have outlined. Prior to working at TikTok, I did 10 years in the TV industry. Obviously, there are more channels now than ever, and there is a greater amount of content on streaming services and so on. From our perspective, doing things like an hour screentime cap as default and a sleep hours reminder goes beyond what we see in other areas, be it TV, gaming or messaging. Part of the reason why we are doing that is that we are conscious of the potential impact and we want to make sure that our users have both set features and tools available to them to guard against that.

Sorry, Laura; I have kept you waiting. Could you answer the initial question about the predecessor Committee? Sorry—it was a while ago that I asked that question.

Laura Higgins

That is fine. We are not a social media platform: the primary engagement in Roblox happens within the experiences and games themselves. Children are actively playing with their friends or building and creating; they are not passively consuming algorithmic feeds. That is one element—it really is about what people do when they are in these online spaces. We absolutely agree that balance is everything. We really want to encourage healthy lives online and offline, which includes physical activity as well as the kind of creation and play that happens within Roblox. We have a suite of tools for parents. They can set daily play limits, and when they cut off, they do not come back on until the following day. There are also other ways that they can manage when and how their children are using Roblox. We do not serve any push notifications to users under the age of 13. We also worked with our Roblox teen council to get its input on the wellbeing tools that it wanted. It is very much as Ali mentioned: we heard about the desire for agency, so we introduced “do not disturb” mode so that users can opt out if they just want some quiet times to themselves. Users also have online status controls and their own screentime insights. It has been interesting to hear from young people that when they get that little bit older and the parents are perhaps a little less involved, they are still aware that they might be playing a little bit longer from time to time, and find those screentime insights helpful.

Chris Vince (Labour, Harlow)

I appreciate you are a slightly different platform, in terms of social media, but you still have direct messaging, so there are some potential challenges there. What do you do specifically to protect young people when it comes to direct messaging and the potential bullying messages they could get from other users? Just on the point I made to Alistair about screentime, the nature of gaming is that it is addictive—addictive with a lower-case “a”. You want to carry on and get to the next level, so there is still that danger about potentially playing until 2 o’clock in the morning. This is not a new phenomenon. I remember kids playing on Grand Theft Auto or whatever it might be. That is not a good example because they should not be playing that until 2 o’clock in the morning. What are you doing to tackle that particular issue?

Laura Higgins

We will talk about the communication piece first. All communication is off by default for under-nines. We actually just launched new features called Roblox Kids and Roblox Select, which are our new age-based account frameworks. I will talk a little more about that in a moment, hopefully. Roblox Kids is really for the under-nines. It is a ringfenced experience where they have games that are specifically suited to their age group and no communication. In terms of direct messaging, communications are off by default. They are opt-in, and we need parental consent to access those messages. That is just one piece. In terms of the problematic use of any platform, we of course do not want that to happen. We want people to be having a healthy and thriving time on the platform, so for the younger ones we encourage parents to be involved. We take responsibility for safety—it sits with us—but parents also know what is right for their families, which is why we try to give them more granularity to manage what is right for each individual child. What we see on our platform does not tend to be very long play sessions; it tends to be more on the weekends, or kids getting together after school. We encourage and work with expert organisations—for example, to create resources, do guidance and provide wellbeing tours for the community and parents—to try to prevent that from happening.
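The rule described for direct messaging (off by default, opt-in, and gated behind parental consent) is a default-deny check. A minimal sketch with hypothetical names; it is illustrative, not Roblox's code:

```python
# Illustrative default-deny check for direct messaging, following the
# rules described in evidence: communication is off by default, opt-in,
# and requires parental consent. Field names are hypothetical.

def may_direct_message(age: int,
                       user_opted_in: bool,
                       parental_consent: bool) -> bool:
    """Default-deny: every condition must hold before DMs are allowed."""
    if age < 9:
        return False  # no communication at all for under-nines
    if not user_opted_in:
        return False  # off by default; the user must opt in
    if not parental_consent:
        return False  # a parent must approve access to those messages
    return True

print(may_direct_message(age=8, user_opted_in=True, parental_consent=True))    # False
print(may_direct_message(age=12, user_opted_in=True, parental_consent=False))  # False
```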

Chair

I will go to Caroline in a moment, but I have a brief question. Broadly, within society, where there are behaviours and activities that are addictive, such as with gambling or alcohol, we say to adults, “Here are a bunch of regulations, interventions, advice and guidance that helps you to avoid getting into difficulty with that behaviour,” but we say to children, “You can’t go there at all.” Why is this any different?

Laura Higgins

There is no direct evidence that says that games are addictive by nature. We know there is still a lot of work going on in that space. Anything that is consumed excessively is harmful, so we would discourage that. We want to see this healthy balance of activity, but in the same way as parents get involved with their kids around what they watch on TV, how long they would be allowed to watch that for or what books they might read—some of that might be supervised and some might be unsupervised—there is always a conversation that happens around that, in which the parent says, “It’s time to go to bed now,” or, “Please can we turn it off?” We would really encourage those conversations to still be happening. We appreciate that for some young people there can be problematic online use, so we appreciate the work that goes into supporting organisations to prevent that from happening and providing support for those young people, should they need it.

Chair

Would either of you like to respond on that broad question?

Rebecca Stimson

I am happy to. We don’t design Instagram or Facebook to be addictive. Recent independent research in the US has shown that the vast majority of parents and teens using the platform find it to be a positive experience, for some of the reasons we were just discussing. But similar to Laura’s answer, we absolutely recognise that there can be risks of people misusing the platform and poor behaviours. Where we see that, we have defaulted teens into a much more restricted experience and given parents a stronger ability to intervene. That includes things like time, but importantly includes things like an ability to reset your teen’s algorithm. For example, if they are going down a rabbit hole of content that you as the parent might not think is healthy or suitable, you can completely reset that. It is not the way that our platforms are designed to operate, but that does not mean that we have waited, stopped, and not had those tools built and made available. Alistair Law: I would share a couple of the views there. I don’t think that there has been a clinical finding of addictiveness on this, but that does not mean that we don’t recognise the responsibility to drive healthy use. I mentioned the variety of different things that we have in place as default. I think we are the only major platform that has a screentime cap of an hour as default for under-16s, along with the other measures that I mentioned as well. We are cognisant of the potential for overconsumption, and we have put in place measures as appropriate.

Chair

There will be parents and adults of all types watching this session who, based on their experience, will not find credible the claim that there is no evidence that this is addictive. I will leave that there.

Caroline Voaden (Liberal Democrats, South Devon)

I would like to move on to talk about a ban. Obviously, that conversation has been led by Australia going first, but Governments across the world are now waking up to the dangers of social media and discussing how they are going to introduce a ban. Would you agree that the failure of companies like yours to adequately protect children and young people from addictive algorithms, violent content, sexual predators and so on has led to the worldwide push for a ban? Are you concerned that one could well be coming in in the UK as well? Alistair Law: We recognise the concerns that you have just talked about, and what is important in this debate is that there are a number of valid concerns that come together. There are concerns around harmful content, the level of time spent online, and the impact on wellbeing. Those are the sorts of concerns that my trust and safety team, who are all dedicated professionals—we have clinical psychologists, ex-law enforcement officers and people who have worked for NGOs on human trafficking—are dedicated to addressing. On safety, we were a later platform than many of our competitors, only launching in the UK in 2018. From the start, we designed our platform with safety in mind, both as a way to deliver for our users and as a competitive advantage. I recognise the level of concern; our response is to set robust guidelines and enforce against them, and to create an age-appropriate experience. I have spoken about some of the wellbeing elements that we have on there, but from a safety perspective, there is additional content that cannot be seen by under-16s. For example, graphic fictitious violence might be available to over-18s but not to under-16s. Earlier, I mentioned direct messaging; that feature is completely turned off. You cannot access it if you are under the age of 16.

But you could access it if you were pretending to be over 16. Alistair Law: Similar to the answer that Rebecca gave earlier, we start from the perspective that if you are set in the app store as being under 13, which is the age at which you can join TikTok, the app will not even appear. If you try to sign up, we will ask for your date of birth with a neutral age gate, and if you put in something under the age of 13, then we will block you from reapplying. Obviously, people will try to circumvent that, but we too default people into an under-18 content experience until we have the level of confidence, using signals on our platform, that they are the age they have said they are. We recognise, as Rebecca said, that this is an industry-wide challenge in terms of accuracy, but we adopt a prudent approach to it. In terms of your question about the ban, the UK Government consultation is a thoughtful and considered one that is asking a wide range of questions of, importantly, a wide range of services. By Ofcom’s own measures, the OSA regulates 150,000 services, and the three different buckets of concerns—about harmful content, time spent and wellbeing—are ones that are applicable to children’s experience online as a whole. Most important to us is that we think we have a good model of setting rules and enforcing them, and an age-appropriate experience. Can you find a way to bring other services into that kind of model and have a level of collective learning about what age-appropriate experiences look like? If you cannot, then clearly, for policymakers, for this Committee and for the Government, a more robust option is possible. But I think that will need to operate across the board if it is to go to the heart of what parents are concerned about.
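The sign-up flow described (a neutral date-of-birth gate, a block on reapplying after an under-13 answer, and a restricted default experience until signals raise confidence in the stated age) can be sketched as follows. The names are hypothetical, and the device-based block is an assumption made purely for illustration; TikTok's actual mechanism is not described in the evidence:

```python
from datetime import date

# Illustrative sketch of a neutral age gate as described in evidence:
# ask date of birth without hinting at the cut-off, block under-13
# sign-ups from retrying (here, by device, as an assumption), and keep
# everyone in a restricted under-18 experience until on-platform signals
# support the age they gave. All names are hypothetical.

MINIMUM_AGE = 13

def age_on(dob: date, today: date) -> int:
    return today.year - dob.year - ((today.month, today.day) < (dob.month, dob.day))

def sign_up(dob: date, device_id: str, blocked_devices: set[str],
            today: date) -> str:
    if device_id in blocked_devices:
        return "blocked"                # an earlier under-13 attempt on this device
    if age_on(dob, today) < MINIMUM_AGE:
        blocked_devices.add(device_id)  # prevent immediate reapplication
        return "rejected"
    # Everyone starts restricted, regardless of stated age, until
    # signals on the platform support the age given.
    return "restricted_under18_experience"

blocked: set[str] = set()
print(sign_up(date(2016, 5, 1), "dev-1", blocked, date(2026, 4, 21)))  # rejected
print(sign_up(date(2016, 5, 1), "dev-1", blocked, date(2026, 4, 21)))  # blocked
```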

Rebecca Stimson

Facebook is one of the oldest of the apps; it has been here for more than 20 years. Similar to what Alistair just said, we have had safety features and policies built in from the very beginning, and we work with hundreds of experts to design our products and policies, including in the UK. The conversation at the moment just reflects that this is an evolution of the same concerns around safety, which are perfectly valid and legitimate. What we hear most consistently from parents is screentime, what content can my teen see, and who can contact them. Our teen accounts experience, which we have had since 2024, reflects those concerns. We default people in, because we also recognised that it was overwhelming for parents—the average teen has about 40 apps on their phone, and everything has different settings. We realised that it is actually much more helpful to default in and allow parents to come out if they choose to do so. Those things can only be turned off for under-16s by the parent. As we mentioned in the consultation, there are interesting things around what kind of features and functions are right for younger people. That is an evolving conversation that we are really interested to have, to look at whether things like autoplay, infinite scroll and other features should now be looked at. We have some restrictions on that already, but we are interested in where that consultation conversation might go, as the conversation evolves around the right measures of online safety for younger users.

Caroline Voaden (Liberal Democrats, South Devon)

Do you think “ban” is the right word? It is an interesting word, isn’t it, because we don’t allow children to go into nightclubs because they would be exposed to alcohol and other harms, and we don’t allow children to drive a car or to smoke, but we do not call it a ban. We do not say that 14-year-olds are banned from going into nightclubs; we just say that you have to be 18 to go into a nightclub. Given everything we are seeing with children’s problematic use of social media, the effect it is having on mental health and wellbeing, and the fact that 93% of parents believe it is harmful to their children and over two thirds want to see it banned, do you think we are using the wrong language, and should we just say that this is not something that is safe for children and young people, and that they should be over 16 before they are allowed exposure to these platforms?

Rebecca Stimson

Part of the reason why we do not think a ban is the way to go is that it is going to lead people to believe that it is impossible to access these apps, and as we are seeing in Australia, it is not actually enforceable or effective as a measure. A better conversation is around where young people are spending their time and what the evidence is around certain features and functionalities of those platforms that we may want to look at. A ban is misleading in that sense. I do not think it is helpful language, because we do not think it is going to be something that is possible in practice. I want to go back to one of the things that DSIT published quite recently. Under the last Secretary of State, they undertook a year-long study of the available evidence, which also concluded that there is not strong, robust, concrete evidence of harm either way. That does not mean that we should not have a conversation about people’s concerns, and you can see that we have made huge investments and strides around safety online. But what the Government are trying to do, around evidence gathering for where the best and most effective policy intervention is, is the right approach.

Caroline Voaden (Liberal Democrats, South Devon)

I think the evidence argument is on very shaky ground now. The evidence of most parents in the country, who would say that their children are spending too much time online, suggests that there is a problem. I want to move on to the problems with the Australian ban, which you mentioned, Rebecca. The eSafety Commissioner in Australia has criticised practices from platforms, including both TikTok and Meta—apologies Laura, we will come to Roblox in a second—for repeatedly messaging children who are under 16 to encourage them to age assure themselves for the platform, using unreliable facial recognition software, letting children repeatedly try to age assure if they fail at the first attempt and making it hard to report age-restricted accounts. Alistair, how would you respond to these criticisms? Alistair Law: We, as I said, have a multilayered approach to age assurance, which represents the investment that we have made in AI models and signals that we can use on our platform. At the moment, we use that for age 13, which is our cut-off point, and for identifying whether people are the age that they say they are—under 18 or otherwise. From an Australian perspective, it is what we are using, plus some additional elements that stop use by under-16s and allow them to appeal. This goes back to the important point that the consultation needs to establish and address: what is the collective view, from an age assurance perspective, about how much confidence you want? You talked about the idea of unreliable facial age estimation technology. That is an element that is in use under the OSA for pornographic sites here in the UK, and you run into obvious challenges and trade-offs between privacy and safety when you go down the route of using that as the sole method. We have an approach that, at the moment, uses your activity on the platform to estimate whether you are the age that you say you are, and when you want access to riskier features, such as going live for the first time, we have a more robust series of checks—we ask for an ID check or facial age estimation. It shows that there are a variety of drawbacks and benefits to different versions of age verification. It is a critical element that the consultation will have to opine on, because whether you set a limit at 16 or you set different experiences under the age of 16, it comes back to the level of confidence that you can have in people being the age that they say they are.

Would you say that TikTok is guilty of messaging children under 16 to get them to age assure, and letting them try repeatedly if they fail? Alistair Law: From my understanding, what that referred to was that as we were coming up to the period of the ban, we were encouraging people to verify at the higher level if they were over the age of 16. If people had lied and we had not caught that, that would result in them being messaged, but the additional level is aimed at identifying that.

Rebecca Stimson

As I mentioned earlier, we use AI detection, so even if you have lied about your age, we try to detect that, and then we will age gate you into proving your age. We use Yoti, as guided by the OSA, for facial recognition but also for document verification. We try to take a proportionate approach to the higher levels of asking people for their ID for obvious reasons. We also know that many people do not have any formal ID, so we have to have a multilayered system. We are also a founder of a project called the OpenAge Initiative, which looks at having age verified once on your device. Again, this is similar to what I said about trying to make it as easy as possible for parents in particular. We are thinking about age verification at the device and app store level, where people who run app stores know the billpayer of that phone and have access to information that we would not have, and can block at the app store level. That is not to say that we would stop doing what we do. It does not remove our responsibility, but there is a missing piece of the jigsaw there. You are absolutely right. In any steps that the Government take as a result of the consultation, improving accurate age assurance will be important. It was not really in the Online Safety Act. Ofcom has done a bit of work to look at it, but we think it is a really important part of this conversation.

Caroline Voaden (Liberal Democrats, South Devon)

Laura, I would like to turn briefly to Roblox, because I know that you have not been covered by the ban in Australia. There has been criticism that the Australian ban does not go far enough, and that the harms of sites such as Roblox are comparable with social media. I appreciate you say that you are not a social media site and that you do not share the same harms, but I would challenge the idea that Roblox is not addictive, from what I have heard at first hand from many friends and colleagues whose children are fairly addicted to it. How would you respond to that? Do you expect that Roblox would be covered by a ban if it came in in the UK?

Laura Higgins

We were not included in the Australia ban because of the fundamentally different design and purpose of our platform. We are much more of an active play piece, and we do not provide those kinds of social media services and features. Due to the different types of experience that young people have when they are on our platform, a blanket ban that captured us would remove access to a lot of educational and creative experiences for young people, particularly here in the UK. We are also concerned about pushing young people to less regulated environments. We know from what has happened in Australia that a lot of young people use VPNs and still go into the less regulated spaces. We are not pushing back on regulation; we do understand that there is a need for it, but we really want to make sure that the regulation follows all the evidence around what features are appropriate and how young people are actually using these platforms.

Caroline Voaden (Liberal Democrats, South Devon)

If a ban were to be based on features and functionality, rather than a platform, which is the way Australia chose to do it, then it could well include Roblox—for example, banning anything that would allow anyone under 16 to be direct messaged by a bad actor.

Laura Higgins

We have already rolled out a huge number of safety and policy updates—145, I think, in the last year and a half. Most recently, just last week, we announced Roblox Kids and Roblox Select. This is our new age-based framework for making sure that young people are having the best experience for their different ages. It follows on from the previous work we did around facial age estimation, which we launched in January and which really narrows down who children can talk to, putting them in buckets with children who are a similar age. They can talk with children just a little older and a little younger, but it would be in the same way that, for example, they would probably hang out with kids from a year above them in the playground at school. We are also now adding age-gated access, being much stricter about the types of games and experiences that they can access, and we are rolling out additional tools for parents to have more oversight up until age 16, with tools that can restrict communication to trusted friends. For example, an adult would not be able to contact an unknown teenager. Facial age estimation is mandatory: if somebody chooses to join the platform and chooses not to undergo it, they will automatically be defaulted to our lowest settings, which allow no communication with anyone else on the platform.
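The age-bucket rule described, in which children can chat only with peers in their own band or one just a little older or younger, is effectively an adjacency check between bucket indices, with unverified accounts falling to the most restrictive tier. A hypothetical sketch; the bucket boundaries are invented for illustration:

```python
# Illustrative sketch of age-bucketed chat of the kind described in
# evidence: users are grouped into age bands and may talk only to the
# same or adjacent bands, and an account that declines age estimation
# defaults to the most restrictive settings, with no communication.
# The bucket boundaries below are hypothetical.

AGE_BUCKETS = [(0, 8), (9, 12), (13, 15), (16, 17), (18, 120)]

def bucket_index(age: int) -> int:
    for i, (lo, hi) in enumerate(AGE_BUCKETS):
        if lo <= age <= hi:
            return i
    raise ValueError("age out of range")

def may_chat(age_a: int | None, age_b: int | None) -> bool:
    """Allow chat only between the same or adjacent age buckets."""
    if age_a is None or age_b is None:
        return False  # declined estimation: lowest settings, no communication
    return abs(bucket_index(age_a) - bucket_index(age_b)) <= 1

print(may_chat(10, 13))    # True: adjacent buckets
print(may_chat(10, 35))    # False: an adult cannot reach a child
print(may_chat(None, 12))  # False: unverified account
```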

Caroline Voaden (Liberal Democrats, South Devon)

You are confident that a bad actor could not ever contact somebody who was under 16 through your platform?

Laura Higgins

One thing it is important to acknowledge is that bad actors come in all ages. We have a range of other tools that are constantly running across the platform at the same time. We know that peer-to-peer abuse happens on all platforms, so facial age estimation would not tackle that. We have other tools, such as our AI tool Sentinel, which is grooming detection that picks up contextual conversations. We proactively report—we work very closely with law enforcement. If we detect any signals, we will escalate those to law enforcement both here in the UK and through NCMEC. We are also members of Project Lantern—I believe my colleagues here who are speaking are also members—which is a signal-sharing project run by the Tech Coalition. If we detect signals about particular bad actors, we are able to share information to prevent them from being on the platforms collectively. That also helps us when it comes to things like people setting up new accounts after a ban, because we are able to track them and ban them across the platform.

Caroline Voaden (Liberal Democrats, South Devon)

An expert recently advised parents that they should be with their children at all times while they were on Roblox, because it is not a safe platform for young people, in the opinion of that expert. How would you answer that?

Laura Higgins

I would push back on that. Millions of people do have a really safe and healthy experience on Roblox all the time. We do appreciate that we have a younger audience on Roblox than on a lot of the other platforms, and we take that very seriously. For the younger ones, this might be their first experience of going into an online space, so we provide safety tools. It is safe for young people, but every child is different, and we encourage parental involvement. This is not about us pushing responsibility back on parents; we want to work in partnership with parents, so that they have the tools and they feel that they are still in charge of their child’s experience. As their child develops, grows more skills and builds more resilience in these spaces, the parent can sit back a little bit further and let their child go and explore more. But in those very early stages, I think it is a real positive for families to walk this together, particularly somewhere like Roblox, because it is actually fun to sit down as a family and play together. It is a really good opportunity for those kinds of conversations. Otherwise, you might have to sit down and have a conversation about online safety, whereas this offers a natural place to do that.

Chris Vince (Labour, Harlow)

At present, children can consent to have their data processed by companies only from the age of 13, so younger children obviously need parental consent. The Government’s consultation has proposed to raise that age of consent. Would you support that change?

Rebecca Stimson

I think it is obviously a different thing to the ban, as it relates to the age at which we can process a person’s data without their parental consent, but people sometimes think it is a different way of getting to the same outcome. We obviously comply with the GDPR in this country. If they raised the age of data-processing consent, that would capture many businesses in the UK that process data under the GDPR, far beyond social media companies. We do not have a strong opinion about whether that is the right measure. I guess the point is that some people conflate it slightly, and they think that it would result in a ban, which is not my understanding of how that might work. Alistair Law: I really do not have much more to add to what Rebecca said—that pretty much represents our view too.

Laura Higgins

Again, this applies slightly differently to us, because we already do not process the data of young people, and we already have parental involvement and parental consent for most features up until the age of 16.

Chris Vince (Labour, Harlow)

The Government’s consultation asks about restricting or banning the following features: disappearing messages, livestreaming, location sharing and sending or receiving videos that contain nudity. You mentioned some of those things, but what restrictions on those features do you already have on your site? Alistair Law: We already restrict the features that you called out there, so that you cannot access them if you are under 16. Direct messages are a good example. They are not defaulted off with an opt-in; direct messages are just not available to under-16s at all. Livestreaming is another good example, and you cannot go live until you are 18. That is an area of our site where you have to provide a greater level of age verification via facial age estimation or digital ID proof. That goes back to my earlier point about trying to make sure that we are both understanding the evidence and designing our platform in a way that provides an age-appropriate experience, but makes sure that the most risky features are prevented from being accessible. The examples that you called out there would reflect our—

Location settings as well? Alistair Law: We have only recently introduced a nearby element to what you see. In terms of people uploading content, the content that you upload if you are under the age of 16 is not available in other people’s “For you” feed. Again, you are set to private by default, so the only people who can see what you are posting at all will be those who you have had direct contact with, such as accepting a friend request. However, even then, you are not able to direct message on our platform.
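Taken together, the restrictions the witness lists (no direct messaging under 16, no livestreaming under 18 and then only with stronger verification, and under-16 posts kept out of other users' “For you” feeds) form an age-tiered feature gate. A minimal sketch with hypothetical names; illustrative only, not TikTok's code:

```python
# Illustrative age-tiered feature gate combining the restrictions
# described in evidence: no DMs under 16, no livestreaming under 18,
# and under-16 posts kept off other users' feeds. Names are hypothetical.

FEATURE_MINIMUM_AGE = {
    "direct_messages": 16,       # not available at all below 16
    "livestreaming": 18,         # also requires stronger age verification
    "public_for_you_reach": 16,  # under-16 posts stay off others' feeds
}

def feature_allowed(feature: str, age: int,
                    strongly_verified: bool = False) -> bool:
    """Apply the age floor, plus extra verification for the riskiest feature."""
    if age < FEATURE_MINIMUM_AGE[feature]:
        return False
    if feature == "livestreaming" and not strongly_verified:
        return False  # e.g. facial age estimation or an ID check required first
    return True

print(feature_allowed("direct_messages", 15))                        # False
print(feature_allowed("livestreaming", 19))                          # False: unverified
print(feature_allowed("livestreaming", 19, strongly_verified=True))  # True
```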

Rebecca Stimson

Similarly, our settings are all defaulted to private, so there is no location and you are not discoverable by someone you do not know. You cannot be contacted by someone you do not know, and you cannot be tagged or mentioned in anything by somebody you do not know. Also, who you know is visible to your parents and can be controlled, as well as whether someone sends you a friend request or a follow request. There is no livestreaming, and we have talked about the time restrictions, so there is quite a long list. With all our settings—I think it is similar to what Ali said—you need parental approval to change any of them.

Chris Vince (Labour, Harlow)

Laura, I have a specific supplementary question for you. You mentioned that your platform attracts younger users, and you recognise the importance of the work you do on that. Two in five of your users are under 13, so how do you keep those very young children safe?

Laura Higgins

I will just answer the previous question, because it is a quick answer: we do not have any of those features on the platform. We do not have any image sharing, and there is no encryption in chat. We filter and monitor everything. On your specific question about under-13s, as I mentioned, we have now rolled out facial age estimation, which is mandatory for all users. That means that we are much more accurate about who is in which age group, and what they can access on the platform. We are really bringing parents into the conversation and giving them visibility. We have synced parent accounts. They have their own dashboard where they can see what games their kids are playing. They can actually opt in or out of specific games. For example, a child age rated to the under-nine age group has much narrower access, to the mildest and most minimal experiences and games on the platform, but if they want to play with their older sibling, their parents are able to adjust that so that they have access to specific ones. The parents can see who a child is friends with and help manage their friends list. They can control the communication as well. As well as that, by default, we do not have a lot of those features that are on the more risky end. Those are the main things.
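The per-game parental override described, where an age rating sets the default and a synced parent account can allow or block individual games, resolves in a fixed order. A hypothetical sketch; the tier numbers and names are invented for illustration, not Roblox's system:

```python
# Illustrative sketch of per-game parental overrides as described in
# evidence: an age rating sets the default, and a synced parent account
# can allow or block specific games. Tiers and names are hypothetical.

DEFAULT_MAX_RATING = {"under9": 1, "9to12": 2, "13to15": 3}  # hypothetical tiers

def may_play(game_id: str, game_rating: int, age_group: str,
             parent_allowed: set[str], parent_blocked: set[str]) -> bool:
    if game_id in parent_blocked:
        return False  # a parent's block always wins
    if game_id in parent_allowed:
        return True   # e.g. playing a specific game with an older sibling
    return game_rating <= DEFAULT_MAX_RATING[age_group]  # age-group default

print(may_play("game-42", 3, "under9",
               parent_allowed={"game-42"}, parent_blocked=set()))  # True: opted in
print(may_play("game-7", 3, "under9",
               parent_allowed=set(), parent_blocked=set()))        # False: above tier
```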

Chair

Alistair Law, the OCCIT report refers to children as young as five livestreaming. The measures you have outlined take us so far in a world where everybody is behaving as they should, but they are optional and can be turned off. We know that not every parent understands social media or knows what their children are accessing. Not every parent is on this all the time. That report of children as young as five livestreaming, and livestreaming harmful content in a context of being groomed, would imply that the safety measures you have outlined are not working. Alistair Law: On your first point, about how these things are not mandatory and can be turned off, direct messages cannot be turned on if you are under 16. With livestreaming, we go through the process of additional levels of age verification with facial age estimation or document ID proof. We are speaking to OCCIT about some of the specific examples it gave. Some of those, by the way, were off platform, as opposed to directly happening on the platform itself. Whether or not the age verification elements you have for those risky aspects of your site are sufficient is absolutely a challenge that I think we are all alive to. We are putting a lot of work into that.

We are at the point in the session where we are going to have to speed up our questions to get through all the topics we want to, so I ask Members and witnesses to be as brief from now on.

Dr Johnson

We talked a bit about the potential harm, but it feels a bit like it is not being properly acknowledged. I work as a consultant paediatrician, and I still do clinics to maintain my registration while being a Member of Parliament. I frequently see children with headaches, behaviour changes, tiredness and exhaustion. They come to clinic with parents very worried that their child has something clinically seriously wrong with them. Then we find that they are clinically well, thankfully, but that they are spending hours and hours and hours on social media. You will be aware of Lord Darzi’s report on the state of the health service, which was published at the end of 2024. The technical annexe shows graphs that very clearly relate time spent on social media to mental health problems, particularly in girls. I think there are significant issues there. Do you have statistics within your companies on the amount of time children are spending on social media? Can you tell how long someone spends on a platform? Presumably you can, because you can put in restrictions on that time. Presumably, you have statistics on how long children are spending on average on your platforms and what proportion are spending a long time per day. I want an answer from each of you on what figures you have or, if you do not have them to hand, whether you can provide them to the Committee in writing afterwards. Alistair Law: I am happy to go away and see what I can provide to the Committee in writing. The thing that I would most highlight is that we do not want a situation where there is overconsumption. That does not serve us as the right thing to do, but it also does not serve us from a commercial perspective because, as you say, overconsumption risks leading to burnout and people not actually enjoying using the app. Our business objective is to create a healthy and sustainable relationship with users. That is not dissimilar to the experience I had in the TV industry for 10 years, where you want people to return to your channel but you do not want excessive levels. That is why we have designed the experience in the way that we have, with default screentime caps, take-a-break reminders, sleep hour reminders and notification cut-offs, as well as the family parental controls that we can look at. We are alive to the risk. We are focused on working with partners and our dedicated trust and safety team to create an environment—

DJ
Chair66 words

I am sorry, but we need to be brief. You had a specific question from Caroline Johnson, which was: how long are children using social media on your site? Do you have the data, and can you provide it to the Committee? Please could you answer the question. Alistair Law: As I said, I will go away and see what I can share with the Committee.

C
Dr Johnson1 words

Laura?

DJ
Laura Higgins12 words

I will go back and we will confirm in writing for you.

LH
Rebecca Stimson2 words

The same.

RS
Dr Johnson153 words

Rebecca, there has been talk about work with law enforcement. I had an horrific constituency case where a child was being bullied at a really serious level, and the family moved a long distance across the country to escape that. The bully used Instagram and his knowledge of the child’s interests and hobbies to find this child at the other end of the country and then used a profile on Instagram to bully and threaten, and he threatened to burn the family’s house down. The police had very great difficulty finding out from Instagram who was the owner of that profile. Perhaps things have changed in recent times, but if the police and law enforcement came to you and said that some serious crime was being committed using a profile, how long would it take you to provide that police force with the data on which IP and person is using that account?

DJ
Rebecca Stimson130 words

It should be very quick. I am very sorry to hear about that incident. Hopefully it was a while ago. However, if it is an ongoing problem, we have a dedicated law enforcement team. Since the illegal harms duties under the Online Safety Act came into force, we have had a dedicated illegal harms reporting channel, and we already had that for law enforcement who are on board. It should be very quick. It sounds like it was not in that instance. I am very sorry, because that sounds like a really awful case. However, if it is something that is ongoing, I would be happy to talk to you about it as well. It should be instant, and we aim for it to be as quick as possible.

RS
Manuela PerteghellaLiberal DemocratsStratford-on-Avon43 words

Ofcom’s data shows that 23% of under-13s in the UK have a TikTok account, 14% have an Instagram profile and 19% have a Facebook profile, despite being under the minimum age restriction. Your age assurance or verification measures are not working, are they?

Rebecca Stimson178 words

As we have said, there is a real problem with age assurance. We take a huge range of steps to try to make it as accurate as possible. As I mentioned in my response to your colleagues, we think that greater involvement from app stores, which are linked to the bill payer of the device, would be really helpful, because we can take that signal and use it through our systems. I do know that the data in the Ofcom report you are talking about was reported by parents, which suggests that parents may be helping their children get online or are aware of it. There is also some evidence that we saw recently that, if there were a ban on social media, half of parents would put their children back on it. I think this is a collective challenge. We absolutely have a responsibility—the OSA includes requirements around highly effective age assurance. There is a limit to that technology at the moment, which is, I think, what you are calling out in your question.

RS
Manuela PerteghellaLiberal DemocratsStratford-on-Avon274 words

Alistair, we have heard that children as young as five use your livestreaming facilities. We also know from the OCCIT report that these livestream files end up shared by sex offenders. What are you doing about it? Alistair Law: As I said on that specific one, we have met and are continuing to talk to the police about some of those examples—some of which actually took place off platform. To answer your question on under-13s, we report quarterly on the number of under-13 accounts that we remove; the figure is published in our transparency report every quarter. We recognise it as a thing that people will try to circumvent. The collection of measures that we have available to take against it is one thing. The other thing, returning to the point I made earlier, is that even if people are trying to circumvent, we will use AI models and AI signals to identify how old they are. However, in the first instance we are focused on creating a safe environment for everyone. That means that when you join TikTok, you are entered into an under-18 content experience, regardless of the age you say you are. We will obviously then look to work quickly to identify whether somebody has been trying to circumvent our approach, and we report on that on a quarterly basis.

Laura, we also heard some serious concerns regarding Roblox. There have been accusations that it is possible for people to create games depicting the mass shootings at Sandy Hook and Columbine, and even recreating Epstein island. What are you doing about that?

Laura Higgins169 words

Again, we have no space for those experiences and games on our platform. They are strictly prohibited. We use technology to scan for game names, so they should be flagged up by our filters. On the content within the experiences themselves, before a game can be published, it goes through some automated moderation processes, where we scan for things like audio and video files in the experience. What we have found with a lot of those experiences is that they did not originally start out with that intent—they were not designed and published on the platform as those experiences—but they have been modified later to re-create some of those scenarios. But we absolutely have no space for that. We are constantly monitoring and taking down those accounts. They are strictly prohibited, as I say. It is unfortunate that any of them have ever appeared. We are working on new technologies all the time to detect them and prevent them from being uploaded in the first instance.

LH
Manuela PerteghellaLiberal DemocratsStratford-on-Avon20 words

The game developers say that 30% of games flagged up for concerns get accepted. How do you respond to that?

Laura Higgins49 words

That is a very serious allegation. I cannot comment, because I have certainly not seen those statistics. We believe that our systems are robust in checking content before it is uploaded and, on the rare occasion when there is something really bad, being very swift in taking it down.

LH
Peter SwallowLabour PartyBracknell233 words

The way that Instagram, TikTok and Facebook work is that users upload content. We have talked a lot about that content today. There are various ways to use the apps but, broadly speaking, other users are then fed that content through their “For you” page, or the Instagram or Facebook feeds. What those users are fed is driven by an algorithm. We know that algorithms are designed to promote the content that will drive the most engagement, and that often includes negative engagement. Speaking to young people in my constituency, they are aware of that—they are aware that the content they are seeing often has a negative effect on their mental health, and they have told me they have often felt ashamed of some of the content they have been fed. They felt unable to talk to a trusted adult about what they have seen, and they are concerned that the addictive nature of the algorithm has left them spending far longer on social media than perhaps they intended. When they loaded up the app, they intended just to check it quickly, after a long day at school or whatever the case may be. I am certain that your companies have audited your platforms and the effects of your algorithms on young people—on compulsive use, displacement of sleep or emotional distress. Will you share the results of those audits with this Committee?

Rebecca Stimson63 words

Before I answer your direct question, I should say that our algorithm is not designed to promote the virality of harmful content; it is quite the opposite. It is designed so that everything you get on Instagram and Facebook is personalised to you, so mostly your friends and family, your interests and the things you follow. It does work in a different way—

RS
Peter SwallowLabour PartyBracknell83 words

Rebecca, I am going to stop you there. I am a Labour MP and, broadly speaking, on the centre left of politics. Whenever I go to my Facebook feed, all I am served, endlessly, is far-right content. That is not designed for me—it is designed to get my engagement, but it is not designed for me. Obviously, that is the experience of a 33-year-old man, not a child, but I do not accept that your algorithm shares content that is designed for people—

Caroline VoadenLiberal DemocratsSouth Devon69 words

May I contribute? You say it is mostly friends and family, but when I go on Facebook—which I do not do very often any more—on my personal Facebook page, all I get is adverts, adverts, adverts. The adverts are designed for a middle-aged woman—beauty products, health products, menopause stuff. It is relentless and I barely see anything from someone I actually know any more. It is just not there.

Rebecca Stimson400 words

To go back to your question, we publish transparency reports about how successful we are at finding and removing violating content. This is all algorithmically driven; it is not just about the recommender system, to the comment you two were making, but also about the safety features. To your question directly, we would be happy to share information with you about how the algorithm works and the kind of results it is achieving. Alistair Law: Again, it starts with the content moderation element. We are clear, under our community guidelines, that hate, harassment and hateful ideology are prohibited, so the starting point should be—I am happy to pick up with you on any examples—that the kind of extremist content you are talking about should not be present. When it comes to the algorithm itself, TikTok is a place for discovery. One of the things that you see commonly if you are on the “For you” feed is that you will go through popular videos, but you will also go through videos that have had zero views and very few likes, and the idea is that we are comparatively content-agnostic. When you first join TikTok, you are served a series of the most popular videos, and depending on how you engage with that—if you like it, watch it or share it—we will say, “Which other users have exhibited a similar liking for videos to yours? They are clustered over here. Here’s a series of videos that they also like,” and we will then present that to you. It is content-agnostic. I have two final points. In terms of what we do to try to disperse people’s activity, we recognise that we do not want people to go down any kind of rabbit hole. So one of the things that we will additionally do is insert different kinds of content—content that you might not have expressed an interest in, but that might surprise you or that you might discover. The final thing that we also do is allow you to reset your algorithm at any given point—I think it takes two, maybe three, clicks to wipe the slate clean. Our algorithm is designed to provide an enjoyable experience—an experience that gives you content that you value and that can surprise and excite you—and that is how we set the platform up.

RS
Peter SwallowLabour PartyBracknell31 words

Rebecca, how does the Facebook algorithm rate an “angry” reaction or a “laughing” reaction compared with a “like” reaction in the way that it then feeds into the algorithm that curates?

Rebecca Stimson27 words

We take into account a number of signals to work it out. Obviously, first and foremost, anything that violates our community standards is removed before it is—

RS
Peter SwallowLabour PartyBracknell42 words

To be clear, because I do not want to go down this track again, I am not talking about content that violates your guidelines; I am talking about content that falls well within them but nevertheless creates a negative experience for young people.

Rebecca Stimson87 words

Where we get something like an “angry” reaction, or people dismiss something or report it—anything like that which might suggest that it is not a high-quality, helpful, pleasant piece of content—the algorithm is designed to down-rank it. It also down-ranks content that has been fact-checked or screened for some reason. Multiple signals from the multiple different ways we and our users interact with that content mean that it would be down-ranked. When we down-rank something, it means that it would be seen by significantly fewer people.

RS
Peter SwallowLabour PartyBracknell22 words

So, if someone puts a negative emoji reaction on a piece of content, that is scored against it in terms of engagement?

Rebecca Stimson1 words

Correct.

RS
Peter SwallowLabour PartyBracknell35 words

That is helpful to know. The Government’s consultation has asked respondents whether personalised algorithms—algorithms that are designed to target content, in this case, specifically at young users—should be age restricted. Would you support that move?

Rebecca Stimson270 words

We think the personalised algorithm is one of the reasons people come to our platforms. They want to see the content that they want to see—friends, family and the things they follow. The personalised algorithm is also a really important part of how we keep people safe—for example, by knowing who you are, how old you are and other features about you, so that we make sure as much as possible that you do not see harmful content. As Ali said, we also allow both parents and anyone else to completely reset the algorithm and the parameters on which we may be targeting content at you, if it is not what you want. We do not think it is the most fruitful area of discussion in the current consultation. We think a more interesting area is some of the features that we have been talking about in the conversation today. Alistair Law: The personalised algorithm has huge levels of benefit in terms of showing you content that is relevant to you. It is also, at a slightly removed point, the way that lots of online services operate. If you are a subscriber to a newspaper, a magazine or anything like that, you are still shown content that is personalised to your preferences. We think our responsibility is primarily to keep people safe with our content moderation. Then we make sure that there are appropriate dispersals, so that a level of personalisation does not get you locked into particular types of content, and there are still dispersal techniques and injections of new types of content that will challenge you and surprise you.

RS
Peter SwallowLabour PartyBracknell98 words

Do you make money out of having a personalised algorithm? Alistair Law: We do not serve personalised targeted ads to our younger users. Obviously, as you get older and over the age of 18, they will be personalised. But you cannot raise revenue as a TikTok seller if you are under 16, so we make a very negligible amount of money from under-16s.

One last question on this point, with brief answers appreciated. You both referred to the way your algorithms support your age-verification process. What level of quality assurance have you done on that? How accurate is that process?

Rebecca Stimson7 words

Do you mean the age-assurance process specifically?

RS
Peter SwallowLabour PartyBracknell23 words

Specifically the process of looking at what users are engaging with, to check that they are the age that they say they are.

Rebecca Stimson21 words

It is part of how we look at this. As we have been saying, we also use things like Yoti facial recognition—

RS
Peter SwallowLabour PartyBracknell4 words

How accurate is that?

Rebecca Stimson79 words

Telling the difference between over and under 18, it is very accurate. If you are trying to tell a 13-year-old from a 14-year-old, the accuracy does drop, because that is just more difficult to do. It is an ongoing thing we are working on. As we say, it is one of the reasons we think that verifying at device level and app store level would be incredibly helpful additions to the work that we are doing on age assurance.

RS
Peter SwallowLabour PartyBracknell123 words

Okay, so it is not accurate enough. Mr Law? Alistair Law: I would share what Rebecca said in terms of the differential between over 18 and under 18—accuracy there is high. We obviously share information with Ofcom under the Online Safety Act; the efficacy of these techniques is an area it is looking at closely. From an under-13 perspective, we have a specific dedicated AI model—the under-13 model—which we launched in the UK before any other jurisdiction, and which is designed to try to pinpoint signals that might give that differentiation. It is an area we continue to invest in, and it is an area we are talking to Ofcom about.

Again, I note that you are not putting a specific number on it.

Caroline VoadenLiberal DemocratsSouth Devon30 words

Can you tell the Committee whether any of your engineering or product teams are rewarded, either financially or professionally, for increasing time spent or session length among users, including children?

Rebecca Stimson42 words

That is not how our engineering teams are goaled. In fact, one principle we have guiding the work that we do at the company is to ensure that our users have a safe and enjoyable experience. It is not measured by time.

RS
Caroline VoadenLiberal DemocratsSouth Devon15 words

So there is no incentive to increase the amount of time someone spends on Instagram/Meta?

Rebecca Stimson147 words

No, we reset our algorithm entirely a few years ago to prioritise time well spent, rather than just time as a metric, because we recognise that that is not actually the right way to look at how people are using our platform. Alistair Law: It goes back to my answer to Chris earlier. We look for user growth as a whole, and that is not based on individual sessions’ worth of time spent. It is based on the overall experience people have. One good thing we have at TikTok is that you can look at anybody’s key objectives, right up to and including the CEO’s, and see what they are being benchmarked against. The first one he has is that safety is the north star: becoming the most trusted and safe platform that there is. That is really important guidance for all our work.

RS
Caroline VoadenLiberal DemocratsSouth Devon328 words

Looking at addiction, Rebecca, Meta was recently found by a Los Angeles court to have deliberately designed addictive products that harmed a young user. We know she is not alone, because there are thousands of other court cases now in the pipeline. Research shows that reward schedules are especially potent for young people because they do not have the cognitive ability to resist the dopamine hits. All three of your apps contain features such as infinite scrolling, autoplay and algorithms that reinforce addiction. On Roblox, you are adding a new function with a reward scheme. Could you tell us, briefly and succinctly, what exactly each of you is doing right now to reduce the addictive nature of your platforms? Alistair Law: I do not think we accept the premise that there is an inherent addictiveness. The measures we are taking are the ones I have set out already. Something like the screentime cap that exists as a default hour for anybody under the age of 16 acts as a way to try to ground people in the particular experience. Yes, it can be varied; yes, there are other experiences. There is something like sleep hours, which we introduced last year. We have introduced updates to family pairing. It is a constant level of ongoing review of the evidence, working with partners and ensuring that we understand the experiences of people, so that we can build the tools that give them agency and a balanced environment.

But you settled that case out of court, so you did not actually go into the courtroom. Alistair Law: It was a US litigation process, and my understanding is that it is going to appeal. There is more to come in that area, but it was not a conclusion on TikTok. We will continue to deliver on our responsibility to make sure that users have a safe, balanced and healthy experience on our app.

What are you doing to reduce addiction?

Rebecca Stimson98 words

Just to answer that quickly, first, we are appealing that court case, so we also do not accept the premise that our platforms are addictive—I obviously cannot talk about that too much. I mentioned the algorithm reset that we did, which led to 50 million fewer hours being spent online. Parents can set a 15-minute-a-day limit for total time spent on our apps, and we have the interruptions that we have talked about. We do not intend for our platforms to be overconsumed by anyone and have introduced a whole range of ways to try to prevent that.

RS
Caroline VoadenLiberal DemocratsSouth Devon2 words

And Laura?

Laura Higgins101 words

We do not actually build the algorithmic systems that maximise time on platform: we do not have infinite scroll, autoplay, “follow accounts” and so on, which I know are all part of this debate. For us, the main motivation for the platform and the people who create on it is to have fun and to build fun experiences where people can come and play together. Again, we will continue to help parents to have more autonomy in deciding what is right for the youngest children on the platform, and to ensure that wellbeing is at the heart of everything in Roblox.

LH
Manuela PerteghellaLiberal DemocratsStratford-on-Avon51 words

The European Commission’s preliminary findings, published in October last year, said that Meta does not have sufficient mechanisms for children and parents to report illegal content. As we have heard, examples of that include sex offenders, suicide fora, violent pornography, misogyny and so on. How are you tackling these concerns?

Rebecca Stimson48 words

Absolutely everything, on any of our platforms, can be reported to us: there are three dots at the top of the page, and it is pretty straightforward to do. We have also built a dedicated illegal harms reporting channel for our platforms following the introduction of the OSA.

RS
Manuela PerteghellaLiberal DemocratsStratford-on-Avon17 words

Say I try to report illegal content: I do not understand what happens after I report it.

Rebecca Stimson70 words

What happens after depends a little on what the content is. It could go direct to dedicated law enforcement specialists. It can go to specialist teams that look at things such as, as you mentioned, child sexual abuse material. It could go through a moderator or human reviewer working generally across our platform; it could also go through our automated system. It depends a little on what has been reported.

RS
Manuela PerteghellaLiberal DemocratsStratford-on-Avon56 words

Thank you. In November 2025, the NSPCC recommended that tech companies take steps such as using metadata to identify suspicious behaviour and restricting adults’ ability to search for and communicate with child accounts. Are you—and I want to hear from Laura as well—actively implementing such processes? If so, what is the timeframe for completing that implementation?

Rebecca Stimson185 words

We absolutely do use metadata to pick up where someone might be acting suspiciously. For example, if an adult account is sending lots of follow or message requests to younger users, that kind of behaviour will get detected and can be looked at. As I said, since 2024, younger people aged between 13 and 18 default to having teen accounts. They cannot be discovered by anyone or contacted by anyone that they do not follow, and who they follow is approved and seen by their parents. Alistair Law: It is very similar for us, with the added element of direct messaging not being available to under-16s. As Laura mentioned earlier, we have increased the amount that we work together as an industry to make sure that signals are shared between platforms, so that we can align where an issue might be cross-platform. This is another area where it is absolutely critical to work with law enforcement, and to make sure that we have information from them on things from an outside or off-platform perspective that might be relevant to an assessment that we do on-platform.

RS
Laura Higgins56 words

It is not possible for adults to communicate with children on the platform, due to our facial age estimation, which buckets people into similar age groups. An adult can only talk with a child if they become trusted friends, which can happen only with parental consent. That is the first safeguard we have in that space.

LH
Chair27 words

I am really sorry to interrupt. You say that it is not possible, but there is evidence that it happens. How do you reconcile those two statements?

C
Laura Higgins150 words

Any incident where a child has been harmed in any way because of contact through Roblox is absolutely awful, and we really are truly, truly sorry that anything like that has happened. We rolled out facial age estimation in January this year, so we are now quite confident that we are preventing contact on our platform between adults and children. We are also really focused on preventing off-platforming, which, as Ali mentioned, can sometimes be where the harm actually happens. We have a PII classifier that detects and blocks the sharing of personal information, to try to prevent children from being taken from Roblox into other spaces. As Ali said, we work very closely with law enforcement and our partners through Lantern. We continue to use our AI tooling, such as Sentinel, to detect any grooming-type language or behaviour on the platform, which we then proactively report to law enforcement.

LH
Peter SwallowLabour PartyBracknell143 words

Like many Members of Parliament, I have been speaking to constituents about social media for under-16s as part of the Government’s consultation on this. I was contacted by a number of constituents who have concerns about bullying on social media, including Amy, who works as a social worker. She says that she has worked with children and their families when they have been contacted and groomed via social media platforms. She says: “I don’t think parents understand the risks of social media enough. Social media gives children access to each other 24 hours a day. Children have no safe place anymore.” We know that bullying takes place in the real world—offline—but it has also been exacerbated by social media and online spaces. When bullying occurs on your platforms, how does it manifest? What features are most typically used to drive that bullying behaviour?

Rebecca Stimson84 words

It is probably easier to answer by saying that we have built the features in response to where we see it manifest. That can be, for example, being tagged or having comments around you. Someone might post about you to try to bully you. They might also comment on your posts to try to bully you. We have put in a whole range of features so that if you unfollow someone, or block and report someone, they will not be able to do that.

RS
Peter SwallowLabour PartyBracknell53 words

Just on that point, according to Childline, if I block someone on Instagram, they can still find my profile. Therefore, the advice is that I change my username. Is that the case? If so, why is the advice that the victim should change something about the way they are responding on your platform?

Rebecca Stimson190 words

I don’t know when that statement from Childline is from, but that is no longer the case. Since 2024, teen accounts have been private by default. They are not discoverable and cannot be contacted by anyone who is not following them. The act of not following someone—not being connected to them—means that they will not be able to do any of the things that I was just talking about. The other thing is that we default all our teen accounts into the strictest setting for hidden words. We have a whole range of terminology that tends to be associated with bullying comments. Where the systems detect that, it is immediately moved into a different inbox. The victim of those comments never has to see them. They can either mass-delete them or mass-report them. As you rightly point out, asking the victim to deal with some of that is clearly not always the right approach, so hidden words is on for all under-18 accounts to pick up bullying remarks. As I say, those are moved away, and the victim can deal with them en masse, rather than having to experience those comments.

RS
Peter SwallowLabour PartyBracknell27 words

And when bullying is detected, either automatically or because it is reported by the victim, what is the typical response time, and what feedback do they receive?

Rebecca Stimson71 words

The vast majority of bullying, where we detect it, is found and removed before it is seen by anyone. Our recent community standards report, which is publicly available, shows that bullying comments are under 1% of what is seen online. We are very successful in finding it. When it is reported to us, it will go to either an automated reviewer or a human reviewer, and usually the response times are extremely quick.

RS
Peter SwallowLabour PartyBracknell115 words

Obviously, we are talking about the bullying of children. I am a Member of Parliament, and as you can imagine I sometimes have people on social media say very unkind things about me. That is of course their democratic right. There are things that take place on social media that are deeply unpleasant but are not illegal and might not be sufficiently severe to require you to flag that as breaking protocols. I put it to you that those protocols should be much stricter for a situation involving children. Are your protocols stricter? If a young person receives an abusive comment on their platform by another young person, what action is taken against that bully?

Rebecca Stimson163 words

As I say, they are stricter because I completely agree with what you are saying. As you said, those measures go beyond the law because we are not necessarily talking about illegal harm, as you have also recognised. The default settings are all stricter. Where we find someone who is bullying, the actions will vary depending on the situation. It can be up to and including removing that person’s account. It can be about removing posts. Sometimes when we are dealing with younger people, we try to give them the opportunity to learn. You might not automatically go to removal of their account. We can do things where, if we detect that you are about to say something that we believe could be bullying, we give you a nudge to make you think about it to try and encourage better behaviour. There are a range of sanctions we take against either the content or the individual posting it, depending on what is happening.

RS
Peter SwallowLabour PartyBracknell13 words

How do you inform the parents of both sides when those incidents happen?

Rebecca Stimson57 words

With teen accounts, which we have had for the last couple of years, parents can see everything that is happening on their teen’s account. We also give them notifications if their teens are looking for certain content. At the moment, it is mostly suicide and self-injury content, but we are looking to expand that as well. That could include—

RS
Peter SwallowLabour PartyBracknell31 words

You are looking to expand letting parents know that their children have been involved either as the victim or the perpetrator of bullying, but currently you do not actively inform parents?

Rebecca Stimson28 words

We do in the sense that if you have a parent-managed account, you can see everything that your child is doing. What I meant with my answer was—

RS
Peter SwallowLabour PartyBracknell6 words

That is not actively informing them.

Rebecca Stimson29 words

As I was about to say, if you mean an active notification, at the moment, that is just for suicide and self-harm. But we are looking to expand that.

RS
Peter SwallowLabour PartyBracknell171 words

We are very short on time, so I will very briefly come to you, Ali, on TikTok. Alistair Law: Very quickly, then, there are a lot of similarities. If you are under 16, you are private by default. Your content will not appear in other people’s “For you” feeds. As I have been mentioning, direct messages are not available to you. The idea of people directly contacting you and bullying you via direct messages is simply not applicable. Our community guidelines prohibit bullying and harassment. To your point, they apply a different standard to young people and the general population than they do to democratic discourse with public, elected officials. We also have family pairing, which is our parental control tool. With that, you can see where people have requested or friended or blocked someone. If you have an under-16 who has blocked someone, that will be available in family pairing as well. We are continually looking at additional features in that area.

Laura, I will come to you very briefly.

Laura Higgins168 words

We have chat filtering for in-game chat. That will prevent any kind of harmful language. It does not necessarily have to be swearing; even just mean language will be detected. We are currently rolling out a rephrasing project where, if somebody types something that may be a little bit unkind, it will be reworded using AI into more appropriate language. We work with anti-bullying organisations globally to create resources to support parents—either of children who are bullying somebody else or of victims of bullying—with how they can support their child, as well as with what we do on the platform. Again, we have a number of sanctions depending on the seriousness of the incident. It is reported to us and we will escalate it through our internal channels. We worked with our teen council to create youth-friendly versions of our community standards for this year, to make sure that they are out and available and that we are as clear as possible about what is allowed on the platform.

LH
Chair116 words

Thank you all very much. I will have to draw the session to an end here. There are one or two topics that we did not get to ask you about because of time constraints; we might write to you to follow up after the session, if that is okay. Thank you all very much for being with us this morning and for giving us your evidence.
Witnesses: Professor Pete Etchells, Professor Victoria Goodyear and Professor Amy Orben.

Welcome to the second half of our session on screentime and social media. In this session we will hear from academics who have undertaken research in this field. Will our three witnesses introduce themselves briefly, please?

C
Professor Etchells19 words

Hi everybody, I am Peter Etchells. I am a professor of psychology and science communication at Bath Spa University.

PE
Professor Goodyear40 words

Hi, I am Professor Goodyear. I am professor of physical activity, health and wellbeing at the University of Birmingham. I am the NIHR lead of a large study on smartphone policies and the training director of the ESRC centre for understanding behaviour.

PG
Professor Orben89 words

Hi, I am Professor Amy Orben. I am a research professor at the University of Cambridge. I have directed independent research commissioned by DSIT on how we can achieve better evidence about social media impact and harms. I am also an independent adviser both for the DfE’s science advisory council and for the Australian eSafety Commissioner’s evaluation of the social media minimum age Act. I attend today in my capacity at Cambridge; I am not speaking to represent any positions of those entities.

PO
Chair66 words

Thank you very much. As we know, there are many different sources of funding for research into social media, screentime and their impact on children and young people. It is important that we are transparent about those sources of funding, so could each of you please let the Committee know what the sources of funding were for the research that you will be talking about today?

C
Professor Etchells25 words

None. The last body of funding I had was in 2022 from the British Academy to look at the impacts of monetisation in video games.

PE
Professor Goodyear28 words

I have received funding from the ESRC, Research England, NIHR, Birmingham alumni, the Department for Education, the Wellcome Trust, the Society for Educational Studies and the British Academy.

PG
Professor Orben57 words

I also have a big team, so there is a list. I get a lot of funding from UKRI, the Wellcome Trust, the Huo Family Foundation, the Jacobs Foundation—gosh, I can provide the whole list of funding if you want. It is on my website for all to see. I also get National Institute of Health funding.

PO
Chair28 words

That is great, thank you. I hope you understand why it is important for us to ask that question and to have that information in the public domain.

C
Manuela PerteghellaLiberal DemocratsStratford-on-Avon22 words

My question for the panel is: which types of screentime do you think are particularly harmful and which particularly beneficial for children?

Professor Goodyear113 words

There is a mix of benefits and harms. I can talk to our recent paper, published a month ago, which was co-produced with young people, teachers and families—that is 177 participants. Those participants report that there can be harms associated with phones and social media screentime relating to physical activity and sleep, and that it can cause issues with friendship. They also report that they can experience time-wasting with scrolling, but at the same time, screentime can benefit those participants’ physical activity, and it can be used as a mood-enhancing tool, for relaxation, as a distraction from negative experiences, and for homework. From the perspectives of young people, it is a mixed picture.

PG
Manuela PerteghellaLiberal DemocratsStratford-on-Avon6 words

Does anyone else want to add?

Professor Etchells150 words

Very often we get stuck in the frame of thinking about screentime in what we call an exposure response model, so we ask how much of one particular type of screentime—whatever it is that we are interested in—is linked to an increase or a decrease in a particular outcome. That focuses on harm a lot of the time. The vast majority of studies that take that approach in this area do not show clear or consistent answers, because the rather unhelpful answer is that it depends. It depends on other things that are going on. A more recent way of thinking about this in the research literature takes what we might call an ecosystems approach. It asks how particular types of screentime interact with other factors that we know also have impacts on mental health, sleep and bullying to either increase or reduce the risk.

PE
Manuela PerteghellaLiberal DemocratsStratford-on-Avon2 words

Professor Orben?

Professor Orben5 words

I think that covers it.

PO
Manuela PerteghellaLiberal DemocratsStratford-on-Avon21 words

Is it possible to establish a causal link between screentime and harmful consequences for children? If so, what is the timescale?

Professor Etchells212 words

That is the big question. In theory, it is possible, but I don’t think we are there in terms of the quality of the research that we are able to do at the moment. There are lots of reasons why that is the case. First, research is relatively slow by comparison with the rate at which technology changes. We are still having conversations about whether things like violent video games affect aggression. I think the public conversation moved on from that 10 or 20 years ago, yet we are still trying to discover the best ways to assess that. In some senses, we may be asking the wrong type of question. When we ask a question along the lines of whether x causes y—for example, does a particular type of screentime cause a mental health issue?—the answer will always be, well, yes and no. The research now is starting to move toward thinking about a different formulation of that question. Because the answer is always yes and no and it depends on the specifics and the individual, why do some children and young people thrive online while some others really struggle? I think that might be a more useful question to tackle as we try to understand where the specific problems are.

PE
Manuela PerteghellaLiberal DemocratsStratford-on-Avon11 words

Does anyone else on the panel want to add to that?

Professor Orben192 words

We can also think about different scales of evidence. We have really quite strong evidence that social media is harming individual children. We see that in the legal case in the US; I know that the lawyers have been working on that for many years. We see that in the UK in coroners’ reports stating the involvement of social media-type content in children’s deaths—very severe harms. I think that stands in and of itself. Often, when policy makers ask about that, that is the response, because often that is what we use to understand whether we need to intervene on a malfunctioning product. At the level of averaging use across the whole population of children and young people, I agree with Pete Etchells that it becomes more difficult. There are a lot of different harms that occur. We can talk about sleep, productivity, attention, mental health, wellbeing and physical activity—all have different levels of evidence, but what they share is the fact that we often do not have the data needed to understand the impacts, because companies are not sharing that with us appropriately.

PO
Manuela PerteghellaLiberal DemocratsStratford-on-Avon61 words

Academics often focus on the mixed evidence about the impact of screentime and the lack of a causal link to harms. Why do you think there is such a disconnect between academic research, which as we have heard is going very slowly, and public opinion? Also, do you think there is a case for action now, while the research is ongoing?

Professor Goodyear131 words

As we mentioned, there are different types of evidence. There is case-based evidence and the academic evidence that we have mentioned. Within the academic research, there is also low-quality and high-quality evidence. Causal evidence is often experimental; it takes rigorous study designs to produce it. In clinical trials of different drugs and so on, you spend a lot of time with different committees writing the research, having it peer-reviewed, having protocols evaluated and having input from different groups, such as youth groups. To get causal evidence takes a long time, and the challenge with social media and phones is that they have moved at an exponential pace. The research and the trends in society are trying to tackle the same issue in parallel.

PG
Professor Orben274 words

I really agree with Professor Goodyear. We are trying to hold a billion-dollar industry to account with very feeble and non-strategic R&D support. My team is now 16 people, but the longest contract, beyond me and my administrator, is about three to four years. It is very difficult to produce this high-quality research, even though the public are crying out for it. A lot of the time people say, “We’ve figured out what to do about cars. Cars were causing harm. We have seatbelts, we have MOTs and we have education in schools,” and I completely agree; but we had about 100 years to get from the first car to the 20 millionth car—I can provide the actual numbers—whereas we had two years for that same level of rise with TikTok. As you just heard from the previous panel, we need an order-of-magnitude change in the speed of evidence creation. What I have been arguing over the last year or two is that we should think not about whether or not there is evidence, but about gradations of evidence and about risk. The core thing for you to understand as decision makers is how to weigh the risk of taking a decision now on incomplete evidence, which might not be perfect and might have unintended consequences, against the risk of not taking that decision when harms could be accumulating and when evidence might not arise when you think it will, or the risk of taking another decision. That is probably something you cannot communicate in a media interview, for example, but in conversations like these, we can start talking about those complexities.

PO
Manuela PerteghellaLiberal DemocratsStratford-on-Avon24 words

Thank you. Do particular groups of children have especially negative experiences when using social media and gaming websites, compared with other groups of children?

Professor Goodyear153 words

Yes. We do see variation in experiences. From a qualitative perspective—that is, understanding from young people—we see differences by age, culture, religion and different vulnerabilities. What is common—I have been researching this area for over 10 years, across different ages, from age 11 all the way through to young adults aged 25—is that social media is a very powerful medium and its content is really powerful, and that can bring a lot of benefits, but also harm. The challenge is that a young person can be vulnerable for no particular reason on one day, and it can switch to controlling them. That is the key challenge that we experience in our research: how do we help young people to feel in control of their social media use, so that it is not controlling them? For me, that brings us back to the importance of education, and to thinking about regulation and age-appropriate design.

PG
Professor Etchells181 words

There are a few threads of evidence. Some research from about 10 years ago shows that you can almost predict the types of risks and harms children and young people come across online as a function of pre-existing offline vulnerabilities. For example, children who are young carers are at more risk of cyber-scams, identity theft and things like that. Again, it goes back to understanding the ecosystem in which this situation occurs. Children in those sorts of situations are often at home more than their peers, which might mean they are more online, and they have a relative lack of support or digital literacy education to help them to spot those sorts of things. There is other research which suggests that young people with mental health conditions use social media for support—to share their experiences and seek information or support. In some cases, they may find that asynchronous communication easier. However, it is essentially a random chance as to whether they actually find support or are fed content that makes things worse. That is why we need to do more work.

PE
Manuela PerteghellaLiberal DemocratsStratford-on-Avon8 words

Professor Orben, do you have anything to add?

Professor Orben1 words

No.

PO
Chris VinceLabour PartyHarlow135 words

Thank you, Professor Etchells, for mentioning young carers; that is something I am very passionate about. Like Peter, I have spoken to schools in my constituency about a social media ban, and I have had a mixed response, which is really interesting. What are your views on a potential social media ban for under-16s? To add to that, we heard from the lady from Roblox, who seemed very keen to distance it from being a social media platform, but there are certainly some of the same challenges. I know you have done a lot of work on the addictiveness of computer games. If we were to go down the road of banning social media for under-16s, would you want it to have the same limited scope as in Australia, or would you consider extending that?

Professor Etchells401 words

It is important to understand that there are lots of options on the table here. Even when we just talk about a ban—perhaps a better term is a restriction on opening an account—there are multiple ways to implement that. In Australia, it is for under-16s, but that age is essentially arbitrary in terms of understanding particular effects. However, what does or does not work in Australia—it is important to say that we do not know either way at the moment; we are still waiting for the evidence to come in to understand what is working there—will be different in the UK. One of the key differences is that they do not have GCSEs in Australia, but, at 16, we do. Therefore, if we are talking about implementing an under-16s ban in the UK, what we are actually saying is that we will be introducing this thing that we are very concerned about, in terms of its distractive element and its potential impact on mental health, at one of the critical times in a teenager’s life and education. The age that might work in one area might not necessarily be right for another. I do not think that is something to worry about; it is about thinking, “Well, what are the options open to us?” Do we want to consider a younger restriction, maybe at the age of 13—that lines up with parts of the current age of consent legislation and the Gillick principle—or potentially even 11 or 12, so it comes in at that friction point between primary and secondary school, but then grandfather that in so that it increases year on year until we do get to 16? I think that there are potential advantages to that sort of approach, in that it avoids the cliff edge at age 16 that a lot of people are concerned about. It also gives time for the evidence base to develop; we know that there are ongoing efforts to try to understand the impacts of these sorts of restrictions, but we are just not there yet with actually getting the data, and that would allow time. Critically, it would also allow us to think about what a nationally co-ordinated digital literacy programme looks like, so that, at the age at which we do decide children are allowed access to social media, they are equipped with the tools to be able to navigate that.

PE
Chris VinceLabour PartyHarlow29 words

So you would favour it—you would say that there is evidence to suggest that a restriction, potentially in the way that you have suggested, would be a good thing?

Professor Etchells124 words

Honestly, we do not know either way at the minute. We just do not have the evidence base to understand whether a restriction of this nature would work. That is not me saying that I do not think it would work; I genuinely do not know, and I am very keen to be led by the data on this. Something that I really like about the Australian model is that that is clearly the case, and they are evaluating it. I would like to see whatever we do in the UK take a similar approach and say, “Okay, because of this relative lack of evidence base, what can we do to evaluate this as we go along, to understand what works and what doesn’t?”

PE
Professor Goodyear225 words

I would echo that and say that there is limited evidence at the minute to support delayed access to social media or interventions targeting curfews or behaviours. That is not to say that the evidence is not coming; there are certain case studies, which maybe Professor Orben can talk to in a moment, with different evidence. It is really tricky because it is about getting the right balance between benefits and harms, as we have mentioned, and, importantly, because systems need to be in place for the unintended consequences that might happen if there is a ban, and for the young people who find value in social media for mental health support, community and those sorts of things. It is important that, whatever the age limit is, we support young people to transition into a technology-filled world. Whether that is at 13 or 16, the technology is not going away; we live in a digital society and use lots of different things. Our evidence from schools suggests that they need support in helping and educating young people to be aware of different aspects of phones and social media, and they need support from trusted adults as well. Whether or not there is a ban, I think it is really important that systems, education and support for schools and families are put in place.

PG
Professor Orben405 words

I agree with what has been said. Interestingly, we have not had an experimental study—probably because of the complexities that Professor Goodyear was talking about—that has reduced or removed social media for healthy under-18s to see what happens. So it is very hard for us to judge what will be going on. The question is not just what the outcome would be if this perfect ban were put in place. It is also about the effectiveness of the ban, how people will react to it and what the behaviours will be. We are seeing in Australia that it isn’t a simple on/off switch. That sort of evaluation is why we are looking to Australia with a lot of curiosity and hope that results will be provided quickly. It is also why my team are running the big trial up in Bradford, where we are looking at a social media curfew and a social media reduction, tested across schools. We will randomise year groups either to receive this social media reduction and curfew intervention or to a control condition. Interestingly, when we started thinking about this study, we were open to all possibilities. Having led the big review for Government on what research is missing, I felt that an experimental study was missing, but we need to be able to have young people engage in our study for it to be successful. So the team up in Bradford were fantastic and did a lot of engagement with young people, and we found that if we said, “We want to give you a ban or not a ban,” they all said, “Well, we won’t engage in your study, then. We won’t sign up. We’re not interested. We don’t want to test a ban.” But they were and continue to be very interested in a limit and in the curfew idea. We are getting pretty good engagement with our initial contact with schools and students, so I do think they want help. What we have been finding is that they do know that they are struggling. That is where evidence creation can be critical, but as I was saying, it is about weighing up the risks of waiting versus acting early. That is not something that just the science can speak to, because it is about ethics, morals and what the population wants, so I think it is inherently a political decision as to when and how to act.

PO
Chris VinceLabour PartyHarlow98 words

Just to push back on something Professor Goodyear said about the digital age, that doesn’t necessarily mean that you couldn’t have a social media ban at 16, does it? It just means that whenever the decision is made as to when young people are able to access social media, digital education is important. It wouldn’t necessarily mean that there couldn’t be a ban. It just means that whatever decision is made—Professor Etchells touched on some alternatives—about the restriction of social media, it is important that support and teaching are given at that stage. Would you agree with that?

Professor Goodyear104 words

That is absolutely right. Young people have repeatedly said to us, from the time of my earlier Wellcome Trust studies, that the support provided in school is outdated and irrelevant. It is about catching up with the latest trends and patterns that are going on in young people’s lives. One of the challenges with social media and phones is that they outpace the experiences of many adults when they were children. So it is really difficult for teachers and parents. Alongside the legislation about bans, we really need to be supporting parents and schools to deliver that. So yes, I agree with your point.

PG
Manuela PerteghellaLiberal DemocratsStratford-on-Avon20 words

How would you assess the effectiveness of the Government’s guidance on screentime for children up to the age of five?

Professor Etchells14 words

That is a good question. Is this the new guidance that is in place?

PE
Manuela PerteghellaLiberal DemocratsStratford-on-Avon3 words

Yes, on screentime.

Professor Etchells202 words

It is a great question. One thing is that there are two elements to the guidance. The very media-focused element of it is the limits, and we actually have a history of that. The idea of no screentime under a certain age—I think under the age of two—was the view held by the American Academy of Pediatrics for a long time, but there was no evidence to support that, so they moved away from it. In the report accompanying the guidance, that is similarly the case, so we still do not have a good evidence base to understand whether it will have a meaningful impact on parents’ lives. A more useful element of that guidance is in trying to support parents in understanding the importance of things like face-to-face interactions and prioritising offline moments during the day, and giving practical guidance on that. To answer your question, to evaluate that, we need to talk more to parents and caregivers about how they use screens and how they view them in the home. Is this guidance landing with them? Do they understand it? Are they taking it on board? Are they changing their practices? And we need to look at that over time.

Professor Orben

I was on the working group that helped to feed into this, and a lot of effort was put into doing the review on very quick timescales. Even in the youngest age group—under-twos—there is a recommendation to use screens only if a caregiver is engaging in the activity with the child. For example, it does not ban FaceTiming your grandparents when somebody is with you, or looking at photos together. What Professor Etchells noted is really critical, though. The question of effectiveness is not just about how evidence-based the guidelines are and how the different risks and precautionary approaches have been weighed up, but about how they work in practice. I was around when the CMOs produced their last guidance report, which came out in 2019. It still surprises me how little some of the recommendations have trickled down into popular culture and into the system. Last summer I was asked to comment on a BBC Radio 4 trial that had asked children not to have their phone in their bedroom, which found positive impacts. I felt I had to note that this had been a recommendation from the CMOs since 2019. We have not seen much progress there. However, having the guidance is critical, and it has been put together well.

Manuela Perteghella (Liberal Democrats, Stratford-on-Avon)

My next question is also on the guidance. The Education Policy Institute noted that the guidance did not include advice on managing screentime when children have older siblings and ensuring a consistent approach when children are with other family members. Professor Orben, what guidance should be offered on those issues?

Professor Orben

There has been discussion about that too. It was covered in the evidence report but did not go into the guidance, and we advised only on that evidence report. It is a critical issue, and it would need to be looked at. I would not want to deliver guidance to parents now, in a Committee session, without, for example, going through due consultation processes, because this is not just about weighing up evidence; it is about weighing up risks and how we balance them. Apologies, but I do not think I can do that individually.

Manuela Perteghella (Liberal Democrats, Stratford-on-Avon)

That is fine. Professor Etchells, are there any other omissions from the guidance that you would like to highlight to us?

Professor Etchells

I think the guidance is as good as we can do at the minute, so I am very supportive of it. A lot of it aligns with the RCPCH guidance that was put in place in 2019. That is no longer in place, but the principles are the same. This goes to a broader question of how we support parents and caregivers on screen use. The example of how you deal with situations where older siblings are in the picture is really tricky. There are elements of research and guidance around parenting generally that apply here. One of the big things we need to help with is supporting parents and caregivers to model healthy behaviours around digital tech use, because a lot of us struggle with that. A lot of us have developed poor habits with our technologies. When I talk about modelling healthy behaviours, it is not just about whether you are on your phone when you should not be or when it is not appropriate to be, but about how you talk about that and develop rules within the household so that you are supporting your children. You are demonstrating that it is okay to talk about this—that we do not want to be on our phones sometimes, but sometimes we need to be, and we do not want to be when we should not be. Keeping those lines of communication open is one of the most important things we can do to support children’s journey through digital tech use.

Peter Swallow (Labour Party, Bracknell)

That is a very helpful segue, Professor Etchells, because I wanted to talk about the role of parents in ensuring healthy screentime for their children. We heard in the previous panel about some of the challenges: controls might be available or presented to parents, but parents do not necessarily feel able to use them to full effect. Of course, there are huge pressures on parents in the modern world in balancing work and parenting responsibilities, and there is also a wider question about extended family having caring responsibilities. How reasonable is it to expect parents to be across all of this, and how can we support them to be more across it?

Professor Goodyear

From our research, we see that phones and social media affect broader lifestyle behaviours. Similar to the guidance Professor Orben mentioned, some of the CMO guidance is about those lifestyle behaviours: sleep and physical activity. We know that those activities can be displaced if phones are used late into the evening. In that context, we need to help parents understand how to help young people develop healthy relationships with their phones. Alongside the phone guidance, there is sleep hygiene, safeguarding, how we mitigate conflicts and bullying, and how we open up conversations with young people about those issues. Some young people say to us that speaking to their parents about some content could be particularly damaging, because the parents might say, “We’ll just take your phone or social media away.” Helping families to have those open conversations is really important.

Peter Swallow (Labour Party, Bracknell)

I have spoken to young people about this, and the question I always ask is: “Put your hands up if you’ve seen something distressing online,” and they all put their hands up. I say, “Put your hands up if you spoke to a trusted adult about it,” and none of them do. It is quite shocking. Professor Orben, do you have anything to add?

Professor Orben

I think this is a really difficult space, and it is not appropriate for us simply to put the pressure on children and parents to deal with it. Safety needs to be baked into these platforms by design so that there are more safeguards in place. We in the research community have been quite clear for quite a long time that some features are inappropriate. We also need to consider that there are very different families with very different needs and pressures. You need a certain amount of time to familiarise yourself with all the new features coming out every couple of days or weeks on all the different apps that your child uses. There is a real concern about a differential: children from families where somebody has the time to consider this might be better safeguarded than children from families where somebody has to work multiple jobs, or from single-parent households—naturally, a lot of those parents are trying their best to keep their children safe, but there might be additional difficulties. People have also been discussing this in relation to other technologies: some families will have the money to buy the nicer technology that is safer, and others will not; they will be left in the attention economy, which grabs your attention for profit. That causes a lot of concern, and it is a reason why we cannot just put this on to parents as a solution.

Peter Swallow (Labour Party, Bracknell)

That is a really helpful response. Professor Etchells has already alluded to this, but a survey by Ofcom shows that 52% of children aged between eight and 17 think that their parents’ screentime is too high. I certainly know, as an adult, that my screentime is too high. What measures can be taken to help parents manage their own screentime so that they can be the excellent role models we have been talking about?

Professor Orben

The CMO guidance talks about screen-free mealtimes being really important. That would be one of the first steps, because meals are critical times to be with your child. The modelling is really important. We see that platform designs have become better over time, and maybe some parents will struggle to disengage because there are now quite powerful platforms—

Peter Swallow (Labour Party, Bracknell)

When you say “better”, do you mean less harmful or more addictive?

Professor Orben

Better at harnessing people’s attention.

Peter Swallow (Labour Party, Bracknell)

I thought so—I was just checking.

Professor Orben

Thank you for clarifying. Over time, we have seen increasing concern among young people and young adults about a lack of agency over their own use. You are not alone, and we are not alone, in sometimes feeling that we are exposed to a product that is very hard to disengage from. There have been some really interesting studies. For example, a study in Denmark looking at small design changes in social media—this was in young people, but there have also been very high-quality studies in adults—found that such changes allow people to disengage more quickly. We might enter the app quite habitually; as I think you said in the previous panel, you open it up, you are there, you wanted to do something and then you get stuck. One of the interventions that was tested was a six-second pause, so that when you click on the app, you have to wait for six seconds before you can enter it. Another was that, when you enter the app, you have to say how long you want to spend online; after that time has elapsed, the app closes. In a sample of young adolescents, both of those reduced social media time by about an hour a day—so where they had been on social media for three hours, when they were receiving the intervention they were on it for two hours. Again, we cannot just put it on parents. We need to build in processes to help them.
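For readers who want to see the logic of those two interventions concretely, here is a minimal sketch in Python. It is a hypothetical illustration only, not the Danish study’s implementation: `open_app`, the six-second constant and the simulated feed are all invented for this example.

```python
import time

ENTRY_DELAY_SECONDS = 6  # intervention 1: friction before the feed loads

def show_next_post() -> None:
    """Stand-in for rendering one item of an endless feed."""
    print("...post...")
    time.sleep(1)

def open_app(requested_minutes: float) -> None:
    """Sketch of the two tested design changes: a six-second pause on
    entry, then a user-chosen session budget that closes the app."""
    time.sleep(ENTRY_DELAY_SECONDS)        # wait before entry is allowed
    session_end = time.monotonic() + requested_minutes * 60
    while time.monotonic() < session_end:  # intervention 2: self-set limit
        show_next_post()
    print("The time you chose is up. Closing the app.")

if __name__ == "__main__":
    # The user states up front how long they want to spend online.
    open_app(requested_minutes=0.1)        # roughly six seconds, for demo
```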

Peter Swallow (Labour Party, Bracknell)

That is very helpful. Thank you.

Caroline Voaden (Liberal Democrats, South Devon)

I was going to ask you about the benefits of having a legislative ban on phones in schools. The Government moved on that yesterday. Do you think a statutory ban in legislation will have a positive effect on children’s health and wellbeing?

Professor Goodyear

I led the NIHR-funded evaluation of school phone policies. We evaluated outcomes in 30 different secondary schools across the UK, involving 1,227 pupils. When we compare permissive and restrictive schools, we find no differences in outcomes for mental health, physical activity, sleep, attainment or behaviour. Since I last spoke to the Committee, we have published two further papers that explain why perhaps there are not those differences, and what needs to be considered for this statutory ban to be beneficial in the school context. The first was an analysis of the time that teachers spend managing phones during the school day. We found that teachers in permissive or restrictive schools are spending 100 hours a week, the equivalent of three full-time teachers, managing issues related to phones. That is not just saying, “Put it away,” if pupils have their phone out; it is managing issues that happen on phones outside of school: conflicts, bullying and those sorts of things. There needs to be an approach to policymaking so that, whatever policy is embedded, it reduces the time that teachers spend managing these issues. That is really important in refining those policies. The second point is that we did 40 focus groups with parents, pupils and teachers—177 participants in total. With that work, we put the perspectives together and asked what the trends are: what do people agree on and what do they disagree on? The overarching message is that phone bans will not eliminate some of the issues; they just change how they happen. That has consequences for school and family life. In restrictive schools, for example, we see benefits: there is an increase in in-person interactions, and that improves pupil friendships. But when schools have a phone ban, pupils report compensating outside school for their lack of time on their phone during the school day. That is a challenge for physical activity, sleep and homework. There are also challenges when schools ban phones but then set homework on phones. Parents are saying that children are on their screens more outside of school, which is a real challenge for schools. Thinking about the join-up between inside school and outside school is really important. Some of the phone-based interactions and things that happen outside school are still brought into school, whether or not a phone ban is in place. Conflicts still happen, bullying still happens and pupils are distracted by not being able to access their phones. It is really important that support mechanisms are still in place, that policies are refined and that it is not just a blanket ban. Teachers are trusted by young people—some young people—and they are safeguarding experts who can pick up and spot issues. Banning phones will not diminish the issues; there needs to be support within the school context. Yes, there are different types of bans—pouches, lockers and all those sorts of things—but the wider guidance for schools about how we can help young people to manage their phone use is really important.
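To make concrete what comparing permissive and restrictive schools involves, here is a minimal, hypothetical sketch of comparing one outcome between the two groups. The data and function are invented for illustration; a real evaluation of this kind would use multilevel models that account for pupils being clustered within schools.

```python
import statistics

def compare_groups(permissive_scores, restrictive_scores):
    """Mean difference in one outcome (e.g., a wellbeing score) between
    pupils in permissive and restrictive schools, with a rough 95%
    confidence interval under a normal approximation."""
    m1 = statistics.fmean(permissive_scores)
    m2 = statistics.fmean(restrictive_scores)
    se = (statistics.variance(permissive_scores) / len(permissive_scores)
          + statistics.variance(restrictive_scores) / len(restrictive_scores)) ** 0.5
    diff = m1 - m2
    return diff, (diff - 1.96 * se, diff + 1.96 * se)

# Hypothetical scores; an interval straddling zero is consistent with
# the "no differences in outcomes" finding described above.
diff, ci = compare_groups([62, 70, 58, 66, 71], [64, 69, 60, 63, 72])
print(f"mean difference {diff:.1f}, 95% CI {ci[0]:.1f} to {ci[1]:.1f}")
```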

Caroline Voaden (Liberal Democrats, South Devon)

I will come back to you on something you said in a minute, but let’s go to Professor Etchells.

Professor Etchells

I agree with that. As with the conversation around social media bans, when we talk about different alternatives, they are not mutually exclusive; we should be considering lots of things at the same time. One of the big factors in all these spaces is the social norms that have developed around phone use in schools, or social media use beyond them. A lot of work needs to be done to help children and young people understand that maybe some of those social norms do not need to be in place, and on how we might shift them. That will hopefully help with the situation Professor Goodyear talked about, where a lot of what is going on—certainly, anecdotally, in primary schools as you go towards years 5 and 6—relates to dramas that happen on WhatsApp after school and are brought to school leaders by parents who want them to do something about it. That is beyond the remit of a school, but there is an educational element there. We have qualitative work as well around the impact of phone use on sleep. Again, a lot of it is about the social norms. There are expectations that teenagers should always be available and should immediately respond to things on social media or by text. It is those sorts of things that really impact sleep and then spill over into the next day at school. Work around media literacy, algorithmic literacy and digital literacy is really important.

Professor Orben

I am not an expert on schools, but what I agree with the panel on is the need for a multi-pronged approach. With whatever decision we now make as a country, especially if we want our children to thrive more, a school phone ban will be just one of many things we need to do around online spaces. As they have said, children do use phones outside of school and that will need to be addressed.

Caroline Voaden (Liberal Democrats, South Devon)

I will come back to something Professor Goodyear said about children using their phone out of school more when it is banned in school. That is interesting, because I have spoken to lots of schools that have pretty strict phone bans in place, and a couple where pupils are not even allowed to bring their phones into school—it is just, “Leave it at home and if you need to contact your parents use an old-fashioned dumb phone,” which seems to work fine. Those schools have said that phone usage has dropped off massively because pupils are so used to not having their phones around. They just do not have their phones with them. They do not have them on the bus or when they are commuting. They are talking to one another as friends, and their dependence on phones has dropped. Also, the evidence seems to show that where there are lots of secondary schools with a ban, children are not getting a phone at 10 or 11 in preparation for going up to secondary school. I hope you would agree that is a good thing because, first, you are not having the WhatsApp dramas in year 6, which I have also heard about from primary school headteachers, who could really do without them; and secondly, you are gaining a couple of extra crucial years at 11, 12 and 13 where parents are saying, “Well, actually, maybe you don’t need a smartphone.” Do you think that that is a likely outcome and a positive thing?

Professor Etchells

Yes, I think so. It is definitely a positive thing and it goes back to that idea that you are changing the social norms and the expectations around them. I think that is a really healthy, positive thing to do.

Caroline Voaden (Liberal Democrats, South Devon)

It is about the peer pressure not to have a phone rather than to have one.

Professor Etchells

Yes. Where we have to be careful in changing those social norms is whether we introduce unintended negative consequences along the way. One of the potential issues around, for lack of a better term, blanket phone bans in schools is how we are going to deal with exemptions. Exemptions will be needed for young carers, potentially, or for children who, very reasonably, need to use phones to monitor medical needs. At face value, it is easy to put in an exemption to support those children, but what sort of social situation does it create for them in the school, and what expectations does it set around who can use that phone? So it is not simple and straightforward—nothing ever is—but over the long term we need to look at how we are changing those perceptions and whether they are moving in the direction we want.

Caroline Voaden (Liberal Democrats, South Devon)

Would you like to add anything, Professor Goodyear?

Professor Goodyear

I would just say that schools are complicated; not every school is the same. Some parents talk about the importance of providing their children with a phone if they have to travel particularly long distances to and from school. It is about the wider picture of what the school phone policy is about. Is it about the consequences if you break the rules? Is it about consistency across the teachers who enforce it? Is it about the wider ethos and vision of a school? We know from Canada, for instance, that schools that promote physical activity tend to see more physical activity and less screentime. So it is about the bigger picture, and every school is different and has its own values and rules. Having this regulation across schools is important, but it is also about thinking, “In your school, what is important to your pupils? What are the values? What is the make-up of your students and parents? How can we design and put support systems in place?”, so that it is not just a statutory policy but something relevant to each individual school and its make-up of students.

Chair

I want to ask about AI chatbots. There is a question in the Government’s consultation about concerns that children could become dependent on AI chatbots, treating them as friends or partners and not necessarily understanding the distinction between an AI chatbot and a real person. To what extent do you understand the risk posed by AI chatbots, and do you think that this should be an area where the Government consider further regulation to protect children?

Professor Orben

This is exactly one of those questions about weighing up different risks, about how quickly technology moves and about how difficult it is to understand even what is going on on the ground. We have just done a survey with students in Bradford, where we have seen much higher AI use for education—we cannot disentangle whether that is in school or for homework—as well as a small proportion of young people using it for support or to talk through difficult social situations. I am always surprised by how slowly we move as a research field, but that is because we have to go through long-term funding cycles. A lot of the R&D funding is very retrospective: I am reviewing a lot of grants on social media when I think we should be doing a lot more work on AI. I welcome this being in the consultation. We need to understand whether it is one of those areas where we take a precautionary approach. Naturally, we will probably hear that some young people use it for some sort of gain, but we will also hear accumulating horror stories about individuals. Evidence will play one role there, but it needs to be a societal conversation. The other critical point is that we are dealing with an industry that is very closed off and does not share findings or data in an open way. I worry about a negative cycle: because the industry does not share data with independent researchers, we cannot do good-quality studies, which means evidence of harms takes longer to produce, or might even be impossible to get by the time that action needs to be taken. That maybe lets companies off the hook, so we need to really think about this. I welcome it being in the consultation; it needs to be considered, and we need a considered policy response.

Professor Goodyear

I would agree. We need to be future-proofing for AI. Some lessons have been learned from the consultations on social media that we have spoken about, going back to 2018 and 2019. We need to be future-proofing so that we are not in this same situation with AI in 10 years’ time.

Chair

Finally, there are a couple of quite unusual features of the debate that we are having nationally. We are all representatives: we are elected by our constituents, and we listen very carefully to them. MPs across the country hear overwhelmingly from their constituents that social media and screentime are causing harm to young people. That is what we hear, and the pressure for action feels very urgent. It would be hard to find a Member of Parliament in the House of Commons who is not experiencing that pressure to act and, often, seeing those impacts and concerns in their own life, with their own children and so on. That sits in sharp contrast to the view from academic researchers, which, for a long period of time, has consistently been, “We can’t tell, we can’t prove a causal link, the evidence isn’t definitive and we can’t give you an answer.” Governments have placed a great deal of emphasis on the view from academics, and the position that definitive evidence and causal links are harder to find has been used by successive Governments as a reason to delay taking action. How does that sit with you? Do you feel the weight of responsibility? Do you think that you are perhaps not always asking the right questions in your research? Are there features of the way that your research is commissioned, funded and regarded by Governments where changes could help society to get to better solutions faster? That is what our constituents are asking us to do.

Professor Etchells

It is very frustrating, and it has been like this for a long time. Ten years ago, I wrote an open letter saying that we need more funding in this area before we develop robust Government policy and guidelines for parents. This goes back to the question around funding in this space. There are data out there that we could use to answer some of these questions quickly, but the companies are reluctant to share them with independent researchers. At the same time, researchers are extremely cagey about engaging with the companies on a direct basis because of the perceived, or actual, conflict of interest that might arise. For a long time, a lot of researchers in this area have argued that we need an independent third party to mediate this process. It might be a Government or a charitable foundation that brings the right people into the room, so that we can figure out the privacy issues, proprietary software issues and conflict of interest issues, and actually get the data that we need to answer some of these questions. Without that, what has happened over the past 10 to 15 years is that we as researchers have had to resort to poorer methods. A lot of the research in this area relies on self-report questionnaire-based studies. People will give you an answer if you ask them how much time they have spent on TikTok, but we simply do not know whether that number is accurate in any way. We need more objective data, and the mechanisms to share it responsibly in a way that protects the researchers involved.

Professor Orben

We work extremely hard with what we have, and I represent a community that understands the weight of this responsibility on us. I share with others that these are the things we think about when going to sleep and waking up. It often feels like we are living in a dual reality. It is my role to represent and accurately give evidence about what the evidence base is. That is my job, but so is communicating, as I said, the risks and how we can weigh things up, so that a lack of evidence is not equated with evidence of absence. It is vital that this work is undertaken. We have been calling for better access to social media data for years, as Professor Etchells has said. We called for it in the Online Safety Act when it was still being considered. We were promised it for many months, and then it was cut. It was replaced with the decision that Ofcom would produce a report about how we can deliver data access for researchers. That report was delivered in July last year; we have heard nothing official from DSIT on how it is responding. When the raw materials are not there, we cannot produce what is required in the timescales required. We sometimes need to think about evidence creation as sitting in a different system from the one we have currently, where it is done in universities and independent research teams. More and more I think, “What are the other consumer products that could be harmful to a whole population and whose risks we as a society manage?”, and I think of food. The Food Standards Agency has solid funding. It tests foodstuffs for potential harm and potential pathogens. It is done in a really strategic way, and we maybe need to move towards a system like that: one that is proactive, thinks about risks as they emerge and has a standing facility to do that. Those are the things I wrote about in my report to DSIT, which we submitted in May last year, and we have seen little to no action. As I have the microphone, I will add, on conflicts of interest, that it was important that you raised the question about research funding at the beginning of our evidence session. I would welcome it if we could all put in place a formal submission about our funding, and not just our research funding, which I try to disclose completely. I have just remembered that I had UK Government funding and funding from the Nathoo family trust. We need to disclose not just funding but also things like what committees we are on and what sort of travel we do. There are gold-standard disclosures that I have on my website. I was just reflecting during this session that such disclosures are really important, so I will submit mine to you after this Committee. That will be the complete version for you, because I did not have the whole list with me to talk you through at the beginning.

Chair

That is appreciated. Just briefly, because we are quite over time, Professor Goodyear, do you have anything to add?

Professor Goodyear

I would just say that, in terms of the evidence, our responsibility is to provide good, high-quality evidence, and that takes time: time for safeguarding, ethical review, protocol development and evaluation. My research is funded by the NIHR, which is funded by the Department of Health and Social Care. My responsibility is to ask questions about school phone policies, for example. That research started in 2019, with putting the application together, and the publication was in 2025. Large national studies that go through rigorous protocol and review take a long time. There was Covid in the middle, so that was a slight delay. But if you want us to present high-quality evidence, we have procedures and policies that we follow to ensure that.

Chair

Thank you very much. It has been very interesting for us to hear from you today. Consistent with what Professor Orben has said, if there is anything that you did not have the time to convey or that you think the Committee should know about, please do write to us afterwards. We would welcome that very much. That brings our evidence session today to a close.
