Australia Banned Kids From Social Media. Now It Has Advice for the U.S.
As countries around the world race to counter the tech industry’s ever-expanding influence, one issue in particular has become a major rallying cry: protecting children online. And Australia just put itself out front.
In late November, Australia became the first country in the world to ban minors under age 16 from social media. Oddly enough, it’s an American who will help enforce the unprecedented legislation: Julie Inman Grant, who has served as Australia’s eSafety Commissioner since 2017.
The U.S. Congress is also considering legislation to protect kids from harmful content online. But the Kids Online Safety Act, which has bipartisan support from such far-apart backers as Elon Musk and Senate Majority Leader Chuck Schumer, failed to pass this year amid opposition from top House Republicans, including Speaker Mike Johnson, who fear it will lead to the censorship of conservatives.
The POLITICO Tech podcast recently interviewed Inman Grant about how she plans to implement the first-of-its-kind law — and what advice she has for the United States. “For too long, the burden for safety has fallen on the parents themselves or the children, rather than the platforms,” she said. “So the way that the government designed this is to put the burden on platforms.”
The following conversation has been edited for length and clarity. Listen to the full interview on POLITICO Tech.
You hear all the time from parents and from kids experiencing some type of harm online. Connect that work to this ban keeping kids under 16 off of social media. Why is that a necessary step?
The government felt that the safety changes and improvements across the technology industry had been incremental rather than monumental. And I think there’s been a global movement that, frankly, started with Jonathan Haidt’s book, The Anxious Generation. The wife of one of our premiers, who is like a governor, read the book, and he came out and said, I want parental consent for kids at the age of 14. And then another state premier said, Well, I want to do this, but ban kids at the age of 16. And then every state and territory had a different plan. And then what happened was the head of the opposition party, Peter Dutton, said, We have an upcoming election. If the Coalition comes in, we will institute a ban in 100 days. And then News Corp, which is a very powerful force in the Australian media landscape, was running a campaign around keeping kids safe. And so there was huge political momentum. I think the Prime Minister felt social media is not showing the social responsibility we think it needs to, that there’s a social license [and] we need to protect kids. What they’ve done to date is not enough. I guess I’d also say I don’t really refer to it as a social media ban.
How do you refer to it?
A social media restriction bill. There are a lot of exemptions, which I think are important, and what we’ll be working through with our minister and the government now is, what do they consider social media? I mean, kids aren’t posting to Facebook and Instagram like they were 10 years ago. There’s ephemeral media, like in Snap. There’s short-form videos. A lot of the messaging apps that we see today are more closely resembling social media sites. If you look at WhatsApp, it’s got stories and it’s got channels. And then online gaming platforms, of course, are facilitating social interaction through chat and through live play. So we’ve got to figure out who’s in and out in terms of who will be impacted, and then what the requirement will be once we determine that.
Anyone who has been around teens knows they can find a workaround to a lot of rules, right? And so, how do you verify age? How do you actually make sure this works?
Well, I’ve been working on age verification in one way, shape or form since 2008. I think you probably remember that the Harvard Berkman Center had an Internet Safety Technical Task Force where they looked at this. And I distinctly remember a quote from Richard Blumenthal, the senator today, who was the [Connecticut] attorney general then: Well, if we can put a man on the moon, we can certainly verify the age of a child, right? And on one hand, that made me chuckle. On the other hand, I’m like, he’s right — except it’s not just a technological issue, it’s an ecosystem issue. And what we’ve seen over the past decade and a half is that ecosystem and that industry around safety tech and age assurance really maturing. So there are a range of different tools out there. Obviously, biometrics can be used, or some form of government ID or digital ID, although for this particular bill, digital ID or government ID can’t be the sole method. So what I mean by the ecosystem is we need to make sure that we’re balancing the imperatives of safety in terms of preventing children from accessing social media sites, particularly where there are addictive features like endless scroll, where there are opaque algorithms sending people down rabbit holes, where there is no visibility or explanation as to why they’re being served particularly harmful content. There are a range of other things that need to be considered, but we need to balance privacy with safety.
Some of the criticisms of the law, which I’m sure you’ve heard, especially from social media companies, are that it’s government overreach or that it’s suppressing free speech. Why is it the government’s place to make some of these decisions, and not parents or school administrators or tech companies?
Well, interestingly, the real political momentum did come from parents who feel like it’s just too hard. I mean, 95 percent of parents tell us, if you’re on a social media site that has 50 different parental controls, you’re lucky to be able to figure out how to toggle on one or two. So, for too long, the burden for safety has fallen on the parents themselves or the children, rather than the platforms. So the way that the government designed this is to put the burden on platforms. And after we establish who’s in and who’s out and what technologies can be used, I will have to develop something called the reasonable steps that companies will have to take. And you make a really good point about freedom of speech and freedom of expression. We’ve done a lot of research, particularly with vulnerable communities: LGBTQI youth, young people with disabilities or who are neurodivergent, and First Nations youth. On the one hand, all of these groups, because they’re marginalized, receive higher degrees of online hate. But they also say to us, being online makes us feel more ourselves than we are in the real world. We’re able to find our tribe. We’re able to connect and learn more about ourselves. And you know, my commitment to the young people of Australia — and we have a Youth Advisory Council that actually had input into this process — is that we’re not trying to cut kids off. We will still give them the ability to use messaging apps and online gaming where, you know, there are certainly some risks, but a lot of healthy problem solving and connection can happen in those fora, too.
You talk to a lot of global regulators, counterparts in other countries. Is this law something you think others should emulate?
Listen, in many ways, it is a big experiment, because it hasn’t been done before. We’re not new to implementing novel and complex legislation. I’m independent, so when this law came out, I raised concerns about what I thought was quite a bold and decisive move. But I described the water-safety approach that Australia has applied so successfully over the past few decades, which came about because there were a lot of very tragic drownings in backyard pools.
So Australia was one of the first countries to make sure that fencing was required for every pool, and it’s backed by enforcement. But we haven’t tried to fence the ocean. In some ways, that’s futile. What we do instead is teach kids to swim at the earliest age and to become strong swimmers throughout their lives before we let them go without supervision. We have lifeguards. We teach them to swim between the flags. Where we know there are sharks, we put up shark nets. We teach them how to deal with rip [currents] so they don’t get taken out to sea or so tired that they can’t swim back to shore. So all of these things should be applied to online safety, I believe, as well.
You started your career on Capitol Hill. I know you spent a number of years at Microsoft and other tech companies. How does an American become the top online safety regulator in Australia?
I came to Washington, D.C. in 1991 with big ideals and even bigger hair, and big shoulder pads, too. And I worked for my hometown congressman and he looked over the cubicle one day and said, we’re breaking up the “Baby Bells,” but we also have this small little company in our electorate called Microsoft. So will you work on tech and telecom issues? So this was before there was even an internet. And then I was recruited to be one of Microsoft’s first lobbyists here in Washington, D.C. in 1995. So I’ve been working at tech policy ground zero in Washington.
When I look back at those early, heady days, none of us would have expected that Section 230 of the Communications Decency Act still wouldn’t have been touched by now. Of course, social media wasn’t even a consideration back then. You know, Mark Zuckerberg would probably have been more interested in Dungeons and Dragons as a teenager than in building worlds for a social networking platform.
Here in the U.S., lawmakers are looking at their own version of kids’ online safety legislation, though it’s not a social media ban. I wonder what you think the U.S. can learn from Australia and your approach?
I’ve had the pleasure over the past year or so to meet with Sen. [Marsha] Blackburn and Sen. [Ed] Markey. We just did a two-day workshop with the Department of Homeland Security and 38 tech companies around Safety by Design to tackle child sexual abuse material, and it was an incredibly successful couple of days.
So I’ve built a very different kind of regulator. We’re agile, we’re anticipatory. We do some cooperative initiatives with companies, like Safety by Design, to try and encourage them, just as Congress did back in the late ’60s and early ’70s, following Ralph Nader’s book Unsafe at Any Speed and all the data showing that seat belts prevented traffic fatalities. Governments around the world eventually had to legislate to embed those car safety features, and at the time, the car manufacturers pushed back. But think about all the life-saving technologies we take for granted in our cars, from airbags to anti-lock brakes. Cars compete on their safety standards, right? And so what we’re trying to say is, in this era of technological exceptionalism, where tech companies are moving fast and breaking things — and we saw that with the AI companies in 2023 — maybe we need to just slow down a little bit and take that brilliant innovation, assess the risks, understand the harms and embed the safety protections upfront. So the analogy would be: embed the virtual seat belts and erect the digital guard rails to prevent the next tech wreck from happening.
Do you think the U.S. needs an online safety regulator?
I would be absolutely delighted if the U.S. had an online safety regulator, and one that really was focused on harm remediation. I’ve, of course, been following the debate over the years around online safety in the United States, and it’s really been about suppressing conservative voices versus progressive voices. It’s been politicized in a very different way. Online safety is very bipartisan in Australia, because I think the kind of collective value is, this is an extractive industry. It’s taking revenue out. It’s taking our citizens’ data out. So we want to protect our citizenry, and we expect that these companies that are providing services here, and ostensibly causing some harm, are showing responsibility and are also respecting our laws. If we had a fellow regulator to work with in the United States, I think it would be game-changing.