British lawmakers will consider the world’s first online safety laws, cracking down on harmful, fake or dangerous content on YouTube, Facebook, Twitter and Google.
If Parliament adopts the proposal, a new independent regulator will be established to ensure companies meet their responsibilities. The Online Harms White Paper is a joint proposal from the Department for Digital, Culture, Media and Sport and the Home Office. It includes a “duty of care” requiring companies to take reasonable steps to keep users safe and to act against illegal and harmful activity.
“The Internet can be brilliant at connecting people across the world — but for too long these companies have not done enough to protect users, especially children and young people, from harmful content,” Prime Minister Theresa May said. “We have listened to campaigners and parents, and are putting a legal duty of care on Internet companies to keep people safe. Online companies must start taking responsibility for their platforms, and help restore public trust in this technology.”
The laws would prohibit the sharing of child abuse or terrorist-related content, force social media platforms to publish annual transparency reports detailing harmful content and their efforts to address it, and require companies to respond quickly to users’ complaints.
Fines could total billions of dollars for the bigger companies, Culture Minister Margot James told Business Insider.
Several recent incidents prompted British lawmakers to take action, including the New Zealand mass shooting in which the gunman live-streamed his attack on two mosques, and the suicide of British teenager Molly Russell, after which Instagram, owned by Facebook, banned images of extreme self-harm.
The new proposals would apply to any company that allows users to share or discover user-generated content or to interact with each other online. The rules go beyond social media sites to include file-hosting sites, public discussion forums, messaging services and search engines. The new government regulator would also issue a code of practice to prevent the spread of misleading and harmful disinformation, including the use of dedicated fact checkers, especially during elections.
“The era of self-regulation for online companies is over,” Digital Secretary Jeremy Wright said. “Voluntary actions from industry to tackle online harms have not been applied consistently or gone far enough. Tech can be an incredible force for good and we want the sector to be part of the solution in protecting their users. However, those that fail to do this will face tough action.”