Technology platforms that host user-generated content or allow people to communicate will need to proactively remove and limit the spread of illegal or harmful content, or they could be fined up to 10% of turnover under the UK’s long-awaited online harms legislation.
On 15 December 2020, the UK government released its full response to the Online Harms White Paper, signalling that the Online Harms Bill will establish a statutory duty of care for online companies towards their users. Companies will be legally obliged to identify, remove and limit the spread of illegal content such as child sexual abuse, terrorism and suicide material.
The companies will also have a much greater responsibility to protect children from harmful content or activity such as grooming, bullying and pornography. The rules will be enforced by Ofcom, which the bill officially confirmed as the online harms regulator.
Ofcom will have the power to fine companies failing in their duty of care up to £18m, or 10% of annual global turnover – whichever is higher – and will also be given the power to block non-compliant services from being accessed in the UK.
The government has also suggested that Ofcom will be empowered via secondary legislation to impose criminal sanctions against individual executives or senior managers at technology firms, for example, if they do not respond fully, accurately and in a timely manner to information requests by the regulator.
The legislation will apply to any company in the world that serves UK-based users. The rules will be tiered so that the most popular sites and services (those with large audiences) must go further by setting and enforcing clear terms and conditions that explicitly state how content that is legal but could still cause significant physical or psychological harm will be handled.
This will include misinformation and disinformation about a range of topics such as coronavirus vaccines, marking the first time online misinformation has come under the remit of a government regulator.
A small group of companies with the largest online presence and high-risk features, which is likely to include Facebook, TikTok, Instagram and Twitter, will be in Category 1, while Category 2 services include platforms that host dating services or pornography and private messaging apps.
Fewer than 3% of UK businesses will fall within the scope of the legislation, and the vast majority of companies that do will be Category 2 services, the UK government has claimed.
“I’m unashamedly pro-tech but that can’t mean a tech free-for-all,” said digital secretary Oliver Dowden. “Today, Britain is setting the global standard for safety online with the most comprehensive approach yet to online regulation. We are entering a new age of accountability for tech to protect children and vulnerable users, to restore trust in this industry, and to enshrine in law safeguards for free speech.
“This proportionate new framework will ensure we don’t put unnecessary burdens on small businesses, but give large digital businesses robust rules of the road to follow so we can seize the brilliance of modern technology to improve our lives.”
Home secretary Priti Patel added: “We will not allow child sexual abuse, terrorist material and other harmful content to fester on online platforms. Tech companies must put public safety first or face the consequences.”
The legislation will also include safeguards for freedom of expression and pluralism online, with specific measures to ensure journalistic content on news publishers’ websites is protected, even when shared on various social media platforms. Reader comment sections on news publishers’ sites will also be exempt.
The government has said the legislation will make clear what specific harmful content and activity will be covered, as well as set out how the rules will apply to communication channels and services where users expect a greater degree of privacy, for example in online instant messaging services and closed social media groups.
“We’re really pleased to take on this new role, which will build on our experience as a media regulator. Being online brings huge benefits, but four in five people have concerns about it,” said Ofcom chief executive Melanie Dawes.
“That shows the need for sensible, balanced rules that protect users from serious harm, but also recognises the great things about online, including free expression. We’re gearing up for the task by acquiring new technology and data skills, and we’ll work with Parliament as it finalises the plans.”
The Online Harms White Paper was published by the Department for Digital, Culture, Media and Sport (DCMS) and the Home Office in April 2019, and put forward the world’s first framework designed to hold internet companies accountable for the safety of their users.
Both Lords and MPs have previously expressed frustration about delays in the Online Harms Bill, as well as the lack of a full government response to the white paper.
In February 2020, the government set out its initial response to the paper, putting Ofcom forward as a potential regulator. However, in the government’s own press release announcing the response, it said the full version would be “published in the spring”.
The government now claims it is planning to introduce the law in early 2021 in an Online Safety Bill, which it further claimed will set a global standard for proportionate but effective regulation.
“Today’s announcement is a welcome step forward, pursuing a differentiated approach for illegal and legal harms and with appropriate exemptions for ‘lower-risk’ services such as product reviews or email,” said Antony Walker, deputy CEO of trade body TechUK.
“However, significant clarity is needed on key elements of how the proposed regime will work in practice, including the criteria used for the categorised approach and ‘high-risk services’. While outlined exemptions are encouraging, there are still too many companies included in the scope, with over 175,000 ‘Category 2’ businesses needing to know their exact responsibilities and where these exemptions apply.
“The prospect of harsh sanctions, including senior manager liability and fines of 10% of global turnover, will be particularly worrying to this broad cohort of companies and risks discouraging startups and new investment at a time when we should be encouraging and capitalising on the UK’s reputation to drive growth and investment.”
On 27 November 2020, the government unveiled plans to set up a dedicated Digital Markets Unit (DMU), which it said would work closely with regulators, including Ofcom and the Information Commissioner’s Office, to introduce and enforce a new code to govern the behaviour of platforms that currently dominate the market.
“Digital platforms like Google and Facebook make a significant contribution to our economy and play a massive role in our day-to-day lives – whether it’s helping us stay in touch with our loved ones, share creative content or access the latest news,” said business secretary Alok Sharma at the time. “But the dominance of just a few big tech companies is leading to less innovation, higher advertising prices and less choice and control for consumers.”
The European Union (EU) is also set to unveil new rules in its Digital Markets Act and Digital Services Act, which are expected to focus on competition and on making platforms responsible for the content they host, respectively.
The European Commission (EC) has taken particular exception to large technology companies using data gathered from one service to “improve or develop” new services in a different line of business, as this significantly stymies the ability of smaller firms to compete.
In a joint opinion piece for the Irish Times, published on 6 December, commissioners Margrethe Vestager and Thierry Breton wrote that “the business and political interests of a handful of companies should not dictate our future”, adding that “swift preventive intervention powers and the possibility to impose sanctions…will be able to prevent harmful behaviour before it even takes place”.