UK Parliamentary inquiries
House of Commons Digital, Culture, Media and Sport Committee inquiry: Disinformation and ‘fake news’
The Committee’s interim report, Disinformation and ‘fake news’, was published in July 2018; the final report is due by the end of 2018. Key recommendations include: a re-designation of tech companies as neither publisher nor platform, with clear legal liability for harmful and illegal content published on their sites; a levy on social media companies to fund a major media literacy programme; a further levy on tech companies operating in the UK to pay for the Information Commissioner’s Office to expand its work; a public register for political advertising; a ban on advertising to Facebook ‘lookalike audiences’ where users have asked not to receive political adverts; digital imprints on online election campaigning material; changing the Electoral Commission’s maximum fine to a fixed percentage of turnover; an Electoral Commission code for social media advertising during election periods; an audit of the online advertising market, including fake accounts, by the Competition and Markets Authority; and consideration by the UK Government of a new, voluntary Atlantic Charter to protect citizens’ digital rights:
The Government published its response to the interim report in October 2018:
Following a meeting of the International Grand Committee on Disinformation and Fake News convened by the Select Committee, nine Parliamentarians signed a declaration on the principles of the law governing the internet:
House of Lords Communications Committee inquiry: The internet: to regulate or not to regulate?
The Committee’s work is ongoing; its report had not been published as of January 2019.
House of Lords Communications Committee inquiry: Growing up with the internet
The Committee’s report was published in March 2017 (disclosure: Chair of the LSE Truth, Trust and Technology Commission, Professor Sonia Livingstone OBE, was the Committee’s specialist advisor). Key recommendations included: Government to create a new Children’s Digital Champion to advocate on behalf of children; the UK to maintain legislation incorporating the standards set by the General Data Protection Regulation (GDPR), regardless of EU membership; the adoption by industry of a set of minimum standards; and digital literacy to be the fourth pillar of a child’s education.
The Government published its response in October 2017:
House of Commons Science and Technology Committee inquiry: Algorithms in decision-making
The Committee’s report was published in May 2018. It calls on the new Centre for Data Ethics and Innovation (see below) to examine algorithmic bias and transparency tools, and to determine the scope for individuals to challenge the results of all significant algorithmic decisions that affect them and, where appropriate, to seek redress for the impacts of such decisions. It calls on the Government to provide better oversight of private sector algorithms that use public sector datasets, and to look at how to monetise these datasets to improve outcomes across Government.
The Government published its response in September 2018:
UK Government initiatives
Digital Charter
Beyond a broad outline policy paper, the details of the Digital Charter had not been published by the start of 2019. The Charter is described as ‘a rolling programme of work to agree norms and rules for the online world and put them into practice’ and as ‘based on liberal values that cherish freedom, but not the freedom to harm others’. Priorities under the work programme include disinformation, online harms and cyber security. The Charter is being developed collaboratively with industry, business and civil society.
White Paper on online harms (forthcoming)
The Government’s White Paper is expected in spring 2019, according to comments made by DCMS Secretary of State Rt Hon Jeremy Wright QC MP before the DCMS Select Committee in October 2018. It will clarify what legislation the Government thinks is required regarding online safety, and whether a regulator is needed. The White Paper is the joint responsibility of DCMS and the Home Office.
UK Council for Internet Safety (UKCIS)
This new organisation will bring together more than 200 organisations representing government, regulators, industry, law enforcement, academia and charities, working together to keep children safe online. This builds on the work of the UK Council for Child Internet Safety (UKCCIS) that was previously in operation.
In an answer in the House of Lords on 4 December 2018 to Baroness Benjamin about UKCIS funding, Lord Ashton of Hyde confirmed the five focus areas of the body: online harms experienced by children; radicalisation and extremism; violence against women and girls; serious violence; hate crime and hate speech.
(See also a paper by Dr Victoria Baines: ‘Online child sexual exploitation: Towards an optimal international response.’ Available at https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3240998)
Centre for Data Ethics and Innovation
This is a new institution set up by the Government to ‘advise the government on how to enable and ensure ethical, safe and innovative uses of data, including for AI. It will work with, and advise, Government and existing regulators’. It will sit within DCMS for its first two years before being established as a statutory body, following a public consultation designed to inform its operations and priorities. Its operational strategy is expected to be published in spring 2019.
The Government response to the consultation was published in November 2018:
Board members were announced in November 2018:
Protecting the debate: Intimidation, influence, and information (Cabinet Office consultation)
This consultation aims to crack down on threats and abuse towards those standing for election. It will also ‘review whether the requirement to have imprints, which is added to election material to show who is responsible for producing it, should be extended to digital communications’.
Digital competition expert panel
Chaired by Professor Jason Furman, the expert panel’s objectives are to consider the potential opportunities and challenges the digital economy may pose for competition and pro-competition policy, and to make recommendations on any changes needed. This is a joint initiative of HM Treasury and the Department for Business, Energy and Industrial Strategy. The panel is due to report in early 2019 following a public consultation.
Press sustainability: The Cairncross review
The review, chaired by Dame Frances Cairncross, was established to investigate the sustainability of the UK press market. To inform the review, DCMS commissioned research from Mediatique looking specifically at the changing state of the press market. The review is due to report in early 2019.
Defence AI Lab
Described as a single flagship for artificial intelligence, machine learning and data science in defence, the Lab is to be based at the Defence Science and Technology Laboratory (Dstl) at Porton Down. Countering fake news is included in the list of work that the Lab will engage in.
National Security Communications Unit
Announced in January 2018, this new unit will be tasked with ‘combating disinformation by state actors and others’, according to a Government spokesperson. Further details are yet to be published.
PR Week reported in November 2018 that funding for the six-month pilot phase had finished, and said that the Cabinet Office had not responded to a request for comment about whether funding was continuing. Its future is uncertain.
Information Commissioner’s Office (ICO)
The ICO’s investigation into data analytics in political campaigns led to a progress report in July 2018, published to inform the overlapping DCMS Select Committee inquiry. Based on its investigation, the ICO fined Facebook £500,000. A second report, Democracy Disrupted? Personal information and political influence, recommended that the Government introduce a statutory Code of Practice for the use of personal data in political campaigns; a third report in November 2018 repeated that call, arguing that self-regulation was inadequate and that the Code should cover platforms, data brokers and the media.
The ICO has also published a call for views from stakeholders regarding the development of the proposed statutory code of practice for the use of personal data in political campaigns. This is the first stage in the ICO’s consultation on this matter:
Ofcom
In September 2018, Ofcom published a discussion document about harmful online content. It drew on Ofcom’s experience of regulating the UK communications sector and was intended to inform policy-making on online content.
An accompanying speech by Ofcom Chief Executive Sharon White to the Royal Television Society provides context:
Electoral Commission
The Electoral Commission’s Digital campaigning: Increasing transparency for voters report, published in June 2018, calls for stronger powers to obtain information about election campaign spending, greater fines for breaches of spending laws, more detailed and more timely reporting of spending, and better labelling of digital campaign materials and ads. www.electoralcommission.org.uk/__data/assets/pdf_file/0010/244594/Digital-campaigning-improving-transparency-for-voters.pdf
Commission on Fake News and the teaching of critical literacy skills in schools
Jointly run by the National Literacy Trust and the All-Party Parliamentary Group on Literacy, the Commission’s report was published in June 2018. Recommendations focus on the need for critical literacy to be taught in schools, including the use of a range of texts on a variety of platforms that illustrate political bias. It calls for media organisations and Government to work together to identify and enforce appropriate regulatory options to ensure that digital media platforms are effectively tackling the proliferation of fake news.
UK thinktank and NGO responses
Doteveryone’s Regulating for responsible technology report was published in October 2018. It recommends the establishment of a new independent UK Office for Responsible Technology (ORT), which would have three functions: (1) to empower regulators; (2) to inform the public and policy-makers; and (3) to support people to find redress. Doteveryone proposes that the ORT’s anticipated cost (c. £37 million) would be met via a levy on industry, and by government investment.
The report builds on Doteveryone’s previous Green Paper, published in May 2018.
Article 19 published Self-regulation and ‘hate speech’ on social media platforms, which recommended a model of self-regulation of social media, based on existing systems of press self-regulatory councils that are common throughout Europe.
Carnegie UK Trust (William Perrin and Professor Lorna Woods)
In a series of blogs for the Carnegie UK Trust, Perrin and Woods propose legislation to create a new statutory duty of care, under which social media service providers would be responsible for preventing harm to their users. The proposal would apply to online platforms the same principles that have traditionally been applied to corporate-owned public spaces, so that harm can be prevented.
Global Partners Digital
The report A rights-respecting model of online content regulation by platforms calls for online platforms to establish a set of standards that would be monitored by an international, global multistakeholder oversight body, comprising representatives from online platforms, civil society organisations, academia and others. Platforms that failed to meet the standards would be publicly called out and provided with recommendations for improvement.
The report Tackling misinformation in an open society recommends mandated transparency for political advertising, equipping existing bodies (e.g. Office for Budget Responsibility, Office for National Statistics, House of Commons Library) with a mandate to inform the public, and cautions against over-hasty reaction.
Royal Society of Arts (RSA)
Focusing on contentious uses of AI, the Artificial Intelligence: Real public engagement report argues that the citizen voice must be embedded in public-sector AI systems through public deliberation.
France
France’s National Assembly adopted two controversial ‘fake news’ bills in October 2018, which must be approved by the Senate before they become law. The bills enable a candidate or political party to seek a court injunction preventing the publication of ‘false information’ that might influence an election result during the three months before a national election, and give France’s broadcast authority the power to take off the air any network that is ‘controlled by, or under the influence of a foreign power’ if it ‘deliberately spreads false information that could alter the integrity of the election’. They are widely viewed as targeting the Russian state-backed broadcaster RT. France’s Minister of Culture, Françoise Nyssen, has also announced her intention to create a council on press ethics.
Germany
The Network Enforcement Act, known as NetzDG, compels online platforms to provide ways for users to notify them of illegal content, and allows for fines of up to €50 million if they fail to remove ‘manifestly unlawful’ hate speech or other harmful content within 24 hours. Platforms are required to report publicly on how they handle notifications. The law has been criticised by NGOs as overbroad and as increasing the risk of censorship.
European Union
The Copyright Directive calls for a ‘link tax’ that aims to ensure that content creators are paid when their work is used by sharing platforms such as YouTube or Facebook, and by news aggregators such as Google News. (The European Parliament has voted in favour, but the final vote is due in January 2019.)
The European Commission has proposed an interim Digital Services Tax that would apply to revenues from certain digital activities that currently escape the tax framework entirely: for example, selling online advertising space; digital intermediary activities that allow users to interact with one another and can facilitate the sale of goods and services between them; and the sale of data generated from user-provided information. The proposed rate is 3%, and the tax would be an interim measure until wider reform is implemented. It is currently under negotiation and has been criticised by the tech companies.
The European Commission convened a High Level Expert Group on Fake News and Online Disinformation that reported in March 2018. It focused mainly on non-regulatory responses. Recommendations included creating a network of research centres studying disinformation across the EU; continuing the Group’s work through a multistakeholder coalition that would establish a code of practice for platforms; empowering users and journalists with tools to flag and avoid disinformation; and increasing citizens’ media and information literacy.
Tackling online disinformation: A European approach – a communication in which the European Commission outlined its policy responses. Its aims included establishing a self-regulatory code of practice (see below), creating a network of independent fact-checkers, tackling cyber-enabled threats to elections in member states, media literacy work such as organising a European Week of Media Literacy, and exploring increased funding opportunities to support initiatives promoting media freedom and pluralism, quality news media and journalism.
In December 2018, Vice-President Andrus Ansip published a statement outlining the Commission’s action plan countering disinformation and setting out progress to date. New initiatives include the introduction of a rapid alert system with Member States so that ‘disinformation can be quickly countered with hard facts’, and an increase to the budget of the European External Action Service.
The self-regulatory Code of Practice on Disinformation was published by the European Commission in September 2018. Signatories, including Google, Facebook, Twitter and Mozilla, commit to act in five areas: disrupting the advertising revenues of accounts and websites that spread disinformation; making political and issue-based advertising more transparent; addressing fake accounts and online bots; empowering consumers to report disinformation and access different news sources, while improving the visibility and findability of authoritative content; and empowering the research community to monitor online disinformation through privacy-compliant access to platforms’ data.
This list will be regularly updated to reflect ongoing developments.