The recent violence in Bengaluru, triggered by a message circulated on a social media platform, has once again underlined the need to strengthen our regulatory system. The Internet serves as a space for public communication and opinion formation; hence, the neutrality of intermediaries is critical. It is characteristic of search engines and social networks that they filter, personalise and present content to users based on the personal data they have collected on them. To serve the overarching business model, algorithms are programmed to ensure that users spend as much time as possible on the respective platforms. In recent times there have been several cases of hate speech, fake videos and disinformation placed on social media platforms resulting in communal violence. In fact, every time such incidents have taken place, there has been a noticeable increase in the circulation of harmful messages.
Three factors are driving nations the world over to strengthen their procedures and surveillance systems. First, new techniques and methodologies are being used to harm society and the economy and to create doubts about the governing system, often resulting in unexpected violence of high intensity. Second, not all intermediaries may be neutral, and they could exacerbate the harmful impact by allowing quick circulation, perhaps to targeted persons. Third, there has been a growing realisation that cyberspace cannot be covered under the existing laws dealing with traditional crimes; hence a comprehensive legal framework is needed to deal with the problem.
A look at other countries that have introduced, or are introducing, new systems for regulating Internet intermediaries, i.e. social media platforms, is relevant for formulating an effective system for our requirements. In 2017, Germany passed the Network Enforcement Act (NetzDG), which requires social media platforms with more than 2 million registered users in Germany to put in place procedures to remove illegal and harmful content within 24 hours. Enforcement lies with the competent state media authority; the 14 state media authorities have far-reaching rights of information and investigative powers vis-à-vis media intermediaries. Intermediaries can be punished with fines of up to EUR 500,000. In Australia, the Criminal Code Amendment (Sharing of Abhorrent Violent Material) Act, 2019, obliges intermediaries to inform the authorities of abhorrent violent material being circulated through their services and to remove such content expeditiously. The Act goes beyond social media platforms to cover any internet site that enables users to interact with other users and any electronic service through which users can communicate with others. Regulatory authority is vested in the eSafety Commissioner, who has been given powers in this respect. Heavy penalties are imposed for failure to notify the Australian Federal Police of the details of the abhorrent violent material [AUD$168,000] and for failure to remove the harmful content [AUD$2.1 million or 3 years' imprisonment for individuals; the greater of AUD$10.5 million and 10% of annual turnover for companies].
In the UK, the white paper on “Online Harms in the United Kingdom” seeks to establish a regulatory framework for a broad swathe of intermediaries that would impose a proportionate “duty of care” on these entities. It would cover intermediaries such as social media platforms, hosting sites, public discussion forums, messaging services and search engines of all sizes. A new regulator is being constituted to ensure compliance with the regulations, and heavy penalties are being considered for non-compliance.
The above efforts have three aspects in common: first, the responsibility of Internet intermediaries to remove harmful content expeditiously has been defined; second, a strong regulatory system has been established to monitor the social platforms; and third, severe penalties have been introduced through a legal framework.

India too is moving towards imposing obligations on intermediaries. Our regulatory system is evolving on the basis of several court cases. While there are concerns about the protection of privacy and a Data Protection Bill is under consideration, the need for law enforcement agencies to trace culprits remains a major issue. In judicial cases there are signs of an activist approach, often creating problems for the law enforcement agencies. The safe harbour clause meant for journalistic and editorial content is often misused. The main issue remains to have a balanced system that protects privacy and ensures ease of doing business while, at the same time, preventing mischief makers or agents of adversaries from harming our national interests by placing content that can adversely affect our economy, political institutions and social harmony. Our adversaries also use social media for intelligence gathering and for laying honey traps. In 2008, the Government amended the IT Act of 2000 to specify penalties for various offences, including acts intended to threaten the unity, integrity, security or sovereignty of India or to strike terror in the people. In 2011, MeitY issued Guidelines that required social media platforms to remove harmful content within 36 hours and to observe due diligence in identifying such content.
The Guidelines also indicated that content that “threatens the unity, integrity, defence, security or sovereignty of India, friendly relations with foreign states, or public order or causes incitement to the commission of any cognisable offence or prevents investigation of any offence or is insulting any other nation” should be removed. Despite these efforts, however, recent times have seen a significant increase in harmful content, fake videos and inciting material placed on social media to the detriment of our national interests. In view of this, some steps are under consideration to strengthen the system.
Sanjay Dhotre, Minister of State for Electronics and IT, informed the Rajya Sabha in November 2019 that the rules are being revised to make it mandatory for social media platforms to observe the due diligence prescribed in 2011. The revised rules will include tracing the originator of information and deploying technology-based automated tools to proactively identify and disable public access to unlawful information. We need to address the lacunae in our system. Through the Internet, malicious content reaches a large number of persons instantaneously and therefore has the potential for a greater impact than physically addressing the targets. Hence the punishments need to be stricter, as other countries have made them.
We need to formulate a holistic policy comprising three components. First, a comprehensive legal framework needs to be formulated that narrowly defines the obligations of social media platforms. This would considerably help the law enforcement agencies as well as the courts.
Second, there is a need for an independent, empowered regulatory body that proactively monitors the platforms and informs the law enforcement agencies; everything cannot be left to the intermediaries.
Third, we should go beyond the removal of harmful content. Efforts should be made to proactively counter the possible adverse reactions provoked by malicious content. The Ministry of Information and Broadcasting’s Bureau of Outreach and Communication [BOC] is the nodal agency for circulating the government’s messages effectively. The BOC should be given the necessary information expeditiously by the National Information Board [an inter-ministerial board under the NSA that deals with information security issues] to counter the narrative built by the harmful content, for circulation through various channels. These efforts should be aligned with the proposed national cyber security strategy aimed at dealing with adversaries in the cyber domain. The system should be reviewed periodically, as new threats keep emerging that require recalibration of our efforts to deal with them.