NEW DELHI: The Ministry of Electronics and Information Technology (MeitY) is considering dropping a contentious clause from the draft intermediary guidelines that required companies to develop automated tools to ‘actively’ monitor content on Twitter, WhatsApp, Facebook and other platforms.
“Instead, the final rules could ask the social media platforms to develop mechanisms using AI (artificial intelligence) to identify accounts transmitting illegal or inflammatory content, or any content that could disturb law and order or pose a threat to national security, and then take them down,” a senior government official told ET.
The official said that the tools could resemble the ones currently used by WhatsApp to tackle child pornography on its platform.
WhatsApp has said that it uses AI tools to identify accounts sharing child pornography on its platform. The company said earlier this year that it had removed close to 1,30,000 such accounts in just 10 days through these tools, without decrypting any messages.
The clause requiring automated tools to ‘actively’ monitor content had been a long-standing demand of the home ministry and was included in the draft guidelines released in December last year. However, it has been strongly opposed by global tech giants and civil society members, who say it raises concerns around ‘censorship’, thereby curbing freedom of expression, and would require companies to re-engineer their products.
MeitY, however, has pushed back against the home ministry’s demand, as it believes the social media companies rightly claim to be “platforms” rather than publishers or broadcasters of content, and that their accountability for content on their platforms needs to be fixed accordingly.
“So, the current Safe Harbour rules allow them immunity from taking the onus of content posted but at the same time they can’t completely shrug the responsibility off their shoulders and need to cooperate with law enforcement agencies,” the official said.
“The idea is not to violate anyone’s privacy but there is a genuine concern on part of law enforcement agencies which needs to be addressed and we felt this could be the middle ground,” the official added.
The final intermediary guidelines, however, will come out only after the Supreme Court judgment on the traceability issue and on the linking of Aadhaar to social media accounts, due for its next hearing on September 13.
MeitY wants intermediaries such as WhatsApp to help trace the origin of messages that fan criminal activities such as riots and mob lynchings. WhatsApp has pushed back, saying this would infringe on its privacy protections.
“We are waiting to see the Supreme Court judgment as it has an active bearing on the social media intermediary guidelines,” the official said, adding that governments the world over were grappling with how to regulate social media and that it was only natural for tech companies to oppose any sort of reining in.
MeitY, though, agrees with the MHA on the issue of traceability. “While we are awaiting the SC judgment on the matter, traceability of messages is critical and non-negotiable.”
The intermediary guidelines for internet and social media companies such as WhatsApp and Facebook have assumed critical importance as the government seeks to crack down on fake news and rumours that have fueled violence, including lynchings, in parts of the country. The consultation process is over and MeitY is working on the final rules.
The draft guidelines released by MeitY last December mandate that social media companies nominate a grievance redressal officer in India and develop a monitoring and filtering mechanism to check content. The draft also requires all intermediaries to hand over to government agencies, within 72 hours, any information related to cyber security, national security, investigations, prosecutions or the prevention of an offence, along with the originator of the content.
It also requires them to take down or disable content considered defamatory or against national security under Article 19(2) within 24 hours of being notified by the appropriate agency, in addition to using automated tools to identify, remove and trace the origin of such content.