Technology giants, under intense pressure after the British Prime Minister accused them of providing a "safe space" for terrorist ideology, on Monday defended their handling of extremist content following the London terror attack.
Speaking outside Downing Street on Sunday, Prime Minister Theresa May said: "We cannot allow this ideology the safe space it needs to breed.
"Yet that is precisely what the internet and the big companies provide," she said.
Her terse statement came after three Islamist men carried out a knife and van attack in central London, killing at least seven people and injuring 48 others. The Islamic State terror group has claimed responsibility for the attack.
British Culture Secretary Karen Bradley said tech companies needed to tackle extremist content in the same way they had removed indecent images of children.
"We know it can be done and we know the internet companies want to do it," she told the BBC.
Islamism-inspired terrorism remains the principal terrorist threat to both the United Kingdom and British interests overseas, according to a report on terror-related convictions in the country published in March.
The study by the Henry Jackson Society warned that half of Britain's jihadists are now radicalised online.
Amidst intensifying pressure, Google said it had already spent hundreds of millions of pounds on tackling the problem of extremist content online.
Facebook and Twitter said they were working hard to rid their networks of terrorist activity and support.
Google, which owns YouTube, along with Facebook, which owns WhatsApp, and Twitter were among the tech companies already facing pressure to tackle extremist content.
Google said it had invested heavily to fight abuse on its platforms and was already working on an "international forum to accelerate and strengthen our existing work in this area".
Google added that it shared "the government's commitment to ensuring terrorists do not have a voice online".
Facebook said: "Using a combination of technology and human review, we work aggressively to remove terrorist content from our platform as soon as we become aware of it - and if we become aware of an emergency involving imminent harm to someone's safety, we notify law enforcement."
Meanwhile, Twitter said "terrorist content has no place on" its platform.
Home Secretary Amber Rudd said yesterday that tech firms needed to take down extremist content and limit the amount of end-to-end encryption that terrorists can use.
End-to-end encryption renders messages unreadable to anyone who intercepts them, whether criminals or law enforcement.
The Open Rights Group, which campaigns for privacy and free speech online, warned that politicians risked pushing terrorists' "vile networks" into the "darker corners of the web" through further regulation.
The way that supporters of jihadist groups use social media has changed "despite what the prime minister says", according to Dr. Shiraz Maher of the International Centre for the Study of Radicalisation (ICSR) at King's College London.
They have "moved to more clandestine methods", with encrypted messaging app Telegram the primary platform, Dr. Maher said.
Dr. Julia Rushchenko, a London-based research fellow at the Henry Jackson Centre for Radicalisation and Terrorism, said that more could be done by tech giants to root out such content.
She felt that the companies erred on the side of privacy, not security. "We all know that social media companies have been a very helpful tool for hate preachers and for extremists," Dr. Rushchenko said.
Simon Howard, Chief Executive of UKSIF, the UK Sustainable Investment and Finance Association, said: "We'll need all the technology companies to do a bit more, and we'll have to decide what the UK legal framework in which they do that is."