This latest hearing is the first since the storming of the US Capitol.
Politicians believe that was a tipping point for greater regulation.
They have said they plan to change the legislation that protects online platforms from liability for content posted by third parties.
The session began in combative style, with the chair, Mike Doyle, asking all three executives whether they felt they bore responsibility for the events in Washington. None was prepared to give the one-word "yes" or "no" answer he demanded.
He also challenged the platforms to remove 12 prolific anti-vaxxers who, he said, account for 65% of vaccine disinformation, and gave the executives a 24-hour deadline to get back to him.
More generally, Congress is considering scrapping Section 230, legislation crafted in the early days of the internet that lets website owners moderate their sites without legal liability, by effectively saying they are not publishers of third-party content.
Facebook boss Mr Zuckerberg proposed limited reforms, going further than his two peers.
"We believe Congress should consider making platforms' intermediary liability protection for certain types of unlawful content conditional on companies' ability to meet best practice to combat the spread of this content," he wrote.
On disinformation more generally, he said hateful content made up only a small fraction of what Facebook users saw - with political posts accounting for 6% of what US users saw in their news feeds.
He also outlined the efforts his team had made to counter disinformation, including working with 80 fact-checking organisations and labelling debunked stories. Facebook had removed more than 12 million pieces of false content relating to Covid-19, he said.
Mr Pichai said YouTube had worked throughout 2020 to identify and remove content that was misleading voters, while information panels on the video-sharing website's homepage about Covid-19 had been viewed more than 400 billion times.
He also mentioned Section 230, saying repealing it "would have unintended consequences - harming both free expression and the ability of platforms to take responsible action to protect users in the face of constantly evolving challenges".
Twitter's Mr Dorsey said efforts to combat misinformation must be linked to "earning trust" from users by focusing on "enhancing transparency, ensuring procedural fairness, enabling algorithmic choice, and strengthening privacy".
He did not reference the legislation but spoke about two recent experiments - Birdwatch and Bluesky - that Twitter is trialling to tackle misinformation.
Birdwatch has about 2,000 participants drawn from the Twitter community, with "birdwatchers" able to flag misleading tweets and annotate them with notes. Early studies of how it is working seem to show the notes range from balanced fact-checking to more partisan criticism.
Bluesky is an independent team funded by Twitter which is working on creating open and decentralised standards for social media.
The three executives appeared before two subcommittees of the House Energy and Commerce Committee.
The hearing was announced in February, after it emerged that many of those marching on the US Capitol had organised themselves on social media and supported campaigns that falsely claimed the presidential election was stolen from Donald Trump, such as StopTheSteal.
At the time the chairs said: "Industry self-regulation has failed. We must begin the work of changing incentives driving social media companies to allow and even promote misinformation and disinformation."
At Thursday's hearing, lawmakers continued to criticise the tech bosses for not moving strongly enough against misinformation, particularly content affecting children, and frequently interrupted or cut short their responses.
The theatrics, common to many congressional hearings, drew a response on Twitter from Mr Dorsey, who retweeted a call for lawmakers to engage with him in a "substantive" discussion and tweeted a question mark, followed by a yes/no poll.
An annual report from UK organisation Hope Not Hate found:
* there was an explosion in conspiracy theories during the UK lockdowns
* British conspiracy theorists commanded massive online followings - David Icke has 780,000 on Facebook, 900,000 on YouTube and 230,000 on Twitter
* between 15% and 22% of Britons believe the main Covid conspiracies are true
The riot in Washington was a pivotal moment for the tech giants too. Facebook removed Mr Trump's account and is awaiting a ruling from its own oversight board on whether he can be reinstated. Twitter went further, banning him permanently, even if he decides to run for office again.
But many felt social media platforms had allowed the wave of distrust about the legitimacy of the election result to grow.
US campaign group SumOfUS has reviewed dozens of social media accounts, pages and groups - and says that the tech platforms' "policies, algorithms, and tools directly fuelled" violence. Twitter, Facebook and Google "came up massively short in preventing the escalation of violence," its report concluded.
"Internet companies should be held to account by democratic institutions for their policies on misinformation. But more broadly we believe these decisions should be made through open, democratic and transparent processes, rather than by commercial interests."
The three executives have faced a series of congressional hearings this year. This was the fourth appearance by Mr Zuckerberg, and the third for Mr Dorsey and Mr Pichai.