That's where things would return to, but the twist Musk brings in is an expectation of free speech absolutism, which some can easily construe as the ability to post whatever content they want without moderation. The challenge for social media companies is that they are platforms capable of amplifying all sorts of information, which is why sorting out how best to moderate it is going to be really tricky.
I dunno about it being tricky or not. It really depends on the guidelines put into place and the definitions those guidelines use.
1). If you start with Section 230 of the CDA, my reading of its intent was primarily to keep porn and graphic violence away from children. It gave social media platforms the power to take that content down without repercussions (i.e., to censor that speech), and it shielded them from liability for what their users post, including users posting copyrighted material and such on their platform.
If held to that clarity, simplicity, and common sense, does it still look very tricky? I'm leaning toward it wouldn't be.
2). This was then abused and over-extended to 'hate speech', generally never defined any further. Some claim that any speech which disagrees with them politically, or with their political point of view, is the same as a physical assault and thus 'hate speech', justifying the censorship of political views and political speech with which they don't agree. Suddenly you have censorship decisions driven by the political agenda and leanings of the censor in question.
3). This then extended again to include 'disinformation', which, even more blatantly, applied to information the censor doesn't agree with or support. Suddenly you have disfavored political facts and information being suppressed, while favored political facts and information are made even more prominent through bot accounts that re-Tweet and like that content.
Sure, when you end up in either #2 or #3 it can get really tricky really fast. That doesn't mean that if you stuck to #1 it would have to be tricky.
It's also important to note that social media companies aren't just blank slates for their users, since there is some form of curated experience running in the background (e.g., moderation via their terms of service that keeps trolls out to minimize "noise").
This curation that you speak of is already a function of a publisher and less so a function of a platform.
Which is social media? A platform? Or a publisher?
The rules governing the two are vastly different.
Publishers are liable for the content they've curated, while platforms come with only a minimal level of curation.
Social media can't claim a publisher's rights through their curation while at the same time claiming a platform's protections, which presume only minimal curation.
For clarity's sake, it needs to be either one or the other, but not both.
The needle Musk will have to thread now is allaying advertisers' fears that his comments on free speech absolutism will turn Twitter into a space they don't want to be associated with; that much is clear from his softened remarks to advertisers this week.
That, however, still doesn't make them a "utility", and that model would put in place moderation policies closer to what people like Musk are bucking.
Meh. It's the closest analogy which fits. Hell, a great deal of the Internet actually 'runs' on the physical telecommunication networks, doesn't it?