I was recently listening to Fred Wilson and Howard Lindzon talk about, among other things, news sources and curation, which is a topic dear to my heart (and one I’ve written about before here and here).

Curation is hard, people tend to fall into confirmation-bias circle jerks, and we are in a crisis of media legitimacy (more generally, a crisis in the legitimacy of the establishment, and even of objective reality).

If you think the moon is made of green cheese, you have a problem. If a lot of people think the moon is made of green cheese, society has a much bigger problem. (Or that climate change is a hoax invented by the Chinese, or that vaccinations cause autism, etc.)

The marketplace of ideas is faltering. I’m not sure whether it’s only the arrival of the unwashed masses, Facebook likes, adverse selection, filtering tools that aren’t fit for purpose, or manipulation and ‘fake news.’

What seems to have gone missing is the sharing ethic that existed when the blogosphere and Twittersphere and social bookmarking were young.

What works to surface quality content is human curation, like Memeorandum and Abnormal Returns, and an engaged community that actively promotes and shares quality and feels that it’s important: AVC, Hacker News, the better subreddits, Open Culture, Arts and Letters Daily, Brain Pickings, etc.

Ray Dalio seems to think a new market design with some kind of regulatory component is needed. I don’t know what he has in mind, but one could imagine a self-regulatory body that agrees on journalistic standards, along the lines of the CFA Institute’s code of ethics: clearly separating fact from opinion, having a reasonable basis for any statement of fact or conclusion, promptly correcting errors, etc. People and media outlets that meet the standards get a seal of approval; the body investigates violations and censures or even expels those that fall short. Sort of a credit rating agency for journalists.

I don’t really think that is compatible with the American tradition and sense of a free press, or that, say, the New York Times would submit to a regime like that. But Dalio is not wrong either: we need better consensus on what we expect from journalists, and tools to signal credibility and hold people accountable.

David Siegel and Cathy O’Neil think personalization and big data are the problem.

Siegel frames the problem as one of micro-personalization. The picture I have is this: in the old days everyone watched Cronkite and read the New York Times, and elite media institutions set the agenda. With the Internet and cable TV, the media fragmented. Everyone is more receptive to outlets that reflect their own values and point of view, and gravitates toward them. But the more you hear only news and points of view that confirm your own biases, the stronger those biases get. And personalization and news recommendations are the ultimate silo, the ultimate filter bubble: you only hear the news you like to hear. The end result is a singularity of polarization, where anything that doesn’t toe a narrow line triggers cognitive dissonance and a strong emotional reaction of ‘OMG mainstream media bias’/‘fake news.’

This seems completely plausible. But the root issue is fragmentation, not personalization algorithms per se. You can make the algorithm optimize for whatever you want, for instance, surfacing quality from a variety of points of view (a sketch of what that might look like is below). The algorithm genie is not going back in the bottle. To the extent it’s an algorithm problem, the answer is to improve the algorithm.
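For concreteness, here’s a toy Python sketch of a ranker that optimizes for quality plus viewpoint diversity rather than pure engagement. The quality scores and viewpoint labels are made-up illustrative data, not anything from a real system:

```python
# A minimal sketch of a feed ranker that trades off quality against
# viewpoint diversity. Quality scores and viewpoint labels are
# hypothetical; a real system would estimate them from data.

from dataclasses import dataclass

@dataclass
class Story:
    title: str
    quality: float      # hypothetical 0-1 quality/credibility score
    viewpoint: str      # e.g. editorial leaning of the outlet

def rank_feed(stories, k=3, diversity_weight=0.3):
    """Greedily pick k stories, penalizing viewpoints already shown."""
    chosen, seen = [], {}
    candidates = list(stories)
    while candidates and len(chosen) < k:
        def score(s):
            # Each repeat of a viewpoint costs diversity_weight.
            return s.quality - diversity_weight * seen.get(s.viewpoint, 0)
        best = max(candidates, key=score)
        chosen.append(best)
        seen[best.viewpoint] = seen.get(best.viewpoint, 0) + 1
        candidates.remove(best)
    return chosen

stories = [
    Story("Story A", 0.90, "left"),
    Story("Story B", 0.85, "left"),
    Story("Story C", 0.80, "right"),
    Story("Story D", 0.70, "center"),
]
for s in rank_feed(stories):
    print(s.title, s.viewpoint)
```

The diversity_weight knob is where a platform would express the trade-off between echo chamber and variety.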

O’Neil’s “Weapons of Math Destruction” point is that any manipulation that can fool humans can fool the algorithms: we’re stuck with misinformation, and the solution is to turn to trusted sources. Hmm… what happens if sinister forces fool your trusted source, or make you think a source can be trusted when it has been bought and sold? And if I only believe stuff once it gets to Bloomberg, I’m ignoring a lot of information. But crude manipulation of the form ‘Pope endorses Trump’ is easy for the top quartile of readers to detect, so it should be feasible for personalization algorithms to filter it out based on where the news came from, the share of credible sources spreading it, and spam reports (see the sketch below). Just as Google filters spam, news algorithms should filter egregiously fake news.
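As a rough illustration, here’s what that kind of signal-based filter might look like. The credibility scores, the default for unknown sources, the weights, and the cutoff are all assumptions for the sake of the sketch; a real system would learn them from data:

```python
# A minimal sketch of a fake-news filter that scores a story by the
# credibility of who published it and who is spreading it, minus spam
# reports. All scores, weights, and thresholds are illustrative.

SOURCE_CREDIBILITY = {               # hypothetical prior scores per outlet
    "bloomberg.com": 0.95,
    "nytimes.com": 0.90,
    "randomblog.example": 0.20,
    "denverguardian.example": 0.05,  # hypothetical fabricated-news site
}

def story_score(origin, spreaders, spam_reports, views):
    """Blend origin credibility, spreader credibility, and spam rate."""
    origin_cred = SOURCE_CREDIBILITY.get(origin, 0.3)  # default for unknowns
    if spreaders:
        spreader_cred = sum(
            SOURCE_CREDIBILITY.get(s, 0.3) for s in spreaders
        ) / len(spreaders)
    else:
        spreader_cred = 0.0
    spam_rate = spam_reports / max(views, 1)
    # Weighted blend; anything below the cutoff gets flagged.
    return 0.5 * origin_cred + 0.4 * spreader_cred - 2.0 * spam_rate

score = story_score(
    origin="denverguardian.example",
    spreaders=["randomblog.example"],
    spam_reports=120,
    views=10_000,
)
print("likely fake" if score < 0.4 else "ok", round(score, 2))
```

The point is just that the signals the top quartile of readers use (who published it, who is spreading it, who is flagging it) are all available to an algorithm.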

Powerful forces will be using big data and machine learning to manipulate you. But big data and machine learning are also very powerful tools to fight fake-news spam and help surface quality. We should be doing that instead of decrying personalization: building a market design that is more resistant to manipulation and provides tools to surface quality.

I think if we could somehow build a pervasive ‘pay it forward’ karma and reputation ecosystem that rewarded people for sharing quality and burying fake news and garbage, and somehow recover some of the old sharing ethic, it would go a long way (a toy sketch follows below). That’s what I’d be thinking about if I were a VC or an online community entrepreneur: tools to let people signal quality, build credibility, and fight the noise machines.
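To make the idea concrete, here’s a toy sketch of such a karma loop, where votes are weighted by the voter’s reputation and karma accrues to people whose calls later prove right. The names, weights, and starting values are all illustrative assumptions:

```python
# A minimal sketch of a 'pay it forward' reputation loop: users earn
# karma when items they promoted (or buried) are later judged good (or
# bad), and votes are weighted by the voter's current karma.

from collections import defaultdict

karma = defaultdict(lambda: 1.0)   # every user starts with baseline karma
votes = defaultdict(list)          # item -> [(user, +1 promote / -1 bury)]

def vote(user, item, direction):
    votes[item].append((user, direction))

def item_score(item):
    """Karma-weighted score: trusted users count for more."""
    return sum(karma[u] * d for u, d in votes[item])

def settle(item, turned_out_good):
    """Once the ground truth emerges, reward users who called it right."""
    outcome = 1 if turned_out_good else -1
    for user, direction in votes[item]:
        karma[user] += 0.1 * direction * outcome  # right call earns karma
        karma[user] = max(karma[user], 0.1)       # floor, never fully zero

vote("alice", "deep-dive-article", +1)
vote("bob", "pope-endorses-trump", +1)
vote("carol", "pope-endorses-trump", -1)
settle("deep-dive-article", turned_out_good=True)
settle("pope-endorses-trump", turned_out_good=False)
print({u: round(k, 2) for u, k in karma.items()})
print(round(item_score("pope-endorses-trump"), 2))  # now negative: buried
```

Rewarding calls that later prove right is the ‘pay it forward’ part: reputation compounds for people who consistently surface quality and bury garbage.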

It’s a huge problem, and solving it would be huge for democracy and free-market capitalism. That being said, if a large chunk of people want to tear everything down, you’re not going to make progress building trust and credibility.

And a fish rots from the head. If you’re led by someone with a habit of lying, denying reality, and calling everything he doesn’t like “fake news,” you’re going to continue to have a crisis of legitimacy and of reality-based institutions.