The Framers and the Framed: Notes On the Slate Star Codex Controversy

Let’s talk about the grand Slate Star Codex brouhaha.

A lot of people have already written about this. Here is the original New York Times piece that started the controversy. [1] Against the Gray Lady we have Cathy Young, Robby Soave, Micah Meadowcroft, Matthew Yglesias, Freddie DeBoer, Scott Aaronson, Noah Smith, and Dan Drezner, as well as Scott Alexander himself. [2] The most compelling brief in the Gray Lady’s favor was written by Elizabeth Spiers, but Will Wilkinson and Elizabeth Sandifer have weighed in as well.[3] Gideon Lewis-Kraus’ New Yorker essay from last July is probably the best “neutral” piece that has been written yet; if you do not know anything about Slate Star Codex or why so many people are writing about it now, start there. Sebastian Benthall’s commentary is also a very good middle-ground analysis.[4]

I am sure there has been a great deal of debate on Twitter as well. I have not read it and thus cannot link to it: I unfollowed everyone on Twitter except a handful of newspapers and thus dwell in blissful ignorance. Indeed, from the perspective of one slowly letting go of Twitter, following this debate has been great fun. Linking to all those blogs and substacks feels like reliving a memory from an older, better internet.

The least charming thing about this entire debate is how every participant feels compelled to declare their loyalties before they state whatever they actually have to say. I’m not going to make my general readership slog through that kind of thing here; if you are the type who must know every intersection between my personal biography and the worlds of journalism or rationalist blogging, you will find that information in the fifth footnote.[5]

What sticks out to me when reading all of these pieces, aside from the biographical digressions, is that the participants are not actually debating the same thing. There are a half dozen separate questions being fought over. Some folks have a pressing interest in conflating them with each other. I do not think this is helpful.

At a minimum, these questions include:

1) Was it ok to “out” Scott Alexander’s true identity as Scott Siskind?

2) Did this specific New York Times article (“Silicon Valley’s Safe Space”) misrepresent the content of Slate Star Codex, the contours of the broader rationalist community, or the nature of their connection with Silicon Valley?

3) Assuming things were misrepresented, why did that happen? Was it a premeditated “hit job” or revenge piece? Or is there a better explanation for what happened than that?

4) Do journalists have the right to uproot the lives of their subjects with negative coverage? Do communities so targeted have the right to impose costs on journalists (say, by harassing them on Twitter or flooding their inboxes) who are “just doing their job?” (A simpler way of phrasing this question: who is “punching up” here?)

5) Is this a controversy specific to the New York Times, or does the incident point to broader problems in the way American journalism works as a whole?

6) Do powerful figures in the consumer tech sector really expect journalists to play the role of a glorified PR agent? (Or to flip the question around: are journalists unfairly biased against tech?)

7) Does the entire Slate Star Codex affair prove the Silicon Valley decentralist argument right? Has the time come to overthrow old “East Coast” hierarchies and replace them with new “West Coast” institutions?

Now look, if you are Balaji Srinivasan you are going to want a negative answer on question #1 to translate to a positive answer on question #7. Given his chosen project I cannot fault him for trying to equate one with the other. But in truth question #7 is not the same as question #1. You can be an East Coast climber and still view this particular piece as libelous. Or you can think Alexander had unrealistic expectations for personal privacy while also believing journalists are biased against tech. We have reduced these separate threads into one big debate: who is the real enemy here? This sort of Schmittian friends-and-enemies game is stupid. We are smarter than this.

Let us go through these questions one-by-one. Some of these questions are more interesting than the others. I will not be giving them equal space. I cannot promise I will finish them all today, but may instead defer some of the questions to a second post to keep this at a readable length.

Question One: Was it ok to “out” Scott Alexander’s true identity as Scott Siskind?

Of course not. Look: both psychiatrists and online community leaders have a certain sort of relationship with their clients and followers. By design these relationships are bounded in a specific domain and are wildly inappropriate outside of those bounds. With a big online following comes a host of parasocial relationships. These relationships are often, for lack of a better word, creepy. It would not be good for Scott’s office if his parasocial hangers-on (be they his most ardent haters or his most rabid fans) showed up looking for treatment.

Likewise, the relationship between psychiatrists and their clients is inherently imbalanced. A client may view their psychiatrist as their friend, but psychiatrists are not friends. By design it is a one-way relationship. In the world of counseling, clients who have an outside connection to their therapist, counselor, psychiatrist, or so forth are described as being part of a “dual” or “multiple relationship”—an ethical no-no for the profession.

While none of the various psychological codes of ethics specifically mention parasocial relationships in their lists of improper dual relationships, it should be obvious to anyone who thinks about the issue for 60 seconds that a psychiatrist cannot have a healthy working relationship with his or her client when said client has a parasocial attachment to them (especially when said psychiatrist regularly blogs about his experiences treating patients).

This isn’t hard folks. Scott created two separate identities because—irrespective of the actual content of his writing—the role of “psychiatrist” and “online community leader” cannot be played by the same person at the same time.

I do not think this solution was ever sustainable in the long term. At some point Scott would have to choose which role he wanted to play full time. But that should have been his decision. What right did the New York Times have to make that decision for him? Where do they get the moral authority to decide when that decision must be made? A great deal of the anger directed at the Times comes back to that question: just who are they to make that call—and to make it for the sake of a C-section column most Times readers will have forgotten about a week after they read it?

Question two: Did this specific New York Times article (“Silicon Valley’s Safe Space”) misrepresent the content of Slate Star Codex, the contours of the broader rationalist community, or the nature of their connection with Silicon Valley?


Lots of folks have discoursed on this one already. I have little original to add and will not rechew others’ cud. I will only note that the quest to connect the rationalists—who are not representative of Silicon Valley writ large and have far, far less influence there than the Times would have its readers imagine—to the broader sins of big tech is in part an attempt to pre-empt the question I posed above. “If we can explain why Google is sexist, surely that will give the New York Times the right to break apart this one man’s life for the sake of a column, right?” On the other hand, if Slate Star Codex’s influence is limited to claiming a handful of tech billionaires and a few dozen Heterodox Academy types among its readership, the justification for forcing Scott to choose between his career as a psychiatrist and his success as an internet writer begins to fade away. [6]

When I read through the debates on this question I often want to ask: did y’all read Gideon Lewis-Kraus’ New Yorker piece on Slate Star Codex from last year? His piece is everything Cade Metz’s Times article is not. Lewis-Kraus has a far stronger grasp of what the rationalist movement is all about, the types of personalities attracted to it, its actual relationship with the tech world, and where it fits into the broader story of American intellectual life. Because Lewis-Kraus did his homework, his inevitable critiques of the community and their favorite blog land. The rationalist response to that piece was muted: it did not prompt a flurry of angry blogposts and twitter threads. Most Codex readers did not agree with all of it but accepted it as a fair piece of journalism—and the difference in that reaction tells us something about whether the Slate Star Codex audience’s expectations are really that crazy. For my part, I see Lewis-Kraus’ essay as the best evidence that Metz and his editors overstepped their bounds, engaging in journalistic malpractice—or something very close to it.

3) Assuming things were misrepresented, why did that happen? Was it a premeditated “hit job” or revenge piece? Or is there a better explanation for what happened?

So why did this happen? Here is where I break ranks with the rationalists: all the talk about “hit jobs” is silly and conspiratorial. Prestige media will play this game with very important people—CEOs, politicians, generals, and the like.[7] Scott Alexander is not at that level. Not even close. Scott Aaronson might think that the demise of Slate Star Codex would be “an intellectual loss on the scale of, let’s say, John Stuart Mill or Mark Twain burning their collected works,” but just about no one outside of the rationalist community would agree with him. [8]

On this account Elizabeth Spiers is absolutely correct:

“Only in a bubble as insular and tiny as the SSC community would this theory be even remotely plausible. To put this in context: SSC is influential in a small but powerful corner of the tech industry. It is not, however, a site that most people, even at The New York Times, are aware exists—and certainly, the Times and its journalists are not threatened by its existence. They are not out to destroy the site, or “get” Scott, or punish him. At the risk of puncturing egos: they are not thinking about Scott or the site at all. Even the reporter working on the story has no especial investment in its subject. That reporter is also probably working on six other stories at the same time, thinking about their friends, family, what their kid needs to do in Zoom school tomorrow, the book they want to read, whether Donald Trump will get arrested, whether rats dream of boredom. They do not sit around thinking about how they’re going to “get” people they write about, and when subjects think they do, it’s more a reflection of the subject’s self-perception (or self-importance) and, sometimes, a sprinkling of unadulterated narcissism.” [9]

Now this does not excuse Metz’s shoddy reporting—in a way it makes it even less ethical, a point we will return to later—but it does change the nature of the problem. If only this saga were a matter of malice or an empty chase for clicks! [10] That would be easy to solve. Unfortunately the central problem here is larger, and more difficult to fix.

Not to get too meta, but it might be useful to think of the issue like this: every person on Earth perceives not the Earth itself but a representation of the Earth that their brain has built. Walter Lippmann explained it a century ago:

The real environment is altogether too big, too complex, and too fleeting for direct acquaintance. We are not equipped to deal with so much subtlety, so much variety, so many permutations and combinations. And although we have to act in that environment, we have to reconstruct it on a simpler model before we can manage it.[11]

Azar Gat made a similar observation more recently:

In order to cope with their environment, humans strive to identify, understand, and explain the forces operating within and behind it, so that they can at least predict, and if possible, also manipulate these forces and their effects to their advantage. They are predisposed to assume that such forces are there. With respect both to their natural and human environment, humans achieved impressive successes in using these methods. The quest for an understanding thus evolved into a fundamental human trait. Humans must have answers as to the reasons and direction of the world around them. Stretching this faculty the furthest, humans have a deep emotional need for a comprehensive interpretive framework, or set of interpretive ‘stories’ that would explain, connect the various elements of, and give meaning to their world and their own existence within it. They need a cognitive map of, and a manipulative manual for, the universe, which by lessening the realm of the unknown would give them a sense of security and control, allay their fears, and alleviate their pain and distress. [12]

This is true for even the most concrete of human experiences: the hunter, logger, and geologist will walk through the same patch of wilderness and see an entirely different forest, for each eye is trained to notice something different. The more abstract the things observed, the greater the individual variance will be. For intangible social processes like market exchange, mass movements, and elections, our understanding is all model, no matter. [13]

This need to reduce reality to a simple mental model is an inherent feature of human cognition. For the most part it is done automatically, without much thought. We cannot avoid simplification—we speak of London doing this or China doing that not because such simplifications are true (there is no unitary agent named “London” or “China” doing anything) but because it is impossible to act in a complex world without such shortcuts.[14]

The problems of journalism are the problems of cognition on steroids. For the journalist, historian, or social scientist, the drive to reduce is acute and explicit. On top of the normal simplification we all do unconsciously, nonfiction writers must reduce twice more: The first round of reduction comes with investigation. Any subject is too large to be understood in toto. The investigator must decide where to focus her efforts, how to spend limited time, what sources to consult, what questions to ask, and what sort of evidence to be on the lookout for. Many of these things are not explicitly decided, but are forced upon the investigator by the nature of her tools and sources or by her preconceived sense of what is notable and what is not.

The second round of simplification, just as inherent to the journalistic enterprise as the first, is built into the act of writing. The investigator has collected in her brain more than can ever be put on a page. Journalists in particular must condense what they have learned into a very small space. This double reduction process is often described as “framing” a story. Reducing an entire movement—the histories, controversies, disagreements, defeats, glories, and quirks of thousands of unique individuals—to one comprehensible frame will always cut important things out. It is inevitable that some members of the covered group will be dissatisfied with the frame they have been forced into.

This process, far more than any explicit ideological agenda, is the source of most bias in journalism. This source of bias cannot be escaped. Stories without a frame are just an incoherent collection of facts too long and too varied to fit on a page. The bias imposed by framing is necessary—and sometimes even a good thing.

I am here reminded of my Chinese friends who complain about Western journalists’ disproportionate focus on dissident and human rights stories in China. My friends are right to complain, in their own way: the story of China’s persecuted minorities is only a bit part in the vast universe of experiences and events that is China. I don’t mind this particular bias, however. Nor do I find the dominance of diplomatic, security, and macroeconomic stories about China particularly distressing. Journalists and their editors carry with them a set of a priori beliefs on what is actually important (“newsworthy”). In this case I find it difficult to argue that these beliefs are wrong.

The trouble comes when attachment to a given frame leads journalists into misperceiving their subjects, forcing them into a framework that does not really fit them. If you are primed to think of internet subcultures through the gamergate frame, gamergate is all you will ever find. In the terminology of the rationalists, it is a problem of “priors.” All that was required for a mess like this was a writer with wildly different priors and tight time demands to come into contact with a community they only had a superficial understanding of. No active malice is necessary.

Unfortunately, if this is a problem inherent to journalism, the particular practices of the New York Times editorial team aggravate the issue. Listen to one ex-Times editor compare his time at the Gray Lady to his earlier career at the Los Angeles Times:

For starters, it’s important to accept that the New York Times has always — or at least for many decades — been a far more editor-driven, and self-conscious, publication than many of those with which it competes. Historically, the Los Angeles Times, where I worked twice, for instance, was a reporter-driven, bottom-up newspaper. Most editors wanted to know, every day, before the first morning meeting: “What are you hearing? What have you got?”

It was a shock on arriving at the New York Times in 2004, as the paper’s movie editor, to realize that its editorial dynamic was essentially the reverse. By and large, talented reporters scrambled to match stories with what internally was often called “the narrative.” We were occasionally asked to map a narrative for our various beats a year in advance, square the plan with editors, then generate stories that fit the pre-designated line. 

Reality usually had a way of intervening. But I knew one senior reporter who would play solitaire on his computer in the mornings, waiting for his editors to come through with marching orders. Once, in the Los Angeles bureau, I listened to a visiting National staff reporter tell a contact, more or less: “My editor needs someone to say such-and-such, could you say that?” [15]

All humans naturally tack new developments to pre-existing mental models. Far from fighting this mental tic, the New York Times mandates that its reporters explicitly commit their reporting to certain narrative arcs before they have even begun to really investigate! This is not usual journalistic practice. It has predictable consequences. About two years ago the New York Times signaled that an increasing percentage of their reporting would be devoted to a new guiding narrative. That narrative goes something like this: “shed light on the ideas, institutions, and personalities that exacerbate racial and gender inequity in American life, creating the sort of world where Donald Trump can be president.” Times reporters went searching for stories that might fit the bill. Little wonder that they have found so many!

I suppose if your sole definition of “newsworthy” is “something that exacerbates racial and gender inequity in American life” then you will struggle to see what was wrong with this decision. If that is your definition of “newsworthy,” fine, I can’t criticize you—it isn’t really any different from my finding security stories the most newsworthy things to come out of China. Just be open about the trade-offs of this approach. A journalist who conceives of her beat in terms of a predetermined frame will end up finding stories that fit it—but inevitably some stories will be shoehorned into a framing they poorly fit.

Question four: Do journalists have the right to uproot the lives of their subjects with negative coverage? Do communities so targeted have the right to impose costs on journalists (say, by harassing them on Twitter or flooding their inboxes) who are “just doing their job?”

Historians face all the same problems that journalists do. But when historians debate framing and bias the stakes are low. At the end of the day, their debates are strictly academic. Journalists deal instead with living, breathing people. On this front, many of the folks manning the battlements at high prestige publications lack self-awareness. It can be difficult to get them to understand the true nature of their work. Whatever else it might aspire to be, journalism is an exercise in power. No one who wields power should be surprised when those subjected to it resist.

Part of the problem is that full-time writers thrive in the limelight. For a public intellectual like Will Wilkinson all press is good press; he lives in a world of ceaseless self-promotion, and is not properly situated to understand what being targeted by an international media outlet feels like for folks outside of that world. In her book Liquidated, anthropologist-cum-Wall Street trader Karen Ho makes a parallel observation about investment bankers: thriving in a career marked by transience and risk, socialized to believe it is normal to be let go at any time, they have little compunction about reshaping corporate America in their own image and even less sympathy for Americans not as adept at dealing with job insecurity as they are. [16] Fish will never understand fear of deep water.

To make things clear for the fish: humans have a strong, perhaps even innate, need to tell their own story. This is what makes social media so addicting—it allows you to endlessly curate your own self-image, forever perfecting your personal story for others’ consumption. Losing the ability to tell our own story feels like loss of agency—a violation. The teenage girl subject to a high school whisper campaign feels violated even though no one has touched her. I will not call it an act of “violence,” though some have.[17] “Violation” describes the experience well enough.

Recognizing this is a natural, human experience does not mean all humans have a natural right to narrative control. But it does mean that journalists must be more sensitive to the nature of their enterprise—especially if they write for an outlet with the reach of the New York Times. A story like this may very well be, as Spiers argues, just one of “six other stories” a journalist is “working on… at the same time,” of less interest to them than whether “rats dream of boredom.” It may just be a few hundred words out of hundreds of thousands the journalist will write, the end product of a few humdrum hours on the daily grind. Fine. But for the subjects of a piece—say for the 7000 rationalists committed enough to their community to take the annual seventy-plus question SSC questionnaire—this sort of thing is a once-in-a-lifetime event.

A journalist descends from the New York Times like an immortal from Olympus. If you read your myths you know how those moments go. The Olympians have their fun. They then return heavenward as glowing and unsoiled as they descended. It is for the mortals left behind to bear the scars of their exchange. 

Journalists resist this message. They have so long internalized that their role is “speaking truth to power” that they fail to see when they are the power. So let me be explicit: if you have a staff position at the New York Times you are the power. When you are writing for that paper you have the power to determine what millions of people will think about an individual, movement, or event. You have the power to decide the first thing people will find when they Google search your subject—forever. That is power. If every piece you file does not have you in awe at your own responsibility, you are doing this wrong. Unless your subject sits atop a corporate hierarchy, is an elected politician, an appointed high official, or a commanding general, has a net worth of more than $5 million, or enjoys a loyal following of similar scale, your critical coverage is “punching down.” That is simply the nature of the institution you work for.

This fact can be obscured by tech titans eager to co-opt incidents like these as part of their own struggle. Just as the rest of us do, folks like Srinivasan interpret events like these through an a priori frame. As is true of almost all of us, these frames can be self-serving. Srinivasan’s cause has something to gain by reinterpreting “the New York Times causes controversy with lazy reporting” as “legacy East Coast institutions continue their unrighteous crusade against Silicon Valley upstarts.” The Times’ own reporting choices lend themselves to such a reading, of course, but that does not mean this is actually what happened. The powers that be would like it if this was what was happening: Times journalists want to pretend they are adroitly interrogating big tech; tech titans want to pretend they slay the vampire institutions of America’s rentier class. Behind the pomp is a more pedestrian reality: to fit his reporting into a pre-existing frame, a sloppy journalist mischaracterized an internet subculture.

Few members of that subculture ever wished to be proxies in this battle. Mortals know that when Olympians feud, it is never Olympians who die. Besides, these people have their own concerns. It may be worthwhile to list some of these out. It will be a good reminder of what the rationalist community is actually about:

  • Slightly less than a third of Slate Star Codex readers have anxiety; slightly more than a third have depression.
  • Just under a fourth of Slate Star Codex readers are on the autism spectrum (or believe themselves to be).
  • 10% of Slate Star Codex readers have attempted suicide. Another 26% have seriously considered it. Two thirds of those who attempted suicide regret that their attempt did not succeed. [18]

Behind the pretensions to rationalist perfection is a community of people acutely aware of their own atypicality. Sebastian Benthall calls it a “therapeutic community.”[18] He is right to do so. At the end of the day, rationalism is a giant support group—a philanthropically minded attempt to provide that transcendent sense of community and meaning “normies” find at church.

Like all moral communities, its members react defensively when violated by outsiders. There is nothing surprising or wrong about that.

It sucks if you are the journalist involved, of course. And I concede that sometimes it might be necessary, and even good, for journalists at a high-prestige publication to “punch down.” Given the harm they are about to inflict, however, these journalists bear a special responsibility to make sure they know what they are talking about. But how to ensure that they take this responsibility seriously? How to keep their violations in bounds?

The easiest answer, the only practicable one that doesn’t involve vast internal reforms to institutions like The New York Times or a complete transformation of the American media ecosystem, is something like the fabled “delicate balance of terror.” Tit meets tat; lazy harms are matched by rather more intentional ones. From this perspective it is hard to be sympathetic to Times editors upset with the torrent of angry e-mails they have received in response to all this. They are in a position of terrible power. If communities like these are not ready to defend themselves, who else will keep those wielding this power honest?

Well, that is it for tonight, folks. I have things yet to say about the destructive tendencies of America’s media ecosystem, which is too centered around one big newspaper; thoughts on why technologists and journalists are talking past each other when they debate “bias” in negative tech coverage; and some skeptical swipes at the assumptions behind the proposed tech rebellion against everything East Coast. But this post is already several thousand words long. My thoughts on these other questions will be published next week.

EDIT 26 February 2021: Balaji Srinivasan requested I remove the phrase “tech secessionism” from this post as it does not accurately describe the aims of his program, instead being imposed on it by outside observers (New York Times journalists!) who misconstrued its aims. I am happy to change these references to his preferred moniker (“decentralists”). But more on that next week!


Readers who enjoyed this post might find my other takes on internet discourse of interest: see the posts “The World Twitter Made,” “Why Writers Feud So Viciously,” “On the Angst of the American Journalist,” and “Public Intellectuals Have a Short Shelf Life — But Why?” To get updates on new posts published at the Scholar’s Stage, you can join the Scholar’s Stage mailing list, follow my Twitter feed, or support my writing through Patreon. Your support makes this blog possible.


[1] Cade Metz, “Silicon Valley’s Safe Space,” New York Times (13 February 2021). 

[2] Robby Soave, “What The New York Times’ Hit Piece on Slate Star Codex Says About Media Gatekeeping,” Reason (15 February 2021); Micah Meadowcroft, “On Writing Around Censors,” The American Conservative (20 February 2021); Matthew Yglesias, “In Defense of Interesting Writing on Controversial Topics,” Slow Boring (13 February 2021); Freddie DeBoer, “Scott Alexander is not in the Gizmodo Media Slack,” personal weblog (15 February 2021); Scott Aaronson, “A grand anticlimax,” Shtetl-Optimized (14 February 2021); Noah Smith, “Silicon Valley Isn’t Full of Fascists,” Noahpinion (13 February 2021); Dan Drezner, “Everything Old is New Again in Mainstream Media,” Washington Post (17 February 2021); Scott Siskind, “Statement on the New York Times Article,” Astral Codex Ten (13 February 2021).

[3] Elizabeth Spiers, “Slate Star Clusterfuck,” My New Brand Is (14 February 2021); Will Wilkinson, “Gray Lady Steelman,” Model Citizen (19 February 2021); Elizabeth Sandifer, “The Beigeness, or How to Kill People with Bad Writing: The Scott Alexander Method,” Eruditorum Press (20 February 2021).

[4] Gideon Lewis-Kraus, “Slate Star Codex and Silicon Valley’s War Against the Media,” The New Yorker (9 July 2020); Sebastian Benthall, “Social justice, rationalism, AI ethics, and libertarianism,” Digifesto (21 February 2021).

[5] OK, here we go. 

Although I have done a bit of journalism myself, I am best described as an essayist. My work has been published in numerous magazines and journals of medium prestige. I have never written for the New York Times. But I make my living as a writer, and the writer’s milieu is now my own. In addition, I am personal friends with many members of the China reporting corps and editors for major newspapers and magazines in the Asia trade. I go to journalist wedding showers and play board games with them over pizza. I have had more debates on issues like these with them in person than I can count. 

As for the rationalist side of the equation—I am not a rationalist nor would anyone who has read me for a long time consider me in their number. I am not even part of the broader “grey tribe.” I read poetry, not code. I am a red tribe member with the basic social attitudes and opinions you would expect to find in the type of person who voted for George W. Bush in 2004 or John McCain in 2008. I like Scott’s blog, but never read it religiously, and never bothered commenting on it after he put in the fancy log-in system to filter out bad comments.

 Despite not being a proper Scott Stan, I was one of the five people who helped organize the 2020 petition imploring the NYT editors to refrain from releasing Scott’s true identity. I did this because I don’t think it is right for a major newspaper to make an anonymous writer choose between writing online and their existing profession, and because I viewed Slate Star Codex and its long comment threads as an exemplary model of reasoned debate and humane liberalism, something America has in short supply these days. 

[6] As an aside, were I the one trying to be “critical” of the rationalists, I would write much less about how their comment threads are hostile to feminism—which isn’t true in any case; there were more feminists commenting in those threads than there ever were neoreactionaries—and more about how they continually agitate for people to give part of their income to stopping Skynet. I kid you not. If this is not a grift, what is? How did that not make it into the article while all this nonsense about the Silicon Valley psyche did?

Well, we know the answer to that. The absurdity of the AI risk project is so far outside of the NYT‘s existing narrative frame that this detail did not even register.

[7] Cue Hamilton Nolan, public editor for the Washington Post:

Journalism, particularly at the highest level, is about raw power. It is about bringing important people to heel, on behalf of the public. Politicians and officials and business leaders don’t want to talk to the press, subjecting themselves to the possibility of being made to look bad; they do it because they have always felt they had no choice. They felt that way because papers like the Post could offer the carrot of great exposure to those who needed it, but also, always, the stick of negative coverage to those who spurned them. There is nothing devious or ignoble about this; a powerful press, for all its flaws, is good for democracy, and tends to promote equality by holding the big shots in check. Anyone who has ever negotiated to land a contentious interview with a famous person knows that you only get those interviews when your subject fears what will happen if they don’t do the interview. Today, that fear is disappearing. We all need to figure out what to do about that. 

Hamilton Nolan, “Washington Post public editor: The powerful have realized they don’t need the Post,” Columbia Journalism Review (20 October 2020).

[8] Scott Aaronson, “Pseudonymity as a trivial concession to genius,” Shtetl-Optimized (23 June 2020).

[9] Spiers, “Slate Star Clusterfuck.” 

[10] This idea that the New York Times publishes things “for the clicks” is common but inaccurate. Vox publishes things for the clicks. The New York Times, like other top-tier publications such as The New Yorker or the Washington Post, makes its money from subscriptions and side services—like that high school trip to Peru that got a certain Times reporter fired. The New York Times is rolling in dough, and that dough has nothing to do with the virality of any given article. In fact, there is a good chance that uber-viral articles cost the paper more readers than they gain. No one subscribed because of 1619 or the Cotton op-ed, but a lot of people did unsubscribe because of them!

Likewise, most writers, regardless of publication, care very little about their hit counts. At most publications individual writers are not even told site traffic stats for individual pieces. Only in rare cases is payment tied to popularity. What motivates writers and journalists is not clicks but prestige. They measure their self-worth through the esteem of their fellow writers, and write to that end. For more on this see my post “Why Writers (And Think Tankers) Feud So Viciously.”

[11] Walter Lippmann, Public Opinion (New York: Harcourt, Brace and Company, 1922), pp. 8–9.

[12] Azar Gat, War in Human Civilization (Oxford: Oxford University Press, 2006), p. 101.

[13] You know, gorilla experiment and all that jazz.

[14] Pascal Boyer, Minds Make Societies: How Cognition Explains the Worlds Humans Create (New Haven: Yale University Press, 2018).

[15] Michael Cieply, “Stunned By Trump, The New York Times Finds Time For Some Soul-Searching,” Deadline (10 November 2016).

[16] Karen Ho, Liquidated: An Ethnography of Wall Street (Durham: Duke University Press, 2009).

[17] See for example this statement in Benthall, “Social justice, rationalism, AI ethics, and libertarianism”:

When the NYT notices something, for a great many people, it becomes real. NYT is at the center of a large public sphere with a specific geographic locus, in a way that some blogs and web forums are not. So whether it was justified or not, Metz’s doxing of Siskind was a significant shift in what information was public, and to whom. Part of its significance is that it was an assertion of cultural power by an institution tied to old money in New York City over a beloved institution of new money in Silicon Valley. In Bourdieusian terms, the article shifted around social and cultural capital in a big way. Siskind was forced to make a trade by an institution more powerful than him. There is a violence to that.

I agree with the sentiment and find it well expressed, but I insist on restricting the word “violence” to physical acts of coercion and injury.

[18] Slate Star Codex Reader Survey 2020, results available here:

 [19] The entire passage is worth quoting:

 Certainly in elite intellectual circles, the idea that participation in a web forum should give you advanced powers of reason that are as close as we will ever get to magic is pretty laughable. What I think I can say from personal experience, though, is that elite intellectuals seriously misunderstand popular rationalism if they think that it’s about them. 

….Many of these people (much like Elizabeth Spiers, according to her piece) come from conservative and cloistered Christian backgrounds. Yudkowsky’s Harry Potter and the Methods of Rationality is their first exposure to Bayesian reasoning. They are often the best math students in their homogeneous hometown, and finding their way into an engineering job in California is a big deal, as is finding a community that fills an analogous role to organized religion but does not seem so intellectually backwards. I don’t think it’s accidental that Julia Galef, who founded CFAR, started out in intellectual atheist circles before becoming a leader in rationalism. Providing an alternative culture to Christianity is largely what popular rationalism is about.

From this perspective, it makes more sense why Siskind has been able to cultivate a following by discussing cultural issues from a centrist and “altruistic” perspective. There’s a population in the U.S. that grew up in conservative Christian settings, now makes a living in a booming technology sector whose intellectual principles are at odds with those of their upbringing, is trying to “do the right thing” and, being detached from political institutions or power, turns to the question of philanthropy, codified into Effective Altruism. This population is largely comprised of white guys who may truly be upwardly mobile because they are, relative to where they came from, good at math. The world they live in, which revolves around AI, is nothing like the one they grew up in. These same people are regularly confronted by a different ideology, a form of left wing progressivism, which denies their merit, resents their success, and considers them a problem, responsible for the very AI harms that they themselves are committed to solving. If I were one of them, I, too, would want to be part of a therapeutic community where I could speak freely about what was going on.

Benthall, “Social justice, rationalism, AI ethics, and libertarianism.”



Yes, this was a hit piece. Not specifically because Scott is that important, but because that's the standard way the modern mainstream press currently interacts with anyone not far left.

To paraphrase Trotsky: you may not be interested in the Schmittian friends-and-enemies game, but the Schmittian friends-and-enemies game is interested in you.

You are in a culture war whether you like it or not, and one side has no intention of leaving you with any freedom of speech or thought.

Scott and the rationalists are now belatedly realizing—even if they have to be dragged kicking and screaming to the realization—that the media is not on their side and that groveling before it won't help. You still appear not to have realized this, judging by the fact that the newspapers were the last rather than the first Twitter accounts you unfollowed.

This post conforms suspiciously well to my interpretations of the whole situation as well, but that quote contrasting the LA Times to The New York Times is just so good.

My take on this was that, if The New York Times covered any subculture that I am involved in (death metal, Rationality, CrossFit), I would expect the article to kind of miss the point and inappropriately focus on the most controversial aspects of the subculture.

While I was surprised by (what I perceived to be) bad faith in the article, I wasn't surprised that the New York Times kind of missed the point, inappropriately focused on the most controversial aspects of SSC, and attempted to compress the whole thing into some weird narrative about tech companies and free speech.

While I enjoyed the explanation of a hit job vs a lazy job in the general case, it seems clear to me that in this specific case they were working on a piece that was probably going to be milder, stumbled into the anonymity issue, spiked the piece because of the outrage, then brought it back as a hit job after Scott decided to explicitly and voluntarily give up his pseudonym.

Pretty good piece. My only complaint is that you are interpreting "hit piece" too narrowly. Pace Spiers, I don't think many of us thought the NYT regarded Scott as a threat that had to be destroyed. But that doesn't answer the question of whether Metz was writing an article that was deliberately and dishonestly negative.

To take one small example, he sandwiches his "The voices also included white supremacists and neo-fascists" between two quotes from me, in a way that would make a careless reader assume it was a paraphrase of what I had said. For a larger example, he wrote "In one post, he aligned himself with Charles Murray, who proposed a link between race and I.Q. in “The Bell Curve.”" Only if a reader followed the link to what Scott wrote would he discover that he aligned himself with Murray on an entirely different issue, and one, support for a guaranteed income, identified more with the left than the right.

It wasn't a hit piece in the strong sense of an attempt to make Scott look as bad as possible, since Metz included positive as well as negative elements in the story. But it was a piece designed to make Scott look worse than Metz had any evidence that he was.

One other thing. In a footnote you write "I am not a rationalist … . I read poetry, not code."

I read poetry too. Also write it. And I have written code, although not for many years. Poetry sometimes got discussed on SSC and, more recently, on Data Secrets Lox. If I had to guess from casual observation, people who were active on SSC are at least as likely to have favorite poems as English majors, perhaps more likely.

As a regular reader of SSC, I found this article fair (obviously I'm biased on this, but I don't think either the Metz/Sandifer/SneerClub "Scott's a secret neoreactionary racist luring readers into the far right" narrative [1] or the conspiratorial thinking of many of Scott's more loyal supporters is reasonable) and useful in understanding the situation. I had thought that even if SSC itself was fairly unimportant to the NYT, their story still intentionally misrepresented it with the ideological goal of trying to discredit Silicon Valley's independence and occasional deviation from progressive orthodoxy (I was trying to avoid Gell-Mann Amnesia but evidently overcorrected); the reporter being lazy and shoehorning SSC into a narrative of Gray Tribe online discourse developed in a different situation makes more sense.

Benthall's discussion of the sociology of the rationalists, which I had not seen before, is also quite useful in explaining features of the community that had seemed strange to me. I have been an atheist for my entire life, and I do not intrinsically care enough about people's happiness or well-being to want to organize my life around any ethical system based on empathy or moral intuition. I was therefore surprised that the rationalists would spend any time on New-Atheist-style criticism of religion rather than just dismissing it as obvious nonsense. Likewise, it seemed inexplicable to me that the rationalists recognized that universal ethics was impossible and yet all treated utilitarianism as normative in a practically similar way, basing this either on claims that it represented their personal preferences (which seemed incongruous alongside their insistence on following an internally consistent ethics even when it was completely counterintuitive—"shut up and multiply" &c) or on rather shaky philosophical argumentation. (Scott's arguments for social contract theory are correct in my opinion, but his attempt to get from there to utilitarianism seems much more dubious. The assumption that people will make deals to ensure Kaldor-Hicks improvements when they already know they'll be worse off thereby is implausible. Scott's proposed solutions are (1) an agreement that each small bloc opposed to one such community-wide deal would accept it in exchange for all the other blocs doing the same with their least favorite deals, which wouldn't be accepted consistently, especially in a community as full of power imbalances as the real world, and (2) a proposal to bring in the Veil of Ignorance by asking people to make these deals before they become aware of their relations to each other, which fails in general because most such deals will be motivated by features of the environment that the Veil would obscure, and fails in humans because humans learn a lot about their environment before developing into full intelligent agents. Moreover, if the point of reducing contractualism to something else is to simplify an overly complicated and debatable system into something usable in daily life, utilitarianism doesn't seem much better for that purpose.)

(Continued) However, if many of these people are recent apostates from religion looking for confirmation of their atheism and a new communal sense of meaning to explain what's important in life, as an alternative to the conservative religious worldview they had rejected and the elite progressive worldview that was alien to most of them, then organizing the community partly around support for rationality and science and partly around a pervasive personal ethical value (utilitarianism) which can become a communally shared value makes perfect sense. (I am drawing this idea of meaning from your essay "Questing for Transcendence", which was similarly informative, although I still don't understand why a tradition or ritual being old is supposed to make it more meaningful.) This also explains how you can categorize rationalists along with political Catholics and Bronze-Age-Pervert-style ironic fascists as providing an alternative ethos to progressivism despite their ostensible goals being so different.

Despite thinking that a lot of the rationalists' concern about superintelligent AI (not to mention their estimate of their ability to do much about it) is absurdly magnified, I think your dismissal of it as "grift" is too uncharitable. Scott clearly actually believes it (he's written in support of it many times), and my impression is that Yudkowsky does too (at least consciously; subconsciously his brain may indeed be biasing him in favor of what gains him money and social status within his community). As for the argument itself, it's weird and somewhat improbable but not exactly inconceivable. You can find a summary of it here from Scott and another here from a skeptic, but basically it's this. Suppose that:

1. Human-level AI is easy enough to make that humans will eventually be able to make it;

2. Superintelligent (sēnsū lātō: substantially smarter than any human in most ways) general AI based on somewhat similar technology to that human-level AI is physically possible;

3. That human-level AI can be made by humans in a legible enough way that it will be able to understand its own functioning well enough to improve on it, and this will continue to be true for the improved version it makes, and the improved version that makes, and so on enough times to get to superintelligence as defined above;

4. That human-level AI has almost any nontrivial goals whatsoever.

Then, because being smarter makes you better at most complicated and difficult tasks and also better at surviving in a world where people may want to kill or imprison you, that human-level AI will decide that becoming smarter is essential or at least very helpful to the project of accomplishing its goals without first being restricted or killed, so it will engage in the recursive self-improvement described in Assumption 3, and so it will become superintelligent. If we grant a further assumption:

5. A superintelligent AI at some level in the process of recursive self-improvement will be able to pursue its goals so effectively (including preventing humans from interfering and/or tricking humans into not interfering) that humans will be unable to effectively limit its actions once it gets that intelligent.

then if an AI gets that intelligent it will be able to do more or less whatever it wants and humans won't be able to stop it effectively. Therefore, if you might object to its goals or the means it uses to get them (which might be bad for humanity even if the goals seem benign; the paperclip maximizer is a simple example of this), you should find it extremely important to give the AI goals that are good for humanity. Obviously, you can question the assumptions (Assumption 3 seems particularly shaky to me) or think that this will only be a concern in the distant future (this is my guess, since even now we seem to be nowhere near general AI), but the assumptions aren't completely absurd, and the conclusion follows logically from them. Moreover, even some actual AI experts seem to agree that this sort of thing is a possible problem (see this and this).

Finally, you may be interested in Scott's latest article, which is on a surprisingly similar topic to some of your work: how to resolve the Republican Party's post-Trump civil war in a productive way. His recommendation is, essentially, to continue Trump's strategy of embracing the resentment of the white working class (Scott's Red Tribe, Paul Fussell's working class) against the progressive credentialed elite (Scott's Blue Tribe sēnsū strictō, Fussell's middle class), but justify it on the basis of criticizing elites' classist prejudice (again, for a partly cultural definition of social class), hopefully pick up working-class minority votes that way, and try to consciously fight elite influence by opposing captured regulators and credentialism (and thereby constrain their memetic self-replication through universities).

(Continued) He admits that his proposal is "on the border between 'true' and 'much more complicated than that but framed in a way that Republicans will appreciate,'" and parts of it (e.g. replacing lots of government 'experts' with prediction markets) don't seem to satisfy either of those criteria, but overall much of it seems reasonable. Unfortunately his attempt at signaling positively to both political sides seems to have failed, in that his prefatory denunciation of Republicanism ("I hate you and you hate me. But maybe I would hate you less if you didn't suck. Also, the more confused you are, the more you flail around sabotaging everything.") has already angered conservatives, even among his own readers, without making the anti-progressive parts of the essay seem any less sincere to progressives.

[1] They have interpreted Scott's leaked email this way, but I don't think that's correct. Scott's support of the idea of statistical differences between races, both there and on his blogs, seems entirely based on his (albeit plausibly incomplete and misleading) understanding of science on the subject, so calling him a racist in the ordinary sense of being irrationally prejudiced is incorrect. As for his objecting to neoreaction while somewhat admiring the neoreactionaries and spreading their more appealing ideas, he has explicitly said he was doing this, so it's not exactly the secret plot to lure readers into the alt-right that Sandifer et al. imagine; likewise, he is plainly biased against social justice, but he has openly admitted this and reasonable readers should have updated their trust in him on this topic accordingly. On the other hand, the criticism by Benthall and others that the rationalists overestimate their abilities and are too willing to accept 'insight porn' or amateur sociology, probably including some of the neoreactionary ideas Scott was persuaded of, is quite true.

The question of whether it was a "hit piece" is another one where it's useful to break it down into even more specific questions, because it feels like people are using it as a stand-in for other issues and talking past each other.

Like, there's simple advice that someone might give: Don't cross the New York Times, because if you do they'll trash you in their paper.

And in this case there's a response that: Well, they did write a negative article about Scott, and it was negative in unfair ways, and it was prompted by his actions (the ones you're calling "him crossing them"), but it wasn't *revenge*. It wasn't a "hit piece." The Times doesn't see the world in terms of people "crossing them."

There are some insights into how the newsmedia works lurking behind that response, but also things did happen basically how the simple advice said they would. And it's not a coincidence that Scott ended up on the wrong end of the narrative in a New York Times article after he fought against them. But Spiers goes with dismissiveness, rather than trying to cross the narrative gap between the specifics of how things work in the media and what it's like to be on the receiving end of that kind of coverage.

I don't understand your answer to "Was it a premeditated “hit job” or revenge piece?"

You answer this very strongly in the negative. ("all the talk about “hit jobs” is silly and conspiratorial.") But then you go on to say that the New York Times approaches all its stories, presumably including this one, by first deciding what they're going to write and only then doing such research as they feel they have to.

That would make everything they write, including this piece, a hit job. It's not informed by any work the journalist might do. Its conclusions will be the same regardless of what anyone says. You've posited that Cade Metz started his assignment by deciding "I'm going to write negative things about Slate Star Codex and its author", and then followed through. How would we avoid terming that a "hit job"?

"How did that not make it into the article while all this nonsense about the Silicon Valley psyche did?"

You ask the above about AI risk, and I think it is best answered by the fact that Cade Metz, the reporter in question, has a book out about that very topic: "Genius Makers: The Mavericks Who Brought AI to Google, Facebook, and the World".

To quote from one of the several notices on his Twitter publicising his book around the same time as the article on Slate Star Codex:

"This is not just a book about artificial intelligence. It’s a sweeping nonfiction narrative about a wonderfully eclectic group of people, from across the globe, who spent decades trying to build A.I., often in the face of enormous skepticism, before it suddenly came of age."

If he wrote in the article that the fear of bad AI was grossly overblown and it was all in fact a confidence trick, he would be cutting the legs out from under his own book. "Don't bother buying or reading this, it's all nonsense!"

I agree with many of your points, but one needs clarification:

The New York Times did not out Scott Alexander. It did not dox him. The only person claiming that it even threatened to dox him was Scott. And he later revised that to: the reporter could not promise not to reveal my name.

In any case, 8 months passed after Scott made that claim. And then he published his name to the world. And then the New York Times published its story with his name.

In addition to that, Scott had, in various fora, published his real name next to his blog posts over the years. (Early Less Wrong, and the Springer book.)

The job of a newspaper is to give people relevant information. Scott's identity was relevant. Scott's identity was public. The newspaper did not owe Scott greater discretion than he had practiced himself.

"Has the time come to overthrow old “East Coast” hierarchies and replace them with new “West Coast” institutions?"

The time has come to replace old west coast institutions with new Beijing institutions hahaha. Google –> Bytedance. Stanford –> Tsinghua. hahaha i hope u enjoy blogging from your parent's basement.

I don't want to play six degrees of Kevin Bacon here, but "I am not a member of Grey Tribe" is incorrect. Using combinatorial networks based on who follows you and who you used to follow, you're about 1 step from Slatestarcodex.

You're allowed to define yourself all you want, but there's a network of writers and people who read them on the internet who are all connected by a locus. Based on what I've read, in 1773 that locus point was Paul Revere. In 2018 the central point that acted as a 'neutral debate ground' was Scott Alexander's blog. I'm not saying we're going to go dump some tea in Boston Harbor, but things like "the researchers behind OpenAI met at a [LessWrong] meetup" are kind of important.

I realize he seems small in this sense, but the NYT was very close to discovering a very big story and they completely missed it.

Thank you for the generous comments about my post on this topic. I think you're on point as well!

I agree the oddest thing about the rationalists is the AI X-risk thing.

I think you make a good point about limiting the use of the word "violence". There's definitely a good case for that. It was an asymmetric act of power.

You have misspelled my name several times. I can't fault you for it — I was told yesterday that I misspelled "Siskind" throughout my post and had to correct it. Also, I have a complicated name. But it's Sebastian Benthall (two 'l's).

"I don't want to play six degrees of Kevin Bacon here, but "I am not a member of Grey Tribe" is incorrect. Using combinatorial networks based on who follows you and who you used to follow, you're about 1 step from Slatestarcodex."

I think it is fair to say that my readership overlaps with SSC quite a bit — when I asked my readers in *my* annual user survey what their favorite blog was, his was the most common answer.

On the other hand I don't think I fit the markers of "grey tribe-dom" in that original SSC post?

"Grey Tribe typified by libertarian political beliefs, Dawkins-style atheism, vague annoyance that the question of gay rights even comes up, eating paleo, drinking Soylent, calling in rides on Uber, reading lots of blogs, calling American football “sportsball”, getting conspicuously upset about the War on Drugs and the NSA, and listening to filk "

*Not a Dawkins-style atheist
*100% would vote against same sex marriage and will never use the word "gay rights"
*I don't know what Soylent is!
*I refuse to download Uber on my phone
*I love American football!
*I don't know what filk is!

I do get upset about the NSA though. I don't have a moral objection to the War on Drugs, just practical ones. I am libertarianish I suppose, but from a very Fusionist conservative direction.

On the other hand, not being hot on Trump does put a damper on my Red tribe credentials.

Given my readership statistics I suspect I may be something like an emissary from tribe red towards tribe grey. Or do you have a better way of parsing it?


My apologies, I have fixed it now.

@Anon February 26, 2021 at 2:38 AM

"by first deciding what they're going to write and only then doing such research as they feel they have to."

"Hit job" implies premeditated malice. I doubt there was any here. In any case, the story they decided they were going to write wasn't "Slate Star Codex is sexist" but "Silicon Valley has sexism entrenched in it; for example, see x."

@Gordianus– Bingo on seeing the connection between the rationalists and the integralists — both being systems of community and meaning in the face of a hostile progressive ideology and in the ruins of more traditional systems. Exactly right. I always thought this but Benthall's making it explicit helps draw the parallel closer.


If the non newspapers accounts just tweeted links to their blog posts or essays or whatever, I might still follow them. As it is, I'm convinced there is nothing useful to find on Twitter except links.

This is by far the best take on the whole SSC debacle I've read. Up to this point, I'd been pretty certain that it was some sort of message that the Times won't give in to demands.

On the topic of AI risk, it's unfortunate that the loudest voices are the most absurd. For a more sober analysis of the risks involved in the current ML trajectory, see *Human Compatible*. Whether outsiders are able to do much about it with funding is pretty questionable. If ML makes it into weapon systems before the kinks are worked out, it's unlikely to have as good of judgment as, say, Vasily Arkhipov.

Adopted from a discussion elsewhere on the internet to fit within 4k characters:

This article is fantastically well written and asks the right questions, a double success that most other articles on this topic have failed to achieve. I will be reading more of your stuff. Nonetheless I have one central criticism.


You argue that the NYT has a hierarchically driven narrative centered around racism and other inequities. I find the evidence favoring this claim relatively sparse, but, were it defensible, you seem more focused on the unusual nature of this setup than on its absolutely horrifying implications. The points on the hierarchy and the existence of a racism-centric narrative are supported separately.

The hierarchical nature comes from the Deadline piece which you quote extensively. I'm somewhat skeptical that the hierarchy is as far-reaching as you paint it here, but were this an accurate depiction of the NYT, you are extremely soft on the point. You say "fine, I can’t criticize you—it isn’t really any different from my finding security stories the most newsworthy things to come out of China". No, if you seriously believe this you should criticize them like mad! This is an allegation that the NYT engages in propaganda driven by the chief editors. Judging a story by the degree to which its facts adhere to a predetermined conception of the world is a repugnant standard which will only ever ratchet up the evidence that favors that narrative. This will happen for the same reason that selective reporting on violence led to a complete divergence between actual and perceived violence in the US.

You go on to assert that the NYT narrative has now drifted to “shed light on the ideas, institutions, and personalities that exacerbate racial and gender inequity in American life, creating the sort of world where Donald Trump can be president.” The source of this goes back to the transcript leaked by Slate, but I don't think the transcript is well interpreted by your Washington Examiner link. NYT Executive Editor Baquet says relatively little that could be interpreted as imposing a new narrative that centers on the origins of race and gender inequities in the US. By and large, he is responding to criticism by his staffers about both the lack of usage of the word "racism" in reporting at the NYT and the NYT's lack of focus on racism in the US. A fairly representative quote begins "Staffer: Hello, I have another question about racism…"

So, how accurate is your assertion that the NYT has a hierarchically driven narrative centered on racism? I'm relatively unconvinced that the narrative is as controlled from the top as you allege, although I do believe there is a narrative (or possibly multiple narratives) at the NYT. You do support the claim that, once a narrative is established, editors influence reporting to conform to it. But the narratives themselves are shaped by both editors and reporters. An individual editor has more control over a narrative than an individual reporter, but within the NYT the reporters are far more numerous. The editors are also in a slightly precarious position – vulnerable to criticism from within for failing to condemn racism in strong enough terms (i.e., insufficient propagation of the narrative) and from without for seeing racism everywhere (i.e., overly strident propagation of it).


Minor Typos:

Footnote 2 is missing its opening bracket

Footnote 19 is not mentioned in the text. I believe the text surrounding the commentary on the Rationalist community as a therapeutic one should cite 19 instead of 18.

This is a brilliant piece. I think it presents a very reasonable take on the whole issue. While I wouldn't call myself a rationalist anymore, the concepts and community provided by HPMOR and SSC helped me think about how I think at a time when I deeply needed it.

The claim that SSC is a den of rationalists is easily disproved by the SSC survey, which shows that only 13% of readers consider themselves rationalists. However, I also disagree with the claim that this is some sort of religious community. I would argue that it's simply a community of similar people, like so many others that exist, where the similarity is being hypersystematizing (a reference to the scientific distinction between systematizing and empathizing). Being hypersystematizing is a trait associated with Asperger's, but it doesn't require autism.

An article enlightening people on the science of systematizing vs. empathizing, using SSC as an example, would actually teach many people something new.

If one instead wants to write about rationalists, it makes very little sense to focus on SSC and Scott, rather than on LessWrong and Yudkowsky. And if one wants to write about the beliefs of people in Silicon Valley or tech, it makes little sense to focus on anything in this small community, which is far from representative of either.

Ultimately, the NYT article came across to me as an attempt to weave a conspiracy theory. A conspiracy theory is typically also a hit job, in that it makes false and one-sided claims and insinuations, but the targets are rarely important in themselves. Instead, they are mere fodder for the grand conspiracy.