Anthony Gottlieb on Rorty

Rorty had urged philosophers to abandon their intellectual hubris and instead content themselves with interminably swapping enlightening tales from diverse perspectives. It was never quite clear why anyone would want to listen to such stories without endings.

Rest here. On a related note, I was talking with a friend a while back about how embarrassingly seriously Rorty takes Freudian explanations of behavior, and how it would make more sense if you subbed in (reputable, not link-baity) ev psych or behavioral genetics. But the whole postmodernist project Rorty was engaged in doesn’t make any sense if one accepts – as one should – that the way we behave and conceive of ourselves is not just a product of societal discourses but in large measure genetically heritable. Some things vary across cultures, but a whole lot don’t. You can’t overthrow a hegemonic discourse that’s in your DNA.

Jason Brennan on the Methodology of Constitutional Theory

From my perspective as an outsider, most of the field of constitutional law seems intellectually corrupt. It seems that almost everybody does the following:

1. Start with a political philosophy–a view of what you want the government to be able to do and what you want the government to be forbidden from doing.

2. Take the Constitution as a given.

3. Reverse engineer a theory of constitutional interpretation such that it turns out–happily!–that the Constitution forbids what you want it to forbid and allows what you want it to allow.

The rest is just as good.

Meritocracy and the Institutions

Tim Lee on how meritocracy in Silicon Valley differs from meritocracy on Wall Street and in Ivy League universities:

Silicon Valley is at least as obsessed with the meritocratic ideal as the Ivy League and Wall Street are. But Silicon Valley’s conception of merit is more closely tied to real-world results. At a firm like Google, “merit” means the ability to create good software. Because successful tech companies must satisfy millions of finicky users, the difference between good software and bad software becomes apparent rather quickly. Facebook succeeded where Friendster and MySpace failed largely because Mark Zuckerberg was a more talented programmer—and better at hiring talented programmers—than the guys who ran the other sites.

The meritocracies of the Ivy League and Wall Street are much more loosely tethered to reality. The SAT measures a certain kind of raw intelligence, but it does a poor job of measuring characteristics like imagination and resourcefulness that may be important for real-world achievement. And once you’ve convinced an Ivy League admissions office to take you, the standards become much more subjective and forgiving. As for Wall Street, the volatility of financial markets makes it almost impossible to distinguish genuine talent from dumb luck. So while there are certainly smart people working on Wall Street, there’s no particular reason to think the smartest rise to the top.

This is slightly too optimistic – if the market always or even usually selected for good tech, NeXT would have succeeded and Internet Explorer wouldn’t have beaten Netscape – but certainly true by comparison to Wall Street and the Ivy League. But I think the introduction of the latter into the equation confuses the issue somewhat. I, to my considerable discredit, haven’t read Chris Hayes’ The Twilight of the Elites yet (upon which Lee is riffing), and so maybe I’m rehashing things here, but it seems like there are three, quite different, versions of the meritocratic ideal:

1. Descriptive appropriateness ideal: Rewarding people in proportion to how capable they are at the task at hand makes things go best, where “best” is defined by whatever goal the institution in question is supposed to accomplish (maximizing profit, imparting knowledge, etc.).

2. Descriptive genetic elite ideal: Rewarding people in proportion to raw intellect or IQ makes things go best.

3. Normative genetic elite ideal: Rewarding people in proportion to raw intellect or IQ is morally required, even if it doesn’t make things go best.

Silicon Valley seems committed to #1, Wall Street to #2, and the Ivy League to #3. I think there’s considerable evidence at this point that #2 is laughably wrong-headed (forget the financial crisis, Halberstam was on this first), and #1 has more of a case going for it. But they’re both testable hypotheses. One could theoretically figure out whether hiring the smartest people in the world makes a hedge fund more money than its competitors, or whether the fact that the best programmers (very, very) occasionally become billionaires improves the quality of software, as measured by its users. And we in fact have some pretty good natural experiments on these questions.
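What would such a test look like? Here’s a toy sketch in Python, using made-up numbers and a hypothetical “average hire IQ” proxy rather than any real dataset, of the shape of the Wall Street version: check whether a fund-level measure of raw-intellect hiring bears any relationship to excess returns.

```python
# Toy illustration only: synthetic funds, a hypothetical raw-intellect proxy,
# and returns drawn independently of it (the "dumb luck" scenario).
import numpy as np

rng = np.random.default_rng(1)
n_funds = 200

avg_hire_iq = rng.normal(130, 5, n_funds)       # hypothetical talent proxy per fund
excess_return = rng.normal(0.0, 0.08, n_funds)  # annual return over a benchmark

# Under ideal #2 we'd expect a clearly positive correlation; under the
# "dumb luck" view it should be indistinguishable from zero.
r = np.corrcoef(avg_hire_iq, excess_return)[0, 1]
print(f"correlation between hiring-IQ proxy and excess returns: {r:.3f}")
```

A real version would need actual hiring and return data, controls for strategy and leverage, and enough years to separate skill from luck, which is exactly the hard part the passage above gestures at.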

The third ideal, the one you’re immersed in if you go to a highly selective college or university, or if you go to a secondary school that feeds into those colleges and universities, is of a different sort. It has moral content, and that moral content, when spelled out, strikes most people as deeply repugnant. And rightly so! But I think stamping out that kind of pernicious rankism is a quite different problem from disabusing Wall Street of its cult of the whipsmart trader. You can show the Wall Streeters studies proving they’re wrong. It’s harder to convince a Harvard undergraduate that he’s the beneficiary of a deeply unjust system, as I spent the last four years discovering.

Sarah Anzia on Election Timing

From the abstract:

It is an established fact that off-cycle elections attract lower voter turnout than on-cycle elections. I argue that the decrease in turnout that accompanies off-cycle election timing creates a strategic opportunity for organized interest groups. Members of interest groups with a large stake in an election outcome turn out at high rates regardless of election timing, and their efforts to mobilize and persuade voters have a greater impact when turnout is low. Consequently, policy made by officials elected in off-cycle elections should be more favorable to the dominant interest group in a polity than policy made by officials elected in on-cycle elections. I test this theory using data on school district elections in the U.S., in which teacher unions are the dominant interest group. I find that districts with off-cycle elections pay experienced teachers over 3 percent more than districts that hold on-cycle elections.

Full paper here. So add “it’d reduce interest group capture” to the list of good reasons to sync up elections such that none are off-cycle.
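To make the design concrete, here’s a minimal sketch, in Python with synthetic data and hypothetical variable names rather than Anzia’s actual dataset or specification, of the kind of district-level comparison the abstract describes: regress log experienced-teacher pay on an off-cycle indicator plus controls.

```python
# Minimal sketch of the comparison, NOT Anzia's actual analysis: synthetic
# districts, hypothetical controls, and an off-cycle pay premium baked in
# so the regression has something to recover.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500  # hypothetical number of school districts

districts = pd.DataFrame({
    "off_cycle": rng.integers(0, 2, n),      # 1 = district holds off-cycle elections
    "log_enrollment": rng.normal(8, 1, n),   # stand-ins for district controls
    "median_income": rng.normal(55_000, 10_000, n),
})

base_pay = 60_000 * np.exp(0.05 * (districts["log_enrollment"] - 8))
districts["experienced_salary"] = (
    base_pay * (1 + 0.03 * districts["off_cycle"]) * np.exp(rng.normal(0, 0.05, n))
)

# With log salary as the outcome, the off_cycle coefficient is roughly the
# proportional pay difference associated with off-cycle election timing.
model = smf.ols(
    "np.log(experienced_salary) ~ off_cycle + log_enrollment + median_income",
    data=districts,
).fit()
print(round(model.params["off_cycle"], 3))  # should come out near 0.03
```

The paper itself, of course, works with real district data and a much more careful identification strategy; this is just the shape of the “over 3 percent more” comparison.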

Neil Sinhababu on “Unequal Vividness and Double Effect”

Here’s the summary of the argument:

I hope to advance this debate with an argument that uses recent empirical results against the Doctrine of Double Effect as Greene and Singer wish to, but which avoids the problems Berker notes, and which better fits the data. It runs as follows:

[Unequal Vividness Explanation] Double Effect is accepted because of how unequally vivid representations of actions’ intended and merely foreseen consequences affect our desires.

[Unreliability Claim] When the effects of unequally vivid representations upon our desires are decisive in causing our judgments about what to do, we are usually mistaken.

[Conclusion] In accepting Double Effect, we are likely to be mistaken.

Full paper here. What’s cool about this is that the unreliability claim, as I understand it, rests on both a descriptive claim (in practical judgment, unequal vividness frequently leads to error) and a normative claim (error in practical judgment is likely to constitute error in moral judgment). So I don’t think Neil is trying to derive an ought from an is. But at the same time, the normative claim in question is extremely modest, and one that most people involved in the dispute should be able to accept. The method he’s using, then, seems capable of using neuroscience to make progress in previously intractable disputes like this without falling victim, as Greene does, to errors that Hume was pointing out nearly three centuries ago.

Unequal vividness also seems like a promising way of deflating other deontological hobby-horses, like the causing/allowing harm distinction. Surely one reason we’re less willing to condemn someone for failing to donate $10 to UNICEF, even knowing that the donation would likely save a child’s life, than we are to condemn someone who runs over a child with his car and then keeps going is that the latter scenario is far more vivid.