Sunday, October 8, 2017

Philosophy as a Vocation

There's a (perhaps apocryphal) story of a philosopher being asked at a party what exactly it was they did and responding -- ``you define a few concepts, you make a few distinctions; it's a living.'' People sometimes tell this story as an example of how base, flippant, and ignoble the culture of analytic philosophy has become; but I begin with it for the exact opposite reason. I want to acknowledge from the get-go that, in the end, one of the big attractions of being a philosopher is that it's an indoor job with no heavy lifting, and that's alright. I'm not from the school of thought that thinks the problem with academics is that we fail to be sufficiently self-important, so I think it worth grounding all this vocation talk in the more humble reality straight away.

Max Weber -- ``... Wait, did I leave the stove on?''
Max Weber has a rather famous essay called `Science as a Vocation'. In it he gives an account of the existential situation of the young scientist. I'm not going to do full justice to it here, but here are four points Weber makes that I want to highlight:
  1. There is an enormous element of luck involved in deciding who makes it and who does not.
  2. To make a valuable contribution one has to narrow one's horizons and become ultra specialised.
  3. Even if one achieves something, it shall inevitably be overturned and surpassed.
  4. We live in a morally blank, existentially meaningless, universe, and one's choice of vocation will never receive compelling, external, ultimate justification.
Cheery bloke, Weber; big hit at parties.

It's not clear, from the essay, whether Weber none the less means to be advocating the life of the scientist as a noble one worthy of pursuit. Much of what he says seems to indicate that he thinks it's a noble pursuit, yet when he most directly touches on the matter he says:
Hence academic life is a mad hazard. If a young scholar asks for my advice with regard to habilitation, the responsibility of encouraging him can hardly be borne. If he is a Jew, of course one says lasciate ogni speranza [abandon all hope]. But one must ask every other man: do you in all conscience believe that you can stand seeing mediocrity after mediocrity, year after year, climb beyond you; without becoming embittered and without coming to grief? Naturally one always receives the answer: ``Of course, I live only for my `calling'''. Yet, I have found that only a few men could endure this situation without coming to grief.
So we get here, also, a nod to the role that raw prejudice can play in deciding academic fates, and the bitterness that academic life can bring with it as one sees any pretense of meritocracy destroyed before one's eyes, and (so one thinks) to one's own disadvantage.

How much of this goes for philosophy as well? The role of prejudice is much discussed in our community, as are failures of our system to be meritocratic and the role of luck. I don't quite so often see it discussed, but I think we all have seen (or felt, in some of our cases) folk suffering from the peculiar kind of bitterness which results from the following combination of beliefs: that things ought be a meritocracy, that in such a system one would be doing well and widely acknowledged, that one is not doing well or widely acknowledged. Philosophy as a vocation may well contain many of the same elements as science as a vocation.

Points (2) and (3), however, are much more disputed in the case of philosophy. A recent essay in the LA Review of Books seems to me representative of certain strands of thought which vigorously protest any analogy between the sciences and philosophy in these regards. Rather than put our heads down (and together) and specialise, hoping to each make small contributions to a long running project of collective inquiry that shall -- if successful -- inevitably surpass our meagre contributions, ``true philosophizing is "thinking against oneself" -- done systematically, mercilessly, with no safety net and no escape routes''. The picture painted is of a kind of wildly ambitious and deeply individualistic project, where one, if successful, arrives at one's own profound insight that shall last the ages, but where one must really accept at the outset that one's quest will probably result in failure. Plato isn't quite the highlander, but there can be only so many such people. This essay was an especially fervent expression of this sentiment, but I do think it captures something of a recurring theme in our debates about our own self-conception as a distinct class of inquirers.

My heart is very much with the first of these options: I think philosophy is, or ought be, much more a Weberian vocation than a Romantic quest for self-assertion. I'll limit myself to one problem I had with this piece and how I think it misunderstands the position of one who commits to the less individualistic Weberian vocation. Reflections in the spirit of the LA Review of Books just fail to appreciate the full existential resources of communalism, even where they pay lip service to it. For instance, in the LA Review of Books article it seems to me that a lot of what they want to claim for their own approach is its superior courage, its better ability to display that virtue. What is being praised is the courage to squarely face one's high chance of failure, and the heroism of the philosopher in daring to be idiosyncratic in an institutional structure that prefers conformism. Now, privately I initially responded by complaining a bit about the pretentiousness here (systematic mercilessness leaving no room for escape better describes assassins operating predator drones, not people who write books about philosophy and cinema) but I've already made my initial disclaimer, and in any case I'll grant that there's such a thing as intellectual courage and it's valuable to display it. But somebody pursuing a Weberian vocation has no especial reason to think that the project of inquiry they commit to shall succeed in its aims. And so even if they personally have some reason to think they may succeed in making their own within-paradigm contributions (which, given the discussion about well known roles of luck, prejudice, and failures of the reward system, actually shouldn't be granted so quickly) they know that the communal project is just as precarious and fraught as the personal project of the attempted heroic individual.

Indeed, this goes for philosophy even more so than the case of science Weber focussed on, since somebody who throws themselves into a communal project of philosophical inquiry with their eyes open does so knowing that the utter abandonment of paradigms is frequent in the history of philosophy, that centuries-long projects which attracted the brightest minds of entire continents are now viewed as wildly and obviously erroneous by even the average undergrad, and that it is at least obscure whether we build progressively upon each other at all. (An existentialist exploration of the pessimistic meta-induction, or a proper exploration of the phenomenology of the historically aware scientist, would, I think, be an interesting and valuable project.) Even granting that intellectual courage is a virtue we ought display, the Romantic individualists are too quick to write off the Weberian vocationalists as thoughtless functionaries, and fail to appreciate the extent to which the latter display the same virtue, simply at a communal level rather than in service of an idiosyncratic project. This kind of unsympathetic failure to appreciate the principles and positions of Weberian types is typical, I think, and part of the reason I have written in their defence before.

It would be exactly missing my point to see in this a defence of all the projects of inquiry philosophy now supports, or of all features of dominant paradigms as they now exist. In fact, they very well might fail, and per the judgement of history bear no fruit, and this may well be because of features of how those engaged in them arranged themselves socially. See here, for instance, for one of my own discussions of a failure of the reward system which philosophers and scientists alike are subject to in the present academy. Such failings, and the real possibility that a huge number of very smart people are simply wasting their lives by their own lights but shall never know as much, underlie rather than contradict my point. One does not have to buy into a full nihilistic metaphysic to see the relevance of Weber's (4) and how it applies to philosophy -- when one buys into a communal project of inquiry, one is committing oneself to something whose horizons of success or potential revelation of failure lie far outside one's lifespan. Whether we shall collectively discern and perfect and instantiate a just society, limn and explicate the metaphysical structure of being, understand the nature of knowledge and see it properly organised, disseminated, and implemented -- and whether any of the approaches to these now adopted and collectively worked upon shall in any way advance these ends or hold us back -- we shall probably ourselves never know in this life. If one takes philosophy as one's vocation, in the Weberian sense, one none the less commits to the attempt at some or all of these problems, and does so with less hope of glory as one of history's celebrated geniuses, but as one among many making a small and under-appreciated contribution to a greater whole.

Also it's nice not having to come into the office over summer.

Friday, September 15, 2017

On The Case For Colonialism

There's a piece a lot of people are talking about called The Case For Colonialism. It is really not very good. I'm not signing the petition to have it retracted, for the reasons outlined at the end of the piece here. It's also just worth reading the linked piece there as well for dismantling the argumentative strategies of The Case For Colonialism. But I do think there will be some concerted campaign to paint the reaction to this article as one of leftists being unwilling to engage in fair consideration of the facts, so I just want to have some place I can write down my own reaction to this piece for ease of reference. I will focus in particular on the first section, wherein it is claimed we should reappraise the total effects of colonial rule and would thereby realise it was net beneficial. (The second section is an extended argument to the effect that various post-colonial governments have been awful. No argument from me on that front, though of course a better article than that under consideration would have spent more time reflecting on the kind of conditions that lead people to such desperate straits as to throw their weight behind the various wannabe tyrants who cropped up in the wake of colonialism -- this would in fact also be relevant to the argument's claims about `subjective legitimacy', discussed below. Also the role of both Western and Soviet neocolonialism in maintaining many such regimes would need discussing. The third section is an argument for `recolonisation' in some circumstances which largely depends on you buying that colonialism is a net good, and thus depends on the first section.) Much but not all of what follows can be found in the piece just linked to, but occasionally I would have put emphasis on different points or phrased things in a different manner, so I wanted to write my own take on things.


  • The Case For Colonialism contains historical infelicities. Guatemala, Libya, and Haiti are referred to as places that did not have a significant colonial history, a claim genuinely so bizarre that I wonder what it could mean, since charity forbids me from taking it at face value. It is at least suggested that Amílcar Cabral was involved in post-independence mismanagement of Guinea-Bissau, despite his being assassinated before independence was achieved. Attention is restricted to various European colonies from the early-19th to mid-20th century; what about all the rest of colonial history, most especially and obviously the genocide in the Americas and the transatlantic slave trade this spurred? If one wants to re-evaluate the history of colonialism, then treating the actual history in so shoddy and loose a fashion immediately undercuts the entire purported point of the article.
  • As most commentators have picked out as the most striking point, it is morally pretty horrendous to call for Belgian reoccupation of the Congo, as the article does, albeit briefly and in passing. But it is worth noting just how bizarre the argument for this was. As far as I could tell it consisted entirely of noting that the government of the Congo has never managed to organise as efficient an army as the occupying Belgian forces once maintained. People have called it akin to Holocaust denial, and that seems fair to me -- but it's more specifically akin to defending German actions in Poland on the basis that damn was that Blitzkrieg effective.
  • In general, for an article that purports to engage in or call for a `cost-benefit analysis' of colonialism, the weighing of the costs was very obviously inadequate. Genocidal wars of annihilation as in the Americas or the Australasian continent are not discussed at all; the various forms of slavery, forced labour, and gulags on the scale of nations are barely discussed; the lasting effects of various `divide and conquer' occupation tactics are not mentioned; nary a word on the wars spurred by imperial competition. And so on. This does not read like a serious attempt at the task it purportedly sets itself.
  • It is a little bit unclear to me whether the article purports to be actually engaging in the cost-benefit analysis, or calling on others to do so. It seems to suggest that it knows what the outcome would be, which indicates that the author feels the cost-benefit analysis has already been carried out. Most people have read it that way, in which case the above criticism is activated -- it's a poor cost-benefit analysis that does not actually take into account costs. If, on the other hand, the claim is rather that somebody should carry out a cost-benefit analysis on colonialism, then first, it never actually defends the claim that we should engage in this activity directly, and one may object to this on Kantian grounds; but in any case, if this is what is going on, the article can be faulted for the more prosaic reason that it would then be simultaneously calling for the fair minded consideration of a question and announcing in advance of this consideration what it takes the answer to be!
  • Relatedly, the author often notes that the European colonists did good things too in the process of colonisation. But I am reminded of Condorcet's response to this defence of colonialism in his own day: plainly it is not the case that the only way to spread technologies and ideas is through invasion, occupation, and resource extraction; it does not excuse the latter to note that you did the former, since by trade and the peaceable commerce of nations and peoples you could have achieved those goods without bringing about those bads. Despite this rebuttal to various `civilising mission' defences of colonialism already being available in the 18th century, the author of The Case For Colonialism never considers it.
  • This actually feeds into another complaint: the appeal to counter-factual reasoning was both spurious and barely thought through. As part of the defence of colonialism, the author thinks we ought think about how history would have gone absent colonialism. The method they propose for doing this is to compare the condition of nations that were colonised to those that were not. Many of the nations they propose specifically as counter-points were in fact colonised; it is in this context that Haiti is named as a nation with no significant colonial history, for instance. But even setting that aside, those that weren't colonised are often nations that were repeatedly invaded and menaced by colonial powers -- China and Ethiopia are named, for instance. Taking this together with the above complaint about peaceable methods of cultural exchange never being considered, one is led to think that the rather bizarre counter-factual being set up is as follows: suppose the European nations had not colonised the various nations they did, but had engaged in any other sort of looting and invasion -- if there is ever a situation in which colonial rule seems preferable to this, then score-one-for-colonialism. Here, to be clear, I am engaging in some speculative positing as to what they are proposing, since the author (true to form in this poorly written and argued piece) has not precisely spelled out how we are to carry out their counter-factual reasoning. In my defence, it is genuinely hard to see what the relevant counter-factual is supposed to be wherein we are to consider the fate of China as it is now as a guide to Asanteman as it would be were it not for colonisation, and see in this a defence of colonialism.
  • The argument for the legitimacy of colonialism in the minds of the governed consists of two observations. First, after being conquered, people in the occupied territories would make use of the services that existed therein and take the jobs available. Second, testimony from a former governor of the Gold Coast to the effect that those under his sway quite liked the regime. The second of these is barely worth mentioning (should we assess the popular legitimacy of the Soviet occupation of Hungary by asking what a chief apparatchik thought of it?) and the first would be an argument in favour of the popular legitimacy of nearly every tyranny the world has ever seen.

So we have an appraisal of historical events that gets basic parts of the history wrong, ignores or passes over key events, purports to be a cost-benefit analysis while not actually factoring in costs, is naively credulous as to tyrants' self-affirmations, and advocates a mode of counter-factual reasoning that is both underspecified and, from what can be discerned, amounts to a non-sequitur. This is not good scholarship. I'll end here. This is more effort than I really intended to put into this, but I have now seen so many people saying that the arguments of the piece aren't being given consideration and that people are rushing to condemn, that I thought this worth setting out. I take this aspect of philosophy seriously, and think that a significant public role we should play is holding people to argumentative standards. For whatever that is worth, The Case For Colonialism does not meet those standards.

Friday, September 8, 2017

Spoiler

Here is a belief of mine that I think is pretty uncontroversial but which, it turns out, my friendship group contains some pretty heated disagreement on. A spoiler for some piece of fiction is any bit of information (pertaining to events depicted) such that being told it beforehand significantly affects your experience of the fiction.

(Don't read too much into the `significantly' -- I am just friends with philosophers, so have to qualify to rule out irritating Cambridge-spoilers; I don't think the difference between `experiencing the fiction knowing X' versus `experiencing the fiction not knowing X' is significant in all cases, and if you're being real neither do you. Ok.)

I think this definition broadly matches popular usage and some popular attempts at definition -- for instance this. But apparently when one draws out its consequences it becomes pretty controversial pretty quickly. Here are some examples of said controversial consequences.

First, historical information can constitute a spoiler. Knowing that Caesar gets stabbed, that the Titanic sinks, and that a complex series of battles, parliamentary reversals, and marriages results in a Lancastrian monarchy can all, in the right context, spoil works of fiction.

Second, we'll only know what all the spoilers are once we're dead. We never know what information we are gaining now could turn out, in future, to affect our experience of some piece of fiction. Everything you learn is potentially a spoiler for some future tale. Life in a democracy is full of risks, and this is one of them.

Third, not quite a consequence but close: one can fully permissibly spoil things; it is not the case that it is always bad to spoil a work of fiction. Maybe it is bad always and everywhere to deliberately spoil a work of fiction (even if this bad can be overridden by other goods one thereby attains), but certainly giving away information which in fact constitutes a spoiler is not in itself even a prima facie bad in a great many scenarios. It may even be a good thing to do sometimes.

Fourth, spoiling is quite an individualistic affair. It depends on the peculiar character of the individual how they experience a work of fiction, and how their information bears on this; it does not depend (except in a derivative sense) on the intentions of the author of said fiction, nor on the nature of the information conveyed. Nothing is intrinsically a spoiler; it all depends on how it interacts with the individual and their mode of experiencing fiction.

Ok, there we go. I think all this is quite obvious, but frequent disagreement compels me to write it out in an easy to access place for future reference. Now you know, and knowing is half the battle.

EDIT: Thanks to Kenny Easwaran and Eric Schwitzgebel for pointing out amendments which I have incorporated into this definition. Keep them coming!

Saturday, August 12, 2017

Du Bois on Da Vinci

A quick write-up of a charming essay by the young Du Bois (from his time as a graduate student at Harvard), which I only found out about through the fascinating historical work of Trevor Pearce. The essay is entitled Leonardo Da Vinci As A Scientist and is available online here.

Leonardo Da Vinci -- ``I was even a pioneer in side-eye and general shade throwing.''
Du Bois is concerned to argue that Da Vinci deserves credit as the founder of modern experimental science. The argument strategy is twofold. First, to show that Da Vinci has sufficient (and sufficiently impressive) scientific achievements to merit attention as an early scientist at all. This Du Bois achieves by reviewing historians' (apparently then -- 1889 -- relatively recent) reappraisal of Da Vinci's empirical work and his invention of scientific machinery, showing that it was indeed impressive. This in itself was interesting; for instance, I learned here that Da Vinci was already floating the idea that the sublunary realm and the broader cosmos should be understood as operating on the same principles, that Da Vinci has a claim to being an early inventor of the telescope and also to being the first to notice a parallel between how the camera obscura works and the operations of the human eye, and that on the basis of observational study of plants Da Vinci was developing ideas about plant respiration which now seem to have been on the right track. Cool!

The second step in the argument, however, is the more philosophically and conceptually interesting. Here Du Bois' task is to argue that Da Vinci deserves credit not just as a link in a great chain of scientific workers, but rather some sort of special credit as a founder figure in one sense or another. Here the point is largely drawn out by comparison with three other figures: Roger Bacon, Gilbert of Colchester, and Francis Bacon. While Du Bois is impressed with each of these figures, he thinks they were each lacking in a certain way. Roger Bacon was not enough of an empiricist: to be credited as a founder of modern science, Du Bois feels, empiricism must be one's epistemological foundation, where for R. Bacon ``empiricism was but a branch of the tree of which philosophy was the trunk''. Gilbert of Colchester has, so to speak, the opposite problem -- he's all empiricism with no metatheory. While he's impressive in his collection of observational and experimental results, he's ``a mere experimenter, with little breadth of conception, or broad generalising powers''. F. Bacon, finally, came after Da Vinci, and is substantially the same in his metatheory (so Du Bois thinks! Please don't hurt me, Renaissance scholars), but just didn't achieve as much scientifically as Da Vinci. F. Bacon comes across, basically, as an especially talented expositor of Da Vincian method, but not himself worthy of the claim to priority on scientific method.

The philosophy of science the young Du Bois is working with is interesting, and worth making more explicit than Du Bois himself does in the essay. In Da Vinci, Nature had found itself a man who could do both: patient skillful observational work, aided by machines of his own device, that uncovers particular facts of great interest and also general principles, and also explicit epistemological theorising of a sort which acknowledged and explained the importance of founding one's claims in such observations. Science, then, is the epistemologically self-conscious skillful application of empiricist method. R. Bacon was a skillful natural philosopher and epistemologically self-conscious, but not an empiricist. Gilbert of Colchester was a skillful empiricist, but did not evince the requisite degree of epistemological self-consciousness. F. Bacon was an epistemically self-conscious empiricist, but just not quite good enough at the actual application. Da Vinci was the first person in whom all these qualities met to a sufficient degree, or so Du Bois claims. (This essay also features a trait which is characteristic of all Du Bois' later work on social matters -- explicit reticence and diffidence, with frequent reminders that one ought be cautious about one's conclusions given the difficulties of gathering evidence and being sure it is complete or representative.)

W.E.B. Du Bois -- ``The idea that the person in this picture could ever be as enthusiastic about anything as the person who wrote that essay on Da Vinci is genuinely surprising.''
I've worked on Du Bois' philosophy of science before, but I have never in my published work explicitly remarked on the undercurrent of empiricism. None the less, it is there; most especially it can be seen in his lifelong habit of issuing scathing condemnations of a priori approaches to history and sociology, where he thinks that prejudice unchecked by experience has been the source of much racist balderdash concerning African (and African-descended) folk. It is remarkable to think, then, how closely Du Bois' scientific and social mission accords with the early philosophy of science he developed here. For The Philadelphia Negro or Black Reconstruction can plausibly be described as epistemologically self-conscious skillful applications of empiricist method; in both these works (and many of his less famous essays besides) he mixes explicit methodological remarks, exhorting a more careful and rigorously observation-grounded approach to the study of black life in America, with the actual collection of novel results about social, political, or economic conditions, and both of the highlighted cases have (nowadays) come to be seen as classics of their respective fields. His work is thus epistemologically self-conscious in its empiricism, and involves not just the exhortation of observational method but its actual and skillful application. The philosophy of science underlying this essay by the young Du Bois seems to have set a pattern that he attempted to live up to for the rest of his scientific career.

Da Vinci, of course, is not just a great scientist and engineer, but also a great artist. Du Bois was evidently aware of this, and this fact about him is mentioned at various points in the essay. Da Vinci is indeed paradigmatic of the Renaissance Man, the individual who strives to hone diverse skills to a high degree and exhibit a broad culture. In this respect too Du Bois seems to have followed Da Vinci, being more acclaimed for his literary style and humanistic moral and political vision than his scientific career. Attracted as I am to the broad humanism of the Renaissance, and having great respect for Du Bois' work, seeing this essay -- where Du Bois develops his ideas about philosophy of science as part of an ode to Da Vinci and the Renaissance scientific humanism that Da Vinci pioneered -- was in its own way quite affecting for me. Even if I cannot match these figures in their skill, I hope to at least preserve and advance the spirit of humanistic inquiry that they each embodied.

Sunday, August 6, 2017

Significant Moral Hazard

What follows is a guest post by my comrade Dan Malinsky. After the recent publication of the paper `Redefine statistical significance' Malinsky and I attended a talk by one of the paper's authors. I found Malinsky's comments after the talk so interesting and thought-provoking that I asked him to write up a post so I could share it with all yinz. Enjoy!

--------------------------------------------------------------------------------------------------------------------------

Benjamin et al. present an interesting and thought-provoking set of claims. There are, of course, many complexities to the P-value debate but I’ll just focus on one issue here.

Benjamin et al. propose to move the conventional statistical significance threshold in null hypothesis significance testing (NHST) from P < 0.05 to P < 0.005. Their primary motivation for making this recommendation is to reduce the rate of false positives in published research. I want to draw attention to the possibility that moving the threshold to P < 0.005 may not have its intended effect: despite the fact that “all else being equal” such a policy should theoretically reduce false positive rates, in practice this move may leave the false positive rates unchanged, or even make them worse. In particular, the “all else being equal” clause will fail to hold, because the policy may incentivize researchers to make more errors of model specification, which will contribute to a high false positive rate. It is at least an open question which causal factors will dominate, and what the resultant false positive rate will really look like.
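To fix ideas, here is a minimal simulation sketch of that “all else being equal” baseline (the setup and numbers are my own illustration, not anything from Benjamin et al.). When the null model is correctly specified, the realized false positive rates track the nominal thresholds almost exactly:

```python
# Illustrative sketch (not from Benjamin et al.): the "all else being
# equal" effect of tightening the significance threshold. We simulate many
# studies in which the null is TRUE and the model is correctly specified.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_studies, n = 100_000, 50

# Each row is one study whose true effect is zero, with normally
# distributed data -- so the t-test's assumptions hold exactly.
data = rng.normal(loc=0.0, scale=1.0, size=(n_studies, n))
pvals = stats.ttest_1samp(data, popmean=0.0, axis=1).pvalue

print(f"false positive rate at P<0.05 : {np.mean(pvals < 0.05):.4f}")   # ~0.050
print(f"false positive rate at P<0.005: {np.mean(pvals < 0.005):.4f}")  # ~0.005
```

Under these ideal conditions the more stringent threshold delivers exactly the advertised tenfold reduction. The worry below is about what happens when conditions are not ideal.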

An important contributor to the high false positive rates in some areas of empirical research is model misspecification, broadly understood. By model misspecification I mean anything which might make the likelihood wrong: confounding, misspecification of the relevant parametric distributions, incorrect functional forms, sampling bias of various sorts, sometimes non-i.i.d.-ness, etc. In fact, these factors are more important contributors to the false positive rate than the choice of P-value convention or decision threshold, in the sense that any plausible decision rule, no matter how stringent (whether it is based on P-values, Bayes factors, or posterior probabilities), will lead to unacceptably high false positive rates if model misspecification is widespread in the field.
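To make this concrete, here is a minimal simulation sketch of one such misspecification, an omitted confounder (again my own illustration, not an example from the paper). Below, X has no effect on Y, but both depend on an unobserved Z, and a naive regression of Y on X rejects the true null at rates no threshold can rescue:

```python
# Illustrative sketch: an omitted confounder as one concrete form of model
# misspecification. X has NO effect on Y, but both depend on Z; regressing
# Y on X alone makes the likelihood wrong, and no threshold rescues it.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_studies, n = 2_000, 100

rejections_05 = rejections_005 = 0
for _ in range(n_studies):
    z = rng.normal(size=n)        # unobserved confounder
    x = z + rng.normal(size=n)    # Z -> X
    y = z + rng.normal(size=n)    # Z -> Y; crucially, no X -> Y effect
    p = stats.linregress(x, y).pvalue  # tests the slope of Y on X
    rejections_05 += p < 0.05
    rejections_005 += p < 0.005

print(f"rejection rate at P<0.05 : {rejections_05 / n_studies:.3f}")   # near 1
print(f"rejection rate at P<0.005: {rejections_005 / n_studies:.3f}")  # still near 1
```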

Note that Benjamin et al. agree with the first of these claims. They mention some of these problems, agree that they are problems, and frankly admit that their proposal does nothing to address these or many other statistical issues. Model misspecification, in their view, ought to be tackled separately and independently of the decision rule convention. The authors also admit that these and related issues are “arguably bigger problems” than the choice of P-value. I think these are bigger problems in the sense specified above: model misspecification will afflict any choice of decision rule. This is important because the proposed policy shift may actually lead to more model misspecification. So, the issues interact and it is not so straightforward to tackle them separately.

P < 0.005 requires larger sample sizes (as the authors discuss), which are expensive and difficult to come by in many fields. In an effort to recruit more study participants, researchers may end up with samples that exhibit more bias -- less representative of the target population, not identically distributed, not homogeneous in the right ways, etc. Researchers may also be incentivized, given finite time and resources, to perform less model-checking and fewer diagnostics to make sure the likelihood is empirically adequate. Furthermore, the P-value critically depends on the tails of the relevant probability distribution. (That’s because the P-value is calculated based on the “extreme values” of the distribution of the test statistic under the null model.) The tails of the distribution are rarely exactly right at finite sample sizes, but they need to be “right enough.” With a low P-value threshold like 0.005, getting the tails of the distribution “right enough” to achieve the advertised false positive rate becomes less likely, because with 0.005 one considers outcomes further out into the tails. Finally, other problems which inflate false positive rates, like p-hacking and failure to correct for multiple testing, may be exacerbated by the lower threshold. The mechanisms are not all obvious -- perhaps, for example, making it more difficult to publish “positive” findings will incentivize researchers to probe a wider space of (mostly false) hypotheses in search of a “significant” one, thereby worsening the p-hacking problem -- but it is at least worth taking seriously that these factors may offset the envisaged benefits of P < 0.005. (I think there are some interesting things which may be said about why these considerations are less worrisome in particle physics, where the famous 5-sigma criterion plays a role in announcements. I’ll leave that aside for now.)
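Here is a minimal sketch of that tails point (my own illustration, with an arbitrarily chosen heavy-tailed distribution): suppose the test statistic's true sampling distribution is a t with 5 degrees of freedom, but the analyst computes p-values from the standard normal. The realized false positive rate exceeds the nominal rate at both thresholds, but proportionally far more at 0.005:

```python
# Illustrative sketch: why P < 0.005 leans harder on the tails. The test
# statistic's true null distribution is heavier-tailed (t with 5 df) than
# the normal distribution the analyst assumes when computing p-values.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
t_stats = rng.standard_t(df=5, size=1_000_000)  # true null distribution

# Two-sided p-values (wrongly) computed from a standard normal.
pvals = 2 * stats.norm.sf(np.abs(t_stats))

for alpha in (0.05, 0.005):
    realized = np.mean(pvals < alpha)
    print(f"nominal {alpha}: realized {realized:.4f} "
          f"({realized / alpha:.1f}x nominal)")
# Roughly: nominal 0.05 -> ~0.11 (about 2x); nominal 0.005 -> ~0.04 (about 8x).
```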

I’m not disputing any mathematical claim made by the authors. Indeed, for two decision rules like P < 0.05 and P < 0.005 applied to the same hypotheses, likelihood, and data, the more stringent rule will lead to fewer expected false positives. My point is just that implementing the new policy will change the likelihoods and data under consideration, since researchers will face the same pressure to publish significant results but publishing will be made more difficult in a kind of crude way.

This worry will be relevant for any decision threshold convention, and so it speaks against any strict uniform standard. However, Benjamin et al. raise the important point that “it is helpful for consumers of research to have a consistent benchmark.” My friend and colleague Liam Kofi Bright reinforces this point in his blog post: there are all sorts of communal benefits to having some mechanism which distinguishes “significant” results from “insignificant.” I’d like to propose a different kind of mechanism.

Sometimes statisticians casually entertain the idea of requiring “staff statistician reviewers” to review (the data analysis portions of) empirical articles submitted for publication. I think we can plausibly institutionalize a version of this practice, and it can function as a benchmarking procedure. Every journal will pay some number of professional statisticians (who should be otherwise employed at universities, research centers, etc.) to act as statistical reviewers, and specifically to interrogate issues of model specification, sample selection, decision procedures, robustness, and so on. Only when a paper receives a stamp of approval from two or more statistical reviewers should it count as having “passed the benchmark.” The institutionalization of this proposal would have some corollary benefits: there are a lot of professional statisticians who are employed on “soft money,” i.e., they have to raise parts of their salaries by applying for grants. This mechanism could partially replace that grant-cycle: journals would apply regularly every few years for funding from the NIH, NSF, and other funding agencies to compensate statistical reviewers (an amount dependent on the journal’s submission volume); the statisticians get to supplement their incomes with this funding rather than spend time applying for grants; and the public gets some comfort in knowing that the latest published results are not fraught with data analysis problems. I can imagine a host of other benefits too: e.g., statisticians will be inspired and motivated to direct their own research towards addressing live concerns shared by practicing empirical scientists, and the empirical scientists will be alerted to more sophisticated or state-of-the-art analytic methods. Statisticians’ review may also reduce the prevalence of NHST, in favor of some of the alternative analytical tools mentioned in Benjamin et al. The details of this proposed institutional practice need to be elaborated, but I conjecture it would be more effective at reducing false positives (and perhaps cheaper) than imposing P < 0.005 and requiring larger sample sizes across the board.

[I should acknowledge that, depending how my career goes, I could be the kind of person who is employed in this capacity. So: conflict of interest alert! Acknowledgements to Liam Kofi Bright, Jacqueline Mauro, Maria Cuellar, and Luis Pericchi.]

Monday, July 24, 2017

Supporting the Redefinition of Statistical Significance

Recently an article entitled `Redefining Statistical Significance' (RSS) has been made available. In this piece a diverse bunch of authors (including four philosophers of science -- represent) put forward an argument with the thesis: ``[f]or fields where the threshold for defining statistical significance for new discoveries is P<0.05, we propose a change to P<0.005.'' In this very brief note I just want to state my support for the broad principle behind this proposal and make explicit an aspect of their reasoning that is hinted at in RSS but which I think is especially worth holding clear in our minds.

RSS argues that, basically, rejecting the null at P<0.05 represents (by Bayesian standards) very weak evidence against the null and in favour of the hypothesis under test, and further that its communal acceptance as the standard significance level for discovery predictably and actually leads to unacceptably many false-positive discoveries. P<0.005 taken as the norm would go some way towards solving both these problems, and the authors emphasise most especially that it would bring false positive levels down to within what they deem to be more acceptable levels. RSS doesn't claim originality for these points, and is a short and very readable paper; I recommend checking it out.
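To give a feel for the Bayesian point, here is a quick sketch using one standard calibration of P-values against Bayes factors, the Sellke-Bayarri-Berger upper bound (this is in the spirit of the calculations in RSS, though not necessarily the exact calibration they use):

```python
# Quick illustration (a standard calibration, not necessarily RSS's own):
# the Sellke-Bayarri-Berger upper bound on the Bayes factor in favour of
# the alternative, BF <= 1 / (-e * p * ln p), valid for p < 1/e.
import math

def bayes_factor_bound(p: float) -> float:
    """Upper bound on the Bayes factor (alternative over null) at p-value p."""
    assert 0 < p < 1 / math.e
    return 1.0 / (-math.e * p * math.log(p))

for p in (0.05, 0.005):
    print(f"p = {p}: Bayes factor at most {bayes_factor_bound(p):.1f}")
# p = 0.05  -> at most ~2.5  (weak evidence against the null)
# p = 0.005 -> at most ~13.9 (much more substantial evidence)
```

Even on the most favourable calibration, that is, a result just clearing the P<0.05 bar can never amount to more than quite modest evidence against the null.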

The authors then have a section replying to objections. They note that they do not think that changing the significance level communally required for discovery claims is a cure-all, and deploy a number of brief but very interesting arguments against the counter-claim that the losses in terms of false negatives would outweigh the gains in avoiding false positives. This is all interesting stuff, but the point at which I wish to state my broad agreement comes when they consider the objection that ``The appropriate threshold for statistical significance should be different for different research communities.'' Here their response is to say that they agree in principle that different communities facing different sorts of puzzles ought use different norms for discovery claims, but note that many communities have settled on the idea that, given the sort of claims they are considering and the tests they can do, P<0.05 is an appropriate standard for discovery claims. They are addressing those communities in particular with their proposal, so are addressing communities which have already come to agree that they should share a standard for discovery claims.

My one small contribution here, then, is in following up on this point. They briefly note in their reply to this objection that `it is helpful for consumers of research to have a consistent benchmark.' I think this point deserves elaboration and emphasis, and it is why I think that, although I do not feel sufficiently expert to comment on the specific proposal they made, the broad contours of their argument are right. Why, after all, do we actually have to agree on a communal standard for what counts as an appropriate significance level for `claims of discovery of new effects' at all? Couldn't we leave that to the discretion of individual researchers? Or maybe foster for some time a diversity of standards across journals and let a kind of Millian intellectual marketplace do its work? To put it philosophically, why have something rather than nothing here?

I take it that a lot of what the communal standard is doing is providing a benchmark by which those not able to make expert or highly informed personal assessments of the claims and evidence can know that the hypothesis in question is confirmed to the standards of those who are able to make such assessments. These consumers of the research are those for whom the consistent benchmark helps. Especially for the kind of social scientific fields which have in fact adopted this benchmark, a pressing methodological consideration has to be that non-scientists, or folk not able to assess statistical claims, and more pointedly people in policy or culturally influential positions, will consume the research, and take actions based on what they believe to be reliable, or at least take action on the grounds of what convinces them. The trade off between Type 1 and Type 2 errors, then, must be made with it in mind that there is an audience of non-experts for the claims made in this field, an audience who will shape actions and lives and self-perceptions (in part) upon the results these fields put out. As a scientific community we must therefore decide what of our own work we think can be vouchsafed to these observers, or validated to the standard this cultural responsibility entails.

In theory, of course, we could still leave this up to individuals or allow for a diversity of standards among journals. But I think awareness of the scientific community's public role tends to speak against that. Such diversity, I'd wager, would either result in a cacophonic public discourse on science in which the media and commentators constantly reported results, then their failure to replicate, and then their replication once more (as well as contrary results, their failure to replicate...). This would be because the diversity of standards led to non-experts picking who to believe randomly among folk with different standards, or according to who they judged to have the flashiest smile, or whichever university PR department reached out to them last, or factionally choosing their favourite sources. Or, it would result in silence, as gradually scientific results came to be seen as too unreliable, too divided among themselves, to be worth paying much attention to at all. If you think that scientifically acquired information can make a positive difference to public discourse, either of these seems like a bad outcome. (The somewhat self-promoting Du Bois scholar nerd in me can't resist pointing out that Du Bois brought similar considerations to bear in responding to widespread failures of social scientific research in his day.) In fact, I think this epistemic environment makes a conservative attitude sensible, and speaks in favour of adopting a very low tolerance for false positives. This is because it is much harder to correct misinformation once it is out there than it is to defer announcing until we are more confident, and the very act of correction may induce the same loss-of-trust worry mentioned before. This means that, in addition to elaborating upon RSS's reply to an objection, and without feeling competent to quite judge whether P<0.005 in particular is the right standard, I also think the overall direction of change advocated by RSS is the right one, relative to where we are now.

Saturday, July 1, 2017

Decolonise Philosophy!

The following thoughts, prompted by this article, will (I suspect) almost all be super obvious to anybody who has been thinking about decolonising philosophy for an extended period of time. But my audience is largely composed of people, methinks, who do not regularly think about such things.

Lots of people would agree with the slogan ``We ought decolonise philosophy!'' but, philosophy being what it is, the meaning of the slogan is highly contentious. I'll work with one account thereof, based on this and related papers by Kwasi Wiredu, but bear in mind that it's not the only account of what it would take to decolonise philosophy that is out there. I think this particular account makes my point very stark, but something essentially similar to what I say would go through if I had worked with some other prominent accounts. Wiredu begins by saying that to decolonise African philosophy would be a matter of ``divesting African philosophical thinking of all undue influences emanating from our colonial past.'' This is then cashed out in terms of taking conscious control of the concepts deployed in philosophical reasoning, as well as the substantive positions covered and the questions asked, by means of subjecting them to critique via cross-cultural comparisons. The idea, basically, is to try and ferret out aspects of philosophical thinking now going on in Africa which can't earn their keep on their own merits but rather persist simply because the colonialists imposed them during their occupations -- and to ferret them out by using the fact that indigenous languages, conceptual schemes, and thought traditions have resources that can make incongruences stark by means of comparison, undermine false claims to necessity by evincing in practice alternate ways of going on, or may occupy regions of logical or conceptual space that the colonists never bothered to explore.

So, for instance, Wiredu argues that there are certain puzzles about existence or the nature of capital-b-Being which simply cannot arise if you are to formulate your thoughts in certain West African languages. In the essay linked, for instance, he argues that the notion of creation-ex-nihilo which has caused so much debate in philosophy of religion is nigh-on-incomprehensible if one tries to discuss it in his native Akan language. It is not that he thinks this therefore proves that those questions of Being are pseudo-puzzles, or that creation-ex-nihilo is impossible, but rather simply that it would be a colonial attitude to simply assume that this difference must be due to an expressive fault with the West African languages rather than a tendency to produce misleading linguistic confusions in the European traditions which concentrate on those puzzles, and on the basis of nothing more than this assumption work to import the European concept. If it is a genuine improvement on the indigenous conceptual scheme, that must be argued for. Further, having realised the incongruity, and not uncritically accepting the Western mode as just obviously superior, one can see whether and how thinking with the concept derived from one's own linguistic tradition would ramify through philosophical issues -- and in this paper he concludes, for instance, that attempts to harmonise or synthesise religions indigenous to Ghana and Christianity are probably not as coherent as some claim, but are relying on equivocation at key moments. In his own work he has applied this method to a number of other problems to interesting effect -- to give some of the provocative examples, he concludes that Descartes' cogito would fairly immediately have been seen to be an invalid argument had Descartes attempted to formulate it in a West African language, or on other occasions that the correspondence theory of truth is a tautology in an Akan language.

The point, then, is not simply to reject everything associated with the colonialists. (As he says, the emphasis in his initial definition of decolonising philosophy should be on the word `undue' before `influences'.) Rather, the point is to ensure that the tools we think with are up to task, and to use the availability of alternative tools as a means of facilitating test and comparison. So whether or not one ultimately ends up accepting the problems-posed-in-Western-languages as genuine or pseudo-problems, the decolonised philosophy is that which has used the conceptual resources and intellectual traditions of the formerly colonised nation to put itself in a position to consciously decide whether or not its inherited problems are worth pursuing in light of consideration of a fuller range of facts, rather than uncritically (or without due consideration of the facts adduced by considering the thought of the colonised) accepting the concepts, problems, and solution space given to it by Western tradition.

Before drawing out my intended moral, some comments on Wiredu's account as an account of decolonising philosophy. I think it does a pretty good job of rationalising a lot of what people tend to actually do under the aegis of decolonising the field (I usually see people try and change: (i) what is taught, and (ii) who does the teaching), since it is basically an attempt to leverage cognitive diversity in a way that tends to align with the various reform efforts now going on. Wiredu would, I think, also be of the opinion that this is a contribution to the broader project of decolonisation -- since the historic task of former colonies at this moment is to deal with the legacy of colonialism by taking the reins of history and no longer simply having Western modes of life and government imposed, but rather consciously weighing the colonists' mode of life against the indigenous tradition and attempting to forge a synthesis that allows for the best of both as far as is possible. That is to say, Wiredu's account of what African nations should be up to during periods of post-colonial modernisation looks a lot like his account of what African philosophy should be up to. He might therefore think that each can reinforce the other. If you do not agree with Wiredu on what broader cultural and political decolonisation means, I think one could reasonably fault this as failing to properly contribute to the broader decolonial project. I am not sure what I think the broader decolonial project will or should amount to, so I am agnostic on this point.

Ok here's the thing that Wiredu's account makes especially stark for me: I have never seen an account of decolonising philosophy that does not make it seem like it is just a generally desirable thing to do. I can understand why it is of especially pressing importance in departments in former colonies. But the thing Wiredu described just sounds like a corollary of enlightenment, assuming you don't a priori limit the capacity for interesting thought or concept creation to Westerners. (This would have to be a corollary of some version of the enlightenment that did not share the patronising assumptions of many actors within the actual historical enlightenment! Enlightenment itself is, of course, famously a concept subject to much critique.) Everyone, citizen or descendant of a former (or presently) colonised nation or not, should want to decolonise philosophy in Wiredu's sense. Could we still claim our mantle as true philosophers if we, as a matter of policy, uncritically made use of our inherited concepts? Can we really vow to just set aside pertinent information about the limits or oversights of our own conceptual scheme, or are we so sure that there is no pertinent information to be drawn from the kind of cross-cultural comparisons which examination of various world traditions makes possible? In addition to Wiredu's aforementioned work, I am reading David Wong's Natural Moralities at the moment, which through its comparative approach between Confucian and Western liberal ethics seems to be another proof in practice of the possibility of drawing pertinent information for philosophical puzzles from this -- and, by Wiredu's lights, is an instance of decolonising philosophy.

The only people I see talk about decolonising philosophy tend to be people from former colonies or right-on-lefties in the West. But when I read accounts of what decolonising philosophy would amount to, it seems like anybody committed to the enlightenment ideals held by most philosophers should likewise find themselves in full sympathy with this activity.