Friday, September 15, 2017

On The Case For Colonialism

There's a piece a lot of people are talking about called The Case For Colonialism. It is really not very good. I'm not signing the petition to have it retracted, for the reasons outlined at the end of the piece here. The piece linked there is also worth reading in its own right for its dismantling of the argumentative strategies of The Case For Colonialism. But I do think there will be some concerted campaign to paint the reaction to this article as one of leftists being unwilling to engage in fair consideration of the facts, so I just want to have some place I can write down my own reaction to this piece for ease of reference. I will focus in particular on the first section, wherein it is claimed that if we reappraise the total effects of colonial rule we will thereby realise it was net beneficial. (The second section is an extended argument to the effect that various post-colonial governments have been awful. No argument from me on that front, though of course a better article than the one under consideration would have spent more time reflecting on the kind of conditions that lead people to such desperate straits as to throw their weight behind the various wannabe tyrants who cropped up in the wake of colonialism - this would in fact also be relevant to the argument's claims about `subjective legitimacy', discussed below. The role of both Western and Soviet neocolonialism in maintaining many such regimes would also need discussing. The third section is an argument for `recolonisation' in some circumstances which largely depends on your buying that colonialism is a net good, and thus depends on the first section.) Much but not all of what follows can be found in the piece just linked to, but occasionally I would have put emphasis on different points or phrased things in a different manner, so I wanted to write my own take on things.


  • The Case For Colonialism contains historical infelicities. Guatemala, Libya, and Haiti are referred to as places that did not have a significant colonial history, a claim so genuinely bizarre that I wonder what it could mean, since charity forbids me from taking it at face value. It is at least suggested that Amílcar Cabral was involved in post-independence mismanagement of Guinea-Bissau, despite his having been assassinated before independence was achieved. Attention is restricted to various European colonies from the early 19th to mid 20th century; what about all the rest of colonial history, most especially and obviously the genocide in the Americas and the transatlantic slave trade this spurred? If one wants to re-evaluate the history of colonialism, then treating the actual history in so shoddy and loose a fashion immediately undercuts the entire purported point of the article.
  • As most commentators have noted, the most striking point is that it is morally pretty horrendous to call for Belgian reoccupation of the Congo, as the article does, albeit briefly and in passing. But it is worth noting just how bizarre the argument for this was. As far as I could tell it consisted entirely of noting that the government of the Congo has never managed to organise as efficient an army as the occupying Belgian forces once maintained. People have called the article akin to Holocaust denial, and that seems fair to me -- but it's more specifically akin to defending German actions in Poland on the basis that damn was that Blitzkrieg effective.
  • In general, for an article that purports to engage in or call for a `cost-benefit analysis' of colonialism, the weighing of the costs was very obviously inadequate. Genocidal wars of annihilation, as in the Americas or the Australasian continent, are not discussed at all; the various forms of slavery, forced labour, and gulags on the scale of nations are barely discussed; the lasting effects of various `divide and conquer' occupation tactics are not mentioned; nary a word on the wars spurred by imperial competition. And so on. This does not read like a serious attempt at the task it purportedly sets itself.
  • It is a little bit unclear to me whether the article purports to be actually engaging in the cost-benefit analysis, or calling on others to do so. It seems to suggest that it knows what the outcome of this would be, which suggests that the author feels the cost-benefit analysis has been carried out. Most people have read it that way, in which case the above criticism is activated -- it's a poor cost-benefit analysis that does not actually take into account costs. If, on the other hand, the claim is rather that somebody should carry out a cost-benefit analysis of colonialism, then note first that the article never actually defends the claim that we should engage in this activity, and one may object to it on Kantian grounds; but in any case, if this is what is going on, the article can be faulted for the more prosaic reason that it would then be simultaneously calling for the fair-minded consideration of a question and announcing, in advance of this consideration, what it takes the answer to be!
  • Relatedly, the author often notes that the European colonists did good things too in the process of colonisation. But I am reminded of Condorcet's response to this defence of colonialism in his own day: plainly it's not the case that the only way to spread technologies and ideas is through invasion, occupation, and resource extraction. It does not excuse the latter to note that you did the former, since by trade and the peaceable commerce of nations and peoples you could have achieved those goods without bringing about those bads. Despite this rebuttal to various `civilising mission' defences of colonialism being available already in the 18th century, the author of The Case For Colonialism never considers it.
  • This actually feeds into another complaint: the appeal to counter-factual reasoning was both spurious and barely thought through. As part of the defence of colonialism, the author thinks we ought think about how history would have gone absent colonialism. The method they propose for doing this is to compare the condition of nations that were colonised to those that were not. Many of the nations they propose specifically as counter-points were in fact colonised; it is in this context that Haiti is named as a nation with no significant colonial history, for instance. But even setting that aside, those that weren't colonised are often nations that were repeatedly invaded and menaced by colonial powers - China and Ethiopia are named, for instance. Taking this together with the above complaint about peaceable methods of cultural exchange never being considered, one is led to think that the rather bizarre counter-factual being set up is as follows: suppose the European nations had not colonised the various nations they did, but had engaged in any other sort of looting and invasion -- if there is ever a situation in which colonial rule seems preferable to this, then score-one-for-colonialism. Here, to be clear, I am engaging in some speculative positing as to what they are proposing, since the author (true to form in this poorly written and argued piece) has not precisely spelled out how we are to carry out their counter-factual reasoning. In my defence, it is genuinely hard to see what the relevant counter-factual is supposed to be wherein we are to consider the fate of China as it is now as a guide to Asanteman as it would be were it not for colonisation, and see in this a defence of colonialism.
  • The argument for the legitimacy of colonialism in the minds of the governed consists of two observations. First, after being conquered, people in the occupied territories would make use of the services that existed therein and take the jobs available. Second, testimony from a former governor of the Gold Coast to the effect that those under his sway quite liked the regime. The second of these is barely worth mentioning (should we assess the popular legitimacy of the Soviet occupation of Hungary by asking what a chief apparatchik thought of it?) and the first would be an argument in favour of the popular legitimacy of nearly every tyranny the world has ever seen.

So we have an appraisal of historical events that gets basic parts of the history wrong, ignores or passes over key events, purports to be a cost-benefit analysis while not actually factoring in costs, is naively credulous as to tyrants' self-affirmation, and advocates a mode of counter-factual reasoning that is both underspecified and, from what can be discerned, amounts to a non-sequitur. This is not good scholarship. I'll end here. This is more effort than I really intended to put into this, but I have now seen so many people saying that people are rushing to condemn the piece without giving its arguments consideration that I thought this worth setting out. I take this aspect of philosophy seriously, and think that a significant public role we should play is holding people to argumentative standards. For whatever that is worth, The Case For Colonialism does not meet those standards.

Friday, September 8, 2017

Spoiler

Here is a belief of mine that I think is pretty uncontroversial but which, it turns out, there is some pretty heated disagreement on within my friendship group. A spoiler for some piece of fiction is any bit of information (pertaining to events depicted) such that being told it beforehand significantly affects your experience of the fiction.

(Don't read too much into the `significantly' - I am just friends with philosophers, so have to qualify to rule out irritating Cambridge-spoilers; I don't think the difference between `experiencing the fiction knowing X' versus `experiencing the fiction not knowing X' is significant in all cases, and if you're being real neither do you. Ok.)

I think this definition broadly matches popular usage and some popular attempts at definition -- for instance this. But apparently when one draws out its consequences it becomes pretty controversial pretty quickly. Some examples of said controversial consequences.

First, historical information can constitute a spoiler. Knowing that Caesar gets stabbed, that the Titanic sinks, and that a complex series of battles, parliamentary reversals, and marriages results in a Lancastrian monarchy can all, in the right context, spoil works of fiction.

Second, we'll only know what all the spoilers are once we're dead. We never know which bits of information we are gaining now could turn out, in future, to affect our experience of some piece of fiction. Everything you learn is potentially a spoiler for some future tale. Life in a democracy is full of risks, and this is one of them.

Third, not quite a consequence but close: one can fully permissibly spoil things; it is not the case that it is always bad to spoil a work of fiction. Maybe it is bad always and everywhere to deliberately spoil a work of fiction (even if this bad can be overridden by other goods one thereby attains), but certainly giving away information which in fact constitutes a spoiler is not in itself even a prima facie bad in a great many scenarios. It may even be a good thing to do sometimes.

Fourth, spoiling is quite an individualistic affair. It depends on the peculiar character of the individual how they experience a work of fiction, and how their information bears on this; it does not depend (except in a derivative sense) on the intentions of the author of said fiction, nor on the nature of the information conveyed. Nothing is intrinsically a spoiler; it all depends on how it interacts with the individual and their mode of experiencing fiction.

Ok, there we go. I think all this is quite obvious, but frequent disagreement compels me to write it out in an easy-to-access place for future reference. Now you know, and knowing is half the battle.

EDIT: Thanks to Kenny Easwaran and Eric Schwitzgebel for pointing out amendments which I have incorporated into this definition. Keep them coming!

Saturday, August 12, 2017

Du Bois on Da Vinci

A quick write-up of a charming essay by the young Du Bois (from his time as a graduate student at Harvard), which I only found out about through the fascinating historical work of Trevor Pearce. The essay is entitled Leonardo Da Vinci As A Scientist and is available online here.

[Image: Leonardo Da Vinci -- ``I was even a pioneer in side-eye and general shade throwing.'']

Du Bois is concerned to argue that Da Vinci deserves credit as the founder of modern experimental science. The argument strategy is twofold. First, to show that Da Vinci has sufficient (and sufficiently impressive) scientific achievements to merit attention as an early scientist at all. This Du Bois achieves by reviewing historians' (apparently then, in 1889, relatively recent) reappraisal of Da Vinci's empirical work and his work inventing scientific machinery, and showing that it was indeed impressive. This in itself was interesting; I learned here, for instance, that Da Vinci was already floating the idea that the sublunary realm and the broader cosmos should be understood as operating on the same principles, that Da Vinci has a claim to being an early inventor of the telescope and also to being the first to notice a parallel between how the camera obscura works and the operations of the human eye, and that on the basis of observational study of plants Da Vinci was developing ideas about plant respiration which now seem to have been on the right track. Cool!

The second step in the argument, however, is the more philosophically and conceptually interesting. Here Du Bois' task is to argue that Da Vinci deserves credit not just as a link in a great chain of scientific workers, but rather some sort of special credit as a founder figure in one sense or another. Here the point is largely drawn out by comparison with three other figures: Roger Bacon, Gilbert of Colchester, and Francis Bacon. While Du Bois is impressed with each of these figures, he thinks they were each lacking in a certain way. Roger Bacon was not enough of an empiricist: to be credited as a founder of modern science, Du Bois feels, empiricism must be one's epistemological foundation, where for R. Bacon ``empiricism was but a branch of the tree of which philosophy was the trunk''. Gilbert of Colchester has, so to speak, the opposite problem -- he's all empiricism with no metatheory. While he's impressive in his collection of observational and experimental results, he's ``a mere experimenter, with little breadth of conception, or broad generalising powers''. F. Bacon, finally, came after Da Vinci, and is substantially the same in his metatheory (so Du Bois thinks! Please don't hurt me, Renaissance scholars), but just didn't achieve as much scientifically as Da Vinci. F. Bacon comes across, basically, as an especially talented expositor of Da Vincian method, but not himself worthy of the claim to priority on scientific method.

The philosophy of science the young Du Bois is working with is interesting, and worth making more explicit than Du Bois himself does in the essay. In Da Vinci, Nature had found itself a man who could do both: patient, skillful observational work, aided by machines of his own device, that uncovers particular facts of great interest and also general principles, together with explicit epistemological theorising of a sort which acknowledged and explained the importance of founding one's claims in such observations. Science, then, is the epistemologically self-conscious skillful application of empiricist method. R. Bacon was a skillful natural philosopher and epistemologically self-conscious, but not an empiricist. Gilbert of Colchester was a skillful empiricist, but did not evince the requisite degree of epistemological self-consciousness. F. Bacon was an epistemically self-conscious empiricist, but just not quite good enough at the actual application. Da Vinci was the first person in whom all these qualities met to a sufficient degree, or so Du Bois claims. (This essay also features a trait which is characteristic of all Du Bois' later work on social matters -- explicit reticence and diffidence, with frequent reminders that one ought be cautious about one's conclusions given the difficulties of gathering evidence and being sure it is complete or representative.)

[Image: W.E.B. Du Bois -- ``The idea that the person in this picture could ever be as enthusiastic about anything as the person who wrote that essay on Da Vinci is genuinely surprising.'']

I've worked on Du Bois' philosophy of science before, but I have never in my published work explicitly remarked on this undercurrent of empiricism. None the less, it is there; most especially it can be seen in his lifelong habit of issuing scathing condemnations of a priori approaches to history and sociology, where he thinks that prejudice unchecked by experience has been the source of much racist balderdash concerning African (and African-descended) folk. It is remarkable to think, then, how closely Du Bois' scientific and social mission accords with the early philosophy of science he developed here. For The Philadelphia Negro or Black Reconstruction can plausibly be described as epistemologically self-conscious skillful applications of empiricist method; in both these works (and many of his less famous essays besides) he mixes explicit methodological remarks, exhorting a more carefully and rigorously observationally grounded approach to the study of black life in America, with the actual collection of novel results about social, political, or economic conditions, and both of the highlighted works have (nowadays) come to be seen as classics of their respective fields. His work is thus epistemologically self-conscious in its empiricism, and involves the actual application of observational method, as well as its exhortation, and skillful performance thereof. The philosophy of science underlying this essay by the young Du Bois seems to have set a pattern that he attempted to live up to for the rest of his scientific career.

Da Vinci, of course, is not just a great scientist and engineer, but also a great artist. Du Bois was evidently aware of this, and this fact about him is mentioned at various points in the essay. Da Vinci is indeed paradigmatic of the Renaissance Man, the individual who strives to hone diverse skills to a high degree and exhibit a broad culture. In this respect too Du Bois seems to have followed Da Vinci, being more acclaimed for his literary style and humanistic moral and political vision than for his scientific career. Since I am attracted to the broad humanism of the Renaissance, and have great respect for Du Bois' work, seeing this essay where Du Bois develops his ideas about philosophy of science as part of an ode to Da Vinci and the Renaissance scientific humanism that Da Vinci pioneered was in its own way quite affecting for me. Even if I cannot match these figures in their skill, I hope to at least preserve and advance the spirit of humanistic inquiry that they each embodied.

Sunday, August 6, 2017

Significant Moral Hazard

What follows is a guest post by my comrade Dan Malinsky. After the recent publication of the paper `Redefine statistical significance', Malinsky and I attended a talk by one of the paper's authors. I found Malinsky's comments after the talk so interesting and thought-provoking that I asked him to write up a post so I could share it with all yinz. Enjoy!

--------------------------------------------------------------------------------------------------------------------------

Benjamin et al. present an interesting and thought-provoking set of claims. There are, of course, many complexities to the P-value debate but I’ll just focus on one issue here.

Benjamin et al. propose to move the conventional statistical significance threshold in null hypothesis significance testing (NHST) from P < 0.05 to P < 0.005. Their primary motivation for making this recommendation is to reduce the rate of false positives in published research. I want to draw attention to the possibility that moving the threshold to P < 0.005 may not have its intended effect: despite the fact that “all else being equal” such a policy should theoretically reduce false positive rates, in practice this move may leave the false positive rates unchanged, or even make them worse. In particular, the “all else being equal” clause will fail to hold, because the policy may incentivize researchers to make more errors of model specification, which will contribute to a high false positive rate. It is at least an open question which causal factors will dominate, and what the resultant false positive rate will really look like.

An important contributor to the high false positive rates in some areas of empirical research is model misspecification, broadly understood. By model misspecification I mean anything which might make the likelihood wrong: confounding, misspecification of the relevant parametric distributions, incorrect functional forms, sampling bias of various sorts, sometimes non-i.i.d.-ness, etc. In fact, these factors are more important contributors to the false positive rate than the choice of P-value convention or decision threshold, in the sense that any plausible decision rule, no matter how stringent (whether it is based on P-values, Bayes factors, or posterior probabilities), will lead to unacceptably high false positive rates if model misspecification is widespread in the field.
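
To make this concrete, here is a minimal simulation sketch (an illustrative toy added for this post, not anything from Benjamin et al.; the shock size and sample size are arbitrary choices): a one-sample t-test of a true null hypothesis, where the i.i.d. assumption fails because every observation in a sample shares a common “batch” shock that the analysis ignores.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def false_positive_rate(alpha, n=30, n_sims=20000, shock_sd=0.2):
    """Chance that a one-sample t-test of a TRUE null (mean = 0) rejects,
    when observations share a common shock the analysis wrongly ignores."""
    rejections = 0
    for _ in range(n_sims):
        shock = rng.normal(0.0, shock_sd)          # shared shock -> correlated data
        x = shock + rng.normal(0.0, 1.0, size=n)   # analyst models these as i.i.d.
        _, p = stats.ttest_1samp(x, 0.0)
        rejections += p < alpha
    return rejections / n_sims

for alpha in (0.05, 0.005):
    print(f"threshold {alpha}: actual false positive rate ~ {false_positive_rate(alpha):.3f}")
    # With these illustrative settings, roughly 0.17 at the 0.05 threshold and
    # 0.04 at the 0.005 threshold: both are several times the nominal rate,
    # and proportionally worse at the stricter threshold.
```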

Note that the authors Benjamin et al. agree on the first claim. Benjamin et al. mention some of these problems, agree that they are problems, and frankly admit that their proposal does nothing to address these or many other statistical issues. Model misspecification, in their view, ought to be tackled separately and independently of the decision rule convention. The authors also admit that these and related issues are “arguably bigger problems” than the choice of P-value. I think these are bigger problems in the sense specified above: model misspecification will afflict any choice of decision rule. This is important because the proposed policy shift may actually lead to more model misspecification. So, the issues interact and it is not so straightforward to tackle them separately.

P < 0.005 requires larger sample sizes (as the authors discuss), which are expensive and difficult to come by in many fields. In an effort to recruit more study participants, researchers may end up with samples that exhibit more bias -- less representative of the target population, not identically distributed, not homogeneous in the right ways, etc. Researchers may also be incentivized, given finite time and resources, to perform less model-checking and diagnostics to make sure the likelihood is empirically adequate. Furthermore, the P-value critically depends on the tails of the relevant probability distribution. (That’s because the P-value is calculated based on the “extreme values” of the distribution of the test statistic under the null model.) The tails of the distribution are rarely exactly right at finite sample sizes, but they need to be “right enough.” With a low P-value threshold like 0.005, getting the tails of the distribution “right enough” to achieve the advertised false positive rate becomes more unlikely, because with 0.005 one considers outcomes further out into the tails. Finally, other problems which inflate false positive rates, like p-hacking, failure to correct for multiple testing, and so on, may be exacerbated by the lower threshold. The mechanisms are not all obvious -- perhaps, for example, making it more difficult to publish “positive” findings will incentivize researchers to probe a wider space of (mostly false) hypotheses in search of a “significant” one, thereby worsening the p-hacking problem -- but it is at least worth taking seriously that these factors may offset the envisaged benefits of P < 0.005. (I think there are some interesting things which may be said about why these considerations are less worrisome in particle physics, where the famous 5-sigma criterion plays a role in announcements. I’ll leave that aside for now.)
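
The tail-sensitivity point admits of a small back-of-the-envelope check (again an illustrative toy, not a calculation from Benjamin et al.): suppose the test statistic is treated as standard normal but in truth has slightly heavier tails, say Student t with 10 degrees of freedom. The gap between nominal and actual error rates grows as the threshold moves further into the tails.

```python
from scipy import stats

# Nominal two-sided thresholds computed from a normal reference distribution,
# but evaluated against a slightly heavier-tailed truth (Student t, 10 d.o.f.).
for alpha in (0.05, 0.005):
    z_crit = stats.norm.ppf(1 - alpha / 2)    # normal-theory critical value
    actual = 2 * stats.t.sf(z_crit, df=10)    # true tail mass beyond that value
    print(f"nominal {alpha}: actual {actual:.4f} ({actual / alpha:.1f}x nominal)")
    # Roughly 1.6x nominal at 0.05, but roughly 3.7x nominal at 0.005: the
    # same modest misspecification bites harder further out in the tails.
```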

I’m not disputing any mathematical claim made by the authors. Indeed, for two decision rules like P < 0.05 and P < 0.005 applied to the same hypotheses, likelihood, and data, the more stringent rule will lead to fewer expected false positives. My point is just that implementing the new policy will change the likelihoods and data under consideration, since researchers will face the same pressure to publish significant results but publishing will be made more difficult in a kind of crude way.

This worry will be relevant for any decision threshold convention, and so it speaks against any strict uniform standard. However, Benjamin et al. raise the important point that “it is helpful for consumers of research to have a consistent benchmark.” My friend and colleague Liam Kofi Bright reinforces this point in his blog post: there are all sorts of communal benefits to having some mechanism which distinguishes “significant” results from “insignificant.” I’d like to propose a different kind of mechanism.

Sometimes statisticians casually entertain the idea of requiring “staff statistician reviewers” to review (the data analysis portions of) empirical articles submitted for publication. I think we can plausibly institutionalize a version of this practice, and it can function as a benchmarking procedure. Every journal would pay some number of professional statisticians (who should be otherwise employed at universities, research centers, etc.) to act as statistical reviewers, and specifically to interrogate issues of model specification, sample selection, decision procedures, robustness, and so on. Only when a paper receives a stamp of approval from two or more statistical reviewers should it count as having “passed the benchmark.” The institutionalization of this proposal would have some corollary benefits: there are a lot of statistician professionals who are employed with “soft money,” i.e., they have to raise parts of their salaries by applying for grants. This mechanism could partially replace that grant-cycle: journals would apply regularly every few years for funding from the NIH, NSF, and other funding agencies to compensate statistical reviewers (an amount dependent on the journal’s submission volume); the statisticians would get to supplement their incomes with this funding rather than spend time applying for grants; and the public would get some comfort in knowing that the latest published results are not fraught with data analysis problems. I can imagine a host of other benefits too: e.g., statisticians would be inspired and motivated to direct their own research towards addressing live concerns shared by practicing empirical scientists, and the empirical scientists would be alerted to more sophisticated or state-of-the-art analytic methods. Statisticians’ review may also reduce the prevalence of NHST, in favor of some of the alternative analytical tools mentioned in Benjamin et al. The details of this proposed institutional practice need to be elaborated, but I conjecture it would be more effective at reducing false positives (and perhaps cheaper) than imposing P < 0.005 and requiring larger sample sizes across the board.

[I should acknowledge that, depending on how my career goes, I could be the kind of person who is employed in this capacity. So: conflict of interest alert! Acknowledgements to Liam Kofi Bright, Jacqueline Mauro, Maria Cuellar, and Luis Pericchi.]

Monday, July 24, 2017

Supporting the Redefinition of Statistical Significance

Recently an article entitled `Redefine Statistical Significance' (RSS) has been made available. In this piece a diverse bunch of authors (including four philosophers of science - represent) put forward an argument with the thesis: ``[f]or fields where the threshold for defining statistical significance for new discoveries is P<0.05, we propose a change to P<0.005.'' In this very brief note I just want to state my support for the broad principle behind this proposal and make explicit an aspect of their reasoning that is hinted at in RSS but which I think is especially worth holding clear in our minds.

RSS argues that, basically, rejecting the null at P<0.05 represents (by Bayesian standards) very weak evidence against the null and in favour of the hypothesis under test, and further that its communal acceptance as the standard significance level for discovery predictably and actually leads to unacceptably many false-positive discoveries. P<0.005 taken as the norm would go some way towards solving both these problems, and the authors emphasise most especially that it would bring false positive levels down to within what they deem to be more acceptable levels. RSS doesn't claim originality for these points, and is a short and very readable paper; I recommend checking it out.
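
To get a feel for the weak-evidence point, one can use the Sellke-Bayarri-Berger upper bound on the Bayes factor, which is in the family of calibrations RSS draws on (though I should flag that this particular back-of-the-envelope is my own choice of illustration, not a reproduction of their calculation): for P-values below 1/e, the Bayes factor in favour of the alternative is at most 1/(-e p ln p).

```python
import math

def bayes_factor_bound(p):
    """Sellke-Bayarri-Berger bound: for p < 1/e, the Bayes factor in favour
    of the alternative hypothesis is at most 1 / (-e * p * ln(p))."""
    assert 0 < p < 1 / math.e
    return 1.0 / (-math.e * p * math.log(p))

for p in (0.05, 0.005):
    print(f"p = {p}: Bayes factor at most {bayes_factor_bound(p):.1f}")
# p = 0.05  -> at most ~2.5, weak evidence by the usual Bayesian conventions
# p = 0.005 -> at most ~13.9, approaching what is conventionally called strong
```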

The authors then have a section replying to objections. They note that they do not think that changing the significance level communally required for discovery claims is a cure-all, and deploy a number of brief but very interesting arguments against the counter-claim that the losses in terms of false-negatives would outweigh the gains in avoiding false positives. This is all interesting stuff, but the point at which I wish to state my broad agreement comes when they consider the objection that ``The appropriate threshold for statistical significance should be different for different research communities.'' Here their response is to say that they agree in principle that different communities facing different sorts of puzzles ought use different norms for discovery claims, but note that many communities have settled on the idea that, given the sort of claims they are considering and tests they can do, P<0.05 is an appropriate standard for discovery claims. They are addressing those communities in particular with their proposal, so are addressing communities which have already come to agree that they should share a standard for discovery claims.

My one small contribution here, then, is in following up on this point. They briefly note in their reply to this objection that -- `it is helpful for consumers of research to have a consistent benchmark.' I think this point deserves elaboration and emphasis, and it is why I feel that, although I do not feel sufficiently expert to comment on the specific proposal they made, the broad contours of their argument are right. Why, after all, do we actually have to agree on a communal standard for what counts as an appropriate significance level for `claims of discovery of new effects' at all? Couldn't we leave that to the discretion of individual researchers? Or maybe foster for some time a diversity of standards across journals and let a kind of Millian intellectual marketplace do its work? To put it philosophically, why have something rather than nothing here?

I take it that a lot of what the communal standard is doing is providing a benchmark whereby those not able to make an expert or highly informed personal assessment of the claims and evidence can know that the hypothesis in question is confirmed to the standards of those who are able to make such assessments. These consumers of the research are those for whom the consistent benchmark helps. Especially for the kind of social scientific fields which have in fact adopted this benchmark, a pressing methodological consideration has to be that non-scientists, or folk not able to assess statistical claims, and more pointedly people with policy or culturally influential positions, will consume the research, and take actions based on what they believe to be reliable, or at least take action on the grounds of what convinces them. The trade-off between Type 1 and Type 2 errors, then, must be made with it in mind that there is an audience of non-experts for the claims made in this field, an audience who will shape actions and lives and self-perceptions (in part) upon the results these fields put out. As a scientific community we must therefore decide what of our own work we think can be vouchsafed to these observers, or validated to the standard this cultural responsibility entails.

In theory, of course, we could still leave this up to individuals or allow for a diversity of standards among journals. But I think awareness of the scientific community's public role tends to speak against that. Such diversity, I'd wager, would either result in a cacophonic public discourse on science, in which the media and commentators constantly reported results, then their failure to replicate, and then their replication once more (as well as contrary results, their failure to replicate...). This because the diversity of standards would lead to non-experts picking whom to believe randomly among folk with different standards, or according to who they judged to have the flashiest smile, or whichever university PR department reached out to them last, or factionally choosing their favourite sources. Or it would result in silence, as scientific results gradually came to be seen as too unreliable, too divided among themselves, to be worth paying much attention to at all. If you think that scientifically acquired information can make a positive difference to public discourse, either of these seems like a bad outcome. (The somewhat self-promoting Du Bois scholar nerd in me can't resist pointing out that Du Bois brought similar considerations to bear in responding to widespread failures of social scientific research in his day.) In fact, I think this epistemic environment makes a conservative attitude sensible, and speaks in favour of adopting a very low tolerance for false positives. This is because it is much harder to correct misinformation once it is out there than it is to defer announcing until we are more confident, and the very act of correction may induce the same loss-of-trust worry mentioned before. This means that in addition to elaborating upon RSS' reply to an objection, and without feeling competent to judge whether P<0.005 in particular is the right standard, I also think the overall direction of change advocated by RSS is the right one, relative to where we are now.

Saturday, July 1, 2017

Decolonise Philosophy!

The following thoughts, prompted by this article, will (I suspect) almost all be super obvious to anybody who has been thinking about decolonising philosophy for an extended period of time. But my audience is largely composed of people, methinks, who do not regularly think about such things.

Lots of people would agree with the slogan ``We ought decolonise philosophy!'' but, philosophy being what it is, the meaning of the slogan is highly contentious. I'll work with one account thereof, based on this and related papers by Kwasi Wiredu, but bear in mind that it's not the only account of what it would take to decolonise philosophy that is out there. I think this particular account makes my point very stark, but something essentially similar to what I say would go through if I had worked with some other prominent accounts. Wiredu begins by saying that to decolonise African philosophy would be to engage in ``divesting African philosophical thinking of all undue influences emanating from our colonial past.'' This is then cashed out in terms of taking conscious control of the concepts deployed in philosophical reasoning, as well as the substantive positions covered and the questions asked, by means of subjecting them to critique via cross-cultural comparisons. The idea, basically, is to try and ferret out aspects of philosophical thinking now going on in Africa which can't earn their keep on their own merits but rather persist simply because the colonialists imposed them during their occupations -- and to ferret them out by using the fact that indigenous languages, conceptual schemes, and thought traditions have resources that can make incongruences stark by means of comparison, undermine false claims to necessity by evincing in practice alternate ways of going on, or may occupy regions of logical or conceptual space that the colonists never bothered to explore.

So, for instance, Wiredu argues that there are certain puzzles about existence or the nature of capital-b-Being which simply cannot arise if you formulate your thoughts in certain West African languages. In the essay linked, for instance, he argues that the notion of creation-ex-nihilo which has caused so much debate in philosophy of religion is nigh-on incomprehensible if one tries to discuss it in his native Akan language. It is not that he thinks this therefore proves that those questions of Being are pseudo-puzzles, or that creation-ex-nihilo is impossible. Rather, his point is that it would be a colonial attitude to simply assume that this difference must be due to an expressive fault with the West African languages, rather than a tendency to produce misleading linguistic confusions in the European traditions which concentrate on those puzzles, and on the basis of nothing more than this assumption work to import the European concept. If it is a genuine improvement on the indigenous conceptual scheme, that must be argued for. Further, having realised the incongruity, and not uncritically accepting the Western mode as just obviously superior, one can see whether and how thinking with the concept derived from one's own linguistic tradition would ramify through philosophical issues -- and in this paper he concludes, for instance, that attempts to harmonise or synthesise religions indigenous to Ghana and Christianity are probably not as coherent as some claim, but are relying on equivocation at key moments. In his own work he has applied this method to a number of other problems to interesting effect -- to give some of the provocative examples, he concludes that Descartes' cogito would fairly immediately have been seen to be an invalid argument had Descartes attempted to formulate it in a West African language, or on other occasions that the correspondence theory of truth is a tautology in an Akan language.

The point, then, is not simply to reject everything associated with the colonialists. (As he says, the emphasis in his initial definition of decolonising philosophy should be on the word `undue' before `influences'.) Rather, the point is to ensure that the tools we think with are up to the task, and to use the availability of alternative tools as a means of facilitating test and comparison. So whether or not one ultimately ends up accepting the problems-in-Western-languages as genuine or dismissing them as pseudo-problems, the decolonised philosophy is that which has used the conceptual resources and intellectual traditions of the formerly colonised nation to put itself in a position to consciously decide whether or not its inherited problems are worth pursuing in light of consideration of a fuller range of facts, rather than uncritically (or without due consideration of the facts adduced by considering the thought of the colonised) accepting the concepts, problems, and solution space given to it by Western tradition.

Before drawing out my intended moral, some comments on Wiredu's account as an account of decolonising philosophy. I think it does a pretty good job of rationalising a lot of what people tend to actually do under the aegis of decolonising the field (I usually see people try and change: (i) what is taught, and (ii) who does the teaching), since it is basically an attempt to leverage cognitive diversity in a way that tends to align with the various reform efforts now going on. Wiredu would, I think, also be of the opinion that this is a contribution to the broader project of decolonisation -- since the historic task of former colonies at this moment is to deal with the legacy of colonialism by taking the reins of history and no longer simply having Western modes of life and government imposed, but rather consciously weighing the colonists' mode of life against the indigenous tradition and attempting to forge a synthesis that allows for the best of both as far as is possible. That is to say, Wiredu's account of what African nations should be up to during periods of post-colonial modernisation looks a lot like his account of what African philosophy should be up to. He might therefore think that each can reinforce the other. If you do not agree with Wiredu on what broader cultural and political decolonisation means, I think one could reasonably fault this as failing to properly contribute to the broader decolonial project. I am not sure what I think the broader decolonial project will or should amount to, so I am agnostic on this point.

Ok, here's the thing that Wiredu's account makes especially stark for me: I have never seen an account of decolonising philosophy that does not make it seem like it is just a generally desirable thing to do. I can understand why it is of especially pressing importance in departments in former colonies. But the thing Wiredu described just sounds like a corollary of enlightenment, assuming you don't a priori limit the capacity for interesting thought or concept creation to Westerners. (This would have to be a corollary of some version of the enlightenment that did not share the patronising assumptions of many actors within the actual historical enlightenment! Enlightenment itself is, of course, famously a concept subject to much critique.) Everyone, whether a citizen or descendant of a formerly (or presently) colonised nation or not, should want to decolonise philosophy in Wiredu's sense. Could we still claim our mantle as true philosophers if we, as a matter of policy, uncritically made use of our inherited concepts? Can we really vow to just set aside pertinent information about the limits or oversights of our own conceptual scheme, or are we so sure that there is no pertinent information to be drawn from the kind of cross-cultural comparisons which examination of various world traditions makes possible? In addition to Wiredu's aforementioned work, I am reading David Wong's Natural Moralities at the moment, which through its comparative approach to Confucian and western liberal ethics seems to be another proof in practice of the possibility of drawing pertinent information for philosophical puzzles from such comparisons -- and, by Wiredu's lights, is an instance of decolonising philosophy.

The only people I see talk about decolonising philosophy tend to be people from former colonies or right-on-lefties in the West. But when I read accounts of what decolonising philosophy would amount to it seems like anybody committed to the enlightenment ideals held by most philosophers should likewise find themselves engaged in full sympathy with this activity.

Tuesday, June 13, 2017

Remonstration

A recent conversation with some friends has me thinking about roles we can fruitfully play as philosophers of science. I just thought I'd write up in a blog post my thoughts on something that came out of that, which is a role we sometimes play that I feel is not often enough highlighted.

In philosophy we learn about tools and methods of critical thinking and argument construction and evaluation. For instance, a standard part of philosophical training is going through some basic logic. You should learn therein what it takes for an argument to be valid, and, going in the other direction, how one can demonstrate the invalidity of an argument by constructing counter-models. (If this doesn't mean anything to you, I will be going through an example later in this post!) That is just part of basic philosopher training. If you go into philosophy of science you will further specialise, perhaps learning about experimental technique, statistical methods, or theories of confirmation along the way. All of these can put somebody in a decent enough position to evaluate the cogency of arguments that scientists put forward, providing one familiarises oneself with the particular theoretical background the scientists one is evaluating are working within.

And it matters that scientists are making cogent arguments! Science has a lot of social cachet; with some well noted exceptions, folk trust scientists and will tend to believe claims that scientists put forward about the world. What scientists conclude is therefore deeply significant to our worldview and senses of self. Further, in many spheres of life we base policies on recommendations from scientific experts. Just the enterprise of science itself involves moving huge amounts of people and resources around, and the opportunity cost of having all these smart folk spend their time in this way rather than on other socially valuable tasks is itself huge. We want scientists to be basing their claims, recommendations, and activities on sound argumentation and good reasoning, so as to ensure that this cachet and those resources are used as best they can be.

So then, putting these two together, we get a natural thought about how philosophers of science should use our skills. We should monitor the arguments scientists make, and where we find that their methods or modes of argument are not capable of supporting the conclusions or recommendations they are making in light of those arguments, we should bring to bear our expertise in the evaluation of inferences or arguments (broadly construed) on calling this out and suggesting better practice for the future. (I recall reading, though I do not recall where, E.O. Wilson once writing that this is exactly what he thought of as the point of philosophers of science: people looking over his shoulder saying `Oh no, I don't think this is good enough, what about such and such counter argument, eh?' He noted that while this could be pretty irritating in the moment, on reflection he thought it valuable.) I call this kind of thing `remonstration'; it's a kind of `speak truth to power!' norm, and I think we should see it as a valuable part of our mission as philosophers of science.

I am going to go through an example from my own work in a bit of detail below, but for some more illustrious examples one might want to check out: Clark Glymour's critique of the statistical reasoning that underlay the famous Bell Curve book and much of the rest of social psychology at the time, Nancy Cartwright's long running project critically evaluating the limitations of randomised control trials for medical or social research, or Roman Frigg's work (discussed, say, at the end of this excellent episode of the generally excellent Sci Phi podcast) on over-confident and over-specific claims made on the basis of models of climate change. 

But for an example of remonstration I am most familiar with (and also to allow me to explain and slightly reframe this previously published work of mine) I'd like to go through my paper On Fraud. One of the motivations for that paper was thinking about claims currently being made about how we should deal with the replication crisis in social psychology. Broadly, lots of claims in social psychology that were thought to have been securely established are being found not to stand up to sustained scrutiny when people attempt to replicate the initial experiments which led to their acceptance, or redo the statistical analyses with bigger/better data sets. In thinking about why this is occurring, a number of scientists have come to conclude that one (but not the only) source of the problem is -- scientists are not just seeking the truth for its own sake, but instead being encouraged to pursue credit (esteem, reward, glory, social recognition by their peers in the scientific community) by various features of the incentive structure of science. This pursuit of credit itself incentivises bad research practices, ranging from the careless to the outright fraudulent. If only we could remove these rival incentives which are causing the misconduct, and instead encourage pure pursuit of the truth, we'd have removed the incentive to involve oneself in such research misconduct. Since I had seen some very similar arguments come up before in my more historical scholarship on W.E.B. Du Bois, my interest was very much piqued and I got to thinking about whether this argument should be accepted as a sound basis of science policy.

I came to conclude that the psychologists and sociologists of science making these arguments were making a subtle mistake in how they reasoned about policy in light of scientific evidence. They were doing good empirical work tracing out the causes of much of the research malpractice we witness in science. But on the basis of this they were concluding that if we removed the actual causes of fraud we'd see less fraud. That is to say, they were establishing premises about the causes of fraud in the actual world, and concluding that a policy which intervened on (in fact removed or greatly lessened) these causes would mean that there would be less fraud after our intervention. After all, it's a natural thought; if X was what was causing the fraud and now there's no more (or much less) X, well you've removed the cause and so you should remove the effect, right?  Not so. Such arguments are not valid -- their premises can all be true, while their conclusion is false. So I constructed a counter-model, which is to say a model which shows that all of their premises can be true while their conclusion is false.

Without going into too much detail, I produced a model of people gathering evidence and deciding whether or not to honestly reveal what evidence they received when they go to publish. Fraud is an extreme form of malpractice, of course, but it would do no harm to my arguments to interpret the agents as deciding whether or not to engage in milder forms of data fudging or other research malpractice. We can model the agents as pure credit seekers: they just want to gain the glory of being seen to make a discovery. Or we can model them as pure truth seekers: they just want the community to believe the truth about nature. (We can also consider mixed agents in the model, but set that aside.) In the model credit seeking can indeed incentivise fraud, and for the sake of the counter-model we may grant that in the actual world all fraud is incentivised in this way. But what I show is that in this model, even if we suppose that there were some policy that could successfully turn all scientists into pure truth seekers, it does not guarantee that there is less fraud -- in fact truth seeking can, in some especially worrying circumstances, actually lead to more fraud!
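
To give a feel for how truth seeking alone might favour misreporting, here is a toy calculation -- emphatically not the model from On Fraud itself, just a minimal sketch in its spirit, with made-up numbers. A pure truth seeker who is nearly sure a hypothesis is true runs an experiment, observes a contrary result, and asks which report would leave the community's credence closest to the truth by her own lights:

```python
def update(prior, signal_favours_h, reliability):
    """Bayesian update of a credence in hypothesis H on a binary signal
    that points the right way with the given reliability."""
    like_h = reliability if signal_favours_h else 1 - reliability
    like_not_h = (1 - reliability) if signal_favours_h else reliability
    return prior * like_h / (prior * like_h + (1 - prior) * like_not_h)

AGENT_PRIOR = 0.95       # the scientist is nearly sure H is true
COMMUNITY_PRIOR = 0.5    # the community is undecided
RELIABILITY = 0.8        # chance the experiment points the right way

# The scientist runs the experiment and the signal comes out AGAINST H;
# she updates, but still thinks H is probably true (about 0.83).
agent_credence = update(AGENT_PRIOR, signal_favours_h=False, reliability=RELIABILITY)

for report_favours_h in (False, True):   # honest report vs. misreport
    # The community takes the report at face value and updates on it.
    community_credence = update(COMMUNITY_PRIOR, report_favours_h, RELIABILITY)
    # Expected community credence in the truth, by the agent's own lights.
    expected_accuracy = (agent_credence * community_credence
                         + (1 - agent_credence) * (1 - community_credence))
    label = "misreport    " if report_favours_h else "honest report"
    print(f"{label}: expected community accuracy = {expected_accuracy:.3f}")
# honest report: ~0.30; misreport: ~0.70. With these made-up numbers, a pure
# truth seeker expects the community to end up MORE accurate if she lies.
```

With these numbers, concern for the community's grasp of the truth is itself an incentive to misreport; no credit motive is needed. Again, this is only an illustrative sketch -- the actual model in the paper is different and more careful.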

There is a general lesson here, in fact, that I wish I had done more to bring out in the paper. The point is: if you are basing policy on empirical research, it is tempting to think that what you need to know is whether the policy would be effective in the actual world. That, after all, is where you will be implementing the policy! But that's the wrong causal system for evaluating the effects of your proposed policy. What you need to know is whether the policy would be effective in the world (or causal system) that will exist after the policy is implemented. In the actual world -- sure, credit seeking is causing malpractice. But the fact that you remove that incentive to commit fraud does not by itself mean you've removed all incentive to commit fraud. It may be that in the world that exists after this intervention there are new temptations to commit fraud. Truth seeking itself may be one of them. Policy relevant causal information must include counter-factual information: information about the world that will exist after a not-yet-implemented policy has been carried out.

If you want the real details of my argument -- read the paper! But what I want to note here is how this is me trying to be the change I want to see in philosophy of science. I found some scientists making policy recommendations in virtue of their empirical research (in this case it was policy affecting science itself). I thought about the structure of their arguments, and realised they were making implicit assumptions about counter-factual reasoning. A general philosophy education gives you tools for reasoning about counter-factuals, so I could bring those to bear. What is more, the general critical thinking (or logic) training that is part of being a philosopher points the way to counter-model construction as a means of critiquing arguments. Finally, discipline-specific training in the philosophy of the social sciences gave me tools for building models of social groups, which was what was of particular relevance here. I was therefore able to remonstrate, to bring to bear my training in calling attention to an error in scientists' reasoning, and what's more an error that (since it was supposed to be the basis of policy) has the potential to be of some social and opportunity cost. I don't claim, of course, that this is the best example of remonstration in the literature (c.f. my illustrious colleagues above!) -- but I hope going through an example I am intimately familiar with in depth gives people a better sense of how philosophy of science as remonstration is a good use of our disciplinary tools and expertise.

Now, it is certainly not my claim that only philosophers of science engage in this kind of remonstration. Statisticians very often engage in a very similar activity -- Andrew Gelman's blog alone is full of it. There is also a fine tradition of scientific whistleblowers who call foul when misconduct is afoot. Remonstrating with scientists whose reasoning has, for one reason or another, gone astray, ought not be, and fortunately is not in fact, left to philosophers alone. And, in case it needs to be said, nor is this (or nor ought this be) all of what philosophers of science get up to. Most of my own work, for instance, is not remonstration.

But when I see accounts of the tasks of philosophy of science they typically fall into one of three categories. Concept construction or clarification, where the goal is something like producing or improving a tool that might help scientists do their job better. Scientific interpretation, where the goal is to do something like provide an understanding of scientific work that would make sense of the results of scientific activity, and tell us what the world would be like if our best evidenced theories were to be true. And meta-science, where the goal is to do something like provide an explanatory theory which tells us why it is that scientists reason (or ought to reason) in some ways rather than others. All of these can be valuable and I hope philosophers of science keep doing them. And I can even understand why people aren't keen to advertise the disciplinary mission of remonstration: it makes us into the stern humourless prigs of science, somewhat akin to Roosevelt's critic on the sideline hating on the folk actually getting stuff done. But, since I think it can be good and necessary, I hope that, even if it doesn't win us friends, we hold true -- along with our comrades elsewhere in the academy, and with our eye on the social good -- to the mission of remonstrating against scientific overreach, malpractice, or just plain old error, wherever we should see these arise.