Religion, science, and nonbelief

So you know that thing?  That thing when you see a link and you know reading it is going to infuriate you, but you click on it anyways and end up even more frustrated and angry than you expected?

Well, an acquaintance on Facebook posted a link to a New Yorker op-ed entitled “All Scientists Should be Militant Atheists.”  (Trigger warning for casual Islamophobia.)  The author introduces his argument by saying “as a physicist, I do a lot of writing and public speaking about the remarkable nature of our cosmos, primarily because I think science is a key part of our cultural heritage and needs to be shared more broadly. Sometimes, I refer to the fact that religion and science are often in conflict; from time to time, I ridicule religious dogma.”  Really, I should have just stopped there, but sometimes you just can’t not make yourself angry by reading the internet.

I think I’m going to do a separate ‘talking to imaginary Islamophobic strawmen’ post about the casual Islamophobia and the tendency to fall back on ‘but Islamic extremism!’ as a trope.  In this post, I want to focus instead on the related assumptions that 1.) science is essentially rational/unbiased and that 2.) this essentially rational/unbiased nature means it must inevitably come into conflict with religion.

But first, a short disclaimer: I don’t care if you’re an atheist.  If you consider nonbelief the best expression of your personal beliefs and experience of the day-to-day world, that’s great.  You do you.  However, it’s one thing to say “I’m an atheist and shouldn’t experience undue social or legal pressure because of that identity” and quite another to say “everyone should be an atheist and here’s why.”  The author of this piece is clearly making the second argument, and that’s what opens his argument up to public scrutiny.  I think it’s important to state this distinction clearly because many apologetic arguments (which is essentially what the second argument is – a claim of superiority for a specific community that invites others to join in order to be correct) get away with occupying space in public discourse without scrutiny by falling back on “these are my beliefs and I’m entitled to my beliefs” as a defense.  So again, just to be clear – you’re entitled to your beliefs about you.  You’re not entitled to make claims about what everyone else ‘should’ do without expecting a response to and/or rejection of those claims.

The author clearly identifies himself with the problematically-named ‘militant atheist’ movement, and here I actually agree with him – I don’t think the term ‘militant atheist’ is helpful because, at least as far as I’ve experienced this community, they’re not militant.  I’ve never read or heard anything from Richard Dawkins, Christopher Hitchens, Penn Jillette, or any of the other recognizable leaders of this movement calling for organized, violent action.  What they are, however, is missionizing, a term I suspect they might resist even more.  It is the case that ‘missionizing’ and the related terms ‘proselytizing’ and ‘indoctrination’ all carry Christian connotations – that’s due in large part to the English language’s historical connection to English Christianity – but these terms still come the closest to expressing what this particular community is trying to do.  Again, they’re not just asking for fair treatment and respect for their identity as atheists; they’re telling everyone else to be atheists.

I think placing this atheist movement in the broader history of missionizing movements is also helpful because many of their core arguments are not original, but rather characteristic of missionizing as a tradition – in particular, the claim to be the most rational approach to the divine.  As someone who studies apologetics, I’ve read works explaining why Christianity is more rational than Greek polytheism, why Greek polytheism is more rational than Christianity, why Christianity is more rational than Judaism, why Christianity is more rational than Islam, why Islam is more rational than Christianity or Judaism, and why atheism is more rational than Christianity or Islam[1].  When you read a whole lot of these works together, it becomes much clearer how authors use and reuse the same rhetorical tricks to make their point.

I think the use of ‘rational’ as a defense for your beliefs is powerful for the same reason it’s problematic – it feels like it should be a good guideline, but trying to define it is essentially impossible.  If we take ‘rational’ to mean “based on reason or logic,” we’re immediately faced with the problem that in real logical arguments, you have to define axioms and work within them.  There is no naturally-occurring logic – it’s a set system of rules that the user then chooses to work within.  In arguments about religion, the two participants (or, more often with apologetics, the author and their made-up opponent) are often using two completely different sets of assumptions, so the fact that one side can build a rational argument for their faith from those assumptions really isn’t significant – it just demonstrates that those assumptions are capable of sustaining a rational argument, not that the opposing set of assumptions is any better or worse at sustaining one.

More often, this claim of ‘rational’ is used more generally to mean something like ‘common sense,’ the idea that we, as humans, are able to sense what’s a more or less logical idea, and that this carries with it some kind of value judgment.  This is an even more problematic assumption because 1) plenty of true concepts are really difficult to comprehend (more on that below) and 2) if ‘common sense’ did exist, people would just default to agreement, which is clearly not the case.  I think point number 2 is particularly well illustrated by atheism – religions can claim divine inspiration to explain why some (seemingly intelligent) people believe in them and some (seemingly intelligent) people don’t, but for nonbelief, if your argument is that the nonexistence of God is obviously true, everyone should eventually arrive at that idea, in the same way we all eventually learn ‘I shouldn’t touch hot things’ or ‘chewing on foil hurts your teeth.’  Common sense ideas are just that – common.  Either you know them from testing them yourself or from hearing about someone who did.

This gets us back to point number 1, and the fundamental flaw with the idea that science is essentially rational/unbiased – common sense ideas are generally pretty simple because many complex but true things are really hard to comprehend, and often counterintuitive.  Common sense tells me that things fall down because of something that’s below me dragging them down.  I can also kind of see how there could be something above me pushing them down.  But the reality of general relativity is far more complicated than that, and Einstein’s elevator thought experiment still blows my mind just a little bit.  Similarly, common sense evolves over time as our understanding evolves – the idea that germs make you sick makes perfect sense if you grew up knowing what cells and molecules are, but without that knowledge, the notion that every surface is covered with tiny, invisible creepy-crawlies sounds completely insane.

Science is rational in that it’s based on an agreed system of rules and axioms, but that’s the exact opposite of saying that “science holds no idea as sacred.”  No scientist starts their work by retesting every established assumption – it would be ridiculous to expect them to, as this would take lifetimes, but this is also why untrue assumptions can survive for so long (like that ulcers are caused by stress, or that your BMI effectively correlates to your long-term health).  Scientists themselves are also not unbiased – ask any woman, person of color, queer person, or person with a disability working in STEM whether racism, sexism, homophobia, classism, and ableism are still present in science, and they will give you a laundry list of experiences and first-hand accounts.

Religions are, similarly, systems built on agreed rules and axioms, and as such, they can sometimes bump uncomfortably into science.  But it’s important to understand that that’s not the intrusion of something artificial and manmade (religion) on something stalwart and unchanging (science).  It’s also not the interaction of two competing monoliths – there are scores of religions with hundreds of sects that all work off slightly different systems and rules, and there are dozens of kinds of science with hundreds of different theories, each giving a slightly different interpretation of how to work within their established system and rules.

When the author of the op-ed says he “from time to time, ridicules religious dogma” as part of his physics lectures, I’m guessing he’s referring to Christian beliefs about the age of the earth and the age of the universe.  Firstly, those aren’t doctrines in and of themselves – they’re outcomes of the larger doctrine of Biblical inerrancy, the belief that all understanding can be derived from the Bible.  Biblical inerrancy is only found in Christianity, and then only in a small minority of evangelical and fundamentalist sects, predominantly those that originated in the US.  I’d guess that the majority of Christians don’t hold to Biblical inerrancy, and no other religion even considers the New Testament and Christian Bible to be divinely inspired.  So most religion doesn’t have any problem with this guy’s physics lectures – he’s choosing to focus on the one group that does, and then assuming that every other religious person believes the same thing.  (He is also choosing to ridicule people for not agreeing with him, instead of just ignoring them and carrying on with his life, which, I think it’s important to point out, was also always an option.)

In actuality, and apparently much to this guy’s chagrin, thousands of people identify as both scientists and believers, and I don’t see any reason to assume these people are either lying or suffering from split personalities.  More likely, these people understand both systems to occupy separate or even complementary spaces – someone who believes in Biblical inerrancy probably wouldn’t make a good astrophysicist, but someone who understands the Genesis stories as analogies, a common interpretation among many Christian and Jewish denominations, might not only see no contradiction, but might see confirmation of their faith in their science.  I’d argue that this isn’t that dissimilar from any ideology or opinions – someone who keeps vegan probably wouldn’t want to be a taxidermist, but that’s not because either veganism or taxidermy is ‘irrational,’ they’re just not complementary.  They are also both personal choices, same as religious belief (or nonbelief), and if, as a society, we’re serious about freedom of religious expression, we need to become more sensitive to the difference between “I don’t want to face undue legal or social burden due to my choices” and “I want everyone to do as I do” for religious choices, every bit as much as if someone were trying to force everyone to be vegan or to become taxidermists.  Some things just don’t fit all people.

[1] See, for example, Origen’s Against Celsus, Celsus’s True Word, the Doctrina Jacobi, a letter from al-Kindi to his friend al-Hashimi, the works of Abu Isa, and Richard Dawkins’s The God Delusion, respectively.


Sacred spaces: on converting churches to mosques

Okay, this story is from last month, but I’ve wanted to write about it since it first popped up and just had other things in the queue.

So apparently there’s a petition in France to stop the conversion of unused churches to mosques after the rector of the Grand Mosque in Paris said that he would support such an action.  The petition has been signed by several eminent right-wing and nationalist figures, as well as by former French President Nicolas Sarkozy.

The issue at stake doesn’t appear to be just the repurposing of churches, as it arose out of the existence of a significant number of unused churches.  Indeed, all across Europe, churches are being repurposed as shops, cafes, and restaurants – Oxford itself has a great bar called “Church,” inside a previously-abandoned Catholic church in the north of the city, that features some truly gorgeous pre-Reformation frescos.  In fact, as I’ll talk more about in a second, the decline in the use of churches in Europe has been going on for most of the twentieth and twenty-first centuries, a fact that spurred, in large part, the “secularization thesis” – the idea that Europe (and North America, largely by association) was becoming less religious and more secular.

So if it’s not about repurposing churches, it seems reasonable to assume the issue at stake here is that of conversion, namely the symbolic conversion of churches into mosques and the resulting effect this might have on the surrounding communities.  

The idea of using and reusing sacred spaces is not a new one – in fact, it’s an incredibly old idea.  The notion that certain spaces promote or accentuate holiness, and thus should be used for religious services, appears in both organized religion and folk stories – it’s the same basic concept behind ley lines, shrines, and sacred landmarks.  Sometimes these sacred spots develop a narrative to explain their sacredness – Mount Olympus as the home of the gods, the Jordan River as the place where Jesus was baptized, etc. – whereas in other cases, it seems that the space itself just became associated with the idea of holiness.  Years ago, I actually put together a research project to study this idea, as there are a number of examples of sacred spaces in the Middle East being taken over as mosques or being used as both churches and mosques.  The most important of these is the Umayyad Mosque in Damascus, which was first a pagan temple, then a church, then a church and mosque simultaneously, and finally a mosque and Muslim shrine.  Unfortunately, as with the Umayyad Mosque, most of the example sites are in Syria, so traveling to study their visual presentation is currently impossible.

Theologically, I would argue that it makes sense for newer religions to feel comfortable or even happy to take over the sacred spaces of older religions.  Since Islam understands itself as the correction of Christianity, Muslims taking over Christian sacred spaces can be understood by the Muslim community as a similar process of correction – both accepting the essentially holy nature of the space, while correcting what they understand as errors in practice and visual representation.

However, I would argue that the desire of Christians to preserve unused sacred spaces rather than allowing Muslims to use them does not make sense given most modern Christian theology, and in fact preserves two fairly outmoded theological concepts.

The first is the conversion of physical or geographical space.  In the Middle Ages, religious buildings weren’t just built as needed to accommodate the population – they were also built to serve as a physical representation of that religion’s dominance in that area.  It’s important to remember that Europe was NEVER 100% Christian – there were sizable Jewish and Muslim communities throughout Europe, and elements of paganism survived well into the late Middle Ages.  When Europeans called their kingdoms Christian, then, they weren’t referring to the entire population.  Kingdoms were Christian because their kings and ruling classes were Christian, and one way they demonstrated this ruling authority was the construction of churches.  This explains in part why so many churches are such massive edifices, even when the local population was relatively small.  They weren’t built to fit the populace, but as a physical marker for Christianity’s dominance in that territory.

In fact, the idea of the conversion of geographical space isn’t unique to Christianity – elements of it arise in the Muslim caliphate and in Asia.  Indeed, one of the most interesting variations I’ve come across is in the Buddhist conversion of Tibet – the earliest Buddhist missionaries understood the land as ruled by a giant she-demon, who had to be literally pinned down into the earth, with each new Buddhist stupa or shrine being built at one of her joints.  The land was converted once the shrines were done and the demon bound to the earth.

Today, however, given our focus on individualism and individuality, even the idea of national religion strikes some people as misguided, as contradicting the understanding of religion as essentially a relationship between the individual and the divine.  I doubt many people today would believe that the presence of a church makes the surrounding community Christian, as most of us have grown up in close proximity to churches, temples, mosques, and many other houses of worship, without ever thinking that we were part of those communities just because we walked beside their buildings on a regular basis.

This sort of leads to the second theological flaw: seeing the conversion of churches to mosques as a threat to French Christianity is really quite putting the cart before the horse.  The churches are empty because fewer people are participating in Christian religious rituals.  There’s no reason to think that leaving them abandoned is going to spark people to start practicing Christianity.  Again, I don’t think any of us has ever seen a church and suddenly thought to ourselves, “I should join that faith!”  As I mentioned earlier, this decrease in the use of Christian sacred spaces has also led scholars to speculate that France and other European nations are becoming less Christian than they had been historically, but I’d argue that even this is a more complicated theological issue.  Historically, Europeans were largely compelled to be Christian, unless they actively identified as something else, often facing serious repercussions for doing so.  In most cases, ‘being Christian’ in Medieval Europe was the path of least resistance.  It’s also unclear how many people would have known about other faiths, or had the means and opportunity to learn enough to contemplate whether converting to another religion (or leaving religion for nonbelief) would better match their own personal conception of the divine.

This becomes a particularly important question in studying the Reformation, when communities and kingdoms are understood to ‘flip’ religious affiliations quite regularly.  We’re left asking to what degree the general populace noticed these changes, or understood their theological ramifications.  To what degree did these changes adhere to or contradict their own religious beliefs?  The continuation of Christian sects despite political repression – the Huguenots in France, for example, as well as both the Catholics and the radical Protestant churches like the Quakers in the UK – suggests that the local communities in these areas did understand themselves as possessing an individual religious identity that might have differed from that of the king, but we’re still left uncertain whether more people would have joined these or other religious movements if they had had the chance.

By comparison, we live in societies today where more and more people can actively choose their faith, and where the public expression of a variety of faiths gives people the opportunity to find one that most closely matches their own personal experience of the divine.  In the case of Europe, this means there may be fewer seats in the pews, but at least in theory, it also means that those who attend are more consciously engaged with their faith and its theology.

Again, if the secularization thesis were correct, then we should only ever see sacred spaces being repurposed as non-sacred spaces.  Society becomes more secular, the desire to separate out sacred space decreases, previously sacred spaces become bars and laundromats.  What we’re seeing instead is the transition of sacred space as a mirror of the larger transition of the population.  People aren’t becoming less religious – religious identities are expanding beyond the confines previously set, often by force, by ruling elites.  That means that religious freedom is working, that people have the opportunity to choose the expression of their religion, which in any free society should be cause for celebration, not outrage.


You believe what you believe.

Dear [Jessica]:

Since my initial contact with you I have made a sincere effort to learn more – not become an expert – just to learn more so that I might better understand Islam, “average Muslims”, Islamists, and Islamicists. In the past eight months I have read the Qur’an in several versions, including Rodwell, Yusuf Ali, Ali with commentaries, Pickthall, Mohsin Khan, the Qur’an in Modern English, and at present trying to plow through Qutb’s 18-volumes of pure torture. My library now would surely make a casual observer believe that I’m a Muslim . . . which I decidedly am not.

All this and spare-time entangling with the various ahadith collections in our local libraries and on the internet, Reliance of the Traveller, The Life of Muhammad translated by A. Guillaume (a most difficult read), and a variety of commentaries on Islam by Muslims and non-Muslims.

Now that I’ve soaked up as much on Islam as my Christian heart and brain can stand, I’ve started going through your posts since the beginning. My first question to you is: When did you change from being an advocate to being an acolyte to being an apostle? It’s hard to find the exact time, but somewhere along the way you seem to have lost your objectivity, one of the essentials for what you claim to be – an intellectual.

The second question is: Why the stubborn insistence that there is no such thing as Islamic terrorism? Yes, many feel that that is a tautology, but many of us also recognize that the world is aflame and awash in bodyless heads, and at the center of that flame is one common factor – Islam. To say that the Old Testament catalogs such things as stonings and beheadings is a red herring, since those things have not been practiced for millennia. To condemn the Crusades is to forget that had Muslims not conquered the so-called Christian lands of the Middle East, North Africa, and much of Europe, there would have been no need for the Crusades.

The third question: Does not abrogation nullify most of the “be-good-to-non-Muslims” parts of the Qur’an (those ‘revelations’ from Mecca) so that we non-Muslims are at best dhimmis?

And, by the way, I like the way your latest post correspondent spelled ‘litterature’ since that appears to be mostly what ahadith musings are.

Finally, I have given up all hope on Islam, since to me there is no way for the world to live in peace as long as there is the fervent Muslim belief that the Qur’an is the word-for-word, unchanging and unchangeable word of Allah. As long as the Qur’an (and its author Muhammed) are standards to which Muslims aspire, there is no hope, and academics who tell us otherwise are doing us no favors.

You write well, and you have certainly caused me to do a lot of research I would else not have done, so for that I thank you. But please, don’t go much farther across that line between Islamicist and Islamist.

Lee Skinner

Hi Lee,

I admit, I’m having a hard time coming up with how to respond to your questions, but I shall do my best.  To your first question, I certainly can’t give you a date as I was unaware I had become an apostle of Islam (honestly, I’m enough of a nerd that calling me an acolyte makes me feel like a follower of Magneto).  To the best of my knowledge, I have never preached Muslim teachings, I’ve never spoken in a mosque or delivered a khutba, and I’ve never encouraged or even suggested that someone convert to Islam.  I certainly wouldn’t be angry or disappointed with someone choosing to become a Muslim, but mostly I feel that any given person’s religion really isn’t any of my business.

Which I suppose is the larger theme of my response – I’m sorry you sound frustrated with the time you’ve spent studying Islam.  Certainly I have not had the same experience; I’m in my field largely because I enjoy the subject.  As an intellectual – or, I guess, a former intellectual – I would be tempted to suggest that your experiences are the result of acquired wisdom, the human tendency to judge what we learn against what we already know.  No one can start a subject ex nihilo, and we experience what we learn through the lens of what we already know, as well as through the lens of our expectations and assumptions.  Since it sounds like you went into your studies looking for counter-evidence to the idea that Islam is essentially a violent religion, academically I would be tempted to argue that this essential bias – that Islam is a violent religion – corrupted your studies, pressuring you, either consciously or subconsciously, towards interpreting the material as the product of a violent religion.

However, none of that is really my concern.  I did not go into my work expecting that I would be able to convince everyone, or even anyone.  Teaching is not coercion.  I can give you information, suggest sources, and discuss the various established schools of interpretation, but if after all of that, you are still certain that Islam is evil, that’s on you.

Similarly, in terms of objectivity, you’re entirely right – I have none.  I don’t think I ever claimed to be objective.  I’m a bit distant from my subject because I’m not a Muslim, nor did I grow up in a Muslim country or in a Muslim community, but in a decade working in Islamic studies, I have known loads of practicing and nonpracticing Muslims, many of whom I consider friends and love and care for deeply.  My experiences with them absolutely impact my experience of Islam as a movement, a culture, and a religion, as do, incidentally, my personal experiences with Christians, Jews, Hindus, Buddhists, Hellenic Polytheists, Druids, Taoists, atheists, and agnostics.  You’re also correct that I was, and I’d say still am, an advocate for Muslim communities living in the West.  From my own experiences, and from what I’ve been told by participants of various faiths – whom, due to the influence they’ve had on me, I am inclined to take at their word – I think that in the US, there is a real difference in how ‘freedom of religion’ works depending on your religion, and that American Muslims often get the short end of the stick in that exchange.  I have no intention of hiding my beliefs or opinions on the matter – not only do I not consider them inhibitors to my research, I think they’re integral to giving my work a real-world relevance.

Again, as a former intellectual and trained historian, I could point to the links between the modern emphasis on “objectivity” as necessary for the study of the humanities and social sciences and the history of Western Imperialism, as traced through the history of anthropology and sociology.  Or as an advocate, I could talk about how “objectivity” can be used by oppressive forces to silence victims and keep them from expressing their own experiences.  But it’s clear there’s no point.  I don’t intend to justify these choices.  This is my blog.  I write it in my free time.  I write it because I think it might be useful for some people, and I continue writing it because I’ve gotten enough feedback to maintain that belief.

To your second question, I don’t believe there is no such thing as “Islamic terrorism.”  Actually, I’ve talked a couple of times recently about how there absolutely are groups that attach Islamic terminology and imagery to acts of violent abuse.  However, I maintain my belief that the term “Islamic terrorism” is unhelpful, both because it distracts from the fact that thousands of the victims of these groups are Muslims, and because it’s all too often used as if the first word explains the second, that we don’t need to understand how and why terrorist organizations have emerged and gained power in the last half century because they’re “Islamic.”

It’s unhelpful for exactly the reason you yourself have illustrated, because not every flame or headless body is the result of someone calling themselves a Muslim.  Thousands of Americans have been gunned down and beaten to death by police officers who share, at best, a civic religion.  Thousands of queer people in Uganda have lived in fear of horrific deaths due to laws that, at least on the face of them, claim to be Christian (a claim which is similarly linked to the history of Western Imperialism – check out this fantastic report by John Oliver and interview with Pepe Julian Onziema for more).  And basically everyone in North Korea lives in constant fear and suffering without a single reference to Islamic law.

However, again, these arguments are pointless because I will continue to see the term as unhelpful and you will continue to believe that violence is an essential part of Islam.

And to your third point, if you’re asking whether, technically, they are all abrogated, then the answer is no.  To the best of my knowledge, no classical Islamic scholar claimed that 5:82 or 42:13 were abrogated.

If you want to talk anecdotally, I’ve never known any Muslim to treat me like a dhimmi.  I’m not even sure what that would mean.  I’ve known some who wanted to discuss religion with me – their own, others’, what I’ve been researching – and given that I write and publish on the topic, I’ve generally been happy to entertain their interest.  Even when I’ve traveled in the Middle East, no Muslim has ever demanded a poll-tax from me, required me to reveal the lineage of my father’s line, drilled me on Islamic law, or even required me to keep my head covered, except in order to visit the Umayyad mosque in Damascus (which is a major Muslim shrine).  They have asked to take a picture with me, asked me where I’m from and how my trip is going, offered me tea and chocolate, and let me play with their children while they pray.  Again, I have absolutely no doubt in my mind that these experiences have informed my research, and I have no intention of trying to surgically remove the effects of these experiences from my intellectual life.

I’m sorry you found your studies fruitless, and that you feel you cannot live in a world where Muslims believe in the Qur’an as the Word of God.  If you genuinely feel you can’t live in this world and that there is no hope, then I implore you to speak to a therapist, or to call the National Suicide Prevention Lifeline (1 (800) 273-8255) or the Samaritans (1 (877) 870-4673).  Even if you don’t think you’re in any danger from yourself, please call one of those numbers – there will be no downside for you.  At worst, you have a pleasant chat with some truly wonderful people.  If you find you feel afraid or anxious about the state of the world, speaking to a therapist or joining a therapy group like Cognitive Behavioral Therapy may help you develop ways to cope with those feelings.

I appreciate whatever concern for my emotional or spiritual well-being was intended in your warning that I might slip from being an Islamicist to an Islamist, but as I hope I’ve demonstrated, this is really not an imminent threat.  I feel no personal investment in your becoming a Muslim, following Islamic law, or even agreeing with a single word I’ve written.  I can offer guidance for studying Islam, and answers to specific questions based on my interpretation of the available sources and secondary methodologies, but at the end of the day, you believe what you believe.  And I will remain, I expect for some time, an Islamicist.

With kind regards,



Modern hadith studies

Nader asked:

Hi Jessica,

A very nice blog attracting sincere and relevant questions :) Congratulations for this job. I have been reading for a while papers dealing with the authenticity of the hadiths and I must admit I still didn’t find a kind of review that mentions the positions of the different researchers (from schacht to MM Al-A’zami, from Juynboll to Hallaq … ). Do you know any paper or book that could help me getting a clear understanding of the main trends in academia, regarding the hadiths literature and their corresponding arguments ? Could you tell me about the western university groups that are still working on this topic

Ultimately, I would be happy to get an exhaustive review of the different schools of thought among scholars (not only western) who study hadiths. Reading books for each of them (Schacht, MM Azami, Fazlur Rahman, Juynboll, etc) would take me years otherwise … :)

Thanks for your answer. I hope it is clearer now.

Happy “Eid al Fitr” for your Muslim readers,

Thanks again for the kind words, Nader!  Unfortunately, the reason you haven’t been able to find a work giving summaries of the existing scholarly schools on the hadith and their historicity is that, as far as I’m aware, no such work exists.  I’ll do my best to give some general outlines and suggest some shorter readings for getting a better idea of the state of the field, but a full review of the existing scholarship on the subject would easily fill a book – well beyond the internet’s rules for tl;dr.

First, though, a bit of background for the uninitiated: “the hadith” is the general catch-all term for the collections of hadith (pl. ahadith), meaning literally “transmission” or “report” – stories of the Prophet (pbuh) and the early community that often report on how the Prophet (pbuh) practiced his religion – what he did, what he said, how he corrected others, and what he declared permissible and impermissible.  In the Muslim tradition, the hadith are generally understood as expanding the practices laid out in the Qur’an, explaining how ritual actions should be performed and why.  As such, these stories span an incredible range of topics – everything from whether Muslims can or should use toothpicks and the correct order to clean oneself during ablution to when conversion takes place and the nature of Heaven and Hell.

According to Muslim tradition, transmission of the hadith began organically in the early years of Islam, and in particular in the first decades after the death of the Prophet (pbuh), as the community began to expand out of the Arabian peninsula.  As new converts wanted to learn more about how to practice the religion, they naturally turned to members of the community who had known the Prophet (pbuh) personally, asking for clarification on how he had performed certain rites and practices, and these conversations developed into the hadith.  After a few generations, there were thousands of individual ahadith circulating throughout the caliphate, many of which contradicted one another.  As scholarly schools developed in the Abbasid period (starting in the late 8th century), scholars began to critically analyze the various hadith traditions.  They focused particularly on the transmission history of the hadith, studying the isnad or sanad, the chain of transmission (“so-and-so was told by so-and-so, who was told by so-and-so, who heard that the Prophet (pbuh) did x, y, and z”), essentially developing one of the earliest citation-checking systems and arguing that only hadith transmitted through people who were actual contemporaries, and could actually have spoken to one another, should be considered authentic.
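(For the programmatically minded, here’s a toy sketch of that core contemporaneity check – entirely my own illustration, with invented names and dates, not anything drawn from classical scholarship or from a real hadith database:)

```python
# A toy model of one rule of classical isnad criticism: a chain of
# transmission is only plausible if each consecutive pair of
# transmitters could actually have met.  All names and dates below
# are invented for illustration.
from dataclasses import dataclass


@dataclass
class Transmitter:
    name: str
    birth: int  # hypothetical year
    death: int  # hypothetical year


def chain_is_plausible(chain: list[Transmitter]) -> bool:
    """True if every consecutive pair of transmitters' lifetimes overlap."""
    for earlier, later in zip(chain, chain[1:]):
        # If one died before the other was born, they could never have spoken.
        if earlier.death < later.birth or later.death < earlier.birth:
            return False
    return True


chain = [
    Transmitter("A", 600, 670),
    Transmitter("B", 680, 750),  # born after A died - the chain breaks here
    Transmitter("C", 720, 790),
]
print(chain_is_plausible(chain))  # False
```

The classical critics did this by hand, of course, building vast biographical dictionaries of transmitters precisely so that checks like this one could be performed.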

The hadith became an integral part of the larger study of Islamic law, providing guidance for the acceptable forms of ritual practice.  Unsurprisingly, variant traditions emerged between the Sunni and Shi’a, with some hadith transmitters being considered reliable by only one sect or the other.  Similarly, there is evidence to suggest that both the Kharijites (a 7th-century sect who rejected both caliphal and Shi’i claims of hereditary authority) and the Mutazilites (a 9th-century philosophical movement who supported the rational analysis of the Qur’an in order to derive ritual practices) rejected the use of hadith as authoritative, as overemphasizing the role of the Prophet (pbuh) over the role of the Qur’an as divine revelation.  However, our knowledge of both of these groups is heavily filtered through works written about them by their opponents, so it’s difficult to know what they believed with any kind of certainty.

The long gap between the initial development of individual hadith and the eventual codification of the traditions has led many modern Western scholars to question the authenticity of the hadith as anything more than an Abbasid creation to codify Islamic practice.  Again, there’s no way I’d be able to offer a full view of every scholarly position, but I’ll try to highlight some of the more influential voices.

Ignaz Goldziher: In addition to being a nominee for the coolest sounding name ever, Goldziher is one of the founders of Western Islamic studies.  In the late nineteenth and early twentieth century, he wrote on the life of Muhammad (pbuh) and the hadith, arguing that the content of the latter was better understood in the context of the Abbasid period than that of the early community.  He also tried to argue for the general principles of the hadith and early Islamic law as arising from Roman contract law, an argument largely dismantled by Patricia Crone (may she rest in peace).  Although later scholars, in particular Joseph Schacht and Norman Calder, often cited Goldziher as contradicting Islamic tradition regarding the hadith, I would argue that this is incorrect – Goldziher’s concept of the hadith as coming out of the Abbasid period doesn’t really contradict Islamic tradition because that is the period in which the traditions were codified.  The difference between Goldziher’s concept of the hadith and the Islamic tradition seems to stem, at least to me, from his disregard for the more banal and practical elements of the hadith – it’s easy to see how some of the broader philosophical themes date to the Abbasid period, but it’s a lot harder to argue that the Abbasids had a vested interest in whether Muslims used toothpicks.  The broader themes may have emerged during the process of codification, but that doesn’t mean that the contents were wholly invented in that same process.

Joseph Schacht:  Schacht is responsible, for better or for worse, for shaping much of the Western Islamicist approach to Islamic law.  I include the ‘for worse’ mostly because his writing is very much a product of early twentieth-century academia – it’s very scant on citations, and often uses single examples to make sweeping arguments about the whole of a particular school of Islamic jurisprudence or even the whole of the field (for example, as I’ve discussed before, in the case of the “closing of the doors of ijtihad”).  I would argue that Schacht had a similarly inconsistent conception of the hadith – on the one hand, as I’ve talked about before, he claims to reject entirely an internal source for Islamic law, be it the Qur’an or the hadith.  Yet at the same time, in his analysis of Islamic jurisprudence, he relies heavily on the broader narrative of the formation of the hadith, assuming that legal decisions were made both by qadis (judges) in formal courts and by ulama (scholars) in non-binding, but still authoritative, day-to-day conversations.  Again, I find this argument unpersuasive, as it would require the preservation of both the ‘real’ source of Islamic law (whatever that is – Schacht, for his part, echoes Goldziher in arguing for a Roman origin) and the hadith traditions, despite the latter lacking ‘real’ legal authority.

Muhammad al-A’zami: Perhaps the first and probably still the most important criticism of Western analyses of the hadith comes from Muhammad al-A’zami.  Having trained at Darul Uloom Deoband in India and al-Azhar in Egypt before coming to Cambridge, al-A’zami was one of the few early Islamicists to be equally well-versed in traditional Islamic scholarship and Western methodology.  In his book Hadith Methodology and Literature (based on his doctoral thesis at Cambridge, the full text of which is available online), he argued that Western methodology suffered from a limited view of the history of the hadith, which had negatively impacted its interpretation of the tradition.  In particular, he highlighted that, according to Islamic tradition, the hadith were not transmitted exclusively orally – several scholars produced written collections before the formalization of the isnad system – and that, indeed, the circulation of these collections was one of the things that fueled the interest in monitoring the accuracy of transmission histories.  However, much of al-A’zami’s work received its own criticism from mainstream Islamic studies, with many scholars arguing that it was, in essence, an apologetic defense of the Muslim tradition.  Moreover, his association with Darul Uloom Deoband brought him further scrutiny, as the university has faced various accusations of extremism in the second half of the twentieth century.  Personally, I think al-A’zami’s work is no more polemical than Schacht’s, and I find his criticism of the latter (published as On Schacht’s Origins of Muhammadan Jurisprudence) to be a very useful methodological analysis.  It is certainly the case that al-A’zami is not ‘unbiased,’ but I’d be hard-pressed to think of any scholar who isn’t personally committed to their own work and theories, so I find it a bit hypocritical to discount his work just because his personal commitment comes from a religious or cultural connection to the subject.

Unfortunately, since al-A’zami’s work in the mid-twentieth century, Islamic studies has largely moved away from hadith studies, probably due in large part to the complicated nature of both the source material and the methodologies surrounding it.  Turning to scholars of the late twentieth and early twenty-first century, a number of people have written about the hadith in passing (Patricia Crone and John Wansbrough as examples of the source-skeptical perspective, and Wadad al-Qadi and Gabriel Said Reynolds as examples of the more traditionally-reliant scholarship), and some scholars discuss specific hadith and hadith traditions that are relevant to a larger theme of their studies (Michael Bonner, for example, talks in his research about the hadith traditions regarding warfare and jihad, but it’s this latter topic that’s really the focus of his work), but there are few scholars whom I would characterize as working on the hadith.

A good general primer on the state of the field, including its relative indifference to the hadith, can be found in Gabriel Said Reynolds’ introduction to The Qur’an in its Historical Context (2007) or Angelika Neuwirth’s and Nicolai Sinai’s introduction to The Qur’an in Context (2011).

Fred Donner: One of the more recent attempts to find common ground between the increasingly skeptical Western approach to Islamic studies and the Muslim tradition comes from Fred Donner, who has published predominantly on the development of historiography and identity formation in the early Islamic community (and, since we’re talking about bias, was also my mentor).  Although he is still not really a hadith scholar, his work on the early Islamic community has reasserted some of the same arguments made by al-A’zami, in particular arguing for an early date for the initial circulation of individual hadith and noting the existence of a concurrent written tradition.  His work Narratives of Islamic Origins (1998) opens with what I think is an excellent breakdown of how and why Western methodology has rejected the hadith tradition, and how and why that’s negatively impacted our research.

Wael Hallaq:  I’ve talked about Wael Hallaq before – it’s hard to talk about Schacht and not discuss Hallaq as well.  Again, he’s not really a hadith scholar, but since his research focuses on Islamic law, he definitely engages more directly with both individual hadith and the broader hadith tradition than most modern Islamicists.  Hallaq is also definitely a post-Orientalist scholar, in that he engages both with modern Islamic studies and with what he sees as the lasting effects of Orientalism on the field (a methodology which, I should say, I generally agree with, but one that has opened him up to accusations of bias not dissimilar to those against al-A’zami).  Although his most recent book (The Impossible State, 2014) is really about Islamic law in the modern context, his earlier works are more about the development of Islamic law in the Middle Ages, and here he deals more with the hadith (his Introduction to Islamic Law (2009) is particularly good for the uninitiated).  He gives less of a direct response to Western source skepticism than al-A’zami, but in his use of the hadith tradition, he lays out what I think is a compelling example of how these sources could be used by modern scholars, noting individual cases of transmission errors or conflicting reports while accepting in broad strokes the traditional Muslim account of the codification of the hadith tradition and Islamic law.

Wilferd Madelung: Along with being another nominee for the most awesome sounding name award, Madelung is also one of the few well-known Islamicists from the late twentieth century I can think of whom I would actually call a hadith scholar, in that analysis of the hadith makes up a sizable portion of his published work.  Even here, though, it’s a bit misleading – the word “hadith” doesn’t appear in the titles of any of his books.  His work is focused, first and foremost, on the history of Islamic sectarianism, particularly the emergence of the Shi’a and the Isma’ili.  In following the limited source material about the sects, however, Madelung revisited much of the hadith tradition, and in doing so argued for the rehabilitation of the tradition, particularly if it could be analyzed through the lens of sectarianism and the early conflicts over authority.  In his work The Succession to Muhammad (1997), Madelung argued that whereas Western scholars have approached individual Muslim sources with skepticism, they have accepted largely uncritically the idea that Muhammad (pbuh) always intended to be succeeded by Abu Bakr, and that the idea of dynastic inheritance through ‘Ali was a later invention by Shi’i jurists.  In order to interrogate these ideas, Madelung analyzed a number of hadith, not only arguing for their essential authenticity, but further arguing that the hadith offer scholars a more complete picture of the struggle for authority after the death of the Prophet (pbuh).  As someone who works mostly on Islamic theology and the emergence of Islamic religious identity, I find Madelung’s work very convincing, as authority remains a central tenet of early Islamic theology as well.  However, his work was met with a varied reception, in part because of its ‘optimistic’ (in the words of the Journal of the American Oriental Society) approach to Islamic sources.

So there’s a very quick overview.  As I said, in general, I feel like Islamic studies has taken a big step back from even engaging with the hadith, which I would argue does us a disservice, both because we’re intentionally cutting ourselves off from relevant sources and because we do still sometimes talk about individual hadith or hadith traditions, but do so in a way that lacks any larger critical apparatus.  However, in order for hadith studies to be integrated into Islamic studies, we would need to create that critical apparatus, both in the form of critical editions and in the form of historical commentaries, so people can trace the various versions of a tradition in order to understand how it could have changed over time.  Unfortunately, I’ve seen little interest in doing so – it seems like the availability of digital resources should make this easier, but if there’s anything similar in the works at any university, I haven’t heard about it.


Casual Islamophobia and Dehumanization

So a couple of weeks ago, I was talking about how Americans’ (particularly white Americans’) misguided definition of ‘terrorism’ and our resistance to calling mass shooters ‘terrorists’ negatively influence our understanding of how global terrorism actually works.  In particular, for groups claiming an Islamic narrative or theology for their violence, like al-Qaeda and ISIS, we don’t seem to understand that many if not the majority of their victims are Muslims, and that we need to act to protect these communities far more than we need to be concerned with our own safety.

I came across an article that I think well illustrates another aspect of this same problem, namely that our casual association of Islam with terrorism (and our resulting failure to understand that thousands of Muslims worldwide are daily the victims of terrorism) also denies the victims their own voice and narrative of events, so that we have to go to some weird sources to serve as examples of victims.  Thus – the silent monkey victims of the war on terror.

Disclaimer: let me just say at the outset that I do not intend to discuss the pros and cons of animal testing, nor do I wish to have discussions about this in the comments.  I know this is an issue about which many people feel strongly, but I want to stay focused on the larger issue that we’re devoting column inches (albeit digital ones) to discussing the effects of the war on terror on monkeys.

Privilege can be demonstrated in a number of ways, but one of the most powerful aspects of privilege is innate authority.  The more vectors of privilege you possess, the more likely other people are to just believe you automatically.  This has been well-demonstrated academically in how people perceive women’s and men’s contributions to mixed-gender discussions or by how white culture systematically disadvantages African-American Vernacular English to make black people sound unreliable and unrelatable.  It’s also easy to find anecdotal examples in nearly any news story involving women, queer people, or people of color – for example, in this utterly ridiculous story where police officers refused to believe an adult black man until his story was corroborated by a four-year-old white girl.  Although all of the news stories about this event have focused on the little girl as a ‘junior sleuth’ or a ‘pint-size detective,’ really the big story should be that police officers accepted the witness statement of a four-year-old girl, but not that of an adult man.  When prejudice leads us to trust people who believe in fairies and Santa Claus more than grown adults, we need to accept that we’ve taken a wrong turn.

This privilege of believability, and its opposing silencing of victims, is essentially a form of dehumanization.  It reduces those who do not possess privilege to nonhuman entities, whose experiences, perceptions, and opinions are not relatable or deserving of empathy like other humans.  It can also have tremendous impact on how we understand the world around us because we don’t experience personally 99.9% of what goes on around us – we depend on narratives provided to us by others to experience the world beyond our current point in time and space.  If we fail to consider others’ narratives fairly or privilege certain narratives above others for reasons beyond rational ones (like were you there, could you have seen anything, do you speak the same language as the people involved, etc.), we can drastically alter our perception of the world beyond our sphere of influence.

This is exactly what’s happening when we look to lab monkeys as the victims of the war on terror in order to contextualize the ‘real’ experiences of that war.  It’s not that monkeys being used to test biological or chemical weapons don’t suffer or feel pain – the issue is how discussing that suffering influences our narrative of what has happened during the war on terror.  Indeed, talking about animal testing in warfare can be a really useful perspective, if what we’re talking about is how we weigh the necessity of war against the cost in loss of life, a calculation in which the war on terror really does not balance the scale at all.  Yet here again, this narrative is better served by talking about the tremendous loss of human life – the hundred thousand Iraqi civilians and twenty thousand Afghan civilians killed (to put this in better perspective, more than the entire population of Cambridge, Massachusetts or Gainesville, Florida) – and their and their families’ experiences of pain and suffering and loss.

However, these narratives of suffering are not the central focus of how we experience and discuss the war on terror, nor have they ever been.  In this way, we can see exactly how powerful silencing victims can be for altering our perception of reality: lacking a continuous narrative about the human suffering caused by the war on terror, it becomes all too easy to believe that there is none – that the suffering of lab animals is not only a significant outcome, but the best measure or illustration we have of the pain and suffering we’ve caused.  That is not only untrue, but dangerously untrue, as it has allowed generations of Americans to retain the belief that our actions overseas have no negative consequences or no human impact.


7 Myths about Why Higher Education is Failing

Okay, so I should just stop claiming that one of the purposes of this blog isn’t to talk about the current failings of higher education, because obviously I can’t stop blogging about it.

I’m also a little nervous to post this, because I know there are academics who read my blog.  So please believe me that I write this out of both genuine love of what we do and genuine frustration about where it’s headed when I say, dear fellow academics, I have a question for all of us:


Seriously, do we hear ourselves when we talk?  Because we’ve been talking about why higher education is failing for a while – why we’re putting an entire generation of young people in debt, why we’ve failed to create the diversity in our admissions and recruitment numbers we so desperately claim in our recruitment material, why we’ve failed to create safe spaces for young people, how we’ve failed to address the truly staggering rates of rape and sexual assault on college campuses, how we’ve allowed racism and sexism to fester among campus institutions like fraternities and social clubs, why we continue to admit students for professional and graduate degrees only to release them into a massively oversaturated market, why costs continue to skyrocket even as we double down on adjunct and part-time teaching, and most important, why we still expect people to send us their children and a cheque for 50 grand if we can’t answer any of these questions?!  And as of yet, the answers we’ve come up with are both (a) unsatisfying and (b) often completely untrue.

My most recent exposure to our comically incompetent attempts to address our own failings comes from a special edition of PS: Political Science & Politics that actually came out in 2013, which lists several problems facing higher education and asks four leading political scientists to address them, which they sort of do.  Unfortunately, most of the problems aren’t actually threats to higher education (or are only tangentially related), and most of the solutions are … well, oversimplified at best.  The whole symposium is also a fantastic example of how smart people get stupid when talking about things they care about – I particularly like the moment on page 86 (second paragraph) when the author outright contradicts existing evidence on classroom versus internet learning: “Although recent quantitative comparisons have concluded the opposite (Means et al. 2010), it seems reasonable to assume that researchers will eventually be able to document the benefits of in-person over online education.”  Nothing like basing our revolution on what seems reasonable to us, the traditional institution!

However, reading it made me realize just how often I’ve read these arguments, and just how persuasive they seem when framed in academic language with citations and charts and tables.  So in response, 7 myths about why higher education is failing:

1. It’s because of the internet.  The PS symposium doesn’t even beat around the bush on this one – it’s the internet’s fault: “The first attack on the traditional brick-and-mortar university came from the Internet, which made knowledge previously attainable only on college campuses available to all. Today, Khan Academy, YouTube Edu, Academic Earth, and other outlets make educational videos available for free; many of these videos cover topics that would be standard in many college curricula, particularly in mathematics, engineering, and science (Kolowich 2011; Sengupta 2011). The Internet also makes it possible for people from all over the world to find practice exams, problem sets, visual examples and walk-throughs, worksheets, lecture notes, academic presentations, interactive exercises, webinars, and more for free.”

Oh no!  People learning stuff!  Not in a classroom!

Okay, yes, I know that’s not what they mean, and on some level, they’re right – the internet has completely changed how we access information.  But to be fair, there was free, public information available before the internet, in the form of libraries, museums, public K-12 schools, public lectures, encyclopedias, almanacs, and just ordinary, person-to-person conversations.  It wasn’t that there was no publicly available, free information before the internet; it was that there was a lot less of it, and it was a lot harder to find without already knowing what you were looking for.

Moreover, the availability of information online doesn’t need to be a threat to higher education, if higher ed were willing to adapt to what’s online.  Honestly, if you’re teaching a math or science course that could now be replaced by YouTube Edu or Academic Earth, I’m sorry to break this to you, but your math or science course sucked.  Education has been moving away from the rote absorption of facts for the last century; the internet just gave that process a massive kick forward.  Indeed, the example the PS authors give as the biggest threat to higher ed from the internet – library resources – is also its greatest potential benefit.  Imagine not needing a library to have a university.  It could substantially decrease operational costs and substantially increase the range of subjects taught, since subjects would no longer be confined by what resources the library holds.  In theory, access to more information should always make education better, not destroy it forever.

2. It’s because of for profit education.  Again, on this one, the PS authors don’t hold back.  “The third wave of attack comes from the still fast growing group of large for-profit (or “career”) universities, which have the same accreditation as traditional universities but have the intention and potential to scale up to much larger size.”  Interestingly, though, they never really demonstrate how the rise of for-profit higher ed impacts traditional higher education.

I’ve talked before about how repugnant I find for-profit education, but as a threat to traditional higher education, this one is just nonsense.  For-profit institutions are not a threat to traditional higher education because the two don’t compete; they divide the market between them.  No student is deciding between Harvard and the University of Phoenix, and traditional higher ed is only attempting to be a competitor to one of those institutions (and here’s a hint: it’s not the University of Phoenix).  In fact, one of the things that has gotten for-profit education into trouble is its lack of admissions standards, something the University of Phoenix has recently announced it’s changing.

The only threat from for-profit education is to traditional higher ed’s reputation, because it calls attention to the massively disadvantageous system of loans and financial administration that traditional, nonprofit higher ed also benefits from.  For-profit higher ed companies have been very successful over the last decade in expanding the federal laws on student financial aid – again, something that nonprofit higher ed also benefits from.  It’s a bit like a drug dealer setting up right outside a shady pharmacist’s office: the pharmacist wants to get rid of the drug dealer, not because he cares about his community, but because he doesn’t want the cops coming around.

3. We’re tired of all these m*f*ing administrators on these m*f*ing college campuses!  The PS articles largely steer clear of this one, but there are more than enough other examples, most recently this article from the New York Times, claiming to ‘reveal’ the ‘real’ reason behind rising university costs (despite this being the very claim that universities themselves have put forward for a decade).  The Times cites the very real fact that the number of university administrators has risen consistently for the last fifty years to claim that these people are responsible for fattening up university budgets.  There are a number of flaws in this theory.

First, “administrator” is the university catch-all term for many kinds of non-faculty staff.  In any university, you’re either faculty or staff, and often any staff member who is paid a salary counts as an “administrator.”  So the category includes everyone from HR to the budget office to the housing and catering staff.  And as someone who works in university administration, you bet your ass there are a lot of us.  That’s because universities are incredibly large and complicated institutions.  We’re a research institute that also teaches thousands of students and awards them degrees, monitors their progress after graduation (mostly to get money out of them), all while running several hotels and restaurants to house and feed them while they’re here.  We have to have the institutional infrastructure of a research institute, a school, a foundation, a fundraising campaign, a hotel, and a restaurant.  Some of those elements can be combined – human resources, accounts payable, purchasing, and financial management, for example, are usually university-wide offices – but even within these university-wide offices, personnel need a really massive range of skills and expertise to accommodate all of the institution’s various needs.

Even if there were a huge number of unnecessary staff members in a university, it can’t be the case that our cushy, six-figure salaries are fattening up the budget, because we don’t get paid that much.  The Chronicle of Higher Education collects detailed salary data, and it has always demonstrated the same thing – the only administrators who make six-figure salaries are the President and Chiefs (CEO, COO, etc.), VPs and heads of divisions, Provosts, in-house counsel (whose salaries have to be comparable to those of private lawyers), sports coaches (because of course they do), and Deans and some Chairs.  However, those last two categories are highly deceptive, because deanships and chairships are often held by faculty members, particularly faculty members who have lost funding.  Like the university’s lawyers, deanships and chairships have to pay that much to be comparable to faculty salaries.  The remaining professional positions in a university, including ‘nonresearch deanships’ (i.e., deanships held by nonfaculty members), pay significantly less, often less than comparable positions outside of higher education.  In fact, if you go through the Chronicle’s data, one thing that pops out over and over again is that salary is determined by education level, so that people doing comparable jobs get paid more simply for having a PhD, regardless of whether that degree is relevant to their job.  It’s hard not to feel like that’s because people with PhDs set the salary ranges.

4. Professors can’t do their jobs anymore!  It’s a madhouse!  A!  MAD!  HOUSE!!!  There are two halves to this myth – first, that professors are now hemmed in by crazy expectations of political correctness, and second, that their day-to-day lives are consumed with administration and paperwork (which obviously ties in with number 3 – what do all these administrators do when our poor professors are doing all the admin work themselves!).

The first half is hinted at by one of the PS responses when the author talks about how ‘quirky’ professors are: “And we do honestly believe, although we have to do a much better job of articulating this, that it is far better to educate young adults in a vibrant and eclectic intellectual campus patrolled by brilliant, inquisitive, undisciplined, and (not infrequently) ornery university professors than in specialist teaching academies staffed by finely honed and hyper-effective teachers. Whisper this quietly, but we are unreservedly prepared, and we are not wrong, to sacrifice (some) pure teaching effectiveness to expose students to (sometimes) shambolic but ferociously creative thinkers.”

After a decade in universities, including conversations with a lot of people from a lot of different backgrounds, I can say, without hesitation, that “vibrant,” “eclectic,” “undisciplined,” “ornery,” “shambolic” and “ferociously creative” are all code for racist, (cis)-sexist, homophobic, ableist, and classist.

Universities are not bastions of liberal diversity, despite what all of our brochures imply.  Even as undergraduate classes continue to slowly tick upwards in terms of diversity, there is still almost no diversity among university faculty, particularly among full professors.  At the same time, however, and as I’ve talked about before, there is a deep-seated belief that “freethinking” is a movement somehow separate from questions of civil and human rights – that there’s such a thing as a “freethinking movement” that has always aimed to question everything and spread skepticism across the land while being populated solely by white, straight, middle-class men.

This myth of our “freethinking” past is particularly strong in universities.  Professors today complain nonstop about “PC policing” and “trigger warnings” ruining their ability to challenge their students, apparently unaware that there are perfectly vibrant conversations taking place online (on the evil internet!) every day that manage to engage directly with complicated issues while also allowing readers to opt out of discussions they feel will damage them personally.  Professors could adopt the same policies, if they bothered to seek them out.  Instead, they harken back to their own experiences of higher education – which, depending on their age and background, often means harkening back to a time before integrated or co-ed education, when many of their current students would have been actively and violently excluded from the conversation – apparently blissfully unaware of this complication.  For those of us in academia who are not straight, white, middle-class, able-bodied cis men, it’s hard not to hear that as “it was so much easier to talk about you when you weren’t in the room.”

This myth of the bound academic, swimming against the tide of political correctness, may help explain why, despite the occasional nod to diversity, and despite having developed many of the tools and methodologies for studying the effects of bigotry, we’re so terrible at applying those tools to ourselves.  From my reading, at no point in the PS articles does anyone discuss privilege, bigotry, or bias, which is a pretty big oversight in a discussion of the failures of higher education.

The other half of the myth, that academics are weighed down by admin work, is really just the intersection of number 3 and the myth of the history of higher education (number 5).  For whatever reason, many professors seem to think that their job should involve absolutely nothing but research and teaching, and really only what they personally think constitutes research and teaching.  I’ve actually had a professor make the “I’m doing too much admin” complaint about submitting grades.  Seriously.  He was okay teaching the class, but submitting the grades at the end of term – that was too much.

In my experience, a big part of this half of the myth stems from a simple lack of understanding of higher education as a job, and of what the standard requirements of any job are.  This makes sense considering how many academics enter university and never leave.  Being expected to do administrative work that explains, reports on, or transmits your work isn’t something in addition to your job; it is your job.  Your work doesn’t count if your employer can’t easily find evidence of it.

For example, academics complain about having to report on their work to a superior, apparently unaware that probably 99% of all currently employed people have to do this. Even directors and CEOs usually have to answer to investors or a board.  Similarly, I’ve had loads of academics complain about how many meetings they attend, apparently unaware that committees are artificial things that they could choose to disband or reorganize, if they could get enough supporting votes from other people.  And again, I think everyone on the planet probably thinks they attend too many meetings – there’s probably a shepherd somewhere in the world complaining about too many sheepherding meetings right now.

5. Universities are preserving centuries-old traditions.  Again, the PS article goes hard for this one: “for hundreds of years: universities are not only the primary stewards of the scientific community but the most sought after way to become educated.”  In reality, only a handful of universities can claim to have been doing anything for hundreds of years – there are maybe a thousand universities worldwide that date from before the 19th century, and many of those are no longer well-ranked (and most of them are in Italy – apparently the Italians loved founding universities).  In the case of the US, by 1900 there were fewer than 1,000 universities in the country total (today, not including for-profit institutions, there are more than 4,000).  Among that thousand, the newer institutions, particularly those in the Midwest and farther west, were often smaller schools that did admit students from a range of backgrounds (including schools founded specifically for the education of women and people of color, the oldest of which appeared in the US in the early nineteenth century).  But the older and more illustrious universities, like the Ivy League, were exclusively and unabashedly elite organizations that admitted only children from upper-class backgrounds.

The expansion of higher education, both in terms of the number of institutions and the number of students admitted, took place in the twentieth century, and the two biggest periods of expansion correspond to the two biggest reforms to federal funding for students – the 1950s, with the GI Bill, and the 1960s and 70s, with the Great Society reforms.  Essentially, more people could get money to go to college, so entrepreneurial folks set up a bunch of new colleges for them to attend.

So what?  Well, when we talk about the centuries-old traditions of higher education being corrupted by commercialism and consumerism, we’re really misrepresenting the facts.  The clear majority of colleges in the US exist today because of federal financial aid – it’s not a corruption of their core principles that they’re trying to get the most students and the most aid money; that’s exactly why they were founded.  Moreover, the ‘traditional’ education program that academics are fighting to preserve just isn’t that traditional – until the big education booms of the twentieth century, a PhD was not normally required for a professorship, and plenty of universities didn’t even offer graduate programs (and again, arguably many of these were invented to be cash cows – that’s not a modern perversion either).  Many early graduate programs were based on the German model, which didn’t have a thesis component and was sometimes more of a long-service award, recognizing anyone who had participated in the faculty for a certain period of time.  The tenure system is similarly a twentieth-century innovation, and it only became a written, contractual arrangement in the 1970s, following a pair of Supreme Court cases about professorial contracts and dismissals.

All in all, we’re essentially fighting to preserve a system that really only dates from the 1960s and 70s.  Like the myth of professors hemmed in by political correctness, it’s hard not to see this as, at least in part, professors today harkening back to their own experiences as students, without thinking critically about the ‘institutions’ and ‘traditions’ they’re defending.

6. Universities are necessary to keep research alive.  This one is tough, and I admit, I even believe this one a little bit.  So let’s be clear – there is a crapload of research being done by universities that’s simply not going to be done anywhere else, because it’s not profitable enough, it’s too expensive, or it’s never going to produce any kind of marketable product.

However, there’s a big difference between saying that some kinds of research are done by universities and nowhere else and saying that research itself will end, full stop.  And the latter has just never been true.  Huge research advances have been made by private industry, by military development, and by crazy-determined people working out of their basements.  To take a few of the most clichéd examples: the transistor, which is essential for basically all forms of personal electronics, was invented at Bell Labs to make telephone switchboards work more efficiently; wifi and cellphones are both based on military technology (the former also builds on the frequency-hopping spread spectrum invention of Hedy Lamarr, who started inventing to help with the war effort and because she was bored with her acting career); and both Apple and Microsoft were founded by college dropouts, initially working out of their homes.

Indeed, if we want to talk about preserving traditional education, prior to the education booms of the twentieth century it was commonplace for research to be done by ordinary civilians who happened to have an interest in a particular field.  In my own field, some of the most important scholars were never, or only ever briefly, professors.  Alphonse Mingana, one of the founders of Syriac studies, came to the UK to work at the Woodbrooke Quaker Study Centre and spent most of his life as a librarian of Arabic manuscripts; Montgomery Watt, who wrote two of the first widely used English biographies of the Prophet (pbuh), was a priest and an Arabic interpreter for the Anglican church in Jerusalem, and although he was a professor, I don’t think he had a PhD; William Nassau Lees, who produced many of the first English translations of Arabic histories, was an army chaplain stationed in Calcutta; and E. A. Wallis Budge, who produced the first English translation of the Syriac history of Bar Hebraeus and was a curator for the British Museum, left school when he was twelve and (rather famously in my field) supposedly studied Assyrian on his lunch breaks from W. H. Smith (a British stationery shop), eventually catching the eye of the staff of the British Museum, who helped raise money to send him to Cambridge.

Again, like the myth of universities’ ancient traditions, as academics we need to be honest that the standard path to becoming a researcher – go to college, go to grad school, do a postdoc, then become a professor – is really a twentieth-century innovation, and it has never been universally true, even in the last century.  The ongoing changes to how research is done, created by new technologies and the increased ease of communication, are really just a new variation on a very old theme.

7. They’re not!  Everything’s gonna be fine!  As someone who works in academia, I certainly want this one to be true, but I don’t think it will be.  All of the things I ranted about at the start of this post are true.  In addition, higher education in this country isn’t just getting more expensive – because of the federal financial aid programs, it’s becoming increasingly dependent on pooled, collective debt, something that did not end well for either the stock market or the housing market.  And when the education bubble bursts, it’s going to both decimate our education and research capacities and destroy the lives of thousands of young people who just wanted to get an education.  That’s a terrible position to be in, and there’s every reason for us to be trying to address these problems.  But we need to do so honestly, not hiding behind myths and a concept of tradition younger than many of our faculty members.  We need to be willing to consider serious changes and massive overhauls, including potentially losing things we consider integral to the system, like being high-contact institutions.  On that one, we should probably also listen to our own data, which suggest that low-contact and online learning aren’t worse than in-classroom learning.

We also need to be prepared for a lot of the problems to be our fault, since we’ve been the ones running the show, and be prepared to talk honestly about issues of bigotry and bias – about how we’ve allowed these problems to fester, even to the point of endangering our students’ safety and wellbeing, in large part because we just didn’t know what to do and didn’t want to talk about it.  We can’t fix the past, but there are a lot of counts on which we also can’t defend it.

Above all, I feel like we need to brace ourselves that universities now are probably not going to look like they did in the 1970s.  That’s fine – universities in the 1970s didn’t look like those in the 1920s, either.  But for the love of all that’s good, we have to pull our heads out of the sand and accept those changes because the potential downside of failing to adapt is truly terrifying.


On looking like a duck.

Okay, so I’ve been debating whether to write this post, as for my normal readership, I feel like this is really preaching to the choir.  However, it’s also really stuck in my head, so I figure getting it on paper might help.  Also, trigger warnings for discussions of terrorism, violence, abuse, bombings, shootings, and rape.

I don’t think anyone can miss that we have a huge, unaddressed problem with violence in this country.  Even though rates of violent crime have been decreasing consistently for decades, violence remains a reality in the lives of far too many Americans.  The most recent attack against Americans in Charleston, SC, as well as the start of the trial of the Colorado theater shooter, has spurred discussions about racism, gun violence, and the role of privilege in violence that this country desperately needs to have.  However, there is one aspect of these discussions that is particularly important to me and to the kind of work I do, and that’s what I want to talk about here: as a country and as a community, we need to get comfortable calling these people terrorists.

There’s an old joke called “the Duck Test,” popularized in the 1940s: “if it looks like a duck and quacks like a duck, I’m going to call it a duck.”  Like any piece of rhetoric, there are obviously a thousand problems with the Duck Test, but when it comes to the perpetrators of violent crimes, it does feel relevant.  Shooting up public places is, for lack of a better word, terrifying, but there’s significant evidence demonstrating the difference in how the perpetrators of this kind of crime – who are generally white, male, and middle class – are described compared to nonwhite terrorists, nonwhite violent criminals, or, indeed, nonwhite victims of violent crimes.  Indeed, this division runs so deep that we have separate terminology for almost every aspect of these crimes – “mass shootings,” “gunman,” and “gun violence” are all disturbingly neutral, and “madman,” as applied to shooters, is both ableist and essentially apologetic, attempting to explain away the perpetrator’s actions as a form of mental illness.

I’ve talked before about the dangers of our use of “Islamic terrorism” – that by constantly reinforcing the Islamic-ness of acts of terrorism committed by people who self-identify as Muslims, we’re creating a subconscious association between Islam and terrorism (Tariq al-Hubb wrote a great follow-up to my post, which is here).  The more we use the term, the more we search for something essentially Islamic in these acts of violence to accommodate its usage.  However, it’s the word “terrorism” that’s actually important – we should care about acts of terrorism worldwide because they are terrorism, not because of their association with any other ideology.

Moreover, as I think our resistance to calling mass shooters terrorists demonstrates, the focus on “Islamic terrorism” has warped our view of what terrorism is.  At its most basic, terrorism is a planned system of violent acts intended to create fear and terror in a community, in order to force that community to preserve or return to a traditional status quo (real or imagined).  In this way, terrorism is more than just violence.  Like abuse, it’s violence used toward a specific aim – to mold people’s behaviors and actions through negative association with acts of violence.  Terrorists use violence – murders, bombings, kidnappings, rapes, and attacks – to keep their target community afraid, with the ultimate goal of exerting power and authority over that community.

I would guess that this definition of terrorism doesn’t fit most white and middle-class Americans’ view of terrorism, because we’re in the lucky position of having only ever been tangentially connected to it.  As a white, middle-class woman, I will only ever be the victim of terrorism by random chance.  My connection to terrorism is the same as my connection to the lottery or to lightning.  I might be afraid of being struck by lightning or excited by the idea of winning the lottery, but in both cases, these thoughts are nothing more than passing fancies – something that might pop into my head one day, but which will never be my central focus.

This is not the experience of terrorism for communities targeted by terrorists.  These communities live in constant fear of the next attack, the next murder, the next death – of whether it will be them or someone they love.  This may not be the experience of terrorism for white communities in the US, but as the Charleston shooting demonstrates, it is still the experience of nonwhite communities here.  Basically, for those of us with white privilege, understanding terrorism as a system of violence against a target community doesn’t seem relevant or accurate because we’re not the target community[1].  That makes it all too easy to see mass shootings as isolated incidents and not part of a larger system of violence.

This misconception of what terrorism is, and of why mass shooters are terrorists, also shapes how we perceive “Islamic terrorism.”  It is the case that there are terrorist organizations that use Muslim imagery and narrative as part of the justification for their actions, like al-Qaeda, ISIS, and Boko Haram.  It is also the case that many of their victims – in some cases, even the majority of their victims – are Muslims.  Al-Qaeda may claim to want to overthrow America and American leadership, but it’s actually much more interested in maintaining its own power in the Middle East.  Indeed, America works as an ideological target for al-Qaeda in large part because of the history of American interference and imperialist action in the Middle East.  The focus is still on the local community – attacking America can be used to demonstrate a group’s strength, but it demonstrates that strength as part of the system of violence used to control its co-religionists in the Middle East.

This is why it’s so important that we revise our conception of terrorism, including calling mass shootings acts of terrorism.  As long as we only identify our own tangential relationship to terrorism as ‘real’ terrorism, we’re never going to be able to do anything to seriously address the problem.  We need to let the communities who are the direct targets take the lead, offering them protection and resources to address the problem.  More than that, as white, middle-class Americans, we need to accept the idea that those responsible will often look and sound like us, and that those people are using the same protection and privilege that make us only occasional victims of violence to perpetuate that violence.  Until we’re ready to accept this uncomfortable reality, we’ll never be able to make the systematic change needed to end terrorism.

[1] Obviously intersectionality matters here – both white women and white queer people can still be the targets of systematic violence.  However, I’d argue that even here, there is much more institutional protection defending these communities from violence than there is for nonwhite communities in the US.  If nothing else, the state apparatus meant to defend people from violence – the police – will still predominantly help a white woman or queer person who is seen to be vulnerable or in danger, which is not the case for nonwhite communities.
