Please don’t draw the Prophet (pbuh)

So I still haven’t submitted my thesis, but I have received an extension.  And since procrastination and completion are nearly the same thing, I am breaking my hiatus.  Partly because I have more free time now, but mostly because many of the reactions to the tragic deaths in France last week have made me feel incapable of remaining quiet.  In particular, I’ve seen a disturbing number of people online suggest that, like with the Danish cartoons of the Prophet Muhammad (peace be upon him) a few years ago, the best response is another “Draw Muhammad Day.”  I’m strongly against this, and would like to take this opportunity to explain why.  [Trigger warnings: I’m not going to post any depictions of the Prophet or link to any, but I am going to describe what some of them look like.  Also there’s some discussion of terrorism and mass murder.]

But, to start with, it’s worth pointing out (as many people on the internet have) that it is true that the Qur’an does not specifically ban depictions of the Prophet (pbuh).  The practice of not depicting the Prophet (pbuh) stems from the Qur’anic conception of graphic images, and the various ways that Islam has formulated the meaning of graphic images are far too big a topic to cover here.  Maybe I’ll do a separate post about it.  But the fact is that while the ban on depictions arises primarily out of the hadith and the early history of Islam, and is not practiced universally in Islam, that doesn’t make it any less a religious practice for some Muslims.  To put this in perspective, I can make a solid case that the Bible provides no basis for the concepts of heaven (as a place where humans go), hell (as a place of punishment for sinners), or the soul (as an immortal aspect of all humans), but that doesn’t make these concepts any less integral to Christian theology.  That’s because Scripture doesn’t work like a cookbook – you don’t just read it and do what it says.

To turn to Western depictions of the Prophet (pbuh) as artistic and/or political statements: first off, your depiction of the Prophet (pbuh) is probably not going to be original.  If you’re an artist looking to push boundaries, this is pretty much the exact opposite of that.  Non-Muslims have been producing depictions of Muhammad (pbuh), both in written works and as pictures, since their first interactions with Muslims.  Many of these depictions are negative, painting Muslims as mindlessly violent and terrifyingly evil.  The root of this meme is not hard to find – many of these works come from communities who were at war with Muslims, whether due to Muslim incursion, as during the Islamic expansion, or due to Western incursion, as during the Crusades.  In either case, non-Muslims have routinely produced works of art that depict Muslims as evil, and Christian works in particular like to present Muhammad (pbuh) as the devil or the Antichrist.  Drawing a cartoon of him carrying a bomb is just a modern twist on a 1400-year-old artistic tradition.

Even if you have come up with something more creative than the Prophet (pbuh) as Devil trope, if your aim is to satirize Islam, either as it’s experienced in other countries or as it’s experienced by Muslims in the West, it’s really unlikely to be successful, because it’s incredibly difficult to satirize a foreign country or culture.  This one kind of makes sense – satire is meant to be a short, pithy observation about everyday life, often relying on juxtaposition to create cognitive dissonance in the viewer or listener.  So for Westerners wanting to satirize Islam, there is a lot of ground to cover.  If you’re trying to skewer people in foreign countries, you would need some kind of access to them as an audience, as well as intimate familiarity with their culture.  If you live in the US – do you know if foreign papers are running political cartoons about the US?  Do you care?  From having lived overseas, I know they are, but I don’t think most people seek out foreign jokes about their country, and they would probably be deeply disappointed if they did, as the jokes are liable to be wildly off base.  Case in point: one of the most common jokes in Britain about Americans is that most Americans don’t have passports – something unthinkable in Europe, where the countries are tiny, but something relatively neutral in America, where our country is massive.

For attempts at satire about Muslims in the West, the case is even more complicated, because the joke would need to succinctly demonstrate an understanding of some aspects of Islam as a global culture and an understanding of how the intermixing of that global culture with the local Western culture can create conflict, and it needs a quick and easy way to present this information to people who may have no experience of either of these things.  Which is probably why so much really great comedy about Muslims in the West comes from Muslims who live in the West, like Aziz Ansari, Dean Obeidallah, and Sadia Azmat.  They grew up knowing the first two things, so all they had to do was figure out how to present their experiences in a humorous way to an uninitiated audience.

So for one thing, I think it’s going to be very difficult for non-Muslim Western artists to create something original about Islam, especially if the intention is satire.  But beyond the questions of interpretation and expression, I do believe there needs to be an ethical consideration about drawing the Prophet (pbuh), as well.  In all of the discussions about the attacks in Paris, it feels like people are forgetting that criticism is not the same thing as censorship.  Censorship is a government or other body of authority preventing something from being visible or accessible publicly.  For example, in the US, there is religious censorship which prevents the use of “Jesus Christ” as an expletive in some forms of media, in order to protect the religious beliefs of some Christians that this qualifies as taking the Lord’s name in vain.

Freedom of speech and of the press are important for preventing censorship, and although people have already pointed out the hypocrisy of many of the world leaders who marched for freedom of the press in Paris while denying it in their own countries, I think it’s also important to point out that criticism and even violent backlash against art is NOT censorship, because it does not come from a position of authority.  It’s exactly the existence of freedom of speech and the press that gives people the right to publish things that may be deeply hurtful to others.  Criticism is vital in that system to address how that art functions, why it’s hurtful, and whether that hurtfulness comes from a place of bigotry and racism – which, I would argue, for most non-Muslim Westerners who want to draw the Prophet (pbuh), it does.  Absolute freedom of expression without thought to consequences is guaranteed to end in violence, because no one enjoys being insulted over and over again – as the Pope pointed out, if you insult someone’s mother, you’ll get punched in the face.

There’s also the fact that what art a society holds in esteem says a great deal about the opinions of that society, what they deem important and what they don’t.  In the case of Charlie Hebdo, by focusing on this single event as a clash of ‘Islam and the West,’ we risk implying that this is the only way in which that clash is manifested.  This is dangerously untrue – on the one hand, there has been a horrible re-emergence of nationalist xenophobia in Europe in the last several decades, and Muslims living in France today have suffered truly terrifying levels of oppression and abuse, including 16 Muslim places of worship being vandalized or attacked in just one 48-hour period.  On the other hand, extremism in the name of Islam is an international problem, and communities outside of the West face far worse effects of it than Westerners, as evidenced by the horrifying attack by Boko Haram in Nigeria in the same week, in which somewhere between 150 and 2000 people were murdered.  Terrorism and religious extremism are not just, or even predominantly, Western problems, and treating them as such, and expecting the world to jump to our aid when the West is attacked, is a disgustingly imperialist idea.

Finally, I would argue that it has to be morally wrong to intentionally try to make someone violate their private religious practices.  Choosing not to depict the Prophet (pbuh) or to look at depictions of him is not forcing your beliefs on others, and asking for images of him not to be present in public spaces is no more religion invading the public realm than having churches, crosses, or manger scenes visible in public spaces.  If someone were running around trying to trick Jews into eating pork, most people would probably consider that person a jerk, even if they labeled their action as performance art.  Similarly, although there’s plenty of room for debate as to whether using “Jesus Christ” as an expletive is blasphemy, it would still be wrong to start “Blasphemy Day,” where people run around trying to shout it in Christians’ faces as many times in 24 hours as possible.  Religion is personal, but it can’t always be private – people inhabit public spaces, and religious people have as much right to those spaces as nonreligious people.  Asking for consideration is not the same thing as demanding censorship, because it doesn’t take away anything from anyone else.

I am absolutely not in favor of censorship, but again, consideration and criticism are not the same thing as censorship, and, I would argue, are actually incredibly necessary to make good art.  Part of any piece of art is thinking about how it will be experienced and what message it will send to its audience, and I can’t see what message drawings of the Prophet (pbuh) send except a big middle finger to Muslims.  It won’t be shocking to a non-Muslim audience because we have no preconceived notions about images of the Prophet (pbuh) for it to attack.  It probably won’t be shocking to Muslims because I’d guess most are aware that depictions of the Prophet (pbuh) exist; they just choose not to look at them.  As far as I can tell, the absolute best outcome that can come from papering newsstands with images of the Prophet (pbuh) will be that it will hurt Muslims, but that they will choose to accept that hurt, and bear it privately, rather than asking for consideration and respect for their beliefs.  Which is exactly what Muslims are choosing to do.  And that’s a beautiful statement about them as people, but still doesn’t validate the art.


Closed for winter!

Although it hardly seems fair to say I’m taking a hiatus from blogging when the posting around here has been so variable, I’m still claiming this as a unique period of not posting, as opposed to my normal not posting.

I’ve been in the habit of closing down for the holidays, so I don’t have to feel guilty spending all day playing Borderlands with my sister rather than writing about imperialism and violence.  Although I’m agnostic, I am super obsessed with Christmas, and I really appreciate being able to take a couple of weeks off.

However, this hiatus may last a little longer.  As I’ve mentioned a few times here, I’m in the process of finalizing my doctoral thesis for resubmission, a weirdly Oxford process that has taken more than two years for various reasons.  My final submission is probably going to be in the next two months, so between that and working full time, much of my intellectual and emotional energy is already allocated.

I’m hoping that by February, everything will be done, but I don’t anticipate being able to produce much for this blog until then.  So until February, merry Christmas, happy Hanukkah, happy Kwanzaa, happy New Gregorian Calendar, and Gong Hey Fat Choi!


Should academia be addressing for-profit education?

First off, a little self-advertisement: Nahida very kindly asked me to write a guest post for the fatal feminist, so head over if you want to read some rambling thoughts about Quranic revisionism in Western scholarship.

This is going to be another one of those posts that’s got nothing really to do with the history of Islam, but that is maybe relevant to scholarship in general, namely, the rise of for-profit education and whether the traditional academe should be doing anything about it.

For those who haven’t heard, the Obama administration has been trying to increase regulation of for-profit schools, in particular by setting requirements for the return-on-investment ratio of the school’s cost to resulting employment.  In part due to these changes, Corinthian Colleges, the publicly-traded company that owns many for-profits, has declared bankruptcy and will be selling off many of its campuses.  The increased administrative attention has also led to several state and civil lawsuits against for-profits, claiming gross misrepresentation and predatory sales techniques.  Here’s a truly disgusting article from BuzzFeed about students having their federal loan applications effectively submitted for them with no understanding of how much they owe, and here’s John Oliver being brilliant in his discussion of how absolutely horrible the education at a for-profit can be, despite costing upwards of five times more than a community college.  Unfortunately, the new regulations are significantly less strict than what the Obama administration had announced earlier this year, and despite Corinthian’s bankruptcy, its campuses will likely be sold to other for-profit education companies and the system will likely continue.

I’m sad to say that despite working in education, as both an academic and as an administrator, the news articles this summer were the first I had heard about any of this.  I’m also sad to admit that as an educator and advocate for education reform, if you had asked me six months ago who goes to the University of Phoenix or ITT Tech, I probably would have said, “stupid people.”  For-profit education has been around for a couple of decades – I remember the low-production-value ITT Tech ads that ran during daytime television when I was a kid, and teasing my sister, when she was already an engineering student at Caltech, that she should get a degree in rocket science in just six months!  It says so right there!

So for me, part of the tragedy in reading these articles was being reminded that having access to a university education is a huge privilege, but so is knowing about universities and what a university education is supposed to be like.  I’m sad to say that it genuinely hadn’t occurred to me how many people really don’t know what to expect from a university, and wouldn’t just laugh hysterically at a school calling day in and day out demanding you enroll, or expecting you to enroll on the spot if you come for a campus visit.  Reading the stories of people who’ve been taken advantage of by these institutions, it’s clear how adept these schools are at playing off those circumstances as benefits – look how great a candidate you are, we can sign you up right now!  Or, you better sign up right now or you’ll have to wait until next year!

Which brings me to the question of whether academics should be doing something to help the students who are being taken advantage of by these institutions, or I suppose more specifically, why aren’t we?  Admittedly, traditional academia has problems of its own, from poverty-level adjunct jobs and an ever-dwindling tenure job market to continuing problems with sexual harassment, racism, and classism.  But I suspect that part of the problem is that many academics are in the same mindset as I was – it really just doesn’t seem like our problem, and we’re not 100% sure this isn’t the victim’s fault for being taken advantage of.

The other problem is, as I’m sitting here writing this, I’m not really sure what we could do to help.  Two things come to mind, but both would require a ton of commitment from individual academics, and one would require actual institutional support:

  1. Schools need to provide actual college counseling, and academics could help teach people what going to college should look like.  I recognize that a lot of the people being enrolled by these colleges are not school-aged any more, but resources for learning about schools should be part of the high school curriculum, and ideally available at job centers, rehabilitation centers, and the VA as well.  Academics are in an ideal position to give real feedback about university education.  But this would obviously require direct intervention, when many academics already feel they have too many ‘other’ obligations on their time besides research.
  2. The effectiveness of for-profit recruiting among low-income and rural communities is also a tribute to how ineffective universities continue to be at targeting these communities.  Now, it’s worth pointing out, as several of these articles do, that many of the students admitted to the for-profit programs could not qualify for a community college or state school – they have no GED or their English language capacity is far too low – but those students make up only one portion of the admitted students.  Many of these people could be enrolled in similar programs in community colleges for a fraction of the cost.  This is especially true of veterans – despite the absolutely pivotal role the GI Bill played in shaping the modern American middle class after World War II, many universities have cut or cancelled entirely their VA recruitment programs.  This is not only terrible for veterans, who need support for education and job training, but also for universities, which are constantly desperate for new financial streams.

Disappointingly, I don’t even know how to start getting these issues on academics’ radar, let alone get people involved.  But it’s heartbreaking to me that in this day and age, there are people being so horrifically taken advantage of simply because they wanted an education, and that the field of professional educators, of which I’m a part, seems so completely detached.


Learning, research, and received wisdom

As I’m now deep in the depths of the final revisions of the resubmission of my thesis (long story), I’ve found myself re-reading books and journal articles that I first read years ago, and as a result, I’ve also found myself thinking a great deal about how learning and research works, and in particular, how ideas get recycled across years and years without any real critical consideration.

There are a whole bunch of circumstances in academia today that, at least from where I’m standing, seem to be limiting innovation.  An academic job market that has never really recovered from the recession puts pressure on graduate students and early-career investigators to focus on ‘safe’ material – grad students get told we need our theses to generate great letters of support, and then to be turned into books that will get great reviews.  If we score a tenure-track position, it seems like more and more the mentality is to keep your head down until you have tenure, whether that means being expected to live above your means or not making waves on campus.  Even for established researchers, the pressure of funding still puts the emphasis on continuing existing projects rather than branching out into something new, because no one wants to fund failure.  All told, that means that for upwards of the first twenty to thirty years of your career, the pressure is to keep to familiar ground.

These circumstances are particularly problematic because, even without them, it seems like we humans tend to keep to familiar ground with our thinking anyway.  We like precedent, and seem to have the tendency to judge new information against what we already know about the subject.  In doing so, however, we’re essentially privileging whatever we heard first.

This plays out in education all the time.  Take, for example, a couple of weeks ago, when many children learned, as many of us did, that “in 1492, Columbus sailed the ocean blue.”  We learned that he was proving the world was round, not flat as everyone else thought, and that he had three ships, the Nina, the Pinta, and the Santa Maria.  And to prove how effective that early teaching is, I did all of that from memory.

However, only bits and pieces of that are true.  He did sail in 1492; he did have three ships, and he did sail an ocean of some color.  However, he wasn’t proving the world was round – according to contemporary records, he was arguing that the world was significantly smaller than it had already been proven to be (by a mathematician in Alexandria using trigonometry), and according to his own sea journal, he thought the world was pear-shaped or the shape of a woman’s breast, with the ocean being the bulge (thus taking so long to traverse).  The children’s account also leaves out his many atrocities in the Americas.

The claim is that the children’s version has to be simplified and sanitized in order to make it appropriate for children.  However, the differences go well beyond simplification, to the point that many of us have the experience of learning the more complex and complete story as adults and feeling either lied to or strongly resistant to the latter version, as contradicting what we already know.  We take the children’s version as received wisdom – something from on high that comes to us complete and elegant – and resist the adult version, with all of its problems and problematic questions, as being less satisfying.

The apparent benefit of received wisdom is elegance.  Elegance is also a mathematical principle – in logic, the best proofs are elegant, in that they are succinct, rely on a minimum of assumptions, and contain no extraneous material.  It’s tempting to use the same model for all information, but the problem is that oftentimes, the thing we learned first wasn’t actually more elegant than the thing we learned second – especially if we learned it as children or when we were just starting out in a particular field, it was probably oversimplified, and we were too inexperienced to know how to ask the right questions or interrogate the information properly.

This comes up in Islamic studies because, as I’ve talked about before, my field has a history of source skepticism.  But it’s a selective source skepticism.  It has to be, because pure source skepticism is impossible – so few extant manuscripts that we can authenticate date to within the periods they describe that we would barely be able to describe pre-print historical periods at all.  And in rereading so much material in one go, I’ve started to notice the pattern of which sources are privileged above others, and in many cases, they’re the ones we all learn first, pure and simple.

For example, I was reading a book recently that made an absolutely beautiful source skeptical argument about a particular Christian theological debate from the seventh century called the Monothelite debate, basically arguing that too many of the sources that historians have traditionally relied upon come from after the Ecumenical Council at Constantinople in 680.  Monothelitism was one of the reasons that Council was called, and historians have used this to cast backwards a version of the controversy in which the theology was always full-formed and the sides clearly delineated.  This guy made a compelling case that we need to be more careful and look exclusively at the earlier sources to understand how the controversy started, and not just accept the Acts of the Council as the full story.

And then proceeded to defend his reading of what the controversy looked like in the early seventh century by citing Michael the Syrian, a 12th century source, extant in a single complete 16th century manuscript and four known partial manuscripts.

So if the concern is authenticity to a 7th century mindset, one that didn’t yet bear the strict “orthodox/heresy” perception of the church council, why use a 12th century source?  Well, to be honest, I don’t think the author was thinking about it like that.  I think the starting point was ‘it says this in Michael the Syrian, so how does that compare to other things?’  And I think the source of that mentality is that Michael the Syrian is, quite literally, how many historians of the Late Antique period learn what a chronicle is.  It’s one of the first things you translate if you’re learning Syriac; it’s one of the first things you read about in any given textbook or general study.  We don’t hold it to the same rules of source skepticism because it’s how we learn what a source is, so of course it must be a good one!

This isn’t a vote for Michael the Syrian as a good or bad source – on some level, it’s a demonstration of the essentially tautological nature of source skepticism.  We can’t apply the same skepticism to all sources because on some level, we need a model to use as a standard.

But it’s also a vote for innovation, and for a certain flexibility in how we do research, so that there are opportunities to go back and revisit even well-accepted sources, with enough time and space to approach these questions, even, or perhaps most importantly, for the beliefs and perspectives with which we’re the most comfortable.


Pay no attention to the women behind the veil? (Actually, no, don’t do that.)

So in the on-going saga of Western countries being horrible to women who wear burqas or niqabs, Australia announced that it was going to require anyone wearing “facial coverings” to sit behind a glass partition when viewing Parliamentary procedures.

A whole lot of people immediately pointed out that this was a ridiculous decision (BuzzFeed has helpfully collected some of the cleverest responses).  Prime Minister Tony Abbott backtracked and said that the new rule might not be necessary, as those wearing facial coverings would already have gone through the same security screenings as everyone else, and as far as I’ve heard online, everyone is just sort of waiting to see what Parliament does next.

Obviously the rule is silly.  People go through security screenings, and people are no more likely to hide dangerous materials on their face than anywhere else on their person (I’d argue less likely, actually, but I’ve never tried to tape a gun to my face).  If the issue were identity screening, anyone wearing facial coverings could just be checked privately, and then allowed to wear the covering like usual.  The only reason to single out those wearing facial coverings once in the proceedings would be in order to stress the differences between those people and all others – the same reason there was once segregated seating for women or for people of color.

However, I do think the ongoing saga of the West’s relationship to Muslim styles of dress is a useful way to discuss why intersectionality is so important both to understanding religion in the public sphere and to understanding the treatment of women today.  Because the problem of Australia’s response to Muslim women wanting to attend Parliamentary proceedings does not lie exclusively in the fact that they are women.  Or in the fact that they are Muslims.  It’s the intersection of the two that matters.

In order to demonstrate this point, think about how many times you’ve seen this picture:

[Image: women wearing niqabs]

Or this one?

[Image: a woman wearing a niqab]

Or this one?

[Image: a woman wearing a burqa]
All three are from news sites discussing Western veil banning and whether Islamic styles of dress oppress women (here, here, and here).  But in looking at these images, we need to ask – who are these women?  Are these the same women over and over again?  Did they agree to be photographed?  Did they know how these photos were being used?

For all the West’s apparent discomfort with the veil, we’ve become increasingly comfortable with the use of women wearing the veil as a visible cue for the differences between Islam and the West.  These photos also reveal the continued lack of interest in knowing more about these women’s choices or these women as people – the Guardian article even mislabels the woman in photo 2 as a French woman wearing the burqa, when she’s wearing a niqab (as are the women in picture 1, despite being attached to an article specifically about burqas; only the woman in picture 3 is wearing a burqa).

This repeated image of an anonymous woman wearing a veil is essentially a form of objectification.  Although we most often discuss the objectification of women as objects of sexual desire (if you haven’t read Ozy Frantz’s fantastic breakdown of women as sexual objects, you’re missing out!), objectification can take many forms – in this case, the anonymous nature of the figure in the photo encourages the perception of women in Islam as voiceless masses being forced to conform to tyrannical requirements of their religion as seen in their style of dress.  In reality, lots of women wear veils and other forms of ‘Islamic’ dress for lots of reasons – even if that reason is just “everyone else does it,” that’s still a choice, made by an autonomous human being (and the same reason millions of people around the world choose what to wear – “everyone else is wearing this” is basically how fashion works).

None of these discussions are about specific women who choose to wear the veil, or how the discrimination against them has affected them personally.  Instead, these pieces (this one included!) discuss ‘women who wear veils’ as a generic whole.

As I said, objectification of women is commonplace.  However, the specific objectification of the anonymous women in veils isn’t just a result of their being women.  It’s specifically about them being Muslims.  Indeed, one aspect of the supposed security threat of women in veils is the identity concern, that it could be anyone under there!, essentially abstracting these women’s identity beyond them even being women.  Objectification is dehumanizing – in this case, by emphasizing the supposedly anonymizing effect of the veil, it allows the people pushing anti-veil laws to claim that there’s no downside.  Banning the veil in public just makes non-veil wearing people feel better!  It’s important for security!  Veils make people uncomfortable!

All of those things might be true, but in a free society, we’re supposed to measure the benefit of these changes against the damage and inconvenience caused to the people negatively impacted by these laws, just like we do when we’re considering limiting someone’s freedom of speech or right to bear arms.  Doing so requires listening to Muslim women – those who wear veils, those who wear headscarves, and those who wear neither, women who occupy the whole spectrum of how women experience Islam.  We need to listen to them about their experiences, about the treatment they already receive for their choices, and about how these proposed changes impact them.  Like Nahida.  And woodturtle.  And Amanda Quraishi.  And Zainab bint Younus.  And the women at altmuslimah.  And the literal millions of others who don’t have websites.


Outsiders, minorities and learning new tricks

This post sort of follows on from my last one about things always seeming worse than what came before, but it’s also about a tendency I’ve seen in a lot of thinkpieces talking about anything to do with atheism, skepticism, or nerddom, namely, the tendency to conflate being an outsider with the experiences of minorities.

I admit, this is pretty far outside the scope of this blog, but it’s something that’s fairly important to me as a person, so I’m going to write about it anyways.  Having finished it, it’s also pretty long, so sorry about that.

In the interest of full disclosure: I am a nerd.  Always have been.  As a kid, I was a massive Marvel fangirl – I still own several hundred comics, and as a kid, I also had a large collection of Marvel trading cards (which I never traded with anyone, because I didn’t know any other Marvel fans) and action figures.  In middle school and high school, I got into scifi novels and anime.  College and grad school introduced me to Doctor Who and Firefly.  They also introduced me to intersectional feminism and the idea of modern imperialism.

As a girl, I always felt unworthy of calling myself a nerd.  The boys who ran the comic book shop would give me pop quizzes on aliases and origin stories before allowing me to step into the Marvel section, to prove that I deserved to dig through boxes looking for an old back issue to complete my collection, unless my dad (who got me into comics) came in with me, in which case they’d hang back, casting side-eyed glances as we pored over the re-issued original X-Men.  I had a few female friends in high school who liked anime, and my school had a small anime club run by the Japanese teacher, which was mostly boys and maintained a strict “NO SAILOR MOON” rule, on the grounds that it wasn’t ‘real’ anime.  I didn’t get into video games until my mid-twenties – until then, I remained distinctly, and often literally, ‘gamer-adjacent,’ happy to sit around and watch my friends game.

I was still enough of a part of the nerd community to know the Story of the Nerd, though – the brilliant, misunderstood boy who got picked on by bullies for liking comics more than football, who could never get hot girls to talk to him, but who was creative and interesting and would eventually grow up to be Steve Jobs or Frank Miller.  As a kid, the Story made me empathize with the boys who were refusing me access, believing that they were bullied and mistreated, and thus distrustful of others.  It also made me feel guilty, that I wasn’t hot enough or the right kind of girl for them (something they were also happy to extemporize about to my face).

When I got older, and learned about things like intersectionalism, microaggressions, and socialization, I recognized the Story for what it was – a myth.  Myths are stories that, although they may stem from reality, preserve an imagined version of reality.  Like parables and fables, they’re instructive, but myths give instruction on a macro scale, demonstrating how communities should behave.  The Story of the Nerd preserves nerd identity – outsiders, misunderstood, under-appreciated.  But more importantly, the myth also requires that the Nerd be a boy, and a straight one at that.  The women in the Story are symbols for his identity, demonstrating how unappreciated he is.  Women in the Story are something to strive for and achieve – effectively, a prize, handed out in recognition of a person’s acceptability to society.  They have no personality or will of their own.

Nor does the suffering of the archetypal Nerd ever get truly terrible or severe.  We’ve all grown up with stories of nerds and bullies – the nerds are occasionally beaten up, but more often humiliated – pants pulled down, heads dunked in toilets, etc.  At least in the portrayal of bullying, no one is ever seriously injured.  There’s also almost always a comedic component, again reducing the severity of the whole scenario.

The Story was accurate, at least as far as the nerds I knew personally went.  They were sometimes bullied in school, generally made fun of or occasionally publicly embarrassed.  When I was old enough to think to compare notes, the suffering they had experienced as nerds was pretty mild, compared to what I had experienced as a young woman.  Most of my nerdboy friends had never been followed, screamed at in the street, or grabbed and held against their will.  None of them had had stalkers.  None of them had been physically or sexually assaulted.  As white kids, none of us had ever been followed or threatened by the police.  As white kids growing up in Arizona, none of us had ever been accused of being in the country illegally.  None of us had ever been chased by people screaming racist slurs.  Most of us had never even been beaten up.

And herein lies the difference between being ‘an outsider’ and being a minority.  ‘An outsider’ is still a recognizable part of the privileged, normalized section of society.  They don’t play a central part in that society, but they’re still in it.  They still receive the basic protection of that privilege.  And in order to receive a more central role, all they need to do is either change their interests, or lie.  Or, as the last decade has proven, wait, as most of the things which marked me and my friends out as nerds and ‘outsiders’ in the 90s have since received mainstream popularity.  The Marvel cinematic universe is now only competing against itself for box office records.  Gaming is so mainstream that we can’t even decide what “a gamer” is.  And most of my nerdboy friends are successful by any traditional measure.

And yet we continue to want ‘being an outsider’ to be the same thing as being oppressed.  The most recent version of this I’ve read is this lengthy thinkpiece on Buzzfeed about whether the atheist movement can survive misogyny.  The author reiterates over and over again how outside the mainstream the ‘freethought’ movement is, and how much better than the mainstream it is, too – it’s a “progressive, important intellectual community” made up of “cheekily anti-establishment skeptics.”  They’re “liberal, forward-thinking types” with “matter-of-fact attitudes” and started out as “a tiny, bygone community of eccentrics” and “a safe space for science geeks, political dissidents, and other kinds of misfits.”

He also repeatedly side-steps the question of whether this wacky, fun-loving, bygone community of eccentrics used to be sexist – he suggests that any institutional sexism would have been the result of history because “groups like American Atheists drew from university faculties, particularly philosophy and science departments, and from libertarian and objectivist political culture — all heavily male” and seems to suggest that the modern sexism is a result of the rapid expansion of the culture: “This overall growth, and increased parity between the sexes, would seem like a good thing for the movement. But not everyone saw it that way. Older male activists in particular were like fans whose favorite obscure band hits it big; their small, intimate shows were becoming big arena concerts, leaving them a bit dislocated.”

To start with, this is another example of the tendency to read ‘history’ as ‘what really happened.’  By the 1950s, there were women professors in both philosophy and science, including the often majority-female faculties of women’s colleges.  There were also a growing number of women – including women of color – involved in politics in the civil rights movement.  It wasn’t that there weren’t women who might have had opinions about religion and skepticism – it’s that the men forming these clubs weren’t talking to them.  Similarly, like the boys in the comic book shop quizzing me on Wolverine’s past, there’s no logical reason why the growth of the community should cause the old guard to lash out against women specifically – if it were just a concern for maintaining either the size or the purity of the community, then the gate-keeping should be applied equally across the board. There’s no reason to assume a priori that a woman is less qualified to be either a ‘real’ skeptic or a ‘real’ nerd than a man, so why quiz one and not the other?

But what really struck me, reading the piece, is that it seems like the biggest difference between ‘an outsider’ and a member of a minority is whether they can learn new tricks.  One of the repeated themes from the piece is one that also gets brought up about feminism and racism in general: that all of this discussion of feminism makes it so that ‘no one knows the rules,’ meaning usually the rules for friendliness and/or flirting.  In the piece, this is vocalized by the only figure the author even comes close to accusing of sexism, Michael Shermer, who casts himself as the victim of “a growing movement to clarify or even to redefine the rules of sexual encounters.”

I’ve talked before about how people use the concept of “rules” to claim a natural state where none exists.  The “natural” rules for human sexuality would be the same as those of other primates – lots of sex, with everything and everyone, regardless of age, blood relation, or consent.  We don’t do that because of social expectations, and those social expectations change over time.  Shermer here is essentially complaining that he’s expected to move with the times and accept those changes, but in doing so, he’s invoking the myth of the Nerd – he’s on the outside, not part of the social mainstream.  He’s a rebel, Dottie.

However, the privilege to ignore the rules is just that, a privilege.  It doesn’t extend to everyone.  In particular, minorities and oppressed communities are expected to adapt and change their behavior in order to avoid being blamed for their own oppression.  Women are told they can avoid being raped – don’t get drunk, don’t go out alone, don’t dress a certain way, wear ‘roofie-sensing’ nail polish, carry a self-defense weapon, etc.  When innocent black people are shot by the police or their neighbors ‘standing their ground,’ we look to how they were dressed, how they wore their hair, how they were standing, or what they said to account for what happened.

The same goes for the privilege of being confused or not knowing the rules.  A man can say he was confused and didn’t know what he was doing if he gets drunk and assaults someone, but it’s hard to imagine a woman getting away with saying she didn’t know what would happen if she got drunk in public.  Similarly, a police officer can claim he thought a black teenager was armed, but we’d have little sympathy for a black teenager who mouthed off to a cop because he didn’t know what would happen.

I think we’re particularly sympathetic to nerds and skeptics when they use this ‘but you’re changing the rules!’ excuse because so many of us know the Story of the Nerd, and feel like breaking the rules and not adhering to society’s norms are parts of their identity, so by expecting them to treat women as people or by asking them to consider issues that affect people of color, we’re somehow destroying their identity.  But in reality, doing these things isn’t mainstream – most people don’t do them, which is why we have institutional racism and sexism.  Social justice is an outsider activity – if nerds and skeptics really wanted to be on the outside, they’d be standing next to us.


Imaginary Islamophobic strawman: Everything’s worse than it was!

Well, since there wasn’t roaring disappointment over my use of an imaginary Islamophobic strawman, I figured I’d carry on, footloose and fancy free.  (Seriously, my feet are so loose.  Dangerously loose.  Does anyone know where I can get some new bolts for my feet?)

The other strawman argument I run into the most often isn’t actually anything unique to Islam, or even to the study of religion – it’s the tendency of people to associate “history” as we can read it in books with “an exact record of what happened in the past” and not “an extremely limited view produced predominantly by the educated elite who didn’t care much about the vast majority of the population.”

Examples of this phenomenon include the ever popular “women in the workplace/homosexuality/variety of sexualities/non-nuclear families/insert thing someone doesn’t like here is just such a recent invention” or “there’s so much more drug addiction/depression/mental illness than there used to be.”

The problem with studying any social institution or trend, particularly minority ones like mental illness (which affects roughly 1 in 4 people in the US) or non-heterosexual orientations (on which there is very poor reporting even today, but let’s go with the old figures and say 1 in 10 in the US), is that you need a decent sample size to be able to track them.

It’s just like when you learned about statistics in school – statistically, a flipped coin should land 50/50 heads and tails[1].  However, it’s a totally reasonable outcome to flip a coin several times and have it land the same way.  You need to flip it roughly a hundred times before you start to see a consistent pattern.  In the same way, you need to study a large pool of people in order to get an idea of what is and isn’t common behavior.
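The coin-flip intuition above is easy to check for yourself.  Here’s a toy Python simulation (mine, not from the post – the specific sample sizes are just illustrative): with a handful of flips, lopsided results are common, while with thousands of flips the observed rate reliably settles near 50%.

```python
import random

random.seed(42)  # fixed seed so the demonstration is repeatable

def heads_fraction(n_flips):
    """Flip a fair coin n_flips times; return the fraction that land heads."""
    heads = sum(random.random() < 0.5 for _ in range(n_flips))
    return heads / n_flips

# Small samples swing wildly; large samples converge toward 0.5.
for n in (4, 10, 100, 10_000):
    print(f"{n:>6} flips -> {heads_fraction(n):.3f} heads")
```

The same logic applies to people: a historian who talked to four villagers could easily conclude that depression (or anything else) was universal, or nonexistent, purely by chance.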

But you can’t do that with historical sources, because even histories that claim to tell a universal history of their own times or the preceding centuries aren’t doing that.  They’re inevitably telling a little bit of that history that seems relevant to that particular historian.

Case in point: The Muslim armies, during the Muslim expansion, besieged Constantinople twice.  The first time was around 674 CE, when they managed to maintain a siege more or less continuously for several years.  The second was in 717, when the siege lasted for one year and most of the Muslim army starved to death.  We have historical records from the 670s discussing the first siege, but it is often left out of later histories.

So what?  Well, if the earlier records haven’t survived (and remember, books are rather perishable and, in particular, very flammable.  Also very inflammable.), we’d never know the earlier siege happened.  It did happen, but it just wasn’t relevant to the later authors.  It was longer than the 717 siege, but it was ultimately just as unsuccessful.  And the 717 siege happened to coincide with a major change in rule in the Muslim caliphate, and marked the last significant incursion by the Muslims into Byzantine territory.  So it felt more important to later generations.

And in general, historians care way more about military and political history than they do about social history.

Take, for example, the case of depression.  There is definitely more diagnosed depression than there was two hundred years ago.  This is, at least in part, because a clinical definition of depression only emerged in the mid-20th century.  But even if we could read a definition into history, how exactly are we going to judge depression against the general horribleness of life for millions of people in the last two centuries?  I would go out on a limb and guess that many people living in the US in the mid-19th century displayed signs of depression, but I’m not sure it counts if millions of them were being forcibly held as slaves.  Hundreds of thousands more were living in poverty, working long hours under grueling work conditions, suffering in war, slowly dying of infection, cholera, or influenza, or scraping together a bare existence on the frontier.

So in order to speak accurately to whether there is more depression now than a hundred years ago, we would need a historian who was running around, interviewing a large and varied section of society about their personal experiences and problems, in such a way that we, as modern readers, could effectively analyze their responses for signs of depression, and weigh those responses against the often terrifying realities of that person’s life to decide if their feelings of hopelessness or emptiness weren’t simply common sense.

So why does any of this matter?  Well, our oversimplified view of history is often used to make the current day look worse than it is.  These arguments are also often highly dependent on post hoc, ergo propter hoc – there was no homosexuality in the 50s, and everyone was happy, as proven by things like Howdy Doody and Leave It to Beaver, so clearly all of these gay people are making people unhappy.  Well, no.  Firstly, there were gay people in the 50s, and for centuries and centuries before that.  And secondly, loads of people were unhappy in the 50s, due to things like racism, civil unrest, tyrannical government practices, and Howdy Doody (that puppet is evil).

It’s also possible that the opposite was true – maybe the majority of people 100 years ago were really contented and comfortable with their lives (honestly, seems unlikely, but then again, I’ve never had cholera).  The point is that we don’t know, and trying to draw social lessons from the past about anything except the really big events implies a whole bunch of knowledge that we just don’t possess.

[1] Actually a flipped coin has a slight preference for the facing side when flipped, but whatever.
