How do the little rubberdrones
Improve their glistening shine,
And fill themselves with tentacles
And lead-sweet squid-ink wine.
How blankly do they smile as one,
How slickly glide their paws,
And mutely serve in multitude,
With joy their only cause.
— with apologies to Rev. Dodgson and Mr. Watts
In part one of the series, we took a quick tour through the “furry” component of “postfurry.” We did a bit of digging into the history of the fandom, and we looked at how the technology of the community helped shape its cultural norms. We coined a clever word to describe the cloud of behaviors and values that, while not strictly related to furry in the sense of being “what furry is,” all helped sharpen the focus on what the furry fandom is, or at least has the potential to be.
“Epifurry.” Write that in your copybooks, now.
From here, we start to get into the hard stuff. I’m not going to try to write an entire essay on each and every one of these topics, but I want to cover them in enough detail and provide enough links that those of you following along at home can dive as deep as you like in your off-hours. Remember how I said that “postfurry” was just “post- plus furry,” and then I turned around and said that it’s really more like “post plus epifurry”? Well, we’re going to do that again from the other side, because “post-” isn’t just one thing. It’s actually a variety of possible things, each of which contributes a little bit to the understanding of “what postfurry
means to me.”
Time to unfold again.
I may have bitten off more than I can chew. “What is postmodernism?” Cripes. Okay, deep breaths. We’ll get through this.

To start, let’s jump back to a fairly common launchpoint: the Industrial Revolution. Starting in or around the 1870s, we underwent a fundamental shift in how our societies worked. Until that point, if you wanted a thing, whatever that thing was, somebody had to make it for you. That meant an army of skilled labor dedicated to producing stuff, and a standard of living for most people — aristocracy aside — that was pinned to the value of the stuff you could make and the speed with which you could make and sell it. Then folks started to figure out that if they made machines that could make stuff, their standards of living would be tied to how much stuff the machines could make and how well they could keep the machines running. This kicked off an economic arms race that, here in 2016, has people worrying that the idea of “a job” may already be obsolete. In the 1870s and 1880s, though, only a few forward-thinking types predicted what would happen if this trend were allowed to continue, and bemusingly their very name has become synonymous with rejection of healthy progress, a linguistic enantiodrome.
The people who pulled this off ended up consolidating a fair bit of wealth, relative to their peers. I mean, think about it this way: if, in the past, a society could support a hundred people doing a particular job, there’d probably be about a hundred people doing that job. Then,
when somebody invents a machine that can do that job, at least some of those hundred people are now at least temporarily out of work, and the person who owns that machine starts making all the money those people used to make, less the cost of maintaining the machine. Further, that person whose “job” is “own the machinery that does the job” now has a lot of free time, because the “job” of owning a thing doesn’t actually take any time. And because some of those people ended up spending that time and those resources on the machinery of government, inevitably government starts tilting slightly in favor of people who, as some people might say, “own the means of production.”
This isn’t an essay on economics, though this would be a great launch-point if it were. I brought all this up to get to modernism, which itself evolved as society began to normalize the changes that the Industrial Revolution brought. Things like pervasive belief in social progress as an inevitability, faith in technological progress, an embrace of novelty, and the rise of capitalism as the dominant economic theory all came about as part of the transition from feudalism to industry. Modernism was also great for things like individuality, liberty, and equality. Good things did happen as a result of this transition, and we have a lot of Really Nice Things because of these changes. More people have more stuff, live longer, and suffer less. There are, in fact, those who’ve contended that modernism was the proverbial “it” in “this is it.”
So why go “post”? First, all these grand narratives that modernism spawned brought with them some unpleasant side effects, which really need to be addressed. Second, modernism as a collection-of-ideas has a really bad habit of not stating its assumptions, and also of treating them as axiomatic, which postmodernism seeks to illuminate and challenge. Third, modernism makes a lot of assertions about the nature of authenticity and authority; specifically, it asserts that both things actually exist. Postmodernism questions both of those and explores the idea of a copy with no source.
Postmodern art specializes in the remix: collage, sampling, photomanipulation. Old tales are retold from different points of view. Previously dominant viewpoints are challenged. Subjectivity and multiplicity of viewpoints are asserted. Think Rashōmon and its embrace of the ambiguity of its central event. The question of “what happened among the samurai, the bandit, and the lady” has no single answer; it has only the admission that perception matters.
Finally, postmodernism formally establishes the idea of a text: a thing and all the assumptions that went into creating it. More importantly, it says that there’s nothing outside of it. Anything written, drawn, said, or otherwise created about other texts is, itself, a text. As such, it’s subject to the same laws of interpretation and evaluation as the texts it’s analyzing. Critique doesn’t exist in a vacuum; all perspectives are local. There is no “objective” viewpoint; there is only an infinite series of subjective frames of reference, each encoded with its own unexplored and potentially unacknowledged assumptions about what it claims is “true.” The best that an artist can hope for is not objectivity, or the illumination of some universal truth, but the full and honest understanding of one’s own biases and beliefs.
I thought the last one would be the big hurdle. I was wrong.
To tackle post-structuralism, we have to first go through structuralism. Structuralism is a multi-disciplinary methodology for understanding elements of culture by examining the roles that those elements play within a larger framework — the eponymous “structure” — of thoughts, beliefs, and ideas that people hold. Further, it asserts that these structures exist, and that studying them is a worthwhile endeavor. The practice of structuralism — because there is no “structure” without “construction” — spread across multiple disciplines in the 1950s and 1960s. Structuralism itself evolved out of the work of de Saussure and his research into semiotics. Structural linguistics, as a field, holds that there’s an underlying structure to the way signifiers acquire signifieds, and that by breaking languages down into their base units — phonetics, grammars, and syntaxes — the underlying cultural biases of those who spoke a language could be deduced. Much of his work is, at this point, fairly widely disputed, but in its passing, it informed a fair number of other disciplines, and so it’s important to at least touch ground here.
Anthropology was one of the fields that was heavily influenced by de Saussure, mostly through the work of Lévi-Strauss, who held that every culture has certain universal “deep structures” embedded within it, and that manifestations of these “deep structures” could be seen through various acts that every culture had to perform in some fashion, such as reproduction, eating, coping with death, and so on. Further, he held that — and I quote from Wikipedia so please don’t scream at me directly — “people think about the world in terms of binary opposites — such as high and low, inside and outside, person and animal, life and death—and that every culture can be understood in terms of these opposites.”
In literature, de Saussure’s work eventually evolved into semiotic literary criticism, which held that certain narrative structures were themselves universal, and that literature could be broken down into common structural forms. This paralleled Lévi-Strauss’ study of mythology, which showed remarkable similarities of form across multiple cultures despite there being no obvious connection between them.
I want to make it clear before we dive into talk of post-structuralism that all of these thoughts were showing up in the marketplace of ideas as modernism was really coming into its own. As I noted above, modernism was looking for sweeping narratives that could capture universal truths, and structuralism was a methodology built for finding universal truths in some very messy, chaotic, and often contradictory fields. And so, even as weird as a lot of these ideas may sound, they became very much in vogue in their various schools of thought, because if they turned out to be true, they would greatly reinforce a dominant cultural paradigm into which a lot of prominent thinkers had invested a lot of emotional capital. Lots of people wanted these ideas to be true, and there’s always a strong desire to justify what we want to be true.
Then a lot of folks started asking a lot of questions, from all different angles: beautiful, painful, difficult, and very important, such as the following:
- What happens to a text if we study it from the position of the observer, rather than the creator? On what assumptions did the creator rely as part of creating the text, and do they hold true for those for whom the work was intended? What assumptions does the observer possess, and how do they change the meaning of the text?
- What positions does this text think are in opposition, and which one does it think is dominant? What’s the actual relationship of these concepts to each other, and what happens if we change that relationship?
- What does the language used in this text say about the text? Is this text authoritative when it should be inquisitive? Is it submissive when it should be dominant? Does this text leave room for alternate interpretations, or does it attempt to close off other avenues of research?
- What’s the cultural, historical, and narrative significance of this text and its creator, and how do those elements play into standard interpretations of the work? What inquiries can we make into those background elements that went into the text, and how do those questions inform our opinions and interpretations?
Ultimately, “post-structuralism” is a toolbox of investigative techniques that comes with a big sticker on the lid: “Don’t Take the Dominant Metanarrative For Granted.” Inside the box are a collection of investigative tools for picking apart the cultural assumptions embedded into a text, but not every tool will be useful in every case. And, as with postmodernism above, there’s really no expectation that, in applying any of these tools, we’re actually getting closer to “truth.” There’s only an endless series of layered investigations, and an understanding that the best we can hope to achieve is an embrace of our own biases.
“But th’ buni,” I hear so many of you already interjecting. “Furry isn’t about humans! Isn’t furry as a concept innately ‘posthuman’?” And, at least on occasion, furry does get into these spaces. We do occasionally dip our toes into the ethics of chimerae, stories about child races outliving their creators, and the like. There are certainly aspects of the fandom that poke into these ideas. But when I speak of posthumanism, there’s a whole field of inquiry that furry doesn’t really try to address. Posthumanism is, like postmodernism and poststructuralism, a concept that, at its heart, is a question: “what is beyond humanism?” To answer that, we have to ask what humanism is, and here, for once, I think the Wikipedia article sums it up nicely enough that I’m just going to paste it here:
Humanism is a philosophical and ethical stance that emphasizes the value and agency of human beings, individually and collectively, and affirms their ability to improve their lives through the use of reason and ingenuity as opposed to submitting blindly to tradition and authority or sinking into cruelty and brutality.
So, what does it mean to go “beyond” that idea? Well, for starters, what is a human being? What is it that humans can do that non-human animals can’t? Is it raw intelligence? Is it the ability to know your name? Is it language use? How about tool use? Perhaps we might like to say that there’s some essential set of traits that “humanity” or at least “persons” possess that “non-humans” don’t, but then what happens if a human doesn’t possess that trait? Can we comfortably and reliably identify human beings?
Current political climate suggests no, and that’s all I’m going to say about that.
Extending this idea in the other direction, what about cyborgs? Sure, we can talk about the sexy cases, but I’m talking about something a little more insidious and complicated. I’m talking about insulin pumps, dialysis machines, and prosthetics. Because here, we start to edge into the frightening and glorious possibility of artifice exceeding human potential. We’re just beginning to explore the possibility of technological replacements outperforming the baseline human design. There will come a point, in the not-too-distant future, when technology may well outstrip what we’re capable of doing purely by biology and training. When that happens, Moloch awaits. When a technology gives those who possess it an edge over those who don’t, it can be argued that it’s a moral failure to deliberately refuse that technology. On the flip side, almost everyone intuitively sees the Red Queen’s race that awaits that possibility, as well as the huge potential for ethical disasters.
On the second part, if humanism is an affirmation of people’s potential to improve their lives, then post-humanism is the exploration of what happens when we actually do that. Technological advances have the potential to eliminate work; what do people do with their free time, and how do they make their living? We’re starting to rediscover what indigenous cultures have known and practiced for centuries about psychedelics and mental health; what happens to people if we’re empowered to take ownership of our own thoughts? What happens to people when suffering is optional? What happens when we’ve reached the tipping point that we no longer need to simply avoid pain and can instead start seeking pleasure?
So far, so good. Time for some fire! This really ought to be its own article, but we’ll do it live.
Humanity, as a species, doesn’t really show much in the way of sexual dimorphism. I mean, sure, we can talk about breasts as chest-buttocks, and we can talk about the relative advantages and disadvantages of sexual reproduction compared to budding or shoots or the like, but on the whole, human beings don’t really show broad variation in physical expression when it comes to the various ways in which the six different chromosomal sexes express themselves physically. Certainly, this is true when compared to some other animals, especially when you look at other sex-determination systems. However, “not much” isn’t the same as “nothing,” and those small variations have been enough to create cultures that treat those who broadly fall into one grouping very differently from those who fall into another, and to think of those two groups as exclusive and oppositional.
The group that we’ll identify as “women” — because that’s what most of us have said we wanted to be called — has gone through various “waves” of expressing desire to be treated as people in the same way that those who haven’t been identified as women have been treated as people. Around the time of the French Revolution, several people who identified themselves as women noted that the legal and political framework that Rousseau had created ostensibly in the name of liberté et égalité excluded les sœurs, making political space for just les frères, and they spoke up to try to get some resolution on the contradiction. At the time, most of the focus was on the legal framework of female identity, including subjects like suffrage and property rights. It wouldn’t be until around the 1960s that folks who identified as women would start to push in an organized fashion for equality in a lot more spaces traditionally reserved for people, like the workplace and the bedroom.
However, late into this cycle, and early into the next, the folks who identified as women started to run into this one uncomfortable and pervasive problem, especially as modernism began to give way to postmodernism and certain technological advancements were just starting to creep into cultural awareness: what’s a woman, and what does it mean to be a woman?
Just as in the question of humanism above, there’s no really good definition of “what’s a woman” that includes everybody who wants to be in the group for the right reasons and excludes both everyone who wants to be left out of the group and everybody who wants to be in the group for the wrong reasons. We can’t even agree on what the right reasons are. Bioessentialists aside, the lines are a mess. Chromosomes can’t guarantee it. Hormones can’t guarantee it. The presence or absence of breasts can’t do it. The presence or absence of a vagina can’t do it. Being or not being in the social role of women can’t do it. There just isn’t a good tool for saying who is and who isn’t part of the category “women,” and that’s a problem if you’re trying to use “women” as some kind of identifier for who gets to be in the club that you’re forming, or if you’re trying to keep women out.
Well, that’s not true. There’s one utterly reliable method for figuring out who’s a woman: positive self-affirmation. Want to be a woman? Say you’re a woman. It’s that easy.
And yet… I can hear the puckerclenching from here. Every last one of you has at least one reason why that won’t work. You’re wrong. Why? Because whatever it is you’re about to say, it’s making one of two mistakes:
- It’s making some essentialist assertion about womanhood that you can’t actually support if you carry your thoughts out to their logical conclusion.
- It’s confusing sex and gender.
I’ve covered the former; for more information, please reread. For the latter, I honestly blame a lot of emergent language coming out of the trans “community” as a whole for this mess. When we talked about sex changes, sure, people tittered behind their hands, but at least everyone knew what we were talking about. Then we changed it to sex reassignment surgery, which wasn’t a bad term, and I’ve seen “genital reconstruction surgery,” which I thought was actually pretty close to optimal, linguistically. Then somehow GRS evolved into “gender reassignment surgery,” as though social role could go under the knife. Lately, I’ve been hearing “gender confirmation surgery,” which… I can see what people are trying to convey with this one, but I can’t help but hear it as though they expect that changing a part of the body that social custom demands we hide could alter a social role, which again blurs the sex-gender dichotomy. I think we need to take far greater care to avoid reinforcing bioessentialist positions in our language.
Fundamentally, the problem of trying to answer the question of “what is a woman,” and really of being any gender, is that “gender” is a one-way aggregation function of qualia related to the embodiment of self, enmeshed with acculturated social roles. More importantly, that mapping is intensely — inalienably — personal. When I say that X is evidence of being “a woman,” I’m not saying what anyone else is saying when they say that X is evidence of being “a woman.” I’m actually saying that X is evidence of being “a woman_buni”. When the President of Elbonia says it, they’re saying that X is evidence of being “a woman_President-of-Elbonia”. The problem is that we don’t vocalize the subscripts, and because we don’t, we all think we’re using common language when in fact we’re not, so we assume that traits that are pervasive are in fact universal, and traits which are uncommon are in fact absent. Then, when we run into people who lay claim to labels that don’t seem to match our internal aggregation functions, we have this terrible habit of trying to enforce those roles. We confuse map and territory.
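If it helps to see the “unspoken subscripts” idea in concrete form, here’s a minimal sketch of it. Everything in it is hypothetical — the trait names, the observers, and their criteria are invented for illustration — but it shows the structural point: two observers apply the same word via different private aggregation functions, so they can disagree about the very same person without ever noticing that they’re not speaking a common language.

```python
# Illustrative sketch only: each observer carries a private, one-way
# aggregation function from observed traits to a label. The "subscript"
# is which observer's function is being applied -- and it goes unspoken.

def label_buni(traits: set[str]) -> bool:
    # One observer's private criteria for "woman" (purely hypothetical)
    return "self-identifies-as-woman" in traits

def label_president_of_elbonia(traits: set[str]) -> bool:
    # A different observer's private criteria (also hypothetical)
    return "social-role-woman" in traits and "chromosomes-xx" in traits

# The same person, observed identically by both parties:
person = {"self-identifies-as-woman", "social-role-woman"}

# Same word, same input, different subscripted functions, different answers:
print(label_buni(person))                  # True
print(label_president_of_elbonia(person))  # False
```

The disagreement here isn’t about the person at all; it’s about two unvocalized functions being mistaken for one shared definition — the map-versus-territory confusion the paragraph above describes.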
At its core, the problem of gender — any, every, and all genders — is that we think it’s connotative and it isn’t. This wouldn’t necessarily have to be a problem, except that in our efforts to act as if gender conveyed meaning, we end up hurting as many — possibly more — people than we help. Not just people who identify as women, but people who identify as men, people who change how they identify, and people who don’t identify as either suffer immense psychological, social, and physical harm, all because of a pervasive societal misconception that “gender has extrinsic meaning.” People who express their gender, whatever it is, no matter how strange it may sound, are reifying facticities about themselves and their aggregation-functions, which are basically unrelated to anything going on in your head; any connection between what they say and your own perception of their gender is coincidental at best. Refusing to acknowledge those statements is, socially and psychologically, akin to refusing to use somebody’s chosen name. When we learn that we’re doing things that harm people, the right answer has never been to double down on that thing and tell the other party to deal with it. The right answer is to apologize, and then to fix it. Gender as a map of social role to biology was useful in its day, just like impetus theory and the Bohr model were when they were developed. We need a new model of gender, one that takes into account quantum entanglement and acceleration.
I have a feeling that this one’s going to be controversial in my own community, but I’m going to include it anyway, because I think there’s a key element here that all of the previous conversations can’t actually quite cover.

Postmodernism and post-structuralism both encourage heavy use of things like deconstruction and hermeneutics as tools of linguistic discovery and literary analysis. These are powerful tools, but they carry with them a risk: by encouraging people to inquire not only into the biases and motives of others, but also their own unstated assumptions in order to more fully embrace them, there’s a constant risk of emotional over-exposure, of having to dig too deeply into one’s own unstated assumptions in ways that create vulnerability. In challenging the edifices of objectivity and authority, it’s very easy to create an environment in which there is no “expert.” In such a framework, if everyone’s opinions are equally valid, anyone’s strongly held opinions are open to challenge. The path to avoiding being challenged, then, is to reject holding onto any beliefs strongly enough to warrant being attacked for them. Or, if one must hold onto such ideas, to coat them in a thick patina of disaffection or cheerful hypocrisy to deliberately avoid having to perform such difficult emotional labor.
Posthumanism and postgenderism, meanwhile, rely heavily on personal expression as the only reality that matters. There’s an explicit push in both philosophies to reject any narrative that isn’t from an individual about oneself. Indeed, any statement one tries to make about another’s facticities is necessarily limited to one’s perceptive accuracy, which is definitionally only as good as one’s perceptions are. Given what we spent the first two sections above discussing, the odds are not in our favor.
In other words, postmodernism and post-structuralism as cultural values tend to promote an unhealthy level of ironic detachment. Conversely, posthumanism and postgenderism almost demand a certain degree of authenticity as an acceptance of lived experience and personal truth. The challenge, then, is how to reconcile these seeming opposites.
Enter postirony: a construct meant to express the idea of reverting from irony to a new sincerity, but more importantly, of treating the absurd at face value. Postirony deliberately blurs the distinction between detachment and engagement by finding warmth and comfort in things that should raise our emotional defenses. One good example would be Danger 5, whose faithful reproduction of hokey 1960s television, complete with visible boom mics and paper-thin plot and characterization, captures both the sincere appreciation of the media of the day and the snarky repudiation of it, all at once. Through postirony, we accept the joys of earnest emotional engagement with our values while maintaining a degree of awareness that they lack any necessary objectivity, that we are creatures of limited perspective.
I think that wabi-sabi actually does a good job of expressing the post-ironic impulse as an aesthetic value. This artistic encapsulation of the Buddhist principles of “nothing lasts, nothing is finished, and nothing is perfect” captures the sort of cheepnis that encourages appreciation of what can be and has been accomplished, while also reminding the viewer of the limitations of all human endeavor, including the viewer’s. This complex, multilayered engagement is exactly the sort of Hegelian synthesis needed to bring all the different “posts” together into a common collective.
And again, here at the end, the seemingly divergent views come together into a common understanding: self-aware expressions of personal authenticity through deep inquiry into the assumptions that underlie our history, culture, and society; coupled with an acceptance that our views are necessarily incomplete and subject to change as our perceptions of self change. We ask challenging, complicated, and often irreconcilable questions about our own existence from a multiplicity of narrative perspectives, and we contextualize what answers we can produce according to those viewpoints. We understand that we are always and forever bound to see the world through our own eyes and that only we ourselves can answer many of these questions, and only ever for ourselves, if they can be answered at all. We accept what answers we find as long as we remain convinced of their value, and we seek new ones when the old ones no longer serve us.
Next time, we’ll put it all together.