
Shock Totem, a horror zine, held a flash fiction contest early last month. One thousand words based on a photograph of a rusting roller coaster in the mist. Each participant had one week to submit their story. Then all submissions were anonymized and posted for the participants to read, vote on, and provide some feedback. After a month, the votes were tallied and the feedback posted, along with the winners.

Now, I didn’t really get my hopes up too high. Several of the submissions were truly great, so I knew the competition would be rough. But after reading all 45 or so of the other entries, I felt my story had an original take on the prompt that maybe would set it apart from the others. When the results came back and I didn’t win, I wasn’t that surprised. I was happy to get a bunch of feedback on my writing, though. But then I looked at it.

And I got confused.

Instead of what I expected to see (I had realized after submission there was a minor plot hole), there was a lot of… well, noise. No one complained about the plot hole. Instead people complained about things I hadn’t thought would be unclear. I felt that for every specific bit of praise there was a corresponding criticism regarding the exact same element. “Unpredictable!” Followed immediately by “Telegraphed ending.” And, “Captured the characters,” matched with, “I didn’t care about these people.”

I finished going through it all and felt deep frustration. I was discouraged. Something must be wrong with my writing, I concluded, or the feedback wouldn’t be so scattered. Maybe if it was a mix between, “I loved this!” and “It wasn’t for me,” I could understand. But this felt like peers who couldn’t agree on whether the story was even good or not. If they can’t sort it out, how can a reading audience be expected to?

Angry and unhappy, I went to bed.

When I woke up, I got some good news: a story I had been getting a lot of form rejections for was finally accepted. What’s more, it was from a paying market. Not a pro-level market, but paying nonetheless. The welcome confidence boost cooled me off. Instead of grumping into “Woe is me” territory, I started thinking.

And ultimately, I did what I always do when I have information but no conclusion. I went into nerd mode.

In order to tease out the truth behind the feedback from Shock Totem’s contest participants, I began categorizing the specific bits of feedback data into a spreadsheet. I broke them down into two categories first: simply positive and negative.

I came up with 62 opinions, and they were split exactly evenly: 31 positive, 31 negative. Even as I copied fragments of the impressions into columns, I began to see a few patterns emerge. In a way, this confirmed my impression that the overall sentiment was mixed. But I wanted to know what was really being said, so I started to subdivide, and quickly realized most of the line items fell easily into categories.
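The spreadsheet work above amounts to a simple two-pass tally: label each fragment with a sentiment, then with a category, and count. A minimal sketch in Python, using made-up sample fragments and illustrative category labels (not the actual contest data):

```python
from collections import Counter

# Hypothetical hand-labeled feedback fragments: (sentiment, category).
# The labels and examples here are illustrative, not the real entries.
feedback = [
    ("positive", "concept"),     # "Unpredictable!"
    ("negative", "ending"),      # "Telegraphed ending."
    ("positive", "writing"),     # "Captured the characters"
    ("negative", "characters"),  # "I didn't care about these people."
    ("negative", "ending"),      # another predictability complaint
]

# First pass: overall positive/negative split.
sentiment_counts = Counter(sentiment for sentiment, _ in feedback)

# Second pass: where do the remarks actually cluster?
category_counts = Counter(category for _, category in feedback)

print(sentiment_counts)
print(category_counts)
```

Seen this way, a "mixed" pile of remarks resolves into a sentiment split plus a handful of recurring themes, which is exactly the distillation described below.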

On the positive side, nearly all sentiments boiled down to one of two broad notions:

  1. The concept was good, or used the prompt in a novel or interesting way.
  2. The writing was strong, with vivid descriptions, nice pacing, and so on.

Distilled to its essence, the positive feedback was saying it was a good idea, executed pretty well.

The negative feedback was less uniform. But still there was room to distill the criticisms. They amounted to:

  1. The ending was too easy to predict.
  2. The writing wasn’t without issues. In particular, it had a few confusing sections, and there were a number of complaints about a lack of plot or development.
  3. The characters weren’t easy to relate to nor were their motivations clear.

Broken down this way, suddenly the feedback was clear. The story had a good idea and mostly good execution, but the characters needed more focus and the ending was too easy to see coming. There were still some direct disagreements. For example, one or two people thought the conclusion was shocking (as intended), while eight or nine felt it was obvious. But it was not the whirlwind of contradictions I had imagined when I first read through it. And given the wide margin on the “it was predictable” side, it’s fair to say the ending needed work. Even where there wasn’t a clear majority and minority opinion, such as whether the characters were well-rounded enough, the critique was still worth acting on: it’s hard to imagine characters ever being too well rounded.

The main insight I gained was to understand that the core aspects that earned praise—in this case the good use of the prompt and the solid exposition—were rarely if ever contradicted. There might be some disagreement about what was wrong with it, but it seems most people agreed about what I did right.

And the overall lesson I learned was this: reading scattered feedback can be misleading. Your brain forms correlations and leaps to potentially false conclusions that are not necessarily borne out by a detailed analysis. Obviously it’s impractical to do this kind of thing on a scale much larger than what I was dealing with, and it’s likely the reviews will be coming from much more distributed sources. But I think it’s worth remembering that a human tendency is to skim and downplay positive feedback, particularly when it’s not gushing or glowing. Likewise, negative opinions are often amplified and can feel like they offset any positive commentary.

So, armed with these lessons, I’ll head back and rework this story, thankful for the feedback. It was great all along; I just didn’t know how to analyze it at first.
