Do kids have a fundamental sense of fairness?

Experiments show that this quality often emerges by the age of 12 months.

This article was originally published by Scientific American.

By: Katherine McAuliffe, Peter R. Blake, and Felix Warneken/Scientific American

Children have a reputation for selfishness. Picture a traditional morning-after-Halloween scene: A child is hunched over a huge mound of collected candy while their parent stands by begging them to share their spoils with a younger, less fortunate sibling. The frustrated parent in this scene embodies the common notion that the only way to get children to be fair is to forcibly extract it from them, like blood from a stone.

After studying children’s fairness behavior for nearly a decade, we argue that this reputation is, well, unfair.

We travel to public spaces in different cities and ask children to play a simple game: Two children who do not know each other are paired up and given an unfair distribution of candy. One child gets four candies, the other gets one candy. Here’s where things get interesting. One of the two children — the decider — can accept or reject the allocation. If the decider accepts, both children get their candy. If the decider rejects, both children get nothing. Imagine that, like the Halloween scenario, the child in power gets four and their partner gets one. What will they do?

If you are like most parents watching their children play our game, you probably think the decider will happily accept the four, creating a stark inequality with the peer. Children only focus on getting more for themselves, right? To the surprise and delight of many an unsuspecting parent, children — at least older children — frequently reject this unfair advantage. They are willing to sacrifice their own rewards to prevent someone else from getting the short end of the stick. Getting nothing seems better than getting more than a peer, even a child whom they have just met.

The act of self-sacrifice in the name of fairness is indeed surprising. But more than that, it flies in the face of our intuitions about where fairness comes from in our species. There is a commonly held belief that humans are fundamentally selfish agents and fairness is a construct designed to help us override our selfish instincts. Not only this, but the idea really seems to be that fairness doesn’t come naturally, which is why we need institutions like the justice system to make sure that fairness prevails. Psychologists and economists have begun to gradually chip away at this notion, showing that people are actually pretty fair even when they can get away with selfishness.

But this still doesn’t tell us where fairness comes from. Is fairness something that must be learned via extensive experience? Through explicit teaching from adults? To answer this question, we need to look to children. Indeed, a suite of recent studies with children suggests fairness is not something that takes a long time to develop or that must be enforced through formal principles and institutions of justice. Rather, fairness is an integral part of our developing understanding of how the social world operates and, perhaps more surprisingly, it guides children’s behavior from very early on.

Indeed, children apply a strong sense of fairness not only to themselves — they also stand up for others. We invited children to play a different game in which they learn about a decider who selfishly wanted to keep all candies for themselves, refusing to share with another peer. Our child participant then faces a choice: Do they stand by and do nothing or do they get involved and prevent the injustice? To make it especially difficult, children must pay a cost for intervening — they have to give up some of their own candy to prevent unfairness. Nevertheless, children regularly intervene, choosing to pay so they can prevent the selfish child from getting away with unfair behavior. Together, these findings show children hold themselves and others to high standards of fairness.

We are now sitting on a mountain of evidence from our studies as well as those conducted by others that suggests fair behavior has deep roots in development. Infants as young as 12 months expect resources to be divided equally between two characters in a scene. By preschool, children will protest getting less than peers, even paying to prevent the peer from getting more. As children get older, they are willing to punish those who have been unfair both when they are the victims of unfairness as well as when they witness someone else being treated unfairly. Older still, children show what we described above: They would rather receive nothing than receive more than a peer.

More recently, many developmental psychologists, including our team, have begun studying these behaviors across different cultures, asking whether children everywhere show a similar developmental pattern. What we have found is that certain aspects of fairness appear to be universal. For example, children everywhere seem to dislike getting less than a peer. Other forms of fairness, however, appear to be more culturally variable, perhaps shaped by local customs.

Of course, a mature sense of fairness consists of more than just reactions to inequality and, indeed, we see more sophisticated concepts of fairness in children as well. One bedrock of fairness is that you should share the spoils of your joint labor—equal work, equal pay. Children as young as three years are sensitive to this: When they have to work toward getting a toy or a treat, they are more likely to share the spoils equally with a peer co-worker than when each one worked on the task by themselves. They even track how hard each person worked and reward the one who worked more accordingly.

The bottom line here is that children, even young ones, show remarkable sophistication not just in their understanding of and conformity to norms of fairness but also in their ability to enforce fairness in others and to flexibly tune fairness to different situations. These exciting developments dovetail beautifully with work showing that adults are often fair even when they could be selfish, and suggest we need to overhaul the notion that humans are fundamentally out for themselves at the expense of others. Instead, we should adopt the idea that fairness is a key part of our developing minds from as early as they can be studied.

So, sure, children are selfish sometimes. We should recognize, however, that just like in adults, alongside their impetus for self-maximization runs a deep and maturing concern for fairness — not just for themselves but for others as well.


What do I owe my mother? It’s not selfish or childish to refuse to forgive and forget


 


I was on the beach when news of my mother’s stroke reached me. Did I really have to cut my vacation short?

By: Anna March/Salon

I was in Hawaii, enjoying the first days of an annual post-Christmas ritual, a three-week trip away from the cold East Coast. In the mornings, I swam in the balmy Pacific while my partner, Adam, slept, followed by breakfast together: pineapple-orange-guava juice, macadamia nut pancakes. One January morning, as I emerged from my swim, my smile faded when I spotted Adam parked just off the beach, his face tense. “Your mother called six times.”

It won’t be that my grandmother has died, I thought. My mother had other people to get attention from in that case: siblings, family friends. This was something to do with her — something she wanted an audience for other than her husband of twenty-eight years, Bill, who’d probably already rolled his eyes and continued with his day. I, her only child, was It — vacationing or not. I played the first voicemail. “I’m at the E.R. I think I’ve had a stroke!”

I turned to Adam. “I don’t have to go home, do I? I don’t. Please, tell me that.” I knew my words were selfish. But I also knew this: I’ve earned the right.

***

Forty-six years before, at nineteen, my mother became pregnant during a fling with her older, married boss. Her boyfriend, Martin, was serving overseas. She wanted to abort me — she’s told me this — but abortion was illegal, dangerous, and especially difficult for a Catholic teen living with her parents. She told Martin he’d gotten her pregnant. Even though the timing of my birth made the lie obvious, he married her. They divorced when I was six but he abused me until I was nine. I found out in my twenties that I wasn’t his daughter, when I called him for information for a medical form. He said, “You won’t need that from me. I’m not your father.” She had never told me.

I played the second and third voicemails. With each one, my mother was more exuberant as, I assumed, test results came in: “I had a stroke! I did!” I handed the phone to Adam. He’d tell me if I was being a jerk. He listened, shaking his head, said, “I swear, she’s got a personality disorder. Who’s happy they had a stroke?”

I felt relieved when Adam confirmed that my skepticism and annoyance were not just me being the “ungrateful brat” my mother had called me when I was ten or the “mean little bitch” at fifteen, but instead a valid reaction. I played the remaining messages and sat down on the sea wall, first to face the phone calls to find out what actually happened, and then to perform the harder part: deciding if I needed to return to frozen Delaware and my mother or stay in Hawaii.

***

My mother didn’t want me, but she still was my mother, dropping out of college to give birth. She worked two jobs to feed and clothe me, paid my Catholic school tuition until eleventh grade, bought me sports lessons and camp, took me to doctors, gave me birthday parties. She didn’t abandon me. She raised me to be a feminist and an activist and I was grateful for that. Even after I moved out as a teen, she’d sometimes help me financially.

I’m not positive she knew of Martin’s abuse; when I told her about it fifteen years later, she said, “I didn’t know.” Maybe not. I’m convinced she wouldn’t have confronted him anyway. He could have exposed my paternity (something she still doesn’t readily admit).

After their divorce, my mother left most of my care to her parents, Auggie and Martha, who lived nearby in D.C., while she worked and often socialized afterward. I went to them after school and stayed through dinner, then my mother picked me up and returned me at dawn. My grandparents drove my car pool. I spent Sundays with them — church, cooking, family dinner. In many ways, my grandparents’ generous love and stern guidance saved me. Just before I started ninth grade, they retired to a beach town in Delaware.

Alone together, my mother alternately screamed at or ignored me. She erupted over nothing. If I fought back, she called her family or Martin and told them I was “out of control.” After one fight, she sent me to live with my twenty-five-year-old uncle and his girlfriend in their apartment. My commute to school was 90 minutes by bus to a train to the car pool of a friend, who took me out of pity.

I felt miserable so far from school and friends. Meanwhile, my mother left for a month in Greece, a Christmas present from a boyfriend. When she returned, I went home for junior year, determined to stay out of her way. I worked thirty-two hours a week at a grocery store, paying my own school tuition. My grades dropped, and when I got pneumonia they dropped more. Now I was the “problem student” she’d called me for years. Around that time, she began pushing me to contact Martin and “work on getting your college tuition out of him,” adding, “He’s your problem now, not mine.”

I stuck around until October of my senior year and then escaped to my boyfriend, David, who’d been transferred by Foot Locker from D.C. to a mall in Virginia Beach. With no first and last months’ rent to get an apartment, he lived in a dilapidated motel. I lived with him, chain-smoking and watching TV while he worked, too depressed to look for a job. We lived on his salary, $280 a week, and some weeks, after lodging and gas and our payday splurge at McDonald’s, we had $50, or $3.50 a day, for food. I ate peanut butter crackers and Coke, 7-Eleven hot dogs or soup. A dozen biscuits from KFC (for $1.85) fed us for two days. When we couldn’t pay the motel, they’d keep our stuff and lock us out. We slept in our car, and once, in the back of Foot Locker, using packs of socks for pillows.

I loved David. He, too, saved me, although by then I’d internalized my mother’s verdict — I was hopeless, messed up, a lost cause. For two decades this was a self-fulfilling prophecy. I graduated from high school on time (after returning to D.C. in January, I passed the GED in the spring) and completed two years of college, going when I could afford to. But by my mid-thirties I had two failed marriages and two failed businesses (a nonprofit arts organization and nonprofit consulting business both folded). Lies, bad health, heartbreak . . . unhappy times.

***

Sitting on the wall in Hawaii, I remembered these experiences and situations I’d replayed in my mind or in therapy. And then this: When I was thirty-seven, I left D.C. to live with my grandmother, by then eighty-two, at her Delaware beach house. Auggie had died. I felt deep love for Martha and also that I owed her.

For two and a half years I cooked for her, kept house, scheduled her doctors, drove her to church and hair appointments. We tried out lipsticks and listened to the Andrews Sisters; I learned to poach her eggs just right. It was both wonderful — the daily kindness and love, the chance to step back from my life and just think — and one of the hardest things I’d ever done; I hadn’t been prepared to bathe her and wipe the intimate folds of her skin, to help her when she soiled herself. But I did these things with love and even pride. Martha had raised six children before half raising me; she’d nursed her father for his final six years. I was happy to be able to give her what she’d given others.

Eventually, I moved to my own place at the beach. Living cheaply and purposefully, I gradually rebuilt my life. I began writing seriously and running, adopted an emaciated old dog, and nursed him back to health.

I’d seen my mother once a week when living with my grandmother. After that, we shared only an occasional meal. I still felt some responsibility toward her until a year before the Hawaii trip when I’d traveled to Europe with her, Bill, and my grandmother. I knew it might be my gran’s last chance to get away, and I thought — optimistically — that we might all have fun together. Instead, my mother made a huge scene, screaming at Bill and me — wildly, appallingly — in the line to board a ship. For the rest of the trip, I didn’t eat with or talk to her.

On returning, a therapist helped me understand I would probably never make my mother proud or happy — never even really matter to her — because she wasn’t capable of that, or of treating me with kindness and respect. I grieved. After grieving, I set firm boundaries, which I conveyed by e-mail: I would no longer talk to her on the phone; for pressing business she could e-mail; I’d see her for two hours twice a year — once in May around her birthday (though not that May; it was too soon), and once around year’s end.

She e-mailed back: Could we meet in person to discuss?

No, I said.

Ten months later — near Christmas, just before the trip to Hawaii — I visited her at my aunt’s condo in D.C. The house was full of relatives so, happily, there wasn’t time alone with her. There was talk of the meal, the weather. I brought her a gift; she thanked me. As always, she did most of the talking, never asking how I was or what I’d done the past months; but neither did she scream at or insult me. Improvement already! I’d left for Oahu feeling victorious.

But now, staring at the vast blue sea, the toasted white sand, I wondered: Has my mother really had a stroke? And if so, what happens now — to her and to me?

Here in middle age, many friends are grappling with what to do with and for aging parents, ill parents. What do we owe them if they were good to us? How about if they weren’t, but are now? What do we owe a parent who’s obnoxious but not entirely without merit, who gets a D but not quite an F? Who’s obnoxious without merit — solid F — but because of a mental health issue? Even the mighty Tony Soprano had panic attacks when he put his narcissistic mother in a pricy nursing home. How do we find the line between what our parents need and what we need in order to proceed with our lives? Refusing to deal with an aging parent, refusing to be available instead of reserved, makes you selfish or childish or worse, doesn’t it?

Or does it?

When I veer toward guilt about my mother, it always turns out to cover sadness. I would like to be able to do for her what I did for my grandmother: to take care of her in old age, even have her and Bill come live with me. But that’s in a world where my mother is a different person, I remind myself.

I can’t reach back, and I can’t know if she’s going to change, as some mothers do. (“We were all bad mothers to some extent,” a therapist once told me. “What’s important is how we deal with that in our child’s adulthood. Do we apologize for it? Can we relate to our adult children with love and respect?”) No and no seemed to be the answer in my mother’s case. At sixty-six, unwilling to examine her part in problems with me or anyone else, hadn’t she made her bed? It took me decades to pull my life out of the horror show it had become — I won’t say because of my mother, but certainly my childhood, from birth to adulthood, figured in.

After talking to my mother, then a nurse, then Bill (who, familiar with her hypochondria, admitted he hadn’t rushed to the hospital when she called; he’d waited for confirmation that she was actually sick); after relaying it all to Adam; after remembering what my various therapists taught me and what I vowed in Europe — a smart vow, made for the right reasons — I told Bill I’d decided not to return early.

When I returned, I did go visit her, and she told me she’d like to retire but worried she and Bill couldn’t afford it. Years ago, I’d told her I was trying to earn enough so that, when she was ready to retire, I could send her $1,000 a month, but in the meantime I hoped she and Bill would map out how they’d otherwise support themselves. I suggested selling her house and buying a condo; finding part-time work; making a plan to cut back on expenses.

So when she mentioned retirement, I didn’t respond. Although she hadn’t done any planning, I did want to help. Giving her money was something I could now do — not instead of spending time with her, because I wouldn’t do that regardless, but because I wanted to.

The stroke turned out to be minor. “That’s great!” I said when I saw her. “Sounds like you’ll be fine.”

“My speech is slurred.”

“Really? You sound perfect to me.”

Bill nodded. “To me, too.”

“Well, I’m not,” she said. “I’m slurring my words. Horribly.”

In keeping with my vow not to engage, I wrote her a check for $12,000. “I promised you this. I hope it will help you retire. I’ll do this every year, as long as I can afford to. But I hope you’ll do what we discussed to set yourself up for the future, because I won’t be able to see you more often or help you daily if something happens.”

She didn’t respond.

“I would love to know you have a plan, Mom. I could hire a social worker to help you make one.”

“I have a plan. I’m going to die before Bill.”

I stifled my sigh. “Uh-huh. And if that doesn’t happen? Or if he outlives you but can’t care for you?”

She shook her head, turned away.

“Okay, then,” I said in my calmest voice. “I’ll see you in May. Good luck with your recovery.”

***

Still, I struggled: no matter how little I saw my mother, I felt like I was turning my back — and while the rational part of me, the therapized part, knew this was okay, the emotional part still wondered: Is this right?

When I thought about my mother and felt myself inch toward that sad, fearful place, I pulled back to reflect on painful truths: She does not engage in appropriate ways. We don’t have a meaningful relationship. I’ve been emotionally kind without much emotional reciprocation. I owed my mother nothing. Maybe some day she would change, but I couldn’t see it happening any time soon.

In December, a week before Adam and I were scheduled to head back to Hawaii, almost a year after her stroke, another emergency: a possible heart attack. Bill refused to come home from a charity food drive, so she’d called 911. The EMTs called me, probably directed by her.

I stopped packing, called the dog walker, and headed to the hospital.

No heart attack had occurred. Hobbling around on a cane, my mother complained of vision loss and pain. Perhaps this was honest. I couldn’t know. After hours of forms, examinations, and drama, a nurse asked if my mother was under psychiatric care because it all seemed “an elaborate acting out . . . not even an anxiety attack.”

On the way home, she chattered away and I kept silent. Apropos of nothing, she announced, “I was an attentive mother.” I wasn’t sure whether to laugh or howl with rage, but I kept my voice measured. “No. You weren’t.”

She was quiet.

“Good-bye, Mom,” I said, getting out of the car. “I won’t see you again until spring.”

She looked at me, at last silent, and then we stiffly hugged a terse good-bye. A week later, I headed off into the sunny, beckoning second half of my life.

_______________________________________

This essay is adapted from “The Bitch is Back: Older, Wiser, and (Getting) Happier,” edited by Cathi Hanauer, to be published Sept. 27 by William Morrow, an imprint of HarperCollins.

__________________________________________________

Anna March’s writing has appeared in The New York Times’ Modern Love column, New York Magazine, Tin House, The Rumpus and frequently here in Salon. Her essay collection, “Feminist Killjoy,” and novel are forthcoming. Follow her on Twitter @annamarch or learn more about her at annamarch.com.

Caught in a bad bromance? We should be encouraging man hugs, not mocking them

 China’s Cao Yuan and Qin Kai; Britain’s Jack Laugher and Chris Mears hug after the men’s synchronized 3-meter springboard diving final at the 2016 Summer Olympics, Aug. 10, 2016. (Credit: AP/Wong Maye-E)

The reaction to a passionate embrace between male Olympic divers illustrates the ongoing stigma of male intimacy

By: Nico Lang/Salon

What’s the matter with a hug? Nothing, unless it’s between two men.

Last week, the U.K.’s Daily Mail raised its proverbial eyebrow at an embrace between Chris Mears and Jack Laugher, a pair of British Olympians who had just won the gold medal for the 3-meter synchronized dive. In a photo of the two, Laugher appears to pounce on his diving partner, wrapping his arms around Mears’ neck. The moment is sweet and a bit silly, the kind of rapturous gesture that doesn’t seem out of the question when someone has earned the highest possible honor in his sport.

But to the Daily Mail, there was something questionable — even girly — about their prolonged cuddle, contrasting their elation with a more subdued exchange from the Chinese team. “Britain’s victorious synchronised divers hug for joy after winning gold,” the Mail wrote, “while China’s bronze medalists settle for a manly pat on the back.”

Twitter users decried the gaffe as distasteful, but the Mail is hardly the only outlet to express discomfort at a display of affection between two men, even heterosexual ones.

President Barack Obama’s farewell hug to Jay Carney, his former press secretary, became an instant meme in 2014. BuzzFeed dubbed it “the most awkward hug in White House history.” But what made the moment an object of fascination is its seeming novelty. Most Western men do not exchange hugs as a social custom, and they especially do not exchange prolonged ones. Kory Floyd, a researcher at Arizona State University, suggests that the maximum length for a male-on-male embrace is one second, and anything longer is coded as being romantic or sexual in nature.

To put it bluntly, men shy away from hugging one another not because it’s “unmanly” but because they’re worried about being perceived as gay. Following last week’s controversy, Laugher’s girlfriend went so far as to make a public statement about her boyfriend’s sexuality. (After all, the Olympic partners do share an apartment.)

The stigma surrounding male-to-male intimacy is a relatively recent phenomenon, a product of the lavender scare of the 1950s. Sen. Joseph McCarthy, who some speculate was a closet homosexual himself, led a purge of gay employees from government offices, as it was believed that queer people were inherently communist sympathizers. In McCarthy’s America, gay people became a pathologized class of individuals, with homosexuality classified as a mental illness by the American Psychiatric Association until 1973. (In many countries, being transgender is still defined as such.)

Before the postwar era, homosexuality was on the minds of few Americans; many might have associated being gay with a biblical plague, the curse of Sodom. Gay people weren’t a real-world threat. When France decriminalized sodomy in 1791, homosexuality as a lived identity wasn’t yet conceived of; it was defined as a behavior, not a discrete way of being. The writer Marcel Proust didn’t introduce the concept into French literature — in his expansive “In Search of Lost Time” series — until 1913.

Without the cloud of anti-gay suspicion around male friendships, same-sex intimacy was extremely common in the 19th century, when it was customary for men to walk around holding hands and even sleep in one another’s beds.

The most famous example of this is President Abraham Lincoln, who shared a bunk with his best friend, Joshua Speed. Carl Sandburg, the revered American poet, was the first to suggest that there had been anything sexual between the two men. He wrote euphemistically in “Abraham Lincoln: The Prairie Years,” a 1926 biography of the 16th president, that their kinship was defined by “a streak of lavender and spots soft as May violets.” (Their possible love nest was re-created by artist Skylar Fein as a potential moment of gay history.)

Lincoln’s sexuality remains an open question, but expressions of affection that break with modern notions of homosociality were many and varied. Daniel Webster, who served as the secretary of state under Presidents William Henry Harrison and John Tyler, was known to refer to some of his male friends as “my lovely boy” in correspondence.

The best illustration of changing gender mores is in “Picturing Men: A Century of Male Relationships in Everyday American Photography,” a book by John Ibson, a California State University, Fullerton professor. His book is a treasure trove of portraits of 19th-century American masculinity, in which male subjects are pictured with their arms draped over one another, hands clasped and sitting on each other’s laps.

Men in the 19th century were, for lack of a better way to put it, all over one another — without the faintest whiff of irony.

Reviewing Ibson’s collection, Alecia Simmonds of Australia’s Daily Life wrote that the lack of similar patterns in relationships today is not merely a product of homophobia but the emergence of women in public life. “The photos in Ibson’s collection were taken during a time when life was incredibly gender segregated,” she said. “Your primary emotional identification was with people of your own gender.”

Women have taken the place of other men as the objects of that affection, as outlets for the male need for human touch and intimacy. A 1997 study from Purdue University showed that 75 percent of men relied on women — particularly their wives or girlfriends — as their sole source of close companionship. While relationships between women were often confessional, based around conversation and disclosure, the study showed that male friendships were less intimate and more driven by activity.

That kind of distancing can be seen today when two men go to the movies together. They leave an open seat between them.

These findings correlate with the overall reality of male friendships: A 2006 survey in American Sociological Review showed that adult heterosexual men have fewer friends than any other population in U.S. society. As Salon’s Lisa Wade has previously suggested, it’s not that straight men don’t want more friends. They do. “Men desire the same level and type of intimacy in their friendships as women, but they aren’t getting it,” Wade wrote.

Not having those desires met can have a deeply detrimental impact on heterosexual males. In a 2013 essay for The Good Men Project, writer Mark Greene called it “touch isolation.”

“American men can go for days or weeks at a time without touching another human being,” Greene wrote. “The implications of touch isolation for men’s health and happiness are huge.”

Gentle platonic touch is key to the early development of infants, Greene noted, adding that it continues to play an important role throughout men’s and women’s lives for their “health and emotional well being, right into old age.”

Men’s desire for intimacy from companions of the same sex is repressed at a young age, Greene said, but this is reinforced throughout their lives through stigma — the suggestion that there’s something less than masculine or abnormal about male-to-male intimacy. When two Russian men filmed themselves walking the streets of Moscow holding hands, they were repeatedly harassed by passersby — called “bitches” and “faggots” and even told to leave the country. Another man forcibly ran into them, hoping to start a fight.

Things are changing, however, along with the advent of a new generation that views homosexuality in a more positive light, thus lessening the shame around potentially being seen as gay.

In a 2014 study published in Men and Masculinities, British researchers Mark McCormack and Eric Anderson found that 98 percent of college-aged men surveyed in the U.K. had slept in the same bed with another man. And 93 percent of those surveyed said they had cuddled a male classmate. “They don’t realize this is something that older men would find shocking,” McCormack, a professor at Durham University, told The Huffington Post. “It’s older generations that think men cuddling is taboo.”

The shifting tides are evident in the embrace of friendships that might have been considered outré even just a decade ago.

Real-life best friends Ian McKellen and Patrick Stewart went on an extremely affectionate tour of New York after finishing their 2014 run in Broadway’s “Waiting for Godot” in a honeymoon that inspired a million BuzzFeed lists. Members of the former One Direction boy band were often known to affectionately kiss one another or slip their hands into each other’s butt pockets. Those public displays of affection weren’t jeered. They made the band’s female fanbase go apeshit.

It might seem like The Daily Mail’s homophobic flap is a reflection of the current cultural era, but this lingering distrust of close male friendships is increasingly behind the times. Hugging it out doesn’t just feel good. Embracing the bromance makes the world a better, cuddlier place.

Secrets of the penny candy jar: From Tootsie Rolls to Necco wafers, the real story behind every nostalgic treat


Whether you loved Milk Duds, Pixy Stix, the Circus Peanut or the Charleston Chew, these histories are sweet

By: Susan Benjamin

Excerpted from “Sweet as Sin”

One of the beauties of the penny candy store was how you collected your stash. Some candies were unwrapped, and you picked these up with a small scoop, a set of tongs, or, at the beachside shops when no one was looking, your bare hands. Others were individually wrapped. These candies were what got penny candy out of the apothecary and grocers and into mainstream shops. Wrapped and ready, these candies were labeled, sanitary, and above all, self-contained. What better way to end our penny candy search than with a few favorite wrapped selections?

TOOTSIE ROLL: AN ENIGMA WRAPPED IN A MYSTERY WRAPPED IN CHOCOLATE

The history of the Tootsie Roll began with an Austrian immigrant named Leo Hirshfield. The rumor—actually a published and respected rumor—was that Hirshfield started making his candy in a little shop in Brooklyn, New York. He named his penny candy the “Tootsie Roll” because it was a roll of toffee-like chocolate and his daughter Claire was nicknamed “Tootsie.” Later, a larger company, Stern & Staalberg, bought Hirshfield out. Somewhere along the way, Hirshfield hand-wrapped his candy so it was clean, hygienic, and could travel from one store to another without needing to be poured and weighed. Hirshfield, the immigrant candymaker, was the American dream and success story all rolled into one.

That’s the story I love, and I’ll stick to it.

But the truth is more like this: Leo Hirshfield really was an Austrian immigrant, but he was an inventor at the confectionery company Stern & Staalberg with numerous patents to his name. He worked his way up the company ladder, eventually becoming a vice president. He did invent the Tootsie Roll and likely named it for his daughter, although “Tootsie” had been a term of endearment since the early 1900s as well as a loving name for a young one’s foot. As for Hirshfield, after some wrangling with Stern & Staalberg, he either lost or left his job. He attempted to start another, but that, too, failed. The wealthy but defeated inventor went on to shoot himself in a New York hotel.

See why I like the first story better?

Other Tootsie Roll insights: The Tootsie Roll was a heat-safe chocolate that held up well all year round. Among the many candies appearing in the rations of World War II soldiers, it was so durable and dependable, soldiers used “Tootsie Roll” as another name for bullets. Stern & Staalberg later became known as the Sweets Company of America, then Tootsie Roll Industries, which it remains today.

STICKY CANDIES WRAPPED IN WAX PAPER HELD IN A BAG

Caramels are an American invention that emerged from the European caramelized sugar of the seventeenth century. They are the essence of the praline, which the French brought to Louisiana in the 1760s. The caramel came into its own in the late 1800s, around the time when Hershey started the Lancaster Caramel Company. The Encyclopedia of Food and Beverages, published in 1901, gives this definition of caramel: “Sugar and corn syrup cooked to a proper consistence in open stirring kettles, run out in thin sheets on marble slab tables and cut into squares when cooled.” That recipe was not an industry standard: Hershey, compliments of his Denver caramel-making employer, knew to substitute milk for paraffin wax. Either way, caramel earned a welcome place in candy, where, with nuts, a chocolate coating, or simply solo, it remains one of America’s favorites today.

MILK DUDS: BECAUSE IT IS ONE

Milk Duds were invented by the F. Hoffman Company of Chicago in the 1920s and later made by Holloway. This was at a time when marketing was becoming ever more sophisticated, and marketers knew that a product’s name meant everything. No more putting the candymaker’s name on the label—it worked for Hershey, the Smith Brothers, and Oliver Chase, but times were changing. The name needed zing!

But how do you give zing to a candy intended to be a perfectly round chocolate-covered caramel ball that instead sagged and dented? It wasn’t a ball. It was a dud. And that’s when someone in the company came up with a great idea. Let’s call it “Milk Chocolate Duds!” Too long? OK, then just “Milk Duds!” It’s too bad that person’s identity has been lost in the annals of history. It was the first and only time, as far as I know, that a candy was named for its liability.

Another caramel favorite with a spicy name was the Sugar Daddy, invented by Robert Welch, a chocolate salesman for the James Welch Co. The Sugar Daddy was named for the other sugar daddy, an older gentleman who obliges a younger woman—his wife, his mistress, or whoever she may be—with all the comforts his fortune can supply. Apparently, that sugar daddy originated with one Alma Spreckels. It’s the pet name she gave her considerably older husband, heir to the Spreckels sugar fortune, in 1908. Originally, the candy was called the “Papa Sucker.” We’re all glad they changed it. Can you imagine calling it “Papa Sucker” today? It’s almost too embarrassing to talk about.

TOFFEES OR TAFFY OR TURKISH TAFFY?

The Mary Jane was one of the earliest toffees, and its beginnings in Paul Revere’s former home are beyond the greatness any toffee can reasonably expect. Still, a rush of other toffees followed, such as the Bit-O-Honey, first made in Chicago in 1924, using honey instead of the standard corn syrup and sugar. It’s not clear if the name was influenced by Clarence Crane’s increasingly popular Life Saver family with the pronounced “O.”

Another old-timer is the sassy Squirrel Nut Zipper, which has one of the most perplexing names in candy history. The Squirrel Nut Company, then called the Austin T. Merrill Company, started in 1890 in Boston, Massachusetts. The business soon moved to Cambridge, Massachusetts, where the new owner, Perley G. Gerrish, sold his freshly roasted nuts throughout the Boston area by horse and carriage.

The company produced candy as well as nuts and came out with the Squirrel Nut Zipper in 1926. The name “Squirrel Nut” is for the company, obviously. The “Zipper” was an illegal Prohibition-era cocktail. Remember how the temperance crowd claimed candy would lead to alcoholism in kids? Well, the candy companies had their say, putting a humorous twist on the old adage, “If you can’t beat them, join them.”

Eventually the Squirrel Nut Company’s nuts went south to Antarctica with Admiral Richard Byrd, alongside its stateside neighbor’s NECCO Wafer. Like the wafer, the nuts were also sent out during World War II. A soldier stationed in the Philippines wrote home: “I received a Christmas box with a pack of your peanuts in it. They were the only nuts that arrived without worms.”

Today, the company, now called Squirrel Brand and Southern Style Nuts, is based in McKinney, Texas. The Zipper is still in Massachusetts, where it’s now made by none other than NECCO.

HEATH TOFFEE LAXATIVE?

In the penny candy store, the toffee found itself in places in-between its naked self and fully dressed in candy bar chocolate. One example is the Heath Toffee Bar, a candy quite different in nature from the other misbehaving Prohibition-era candies we’ve discussed. The Heath Toffee Bar was started by a school teacher, L. S. Heath, in Illinois. Heath was actually looking for a line of work for his two oldest sons when he found a small confectionery for sale.

In 1914, the shop opened selling ice cream, fountain drinks, and sweets. One thing led to another, and soon candy salesmen were hanging around Heath’s store, talking, as they do, about candy. One of them was raving about another candymaker’s toffee recipe. The Heath brothers were intrigued. I bet you know what happened next. They called it Heath Toffee.

In 1931, L. S. Heath quit his job teaching school after twenty years and joined the candy business, as did his two sons. It was the younger generation who thought up a great marketing idea: Why not sell our candies through dairymen who went from house to house with their milk, ice, and cheese? Just add Heath Toffee to the list, and customers will add it to their purchases along with other products. And, of course, they did.

But the Heath Toffee Bar was different from other bars, which initially caused confusion. First, the bar was one ounce, while others were four; this convinced consumers they were buying a penny candy and not a five-cent bar. Second, the design had a large “H” at either end, with the “eat” in small letters in the middle: HeatH. Shoppers thought the name of the company was H&H with the “eat” telling them what to do with it. A third problem was that the packaging, name aside, made it look like the laxative Ex-Lax. Salesmen weren’t sure what they were supposed to sell.

The Heath Toffee Bar took off anyway and is made by Hershey today.

THE BOSTON CHEW

Don’t be deceived by the title. We really are talking about the Charleston Chew, which is not exactly a taffy and not exactly a toffee, and to be perfectly honest, I’m not exactly sure what it is. But I can tell you this: the Charleston Chew, that dense marshmallow-taffy-toffee substance covered with chocolate, is more like the Squirrel Nut Zipper, in spirit anyway, than the Heath Toffee Bar.

It was first made in 1925 and spent most of its life in Boston, which is one Zipper connection, although most people think its name refers to Charleston, South Carolina. I imagine it has a pretty good following there. The other connection is that the Charleston Chew is tied to Prohibition, named for the dance, the Charleston, which showed up in movies with flappers dancing merrily in-between sips of the Zipper (quite possibly) and other speakeasy drinks.

While we’re discussing theater and dancing, the Fox Cross company that invented the Charleston Chew began when Donley Cross, a Shakespearean actor in San Francisco, fell from the stage, injuring his back and ending his career. The logical next step for Cross was to start a candy company with his friend, Charlie Fox. I know, it doesn’t make much sense, but that’s candy.

TAFFY DUD

The Turkish Taffy was another flop that rose in stature to become a pop-culture favorite and, after a brief hiatus, remains so today. I remember eating it as a kid and feeling the sticky sweetness warm my mouth.

Bonomo Turkish Taffy was not made by a Turkish candymaker but by Austrian immigrant Herman Herer in 1912. At the time, he was trying to create a marshmallow candy for M. Schwarz & Sons of Newark and added too many egg whites. The candy was a dud. But it got Herer thinking. He experimented with the recipe, then sold his business to M. Schwarz & Sons, who hired him back. Herer kept experimenting and finally succeeded in making the only flat taffy in the world. Its name was Turkish Taffy.

In nearby Coney Island, the Bonomo family was looking for something new to do. Albert, who really was Turkish and had immigrated to the United States in 1892, started his career selling candy from a pushcart in Coney Island. He then owned an ice cream company, where he sold ice cream from a horse-drawn covered wagon. Eventually he opened a candy and ice cream factory on the first floor of his house, living on the second floor and housing about thirty workers on the third floor.

In 1919, Bonomo’s two sons, Vic, who had just returned from World War I, and Joe, a bodybuilder and football player, joined the business. In 1936, Bonomo bought M. Schwarz & Sons and with it the Turkish Taffy, making the taffy truly Turkish. Eventually the brothers took over and ran the company until Joe left to pursue a career in Hollywood as an actor, stuntman, strongman, and health writer. Vic then ran the company on his own.

The Turkish Taffy remained a mainstay of American confections, thanks, in part, to its signature tag line “Crack it up,” and instructions on the packaging: “Crack It Up!—Hold Bar in Palm of Hand—Strike against Flat Surface—Let It Melt in Your Mouth.” Tootsie Roll eventually bought the candy and ran it into the ground. But only temporarily. The Turkish Taffy is back, now owned by a company called Bonomo. There is no relation between the company and the Bonomo family, but the taffy still tastes good.

SOFT STUFF AND PIXY STIX SWITCH AND BANANA-FLAVORED PEANUTS

Another centerpiece of the candy store was the wild flavors, colors, and textures that promised kids a culinary (of sorts) experience. The endurance of these sprightly selections has much to do with their flexibility, sometimes shifting purpose as well as packaging and taste. One example is Pixy Stix, the paper straws filled with sugar so powdery and light it practically vanishes when eaten.

Originally a drink flavoring, much like Kool-Aid, the sugar powder was made in the 1930s and called “Frutola.” But when inventor J. Fish Smith found that kids preferred eating it, he turned it into an eating candy, which he sold with a spoon. In the 1950s, Sunline Inc. made it the fun-lover’s candy it is today. Outside of its straw-like wrapper, it would just be another tasty but highly processed sugar. But who cares?

Another candy that is perplexing in flavor, texture, and history is the much loved (and loathed) Circus Peanut. This curious candy originated in the 1800s. It was quite possibly for sale at travelling circuses but was also found in candy stores, general stores, and other places where penny candy was sold. The texture is soft as a sponge, spongy as a marshmallow, and flavored like a banana. The circus peanut was never what you’d call prestigious, and various versions entered the candy arena for decades. One in particular is a surprise.

AS FOR THAT SURPRISE

In 1963, General Mills used the circus peanut as a prototype for the charms in Lucky Charms. Today, knockoff Charms are cropping up at candy stores everywhere, minus the flakes. As you may remember, I sampled a few on my way back from Wilbur’s. Very satisfying in a lighthearted way.

CHICKEN BONES TO CHICK-O-STICK

Is Chick-O-Stick another Life Saver rip-off with the “O” at its center? If so, that’s about all the two sweets have in common aside from their presence in candy stores. The orange- and coconut-speckled Chick-O-Stick began its life in Canada, known as Chicken Bones. It was invented by Frank Sparhawk, an employee of brothers James and Gilbert W. Ganong, who opened their shop in 1873. The candy was a cinnamon-flavored shell filled with bittersweet chocolate that looked like chicken bones. The Ganongs’ company is still operating today and is still family owned.

So successful were Chicken Bones that they spread south, all the way to Texas, where another family-owned business, Atkinson Candy Company, apparently found them. The Atkinson Candy Company began in 1932 after Basil Atkinson was laid off from his job at a foundry. He borrowed a truck, dug up some cash, and loaded his wife and sons into the cab for the two-day drive to Houston. There he loaded up on candy and tobacco. He began selling these items to small shops, eventually setting up a wholesale distribution center.

Eventually, Basil realized he could make candy just as well, make that better, than the other guys. With help from his wife, he got to work. In the late 1930s, he came up with a candy that looked like Southern-fried chicken bones (and the Chicken Bones candy that already existed). Basil decided he should name them “Chicken Bones,” but Ganong decided he shouldn’t. Atkinson renamed his candy Chick-O-Stick, with the Life Saver-esque “O” in the middle. It was a southern favorite for years and is nationally known today.

PERSONAL PERSPECTIVE: WILMA GREEN, CHICK-O-STICK IN CHICAGO

Wilma Green is an artist whose life could be her own portrait. She is an activist, a community organizer, a mother, a teacher, and a friend. This is her candy story.

When I was a kid, my mother would send my twin brother, William, and me to Star Foods Grocery with empty RC Cola bottles and some money. We’d exchange the bottles for more RC for her and two Chick-O-Sticks: one for her and one for my brother and me to share. They had to be Chick-O-Stick. She loved Chick-O-Stick. I don’t know if it was her background coming from the South, but she did.

That was in the ’60s in Chicago. We lived in the largest public housing development in the world—the Robert Taylor housing project. People had migrated from the South during the Depression and built these communities over the years. It was a black metropolis. We did everything there—went to school, saw the doctor. Everyone who went to the Star Foods Grocery knew the owners and everyone got credit. Some people say it was segregation, but I don’t know. I always felt good. . . . I always knew everyone was watching out for me there.

My mom was raising ten kids on her own since my father had died. My brother and I were the youngest. The three oldest were in Arkansas picking cotton with my grandparents. All of us went there in the summer to help out, but my brother and I were too young to pick. It was because of that work my grandparents were able to buy their own house.

During WWII, when my mom was a teenager, she had the opportunity to work at a plant that made parts for bombs like Rosie the Riveter. She paid someone else to pick the cotton; the owners didn’t mind because they didn’t know the people who worked there—just the number of hands. They weren’t really people to them. Eventually she worked at an electronics company that was right next to a candy company. She would go there and get the second-rate candy, you know like broken pieces of chocolate, which she’d also give to us. My brother and I would sell it to other kids and get the good stuff for ourselves, like Now and Laters.

But, as I said, once a week, my brother and I would take the RC bottles and get more RC for my mother and Chick-O-Stick for all of us. I think the candy originated in the South—it was probably a piece of her memory. It’s amazing how candies bring up memories.

It’s nice to revisit those memories. I really love that.

AND, AT LAST, THE LOLLIPOP

No one knows when people started enjoying lollipops, although Charles Dickens wrote about hard candy on a stick in the 1800s. In the United States, around the time of the Civil War, people started sticking pencils into hard candies to eat them. At home, people basically dropped a mound of hard candy onto parchment or wax paper, stuck in a stick, let it dry, and then enjoyed the treat. But commercially not a lot was going on.

Then, in 1895, Chicken Bones owners Gilbert and James Ganong began inserting sharp wooden sticks into their hard candy, creating one of the first commercial lollipops in the northern hemisphere. They called it an “all day sucker.” That changed in 1908 when the Bradley Smith Company started manufacturing the “Lolly Pop,” which was named after George Smith’s favorite racehorse. Their inspiration was a chocolate-caramel taffy on a stick, made by Reynolds Taffy of West Haven, Connecticut, that resembled the Sugar Daddy.

George Smith attempted to get ownership of the name “Lolly Pop,” but the US Patent Office turned him down, as the term was listed in an English dictionary of the early 1800s, spelled “lollipop.” There it was described as “a hard sweetmeat sometimes on a stick.” Eventually, Smith got the rights to “Lolly Pop” with that specific spelling, but the victory was negligible: people began using both names interchangeably and have ever since.

By the 1920s, numerous lollipops had appeared in penny candy stores and other places. There was the Dum Dum, made in 1924 by the Akron Candy Company, which evidently knew the marketing potential of a name; the company’s salesman named the lollipop “Dum Dum,” thinking kids could easily remember it and ask their parents to buy some. Obviously, he was right. The Tootsie Pop, essentially a panned Tootsie Roll, came around 1931, and, in the 1940s, after parents expressed concern that kids would choke on the stick, the Saf-T-Pop, with a round holder, was released.

Excerpted from “Sweet as Sin” by Susan Benjamin. Published by Prometheus Books. Copyright 2016 by Susan Benjamin. Reprinted with permission from the publisher. All rights reserved.

Getting off (line): What’s lost in the age of internet porn



“We’re an army of unmanned drones, piloting our libido through the ether”

By: Mark Slouka/Salon

It’s a rhetorical question – I don’t need a show of hands:  How many men have recently “lain with a woman,” as the prophets might say, and found themselves unmanned because they’d been partaking of too much porn?  Or found that, in order to man themselves, they needed to superimpose a recollected online image onto the scene like a high-school biology teacher placing a transparency of the endocrine system on the overhead projector?

It’s a fair question, maybe even — given the unsolicited anecdotes I’ve been picking up from the men and women who are confident enough to talk about the issue — an important one. Important because the case of online porn opens a small window onto what happens when we outsource our imaginations, when we begin to accept generic, instantly accessible fantasies in place of the ones we used to have to work for. Extend ourselves for, as it were.

What happens is we’re screwed, that’s what happens.  Not in a good way.

To make my case, I have to digress from porn to the University of Chicago, where I once professed, and from thence to a crack house in Winslow, Arizona. Bear with me. 

Though at present I’m a sojourner in civilized life again, I taught college for 30 years, during which time, not surprisingly, I saw some changes: office hours grew virtual, chalk was digitized, professors (with heroic exceptions) morphed into entertainers teaching to customer satisfaction surveys filled out at the end of the term. So far, so good; nothing’s perfect. Underlying all this “progress,” though, was a steady, graphable rise in student itchiness, an inability to stay the fuck still. To follow an idea, to immerse in a book — to go deep. I’d watch it in the library, make bets with myself: two minutes before she checks her phone? Less? Worse, I began to notice the same thing in myself — a current of distraction, like static in the brain.

Not long after I saw a distracted colleague sweep his index finger across the cover of a physical magazine, I left the halls of almost-ivy and, having nowhere much else to go, moved with my family to a small house in Winslow, Arizona, where my desk looked out on Route 66, dust storms and a crack house.

I liked working at that desk.  Whenever the novel I was working on gave me trouble, I’d look up and study the young man with the cantaloupe-size biceps who sat on the porch across the street, tilted back on a wooden chair in front of the plywood-covered windows.  His leg jerking like he was carrying a charge, he’d sit for 20 or 30 seconds, fidgeting, scratching, then grab his phone. Sometimes he’d leap up and walk quickly back and forth on the cracked sidewalk, talking – laughing, yelling, cursing – until a customer pulled up, at which point he’d exchange goods for legal tender through the passenger-side window, walk rapidly back to the chair, shift the phone to his left hand, snatch a barbell off the floorboards and do a few quick curls. Then the phone again. 

This would go on for hours.  I’d disappear into my book for an hour or more, then reemerge and there he’d be: pacing, scratching, fiddling with the phone, pumping iron – a hundred sets of two – most sincerely strung out.

And then a desert epiphany came to me as it came to John the Baptist: subtract the barbell, ratchet down the craziness, upgrade for decency (if not necessarily IQ), and I was looking at one of my former students. Here was the same inability to be still, to do one thing, any thing. To throw one leg over the other and watch the dust storm coming in, to be in your own skull. The conclusion was as inescapable as it was uncomfortable: We — maybe all of us, to one degree or another — were exhibiting signs of addiction; the drug might be legal (in fact, universally sanctioned and more or less mandatory), and the effects less toxic, but the similarities were striking.

I was wrong: The effects were just as toxic, they just manifested themselves more gradually, less visibly; instead of targeting your liver, say, this drug hit your ability to think, to contemplate your world, to use your imagination, to be alone. It let you keep your teeth and your job while it quietly paved your soul.

All of which brings us back around to online porn, something I’ve partaken of – might as well clear that right up — as has pretty much every man capable of curiosity and possessed of a penis.  Truly, there’s something extraordinary about this virtual edifice, this million-room whorehouse catering at a keystroke to every conceivable taste and fancy, entirely free of the old-timey risks of disease, danger and social embarrassment.  A parody of the marketplace?  This is the quintessence of it.  This is what the piled centuries since Adam Smith have been building toward: a universal human need — for most of our lives nearly as essential as shelter — commodified, then abstracted into light, then delivered friction-free to the customer waiting dry-mouthed by the door. What could be more perfect?

Unfortunately, the term “friction-free” is the thread that, once pulled, starts the unraveling. Why?  Because beyond the method of delivery there’s really nothing friction-free about it, because even the slightest deviation from the main channel suddenly finds you backpedaling out of ugly back eddies where the subjects look like they’re in eighth grade and the forces of manipulation, coercion (and worse) are all too visible.  Because, at the end of the process, actual human beings are actually involved, many of whom don’t get a vote.  Because there’s fucking — to speak plainly — and then there’s fucked-up. 

But setting aside the truly insane shit – the kids and the rape porn and the crush videos, etc., best left to the cops – still leaves us with a continent’s worth of stuff catering to people who just want to get off.  Which is where the second problem with “friction-free” delivery comes up, namely, that it’s just that.

A certain amount of friction – in the bedroom as in a democracy – is a good thing, a beneficial thing. It tells us we’re alive in a world of skin and fur and opinions not our own; it forces us to reckon with others, to contend and argue and accommodate. It asks something of us. It makes us stronger. Ultimately, despite the headaches, it makes us happier, too, if only because we evolved in relation to the messy, physical world, and you don’t erase a million years of evolutionary adaptation in the space of a generation without some interesting side effects. 

So what’s the appeal of friction-free?  Simple: convenience, comfort, cost.  Physical space requires energy and money to cross; social interaction carries risk. As human beings, we’ve been courting the Big Easy – softening the hard edges of the world, conquering distance and time, developing technological prostheses that enhance our limited natural abilities – for half a millennium and more. The problem, though, is that we’ve gotten too good at it, too indiscriminate about which edges we plane smooth. Having saved ourselves a great deal of time and labor (easier, faster, smoother) we’re moving on to saving ourselves the trouble of thinking. Conquering the world, we’ve allowed ourselves to be conquered.

At times it has the feeling of a natural law: soften the hard edges, you soften yourself. It’s not hard to see where this leads. Eventually, surrounded by the accessible, the instant, and the effortless, you can barely feel your life at all – except as an occasional source of irritation that things aren’t more accessible, instant, effortless.  At which point it occurs to you that “friction-free” is a synonym for “dead.”

How does all this apply to Pornhub? Pretty well, I think. The irresistible lure of online porn is that it’s easy, risk-free; the sting in the tail is that not only is there no accountability, there’s no presence. We’re not involved, really. We’re an army of unmanned drones, piloting our libido through the ether, one hand firmly on the controls, risking nothing.

And yet, we are – more than we think.   

To explain the fine print charges on your last, more-or-less friction-free transaction with Brandi Love requires a brief, impressionistic history of porn and the male imagination.

A Brief Impressionistic History of Porn and the Male Imagination

In the beginning, or near enough, there was probably shape: the soft cleft in the skin of a fruit, the curve of a root. With the object of our affections elsewhere and absence making our heart grow fonder, we saw her (or a reasonable stand-in) everywhere, and knew what to do. Eventually, to enhance our doing, representation kicked in: a finger drawing in the dust, perhaps a cave painting not like the ones usually found on the Discovery Channel.

Over time we lent substance to our musings, carving fertility goddesses with impossible breasts and mountainous buttocks (and, because it doesn’t hurt to dream, gentlemen sporting phalluses that would require a 12-foot partner to put into practice) and so forth. Other modes and materials – pigment on canvas, stone, etc. – followed. Some of these, because they were good, became art (and blasphemous though it may sound, one can imagine the artist’s pulse ticking up a bit as he chisels the undercurve of that marble ass); the vast majority did not.

Since nothing much happens for a while, skip a few centuries to 1970, where we find my 12-year-old, sullen self walking down the side of a country road in Tarrytown, New York, kicking at garbage on the shoulder. When I boot a rain-soaked paper bag, magazines spill out. I peel back the tearing, sodden pages — how erotic that memory still is — and there they are! Girls — because that’s what they were then. With breasts! My heart beating like a jackhammer, anticipatory shame reddening my face, I stuff them in my jacket, praying to whatever gods there be that my mother doesn’t find out. She doesn’t. The contraband is successfully transferred to my friends, Matt and Andy, who secrete it in the clubhouse under the floorboards where it remains, fingered over, until it disintegrates into molecules.

So far, then, from tree roots to, I don’t know, Japanese erotic prints to Swank magazine, nothing much has changed.  Admittedly, photography has added a layer of realism, but one all-important constant holds: The object of desire is static, silent.  To animate her we have to imagine how she’d move, what she’d do, how she’d sound. 

In the 1970s she begins to move and, in a manner of speaking, speak. Pornographic films, heretofore a specialty interest, and quite illegal, go mainstream; a last few legal hurdles are cleared, and the floodgates of the wonder-world swing open. For a short time, Linda Lovelace does what she does, amusingly broken up by the censors into a hundred tiny frames (forcing some last, tired twitch from our imagination), and then, thanks to the Supreme Court, she and the requisite part of her costar, Mr. Harry Reems, cohere into a single image. Our brain is no longer needed; in fact, given the “dialogue,” it’s probably best left at home.

There’s only one problem left to solve. To get where we want to go, we still have to get up – that is, move our corporeal body to that drugstore or newsstand to buy the mag, where we may have to brave the salesgirl who takes one look and knows it’s another Saturday night and we ain’t got nobody; we have to take the subway to Times Square and walk into that movie house on 43rd and 8th, then sit next to nasty old guys in raincoats, staring at a movie that’s two hours too long and doesn’t really fit the bill anyway . . . This takes dedication.

But the powers that would improve our lives never rest. The video player introduces a wider range of consumer options, at-home convenience and the miracle of the rewind button (though the actual cassette still has to be secreted away in the sock drawer), and then, before we know it, the digital revolution is upon us and the last barriers fall: from here on in, to quote Don Henley, it’s everything, all the time. The options are endless, and endlessly gratifying; instantly here and – equally helpful – instantly gone. No need to get dressed, go out, spend money, court rejection, perform, talk, escape. Better still, we can get off pretty much anywhere, any time – no fuss, no muss. Having a little trouble with that report today, or maybe just bored? A few quick strokes and we’re in, glandular overload in five, four, three, two seconds. Given men’s more objective, fuel-injected mechanics (“Breasts? Wham!”), this is freebasing sex, and, as with the other, there’s nothing free about it.

Stuff yourself at the All-U-Can-Eat long enough, you forget to taste the food; after a while, you forget how. Stuff yourself on others’ fantasies, and you lose the ability to form your own. There’s something paternalistic about the process, infantilizing: “You just sit there, baby – I’ll do everything. No need to trouble your little head.” Not that one, anyway.

In this context, Marshall McLuhan’s famous dictum that “every new technology amputates the function it extends” has a particularly unfortunate ring. Still, whatever literal un-manning may be taking place (and who can fail to see the link between skyrocketing impotence rates and the expansion of online porn), the real violence is being done to our heads, which are, after all, connected to the rest of us. Step by step, to recall e. e. cummings, the world of the made replaces the world of the born; the machine colonizes the mind.

To experience, um, firsthand, what I’m talking about, try the following experiment – call it Independent Study No. 1. The next time your fancy lightly turns to thoughts of love, tap into whatever site you’d ordinarily tap into, and pay attention. Remember – this is homework. Notice how quickly your interest peaks — tap, tap, in! – and how swiftly you disconnect once the mission’s been completed – exit, exit, now back to that report.

A graphic representation (horizontal axis for duration, vertical for pleasure) would resemble an alp. I’m not going to draw it for you – imagine it.

We’re not done.  The next time the urge befalls, resist going the usual route and instead, go solo, all by your lonesome, in your very own, unmediated head. Call it Independent Study No. 2.  It may feel unfamiliar at first — a bit like reading a novel, which, come to think of it, requires much the same equipment — but persevere anyway.  If it’s difficult, boring or impossible, stop to consider how scary this is (“This is your brain on technology”), recall that line from McLuhan, then dedicate yourself to regrowing your sexuality by reclaiming your imagination. If, on the other hand, you’re able to find a nice place in the sun in which to spend a fruitful moment or two, notice how different it is from Study No. 1. Note, first, how much slower the buildup is, how your mind flutters and dips from flower to flower before it settles in for the ride, how you actually have to work that muscle a bit (still talking about your brain) before arriving at the station, just to mix a few metaphors. Notice, above all, after duly getting off at that station, how comparatively nice it is there. How you’re in no hurry to get back to work; how you want to just sit on the bench a while, and smile.

Mark Slouka’s work has appeared frequently in Harper’s, Granta, The Paris Review and other publications. His novels and essays have been translated into sixteen languages, and he lives with his family in Brewster, New York. His memoir, Labyrinth of the Heart, will be out with Norton in fall 2016; he writes the blog Notes From The Shack: On Nature, Culture, Politics and Technology.

Our terrorism double standard: After Paris, let’s stop blaming Muslims and take a hard look at ourselves

An Indian child pays floral tribute at a sand sculpture created in remembrance of victims of Friday’s attacks in Paris, in Bhubaneswar, India.(Credit: AP)

We must mourn all victims. But until we look honestly at the violence we export, nothing will ever change

By: Ben Norton/Salon.com

Any time there is an attack on civilians in the post-9/11 West, demagogues immediately blame it on Muslims. They frequently lack evidence, but depend on the blunt force of anti-Muslim bigotry to bolster their accusations.

Actual evidence, on the other hand, shows that less than two percent of terrorist attacks in the E.U. from 2009 to 2013 were religiously motivated. In 2013, just one percent of the 152 terrorist attacks were religious in nature; in 2012, less than three percent of the 219 terrorist attacks were inspired by religion.

The vast majority of terrorist attacks in these years were motivated by ethno-nationalism or separatism. In 2013, 55 percent of terrorist attacks were ethno-nationalist or separatist in nature; in 2012, more than three-quarters (76 percent) of terrorist attacks were inspired by ethno-nationalism or separatism.

These facts, nonetheless, have never stopped the prejudiced pundits from insisting otherwise.

On Friday the 13th of November, militants massacred at least 127 people in Paris in a series of heinous attacks.

There are many layers of hypocrisy in the public reaction to the tragedy that must be sorted through in order to understand the larger context in which these horrific attacks are situated — and, ultimately, to prevent such attacks from happening in the future.

Right-wing exploitation

As soon as the news of the attacks broke, even though there was no evidence and practically nothing was known about the attackers, a Who’s Who of right-wing pundits immediately latched on to the violence as an opportunity to demonize Muslims and refugees from Muslim-majority countries.

In a disgrace to the victims, a shout chorus of reactionary demagogues exploited the horrific attacks to distract from and even deny domestic problems. They flatly told Black Lives Matter activists fighting for basic civil and human rights, fast-food workers seeking livable wages and union rights, and students challenging crippling debts that their problems are insignificant because they are not being held hostage at gunpoint.

More insidiously, when evidence began to suggest that extremists were responsible for the attacks, and when ISIS eventually claimed responsibility, the demagogues implied or even downright insisted that Islam — the religion of 1.6 billion people — was to blame, and that the predominantly (although not entirely) Muslim refugees entering the West are only going to carry out more such attacks.

Clampdown on Muslims and refugees

Every time Islamic extremists carry out an attack, the world’s 1.6 billion Muslims are expected to collectively apologize; it has become a cold cliché at this point.

Who benefits from such a clampdown on Muslims and refugees?

Two primary groups: One, Islamic extremist groups themselves, who use the clampdown as “evidence” that there is supposedly no room for Muslims in the secular West that has declared war on Islam; and two, Europe’s growing far-right, who will use the attacks as “evidence” that there is supposedly no room for Muslims in the secular West that should declare war on Islam.

Although enemies, both groups share a congruence of interests. The far-right wants Muslims and refugees from Muslim-majority countries (even if they are not Muslim) to leave because it sees them as innately violent terrorists. Islamic extremists want Muslim refugees to leave so they can be radicalized and join their caliphate.

More specifically, to name names, ISIS and al-Qaeda will benefit from the clampdown on Muslims and refugees, and Europe’s growing far-right movement will continue to recruit new members with anti-Muslim and anti-refugee propaganda.

ISIS has explicitly stated that its goal is to make extinct what it calls the “grayzone” — that is to say, Western acceptance of Muslims. The “endangerment” of the grayzone “began with the blessed operations of September 11th, as those operations manifested two camps before the world for mankind to choose between, a camp of Islam … and a camp of kufr — the crusader coalition,” wrote ISIS in its own publication.

An excerpt from ISIS’ own publication (Credit: Iyad El-Baghdadi)

Demonstrating how right-wing and Islamic extremist logic intersect, ISIS actually favorably cited the black-and-white worldview shared ironically by both former President George W. Bush and his intractable foe, al-Qaeda leader Osama bin Laden. ISIS wrote: “As Shaykh Usamah Ibn Ladin said, ‘The world today is divided into two camps. Bush spoke the truth when he said, “Either you are with us or you are with the terrorists.” Meaning, either you are with the crusade or you are with Islam.’”

By making ISIS go viral, we are only helping them accomplish their sadistic goals.

In the meantime, France’s extreme right-wing National Front party stands to gain in particular. The party — which was founded by a neo-Nazi and is now led by his estranged daughter Marine Le Pen — constantly rails against Muslims, whom it hypocritically characterizes as Nazi occupiers. In 2014, a Paris court ruled it was fair to call the National Front “fascist.”

Before the Paris attacks, Le Pen’s extreme-right movement was France’s second-largest party. Now it may become the first.

The massacres that are ignored

There are hundreds of terrorist attacks in Europe every year. The ones that immediately fill the headlines of every news outlet, however, are the ones carried out by Muslims — not the ones carried out by ethno-nationalists or far-right extremists, which happen to be much more frequent.

Yet it is not just right-wing pundits and the media that give much more attention to attacks like those in Paris; heads of state frequently do so as well. Minutes after the Paris attacks, Presidents Hollande and Obama addressed the world, publicly lamenting the tragedy. Secretary of State John Kerry condemned the attacks as “heinous, evil, vile acts.”

Notable was the official silence surrounding another horrific terrorist attack that took place only the day before. Two ISIS suicide bombers killed at least 43 people and wounded more than 230 in attacks on a heavily Shia Muslim community in Beirut on November 12. President Obama did not address the world and condemn the bombings, which constituted the worst attack in Beirut in years.

In fact, the opposite happened; the victims of the ISIS attacks were characterized in the U.S. media as Hezbollah human shields and blamed for their own deaths based on the unfortunate coincidence of their geographical location. Some right-wing pundits even went so far as to justify the ISIS attacks because they were assumed to be aimed at Hezbollah.

Nor did the White House interrupt every news broadcast to publicly condemn the ISIS massacre in Turkey in October that left approximately 128 people dead and 500 injured at a peaceful rally for a pro-Kurdish political party.

More strikingly, where were the heads of state when the Western-backed, Saudi-led coalition bombed a Yemeni wedding on September 28, killing 131 civilians, including 80 women? That massacre didn’t go viral, and Obama and Hollande did not apologize, let alone even acknowledge the tragedy.

Do French lives matter more than Lebanese, Turkish, Kurdish, and Yemeni ones? Were these not, too, “heinous, evil, vile acts”?

Oddly familiar

We have seen this all before; it should be oddly familiar. The reaction to the horrific January 2015 Paris attacks was equally predictable; the knee-jerk Islamophobia ignored the crucial context for the tragic attack — namely the fact that it was the catastrophic U.S.-led war on Iraq and torture at Abu Ghraib, not Charlie Hebdo cartoons, that radicalized the shooters. Also ignored was the fact that the extremist attackers were sons of émigrés from Algeria, a country that for decades bled profusely under barbarous French colonialism, which only ended after an even bloodier war of independence in 1962 that left hundreds of thousands of Algerians dead.

After the January Paris attacks, leaders from around the world — including officials from Western-backed extremist theocratic tyrannies like Saudi Arabia — gathered in Paris for what was supposed to be a march but turned out to be a carefully orchestrated and cynical photo op.

And not only are Muslims collectively blamed for such attacks; they also collectively bear the brunt of the backlash.

In just six days after the January attacks, the National Observatory Against Islamophobia documented 60 incidents of Islamophobic attacks and threats in France. TellMAMA, a U.K.-based organization that monitors racist anti-Muslim attacks, also reported 50-60 threats.

Once again, mere days before the January Paris attacks, the global community largely glossed over another horrific tragedy: The slaughter of more than 2,000 Nigerians by Boko Haram. The African victims didn’t get a march; only the Western victims of Islamic extremism did.

Western culpability

A little-discussed yet crucial fact is that the vast, vast majority of the victims of Islamic extremism are themselves Muslim, and live in Muslim-majority countries. A 2012 U.S. National Counterterrorism Center report found that between 82 and 97 percent of the victims of religiously motivated terrorist attacks over the previous five years were Muslims.

The West frequently acts as though it is the principal victim, but the exact opposite is true.

Never interrogated is exactly why so many refugees are fleeing the Middle East and North Africa. It is not as if millions of people want to leave their homes and families; they are fleeing violence and chaos — violence and chaos that is almost always the result of Western military intervention.

Western countries, particularly the U.S., are directly responsible for the violence and destruction in Iraq, Afghanistan, Libya, and Yemen, from which millions of refugees are fleeing:

  • The illegal U.S.-led invasion of Iraq led to the deaths of at least one million people, destabilized the entire region, and created extreme conditions in which militant groups like al-Qaeda spread like wildfire, eventually leading to the emergence of ISIS.
  • In Afghanistan, the ongoing U.S.-led war and occupation — which the Obama administration just prolonged for a second time — has led to approximately a quarter of a million deaths and has displaced millions of Afghans.
  • The disastrous U.S.-led NATO intervention in Libya destroyed the government, turning the country into a hotbed for extremism and allowing militant groups like ISIS to spread west into North Africa. Thousands of Libyans have been killed, and hundreds of thousands made refugees.
  • In Yemen, the U.S. and other Western nations are arming and backing the Saudi-led coalition that is raining down bombs, including banned cluster munitions, on civilian areas, pulverizing the poorest country in the Middle East. And, once again — the story should now be familiar — thousands have been killed and hundreds of thousands have been displaced.

Syria is a bit more complicated. Many refugees from the country, which has been torn apart by almost five years of bitter war, are fleeing the brutal repression of the Assad government. Western countries and their allies, however, share some of the blame. Allies such as Saudi Arabia and Turkey have greatly inflamed the conflict by supporting extremist groups like al-Qaeda affiliate al-Nusra.

And it should go without saying that millions of Syrian refugees are fleeing the very same terror at the hands of ISIS that the group allegedly unleashed upon Paris. By turning away Syrian and Iraqi refugees fleeing the ruthlessly violent extremist group, France and other Western countries will only be further adding to the already shocking number of its victims.

Dislocating the blame

When the U.S. and its allies bomb weddings and hospitals in Yemen and Afghanistan, killing hundreds of civilians, “Americans” doesn’t trend globally on Twitter. Yet when Parisians are allegedly killed by Islamic extremists, “Muslims” does.

The imperialist West always tries to dislocate the blame. It’s always the foreigner’s, the non-Westerner’s, the Other’s fault; it’s never the fault of the enlightened West.

Islam is the new scapegoat. Western imperial policies of ravaging entire nations, propping up repressive dictators, and supporting extremist groups are conveniently forgotten.

The West is incapable of addressing its own imperial violence. Instead, it points its blood-stained finger accusingly at the world’s 1.6 billion Muslims and tells them they are the inherently violent ones.

Unfortunately, tragedies like the one we see in Paris are daily events in much of the Middle East, no thanks to the policies of the governments of France, the U.S., the U.K., and more. The horrific and unjustifiable yet rare terrorist attacks we in the West experience are the quotidian reality endured by those living in the region our governments brutalize.

This does not mean we should not mourn the Paris attacks; they are abominable, and the victims should and must be mourned. But we should likewise ensure that the victims of our governments’ crimes are mourned as well.

If we truly believe that all lives are equally valuable, if we truly believe that French lives matter no more than any others, we must mourn all deaths equally.

The dangers of habit

We know the responses to attacks like these. Great danger lies in letting those responses continue unchanged.

Governments are going to call for more Western military intervention in the Middle East, more bombs, and more guns. Hard-line right-wing Senator Ted Cruz immediately demanded airstrikes with more “tolerance for civilian casualties.” Naturally, the proposed “solution” to individual acts of terror is to ramp up campaigns of state terror.

At home, they will call for more fences, more police, and more surveillance. Immediately after the Paris attacks, France closed its borders. In the U.S., as soon as the attacks were reported, the NYPD began militarizing parts of New York City.

The hegemonic “solution” is always more militarization, both abroad and here at home. Yet militarization is in fact the cause of the problem in the first place.

At the time of the atrocious 9/11 attacks, al-Qaeda was a relatively small and isolated group. It was the U.S.-led war in and occupation of Iraq that created the conditions of extreme violence, desperation, and sectarianism in which al-Qaeda metastasized, spreading worldwide. The West, in its addiction to militarism, played into the hands of the extremists, and today we see the rotten fruit born of that rotten addiction: ISIS is the Frankenstein’s monster of Western imperialism.

Moreover, Western countries’ propping up of their oil-rich allies in the Gulf, extremist theocratic monarchies like Saudi Arabia, is a principal factor in the spread of Sunni extremism. The Obama administration did more than $100 billion in arms deals with the Saudi monarchy in the past five years, and France has increasingly signed enormous military contracts with theocratic autocracies like Saudi Arabia, the UAE, and Qatar.

If these are the strategies our governments continue to pursue, attacks like those in Paris will only be more frequent.

The far-right will continue to grow. Neo-fascism, the most dangerous development in the world today, will gain traction. People will radicalize.

The incidence of attacks inspired by ethno-nationalism or far-right extremism, already the leading cause of European and American terror, will increase even further.

The pundits will boost anti-Muslim bigotry and feed the anti-refugee fervor. In doing so, they will only make matters worse.

The Paris attacks, as horrific as they are, could be a moment to think critically about what our governments are doing both abroad and here at home. If we do not think critically, if we act capriciously and violently, the wounds will only continue to fester. The bloodletting will ultimately accelerate.

In short, those who promote militarist policies and anti-Muslim and anti-refugee bigotries in response to the Paris attacks are only going to further propagate violence and hatred.

If the political cycle is not changed, the cycle of violence will continue.

My unconventional Texas family: A gay dad and a mom who wears men’s clothes, in the Lone Star State

As the child of gay parents, I would sometimes feel ashamed. But I would also feel ashamed of my shame

By: Elizabeth Collins/Salon

Not long ago, I called an old friend of mine named Scott: a white, straight, 50-year-old, Texas-born-and-bred, religiously conservative Republican. He was also my former boss and current business partner. I’ve known him for almost ten years, and yet when I called him, I was nervous. “I’m, uh, coming to Houston to do a show. You’re welcome to come, but I’m not sure if it would be your cup of tea,” I told him.

I used to work for Scott when I lived in Houston, and have since moved to Los Angeles where, with his help, I started my own business. And while I had told him about my side gig as a comedian, I did not have the heart to tell him the subject matter of my material, which is about my experience being raised by gay men in Texas during the ’90s.

Scott said, in his kind Texan accent, “I’d love to see your show, girl! I’ll be there.” While this was very sweet of him, I knew I now had a lot of explaining to do before he came to see it.

Scott was the first real boss I ever had who believed in me AND gave me health insurance. Before I worked for him, I was carless and still living with my mama. By the time I left for Los Angeles, I had a Honda and a beautiful red sleeper sofa to call my own.

He was a patient and fun person to work for. We saw each other day in and day out for almost three years. We went to lunch once a week and had the occasional Friday afternoon scotch.  I knew his daughter, who was 12 at the time. And though I never met his wife, I felt like I knew her through talking to her on the phone when she called to speak to him. I knew so much about Scott, but Scott knew little about me and he thought it was because I was shy.

I have been considered “shy” most of my life. The fact that I do stand-up comedy bewilders people. I have realized, through doing standup, that I am not “shy,” but that there is a lot I hide from the world. Even when I was a little girl, I knew my family was not like other families. I had a mom and dad like everyone else, but found myself telling people, when I got to know them, “My mom is kinda like the dad, and my dad is kinda like the mom.”

My dad came out when I was 11 and my parents divorced. After the divorce, I lived with my mother at first. She was a tomboyish woman. In fact, there were many times when people would see her and think that she was my dad. While my mother was not gay, it was difficult to explain to people that Yes, my dad is gay, and my mom wears men’s clothes. Especially in Texas. Especially in the nineties.

Since 2010 I’ve been more open, telling people on stages at comedy clubs from LA to Paris about my family. So why, in the year 2015, was it so hard to tell Scott, someone I considered a close friend? Why couldn’t I just say, “I’m doing a show where I talk about my dad, who is gay”?

Part of it was the memory of former rejections, times when I had to “come out” in the past. In high school, I lived with my dad and his partner. When I explained our family life to other people, I’d get a mixture of responses. Some of my friends would say, “That is so cool! I wish my dad were gay.” But there were many times when people would surprise me and say, “That there is an abomination.” Even people who knew and loved my dad became confused and distraught when learning that he and his “roommate” Dale were more than friends.

I worked with Scott in my twenties. At that time, since I didn’t live with my dad, it didn’t seem like a necessary topic to bring up. My dad wasn’t dating anyone, so there was no reason to say, “I hung out with my dad and his partner over the weekend.” But I knew about Scott’s mom. I knew that he took care of her after his dad passed away. I knew when she was sick and he had to take time off work to take her to the hospital. I didn’t bring up anything about my dad, because I wasn’t sure where the conversation would go. I worried I would let something spill that would reveal him.

Many people think it’s not necessary to mention a person’s sexuality in casual conversation. But the act of avoiding it cuts off an entire part of a life and a history. I couldn’t talk to Scott about my dad’s partner, Dale, who was like a second father to me when I was growing up. What would Scott say? Okay, fine or That there is an abomination? I wasn’t worried about losing my job; it was about having someone I loved and respected blindside me by rejecting my family and me to my face.

Speaking to Scott on the phone, I thought, “I’m a grown woman in her 30s. It’s 2015. I care about this person, but it’s time I take a risk and reveal a huge part of who I am. He may reject me, but I’ll be okay.”

I told Scott the nature of the show and that it was about my dad, who was gay. In fact, my show was called “Raised By Gays and Turned Out OK!” Scott’s response was muted. He said, “I’ll see if I can go. Talk to you later.”

Honestly, no response was the best response for me. I was elated to let go of this burden. Glad that he did not say, “Whuuut?” The cat was out of the bag and if he went to the show, fine. If he didn’t go, fine. At least I no longer felt like a liar.

Recently, I’ve thought a lot about what it must have been like for my father to come out. He risked rejection by his family, friends, coworkers and society as a whole. Most of our family members who learned the truth did reject him, not only for being gay, but for hurting my mother. Ultimately, I think the most difficult part was admitting the truth to himself. He loved my mom, my brother, and me. He wanted to be a traditional family man, and the last thing he wanted was to tear our family apart. Through doing my standup and one-person show, I have explored this transition in my family’s life and have found nothing but tremendous empathy and admiration for the life my father had to lead. And I’ve realized that his coming out didn’t tear us apart; it made us better people to see him living as his true self.

I in no way thought my “coming out” experiences were exactly like his. But I was there with him by his side. When our family rejected him, I felt rejected. When he expressed concern about being “out” at work and possibly losing his job, I was concerned too. When we saw reports in the news of attacks against men in areas with concentrations of gay bars and other gay-owned businesses, I feared for his life. Along with having to come out to the kids at school and being the only person I knew with two dads, I often worried, “If my dad is gay, what does that make me?” I discovered it did not make me different from anyone else in school. My dad’s orientation didn’t make me different; living in a world that didn’t accept his orientation made me different. Through watching my standup, my dad has seen my side of the story and has realized that he was not completely alone.

Many times, children of gay parents are overlooked, lost in the shadows of their parents’ enormous struggles. But even my father, who had his own challenges, didn’t have to worry about sharing pictures of his mom and dad for fear that he might be outing his parents as a heterosexual couple. He could be proud of them. As the child of gay parents in an often homophobic world, I would sometimes feel ashamed. But I’d also feel ashamed of my shame. I felt that I had to completely cut out a section of my childhood because if I talked about it, it would bring up so many questions, politically-charged conversations and potentially nasty remarks about people I loved. I feared it would end friendships and kill romances. But not talking about my dad, whom I loved and respected, meant not talking about myself. It felt like a betrayal.

The day of my show in Houston, Scott called and said, “I’m gonna be there! You want me to videotape it?” I declined his offer to tape it because the show was my turn to talk about my family with pride and I wanted Scott’s undivided attention. While performing, I was so happy to see him sitting in the audience listening.

I still don’t know how Scott feels about gay rights as a social or political issue. But after it was over I was too filled with joy and relief that he was still there at the end, standing beside me, to care about any of that. He patted me on the shoulder and said, “That was awesome, girl!”


Elizabeth Collins is a comedian and writer living in Los Angeles. She is currently working on a memoir based on her one-person show, “Raised By Gays and Turned Out OK!” She leads the Los Angeles chapter of COLAGE, an organization for the children of LGBTQ parents. Follow her @raisedbygaysok.