Sunday's edition of the New York Times Magazine featured a story about a fight over the brain of a patient who, in his life, had been a keystone of memory science. It was an excerpt from Patient H.M., a book about the man who lost his memory after a lobotomy, written by the grandson of the neurosurgeon who performed the operation.
And then the letters started coming in. At issue (in part) was an exchange between author Luke Dittrich and a Massachusetts Institute of Technology neuroscientist named Suzanne Corkin, who died in May. Corkin had conducted decades' worth of experiments on H.M.—Henry Molaison—and Dittrich wanted to know what would happen to all her files:
Suzanne Corkin: Shredded.
Me: Shredded? Why would they be shredded?
Corkin: Nobody's gonna look at them.
One of the letters to the Times was from James DiCarlo, the head of the department of brain and cognitive sciences at MIT. He and his colleagues contend, among other things, that the records had not been destroyed and in fact are currently housed at MIT. “Journalists are absolutely correct to hold scientists to very high standards,” DiCarlo's letter concludes. “I—and over 200 scientists who have signed a letter to the editor in support of Professor Corkin—believe she more than achieved those high standards. However, the author (and, implicitly, the Times) has failed to do so.”
But the New York Times Magazine did fact-check the book excerpt itself, Danielle Rhoades Ha, the paper's vice president of communications, told me. It fact-checks everything, down to poems, according to Robert Liguori, a research editor at the magazine and a fact checker who has worked on books.
Part of the fallout is surely attributable to the collision here between two standards of evidence: scientific and journalistic. Even in the hands of a rigorous reporter, the latter standard is baggier by necessity—nothing would get published if every story were vetted like a peer-reviewed research study—and can sometimes involve taking a source at his or her word about what happened, as Dittrich seems to have here. To this unavoidable problem, the conventions of journalism and nonfiction publishing in general contribute a further complication—there is rarely any transparency about the fact-checking process and certainly no way of knowing if publishers hold their books to consistent standards.
When I tried to find out if Dittrich's book had been fact-checked, the assistant director of publicity at Random House said he couldn't tell me. When I reached out to Dittrich himself, I got a reply from Random House PR again: “We do not discuss the editing process of our books.” The representative agreed to send me the acknowledgments section, which does describe a vetting process: The writer shared drafts with family members involved in the story and “two eagle-eyed and sharp-brained neuroscientists” who read the whole manuscript.* But if anyone who helped on the book was a fact checker vetting its journalistic accuracy, Dittrich didn't specify as much.
People are often surprised to learn that books, those bulky, fact-rich forever things, frequently receive less scrutiny from an independent fact checker than the stories they skim in magazines before tossing them in the recycling bin. In an ideal world, those books would be vetted in a rigorous and standardized way. “It's impossible to write 50, 60, 70,000 words and not screw up somewhere,” said Liguori.
I know well, from spending more than two years as a freelance fact checker for many national science publications, that errors are terrifically common in nonfiction. When I started fact-checking as an intern at a magazine, I was surprised to find mistakes of all kinds in the work of authors I'd long admired—misspellings of proper nouns, botched descriptions of experiments, badly turned metaphors for, say, how the brain processes information or how the solar system formed. In the end, everything comes back to primary sources. One of my favorite catches: Once, an author noted that if you were to slice off someone's head, blood would shoot up out of the hole—a gruesome illustration of how powerfully the body's arteries pump blood. I called a forensic expert, who pointed me to videos of beheadings. There was no spurting blood.
All this work costs money—and publishers rarely foot the bill. In a highly unscientific Twitter survey, I asked authors of science books to tell me if their publishers had paid for an independent fact checker. Just four out of 38 respondents said they had.
Liguori offered me his own theory about why publishers aren't springing for fact checkers: If a text has embarrassing or fatal errors, the onus is put on the author. “Does anybody besides anyone in publishing remember who published those books?”
I've long wished that published material would carry some kind of stamp noting whether it had been independently and thoroughly fact-checked. (Internet articles included—this one wasn't.) It would be particularly useful for books, the paper copies of which are impossible to change even when errors are inevitably caught or a new angle to the story emerges.
I'm not the only one. “Maybe there should be a warning,” journalist Mac McClelland has said of books that haven't been checked, “like on a pack of cigarettes.”
And if material has been fact-checked, I'd like to see the checker get a credit, the way that writers, photographers, and increasingly story editors do. They are just as integral to the process of getting the story on the page. Though I've been listed on the masthead of one magazine as a researcher, the vast majority of my fact-checking gigs didn't provide me with a formal credit—even though I gave the stories I checked the same care and diligence I would have if my name had been plastered at the top.
Sometimes writers, with their reputations very much on the line, will hire their own checkers—five respondents to my survey said they had. That was what science journalist and veteran fact checker Brooke Borel did for her forthcoming book The Chicago Guide to Fact Checking, which, I suppose, given the subject matter, isn't terribly surprising. (Full disclosure: I've fact-checked magazine pieces by Borel and consider her a friend. She also interviewed me for her fact-checking guide.) But for her previous book, about the rise of bedbugs, Borel, like many authors, could not afford a fact checker and did the checking herself. That's not ideal, Borel notes. It's hard to get out of your own head and spot your own mistakes, especially when you are invested in the story sticking together. I'd imagine it's doubly hard for a book like Dittrich's, in which the subject matter at times involved his own family.
And when her bedbug book was excerpted by various publications, no one asked Borel for backup materials. While some publications, like the New York Times Magazine, have a policy of fact-checking excerpts, in my experience that doesn't always happen with the same rigor as with other articles. When I've worked on excerpts in the past, I've turned up errors just as surely as I would in most features handed to me. More proof that the fact-checking process is opaque: I asked a couple of research chiefs about their magazines' policies for excerpts. With the exception of the Times, they declined to comment on record.
Still, fact-checking isn't a bombproof way to ferret out the truth. As Daniel Engber wrote in Slate, on the subject of Jonah Lehrer's new book, which did get a fact-checking treatment, “Fact-checked, straight-laced science journalism may also fail to catch distortions of the evidence, if those distortions come from researchers themselves.”
Indeed, Dittrich wrote in a response on Medium: “My reporting of the shredding was based entirely on [the researcher's] own words, to me, on tape.” He even uploaded a recording of the conversation. If many books' budgets do not carve out money for fact checkers, they certainly wouldn't include any for a full-scale investigation into locating a pile of notebook shreddings. It might've been a good move to call MIT and ask about the status of the records, especially since Dittrich says that he had a contentious relationship with Corkin. (Whether I would have done this personally—or if the researchers at the Times did this—I can't say. I get to speak with the hindsight of knowing that the claim is being hotly and publicly contested.)
In this case, Dittrich said that he's actively hopeful he's not correct. He wrote on Medium, “I hope the data has in fact survived, as that strikes me as the best possible outcome.” For everyone except maybe the fact checkers.
Correction, Aug. 12, 2016: Due to an editing error, this story originally misstated the source who sent the writer the acknowledgments section of Luke Dittrich's book. It was the assistant director of publicity at Random House, not Dittrich.
A pleasant-looking young man begins to grind his hips to a basic R&B backbeat, as a buxom girl gazes out along a bridge and wrinkles her nose at the camera. The young man starts to sing, “It's somethin' about you, girl … ”—and instantly one realizes something is horribly, delightfully wrong. His voice sounds like a sozzled toddler on a bungee cord. When a “harmony” comes in, it's just another, lower, equally wobbly vocal line that tangles around the first. The dancing becomes more and more lost-looking. The PG-13 lyrics are lisped and gulped into sonic oatmeal.
This is San Antonio's Daniel Mcloyd, aka IceJJFish, and the video is “On the Floor.” It was posted in February 2014. Today it is verging on 50 million views.
Debates go round and round on the YouTube comments thread and various subreddits over whether Mcloyd is kidding, “trolling” for clicks. There is no evidence he is anything but straight-up. Meanwhile, his own hometown alt-weekly, the San Antonio Current, worried IceJJFish was a signal of social deterioration, of “a time when someone can get just as famous, if not more so, with a confident and terrible voice as they can with a good voice.”
Over its 11-year tenure, YouTube has done its damnedest to corroborate that claim, from early viral hits such as Tay Zonday's “Chocolate Rain” and Reh Dogg's “Why Must I Cry?” through “your boy” Bangs and the mournful D4NNY. Then of course there is the Joan of Arc of inept YouTube singers, Rebecca Black. The then-13-year-old's 2011 Los Angeles vanity-studio production “Friday” has insinuated itself so deeply into people's musical cortices that it will be a must on wedding-reception playlists through at least the 2020s. Its irresistibility left us at sea: Had we embraced it for its gormless awfulness or for a kind of hidden pop greatness that made it somehow, well, gormful?
This is one of the opportunities that “bad” music can permit us: a mini-liberation from the usual bounds of taste. There is only listening, perhaps in an unusually pure form.
The risk is that we might be listening cruelly, in freak-show mode, staring down upon some deficiency. Is the performer “in” on the joke, sharing a parallel pleasure? Consider how many of the singers in that list of YouTube smash-flops are “foreigners” or people of color, or in Black's case a young girl. Evaluating singing is especially sensitive because the act is so vulnerable, issuing directly from the body, via the mysterious workings of breath, throat, and tongue. The singing in a real sense is the singer.
That said, any alarm that our current appetite for vocal destruction means our culture has become a uniquely degraded idiocracy cries out for a historical reality check. And this week it has come, in the form of the new film Florence Foster Jenkins, directed by Stephen Frears and starring Meryl Streep as the real-life New York heiress and arts patron who became known in the 1940s as the world's worst opera singer.
For decades, in private recitals, Jenkins charged at the repertoire's most challenging arias like a blind, braying, three-legged horse in a steeplechase, rarely clearing a musical hurdle. She was indulged and protected by her society friends. Then, in her 70s, she slipped up. She made supposedly private recordings of her singing at a studio-for-hire (the wartime equivalent of Rebecca Black's Ark Music Factory), and they began to circulate, including to local radio. Worse, she rented out Carnegie Hall to fulfill her dream of singing there. This time, no one could keep the public away, and they packed the rows to gawk and laugh.
Jenkins died soon after, arguably of her shattered illusions. But the legend of her high-spirited haplessness continued to grow via recordings, articles, books, stage plays, documentaries, and now this film (as well as a recent French movie more loosely based on her, Marguerite). Hers was a slow-motion version of viral notoriety, a Rebecca Black or IceJJFish for the analog age.
I first heard about her in the late 1980s and early 1990s, a time when fans of “underground” rock tended to fetishize past examples of “incredibly strange music” as precursors of the deliberately raw, discordant, and alienated sounds of post-punk and its kin—while no doubt engaging in some immature snickering, too. It's impossible to draw the line.
On college radio, Jenkins' scattershot rendition of Mozart's “Queen of the Night” aria might play alongside the “exotica” of “Peruvian princess” Yma Sumac and avant-garde jazz by Sun Ra's Arkestra. They'd be joined by the Shaggs, the home-schooled New Hampshire sisters whose father forced them to become a 1960s band. The trio's so-wrong-they're-right harmonies and shifting-sands timing on naïve compositions like “My Pal Foot Foot” or “Who Are Parents?” have astonished generations of musical safarists from Frank Zappa to Sonic Youth and beyond: It's just been announced that their 1968 album Philosophy of the World is getting a deluxe vinyl reissue on the highly curated Light in the Attic label in September.
But this has never been exclusively a sophisticate's game. “Bad” singing was commonplace novelty fare on TV talk and variety shows from the 1950s through the 1970s, under the watch of Ed Sullivan, Johnny Carson, and the Smothers Brothers—whether in the warbling falsetto and ukulele strums of the comic-poignant Tiny Tim or the very Jenkins-esque strains of California housewife and grandma “Mrs. Miller,” who was put up by producers to sing the latest pop songs and even hippie drug anthems in her Edith Bunker-esque caterwaul.
The legacy stretches back even further, at least to the stages of 1890s vaudeville, where the temperance-crusading Iowa farm women the Cherry Sisters had to perform behind wire screens to protect them from the tomatoes and other projectiles rocketing at them from the crowd. Like IceJJFish, they inspired endless speculation about whether their clunky song-and-dance routines could really be done unawares, but if they were play-acting, then they safeguarded the secret to the ends of their lives.
This question of intention haunts the history of bad singing, and there have been successful fakes, such as the deliberately drecky lounge-music duo Jonathan and Darlene Edwards, who in real life were the accomplished conductor-arranger Paul Weston and his wife, the easy-listening pop singer Jo Stafford. But they take the act too far, in the pattern of the overdone “Hollywood tone deaf” singing we so often hear as a gag from movie and TV characters. Streep, a verifiably good singer, is much better in Florence Foster Jenkins—not caricaturing her model's mistakes but conveying that she is trying her best and not succeeding, as anyone might.
That it's so difficult to sing “badly” well is another reminder of the inherent instability of the concept. Every great singer is a bad singer sometimes. A friend reminded me of the superb moment near the end of Janet Jackson's 1995 single “Runaway” when she sings “I just know we'll have a good time,” then comments, “Uh, didn't quite hit the note—that wasn't such a good time.” In perhaps the finest book about the complications of loving voices, The Queen's Throat, Wayne Koestenbaum recounts how the operatic deity Maria Callas had terrible off nights, especially as she grew older and developed a “flap” in her high range. But these flaws, this fissure between her greatness and her humanity, made her admirers adore her all the more. This is not so far from the affection Jenkins' go-for-it fallibility bred, as Streep's robed and tiara-crowned performances show us: She was the complete diva, in everything but ability.
The popular music canon is full of singers who aren't conventionally tuneful or precise, particularly after the loosening up brought by rock and, to greater extremes, the celebrations of transgressive amateurism in punk and hip-hop. The human race is split between people who can't stand the nasal swooping of Bob Dylan's voice and people who think his counterintuitive phrasing and stresses make him one of the great expressive interpreters. He's lately been on a campaign to prove the latter with his two albums covering Tin Pan Alley standards, slyly setting himself up in competition with Frank Sinatra.
The other night, my mother was talking about how much she loved Leonard Cohen's songs, but only on the page or when performed by other people—his own voice, to her, was just a monotonous, rushing drone, “bluh-bluh-bluh-bluh.” I wanted to retort that I find his delivery at once witty and moving, and more so in his old age. I think of him in the lineage of bardic recitation and plainsong. But I knew I'd never win. Cohen himself makes the joke on “Tower of Song,” declaring with a groan, “I was born like this. I had no choice./ I was born with the gift of a golden voice.”
When I mentioned to friends that I was working on this article, more than one quoted a line from another mumbly poet-singer, David Berman, of Pavement cohorts the Silver Jews: “All my favorite singers couldn't sing.” This is admittedly my instinctive allegiance, with all my post-punk, boho, Velvet Underground, DIY, bluh-bluh-bluh baggage.
And that's without even accounting for the further-out vocal styles of, for instance, Yoko Ono, who mesmerized me at first hearing. Her flutters and screams are frequently on hit lists of awful singing, which makes sense if you assume she's supposed to be measured next to the Beatles, instead of next to traditional Asian theater music, avant-garde opera à la Alban Berg, the performance-art experiments of her Fluxus art-movement peers, and even labor pains. Her accidental entry into the pop marketplace through John Lennon made her accessible as a role model in particular to young women looking to unleash their wilder voices through the decades.
Sometimes “bad” singing is just down to changing times and fashions, the way that 1920s boy-toy crooners such as Rudy Vallee can sound ridiculously mannered to modern ears (see Mr. Show's “Monsters of Megaphone” sketch), while today's mushy “indie girl voice” provokes the ire of more senior ears.
Your favorite voice always will be someone else's most despised (as I've discussed at length elsewhere). If it's the expansive contours of Björk, there are others to whom she's formless and grating. If it's the soaring crescendos of Whitney Houston, I'm afraid there are many—though it's less often admitted since her tragic death—to whom her version of “I Will Always Love You” was an over-the-top, exhausting travesty of Dolly Parton's original. If it's Aretha Franklin … well, maybe you're on safe ground.
Canadian journalist Tim Falconer recently delved into what “bad” singing means from a very personal angle, with his new book, Bad Singer. Questioning if friends and family were right to call him “tone deaf” ever since childhood, Falconer simultaneously takes up singing lessons and undergoes scientific testing of his theoretical musical potential.
Many people who believe they're tone deaf are merely unpracticed. But Falconer, to his regret, learns that he does register as part of the estimated 2.5 percent of the population that is “amusic”—unable to detect or produce pitches accurately. The worst amusia cases are like the late author and neuroscientist Oliver Sacks' French colleague François Lhermitte, who told him that when he heard music, “he could say only that it was ‘The Marseillaise,' or that it was not,” or his patient who said music sounded like nothing but the clatter of pots and pans. Given this, the researchers are surprised to hear that Falconer is an avid music fan who's devoted much of his life to records and concerts.
After two years of fitful and frustrating study, he does improve enough to brave his way through a couple of songs (the Beatles' “Blackbird” and Joe Strummer's “Silver and Gold”) at a living-room concert for friends. More vitally, he comes to appreciate how crucial an emotional connection with a song is; that connection can make a song land even if you're wavering or speak-singing. He tells me he's discovered that he responds most to the elements of music less dependent on pitch, such as timbre, texture, and dynamics.
Despite appearances, Florence Foster Jenkins probably wasn't amusic: She was talented enough to have been a piano prodigy in her childhood, known as “Little Miss Foster,” and invited to perform at the White House. Her abilities may have degenerated, as the movie heavily hints, due to nerve damage brought about by the syphilis she contracted from her first husband on her wedding night, or alternatively by the mercury and arsenic she took to treat it. There was also a head injury in a car accident (unmentioned in the film). All reminders of that fleshy, tender embodiedness of the voice again.
On the other hand, Jenkins might simply have been overambitious and too stubborn to admit it—Hank Williams and Woody Guthrie were gifted singers, but they likely wouldn't have fared well with Wagner and Strauss, either. For that matter, Pavarotti would have sounded ridiculous in a honky-tonk. Technical incompetence is one way of singing badly, but badness can also come down to context and occasion.
Karaoke is our culture's leading haven for democratically bad singing. But at the weekly event at a former veterans' club I attended in British Columbia this winter, my old friend Colin took to performing anodyne numbers such as “Don't Worry, Be Happy” and “Tea for Two” in a gruff, atonal, disaffected bark. To our gaggle of mostly transplanted East Coasters, it was perfectly hilarious. To the local karaoke host Stacie, who could knock out a devastating version of a belter like k.d. lang's “Constant Craving” on a dime, it was a barely tolerable abuse of her show—not good-bad, just obnoxious. I'm still not sure which of us was in the right. I also have no idea if Colin could sing on-key if he tried.
As well as music so bad it's good, there can be music so technically good that it's bad, because it reads as pretentious or self-indulgent. This is something that chafes nonfans of American Idol, The Voice, and other televised singing competitions—that the oversinging and Mariah Carey-wannabe vocal curlicues used by contestants to show off their skills can come at the expense of meaning and soul. I felt a similar tension while binging a reality series about college a cappella competitions called Sing It On! I was startled whenever the participants referred to themselves as “musicians,” as I'd been appreciating their routines more like Olympic gymnastics or (as the title suggests) high-level cheerleading.
American Idol also became infamous for the “bad auditions” segments in each season's opening episodes. The breakout star of this brutal ritual was William Hung, whose awkward rendition of Ricky Martin's “She Bangs” in 2004 led to an unexpected audience petition for repeat performances and a temporary career on the talk-show and county-fair circuit. (Today he's reportedly an administrator with the L.A. County Sheriff's Department.) That Hung was being demeaned and patronized as an amusing Asian mascot is depressingly clear, foreshadowing many of the YouTube viral videos as well as South Park's cartoon treatment of the Hong Kong-born immigrant singer Wing, who is also an actual human.
Still, like most of the un-Idols, Hung was an enthusiastic participant. His gameness, like Jenkins', genuinely endeared him to many followers. Scholars such as Katherine Meizel and Matthew Wheelock Stahl have written about how Idol's bad auditions weren't only ritual sacrifices, but a necessary flipside to the success fantasy Idol promulgated: It showed that even the losers believed deeply in the terms of its supposed meritocracy, thus proving its validity. Viewers got to size up candidates and vicariously judge them wanting or worthy, the kind of power bosses and authority figures more often wield over our own economic lives.
Singing can also be “bad” but not failed, thanks to cult followings, if it is sufficiently esoteric, hermetic, and poetically doomed. Artists such as the Shaggs, the reclusive but prolific mail-order recording artist Jandek, and the gifted but dangerously bipolar singer-songwriter Daniel Johnston are all often lumped into the category of “outsider music,” which is meant as an honor, albeit a problematic one: Outside of what and compared to whom? (As I've written before, Michael Jackson in his way was a more thoroughgoing outsider than Syd Barrett or Captain Beefheart.)
Yet there is also a kind of “bad” singing that designates insiderness. At its crassest, the Hollywood version of this was memorialized in the 1990s Golden Throats series of compilations of actors such as William Shatner or Telly Savalas “singing” Beatles or Johnny Cash songs—celebrities doing whatever they wanted, competence be damned. Similarly there's the nepotism of being married or born into featured-singer status, as witness the late Linda McCartney's usually buried, jumble-sale vocal parts.
A much more resonant example is drag-queen camp and its later queer heirs such as Kiki and Herb, who manipulate the so-bad-it's-good blend to destabilize boundaries and toy subversively with straight culture for the benefit of fellow code-switching adepts. But it could also include the in-crowd effects of the casually “off” singing of 1970s outlaw-country singer-songwriters such as Kris Kristofferson, as well as hardcore screaming, metal growling, and the shouts, yelps, and yawps of punks and post-punks.
Then there's the don't-give-a-fuck, off-key singing that has been spreading through hip-hop for more than a decade, in a line descending from Biz Markie's old-school classic “Just a Friend” through Kanye West's 808s and Heartbreak to Lil Wayne, Future, and Young Thug. (Off to one side is Drake, set apart as in so many things by the fact that he actually seems to care about hitting the notes.) Their drawls and mumbles are often denatured and hybridized via liberal doses of Auto-Tune, the pitch-correction software that at its highest settings causes tones to rip, skip, chirp, and yodel erratically. These effects have metaphorical overtones, invoking both trauma to black bodies and a vision of futuristic mutation and escape. (They're equally popular in Jamaican dancehall, and Asian and African music.) But up front, like so many gestures in hip-hop, they mark of-the-moment “realness” and a warning to stay the hell out if you can't handle it.
It's conspicuous that most of these in-groups are dudes' domains. Acceptably performing femininity in this culture still mandates that you look and sing pretty, even if you're as fierce as Beyoncé. The one big crack in that edifice was punk, which—especially in Britain, with the likes of the Slits and X-Ray Spex's Poly Styrene and the Raincoats, but in America, too—unleashed many unruly, angry women's voices from the tyranny of tunefulness. As Pitchfork's excellent survey on “feminist punk” confirmed this week, it has continued doing so through riot grrrl to this day, though much more for white women than anyone else.
Celebrating “bad” singing can be a way of opening up space for society's “bad subjects,” the misfits of gender, sexuality, racial hierarchy, and other confining norms. Still, assuming that noise and dissonance always denote a positive kind of rebellion would be too cheerfully literal-minded. Think of the dystopian violence of neo-Nazi punks and others whom 1970s critic Lester Bangs called “white noise supremacists.” There are morally bad kinds of singing, too, from the declamatory timbre of propaganda anthems to the debased, mugging slander of blackface minstrelsy. Not every impulse deserves to soar on wings of song.
The Florence Foster Jenkins movie ends with a vision of the heroine's own conception of her Carnegie Hall experience, blissfully innocent of its imperfections. Even if that was so, which we can never know, should we give a big hug to Jenkins' deluded freedom? Or should we side with the audience members who might have laughed because through the medium of this rich, oblivious clown they felt like they were witnessing the indecent exposure of the naked emperor? Jenkins' long musical honeymoon, after all, was sustained through a bought-and-paid-for conspiracy of silence among New York elites.
The movie's central moral is that bad singing matters less than bad listening, an ungenerous unwillingness to hear humanity in all its difference and richness. This is broadly true—ethnomusicology has shown how different cultures and eras have vastly varied ideas of harmony—but it's also too easy, too American Idol, to say that sincerity and striving are all that count. Like the 2014 Oscar winner Birdman, this film portrays critical thought—as displayed in the New York papers' appalled reviews of Jenkins' performance—as its ultimate villain, the slayer of hope.
I can't help but think of the current election cycle, in which the dissonant noise virtuoso Donald Trump is doing his best to drown out the careful sheet-music follower Hillary Clinton.
As Lauren Berlant wrote in her brilliant Trump essay in the New Inquiry this week, “Trump's response to what he has genuinely seen is, analytically speaking, word salad. Trump is sound and fury and garble. Yet—and this is key—the noise in his message increases the apparent value of what's clear about it. The ways he's right seem more powerful, somehow, in relief against the ways he's blabbing.” It's the same way Jenkins' many wrong notes make you give her extra credit for the ones she hits.
Most of all, though, “the second thing about Trump is that Trump is free.” It doesn't matter that Trump's version of freedom has little to do with any potential freedom of our own. We just thrill to hear him going off-book, off-pitch, in ways that so few politicians in our focus-grouped, vocal-coached lifetimes would dare.
Still, that doesn't turn every melody-mangling talent-night amateur into a proto-Trump. When we ask whether something is “bad,” the best response is “bad for what?” The question of bad singing leads to wondering, What is singing for?
In Bad Singer, Falconer outlines several evolutionary theories: that singing developed as a means of sexual display, or for emotional signaling, for group bonding, or perhaps for nothing—as a side effect of language and other functions. The prominent evolutionary psychologist Steven Pinker famously called it “auditory cheesecake.”
If it's all about sex or cheesecake, then bad singing would have little value. But if it's more about how we reveal ourselves to one another and forge communities, as so much of human religious and tribal history suggests, then whether you can match pitch or not matters way less than being in the game.
That's how I felt watching the 2008 documentary Young@Heart, about a Massachusetts-based senior citizens' chorus that became renowned for its unexpectedly rowdy covers of rock, punk, and soul. The participants don't necessarily sing well, but like Florence Foster Jenkins, no one can deny they sing. It's one of the most emotionally overwhelming movies I've ever seen. Several members die during the filming, and it builds to a climax in which one of the oldest choristers, beset with multiple afflictions, sings Coldplay's “Fix You” onstage with a breathing tube between his lips, dedicating it to his recently fallen friends: “When you lose something you can't replace … ” I'd never liked that song. But this man's rendition remade it in every syllable and inflection as a hymn to human dignity, all to the arrhythmic, gasping pulse of his oxygen tank.
“Good” or “bad” never entered my mind.