Then it brought forward Klan terrorism and Jim Crow in the South; now it has brought to power the most overtly racist President since Woodrow Wilson, openly catering to a white revanchist base. It’s a depressing prospect, and Gates is properly depressed and depressing about it.
The broad outlines of the Reconstruction story have long been familiar, though the particular interpretive pressures put on particular moments have changed with every era. Toward the end of the war, Washington politicians debated what to do with the millions of newly freed black slaves. Lincoln, after foolishly toying with colonization schemes, had settled on black suffrage, at least for black soldiers who had fought in the war. (It was a speech of Lincoln’s to this effect that sealed his assassination: John Wilkes Booth, hearing it, said, “That means nigger citizenship. Now, by God, I’ll put him through.”)
After Lincoln’s death, his hapless and ill-chosen Vice-President, Andrew Johnson, did as much as he could to slow the process of black emancipation in the South, while the “radical” core of the abolitionist Republicans in Congress tried to advance it, and, for a while, succeeded. Long dismissed as destructive fanatics, they now seem to be voices of simple human decency. Thaddeus Stevens, the Vermont-born abolitionist congressman from Pennsylvania, proposed shortly after the war’s end, in his “Lancaster” speech, a simple policy: punish the rebel leaders; treat the secessionist states as territories to be supervised by Congress, thus protecting the new black citizens; take the confiscated plantations on which masters had worked slaves like animals, and break up those plantations into forty-acre lots for the ex-slaves to own (a form of the classic “forty acres and a mule”). That this minimally equitable plan was long regarded as “radical” says something about how bent toward injustice the conversation quickly became.
Freed slaves eagerly participated in the first elections after the war, and distinguished black leaders went to Congress. The 1872 lithograph of “The First Colored Senator and Representatives,” by Currier & Ives, no less, shows seven black men given the full weight of mid-century Seriousness, including the first black senator from Mississippi, Hiram Rhodes Revels.
But white state governments steadily reconstituted themselves. By the eighteen-nineties, they were passing laws that, piece by piece, reclaimed the right to vote for whites alone. All of this was made worse by one of those essentially theological “constitutional” points which American professors and politicians love to belabor. Lincoln’s argument was always that, since it was unconstitutional for states to secede on their own, the rebel states had never seceded. The rebels were not an enemy nation; they were just a mob with a flag waiting to be policed, and the Union Army was the policeman. The idea was to limit any well-meaning attempt at negotiation, and to discourage foreign powers from treating the Confederacy as a separate state. After the war, though, this same idea implied that, since the state governments had never gone out of existence, their reborn legislatures could instantly reclaim all the rights enjoyed by states, including deciding who could vote and when.
As Stevens pointed out, the reasoning that says that no states seceded because the Constitution won’t allow it would also say that no man can ever commit murder because the law forbids it. “Black Codes” were put in place in most Southern states that, through various means, some overt and some insidious (anti-vagrancy statutes were a particular favorite), limited the rights of blacks to work and to relocate. The legislative reconquest was backed by violence: the Ku Klux Klan, formed as a terrorist organization by ex-Confederate officers, began murdering and maiming assertive black citizens. In 1877, after a mere dozen years in which black suffrage and racial equality were at least grudgingly accepted national principles, the federal government pulled its last troops from the South and, in what could be called the Great Betrayal, an order of racial subjugation was restored.
It’s a story with fewer pivotal three-day battles than the war fought over slavery, but its general shape is oddly similar: after a stunning series of victories and advances in the early years by the “rebels”—in this case, egalitarian forces—the armies of Reconstruction began to fall victim to the sheer numbers of the opposing side and to the exhaustion of their allies and reserves. Some battles, both real and rhetorical, do stand out. There were the arguments in Congress, pitting newly minted and almost impossibly eloquent black representatives against ex-Confederate politicians who a few years earlier had been sending hundreds of thousands of young men to their death in order to preserve the right to keep their new colleagues in perpetual servitude. There was the so-called Battle of Liberty Place, in New Orleans in 1874, a riot on behalf of the White League, a gang of ex-Confederate soldiers who sought to oust Louisiana’s Republican governor and its black lieutenant governor. In a moment of extraordinary moral courage, as worthy of a film as any Civil War battle, James Longstreet, the most capable of General Lee’s Confederate lieutenants, agreed to lead municipal police, including black officers, to put down the white riot and restore the elected government. He knew what it would cost him in status throughout the old Confederacy, but he did it anyway, because it was the right thing to do. Naturally, the city’s monument to the attempted coup bore an inscription that conveyed the White League’s point of view, and, sobering fact, it was scarcely two years ago that the racist memorial to the riot finally came down—with a police escort to protect the movers.
Gates emphasizes that Reconstruction was destroyed not by white terrorism alone but also by a fiendishly complicated series of ever more enervating legal and practical assaults. The Supreme Court played a crucial role in enabling the oppression of newly freed blacks, while pretending merely to be protecting the constitutional guarantee of states’ rights—one more instance in which “calling balls and strikes” means refusing to see the chains on the feet of the batter. The overtly racist decision in Plessy v. Ferguson (1896) arrived long after the worst was already done, but it sealed the earlier discrimination in place, and Jim Crow thrived for another half century. Meanwhile, at least some of those Northern liberal abolitionists—including the likes of Henry Adams and the well-meaning Horace Greeley—managed, in the way of high-minded reformers, to let their pieties get the better of their priorities: recoiling against the apparent improprieties of the pro-suffrage Grant Administration, they made common cause with the Democrats who were ending democracy in the South. “When, therefore, the conscience of the United States attacked corruption,” W. E. B. Du Bois wrote in his classic 1935 study, “Black Reconstruction in America,” in many ways the most astute account of the period ever produced, “it at the same time attacked in the Republican Party the only power that could support democracy in the South. It was a paradox too tragic to explain.”
Gates is one of the few academic historians who do not disdain the methods of the journalist, and his book (which accompanies a four-hour PBS series he has made on the subject) is flecked with incidental interviews with and inquiries of other scholars, including the great revisionist historian Eric Foner. Though this gives the book a light, flexible, talking-out-loud texture, it is enraging to read—to realize how high those hopes were, how close to being realized, how rapidly eradicated. That Currier & Ives lithograph of the black legislators, which Gates reproduces, takes on almost unbearable pathos. The last black U.S. representative from North Carolina was forced out of office in 1901—and there would not be another until 1992. The eclipse of formal black political power happened, in significant part, by violence. The historian David Blight estimates that, in 1867 and 1868, something like ten per cent of the blacks who attended constitutional conventions in the South were attacked by the Klan.
Gates quickly moves beyond the immediate political context of black disenfranchisement to tell the sad story of how an ideology that justified racism as science, and bigotry as reason, grew and governed minds across the country. There’s the pseudoscientific racism promulgated by Louis Agassiz, of Harvard, who sought to show that blacks belonged to a separate, inferior species; the repellent but pervasive popular cartoon spectre of the black defilement of white women; the larger ideology of shame that also assigned to black men a childlike place as grinning waiters and minstrels. When they weren’t raping white women, they were clowning for white kids.
The historical literature that arose to defend white supremacy was soon accepted as a chronicle of truths, especially in the countless sober-seeming memoirs of the former leaders of the slave states, including Jefferson Davis, the President of the Confederacy, who insisted that slavery was a side issue in a states’-rights war. The “Lost Cause” took on popular literary form in Thomas Dixon’s novel “The Clansman,” which became the basis for D. W. Griffith’s 1915 “The Birth of a Nation,” the first great American feature film. In Griffith’s Reconstruction, blacks, many played by white actors in blackface, are either menaces or morons (black legislators of the kind depicted in that lithograph spend their time in the statehouse drinking and eating), and are, thankfully, routed by the Klan—shown dressing in sheets because they have grasped the primitive African fear of ghosts.
It is still difficult to credit how long the Lost Cause lie lasted. Writing in the left-wing Nation, James Agee, the brilliant film critic and the author of the text for “Let Us Now Praise Famous Men,” could announce, in 1948, that “Griffith’s absolute desire to be fair, and understandable, is written all over the picture; so are degrees of understanding, honesty, and compassion far beyond the capacity of his accusers. So, of course, are the salient facts of the so-called Reconstruction years.” Even as late as the nineteen-sixties, the Harvard historian Samuel Eliot Morison, in his “Oxford History of the American People,” then a standard text, called for “ten thousand curses on the memory of that foulest of assassins, J. Wilkes Booth”—but for a surprising reason. “Not only did he kill a great and good President; he gave fresh life to the very forces of hate and vengeance which Lincoln himself was trying to kill,” Morison wrote. “Had Lincoln lived, there is every likelihood that his magnanimous policy towards the South would have prevailed; for, even after his death, it almost went through despite the Radicals.” The thought that the failure of Reconstruction had been its insufficient attention to the feelings and the interests of the white majority—like the thought that “The Birth of a Nation” should be considered to hold the “salient facts” of Reconstruction—strikes us now as astounding, but it was orthodox textbook history and criticism for an unimaginably long time, and among people who believed themselves to be progressive.
A turn in the historiography has happened, though. Reading Richard White’s volume “The Republic for Which It Stands,” in the new Oxford History of the United States, we could not be further from an aggrieved account of how mean Reconstruction was to the South. White, writing with a microscopically attentive eye to the fine shadings of the period, gives a full picture of terror rampant, justice recumbent, and liberty repressed. Curiously, however, he uses the old vocabulary of disdain, designating pro-Reconstruction Southern whites as “scalawags” and pro-Reconstruction Northerners as “carpetbaggers,” just as their enemies styled them. (What are the limits of appropriating a derogatory vocabulary? It is fine to call painters who had no desire to give us their impressions Impressionists, but it somehow feels unfair to use epithets that imply bad intentions where one can find purposes largely good.)
Could things have gone otherwise? Contingency counts and individuals matter. When it came to the exacting task of managing the postwar settlement, it’s hard to imagine a worse successor than Andrew Johnson. Chosen in the good-enough-to-balance-the-ticket way that Vice-Presidents so often were, right up through Harry Truman, Johnson was openly racist, poorly educated, and bad-tempered. But President Grant followed President Johnson, and Grant, as Ron Chernow showed in his recent biography, tried very hard for a while to end the terror and to maintain what were already being called civil rights. His Attorney General, Amos Akerman, declared that the Ku Klux Klan was “the most atrocious organization that the civilized part of the world has ever known,” and helped bring in more than eleven hundred convictions against it. In 1872, the year of that glorious lithograph, the Klan was, as Chernow says, “smashed in the South.”
Yet even that hardly helped. One mistake the North made was to allow the Confederate leadership to escape essentially unscathed. Lincoln’s plea for charity and against malice was admirable, but it left out the third term of the liberal equation: charity for all, malice to none, and political reform for the persecutors. The premise of postwar de-Nazification, in Germany, was a sound one: you had to root out the evil and make it clear that it was one, and only then would minds change. The gingerly treatment of the secessionists gave the impression—more, it created the reality—that treason in defense of slavery was a forgivable, even “honorable,” difference of opinion. Despite various halfhearted and soon rescinded congressional measures to prevent ex-Confederate leaders from returning to power, many of them didn’t just skip out but skipped right back into Congress.
One might at first find it inspiring to read the gallant and generous 1874 remarks of Robert Brown Elliott, a black congressman representing South Carolina, as he defended civil rights against Representative Alexander Stephens, of Georgia, the former Vice-President of the Confederacy. Elliott’s voice is so ringing and defiant, and at the same time so uncannily courteous. “Let him put away entirely the false and fatal theories that have so greatly marred an otherwise enviable record,” he declared, addressing Stephens. “Let him accept, in its fullness and beneficence, the great doctrine that American citizenship carries with it every civil and political right which manhood can confer.” But then one recalls Abraham Lincoln’s beseeching letters to Stephens in 1860, between his election and his Inauguration, seeking some possible compromise before war came. Stephens then made it plain that slavery was the only thing at issue, and its permanent perpetuation the only demand that could never be compromised. What the hell was he doing back there in Congress, one wonders, after all that death and suffering? He should have counted himself lucky not to have been hanged. But he was there and, soon enough, Elliott wasn’t.
Surprisingly few in the educated classes in the South had the foresight to recognize that reform was needed for the South’s own sake. Du Bois reproduces an 1866 speech from Governor Brownlow, of Andrew Johnson’s own state of Tennessee, in which he stated bluntly, “I am an advocate of Negro suffrage, and impartial suffrage. I would rather associate with loyal Negroes than with disloyal white men. I would rather be buried in a Negro graveyard than in a rebel graveyard.” Yet Robert E. Lee—subsequently ennobled for not actually leading a backwoods guerrilla campaign—never made a statement accepting the new order, never said, in the language of the time, something like: “A great struggle has gone on, and Providence has settled the question on the anti-slavery side. We must now accept these men as citizens and comrades, if not fully as brothers.”
One Confederate general who did make the turn was Longstreet, a genuinely heroic figure. The only member of Lee’s inner circle at Gettysburg who was smart enough to grasp that Lee’s aggressive strategy, and thus Pickett’s Charge, was doomed in advance, he was also smart enough to see that the strategy of permanent segregation was ultimately ill-fated. Yet the broader legacy of Pickett’s Charge is part of the story, too. Fifty thousand casualties in three days at Gettysburg: for us, those are numbers; for their countrymen, it was fifty thousand fathers and sons and brothers wounded or dead. War weariness is essential to the shape of the postwar collapse. The hope that, in 1870, even a well-intended cohort of former abolitionists would focus properly on the denial of civil rights to blacks in the South was morally ambitious in a way that is not entirely realistic. Richard White, like many others, points to the retreat on the part of Northern liberals from aggressively advocating for black rights, while perhaps not sufficiently stressing one good reason for it: the unimaginable brutality many had experienced in fighting the war. In ways that Louis Menand explored in his book “The Metaphysical Club,” it left a generation stripped of the appetite for more war-making and even (as Menand has argued) of any confidence in moral absolutism. The horror of the Civil War made it difficult to accept that more fighting might be necessary to secure its gains. Nothing is easier to spark than an appetite for war, and nothing harder to sustain than a continued appetite for war once a country learns what war is really like. War hunger and war hatred are parts of the same cycle of mass arousal and inhibition.
The other brutality lay in the strange demographics of race in America: basically, the black people were in the South, and their natural allies were in the North. Even today, African-Americans form a huge nation, almost forty-four million people—bigger than Australia or Canada—but they also represent only about thirteen per cent of the U.S. population, never large enough to act without allies. In the postwar period, clustered in the South, they found that their chief ethnic allies were far away. This demographic paradox—a population large enough to be terrifying to the majority population nearby but not large or concentrated enough to claim its own national territory—was part of the tragedy, and increased the brutality by increasing the fear. The adjusted percentage of the Jewish population in Poland before the Holocaust was similar, and had similar implications: enough to loom large in the minds of their haters, not enough to be able to act without assistance in the face of an oppressor.
Gates goes on to illuminate the complex efforts of black intellectuals, in the face of the reimposition of white rule, to find a sane and safe position against it. The “New South” was met by the “New Negro,” a phrase that arose in the eighteen-nineties. The emancipated, educated, fully literary black bourgeoisie would undeniably be a full citizen. This urge to “earn” full citizenship by effort instead of by claiming it as a birthright seems forlorn now, a product of minds exposed so long to toxic bigotry that some of it had seeped inside and curdled into self-hatred. But, as Gates shows, it was possible to be entirely committed to the rights of black people while still being convinced of the need for education to uplift them—indeed, while still voicing sympathy for the travails of the defeated South. Hiram Rhodes Revels, the black senator from Mississippi, who is on the left in that Currier & Ives lithograph, blamed Republican interlopers for bringing racial discord to the South, writing to Grant in 1875 that, “since Reconstruction, the masses of my people have been, as it were, enslaved in mind by unprincipled adventurers, who, caring nothing for country, were willing to stoop to anything, no matter how infamous, to secure power to themselves, and perpetuate it. . . . The bitterness and hate created by the late civil strife has, in my opinion, been obliterated in this state, except perhaps in some localities, and would have long since been entirely obliterated, were it not for some unprincipled men who would keep alive the bitterness of the past, and inculcate a hatred between the races, in order that they may aggrandize themselves by office.” Revels himself left his Senate seat after a year and became the head of the newly formed Alcorn University, devoting the rest of his life to educational uplift.
It is easy to regard leaders like Revels (including, later, the electorally reticent Booker T. Washington) as “Uncle Toms”—a term that, Gates notes, doesn’t become pejorative until the next century. But their reading of the circumstance assumed, optimistically, that once blacks had earned equality they would be treated equally. They believed passionately that the ex-slave population, degraded by centuries of slavery, needed to be educated into the professions. The New Negro, as he emerged in the twentieth century, was so narrowly focussed on literary and scholarly accomplishment that he tended, Gates insists, to neglect the most astounding cultural achievement of his own country and kin. “There was, in fact, a genuine renaissance occurring during the Harlem literary renaissance, but it wasn’t among the writers,” Gates observes. “The renaissance was occurring among those great geniuses of black vernacular culture, the musicians who created the world’s greatest art form in the twentieth century—jazz.” The New Negroes were hardly alone among aspirational Americans in the pathos and dignity of their respectability; one sees the same attempt to outwit the oppressor by becoming like the oppressor among the lace-curtain Irish or the stained-glass Jews. Indeed, combining the New Negro emphasis on formal education with a more capacious understanding of the riches of black inheritance was a task that, Gates understands, had to be left for later generations, not least his own.
Revisionism always risks revising right out of existence not just the old, too rosy account but also the multi-hued reality. Here there are lessons we can take from Du Bois’s extraordinary, prophetic history. For the curious thing is that Du Bois pays more attention to the enduring legacy of Reconstruction than have many of his revisionist successors. At a time when the era had been reduced to the D. W. Griffith fable of illiterate blacks conspiring with opportunistic whites, Du Bois wanted to assert the lasting value and significance of what had been achieved in the all too brief period of black political enfranchisement. We couldn’t understand the enormity of the betrayal, Du Bois thought, if we didn’t understand the magnitude of what was betrayed. So, along with the horrors of terrorism and the slow crawl of renascent white supremacy, Du Bois also registers the accomplishments that Reconstruction created in its brief moment: public-health departments were established where none had existed before; public education for blacks began—miserably underfunded, but, still, there were schools where less than a decade before it had been a crime for a slave to learn to read. This is a view that Foner shares as well. As he writes, “Although black schools and colleges remained woefully underfunded, education continued to be available to most African Americans. And the autonomous family and church, pillars of the black community that emerged during Reconstruction, remained vital forces in black life, and the springboard from which future challenges to racial injustice would emerge.”
It’s also why Frederick Douglass, in ways that seem puzzling to us now, was not so single-mindedly incensed about the Great Betrayal as one might have expected. Described by his detractors as simply having lost the appetite for the fight, in truth he must have had a clear enough memory of what chattel slavery had been like not to confuse it with subjection. The oppressed—blacks on their land, Jews in their shtetl—can build cultural fellowships that ease their burden and point a path out. The enslaved—blacks in the cabins, Jews in the camps—have no plausible path at all. It is at once not enough of a difference and all the difference in the world.
Du Bois tries strenuously to fit the story of the end of Reconstruction into a Marxist framework: the Southern capitalists were forcing serfdom upon their agricultural laborers in parallel to the way that the Northern ones were forcing it on their industrial workers. His effort is still echoed in some contemporary scholarship. But an agricultural class reduced to serfdom is exactly the kind of stagnant arrangement that capitalism chafes against. Sharecropping is not shareholding. When the entrepreneurial white South wanted to assert its departure from the antebellum order, it invoked a South emancipated from the planter classes and, in a slogan from the next century, now “too busy to hate.” At the same time, the agrarian rhetoric of the restored South was always an anti-modernist rhetoric, antagonistic toward bourgeois free enterprise. (That the so-called “Southern Agrarian” school later assembled some of America’s leading literary modernists is among the long-term ironies in the story.)
In truth, sharecropping, coupled with a cotton monoculture, was a terrible model for economic development, and, indeed, left the South long impoverished. Du Bois poises “property and privilege” against “race and culture” as causes that led to the reconquest of the South by white supremacy, and, though his Marxist training insists that it must somehow all be property and privilege, his experience as an American supplies a corrective afterthought or two. The motives of the South were, as Du Bois eventually suggests, essentially ideological and tribal, rather than economic. He recognized that, in a still familiar pattern, poor whites “would rather have low wages upon which they could eke out an existence than see colored labor with a decent wage,” and saw in “every advance of the Negroes a threat to their racial prerogatives.” It is the same formula of feeling that makes the “white working class” angrier at the thought that Obamacare might be subsidizing shiftless people of color than receptive to the advantages of having medical coverage for itself. Du Bois called it a “psychological wage,” but this is to give a Marxist-sounding name to a non-Marxist phenomenon: ethnic resentment and clan consciousness are social forces far more powerful than economic class. It reflects the permanent truth that all people, including poor people, follow their values, however perverted, rather than their interests, however plain.
There’s no era in which thought is monolithic, and late-nineteenth-century America was probably as disputatious as any era has been. Gates charts the growth of Social Darwinism as well as the “biological” racism of Louis Agassiz—but it’s worth emphasizing that Agassiz was a racist because he was fervently anti-Darwinian. His student William James, on a naturalist’s expedition with him to Brazil, saw through his prejudices. There was no shortage of radical egalitarian thought at the time, coming from figures who were by no means marginalized. Thaddeus Stevens chose to be buried in a black cemetery, with the inscription on his stone reading “Finding other Cemeteries limited as to Race by Charter Rules, I have chosen this that I might illustrate in my death, the Principles which I advocated through a long life: equality of man before his creator.”
And then the most famous American text by the most famous American writer of the period was Mark Twain’s “Adventures of Huckleberry Finn,” which, published in the eighteen-eighties and set half a century earlier, manages to take in all the stereotypes of the post-Reconstruction era (Jim is a type of the comic Negro) while complicating them in ways that remain stirring, and ending with an unequivocal gesture toward the equality of black and white, when Huck decides that he will go to Hell rather than betray a black friend. When the right side loses, it does not always mean that the truth has not been heard. We are too inclined to let what happens next determine the meaning of what happened before, and to suppose that the real meaning of Reconstruction was its repudiation. It’s a style of thought that sees the true meaning of dinner as the next day’s hunger and the real meaning of life as death. And yet yesterday’s good deeds remain good even if today’s bad ones occlude them.
There is plenty of cause to denounce the liberal institutions of the era, North and South and West, in the face of the reënslavement of the era’s black people. But, even reading White’s fiercely disabused history of the period, one can still be astonished by the degree to which liberal institutions worked to curb the worst social sadism that, until then, had been a commonplace of human history. It can be helpful to expand the historical scale just a tad. Although the failure of the Republic to sustain its ideals is appallingly self-evident, elections involving millions of people were held routinely, if imperfectly; venal bosses like Boss Tweed, instead of passing power on to their sons, were tried and imprisoned; Jews worshipped freely; freethinkers flourished; immigrants settled; reformers raged against corruption, and, in a few key cases, won their battle; dissent, even radical dissent, was aired and, though sporadically persecuted, was, on the whole, heard and tolerated. No arrangement like it had ever been known before on so large a scale in human history. Compared with the system’s ambitions and pretensions, it was as nothing. But, compared with the entirety of human history before, it was, in its way, quite something.
What is true and tragic is that the black population benefitted least of all from these institutions. Yet the same more than flawed institutions, in turn, enabled freed slaves, as Foner maintains, to build the social capital that would allow them to find ways around the supremacists. How did that happen? One turns back to Gates’s best book, the incandescent memoir “Colored People,” with its evocation of Piedmont, West Virginia, in the nineteen-fifties and sixties. Gates is clear-eyed about the patterns of bigotry that still obtained—only he would see that “Leave It to Beaver” was, above all, a television show about property—but he provides an intimate and affectionate sense of how all the richness of clan connection becomes cultural connection, of how the world of his childhood was illuminated by profound family relations and an enormously bountiful cultural heritage, in music, certainly, but in dance and literature and, yes, athleticism, too. (Athletics because it was the one place, he says, where blacks and whites directly butted heads, and blacks won.)
Accepting Gates’s observation that jazz, and the popular music that flowed from it and through it, is the greatest of American inventions, we have to recognize both the bigotry that impeded it and the extraordinary self-emerging social institutions that empowered it. Every life of a great jazz musician shows us both—social sadism beyond belief to be endured, but also social networks of support, filled with intimately collaborative and competitive relationships, artists both supporting and outdoing one another—the creation of the great cutting contest that E. H. Gombrich long ago identified as the core engine of artistic progress. The most influential of American musicians, Louis Armstrong, suffered from bigotry in New Orleans, but there was the Colored Waif’s Home to teach him the cornet, a sympathetic Jewish émigré family with a thriving tailor shop to help him buy one, a talent contest at the Iroquois Theatre that a poor black boy could win, and even a saloon where he could go to hear, and later be hired by, the great King Oliver. In the town where the white mob had lynched blacks to end their freedom, the black victims had improvised institutions to enable it. Sustaining traditions were available, at a price.
The moral arc of the universe is long. Eight years of Obama may be followed by eight of Trump, but the second cannot annihilate the first. At one point in “Stony the Road,” Gates writes wisely of images as weapons. Imagery can indeed have agency, but this takes actors—bad actors who weaponize the imagery. Anti-Semitic caricatures had persisted for centuries; Der Stürmer’s anti-Semitic cartoons had to be weaponized by Hitler. Patterns of oppression can be held in place only by oppressive people.
This is why the greatest divide among historians is between the academics who tend to see people as points of compressed social forces and those popular historians, chiefly biographers, who see the actors as nearly the whole of the story. The academics study the tides of history, while the popular historians go out fishing to find (and tag) the big fish that presumably make the ocean worth watching. The tidalists have the tenure, but the fishermen sell all the books. Gates, who is expert at both, catching fish while seeing tides, leaves us with a simple, implicit moral: a long fight for freedom, with too many losses along the way, can be sustained only by a rich and complicated culture. Resilience and resistance are the same activity, seen at different moments in the struggle. It’s a good thought to hold on to now. ♦
Alan: I have been a frequent critic of Mormonism but have always admired its core belief in "Ongoing Revelation." If, as Christians believe, "The New Testament" supersedes "The Old Testament," why wouldn't Christianity's God behave in accordance with "His" own precedent? And lest we forget, the words "abortion" and "homosexuality" do not appear in the four canonical gospels.
LGBTQ: Stunning News From The Church Of Latter Day Saints Concerning God's "Ongoing Revelation"
In 2012 I wrote in support of two years obligatory national service for every American, with (arguable) exceptions for quadriplegics and people so severely deranged that they depend on daily administration of anti-psychotic medication.
It might even prove that obligatory national service for "everyone" -- including physically and emotionally "disabled" people -- is "just what the doctor ordered."
Finally, we would have maximum mainstreaming of people who are currently marginalized and discarded.
Here is how I see our need for service (and the promotion of service orientation) across the entire socio-economic spectrum.
"Proposal For Two Years Obligatory National Service"
Danny makes some good arguments here (if I actually believed we needed a military at all). His pro-war sentiments shine through, although they are well hidden in his suspicions of U.S. foreign policy goals. In many ways, Danny is the problem -- he has the unrealistic expectation that the money makers of the military industrial complex and the war-makers may actually be forced to change their ways if coerced by "smarter" soldiers who want to save their own asses. Wishful thinking. Patrick
Was Ending the Draft a Grave Mistake?
By Maj. Danny Sjursen
I spent last week at Angelo State University in remote central Texas as a panelist for the annual All-Volunteer Force (AVF) Forum. It was a strange forum in many ways, but nonetheless instructive. I was the youngest (and most progressive) member, as well as the lowest-ranking veteran among a group of leaders and speakers that included two retired generals, the chief of staff to former Secretary of State Colin Powell, a few former colonels and several academics. Despite having remarkably diverse life experiences and political opinions, all concluded that America’s all-volunteer military is not equitable, efficient or sustainable. The inconvenient truth each of the panel participants had the courage to identify is that the end of the draft in the U.S. had many unintended—and ultimately tragic—consequences for the republic.
The oft-praised U.S. military is, disturbingly, the most trusted public institution in the country. These days, active service members and veterans are regularly paraded before an otherwise apathetic citizenry at nearly every sporting event. Public figures and private citizens alike fawn over and obsessively thank the troops at every possible opportunity. It seems strange, however, that Americans are so hyperproud of their military, seeing as it neither reflects society nor achieves national objectives overseas. After all, the military accounts for only about 0.5 percent of Americans and, as recent statistics indicate, the Army is falling well short of its recruiting goals. Not to mention that for all the vacuous pageantry and celebrations of a military that is increasingly divergent from civil society, few seem to ask an important question: When was the last time the AVF won a war?
The AVF is ultimately an unfair, ineffective and unsustainable organization charged with impossible, ill-advised missions by policymakers and a populace that actually care rather little for the nation’s soldiers. As the AVF nears its 50th anniversary, there’s no better time than now to assess the model’s flaws and its effect on American democracy.
Way back when the U.S. military was bogged down in an unwinnable, immoral and ill-advised war in Vietnam, newly elected President Nixon faced a serious problem. Tricky Dick, as he is sometimes known, wanted to prolong and escalate the war into Cambodia and Laos in order to achieve “peace with honor”—in other words, to seem tough and save some American face. Only the growing domestic anti-war movement that was gaining influence in Congress stood in his way. No doubt cynically, but also astutely, Nixon surmised that fear of individual conscription largely motivated youthful anti-war activists. Thus, in a Faustian bargain, he helped end the draft and usher in a brand-new all-volunteer force. Surprisingly, his gambit worked, and the steam blew out of the anti-war movement over time. Today, it is with the same all-volunteer force Nixon left us that the U.S. wages war across the breadth of the planet.
Terribly Unfair
Proponents of the volunteer military force promised equity. No longer would the poorest Americans be forced to serve in foreign wars. Rather, only those who truly wanted to serve the nation would do so. On the surface, this seemed intuitive. What actually happened was a different matter entirely. In a sort of economic draft, the military mostly began to draw servicemen from the third and fourth income quintiles. Those who needed the money the military offered and were lured by modest cash bonuses would serve, while the wealthiest, perhaps unsurprisingly, opted out. This meant the U.S. elites would no longer serve and, in fact, would become almost totally absent from the new AVF. The tiny percentage that would serve America’s neo-imperial war machine wouldn’t reflect U.S. society at all. Today, volunteers are far more rural, Southern and likely to hail from military families than their civilian peers. Thus, an unrepresentative warrior caste—not the citizens’ Army that won World War II—became the norm.
While most Americans and their political leaders seem completely fine with the glaring injustice inherent in such a system, those who serve have had to deal with the consequences. America’s soldiers have been subject to multiple combat tours, while reserve and National Guard troops have been activated for war at record rates. As a result, post-traumatic stress disorder and suicide rates have skyrocketed, topping out recently at 22 self-inflicted veteran deaths per day. To keep these soldiers “in the fight,” damaged individual troopers were loaded up with psychotropic medications and sent back overseas. What’s more, a dangerous civil-military gap opened between the vast majority of Americans—who are rarely exposed to real soldiers—and a military that has learned to resent the populace. And when a military becomes so professionalized and distant as to resemble a Roman Praetorian Guard, the republic is undoubtedly in peril.
Wildly Inefficient
Contrary to early optimistic promises, the U.S. military since 1973 sports a poor efficiency record. Especially since 9/11—the real first test of the new system—American armed forces have produced exactly zero victories. Prior to the World Trade Center attack, it can be argued that a much larger AVF crushed Saddam Hussein’s poorly led and equipped Iraqi army in 1991, but it’s important to remember that that war was an anomaly—Saddam’s troops fought us in an open desert, without any air support, and according to the conventional tactics the U.S. military had been training against for years. I also refuse to count the imperial punishments inflicted on Panama (1989) and Grenada (1983) as victories, because neither was a necessary or even a fair fight. Besides, the invasion of the tiny island of Grenada was more fiasco than triumph.
Worst of all, the AVF is inefficient because it enables the militarization of U.S. foreign policy, ensuring high costs and much wear and tear on equipment and personnel. The lack of a draft means the loss of what the co-founder of the AVF Forum, retired Maj. Gen. Dennis Laich, calls “skin in the game.” When the citizenry isn’t subjected to the possibility of military service, it becomes apathetic, ignores foreign affairs and fails to pressure Congress to check presidential war powers. Without this check, president after president—Democrats and Republicans—have centralized control of foreign affairs, resulting in increasingly imperial presidencies. With its huge budget, professional flexibility and can-do attitude, the military has become the primary—in some ways, the only—tool in America’s arsenal as presidents move living, breathing soldiers around the world like so many toy soldiers.
Completely Unsustainable
The AVF could also bankrupt us, or, at the very least, crash the economy. Thanks to the influence of the military-industrial complex and the militarization of foreign policy, U.S. defense budgets have soared into the stratosphere. At present, America spends more than $700 billion on defense—a figure greater than all domestic discretionary spending and larger than the combined budgets of the next seven largest militaries. As the Vietnam War should have taught us, skyrocketing military spending without concurrent tax increases often results in not only massive debt but crippling inflation. After 18 years of forever global war without any meaningful increase in taxation on the nation’s top earners, get ready for the next crash. Trying to stay a hegemon (a dubious proposition in the first place) with rising deficits and a paralyzing national debt is a recipe for failure and, ultimately, disaster.
As recent recruitment shortfalls show, getting volunteers may not be a sustainable certainty either. This also increases costs—the military has had to train more full-time recruiters, pay cash bonuses for enlistment and retention, and hire extremely expensive civilian contractors to fill in operational gaps overseas. Nor can the AVF count on getting the best and brightest Americans in the long term. With elites opting out completely and fewer Americans possessing the combination of capacity—only 30 percent of the populace is physically/mentally qualified for the military—and propensity to serve, where will the military find the foot soldiers and cyberwarriors it needs in the 21st century?
In sum, throughout this century the U.S. military has won zero wars, achieved few, if any, “national goals” and cost Americans $5.9 trillion in tax dollars, more than 7,000 troop deaths, and tens of thousands more wounded soldiers. It has cost the world 480,000 direct war-related deaths, including 244,000 civilians, and created 21 million refugees. Talk about unsustainable.
An Unpopular Proposal
At the recent forum, Laich proposed an alternative to the current volunteer system. To ensure fairness, efficiency and sustainability, the U.S. could create a lottery system (with no college or other elite deferments) that gives draftees three options: serve two years on active duty right after high school, serve six years in the reserves or go straight to college and enroll in the ROTC program. Whether or not one agrees with this idea, it would create a more egalitarian, representative, affordable and sustainable national defense tool. Furthermore, with the children of bankers, doctors, lawyers and members of Congress subject to service, the government might think twice before embarking on the next foolish, unwinnable military venture.
Few Americans, however, are likely to be comfortable delegating the power of conscription to a federal government they inherently distrust. Still, paradoxically, the move toward a no-deferment, equitable lottery draft might result in a nation less prone to militarism and adventurism than the optional AVF has produced. Parents whose children are subject to military service, as well as young adults themselves, might prove to be canny students of foreign policy who would actively oppose the next American war. Imagine that: an engaged citizenry that holds its legislators accountable and subsequently hits the streets to oppose unnecessary and unethical war. Ironic as it may seem, more military service may actually be the only workable formula for less war. Too bad returning to a citizens’ military is as unpopular as it is unlikely.
Millions of Americans have taken antidepressants for many years. What happens when it’s time to stop?
Rachel Aviv
Laura Delano recognized that she was “excellent at everything, but it didn’t mean anything,” her doctor wrote. She grew up in Greenwich, Connecticut, one of the wealthiest communities in the country. Her father is related to Franklin Delano Roosevelt, and her mother was introduced to society at a débutante ball at the Waldorf-Astoria. In eighth grade, in 1996, Laura was the class president—she ran on a platform of planting daffodils on the school’s grounds—and among the best squash players in the country. She was one of those rare proportional adolescents with a thriving social life. But she doubted whether she had a “real self underneath.”
The oldest of three sisters, Laura felt as if she were living two separate lives, one onstage and the other in the audience, reacting to an exhausting performance. She snapped at her mother, locked herself in her room, and talked about wanting to die. She had friends at school who cut themselves with razors, and she was intrigued by what seemed to be an act of defiance. She tried it, too. “The pain felt so real and raw and mine,” she said.
Her parents took her to a family therapist, who, after several months, referred her to a psychiatrist. Laura was given a diagnosis of bipolar disorder, and prescribed Depakote, a mood stabilizer that, the previous year, had been approved for treating bipolar patients. She hid the pills in a jewelry box in her closet and then washed them down the sink.
She hoped that she might discover a more authentic version of herself at Harvard, where she arrived as a freshman in 2001. Her roommate, Bree Tse, said, “Laura just blew me away—she was this golden girl, so vibrant and attentive and in tune with people.” On her first day at Harvard, Laura wandered the campus and thought, This is everything I’ve been working for. I’m finally here.
She tried out new identities. Sometimes she fashioned herself as a “fun, down-to-earth girl” who drank until early morning with boys who considered her chill. Other times, she was a postmodern nihilist, deconstructing the arbitrariness of language. “I remember talking with her a lot about surfaces,” a classmate, Patrick Bensen, said. “That was a recurring theme: whether the surface of people can ever harmonize with what’s inside their minds.”
During her winter break, she spent a week in Manhattan preparing for two débutante balls, at the Waldorf-Astoria and at the Plaza Hotel. She went to a bridal store and chose a floor-length strapless white gown and white satin gloves that reached above her elbows. Her sister Nina said that, at the Waldorf ball, “I remember thinking Laura was so much a part of it.”
Yet, in pictures before the second ball, Laura is slightly hunched over, as if trying to minimize the breadth of her muscular shoulders. She wears a thin pearl necklace, and her blond hair is coiled in an ornate bun. Her smile is pinched and dutiful. That night, before walking onstage, Laura did cocaine and chugged champagne. By the end of the party, she was sobbing so hard that the escort she’d invited to the ball had to put her in a cab. In the morning, she told her family that she didn’t want to be alive. She took literally the symbolism of the parties, meant to mark her entry into adulthood. “I didn’t know who I was,” she said. “I was trapped in the life of a stranger.”
Before Laura returned to Harvard, her doctor in Greenwich referred her to a psychiatrist at McLean Hospital, in Belmont, Massachusetts. One of the oldest hospitals in New England, McLean has treated a succession of celebrity patients, including Anne Sexton, Robert Lowell, James Taylor, and Sylvia Plath, who described it as “the best mental hospital in the US.” Laura’s psychiatrist had Ivy League degrees, and she felt grateful to have his attention. In his notes, he described her as an “engaging, outgoing, and intelligent young woman,” who “grew up with high expectations for social conformity.” She told him, “I lie in my bed for hours at a time staring at the wall and wishing so much that I could be ‘normal.’ ”
The psychiatrist confirmed her early diagnosis, proposing that she had bipolar II, a less severe form of the disorder. Laura was relieved to hear the doctor say that her distress stemmed from an illness. “It was like being told, It’s not your fault. You are not lazy. You are not irresponsible.” After she left the appointment, she felt joyful. “The psychiatrist told me who I was in a way that felt more concrete than I’d ever conceptualized before,” she said. “It was as though he could read my mind, as though I didn’t need to explain anything to him, because he already knew what I was going to say. I had bipolar disorder. I’d had it all along.” She called her father, crying. “I have good news,” she said. “He’s figured out the problem.”
She began taking twenty milligrams of Prozac, an antidepressant; when she still didn’t feel better, her dose was increased to forty milligrams, and then to sixty. With each raised dose, she felt thankful to have been heard. “It was a way for me to mark to the world: this is how much pain I am in,” she said. Laura wasn’t sure whether Prozac actually lifted her mood—roughly a third of patients who take antidepressants do not respond to them—but her emotions felt less urgent and distracting, and her classwork improved. “I remember her carrying around this plastic pillbox with compartments for all the days of the week,” a friend from high school said. “It was part of this mysterious world of her psychiatric state.”
At parties, she flirted intently, but by the time she and a partner were together in bed, she said, “I’d kind of get hit with this realization that I was physically disconnected. And then I’d feel taken advantage of, and I would kind of flip out and start crying, and the guy would be, like, ‘What the heck is going on?’ ” Most antidepressants dampen sexuality—up to seventy per cent of people who take the medications report this response—but Laura was ashamed to talk about the problem with her psychiatrist. “I assumed he’d see sexuality as a luxury,” she said. “He’d be, like, ‘Really? You have this serious illness, and you’re worried about that?’ ”
During her junior year, her pharmacologist raised her Prozac prescription to eighty milligrams, the maximum recommended dose. The Prozac made her drowsy, so he prescribed two hundred milligrams of Provigil, a drug for narcolepsy that is often taken by soldiers and truck drivers to stay awake during overnight shifts. The Provigil gave her so much energy that, she said, “I was just a machine.” She was on the varsity squash team and played the best squash of her life. She was so alert that she felt as if she could “figure people out,” unpacking the details of their identities: she imagined that she could peer into their childhoods and see how their parents had raised them.
The Provigil made it hard for Laura to sleep, so her pharmacologist prescribed Ambien, which she took every night. In the course of a year, her doctors had created what’s known as “a prescription cascade”: the side effects of one medication are diagnosed as symptoms of another condition, leading to a succession of new prescriptions. Her energy levels rose and fell so quickly that she was told she had a version of bipolar disorder called “rapid cycling,” a term that describes people who have four or more manic episodes in a year, but is also applied, more loosely, to people who shift dramatically between moods. Sometimes Laura thought, Women who are happy and socialize like to buy dresses. She’d go to Nordstrom and buy two or three dresses. She recognized that this behavior was “textbook”—she had bought her own copy of the Diagnostic and Statistical Manual of Mental Disorders—but the awareness didn’t prevent the purchases.
Laura felt that the pressures of her junior year were paralyzing, so she did not return for the spring semester. That summer, she kept a journal in which she outlined her personal goals: “overanalysis must go”; “stop molding myself to the ideal person for my surroundings”; “find some faith in something, in anything.” But the idea of returning to Harvard that fall made her so distressed that she thought every day about dying. She took the semester off, and, at her request, her parents drove her to a hospital in Westchester County, New York. A psychiatrist there wrote that she “presents with inability to function academically.” At the hospital, where she stayed for two weeks, she was put on a new combination of pills: Lamictal, a mood stabilizer; Lexapro, an antidepressant; and Seroquel, an antipsychotic that she was told to use as a sleep aid. Her father, Lyman, said, “I had no conviction that the drugs were helping. Or that they weren’t helping.”
Laura returned to Harvard and managed to graduate, an achievement she chalked up to muscle memory; she was the kind of student who could regurgitate information without absorbing it. Then she held a series of jobs—working as an assistant for a professor and for a state agency that issued building permits—that she didn’t believe would lead to a career. She experienced what John Teasdale, a research psychologist at the University of Oxford, named “depression about depression.” She interpreted each moment of lethargy or disappointment as the start of a black mood that would never end.
Psychiatric diagnoses can ensnare people in circular explanations: they are depressed because they are depressed.
Over the next four years, her doctors tripled her antidepressant dosage. Her dosage of Lamictal quadrupled. She also began taking Klonopin, which is a benzodiazepine, a class of drugs that has sedative effects. “What I heard a lot was that I was ‘treatment-resistant,’ ” she said. “Something in me was so strong and so powerful that even these sophisticated medications couldn’t make it better.”
For a brief period, Laura saw a psychiatrist who was also a psychoanalyst, and he questioned the way that she’d framed her illness. He doubted her early bipolar diagnosis, writing that “many depressions are given a ‘medical’ name by a psychiatrist, ascribing the problem to ‘chemistry’ and neglecting the context and specificity of why someone is having those particular life problems at that particular time.” He reminded her, “You described hating becoming a woman.” Laura decided that “he wasn’t legit.” She stopped going to her appointments.
She rarely saw friends from high school or college. “At a certain point, it was just, Oh, my God, Laura Delano—she’s ill,” the friend from high school said. “She seemed really anesthetized.” Laura had gained nearly forty pounds since freshman year, which she attributes partly to the medications. When she looked in the mirror, she felt little connection to her reflection. “All I ever want to do is lie in my bed, cuddle with my dog, and read books from writers whose minds I can relate to,” she wrote to a psychiatrist. “That’s all I ever want to do.” She identified intensely with Plath, another brilliant, privileged, charismatic young woman who, in her journal, accuses herself of being just another “selfish, egocentric, jealous and unimaginative female.” Laura said that, when she read Plath’s work, she “felt known for the first time.”
Laura found a psychiatrist she admired, whom I’ll call Dr. Roth. At appointments, Laura would enter a mode in which she could recount her psychic conflicts in a cool, clinical tone, taking pride in her psychiatric literacy. She saw her drugs as precision instruments that could eliminate her suffering, as soon as she and Dr. Roth found the right combination. “I medicated myself as though I were a finely calibrated machine, the most delicate error potentially throwing me off,” she later wrote. If she had coffee with someone and became too excited and talkative, she thought, Oh, my God, I might be hypomanic right now. If she woke up with racing thoughts, she thought, My symptoms of anxiety are ramping up. I should watch out for this. If they last more than a day or two, Dr. Roth may have to increase my meds.
The day before Thanksgiving, 2008, Laura drove to the southern coast of Maine, to a house owned by her late grandparents. Her extended family was there to celebrate the holiday. She noticed relatives tensing their shoulders when they talked to her. “She seemed muted and tucked away,” her cousin Anna said. When Laura walked through the house and the old wooden floorboards creaked beneath her feet, she felt ashamed to be carrying so much weight.
On her third day there, her parents took her into the living room, closed the doors, and told her that she seemed trapped. They were both crying. Laura sat on a sofa with a view of the ocean and nodded, but she wasn’t listening. “The first thing that came into my mind was: You’ve put everyone through enough.”
She went to her bedroom and poured eighty milligrams of Klonopin, eight hundred milligrams of Lexapro, and six thousand milligrams of Lamictal into a mitten. Then she sneaked into the pantry and grabbed a bottle of Merlot and put the wine, along with her laptop, into a backpack. Her sisters and cousins were getting ready to go to a Bikram-yoga class. Her youngest sister, Chase, asked her to join them, but Laura said she was going outside to write. “She looked so dead in her eyes,” Chase said. “There was no expression. There was nothing there, really.”
There were two trails to the ocean, one leading to a sandy cove and the other to the rocky coast, where Laura and her sisters used to fish for striped bass. Laura took the path to the rocks, passing a large boulder that her sister Nina, a geology major in college, had written her thesis about. The tide was low, and it was cold and windy. Laura leaned against a rock, took out her laptop, and began typing. “I will not try to make this poetic, for it shouldn’t be,” she wrote. “It is embarrassingly cliché to assume that one should write a letter to her loved ones upon ending her life.”
She swallowed a handful of pills at a time, washing them down with red wine. She found it increasingly hard to sit upright, and her vision began to narrow. As she lost consciousness, she thought, This is the most peaceful experience I’ve ever had. She felt grateful to be ending her life in such a beautiful place. She fell over and hit her head on a rock. She heard the sound but felt no pain.
When Laura hadn’t returned by dusk, her father walked along the shoreline with a flashlight until he saw her open laptop on a rock. Laura was airlifted to Massachusetts General Hospital, but the doctors said they weren’t sure that she would ever regain consciousness. She was hypothermic, her body temperature having fallen to nearly ninety-four degrees.
After two days in a medically induced coma, she woke up in the intensive-care unit. Her sisters and parents watched as she opened her eyes. Chase said, “She looked at all of us and processed that we were all there, that she was still alive, and she started sobbing. She said, ‘Why am I still here?’ ”
After a few days, Laura was transported to McLean Hospital, where she’d been elated to arrive seven years earlier. Now she was weak, dizzy, sweating profusely, and anemic. Her body ached from a condition called rhabdomyolysis, which results from the release of skeletal-muscle fibres into the bloodstream. She had a black eye from hitting the rock. Nevertheless, within a few days she returned to the mode she adopted among doctors. “Her eye contact and social comportment were intact,” a doctor wrote. Although she was still disappointed that her suicide hadn’t worked, she felt guilty for worrying her family. She reported having a “need to follow rules,” a doctor wrote. Another doctor noted that she did not seem to meet the criteria for major depression, despite her attempted suicide. The doctor proposed that she had borderline personality disorder, a condition marked by unstable relationships and self-image and a chronic sense of emptiness. According to her medical records, Laura agreed. “Maybe I’m borderline,” she said.
She was started on a new combination of medications: lithium, to stabilize her moods, and Ativan, a benzodiazepine, in addition to the antipsychotic Seroquel, which she had already been taking. Later, a second antipsychotic, Abilify, was added—common practice, though there was limited research justifying the use of antipsychotics in combination. “It is tempting to add a second drug just for the sake of ‘doing something,’ ” a 2004 paper in Current Medicinal Chemistry warns.
Shortly before Laura was discharged, she drafted a letter to the staff on her unit. “I truly don’t know where to begin in putting in words the appreciation I feel for what you’ve all done to help me,” she wrote. “It’s been so many years since I’ve felt the positive emotions—hope, mostly—that have flooded over me.” Unpersuaded by her own sentiment, she stopped the letter midsentence and never sent it.
Laura moved back home to live with her parents in Greenwich and spent her nights drinking with old friends. She told her psychiatrist, “I don’t feel grounded. . . . I am floating.” Her father encouraged her to “try to reach for one little tiny positive thought, so you can get a little bit of relief.” When she couldn’t arrive at one, he urged her, “Just think of Bitsy,” their cairn terrier.
When it was clear that positivity was out of reach, Laura began seeing a new psychiatrist at McLean, who embraced the theory that her underlying problem was borderline personality disorder. “It is unclear whether she has bipolar (as diagnosed in the past),” he wrote.
The concept of a borderline personality emerged in medical literature in the nineteen-thirties, encompassing patients who didn’t fit into established illness categories. Describing a borderline woman, the psychoanalyst Helene Deutsch, a colleague of Freud’s, said, “It is like the performance of an actor who is technically well trained but who lacks the necessary spark to make his impersonations true to life.” In 1980, the diagnosis was added to the DSM, which noted that “the disorder is more commonly diagnosed in women.” One of its defining features is a formless, shifting sense of self. An editorial in Lancet Psychiatry this year proposed that “borderline personality disorder is not so much a diagnosis as it is a liminal state.”
In 2010, Laura moved in with her aunt Sara, who lived outside Boston, and attended a day-treatment program for borderline patients. “It was another offering of what could fix me, and I hadn’t tried it,” she said. At her intake interview, she wore stretchy black yoga pants from the Gap, one of the few garments that allowed her to feel invisible. She said that the director of the program told her, “So, you went to Harvard. I bet you didn’t think you’d end up at a place like this.” Laura immediately started crying, though she knew that her response would be interpreted as “emotional lability,” a symptom of the disorder.
Laura had been content to be bipolar. “I fit into the DSM criteria perfectly,” she said. But borderline personality disorder didn’t feel blameless to her. Almost all the patients in Laura’s group were women, and many had histories of sexual trauma or were in destructive relationships. Laura said that she interpreted the diagnosis as her doctors saying, “You are a slutty, manipulative, fucked-up person.”
Laura sometimes drank heavily, and, at the suggestion of a friend, she had begun attending Alcoholics Anonymous meetings. Laura was heartened by the stories of broken people who had somehow survived. The meetings lacked the self-absorption, the constant turning inward, that she felt at the clinic, where she attended therapy every day. When Laura’s pharmacologist prescribed her Naltrexone—a drug that is supposed to block the craving for alcohol—Laura was insulted. If she were to quit drinking, she wanted to feel that she had done it on her own. She was already taking Effexor (an antidepressant), Lamictal, Seroquel, Abilify, Ativan, lithium, and Synthroid, a medication to treat hypothyroidism, a side effect of lithium. The medications made her so sedated that she sometimes slept fourteen hours a night. When she slept through a therapy appointment, her therapist called the police to check on her at her aunt’s house. “That really jolted something in me,” Laura said.
In May, 2010, a few months after entering the borderline clinic, she wandered into a bookstore, though she rarely read anymore. On the table of new releases was “Anatomy of an Epidemic,” by Robert Whitaker, whose cover had a drawing of a person’s head labelled with the names of several medications that she’d taken. The book tries to make sense of the fact that, as psychopharmacology has become more sophisticated and accessible, the number of Americans disabled by mental illness has risen. Whitaker argues that psychiatric medications, taken in heavy doses over the course of a lifetime, may be turning some episodic disorders into chronic disabilities. (The book has been praised for presenting a hypothesis of potential importance, and criticized for overstating evidence and adopting a crusading tone.)
Laura wrote Whitaker an e-mail with the subject line “Psychopharms and Selfhood,” and listed the many drugs she had taken. “I grew up in a suburban town that emphasized the belief that happiness comes from looking perfect to others,” she wrote. Whitaker lived in Boston, and they met for coffee. Whitaker told me that Laura reminded him of many young people who had contacted him after reading the book. He said, “They’d been prescribed one drug, and then a second, and a third, and they are put on this other trajectory where their self-identity changes from being normal to abnormal—they are told that, basically, there is something wrong with their brain, and it isn’t temporary—and it changes their sense of resilience and the way they present themselves to others.”
At her appointments with her pharmacologist, Laura began to raise the idea of coming off her drugs. She had used nineteen medications in fourteen years, and she wasn’t feeling better. “I never had a baseline sense of myself, of who I am, of what my capacities are,” she said. The doctors at the borderline clinic initially resisted her requests, but they also seemed to recognize that her struggles transcended brain chemistry. A few months earlier, one doctor had written on a prescription pad, “Practice Self-Compassion,” and for the number of refills he’d written, “Infinite.”
Following her pharmacologist’s advice, Laura first stopped Ativan, the benzodiazepine. A few weeks later, she went off Abilify, the antipsychotic. She began sweating so much that she could wear only black. If she turned her head quickly, she felt woozy. Her body ached, and occasionally she was overwhelmed by waves of nausea. Cystic acne broke out on her face and her neck. Her skin pulsed with a strange kind of energy. “I never felt quiet in my body,” she said. “It felt like there was a current of some kind under my skin, and I was trapped inside this encasing that was constantly buzzing.”
A month later, she went off Effexor, the antidepressant. Her fear of people judging her circled her head in permutations that became increasingly invasive. When a cashier at the grocery store spoke to her, she was convinced that he was only pretending to be cordial—that what he really wanted to say was “You are a repulsive, disgusting, pathetic human.” She was overstimulated by the colors of the cereal boxes in the store and by the grating sounds of people talking and moving. “I felt as if I couldn’t protect myself from all this life lived around me,” she said.
She began to experience emotion that was out of context—it felt simultaneously all-consuming and artificial. “The emotions were occupying me and, on one level, I knew they were not me, but I felt possessed by them,” she said. Later, she found a community of people online who were struggling to withdraw from psychiatric medications. They’d invented a word to describe her experience: “neuro-emotion,” an exaggerated feeling not grounded in reality. The Web forum Surviving Antidepressants, which is visited by thousands of people every week, lists the many varieties of neuro-emotion: neuro-fear, neuro-anger, neuro-guilt, neuro-shame, neuro-regret. Another word that members used was “dystalgia,” a wash of despair that one’s life has been futile.
For many people on the forum, it was impossible to put the experience into words. “The effects of these drugs come so close to your basic ‘poles of being’ that it’s really hard to describe them in any kind of reliable way,” one person wrote. Another wrote, “This withdrawal process has slowly been stripping me of everything I believed about myself and life. One by one, parts of ‘me’ have been falling away, leaving me completely empty of any sense of being someone.”
It took Laura five months to withdraw from five drugs, a process that coincided with a burgeoning doubt about a diagnosis that had become a kind of career. When she’d experienced symptoms of depression or hypomania, she had known what to do with them: she’d remember the details and tell her psychiatrist. Now she didn’t have language to mark her experiences. She spent hours alone, watching “South Park” or doing jigsaw puzzles. When her aunt Sara updated the rest of the family about Laura, the news was the same: they joked that she had become part of the couch. Her family, Laura said, learned to vacuum around her. Had she come from a less well-off and generous family, she’s not sure she would have been able to go off her medications. Others in her situation might have lost their job and, without income, ended up homeless. It took six months before she felt capable of working part time.
Laura had always assumed that depression was caused by a precisely defined chemical imbalance, which her medications were designed to recalibrate. She began reading about the history of psychiatry and realized that this theory, promoted heavily by pharmaceutical companies, is not clearly supported by evidence. Genetics plays a role in mental disorder, as do environmental influences, but the drugs do not have the specificity to target the causes of an illness. Wayne Goodman, a former chair of the F.D.A.’s Psychopharmacologic Drugs Advisory Committee, has called the idea that pills fix chemical imbalances a “useful metaphor” that he would never use with his patients. Ronald Pies, a former editor of Psychiatric Times, has said, “My impression is that most psychiatrists who use this expression”—that the pills fix chemical imbalances—“feel uncomfortable and a little embarrassed when they do so. It’s kind of a bumper-sticker phrase that saves time.”
Dorian Deshauer, a psychiatrist and historian at the University of Toronto, has written that the chemical-imbalance theory, popularized in the eighties and nineties, “created the perception that the long-term, even life-long use of psychiatric drugs made sense as a logical step.” But psychiatric drugs are brought to market in clinical trials that typically last less than twelve weeks. Few studies follow patients who take the medications for more than a year. Allen Frances, an emeritus professor of psychiatry at Duke, who chaired the task force for the fourth edition of the DSM, in 1994, told me that the field has neglected questions about how to take patients off drugs—a practice known as “de-prescribing.” He said that “de-prescribing requires a great deal more skill, time, commitment, and knowledge of the patient than prescribing does.” He emphasized what he called a “cruel paradox: there’s a large population on the severe end of the spectrum who really need the medicine” and either don’t have access to treatment or avoid it because it is stigmatized in their community. At the same time, many others are “being overprescribed and then stay on the medications for years.” There are almost no studies on how or when to go off psychiatric medications, a situation that has created what he called a “national public-health experiment.”
Roland Kuhn, a Swiss psychiatrist credited with discovering one of the first antidepressants, imipramine, in 1956, later warned that many doctors would be incapable of using antidepressants properly, “because they largely or entirely neglect the patient’s own experiences.” The drugs could only work, he wrote, if a doctor is “fully aware of the fact that he is not dealing with a self-contained, rigid object, but with an individual who is involved in constant movement and change.”
A decade after the invention of antidepressants, randomized clinical studies emerged as the most trusted form of medical knowledge, supplanting the authority of individual case studies. By necessity, clinical studies cannot capture fluctuations in mood that may be meaningful to the patient but do not fit into the study’s categories. This methodology has led to a far more reliable body of evidence, but it also subtly changed our conception of mental health, which has become synonymous with the absence of symptoms, rather than with a return to a patient’s baseline of functioning, her mood or personality before and between episodes of illness. “Once you abandon the idea of the personal baseline, it becomes possible to think of emotional suffering as relapse—instead of something to be expected from an individual’s way of being in the world,” Deshauer told me. Adolescents who go on medications when they are still trying to define themselves may never know whether they have a baseline, or what it is. “It’s not so much a question of Does the technology deliver?” Deshauer said. “It’s a question of What are we asking of it?”
Antidepressants are now taken by roughly one in eight adults and adolescents in the U.S., and a quarter of them have been doing so for more than ten years. Industry money often determines the questions posed by pharmacological studies, and research about stopping drugs has never been a priority.
Barbiturates, a class of sedatives that helped hundreds of thousands of people to feel calmer, were among the first popular psychiatric drugs. Although leading medical journals asserted that barbiturate addiction was rare, within a few years it was evident that people withdrawing from barbiturates could become more anxious than they were before they began taking the drugs. (They could also hallucinate, have convulsions, and even die.)
Valium and other benzodiazepines were introduced in the early sixties, as a safer option. By the seventies, one in ten Americans was taking Valium. The chief of clinical pharmacology at Massachusetts General Hospital declared, in 1976, “I have never seen a case of benzodiazepine dependence” and described it as “an astonishingly unusual event.” Later, though, the F.D.A. acknowledged that people can become dependent on benzodiazepines, experiencing intense agitation when they stop taking them.
Selective serotonin reuptake inhibitors, or S.S.R.I.s—most prominently Prozac and Zoloft—were developed in the late eighties and early nineties, filling a gap in the market opened by skepticism toward benzodiazepines. S.S.R.I.s were soon prescribed not just for depression but for the nervous ailments that the benzodiazepines had previously addressed. (There had been other drugs used as antidepressants, but they had often been prescribed cautiously, because of concerns about their side effects.) As Jonathan Metzl writes, in “Prozac on the Couch,” S.S.R.I.s were marketed especially to female consumers, as drugs that would empower them at work while preserving the kind of feminine traits required at home. One advertisement for Zoloft showed a woman in a pants suit, holding the hands of her two children, her wedding ring prominent, next to the phrase “Power That Speaks Softly.” Today, antidepressants are taken by one in five white American women.
Concerns about withdrawal symptoms emerged shortly after S.S.R.I.s came to market, and often involved pregnant women who had been told to discontinue their medications, out of concern that the drugs could affect the fetus. A 2001 article in the Journal of Psychiatry & Neuroscience chronicled thirty-six women who were on either antidepressants, benzodiazepines, or a combination of the two, and who stopped taking the drugs when they became pregnant. A third of the patients said they felt suicidal, and four were admitted to a hospital. One had an abortion, because she no longer felt capable of going through with the pregnancy.
Internal records of pharmaceutical manufacturers show that the companies have been aware of the withdrawal problem. At a panel discussion in 1996, Eli Lilly invited seven experts to develop a definition of antidepressant withdrawal. Their findings were published in a supplement of the Journal of Clinical Psychiatry that was sponsored by Eli Lilly and was highly favorable to the company’s own product, Prozac, which has the longest half-life of all the S.S.R.I.s; the drug clears slowly from the body. The panelists observed that withdrawing from other antidepressants was more likely to lead to “discontinuation reactions,” such as agitation, detachment, “uncharacteristic crying spells and paralyzing sadness.” “Although generally mild and short-lived,” one paper in the supplement explained, “discontinuation symptoms can be severe and chronic.” The panel defined “discontinuation syndrome” as a condition that could be “rapidly reversed by the reintroduction of the original medication.”
Shortly after the Eli Lilly panel, SmithKline Beecham, which manufactured Paxil, distributed a memo to its sales team accusing Eli Lilly of “trying to hide” the withdrawal symptoms of its products. “The truth of the matter is that the only discontinuation syndrome Lilly is worried about is the discontinuation of Prozac,” the memo said. In another internal memo, SmithKline Beecham instructed staff to “highlight the benign nature of discontinuation symptoms, rather than quibble about their incidence.”
Guy Chouinard, a retired professor of psychiatry at McGill and at the University of Montreal, who served as a consultant for Eli Lilly for ten years and did one of the first clinical trials of Prozac, told me that when S.S.R.I.s came on the market he was thrilled to see his patients, previously crippled by self-doubt and fear, living tolerable and fulfilling lives. Chouinard is considered one of the founders of psychopharmacology in Canada. In the early two-thousands, he began to see patients who, after taking certain antidepressants for years, had stopped their medications and were experiencing what he described as “crescendo-like” anxiety and panic that went on for weeks and, in some cases, months. When he reinstated their medication, their symptoms began to resolve, usually within two days.
Most people who discontinue antidepressants do not suffer from withdrawal symptoms that last longer than a few days. Some experience none at all. “The medical literature on this is a mess,” Chouinard told me. “Psychiatrists don’t know their patients well—they aren’t following them long-term—so they don’t know whether to believe their patients when they say, ‘I’ve never had this experience in my life.’ ” He thinks that withdrawal symptoms, misdiagnosed and never given time to resolve, create a false sense that patients can’t function unless they go back on their drugs.
Giovanni Fava, a professor of psychiatry at the University of Buffalo, has devoted much of his career to studying withdrawal and has followed patients suffering from withdrawal symptoms a year after stopping antidepressants. A paper published last month in a journal he edits, Psychotherapy and Psychosomatics, reviewed eighty studies and found that in nearly two-thirds of them patients were taken off their medications in less than two weeks. Most of the studies did not consider how such an abrupt withdrawal might compromise the studies’ findings: withdrawal symptoms can easily be misclassified as relapse. Fava’s work is widely cited, yet he said that he has struggled to publish his research on this topic. To some degree, that makes sense: no one wants to deter people from taking drugs that may save their life or lift them out of disability. But to avoid investigating or sharing information on the subject—to assume that people can comprehend the drugs’ benefits and not their limits—seems to repeat a pattern of paternalism reminiscent of earlier epochs in the history of psychopharmacology.
David Taylor, the director of pharmacy and pathology at the Maudsley Hospital, in London, and the author of more than three hundred peer-reviewed papers, told me, “It is not as though we haven’t been burned by this before.” If he hadn’t experienced antidepressant withdrawal himself, Taylor said, “I think I would be sold on the standard texts.” But, he said, “experience is very different from what’s on the page.”
Taylor described his own symptoms of withdrawal, from the antidepressant Effexor, as a “strange and frightening and torturous” experience that lasted six weeks. In a paper published last month in Lancet Psychiatry, he and a co-author reviewed brain imaging and case studies on withdrawal and argued that patients should taper off antidepressants over the course of months, rather than two to four weeks, as current guidelines advise. Such guidelines are based on a faulty assumption that, if a dose is reduced by half, it will simply reduce the effect in the brain by half. The paper asserts that the increasing long-term use of antidepressants “has arisen in part because patients are unwilling to stop due to the aversive nature of the withdrawal syndrome.” But, Taylor told me, his research “wouldn’t stop me from recommending an antidepressant for someone with fully fledged major depression, because the relief of suffering is of a different order of magnitude than the symptoms when you stop taking them.”
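The nonlinearity Taylor points to can be sketched with the kind of hyperbolic dose-occupancy curve his Lancet Psychiatry paper draws on. The sketch below is only illustrative: the model is a generic hyperbola, and the ED50 value (the dose producing fifty-per-cent receptor occupancy) is a made-up number, not one taken from the paper or from any particular drug.

```python
# Illustrative only: a generic hyperbolic (Emax-style) dose-occupancy curve,
# with a hypothetical ED50 of 2 mg. Real values differ by drug.
ED50 = 2.0  # hypothetical dose (mg) giving 50% receptor occupancy

def occupancy(dose_mg):
    """Fraction of receptors occupied at a given dose under a hyperbolic model."""
    return dose_mg / (dose_mg + ED50)

for dose in (20, 10, 5, 2.5, 1.25, 0):
    print(f"{dose:>5} mg -> {occupancy(dose):.0%} occupancy")

# Halving 20 mg to 10 mg only lowers occupancy from about 91% to 83%,
# while the last couple of milligrams carry most of the remaining effect --
# the reason the paper argues for tapering in progressively smaller steps.
```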
In the fifth edition of the DSM, published in 2013, the editors added an entry for “antidepressant discontinuation syndrome”—a condition also mentioned on drug labels—but the description is vague and speculative, noting that “longitudinal studies are lacking” and that little is known about the course of the syndrome. “Symptoms appear to abate over time,” the manual explains, while noting that “some individuals may prefer to resume medication indefinitely.”
Three months after Laura stopped all her medications, she was walking down the street in Boston and felt a flicker of sexual desire. “It was so uncomfortable and foreign to me that I didn’t know what to do with it,” she said. The sensation began to occur at random times of day, often in public and in the absence of an object of attraction. “It was as if that whole part of my body was coming online again, and I had no idea how to channel it,” she said. “I felt occupied by this overwhelming power.” She had never masturbated. “I was, like, Why do people like this? It didn’t make sense.”
When she was thirty-one, she began a long-distance relationship with Rob Wipond, a Canadian journalist. Both of them became emotional when talking with me about Laura’s sexuality. Laura told me, “I felt like a newborn. I hadn’t ever figured out what my body was meant to be.” Rob said, “She was open and awake. Everything was new to her. We were, like, ‘Well, gee, what is this sexuality thing—what shall we do?’ ”
For years, Laura had been unable to have stable relationships—a symptom, she’d assumed, of borderline personality disorder. “I honestly thought that, because I was mentally ill, the numbness was just part of me,” she told me. “I looked at beautiful sex scenes in movies, and it never crossed my mind that this was in the cards for me.” Now she wondered about the effects of the many medications she had been taking. “On this very sensory, somatic level, I couldn’t bond with another human being,” she said. “It never felt real. It felt synthetic.”
Laura bought a book about women’s sexuality, and learned how to give herself an orgasm. “It took so long and I finally figured it out, and I just broke down in tears and called Rob, and I was, like, ‘I did it! I did it! I did it!’ ”
She felt fortunate that her sexuality had returned in a way that eluded other people who were withdrawing from drugs. Although it is believed that people return to their sexual baseline, enduring sexual detachment is a recurring theme in online withdrawal forums. Audrey Bahrick, a psychologist at the University of Iowa Counseling Service, who has published papers on the way that S.S.R.I.s affect sexuality, told me that, a decade ago, after someone close to her lost sexual function on S.S.R.I.s, “I became pretty obsessive about researching the issue, but the actual qualitative experience of patients was never documented. There was this assumption that the symptoms would resolve once you stop the medication. I just kept thinking, Where is the data? Where is the data?” In her role as a counsellor, Bahrick sees hundreds of college students each year, many of whom have been taking S.S.R.I.s since adolescence. She told me, “I seem to have the expectation that young people would be quite distressed about the sexual side effects, but my observation clinically is that these young people don’t yet know what sexuality really means, or why it is such a driving force.”
Laura felt as if she were learning the contours of her adult self for the first time. When she felt dread or despair, she tried to accept the sensation without interpreting it as a sign that she was defective and would remain that way forever, until she committed suicide or took a new pill. It felt like a revelation, she said, to realize that “the objective in being alive isn’t the absence of pain.” She remembered identifying with a sad little bubble pictured in a popular advertisement for Zoloft—the bubble is moping around, crying and groaning, until it takes the medication and starts to bounce while birds sing—and became increasingly aware that her faith in the drugs’ potential had been misplaced. “I never felt helped by the drugs in the sense that I have meaning, I have purpose, I have relationships that matter to me,” she said. Overprescribing isn’t always due to negligence; it may also be that pills are the only form of help that some people are willing to accept. Laura tried to find language to describe her emotions and moods, rather than automatically calling them symptoms. “The word I use for it is ‘unlearn,’ ” she said. “You are peeling off layers that have been imposed.”
Laura still felt fondness for most of her psychiatrists, but, she said, “the loss of my sexuality is the hardest part to make peace with—it feels like a betrayal. I’ve discovered how much of the richness of being human is sexuality.”
She wrote several letters to Dr. Roth, her favorite psychiatrist, requesting her medical records, because she wanted to understand how the doctor had made sense of her numbness and years of deterioration. After a year, Dr. Roth agreed to a meeting. Laura prepared for hours. She intended to begin by saying, “I’m sitting in front of you and I’m off all these drugs, and I’ve never felt more vibrant and alive and capable, and yet we thought I had this serious mental illness for life. How do you make sense of that?” But, in Dr. Roth’s office, Laura was overwhelmed by nostalgia: the familiar hum of the white-noise machine, the sound of the wind sucked inside as Dr. Roth opened the front door. She had always loved Dr. Roth’s presence—the way she would sit in an armchair with her legs folded, cradling a large mug of coffee, her nails neatly polished. By the time Dr. Roth walked into the waiting room, Laura was crying.
They hugged and then took their usual positions in Dr. Roth’s office. But Laura said that Dr. Roth seemed so nervous that she talked for the entire appointment, summarizing the conversations they’d had together. It was only when Laura left that she realized she had never asked her questions.
Laura started a blog, in which she described how, in the course of her illness, she had lost the sense that she had agency. People began contacting her to ask for advice about getting off multiple psychiatric medications. Some had been trying to withdraw for years. They had developed painstaking methods for tapering their medications, like using grass-seed counters to dole out the beads in the capsules. Laura, who had a part-time job as a research assistant but who still got financial help from her parents, began spending four or five hours a day talking with people on Skype. “People were so desperate that, when they found someone who had gotten off meds, they were just, like, ‘Help me,’ ” she said.
David Cope, a former engineer for the Navy, told me that Laura’s writings “helped keep me alive. I needed to know that someone else had gone through it and survived.” In the process of withdrawing from Paxil, Ativan, and Adderall, he felt detached from emotional reactions that had previously felt habitual. “The way I would explain it to my wife is, I know that I love her,” he told me. “I know that I care for her. I know that I would lay down my life for her. But I don’t feel love. There’s no emotional-physical response: the sense of comfort and tingly love when you smell your spouse’s hair—I don’t have that.”
Angela Peacock, a thirty-nine-year-old veteran of the war in Iraq, told me, “I want to be Laura when I grow up.” Peacock had been on medications for thirteen years, including the “P.T.S.D. cocktail,” as it has become known: the antidepressant Effexor, the antipsychotic Seroquel, and Prazosin, a drug used to alleviate nightmares. “I never processed the trauma of being a twenty-three-year-old at war, and how that changed my view of humanity,” she said. “I just pressed Pause for thirteen years.”
Laura realized that she was spending her entire workday on these conversations. Because she needed to become financially self-reliant, she began charging seventy-five dollars an hour (on a sliding scale) to talk to people. Few psychiatrists are deeply engaged with these questions, so a chaotic field of consultants has filled the void. They are immersed in what Laura describes as “the layperson withdrawal community,” a constellation of Web forums and Facebook groups where people who have stopped their psychiatric medications advise one another: Surviving Antidepressants, the International Antidepressant Withdrawal Project, Benzo Buddies, Cymbalta Hurts Worse. The groups offer instructions for slowly getting off medications—they typically recommend that people reduce their doses by less than ten per cent each month—and a place to communicate about emotional experiences that do not have names. For many people on the forums, it was impossible to separate the biochemical repercussions from the social ones. The medicines worked on their bodies, but they also changed the way people understood their relationships and their social roles and the control they had over elements of their lives. A common theme on the forums is that people felt that at some point, having taken so many medications for so long, they’d become disabled—and they were no longer sure if this was due to their underlying disorder, the effect of withdrawing from their medications, or the way they had internalized the idea of being chronically ill.
Peter Gordon, a Scottish psychiatrist who has worked for the National Health Service for twenty-five years, told me that he has struggled to find doctors to help him with his own process of withdrawal, so he turned to the online communities, which he believes are “changing the very nature of the power balance between patient and doctor.” He went on Paxil twenty-one years ago, for social anxiety, and has tried to go off several times, using a micropipette to measure a small reduction of the liquid form of the medication each month. It has not worked. Each time, he said, “I find my temperament different. I am not an angry person—I am gentle, I am affectionate, I am open—but in withdrawal I found that these qualities were less clear. I was more irritable. I was critical of my wife and focussed on things I wouldn’t normally care about.” He continued, “I personally find it really hard to try to capture that experience in words, and, if I’m finding it difficult to translate it into words, how are the studies going to capture it? There’s going to be an additional loss from words to quantifiable ratings. We are trained to understand the evidence base as paramount—it is the primary basis for mental-health prescriptions around the world, and I fully subscribe to it—but this evidence base can never be complete without listening to the wider story.”
After consulting with people on the phone for nearly five years, Laura worked with Rob Wipond and a physician’s assistant named Nicole Lamberson to create an online guide for people who wanted to taper off their pills. There were few precedents. In the late nineties, Heather Ashton, a British psychopharmacologist who had run a benzodiazepine-withdrawal clinic in Newcastle, had drafted a set of guidelines known as the Ashton Manual, which has circulated widely among patients and includes individual tapering schedules for various benzodiazepines, along with a glossary of disorienting symptoms. “People who have had bad experiences have usually been withdrawn too quickly (often by doctors!) and without any explanation of the symptoms,” Ashton wrote.
Laura’s Web site, which she called the Withdrawal Project, was published online in early 2018 as part of a nonprofit organization, Inner Compass Initiative, devoted to helping people make more informed choices about psychiatric treatment. She and Rob (whom she was no longer dating) created it with a grant from a small foundation, which gave her enough money to pay herself a salary, to hire others who had consulted with people withdrawing from medications, and to cull relevant insights about tapering strategies. “Anecdotal information is the best we have, because there is almost no clinical research on how to slowly and safely taper,” Laura said. The Web site helps people withdrawing from medications find others in the same city; it also offers information on computing the percentage of the dosage to drop, converting a pill into a liquid mixture by using a mortar and pestle, or using a special syringe to measure dosage reductions. Lamberson, who had struggled to withdraw from six psychiatric medications, told me, “You find yourself in this position where you have to become a kitchen chemist.”
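The “less than ten per cent each month” guideline that circulates on these forums works out to an exponential schedule: each cut is a percentage of the current dose, not of the original one, so the steps keep shrinking and soon fall below anything a pill cutter can manage, which is where the liquid mixtures and syringes come in. A minimal sketch of that arithmetic, using a hypothetical forty-milligram starting dose and a hypothetical stopping point, looks something like this.

```python
# A sketch of the tapering arithmetic described on the withdrawal forums:
# cut the *current* dose by (up to) 10% each month, rather than a fixed amount.
# The starting dose and the point of jumping to zero are hypothetical.
start_mg = 40.0     # hypothetical starting dose
cut = 0.10          # 10% of the current dose per month
floor_mg = 1.0      # hypothetical final dose before stopping altogether

dose, month = start_mg, 0
while dose > floor_mg:
    month += 1
    dose *= (1 - cut)
    print(f"month {month:>2}: {dose:6.2f} mg")

# After a year the dose is still around 11 mg; the schedule never reaches zero
# on its own, and the later reductions are fractions of a milligram -- which is
# why people end up compounding liquids, counting beads, and measuring with syringes.
```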
Swapnil Gupta, an assistant professor at the Yale School of Medicine, told me that she is troubled that doctors have largely left this dilemma to patients to resolve. She and her colleagues have embarked on what she describes as an informal “de-prescribing” initiative. They routinely encounter patients who, like Laura, are on unnecessary combinations of psychiatric medications, but for different reasons: Laura saw her therapists as gurus who would solve her problems, whereas poor or marginalized patients may be overtreated as they cycle in and out of emergency rooms. Yet, when Gupta, who works at an outpatient clinic, raises the idea of tapering off patients’ medications, she said, some of them “worry they will lose their disability payments, because being on lots of meds has become a badge of illness. It is a loss of identity, a different way of living. Suddenly, everything that you are doing is yours—and not necessarily your medication.”
Gupta, too, is trying to recalibrate the way she understands her patients’ emotional lives. “We tend to see patients as fixed in time—we don’t see them as people who have ups and downs like we all do—and it can be really disconcerting when suddenly they are saying, ‘See, I’m crying. Put me back on my meds.’ ” She said, “I have to sit them down and say, ‘It’s O.K. to cry—normal people cry.’ Just today, someone asked me, ‘Do you cry?’ And I said, ‘Yes, I do.’ ”
In the fall of 2018, a few days after Thanksgiving, Laura’s sister Nina texted me: “10 years to the day, Laura has some news for you that may be a great ending to your story.” The previous year, Laura had moved to Hartford to live near a new boyfriend, Cooper Davis, and his four-year-old son. Now they had just returned from spending the holiday with her family in Maine. Standing in the kitchen of their second-floor apartment, Laura told Cooper that wood and thin plastic utensils can’t go in the dishwasher. He asked if a number of different household items were safe for the dishwasher, before saying he had one last question and pulling an engagement ring out of his pocket. Cooper had been planning to propose for several weeks, and he hadn’t realized that the moment he’d chosen was precisely a decade after her suicide attempt.
Laura had met Cooper, who works at an agency that supports people with psychiatric and addiction histories, two years earlier, at a mental-health conference in Connecticut. Cooper had been given Adderall for attention-deficit hyperactivity disorder at seventeen and had become addicted. As an adolescent, he said, he was made to believe “I am not set up for this world. I need tweaking, I need adjusting.”
His work made him unusually welcoming of the fact that people in various states of emotional crisis often want to be near Laura. A few months after they were engaged, Bianca Gutman, a twenty-three-year-old from Montreal, flew to Hartford to spend the weekend with Laura. Bianca’s mother, Susan, had discovered Laura’s blog two years earlier and had e-mailed her right away. “I feel like I’m reading my daughter’s story,” she wrote. Susan paid Laura for Skype conversations, until Laura told her to stop. Laura had come to think of Bianca, who had been diagnosed as having depression when she was twelve, as a little sister navigating similar dilemmas.
While Bianca was visiting, a friend from out of town who was in the midst of what appeared to be a manic episode was staying at an Airbnb a few houses down the street. Laura was fielding phone calls from the woman’s close friends, who wanted to know what should be done, but the only thing Laura felt comfortable advising was that the woman get some sleep—she had medications to help with that—and avoid significant life decisions. The woman had been traumatized by a hospitalization a few years earlier, and Laura guessed that “she came here because she didn’t want to be alone, and she knows that I would never call the cops on her.”
Laura and Bianca spent the weekend taking walks in the frigid weather and having leisurely conversations in Laura’s small living room. Bianca, who is barely five feet tall, moved and talked more slowly than Laura, as if many more decisions were required before she converted a thought into words. She had been on forty milligrams of Lexapro—double the recommended dose—for nearly nine years. She’d taken Abilify for six years. Now, after talking to Laura, Bianca’s father, an emergency-medicine doctor, had found a pharmacy in Montreal that was able to compound decreasing quantities of her medication, dropping one milligram each month. Bianca, who worked as an assistant at an elementary school, was down to five milligrams of Lexapro. Her mother said, “I often tell Bianca, ‘I see you coping better,’ and she’s, like, ‘Calm down, Mommy. It’s not like being off medication is going to wipe me clean and you’re going to get the daughter you had before’ ”—the hope she harbored when Bianca first went on medication.
Bianca, who had reddish-blond hair, which she’d put in a messy bun, was wearing a bulky turtleneck sweater. She sat on the couch with her legs curled neatly into a Z—a position that she later joked she had chosen because it made her feel more adult. Like Laura, Bianca had always appreciated when her psychiatrists increased the dosage of her medications. She said, “It was like they were just matching my pain,” which she couldn’t otherwise express. She described her depression as “nonsensical pain. It’s so shapeless and cloudy. It dodges all language.” She said that, in her first conversation with Laura, there was something about the way Laura said “Mm-hmm” that made her feel understood. “I hadn’t felt hopeful in a very long time. Hopeful about what? I don’t know. Just hopeful, I think, because I felt that connection with someone.” She told Laura, “Knowing that you know there’s no words—that’s enough for me.”
At my request, Laura had dug up several albums of childhood photographs, and the three of us sat on the floor going through them. Laura looked radically different from one year to the next. She had gone through a phase of wearing pastel polo shirts that were too small for her, and in pictures from those years, when she was posed among friends, Bianca and I struggled to tell which girl was Laura. It wasn’t just that she was fatter or thinner; her face seemed to be structured differently. In her débutante photos, she looked as if she were wearing someone else’s features. Bianca kept saying, “I don’t see you.”
Since I’d known Laura, she had always had a certain shine, but on this day she seemed nearly luminous. She had taken a new interest in clothes and was wearing high-waisted trousers from Sweden with a tucked-in T-shirt that accentuated her waist. When Cooper returned to the house, after an afternoon with his family, she exclaimed, “Oh, Cooper is back!” Then she became self-conscious and laughed at herself.
I told Laura that I was wary of repeating her sister’s sentiment that marriage was the end of her story. She agreed. “It’s not, like, ‘Laura has finally arrived,’ ” she said. “If anything, these trappings of whatever you want to call it—life?—have made things scarier.” She still felt overwhelmed by the tasks of daily life, like too many e-mails accumulating, and she cried about five times a week. She was too sensitive. She let situations escalate. Cooper said that his tendency in moments of tension was to get quiet, which exacerbated Laura’s fear that she was not being heard. She did not see a therapist—she felt exhausted from years of analyzing her most private thoughts—but, she said, “If I actually sat in front of a psychiatrist and did an evaluation, I would totally meet the criteria for a number of diagnoses.” But the diagnostic framework no longer felt meaningful to her.
Perhaps we all have an ugly version of ourselves that, in our worst moments, we imagine we’ve become: when Bianca felt hopeless, she thought, mockingly, This is you. How could you possibly think otherwise, you poor thing? Laura’s thought was: You are not a legitimate person. You don’t deserve to be here. In many of our conversations, Laura said, she was trying to ignore the thought: Who do you think you are, speaking with this journalist? Shut up and go away. She said, “And yet we’re also having this conversation and I’m totally present in it.”
Bianca said, “It’s like your darkness is still there, but it’s almost like it’s next to you as opposed to your totality of being.”
Laura agreed that she was experiencing “the stuff of being alive that I just had no idea was possible for me.” But, she said, “it’s not like I’m good to go. Literally every day, I am still wondering how to be an adult in this world.” ♦
A major study has revealed that poor diets were responsible for 10.9m deaths around the world in 2017, while tobacco was associated with 8m deaths. Researchers say the biggest problem is… https://t.co/bz1q71jW8a
Timothy Caulfield (@CaulfieldTim)
"Poor Diets Cause One Fifth of the World’s Deaths?" https://t.co/iFitf5zCI8 @ConscienHealth: good study but "let’s not get too literal about causes of death. What we have here is nothing more than contributing factors."
Massive Dietary Changes Needed For Personal And Environmental Health
Alan: If, as a people, we can no longer define "cruelty," identify "incitement of violence," or proclaim evident corrosion of "The General Welfare," it is time to step aside and let America's inevitable collapse fully implode.
The ever-ready alternative is for "blue states" to federate with Canada and/or the European Union -- or even a larger hemispheric (and trans-Atlantic) union -- while letting residents of Jesus Land discover the actual horror of living in a theocratic state.
Two people familiar with the push said President Trump is planning to nominate the former GOP presidential candidate to the Federal Reserve’s seven-member board of governors, which has two empty seats. Trump is hoping to fill the other empty seat with conservative economist Stephen Moore, a pick that has led to an outcry from former White House officials in both parties because of Moore’s political background and lack of Fed-related experience.
Alan: Carson believes that the Egyptian pyramids were used to store grain.
This is not "Fake News"
It is Republican Truth.
Ben Carson is now Trump's Secretary of Housing and Urban Development (HUD).
Carson believes that Egyptian pharaohs built the pyramids -- almost entirely solid structures -- to store grain in the event of famine. During Carson's 2016 presidential campaign, this unhinged view won conservative voters to his cause.
"Trump is basically a man with low self-esteem, which he has worked against by being a bully and a narcissist. His actions scream, “Take me, I’m yours if you’ll admire and compliment me.” The Russians would never want to recruit him, just continuously have access to him and be able to influence him." (NY Review of Books)
Sign the petition to U.S. House Democrats: Investigate Brett Kavanaugh
Since taking back the U.S. House of Representatives, Democrats have been using their new investigative reach to open probes into various parts of the Trump administration.
With full committee control, House Democrats have the power to hold unethical people and policies to account, but they have yet to fulfill a midterm promise: to investigate Brett Kavanaugh.
Kavanaugh faced 83 ethics complaints related to his conduct at his confirmation hearings. In December, a federal panel of judges concluded that while the complaints “‘are serious,’ there is no existing authority that allows lower court judges to investigate or discipline Supreme Court justices.” Congress, however, would not face this problem, thus allowing a House committee controlled by Democrats to conduct an investigation.
The 83 complaints aren’t all, though. House committees can open a probe into whether Kavanaugh committed perjury or misled Congress in his 2004 and 2006 testimony. They could also investigate the numerous sexual assault allegations against him if Dr. Christine Blasey Ford, Deborah Ramirez, and Julie Swetnick gave them the go-ahead.
There is no good reason to delay an investigation any longer. A congressional probe is the only way to hold Kavanaugh accountable—and with Republicans’ undying loyalty to Trump, it will have to start in the House.
Sign the petition to U.S. House Democrats: Open an investigation into Supreme Court Justice Brett Kavanaugh immediately.
Please keep your midterm promise to investigate Brett Kavanaugh. With the courts refusing to investigate ethics complaints against him because of lack of jurisdiction, a congressional probe is the only way we can hold him accountable.