Cynicism May be the Real Threat to Impeachment

 

Cynicism is to democratic politics what rust is to motor vehicles. Both are corrosive if left unchecked. Rust will destroy a vehicle, and cynicism, if it becomes endemic, will ultimately destroy democracy. 

 

This thought struck me after some recent conversations with a few friends and acquaintances about the possible impeachment of President Trump. The cynical view of the process is that all politicians are corrupt in one way or another; they act based on self-interest and not in the public interest. In this view, Trump is no different; he is just doing what politicians do. This type of public cynicism may very well be the greatest impediment that Democrats face during the impeachment process. As David Brooks recently wrote in the New York Times, “it’s a lot harder to do impeachment in an age of cynicism, exhaustion and distrust,” especially when Trump’s actions are viewed by many as “the kind of corruption that politicians of all stripes have been doing all along.”

 

Countering this level of cynicism won’t be easy. First, we need to acknowledge that corruption has been allowed to seep into our system, which means reviewing some examples of recent corruption. Second, we should see what the founders intended, and how far we have strayed from their views of corruption. 

 

Part of the problem of educating the public on Trump’s abuse of power is that our political system, including our own Supreme Court, has come to accept a degree of corruption. The revolving door between Congress and lobbyists is well known. The infamous lobbyist Jack Abramoff, who was convicted of fraud in 2005, once said that after he dangled a job offer in front of a politician, “we owned him.” Members of Congress spend an inordinate amount of time raising money for their reelection. Former Senator Tom Daschle reports that senators spend two-thirds of their time fundraising in the years immediately preceding their reelection campaigns. “Members of Congress spend too much time raising money and not enough time doing their job,” according to former representative David Jolly. Good people are discouraged from running for federal office by the money chase, which has led to ever-increasing numbers of millionaires and billionaires seeking office. This occurs in both parties. In the Democratic Party, two mega-rich men have joined the field in pursuit of the presidential nomination. Thus, it isn’t that the cynics are completely wrong, but rather that their viewpoint lacks any historical context or subtlety in distinguishing minor acts of corruption from those that threaten our democracy. 

 

Perhaps it would be helpful to see what our founders thought about corruption. They were steeped in both classical republicanism and liberalism and understood the challenges of forming a government based on consent that was protected from corruption. Republicanism taught them that leaders as well as the public needed to act with civic virtue, placing the public good above their own self-interest. In their view, any actions that violated the public trust were considered corrupt. The founding generation was hypervigilant against perceived acts of public corruption. That was why they included a provision in the Constitution restricting the acceptance of gifts or emoluments from foreign governments without the permission of Congress. “Gifts play a potentially dangerous role in both judicial and democratic practice,” legal scholar Zephyr Teachout wrote in her book Corruption in America. “They can create obligations to private parties that shape judgment and outcomes.” Because we still fear the impact that gifts will have on decision making, many states have instituted ethics laws that either ban gifts to government officials altogether or require their disclosure.

 

The founders also subscribed to the theory of classical liberalism, which placed the liberty of the individual at the center of a good society. It has always been true that people act in their own self-interest. Part of the challenge in forming a government was to balance the individual’s pursuit of self-interest with the need for laws and policies designed to promote the broader public interest. Historian Jack Rakove argues that James Madison, for example, judged political decisions by “asking whether they satisfied both the public good and private rights.” In order to control power and corruption, multiple mechanisms were considered necessary. The founders placed constitutional restraints on office holders by designing three branches of government and a system where each branch could serve as a check on the others. “The accumulation of all powers…in the same hands…may justly be pronounced the very definition of tyranny,” Madison wrote in Federalist 47. Yet Madison also assumed that ancillary means would be needed to check power. One of these would occur naturally by playing ambitious politicians off against each other. “Ambition must be made to counteract ambition,” he famously wrote in Federalist 51. It would not take long for Madison’s vision of ambitious politicians checking each other to be realized. Soon after the formation of the new government under the Constitution, Alexander Hamilton’s ambition to form a national bank was met by opposition from none other than Madison, who thought the bank was unconstitutional. While Hamilton eventually succeeded with legislation to form the bank, and Washington signed the bill, debates over the constitutionality of actions by ambitious politicians have continued throughout American history. Madison also wanted to encourage the election of “men who possess the most attractive merit and the most diffusive and established characters,” which would occur in a larger republic. Such men were thought by the founders to be able to act in a disinterested manner, a term that was “used as a synonym for the classic conception of civic virtue,” historian Gordon Wood writes, as a person “not influenced by private profit.”

 

In our modern world, we seem to have gotten away from such restraints and have allowed private interests to have an undue influence on elected officials. We hear the refrain of those who are frustrated with corruption across the political spectrum. Trump’s appeal to “drain the swamp” is matched by politicians on the left, like Elizabeth Warren and Bernie Sanders, who rage against the influence of the wealthy and large corporations. Perhaps the Supreme Court’s Citizens United decision in 2010 represents the low point for laws against corruption. In the decision, the Court overturned all limitations on independent campaign spending by corporations, unions and the wealthy. Justice Kennedy, writing for the majority in a 5-4 decision, argued that only bribery was corruption. “Independent expenditures, including those made by corporations, do not give rise to corruption or the appearance of corruption.” He went on to find that “the fact that speakers may have influence over or access to elected officials does not mean that those officials are corrupt.” The outcome was predictable. Our politics have increasingly become flooded with spending by independent groups and dark money donors. The website OpenSecrets.org shows that such funding has tripled in the years since Citizens United. President Barack Obama warned about the dangers of the decision in his State of the Union address in 2010. “I don’t think American elections should be bankrolled by America’s most powerful interests, or worse, by foreign entities,” Obama said.

 

Incidents of minor corruption continue to occur. Hunter Biden traded on his father’s name and position as vice president when he took a position on the board of a Ukrainian company. Even though it was not illegal, nor is there any evidence that former Vice President Biden did anything wrong in calling out corruption in Ukraine, it nevertheless smacked of impropriety. And while this type of petty corruption occurs all too often, it pales in comparison to a president of the United States abusing the power of his office in order to elicit dirt on a political opponent. It has become clear from the partial transcript of the president’s phone call, along with recent testimony in the House, that Trump was engaged in a shakedown of Ukrainian President Zelensky, withholding military aid to that country and a White House meeting unless he got what he wanted. 

 

Thus it is little wonder that cynicism is on the rise, and that we need to begin taking action to control the corrupting impact that money and favors are having on our political system. Still, there is something wholly different about Donald Trump’s actions compared with those of other presidents who have exceeded their power. In my next essay, I will explore those differences.

How the Nutcracker Ballet Went from a Flop to a Christmas Classic

I reveled in watching the New York City Ballet’s gorgeous production of Tchaikovsky’s The Nutcracker ballet at the Koch Theater, Lincoln Center, New York on Tuesday night. Particularly lovely was the scene in which it snows on stage. Later that night, driving home, it started to snow for real, and an hour later a one-inch snowfall covered the metropolitan New York area like a soft, gentle white blanket. There was snow on the treetops, the bushes, the shores of lakes, the roofs of buildings, country lanes, barns, vast lawns, highways and byways. Perfect.

 

The Nutcracker has been staged by the New York City Ballet since 1954 and in those sixty-five years has brought boundless joy to tens of thousands of people, as other productions of The Nutcracker all across the country have done. When the Nutcracker boy prince saves his heroine from the army of huge mice, the Christmas tree grows from twelve to forty feet, the Sugarplum Fairy delights the audience and armies of neatly uniformed toy soldiers march across the stage, it is time for the Nutcracker and time for Christmas.

 

The production of the ballet that I saw at Lincoln Center on Tuesday was wonderful. I have seen it several times over the years and each time it gets better. This production was the best. First, it has that wonderful, rhapsodic music of Peter Tchaikovsky. It has sensational dancing by the enormously skilled members of the New York City Ballet company, the sixty-five-year-old and flawless original choreography by George Balanchine, and a lot of adorable kids, led (in the performance I saw) by Sophie Thomopoulos as Marie and Brandon Chosed as her pal Fritz, with Lauren Lovette as the Sugarplum Fairy, Joseph Gordon as her cavalier and Unity Phelan as Dewdrop.

 

The story in The Nutcracker is simple. A number of families gather at the home of a friend for the Christmas holidays. They all dance, the adults on the left side of the stage and the children on the right. The adults are fine dancers but, surprisingly, so are the children. Later, the parents carry the little ones off to bed and then retire, too. Little Marie falls asleep on a couch in the parlor clutching a Nutcracker doll she received as a present. At night, she dreams that the doll has become a life-size Nutcracker prince who saves her from an army of terrifying huge mice. He uses a small battalion of adorably dressed toy soldiers to do so. In the second act, the pair watch a dreamland party with numerous ballet performances. At the end of the party, the two kids soar off into the night sky in a bright, white sleigh.

 

The Nutcracker ballet, today hailed worldwide by critics, got off to a rough start in chilly old St. Petersburg, Russia, when it debuted in 1892. The critics just hated it, as did audiences. They all thought Tchaikovsky, who would die a year later, had written his worst ballet. They all believed that the story, based on E.T.A. Hoffmann’s tale The Nutcracker and the Mouse King, was silly, that the party scene was far too long, that the best dancing came at the end of the performance and not early in it, as they believed proper, and that the show starred children and not seasoned dancers. Oh, and everybody back in 1892 thought Tchaikovsky’s near-immortal music was absolutely dreadful, too.

 

The reception was so ferocious that ballet companies rarely staged the Christmas ballet. In 1919, though, little George Balanchine played the role of the child prince and loved the show. Later, in 1954, as head of the New York City Ballet, he saw several more productions of it and brought it to New York. There, he staged it differently. Balanchine did not see it as a mere ballet, but as a Christmas season spectacular. He thought that if it had more family appeal, he could make it a holiday hit, and he was right. He substantially increased the role of children in it, built scrumptious sets, merged the regular ballet company with kids from a ballet school and surrounded it with holiday cheer in every way he could.

 

It was a hit right away in that winter of 1954 and has been a worldwide success ever since. In fact, kids who see it grow up and take their own kids and then their grandkids to the ballet. It has had a generational appeal. The ballet has also become a box office superstar. Today, about 40% of annual ticket revenue for all American ballet companies comes from productions of The Nutcracker.

 

When you go to sleep some snowy night during the holiday season, close your eyes tight and wait for the huge mice to attack and the Nutcracker Prince to save you and all of your loved ones. He’ll be there, I assure you, ready to do battle.

 

PRODUCTION: The Nutcracker is produced by the New York City Ballet. Scenery: Rouben Ter-Arutunian, Costumes: Karinska, Lighting: Ronald Bates, Conductor: Daniel Capps. The ballet uses the original choreography by George Balanchine. The ballet runs through January 5, 2020.

Roundup Top 10!  

A University’s Betrayal of Historical Truth

by David W. Blight, W. Fitzhugh Brundage, Kevin M. Levin

The University of North Carolina agreed to pay the Sons of Confederate Veterans $2.5 million—a sum that rivals the endowment of its history department.

 

Trump’s Legacy Is Being Written Right Now

by Carolyn Eisenberg

The articles of impeachment against the president will reverberate through time and set the terms of possible reforms.

 

 

Nikki Haley gets the history of the Confederate flag very wrong

by Adam H. Domby

On Friday, Haley declared the Confederate flag was “hijacked” by the racism of a single white supremacist terrorist in 2015, and that before then, “people saw it as service, sacrifice and heritage.”

 

 

How teachers advocating for their students could backfire

by Diana D'Amico Pawlewicz

It reinforces the view of teachers as self-sacrificing servants instead of highly trained professionals.

 

 

Donald Trump is attacking both Jews and the left with one clean blow

by Kate Aronoff

Anti-Semitism and anti-leftism share a violent and intimate history that is being revived under Donald Trump.

 

 

A proposed EPA rule prioritizes industry profit over people’s lives

by Mona Hanna-Attisha

Limiting access to peer-reviewed science undermines the agency’s effectiveness.

 

 

Don’t Embrace Originalism to Defend Trump’s Impeachment

by Saul Cornell

Liberal legal scholars are at risk of falling into a right-wing trap.

 

 

Worcestershire Sauce and the Geographies of Empire

by Julia Fine

The complex origin of Worcestershire sauce reveals the ways that imperial ideals and aspirations — both in Britain and the colonies — structured not only British food habits, but also the ways in which companies presented such foods to the public.

 

 

50 years ago, LAPD raided the Black Panthers. SWAT teams have been targeting black communities ever since

by Matthew Fleischer

For one of the most dramatic moments in American policing, the raid on the Panthers headquarters is a relatively small historical footnote. But in the years since, SWAT has become a mainstay of modern policing.

 

 

 

 

The Emergence of Abraham Lincoln

by Sidney Blumenthal

How America’s 16th president went from virtual obscurity to ending slavery.

The Capital City and the Civil War

 

Washington, DC, had never, in its brief and undistinguished history, known a social season like this one. The winter of 1863–64 had been bitterly cold, but its frozen rains and swirling snows had dampened no spirits. Instead a feeling, almost palpable, of optimism hung in the air, a swelling sense that, after three years of brutal war and humiliating defeats at the hands of rebel armies, God was perhaps in his heaven, after all. The inexplicably lethal Robert E. Lee had finally been beaten at Gettysburg. Vicksburg had fallen, completing the Union conquest of the Mississippi River. A large rebel army had been chased from Chattanooga. Something like hope—or maybe just its shadow—had finally loomed into view.

 

The season had begun as always with a New Year’s reception at the Executive Mansion, hosted by the Lincolns, then had launched itself into a frenzy whose outward manifestation was the city’s newest obsession: dancing. Washingtonians were crazy about it. They were seen spinning through quadrilles, waltzes, and polkas at the great US Patent Office Ball, the Enlistment Fund Ball, and at “monster hops” at Willard’s hotel and the National. At these affairs, moreover, everyone danced. No bored squires or sad-eyed spinsters lingered in the shadows of cut glass and gaslight. No one could sit still, and together all improvised a wildly moving tapestry of color: ladies in lace and silk and crinolines, in crimson velvet and purple moire, their cascading curls flecked with roses and lilies, their bell-shaped forms whirled by men in black swallowtails and colored cravats.

 

The great public parties were merely the most visible part of the social scene. That winter had seen an explosion of private parties as well. Limits were pushed here, too, budgets broken, meals set forth of quail, partridge, lobster, terrapin, and acreages of confections. Politicians such as Secretary of State William Seward and Congressman Schuyler “Smiler” Colfax threw musical soirees. The spirit of the season was evident in the wedding of the imperially lovely Kate Chase—daughter of Treasury Secretary Salmon P. Chase—to Senator William Sprague. Sprague’s gift to Kate was a $50,000 tiara of matched pearls and diamonds. When the bride appeared, the US Marine Band struck up “The Kate Chase March,” a song written by a prominent composer for the occasion.

 

What was most interesting about these evenings, however, was less their showy proceedings than the profoundly threatened world in which they took place. It was less like a world than a child’s snow globe: a small glittering space enclosed by an impenetrable barrier. For in the winter of 1863–64, Washington was the most heavily defended city on earth. Beyond its houses and public buildings stood thirty-seven miles of elaborate trenches and fortifications that included sixty separate forts, manned by fifty thousand soldiers. Along this armored front bristled some nine hundred cannons, many of large caliber, enough to blast entire armies from the face of the earth. There was something distinctly medieval about the fear that drove such engineering.

 

The danger was quite real. Since the Civil War had begun, Washington had been threatened three times by large armies under Robert E. Lee’s command. After the Union defeat at the Second Battle of Bull Run in August 1862, a rebel force under Lee’s lieutenant Stonewall Jackson had come within twenty miles of the capital while driving the entire sixty-thousand-man Union army back inside its fortifications, where the bluecoats cowered and licked their wounds and thanked heaven for all those earthworks and cannons.

 

A year and a half later, the same fundamental truth informed those lively parties. Without that cordon militaire, they could not have existed. Washington’s elaborate social scene was a brocaded illusion: what the capital’s denizens desperately wanted the place to be, not what it actually was.

 

This garishly defended capital was still a smallish, grubby, corrupt, malodorous, and oddly pretentious municipality whose principal product, along with legislation and war making, was biblical sin in its many varieties. Much of the city had been destroyed in the War of 1812. What had replaced the old settlement was both humble and grandiose. Vast quantities of money had been spent to build the city’s precious handful of public buildings: the Capitol itself (finished in December 1863), the Post Office Building, the Smithsonian Institution, the US Patent Office, the US Treasury, and the Executive Mansion. (The Washington Monument, whose construction had been suspended in 1854 for lack of funds, was an abandoned and forlorn-looking stump.)

 

But those structures stood as though on a barren plain. The Corinthian columns of the Post Office Building may have been worthy of the high Renaissance, but little else in the neighborhood was. The effect was jarring, as though pieces of the Champs-Élysées had been dropped into a swamp. Everything about the place, from its bloody and never-ending war to the faux grandiosity of its windswept plazas, suggested incompleteness. Like the Washington Monument, it all seemed half-finished. The wartime city held only about eighty thousand permanent residents, a pathetic fraction of the populations of New York (800,000) and Philadelphia (500,000), let alone London (2.6 million) or Paris (1.7 million). Foreign travelers, if they came to the national capital at all, found it hollow, showy, and vainglorious. British writer Anthony Trollope, who visited the city during the war and thought it a colossal disappointment, wrote:

 

Washington is but a ragged, unfinished collection of unbuilt broad streets.… Of all the places I know it is the most ungainly and most unsatisfactory; I fear I must also say the most presumptuous in its pretensions. Taking [a] map with him… a man may lose himself in the streets, not as one loses oneself in London between Shoreditch and Russell Square, but as one does so in the deserts of the Holy Land… There is much unsettled land within the United States of America, but I think none so desolate as three-fourths of the ground on which is supposed to stand the city of Washington.

 

He might have added that the place smelled, too. Its canals were still repositories of sewage; tidal flats along the Potomac reeked at low tide. Pigs and cows still roamed the frozen streets. Dead horses, rotting in the winter sun, were common sights. At the War Department, one reporter noted, “The gutter [was] heaped up full of black, rotten mud, a foot deep, and worth fifty cents a car load for manure.” The unfinished mall where the unfinished Washington Monument stood held a grazing area and slaughterhouse for the cattle used to feed the capital’s defenders. The city was both a haven and a dumping ground for the sort of human chaff that collected at the ragged edges of the war zone: deserters from both armies, sutlers (civilians who sold provisions to soldiers), spies, confidence men, hustlers, and the like.

 

Washington had also become the nation’s single largest refuge for escaped slaves, who now streamed through the capital’s rutted streets by the thousands. When Congress freed the city’s thirty-three hundred slaves in 1862, it had triggered an enormous inflow of refugees, mostly from Virginia and Maryland. By 1864 fifty thousand of them had moved within Washington’s ring of forts. Many were housed in “contraband camps,” and many suffered in disease-ridden squalor in a world that often seemed scarcely less prejudiced than the one they had left. But they were never going back. They were never going to be slaves again. This was the migration’s central truth, and you could see it on any street corner in the city. Many would make their way into the Union army, which at the end of 1863 had already enlisted fifty thousand from around the country, most of them former slaves.

 

But the most common sights of all on those streets were soldiers. A war was being fought, one that had a sharp and unappeasable appetite for young men. Several hundred thousand of them had tramped through the city since April 1861, wearing their blue uniforms, slouch hats, and knapsacks. They had lingered on its street corners, camped on its outskirts. Tens of thousands more languished in wartime hospitals. Mostly they were just passing through, on their way to a battlefield or someone’s grand campaign or, if they were lucky, home. Many were on their way to death or dismemberment. In their wake came the seemingly endless supply trains with their shouting teamsters, rumbling wagon wheels, snorting horses, and creaking tack.

 

Because of these soldiers—unattached young men, isolated, and far from home—a booming industry had arisen that was more than a match for its European counterparts: prostitution. This was no minor side effect of war. Ten percent or more of the adult population were inhabitants of Washington’s demimonde. In 1863, the Washington Evening Star had determined that the capital had more than five thousand prostitutes, with an additional twenty-five hundred in neighboring Georgetown, and twenty-five hundred more across the river in Alexandria, Virginia. That did not count the concubines or courtesans who were simply kept in apartments by the officer corps. The year before, an army survey had revealed 450 houses of ill repute. All served drinks and sex. In a district called Murder Bay, passersby could see nearly naked women in the windows and doors of the houses. For the less affluent—laborers, teamsters, and army riffraff—Nigger Hill and Tin Cup Alley had sleazier establishments, where men were routinely robbed, stabbed, shot, and poisoned with moonshine whiskey. The Star could not help wondering how astonished the sisters and mothers of these soldiers would be to see how their noble young men spent their time at the capital. Many of these establishments were in the heart of the city, a few blocks from the president’s house and the fashionable streets where the capital’s smart set whirled in gaslit dances.

 

This was Washington, DC, in that manic, unsettled winter of 1863–64, in the grip of a lengthening war whose end no one could clearly see.

 

Excerpted from HYMNS OF THE REPUBLIC: The Story of the Final Year of the American Civil War, by S.C. Gwynne. Copyright © 2019 by Samuel C. Gwynne.  Excerpted with permission by Scribner, a Division of Simon & Schuster, Inc.

30 years after Czechoslovakia's Velvet Revolution, have we come full circle?

The following is adapted from remarks made at Tel Aviv University, 27 November 2019. A shorter version of this article was posted in The Times of Israel.

I’m dedicating these remarks to the memory of my dear friend and esteemed colleague Tatiana Hoffman, who might have been a much better speaker tonight were it not for her sudden and untimely death three years ago. She came to Israel in 1968 as Tatiana Stepankova, a young reporter for Czech Radio, was stranded here by the Soviet invasion, and became a perennial treasure for our own media. By virtue of her personality and professional integrity, as well as her intimate involvement with the Prague Spring, she was for us a personification of the drama that stretched from then to the Velvet Revolution of 1989 and after. Many of us felt that in some ways it paralleled Israel’s own story as an arena of the Cold War. As we suspected then and know now, it was indeed no coincidence that the Soviet-abetted Egyptian artillery barrages across the Suez Canal which preceded the War of Attrition began right after the Warsaw Pact intervention in Czechoslovakia.  Likewise, we saw a resemblance between the oppression of liberty there and the struggle for the rights of Soviet Jews and the ultimate liberation of both. 

 

In that heady autumn of 1989, Tatiana and I were co-editors of Voice of Israel radio’s world news program. Her lasting friendship with the leading figures of the Prague Spring, who now led its revival, granted us a unique advantage to cover the thrilling events not only there but throughout Eastern Europe. 

 

During our studio time on a Thursday night, we learned that Vaclav Havel had just been released from his umpteenth prison term. In short order she succeeded in putting a call through to his home, and was told “too bad you didn’t call a few minutes ago, Alexander Dubcek just left.” So, we had only one world exclusive, which was quite a scoop in itself, but the interview’s content was understated and self-effacing in a typically Czech fashion. Havel characteristically refrained from any triumphant declarations or predictions, and said mainly that he was just very tired.

 

Fast-forward now to the following April and Havel’s visit to Israel – the first by a Czechoslovak leader – at the peak of his glory as president. From his address at the Hebrew University, one passage has stuck in my memory ever since as a constant reminder and warning. Evoking Franz Kafka, Havel confessed: “I would not be in the least surprised if, in the very middle of my presidency, I were to be summoned and led off to stand trial before some shadowy tribunal. Nor would I be surprised if I were to wake up in my prison cell and then, with great bemusement, proceed to tell my fellow prisoners everything that had happened to me in the past six months. Every step of the way I feel what a great advantage it is to know that I can be removed at any moment from this post.” 

 

I took this then – as I did Havel’s unprepossessing small talk over beer in Jerusalem’s pedestrian mall – as another indication that his humility had not been diminished by victory and honor. But since then I have increasingly realized how apt his premonition was for the broader condition of society and politics. What the Czechs and Slovaks finally won in 1989 after long years of such admirable, patient, non-violent resistance was not only the fortunate exception rather than the rule; it can much more easily be reversed than achieved. All the more so if too much satisfaction and confidence is allowed to set in – as it did – that the happy end of history has been achieved and ensured for good. I wish I had Havel’s genius to present this gloomy message in his sardonically poignant style.

 

When invited to take part in today’s event I did not know that the Czech Republic alone would be co-sponsoring it with Tel Aviv University. Only when I received the program did I find out that the Slovak half of the Velvet Revolution would be absent from the stage, if not – I hope – from the audience. I can only hope that this omission is not yet another omen of how in thirty years we have come nearly full circle. 

 

Slovakia, indeed, provided an early instance of the pendulum-swing to the other extreme of malevolent proto-autocracy almost equal to the one that was thrown off in the Velvet Revolution. We were glad to see the apparent repulse of this retrograde trend in Slovakia, after it affected the Czech Republic as well to some degree. The very anniversary that we are celebrating tonight was marked in Prague by mass protests – two or three hundred thousand, depending on your source – led by surviving leaders of the Velvet Revolution against the present regime, which they see as corrupt. The protesters face the counter-charge that they are trying illegitimately to overthrow a duly elected government. That’s an argument that sounds familiar to Israeli ears, as it is the main defense of Prime Minister Binyamin Netanyahu’s supporters against his indictment on multiple counts of bribery, fraud and breach of trust.

 

So overall our relief appears to have been premature. This menacing tide has now swept your neighbors in Poland to the north and Hungary to the south, and it shows little sign of receding. The wave from west to east that we so blissfully assumed would permanently transform Europe and beyond has become a backwash from east to west that now threatens to destroy what we never doubted was indestructible.

 

Let me borrow a phrase from another great heir to the Czech tradition of the ironic absurd – Havel’s contemporary and translator Tomáš Straussler, better known as Tom Stoppard. His play’s title Travesties applies so well to the mutations of democratic leadership that are now playing out in the once-United States and once-United Kingdom. They vividly illustrate how the most contemptible crooks, liars and demagogues can be no less rapidly destructive than monstrous ideologues.  Moreover, this holds true in what many of us too readily trusted as the impregnable, if imperfect, bastions of enlightenment no less than among recently converted newcomers. 

 

History is repeating itself as Havelian satire rather than farce. Again, an autocratic Moscow is working to expand its sphere, weaken its adversaries and break up their alliances. In areas like Syria, where neither the European revolutions of 1989 nor the so-called Arab Spring achieved the goals of the Prague Spring, this Russian purpose is still being pursued by armed force. Elsewhere, the subversion is subtler: instead of sending in the tanks as in Prague, it manipulates the supposed apotheosis of free expression – the media and internet – against itself. But the results are just as pernicious.

 

However, most of the fault is not in our adversaries but in ourselves. Speaking for my fellow baby-boomers in the West, we indulged ourselves by taking for granted – as the human norm, rather than a fragile evolutionary breakthrough – those hard-fought achievements of our parents who were justly called the greatest generation. We raised our children on the facile delusion that democracy requires only the mere shell of elected government and majority rule. This ultimately allowed too many of us and them to accept demonization of any restraint by constitutional institutions, any protection of minority rights or individual freedoms, as – horror! – elitist and undemocratic machinations of a “deep state.”

 

To this was added the ingredient of economic mismanagement – the crisis of unfettered capitalism that, within less than 20 years, followed the collapse of corrupted socialism; the Prague Spring had, after all, called for socialism with a human face. The failure of both systems left too many behind without any system on which to pin their hopes for betterment. So one needed only to channel their resentment with populist and chauvinist slogans in order to fan a racial, religious and xenophobic backlash – and there was the recipe for a replay of the 1930s scenario no less than the postwar one, with little basis in sight to expect a near-miraculous redemption like the one whose not-so-happy anniversary we are marking today. Maintaining real democracy, it appears, depends on a sophisticated, enlightened and benevolent electorate, of which sadly we have not educated enough. Consolation is offered by pointing to the much better tendencies that some polls show among the next generation, the millennials. I do fervently hope this is so, and that the presently ascendant forces will not nip it in the bud.

 

Finally – we’re here to discuss Czechoslovakia, not Israel. But the past year has climaxed a similar process here too, as I briefly alluded before. Our nation’s own revolution -- of which we mark a major anniversary in two days: the United Nations Partition Resolution of 29 November 1947 -- has lost, at the very least, whatever velvet lining it had and faces an unpromising near future. Pardon me for ending a congratulation to foreign friends with such a domestic admonishment – but as Havel so presciently warned, we too are now summoned before our own tribunal.

A Boss is a Boss: Nurses Battle for Their First Union Contract at Albany Medical Center

A nonprofit employer is not necessarily a better boss than a profit-making one.

 

That sad truth is reinforced by the experience of some 2,200 nurses at Albany Medical Center, who have been fighting for a contract since April 2018, when they voted for union representation.

 

Even that union recognition struggle proved exceptionally difficult.  The management at Albany Med―a vast, sprawling enterprise with roughly $2 billion in revenue and 9,500 workers, making it the New York capital district’s largest private employer―fought vigorously to prevent unionization.  As a result, three union organizing campaigns conducted between 2000 and 2003 were defeated by very narrow margins. 

 

But worker discontent grew over subsequent years.  In April 2009, Albany Med informed workers that it was eliminating scheduled raises, freezing hiring, cutting vacant positions, and reducing employee time off.  This announcement followed a year in which James Barba, the hospital’s president and CEO, received $4.4 million in total compensation.  In 2011, Albany Med―hit with a federal class-action lawsuit charging that its officials had conspired with their counterparts at other area hospitals to keep down the pay of registered nurses―grudgingly agreed to settle its share by providing the aggrieved nurses with $4.5 million.  Moreover, nurses complained of short-staffing, computerized duties that left them with less time for their patients, and low salaries.

 

Against this backdrop, the New York State Nurses Association (NYSNA) began waging another union organizing drive in 2015. As the campaign gathered momentum and a union representation election loomed, management resistance grew ever fiercer. Nurses reported receiving daily emails from administrators discouraging them from voting union, managers pulled nurses aside for one-on-one meetings to question them about how they would vote, pro-union flyers were torn from bulletin boards, and Filipino nurses on work visas were warned that unionizing could jeopardize their immigration status. The situation became such a scandal that, in March 2018, Governor Andrew Cuomo ordered the state Labor Department to investigate complaints of intimidation, threats, and coercion by the Albany Med administration.

 

Finally, in April 2018, with 1,743 nurses casting ballots in a government-supervised representation election, the pro-union forces emerged victorious by a two-to-one ratio.

 

Nevertheless, rather than accept defeat and engage in good-faith collective bargaining, the Albany Med management has adopted a well-established corporate tactic for undermining a fledgling union:  denying it a first contract.  Thus, more than a year-and-a-half since contract talks began, they appear stalled. NYSNA has not been able to resolve the major issues of concern to the nurses at Albany Med and, as a result, has been unable to deliver on its promises.  Seizing the opportunity, anti-union employees, reportedly with the assistance of management at Albany Med, have begun a petition campaign to decertify the union.

 

Meanwhile, NYSNA―anxious to expose Albany Med’s stalling tactics, bring public pressure to bear on management, and maintain union morale―has begun running television ads and staging lively, colorful informational picketing outside Albany Med’s New Scotland Avenue hospital.  Hundreds of nurses and supporters from other unions have joined the picket lines.

 

The primary issue for the nurses remains adequate staffing. According to Albany Med’s management, the hospital is short almost 200 nurses, a vacancy rate of about 10 percent.  From the standpoint of the nursing staff, this is appalling, both because they are being overworked and because their patients are receiving inadequate care.  “You’re afraid to end your shift and go home because there’s just not enough nurses to go around,” declared Kathryn Dupuis, a nurse employed there for 24 years.  As a result, she often works overtime. “Bottom line is it’s my conscience. I got into this to help people.”

 

Other issues are important to the nurses, as well.  According to union activists, nursing salaries at Albany Med are lower than at other upstate New York hospitals, while the health insurance plans available to hospital employees are very expensive.  Naturally, these conditions interact with the problem of maintaining adequate staffing at Albany Med.  “It’s a really great place to work,” one RN remarked at a public forum.  “But when you have a family and you have to pay a lot for health insurance or your wages barely cover your mortgage, it’s just easier to go someplace else with competitive wages and benefits.”

 

Yet another issue involves Albany Med’s use of Filipino nurses for what the union charges is “forced labor.” This October, NYSNA filed a federal lawsuit alleging that the hospital was violating the labor provisions of the Trafficking Victims Protection Act. The lawsuit focused on an Albany Med program, begun in 2002, that recruited nearly 600 nurses from the Philippines. These nurses, the lawsuit noted, were required to sign a contract including clauses providing for a penalty of up to $20,000 if the recruited nurse resigned before a three-year period ended. The lawsuit contends that the contract included a threat that, if a nurse breached the contract, the hospital “would report the nurse to federal immigration authority, which could result in deportation proceedings.”

 

Albany Med can also bring considerable pressure to bear on the City of Albany.  As a nonprofit enterprise, Albany Med is tax-exempt, as are other major nonprofit institutions in the city.  This means that only 36 percent of the value of Albany real estate is taxable, a situation limiting the revenue available to fund the city’s operations and resulting in considerably higher taxes for the city’s homeowners.  As the nonprofits, with their thousands of employees, are major users of city services, the city administration pressed them to make voluntary payments in lieu of taxes.  Albany Med initially agreed to pony up $500,000 per year―a pittance for this enormous enterprise, but badly-needed revenue for the city.  Even so, Albany Med failed to make this payment in 2017, and skipped it again in 2018.  The inconsistent nature of these payments has left the City of Albany a perennial supplicant to the giant medical complex. It might also explain why, just this year, the city administration named a downtown street after James Barba, Albany Med’s CEO and president.

 

Albany Med is quite capable of providing safe staffing, decent wages, affordable health insurance, a less punitive approach to immigrant labor, and regular payments to the City of Albany.  After all, it is a very wealthy enterprise―in its own words, “a vast organization” that, in addition to its extensive New Scotland Avenue hospital complex, has more than 100 locations throughout the region, including affiliations with other hospitals as well as a network of urgent care and multi-specialty centers.  In recent years, Albany Med has spent hundreds of millions of dollars on an enormous building and expansion program and, consequently, now owns a considerable portion of downtown Albany.

 

As a result, although Albany Med seems determined to continue its traditional anti-union policy, NYSNA will be waging a heightened campaign to secure the first union contract for the medical center’s aggrieved nurses. 

A Concise History of Diets through Life and a Lot of Show Biz Spice

One of the first photos you see in Renee Taylor’s delightful play about dieting is a black and white picture of her as a chubby kid in New York in the late 1940s. In hundreds of subsequent photos and videos, Taylor, the unforgettable mom of Fran Drescher in the hit TV series The Nanny, tells the story of her life and all the diets she has been on, real and crank, medical and fanciful. It’s about caloric food you can bake and a LOT of chocolate cake.

 

Her story is told in her engaging one-woman show, My Life on a Diet, which just opened at the George Street Playhouse in New Brunswick, N.J. The play is the story of her career in show business, her marriage (53 years) to actor/writer Joe Bologna, and a world of calories. As she says, it’s a story of her highs and lows, on and off the scale.

 

In her story, told as she sits at a desk in her home, she tells the rather remarkable tale of all the famous celebrities she knew as friends and lovers. Each has a number of anecdotes attached. Lovers included the brilliant off-color comic Lenny Bruce, who overdosed during his relationship with her, and friends Barbra Streisand and, most importantly, Marilyn Monroe.

 

She met most of them accidentally.

 

Taylor enrolled at the Lee Strasberg Acting School in New York in the 1950s to become a performer. Sitting in class with her was Marilyn Monroe, who was just becoming famous. Taylor had no qualms about befriending Monroe, and Monroe saw in her a level-headed, down-to-earth friend that she desperately needed. The two hit it off right away and remained pals for years.

 

Taylor rose from bit movie player to co-star of some movies and became a television star in several shows and then The Nanny. Through it all, she constantly waged war against weight, fighting all the way to keep it down, and often failing. The play starts off as a standard Hollywood story, but as it goes on you feel real empathy for her and her waistline combat. 

 

Renee had personal struggles, too. She dated a lot of men before meeting Bologna, and they had a tempestuous, marriage-counselor-filled marriage. Her good friend Marilyn died young of an overdose of pills. Lenny Bruce overdosed, too. You begin to see Taylor as just like any other human being, with lots of troubles, grieving over the losses of friends as we all have, and not just a glitzy Hollywood star. It’s a humanity that develops right through the end of the show and makes her lovable.

 

Oh, the endless diets. They are funny. She makes up celebrity diets and recounts tales of famous people she met who went crazy over diets, such as Jackie Kennedy’s sister, rail-thin Princess Lee Radziwill. “The woman walked up to a gourmet delight buffet table and ate three little carrots for dinner. I leaned over and said to her, oh, such overeating…”

 

There was 1940s box office queen Joan Crawford, whom she met along with her slightly nutty mother Freida. Mom told Joan she had to work harder at body cleansing diets to save her health and Crawford, with a long nod, said “I’m doing that.”

 

Taylor’s story is familiar to anyone who has been on a diet. She always weighed herself after getting up and before breakfast. “I also fixed the scale before I got on it,” she laughed.

 

You have to admire her for battling against her weight and remaining sane in Hollywood over such a long time. We all know what a crazy life show people have – too much eating and drinking, drugs, love affairs, on-and-off employment, shrinks, always waiting for the next job. What do you do? You eat.

    

The play is warm and loving. It is a memoir of sorts with her at the center. It is not a drama or high comedy or sprawling spectacle, either, but it is good – as good as a big, calorie-ridden holiday dinner, with a big dessert cake, please – large slice.

 

PRODUCTION: The play is produced by the George Street Playhouse. It is written by Taylor and Joe Bologna, and directed by Bologna. Sets and Lighting: Harry Feiner, Projections: Michal Redman, Costumes: Pol’ Atteu, Sound: Christopher Bond. The show runs through December 15.

Jack Miles' God in the Qur’an brings the three Abrahamic traditions to the table

 

We cannot force someone to hear a message they are not ready to receive, but we must never underestimate the power of planting a seed.

 

In God in the Qur’an, Jack Miles is planting that seed. A seed that brings the three Abrahamic traditions to the table. A table where, for centuries, the commonality of names and terminologies hid not only the deeply essential and symbolic concepts they represented, but also the subtle distinctions and critical differences across these traditions. The implications of these identities and expressions, if understood correctly within each tradition, will allow for an evolving understanding of the historical/biblical personalities as well as the metaphysical, parapsychological, and the deeply theological phenomenon generically referred to as God.

 

This God with Its many names is understood differently not only from one religious tradition to the next but also within the hues and shades of sectarian and sub-sectarian beliefs. In reality, no two individuals, regardless of their creed or conviction, can ever perceive God in the same way. Yet, the infinite diversity is contained within a realm that spans across the divine-human divide. Professor Jack Miles is trying, and very successfully in my opinion, to make us all aware of that.  

 

Going back and forth between the perceptions of the individual and the collective, this biblical scholar of international repute and valuable scholarship is inviting his readers to the round table with a proviso.  In a true Biblical / Qur’anic tradition he makes one simple request—much like when God talked to Moses:

“Take off your sandals, for the place where you are standing is holy ground.” (Exodus). Qur’anic scholars refer to this calling as “خلع نعلین ” or the taking off of the sandals:

 

إِنِّي أَنَا رَبُّكَ فَاخْلَعْ نَعْلَيْكَ إِنَّكَ بِالْوَادِ الْمُقَدَّسِ طُوًى

Verily, I am thy Sustainer! Take off, then, thy sandals! Behold, thou art in the twice hallowed valley. (20:12)

 

This is the only place in the Qur’an where the concept of قَدَسَ, hallow or holy, has been used in reference to an entity other than God. The relevance? That is where the voice of God was heard. Perhaps from this, the word قُدس, Arabic for Jerusalem, might have been extrapolated.

 

What are the sandals? The cultural baggage filled with layers of interpretations and personal perceptions that thicken the veil between the human and the divine.  As each individual puts her or his own stamp of ‘approval’ on what God is, the process becomes so mindlessly entangled that even god loses itself behind the many veils of an identity crisis.

 

Dr. Miles contextualizes this understanding of the iconic historical/biblical personalities named Adam, Noah, Abraham, Joseph, Moses, and Jesus, and by implication Muhammad.

 

Jack Miles is describing, defining, decoding, and deciphering these nuanced understandings of God that have remained entrenched in the religious lore of the three traditions, and he avails us the opportunity for a clearer understanding of the interactive God in the core and context of the Qur’an, not just as an address, but as the God Whose احدیه or totality is as important as Its واحدیه or singularity in being The Totality of a Cosmic Whole that is the Oneness of Being.

What the Trump Impeachment Inquiry Means for the Rest of the World

 

Once again, the United States is experiencing the profound drama of Presidential impeachment proceedings. But, unlike in the past, this time the implications for the rest of the world could be large.

 

Consider the two modern predecessors to today’s impeachment inquiry into President Donald J. Trump’s attempt to persuade Ukraine’s government to begin a criminal investigation of one of his leading Democratic challengers, former Vice President Joe Biden and Biden’s son Hunter. 

 

The first was the slow-brewing crisis that began with a midnight break-in at the Democratic National Committee’s offices in the Watergate complex in Washington in 1972. That crisis went on for two years and consumed the American political system. It finally ended in President Richard Nixon’s resignation in August 1974. The second was the special counsel investigation of President William J. Clinton, who was impeached by the U.S. House of Representatives but acquitted by the Senate in 1999.

 

In both cases, the roots of the crises were domestic. Nixon was accused of misusing his office for domestic political goals, and then of obstructing the investigation. Clinton was accused of perjury and other abuses relating to his personal behavior. The case against Trump is significantly different. U.S. foreign policy is at its very core.

 

American relations with Ukraine are not some peripheral issue. U.S. policy toward Ukraine is rooted in its commitments to European and international security. At least since Russia’s annexation of Crimea and incursions into eastern Ukraine in 2014, helping Ukraine preserve its independence and sovereignty has been a central foreign policy issue for both the US and the European Union.

 

Moreover, unlike the two previous impeachment crises, this one could clog up the machinery of US foreign policy. During Watergate, Henry Kissinger, serving as both National Security Advisor and Secretary of State, kept the ship afloat, and Sino-American relations, the Vietnam war, and US-Soviet interaction remained high priorities. Likewise, throughout the Clinton drama, which coincided with the beginnings of the Kosovo War, US foreign policy making did not have any major disruptions.

 

Obviously, the same cannot be said for the Trump impeachment investigation. The proceedings have shown deep divisions between a foreign policy establishment that is trying to maintain the stated American policy on Ukraine and a White House that has been pursuing fundamentally different goals. 

 

Whether this apparatus is still capable of carrying forward its work on this critical problem is now an open question. On the White House side, there is a noticeable absence of ‘adults in the room’. Under Secretary of State Mike Pompeo, who himself has been implicated in the scandal, an already diminished State Department has become a key battleground in the larger impeachment fight.

 

Moreover, Trump himself could make the current impeachment drama far worse for the rest of the world. During the Clinton proceedings, the White House was committed to maintaining business as usual and avoided participating in the daily disputes of the process. Trump has already adopted the exact opposite approach, not least by attacking (on Twitter) the former American ambassador to Ukraine while she was testifying before the House Intelligence Committee.

 

Clearly, Trump intends to obsess over every detail of the process. Every minute that he spends tweeting and watching Fox News will be time that other occupants of the Oval Office would have spent focusing on pressing issues of state. In this respect, the Trump drama has parallels to Watergate, which was clearly a distraction for Nixon. But given that Trump is even less constrained by (or even aware of) the constitutional principles he is accused of violating, his efforts to derail the proceedings are likely to become even more brazen.

 

Whether Trump’s behavior justifies removing him from office will be for the Senate to decide. But whatever happens, America’s political crisis comes at a time of rising global instability. In addition to a revisionist Russia seeking opportunities for zero-sum gains wherever they can be found, an increasingly assertive China is flexing its muscles in East Asia and on a global stage.

 

Meanwhile, the Middle East has entered another phase of profound instability, such that a single spark could easily ignite another crisis. Kim Jong Un’s nuclear-armed regime is probably contemplating new provocations after recent missile tests and its declaration that it is no longer interested in future meetings with President Trump. Trade tensions with China remain high despite the recent announcement of a ‘phase one’ deal between America and China. And mass protests are sweeping the globe, from Santiago and Quito to Beirut and Hong Kong.

 

In today's intertwined world, a crisis anywhere can end up on the President's desk, and the policy response that does (or doesn't) come can have global implications. French President Emmanuel Macron recently made headlines in America (on the PBS NewsHour) and elsewhere by warning of an impending 'brain death' for NATO. If that grim prognosis about the state of transatlantic relations was true earlier this month, it is even more relevant now that the impeachment process has reached a fever pitch.

 

In the previous impeachment episodes, the United States remained a strategic participant in global affairs. But America under Trump has already proved to be a source of global disruption. Whether the latest scandal leads to a strategic explosion or merely a strategic hiatus remains to be seen. However, the world can ill afford either scenario.

Sat, 14 Dec 2019 16:44:11 +0000 https://historynewsnetwork.org/article/173795 https://historynewsnetwork.org/article/173795 0
The Power of the 2017 Congressional Recess

In February 2017, Congress went home for what’s commonly known as “recess.” The word “recess” is actually a bit misleading; it suggests that this period is a break. Recess is not a break. In fact, its actual name is the “District Work Period,” when there are no votes or hearings in Washington. This is when members of Congress get free time to meet with their constituents back home.

Recess is also when most members of Congress schedule their listening forums, like town halls, and in February 2017 a lot of Republican members went ahead and put town halls on the books. Under normal circumstances, town halls tend to be sleepy affairs, attended mostly by retirees and other people with flexible schedules and long, confusing lists of grievances. In 2009 the Tea Party had stormed town halls nationwide to great effect, stunning Democratic and Republican representatives alike and dealing a serious blow to the effort to pass health care reform. In 2017, we knew—because our in-box was full of messages from around the country—that Indivisibles were getting ready to turn the tables.

Not a lot of other people knew that yet. The Indivisible Guide had received enthusiastic coverage in political media outlets, but the growth of the grassroots movement hadn't yet cracked the mainstream media, so there wasn't a high level of public awareness and understanding of what was brewing nationwide. Back at Indivisible HQ, we'd been doing our best to get Indivisible groups press coverage and to convey to as many people as possible that something very big was happening. It wasn't easy. You try explaining to a political reporter that a massive grassroots resistance movement is taking shape and you know that because there are a zillion new groups called "Indivisible" on Facebook and your in-box is overflowing with random people making plans to pressure their elected officials. Most were skeptical, and reasonably so.

It was early February when we got our first call from CNN. Kyung Lah, a CNN correspondent, was outside a high school in Cottonwood Heights, Utah, where Representative Jason Chaffetz was preparing to face a very large and very frustrated crowd. Indivisible Utah and other grassroots groups had mobilized over a thousand people to pack Chaffetz's town hall. Those who couldn't fit in the auditorium were protesting outside. They'd brought green and red signs so members of the crowd could wave them to signal approval or disapproval as Chaffetz spoke. There was a lot of red that night. In the Q&A session, questioners homed in on Chaffetz's unwillingness as chairman of the House Oversight Committee to investigate any of Trump's rampant ethics issues, a choice that contrasted starkly with the witch hunt he'd waged for years by carrying out transparently partisan investigations of Hillary Clinton's emails. His constituents responded to his weak excuses with chants of "Do your job!"

Their preparation and outrage paid off: the visuals and headlines coming out of the event showed a congressman facing a shocking revolt in his own ruby-red district.

Chaffetz wasn’t alone.

Having bowed to constituent pressure to host a town hall, Senator Tom Cotton, over in Arkansas, now looked out over a sea of angry, concerned faces, struggling to find his supporters. The room was overflowing: as Ozark Indivisible’s Caitlynn Moses told us, “They’d had to change location twice, and they wouldn’t give us a date because they couldn’t find a place big enough . . . [T]housands of people came specifically to just yell at Tom Cotton.” Questioner after questioner took the mic and demanded to know why he was trying to take away their health care. Kati McFarland, a young woman with a rare genetic disorder, brought the crowd to tears with her story and concluded starkly: “Without the coverage for pre-existing conditions, I will die. That’s not hyperbole.” In the face of his constituents’ emotional, powerful appeals, Cotton seemed unable to cope. The story coming out of the February 2017 congressional recess was one of lives saved by the Affordable Care Act—and put at risk by Trump-supporting senators like Cotton.

The fight to save the Affordable Care Act was about more than resisting Trump; it was intensely personal. Trish Florence of Indivisible SATX was fighting for Medicaid for her family. Lisa Dullum with Greater Lafayette Indivisible was a breast cancer survivor and depended on the Affordable Care Act for her own care. Rosemary Dixon with Prescott Indivisible credited the Affordable Care Act for saving her life when she needed a kidney transplant. Kim Benyr of Ozark Indivisible was fighting for the Affordable Care Act while her young daughter, Maddy, was facing terminal cancer. In between events pressuring Tom Cotton, Ozark Indivisible put together a binder full of stories and pictures for their senators and representatives on how the Affordable Care Act had saved their lives and the lives of their children, family, and friends. They delivered the binders in person to bewildered congressional staffers in northwest Arkansas. Across the country, groups like Indivisible Kansas City, Indivisible Lovettsville, and Indivisible Austin compiled stories from people whose lives or financial stability had been saved by the ACA and shared them virtually and in person.

It shouldn't have been a surprise that a bill that would throw millions off their health care was unpopular. But the speed, scope, strategy, and sheer splashiness with which people had organized all over the country were certainly a surprise. These public, visual, media-ready confrontations were suddenly taking place across the country. If you weren't part of the grassroots surge yourself, all this seemed to come out of nowhere.

CNN’s Kyung Lah, in Utah, called us to ask the question that was about to be on everyone’s lips: Where did all these people come from?

We described the Indivisible Guide, how it had helped spark grassroots groups nationwide, and that it was these local groups that were doing the work. The town halls that were hitting the national radar now weren't flash protests; they were the product of groups of people who'd started organizing weeks earlier and now had the capacity to turn out constituents in large numbers. We obviously had not personally organized a thousand people to show up at a town hall in Utah. We'd been in touch with the Utah groups — a few weeks earlier they'd reached out for help when police had responded to a routine polite visit to a congressional office by arresting their members — but we'd had only one day's heads-up that the Chaffetz town hall was happening.

Lah explained why she was asking: “Congressman Chaffetz is claiming that this is an Astroturf effort and that many of the protestors are being paid by outside groups.”

We had no idea how to respond to this, because it made no sense. We were paying people to show up? Like, logistically, how would that even work? But Chaffetz wasn’t the only one to fling this bogus accusation at his local grassroots interlocutors. With the Women’s March, the airport protests against Trump’s Muslim ban, and now the raucous town halls, Republicans nationwide were reeling. Taking their cues straight from the White House, they agreed: the problem wasn’t massive popular opposition to their agenda, hypercharged by widespread horror at the election of Donald Trump. That couldn’t be it! No, something else was going on.

Dismayed conservatives landed on an explanation: paid protestors, probably funded by liberal philanthropist George Soros, were responsible for this ruckus. Sean Spicer explained to Fox News that what was happening was "a very paid, Astroturf-type movement." Representative Dave Brat urged the press to "Google 'Indivisible' and the Soros-funded movement that is pushing all of this." Right-wingers started hunting for evidence of this shadowy conspiracy. They published exposés "revealing" the secret playbook being used to disrupt town halls (aka the Indivisible Guide). A conservative opposition research firm even sent a tracker to follow us at a donor conference and posted creepy, surreptitious video of us on a right-wing news site in a bizarre effort to "prove" that we were Soros funded. (At the time, we weren't.)

These allegations drew on an age-old anti-Semitic smear: that a Jewish banker was pulling strings behind the scenes to create chaos. They were also hilariously false. And while we were nervous about the security implications of becoming right-wing targets, we were mostly struck by the irony of being branded a slick Astroturf operation. As of February 2017, our fledgling organization was the functional equivalent of a bunch of kids in a trench coat pretending to be an adult. We had a website and an email address. We were talking to Indivisible groups, getting press, forming partnerships, sending emails to our growing list, producing policy analysis, and publishing social media updates. From the outside, we were doing a pretty solid impersonation of an actual nonprofit with staff and a budget. In reality, we were a collective of roughly one hundred frazzled, sleep-deprived volunteers rapidly approaching the end of our collective ropes. Our biggest expenses to date had been T-shirts and pizza for volunteers, which we put on our personal credit card. We'd quit our jobs, but no one had gotten a dime of pay yet. And we sure weren't paying anyone else.

The idea that some deep-pocketed donor was paying for all this was also absurd and darkly hilarious to the Indivisible leaders spending their nights, weekends, and fake sick days building the burgeoning movement nationwide. "Soros-funded Astroturf" became something of an inside joke for the Indivisible movement. Group leaders would show up at mass protests with signs that said "Hey, George, where's my check?" or wearing shirts emblazoned with the moniker "Unpaid Protestor."

At the same time, the right-wing pushback was also taking stranger and scarier forms for groups around the country. In California, Republican representative Dana Rohrabacher's office called the cops on a group of moms from Indivisible OC 48 after a bizarre scuffle in which a Rohrabacher staffer accidentally hit a toddler with a door (the toddler had been delivering Valentine's Day cards) and then fell over herself. Rohrabacher followed up by issuing an unhinged press release denouncing the moms as a "mob" of "unruly activists" who were "enemies of American self-government and democracy." The group's leaders were subjected to vicious online fury and harassment, but Indivisible OC 48 doubled down on its scrutiny of Rohrabacher.

Those organizing in traditionally conservative areas ran into their own problems. Sarah Herron, the leader of Indivisible East Tennessee, noted the fear of social and professional reprisals against those closely affiliated with anti-Trump resistance. One Southern Indivisible organizer who made headlines organizing to pressure her conservative electeds to hold town halls was told by her employer that she'd have to quit Indivisible or quit her job. She handed in her resignation the next day.

But this kind of experience wasn't just confined to red states. New Jersey's NJ 11th for Change was an Indivisible group that had been pressuring Republican representative Rodney Frelinghuysen through a well-organized weekly event called "Fridays with Frelinghuysen" (which Frelinghuysen was invited to but never attended). The group was rocked when Frelinghuysen responded to the pressure by reaching out directly to the bank where one group leader, Saily Avelenda, was employed to complain to her bosses. Under pressure from the bank, Avelenda quit her job, then took her story — including the handwritten note Frelinghuysen had sent to a board member of the bank — to the press.

These are just a few of the stories that came back to us. For every one recounted here, there were more stories we heard from grassroots leaders across the country facing threats, ostracism, and even violence in their communities—all for standing up and making their voices heard. But the Indivisible movement kept building. These leaders had gotten involved because the country had fallen into the hands of a vile, dangerous bully. They were not going to put up with harassment in their own communities.

From the forthcoming book WE ARE INDIVISIBLE: A Blueprint for Democracy After Trump by Leah Greenberg and Ezra Levin. Copyright © 2019 by Leah Greenberg and Ezra Levin. Published by One Signal Publishers/Atria Books, an Imprint of Simon & Schuster, Inc. Reprinted by permission.

Sat, 14 Dec 2019 16:44:11 +0000 https://historynewsnetwork.org/article/173790 https://historynewsnetwork.org/article/173790 0
A President Ready to Pardon

“Mr. President: There’s a military plot to take over the government!”

 

Those may not have been the words—and probably not even the thoughts—of U.S. Navy Secretary Richard V. Spencer when he confronted President Donald J. Trump after the President pardoned a Navy SEAL for bringing discredit to the armed forces. 

 

But Secretary Spencer's actual response—and that of SEAL commander Rear Admiral Collin Green—represents the most open flirtation with disregard of presidential orders since General of the Army Douglas MacArthur flouted the policies of President Harry Truman in 1951.

 

The opening words of a military plot against the American government were fiction, of course—a scene from the 1964 Hollywood thriller Seven Days in May. The statement reflected the discontent in the film's fictional military establishment with the fictional political establishment. Adding to the story's resonance was the real-life conflict in 1951 between General of the Army Douglas MacArthur and President Harry S. Truman. After MacArthur had publicly criticized the President's Korean War policies, the President exercised his power as Commander in Chief and fired MacArthur.

 

Although the dismissal caused some discomfort in the Pentagon and outrage among civilian supporters who revered the general, the Constitution was, and is, quite clear: "The President shall be Commander in Chief of the Army and Navy of the United States, and of the Militia of the several States…."

 

When, therefore, President Donald J. Trump pardoned a U.S. Navy petty officer and ordered the dismissal of U.S. Navy Secretary Richard V. Spencer for defying presidential orders, the President clearly acted within his constitutional rights.  

 

But in doing so, say his critics, the President just as clearly violated the spirit, if not the letter, of the Uniform Code of Military Justice, and, they feared, he may well have undermined military discipline—and conduct—for years. The Navy petty officer committed a crime, and the President pardoned both man and crime. Moreover, the President had done so twice before—pardoning murderers in both cases.

 

The turmoil the President set loose with his latest pardon began simply enough, with the trial of Chief Petty Officer Edward Gallagher, a Navy SEAL. Accused of shooting Iraqi civilians, murdering a captive enemy fighter, posing for photographs by his victim's body, and threatening fellow SEALs if they reported his misconduct, Gallagher won acquittal on all but one count, for which the court martial demoted him one rank.

 

The President reversed the decision, but a continuing Navy investigation charged Gallagher with involvement with drugs, and Rear Admiral Collin Green, the SEAL commander, ordered Gallagher expelled and stripped of his prized SEAL insignia. Again, the President intervened, countermanding the admiral and provoking the Navy Secretary's resignation. Having served as a Marine captain and aviator, Secretary Spencer called the Navy's actions against Gallagher essential for "good order and discipline." He responded to the President's pardon by declaring, "I cannot in good conscience obey an order that I believe violates the oath I took."

 

But contrary to his statement, the Secretary's oath called on him to obey the Constitution, which makes the President commander in chief—even if, like Mr. Trump, the President never served in the military or if his order conflicts with the Uniform Code of Military Justice (UCMJ). In fact, the President is not and cannot be in the military and, ipso facto, is not subject to and cannot violate the UCMJ. Indeed, if a member of the military wins election to the presidency, he or she must sever all military ties and cannot wear any uniform in office. (Nor should the President even salute military officers. Doing so is a recent—and incongruous—custom started by President Reagan; it violates both the letter and the spirit of the Constitution.)

 

Although critics contend Mr. Trump's pardons may undermine military discipline, he is not the first President to issue controversial pardons. President Abraham Lincoln routinely pardoned Union soldiers convicted of desertion—more than 1,500 of them. "If almighty God gives a man a cowardly pair of legs," the President chuckled, "how can he help their running away with him?" In one case, Lincoln accompanied his order to release a deserter with the command, "Let him fight instead."

 

Not to be outdone, President Andrew Johnson granted full pardons on Christmas Day, 1868, to all Confederate troops who fought against the Union in the American Civil War.

 

Nearly a century later, President John F. Kennedy prevented the Army from punishing a soldier for insulting the President, while President Richard M. Nixon commuted the life sentence of 1st Lt. William Calley for leading the My Lai massacre of hundreds of Vietnamese civilians in 1968.

 

And on his first day in office in 1977, President Jimmy Carter infuriated millions of Americans by granting unconditional pardons to hundreds of thousands of men who had evaded the draft during the Vietnam War while hundreds of thousands of less privileged men had fought, bled, and died in battle.

 

So President Trump was far from the first to issue controversial military pardons. Indeed, the most recent pardon was his third since taking office. Previously, he had pardoned two Army officers accused of murder, setting off a storm of criticism that he was undermining the Uniform Code of Military Justice.

 

Such criticism, however, may be based more on politics than military science. With 1.3 million members of the military living in a strictly controlled hierarchy in 150 countries, the likelihood of a single presidential pardon provoking a widespread disciplinary breakdown, let alone mutiny, is beyond remote. 

Sat, 14 Dec 2019 16:44:11 +0000 https://historynewsnetwork.org/article/173796 https://historynewsnetwork.org/article/173796 0
NATO's First Post-Wall Summit 30 Years Later

NATO Summit 1989, NATO Photos

 

NATO is not brain-dead, contrary to what President Emmanuel Macron of France has recently claimed. The alliance is vital for our security. On November 20, 2019, in response to Macron's claim, Germany's Foreign Minister Heiko Maas emphasized that "NATO has for the last 70 years been Europe's life insurance."(1) Maas suggested stronger political ties to improve NATO's cohesion as a way to adjust the alliance to the current challenges we face.

 

Debates over NATO's purpose are not new. The alliance has always managed to adapt in times of challenge. Thirty years ago, NATO witnessed one of its finest hours: on December 4-5, 1989, NATO leaders met in Brussels for their first summit after the fall of the Berlin Wall. The meeting carried bold significance, signaling that NATO was determined to play the pivotal role in the emergence of Europe's new security architecture. President George H.W. Bush saw NATO as the core for the "future shape of the new Europe and the new Atlanticism."(2)

 

From a German vantage point, the key outcome of the Brussels Summit was President Bush's full support for Germany's unification. Bush argued that NATO's policy should be based on four key principles: self-determination, Germany's commitment to NATO, general European stability, and support for the principles of the Helsinki Final Act. America's leadership was essential for Europe's transformation. The Europeans needed the United States as a power broker and alliance manager at a time when the Soviet Union as the external enemy had disappeared.(3) Without NATO and without American troops in Europe, the European states would lapse into a security competition among themselves. The disastrous European Council Summit in Strasbourg on December 8-9, 1989 revealed rifts and mistrust: Margaret Thatcher, Italian Prime Minister Giulio Andreotti, and Dutch Prime Minister Ruud Lubbers attacked Chancellor Kohl for his bold initiatives to achieve unification. As a high-level European argued, "it is not acceptable that the lead nation be European. A European power broker is a hegemonic power. We can agree on US leadership, but not on one of our own."(4)

 

Unified Germany was a potential source of instability. Its NATO membership was the best way to contain it and to keep it integrated in the West. Bush and Kohl discussed Germany’s unification on the eve of NATO’s Summit when Kohl pointed out that “everyone in Europe is afraid of two things: (1) that Germany would drift to the East – this is nonsense; (2) the real reason is that Germany is developing economically faster than my colleagues. Frankly, 62 million prosperous Germans are difficult to tolerate – add 17 million more and they have big problems.”(5)

 

Bush used his intervention at the outset of the December 1989 NATO summit to set the tone and to lay out a blueprint for a strengthened NATO, buttressed with additional political responsibilities. NATO would remain North America's primary link with Europe. Bush encouraged the European allies to build a united Western Europe within an Atlantic framework, thereby sustaining US leadership. He emphasized that NATO's task was "to consolidate the fruits of revolution and to provide the architecture for continued change."(6) The promotion of greater freedom in Eastern Europe was a basic goal of NATO's policy. At the same time, NATO's primary role would continue to be security and deterrence. The Soviet Union still posed a major military threat, and it was unpredictable whether Gorbachev's reform policy in the Soviet Union would eventually prevail. "I pledge today that the United States will maintain significant military forces in Europe as long as our Allies desire our presence as part of a common security effort. The U.S. would remain engaged in the future of Europe and in the common defence. This was not old thinking but good thinking," Bush said.(7)

 

The December 1989 Summit also paved the way for NATO’s transformation into a more political alliance. NATO had to reinvent itself politically for the initial challenges of the post-Cold War era. NATO’s aim was to work for a Europe whole and free and to create an enduring peace on the continent. NATO’s enlargement was part of this endeavor. The current scholarly debate about NATO enlargement often views NATO exclusively through a Russia prism and overlooks that NATO’s outreach to the East was preceded by a rebalancing within the West related to Germany’s role in NATO. Unified Germany became America’s strategic partner toward Central and Eastern Europe and the Soviet Union’s successor states. At the same time, one of NATO’s key functions still pertained to the containment and integration of Germany. NATO Secretary General Manfred Woerner reiterated that it was a “historic task […] to protect the Germans from temptation, Europe from instability, and safeguard those elements that have made a new Europe possible.”(8)

 

(1) Foreign Minister Maas ahead of the NATO Foreign Ministers Meeting, Press Release, 20 November 2019, see https://www.auswaertiges-amt.de/en/newsroom/news/maas-nato/2278322, accessed 22 November 2019.

(2) See “President’s Afternoon Intervention on the Future of Europe,” 4 December 1989, https://www.margaretthatcher.org/document/110774, accessed 22 November 2019.

(3) See Mary E. Sarotte, 1989. The Struggle to Create Post-Cold War Europe (Princeton: Princeton University Press, 2009); Kristina Spohr, Post Wall, Post Square. Rebuilding the World after 1989 (London: Harper & Collins, 2019).

(4) Robert O. Keohane, “The Diplomacy of Structural Change. Multilateral Institutions and State Strategies,” in: Helga Haftendorn and Christian Tuschoff (Eds), America and Europe in an Era of Change (Boulder: Westview, 1993), p. 53.

(5) Memcon Bush and Kohl, 3 December 1989, https://bush41library.tamu.edu/files/memcons-telcons/1989-12-03--Kohl.pdf, accessed 22 November 2019.

(6) For a verbatim account of Bush's remarks, see Telegram UK Del NATO to FCO (Telno. 373) "Meeting of NATO Heads of State and Government, 4 December: Briefing by President Bush," 4 December 1989, in: The National Archives, Kew, Prime Minister's Office Files, PREM 19, Vol. 3102.

(7) Ibid.

(8) Memcon Bush and Woerner, 24 February 1990, https://bush41library.tamu.edu/files/memcons-telcons/1990-02-24--Woerner.pdf, accessed 22 November 2019.

Sat, 14 Dec 2019 16:44:11 +0000 https://historynewsnetwork.org/article/173793 https://historynewsnetwork.org/article/173793 0
An Extinct Species: The Liberal Republican

Nelson Rockefeller

One of the salient features of current headlines is the Republicans' refusal to break with the president elected on their party ticket. Donald Trump has garnered solid party support even when defying long-held Republican principles of economic conservatism, such as free trade and low tariffs, and strong opposition to adversaries like Russia and North Korea.

 

This phenomenon of total party unity runs counter to much of U.S. political history. It has been much more common for national political parties to have multiple wings or factions. Politicians within the same party could disagree on issues, but every four years they would try to coalesce around a single presidential candidate. The political parties have traditionally tried to accommodate a variety of interests and points of view. Because both parties had conservative and liberal wings, presidents could never count on total support from their parties in Congress. 

 

Yet this is no longer the case. As both parties have unified, those whose views are not considered mainstream enough have been sidelined or eliminated. This shift has meant the extinction of the liberal or moderate Republican.

 

Rising to prominence during the 1930s and '40s, liberal and moderate Republicans generally supported the liberal legislation of the New Deal and Truman's Fair Deal. In the 1960s they backed Kennedy's New Frontier and Johnson's Great Society.

 

Most of them came from the eastern seaboard. Among the best known were New York Governor Nelson Rockefeller and Senators Jacob Javits and Kenneth Keating of New York, Clifford Case of New Jersey, Leverett Saltonstall of Massachusetts, and George Aiken of Vermont (the last two of whom served as both governor and senator).

 

During the 1950s and '60s their influence proved pivotal. For example, liberal Republicans had a moderating influence on Republican presidential nominations. In 1952, they secured the nomination of moderate Dwight Eisenhower over conservative Robert Taft. Another instance of liberal Republican influence occurred when Nelson Rockefeller exerted his leverage over Richard Nixon. The GOP was set to nominate Nixon, yet a week before the 1960 GOP National Convention, Rockefeller's press secretary announced that the New York governor had concerns about the "strength and specifics" of the party platform. New York contributed 10% of the party's delegates, so Nixon worried Rockefeller might be able to swing the nomination. Nixon flew to New York to hammer out platform agreements with Rockefeller. At Rockefeller's Fifth Avenue apartment, Nixon agreed to increase defense spending, build a stronger nuclear arsenal, increase economic stimulus spending, and oppose segregation and racial discrimination. Sometimes called the Treaty of Fifth Avenue, these agreements prevented a platform fight.

 

Liberal Republican influence in Congress was also profound. When Medicare first passed the Senate in 1964, nine liberal Republicans voted for it, providing the margin of victory.

 

Although their voting records were similar to those of Democrats, these liberal or moderate Republicans never switched parties. Some wanted to remain untainted by the corrupt big-city machines that often controlled the Democratic Party. Jacob Javits's father worked for Tammany Hall and saw Democratic corruption first-hand, so Javits preferred the Republicans. Nelson Rockefeller's family had always been Republican, and he and some other Republicans did not want to associate with the racist, segregationist Southern Democrats. Like the Rockefellers, other liberal Republicans could trace their Republican heritage back to the time of Abraham Lincoln.

 

The beginning of the end of liberal Republican influence came with the rise of Senator Barry Goldwater. By 1964, the Arizona conservative had become the most popular Republican among party leaders and activists throughout the country. Although Goldwater carried only his home state plus five southern states in that year's presidential election, the conservative movement grew, fueled in part by negative reactions to the racial turmoil of the 1960s, the war in Southeast Asia, and the development of the counterculture.

 

By 1980, most of the prominent liberal Republicans had retired or had been defeated at the polls by Democrats or other Republicans. Robert F. Kennedy defeated Keating in 1964. Saltonstall, who retired in 1966, was replaced by Edward Brooke, a moderate-to-liberal Republican who lost his seat to a Democrat in 1978; Aiken retired and was replaced by a Democrat in 1974. Case lost to a conservative Republican in 1978, as did Javits in 1980. Nelson Rockefeller accepted Gerald Ford's appointment as vice president and faithfully served a conservative administration.

 

Beginning with the 1976 presidential election, Republicans began tripping over each other to prove their conservative bona fides. Ford claimed to lead the most conservative administration ever as he vetoed 60 bills passed by the Democratic-controlled Congress. Yet in 1976, the former Republican governor of California, Ronald Reagan, challenged Ford for the Republican presidential nomination. Reagan claimed he was a true conservative who would dismantle the welfare state and challenge the Washington bureaucracy. No liberal wing spoke up to counter the Republican drift to the right. Although he narrowly lost the nomination, Reagan was elected president in 1980, signaling the beginning of a new, more conservative era in American politics.

 

Since then, candidates have been largely in lockstep on economic issues such as taxes and the size of government, on immigration, and on social issues such as abortion and gay marriage. Anyone who dissents from these views has been marginalized or silenced, to the detriment of the party and the country.

 

Formerly, the Republican Party was composed of liberals, conservatives, and some who viewed themselves in the middle of the road. In Congress, liberal Republicans were often able to reach across the aisle preventing the gridlock that is so pervasive today. The wings of the party debated and competed with one another, but were usually able to unite behind a presidential candidate with broad appeal in a general election. In Federalist 10, James Madison states that politically diverse factions competing with one another would prevent tyranny. One cannot say that tyranny pervades the Republican Party. But one dominant faction does exist representing an extremist base. If a vibrant liberal Republican wing had been present in 2016, American history since that year might well have been very different.

 

Sat, 14 Dec 2019 16:44:11 +0000 https://historynewsnetwork.org/article/173791 https://historynewsnetwork.org/article/173791 0
The Massive Influence of Northern California Democratic Leaders in American Politics

Ronald L. Feinman is the author of "Assassinations, Threats, and the American Presidency: From Andrew Jackson to Barack Obama" (Rowman & Littlefield Publishers, 2015). A paperback edition is now available.

 

Northern California Democrats have played a major role in American politics in recent decades, and their influence has reached a peak in the time of President Donald Trump.

 

Past Democrats from Northern California, particularly around San Francisco, included Governor Edmund G. (Pat) Brown (1959-1967) and Senator Barbara Boxer (1993-2017), who also served in the House of Representatives (1983-1993).

 

Additionally, Pat Brown’s son, Jerry Brown, served as Governor when he was young (1975-1983) and again three decades later (2011-2019), along with being Oakland Mayor (1999-2007) and California Attorney General (2007-2011). Jerry Brown also sought the Presidency three times, in 1976, 1980, and 1992.

 

Supreme Court Associate Justice Stephen Breyer, a San Francisco native, has been a major liberal influence in his 25 years on the high court since his confirmation in 1994 as an appointee of President Bill Clinton.

 

Presently, Speaker of the House Nancy Pelosi is in her second stint in the highest-ranking post ever held by a woman in American government. Pelosi has served in Congress since 1987 and was previously Speaker of the House from 2007-2011. She is setting the standard for how to manage her Democratic majority while dealing with the danger and threat presented by President Donald Trump as the impeachment inquiry she so craftily developed moves forward.

 

Two San Francisco Bay Area members of the House of Representatives have also played a major role in the present impeachment effort. Congressman Eric Swalwell (2013-present) serves on the House Select Committee on Intelligence and the House Judiciary Committee, both key posts in the impeachment effort, and briefly sought the Presidency. Congresswoman Jackie Speier (2008-present) also sits on the House Select Committee on Intelligence in the present impeachment investigation.

 

Further, former San Francisco Mayor Dianne Feinstein (1978-1988) has served in the US Senate since 1992 and is Ranking Member of the Senate Judiciary Committee. She chaired the Senate Intelligence Committee when the Democrats had control before 2015.

 

The other California Senator, Kamala Harris, came to the Senate in 2017 after serving as California Attorney General from 2011-2017, and District Attorney of San Francisco from 2004-2011.  She recently ended her candidacy for the Democratic Presidential nomination in 2020.

 

Additionally, Governor Gavin Newsom, who took office in 2019, was previously Lieutenant Governor under Governor Jerry Brown (2011-2019), and also served as San Francisco Mayor from 2004-2011.

 

It is rare for one city and one area of any state to have as great an impact on American life as San Francisco and Northern California have had. The impact of these political leaders will still be significant in the 2020s.

Sat, 14 Dec 2019 16:44:11 +0000 https://historynewsnetwork.org/blog/154286 https://historynewsnetwork.org/blog/154286 0
Remembering Altamont, the Day the Sixties Died

Fifty years ago this month, on December 6, 1969, the 1960s died. On a sunny Saturday in the dusty hills east of San Francisco, the Altamont Rock Concert dissolved into chaos, leaving four dead and dozens injured. Like so many events of the 1960s, an idea conceived with good intentions went terribly wrong and ended in tragedy.

 

The day-long free concert, attended by 300,000 people, was held at a run-down race car track, a choice made at the last minute after negotiations to hold it closer to the city fell through. The event was sponsored by the Rolling Stones, who initially envisioned a "Woodstock West," a free music festival featuring the biggest California bands, including Crosby, Stills, Nash and Young, the Jefferson Airplane, and the Grateful Dead.

 

In what was to become, quite literally, a fatal mistake, the event organizers hired the Hells Angels motorcycle gang to serve as security guards. As noted in Joel Selvin’s book, Altamont, the biker group was hired (for $500 worth of beer) on the recommendation of Rock Scully, the Grateful Dead’s manager.   

 

Today, the event is remembered for the shocking images of violence that occurred at the climax of the concert, as the Rolling Stones took the stage surrounded by Hells Angels. As the drug-fueled young men and women in the crowd surged forward, the leather-jacketed gang members beat many senseless with pool cues and motorcycle chains. Dozens were taken to the hospital, and one young black man was stabbed to death.

 

The most complete visual account of the violence was captured in Gimme Shelter, the concert documentary film made by the Maysles brothers. When the film appeared in theaters in 1970, its scenes of swaggering gang members and bloody concertgoers provided a shocking counterpoint to the cheerful atmosphere of brotherhood and sharing associated with Woodstock.

 

Altamont's significance came from its timing, just four months after Woodstock. Four deaths at a rock concert was a tragedy, but other contemporary events had produced higher body counts. The members of the Manson gang had killed seven people in August, and despite President Nixon's pledge to de-escalate, the Vietnam War claimed 12,000 American lives in 1969.

 

I attended the Altamont concert along with two college friends. For many of us, the disaster was particularly demoralizing because the event had been billed as California's answer to Woodstock. Although I was safely ensconced at a suburban junior college, like many Bay Area young people I felt a sense of belonging to the hippie, flower-power movement. After all, we had seen it first-hand in San Francisco's Haight-Ashbury district; we had attended the free concerts in Golden Gate Park, heard Allen Ginsberg chant and seen the Diggers hand out free food.

 

The Concert

On that fateful December 6, we headed out on the freeway for the remote Altamont Pass. We turned off onto the narrow road leading to the Speedway and were excited by a scene that reminded us of the photos of Woodstock. Thousands of cars were parked head-first on the roadside and a huge stream of young men and women walked towards the concert, toting blankets, wine bottles and backpacks. Guitar music, casual laughter and clouds of marijuana smoke filled the air.

 

The first sign of trouble appeared as we entered the gates of the race track. Inside, a dozen Hells Angels roared up and down along the edge of the crowd. Revving their engines and spinning their wheels, they sprayed the concertgoers with mud and gravel. Some of the bikers cruised slowly by making obscene comments about the young women. I saw a large bearded gang member with oil-stained jeans grab a wine bottle out of the hands of a startled attendee and speed off.

 

My friends and I glanced nervously at each other. What could we do? There weren’t any cops in sight. No one wanted to pick a fight with a Hells Angel. 

 

The event stage was set at one end of the oval speedway. A series of gently sloping, grass-covered hills ringed the track. By 10 am, a massive throng of white, suburban youths were spread across the bumpy landscape.

 

We wedged ourselves into a space about a thousand feet from the performers. With our view obscured by two metal scaffolds bristling with movie equipment and an old school bus, we could only see part of the stage.  

 

The primitive sound system might have sufficed for a small auditorium, but much of the music was dispersed in the afternoon breeze. For us, the sound quality was akin to what you'd hear standing in a parking lot and listening to a radio playing inside a car.

 

We could see and hear the different bands come and go, but we had no idea of the beatings taking place near the stage. This was a sixties outdoor concert experience.  It is hard to imagine today, when pop performances use giant video screens to project the singers’ faces and the audience focuses on the event through their cell phones.

 

At Altamont, we endured a long wait as evening fell and temperatures dropped. Our patience was rewarded when the Rolling Stones finally took the stage around 8 pm. The sound system was turned up and the band cranked through a dozen of their hits including "Honky Tonk Women," "Jumpin' Jack Flash," "Satisfaction" and "Sympathy for the Devil."

 

After the last song, the band hurried off the stage. We stumbled through the darkness, tripping over blankets and wine bottles, guided to our cars by the glow of distant headlights.

 

It wasn't until the next day that we learned the casualty count: one man drowned in a drainage canal, two campers crushed by a runaway car and one young African American man, Meredith Hunter, stabbed to death in front of the stage.

 

In this pre-Internet era, our main source of news about the event was KSAN-FM, the Bay Area’s most popular rock station. Their news director hosted a six-hour call-in program. Beating victims and eyewitnesses recalled their experiences, often in tears. As we struggled to understand what happened, it became clear that it was one thing to question authority, another to abolish it altogether. 

 

A week later, Rolling Stone covered the disaster and described it as "rock and roll's all-time worst day."

 

On December 31, the sixties officially ended and the new era of the 1970s began. Following the assassinations, urban riots and massive casualties of Vietnam, we hoped for a fresh beginning. Through TV ads and newspaper commentaries, the media suggested the seventies would be sleeker, smoother; society would be restyled after the head-on crash of the 1960s.

 

Instead, we endured four more years of the Vietnam War, the Kent State shootings, gasoline rationing, Watergate and the Iran hostage crisis. In November 1980, the electorate turned decisively conservative and elected Ronald Reagan.

 

Today, the Woodstock festival is commemorated by a large concrete memorial placed in front of the grassy hillside where the concert was held. Nearby, the Museum at Bethel Woods displays artifacts celebrating the festival and the counterculture it embodied.

 

In contrast, the Altamont Speedway sits empty, having ceased operations a decade ago. The concrete oval remains, in a small valley hidden from the nearby interstate. The only sound is the whirring of hundreds of wind turbines, sending electricity to the wealth engines of Silicon Valley.  

Sat, 14 Dec 2019 16:44:11 +0000 https://historynewsnetwork.org/article/173730 https://historynewsnetwork.org/article/173730 0
After the Bleeding Stopped

Writing about an event you were involved in a half-century ago is dicey business. This is especially true when there was no contemporary press coverage of your piece of the affair to check against and you've either drifted out of contact with the principal participants long, long ago or they are, well, rather dead.

 

It's not as if the event itself, the disastrous Rolling Stones free concert at California's Altamont Speedway, was not extensively covered in all its painful detail at the time. It was even documented in the iconic film Gimme Shelter. Multiple cameras captured the chaos on and in front of the stage as Hells Angels "security guards" scuffled with band members and charged into the crowd to pummel with pool cues anyone they perceived as threatening them or otherwise "out of line." A man was stabbed to death by a trio of Angels during the Rolling Stones' climactic performance. Altamont immediately became the media counterpoint to Woodstock's "3 Days of Peace and Music" held four months earlier near New York City.

 

Though I hesitated to sort through my memories and write about the concert for its 50th anniversary because I'm missing more details than I'm comfortable with, underground press chronicler Ken Wachsberger encouraged me. "Just say what you remember," he wisely advised. So here goes.

 

My role at Altamont was to coordinate the post-show grounds cleanup. In truth, it was my own big mouth that entangled me with the whole mess. In 1969, I got involved in the music and countercultural scene in Kansas City, successfully negotiating the Mother Love Tribe’s weekly Sunday park concerts with the Parks and Recreation Department while working on the local “underground” paper, the Screw (later renamed Westport Trucker). 

 

It had been an absurdly busy year. I was also, as time and circumstances allowed, helping out a local rock band; romancing a lady; finishing high school a year early; and successfully obtaining the objective of a lawsuit against a local school district on behalf of some of the inmates left behind. In the midst of this, I was called out at the very last minute to lend some minor assistance at the Woodstock Music and Art Fair north of New York City. I arranged for a colleague to handle my last two scheduled park concerts and caught a plane to LaGuardia.

 

After helping at Woodstock, I went to San Francisco to work with a local production company, the Family Dog, but with the real objective of relaxing and getting myself "grounded" between concert seasons in Kansas City. This was my third brief stint with the Dog since 1966 and I did whatever odd tasks their revered founder, Chet Helms, had for me. Chet put me up in their relatively unused green school bus parked below the sand dunes and beside the Dog's music hall on the Pacific coast. The bus's exterior was well faded from the sun and sea air and the interior required quite a bit of cleaning and reorganizing, but it couldn't have been a sweeter set-up.

 

That fall, I’d proudly recounted to Chet, the Grateful Dead’s Ron McKernan and Phil Lesh, It’s a Beautiful Day’s David LaFlamme, and others what a marvelous job Mother Love had done keeping KC’s Volker Park in shape despite the crowds.  Since everyone wanted to hear about Woodstock (it was kind of a mix of curiosity and San Francisco “Woodstock envy”), I’d also relayed my experience working it. These stories put me on the radar. 

 

Originally, the Rolling Stones concert was supposed to be held at Golden Gate Park. The second planned location, the Sears Point Raceway near the Dead’s ranch in Novato, also fell through. In the middle of this location turmoil, I got a call from Rock Scully, one of the Grateful Dead’s managers. He said he heard grrr-aaat things about my work and would like to know if I’d be willing to coordinate the grounds cleanup at the new Altamont Speedway location. 

 

I explained to Scully that I had nothing to do with any grounds work at Woodstock; that my principal function, shared with two other fellows (one of whom had disappeared almost immediately), was keeping an eye on the unused lighting equipment under the stage and shooing people off the elevator frame when equipment had to be moved. With the exception of just one lengthy stint when I helped get helicoptered-in food supplies up to the far crest of the bowl where Hugh Romney's diligent Hog Farmers and other volunteers were preparing food for the masses, I'd been outside of "the citadel"* only very briefly, perhaps four or five times. I also informed him that only two or three of the 15 Volker Park shows I did before Woodstock were attended by more than 10,000 people (which, in truth, was an exaggeration because I should have said 5,000).

 

But Scully literally wouldn’t take “no” for an answer and pressed hard.  He even put Grateful Dead front-man “Pigpen” Ron McKernan on the line.  His brother had done me some favors “way back” in ‘66-‘67 (no, they weren’t drug related) and Ron brought it up in an effort to push me into agreeing!  Good cop - bad cop?  I liked Ron but this seemed more like bad cop and bad cop. 

 

I knew that, due to the last-minute problems associated with moving the production roughly a hundred miles from Sears Point, it would be a big task. Others at the Dog were asked to assist and, like me, reluctantly agreed only because it was the Grateful Dead doing the asking.  One good friend, ace sound man Lee Brinkman, had mixed feelings but saw it almost as a duty.  I consoled myself with the thought that my work would not truly start until the day after the show and looked forward to, for once, just being a spectator.

 

“OK.  OK.  I’ll do it,” I told Scully. 

 

*The citadel was what Woodstock's red-shirted security personnel called the fenced-in area that they had withdrawn to when the festival's outer perimeter was abandoned and the event was declared to be free to all. It encompassed the area immediately in front of the stage, the conglomeration of trailers and facilities to the rear, and wide expanses to both sides that were used as helicopter landing fields.

 

The Concert

The barren brown hills that marked much of the final drive to the speedway were a far cry from the lush green of the bus ride into Bethel, New York, months earlier, but the crowd at the site was in good spirits, almost festive.  

 

After setting up camp with some friends about a hundred yards from the stage, I went around back to "check in." Unlike Woodstock, there was no "citadel," and the backstage area was a disorganized and tightly packed jumble of rental trucks, school and metro-type buses, tents, and trailers. I quickly located the Dog's green school bus, which would serve as my post-show office and storage, but the expected tools were nowhere to be found. They must be somewhere else or coming later, I figured.

 

The most unsettling difference between the two events, however, was that the stage at Altamont was frightfully low, about three feet. Gravity matters. This design feature had been perfectly appropriate for the elevated ground of the Sears Point site but invited trouble when situated in the low area here, as people would tend to be pressed against it instead of backing away. I also didn't like that the Hells Angels' San Francisco chapter that frequently worked security for the Grateful Dead did not seem to be there in force. Nor did I see club president Sonny Barger, who had supplied a steadying hand when things began to escalate between the Angels and those waiting in line outside the Family Dog to see the Dead's side band, New Riders of the Purple Sage, just a couple weeks before. Instead, Angels from the San Jose chapter and many bikers of undetermined origin hugged the stage area as a considerable amount of beer was passed around.

 

A fellow with the band It's a Beautiful Day who was helping with the staging (it wasn't singer-violinist David LaFlamme) confirmed that Sonny wasn't there yet, and one of the SF Angels said he was dealing with "legal stuff" back in the city but that he'd definitely be coming.

 

As the concert started, there were problems immediately. The free Stones concert also featured about a half dozen of San Francisco's top bands, and the first one up, Santana, leaped to a bouncing start but quickly came to a jarring halt. There was some kind of trouble at the stage, but I thought it was probably just a problem with the sound system and wondered if my friend Lee was pulling his hair out. I couldn't have been more wrong.

 

The gruesome events that went on until well after dark have been recounted over and over elsewhere. I went around behind the stage three times before the show ended in futile attempts to see if I could be of any assistance. It was on my first trip back, as the trouble escalated, that I got the shock of my young life. I was carrying out some simple task with another Family Dog Productions guy (whose name is now forgotten) when we were told that the Grateful Dead were bugging out. They were fleeing the scene and leaving everybody else to deal with the mess they'd organized on behalf of the Rolling Stones! We immediately turned to each other. His jaw had practically dropped to the ground and his eyes were popped wide. I must have looked the same to him. The only reason we were there was because the Dead had asked us.

 

Ultimately, I remember only one band --- and one band only --- making it all the way through their set with no interruptions: The Flying Burrito Brothers. Because bands shortened their sets and because the Stones wanted their performance to be held after dark for dramatic effect, the absence of the Dead created a gap of more than two hours before the Stones went on stage.

 

At the time, when I learned that the Stones would not go on until it was dark, I naively assumed that there was a bright side to the unexpected moratorium. I believed that the gap would actually provide a "cooling-off" period and the bikers could be delicately gotten under control, off the stage, and reorganized into specific security duties.

 

Tragically, the people who were in the best position to carry this off, the Grateful Dead, had run for the tall grass, and by the time Sonny finally roared up with an escort of more SF Angels, things were far beyond even his ability to restore order. Instead of cooling off, tensions mounted during the long, long wait, and beatings resumed as the Stones started up after nightfall and played on, oblivious for a time to the death that had occurred right in front of them.

 

The Aftermath

Later that night, after the bleeding stopped, everyone involved in the organization and staging of the show worked at record speed to “get the heck out of Dodge.”  By first light, there were only two abandoned vehicles in the immediate area, a few cars in the far distance, and the Dog’s faded green bus, my “office” for the clean-up.  

 

Thankfully, the massive throng had come and gone in less than 24 hours and there’d been no rain so the grounds were in nowhere near as bad a shape as after Woodstock.  Nevertheless, the flood of humanity, receding hurriedly in the dead of night, had left enough refuse in its wake to keep a crew fully employed for a couple days. Unfortunately, there was no crew.  

 

When the Grateful Dead's manager Scully asked me what I needed for the cleanup, I asked how many people were expected. He'd guessed there would be "50-, maybe 100-thousand" attendees, but it was actually about 300,000. Despite my tender age I'd already been involved --- to varying degrees --- in stuff like this for several years. I told him I would need willing workers, cash for 50 yard rakes plus 20 each of garden rakes and shovels, a truck with drinking water, three days' meals catered for 50 people, five porta potties in the work area, tents and cots for 50, and for the Speedway to agree to make their phone(s) available. I also said that we would have to hire a firm with the proper equipment to scoop our piles into trucks that would haul the stuff off for disposal.

 

Scully promised he would personally ensure that it was on site. As for the workers, Scully said that could be coordinated with the current version of the old Haight-Ashbury Switchboard and that he'd already talked with them --- so he said --- and told me who to contact.

 

To make a long story short, I received almost none of the required support. The folks at the Switchboard said that they’d never heard from him or anybody about this. I found it hard to believe that Scully had simply lied to get an agreement out of me, and rationalized that he’d intended to get that ball rolling and had gotten sidetracked.  Though apprehensive, I pressed on and the Switchboard made a genuine effort to enlist at least some volunteers, but at the site itself, there wasn’t one lousy rake or any of the other things Scully promised. 

 

Extensive improvisation was required to get the job done. Rakes were fashioned from the wooden slats inserted diagonally in the track's perimeter chain link fence that lined the top of the ridge to the right of the stage. The impromptu clean-up crew, which fluctuated from about 15 to 45-50 people depending on the time of day, comprised principally stragglers who were willing to be put to work. These were primarily kids who had become separated from their rides. The rest of the stragglers were either those who wanted to linger a little longer or those simply too blasted to arrange a hook-up with others heading back to the city. There were as many as 250 of them that morning, and I'd begun my recruitment campaign the night before.

 

We faced an enormous task. First, we gathered the glass and heavy items like abandoned ice chests and a surprising number of shoes into irregularly spaced piles sorted by type --- glass, cans, and miscellaneous --- which included some 30,000 wine bottles (roughly one for every 10 to 12 in attendance).  Then followed the attack on the paper waste. Working individually and in skirmish lines of up to a dozen, the crew had consolidated much of the refuse into piles by the middle of the third post-show day, and luckily, at least while I was there, no significant winds came up to rescatter the paper.

 

Though you had to walk a pretty fair distance to the porta potties near the stage, especially if you were working at the far ends of the grounds, the facilities were clean and not overflowing.  Water could be obtained from the speedway office, which was a pretty good hike up and around the track, but there was no shortage of jugs and bottles to put it in.  What we didn’t have was food.  

 

I would have brought a whole list of contact numbers if I’d known that things would work out this way, but, as it was, I had just two: the drummer Mickey Hart’s at the Grateful Dead’s ranch and the Family Dog’s office.  The former was constantly busy (off the hook?) and the latter alternately busy or not answering.  I was, however, able to get the Switchboard’s number from the operator and, true to form, they were apparently able to put out the word on our plight; relief came in the form of a perfectly timed arrival of fruits, vegetables, and hard-boiled eggs for dinner.  Fruits, veggies, baked goods and some desperately needed extra rakes were brought the next morning by a small party of ecology-minded students from either UC Berkeley or San Francisco State.

 

The wooden slats in the track’s perimeter fence were steadily disappearing, as the stragglers had recognized them early on as an excellent source of firewood on the cold nights.  As for myself --- and, later on, an injured kid and his girlfriend --- I was able to take advantage of the butane heaters for the stage that had been stored in the Family Dog’s bus. I also put up the Dog’s very distressed soundman, Lee, for two nights, since he’d accidentally been left behind during the confused nighttime exit after the show. The stragglers, including my volunteer workers, understandably dwindled at an escalating pace.

 

Scully showed up at the speedway office three days after the event with a load of itsy-bitsy, triangle-cut ham and cheese sandwiches, a gaggle of reporters and, of course, no tools.  From atop the hill by the track he gathered the reporters and camera crews and waved his arm over the expanse and the --- from a distance --- neat piles of trash dotting the grounds from the abandoned stage and towers and extending out perhaps a quarter mile. 

 

Scully spoke proudly of the cleanup yet made no effort at all to speak with the remaining kids who had now been at it for several days.  As for me, it was only by luck that I’d been at the speedway office waiting for my ride out when he arrived.  We’d never met each other face to face --- and he made no effort to find me either --- so I used my anonymity to hold back and observe the circus. I made one last trek down to the stage area to tell people that additional food had arrived at the office.  Then I was outta there.

 

If the Grateful Dead and their management had followed up on supplying the technical support they’d promised, the cleanup would have been a pretty straightforward task. However, the speedway employees I dealt with firmly believed that all the pushing and the promises by Scully to get me to say “yes” were just so that he could tell the site’s owners during the negotiations that he had things all arranged to clean up the inevitable mess.  After all this time, who really knows?  What I can say is that while many features on Altamont’s 50th anniversary will focus on the violence, I’ll always remember what happened after the bleeding stopped: the chaos, the broken promises, and what the willing volunteers --- unsupported and unknown --- ultimately accomplished there.

A Wealth Tax? Two Framers Weigh In

 

Wealth taxes are on the current political table and hotly debated. All taxation was on the framers’ table as they considered a new constitution. What would they make of the measures we are considering now? And more to the point: does the Constitution they drafted allow Congress to tax a person’s overall wealth? 

 

The short answer: yes and no. The longer answer requires historical context.

 

In the six months preceding the Federal Convention of 1787, Congress received from the separate states, which alone possessed powers of taxation, a grand total of $663—hardly enough to run the nation. Little wonder that the framers’ proposal, what is now our Constitution, granted Congress sweeping authority to levy taxes: “The Congress shall have Power to lay and collect Taxes, Duties, Imposts and Excises …”

 

Although taxing authority was broad, the framers delineated only five specific types, each with a qualification:

 

“Duties, Imposts, and Excises shall be uniform throughout the United States.” 

“No Capitation, or other direct, Tax shall be laid, unless in proportion to the Census or enumeration.”

“No Tax or Duty shall be laid on any Articles exported from any State.” 

 

Presumably any taxes not mentioned would have to meet qualifications as well. The underlying principle was fairness, but mechanisms to achieve that goal differed. Exports were ruled out so Congress wouldn’t handicap Virginia, for instance, by taxing tobacco. Most other taxes would be uniform. Direct taxes, on the other hand, had to be apportioned according to state populations. 

 

But what, exactly, was a direct tax? And how might that differ from its opposite, an indirect tax, a term that does not appear in the Constitution? 

 

That’s what Rufus King, delegate from Massachusetts, asked his colleagues at the end of the day on August 20, 1787, after some twelve weeks of deliberations. From James Madison’s Notes of Debates: “Mr. KING asked what was the precise meaning of direct taxation. No one answered.” We have no indication that the framers ever defined direct and indirect taxation, either then or at any other time during their proceedings. 

 

Fast forward to the mid-1790s, when two former delegates weighed in. Alexander Hamilton, while Secretary of the Treasury, recommended a federal tax on carriages, an item that only rich people could afford. Congress levied that tax, but Daniel Hylton, a carriage owner, challenged the measure on constitutional grounds. It was a direct tax, he argued, and Congress had failed to apportion it amongst the states. The case found its way to the Supreme Court. There, Associate Justice William Paterson, who had introduced the New Jersey plan at the Federal Convention, offered a coherent explanation of the framers’ treatment of taxation:

 

“It was … obviously the intention of the framers of the Constitution, that Congress should possess full power over every species of taxable property, except exports. The term taxes, is generical, and was made use of to vest in Congress plenary authority in all cases of taxation… All taxes on expences or consumption are indirect taxes. … Indirect taxes are circuitous modes of reaching the revenue of individuals, who generally live according to their income. In many cases of this nature the individual may be said to tax himself.”

 

An individual taxes himself because he chooses to participate in an activity that is taxed; he can circumvent the tax by not purchasing or consuming the item in question. That’s what makes it “indirect.” A “Capitation, or other direct, Tax” is not optional in this sense; both capitation taxes and property taxes were widespread in America, and a person does not choose to be a person (a capitation tax), nor would he willingly give up all his property just to avoid a tax on it. But why didn’t the framers apply the simple rule of uniformity to such taxes? Paterson recalled that peculiar politics, not abstract reasoning, led to the Federal Convention’s approach:

 

“I never entertained a doubt, that the principal, I will not say, the only, objects, that the framers of the Constitution contemplated as falling within the rule of apportionment, were a capitation tax and a tax on land. … The provision was made in favor of the southern States. They possessed a large number of slaves; they had extensive tracts of territory, thinly settled, and not very productive. A majority of the states had but few slaves, and several of them a limited territory, well settled and in a high state of cultivation. The Southern states, if no provision had been introduced in the Constitution, would have been wholly at the mercy of the other states. Congress in such case, might tax slaves, at discretion or arbitrarily, and land in every part of the Union after the same rate or measure: so much a head in the first instance, and so much an acre in the second. To guard against imposition in these particulars, was the reason of introducing the clause in the Constitution, which directs that representatives and direct taxes shall be apportioned among the states, according to their respective numbers.”  

 

In the end, Paterson and his fellow justices concluded that the tax on carriages must be considered indirect. To apportion the tax among the states would be absurd, they argued; if there were only one carriage owner in a state, he would have to assume his state’s entire liability. And if there were no carriages, how could that state ever meet its apportioned share? But to strike down a tax on carriages because it was unworkable would seriously undermine Congress’s critical authority “to lay and collect Taxes,” which buttressed the entire governmental apparatus. The only alternative was to declare Hamilton’s carriage tax indirect—and thereby constitutional.

 

What might Paterson and his colleagues have concluded if Congress had levied a tax on all wealth, not just one particular luxury item? That depends. They might have applied the same reasoning: treating a wealth tax as “direct” would be hopelessly impractical, but if seen as an excise, it would fall within Congress’s plenary powers of taxation. Indeed, any tax (except one on exports) that the framers had not categorized as “direct” would be constitutional, if applied uniformly. So it all boils down to one question: what taxes, exactly, are to be considered direct?

 

While Paterson enumerated “a capitation tax and a tax on land” as the “principal” taxes “falling within the rule of apportionment,” he hinted they might not be the “only” ones. Perhaps he could envision others, taxes that were not addressed at the Federal Convention. New England governments in those times relied heavily on property taxes that included not only raw land but also improvements and livestock, which made the land more valuable. Such taxes were not based on “expences or consumption,” activities that might be avoided—Paterson’s criteria for indirect taxes. They taxed who a person was, economically—the apparent (although unstated) standard for a direct tax. 

 

Alexander Hamilton made this explicit. Within a brief filed for Hylton v. United States, he added a third category to the list of direct taxes:

 

“What is the distinction between direct and indirect taxes? It is a matter of regret that terms so uncertain and vague in so important a point are to be found in the Constitution. We shall seek in vain for any antecedent settled legal meaning to the respective terms—there is none.

 

“But how is the meaning of the Constitution to be determined? It has been affirmed, and so it will be found, that there is no general principle which can indicate the boundary between the two. That boundary, then, must be fixed by a species of arbitration, and ought to be such as will involve neither absurdity nor inconvenience.

 

“The following are presumed to be the only direct taxes.

Capitation or poll taxes.

Taxes on lands and buildings.

General assessments, whether on the whole property of individuals, or on their whole real or personal estate; all else must of necessity be considered as indirect taxes.”

 

In Hamilton’s estimation, any “general assessment” of a person’s “whole” property or estate—what we today call a wealth tax—was one of those “other” direct taxes that must be apportioned amongst the states. But to apportion a wealth tax would be absurd; today, a handful of wealthy individuals in West Virginia or Mississippi, to account for their state’s quota, would have to pay several times as much as those residing in states with numerous rich taxpayers. In Hylton v. United States, the Supreme Court used the unfairness of apportionment to declare the carriage tax indirect, and therefore constitutional, but according to Hamilton, that line of reasoning is off the table. A wealth tax is a direct tax, pure and simple, he reckoned; unless apportioned, it would be unconstitutional. 
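
To see the arithmetic behind that objection, here is a minimal worked sketch; the revenue figure, population shares, and counts of wealthy residents are hypothetical, chosen only for illustration. Under apportionment, each state’s quota is fixed by its share of the national population, so the burden falling on each wealthy resident of state s is

\[
\text{liability per wealthy resident of state } s = \frac{p_s \, T}{n_s},
\]

where \(T\) is the total revenue Congress seeks, \(p_s\) is the state’s share of the national population, and \(n_s\) is the number of wealthy residents living there. Suppose \(T\) is \$100 million and two states each hold 1 percent of the population, so each owes \$1 million: a state with 1,000 wealthy residents collects \$1,000 from each of them, while a state with only 50 must collect \$20,000 from each, twenty times as much for taxpayers of identical wealth. A uniform tax, by contrast, would charge two equally wealthy taxpayers the same amount wherever they lived.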

 

Of course Hamilton was not on the Supreme Court, so his statement has no official bearing. Legal scholars might disagree with his assessment of “general assessments,” but politically, that is beside the point. No wealth tax based on apportionment among the states will make its way out of committee, and any wealth tax claiming to be “indirect” will inevitably wind up before the Supreme Court. There, originalists who idolize the framers will look no farther than Hamilton’s testimony to justify striking down the measure. 

 

And conservative justices, likely the majority for years to come, will weaponize Hamilton to support their preconceived aversion to wealth taxes. They will not be swayed, as some scholars contend, by Knowlton v. Moore, which in 1900 upheld an inheritance tax by treating it as indirect. There’s plenty of wiggle room between taxing a person’s “whole real or personal estate”—a wealth tax—and taxing how a person chooses to dispense with that estate. When conservative justices weigh an obliquely relevant case from 1900 against Hamilton’s forceful pronouncement that yields the result they prefer, there is little doubt as to which they will favor. 

 

Might Chief Justice Roberts be a swing vote? When upholding the Affordable Care Act as within the taxing authority of Congress, he declared: “A tax on going without health insurance does not fall within any recognized category of direct tax.” Because it was “plainly not a tax on the ownership of land or personal property,” it did not have to be “apportioned among the several States.” A wealth tax, on the other hand, would target “ownership of land or personal property.” It must be apportioned, Roberts will conclude, or else it’s unconstitutional.  

 

We have been through this before. In Pollock v. Farmers’ Loan and Trust Company (1895), the Supreme Court declared that taxing income derived from wealth (rents, interest, and dividends) was a direct tax and therefore had to be apportioned, while taxing income derived from labor (wages and salaries) was indirect and therefore did not have to be apportioned. This meant that Congress could tax working people readily, while taxing wealthy people would be unworkable. Workers cried foul. They pushed for, and got, the Sixteenth Amendment, which repudiated Pollock by lifting the apportionment requirement from all income taxes.

 

Today, facing rampant inequality, we can cry foul again—but we remain saddled with a provision of the Constitution geared to protect the slave-owning interests of Southern states in 1787. Even so, taxing income rather than wealth is always possible. There is no constitutional limit on the tax rate, so long as it is “uniform throughout the United States.”

 

 

Roundup Top 10!  

China Isn’t the Soviet Union. Confusing the Two Is Dangerous.

by Melvyn P. Leffler

An unusual confluence of events after World War II led to America’s bitter rivalry with the U.S.S.R. That pattern is not repeating.

 

The Forgotten Origins of Paid Family Leave

by Mona L. Siegel

In 1919, activists from around the world pressed governments to adopt policies to help working mothers.

 

 

A Historic Crime in the Making

by Rebecca Gordon

400 years of history leading up to Donald Trump.

 

 

Donald Trump, Meet Your Precursor

by Manisha Sinha

Andrew Johnson pioneered the recalcitrant racism and impeachment-worthy subterfuge the president is fond of.

 

 

Pelosi did what no one else could

by Julian Zelizer

From the perspective of presidential history, this will become a major part of how we remember the term.

 

 

Why Did U.N.C. Give Millions to a Neo-Confederate Group?

by William Sturkey

The University of North Carolina’s settlement over a controversial statue is a subsidy for white nationalism.

 

 

Trump’s border wall threatens an Arizona oasis with a long, diverse history

by Jared Orsi

Heavy machinery grinds up the earth and removes vegetation as construction of President Trump’s vaunted border wall advances toward the oasis.

 

 

Calling Trump ‘the chosen one’ is a political act — not a theological statement

by Wallace Best

Claims about God’s plans for the United States often morph into justifications for wrongdoing.

 

 

The History That Happened: Setting the Record Straight on the Armenian Genocide

by Ryan Gingeras

For a brief moment this fall, world interest fixed its attention on an event of the past. News that the U.S. Congress approved a formal resolution recognizing the Armenian Genocide was carried as a leading story by media outlets worldwide.

 

 

 

Spinster, old maid or self-partnered – why words for single women have changed through time

by Amy Froide

Attitudes toward single women have repeatedly shifted – and part of that attitude shift is reflected in the names given to unwed women.

How did November become the Mizrahi Heritage Month? And what’s Mizrahi anyhow?

A Yemenite family walks through the desert to a reception camp 

 

Recently, a growing number of Jewish American organizations began marking November as "Sephardic/Mizrahi Heritage Month." In the American context, awareness months serve to illuminate the histories of marginalized communities whose stories are overshadowed and underrepresented in official curricula and memory. The Mizrahi heritage month, by contrast, is not a local, grassroots initiative that emerged in response to experiences of discrimination or marginalization. Instead, it is a transatlantic importation of recent attempts by the Israeli government to commemorate the forced expulsion of Jews from the Arab and Muslim world in the wake of the establishment of Israel. Nor is November a month that has any particular significance in the histories or rituals of any of the dozen Jewish minority communities that resided in North Africa and the Middle East. Instead, the specific date, November 30th, was chosen by Israeli lawmakers as a symbolic birth date of the mass exodus of Jews from Arabic-speaking lands triggered by the UN Partition Resolution of November 29th, 1947.

 

In the North American Jewish world, the terms “Sephardi” and “Mizrahi” are often, erroneously, treated as synonymous. Yet, unlike the term “Sephardi,” which originates in the expulsion of Jews from the Iberian Peninsula in 1492 (Sepharad is the Hebrew term for Spain), Mizrahi is a category that is not only far more recent in historical terms but is also politically charged and rooted in a specific Israeli context. Mizrahi, literally meaning “eastern” or “Oriental” in Hebrew, was an adjective-turned-term that was coined in pre-statehood Palestine and later used in Israel to denote any non-Ashkenazi Jew. In early statehood Israel, “Oriental Jews” were widely regarded as less civilized, ill-educated, and lacking sufficient ideological commitment, and this patronizing attitude translated into discrimination. During the 1950s, Mizrahi Jews were sent to frontier settlements and to newly established "development towns" in the country's peripheral regions. Soon enough, these towns transformed into conspicuous pockets of deprivation and poverty, and their Mizrahi residents became a discernible low-status blue-collar class, deprived of the same employment and education opportunities as their Ashkenazi peers. In that process, the adjective "Mizrahi” became a highly contested and politically charged term, and not a neutral sociological category.  

 

Years of persistent civic struggle for equal rights by Mizrahi activists and scholars in Israel, accompanied by demands for recognition of their full history, did not solve all social problems and inequality, nor did they erase past scars. During the 1970s, the Likud Party’s leadership reappropriated the Mizrahi struggle to claim a stake in the Israeli national story. Other political parties, such as Shas, the non-Ashkenazi ultra-Orthodox party established during the mid-1980s, also tried to harness the Mizrahi struggle, with a considerable level of success. Legislation passed in 2014 that created a new day of commemoration for Mizrahi Jews is yet another attempt to divert the Mizrahi call for equality in Israel to a political cause. In particular, it uses the politics of memory to create a false equation between "Jewish refugees from the Arab World" and the Palestinian refugees. Jewish communities in the Middle East and North Africa underwent different experiences, which varied not only from country to country but even among communities within the same country. 1948 was undoubtedly a turning point for a great number of them, and the Israel-Palestine conflict loomed over much of what followed. But casting these rich histories into one single-dimensional narrative is, in fact, a cynical strategy employed by the Israeli Right to avoid addressing Palestinian claims for compensation on behalf of the Palestinian refugees. 

 

Right-leaning Jewish organizations in the US, such as the San Francisco-based JIMENA (Jews Indigenous to the Middle East and North Africa), were quick to adopt the Israeli ready-made mold of “Mizrahi commemoration” and to blend it with the American practice of “awareness months.” Their website describes them as a non-profit “committed to achieving universal recognition to the heritage and history of the 850,000 indigenous Jewish refugees from Arab countries.” Similar ideas are expressed by Hen Mazzig, a charismatic yet controversial "Hasbarah" (pro-Israeli advocacy) speaker who tours North American campuses to speak to students about his family's immigration from Tunisia and Morocco, his experiences as a gay officer in the IDF, and ways of combating anti-Israeli critics. He describes critics of Israel on US campuses as silencing Middle Eastern and North African Jews. Almost simultaneously, a call upon Jews to join a mass kaddish (a prayer traditionally recited in memory of the dead) on November 30th appeared on the pages of the Jerusalem Post. It remains unclear whether these are grassroots initiatives or a well-orchestrated, state-funded campaign. As the Jewish daily The Forward revealed, Mazzig is most probably a contractor paid by the Israeli government. 

 

While JIMENA asserts correctly that the heritage of Middle Eastern Jews does not receive equal space in the American-Jewish establishment, the kind of heritage it wishes to promote is equally superficial and shallow, made up mostly of stories of persecution and harassment followed by a final expulsion and a Zionist redemption. The historical narrative they are offering has less to do with the particular heritage and histories of these diverse communities and more to do with a politics of competitive victimhood and a “quid pro quo” argument about the nature of the Israeli-Palestinian conflict, in which Jewish refugees from the Middle East are cast as mirror images of the roughly 750,000 Palestinians who were expelled during the 1948 War.

 

As historians who dedicate their careers to researching modern Jewish history, and who believe in the importance of studying the histories of Middle Eastern Jewish communities alongside Ashkenazi communities, we welcome the intent to deepen and broaden our understanding of the place of these communities. Eurocentric assumptions, including our tendency to understand Jewish modernity writ large as coming out of the experiences of European Jews, provide a very narrow prism that fails to capture the Jewish historical experience in all its richness and diversity. It is about time that academic Jewish Studies programs expanded their curricula and educated students and the wider public about the culture and history of Mizrahi Jews, alongside other non-Ashkenazi communities such as the Yemenite Jews, Iranian (Persian) Jews, Greek and Balkan Jews, Caucasus Jews, Bukharan Jews and more. We also believe in the value and the importance of heritage months in raising awareness of, and helping advance our understanding of, marginalized social groups. Notably, per the Library of Congress website, Jewish American Heritage Month is celebrated in May, as part of an effort by Jews to be part of the “big American Jewish tent.” We raise our eyebrows, however, at what might be an attempt to hijack this noble cause for a partisan issue and a state-sponsored invented tradition. Jewish communities in the US should pay greater attention to the non-Ashkenazi stories alongside the Ashkenazi saga. But we wonder whether a day of commemoration that is copy-pasted mechanically rather than reflectively by Jewish diaspora communities would serve that purpose.

William Barr’s Upside-Down Constitution

 

Attorney General William Barr’s November 15 speech before the Federalist Society, delivered at its annual National Lawyers Convention, received considerable attention. Barr attacked what he views as progressives’ unscrupulous and relentless attacks on President Trump and Senate Democrats’ “abuse of the advice-and-consent process.” Ironies notwithstanding, the core analysis of his speech is a full-throated defense of the Unitary theory of executive power, which purports to be an Originalist view of the Founders’ intent. 

 

This defense, however, reveals the two fundamental flaws of the Unitary view: first, that it is built on a fictional reading of constitutional design; and second, that its precepts attack the fundamental tenets of the checks and balances system that the Founders did create. 

 

Barr’s speech begins with his complaint that presidential power has been weakened in recent decades by the other branches’ “steady encroachment” on executive powers. Even allowing for congressional resurgence in the post-Watergate era of the 1970s, no sane analysis of the period from the Reagan era forward could buttress Barr’s ahistorical claim. Ironically, the presidents in this time period who suffered political reversals—Bill Clinton through impeachment and Barack Obama through the thwarting of his agenda by congressional Republicans in his final six years in office—nevertheless emerged from their terms with the office intact in powers and prestige. 

 

Attorney General Barr’s reading of colonial history claims that the Founders’ chief antagonist during the Revolutionary period was not the British monarchy (which, he claims, had been “neutered” by this time) but an overbearing Parliament. Had Barr bothered to consult the definitive statement of American grievances, the Declaration of Independence, he would have found the document to direct virtually all of its ire against “the present King of Great Britain.” The lengthy list of grievances detailed in the document charges “He,” George III, with despotism and tyranny, not Parliament (some of whose members expressed sympathy for the American cause). Barr’s message? Legislatures bad, executives not so much. 

 

Barr insists that by the time of the Constitutional Convention there was “general agreement” on the nature of executive power and that those powers conformed to the Unitary vision—complete and exclusive control over the Executive branch, foreign policy preeminence, and no sharing of powers among the branches. Barr dismisses the idea of inter-branch power-sharing as “mushy thinking.” Yet the essence of checks and balances is power-sharing. As the political scientist Richard Neustadt once noted, the Founders did not create separate institutions with separate powers, but “separate institutions sharing powers.” 

 

And as if to reassure himself and other adherents, Barr insists that the Unitary view is neither “new”—even though it was cooked up in the 1980s by the Meese Justice Department and the Federalist Society—nor a “theory.” Barr says, “It is a description of what the Framers unquestionably did in Article II of the Constitution.” Yet aside from the veto power, he fails to discuss any actual Article II powers. And in the case of the veto, he fails to note that this power is found in Article I, and is generally understood as a legislative power exercised by the executive. Shouldn’t an Originalist take a passing interest in original text? Nor does he explain why Article II is brief and vague, compared to Congress’s lengthy and detailed Article I powers. What we know about that brevity and vagueness is that it reflected two facts: the Founders’ difficulty and disagreement in defining presidential powers, and the wish of a few Founders who favored a strong executive to leave that door open, hoping that future presidents might help solidify the office. That wish, of course, came true. 

 

Most of the latter part of Barr’s speech is devoted to a condemnation of the judiciary, which has not only set itself up as the “ultimate arbiter” of interbranch disputes but, worse, has “usurped Presidential authority” by the very act of hearing cases and ruling against asserted presidential powers. Underlying these complaints is the Unitary tenet that the courts have no right to rule in any area of claimed executive power. Barr vents his frustration at the extent to which Trump administration decisions and actions have found themselves tied up in court. Experts continue to debate what issues and controversies are or are not justiciable. But to assert by Unitary theory fiat that the courts cannot rule is to make an assertion found nowhere in the Constitution. And Barr also misses the fact that court rulings historically have most often favored executive powers. 

 

The Trump administration’s many questionable actions have raised both new and old concerns about the extent and reach of executive power. There is plenty of blame for abuses of power to spread around, and Congress most certainly deserves its share. But the Unitary theory offers no remedy to the power problems of the present era. And the idea that it somehow is an Originalist reading of constitutional powers would be laughable if it didn’t have so many adherents in the seats of power.  

Impeachment Has Always Been a Purely Political Process

 

What exactly is impeachment? A common view is that the House of Representatives judges that the president has committed a crime, then the Senate tries him. But the Constitution’s criterion of “high crimes and misdemeanors” does not correspond to anything specific in the U.S. criminal code. Even if it did, legal precedent has already established that a sitting President cannot be indicted. Hence, when special prosecutor Leon Jaworski presented indictments over the Watergate break-in, he listed President Nixon as an “unindicted” co-conspirator. Even if a prosecutor did manage to indict a sitting president, he would not face a trial. Only after being impeached, convicted in the Senate and expelled from office would he be liable to legal prosecution like other citizens.

 

Impeachability is therefore very much in the eye of the beholder. The Mueller probe was sometimes likened to a prosecutor returning his findings to a grand jury made up of Congress. But that was a false analogy. The Constitution is silent about what process is to be followed before articles of impeachment are voted on. No investigation is explicitly mandated. Strictly speaking, a newly elected House of Representatives could vote articles of impeachment on its first day, without any investigation. The reason this does not happen is that those favoring impeachment naturally prefer to be supported in the court of public opinion. But they needn’t rely on the outcome of an investigation if they are willing to gamble on the public’s support regardless. The Mueller probe disappointed House Democrats because it found no evidence of collusion with Russia and no evidence of obstruction of justice that could stand up in court. But it made no difference to Trump’s opponents. Adam Schiff and Eric Swalwell maintained that Trump was guilty of both before the Mueller probe was completed, and they maintain it now that it has come and gone. At the end of the day, they are banking on public opinion sharing their conviction that the president has committed a crime. Even if impeachment articles are voted in, however, the prospects of the Republican majority in the Senate convicting Trump are virtually zero.

 

Why doesn’t the Constitution make a president’s impeachment synonymous with a prosecution for breaking the law? In Germany, for example, the President can be impeached by either of the two legislative chambers. At that point, the case goes to a federal court, which decides if he is guilty and whether to remove him from office. To clarify why the American process is so complicated and indirect, we have to ask ourselves what the Founders intended impeachment to accomplish in the first place.

 

As I wrote in TYRANTS: POWER, INJUSTICE AND TERROR, one of the Constitution’s fundamental aims, according to Alexander Hamilton, was to forestall the emergence of an American tyrant — a “Catiline or Caesar.” Tyranny in the 18th century context need not connote full-blown monsters like Hitler. If a ruler violated Americans’ rights, like King George III taxing them without granting them representation in Parliament, that was tyranny. Ancient democracies like Athens, according to Hamilton, veered between the extremes of tyranny and anarchy. They could only rely on a virtuous statesman like Pericles to break this cycle. Such statesmen are very rare, and could prove to be tyrants in disguise — it’s too chancy. The “new” political science of the Enlightenment, Hamilton says, relies instead on the institutional division of powers, preventing one branch of government from tyrannizing over the others. Just as the president having the power of the purse would violate Congress’s jurisdiction, Congress’s ability to try a president like a law court would violate that of both the Executive and the Judicial branches. Rooted in Machiavelli’s analysis of the Roman Republic, the American constitution was designed to promote a peaceable political warfare among the three branches of government that would forestall the actual violence of civil strife resulting from one branch reigning supreme. As James Madison put it, the causes of vice are sown into the human soul. We can’t remove those causes, but through the correct institutional mechanisms, we can impede their harmful effects.

 

In sum, the removal of a President from office through impeachment was designed as a purely political process. It is value neutral with respect to legal guilt or innocence. The Founders stacked the deck against it happening so that it would not be trivially invoked, which is why it’s happened only three times. In the case of Donald Trump, at the end of the day, all that matters is how the House and Senate vote. This may not be as morally satisfying to either side as outright condemnation or exoneration. But the Founders were wary of the power of moral outrage to spark the extremes of tyranny and anarchy characteristic of democracy in the past. Whatever the verdict regarding President Trump’s impeachment, the reaction will be confined to screaming pundits, not armed mobs. Or so we hope.

Neville Chamberlain, Sir Horace Wilson, & Britain's Plight of Appeasement

 

Adapted from Fighting Churchill, Appeasing Hitler by Adrian Phillips, published by Pegasus Books. Reprinted with permission. All other rights reserved.

 

In 1941, as his time in office drew to a close, the head of the British Civil Service, Sir Horace Wilson, sat down to write an account of the government policy with which he had been most closely associated. It was also the defining policy of Neville Chamberlain, the Prime Minister whom Wilson had served as his closest adviser throughout his time in office. It had brought Chamberlain immense prestige, but this had been followed very shortly afterwards by near-universal criticism. Under the title ‘Munich, 1938’, Wilson gave his version of the events leading up to the Munich conference of 30 September 1938, which had prevented – or, as proved to be the case, delayed – the outbreak of another world war at the cost of the dismemberment of Czechoslovakia. By then the word ‘appeasement’ had acquired a thoroughly derogatory meaning. Chamberlain had died in 1940, leaving Wilson to defend their joint reputation. Both men had been driven by the highest of motivations: the desire to prevent war. Both had been completely convinced that their policy was the correct one at the time and neither ever admitted afterwards that they might have been wrong.

 

After he had completed his draft, Wilson spotted that he could lay the blame for appeasement on someone else’s shoulders. Better still, it was someone who now passed as an opponent of appeasement. In an amendment to the typescript, he pointed out that in 1936, well before Chamberlain became Prime Minister, Anthony Eden, the then Foreign Secretary, had stated publicly that appeasement was the government’s policy. The point seemed all the more telling as Eden had been edged out of government by Chamberlain and Wilson in early 1938 after a disagreement over foreign policy. Eden had gone on to become a poster-boy for the opponents of appeasement, reaping his reward in 1940 when Chamberlain fell. Chamberlain’s successor, Winston Churchill, had appointed Eden once again as Foreign Secretary. Wilson was so pleased to have found reason to blame appeasement on Eden that he pointed it out a few years later to the first of Chamberlain’s Cabinet colleagues to write his memoirs.

 

Wilson’s statement was perfectly accurate, but it entirely distorted the truth, because it ignored how rapidly and completely the meaning of the word ‘appeasement’ had changed. When Eden first used the word, it had no hostile sense. It meant simply bringing peace and was in common use this way. ‘Appease’ also meant to calm someone who was angry, again as a positive act, but Eden never said that Britain’s policy was to ‘appease’ Hitler, Nazi Germany, Mussolini or Fascist Italy. Nor, for that matter, did Chamberlain use the word in that way. The hostile sense of the word only developed in late 1938 or 1939, blending these two uses of the word to create the modern sense of making shameful concessions to someone who is behaving unacceptably. The word ‘appeasement’ has also become a shorthand for any aspect of British foreign policy of the 1930s that did not amount to resistance to the dictator states. This is a very broad definition, and it should not mask the fact that the word is being used here in its modern and not its contemporary sense. The foreign policy that gave the term a bad name was a distinct and clearly identifiable strategy that was consciously pursued by Chamberlain and Wilson. 

 

When Chamberlain became Prime Minister in May 1937, he was confronted by a dilemma. The peace of Europe was threatened by the ambitions of the two aggressive fascist dictators, Hitler in Germany and Mussolini in Italy. Britain did not have the military strength to face Germany down; it had only just begun to rearm after cutting its armed forces to the bone in the wake of the First World War and was at the last gasp of strategic over-reach with its vast global empire. Chamberlain chose to solve the problem by setting out to develop a constructive dialogue with Hitler and Mussolini. He hoped to build a relationship of trust which would allow the grievances of the dictator states to be settled by negotiation and to avoid the nightmare of another war. In other words, Chamberlain sought to appease Europe through discussion and engagement. In Chamberlain’s eyes this was a positive policy and quite distinct from what he castigated as the policy of ‘drift’ that his predecessors in office, Ramsay MacDonald and Stanley Baldwin, had pursued. Under their control, progressive stages in aggression by the dictators had been met with nothing more than ineffectual protests, which had antagonised them without deterring them. 

 

Chamberlain’s positive approach to policy was the hallmark of his diplomacy. He wanted to take the initiative at every turn, most famously in his decision to fly to see Hitler at the height of the Sudeten crisis. Often his initiatives rested on quite false analyses; quite often the dictators pre-empted him. But Chamberlain was determined that no opportunity for him to do good should be allowed to escape. The gravest sin possible was the sin of omission. At first his moves were overwhelmingly aimed at satisfying the dictators. Only after Hitler’s seizure of Prague in March 1939 did deterring them from further aggression become a major policy goal. Here, external pressures drove him to make moves that ran counter to his instincts, but they were still usually his active choices. Moreover, the deterrent moves were balanced in a dual policy in which Hitler was repeatedly given fresh opportunities to negotiate a settlement of his claims, implicitly on generous terms. 

 

Appeasement reached its apogee in the Czech crisis of 1938. Chamberlain was the driving force behind the peaceful settlement of German claims on the Sudetenland. He was rewarded with great, albeit short-lived, kudos for having prevented a war that had seemed almost inevitable. He also secured an entirely illusory reward, when he tried to transform the pragmatic and unattractive diplomatic achievement of buying peace with the independence of the Sudetenland into something far more idealistic. Chamberlain bounced Hitler into signing a bilateral Anglo-German declaration that the two countries would never go to war. Chamberlain saw this as the first building block in creating a lasting relationship of trust between the two countries. It was this declaration, rather than the dismemberment of Czechoslovakia under the four-power treaty signed by Britain, France, Germany and Italy, that Chamberlain believed would bring ‘peace for our time’, the true appeasement of Europe. At the start of his premiership, Chamberlain had yearned to get ‘onto terms with the Germans’; he thought that he had done just that. 

 

Appeasing Europe through friendship with the dictators also required the rejection of anything that threatened this friendship. One of the most conspicuous threats was a single individual: Winston Churchill. Almost from the beginning of Hitler’s dictatorship Churchill had argued that it was vital to Britain’s interests to oppose Nazi Germany by force, chiefly by rearming. Unlike most other British statesmen, Churchill recognised in Hitler an implacable enemy and he deployed the formidable power of his rhetoric to bring this home in Parliament and in the press. But Churchill was a lone voice. When he had opposed granting India a small measure of autonomy in the early 1930s, he had moved into internal opposition to the Conservative Party. Only a handful of MPs remained loyal to him. Churchill was also handicapped by a widespread bad reputation that sprang from numerous examples of his poor judgement and political opportunism. 

Chamberlain was determined on a policy utterly opposed to Churchill’s view of the world. He enjoyed a very large majority in Parliament and faced no serious challenge in his own Cabinet. Chamberlain and Wilson were so convinced that their policy was correct that they saw opposition as dangerously irresponsible and had no hesitation in using the full powers at their disposal to crush it. Churchill never had a real chance of altering this policy. It would have sent a signal of resolve to Hitler to bring him back into the Cabinet, but this was precisely the kind of gesture that Chamberlain was desperate to avoid. Moreover, Chamberlain and Wilson each had personal reasons to be suspicious of Churchill as well as sharing the prevalent hostile view of him that dominated the political classes. Wilson and Churchill had clashed at a very early stage in their careers and Chamberlain had had a miserable time as Churchill’s Cabinet colleague under Prime Minister Stanley Baldwin. Chamberlain and Wilson had worked closely to fight a – largely imaginary and wildly exaggerated – threat from Churchill’s support for Edward VIII in the abdication crisis of 1936. 

 

Churchill was right about Hitler and Chamberlain was wrong. The history of appeasement is intertwined with the history of Churchill. According to legend Churchill said, ‘Alas, poor Chamberlain. History will not be kind to him. And I shall make sure of that, for I shall write that history.’ Whatever Churchill might actually have said on the point barely matters; the witticism expresses a mindset that some subsequent historians have striven to reverse. The low opinion of Chamberlain is the mirror image of the near idolatry of Churchill. In some cases, historians appear to have been motivated as much by dislike of Churchill – and he had many flaws – as by positive enthusiasm for Chamberlain. Steering the historical debate away from contemporary polemic and later hagiography has sometimes had the perverse effect of polarising the discussion rather than shifting it onto emotionally neutral territory. Defending appeasement provides perfect material for the ebb and flow of academic debate, often focused on narrow aspects of the question. At the last count, the school of ‘counter-revisionism’ was being challenged by a more sympathetic view of Chamberlain. 

 

Chamberlain’s policy failed from the start. The dictators were happy to take what was on offer but gave as good as nothing in return; Chamberlain entirely failed to build worthwhile relationships. His advocates thus face the challenge that his policy failed entirely. His defenders advance variants of the thesis that Wilson embodied in ‘Munich, 1938’: that there was no realistic alternative to appeasement given British military weakness. This argument masks the fact that it is practically impossible to imagine a worse situation than the one that confronted Churchill when he succeeded Chamberlain as Prime Minister in May 1940. The German land attack in the west was poised to destroy France, exposing Britain to a German invasion. It also ducks the fact that securing peace by seeking friendship with the dictators was an active policy, pursued as a conscious choice and not imposed by circumstances. 

Chamberlain’s foreign policy is by far the most important aspect of his premiership and the attention that it demands has rather crowded out the examination of other aspects of his time at Downing Street. Discussion of his style of government has focused on the accusation that he imposed his view of appeasement on a reluctant Cabinet, which has been debated with nearly the same vigour as the merits or otherwise of the policy itself. In the midst of this, little attention has been paid to Wilson, even though Chamberlain’s latest major biographer – who is broadly favourable to his subject – concedes he was ‘the éminence grise of the Chamberlain regime … gatekeeper, fixer and trusted sounding board’. Martin Gilbert, one of Chamberlain’s most trenchant critics, made a start on uncovering Wilson’s full role in 1982 with an article in History Today, but few have followed him. There have been an academic examination of his Civil Service career and an academic defence of his involvement in appeasement. Otherwise, writers across the spectrum of opinions on appeasement have contented themselves with the unsupported assertion that Wilson was no more than a civil servant. Wilson does, though, appear as a prominent villain along with Chamberlain’s shadowy political adviser, Sir Joseph Ball, in Michael Dobbs’s novel about appeasement, Winston’s War. 

 

Dismissing Wilson as merely a civil servant begs a number of questions. The British Civil Service has a proud tradition and ethos of political neutrality, but it strains credulity to expect that this has invariably been fully respected. Moreover, at the period when Wilson was active, the top level of the Civil Service was still evolving, with many of its tasks and responsibilities being fixed by accident of personality or initiative from the Civil Service side. Wilson’s own position as adviser to the Prime Minister with no formal job title or remit was unprecedented and has never been repeated. Chamberlain valued his political sense highly and Wilson did not believe that his position as a civil servant should restrict what he advised on political tactics or appointments. Even leaving the debate over appeasement aside, Wilson deserves attention. 

 

Wilson was so close to Chamberlain that it is impossible to understand Chamberlain’s premiership fully without looking at what Wilson did. The two men functioned as a partnership, practically as a unit. Even under the extreme analysis of the ‘mere civil servant’ school whereby Wilson was never more than an obedient, unreflecting executor of Chamberlain’s wishes, his acts should be treated as Chamberlain’s own acts and thus as part of the story of his premiership. It is practically impossible to measure Wilson’s own autonomous and distinctive input compared to Chamberlain’s, but there can be no argument that he represented the topmost level of government. 

 

Wilson’s hand is visible in every major aspect of Chamberlain’s premiership and examining what he did throws new light almost everywhere. Wilson’s influence on preparations for war – in rearming the Royal Air Force and developing a propaganda machine – makes plain that neither he nor Chamberlain truly expected war to break out. One of the most shameful aspects of appeasement was the set of measures willingly undertaken to avoid offending the dictators, either by government action or by comment in the media; Wilson carries a heavy responsibility here. 

 

Above all it was Wilson’s role in foreign policy that defined his partnership with Chamberlain and the Chamberlain premiership as a whole. He was also the key figure in the back-channel diplomacy pursued with Germany that showed the true face of appeasement. Wilson carries much of the responsibility for the estrangement between Chamberlain and the Foreign Office, which was only temporarily checked when its political and professional leaderships were changed. Chamberlain and Wilson shared almost to the end a golden vision of an appeased Europe, anchored on friendship between Britain and Germany, which was increasingly at odds with the brutal reality of conducting diplomacy with Hitler. The shift to a two-man foreign policy machine culminated in the back-channel attempts in the summer of 1939 intended to keep the door open to a negotiated settlement of the Polish crisis with Hitler, but which served merely to convince him that the British feared war so much that they would not stand by Poland. Chamberlain and Wilson had aimed to prevent war entirely; instead they made it almost inevitable.

How Tony Kushner’s A Bright Room Called Day Can Help Us Understand Our Political Moment

Tony Kushner’s A Bright Room Called Day, which opened at the Public Theater in New York on October 29, is a rare bird—a revival (with a substantial re-write) that proves to be more timely and incisive than the original was. In my opinion, the play does not work terribly well theatrically—I was not moved by any of the characters—but it is a good play to think about. 

 

The main question that the play poses is one asked by the nineteenth-century Russian thinker Nikolai Chernyshevsky and later Lenin: “What Is to be Done?” More specifically, in this case, “How should we respond to kleptocratic authoritarianism?” 

 

The success of the play as a thought experiment depends on one’s willingness to conceive of the parallels between historical periods. In its first avatar, performed in 1985, the play focuses on a group of friends in Berlin in 1932-33, and draws parallels between that time and the early years of Reagan’s second term, in order to register what it regarded as the incipient fascism of the mid-1980s United States. Breaking the realistic framework, a woman from Reagan’s time unaccountably appears, urging the young Berliners to flee, or at least to do something. The play was panned by critics like Frank Rich who, writing in the New York Times in 1991, found it “fatuous” and “infuriating.” He felt that Kushner had gone too far in making a simplistic and reductive comparison of Reagan’s America to Hitler’s Germany. 

 

In 2019, however, after a revision in which what is new makes up about 40% of the play, the comparison of the thirties with the later period appears more firmly based and even prescient. In the revised and expanded version, Kushner includes a second character from the future of the Berlin characters—the author, who speaks in our present with the emissary from the eighties. Uncertain about the value of what he is doing and has done—writing plays—the author asks, “Can theater make any [political] difference?” His willingness to examine his own choices and to ask meta-theatrical questions in the theater makes him an appealing figure. 

 

In defense of the parallels he draws between the 30s, the 80s, and the late 2010s, he argues that if one set of events (such as the Holocaust or Shoah) is established as the standard against which all others are to be judged, and yet no others are allowed to be comparable to it, then it is not useful as a point of reference. One cannot, he maintains, exempt one period from the realm of historical comparisons, although, I would add, one must be careful and responsible when drawing them.

 

If one allows this argument, then Kushner’s central political insight in the play is strong: Trumpism was not a sudden anomaly; rather, decades under recent Republican presidents prepared the way for the current embrace of unconstitutional reactionary authoritarianism by eight or nine of every ten Republicans. Having been proudly anti-intellectual, Presidents Reagan and G. W. Bush denied, as Trump denies, that reason should play a role in the conduct of the country’s affairs. This elevation of irrationality, of going with the gut, is linked with a dangerous animus against the federal government that threatens the Enlightenment basis of the American republic, as Jon Meacham has recently argued. 

 

Reagan and Trump made blatant appeals to the racial resentments of whites, the first by inveighing against “welfare queens and cheats” and by declaring for “states’ rights” in Philadelphia, Mississippi, where three young civil rights workers had been murdered two decades earlier. Trump similarly found “very fine people” on both sides, among neo-Nazis as well as the anti-fascist demonstrators who opposed them. Presidents Reagan and G. W. Bush, like Trump, routinely made statements contrary to the truth. Reagan told so many falsehoods, confusing what happened in movies with what happened outside of them, that the press and media stopped reporting them. George W. Bush deceived the country about warrantless, illegal surveillance of millions of Americans, about the grounds for an unjustified war of aggression against Iraq, and about his authorization of the use of torture by the US. Both helped prepare a major party to accept the barrage of falsehoods that Donald Trump launches every day.  

 

To return to the central question Kushner’s play raises—how we can respond to kleptocratic authoritarianism—it is worth remembering the maxim that the first casualty in war is truth, and, as Thucydides observes, this is especially so in the case of civil war. As Masha Gessen writes, language means something until it doesn’t. Take the use under G.W. Bush of “regime change” instead of “invasion,” or the replacement of “torture” with “enhanced interrogation techniques”—as though prohibiting the word makes the thing go away.    

 

One of the prime examples of the deformation of language in the current moment comes from the language used to describe the fractious state of the polity itself—the language of “polarization,” and the supposed diagnosis that we are being “tribal” when we occupy one side or the other of an issue. In fact, however, the allowed spectrum of opinions—the Overton window—has moved far to the right in the last forty years. Ethnic nationalism can now be overtly advocated by a nominee for one of the second-highest courts in the land, while Democrats have been moving toward more centrist, not more extreme positions during the same decades.  

 

I do not know how to move the public discourse back toward where the center stood in, say, the late 70s, but writers can raise the question of what should be done, as Tony Kushner does in A Bright Room Called Day, and they can point out and bear witness to the corruption of language in their own time, as George Orwell did in his 1946 essay “Politics and the English Language.”  

A Review of Amazon Prime’s Series Dostoevsky

 

The great Russian writer Fyodor Dostoevsky has influenced many writers, including William Faulkner, as well as many other readers around the world. Americans unfamiliar with his life, and perhaps even with some of his greatest works like Crime and Punishment and The Brothers Karamazov, can now get to know him via Amazon Prime’s 8-part subtitled series Dostoevsky, directed by the Russian Vladimir Khotinenko.

 

The series first appeared in 2011 (in 7 parts) on the Rossiia 1 television channel, and a Western expert on Russian literature and film, Peter Rollberg, then wrote, “In scope and quality, Khotinenko’s 7-part biopic can be compared to the best HBO and Showtime history dramas, such as John Adams (2008) and The Tudors (2007-2010).”

 

Indeed, the series has much to recommend it: good acting (especially by Evgenii Mironov as Dostoevsky), picturesque scenery (e.g., in St. Petersburg and foreign sites such as Baden Baden), and a fascinating story that, despite taking some artistic liberties, depicts well the tumultuous and eventful life of one of Russia’s greatest writers. Each episode begins with Dostoevsky sitting for the famous portrait of him painted by V. Perov in 1872. 

 

As we watch the almost eight hours of the series, we witness some of the main events of his adult life, beginning with his traumatic experience on a December morning in 1849 when he and other prisoners stood on a St. Petersburg square, heard their death sentences read out, and expected to be shot by a firing squad. In his late twenties by then, Dostoevsky had already gained some fame as a writer, but became involved with dissidents whom the reactionary government of Tsar Nicholas I considered treasonous. Only at the last minute did a representative of the tsar bring word that Nicholas I was going to spare the lives of the condemned, and Dostoevsky spent the next four years in a Siberian prison camp in Omsk, which he later described in his novel The House of the Dead.

 

The first episode of the series is set mainly in this camp, and the gloomy existence of the prisoners may be off-putting to some viewers. But the experience was important to Dostoevsky. Himself the son of a serf-owning Moscow doctor, he was forced to mix with less educated common criminals, but came to appreciate their Russian Orthodoxy, their religion of Christ, of sin and suffering, of resurrection and redemption. He came to regret his earlier rebellious ideas, influenced by Western European utopian thinkers. His prison experiences convinced him that the only path for Russian intellectuals to follow was one that united them with the common people and their religious beliefs.

 

Through a variety of techniques, usually by having Dostoevsky state his convictions or argue with someone like the writer Turgenev, the series conveys his post-prison populism and Russian nationalism. In one scene in Episode 3, a young man at a dinner table tells Dostoevsky that he left St. Petersburg for prison camp “a dissenter and socialist, and you returned a defender of The Throne and Orthodoxy.” Toward the beginning of Episode 6, Dostoevsky tells the painter Perov, “The ones seeking freedom without God will lose their souls . . . . Only the simple-hearted Russian nation . . . is on the right way to God.” 

 

But the series is more concerned with depicting his personality and love life, which begins to manifest itself during Episode 2, set mainly in the Central Asian-Siberian border town of Semipalatinsk (present-day Semey). Dostoevsky served in the army there for five years (1854-1859) before finally being allowed to return to European Russia. But his service allowed him sufficient time for writing and mixing with some of the town’s people, including Maria Isaeva, a somewhat sickly, high-strung, strong-willed woman in her late twenties. 

 

Episodes 2 and 3 depict the writer’s stormy relations with her in Siberia and then in their early days in St. Petersburg. After she leaves Semipalatinsk to accompany her husband, who has taken a new job in the distant town of Kuznetsk (today Novokuznetsk), the husband soon dies. Dostoevsky makes a secret and unlawful trip to this Siberian city, but has to contend with a younger rival, a schoolteacher, for Maria’s affection. Finally, after much agonizing by both Maria and Dostoevsky and another trip to Kuznetsk, the two marry there in February 1857.

 

While in Semipalatinsk, Dostoevsky makes a written appeal to his brother, Mikhail, and to an aunt for money. The writer’s financial difficulties, later exacerbated by gambling losses, will remain a consistent theme for most of the rest of the series.

 

After finally being allowed to settle in St. Petersburg in late 1859, Dostoevsky renews acquaintance with Stepan Yanovskiy, a doctor friend, and is introduced to his wife, the actress Alexandra Schubert. She and Dostoevsky soon become romantically involved, while Maria shows increasing signs of having consumption (tuberculosis); she died of it in 1864. Dostoevsky’s own main health problem was epilepsy, and occasionally, as at the end of Episode 3, we see him having a seizure.

 

In Episode 4, we are introduced to a young woman, Apollinaria Suslova, who for several years became Dostoevsky’s chief passion. Young enough to be his daughter, she reflected some of the youthful radicalism of the Russian 1860s. An aspiring writer herself, she was fascinated by the older author, and eventually had sexual relations with him. But their relations were stormy, often mutually tormenting, and while traveling in Western Europe together, she sometimes denied him any intimacy. A fictionalized portrait of her can be found in the character of Polina in Dostoevsky’s The Gambler (1866).    

 

In Episodes 4 through 8, we sometimes see Dostoevsky at the roulette tables from 1863 to 1871 in such places as Wiesbaden, Baden Baden, and Saxon-les-Bains, usually losing, and from 1867 to 1871 most often travelling with his second wife, Anna (nee Snitkina), whom he first met when she came to him to work as a stenographer in 1866 to help him complete The Gambler and Crime and Punishment. 

 

But Anna does not appear until Episode 6, and only after Dostoevsky’s infatuation with two very young sisters, Anya and Sofia (Sonya) Korvin-Krukovskaya, the latter of whom later became a famous mathematician. Once Anna appears, however, she remains prominent for the remainder of the series, first as his stenographer, then as his wife and the mother of his children. In Episode 7, they travel to Western Europe, where they remain in places like Baden Baden and Geneva until 1871, when they return to Russia. Throughout their marriage, Anna remains the level-headed, common-sense wife who tolerates and loves her much older mercurial husband. But the couple had their up-and-down moments, including the deaths of two children. The first, Sofia (Sonya), dies in May 1868, at the end of Episode 7.

 

Thus, to deal with the rest of Dostoevsky’s life (he died in January 1881), the Russian filmmakers left themselves only one episode, number 8. And much happened in those dozen years, including the couple’s return to Russia; the birth of more children (two boys and a girl, but the youngest, Alyosha, died in 1878); trips to Bad Ems for emphysema treatments; and major writings (the novels The Idiot, The Possessed, The Adolescent, and The Brothers Karamazov, and his collection of fictional and non-fictional writings in A Writer’s Diary). There are brief mentions and/or allusions to these writings, but not much. 

 

Dostoevsky often encounters people whose last names are never given. In Episode 8, for example, he visits the dying poet and editor Nikolai Nekrasov, but only those already familiar with Dostoevsky’s biography might realize who he is. The final scene of that last episode shows Dostoevsky and a young bearded man, whom he addresses as Vladimir Sergeevich, sitting on hay behind a horse and carriage driver on their way to the famous Optina Monastery.

 

The not-further-identified young man, although only in his mid-twenties, was in fact the already well-known philosopher Vladimir Soloviev, son of Sergei Soloviev, who by his death in 1879 had completed 29 volumes of his History of Russia from the Earliest Times. Dostoevsky had read some of his history and earlier that year had attended Vladimir’s "Lectures on Godmanhood." Leo Tolstoy was present at one of the talks Dostoevsky attended, but the two famous writers never met. During the remaining 22 years of his life, Soloviev went on to develop many philosophic and theological ideas and to influence later religious thinkers including Dorothy Day and Thomas Merton.

 

At Optina, Dostoevsky sought consolation for the death of his son Alyosha by talking to the monk Ambrose, who became the model for Father Zossima in The Brothers Karamazov. And the brothers Alyosha and Ivan Karamazov, in different ways, reflect the influence of the young Soloviev.

 

Of the novel itself, one of Dostoevsky’s most influential, little is said in the series except when, in the final episode, Dostoevsky tells a police official that he plans to write a work about a hero who goes through many phases and struggles with the question of the “existence of God.” The Brothers Karamazov deals with that question, but also, of course, with much more. And Dostoevsky did not live long enough to complete The Life of a Great Sinner, a work he had long contemplated but only managed to include portions of in some of his great novels. 

 

Despite the many positive aspects of the 8-part series, it only hints at Dostoevsky’s relevance for our times. Just a few examples are his significance for understanding 1) Vladimir Putin and his appeal to Russians, 2) terrorism, and 3) whether or not to accept the existence of God and the implications of faith vs. agnosticism. 

 

Regarding his influence on Putin, an excellent article by Russian-expert Paul Robinson thoroughly examines the question. He begins his essay by writing, “I’ve spent the last week ploughing through the 1,400 pages of Fyodor Dostoevsky’s Writer’s Diary. . . . The experience has left me pretty well acquainted with the writer’s views on the Russian People (with a capital ‘P’), Europe, the Eastern Question, and Russia’s universal mission. I’ve also just finished writing an academic article which discusses, among other things, references to Dostoevsky in Vladimir Putin’s speeches.”  

 

In novels such as Notes from the Underground, Crime and Punishment, and The Possessed, Dostoevsky reflected on and provided insight into the thinking of many a terrorist. As one essay on his insight into terrorism indicates, Theodore Kaczynski, the Unabomber, “was an avid reader of Dostoevsky.” Freud wrote on the great Russian writer and appreciated some of his insights into what is sometimes referred to as “abnormal psychology.” Some even claim that Dostoevsky “ought to be regarded as the founder of modern psychology.”

 

Regarding the existence of God, it is The Brothers Karamazov that is most often cited, especially its chapters on “Rebellion” and “The Grand Inquisitor,” where the brothers Ivan and Alyosha discuss whether to accept or reject God. Ivan rejects God because he cannot accept any God that would allow innocent suffering, especially that of little children. In The Rebel, the agnostic Camus devotes the chapter “The Rejection of Salvation” to Ivan’s stance.  

 

In summary, this reviewer’s advice: enjoy Amazon’s Dostoevsky, but then go on to read more by and about him. You can even download his great novels and many of his other works at Project Gutenberg’s Dostoevsky page.     

 

Losing Sight of Jefferson and Falling into Plato

The Death of Socrates, by Jacques-Louis David (1787)

 

Many professors at higher-level academic institutions profess to be practitioners of a Socratic method of teaching, a method in which students arrive at understanding through a teacher’s “pestering” them with probing questions that lead to self-searching. Many, if not most, such practitioners presume that the method facilitates true learning. Socratic teaching and learning are not reducible, in John Dewey’s words, to pouring information into the heads of students, but are a matter of drawing out what is, in some sense, already there.

Socrates (469–399 B.C.) was one of Ancient Athens’ most unusual citizens. He professed a profound love of Athens insofar as he claimed he could never offer a return to Athens (or its citizens) as valuable as what he had, throughout his life, gained from the cosmopolitan polis.

 

What was unique about Socrates was his zetetic (seeking) manner of living. He renounced all pleasures other than the pleasure he experienced in searching for knowledge, which he said he never possessed. He claimed to be wise only inasmuch as he recognized that he, unlike other prominent Athenians, understood that he knew nothing, and thus, that human wisdom counted for nothing next to divine wisdom.

 

Never claiming to have any (real) knowledge—that is, knowledge of things substantive such as of the virtues piety, wisdom, justice, self-control, and courage—he spent his days throughout his life in pursuit of knowledge. His daily zetetic activity showed that he did not consider that activity fatuous. That demonstrates that he, at least, thought that acquisition of knowledge was humanly possible. Otherwise, his life would have been as pointless as searching for great-tailed grackles, birds endemic to hot places like Texas, in Anchorage, Alaska.

 

Socrates’ method of pursuit was elenchus—a method of dialectic exchange in which the one chiefly in pursuit of knowledge, usually Socrates, asked an interlocutor a number of questions, pointedly articulated to get at the nature of a particular virtue, or even virtue in general. After an interlocutor offered an initial definition in answer to an opening question—e.g., “What is virtue?”—further questions would be crafted to expose insufficiencies in the proposed definition—e.g., “Whatever is just is virtuous” (Meno)—with the expectation of either refinement of the proposed definition or proposal of a different one, in keeping with the flaws exposed by later questions.

 

The end of all such elenctic Socratic dialogs was aporia—a state of puzzlement or confusion which characterizes an interlocutor who came to recognize that he did not or might not know what he had thought he knew. In Plato’s Socratic dialogs, interlocutors in aporia often walk away in anger from Socrates, but sometimes walk away accepting that they must now seek the knowledge they hitherto thought that they had.

 

Socrates, Plato tells us in Apology, made few friends through his practice as he showed prominent Athenian politicians, poets, and craftsmen that they did not know the things they thought that they knew (things such as justice, piety, and beauty). He doubly irritated many, as the youth often imitated his methods. Hence, he was ultimately sentenced to death because of numerous charges, each reducible to corruption of the young.

 

Socrates was not supposed to be sentenced to death, for Athenian democracy was in the main largely tolerant of differences of opinion—even somewhat disruptive differences of opinion (consider Epicurus’ philosophical Garden, which, located just outside Athens, preached a minimalist sort of hedonism, through freedom from mental disquiet, ataraxia, and social withdrawal). Socrates merely could not promise to stop doing what was considered by many to be so disruptive of daily Athenian affairs—his daily dialectic.

 

There are too few genuine Socrateses in today’s higher education and even fewer students willing to be challenged dialectically. Anyone who has taught in higher education for more than two decades has doubtless come to recognize that the “New Millennials” and even the generation beyond them, the students born in or after 2000, are difficult, if not impossible, to teach. They are, in the words of Professor Elayne Clift, “devoid of originality, analytical ability, [and] intellectual curiosity.” She continues, “Having passed through a deeply flawed education system in which no one is paying attention to critical thinking and writing skills, they just want to know what they have to do to make their teachers tick the box that says ‘pass.’” All the other teachers do that.

 

Why? Students have an effective tool, rage, fueled by a sense of academic entitlement, which many use as a security blanket. When they perform poorly, as they often tend to do, they blame their poor performance on poor teaching. Teachers, under pressure to earn strong evaluations, seldom challenge this inculpation. They are too afraid of poor evaluations and direct complaints, which may readily result in loss of adjunct work or failure to attain tenure. Many teachers, I believe, now sheepishly accept the notion that they, not the students, are the problem. Moreover, students are also in a position of power because they are viewed as consumers, and institutions are in the market to attract as many students each year as they can. Education is increasingly following the pattern of successful businesses, which follow the mantras that more customers mean more money and that the customer is always right.

 

Socratic teaching in such a milieu is impossible, because it is designed principally to expose ignorance. Today’s students feel entitled to a passing grade without working, because they already know everything they really need to know. Each is a sun of his own solar system—no binary stellar systems here!—and the orbiting bodies orbit for the sake of that sun. Says Daniel Mendelsohn, “Perhaps because they have received more attention than any generation in the history of Homo sapiens, millennials seem to be convinced that every aspect of their existence, from their love lives to their struggles with reverse peristalsis, is of interest not just to their parents but to everyone else as well.”

 

Plato millennia ago saw the problem as one of pure democracy (demokratia). In a democracy, according to Plato in Republic, each person thinks of himself as the equal of all others in all ways. The city is full of freedom and free speech and everyone is free to do what he wants to do when he wishes to do it. Democracy, constitutionally, is a “supermarket of constitutions,” as it embraces all persons and all rules. “[A democrat], always surrendering himself to whichever desire comes along, lives as if it were chosen by lot” (557a–561b). 

 

Thomas Jefferson, of course, was aware of the pitfalls of democracy. There can be no such thing as a pure democracy over a large expanse of land, he avers, but only in a small parcel, such as a ward, where smallness of political space enables all to have an equal share in political matters. Hence, Jefferson, while in France, speaks to James Madison (30 Jan. 1787) of the need of representative government, where “the will of every one has a just [and not a direct] influence,” for affairs of state and country. Even with representative government, he continues to Madison, “the mass of mankind … enjoys a precious degree of liberty & happiness.” Nonetheless, “it has it’s evils too: the principal of which is the turbulence to which it is subject,” because of its embrace of freedom of expression.

 

Yet the pitfall of turbulence is often massively misconstrued or hyperbolized by scholars—e.g., Conor Cruise O’Brien. Jefferson writes thus in a letter to Madison (20 Dec. 1787) of Shays’ Rebellion, an event which horrified many, especially New Englanders. “The late rebellion in Massachusetts had given more alarm than I think it should have done. Calculate that one rebellion in 13 states in the course of 11 years, is but one for each state in a century & an half. No country should be so long without one.” Earlier in the same year (Jan. 30), he writes to Madison, “A little rebellion now and then is a good thing & as necessary in the political world as storms in the physical.” Some six years later (3 Jan. 1793), he writes to William Short of the sanguinary effects of the French Revolution: “My own affections have been deeply wounded by some of the martyrs to this cause, but rather than it should have failed, I would have seen half the earth desolated. Were there but an Adam and an Eve left in every country, and left free, it would be better than as it now is.” The bloodshed of a rebellion is, of course, abominable in the short term, but “this evil is productive of good. It prevents the degeneracy of government, and nourishes a general attention to the public affairs.” In sum, what appears as a pitfall of democracy on a large scale is really its great strength.

 

Jefferson did have an antidote—a means of preventing too many rebellions. That was periodic constitutional reform, effected through robust discussions by the people themselves. John Stuart Mill agrees. The strength of a vital democracy, Mill notes, is not only its tolerance of difference of opinions, but also the vitality with which it aims to iron out those differences through respectful, progressive debate. Robert Healey agrees: “Democracy is a means of determining courses of action through use of open and admitted conflict of opinion. Its ideal is not the achievement of a homogeneous society, but true cooperation, the working together of different people and groups who have deliberated with each other.” Thus, the aim of democratic thriving is citizens’ engagement with the institution through collisions of ideas. Those collisions are not purposeless, but aim at truth or at least heightened understanding of problems or issues in a solemn effort toward resolution.

 

Robust discussion is what is missing in the educative climate of the twenty-first century which worships a new, radical liberalism—toleration of diversity of opinion as an end, not as a means. The teaching milieu in the twenty-first century embraces tolerance of differences, but not progress with the aim of ironing out those differences through respectful debate—the Jeffersonian ideal behind periodic constitutional renewal. Thus, with disavowal of the Jeffersonian ideal, there is avowal of Plato’s concept of democracy in which freedom has become an end, and a deteriorative end, and not a means to an end—human flourishing.

 

We ought to strive today to recognize and aim at the Jeffersonian democratic ideal, not the Platonic degenerative conception. Within the ideal of Jeffersonian republicanism, there not only is room for Socratic methods of teaching, there also is need for Socratic methods of teaching. Why? It is, as Jefferson noted, better to have turbulent liberty than quiet servitude.

The Myth of the First Thanksgiving is a Buttress of White Nationalism and Needs to Go

 

Most Americans assume that the Thanksgiving holiday has always been associated with the Pilgrims, Indians, and their famous feast. Yet that connection is barely 150 years old and is the result of white Protestant New Englanders asserting their cultural authority over an increasingly diverse country. Since then, the Thanksgiving myth has served to reinforce white Christian dominance in the United States. It is well past time to dispense with the myth and its white nationalist connotations. 

 

Throughout the colonial era, Thanksgiving had no association whatsoever with Pilgrims and Indians. It was a regional holiday, observed only in the New England states or in the Midwestern areas to which New Englanders had migrated. No one thought of the event as originating from a poorly documented 1621 feast shared by the English colonists of Plymouth and neighboring Wampanoag Indians. Ironically, Thanksgiving celebrations had emerged out of the English puritan practice of holding fast days of prayer to mark some special mercy or judgment from God, after which the community would break bread. Over the generations, these days of Thanksgiving began to take place annually instead of episodically and the fasting became less strictly observed. 

 

The modern character of the holiday only began to emerge during the mid to late 1800s.  In 1863, President Abraham Lincoln declared that the last Thursday of November should be held as a national day of Thanksgiving to foster unity amid the horrors of the Civil War. Afterward, it became a tradition, with some modifications to the date, and spread to the South too. Around the same time, Americans began to trace the holiday  back to Pilgrims and Indians. The start of this trend appears to have been the Reverend Alexander Young’s 1841 publication  of the Chronicles of the Pilgrim Fathers, which contained the only primary source account of the great meal, consisting of a mere four lines. To it, Young added a footnote stating that “This was the first Thanksgiving, the harvest festival of New England.” Over the next fifty years, various New England authors, artists, and lecturers disseminated Young’s idea until Americans took it for granted. Surely, few footnotes in history have been so influential.

 

For the rest of the nation to go along with New England’s idea that a dinner between Pilgrims and Indians was the template for a national holiday, the United States first had to finish its subjugation of the tribes of the Great Plains and far West. Only then could its people stop vilifying Indians as bloodthirsty savages and give them an unthreatening role in a national founding myth. The Pilgrim saga also had utility in the nation’s culture wars. It was no coincidence that authorities began trumpeting the Pilgrims as national founders amid widespread anxiety that the country was being overrun by Catholic and then Jewish immigrants unappreciative of America’s Protestant, democratic origins and values. Depicting the Pilgrims as the epitome of colonial America also served to minimize the country’s longstanding history of racial oppression at a time when Jim Crow was working to return blacks in the South to as close to a state of slavery as possible and racial segregation was becoming the norm nearly everywhere else. Focusing on the Pilgrims’ noble religious and democratic principles in treatments of colonial history, instead of on the shameful Indian wars and systems of slavery more typical of the colonies, enabled whites to think of the so-called black and Indian problems as southern and western exceptions to an otherwise inspiring national heritage. 

 

Americans tend to view the Thanksgiving myth as harmless, but it is loaded with fraught ideological meaning. In it, the Indians of Cape Cod and the adjacent coast (rarely identified as Wampanoags) overcome their initial trepidation and prove to be “friendly” (requiring no explanation), led by the translators Samoset and Squanto (with no mention of how they learned English) and the chief, Massasoit. They feed the starving English and teach them how to plant corn and where to fish, whereupon the colony begins to thrive. The two parties then seal their friendship with the feast of the First Thanksgiving. The peace that follows permits colonial New England and, by extension, modern America, to become seats of freedom, democracy, Christianity and plenty. As for what happens to the Indians next, this myth has nothing to say. The Indians’ legacy is to present America as a gift to others or, in other words, to concede to colonialism. Like Pocahontas and Sacajawea (the other most famous Indians of Early American history) they help the colonizers then move offstage. 

 

Literally. Since the early twentieth century, American elementary schools have widely held annual Thanksgiving pageants in which students dress up as Pilgrims and Indians and reenact this drama. I myself remember participating in such a pageant, which closed with the song “My Country, ’Tis of Thee.” Its first verse goes: “My country, ’tis of thee/ Sweet land of liberty/ Of thee I sing./ Land where my fathers died!/ Land of the Pilgrim’s pride!/ From every mountain side,/ Let freedom ring!” Having a diverse group of schoolchildren sing about the Pilgrims as “my fathers” was designed to teach them about who we, as Americans, are, or at least who we’re supposed to be. Even students from ethnic backgrounds would be instilled with the principles of representative government, liberty, and Christianity, while learning to identify with English colonists from four hundred years ago as fellow whites. Leaving Indians out of the category of “my fathers” also carried important lessons. It was yet another reminder about which race ran the country and whose values mattered. 

 

Lest we dismiss the impact of these messages, consider the experience of a young Wampanoag woman who told this author that when she was in grade school, as the lone Indian in her class, she was cast by her teacher as Chief Massasoit in one of these pageants and made to sing with her classmates “This Land is Your Land, This Land is My Land.” At the time, she was just embarrassed. As an adult, she sees the cruel irony in it. Other Wampanoags commonly tell of their parents objecting to these pageants and to the associated history lessons claiming that the New England Indians were all gone, only to have school officials respond with puzzlement at their claims to be Indian. The only authentic Indians were supposed to be primitive relics, not modern people, so what were they doing in school, speaking English, wearing contemporary clothing, and returning home to adults who had jobs and drove cars?

 

Even today, the Thanksgiving myth is one of the few cameos Native people make in many schools’ curriculum. Most history lessons still pay little to no heed to the civilizations Native Americans had created over thousands of years before the arrival of Europeans or how indigenous people have suffered under and resisted colonization. Even less common is any treatment of how they have managed to survive, adapt, and become part of modern society while maintaining their Indian identities and defending their indigenous rights. Units on American government almost never address the sovereignty of Indian tribes as a basic feature of American federalism, or ratified Indian treaties as “the supreme law of the land” under the Constitution. Native people certainly bear the brunt of this neglect, ignorance, and racial hostility, but the rest of the country suffers in its own ways too.   

 

The current American struggle with white nationalism is not just a moment in time. It is the product of centuries of political, social, cultural, and economic developments that have convinced a critical mass of white Christians that the country has always belonged to them and always should. The myth of Thanksgiving is one of the many buttresses of that ideology. That myth is not about who we were but about how past generations wanted us to be. It is not true. The truth exposes the Thanksgiving myth as a myth rather than history, and so let us declare it dead except as a subject for the study of nineteenth- and twentieth-century American cultural history. What we replace it with will tell future Americans about how we envision ourselves and the path of our society. 

 

Bodhisattvas and Saints

A Christian depiction of Josaphat, 12th century manuscript

 

On a side of the baptistry of the Piazza Duomo in the northern Italian city of Parma, there is a portal designed and constructed in the late twelfth century and into the early thirteenth by the architect Benedetto Antelami. Above an entrance for catechumens – that is, converts to Catholicism – Antelami depicted the story of two saints who were popular in the Middle Ages, but who have become more obscure in subsequent centuries. St. Josaphat and St. Barlaam, the former an Indian prince and the latter the wandering holy man who converted him, are depicted in pink Verona marble at the side of the Romanesque church. Generations of Parmesans who worshiped and were baptized inside the structure passed underneath an entrance that told the tale of the “country of the Indians, as it is called… vast and populous, lying far beyond Egypt,” as described by St. John of Damascus in his seventh-century account of the two men. 

 

According to hagiographies, St. Josaphat had been the son of a powerful Indian ruler who’d taken to persecuting the Christians in his kingdom who had been converted by the Apostle St. Thomas. Tortured by a prediction that his son would also convert, the king sequestered young Josaphat in a palace to prevent him from experiencing the suffering of the world. The prince escapes, however, and during his sojourns he encounters an aged man, a leper, a dead body, and finally the mendicant monk Barlaam, who brings Josaphat to enlightenment. For many of you reading, this story may sound familiar, though for a medieval Italian it would have been simply another beloved saint’s tale. But St. Josaphat is a particularly remarkable Roman Catholic saint, for he’s normally known by a rather different title – the Buddha. 

 

The Buddha, whose story the Christian legend clearly developed from in a centuries-long game of telephone stretching across Eurasia, had been sequestered within a luxurious palace, only to leave and encounter an elderly man, a leper, a corpse, and finally a wandering ascetic. In the European legend, St. Josaphat converts to Christianity, and for Christian believers thus achieves salvation. For Buddhists, Siddhartha ultimately reached enlightenment and taught other sentient beings how to overcome the suffering which marks our existence. These are different things of course, yet there is a certain congruence between the Buddha’s call in the Dhammapada to “Conquer anger with love, evil with good, meanness with generosity, and lies with truth” and Christ’s teaching in John 8:32 that the “truth shall set you free.” In the legend of St. Josaphat and St. Barlaam those congruences are made a little more obvious, even for all of the legend’s differences from its source material.  

 

Author Pankaj Mishra explains that long before the Western vogue for Buddhism associated with the 1960s counterculture and the Beats, before the appropriation of Siddhartha’s story by Transcendentalists and Romantics, the Buddha “himself reached the West in the form of a garbled story of two Christian saints.” The origin of that story’s route into Christendom is obscure, even while the congruencies between the two narratives show a clear relationship. How Siddhartha Gautama, the historical Buddha venerated by half a billion Buddhists, whose life pre-dates Jesus Christ by five centuries, became a popular medieval Christian saint is a circuitous, obscure, and enigmatic story which tells us something about the ways in which religions are porous countries, endlessly regenerative, continually borrowing from one another, and generating shared stories of meaning.

 

So how did the Buddha end up carved on a baptistry portal in Parma, what scholar Gauranga Nath Banerjee describes as being among the “most curious thing borrowed by the Roman and Greek churches”? The etymological genesis of the name “Josaphat” is relatively straightforward, as the saint’s Latin name derived from the Greek “Ioasaph,” itself from the Arabic “Yudasaf,” which traces back to the Sanskrit “Bodhisattva,” the Buddhist honorific for a person who has achieved enlightenment but who, rather than extinguishing their suffering in nirvana, opts to be reborn so as to help their fellow humans achieve peace. 

 

The gradual development of “Bodhisattva” into “Josaphat” provides a rough genealogy of the way in which the central narrative of Buddhism ended up in a Christian hagiography. Historian Lawrence Sutin explains that it was in the “mid-nineteenth century that scholars first concurred that the origins of the Barlaam and Josaphat legend lay in the traditional life story of the Buddha.” He goes on to explain how the narrative found its way into Western Europe from Greece, and before that Georgia, where it had in turn arrived from Persian sources associated with Manichaeism, a once influential and now extinct religion which venerated both Christ and the Buddha. Sutin takes pains to emphasize that narrative similarity doesn’t imply theological congruence, for it “cannot be demonstrated that any distinct Buddhist teaching had survived in the oft-mutated… legend.”

 

An important point, even as there is something beautiful and strange about the thought of medieval Christians worshiping underneath the mantle of the Buddha without even knowing it. Religions aren’t easily reducible to one another; to convert Buddhism into Christianity is to violate that which is singular about both. A certain strain of well-meaning comparative religious studies from the middle of the twentieth century, often associated with Huston Smith’s classic textbook The Religions of Man, has a tendency to remake all of the tremendous diversity of world religions into versions of liberal Protestantism. There is an ecumenical tolerance implicit in the argument that all religions are basically the same, that they all share certain deep truths that can be easily translated into one another, but as scholar Stephen Prothero argues, this is a “lovely sentiment but it is dangerous, disrespectful, and untrue.” When it comes to consilience between Buddhism and Christianity, a fair and honest accounting which is respectful to both traditions must admit that “salvation” and “enlightenment” are not the same thing, nor is “karma” equivalent to “sin,” and that “nirvana” is not another word for “heaven.” These things are different, and there is a significance and power in that. 

 

However, we still have St. Josaphat and St. Barlaam, Orthodox and Catholic saints rather than Bodhisattvas, but characters whose story still derives from those forgotten Buddhist sources. They may not demonstrate that all religions teach the same thing deep down, nor are they examples of how faiths can be converted into one another or teachings easily translated. But they do demonstrate that in that ineffable domain between beliefs, in that meeting point where the mysteries of different religions can touch, there is a place for communication. Sutin writes that in the Roman martyrology, St. Josaphat and St. Barlaam’s “joint feast day is observed on November 27, a date that has been ignored by present-day Western Buddhists but might well serve as a time for celebration of longtime affinities between the two paths.” Religions may not be reducible to each other, but the example of the Parma baptistry is a reminder that faiths are ever shifting countries whose borders are more porous than can be assumed. Something in the Buddhist story appealed to person after person in a chain that led from India to Italy and beyond, and even as that story was altered, it was the power of those characters and their narrative that exemplifies a certain shared understanding. Within the space of that portal, there is room for meeting, for mutual understanding, for empathy, for reciprocity, for faith. For mystery.  

Curing Ourselves with Fred Rogers

 

I find it hard not to be upset all the time about American politics, and the American society underneath. For me, things have been getting worse for a long time. Often, I find out that things had been even worse than I thought earlier, but I didn’t know the facts until they were uncovered by some journalist prying into our secretive government. The good news that honest investigations can reveal what powerful people want to keep secret doesn’t quite outweigh the bad news that these investigations reveal.

 

I don’t mean that I am upset all the time. At many times every day, I rejoice at my grandchildren and the children who are raising them, I root for some team on TV, I puzzle over a murder mystery, I accomplish do-it-yourself things all around our old house trying to recapture the past, or I kneel in the gardens pulling up weeds. Thoughts about America as a nation are swamped by the joys of one person’s everyday life.

 

But when those thoughts peek through, or take up all the air when we watch the news at night, they are unhappy ones. In the 15 years I have been writing columns about politics, I have identified all the big problems we face now. The names have changed, but the political ideas and underhanded methods persist. What is new is that those problems all seem more upsetting to me lately. I can identify when this condition began four years ago, as Trump came down the escalator to announce that he was campaigning for President.

 

I think the diagnosis is evident: I suffer from T.I.A.D., Trump-Induced Anxiety Disorder. There may be some help; please see this video for the wonder drug Impeachara.

 

Even if the drug doesn’t work, or exist, just thinking about it provides some temporary relief.

 

I think a longer-lasting cure may be available, one that’s been in front of us all the time. Liz and I joined lots of local baby boomers to see “It’s a Beautiful Day in the Neighborhood”. Fred Rogers was news to me. I had never watched his program and I only knew of his reputation, not him.

 

All the evidence I can find says that Mr. Rogers was just as he was portrayed by Tom Hanks: a lover and inspired teacher of children; impossibly nice to everyone around him; willing to talk to children about the most difficult subjects, like divorce and nuclear war; clever but transparent about using television to spread his message of love and tolerance.

 

Less well known is that he was a determined advocate for public television, that he was an ordained Presbyterian minister, and that he wrote all the songs for Mister Rogers’ Neighborhood. However far you dig into Fred Rogers’ life, he was a remarkably good man who spread goodness all around him.

 

Instead of stressing about Trump’s latest idiocy or the decline of American politics, about which we can do very little, we could try to emulate Mr. Rogers. We could see the world as an opportunity to make a difference in people’s lives and devote our energies to doing that.

 

I’m no Fred Rogers. Coming from New York, I could never talk that slowly. The rest of us are just not so good so much of the time. But that doesn’t matter. We can all inch our way toward goodness by thinking more about the real people right in front of us and less about the personalities we see on the screen and the news we get from people we don’t know.

 

That is really the message of my whole collection of articles. The way to take back our lives is to focus more on the immediate, to practice the principles we believe in, to wrest more control by being intentional whenever we can.

 

Mr. Rogers can’t save us, even though Esquire did put him on the cover of an issue about heroes. He wasn’t trying to save the world himself. He was doing his part as less than a billionth of humanity. If we want to be cured of T.I.A.D. without danger of relapse, we all have to do our parts, for our own lives and for others.

Roundup Top 10!  

Thanksgiving is a good time to lose our illusions about U.S. history

by Nick Alexandrov

We misread the past each November, when we consider our country’s earliest phase. We like to think tolerance, a love of liberty and a democratic impulse motivated English colonists. But history tells a different story.

 

Queer Like Pete

by Jim Downs

Buttigieg is getting slammed for being a type of gay man America doesn’t understand.

 

 

How to Talk About the Truth and Trump at Thanksgiving

by Ibram X. Kendi

If we are serious about bringing Americans together, the work has to start with our own families.

 

 

Trump's Toadies Should Take Note: Watergate Says Everyone Goes Down

by Kevin M. Kruse

The lesson Nixon imparts to today’s POTUS loyalists is that courts of law and of public opinion will judge them harshly.

 

 

Contrary to conservative claims, the ERA would help families — but it’s not enough

by Alison Lefkovitz

Decades after its introduction, the Equal Rights Amendment is still urgently needed, and passing it may soon be possible.

 

 

The apocalyptic myth that helps explain evangelical support for Trump

by Thomas Lecaque

Implicit is a vision of the president as a triumphantly apocalyptic figure, one who evokes the medieval legend of the Last World Emperor.

 

 

A 1970 Law Led to the Mass Sterilization of Native American Women. That History Still Matters

by Brianna Theobald

The fight against involuntary sterilization was one of many intertwined injustices rooted in a much longer history of U.S. colonialism. And that history continues to this day.

 

 

It’s Easy to Dismiss Debutante Balls, But Their History Can Help Us Understand Women’s Lives

by Kristen Richardson

The debutante ritual flourished roughly from 1780 to 1914—beginning with the first debutante ball in London and ending with the outbreak of World War I.

 

 

Après Moi, le Déluge...

by Tom Engelhardt

The Age of Trump, the End of What?

 

 

 

Trump’s xenophobia is an American tradition — but it doesn’t have to be

by Erika Lee

Some have always pushed to keep out immigrants, but people have always fought back, too.

The History Behind the Rocket Used in the Latest Attack Against Israel

Israel's Ze'ev rocket, c. 1969 (photo: Israel Defense Forces)

 

A shorter version of this article was posted in The Times of Israel.

It took a few days for the Israeli military to disclose that during last week's round of rocketing from Gaza by Islamic Jihad, a new weapon was introduced: a projectile packing a far larger explosive charge than those hitherto fired. Residents of a settlement in the Northern Negev related that it had caused a terrifying blast when it landed late at night, and in the morning they discovered that it had completely demolished one of their greenhouses, leaving an enormous crater. This pit was incomparably wider and deeper than the relative pockmarks that were made in roads and fields by the Qassam and Grad rockets of previous salvos, destructive as those had been when they scored a direct hit on a residential or industrial building. Imagine, some residents said, what would have happened if this "mega-rocket" had struck one of their houses; none of the reinforced shelters that have been built in residences of the region could have withstood it. If this threat persists and becomes routine, the residents feared, they might have to move away. Expectations of such a response, as well as operational considerations, were presumably part of the military's motivation to delay disclosing the matter.

 

Photos that the Palestinians posted of their new weapon looked eerily familiar to my co-researcher at the Hebrew University's Truman Institute, Isabella Ginor, and myself. The stubby rocket and its primitive-looking pipe-frame launcher, as well as its scary effect, closely resembled what we documented for our recent book The Soviet-Israeli War, 1967-1973. But then it was an Israeli development that figured centrally in the War of Attrition against Egypt along the Suez Canal – and presented a challenge to the Egyptians' Soviet advisers. Their accounts provided extensive detail about the Israeli rocket, which had remained top secret on the Israeli side for years afterward and whose role in the course of the war was therefore largely overlooked.

 

For the Palestinians now, as for the Israelis then, the purpose of wielding this blunderbuss was to counter the adversary's overwhelming advantage in firepower. The rapid Soviet resupply of Egypt's army after its devastating defeat in the Six-Day War soon had the small Israeli garrison east of the canal hopelessly outnumbered – by an estimated 13 to one – in artillery pieces as well as manpower to fire them. Israeli engineers got to work on a makeshift counterbalance, based on the heaviest variant of Soviet Katyusha that had been captured in the June war.  

 

A year later, a Soviet artillery adviser to the Egyptian II Army Corps on the canal, G. V. Karpov, was summoned to inspect the fragments of an Israeli rocket of a heavy and hitherto unfamiliar model, which had “left a big crater” when it was first used. As the canal front was still relatively quiet, this appears to have been a test firing. But Karpov got a better look when, on 8 September 1968, the Egyptians – encouraged by the Soviet advisers – unleashed the first massive shelling on what were the still-flimsy Israeli positions on the east bank of the canal. The Israeli response included, among others, a number of Ze'evs.

 

The Israeli “flying bomb” was inaccurate, and Israeli operators would soon learn that it was prone to boomerang. Still, at virtually point-blank range it could cause a good deal of damage to positions that were hardened only against smaller shells. The intended, and successful, effect of its blast was what would be called, a half-century later, “shock and awe.” Although UN observers reported at least three such rockets fired on 8 September, Egypt – like Israel today – was in no hurry to publicize this, evidently for fear of sowing panic among residents of the towns along the canal's west bank. 

 

Israeli soldiers, from whose outposts the Ze’ev was launched by specialists, were not permitted to handle the top-secret weapon themselves. I and my fellow paratroop reservists were likewise warned when two big crates were installed in our strongpoint overlooking the Jordan, pointed at Jordanian or Palestinian targets across the river. They were described to us only, mysteriously, as "Ze'evim." We never got to witness a launch, but our counterparts on the canal front judged by the rocket's visible impact that it must deliver a half ton of high explosive.

 

Drawing on his expertise, Karpov calculated correctly that the rocket's warhead was actually less than one-fifth as big, and this too at the expense of very short range – 4km – which would put the launch sites within easy reach of Egypt's new, Soviet-supplied 130mm cannon. He began working out countermeasures, which were partly implemented on 26 October when the next artillery barrage was initiated. 

 

The Egyptians claimed (but UN observers denied) that this round was provoked by Israel's firing of two 216mm rockets which destroyed houses in Port Tawfik, at the southern end of the canal. Out of the 14 rockets the Egyptians accused Israel of launching at civilian targets, they exhibited (and presumably turned over to Karpov for further study) one unexploded specimen, which they claimed was shot down by their anti-aircraft guns. If this was true, and the rocket wasn't simply a dud, it was quite a feat given the missile’s short trajectory. Israel's infinitely more sophisticated anti-missile array did not accomplish the same against the Palestinian "mega-rocket" last week. 

 

Cairo also claimed that its big guns – clearly following Karpov's instructions – destroyed 10 “newly constructed” Israeli rocket-launch sites. The IDF, as before, denied using any missiles at all. Soviet advisers' memoirs confirmed years later that Egyptian firepower was “concentrated on the Israelis’ 216mm [rockets].” Unofficial Israeli accounts later admitted surprising hits on rear-line positions that had been considered out of range (and rumors even spread about female soldiers fleeing naked from a shower shed). In Egypt the entire engagement was henceforth referred to as “the missile incident” and would later be described as one of the Egyptians' major achievements.

 

Israel, which had sustained relatively heavy losses in those first two bombardments, took advantage of the lull that followed to construct the cannon-proof bunkers of the Bar-Lev Line. Egypt was slower to do the same, and even Karpov's guidelines did not put the Ze'ev entirely out of action. On 9 March 1969, the day after Egypt began the consecutive shelling and commando raids that would become the War of Attrition, a Ze'ev killed Egyptian Chief of Staff Abdel Moneim Riad and some of his officers while they inspected a frontline position.

 

Was the Ze'ev a game-changer? Besides the Soviet-devised riposte, the Egyptians did ultimately improve their fortifications. The appearance of Israel's heavy rocket hastened the USSR's agreement to provide Egypt with the longer-range Luna (Frog) tactical missile and other weapons it had hitherto withheld. The balance in the War of Attrition was tipped in Israel's favor that summer only by the introduction of its fighter jets as "flying artillery." Then the balance was reversed when the Soviets sent in their own SAM batteries, in their largest direct intervention overseas since the start of the Cold War. Although the Israel Air Force won a famous dogfight against Soviet-piloted MiGs in July 1970, its unsustainable losses to the Soviet missiles forced it to accept an unfavorable ceasefire which created the preconditions for Egypt's cross-canal offensive on Yom Kippur, 1973.

 

Israel's improvised heavy rocket did provide at least a temporary stopgap in military terms. But perhaps most relevant for today's confrontation with Gaza-based Palestinian organizations and the exposure of Israeli civilians to a similar weapon, it is worth recalling the role of the Ze'ev's big bang in terrifying the populace of Egypt's civilian communities west of the canal – which emptied out soon after. I'll leave it to Israel's military experts to draw the lessons of this precedent for the present case of asymmetrical warfare – now that the "mega-rocket" shoe is on the other foot.   

 

Legalize Torture? It’s Tortured Logic

Kathryn Bigelow’s Zero Dark Thirty (2013) starred Jessica Chastain as Maya, a tough, brilliant and single-minded CIA agent who is prepared to use torture in the interrogation of suspected terrorists. There was nothing sadistic about her character, and she comes to doubt the efficacy of torture – though in the end she is able to learn the whereabouts of Osama bin Laden, which she could not have done, the film suggests, had she been unwilling to employ “enhanced interrogation techniques.” This assertion, that the use of torture did in fact produce useful intelligence that helped lead the U.S. to bin Laden, sparked debate as well as outrage.

The Report (2019) is, among other things, writer-director Scott Z. Burns’ answer to Zero Dark Thirty. It is largely about another single-minded individual, Daniel J. Jones (Adam Driver), lead investigator of the Senate Intelligence Committee, who spent five arduous years doggedly uncovering the CIA’s suspect detention and interrogation program following the 9/11 terrorist attacks. His investigation eventually culminated in a 6,700-page report, a damning exposé of the CIA’s methods of “enhanced interrogation” and of the psychologists who, despite having no interrogation experience, helped design them – methods which included walling, cramped confinement, stress positions, waterboarding, the use of insects, and mock burial. Like Jones, the film is unwavering not only in its moral condemnation of torture, but in its claim that torture is not effective and never produces real, actionable intelligence.

Torture is admittedly an extremely difficult issue to confront. It is so morally reprehensible that we are understandably reluctant to even consider the possibility that it could ever be justified, under any circumstances. The problem is that the world is a messy place – it isn’t morally tidy – and sometimes the right thing to do is not available to us. According to the American Field Manual, the rulebook of military interrogators, “The use of force is a poor technique, as it yields unreliable results, may damage subsequent collection efforts and can induce the source to say whatever he thinks the interrogator wants to hear.”

However, if we are to deal honestly with this issue, we must recognize the fact that there is substantial evidence that sometimes torture is effective in eliciting information and, indeed, it has been known to save innocent lives. In Why Terrorism Works (2002), Alan Dershowitz writes, “There can be no doubt that torture sometimes works. Jordan apparently broke the notorious terrorist of the 1980s, Abu Nidal, by threatening his mother. Philippine police reportedly help crack the 1993 World Trade Center bombings by torturing a suspect.”

If, in certain dire situations, something like nonlethal torture may be justifiable, then it appears we should at least consider Dershowitz’s suggestion that if and when torture is practiced, it is done in accordance with law and with some kind of warrant issued by a judge. “I’m not in favor of torture,” Dershowitz writes, “but if you’re going to have it, it should damn well have court approval.” His claim is that if we are, in fact, going to torture, then it ought to be done in accordance with law: for tolerating torture while pronouncing it illegal is hypocritical. In other words, democratic liberalism ought to own up to its own activities, according to Dershowitz. If torture is, indeed, a reality, then it should be done with accountability. 
There are, however, significant problems with the reasoning behind torture warrants. For one, the legalization of torture would significantly distort our moral experience of the world, corroding the very notion of law itself, which does not rule through abject terror: law is, after all, meant to replace sheer brutality as a way of getting people to do things. Indeed, the rule against torture is paradigmatic of what we mean by law. In short, to make torture law undermines what we take the very rule of law to signify.

Such considerations are closely connected with a concern addressed in The Report: namely, what are the consequences of institutionalizing torture? That is clearly what the introduction of torture warrants would imply – and once you institutionalize torture, you have to elaborate all of its aspects, including the training not only of would-be torturers but also of medical personnel. In other words, the legalization of interrogational torture would require its professionalization; that is, the acceptance of torture as a profession. This normalization is especially disquieting when we consider the role of doctors and medical professionals in torture, for nothing is more antagonistic to what we mean by medicine than its use to prolong a person's agony and brutalization.

Sadly, the participation of medical practitioners in torture is nothing new, and we would do well to remind ourselves of that history, for we are now most certainly part of it. In his book Torture, Edward Peters observes that it was under the Third Reich that torture was "transformed into a medical specialty, a transformation which was to have great consequences in the second half of the twentieth century." Medical involvement in torture first came to world attention with the disclosure of practices in Nazi concentration camps. The Nuremberg trials revealed that physicians had, for example, placed prisoners in low-pressure tanks simulating high altitude, immersed them in near-freezing water, and injected them with live typhus organisms. It is likely that hundreds of doctors and nurses participated in these experiments, although only twenty-one German physicians were charged with medical crimes. What needs to be emphasized is a point that Robert Jay Lifton, M.D., makes about what he calls an "atrocity-producing situation" – an environment "so structured, psychologically and militarily, that ordinary people can readily engage in atrocities." As Lifton observes, many Nazi doctors were engaged not in cruel medical experiments but directly in killing. To get to that point, however, they had to undergo a process of socialization: first to the medical profession, then to the military, and finally to the concentration camps. "The great majority of these doctors were ordinary people who had killed no one before joining murderous Nazi institutions. They were corruptible and certainly responsible for what they did, but they became murderers mainly in atrocity-producing settings."

Referring to the CIA program, Atul Gawande, a surgeon and author, observed that "The torture could not proceed without medical supervision. The medical profession was deeply embedded in this inhumanity." In fact, the program was developed by two psychologists, Jim Mitchell and Bruce Jessen, who – as the film relates – based their recommendations on the theory of "learned helplessness," which describes a condition in which an individual, repeatedly subjected to negative, painful stimuli, comes to view the situation as beyond their control and themselves as powerless to effect any change. The crucial point is that medical professionals were an integral part of the program. Referring to the American doctors who were involved in the torture at Abu Ghraib, Lifton points out, "Even without directly participating in the abuse, doctors may have become socialized to an environment of torture and by virtue of their medical authority helped sustain it."

We can hardly overstate the significance of the process of socialization in facilitating participation in torture. Certain factors are decisive in weakening the moral restraints against performing acts that individuals would normally find unacceptable. Following Harvard University professor of social ethics Herbert Kelman, we can identify three forces that are particularly important. Kelman was especially interested in what he described as "sanctioned massacres" – such as occurred at My Lai during the Vietnam War – but his observations are relevant to the torture setting as well. The first factor is authorization: rather than recognizing oneself as an independent moral agent, the individual feels that they are participating in a mission that relieves them of the responsibility to make their own moral choices. The presence of medical professionals helps lend a sense of legitimacy to the enterprise. Routinization is the second factor, and it speaks directly to the establishment of torture as a profession – the torturer perceives the process not as the brutal treatment of another human being but simply as the routine application of a set of specialized skills; or, as Kelman puts it, "a series of discrete steps most of them carried out in automatic, regularized fashion." Finally, there is dehumanization, whereby the victim is deprived of identity and systematically excluded from the moral community to which the torturer belongs: it becomes unnecessary for the agents to regard their relationship to the victim as ethically significant – in short, the victim is denied any inherent worth and therefore any moral consideration.

Medical personnel who act as advisors on torture techniques are directly implicated in the practice of torture. But if we were to follow Dershowitz's suggestion and effectively institutionalize torture, this medical involvement would be an inevitable result – for it was present already when torture was being practiced clandestinely. It seems strange that Dershowitz, who finds the current hypocrisy so outrageous, would attempt to remedy the situation not by eliminating the hypocrisy but by legitimizing it. For what could be more hypocritical than doctors, sworn to do no harm, taking a more or less active role in the systematic and scientific brutalization of another human being? Yet that would be the unavoidable outcome of legalizing torture through "torture warrants." In closing, institutionalizing torture would have grave consequences – far worse than the hypocrisy that so troubles Dershowitz.
Not only would the practice of torture likely metastasize – spreading well beyond one-off cases – but its professionalization would contribute to the formation of "atrocity-producing situations," and we have seen how this relates in particular to the complicity of doctors. Physicians, nurses, and the medical establishment itself would be severely compromised, ethically, by the institutionalization of torture. All of which is to say that the legalization of torture should be avoided. It is best, then, to uphold the absolute ban on torture, even if that ban will sometimes be violated under extraordinary circumstances.

Cinderella, Whose History Goes Back to the First Century, Is Still a Delight, Glass Slippers and All

 

Who does not know the story of Cinderella, one of the world’s most beloved fairy tales?

 

The story: Lovable peasant girl Cinderella's evil stepmother treats her very poorly while showering love and affection on Cinderella's two idiot stepsisters. The Fairy Godmother arrives, gets Cinderella dressed to kill and sends her off to the Prince's Ball with glittering glass slippers, a wonderful carriage and instructions to leave the ball by midnight, because that is when her lovely carriage turns into a pumpkin. At the ball, she meets the incredibly good-looking Prince and they fall madly in love but, OMG, she did not tell him her name (or Facebook page). She has to flee at midnight and, running away, leaves a glass slipper behind. The love-sick Prince tours the Kingdom looking for the owner of the slipper. He puts the shoe on the feet of hundreds of young women (all hopelessly pathetic) and then finds Cinderella. The slipper fits. They get married, feed the hungry, house the homeless and fix the Kingdom's economy (all of this in two minutes) and live happily ever after.

 

The latest version of Cinderella is a revised version of a 1957 television musical with music by Richard Rodgers and lyrics by Oscar Hammerstein II, itself based on Charles Perrault's classic telling of the fairy tale. This new version opened Sunday at the Paper Mill Playhouse in Millburn, N.J., and it is as wonderful as musicals about good-looking Princes and lost Princesses can be. The new play features superb acting, memorable choreography, very good music and one crackerjack Fairy Godmother.

 

I went to the Paper Mill Playhouse thinking to myself that the play was going to be mediocre at best. Throughout my life I have seen most of the Cinderella plays and movies. What could possibly be left?

 

Well, I was hooked from the first moment of the play, when adorable peasant girl Cinderella wanders through the woods and, what ho!, accidentally meets the Prince. He likes her right away, despite her peasant-girl status on the Kingdom's social rung. He continually loses her, though (should have paid the extra for caller ID).

 

Then we meet the God-awful stepmother, a real social-climbing shrew. She makes Cinderella do all of the dirty work in the household while the other two daughters relax. For them, life is fun, fun, fun, while for Cinderella it is work, work, work.

 

The stepmother prods the two sisters to go to a ball the Prince is throwing to find a wife. They are outrageous and make total fools of themselves at the ball, as does everybody there except, of course, Cinderella. At times, the way the women converge on the Prince recalls the television show The Bachelor.

 

Anyway, as you all know, the slipper is dropped and the biggest woman hunt in fairy tale history follows. The scene where the Prince gives up after trying the slipper on hundreds of women is wonderful. Then Cinderella slowly steps out of the crowd and he slips the slipper on her foot easily. All the other women groan.

 

Every generation has its own take on Cinderella. This one is pretty heavy on the woes of working-class Americans, the problems of families, and the inexperienced Prince ignoring his trusted advisor on the issues and making his own wise decisions. There is a lot of material on insanely jealous women and how they act, but let's not go there.

 

The Kingdom long ago was not much different from the U.S., or any other country, today. All the Prince really needed was a good wide-screen TV, an iPhone and some Taylor Swift CDs.

 

Mark Hoebee does a sensational job as director. He brings back the old fairy tale, but adds lots of new wrinkles, too. He gets fine work from his cast. Ashley Blanchet is a revelation as Cinderella. She plays the role wonderfully and has a majestic singing voice. Her marvelous Prince is well played by the equally talented Billy Harrigan Tighe. Other fine performances come from Michael Wayne Wordly as the Prince's audacious advisor, Donna English as the stepmother, and Rose Hemingway and Angel Lin as the stepsisters.

 

One of the reasons the musical succeeds is the sprightly choreography of Joann M. Hunter. It is just dazzling.

 

Cinderella is not only one of the world’s most beloved fairy tales, but one of the oldest. There are debates about when and where the story was invented, but it narrows down to two places.

 

The tale seems to have first appeared in Egypt around 100 A.D. That story featured a lost Greek girl who stumbled into a party hosted by the Pharaoh. Some of the Cinderella elements – bad parents and foolish sisters – were in it. The next version was produced around 700 A.D. in the T'ang dynasty in China. It, too, had some of the later elements of the story. Between 100 A.D. and the late 17th century there were over 300 Cinderella-type stories in dozens of countries. In one, poor Cindy had to eat her big toe (uggh!).

 

The popular Cinderella tale that we all know and love, called Cendrillon, was written by Frenchman Charles Perrault in 1697. It had the mean stepmom, the two dolt sisters, the charming Prince, the carriage and the slippers. It stood for centuries. Walt Disney came along in 1950 with probably the most successful Cinderella tale, an animated film. It did well in theaters and then was shown over and over again on Disney's television shows. It cemented the Cinderella legend. Over the last twenty years there have been a few more live-action films about Cinderella. This latest show has a brand-new, very up-to-date book by Douglas Carter Beane.

 

PRODUCTION: The musical is produced by the Paper Mill Playhouse. Sets: Anna Louizos. Costumes: William Ivey Long. Lighting: Charlie Morrison. Sound: Matt Kraus. Choreography is by Joann M. Hunter. The play is directed by Mark Hoebee. It runs until December 29. 

We Cannot Forget About Acid Rain

In the 1960s, ecologists started to record the detrimental effects of acid rain. While acid rain damaged many areas in America, the Adirondack Park (located in upstate New York) endured the worst consequences of any area in the nation.  

 

Acid rain is created when nitrogen oxides (NOx) and sulfur dioxide (SO2) combine with water in the atmosphere to form nitric and sulfuric acids. These acids can be carried through the air for hundreds of miles and return to the earth in a number of ways, including rain and snow. Many sources emit these gases, but burning coal is one of the biggest creators of acid rain. When coal became the primary fuel used to generate electricity in America in 1961, acid rain became a significant problem. The Adirondacks and other downwind areas suffered the consequences of acid rain even though very little coal was burned there. This was an alarming indication that pollution like acid rain was not just a local issue, but a national threat.  
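For readers who want the chemistry spelled out, a simplified sketch of the overall reactions described above is given below; the actual atmospheric pathways involve additional oxidation steps and catalysts, so this is an approximation rather than a full mechanism.

\[
2\,\mathrm{SO_2} + \mathrm{O_2} \rightarrow 2\,\mathrm{SO_3}, \qquad \mathrm{SO_3} + \mathrm{H_2O} \rightarrow \mathrm{H_2SO_4}\ \text{(sulfuric acid)}
\]
\[
4\,\mathrm{NO_2} + \mathrm{O_2} + 2\,\mathrm{H_2O} \rightarrow 4\,\mathrm{HNO_3}\ \text{(nitric acid)}
\]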

 

Acid rain destroys forests, harms wildlife, degrades buildings, pollutes water supplies, creates caustic fog, and threatens human lives. If this list is not bad enough, acid rain also increases the number of black flies – arguably the peskiest of all pests. If anything is certain, it is that nobody wins with acid rain (except the black flies, of course). Acid rain is a dangerous problem, and history can teach us an important lesson about it that we cannot afford to ignore.

 

In the Adirondacks alone, the effects of acid rain were astounding. During the 1980s acid rain scare, a third of the red spruce trees died and over a fourth of the lakes were so acidic that they could not support fish. For perspective, the Adirondack Park covers six million acres, with 2,800 lakes and millions of trees. Acid rain not only destroyed many of these lakes and trees but endangered livelihoods as well. Fishermen at the time were so desperate to save the fish population that they dumped truckloads of lime into the lakes to try to counter the acidity (to no avail). The beautiful landscape of the Adirondacks (which draws tourists from all over the world) was degraded, and fog obscured the unique Adirondack views. The beautiful park was being destroyed by coal burning hundreds of miles away.

 

After the terrible consequences of acid rain were recognized, legislation and regulations were enacted to help save the park. In 1990, Congress passed amendments to the Clean Air Act to help control acid rain. The effectiveness of this law is debated, but it was an important start. In the years that followed, lawsuits and settlements brought against polluters (through the work of New York Attorneys General Eliot Spitzer, Andrew Cuomo, and Eric Schneiderman) improved conditions in the Adirondacks. These lawsuits, paired with the Clean Air Interstate Rule and the eventual Cross-State Air Pollution Rule, provided desperately needed help to the Adirondacks. After these actions were taken, it took years for fish to return and trees to recover. Yet these actions turned a bad situation into a true environmental success story.

 

While major improvements have occurred in the last few decades, the Adirondacks are still not safe from acid rain. In 2018, President Trump repealed the Clean Power Plan, a policy adopted during President Obama's administration that was designed to further reduce emissions and coal burning and that would have continued to protect the Adirondacks from the harmful effects of acid rain. Beyond the repeal of the Clean Power Plan, President Trump's other environmental deregulations have removed additional layers of critical protection. These actions threaten us with the same dangers present during the 1970s.

 

We cannot afford to ignore the history of acid rain. President Trump's deregulations and repeals could harm many areas of the country, including places like the Adirondacks. Too much is at risk, as history has shown us, to allow acid rain to recur. We must remember the dreadful history of the Adirondacks when deciding the environmental future of our nation.

Trump's Official Withdrawal from the Paris Climate Agreement Mirrors George W. Bush's Exit from the Kyoto Protocol

Earlier this month, President Donald Trump made good on a campaign promise when he officially notified the United Nations of the United States' intent to withdraw from the Paris Agreement. While the President has repeatedly criticized the Agreement, last week was the first possible day the withdrawal process could legally begin, per the language of the agreement that the United States helped craft in 2015. 

 

Secretary of State Mike Pompeo announced the initiation of the withdrawal process via a statement and on Twitter. Using a justification popular with Trump voters, Pompeo’s statement claimed the Agreement will hurt the U.S. economy:

"President Trump made the decision to withdraw from the Paris Agreement because of the unfair economic burden imposed on American workers, businesses, and taxpayers by U.S. pledges made under the Agreement," Pompeo said… The United States has reduced all types of emissions, even as we grow our economy and ensure our citizens’ access to affordable energy."

 

The United States has a long history of being hesitant to match other Western nations' commitment to the climate. In fact, Trump's decision to withdraw from the Paris Agreement marks the second time that the United States has not only entered, but helped craft, a climate agreement, and then exited it. The pro-economy, America-first rhetoric used by the Trump White House regarding Paris is eerily similar to that used by President George W. Bush in 2001 to justify withdrawing from the 1997 Kyoto Protocol.

 

In the 1990s, the international community recognized that greenhouse gas emissions were harming the climate as global temperatures rose. In response, international leaders committed to substantial emission reduction targets via the Kyoto Protocol, an extension of the 1992 United Nations Framework Convention on Climate Change (UNFCCC). Under President Bill Clinton, the United States agreed, along with 40 other countries and the European Union, to reduce emissions 5.2 percent below 1990 levels during the target period of 2008 to 2012. Much like President Trump, then-candidate George W. Bush distinguished himself from opponent Al Gore (a man instrumental in the United States' adoption of Kyoto) by campaigning against Kyoto:

“The Kyoto Treaty would affect our economy in a negative way,” Bush said during his 2000 presidential campaign. “We do not know how much our climate could or will change in the future. We do not know how fast change will occur, or even how some of our actions could impact it.”

 

Bush officially withdrew from Kyoto in 2001, putting the United States far behind its European counterparts in efforts to control climate change. Despite some backlash at the time, only a slim majority of Americans believed the effects of climate change were immediate, and 58% of Americans either agreed with Bush's withdrawal from Kyoto or had no opinion at all. Today, however, climate policy is more important to a majority of Americans. Two-thirds of Americans believe they are actively witnessing the effects of climate change, and only a slim minority favors Trump's decision to withdraw from Paris. 

 

This raises the question: Why aren't Democratic candidates talking more about the climate? Yes, it is a fair assumption that anyone attempting to secure the 2020 Democratic nomination takes a significantly more activist stance on the climate than Donald Trump, and many do consider re-entry into the Paris Agreement a necessary condition for legitimate candidacy. No "frontrunner" candidate, however, has discussed the fact that Trump's withdrawal from Paris won't take effect until November 4, 2020 – a day after the 2020 general election. The timing of the withdrawal process would allow a newly elected Democratic president to re-enter the agreement almost seamlessly, while Trump's re-election would all but secure the death of the Paris Agreement in the United States. 

 

Public support for action on climate change is higher now than ever, and the potential to reverse Trump's Paris decision could carry significant weight for undecided voters. Despite this, climate change was mentioned only 10 times in the October Democratic primary debate. General election season is fast approaching, and the Democratic Party as a whole would do well to shift its rhetoric toward broadly popular topics, like environmental activism and re-entry into the Paris Agreement, so as to establish an early rapport with the undecided voters who will decide the 2020 election.

Historians criticize Trump after he calls impeachment inquiry a 'lynching'

On Tuesday, October 22, 2019, President Donald Trump described the House's impeachment proceedings against him as a "lynching." He tweeted: "So some day, if a Democrat becomes President and the Republicans win the House, even by a tiny margin, they can impeach the President, without due process or fairness or any legal rights. All Republicans must remember what they are witnessing here – a lynching. But we will WIN!"

 

Trump's comparison evokes one of the darkest chapters of American history. Concentrated in the 19th and early 20th centuries, lynchings were extrajudicial executions of African Americans. They were often public events used to enforce racial subordination and segregation in the South. Trump's use of the term to describe his political predicament provoked significant outrage from many historians and politicians. Let's take a look at how historians denounced his use of the term.

 

Lawrence B. Glickman, a history professor at Cornell University, wrote a Washington Post article about the long history of politicians claiming to be victims of lynching and racial violence. The article describes a type of conservative rhetoric he calls "elite victimization." Glickman argues Trump's use of the term is a mode of speech typically used by wealthy, powerful elite men who employ the language of enslavement to claim to be victims. Glickman's article gives the reader key insight into how wealthy White men have appropriated the language of minority rights in order to depict themselves as precarious and weak.

 

In the Washington Post article, Glickman provides examples of previous politicians who used images of racialized subjection, including slavery and lynching, to describe their plight. For example, on December 2, 1954, the Senate voted to censure Senator Joseph McCarthy (R-Wis.), who had led the fight in Congress to root out suspected Communists from the federal government; McCarthy complained that the "special sessions amounted to a lynch party." Glickman also highlighted the 1987 ad campaign from the National Conservative Political Action Committee, which condemned the "liberal lynch mob" for criticizing President Ronald Reagan during the Iran-Contra scandal. Like Trump, these politicians conceived of themselves as a persecuted minority. Instead of embracing their elite position of power, some conservative men have instead appropriated victimhood, distorting the history of lynching.

 

Seth Kotch, a history professor at UNC–Chapel Hill and an expert on lynching, tweeted that "lynching is not something that can be appropriated by a billionaire president who wants to do crimes without consequences. But victimhood apparently can be." In a follow-up tweet, Kotch said the President's complaint "is really revealing [how] lynching was about the perverse and enduring idea of white male victimhood." The idea of white male victimhood is a topic Kotch addresses in his latest book, Lethal State: A History of the Death Penalty in North Carolina. In an interview with The INDY newspaper, Kotch detailed how lynchings after slavery targeted African American men to preserve white supremacy and capital punishment. Mob murders in North Carolina disrupted Black communities, stole Black wealth, and destroyed Black-owned property. White men who joined lynch mobs did so "because maintaining White dominance was materially and symbolically important to them… as part of their racial inheritance." Kotch's historical references are significant because they teach others how to acknowledge and memorialize the victims of the lynch mobs.

 

Kevin Kruse, a history professor at Princeton University, delivered a thorough takedown of President Donald Trump's claim that the impeachment inquiry represents a lynching. In a past tweet that was reposted in an article by AlterNet, Kruse stated, "I'm not sure what 'legal rights' he thinks he's entitled to in the current stage of the impeachment process – which are akin to a grand jury investigation and indictment – but whatever rights he imagines he has will apply in the Senate trial." Kruse convincingly argued that the constitutional mechanics of the impeachment process in the House require only a simple majority of lawmakers in order to advance. In the same thread, Kruse wrote, "comparing impeachment proceedings to a lynching is even more insulting when you've cozied up to the very forces of White supremacy that historically have used lynching as a tool to terrorize racial minorities." Kruse's Twitter thread helps us understand how the impeachment process is being properly conducted, undercutting the president's assertion that it is unfair. Kruse also historicized the inappropriate metaphor by informing readers that the first time impeachment proceedings were described as a "lynching" was when conservatives tried to defend Richard Nixon during the Watergate investigation.

Historians have made it clear that the term "lynching" should not be applied to situations like impeachment inquiries. They argue that metaphorical use of the term is problematic because it erases the history of the racist violence once practiced in the United States.

The History Briefing on "Quid Pro Quo": The Evolution and History of Quid Pro Quo

Quid pro quo: a favor or advantage granted or expected in return for something.

 

Over the past two months, the impeachment inquiry has sparked intense debate over the alleged quid pro quo agreement between President Donald Trump and the Ukrainian government to investigate Joe Biden in exchange for releasing halted military aid.

 

In the rough transcript of the phone call between President Trump and President Zelensky released by the White House, President Trump tells President Zelensky, "I would like you to do us a favor though," while discussing the United States providing military support for Ukraine. President Trump has fiercely defended himself, claiming that there was "no quid pro quo" in his "perfect" phone call to the Ukrainian President. As Congress holds public hearings in the impeachment inquiry, it is important to understand exactly what "quid pro quo" means in order to determine whether it applies to this phone call. Historians have provided an important perspective on how our understanding of "quid pro quo" has changed over time.

 

In an interview with NPR, the Wall Street Journal's language columnist Ben Zimmer discussed the definition and past understanding of the term. Quid pro quo means "something for something" in Latin. Zimmer explained that in the 16th century, apothecaries would substitute one medication (the quid) with a similar one that often did not work as well or may even have been harmful (the quo). It was a "practice people were scared of," Zimmer stated. Once the term quid pro quo was used in a legal context, it retained its initial negative connotation even though, on its face, it should be neutral. And even though "quid pro quo" has been used in the English language for over 500 years, "the political situation can't help but reform the way that we're going to understand this particular phrase." History demonstrates that the use of this term has evolved with the ways it has been deployed over time, the most recent being its legal use in the impeachment inquiry.

 

Today, lawyers evaluate quid pro quos in cases involving bribery, extortion, and sexual harassment, Columbia Law School professor Richard Briffault explained in a New York Times article. He noted that while not all instances are illegal, in politics the term is often used to describe corruption. The Washington Post's video "Quid pro quo, explained" pointed out that a quid pro quo is usually very hard to prove because it is rare to find an explicit demonstration of trading one thing for another. The initial deal does not have to be successfully completed to be considered a quid pro quo; an attempt is sufficient.

 

Doug Rossinow, a history professor at Metropolitan State University, compared the Ukrainian quid pro quo with the Iran-Contra affair in a Washington Post article. In 1984, Congress passed a law that essentially barred President Ronald Reagan from using a proxy army, known as the Contras, to destabilize the socialist government in Nicaragua. In an attempt to keep secretly supplying the Contras with weapons and money, the Reagan administration made illegal arrangements with other governments to supply the Contras on behalf of the United States in return for U.S. military aid. The article explains that what made this the Iran-Contra affair was the discovery – once the scandal came to light in 1986, after a supply plane was shot down by Nicaraguans – that Reagan's team had also authorized the sale of weapons to Iran, which had been labeled a terrorist state. Even when this scandal emerged publicly, Reagan avoided impeachment. Rossinow emphasized a key difference between these two situations: "Like Reagan, Trump has played fast and loose with American assets and security policy… Reagan committed impeachable acts out of zealotry, Trump played the Ukraine card in what seems a crassly political gambit." Reagan was able to avoid impeachment because his motives did not appear to be his own personal political gain. 

 

While the use and meaning of "quid pro quo" have evolved over time, its history demonstrates how the phrase has come to be associated with corruption and abuse of power. Understanding the history of this phrase is important because it allows for an informed opinion on the current impeachment inquiry.

 

 

Democrats Should Welcome Michael Bloomberg Into the Primary Race

 

Soon after Michael Bloomberg filed for the presidential primary in Alabama, politicos rushed to criticize the former mayor of New York City. They berated Bloomberg for trying to enter the presidential race without doing the hard work that has long engaged the other Democratic candidates. Some dismissed Bloomberg as an ambitious billionaire. Voters wanted leaders who attacked Wall Street, critics asserted, not someone who made a fortune there. Others warned that Bloomberg, a New Yorker, could not appeal to voters in the heartland. These arguments received considerable attention in the national media, but they do not hold up under scrutiny. 

 

There are good reasons for the late filing. Bloomberg hinted months ago that he would not consider a run unless moderates, especially Joe Biden, slipped. Biden has been losing ground to Elizabeth Warren in the early primary states of Iowa and New Hampshire. Perhaps Bloomberg worried about recent polls that indicate Warren and other Democratic candidates might struggle against Trump in the general election. A New York Times Upshot/Siena College poll shows that Trump is running close to or ahead of current leaders in the Democratic field in six states that may decide the 2020 presidential election. Time is running out for Bloomberg or other potential candidates. Many Democratic strategists think additional choices could prove helpful. 

 

They recognize that the party's four current leaders – Joe Biden, Elizabeth Warren, Bernie Sanders, and Pete Buttigieg – bring distinct skills, yet each has vulnerabilities. Biden is likeable and experienced but seems unsteady in the debates. Warren offers well-researched plans for reform, but some voters think her proposals are too costly, and polls indicate she is behind Trump in most tossup states. Sanders impresses numerous followers with strong challenges to inequality, but some characterize him as a cranky socialist. Buttigieg is a brilliant communicator, but some think he is too young and inexperienced to be elected president in 2020. 

 

David Axelrod, formerly a key adviser to Barack Obama, summed up the worries. Axelrod pointed to “nervousness about Warren as a general election candidate, nervousness about Biden as a primary candidate . . . and fundamental nervousness about Trump and somehow the party will blow the race.” The major reason some Democrats welcome an entry from Bloomberg is that no other Democrat presently appears comfortably positioned to defeat Donald Trump. 

 

Like all the top Democratic contenders, Michael Bloomberg brings strengths and vulnerabilities. He was very effective as three-term mayor of New York City. Historian David Greenberg noted in the New York Times that under Bloomberg, “Crime plummeted, schools improved, racial tensions eased, the arts flourished, tourism boomed, and city coffers swelled.” There were controversies, of course, especially over “stop and frisk” anti-crime measures that disproportionately affected black and brown citizens. Bloomberg will need to address this important issue if he campaigns for president. African Americans and Hispanics want just treatment, and they have a substantial presence in the party. Other criticisms focus on Bloomberg’s personal characteristics rather than issues related to his terms as mayor. Some say Bloomberg is too old or too short or that Americans are not ready for a Jewish president.

 

A more frequently articulated criticism is that voters do not want to see another rich man in the White House. Bernie Sanders warned, "Sorry, you ain't going to buy this election." A billionaire like Bloomberg cannot be counted on "to end the grotesque level of income and wealth inequality which exists in America today," argued Sanders. Critics who echo Sanders's attack complain that Democrats have given too much power over the years to wealthy candidates, officials, and benefactors.

 

Michael Bloomberg is, indeed, one of the wealthiest Americans, but it is worth noting that several of America’s best presidents also ranked among the country’s richest. George Washington and Thomas Jefferson possessed fortunes in land and slaves. Theodore Roosevelt and John F. Kennedy benefited from the success of wealthy, business-oriented fathers. Franklin D. Roosevelt, born of the manor, probably did more for America’s unemployed and poor than any other president. 

 

Self-made achievement in business and the professions does not guarantee success in politics. Herbert Hoover, an orphan at age 10, achieved multi-millionaire status as a mining engineer but fared poorly in the White House. Donald Trump claims to be a high-achieving billionaire, yet scholars rate him the worst or one of the worst presidents.

 

We cannot judge Michael Bloomberg’s potential for effective presidential leadership in terms of his record in business. Nevertheless, a Bloomberg candidacy provides distinct opportunities for Democrats. Michael Bloomberg could become a Democratic asset if the U.S. economy continues humming along in 2020. Studies reveal that economic growth and low unemployment frequently benefit an incumbent in presidential races. Trump says his policies ignited a business boom. The claim is incorrect. Markets tumbled when Trump weaponized trade wars and threatened to shut down the government. Current Democratic candidates have not been persuasive when responding to Trump’s bogus claims about brilliant economic management.

 

More effectively than any other Democrat currently in the presidential lineup, Michael Bloomberg can counter Trump’s boasts about business acumen by demonstrating greater expertise in financial affairs. Unlike Donald Trump, who received a $60.7 million loan from his father to launch a business career, Michael Bloomberg started without a strong initial boost. Through brilliant planning and investing, Bloomberg emerged as the fourteenth richest person in the world, according to Forbes. Also, after three terms as mayor, he left New York City’s finances in excellent shape. After retiring from city politics, he committed much of his money to progressive causes. Bloomberg funded global health programs and supported political candidates that challenged the National Rifle Association, protected the environment, and worked to limit climate change. Bloomberg also took the Giving Pledge. Nearly all his net worth will be given away in the years ahead or left to his philanthropic foundation. 

 

Bloomberg’s critics are focusing on irrelevant matters when denouncing his candidacy. It is not particularly important that Bloomberg is a senior citizen or a diminutive New Yorker or that he entered the primary race long after other candidates began campaigning. Nor are facts about Bloomberg’s enormous wealth likely to turn off millions of voters (the image of business success benefited Trump in 2016). 

 

Economist and columnist Paul Krugman, usually insightful, recently joined the chorus of complaints about irrelevant matters. He mocked the idea “that America is just waiting for a billionaire businessman to save the day by riding in on a white horse.” But whether a rescuer is rich or middle class is not especially important. The significant question for Democrats is: who will be available to the party if polls reveal in 2020 that Donald Trump is competitive or dominant in battleground states against leading Democratic contenders? If the 2020 surveys indicate that America and the world are in danger of experiencing four more years of deeply flawed presidential leadership, Michael Bloomberg’s candidacy may look promising. 

 

Investigating Technology and the Remaking of America

 

An agricultural and ranching valley in Northern California, the “Valley of Heart’s Delight,” became the cradle for technological innovation and manufacture that reshaped America in the decades following the Second World War and led to the Information Age. By the 1970s, with the upsurge in silicon chip makers there, a writer labeled the area Silicon Valley, and the name stuck. Integrated circuits, microprocessors and microcomputers were among the technologies developed in the Valley.

 

As a result of this work, we now carry supercomputers--smart phones—that have more power than the computers that made possible the American journey to the moon a half century ago. And it’s possible now to access a wealth of information through these devices and our computers, a realization of the vision of legendary MIT professor, engineer, and computational pioneer Vannevar Bush. In 1945, Bush wrote of his dream for “the memex,” an office machine that would organize and hold all human knowledge. Now we have the Internet.

 

Many of the major innovators of Silicon Valley dreamt of making the world better through technology by connecting people and making available a world of information. In recent years, however, the optimism about technology has faded with increased concerns about privacy, monopolization, disinformation, toxic social media platforms, and other issues. 

 

In her lively and extensively researched book The Code: Silicon Valley and the Remaking of America (Penguin Press, 2019), acclaimed history professor Margaret O'Mara chronicles the story of the Valley from the wartime era of Vannevar Bush to Steve Jobs and Bill Gates to more recent innovators, such as Mark Zuckerberg, as she tackles the origins of the questions emerging today about the dark side of the technology. 

 

In this comprehensive history of Silicon Valley, Professor O’Mara lays out the political and historical context of technological advances over the past seven decades. She shares engaging profiles of many of the leading figures in technology from engineers and scientists to venture capitalists who made many of the achievements possible. Her writing is based on rigorous archival research as well as dozens of interviews, original research of company and personal records, and many other materials.

 

Before graduate school, Professor O'Mara worked in the Clinton-Gore White House as a policy analyst specializing in public-private partnerships. She brings that expertise to The Code as she details the often overlooked but critical role of massive federal government funding of technology in the wake of the Second World War, with the nuclear arms race, the Cold War, and the space race – an infusion of funding that continues to the present. She pays particular attention to the politicians and lobbyists who were often enthralled by high tech and made possible special government treatment for this unique industry with generous funding as well as tax breaks, lack of regulation, trade deals, and more.  

Written for scholars and general readers alike, The Code puts a human face on the development of our technology today as it chronicles major developments and illuminates the personalities who made our high-tech world possible. The book will serve as an important reference for all who study the history of technology and politics, and for those who want to understand how we got to the questions about omnipresent technology that we grapple with today.

 

Margaret O’Mara is the Howard & Frances Keller Endowed Professor of History at the University of Washington where she teaches undergraduate and graduate courses in U.S. political and economic history, urban and metropolitan history, and the history of technology. Her other books include Cities of Knowledge (Princeton, 2005) and Pivotal Tuesdays (Penn Press, 2015). She has also taught history at Stanford University. She earned her doctorate in history at the University of Pennsylvania.

 

Professor O’Mara is also a contributing opinion writer at The New York Times, and her writing has appeared in The Washington Post, Newsweek, Foreign Policy, American Prospect, and Pacific Standard, among others. In addition to teaching, she speaks regularly to academic, civic, and business audiences. She lives near Seattle with her husband and two daughters.

 

Professor O’Mara generously discussed her background as a historian and The Code at her office at the University of Washington in Seattle. 

 

Robin Lindley:  Thank you for meeting with me Professor O’Mara and congratulations on your groundbreaking history of technology, The Code. Before getting to your book, I wanted to ask first about how you came to be a history professor. When you were young, did you think about being a historian? 

Professor Margaret O’Mara: No, that wasn’t the first sort of thing I saw myself doing. The first time I thought about it, I wanted to be an astronaut or a doctor or other things that five-year-olds want to do. 

I was really involved in theater when I was a young teen and I was a theater kid, so I wanted to be an actor. My mother was a professional actor. So many members of my family have done things that involve standing up in front of people and performing in some way. We have actors, we have musicians. My brother's a professional rock musician, and he's way cooler than me. My father is a retired clergyman who stood in front of people. My grandfather was an elected official and civil servant in the U.K. My grandmother was a concert pianist. 

When I think about the larger ecosystem of my two families, what I do as a professor seems [natural] because I stand up in front of people and teach or I speak to groups of people. I came into graduate school from the very beginning with a very public facing history in mind. My goal was to be a historian who was speaking to policymaking and public audiences because I'd come from policymaking. I think that's animated everything I've done ever since.  

 

Robin Lindley: Thanks for sharing that background. Did you get your undergraduate degree in history then? 

Professor Margaret O’Mara: I did. I went to Northwestern partially because it was on my radar screen when I was a teenage theater kid and wanted to be a theater major and they've got a renowned theater program. But by the time I was applying to college, I realized I was going to do something else. I didn't know what, but I originally was an English major and then I added history as a second major. In little ways, I realized that so much of what I loved about literature was its history and I saw these texts as historical and material culture of the past, and reflections and depictions of the past. And that's really what interested me the most.      

And, reflecting back on it, you realize when you're young, you sometimes have experiences and don't realize how formative they are. I went to Little Rock Central High School, the site of the famous desegregation crisis in 1957 in which the governor called out the Arkansas National Guard troops to prevent the integration of this Southern high school. And then Eisenhower had to call in federal troops to enforce the integration order. This was a seminal moment in the struggle to integrate public schooling in the South. 

My graduating class was exactly 30 years after the crisis at Central High. The Little Rock Nine, the nine African American students who integrated the school, came back as a group for the first time that year to visit the school. I remember vividly their visit. They walked down the hallways and were welcomed and celebrated at this place, in the space that had been so incredibly hostile to them. 

 

Robin Lindley: Were there many black students in the school when you graduated?

Professor Margaret O’Mara: Yes. It was majority minority by then. It spanned the socioeconomic spectrum, was multiracial, and produced many national merit scholars every year, and also had a lot of kids who were living in poverty, and then every stop in between. It was a really fantastic, amazing high school.

 

Robin Lindley: You were raised in Little Rock then?

Professor Margaret O’Mara: Yes. I grew up in Little Rock, Arkansas. And so Bill Clinton, the person who was governor when I was growing up, was someone I knew because Little Rock is tiny. It may be 200,000 people now, but it was about 150,000 people when I was growing up there, and yet it was the biggest city in the state. There was a kind of intimacy and familiarity with which everyone knew one another, including the Clintons. That cannot be overstated. It was really a very small stage. 

 

Robin Lindley: So Bill and Hillary Clinton were a real presence as you grew up? 

Professor Margaret O’Mara: Oh, absolutely. We lived in the same neighborhood. They were younger than my parents and Chelsea is younger than me, so we didn't socialize, but we knew lots of friends in common. It was just a small town. 

As a side note, I will say that Bill Clinton is an extraordinary figure and so is Hillary. From a very early point, even when he was the governor of this tiny state, he was so extraordinarily charismatic. He just glows with charisma. You always remember every interaction with Bill Clinton, even when he was governor of Arkansas. 

 

Robin Lindley: We were at a rally for him in Seattle in 1992, and he just happened to walk by and we shook his hand. He was extremely enthusiastic and my wife Betsy said she felt this electricity emanating from him.

Professor Margaret O’Mara: Exactly. He's a remarkable politician. When I tell people how he’s mesmerizing and magnetic, they think about his problematic record with women, but it's something that transcends that. He’s always in campaign mode, seeking your vote. He always expresses this intense interest in you and he wants to know exactly what you are up to and that makes you feel so incredibly important. That’s a talent. 

So all this is related to the question of growing up in this place that was very historically resonant. I was part of that experience and then someone I knew become president.

 

Robin Lindley: How did you get involved in politics?

 Professor Margaret O’Mara: After I graduated from Northwestern, I worked on the campaign for Clinton. That was partially because I was a history major and didn't have a job. I tried to do this corporate recruiting and hadn't gotten hired mostly because I had my intellectual passions but hadn't really figured out my professional ones. 

If you had asked me then if I wanted to be a historian, no, I didn't really. I didn't know I wanted to be a historian until I applied to grad school. And even when I was in grad school, I thought I'm not going to be a professor. I was going to get my history degree and then go back to Washington to work on policy and have a career there. This was exactly my plan.

I never thought I'd be doing what I'm doing. With the broader restructuring of the academic job market, where so few jobs like mine continue to exist, the fact that I'm sitting here as a full professor talking to you just continually blows my mind. Before I went to grad school, I had thought that becoming an academic meant walling oneself off from public conversation, which is why I wanted to take my PhD and use it in a different way. Then I realized that I could actually craft a career for myself where I could do both. And that's why I've consistently stayed doing things in public throughout my academic career.          

 

Robin Lindley: What did you do in the Clinton campaign? 

Professor Margaret O’Mara: I started off in the mail room as all great careers start. Well, technically the correspondence office. As a note to all striving young people out there who are trying to get in on the entry level like I did, I started in the most unglamorous position ever. I was operating the autopen, which is this crazy machine that essentially forges the candidate’s signature. It has this mechanical arm that would write Bill Clinton in Sharpie on letters. You sit at it and operate it with a foot pedal. 

The great thing about politics is it's a young person's game. If you're young and you're motivated and you hook up with the right mentors, then you can rise pretty fast. So, I started with the autopen and then I moved into field operations. I started at the headquarters in Little Rock and was doing get out the vote and worked for the campaign in Michigan for the last month before the election. 

Then I went back to Little Rock to work on the transition team on economic policy as a staff assistant. I was a junior person doing proofreading and copy editing, and I was right there in the heart of everything. Clinton and Gore had an economic summit in December 1992 in Little Rock where they brought down all these leading business leaders and economists to talk about what to do about the economy, and I helped put the content for that together. I began to understand suddenly this landscape of power and business that I didn't know, and who was who and what was what. 

 

Robin Lindley: And then you moved on to work at the White House.

Professor Margaret O’Mara: I ended up working in the West Wing of the White House on economic policy. And then I moved to Health and Human Services as a policy aide. 

You realize as a young person too that there's a tradeoff between high glamour and substance in these political appointments. The high glamour is definitely the White House, right? So you're kicking around the West Wing and you're going to Rose Garden ceremonies and you get the cool badge that you show when you walk in every day. It's pretty trippy.

But you're rarely doing anything substantive. You're answering the phone and running memos from one place to another. I really wanted to do something with more substance, so I went to the agencies. You're going to these giant concrete block buildings that are so unglamorous, right? But then you go and you learn about public policy and you learn how these programs work and you can learn the operations and what it really means to be in the executive branch and to execute the laws. 

The glamour quotient goes down significantly, but the substance goes up. And I was fortunate that I was working for someone who was an extraordinary mentor and boss who's still a good friend of mine who was directing intergovernmental affairs at HHS. That’s one of the most important jobs at that agency because it deals with states and localities, and the programs run by HHS at the time were all state-federal cooperative programs such as Medicare, Medicaid, and then AFDC which was turned into something else. You’re with the states and the states are the ones implementing the programs. So that was an incredible education in how policymaking works.

Then I went back to the White House and worked for Al Gore, but not on tech policy.

 

Robin Lindley: What were you doing for Vice President Gore?

Professor Margaret O’Mara: It was urban-focused economic policy. I worked on the Empowerment Zone program, which was this program to recapitalize urban neighborhoods that had been redlined. That was a centerpiece of Clinton's urban policy, and it was a program given to Gore for his portfolio. It was really interesting because it involved a whole different set of domestic programs and agencies. 

There was all kinds of targeting of communities for Empowerment Zones. There was one in Harlem. Local coalitions would apply for and get these zone [designations], and then get access to special benefits such as tax breaks and incentives and programmatic support from a whole host of different agencies. It was supposed to both provide more social capital to the local organizations on the ground who were trying to rebuild the social infrastructure of these communities and also create incentives for private sector capital to invest in real estate development and infrastructure development and all sorts of other things. 

 

Robin Lindley: That was a very important program, especially for inner cities.

Professor Margaret O’Mara: The verdict on how well that worked is still out. Historians are turning their attention to these programs and finding rather problematic and mixed results. Timothy Weaver’s Blazing the Neoliberal Trail is one example. We're still trying to figure out how to thread that needle of uneven capital investment and if capital investment in a poor area also means gentrification and displacement. So that again was another education. 

I think cumulatively the experience gave me an appreciation of not only how politics works, but also how power works, and an appreciation for the essential humanity of people who are in very powerful positions, who are simply human beings trying to figure things out and sometimes they make good decisions and sometimes they make wrongheaded decisions. Generally speaking, presidents and political leaders are trying to do the best they can in terms of implementing agendas that they think are important. There have been exceptions, but [this experience] continues to shape the way that I write about history and the way I teach history. 

I worked with Gore for a couple of years and I decided during that time that I didn't want to work in Washington or work in the hurly-burly of political life for my career. I loved writing and research and I wanted time to reflect on how these policies got to be the way they are and how the political landscape grew. 

 

Robin Lindley: And then you went to graduate school in history at the University of Pennsylvania.

Professor Margaret O’Mara: Yes. The thing about Washington DC is it's all reactive. By necessity, you're just ricocheting from one thing to another. The wonderful thing about the scholarly world is you have an opportunity to be reflective and proactive, so you can sit back, you can read lots of books, you can think about how these pieces fit together. And then you can produce scholarship, right? You're not reacting to the news of the day. You’re thinking in a more measured and long-term way. That's how I made the rather strange decision to transition from politics to grad school. 

 

Robin Lindley: It seems that urban history was your primary focus in grad school. Your doctoral dissertation was award-winning and published as a book, Cities of Knowledge. 

Professor Margaret O’Mara: Urban history was my interest, and, at first, high tech was not at all on my radar screen. I'd come from working chiefly on programs that served poor people and were seeking to address poverty.  I came to grad school assuming that I was going to continue my work on that. I went to work with the late Michael B. Katz who was an extraordinary scholar of poverty and social inequality. 

When I embarked on the dissertation project, I knew I wanted to look at the American economy of the 1950s and suburbanization and poverty, as well as look at the world and economic geography of the US before the War on Poverty. Then, I started thinking about the role of federal economic policy. What was federal economic development public policy during this time? There were certainly things like the Area Redevelopment Act and efforts targeted toward poorer parts of the country that were designed to remedy their economic situation. But really the Big Kahuna was not an economic development policy at all. It was the military industrial complex. Then I knew what I wanted to do. 

 

Robin Lindley: And universities were at the center of your research for your dissertation. 

Professor Margaret O’Mara: The great lesson I learned early on was don't ever have too many preconceived ideas about what your dissertation is going to be about and what it's going to discover and what it's going to conclude. It's very tempting to say, I'm going to show that X happens. 

I learned from that first project that the questions I was asking were not all the questions I needed to ask, and that the archives told me things and led me in places I hadn't expected. So, it became a book about universities as economic engines. It became a book about the transformation of American higher education. It became a book that was about the West Coast of the United States, a part of the country that I had not lived in, and I had not really spent much time in before I started writing about it. And it became about the origins of the technology industry, and I was not a historian of technology. I was a political historian. I was someone who was interested in policy.

 

Robin Lindley: How do you prefer to be seen as a historian now? You have a background in urban history, political history, presidential history, and now tech history.

Professor Margaret O’Mara: I'm a historian of modern America. I'm interested in how the private and public sectors interact across time and space. I'm a political and economic historian and, in doing that, a historian of cities as sites of particular forms of economic production. I think they're all intertwined. I find my home in sub-disciplines. 

The different playgrounds I play in are political history, urban history and technology history, although I should be quite clear that I'm not a historian of technology in the way of historians trained in history of science and technology who have a much deeper, more granular sense of the technological dynamics and the science itself.  I'm a science, technology and society person, broadly defined. 

I'm going to continue to resist being just one thing. For a while I felt I needed to choose a lane. I wrote a book about presidents and I followed up with a book about high tech and it seemed like they were disparate subjects, but really they're all tightly connected. 

 

Robin Lindley: You certainly provide historical context and illuminate interconnections between politics, culture and economics in The Code.

Professor Margaret O’Mara: I wrote The Code the way I did to show that when we interlace political history, social history, business history and technology history, new insights emerge about each of those domains because we understand the relationship of each with the others. To look particularly at the phenomenon of the modern American technology industry and Silicon Valley as a place and an industry without considering the broader political and cultural currents is too limiting. 

To make the tech industry a sidebar in the world of 2019 seems absurd. It’s central. And the way it got to be so central was because it has been intertwined all along. It's never been separated. It's never been a sidebar. It's never been a bunch of wacky guys out in California doing their thing to be different. They weren't that different. They were different in distinctive ways, but their differences were constructed by and enabled by American culture, the broader currents in American culture. I think people who are students of cultural history, intellectual history, social history, political history, and urban history can all gain from this understanding of the history of the technology industry and of Silicon Valley in particular. 

 

Robin Lindley: How did The Code evolve from your initial plan? Did you envision this comprehensive history of Silicon Valley or did you have something else in mind?

Professor Margaret O’Mara: I went into this book because, ever since I wrote Cities of Knowledge, I was asked what's Silicon Valley's magic formula? How did it come to be? Only one part of that book was about Silicon Valley, and that narrative ends around 1970. I set out to answer those two questions. Initially I was going to focus on the seventies through the dot-com boom. And then I was thinking very much in terms of writing a political history of that era and the role of politics and policy in the growth of the tech industry. And as soon as I set out, I realized I was going to have to first go further back in time for the story to make sense.

There were a lot of things that I had reflected on since the publication of Cities of Knowledge that had broadened and deepened my analysis of the origins of the Valley, and I wanted to bring that in. And you can't really start in 1970 without explaining how all these players got there. And, as I kept on going, I was encouraged to push it to our present day because one of the things that has happened, and I think that the book really makes clear, is how the scale and the scope and the speed of tech went into hyperspace after 2000. After the dot-com bust, you see the growth of new companies and new industries that are of a different order of magnitude. Yet the culture has some of the same persistent patterns. 

What I realized when I was finishing the book and about to send off the manuscript in October 2018 was that this was an explicit explanation of how we got to now with big tech and how we now have these big five companies: Apple, Amazon, Facebook, Google and Microsoft. And this book was not only for people inside the technology industry, and not only scholars, but it was for everyone who uses these technologies and these platforms, which are pretty inescapable. 

It's very hard to navigate life in modern America without in some way using one of the products of the big five. Even if you choose to turn everything off, this stuff is touching you whether you know it or not. And it may not be obvious to the reader but, from the very first page, when I start the narrative in the 1940s, I wanted to make sure that there were continuing threads of ideas and processes that take us all the way to the present. Take the idea about connecting people and making the world more open and connected, a mantra repeatedly invoked by Mark Zuckerberg of Facebook. It has its origins deep in the past. I also wanted to show where a particularly important element of the tech story, the practice of high-tech venture capital, began and how the venture capital industry shaped what was possible in tech—including who got to be a technologist. 

 

Robin Lindley: Thanks for explaining that process. A major theme in your book is how Silicon Valley grew because of a flood of government money and other public support such as tax breaks, favorable trade deals, etc. You offer a counterpoint to a popular perception that individual entrepreneurs such as Steve Jobs alone created the flourishing tech industry. 

Professor Margaret O’Mara: Here is where you've had the presence of government and politics and policy all along. It’s never gone away, and not just with the Defense Department and NASA, but with other matters. The government nudged the tax code favorably in the tech industry’s direction. 

There is a reason that this industry rose so high and for so long. It was treated politically as a golden child. Every city wanted a high-tech employer. And every lawmaker in Washington thought these companies, until recently, were the prime example of great American companies that they held up and celebrated. 

And now, that mood has shifted dramatically. So, it's been so interesting. When I started this book, everyone was still pretty rah-rah on tech. It was still the golden years of the Obama era, when Obama was doing town halls at Facebook and all seemed so great and so hopeful. And now it's so dark. 

For every scholar, if you're doing your job, you are a gentle critic. If you're deconstructing myths that people like to tell about themselves, you're speaking truth to power to some degree. Now I sometimes find myself saying, slow down a minute, and let's think about this. We're using these devices and there have been extraordinary technological advances that have made [some situations] better for humankind. At the same time, they also have brought these other very serious consequences. Let's take a more measured and historical view of it. 

 

Robin Lindley: I appreciate that you planned to write for a general audience. Frankly, I was somewhat intimidated by this big book on tech. Thank you for making this history so lively and engaging. 

Professor Margaret O’Mara: Thanks. I went into this project knowing I wanted to write a trade book, not an academic book. I wanted it to be for a general audience because I felt that there was a need for a comprehensive history of Silicon Valley that connected the deeper past to the present. 

This was the book that I wished existed in 1999 when I first embarked on my dissertation research and moved out to California and felt like the blind man and the elephant. I was getting little pieces of this history, but I couldn’t put it all together. I didn’t quite understand how all these things were connected. And after 20 years, I decided to write it myself, and I think there's an important need for it now. 

I like to think I was writing a book about technology for people like me, meaning people who are not technologists, and for people like the me of 1999: non-technologists interested in history and policy, interested in social history, interested more broadly in the past. People who are technology users but don't really understand how it works. 

To help readers, I wanted to write something that was neither cheerleading ("aren't these guys great?") nor condemnation ("isn't it terrible--burn it all down"). I hope that I got the tone right. It's a work of history and, as historians, we aren't supposed to be writing jeremiads. Our job is to do the best we can to build an archive and write from that. That's really what I did. 

 

Robin Lindley: Speaking of building an archive, what was your research process? I imagine that you had to start from scratch just to find many materials. Did you find archives on this relatively recent history of technology?

Professor Margaret O’Mara: I had to build my own archive. One of the challenges is that it's such recent history and another challenge is that companies that are busy building the future aren't really big on archives. They don't get that they should be saving stuff. And when you get to a big company that actually has resourced it out and has an archive, in many cases, they are closed to the public or they are extremely restricted in what they show people and what you can use. So there's limited utility there.  

At the same time, I was fortunate in that people around tech funded and participated in a number of really robust oral history projects. There is one on venture capitalists at the University of California, Berkeley, that was funded by venture capitalists. It has a lot of interviews and oral histories with VCs performed by a trained oral historian, which I am not. I was so grateful for that archive. The Computer History Museum in Mountain View, California, has an extensive and growing collection of oral histories. And the professional organization, the IEEE, has a lot of oral histories that they have both recorded and transcribed. Many of these are available digitally. So those [archives] are incredible resources for anyone doing this. 

I will say that a lot of the questions being asked in the oral histories were, rightfully so, about the technology itself and the development of the technology. I was really interested in understanding more about the social conditions. Hey, what was it like for a woman in tech? What was it like living in Palo Alto in 1965? Tell me about what you were doing after hours.

I wanted to know about the things that were not often as visible, like business operations and organization, for which the venture capitalist oral history project is very useful. Then you're talking about financing from banks. And that was very helpful. 

 

Robin Lindley: And you interviewed dozens of people as part of your research.

Professor Margaret O’Mara: The interviews helped me better understand the network itself. Like who's friends with whom?  I would interview someone, they would add, Oh, you should talk to my friend so-and-so. And I'm like, how do you guys know each other? Oh, we've known each other since 1972, and this is how you reach him or her. I was also interested in talking with people whose voices had not been represented in the archive. 

In these conversations, I also asked about politics. There was almost nothing I could find about lobbying trips that electronics executives took to Washington and who they met with, so I talked with former politicians and with people involved in the lobbying alongside the executives. By and large, the CEOs themselves would go and lobby, and that was part of their power. A group of high-tech CEOs went to DC in the early eighties and lobbied for changes in trade policy because they were getting slaughtered by Japan in the chip market. 

They were doing personal, one-on-one lobbying. So very interesting. For that, I relied a lot on newspaper and magazine reporting. I think I've read every issue of Business Week between 1978 and 1982. I'm overstating it, but I did call up back issues from the library and they were not digitized, so I had a giant stack of volumes. It was actually quite useful to sort through pages of the magazines and see what's proximate to what, and then what’s on the front page. 

 

Robin Lindley: You probably saw the faces of rising tech luminaries on many older magazine covers.

Professor Margaret O’Mara: That’s right. So you can see how it was growing and who was reporting on it and why and when. 

Then I had tons of books from the period, like journalism books about economic competition and technology. I was spending a lot of time on sites like Powell’s and Amazon just finding used books that you could buy for a penny because that was the only place I could find them. There also were some obscure journalistic books about the trade war with Japan and stuff like that. I have boxes full of books, actually, that I’ll give to a foundation for other historians. 

 

Robin Lindley: The Code is sure to become a major reference for other historians who research technology and economics.

Professor Margaret O’Mara: I really wanted to put a trail of breadcrumbs in this book for future historians to pick up on because there's a lot more to be written. 

 

Robin Lindley: Why did technology flourish in Silicon Valley? I understand from The Code and Cities of Knowledge that Stanford was a hub for this development, particularly because of its innovative engineering school. What else attracted people to the area initially in the post Second World War era? 

Professor Margaret O’Mara: They come out for jobs in electronics. So first, you have this agricultural valley. Stanford is there and it's pretty good, but it wants to be really good. Fred Terman, who is a student of Vannevar Bush, comes back home to the Stanford faculty. He says, all this federal money is going to flood into science research, and we need to be ready for it. He completely reworks the curriculum and builds up physics and engineering and builds up these big labs. He gets these big federal contracts and is also at the same time courting industry. 

And there was great weather, lots of open space, as well as ongoing aeronautics and military projects in the vicinity. And also, as I wrote about in Cities of Knowledge, the Defense Department was incentivizing these big defense contractors to decentralize and not have all their operations in one place so, if a Soviet bomb came, it wouldn't wipe out the whole joint. That’s why Lockheed moved its missile and space division to Sunnyvale. 

There were these twin magnets in the Valley. You had Stanford, which was on the make and working really hard to bring in federal money and upgrade its place in the hierarchy of universities. And you had Lockheed, which was hiring thousands of electrical engineers to work on missile and space projects. 

And from the get-go, the nascent tech industry was already there before the war, specializing in oscillators and communications technologies like radar and microwave radio. Those are the building blocks of the modern computer revolution, with miniaturization of electronics, right? Once the transistor was invented—not in Silicon Valley but at Bell Labs in 1947—the capacity for electronics to get smaller and smaller and more powerful starts amping up.

Then you have communications technology. First there was time sharing and then there was the internet. 

At this time, Seattle was building airplanes and Boston was the hub of computing. There was no computing industry in the Valley for a long time. It was all East Coast. But once you had these twin magnets in this agricultural Valley, you start seeing East Coast electronics companies opening labs and satellite facilities in and around there to take advantage of all these smart young men coming out of Stanford or the offshoots of Lockheed. 

By the end of the fifties, the Valley was not Silicon Valley yet, but it was known as a hub of small electronics. If you wanted to look for electrical engineers, you needed to go to California. There was this new symbiosis. The young men came out there when it was still remote. It wasn’t that close to San Francisco. There was nothing going on there. Just two bars, and it was just boring. By and large, they were not people with connections or rich fathers or guys with Ivy League degrees. They weren’t going to get a job at a Fortune 500 company or work their way up in their father's law firm or bank; otherwise they wouldn't have gone all the way out to California. The guys who came out were middle-class boys. Many of them were scholarship students and smart engineers who didn't have family connections and didn't have personal wealth, even though many of them became very wealthy later. 

And they didn't come into the game with money, but they were lucky. They were privileged. They were white, they were male, they were native born. They were middle or lower middle class but they were college educated. So that set them apart. And they were coming out when all of the winds were blowing in their direction. 

If you were a smart MIT- or Stanford-trained engineer in the fifties, the world was your oyster. The Cold War was creating this huge demand for people just like them. They've got their pick of where to work. They worked in companies like Sylvania or Litton or other companies that no one remembers anymore. 

Some went to Lockheed, which was the biggest employer in the Valley through the 1980s. This is not really recognized, partly because almost everything they did was top secret and no one could talk about it. They couldn’t write magazine cover stories about top secret missile research, so there wasn't buzz about it like there was on the commercial side.

So that's the beginning. That's how all these people started.     

 

Robin Lindley: You vividly bring to life the daily activities of the workforce of mostly white male tech experts. You also mention some outstanding women in tech. What would you like readers to know about the role of women in Silicon Valley? 

Professor Margaret O’Mara: That there have been women there all along. The early Valley was a manufacturing region, filled with microchip fabrication plants and the rest, and that workforce was heavily feminized as well as being disproportionately Asian-American and Latina. Women who started their careers picking and canning fruit when the Valley was mostly an agricultural region then shifted over into electronics production as the industry grew. (The fiercely anti-union stance of the tech companies, however, meant that these jobs were not unionized, nor did workers often share in the benefits given to white-collar workers, like stock options.)

The early days of computer programming involved a heavily female workforce, in good measure because coding was seen as something rote, simple, unskilled. All the art—and all the money—was in the hardware. Even as a software business started to bloom in the 1970s and early 80s, however, there remained a good number of technical women in the industry, simply because the pool of trained people was smaller and a growing industry was desperate for programmers. 

I should also add that another critically important group of women in the Valley were the wives of the male engineers and executives who pulled long hours at semiconductor firms and other companies. The work hard, play hard atmosphere of the industry was made possible by the fact that most of these men had wives at home who were keeping everything else running, caring for children and household, so that the husbands could throw themselves into their work. In short, women have always been part of the tech story. They just haven’t gotten much of the glory.

 

Robin Lindley: And Silicon Valley eventually eclipsed the traditional research hub in Boston. Was that because of the very different cultures?

Professor Margaret O’Mara: They had different cultures, but something I came to appreciate in the process of writing this book was the symbiotic relationship between Boston and the Bay Area. It’s similar in some ways to the symbiotic relationship now between Seattle and the Bay Area where these two [tech centers] now are. They are competitors, but they share people who ping back and forth and money that goes back and forth. And the same thing with Boston and the Bay Area. You see people going from MIT to Stanford and back to MIT. 

Stanford and the Bay Area had the weather advantage, so people tended to move West and not go back East. But the capital was still East Coast centered until the eighties, when you had tech venture capital, the money guys, out West. And there was a lot of venture capital investing in high tech. These guys were all over. Some were investing in Chicago, on the East Coast, in the Midwest. 

The decisive move West was not really until the late eighties with the death of the minicomputer industry and the swift decline of Digital and Wang, which were two big players in Boston. At the same time, the end of the Cold War shook the Boston economy more. It was more defense dependent than the Bay Area by then. Both areas were shaken by the end of the Cold War, but Boston didn't recover, and it didn't have a second act after minicomputers. It didn't have high tech venture capitalists or entrepreneurs that were then going on to found other companies. It didn't have that multigenerational dimension. It just had one big act, and that was it, although there's still plenty going on there now in biotech. 

So Boston's still very much an important tech hub, but it's not what you have in the Valley. I think that's where the culture comes in. What develops in the Valley develops partially in isolation. I call it an “entrepreneurial Galapagos” because of the isolation. You have these strange species such as law firms specializing in high-tech startups, like Wilson Sonsini, that are figuring out how you structure a corporation founded by a couple of 22-year-olds who have no experience [as in the case of Apple]. You have high-tech venture capitalists that are not just providing money, but are providing very hands-on mentorship and executive direction to these companies. And in fact, they staff them up. Basically, the VCs swoop in and bring in the rest of the executive team and bring the adult supervision. They connect these new companies into the network, and that becomes this multigenerational thing.

And then you have the fact that the Valley has been specializing from day one in small electronics and communications devices. At the beginning of the commercial internet, it's perfectly poised to be the dominant place in that space even though the internet was not invented in the Valley, but was a Department of Defense creation. But the Valley researchers and technologists were at the forefront of miniaturization of digital technology and digital communication and software and hardware that enabled communication since the very beginning. 

 

Robin Lindley: Speaking of the internet, we know that Al Gore didn’t invent the internet, but wasn’t he largely responsible for bringing this technology from the military and academia to consumers?

Professor Margaret O’Mara: Al Gore was one of the few politicians in Washington in the 1980s and 1990s who really took the time to learn about and understand the industry and where it was going. Newt Gingrich was another. And Gore’s great contribution was pushing forward the commercialization of the Internet in the early 1990s, opening it up to all kinds of users and allowing it to become a place of buying and selling. 

The Internet had been around for over 20 years by then, but it was a noncommercial space, restricted for most of its existence to academics and defense-sector government employees. As a senator, Gore sponsored legislation that gave the Internet backbone the computing power it needed to scale up into a commercial network and supported the opening of the Internet to commercial enterprises. As Vice President, he led the push to write the rules of the road for the network, which resulted in the protocols and standards that govern its use today as well as in Internet companies being quite loosely regulated. This allowed the dot-com boom and the social media and search platforms that followed, but, as we now see, it had consequences that few could have anticipated in the early 1990s. 

 

Robin Lindley: You write that Seattle and Silicon Valley are part of the same whole. How do you see that relationship?

Professor Margaret O’Mara: I talk a lot in the book about Amazon and Microsoft and the evolution of those companies because they're very important now. One reason I do that is you see how, from the very beginning, both companies had very close ties to the Bay Area and that every element of the Seattle innovation ecosystem has connections here that are really important. 

You have very early venture capital money from the Valley that capitalized Microsoft. You have the same for Amazon. And then it goes the other way. Jeff Bezos personally invested in Google at a very early stage. And you have this crisscrossing of people and capital and expertise that’s just a two-hour flight away. 

My theory is that one reason the venture capital community hasn't grown as big in Seattle as one would expect is partly because it's easy to fly down and raise money. And now Seattle is getting some benefit from the overcrowding and saturation of the Bay Area because it’s harder and harder to live there. So people are coming up to Seattle. We'll see what happens. 

 

Robin Lindley: That’s an illustration of the importance of the free movement of people in America, as you stress. You also discuss how immigration shaped the tech industry, especially after the 1965 Immigration and Nationality Act. How was immigration important to the development of Silicon Valley?

Professor Margaret O’Mara: Critically important. In the book, I highlight the 1965 Hart-Celler Act, the immigration reform that ended more than 40 years of racially restrictive quotas on foreign immigration and made possible whole new streams of immigration from Asia, Latin America, and the rest of the world. Many of them, particularly immigrants from East and South Asia, came to Silicon Valley.

Even before that reform, immigrants and refugees were critical parts of the tech story. Take Andy Grove, legendary CEO of Intel, who came here as a 20-year-old refugee from Hungary, speaking little English and undoubtedly doing little to impress the immigration officials processing his entry paperwork when he arrived in 1956. Or Yahoo founder Jerry Yang, the California-raised son of a Taiwanese single mom. Or Sergey Brin, son of refugees from Soviet Russia. The list goes on and on.

 

Robin Lindley: What are your thoughts on regulation or other measures to address big tech as concerns deepen about monopolization, disinformation, privacy, and other issues?

Professor Margaret O’Mara: It’s up to lawmakers to decide the best path going forward, but this history is critical to helping them make informed decisions about how to do so. And American history, more broadly, provides instructive insight into understanding this moment.  

Over a century ago, Washington DC and the states were beset by similar debates about how to rein in the power of giant corporations and their billionaire CEOs. Then the industries in question were railroads, oil, and steel. Now it's social media and search and e-commerce and cloud computing. But the basic questions of fairness, competition, and finding the right balance between capitalist enterprise and government guardrails remain.  

 

Robin Lindley: I wanted to close with your perspective as a historian. You have said that history makes you an optimist. That may be an unusual posture for a historian in view of the innumerable accounts of disaster, war, and injustice that you study. 

Professor Margaret O’Mara: I think history makes you a realist and it can make you an optimist. And a very important thing for historians who teach history and who care about history is that we need to interrogate and deconstruct narratives that don't actually align with historical truth. And we must discuss times when we didn’t live up to our ideals, and people who have long been marginalized, and disempowered voices, and the privileging of some voices over others. That’s what we call being a realist. We must be real. 

Particularly now, in thinking about American democracy and global democracy, you need to have realism, but you also have to help the people who are reading your history or listening to you in class understand where they can find grounds for optimism as well as a realistic sense of the past.

 Facts can be empowering. Knowledge is power. We can use that power to think about and give tangible examples of how people spoke truth to power. There are examples of collective mobilization or individual actions that have had significant societal consequences. 

There are examples in American history of dark, violent, horrible, horrible moments in our past, and so many times in which America did not live up to the ideals it purports to stand for. And yet these are ideals that were laid out in the first place that we are asked to aspire to. There are examples of particular people who have been excluded by the way that these ideals have been executed in practice, and they fought against that exclusion and for having a voice and their rights.

I go back to the fact that the descendant of slaves was our last First Lady. And that tells us there's some progress, right? And here I am, a senior tenured female history professor at the University of Washington. You go back to the era of my great grandmother, and I would not even have been given a job. And, if I had been given a job, I certainly wouldn't have been given tenure or job security or the authority to speak in the way I now can with this platform. And I feel that's an incredible privilege I have.

So what can I do to use that in a way that lifts up as many other people as possible and inspires people to change the world that they see and make it a truly better place? I have spent a lot of time writing about people who yammered about making the world a better place. I think they believed that. There’s a desire that lies within the human heart to make the world a better place. I ask how society can be arranged in a way that is as fair and as just as possible to advance that desire and to allow that human potential to be realized.

 

Robin Lindley: Thank you for your thoughtful and inspiring remarks Professor O’Mara and congratulations on your groundbreaking new book on Silicon Valley, The Code.

The History of Black Incarceration Is Longer Than You May Think

 

The United States contains less than 5 percent of the world’s population but incarcerates one-quarter of all prisoners across the globe. Statistics have long shown that persons of color make up a disproportionate share of the U.S. inmate population. African Americans are five times more likely than whites to serve time in prison. For drug offenses alone, they are imprisoned at rates ten times higher. 

 

Recent scholarship has explored the roots of modern mass incarceration. Launched in the 1980s, the war on drugs and the emergence of private, for-profit prison systems led to the imprisonment of many minorities. Other scholarship has shown that the modern mass incarceration of black Americans was preceded by a nineteenth-century surge in black imprisonment during the Reconstruction era. With the abolition of slavery in 1865, southern whites used the legal system and the carceral state to impose racial, social, and economic control over the newly liberated black population. The consequences were stark. In Louisiana, for example, two-thirds of the inmates in the state penitentiary in 1860 were white; just eight years later, two-thirds were black.

 

The incarceration of African Americans did not begin suddenly with the end of the Civil War, however. Confinement functioned as a punishment during bondage as well. Masters were the law on their own plantations and routinely administered their own brand of justice. Although they usually relied on the whip, countless enslavers also chained their human property in plantation dungeons below the main dwelling house or in a barn. Some locked enslaved persons in a hot box under the scorching southern sun. The more formal legal system, too, sometimes deposited enslaved individuals in state or local incarceration facilities. 

 

Charlotte, an enslaved woman from northern Virginia, experienced several of these institutions firsthand over a seventeen-year period. Using court records to trace her life illustrates the many official, lawful forms of imprisonment that the enslaved might encounter in the antebellum era.

 

In 1840, Charlotte was held in bondage in Clarke County, Virginia, west of Washington, D.C. She was only sixteen or eighteen years old, a dark-skinned, diminutive young woman, standing just four feet eleven inches tall. Legally, she was the property of Eliza Pine, a white woman whom Charlotte despised. Reportedly thinking that committing a crime would prompt Pine to sell her, on March 10, Charlotte set fire to a house in the town of Berryville. She was arrested for starting the blaze and placed in the local jail as she awaited trial.

 

Enslaved people were imprisoned briefly in local public jails or workhouses under a variety of circumstances. Masters sometimes made use of such facilities to punish bondpeople deemed troublesome or, if needed, to store them securely. Enslaved individuals apprehended as runaways or awaiting trial or sale at auction also saw the inside of city or county jail cells. In all of these instances, the enslaved usually measured their terms of incarceration in just days or weeks.

 

Even that was too long for most slave owners. Local jails were notoriously overcrowded, damp, and disease-ridden. The deplorable conditions inside endangered inmates’ health and imperiled their lives. Consequently, most masters preferred to keep their valuable human property out of jail.

 

Charlotte was taken out of her cell for trial on Monday, March 23, 1840. Although she pleaded not guilty, the five Clarke County justices who heard her case convicted her of arson – a capital crime – and sentenced her to hang. They scheduled Charlotte’s date with the gallows for Friday, June 26, between the hours of 10 a.m. and 2 p.m. They valued her at $500, which represented the amount her owner would receive from the commonwealth of Virginia as compensation for the loss of the valuable young bondwoman. As customary, after the trial, authorities escorted Charlotte back to her cell in the Clarke County jail. There she would bide her remaining days until her planned execution.

 

Meanwhile, whites in Clarke County labored to prevent Charlotte’s impending doom. The five justices who had convicted her, in fact, recommended at the time of the verdict that Virginia governor David Campbell commute Charlotte’s punishment to sale and transportation outside the limits of the United States – a lawful alternative to hanging – due to her “Youth and evident Simplicity.” They sent the governor a separate petition as well, also signed by the prosecuting attorney at Charlotte’s trial. Two other petitions from dozens of citizens of Berryville and the surrounding area likewise reached the Virginia governor. Citing Charlotte’s youthful age and purported deficiency in intellect, they begged for executive mercy on her behalf.

 

Newly inaugurated Virginia governor Thomas Walker Gilmer viewed Charlotte’s case sympathetically and issued the desired reprieve. Since Charlotte would now be sent outside of the United States, authorities transferred her to the Virginia State Penitentiary in Richmond, where she and other enslaved convicts awaited purchase by a slave trader willing to carry them out of the country for sale. She was admitted on April 15. In her new prison world, Charlotte listened for her name at roll call each morning, wore prison garb, swept her cell daily, ate carefully doled out rations, and labored for the commonwealth, all the while struggling to avoid punishment and disease.

 

Enslaved people like Charlotte rarely saw the inside of a penitentiary in the pre-Civil War South. Maryland sentenced bondpeople to the penitentiary from its opening in 1812 until 1819, taking in some sixty slaves during those years. Arkansas permitted the imprisonment of enslaved convicts in the state penitentiary for certain specified crimes, and only briefly, before changing the law in 1858. After 1819, only the state of Louisiana habitually punished enslaved criminals with prolonged sentences in the penitentiary, usually for life. Virginia courts did not sentence enslaved people directly to confinement in the penitentiary, although the commonwealth did house, on a temporary basis, individuals such as Charlotte while the process of sale and transportation outside of the United States unfolded. Virginia bondpeople typically spent only months to a year or two in the penitentiary before being purchased by a slave trader.

 

Charlotte remained in the Virginia State Penitentiary for five months before she was bought by a slave trader willing to carry her out of the country. On September 16, Rudolph Littlejohn, an agent for Washington, D.C., slave dealer William H. Williams, took delivery of her and twenty-six other enslaved captives. Altogether, Williams and a partner paid the commonwealth $12,500 for the lot.

 

Charlotte and the other enslaved transports first made their way to Williams’ private slave jail in Washington, D.C., known as the Yellow House. This was the same establishment, just south of the National Mall and within easy sight of the U.S. Capitol, where the kidnapped free black man Solomon Northup would find himself enchained in a basement dungeon the following year. Williams and his agents purchased enslaved people from throughout the Chesapeake and stored them in the Yellow House until they had assembled and prepared a full shipment for sale in the Deep South, where enslaved people were in high demand and attracted high prices. New Orleans was the usual port of destination.

 

After less than a month, William H. Williams had gathered enough enslaved men and women to fill a ship. His slaving voyage set sail from Alexandria aboard the brig Uncas on October 10, with sixty-eight total captives on board, including the enslaved convicts purchased in Richmond. On November 1, Williams and his human cargo arrived in New Orleans.

 

Authorities in New Orleans had been warned, however, that Williams might appear in their city with enslaved convicts in tow, a violation of a Louisiana state law passed in 1817 that prohibited the introduction of enslaved criminals. When officials spotted Williams, they confirmed the criminal pasts of the transports, some of whom had been convicted of violent offenses against whites. Concerned for the safety of Louisiana’s citizens, the New Orleans Day Police confiscated the convict bondpeople and carried them to the Watch House at city hall for safekeeping. Charlotte and the other convicts entered yet another jail.

 

Williams protested that he was merely passing through Louisiana en route to Texas, in 1840 a foreign country eligible to receive enslaved convicts. As Williams launched his defense in the Louisiana court system, Charlotte and the other transports were transferred from the Watch House to the recently completed Orleans Parish prison. Several of the convicted bondpeople from Virginia remained there for years as Williams pursued his case; others ended up in various other incarceration facilities within Louisiana. Litigation continued, and years elapsed before the Louisiana Supreme Court ultimately ruled against the slave trader.

 

Soon thereafter, on March 13, 1845, Charlotte and nine of her fellow transports were transferred to the Louisiana State Penitentiary in Baton Rouge. Listed as “forfeited to the state,” their new master was the state of Louisiana. Some two hundred enslaved people were held in the Louisiana State Penitentiary in the antebellum decades. While enslaved male inmates toiled in the brickyard or cotton factory for the penitentiary lessees, Charlotte and the other female convicts did the washing and mending in the prison laundry. Prisoners at the penitentiary donned the convict’s uniform, which included an iron ring around the leg, linked by an iron chain to a belt around the waist. The penitentiary itself consisted of a three-story brick structure. Prison guards deposited inmates in cramped, individual cells, three and one-half feet wide and seven feet deep, secured by an iron door, poorly ventilated, and unheated in the winter. Prisoners slept on mattresses placed on the floor and, at mealtime, ate mush and molasses from a tin plate in their cell, in the dark and alone. Overcrowding at the institution eventually forced inmates to share the space with another prisoner, although new accommodations were built for female convicts in 1856.

 

Segregation by sex or race was never perfect during the antebellum decades. Imprisoned bondwomen routinely bore offspring more than nine months after they entered the penitentiary. Charlotte gave birth while in prison to three children – John, Mary Ann, and Harriet – before January 1855. The identity and race of the father or fathers are unknown, the circumstances surrounding conception uncertain. With both black and white men among the prison population, enslaved women may have willingly participated, in spite of vigilant officials, in loving relationships or clandestine affairs with fellow prisoners. At least as likely, female convicts proved captive, convenient, and vulnerable targets for the unwanted advances of inmates, coercive white guards, or other penitentiary authorities who wielded power over them. The prospect of rape was ever-present. At the same time, it is possible that the relatively few enslaved women in the Louisiana State Penitentiary were able to leverage their sexuality to extract various favors from those in charge or from inmates able to smuggle in goods from the outside. Given the range of possible encounters, Charlotte’s son and daughters may have been the products of consensual acts, forced sex, coercion, or some combination thereof.

 

A Louisiana law of 1848, unique among the slaveholding states, declared that children born to enslaved female prisoners confined in the penitentiary belonged to the state. An act of 1829 forbade the sale of enslaved children under the age of ten away from their mothers, however, so the state was legally obligated to keep them together until the child’s tenth birthday. At that time, the state could seize the youngster as state property and auction him or her off to the highest bidder. The proceeds of such sales went to the free school fund, to finance the education of Louisiana’s white schoolchildren.

 

Charlotte and her children met a different fate. Slave trader William H. Williams spent years lobbying the Louisiana state legislature for the return of the enslaved convicts confiscated from him in November 1840. Finally, in 1855 and 1856, lawmakers passed a pair of individual acts for his relief. By the terms of these agreements, Williams regained possession of the surviving enslaved transports from Virginia as well as the “issue” – the children – born to the enslaved women of that shipment. On February 7, 1857, the Louisiana governor discharged Charlotte, her son and two daughters, and the other Virginia convicts from the penitentiary and restored them to Williams for sale to new owners. At that point, Charlotte can no longer be tracked in the historical record. Presumably, the only lingering form of imprisonment she suffered prior to emancipation was the institution of slavery itself.

 

Over the course of her lifetime, Charlotte was incarcerated, sequentially, in at least six different facilities: a local Clarke County jail, the Virginia State Penitentiary, the Yellow House slave pen, the New Orleans Watch House, the Orleans Parish prison, and a second state penitentiary, in Louisiana. Although a few bondwomen in Louisiana served prison terms in excess of two decades prior to abolition, Charlotte’s seventeen years in confinement ranked her among the longest-serving felons, black or white, in antebellum U.S. prison history.

 

Individually, Charlotte’s experience was unusual among the enslaved. Masters wanted their human property working profitably, not imprisoned, except perhaps briefly as a punishment that owners themselves determined. Charlotte’s story is nevertheless significant in demonstrating the range of carceral institutions to which black people were subjected even during slavery. With the exception of William H. Williams’ private slave jail, designed specifically to accommodate enslaved captives bound for sale, whites outnumbered blacks detained in all of these facilities in the antebellum decades. But African Americans were nevertheless present in these institutions even during slavery. By the outbreak of the Civil War, the seeds for the later mass incarceration of black people were already planted, the institutional structures already in place, and the precedents for black imprisonment already set. With the end of slavery, prisons were well positioned to transition from a secondary to a primary form of black oppression.

 

A Marvelous Christmas Carol

 

The Christmas season opened before you had even smelled the turkeys from Thanksgiving Day and, thanks to New York’s wonderful new play A Christmas Carol, it is a joyous season to behold.

 

This new A Christmas Carol, based on Charles Dickens’ novel, has a different look to it, a different musical score, a different Scrooge and different ghosts. But it is the same heart-warming story of cheap old Ebenezer Scrooge and how three ghosts appear from nowhere and help him change his miserable life. And, of course, it has Tiny Tim in all of his working-class glory.

 

God Rest Ye Merry Gentlemen, and God thank thee, Charles Dickens.

 

The fun in this new production at the Lyceum Theater, on W. 45th Street, which opened last week, starts before the show begins, when more than a dozen men and women, dressed in mid-nineteenth-century London clothing, scatter through the audience handing out oranges and bags of cookies to whoever they can find. You didn’t get any? Don’t worry. The folks scramble up onto the stage and toss them out to different spots in the crowd (one 30ish woman was throwing perfect strikes to people way up in the balcony; the New York Yankees should sign her as a pitcher).

 

Then, solemnly, the Brits all re-appear with bells and play Christmas carols. Then, at long last, cranky old Ebenezer, the Cratchits and the old gang appear on stage and plunge into this delicious chestnut of a play with glee and joy.

 

The story is simple and just about everybody knows it. Scrooge is an old creep who hates everybody and dismisses Christmas with a big “humbug.” He is mean to his employee, hard-working Bob Cratchit, the father of fragile, crippled young Tiny Tim, who is going to die because mom and dad have no money for doctors.

 

Scrooge is awakened from his sleep that night by his former partner, Marley, dead these long seven years now. Marley tells him that he will be visited by three ghosts representing Christmas past, Christmas present and, ominously, Christmas future. The ghosts, all women in this play, absolutely terrify Scrooge. They take him around London and remind him that in his youth he was a normal human being. He loved a woman, Belle, worked for a witty and warm man, Fezziwig, and had a fine nephew, Fred. Then he began to chase the almighty dollar and gave up his relationships with everybody. Oh, he became very rich. What did he have for all his money? Well, not much. The ghosts point that out.

 

Can he be saved on this long and cold Christmas Eve or will he fly into the arms of Satan well below upon his death? He sees himself dead in the play and shivers.  

 

In ghost round three, Scrooge struggles to keep his senses as he finally sees his life as a real heartless, soulless tragedy, and not a trip to the bank.

 

Playwright Jack Thorne has kept most of the legendary Scrooge story intact, but he has taken out some parts and added others. Example: most productions of A Christmas Carol are heavy on mid-nineteenth-century sets, large casts and plenty of onstage parties. Thorne keeps it simple and streamlines the story. He works hand in hand with gifted director Matthew Warchus to create a handsome new version of the classic play. A Christmas Carol has got to be the most produced play in the world during the holidays. It seems like everybody presents a version, and the different film productions of it made in Hollywood are on television constantly. This new one on Broadway is clearly one of the best, and has all those bags of chocolate chip cookies flying through the air, too (I’ll bet that old Scrooge saw no “humbug” in chocolate chip cookies).

 

If you like history, you’ll enjoy this play about the 1850s era in England, and the movies, too. Dickens wrote it after observing numerous incidents of discrimination and persecution against poor people in London and studying the lives of children who persevered in the numerous workhouses of the day. A Christmas Carol is Christmas for the lower classes, but, really, Christmas for everybody.

 

Much of the success of the play must be attributed to the sterling performance of Campbell Scott as Scrooge. He is happy and sad and joyful and miserable. He has a forlorn movement to his step in much of the play, but a bouncy one at the end. Most importantly, he plays Scrooge as a 60ish man and not the very old grouch seen in most productions of the play. He is young enough to be saved, and young enough to save himself. Scott is just wonderful.

 

Director Warchus, who does a fine job recreating 1850s London, gets other fine performances from Andrea Martin and LaChanze as the ghosts, Dashiell Eaves as Bob Cratchit, Brandon Gill as nephew Fred, Evan Harrington as Fezziwig, Sarah Hunt as Belle, Dan Piering as young Ebenezer and Sebastian Ortiz as lovable little Tiny Tim.

 

So, a Merry Christmas to all of Scrooge’s friends in merry olde London town and a merry Christmas to all in America, too.

 

PRODUCTION: The play is produced by the Old Vic Theater, in London. Sets and Costumes: Rob Howell, Lighting: Hugh Vanstone, Sound:  Simon Baker. The play is directed by Matthew Warchus. It runs through January 2, 2020.

Fake News and the Founders: Get Used to It!

 

“American Nation Debauched by WASHINGTON!” screamed a newspaper headline before charging the Father of Our Country with “the foulest designs against the liberties of a people.” 

President Donald Trump would call it “fake news,” and George Washington most certainly would agree. 

 

After he read Philadelphia’s Aurora in December 1796, President Washington blasted the story as “indecent …devoid of truth and fairness”—and most of America’s Founding Fathers concurred. Indeed, when South Carolina’s Charles Pinckney and Elbridge Gerry, of Massachusetts, proposed to the 1787 Constitutional Convention “that the liberty of the press should be inviolably observed,” Connecticut’s Roger Sherman responded angrily, “It is unnecessary!” Most delegates agreed, rejecting the proposal seven to four. 

 

Four years later, however, the First Congress included free-press guarantees in the first of ten constitutional amendments, collectively called the Bill of Rights. Freed from government constraints, many newspapers used First Amendment rights to uproot government corruption, but others used them as licenses for libel, giving birth to “fake news” in America. 

 

After Washington left office, the press assailed his successor, President John Adams, with fake news that he was “a warmonger,” “insane,” and possibly “a hermaphrodite.” Adams’s successor Thomas Jefferson fared no better, as opposition newspapers tarred him as “an atheist, radical, and libertine” and “son of a half-breed Indian squaw sired by a Virginia mulatto.”

 

Jefferson had championed press freedom until fake news changed his thinking: “Nothing can now be believed which is seen in a newspaper,” he charged. “Truth itself becomes suspicious in that polluted vehicle.” The press had the last word, however, publishing the not-so-fake news of Jefferson’s sexual relationship with Sally Hemings, a slave girl Jefferson inherited. 

 

Fake news did not diminish as the nation matured. Indeed, it became entwined in the nation’s literary fabric. In the run-up to the 1828 presidential election, the Cincinnati Gazette “exposed” candidate Andrew Jackson, the hero of the Battle of New Orleans in the War of 1812, as a “murderer, swindler, adulterer, and traitor…. 

 

General Jackson’s mother was a COMMON PROSTITUTE, brought to this country by the British Soldiers! She afterwards married a MULATTO MAN, with whom she had several children, of which number General JACKSON IS ONE!!!

 

Americans ignored the fake news and elected Jackson President, but Rachel Jackson, the new President’s wife, suffered a heart attack and died before his inauguration.

 

Twenty years later, in 1848, fake news that “Canada’s woods are full of Chinese…ready to make a break for the United States” provoked American ‘49ers to run thousands of Chinese ‘49ers off the gold-laden Sierra Nevada mountains at gunpoint and steal their claims. As fake news of a “yellow menace” intensified, Congress passed the 1882 Chinese Exclusion Act, barring Chinese entry into the United States for the next 60 years. 

 

Fake news about Asians grew more virulent after Japan’s December 1941 attack on Pearl Harbor. Newspapers across America clamored for the expulsion of everyone of Japanese ancestry: “They are not to be trusted.” The Argus in Seattle predicted: “If any Japs are allowed to remain in this country, it might spell the greatest disaster in history.” The Bakersfield Californian concurred, claiming, “We have had enough experiences with Japs.” 

 

President Franklin D. Roosevelt responded with an executive order that sent anyone with at least 1/16th Japanese ancestry into concentration camps without trial or due process: 120,000 people in all, including 17,000 American children.

 

Since its first appearance during the early days of the republic, fake news has come in so many varieties that it is often difficult to identify. Some is born of innocent misreporting or a failure to uncover all facets of a story, but as much or more results from bias in the form of misstatement, misreporting, or misinterpretation. Deliberate placement of a story on the front or an inside page, or the omission of a story altogether, can also reflect bias by lessening the story’s impact on readers.


Aside from its ill effects on American politics, fake news can have dangerous consequences, as in 2011, when newspapers published a rogue scientist’s contention that vaccinations caused the diseases they were meant to prevent. Enough parents responded by refusing to allow their children to be vaccinated that a nationwide epidemic of measles followed, years after compulsory universal vaccination had all but eradicated the disease.

 

When President Washington ended his second term in 1797, he had grown so tired of fake news by “infamous scribblers” that he rejected pleas to remain in office. “I am become a private citizen,” he wrote with joy, “under my own vine and fig tree, free from the intrigues of court” -- and, he might have added, “fake news.” 

 

© Harlow Giles Unger 2019

Russian Victories in the Post-Cold War Era

 

For over four decades following World War II, the Soviet Union engaged in a global Cold War with the U.S., aiming to undermine America’s status as a world power. By any measure, the U.S.S.R. lost that war. But years later, after the fall of the Berlin Wall and the dissolution of the Soviet bloc, Russia under the leadership of Vladimir Putin sought a different approach to continue the battle against the United States. Unlike the previous outcome, Russia is clearly winning this new war! 

 

The original goal of chipping away at American global dominance was fairly simple, but old Cold War tactics were mostly obsolete in the 21st century. By combining sophisticated misinformation and hacking initiatives with the artful use of old Cold War methods of espionage (especially targeting Americans who could be compromised), Russia under Putin has tallied remarkable achievements beyond the wildest dreams of his predecessors, who tried unsuccessfully to undermine the United States. Russia’s timing was perfect, as Putin and his oligarchs put in place the pieces of a puzzle that have been wildly successful in weakening their Western nemesis.

 

Robert S. Mueller’s Report On The Investigation Into Russian Interference In The 2016 Presidential Election documents extensively the coordinated Russian cyber attacks intended to influence the American electorate through a disinformation campaign and to sow long-standing racial and other discord. This effort could only have its intended impact by aligning with a U.S. presidential candidate who, by any calculation, had little chance of being elected but who was obviously already compromised, and by hoping that somehow he could pull off an upset victory.

 

When Donald Trump was elected as U.S. President in November 2016 (to the great surprise of most Americans and probably to Putin’s astonishment as well), Russia achieved what no regime had ever achieved before. The golden prize was an American president, perhaps compromised far beyond what U.S. intelligence has revealed thus far – the leader of the Free World who has consistently advocated a pro-Russian agenda. This was a remarkable feat on the heels of an equally successful campaign to lure and reel in several of Trump’s close associates to do Russia’s bidding with the new president and his administration. 

 

Putin’s plan worked like magic: a U.S. president who at every step supports Russia’s international agenda and publicly advocates pro-Russian positions. The list of Trump’s efforts to bolster Russia grows every month: from inciting divisions within NATO, to the recent G7 Summit where the president argued on Russia’s behalf for its readmission to the group, to, most recently, the departure of U.S. armed forces from northern Syria, clearing the path for Russian dominance in the region. We can only wonder what information Putin has on Trump to make the president such an ardent defender and enthusiastic pro-Russian advocate.

 

Putin’s Russia is winning battles to destabilize the U.S. that former Soviet leaders such as Joseph Stalin, Nikita Khrushchev, and Leonid Brezhnev tried and failed to win. Russia’s “new” war against the U.S., weaponized with a president who every day undermines American democratic institutions, has opened an unprecedented front in the ongoing battles between the two nations. Who will win this war remains uncertain.

Overcoming Cold War Narratives: Remembering the Progressive Politics of Louis Adamic

 

From the 1930s through the 1950s, Louis Adamic was one of the best-known journalists and immigrant rights activists in the United States. His editorials and columns championing immigrant and African American rights, workplace justice, and anti-colonialism appeared in the New York Times, The Nation, Harper’s Weekly, and the Saturday Evening Post. His advocacy of ethnic equality changed middle school, high school, and college curricula by encouraging teachers to recognize cultural differences as an asset for students to discuss, not an obstacle to be overcome. Eleanor Roosevelt praised his efforts to fight nativist and racist policies in her syndicated newspaper column, My Day. In fact, her appreciation for his work resulted in Adamic and his wife being invited to dinner at the White House.

 

During World War II, Adamic’s writings, speeches, and radio interviews about politics in his native Yugoslavia helped convince Americans to support communist guerrilla leader Josip Broz Tito’s Partisan fighters there. However, Adamic’s support of Tito led Federal Bureau of Investigation agents to surveil him until his death in 1951, because they labeled anyone who praised a Communist a subversive. Simultaneously, Americans’ growing admiration for the Partisans convinced the Treasury Department to make Adamic one of its leading spokespeople for selling war bonds.

 

Adamic’s nuanced positions have been forgotten today because the binary nature of anticommunism in the mid-twentieth century made his progressive politics incomprehensible to a majority of Americans. Adamic urged Americans to embrace pluralism. The US, he argued, was not an Anglo Protestant nation, but a land of migrants and refugees continuously redefining themselves. Adamic called for a global embrace of pluralism. He pointed to Tito’s promise to his Serb, Slovene, Croat, Montenegrin, Bosnian Muslim, and Macedonian soldiers that victory over their Axis occupiers would result in a Yugoslavia based on ethnic equality. Adamic continued to support Tito after the Yugoslav leader broke from the Soviets in 1948. He hoped that the new Yugoslavia’s pluralist roots would ultimately lead Communist Yugoslavia to evolve into a true democratic republic. He believed both the Soviets and the Western powers were imperialists, and he predicted, correctly, that Yugoslavia would join with colonial nations to champion a world where countries did not have to choose an alliance with either the US or the USSR. During the early Cold War period, Adamic’s thinking presented such a conundrum to the FBI that the agency created a special category of subversive for him: “a pro-Tito Communist.” At the same time, the US poured millions of dollars into Yugoslavia hoping to exploit divisions between the Soviets and Tito.

 

As my research shows, most scholars have failed to grasp Adamic’s politics. He called himself a progressive, which for him meant supporting racial and ethnic equality, workers’ rights, and a foreign policy that granted nations the right to self-determination. He criticized both liberals and Communists as unreliable. Liberals, he charged, were “too wishy washy,” and Communists always followed the dictates of Moscow. He wanted to advance a globally conscious anti-colonial left. His views threatened the emerging Cold War consensus, and anticommunists of all stripes purposefully mischaracterized his positions in order to silence him. Anticommunist activists feared that the growing anti-colonial sentiment he and African American activists, non-Communist labor leftists, and peace activists advocated would challenge the growing contingent of Cold War liberals. 

 

Adamic’s quest to convince Americans to find an alternative to a Cold War ended on September 4, 1951. His local coroner ruled his death a suicide despite the fact that New Jersey State Police detectives suspected foul play. Prior to his death, Adamic had been moving from hotel to hotel in New York City while he worked on his thirteenth book, an account of his 1949 visit to Tito’s Yugoslavia. He went into hiding because he believed that former Croatian fascist soldiers (Ustaše), who had come to the US as Displaced Persons, would follow through on their threats to kill him. The Nazis had allowed the Ustaše, under the leadership of Ante Pavelić, to set up the Independent State of Croatia during World War II. Living in exile in Argentina after the war, Pavelić ordered assassins still loyal to his vision of Croatian nationalism to kill his enemies all over the world. According to interviews that both FBI agents and journalists conducted with Adamic’s friends and associates, Adamic believed they wanted to kill him for his role in convincing the American public to support Tito during World War II. The FBI agents who monitored Adamic noted that although a Soviet agent also threatened him, he feared only the Croatian fascists. By early September, he thought the danger had passed. He was clearly wrong. 

 

Adamic’s anticommunist critics used his death to portray him as an agent of Stalin who broke with the Kremlin by supporting Tito and suffered death as a consequence. Smearing Adamic as a Communist made him toxic. Teachers and college professors stopped assigning his books, and he and his progressive politics, rooted in his opposition to fascism, faded into obscurity.

 

Death to Fascism demonstrates that the progressive politics reemerging today, and their links to antifascism, have a long history. For Adamic, fascism, at its root, was an ethos uniting disparate strands of conservative and reactionary thinking into an anti-Enlightenment counterrevolution that sought to destroy democracy by appealing to beliefs in racial superiority and glorifying violence. For democracy to truly be fascism’s antithesis, those claiming to fight for democracy then and now need to commit to his progressive agenda of racial and ethnic equality, workers’ rights, and the rights of nations to self-determination.

Trump Skips ASEAN Summit, Continuing a Presidential Tradition

 

The Association of Southeast Asian Nations (ASEAN) held its 35th Summit in Thailand in early November 2019. ASEAN holds two summits annually, and the second this year included meetings with Dialogue Partners, including the United States. The 14th East Asia Summit (EAS), a gathering of countries initiated by ASEAN in 2005 to nurture an East Asian community, which the United States belatedly joined in 2011, was held back to back with it. 

 

On 29 October, the White House announced that President Trump would not attend the summit and that Robert O’Brien, the new National Security Advisor, and Secretary of Commerce Wilbur Ross would go instead. Trump also invited the leaders of ASEAN to meet in the United States for a “special summit” at “a time of mutual convenience in the first quarter of 2020”. The invitation reminds us of two previous invitations in the recent past.  

 

In May 2007, it was announced that then-US President George W. Bush would visit Singapore in September to attend the ASEAN-US commemorative summit marking 30 years of relations. However, in July it was reported that Bush would not be coming to Singapore after all and that the meeting of ASEAN leaders would be rescheduled “for a later date”. Compounding the disappointment, Secretary of State Condoleezza Rice also decided to skip the ASEAN Regional Forum (ARF) because of developments in the Middle East that required her attention. Her deputy, John Negroponte, represented her. Rice’s absence was certainly a “dampener”. 

 

Not surprisingly, the Southeast Asian countries felt that they were, to quote the late Surin Pitsuwan, who would assume the post of Secretary-General of ASEAN the following year, “marginalised, ignored and given little attention” while Washington and other allies were “moving firmly and systematically to cultivate a closer and stronger relationship in the Asean region”. President Bush attempted to make up for the aborted meeting with ASEAN leaders in Singapore by inviting them to a meeting at his Texas ranch on a date convenient for all. Bush apparently reserved such invitations “as a diplomatic plum for close allies”, but in the end the meeting did not take place because of scheduling difficulties and disagreements over Myanmar. Bush also announced that Washington would appoint an ambassador to ASEAN “so that we can make sure that the ties we’ve established over the past years remain firmly entrenched”. However, there remained the feeling that these actions were an afterthought. 

After Barack Obama became president, many in Southeast Asia hoped that the United States would pay attention to their region. To a certain degree, Obama delivered, but his administration was also distracted by domestic politics and the financial crisis that began in 2008. President Obama made a final sprint to shore up US-Southeast Asia relations when he hosted the Sunnylands Special Summit in February 2016, the first such summit between the US and ASEAN held in the United States. In a short statement issued at the end of two days of relatively informal talks, everyone reiterated their “firm adherence to a rules-based regional and international order” and “a shared commitment to peaceful resolution of disputes, including full respect for legal and diplomatic processes, without resorting to threat or use of force”. ISEAS Senior Fellow Malcolm Cook commented that “it was [a] late-term exercise in symbolism over substance, lacking any clear affirmation of future U.S. commitment to Washington’s Asian rebalance policy”. 

President Trump’s decision to skip the meetings in Southeast Asia this month must feel like déjà vu to the leaders in the region. It is too early to tell whether the proposed summit will take place, given the small window of opportunity before the US election season gets into full swing. If it materializes, it is uncertain how productive it will be given the many distractions facing President Trump. The Southeast Asian states critically need a countervailing force, both in form and substance, against China, which only the United States can provide. It looks like the region will have to wait with bated breath until late 2020 before we can tell how US-Southeast Asia relations will develop further. 

Lincoln – not Pilgrims – responsible for Thanksgiving holiday

 

Most Americans believe that the Thanksgiving holiday originated with New England’s Pilgrims in the early autumn of 1621 when they invited the Wampanoag Indians to a feast to celebrate their first harvest.                  

 

However, the Pilgrims’ Thanksgiving was actually a continuation of a European agricultural tradition in which feasts and ceremonies were held during harvest time.                            

 

In fact, President Abraham Lincoln established the holiday in 1863 as a permanent fixture on the calendar to celebrate Union victories in the Civil War and to pray to God to heal a divided nation.

 

Prior to 1863, the U.S. government informally recognized periodic days of thanksgiving. In 1777, for example, Congress declared a day of thanksgiving to celebrate the Continental Army’s victory over the British at Saratoga. Similarly, President George Washington, in 1789, declared a day of thanksgiving and prayer to honor the new Federal Constitution.  But it took the national trauma of a Civil War to make Thanksgiving a formal, annual holiday.                                                

 

With the war raging in the autumn of 1863, Lincoln had very little for which to be thankful.  The Union victory at Gettysburg the previous July had come at the dreadful human cost of 51,000 estimated casualties, including nearly 8,000 dead.  Draft riots were breaking out in northern cities as many young men, both native and immigrant, refused to go to war. There was personal tragedy, too.                        

 

Lincoln and his wife, Mary, were still mourning the loss of their 11-year-old son, Willie, who had died of typhoid fever the year before. In addition, Mary, who was battling mental illness, created tremendous emotional angst for her husband.           

 

Despite - or perhaps because of - the bloody carnage, civil unrest and personal tragedy, Lincoln searched for a silver lining. Sarah Josepha Hale, editor of Godey's Lady's Book, provided the necessary inspiration.                                                 

 

Hale, who had been campaigning for a national Thanksgiving holiday for nearly two decades, wrote to the president on September 23 and asked him to create the holiday “as a permanent American custom and institution.”                           

 

Only days after receiving Hale’s letter at the White House, Lincoln asked his Secretary of State William Seward to draft a proclamation that would “set the last Thursday of November as a day of Thanksgiving and Praise.”                                             

 

On October 3, the president issued the proclamation, which gave “thanks and praise” to God that “peace has been preserved with all nations, order has been maintained, the laws have been respected and obeyed, and harmony has prevailed everywhere, except in the theater of military conflict.”                                 

 

Unlike other wartime presidents, Lincoln did not have the arrogance to presume that God favored the Union side. Instead, he acknowledged that these “gracious gifts” were given by God, who, “while dealing with us in anger for our sins, hath nevertheless remembered mercy.”                                         

 

Lincoln also asked all Americans to express thanks to God and to “commend to His tender care all those who have become widows, orphans, mourners or sufferers in the lamentable civil strife,” to “heal the wounds of the nation,” and to restore it “as soon as may be consistent with Divine purposes to the full enjoyment of peace, harmony, tranquility and Union.”                                                                     

 

Since 1863, Thanksgiving has been observed annually in the United States. Congress ensured that tradition by codifying the holiday into law in 1941, days after the U.S. entered World War II.                             

 

At a time when we are struggling with the volatile issues of race, immigration and the impeachment of a president who has divided the nation along partisan lines, Lincoln’s Thanksgiving proclamation reminds us of the necessity to put aside our differences, if only for a day, and celebrate the good fortune that unites us as a people regardless of ethnicity, race or creed.                           

 

Perhaps then we can do justice to the virtuous example set by Lincoln, who urged us to act on the “better angels of our nature.”    

American Exceptionalism and Why We Must Impeach Trump

 

It has become an article of faith for many, even among those with no faith, that the idea of American exceptionalism is at best outmoded or at worst a delusional construct of the elite to bully the rest of the world.   

   

This attitude, perhaps formed by years of unmet expectations, is as shortsighted as it is unfortunate. Americans lucky enough to be native born, and those who earn naturalization, are the most fortunate fraction of the human race. America is the wealthiest, most powerful and, on the whole, the freest nation in all history. The benefits of geography, the abundance of resources, the fluidity of society and, above all, our commitment to self-government under the rule of law have made America exceptional. Exceptional does not mean flawless. America, like any human endeavor, is imperfect. The fact that we can see this in our country and work to remedy it does not mark us as deficient, but rather as people committed to the more perfect union cited in the preamble of the Constitution. 

 

To recognize the blessing of exceptionalism does not mark Americans as braggarts, but rather as grateful heirs of history’s gift, with the duty to maintain and improve it for future generations. Nor is it an excuse for nationalism. Our alliances and engagement with the rest of the world are a source of strength, not, as the demagogues rant, a system of weakness and lost prestige. 

 

American exceptionalism has come under its most serious attack from the presidency, what would have seemed in normal times the least likely source. The current occupant of the White House has done more to destroy this country and injure its place in the world than any foreign army has ever achieved. Under the cynical and empty slogan “Make America Great Again,” he has made this country far worse. Behind the façade of the bumper stickers and the baseball caps, the president has trashed NATO, which more than anything has prevented World War III. Our allies around the world are throwing aside decades of hard-earned trust. He perverted our fragile diplomatic relations by extorting Ukraine to obtain nonexistent dirt on a potential political opponent. Without warning, he brutally betrayed our Kurdish allies in the field of combat, an act of treachery so brazen and despicable as to stain America for centuries. He has consistently alienated leaders of democratic countries while getting in bed with the authoritarian and despotic leaders of Russia, Turkey, North Korea, and the Philippines. He has embraced the phony populism of Poland, Hungary, Italy and, most regrettably, the United Kingdom, where Prime Minister Boris Johnson is his clone experiment gone awry. As the evidence of the Mueller report clearly showed, the right-wing howling aside, he colluded with Russia to win office. 

 

On the home front he has defended white supremacists, openly ordered the obstruction of Congress and the courts, discriminated against transgender service members with absolutely no justification other than bigotry, violated campaign finance laws to silence a mistress, thumbed his nose at the constitution-based ban on emoluments, and allegedly committed forcible rape in a department store dressing room.  Even under the narrowest definition of high crimes and misdemeanors, he has violated the letter and spirit of his oath of office. For this and for other reasons too numerous to name, he must be impeached, convicted and removed immediately.  

 

American exceptionalism has been wounded, but it is still alive. The surviving remnant of its tattered spirit must rally itself once again to excise the malignancy that has crept its way into our country. The weight of our heritage and the promise of our future demand that we act in the present to restore our exceptional place in the world and in our hearts.

 

© Greg Bailey 2019

“You furnish the pictures and I’ll furnish the war”

 

In this and like communities, public sentiment is everything. With public sentiment, nothing can fail; without it, nothing can succeed. Consequently he who moulds public sentiment, goes deeper than he who enacts statutes or pronounces decisions. He makes statutes and decisions possible or impossible to be executed.

-Abraham Lincoln, Ottawa, Illinois, August 21, 1858, Debate with Stephen Douglas

 

Whatever is right can be achieved through the irresistible power of awakened and informed public opinion. Our object, therefore, is not to inquire whether a thing can be done, but whether it ought to be done, and if it ought to be done, to so exert the forces of publicity that public opinion will compel it to be done.

-William Randolph Hearst, unpublished editorial memorandum, date unknown

 

 

William Randolph Hearst’s journalistic credo reflected Abraham Lincoln’s wisdom, applied most famously in his January 1897 cable to the artist Frederic Remington at Havana: “Please remain [in Cuba]. You furnish the pictures and I’ll furnish the war.”

             

For the past two decades, journalism professor W. Joseph Campbell has argued in labored academese that the story of Hearst’s telegram is a myth. In an online monograph I have refuted Campbell’s inaccurate and misleading assertions, and have countered his analytical approach. Here I shall address the historiographical aspects of the debate: How can one write credible history from incomplete, ambiguous, and at times contradictory evidence? How can one test the reliability of witnesses?

 

In the absence of the actual telegram, or of confirmation by the sender or the recipient or both, historians must rely on second-hand reports. Each report must be independently and conjunctively evaluated in its respective context. Authors’ motives need to be understood, if any might influence their declarations. 

 

Most primary sources that quote or paraphrase Hearst’s cable to Remington are excerpts from personal memoirs. The passage of time often corrodes and corrupts recollections. Authors of integrity sometimes succumb to exaggeration and embellishment. With those cautionary concerns in mind, let us proceed to explicate two pertinent records.

 

Charles Michelson’s Reminiscence

 

In his 1944 autobiography The Ghost Talks, President Franklin D. Roosevelt’s press agent Charles Michelson recounted his youthful ordeal as Hearst’s New York Journal and San Francisco Examiner correspondent in Havana in 1895 and 1896:

 

All this time Hearst was plugging for war to free Cuba from the Spaniards. Fiery editorials and flaming cartoons came out daily picturing Weyler the Butcher, Weyler being the new governor-general of the island. One day the paper came in with a two-page illustration of Weyler flourishing a blood-dripping sword over the female figure supposed to represent Cuba. Just before this I had gone to a little town in western Cuba where a battle was being fought, according to reports. There wasn’t any battle. A rebel troop had marched in, had drilled in the public square, and the men had marched away again. A couple of hours later Spanish troops appeared, and there was some shooting; but as far as I could learn, the casualties were only among the civilian population. I had to show my credentials to the Spanish authorities, so they knew my identity. The night of the day on which the ghastly picture appeared, my door flew open and a Spanish secret service official told me that I was under arrest. They took me down to the water’s edge, put me in a boat, and took me over to Morro Castle, where I was locked up.

 

Through a combination of diplomacy and bribery, Michelson was released.

 

And I took the first boat I could get to Key West.

 

Later I was sent back to the Caribbean with Richard Harding Davis and Frederick Remington to join the rebels. Mr. Hearst kindly furnished us with his yacht Vamoose. That craft was a hundred and ten feet long with a ten-foot beam. It was a grand thing in the Hudson River, but I never could find a captain who would take us across the Gulf Stream to Cuba. Whenever we got to the turbulent current, something would go wrong with the machinery and the captain would insist that we go limping back to Key West. So the jeweled sword I was to present to General Gomez of the rebel forces was not delivered. It was in the course of this incident that a famous telegraphed correspondence between Remington and Mr. Hearst was supposed to have taken place. According to the story, Remington wired that he was returning, as he did not think there was going to be any war involving the United States; and Hearst is reported to have replied, “Go ahead, you furnish the pictures and I’ll furnish the war.”

 

When Hearst’s war with Spain belatedly began in 1898, he dispatched Michelson to cover it, along with Stephen Crane. Michelson continued to work for Hearst until 1918. He disagreed with Hearst’s editorial stance that opposed United States involvement in World War I, so he switched to a paper that favored entry.

 

Michelson’s memoir falls short of affirming that Hearst had sent the telegram to Remington. But he had served as a close associate of Hearst for more than twenty years — during which national media had published the undisputed story in 1901, 1906, and 1912. It seems likely to me that if Hearst had credibly repudiated its essence, Michelson would have heard and reported that also. 

 

More than that, the story was consistent with Hearst’s editorial stance and with his assignments to Michelson. The excerpt I quoted from his book doesn’t prove the telegram’s authenticity, but it supplies a morsel of positive evidence.

 

Jimmy Breslin’s Version

 

On page 4 of the New York Daily News for Sunday, February 20, 1983, legendary columnist and feature writer Jimmy Breslin wrote:

 

In my past business, I sat one night in the sports department of the old Hearst paper, the Journal-American, checking the horse-race charts. On the other side of the space there was an old man from the wire room who one night showed me ancient copies, or maybe they were facsimiles, I couldn’t tell, of telegrams that were sent to and from the Hearst paper in 1898.

 

One was from Frederic Remington, the Western artist, who was at the Hotel Inglaterra in Havana. His wire was addressed to Mr. William Randolph Hearst Sr., and it read:

 

“Everything is quiet. There is no trouble here. There will be no war. I wish to return.”

 

And the wire sent back to him from the New York Journal, as it was known then, read:

 

“The Chief says: Please remain. You furnish the pictures and I’ll furnish the war.

“Signed Willicombe, Secretary to Mr. Hearst”

 

The lead article on page 2 of the paper was headlined “Libyans score U.S.” over a report about the Libyan government’s threatened reprisals if the aircraft carrier USS Nimitz were to enter Libyan waters as President Ronald Reagan had commanded. In bold italic type above the article appeared this note, which established the context for Breslin’s report of his encounter with the exchange of telegrams between Remington and Hearst: “Jimmy Breslin remembers the Maine and hopes he won’t have to remember the Nimitz. Page 4.”

 

From mid-1959 to early 1962, Breslin worked as a sports reporter for Hearst’s New York Journal-American. His published recollection came at least 20 years later, long enough for memory to fray and fade. Not surprisingly, he got at least two details wrong. The occasion for the telegrams was in January 1897, not 1898. Joseph P. Willicombe did not become Hearst’s private secretary until the 1920s, but he held that position for the rest of his career, and his was the only name widely associated with that title. 

 

Neither error is reason to distrust the essential truth of Breslin’s statement, which otherwise rings true. It is congruent with credible reports going back to July of 1901, and it adds information that previous authors omitted. Attributing authorship to Hearst’s secretary seems likely, but to my knowledge has not been reported by earlier writers who probably had not examined copies of the actual documents. 

 

(As one who collects artifacts of postal history and written communication, including some 19th century telegrams, I would add that written messages and signatures are often puzzling to read. An unintelligible scrawl might have appeared to be Willicombe’s name to a reader predisposed to identify his name with the title Breslin knew he had held.)

 

Conclusion

 

The sentiment Hearst expressed in his telegram to Remington represented a consistent application of his philosophy, which implemented Lincoln’s axiom as a business principle. No writer who knew Hearst personally seems to have doubted it. I can see no ulterior motive in either Michelson’s or Breslin’s report that would cause me to distrust or to disbelieve either one. If these were my only sources, they would be insufficient to press my case, but in the absence of credible contrary sources, they enhance my earlier exposition.

The Mysterious Assassination That Unleashed Jihadism

 

At a quarter past noon on 24 November 1989, a red Chevrolet Vega approached the Sab’ al-Layl mosque in Peshawar, Pakistan. As the crowd prepared to greet the arriving party, a roadside bomb ripped through the car, killing everyone inside. Peshawar at this time was plagued with violence, but this was no ordinary assassination.

 

The victim in the passenger seat was Abdallah Azzam, the spiritual leader of the so-called Afghan Arabs, the foreign fighters who travelled in the thousands to fight the Soviets in Afghanistan in the 1980s. The struggle to rid Afghanistan of Russian occupation was viewed in most of the Muslim world as a legitimate case of military jihad, or religiously sanctioned resistance war. A Palestinian cleric, Azzam had joined the Afghan jihad in 1981 and spent the decade recruiting internationally for the war. By 1989, he was a living legend and the world’s most influential jihadi ideologue. 

 

Azzam is not well known in the West today, but he is arguably the father of the jihadi movement, the cluster of transnational violent Islamist groups, such as al-Qaida and Islamic State, which describe their own activities as jihad. Azzam led the mobilization of foreign fighters to Afghanistan, thereby creating the community from which al-Qaida and other radical groups later emerged. His Islamic scholarly credentials, international contacts, and personal charisma made him a uniquely effective recruiter. Without him, the Afghan Arabs would not have been nearly as numerous.

 

He also articulated influential ideas. He notably argued that Muslims have a responsibility to defend each other, so that if one part of the Muslim world is under attack, all believers should rush to its defence. This is the ideological basis of Islamist foreign fighting, a phenomenon which later manifested itself in most conflicts in the Muslim world, from Bosnia and Chechnya in the 1990s, via Iraq and Somalia in the 2000s, to Syria in the 2010s. Moreover, he urged Islamists (people involved in activism in the name of Islam) to shift attention from domestic to international politics, thus preparing the ground ideologically for the rise of anti-Western jihadism in the 1990s. Earlier militants had focused on toppling Muslim rulers, such as in Egypt, where the group Egyptian Islamic Jihad killed President Anwar al-Sadat in 1981, or in Syria, where a militant faction of the Muslim Brotherhood launched a failed insurgency against the Assad regime in the late 1970s. Azzam, by contrast, said it was more important to fight non-Muslim aggressors. Azzam himself never advocated international terrorism, but his insistence on the need to repel Islam’s external enemies later became a central premise in al-Qaida’s justification for attacking America.

 

Azzam’s most fateful contribution, however, was the idea that Muslims should disregard traditional authorities (be they governments, religious scholars, tribal leaders, or parents) in matters of jihad. In Azzam’s view, Islamic Law was clear: if Muslim territory is infringed upon, all Muslims have to mobilize militarily for its defence, and all ifs and buts are to be considered misguided. This opened a Pandora’s box of radicalism, creating a movement that could not be controlled. After all, how do you get people to listen to you once you have told them not to listen to anyone? 

 

Azzam felt the early effects of this problem in his lifetime, but managed to keep order in the ranks through his immense status in the community. After his assassination, however, there was nobody left to rein in youthful hotheads, and the jihadi movement entered a downward spiral of fragmentation and brutality. While the Afghan Arabs in the 1980s had only used guerrilla tactics, some of their successors turned to international terrorism, suicide bombings, and beheadings. The ultra-violence of Islamic State in recent years is only the latest iteration of this process.

 

So who killed him? Some have suggested a fellow Afghan Arab, such as Osama bin Laden or Ayman al-Zawahiri, but this seems unlikely given the reputational cost to anyone caught red-handed trying to kill the godfather of jihad. Others have blamed foreign intelligence services such as the CIA or the Mossad, but Azzam was not important enough for them. Yet others have mentioned Saudi or Jordanian intelligence, but these countries had no habit of assassinating Islamists at this time. Afghan intelligence (KhAD) had reason to kill Azzam earlier in the war, but not in late 1989. Many have accused the Afghan warlord Gulbuddin Hekmatyar, who resented Azzam’s growing affection for his archenemy Ahmed Shah Massoud, but new evidence shows that Hekmatyar and Azzam were actually close personal friends. 

 

We are left with the Pakistani Inter-Services Intelligence (ISI), which had both the capability and a motive. In the late 1980s, the Afghan Arabs had become a nuisance, criticizing Pakistan more openly and meddling in Afghan Mujahidin politics. No hard evidence pins ISI to the crime, but the circumstantial evidence is compelling. The operation was sophisticated and required personnel movement around the site before, during, and after the attack. The location and timing suggest a desire to shock the Arabs, because Azzam could easily have been liquidated quietly in a drive-by shooting. Still, we cannot draw a firm conclusion, and the Azzam assassination remains the biggest murder mystery in the history of Islamism. 

 

The story of Abdallah Azzam suggests that a root cause of modern jihadism was the collapse in respect for religious authority among young Islamists in the late 1980s. Azzam initiated it, his disappearance accelerated it, and the repercussions have been devastating. This is also one of History’s many lessons in unintended consequences, for it is a fair bet that neither Azzam himself nor his assassins intended for things to turn out this way. 

 

The question now is whether the genie can be put back in the bottle and religious authority reasserted over young people seduced by jihadism. There are some signs that the excesses of Islamic State have undermined the movement’s appeal, but it is too early to tell whether it is possible to undo the damage of that mysterious blast thirty years ago.

The Best Work in History Illuminates Life Now: An Interview with Angela Woollacott

 

Angela Woollacott is the Manning Clark Professor of History at the Australian National University, an elected Fellow of the Royal Historical Society, the Academy of the Social Sciences in Australia, and the Australian Academy of Humanities, and a former president of the Australian Historical Association. Her new book Don Dunstan: The visionary politician who changed Australia (Sydney: Allen and Unwin, 2019) was supported by an Australian Research Council Discovery grant. She has published widely in the fields of Australian and British Empire history; women’s history; colonialism, race and gender; and biography, transnational and political history. She is currently on the editorial advisory board for the Historical Research and Publications Unit at the Australian Department of Foreign Affairs and Trade and on the editorial advisory boards of three academic journals, and she has recently served on an advisory panel at the Reserve Bank of Australia for its new generation of banknotes.

What books are you reading now?

 

Daily life as an academic necessitates becoming a promiscuous and, to some extent, cursory reader. It seems that I always have several books part-read, despite my natural inclination being to finish one before starting another. The idea of reading an entire book in a leisurely way, in a comfortable armchair, often seems remote. For the course that I am currently teaching on 19th century Australian history, right now I’m dipping into Iain McCalman, Alexander Cook and Andrew Reeves (eds.), Gold: Forgotten Histories and Lost Objects of Australia (Cambridge UP, 2001). In order to develop my ideas about my next research project, I am in the midst of Tracey Banivanua Mar, Decolonisation and the Pacific: Indigenous Globalisation and the Ends of Empire (Cambridge UP, 2016). In my pile of upcoming deadlines, the book that I am reviewing for an Australian literary magazine is Margaret Simons, Penny Wong: Passion and Principle (Carlton, Vic.: Black Inc., 2019), a biography of the current Leader of the Opposition in the Australian Senate. And, of course, there is always a novel for which I wish I had more time. At the moment, it is Andrew Sean Greer, Less (Abacus, 2017), the 2018 Winner of the Pulitzer Prize for Fiction.

 

What is your favorite history book?

 

I hate to pick just one favourite, because there are so many that I admire. But, if it must be just one, a book I often tell students about is Judith C. Brown, Immodest Acts: The Life of a Lesbian Nun in Renaissance Italy (Oxford UP, 1986). In piecing together the story of Sister Benedetta Carlini, Brown demonstrates the possibilities of imaginative historical research. She shows how exciting an unlikely archival find can prove to be, and provides a model of taking a limited quantity of archival evidence, and spinning a rich historical monograph from it through contextual material and vivid writing. In quite a short book, Brown explicates early modern convent life and acts of resistance; patriarchal control and officious administration within the Catholic church; and the fabric of social life in regional Italy, including fears and superstitions people of the valleys held for those of the mountains. Sister Benedetta’s sexual relationship with another nun is the dramatic core of the narrative. Yet part of the book’s richness is that the sex cannot be understood without grappling with the role of supernatural visions in religious belief and practice. 

 

Why did you choose history as your career?

 

Looking back, it’s almost as though history found me. I was always an avid reader, a habit nurtured within my family. But as I was in the first generation in the family to have the privilege of a university education, my parents were not surprisingly pleased when I studied law, and less enthusiastic when I dropped that to pursue research in history. Nor was I any more certain than they that it could lead to remunerative work. I just kept following one opportunity after another, starting with an Honours degree in History (a fulltime, disciplinary-specific fourth year that is a peculiarity of Australian universities). Next came a research position at a museum of political history, then post-graduate study in History at the University of California, Santa Barbara, followed by a fortuitous appointment as an Assistant Professor in History at Case Western Reserve University, in Cleveland, Ohio, immediately after I completed my PhD. 

 

When I chose to specialize in History as a discipline, it followed on from an interest in Political Science. Political Science’s preoccupation with paradigms had never sat very well with me. History offered the full explanations, the fascinating stories, and the fabulous breadth of topics and questions. And it is as gratifying and rich a discipline to me now as it was when I started out—albeit a discipline that has had many twists and turns along the way.

 

What qualities do you need to be a historian?

 

Curiosity and a vivid imagination help. And a willingness to spend many hours in the archives, persistently going through box after box. Perhaps most of all, historians need to care about literature and writing. There is always the debate about whether history is a social science or a humanities field; in fact, it is both. But because we are in the humanities too, more than some of the other social sciences, we need to pay attention to the grace and flair with which we write. 

 

I’ve heard it said that historians were the shy ones at school: just wanting to hide in the library reading a good book. There may be a grain of truth in that, but we also need to be engaged with the current world, because the best work in history illuminates life now, even if not in superficially obvious ways.

 

Who was your favorite history teacher?

 

I benefitted from inspiring teachers both at school and university, and again I hate to pick just one. But I will mention one seminar in my postgraduate program that was especially stimulating. During my time as a PhD student at the University of California, Santa Barbara in the 1980s, the History Department was fortunate to have as a visitor Professor Robert Darnton of Princeton University. He came for one term and offered a seminar in Cultural History and Anthropology that was capped (I think at 12) and evenly split between History and Anthropology graduate students. Word got out quickly and enrolment filled up within days; fortunately I signed up early. Each week we read one work of history and one of anthropology, connected by theme. It was a fascinating intellectual experience, and I learnt so much. It was fun to interact with the anthropology graduate students, and wonderful to get to know Robert Darnton – including at the casual dinners most of us went to following the seminar. Cultural analysis was the new buzz in historical methodology in the 1980s (we all became aficionados of Clifford Geertz’s ‘thick description’), so it was a very timely educational experience which enriched my work. But it also opened me up to other interdisciplinary approaches, so that I became interested in the ‘linguistic turn’ when that erupted in the 1990s. I felt fortunate to have had that seminar. 

 

What is your most memorable or rewarding teaching experience?

 

Like many academics, I truly enjoy lecturing, and tutorials (Australian for discussion sections) can be wonderful when they go well. Marking (Australian for grading) is not my favourite part! When I think about memories of teaching across my career, some students spring to mind. Naturally, a few very bright and talented students stay with you, especially when they pursue academic careers and one can take pleasure in tracking their progress. But others stay in one’s mind too. There are a few whom I recall particularly because of the life experiences they shared with me—survivors of family trauma. Also I remember a student who chose to study despite enduring a terribly debilitating terminal condition, which made every aspect of study challenging; his interest in history seemed to help him keep going, which was very moving.

 

What are your hopes for history as a discipline?

 

We seem to be at a moment in the historical discipline when scholars are seeking to reconcile national historical frames with global, international, transnational and world history. I’m hoping that we can move forward fruitfully, recognizing that national frames are inevitable, and global and transnational ones are indispensable for understanding the past and the dynamics of change. 

 

On another note, as a long-term stalwart of women’s, feminist and gender history, I hope that we can maintain the vital insights that feminist methodology has given to the humanities and social sciences. Certainly conferences and journals in the field are flourishing, but I worry that women’s and gender history courses have been dropped from university curricula. We need to keep presenting these approaches and insights as a core part of undergraduate education.

 

More broadly, we need to reclaim history’s importance in the public intellectual domain. Biography and war histories do well in bookstores, but they are often most of what you find in the section labelled “History.” Historians must actively participate in the arenas of public discourse, to promote the vital role of our discipline in civic society.

 

Do you own any rare history or collectible books? Do you collect artifacts related to history?

 

I’m not a collector per se. I do own some rare books, but obtained them when I was researching particular topics. For example, years ago, I was interested in the history of the early 20th century birth control movement. I bought some books by Marie Stopes and Margaret Sanger from second-hand bookshops, and still have them – it’s a mini-library of early birth control advice! Right now, though, I’m not sure what will stay in my library and what won’t. The School of History here at the Australian National University will move into a new building in about six months. It’s shaping up to be a beautiful, striking building. But the offices are all half the size of my current office, and I’m going to have to dispense with most of my filing cabinets and around half of my books!

 

What have you found most rewarding and most frustrating about your career? 

 

Being a historian is a very rewarding and privileged life. I’m often aware of how fortunate I am to have a well-paid career pursuing my intellectual interests, and spending a lot of time reading things I find interesting. When I look at people in the corporate world, I thank my lucky stars. Apart from following my own interests, spending one’s life at a university (albeit various ones) means that there are always stimulating events to attend. 

 

Of course, teaching has its pains (grading) as well as its pleasures. The best part of teaching, for me, is supervising good PhD students. It’s so rewarding to work with mostly younger historians, passionately pursuing their own intellectual creations, and to watch their success.

 

The most frustrating thing about being an academic is the workload, and the near-impossibility of having any reasonable work-life balance. It helps to have, as I very fortunately do, a partner who is also an academic and is understanding and supportive. But the demands on us are extreme, and it is very difficult to juggle them. Teaching is enormously time-consuming, and we get virtually no professional rewards for it. Promotion is always based on research and publishing, but managing to publish when one is also teaching, supervising, sitting on committees, reviewing books and manuscripts for others, writing letters of reference etcetera, means a major overload. And email is the rock of Sisyphus!

 

How has the study of history changed in the course of your career?

 

The discipline has changed in so many ways. Subjects that were radically transformative when first emerging (such as Black and Indigenous history, women’s/feminist history, history of sexuality, postcolonial history etc.) in the 1970s-90s have become more or less mainstream. As a feminist historian, I’m a bit sad that thematic women’s and gender history courses have lost their popularity—though just today in a discussion class one student commented that she has enjoyed gender being a recurrent theme in our course this semester, rather than the one week at the end she thinks is now typical. 

 

Looking back, there were moments when the field was riven by heated and personal debate—such as over the ‘linguistic turn’ and post-structuralist theory in the 1990s, and here in Australia what we call the History Wars of the same decade over the extent of frontier warfare in the 18th–19th centuries. Presently we don’t seem to have such exchanges, and there is little theoretical discussion, other than parsing the terms global, transnational, world and imperial and their implications. 

 

What is your favorite history-related saying? Have you come up with your own?

 

I quite like the oft-quoted notion that the past is a foreign country. It suggests that, no matter who you are, you need to do the research to explore the past, and to be open to surprises and discoveries.

 

What are you doing next?

 

My latest book just came out two months ago. It’s a biography of an Australian politician who was a leading progressive reformer in the 1960s – 1970s, and it’s published by Allen and Unwin, Australia’s leading trade press. I’ve never done a trade book before, and it’s been quite an exciting ride! The book went into a second printing less than a month after it came out. There have been two launches, each by a nationally-prominent political figure. I’ve been on the program of one writers’ festival and will do another. And it’s been very widely and positively reviewed, with considerable newspaper coverage and radio interviews. So I think now I will just sit back and read some books by other scholars for a while!

Going blue in the Bluegrass State? History echoes in Kentucky’s gubernatorial results

 

Conservatives dismiss it as an aberration, or maybe the natural consequence of an incompetent and unpopular incumbent. Liberals spin it as a sign of the GOP’s vulnerability going into 2020.

 

But Republican Gov. Matt Bevin’s narrow defeat this month in Kentucky – which Donald Trump carried by 30 points in 2016 – could mean something else, too:

 

Perhaps deep-red regions like Eastern Kentucky aren’t as reliable for Republicans and unwinnable for Democrats as conventional wisdom suggests.

 

The mainstream perspective – that these areas are politically intransigent – means they don’t draw serious campaign attention and that voters aren’t shown due respect by candidates. That absence perpetuates a cycle, propping up those political assumptions about rural America while proving Lee Atwater’s riff on Marshall McLuhan: perception is reality.

 

For those living in hidden-seeming pockets of Eastern Kentucky, where I was born and raised, the perception-turned-reality is that they don’t matter, that the country’s leadership doesn’t care about them or their needs. Over the long term, these perceptions help enable damaging, all-too-real conditions of rural poverty, unemployment and poor health care.

 

I know this to be true. I grew up with people who display fierce independence, quiet generosity and a suspicion of authority, while working hard and holding strong pro-union sympathies. Such was natural given the long history of struggle against the exploitation and destruction wrought by mining companies.  Yet politicians and public officials often benefit from popular images that paint a different picture, seeking to inflame social, cultural and political divisions as a means of maintaining power. While we point fingers at convenient hillbilly stereotypes, these divisions, and the injustices they produce, become only more widespread and destructive.

 

In reality, places like this aren’t as intransigent as the conventional wisdom would have us believe. Not all that long ago, Eastern Kentucky was a Democratic stronghold.  This historical preference is punctuated by wide margins of victory for Democratic presidential candidates in 1980, 1992, and 2000. In Floyd County, in the heart of Eastern Kentucky and an area that Democratic gubernatorial candidate Andy Beshear just won, Jimmy Carter's margin of victory over Ronald Reagan was 44 points, while Clinton claimed a 53-point advantage over George H.W. Bush. Likewise, Al Gore's margin over George W. Bush was 32 points.

 

Even as far back as 1972, despite what a writer like J.D. Vance would have the public believe, voters in Breathitt County, part of Eastern Kentucky’s coal-mining country, preferred George McGovern over Richard Nixon by a whopping 18 points. McGovern, who captured a paltry 17 electoral votes, lost Kentucky overall by 29 points.

 

Data like these urge a rethinking of the binary logic of red- and blue-state politics, an order that ultimately works to silence (and disempower) local populations and communities.

For me, such results were made clear in a more personal way by a tattered portrait of John F. Kennedy that my grandmother displayed on the wall of her front porch. This image seemed to convey a faith that there were politicians out there who really cared about the working poor, the plight of the coal miner and the farmer who worked the hillside fields in the hollows.

But faith can't sustain action forever, especially in the absence of any tangible results.

It all makes me wonder: Is the current slant of rural voters toward the GOP – and Eastern Kentucky’s slant in particular – less about political ideology and more a natural consequence of feeling neglected? Or, worse yet, a result of having been betrayed and lied to?

 

As the presidential election cycle continues to heat up, especially in light of events such as Bevin's defeat, perhaps there’s a reminder here: that politicians should not turn their backs on rural voters. Assuming some monolithic ideology in rural America ignores the diversity and agency of entire communities. Instead, politicians should work to reject political isolation. Speak to the issues, commit to change and, yes, campaign there along those narrow country roads, within those hollows. Seek to understand the history that has led us to the present, and give rural Americans the respect and commitment that come from fighting for them – and for their support.

Coexistence and Sectarianism in the Modern Arab World

 

Every history of sectarianism is also a history of coexistence.  Every sectarian act or mobilization, after all, paradoxically calls attention to a pluralism it aims to counter or negate. In the case of the modern Arab world and the Middle East more broadly, the sectarian story has been highlighted in media, policy-circles, and academic scholarship.  Myriad antisectarian stories and narratives that also define modern Arab history have been largely ignored.

 

Part of this bias toward the study of sectarianism is understandable: the modern Arab world is afflicted by political and military upheavals that have often taken on (or have been represented as having essential) sectarian or religious overtones. Scholars and journalists who have tried to explain and demystify current events have typically ended up analytically imprisoned.  As the best critical historical and anthropological work on the politicization of religion in the Middle East has shown, it is not simply sectarian networks and mobilizations that need to be exposed, but the ubiquitous language of sects and sectarian relations that undergirds them.  Sects are not natural.  They are produced by ideological and material effort.  

 

Some of the emphasis on sectarianism, however, is clearly politicized. For example, U.S. presidents as different as Barack Obama and Donald Trump have regurgitated the same demeaning imperial clichés about the Middle East. Both have insisted that the Middle East is haunted by allegedly endemic sectarian antagonisms and age-old tribal wars. Both have played up the idea of primordial oriental sectarianism to downplay US responsibility for creating the conditions that have encouraged a sectarian meltdown in the Middle East. 

 

In the face of such insidious notions of an age-old sectarianism, laypersons and scholars from the Arab world have often invoked a romanticized history of coexistence that too often has glossed over structures of inequality and violence. This romantic view of coexistence assumes it to be a static, idealized form of liberal equality rather than a variable state of affairs.  

 

More problematically, the term coexistence has often privileged the idea of monolithic ethnic or religious communities rather than understanding them as dynamic arenas of struggle.  Those who adhere to deeply conservative, patriarchal notions of who and what represents any given community contend with those who insist upon more progressive understandings of what constitutes being Muslim or Christian.  To generalize about “the Christians” in the Middle East, for example, denies not only the obvious difference between a Phalangist Christian militiaman and a Christian liberation theologian; it flattens the different forms of Christian belonging, and the belonging of Christians, in and to the Middle East. 

 

In the case of the Middle East, in particular, the need to demythologize communities and their ideological underpinnings needs to go hand in hand with evoking a dynamic history of coexistence that transcends communalism. Sectarian ideology is inherently divisive.  It requires the conflation of historic religious communities with modern political ones, as if modern Maronites, Jews, or Shiis necessarily share a stronger bond with their medieval coreligionists than they do with their contemporaries of different faiths.  Sectarian ideology does not ask how religious or ethnic communities have been historically formed and transformed, how they have policed and suppressed internal dissent, and who may represent or be represented within them. 

 

To pose such questions is not to deny the historical salience of communal affiliations, but it does challenge their supposed uniformity. Such questions constitute a crucial first step in seeing the inhabitants of the Arab world as politically, socially, and religiously diverse men and women rather than merely as “Sunnis,” “Shiis,” “Jews,” “Maronites,” “Alawis,” or, indeed, as monolithic “Arabs” and “Kurds.”  Iraqis of different faiths, for example, have often made common cause in ways that defy the alleged hold of normative sectarian identity, whether as communists or Iraqi Arab nationalists in the 1940s and 1950s, or, today, as outraged citizens fighting rampant corruption and sectarianism in their country.  

 

In fact, the contemporary fixation with the problem of sectarianism obscures one of the most extraordinary stories of the modern Arab world: the way the idea of political equality between Muslims and non-Muslims, unimaginable in the Ottoman empire at the turn of the nineteenth century, became unremarkable across much of the Arab Mashriq by the middle of the post-Ottoman twentieth century.

 

This new age of coexistence depended at first on major reforms within the Ottoman empire that promulgated edicts of nondiscrimination and equality between 1839 and 1876.  It also depended on the antisectarian work of Arab subjects who began to think of themselves as belonging to a shared modern multireligious nation in competing secular and pietistic ways.

 

The transition from a world defined by overt religious discrimination and unequal subjecthood to the possibility of building a shared political community of multireligious citizens was—and remains—fraught with gendered, sectarian, and class limitations. The infamous anti-Christian riot in Damascus in July 1860, when Christians were massacred and their homes and churches pillaged, was clear evidence of the breakdown of the certainties of the long-established, highly stratified Ottoman Muslim imperial world.  In the new Ottoman national world that emerged in the second half of the nineteenth century, urban men took for granted their right to represent their communities and nations and to “civilize” their “ignorant” compatriots. 

 

The messiness of the story of modern ecumenical Arabness that encompassed Muslims, Christians, and Jews is undeniable.  But privileging a story of static sectarianism over one of dynamic coexistence downplays the significance of the fact that many Muslim Arabs protected their fellow Damascenes in 1860.  After 1860, Arab Muslims, Christians, and Jews built, enrolled, and taught in ecumenical “national” schools across the Arab Mashriq that embraced rather than negated religious difference.  The rich heritage of the Islamic past from Andalusia to Baghdad, and a common Arabic language, offered a treasure-trove of metaphors around which to build an antisectarian imagination for the future.  

 

Rather than divide the Arabs into binary and mutually exclusive “religious” and “secular” categories, as if these are the only conceptual categories that matter, much can be gained by focusing on the ways Arabs struggled to build new ecumenical societies that fundamentally accepted the reality of religious difference and the possibility of its political transcendence. The ethno-religious nationalisms of the Balkans, the terrible fate of the Armenians at the hands of the Young Turks, the arrival of militant Zionists in a multireligious Palestine, and the puritanical Wahabis of Arabia offer fascinating counterpoints to the ecumenism of the late Ottoman and post-Ottoman Arab East.

 

The inhabitants of the modern Arab world, however, have hardly ever been masters of their own political fate.  The Ottoman empire was colonized and partitioned along sectarian lines by Britain and France after the First World War.  Nevertheless, in the post-Ottoman states of Syria, Iraq, Egypt, Lebanon, and Palestine, the anticolonial imperative to forge political communities that transcended religious difference gathered force.  So too did countervailing and often chauvinistic minoritarian, nationalist, and religious politics begin to crystallize.  While the latter politics has been well documented, it is the active will to coexist that needs to be studied today more urgently than ever before.  The persistence of a complex culture of coexistence provides a powerful antidote to the misleading and pernicious idea of the sectarian Middle East.

Roundup Top 10!  

The impeachment hearings are a battle between oligarchy and democracy

by Heather Cox Richardson

Ukraine’s leaders were accustomed to wielding power by prosecuting their political opponents for corruption, and Yovanovitch’s push to end that practice earned their ire.

 

Why family separation is so central to Trump’s immigration vision

by Maddalena Marinari

Strengthening family ties has been key to overcoming nativism — and in 2020, it can do so again.

 

 

American Slavery and ‘the Relentless Unforeseen’

by Sean Wilentz

The neglect of historical understanding of the antislavery impulse, especially in its early decades, alters how we view not just our nation’s history but the nation itself.

 

 

Between the Lines of the Xinjiang Papers

by James A. Millward

The Chinese Communist Party is devouring its own and cutting itself off from reality.

 

 

Today’s problems demand Eleanor Roosevelt’s solutions

by Mary Jo Binker

It’s time to banish fear and take up constructive action.

 

 

Ten rules for succeeding in academia through upward toxicity

by Irina Dumitrescu

Universities preach meritocracy but, in reality, bend over backwards to protect toxic personalities.

 

 

The Last Time America Turned Away From the World

by John Milton Cooper

The unknown story behind Henry Cabot Lodge’s campaign against the League of Nations.

 

 

The GOP Appointees Who Defied the President

by Michael Koncewicz

Before Watergate became a story that dominated the national media in the spring of 1973, there were individuals within the Office of Management and Budget (OMB) and the IRS who took dramatic steps to block Nixon’s attempts to politicize their work.

 

 

The War on Words in Donald Trump’s White House

by Karen J. Greenberg

How to Fudge, Obfuscate, and Lie Our Way into a New Universe

 

 

Why abruptly abandoning the drug war is a bad idea for Mexico

by Aileen Teague

Long-term economic initiatives are good, but a power vacuum will make things more violent in the short term.

 

 

 

What Recognizing the Armenian Genocide Means for U.S. Global Power

by Charlie Laderman

It could spark a recognition that America First is the wrong course.

Sondland Sings: Here's How Historians Are Responding

Too Important or Too Irrelevant? Why Beijing Hesitates on Hong Kong

 

Two competing narratives possibly explain why Beijing’s authoritarian communist rulers have not so far interfered in the increasingly violent protests in Hong Kong, now six months old and heading into a deadly new phase. Whichever explanation is correct will determine how long Beijing will stay patient if the impasse drags on and the violence continues to grow.

 

The answer may also likely decide whether the ‘one country, two systems’ formula can survive intact.

 

One account, in London’s Financial Times, says that the Chinese have largely remained on the sidelines, leaving it to local police and authorities to find or force a resolution, because Hong Kong is no longer especially significant to China. This theory argues that Hong Kong’s days as the mainland’s key financial base are long over. Thus, the territory can be left alone to clean up its own mess.

 

The other says just the opposite. Beijing is hesitant to intervene directly—militarily or by ending the territory’s autonomy—because Hong Kong remains too important, both as a financial center and as an international symbol. Any direct intervention—for example, by sending in the People’s Armed Police—would be devastating to China’s international image while it is already burdened with a slow economy and a trade war with the United States.

 

Which is right? Is Hong Kong too irrelevant or too important for China to directly intervene? There is evidence to buttress both sides.

 

At the time the territory was handed back to China in 1997, after one hundred and fifty years as a British colony, Hong Kong accounted for nearly 20% of China’s GDP. Today, that figure is less than 3%. The neighboring Chinese city of Shenzhen, just across the border, has surpassed Hong Kong in the size of its economy and its still-soaring annual growth rate. Shenzhen and Guangzhou are already surpassing Hong Kong when it comes to start-ups and new technologies.

 

Hong Kong saw its role as the entrepot for trade with China shrink once the mainland joined the World Trade Organization after 2001. Chinese citizens now have their own stock markets, and more than 150 companies have bypassed Hong Kong to list on major American stock exchanges.

 

According to this view, China’s communist leaders under hardline ruler Xi Jinping might be satisfied to let Hong Kong burn, so long as the contagion doesn’t spread north. With its liberal values and British colonial holdovers, Hong Kong was always a troublesome source of suspicion and mistrust.

 

If Hong Kong’s internal conflagration hastens its replacement by Shenzhen or Shanghai as China’s most important city, Xi might actually see that as a long-term benefit. If the protests eventually lead to an exodus of local elites with overseas passports, as well as foreigners, multinationals, and foreign media outfits that make Hong Kong their headquarters, so much the better.

 

But the competing view holds that while Hong Kong’s importance to China’s overall GDP has lessened, the territory remains crucial to the mainland’s economic well-being. 

 

Hong Kong still accounts for more than 60% of all foreign direct investment flowing into the mainland, and that number has grown despite the months of protest. Hong Kong’s stock exchange is still the third largest, behind New York and Tokyo and ahead of London, and China’s markets remain closed to foreign investors. Hong Kong’s credit rating is higher, its legal system is internationally respected, and money can be freely exchanged. In China, strict capital controls prevent this.

 

President Donald Trump’s trade war has heightened the importance of Hong Kong being treated globally as a legal economic entity distinct from China. Chinese officials have warned Washington that the Hong Kong Human Rights and Democracy Act of 2019, which unanimously passed the House of Representatives last month and enjoyed bi-partisan support in the Senate, is “an attempt to use Hong Kong to contain China’s development”, in the words of Yang Guang, a spokesman for the Hong Kong and Macao Affairs Office which handles the territory’s affairs in Beijing.

 

There are even opposed views on whether the continuing unrest in Hong Kong poses the threat of contagion to the mainland.

 

One theory maintains that Beijing’s rulers fear that the pro-democracy protests in Hong Kong might spark similar demonstrations in Guangdong and other Chinese cities. They are wary of making even common-sense concessions—for example, allowing an independent commission to examine police brutality or restarting the stalled political reform process—for fear of sparking the same demands at home.

 

Yet the opposite position is also plausible. That is, with strict control of the mainland media and the internet, Chinese propagandists have succeeded in painting the Hong Kong protests as an anti-China secessionist movement launched and financed by notorious ‘black hand’ foreign foes, namely the American CIA. Instead of sympathy, many mainland Chinese have only disdain for the residents of Hong Kong, whom they see as wealthy, spoiled, and lacking in love for the motherland.

 

Chinese officials lately seem to be signaling more repression and no reform as the road to resolving Hong Kong’s crisis. China’s Vice-Premier, Han Zheng, told Hong Kong Chief Executive Carrie Lam that “extreme, violent, and destructive” activities would not be tolerated. Mainland officials have called for strengthening China’s supervision over the territory, imposing ‘more patriotic education’ in Hong Kong schools, introducing stricter vetting of civil servants to ensure loyalty to Beijing, and implementing a long-delayed national security law.

 

Those calls are only likely to grow louder after the violence of 11 November, which included widespread vandalism, the forced shutdown of university campuses, and the police shooting of a protester in the stomach.

 

It is becoming more apparent that Beijing’s leadership is caught somewhere in between—fearful of allowing the unrest to continue, yet paralyzed from intervening by the concern of making a tragic, perhaps fatal, mistake. 

 

Chinese communist rulers are now facing the most serious political unrest on their territory since the June 1989 pro-democracy protests in Tiananmen Square. The massacre by People’s Liberation Army troops on that occasion cost China nearly a decade of sanctions, international isolation, and restrictions on technology transfers.

England’s Richard III as Murderous, Royal Thug

 

William Shakespeare’s bone-chilling play Richard III portrays England’s deformed monarch as a murderous thug, one of the great villains of world history. That portrayal is underlined yet again in a new production of the play that opened last week at the Gerald Lynch Theater of John Jay College in New York. The play, staged by Ireland’s DruidShakespeare company, is part of the White Light Festival sponsored annually by the city’s Lincoln Center. It stars Aaron Monaghan in a scalding, searing performance as the duplicitous, arrogant and diabolical Richard, who marched to the throne amid a long series of bloody murders and executions conducted on his orders.

 

The blood-soaked drama, which has been staged thousands of times, is one of Shakespeare’s great plays, and here, at Lincoln Center, it gets a splendid production, an absorbing staging that builds the drama and tension of the tale as it unfolds in all of its treachery and gore.

 

The play starts just after the two major houses in England have ended a long war. Richard, a young royal hobbling about on his canes, miserably dragging himself around the royal court, plots to become King. He hires some ambitious friends and has them do his dirty work for him. Slowly but surely, Richard climbs up the power ladder and takes the crown (he puts it on his own head, a la Napoleon several hundred years later). He battles the men in his kingdom, but he battles the women, too, even killing a few. He has horrific confrontations with his mother and the mother of some men he had slain. He denies culpability most of the time, pinning the slayings on someone else. When he does admit his guilt, he tries to convince the women that he was right and that, oh well, he’ll marry one of their daughters to make up for it. 

 

As the play unfolds, we meet numerous characters, good and bad, all caught up in the gory hurricane Richard has unleashed. Most of them don’t survive it.

 

The wonder of it all is how on earth Richard did not expect the friends and relatives of those he had murdered, at some point, to come after him.

 

The play has a streamlined cast (sometimes 50 people appear in other productions). Director Garry Hynes does a laudable job of running the play and mixing solo appearances by Richard at some points with Richard in groups at others. She gives Richard a lot of humor and at times has him paint himself as a victim of some sort, a royal whose only goal is to unite the troubled kingdom and move on peacefully with all. 

 

Richard III is not for everyone. It lasts about three hours and the plot is extremely complicated. You need a scorecard to keep track of who is backing whom, who is conspiring against whom and who is being butchered in the King’s name. The most famous murders in the play, of course, are those of two children, Richard’s nephews, Edward V and Richard of Shrewsbury, whom Richard III fears. He has them carted off to the Tower of London and then slain. There are so many murders going on in the play, one right after the other, that the boys’ executions, sad as they are, get a bit lost in the story, mere historical road kill. You’ve got to have a stomach for blood and violence, too. Richard III makes Tony Soprano look like the President of the local Chamber of Commerce.

 

Is Richard III historically accurate? Not really. Richard suffered from scoliosis, a curvature of the spine that leaves one shoulder lower than the other and causes an odd walk. He was not a hunchback with a withered arm, as Shakespeare portrayed him, and did not drag himself around leaning on two canes, as most of the actors who have played him, including Monaghan, do. He was involved in some royal chicanery but did not commit all of the murders that Shakespeare attributed to him (the Bard was heavily influenced by the Tudor age in which he lived), although British historians are still divided on the murders of the two young boys. When Richard first appears in the story he seems very broken and, referring to himself, scowls to the audience, “thou lump of foul deformity.” He blurts out, too, in the first few minutes of the play, that he has set himself on a path of destruction to claim the crown and keep it. He does just that, too, with a nasty group of henchmen.

 

There was royal turbulence in Richard’s era and the real Richard was caught up in it. The crown changed hands four times in just 22 years and in some of those years Richard had to live outside of the country for his own protection. 

 

Hynes also does a fine job of showcasing the sweep of Shakespeare’s drama, giving the audience a deep and rich look at the royal court in that era. There are also a number of fine battle scenes at the end of the play, where Richard, in mortal danger, shouts that famous line “a horse, a horse, my kingdom for a horse.”

 

Director Hynes gets a memorable, truly memorable performance from Monaghan as Richard, but she also gets good work from a talented ensemble cast.  Some of the fine performances are by Garrett Lombard as Hastings,  Marie Mullen as Queen Margaret, Rory Nolan as Buckingham, Marty Res as Clarence,  Frank Blake as Dorset, Bosco Hogan as King Edward IV, Jane Brennan as the Duchess of York and John Olohan as Stanley. 

 

They all work to give the audience a memorable play and a fine look at British history. 

 

At the end of the play I walked out of the theater on to the chilly, chilly streets of New York and started thinking about this chilly, chilly play.

 

PRODUCTION: The play is produced by the DruidShakespeare Company in conjunction with the White Light Festival. Sets and Costumes: Francis O’Connor. Lighting: James F. Ingalls. Sound: Gregory Clarke. Fight Choreographer: David Bolger. The play runs through November 23.

What Have the Latest Impeachment Hearings Revealed?

 

When I wrote for the Journal-Courier, I had to send in my columns by Sunday evening for Tuesday publishing. I was not able to broadcast breaking news. I’m not a reporter, that’s their job. My job was to put together some writing that synthesized as much breaking news as I could. The heat was dissipating, it was time for light.

 

I believe that’s the opinion writer’s job. But often I could not say anything about what had changed since Sunday. If my synthesis was good, it would blend well with and partly explain the latest bombshell. Or perhaps my synthesis was already outdated.

 

Because I’m now self-published, but no longer printed, I enjoy many new degrees of freedom. It’s Tuesday, this is going out later today, and I can be almost up-to-the-minute about impeachment. The evidence is easy to find. Wikipedia has a lengthy narrative about “Impeachment inquiry against Donald Trump”. The Washington Post put together a timeline through last week.

 

During last week’s hearing, I thought the most effective defense mounted by the forever-Trumpers was that nothing happened. Whatever may have been said or done, in the end President Zelensky did not announce an investigation of the Bidens and the military aid was released. No harm, no foul. I didn’t buy it for a second, because the facts we have learned already are so overwhelming that I’ve made up my mind. But if someone retains some positive view of Trump, for whatever reason, such an overview of events is greatly reassuring.

 

That defense was bogus, and the last few days of news are killing it, because in fact a lot happened. Anyone, especially someone both politically astute and internationally vulnerable like Zelensky, would understand from the July 25 phone call that Trump offered a meeting only if he got something specific in return. If anyone could doubt that Trump was demanding specifically a Biden investigation, yesterday’s evidence about Gordon Sondland’s mobile phone call from a Kiev restaurant on July 26 demonstrates Trump’s overriding focus on investigations.

 

But the key sequence of events began much earlier. Zelensky was elected on April 21. On April 23, Rudy Giuliani tweeted: “Now Ukraine is investigating Hillary campaign and DNC conspiracy with foreign operatives including Ukrainian and others to affect 2016 election. And there's no Comey to fix the result.” That wasn’t truth, it was pressure.

 

Less than 3 weeks later, Zelensky and his advisers met on May 7 to talk about the Trump-Giuliani pressure to open investigations and avoiding entanglement in the American elections. He hadn’t yet been inaugurated, which happened on May 20.

 

Fiona Hill, a top deputy at the National Security Council inside the White House, explained to Congress about discussions in the White House in May, showing they already knew that Zelensky was feeling pressure to investigate the leading Democratic candidate.

 

Zelensky and his top advisors continued talking among themselves about the pressure that was being exerted on them and what to do about it. They realized that the life-saving military aid was included in the deal Trump was offering even before August 28, when Politico published an article about it. Top Ukrainian officials knew already in early August. William Taylor, the new acting ambassador to Ukraine after Marie Yovanovitch was fired, afterwards characterized Ukraine’s defense minister as “desperate”.

 

Trump and Mulvaney and Pompeo and who knows who else decided to release the aid on September 11, only after Democrats in Congress threatened to investigate. The whistleblower spilled the beans to Congress on September 25 and to the public the next day about why it had been withheld.

 

We know now that Zelensky was preparing to go on TV, in particular on Fareed Zakaria’s show on CNN, with a statement about Trump’s investigations. As soon as military aid was resumed, he cancelled the interview, because he had never wanted to do that.

 

So we already know what happened, and it wasn’t nothing. President Zelensky was desperate for a meeting with Trump, and for good reasons. Trump said, “only if you do this favor”. In Ukraine, that message was being pounded home by people who said they were direct representatives of Trump – Rudy Giuliani, Rick Perry, Gordon Sondland. And the American face of our efforts to help those Ukrainians trying to reduce corruption, Ambassador Yovanovitch, had been sent home, a removal accomplished by the Giuliani-Trump team.

 

Zelensky’s official communications with Americans displayed the heavy weight he put on a meeting with Trump. After the aid was released, the pressure continued. On October 3, Trump said on the White House lawn: “I would say, President Zelensky, if it was me, I would start an investigation into the Bidens,” and added the Chinese for good measure.

 

But Zelensky wasn’t going to do what Trump demanded, an announcement to the world that the Ukrainian government was investigating the Bidens. The Congressional Republican overview is false, because like Trump, they don’t care about Ukraine. President-elect and then President Zelensky refused for 4 months to do anything in response to Trump’s insistence on investigations, even though he desperately desired a meeting with Trump. When he found out that military aid was being withheld, he still refused for a month to become entangled in our election.

 

The Trump-Giuliani team caused great anxiety in the Ukrainian government. But with the highest stakes involved, the political neophyte Volodymyr Zelensky said “no” to corruption.

 

Later today there will be more news, and for many days to come. No facts have come to light that cast any part of this story into doubt. The timeline gets longer and more intense with each revelation. 

 

I don’t know how many people the Republican sleight-of-hand can fool. I don’t know if the elections last week point to any turn against Trump among Southern voters. I don’t know what tomorrow’s headlines will be.

 

But I know corruption when I see it.
