Presenting my ‘Judas Superstar’ paper at the South West PG Theology and Religion conference

Stephanie Roberts, a current MA Theology and Religion student, talks about her experience of attending and presenting a paper at a recent PG conference, hosted by the University of Exeter.

On Saturday 6 May, the 22nd Joint Postgraduate Conference on Theology and Religion was held at Exeter for the first time. With a total of 30 research papers presented, the day proved to be a fascinating exploration of the current research undertaken by MA and PhD students in the South West.

The presentations were split into themes covering everything from ‘Belief and Practice in Antiquity’ to ‘Philosophy, Ethics and Revelation’, which gives some indication of the great diversity of papers explored during the day. The eclectic nature of this conference inspired a feeling of open-mindedness amongst the guests and speakers. This relaxed atmosphere, and the words of encouragement from my fellow postgraduate students, somewhat eased my nerves about presenting a paper for the first time later in the day.

For my part, I was curious to learn more about the areas of theology that some speakers had chosen to dedicate three years of their lives to, and yet had never crossed my path of study. I found throughout education that many students (including myself at times!) have a ‘but will this be in the exam?’ approach to learning, unwilling to clutter their minds with information that will not increase their final percentage. It was refreshing to be involved in a day where the audience were enthusiastic to widen their view of theology and engage in these new areas of study that are, no doubt, often outside their own chosen sub-discipline.

Following the coffee break, it was time for me to give my first presentation at an academic conference, with a paper entitled ‘Judas Superstar? A reflection on the relationship between Jesus and Judas in Jesus Christ Superstar’. Despite my apprehension as a fresh-faced MA student addressing an audience who, generally, had much more experience of academic conferences, I was relieved that the questions were less intimidating than I had feared!

As far as academic papers go, I found this one particularly enjoyable to present, as the movie clips and rock ballads naturally accompanied my analysis, and the sight of audience heads bobbing along to Judas’ solos made me feel more at ease throughout the presentation. It seems the perfect way to have introduced myself to this academic rite of passage.

One paper I found particularly thought-provoking was by Amna Nazir, a PhD student of Law and Theology with cross-institutional supervision from Birmingham City University and the University of Birmingham. She demonstrated the necessity of interdisciplinary study in her paper, ‘The Death Penalty in Islam: A Religious Necessity?’. Having never studied Islam, I knew very little about this topic, but I was intrigued by the provocative title.

Nazir explored the death penalty in Sharia law and international law and argued that, until there is absolute justice in the courts and governments of these countries, the death penalty cannot be right. She explained that reports of court cases are often restricted, so it is impossible to know whether countries have properly adhered to both Sharia and international law. Furthermore, the great variance in its practice across Islamic countries indicates a lack of consensus, even within the faith, about when the death penalty is to be invoked.

I found the paper incredibly interesting to listen to, and the discussion that followed was just as rich. It became clear that only a small number of Muslim voices support this view and that, where Westerners attempt to press their opposition from outside, their criticism is met with disdain or suspicion in Islamic countries. Ultimately, if the application of the death penalty in Islam is to change in any way, the impetus must come from within the religion.

It was exciting for me to be involved with such a packed day of research papers and, in my view, the conference aptly demonstrated the active interest in theology’s many manifestations. The postgraduate community highlighted its willingness to engage in interdisciplinary study and to recognise the impact religious studies can have in a modern context.

Arguing about Empire: the Dreyfus Affair and the Fashoda Crisis, 1898

This article was originally posted on Not Even Past, the public history website of The University of Texas at Austin. Reproduced with kind permission.

We are very happy to announce a new online collaboration with our colleagues in the Department of History at the University of Exeter in the UK. Not Even Past and Exeter’s Imperial & Global Forum, edited by Marc Palen (UT PhD 2011) will be cross-posting articles, sharing podcasts, and sponsoring discussions of historical publications and events.

We are launching our joint initiative this month with a blog based on a new book by two Exeter historians, Arguing About Empire: Imperial Rhetoric in Britain and France.

By Martin Thomas and Richard Toye 

“At the present moment it is impossible to open a newspaper without finding an account of war, disturbance, the fear of war, diplomatic changes achieved or in prospect, in every quarter of the world,” noted an advertisement in The Times on May 20, 1898. “Under these circumstances it is absolutely essential for anyone who desires to follow the course of events to possess a thoroughly good atlas.” One of the selling points of the atlas in question – that published by The Times itself – was that it would allow its owner to follow “most minute details of the campaign on the Atbara, Fashoda, Uganda, the Italian-Abyssinian conflict &c.” The name Atbara would already have been quite familiar to readers, as the British had recently had a battle triumph there as part of the ongoing reconquest of the Sudan.

Fashoda, underlined in red, lay on the eastern margins of the Sudanese province of Bahr el-Ghazal. As this 1897 map indicates, the French Foreign Ministry, too, needed help in identifying Marchand’s location. (Source: MAE, 123CPCOM15: Commandant Marchand, 1895-98.)

Fashoda, much further up the Nile, remained, for the time being, more obscure. Newspaper readers might have been dimly aware that an expedition led by the French explorer Jean-Baptiste Marchand was attempting to reach the place via the Congo, but his fate remained a mystery. Within a few months, however, Captain Marchand and his successful effort to establish himself at Fashoda would be the hottest political topic, the subject of multitudes of speeches and articles on both sides of the English Channel as the British and French Empires collided, or at least scraped each other’s hulls. It never did come to “war,” but there was certainly sufficient “disturbance, fear of war and diplomatic changes achieved or in prospect” to justify a Times reader purchasing an atlas, perhaps even the half-morocco version, “very handsome, gilt edges,” that retailed at 26 shillings.

The clash at Fashoda was both a seminal moment in Anglo-French relations and a revealing one with respect to imperial language. In addition to rhetoric’s role in stoking up tensions, there are further angles to be considered. Falling at the height of the Dreyfus affair, in which a Jewish Army officer, Captain Alfred Dreyfus, endured a protracted retrial after being wrongly convicted of spying for Germany, British official readings of the Fashoda crisis were also conditioned by the growing conviction that the worst aspects of French political culture – an overweening state, an irresponsible military leadership, and an intrusive Catholic Church – were too apparent for comfort.

Viewed from the British perspective, dignity, above all, was at stake. The French were obsessed with the prospect of their own impending humiliation; whereas the British, from a position of strength, showed verbal concern for French amour propre, even while their own actions seemed guaranteed to dent it severely.

French Poodle to British Bulldog: “Well if I can’t have the bone I’ll be satisfied if you’ll give me one of the scraps.” J. M. Staniforth, Evening Express (Wales).

What the rhetoricians of both countries had in common was their willingness to discuss the fate of the disputed area exclusively as a problem in their own relations, without the slightest reference to the possible wishes of the indigenous population. This is unsurprising, but there was more to the diplomatic grandstanding than appeared at first sight. It was the Dreyfus case that best illustrated how embittered French politics had become.

Dreyfus’s cause divided French society along several fault lines: institutional, ideological, religious, and juridical. By 1898 the issue was less about the officer’s innocence and more about the discredit (or humiliation) that would befall the Army and, to a lesser degree, the Catholic Church (notably imperialist institutions), were the original conspiracy against him revealed. So much so that the writer Emile Zola was twice convicted of libel over the course of the year, after his fiery open letters in early 1898 in the new print voice of Radical-Socialism, L’Aurore, compelled the Dreyfus case to be reopened.

Twelve months before Dreyfus was shipped back from Devil’s Island to be retried a safe distance from Paris at Rennes, Zola’s convictions confirmed that justice ran a poor second to elite self-interest.

High Command cover-ups, the ingrained anti-Semitism of the Catholic bishopric, and the grisly prison suicide on August 31 of Colonel Hubert Joseph Henry, the real traitor behind the original spying offense, brought French political culture to a new low. From the ashes would spring a new human rights lobby, the League of the Rights of Man (Ligue des droits de l’homme). Meanwhile, the Dreyfusard press, led since 1897 by the indomitable, if obsessive, L’Aurore, wrote feverishly of alleged coup plots to which Marchand, once he returned from Africa, might or might not be enlisted.

Charles Léandre, Caricature of Henri Brisson, Le Rire, November 5, 1898. Here caricatured as a Freemason.

At the start of November, Henri Brisson’s fledgling government finally decided to back down. A furious Marchand, who had arrived in Paris to report in person, was ordered to return and evacuate the mission. The right-wing press, fixated over the previous week on the likely composition of the new government and its consequent approach to the Dreyfus case, resumed its veneration of Marchand. La Croix went furthest, offering a pen portrait of Marchand’s entire family as an exemplar of nationalist rectitude. The inspiring, if sugary, narrative was, of course, a none-too-oblique way of criticizing the alleged patriotic deficiencies of the republican establishment and siding with the army as the institutional embodiment of an eternal (and by no means republican) France.

Something of a contrived crisis – or, at least, an avoidable one – Fashoda was also a Franco-British battle of words in which competing claims of imperial destiny, legal rights, ethical superiority, and gentility preserved in the face of provocation belied the local reality of yet more African territory seized by force. If the Sudanese were the forgotten victims in all this, the Fashoda crisis was patently unequal in Franco-British aspects as well.

“Come Professor. You’ve had a nice little scientific trip! I’ve smashed the dervishes — luckily for you — and now I recommend you to pack up your flags and go home!” John Tenniel, Punch, Oct. 8, 1898.

On the imperial periphery, Marchand’s Mission was outnumbered and over-extended next to Kitchener’s Anglo-Egyptian expeditionary force. In London a self-confident Conservative government was able to exploit the internal fissures within French coalition administrations wrestling with the unending scandal of the Dreyfus case. Hence the imperative need for Ministers to be seen to be standing up in Marchand’s defense. In terms of political rhetoric, then, the French side of the Fashoda crisis was conditioned by official efforts to narrow the country’s deep internal divisions in the same way that the Republic’s opponents in politics, in the press, and on the streets sought to widen them.

Martin Thomas and Richard Toye, Arguing about Empire: Imperial Rhetoric in Britain and France

Originally posted 1 May 2017

A Cathedral field trip

This post originally appeared on Katherine McDonald’s personal blog. Katherine is a lecturer in Classics and Ancient History at the University of Exeter.

One of my academic specialisms is the study of inscriptions, otherwise known as epigraphy. Most of the material I work with is epigraphic, and sometimes this is one of the biggest challenges in my work. Learning how to read inscriptions is a skill that you need to learn by trial-and-error and, ideally, by having someone with more experience than you show you the ropes. So how do you teach epigraphy to graduate students? With a field trip to Exeter Cathedral, of course.

There are a few practical skills associated with epigraphy, the most important of which is squeeze making. Squeezes are paper impressions of inscriptions, formed by hitting wet filter paper into an inscription with a specially made brush. Once the paper is dry, it holds the impression of the stone permanently. Squeezes can be rolled up and stored for centuries (for example, in Oxford) – and even posted to scholars on the other side of the world. They allow scholars anywhere in the world to study inscriptions which they could not see in person. And, strangely enough, the negative image they provide is often much easier to read than the original inscription, revealing details that were not visible to the naked eye. They can also be easily scanned and digitised.

With the help of cathedral staff, we chose an appropriate stone for making a squeeze. Here’s Dr Charlotte Tupman (Digital Humanities, University of Exeter) demonstrating to some of the group.

Charlotte demonstrates the exact “thwack” noise the students should aim for with the squeeze brush.
The filter paper is dampened with de-ionised water.
The reverse of one of our finished squeezes – you can see a huge amount of detail of the surface of the stone (and also a bit of dirt from the floor).

When I popped back the following morning, I found perfectly dried squeezes all ready to be taken up to the department. It was a really fun and practical (if slightly messy) session, and we ended up with a great squeeze of this slab. If you want to see squeezing in action, here is a page with two videos from a group of students in Athens showing how the process works.

It’s not always possible to make a squeeze, though. Some stone is too soft, or too damaged, to be hit with a squeeze brush without damaging it further. Many of the inscriptions from pre-Roman Italy, for example, are on tufa – a light, bubbly, porous stone which would be more or less impossible to squeeze. In these situations, knowing how to use light and photography can be really helpful. In Exeter Cathedral, some of the most damaged slabs would be too fragile to squeeze, so we experimented with reading them in different ways.

Students reading worn inscriptions using a torch

Using a torch can illuminate all sorts of details on difficult-to-read inscriptions. Many museums and churches are dimly lit, so any kind of light is helpful – but a steeply raking light, at a very sharp angle to the surface, is the most helpful. Here’s an example taken by one of the students last week.

[Photograph: the inscription in natural light]

With just the natural light available, this inscription is partly readable, but not very clear. There are distinctly worn sections where the lettering is difficult to read. If we introduce a light at a steeply raked angle, we see something very different.

[Photograph: the same inscription under a steeply raked light]

With the light at this angle, the name “Katherine Berry” is suddenly revealed! You might be able to see the other lines more clearly as well – but of course the best thing is to take a number of photographs or readings with the light at different angles, to illuminate different letters on the stone.

What did we get out of this session? We weren’t dealing with Greek and Roman inscriptions this time, though the process and principles are much the same anyway. The students responded with lots of thoughts and questions, but there were two responses that really stood out for me:

(1) Keep in mind the language of the period

In the picture above, you might be able to make out that Katherine Berry “dyed” in 1687. To us, this seems like an “incorrect” spelling for the word died. But in 1687, English spelling wasn’t standardised in the way it is now, and this spelling was completely valid. So as readers we have to be sensitive to the practices of the time, otherwise our false expectations could affect our reading and lead us to assume, for example, that this couldn’t be a letter Y at all.

This is equally true of Greek and Latin inscriptions. The spellings – and even the shapes of the letters – are not what we are used to. The alphabet used in fifth-century Athens, for example, is not the same as the Ionic Greek alphabet that we use to read their texts – and this can make a big difference. Similarly, in the time of Augustus, as well as later and earlier, there are inscriptions with the spelling pleps as well as plebs – both are fine, and it’s our own later standards that make one look wrong. To read inscriptions accurately, we need to allow for spellings and letter-shapes to vary.

Squeeze of IG I(3) 1, late C6th–early C5th. Notice how the lambda and sigma of “Salamin” (the end of the second line) are a different shape to how we would print the Greek alphabet now.

The pleps spelling in action in a (probably) Augustan inscription. CIL 06 40310.

(2) Epigraphy is subjective

If you’ve only ever looked at inscriptions as they are written in textbooks, or printed in an edition, they seem quite fixed and objective. But as soon as you start trying to read inscriptions which are worn, damaged or contain mistakes made by the original engraver, you realise how many judgement calls have to be made by the epigraphist. With experience and practice, you can get better at spotting traces of letters and making educated guesses, but sometimes they really are guesses. This is why, if you are interested in using inscriptions in your research, it is so important to see them in person (or as a squeeze) to decide whether you agree with past interpretations.

Many thanks to Exeter Cathedral and Charlotte Tupman for all their help with this session.

If this post has inspired you to take a closer look at inscriptions in buildings near you, please let me know in the comments or on Twitter. (But please don’t take squeezes without permission!)

 

Oscar Wilde would have been on Grindr – but he preferred a more clandestine connection

This post originally appeared on The Conversation. This article was written by Jack Sargent, PhD student in History. 

It has never been so easy to find love, or sex, quickly. In 2017, there is nothing shameful or illicit about using dating apps or digital tools to connect with someone else. More than 100 years ago, of course, things were very different.

Oscar Wilde and other men and women who, like him, desired same-sex relationships, had to resort to attending secret parties to meet potential partners. The idea that it would become normal to meet and flirt with an ever-changing group of strangers, sending explicit pictures or a few cheeky sentences on a device you hold in your hand, would have amused the writer. The openness about conducting such relationships would have amazed him.

But would Oscar Wilde have enjoyed the most famous gay dating app, Grindr, and the way it has contributed to gay culture? We know he would probably have welcomed the fact that gay men and women could easily meet new sexual partners. In the late-Victorian period, Wilde’s membership of clandestine homoerotic networks of clubs and societies was far more furtive. They were gatherings of forbidden passions and desires, shrouded in secrecy.

Wilde loved being part of this underground community. He adored being with crowds of immaculately dressed people in beautiful rooms. He believed the most important goal in life was to experience emotion and sensuality, to have intense connections and embrace beauty.

This belief came from his involvement in a movement called Aestheticism. Late-Victorian aesthetes proposed that beauty and sensation were the keys to an individual’s authentic experience of life. They argued that beauty and connections with beauty should be pursued even at the expense of conventional systems of morality, and what society considered right or wrong. For Wilde, this meant he thought about whether it was aesthetically – not morally – right to sleep with someone.

Oscar Wilde was born in Dublin in 1854 and died in Paris in 1900, a few years after his release from jail for “gross indecency” with other men. Before his imprisonment, Wilde was (I think almost uniquely) shockingly positive and active about his desire for other men. This was a time when same-sex desire and intercourse was illegal, seen as illicit and monstrous – an abhorrent illness which should be exorcised from Christian culture.

Wilde met and slept with many other men, continuing relationships for years, months, weeks, or maybe even only a night, before effectively dropping them and moving on. Is this so different to how gay relationships are conducted now?

Every part of gay culture today stems from the way that Wilde and the group of men he mixed with lived their lives. Their philosophy that they should have their own dedicated spaces to meet still stands. At first they evolved into gay bars and clubs. Now those physical spaces are closing as members of the gay community go online to meet each other.


The importance of being on Grindr. Shutterstock

Grindr, now eight years old, allows people to make connections, if they like the look of someone’s body. It is the same type of connection that Wilde was interested in, but it doesn’t give people the intense, sensual involvement with another human being he was looking for. You might see someone you like on Grindr, but there is no promise they will respond to your message. Downloading and using the app doesn’t automatically make you part of a network of people that are thinking and feeling intense emotional sensations. Wilde, at his parties and gatherings, taking risks and breaking the law, must have felt part of a group who came together to all feel something special and exciting.

This excitement was not only to do with the illegal nature of the acts undertaken in secret. It had something to do with the vibrancy and sensuality offered by being in a particular place, engaging sensually and physically with other people, reading them for signs of interest, right down to the smallest gesture.

Digital declarations

This is not possible on Grindr. Grindr offers instead a potentially unlimited amount of possible connections, but connections which are digital, not physical. Once downloaded, the app offers a digital network of people that can be loaded and reloaded with a simple swipe of the screen. The continual possibility of meeting someone different or better means that users don’t necessarily need to commit to connecting. It seems we are in danger of creating a generation of potentially disconnected individuals, who rather than going to a gay bar, choose to spend the night in, waiting for a stranger to send them a message.

Had he been able to, Wilde would have downloaded Grindr, of that I think we can be certain. Would he have liked it? Well, he may have found some beauty in the technology and the freedom it represents. And perhaps, sometimes, he would have enjoyed the novelty.

But he would probably have preferred the clubs, societies and networks he engaged with during the late 1800s. For while they did not promise successful or happy encounters, they did foster physical relationships between men within spaces of affirmation, liberation and fulfilment. And although Grindr also offers the chance for casual sex, I think late Victorian gay men would have been saddened by the lack of opportunity for their counterparts today to connect emotionally with others.

Being lovesick was a real disease in the Middle Ages

This post originally appeared on The Conversation. The piece was written by Laura Kalas Williams, Postdoctoral Researcher in Medieval Literature and Medicine at Exeter. 

Love sure does hurt, as the Everly Brothers knew very well. And while it is often romanticised or made sentimental, the brutal reality is that many of us experience fairly unpleasant symptoms when in the throes of love. Nausea, desperation, a racing heart, a loss of appetite, an inability to sleep, a maudlin mood – sound familiar?

Today, research into the science of love recognises the way in which the neurotransmitters dopamine, adrenalin and serotonin in the brain cause the often-unpleasant physical symptoms that people experience when they are in love. A study in 2005 concluded that romantic love was a motivation or goal-orientated state that leads to emotions or sensations like euphoria or anxiety.

But the connection between love and physical affliction was made long ago. In medieval medicine, the body and soul were closely intertwined – the body, it was thought, could reflect the state of the soul.

Humoral imbalance

Text and tabular of humours and fevers, according to Galen, c.1420. In MS 49 Wellcome Apocalypse, f.43r. Wellcome Library

Medical ideas in the Middle Ages were based on the doctrine of the four bodily humours: blood, phlegm, black bile and yellow bile. In a perfectly healthy person, all four were thought to be perfectly balanced, so illness was believed to be caused by disturbances to this balance.

Such ideas were based on the ancient medical texts of physicians like Galen, who developed a system of temperaments which associated a person’s predominant humour with their character traits. The melancholic person, for example, was dominated by the humour of black bile, and considered to have a cold and dry constitution.

And as my own research has shown, people with a melancholic disposition were thought, in the Middle Ages, to be more likely to suffer from lovesickness.

The 11th-century physician and monk, Constantine the African, translated a treatise on melancholia which was popular in Europe in the Middle Ages. He made clear the connection between an excess of the black bile of melancholy in the body, and lovesickness:

The love that is also called ‘eros’ is a disease touching the brain … Sometimes the cause of this love is an intense natural need to expel a great excess of humours … this illness causes thoughts and worries as the afflicted person seeks to find and possess what they desire.

Curing unrequited love

Towards the end of the 12th century, the physician Gerard of Berry wrote a commentary on this text, adding that the lovesick sufferer becomes fixated on an object of beauty and desire because of an imbalanced constitution. This fixation, he wrote, causes further coldness, which perpetuates melancholia.

Whoever is the object of desire – and in the case of medieval religious women, the beloved was often Christ – the unattainability or loss of that object was a trauma which, for the medieval melancholic, was difficult to relieve.

But since the condition of melancholic lovesickness was considered to be so deeply rooted, medical treatments did exist. They included exposure to light, gardens, calm and rest, inhalations, and warm baths with moistening plants such as water lilies and violets. A diet of lamb, lettuce, eggs, fish, and ripe fruit was recommended, and the root of hellebore was employed from the days of Hippocrates as a cure. The excessive black bile of melancholia was treated with purgatives, laxatives and phlebotomy (blood-letting), to rebalance the humours.


Blood-letting in Aldobrandino of Siena’s ‘Régime du Corps’. British Library, MS Sloane 2435, f.11v. France, late 13thC. Wikimedia Commons

Tales of woe

It is little wonder, then, that the literature of medieval Europe contains frequent medical references in relation to the thorny issue of love and longing. Characters sick with mourning proliferate in the poetry of the Middle Ages.

The grieving Black Knight in Chaucer’s The Book of the Duchess mourns his lost beloved with infinite pain and no hope of a cure:

This ys my peyne wythoute red (remedy),
Alway deynge and be not ded.

In Marie de France’s 12th-century Les Deus Amanz, a young man dies of exhaustion when attempting to win the hand of his beloved, who then dies of grief herself. Even in life, their secret love is described as causing them “suffering”, and that their “love was a great affliction”. And in the anonymous Pearl poem, a father, mourning the loss of his daughter, or “perle”, is wounded by the loss: “I dewyne, fordolked of luf-daungere” (I languish, wounded by unrequited love).


The lover and the priest in the ‘Confessio Amantis’, early 15th century. MS Bodl. 294, f.9r. Bodleian Library, Oxford University

The entirety of John Gower’s 14th-century poem, Confessio Amantis (The Lover’s Confession), is framed around a melancholic lover who complains to Venus and Cupid that he is sick with love to the point that he desires death, and requires a medicine (which he has yet to find) to be cured.

The lover in Confessio Amantis does, finally, receive a cure from Venus. Seeing his dire condition, she produces a cold “oignement” and anoints his “wounded herte”, his temples, and his kidneys. Through this medicinal treatment, the “fyri peine” (fiery pain) of his love is dampened, and he is cured.

The medicalisation of love has persisted, as the sciences of neurobiology and evolutionary biology show today. In 1621, Robert Burton published the weighty tome The Anatomy of Melancholy. And Freud developed similar ideas in the early 20th century, in the book Mourning and Melancholia. The problem of the conflicted human heart clearly runs deep.

So if the pain of love is piercing your heart, you could always give some of these medieval cures a try.

Children have long been unfairly hit by US presidential executive orders

This post originally appeared on The Conversation. This piece was written by Rachel Pistol, Associate Research Fellow (History). 

Around 75 years ago, in February 1942, US President Franklin Delano Roosevelt signed Executive Order 9066, which led to the forced relocation and internment of more than 110,000 individuals of Japanese ancestry. The majority of them were American citizens, and a large proportion were children.

But unlike President Trump’s 2017 executive order to halt immigration and ban refugees from American soil, Roosevelt’s sweeping political move did not provoke any protest or dissent. Both presidents mentioned the notion of “national security” in their orders, and both decrees were said to be aimed at specific national groups. So is President Trump merely copying the policy of one of his more popular predecessors?

From the moment the US entered World War II in late 1941, all “enemy aliens” living in America – German, Austrian, Italian, and Japanese – were subject to restrictions on their freedom. These included the imposition of curfews and a ban on owning radios. So the real significance of EO9066, as it is known, was that it authorised the detention not just of enemy aliens, but also of American citizens. In theory, any American citizen could be relocated by order of the military.

But EO9066 was created for a particular purpose, which was to enable the internment of Japanese Americans living on the West Coast of America. It also made it possible for further orders to be authorised, such as Civilian Exclusion Order No.79, which ordered that “all persons of Japanese ancestry, both alien and non-alien” be excluded from a portion of the West Coast.


Japanese American children pledging allegiance in California, 1942. US Library of Congress

Yet one of the most striking things about EO9066 is that, unlike Trump’s executive order, it does not once talk about nationality. Instead, Roosevelt gave military commanders the right to “prescribe military areas in such places and of such extent as he or the appropriate military commander may determine, from which any or all persons may be excluded”.


Roosevelt declares war against Japan.National Archives and Records Administration

The creation of protected military areas during times of war is not unusual, and makes sense for security reasons. However, usually these zones surround military installations and coastal areas where the threat of invasion is greatest. In the case of the US during World War II, the whole of the West Coast was designated a military protected area. The most likely place for invasion, however, was the only place on American soil that had already been attacked – Hawaii.

About 40% of the population of Hawaii was of Japanese descent, as opposed to the West Coast, where they made up just over 1%. The military knew that Hawaii could not function if everyone of Japanese descent was removed, and therefore decided to impose martial law instead. Individuals (usually men) considered the greatest threat to national security were arrested and interned, while the rest of their families were able to live at liberty.

The military’s decision to selectively intern on Hawaii was backed up by J. Edgar Hoover, director of the FBI, who was quoted as saying: “This evacuation isn’t necessary; I’ve already got all the bad boys.”

Currently, any immigrant or refugee who is given entry to the US goes through a stringent vetting procedure. This is partly why, according to American think tank the Cato Institute, no refugees have been involved in terrorist attacks on US soil since the Refugee Act of 1980. It is also worth noting that those behind major terrorist attacks in the US have mostly been born in America, or were permanent legal residents from countries not covered by Trump’s ban.

Land of the free?

But perhaps the greatest similarity between Roosevelt’s and Trump’s orders is how American-born children are affected. Half of those interned under EO9066 during World War II were American-born minors. Some have said this was inevitable because of the decision to intern both Japanese parents in the continental US. However, not all German, Austrian, or Italian mothers were interned, which meant that not all of their children were taken to camps.

In some cases, German-American children were left without care when both their father and mother were arrested. In other cases, families could “voluntarily” request to join interned husbands and fathers. There was no choice for Japanese-Americans. In other Allied countries such as Great Britain, most enemy alien women were allowed to remain at liberty, along with their children. In the US, the children were considered as much of a threat as their foreign-born parents, leading to the internment of entire family units.

This seems to still be the case today, as demonstrated by the fact that an American five-year-old boy was detained for more than four hours as a result of Trump’s immigration order because his mother was Iranian. Sean Spicer, Trump’s press secretary, defended the decision because “to assume that just because of someone’s age and gender that they don’t pose a threat would be misguided and wrong”.

American-born children, therefore, are still considered dangerous, but only, it seems, if they are born to non-white immigrant parents. For others born in the US, their rights appear to remain linked to the country of their parents’ birth. Just as in 1942, the promise of “liberty and justice for all” still does not apply to all American citizens.

The busy Romans needed a mid-winter break too … and it lasted for 24 days

This post originally appeared on The Conversation and was written by Dr Richard Flowers, Senior Lecturer in Classics and Ancient History, University of Exeter.

In the Doctor Who Christmas Special from 2010, Michael Gambon’s Scrooge-like character remarks that across different cultures and worlds people come together to mark the midpoint of winter. It is, he imagines, as if they are saying: “Well done, everyone! We’re halfway out of the dark!”

The actual reasons for celebrating Christmas at this particular time in the year have long been debated. Links have often been drawn to the winter solstice and the Roman festival of Saturnalia. Some people have also associated it with the supposed birthday of the god Sol Invictus, the “unconquered sun”, since a fourth-century calendar describes both this and Christ’s birth as taking place on December 25.

Such speculation has inevitably led to claims that this traditionally Christian festival is little more than a rebranding of earlier pagan activities. But questions about the “religious identity” of public celebrations are, in fact, nothing new and were being asked in the later periods of the Roman empire as well.

This is particularly evident in the case of a rather obscure Roman festival called the Brumalia, which started on November 24 and lasted for 24 days. We cannot be sure exactly when it began to be celebrated, but one of our best accounts of it comes from the sixth century AD. A retired public official called John the Lydian explained that it had its origins in earlier pagan rites from this time of year, including Saturnalia.

Some people celebrated Brumalia by sacrificing goats and pigs, while devotees of the god Dionysus inflated goat skins and then jumped on them. We also believe that each day of the festival was assigned a different letter of the Greek alphabet, starting with alpha (α) on November 24 and finishing with omega (ω) on December 17.

A person would wait until the day that corresponded to the first letter of their own name and then throw a party. This meant that those with a wide circle of friends – and, in particular, friends with a wide variety of names – might potentially get to go to 24 consecutive celebrations.

We also have other evidence for the popularity of the Brumalia during the sixth century. A speech by the orator Choricius of Gaza praises the festivities laid on by the emperor Justinian (527–565), remarking that the emperor and his wife, Theodora, celebrated the Brumalia on adjacent days, since the letter iota (ι) – for Justinian – directly follows theta (θ) – for Theodora – in the Greek alphabet. Surviving accounts from the cellars of a large estate in Egypt also detail the wine distributions to officials and servants for the Brumalia of the master, Apion, which fell on the first day of the festival.

Yet, the origins of the Brumalia are far from clear. It seems to have been related to the earlier Roman Bruma festival, which took place on a single day in November and looked ahead to the winter solstice (or bruma in Latin) a month later, but little is known about this.

It is only really from the sixth century onwards that it appears in surviving sources, even though by then most Romans were Christians and had been ruled by Christian emperors for more than two centuries. John the Lydian also states that the “name day” aspect of the celebrations was a recent innovation at this time. As far as we can tell, therefore, this was not merely a remnant from a distant pagan past, but had actually developed and grown at precisely the same time as emperors, including Justinian, were endeavouring to clamp down on perceived “paganism” in their empire.

The historian Roberta Mazza, in one of the most comprehensive modern discussions of the festival, has argued that the Brumalia was simply too popular to get rid of entirely, but that Justinian sought to strip it of “pagan” elements. She says that in doing so, the emperor “reshaped and reinvented the meanings and purposes of the feast” and made it “both acceptable from a religious point of view and useful for constructing a common cultural identity throughout the different provinces of the empire”.

The true meaning of Brumalia

We know that the Brumalia continued to be celebrated at the imperial court in Constantinople until at least the tenth century, but it was certainly not without its opponents. John the Lydian reports that the church was opposed to the Brumalia, and similar statements of disapproval and attempts to ban it were also made by church councils in 692 and 743. For some Christians, it remained just too pagan for comfort. Controversy also surrounded other celebrations in late antiquity, including the wearing of masks at New Year, the Roman Lupercalia (with its naked runners), and the processions and dancing involved in the “Bean Festival” at Calama in North Africa.

How then should we view the Brumalia? Was it still essentially “pagan”, or had it become safely Christianised or secularised? I think that any attempt to neatly categorise these festivals, let alone their participants, is destined to fail. For some people, the religious elements will have loomed larger, while for others they will have been almost entirely irrelevant, as also happens with Christmas today.

The Brumalia could be celebrated in a variety of ways and have a multitude of meanings to different people throughout the empire, even if all of them saw themselves as Christians. Rather than arguing that Justinian or others who enjoyed the Brumalia were “less Christian” than its opponents, we might instead treat it as a vivid illustration of the fluidity and malleability of notions of culture and identity.

We cannot ever discover the true meaning of Brumalia, but we can be sure that it brought people together to commemorate being halfway out of the dark.

Social Media, Outreach, and Your Thesis

Ever wondered about the benefits of social media and public outreach for your thesis? Matt Knight presents some of his experiences and why he thinks everyone should be trying it.

It’s been a hectic few weeks, in which I have inadvertently immersed myself in the world of public engagement, outreach, social media, and everything in between. Two years ago I would have had no idea what I was doing – for the most part I still don’t! But I thought I’d try to tie together some of my incessant thoughts about why I’ve bothered engaging with the complexities of social media and general public outreach, and its overall benefit to me and my thesis.

To give you some background, I’ve been using social media (Twitter and Facebook mainly) and blogging about my research since I started my PhD two years ago. It started as a way to help my mum understand what I do (a problem I think most of us have encountered!), while also giving me an avenue for processing some of my thoughts in an informal environment, without the fear of academic persecution that comes with a conference. I coupled this with helping out on the odd public engagement gig.

It’s safe to say this has snowballed somewhat: four weeks ago I found myself sat in a conference workshop dedicated entirely to social media and its benefits for research, and two weeks ago I was one of four on a communications and networking panel for Exeter’s Doctoral College, offering postgraduates information and advice on communicating their research. This has been interspersed with a presentation of my semi-scientific archaeological research to artists, as well as educating a class of 10/11 year olds, alongside teaching undergrads. To top it all off, last weekend I inadvertently became the social media secretary of a national archaeological group.

presenting to primary school kids

– A picture of me nervously stood in front of a class of 10 year olds!

As you read this, please be aware, I don’t consider myself an expert in this field whatsoever. I have 300+ followers on Twitter, 230ish on Facebook, 40ish followers on my blog and minimal training in public engagement – these are not impressive facts and figures. Much of what I’ve done is self-taught and there are much better qualified people who could be writing a post such as this. And yet, I want to make clear that the opportunities, experiences, and engagements I’ve had are beyond anything I could have hoped for.

alifeinfragments facebook page

– A screenshot of the Facebook page I established to promote my research

A lot of this stems from the belief that there is no point doing what I do – what many of you reading this also do – if no one knows or cares about it. From the beginning of undertaking my PhD, I knew I wanted to make my research relevant. For an archaeologist, or indeed, any arts and humanities student, this can be difficult. Every day can be a battle with the ultimate question plaguing many of us:

What’s the point?

Social media and general outreach events are a great way to get to grips with this and have certainly kept me sane on more than one occasion. Last year I participated in the University’s Community Day, in which members of the public were able to attend and see the ongoing research at what is such an inherent part of their city. That day was one of the most exhausting and exhilarating days of my PhD thus far.

Archaeology Community Day

– Myself and a fellow PhD researcher setting up for Exeter’s Community Day 2015

But then, 6 non-stop hours of presenting your research to nearly 2000 people will do that to you.

It will also help you gain perspective on the value of what you do. Children are particularly unforgiving – if they don’t think something is interesting or matters, they will let you know. The key I’ve found is to work out one tiny bit of your research that people can relate to or find interesting and hammer that home.

This rings true of outreach and engagement events, whether that’s to academics outside of your specialist field, or a room full of restless 10 year olds.

Where I’ve had by far the most success, though, has been online. My modest online numbers inevitably stem from my niche field (i.e. Bronze Age metalwork), and yet it has attracted the right people. Through Twitter and Facebook I am in regular contact with some of the leading experts in my field, without the formality of “clunky” emails. They retweet and share pictures of what I’m doing. They ask me questions. They share ideas with me.

I’ve recently found out that my blog has become a source of reference for several upcoming publications. This is huge in a competitive academic world where getting yourself known matters.

alifeinfragments blog page

– A screenshot of my blog site where I summarise lots of my ongoing research

Beyond this, you’d be amazed what members of the public might contribute to your thesis. So many of my ideas have come from discussions with people who have a general interest in archaeology, want to know more, and ask questions that have simply never crossed my mind.

I’m not going to lie – maintaining this sort of approach is time-consuming and exposing. It’s something that needs to be managed, and needs careful consideration. You need to be prepared that it opens you up to criticism from a wide audience and can add another nag to the back of your already stressed mind. But I know without a doubt my PhD experience, and indeed my research, would be weaker without it.

This blog post has inevitably been largely anecdotal, and by no means explores all of the possibilities open to you. But hopefully it might encourage a couple of you to think about the benefits of engaging with outreach events (there are hundreds on offer through the university), as well as turning social media from a form of procrastination into a productive avenue.


Matt Knight is a PhD researcher in Archaeology studying Bronze Age metalwork. He frequently posts about his research and can be followed on Twitter @mgknight24.


 

Coat Tales

In 1853, the blacksmith Joshua Payge of Buckland Brewer, Devon, married Ann Cole. He arrived for his wedding in his best waistcoat, a garment made from Manchester cotton velvet – swirls of blue patterning over a deep red ground.  From its style and manufacture we are able to date the waistcoat back to the 1830s, and might deduce from this fact that it had been handed down to Joshua by his father.  The enamel buttons down the front are a later, joyful addition: tiny flowers of red, white and blue enamel.  From such customization, from the stains and the stitching we are able to turn the waistcoat from an item of clothing to a record that provides us with clues as to what it means to be human.

Coat Tales: The Stories Clothes Tell was a collaborative workshop designed to explore objects such as this and to consider the kinds of stories they tell. It was run by Shelley Tobin, assistant curator of Dress and Textiles at RAMM, Dr Tricia Zakreski, lecturer in Victorian Studies, Heather Hind, postgraduate student in Exeter’s English department, and me, Jane Feaver, lecturer in Creative Writing.  We share a fascination for objects – whether from a historical-practical, a literary-aesthetic, or a fictional-creative point of view – and wanted to investigate the synergies and ramifications in bringing those three enthusiasms together.

Between us we had identified three items from the museum holdings, selected for their resonance with key moments in a life: Payge’s wedding waistcoat, an eighteenth-century pair of baby’s linen mittens worn by the donor’s “dear father” on his christening day, and a Victorian mourning necklace woven from the hair of the donor’s mother, and worn with a cross as testament to her memory.

– Dr Tricia Zakreski examining objects

Our event was split into three parts. The first part was led by Shelley, who gave the workshop participants a practical, fashion-curator’s view of the objects in hand, which we were able to see and experience at close quarters.  What questions do we ask of an object? How is it constructed? What does this tell us about the date it was made, the sort of person who would have worn it, the way the item was worn? What stories do particular stains or other features of wearing tell? Armed with pencils and notebooks participants were encouraged to take down their observations, which included drawing particular details of the item – stitching, pattern, texture…

The next part of our workshop aimed to give body and life to these objects – all of which, at over 150 years old, might have appeared a little arcane. We wanted to show how they moved and operated ordinarily in the world. Thomas Hardy, Charles Dickens, George Egerton, Emily Brontë and Wilkie Collins supplied wonderful examples for us: we found a blacksmith in Joe Gargery, dressed to the nines on his wedding day to Biddy; a maid’s revelation to her pregnant mistress of the baby clothes she treasures in a red-painted deal box; and a letter from Wilkie Collins’ Hide and Seek which details the making of a hair bracelet. How does our understanding of the objects change? How does our impression of the text alter, having embraced the physical presence of similar objects?

The last part of the workshop involved us thinking about any of the three objects as cues to writing our own stories. Using the letter in Hide and Seek as a model, participants were asked to write a letter to an intimate friend describing their chosen object, thinking about what function the object played in the scene they were about to relate, and what emotion drove the relating… In ten minutes, there was not a sound in the room but the industrious beavering of pencil leads. Everyone managed a story – some, pages of story! – and as we read around the room, each story exposed some moment in a human life suffused with the emotion that arises from close attention to detail at such life-changing moments – birth, marriage, death – and how the memory of one moment often lies buried in another.

The workshop was an experiment.  The participants, we were clear from the start, were our lab rats.  They didn’t appear to mind.  Someone said out loud how helpful it had been to have this three-pronged approach: that by the time it came to writing for themselves, how much easier it made the task.  Each of us during the course of that afternoon, I think, learned something more about what it means to be human: why we value and invest in certain things, how we use particular objects to embody our memories and our stories, and, conversely, how we can get objects not personally connected to us to offer up their novel stories to us.


Dr Jane Feaver is a Lecturer in Creative Writing at the University of Exeter.


 

Medieval women can teach us how to smash gender rules and the glass ceiling

This post originally appeared on The Conversation. The post is written by Laura Kalas Williams, Postdoctoral researcher in medieval literature and medicine and Associate Tutor at the University of Exeter.

On the night of the US election, Manhattan’s magisterial, glass-encased Javits Centre stood with its ceiling intact and its guest-of-honour in defeated absence. Hillary Clinton – who has frequently spoken of “the highest, hardest glass ceiling” she was attempting to shatter – wanted to bring in a new era with symbolic aplomb. As supporters despaired in that same glass palace, it was clear that the symbolism of her defeat was no less forceful.

People wept, hopes were dashed, and more questions were raised about just what it will take for the most powerful leader on the planet to one day be a woman. Hillary Clinton’s staggering experience and achievements as a civil rights lawyer, first lady, senator and secretary of state were not enough.

The double-standards of gender “rules” in society have been disconcertingly evident of late. The Clinton campaign said FBI director James Comey’s handling of the investigation into Clinton’s private server revealed “jaw-dropping” double standards. Trump, however, lauded him as having “guts”. When no recriminating email evidence was found, Trump ran roughshod over the judicial process, claiming: “Hillary Clinton is guilty. She knows it. The FBI knows it, the people know it.” Chants of “lock her up” resonated through the crowd at a rally.

Mob-like cries for a woman to be incarcerated without evidence or trial? That’s medieval.

The heart of a king

Since time immemorial, women have manipulated gender constructs in order to gain agency and a voice in the political milieu. During her speech to the troops at Tilbury, anticipating the invasion of the Spanish Armada, Elizabeth I famously claimed:

I know I have the body but of a weak and feeble woman; but I have the heart and stomach of a king, and of a king of England too.


Elizabeth I, The Ditchley Portrait, c. 1592, National Portrait Gallery. Elizabeth stands upon England, and the top of the world itself. Her power and domination are symbolised by the celestial sphere hanging from her left ear. The copious pearls represent her virginity and thus maleness. Wikimedia Commons

Four hundred years later, Margaret Thatcher seemed obliged to follow the same approach, employing a voice coach from the National Theatre to help her to lower her voice. And Clinton told a rally in Ohio: “Now what people are focused upon is choosing the next president and commander-in-chief.” Not a million miles away from the kingly-identifications of Elizabeth, the pseudo-male “Virgin Queen”.

This gender-play has ancient origins. In the late fourth century AD, St Jerome argued that chaste women become male. Likewise, the early Christian non-canonical Gospel of Thomas claimed that Jesus would make Mary “male, in order that she also may become a living spirit like you males”.


15th century ‘Disease Woman’. Wellcome Collection, MS Wellcome Apocalypse 49, f.38r.

By the Middle Ages, this idea of female bodily inferiority became material as well as spiritual as medical texts on the topic proliferated. Women’s bodies were considered inferior and more prone to disease. Because of the interiority of female anatomy, male physicians had to rely on diagrams and texts to interpret them, often with a singular focus on the reproductive system. Since men mostly wrote the books, the lexical and pictorial construction of the female body has been historically, and literally, “written” by male authors.

So women, who were socially constrained by their female bodies and living in a man’s world, had to enact radical ways to modify their gender and even their very physiology. To gain authority, women had to be chaste, and to behave like men by adopting “masculine” characteristics. Such modifications might appear to compromise feminist, or proto-feminist, ambitions, but they were in fact sophisticated strategies to undermine or subvert the status quo.

Gender-play


Illuminated image from Hildegard of Bingen’s (1098-1179) Scivias, depicting her enclosed in a nun’s cell, writing. Wikimedia Commons

Medieval women who desired a voice in religious circles (the Church was, of course, the unelected power of the day) shed their femininity by adapting their bodies, the way that they used them, and therefore the way in which they were “read” by others. Through protecting their virginity, fasting, mortifying their flesh, perhaps reading, writing, or becoming physically enclosed in a monastery or anchorhold, they reoriented the way in which they were identified.

Joan of Arc (1412-1431) famously led an army to victory in the Hundred Years War dressed as a soldier, in a time when women were not supposed to fight.

Catherine of Siena (1347-1380), defying social codes of female beauty, shaved her hair in defiance of her parents’ wish to have her married. She later had a powerful mystical experience whereby she received the heart of Christ in place of her own; a visceral transformation which radically altered her body and identity.

And St Agatha (231-251), whose story was widely circulated in the Middle Ages, refused to give in to sexual pressure and was tortured, finally suffering the severing of her breasts. She has since been depicted as offering her breasts on a plate to Christ and the world. Agatha subverted her torturers’ aim, exploited her “de-feminised” self and instead offered her breasts as symbols of power and triumph.


Saint Agatha bearing her severed breasts on a platter, Piero della Francesca (c. 1460–70). Wikimedia Commons

Some scholars have even argued that monks and nuns were considered a “third gender” in the Middle Ages: neither fully masculine nor feminine.

These flexible gender systems show how medieval people were perhaps more sophisticated in their conceptualisation of identity than we are today, when challenges to binary notions of gender are only now becoming widely discussed. Medieval codes of chastity might not be to most 21st-century tastes, but these powerful women-in-history took control of their own identification: they found loopholes in the rules and found authority in their own self-fashioning.

The US presidential campaign has without doubt reinvigorated the politics of gender. Hillary Clinton has said: “If I want to knock a story off the front page, I just change my hairstyle”. It is easy to leap at such a comment, seeing Clinton as a media-sycophant, playing to the expectation that women are defined by their appearance. But in fact, like myriad women before her, Clinton was manipulating and exploiting the very rules that seek to define her.

Complete liberation this is not. Only when the long history of gender rules is challenged will powerful women no longer be compared to men. Like the response of Joan of Arc and her troops, it is surely now time for another call to arms: for the freedoms of tolerance, inclusion, equality and compassion. We must turn grief into optimism and words into action. To shatter not the dreams of girls around the world, but the glass ceilings that restrain them.