The busy Romans needed a mid-winter break too … and it lasted for 24 days

This post originally appeared on The Conversation and was written by Dr Richard Flowers, Senior Lecturer in Classics and Ancient History at the University of Exeter.

In the Doctor Who Christmas Special from 2010, Michael Gambon’s Scrooge-like character remarks that across different cultures and worlds people come together to mark the midpoint of winter. It is, he imagines, as if they are saying: “Well done, everyone! We’re halfway out of the dark!”

The actual reasons for celebrating Christmas at this particular time in the year have long been debated. Links have often been drawn to the winter solstice and the Roman festival of Saturnalia. Some people have also associated it with the supposed birthday of the god Sol Invictus, the “unconquered sun”, since a fourth-century calendar describes both this and Christ’s birth as taking place on December 25.

Such speculation has inevitably led to claims that this traditionally Christian festival is little more than a rebranding of earlier pagan activities. But questions about the “religious identity” of public celebrations are, in fact, nothing new and were being asked in the later periods of the Roman empire as well.

This is particularly evident in the case of a rather obscure Roman festival called the Brumalia, which started on November 24 and lasted for 24 days. We cannot be sure exactly when it began to be celebrated, but one of our best accounts of it comes from the sixth century AD. A retired public official called John the Lydian explained that it had its origins in earlier pagan rites from this time of year, including Saturnalia.

Some people celebrated Brumalia by sacrificing goats and pigs, while devotees of the god Dionysus inflated goat skins and then jumped on them. We also believe that each day of the festival was assigned a different letter of the Greek alphabet, starting with alpha (α) on November 24 and finishing with omega (ω) on December 17.

A person would wait until the day that corresponded to the first letter of their own name and then throw a party. This meant that those with a wide circle of friends – and, in particular, friends with a wide variety of names – might potentially get to go to 24 consecutive celebrations.

We also have other evidence for the popularity of the Brumalia during the sixth century. A speech by the orator Choricius of Gaza praises the festivities laid on by the emperor Justinian (527–565), remarking that the emperor and his wife, Theodora, celebrated the Brumalia on adjacent days, since the letter iota (ι) – for Justinian – directly follows theta (θ) – for Theodora – in the Greek alphabet. Surviving accounts from the cellars of a large estate in Egypt also detail the wine distributions to officials and servants for the Brumalia of the master, Apion, which fell on the first day of the festival.

Yet, the origins of the Brumalia are far from clear. It seems to have been related to the earlier Roman Bruma festival, which took place on a single day in November and looked ahead to the winter solstice (or bruma in Latin) a month later, but little is known about this.

It is only really from the sixth century onwards that it appears in surviving sources, even though by then most Romans were Christians and had been ruled by Christian emperors for more than two centuries. John the Lydian also states that the “name day” aspect of the celebrations was a recent innovation at this time. As far as we can tell, therefore, this was not merely a remnant from a distant pagan past, but had actually developed and grown at precisely the same time as emperors, including Justinian, were endeavouring to clamp down on perceived “paganism” in their empire.

The historian Roberta Mazza, in one of the most comprehensive modern discussions of the festival, has argued that the Brumalia was simply too popular to get rid of entirely, but that Justinian sought to strip it of “pagan” elements. She says that in doing so, the emperor “reshaped and reinvented the meanings and purposes of the feast” and made it “both acceptable from a religious point of view and useful for constructing a common cultural identity throughout the different provinces of the empire”.

The true meaning of Brumalia

We know that the Brumalia continued to be celebrated at the imperial court in Constantinople until at least the tenth century, but it was certainly not without its opponents. John the Lydian reports that the church was opposed to the Brumalia, and similar statements of disapproval and attempts to ban it were also made by church councils in 692 and 743. For some Christians, it remained just too pagan for comfort. Controversy also surrounded other celebrations in late antiquity, including the wearing of masks at New Year, the Roman Lupercalia (with its naked runners), and the processions and dancing involved in the “Bean Festival” at Calama in North Africa.

How then should we view the Brumalia? Was it still essentially “pagan”, or had it become safely Christianised or secularised? I think that any attempt to neatly categorise these festivals, let alone their participants, is destined to fail. For some people, the religious elements will have loomed larger, while for others they will have been almost entirely irrelevant, as also happens with Christmas today.

The Brumalia could be celebrated in a variety of ways and have a multitude of meanings to different people throughout the empire, even if all of them saw themselves as Christians. Rather than arguing that Justinian or others who enjoyed the Brumalia were “less Christian” than its opponents, we might instead treat it as a vivid illustration of the fluidity and malleability of notions of culture and identity.

We cannot ever discover the true meaning of Brumalia, but we can be sure that it brought people together to commemorate being halfway out of the dark.

Social Media, Outreach, and Your Thesis

Ever wondered about the benefits of social media and public outreach for your thesis? Matt Knight presents some of his experiences and why he thinks everyone should be trying it.

It’s been a hectic few weeks in which I have inadvertently immersed myself in the world of public engagement, outreach, social media, and everything in between. Two years ago I would have had no idea what I was doing – for the most part I still don’t! But I thought I’d try to tie together some of my incessant thoughts about why I’ve bothered engaging with the complexities of social media and general public outreach, and what the overall benefit has been to me and my thesis.

To give you some background, I’ve been using social media (Twitter and Facebook mainly) and blogging about my research since I started my PhD two years ago. It started as a way to help my mum understand what I do (a problem I think most of us have encountered!), while also giving me an avenue for processing some of my thoughts in an informal environment, without the fear of academic persecution that comes with a conference. I coupled this with helping out on the odd public engagement gig.

It’s safe to say this has snowballed somewhat: four weeks ago I found myself sat in a conference workshop dedicated entirely to social media and its benefits for research, and two weeks ago I was one of four on a communications and networking panel for Exeter’s Doctoral College, offering information and advice on communicating research. This has been interspersed with a presentation of my semi-scientific archaeological research to artists, as well as teaching a class of 10/11 year olds, alongside teaching undergrads. To top it all off, last weekend, I inadvertently became the social media secretary of a national archaeological group.


– A picture of me nervously standing in front of a class of 10 year olds!

As you read this, please be aware, I don’t consider myself an expert in this field whatsoever. I have 300+ followers on Twitter, 230ish on Facebook, 40ish followers on my blog and minimal training in public engagement – these are not impressive facts and figures. Much of what I’ve done is self-taught and there are much better qualified people who could be writing a post such as this. And yet, I want to make clear that the opportunities, experiences, and engagements I’ve had are beyond anything I could have hoped for.


– A screenshot of the Facebook page I established to promote my research

A lot of this stems from the belief that there is no point doing what I do – what many of you reading this also do – if no one knows or cares about it. From the beginning of undertaking my PhD, I knew I wanted to make my research relevant. For an archaeologist, or indeed, any arts and humanities student, this can be difficult. Every day can be a battle with the ultimate question plaguing many of us:

What’s the point?

Social media and general outreach events are a great way to get to grips with this and have certainly kept me sane on more than one occasion. Last year I participated in the University’s Community Day, in which members of the public were able to attend and see the ongoing research at what is such an inherent part of their city. That day was one of the most exhausting and exhilarating days of my PhD thus far.


– Myself and a fellow PhD researcher setting up for Exeter’s Community Day 2015

But then, 6 non-stop hours of presenting your research to nearly 2000 people will do that to you.

It will also help you gain perspective on the value of what you do. Children are particularly unforgiving – if they don’t think something is interesting or matters, they will let you know. The key I’ve found is to work out one tiny bit of your research that people can relate to or find interesting and hammer that home.

This rings true of outreach and engagement events, whether that’s to academics outside of your specialist field, or a room full of restless 10 year olds.

Where I’ve had my most success by far though has been online. My minimal online numbers inevitably stem from my niche field (i.e. Bronze Age metalwork), and yet it’s attracted the right people online. Through Twitter and Facebook I am in regular contact with some of the leading experts in my field, without the formality of “clunky” emails. They retweet and share pictures of what I’m doing. They ask me questions. They share ideas with me.

I’ve recently found out that my blog has become a source of reference for several upcoming publications. This is huge in a competitive academic world where getting yourself known matters.


– A screenshot of my blog site where I summarise lots of my ongoing research

Beyond this, you’d be amazed what members of the public might contribute to your thesis. So many of my ideas have come from discussions with people who have general archaeological interests, wanting to know more, and asking questions that have simply never crossed my mind.

I’m not going to lie – maintaining this sort of approach is time-consuming and exposing. It’s something that needs to be managed, and needs careful consideration. You need to be prepared that it opens you up to criticism from a wide audience and can add another nag to the back of your already stressed mind. But I know without a doubt my PhD experience, and indeed my research, would be weaker without it.

This blog post has inevitably been largely anecdotal, and by no means explores all of the possibilities open to you. But hopefully it might encourage a couple of you to think about the benefits of engaging with outreach events (there are hundreds on offer through the university), as well as turning social media from a form of procrastination into a productive avenue.


Matt Knight is a PhD researcher in Archaeology studying Bronze Age metalwork. He frequently posts about his research and can be followed on Twitter @mgknight24.



Coat Tales

In 1853, the blacksmith Joshua Payge of Buckland Brewer, Devon, married Ann Cole. He arrived for his wedding in his best waistcoat, a garment made from Manchester cotton velvet – swirls of blue patterning over a deep red ground.  From its style and manufacture we are able to date the waistcoat back to the 1830s, and might deduce from this fact that it had been handed down to Joshua by his father.  The enamel buttons down the front are a later, joyful addition: tiny flowers of red, white and blue enamel.  From such customization, from the stains and the stitching we are able to turn the waistcoat from an item of clothing to a record that provides us with clues as to what it means to be human.

Coat Tales: The Stories Clothes Tell was a collaborative workshop designed to explore objects such as this and to consider the kinds of stories they tell. It was run by Shelley Tobin, assistant curator of Dress and Textiles at RAMM, Dr Tricia Zakreski, lecturer in Victorian Studies, Heather Hind, postgraduate student in Exeter’s English department, and me, Jane Feaver, lecturer in Creative Writing.  We share a fascination for objects – whether from a historical-practical, a literary-aesthetic, or a fictional-creative point of view – and wanted to investigate the synergies and ramifications in bringing those three enthusiasms together.

Between us we had identified three items from the museum holdings, which we’d selected for their resonance in terms of key moments in a life: Payge’s wedding waistcoat, an eighteenth-century pair of baby’s linen mittens worn by the donor’s “dear father” on his christening day, and a Victorian mourning necklace woven from the hair of the donor’s mother, and worn with a cross as testament to her memory.

– Dr Tricia Zakreski examining objects

Our event was split into three parts. The first part was led by Shelley, who gave the workshop participants a practical, fashion-curator’s view of the objects in hand, which we were able to see and experience at close quarters.  What questions do we ask of an object? How is it constructed? What does this tell us about the date it was made, the sort of person who would have worn it, the way the item was worn? What stories do particular stains or other features of wearing tell? Armed with pencils and notebooks participants were encouraged to take down their observations, which included drawing particular details of the item – stitching, pattern, texture…

The next part of our workshop aimed to give body and life to these objects – all of which, at over 150 years old, might have appeared a little arcane.  We wanted to show how they moved and operated ordinarily in the world.  Thomas Hardy, Charles Dickens, George Egerton, Emily Brontë and Wilkie Collins supplied wonderful examples for us: we found a blacksmith in Joe Gargery, dressed to the nines on his wedding day to Biddy; a maid’s revelation to her pregnant mistress of the baby clothes she treasures in a red-painted deal box; a letter from Wilkie Collins’ Hide and Seek, which details the making of a hair bracelet. How does our understanding of the objects change? How does our impression of the text alter, having embraced the physical presence of similar objects?

The last part of the workshop involved us thinking about any of the three objects as cues to writing our own stories.  Using the letter in Hide and Seek as a model, participants were asked to write a letter to an intimate friend describing their chosen object, thinking about what function the object played in the scene they were about to relate, and what emotion drove the relating… In ten minutes, there was not a sound in the room but the industrious beavering of pencil leads.  Everyone managed a story – some, pages of story! – and as we read around the room, each story exposed some moment in a human life suffused with the emotion that arises from close attention to detail at such life-changing moments – birth, marriage, death – and showed how the memory of one moment often lies buried in another.

The workshop was an experiment.  The participants, we were clear from the start, were our lab rats.  They didn’t appear to mind.  Someone said out loud how helpful it had been to have this three-pronged approach: that by the time it came to writing for themselves, how much easier it made the task.  Each of us during the course of that afternoon, I think, learned something more about what it means to be human: why we value and invest in certain things, how we use particular objects to embody our memories and our stories, and, conversely, how we can get objects not personally connected to us to offer up their novel stories to us.


Dr Jane Feaver is a Lecturer in Creative Writing at the University of Exeter.



Medieval women can teach us how to smash gender rules and the glass ceiling

This post originally appeared on The Conversation. It was written by Laura Kalas Williams, Postdoctoral Researcher in medieval literature and medicine and Associate Tutor at the University of Exeter.

On the night of the US election, Manhattan’s magisterial, glass-encased Javits Centre stood with its ceiling intact and its guest-of-honour in defeated absence. Hillary Clinton – who has frequently spoken of “the highest, hardest glass ceiling” she was attempting to shatter – wanted to bring in a new era with symbolic aplomb. As supporters despaired in that same glass palace, it was clear that the symbolism of her defeat was no less forceful.

People wept, hopes were dashed, and more questions were raised about just what it will take for the most powerful leader on the planet to one day be a woman. Hillary Clinton’s staggering experience and achievements as a civil rights lawyer, first lady, senator and secretary of state were not enough.

The double-standards of gender “rules” in society have been disconcertingly evident of late. The Clinton campaign said FBI director James Comey’s handling of the investigation into Clinton’s private server revealed “jaw-dropping” double standards. Trump, however, lauded him as having “guts”. When no recriminating email evidence was found, Trump ran roughshod over the judicial process, claiming: “Hillary Clinton is guilty. She knows it. The FBI knows it, the people know it.” Chants of “lock her up” resonated through the crowd at a rally.

Mob-like cries for a woman to be incarcerated without evidence or trial? That’s medieval.

The heart of a king

Since time immemorial, women have manipulated gender constructs in order to gain agency and a voice in the political milieu. During her speech to the troops at Tilbury, anticipating the invasion of the Spanish Armada, Elizabeth I famously claimed:

I know I have the body but of a weak and feeble woman; but I have the heart and stomach of a king, and of a king of England too.

Elizabeth I, The Ditchley Portrait, c. 1592, National Portrait Gallery. Elizabeth stands upon England, and the top of the world itself. Her power and domination are symbolised by the celestial sphere hanging from her left ear. The copious pearls represent her virginity and thus maleness. Wikimedia Commons

Four hundred years later, Margaret Thatcher seemed obliged to follow the same approach, employing a voice coach from the National Theatre to help her to lower her voice. And Clinton told a rally in Ohio: “Now what people are focused upon is choosing the next president and commander-in-chief.” Not a million miles away from the kingly-identifications of Elizabeth, the pseudo-male “Virgin Queen”.

This gender-play has ancient origins. In the late fourth century AD, St Jerome argued that chaste women become male. Likewise, the early Christian non-canonical Gospel of Thomas claimed that Jesus would make Mary “male, in order that she also may become a living spirit like you males”.

15th century ‘Disease Woman’. Wellcome Collection, MS Wellcome Apocalypse 49, f.38r.

By the Middle Ages, this idea of female bodily inferiority became material as well as spiritual as medical texts on the topic proliferated. Women’s bodies were considered inferior and more prone to disease. Because of the interiority of female anatomy, male physicians had to rely on diagrams and texts to interpret them, often with a singular focus on the reproductive system. Since men mostly wrote the books, the lexical and pictorial construction of the female body has therefore been historically, and literally, “written” by male authors.

So women, who were socially constrained by their female bodies and living in a man’s world, had to enact radical ways to modify their gender and even their very physiology. To gain authority, women had to be chaste, and to behave like men by adopting “masculine” characteristics. Such modifications might appear to compromise feminist, or proto-feminist, ambitions, but they were in fact sophisticated strategies to undermine or subvert the status quo.

Gender-play

Illuminated image from Hildegard of Bingen’s (1098-1179) Scivias, depicting her enclosed in a nun’s cell, writing. Wikimedia Commons

Medieval women who desired a voice in religious circles (the Church was, of course, the unelected power of the day) shed their femininity by adapting their bodies, the way that they used them, and therefore the way in which they were “read” by others. Through protecting their virginity, fasting, mortifying their flesh, perhaps reading, writing, or becoming physically enclosed in a monastery or anchorhold, they reoriented the way in which they were identified.

Joan of Arc (1412-1431) famously led an army to victory in the Hundred Years War dressed as a soldier, in a time when women were not supposed to fight.

Catherine of Siena (1347-1380), defying social codes of female beauty, shaved her hair in defiance of her parents’ wish to have her married. She later had a powerful mystical experience whereby she received the heart of Christ in place of her own; a visceral transformation which radically altered her body and identity.

And St Agatha (231-251), whose story was widely circulated in the Middle Ages, refused to give in to sexual pressure and was tortured, finally suffering the severing of her breasts. She has since been depicted as offering her breasts on a plate to Christ and the world. Agatha subverted her torturers’ aim, exploited her “de-feminised” self and instead offered her breasts as symbols of power and triumph.

Saint Agatha bearing her severed breasts on a platter, Piero della Francesca (c. 1460–70). Wikimedia Commons

Some scholars have even argued that monks and nuns were considered a “third gender” in the Middle Ages: neither fully masculine nor feminine.

These flexible gender systems show how medieval people were perhaps more sophisticated in their conceptualisation of identity than we are today, when challenges to binary notions of gender are only now becoming widely discussed. Medieval codes of chastity might not be to most 21st-century tastes, but these powerful women-in-history took control of their own identification: they found loopholes in the rules, and found authority in their own self-fashioning.

The US presidential campaign has without doubt reinvigorated the politics of gender. Hillary Clinton has said: “If I want to knock a story off the front page, I just change my hairstyle”. It is easy to leap at such a comment, seeing Clinton as a media-sycophant, playing to the expectation that women are defined by their appearance. But in fact, like myriad women before her, Clinton was manipulating and exploiting the very rules that seek to define her.

Complete liberation this is not. Only when the long history of gender rules is challenged will powerful women no longer be compared to men. Like the response of Joan of Arc and her troops, it is surely now time for another call to arms: for the freedoms of tolerance, inclusion, equality and compassion. We must turn grief into optimism and words into action. To shatter not the dreams of girls around the world, but the glass ceilings that restrain them.