Category Archives: research

How Ross Poldark was a victim of Cornwall’s changing industrial landscape

Dr Joseph Crawford, English lecturer in the College of Humanities, takes a look at the hit BBC series Poldark. He asks if the main character, Ross Poldark, was a victim of the changing face of industry in the South West.

This post first appeared in The Conversation.

Joseph Crawford, University of Exeter

In July 2016, the BBC announced the commissioning of a third season of costume drama Poldark, months before the second series was even due to be broadcast. This represents an impressive vote of confidence in the series, especially as season two will apparently not be repeating the famous “topless scything” scene which won the National Television Awards’ prize for TV Moment of the Year.

The real pivotal moment depicted by Poldark, however, is one of historical change in south-west England. In the mid-18th century, Cornwall and Devon were major commercial and industrial centres. Cornwall’s tin and copper mines were some of the largest and most sophisticated in Europe, while the profits from the Cornwall and Devonshire wool trade helped make Exeter one of the biggest and richest cities in England.

By the mid-19th century however, much had changed. The rise of the mechanised cloth industry in England’s North and Midlands sent the south-western wool trade into serious decline. And while Cornwall’s mining industry survived well into the 20th century, it experienced repeated crises from the 1770s onwards. This was primarily due to newly discovered tin and copper mines elsewhere in the world, leading to the large-scale emigration of Cornish miners to countries such as Mexico, Australia and Brazil.

The era depicted in Poldark shows the region on the very tipping-point of this transition. Ross Poldark’s struggles to keep his mine open and profitable are symptomatic of the economic difficulties experienced by the region as a whole during the late 18th and early 19th centuries.

As south-western towns lost their traditional role as centres of trade and industry, their focus shifted increasingly to tourism. This was especially true during the long years of the Napoleonic Wars which form the backdrop to the later Poldark novels. Cut off by war from their favoured resorts in France and Italy, a generation of English tourists began taking holidays in Devon and Cornwall instead.

By the late 18th century, writers in Devon were praising their native county for its natural beauty and its ancient history, rather than for the wealth and industry of which their parents and grandparents had been so proud. By the mid-19th century, the same was increasingly true of Cornwall.

This economic shift led, in turn, to the development of the Victorian mythology of the “romantic South-West”, still beloved of local tourist boards today.

This mythology is built upon a version of the region’s history which emphasises its remote and wild character, playing on associations with Merlin and King Arthur, druids and witches, smugglers and wreckers and pirates.

Like most costume dramas, Poldark’s primary concern is with the travails of cross-class romance. But it is also a narrative about de-industrialisation, and about the struggle of local businesses to remain competitive and economically viable within an increasingly globalised economy – a story which has some resonance in early 21st-century Britain.

The poverty of the Cornish miners with whom Ross Poldark identifies is not simply the result of gratuitous oppression. Instead they are the victims of a new economic order which has little interest in preserving local industry for its own sake.

Wild West

The show has certainly not been shy about making lavish use of the beauty of its Cornish setting, and has already triggered something of a tourism boom, with visitors flocking to the region to see for themselves the moors, cliffs, and beaches which Poldark employs to such dramatic visual effect.

Industry by the sea.
Shutterstock

But it also depicts the historical struggles of the region’s inhabitants to preserve the South West as something more than just a pretty place for other people to visit on holiday. In this sense, it is rather symbolic that season one of Poldark ends with Ross being falsely accused of wrecking. The legend of the Cornish wreckers, which reached its definitive form in Du Maurier’s Jamaica Inn, is founded on extremely slender historical evidence, but it persists because it fits in so neatly with the Victorian mythology of the South West in general, and Cornwall in particular: a mythology which viewed it as a lawless and desperate land, filled with crime and adventure, and remote from all true civilisation.

In Poldark, the looting of the wrecked vessel is motivated by hunger and poverty, which have in turn been caused by the economic depression besetting the region. But after spending the whole season struggling against Cornwall’s industrial decline, Ross finds himself in danger of being absorbed into a new kind of narrative about the South West – one which will have no place for men like him, except as picturesque savages.

Of course, in this respect, Poldark rather wants to both have its grain and (shirtlessly) reap it, too. Ross Poldark and Demelza appeal to their audience precisely because they embody the kind of romantic wildness which, since the Victorian era, has been the stock-in-trade of the south-western tourist industry.

They are passionate, free-spirited, and dismissive of class boundaries and social conventions: hardly the kind of people that the self-consciously respectable merchants and industrialists of the 18th-century South West would have wanted as their representatives or champions. But by setting its story of class antagonism against the backdrop of this crucial turning-point in the history of the South West, Poldark does serve as a reminder that the quietness of the region, which has proven so attractive to generations of tourists, is not the natural state of a land untouched by commerce or industry. It is the silence which follows their enforced departure.

The Conversation

Joseph Crawford, Lecturer in English, University of Exeter

This article was originally published on The Conversation. Read the original article.

Is Katla crying wolf? Icelandic volcano’s rumblings don’t mean airspace chaos is imminent

Physical Geography lecturer, Dr Kate Smith reflects on the 2010 eruption of Icelandic volcano Eyjafjallajökull, and the recent rumblings of its close neighbour Katla.

This post first appeared in The Conversation.


Kate Smith, University of Exeter

There have been rumblings in Iceland recently.

But given its position in the North Atlantic, this is perhaps no surprise. The location makes the country a notorious volcanic hot spot, regularly hit by seismic activity.

Two earthquakes in August 2016 have led to international reports that Katla volcano is at imminent risk of erupting. Both occurred within the 9km by 14km central crater – or “caldera” – of the volcano in southern Iceland, and were the two largest earthquakes at the volcano since 1977.

So how worried should we be? Will there be a repeat of the chaos in the skies after an ash cloud drifted over Europe in 2010 caused by an eruption at Eyjafjallajökull, Katla’s close neighbour?

In historical terms, we’ve certainly been waiting for an explosive eruption of Katla, one of Iceland’s most active volcanoes, for quite some time. The volcano lies beneath an icecap up to 700 metres thick, and over the last 1,100 years explosive eruptions large enough to melt through the ice have occurred on average every 50 years. The last such eruption was in 1918, and the current pause of 98 years is the longest on record.

Katla in 1918.
Wikicommons

In the past, earthquakes were felt between two and ten hours before Katla erupted through the ice. Since June, earthquake activity in the caldera has been elevated. The large earthquakes on August 29 marked the peak of this seismicity, followed by a series of more than 100 smaller events, finishing early on August 30.

Meanwhile, the rivers that drain from the ice-capped volcano have been smelling of rotten eggs. This smell comes from hydrogen sulphide (H2S), a volcanic gas found in fluids within the caldera beneath the ice. Unusually high H2S levels near Múlakvísl river have prompted official advice to avoid the immediate area. Changes in levels of volcanic gases around volcanoes and enhanced seismic activity can be signs of increased movement of magma and a future volcanic eruption.

So does all of this physical evidence point to an imminent eruption? Well, probably not. This activity is not actually unusual. Since the 1950s, periods of enhanced seismicity and increased gas pollution have not been followed by explosive eruptions, and there is no sign of swelling of the volcano or harmonic tremor (continuous rhythmic earthquakes) suggesting magma movement.

Earthquake activity at Katla also regularly increases in summer and autumn. As the summer progresses, more glacial ice melts, and small floods can occur. These larger volumes of melt water also increase pore pressure in the crustal rocks and can trigger earthquakes. Changes to water flow can also alter how much geothermal fluid is in the rivers, hence the smell.

Land of fire and ice.
Chmee2/Valtameri, CC BY

In winter the melting reduces and water exists only in small pockets at the base of the ice, which reduces pore pressure in the crust and reduces seismic activity.

After effects

However, volcanoes can change rapidly and in unexpected ways. We can’t say for sure that Katla will not erupt in the near future. A 98-year repose period is long, and twice before, in 1612 and again in the 1820s, eruptions of Katla have followed eruptions of Eyjafjallajökull.

So what would happen if Katla did erupt? Eruptions from this volcano over the last 120,000 years have been varied, but the most likely eruption would be from the central caldera. A very small eruption wouldn’t melt through the ice, but a larger one could melt through explosively. Explosive Katla eruptions typically involve ash clouds and large floods (jökulhlaups) of meltwater, ice and sediment that flow across the surrounding lowlands. Lava is not usually seen since most eruptions are subglacial.

A small explosive eruption from Katla is the most common eruption type. It would last from days to a couple of weeks, producing a plume up to 14km high, ash fall in Iceland and a large flood, but would be unlikely to affect anywhere outside Iceland.

A larger-scale event could last weeks or even months. In this case the plume would be up to 25km in height and could impact air quality and air travel in the UK and Europe within two days if the wind blows ash in that direction. Iceland could expect heavy ashfall, with implications for travel, agriculture and air quality, and a large flood. Such floods can have peak discharges (water flow rates) greater than the Amazon, and cause major landscape change and local tsunamis.

If there is an eruption, both Iceland and the UK should be well prepared. Evacuation plans exist for local communities and information is available in several languages. The UK Meteorological Office monitors ash in the atmosphere and is able to predict what areas could be affected.

Eyjafjallajökull ash cloud of 2010.
PA

When I began studying Katla in 1999 I was told she could erupt any time. This remains true today and with every day that passes, we get closer to an eruption. But I wouldn’t bet on it happening right now.

Icelandic and international scientists work hard to investigate Katla’s past and carry out 24-hour monitoring with sophisticated equipment to understand its present and future. We study volcanic ash, rocks, jökulhlaup deposits, river levels and gas emissions on the ground. Earthquakes and GPS records are analysed remotely and we use satellites and overflights to examine the ground and ice surfaces from the air. An eruption may not be imminent – but it is exciting that we can detect and interpret these clues to assess Katla’s next move.


Kate Smith, Lecturer in Physical Geography, University of Exeter

This article was originally published on The Conversation. Read the original article.

The social life of sea mammals is key to their survival

Biosciences PhD student Philippa Brakes looks at the diversity in behaviour of our ‘oceanic cousins’, from tool-using sea otters to the ‘bubble-netting’ strategy of the humpback whale.

This post first appeared in The Conversation.

Philippa Brakes, University of Exeter

The beguiling behaviour of marine mammals in their natural environment is fascinating to us human observers. Watching dolphins leap gracefully through the surf, or whales making waves with their massive tail flukes, is the stuff of countless bucket lists and high-definition wildlife documentaries.

Perhaps part of the appeal lies in marine mammals behaving a lot like we do. They share many of our biological characteristics, such as bearing live young, suckling them with rich fatty milk and investing time and energy into rearing them into adulthood.

Studying their behaviour, what makes them tick as complex social creatures, is essential for their conservation in an aquatic environment which remains almost entirely alien to us. And while it may be alien, it is an environment with a massive human footprint.

Unfortunately, mammalian attributes are not all we share with our oceanic cousins. There is arguably now no marine habitat that remains unaffected by human activities. Prey depletion caused by humans, noise, temperature changes, chemical pollution and entanglement in fishing gear have all changed the environment in which marine mammals evolved.

Our research shows that this simple reality makes the need to understand their behaviour and social structures all the more urgent, a call which is being echoed by the charity Whale and Dolphin Conservation.

In recent decades there has been a strong emphasis in conservation circles on understanding the population size and distribution of marine mammals, as well as their genetic diversity.

Conservation is principally obsessed with conserving genetic diversity, which is exactly as it should be, as diverse gene pools help ensure resilience against environmental change. But genes may not be the whole story.

Marine mammal behaviour, just like ours, is partially determined by genes and the environment in which they live. However there are also social factors at play, and what makes marine mammals behave the way they do is potentially as complex as the processes which drive human behaviour.

In some cases, nurture may play just as important a role as nature for marine mammals. Almost 20 years ago, renowned conservation biologist Bill Sutherland examined how the behaviour of different species could be used to improve conservation efforts. He concluded that behavioural ecology needed to be better integrated into conservation science and policy making. But to what extent has this message from 1998 been taken on board?

Marine mammals exhibit a wide range of fascinating behaviours, from the complex cooperative bubble-net feeding of humpback whales, to ice-cave building polar bears and tool-using sea otters. They also show great diversity in their social structures and changing social dynamics, ranging from the complex third-order alliances of bottlenose dolphins (close relationships between unrelated males) to the more solitary lives of beaked whales.

As well as having innate behaviours, which they acquire through their genes, these species, like humans, can learn individually and from each other. Social learning may be of particular importance to conservation efforts, because it can influence the resilience of a population to changes in their environment.

Killer whales learn foraging strategies from their social group and tend to stick to them. As a result, if there is a decline in their preferred prey, they may be less likely to switch to other species, or use alternative foraging tactics. Such a behaviourally conservative species is likely to be more vulnerable to change.

Survival skills

But learning is only part of marine mammal social dynamics. Social structure, and the various roles played by individuals may also be important for how a population responds to change. The loss of individuals that hold key information on the location of critical habitat or a food source may have significant consequences.

Humans live in many different types of cultures, environments and circumstances. We make important choices about what to eat, who to socialise with, where to live and how many offspring to have. These factors can strongly influence our fertility rates, survival, and even our evolution. It is certainly plausible that many of these factors influence the success of marine mammals as well.

A better understanding of the behavioural ecology of marine mammals is therefore hugely important. It is difficult to envision an approach toward conserving a population of modern humans which merely preserved their genetic integrity and did not also consider their socially learnt behaviour.

While there are some attempts to incorporate behaviour, efforts to conserve marine mammal biodiversity still focus strongly on maintaining genetic integrity and diversity. But the emerging evidence indicates that social and behavioural diversity may also be central to individual, group, and population viability. The challenge ahead is teasing out the most relevant factors and understanding how to incorporate this new knowledge into conservation efforts.

On the whole, policy makers have been slow to keep up with the emerging behavioural research. More alarming still, from the perspective of the marine mammals themselves, is that the degradation of their environment has continued apace.


Philippa Brakes, Post-grad Researcher, University of Exeter

This article was originally published on The Conversation. Read the original article.

Teen obesity caused by going into ‘power-saving mode’

As new research on teen obesity hits the headlines, Professor Terry Wilkin – Professor of Endocrinology and Metabolism in the University of Exeter Medical School – looks at the evolutionary ‘power-saving’ trait which may be trapping teenagers.

This post first appeared in The Conversation.


Terry Wilkin, University of Exeter

It is possible that modern teenagers are trapped by a trait which evolved thousands of years ago to help them through puberty, but which now leaves them vulnerable to obesity.

Adolescents need 20-30% more energy every day to fuel the growth and changes in body composition that characterise the six years or so of pubertal development. Energy comes from the calories in the food they eat, but how could hunter-gatherer adolescents guarantee the extra calories they needed when their food supply was limited?

We believe our ancestors may have evolved a strategy that worked well for them, but which does quite the opposite now.

In our research, we have been monitoring a group of children as they progress through childhood from five to 16 years of age (the EarlyBird study). We found, as expected, that more energy was burnt as children got bigger. However, after the age of ten, the calories they burnt unexpectedly fell, despite the fact that they were growing faster than ever. The number of calories burnt by age 15 was around 25% lower in both boys and girls. Only at 16 years of age, when the growth spurt was over, did energy expenditure begin to increase again.

The study has three important qualities: it is longitudinal (which means that it measures the same group of children throughout), its age spread is very narrow (which means that age-related changes can be more accurately identified), and few people dropped out of the study (important statistically).

In a publication last year, we described two distinct waves of weight gain; one occurring sometime between birth and five years of age, and the other in adolescence. The early wave affected only some children – the offspring of obese parents – while the later wave in adolescence involved children generally.

Poor parental eating habits passed on to their children seemed a likely explanation for early obesity, but we had no good explanation for the later wave of obesity, until now.

Mystery solved?

Energy balance can be thought of as a bank account. Calories are deposited, and calories are spent. Body size (the balance in the account) depends on the difference between the two. So, although the explanation we offer is entirely speculative (we will never really know, because we don’t have data on our ancestors), we propose that a downward shift of energy expenditure into “power-saving mode” might help to conserve the calories needed for the growth spurt in puberty.

The energy burnt over 24 hours has two components: voluntary and involuntary. The voluntary component is physical activity, which is easy to understand. What people understand less readily is that the involuntary component is by far the bigger one. Involuntary energy expenditure (so-called resting energy expenditure) is used just to keep alive; to keep the blood temperature at 36.8°C, fuel the brain to think and enable the organs to function.

Involuntary energy expenditure accounts for around 75% of the total calories burnt in a day, which explains why physical activity has a limited impact on obesity. A fall of 25% in resting energy expenditure makes a big hole in the calories burnt each day.
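The arithmetic here can be sketched with a short calculation. The figures below are illustrative round numbers chosen to match the proportions described (resting expenditure about 75% of a day's total), not the EarlyBird study's actual measurements:

```python
def daily_burn(resting_kcal, activity_kcal, resting_drop=0.0):
    """Total daily calories burnt, with an optional fractional
    drop in resting (involuntary) energy expenditure."""
    return resting_kcal * (1 - resting_drop) + activity_kcal

resting = 1500    # involuntary: roughly 75% of a 2000 kcal/day total
activity = 500    # voluntary physical activity: the remaining 25%

normal = daily_burn(resting, activity)               # full resting burn
power_saving = daily_burn(resting, activity, 0.25)   # resting falls by 25%

print(normal, power_saving, normal - power_saving)
```

On these hypothetical numbers, a 25% fall in resting expenditure alone removes several hundred calories from the daily total, a far bigger change than most realistic adjustments to voluntary activity could offset.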

Why does all this matter, and why does it occur? It matters because it makes obesity more difficult to avoid if teenagers are trapped by a long period of low-calorie burn. We don’t know for sure why it occurs, but could speculate that it may be a throw-back to earlier evolutionary times, when calories were scarce but adolescents still needed 25-30% more calories a day to fuel growth and bodily changes.

Not as easy as buying a burger.
Nicolas Primola/Shutterstock.com

How did hunter-gatherers assure the supply of extra calories needed to reach maturity? It is possible that their bodies adapted by switching down its calorie expenditure, so as to divert the calories to the energy needed to grow. Obesity is a recent problem, and the adaptation now works adversely in a world where calories are cheap and readily available in a highly palatable mix of sugary drinks and calorie-dense foods.

The worst outcome is that adolescents and their families take these findings to mean that they can do nothing about teenage obesity. The best is that a new explanation for teenage obesity leads to better understanding, and an avoidance of the foods that are the cause.

Terry Wilkin, Professor, University of Exeter

This article was originally published on The Conversation. Read the original article.

Whither anarchy: freedom as non-domination

Dr Alex Prichard, Senior Lecturer in the College of Social Science and International Studies, and Professor Ruth Kinna, Professor of Political Theory at Loughborough University, have been collaborating on a research project entitled ‘Constitutionalising Anarchy’, looking at the principles, processes and rules of anarchy.

In this blog, they look at the idea of anarchist constitutionalism, the anarchists’ view of property, and freedom as non-domination.

This post appeared in The Conversation.

Alex Prichard, University of Exeter and Ruth Kinna, Loughborough University

This article is part of the Democracy Futures series, a joint global initiative with the Sydney Democracy Network. The project aims to stimulate fresh thinking about the many challenges facing democracies in the 21st century. This article is the second of four perspectives on the political relevance of anarchism and the prospects for liberty in the world today.


Which institutions are best suited to realising freedom? This is a question recently asked by the republican political theorist Philip Pettit.

Anarchists, by contrast to republicans, argue that the modern nation-state and the institution of private property are antithetical to freedom. According to anarchists, these are historic injustices that are structurally dominating. If you value freedom as non-domination, you must reject both as inimical to realising this freedom.

But what is freedom as non-domination? In a nutshell, by a line of thinking most vocally articulated by Pettit, I’m free to the degree that I am not arbitrarily dominated by any other. I am not free if someone can arbitrarily interfere in the execution of my choices.

If I consent to a system of rules or procedures, anyone that then invokes these rules against me cannot be said to be curtailing my freedom from domination. My scope for action might be constrained, but since I have consented to the rules that now curtail my freedom, I am not subject to arbitrary domination.

Imagine, for instance, that I have a drinking problem and I’ve asked my best friend to keep me away from the bar. If she sees me heading in that direction and prevents me from getting anywhere near the alcohol, she dominates, but not arbitrarily, so my status as a free person is not affected.

Republican theory diverges from liberal theory because the latter treats any interference in my actions as a constraint on my freedom – especially if I paid good money for the drink, making it my property.

Neither republicans nor liberals suggest that private property and the state might themselves be detrimental to freedom, quite the opposite. By liberal accounts, private property is the bedrock of individual rights. In contemporary republican theory, property ownership is legitimate as long as it is non-dominating.

Republicans further argue that a state that tracks your interests and encourages deliberative contestation and active political participation will do best by your freedom.

The special status of property and the state

But why should we assume that property or the state is central to securing freedom as non-domination? The answer seems to be force of habit. For republicans like Pettit, the state is like the laws of physics while private property is akin to gravity. In ideal republican theory, these two institutions are just background conditions we simply have to deal with, neither dominating nor undominating, just there.

While anarchists don’t disagree that property and the state exist, they seek to defend a conception of freedom as non-domination that factors in their dominating, slavish and enslaving effects. Anarchism emerged in the 19th century, when republicanism, particularly in the US, was perfectly consistent with slavery and needed the state to enforce that state of affairs.

Anarchists denounce the institutions of dominance under industrial capitalism. Quinn Dombrowski/flickr

The abolition of slavery and the emergence of industrial capitalism were predicated on the extension of the principle of private property to the propertyless, not only slaves, who were encouraged to see themselves as self-possessors who could sell their labour on the open market at the market rate.

Likewise, in Europe millions of emancipated serfs were lured into land settlements that left them permanently indebted to landlords and state functionaries. They were barely able to meet taxes and rents and frequently faced starvation.

The anarchists uniformly denounced this process as the transformation of slavery, rather than its abolition. They deployed synonyms like “wage slavery” to describe the new state of affairs. Later, they extended their conception of domination by analysing sex slavery and marriage slavery.

Proudhon’s twin dictums “property is theft” and “slavery is murder” should be understood in this context. As he noted, neither would have been possible but for the republican state enforcing and upholding the capitalist property regime.

The state became dependent on taxes, while property owners were dependent on the state to keep recalcitrant populations at bay. And, by the mid-20th century, workers were dependent on the state for welfare and social security because of the poverty-level wages paid by capitalists.

As Karl Polanyi noted, there was nothing natural about this process. The unfurling of the “free market”, the liberal euphemism for this process, had to be enforced and continues to be across the world.

Republicans might encourage us to think of the state and property like the laws of physics or gravity because this helps them argue that their conception of freedom as non-domination is not moralised – that is, their conception of freedom as non-domination does not depend on a prior ethical commitment to anything else.

But as soon as you strip away the physics, it appears that republican freedom is in fact deeply moralised – the state and private property remain central to the possibility of republican freedom in an a priori way. Republican accounts of freedom demand we ignore a prior ethical commitment to two institutions that should themselves be rejected.

Anarchists argue that private property and the state precipitate structures of domination that position people in hierarchical relations of domination, which are often if not always exacerbated by distinctions of race, gender and sexuality. These are what Uri Gordon calls the multiple “regimes of domination” that structure our lives.

Looking to constitutionalism as a radical tool

Anarchists are anarchists to the extent that they actively combat these forces. How should they do this?

Typically, the answer is through a specific form of communal empowerment (“power with” rather than “power over”). This would produce structural power egalitarianism, a situation in which no one can arbitrarily dominate another.

But is this realistic or desirable? Would a reciprocal powers politics not simply result in the very social conflicts that anarchists see structuring society already, as Pettit has argued?

Even anarchists need rules to guide group decision-making – such as these ones at Occupy Vancouver. Sally T. Buck/flickr, CC BY-NC-ND

And what about radical democracy? Perhaps anarchists could replace engagement with the state with radical practices of decision-making? The problem is that anarchists haven’t even defined the requisite constituencies or how they should relate to one another. What if my mass constituency’s democratic voice conflicts with yours?

There is one implement in the republican tool box that anarchists once took very seriously and which might be resurrected: constitutionalism. Without a state to fall back on or private property to lean on, anarchists like Proudhon devised radically anti-hierarchical and impressively imaginative constitutional forms.

Even today, when constitutionalism is almost uniformly associated with bureaucracy and domination, anarchists continue to devise constitutional systems. By looking at anarchist practices like the Occupy movement’s camp rules and declarations (We are the 99 per cent!), we can revive anarchist constitutionalism and show how freedom as non-domination may be revised and deployed as an anti-capitalist, anti-statist emancipatory principle. You can see more about this here.


You can read other articles in the series here.


Alex Prichard, Senior Lecturer in Politics, University of Exeter and Ruth Kinna, Professor of Political Theory, Loughborough University

This article was originally published on The Conversation. Read the original article.

The long history behind the power of Royal Portraits

Professor Andrew McRae, Head of Department and Professor of Renaissance Studies in the Department of English, and PhD candidate Anna-Marie Linnell have written a blog looking at the risks and rewards of royal portraits.

This post first appeared in The Conversation.

Andrew McRae, University of Exeter and Anna-Marie Linnell, University of Exeter

Queen Elizabeth family portrait

The royal portraits released to mark the 90th birthday of Queen Elizabeth II deliberately emphasise her status as the matriarch within a flourishing family. The oldest and newest generations of royals smile together for the camera, projecting the Windsor line as safely secured into the future.

These well-choreographed and well-publicised pictures blend longevity and authority with an appreciation of renewal and dynastic security. Across British history, however, the idea that the monarch’s nuclear family is necessarily a unit of stable authority has been hard won.

While of course there have been royal families for as long as there have been monarchs, spouses and offspring haven’t always shared the limelight. The pivotal era of change is that of the Stuarts (1603-1714), who reigned when print culture exploded and new forms of visual media emerged.

Successive Stuart monarchs quickly grasped the value of royal imagery, keenly sponsoring portraits of themselves or holding lavish events which promoted their reign and policies. In turn, authorised images were disseminated more extensively through cheaply printed pamphlets.

Sharing the royal image with their subjects was a new and powerful tool – but it also carried risks.

Promoting his family held advantages for the first Stuart monarch, James VI of Scotland, who assumed the English throne in 1603, becoming James I. After the turbulent reigns of the early Tudors, and decades of rule by a Virgin Queen, James brought his subjects a healthy royal family. While he perhaps had little affection for his wife, Queen Anna of Denmark – and rather more for his succession of male royal favourites – he appreciated the importance of dynastic continuity and placed his three young children clearly in the public eye.

James I and his royal progeny.
Wikicommons

London’s printers devoted much attention to the king and his family. Genealogical charts and portraits of the family were disseminated in cheap printed form. James’s great book of political theory, Basilikon Doron, was also rushed into print in London in 1603. This text was an extended essay on dynastic continuity, addressed to his eldest son, Prince Henry. And while James’s family experienced more than its share of upheaval, with Henry dying suddenly in 1612, and his sister Elizabeth being sucked into the morass of the Thirty Years’ War, royal imagery stressed the continuity of Stuart rule.

Charles I, James’s second son, went on to exploit even more fully the potential of the royal family image. Charles’s marriage to the French princess Henrietta Maria coincided with his father’s unexpected death in 1625, meaning his reign began at the same time as the start of a stable and happy marriage. His image as a ruler was virtually indistinguishable from his profile as a husband and father.

Scholars have even argued that Charles only established an identifiable and independent reputation as a ruler from around 1630, when Henrietta Maria gave birth to the first of a succession of seven children.

Charles I, family guy.
Royal Collection

All the media of the age were mobilised to celebrate the family. Volumes of poetry were published to mark each royal birth, while the greatest court artists were commissioned to paint portraits. One portrait of Charles, Henrietta Maria and their first two children, by Anthony Van Dyck, hangs to this day in Buckingham Palace. Yet the risks of this approach also became apparent, as the perceived influence of a foreign, Catholic queen became a focus of resentment in the 1640s.

Traditional gender roles

Set against a more traditional model of masculine authority, Charles was derided by his critics as weak and vulnerable. The publication of secret correspondence between the couple, in The King’s Cabinet Opened (1645), fuelled the flames of civil war.

The royal family remained a source of tension in the second half of the Stuart era. Charles II’s childless marriage to Catherine of Braganza lacked the intensity of his parents’ union. For his brother and heir, James II, the birth of a Catholic son in 1688 in fact precipitated his downfall. While his subjects were prepared to tolerate James II’s leadership, they were anxious about the prospect of a future line of Catholic Stuarts.

His opponents challenged the maternity of the child, James Francis Edward, alleging that he was an imposter smuggled into the Queen’s rooms in a warming pan. Hundreds of pamphlets, histories and even plays were produced about this “warming pan plot”.

Months later, James’s son-in-law, William of Orange, capitalised on discontent and invaded England to seize the crown. Thereafter, images of the Stuart royal family tended to be divisive, often associated with the “Jacobites” who sought to restore James to the throne.

Today, the Windsors can congratulate themselves on their evident success in creating an image suited to the times. Yet a glance back in history, to the very century when the royal family was invented as a media product, underlines the challenges that they face in promoting – and maintaining – the positive royal image in a digital age.

The Conversation

Andrew McRae, Head of English, Professor of Renaissance Studies, University of Exeter and Anna-Marie Linnell, PhD candidate, University of Exeter

This article was originally published on The Conversation. Read the original article.

The Medieval Somme: forgotten battle that was the bloodiest fought on British soil

This blog is by James Clark, Professor of History in the College of Humanities.

This post first appeared in The Conversation.

James Clark, University of Exeter

A Battle of the Somme on British soil? It happened on Palm Sunday, 1461: a day of fierce fighting in the mud that felled a generation, leaving a longer litany of the dead than any other engagement in the islands’ history – reputed in some contemporary reports to be between 19,000 – the same number killed or missing in France on July 1 1916 – and a staggering 38,000.

The battle of Towton, fought near a tiny village standing on the old road between Leeds and York, on the brink of the North York Moors, is far less known than many other medieval clashes such as Hastings or Bosworth. Many will never have heard of it.

But here, in a blizzard on an icy cold March 29 1461, the forces of the warring factions of Lancaster and York met in a planned pitched battle that soon descended into mayhem in what became known as the Bloody Meadow. It ran into dusk, and through the fields and byways far from the battlefield. To the few on either side who carried their weapons to the day’s end, the result was by no means clear. But York in fact prevailed and within a month (almost to the day), the towering figure of Duke Edward, who stood nearly six feet five inches tall, had reached London and seized the English crown as Edward IV. The Lancastrian king, Henry VI, fled into exile.

Victor: the Yorkist Edward IV. The National Portrait Gallery

Towton was not merely a bloody moment in military history. It was also a turning-point in the long struggle for the throne between these two dynasties whose rivalry has provided – since the 16th century – a compelling overture to the grand opera of the Tudor legend, from Shakespeare to the White Queen. But this summer, as national attention focuses on the 100th anniversary of The Battle of the Somme, we might also take the opportunity to recall a day in our history when total war tore up a landscape that was much closer to home.

An English Doomsday

First, the historian’s caveats. While we know a remarkable amount about this bloody day in Yorkshire more than 550 years ago, we do not have the benefits granted to historians of World War I. Towton left behind no battle plans, memoranda, maps, aerial photographs, nor – most valuable of all – first-hand accounts from those who were there. We cannot be certain of the size of the forces on either side, nor of the numbers of their dead.

A death toll of 28,000 was reported as early as April 1461 in one of the circulating newssheets that were not uncommon in the 15th century – and was taken up by a number of the chroniclers writing in the months and years following. This was soon scaled up to nearly 40,000 – about 1% of England’s entire male population – by others, a figure which also came to be cemented in the accounts of some chroniclers.

This shift points to the absence of any authoritative recollection of the battle – but almost certainly the numbers were larger than were usually seen, even in the period’s biggest clashes. Recently, historians have curbed the claims but the latest estimate suggests that 40,000 men took to the field, and that casualties may have been closer to 10,000.

Lethal: an armour-piercing bodkin arrow, as used at Towton. by Boneshaker

But as with the Somme, it is not just the roll-call, or death-toll, that matters, but also the scar which the battle cut across the collective psychology. Towton became a byword for the horrors of the battlefield. Just as July 1 1916 has become the template for the cultural representation of the 1914-18 war, so Towton pressed itself into the popular image of war in the 15th and 16th centuries.

When Sir Thomas Malory re-imagined King Arthur for the rising generation of literate layfolk at the beginning of the Tudor age, it was at Towton – or at least a battlefield very much like it – that he set the final fight-to-the-death between Arthur and Mordred (Morte d’Arthur, Book XXI, Chapter 4). Writing less than ten years after the Yorkist victory, Malory’s Arthurian battleground raged, like Towton, from first light until evening, and laid waste a generation:

… and thus they fought all the long day, and never stinted till the noble knights were laid to the cold earth and ever they fought still till it was near night, and by that time there was there an hundred thousand laid dead upon the ground.

Lions and lambs

In his history plays, Shakespeare also presents Towton as an expression of all the terrible pain of the years of struggle that lasted over a century, from Richard II to Henry VIII. He describes it in Henry VI, Part 3, Act 2, Scene 5:

O piteous spectacle! O bloody times! While lions war and battle for their dens, poor harmless lambs abide their enmity. Weep, wretched man, I’ll aid thee tear for tear.

Both the Somme and Towton saw a generation fall. But while it was a young, volunteer army of “Pals” that was annihilated in 1916, osteo-analysis suggests that Towton was fought by grizzled older veterans. But in the small society of the 15th century, this was no less of a demographic shock. Most would have protected and provided for households. Their loss on such a scale would have been devastating for communities. And the slaughter went on and on. The Lancastrians were not only defeated, they were hunted down with a determination to see them, if not wiped out, then diminished to the point of no return.

Battle of Towton: initial deployment. by Jappalang, CC BY-SA

For its time, this was also warfare on an unprecedented scale. There was to be no surrender, no prisoners. The armies were strafed with vast volleys of arrows, and new and, in a certain sense, industrial technologies were deployed, just as they were at the Somme. Recent archaeology has confirmed the presence of handguns on the battlefield, evidently devastating if not quite in the same league as the Germans’ Maschinengewehr 08 in 1916.

These firearm fragments are among the earliest known to have been used in northern European warfare and perhaps the very first witnessed in England. Primitive in their casting, they presented as great a threat to the man who fired them as to their target. Surely these new arrivals would have added considerably to the horror.

Fragments of the past

Towton is a rare example in England of a site largely spared from major development, and vital clues to its violent past remain. In the past 20 years, archaeological excavations have not only extended our understanding of the events of that day but of medieval English society in general.

The same is true of the Somme. That battlefield has a global significance as a place of commemoration and reconciliation, especially as World War I passes out of even second-hand memory. But it also has significance as a site for “live” research. Its ploughed fields and pastures are still offering up new discoveries which likewise can carry us back not only to the last moments of those lost regiments but also to the lost world they left behind them, of late Victorian and Edwardian Britain.

It is essential that these battlefields continue to hold our attention. For not only do they deepen our understanding of the experience and mechanics of war, they can also broaden our understanding of the societies from which such terrible conflict springs.

The Conversation

James Clark, Professor of Medieval History, University of Exeter

This article was originally published on The Conversation. Read the original article.

Obstacles to social mobility in Britain date back to the Victorian education system

Postgraduate researcher Jonathan Godshaw Memel takes a closer look at how the social mobility problems affecting new university students date back to the Victorian era.

This article first appeared in The Conversation.


Jonathan Godshaw Memel, University of Exeter

Despite the growing number of young people attending university, comparatively few disadvantaged students are accepted into Britain’s most prestigious institutions. Many of the most selective universities have missed recent targets to improve access, as the least privileged students remain more than eight times less likely to gain places than their peers from the most prosperous backgrounds.

The government has made recent promises that universities will become “engines” of “social mobility”. Yet a more stratified, less fair university sector seems the likely result of the new competitive landscape announced in the Higher Education and Research Bill that is currently making its way through parliament.

Of the 13 most selective institutions identified by the Sutton Trust in 2015, ten were either founded or significantly reformed in the 19th century. Many of the problems surrounding university access date back to the Victorian era.

Workers and thinkers

The Victorians determined the quality and content of teaching according to the class background of students, establishing varied levels of school attainment that still challenge admissions officers today. The 19th-century school system was organised in relation to the economy, and many more workers than thinkers were required to support the rapid growth of manufacturing.

Under Lord Taunton’s leadership in the 1860s, the Schools Inquiry Commission organised school allocation based on family occupation. Pupils from the most prosperous backgrounds attended the public and first-grade secondary schools, where they prepared for university admission by following an extensive classical education until the age of 18.

The Great Hall at Durham University at the end of the 19th century. A. D. White Architectural Photographs, Cornell University Library/wikimedia.com, CC BY

Those from the lower-middle classes were to focus on applied subjects until the age of 14. “It is obvious,” stated the Taunton Report, “that these distinctions correspond roughly, but by no means exactly, to the gradations of society”. When the system was re-evaluated in 1895, the Bryce Commission agreed that traditional universities should continue to admit students from the upper and upper-middle classes who had been best prepared for its demands.

Social exclusivity was also central to the model of Victorian higher education outlined by Cardinal Newman in his 1852 lecture series titled The Idea of a University. Newman’s influential idea of liberal education relied on strict distinctions between applied and non-applied forms of work.

He emphasised “inutility” and “remoteness from the occupations and duties of life” as important features of an ideal university. A privileged few developed their intellectual faculties amid collegiate surroundings while applied forms of work and knowledge remained the “duty of the many”.

Ideas above your station

Conservative commentators were surprised that so many still aspired towards this elite form of education. As the MP Sir John Gorst wrote in 1895:

It is remarkable that the desire, even of the poorer workers, for knowledge seems to be directed towards abstract science and general culture, rather than towards those studies which could be turned to practical use in the manufacturing industry.

In his novel Jude the Obscure, also written in 1895, Thomas Hardy explored how this traditional university model helped to support existing social hierarchies. The stonemason Jude Fawley pursues his educational dreams at Christminster, a fictional version of Oxford. The letter rejecting his application to “accumulate ideas, and impart them to others” reveals the class bias underpinning his failure. As a “working man”, Jude is told that he would have a “much better chance of success in life by remaining in your own sphere and sticking to your trade”.

The tragedy of Jude’s failure depends on his wholehearted identification with Christminster’s mission as “a unique centre of thought and religion – the intellectual and spiritual granary of this country”. Hardy’s novel shows that the Victorian university was rarely the disinterested institution it appeared to be.

Access to the marketplace

University education has become much fairer since 1895. Over 40 per cent of young people now enter higher education, whereas in 1963 only about four in every 100 young people studied at university.

Yet the traditional university model remains associated with social privilege, distinguished from newer institutions that suffer for their association with what some parts of the media have called “Mickey Mouse courses”.

The prospects of disadvantaged students at prestigious universities may be harmed by the rise of student tuition fees in line with inflation. While the most selective institutions deliver the best career prospects upon graduation, their high fees and entry requirements may deter students from disadvantaged backgrounds.

To avoid further stratification of the university system, proper financial support must exist to ensure the least wealthy students at in-demand institutions do not suffer from the highest levels of debt. Admission requirements should also be closely considered to account for different levels of prior attainment.

The drive to create a market out of the university sector must not prompt a return to Victorian principles. A student’s educational prospects should not be determined by their family background.

The Conversation

Jonathan Godshaw Memel, Postdoctoral Researcher and AHRC Cultural Engagement Fellow, University of Exeter

This article was originally published on The Conversation. Read the original article.

How your parent’s lifespan affects your health

A study of nearly 200,000 volunteers has shown how your parents influence your life expectancy. In this blog, Dr Luke Pilling and Dr Janice Atkins, Research Fellows at the University of Exeter Medical School, reflect on their recently published report on how long-lived parents can affect your own longevity.

This post first appeared in The Conversation.

Luke Pilling, University of Exeter and Janice Atkins, University of Exeter

The longer your parents live, the more likely you are to live longer and have a healthy heart. These are the results of our latest study of nearly 200,000 volunteers.

The role of genetics in determining the age at which we die is increasingly well understood, but the relationship between parental age at death and the survival and health of offspring is complex, with many factors playing a part. Shared environment and lifestyle choices, such as diet and smoking habits, also play a large role. But even accounting for these factors, parents’ lifespan is still predictive of their offspring’s – something we have also shown in previous research. However, it was unclear how the health advantages of having longer-lived parents were transferred to children in middle age.

In the new study, published in the Journal of the American College of Cardiology, we used information on people in the UK Biobank study. The participants, aged 55 to 73, were followed for eight years using data from hospital records. We found that for each parent that lived beyond their seventies, the participants had 20 per cent less chance of dying from heart disease. To put this another way, in a group of 1,000 people whose fathers died at 70 and who were followed for ten years, around 50 on average would die from heart disease. But when compared to a group whose fathers died at 80, on average only 40 would die from heart disease over the same ten-year period. Similar trends were seen when it came to the age of mothers.
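The arithmetic here is worth spelling out, since a 20 per cent relative reduction sounds larger than the absolute difference it implies. A minimal sketch, using the round numbers quoted in the passage rather than the study’s actual data:

```python
# Relative vs absolute risk, using the passage's round numbers
# (illustrative only -- not the study's raw data).

deaths_per_1000_father_died_70 = 50   # deaths over ten years of follow-up
relative_risk_reduction = 0.20        # "20 per cent less chance"

# Applying the 20% relative reduction to the baseline group:
deaths_per_1000_father_died_80 = (
    deaths_per_1000_father_died_70 * (1 - relative_risk_reduction)
)
print(deaths_per_1000_father_died_80)  # 40.0, matching the figure quoted

# The absolute difference is far smaller than "20 per cent" might suggest:
absolute_difference = (
    deaths_per_1000_father_died_70 - deaths_per_1000_father_died_80
) / 1000
print(f"{absolute_difference:.1%}")    # 1.0%, i.e. ten fewer deaths per 1,000
```

The gap between the relative figure (20 per cent) and the absolute one (ten fewer deaths per 1,000, or one percentage point) is a useful caution when reading risk statistics of this kind.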

Interestingly, family history of early heart attacks is already used by physicians to identify patients at increased risk of disease.

Family history?
Shutterstock

All is not lost

The biggest genetic effects on lifespan in our studies affected the participants’ blood pressure, cholesterol levels, body mass index and likelihood of being addicted to tobacco. These are all factors that affect the risk of heart disease, which is consistent with the lower rates of heart disease that we saw in the offspring. We did find some clues in our analysis of novel genetic variants that there might also be other pathways to longer life, for example through better repair of damage to DNA, but much more work is needed on these.

It is really important to note that our findings were group-level effects. These effects do not necessarily apply to individuals, as so many factors affect one’s health. So the results are really positive: although people with longer-lived parents are more likely to live longer themselves, people with shorter-lived parents should not lose hope. There are lots of ways for those with shorter-lived parents to improve their health.

Current public health advice about being physically active (for example, going for regular walks), eating well and not smoking is very relevant – people can really take their health into their own hands. They can overcome their increased risk by choosing the healthy options – not smoking, keeping active, avoiding obesity and so on – and getting their blood pressure and cholesterol levels tested. Of course, they should discuss their family history with their physicians, as there are good treatments for some of the causes of premature death.

Conversely, people with long-lived parents cannot assume they will therefore live long lives – if you are exposed to the big health risk factors, this will be more important to your health than the age at which your parents died.

The Conversation

Luke Pilling, Research Fellow in Genomic Epidemiology, University of Exeter and Janice Atkins, Research Fellow, University of Exeter

This article was originally published on The Conversation. Read the original article.

Is Brexit the will of the people?

In this blog, Professor Darren Schreiber looks at the complications surrounding the results of the EU referendum.

Professor Schreiber is a senior lecturer in the Politics department.

In the UK, the “Queen-in-Parliament” is sovereign and, as a core tenet of democracy, we expect our sovereign governments to follow the will of the people expressed in majority rule. It might seem obvious, then, that the recent vote for Brexit is the definitive expression of the public will.

While ancient Athens may have used direct democracy, where citizens vote on specific policies, most modern democracies rely on representative government because of suspicions about the tyranny of the majority and other concerns of good governance.

Facing a momentous question, around 10 per cent of voters were still undecided in the week leading up to the Brexit referendum. To anyone who studies public opinion, this is not at all surprising. For a chunk of the electorate, political attitudes are so changeable that they are the equivalent of mental coin flips.

This instability is not the result of some deep moral failing, but rather a consequence of a lack of practice. Like learning to ride a bicycle, we are unstable in our political views when we do not have experience thinking about political issues. As we become more informed and more practised, our views of the political world crystallise, and even how our brains think about politics changes.

We might ideally want everyone to be politically informed about all issues, but in politics as in all other domains of our lives we rely on people who can develop practice.  Most of us have friends that we can ask for a basic opinion when issues arise with our plumbing, our cars, or even our health.  And, when those issues are serious, we consult plumbers, mechanics, and doctors.

In the political world, we elect leaders who both represent our political views and help inform us about the complicated issues we face.  They in turn consult with specialists who are knowledgeable about domains connected to the policies to be decided on.  The world is just too complex for any one person to be sufficiently informed about the myriad of complicated problems and their interconnections.  While Leave campaigner Michael Gove may have claimed “people in this country have had enough of experts,” there is just no other good option, even for politicians.

The complication with Brexit is that the UK has expressed its will in some incongruous ways. In the elections for representatives to the European Parliament, the Brexit-oriented party UKIP obtained more votes (26.6 per cent) and seats (32.8 per cent) than any of its competitors, but in the 2015 elections to the House of Commons they got just 12.7 per cent of the votes and managed to win only one seat (0.15 per cent). Fewer than a quarter of all the Members of Parliament elected came out in favour of Leave. So while 52 per cent of the electorate may have voted Leave in a non-binding referendum, they filled a Parliament with nearly three-quarters Remain supporters.

The Brexit vote reflects exactly the kind of conflicted and contradictory views that we often find in the general public.  Did this vote mean that the UK should make no deal with the European Union?  Remain connected to the single market?  Rescind the residency rights of EU citizens living in the UK?  Split into separate nations?  With Leave leaders conceding “there is no plan” it is apparent that the referendum was a Rorschach test, rather than a coherent vision.

Writing more detailed referenda or making them legally binding may help with some of the ambiguity, but creates complications of its own as the morass of direct democracy in California demonstrates.  The solution that nearly all modern democracies have converged upon is representatives deliberating in legislatures.  There is already movement afoot to ensure that Parliament is at the centre of any further Brexit discussion.  Prime Ministers, political parties, and Members of Parliament can do a far more nuanced job of discerning the public will and translating it into policy than a referendum ever could.  This is why modern states reject direct democracy and rely on leaders who practice politics daily.