
The long history behind the power of Royal Portraits

Professor Andrew McRae, Head of Department and Professor of Renaissance Studies in the Department of English, and PhD candidate Anna-Marie Linnell have written a blog post which looks at the risks and rewards of royal portraits.

This post first appeared in The Conversation.

Andrew McRae, University of Exeter and Anna-Marie Linnell, University of Exeter

Queen Elizabeth family portrait

The royal portraits released to mark the 90th birthday of Queen Elizabeth II deliberately emphasise her status as the matriarch within a flourishing family. The oldest and newest generations of royals smile together for the camera, projecting the Windsor line as safely secured into the future.

These well-choreographed and well-publicised pictures blend longevity and authority with an appreciation of renewal and dynastic security. Across British history, however, the idea that the monarch’s nuclear family is necessarily a unit of stable authority has been hard won.

While of course there have been royal families for as long as there have been monarchs, spouses and offspring haven’t always shared the limelight. The pivotal era of change is that of the Stuarts (1603-1714), who reigned when print culture exploded and new forms of visual media emerged.

Successive Stuart monarchs quickly grasped the value of royal imagery, keenly sponsoring portraits of themselves or holding lavish events which promoted their reign and policies. In turn, authorised images were disseminated more extensively through cheaply printed pamphlets.

Sharing the royal image with their subjects was a new and powerful tool – but it also carried risks.

Promoting his family held advantages for the first Stuart monarch, James VI of Scotland, who assumed the English throne in 1603, becoming James I. After the turbulent reigns of the early Tudors, and decades of rule by a Virgin Queen, James brought his subjects a healthy royal family. While he perhaps had little affection for his wife, Queen Anna of Denmark – and rather more for his succession of male royal favourites – he appreciated the importance of dynastic continuity and placed his three young children clearly in the public eye.

James I and his royal progeny.
Wikicommons

London’s printers devoted much attention to the king and his family. Genealogical charts and portraits of the family were disseminated in cheap printed form. James’s great book of political theory, Basilikon Doron, was also rushed into print in London in 1603. This text was an extended essay on dynastic continuity, addressed to his eldest son, Prince Henry. And while James’s family experienced more than its share of upheaval, with Henry dying suddenly in 1612, and his sister Elizabeth being sucked into the morass of the Thirty Years’ War, royal imagery stressed the continuity of Stuart rule.

Charles I, James’s second son, went on to exploit even more fully the potential of the royal family image. Charles’s marriage to the French princess Henrietta Maria coincided with his father’s unexpected death in 1625, meaning his reign began at the same time as the start of a stable and happy marriage. His image as a ruler was virtually indistinguishable from his profile as a husband and father.

Scholars have even argued that Charles only established an identifiable and independent reputation as a ruler from around 1630, when Henrietta Maria gave birth to the first of a succession of seven children.

Charles I, family guy.
Royal Collection

All the media of the age were mobilised to celebrate the family. Volumes of poetry were published to mark each royal birth, while the greatest court artists were commissioned to paint portraits. One portrait of Charles, Henrietta Maria and their first two children, by Anthony Van Dyck, hangs to this day in Buckingham Palace. Yet the risks of this approach also became apparent, as the perceived influence of a foreign, Catholic queen became a focus of resentment in the 1640s.

Traditional gender roles

Set against a more traditional model of masculine authority, Charles was derided by his critics as weak and vulnerable. The publication of secret correspondence between the couple, in The King’s Cabinet Opened (1645), fuelled the flames of civil war.

The royal family remained a source of tension in the second half of the Stuart era. Charles II’s childless marriage to Katherine of Braganza lacked the intensity of his parents’ union. For his brother and heir, James II, the birth of a Catholic son in 1688 in fact precipitated his downfall. While his subjects were prepared to tolerate James II’s leadership, they were anxious about the prospect of a future line of Catholic Stuarts.

His opponents challenged the maternity of the child, James Francis Edward, alleging that he was an imposter smuggled into the Queen’s rooms in a warming pan. Hundreds of pamphlets, histories and even plays were produced about this “warming pan plot”.

Months later, James’s son-in-law, William of Orange, capitalised on discontent and invaded England to seize the crown. Thereafter, images of the Stuart royal family tended to be divisive, often associated with the “Jacobites” who sought to restore James to the throne.

Today, the Windsors can congratulate themselves on their evident success in creating an image suited to the times. Yet a glance back in history, to the very century when the royal family was invented as a media product, underlines the challenges that they face in promoting – and maintaining – the positive royal image in a digital age.


Andrew McRae, Head of English, Professor of Renaissance Studies, University of Exeter and Anna-Marie Linnell, PhD candidate, University of Exeter

This article was originally published on The Conversation. Read the original article.

How will self-driving cars affect your insurance?

Matthew Channon, a PhD student in the University of Exeter Law School, takes a look at the implications that driverless cars might have for your insurance.

This article first appeared in The Conversation.


Matthew Channon, University of Exeter

Mark Molthan admits he wasn’t paying attention when his car crashed into a fence, leaving him with a bloody nose, according to a news report. The Texan had left control of his Tesla Model S to its autopilot system, which failed to turn at a curve and instead drove the car off the road. But Tesla, like other car manufacturers, stresses its self-driving technology is there just to assist drivers, who should remain ready to take over at any time.

One of the big questions about cars with self-driving technology is who’s to blame when something goes wrong. The driver in this case reportedly admitted he was at fault. But that hasn’t stopped his insurance company from requesting a joint inspection of the written-off car, which raises the prospect that the firm may sue Tesla to pay for the damage.

Insurance firms will always try to prove they shouldn’t have to pay for an accident. And software bugs in self-driving cars could create a new reason manufacturers might have to shoulder the cost of crashes. Yet if drivers remain legally responsible for a car even as technology encourages them to take their eyes off the road, will manufacturers be able to avoid blame, leaving insurance companies to recoup their costs through higher premiums?

The British government is already hoping to address this issue with a new piece of legislation to be introduced in autumn 2017. In anticipation of this, it is currently consulting the public and experts about how driverless cars should be insured in the future.

One model that could be introduced would build on the current system of compulsory insurance. But as well as every driver needing insurance, manufacturers of any car with a form of self-driving technology would also have to take out a policy to cover any liability for accidents. The costs of this would likely be passed on to drivers through higher purchase costs.

Eyes off the road: still not recommended. Shutterstock

Liability would then be determined by the circumstances of each individual accident. If the accident is caused entirely by the vehicle, it is the manufacturer’s insurers who will be liable. If the accident is caused by both vehicle malfunction and driver error, then liability is likely to fall on both insurers.

As with the current system of compulsory insurance, there would be a battle between insurers as to exactly who will pay for any damage or injury caused. This therefore will not make much of a difference to the driver, except for an increase in premium if they are found liable.

Other premiums

However, a different system could stop liability battles between insurers from clogging up the courts and ultimately cost drivers less. Instead of having to buy insurance for cars with self-driving technology, drivers would simply pay an extra fee on top of the cost of the car or on the petrol or electricity they use to power the vehicle. This money would go into a central fund that would pay for any damage caused. The fund would be held by the government or (in the UK) by the Motor Insurers’ Bureau, which compensates victims of accidents caused by uninsured drivers and is funded by a similar levy on insurance premiums.

This would eventually mean drivers would have to pay less in the long run because they wouldn’t be paying for insurance company costs and profits, just for the damage of accidents. A similar system is already used in New Zealand for conventional vehicles.

Neither system will have much of an effect on how much you have to pay for insurance in the meantime. And in fact, premiums will most likely fall, as self-driving technology actually appears to make the overall risk of an accident lower – something that will surely be welcomed by all. In the future, however, we may not need driver insurance at all. If cars become fully autonomous, with no need for humans to do any driving, then the manufacturers will probably become responsible for every journey.


Matthew Channon, PhD Candidate in Motor Insurance Law, University of Exeter

This article was originally published on The Conversation. Read the original article.

Why Blair really went to war

Dr Owen Thomas, Lecturer in Politics and International Relations in the College of Social Sciences and International Studies, looks at Tony Blair’s motivation for war in 2003 and whether it meets the conditions set out in Blair’s own Doctrine of the International Community.

This post first appeared in The Conversation.

Owen D. Thomas, University of Exeter

As Sir John Chilcot’s Iraq Inquiry findings are published, we should resist what’s become the easy refrain: “Blair Lied. Thousands Died.”

If we actually want to learn from what happened, we should recognise that Tony Blair has been remarkably consistent in his view that the removal of the regime was necessary, whether or not Saddam Hussein actually possessed weapons of mass destruction (WMD). Blair, it seems, genuinely believes that the war was in our best interests because it may have prevented an unlikely (but not impossible) catastrophe. This way of thinking has not gone away.

Ever since the inquiry began in 2009, Sir John Chilcot has said: “The inquiry is not a court of law and nobody is on trial.” This provoked plenty of consternation, despite Sir John’s promise that the committee will “not shy away” from frank criticism.

One of the questions that we can expect the inquiry to resolve is whether the public was deceived about the factual basis for war. Were lies told? Was there an environment in Number 10 whereby public deception was seen as a necessary evil?

Important as these concerns are (and Piers Robinson has written about these questions here), they are secondary to another mystery: why did Blair sincerely believe that war was necessary in the first place, and that it was necessary to persuade a requisite number of parliamentarians and the public to support intervention?

The story of Britain’s decision to go to war is about not just bad apples, but bad barrels. In other words, to understand what happened in 2002-3, we need to focus not just on what people did, but also why they did it.

The politics of catastrophe

Way back in April 1999, Blair gave a speech in Chicago called The Doctrine of the International Community, in which he set out five conditions under which it would be legitimate to intervene militarily.

The first condition for intervention would be: “Are we sure of our case?” Are we sure, in other words, that intervention will ultimately do more good than harm? War kills innocent people, and it is only the least-worst option when it saves more lives than it destroys.

That speech was given in the context of the Kosovo conflict, but the ideas also underpinned Blair’s rationalisation of the Iraq war. As the scholar Jason Ralph has argued, Blair sincerely believes that the war, which was presented as an act of collective security, was a continuation of the internationalist doctrine he laid out.

So did the Iraq War meet the conditions that Blair himself set out in 1999 – and specifically, was there a good reason to think that war would do more good than harm?

A true believer in Basra. EPA/Peter MacDiarmid

Blair certainly seems to think so. And to understand why, we only need to look at what he said to the inquiry in 2010.

The crucial thing after September 11 is that the calculus of risk changed … it is absolutely essential to realise this: if September 11 hadn’t happened, our assessment of the risk of allowing Saddam any possibility of him reconstituting his programmes would not have been the same … The point about [9/11] was that over 3,000 people had been killed on the streets of New York, an absolutely horrific event, but this is what really changed my perception of risk, the calculus of risk for me: if those people, inspired by this religious fanaticism could have killed 30,000, they would have.

This is what some call the politics of catastrophe, a concern with the unknown and unanticipated. We now think about the future as a place where it is more certain that terrible things will happen, but we are uncertain about exactly what will happen, when it will happen and where.

Blair’s belief in the need and urgency for war was underpinned by three assumptions: that Iraq was at least capable of, and interested in, producing WMD; that Iraq was obstructing and deceiving the existing inspections and sanctions regime; and that if Iraq did produce WMD, these weapons could be acquired by terrorist organisations, who could use them to mount devastating attacks.

Blair’s assumptions effectively short-circuited the calculus of benefit versus harm. The number of soldiers and civilians who may lose their lives in a war can, plausibly, be calculated. But, for Blair, the number of lives that could be lost in a single terrorist attack was incalculable. While a future terrorist attack using WMD was unlikely, it could kill an unprecedented number of people.

How to think about security

It is because of this kind of thinking that Blair still believes that going to war was the right thing to do. As he said when giving evidence in 2010:

It’s a decision. And the decision I had to take was, given Saddam’s history, given his use of chemical weapons, given the over one million people whose deaths he had caused, given 10 years of breaking UN resolutions, could we take the risk of this man reconstituting his weapons programmes or is that a risk that it would be irresponsible to take? The reason why it is so important … is because, today, we are going to be faced with exactly the same types of decisions.

Here Blair is quite right: we are facing the same decisions today. Should drones kill suspected terrorists pre-emptively because of what they may do in the future? Should we surveil individuals that exhibit radical views for the same reason?

This worldview transcends the intentions of individual decision-makers and the machinations of “bad apples”. It has become endemic in Western governments and societies; it persists on an unconscious level, and it cannot be pinned on “Teflon Tony” alone.

To grapple with it, we need to have a serious discussion about the meaning and value of security, and how we weigh the costs of war (let’s remember that there have been as many as 250,000 violent deaths since the invasion). We need a clear way of deciding what constitutes an urgent threat, and what constitutes an appropriate response.

Kindred spirits. EPA/Claudio Onorati

At least one member of the Chilcot committee will know all about Blair’s “Doctrine of the International Community”. On April 16 1999, now-Chilcot committee member Sir Lawrence Freedman sent a fax to Number 10 with some ideas for the Chicago speech – and it was he who suggested the five conditions.

Freedman recently published an essay whose title poses a very pertinent question: Can there be a liberal military strategy? Its final lines may be telling. Freedman argues that:

It is best to acknowledge that force invariably carries … risks and that they cannot necessarily be avoided … at the very least it requires paying much more attention to the interaction between the military and political strands of strategy when deciding upon the use of military force and explaining its purpose to the public.

Perhaps this is one of the most valuable lessons we can learn. Whether or not Blair and other senior figures within government actually misled the British public, and whether they did so deliberately or not, they genuinely believed in the need for action based on a particular political strategy of security. That strategy is what we now need to evaluate.


Owen D. Thomas, Lecturer in Politics and International Relations, University of Exeter

This article was originally published on The Conversation. Read the original article.

German rape law finally accepts that no means no – but is a statute enough?

Dr Sarah Cooper reflects on what her teenage self would have made of Germany’s recent change to its rape law.

Dr Cooper is a lecturer in the Politics department of the College of Social Sciences and International Studies.

This post first appeared in The Conversation.

Sarah Cooper, University of Exeter

In July 2016, Germany changed its legislation on rape to clarify that “no means no”. That’s right … in July 2016. Until now, by virtue of Section 177 of the German Criminal Code, a guilty verdict in cases of sexual assault demanded, shockingly, signs of physical defence.

Such laws, unsurprisingly, have long had a pernicious effect on the experience of victims. To characterise the recent changes as timely is a ridiculous understatement.

Whether or not it is accompanied by a physical struggle, fighting back, or screaming at the top of one’s lungs, the use of the term “no” signifies a lack of consent to sexual activity. To disregard this simple word amounts to rape, plain and simple. The need to assert this statement in 2016 should be redundant or, at the very least, tiresomely obvious. But such a conversation was commonplace in Germany just a few short weeks ago, when a change to the country’s legal system finally introduced the “no means no” statute.

So what will the response be across Germany – from the criminal law and justice systems, and from German society itself? As a former law student, now a public policy academic, and always an engaged citizen, I found my initial congratulatory response to this legal “breakthrough” quickly shifting towards a scathing critique, accompanied by a strong air of cynicism.

As an undergraduate bogged down by heavy statute books, my adolescent self would have welcomed Germany’s recent changes to the black letter of the law. Miscarriages of justice such as the recent shocking case of model and television personality Gina-Lisa Lohfink – who was fined after a court ruled she had falsely accused two men of rape – would undoubtedly have enraged my idealistic young mind. The fine was imposed despite a video surfacing in which she can be heard saying the word “no” several times, and the decision has been appealed.

But is this new statute enough to rectify such ills? Should we have faith that the change in law will have any real impact on German society?

Ushered in with other changes catalysed by the Cologne attacks on New Year’s Eve 2015, when around 500 women filed complaints of sexual assault, the legal shift appears to have been motivated more by exceptional events than by a gentle evolution of the law in line with global trends.

An unforeseen shock to the political system can lead to serious, and often hasty, change. Of crucial importance to its success is a cultural and social environment which welcomes the new approach. And it is here that the red flags appear.

German media response. Shutterstock

The United Nations has long promoted an appropriate standard for sexual assault legislation, yet Germany has continually ignored its demands for the removal of physical resistance as a necessary element of a guilty verdict. It is frightening to consider what the state of play might be once the level of public empathy for the events in Cologne dissipates. How strictly and enthusiastically will bureaucrats implement this law? Can German victims really now look forward to a paragon of criminal justice?

Victims turned away

In England and Wales in 2014, a damning HMIC report revealed that those claiming to have been sexually assaulted were less likely to be believed by the police than victims of any other crime. One explanation for this, according to criminology research, is a continued consideration of the clothing worn, intoxication levels and previous sexual history when assessing the validity of a claim. These amount to a host of “rape myths” that are as prevalent across the police force as they are in wider society.

If institutional barriers to proving sexual assault proliferate across the UK, where a requirement for bodily defence has long been disregarded, how can such barriers be so easily removed in Germany? Running alongside my own academic analysis, therefore, is a strong air of cynicism from a concerned member of the public.

Certainly, in a global society in which flippant jokes are just one example of a flourishing rape culture, the need for a shift in perception appears vital for the protection of victims. That Germany’s statute book now boasts this standard is unequivocally a hopeful first step. But it must be acknowledged that the justice system may yet remain ingrained with an understanding that a lack of consent is demonstrated through physically fighting back. Although the publicly appetising “no means no” tag line garners much attention, a far more complicated culture looks set to constrain the impact of those vital words which have finally been voiced in parliament.


Sarah Cooper, Lecturer in Politics, University of Exeter

This article was originally published on The Conversation. Read the original article.

The Medieval Somme: forgotten battle that was the bloodiest fought on British soil

This blog is by James Clark, Professor of History in the College of Humanities.

This post first appeared in The Conversation.

James Clark, University of Exeter

A Battle of the Somme on British soil? It happened on Palm Sunday, 1461: a day of fierce fighting in the mud that felled a generation, leaving a longer litany of the dead than any other engagement in the islands’ history – reputed in some contemporary reports to be between 19,000 – the same number killed or missing in France on July 1 1916 – and a staggering 38,000.

The battle of Towton, fought near a tiny village standing on the old road between Leeds and York, on the brink of the North York Moors, is far less known than many other medieval clashes such as Hastings or Bosworth. Many will never have heard of it.

But here, in a blizzard on an icy cold March 29 1461, the forces of the warring factions of Lancaster and York met in a planned pitched battle that soon descended into a mayhem known as the Bloody Meadow. It ran into dusk, and through the fields and byways far from the battlefield. To the few on either side that carried their weapon to the day’s end, the result was by no means clear. But York in fact prevailed and within a month (almost to the day), the towering figure of Duke Edward, who stood nearly six feet five inches tall, had reached London and seized the English crown as Edward IV. The Lancastrian king, Henry VI, fled into exile.

Victor: the Yorkist Edward IV. The National Portrait Gallery

Towton was not merely a bloody moment in military history. It was also a turning-point in the long struggle for the throne between these two dynasties whose rivalry has provided – since the 16th century – a compelling overture to the grand opera of the Tudor legend, from Shakespeare to the White Queen. But this summer, as national attention focuses on the 100th anniversary of The Battle of the Somme, we might also take the opportunity to recall a day in our history when total war tore up a landscape that was much closer to home.

An English Doomsday

First, the historian’s caveats. While we know a remarkable amount about this bloody day in Yorkshire more than 550 years ago, we do not have the benefits granted to historians of World War I. Towton left behind no battle plans, memoranda, maps, aerial photographs, nor – most valuable of all – first-hand accounts of those who were there. We cannot be certain of the size of the forces on either side, nor of the numbers of their dead.

A death toll of 28,000 was reported as early as April 1461 in one of the circulating newssheets that were not uncommon in the 15th century – and was taken up by a number of the chroniclers writing in the months and years following. This was soon scaled up to nearly 40,000 – about 1% of England’s entire male population – by others, a figure which also came to be cemented in the accounts of some chroniclers.

This shift points to the absence of any authoritative recollection of the battle – but almost certainly the numbers were larger than were usually seen, even in the period’s biggest clashes. Recently, historians have curbed the claims but the latest estimate suggests that 40,000 men took to the field, and that casualties may have been closer to 10,000.

Lethal: an armour-piercing bodkin arrow, as used at Towton. by Boneshaker

But as with the Somme, it is not just the roll-call, or death-toll, that matters, but also the scar which the battle cut across the collective psychology. Towton became a byword for the horrors of the battlefield. Just as July 1 1916 has become the template for the cultural representation of the 1914-18 war, so Towton pressed itself into the popular image of war in the 15th and 16th centuries.

When Sir Thomas Malory re-imagined King Arthur for the rising generation of literate layfolk at the beginning of the Tudor age, it was at Towton – or at least a battlefield very much like it – that he set the final fight-to-the-death between Arthur and Mordred (Morte d’Arthur, Book XXI, Chapter 4). Writing less than ten years after the Yorkist victory, Malory described an Arthurian battleground that, like Towton, raged from first light until evening and laid waste a generation:

… and thus they fought all the long day, and never stinted till the noble knights were laid to the cold earth and ever they fought still till it was near night, and by that time there was there an hundred thousand laid dead upon the ground.

Lions and lambs

In his history plays, Shakespeare also presents Towton as an expression of all the terrible pain of the years of struggle that lasted over a century, from Richard II to Henry VIII. He describes it in Henry VI, Part 3, Act 2, Scene 5:

O piteous spectacle! O bloody times! While lions war and battle for their dens, poor harmless lambs abide their enmity. Weep, wretched man, I’ll aid thee tear for tear.

Both the Somme and Towton saw a generation fall. But while it was a young, volunteer army of “Pals” that was annihilated in 1916, osteo-analysis suggests that Towton was fought by grizzled older veterans. But in the small society of the 15th century, this was no less of a demographic shock. Most would have protected and provided for households. Their loss on such a scale would have been devastating for communities. And the slaughter went on and on. The Lancastrians were not only defeated, they were hunted down with a determination to see them, if not wiped out, then diminished to the point of no return.

Battle of Towton: initial deployment. by Jappalang, CC BY-SA

For its time, this was also warfare on an unprecedented scale. There was to be no surrender, no prisoners. The armies were strafed with vast volleys of arrows, and new and, in a certain sense, industrial technologies were deployed, just as they were at the Somme. Recent archaeology has confirmed the presence of handguns on the battlefield, evidently devastating if not quite in the same league as the Germans’ Maschinengewehr 08 in 1916.

These firearm fragments are among the earliest known to have been used in northern European warfare and perhaps the very first witnessed in England. Primitive in their casting, they presented as great a threat to the man who fired them as to their target. Surely these new arrivals would have added considerably to the horror.

Fragments of the past

Towton is a rare example in England of a site largely spared from major development, and vital clues to its violent past remain. In the past 20 years, archaeological excavations have not only extended our understanding of the events of that day but of medieval English society in general.

The same is true of the Somme. That battlefield has a global significance as a place of commemoration and reconciliation, especially as World War I passes out of even secondhand memory. But it also has significance as a site for “live” research. Its ploughed fields and pastures are still offering up new discoveries which likewise can carry us back not only to the last moments of those lost regiments but also to the lost world they left behind them, of late Victorian and Edwardian Britain.

It is essential that these battlefields continue to hold our attention. For not only do they deepen our understanding of the experience and mechanics of war, they can also broaden our understanding of the societies from which such terrible conflict springs.


James Clark, Professor of Medieval History, University of Exeter

This article was originally published on The Conversation. Read the original article.

Obstacles to social mobility in Britain date back to the Victorian education system

Postgraduate researcher Jonathan Godshaw Memel takes a closer look at how the social mobility problems affecting new university students date back to the Victorian era.

This article first appeared in The Conversation.


Jonathan Godshaw Memel, University of Exeter

Despite the growing number of young people attending university, comparatively few disadvantaged students are accepted into Britain’s most prestigious institutions. Many of the most selective universities have missed recent targets to improve access, as the least privileged students remain more than eight times less likely to gain places than their peers from the most prosperous backgrounds.

The government has made recent promises that universities will become “engines” of “social mobility”. Yet a more stratified, less fair university sector seems the likely result of the new competitive landscape announced in the Higher Education and Research Bill that is currently making its way through parliament.

Of the 13 most selective institutions identified by the Sutton Trust in 2015, ten were either founded or significantly reformed in the 19th century. Many of the problems surrounding university access date back to the Victorian era.

Workers and thinkers

The Victorians determined the quality and content of teaching according to the class background of students, establishing varied levels of school attainment that still challenge admissions officers today. The 19th-century school system was organised in relation to the economy, and many more workers than thinkers were required to support the rapid growth of manufacturing.

Under Lord Taunton’s leadership in the 1860s, the Schools Inquiry Commission organised school allocation based on family occupation. Pupils from the most prosperous backgrounds attended the public and first-grade secondary schools, where they prepared for university admission by following an extensive classical education until the age of 18.

The Great Hall at Durham University at the end of the 19th century. A. D. White Architectural Photographs, Cornell University Library/wikimedia.com, CC BY

Those from the lower-middle classes were to focus on applied subjects until the age of 14. “It is obvious,” stated the Taunton Report, “that these distinctions correspond roughly, but by no means exactly, to the gradations of society”. When the system was re-evaluated in 1895, the Bryce Commission agreed that traditional universities should continue to admit students from the upper and upper-middle classes who had been best prepared for its demands.

Social exclusivity was also central to the model of Victorian higher education outlined by Cardinal Newman in his 1852 lecture series titled The Idea of a University. Newman’s influential idea of liberal education relied on strict distinctions between applied and non-applied forms of work.

He emphasised “inutility” and “remoteness from the occupations and duties of life” as important features of an ideal university. A privileged few developed their intellectual faculties amid collegiate surroundings while applied forms of work and knowledge remained the “duty of the many”.

Ideas above your station

Conservative commentators were surprised that so many still aspired towards this elite form of education. As the MP Sir John Gorst wrote in 1895:

It is remarkable that the desire, even of the poorer workers, for knowledge seems to be directed towards abstract science and general culture, rather than towards those studies which could be turned to practical use in the manufacturing industry.

In his novel Jude the Obscure, also written in 1895, Thomas Hardy explored how this traditional university model helped to support existing social hierarchies. The stonemason Jude Fawley pursues his educational dreams at Christminster, a fictional version of Oxford. The letter rejecting his application to “accumulate ideas, and impart them to others” reveals the class bias underpinning his failure. As a “working man”, Jude is told that he would have a “much better chance of success in life by remaining in your own sphere and sticking to your trade”.

The tragedy of Jude’s failure depends on his wholehearted identification with Christminster’s mission as “a unique centre of thought and religion – the intellectual and spiritual granary of this country”. Hardy’s novel shows that the Victorian university was rarely the disinterested institution it appeared to be.

Access to the marketplace

University education has become much fairer since 1895. Over 40 per cent of young people now enter higher education, whereas in 1963 only about four in every 100 young people studied at university.

Yet, the traditional university model still remains associated with social privilege, distinguished from newer institutions that suffer for their association with what some parts of the media have called “Mickey Mouse courses”.

The prospects of disadvantaged students at prestigious universities may be harmed by the rise of student tuition fees in line with inflation. While the most selective institutions deliver the best career prospects upon graduation, their high fees and entry requirements may deter students from disadvantaged backgrounds.

To avoid further stratification of the university system, proper financial support must exist to ensure the least wealthy students at in-demand institutions do not suffer from the highest levels of debt. Admission requirements should also be closely considered to account for different levels of prior attainment.

The drive to create a market out of the university sector must not prompt a return to Victorian principles. A student’s educational prospects should not be determined by their family background.


Jonathan Godshaw Memel, Postdoctoral Researcher and AHRC Cultural Engagement Fellow, University of Exeter

This article was originally published on The Conversation. Read the original article.

How your parents’ lifespan affects your health

A study of nearly 200,000 volunteers has shown how your parents influence your life expectancy. In this blog, Dr Luke Pilling and Dr Janice Atkins, research fellows at the University of Exeter Medical School, reflect on the recently published report on how long-lived parents can affect your own longevity.

This post first appeared in The Conversation.

Luke Pilling, University of Exeter and Janice Atkins, University of Exeter

The longer your parents live, the more likely you are to live longer and have a healthy heart. These are the results of our latest study of nearly 200,000 volunteers.

The role of genetics in determining the age at which we die is increasingly well understood, but the relationship between parental age at death and survival and health in their offspring is complex, with many factors playing a part. Shared environment and lifestyle choices also play a large role – diet and smoking habits, for example. But, even accounting for these factors, parents’ lifespan is still predictive of survival and health in their offspring – something we have also shown in previous research. However, it was unclear how the health advantages of having longer-lived parents were transferred to children in middle age.

In the new study, published in the Journal of the American College of Cardiology, we used information on people in the UK Biobank study. The participants, aged 55 to 73, were followed for eight years using data from hospital records. We found that for each parent that lived beyond their seventies, the participants had 20 per cent less chance of dying from heart disease. To put this another way, in a group of 1,000 people whose fathers died at 70 and who were followed for ten years, around 50 on average would die from heart disease. But when compared to a group whose fathers died at 80, on average only 40 would die from heart disease over the same ten-year period. Similar trends were seen when it came to the age of mothers.
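To make that arithmetic explicit (this simply restates the figures above – 50 deaths out of 1,000 is a baseline of 5 per cent, and the 20 per cent figure is a relative reduction):

\[
1{,}000 \times 5\% = 50 \ \text{expected deaths (fathers dying at 70)}, \qquad 50 \times (1 - 0.20) = 40 \ \text{expected deaths (fathers dying at 80)}.
\]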

Interestingly, family history of early heart attacks is already used by physicians to identify patients at increased risk of disease.

Family history?
Shutterstock

All is not lost

The biggest genetic effects on lifespan in our studies affected the participants’ blood pressure, their cholesterol levels, their body mass index and their likelihood of being addicted to tobacco. These are all factors that affect the risk of heart disease, which is consistent with the lower rates of heart disease that we saw in the offspring. We did find some clues in our analysis of novel genetic variants that there might also be other pathways to longer life, for example through better repair of damage to DNA, but much more work is needed on these.

It is really important to note that our findings were group-level effects. These effects do not necessarily apply to individuals, as so many factors affect one’s health. So while the results are really positive – people with longer-lived parents are more likely to live longer themselves – they do not mean people with shorter-lived parents should lose hope. There are lots of ways for those with shorter-lived parents to improve their health.

Current public health advice about being physically active (for example, going for regular walks), eating well and not smoking is very relevant – and people can really take their health into their own hands. People can overcome their increased risk by choosing healthy options – not smoking, keeping active, avoiding obesity and so on – and by getting their blood pressure and cholesterol levels tested. Of course, they should discuss their family history with their physicians, as there are some good treatments for some of the causes of premature deaths.

Conversely, people with long-lived parents cannot assume they will therefore live long lives – if you are exposed to the big health risk factors, this will be more important to your health than the age at which your parents died.


Luke Pilling, Research Fellow in Genomic Epidemiology, University of Exeter and Janice Atkins, Research Fellow, University of Exeter

This article was originally published on The Conversation. Read the original article.

Turkey crisis: how will oil and gas supplies be affected?

The recent attempted military coup in Turkey has had far reaching consequences for the country.

In this blog, Associate Research Fellow Joseph Dutton looks at the coup’s potential impact on our oil and gas supplies.

Joseph is based in Geography’s Energy Policy Group.

This post first appeared in The Conversation.


Joseph Dutton, University of Exeter

The Turkish military’s attempted coup to topple president Recep Tayyip Erdogan didn’t last long. The government restored control the following day and soon declared a three-month state of emergency, with more than 60,000 people since arrested or placed under investigation.

This isn’t just Turkey’s problem. The country’s pivotal position in the transport of oil and gas gives it huge geopolitical significance. Straddling Europe and the Middle East and providing export routes from Central Asia to the rest of the world, Turkey is an important and growing energy transit hub.

Shipping in the Bosphorus straits between the Black Sea and Mediterranean Sea was halted in the immediate aftermath of the attempted coup because of security concerns, though it was soon reopened. Although oil and gas flows were broadly unaffected by the coup, the prospect of prolonged instability raises the spectre of disruption to their transit.

Two major oil pipelines pass through Turkey to the Ceyhan terminal on its Mediterranean coastline. One begins in Baku, the capital of oil-rich Azerbaijan, before passing through Georgia. The other delivers oil from Kirkuk in northern Iraq and has been affected by fighting with Islamic State and Kurdish insurgents, and by disputes between Baghdad and the Kurdish government. When fully operational, the two pipelines have a combined capacity of 2.7m barrels per day – more than three times the UK’s daily production.

The country’s location at the mouth of the Black Sea means it plays an equally key role in the seaborne oil trade. Around 3 per cent of the world’s oil and petroleum products pass through the Bosphorus from Russia, Ukraine and central Asia.

Turkey’s pipelines link Europe with Central Asia, Russia and the Middle East. US Energy Information Administration

Turkey is also an important transit state for the EU’s natural gas imports. The Blue Stream pipeline, which runs underneath the Black Sea from Russia, carried 14.7 billion m³ of gas in 2013 – equivalent to 9 per cent of the total Russian gas supplies to Europe that year. Blue Stream was built in the early 2000s in response to growing disruption of gas flows through Ukraine and Belarus. The recent conflict in Ukraine highlights how Turkey’s importance has grown in recent years.

Further into the future, the planned Southern Gas Corridor development would see gas from fields in Azerbaijan flowing through Turkey to the EU by 2018, while the planned Turkish Stream pipeline across the Black Sea would also circumvent Ukraine.

This refinery in Izmit, near Istanbul, processes crude oil delivered by tanker. ARTEM ARTEMENKO / shutterstock

Turkey’s key location for energy supplies and regional affairs means the EU has always considered it a strategic partner. The country’s accession process for membership started back in 2005. And this same strategic importance may be useful even today – some have suggested Europe’s timid response to Erdogan and his crackdown after the attempted coup is because of Turkey’s crucial role in supplying the EU with energy. Although European Commission president Jean-Claude Juncker said Turkey was “not in a position to become a member” following the coup, the ongoing migrant crisis and concerns over energy supplies mean it will be difficult for the EU to take too strict a position.

As the EU seeks to become less dependent on Russian gas it will need to develop supplies from Central Asia through Turkey’s Southern Corridor, while also increasing its supply from global Liquefied Natural Gas (LNG) markets. At the same time, Russia will continue to reroute its gas exports away from Ukraine, instead increasing flows through Turkey and through the expanded Nord Stream pipeline to Germany.

It is therefore highly unlikely that the strategic nature of EU-Turkey relations will change in the foreseeable future, even if Erdogan’s government places further restrictions on society. But, given further civil unrest or terrorism could increase political instability and threaten Turkey’s energy sector, Europe is right to be worried.


Joseph Dutton, Research Fellow, Energy Policy Group, University of Exeter

This article was originally published on The Conversation. Read the original article.

Hinkley Point C delay: how to exploit this attack of common sense in energy policy

Dr Bridget Woodman, Director of the MSc in Energy Policy and a member of the Energy Policy Group, writes about the 11th-hour U-turn on the fate of Hinkley Point C.

This post originally appeared in The Conversation.

Bridget Woodman, University of Exeter

These are extraordinary times for energy policy in the UK. After years of resigned acceptance that the Hinkley Point C nuclear power station would be built no matter how much of a basket case it was, the government has surprised everyone by calling a halt to the process until the autumn.

The proposed Hinkley Point C would have two European Pressurised Reactors (EPRs), providing around 3GW of electricity or about seven per cent of the UK’s total usage. The construction would be paid for by French energy firm EDF and Chinese nuclear companies, but the expense of building it would be underpinned by long-term supply contracts with the UK government, as well as a series of other financial undertakings designed to reduce the financial risks for the developers.

Few people argue that Hinkley Point C makes sense. The project’s budget has grown from original estimates of £16 billion to £24.5 billion today. Even this might be an underestimate, given the experience of cost overruns at similar reactors under construction in Finland, France and China.

The long-term supply contracts – known as Contracts for Difference (CfDs) – are designed to guarantee a set income of £92.50 per megawatt hour (MWh) of output, regardless of the actual price of electricity in the market. Since the contracts were agreed, the wholesale price of electricity has fallen, meaning that the estimated subsidy for the lifetime of the project has risen from £6.1 billion to £29.7 billion, a huge burden for UK electricity consumers.
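The arithmetic behind that growing estimate is worth spelling out. A minimal sketch of how the CfD top-up works, assuming the standard strike-price-minus-wholesale-price structure described above (the wholesale price is whatever the market delivers at the time, not a figure taken from this article):

\[
\text{top-up per MWh} = \text{strike price} - \text{wholesale price} = \pounds 92.50 - P_{\text{wholesale}}
\]

Every pound by which the wholesale price falls therefore adds a pound per megawatt hour to the subsidy paid over the lifetime of the contract – which is why the lifetime estimate has risen from £6.1 billion to £29.7 billion as wholesale prices have dropped.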

France gets most of its electricity from nuclear plants – managed by EDF. hal pand / shutterstock

And the CfD subsidy is complemented by a suite of other UK taxpayer subsidies and guarantees designed to mitigate investment risks for the French and Chinese investors and to guarantee costs for dealing with nuclear waste or paying compensation for nuclear accidents.

Putting all of these subsidies in place has required the UK government to essentially redesign the electricity market over the past few years in an effort to create a situation where investment in a new plant looked attractive. Pretty much every major policy design has been geared towards creating a perfect environment for Hinkley Point C. That’s why it’s such a surprise to see the government has now stepped back – a bit – from the brink.

The get-out clauses?

The contracts to put many of the subsidy structures in place are not yet signed – that was meant to happen today, as part of the official approval process – so the government could still pull out. Obviously that wouldn’t please the French and Chinese, but risking their short-term displeasure could avoid locking the UK into the extortionate project for decades to come. Once the contracts are signed, the legal and financial ramifications are so high that the project will go ahead, whatever the evidence against it.

The UK has form on this, notably the THORP nuclear fuel reprocessing plant at Sellafield, which began operation despite the case for it collapsing on every front. But without those contracts in place the project can still falter at the last hurdle.

The controversial Sellafield is due to close in 2018. Ashley Coates, CC BY-SA

So why the delay? All sorts of speculation is going round. The new Theresa May administration is not ideologically wedded to new nuclear plants in the way that Cameron’s administration was – and has therefore had an attack of common sense about the costs of the project. The government also has security concerns over allowing significant Chinese investment in the UK electricity system: in a post-Brexit world it is worried about the level of overseas ownership of UK electricity assets, most of which are owned by European rather than British companies. Then there was the less than ringing endorsement from the EDF board (which reportedly voted only 10-7 in favour of going ahead, following a couple of high-profile resignations), which has rung alarm bells in both the UK and French governments.

The real reason behind the decision may emerge over the next few weeks as the government mulls over the pros and cons of the project. That will be fascinating. Equally fascinating, though, will be the debate that must take place at the same time about what an alternative might look like. What might the UK energy landscape look like without the project that has shaped it for so many years?

Energy policy is often seen as a bit of a backwater – little tweaks to existing approaches tend to be preferred to massive shifts in strategy. The latest decision has the potential to change that. Without Hinkley Point C, the potential to have a real and considered debate about the future shape of the electricity system has loomed into view. Now is the time to start considering the sorts of options being considered widely around the world, with measures to encourage more flexible, smaller-scale, renewable systems incorporating demand-side measures and new technologies such as storage. A system that is the absolute antithesis of what Hinkley Point C represents. Suddenly UK energy policy has become very exciting indeed.


Bridget Woodman, Course Director, MSc Energy Policy, University of Exeter

This article was originally published on The Conversation. Read the original article.

The messy history of British party politics

Professor Richard Toye and Dr Richard Jobson, from the College of Humanities’ Department of History, take a look in this blog at the splits and divisions which have plagued both the Conservative and Labour parties.

This post first appeared in The Conversation.


Richard Toye, University of Exeter and Richard Jobson, University of Exeter

Brexit has triggered an unprecedented scenario in British politics. Never before has a sitting prime minister been forced to resign because of a referendum defeat. Never before has a governing party in apparent meltdown been faced by an official opposition whose situation, if anything, seems worse.

But the two main parties have faced perilous and damaging situations before, and have a history of drawing back from the brink.

In the case of the Tories, the most profound crisis occurred in 1846, when the party split over the repeal of the Corn Laws instigated by its leader Sir Robert Peel. For more than 20 years, the party was kept out of office, save for brief intervals, until at last it saw an electoral revival under Benjamin Disraeli at the general election of 1874.

In 1903, the Conservatives again divided into Free Trade and protectionist camps. This time, there was no major exodus of MPs – although Winston Churchill, who joined the Liberals, was a notable exception. The electoral consequences were nonetheless calamitous, with a landslide defeat in 1906. In spite of a partial recovery four years later, there was no clear prospect of the Tories returning to office. It took a world war to bring about the eventual Conservative recovery – and, despite winning a huge victory at the polls in 1918, it was not until 1922 that the Tories had the confidence to rid themselves of their coalition partners, the Lloyd George Liberals.

That process itself provoked another split, but because it was driven by personality rather than ideology the healing process was fairly quick and the party dominated for the remainder of the interwar years.

The recent divisions under John Major in the 1990s had more direct similarities to today’s situation. The atmosphere within the Tory party was wholly toxic and became a major factor in New Labour’s 1997 triumph. But it did not create a constitutional impasse of the kind that Britain now seems to face.

On his bike. Shutterstock

The divisions in the party over the EU today are ideological, so are potentially very damaging. On the other hand, there has been no formal split in the party – and if it can hold together it may live to fight another day.

The opposition position

For Labour, the first obvious historical point of comparison is the Ramsay MacDonald “betrayal” of 1931. On that occasion, a minority Labour government split over the question of cuts to unemployment benefit. MacDonald continued as prime minister of a cross-party “National Government” and then defeated his former Labour allies at the general election that followed.

It was a catastrophe for Labour. Yet although the party was reduced to around 50 seats in the Commons, it was still the largest organised body of opposition to the government of the day. Labour’s continued strength in working-class areas also made it almost inevitable that it would at some stage be called upon to form a government (although this would not occur until the very special circumstances of the post-war election in 1945).

The other key comparison is the split of 1981, when some MPs broke away to form the Social Democratic Party.

Labour in opposition today faces a much more confused situation, having lost votes to UKIP and also the SNP. And the MPs who have rebelled against Corbyn’s leadership clearly represent a majority within the parliamentary party, whereas the MPs who were expelled and went on to form the National Labour Organisation in 1931 and those who left to form the SDP in 1981 were in the minority.

Labour’s current problems are likely to be amplified by future disagreements over the rules governing the party’s leadership contests. After the adoption of the 2014 Collins Review it is not entirely clear whether an incumbent leader needs to meet the same nomination threshold as any challenger.

Gathering momentum.
PA

Significant changes in the nature of the party’s membership and its registered supporters make the outcome of any future leadership contest hard to predict. However, in the absence of a leader who can unify the party and make a broad-based electoral appeal, it is possible that the consequences for Labour will be worse than either of the previous splits, especially if there is a general election before the end of the year.

It seems that neither party is well placed to turn the current crisis into an opportunity, even though the Conservatives still have the advantage of a majority in the House of Commons.

However, the problems that both parties face should be seen as symbolic of a broader political crisis. On the one hand, this crisis reflects the fact that the electorate is seriously divided, albeit not on straightforward class lines. On the other hand, it reflects the fact that voters in both Leave and Remain camps lack faith in the capacity of politicians to deliver positive change or even to tell the truth. The vote for Brexit was a product of these divisions and of this crisis of trust, and it also looks set to be exacerbated by them.


Richard Toye, Professor of Modern History, University of Exeter and Richard Jobson, Associate Research Fellow, University of Exeter

This article was originally published on The Conversation. Read the original article.