Monthly Archives: August 2016

Whither anarchy: freedom as non-domination

Dr Alex Prichard, Senior Lecturer in the College of Social Sciences and International Studies, and Ruth Kinna, Professor of Political Theory at Loughborough University, have been collaborating on a research project entitled ‘Constitutionalising Anarchy’, looking at the principles, processes and rules of anarchy.

In this blog, they look at the idea of anarchist constitutionalism, the anarchists’ view of property, and freedom as non-domination.

This post first appeared in The Conversation.

Alex Prichard, University of Exeter and Ruth Kinna, Loughborough University

This article is part of the Democracy Futures series, a joint global initiative with the Sydney Democracy Network. The project aims to stimulate fresh thinking about the many challenges facing democracies in the 21st century. This article is the second of four perspectives on the political relevance of anarchism and the prospects for liberty in the world today.


Which institutions are best suited to realising freedom? This is a question recently asked by the republican political theorist Philip Pettit.

Anarchists, in contrast to republicans, argue that the modern nation-state and the institution of private property are antithetical to freedom. According to anarchists, these are historic injustices that are structurally dominating. If you value freedom as non-domination, you must reject both as inimical to realising it.

But what is freedom as non-domination? In a nutshell, by a line of thinking most vocally articulated by Pettit, I’m free to the degree that I am not arbitrarily dominated by any other. I am not free if someone can arbitrarily interfere in the execution of my choices.

If I consent to a system of rules or procedures, anyone that then invokes these rules against me cannot be said to be curtailing my freedom from domination. My scope for action might be constrained, but since I have consented to the rules that now curtail my freedom, I am not subject to arbitrary domination.

Imagine, for instance, that I have a drinking problem and I’ve asked my best friend to keep me away from the bar. If she sees me heading in that direction and prevents me from getting anywhere near the alcohol, she dominates, but not arbitrarily, so my status as a free person is not affected.

Republican theory diverges from liberal theory because the latter treats any interference in my actions as a constraint on my freedom – especially if I paid good money for the drink, making it my property.

Neither republicans nor liberals suggest that private property and the state might themselves be detrimental to freedom, quite the opposite. By liberal accounts, private property is the bedrock of individual rights. In contemporary republican theory, property ownership is legitimate as long as it is non-dominating.

Republicans further argue that a state that tracks your interests and encourages deliberative contestation and active political participation will do best by your freedom.

The special status of property and the state

But why should we assume that property or the state is central to securing freedom as non-domination? The answer seems to be force of habit. For republicans like Pettit, the state is like the laws of physics while private property is akin to gravity. In ideal republican theory, these two institutions are just background conditions we simply have to deal with, neither dominating nor undominating, just there.

While anarchists don’t disagree that property and the state exist, they seek to defend a conception of freedom as non-domination that factors in their dominating and enslaving effects. Anarchism emerged in the 19th century, when republicanism, particularly in the US, was perfectly consistent with slavery and needed the state to enforce that state of affairs.

Anarchists denounce the institutions of dominance under industrial capitalism. Quinn Dombrowski/flickr

The abolition of slavery and the emergence of industrial capitalism were predicated on the extension of the principle of private property to the propertyless, not only former slaves, who were encouraged to see themselves as self-possessors who could sell their labour on the open market at the market rate.

Likewise, in Europe millions of emancipated serfs were lured into land settlements that left them permanently indebted to landlords and state functionaries. They were barely able to meet taxes and rents and frequently faced starvation.

The anarchists uniformly denounced this process as the transformation of slavery, rather than its abolition. They deployed synonyms like “wage slavery” to describe the new state of affairs. Later, they extended their conception of domination by analysing sex slavery and marriage slavery.

Proudhon’s twin dictums “property is theft” and “slavery is murder” should be understood in this context. As he noted, neither would have been possible but for the republican state enforcing and upholding the capitalist property regime.

The state became dependent on taxes, while property owners were dependent on the state to keep recalcitrant populations at bay. And, by the mid-20th century, workers were dependent on the state for welfare and social security because of the poverty-level wages paid by capitalists.

As Karl Polanyi noted, there was nothing natural about this process. The unfurling of the “free market”, the liberal euphemism for this process, had to be enforced and continues to be across the world.

Republicans might encourage us to think of the state and property like the laws of physics or gravity because this helps them argue that their conception of freedom as non-domination is not moralised – that is, their conception of freedom as non-domination does not depend on a prior ethical commitment to anything else.

But as soon as you strip away the physics, it appears that republican freedom is in fact deeply moralised – the state and private property remain central to the possibility of republican freedom in an a priori way. Republican accounts of freedom demand we ignore a prior ethical commitment to two institutions that should themselves be rejected.

Anarchists argue that private property and the state precipitate structures of domination that position people in hierarchical relations of domination, which are often if not always exacerbated by distinctions of race, gender and sexuality. These are what Uri Gordon calls the multiple “regimes of domination” that structure our lives.

Looking to constitutionalism as a radical tool

Anarchists are anarchists to the extent that they actively combat these forces. How should they do this?

Typically, the answer is through a specific form of communal empowerment (“power with” rather than “power over”). This would produce structural power egalitarianism, a situation in which no one can arbitrarily dominate another.

But is this realistic or desirable? Would a politics of reciprocal powers not simply result in the very social conflicts that anarchists see structuring society already, as Pettit has argued?

Even anarchists need rules to guide group decision-making – such as these ones at Occupy Vancouver. Sally T. Buck/flickr, CC BY-NC-ND

And what about radical democracy? Perhaps anarchists could replace engagement with the state with radical practices of decision-making? The problem is that anarchists haven’t even defined the requisite constituencies or how they should relate to one another. What if my mass constituency’s democratic voice conflicts with yours?

There is one implement in the republican tool box that anarchists once took very seriously and which might be resurrected: constitutionalism. Without a state to fall back on or private property to lean on, anarchists like Proudhon devised radically anti-hierarchical and impressively imaginative constitutional forms.

Even today, when constitutionalism is almost uniformly associated with bureaucracy and domination, anarchists continue to devise constitutional systems. By looking at anarchist practices like the Occupy movement’s camp rules and declarations (We are the 99 per cent!), we can revive anarchist constitutionalism and show how freedom as non-domination may be revised and deployed as an anti-capitalist, anti-statist emancipatory principle. You can see more about this here.


You can read other articles in the series here.

The Conversation

Alex Prichard, Senior Lecturer in Politics, University of Exeter and Ruth Kinna, Professor of Political Theory, Loughborough University

This article was originally published on The Conversation. Read the original article.

It will take more than giving people a share in shale gas profits to sway public opinion on fracking

Associate Research Fellow Joseph Dutton looks at shale gas production in the US, what lies behind the industry’s boom there, and what could give the UK’s shale industry the boost it needs.

Joseph is part of the Energy Policy Group.

This post first appeared in The Conversation.


Joseph Dutton, University of Exeter

Britain is due to receive its first delivery of shale gas imported from the US, which will arrive at Grangemouth petrochemical plant in Scotland next month. That the UK is importing a tanker-full of shale gas from the US despite sitting on substantial resources of its own reveals just how far advanced the US shale gas industry is over that in the UK.

While the US was once predicted to become the world’s largest importer of natural gas, by 2015 it was instead delivering record gas exports thanks to its booming shale gas industry. The use of gas for generating electricity also recently reached record levels. In the UK, by contrast, there is no commercial shale gas production and only a handful of wells have been drilled. In May, the first application approval in five years was given to Third Energy to begin exploration in the village of Kirby Misperton in North Yorkshire. In Scotland, despite interest from a number of companies, development was halted by a 2015 moratorium.

Shale gas as percentage of US natural gas production. Plazak/US EIA, CC BY-SA

According to surveys undertaken by the Department for Energy and Climate Change in April, public support for fracking stood at only 19 per cent, while 31 per cent were explicitly opposed. In an effort to win over the public and speed up the development process, the government launched a consultation on its proposals to distribute payments from the production of shale gas to the communities affected by the drilling and under which the gas lies.

This would be dealt with through a Shale Wealth Fund, which would distribute 10 per cent of all shale gas tax revenues to local communities. Initially there would be a maximum payout of £10m to each community associated with a shale drilling site over a 25-year period – payments the government believes could add up to £1 billion over the fund’s lifetime. Crucially, the government proposed that payments could be made directly to households, rather than as benefits to communities through payments made to local authorities, for example.
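To get a feel for the scale of the proposed payments, here is a minimal sketch of the arithmetic (every input is an assumption for illustration – actual tax revenues, community sizes and production lifetimes are unknown until commercial production begins):

```python
# Illustrative sketch of the proposed Shale Wealth Fund payments.
# Every input below is an assumption made for the sake of arithmetic:
# actual tax revenues, community sizes and production lifetimes are
# unknown until commercial production begins.

FUND_SHARE = 0.10            # 10% of shale gas tax revenues go to the fund
COMMUNITY_CAP = 10_000_000   # proposed cap of £10m per community
YEARS = 25                   # period over which the cap applies

assumed_annual_tax_revenue = 20_000_000  # hypothetical tax take per site (£/year)
assumed_households = 2_000               # hypothetical households near the site

annual_contribution = assumed_annual_tax_revenue * FUND_SHARE
total_to_community = min(annual_contribution * YEARS, COMMUNITY_CAP)
per_household_per_year = total_to_community / YEARS / assumed_households

print(f"Community total over {YEARS} years: £{total_to_community:,.0f}")
print(f"Per household per year: £{per_household_per_year:,.0f}")
# -> £10,000,000 in total, i.e. about £200 per household per year
```

Even on generous assumptions, the £10m cap spread across a whole community and 25 years yields fairly modest per-household sums – a point the government’s own consultation concedes below.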

This latest proposal builds on a community benefit charter that was drawn up by industry trade body UK Oil and Gas in 2013. This committed companies to community payments of £100,000 per drilling site during exploration, and one per cent of revenues during production. But, unlike the government’s proposals, this would entail payments to communities as a whole, not to individual households.

Playing with incentives

Part of what lies behind the US shale gas boom is that landowners there own the rights to minerals or oil below their land, so providing a financial incentive for them to allow oil and gas companies to get to work. In the UK, mineral rights belong to the Crown: the Shale Wealth Fund is essentially an attempt to create a financial incentive for those affected to look more positively towards drilling near their homes or land, and to provide a more tangible benefit than just greater employment, economic activity and tax revenues.

But would paying off householders actually give the industry the boost it needs? Some have reacted angrily to the proposals. Keith Taylor, Green Party MEP for the south-east, described them as “immoral and tantamount to bribery” – a view echoed by other Green MEPs. A subsequent poll commissioned by Friends of the Earth found that 33 per cent of people would support fracking if households were paid, while 43 per cent would still oppose fracking regardless of payments – fewer than in the April survey, but still substantial opposition.

Certainly, direct payments are a more progressive and ambitious step than the industry’s 2013 charter. But the government’s consultation document noted that in reality such payments would be “relatively small per-household”.

In fact, regardless of the estimates the government provides, the ultimate value of the fund and therefore the payments it would distribute is wholly dependent on the tax regime in place when production begins, and the revenue a company derives from a shale gas site once costs are taken into account. Until actual gas production begins, it’s impossible to estimate how much tax the operating company will pay – or even if the shale industry would be a success in the UK at all.

As the price of oil and gas has plummeted in the last two years, the economic case for developing potentially expensive shale gas deposits has weakened. More exploration drilling is needed to determine how much gas can be produced and at what cost. But more than this, public support remains the largest hurdle to the shale sector, and in the absence of a social licence to operate – essentially, public support to do so – the promise of payments to communities will do little to get it up and running.

The Conversation

Joseph Dutton, Research Fellow, Energy Policy Group, University of Exeter

This article was originally published on The Conversation. Read the original article.

The long history behind the power of royal portraits

Professor Andrew McRae, Head of the Department of English and Professor of Renaissance Studies, and PhD candidate Anna-Marie Linnell have written a blog which looks at the risks and rewards of royal portraits.

This post first appeared in The Conversation.

Andrew McRae, University of Exeter and Anna-Marie Linnell, University of Exeter

Queen Elizabeth family portrait

The royal portraits released to mark the 90th birthday of Queen Elizabeth II deliberately emphasise her status as the matriarch within a flourishing family. The oldest and newest generations of royals smile together for the camera, projecting the Windsor line as safely secured into the future.

These well-choreographed and well-publicised pictures blend longevity and authority with an appreciation of renewal and dynastic security. Across British history, however, the idea that the monarch’s nuclear family is necessarily a unit of stable authority has been hard won.

While of course there have been royal families for as long as there have been monarchs, spouses and offspring haven’t always shared the limelight. The pivotal era of change is that of the Stuarts (1603-1714), who reigned when print culture exploded and new forms of visual media emerged.

Successive Stuart monarchs quickly grasped the value of royal imagery, keenly sponsoring portraits of themselves or holding lavish events which promoted their reign and policies. In turn, authorised images were disseminated more extensively through cheaply printed pamphlets.

Sharing the royal image with their subjects was a new and powerful tool – but it also carried risks.

Promoting his family held advantages for the first Stuart monarch, James VI of Scotland, who assumed the English throne in 1603, becoming James I. After the turbulent reigns of the early Tudors, and decades of rule by a Virgin Queen, James brought his subjects a healthy royal family. While he perhaps had little affection for his wife, Queen Anna of Denmark – and rather more for his succession of male royal favourites – he appreciated the importance of dynastic continuity and placed his three young children clearly in the public eye.

James I and his royal progeny.
Wikicommons

London’s printers devoted much attention to the king and his family. Genealogical charts and portraits of the family were disseminated in cheap printed form. James’s great book of political theory, Basilikon Doron, was also rushed into print in London in 1603. This text was an extended essay on dynastic continuity, addressed to his eldest son, Prince Henry. And while James’s family experienced more than its share of upheaval, with Henry dying suddenly in 1612, and his sister Elizabeth being sucked into the morass of the Thirty Years’ War, royal imagery stressed the continuity of Stuart rule.

Charles I, James’s second son, went on to exploit even more fully the potential of the royal family image. Charles’s marriage to the French princess Henrietta Maria coincided with his father’s unexpected death in 1625, meaning his reign began at the same time as the start of a stable and happy marriage. His image as a ruler was virtually indistinguishable from his profile as a husband and father.

Scholars have even argued that Charles only established an identifiable and independent reputation as a ruler from around 1630, when Henrietta Maria gave birth to the first of a succession of seven children.

Charles I, family guy.
Royal Collection

All the media of the age were mobilised to celebrate the family. Volumes of poetry were published to mark each royal birth, while the greatest court artists were commissioned to paint portraits. One portrait of Charles, Henrietta Maria and their first two children, by Anthony Van Dyck, hangs to this day in Buckingham Palace. Yet the risks of this approach also became apparent, as the perceived influence of a foreign, Catholic queen became a focus of resentment in the 1640s.

Traditional gender roles

Set against a more traditional model of masculine authority, Charles was derided by his critics as weak and vulnerable. The publication of secret correspondence between the couple, in The King’s Cabinet Opened (1645), fuelled the flames of civil war.

The royal family remained a source of tension in the second half of the Stuart era. Charles II’s childless marriage to Catherine of Braganza lacked the intensity of his parents’ union. For his brother and heir, James II, the birth of a Catholic son in 1688 in fact precipitated his downfall. While his subjects were prepared to tolerate James II’s leadership, they were anxious about the prospect of a future line of Catholic Stuarts.

His opponents challenged the maternity of the child, James Francis Edward, alleging that he was an imposter smuggled into the Queen’s rooms in a warming pan. Hundreds of pamphlets, histories and even plays were produced about this “warming pan plot”.

Months later, James’s son-in-law, William of Orange, capitalised on discontent and invaded England to seize the crown. Thereafter, images of the Stuart royal family tended to be divisive, often associated with the “Jacobites” who sought to restore James to the throne.

Today, the Windsors can congratulate themselves on their evident success in creating an image suited to the times. Yet a glance back in history, to the very century when the royal family was invented as a media product, underlines the challenges that they face in promoting – and maintaining – the positive royal image in a digital age.

The Conversation

Andrew McRae, Head of English, Professor of Renaissance Studies, University of Exeter and Anna-Marie Linnell, PhD candidate, University of Exeter

This article was originally published on The Conversation. Read the original article.

How will self-driving cars affect your insurance?

Matthew Channon, a PhD student in the University of Exeter Law School, takes a look at the implications driverless cars might have for your insurance.

This article first appeared in The Conversation.


Matthew Channon, University of Exeter

Mark Molthan admits he wasn’t paying attention when his car crashed into a fence, leaving him with a bloody nose, according to a news report. The Texan had left control of his Tesla Model S to its autopilot system, which failed to turn at a curve and instead drove the car off the road. But Tesla, like other car manufacturers, stresses its self-driving technology is there just to assist drivers, who should remain ready to take over at any time.

One of the big questions about cars with self-driving technology is who’s to blame when something goes wrong. The driver in this case reportedly admitted he was at fault. But that hasn’t stopped his insurance company from requesting a joint inspection of the written-off car, which raises the prospect that the firm may sue Tesla to pay for the damage.

Insurance firms will always try to prove they shouldn’t have to pay for an accident. And software bugs in self-driving cars could create a new reason manufacturers might have to shoulder the cost of crashes. Yet if drivers remain legally responsible for a car even as technology encourages them to take their eyes off the road, will manufacturers be able to avoid blame, leaving insurance companies to recoup their costs through higher premiums?

The British government is already hoping to address this issue with a new piece of legislation to be introduced in autumn 2017. In anticipation of this, it is currently consulting the public and experts about how driverless cars should be insured in the future.

One model that could be introduced would build on the current system of compulsory insurance. But as well as every driver needing insurance, manufacturers of any car with a form of self-driving technology would also have to take out a policy to cover any liability for accidents. These costs would likely be passed on to drivers through higher purchase prices.

Eyes off the road: still not recommended. Shutterstock

Liability would then be determined according to the circumstances of each individual accident. If the accident is caused entirely by the vehicle, it is the manufacturer’s insurers who will be liable. If the accident is caused by both vehicle malfunction and driver error, then liability is likely to fall on both insurers.

As with the current system of compulsory insurance, there would be a battle between insurers as to exactly who will pay for any damage or injury caused. This therefore will not make much of a difference to the driver, except for an increase in premium if they are found liable.

Other premiums

However, a different system could stop liability battles between insurers from clogging up the courts and ultimately cost drivers less. Instead of having to buy insurance for cars with self-driving technology, drivers would simply pay an extra fee on top of the cost of the car, or on the petrol or electricity they use to power the vehicle. This money would go into a central fund that would pay for any damage caused. The fund would be held by the government or (in the UK) the Motor Insurers’ Bureau, which compensates victims of accidents caused by uninsured drivers, funded by a similar levy on insurance premiums.
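A minimal sketch of why such a fund could cost drivers less than conventional cover (all figures are assumptions for illustration – no actual levy or overhead rates have been proposed):

```python
# Illustrative comparison of the two funding models discussed above.
# All figures are assumptions for the sake of arithmetic; no actual
# rates have been proposed.

drivers = 1_000_000                 # hypothetical pool of drivers
annual_claims_cost = 200_000_000    # damage actually paid out (£/year)

# Model 1: per-driver insurance, where premiums must also cover insurer
# overheads and profit (assumed here at 25% on top of claims).
overhead_and_profit = 0.25
premium_per_driver = annual_claims_cost * (1 + overhead_and_profit) / drivers

# Model 2: a central fund fed by a flat levy that pays claims directly.
levy_per_driver = annual_claims_cost / drivers

print(f"Insurance model: £{premium_per_driver:.0f} per driver per year")  # £250
print(f"Central fund:    £{levy_per_driver:.0f} per driver per year")     # £200
```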

This would eventually mean drivers would have to pay less in the long run because they wouldn’t be paying for insurance company costs and profits, just for the damage of accidents. A similar system is already used in New Zealand for conventional vehicles.

Neither system would have much of an effect on how much you have to pay for insurance in the meantime. In fact, premiums will most likely fall, as self-driving technology appears to make the overall risk of an accident lower – something that will surely be welcomed by all. In the future, however, we may not need driver insurance at all. If cars become fully autonomous, with no need for humans to do any driving, then manufacturers will probably become responsible for every journey.

The Conversation

Matthew Channon, PhD Candidate in Motor Insurance Law, University of Exeter

This article was originally published on The Conversation. Read the original article.

Why Blair really went to war

Dr Owen Thomas, Lecturer in Politics and International Relations in the College of Social Sciences and International Studies, looks at Tony Blair’s motivation for war in 2003 and whether it meets the conditions set out in Blair’s own Doctrine of the International Community.

This post first appeared in The Conversation.

Owen D. Thomas, University of Exeter

As Sir John Chilcot’s Iraq Inquiry findings are published, we should resist what’s become the easy refrain: “Blair Lied. Thousands Died.”

If we actually want to learn from what happened, we should recognise that Tony Blair has been remarkably consistent in his view that the removal of the regime was necessary, whether or not Saddam Hussein actually possessed weapons of mass destruction (WMD). Blair, it seems, genuinely believes that the war was in our best interests because it may have prevented an unlikely (but not impossible) catastrophe. This way of thinking has not gone away.

Ever since the inquiry began in 2009, Sir John Chilcot has said: “The inquiry is not a court of law and nobody is on trial.” This provoked plenty of consternation, despite Sir John’s promise that the committee would “not shy away” from frank criticism.

One of the questions that we can expect the inquiry to resolve is whether the public was deceived about the factual basis for war. Were lies told? Was there an environment in Number 10 whereby public deception was seen as a necessary evil?

Important as these concerns are (and Piers Robinson has written about these questions here), they are secondary to another mystery: why did Blair sincerely believe that war was necessary in the first place, and that it was necessary to persuade a requisite number of parliamentarians and the public to support intervention?

The story of Britain’s decision to go to war is about not just bad apples, but bad barrels. In other words, to understand what happened in 2002-3, we need to focus not just on what people did, but also why they did it.

The politics of catastrophe

Way back in April 1999, Blair gave a speech in Chicago called The Doctrine of the International Community, in which he set out five conditions under which it would be legitimate to intervene militarily.

The first condition for intervention would be: “Are we sure of our case?” Are we sure, in other words, that intervention will ultimately do more good than harm? War kills innocent people, and it is only the least-worst option when it saves more lives than it destroys.

That speech was given in the context of the Kosovo conflict, but the ideas also underpinned Blair’s rationalisation of the Iraq war. As the scholar Jason Ralph has argued, Blair sincerely believes that the war, which was presented as an act of collective security, was a continuation of the internationalist doctrine he laid out.

So did the Iraq War meet the conditions that Blair himself set out in 1999 – and specifically, was there a good reason to think that war would do more good than harm?

A true believer in Basra. EPA/Peter MacDiarmid

Blair certainly seems to think so. And to understand why, we only need to look at what he said to the inquiry in 2010.

The crucial thing after September 11 is that the calculus of risk changed … it is absolutely essential to realise this: if September 11 hadn’t happened, our assessment of the risk of allowing Saddam any possibility of him reconstituting his programmes would not have been the same … The point about [9/11] was that over 3,000 people had been killed on the streets of New York, an absolutely horrific event, but this is what really changed my perception of risk, the calculus of risk for me: if those people, inspired by this religious fanaticism could have killed 30,000, they would have.

This is what some call the politics of catastrophe, a concern with the unknown and unanticipated. We now think about the future as a place where it is more certain that terrible things will happen, but we are uncertain about exactly what will happen, when it will happen and where.

Blair’s belief in the need and urgency for war was underpinned by three assumptions: that Iraq was at least capable of, and interested in, producing WMD; that Iraq was obstructing and deceiving the existing inspections and sanctions regime; and that if Iraq did produce WMD, these weapons could be acquired by terrorist organisations, who could use them to mount devastating attacks.

Blair’s assumptions effectively short-circuited the calculus of benefit versus harm. The number of soldiers and civilians who may lose their lives in a war can, plausibly, be calculated. But, for Blair, the number of lives that could be lost in a single terrorist attack was incalculable. While a future terrorist attack using WMD was unlikely, it could kill an unprecedented number of people.

How to think about security

It is because of this kind of thinking that Blair still believes that going to war was the right thing to do. As he said when giving evidence in 2010:

It’s a decision. And the decision I had to take was, given Saddam’s history, given his use of chemical weapons, given the over one million people whose deaths he had caused, given 10 years of breaking UN resolutions, could we take the risk of this man reconstituting his weapons programmes or is that a risk that it would be irresponsible to take? The reason why it is so important … is because, today, we are going to be faced with exactly the same types of decisions.

Here Blair is quite right: we are facing the same decisions today. Should drones kill suspected terrorists pre-emptively because of what they may do in the future? Should we surveil individuals who exhibit radical views for the same reason?

This worldview transcends the intentions of individual decision-makers and the machinations of “bad apples”. It has become endemic in Western governments and societies; it persists on an unconscious level, and it cannot be pinned on “Teflon Tony” alone.

To grapple with it, we need to have a serious discussion about the meaning and value of security, and how we weigh the costs of war (let’s remember that there have been as many as 250,000 violent deaths since the invasion). We need a clear way of deciding what constitutes an urgent threat, and what constitutes an appropriate response.

Kindred spirits. EPA/Claudio Onorati

At least one member of the Chilcot committee will know all about Blair’s “Doctrine of the International Community”. On April 16 1999, now-Chilcot committee member Sir Lawrence Freedman sent a fax to Number 10 with some ideas for the Chicago speech – and it was he who suggested the five conditions.

Freedman recently published an essay titled with a very pertinent question: Can there be a liberal military strategy? The final lines may be telling. Freedman argues that:

It is best to acknowledge that force invariably carries … risks and that they cannot necessarily be avoided … at the very least it requires paying much more attention to the interaction between the military and political strands of strategy when deciding upon the use of military force and explaining its purpose to the public.

Perhaps this is one of the most valuable lessons we can learn. Whether or not Blair and other senior figures within government actually misled the British public, and whether or not they did so deliberately, they genuinely believed in the need for action based on a particular political strategy of security. That strategy is what we now need to evaluate.

The Conversation

Owen D. Thomas, Lecturer in Politics and International Relations, University of Exeter

This article was originally published on The Conversation. Read the original article.

German rape law finally accepts that no means no – but is a statute enough?

Dr Sarah Cooper reflects on what her teenage self would have thought of Germany’s recent change to its rape laws.

Dr Cooper is a lecturer in the Politics department of the College of Social Sciences and International Studies.

This post first appeared in The Conversation.

Sarah Cooper, University of Exeter

In July 2016, Germany changed its legislation on rape to clarify that “no means no”. That’s right … in July 2016. Until now, by virtue of Section 177 of the German Criminal Code, a guilty verdict in cases of sexual assault demanded, shockingly, signs of physical defence.

Such laws, unsurprisingly, have long had a pernicious effect on the experience of victims. To characterise the recent changes as timely is a ridiculous understatement.

Whether or not it is accompanied by a physical struggle, fighting back, or screaming at the top of one’s lungs, the use of the term “no” signifies a lack of consent to sexual activity. To disregard this simple word amounts to rape, plain and simple. The need to assert this statement in 2016 should be redundant or, at the very least, tiresomely obvious. But such a conversation was commonplace in Germany just a few short weeks ago, when a change to the country’s legal system finally introduced the “no means no” statute.

So what will the response be across Germany – from the criminal law and justice systems, and from German society itself? As a former law student, now a public policy academic, and always an engaged citizen, I found my congratulatory response to this legal “breakthrough” soon shifting towards a scathing critique, accompanied by a strong air of cynicism.

As an undergraduate bogged down by heavy statute books, my adolescent self would have welcomed Germany’s recent changes to the black letter of the law. Miscarriages of justice such as the recent shocking case of model and television personality Gina-Lisa Lohfink – who was fined after a court ruled she had falsely accused two men of rape, despite a video surfacing in which she can be heard saying the word “no” several times – would undoubtedly have enraged my idealistic young mind. The decision has been appealed.

But is this new statute enough to rectify such ills? Should we have faith that the change in law will have any real impact on German society?

Ushered in with other changes catalysed by the Cologne attacks on New Year’s Eve 2015, when around 500 women filed complaints of sexual assault, the legal shift appears to have been motivated more by exceptional events than by a gentle evolution of the law in line with global trends.

An unforeseen shock to the political system can lead to serious, and often hasty, change. Of crucial importance to its success is a cultural and social environment which welcomes the new approach. And it is here that the red flags appear.

German media response. Shutterstock

The United Nations has long promoted an appropriate standard for sexual assault legislation, yet Germany has continually ignored its demands for the removal of physical resistance as a necessary element of a guilty verdict. It is frightening to consider what the state of play might be once the level of public empathy for the events in Cologne dissipates. How strictly and enthusiastically will bureaucrats implement this law? Can German victims really now look forward to a paragon of criminal justice?

Victims turned away

In England and Wales in 2014, a damning HMIC report revealed that those claiming to have been sexually assaulted were less likely to be believed by the police than potential victims of any other crime. One explanation for this, according to criminology research, is a continued consideration of clothing worn, intoxication levels and previous sexual history when assessing the validity of the claim. These amount to a host of “rape myths” that are as prevalent across the police force as they are in wider society.

If institutional barriers to proving sexual assault persist across the UK, where a requirement for bodily defence has long been disregarded, how can such barriers be so easily removed in Germany? Running alongside my own academic analysis, therefore, is a strong air of cynicism from a concerned member of the public.

Certainly, in a global society in which flippant jokes are just one example of a flourishing rape culture, the need for a shift in perception appears vital for the protection of victims. That Germany’s statute book now boasts this standard is unequivocally a hopeful first step. But it must be acknowledged that the justice system may yet remain ingrained with an understanding that a lack of consent is demonstrated through physically fighting back. Although the publicly appetising “no means no” tag line garners much attention, a far more complicated culture looks set to constrain the impact of those vital words which have finally been voiced in parliament.

The Conversation

Sarah Cooper, Lecturer in Politics, University of Exeter

This article was originally published on The Conversation. Read the original article.

The Medieval Somme: forgotten battle that was the bloodiest fought on British soil

This blog is by James Clark, Professor of History in the College of Humanities.

This post first appeared in The Conversation.

James Clark, University of Exeter

A Battle of the Somme on British soil? It happened on Palm Sunday, 1461: a day of fierce fighting in the mud that felled a generation, leaving a longer litany of the dead than any other engagement in the islands’ history – reputed in some contemporary reports to be between 19,000 – the same number killed or missing in France on July 1 1916 – and a staggering 38,000.

The battle of Towton, fought near a tiny village standing on the old road between Leeds and York, on the brink of the North York Moors, is far less known than many other medieval clashes such as Hastings or Bosworth. Many will never have heard of it.

But here, in a blizzard on an icy cold March 29 1461, the forces of the warring factions of Lancaster and York met in a planned pitched battle that soon descended into a mayhem known as the Bloody Meadow. It ran into dusk, and through the fields and byways far from the battlefield. To the few on either side that carried their weapon to the day’s end, the result was by no means clear. But York in fact prevailed and within a month (almost to the day), the towering figure of Duke Edward, who stood nearly six-feet-five-inches tall, had reached London and seized the English crown as Edward IV. The Lancastrian king, Henry VI, fled into exile.

Victor: the Yorkist Edward IV. The National Portrait Gallery

Towton was not merely a bloody moment in military history. It was also a turning point in the long struggle for the throne between these two dynasties, whose rivalry has provided – since the 16th century – a compelling overture to the grand opera of the Tudor legend, from Shakespeare to The White Queen. But this summer, as national attention focuses on the 100th anniversary of the Battle of the Somme, we might also take the opportunity to recall a day in our history when total war tore up a landscape much closer to home.

An English Doomsday

First, the historian’s caveats. While we know a remarkable amount about this bloody day in Yorkshire more than 550 years ago, we do not have the benefits granted to historians of World War I. Towton left behind no battle plans, memoranda, maps, aerial photographs, nor – most valuable of all – first-hand accounts from those who were there. We cannot be certain of the size of the forces on either side, nor of the numbers of their dead.

A death toll of 28,000 was reported as early as April 1461 in one of the circulating newssheets that were not uncommon in the 15th century – and was taken up by a number of the chroniclers writing in the months and years following. This was soon scaled up by others to nearly 40,000 – about 1 per cent of England’s entire male population – a figure which also came to be cemented in the accounts of some chroniclers.

This shift points to the absence of any authoritative recollection of the battle – but almost certainly the numbers were larger than were usually seen, even in the period’s biggest clashes. Recently, historians have curbed the claims but the latest estimate suggests that 40,000 men took to the field, and that casualties may have been closer to 10,000.

Lethal: an armour-piercing bodkin arrow, as used at Towton. by Boneshaker

But as with the Somme, it is not just the roll-call, or death-toll, that matters, but also the scar which the battle cut across the collective psychology. Towton became a byword for the horrors of the battlefield. Just as July 1 1916 has become the template for the cultural representation of the 1914-18 war, so Towton pressed itself into the popular image of war in the 15th and 16th centuries.

When Sir Thomas Malory re-imagined King Arthur for the rising generation of literate layfolk at the beginning of the Tudor age, it was at Towton – or at least a battlefield very much like it – that he set the final fight-to-the-death between Arthur and Mordred (Morte d’Arthur, Book XXI, Chapter 4). Writing less than ten years after the Yorkist victory, Malory’s Arthurian battleground raged, like Towton, from first light until evening, and laid waste a generation:

… and thus they fought all the long day, and never stinted till the noble knights were laid to the cold earth and ever they fought still till it was near night, and by that time there was there an hundred thousand laid dead upon the ground.

Lions and lambs

In his history plays, Shakespeare also presents Towton as an expression of all the terrible pain of the years of struggle that lasted over a century, from Richard II to Henry VIII. He describes it in Henry VI, Part 3, Act 2, Scene 5:

O piteous spectacle! O bloody times! While lions war and battle for their dens, poor harmless lambs abide their enmity. Weep, wretched man, I’ll aid thee tear for tear.

Both the Somme and Towton saw a generation fall. But while it was a young, volunteer army of “Pals” that was annihilated in 1916, osteo-analysis suggests that Towton was fought by grizzled older veterans. But in the small society of the 15th century, this was no less of a demographic shock. Most would have protected and provided for households. Their loss on such a scale would have been devastating for communities. And the slaughter went on and on. The Lancastrians were not only defeated, they were hunted down with a determination to see them, if not wiped out, then diminished to the point of no return.

Battle of Towton: initial deployment. by Jappalang, CC BY-SA

For its time, this was also warfare on an unprecedented scale. There was to be no surrender, no prisoners. The armies were strafed with vast volleys of arrows, and new and, in a certain sense, industrial technologies were deployed, just as they were at the Somme. Recent archaeology confirmed the presence of handguns on the battlefield, evidently devastating even if not quite in the same league as the Germans’ Maschinengewehr 08 in 1916.

These firearm fragments are among the earliest known to have been used in northern European warfare and perhaps the very first witnessed in England. Primitive in their casting, they presented as great a threat to the man who fired them as to their target. Surely these new arrivals would have added considerably to the horror.

Fragments of the past

Towton is a rare example in England of a site largely spared from major development, and vital clues to its violent past remain. In the past 20 years, archaeological excavations have not only extended our understanding of the events of that day but of medieval English society in general.

The same is true of the Somme. That battlefield has a global significance as a place of commemoration and reconciliation, especially as World War I passes out of even secondhand memory. But it also has significance as a site for “live” research. Its ploughed fields and pastures are still offering up new discoveries which can carry us back not only to the last moments of those lost regiments but also to the lost world they left behind them, of late Victorian and Edwardian Britain.

It is essential that these battlefields continue to hold our attention. For not only do they deepen our understanding of the experience and mechanics of war, they can also broaden our understanding of the societies from which such terrible conflict springs.

The Conversation

James Clark, Professor of Medieval History, University of Exeter

This article was originally published on The Conversation. Read the original article.

Obstacles to social mobility in Britain date back to the Victorian education system

Postgraduate researcher Jonathan Godshaw Memel takes a closer look at how the social mobility problems affecting new university students date back to the Victorian era.

This article first appeared in The Conversation.


Jonathan Godshaw Memel, University of Exeter

Despite the growing number of young people attending university, comparatively few disadvantaged students are accepted into Britain’s most prestigious institutions. Many of the most selective universities have missed recent targets to improve access, as the least privileged students remain more than eight times less likely to gain places than their peers from the most prosperous backgrounds.

The government has made recent promises that universities will become “engines” of “social mobility”. Yet a more stratified, less fair university sector seems the likely result of the new competitive landscape announced in the Higher Education and Research Bill that is currently making its way through parliament.

Of the 13 most selective institutions identified by the Sutton Trust in 2015, ten were either founded or significantly reformed in the 19th century. Many of the problems surrounding university access date back to the Victorian era.

Workers and thinkers

The Victorians determined the quality and content of teaching according to the class background of students, establishing varied levels of school attainment that still challenge admissions officers today. The 19th-century school system was organised in relation to the economy, and many more workers than thinkers were required to support the rapid growth of manufacturing.

Under Lord Taunton’s leadership in the 1860s, the Schools Inquiry Commission organised school allocation based on family occupation. Pupils from the most prosperous backgrounds attended the public and first-grade secondary schools, where they prepared for university admission by following an extensive classical education until the age of 18.

The Great Hall at Durham University at the end of the 19th century. A. D. White Architectural Photographs, Cornell University Library/wikimedia.com, CC BY

Those from the lower-middle classes were to focus on applied subjects until the age of 14. “It is obvious,” stated the Taunton Report, “that these distinctions correspond roughly, but by no means exactly, to the gradations of society”. When the system was re-evaluated in 1895, the Bryce Commission agreed that traditional universities should continue to admit students from the upper and upper-middle classes who had been best prepared for its demands.

Social exclusivity was also central to the model of Victorian higher education outlined by Cardinal Newman in his 1852 lecture series titled The Idea of a University. Newman’s influential idea of liberal education relied on strict distinctions between applied and non-applied forms of work.

He emphasised “inutility” and “remoteness from the occupations and duties of life” as important features of an ideal university. A privileged few developed their intellectual faculties amid collegiate surroundings while applied forms of work and knowledge remained the “duty of the many”.

Ideas above your station

Conservative commentators were surprised that so many still aspired towards this elite form of education. As the MP Sir John Gorst wrote in 1895:

It is remarkable that the desire, even of the poorer workers, for knowledge seems to be directed towards abstract science and general culture, rather than towards those studies which could be turned to practical use in the manufacturing industry.

In his novel Jude the Obscure, also written in 1895, Thomas Hardy explored how this traditional university model helped to support existing social hierarchies. The stonemason Jude Fawley pursues his educational dreams at Christminster, a fictional version of Oxford. The letter rejecting his application to “accumulate ideas, and impart them to others” reveals the class bias underpinning his failure. As a “working man”, Jude is told that he would have a “much better chance of success in life by remaining in your own sphere and sticking to your trade”.

The tragedy of Jude’s failure depends on his wholehearted identification with Christminster’s mission as “a unique centre of thought and religion – the intellectual and spiritual granary of this country”. Hardy’s novel shows that the Victorian university was rarely the disinterested institution it appeared to be.

Access to the marketplace

University education has become much fairer since 1895. Over 40 per cent of young people now enter higher education, whereas even in 1963 only about four in every 100 young people studied at university.

Yet the traditional university model remains associated with social privilege, distinguished from newer institutions that suffer for their association with what some parts of the media have called “Mickey Mouse courses”.

The prospects of disadvantaged students at prestigious universities may be harmed by the rise of student tuition fees in line with inflation. While the most selective institutions deliver the best career prospects upon graduation, their high fees and entry requirements may deter students from disadvantaged backgrounds.

To avoid further stratification of the university system, proper financial support must exist to ensure the least wealthy students at in-demand institutions do not suffer from the highest levels of debt. Admission requirements should also be closely considered to account for different levels of prior attainment.

The drive to create a market out of the university sector must not prompt a return to Victorian principles. A student’s educational prospects should not be determined by their family background.

The Conversation

Jonathan Godshaw Memel, Postdoctoral Researcher and AHRC Cultural Engagement Fellow, University of Exeter

This article was originally published on The Conversation. Read the original article.

How your parents’ lifespan affects your health

A study of nearly 200,000 volunteers has shown how your parents influence your life expectancy. In this blog, Dr Luke Pilling and Dr Janice Atkins, research fellows in the University of Exeter Medical School, reflect on their recently published report on how long-lived parents can affect your own longevity.

This post first appeared in The Conversation.

Luke Pilling, University of Exeter and Janice Atkins, University of Exeter

The longer your parents live, the more likely you are to live longer and have a healthy heart. These are the results of our latest study of nearly 200,000 volunteers.

The role of genetics in determining the age at which we die is increasingly well understood, but the relationship between parental age at death and survival and health in their offspring is complex, with many factors playing a part. Shared environment and lifestyle choices – diet and smoking habits, for example – also play a large role. But, even accounting for these factors, parents’ lifespan is still predictive of health in their offspring – something we have also shown in previous research. However, it was unclear how the health advantages of having longer-lived parents were transferred to children in middle age.

In the new study, published in the Journal of the American College of Cardiology, we used information on people in the UK Biobank study. The participants, aged 55 to 73, were followed for eight years using data from hospital records. We found that for each parent that lived beyond their seventies, the participants had 20 per cent less chance of dying from heart disease. To put this another way, in a group of 1,000 people whose fathers died at 70 and who were followed for ten years, around 50 on average would die from heart disease. But when compared to a group whose fathers died at 80, on average only 40 would die from heart disease over the same ten-year period. Similar trends were seen when it came to the age of mothers.
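The group-level arithmetic behind those figures can be made explicit with a minimal sketch (assuming, purely for illustration, that the quoted 20 per cent reduction applies directly to the baseline ten-year risk):

```python
# Minimal sketch of the group-level arithmetic quoted above, assuming
# the 20% relative reduction applies directly to the baseline risk.

baseline_deaths_per_1000 = 50    # fathers died at 70, ten-year follow-up
relative_risk_reduction = 0.20   # 20% lower risk with a father who died at 80

expected_deaths = baseline_deaths_per_1000 * (1 - relative_risk_reduction)
print(expected_deaths)  # 40.0 deaths per 1,000 over ten years, as in the text
```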

Interestingly, family history of early heart attacks is already used by physicians to identify patients at increased risk of disease.

Family history?
Shutterstock

All is not lost

The biggest genetic effects on lifespan in our studies affected the participants’ blood pressure, their cholesterol levels, their body mass index and their likelihood of being addicted to tobacco. These are all factors that affect the risk of heart disease, which is consistent with the lower rates of heart disease that we saw in the offspring. We did find some clues in our analysis of novel genetic variants that there might also be other pathways to longer life, for example through better repair of damage to DNA, but much more work is needed on these.

It is really important to note that our findings were group-level effects. These effects do not necessarily apply to individuals, as so many factors affect one’s health. So the results are really positive: although people with longer-lived parents are more likely to live longer themselves, this does not mean people with shorter-lived parents should lose hope. There are lots of ways for those with shorter-lived parents to improve their health.

Current public health advice about being physically active (for example, going for regular walks), eating well and not smoking is very relevant – people can really take their health into their own hands. People can overcome their increased risk by choosing healthy options – not smoking, keeping active, avoiding obesity – and by getting their blood pressure and cholesterol levels tested. Of course, they should discuss their family history with their physicians, as there are good treatments for some of the causes of premature death.

Conversely, people with long-lived parents cannot assume they will therefore live long lives – if you are exposed to the big health risk factors, this will be more important to your health than the age at which your parents died.

The Conversation

Luke Pilling, Research Fellow in Genomic Epidemiology, University of Exeter and Janice Atkins, Research Fellow, University of Exeter

This article was originally published on The Conversation. Read the original article.

Turkey crisis: how will oil and gas supplies be affected?

The recent attempted military coup in Turkey has had far reaching consequences for the country.

In this blog, Associate Research Fellow Joseph Dutton looks at the coup’s potential impact on our oil and gas supplies.

Joseph is based in Geography’s Energy Policy Group.

This post first appeared in The Conversation.


Joseph Dutton, University of Exeter

The Turkish military’s attempted coup to topple president Recep Tayyip Erdogan didn’t last long. The government restored control the following day and soon declared a three-month state of emergency, with more than 60,000 people since arrested or placed under investigation.

This isn’t just Turkey’s problem. The country’s pivotal position in the transport of oil and gas gives it huge geopolitical significance. Straddling Europe and the Middle East and providing export routes from Central Asia to the rest of the world, Turkey is an important and growing energy transit hub.

Shipping in the Bosphorus straits between the Black Sea and Mediterranean Sea was halted in the immediate aftermath of the attempted coup because of security concerns, though it was soon reopened. Although oil and gas flows were broadly unaffected by the coup, the prospect of prolonged instability raises the spectre of disruption to their transit.

Two major oil pipelines pass through Turkey to the Ceyhan terminal on its Mediterranean coastline. One begins in Baku, the capital of oil-rich Azerbaijan, before passing through Georgia. The other delivers oil from Kirkuk in northern Iraq and has been affected by fighting with Islamic State and Kurdish insurgents, and by disputes between Baghdad and the Kurdish government. When fully operational, the two pipelines have a combined capacity of 2.7m barrels per day – more than three times the UK’s daily production.

The country’s location at the mouth of the Black Sea means it plays an equally key role in the seaborne oil trade. Around 3 per cent of the world’s oil and petroleum products pass through the Bosphorus from Russia, Ukraine and central Asia.

Turkey’s pipelines link Europe with Central Asia, Russia and the Middle East. US Energy Information Agency

Turkey is also an important transit state for the EU’s natural gas imports. The Blue Stream pipeline, which runs underneath the Black Sea from Russia, carried 14.7 billion m³ of gas in 2013 – equivalent to 9 per cent of the total Russian gas supplies to Europe that year. Blue Stream was built in the early 2000s in response to growing disruption of gas flows through Ukraine and Belarus. The recent conflict in Ukraine highlights how Turkey’s importance has grown in recent years.

Further into the future, the planned Southern Gas Corridor development would see gas from fields in Azerbaijan flowing through Turkey to the EU by 2018, while the planned Turkish Stream pipeline across the Black Sea would also circumvent Ukraine.

This refinery in Izmit, near Istanbul, processes crude oil delivered by tanker. ARTEM ARTEMENKO / shutterstock

Turkey’s key location for energy supplies and regional affairs means the EU has always considered it a strategic partner. The country’s accession process for membership started back in 2005. And this same strategic importance may be useful even today – some have suggested Europe’s timid response to Erdogan and his crackdown after the attempted coup is because of Turkey’s crucial role in supplying the EU with energy. Although European Commission president Jean-Claude Juncker said Turkey was “not in a position to become a member” following the coup, the ongoing migrant crisis and concerns over energy supplies mean it will be difficult for the EU to take too strict a position.

As the EU seeks to become less dependent on Russian gas, it will need to develop supplies from Central Asia through Turkey’s Southern Corridor, while also increasing its supply from global liquefied natural gas (LNG) markets. At the same time, Russia will continue to reroute its gas exports away from Ukraine, instead increasing flows through Turkey and through the expanded Nord Stream pipeline to Germany.

It is therefore highly unlikely that the strategic nature of EU-Turkey relations will change in the foreseeable future, even if Erdogan’s government places further restrictions on society. But, given further civil unrest or terrorism could increase political instability and threaten Turkey’s energy sector, Europe is right to be worried.

The Conversation

Joseph Dutton, Research Fellow, Energy Policy Group, University of Exeter

This article was originally published on The Conversation. Read the original article.