
Multiple sclerosis survivors swear by hyperbaric oxygen – but does it work?

Dr Paul Eggleton, Senior Lecturer in Immunology at the University of Exeter Medical School and Visiting Professor at the University of Alberta, writes about the use of oxygen therapy for patients with MS.

This article first appeared in The Conversation.

Paul Eggleton, University of Exeter

There is no cure for multiple sclerosis (MS) yet. As a complex neurodegenerative disease of the brain, it is incredibly difficult to treat. Despite the development of new and sophisticated therapies to control the inflammation and physical symptoms of the disease, these treatments don’t work for everyone. This is because MS comes in many guises and one treatment does not fit all. Perhaps for this reason people with MS are turning to alternative means of controlling their condition.

Many of the 100,000 people with MS in the UK have taken charge of managing their treatment. With the assistance of 60 or more independent charitable MS therapy centres, people with the disease regularly enter a chamber and breathe oxygen under moderate pressure (hyperbaric oxygen). Some people have done so for more than 20 years.

The air we breathe contains 21% oxygen, but 100% oxygen is considered a drug and is prescribed in hospitals to aid people’s recovery. In the case of MS, people self-prescribe the hyperbaric oxygen, which is delivered to them by trained operators. But does breathing pure oxygen under pressure on a weekly basis do them any good?

The idea to use oxygen as a treatment for MS began over 45 years ago. In 1970, two Romanian doctors, Boschetty and Cernoch, treated patients with brain injuries with pressurised oxygen to help more oxygen enter their tissues – oxygen helps protect nerve cells from damage and maintains the integrity of the blood-brain barrier. In a study of MS patients, they found that symptoms in 15 out of 26 volunteers improved. This led to further interest in the use of hyperbaric oxygen to treat MS specifically.

Since Boschetty and Cernoch’s discovery, around 14 clinical trials have been conducted. The trials have been on relatively small numbers of people and have reported conflicting results, ranging from great improvements to none at all. This has led to a dilemma: should clinicians endorse the use of hyperbaric oxygen for MS or not?

Not officially sanctioned

The clinical regulatory bodies in the US and the UK, the FDA and NICE respectively, do not feel the clinical trial evidence is strong enough to endorse the procedure, yet thousands of people in the UK and elsewhere continue to treat themselves with hyperbaric oxygen. Between 1982 and 2011, over 20,000 people with MS in the UK used hyperbaric oxygen over 2.5m times.

Multiple sclerosis is a chronic inflammatory disease of the brain. It is usually diagnosed between the ages of 20 and 40. Lesions in the brain develop as a result of inflammatory autoimmune cells crossing the blood-brain barrier and destroying the protective coating (myelin) that surrounds the axons of some nerve cells. Over time MS develops into a neurodegenerative disease, leading to problems with vision, bladder control and mobility.

The brain’s ability to repair some of this damage helps people with MS to feel better for a while before relapsing once more. Eventually the disease becomes chronic and the ability to repair the damage and undergo remission declines. Most conventional treatments focus on the early phases of the disease. Unfortunately, there are few treatments for the later stages of MS.

Perhaps the lack of prescribed drugs that work for all people with MS – or the unpleasant side-effects produced by those that do work for some – has driven people to seek other treatments. Despite the scepticism of some doctors, many people with MS claim that hyperbaric oxygen therapy has benefits, including improvements in mobility, bladder control, pain relief and gait. However, since the benefit is transient, regular exposure to pressurised oxygen is required to sustain it.

The increase in oxygen to the brain may lead to a number of effects such as speeding repair to damaged tissue, or inhibiting the ability of immune cells to cross the blood-brain barrier and cause damage. These possibilities are being investigated.

Poorly designed trials

So why are many clinicians sceptical of hyperbaric oxygen? The main reason lies in how improvement is judged: trials rely on various MS disability-status scores. In the earlier clinical trials, hyperbaric oxygen was not used over a sustained period of time (only a few weeks), and the volunteers recruited often had irreversible damage, so little or no improvement in scores was seen.

So are poorly controlled clinical trials to blame for the conflict of opinion? Probably, yes. Until we understand more at the molecular level about how oxygen under pressure can make sustained changes to various biological processes in the brain, people with MS will continue to use the treatment and the majority of the medical community will remain unconvinced of its merits.

The Conversation

Paul Eggleton, Senior lecturer in Immunology, University of Exeter

This article was originally published on The Conversation. Read the original article.

Melania Trump, the Daily Mail and a history of libel tourism

Dr Timon Hughes-Davies, a lecturer in the Law School, looks at the recent libel complaint filed by Melania Trump against the Daily Mail.

This article first appeared in The Conversation.

Timon Hughes-Davies, University of Exeter

Readers of the Daily Mail were recently treated to a fairly rare event: the paper published a retraction of a news story it had run about Melania Trump, the wife of the Republican presidential candidate and prospective first lady of the United States. The retraction related to a story, published both in the newspaper and on the Mail’s website, which repeated allegations from an unofficial biography of the third Mrs Trump.

These allegations – which are recited in the complaint and have been widely repeated on the internet – relate to her immigration status, the circumstances in which she met her husband and her employment when she first came to the United States. Given her husband’s position on illegal immigration, the first of these might have proven to be the most politically sensitive allegation.

The retraction followed the filing of a complaint, in a Maryland court, against the Daily Mail and another defendant – a blogger who made similar allegations. It appears that the complaint is only in respect of the article’s publication on the Daily Mail’s website, rather than in the print version. However, the retraction appeared online and in the paper.

Libel tourism

Such a claim, by a US citizen against a British publication, raises issues of jurisdiction and it is interesting that the complaint was filed in the US, rather than in the more claimant-friendly jurisdiction of England and Wales. While the Daily Mail is a British newspaper, its website has a significant international readership: the court papers refer to United States web traffic of 2m visitors per day.

But the Daily Mail’s article was most prominently published in England and Wales and it might have been reasonable for Trump to issue proceedings in the High Court. However, recent changes to both English and US law have significantly restricted the ability of claimants to start legal claims outside their own country of residence.

Retraction as it appeared in the Daily Mail, Friday September 2.
Daily Mail

Until fairly recently, English courts were relatively relaxed about “forum shopping” or “libel tourism” in defamation. In cases where the publication took place outside the claimant’s own jurisdiction, English courts were readily persuaded that they should hear the claim. When Liberace was defamed by the Daily Mirror in 1956, he chose to sue in England and Wales – he was entitled to protect his considerable reputation in England. If the Daily Mirror had been available in the United States, then he might have chosen to sue in that country. To a large extent, the decision was up to the claimant.

However, there has never been an unrestricted right for non-residents of England and Wales to bring claims in English courts. In 1937, M. Kroch, a resident of France, who had been defamed in a Belgian newspaper, was refused permission to bring a claim by the Court of Appeal. The report does not explain why M. Kroch wished to sue in England, but it does record that he failed to establish any sort of reputation or connection within the jurisdiction, apart from staying in rented rooms while bringing his claim.

However, as long as the claimant had a reputation within the UK, and the libel had been published – in defamation terms, that the words had been read by a person other than the author or subject of the statement – the courts would grant permission for the case to proceed.

It is fair to say that the bar was very low, both in terms of the claimant’s public profile within England and Wales and the extent to which the statement was published. In the 2005 case of Khalid Salim Bin Mahfouz and others vs Dr Rachel Ehrenfeld, the Saudi businessman sued Ehrenfeld, an American author, for alleging that he had helped to fund terrorism. While the claimant had some connection with England and Wales, Ehrenfeld had none – and the book in question was not published in the UK. However, the first chapter was available online and 23 copies had been sold, via the internet, to buyers in England. This was a sufficient connection with England and Wales for the High Court to allow the claim to proceed. The court found for the claimant.

Protecting free speech

The Ehrenfeld case, and other high-profile claims, provoked a strong reaction in the United States – the state of New York enacted the Libel Terrorism Protection Act in 2008 and, in 2010, Congress followed suit with the Securing the Protection of our Enduring and Established Cultural Heritage (SPEECH) Act.

Both these acts prevent American courts from enforcing English (and other jurisdictions’) libel judgments, unless the foreign court provides at least the same level of protection to free speech as American courts. It should be noted that, by treaty or as a matter of international comity, courts will generally enforce international judgments.

On the British side of the pond, the Defamation Act 2013 also tackled the issue of libel tourism – now, where the publisher is not a resident of the UK or other Lugano Convention countries, the court has to be satisfied that England and Wales is “clearly the most appropriate place in which to bring an action”. However, this would not necessarily prevent a non-resident, such as Mrs Trump, from bringing a claim in respect of a publication within England and Wales.

Given the context of the current presidential election campaign and the importance of Melania Trump’s reputation as reflecting on her husband, it is how this will play out with American voters that is important – so it is the coverage in America and the chance to answer those allegations to the American public that matters most. Given this, it would have been difficult to argue that England was the most appropriate place to take action.

The Conversation

Timon Hughes-Davies, Lecturer in Law, University of Exeter

This article was originally published on The Conversation. Read the original article.

Teen obesity caused by going into ‘power-saving mode’

As new research on the subject of teen obesity hits the headlines, Professor Terry Wilkin – Professor of Endocrinology and Metabolism at the University of Exeter Medical School – looks at the evolutionary ‘power-saving’ trait which may be trapping today’s teenagers.

This post first appeared in The Conversation.


Terry Wilkin, University of Exeter

It is possible that modern teenagers are trapped by a trait which evolved thousands of years ago to help them through puberty, but which now leaves them vulnerable to obesity.

Adolescents need 20-30% more energy every day to fuel the growth and changes in body composition that characterise the six years or so of pubertal development. Energy comes from calories in the food they eat, but how could hunter-gatherers guarantee the extra calories they needed as adolescents when their food supply was limited?

We believe we may have unearthed a strategy that worked well for our ancestors, but which now does quite the opposite.

In our research, we have been monitoring a group of children as they progress through childhood from five to 16 years of age (the EarlyBird study). We found, as expected, that more energy was burnt as children got bigger. However, after the age of ten, the calories they burnt unexpectedly fell, despite the fact that they were growing faster than ever. The number of calories burnt by age 15 was around 25% lower in both boys and girls. Only at 16 years of age, when the growth spurt was over, did energy expenditure begin to increase again.

The study has three important qualities: it is longitudinal (which means that it measures the same group of children throughout), its age spread is very narrow (which means that age-related changes can be more accurately identified), and few people dropped out of the study (important statistically).

In a publication last year, we described two distinct waves of weight gain; one occurring sometime between birth and five years of age, and the other in adolescence. The early wave affected only some children – the offspring of obese parents – while the later wave in adolescence involved children generally.

Poor parental eating habits passed on to their children seemed a likely explanation for early obesity, but we had no good explanation for the later wave of obesity, until now.

Mystery solved?

Energy balance can be thought of as a bank account. Calories are deposited, and calories are spent. Body size (the balance in the account) depends on the difference between the two. So, although the explanation we offer is entirely speculative – we will never really know, because we don’t have data on our ancestors – we propose that a downward shift of energy expenditure into “power-saving mode” might help to conserve the calories needed for the growth spurt in puberty.

The energy burnt over 24 hours has two components: voluntary and involuntary. The voluntary component is physical activity, which is easy to understand. What people understand less readily is that the involuntary component is by far the bigger one. Involuntary energy expenditure (so-called resting energy expenditure) is used just to keep alive; to keep the blood temperature at 36.8°C, fuel the brain to think and enable the organs to function.

Involuntary energy expenditure accounts for around 75% of the total calories burnt in a day, which explains why physical activity has a limited impact on obesity. A fall of 25% in resting energy expenditure makes a big hole in the calories burnt each day.
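
To put rough numbers on that hole (these are illustrative figures of my own, not EarlyBird measurements), here is a short Python sketch of how a 25% fall in the resting component feeds through to the daily total:

    # Illustrative arithmetic only - assumed figures, not EarlyBird data.
    total_burn = 2500.0        # assumed total daily expenditure for a teenager (kcal)
    resting_share = 0.75       # resting (involuntary) share of the total, as above
    resting = total_burn * resting_share        # ~1,875 kcal/day at rest

    drop_in_resting = 0.25     # the ~25% fall in resting expenditure
    saved = resting * drop_in_resting           # ~470 kcal/day conserved
    new_total = total_burn - saved

    print(f"Calories conserved: {saved:.0f} kcal/day")
    print(f"Total expenditure falls by {saved / total_burn:.0%}")   # ~19%

On these assumed numbers, “power-saving mode” frees up several hundred calories a day – a meaningful contribution towards the extra energy that puberty demands, but a serious surplus if food intake does not fall with it.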

Why does all this matter, and why does it occur? It matters because it makes obesity more difficult to avoid if teenagers are trapped by a long period of low-calorie burn. We don’t know for sure why it occurs, but could speculate that it may be a throw-back to earlier evolutionary times, when calories were scarce but adolescents still needed 25-30% more calories a day to fuel growth and bodily changes.

Not as easy as buying a burger.
Nicolas Primola/Shutterstock.com

How did hunter-gatherers assure the supply of extra calories needed to reach maturity? It is possible that their bodies adapted by switching down their calorie expenditure, so as to divert the calories saved to the energy needed to grow. Obesity is a recent problem, and the adaptation now works adversely in a world where calories are cheap and readily available in a highly palatable mix of sugary drinks and calorie-dense foods.

The worst outcome is that adolescents and their families take these findings to mean that they can do nothing about teenage obesity. The best is that a new explanation for teenage obesity leads to better understanding, and an avoidance of the foods that are the cause.

The Conversation

Terry Wilkin, Professor, University of Exeter

This article was originally published on The Conversation. Read the original article.

How the Battle of the Bastards squares with medieval history

Professor James Clark, Professor of History in the College of Humanities, has research interests that include themes in religion and in intellectual and cultural life, reaching across the traditional boundaries of medieval and early modern history.

In this blog, Professor Clark looks at how the medieval period is portrayed in film and television. From Game of Thrones to Lord of the Rings, he considers how medieval culture is represented – and misrepresented.

This post first appeared in The Conversation.

James Clark, University of Exeter

This article contains spoilers for Game of Thrones season six, episode nine.

A 12-foot giant, his unhuman features oddly familiar (almost homely, after two screen decades colonised by combat-ready orcs) wheels around a wintry courtyard, wondering at the thicket of arrow shafts now wound around his torso. He stops, sways somewhat, and falls, dead. So Wun Wun the Wildling met his doom in The Battle of the Bastards, the penultimate episode of this season of Game of Thrones.

One casualty which, with countless others in the scenes before and after, might have a claim to a place in history, apparently. “The most fully realised medieval battle we’ve ever seen on the small screen (if not the big one too)”, is the breathless verdict from The Independent.

As a full-time historian of the other Middle Ages – Europe’s, every bit as feuding and physical as the Seven Kingdoms but with better weather – I am struck by the irony that Martin’s mock-medieval world might now be seen to set the bar for authenticity. There’s no doubt that for much of screen’s first century, medieval was the Cinderella era: overlooked, patronised and pressed into service for clumsy stage-adaptations, musical comedy and children. But over the past two decades – almost from the moment that Marsellus fired the line in Pulp Fiction (1994) – we have been “getting medieval” more and more.

Medieval millennium

Any connection between Braveheart (1995) and recorded history may have been purely coincidental, but its representation of the scale and scramble of combat at the turn of the 13th century set a new standard, pushing even Kenneth Branagh’s earnest Henry V (1989) closer to the Panavision pantomime of Laurence Olivier’s film (1944). Branagh had at least toned down the hues of his happy breed from the bold – indeed, freshly laundered – primary colours of Sir Laurence’s light brigade, but his men-at-arms still jabbed at each other with the circumspection of stage-fighters while noble knights strutted and preened.

Of course, at times it threatened to be a false dawn: First Knight (1995) and A Knight’s Tale (2001) are undeniable obstacles in making the case for a new realism. But new epics have extended the territory taken in Mel Gibson’s first rebel assault.

Now already a decade old, Kingdom of Heaven (2005) achieved a level of accuracy without reducing the cinematic to the documentary. For the first time, the scene and size of the opposing forces were not compromised by either budget or technological limitations. The audience is led to the gates of the Holy City as it would have appeared to the Crusaders. The armies’ subsequent encounter with one another is captured with the same vivid colour and fear that the contemporary chroniclers conjure, catching especially the crazy spectacle of Christian liturgical performance – crucifixes, chanting priests – on the Middle Eastern plain. And descriptive details were not lost, particularly in the contentious arena of Crusader kit, now a hobbyists’ domain into which only the brave production designer – and braver historian – strays.

Meanwhile, Peter Jackson painted energetically with his medieval palette in the Lord of the Rings trilogy, not, of course, pointing us to a place or time but certainly providing a superior visual vocabulary for the experience of combat in a pre-industrial age.

Back to basics

So, has Game of Thrones bettered this?

There are certainly some satisfyingly authentic twists and turns woven around The Battle of the Bastards. The most significant casualties occur away from the melee of the pitched battle in one of a number of routs (medieval battles always ended with a ragged rout, not a decisive bloodbath). And the principal actors in the drama do not readily present themselves for a tidy dispatch. The mounted forces of Westeros are rarely decisive and even fighters of the highest status do not see out the day in the saddle.

Also accurate are the individual acts of near-bestial violence which occur, are witnessed and go on to define the significance of battle. The deliberate breaking of Ramsay’s face by Jon Snow is a point-of-entry into a central but still under-researched dimension of medieval conflict: ritual violence, such as the systematic, obscene dismemberment of the dead and dying English by their Welsh enemies during the Glyn Dwr wars.

Before the fall.
©2016 Home Box Office, Inc.

Yet I suspect that these are not the snapshots that have won the superlatives. No doubt it is the standout features of the battle scenes: their scale, the weaponry and the “reality” of wounding in real time that have held most attention. And these threaten to turn us again in the direction of that Ur-Middle Ages which we had every reason to hope we had left for good.

Because medieval armies were always smaller than was claimed, far smaller than we see here. Weaponry was not fixed in time, but – more like the Western Front in 1917 than you might imagine – a fluid domain of fast-developing technology. It is time that directors gave space to firearms, which were the firsthand experience of any fighting man from the final quarter of the 15th century. They must also shed their conviction that “medieval” means hand-to-hand combat. It was sustained arrow-fire that felled armies, not swordplay, nor fisticuffs.

Life on the medieval battle path also meant poor health, rapid ageing and no personal grooming. So we are also overdue sight of a medieval fighting force as it might actually have arrived on the field: neither sporting sexy hairstyles, nor match-fit for action. They of course arrived after months of marching, if they arrived at all: dysentery passed through campaigning forces with fatal routine. They faced their foe in a youth that would have felt more like middle age to you and me.

And in the middle of this Ur-medieval battlefield there is a 12-foot giant, just to confirm that this is not medieval Europe, by any means.

The Conversation

James Clark, Professor of Medieval History, University of Exeter

This article was originally published on The Conversation. Read the original article.

Why mortgage rates will rise with Brexit


As we near the EU referendum, Professor Alan Gregory – Professor of Corporate Finance at the University of Exeter Business School – explains why a vote to leave the European Union may result in our mortgage rates rising.

This post first appeared in The Conversation.

Alan Gregory, University of Exeter

How Brexit would affect house prices and homeowners is one of the big questions in the build-up to the UK’s EU referendum vote. George Osborne has said that mortgage rates will rise if there’s an Out vote. Meanwhile, the Bank of England’s governor, Mark Carney, is currently reviewing the possibility of an emergency interest rate cut in the event of a Brexit vote. The two would seem to be contradictory, but both are feasible outcomes if Britain votes to leave the EU.

As with most issues affecting the economy, there are several factors at play. First off, there is an important difference between long and short run rates. One of the Bank of England’s main concerns is controlling inflation – something it does by manipulating interest rates. But interest rates do not exist in a vacuum. Exchange rates, inflation and interest rates are all related to one another. There is also the UK’s balance of payments problem, which could be of vital importance.

The UK’s current account deficit was at an all-time record £96.2 billion last year, equivalent to 5.8 per cent of the country’s total economic output for the year. That is one big chunk of the economy. Strangely, this in itself isn’t a problem as, provided capital can move freely, inward investment in the UK can plug the gap. That inward investment could be foreign firms investing directly in UK businesses, or foreign investors buying long-term UK government debt (known as gilts).

Unfortunately, this inward investment can very easily go into reverse. Most obviously, foreign investors can stop investing in new factories, and can move their production lines abroad. For example, the new Chinese owners of Sunseeker, a top-end powerboat manufacturer, have warned of precisely that danger. Similarly, foreign investors may choose not to buy UK government debt.

Substantial shock

All this matters greatly because, with the uncertainty surrounding Britain’s future, foreign investors are seeing investment in Britain as very risky. Investors do not like uncertainty, which abounds at the prospect of Brexit – particularly regarding what kind of trade deal the UK will manage to negotiate with the EU, and how long this will take. There is also likely to be a substantial fall in the value of the pound in the event of Brexit, making it less attractive to investors to make any UK investments in the run-up to the vote.

It is a given in finance that high risks demand high returns. Thus, in order to prevent an exodus of capital from the UK, foreign investors would need to be offered increased returns. This translates into higher borrowing costs for the UK government, and higher costs of capital for UK businesses. And if UK firms have to provide higher returns to banks and shareholders, that means investments in business assets look less appealing. The result would be less growth and fewer jobs being created.

The other immediate problem the UK would face in the event of a Brexit vote stems directly from the projected fall in the value of sterling. The consensus forecasts are that the exchange rate would fall from its current value of around £1 for €1.27 to something more like parity with the euro. The latest forecast from the National Institute of Economic and Social Research think tank is of a 20 per cent fall in the value of sterling. Prior to opinion polls suggesting that exit from the EU was a distinct possibility, the level of the pound was around £1 to €1.40. All this adds up to a sharp increase in the cost of imported goods, including oil, industrial raw materials, clothing and food.
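
As a quick back-of-envelope check of those exchange-rate figures (my own arithmetic sketch in Python, not part of the original forecasts):

    # Back-of-envelope check of the sterling figures quoted above.
    pre_poll_rate = 1.40    # euros per pound before Brexit looked a real possibility
    current_rate = 1.27     # euros per pound at the time of writing
    parity = 1.00           # the consensus post-Brexit forecast

    already_fallen = (pre_poll_rate - current_rate) / pre_poll_rate
    fall_to_parity = (current_rate - parity) / current_rate

    print(f"Fall already seen:      {already_fallen:.0%}")   # ~9%
    print(f"Further fall to parity: {fall_to_parity:.0%}")   # ~21%

A further fall from €1.27 to parity is roughly 21 per cent – close to the 20 per cent drop forecast by the National Institute of Economic and Social Research.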

Governor of the Bank of England, Mark Carney. Bank of England, CC BY-NC-ND

Bringing all this together, Britain will face a substantial short-term economic shock if it votes to leave the EU. There is simply no credible argument that says otherwise, though there are arguments about the scale of the effect and the long-term consequences for the economy.

To mitigate this shock, the governor of the Bank of England would naturally want to cut interest rates in an attempt to stimulate the economy, although with rates at rock-bottom there is little room for manoeuvre. The Bank will also be concerned about the potential inflationary impact of the fall in the pound. It must balance economic growth (which would normally suggest lower interest rates were needed) with inflation risk (which would normally trigger an increase in rates). Then there is the concern that foreign investors will demand higher returns on UK government debt.

Meanwhile, lower short term interest rates will hurt bank profits. The only way banks can recover these profits is by lending at higher rates, while offering lower rates to savers. The sad irony is therefore that neither savers nor borrowers would gain from Brexit.

A world of higher short term borrowing costs, higher long term borrowing costs and lower savings returns looks an all too plausible outcome. At the same time, investment in jobs is choked off, economic growth declines, and inflation starts to raise its head again. All combined with a worsening balance of payments position and a sterling crisis.

We have, of course, been here before – in the bad old days of the 1970s – so we know that this really can happen in the UK. The ultimate irony is that, if polling data is to be believed, it seems to be the older voters, having lived through all that, who are keenest to go back to it.

The Conversation

Alan Gregory, Professor of Corporate Finance, University of Exeter

This article was originally published on The Conversation. Read the original article.

Why give to charity? A Romantic view of helping the needy

Dr Andrew Rudd is a Lecturer in the College of Humanities’ English department; his research explores the moral imagination’s role in shaping literary cultures and communities.

This post first appeared in The Conversation.


Andrew Rudd, University of Exeter

“Is it possible I could have steeled my purse against him?” the Romantic essayist Charles Lamb asked in 1822, writing about a man who sat each day by the road begging alms. “Give, and ask no questions.” Today, charities must answer plenty of questions before they can persuade an often wary public to untie their purse strings.

The charity sector as a whole is facing a wave of scrutiny. A glance at some recent scandals suggests that the root of this discontent lies in a perception that the direct connection between the individual giver and the recipient has broken down; that the charity is not acting as we would if we were delivering the aid ourselves. On an almost daily basis, we read complaints that charities are too large, or spend too much on back-office costs, or use aggressive fundraising techniques, or have become distracted by political campaigning.

‘Give, and ask no questions.’ enki22/flikr, CC BY-ND

The government’s commitment to spend 0.7 per cent of GDP on international aid rankles with many because taxpayers have no direct control over how the money is spent, or whether it should be spent at all. And the collapse of Kids Company in 2015 sparked further questions and concerns about how charities operate.

And yet the idea that charitable giving is something we weigh up in our own minds is a relatively recent invention. Traditionally, the church taught that it was good to give to charity for the benefit of one’s soul, no questions asked. It was only after the Enlightenment and the French Revolution, when traditional sources of authority began to fall away, that individuals had to make up their own minds about when to give to charity and why. The Romantic movement, which reflected a new focus on emotion and individualism, has a lot to teach us about the questions we tend to ask today when giving to charity and the reasons why we give to charity at all.

Seeing and giving

William Wordsworth, contemplating the ruins of Tintern Abbey (once a centre of monastic almsgiving) wrote that the “little, nameless, unremembered acts of kindness and of love” that make up the “best portion of a good man’s life” could be found in the natural world, now that religion could no longer provide all the answers. For him, nature could inspire moral goodness just as Tintern Abbey’s monks drew inspiration from daily prayer.

In another poem, The Old Cumberland Beggar, Wordsworth wrote that seeing the objects of charity kindles benevolence in us and throughout the whole community. The visible presence of poverty reminds us of the good we have done and what we have yet to do.

But what if our minds are in no fit state to reshape society in our own image, asked John Polidori in his lurid tale The Vampyre? His bloodsucking villain Lord Ruthven (modelled on Byron) lavishes “rich charity” on the “profligate” and the “vicious” man in order “to sink him still deeper in his iniquity”, while the virtuous man who has suffered innocently is turned away “with hardly suppressed sneers”. Polidori’s nightmare philanthropist spends money on the worst possible causes, reminding us how individual caprices can skew charitable priorities.

Charles Lamb. Wikimedia Commons

Lamb’s essay, A Complaint of the Decay of Beggars in the Metropolis, tried to banish such egotism. He argued that begging was “the oldest and honourablest form of pauperism” and taught us not to value our own dignity too highly. The “all-sweeping besom [broom] of societarian reformation” is what happens when we think we know best, tidying away the emblems of poverty that act as “the standing morals, emblems, dial-mottos, the spital sermons, the books for children, the salutary checks and pauses to the high and rushing tide of greasy citizenry”.

For Lamb, the beggar was a defiant figure – “the only free man in the universe” – and it is better to be deceived by fraudsters than not to give to charity at all.

Romantic literature teaches us that many concerns about charities today, such as how effectively money is spent, are perpetual ones which, extreme cases aside, we should learn to accept. It reveals to us how important our feelings have become when we decide how to give to charity. But as Lamb wrote, we are not always in the best position to judge what needs to be done. If we had time to do everything ourselves there would be no need for charities at all. Sometimes it is better to step back, accept that running a charity isn’t easy and let good charities get on with the work on our behalf.

It also reminds us that charitable organisations are filling in for individual acts of charity that we cannot perform ourselves. By pointing out the power and pitfalls of imagination, the Romantics help us to navigate the complexities of the charitable encounter and to know when to step back and let a responsive and realistic charity sector carry out its work.

The Conversation

Andrew Rudd, Lecturer in English, University of Exeter

This article was originally published on The Conversation. Read the original article.

Shakespeare, skulls and tombstone curses – thoughts on the Bard’s deathday

Professor Philip Schwyzer, a specialist in early modern English literature in the College of Humanities, shares his thoughts on the fate of Shakespeare’s remains.

This blog originally appeared in The Conversation.

Philip Schwyzer, University of Exeter

The image of a man holding a skull while ruminating upon mortality will always call Hamlet, and Shakespeare, to mind. How appropriate then, that four centuries after it was first laid beneath the earth, Shakespeare’s skull may be missing from his tomb. Then again, it may not. A radar survey of the poet’s Stratford grave in March has only deepened the mystery over what lies beneath his slab. As the world prepares to celebrate the sombre yet irresistible anniversary of Shakespeare’s death on April 23, how much do we know about his own wishes for the fate of his remains?

The resuscitation of an improbable Victorian anecdote about 18th-century grave robbers grabbed the headlines, but all we know for certain is that the upper part of the burial has been disturbed in some way. Confirming the presence or absence of Shakespeare’s head would require the physical opening of his grave. This is unlikely to take place anytime soon, as the vicar of Holy Trinity – the church within which he is buried – has confirmed, thanks to the four lines of forbidding verse inscribed upon the ledger stone:

Good frend for Iesus sake forbeare,

To digg the dvst encloased heare,

Bleste be ye man yt spares thes stones,

And curst be he yt moves my bones.

Shakespeare’s epitaph.
Tom Reedy/Wikimedia Commons, CC BY-SA

In one of his sonnets, Shakespeare boasts that his “powerful rhyme” will outlive marble monuments and ostentatious tombs. At Holy Trinity his verse has had the power to ensure the survival of his own simple monument. Since the 19th century, numerous campaigns to open the grave have run aground on this uncompromising quatrain. Indeed, it would be difficult to find four other lines by Shakespeare that have had such power to influence events centuries after his death. Poetry, WH Auden said, makes nothing happen. In this case, poetry is ensuring that literally nothing happens to Shakespeare’s grave.

Many must have wished that Shakespeare’s final words to posterity consisted of something more than a curt demand to be left alone. Perhaps for this reason, most scholars and editors have been highly reluctant to ascribe these verses to Shakespeare himself. The magisterial biographer, Samuel Schoenbaum, dismissed the quatrain as “a conventional sentiment in commonplace phrases” – and deemed it the work of a local hack, perhaps specialising in funerary inscriptions. Following this lead, Holy Trinity reassures visitors that curses on gravestones were “not at all uncommon at the time”.

But in fact, though the verse may be halting, it is anything but conventional. No researcher has been able to produce a comparable inscription from a 16th or 17th-century English grave. A few early modern epitaphs (such as John Skelton’s memorial to the Countess of Richmond in Westminster Abbey) threaten divine punishment against irreligious vandals who might deface the monument or the inscription itself. Shakespeare’s epitaph, by contrast, threatens to curse a church official, the sexton, who in the ordinary pursuit of his duties would periodically open graves to make room for further burials. In this respect, it is both highly audacious and very probably unique.

The anxiety about exhumation expressed on Shakespeare’s slab may be unusual in the context of an epitaph, but it resonates powerfully with the content of his plays. A surprising number of the tragedies and histories feature disturbing scenes involving exhumation or interference with the bodies of the dead. In Richard III, the long-dead body of Henry VI bleeds afresh when brought face-to-face with its murderer. The heroine of Romeo and Juliet, buried alive in her ancestral crypt, fantasises that she may “madly play with my forefathers’ joints”, and dash out her brains “with some great kinsman’s bone”.

Confronted with the ghost of Banquo, Macbeth assumes he is looking at an exhumed corpse:

If charnel-houses and our graves must send

Those that we bury back, our monuments

Shall be the maws of kites.

Sarah Bernhardt as Hamlet. 

Most memorably, Hamlet watches a gravedigger wrench dry bones from a grave to make room for the body of Ophelia: “That skull had a tongue in it and could sing once. How the knave jowls it to the ground!” The indignities meted out to the bodies of the dead seem to unsettle the Prince of Denmark more than the fact of death itself.

With the possible exception of John Donne, no other writer of Shakespeare’s age fantasised so persistently or so morbidly about the prospect of exhumation. It is no accident that his work is emblematised today by the image of a man gazing into the empty sockets of a disinterred skull. The suggestion that his own skull may have suffered Yorick’s fate in the hands of some 18th-century grave robber is almost too ironically appropriate.

Given the uniqueness of Shakespeare’s grave inscription, and the anxiety about the fate of the corpse evident in his plays, there is strong reason to believe that the verses on his slab are his own work, that they spell out his final wish. Whether we are obligated to respect his wishes is a separate question. But one thing is certain: when we find ourselves marking the 400th anniversary of his death by wondering over the whereabouts of his skull, we are dreaming Shakespeare’s nightmares for him.

The Conversation

Philip Schwyzer, Professor in English, University of Exeter

This article was originally published on The Conversation. Read the original article.

Why life is tougher for short men and overweight women

Timothy Frayling, University of Exeter and Jessica Tyrrell, University of Exeter

Are you a short man or an overweight woman? If so, you may have a slight disadvantage in life compared with taller men and thinner women.

Our latest study has found evidence that men who are shorter due to their genes have lower incomes, lower levels of education, and lowlier occupations than their taller counterparts. The effect of height on socioeconomic status was much weaker in women. In contrast, women who have a higher body mass index (BMI) due to their genes have lower standards of living and household incomes. Having a higher BMI didn’t seem to have the same negative effect on men.

Don’t we know this stuff already?

Why did we want to do this study? After all, didn’t we know that height and BMI are associated with socioeconomic status? And you can’t change your genes, so why is this study interesting?

It’s true that we have known for a long time that being short is associated with poverty, almost certainly because poor nutrition in childhood stunts growth. But the relationship between fatness and poverty is more nuanced.

In the not too distant past being thin was associated with poverty, and being overweight with wealth because people with more money were able to eat more. However, in the past few generations, in developed countries, that association has reversed. As we have moved to a world where calorie-dense food is readily and cheaply available, and life has become more sedentary, lower standards of living are associated with higher BMIs. But in this study we wanted to answer questions about causality rather than associations, which is why we turned to genetics.

You can’t change your genes

Associations between genes and human traits are likely to be cause not consequence. We can make this statement because your genes don’t change.

A disease can’t change your DNA sequence, but your DNA sequence can influence your chances of developing a disease, growing more, or your vulnerability to obesity. Once your father’s sperm has fertilised your mother’s egg, you are stuck with those two copies of the human genome and with some exceptions, such as in cancer cells, those two DNA sequences change very little during our lifetimes.

The different environments we encounter, the lifestyle choices we make, and the diseases we develop do not change the DNA sequences we inherit from our parents – to be clear, we are not discussing epigenetics here, where the environment can change how genes activate and deactivate.

Shorter men and heavier women are poorer

We used demographic and genetic data from 120,000 people (aged between 40 and 70) in the UK Biobank. The study used 400 genetic variants that are associated with height, and 70 associated with BMI, together with actual height and weight, to ask whether or not shorter stature or higher BMI could lead to lower chances in life – as measured by information the participants provided about their lives.
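
The logic of using genetic variants this way – often called Mendelian randomisation – can be sketched in a few lines of Python. This is a toy simulation with invented numbers (the £200-per-centimetre effect is chosen to mirror the study’s 7.5cm ≈ £1,500 estimate reported below), not the study’s actual data or pipeline:

    import numpy as np

    rng = np.random.default_rng(0)
    n, n_snps = 10_000, 400            # toy sample; the study used ~120,000 people

    # Simulate genotypes (0/1/2 copies of each height-raising allele) and
    # assumed per-allele effects on height, in centimetres.
    genotypes = rng.binomial(2, 0.3, size=(n, n_snps))
    weights = rng.normal(0.1, 0.02, n_snps)

    # A weighted allele score predicts height. Genes are fixed at conception,
    # so the score cannot itself be caused by income or lifestyle.
    score = genotypes @ weights
    height = 175 + (score - score.mean()) + rng.normal(0, 6, n)

    # Simulate an income that depends weakly on height (about 200 GBP per cm).
    income = 25_000 + 200 * (height - height.mean()) + rng.normal(0, 10_000, n)

    # Regressing income on the genetic score: any slope suggests causation
    # rather than reverse causation, because the score predates the outcome.
    slope = np.polyfit(score, income, 1)[0]
    print(f"Income change per genetic-score unit (~1cm): GBP {slope:.0f}")

In the real analysis, an effect estimated through the genetic score is what licenses the causal language (“shorter due to their genes”), because income and lifestyle cannot alter inherited DNA.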

Having analysed the data, we found that men who were 7.5cm shorter, for no other reason than their genes, on average earned £1,500 a year less than their taller counterparts. Meanwhile, women who were 6.3kg heavier, for no other reason than their genes, on average earned £1,500 a year less than the lighter women of the same height.

It’s important to note that these are estimates and averages – short men and heavier women can, and do, succeed in life. Instead, it shows that across the population overweight women and shorter men are, on average, slightly worse off.

Short men can do well in life.
Featureflash / Shutterstock.com

What are the implications?

We now need to understand the factors that lead people who are overweight or short to lower standards of living. Is the link down to low self-esteem or depression, for example? Or is it more to do with discrimination?

In a world where we are obsessed with body image, are employers biased? And do we need to pay more attention to potential unconscious biases in order not to unfairly discriminate against people who are shorter (especially men) or overweight (especially women)?

More studies are needed using data from other birth cohorts – the UK Biobank is biased towards thinner and wealthier people, because participants had to actively take part in a study about health, and this bias may have affected the results (that is, made the associations slightly weaker).

The study was also limited to people born between 1935 and 1971 and so the effects may no longer exist in younger adults today. It will be interesting to study the effects in young adults – it may be that the higher levels of obesity would exacerbate the problem, or it may be that society is far more accepting of fatter people and that factors such as discrimination and social esteem, if they were key to this data, are less important in younger generations.

The study provides a much needed advance in understanding a classic chicken or egg problem. But something about having a higher BMI as a woman, and shorter height as a man, does lead to being worse off in life.

The Conversation

Timothy Frayling, Professor of Human Genetics, University of Exeter and Jessica Tyrrell, Research Fellow, University of Exeter

This article was originally published on The Conversation. Read the original article.

Leonardo DiCaprio Oscar’s speech gets good marks for climate science

Leonardo DiCaprio’s Oscar acceptance speech calling for action on climate change has received a lot of attention. Celebrities who are passionate about stopping climate change often quote science, but this can be risky for a non-expert when it’s such a complex topic, and some carry it off better than others.

So, how did Leo DiCaprio do? asks Professor Richard Betts, Chair in Climate Impacts at the University of Exeter.

Leonardo DiCaprio – image courtesy of Shutterstock

I’ve taken a close look at the part of his speech that focussed on climate change, and I think he did rather well. Not counting the phrases which are his personal opinion on how to respond, he makes ten statements that relate, at least to some extent, to science. Since I’m used to marking students’ work, I’ve taken the liberty of awarding him points for each statement. Let’s see how he did…

“Making The Revenant was about man’s relationship to the natural world. A world that we collectively felt in 2015 as the hottest year in recorded history.”

A good start. 2015 was very clearly the hottest year in all the datasets of global average surface temperature, by a long way. It was not the hottest year in the whole 4.5 billion year history of the Earth, but Leo was careful to say “recorded history”. It’s the hottest since we started actively measuring temperatures.

[One mark]

“Our production needed to move to the southern tip of this planet just to be able to find snow.”

Hmmm. Personally I’d have stayed well away from this example. It’s perfectly true that northern hemisphere snow cover has been in decline for some decades, but linking specific weather events or even individual seasons to long-term climate change is quite involved.

There are an increasing number of studies that look at the changing probabilities of particular weather events or extreme seasons, so in some cases we are able to make links with climate change, but I’m not aware of a specific study for the winter of 2014/15 in Western Canada yet.

It may be possible to argue the toss over this, but I’m going to be strict and withhold a mark here – sorry Leo! But don’t worry, you still have a chance to redeem yourself…

[No mark]

“Climate change is real.”

Yes, absolutely. I assume he means human-caused climate change. The greenhouse effect is without doubt a real thing; increasing carbon dioxide and other greenhouse gases will therefore cause warming, and the increase in carbon dioxide is definitely caused by humans.

There is no serious disputing of these facts, even from climate sceptics (at least, not the ones who have looked into it properly). The latest assessment by the Intergovernmental Panel on Climate Change (IPCC) was that it is: “Extremely likely that human influence has been the dominant cause of the observed warming since the mid-20th century.”

[One mark]

“It is happening right now.”

Yep. Temperatures have been rising around the world, and other signs of warming are also apparent. Many glaciers are shrinking, sea levels are rising (due both to melting land ice and to the expansion of water as it warms), and Arctic sea ice is in decline. Although Antarctic sea ice has increased, this gain is still smaller than the loss in the Arctic.

Signs of the onset of spring, such as flowers blooming, trees coming into leaf, birds migrating and eggs hatching, are on the whole occurring earlier in the year than several decades ago.

Some (but not all) types of extreme weather event are becoming more frequent or severe, and this may well extend to other extremes in coming decades.

The average rate of warming at the surface did slow temporarily for a few years, but while there is an interesting academic debate over this ‘hiatus’ or ‘slowdown’, this does not affect the big picture because the long-term heating up of the climate system is still ongoing.

So yes, there is overwhelming evidence that climate change is happening.

[One mark] 

“It is the most urgent threat facing our entire species.”

There’s a lot to unpick here, and it depends to some extent on what he means by ‘threat’ – does he mean a threat of impacts that are unpleasant but not actually life-threatening, or is he going further than that and implying a threat to the actual survival of our species?

If the latter then I would not agree – I don’t see convincing evidence that the entire human race is going to be wiped out by climate change any time soon. Having said that, inexorable warming could ultimately take temperatures past tolerable limits in some areas. One climate modelling study suggested that the ‘wet bulb temperature’ (which factors in humidity) could eventually exceed a proposed human body tolerance limit of 35°C across wide regions of the world – if the planet warmed by 12°C. This is not at all likely this century, and is at the upper end of what might be reached in a couple of centuries, if fossil fuels are burned at high rates and the climate responds as fast as is thought plausibly possible. So, it’s not the most likely outcome, and won’t happen soon, but it can’t be ruled out for the longer term.
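
For readers wondering what a “wet bulb temperature” actually is: it is the temperature of a thermometer cooled by evaporation, and it rises with both heat and humidity. One widely used empirical approximation (Stull, 2011), valid for everyday surface conditions at sea-level pressure, lets you compute it directly from air temperature and relative humidity. A minimal sketch in Python (the function name is my own):

    import math

    def wet_bulb_stull(temp_c: float, rh_pct: float) -> float:
        """Stull (2011) wet-bulb approximation from temperature (C) and RH (%).

        Valid roughly for -20C to 50C and RH between ~5% and 99% at
        sea-level pressure.
        """
        t, rh = temp_c, rh_pct
        return (t * math.atan(0.151977 * math.sqrt(rh + 8.313659))
                + math.atan(t + rh)
                - math.atan(rh - 1.676331)
                + 0.00391838 * rh ** 1.5 * math.atan(0.023101 * rh)
                - 4.686035)

    # A hot, humid day: 40C at 70% relative humidity gives a wet bulb of
    # about 34.9C - already brushing the proposed 35C tolerance limit.
    print(f"{wet_bulb_stull(40, 70):.1f}C wet bulb")

Sustained wet bulb temperatures above 35°C matter because at that point sweating can no longer shed the body’s heat, however much water and shade are available.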

However, even if the ‘urgent threat’ is not actually to everyone’s life, there is no doubt that everyone on the planet is increasingly at risk of being affected in some way, either directly or indirectly. Direct effects could include risks to local food or water security, or loss of homes due to coastal flooding from sea level rise. Those who do not experience these could still be affected indirectly, through shocks to the economy or pressures of migration.

So yes, I do agree that every member of our species may see some impact of climate change. Since it’s not really clear what DiCaprio means, I’m going to be strict again and award half a mark instead of a whole one.

[Half a mark] 

“And we need to work collectively together.”

[Not a science point]

“And stop procrastinating.”

Fair point. Scientific research supports the view that the longer the delay in reducing global emissions, the harder it will be to avoid the risk of severe impacts.

[One mark]

“We need to support leaders around the world who do not speak for the big polluters, but who speak for all of humanity.”

[Not a science point]

“For the indigenous people of the world.”

Indigenous ways of life are often dependent on particular aspects of the local environment, and in many cases these are threatened by a changing climate, especially in cold regions.

[One mark]

“For the billions and billions of underprivileged people out there who would be most affected by this.”

Poorer people, communities and nations do often tend to be more vulnerable to environmental changes, having less capacity to adapt (eg, less able to afford expensive sea defences). Similarly they are often more vulnerable to extreme weather and its consequences, not having such resilient infrastructure and buildings for shelter, or well-established early warning systems. Moreover, crop yields in tropical regions are expected to be hardest hit, and this is where many developing countries are.

I’m not so sure about the ‘billions and billions’ – with a world population of seven billion, rising to nine or ten billion by mid-century, that doesn’t leave much room for ‘billions and billions’ who are ‘most affected’ while others are less affected – but I’m possibly verging on the pedantic here, so won’t labour this point!

[One mark]

“For our children’s children.”

This is a good way of communicating the timescale of the most severe impacts. While climate change is already happening, the highest impacts are still some way off (although we may become irreversibly committed to them soon).

In particular, sea level rise takes a long while to happen to the full, as it takes time for huge bodies of ice to melt and for heat to penetrate to the ocean depths to warm and expand the deep water. Those of us alive today may well not see the worst effects, but the potentially huge impacts in the early part of the next century would be within the life expectancy of our grandchildren.

[One mark]

“And for those people out there whose voices have been drowned out by the politics of greed.”

[Not a science point]

“I thank you all for this amazing award tonight. Let us not take this planet for granted. I do not take tonight for granted.”

Sandwiched between two bits of awards ceremony-speak, ‘not taking this planet for granted’ is a very good point. Earth is the only planet in our solar system suitable and comfortable for human life, and even then it has gone through some very large changes in climate in its history. Although such changes have happened before, humans were not around then, and there is no doubt that if major changes were to happen again, they could cause major upheavals to our civilisation.

By tinkering with a system that we don’t yet fully understand, it could be us that makes something major happen if we’re not careful. While we should not panic, we should not be complacent either. We should not take it for granted that the climate will remain within the bounds that we are used to, or that it will change gradually enough for us to keep up.

[One mark]
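
Totting up the marks – one each for eight statements, half for the ‘urgent threat’ point, nothing for the snow example – a quick tally confirms the arithmetic:

    # Tally of the marks awarded above, one entry per science statement.
    marks = [1, 0, 1, 1, 0.5, 1, 1, 1, 1, 1]
    print(f"{sum(marks)} out of {len(marks)} statements")   # 8.5 out of 10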

So for an overall mark, I give Leonardo DiCaprio 8.5 out of 10.

Still a bit of room for improvement, but overall that’s pretty good, I think – and I suspect much more than Leo would give me for my acting skills!

Professor Richard Betts was a lead author on the IPCC’s Fourth and Fifth Assessment Reports, and leads a major EU-funded international climate research project called High-End cLimate Impacts and eXtremes (HELIX).  

This post is associated with Professor Betts’s presentation at the Weather, Art and Music festival hosted by the University of Exeter on 5th March 2016

Twitter:
@richardabetts
@HELIXclimate
@WAMfestival

When should doctors gain full registration with the General Medical Council?

Karen Mattick is a Professor of Medical Education, Co-Lead for the Centre for Research in Professional Learning at the University of Exeter, and Director of the Postgraduate Certificate in Academic Practice. She has over twelve years’ experience as a medical education researcher and educator. Here she asks when junior doctors should gain full registration with the GMC.

Having been heavily involved in establishing and developing successful UK Medical Schools, we believe more can still be done to prepare and support medical students in the transition to becoming junior doctors and encourage them to stay working as doctors in the UK.

Currently only 70 per cent of junior doctors feel they were well prepared for their first junior doctor role. And, in the UK, we are fast losing doctors in some areas of medicine through emigration, career breaks and early retirement, sometimes through mental ill-health.  Change is needed – but it must be the right kind of change, undertaken for the right reasons.

So what would the right kind of change look like? For us, this is about providing the best possible education and support to medical students and junior doctors, in order to achieve the best possible patient care.

With this in mind, as a team of academics from Cardiff, Exeter, Dundee and Belfast researching the preparedness of graduates for medical practice, we explored the implications of a recommendation, made by an independent review of medical education, to award full registration to graduates to practise medicine as soon as they leave medical school. This is a year earlier than the current point of registration, which follows a further ‘hands-on’ placement year. Our research team wanted to provide evidence to probe this recommendation and the ramifications it could have for education and patient care in the UK.

We conducted interviews with 185 doctors, health professionals and patients, and heard that the implications were far-reaching. Even if the registration-at-graduation recommendation is not adopted, it is clear that we can do more to adapt to a changing healthcare environment and respond to some of the challenges the sector faces, simply as best practice to support trainee doctors in a particularly demanding period in their careers.

At the moment, medical schools remain responsible for aspects of doctors’ training in the first year of practice. In this period new graduates receive provisional GMC registration, meaning they can only practise under close supervision and with some restrictions. Around 40 per cent of these junior doctors are employed by NHS Trusts that are remote from their medical school, potentially hundreds of miles away, leaving them both physically and psychologically far removed from their medical school. This can lead to fragmentation of support during a critical phase of training.

In the current system, doctors apply for full GMC registration only after completing this first year of practice. It takes at least four further years (sometimes much longer depending on specialty) for doctors to complete training and become independent practitioners.

Until they are fully registered, junior doctors cannot usually work abroad or take up temporary locum work.  But, with increasing numbers of UK graduates and applications from eligible European and International medical graduates, the Foundation Programme has been oversubscribed since 2011.

Suitable UK medical school graduates now regularly fail to secure Foundation Programme Year 1 (F1) jobs on the first pass and are put on a reserve list to await a place. Graduates without an F1 job have limited opportunities to progress their medical career in the UK, risking graduates leaving medicine or moving abroad to train. Heart-breaking and highly political headlines of talented students who have worked hard for years, in a degree heavily subsidised by the taxpayer, being forced into alternative careers, are now a very real possibility.

Changing the point of registration would help to address concerns about fragmentation of support and could potentially help with oversubscription but it would introduce other concerns. Provisional registration provides a ‘safety net’ year, in which new doctors can find their feet under close supervision and senior doctors can identify struggling trainees. Although the daily practice of newly qualified doctors might not change, full registration would imply higher expectations from the outset, making the transition even more daunting.

Our interviewees also raised concerns about medical schools making recommendations about full registration. Medical students are generally not embedded within the multidisciplinary healthcare teams their later employment will demand. They can undertake limited activities, which makes it difficult to assess their capabilities in clinical practice and professionalism in the workplace with confidence. Some felt that medical schools were reluctant to fail underperforming students and that universities were more focussed on producing graduates than on patient safety.

One surprise in our data was that participants did not raise the implications for four-year graduate entry medicine programmes, currently run by some medical schools alongside the standard five years for non-graduates. The implications for these programmes are profound, however, since European legislation requires a minimum duration of basic medical training of five years and 5,500 hours. Graduate entry programmes currently use the F1 year towards this count so, unless aspects of a first degree could be counted, these programmes might become untenable.

By thinking through the implications of a future change to the timing of registration, we have highlighted improvements that can be made to medical education. At a time when the training of doctors is under intense scrutiny, we hope this evidence will help shape the future by providing earlier practical experience to students, which would be beneficial right now for medical students, trainees and patients.