Tuesday, 10 March 2026

The short answer: the UK taxes the very wealthy at roughly average levels compared with other rich countries; it is not among the highest. Some of its taxes are high, while others are relatively moderate.

Below is a clearer comparison.


1. Top income tax rates (headline comparison)

For very high earners, the UK’s top marginal income tax rate is 45% (for income above £125,140).

How that compares internationally:

Country        Top income tax rate
Denmark        ~56%
France         ~45–55% (depending on surcharges)
Germany        ~45%
UK             45%
US             ~37% federal (higher with state taxes)
Sweden         ~52%
Netherlands    ~49.5%

Example: Denmark’s top rate is about 56%, higher than the UK’s 45%.

Conclusion:

  • UK ≈ middle of the pack among developed countries

  • Lower than Scandinavia and some Western Europe

  • Higher than places like the US (federal level)


2. Overall tax burden compared internationally

A common measure is tax revenue as a share of GDP.

  • UK: 35.3% of GDP

  • OECD average: 33.9%

  • France: ~45%

  • Denmark: ~47%

Conclusion:

  • The UK overall is slightly above the OECD average

  • But still far below the highest-tax European states


3. How much the richest people actually pay

In the UK, tax is quite concentrated among high earners.

  • Top 1% of taxpayers pay about 33% of income and capital gains tax.

That is similar to other Anglo-American systems where a small group contributes a large share of revenue.

However, studies suggest that across G20 countries (including the UK), the effective tax rate of the super-rich can end up lower than that of average workers once all taxes are counted.

This happens because wealth can be structured through:

  • capital gains

  • dividends

  • offshore structures

  • tax planning


4. Where the UK is relatively high

The UK stands out in a few specific areas:

  • Property taxes are among the highest in the OECD.

  • Dividend taxes can reach 39.35%, higher than the OECD average.

  • Certain income ranges face very high effective marginal rates (up to ~60%) due to loss of allowances.
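That ~60% figure comes from the withdrawal of the personal allowance: between £100,000 and £125,140, every £2 of extra income removes £1 of tax-free allowance on top of the 40% rate. A minimal sketch using the 2023/24 English bands (figures are illustrative and ignore National Insurance and other reliefs):

```python
def uk_income_tax(income: float) -> float:
    """Approximate UK income tax on salary (England, 2023/24 bands).

    The personal allowance (£12,570) is withdrawn at £1 per £2 of
    income above £100,000, which is what creates the ~60% effective
    marginal rate between £100,000 and £125,140.
    """
    allowance = 12_570.0
    if income > 100_000:
        allowance = max(0.0, allowance - (income - 100_000) / 2)
    taxable = max(0.0, income - allowance)

    basic_band = 37_700.0    # taxed at 20%
    higher_top = 125_140.0   # additional rate (45%) starts here
    tax = min(taxable, basic_band) * 0.20
    tax += max(0.0, min(taxable, higher_top) - basic_band) * 0.40
    tax += max(0.0, taxable - higher_top) * 0.45
    return tax

# Effective marginal rate on an extra £2 earned at £110,000:
marginal = (uk_income_tax(110_002) - uk_income_tax(110_000)) / 2
print(f"{marginal:.0%}")  # ~60%
```

Each extra £2 in the taper zone is taxed at 40% directly, and the £1 of allowance it removes exposes another £1 to 40% tax, adding 20 points to the headline rate.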


5. Big picture

Economists usually describe the UK system like this:

Compared with other rich countries:

  • Taxes on the rich → about average

  • Taxes on middle incomes → lower than many European welfare states

  • Overall tax levels → moderately high but not extreme

Countries with much heavier taxation of high earners tend to be:

  • Denmark

  • Sweden

  • Belgium

  • France

Countries with lower taxation of the wealthy include:

  • United States

  • Switzerland

  • Singapore


Summary:

  • The UK does tax wealthy people significantly (45% top rate).

  • But compared with Europe’s highest-tax countries, it’s roughly middle-range rather than unusually high.


If you want, I can also show how the UK’s rich are taxed compared with the US, France, and Scandinavia once you include capital gains, wealth taxes, and property taxes, which changes the picture quite a lot.

ok

When you include capital gains, wealth taxes, and property taxes, the picture of how the UK taxes the rich becomes more nuanced. Some countries tax income heavily, while others target wealth or assets much more.

Below is a simplified comparison.


1. Income tax on high earners

Top marginal rates on very high salaries:

Country    Top rate
UK         45%
US         37% federal (often 45–50% with state taxes)
France     ~45%
Sweden     ~52%
Denmark    ~56%

Takeaway:
The UK is mid-range. Scandinavian countries tax high incomes much more heavily.


2. Capital gains tax (important for the wealthy)

Very wealthy people often earn more from investments than salaries.

Country    Top capital gains tax
UK         20% (28% for property)
US         ~20% federal (+ state taxes)
France     ~30% flat tax
Sweden     30%
Denmark    up to ~42%

Takeaway:
The UK is relatively low to moderate here compared with many European countries.


3. Wealth taxes

A key difference internationally.

Countries that tax net wealth annually:

Country        Wealth tax
France         yes (mainly property wealth)
Spain          yes
Norway         yes
Switzerland    yes
UK             no wealth tax
US             no federal wealth tax

Takeaway:
The UK is lighter on taxing accumulated wealth than several European countries.


4. Property and inheritance taxes

Where the UK is relatively high.

Property taxes

The UK collects a significant amount through:

  • council tax

  • stamp duty on expensive homes

Stamp duty on expensive properties can reach 12–15% at purchase.

Inheritance tax

The UK rate is 40% above £325,000 (with exemptions).
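Only the slice of the estate above the threshold is taxed at 40%. A minimal sketch of the headline calculation (the `uk_iht` helper is illustrative and ignores reliefs such as spouse exemptions and the residence nil-rate band):

```python
def uk_iht(estate: float, nil_rate_band: float = 325_000, rate: float = 0.40) -> float:
    """Headline UK inheritance tax: 40% of the estate value above the
    nil-rate band; everything below the band is untaxed."""
    return max(0.0, estate - nil_rate_band) * rate

print(f"{uk_iht(1_000_000):,.0f}")  # 270,000 (40% of the £675,000 above the band)
```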

Comparable rates:

Country    Top inheritance tax
UK         40%
US         40% federal (large exemptions)
France     up to 45–60%
Germany    up to 50%

5. Total tax burden on the rich (overall picture)

Economists usually place the UK in this category:

High taxes on income but relatively modest taxes on wealth.

Compared with other countries:

Higher taxation of the wealthy

  • Denmark

  • Sweden

  • Belgium

  • France

Similar to the UK

  • Germany

  • Netherlands

  • Canada

Lower taxation of the wealthy

  • United States (depending on state)

  • Singapore

  • Hong Kong

  • Switzerland (in some cantons)


Key point

The biggest difference is wealth vs income taxation:

  • The UK mainly taxes income and transactions

  • Some European countries tax wealth itself every year

That means someone extremely rich but living off investments can sometimes pay lower effective tax rates than high-salary professionals.


If you're interested, I can also show why London has attracted so many billionaires historically despite relatively high income taxes, because the explanation involves several unusual UK tax rules.

“Mushroom computers” are partly real science but mostly experimental, not practical computers yet.

Researchers have actually experimented with using fungi (mushrooms and their underground networks) as bio-electrical computing systems.


The real science behind it

The key structure is the mycelium—the underground network of fungal threads.


Mycelium networks can:

  • transmit electrical signals

  • respond to light, chemicals, pressure, and temperature

  • change their signaling patterns depending on stimuli

Because of this, some scientists think they could function as biological information processors.


Experiments with “fungal computers”

One of the main researchers is Andrew Adamatzky at the University of the West of England.

His lab studies “unconventional computing.”


They’ve shown that fungal networks can:

  • produce distinct electrical spike patterns

  • react to environmental inputs

  • perform logic-like responses similar to basic computer gates

This means fungi can theoretically perform very simple computations.


What they can actually do (so far)

Current fungal computing experiments can:

  • detect environmental changes

  • generate electrical signals

  • simulate simple logic gates (basic computer operations)

But they cannot run software, store data like a laptop, or replace silicon computers.
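The "logic-like responses" above can be pictured with a toy threshold model, in which a patch of mycelium "fires" only when enough stimulated inputs arrive at once. This abstraction is mine, purely for illustration; the real electrical recordings are far messier:

```python
def spikes(threshold, stimuli):
    """Toy model: a 'mycelium patch' fires (returns 1) only when the
    number of stimulated inputs reaches its firing threshold."""
    return 1 if sum(stimuli) >= threshold else 0

# The same unit behaves as different logic gates depending on its threshold:
def OR(a, b):
    return spikes(1, [a, b])   # fires on any single stimulus

def AND(a, b):
    return spikes(2, [a, b])   # fires only when both inputs are stimulated

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "OR:", OR(a, b), "AND:", AND(a, b))
```

A network of such threshold units is computationally universal in principle, which is why even crude spike patterns are interesting to unconventional-computing researchers.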


Why scientists are interested

Possible future uses include:

1. Living sensors
Mycelium could be embedded in buildings or soil to detect:

  • toxins

  • structural stress

  • environmental changes

2. Self-healing bioelectronics
Living materials that repair themselves.

3. Hybrid biological computers
Systems combining electronics with biological networks.


Reality check

A mushroom laptop or fungal AI is still science fiction.

The current research is more like:

  • biological signal processing

  • experimental bio-hardware

It’s closer to a living sensor network than a traditional computer.


Fun fact

Fungi aren’t the first organisms tested for biological computing. Scientists have also experimented with:

  • Physarum polycephalum

Slime molds have solved mazes and optimized networks in lab experiments.


If you're curious, there’s an even stranger idea scientists are exploring: “forest internet”—how fungal networks might act like communication systems between plants. I can explain that too, because it’s one of the weirdest real discoveries in biology.

Where could mushroom computers lead in the most 'optimistic' scenario possible.

In the most optimistic scenario, research into fungal computing could lead to a new class of living, adaptive technologies that complement or sometimes replace traditional electronics. The key idea comes from using the signaling behavior of mycelium as a biological information-processing system.

Below are some of the most ambitious possibilities scientists discuss.


1. Living environmental monitoring networks

The most realistic optimistic outcome is planet-scale biological sensor systems.

Because mycelium naturally spreads through soil and organic material, it could become a distributed sensing network that monitors:

  • soil chemistry

  • pollution levels

  • ecosystem health

  • structural stress in buildings

Fungal materials embedded in walls or infrastructure could detect problems like cracks, toxins, or humidity and send signals to electronic systems.

This would create self-maintaining environmental monitoring with almost no energy input.


2. Self-growing computers and electronics

Another optimistic scenario is computing systems that grow rather than being manufactured.

Instead of fabricating silicon chips in factories, future devices might use biohybrid circuits where fungi form the information-processing layer.

Potential benefits:

  • self-repair if damaged

  • extremely low energy use

  • biodegradable hardware

  • growth into complex shapes

This research falls under unconventional computing, a field exploring systems that compute using physical processes rather than traditional electronics.


3. Smart buildings and materials

Mycelium is already being explored as a building material. If combined with fungal computing research, buildings could become “living structures.”

Walls or foundations could:

  • sense structural stress

  • detect mold or toxins

  • adjust ventilation systems automatically

  • warn occupants about environmental hazards

In this scenario, buildings would behave more like biological organisms that monitor themselves.


4. Bio-AI systems that think differently

Some researchers speculate that biological networks like fungi could lead to non-silicon artificial intelligence.

Instead of binary digital logic, fungal networks process information through complex electrical and chemical signaling patterns.

This could create:

  • adaptive pattern-recognition systems

  • highly energy-efficient decision networks

  • new forms of computing inspired by ecosystems

The work of Andrew Adamatzky has shown that fungal electrical activity can mimic simple logic operations, suggesting the possibility of primitive computation.


5. Integration with ecosystems

The most visionary idea is computers integrated directly into natural ecosystems.

Because fungi already connect plants through underground networks, computing systems could be embedded into ecological systems to help manage:

  • forests

  • agriculture

  • soil health

Scientists studying mycorrhizal networks have shown that fungi already transmit nutrients and signals between plants.

In theory, technology could tap into these networks to monitor and stabilize ecosystems.


6. Ultra-low-energy computing

Biological networks operate using tiny electrical signals and chemical gradients, which require far less energy than silicon chips.

If harnessed effectively, fungal systems could enable:

  • extremely energy-efficient processors

  • sustainable computing infrastructure

  • biodegradable electronics

This would be valuable as global computing energy demand continues to rise.


Reality check

Even in optimistic scenarios, fungal computers would likely not replace traditional computers.

Instead, they would probably be used for:

  • sensing

  • adaptive materials

  • ecological monitoring

  • hybrid biological–electronic systems

They would complement silicon technology rather than replace it.


One especially fascinating possibility scientists discuss is that fungal networks may already behave somewhat like primitive information systems in forests. If you want, I can explain the controversial idea that forests might have something resembling a “biological internet.”

what are the similarities and difference between photo voltaics and photo synthesis

Photovoltaics and photosynthesis both convert sunlight into usable energy, but they do it in very different ways—one is a human-made electrical process, and the other is a biological chemical process.


Key similarity

Both rely on the same basic physical principle: light energy from the Sun is absorbed and converted into another form of energy.

In both systems:

  • Light photons hit a material.

  • Electrons become energized.

  • That energy is captured and used.


Photovoltaics (solar panels)


Photovoltaic systems convert sunlight directly into electricity.

How it works:

  1. Sunlight hits a semiconductor material (usually silicon).

  2. Photons knock electrons loose.

  3. An electric field forces those electrons to flow.

  4. The flow of electrons becomes electric current.

Energy form:

  • Light → electricity

Common use:

  • solar panels

  • calculators

  • satellites


Photosynthesis (plants and algae)


Photosynthesis converts sunlight into chemical energy stored in sugar molecules.

How it works:

  1. Plants absorb light using pigments such as chlorophyll.

  2. The energy splits water molecules.

  3. Carbon dioxide from the air is combined with hydrogen.

  4. The plant produces glucose and releases oxygen.

Energy form:

  • Light → chemical energy (food)

Chemical summary reaction:

6CO₂ + 6H₂O + light → C₆H₁₂O₆ + 6O₂
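With coefficients included, the reaction balances as 6CO₂ + 6H₂O + light → C₆H₁₂O₆ + 6O₂. A quick check that every atom is conserved across the arrow:

```python
from collections import Counter

def count_atoms(species):
    """Total atoms over a list of (coefficient, {element: count}) pairs."""
    total = Counter()
    for coeff, formula in species:
        for element, n in formula.items():
            total[element] += coeff * n
    return total

# 6 CO2 + 6 H2O  ->  C6H12O6 + 6 O2
reactants = count_atoms([(6, {"C": 1, "O": 2}), (6, {"H": 2, "O": 1})])
products  = count_atoms([(1, {"C": 6, "H": 12, "O": 6}), (6, {"O": 2})])
print(reactants == products)  # True: C=6, H=12, O=18 on both sides
```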


Major differences

Feature          Photovoltaics               Photosynthesis
System type      Human-made technology       Biological process
Energy output    Electricity                 Chemical energy (sugars)
Materials        Semiconductors (silicon)    Cells and chloroplasts
By-products      None                        Oxygen
Efficiency       ~15–25% typical             ~1–3% in plants
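The efficiency gap is easier to feel with rough numbers. Assuming about 1,700 kWh of sunlight falls on a square metre per year (a typical mid-latitude figure, used here purely for illustration):

```python
annual_insolation_kwh_m2 = 1_700   # assumed yearly solar input per m², mid-latitude

panel_eff = 0.20   # ~20% photovoltaic panel
plant_eff = 0.02   # ~2% photosynthesis (energy stored as sugars)

panel_yield = annual_insolation_kwh_m2 * panel_eff   # kWh of electricity / m² / yr
plant_yield = annual_insolation_kwh_m2 * plant_eff   # kWh of chemical energy / m² / yr

print(round(panel_yield), round(plant_yield))  # panel ≈ 340 kWh, plant ≈ 34 kWh
```

On these assumptions a solar panel captures roughly ten times the usable energy per square metre, though the plant's output is stored fuel rather than instantaneous power.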

Another interesting similarity

Both systems rely on excited electrons moving through a structured system.

  • Solar panels move electrons through electrical circuits.

  • Plants move electrons through biochemical pathways inside chloroplasts.


Big picture difference

A simple way to think about it:

  • Solar panels produce power immediately.

  • Plants store solar energy as fuel.

In other words:

  • photovoltaic systems power devices

  • photosynthesis powers the entire food chain on Earth


If you're interested, there's also a fascinating emerging technology called artificial photosynthesis, where scientists try to combine ideas from both systems to produce fuels directly from sunlight.

explain this

Artificial photosynthesis is a technology that tries to copy the core idea of natural photosynthesis but in a man-made system. The goal is to use sunlight to produce fuel instead of electricity.

It combines ideas from:

  • photosynthesis

  • the photovoltaic effect

Instead of making sugar like plants do, artificial systems try to make usable fuels such as hydrogen or hydrocarbons.


The basic idea

Artificial photosynthesis aims to perform two main tasks using sunlight:

  1. Capture sunlight (like solar panels)

  2. Use that energy to drive chemical reactions that produce fuel (like plants)

So the overall concept is:

sunlight → chemical fuel

This is important because fuel can store energy, unlike electricity, which must be used immediately or stored in batteries.


The two main reactions scientists try to replicate

1. Splitting water

One important reaction is breaking water into hydrogen and oxygen.

2H₂O → 2H₂ + O₂

Hydrogen gas can then be used as a clean fuel.
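Thermodynamics sets a floor on the energy any such system must supply: splitting water takes at least about 237 kJ per mole of hydrogen produced (the standard Gibbs free energy of the reaction). A quick back-of-envelope conversion to a per-kilogram figure:

```python
GIBBS_KJ_PER_MOL = 237.1   # minimum energy to split water, per mol of H2
H2_MOLAR_MASS_G = 2.016    # grams per mol of H2

mols_per_kg = 1_000 / H2_MOLAR_MASS_G
min_energy_mj = mols_per_kg * GIBBS_KJ_PER_MOL / 1_000   # megajoules per kg
min_energy_kwh = min_energy_mj / 3.6                     # kilowatt-hours per kg

print(round(min_energy_kwh, 1))  # ~32.7 kWh per kg of hydrogen, at minimum
```

Real devices lose energy to overpotentials and side reactions, so practical systems need considerably more than this theoretical minimum.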

This process is called water splitting.

Plants perform a similar step in photosynthesis when they split water to release oxygen.


2. Turning carbon dioxide into fuel

Another goal is converting carbon dioxide into useful fuels.

CO₂ + energy → fuels such as:

  • methane

  • methanol

  • synthetic gasoline-like molecules

This process falls under carbon dioxide reduction.

If powered by sunlight, this could potentially create carbon-neutral fuels.


How artificial photosynthesis systems work

A typical experimental system has three parts:

Light absorber

A material captures sunlight.

Examples:

  • semiconductors

  • special dyes

  • metal oxides

This part works similarly to a solar panel.


Catalyst

A catalyst helps drive chemical reactions.

Catalysts can be made from materials such as:

  • nickel

  • cobalt

  • platinum

  • synthetic enzyme-like molecules

They lower the energy required for reactions like water splitting.


Reaction chamber

The chemical reactions happen in a liquid or gas environment where:

  • water is split

  • CO₂ is converted

  • fuels are collected


Why scientists care about this technology

1. Storing solar energy as fuel

Solar power is intermittent (night, clouds).

Fuel can be stored and transported easily, making it much more flexible than electricity.


2. Carbon-neutral fuel production

Artificial photosynthesis could recycle atmospheric CO₂ into fuel instead of extracting fossil fuels.

That could potentially reduce net emissions.


3. Replacing fossil fuels in hard sectors

Some industries are difficult to electrify, such as:

  • aviation

  • shipping

  • heavy manufacturing

Liquid fuels produced from sunlight could power those sectors.


The biggest challenge

Plants do photosynthesis slowly but extremely reliably.

Artificial systems must become:

  • efficient

  • cheap

  • stable for years

Right now most lab systems work only for hours or days, and many rely on expensive materials.


The most optimistic vision

In the best-case scenario, artificial photosynthesis could lead to solar fuel farms.

Imagine fields of devices that:

  • absorb sunlight

  • pull CO₂ from the air

  • produce liquid fuel directly

Essentially sunlight-powered refineries.


A surprising fact

Natural plants are actually not very efficient at converting sunlight to stored energy (often around 1–3%). Engineers believe artificial systems could potentially reach 10–20% efficiency, which would be revolutionary.


If you'd like, I can also explain something fascinating: why some scientists think artificial photosynthesis might eventually become more important than solar panels for the future energy system.


Chapter 31 - UK Deinstitutionalisation: Neoliberal Values and Mental Health

from Part III - Implications in Practice


Published online by Cambridge University Press
By Andrew Scull
Edited by George Ikkos and Nick Bouras

Summary


The number of patients resident in many hospitals had begun to decline several years before antipsychotics arrived on the scene. The population of asylums did shrink significantly throughout the 1960s and 1970s, though mental hospitals did not begin to vanish from the scene until the 1980s under Margaret Thatcher. Deinstitutionalisation was no accident. It was a consciously chosen neoliberal policy, pursued relentlessly over many decades. Welfare ‘reform’, in Britain as in the United States, has become a term of art disguising repeated assaults on the social safety net and the demonisation of those dependent upon it. ‘Community care’ in the era of neoliberal politics has turned out to be an Orwellian euphemism masking a nightmare existence for all too many of those afflicted with serious psychoses and for their families.

Keywords
deinstitutionalisation, neoliberal values, mental health, asylums, community care

Information
Type: Chapter
In: Mind, State and Society: Social History of Psychiatry and Mental Health in Britain 1960–2010, pp. 306–314
DOI: https://doi.org/10.1017/9781911623793.033
Publisher: Cambridge University Press
Print publication year: 2021
Creative Commons

This content is Open Access and distributed under the terms of the Creative Commons Attribution licence CC-BY-NC-ND 4.0 https://creativecommons.org/cclicenses/


Chapter 31 UK Deinstitutionalisation: Neoliberal Values and Mental Health

Introduction

The year 2021 marks the sixtieth anniversary of Enoch Powell’s Water Tower speech – a famous or infamous occasion depending upon one’s assessment of deinstitutionalisation and the advent of community care. Powell, the minister of health in Harold Macmillan’s Tory government, was a superb orator and a man not known to mince his words. In 1968, his ‘Rivers of Blood’ speech on immigration would cost him his position on the Opposition front bench and essentially end his political career (see also Chapters 1 and 2). In 1961, however, he was seen as a formidable figure in the Conservative Party and a possible future prime minister. Delivered at the annual conference of the National Association for Mental Health, the Water Tower speech announced a radical shift in mental health policy. Since the passage of the Asylums Act in 1845, which made construction of county lunatic asylums at taxpayer expense compulsory, Britain, like the rest of the Western world, had placed primary emphasis on institutionalisation as the preferred solution to the problems posed by serious mental illness. That era, Powell announced, was coming to a close.1

Neoliberal Dawn

The shift in policy was couched in the apocalyptic tones that Powell preferred. ‘There they stand,’ he intoned, ‘isolated, majestic, imperious, brooded over by the gigantic water-tower and chimney combined, rising unmistakable and daunting out of the countryside – the asylums our forefathers built with such immense solidity to express the notions of their day. Do not for a moment underestimate their powers of resistance to our assault.’ Yet assault them he would. The government’s plans ‘imply nothing less than the elimination of by far the greatest part of this country’s mental hospitals’. It was, he acknowledged, ‘a colossal undertaking’. Within fifteen years, however, half of the existing beds, 75,000, should be gone: ‘if we are to have the courage of our ambitions, we ought to pitch the estimate lower still, as low as we dare, perhaps lower.’ Mistakes might be made, but ‘if we err, it is our duty to err on the side of ruthlessness. For the great majority of these establishments there is no, repeat no, appropriate future use.’

Powell granted that vast sums had been spent on these edifices, and in the past decade much treasure had been expended to make them ‘less inadequate’. Those sunk costs were proffered as one argument against radical change. Other objections could be expected from the ‘hundreds of men and women who have given years, even lifetimes, to the service of a mental hospital or group of hospitals … have laboured devotedly, through years of scarcity and neglect, to render conditions in them more tolerable’. Too bad. The mental hospitals were ‘doomed’; and Powell volunteered to be ‘the first to set the torch to the funeral pyre’. In the following year, the NHS Hospital Plan began the process of enacting these proposals.

Powell spoke like an Old Testament prophet, and a prophet he proved to be. Though the asylums did not begin to vanish from the scene until the 1980s under Margaret Thatcher, their population did shrink significantly throughout the 1960s and 1970s. On an average day in 1960, the population of England’s mental hospitals was just over 140,000. By 1970, the census had fallen to 106,000 and, a decade later, to just over 79,000. Under administrations of all political stripes, Conservative, Labour and Coalition, the remorseless run down of institutional populations has continued apace. When David Cameron took power in 2010, the inpatient population had fallen to less than 23,000; and, while most mental hospitals remained open into the 1980s, notwithstanding the shrinking inpatient population, during the second half of Thatcher’s term in office they were largely abandoned. Some, like the massive Friern Hospital in north London, were converted to upscale flats for the nouveau riche (the building’s past coyly hidden as it became the Princess Park Manor); while others, like the old Devon County Asylum at Exeter, contaminated with asbestos and heavy metals, proved too costly to repurpose and were left to rot. ‘For dust thou art, and into dust shalt return’, as one of Powell’s favourite books would have it.

As Powell’s speech should make clear, deinstitutionalisation was no accident. It was a consciously chosen policy, pursued relentlessly over many decades, even as evidence materialised that the alternative to the hospital – ‘community care’ – was largely a figment of politicians’ imaginations, a phrase that sought to obscure the fact that there was little in the way of community to which most mental patients could return and still less in the way of care. Community care is, as I have suggested elsewhere, a shell game in which there is no pea.2 Those suffering from serious forms of mental illness have been cast into the wilderness – a wilderness that has little time for those with few resources of their own; for people who lack the capacity to function in a neoliberal environment whose architects regard them as little more than a drain on the public purse; and for ‘unfortunate creatures’ chronically dependent on the not-so-tender mercies of a shrinking welfare state, doubly stigmatised for their illness and because they show no signs of reform or recovery.

Psychiatrists have embraced the myth that the mass discharges of patients reflected advances in therapeutics, most especially the psychopharmacological revolution that began in 1954, with the marketing of Largactil (chlorpromazine). That, coincidentally, is the year in which the mental hospital census in England and Wales reached its zenith. The reality, however, is that the number of patients resident in many hospitals had begun to decline several years before antipsychotics arrived on the scene. Leading British psychiatrists such as Sir Aubrey Lewis and Michael Shepherd were rightly sceptical about the relationship between pharmacology and hospital discharges, and a host of later scholarship has confirmed their suspicions (see also Chapter 2).3 Drugs may have soothed professional anxieties when psychiatrists were asked to discharge long-stay patients, and the effects of Thorazine (chlorpromazine) and its copycat drugs on florid symptomatology may have made a difference at the margin, notwithstanding the tendency of disturbed patients to be non-compliant about taking their medications. Most awkwardly, though, for the advocates of the notion of a technological fix as the source of deinstitutionalisation, they have no way of accounting for the pattern of accelerated discharge that manifested itself from the 1970s onwards, a period where no breakthroughs in psychopharmacology materialised.

The record demonstrates that it was policy choices, not drugs, that fundamentally underpinned deinstitutionalisation, whose impetus did not come from the ranks of psychiatry. Indeed, the shift from the hospital to the community occurred largely behind the backs of the profession and independent of its predilections and actions. Political preferences ruled and, by and large, the profession followed along.

Deinstitutionalisation was not a uniquely British phenomenon, of course. Strikingly similar patterns can be observed in the United States and, later, in most advanced industrial societies. The mental hospital, once touted as ‘the most blessed manifestation of true civilization the world can present’,4 was now dismissed as a ‘total Institution’ on a par with such places as prisons and concentration camps, an anti-therapeutic engine of degradation that fomented chronicity rather than cured its inmates.5 That focus on the defects of the institution and its malign effects on those it purported to treat accounted for much of the support the new policy drew from civil libertarians and those concerned with patients’ rights (see also Chapters 3, 4, 13 and 20). Yet it substituted for careful assessment of what alternatives were being prepared, if any, for those discharged back into society at large.

Long before the principle that medical care would be provided free and as a right for all British citizens had been established, the special problems associated with serious mental illness had prompted the state to pay for the confinement and treatment of the overwhelming majority of mental patients. Such care, well into the twentieth century, had come at the price of being labelled a ‘pauper lunatic’, but the costs of housing and providing for the tens of thousands of patients who crowded the wards had long constituted a major draw on the public purse. Nor did that change much with the advent of the NHS in 1948. Mental hospitals continued to exist as a separate system, independent of the health service that dealt with physical ill health. The end of that separation was forecast in the 1962 Hospital Plan but only became a reality during the 1970s. No longer were the mental health services a separate administrative and financial system, and in the fight for resources, the generally low priority placed on mental illness has left the advocates for resources in a parlous position. Worse still, the incurable tendency of the political classes to reorganise and ‘reform’ the health services has repeatedly added to the chaos. Particularly pernicious in this regard was the 1990 National Health Service and Community Care Act, which passed responsibility for community care on to local authorities. Uneven and inadequate provision has been the inevitable result. No resources were allocated to the local authorities to provide for these new responsibilities, and the dislocations associated with establishment of NHS Trusts then compounded the problem (see also Chapters 10 and 30).

Neoliberal Supremacy

The 1970s saw the breakdown of the post-war consensus in British politics, which had seen both parties embrace a strong role for the government in the economy, an expanded welfare state and a major role for trade unions.6 Though they would be exacerbated by the external shock of the sharp rise in oil prices in 1973, the structural problems that already plagued the economy were exacerbated even further by the increasingly fractious state of labour relations. Unions flexed their power. Massive strikes, both official and unofficial, appeared to render Britain almost ungovernable. The country’s difficulties were made worse by the weakness and fecklessness of the politicians of both parties who ruled Britain during this decade.7 The miners’ strikes of 1972 and 1974, and the resultant blackouts and forced move to a three-day work week, brought the collapse of the Heath government, but Labour under Callaghan proved equally inept and incompetent in the face of industrial turmoil and political unrest. The ‘Winter of Discontent’ of 1978/9 saw more massive strikes, particularly by public sector trade unions, as Callaghan vainly sought to rein in rampant inflation. Bodies went unburied, rubbish uncollected and flying pickets restricted hospitals to taking emergency patients only. Public services appeared to be on the brink of collapse, as did the economy itself.

The sick man of Europe responded by electing a Conservative Party led by Margaret Thatcher in May 1979. Blessed with a thumping majority, her pro-market and neoliberal instincts, and visceral hostility to both unions and the public sector, would dominate British politics for decades. Within two years, she had ejected most of the one-nation Tories from her government and embarked on a radical reconstruction of the British economy and the British state. By the time Tony Blair brought ‘New Labour’ out of the political wilderness and back into power, following more than a decade and a half of Conservative rule, trade unions had been neutered as a political force, economic inequality had widened sharply and the parameters of political discourse and public policy had been altered, if not permanently, then certainly for the foreseeable future. Blair’s ‘Brave New Britain’ in many ways continued to embrace key planks of Thatcherite orthodoxy, albeit while putting a softer, more emollient face on its policies.8

Someone who could announce with a straight face (and meaning it) that ‘there is no such thing as society’, and who proceeded to amplify her meaning by insisting that ‘It is our duty to look after ourselves’, was not inclined to show much support for those who depended upon public provision of services for their very survival. Scornfully, in the same interview Thatcher noted that ‘we have gone through a period when too many children and people have been given to understand “I have a problem, it is the Government’s job to cope with it!” … they are casting their problems on society.’9 It was a view for which she evinced no sympathy. At best, she saw a role for Victorian charity, de haut en bas: ‘There is a living tapestry of men and women and people [sic] and the beauty of that tapestry and the quality of our lives will depend upon how much each of us is prepared to take responsibility for ourselves and each of us prepared to turn round and help by our own efforts those who are unfortunate.’ Victims of misfortune who sought collective, public responses to the difficulties they confronted were out of luck. Hers was not an administration that viewed such supplicants with favour. Handouts were to be in short supply.

In Thatcher’s view, British citizens had become far too dependent on the state, losing their sense of responsibility for their own lives. For her, the key to reviving Britain was to restore economic incentives and the discipline of the marketplace, rolling back the frontiers of the state and reducing expenditure on welfare, confining it to those in ‘real need’. Social security payments steadily eroded in value, and sickness and unemployment benefits were cut. The upshot was a doubling of the relative poverty level by the time she resigned in 1990, when 28 per cent of British children were found to be living below the poverty line (3.3 million, up from 1.7 million when she took office) – statistics that grew even worse under her successor, John Major. Levels of inequality showed no signs of declining under New Labour and have worsened still further since.10 Neoliberalism – privatisation, the deification of individualism, the destruction of union power, economic liberalisation and an increased ideological hostility to the collective provision of public goods – became the hallmark of her tenure in office and has perceptibly changed the terms of political debate ever since. Welfare ‘reform’, in Britain as in the United States, has become a term of art disguising repeated assaults on the social safety net and the demonisation of those dependent upon it.

Neoliberal NHS and Mental Health Services

The sole exception to this sustained ideological attack on the welfare state in the UK has been the NHS. Thatcher ostentatiously chose private health care, but this is one area where her ideology encountered stiff resistance (see also Chapter 9). At least at the level of rhetoric, governments of all stripes since she was forced from office have genuflected to the NHS and promised to protect it, a reflection of its overwhelming popularity among the public at large. Under Labour, the rhetoric was matched for a time by an attempt to boost resources for the health services, though little of this new money found its way into the mental health sector. The promised ‘parity of esteem’ for physical and mental health remained a slogan without substance. The great bulk of the additional money provided to the NHS was directed at physical illness, though administrative obfuscation and the fragmentation of service provision have made quantifying where the money went extraordinarily difficult.11 Moreover, though serious violence among those living with schizophrenia and psychosis is quite rare,12 media-inspired moral panics surrounding this issue have exercised a lopsided influence over public policy and have meant that such increases in resources for mental health provision as did occur under Labour were disproportionately spent on mitigating these risks – a distortion of policy that found legislative recognition in the new category of ‘Dangerous and Severe Personality Disorder’ (DSPD) in the 2007 Mental Health Act (see also Chapters 23, 27 and 28). Labour’s decision to have the NHS Trusts mimic the marketplace, a standard feature of neoliberal policy, added additional layers of administration and increased the strain on the system, to no good effect.

Whatever slight progress towards improving the lot of the seriously mentally ill had accompanied Labour’s thirteen years in power has since been reversed. David Cameron had given notice prior to winning the general election and entering a coalition with the Liberal Democrats in 2010 that ‘the age of irresponsibility was going to be giving way to the age of austerity’. With the enthusiastic support of his chancellor, George Osborne, he was as good as his word. The transfer of welfare to the private and voluntary sectors – a key part of the Thatcherite agenda – was soon under way. Over the ensuing decade, local authorities had their funding from Whitehall cut by 60 per cent. At the same time, caps were introduced on increases in council tax, placing local government under increasing fiscal strain. Under the 1990 legislation, local authorities bore primary responsibility for the community care of the mentally ill. What meagre provision they had previously made for their needs was increasingly threatened. Simultaneously, sharp cuts to the social care budget and the essential abandonment of any pretence to provide social housing worsened the situation further. Nor were these to be temporary measures to cope with the economic shock of the Great Recession. That provided the initial excuse for the cuts, but by 2013 (by which time public spending had already been cut by £14.3 billion compared to 2009–10), Cameron announced that he had no intention of reversing the changes when and if the economy improved. The reduced commitment of the state was to be a permanent feature of Britain’s future.

The NHS was supposedly to be protected from austerity, its funding ‘ring-fenced’. Though nominally true, the protection proved hollow: funding increases averaging just 1.4 per cent per year after adjusting for inflation were entirely inadequate to meet demand, which rose sharply with the ageing of the population, so that in real terms the health service suffered drastic cuts in what it could provide. Forced to prioritise day-to-day expenditure, the service neglected its capital budgets, leading to rapid deterioration of both buildings and equipment. Over 32,000 overnight beds were lost in a decade. Waiting times for services of all sorts lengthened, despite promises to reduce them. And, as usual, in the face of this deteriorating fiscal environment, mental health care suffered disproportionately. The gap between needs and the services available took another turn for the worse, and deeper cuts in social services added to the problems faced by those with serious mental illness.

The idea that we bear a collective moral responsibility to provide for the unfortunate – indeed, that one of the marks of a civilised society is its determination to provide as of right certain minimum standards of living for all its citizens – has been steadily eroding since Thatcher came to power. In its place, we have seen the resurgence of an ideology far more congenial and comforting to the privileged: the myth of the benevolent ‘Invisible Hand’ of the marketplace, and its corollary, an unabashed moral individualism. There is little place (and less sympathy) within such a worldview for those who are excluded from the race for material well-being by chronic disabilities and handicaps – whether physical or mental disease, or the more diffuse but cumulatively devastating penalties accruing to those belonging to racial minorities or living in dire poverty.

The punitive sentiments directed against those who must feed from the public trough extend only too easily to embrace those who suffer from the most severe forms of psychiatric misery. Those who seek to protect the long-term mental patient from the opprobrium visited upon the welfare recipient may do so by arguing that the patient is both dependent and sick. But I fear this approach has only a limited chance of success (see also Chapter 27). After all, despite two centuries of propaganda, the public still resists the straightforward equation of mental and physical illness. Moreover, the long-term mental patient in many instances will not get better, and often fails to collaborate with his or her therapist to seek recovery. Such blatant violations of the norms governing access to the sick role in our societies make it unlikely that people with severe and enduring mental ill-health will be extended the courtesies and exemptions accorded to the conventionally sick (see also Chapter 3).13 Instead, even those incapacitated by psychiatric disability all too often find themselves the targets of those who would abolish social programmes because they consider any social dependency immoral.

Conclusion

Seen in this larger context, the neglect that has been the hallmark of the shift from the asylum to community care should come as no surprise. Among those with more noticeable continuing impairment, ex-patients placed with their families seem overall to have fared best. It would be a serious mistake, though, to suppose that even here deinstitutionalisation has proceeded smoothly and has proved unambiguously beneficial. Quite early in the process, John Wing expressed ‘surprise’ that, in view of the greatly increased likelihood of someone with schizophrenia living at home instead of in a hospital, so little research was being done on the problems experienced by their relatives.14 His own work, and that of his associates, demonstrated that ‘the burden on relatives and the community was rarely negligible, and in some cases, it was intolerable’ (see also Chapter 2).15 A good deal of the distress and misery remains hidden because of families’ reticence about complaining, a natural tendency, but one which has helped to sustain a false optimism about the effects of the shift to community treatment. As George Brown pointed out, ‘relatives are not in a strong position to complain – they are not experts, they may be ashamed to talk about their problems and they have come to the conclusion that no help can be offered which will substantially reduce their difficulties’.16 Such conclusions may have a strong factual basis, in view of the widespread inadequacies or even absence of after-care facilities, and the reluctance, often refusal, of the authorities to countenance rehospitalisation. Long delays in receiving appointments, the absence of any provision for respite care, and the lack of co-ordination among service providers compound the problems families face, and eventually many give up the struggle.

Many psychotic patients are thus left to shuffle between flop-houses, homelessness, and short periods in jail, when their illness and dependency cause them to commit what are usually minor offences. Misery and poverty remain their lot, till most of them succumb to an early death. Given the thrust of public policy for the past sixty years, it should come as no surprise to learn that those afflicted with serious mental illness have a life expectancy of between fifteen and twenty-five years less than the rest of us.17 It is, nonetheless, a disgrace.

Key Summary Points

The number of patients resident in mental hospitals had begun to decline several years before antipsychotics arrived on the scene.


The population of asylums did shrink significantly throughout the 1960s and 1970s, though mental hospitals did not begin to vanish from the scene until the 1980s under Margaret Thatcher.


Deinstitutionalisation was no accident. It was a consciously chosen neoliberal policy, pursued relentlessly over many decades.


Welfare ‘reform’, in Britain as in the United States, has become a term of art disguising repeated assaults on the social safety net and the demonisation of those dependent upon it.


‘Community care’ in the era of neoliberal politics has turned out to be an Orwellian euphemism, masking a nightmare existence for all too many of those afflicted with serious psychoses and for their families.
