1980s on: Improving care

Haemodialysis and kidney transplantation both emerged through the 1960s and 1970s as high-cost, high-risk therapies with uncertain application to the many people known to develop ESRD. But by the turn of the century they were established as routine therapies which should be widely available. Although the cost remained high, the evidence of benefit (not least through the information provided by the UK Renal Registry) was more than sufficient to drive all local and regional health economies in the UK to ensure all RRT modalities were available – HD, PD and transplantation. (This does not gainsay a continuing degree of inequity of access in different parts of the UK, which was gradually overcome through service development and rationalisation (1980s to the modern day – meeting demand: link).)

Much of the progress which improved outcomes for ESRD from the 1960s onwards was incremental, based on expertise gained through experience, on gradual technical improvements (for example in HD machinery), and on better outcomes from new drug therapies.

But there were some step changes which benefited patients, in quality and quantity of life, as well as clinical staff, by providing greater efficiencies which enabled better quality care to be delivered and in turn made the work more rewarding.

All these changes were occurring while the prevalent populations of those treated by maintenance dialysis and transplantation were relentlessly increasing, often outstripping growth in clinical staffing and facilities. Concerns continued that kidney units would ‘drown’ and simply be unable to provide the necessary care for the numbers of patients coming to their doors. Greater efficiency and effectiveness of care through step-change improvements therefore became an element in the survival of specialist renal care.

Each of these changes is described in more detail elsewhere, but they are brought together here to show the tempo and chronology of the major therapeutic improvements.

Extracorporeal therapies

In the 1960s and 1970s the rebuildable Kiil dialyser was the norm in most dialysis units. The need to rebuild was very demanding for nurses, and perhaps more so for patients and carers dialysing at home. The cuprophan sheets had to be carefully laid between the plastic framework of the Kiil, and if a leak was identified on testing the dialyser had to be disassembled and rebuilt; the process could take hours. The relative inefficiency of the Kiil also meant that dialysis schedules were extended by contemporary standards – eight hours thrice weekly was the norm. This was demanding personally and socially for patients, whether dialysing in centre or at home. The change to more efficient disposable hollow fibre dialysers reduced the preparation time and also fuelled the wish for shorter treatment times. But a longstanding debate was initiated by those convinced that there were better clinical outcomes with longer, slower HD, even when shorter hours appeared adequate as judged by conventional measures such as the urea reduction ratio.
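For readers unfamiliar with the measure, the urea reduction ratio compares blood urea before and after a single dialysis session; the formula below is the standard definition, and the numbers in the example are purely illustrative rather than drawn from the account above.

$$\mathrm{URR} = \frac{U_{\text{pre}} - U_{\text{post}}}{U_{\text{pre}}} \times 100\%$$

For example, a pre-dialysis blood urea of 30 mmol/l falling to 9 mmol/l after a session would give a URR of 70%.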

The other step change in technology was the introduction in the 1990s of convective haemofiltration as an alternative or an addition to diffusive haemodialysis. The expected clinical benefits of removing ‘middle’ molecules as well as small solutes were slow to emerge in clinical trials, but haemodiafiltration nevertheless came into widespread use, despite the higher cost of both hardware and consumables.

Vascular access

In the earliest days of maintenance HD the usual mode of vascular access was the arteriovenous shunt, which was enormously demanding for patients and staff – characterised by frequent (and painful) declotting, and multiple re-sitings until possible sites for shunt placement might run out. By the mid-1970s the much superior arteriovenous fistula had replaced the shunt as the routine vascular access technique. The next two decades were characterised by incremental progress in surgical creativity and technique – not least to provide vascular access in the growing numbers of frail patients in whom small fragile vessels precluded successful creation of the primary radiocephalic fistula, and to sustain vascular access in the increasing numbers of patients with prolonged dialysis vintage. It was also necessary to learn the limitations of the arteriovenous graft using prosthetic material – attractive because it allowed immediate use (compared with the necessary wait for maturation after forming an AV fistula) but associated with higher failure and infection rates.

Over the same period the use of tunnelled vascular catheters for long-term dialysis access underwent critical review. A vascular catheter was attractive because it avoided painful needling and allowed immediate use. But it became clear that the usual subclavian site was associated with venous stenosis and its complications, and the internal jugular route became the standard. Despite the best of care, infection risks remained high, and in most centres catheter use became restricted to those in whom significant surgical efforts had failed to establish an AV fistula.

Peritoneal dialysis

CAPD rapidly became a widely used modality after its introduction in the UK in the late 1970s. Its uptake was driven in part by patient choice of a home-based therapy, but more so because it became the preferred therapy to avoid overwhelming the totally inadequate centre HD facilities in the UK. CAPD had three main technique weaknesses which limited its use. The first was the risk of peritonitis, which was significantly reduced by the simple innovation of the double-bag Y system.

Another practical limitation of CAPD was the inconvenient, time-consuming process of four exchanges a day. Inconvenient for anyone, it was completely incompatible with school and with many forms of employment. Automated PD (usually nocturnal) was therefore popular and widely adopted as soon as it became available.

The third limitation was the variable ultrafiltration achieved by the peritoneal membrane; glucose was the first osmotic agent used, resulting in substantial glucose loading in some patients. A significant clinical benefit came with the introduction in the 1980s of icodextrin as a non-glucose osmotic agent. Icodextrin was first investigated and put through clinical trials at Manchester Royal Infirmary (work led by Chandra Mistry, Netar Mallick and Ram Gokal) and has become an enduring feature of routine PD.

Transplant

In the 1970s one-year graft survival after kidney transplantation was typically 40% or less, and the risk of an acute rejection episode was 70-80%. Azathioprine (introduced into clinical practice in the early 1960s by Roy Calne) with substantial corticosteroid dosage was the only maintenance therapy. Acute rejection was treated with further high-dose steroid (with the inevitable adverse effects) or graft irradiation. This scenario was transformed by the introduction of the calcineurin inhibitors (CNI): first cyclosporin, then tacrolimus. Cyclosporin was first used in clinical practice by Roy Calne in Cambridge in the late 1970s. This proved very challenging, since its nephrotoxicity in humans had not been predicted by animal studies, and graft dysfunction was at first treated by increasing the dose of cyclosporin on the presumption that it represented rejection. The significant burden of resolving these issues was borne by Calne’s unit. Thereafter the use of CNI grew rapidly, with one-year graft survival in many units increasing stepwise from around 50% to more than 80%.

The use of CNI also paved the way over the next three decades for sequential reductions in maintenance steroid dosage, and indeed for steroid-free regimens in a substantial proportion of patients, with the many benefits of avoiding steroid-induced adverse effects.

The most recent major change in transplantation has been the increasing range of donors considered suitable since the turn of the century. UK transplant programmes were at first dominated by the use of deceased donors, but this became more challenging as various social changes (not least the introduction of seat belts) reduced the incidence of lethal traffic accidents, and an increasing proportion of deceased donors were less than ideal, mainly older people dying, for example, of stroke. Confidence grew in the use of living related donors as accumulating registry data showed a significant benefit in graft survival compared with deceased donors. The use of living unrelated donors then emerged, as more and more evidence accumulated of good outcomes with such donors even with poor HLA matching. This opened the way to widespread use of spousal donation, with recognised benefits in quality of life for both recipient and spouse, and to the small but growing use of altruistic donation. Taken together, these changes have gradually increased transplantation rates by widening the range of suitable recipients and donors.

The one major innovation in surgical technique has been laparoscopic donor nephrectomy, first used in the UK in the late 1990s and now the norm, making the offer to be a living donor much less daunting than previously.

Renal anaemia

A disappointment (though not unexpected) was the inability of maintenance dialysis to correct renal anaemia. The management approach until the late 1980s was often characterised by collusion between clinicians and patients. Clinicians had no effective means of raising the haematocrit other than frequent transfusion, with its inevitable disadvantages of iron overload and an increased risk of circulating antibodies which compromised transplant opportunities. To make the situation tolerable, clinicians and patients would persuade each other that it was ‘not too bad’ living chronically with haemoglobin concentrations below 80 g/l, as was commonly the case for dialysis patients – whereas in truth most patients felt exhausted and profoundly debilitated.

The advent of recombinant human erythropoietin (rh-Epo) for clinical use was transformative; perhaps the single greatest patient benefit for those with ESRD since the introduction of maintenance dialysis. The first clinical trials (led by Chris Winearls in the UK and Eschbach et al. in the US) showed a brisk, predictable rise in haematocrit with concomitant symptomatic improvements. Although there was still much to learn about its use (dosing, iron management, target haematocrit, and funding mechanisms given its high cost), rh-Epo rapidly developed a central role in the care of those with ESRD (Link to Epo page in Great Contributions).

Renin-angiotensin system (RAS) blockade

ACE inhibitors and angiotensin receptor blockers were developed from the 1970s onwards as treatments for hypertension. By the mid-1980s there was experimental evidence that these classes of drugs would delay the progression of kidney failure, especially in the context of significant proteinuria, with or without diabetes. Some caution was introduced when it became clear that initiating RAS blockade might occasionally produce a precipitate fall in renal function (usually in the context of occlusive renovascular disease). But it was clear that they were highly effective in many patients with progressive chronic kidney disease. Their routine use over the last three decades has undoubtedly delayed or avoided dialysis in very large numbers of patients at very modest cost, a transformative benefit.

Last Updated on February 9, 2023 by John Feehally