Customer vulnerability FAQs: support

How do you support vulnerable customers?

Identifying vulnerable customers and recording their characteristics is only part of the job – the next step is actually doing something about it. These questions, drawn from an industry Q&A, look at the practicalities of support: what reasonable adjustment looks like, when to tailor service, what you can and can’t charge for, and why passive signposting is rarely enough on its own.

How can we support vulnerable customers?

What support looks like depends on three things: the characteristics the customer has, the severity, and the potential for harm from the particular product or service. A customer with dyslexia buying a motor policy needs different support to the same customer taking out a complex investment – and both look different again from a customer in the middle of bereavement trying to make a pension decision.

Most firms are already doing a fair amount of this informally, sometimes without realising it – an adviser spends longer with a client they know is grieving, a call handler reads out key points more slowly. Under Consumer Duty and FG21/1, those informal adjustments now need to be formalised, recorded and applied consistently – not just when an individual member of staff happens to notice.

There’s a large body of support available, much of it free. Directories like itswhoiam.me and the National Support Hub bring relevant services and charities together in one place. The UK also has well-established networks for debt (StepChange, Citizens Advice, the Money and Pensions Service), mental health (Samaritans, Mind), addiction (GamCare, AA), bereavement (Cruse), and so on. You don’t need to build any of this yourself – just make it easy for staff and customers to reach it.

The practical challenge is training staff on the full range. Some support options are used dozens of times a day, others maybe once a year, and a manual approach inevitably narrows what gets offered to whatever staff can remember. Systems that suggest appropriate support based on the individual’s recorded characteristics take that bottleneck out of the equation – staff aren’t expected to memorise hundreds of options, the system surfaces the right ones for the situation.
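
To make the idea concrete, here’s a minimal sketch of how a system might surface support options from a customer’s recorded characteristics, so nothing depends on what an individual staff member happens to remember. The characteristic keys and support options below are invented for illustration – they are not a real directory or any product’s actual data model.

```python
# Illustrative sketch only: suggest support options from recorded
# characteristics. Characteristics and options here are placeholders,
# not a real firm's catalogue.

SUPPORT_DIRECTORY = {
    "bereavement": ["Cruse Bereavement Support", "extra decision time"],
    "problem_debt": ["StepChange", "Citizens Advice", "payment-plan review"],
    "mental_health": ["Samaritans", "Mind", "call-back at a quieter time"],
    "sight_loss": ["large-print documents", "screen-reader-friendly PDFs"],
}

def suggest_support(recorded_characteristics):
    """Return de-duplicated support options relevant to this customer."""
    suggestions = []
    for characteristic in recorded_characteristics:
        for option in SUPPORT_DIRECTORY.get(characteristic, []):
            if option not in suggestions:
                suggestions.append(option)
    return suggestions
```

Even a lookup this simple illustrates the point: the staff member sees only the handful of options relevant to the customer in front of them, while the full directory can grow to hundreds of entries without adding to anyone’s memory load.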

A final thought on what ‘support’ actually means. It’s not just signposting – it includes the adjustments you make in how you deliver the product itself. Giving someone more time to make a decision, sending key information in a format they can use, involving a trusted person (with consent), verifying understanding before completing a sale – all of these count, and all need to be recorded and evidenced.

How differently do we need to treat vulnerable customers?

The answer Consumer Duty gives is: as much as needed to achieve outcomes comparable to those of resilient customers, and no more than that.

This matters in both directions. Under-treatment is the obvious risk – failing to make adjustments someone needs and ending up with poor outcomes. But over-treatment is real too. Assuming that every vulnerable customer needs significantly different service risks being patronising, can slow things down unnecessarily, and can itself be a source of harm. Nobody wants to be managed as a special case when they don’t need to be.

The way to square this in practice is a combination of two things:

  • Inclusive design – building services so they work for as many people as possible by default. Plain English communications, multiple formats as standard, accessible digital journeys, sensible defaults that don’t disadvantage anyone. Most customers’ vulnerabilities can be accommodated without individual intervention if the baseline service is well designed.

  • Layered personal adjustment – for where the baseline isn’t enough. For a customer with a specific need – advanced macular degeneration, say, or a recent bereavement affecting capacity – you adjust the interaction: extra time, a trusted family member involved, a slower pace, verification of understanding. The Equality Act 2010’s concept of ‘reasonable adjustments’ is a useful frame here, even beyond the protected characteristics.

Critically, some vulnerabilities need almost no adjustment. Someone who is blind and uses a screen reader may well navigate your digital journey perfectly – the need is for the journey to be accessible, not for bespoke handling. Someone with mild, well-managed anxiety may want nothing different at all. The point of identifying characteristics of vulnerability isn’t to treat everyone differently; it’s to know when different treatment is needed and for whom.

Is it right to pass on the cost of reasonable adjustments, such as braille documents?

No – and this isn’t just a best-practice point, it’s the law.

Under the Equality Act 2010, service providers have a duty to make reasonable adjustments for disabled customers. Section 20(7) is explicit: the firm cannot require the disabled person to pay any of the costs of complying with that duty. Producing braille, large print, easy-read, audio or accessible digital formats all fall within reasonable adjustment, and the cost sits with the firm. Passing it on is unlawful under the Act and, separately, a clear Consumer Duty failing on fair value.

This extends beyond formats. If a customer needs a phone-based service because they can’t use the digital journey, you can’t charge a phone premium for that. If an adjustment takes longer to deliver, you can’t charge the customer for the additional time. Indirect discrimination – where a neutral-looking policy disadvantages a disabled person in practice – is also caught by the Act. The Equality and Human Rights Commission publishes guidance for service providers on how this works in practice and is worth a read if you’re working through the detail.

Beyond the legal position, there’s a commercial reality: most reasonable adjustments are cheap, or can be designed into the baseline service so they cost nothing extra per customer. A plain English summary alongside a full document is a one-off piece of copywriting. Digital PDFs that are screen-reader compatible just need correct tagging at creation, not a separate process per request. Inclusive design turns a lot of per-case costs into fixed costs, and usually small ones. The fear that reasonable adjustment is expensive rarely survives contact with the actual numbers.

How hard is it for firms to meet the bar on vulnerable customer service, and will the bar keep rising?

Honestly, most firms are finding it harder than they expected, and yes, the bar is likely to keep rising.

The FCA’s multi-firm reviews since Consumer Duty came into force have been consistently critical. Identification rates remain low at many firms (often single-digit percentages against an expected roughly 50%). Evidencing outcomes for vulnerable cohorts – as opposed to simply recording that vulnerability exists – has been a particular weak point. Fair value assessments that properly consider vulnerable customers are patchy. Across speeches, ‘Dear CEO’ letters and review findings, the FCA has made clear it expects noticeable improvement.

Why is it hard? A few reasons. Customer vulnerability is genuinely complex – there are many types, many severities, and they change over time. Many firms started with training as their main lever and have since found that training alone doesn’t close the gap. Legacy systems often weren’t built to capture vulnerability data in a structured, reportable way. And the cultural shift – from ‘we’ll help when asked’ to ‘we actively find out who needs help’ – takes time to land.

As for the bar rising, several forces push that way. The FCA is publishing comparative data and expectations from its reviews, which raises the floor. Consumer Duty requires annual board reporting on outcomes by cohort, which drives internal investment. Technology in this space is maturing quickly, so what was state of the art two years ago is now standard. And industry benchmarking – led in part by bodies like the CII, whose customer vulnerability guidance sets out an implementation framework for insurance and personal finance – gives firms a concrete reference point against which to measure themselves.

Firms finding it hard now are in good company, but waiting won’t help. The firms that’ll find the next few years easiest are the ones investing in data, systems and structured processes now, because that’s where the bar is heading.

What are the FCA’s expectations on pricing adaptations for vulnerable clients?

The FCA hasn’t set a specific rule on differential pricing for support or adjustments, but a few principles are clear.

  • First, the Equality Act prevents you charging disabled customers for reasonable adjustments. That alone rules out passing on the cost of braille, large print, audio, easy-read, phone support in place of digital, and similar.

  • Second, Consumer Duty’s price and value outcome requires firms to assess fair value, and to do so for vulnerable cohorts as well as the market as a whole. If vulnerable customers are paying more, getting less, or both – particularly where that’s tied to characteristics like age, disability or ill-health – that’s a price and value issue the firm needs to justify or fix. The FCA’s general insurance pricing reforms (PS21/5) already bit hard into one version of this, the so-called ‘loyalty penalty’, and similar scrutiny applies more broadly under the Duty.

  • Third, the industry’s concern is that doing this well will drive up costs. The evidence doesn’t really bear that out. Firms that have invested in proper customer vulnerability management typically find that operating costs stay flat or fall – the savings from avoided complaints, fewer reworks, and better-targeted service tend to cover the cost of doing the work. And good customer vulnerability practice builds customer loyalty, which has its own commercial value.

The direction of travel is that reasonable adjustment and tailored support will increasingly be a service differentiator rather than a cost centre. Firms that lean into it stand to win customers from firms that don’t.

What process should a firm put in place to support a vulnerable customer?

Good support isn’t a separate process bolted onto the side of everything else – it’s woven into the normal customer journey. A handful of principles tend to distinguish firms doing this well.

  • Gentle, in the flow of work. Assessment and support conversations happen as part of existing interactions, not as a separate ‘vulnerability process’ customers are pulled aside for. The tone is conversational, the purpose is framed positively (understanding the customer so they can be served better), and the language avoids labels like ‘vulnerable’.

  • Confidential and respectful of consent. What the customer shares is held securely, used only for appropriate purposes, and shared internally and externally only with informed consent. The customer should always know why information is being asked for, what happens to it, and how it benefits them.

  • Continual. Customers’ vulnerabilities change, so a one-off assessment at onboarding isn’t enough. Build in reassessment at annual review, at significant events (claims, renewals, complaints), and when the customer gets in touch with something that might be a trigger.

  • Active, not just informational. Signposting (‘here’s a leaflet, here’s a helpline number’) has its place, but most evidence suggests it’s rarely used, particularly at the point of crisis. When someone is in acute difficulty, the last thing they’re going to do is start phoning round charities. Genuinely useful support is more direct – offering to make a warm introduction where appropriate, actively checking the customer has what they need, following up to see if the support helped.

  • Self-help isn’t the answer at the point of crisis. Many firms rely heavily on self-service support content – FAQs, pre-recorded videos, online tools. These are useful for the well and the resilient; they’re generally not useful for someone in the middle of a mental health episode, a bereavement or a financial collapse. Active human (or well-designed assisted) support is what makes the difference when it matters.

  • Evidenced end to end. Every step gets recorded – what the characteristic is, what mitigation was offered, whether it was taken up, what the outcome was. That’s not just for the regulator, though it matters there. It’s also how you learn what works. Without the chain of evidence, you’re relying on intuition, and intuition is hard to scale.

The process doesn’t have to be complicated, but it does have to be consistent, visible, and backed by the right data and tools. Firms that try to do this on willpower alone – good intentions and staff empathy – tend to find it doesn’t scale. Firms with the process and the supporting systems find it becomes part of the way they operate.

How can technology make supporting vulnerable customers easier and more reliable?

Supporting vulnerable customers well is hard to do consistently at any reasonable scale without technology. Front-line staff, however well trained, are working under time pressure, on different shifts, with different levels of experience, and across thousands of conversations a week. Asking them to also become experts in identifying and assessing customers’ vulnerabilities – their many forms, the way severity varies, and how they interact with specific products – sets them up to struggle. Purpose-built technology takes much of that weight off them and produces results that are more reliable, more consistent and easier to evidence.

The first place technology earns its keep is in correct identification. A structured digital assessment, built around the right questions and an objective scoring framework, will pick up vulnerability characteristics that a free-flowing conversation often misses. People are frequently more willing to disclose personal circumstances through a guided assessment than in a phone call, partly because the questions feel neutral and partly because there’s no awkwardness about being labelled. The software does the categorisation behind the scenes, so neither the customer nor the staff member is asked to make a judgement call. The result is a higher and more accurate disclosure rate than training alone tends to achieve.
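
A toy sketch of what an objective scoring framework can look like: each answer in the structured assessment maps to a points value, and the total maps to a severity band. The questions, weights and band thresholds below are invented for illustration – a real framework is calibrated against evidence, not guessed.

```python
# Hypothetical scoring framework: answer -> points, total -> severity band.
# All questions, weights and thresholds are illustrative assumptions.

ANSWER_SCORES = {
    "recent_bereavement": {"no": 0, "within_two_years": 2, "within_six_months": 4},
    "financial_resilience": {"comfortable": 0, "managing": 1, "struggling": 3},
    "health_impact": {"none": 0, "some": 2, "significant": 4},
}

def severity_band(answers):
    """Score a completed assessment and return (total, band)."""
    total = sum(ANSWER_SCORES[question][answer]
                for question, answer in answers.items())
    if total == 0:
        band = "none identified"
    elif total <= 3:
        band = "low"
    elif total <= 6:
        band = "medium"
    else:
        band = "high"
    return total, band
```

The point of the sketch is that the categorisation happens in the scoring logic, not in anyone’s head: the customer answers neutral questions, and neither they nor the staff member is asked to make a judgement call.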

The second area is in-context help. Identifying that someone has a particular characteristic only matters if it’s connected to the situation at hand. A good system surfaces the relevant information at the point a staff member is dealing with a customer, alongside guidance tailored to the product and the interaction. That means a front-line colleague isn’t left trying to remember what to do – the right prompt appears in front of them, in the moment, for that specific customer and that specific product. It turns vulnerability support from something staff have to recall into something the system actively supports.

Technology also lets firms build their own support pathways and set triggers that fit how they operate in practice. Every firm’s products, channels and risk profile are different, so the mitigations that work for one may not map cleanly onto another. Being able to define your own pathways – the adjustments, communications, escalations or specialist referrals you want to apply for given combinations of characteristic, severity and product – means the system reflects your business rather than forcing you into a generic template. Triggers add a further layer: automatic reassessments at agreed intervals, alerts when a customer’s situation changes, or escalation when specific combinations of factors appear. That moves vulnerability management from a one-off snapshot to genuinely ongoing care. (That said, our software – MARS – does come with a predefined set of pathways to get firms started.)
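
As a rough illustration of the pathway idea, here’s a sketch in which each rule names the characteristic, minimum severity and product it applies to, plus the actions to take when it matches. The rule contents, the 1–10 severity scale and the field names are assumptions for the example, not MARS’s actual pathway model.

```python
# Illustrative firm-defined pathways: characteristic + minimum severity
# + product -> actions. All rules and the severity scale are invented
# for this example.

from dataclasses import dataclass

@dataclass
class Pathway:
    characteristic: str
    min_severity: int   # assumed 1-10 scale
    product: str        # "*" means any product
    actions: tuple

PATHWAYS = [
    Pathway("bereavement", 5, "pension",
            ("extra decision time", "senior-adviser referral")),
    Pathway("sight_loss", 3, "*",
            ("accessible-format documents",)),
]

def actions_for(characteristic, severity, product):
    """Collect the actions from every pathway this situation matches."""
    matched = []
    for pathway in PATHWAYS:
        if (pathway.characteristic == characteristic
                and severity >= pathway.min_severity
                and pathway.product in ("*", product)):
            matched.extend(pathway.actions)
    return matched
```

The same structure extends naturally to triggers: a scheduled job re-runs the match when a reassessment lands or a customer’s recorded severity changes, so the pathway fires on new information rather than only at onboarding.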

Consistency across teams is another big gain. When assessments rely on individual judgement, you end up with different staff making different calls about similar customers, which makes the data hard to trust and the customer experience hit-and-miss. Objective scoring, applied uniformly, removes that variability. A customer in one channel gets assessed the same way as a customer in another, and a new joiner produces results comparable to a long-serving colleague’s. That consistency is what lets you report meaningfully on cohorts, compare outcomes, and demonstrate to the regulator that vulnerable customers are being treated fairly across the business.

Scalability is the obvious commercial argument. Trying to solve vulnerability through more headcount and more training simply doesn’t keep up with the volume most firms are dealing with. Software handles ten thousand assessments as easily as ten, without losing accuracy as the numbers climb. That changes customer vulnerability management from a problem that gets harder as you grow into one that holds up at scale.

Reliability matters because customer vulnerability is high-stakes. A customer slipping through the cracks isn’t just a service failure – it can lead to genuine harm, complaints, and regulatory exposure. A purpose-built system applies the same logic every time, doesn’t have an off day, and keeps a complete record of what was assessed, when, by whom, and what action was taken. That audit trail is exactly what firms need when something is challenged, and it’s what regulators want to see when they ask how vulnerable customers are being supported.

Underpinning all of this is consistency over time. Customer vulnerability isn’t static – people’s circumstances change, sometimes gradually, sometimes suddenly. Technology lets firms reassess at sensible intervals, capture changes as they happen, and build a longitudinal picture of each customer rather than relying on a single point-in-time view. Combined with the ability to define pathways and triggers, that turns the system into something genuinely dynamic, which is closer to what good support actually looks like in practice.

Software like MARS, from MorganAsh, is built specifically for this job, which is what makes it efficient. It identifies vulnerabilities accurately, surfaces guidance in context, supports custom pathways and triggers, applies objective scoring consistently across teams, scales without losing reliability, and produces the audit trail and reporting that regulators expect. None of these things is impossible to do manually – but doing them well, repeatedly, across a large customer base, is where purpose-built technology stops being a nice-to-have and becomes the only realistic answer.