Customer vulnerability FAQs: monitoring
How do you monitor and review vulnerable customers?
Identifying a vulnerable customer is only the start. Under Consumer Duty, firms have to keep an eye on how those customers are doing throughout the life of the product, spot changes, reassess when things shift, and show they've done it. These questions and answers look at the practicalities: how often to review, how long to keep data, how to handle information flow between manufacturer and distributor, and how to evidence all of it.
How can financial services firms effectively monitor and report on customer vulnerability over time?
Monitoring customers’ vulnerabilities well means treating vulnerability as something that changes, rather than as a fixed ‘vulnerable’ label. A customer’s circumstances might be temporary (a short bout of ill health, for example), recurring (a cyclical mental health condition), progressive (hearing loss) or permanent (a long-term disability), and any monitoring approach needs to reflect that.
The starting point is agreeing on what you’re trying to track – things such as whether customers are becoming more financially resilient over time, whether complaints and escalations are falling, and whether the support you’re providing is making a genuine difference. The FCA expects firms to show how their vulnerable customers are identified, supported and monitored on an ongoing basis, so the measurement framework needs to line up with that.
The next step is pulling in the right signals. No single data source gives you the full picture, so firms tend to combine several: transactional clues like missed payments or sudden changes in spending, CRM flags and case notes from previous interactions, life-event feedback such as bereavement notifications or address changes, and interaction analytics that pick up sentiment or keywords in phone calls and chats. Voice and text analytics in contact centres have grown considerably, because many early signs of vulnerability surface in how customers talk about their situation rather than in their transaction data alone. Complaints patterns add further depth.
However, these signals are largely reactive, and the FCA requires firms to be proactive – relying only on the instances where customers get in touch delivers an extremely incomplete picture. It is far better to reassess customers regularly and proactively; how often depends on your needs and the product in question.
Once you have the signals, the question is how you turn them into something you can track over time. A layered approach works best: straightforward rules to catch immediate risk (for example, a clear drop in income coinciding with missed payments), combined with behavioural or machine-learning models that produce an early-warning score. The important part is keeping the history. If you only ever see the current status, you can’t tell whether a customer is getting better, getting worse, or cycling in and out of difficulty. Persisting historical scores lets you measure movement along the vulnerability severity range, spot recurrence, and understand how long it typically takes for someone to recover. Most practical guides suggest a mix of scheduled reviews – quarterly or annual – and event-driven checks triggered by specific signals.
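As a rough illustration of that layered approach – the field names, thresholds and scale here are assumptions, not a prescribed design – a transparent rules layer plus a persisted score history might look something like this:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class CustomerRecord:
    # Hypothetical signals a firm might already hold; names are illustrative.
    income_drop_pct: float        # month-on-month income change (negative = drop)
    missed_payments_90d: int      # missed payments in the last 90 days
    model_score: float            # behavioural model output, 0 (low risk) to 1 (high)
    score_history: list = field(default_factory=list)  # persisted (date, score) pairs

def rules_flag(c: CustomerRecord) -> bool:
    """Rules layer: catch immediate risk with simple, explainable thresholds."""
    return c.income_drop_pct <= -0.30 and c.missed_payments_90d >= 2

def record_score(c: CustomerRecord, today: date) -> None:
    """Persist the current score rather than overwriting it."""
    c.score_history.append((today, c.model_score))

def trend(c: CustomerRecord, window: int = 3) -> str:
    """Direction of travel over the most recent reviews."""
    recent = [score for _, score in c.score_history[-window:]]
    if len(recent) < 2:
        return "insufficient history"
    delta = recent[-1] - recent[0]
    if delta > 0.1:
        return "deteriorating"
    if delta < -0.1:
        return "improving"
    return "stable"
```

The detail matters less than the shape: rules you can explain, a score that is stored rather than replaced, and a trend derived from the history.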
Reporting is where this all becomes useful to senior management and the board. Dashboards should show prevalence (the share of customers flagged as vulnerable), cohort trends broken down by product or channel, outcomes such as the support provided and harm prevented, time-to-resolution, and escalation rates. Linking the cadence of this reporting to Consumer Duty oversight helps keep evidence tied to regulatory expectations.
None of this works without proper operational and ethical controls. Human validation of model inferences is essential – for example, automated flags should prompt a conversation, not trigger an intervention on their own. Clear escalation paths, front-line staff training, and robust consent and privacy safeguards all matter. So does keeping a weather eye on drift and false positives so that the system stays genuinely helpful rather than creeping into something that causes harm.
Doing this properly is genuinely difficult without the right technology. Trying to run customer vulnerability monitoring on spreadsheets, or in generic CRM fields, tends to collapse under its own weight – particularly once you need longitudinal scoring, consistent assessments across teams, and board-ready reporting. Purpose-built software such as MARS, from MorganAsh, is designed specifically for this job, and firms that are serious about monitoring customers’ vulnerabilities efficiently tend to find that built-for-purpose tooling is what makes the difference between a tick-box exercise and something that actually improves outcomes.
As a starting set of KPIs to report on monthly or quarterly, consider:
The share of customers flagged as vulnerable.
The share proactively contacted.
The average time from flag to support.
The rate of outcome improvement (for example, reduction in problem debt or complaints).
The false-positive rate for automated flags.
These give you a balanced view of reach, responsiveness, effectiveness and accuracy, and they’re straightforward enough to explain to a board.
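To make those concrete, here’s a minimal sketch of how a few of them might be computed from case records – the field names are assumptions, not a prescribed schema:

```python
from datetime import date

# Illustrative case records; the fields are assumptions for this sketch.
cases = [
    {"flagged": True, "contacted": True, "flag_date": date(2024, 1, 5),
     "support_date": date(2024, 1, 12), "false_positive": False},
    {"flagged": True, "contacted": False, "flag_date": date(2024, 2, 1),
     "support_date": None, "false_positive": True},
    {"flagged": False, "contacted": False, "flag_date": None,
     "support_date": None, "false_positive": False},
]

flagged = [c for c in cases if c["flagged"]]

share_flagged = len(flagged) / len(cases)                              # reach
share_contacted = sum(c["contacted"] for c in flagged) / len(flagged)  # responsiveness

supported = [c for c in flagged if c["support_date"]]
avg_days_to_support = sum(
    (c["support_date"] - c["flag_date"]).days for c in supported
) / len(supported)                                                     # responsiveness

false_positive_rate = sum(c["false_positive"] for c in flagged) / len(flagged)  # accuracy

print(f"Flagged: {share_flagged:.0%}, contacted: {share_contacted:.0%}, "
      f"avg days to support: {avg_days_to_support:.1f}, "
      f"false positives: {false_positive_rate:.0%}")
```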
How can customer vulnerability be reviewed and demonstrated as part of a firm’s annual review process?
How often you assess customers’ vulnerabilities depends on the product/service and the potential for poor outcomes. There’s no suggested fixed term in the rules. For most products, an annual check is a sensible starting point – more often for short-term loans, possibly at longer intervals for something like a fixed-term mortgage. Life events (bereavement, divorce, job loss, serious illness) should trigger a reassessment in between.
The underlying principle, set out in FG21/1 and reinforced by Consumer Duty in FG22/5, is that firms must monitor customers throughout the lifetime of the product or service. So it makes sense to maintain customers’ vulnerability data over the same period.
To show that it’s genuinely covered by your annual review (and not just claimed to be), a few things help. Assessments need to be a named step on the review agenda, not an informal check-in that may or may not happen. You need to use the same framework each year so changes are comparable. The data needs to sit in a system that can report on cohorts, not just on individuals. Out-of-cycle triggers – claims, complaints, life events – need to prompt reassessment between annual reviews. And the whole chain (assessment, mitigation, outcome) needs to be on file so you can demonstrate the process end to end.
Without those, ‘we check at annual review’ is easy to claim but hard to prove. The FCA’s multi-firm reviews have made clear that evidence matters as much as intent.
Would an annual, one-size-fits-all survey be enough, or should customers be surveyed at different stages of the journey?
Both, really. A single annual survey is a reasonable baseline, but it won’t catch things that happen in between – and vulnerabilities don’t wait for your review cycle.
The FCA expects firms to identify and respond to vulnerabilities throughout the product lifetime. In practice, that means building assessment into several points in the journey: onboarding, annual review, significant interactions (claims, complaints, arrears), and anything that looks like a trigger event. Each touchpoint surfaces different information. Onboarding gives you the baseline. Annual review catches drift. Claims and complaints are often the moments when vulnerability is at its most acute. Life events – many of which your systems will already know about from other customer contact – are prompts to check in.
A single annual survey, used alone, produces thin data and misses temporary vulnerabilities entirely. A layered approach – periodic structured assessment plus event-triggered reassessment – gives much better coverage, and it’s the approach firms doing well in this area tend to settle on.
Should we be assessing customers’ vulnerabilities more often than annually?
This depends on the product and the customer.
For long-term, low-risk products – such as a whole-of-life policy or a pension pot in accumulation – annual assessment with event-triggered reassessment is usually proportionate. For short-term, high-risk products – consumer credit, short-term loans, anything aimed at people who may already be in financial difficulty – you’ll want more touchpoints. The FCA’s work on Borrowers in Financial Difficulty and the expectations set out in the Consumer Credit Sourcebook set a higher bar for firms in that space.
A useful rule of thumb: if the product can cause significant harm quickly (a revolving credit facility, a high-cost short-term loan), assess more often than once a year. If the product moves slowly (a long-term savings plan), annual plus triggers is fine.
Two other things worth remembering. Reassessment doesn’t need to mean running a full questionnaire from scratch – an existing assessment can often be refreshed by checking for changes. The record should be updated whenever new information emerges, whether from the customer, from observed behaviour, or from a third party (with consent). ‘Annual’ is the floor, not the ceiling.
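One way to encode that rule of thumb, together with event triggers and the ‘annual is the floor’ principle – the product categories and intervals here are illustrative, not regulatory – is a simple cadence lookup:

```python
# Illustrative review intervals in months; calibrate to your own products and risk.
REVIEW_INTERVAL_MONTHS = {
    "high_cost_short_term_credit": 3,
    "revolving_credit": 3,
    "consumer_credit": 6,
    "mortgage": 12,
    "long_term_savings": 12,
}

# Events that should always prompt a reassessment, whatever the schedule says.
TRIGGER_EVENTS = {"bereavement", "divorce", "job_loss", "serious_illness",
                  "arrears", "complaint", "claim"}

def reassessment_due(product: str, months_since_review: int, events: set) -> bool:
    """Due when the scheduled interval has passed OR a trigger event has occurred."""
    interval = REVIEW_INTERVAL_MONTHS.get(product, 12)  # annual floor by default
    return months_since_review >= interval or bool(events & TRIGGER_EVENTS)

print(reassessment_due("revolving_credit", 2, {"complaint"}))  # True: trigger event
print(reassessment_due("mortgage", 6, set()))                  # False: not yet due
```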
How long should we keep a customer’s vulnerability data on file?
Two things govern this: what the FCA expects (monitoring over the lifetime of the product) and what GDPR allows (keep data only as long as needed for the purpose you collected it for).
Practically, while the product is in force, the data stays. It’s needed to treat the customer appropriately and to monitor outcomes over time. Once the relationship ends, your normal retention schedule kicks in – typically six years for most financial services records, longer for pensions and some advice. Your data protection officer will have the definitive answer for your firm.
Retain the severity and permanency markers, not just the vulnerability flag. A mild, temporary vulnerability three years ago isn’t the same as a severe, progressive one that’s been on file for five. Those distinctions matter for decisions you make later.
On the separate question of how often to recheck: tie reassessment to the product and the nature of the vulnerability. Something progressive (a neurodegenerative condition, say) requires a shorter review cycle. Something stable (colour blindness, for instance) doesn’t need frequent rechecking, though it should still stay on the record. Don’t auto-clear a flag just because time has passed – prompt a reassessment instead.
How should firms capture and report long-term and short-term vulnerabilities? How are short-term vulnerabilities ‘ended’ or reviewed?
Both need to be captured, and they shouldn’t be treated the same.
The FCA’s four drivers of vulnerability include life events – which are often short-term by nature – alongside health, resilience and capability, which can be short or long-term. So your framework needs to accommodate vulnerabilities that come and go, vulnerabilities that are stable, and vulnerabilities that are progressive.
For each characteristic, it helps to record what it is, its severity (on a consistent scale), whether it’s temporary, permanent or progressive, the interaction or product context, the mitigation put in place, and whether that mitigation was taken up and worked. With that information, you can tell one kind of vulnerability from another when it comes to reporting and monitoring.
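As a sketch of what such a record might hold – the enumeration and field names are ours for illustration, not a standard:

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Permanency(Enum):
    TEMPORARY = "temporary"
    PERMANENT = "permanent"
    PROGRESSIVE = "progressive"

@dataclass
class VulnerabilityCharacteristic:
    characteristic: str                   # e.g. "bereavement", "hearing loss"
    severity: int                         # consistent scale, e.g. 1 (mild) to 5 (severe)
    permanency: Permanency
    context: str                          # interaction or product it arose in
    mitigation: Optional[str]             # support put in place, if any
    mitigation_taken_up: Optional[bool]   # did the customer use it?
    mitigation_effective: Optional[bool]  # did it improve the outcome?
```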
Short-term vulnerabilities aren’t really ‘ended’ – they’re reassessed and marked as resolved, with the history kept. Someone who was vulnerable during bereavement isn’t necessarily vulnerable two years later, but the record of that period matters: it tells future colleagues what happened, and it’s evidence that the firm acted appropriately at the time. (And life events can be the same but have a different impact: the expected death of an elderly person at the end of a long terminal illness may affect someone differently to the unexpected death of a child.)
The customer journey after a short-term vulnerability depends on what the vulnerability was. Bereavement and divorce usually need a check-in at the next review to confirm things have settled. Temporary financial hardships need confirmation that the customer is back on stable ground before normal service resumes. Health episodes need a confirmed return to baseline. The system should prompt a reassessment rather than silently clearing a flag.
Reporting should split vulnerable and resilient cohorts, and within the vulnerable cohort should distinguish short-term from long-term where that matters for the outcome. If short-term vulnerable customers are consistently achieving worse outcomes than resilient ones, that’s worth investigating, regardless of whether the vulnerability has since resolved.
A product manufacturer only engages with distributors, and never sees the end investor. How can customer vulnerability be monitored?
Consumer Duty anticipates exactly this situation and is clear that manufacturers can’t duck their responsibility just because they don’t hold the customer relationship.
Under PRIN 2A and FG22/5, manufacturers are expected to understand the characteristics of their target market, including vulnerable cohorts; assess fair value for vulnerable customers as part of the product approval and review process; monitor outcomes across the distribution chain, including how vulnerable customers are faring; and have arrangements in place with distributors to receive the information needed to do all of that.
In practice, this means you will need contractual and data-sharing arrangements with your distributors. The distributor, who has the customer relationship, gathers the customers’ vulnerability information (or supports customers through their assessment). The manufacturer receives management information – aggregated where appropriate, with vulnerability cohort data broken out – to inform product governance, price and value assessments, and reporting under Consumer Duty.
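A minimal sketch of the kind of aggregation a distributor might run before sharing MI upstream – the cohort labels and fields are invented for illustration, and nothing identifying the customer leaves the distributor:

```python
# Illustrative per-customer records held by the distributor.
records = [
    {"cohort": "vulnerable", "outcome_ok": True},
    {"cohort": "vulnerable", "outcome_ok": False},
    {"cohort": "resilient", "outcome_ok": True},
    {"cohort": "resilient", "outcome_ok": True},
]

# Aggregate to cohort level: counts and outcome rates only, no personal data.
mi = {}
for cohort in ("vulnerable", "resilient"):
    group = [r for r in records if r["cohort"] == cohort]
    mi[cohort] = {
        "customers": len(group),
        "good_outcome_rate": sum(r["outcome_ok"] for r in group) / len(group),
    }

print(mi)  # shareable upstream for product governance and fair value assessments
```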
This is an area where many manufacturers have historically relied on the distributor doing everything and reporting back, and many distributors have assumed the manufacturer was happy with that. Consumer Duty has sharpened the requirement: both parties need to know what the other is doing, and gaps in the chain are nobody’s defence. The CII’s guidance on managing customer vulnerability makes a similar point about the need for clear allocation of responsibilities along the distribution chain.
Digital customer vulnerability platforms such as MARS were built partly with this in mind. The assessment is done once, with the customer, and then the data can be shared (with appropriate consent) between intermediary and manufacturer without either having to ask the customer again.
Once an adviser identifies a vulnerability, should that information be passed to the lender or provider? Should applications include a question about it?
Usually yes, where it’s relevant to what the lender or provider will deliver – but it needs the customer’s consent, it needs to go to the right people, and it needs to be at the right level of detail.
The FCA’s expectation under Consumer Duty is that enough information flows along the distribution chain for each firm to meet its obligations. A lender can’t treat a vulnerable customer appropriately if it doesn’t know they’re vulnerable. Equally, a firm shouldn’t be a conduit for sensitive personal data it doesn’t actually need. The test is whether the information is relevant to the service being provided – and whether the customer has given informed consent to it being shared.
Application forms can legitimately include questions about personal circumstances where those circumstances affect the product. The wording matters. Frame it around understanding the customer, not labelling them as vulnerable: ‘Is there anything about your circumstances we should know, so we can make sure this product works for you?’ works far better than any variation of: ‘Are you a vulnerable customer?’
The level of detail should match the need. A lender usually needs to know the characteristic and what adjustment it implies, not a full clinical history. Tiered data sharing – where different roles see different levels of detail – is a helpful pattern here and sits well with GDPR’s data-minimisation principle.
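A sketch of how that tiering might work in practice – the roles and field tiers are assumptions for illustration:

```python
# Full record held by the adviser; field names are illustrative.
record = {
    "adjustment_needed": "large-print documents",               # what the lender acts on
    "characteristic": "sight impairment",                       # underlying characteristic
    "clinical_detail": "macular degeneration, diagnosed 2021",  # rarely needed downstream
}

# Each role sees only the fields it needs (GDPR data minimisation).
VISIBLE_FIELDS = {
    "lender_operations": {"adjustment_needed"},
    "lender_vulnerability_team": {"adjustment_needed", "characteristic"},
    "adviser": {"adjustment_needed", "characteristic", "clinical_detail"},
}

def view_for(role: str) -> dict:
    return {k: v for k, v in record.items() if k in VISIBLE_FIELDS[role]}

print(view_for("lender_operations"))  # {'adjustment_needed': 'large-print documents'}
```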
We’re a small firm without the budget for user testing or panels. How can we test our communications to make sure the target market, including vulnerable customers, can understand them?
Consumer understanding is best tested by checking what customers actually took away from a communication, not by asking them in the abstract whether it was clear. A satisfaction survey typically asks how they felt about it. A comprehension check asks what they now know. They’re different things, and the second is what Consumer Duty is asking about.
A few low-cost options work well.
Follow-up comprehension surveys. A short, voluntary set of questions after a key communication: ‘Can you tell us, in your own words, what this letter was asking you to do?’ or ‘What would happen if you didn’t respond?’ Compare answers between vulnerable and resilient cohorts and you’ll quickly see where understanding is breaking down.
Analysis of inbound contact. What are customers phoning or emailing about after a mailing goes out? Their questions are a clue that something wasn’t clear. Complaints are even more direct. This is free data you’re already generating – just make sure someone’s looking at it through the Consumer Duty/vulnerable customer lens.
Readability tools. Free options (Flesch-Kincaid, Hemingway, the Plain English Campaign’s guidance) won’t replace proper user testing, but they’ll catch communications that are objectively hard to read. A reading age between 11 and 13 is the benchmark most UK sources cite for general customer-facing material – and bear in mind that the average reading age of UK adults is lower, often put at around 9–11, and that over 6 million adults in England (roughly 1 in 6) have very poor literacy skills, so writing to a lower reading age only widens accessibility. (A rough automated check is sketched below.)
The teach-back test. Ask colleagues who weren’t involved in writing something to read it and explain it back to you. If they can’t, customers won’t either. Pick people outside the specialist team – ideally someone closer to your typical customer.
Using current customers as an informal panel. A handful of willing customers, invited now and then to give feedback on a new communication, can give you much of what a formal panel would. Less structured, but it’s real.
Accessibility checks. Are your documents screen-reader friendly? Are colours accessible? Free tools (WAVE, axe) will flag the obvious issues in minutes.
None of these on its own is as good as a proper user-research programme – but, layered together, they’ll take a small firm a long way, and the main cost is time.
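As an example of the kind of automated check that costs nothing to run, here’s a rough Flesch-Kincaid grade calculation – the syllable counter is a crude heuristic, so treat the output as a signal, not a verdict:

```python
import re

def count_syllables(word: str) -> int:
    """Crude heuristic: count vowel groups; good enough for a rough signal."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text: str) -> float:
    """US school grade level; add roughly 5 to approximate a UK reading age."""
    words = re.findall(r"[A-Za-z']+", text)
    if not words:
        return 0.0
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * (len(words) / sentences) + 11.8 * (syllables / len(words)) - 15.59

letter = ("Your policy will lapse unless we receive your payment by the due date. "
          "Please contact us if you are unable to pay.")
print(f"Grade: {flesch_kincaid_grade(letter):.1f}")
```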
How do you evidence that you’re meeting the ‘consumer understanding’ outcome?
Consumer understanding is one of the four Consumer Duty outcomes, and it’s the one firms most often struggle to evidence – partly because understanding is harder to measure than price or service uptime.
The FCA’s expectation, set out in FG22/5 and reinforced in its multi-firm reviews, is that communications support customers in making effective, timely and properly informed decisions. That’s the standard you’re evidencing against. Not whether the communication was sent, read or clicked – but whether it did its job.
Useful evidence falls into a few categories.
Pre-issue evidence. Readability scores, accessibility audits, plain English review, and any testing done before the communication goes out. The testing doesn’t have to be expensive, but you should have some record that you tried.
Behavioural evidence. What did customers do after they got the communication? Did they take the action you wanted? Did they respond on time? If you asked for something specific and most didn’t, something’s off. Response rates, correct-action rates and time-to-respond are all measurable and all useful.
Direct comprehension evidence. Post-interaction surveys, periodic comprehension check-ins, and occasional deep-dive research – short, voluntary, and segmented by cohort where appropriate. You’re looking for signals that vulnerable customers understand at similar rates to resilient ones.
Inbound evidence. What are customers asking about after a mailing? Contact volumes and reasons tell you what wasn’t clear. A spike in ‘what does this mean?’ calls is a consumer-understanding failure, not a contact-centre success.
Complaints and error data. Complaints that boil down to ‘I didn’t realise’ or ‘I thought this meant something else’ are direct evidence that understanding fell short. Systematic errors – customers choosing the wrong option, missing deadlines they’d have met if they’d understood – point the same way.
Cohort comparison. Across all of the above, the core test under Consumer Duty is whether vulnerable customers are getting outcomes comparable to resilient ones. If response rates, comprehension scores or complaint rates differ meaningfully by cohort, that’s a clear signal that communications aren’t landing for everyone – and you’ve got the evidence either way. (A minimal sketch of such a comparison follows at the end of this answer.)
Governance evidence. Board packs showing consumer-understanding management information, documented changes to communications in response to cohort data, training records, and review cycles. The FCA wants to see not just the data, but that you’re using it.
The underlying principle is simple: outcomes, not outputs. ‘We sent the letter’ isn’t evidence of understanding. ‘We sent the letter, X% of customers did the right thing in response, and vulnerable customers were within a few points of resilient ones’ is.
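Picking up the cohort comparison above, here’s a minimal sketch of the kind of check a firm might run – the figures, field names and tolerance are invented for illustration:

```python
# Illustrative correct-action data by cohort after a key communication.
outcomes = {
    "vulnerable": {"sent": 400, "correct_action": 280},
    "resilient":  {"sent": 1600, "correct_action": 1248},
}

rates = {c: d["correct_action"] / d["sent"] for c, d in outcomes.items()}
gap = rates["resilient"] - rates["vulnerable"]

print(f"Resilient: {rates['resilient']:.0%}, vulnerable: {rates['vulnerable']:.0%}, "
      f"gap: {gap:.0%}")

# A persistent gap beyond your stated tolerance (say, 5 percentage points) is the
# signal to investigate the communication, not the customers.
if gap > 0.05:
    print("Gap exceeds tolerance - investigate, fix, and document the response.")
```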