Customer vulnerability FAQs: privacy and GDPR

How do privacy and UK GDPR apply to firms managing vulnerable customers?

Data protection is one of the most common sources of confusion in customer vulnerability management. A widespread belief that UK GDPR prevents firms from recording vulnerability information has led some to collect less data than Consumer Duty actually requires – which tends to leave firms falling short of both regimes at once. The FCA and the ICO have been clear that this isn’t the case. These questions, drawn from an industry Q&A, work through the practicalities: when consent is needed, when it isn’t, how to handle special category data, and how to share information across the distribution chain.

Do you need explicit consent to record vulnerability?

Usually yes, but not always – and the exceptions matter.

Vulnerability data is almost always special category data under UK GDPR. Even where an individual data point doesn’t technically meet that threshold, the CII’s 2025 customer vulnerability guidance recommends treating all vulnerability data as special category on a precautionary basis, because so much of it either contains health information or implies it. A note that a customer uses sign language implies a hearing impairment; a request for a wheelchair-accessible venue implies a mobility impairment. ‘Implied health data’ is still treated as health data under UK GDPR.

For special category data, Article 9 of UK GDPR sets out the conditions which allow processing. Explicit consent (Article 9(2)(a)) is the default for most financial services firms, because it’s the most transparent, it builds trust, it fits naturally with the consent most firms already obtain at onboarding, and the Information Commissioner’s Office (ICO) treats it as a strong lawful basis for this kind of processing. The joint FCA and ICO statement on regulatory expectations regarding firms’ approaches to customer vulnerability-related data confirmed that processing vulnerability data is lawful and supported when done for legitimate purposes – so firms should not be using GDPR as an excuse to under-record.

That said, there are genuine scenarios where explicit consent can’t be obtained. A customer who lacks capacity, a life-threatening emergency, a third party disclosing on the customer’s behalf, a safeguarding situation where seeking consent would itself prejudice the support – all of these have alternative routes in Article 9 and the Data Protection Act 2018. 

So: explicit consent as default, alternative bases as genuine exceptions, with all of it documented properly.

What counts as ‘explicit consent’ when recording a customer’s vulnerability?

Explicit consent has a specific meaning under UK GDPR. It needs to be:

  • A clear, affirmative action. The customer actively says yes – ticking an unticked box, signing, clicking ‘accept’, or providing a verbal confirmation that’s recorded. Silence, pre-ticked boxes, and ‘continuing to use our service’ don’t count.

  • Informed. Before consenting, the customer needs to know who’s processing their data, what’s being processed, what it’ll be used for, who it might be shared with, and how long it’ll be kept. They also need to know they can withdraw consent at any time.

  • Specific. Consent needs to be for a defined purpose. A general ‘we might use your data for various things’ doesn’t meet the bar. Consent to process customer vulnerability data should be specific and identifiable, not bundled into general terms and conditions.

  • Voluntary. The customer must be able to refuse without losing access to the service. Consent that’s a condition of receiving a product isn’t freely given, and therefore isn’t valid. That’s why some forms of processing – insurance underwriting, for instance – rely on alternative bases like the ‘insurance’ substantial public interest condition (Article 9(2)(g) and Schedule 1 Condition 20 of the DPA 2018).

  • Revocable. Customers need to be able to withdraw consent easily, and the firm needs a process to stop processing (and, where relevant, delete the data) when they do.

  • Documented. Who consented, when, how, and what they were told. Firms should record that metadata as standard. Alongside that, customer vulnerability systems must be able to produce the data on request (for a subject access request), delete it on valid request, and restrict access to the right people at the right level of detail – more on tiered access below.

A reasonable rule of thumb: if the customer wouldn’t be able to describe, in their own words, what they’ve agreed to, the consent probably isn’t strong enough.
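The ‘documented’ requirement above – who consented, when, how, what they were told, and whether consent has since been withdrawn – can be sketched as a minimal record structure. This is an illustration only; the field names and class are hypothetical, not a standard or a vendor schema.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class ConsentRecord:
    """Hypothetical consent metadata record: who consented, when, how,
    what they were told, and whether consent has been withdrawn."""
    customer_id: str
    obtained_at: datetime
    method: str                   # e.g. 'signed form', 'recorded verbal confirmation'
    purpose: str                  # the specific purpose consented to
    information_given: list[str]  # what the customer was told beforehand
    withdrawn_at: Optional[datetime] = None

    def withdraw(self) -> None:
        """Record withdrawal; processing must stop from this point."""
        self.withdrawn_at = datetime.now(timezone.utc)

    def is_active(self) -> bool:
        return self.withdrawn_at is None
```

The point of holding this as structured metadata rather than a free-text note is that withdrawal, subject access requests and audits can then be handled systematically rather than by hunting through records.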

Is explicit consent the only lawful basis for processing vulnerability data?

No. Explicit consent is the recommended default, but several alternatives exist for situations where consent isn’t workable.

Under Article 9 of UK GDPR, the main alternatives are vital interests (life-or-death), employment/social protection law, legal claims, and substantial public interest. The Data Protection Act 2018 expands the substantial public interest condition into 23 specific grounds in Schedule 1, of which the ones most relevant to financial services are:

  • Condition 12: processing necessary to comply with a regulatory requirement, where consent can’t reasonably be obtained.

  • Condition 18: safeguarding of children and individuals ‘at risk’.

  • Condition 19: safeguarding the economic wellbeing of individuals at economic risk.

  • Condition 20: insurance purposes (underwriting, claims, administration).

Legitimate interests (Article 6(1)(f)) is sometimes mentioned, but it doesn’t work on its own for special category data. It always has to be paired with an Article 9 condition – so for customer vulnerability data it rarely simplifies anything.

Each alternative basis has specific conditions, limitations, and documentation requirements. The CII’s 2025 customer vulnerability guidance sets them out in detail; firms should consult their data protection officer before relying on any of them routinely.

The underlying point: consent is the default because it’s the cleanest, most transparent basis. Alternatives exist for genuine edge cases; they’re not there to routinely let firms avoid the consent conversation.

How are firms handling cases where they don’t have explicit consent?

Carefully – by using one of the alternative Article 9 conditions, documenting the decision, and moving back to consent as soon as possible.

A few of the alternative bases come up regularly in practice:

  • Vital interests (Article 9(2)(c)) covers life-or-death situations where the customer can’t consent. A concerned family member phoning about an imminent suicide attempt, a customer unresponsive in a situation where their life is at risk. This can only be used where another basis isn’t available and the customer is genuinely unable to consent – not where they’ve refused.

  • Substantial public interest, safeguarding (Schedule 1 Conditions 18 and 19 of the DPA 2018) cover situations where recording the data is necessary to protect a person from harm or to protect the economic well-being of someone at risk, and seeking consent either can’t be done or would prejudice the protection. A customer in acute distress about to make an irreversible financial decision; suspected domestic abuse where approaching the customer directly could put them at greater risk. These conditions are narrower than they sound – they apply only where one of the following holds: the individual can’t consent; the firm can’t reasonably be expected to obtain consent; or seeking consent would prejudice the protection.

  • Substantial public interest, insurance (Schedule 1 Condition 20) allows insurers, reinsurers and brokers to process special category data where it’s necessary for an insurance purpose – underwriting, claims handling, administering a contract. It exists because insurance can’t really function without health and other sensitive data, and consent that’s a condition of service isn’t freely given.

  • Substantial public interest, regulatory (Schedule 1 Condition 12) covers processing necessary for regulatory compliance, where consent can’t reasonably be obtained. Used carefully, this can cover situations where repeated contact attempts have failed but the firm still needs to record vulnerability to meet its Consumer Duty obligations.

Whichever alternative basis is used, firms need to document which one was relied on, why, when, and by whom, and set a review date to move back to consent where possible. The practical discipline is: alternative bases are genuine exceptions, not shortcuts, and the file needs to show why one was needed.

How should we record special category data if we’re worried about a data breach?

The starting point is that worry about breaches doesn’t justify not recording the data – it justifies recording it properly. Under-recording to avoid risk isn’t compliant with either Consumer Duty or UK GDPR, and it leaves customers unsupported. The Integrity and Confidentiality Principle (Article 5(1)(f)) sets the standard, and a few practical measures carry most of the weight.

  • Tiered access. Not everyone who interacts with a customer needs to see the full detail of their vulnerability. A reasonable model has multiple levels: a top-level flag (vulnerable yes/no) visible widely, a support-need description (for example, ‘ensure carer present’) visible to anyone likely to encounter the customer, a category (health, financial, life event, capability), and the full detail accessible only to staff directly supporting the customer. Many data breaches involve access that shouldn’t have existed in the first place – tiered access reduces the exposed ‘surface area’.

  • Structured fields, not free text. Free-text notes are harder to report on, harder to anonymise for cohort analysis, and more likely to contain staff opinions that shouldn’t be recorded. Dropdown lists, categories and severity ratings keep the data disciplined.

  • Encryption and role-based access controls. Standard good practice – data encrypted at rest and in transit, access granted by role, logged and auditable.

  • Reputable systems and providers. ISO 27001 or Cyber Essentials certification and regular security audits should be contractual requirements on any third-party processor. UK GDPR requires ‘sufficient guarantees’ from processors.

  • Centralisation. Customers’ vulnerability data, scattered across call recordings, email chains, case management notes and spreadsheets, is far harder to secure than a single, dedicated customer vulnerability record. The more places the data lives, the more exposure points you have.

  • Backups. Regular, encrypted, tested. And if a customer exercises their right to erasure, you need a process to handle the data in backups too – the ICO accepts ‘putting it beyond use’ as an interim measure until it can be properly deleted.

A breach of well-governed vulnerability data, held under appropriate controls, is a serious event but a manageable one. A breach of poorly-governed data spread across fifteen systems with no access controls is a catastrophe. Reduce the second risk by investing in the first.
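The tiered-access and structured-field points above can be sketched together as a single record with per-tier views. This is a minimal illustration, not a reference design; the class, the tier names and the example values are all assumptions.

```python
from dataclasses import dataclass
from enum import IntEnum

class AccessTier(IntEnum):
    """Illustrative access levels, broadest to narrowest."""
    FLAG = 1      # vulnerable yes/no - widely visible
    SUPPORT = 2   # support-need description
    CATEGORY = 3  # health / financial / life event / capability
    DETAIL = 4    # full detail - direct support staff only

@dataclass
class VulnerabilityRecord:
    """Structured fields rather than free text, so the record can be
    reported on, anonymised for cohort analysis, and access-controlled."""
    customer_id: str
    flag: bool
    support_need: str  # e.g. 'ensure carer present'
    category: str      # health / financial / life event / capability
    detail: str        # most sensitive tier

    def view(self, tier: AccessTier) -> dict:
        """Return only the fields visible at the caller's access tier."""
        out = {"customer_id": self.customer_id, "flag": self.flag}
        if tier >= AccessTier.SUPPORT:
            out["support_need"] = self.support_need
        if tier >= AccessTier.CATEGORY:
            out["category"] = self.category
        if tier >= AccessTier.DETAIL:
            out["detail"] = self.detail
        return out
```

In a real system the tier would come from role-based access controls and every `view` call would be logged; the sketch shows only the shape of the idea – one record, several levels of visibility.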

If we use a code or score for each vulnerable characteristic – without recording personal details – do we still need explicit consent?

Yes, in almost every realistic case.

The instinct behind the question is sound: less identifiable data means less risk. But ‘VUL-MH’ against a named customer record is still personal data, and it’s still special category data if it reveals or implies health information. The code is only a label for the underlying fact, and UK GDPR looks at what the data reveals, not how it’s stored.

Some situations do reduce the obligation. Properly anonymised data – where the individual genuinely can’t be re-identified, even by combining it with other information the firm holds – isn’t personal data any more, and sits outside UK GDPR. That’s the basis on which aggregated cohort reporting works, and it’s why the CII’s customer vulnerability guidance recommends anonymising or pseudonymising data used for product-level analysis. Pseudonymisation (replacing identifiers with codes, but keeping the key to reverse it) reduces risk but doesn’t take the data outside UK GDPR, because it’s still identifiable with the key.
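The distinction between pseudonymisation and anonymisation can be made concrete with a small sketch. Both functions below are illustrative (the record layout is an assumption): the first keeps a key that can reverse the tokens, so its output is still personal data; the second produces identifier-free cohort counts, which is the kind of output that can sit outside UK GDPR – provided individuals genuinely can’t be re-identified (very small cohorts, for instance, may still be identifying).

```python
import secrets

def pseudonymise(records: list[dict], key_map: dict) -> list[dict]:
    """Replace identifiers with random tokens. key_map retains the link
    back to the customer, so the output is still personal data."""
    out = []
    for rec in records:
        token = secrets.token_hex(8)
        key_map[token] = rec["customer_id"]
        out.append({"token": token, "category": rec["category"]})
    return out

def cohort_counts(records: list[dict]) -> dict:
    """Aggregate counts with no identifiers at all - the shape of
    anonymised cohort reporting."""
    counts: dict = {}
    for rec in records:
        counts[rec["category"]] = counts.get(rec["category"], 0) + 1
    return counts
```

The practical takeaway matches the text: pseudonymised data still needs a lawful basis because the key exists; only genuinely irreversible aggregation escapes the regime.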

For live customer records – the ones staff actually use to deliver support – the data is inherently identifiable, because the whole point is to tie it to a named customer. Coding the characteristic doesn’t change that, and consent (or an alternative Article 9 condition) is still needed.

One practical middle ground: tiered access. Most staff see only a high-level flag or code; the underlying detail is visible only to those directly supporting the customer. That’s a sensible way to apply the data-minimisation principle internally. But the record in full, held against the customer, still needs a lawful basis.

What if we record the action required, not the vulnerability itself – do we still need consent?

Usually yes. Where the action implies the underlying characteristic (and in most cases it will), the record is still special category data for GDPR purposes.

‘Send large print’ implies a visual impairment. ‘Arrange wheelchair access’ implies a mobility impairment. ‘Verbal confirmation before completing transaction’ often implies a cognitive impairment. Under UK GDPR, data that implies health or disability is treated as health data, and needs an Article 9 condition just as an explicit health note would.

Beyond the legal position, there’s a practical argument for recording the characteristic alongside the need. A ‘send large print’ flag doesn’t tell a future colleague why it’s there, whether it’s likely to change, or whether related adjustments might help. Someone with early macular degeneration, someone with dyslexia, and someone with a recent eye injury all might need large print – but the appropriate overall service looks different for each. Recording only the action gives you an operationally useful flag and loses everything else.

A common pattern in well-run firms is to record both – the characteristic (with severity and permanence) for context, and the specific support needs for action, at different tiers of access. That approach gets the benefits of each and handles the GDPR obligations cleanly.

What if a customer doesn’t consent but we’re fairly sure they’re vulnerable in a way that needs action?

This is common, and it needs to be handled with care. Most of the time, the answer is to keep trying to engage, document what you know and what you’ve offered, and act on what you can.

If the customer is of sound mind and has declined to engage with an assessment or to confirm a vulnerability, you can’t rely on vital interests (that requires that they can’t consent, not that they won’t). You can record the circumstances that led you to be concerned – a conversation that raised flags, a pattern of behaviour – as factual observations, without attaching an unverified diagnosis to the customer’s record. The ICO’s guidance is clear that records should be factual, objective and defensible; a firm must never record its own diagnosis of the customer.

A few scenarios where the position shifts:

  • Genuine safety risk. If there’s a credible concern about harm – to the customer or to others – Schedule 1 Conditions 18 or 19 may allow processing without consent where seeking it would prejudice the protection. This is a high bar and needs to be documented carefully.

  • Third-party disclosure. A family member or carer discloses information. You can record what they’ve told you (noting the source), but you should normally check for authority – a power of attorney or equivalent – and, where safe, approach the customer to confirm. The CII’s customer vulnerability guidance gives a useful worked example of this.

  • Inferred vulnerability from behavioural or transactional data. You can flag this as ‘inferred, not verified’ and use it to prioritise outreach, but you shouldn’t record it as factual vulnerability data without confirming it with the customer.

In all cases, document what you know, what you offered, what the customer said, and the basis on which you acted. Consumer Duty doesn’t require firms to achieve the impossible – it requires them to act reasonably and evidence that they did.

What if someone can’t give consent – perhaps because of a condition that prevents it, like dementia?

Where a customer genuinely lacks capacity to consent, the firm still has obligations – and has options.

The specific lawful basis depends on the situation. Where there’s an immediate threat to life, vital interests (Article 9(2)(c)) applies. Where the processing is necessary for regulatory compliance and consent can’t reasonably be obtained, the regulatory substantial public interest condition (Schedule 1 Condition 12) may apply. Where safeguarding is at issue, Conditions 18 or 19 come into play. Where the firm is an insurer and the processing is necessary for an insurance purpose, Condition 20 applies.

Alongside the legal basis, the operational question is who can act on the customer’s behalf. A registered lasting power of attorney is the cleanest route – the attorney can give consent for the data subject. Where no formal authority exists, engaging a trusted family member or carer is often appropriate, but the firm should record who they are and what authority (if any) they hold, and be aware that a family member disclosing information isn’t the same as the customer consenting to it.

For degenerative conditions where capacity is expected to decline over time – dementia, progressive neurological conditions – the best approach is to have the conversation while capacity is intact. Explain what the firm would record, why, how it would be used, and who else might need to be involved. A consent given proactively, before capacity is lost, carries the customer’s own wishes through the whole arc of the condition.

For customers where capacity is genuinely in doubt on the day, an independent assessment – for instance by a qualified nurse – can remove the subjectivity and give the firm a clear position on which basis to rely on.

What if someone doesn’t give consent but we’re sure they’re at risk?

Safety trumps everything else.

If there’s a credible, immediate risk to life – a customer expressing suicidal intent, a disclosure suggesting imminent violence – firms should act, and the regulators expect them to. Vital interests (Article 9(2)(c)) allows processing without consent where someone’s life is at stake and they can’t consent. Where the situation is serious but not immediately life-threatening, the safeguarding conditions (Schedule 1 Conditions 18 and 19) cover cases where recording and acting on the data is necessary to protect the person from harm.

In genuine risk-to-life situations, the firm should take appropriate protective action – which can include contacting the emergency services, regardless of the customer’s wishes. This is one of the specific circumstances where the duty of care is unambiguous.

Once the immediate risk has passed, firms should move on to a more sustainable basis. Where possible, explain to the customer what was recorded and why, and seek consent retrospectively. Where consent can’t be obtained, transition to an appropriate ongoing basis (safeguarding, regulatory, or similar) and document the reasoning.

Document all of it: the concern, the action taken, the basis used, the decisions made, the follow-up. This is as much for the customer’s protection as for the firm’s – and in a regulatory review, the chain of decisions will be what’s examined.

A family member phones saying our customer is vulnerable. What do we do?

This comes up a lot, and the right approach balances genuine helpfulness with some important safeguards.

  • Start with gratitude. The family member is usually trying to help. Thank them, listen, and take what they say seriously.

  • Check for authority. Does the caller have a lasting power of attorney or other formal authority to act for the customer? If so, they can engage on the customer’s behalf, including giving or refusing consent. If not, they’re a concerned third party – their input is valuable but doesn’t replace the customer’s own say.

  • Gather the relevant information. What’s prompting the concern? Is there an immediate risk, or is this about ongoing support? Is the customer contactable? Is it safe to contact them directly, or is there a safeguarding concern that means approaching the customer could make things worse (for example, suspected coercive control)?

  • Record the disclosure as a disclosure. The family member’s information is a factual observation you can record, noting the source. What you shouldn’t do is treat the caller’s statement as a confirmed diagnosis on the customer’s record. ‘Partner states customer has been struggling with depression since December’ is fine. ‘Customer has depression’ – without verification – isn’t, unless you have another basis for recording it.

  • Decide the lawful basis. If the customer can be approached safely, the normal route is to reach out, explain what’s been raised, and ask for their consent to record and act on it. If the customer can’t be reached, can’t consent, or if approaching them would prejudice their safety, the relevant Article 9 condition applies – vital interests for immediate life-or-death cases, Schedule 1 Conditions 18 or 19 for safeguarding or economic wellbeing cases, or Condition 12 for cases where regulatory compliance is necessary and consent can't reasonably be obtained.

  • Act proportionately. Take protective action appropriate to the situation – a payment break, a communication preference change, a flag on the account, an onward referral to specialist support services. The CII’s guidance has a useful worked example: a partner phoning to say the customer is unwell and can’t make a payment, leading to a recorded three-month payment break, economic wellbeing as the lawful basis, and a follow-up letter to the customer once it’s safe to contact them.

  • Follow up with the customer. Once it’s appropriate, let the customer know what’s been recorded and what actions were taken, so they can correct anything inaccurate and exercise their data rights. This is a transparency obligation, and it’s also good practice – the customer needs to know what’s on file.

  • Document everything. Who called, when, what they said, what authority (if any) they had, what lawful basis you relied on, what you did, and when you followed up with the customer. This chain of evidence matters – both for compliance and for the quality of future interactions with the customer.
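The ‘record the disclosure as a disclosure’ discipline above lends itself to a structured record that keeps the source, the caller’s authority and the verification status explicit. This is a hypothetical sketch – the class and field names are illustrative, not a prescribed format.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ThirdPartyDisclosure:
    """Illustrative record of a third-party disclosure: what was said,
    by whom, with what authority - held as a disclosure, not as a
    confirmed fact about the customer."""
    customer_id: str
    source: str                    # e.g. 'partner, phone call'
    statement: str                 # what the caller said, verbatim where possible
    authority: Optional[str]       # e.g. 'lasting power of attorney', or None
    verified_with_customer: bool = False
    lawful_basis: str = "pending"  # set once the basis is decided

    def summary(self) -> str:
        status = "verified" if self.verified_with_customer else "unverified"
        return f"{self.source} states: {self.statement} ({status})"
```

Keeping verification status as an explicit field means the record can never silently drift from ‘partner states customer has depression’ into ‘customer has depression’ – the distinction the text insists on.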

Should creditors share vulnerable customers’ information when passing accounts to a debt collection agency?

In most cases yes, with the customer’s knowledge and appropriate consent, and there’s a strong argument that it’s the right thing to do.

Passing an account to a debt collection agency without transferring what the firm already knows about the customer’s vulnerability forces the customer to disclose all over again, often in a more stressful context, and sometimes to staff less well-equipped to handle it. That repetition is one of the specific harms FG21/1 and Consumer Duty were designed to prevent.

The FCA doesn’t prescribe how data sharing should work between firms in the distribution chain, but the Duty’s expectation that outcomes are monitored across the chain – and that vulnerable customers receive comparable outcomes to resilient ones – all but requires the information to flow. The CII’s 2025 customer vulnerability guidance takes the same line: vulnerability data should be shared across the distribution chain to avoid harmful repetition, improve efficiency, and enable cohort-level outcomes monitoring.

In practice, the cleanest approach is to make data sharing part of the customer journey from the start: explain it at the point of disclosure, obtain consent to share with downstream parties, and capture the data in a consistent, structured format that can actually travel between firms. Free-text notes buried in a CRM aren’t really shareable; structured vulnerability data is.

Two other points worth flagging. Utilities companies already share customer vulnerability data through Priority Services Registers under the substantial public interest condition – a model the FCA is taking interest in through its work with the UK Regulators Network (UKRN), since the customers are often the same people. And the Data (Use and Access) Act 2025 is introducing Smart Data Schemes that may, in time, provide a more formal framework for cross-sector data sharing with consent. That direction of travel is clear, but full implementation will take some years.

What do you think about firms sharing customers’ vulnerability information with each other more generally?

Sharing is, on balance, a good thing for customers and a good thing for compliance – when it’s done properly.

The benefits are substantial:

  • Customers don’t have to repeat sensitive information in painful moments. 

  • Firms along the distribution chain can deliver consistent, well-informed support. 

  • Outcomes across the chain can actually be measured, which is what Consumer Duty requires. 

  • Patterns that no single firm would see – cohort-level friction points, common areas of harm – become visible when data aggregates.

The concerns are real but manageable. Customers worry about their information being spread more widely than they intended. Firms worry about liability for onward use. And GDPR requires a lawful basis, a clear purpose, and appropriate safeguards. None of these is a reason not to share; they’re reasons to share well.

Practically, that means:

  • Sharing should be with the customer’s explicit consent, covering who the data goes to, what for, and how long.

  • The data should be structured so it can be shared meaningfully – not free text, not siloed in one firm’s system.

  • Tiered access should apply across firms as well as within them: a downstream processor doesn’t need to see everything, just what’s relevant to their role.

  • Appropriate agreements (data sharing, processor) should be in place with contractual safeguards.

  • Customers should be able to see what’s been shared and exercise their rights – access, correction, erasure – through any of the firms holding the data.

The direction of regulation and industry practice is clearly towards more sharing, not less. The firms getting ahead of this now – building the data structures and consent mechanisms that make it possible – will find the next few years considerably easier than those still holding the data in free-text notes on individual product systems.
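The practical points above – consent covering who the data goes to, and tiered access applying across firms as well as within them – combine into a simple gate before anything leaves the building. The function below is an assumption-laden sketch: `consent` maps each consented recipient to the fields the customer agreed to share, and nothing else travels.

```python
def prepare_share(record: dict, recipient: str, consent: dict) -> dict:
    """Return only the fields the customer has consented to share with
    this recipient; refuse entirely if no consent covers them.
    'consent' maps recipient name -> list of permitted fields."""
    permitted = consent.get(recipient)
    if permitted is None:
        raise PermissionError(f"no consent to share with {recipient}")
    return {k: v for k, v in record.items() if k in permitted}
```

A downstream processor then receives a minimal structured payload rather than a copy of the full record – which is both the data-minimisation principle and the tiered-access idea applied at the firm boundary.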

A customer has disclosed a mental health condition. Can we tell their insurer, adviser, or lender?

Usually yes, with the customer’s consent and appropriate safeguards. Sometimes no, where sharing would not serve the customer’s interest or where consent can’t be obtained.

Mental health information is special category data under UK GDPR, so sharing it needs an Article 9 condition. In practice, for sharing across the distribution chain, explicit consent is almost always the right basis – the customer understands what’s being shared, with whom, and for what purpose, and retains control.

A few practical considerations.

  • The sharing has to have a purpose from which the customer benefits. Letting an insurer know about a mental health condition so they can make reasonable adjustments, give appropriate support, or handle a claim sensitively is a legitimate purpose. Sharing the same information with a marketing team, or with a firm that has no need to know, isn’t – and would likely breach purpose limitation even with consent.

  • Share at the right level of detail. A tiered approach works well here. The insurer’s front-line team may only need to know that the customer has a support need and the form it takes (for example, ‘needs simplified communications, prefers email over phone’). A specialist handling a specific case may need more. The full clinical detail is rarely relevant to anyone outside the immediate support context, and shouldn’t travel further than it needs to.

  • Use structured data, not free text. A diagnosis buried in a conversation note is hard to share meaningfully, easy to misinterpret, and hard to redact. A structured record – characteristic, severity, support needs – is designed to travel.

  • Put proper agreements in place. Where sharing is routine (for example, between an intermediary and their usual manufacturers), data sharing agreements with appropriate safeguards are expected. Where sharing is occasional, a clear audit trail and documented consent go a long way.

  • Check the customer’s own view. Some customers are comfortable sharing; others aren’t. Some will be happy for the broker to know but not the product provider, or vice versa. Respecting granular preferences is good practice and builds trust.

  • In the absence of a basis, do not share. If the customer hasn’t consented and no Article 9 condition applies, you can’t share, however well-intentioned the sharing would be. Record what you know, deliver what you can from your own side, and continue to engage the customer on the question of wider sharing.

Where it’s done well, information flow across the distribution chain saves the customer from repeated disclosure, improves outcomes, and supports the cohort-level monitoring Consumer Duty requires. Where it’s done badly or without a basis, it creates real harm and real compliance risk. The CII’s 2025 customer vulnerability guidance is explicit that sharing – done well – is the direction of travel; the work is in making sure the foundations are right.

Can we record that a customer has died, and share that with other firms?

Yes – and you should, for almost every practical purpose.

UK GDPR applies only to living individuals – the Data Protection Act 2018 defines personal data as information relating to an identified or identifiable living individual. Once a customer has died, data about them is no longer ‘personal data’ for GDPR purposes, so the day-to-day obligations around consent, lawful basis, access rights, and so on don’t apply to that data in the same way. Firms can record the death, share it with other parts of their own business, and share it with other firms in the distribution chain as needed to administer the account.

That said, a few things are worth being careful about.

  • The data is often still sensitive. Even if it isn’t technically personal data any more, records of a deceased customer will often contain information about living relatives, beneficiaries, executors, and third parties – all of whom are still protected by UK GDPR. A deceased customer’s record needs to be handled with continued care, because the data around them is still live.

  • Confidentiality duties may survive death. Common law confidentiality, professional obligations (for example, for advisers), and specific statutory provisions (for example, for medical data) can continue to apply after death. The CII’s customer vulnerability guidance notes that firms should consult their data protection officer on specific cases where this might bite.

  • Relatives and the deceased’s estate have their own rights. They aren’t the deceased’s data subject rights (which end at death), but they have their own relationships with the firm, and the way you communicate with them is subject to the usual rules. Handle estate correspondence with care – it’s often the first point of contact with people who are themselves grieving and therefore vulnerable.

  • Bereavement is itself a life event. If you learn of the death through a conversation with a surviving partner or family member, consider whether their circumstances (bereavement, possible financial strain, potential capacity issues) need to be recorded and acted on in the normal way, with their consent.

  • Sharing with other firms in the distribution chain makes sense. A deceased customer shouldn’t keep receiving marketing communications, renewal notices or payment reminders. Proactively notifying the manufacturer, the adviser, the insurer or the pension provider that the customer has died prevents real and avoidable distress for the estate and surviving family. This kind of sharing is legitimate and expected.

  • Industry registers exist. In financial services, there are established mechanisms for flagging deaths (The Bereavement Register, deceased flags in credit reference data, and similar). Using them reduces the frequency with which estates receive unwanted contact from firms that don’t know about the death.

The practical direction: record the death promptly, mark it clearly in the customer record, share it with the parts of the business and with other firms that need to know, and extend the same care to the people around the deceased that you would extend to any other vulnerable customer.

What do we do if a customer says our record of their vulnerability is wrong?

Take it seriously, investigate, and correct it if they’re right. UK GDPR Article 16 gives customers the right to rectification – and for vulnerability data specifically, getting it right matters more than for most categories of data, because incorrect records can lead to inappropriate treatment.

The process is straightforward.

  • Listen to the challenge carefully. Is the customer saying the underlying fact is wrong (‘I don’t have that condition’), the severity is wrong (‘that’s outdated – I’m fine now’), or the support need is wrong (‘that adjustment doesn’t work for me’)? Different challenges call for different responses.

  • Investigate without assuming. The customer is often right. Records can be inaccurate for all sorts of reasons – mis-entry, out-of-date information, inferred data that turned out not to match reality, a note made from a conversation that the customer remembers differently. Check the source and the metadata (who recorded it, when, on what basis).

  • Correct where the customer is right. If the record is wrong, update it – and preserve the original as a historical entry marked as corrected, so the audit trail is intact. Don’t simply overwrite.

  • Have a conversation where views differ. Sometimes a firm’s record may reflect something the customer said at the time but no longer wants to acknowledge. The ICO is clear that firms don’t always have to agree with the customer’s version – accuracy means ‘not misleading’, not ‘whatever the customer says’. But you do have to consider the challenge carefully, record the dispute, and make a defensible decision. Documenting the disagreement is itself a requirement.

  • Update severity or status where the situation has changed. ‘My depression is no longer active’ is a reasonable challenge – the record probably needs updating to reflect resolution rather than being deleted entirely. Mark the original as historical, create a current record showing resolution, and set a review date.

  • Consider whether an independent assessment would help. Where the disagreement is about a complex or contested characteristic (mental capacity, for instance), an independent assessment by a qualified professional can take the subjectivity out of the question.

  • Respond within a month. The standard UK GDPR window for responding to a rectification request is one month, extendable by a further two months for complex cases, provided the customer is told of the extension within the first month.

This is one of the reasons records should be factual, objective, and sourced from the outset – it makes rectification far easier. If the original entry was ‘customer said during call on 12 March: I sometimes struggle with written documents because of my dyslexia’, and the customer disputes it, you’ve got the source right there. If the entry was ‘dyslexic’ with no provenance, there’s nothing to fall back on, and the rectification request is harder to handle well.
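The ‘correct but preserve’ pattern described above can be sketched as a simple data model. This is an illustrative sketch only – the class, field names and statuses are assumptions, not a prescribed schema:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class VulnerabilityNote:
    text: str                 # factual, ideally the customer's own words
    recorded_by: str
    recorded_on: date
    source: str               # provenance, e.g. "customer call", "written disclosure"
    status: str = "current"   # "current" or "corrected"

def rectify(notes: list, index: int, corrected_text: str, by: str) -> None:
    """Mark the disputed entry as corrected and append a replacement,
    preserving the original for the audit trail rather than overwriting it."""
    notes[index].status = "corrected"
    notes.append(VulnerabilityNote(
        text=corrected_text,
        recorded_by=by,
        recorded_on=date.today(),
        source=f"rectification of entry {index}",
    ))
```

After `rectify` runs, the original entry stays on file marked as corrected, and the new entry carries the agreed wording with its own provenance – which is exactly what a later subject access request or audit would need to see.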

A member of staff recorded a customer’s vulnerability in free-text notes that probably shouldn’t be there. What do we do?

Fix the record, learn the lesson, and consider whether it needs to go further.

The first step is to look at the specific entry. Is it a judgmental comment (‘customer being difficult’)? A speculative diagnosis (‘probably has memory issues’)? A personal opinion dressed as fact (‘customer clearly doesn’t understand their finances’)? These shouldn’t be on the record. They’re not factual, they’re not defensible in a subject access request, and they expose the customer to inappropriate treatment by future colleagues acting on them.

What to do.

  • Remove or correct the problematic content. Where the note is straightforwardly inappropriate, replace it with a factual alternative – what the customer actually said or did, what actions were taken, and nothing else. ‘This customer was anxious and kept asking the same questions, probably has mental health issues’ becomes ‘Customer asked for information to be repeated several times and expressed preference for email communication. Updated communication preference.’ Same operational value, no inappropriate inference.

  • Preserve the audit trail. Don’t simply delete as though it never existed. UK GDPR requires you to be able to show what’s happened to a record over time. Mark the original entry as corrected, with a note of why and when, and retain that marked version.

  • Consider whether the customer should know. If the inappropriate note has shaped the customer’s treatment – or if it’s the kind of thing that would surface in a subject access request – there’s a strong case for proactive communication. The customer’s right to accurate records includes their right to know the record has been corrected.

  • Escalate if the note reflects a data breach. If the inappropriate content has been shared beyond where it should have been – in an email to a third party, for example, or included in a document sent to another firm – that may be a personal data breach requiring notification to the ICO within 72 hours, depending on the risk. The data protection officer should make the call.

  • Treat it as a learning moment. One staff member recording an inappropriate note is a training gap; a pattern of inappropriate notes is a systemic problem. Review whether staff training on what to record (and what not to record) is doing its job. The CII’s vulnerable customer guidance has useful examples of safe and unsafe comments that work well as training material.

  • Tighten the system if you can. Free text is where problems like this tend to emerge. Structured fields – dropdown lists, category lists, severity scales – push staff towards factual recording. Where free text is needed, prompts and examples can help (‘Record the customer’s own words; record factual observations; do not record staff opinion or assumed diagnoses’).
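The structured-fields idea can be sketched as a small validation layer applied before a note is saved. The category lists and flagged phrases below are assumptions for illustration, not a regulatory taxonomy:

```python
# Illustrative controlled vocabularies -- the specific values are
# assumptions for this sketch, not a prescribed taxonomy.
CHARACTERISTICS = {"health", "life_event", "resilience", "capability"}
SEVERITIES = {"low", "medium", "high"}

# A crude screen for speculative language: a prompt to the member of
# staff, not a substitute for training.
FLAGGED_PHRASES = ("probably", "clearly", "difficult", "doesn't understand")

def validate_entry(characteristic: str, severity: str, free_text: str) -> list:
    """Return a list of problems with a proposed entry; an empty list
    means it passes the structural checks."""
    problems = []
    if characteristic not in CHARACTERISTICS:
        problems.append(f"unknown characteristic: {characteristic!r}")
    if severity not in SEVERITIES:
        problems.append(f"unknown severity: {severity!r}")
    lowered = free_text.lower()
    for phrase in FLAGGED_PHRASES:
        if phrase in lowered:
            problems.append(f"speculative language: {phrase!r}")
    return problems
```

A factual note like ‘Customer asked for information to be repeated’ passes; ‘probably has memory issues’ gets flagged back to the author before it reaches the record.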

As a guiding principle: customers' vulnerability data should be disclosable to the customer. A good test before recording anything is ‘would I be comfortable if the customer read this back?’ If the answer is no, the note shouldn’t be there.

Can we keep historical records of vulnerabilities that have resolved?

Yes, and in most cases you should.

The concern behind this question is usually that keeping a record of a resolved vulnerability somehow breaches data minimisation. It doesn’t – provided the record serves a legitimate purpose and is held appropriately. The CII’s 2025 vulnerable customer guidance is explicit that historical records are part of a compliant vulnerability data approach, not an exception to it.

Historical records earn their place for several reasons. They evidence the support you provided, which matters for Consumer Duty and for any later complaint or claim. They let you spot recurring patterns – a customer with a fluctuating condition may need a quicker response the next time it recurs. They provide context for new colleagues picking up the customer relationship, so nothing has to be re-disclosed. And they support cohort-level analysis that would otherwise be skewed by customers whose vulnerabilities have come and gone.

The key is how the record is held. A resolved vulnerability should be clearly marked as historical – not sitting in the live record as though it’s still active, which could lead to inappropriate ongoing treatment. The record should carry the metadata that shows when it was active, when it was resolved, who updated the status, and on what basis. Systems should distinguish clearly between current and historical data, and access controls should reflect the lower day-to-day relevance of historical records while keeping them available for the purposes above.

One useful pattern from the CII’s vulnerable customer guidance: where a customer’s vulnerability is resolved, create a new current record marked ‘resolved’ or ‘no longer active’, and retain the original record as a historical entry marked as such. The audit trail stays intact, the current position is unambiguous, and nothing is lost.
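That pattern might look like the following in a simple record store – the dictionary keys and status values are illustrative assumptions, not a prescribed schema:

```python
from datetime import date

def mark_resolved(records: list, record_id: str, by: str) -> None:
    """Retain the original entry as historical and add a current record
    showing resolution, so the audit trail stays intact and the current
    position is unambiguous."""
    for rec in records:
        if rec["id"] == record_id and rec["status"] == "current":
            rec["status"] = "historical"
            records.append({
                "id": f"{record_id}-resolution",
                "characteristic": rec["characteristic"],
                "status": "current",
                "state": "resolved",
                "updated_by": by,
                "updated_on": date.today().isoformat(),
                "review_date": None,  # set according to the firm's review cycle
            })
            return
```

Nothing is deleted: the original record carries its full history, and anyone reading the current position sees ‘resolved’ rather than an apparently live vulnerability.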

If data was recorded in error (wrong customer, incorrect diagnosis that was never verified), that’s different – mark it as recorded in error, explain the correction, and retain the error record for the audit trail. Don’t simply delete as though it never existed.

How long should we keep customers’ vulnerability data after a customer leaves us?

The short answer: as long as you’d keep other customer records relating to that product, and no longer – with a few exceptions worth knowing about.

UK GDPR’s storage limitation principle (Article 5(1)(e)) says data shouldn’t be kept longer than needed for the purpose it was collected for. Consumer Duty adds that firms need to monitor outcomes over the lifetime of the product, which tends to extend the window. In practice, vulnerability data follows your normal customer record retention policy – typically six years after the relationship ends for most financial services records, longer for pensions and certain advice, and whatever your specific product retention policy specifies for others. Your data protection officer will have the definitive answer for your firm.

A few things worth calling out.

  • Historical vulnerability records should usually be retained even while the relationship is live. A customer who had depression in 2023 and is fine now hasn't simply moved out of vulnerability – they're someone whose vulnerability has resolved, and the historical record matters. It evidences the support you provided, helps identify patterns if the condition recurs, and protects the firm if a complaint is raised later. Mark it as historical, but don’t delete it.

  • Regulatory and audit purposes can extend retention. Consumer Duty requires firms to evidence their processing and outcomes, and some of that evidence may need to be held beyond the normal retention period. That’s a legitimate reason to retain – but it needs to be documented in your retention schedule, not treated as a catch-all.

  • Right to erasure overrides where it applies. If a customer exercises their Article 17 right and no exception applies, the data needs to go – from live systems and, over time, from backups. See the separate question on erasure for the detail.

  • Backups get a slightly different treatment. The ICO accepts that deletion from backups can take longer than deletion from live systems, provided the backup data is ‘put beyond use’ until it can be overwritten or destroyed in the normal course of the backup cycle.

The practical discipline is to have a retention schedule, apply it consistently, mark historical data clearly, and know what triggers either extension (regulatory need) or earlier deletion (erasure request). Keeping vulnerability data forever ‘just in case’ is a compliance risk; deleting it the moment a customer leaves loses valuable context. The middle ground, documented, is what works.
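Applying a retention schedule consistently is easier when the deletion-due date is computed, not remembered. The periods below are assumptions for the sketch – your firm’s documented schedule, signed off by the data protection officer, is the authority:

```python
from datetime import date

# Illustrative retention periods in years; the real figures come from
# the firm's retention schedule, not from this sketch.
RETENTION_YEARS = {"general": 6, "pension": 12}

def deletion_due(relationship_ended: date, product_type: str = "general") -> date:
    """Earliest date the record falls due for deletion under the schedule."""
    years = RETENTION_YEARS.get(product_type, RETENTION_YEARS["general"])
    try:
        return relationship_ended.replace(year=relationship_ended.year + years)
    except ValueError:
        # a 29 February anniversary landing in a non-leap year
        return relationship_ended.replace(year=relationship_ended.year + years, day=28)
```

A nightly job over the customer base can then surface records past their due date for review (or flag an erasure request as a trigger for earlier deletion), rather than leaving retention to ad-hoc judgement.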

What happens if a customer makes a subject access request for their vulnerability records?

Vulnerability data is subject to the same right of access as any other personal data under UK GDPR Article 15. Customers can ask what data you hold about them, what you’re doing with it, who it’s been shared with, and for how long it’ll be held – and they’re entitled to a copy, usually within one month.

A few things are specific to vulnerability data.

  • The records need to be disclosable. That means objective, factual, consistent, and free of staff opinions or assumed diagnoses. ‘Customer mentioned feeling overwhelmed by paperwork since her husband died’ is disclosable. ‘Customer is clearly unable to cope’ probably isn’t – and it shouldn’t be on the record in the first place. A subject access request is a good stress test for whether your records would stand up to scrutiny.

  • Structured records are much easier to retrieve and produce. Vulnerability data held in dedicated fields – characteristic, severity, support needs, review dates – can be extracted and presented cleanly. Data scattered across free-text notes, emails, call recordings and case management systems takes far more effort to assemble and raises the risk of missing something.

  • You can ask for clarification on large or complex requests. The ICO accepts that where a request covers a substantial volume of data, you can ask the customer to specify what they’re looking for – provided you respond to any clearly defined parts of the request within the normal timeframe. For vulnerability data specifically, it’s reasonable to offer options: ‘Do you want your support records only, your full vulnerability assessments, or all records we hold?’

  • Third-party data usually gets redacted. If the record includes information disclosed by a family member, a named carer, or someone else, their personal data may need redacting before the response is sent. Get advice from your data protection officer where this comes up.

  • Data about the customer’s own health can usually be disclosed. The customer has a right to their own data, including sensitive health information they’ve disclosed themselves. The one specific exception is covered in the next question.

  • Watch the timelines. The standard window is one month from receipt, extendable by a further two months for complex requests, provided the customer is informed of the extension within the first month. Late responses create their own compliance issue.

If your vulnerability data is well-captured in the first place, responding to a DSAR is straightforward. If it isn’t, a data subject access request is the moment the weaknesses in your record-keeping get exposed.

Can we withhold information on mental or physical health from a data subject access request response?

Yes, in narrowly defined circumstances, and only after following the proper process.

The Data Protection Act 2018 (Schedule 3, Part 2) provides an exemption from the right of access where disclosing health-related personal data would be likely to cause serious harm to the physical or mental health of the data subject or another person. It’s sometimes called the ‘serious harm test’, and it’s deliberately a high bar.

A few conditions need to be met for the exemption to apply.

  • The harm must be to mental or physical health. Emotional upset, embarrassment, or inconvenience don’t meet the threshold. The concern has to be genuine and clinically significant.

  • A relevant health professional’s opinion is needed. The DPA 2018 defines who counts as a ‘relevant health professional’ – registered medical practitioners, dentists, nurses, and other categories set out in section 204 of the Act. The opinion needs to be from someone with an appropriate current relationship with the data subject, or another qualified professional able to assess the likely impact.

  • The opinion must be recent. Generally within the last six months, though the specific timing depends on the circumstances and the stability of the condition.

  • The exemption is specific, not general. It allows you to withhold the parts of the data that would cause serious harm – not the whole response. The rest of the record still needs to be disclosed.

  • Firms shouldn’t make the judgement themselves. A firm concluding ‘we think this would upset the customer’ without a qualified opinion isn’t using the exemption correctly. This has to go through a health professional.

Where a customer vulnerability data response includes material that gives you genuine concern – a condition the customer may not be aware has been recorded, perhaps, or a diagnosis that emerged through inference or third-party disclosure – escalate before responding. Seek the health professional’s opinion, document the process, and base the response on their conclusion. If in doubt, disclose. Over-use of this exemption is a compliance risk in its own right.

One thing worth saying plainly: this exemption is for genuine clinical harm, not for firms uncomfortable about their own records being read back to them. If the concern is that the customer will find out the firm has made an assumption about their mental health with no sound basis, the answer isn’t to withhold the data – it’s not to have recorded it in the first place.

Do we need a data protection impact assessment for our vulnerability processes?

Yes. For almost any firm processing vulnerability data at any scale, a data protection impact assessment is required rather than optional.

UK GDPR Article 35 requires a data protection impact assessment where processing is ‘likely to result in a high risk’ to the rights and freedoms of individuals. The ICO has published a list of processing operations that require a data protection impact assessment, and several of them apply directly to vulnerability data: processing special category data on a large scale, profiling vulnerable individuals, combining or matching datasets, and processing that involves systematic monitoring. Most customer vulnerability programmes tick at least one of these boxes.

The CII’s 2025 vulnerable customer guidance makes the data protection impact assessment requirement explicit – firms must complete one before starting to process vulnerability data. It’s not a one-off: significant changes to your processes (new systems, new types of data, new sharing arrangements) should trigger a fresh data protection impact assessment or an update to the existing one.

A good vulnerability data protection impact assessment covers the full picture:

  • Purpose. Why the processing is necessary – usually the three purposes the CII’s vulnerable customer guidance identifies: delivering targeted support, outcomes monitoring, and product improvement.

  • Data. What characteristics, at what severity, collected how, stored where, for how long.

  • Lawful basis and Article 9 conditions. Which basis is the default (usually explicit consent), which alternatives apply in which scenarios, and how the decision is made in practice.

  • Risk assessment. What could go wrong – accidental disclosure, unauthorised access, inaccurate records causing harm, over-retention – and how likely each risk is.

  • Mitigations. What you’re doing about each risk: tiered access, encryption, staff training, consent management, audit trails, retention discipline, breach notification procedures.

  • Residual risk. What risk remains after mitigation, and whether it’s acceptable.

  • Review. When the data protection impact assessment will be revisited and who owns it.

A customer vulnerability data protection impact assessment is often the clearest expression of your overall approach, and it’s usually the document a regulator or auditor will ask for first. It’s also a useful exercise internally – the act of writing it surfaces gaps that aren’t always visible in day-to-day operations. Most firms find that completing a proper data protection impact assessment drives genuine improvements, not just paperwork.

Where to start: if your firm already has data protection impact assessment templates and a process for standard processing activities, adapt that for vulnerability data rather than starting from scratch. The ICO publishes data protection impact assessment guidance and a sample template, and the CII guidance includes prompts specific to vulnerability data. Your data protection officer should own the process and sign off the result.

What should our privacy notice say about customers’ vulnerability data?

At minimum, a privacy notice dealing with customers’ vulnerability data needs to cover six things: what you collect, why, how you’ll use it, who you might share it with, how long you’ll keep it, and what rights the customer has. All in plain English, all in a form the customer can actually read.

For vulnerability data specifically, a few points are worth being explicit about.

  • What you collect. Some description of the circumstances you’re asking about – health, life events, financial circumstances, capability – rather than just ‘personal information’. Plain language matters here. The customer should be able to understand what kind of information they’re being asked to share.

  • Why. The three purposes are a good frame: to support you appropriately, to ensure our products and services work for you, and to meet our regulatory obligations. Regulatory obligation alone isn’t a great reason from the customer’s perspective; the first two are what they actually care about.

  • How you’ll use it. Specifically, what happens with the data – who sees it, how it shapes the service you provide, and what it isn’t used for (typically marketing, pricing, or anything that could disadvantage the customer).

  • Who you might share it with. Other teams in your own firm on a need-to-know basis. Specific named categories of partner or third party, with the reason. Regulators where required. The customer’s trusted representatives where they’ve authorised it.

  • How long you’ll keep it. Either a specific period or the basis on which retention is decided (the life of the product plus a defined period, for instance). Link to your full retention schedule if you have one.

  • Rights. Access, correction, deletion, objection, and how to exercise them. Plus the right to withdraw consent, with the process explained.

The CII’s customer vulnerability guidance includes a sample extract that captures the tone well: clear, warm, and framed around the customer’s benefit rather than the firm’s compliance. ‘We collect information about your circumstances that may affect how we support you. This might include health conditions, life events, or financial circumstances that mean you need additional or non-standard support. We’ll only share this with others in our organisation who need to know to support you, and with carefully selected partners where this helps us provide better service. You can ask us to delete this information at any time.’

Two things to avoid:

  • Don’t bury vulnerability data in a generic privacy notice. A single paragraph about ‘we may collect special category data’ buried on page 14 of a 20-page notice isn’t transparency. Give vulnerability data its own section, or a clear pointer to a specific notice covering it.

  • Don’t over-caveat. A notice that reads like a legal disclaimer undermines the trust it’s meant to build. Keep it concise, explicit, and framed around what the customer needs to know.

Privacy notices are also worth testing – can a non-specialist read it and explain what happens with their data? If not, it needs rewriting. The Consumer Duty outcome on consumer understanding applies here too.

Who in our firm should own vulnerability data – the data protection officer, the vulnerability lead, or compliance?

The short answer: no single function owns it alone. The most effective model is clear ownership across several roles, with named individuals and explicit accountability.

In practice, the ownership usually sits something like this.

  • The vulnerability lead (or equivalent) owns the outcomes. Whether customers are being identified, supported, and achieving outcomes comparable to resilient cohorts is the business question at the heart of Consumer Duty. This sits best with someone close to customers and operations, whose job is to make it work, not to police it.

  • The data protection officer owns the data protection compliance. Is the lawful basis right, are the consents properly captured, are records accurate, is retention disciplined, are subject rights being honoured? The data protection officer doesn’t operate the process but oversees its data protection integrity and signs off the data protection impact assessment.

  • Compliance owns the regulatory framework. FCA requirements, Equality Act implications, the interaction with other regulatory obligations. Compliance also owns the documentation framework that ties it all together.

  • IT owns the systems and security. Access controls, encryption, audit trails, integration, backup – the technical layer that makes everything else possible.

  • The board owns the whole thing. Consumer Duty requires annual board-level reporting on outcomes, including for vulnerable cohorts. At least one senior executive should be accountable for vulnerability overall, with clear delegation to the operating roles above.

What doesn’t work is ambiguity. Vulnerability data that sits ‘somewhere between’ compliance, the data protection officer, and operations – with nobody owning it day to day – tends to drift. Data quality degrades, decisions get inconsistent, and the annual board report becomes a retrospective scramble.

A simple test: if a customer vulnerability-related issue came up tomorrow, do you know who would handle it, who they’d escalate to, and who would sign off the decision? If yes, ownership is probably working. If no, the structure needs attention.

For small firms, the named individuals may be the same person across multiple roles – and that’s fine, provided the distinct responsibilities are documented. Appointing a single ‘customer vulnerability lead’ with data protection officer support and executive sponsorship is often enough at smaller scale.

Isn’t GDPR proportional to company size? Are small firms exempt?

No – UK GDPR applies to firms of all sizes. There’s no size-based exemption for processing personal data.

What size does affect is proportionality, not applicability. The documentation burden scales – a small firm’s data protection impact assessment doesn’t need to look like a multinational bank’s, a small firm may not need a formally appointed data protection officer (unless the processing itself requires it), and the ICO’s approach to enforcement takes resources into account. But the underlying obligations – lawful basis, transparency, accuracy, security, respecting customer rights – apply to everyone.

For small firms, the practical message is reassuring. Most of what UK GDPR asks for is good practice anyway: clear consent, honest communications, accurate records, sensible security. Buying a purpose-built customer vulnerability tool (rather than building one) handles most of the technical requirements at a price that’s accessible to small firms.

Does GDPR only apply to digital data – are our paper records fine?

No. UK GDPR applies to personal data held in any ‘filing system’ – paper included, provided the records are structured so that specific information about an individual is accessible.

A filing cabinet of vulnerability disclosures, organised by customer name, is still covered by GDPR. A stack of handwritten notes in no particular order probably isn’t (though it’s still good practice to handle them carefully). The test is structure, not medium.

In practice, paper records carry the same obligations – lawful basis, accuracy, retention, security, the right of access – with some additional risks: they’re harder to search when a subject access request comes in, harder to share safely across teams, easier to lose or misplace, and more expensive to redact. For vulnerability data specifically, paper is almost always the wrong medium. A structured digital system solves most of the practical problems paper creates.

Isn’t GDPR just an IT issue?

No. IT implements the controls, but the obligations are the firm’s.

GDPR touches processes, policies, training, governance, contracts, customer communications, and the design of the services themselves. A secure system helps, but if staff use it inconsistently, or if customers don’t understand what they’ve consented to, or if there’s no process to act on a subject access request, the firm is still non-compliant. The ICO has consistently emphasised that data protection is an organisational responsibility.

For customer vulnerability specifically, the implications cut across almost every function. Compliance needs to define the lawful basis and documentation framework. Operations needs to design how data is captured at the point of disclosure. IT needs to build the access controls, audit trails and retention mechanisms. HR and training need to make sure staff know what to do and what not to do. Customer communications need to set out, in plain language, what the firm does with the data. Board-level governance needs to own the whole thing. Treating any one of those as ‘someone else’s problem’ is where firms come unstuck.

Doesn’t GDPR prevent us from sharing customer vulnerability data?

No – this is probably the single most common misconception in the field, and the FCA and ICO have explicitly pushed back on it.

Their joint statement made clear that processing customers’ vulnerability data, including sharing it appropriately, is entirely lawful when done for a legitimate purpose. What UK GDPR requires is a valid lawful basis, a clear purpose, transparent communication with the customer, and appropriate safeguards. None of that prevents sharing – it shapes how it happens.

In practice, vulnerability data can be shared within a firm (tiered access, need-to-know, proper documentation), with third parties in the distribution chain (explicit consent, structured data, data sharing agreements), with regulators (various lawful bases depending on the request), and with external support services (consent, or a relevant substantial public interest condition). Each of these needs to be handled correctly, but each is lawful.
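The ‘tiered access, need-to-know’ control for sharing within a firm can be sketched as a field-level filter. Which roles sit in which tier, and what the fields are called, is a firm-level design decision – the values here are assumptions:

```python
# Illustrative tiers: frontline staff see what to do, not why;
# specialists and the DPO see progressively more.
ACCESS_TIERS = {
    "frontline": {"support_needs"},
    "specialist": {"support_needs", "characteristic", "severity"},
    "dpo": {"support_needs", "characteristic", "severity", "history"},
}

def visible_fields(record: dict, role: str) -> dict:
    """Need-to-know filter: return only the fields the role's tier allows.
    Unknown roles see nothing."""
    allowed = ACCESS_TIERS.get(role, set())
    return {k: v for k, v in record.items() if k in allowed}
```

The design point is that a frontline colleague retrieving the record sees ‘send documents in large print’ without ever seeing the underlying health information that prompted it – sharing happens, but shaped by the safeguards UK GDPR asks for.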

Firms that use GDPR as a reason not to share are missing the point, and – as the joint statement spelled out – they’re failing to meet Consumer Duty at the same time.

Are GDPR and Consumer Duty opposing regulations?

No. They’re complementary, and they’re aimed at the same underlying goal – protecting consumers and building trust.

Consumer Duty requires firms to understand their customers and deliver good outcomes, including for vulnerable cohorts. UK GDPR requires firms to do that in a way that respects the customer’s privacy and control over their own data. The firms that are doing both well treat them as two halves of the same conversation – ask the customer what they need, explain why the firm is asking, be clear about how the information will be used, and use it to deliver better outcomes.

The joint FCA and ICO statement made the complementarity explicit, responding directly to evidence that some firms were using GDPR concerns to justify collecting less data than Consumer Duty requires. Both regulators were clear: that isn’t compliant with either regime.

The firms finding the two in tension are usually firms with weak processes in one of them. Strong vulnerability management and strong data protection are the same exercise, done well.

Isn’t it better to be over-cautious? Isn’t minimising data always best?

Under-collecting is its own compliance failure.

The data minimisation principle in UK GDPR is often misread as ‘collect as little as possible’. The actual principle is to collect only what’s necessary for the stated purpose – and ‘necessary’ is broader than firms often assume. For customers’ vulnerability data specifically, firms need enough information to support individual customers, monitor outcomes by cohort, and identify systemic friction points that warrant product or service change. That’s a substantial data requirement, and trying to meet it with minimal records leaves gaps that hurt customers and fail Consumer Duty.

There’s also a practical point. Circumstances change. A vulnerability which seems mild today may become severe in two years. Capabilities the firm hasn’t yet developed may become relevant for data already held. A customer’s vulnerability may turn out to matter most for a product being sold by a different firm in the distribution chain. Over-minimising now creates problems that are hard to fix later.

The right discipline is: collect what you need for a clear, documented purpose, keep it for as long as that purpose requires, restrict access to those who need it, and delete it when the purpose ends. That’s genuinely GDPR-compliant – and it’s what Consumer Duty expects.
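That discipline – keep a record only while a documented purpose is live, and delete it when the purpose ends – can be sketched as a simple retention check. The purposes and retention periods below are invented for illustration; actual periods are a matter for the firm's own retention schedule:

```python
from datetime import date, timedelta

# Hypothetical retention periods per documented purpose – illustrative only.
RETENTION = {
    "customer_support": timedelta(days=365 * 2),
    "outcome_monitoring": timedelta(days=365 * 6),
}

def due_for_deletion(purpose: str, collected: date, today: date) -> bool:
    """A record with no documented purpose, or one held past its
    retention period, has no basis to be kept and should be deleted."""
    period = RETENTION.get(purpose)
    if period is None:
        return True  # no documented purpose: no basis to retain
    return today - collected > period
```

A check like this would typically run as a scheduled job, with deletions logged so the firm can evidence that records are held no longer than their purpose requires.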

Can customers’ data only be used for its original purpose?

Broadly yes – this is the purpose limitation principle in Article 5(1)(b). Data collected for one purpose shouldn’t be repurposed without a fresh lawful basis.

For customer vulnerability data, this is particularly important because the data is sensitive and the risks of misuse are significant. Customers’ vulnerability information, collected to provide support, should not be used for marketing. It should not be used for pricing – using a customer’s disclosed vulnerability to charge them more, or to deny them a product, is both a purpose limitation breach and likely a Consumer Duty (and Equality Act) issue. It should not be used for profiling that produces significant effects for the customer without meeting Article 22’s additional requirements.

What’s permitted is use consistent with the original purpose – which for vulnerability data typically covers delivering support, monitoring outcomes, and improving products and services at a cohort level (with individual identifiers appropriately removed). The CII’s customer vulnerability guidance identifies these as the three purposes that justify vulnerability data processing in financial services.

When in doubt: was this purpose set out clearly to the customer at the point of consent? If yes, use is usually fine. If no, a fresh consent (or another lawful basis) is needed.

Don’t data protection laws stop organisations from sharing personal data?

No. They govern how sharing happens, not whether it can happen.

UK GDPR allows data sharing where there’s a valid lawful basis, a clear purpose, appropriate safeguards and, usually, transparency with the data subject. All of those can be met. Most sharing in financial services happens under consent, under a contractual necessity basis, under legitimate interest (for non-special-category data), or – for vulnerability data specifically – under explicit consent or an appropriate Article 9 condition.

The ICO has made a point of publishing guidance that emphasises this. Its data sharing code of practice is largely about how to share responsibly, not a list of reasons not to. And its joint statement with the FCA on vulnerability-related data was specifically intended to correct the ‘we can’t share’ misconception.

Where firms do need to be careful: cross-border transfers (especially outside the UK or EEA), third-party processors (which need appropriate contractual safeguards), and sensitive categories of data (which need Article 9 conditions). Each has specific requirements, but none is a general bar on sharing.

You can’t share data even in an emergency, right?

You can – and often you should.

Vital interests (Article 9(2)(c)) allows processing, including sharing, where it’s necessary to protect someone’s life and they can’t consent. That covers emergency situations: a concerned family member disclosing that a customer is at immediate risk of harm, a staff member identifying a customer in crisis on a call, a pattern of behaviour that suggests imminent safeguarding risk. In these circumstances, firms should act – including by contacting emergency services where that’s appropriate – and the data protection rules support that action.

Schedule 1 Conditions 18 and 19 (safeguarding and economic wellbeing) provide further cover for non-life-threatening but still serious situations where sharing is necessary to protect the person, and where seeking consent would either be impossible or would undermine the protection.

The practical discipline in an emergency: act to protect the person first, document what was done and why, and move to a more sustainable basis once the immediate risk has passed. The last thing a firm should be doing in a crisis is hesitating over GDPR. The regulations are explicitly designed to permit action in genuine emergencies.

Do you always need consent to share someone’s data with another organisation?

No – but consent is often the cleanest option when it’s available.

For customer vulnerability data shared across the distribution chain – say, between an intermediary and a manufacturer, or between a creditor and a debt collection agency – explicit consent is usually the preferred basis, because it’s transparent, it fits with how most customer relationships already work, and it gives the customer clear control over who holds their information. It also makes the onward use easier: once the customer has consented, the data can flow through the chain with the firms’ obligations clear.

Where consent genuinely isn’t available, there are other bases: vital interests in emergencies, safeguarding conditions where the person is at risk, regulatory substantial public interest where the sharing is necessary to meet a regulatory obligation and consent can’t reasonably be obtained, and the insurance condition for firms in that sector. Each has specific limits, and each needs to be documented.

The CII’s 2025 customer vulnerability guidance is clear that more sharing – done well – is the direction of travel, because the alternative (customers having to repeat sensitive information every time they move between firms) is harmful to them and hurts outcomes across the chain. Firms that build consent and data structure into their processes now will find sharing much easier than firms relying on free-text notes and ad-hoc arrangements.

The Data (Use and Access) Act mentions Smart Data. Does that change anything for vulnerability data?

In the short term, no. In the medium term, probably – but the detail is still being worked through, and the core protections for special category data stay in place.

The Data (Use and Access) Act 2025 received Royal Assent in June 2025 and introduces a framework for Smart Data Schemes – a government-mandated, consent-based approach to data sharing across sectors. It builds on the open banking model and is intended to make sharing easier and more standardised across areas like utilities, telecoms, and financial services.

Two elements of the Act are worth knowing about in a customer vulnerability context:

  • Recognised legitimate interests. The Act creates a category of processing where firms can rely on legitimate interest as a lawful basis without running the full balancing test normally required under Article 6(1)(f). That’s potentially useful for simplifying some processing – but it explicitly doesn’t apply to special category data. Since most vulnerability information is special category data, the Article 9 requirements are unchanged. Consent, substantial public interest, and the other specific Article 9 conditions remain the route for vulnerability data.

  • Smart Data Schemes. The Act provides the legal framework for sector-specific schemes to be set up. Each scheme will be detailed in secondary legislation, and rollout is expected across 2026. In principle, these schemes could eventually enable vulnerability data to be shared across sectors in a standardised way – something industry and regulators have already flagged as potentially useful (for example, the overlap between utilities’ Priority Services Registers and financial services vulnerability data). But the detail is still being developed, and it’s too early to say exactly what it’ll mean for any specific sector.

The pragmatic approach for firms today: keep building your customer vulnerability data programme on UK GDPR as it stands, with explicit consent as the default and Article 9 conditions for the exceptions. The CII’s 2025 customer vulnerability guidance is explicit on this – it assumes the current UK GDPR framework continues to apply until the Smart Data provisions are further clarified.

Watch for secondary legislation and any sector-specific schemes that cover financial services. Engage with industry bodies on what shape the schemes might take. And remember that whatever emerges, the underlying principles – consent where possible, proportionality, transparency, the customer’s control over their own data – will continue to apply. Smart Data is about making compliant sharing easier, not about bypassing the protections that matter.