Human-Centred Trustworthy AI (HCAI)
Decision Support for Product-Tech Teams
We believe successful AI products are those that centre human values, minimise risk, and build trust. This tool helps you take a quick pulse on how your product measures up against key HCAI principles, pinpointing where you shine and where you might want to improve.
By answering a few targeted questions, you’ll receive a set of scores that reflect your product’s alignment with areas such as bias mitigation, transparency, safety, and more. From there, you can explore practical next steps to strengthen your AI’s impact on users, build trust, and navigate potential risks.
Why Ethical AI Matters For Your Business
According to Forbes (2024), consumer trust in AI companies has dropped globally from 61% in 2023 to 53% in 2024, with U.S. markets seeing an even sharper decline to just 35%.

Ethical AI isn't just about compliance—it's about building products users trust and driving business success. This trust gap directly impacts your bottom line.
Harvard Business Review research shows companies demonstrating ethical AI practices experience 30% higher customer retention rates compared to competitors with questionable AI ethics. In today's competitive landscape, this trust premium is a significant competitive advantage.
Here’s why it matters:
  • Trust forms the bedrock of any lasting relationship between a product and its users.
  • Regulations are evolving rapidly, making responsible AI design a key differentiator and a safeguard.
  • Ethical alignment helps you avoid exclusionary design, reduce unintended harm, and foster positive social outcomes.
Key Ethical Questions
At every stage of the product development lifecycle, there are key questions you should consider to keep your AI feature or product on track to launch in line with Trustworthy AI principles.
For Each Phase of the Product Development Lifecycle

Phase 1: Research & Strategy
Validate the idea and strategic fit
  • Is AI truly necessary, or might a simpler solution suffice with less risk?
  • Does the proposal align with your organisation's values and mission?
  • Are key stakeholder needs identified?
Phase 2: Discovery & Problem Definition
Define the problem and scope; understand data and impact
  • How sensitive is the data?
  • What is the potential impact on users, especially vulnerable groups?
  • Is the problem clearly defined with ethical risks in mind?
  • Can decisions be reversed?
Phase 3: Development & Design
Build and design the solution with fairness, robustness, and compliance in mind
  • How robust and secure is the technical design?
  • Have fairness and bias tests been integrated?
  • Is the system transparent and explainable?
  • Does it meet compliance standards?
Phase 4: Deployment & Monitoring
Launch, monitor, and adjust the system in real-world conditions
  • How will you monitor performance and user feedback?
  • Are there mechanisms for user redress and appeals?
  • Can the system adapt to regulatory changes?
  • Is operational readiness ensured?
How it works
  1. Answer short prompts on topics like bias, user autonomy, and data handling.
  2. View your results for each area, along with an overall score (a toy scoring sketch follows this list).
  3. Explore tailored recommendations – concise ideas you can put into practice today.
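To make the scoring idea concrete, here is a minimal sketch of how per-area and overall scores could be combined. It is an illustration only: the area names and the assumption that each answer is a 1-5 rating are ours, not the tool's actual implementation.

```python
# Hypothetical scoring sketch: assumes each answer is a 1-5 rating
# grouped under an HCAI area. Not the tool's actual method.
from statistics import mean

def score_areas(answers: dict[str, list[int]]) -> tuple[dict[str, float], float]:
    """Average answers within each area, then average the areas overall."""
    area_scores = {area: mean(ratings) for area, ratings in answers.items()}
    return area_scores, mean(area_scores.values())

areas, overall = score_areas({
    "bias": [4, 3, 5],          # illustrative answers
    "transparency": [2, 3],
    "safety": [4, 4, 3],
})
print(areas, "overall:", round(overall, 2))
```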
We’ll never tell you what’s “right” or “wrong.” Instead, we’ll share insights and case examples that help you see the potential impact of your decisions, so that you and your team can chart your own path to trustworthy, human-centred AI.
Ready to begin?
Let’s explore how your AI project can support people in meaningful, responsible ways.
Decision Support Tool
For each phase, you will answer a few questions by selecting a score for each criterion.

Understanding Your Results
When you complete your assessment, you'll receive risk ratings for each criterion and an overall phase risk level. These aren't just labels; they're calls to action (a toy score-to-band mapping follows the list):
  • High Risk (Red): Requires immediate attention and specific remediation plans
  • Medium Risk (Amber): Needs improvement with targeted enhancements
  • Low Risk (Green): Continue monitoring and maintaining good practices
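As a toy mapping only (the assessment's real cut-offs are not published here, so the thresholds below are assumptions), a numeric phase score might translate into the three bands like this:

```python
# Illustrative band mapping: the 2.5 and 4.0 thresholds are assumed
# for demonstration, not taken from the actual assessment.
def risk_band(phase_score: float) -> str:
    """Translate a 1-5 phase score into a red/amber/green risk band."""
    if phase_score < 2.5:
        return "High Risk (Red)"
    if phase_score < 4.0:
        return "Medium Risk (Amber)"
    return "Low Risk (Green)"

for score in (1.8, 3.2, 4.5):
    print(score, "->", risk_band(score))
```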

Turning Insights Into Action
If Your Overall Score is in the Low Range
  1. What This May Indicate
  • You’re at an early stage of structuring responsible AI practices, or perhaps your product is still in rapid development.
  • You might not have fully defined processes for checking issues like bias, explaining AI decisions, or handling user data safely.
  2. Next Steps for Improvement
  • Identify Priority Gaps: Review your principle-by-principle scores to find the most pressing areas (e.g., bias or safety).
  • Start Small: Introduce pilot tests, basic audits, or user feedback loops in at least one area—this makes improvement manageable and visible.
  • Engage Experts & Stakeholders: Seek input from diverse voices, especially those who could be disproportionately affected by AI decisions.
  • Establish Accountability: Define who is responsible for tracking progress on each AI principle.

If Your Overall Score is in the Mid Range
  1. What This May Indicate
  • You’ve got a fair foundation but likely some blind spots or incomplete processes.
  • There’s awareness of responsible AI principles, yet potential gaps that could manifest as user mistrust, overlooked biases, or governance issues down the line.
  2. Next Steps for Improvement
  • Prioritise Key Risk Areas: Look at your principle-by-principle breakdown. If “Bias” is lower than “Safety,” for example, dedicate time to improving your data pipelines or fairness checks first.
  • Test & Validate: Run scenario-based user studies, checking for equal outcomes across user groups (see the sketch after this list), and include edge cases or simple stress tests to see how your AI performs under real-world conditions.
  • Refine Communication: Aim for clearer language in user interfaces or stakeholder reporting to deepen transparency.
  • Schedule Regular Reviews: Move from one-off checks to continuous improvement cycles, reviewing metrics at each product iteration.
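For the "Test & Validate" step above, one simple, concrete check is to compare positive-outcome rates across user groups. The sketch below is a minimal illustration under assumed data (the group labels and the 0.1 tolerance are ours), not a full fairness audit:

```python
# Minimal "equal outcomes" check: compares the share of positive
# decisions per user group. Group names and the 0.1 tolerance are
# illustrative assumptions; real audits need richer fairness metrics.
from collections import defaultdict

def positive_rates(decisions: list[tuple[str, bool]]) -> dict[str, float]:
    """Share of positive outcomes for each user group."""
    totals: dict[str, int] = defaultdict(int)
    positives: dict[str, int] = defaultdict(int)
    for group, positive in decisions:
        totals[group] += 1
        positives[group] += positive
    return {g: positives[g] / totals[g] for g in totals}

rates = positive_rates([
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
])
gap = max(rates.values()) - min(rates.values())
print(rates, "disparity:", round(gap, 2))
if gap > 0.1:  # assumed tolerance; set your own threshold
    print("Outcome gap exceeds tolerance; investigate before shipping.")
```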

If Your Overall Score is in the High Range
  1. What This May Indicate
  • You’re proactively embedding human-centred and ethical considerations into your AI lifecycle.
  • You likely have well-defined processes for auditing bias, explaining outcomes, ensuring user autonomy, and responding to any ethical or safety issues.
  2. Next Steps for Improvement
  • Maintain Momentum: Responsible AI is an ongoing effort. Keep auditing your processes as the product evolves or scales.
  • Share Your Learnings: If you’re at this stage, your insights could help others—consider publishing reports or holding knowledge-sharing sessions.
  • Explore Advanced Techniques: Investigate cutting-edge interpretability, fairness, or robust safety measures if your domain is high-stakes.
  • Stay Agile: Keep an eye on emerging regulations and evolving user expectations, adapting your frameworks as needed.
Measuring Progress
The Harvard Business Review research noted above links ethical AI practices to 30% higher customer retention rates built on user trust. To measure your own progress, consider creating:
  • Team-level dashboards showing assessment improvements over time
  • Quarterly ethical reviews of all AI initiatives using this tool as a foundation
  • "Ethics checklists" derived from your assessment results for ongoing development

Wherever your scores place you, think of them as a snapshot. Even a lower alignment score isn’t a verdict – it’s a prompt to take incremental steps towards more trustworthy, people-centred AI. Similarly, high scores aren’t a final destination but an invitation to keep refining your processes to maintain alignment.
Example Scenarios:
Applying AI Ethics in Practice
System Reliability in Healthcare
Phase 1: Research & Strategy
Scenario: A healthcare technology team was exploring an AI diagnostic assistant for medical professionals.
Ethical Challenge: The consequences of diagnostic errors could be severe, raising questions about appropriate use cases and limitations.
Approach: Before development, they:
  • Conducted extensive stakeholder research including patients and diverse healthcare providers
  • Created clear boundaries for "advisory-only" versus "decision-support" functions
  • Established confidence thresholds requiring human review for borderline cases
  • Aligned system scope with evidence-based medical limitations

Outcome: The project shifted from broad diagnostic assistance to a narrower focus on analysing medical imaging for specific conditions where AI performance was most reliable.
Data Privacy in Recommendation Systems
Phase 2: Discovery & Problem Definition
Scenario: An e-commerce team was building a product recommendation engine using customer browsing history, purchase data, and demographic information.
Ethical Challenge: The team needed to determine how much personal data was necessary to create effective recommendations without compromising privacy.
Approach: They adopted a tiered data strategy:
  • Created an "anonymised mode" using only aggregate behaviour patterns
  • Implemented explicit opt-in for personalised recommendations
  • Established automatic data ageing to remove historical browsing data after 90 days (sketched below)
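The scenario doesn't describe the team's implementation; as a minimal sketch under assumed storage (in-memory events carrying a timezone-aware timestamp), the 90-day ageing rule might look like this:

```python
# Sketch of the scenario's 90-day data-ageing rule. Assumes each
# browsing event is a dict with a timezone-aware "timestamp" field;
# real systems would enforce this at the database or pipeline level.
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=90)  # window taken from the scenario

def age_out(events: list[dict], now: datetime | None = None) -> list[dict]:
    """Drop browsing events older than the retention window."""
    now = now or datetime.now(timezone.utc)
    return [e for e in events if now - e["timestamp"] <= RETENTION]
```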

Outcome: Customer consent rates increased 27%, and the team discovered the anonymised recommendations performed nearly as well as personalised ones for most product categories.
Algorithmic Bias in Hiring Tools
Phase 3: Development & Design
Scenario: An HR software company developed an AI tool to screen job applications and identify promising candidates.
Ethical Challenge: Initial testing revealed the system favoured candidates from certain universities and backgrounds, potentially perpetuating existing workforce imbalances.
Approach: The team:
  • Tested performance across diverse candidate profiles, revealing bias patterns
  • Removed proxies for protected attributes (like university names)
  • Created specialised training for underrepresented edge cases
  • Added an "explainability layer" showing which factors influenced screening decisions

Outcome: The revised system showed more consistent performance across demographic groups while maintaining overall accuracy in candidate screening.
Transparency in Financial Services
Phase 4: Deployment & Monitoring
Scenario: A fintech company deployed an AI system to evaluate loan applications; customers who were declined often didn't understand why.
Ethical Challenge: Customers needed to comprehend decisions affecting their financial future, but the complex model used hundreds of variables.
Approach: The company:
  • Created simplified explanations of key factors affecting each decision
  • Implemented "counterfactual explanations" showing what would need to change for approval (see the sketch after this list)
  • Established a human review process for contested decisions
  • Tracked explanation effectiveness through customer feedback
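The company's actual model isn't described here. As a toy sketch of the counterfactual idea, the snippet below searches for the smallest single-variable change that flips an entirely made-up decision rule from "declined" to "approved":

```python
# Toy counterfactual explanation: the decision rule, thresholds, and
# step sizes are invented for illustration, not the fintech's model.
def approve(income: float, debt_ratio: float) -> bool:
    """Made-up loan rule: enough income and a low enough debt ratio."""
    return income >= 50_000 and debt_ratio <= 0.4

def counterfactual(income: float, debt_ratio: float) -> str:
    """Find the smallest single-variable change (on a coarse grid) that flips a denial."""
    if approve(income, debt_ratio):
        return "Already approved."
    for step in range(1, 101):  # try raising income in 1,000 increments
        if approve(income + step * 1_000, debt_ratio):
            return f"Approved if income rises by {step * 1_000:,}."
    for step in range(1, 41):   # then lowering the debt ratio in 0.01 steps
        if approve(income, debt_ratio - step * 0.01):
            return f"Approved if debt ratio falls by {step * 0.01:.2f}."
    return "No single-variable change found."

print(counterfactual(45_000, 0.35))  # -> "Approved if income rises by 5,000."
```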
Outcome: Customer satisfaction increased by 31% among declined applicants, and regulatory scrutiny decreased as transparency improved.
Remember: These examples demonstrate how ethical considerations translate into practical decisions throughout the AI development process. Use these as inspiration when addressing similar challenges in your own projects.
Best Practices & Lessons Learned
Putting People at the Centre
  • User Control & Transparency: Always inform your users about what data is being used and why. Provide meaningful choices so they can control how their data is utilised.
  • Iterative Improvement: Use this tool as a living guide—update your risk assessments as your project evolves. Regular check-ins can help you catch new issues early.
  • Collaborative Responsibility: Ethical AI isn't the responsibility of one team alone. Use this tool to start conversations across departments, ensuring everyone is on the same page.
Key Takeaways
  • Start Small: This tool is a starting point—a way to flag areas for deeper investigation.
  • Document as You Go: Keep a "living" ethics document or FactSheet that records your assumptions, risks, and planned mitigations.
  • Plan Next Steps: Use the recommendations to prepare for future phases. For example, if better data control is needed, explore ways to collect data directly from users with clear consent options.
Additional Resources & Next Steps
Remember, this is a starting point for creating transparent, human-centric, and responsible AI.

AI Ethics Frameworks
There are several international AI ethics frameworks and regulations; depending on where you operate, one or more of the following may apply:
  • The EU AI Act
  • IEEE Ethically Aligned Design
  • The OECD AI Principles for Trustworthy AI
  • IBM AI FactSheets

Additional Help
Direct consulting services
Our goal isn’t just to identify gaps, but to help you close them. If you’re looking for hands-on guidance on AI responsibility and ethics, we offer tailored consulting services that integrate trust, accountability, and safety into your product development lifecycle.
How we can help:
  • Embedding Ethical Frameworks: We collaborate with your team to design clear, practical processes that reduce bias, enhance transparency, and boost user trust.
  • Cross-functional Alignment: We facilitate workshops and training so that everyone—from data scientists to business leaders—understands and supports ethical AI goals.
With the right structures in place, you can confidently innovate—knowing your AI solutions are built on a foundation of responsibility and human-centred values.

Ready to learn more? Get in touch and let’s explore how we can support your journey.
8-Week Cohort-Based Masterclass for Designers & PMs
Implementing ethics can be tricky, especially because ethical principles are ambiguous and almost never discussed.
That is why I created a cohort-based course to make implementing human-centred ethical AI more approachable. You'll have the chance to:
  • learn each ethical principle in depth (one per week)
  • examine real-world case studies where AI implementation has gone wrong, and
  • discuss with peers in a safe learning environment before applying it in the real world.
Human-centred Ethical AI Course: check to see when our next cohort starts.