What data is processed by AI engines?

Vansah ensures that your organization’s AI usage remains fully private and secure.

  • No training or sharing: Your AI requests, prompts, completions, and feedback are not shared with OpenAI or any third party for model training or improvement.

  • Account-level privacy: All AI interactions stay within your organization’s Vansah account and are processed solely for delivering the requested functionality.

  • Data protection by design: Vansah’s AI features are built to align with security best practices, ensuring sensitive project or test data is never exposed outside your environment.

Below is a breakdown of what we process:

1. Jira Work Item Context

  • Work Item summary & description – The main user story, bug, or requirement text.

  • Acceptance criteria (if mapped) – Often the clearest basis for test steps.

  • Custom fields (if mapped) – e.g., business rules, priority, or compliance tags.

  • Labels, components, fix version, priority (if mapped) – These help shape test coverage and tagging.

2. User Prompts / Overrides

  • Any explicit prompt you provide (e.g., “Test Case Type example: Boundary Tests, Functional Tests”).

  • Chosen style (Gherkin vs procedural), coverage depth, or scenario outlines.
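To make the shape of this context concrete, here is a minimal, hypothetical sketch of the kind of payload a generation request might assemble. All field names are illustrative assumptions, not Vansah's actual API:

```python
# Hypothetical sketch only: field names are illustrative, not Vansah's API.
# The request combines mapped Jira work item context with the user's prompt.
request_context = {
    "work_item": {
        "summary": "User can reset their password via email link",
        "description": "As a user, I want to reset my password...",
        "acceptance_criteria": [          # included only if mapped
            "Reset link expires after 60 minutes",
            "Old password is invalidated on success",
        ],
        "labels": ["auth", "security"],   # optional, if mapped
        "priority": "High",               # optional, if mapped
    },
    "user_prompt": {
        "instruction": "Test Case Type example: Boundary Tests, Functional Tests",
        "style": "gherkin",               # e.g. "gherkin" or "procedural"
        "coverage_depth": "standard",
    },
}
```

Only the fields present in this context are sent; nothing outside the mapped work item and your explicit prompt is included.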


Data Not Sent to AI Engines

  • Passwords, secrets, tokens

  • Execution results / production logs – test results are never used for generation; only test design context is sent.


Processing & Residency Controls

  • Data is minimised: only relevant fields are sent for the generation request.

  • You can choose to exclude sensitive fields.

  • Data residency: prompts/responses are routed in line with your Jira region (EU, US, AU, etc.).

  • Audit trail: Vansah logs which user invoked AI, what it was linked to, and the resulting Test Case Keys.
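The minimisation and audit points above can be sketched as a simple field filter plus a log record. This is a hedged illustration under assumed names; the functions and fields are not Vansah's actual implementation:

```python
# Illustrative sketch of field-level minimisation: only fields that are
# mapped for AI use, and not explicitly excluded by the user, are kept.
def minimise_fields(work_item: dict, mapped: set, excluded: set) -> dict:
    """Return only mapped, non-excluded fields from a work item."""
    return {
        key: value
        for key, value in work_item.items()
        if key in mapped and key not in excluded
    }

# Illustrative audit record: who invoked AI, which work item it was
# linked to, and the resulting Test Case Keys.
def audit_entry(user: str, work_item_key: str, test_case_keys: list) -> dict:
    return {
        "invoked_by": user,
        "linked_to": work_item_key,
        "generated_test_cases": test_case_keys,
    }
```

For example, a work item whose `priority` field the user chose to exclude would be forwarded with that field stripped, while the audit log still records the invocation and the generated Test Case Keys.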
