Straten AI & VeloraGPT — AI Governance, Responsible & Acceptable Use

Operational guardrails for ethical, safe, and effective use of Straten AI engines and VeloraGPT across consulting workflows.

Last Updated: 2025-10-09

1) Purpose

This document codifies how Straten AI engines and VeloraGPT must be used to augment consulting work while safeguarding clients, users, and Stratenity. It defines ethical standards, accuracy controls, privacy protections, and escalation paths.

2) Scope

Applies to all Stratenity modules that invoke Straten AI or VeloraGPT, and to everyone who uses AI outputs for research, recommendations, or deliverables, including internal teams, external consultants, enterprise tenants, and partners.

3) Responsible Use

  • AI is a decision-support copilot; human experts retain accountability for client outcomes.
  • Calibrate prompts to the engagement context; avoid speculative or sweeping conclusions without evidence.
  • Use human-in-the-loop review for all client-facing deliverables.
  • Prefer retrieval-grounded workflows (documents, data rooms, citations) when making factual claims.
Tip: Save prompts that produce consistent, auditable outputs for repeatable engagements.

4) Model Limits & Hallucinations

Generative models may fabricate facts, misattribute sources, or overstate certainty. Treat ungrounded statements as hypotheses, not truth.

  • High-risk areas: regulations, medical/financial advice, safety-critical recommendations, confidential client contexts.
  • Mitigations: require citations, cross-check with authoritative sources, and mark uncertain outputs with qualifiers.
  • Red flags: precise numbers without sources, invented quotes, non-existent reports, or outdated “latest” claims.
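The "precise numbers without sources" red flag above lends itself to a simple screening pass. The sketch below is an illustrative heuristic, not a Stratenity feature: it flags sentences that contain figures but no visible citation marker, so a reviewer knows where to focus verification.

```python
import re

# Hypothetical heuristic: flag sentences that contain figures but no
# citation marker such as "[1]", "(Source:", or a URL. Patterns are
# illustrative assumptions, not part of any Straten AI API.
CITATION = re.compile(r"\[\d+\]|\(Source:|https?://")
FIGURE = re.compile(r"\b\d[\d,.]*")

def flag_ungrounded_figures(text: str) -> list[str]:
    """Return sentences with numbers but no visible citation."""
    flagged = []
    for sentence in re.split(r"(?<=[.!?])\s+", text):
        if FIGURE.search(sentence) and not CITATION.search(sentence):
            flagged.append(sentence)
    return flagged
```

A flagged sentence is not necessarily wrong; it simply lacks an inline source and should be checked against the verification steps in Section 5.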

5) Accuracy & Verification

Users must fact-check claims, validate assumptions with client data, and time-stamp external references. For material decisions, require dual-review and a short evidence appendix (sources, date accessed, and verification notes).

Minimal Verification Checklist
  • Identify whether output is retrieval-grounded or free-form.
  • Confirm dates, figures, and names with primary sources.
  • Label uncertainties; add an “Assumptions & Limits” slide to deliverables.
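The evidence appendix described above (sources, date accessed, verification notes) can be kept as structured records rather than free text. This is a minimal sketch; the class and field names are assumptions, not a Stratenity schema.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class EvidenceEntry:
    """One row of an evidence appendix (illustrative structure)."""
    claim: str           # statement taken from the AI output
    source: str          # primary source used to verify it
    date_accessed: date  # when the source was checked
    grounded: bool       # retrieval-grounded vs. free-form output
    notes: str = ""      # verification notes and uncertainty labels

def appendix_complete(entries: list[EvidenceEntry]) -> bool:
    """Every claim must name a source and an access date."""
    return all(e.source and e.date_accessed for e in entries)
```

Keeping entries structured makes the dual-review step auditable: a reviewer can reject a deliverable mechanically when any claim lacks a source.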

6) Ethics, Bias & Fairness

Do not use AI to discriminate, exploit, or manipulate. Be alert to bias in data and outputs.

  • Probe for bias by testing diverse personas and scenarios.
  • Avoid sensitive attribute inference or profiling without explicit, lawful, and ethical justification.
  • Respect IP; do not request or include copyrighted material you do not have rights to use.

7) Data Handling & Privacy

Only input data you are authorized to use. Follow client contracts and applicable privacy laws.

  • Use anonymization or tokenization where feasible.
  • Do not include PII, PHI, or sensitive financial details unless covered by a signed agreement and approved controls.
  • Respect retention limits; purge temporary working sets after engagement close or per data policy.
Note: Store AI outputs in approved repositories with appropriate access controls.
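The anonymization step above can be approximated with a pre-prompt redaction pass. The sketch below is illustrative only: the regex patterns are assumptions and do not replace approved anonymization or tokenization tooling.

```python
import re

# Illustrative PII patterns; real deployments should use approved,
# validated redaction tooling rather than these ad-hoc regexes.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\+?\d[\d\s-]{7,}\d\b"),
}

def redact(text: str) -> str:
    """Replace likely PII with labeled tokens before sending a prompt."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text
```

Redacting before the prompt leaves the system, rather than after, keeps sensitive values out of prompt logs as well as model inputs.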

8) Security Practices

Protect credentials, API keys, and tenant configurations. Restrict prompts and outputs containing sensitive context to permitted users only.

  • Enable MFA; rotate keys; follow least-privilege.
  • Scan uploads for malware; avoid executing untrusted code suggested by AI.
  • Log prompt/output interactions for audit where permitted.
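Where prompt/output logging is permitted, the log itself should not become a new sensitive data store. A hedged sketch of one approach, with field names that are assumptions for illustration: record content digests rather than raw text, so the audit trail proves what was exchanged without retaining it.

```python
import hashlib
import json
from datetime import datetime, timezone

def audit_record(user: str, prompt: str, output: str, risk: str) -> str:
    """Build one JSON audit-log entry (illustrative field names)."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "risk_level": risk,
        # Store digests, not raw text, so the log stays low-sensitivity
        # while still supporting tamper-evident audits.
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
    }
    return json.dumps(record)
```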

9) Disclosure & Attribution

When appropriate, disclose that AI assisted in drafting or analysis. Attribute data sources clearly and avoid implying human authorship for AI-generated text without review.

10) Prohibited Uses

  • Fraud, deception, or deepfake creation for misleading purposes.
  • Unauthorized scraping, IP infringement, or attempts to extract model parameters.
  • Bypassing contractual, legal, or information-security obligations.

11) Risk Levels & Escalation

  • Low: internal drafts, brainstorming → team review recommended.
  • Medium: non-public client briefs, directional estimates → require citation and manager review.
  • High: regulatory, financial, safety, or public statements → mandate dual-review, source appendix, and approver sign-off.
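The tiers above map naturally to a release gate. This is a minimal sketch, assuming the tier names and control labels shown here (they are not a defined Stratenity vocabulary): a deliverable may only ship when no required control for its tier is outstanding.

```python
# Required controls per risk tier, mirroring Section 11; the labels
# are assumptions chosen for this sketch.
REQUIRED_CONTROLS = {
    "low": {"team_review"},
    "medium": {"citations", "manager_review"},
    "high": {"dual_review", "source_appendix", "approver_signoff"},
}

def missing_controls(risk: str, completed: set[str]) -> set[str]:
    """Return controls still outstanding before release at this tier."""
    return REQUIRED_CONTROLS[risk] - completed
```

For example, a high-risk deliverable with only dual-review completed would still be blocked until the source appendix and approver sign-off are recorded.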

12) Incident Reporting

Report suspected misuse, data exposure, or harmful outputs immediately. Pause affected workflows, preserve logs, and notify Stratenity support.

  • Describe what happened, data involved, time window, and affected users.
  • Do not delete evidence; follow approved containment steps.

13) Policy Updates

This guidance will evolve as capabilities and regulations change. Revised versions will carry an updated “Last Updated” date.

14) Contact

For questions or approvals, email advisory@velorstrategy.com.