Security checklist for legal AI

Compare data protection, access controls, AI governance and procurement readiness across legal AI platforms.

Security review for legal AI is not only about infrastructure. The platform may process privileged documents, confidential client records, sensitive business data, prompts, outputs, citations, workflow decisions, signatures, and audit context.

Use this checklist to structure security, privacy, procurement, and AI governance review.

1. Data Classification and Intake

Ask:

  • What categories of customer data may enter the platform?
  • Can the platform support different data classes by organization, workspace, matter, team, or role?
  • How are documents, prompts, outputs, source references, signatures, comments, and audit records handled?
  • Are there workflows that should be prohibited or restricted because of data sensitivity?
  • How does the platform help teams keep sensitive data in the right workspace?

Evidence to request:

  • Data flow summary.
  • Data handling policy.
  • Retention and deletion documentation.
  • Support-access policy.
  • Security packet or questionnaire response.
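A minimal sketch of what "supporting different data classes by workspace" can look like in practice. All names here (`DataClass`, `WorkspacePolicy`, `intake_allowed`) are hypothetical, not any vendor's API; the point is that intake should be a checkable policy, not a convention.

```python
from dataclasses import dataclass
from enum import Enum

class DataClass(Enum):
    PUBLIC = 1
    CONFIDENTIAL = 2
    PRIVILEGED = 3

@dataclass(frozen=True)
class WorkspacePolicy:
    name: str
    max_class: DataClass            # highest sensitivity this workspace may hold
    blocked_workflows: frozenset    # workflows prohibited at this sensitivity

def intake_allowed(policy: WorkspacePolicy, item_class: DataClass, workflow: str) -> bool:
    """Admit an item only if its class fits the workspace and the workflow is permitted."""
    if item_class.value > policy.max_class.value:
        return False
    return workflow not in policy.blocked_workflows

# Example: a general workspace that may hold confidential (but not privileged)
# material and prohibits external sharing workflows.
general = WorkspacePolicy("general", DataClass.CONFIDENTIAL, frozenset({"external_share"}))
```

A vendor with real data-class support should be able to show where an equivalent rule is enforced and what happens when it fails.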

2. Tenant Isolation and Access Controls

Ask:

  • How is customer data separated by tenant?
  • How are teams, roles, workspaces, matters, and records permissioned?
  • Does the platform support SSO/SAML, MFA, RBAC, and admin controls?
  • Can access be scoped by workflow or resource?
  • How are deprovisioning and user lifecycle events handled?
  • How are privileged administrative actions logged?

Strong answers show clear separation between identity, role, workspace, matter, document, and administrative access.

3. AI Model and Provider Boundaries

Ask:

  • Which AI providers or model paths may be used?
  • How are provider choices governed by environment, plan, workflow, or customer requirement?
  • What contractual terms support any no-training or data-use statements?
  • Are prompts, completions, retrieved sources, embeddings, logs, or derived data used to train general models?
  • How are provider outages, model changes, or routing changes governed?
  • Are model/provider details exposed to end users or only managed internally?

Evidence to request:

  • AI governance summary.
  • Model/provider data handling explanation.
  • Contractual no-training language or approved equivalent.
  • Subprocessor list.
  • Change-management process for AI routing.

4. Human Review and Professional Responsibility

Ask:

  • Which workflows require human review before output becomes final?
  • Can review points be configured by workflow, role, data type, or risk level?
  • Are reviewers shown source material, limitations, uncertainty, or confidence-relevant context?
  • Can reviewers reject, override, or escalate AI-assisted work?
  • How is reviewer identity preserved in the record?

Security and compliance reviewers should treat human review as a control, not as a marketing statement.

5. Source Grounding and Evidence Controls

Ask:

  • When does the platform cite sources or show retrieved evidence?
  • When does it clearly indicate that an output is not source-grounded?
  • Can teams restrict workflows to approved policies, templates, documents, or legal sources?
  • How are stale, conflicting, or missing sources handled?
  • Are citations, source excerpts, or knowledge references preserved in the work record where appropriate?

Red flags:

  • Broad "always accurate" claims.
  • No distinction between source-grounded and non-source-grounded outputs.
  • No escalation path for uncertainty.
  • No way to see which source material influenced the output.
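The distinction between source-grounded and ungrounded outputs can be made structural rather than cosmetic. A hypothetical sketch of output metadata that makes the two states impossible to confuse:

```python
def label_output(answer: str, sources: list[dict]) -> dict:
    """Attach grounding metadata so grounded and ungrounded outputs are never confused."""
    return {
        "answer": answer,
        "grounded": bool(sources),
        "citations": [s["id"] for s in sources],
        "notice": None if sources else "Not source-grounded: verify before relying on this output.",
    }
```

If a vendor cannot point to an equivalent flag in their output record, the red flags above are likely present.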

6. Audit Logs and Records

Ask:

  • What user, admin, workflow, AI, and document events are logged?
  • Can logs show owner, source, review, approval, signature, and record context?
  • Are logs exportable for security, compliance, or incident review?
  • What is the retention period?
  • Are logs protected from unauthorized alteration or deletion?
  • How are support access and administrative actions recorded?

For regulated teams, auditability is part of the product surface. It should be easy to explain during procurement.
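One common way to protect logs from unauthorized alteration is hash chaining, where each entry commits to the previous one. A minimal, illustrative implementation (standard library only; a real platform would typically combine this with append-only storage):

```python
import hashlib
import json

def append_event(log: list[dict], event: dict) -> dict:
    """Append an event whose hash covers the previous entry, making silent edits detectable."""
    prev_hash = log[-1]["hash"] if log else "genesis"
    payload = json.dumps({"event": event, "prev": prev_hash}, sort_keys=True)
    entry = {"event": event, "prev": prev_hash,
             "hash": hashlib.sha256(payload.encode()).hexdigest()}
    log.append(entry)
    return entry

def verify_chain(log: list[dict]) -> bool:
    """Recompute every hash; any edited or reordered entry breaks the chain."""
    prev = "genesis"
    for entry in log:
        payload = json.dumps({"event": entry["event"], "prev": prev}, sort_keys=True)
        if entry["prev"] != prev or entry["hash"] != hashlib.sha256(payload.encode()).hexdigest():
            return False
        prev = entry["hash"]
    return True
```

Asking how a vendor achieves the same tamper evidence (hash chaining, WORM storage, external log shipping) is a concrete version of the "protected from unauthorized alteration" question.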

7. Encryption, Infrastructure, and Certifications

Ask:

  • Is data encrypted in transit and at rest?
  • What hosting regions, cloud providers, or deployment models apply?
  • How are backups protected?
  • What availability posture, monitoring, and incident response processes exist?
  • How are vulnerabilities tracked and remediated?
  • Is there a business continuity and disaster recovery process?
  • What privacy frameworks are fully supported (e.g., GDPR, LGPD, CCPA)?
  • What security certifications are maintained or in formal preparation (e.g., SOC 2, ISO 27001), and do core infrastructure and authentication suppliers hold SOC 2?

Evidence to request:

  • Security architecture summary.
  • Incident response summary.
  • Backup and recovery posture.
  • Penetration-test summary or current assurance materials when available.
  • Current security certificates or formal readiness/preparation documentation (e.g., SOC 2, ISO 27001), including inherited certifications from infrastructure and authentication subprocessors (e.g., AWS).
  • Privacy compliance posture (GDPR, LGPD, CCPA).

8. Retention, Deletion, and Portability

Ask:

  • How long are documents, prompts, outputs, logs, signatures, and vault records retained?
  • Can retention be configured by organization or workflow?
  • What deletion options exist during and after the contract?
  • How are backups handled after deletion?
  • Can records be exported at termination?
  • What happens to derived data, embeddings, or cached material?

Legal, security, and procurement should verify the answer in policy and contract language before signing.
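Configurable retention is easiest to verify when windows are explicit per record type. A sketch with hypothetical record types and windows, not any vendor's defaults:

```python
from datetime import date, timedelta

# Hypothetical per-record-type retention windows, configurable per organization.
RETENTION_DAYS = {"prompt": 90, "output": 365, "audit_log": 365 * 7}

def purge_due(records: list[dict], today: date) -> list[str]:
    """Return IDs of records past their retention window and due for deletion."""
    due = []
    for r in records:
        window = timedelta(days=RETENTION_DAYS[r["type"]])
        if r["created"] + window < today:
            due.append(r["id"])
    return due
```

The contract-language check then becomes: do the agreed windows match what the platform actually enforces, including for backups and derived data?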

9. Subprocessors and Support Access

Ask:

  • Which subprocessors may process customer data?
  • What changes trigger customer notice?
  • How is support access approved, scoped, logged, and revoked?
  • Can support access be limited to metadata or controlled sessions where appropriate?
  • Are support teams trained for sensitive legal and regulated data handling?

Evidence to request:

  • Current subprocessor list.
  • Support-access process.
  • Data processing agreement (DPA) path.
  • Security packet.

10. Procurement Readiness

Before legal or procurement review, collect:

  • Security overview or security packet.
  • Data processing agreement path.
  • Subprocessor list.
  • Privacy policy.
  • Incident response summary.
  • Retention and deletion summary.
  • AI governance summary.
  • Accessibility or compliance materials if required.
  • Security questionnaire response.
  • Named security or procurement contact path.

11. Minimum Review Record

Keep a decision record with:

  • Date of review.
  • Vendor contacts.
  • Product and deployment scope.
  • Data categories in scope.
  • Security documents received.
  • Open risks and mitigations.
  • Contract clauses requiring review.
  • Required implementation controls.
  • Renewal or reassessment date.

12. Forlex Security Review Questions

Bring these questions to a Forlex security or procurement conversation:

  • Which controls are available for our plan and rollout scope?
  • How are organization context, teams, roles, matters, and workspaces separated?
  • What evidence supports no-training, retention, support access, and subprocessor statements?
  • Which workflow events and review decisions can be preserved for audit?
  • What security packet materials are available before contract review?
  • What integrations require additional security scoping?
  • What implementation decisions affect risk posture?

13. Stop-and-Escalate Conditions

Escalate before purchase if:

  • The vendor cannot explain data use for prompts, outputs, logs, embeddings, or derived data.
  • No one owns security or DPA responses.
  • No-training claims are not supported by contract language.
  • Support access is broad, informal, or not logged.
  • Human review is promised but not visible in workflow behavior.
  • Audit logs do not cover the workflow events your organization needs.
  • Retention and deletion obligations are unclear.

Security review should reduce ambiguity before signature, not discover it during implementation.

Last reviewed May 2026
