Security & Data Handling

Designing data access so client systems stay controlled under real conditions.

01 // 05

Security at Virtue shapes how we ask for access, how we isolate client work, how we think about AI usage, and how we hand back systems when an engagement ends. The goal is simple: make the data useful without making the environment sloppy.

Access
Read-first, scoped, reviewed regularly
Isolation
Separate repos, staging, production, and client boundaries
Approvals
Access limited to approved people, machines, and uses
Handoff
Clean documentation and offboarding built into the work
02 // 05

Context

I got obsessed with security early by jailbreaking iPhones and learning how systems broke. The thing that stayed with me was that behind every great product there is a place where the data lives, moves, and gets exposed if someone is careless. For great companies it is the same story. If you are serious about building on top of a company’s data, you have to be serious about how that data is accessed, logged, isolated, and handed back.

That perspective changes how we build. We do not treat security like a slide in a deck. It shapes how we scope the first week of work, how we set up environments, how we think about logs, and how we decide whether data should go anywhere near AI systems at all.

03 // 05

How access is granted

Minimum useful access

We start with the minimum useful level of access. Early phases usually require read-only access and a tightly scoped operating problem.

Read first

Write access is introduced later when a client wants live systems, automation, and production workflows that need to act inside their environment.

Approved people and machines

Access is limited to approved team members, approved accounts, and approved machines. We review and harden access regularly as systems evolve.

Environment separation

Client environments are isolated from each other at the repo and infrastructure level, with separate staging and production environments as standard.

04 // 05

AI, logs, and architectural boundaries

Shared-model training is off the table.

We do not use client data to train shared models. Before any client data touches an AI workflow, we think carefully about what is being passed through and whether that use is appropriate for the environment and the client.

Logs are part of the system

One of the easiest places to get sloppy is around logs, exports, test environments, and temporary workflows. We design around that from the start.

Scrubbing matters

We structure flows with PII in mind and think carefully about how outputs are generated so sensitive data is not exposed carelessly.
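As one illustration of the kind of output scrubbing described above, here is a minimal redaction pass. The patterns and placeholder names are ours for this sketch, not a description of any client's production rules, and real PII handling needs far broader coverage than a few regexes:

```python
import re

# Illustrative patterns only; real PII detection needs broader coverage
# (names, addresses, account numbers, free-text identifiers, and so on).
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def scrub(text: str) -> str:
    """Replace likely PII with typed placeholders before text leaves the system."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED_{label.upper()}]", text)
    return text

print(scrub("Contact jane.doe@example.com or 303-555-0100."))
```

The point of the placeholder labels is that downstream logs and AI workflows still see the shape of the data without ever seeing the values.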

Client boundaries stay intact

Client work is kept distinct operationally and architecturally. Where needed, we set up isolated warehouses, repos, and machines.

Offboarding is real

At offboarding, we hand back systems, documentation, and operating context cleanly so the client retains what was built.

05 // 05

High-stakes environments make the standard higher.

We have worked closely with the University of Colorado Boulder in an environment where data handling and approvals were taken seriously. In practice that meant working directly with their team, earning IT approval, and designing around the realities of an environment where student data and institutional trust both mattered.

Work like that sharpens everything else. It forces better decisions around access, better staging discipline, and better documentation. The same habits carry into every other client environment we touch.