Assets that make results measurable and shareable internally.
A scoped, low-friction evaluation you can run without derailing delivery (so you know it won't drag on).
A working setup in your environment plus the artifacts to repeat it.
Now: Claude Code first. Soon: ChatGPT/Codex.
A visible win in the first guided session, then expand only if it's worth it.
The workflow produces reviewable artifacts (specs, tests, PR-ready changes) plus a clear measurement plan, so outcomes are defensible.
One repo, one motivated engineer, and a Tech Lead who can sponsor the workflow for eligible work during the evaluation window.