01 / what I help with
I work across most QA areas: strategy, test design, manual testing,
automation, platforms, review practices. The three focus areas
below reflect where recent work has concentrated, driven by how
AI-enabled development is changing the way teams ship.
01
QA Architecture & Strategy
Test architecture for teams where coupled scripts have hit the
wall. Recent work: HTTP-first invariant frameworks with
test-account pool isolation, an agentic QA tester that derives
scenarios from specs and executes across black-box / DB /
API layers, and self-healing preflight using consensus voting
across parallel LLM calls. The common thread: test rules, not
their implementation.
- Test strategy & planning
- Test architecture design & review
- Invariant test frameworks (HTTP + API)
- Agentic QA (spec-driven, disposable)
- Self-healing preflight (consensus voting, auto-PRs)
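The consensus-voting idea can be sketched in a few lines. This is a minimal, hypothetical illustration, not the production system: `proposeFix` stands in for a real LLM call that proposes a repair (e.g. a replacement selector), and all names are illustrative. Several proposals run in parallel; a fix is accepted only on a strict majority, otherwise it escalates to a human.

```typescript
type Proposal = string;

// Stand-in for an LLM call proposing a repaired selector.
// A real implementation would call a model API here.
async function proposeFix(seed: number): Promise<Proposal> {
  const candidates = ['#submit-btn', '#submit-btn', 'button.submit'];
  return candidates[seed % candidates.length];
}

async function consensusFix(votes: number): Promise<Proposal | null> {
  // Fire the proposal calls in parallel.
  const proposals = await Promise.all(
    Array.from({ length: votes }, (_, i) => proposeFix(i)),
  );
  // Tally identical proposals.
  const tally = new Map<Proposal, number>();
  for (const p of proposals) tally.set(p, (tally.get(p) ?? 0) + 1);
  const [winner, count] = [...tally.entries()].sort((a, b) => b[1] - a[1])[0];
  // Accept only a strict majority; null means "escalate to a human".
  return count > votes / 2 ? winner : null;
}
```

The strict-majority threshold is the point: a single confident-but-wrong model output can't auto-merge a bad PR.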
02
QA Orchestration & Platform
Test frameworks get robust once there's proper orchestration
underneath. Queueing, test-account locking, manifest management,
Jenkins integration, failure triage, and dashboards that surface
the operating model. Infrastructure that makes parallel runs
reliable and failures actionable.
- Orchestration backends (Node.js, Postgres, BullMQ)
- Test account & resource locking
- Manifest ingestion (versioned test catalog)
- Jenkins / CI integration, deploy-to-test mapping
- Failure triage with AI analysis
- Operating-model dashboards (Next.js)
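Test-account locking reduces to a checkout/return pool. The sketch below is an in-memory illustration with hypothetical names; a production version would back this with Postgres (e.g. `SELECT ... FOR UPDATE SKIP LOCKED`) so locks survive across workers and restarts.

```typescript
// Each parallel run checks out an account, runs, and returns it.
class AccountPool {
  private free: string[];
  private waiters: Array<(id: string) => void> = [];

  constructor(accounts: string[]) {
    this.free = [...accounts];
  }

  async acquire(): Promise<string> {
    const id = this.free.pop();
    if (id !== undefined) return id;
    // No free account: park the caller until one is released.
    return new Promise((resolve) => this.waiters.push(resolve));
  }

  release(id: string): void {
    const next = this.waiters.shift();
    if (next) next(id);      // hand straight to the next waiter
    else this.free.push(id); // or return the account to the pool
  }
}
```

Handing a released account directly to the next waiter (rather than re-queueing it) keeps checkout order fair under contention, which is what makes a hundred parallel runs predictable.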
03
AI & Full-Stack Systems
When the QA work calls for building the systems underneath, I
also do that. REST APIs, multi-tenant platforms, RAG pipelines
with grounding verification, document intelligence.
- REST / OpenAPI specs, compliance-first
- Multi-tenant RBAC, OAuth2 / JWT
- Background job processing (pg-boss, BullMQ)
- RAG pipelines with grounding & HITL
- Document extraction (OCR, vision)
- Next.js 15 apps, strict TypeScript
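Grounding verification can be as simple as refusing any answer whose citations don't trace back to retrieval. A minimal sketch, assuming a hypothetical inline citation format like `[chunk:id]`; the interface and function names are illustrative, not a real library API.

```typescript
interface Chunk {
  id: string;
  text: string;
}

// An answer is grounded only if it cites at least one chunk and every
// cited id actually came from the retrieval step.
function isGrounded(answer: string, retrieved: Chunk[]): boolean {
  const cited = [...answer.matchAll(/\[chunk:([\w-]+)\]/g)].map((m) => m[1]);
  if (cited.length === 0) return false; // uncited answer → route to HITL
  const known = new Set(retrieved.map((c) => c.id));
  return cited.every((id) => known.has(id));
}
```

Anything that fails the check falls through to human-in-the-loop review instead of shipping an unsupported claim.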
02 / how I work
The method underneath the three services above.
-
Start with the problem.
Every engagement begins with the team, the app, the cadence, the risks. No template, no preconceptions.
-
Design what fits.
Choose the testing layers, the automation boundary, the methodology. Architecture shaped by your situation, not a reference implementation.
-
Test rules, not flows.
Where it fits: invariants over UI scripts, HTTP-first, decouple tests from implementation. Where UI matters, self-healing or agents fill the gap.
-
Orchestrate at scale.
Queues, locks, manifest, triage. Run a hundred tests in parallel and trust the results.
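"Test rules, not flows" in miniature: instead of scripting the UI path that creates an order, assert a rule that must hold for any order, whichever flow produced it. The payload shape below is a hypothetical example, not a real schema.

```typescript
interface LineItem {
  qty: number;
  unitPrice: number;
}

interface Order {
  total: number;
  items: LineItem[];
}

// Invariant: line-item totals must sum to the order total.
// Holds regardless of which UI flow, API call, or batch job made the order.
function holdsTotalInvariant(order: Order): boolean {
  const sum = order.items.reduce((s, i) => s + i.qty * i.unitPrice, 0);
  // Compare in cents to avoid float noise.
  return Math.round(sum * 100) === Math.round(order.total * 100);
}
```

One invariant like this replaces a family of brittle UI scripts, and it survives redesigns because it's coupled to the rule, not the implementation.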