Ways of Working
v5.0 -- March 2026

Standards, processes, and methodologies that govern how the F7 platform transformation is executed. Every team follows these guidelines to ensure consistency, quality, and alignment across all domains. This page is the engineering source of truth -- if it is not documented here, it is not policy.
16 Core Principles
Non-negotiable engineering principles that apply to every team, every sprint, and every deliverable.
1. Every engineering workflow starts with AI tooling. Claude CLI, MCP stack, and multi-agent workflows are the default, not optional.
2. No implementation without an approved, locked spec. All functional requirements written in EARS syntax.
3. Data Models -> Event Schemas -> API Contracts -> DB Schema. Each step approved before next.
4. Every migration deploys the Shadow Data Consumer pattern. Parity validated at 99.9%+ for 2 consecutive weeks before cutover.
5. Each domain owns its data, APIs, and events. No shared databases. No cross-service DB access.
6. Cross-domain state changes flow through Kafka events. Synchronous HTTP only within a bounded context.
7. One service owns each entity. Other services consume via ECST projections, never direct DB queries.
8. Gateway-level JWT validation only. Internal services trust identity headers injected by Kong. No peer-to-peer auth.
9. All gateway config, routing rules, and platform infra managed via IaC. No manual changes.
10. Circuit breakers, retries with exponential backoff, graceful degradation, and DLQ handling are mandatory.
11. New fields optional. Breaking changes require new API version, 2-week deprecation minimum, and Architecture Committee approval.
12. Two-stack observability: DataDog for API/APM/SLOs, Prometheus/Grafana for Kafka lag and ECST projections.
13. Secrets in AWS Secrets Manager. No hardcoded credentials. Security scanning in CI pipeline.
14. Services built in dependency level order (L0 -> L1 -> L2 -> L3). No jumping ahead.
15. Target: multiple deploys/week, <1 week lead time, <1 hour MTTR, <5% change failure rate.
16. Events carry full entity state. Consumers build local projections. No callback queries to source service.
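The resilience principle (circuit breakers, retries with exponential backoff, DLQ handling) can be sketched as follows. This is an illustrative Python sketch, not F7 library code: `call_with_retries` and the `dead_letter` hook are hypothetical names, and in practice the DLQ hook would publish the failed payload to a Kafka dead-letter topic.

```python
import random
import time

def call_with_retries(operation, max_attempts=4, base_delay=0.5, dead_letter=None):
    """Retry a flaky operation with exponential backoff and jitter.

    After max_attempts failures the error is handed to the dead_letter
    hook (e.g. a Kafka DLQ publisher) instead of being silently dropped.
    """
    for attempt in range(max_attempts):
        try:
            return operation()
        except Exception as exc:
            if attempt == max_attempts - 1:
                if dead_letter is not None:
                    dead_letter(exc)  # route the terminal failure to the DLQ
                raise
            # Delays grow as base, 2x, 4x, ... with jitter to avoid thundering herds.
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.1))
```

A circuit breaker would wrap this same call site and trip open after repeated failures; that part is omitted here for brevity.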
5 Engineering Phases (Gated)
Every domain progresses through five sequential phases. Each phase has a gate -- you cannot enter the next phase until the current phase's Definition of Done is met.
P0 Gate Deliverables
Six mandatory artifacts per domain. All must be approved before any engineering work begins. The P0 Gate deadline is April 16, 2026.
- Per domain: service name, purpose, entities owned vs consumed, SLO targets
- Bounded context map, entity ownership matrix, ubiquitous language glossary, ACL boundary definitions
- Dependency levels 0-3, event dependency matrix, API dependency matrix
- P1/P2/P3 tiers, target sprint assignment, dependency-ordered implementation sequence
- OpenAPI YAML for all endpoints, DTOs, rate limits, SLO definitions
- AsyncAPI YAML for all event types, thick payload schemas, subscriber list, throughput estimates
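For orientation, an AsyncAPI event contract entry might look like the fragment below. This is purely illustrative -- the channel name, fields, and version are hypothetical, not the actual F7 schemas.

```yaml
# Illustrative fragment only -- channel and field names are hypothetical.
asyncapi: '2.6.0'
info:
  title: Menu Domain Events
  version: '1.0.0'
channels:
  menu.item.updated:
    subscribe:
      message:
        name: MenuItemUpdated
        payload:
          type: object
          required: [itemId, name, price, updatedAt]
          properties:
            itemId:    { type: string }
            name:      { type: string }
            price:     { type: number }
            updatedAt: { type: string, format: date-time }
```

Note the payload carries full entity state (thick payload), consistent with the ECST principle: consumers must be able to build projections without calling back to the source service.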
Data-First Pipeline
Phase 1 follows a strict sequential pipeline. Each step requires approval before the next can begin. This ensures contracts are defined before implementation.
Three Required Inputs for Every Spec
Each step requires approval before proceeding. Specs must be approved and locked before implementation begins.
EARS Requirement Patterns
All F7 functional requirements are written using the Easy Approach to Requirements Syntax (EARS). Six patterns eliminate ambiguity and ensure testability.
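The six EARS patterns are Ubiquitous, Event-driven, State-driven, Unwanted behavior, Optional feature, and Complex. The template keywords below are standard EARS; the example requirements themselves are invented for illustration and are not actual F7 requirements.

```text
Ubiquitous:        The Order service shall log every state transition.
Event-driven:      WHEN a payment is captured, the Order service shall emit OrderPaid.
State-driven:      WHILE a menu item is out of stock, the Menu service shall hide it from listings.
Unwanted behavior: IF the Kafka broker is unreachable, THEN the producer shall buffer events locally.
Optional feature:  WHERE table ordering is enabled, the POS service shall display a table map.
Complex:           WHILE in business hours, WHEN an order is placed, the Order service shall route it to the kitchen queue.
```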
AI-First Engineering
AI tooling is the default for every engineering workflow. The MCP stack, multi-agent workflow, and confirmation protocol described below are mandatory.
Required MCP Stack
Multi-Agent Workflow
Confirmation-Before-Code Protocol
No Code Agent proceeds without explicit engineer approval; the confirmation sequence is mandatory.
Parallel Execution Plan
Teams execute in dependency order. Phase 1 (Data First) runs in parallel across all levels. Phase 3 (Implement) follows dependency ordering. 2-week sprint cadence.
| Level | Domains | Phase 1 | Phase 2 | Phase 3 | Note |
|---|---|---|---|---|---|
| L0 | Menu, Inventory | S1-S4 | S5-S6 | S7-S10 | Start Phase 1 immediately after P0 |
| L1 | Order | S1-S4 | S5-S6 | S11-S14 | Phase 1 parallel, Phase 3 after L0 done |
| L2 | CRM, BizOps | S1-S4 | S5-S6 | S15-S18 | Phase 1 parallel, Phase 3 after L1 done |
| L3 | Channels (POS, Table, Integrations) | S1-S4 | S5-S6 | S19-S22 | Phase 1 parallel, Phase 3 last |
| Platform | Platform Engineering | Always | Always | Always | Enabling track, always parallel |
Sprint Cadence (2-week sprints)
Testing Strategy
Test scope expands with dependency level. Higher levels test against all lower levels to validate end-to-end integration.
Mandatory Testing Requirements
- Unit test coverage >= 80% (100% for financial, critical, and ECST paths)
- Contract testing (Pact) mandatory for all cross-service APIs
- Load testing at 2x peak mandatory before first production release
- ECST-specific: handler unit tests, projection integration tests, staleness tests, schema evolution tests
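The ECST-specific handler unit tests above can be sketched as follows. The handler and event shape are hypothetical; the point is what such a test must assert: idempotency under duplicate delivery and correct handling of out-of-order (stale) events, gated on an entity version carried in the thick payload.

```python
# Hypothetical ECST handler: upserts a local projection from a thick event.
def apply_menu_item_updated(projection: dict, event: dict) -> None:
    # Last-writer-wins on the event's entity version keeps the handler idempotent.
    item_id = event["itemId"]
    current = projection.get(item_id)
    if current is None or event["version"] >= current["version"]:
        projection[item_id] = event

# Replaying the same event must leave one row (idempotency), and a stale
# (older-version) event must not overwrite newer state (ordering).
def test_projection_idempotent_and_ordered():
    projection = {}
    v2 = {"itemId": "sku-1", "name": "Falafel", "version": 2}
    v1 = {"itemId": "sku-1", "name": "Old name", "version": 1}
    apply_menu_item_updated(projection, v2)
    apply_menu_item_updated(projection, v2)  # duplicate delivery
    apply_menu_item_updated(projection, v1)  # out-of-order stale event
    assert projection["sku-1"]["name"] == "Falafel"
    assert len(projection) == 1
```

Staleness and schema-evolution tests follow the same shape: feed the handler events that violate one assumption at a time and assert the projection stays correct.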
DORA Metrics & SLOs
Engineering performance targets aligned with DORA research. Service-level objectives vary by restaurant segment.
Service Level Objectives by Segment
| Segment | Availability | Latency | Error Rate |
|---|---|---|---|
| QSR (Quick Service) | 99.95% | < 200ms p95 | < 0.1% |
| Casual / Fine Dining | 99.9% | < 500ms p95 | < 0.5% |
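These availability targets translate directly into monthly error budgets. A quick calculation (assuming a 30-day month; the function name is illustrative):

```python
def monthly_error_budget_minutes(availability: float, days: int = 30) -> float:
    """Downtime a service may accrue per month while staying inside its SLO."""
    return (1 - availability) * days * 24 * 60

# 99.95% (QSR) allows ~21.6 minutes of downtime per 30-day month;
# 99.9% (Casual / Fine Dining) allows ~43.2 minutes.
qsr_budget = monthly_error_budget_minutes(0.9995)
casual_budget = monthly_error_budget_minutes(0.999)
```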
Definition of Ready
Nothing enters a sprint, service build, or feature release without meeting these readiness criteria. DoR ensures every work item has the specs, approvals, and dependencies resolved before engineering begins.
Definition of Done
Comprehensive DoD criteria exist for every phase and every level of deliverable.
A feature spanning multiple domains releases ONLY when ALL participating domains have completed development, testing, and deployment. No partial cross-domain releases.
Ceremonies
| Ceremony | Cadence | Purpose |
|---|---|---|
| Sprint Planning | Every 2 weeks | Plan sprint backlog, assign stories, confirm capacity allocation (BAU max 40%, F7 min 60%) |
| Daily Standup | Daily | Progress, blockers, and dependency callouts within the team |
| Spec + EARS Review | As needed | Review and approve EARS specifications before implementation begins |
| Shadow Parity Review | Weekly (Phase 2) | Review data reconciliation results, investigate discrepancies, confirm parity progress |
| Dependency Review Gate | As needed | Validate cross-domain dependencies are resolved before Phase 3 entry |
| Sprint Demo | Every 2 weeks | Demonstrate completed work to stakeholders and product owners |
| Retrospective | Every 2 weeks | Reflect on sprint, identify improvements, adjust processes |
| Architecture Committee | Bi-weekly | Review LLD submissions, approve designs, resolve architectural decisions. Members: Abdullah, Baraa, Sakr |
| Monthly Architecture Review | Monthly | Strategic architecture review, cross-domain alignment, standards evolution |
| Weekly Cross-Family Sync | Weekly | Cross-family synchronization, dependency tracking, blocker escalation |
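The weekly Shadow Parity Review feeds the cutover gate defined in the core principles: 99.9%+ parity for 2 consecutive weeks. One way that gate could be checked, as an illustrative sketch (the data shape and function name are hypothetical):

```python
from datetime import date, timedelta

PARITY_THRESHOLD = 0.999   # 99.9%+ match rate required
REQUIRED_STREAK_DAYS = 14  # 2 consecutive weeks before cutover

def cutover_ready(daily_results: dict) -> bool:
    """daily_results maps date -> (matched_records, total_records).

    Cutover is allowed only if the most recent 14 days ALL meet the
    parity threshold -- a single bad or missing day resets the streak.
    """
    latest = max(daily_results)
    for offset in range(REQUIRED_STREAK_DAYS):
        day = latest - timedelta(days=offset)
        if day not in daily_results:
            return False
        matched, total = daily_results[day]
        if total == 0 or matched / total < PARITY_THRESHOLD:
            return False
    return True
```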
Forbidden Patterns -- Zero Tolerance
Any of the following patterns will block a PR. No exceptions. These are enforced in code review and automated checks.
Backward Compatibility Rules
Critical for the Strangler Fig migration -- legacy and F7 must coexist. These rules apply to all API and event schema changes.
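On the consumer side, the "new fields optional" rule implies a tolerant reader: consumers ignore fields they do not know and default optional fields the producer may not send yet. A minimal Python sketch (field names are hypothetical, not the actual F7 order schema):

```python
# Tolerant reader: a consumer that survives additive schema changes.
KNOWN_FIELDS = {"orderId", "total", "status"}
OPTIONAL_DEFAULTS = {"currency": "USD"}  # newer optional field with a safe default

def parse_order_event(raw: dict) -> dict:
    # Keep only known fields; silently drop anything a newer producer added.
    order = {k: raw[k] for k in KNOWN_FIELDS if k in raw}
    # Backfill optional fields that older producers do not send.
    for field, default in OPTIONAL_DEFAULTS.items():
        order[field] = raw.get(field, default)
    return order
```

This is what lets legacy and F7 producers coexist during the Strangler Fig migration: neither side's additive change breaks the other's consumers.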
Observability (Two-Stack)
All services must instrument both observability stacks from day one.
DataDog
- API and service metrics
- APM distributed traces
- SLO dashboards
- Alerting and on-call routing
Prometheus / Grafana
- Kafka consumer lag
- Event throughput metrics
- DLQ depth monitoring
- ECST projection lag