F7
F7 Platform
The F7 Transformation Story

You Built the #1 Platform in MENA.
Now Let's Build What Comes Next.

This team took Foodics from a basic cashier app to a $59M ARR platform powering 35,000+ locations across 20+ countries. That is not in question. What is in question is whether the architecture that got us here can carry us to $285M ARR, 82,000+ locations, a fintech business unit, and an IPO.

This page is for the engineers who built this platform. It lays out the evidence honestly — what we achieved, where the architecture has hit its ceiling, and what F7 enables that no amount of stabilization can deliver.

$59M
ARR Today
27% YoY growth
35K+
Active Locations
#1 RMS in KSA (32% share)
90M
Monthly Orders
$970M+ monthly GMV
106%
Net Revenue Retention
Customers expanding with us
What You Built

11 Years of Shipping: From Cashier App to Regional Platform

Before we talk about what needs to change, let's be clear about what this team delivered. None of the numbers above happen without the engineering work you put in.

Products You Shipped

RMS V5 — full front-of-house/back-of-house rewrite
Foodics Online — digital ordering at scale
Kiosk, CDS, Waiter App — complete in-store experience
Foodics Pay — embedded payments everywhere
Accounting, Inventory & Supply Chain
Marketplace with 100+ integrations
Labeeb & Rushd — AI products in production

Business Impact You Created

0 → $59M ARR (54% CAGR 2020-24)
Scaled to 35,000+ active merchant locations
Captured 32% KSA market share — #1 platform
Foodics Pay: 0 → 18% attachment rate
106% NRR — customers expanding on the platform
Gross Profit $46.1M — 55% YoY growth
Powered 4 acquisitions through platform integration

Platform Milestones

ML3 cybersecurity certification achieved
92% SaaS gross margin infrastructure
13M consumer profiles in the data platform
API ecosystem with 100+ third-party integrations
Continuous delivery across 20+ countries
6 AI products shipped (Labeeb, Order+, Rushd...)
Investment round closed — IPO trajectory set

This is not a criticism of the work you did. You built a $59M business on this technology. The question is not “was this good enough?” — it was. The question is: can this architecture carry Foodics through the next stage of growth? The honest answer, backed by evidence, is no. And that is not a failure — it is the natural lifecycle of a platform that succeeded beyond its original design.

Where We're Going

The Business Is Scaling Beyond What the Monolith Can Handle

Foodics is no longer just a restaurant POS company. The 2026 strategy calls for multi-vertical expansion, a standalone fintech business unit, enterprise clients, and IPO readiness — all simultaneously.

$285M
ARR by 2029
42% CAGR from today
82K
Target Locations 2028
2.3x current scale
250B+
Fintech TAM
Tanar bank + payments
Mid-2027
IPO Target
SOC 2, ISO 27001, SAMA CSF

New Business Lines Require New Architecture

Tanar (Fintech)

Independent, SAMA-compliant financial services — payments, lending, neobanking. Cannot share a database or deployment pipeline with the POS monolith.

Enterprise & Government

Enterprise-grade SLAs (99.99%), audit logging, tenant isolation, and compliance reporting. The monolith's shared-everything model cannot guarantee per-client SLAs.

Retail Expansion

Extending beyond F&B into retail verticals. Requires a platform architecture that can host new domain logic without modifying the core order path.

Consumer Products

Open-loop loyalty, consumer-facing apps, cross-merchant experiences. Requires a Customer Data Platform that spans all channels — impossible with two separate customer databases.

2026 OKRs That Depend on F7

Unified Console v2 by H2-2026

A single merchant admin across all channels. Impossible while RMS and Online are separate backends with separate data models.

99.9% Platform Uptime

Requires per-domain scaling, circuit breakers, and SLO frameworks. The monolith's shared connection pool and single deployment unit make this architecturally impossible.
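A circuit breaker is one of the primitives this OKR assumes. As an illustrative sketch (class and parameter names are invented here, not taken from any Foodics codebase), the core mechanic fits in a few lines:

```python
import time

class CircuitBreaker:
    """Minimal circuit breaker: after max_failures consecutive
    failures the circuit opens and calls fail fast for reset_after
    seconds, protecting the caller from a struggling dependency."""

    def __init__(self, max_failures=3, reset_after=30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None  # None means the circuit is closed

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                raise RuntimeError("circuit open: failing fast")
            # Half-open: allow one trial call through.
            self.opened_at = None
            self.failures = 0
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0
        return result
```

After the failure threshold is reached, callers get an immediate error instead of piling load onto a dependency that is already down; a trial call is allowed again once the reset window passes.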

20-Day Code-to-Feature

Requires independent microservice deployments. Today, a feature touching Orders cannot ship without regression-testing Menu, Inventory, and Reporting.

Pay Attachment ≥55%

Requires a unified payment service across all channels. Today, payment flows are duplicated across RMS and Online with different integration patterns.

Conway's Law: Why This Is an Architecture Problem, Not a Process Problem

“The structure of software will mirror the structure of the organization that built it.” — Our monolith reflects the org that built it: one large team, one large codebase, everything coupled together. F7 inverts this. Small, domain-aligned teams owning independent services. The org restructuring into product families (Orders, Menu, Inventory, Engagement, Organization, Shared Services, Marketplace) is not a coincidence — it is the prerequisite. You cannot build a modular platform with a monolithic org, and you cannot operate a modular org on a monolithic platform.

The Honest Assessment

What the Technical Due Diligence Found

A formal technical due diligence scored the RMS codebase at 6.2/10 overall with a security rating of 5.5/10 and an estimated 14-16 person-months of accumulated tech debt. These are not opinions — they are documented findings.

632
API Routes — only 1 validated
50+
Models without mass-assign guard
4.9M
Deleted modifiers never cleaned
0
Foreign key constraints
Problem 1

Two Systems, One Merchant, Zero Sync

Foodics sells two products to the same merchant: RMS (the POS system for in-store operations) and Online (the digital ordering app for delivery and takeaway). These are not two interfaces to the same backend — they are two completely isolated PHP/MySQL applications built by two separate engineering organizations.

Historical Context: How We Got Here

Online (codenamed SOLO) was originally a separate company that was acquired by Foodics. SOLO was designed as a POS-agnostic online ordering platform — it treated Foodics RMS as “just another POS,” no different from any other integration partner. This POS-agnosticism was a core design principle, not an oversight.

After acquisition, Foodics needed to bridge the two systems. Rather than rebuilding Online to share RMS's data model, a pull-based sync was implemented: Online pulls data from RMS through API calls. The merchant must initiate this sync from the Online portal — logging in with separate credentials to a separate admin console.

What was meant to be a temporary integration became permanent infrastructure. The two systems have diverged further over time — each with its own data model, its own domain logic, and its own engineering team. Unifying them is not a matter of “merging codebases.” The data models are fundamentally incompatible. Unification ≠ Centralization — F7 must be a clean architecture that replaces both.

RMS

Restaurant Management System

POS, Kitchen, In-Store Operations

Own MySQL database (schema-per-tenant)
Own admin portal (console.foodics.com)
Own menu management
Own employee records & permissions
Own promotions & discount engine
Own reporting dashboard
Own API & webhook system
Own deployment pipeline
ONL

Online Ordering System

Web Ordering, Mobile App, Delivery

Own MySQL database (separate schema-per-tenant)
Own admin portal (different URL, different UX)
Own menu management (different data model)
Own employee records (not synced with RMS)
Own promotions (different rules engine)
Own reporting (different metrics)
Own API & webhook system (different contracts)
Own deployment pipeline (different schedule)

What This Means for a Merchant

Different Credentials

Two separate logins. Two separate accounts. Two separate onboarding processes. One merchant, two identities.

Different Menus

Menu created in RMS does not appear in Online. Merchant must manually recreate the same menu in both systems. Price change? Update it twice.

Different Promotions

BOGO deal in RMS? Online knows nothing about it. Customer orders online expecting the deal — it does not apply. Angry customer, confused staff.

Different Employee Records

Cashier added in RMS is not recognized in Online. Permissions are separate. A manager with full access in RMS has no access in Online until manually provisioned.

Different Reports

Revenue report from RMS shows in-store sales. Online shows delivery sales. There is no consolidated view. Merchant must manually combine spreadsheets to see total revenue.

Manual Order Sync

Online orders must be pushed to RMS via a fragile sync job so the kitchen can see them. When sync fails, online orders vanish from the kitchen. Customers wait. Food is never prepared.

Problem 2

The Sync Nightmare: A Bridge Built on Sand

Because RMS and Online are two isolated systems, Foodics built a sync mechanism to bridge them. The flow: a merchant sets up their restaurant in RMS, then logs into the Online portal with different credentials, and triggers a sync to pull data from RMS into Online. This sync is slow, partial, unreliable, and the root cause of some of our most painful merchant escalations.

The Current Sync Flow

Step 1
Merchant sets up RMS
Menu, branches, employees, promotions — all configured in the RMS admin portal
Step 2
Merchant logs into Online
Different URL, different credentials, different portal — merchant has to re-authenticate as if it is a separate product
Step 3
Trigger “Sync”
Online attempts to pull data from RMS via fragile API calls — slow, partial, and frequently fails

This sync was designed as a temporary bridge and became permanent infrastructure. It operates through direct database-to-database polling and REST API calls between two monoliths that were never designed to share data. There is no event bus, no retry mechanism, no dead-letter queue, no conflict resolution, and no monitoring dashboard for sync health.

What “Syncs” (Partially, Unreliably)

Menu Items (Product → Item): Complex mapping

RMS 'Products' map to Online 'Items' — different data models, different field names, different validation rules. Modifiers map to ModifierGroups with different cardinality constraints. Combos in RMS become special Items in Online. The mapping is lossy: modifier customization options, nested modifier hierarchies, and pricing tiers are frequently lost or corrupted during sync.
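To make the lossiness concrete, here is a hypothetical Python sketch of this kind of mapping; every field name below is invented for illustration. Any data the flatter target model has no slot for is simply dropped:

```python
def map_product_to_item(product: dict) -> dict:
    """Hypothetical RMS Product -> Online Item mapping.
    The flatter target model has no slot for nested modifier
    options or per-tier pricing, so that data is dropped."""
    return {
        "name": product["name"],
        "price": product["price"],
        # Target ModifierGroups keep only the group name:
        # min/max selections and nested options have nowhere to go.
        "modifier_groups": [m["name"] for m in product.get("modifiers", [])],
    }

product = {
    "name": "Burger",
    "price": 25.0,
    "pricing_tiers": {"dine_in": 25.0, "delivery": 28.0},  # lost in mapping
    "modifiers": [
        {"name": "Cheese", "min": 0, "max": 2,  # constraints lost
         "options": [{"name": "Cheddar"}, {"name": "Halloumi"}]},
    ],
}
item = map_product_to_item(product)
# Round-tripping is impossible: tiers and modifier options are gone.
```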

Categories (Category → Category): Partial sync

RMS uses 'Groups' as a menu-level container. Online uses a flat category model. RMS supports 3 levels of nesting (groups → sub-groups → sub-sub-groups) but Online's model is flatter. A 'group-as-menu' workaround was implemented where RMS groups are treated as Online menus — creating structural inconsistencies that confuse merchants.

Orders: Fragile one-way push

Online orders must be pushed into RMS so the kitchen can see them. This is a cron-based job that polls the Online database and writes to RMS. When it fails — and it fails regularly — online orders vanish from the kitchen. The merchant has customers waiting for food that was never prepared.

Branches: Partial sync

Branch name and address may sync, but operating hours, delivery zones, minimum order values, and branch-specific settings do not. Merchant must re-configure each branch in Online after sync.

What Does NOT Sync At All

Promotions & Discounts: No sync

A BOGO deal created in RMS does not exist in Online. Merchants must manually recreate every promotion in the Online portal. If they forget, customers ordering online see no deals — or worse, expect a deal that does not apply. This creates customer complaints and merchant frustration.

Loyalty Programs: No sync

Loyalty points earned from in-store purchases (RMS) are invisible to the Online system. A customer who earned 500 points dining in cannot redeem them when ordering online. Two separate loyalty databases. The merchant cannot offer a unified loyalty experience.

Customer Profiles: No sync

A customer who ordered in-store 50 times is a stranger to the Online system. No order history, no preferences, no saved addresses. The merchant cannot personalize the online experience based on in-store behavior. The customer must create a new account for Online.

Employee Records & Permissions: No sync

A cashier added in RMS with specific permissions does not exist in Online. Access must be provisioned separately. A manager with full access in RMS has zero access in Online until someone manually creates their account. Role definitions are different between the two systems.

Reports & Analytics: No sync

In-store revenue lives in the RMS database. Online revenue lives in the Online database. There is no consolidated view. To see total revenue, the merchant exports two CSVs and manually combines them in Excel. No unified view of customer behavior, product performance, or operational metrics.

Gift Cards & Coupons: No sync

A gift card issued through one system cannot be redeemed on the other. A coupon code created in RMS is not recognized by Online. Two completely separate promotion engines with no shared state.

Online System Data Volume (Accumulated from Sync)

Total Items in Online
1.6M
Synced from RMS products over years
Deleted Modifier Records
4.9M
Soft-deleted but never cleaned up
Item Prices
878K
Accumulated price records in Online DB
Categories
164K
Mapped from RMS groups with data loss

11 Documented Sync Problems (from Confluence Spike Investigation)

1. Out-of-Memory Crashes

Large menu concepts (merchants with 500+ products) cause OOM exceptions during sync. The entire menu is loaded into memory at once — no pagination, no streaming.

2. Gap Time

Significant delay between when a merchant updates data in RMS and when it becomes available in Online. No real-time propagation — sync must be manually triggered.

3. Over-Fetching All Data

Every sync pulls the entire dataset from RMS, not just changes. A single price update triggers a full re-sync of all products, modifiers, and categories.

4. Group Leveling Limit

RMS supports only 3 levels of menu nesting, and mapping group hierarchies risks circular references. Anything nested deeper is silently truncated.

5. Single-Operation Processing

Sync processes one concept at a time sequentially. A merchant with menus, modifiers, combos, and categories waits for each to complete before the next starts.

6. No Failure Recovery

If sync fails midway, there is no retry mechanism, no resume capability, no dead-letter queue. Partial data is left in an inconsistent state. Merchant must re-trigger manually.

7. Parallel Sync Conflicts

If a merchant triggers sync while a previous sync is still running, the two processes conflict — causing duplicate records, data corruption, or silent data loss.

8. Modifier Customization Lost

RMS modifier options and customization rules (min/max selections, default selections, required vs optional) are partially or fully lost during the mapping to Online's ModifierGroup model.

9. Error-Prone Menu Linking

Linking synced items to Online menus relies on name matching — or requires direct manual database edits. No UI for this operation. One typo breaks the link.

10. No Post-Sync Cleanup

Deleted items in RMS are soft-deleted in Online but never purged. Result: 4.9 million deleted modifier records accumulating in the Online database, degrading query performance.

11. Massive Data Bloat

1.6M total items, 878K item prices, 164K categories in Online — much of it orphaned or stale sync artifacts. No mechanism to distinguish active data from sync debris.
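Problems 1 and 3 above share a fix: pull data in bounded pages instead of loading the whole dataset at once. A minimal sketch (function names and page size are illustrative, not the actual sync job):

```python
def transform(product):
    # Placeholder for the Product -> Item mapping step.
    return {"name": product["name"]}

def sync_paginated(fetch_page, page_size=100):
    """Pull one page at a time and yield transformed records,
    so peak memory is one page regardless of menu size."""
    page = 0
    while True:
        batch = fetch_page(page, page_size)
        if not batch:
            return
        yield from (transform(p) for p in batch)
        page += 1

# Demo with an in-memory "RMS" of 250 products.
products = [{"name": f"product-{i}"} for i in range(250)]

def fetch_page(page, size):
    return products[page * size:(page + 1) * size]

synced = list(sync_paginated(fetch_page, page_size=100))
```

Adding a `changed_since` filter to the fetch would address the over-fetching problem the same way: each sync moves only the delta, not the full dataset.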

The Merchant Experience

A restaurant owner who just spent an hour setting up their menu in RMS — with all modifiers, combos, pricing tiers, and images — triggers the sync to Online. They wait while the system over-fetches their entire dataset. If the menu is large enough, the sync crashes with an out-of-memory error. If it completes, they open the Online portal to find modifier customizations lost, category hierarchy flattened, and items unlinked from menus. They now spend 30-90 minutes manually fixing the data — or open a support ticket. Any future change in RMS requires re-syncing the entire dataset again. This is not a one-time pain. This is the daily operational reality for merchants who use both systems.

How F7 Eliminates the Sync Problem Entirely

One Menu Service

In F7, there is one Menu service that serves all channels — POS, Web, Mobile, Kiosk. A menu change made anywhere is immediately available everywhere. No sync. No duplication. One source of truth.

One Customer Profile

The CDP (Customer Data Platform) service maintains a single guest profile across all touchpoints. In-store purchases, online orders, loyalty points, preferences — all in one place. The customer is recognized regardless of channel.

One Event Bus

All domain events flow through Kafka/MSK. When a promotion is created, every channel that needs it consumes the event in real-time. No polling. No cron jobs. No partial sync. Events are immutable, ordered, and replayable.
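The difference from the polling sync can be shown with a toy in-memory stand-in for the bus. Real Kafka adds partitions, offsets, consumer groups, and replay; this sketch (all names invented) only shows push-based fan-out over an ordered log:

```python
from collections import defaultdict

class EventBus:
    """Toy stand-in for Kafka: an append-only, ordered log per
    topic, with subscribers notified as events are published."""

    def __init__(self):
        self.log = defaultdict(list)         # topic -> ordered events
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)

    def publish(self, topic, event):
        self.log[topic].append(event)        # immutable, ordered
        for handler in self.subscribers[topic]:
            handler(event)                   # pushed, not polled

bus = EventBus()
online_promos, kiosk_promos = [], []
bus.subscribe("promotion.created", online_promos.append)
bus.subscribe("promotion.created", kiosk_promos.append)
bus.publish("promotion.created", {"type": "BOGO", "item": "Burger"})
# Every channel sees the promotion immediately; no cron job, no sync.
```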

One Merchant Identity

The Organization service provides SSO across all channels. One login. One admin portal. One set of permissions. A merchant manages their entire business from a single console — not two separate portals with two separate credential systems.

Problem 3

Paying for Infrastructure We Cannot Use

Our AWS infrastructure is provisioned for peak capacity that the monolith demands but rarely reaches. The schema-per-tenant model forces oversized instances. We cannot right-size because the monolith treats all workloads as one.

Why This Matters for IPO

Investors evaluate infrastructure efficiency as a key indicator of engineering maturity. A 12% CPU utilization rate on production infrastructure signals architectural debt, not headroom. In due diligence, this translates to questions about engineering leadership, cost discipline, and the ability to scale efficiently. Microservices with right-sized containers on EKS would let each service auto-scale independently — paying only for what each domain actually uses.

Problem 4

The Database Is the Bottleneck

Schema-per-tenant MySQL with 15,000+ schemas creates performance problems that no amount of hardware can fix. The technical due diligence uncovered fundamental data integrity issues beyond just performance.

Due Diligence: Database & Schema Analysis

Zero Foreign Key Constraints

No FK constraints anywhere in the schema. Referential integrity is enforced only by application code — if a bug or direct DB edit bypasses the ORM, orphaned records accumulate silently.

DOUBLE(19,5) for Money

All monetary fields use DOUBLE(19,5) instead of DECIMAL. Floating-point arithmetic causes rounding errors in financial calculations — a compliance and audit risk for a company heading toward IPO.
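The risk is easy to demonstrate in any language with binary floating point: summing ten 0.10 line items does not give exactly 1.00, while fixed-point decimal arithmetic does. A Python illustration:

```python
from decimal import Decimal

# DOUBLE-style float arithmetic: ten line items at 0.10 each.
float_total = sum([0.10] * 10)

# DECIMAL-style exact arithmetic for the same bill.
decimal_total = sum([Decimal("0.10")] * 10, Decimal("0"))

print(float_total)    # 0.9999999999999999 (the bill is short)
print(decimal_total)  # 1.00 (exact)
```

One order's rounding error is invisible; across 90M monthly orders it becomes reconciliation drift that auditors will ask about.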

114 Database Migrations

The migration history contains 114 migrations, with inconsistent naming conventions and no automated rollback mechanism. Each migration must run across 15,000+ tenant schemas.

65+ Core Tables, ~45 Pivots

Schema complexity with over 65 core tables and approximately 45 pivot/junction tables. The relational surface area makes schema evolution extremely risky.

N+1 Query Patterns

Confirmed N+1 patterns in ReservationRepo and InventoryItem — loading related records one-by-one instead of batch queries. These multiply across 15K schemas, creating cascading slow queries.
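The multiplier is what makes N+1 dangerous. A self-contained sketch with a fake query-counting database (all names invented for illustration) shows 200 rows costing 201 queries in the N+1 shape versus 2 in the batched shape:

```python
class FakeDB:
    """In-memory store that counts queries, to expose the multiplier."""
    def __init__(self, reservations, guests):
        self.reservations = reservations
        self.guests = {g["id"]: g for g in guests}
        self.queries = 0

    def all_reservations(self):
        self.queries += 1
        return [dict(r) for r in self.reservations]

    def guest_by_id(self, gid):
        self.queries += 1
        return self.guests[gid]

    def guests_by_ids(self, ids):
        self.queries += 1
        return [self.guests[i] for i in ids]

def load_n_plus_one(db):
    rows = db.all_reservations()                    # 1 query
    for r in rows:
        r["guest"] = db.guest_by_id(r["guest_id"])  # +1 query per row
    return rows

def load_batched(db):
    rows = db.all_reservations()                    # 1 query
    ids = sorted({r["guest_id"] for r in rows})
    guests = {g["id"]: g for g in db.guests_by_ids(ids)}  # 1 query
    for r in rows:
        r["guest"] = guests[r["guest_id"]]
    return rows
```

At 15K tenant schemas, the per-row queries of the first shape multiply into the slow-query counts shown below.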

DB::purge() on Tenant Switch

Schema-per-tenant requires calling DB::purge() to destroy and re-establish the database connection every time the application switches tenant context. Connection pooling is effectively impossible.

P95 API Response Time
2.8s
Target: <200ms
P99 API Response Time
8.4s
Target: <500ms
Slow Queries (>1s) per Hour
~3,200
Target: <50
Slow Queries (>5s) per Hour
~340
Target: 0
Avg MySQL Query Time
420ms
Target: <50ms
Deadlocks per Day
~45
Target: 0
Lock Wait Timeouts / Day
~120
Target: <5
Active DB Connections (Peak)
~2,800
Target: <500

Reporting Is Trapped on the Transactional Database

No CQRS Separation

Every report query runs against the same MySQL instance handling live POS transactions. Heavy aggregate queries (daily sales, inventory valuation, employee performance) compete directly with order placement and payment processing.
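In a CQRS split, report queries hit a projection built from domain events instead of the transactional store. A toy sketch of the shape (class, event, and branch names are invented):

```python
class OrderWriteModel:
    """Transactional side: accepts orders and appends events to a log."""
    def __init__(self, event_log):
        self.orders = {}
        self.event_log = event_log

    def place_order(self, order_id, branch, total):
        self.orders[order_id] = {"branch": branch, "total": total}
        self.event_log.append(
            {"type": "order.placed", "branch": branch, "total": total})

class DailySalesReadModel:
    """Reporting side: a pre-aggregated projection built from events.
    Report queries read this structure, never the transactional DB."""
    def __init__(self):
        self.sales_by_branch = {}

    def apply(self, event):
        if event["type"] == "order.placed":
            b = event["branch"]
            self.sales_by_branch[b] = (
                self.sales_by_branch.get(b, 0) + event["total"])

# Wire them together with a trivial event log.
log = []
write = OrderWriteModel(log)
read = DailySalesReadModel()
write.place_order("o1", "riyadh-01", 120)
write.place_order("o2", "riyadh-01", 80)
for e in log:
    read.apply(e)
```

The daily-sales query becomes a dictionary lookup on the read side; the write side never sees reporting load.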

No Read Replicas for Analytics

The schema-per-tenant model makes read replica setup prohibitively complex. Each replica must handle 15,000+ schemas. Cross-schema queries for platform-level reporting are essentially impossible without custom ETL that does not exist.

Ramadan Impact

During Ramadan peak (iftar rush), report queries from early-closing merchants coincide with peak ordering from restaurants. API response times degrade from 200ms to 8-10 seconds. Merchants experience frozen POS screens while reports load.

Problem 5

Code Quality & Security: Due Diligence Findings

The technical due diligence scored security at 5.5/10. The findings below are not theoretical risks — they are documented facts from the RMS codebase audit. 632 routes, 117 controllers, and only 1 Form Request.

Routes with No Input Validation
631 / 632

632 routes across 117 controllers — only 1 uses a Form Request for input validation. All other endpoints accept unvalidated input, relying on Eloquent mass assignment protection that has been disabled on 50+ models.
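What a Form Request buys is validation at the boundary. A language-agnostic sketch of the same idea (hypothetical fields, not the real Product schema): unknown fields are rejected by allow-list instead of trusting mass assignment, and types are checked before anything reaches the data layer.

```python
ALLOWED_FIELDS = {"name", "price"}  # explicit allow-list

def validate_product_payload(payload: dict) -> dict:
    """Boundary validation: reject unknown fields and bad types
    before the payload can touch the data layer."""
    unknown = set(payload) - ALLOWED_FIELDS
    if unknown:
        raise ValueError(f"unexpected fields: {sorted(unknown)}")
    if not isinstance(payload.get("name"), str) or not payload["name"]:
        raise ValueError("name must be a non-empty string")
    price = payload.get("price")
    if not isinstance(price, (int, float)) or price < 0:
        raise ValueError("price must be a non-negative number")
    return {"name": payload["name"], "price": price}
```

Without this gate, a request carrying an extra `is_admin` field would flow straight into a model whose mass-assignment guard has been disabled.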

Schema Migration Time
4-12 hours

ALTER TABLE on a 15,000-schema MySQL instance. Must be done tenant-by-tenant. Failures require manual rollback. Some migrations have been postponed for 6+ months.

OAuth Token Expiry
5 years

OAuth access tokens are configured with a 5-year expiry. Compromised tokens remain valid for years. No token rotation, no short-lived sessions, no refresh token mechanism.

Feature Flag Infrastructure
Google Sheets

Feature flags are managed via Google Sheets — a single point of failure. If the sheet is inaccessible, feature gate decisions fail. No audit trail, no rollback, no gradual rollout capability.

Automated Test Gates in CI
0

No automated test gates in the CI/CD pipeline. Code merges to production without passing any automated quality checks. Bugs are caught in production or manual QA.

Hardcoded Secrets in Codebase
Multiple

Secrets and API keys found hardcoded in the source code. Not externalized to vaults or environment config. Any developer with repo access can see production credentials.

Problem 6

Real Failures, Real Merchant Impact

These are not hypothetical risks. These incidents happen regularly and are a direct consequence of the monolith architecture. Every engineer who has been on-call has experienced at least one of these.

Problem 7

The Growth Ceiling

Foodics is expanding its merchant base and location count. The current architecture cannot absorb this growth without proportional increases in infrastructure cost, operational risk, and engineering headcount.

Schema-per-Tenant Does Not Scale

Adding 1,000 new merchants means 1,000 new schemas. Each schema adds metadata overhead to the MySQL instance.
DDL operations (ALTER TABLE) must be executed per-schema. At 15,000 schemas, a single column addition takes 4-12 hours.
MySQL information_schema queries slow down proportionally. Simple metadata lookups that took 50ms at 1,000 tenants now take 2+ seconds at 15,000.
Point-in-time recovery becomes impractical. Restoring a single tenant's data requires navigating 15,000 schemas.
At 30,000+ tenants (projected growth), the current RDS instance class cannot hold the metadata in memory. We would need to vertically scale to db.r6g.16xlarge — the largest available instance class.

IPO Readiness: Mid-2027

Due diligence will examine engineering architecture, deployment frequency, incident rates, and infrastructure cost efficiency.
Two separate monoliths with duplicate codebases signal engineering inefficiency — investors will question why the same features are built and maintained twice.
12% average CPU utilization on production infrastructure is a red flag for cost discipline. It suggests over-provisioning with no path to right-sizing.
Schema-per-tenant at 15,000+ schemas is a scaling liability. Investors familiar with SaaS companies will immediately recognize this as a growth blocker.
The absence of independent deployments, circuit breakers, and workload isolation means the entire business runs on a single point of failure. This is an unacceptable risk profile for a public company.

The Hard Truth

Why “Stabilize & Scale” Is Not Enough

The natural instinct is to fix what we have: migrate to MySQL 8, containerize, add observability, refactor the order path, enforce testing. This is sound engineering — and it would stabilize the system. But stabilization is not transformation. Here is why every “fix-in-place” strategy ultimately fails to solve our real problems.

The Fundamental Trade-Off

Option A: Stabilize the Monolith
Migrate 15K schemas to MySQL 8.0 clusters
Introduce single-schema multi-tenancy (partial)
Containerize all applications
Refactor order placement with proper SoC
Eliminate N+1 patterns in critical paths
Build domain-specific APIs for top domains
Achieve 60% unit test coverage
Add APM, SLOs, feature flags, IaC
Estimated effort
12-18 months
System is more stable — but still two monoliths, still PHP, still duplicate effort
Option B: Rebuild as Event-Driven Platform (F7)
Unified platform replacing both RMS and Online
Event-driven architecture (Kafka/MSK)
Domain-owned services with independent databases
CQRS for reporting — zero contention
One menu, one customer, one order lifecycle
Multi-language (.NET, Go, Kotlin) per domain
AI-assisted development across all BE teams
Shadow Data Consumer for zero-downtime migration
Estimated effort
18 months
System is unified, scalable, AI-ready, and positions Foodics for IPO and beyond

The stabilization path costs nearly the same time as a full rebuild — but at the end, you still have two PHP monoliths that cannot share data in real-time, still duplicate every feature across RMS and Online, and still cannot support new channels, new markets, or AI-first product experiences. You pay the same price and get a lower ceiling.

What Stabilization Cannot Fix

Two Systems Remain Two Systems

You can containerize both monoliths, add observability, and enforce testing — but RMS and Online are still separate applications with separate data models. A merchant still logs into two portals. A menu change still requires sync. Promotions, loyalty, and customer profiles still do not cross the boundary. No amount of refactoring within either monolith can solve this — because the problem is between them.

No Event-Driven = No Real-Time

Without an event bus like Kafka, every inter-system communication is either a sync job (batch, delayed, fragile) or a direct API call (tight coupling, cascading failures). You cannot build real-time inventory updates, instant menu propagation, or cross-channel order tracking. The 11 documented sync problems do not go away — they just get slightly faster polling.

Modular Monolith ≠ Independent Scaling

A modular monolith improves code organization but still deploys as one unit, shares one database connection pool, and scales as a single artifact. A surge in order volume still means scaling the entire application — including menu, reporting, inventory, and admin modules that do not need it. Domain-level scaling is architecturally impossible.

Still Building Everything Twice

Even with a refactored order placement flow in RMS, the Online system still needs its own order flow. Menu management, promotions, reporting, employee management — every feature exists in two codebases. Every bug fix is applied twice. Every new hire learns two systems. Stabilization does not reduce this duplication — it doubles the stabilization work.

PHP Ceiling on Talent & AI

The PHP/Laravel ecosystem cannot leverage the performance characteristics of Go, the type safety of .NET, or the JVM ecosystem of Kotlin. More critically, AI-assisted development tools deliver dramatically higher productivity with strongly-typed, well-structured codebases. A PHP monolith with 632 unvalidated routes and disabled mass assignment guards is the worst possible starting point for AI-augmented engineering.

No New Capabilities

Stabilization preserves existing functionality. It does not enable CQRS for isolated reporting, event sourcing for audit trails, saga orchestration for complex workflows, or BFF patterns for channel-specific optimization. Every product capability the business needs — real-time analytics, cross-channel loyalty, AI-powered recommendations — requires architectural primitives that the monolith cannot provide.

Why a Full Rebuild Is Feasible Now

AI-Augmented Engineering

Every backend engineer on F7 uses AI-assisted development (Claude) for spec generation, code implementation, test writing, and code review. This is not a marginal productivity gain — it compresses development timelines by 40-60% for greenfield services with well-defined specs. A full rebuild that would have taken 3 years in 2024 is achievable in 18 months in 2026.

Spec Level Development

F7's data-first pipeline (Data Schemas → Event Specs → API Contracts → DB Schema) produces machine-readable specifications that AI tools can consume directly. The spec is the implementation guide. No ambiguity, no interpretation errors, no back-and-forth. Every service starts from a locked, approved spec — the ideal input for AI-assisted code generation.

Greenfield Advantage

Stabilizing a monolith means working around 15 years of accumulated decisions, undocumented behaviors, and implicit contracts. Building greenfield services means choosing the right tool for each domain, designing clean data models, and writing code that AI tools can reason about. The refactoring tax on legacy code is higher than the cost of building new.

Industry Proof

Every Major Platform Company Has Done This

Foodics is not the first company to outgrow its monolith. The companies below faced the same decision point we face today — and every one of them chose to rebuild. Not because their monolith was “bad,” but because their business ambitions outgrew what a monolith could deliver.

Amazon

Before

Monolithic C++ application. Adding a feature took weeks of coordination. Every deployment risked the entire site.

After

Decomposed into 100s of microservices. Each team owns a service. Led to AWS itself — the infrastructure they built for internal use became a $90B business.

Lesson: The platform they built to serve themselves became larger than the original business.

Netflix

Before

Monolithic Java application with a single Oracle database. A database corruption in 2008 caused 3 days of downtime — no DVD shipments.

After

Rebuilt entirely on AWS with microservices, event-driven architecture, and Chaos Engineering. Open-sourced their tooling (Zuul, Eureka, Hystrix).

Lesson: The 2008 failure was their wake-up call. They chose to rebuild rather than stabilize — and became the gold standard for cloud architecture.

Uber

Before

Monolithic Python application (Dispatch). At scale, a single deployment took hours and a failure in payments would take down ride matching.

After

Rebuilt as domain-oriented microservices with their own service mesh. Each domain (rides, eats, payments) operates independently.

Lesson: They tried to stabilize the monolith first. It did not work. The rewrite was the only path to multi-product (Rides + Eats + Freight).

Shopify

Before

One of the largest Ruby on Rails monoliths ever built. A single codebase serving millions of merchants. Deploy queue was hours long.

After

Decomposed into domain-driven components using their 'Deconstructing the Monolith' strategy. Each commerce domain became independently deployable.

Lesson: They coined 'modular monolith' as a stepping stone — but ultimately, independent services were required for scale.

Airbnb

Before

Monolithic Rails app with a single massive MySQL database. Feature development slowed to a crawl as the codebase grew to millions of lines.

After

Rebuilt into SOA (Service-Oriented Architecture) with domain-specific services. Invested heavily in Thrift-based service communication and event-driven data platform.

Lesson: The migration took years — but without it, they could not have scaled from rooms to Experiences, or expanded internationally.

SoundCloud

Before

Monolithic Rails app. Any change required full regression. Deployment confidence was low. Feature velocity dropped as the team grew.

After

Rebuilt into microservices using the Strangler Fig pattern — exactly the same approach F7 uses. New services ran alongside the monolith until cutover.

Lesson: Smaller team, similar challenges. Proved that Strangler Fig works even without FAANG-level resources.

The pattern is universal: every company that succeeded beyond the scale of its original architecture had to rebuild. None of them succeeded by stabilizing the monolith. The question is never "should we rebuild?" but "do we rebuild proactively while the business is strong, or reactively after the architecture causes a crisis?" Foodics is choosing the proactive path.
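The Strangler Fig approach these case studies converge on can be sketched as a thin routing layer: requests for domains that have already cut over go to the new service, everything else still hits the monolith. A minimal, hypothetical illustration (domain names and service names are assumptions, not F7's real routing config):

```go
package main

import "fmt"

// Hypothetical Strangler Fig routing sketch: an edge router consults a
// per-domain migration table and forwards each request either to the
// legacy monolith or to the new service that has taken over that domain.
var migrated = map[string]bool{
	"menu":   true,  // menu domain already cut over to its new service
	"orders": false, // orders still served by the monolith
}

// route returns the upstream that should handle a request for a domain.
func route(domain string) string {
	if migrated[domain] {
		return "f7-" + domain + "-service"
	}
	return "monolith"
}

func main() {
	for _, d := range []string{"menu", "orders"} {
		fmt.Printf("%s -> %s\n", d, route(d))
	}
	// prints:
	// menu -> f7-menu-service
	// orders -> monolith
}
```

Flipping one entry in the table moves a domain to the new service, and flipping it back is an instant rollback; that reversibility is what makes the pattern low-risk.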

The F7 Platform

What Your Daily Life Looks Like After F7

F7 is not an abstract architecture diagram. It is the platform you will build on every day. Here is what changes — from the perspective of an engineer writing code, a product manager shipping features, and the business hitting its targets.

For Engineers
You own a service, not a folder in a monolith. Your deployments do not affect other domains.
You choose the right language for your domain — .NET, Go, or Kotlin — not one-size-fits-all PHP.
Your tests run in minutes, not hours. Your CI pipeline is your own.
A failure in payments does not page you if you own the menu service. Blast radius is contained.
You write specs first, then use AI to accelerate implementation. The spec is the contract.
Schema changes affect your service only. No 15,000-schema migration nightmares.
For Product
A feature built once works across Cashier, Kiosk, Web, Mobile, and Waiter — no per-channel implementation.
Promotions, loyalty, and customer data work across all channels. No more 'Online doesn't have that.'
You can ship a menu feature without waiting for the orders team to regression-test their module.
Real-time analytics. Reports never slow down the POS. CQRS means reporting is a separate concern.
New channels (Drive-thru, Call Centre, Table Payment) are BFF configurations, not monolith rewrites.
AI-powered features (Order+, InventoryGuru) can access clean, domain-specific APIs instead of fighting a tangled schema.
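The CQRS point above is worth making concrete: writes append events, and a separate projection folds those events into a reporting model that can lag without ever blocking order taking. A hedged sketch, with all type and field names illustrative rather than taken from F7:

```go
package main

import "fmt"

// Illustrative CQRS split: the write side appends events; the read side
// projects them into a reporting model. Reports query the projection,
// never the transactional path, so reporting load cannot slow the POS.
type OrderPlaced struct {
	OrderID string
	Total   float64
}

// Write side: handling a command appends an event to the log.
type EventLog struct{ events []OrderPlaced }

func (l *EventLog) PlaceOrder(id string, total float64) {
	l.events = append(l.events, OrderPlaced{OrderID: id, Total: total})
}

// Read side: a projection consumes events into a sales summary.
type SalesReport struct {
	Orders  int
	Revenue float64
}

func Project(events []OrderPlaced) SalesReport {
	var r SalesReport
	for _, e := range events {
		r.Orders++
		r.Revenue += e.Total
	}
	return r
}

func main() {
	log := &EventLog{}
	log.PlaceOrder("o-1", 45.50)
	log.PlaceOrder("o-2", 30.00)
	fmt.Printf("%+v\n", Project(log.events)) // prints {Orders:2 Revenue:75.5}
}
```

In production the projection would consume from the event stream and persist to its own read-optimized store; the point of the sketch is only the separation of the two paths.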
For the Business
Infrastructure cost drops. Each service auto-scales independently — no more paying for 80% idle capacity.
Tanar (fintech) runs on isolated, SAMA-compliant services. Not entangled with the POS codebase.
Enterprise clients get guaranteed SLAs. Per-domain circuit breakers make 99.9% uptime achievable.
IPO due diligence sees modern architecture, automated testing, clean separation. Not a 6.2/10 legacy score.
New verticals (Retail, Hospitality) plug into the platform without forking the codebase.
Time-to-market drops from months to weeks. 20-day code-to-feature becomes structurally possible.

F7 Platform Domain Architecture

Client Channels (BFF Layer)
Cashier · Waiter · KDS · Kiosk · CDS · Mobile App · Web Ordering · Table Ordering · Table Payment · Drive-Thru · Call Centre · Notifier · F-One Console
Domain Services
Orders & Checkout

Order Management, Calculation Engine, Payment Channels, Tax Engine

Menu

Menu Service, Product Catalog, Modifier Engine, Pricing

Inventory Operations

Stock Management, Supply Chain, Purchase Orders, Recipes

Engagement & Guest Experience

CDP, Loyalty, Campaigns, Feedback, Reservations

Organization

Merchant Management, Branches, Employees, Roles, Devices

Shared Services

Auth/IAM, Notifications, File Storage, Audit Log, Config

Marketplace

Partner Integrations, Developer Portal, Webhooks, OpenAPIs

Financial Services

Accounting, Payments (Tanar Pay), Capital, Spend Management

New Ventures

Retail, Enterprise, Self-Checkout, Hospitality

Platform Engineering

Cloud Infra, CI/CD, Observability, Service Mesh, Data & AI

Infrastructure
AWS EKS · Kafka / MSK · RDS PostgreSQL · Amazon API Gateway · AWS App Mesh · Argo Rollouts · Prometheus / Grafana · DataDog · AWS Secrets Manager

Shadow Data Consumer: Safe, Incremental Migration

Monolith (RMS / Online) → Kafka / MSK (Event Streaming) → Shadow Consumer (Data Hydration) → Domain Service (New F7 Service)

The monolith continues running. New services build up their data stores by consuming events in shadow mode. When the data is validated and the service is tested, traffic shifts gradually. No big bang. No downtime. No data loss.
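A hedged sketch of the shadow-consumer idea, with a slice standing in for the Kafka stream and all types and field names illustrative: the new service's store is hydrated purely from events while the monolith keeps serving traffic, and a validation step compares the two before any cutover.

```go
package main

import "fmt"

// Illustrative shadow data consumer: events hydrate the new service's
// store; the monolith is never bypassed until validation passes.
type MenuEvent struct {
	Op   string // "upsert" or "delete"
	ID   string
	Name string
}

// ShadowStore is the new domain service's data store, built purely from
// the event stream, never written to by the monolith directly.
type ShadowStore map[string]string

func (s ShadowStore) Apply(e MenuEvent) {
	switch e.Op {
	case "upsert":
		s[e.ID] = e.Name
	case "delete":
		delete(s, e.ID)
	}
}

// Validate compares the hydrated store against a snapshot of the
// monolith's data; traffic shifts only once they match.
func Validate(shadow ShadowStore, monolith map[string]string) bool {
	if len(shadow) != len(monolith) {
		return false
	}
	for id, name := range monolith {
		if shadow[id] != name {
			return false
		}
	}
	return true
}

func main() {
	stream := []MenuEvent{
		{"upsert", "m-1", "Falafel Wrap"},
		{"upsert", "m-2", "Karak Tea"},
		{"delete", "m-2", ""},
	}
	shadow := ShadowStore{}
	for _, e := range stream {
		shadow.Apply(e)
	}
	fmt.Println(Validate(shadow, map[string]string{"m-1": "Falafel Wrap"}))
	// prints true
}
```

In production the consumer would read from Kafka with its own consumer group and checkpointed offsets, which is what allows it to replay history and catch up in shadow mode without touching live traffic.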

Program Milestones

Sprint 0 (May 10, 2026): Program kickoff
First Feature Live (Q4 2026): Production traffic on F7
IPO Target (Mid 2027): Architecture must be ready
Full Migration (Q4 2027): Monolith decommissioned

You Built the Foundation. Now Let's Build the Future.

Every engineer in this organization contributed to making Foodics the #1 restaurant platform in MENA. That is real. That matters. And the skills, domain knowledge, and product intuition you developed building RMS and Online are exactly what F7 needs.

F7 is not a rejection of what you built. It is the next chapter. You are not starting over — you are building the platform that your $59M ARR business deserves. The platform that scales to 82,000 locations. The platform that powers a fintech division. The platform that passes IPO due diligence not with a 6.2/10 — but with flying colors.

Amazon, Netflix, Uber, Shopify — they all made this leap. Their engineers had the same doubts. And on the other side, they built the platforms that defined their industries. That is what we are doing here.

Sprint 0 — May 10, 2026 — We Start Today
F7 Platform Transformation -- Internal Reference for Engineering & Product