You Built the #1 Platform in MENA.
Now Let's Build What Comes Next.
This team took Foodics from a basic cashier app to a $59M ARR platform powering 35,000+ locations across 20+ countries. That is not in question. What is in question is whether the architecture that got us here can carry us to $285M ARR, 82,000+ locations, a fintech business unit, and an IPO.
This page is for the engineers who built this platform. It lays out the evidence honestly — what we achieved, where the architecture has hit its ceiling, and what F7 enables that no amount of stabilization can deliver.
11 Years of Shipping: From Cashier App to Regional Platform
Before we talk about what needs to change, let's be clear about what this team delivered. None of the numbers above happen without the engineering work you put in.
Products You Shipped
Business Impact You Created
Platform Milestones
This is not a criticism of the work you did. You built a $59M business on this technology. The question is not “was this good enough?” — it was. The question is: can this architecture carry Foodics through the next stage of growth? The honest answer, backed by evidence, is no. And that is not a failure — it is the natural lifecycle of a platform that succeeded beyond its original design.
The Business Is Scaling Beyond What the Monolith Can Handle
Foodics is no longer just a restaurant POS company. The 2026 strategy calls for multi-vertical expansion, a standalone fintech business unit, enterprise clients, and IPO readiness — all simultaneously.
New Business Lines Require New Architecture
Independent, SAMA-compliant financial services — payments, lending, neobanking. Cannot share a database or deployment pipeline with the POS monolith.
Enterprise-grade SLAs (99.99%), audit logging, tenant isolation, and compliance reporting. The monolith's shared-everything model cannot guarantee per-client SLAs.
Extending beyond F&B into retail verticals. Requires a platform architecture that can host new domain logic without modifying the core order path.
Open-loop loyalty, consumer-facing apps, cross-merchant experiences. Requires a Customer Data Platform that spans all channels — impossible with two separate customer databases.
2026 OKRs That Depend on F7
A single merchant admin across all channels. Impossible while RMS and Online are separate backends with separate data models.
Requires per-domain scaling, circuit breakers, and SLO frameworks. The monolith's shared connection pool and single deployment unit make this architecturally impossible.
Independent microservice deployments. Today, a feature touching Orders cannot ship without regression-testing Menu, Inventory, and Reporting.
Unified payment service across all channels. Today, payment flows are duplicated across RMS and Online with different integration patterns.
Conway's Law: Why This Is an Architecture Problem, Not a Process Problem
“The structure of software will mirror the structure of the organization that built it.” — Our monolith reflects the org that built it: one large team, one large codebase, everything coupled together. F7 inverts this. Small, domain-aligned teams owning independent services. The org restructuring into product families (Orders, Menu, Inventory, Engagement, Organization, Shared Services, Marketplace) is not a coincidence — it is the prerequisite. You cannot build a modular platform with a monolithic org, and you cannot operate a modular org on a monolithic platform.
What the Technical Due Diligence Found
A formal technical due diligence scored the RMS codebase at 6.2/10 overall with a security rating of 5.5/10 and an estimated 14-16 person-months of accumulated tech debt. These are not opinions — they are documented findings.
Two Systems, One Merchant, Zero Sync
Foodics sells two products to the same merchant: RMS (the POS system for in-store operations) and Online (the digital ordering app for delivery and takeaway). These are not two interfaces to the same backend — they are two completely isolated PHP/MySQL applications built by two separate engineering organizations.
Historical Context: How We Got Here
Online (codenamed SOLO) was originally a separate company that was acquired by Foodics. SOLO was designed as a POS-agnostic online ordering platform — it treated Foodics RMS as “just another POS,” no different from any other integration partner. This POS-agnosticism was a core design principle, not an oversight.
After acquisition, Foodics needed to bridge the two systems. Rather than rebuilding Online to share RMS's data model, a pull-based sync was implemented: Online pulls data from RMS through API calls. The merchant must initiate this sync from the Online portal — logging in with separate credentials to a separate admin console.
What was meant to be a temporary integration became permanent infrastructure. The two systems have diverged further over time — each with its own data model, its own domain logic, and its own engineering team. Unifying them is not a matter of “merging codebases.” The data models are fundamentally incompatible. Unification is not centralization onto either existing system — F7 must be a clean architecture that replaces both.
Restaurant Management System
POS, Kitchen, In-Store Operations
Online Ordering System
Web Ordering, Mobile App, Delivery
What This Means for a Merchant
Two separate logins. Two separate accounts. Two separate onboarding processes. One merchant, two identities.
Menu created in RMS does not appear in Online. Merchant must manually recreate the same menu in both systems. Price change? Update it twice.
BOGO deal in RMS? Online knows nothing about it. Customer orders online expecting the deal — it does not apply. Angry customer, confused staff.
Cashier added in RMS is not recognized in Online. Permissions are separate. A manager with full access in RMS has no access in Online until manually provisioned.
Revenue report from RMS shows in-store sales. Online shows delivery sales. There is no consolidated view. Merchant must manually combine spreadsheets to see total revenue.
Online orders must be pushed to RMS via a fragile sync job so the kitchen can see them. When sync fails, online orders vanish from the kitchen. Customers wait. Food is never prepared.
The Sync Nightmare: A Bridge Built on Sand
Because RMS and Online are two isolated systems, Foodics built a sync mechanism to bridge them. The flow: a merchant sets up their restaurant in RMS, then logs into the Online portal with different credentials, and triggers a sync to pull data from RMS into Online. This sync is slow, partial, unreliable, and the root cause of some of our most painful merchant escalations.
The Current Sync Flow
This sync was designed as a temporary bridge and became permanent infrastructure. It operates through direct database-to-database polling and REST API calls between two monoliths that were never designed to share data. There is no event bus, no retry mechanism, no dead-letter queue, no conflict resolution, and no monitoring dashboard for sync health.
What “Syncs” (Partially, Unreliably)
RMS 'Products' map to Online 'Items' — different data models, different field names, different validation rules. Modifiers map to ModifierGroups with different cardinality constraints. Combos in RMS become special Items in Online. The mapping is lossy: modifier customization options, nested modifier hierarchies, and pricing tiers are frequently lost or corrupted during sync.
RMS uses 'Groups' as a menu-level container. Online uses a flat category model. RMS supports 3 levels of nesting (groups → sub-groups → sub-sub-groups) but Online's model is flatter. A 'group-as-menu' workaround was implemented where RMS groups are treated as Online menus — creating structural inconsistencies that confuse merchants.
Online orders must be pushed into RMS so the kitchen can see them. This is a cron-based job that polls the Online database and writes to RMS. When it fails — and it fails regularly — online orders vanish from the kitchen. The merchant has customers waiting for food that was never prepared.
Branch name and address may sync, but operating hours, delivery zones, minimum order values, and branch-specific settings do not. Merchant must re-configure each branch in Online after sync.
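The lossiness of the Product→Item mapping is structural, not a bug: the Online model simply has no fields for what RMS carries. A minimal sketch in Go — all type and field names here are hypothetical simplifications, not the real schemas:

```go
package main

import "fmt"

// RMS side (hypothetical, simplified): a product with rich modifier rules.
type RMSModifier struct {
	Name          string
	MinSelections int           // e.g. "choose at least 1 sauce"
	MaxSelections int
	Children      []RMSModifier // nested modifier hierarchy
}

type RMSProduct struct {
	Name      string
	Modifiers []RMSModifier
}

// Online side (hypothetical, simplified): a flatter Item/ModifierGroup
// model with no min/max rules and no nesting — there is nowhere to put them.
type OnlineModifierGroup struct {
	Name string
}

type OnlineItem struct {
	Name           string
	ModifierGroups []OnlineModifierGroup
}

// mapToOnline shows why the sync is lossy: min/max selection rules and
// nested child modifiers are silently dropped during the mapping.
func mapToOnline(p RMSProduct) OnlineItem {
	item := OnlineItem{Name: p.Name}
	for _, m := range p.Modifiers {
		item.ModifierGroups = append(item.ModifierGroups,
			OnlineModifierGroup{Name: m.Name}) // Min/Max/Children lost here
	}
	return item
}

func main() {
	burger := RMSProduct{
		Name: "Burger",
		Modifiers: []RMSModifier{{
			Name: "Sauces", MinSelections: 1, MaxSelections: 3,
			Children: []RMSModifier{{Name: "Extra Garlic"}},
		}},
	}
	item := mapToOnline(burger)
	fmt.Println(item.ModifierGroups[0].Name) // the name survives; the rules do not
}
```

No round-trip from `OnlineItem` back to `RMSProduct` can recover the dropped fields — which is why merchants find customization rules missing after every sync.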
What Does NOT Sync At All
A BOGO deal created in RMS does not exist in Online. Merchants must manually recreate every promotion in the Online portal. If they forget, customers ordering online see no deals — or worse, expect a deal that does not apply. This creates customer complaints and merchant frustration.
Loyalty points earned from in-store purchases (RMS) are invisible to the Online system. A customer who earned 500 points dining in cannot redeem them when ordering online. Two separate loyalty databases. The merchant cannot offer a unified loyalty experience.
A customer who ordered in-store 50 times is a stranger to the Online system. No order history, no preferences, no saved addresses. The merchant cannot personalize the online experience based on in-store behavior. The customer must create a new account for Online.
A cashier added in RMS with specific permissions does not exist in Online. Access must be provisioned separately. A manager with full access in RMS has zero access in Online until someone manually creates their account. Role definitions are different between the two systems.
In-store revenue lives in the RMS database. Online revenue lives in the Online database. There is no consolidated view. To see total revenue, the merchant exports two CSVs and manually combines them in Excel. No unified view of customer behavior, product performance, or operational metrics.
A gift card issued through one system cannot be redeemed on the other. A coupon code created in RMS is not recognized by Online. Two completely separate promotion engines with no shared state.
Online System Data Volume (Accumulated from Sync)
11 Documented Sync Problems (from Confluence Spike Investigation)
Large menu concepts (merchants with 500+ products) cause OOM exceptions during sync. The entire menu is loaded into memory at once — no pagination, no streaming.
Significant delay between when a merchant updates data in RMS and when it becomes available in Online. No real-time propagation — sync must be manually triggered.
Every sync pulls the entire dataset from RMS, not just changes. A single price update triggers a full re-sync of all products, modifiers, and categories.
RMS supports only 3 levels of menu nesting. Circular reference risk when mapping group hierarchies. Deeper nesting silently truncated.
Sync processes one concept at a time sequentially. A merchant with menus, modifiers, combos, and categories waits for each to complete before the next starts.
If sync fails midway, there is no retry mechanism, no resume capability, no dead-letter queue. Partial data is left in an inconsistent state. Merchant must re-trigger manually.
If a merchant triggers sync while a previous sync is still running, the two processes conflict — causing duplicate records, data corruption, or silent data loss.
RMS modifier options and customization rules (min/max selections, default selections, required vs optional) are partially or fully lost during the mapping to Online's ModifierGroup model.
Linking synced items to Online menus relies on name matching — or requires direct manual database edits. No UI for this operation. One typo breaks the link.
Deleted items in RMS are soft-deleted in Online but never purged. Result: 4.9 million deleted modifier records accumulating in the Online database, degrading query performance.
1.6M total items, 878K item prices, 164K categories in Online — much of it orphaned or stale sync artifacts. No mechanism to distinguish active data from sync debris.
The Merchant Experience
A restaurant owner who just spent an hour setting up their menu in RMS — with all modifiers, combos, pricing tiers, and images — triggers the sync to Online. They wait while the system over-fetches their entire dataset. If the menu is large enough, the sync crashes with an out-of-memory error. If it completes, they open the Online portal to find modifier customizations lost, category hierarchy flattened, and items unlinked from menus. They now spend 30-90 minutes manually fixing the data — or open a support ticket. Any future change in RMS requires re-syncing the entire dataset again. This is not a one-time pain. This is the daily operational reality for merchants who use both systems.
How F7 Eliminates the Sync Problem Entirely
In F7, there is one Menu service that serves all channels — POS, Web, Mobile, Kiosk. A menu change made anywhere is immediately available everywhere. No sync. No duplication. One source of truth.
The CDP (Customer Data Platform) service maintains a single guest profile across all touchpoints. In-store purchases, online orders, loyalty points, preferences — all in one place. The customer is recognized regardless of channel.
All domain events flow through Kafka/MSK. When a promotion is created, every channel that needs it consumes the event in real-time. No polling. No cron jobs. No partial sync. Events are immutable, ordered, and replayable.
The Organization service provides SSO across all channels. One login. One admin portal. One set of permissions. A merchant manages their entire business from a single console — not two separate portals with two separate credential systems.
Paying for Infrastructure We Cannot Use
Our AWS infrastructure is provisioned for peak capacity that the monolith demands but rarely reaches. The schema-per-tenant model forces oversized instances. We cannot right-size because the monolith treats all workloads as one.
Why This Matters for IPO
Investors evaluate infrastructure efficiency as a key indicator of engineering maturity. A 12% CPU utilization rate on production infrastructure signals architectural debt, not headroom. In due diligence, this translates to questions about engineering leadership, cost discipline, and the ability to scale efficiently. Microservices with right-sized containers on EKS would let each service auto-scale independently — paying only for what each domain actually uses.
The Database Is the Bottleneck
Schema-per-tenant MySQL with 15,000+ schemas creates performance problems that no amount of hardware can fix. The technical due diligence uncovered fundamental data integrity issues beyond just performance.
Due Diligence: Database & Schema Analysis
No FK constraints anywhere in the schema. Referential integrity is enforced only by application code — if a bug or direct DB edit bypasses the ORM, orphaned records accumulate silently.
All monetary fields use DOUBLE(19,5) instead of DECIMAL. Floating-point arithmetic causes rounding errors in financial calculations — a compliance and audit risk for a company heading toward IPO.
The migration history contains 114 migrations, with inconsistent naming conventions and no automated rollback mechanism. Each migration must run across 15,000+ tenant schemas.
Schema complexity with over 65 core tables and approximately 45 pivot/junction tables. The relational surface area makes schema evolution extremely risky.
Confirmed N+1 patterns in ReservationRepo and InventoryItem — loading related records one-by-one instead of batch queries. These multiply across 15K schemas, creating cascading slow queries.
Schema-per-tenant requires calling DB::purge() to destroy and re-establish the database connection every time the application switches tenant context. Connection pooling is effectively impossible.
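The DOUBLE vs DECIMAL finding is easy to demonstrate: binary floats cannot represent most decimal fractions exactly, so repeated arithmetic drifts. A sketch in Go, with integer minor units (halalas) standing in for DECIMAL:

```go
package main

import "fmt"

func main() {
	// Floating point, as with DOUBLE(19,5): add 0.10 SAR ten thousand times.
	var asFloat float64
	for i := 0; i < 10000; i++ {
		asFloat += 0.10
	}

	// Exact arithmetic, as with DECIMAL or integer halalas: the same sum.
	var asHalalas int64
	for i := 0; i < 10000; i++ {
		asHalalas += 10 // 0.10 SAR = 10 halalas
	}

	fmt.Println(asFloat == 1000.0)   // false — the float sum has drifted
	fmt.Println(asHalalas == 100000) // true — integer minor units are exact
}
```

A drift of a fraction of a halala per transaction is invisible in a demo and unacceptable in an audited financial ledger — which is why the finding is flagged as a compliance risk, not a style nit.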
Reporting Is Trapped on the Transactional Database
Every report query runs against the same MySQL instance handling live POS transactions. Heavy aggregate queries (daily sales, inventory valuation, employee performance) compete directly with order placement and payment processing.
The schema-per-tenant model makes read replica setup prohibitively complex. Each replica must handle 15,000+ schemas. Cross-schema queries for platform-level reporting are essentially impossible without custom ETL that does not exist.
During Ramadan peak (iftar rush), report queries from merchants that close early coincide with peak ordering at restaurants still in service. API response times degrade from 200ms to 8-10 seconds. Merchants experience frozen POS screens while reports load.
Code Quality & Security: Due Diligence Findings
The technical due diligence scored security at 5.5/10. The findings below are not theoretical risks — they are documented facts from the RMS codebase audit. 632 routes, 117 controllers, and only 1 Form Request.
632 routes across 117 controllers — only 1 uses a Form Request for input validation. All other endpoints accept unvalidated input, relying on Eloquent mass assignment protection that has been disabled on 50+ models.
ALTER TABLE on a 15,000-schema MySQL instance. Must be done tenant-by-tenant. Failures require manual rollback. Some migrations have been postponed for 6+ months.
OAuth access tokens are configured with a 5-year expiry. Compromised tokens remain valid for years. No token rotation, no short-lived sessions, no refresh token mechanism.
Feature flags are managed via Google Sheets — a single point of failure. If the sheet is inaccessible, feature gate decisions fail. No audit trail, no rollback, no gradual rollout capability.
No automated test gates in the CI/CD pipeline. Code merges to production without passing any automated quality checks. Bugs are caught in production or manual QA.
Secrets and API keys found hardcoded in the source code. Not externalized to vaults or environment config. Any developer with repo access can see production credentials.
Real Failures, Real Merchant Impact
These are not hypothetical risks. These incidents happen regularly and are a direct consequence of the monolith architecture. Every engineer who has been on-call has experienced at least one of these.
The Growth Ceiling
Foodics is expanding its merchant base and location count. The current architecture cannot absorb this growth without proportional increases in infrastructure cost, operational risk, and engineering headcount.
Schema-per-Tenant Does Not Scale
IPO Readiness: Mid-2027
Why “Stabilize & Scale” Is Not Enough
The natural instinct is to fix what we have: migrate to MySQL 8, containerize, add observability, refactor the order path, enforce testing. This is sound engineering — and it would stabilize the system. But stabilization is not transformation. Here is why every “fix-in-place” strategy ultimately fails to solve our real problems.
The Fundamental Trade-Off
The stabilization path takes nearly as long as a full rebuild — but at the end, you still have two PHP monoliths that cannot share data in real-time, still duplicate every feature across RMS and Online, and still cannot support new channels, new markets, or AI-first product experiences. You pay the same price and get a lower ceiling.
What Stabilization Cannot Fix
Two Systems Remain Two Systems
You can containerize both monoliths, add observability, and enforce testing — but RMS and Online are still separate applications with separate data models. A merchant still logs into two portals. A menu change still requires sync. Promotions, loyalty, and customer profiles still do not cross the boundary. No amount of refactoring within either monolith can solve this — because the problem is between them.
No Event-Driven = No Real-Time
Without an event bus like Kafka, every inter-system communication is either a sync job (batch, delayed, fragile) or a direct API call (tight coupling, cascading failures). You cannot build real-time inventory updates, instant menu propagation, or cross-channel order tracking. The 11 documented sync problems do not go away — they just get slightly faster polling.
Modular Monolith ≠ Independent Scaling
A modular monolith improves code organization but still deploys as one unit, shares one database connection pool, and scales as a single artifact. A surge in order volume still means scaling the entire application — including menu, reporting, inventory, and admin modules that do not need it. Domain-level scaling is architecturally impossible.
Still Building Everything Twice
Even with a refactored order placement flow in RMS, the Online system still needs its own order flow. Menu management, promotions, reporting, employee management — every feature exists in two codebases. Every bug fix is applied twice. Every new hire learns two systems. Stabilization does not reduce this duplication — it doubles the stabilization work.
PHP Ceiling on Talent & AI
The PHP/Laravel ecosystem cannot leverage the performance characteristics of Go, the type safety of .NET, or the JVM ecosystem of Kotlin. More critically, AI-assisted development tools deliver dramatically higher productivity with strongly-typed, well-structured codebases. A PHP monolith with 632 unvalidated routes and disabled mass assignment guards is the worst possible starting point for AI-augmented engineering.
No New Capabilities
Stabilization preserves existing functionality. It does not enable CQRS for isolated reporting, event sourcing for audit trails, saga orchestration for complex workflows, or BFF patterns for channel-specific optimization. Every product capability the business needs — real-time analytics, cross-channel loyalty, AI-powered recommendations — requires architectural primitives that the monolith cannot provide.
Why a Full Rebuild Is Feasible Now
Every backend engineer on F7 uses AI-assisted development (Claude) for spec generation, code implementation, test writing, and code review. This is not a marginal productivity gain — it compresses development timelines by 40-60% for greenfield services with well-defined specs. A full rebuild that would have taken 3 years in 2024 is achievable in 18 months in 2026.
F7's data-first pipeline (Data Schemas → Event Specs → API Contracts → DB Schema) produces machine-readable specifications that AI tools can consume directly. The spec is the implementation guide. No ambiguity, no interpretation errors, no back-and-forth. Every service starts from a locked, approved spec — the ideal input for AI-assisted code generation.
Stabilizing a monolith means working around more than a decade of accumulated decisions, undocumented behaviors, and implicit contracts. Building greenfield services means choosing the right tool for each domain, designing clean data models, and writing code that AI tools can reason about. The refactoring tax on legacy code is higher than the cost of building new.
Every Major Platform Company Has Done This
Foodics is not the first company to outgrow its monolith. The companies below faced the same decision point we face today — and every one of them chose to rebuild. Not because their monolith was “bad,” but because their business ambitions outgrew what a monolith could deliver.
Amazon
Monolithic C++ application. Adding a feature took weeks of coordination. Every deployment risked the entire site.
Decomposed into 100s of microservices. Each team owns a service. Led to AWS itself — the infrastructure they built for internal use became a $90B business.
Lesson: The infrastructure they built to serve themselves grew into one of the world's largest businesses in its own right.
Netflix
Monolithic Java application with a single Oracle database. A database corruption in 2008 caused 3 days of downtime — no DVD shipments.
Rebuilt entirely on AWS with microservices, event-driven architecture, and Chaos Engineering. Open-sourced their tooling (Zuul, Eureka, Hystrix).
Lesson: The 2008 failure was their wake-up call. They chose to rebuild rather than stabilize — and became the gold standard for cloud architecture.
Uber
Monolithic Python application (Dispatch). At scale, a single deployment took hours and a failure in payments would take down ride matching.
Rebuilt as domain-oriented microservices with their own service mesh. Each domain (rides, eats, payments) operates independently.
Lesson: They tried to stabilize the monolith first. It did not work. The rewrite was the only path to multi-product (Rides + Eats + Freight).
Shopify
One of the largest Ruby on Rails monoliths ever built. A single codebase serving millions of merchants. Deploy queue was hours long.
Decomposed into domain-driven components using their 'Deconstructing the Monolith' strategy. Each commerce domain became independently deployable.
Lesson: They popularized the 'modular monolith' as a stepping stone — but ultimately, independent services were required for scale.
Airbnb
Monolithic Rails app with a single massive MySQL database. Feature development slowed to a crawl as the codebase grew to millions of lines.
Rebuilt into SOA (Service-Oriented Architecture) with domain-specific services. Invested heavily in Thrift-based service communication and event-driven data platform.
Lesson: The migration took years — but without it, they could not have scaled from rooms to Experiences, or expanded internationally.
SoundCloud
Monolithic Rails app. Any change required full regression. Deployment confidence was low. Feature velocity dropped as the team grew.
Rebuilt into microservices using the Strangler Fig pattern — exactly the same approach F7 uses. New services ran alongside the monolith until cutover.
Lesson: Smaller team, similar challenges. Proved that Strangler Fig works even without FAANG-level resources.
The pattern is universal: every company that succeeded beyond the scale of their original architecture had to rebuild. None of them succeeded by stabilizing the monolith. The question is never “should we rebuild?” — it is “do we rebuild proactively while the business is strong, or reactively after the architecture causes a crisis?” Foodics is choosing the proactive path.
What Your Daily Life Looks Like After F7
F7 is not an abstract architecture diagram. It is the platform you will build on every day. Here is what changes — from the perspective of an engineer writing code, a product manager shipping features, and the business hitting its targets.
F7 Platform Domain Architecture
Order Management, Calculation Engine, Payment Channels, Tax Engine
Menu Service, Product Catalog, Modifier Engine, Pricing
Stock Management, Supply Chain, Purchase Orders, Recipes
CDP, Loyalty, Campaigns, Feedback, Reservations
Merchant Management, Branches, Employees, Roles, Devices
Auth/IAM, Notifications, File Storage, Audit Log, Config
Partner Integrations, Developer Portal, Webhooks, OpenAPIs
Accounting, Payments (Tanar Pay), Capital, Spend Management
Retail, Enterprise, Self-Checkout, Hospitality
Cloud Infra, CI/CD, Observability, Service Mesh, Data & AI
Shadow Data Consumer: Safe, Incremental Migration
The monolith continues running. New services build up their data stores by consuming events in shadow mode. When the data is validated and the service is tested, traffic shifts gradually. No big bang. No downtime. No data loss.
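One way to picture the shadow phase — hypothetical types; the real cutover gates on much richer validation than a price check: the new service builds its read model from the same events the monolith emits, and a comparator confirms both systems give the same answers before any traffic shifts.

```go
package main

import "fmt"

// PriceEvent is a hypothetical domain event emitted by the legacy system.
type PriceEvent struct {
	SKU   string
	Price int64 // minor units
}

// Legacy store: authoritative while the new service runs in shadow.
type Legacy struct{ prices map[string]int64 }

func (l *Legacy) Apply(e PriceEvent)   { l.prices[e.SKU] = e.Price }
func (l *Legacy) Get(sku string) int64 { return l.prices[sku] }

// Shadow service: consumes the same events, serves no live traffic yet.
type Shadow struct{ prices map[string]int64 }

func (s *Shadow) Consume(e PriceEvent) { s.prices[e.SKU] = e.Price }
func (s *Shadow) Get(sku string) int64 { return s.prices[sku] }

// validate compares answers; traffic only shifts once they agree.
func validate(l *Legacy, s *Shadow, skus []string) bool {
	for _, sku := range skus {
		if l.Get(sku) != s.Get(sku) {
			return false
		}
	}
	return true
}

func main() {
	legacy := &Legacy{prices: map[string]int64{}}
	shadow := &Shadow{prices: map[string]int64{}}
	for _, e := range []PriceEvent{{"latte", 1800}, {"latte", 1900}, {"kunafa", 2500}} {
		legacy.Apply(e)   // monolith keeps serving live traffic
		shadow.Consume(e) // new service builds its store in parallel
	}
	fmt.Println(validate(legacy, shadow, []string{"latte", "kunafa"})) // true
}
```

If the comparator ever disagrees, the shadow service is fixed and replays the event log — the monolith never stopped serving, so there is no merchant-visible failure during migration.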
Program Milestones
You Built the Foundation. Now Let's Build the Future.
Every engineer in this organization contributed to making Foodics the #1 restaurant platform in MENA. That is real. That matters. And the skills, domain knowledge, and product intuition you developed building RMS and Online are exactly what F7 needs.
F7 is not a rejection of what you built. It is the next chapter. You are not starting over — you are building the platform that your $59M ARR business deserves. The platform that scales to 82,000 locations. The platform that powers a fintech division. The platform that passes IPO due diligence not with a 6.2/10 — but with flying colors.
Amazon, Netflix, Uber, Shopify — they all made this leap. Their engineers had the same doubts. And on the other side, they built the platforms that defined their industries. That is what we are doing here.