AI OTEL


The Three-Level Brain

A company brain for every hotel. Not marketing language — literal architecture.

The brain isn't one thing. It's three brains, each operating at a different level of abstraction, each feeding the others. This is what makes us uncopiable.

3 levels · Industry, OC, Property
€800K · AI/ML over 36 months
18 mo · property-brain accumulation

Why three brains

No single hotel can learn what the network learns. No foreign vendor can ever access the data.

We're building a company brain for every hotel — a system that pulls fragmented operational knowledge out of reservations, communications, audits, and supplier history, structures it, keeps it current, and turns it into executable skills the AI uses to run the property. This is the primitive YC calls out as the missing layer between raw company data and reliable AI automation.

We were already building it. v4 names it correctly. And we name three of them — because the single-brain framing missed what makes this architecture compounding rather than merely capable.

Level 1

The Industry Brain

A foundation layer trained on the entirety of Türkiye hospitality operational signal.

What it knows

Hospitality at the level of Türkiye as a system. Macro demand patterns. Cross-property correlations. Regional inbound flow signatures. Seasonal anomalies that span hotels.

Why it matters

Activates the network density moat. At 200 hotels, marginally useful. At 1,000 hotels, better than any consultancy in the country. At 2,500 hotels, the most authoritative operational intelligence about Türkiye hospitality that has ever existed.

Level 2

The OC Brain

The institutional brain of the AI Otel platform itself — our own G-Brain.

What it knows

How AI Otel works as a company. How to onboard a hotel. How to debug an integration. How to draft a compliance response. How to escalate a payment provider issue. The accumulated operational knowledge of the platform organization itself.

Why it matters

Two reasons. Internal — the platform team scales without scaling headcount linearly. External — it is the layer that fine-tunes Level 3, transferring "what good hospitality operations look like" to each property.

Level 3

The Property Brain

Per-hotel cognitive instance. Trained on the property's own 18-month operational history.

18-month clock starts at signup

What it knows

This hotel's repeat guests by name. This hotel's pricing history and what worked. This hotel's complaint resolution patterns. This hotel's seasonal voice. This hotel's KBS edge cases. This hotel's supplier rhythms. This hotel's staff workflows.

Why it matters

Per-customer moat. The hotel cannot transfer this brain elsewhere — it was trained on data that lives in our database under their tenancy. Switching to any competitor isn't just software migration; it's a measurable cognitive downgrade.

Knowledge flow

How the three brains feed each other.

A true compounding architecture. Every hotel that joins makes every other hotel's brain better. Foreign vendors with one model serving everyone cannot match this — they cannot even structure their data to match this.

Level 1: Industry Brain · Türkiye hospitality system
  UP: anonymized signal
  DOWN: accumulated intelligence

Level 2: OC Brain · AI Otel platform institutional knowledge
  UP: operational signal
  DOWN: adaptation transfer

Level 3: Property Brains · per-hotel cognitive instances

Knowledge flows up: every property brain (Level 3) anonymizes and contributes operational signal to the industry brain (Level 1).
Knowledge flows down: every property brain inherits accumulated platform knowledge from the OC Brain (Level 2) and accumulated industry intelligence from the Level 1 brain. The OC Brain itself improves as we run more deployments.
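The up/down flow above can be sketched in code. This is a minimal illustration with hypothetical class and field names (nothing here is the production API): a property brain strips tenant identifiers before contributing signal upward, and records the layers it inherits downward.

```python
from dataclasses import dataclass, field

@dataclass
class IndustryBrain:  # Level 1
    signals: list = field(default_factory=list)

    def ingest(self, anonymized_signal: dict) -> None:
        self.signals.append(anonymized_signal)

@dataclass
class PropertyBrain:  # Level 3
    hotel_id: str
    inherited_layers: list = field(default_factory=list)

    def contribute_upward(self, industry: IndustryBrain, event: dict) -> None:
        # Strip tenant identifiers before the signal leaves the property.
        anonymized = {k: v for k, v in event.items() if k != "hotel_id"}
        industry.ingest(anonymized)

    def inherit_downward(self, layer_name: str) -> None:
        # OC Brain (Level 2) and Industry Brain (Level 1) transfers land here.
        self.inherited_layers.append(layer_name)

industry = IndustryBrain()
prop = PropertyBrain(hotel_id="hotel-001")
prop.contribute_upward(industry, {"hotel_id": "hotel-001", "occupancy": 0.82})
prop.inherit_downward("oc-brain-playbooks")
prop.inherit_downward("industry-adapter-q3")
```

The key property the sketch encodes: what flows up is anonymized, and what flows down is accumulated, so every new property improves the network without exposing any tenant's data.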

Model strategy — the honest version

Customer requirements dictate the deployment.

v3 was loose about which models we deploy on. v4 is precise. Saying "we deploy on Kumru and T3AI" is positioning theater. Saying "we deploy on the strongest Turkish-capable base for each customer's requirements" is honest engineering — and still aligns with the National AI Strategy.

International / no sovereignty requirement

Frontier closed models

  • Claude (Anthropic)
  • GPT-class (OpenAI)
  • Gemini (Google)

Best raw capability, deployed through standard cloud APIs. This is what most international customers run on; capability ceiling matters more than sovereignty. AI Otel uses these as the default base for operations that require deep reasoning.

Türkiye standard tier

Strong multilingual open weights

  • Qwen 3
  • Llama 3.x
  • Gemma 3

Self-hosted on Türkiye-resident GPU infrastructure. These are the models that current evaluation benchmarks show actually perform best on Turkish content, even though they are not "Turkish models." Honest engineering trumps political branding. Customer data stays in Türkiye; the model weights are open and transparent.

Türkiye sovereignty tier

Trendyol open-weight Turkish-tuned

  • LLM-8B-T1 (on Qwen 3)
  • Asure-12B (on Gemma 3)

Apache-2.0 licensed, real production lineage, the most credible Turkish-tuned base in the open ecosystem. As Türkiye's national foundation model effort matures and produces verified, benchmarked, production-ready weights, we add it to the deployment menu. We do not pretend any specific Turkish foundation model is the right answer until benchmarks say it is.

Why this matters for the pitch. Sophisticated reviewers will recognize this as the correct posture. Positioning theater would actually weaken our credibility with people who know.

Fine-tuning pipeline

Six stages. Concretely. End-to-end.

01

Base selection per customer

Customer onboarding includes a sovereignty assessment that maps to one of the three deployment tiers. The base model is selected accordingly.

Cadence: At onboarding
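The sovereignty assessment can be read as a simple decision rule over the three tiers described above. This is an illustrative sketch: the tier keys, question names, and `select_tier` function are hypothetical, not the onboarding product.

```python
# Map a customer's sovereignty assessment to a deployment tier.
# Tier contents mirror the three-tier menu above.
TIER_BASES = {
    "frontier": ["Claude", "GPT-class", "Gemini"],
    "tr-standard": ["Qwen 3", "Llama 3.x", "Gemma 3"],
    "tr-sovereign": ["LLM-8B-T1", "Asure-12B"],
}

def select_tier(data_must_stay_in_turkiye: bool,
                requires_turkish_tuned_weights: bool) -> str:
    if requires_turkish_tuned_weights:
        return "tr-sovereign"   # open Turkish-tuned weights
    if data_must_stay_in_turkiye:
        return "tr-standard"    # self-hosted open weights in Türkiye
    return "frontier"           # no constraint: capability ceiling wins

assert select_tier(False, False) == "frontier"
assert select_tier(True, False) == "tr-standard"
assert select_tier(True, True) == "tr-sovereign"
```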
02

Industry-level adaptation (Level 1 brain transfer)

Continued pre-training on a curated Turkish hospitality corpus — anonymized network data, public hospitality literature, regulatory documentation, service vocabulary — on top of the chosen base. LoRA adapters for parameter-efficient adaptation.

Cost: $15–40K per base · Cadence: Refreshed quarterly
03

Platform-level adaptation (Level 2 brain transfer)

Instruction tuning with structured tool-use examples from the OC Brain corpus — playbooks, runbooks, documentation, support intelligence, integration patterns. Each property brain inherits this layer at deployment.

Cadence: Continuous
04

Property-specific adaptation (Level 3 emergence)

The hotel runs on AI Otel and accumulates operational signal. Every 90 days, the property brain receives a fine-tune update on that property's operational data. Per-property LoRA adapters are layered on the industry- and platform-adapted base. Per-property weights live in our data layer under the hotel's tenancy; a hotel that switches providers cannot take the adapter with it.

Cadence: 90-day refresh per property
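Two mechanics of this stage are worth making concrete: the adapter stacking order and the 90-day refresh clock. The sketch below is illustrative; the layer names and `next_refresh` helper are assumptions, not production code.

```python
from datetime import date, timedelta

# Property adapter sits on top of the industry- and platform-adapted base.
ADAPTER_STACK = ["base", "industry-lora", "platform-lora", "property-lora"]

def next_refresh(last_refresh: date, cadence_days: int = 90) -> date:
    """Schedule the next per-property fine-tune under the 90-day cadence."""
    return last_refresh + timedelta(days=cadence_days)

assert ADAPTER_STACK[-1] == "property-lora"  # property layer is topmost
assert next_refresh(date(2026, 1, 1)) == date(2026, 4, 1)
```

Because only the topmost layer is property-specific, a refresh retrains a small adapter rather than the whole stack, which is what keeps per-property cost marginal.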
05

Continuous evaluation and retraining

Internal eval harness scoring brain output against human-rated benchmarks: pricing recommendation accuracy vs. revenue manager judgment, guest communication quality vs. brand voice rubric, compliance handling correctness vs. regulatory ground truth.

Cadence: Monthly retraining cycles
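The eval harness logic reduces to scoring each dimension against a floor and flagging anything that falls below it. The dimensions come from the stage description; the threshold values and function name below are made up for illustration.

```python
# Hypothetical quality floors per evaluation dimension.
THRESHOLDS = {
    "pricing_accuracy": 0.90,       # vs. revenue manager judgment
    "communication_quality": 0.85,  # vs. brand voice rubric
    "compliance_correctness": 0.99, # vs. regulatory ground truth
}

def needs_retraining(scores: dict) -> list:
    """Return the dimensions that fell below their threshold."""
    return [dim for dim, floor in THRESHOLDS.items()
            if scores.get(dim, 0.0) < floor]

assert needs_retraining({"pricing_accuracy": 0.93,
                         "communication_quality": 0.88,
                         "compliance_correctness": 0.995}) == []
assert needs_retraining({"pricing_accuracy": 0.80,
                         "communication_quality": 0.88,
                         "compliance_correctness": 0.995}) == ["pricing_accuracy"]
```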
06

Distillation and serving optimization

As the platform reaches scale, we move from inference-on-demand to serving-optimized deployment: distilled smaller variants of property brains for fast inference on routine operations, full-capacity inference for complex reasoning, hybrid routing within the platform.

Cadence: 2028+ engineering
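The hybrid-routing idea is a small decision: routine operations hit a distilled property-brain variant, everything else hits the full-capacity model. The operation taxonomy here is hypothetical, sketched only to show the shape of the router.

```python
# Illustrative set of routine operations served by the distilled variant.
ROUTINE_OPS = {"checkin_message", "invoice_lookup", "faq_reply"}

def route(operation: str) -> str:
    """Route routine ops to the distilled model, complex ops to the full one."""
    return "distilled" if operation in ROUTINE_OPS else "full"

assert route("faq_reply") == "distilled"
assert route("quarterly_pricing_strategy") == "full"
```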

Cost & funding

What this costs and where the funding comes from.

LoRA fine-tuning per base, per quarter: ~$15–40K compute. With one base per tier across three deployment tiers, that is ~$45–120K per quarter for industry-level adaptation.

Property-specific adapters: marginal. Adapter storage and inference costs are dominated by the operational platform, not by the fine-tuning itself. Per-property fine-tuning runs in cents per refresh.
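The quarterly budget is a two-line multiplication. The sketch below assumes one base per tier (an assumption; running multiple bases per tier would scale the figure accordingly).

```python
# Back-of-envelope check of the quarterly industry-adaptation budget.
per_base_low, per_base_high = 15_000, 40_000  # $ of LoRA compute per base
tiers = 3                                     # one base per tier (assumed)

quarterly_low = per_base_low * tiers
quarterly_high = per_base_high * tiers

assert (quarterly_low, quarterly_high) == (45_000, 120_000)
```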

The Sectoral Adaptation Project Call — TRY 50M (~€1.4M) per project — directly funds Stage 2 industry-level adaptation on Turkish foundation models for hospitality. This is exactly what the policy was written for. As Turkish foundation models reach production capability, we add them to our deployment menu and apply for grant funding to drive their adaptation.

Total platform AI/ML investment over 36 months: ~€800K, revised up from v3's €300K to reflect the engineering reality of a three-level brain architecture and continuous fine-tuning. The larger budget makes the engineering math add up.

The brain only matters if agents can call it.

See the agent-native architecture that exposes every operation to MCP.