MobilEA
At a glance
- Industry: Mobility & Transportation
- Focus: Computer vision, route optimization, real-time orchestration
- Goal: Unified mobility workflow with reliable multi-component AI orchestration
- Duration: 8 months from prototype to production
Context
MobilEA was designed to optimize urban mobility operations by connecting multiple AI components into a single, coherent workflow. The system needed to process real-time video feeds, make route optimization decisions, and coordinate with operational teams — all within strict latency budgets.
The challenge was not building one great model. It was making multiple AI components work together reliably under production pressure. This is a typical Applied AI integration problem: the system fails not because any single model is bad, but because the seams between components break.
The hardest part wasn’t the AI. It was making the AI components agree on what “done” means.
Challenge
Primary objective: Combine computer vision outputs with optimization logic into a reliable mobility workflow that operators could trust.
Key constraints:
- Sub-second decision latency for time-critical paths
- 99.5%+ uptime for the orchestration layer
- Graceful degradation when any single component fails
- Clear audit trail for every decision
Technical Approach
Component Architecture
We designed explicit contracts between every AI component. Each module declared:
- Input schema: What data it expects, in what format
- Output schema: What it produces, with confidence metadata
- SLA: Maximum latency, fallback behavior on timeout
- Health signals: How to know if the component is healthy
This contract-first approach meant we could swap components, run A/B tests, and debug failures without guessing at the interface.
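A contract like this can be sketched as a small declaration object. This is a minimal illustration, not the production code; the field names, schemas, and the `detector_contract` example are all hypothetical:

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass(frozen=True)
class ComponentContract:
    """Declares what a component expects, produces, and guarantees."""
    name: str
    input_schema: dict[str, type]           # field name -> expected type
    output_schema: dict[str, type]          # must include confidence metadata
    max_latency_ms: int                     # SLA: hard timeout per call
    fallback: Callable[[], dict[str, Any]]  # returned when the SLA is breached
    health_check: Callable[[], bool]        # liveness signal for the orchestrator

def validate(payload: dict[str, Any], schema: dict[str, type]) -> bool:
    """Reject payloads that do not match the declared schema."""
    return all(k in payload and isinstance(payload[k], t) for k, t in schema.items())

# Hypothetical contract for the object-detection component:
detector_contract = ComponentContract(
    name="object-detector",
    input_schema={"frame_id": int, "jpeg_bytes": bytes},
    output_schema={"frame_id": int, "boxes": list, "confidence": float},
    max_latency_ms=150,
    fallback=lambda: {"frame_id": -1, "boxes": [], "confidence": 0.0},
    health_check=lambda: True,
)
```

Because every component carries the same declaration, the orchestrator can validate payloads at the seam instead of discovering schema drift in production.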
CV Pipeline
The computer vision pipeline processed multiple video streams in parallel:
- Object detection for vehicles and pedestrians
- Tracking across frames with ID persistence
- Event detection (stops, lane changes, anomalies)
- Confidence-weighted outputs for downstream consumption
We used temporal smoothing to reduce noise — a single missed detection shouldn’t trigger an operational alert.
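One simple form of temporal smoothing is a sliding-window majority vote: an event is reported only if it persists across most recent frames. A sketch under that assumption (window size and threshold are illustrative):

```python
from collections import deque

class TemporalSmoother:
    """Suppress single-frame noise: report an event only when it is seen
    in at least `threshold` of the last `window` frames."""
    def __init__(self, window: int = 5, threshold: float = 0.6):
        self.history: deque[bool] = deque(maxlen=window)
        self.threshold = threshold

    def update(self, detected: bool) -> bool:
        self.history.append(detected)
        # Fraction of recent frames in which the event was seen
        return sum(self.history) / len(self.history) >= self.threshold

smoother = TemporalSmoother(window=5, threshold=0.6)
# A single missed detection in an otherwise-stable track does not drop the event:
states = [smoother.update(d) for d in [True, True, False, True, True]]
```

The symmetric case also holds: a single spurious detection in a stream of misses never crosses the threshold, so no alert fires.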
Optimization Engine
The route optimization layer consumed CV outputs and computed:
- Dynamic routing based on current traffic conditions
- Load balancing across fleet resources
- Estimated time adjustments with confidence intervals
The optimizer was designed to be conservative. When CV confidence dropped, the optimizer widened its uncertainty bounds rather than making overconfident decisions.
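The widening behavior can be illustrated with a toy ETA calculation: the uncertainty margin grows as perception confidence drops. The function, margins, and confidence floor below are illustrative assumptions, not the production solver:

```python
def eta_interval(base_eta_s: float, cv_confidence: float,
                 base_margin_s: float = 30.0, floor: float = 0.2) -> tuple[float, float]:
    """Widen the ETA uncertainty band as perception confidence drops.

    At confidence 1.0 the margin is `base_margin_s`; as confidence falls
    toward `floor`, the margin grows inversely, so the optimizer commits
    to a wider band rather than an overconfident point estimate.
    """
    conf = max(cv_confidence, floor)   # never divide by ~0
    margin = base_margin_s / conf
    return (base_eta_s - margin, base_eta_s + margin)

# High confidence -> tight bounds; degraded confidence -> wider bounds
tight = eta_interval(600.0, 1.0)   # (570.0, 630.0)
wide = eta_interval(600.0, 0.5)    # (540.0, 660.0)
```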
Orchestration Layer
The orchestration service tied everything together:
- Request routing: Directed inputs to the right components
- Retry logic: Handled transient failures with exponential backoff
- Circuit breakers: Prevented cascade failures when a component was unhealthy
- Timeout management: Enforced SLAs across the entire pipeline
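The retry and circuit-breaker behavior above can be sketched in a few lines. This is a simplified illustration of the pattern (thresholds, cooldowns, and names are assumptions), not the orchestration service itself:

```python
import time

class CircuitBreaker:
    """Stop calling a component after repeated failures; probe again after a cooldown."""
    def __init__(self, failure_threshold: int = 3, reset_after_s: float = 30.0):
        self.failure_threshold = failure_threshold
        self.reset_after_s = reset_after_s
        self.failures = 0
        self.opened_at: float | None = None

    def allow(self) -> bool:
        if self.opened_at is None:
            return True  # circuit closed: calls pass through
        # Half-open: let a probe request through after the cooldown
        return time.monotonic() - self.opened_at >= self.reset_after_s

    def record(self, success: bool) -> None:
        if success:
            self.failures, self.opened_at = 0, None  # close the circuit
        else:
            self.failures += 1
            if self.failures >= self.failure_threshold:
                self.opened_at = time.monotonic()    # open the circuit

def backoff_delays(base_s: float = 0.1, retries: int = 4) -> list[float]:
    """Exponential backoff schedule for transient failures."""
    return [base_s * (2 ** i) for i in range(retries)]  # 0.1, 0.2, 0.4, 0.8
```

Together these keep one unhealthy component from dragging the whole pipeline down: retries absorb transient failures, and the breaker sheds load when failures persist.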
Trade-offs
| Decision | Trade-off |
|---|---|
| Conservative optimization | Slightly suboptimal routes in exchange for fewer failures |
| Explicit contracts | More upfront design work, but easier debugging and iteration |
| Temporal smoothing | Some latency added for stability |
| Circuit breakers | Occasional dropped requests to protect system health |
Design principles:
- Reliability over novelty. We preferred predictable behavior over experimental model tweaks.
- System constraints first. Latency and uptime budgets drove architecture decisions, not model accuracy targets.
- Observability investment. We spent significant effort on logging, metrics, and tracing before optimizing models.
Results
| Metric | Outcome |
|---|---|
| Decision latency | Under 1 second for critical paths |
| Service availability | 99.5–99.9% for orchestration layer |
| Manual handoffs | Reduced by 40% through unified coordination |
| Incident response | Mean time to detection under 2 minutes |
Stack
- CV Pipeline: Edge-optimized models, frame batching, confidence-weighted outputs
- Optimization Engine: Constraint-based solver with uncertainty handling
- Orchestration: Event-driven architecture with circuit breakers and retries
- Monitoring: Distributed tracing, real-time dashboards, alerting
Key Learnings
- Hybrid AI systems fail at the seams. Component interfaces need explicit contracts, not implicit assumptions.
- Clear interface contracts matter as much as model accuracy. A brilliant model with a vague output schema will cause production incidents.
- Invest in observability early. You can’t debug what you can’t see.
- Design for failure. Every component will fail eventually — the question is whether the system survives.