Personalized Retail Recommender
What if a shopping experience actually knew you — and got smarter every time?
This prototype demonstrates an AI-powered retail recommendation system that goes far beyond "customers also bought." Using a multi-agent architecture, vector embeddings, and a persistent contextual memory layer, it surfaces products that are genuinely relevant to who you are, what you're doing right now, and how you've behaved across sessions — then explains why. Oh, and the UI itself? Generated by AI agents on the fly.
The Problem
What Traditional Engines Get Wrong
Traditional recommendation engines are static rules dressed up as intelligence. They know what you bought. They don't know what you mean.
They don't adapt. They don't explain themselves. And they definitely don't generate a personalized interface every time you open the app.
This prototype asks: what happens when you replace that with a living, context-aware agent system?
Static Rules
Same logic every time, no matter who you are or what you need right now.
No Explanation
Recommendations appear with no reasoning — users can't trust what they can't understand.
No Memory
Each session starts from scratch. Behavioral signals are wasted.
How It Works: The Agentic Pipeline
Six agents collaborate in sequence — and loop back on themselves to keep improving.
Request Context Agent
Captures real-time intent: what page are you on, what have you been browsing, what categories are active in this session?
User History Lookup Agent
Retrieves long-term behavioral patterns: past views, clicks, purchases, category affinities.
Item Context Enrichment Agent
Identifies candidate products, enriches them with metadata, and attaches explanation signals: "similar to," "frequently co-purchased," "trending in your region."
Item Recommender Agent
Scores and ranks candidates, then generates a human-readable explanation for every recommendation shown.
User Feedback Agent
Watches what you do next: clicks, skips, dwell time, dismissals. All of it becomes signal.
Contextual Memory Agent
Updates both short-term (this session) and long-term (your evolving preferences) memory. Future recommendation cycles use this updated profile.

The result is a closed loop: the more you use it, the more it knows. The more it knows, the better it gets.
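In code, one pass through that loop might look roughly like the sketch below. All names, scoring rules, and data shapes here are illustrative, not the prototype's actual implementation.

```python
# Illustrative sketch of the six-agent loop; names and scoring are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Profile:
    user_id: str
    long_term: dict = field(default_factory=dict)   # evolving category affinities
    short_term: list = field(default_factory=list)  # this session's events

CATALOG = [
    {"sku": "shoe-42", "category": "running", "signal": "similar to past views"},
    {"sku": "sock-07", "category": "running", "signal": "frequently co-purchased"},
    {"sku": "mug-19", "category": "kitchen", "signal": "trending in your region"},
]

def recommend(profile: Profile, session_categories: list[str]) -> list[dict]:
    """Agents 1-4: capture context, look up history, enrich candidates, rank and explain."""
    ranked = []
    for item in CATALOG:                                          # candidates + enrichment signals
        affinity = profile.long_term.get(item["category"], 0.0)   # long-term history lookup
        in_session = 1.0 if item["category"] in session_categories else 0.0  # real-time intent
        ranked.append({
            **item,
            "score": affinity + in_session,
            "reason": item["signal"],                             # human-readable explanation
        })
    return sorted(ranked, key=lambda r: r["score"], reverse=True)

def record_feedback(profile: Profile, sku: str, action: str) -> None:
    """Agents 5-6: turn clicks and skips into memory the next cycle can use."""
    profile.short_term.append({"sku": sku, "action": action})
    category = next(i["category"] for i in CATALOG if i["sku"] == sku)
    delta = 0.2 if action == "click" else -0.1
    profile.long_term[category] = profile.long_term.get(category, 0.0) + delta

profile = Profile(user_id="u-123")
print(recommend(profile, ["running"]))
record_feedback(profile, "shoe-42", "click")
print(recommend(profile, ["running"]))  # the closed loop: the next cycle ranks smarter
```

The point is the last two calls: feedback recorded after the first pass changes the ranking produced by the second.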
The AG-UI Leap: AI-Generated User Interface
The standout feature of this prototype isn't just the recommendations — it's how they're presented.
What is AG-UI?
When you open the app, the UI itself is generated by AI agents based on your profile. The product cards, shelf groupings, section labels, and layout are not pre-coded templates. They're assembled on the fly by reading your contextual profile and rendering components accordingly.
This is built on a protocol called AG-UI (Agent-User Interaction), which enables lazy loading of live LLM context so the experience can be generated in real time and made available to the UI or other systems.
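As a rough illustration, the payload an agent hands to the client might look something like the structure below. The field names and component types are hypothetical, not the prototype's actual AG-UI message format.

```python
# Hypothetical agent-generated UI payload; field names and component types
# are illustrative, not the prototype's actual AG-UI message format.
import json

layout = {
    "screen": "home",
    "sections": [
        {
            "component": "shelf",
            "label": "Pick up where you left off",
            "items": [
                {"sku": "shoe-42", "card": "hero",
                 "reason": "Because you viewed similar running shoes"},
                {"sku": "sock-07", "card": "compact",
                 "reason": "Frequently bought with items like these"},
            ],
        },
        {
            "component": "carousel",
            "label": "Trending near you",
            "items": [
                {"sku": "mug-19", "card": "compact",
                 "reason": "Popular in your area this week"},
            ],
        },
    ],
}

# The client walks this structure, mapping each "component" and "card" value
# to a registered React Native component, so no screen layout is hard-coded.
print(json.dumps(layout, indent=2))
```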
📱 iOS Simulator
Runs via Expo, accessible through a Cloudflare tunnel — shareable to any device without a local install.
Rating System
Star/dismiss interactions stored as structured memory events, feeding the next generation cycle.
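One plausible shape for such an event is sketched below; the field names are illustrative rather than the prototype's actual schema.

```python
# Illustrative structured memory event; field names are not the real schema.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class MemoryEvent:
    user_id: str
    sku: str
    action: str      # e.g. "star", "dismiss", "click", "dwell"
    value: float     # star rating, dwell seconds, etc.
    session_id: str
    occurred_at: str

event = MemoryEvent(
    user_id="u-123",
    sku="shoe-42",
    action="star",
    value=4.0,
    session_id="s-789",
    occurred_at=datetime.now(timezone.utc).isoformat(),
)
print(asdict(event))  # persisted, then folded into short- and long-term memory
```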
How the UI Was Built
Spec Merlin
Mike built a custom Claude skill called Spec Merlin that scanned the existing codebase and auto-generated all the spec documents needed to describe the system — including UX and UI design documents.
Those specs were then fed into a knowledge graph, and feature-specific agents were spun up to implement each section independently.
Action Control Summary System (ACSS)
Coordinated through a framework called the Action Control Summary System (ACSS), each agent worked on its feature domain independently while staying aligned to the overall system spec.
In short: the UI was built by agents reading specs that were themselves generated by agents reading code. Dog food, all the way down.
This recursive, self-documenting build process represents a new paradigm for AI-assisted software development — where the system describes itself and then builds itself.
Tech Stack
A modern, cloud-native architecture built for scale, speed, and intelligent context management.
Python / FastAPI
High-performance async backend powering the agent orchestration layer.
React Native
Cross-platform mobile frontend, demoed on iOS, with dynamic component rendering for AG-UI.
PostgreSQL + Qdrant
Relational persistence paired with a vector database for semantic similarity search.
Anthropic Claude API
Core LLM powering agent reasoning, explanation generation, and UI assembly.
Docker + Cloudflare
Containerized deployment with Cloudflare Tunnel for frictionless live sharing.

Migration note: The original backend, built by Kevin Quon, used AWS Bedrock with Titan embeddings; it has since been migrated to the Anthropic Claude API directly. Google Analytics-style behavioral data is ingested and pre-processed with LLM summarization before being indexed into the CARS (Context-Aware Recommender System) architecture.
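As a sketch of the retrieval side, the enrichment step might query Qdrant with an embedding of the current contextual profile, roughly as below. The collection name, vector size, and embedding source are assumptions, not the prototype's actual configuration, and the snippet presumes a running Qdrant instance plus the qdrant-client package.

```python
# Hypothetical candidate retrieval against Qdrant; collection name, vector
# dimension, and the embedding source are assumptions, not the real config.
from qdrant_client import QdrantClient

client = QdrantClient(url="http://localhost:6333")

# In the real pipeline this vector would come from an embedding model run over
# the user's contextual profile; here it is a fixed stand-in.
context_vector = [0.02, -0.11, 0.47] + [0.0] * 381  # 384-dim placeholder

hits = client.search(
    collection_name="products",
    query_vector=context_vector,
    limit=10,
    with_payload=True,  # product metadata consumed by the enrichment agent
)

for hit in hits:
    print(hit.id, hit.score, hit.payload.get("title"))
```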
What Makes This Different
Traditional Systems
  • Deterministic — same logic every time
  • Reprogrammed manually to change behavior
  • No explanation for recommendations
  • One UI for all users
  • Behavioral signals largely wasted
This System
  • Agentic — fresh reasoning pass every cycle
  • Self-improving through structured memory
  • Every recommendation includes a plain-language reason
  • UI generated uniquely per user per session
  • Every interaction becomes a learning signal
"Because you viewed similar running shoes" or "Popular in your area this week." — That transparency isn't cosmetic. It's part of how the feedback loop works.
What's Next
The team is exploring whether the AG-UI approach can be taken even further — adapting not just which products appear, but what kind of UI layout a user prefers.
Layout Preference Learning
Do users prefer grid views or carousels? Dense or minimal cards? The system could learn this too — adapting the visual structure of the experience, not just its content.
Truly Unique Experiences
The bigger question Kevin and Mike are exploring: can an app have a backend and interface that are genuinely different for every single user?
Continuous Evolution
As the memory layer matures, the system's model of each user deepens — enabling recommendations and interfaces that evolve over months, not just sessions.
Come See It Live
Part of the nvisia AI Lab Prototype Series · AI Symposium · Chicago · May 7, 2026
Kevin Quon
Principal Architect — backend systems, CARS architecture, AWS Bedrock migration, behavioral data pipeline.
Michael Arce
UX Lead — AG-UI design, Spec Merlin, ACSS framework, iOS frontend and agent-generated component system.
nvisia AI Lab
Exploring the frontier of agentic systems, contextual AI, and the future of human-software interaction.
Live demo available at the booth — experience a recommendation engine that knows you.