Minimal Mode Bridge + Worker-Pull + Manual Verda — Templates & Deployments hidden
Redis: --
Job Queue (Queue Manager): --
FastAPI Service (Inference): --
Mode: --
Disk Usage: Loading...
Scheduled Jobs
Loading...
Containers

Loading containers...
Logs
Select a container to view logs.
GPU Warm Keep-Alive
Loading warm-GPU status...
Cold Start Info
Loading...
Worker Logs (SFS)
Click Refresh to load worker logs from SFS
Worker Log Viewer
Select a worker log file to view.
Current GPU Deployment
Loading...
Fetching GPU status...

Available Deployments

Disables serverless inference; local GPU workers poll the Redis queue instead
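The worker-pull side of this mode could look roughly like the sketch below. It is a minimal illustration only: the `jobs:<group>` key scheme and the `group_queue`, `handle`, and `poll_once` names are assumptions for the example, not the platform's actual identifiers.

```python
import json

def group_queue(group: str) -> str:
    # Hypothetical key scheme: one Redis list per worker group.
    return f"jobs:{group}"

def handle(job: dict) -> str:
    # Placeholder for real GPU inference; here it just echoes
    # the workflow name so the flow is observable.
    return job["workflow"]

def poll_once(r, group: str, timeout: int = 5):
    """One iteration of a worker-pull loop: block on the group's
    Redis list (BRPOP), decode the job, and run it. Returns None
    if the queue stayed empty for the whole timeout."""
    item = r.brpop(group_queue(group), timeout=timeout)
    if item is None:
        return None
    _key, raw = item
    return handle(json.loads(raw))
```

A real worker would wrap `poll_once` in a `while True:` loop and push results back to a results key; the blocking pop is what makes this pull-based rather than push-based.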
Disk Usage

Loading...
R2 Cloud Storage

Loading...
Directory Browser
Click Refresh to load the directory listing.
Pending: - (Jobs in queue)
Running: - (Processing)
Completed: - (Finished)
Failed: - (Errors)
Job Queue

Loading jobs...
Workshop Sections
Loading workshop schemes...
Workflows: - (Template files)
On Disk: - (Models found)
Missing: - (Models needed)
Disk Free: - (GB available)
Workflow Templates
Click Scan to discover workflow templates and their model dependencies.
No active session
Generations: 47
Responses: 38
Response Rate: 81%
Active Users: 5
Avg Inference: 22s
Research Questions
Session History
Generations per User
Models Used
Research Data

No Active Section

Activate a ComfyuMe template to assign users to groups and warm GPU deployments.
Groups
No groups — activate a section to create them.
Deployment Templates
Loading...
ComfyUI Workflow Templates
Loading...
ComfyuMe Templates (Deployment + Workflow combined)
Loading...
Platform Settings

Runtime toggles for the workshop platform. Changes take effect immediately.

Minimal Mode off
Single-user operation with manual Verda setup. Hides Templates & Deployments tabs, forces Workshop Mode + Group Routing OFF. Flip OFF to enable and test individual systems. (#324)
Isolate Mode off
Blocks all /api/* endpoints (except this toggle). Use for fault isolation — disable everything, then test individual features. (#75)
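The gate described here — reject every `/api/*` request except the toggle itself — can be sketched as a pure path check that a FastAPI/ASGI middleware would call. The toggle path `/api/settings/isolate-mode` and the function names are hypothetical, chosen only for the example.

```python
ISOLATE_MODE = True
# Hypothetical endpoint for the toggle itself, which must stay reachable.
ALLOWED_PATHS = {"/api/settings/isolate-mode"}

def should_block(path: str) -> bool:
    """Fault-isolation guard: with Isolate Mode on, every /api/*
    path is rejected except the allow-listed toggle endpoint.
    Non-API paths (static assets, UI) pass through untouched."""
    if not ISOLATE_MODE:
        return False
    return path.startswith("/api/") and path not in ALLOWED_PATHS
```

In practice this check would sit in a middleware that returns a 503 when `should_block` is true, so flipping the toggle back off never requires a restart.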
Workshop Mode off
Master switch for the workshop template/scheme system. ON = template enforcement, group routing, section activation. OFF = normal ComfyUI operation. (#132)
FLAG ONLY — not yet wired to gate other settings. Use individual toggles below.
Group Routing off
Route jobs to group-specific Redis queues. Each group gets a dedicated GPU deployment. Requires an active section in the Deployments tab. (#132)
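Group routing as described — each group with its own Redis queue feeding a dedicated GPU deployment — might be enqueued like this. The stable-hash assignment, the `jobs:<group>` key scheme, and all function names are assumptions for illustration, not the platform's actual implementation.

```python
import hashlib
import json

def assign_group(user_id: str, groups: list) -> str:
    # Hypothetical: a stable hash spreads users evenly across the
    # active section's groups and keeps each user in one group.
    h = int(hashlib.sha256(user_id.encode()).hexdigest(), 16)
    return groups[h % len(groups)]

def enqueue(r, user_id: str, job: dict, groups: list) -> str:
    """Push a job onto its group's dedicated Redis queue and
    return the queue name used, so callers can report routing."""
    queue = f"jobs:{assign_group(user_id, groups)}"
    r.lpush(queue, json.dumps(job))
    return queue
```

Because assignment is deterministic in `user_id`, a user's jobs always land on the same group queue, which is what lets one warm GPU deployment serve that group consistently.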
User access to native ComfyUI Template System disabled
When disabled, hides ComfyUI's built-in template import/browse UI. Users can only load admin-provided workflows or their own saves. Enable for advanced sessions. (ADR-001, #128)
GPU Overlay on
Floating banner showing real-time GPU inference progress. Takes effect on next page load. (#201)
Queue Progress Sync on
Syncs bridge progress into ComfyUI's native progress bar. Takes effect on next page load. (#201)
GPU Overlay Detail Level user
"admin" shows detailed inference phases, SFS polls, prompt_id. "user" shows simple progress messages.
Inference Timeout 10m
Maximum wait time for serverless inference (60s–1800s). Applies to cold start + generation. (#85)
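The 60s–1800s band and the 10m (600s) default shown above suggest a simple clamp before the timeout is applied; this is a sketch under that assumption, with hypothetical names.

```python
# Bounds from the settings UI: 60s minimum, 1800s maximum.
MIN_TIMEOUT_S = 60
MAX_TIMEOUT_S = 1800
DEFAULT_TIMEOUT_S = 600  # the "10m" shown in the panel

def clamp_timeout(requested_s: int) -> int:
    """Keep the serverless inference timeout inside the allowed band.
    The budget covers cold start plus generation, so out-of-range
    requests are clamped rather than rejected."""
    return max(MIN_TIMEOUT_S, min(MAX_TIMEOUT_S, requested_s))
```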

New Deployment Template

1. GPU
2. Groups
3. Scaling
4. Save

New ComfyuMe Template

Activate Section

Select a ComfyuMe template to activate. This assigns users to groups based on the deployment template's configuration.