What’s New
v0.3.1 - April 1, 2026
A stability and polish update following the v0.3.0 rebuild: refined installation, new creative tools, and many fixes based on community feedback.
Since the March 24 early release on Discord: minor auto-update tweak and script version string bump.
New Features
- Built-in FX Processors - Two processors ship with the operator, ready to use:
  - Feedback Loop - Deforum-style image feedback with zoom, pan, and rotation
  - Feedback Grade - Color grading that compounds through the feedback loop (brightness, contrast, saturation, gamma, hue rotation, temperature, invert)
- Custom FX Processor Support - Drop your own Python processors into `custom_processors/` and they auto-load. Four pipeline stages are available (image_pre, latent_pre, latent_post, image_post). A Claude skill is included to help you write them. See the FX Processors page for details
- StreamV2V / Cached Attention - Video-to-video mode with configurable max frames and interval. Temporal consistency is back! Requires TensorRT
- Depth TRT Auto-Build - Depth ControlNet TRT engine builds automatically on first use. Roughly 60% faster and a fraction of the VRAM compared to the PyTorch depth preprocessor
- User/Dev Tox Mode - Toggle to preserve your internal operator modifications across backend switches and updates
- FPS Limiting and Benchmark Report - Control GPU load, one-click benchmark to clipboard
- Scheduler/Sampler Selection - Choose between LCM and TCD schedulers, plus multiple sampler options
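To illustrate the custom-processor idea, here is a minimal hypothetical processor; the class shape, the `stage` attribute, and the `process` signature are assumptions for illustration only, not the actual interface (see the FX Processors page for that):

```python
import numpy as np

class InvertProcessor:
    """Hypothetical image_post-stage processor that inverts the frame.
    The contract shown here (a `stage` attribute plus a `process`
    method) is assumed for illustration; consult the FX Processors
    page for the real one."""

    stage = "image_post"  # or: image_pre, latent_pre, latent_post

    def process(self, frame: np.ndarray) -> np.ndarray:
        # Assumes float RGB frames in [0, 1].
        return 1.0 - frame
```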
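The FPS cap amounts to sleeping out whatever remains of each frame's time budget so the GPU is not driven flat-out. A generic sketch of that idea (not the operator's actual code, which is configured from its UI):

```python
import time

class FpsLimiter:
    """Minimal FPS cap: sleep out the remainder of each frame budget.
    Illustrative only; the operator exposes this as a UI setting."""

    def __init__(self, max_fps: float):
        self.frame_budget = 1.0 / max_fps
        self._last = time.perf_counter()

    def wait(self):
        # Sleep only if the frame finished ahead of its budget.
        elapsed = time.perf_counter() - self._last
        if elapsed < self.frame_budget:
            time.sleep(self.frame_budget - elapsed)
        self._last = time.perf_counter()
```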
Installation Overhaul
- New Installer CLI - `verify`, `diagnose`, and `repair` commands built into the operator
- 13-point verification covering PyTorch CUDA, StreamDiffusion core, numpy, the diffusers fork, protobuf, onnx, peft, and more
- Diagnose command outputs GPU info, full package versions, and environment details for bug reports
- Repair command auto-fixes common issues (onnx version, diffusers fork, protobuf pin)
- TensorRT install fixed on Windows - Installs sub-packages instead of the broken meta-package
- Dependency conflicts resolved - Protobuf, numpy, onnx, and diffusers fork version conflicts are caught and fixed automatically
- Commit tracking - Operator verifies your local repos match the expected version on startup
- Installation debug Claude skill - Ships with the operator. If you use Claude Code, it knows how to diagnose and fix StreamDiffusionTD installs
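In spirit, a verification pass is a loop over required distributions. A much-simplified sketch of the idea (the real `verify` command covers 13 points, including CUDA availability and repo pins; `verify_packages` is a name invented here):

```python
import importlib.metadata

def verify_packages(names):
    """Report the installed version of each required package, or None
    if it is missing. A toy sketch of a verify-style check; the real
    command also validates versions, CUDA, and repo commits."""
    report = {}
    for name in names:
        try:
            report[name] = importlib.metadata.version(name)
        except importlib.metadata.PackageNotFoundError:
            report[name] = None  # missing: a candidate for repair
    return report
```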
Daydream Cloud
- Cloud operator is now a separate bundled component included in this release
- ControlNet menu system works on cloud backend
- Pinned to Daydream v0.2.2 (v0.2.3 had a stability issue, fix expected in v0.2.4)
Auto-Update System
- Operator can check for and download updates directly from the About page
- Sign in on the Local operator’s About page to enable update checks
Bug Fixes
- Fixed black frames from prompt weight edge case
- Fixed FX temporal flickering when combining feedback with post-processors
- Fixed multi-ControlNet only sending control image to first slot
- Fixed TRT engine build crashes from dependency version conflicts
- Fixed SDXL ControlNet TRT engine builds
- Fixed stale error messages persisting between sessions
- Cleaned up debug output
- Many other stability improvements
Known Limitations
- TensorRT is required for ControlNet, IPAdapter, and StreamV2V (non-TRT paths are broken)
- Dual ControlNet on 24GB GPUs runs near VRAM ceiling with reduced FPS
- StreamV2V engines are locked to their build resolution
- If you previously ran v0.3.0 dependency fix scripts, you may need to reinstall the diffusers fork (the `repair` command handles this)
v0.3.0 - November 6, 2025
Complete rebuild with SDXL support, IP Adapter, and TensorRT acceleration.
Highlights
- SDXL-Turbo as default model with native SDXL support
- IP Adapter with FaceID for image prompt conditioning
- Local TensorRT acceleration (previously cloud-only)
- Daydream Cloud backend with zero installation
- Multiple ControlNets simultaneously
- FX Processor hook system for custom processing
- Flexible TRT engines supporting 384-1024px without rebuild
- New model compatibility system, YAML config generation, and SharedMemory backend communication
Breaking Changes from v0.2.99
- Must install in a separate folder
- V2V temporal consistency removed (returned in v0.3.1)
- Different default model (SDXL-Turbo vs sd-turbo)
v0.2.99 - August 21, 2025
Bridge release between v0.2.x and the v0.3.0 rebuild.
- V2V Temporal Consistency with cached attention maps and feature injection
- Daydream Cloud backend with cloud TensorRT
- RTX 50-series compatibility improvements
- Enhanced Mac installation (cloud mode)
- Feedback loop toggle
v0.2.6 - February 12, 2025
- Improved Windows installation; fixed the `cached_download` error
- Mac (Apple Silicon) support with built-in install
- Textual Inversion/Embeddings support
- Fixed loading multiple LoRAs and weights
- Easy OP Loading (auto-detects install location)
- Settings 3 page with TensorRT loader and Huggingface downloader
Earlier Versions
v0.2.3 - October 8, 2024
SDXL ControlNet support, installation bug fixes, FPS counter
v0.2.2 - September 5, 2024
Resolution switching fix, improved local model and LoRA support
v0.2.0 - August 18, 2024
ControlNet support, V2V temporal consistency, Pause/Play/Unload commands, offline usage, LoRA improvements
v0.1.11 - May 16, 2024
Mac support, Model Preset Config, non-512x512 TensorRT resolutions
v0.1.8 - April 21, 2024
Major performance update: direct memory buffer replacing NDI+Spout
v0.1.7 - April 11, 2024
SDXL-Turbo and SDXS model support
v0.1.0 - January 11, 2024
Initial release: automated setup, stream control, live parameter updates, LoRA support, NDI/Spout integration