Usage Examples
Image-to-Image with Input
- Connect a TOP to the first input of StreamDiffusionTD (this is the input image)
- SD Mode is set to `img2img` by default
- Use the Step Sliders to control denoising (see the sketch after this list):
  - Higher values = closer to the input image
  - Lower values = more AI transformation
- Create custom effects by changing the prompt and adjusting settings to refine the look of the generated image stream
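These settings can also be driven from Python inside TouchDesigner. The sketch below is illustrative only: the component path and every parameter name (`Sdmode`, `Prompt`, `Steps`) are assumptions, so check the component's parameter pages for the real names.

```python
# Runs inside TouchDesigner (e.g. a Text DAT or the textport).
# All parameter names are illustrative assumptions.

sd = op('streamdiffusiontd')   # assumed path to the StreamDiffusionTD COMP

# Wire a TOP into the first input (the input image for img2img).
sd.inputConnectors[0].connect(op('moviefilein1'))

sd.par.Sdmode = 'img2img'      # img2img is the default mode
sd.par.Prompt = 'a watercolor painting, soft morning light'

# Step slider: higher = closer to the input image,
# lower = more AI transformation.
sd.par.Steps = 2               # assumed name for the step slider
```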
ControlNet (Local Mode)
Local ControlNet uses the second input for preprocessed control images:
- Connect preprocessed image (edges, depth, pose, etc.) to the second input
- Enable ControlNet on the ControlNet parameter page before starting the stream (see the sketch after this list)
- ControlNet must be enabled when the stream starts, but it can be toggled on and off while streaming
- Limited to a single ControlNet for now, though many types are supported
- Automatic download is available for 12 different ControlNet types, depending on your model
- You can use any ControlNet model as long as it matches your base model type
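Scripted, the same setup might look like the sketch below, assuming an Edge TOP named `edge1` supplies the preprocessed control image; the `Controlnet*` and `Startstream` parameter names are likewise assumptions.

```python
# Sketch: local ControlNet has to be enabled before the stream starts.
# Parameter names are assumptions; verify them on your component.

sd = op('streamdiffusiontd')

# The second input takes an already-preprocessed control image;
# here an Edge TOP provides Canny-style edges.
sd.inputConnectors[1].connect(op('edge1'))

sd.par.Controlnet = True          # enable before starting the stream
sd.par.Controlnettype = 'canny'   # any type that matches your base model
sd.par.Startstream.pulse()        # assumed pulse that starts streaming
```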
ControlNet (Daydream Mode)
Daydream ControlNet works differently - it uses the same input as the main image:
- Connect input image to the first input (same as regular img2img)
- Daydream has its own built-in preprocessors, so no external preprocessing is required
- Multiple ControlNets can be combined for complex control
- Each ControlNet has individual weight controls (see the sketch after this list)
- Five ControlNet types available: OpenPose, HED, Canny, Depth, and Color
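Because Daydream preprocesses internally, a scripted setup only needs the raw input plus the per-ControlNet weights. A minimal sketch, with hypothetical `...weight` parameter names:

```python
# Sketch: blend Daydream's built-in ControlNets via per-type weights.
# All parameter names are illustrative assumptions.

sd = op('streamdiffusiontd')

# Same first input as regular img2img; no preprocessing needed.
sd.inputConnectors[0].connect(op('videodevicein1'))

# Combine several of the five types; each weight scales its influence.
sd.par.Openposeweight = 0.8
sd.par.Depthweight    = 0.5
sd.par.Cannyweight    = 0.0    # a zero weight effectively disables a net
```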
V2V Temporal Consistency
- Enable V2V on the V2V parameter page (see the sketch after this list)
- Adjust the temporal settings for smooth frame transitions
- Not compatible with TensorRT acceleration
- Ideal for video sequences and smooth animations
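A small sketch of enabling V2V while respecting the TensorRT incompatibility, again with assumed parameter names:

```python
# Sketch: enable V2V, backing off TensorRT first since the two
# cannot be combined. Parameter names are assumptions.

sd = op('streamdiffusiontd')

if sd.par.Acceleration.eval() == 'tensorrt':
    sd.par.Acceleration = 'none'   # V2V is not TensorRT-compatible

sd.par.V2v = True
sd.par.V2vstrength = 0.6           # hypothetical temporal-smoothing control
```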
Performance Optimization
High FPS Setup
- Use TensorRT acceleration (local mode)
- Use the default resolution (512x512)
- Lower the step count (1-2 steps for the highest FPS)
- Use optimized models such as SD-Turbo or SDXS (see the sketch after this list)
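Collected into a script, a high-FPS configuration might look like this; the model ID is SD-Turbo's Hugging Face name, and the parameter names are assumptions:

```python
# Sketch: high-FPS configuration, parameter names assumed.

sd = op('streamdiffusiontd')
sd.par.Acceleration = 'tensorrt'              # local mode only
sd.par.Resolutionw  = 512                     # default 512x512
sd.par.Resolutionh  = 512
sd.par.Steps        = 1                       # 1-2 steps for top FPS
sd.par.Model        = 'stabilityai/sd-turbo'  # optimized model
```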
Quality Setup
- Increase the step count (2-4 steps)
- Try a higher resolution such as 576x1024 (a 9:16 portrait ratio)
- Use the SDXL-Turbo model for the highest quality (it will be slower)
- Enable V2V for temporal consistency (see the sketch after this list)
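And the quality-leaning counterpart, under the same naming assumptions:

```python
# Sketch: quality-leaning configuration, parameter names assumed.

sd = op('streamdiffusiontd')
sd.par.Steps        = 4                          # 2-4 steps
sd.par.Resolutionw  = 576                        # 576x1024 portrait
sd.par.Resolutionh  = 1024
sd.par.Model        = 'stabilityai/sdxl-turbo'   # best quality, slower
sd.par.V2v          = True                       # temporal consistency
```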