Nodes
This page explains node types you will use on the canvas.
1. Prompt node (core)
Prompt nodes call models and produce outputs.
What you can control
- output modality: text / image / video / audio
- capability and tier (when route groups are enabled)
- parameters like aspect ratio, resolution, and duration
- batch size: x1 / x2 / x4
Inputs and outputs
- Inputs: text, reference image, reference video (depending on the selected model)
- Outputs: text/image/video/audio (based on generation result)
Usage tips
- Run x1 first to validate direction, then scale to x2/x4.
- Change only a few parameters per iteration.
2. Input nodes
Input Text
Store prompts, scripts, or context. Reusable across multiple Prompt nodes.
Input Image
Upload or drag/drop images. Common for image-to-image and image-guided video.
Input Video
Upload or drag/drop videos. Common for remix, extend, and enhancement flows.
Input Audio
Upload or drag/drop audio. Common for music continuation and speech workflows.
3. Result node
Displays generation output from Prompt nodes.
- auto-adapts viewer by content type
- supports download and chaining to next steps
- can be reused as upstream input
4. Split Output node
Splits long upstream text into multiple segments.
Typical use case:
- split a storyboard script into scenes, then route each scene into downstream generation.
Split rules:
- if the text is numbered (1. or 1)), split by numbering
- otherwise, split by blank lines
You can also add, remove, and edit segments manually.
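The split rules above can be sketched in a few lines of Python. This is an illustrative approximation of the behavior, not the product's actual implementation; the function name and regex are assumptions.

```python
import re

# Matches lines that start with numbering like "1. " or "1) "
# (pattern is an assumption based on the documented split rules).
NUMBERED = re.compile(r"^\s*\d+[.)]\s+")

def split_segments(text: str) -> list[str]:
    """Split text by numbering if numbered lines exist,
    otherwise by blank lines."""
    lines = text.splitlines()
    if any(NUMBERED.match(line) for line in lines):
        segments, current = [], []
        for line in lines:
            # A new numbered line starts a new segment.
            if NUMBERED.match(line) and current:
                segments.append("\n".join(current).strip())
                current = []
            current.append(line)
        if current:
            segments.append("\n".join(current).strip())
        return [s for s in segments if s]
    # No numbering found: split on one or more blank lines.
    return [s.strip() for s in re.split(r"\n\s*\n", text) if s.strip()]
```

For example, a three-scene storyboard script numbered `1.` through `3.` yields three segments, each ready to route into its own downstream Prompt node.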
5. Global Context node
Stores project-level defaults and shared constraints.
Common use cases:
- style constraints and naming conventions
- combined with workflow settings so new Prompt nodes get consistent defaults
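Conceptually, the Global Context node behaves like a shared defaults layer that node-level settings can override. The sketch below is only a mental model, assuming hypothetical field names; it is not the product's API.

```python
# Hypothetical project-level defaults stored in a Global Context node.
global_context = {
    "style": "watercolor, soft lighting",
    "aspect_ratio": "16:9",
}

def resolve_settings(node_settings: dict, context: dict) -> dict:
    """Merge shared defaults with node-level settings;
    node-level values take precedence."""
    return {**context, **node_settings}

# A new Prompt node inherits the context's style but overrides the ratio.
prompt_node = {"aspect_ratio": "9:16"}
resolved = resolve_settings(prompt_node, global_context)
```

In this model, editing the Global Context once updates the defaults seen by every Prompt node that has not set its own value.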
6. Group node
Organizes complex graphs.
- move related nodes together
- drag a child outside the group to detach
- batch generation can create groups automatically