GPU Infrastructure for Image and Video Generation

Deploy high-performance GPU instances and run ComfyUI, Invoke, or custom pipelines. Persistent storage for your models, hourly pricing, and full API access.


Why CloudRift

Production-Grade Compute for Creative Workflows

Local hardware limits what you can generate. CloudRift provides high-VRAM RTX and datacenter GPUs on-demand, persistent volumes for your models and checkpoints, and prebuilt containers for tools like ComfyUI — so you can focus on creating.

  • Prebuilt ComfyUI containers
  • Persistent model storage
  • Full API and SSH access
  • Multi-GPU support
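
With full API access, instances can be launched programmatically. The sketch below only assembles a request body; the endpoint, field names, and image tag are illustrative assumptions, not CloudRift's actual API schema — consult the API documentation for the real one.

```python
import json

# Hypothetical payload for launching a GPU instance over a REST API.
# Field names and values are placeholders, not CloudRift's real schema.
def build_launch_request(gpu: str, image: str, volume_gb: int) -> str:
    """Assemble the JSON body for a POST to an /instances-style endpoint."""
    return json.dumps({
        "gpu": gpu,                        # e.g. "rtx-5090" (illustrative slug)
        "image": image,                    # prebuilt ComfyUI container or custom Docker image
        "persistent_volume_gb": volume_gb  # keeps checkpoints across sessions
    })

body = build_launch_request("rtx-5090", "comfyui:latest", 100)
# The actual request would be sent with any HTTP client, e.g.:
#   curl -X POST https://<api-host>/instances -d "$BODY"
```

The same body works from curl, Python, or CI tooling, which is what makes scripted spin-up/tear-down around batch jobs practical.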

Workflow

How It Works

Choose GPU and Template

Select an RTX 4090, 5090, or PRO 6000 and launch with a prebuilt ComfyUI container or custom Docker image.

Create and Iterate

Run FLUX, SDXL, or video models. Attach persistent storage to keep checkpoints and workflows across sessions.
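
Iteration doesn't have to happen in the browser: a running ComfyUI instance also exposes an HTTP API, and a workflow exported in API format ("Save (API Format)" in the editor) can be queued against its POST /prompt endpoint. The sketch below only builds the request body; the node IDs are placeholders and the host/port are whatever your instance exposes.

```python
import json
import uuid

# Sketch of queuing a workflow against ComfyUI's HTTP API (POST /prompt).
# `workflow` is the API-format JSON exported from the ComfyUI editor;
# the single node below is a truncated placeholder, not a runnable graph.
def build_prompt_payload(workflow: dict) -> bytes:
    """Body for POST http://<instance-host>:8188/prompt."""
    return json.dumps({
        "prompt": workflow,
        "client_id": str(uuid.uuid4()),  # lets you match results on the websocket
    }).encode("utf-8")

workflow = {"3": {"class_type": "KSampler", "inputs": {"seed": 42}}}
payload = build_prompt_payload(workflow)
# urllib.request.urlopen("http://<instance-host>:8188/prompt", data=payload)
# would submit it to the instance.
```

Because the payload is plain JSON, the same loop can sweep seeds, prompts, or checkpoints for batch experiments.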

Scale Your Pipeline

Add more GPUs for batch rendering or experiments. Shut down when done — pay only for what you use.
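
Hourly billing makes batch costs easy to estimate before launching. The rates below are made-up placeholders for illustration only, not CloudRift prices — check the pricing page for actual figures.

```python
# Back-of-the-envelope cost for a batch render under hourly billing.
# All rates here are hypothetical placeholders, not CloudRift prices.
HOURLY_RATE_USD = {"rtx-4090": 0.40, "rtx-5090": 0.65, "h100": 2.50}

def batch_cost(gpu: str, num_gpus: int, hours: float) -> float:
    """Total cost of running `num_gpus` instances of `gpu` for `hours`."""
    return round(HOURLY_RATE_USD[gpu] * num_gpus * hours, 2)

# Four hypothetical RTX 5090s for a 2.5-hour batch render:
print(batch_cost("rtx-5090", 4, 2.5))
```

Since billing stops when instances shut down, the estimate doubles as a budget cap for a scripted spin-up/render/tear-down cycle.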

Capabilities

What You Can Build

From text-to-video and image-to-video to 3D asset generation and SDXL pipelines — compose nodes to build exactly what you need.

Text to Video

Use models like LTX-Video, Mochi, Hunyuan Video, and Wan for prompt-driven video generation workflows.

3D Asset Generation

Generate novel views of objects from a single image using Stable Zero123 — a step toward quick 3D asset workflows.

Text to Image

Turn a prompt into a polished image, then iterate on composition and style. Supports SDXL and reference-based workflows.

Image to Video

Start with a single image and bring it to life as a short clip, or generate motion directly from a text prompt.

FAQ

Frequently Asked Questions

Which models are supported?
FLUX (including the schnell variant), SDXL, and video models like LTX-Video and Wan work out of the box. Bring your own checkpoints, LoRAs, and custom nodes.

Should I use a container or a virtual machine?
Use containers for fast setup and reproducibility, or VMs for full OS control. Either way, attach persistent storage to keep models and workflows across sessions.

Which GPUs are available?
RTX 4090, RTX 5090, RTX PRO 6000, and datacenter GPUs such as the H100 and L40S — available on-demand or reserved.

How does persistent storage work?
Attach persistent volumes to any instance. Your checkpoints, workflows, and outputs persist across sessions — pick up where you left off.

Can I bring my own tools and images?
Yes. Use prebuilt templates for ComfyUI and Invoke, or bring your own Docker image with any framework or toolchain.
Get in touch

Ready to get started?

Get in touch with our team to discuss your requirements and find the right solution for your infrastructure.