Unity doesn’t really have a built-in “4-D texture” object, but you can treat a time-varying 3-D volume (x × y × z × t) as either:
- a stack of per-frame `Texture3D` assets treated as a logical "Texture3DArray" (one slice = one frame; note that Unity has no native `Texture3DArray` class, so in practice this means a C# array of `Texture3D` objects, or frames packed along one axis of a single larger `Texture3D`) — or
- a sparse volume grid in OpenVDB/NanoVDB that you sample directly on the GPU.
Both approaches already have runtime and tooling support, so you don’t have to invent a brand-new binary format unless you need stronger compression or custom metadata. Below is an outline you can adapt to explosions, clouds, medical scans, etc.
| Use-case | GPU object | Pros | Cons |
|---|---|---|---|
| Dense, moderate resolution | `Texture3DArray` (in practice, a plain C# array of `Texture3D`) | Native Unity API; hardware filtering & mipmaps; trivial to ray-march | RAM/VRAM grows linearly with frame count; no sparsity |
| Huge or very sparse | NanoVDB grid in a `ComputeBuffer` | Sparse grids often shrink the memory footprint to ≤5 % of dense; industry-standard VFX format; direct GPU sampling | Read-only in shaders; needs offline conversion; tree decoding requires custom compute-kernel code |
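As a sketch of the dense path, here is how one volume frame could be built from raw voxel data. This assumes you already have the densities as half-precision values; the dimensions, format, and helper name are placeholders:

```csharp
using UnityEngine;

public static class VolumeFrameLoader
{
    // Builds one GPU-resident volume frame from raw half-precision densities.
    public static Texture3D CreateFrame(ushort[] rawDensities, int w, int h, int d)
    {
        var tex = new Texture3D(w, h, d, TextureFormat.RHalf, false); // no mip chain
        tex.wrapMode = TextureWrapMode.Clamp;  // avoid bleeding at volume edges
        tex.SetPixelData(rawDensities, 0);     // upload raw voxels to mip 0
        tex.Apply(false, true);                // push to GPU, free the CPU copy
        return tex;
    }
}
```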
Texture arrays look like a single object to the GPU and are indexed with an extra "layer" coordinate. NanoVDB is a GPU-friendly, linearized version of OpenVDB aimed at real-time rendering and simulation. OpenVDB itself was designed for "efficient manipulation of sparse, time-varying volumetric data".
**Dense path:**
- Sim export → sequence of raw/EXR 3-D textures.
- Optional: offline BC6H/ASTC 3-D compression, or Zstandard in a custom container.
- A Unity import script packs the frames into a `Texture3DArray` asset (i.e. a serialized array of `Texture3D` objects).
Because every frame is just another array layer, you can stream chunks directly with `Graphics.CopyTexture` or `AsyncGPUReadback` without stalling the GPU.
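A minimal streaming sketch, assuming all frames share the same size and format (the `frames`/`staging` fields and the `ushort` payload type are assumptions):

```csharp
using UnityEngine;
using UnityEngine.Rendering;

public class VolumeStreamer : MonoBehaviour
{
    public Texture3D[] frames;  // per-frame volumes already on the GPU
    public Texture3D staging;   // GPU-resident target the shader samples
    int next;

    void Update()
    {
        // GPU-to-GPU copy: no CPU round-trip, no pipeline stall.
        Graphics.CopyTexture(frames[next], staging);
        next = (next + 1) % frames.Length;
    }

    // Optional: pull a frame back to the CPU asynchronously.
    void ReadBack(Texture3D frame)
    {
        AsyncGPUReadback.Request(frame, 0, request =>
        {
            if (!request.hasError)
            {
                var voxels = request.GetData<ushort>(); // RHalf payload
                // ... inspect or cache on the CPU side ...
            }
        });
    }
}
```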
**Sparse path (NanoVDB):**
- Houdini / EmberGen / Blast simulation → one `.vdb` file per frame.
- Run `openvdb_to_nanovdb` (or the NeuralVDB encoder) to produce a single, concatenated `.nvdb` file; lossy fixed-rate compression can cut VRAM use by roughly 5-10×.
- At load time, map the binary blob into a `ComputeBuffer` (stride sized to the node layout, e.g. 32 bytes).

Unity plugins such as OpenVDBForUnity already parse the grid and build compute buffers automatically.
```csharp
// Sparse path: upload the serialized NanoVDB node blob to the GPU.
var gridBuffer = new ComputeBuffer(nodeCount, 32, ComputeBufferType.Structured);
gridBuffer.SetData(serializedNodes);

ComputeShader cs = Resources.Load<ComputeShader>("VolumeRaymarch");
cs.SetBuffer(0, "_Grid", gridBuffer);     // NanoVDB path
cs.SetTexture(0, "_Frames", tex3DArray);  // dense Texture3DArray path
```
`ComputeBuffer` is Unity's low-level GPU data container for arbitrary structs, while `ComputeShader.SetTexture` binds either read-only or UAV (random-write) textures to a kernel.
```hlsl
[numthreads(8, 8, 1)]
void CSMain (uint3 id : SV_DispatchThreadID)
{
    float3 rayPos  = /* ray start inside the volume */;
    float3 rayStep = /* step size along the view ray */;

    // Animate: pick the two frames bracketing the current time.
    float tFrame = _Time.y * _PlaybackFPS;
    int   f0 = (int)tFrame;
    int   f1 = f0 + 1;
    float k  = frac(tFrame);

    float density0 = SampleVolume(_Frames, rayPos, f0);
    float density1 = SampleVolume(_Frames, rayPos, f1);
    float density  = lerp(density0, density1, k);  // linear time-interpolation

    /* accumulate colour & opacity … */
}
```
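On the C# side, dispatching this kernel once per rendered frame might look like the following sketch. The group counts assume an 8×8 thread-group size; `_PlaybackFPS`, `_Output`, and `outputRT` are assumptions:

```csharp
int kernel = cs.FindKernel("CSMain");
cs.SetFloat("_PlaybackFPS", 24f);            // playback speed of the 4-D sequence
cs.SetTexture(kernel, "_Output", outputRT);  // UAV render texture to write into
cs.Dispatch(kernel,
    (outputRT.width  + 7) / 8,               // ceil-divide by the 8×8 group size
    (outputRT.height + 7) / 8,
    1);
```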
Texture slicing with view-aligned quads, or full ray-marching, is covered in the GPU Gems "Volume Rendering Techniques" chapter and in many modern GLSL write-ups.
Instead of `SampleVolume`, call the NanoVDB read accessor (e.g. `nanovdb_read(gid, xyz)`); the tree walk is branchless and cache-friendly. The recent *GPU Volume Rendering with Hierarchical Compression Using VDB* reports decoding costs under 5 ns per sample on RTX-class GPUs.
- Early-ray termination & empty-space skipping can cut render cost by up to 70 %.
- Store density as half-precision (`R16F`) unless a high dynamic range is needed.
- For explosions, pre-integrate emission/absorption into a BC6H 3-D LUT and sample it once per step.
- Update only the changed bricks between frames (delta uploads) or GPU-blit new layers; medical time-series papers report >4× speed-ups using run-length-encoded updates.
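Early-ray termination is a one-line check inside the march loop. A front-to-back compositing sketch, where `_MaxSteps`, `_StepSize`, `_Emission`, and the 0.99 threshold are illustrative assumptions:

```hlsl
float4 accum = 0;
for (int i = 0; i < _MaxSteps; i++)
{
    float sigma = SampleVolume(_Frames, rayPos, f0);  // density at this step
    float a = 1 - exp(-sigma * _StepSize);            // opacity of this segment
    accum.rgb += (1 - accum.a) * a * _Emission.rgb;   // front-to-back compositing
    accum.a   += (1 - accum.a) * a;

    if (accum.a > 0.99) break;  // early-ray termination: pixel is opaque enough
    rayPos += rayStep;
}
```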
| Requirement | Existing solution |
|---|---|
| Sparse grids, single file, GPU-ready | NanoVDB |
| Dense but small (<512³), fast artist iteration | `Texture3DArray` in a `.asset` bundle |
| Extreme compression (<0.5 B/voxel) | NeuralVDB or custom ZFP inside NanoVDB |
Only when you need bespoke metadata or delta-encoded streaming beyond what VDB or Unity assets offer would you devise a proprietary container.
- Conversion step (Python/Houdini):
  - `.sim` → `.vdb` or raw voxel frames
  - optional bake to `.nvdb` or `.asset`
- Unity custom importer (`AssetPostprocessor`):
  - detect VDB or raw sequence
  - build the `Texture3DArray` or upload NanoVDB to a `ComputeBuffer`
- Runtime:
  - a component chooses the "dense" vs "sparse" path at `Start()`
  - dispatch the compute shader each frame with the interpolation factor
  - blit the result into an HDRP/URP volume pass
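A minimal runtime component for the last stage could look like this sketch; the `useSparse` decision rule, the shader property names, and the 24 fps default are all assumptions:

```csharp
using UnityEngine;

public class VolumePlayer : MonoBehaviour
{
    public ComputeShader raymarch;
    public Texture3D[] denseFrames;    // dense path
    public ComputeBuffer nanoVdbGrid;  // sparse path (uploaded elsewhere)
    public RenderTexture output;
    public float playbackFps = 24f;

    int kernel;
    bool useSparse;

    void Start()
    {
        kernel = raymarch.FindKernel("CSMain");
        // Pick the path once: sparse if a NanoVDB grid was provided.
        useSparse = nanoVdbGrid != null;
        if (useSparse) raymarch.SetBuffer(kernel, "_Grid", nanoVdbGrid);
    }

    void Update()
    {
        float t = Time.time * playbackFps;
        if (!useSparse)
        {
            int f0 = (int)t % denseFrames.Length;
            raymarch.SetTexture(kernel, "_Frames", denseFrames[f0]);
        }
        raymarch.SetFloat("_FrameLerp", t - Mathf.Floor(t)); // interpolation factor
        raymarch.SetTexture(kernel, "_Output", output);
        raymarch.Dispatch(kernel, (output.width + 7) / 8, (output.height + 7) / 8, 1);
    }
}
```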
This workflow keeps everything GPU-resident, scales from mobile (array textures) to workstation GPUs (NanoVDB), and reuses community-tested formats rather than reinventing them.
- Time-varying multimodal 3-D texture rendering techniques (exploit incremental GPU updates instead of re-uploading full volumes)
- Real-time cloud, smoke and explosion ray-marching tricks
With these building blocks you can store, parse and render 4-D volumetric simulations efficiently in Unity without inventing a new wheel: just pick the dense-vs-sparse path that matches your data scale and performance budget.