My method to bake simulations from Blender to Godot

To anyone interested in transferring simulations from Blender to Godot: I made this Python script for Blender that creates a rig and bakes cloth/liquid simulations to it, based on this video.

import bpy
from bpy import context
from mathutils import Vector

def AddEmptiesAtVertices(length):
    objects = bpy.context.view_layer.objects
    obj = objects.active
    if not obj or obj.type != 'MESH':
        return []

    # Keep the empties in their own collection to avoid cluttering the outliner
    coll = bpy.data.collections.new("empties")
    bpy.context.scene.collection.children.link(coll)
    
    empties = []
    for v in obj.data.vertices:
        mt = bpy.data.objects.new(
            f"Vert{v.index}",
            None,
        )
        mt.empty_display_type = 'ARROWS'
        mt.empty_display_size = length
        mt.parent = obj
        mt.parent_type = 'VERTEX'
        mt.parent_vertices = [v.index] * 3
        coll.objects.link(mt)
        empties.append(mt)
    
    return empties
        

def AddBonesAtVertices(length, use_normals, empty_names):
    objects = bpy.context.view_layer.objects
    obj = objects.active
    if not obj or obj.type != 'MESH':
        return

    points = []
    normals = []
    for v in obj.data.vertices:
        p = obj.matrix_world @ v.co
        if use_normals:
            # Directions transform with the 3x3 part of the matrix,
            # not the full object transform
            world_normal = (obj.matrix_world.to_3x3() @ v.normal).normalized()
            n = p + world_normal * length
        else:
            # Point every bone straight up along Z instead of along the normal
            n = Vector((p.x, p.y, p.z + length))
        points.append(p)
        normals.append(n)

    amt = bpy.data.armatures.new(obj.name + "_vBones")
    rig = bpy.data.objects.new(obj.name + '_vRig', amt)
    
    bpy.context.collection.objects.link(rig)
    objects.active = rig

    bpy.ops.object.mode_set(mode='EDIT')
    for i, (head, tail) in enumerate(zip(points, normals)):
        bone = amt.edit_bones.new(str(i))
        bone.head = head
        bone.tail = tail

    # Switch to pose mode once and give every bone a Copy Location
    # constraint targeting the empty parented to the matching vertex
    bpy.ops.object.mode_set(mode='POSE')
    for i, empty in enumerate(empty_names):
        pose_bone = rig.pose.bones[str(i)]
        constraint = pose_bone.constraints.new('COPY_LOCATION')
        constraint.target = empty

    bpy.ops.object.mode_set(mode='OBJECT')

bpy.ops.object.mode_set(mode='OBJECT')
empties = AddEmptiesAtVertices(0.5)
if empties:
    AddBonesAtVertices(0.5, False, empties)

Once you have the simulation, run the script with the simulated mesh selected. It creates one empty parented to each vertex of the mesh, plus a rig with a bone positioned at each vertex and a Copy Location constraint pointing at its empty counterpart. This way the rig follows the simulation. Depending on the number of vertices and the specs of your computer, the script can freeze Blender for a while, so be patient!

To bake the simulation to the rig you can go to pose mode, select all the bones, and go to pose > animation > bake action. It will open this menu:

The important setting here is Visual Keying, which is what captures the motion produced by the Copy Location constraints. Once baked, voilà, you have a rig with an animation reproducing the simulation that you can export to Godot. I used this method to simulate the sail of a ship:

I still haven’t tested the performance of this method. It creates a rig with as many bones as the mesh has vertices, and the animation can have one keyframe per frame (this depends on how you bake the action in Blender).
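As a rough back-of-the-envelope sketch (the vertex and frame counts below are made-up examples, not measurements), the keyframe count grows as vertices × frames × 3 location channels:

```python
def baked_keyframe_count(num_vertices, num_frames, channels_per_bone=3):
    """Rough upper bound on keyframes produced by baking: one bone per
    vertex, one key per frame, one key per location channel (X, Y, Z)."""
    return num_vertices * num_frames * channels_per_bone

# e.g. a 1,000-vertex sail baked over 250 frames:
print(baked_keyframe_count(1000, 250))  # 750000 keyframes
```

So even a modest mesh can produce hundreds of thousands of keyframes, which is worth keeping in mind for file size and import time.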

Vertex animation textures (VATs) are better suited to per-vertex animation, though more complicated to export and import. They are often used for crowds of animated background characters (due to their near-zero CPU usage) or for baked physics simulations with a high vertex count.
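To illustrate the idea (a generic sketch of the VAT concept, not tied to any particular exporter or engine): a VAT stores one vertex per texel column and one frame per row, with positions normalized into [0, 1] so they fit an ordinary RGB texture; the vertex shader denormalizes them again using min/max bounds passed as uniforms.

```python
import numpy as np

def pack_vat(positions):
    """Pack per-frame vertex positions into VAT-style texture data.

    positions: array of shape (frames, vertices, 3) in object space.
    Returns (texture, pos_min, pos_max): texture has the same shape with
    values normalized to [0, 1]; the per-axis min/max bounds are what the
    shader needs to recover positions in world units.
    """
    positions = np.asarray(positions, dtype=np.float64)
    pos_min = positions.min(axis=(0, 1))
    pos_max = positions.max(axis=(0, 1))
    # Guard against a zero span on a flat axis to avoid division by zero
    span = np.where(pos_max > pos_min, pos_max - pos_min, 1.0)
    texture = (positions - pos_min) / span
    return texture, pos_min, pos_max

def unpack_vat(texture, pos_min, pos_max):
    """Inverse mapping, i.e. what the vertex shader does per texel."""
    span = np.where(pos_max > pos_min, pos_max - pos_min, 1.0)
    return texture * span + pos_min
```

The round trip `unpack_vat(*pack_vat(p))` recovers the original positions up to floating-point error; in practice the texture would be saved as an EXR or similar high-precision image.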

This is really cool, thanks for sharing!!!

whoa that’s crazy, I didn’t know about vertex animation textures! Could you blend between different texture animations and build a state machine out of them? I have to try! Thanks for sharing!

You could blend VATs with the mix function in a shader. A state machine would have to be done in scripts, altering the shader uniforms. Admittedly a lot of Godot’s built-in animation functionality is lost, since you have to write your own shaders; I don’t think VATs are standard in any engine, as they tend to be finicky for each use case.
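The blend itself is just a linear interpolation, the same thing a shader’s mix() does per texel (a toy sketch in plain Python, not engine code):

```python
def mix(a, b, t):
    """Linear interpolation, equivalent to GLSL's mix(a, b, t)."""
    return [x * (1.0 - t) + y * t for x, y in zip(a, b)]

# Blend a vertex position sampled from two VATs, 25% of the way
# from animation A to animation B:
pos_a = [0.0, 1.0, 2.0]
pos_b = [4.0, 1.0, 0.0]
print(mix(pos_a, pos_b, 0.25))  # [1.0, 1.0, 1.5]
```

Driving `t` from a script per frame is what a simple two-state crossfade would look like; anything more elaborate needs its own state machine logic.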

Since I need this now and don’t understand programming at all, thanks to the original poster, who is a god, for helping. I asked Grok to modify the script into a two-step version. I don’t know why merging the two scripts into one always fails when I run them together, so I split them into two separate scripts.

This is the version that works successfully in Blender 5.0.1:

Script 1 (Generate one bone per vertex + constraints):

Python

import bpy
from mathutils import Vector

def AddEmptiesAtVertices(length):
    obj = bpy.context.active_object
    if not obj or obj.type != 'MESH':
        print("Please select the cloth Mesh first!")
        return []

    coll = bpy.data.collections.new("Empties_" + obj.name)
    bpy.context.scene.collection.children.link(coll)
    
    empties = []
    for v in obj.data.vertices:
        mt = bpy.data.objects.new(f"Vert{v.index}", None)
        mt.empty_display_type = 'ARROWS'
        mt.empty_display_size = length
        mt.parent = obj
        mt.parent_type = 'VERTEX'
        mt.parent_vertices = (v.index, 0, 0)
        coll.objects.link(mt)
        empties.append(mt)
    return empties

def AddBonesAtVertices(length, use_normals, empty_objects):
    obj = bpy.context.active_object
    if not obj or obj.type != 'MESH':
        return None

    points = []
    normals = []
    for v in obj.data.vertices:
        p = obj.matrix_world @ v.co
        n = Vector((p.x, p.y, p.z + length))
        points.append(p)
        normals.append(n)

    amt = bpy.data.armatures.new(obj.name + "_vBones")
    rig = bpy.data.objects.new(obj.name + '_vRig', amt)
    bpy.context.scene.collection.objects.link(rig)
    bpy.context.view_layer.objects.active = rig

    bpy.ops.object.mode_set(mode='EDIT')
    for i in range(len(points)):
        bone = amt.edit_bones.new(str(i))
        bone.head = points[i]
        bone.tail = normals[i]
    bpy.ops.object.mode_set(mode='POSE')

    for i in range(len(points)):
        pose_bone = rig.pose.bones[str(i)]
        constraint = pose_bone.constraints.new("COPY_LOCATION")
        constraint.target = empty_objects[i]

    bpy.ops.object.mode_set(mode='OBJECT')
    return rig

bpy.ops.object.mode_set(mode='OBJECT')
empties = AddEmptiesAtVertices(0.5)
if empties:
    rig = AddBonesAtVertices(0.5, False, empties)
    print(f"Completed: {len(empties)} bones")

Script 2 (Bake cloth simulation vertex results per frame to bones):

Python

import bpy

START_FRAME = 1
END_FRAME = 30  # Change to your total frame count

rig = bpy.context.active_object
if not rig or rig.type != 'ARMATURE':
    print("Please select _vRig first")
else:
    bpy.context.view_layer.objects.active = rig
    bpy.ops.object.mode_set(mode='POSE')
    bpy.ops.pose.select_all(action='SELECT')

    bpy.ops.nla.bake(
        frame_start=START_FRAME,
        frame_end=END_FRAME,
        step=1,
        only_selected=True,
        visual_keying=True,
        clear_constraints=True,
        use_current_action=True,
        bake_types={'POSE'}
    )

    bpy.ops.object.mode_set(mode='OBJECT')
    print("Bake completed")

Workflow:
Run Script 1 on your already-simulated object → Bind the mesh (or you can run Script 2 first and bind the mesh later) → Select the rig → Run Script 2

Finally, bind the bones to the original cloth-simulated object using Automatic Weights, and you’re done.

A note on Blender’s convenient direct auto-update workflow to Godot: if the object already has shape keys, you can’t add thickness or any other modifiers afterward. Adding any modifier will cause the shape keys to disappear after import into Godot.

I’m not sure if exporting to .gltf first and then importing into Godot would preserve them, but I doubt it, since the file auto-update simply exports a .gltf to Godot in the background.

In short, cloth simulation for close-up shots definitely needs thickness, which is why I chose the method of baking the simulation to a skeleton instead.