this is the problem model in the first image. each color is a different shader that tangerine had to generate to render that voxel, and the average complexity of the generated shaders is also high.
the second image is the generated part for one of these shaders. it's basically the object structure w/ all the params pulled out.
here's some very entertaining reading about the same problem in a completely different project https://dolphin-emu.org/blog/2017/07/30/ubershaders/
update: I hacked together an interpreted mode and it works great for this :D
https://github.com/Aeva/tangerine/commit/19aa738799e3531f69dc0c7de3ebb14d8cf615d1
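(for the curious: the gist of an interpreted mode is that instead of compiling a specialized shader per voxel, the CSG program is evaluated by one generic loop at runtime. here's a rough CPU-side sketch of that idea in C++ — hypothetical names and layout, not what the commit above actually does:)

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

enum class OpCode { Sphere, Box, Union, Inter, Diff };

struct Instruction
{
    OpCode Op;
    float Params[4];   // sphere: center x/y/z + radius; box: half extents (origin-centered)
};

float SphereDist(float X, float Y, float Z, const float* P)
{
    float DX = X - P[0], DY = Y - P[1], DZ = Z - P[2];
    return std::sqrt(DX * DX + DY * DY + DZ * DZ) - P[3];
}

float BoxDist(float X, float Y, float Z, const float* P)
{
    float DX = std::abs(X) - P[0], DY = std::abs(Y) - P[1], DZ = std::abs(Z) - P[2];
    float OX = std::max(DX, 0.0f), OY = std::max(DY, 0.0f), OZ = std::max(DZ, 0.0f);
    return std::sqrt(OX * OX + OY * OY + OZ * OZ)
         + std::min(std::max(DX, std::max(DY, DZ)), 0.0f);
}

// A model like union(sphere, box) becomes the postfix program { Sphere, Box, Union }.
float EvalSDF(const std::vector<Instruction>& Program, float X, float Y, float Z)
{
    std::vector<float> Stack;
    for (const Instruction& I : Program)
    {
        switch (I.Op)
        {
        case OpCode::Sphere: Stack.push_back(SphereDist(X, Y, Z, I.Params)); break;
        case OpCode::Box:    Stack.push_back(BoxDist(X, Y, Z, I.Params));    break;
        default:
        {
            // set operators pop two distances and push the blended one
            float B = Stack.back(); Stack.pop_back();
            float A = Stack.back(); Stack.pop_back();
            if (I.Op == OpCode::Union) Stack.push_back(std::min(A, B));
            if (I.Op == OpCode::Inter) Stack.push_back(std::max(A, B));
            if (I.Op == OpCode::Diff)  Stack.push_back(std::max(A, -B));
            break;
        }
        }
    }
    return Stack.back();
}
```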
@aeva whoa, this is cool! Do you have different sdfs at different levels of the octree?
@jonbro yes :D the root of the tree contains the entire model, which would be too slow to render compiled or otherwise. the octree splits to eliminate dead space, and as it does so each node removes the parts of the CSG tree that can't affect it, resulting in a simpler SDF
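(conceptually the per-node simplification works something like the sketch below — made-up structures for illustration, not tangerine's actual code:)

```cpp
#include <memory>

struct AABB { float Min[3], Max[3]; };

enum class CSGOp { Leaf, Union, Inter, Diff };

struct CSGNode
{
    CSGOp Op = CSGOp::Leaf;
    AABB Bounds{};                      // conservative bounds of this subtree
    std::shared_ptr<CSGNode> LHS, RHS;  // null for leaves (primitive params omitted)
};

bool Overlaps(const AABB& A, const AABB& B)
{
    for (int Axis = 0; Axis < 3; ++Axis)
        if (A.Max[Axis] < B.Min[Axis] || B.Max[Axis] < A.Min[Axis])
            return false;
    return true;
}

// Returns the part of the CSG tree that can affect `Cell`, or nullptr for dead space.
std::shared_ptr<CSGNode> Prune(const std::shared_ptr<CSGNode>& Node, const AABB& Cell)
{
    if (!Node || !Overlaps(Node->Bounds, Cell))
        return nullptr;
    if (Node->Op == CSGOp::Leaf)
        return Node;

    std::shared_ptr<CSGNode> LHS = Prune(Node->LHS, Cell);
    std::shared_ptr<CSGNode> RHS = Prune(Node->RHS, Cell);

    switch (Node->Op)
    {
    case CSGOp::Union:                  // either side alone still matters in this cell
        if (!LHS) return RHS;
        if (!RHS) return LHS;
        break;
    case CSGOp::Inter:                  // needs both sides present in this cell
        if (!LHS || !RHS) return nullptr;
        break;
    case CSGOp::Diff:                   // nothing to cut from, or nothing cutting here
        if (!LHS) return nullptr;
        if (!RHS) return LHS;
        break;
    default:
        break;
    }

    auto Pruned = std::make_shared<CSGNode>(*Node);  // keep op + bounds, swap in pruned children
    Pruned->LHS = LHS;
    Pruned->RHS = RHS;
    return Pruned;
}
```

(the octree build then calls something like this for each cell and recurses into the eight children with the already-pruned tree, so each level starts from a simpler SDF than its parent.)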
@aeva are you building the octree cpu side?
@jonbro yes. here's the implementation if you're interested https://github.com/Aeva/tangerine/blob/excelsior/tangerine/sdfs.cpp#L1156
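(the bounds for the set operators combine in the obvious conservative way — a simplified sketch of that idea, not the linked code:)

```cpp
#include <algorithm>

struct AABB { float Min[3], Max[3]; };

AABB BoundsOfUnion(const AABB& A, const AABB& B)         // grow to cover both children
{
    AABB Result;
    for (int Axis = 0; Axis < 3; ++Axis)
    {
        Result.Min[Axis] = std::min(A.Min[Axis], B.Min[Axis]);
        Result.Max[Axis] = std::max(A.Max[Axis], B.Max[Axis]);
    }
    return Result;
}

AABB BoundsOfIntersection(const AABB& A, const AABB& B)  // shrink to the overlap
{
    AABB Result;
    for (int Axis = 0; Axis < 3; ++Axis)
    {
        Result.Min[Axis] = std::max(A.Min[Axis], B.Min[Axis]);
        Result.Max[Axis] = std::min(A.Max[Axis], B.Max[Axis]);
    }
    return Result;
}

AABB BoundsOfDifference(const AABB& A, const AABB&)      // cutting can only shrink A
{
    return A;
}
```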
@aeva oh neat! storing aabbs for the sdf ops is clever.
@jonbro so far this approach is working quite well for me. the main problem is that the distance fields aren't exact after any set operators, so it can't cull as aggressively on the CPU as i would like it to. it also definitely needs clustered occlusion culling. I think this strat has promise though.
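(a toy illustration of that inexactness, not tangerine code: intersect two unit spheres that don't touch. the result is empty, but max(a, b) still reports a finite value everywhere, so the field alone can never prove the cell is empty:)

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>

// Exact SDF for a sphere centered at (CX, 0, 0).
float Sphere(float X, float Y, float Z, float CX, float Radius)
{
    float DX = X - CX;
    return std::sqrt(DX * DX + Y * Y + Z * Z) - Radius;
}

int main()
{
    // Unit spheres at x = -2 and x = +2: they don't touch, so their intersection is empty.
    float A = Sphere(0, 0, 0, -2.0f, 1.0f);   // exact distance from the origin to sphere A: 1
    float B = Sphere(0, 0, 0, +2.0f, 1.0f);   // exact distance from the origin to sphere B: 1
    float I = std::max(A, B);                 // CSG intersection reports 1, but the true
                                              // distance to the (empty) result is unbounded
    std::printf("pseudo-distance at the origin: %f\n", I);
    return 0;
}
```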