this is the problem model in the first image. each color is a different shader that had to be generated by tangerine to render the voxel, and the average complexity of the generated shaders is also high.
the second image is the generated part for one of these shaders. it's basically the object structure w/ all the params pulled out.
here's some very entertaining reading about the same problem in a completely different project https://dolphin-emu.org/blog/2017/07/30/ubershaders/
update: I hacked together an interpreted mode and it works great for this :D
https://github.com/Aeva/tangerine/commit/19aa738799e3531f69dc0c7de3ebb14d8cf615d1
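the gist of an interpreted mode is to walk the CSG tree at eval time instead of codegen'ing a shader per tree. a minimal sketch in Python (hypothetical node format and op names, not Tangerine's actual representation — see the commit above for the real thing):

```python
import math

def eval_sdf(node, p):
    """Interpret a CSG tree at point p instead of compiling a shader for it."""
    op = node[0]
    if op == "sphere":
        _, radius = node
        return math.dist(p, (0.0, 0.0, 0.0)) - radius
    if op == "move":  # translate the child by offsetting the query point
        _, (dx, dy, dz), child = node
        return eval_sdf(child, (p[0] - dx, p[1] - dy, p[2] - dz))
    if op == "union":
        _, lhs, rhs = node
        return min(eval_sdf(lhs, p), eval_sdf(rhs, p))
    if op == "cut":  # subtraction: max(a, -b)
        _, lhs, rhs = node
        return max(eval_sdf(lhs, p), -eval_sdf(rhs, p))
    raise ValueError(f"unknown op {op!r}")

# a unit sphere with a smaller sphere cut out of its side
tree = ("cut", ("sphere", 1.0), ("move", (0.9, 0.0, 0.0), ("sphere", 0.5)))
d = eval_sdf(tree, (0.0, 0.0, 0.0))  # distance sampled at the origin
```

one interpreter handles every tree, so you trade per-shape codegen (and the shader permutation explosion) for some per-sample dispatch overhead.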
@aeva whoa, this is cool! Do you have different sdfs at different levels of the octree?
@jonbro yes :D the root of the tree contains the entire model, which would be too slow to render compiled or otherwise. the octree splits to eliminate dead space, and as it does so each node removes the parts of the CSG tree that can't affect it, resulting in a simpler SDF
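the per-node culling can be sketched like this (illustrative only, assuming the field is 1-Lipschitz so distances can't change faster than you move; node format and names are made up for the example):

```python
import math

# tiny stand-in evaluator (hypothetical node format, not Tangerine's)
def eval_sdf(node, p):
    if node[0] == "sphere":
        _, (cx, cy, cz), r = node
        return math.dist(p, (cx, cy, cz)) - r
    if node[0] == "union":
        _, lhs, rhs = node
        return min(eval_sdf(lhs, p), eval_sdf(rhs, p))
    raise ValueError(node)

def cull_union(node, center, radius):
    """Simplify a union for one octree node (bounding sphere center/radius).

    Within the node, d(p) differs from d(center) by at most `radius`
    (1-Lipschitz). If one union operand is provably farther than the other
    everywhere inside the node, the min always picks the nearer one, so the
    far operand can be dropped from this node's tree.
    """
    if node[0] != "union":
        return node
    _, lhs, rhs = node
    lhs = cull_union(lhs, center, radius)
    rhs = cull_union(rhs, center, radius)
    dl = eval_sdf(lhs, center)
    dr = eval_sdf(rhs, center)
    if dl - radius > dr + radius:
        return rhs  # lhs can never win the min inside this node
    if dr - radius > dl + radius:
        return lhs  # rhs can never win the min inside this node
    return ("union", lhs, rhs)

a = ("sphere", (0.0, 0.0, 0.0), 1.0)
b = ("sphere", (10.0, 0.0, 0.0), 1.0)
tree = ("union", a, b)
culled = cull_union(tree, (0.0, 0.0, 0.0), 2.0)  # node near a: b is dropped
```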
@aeva are you building the octree cpu side?
@jonbro thank you :D also i wrote a blog post a while back about the general technique http://zone.dog/braindump/sdf_clustering_part_2/
@aeva awesome! I'm not sure I'm ready to revive my toy voxel sdf thingy, but these notes are gonna be my starting point if i do.
I gave up at the culling SDF ops stage, so I could never really have complex models :(
@jonbro so far this approach is working quite well for me. the main problem is that the distance fields aren't exact after any set operators, so it can't cull as aggressively on the CPU as i would like it to. it also definitely needs clustered occlusion culling. I think this strat has promise though.
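the inexactness is easy to demonstrate: min/max set operators only give a bound on the true distance, not the distance itself. worked example (illustrative numbers, not from Tangerine) — cut a small sphere out of a unit sphere and sample a point outside, near the removed material:

```python
import math

def sphere(p, c, r):
    return math.dist(p, c) - r

# subtraction via max(a, -b): the standard formulation, but only a
# lower bound on the true distance to the resulting surface
def shape(p):
    outer = sphere(p, (0.0, 0.0, 0.0), 1.0)
    cutter = sphere(p, (0.9, 0.0, 0.0), 0.5)
    return max(outer, -cutter)

p = (1.6, 0.0, 0.0)
reported = shape(p)  # 0.6: distance to the outer sphere, which is cut away here

# the nearest *actual* surface point is on the rim where the two spheres meet;
# solving |q| = 1 and |q - (0.9,0,0)| = 0.5 gives x = 13/15 for the rim circle
qx = 13.0 / 15.0
q = (qx, math.sqrt(1.0 - qx * qx), 0.0)
true_dist = math.dist(p, q)  # about 0.887, noticeably more than 0.6
```

the field underestimates, which is safe (sphere tracing still converges, culling never drops live geometry) but conservative: culling margins derived from it are looser than the geometry actually allows.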
@aeva I guess I can't think of an alternate way to approach it :D
it's really cool to see an end-to-end implementation of this.