I have code that generates a potentially enormous amount of data: a BVH for raytracing in a compute shader
That data consists of a (really large) PackedVector3Array and an Array[BVHnode], where BVHnode is a custom class that contains 2 Vector3s and 2 ints
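i.e. something roughly like this (field names here are just illustrative, but it really is only two Vector3s and two ints per node):

```gdscript
class_name BVHnode
extends RefCounted

var aabb_min: Vector3   # bounding box lower corner
var aabb_max: Vector3   # bounding box upper corner
var left_child: int     # index of the first child node / first triangle
var tri_count: int      # number of triangles (0 for inner nodes)
```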
What is the fastest way to store the generated data and retrieve it when needed?
Help would be really appreciated, as I fear that storing and retrieving data inefficiently might even bottleneck the project…
So thanks in advance for any replies!
What is “potentially enormous”, and what kind of storage do you need?
As it stands it’s in RAM when you create the compute shader. Do you need more than that? If you need to go to persistent storage, writing to a file is the next step. You can use the Resource tools to save/load a file asynchronously, or write a custom thread-based tool. That will probably be bottlenecked by your storage device’s speed.
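A rough sketch of the Resource route (class, field and file names here are placeholders): flatten the node data into packed arrays (you will want that for the compute shader buffer anyway), put everything in a custom Resource, save it once as a binary .res, and poll the threaded ResourceLoader so loading doesn’t block the main thread:

```gdscript
# bvh_data.gd -- container Resource for the generated data (names are placeholders)
class_name BVHData
extends Resource

@export var vertices: PackedVector3Array
@export var node_bounds: PackedVector3Array  # min/max per node, flattened
@export var node_ints: PackedInt32Array      # the two ints of each node, flattened
```

```gdscript
# In whatever Node drives the raytracer:
extends Node

const CACHE_PATH := "user://bvh_cache.res"

func save_bvh(bvh_data: BVHData) -> void:
	# .res is the binary format; a .tres text file would be enormous here
	ResourceSaver.save(bvh_data, CACHE_PATH)

func start_loading_bvh() -> void:
	# starts loading on a background thread
	ResourceLoader.load_threaded_request(CACHE_PATH)

func _process(_delta: float) -> void:
	if ResourceLoader.load_threaded_get_status(CACHE_PATH) == ResourceLoader.THREAD_LOAD_LOADED:
		var bvh: BVHData = ResourceLoader.load_threaded_get(CACHE_PATH)
		set_process(false)
		# hand bvh.vertices / bvh.node_bounds / bvh.node_ints to the compute shader here
```

If you’d rather skip Resources, FileAccess.store_var() / get_var() on the same packed arrays is the plain write-a-file version, and it also stores them in binary form.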
There is also the concept of memory-mapped files at the OS level, but that will require custom code to manage properly if you intend to optimize RAM usage somehow, and it assumes the BVH is somehow segmented and can be easily chunked. Is it?
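GDScript doesn’t expose memory mapping directly, but if the BVH can be split into chunks you can get a similar effect with seeked partial reads from a flat binary file. A very rough sketch, assuming fixed-size node records (the 32-byte layout is only a guess based on the two Vector3s plus two ints you described):

```gdscript
const NODE_SIZE := 32  # 2 * Vector3 (3 * float32) + 2 * int32 = 32 bytes per node, if stored that way

# Read only nodes [first_node, first_node + count) instead of loading the whole file.
func read_node_chunk(path: String, first_node: int, count: int) -> PackedByteArray:
	var f := FileAccess.open(path, FileAccess.READ)
	if f == null:
		return PackedByteArray()
	f.seek(first_node * NODE_SIZE)
	return f.get_buffer(count * NODE_SIZE)  # raw bytes, ready to push into a storage buffer
```

The writing side would just emit each node’s fields back to back with store_float() / store_32() in the same order.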
Thanks for the reply!
At a guess, the file’s size should be around 4 times the size of the mesh used, so I fear that bigger and bigger meshes could become a real problem. The kind of storage I need is anything that saves the data to a file and can retrieve it on the fly while the project is running. It’s just that I don’t know the ideal way to do it
Even if you have 4 million BVH entries for a mesh, that only equates to around 100 MB of RAM just for the BVH (each node is two Vector3s plus two ints, so roughly 32 bytes). I don’t think you should worry about storage and retrieval. Beyond that, efficiency really depends on your goals and the hardware you intend to support.
Like, will you only be using one instance of this data structure? Do you need to chunk the data structure? Goals really matter when it comes to paths forward.