Program fails when the scanned object matrix is too large

First of all, GGEMS is a very friendly program, but I ran into the following problem while using it: when the size of the input scanned object matrix exceeds a certain value, the program reports an error. A matrix of 8192 × 8192 × 36 runs fine, but increasing it to 8192 × 8192 × 60 fails with the error shown below. It is not clear whether the program places a limit on the input matrix size. What is the reason for this limit, and can it be optimized away? On a high-performance PC it is an unfriendly restriction.
Traceback (most recent call last):
  File "D:\My_CT\Monte_Carlo\ggems-master\examples\MCC_simulation.py", line 172, in <module>
    ggems.initialize()  # initialize
    ^^^^^^^^^^^^^^^^^^
  File "D:\Anaconda3\envs\XXX\Lib\site-packages\ggems-1.2-py3.12-win-amd64.egg\ggems\__init__.py", line 85, in initialize
    ggems_lib.initialize_ggems(self.obj, seed)
OSError: [WinError -529697949] Windows Error 0xe06d7363
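
For reference, the rough sizes of the two grids (assuming one value per voxel; the actual bytes per voxel depend on the data type GGEMS stores internally, so these are only estimates):

```python
# Raw size of the two voxel grids from the report, for a few plausible
# storage widths; the real bytes-per-voxel depends on GGEMS internals.
for nz in (36, 60):
    voxels = 8192 * 8192 * nz
    for bpv in (1, 2, 4):  # assumed bytes per voxel
        print(f"8192 x 8192 x {nz} at {bpv} B/voxel: "
              f"{voxels * bpv / 2**30:.2f} GiB")
```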

Yes, this is normal. OpenCL has a limit on how much memory can be allocated for a single buffer. In your case it seems to be 4 GB.
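
If you want to check the exact ceiling on your machine, it is the OpenCL device property CL_DEVICE_MAX_MEM_ALLOC_SIZE. A minimal sketch to query it with pyopencl (a separate package, not part of GGEMS):

```python
# List the per-buffer allocation ceiling (CL_DEVICE_MAX_MEM_ALLOC_SIZE)
# for every OpenCL device visible on this machine. Requires pyopencl.
import pyopencl as cl

for platform in cl.get_platforms():
    for device in platform.get_devices():
        max_alloc = device.max_mem_alloc_size / 2**30
        total = device.global_mem_size / 2**30
        print(f"{device.name}: max single buffer {max_alloc:.2f} GiB "
              f"(global memory {total:.2f} GiB)")
```

Many implementations report this as one quarter of the device's global memory, which is the minimum the OpenCL specification requires, so even a card with plenty of RAM can refuse one very large buffer.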
Kind regards
Didier

First of all, thank you very much for your answer. This seems to be a limitation of OpenCL itself, but it really affects the work carried out with GGEMS. I would like to know how to set things up to handle large matrices, because I really need to build a matrix of this size. In addition, would you consider lifting or adjusting this limitation in the next version update? That would allow GGEMS to be used in more kinds of situations.

Unfortunately, the maximum size of a buffer allocation is limited by OpenCL itself; there is nothing I can change in the code to work around it. The only solution for you is to use a bigger voxel size.
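
To illustrate the workaround: keeping the same physical field of view while coarsening the voxel pitch shrinks the buffer by the product of the scaling factors. A small sketch of the arithmetic (one assumed byte per voxel; the grid dimensions are the failing case from the report):

```python
# Coarsening the voxel pitch in all three axes while covering the same
# physical volume divides the voxel count, and the buffer size, by
# factor**3 (e.g. doubling the pitch cuts memory by a factor of 8).
nx, ny, nz = 8192, 8192, 60   # failing grid from the report
bpv = 1                       # assumed bytes per voxel

for factor in (1, 2, 4):
    dims = (nx // factor, ny // factor, nz // factor)
    gib = dims[0] * dims[1] * dims[2] * bpv / 2**30
    print(f"pitch x{factor}: {dims[0]} x {dims[1]} x {dims[2]} = {gib:.2f} GiB")
```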