Hi, after designing my neural network and setting up training, I was surprised to find that the model tried to allocate 6.49 GiB of GPU memory. My batch size is 4, which I think is rather small, and my network isn't that complicated. What are the possible reasons for the out-of-memory error, and what should I do about it? I'd really appreciate any help.
Hi, there are many possible causes of the out-of-memory
issue; you can refer to this link How to Solve ‘CUDA out of memory’ in PyTorch for possible solutions. Specifically, our project is based on PyTorch Lightning, which has many built-in memory-saving techniques, so you should check its official documentation for how to enable them (e.g. Mixed Precision Training), or you can try to optimize memory usage on your own. A sketch of enabling mixed precision through the Lightning Trainer is shown below.
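Here is a minimal sketch of what turning on mixed precision looks like in PyTorch Lightning. The exact `precision` value and import path depend on your Lightning version (`"16-mixed"` in 2.x, `16` in older 1.x releases), and `MyLightningModule` / `MyDataModule` are placeholders for your own classes, not anything from our project:

```python
import pytorch_lightning as pl  # in Lightning 2.x you may use `import lightning.pytorch as pl`

# Mixed precision is enabled entirely through the Trainer,
# so no changes to the LightningModule itself are needed.
trainer = pl.Trainer(
    accelerator="gpu",
    devices=1,
    precision="16-mixed",  # 16-bit mixed precision; use `precision=16` on older Lightning versions
    max_epochs=10,
)

# Placeholder call: replace with your own LightningModule and DataModule.
# trainer.fit(MyLightningModule(), datamodule=MyDataModule())
```

Running in 16-bit mixed precision roughly halves the memory taken by activations, which is usually the biggest contributor at training time, so it is often enough to get a model that barely doesn't fit back under the GPU limit.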