Hi, can someone please explain the difference between texture memory as used in the context of CUDA and texture memory as used in the context of DirectX? Suppose a graphics card has 512 MB of advertised memory — how is it divided into constant memory, texture memory, and global memory?
E.g. I have a Tesla card that reports totalConstMem as 64 KB and totalGlobalMem as 4 GB when queried via cudaGetDeviceProperties, but there is no field that tells me how much texture memory is available.
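For reference, this is roughly how I query those fields — a minimal sketch assuming device 0; note that cudaDeviceProp exposes per-dimension texture size limits (maxTexture1D, maxTexture2D) but no dedicated texture-memory total:

```cpp
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);  // query device 0

    // The fields mentioned above: constant and global memory sizes.
    printf("totalConstMem : %zu bytes\n", prop.totalConstMem);
    printf("totalGlobalMem: %zu bytes\n", prop.totalGlobalMem);

    // No "totalTexMem" field exists; only dimension limits are exposed,
    // since textures are bound to allocations in global memory.
    printf("maxTexture1D  : %d texels\n", prop.maxTexture1D);
    printf("maxTexture2D  : %d x %d texels\n",
           prop.maxTexture2D[0], prop.maxTexture2D[1]);
    return 0;
}
```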
Also, how much "texture memory" is there when it is accessed via graphics APIs such as DirectX? I don't have experience programming against these APIs, so I don't know what kinds of memory they can access or how. But AFAIK, all the memory they access is hardware-cached. Please correct me if I'm wrong.
After KoppeKTop's answer: So does shared memory act as an automatic cache for texture memory in both CUDA and DirectX? I don't suppose having another hardware cache would make sense anyway. Does that also mean that if I use all of the shared memory in a kernel, texture memory won't get cached?
Thanks.