
CUDA, which stands for Compute Unified Device Architecture, is a parallel computing platform and application programming interface (API) created by Nvidia. It allows software developers to use Nvidia GPUs for general-purpose processing – an approach known as GPGPU (General-Purpose computing on Graphics Processing Units).

CUDA gives developers direct access to the virtual instruction set and memory of the parallel computational elements in Nvidia GPUs. This enables dramatic increases in computing performance by harnessing the GPU for more than graphics rendering. CUDA is widely used in scientific and industrial fields for computationally intensive tasks such as deep learning, numerical analysis, and 3D modeling.
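
As a rough sketch of what this looks like in practice, the example below (a hypothetical `vectorAdd` kernel, not drawn from any particular project) adds two arrays on the GPU. The `__global__` function runs in parallel across many GPU threads, while the host code allocates device memory, copies data to the GPU, and launches the kernel using CUDA's `<<<blocks, threads>>>` syntax.

```cuda
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Kernel: each GPU thread adds one pair of elements.
__global__ void vectorAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        c[i] = a[i] + b[i];
    }
}

int main() {
    const int n = 1 << 20;              // 1M elements (illustrative size)
    size_t bytes = n * sizeof(float);

    // Allocate and initialize host (CPU) memory.
    float *h_a = (float *)malloc(bytes);
    float *h_b = (float *)malloc(bytes);
    float *h_c = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { h_a[i] = 1.0f; h_b[i] = 2.0f; }

    // Allocate device (GPU) memory and copy the inputs over.
    float *d_a, *d_b, *d_c;
    cudaMalloc(&d_a, bytes);
    cudaMalloc(&d_b, bytes);
    cudaMalloc(&d_c, bytes);
    cudaMemcpy(d_a, h_a, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_b, h_b, bytes, cudaMemcpyHostToDevice);

    // Launch enough thread blocks to cover all n elements.
    int threadsPerBlock = 256;
    int blocks = (n + threadsPerBlock - 1) / threadsPerBlock;
    vectorAdd<<<blocks, threadsPerBlock>>>(d_a, d_b, d_c, n);

    // Copy the result back and check one element.
    cudaMemcpy(h_c, d_c, bytes, cudaMemcpyDeviceToHost);
    printf("c[0] = %f\n", h_c[0]);      // expected: 3.0

    cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
    free(h_a); free(h_b); free(h_c);
    return 0;
}
```

Compiled with Nvidia's `nvcc` compiler, this host-and-device split is the basic pattern most CUDA programs follow, whatever the actual workload.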
