Posted on 11/14/2024 1:19:15 PM
Downloads: NVIDIA Driver, CUDA Toolkit, cuDNN.
What is a GPU?
Nvidia coined the term GPU in 1999. A GPU is the chip on a graphics card, just as a CPU is the chip on a motherboard. Does that mean graphics cards had no GPUs before 1999? Of course they did, but the chip had no common name at the time, attracted little attention, and developed relatively slowly.
Since Nvidia introduced the term, GPUs have entered a period of rapid development. Briefly, their evolution has gone through the following stages:
1. Graphics rendering only. This was the original purpose of the GPU, as its name makes clear: Graphics Processing Unit.
2. People then realized that restricting such a powerful device to graphics was wasteful, and that it should take on more work, such as floating-point arithmetic. But how? Handing floating-point computations to the GPU directly was impossible, because at the time it could only process graphics. The obvious workaround was to preprocess the floating-point work, package it as a graphics rendering task, and hand that to the GPU. This is the idea behind GPGPU (General-Purpose computing on GPUs). The drawback was that you needed some knowledge of graphics; otherwise you would not know how to do the packaging.
3. Finally, to let people without a graphics background tap the GPU's computing power, Nvidia introduced CUDA.
What is CUDA?
CUDA (Compute Unified Device Architecture) is a general-purpose parallel computing platform launched by the graphics card manufacturer NVIDIA. It comprises the CUDA instruction set architecture and the parallel compute engine inside the GPU. You develop CUDA programs in CUDA C, a language close to C, which makes it far easier to harness the GPU's computing power than packaging every computation as a graphics rendering task and handing that to the GPU.
In other words, CUDA is NVIDIA's parallel computing framework for its own GPUs: CUDA programs run only on NVIDIA GPUs, and CUDA pays off only when the problem to be solved can be decomposed into a large number of parallel computations.
Note that not all GPUs support CUDA.
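To make "developing CUDA programs in CUDA C" concrete, here is a minimal, illustrative vector-addition sketch (error checking omitted for brevity; it assumes a CUDA-capable GPU and is compiled with nvcc):

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Kernel: each GPU thread adds one pair of elements.
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;              // one million elements
    size_t bytes = n * sizeof(float);

    float *a, *b, *c;
    // Unified memory keeps the sketch short; cudaMalloc + cudaMemcpy also work.
    cudaMallocManaged(&a, bytes);
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    // Launch enough 256-thread blocks to cover all n elements.
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(a, b, c, n);
    cudaDeviceSynchronize();            // wait for the GPU to finish

    printf("c[0] = %f\n", c[0]);
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```

Notice that nothing here is phrased as a graphics task: the computation is written directly as a kernel, which is exactly the convenience CUDA introduced over the older GPGPU approach.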
What is cuDNN?
NVIDIA cuDNN is a GPU-accelerated library of primitives for deep neural networks. It emphasizes performance, ease of use, and low memory overhead, and it can be integrated into higher-level machine learning frameworks such as Google's TensorFlow and UC Berkeley's popular Caffe. Its simple drop-in design lets developers focus on designing and implementing neural network models rather than tuning performance by hand, while still delivering high-performance modern parallel computing on the GPU.
cuDNN is not strictly required to train a model on a GPU, but it is generally used as an acceleration library.
What is the relationship between CUDA and cuDNN?
Think of CUDA as a workbench stocked with many tools: hammers, screwdrivers, and so on. cuDNN is a CUDA-based GPU-accelerated library for deep learning; it is one more tool for that bench, say a wrench. The workbench did not come with a wrench, so to run a deep neural network on CUDA you install cuDNN, just as you buy a wrench when you need to turn a nut. With it, the GPU can do deep neural network work far faster than a CPU.