AI Notes: NVIDIA, AMD, and... Intel?
Here are some notes I have collected:
CUDA and OpenCL are the two main ways of programming GPUs. CUDA is by far the most developed, has the most extensive ecosystem, and is the most robustly supported by deep learning libraries. CUDA is a proprietary language created by Nvidia, so it can’t be used by GPUs from other companies. When fast.ai recommends Nvidia GPUs, it is not out of any special affinity or loyalty to Nvidia on their part, but because it is by far the best option for deep learning.
Nvidia dominates the market for GPUs, with the next closest competitor being AMD. This summer, AMD announced the release of a platform called ROCm to provide more support for deep learning. ROCm support in major deep learning libraries such as PyTorch, TensorFlow, MXNet, and CNTK is still under development.
In short: on Nvidia hardware you work with CUDA, and on anything else you fall back on OpenCL.
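To make "programming the GPU with CUDA" concrete, here is a minimal sketch (not from the original notes; file and variable names are just illustrative) of the usual pattern: write a kernel in which each thread handles one element, copy data to the device, launch a grid of thread blocks, and copy the result back. It assumes an Nvidia GPU with the CUDA toolkit installed and compiles with nvcc (e.g. nvcc vec_add.cu -o vec_add).

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Each thread computes one element of the output array.
__global__ void vec_add(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);

    // Allocate and fill host buffers.
    float* ha = (float*)malloc(bytes);
    float* hb = (float*)malloc(bytes);
    float* hc = (float*)malloc(bytes);
    for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

    // Allocate device buffers and copy the inputs to the GPU.
    float *da, *db, *dc;
    cudaMalloc(&da, bytes); cudaMalloc(&db, bytes); cudaMalloc(&dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements.
    int threads = 256, blocks = (n + threads - 1) / threads;
    vec_add<<<blocks, threads>>>(da, db, dc, n);

    // Copy the result back and spot-check one value.
    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);
    printf("c[0] = %f\n", hc[0]);  // expected 3.0

    cudaFree(da); cudaFree(db); cudaFree(dc);
    free(ha); free(hb); free(hc);
    return 0;
}
```

The OpenCL description below covers the same split: a kernel language on the device side plus a host-side API for controlling the platform.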
OpenCL (Open Computing Language) is a framework for writing programs that run across heterogeneous platforms, which may consist of CPUs, GPUs, DSPs, FPGAs, or other types of processors and hardware accelerators. OpenCL consists of a language for writing kernels and a set of APIs for defining and controlling the platform. It provides parallel computing based on both task parallelism and data parallelism.
AMD, which has long been chasing Nvidia, is pushing ROCm; here is a preliminary write-up.
ROCm is the first open-source HPC/Hyperscale-class platform for GPU computing that is also programming-language independent.
The ROCr System Runtime is language independent and makes heavy use of the Heterogeneous System Architecture (HSA) Runtime API. This approach provides a rich foundation to execute programming languages such as HCC C++ and HIP, the Khronos Group’s OpenCL, and Continuum’s Anaconda Python.
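What makes ROCm approachable from existing codebases is the HIP layer: its runtime API mirrors the CUDA runtime API almost one-to-one. A hedged sketch (not from the original notes, names illustrative) showing CUDA spellings with the corresponding HIP calls in comments; AMD's hipify tools perform roughly this renaming, and the resulting source builds with hipcc for ROCm.

```cuda
#include <cstdio>
#include <cuda_runtime.h>        // HIP: #include <hip/hip_runtime.h>

// Kernel syntax (__global__, built-in thread indices) is identical in HIP.
__global__ void scale(float* x, float s, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] *= s;
}

int main() {
    const int n = 1024;
    float h[1024];
    for (int i = 0; i < n; ++i) h[i] = 1.0f;

    float* d;
    cudaMalloc(&d, n * sizeof(float));                            // HIP: hipMalloc
    cudaMemcpy(d, h, n * sizeof(float), cudaMemcpyHostToDevice);  // HIP: hipMemcpy / hipMemcpyHostToDevice
    scale<<<(n + 255) / 256, 256>>>(d, 2.0f, n);                  // same <<<grid, block>>> launch under hipcc
    cudaMemcpy(h, d, n * sizeof(float), cudaMemcpyDeviceToHost);  // HIP: hipMemcpyDeviceToHost
    cudaFree(d);                                                  // HIP: hipFree

    printf("h[0] = %f\n", h[0]);  // expected 2.0
    return 0;
}
```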
Comments and discussion are welcome.