I tried to read the article but I am too stupid. I think Nvidia has a proprietary hardware/software combo that is very fast, but because they “own it” they want money, while other companies are using it without paying… Am I close?
You can use graphics cards for more than just graphics, e.g. for AI. Nvidia is a leader in facilitating that.
They offer an SDK (a software toolkit, called CUDA) for developing programs that use their GPUs to best effect. People have begun making “translation layers” that let such CUDA programs run on non-Nvidia hardware. (I have no idea how any of this works.) The license of that SDK now forbids reverse engineering its output to create these compatibility tools.
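To give a concrete sense of what a program built with that SDK looks like, here is a minimal, illustrative sketch (my own example, not from the article): a tiny CUDA program that adds two arrays on the GPU. The kernel syntax and every cuda* call in it come from Nvidia’s toolkit and runtime.

    // Minimal CUDA sketch (illustrative only): add two arrays on the GPU.
    #include <cstdio>
    #include <cuda_runtime.h>

    // Kernel: runs on the GPU, one thread per array element.
    __global__ void add(const float* a, const float* b, float* out, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) out[i] = a[i] + b[i];
    }

    int main() {
        const int n = 1024;
        float *a, *b, *out;
        // Managed memory is visible to both CPU and GPU; it keeps the sketch short.
        cudaMallocManaged(&a, n * sizeof(float));
        cudaMallocManaged(&b, n * sizeof(float));
        cudaMallocManaged(&out, n * sizeof(float));
        for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

        // Launch enough 256-thread blocks to cover all n elements.
        add<<<(n + 255) / 256, 256>>>(a, b, out, n);
        cudaDeviceSynchronize();

        printf("out[0] = %f\n", out[0]);  // expect 3.000000
        cudaFree(a); cudaFree(b); cudaFree(out);
        return 0;
    }

Out of the box, code like this only runs on Nvidia GPUs, which is what the translation layers try to change.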
Unless I am very mistaken, Nvidia can’t outright ban translation layers or stop people from making them; this clause just raises a legal barrier to creating them.
Some programs will probably remain CUDA-specific because of that clause. That means Nvidia stays the gatekeeper for those programs and can charge extra for access.
It’s not about CUDA being fast; it’s about it only being available for Nvidia GPUs. As long as software for things like machine learning uses CUDA, you need to buy an Nvidia GPU to run it. A translation layer would let the same software run on other companies’ GPUs, which means people wouldn’t be forced to buy Nvidia’s GPUs anymore.
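To sketch the general idea of a translation layer (a heavily simplified illustration, not how any real project such as ZLUDA actually works): ship your own library that exports the functions a CUDA program calls and forwards each one to another vendor’s API. The example below forwards to AMD’s HIP runtime, whose calls closely mirror CUDA’s; error codes are shown as plain ints to keep it short.

    // Illustrative shim only: satisfy CUDA runtime calls with AMD's HIP runtime.
    #include <hip/hip_runtime.h>

    // A CUDA program linked against this library still calls "cudaMalloc",
    // but the allocation actually happens through the non-Nvidia runtime.
    extern "C" int cudaMalloc(void** devPtr, size_t size) {
        return hipMalloc(devPtr, size);
    }

    extern "C" int cudaFree(void* devPtr) {
        return hipFree(devPtr);
    }

A real layer has to cover the whole API surface plus the compiled GPU kernels themselves, which is the genuinely hard part, and figuring that out is presumably where the reverse-engineering clause bites.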
Thanks