Practice comparison: reworking a CUDA kernel with ChatGPT o3-mini-high and DeepSeek R1



Old love does not rust. And if, like the author, you maintain open source software for which further development and maintenance have been left far too little time for far too long, you welcome any active help. For niche products like the optimization library Geneva ("The Best at the End", iX 12/2010), it is difficult to build a community, since not everyone has suitable applications for it or the right hardware resources at home. Virtual helpers such as the new DeepSeek R1 or the competing ChatGPT o3-mini promise relief, provided they can see through the complex C++ code.




Dr. Rüdiger Berlich has been working with open source since 1992 and examined its business models in his MBA. He advises companies on open source and inner source, agile practices, and change management questions.

After some experiments with the older OpenAI model GPT-4o I was skeptical, but with the help of the much more powerful o1 model I dared to attempt modernizing Geneva. The surprising results felt like Harry Potter magic or a cosmic wish-fulfillment function: prompts formulated in structured, suitable natural language usually produced the desired result. The AI's output can then be refined step by step by supplying additional information or corrections. In some cases I found no errors at all when later reviewing the generated code. Nevertheless, you should not trust it blindly.

Doubts about the use of AI are not only due to rising prices, for ChatGPT among others. Companies should also consider whether they want to send high-quality software into the cloud without being able to control its further use.

This was the teaser for our heise+ article "Practice comparison: reworking a CUDA kernel with ChatGPT o3-mini-high and DeepSeek R1". With a heise+ subscription you can read and listen to the full article.

