Docker, AI and the Model Runner: Glimmer of Hope or Despair?



When I first heard about this last week, I assumed it was an April Fool's joke – but it is real: Docker Desktop supports large language models (LLMs) as of version 4.40. In this article, I explain what that means, how it works, and how the whole thing can be classified strategically.




Golo Roden is the founder and CTO of the native web GmbH. He works on the conception and development of web and cloud applications as well as APIs, with a focus on event-driven and service-based distributed architectures. His guiding principle is that software development is not an end in itself, but must always serve an underlying business domain.

That is the thing with Docker. Over the past 15 years, the software "Docker" revolutionized the market almost overnight: deployment works fundamentally differently than before, Kubernetes would probably not exist in its current form, and large parts of the web and cloud world look completely different. Yet after all these years, Docker Inc. still struggles to establish a sustainable business model. That is a shame, because Docker set a great deal in motion and changed things for the better, but could never capitalize on it to the extent the company would have liked.

[Embedded video: "Local LLMs with Docker: How it works!" (in German)]

To be fair, one has to say that Docker has not acted particularly cleverly in many places – although in retrospect that is always easy to say. There were simply a lot of half-hearted approaches that were announced and later quietly abandoned again. The current announcement has to be evaluated against this background as well. I will come back to that later.

Let's start with the announcement itself. On April 1 (a date that makes many people automatically think of a joke), Docker announced a new version of Docker Desktop. This version brings a new feature, the Model Runner, with which large language models can be downloaded and executed directly via Docker. Docker is thus moving towards artificial intelligence.

So what exactly is the Docker Model Runner? One thing is important up front: this feature is part of Docker Desktop, not of the Docker Engine. It is therefore not an extension of the core product, but a function within the broader Docker Desktop package. Specifically, it extends the command line tool with a new set of subcommands under `docker model`. These commands serve the interaction with language models.

Concretely, this means: with `docker model pull`, a large language model can be downloaded to the local machine – such as Llama, Mistral, Phi, Gemma or DeepSeek. These models are packaged in the OCI format, similar to Docker images, and can therefore be obtained from Docker registries such as Docker Hub. Docker actually makes use of this: under the new namespace "ai", many of these models are available on Docker Hub. Docker thereby makes LLMs easily accessible to developers who use Docker anyway. At first glance, that makes sense.
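A minimal sketch of what pulling a model might look like. The specific model name `ai/gemma3` is an assumption for illustration; the actual names available under the "ai" namespace should be checked on Docker Hub:

```shell
# Download a model from Docker Hub's "ai" namespace
# (requires Docker Desktop >= 4.40 on macOS with Apple Silicon;
# the model name "ai/gemma3" is an assumption - check hub.docker.com)
docker model pull ai/gemma3

# Show which models are available locally
docker model list
```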

If you start a downloaded model with the command `docker model run`, there are two options: either in interactive mode, where you chat with the model directly on the command line, or in non-interactive mode, where you hand over a prompt and immediately receive a single answer. The latter is well suited for small, targeted queries without opening a session.
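The two modes might be used as follows (again, the model name is an assumed example):

```shell
# Interactive mode: opens a chat session directly on the command line
docker model run ai/gemma3

# Non-interactive mode: pass the prompt as an argument
# and receive a single answer without opening a session
docker model run ai/gemma3 "Explain the OCI image format in one sentence."
```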

In addition, Docker Desktop offers an HTTP API based on the OpenAI specification. In this way, your own applications can be connected to the language model – without having to worry about where and how the model is executed; Docker takes care of that. For completeness, it should be mentioned that this feature currently works exclusively on macOS with Apple Silicon chips. There is no support for Linux; for Windows, it has at least been announced. Under macOS, however, it basically works out of the box: as soon as Docker Desktop is installed in version 4.40, you can start directly – without additional setup or special configuration.
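A sketch of how a request against the OpenAI-compatible endpoint might look. The host, port, and endpoint path here are assumptions based on the OpenAI chat-completions convention and may need to be adapted to whatever Docker Desktop actually exposes on your machine; no API key is required, since everything runs locally:

```shell
# Assumption: the Model Runner's OpenAI-compatible API is reachable
# from the host at localhost:12434 (port and path may differ - check
# the Docker Desktop documentation for the actual endpoint)
curl http://localhost:12434/engines/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "ai/gemma3",
        "messages": [
          {"role": "user", "content": "What is Docker?"}
        ]
      }'
```

Because the API follows the OpenAI specification, existing OpenAI client libraries should work as well by pointing their base URL at the local endpoint.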

All of this feels familiar to developers who know Docker. No new tools, no new workflow – getting started is comfortable and opens up many options, be it for experimentation or as the basis for AI assistants in a local test or development environment. The potential is great, the operation is simple. Apart from the download time for the model – which can take a while depending on its size – everything is ready for use quickly. No API key is required, and usage is completely local and offline. That, too, is an advantage.

But not everything is convincing. Let's start with the small weaknesses and finish with the biggest criticism. First, the feature currently only works with Apple Silicon on macOS – a restriction that significantly reduces the target group for now. Second, there is no graphical interface. If you want to work comfortably, you have to use the command line or a third-party UI that supports the OpenAI API format – of which, admittedly, there are plenty.

Somewhat more severe is the lack of adaptability: the models can only be used in exactly the form in which they were published on Docker Hub. Separate fine-tuning or further training is currently not intended, nor is any form of scaling. This is a one-size-fits-all solution. If it fits, good – if not, the only conclusion is that the Model Runner is unsuitable for that use case. This can be enough for some scenarios, but it is far from a complete MLOps story.

My central problem, however, is the local execution. The idea sounds good – local, offline, privacy-friendly. But how does the execution work technically? Does the model run in a container? The answer is: no. The Model Runner has nothing to do with containers. It is merely a thin wrapper around the library llama.cpp, integrated into Docker Desktop. The advantage of this is that the existing Docker installation can be reused. However, there is no trace of container technology – the actual core of the Docker brand.

Of course, you can argue that native execution is faster than running in a container. That is correct. But it also means: the whole thing is limited to the local machine. There is no path to a real deployment. The feature is unsuitable for server operation, since Docker Desktop is usually not used there. So it is not a new AI infrastructure tool, but purely a convenience feature for local use. It does not go beyond being a wrapper around llama.cpp.

And here I ask myself the question of the added value. There are already tools that use llama.cpp and make it comfortable to use – for example Ollama or LM Studio, to name just two. The latter even provides a graphical UI. The world has therefore not been waiting for Docker to close this gap. The Model Runner thus looks like a pure "me too" product that arrives too late. At least I cannot see any real innovation in it.

This closes the circle: Docker (as explained at the beginning) has had difficulties for years establishing a sustainable business model. And even if the feature is currently free, it looks like an attempt to attach the keyword "AI" to the brand in some way – in the hope of profiting from the hype. The Model Runner formally fits into the tool set, but in terms of substance it does not fit the original and actual Docker concept. There would have been a great opportunity here – yet the most boring version imaginable was implemented.

There are also quite a few bugs. One example: after downloading a model, "0 MByte" is sometimes displayed as the download size. Such trivial errors do not reflect well on quality assurance. And we are not talking about a small start-up here, but about a company with hundreds of employees and revenue in the double- or perhaps triple-digit millions. If such basic things fail, it raises questions – about either the quality awareness or the effort invested.

The Model Runner seems unfinished – as if it was published only to jump on the AI bandwagon. That is not a good sign. Instead of setting new standards, Docker stays within its comfort zone. What comes out is a minimal version with little impact. This strategy has not worked in the past. What is new is that Docker is now starting to dilute its brand: instead of consistently pushing containerization further, an AI feature is presented that has nothing to do with containers. That is disappointing.

Conclusion: If you use Docker Desktop on macOS with Apple Silicon anyway, you can install the update to version 4.40 and try out the Model Runner. You will probably use it for a few minutes – as I did – and then file the feature under "quite nice, but ultimately irrelevant". At least I currently see no reason why you should use it permanently. But perhaps I simply lack the imagination.

How do you see it? Have you already tried the feature? Do you find it useful in any form? Or do you share my assessment? Write it in the comments – I am curious to hear your opinion!



