Introducing Jan

I really like LM Studio and how easy it makes running local models on my personal computer. The project keeps evolving, supporting an expanding range of models and features. However, as I noted several months ago, it is not open source.

As we continue this blog series, let’s explore a fully open-source alternative to LM Studio: Jan, a project from Southeast Asia.

Jan: A Highly Versatile, Open Platform

Jan’s slogan is “Turn your computer into an AI computer.” Designed to run efficiently on anything from a single PC to a multi-GPU cluster, it is available as a desktop application for macOS, Windows, and Linux. It can also be deployed as a server, either in the Cloud or on-premises.

This platform allows you to run LLMs locally and offline, or connect to remote GenAI APIs. Jan’s philosophy is rooted in the “local-first” and “file over app” approaches.

Jan incorporates a lightweight, built-in inference server called Nitro, which supports both the llama.cpp and NVIDIA TensorRT-LLM engines. This means most open LLMs distributed in the GGUF format are supported. Jan’s Model Hub is designed for easy installation of pre-configured models, but you can also install virtually any model from Hugging Face, or even your own.

Jan goes a step further by integrating with other local engines such as LM Studio and Ollama.

Jan’s interoperability even extends to exposing these LLMs through a server API. This API is compatible with the widely recognized OpenAI API, simplifying its integration with many components of the GenAI ecosystem!
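As a minimal sketch of what this compatibility buys you, the official OpenAI Node.js client can simply be pointed at Jan. Two assumptions here: Jan’s local API server is running on its default http://localhost:1337/v1 endpoint, and a model with the ID llama3 is installed; adjust both to your setup.

```typescript
// Minimal sketch: the official OpenAI Node.js client pointed at Jan.
// Assumptions: Jan's local API server listens on http://localhost:1337/v1
// (adjust if you changed it) and a model with the ID "llama3" is installed.
import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "http://localhost:1337/v1", // Jan's OpenAI-compatible endpoint
  apiKey: "not-needed-locally",        // the SDK requires a value; Jan ignores it
});

const completion = await client.chat.completions.create({
  model: "llama3", // must match a model installed in Jan
  messages: [{ role: "user", content: "Say hello from Jan!" }],
  max_tokens: 200,
});

console.log(completion.choices[0].message.content);
```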

Importantly, Jan is free software (think ‘freedom’), licensed under the GNU Affero General Public License (AGPL). This grants anyone the liberty to run it offline, self-host it, or modify it. Note that this license extends its copyleft obligations to network use as well, ensuring that Cloud/SaaS providers contribute back to the core project and community.

Jan’s open-source philosophy also shows in its extension-based design and TypeScript API, which make it easy for anyone to customize and extend.
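To give a feel for this, here is a hypothetical sketch of a minimal extension. Everything in it is an assumption: the @janhq/core package name, the BaseExtension class, and the onLoad/onUnload lifecycle hooks follow the pattern I have seen in the project, but check Jan’s extension documentation for the real API before building on this.

```typescript
// Hypothetical sketch of a minimal Jan extension in TypeScript.
// Assumptions: a core package exposing a base extension class with
// onLoad()/onUnload() lifecycle hooks -- verify names against Jan's docs.
import { BaseExtension } from "@janhq/core";

export default class HelloExtension extends BaseExtension {
  // Called when Jan activates the extension.
  onLoad(): void {
    console.log("HelloExtension loaded");
  }

  // Called when Jan deactivates the extension.
  onUnload(): void {
    console.log("HelloExtension unloaded");
  }
}
```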

Now that we’ve introduced Jan, let’s demonstrate how easy it is to use.

Installation and Use of Local Models

You can easily browse and install pre-configured models from the Model Hub.

I prefer to import and reuse the models I previously downloaded for LM Studio, which saves disk space.


From the Thread panel, I can simply select the model (Llama 3, in this case), define the assistant and model parameters (I just tweaked the Max Tokens value to 200), and start a new Thread.


Using Jan as a Server

Configuring Jan Desktop as a Local API Server and monitoring its logs is straightforward.

Next, you can make OpenAI-like API calls to the server, as shown here from a Jupyter JavaScript Notebook.
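For instance, a raw HTTP call (no SDK needed) could look like the sketch below. The endpoint URL and the llama3 model ID are assumptions matching my local setup; adapt them to yours.

```typescript
// Minimal sketch of a raw OpenAI-style call to Jan's local server, as it
// could appear in a JavaScript/TypeScript notebook cell.
// Assumptions: server reachable at http://localhost:1337 and a model with
// the ID "llama3" installed.
const response = await fetch("http://localhost:1337/v1/chat/completions", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    model: "llama3",
    messages: [{ role: "user", content: "Summarize what Jan is in one sentence." }],
    max_tokens: 200,
  }),
});

const data = await response.json();
console.log(data.choices[0].message.content);
```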

(For more information on using a JavaScript Notebook with GenAI APIs, refer to this previous article and my GenAI’s Lamp YouTube Series).

Jan Server can also be deployed independently (without the GUI) in Docker or Kubernetes, for on-premises or Cloud use. Detailed instructions for AWS, GCP, and Azure are available in the project’s documentation.

Conclusion

The Jan project keeps expanding, with many more enhancements and features on the roadmap, all visible thanks to its build-in-the-open philosophy.

After activating the Experimental Mode, you can preview upcoming features, such as the promising Retrieval tool, designed to implement the RAG (Retrieval-Augmented Generation) pattern and let you chat with PDF documents.
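For readers unfamiliar with the pattern, here is a deliberately simplified, self-contained sketch of what RAG boils down to: retrieve the document chunks most similar to the question, then augment the prompt with them before generation. The embed() function below is a toy stand-in of my own; a real implementation (like Jan’s Retrieval tool) would use an actual embedding model and a vector store.

```typescript
// Deliberately simplified sketch of the RAG (Retrieval-Augmented Generation)
// pattern. Not Jan's actual implementation.

// Toy stand-in for a real embedding model: character-frequency vectors.
async function embed(text: string): Promise<number[]> {
  const vec = new Array(64).fill(0);
  for (const ch of text.toLowerCase()) vec[ch.charCodeAt(0) % 64] += 1;
  return vec;
}

function cosineSimilarity(a: number[], b: number[]): number {
  const dot = a.reduce((sum, x, i) => sum + x * b[i], 0);
  const norm = (v: number[]) => Math.sqrt(v.reduce((s, x) => s + x * x, 0));
  return dot / (norm(a) * norm(b));
}

// 1. Retrieve: rank document chunks by similarity to the question.
async function retrieve(question: string, chunks: string[], topK = 3): Promise<string[]> {
  const qVec = await embed(question);
  const scored = await Promise.all(
    chunks.map(async (chunk) => ({
      chunk,
      score: cosineSimilarity(qVec, await embed(chunk)),
    })),
  );
  return scored.sort((a, b) => b.score - a.score).slice(0, topK).map((s) => s.chunk);
}

// 2. Augment: build a prompt that grounds the model in the retrieved context.
// The result would then be sent as a chat message, as in the earlier examples.
async function buildAugmentedPrompt(question: string, chunks: string[]): Promise<string> {
  const context = (await retrieve(question, chunks)).join("\n---\n");
  return `Answer using only this context:\n${context}\n\nQuestion: ${question}`;
}
```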

So definitely stay tuned for updates!
