📎 Extending #SemanticKernel using OllamaSharp for chat and text completion

Hi!

In a previous post, I shared how to host and chat with a Llama 2 model running locally with Ollama.

Then I also found OllamaSharp (NuGet package and repo).

_OllamaSharp is a .NET binding for the Ollama API, making it easy to interact with Ollama using your favorite .NET languages._

So I decided to try it and build Chat Completion and Text Generation implementations for Semantic Kernel using this library.

The full test is a console app that uses both services with Semantic Kernel.

Text Generation Service

The Text Generation Service is the easy one: just implement the Microsoft.SemanticKernel.TextGeneration.ITextGenerationService interface.
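
Here is a minimal sketch of such a service. The Semantic Kernel members are the real interface ones; the OllamaSharp calls (`OllamaApiClient`, `GetCompletion`) are assumptions based on the library version available at the time of writing, so check the package version you install.

```csharp
using System.Runtime.CompilerServices;
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.TextGeneration;
using OllamaSharp;

public class OllamaTextGenerationService : ITextGenerationService
{
    private readonly OllamaApiClient _ollama;

    // Assumption: OllamaApiClient takes the endpoint URL and a default model name.
    public OllamaTextGenerationService(string modelId, string baseUrl = "http://localhost:11434")
        => _ollama = new OllamaApiClient(baseUrl, modelId);

    public IReadOnlyDictionary<string, object?> Attributes { get; } = new Dictionary<string, object?>();

    public async Task<IReadOnlyList<TextContent>> GetTextContentsAsync(
        string prompt,
        PromptExecutionSettings? executionSettings = null,
        Kernel? kernel = null,
        CancellationToken cancellationToken = default)
    {
        // Assumption: GetCompletion returns an object exposing the full response text.
        var completion = await _ollama.GetCompletion(prompt, context: null, cancellationToken);
        return new List<TextContent> { new(completion.Response) };
    }

    public async IAsyncEnumerable<StreamingTextContent> GetStreamingTextContentsAsync(
        string prompt,
        PromptExecutionSettings? executionSettings = null,
        Kernel? kernel = null,
        [EnumeratorCancellation] CancellationToken cancellationToken = default)
    {
        // Naive streaming: return the whole completion as a single chunk.
        var contents = await GetTextContentsAsync(prompt, executionSettings, kernel, cancellationToken);
        foreach (var content in contents)
        {
            yield return new StreamingTextContent(content.Text);
        }
    }
}
```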

Chat Completion Service

Chat completion requires implementing the IChatCompletionService interface.
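
Again a sketch rather than a definitive implementation, with the same assumed OllamaSharp calls: the simplest approach is to flatten the ChatHistory (system, user, and assistant turns) into a single prompt and reuse the completion endpoint.

```csharp
using System.Runtime.CompilerServices;
using System.Text;
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.ChatCompletion;
using OllamaSharp;

public class OllamaChatCompletionService : IChatCompletionService
{
    private readonly OllamaApiClient _ollama;

    // Assumption: OllamaApiClient takes the endpoint URL and a default model name.
    public OllamaChatCompletionService(string modelId, string baseUrl = "http://localhost:11434")
        => _ollama = new OllamaApiClient(baseUrl, modelId);

    public IReadOnlyDictionary<string, object?> Attributes { get; } = new Dictionary<string, object?>();

    public async Task<IReadOnlyList<ChatMessageContent>> GetChatMessageContentsAsync(
        ChatHistory chatHistory,
        PromptExecutionSettings? executionSettings = null,
        Kernel? kernel = null,
        CancellationToken cancellationToken = default)
    {
        // Flatten the chat history into one prompt the completion endpoint can handle.
        var prompt = new StringBuilder();
        foreach (var message in chatHistory)
        {
            prompt.AppendLine($"{message.Role}: {message.Content}");
        }
        prompt.AppendLine("assistant:");

        // Assumption: same OllamaSharp completion call as in the text service.
        var completion = await _ollama.GetCompletion(prompt.ToString(), context: null, cancellationToken);
        return new List<ChatMessageContent> { new(AuthorRole.Assistant, completion.Response) };
    }

    public async IAsyncEnumerable<StreamingChatMessageContent> GetStreamingChatMessageContentsAsync(
        ChatHistory chatHistory,
        PromptExecutionSettings? executionSettings = null,
        Kernel? kernel = null,
        [EnumeratorCancellation] CancellationToken cancellationToken = default)
    {
        // Naive streaming: return the full reply as a single chunk.
        var messages = await GetChatMessageContentsAsync(chatHistory, executionSettings, kernel, cancellationToken);
        foreach (var message in messages)
        {
            yield return new StreamingChatMessageContent(message.Role, message.Content);
        }
    }
}
```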

Test Chat Completion and Text Generation Services

With both services implemented, we can now use Semantic Kernel to access them.

The following code:

- Creates two services, text and chat, both with the OllamaSharp implementation.
- Creates a Semantic Kernel builder, registers both services, and builds a kernel.
- Uses the kernel to run a text generation sample, and then a chat history sample.
- In the chat sample, it also uses a system message to define the chat behavior for the conversation.
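
A minimal sketch of that console app, assuming the two service classes sketched above, Semantic Kernel 1.x, and a llama2 model already pulled into the local Ollama instance:

```csharp
using Microsoft.Extensions.DependencyInjection;
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.ChatCompletion;
using Microsoft.SemanticKernel.TextGeneration;

// Create the two Ollama-backed services (class names are from the sketches above).
var textService = new OllamaTextGenerationService(modelId: "llama2");
var chatService = new OllamaChatCompletionService(modelId: "llama2");

// Create a Semantic Kernel builder, register both services, and build a kernel.
var builder = Kernel.CreateBuilder();
builder.Services.AddSingleton<ITextGenerationService>(textService);
builder.Services.AddSingleton<IChatCompletionService>(chatService);
var kernel = builder.Build();

// Text generation sample.
var text = kernel.GetRequiredService<ITextGenerationService>();
var generated = await text.GetTextContentsAsync("Write a haiku about running LLMs locally.");
Console.WriteLine(generated[0].Text);

// Chat history sample, with a system message defining the chat behavior.
var chat = kernel.GetRequiredService<IChatCompletionService>();
var history = new ChatHistory();
history.AddSystemMessage("You are a helpful assistant that replies in one short paragraph.");
history.AddUserMessage("What is Semantic Kernel?");
var reply = await chat.GetChatMessageContentsAsync(history);
Console.WriteLine(reply[0].Content);
```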
This is just a test; there are a lot of improvements that could be made here.

The full code is available here: https://github.com/elbruno/semantickernel-localLLMs (the repo's main readme still needs to be updated).

Happy coding!

Greetings

El Bruno

More posts on my blog, ElBruno.com.

More info at https://beacons.ai/elbruno
