Nvidia’s AI chatbot now supports Google’s Gemma model, voice queries, and more

Image: Nvidia

Nvidia is updating its experimental ChatRTX chatbot with more AI models for RTX GPU owners. The chatbot, which runs locally on a Windows PC, can already use Mistral or Llama 2 to query personal documents that you feed into it, but now the list of supported AI models is growing to include Google’s Gemma, ChatGLM3, and even OpenAI’s CLIP model to make it easier to search your photos.

Nvidia first introduced ChatRTX as “Chat with RTX” in February as a demo app; you’ll need an RTX 30- or 40-series GPU with at least 8GB of VRAM to run it. The app essentially creates a local chatbot server that you can access from a browser and feed your local documents and even YouTube videos, giving you a powerful search tool complete with summaries…
