
NVIDIA launches its own personalised AI chatbot – Chat with RTX

Nvidia, the global chipmaker, late on Monday announced the launch of its generative AI chatbot for Windows PCs – Chat with RTX.

The generative AI (GenAI) race just got more intense and exciting. Nvidia, the global chipmaker, late on Monday announced the launch of its generative AI chatbot for Windows PCs – Chat with RTX. The chatbot gives enterprises the potential to leverage AI in employees’ local environments to bolster productivity. With Chat with RTX, users are not required to access GenAI tools on platforms hosted by OpenAI.

Nvidia released the demo app as a free download. Users can personalise Chat with RTX with their own content and customise the data sources that feed the bot’s large language models. This keeps the user’s data private on their PC while letting them search for answers to questions based on that data, as the sketch below illustrates.
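Chat with RTX is a packaged demo app, and Nvidia has not published a programming API for it; the following is only a minimal, illustrative sketch of the underlying idea of indexing personal files entirely on the local machine. It assumes the open-source sentence-transformers library and a hypothetical folder of notes called `my_notes`.

```python
# Illustrative sketch only: not the Chat with RTX implementation.
# Index a folder of personal text files locally, so nothing leaves the PC.
from pathlib import Path

from sentence_transformers import SentenceTransformer

# A small embedding model that runs entirely on the local machine.
model = SentenceTransformer("all-MiniLM-L6-v2")

# Hypothetical folder of personal notes the user "points" the bot at.
docs_dir = Path("my_notes")
documents = [p.read_text(encoding="utf-8") for p in docs_dir.glob("*.txt")]

# Embeddings are computed and stored locally, forming the searchable index.
doc_embeddings = model.encode(documents, convert_to_numpy=True)
```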

In a blog post about the chatbot, Jesse Clayton, Nvidia’s product manager, stated that because Chat with RTX runs locally on Windows RTX PCs and workstations, results are fast and the user’s data stays on the device.

Clayton explained that instead of relying on cloud-based LLM services, the bot lets users process sensitive data on the local PC without the need to share it with a third party. Chat with RTX lets users choose between two open-source LLMs – Llama 2 or Mistral – and requires an Nvidia GeForce RTX 30 Series GPU or higher with at least 8GB of video RAM, running on Windows 10 or 11 with the latest Nvidia GPU drivers. Chat with RTX runs on GeForce-powered Windows PCs using retrieval-augmented generation (RAG), Nvidia RTX acceleration, and Nvidia TensorRT-LLM software.
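Continuing the sketch above, this is roughly what the retrieval-augmented step looks like in principle: rank the locally indexed documents against the user’s question and stitch the best matches into a prompt for a locally hosted model. In the actual app, Nvidia says generation runs through TensorRT-LLM on the RTX GPU; here the final generation call is deliberately left as a placeholder, and the `retrieve` helper is an assumption for illustration.

```python
# Illustrative RAG retrieval step, reusing model/documents/doc_embeddings
# from the indexing sketch above.
import numpy as np


def retrieve(query: str, top_k: int = 2) -> list[str]:
    """Rank local documents by cosine similarity to the query embedding."""
    q = model.encode([query], convert_to_numpy=True)[0]
    sims = doc_embeddings @ q / (
        np.linalg.norm(doc_embeddings, axis=1) * np.linalg.norm(q)
    )
    return [documents[i] for i in np.argsort(sims)[::-1][:top_k]]


query = "What was the restaurant my partner recommended while in Las Vegas?"
context = "\n\n".join(retrieve(query))

# The retrieved context and the question are combined into one prompt, which a
# local model (e.g. Llama 2 or Mistral) would then answer on-device.
prompt = (
    "Answer using only the context below.\n\n"
    f"Context:\n{context}\n\n"
    f"Question: {query}"
)
print(prompt)
```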

Clayton said: “Rather than searching through notes or saved content, users can simply type queries. One could ask, ‘What was the restaurant my partner recommended while in Las Vegas?’ and Chat with RTX will scan local files the user points it to and provide the answer with context.”