
LLM Conversation

The following example shows how to integrate in-browser models (e.g. via WebLlm or Wllama) into React ChatBotify. It leverages the LLM Connector Plugin, which is maintained separately under the React ChatBotify Plugins organization. This example also uses the WebLlmProvider and WllamaProvider, both of which ship by default with the LLM Connector Plugin. If you require support with the plugin, please reach out on the plugins Discord instead.
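
Note that the live editor below assumes ChatBot, the Flow and Message types, and the plugin exports are already in scope. In your own project you would install and import them yourself; a minimal sketch, assuming the plugin is published as @rcb-plugins/llm-connector and exports its default providers (check the plugin's README for the exact names):

// npm install react-chatbotify @rcb-plugins/llm-connector
import ChatBot, { Flow, Message } from "react-chatbotify";
import LlmConnector, {
	WebLlmProvider,
	WllamaProvider,
} from "@rcb-plugins/llm-connector";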

tip

The plugin also comes with other default providers, which you can try out in the OpenAI Integration Example and Gemini Integration Example.

tip

If you expect your LLM responses to contain markdown, consider using the Markdown Renderer Plugin as well!
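
As a sketch, assuming the renderer is published as @rcb-plugins/markdown-renderer, both plugins can simply be passed together:

import MarkdownRenderer from "@rcb-plugins/markdown-renderer";

const plugins = [LlmConnector(), MarkdownRenderer()];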

caution

Running models in the browser can be sluggish (especially if a large model is chosen). In production, you should pick a reasonably sized model or proxy your requests to a backend. A lightweight demo project for an LLM proxy can be found here. You may also refer to this article for more details.
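
For illustration, below is a minimal sketch of the backend half of such a proxy, using Express and the built-in fetch of Node 18+; the route, upstream URL, and LLM_API_KEY variable are placeholders, and the demo project linked above is the fuller reference:

import express from "express";

const app = express();
app.use(express.json());

// Forward chat requests to the upstream LLM API so the API key never
// reaches the browser. The upstream URL and payload shape here are
// illustrative placeholders.
app.post("/api/chat", async (req, res) => {
	const upstream = await fetch("https://api.example-llm.com/v1/chat", {
		method: "POST",
		headers: {
			"Content-Type": "application/json",
			Authorization: `Bearer ${process.env.LLM_API_KEY}`,
		},
		body: JSON.stringify(req.body),
	});
	res.status(upstream.status);
	res.send(await upstream.text());
});

app.listen(3001);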

Live Editor
const MyChatBot = () => {
	// initialize the plugin
	const plugins = [LlmConnector()];

	// checks user message stop condition to end llm conversation
	const onUserMessageCheck = async (message: Message) => {
		if (
			typeof message.content === 'string' &&
			message.content.toUpperCase() === 'RESTART'
		) {
			return 'start';
		}
		return null;
	};

	// checks key down stop condition to end llm conversation
	const onKeyDownCheck = async (event: KeyboardEvent) => {
		if (event.key === 'Escape') {
			return 'start';
		}
		return null;
	};

	// example flow for testing
	const flow: Flow = {
		start: {
			message: "Hello, pick a model runtime to get started!",
			options: ["WebLlm", "Wllama"],
			chatDisabled: true,
			path: async (params) => {
				await params.simulateStreamMessage("Type 'RESTART' or hit 'ESC` to pick another runtime!");
				await params.simulateStreamMessage("Ask away!");
				return params.userInput.toLowerCase();
			},
		},
		webllm: {
			llmConnector: {
				// provider configuration guide:
				// https://github.com/React-ChatBotify-Plugins/llm-connector/blob/main/docs/providers/WebLlm.md
				provider: new WebLlmProvider({
					model: 'Qwen2-0.5B-Instruct-q4f16_1-MLC',
				}),
				outputType: 'character',
				stopConditions: {
					onUserMessage: onUserMessageCheck,
					onKeyDown: onKeyDownCheck,
				},
			},
		},
		wllama: {
			llmConnector: {
				// provider configuration guide:
				// https://github.com/React-ChatBotify-Plugins/llm-connector/blob/main/docs/providers/Wllama.md
				provider: new WllamaProvider({
					modelUrl: 'https://huggingface.co/HuggingFaceTB/SmolLM2-360M-Instruct-GGUF/resolve/main/smollm2-360m-instruct-q8_0.gguf',
					loadModelConfig: {
						n_ctx: 8192,
					},
				}),
				outputType: 'character',
				stopConditions: {
					onUserMessage: onUserMessageCheck,
					onKeyDown: onKeyDownCheck,
				},
			},
		},
	};

	return (
		<ChatBot
			settings={{general: {embedded: true}, chatHistory: {storageKey: "example_llm_conversation"}}}
			plugins={plugins}
			flow={flow}
		/>
	);
};

render(<MyChatBot/>)