LLM Conversation

The following is an example showing how to integrate in-browser models (e.g. via WebLLM) into React ChatBotify. It leverages the LLM Connector Plugin, which is maintained separately under the React ChatBotify Plugins organization. This example uses the WebLlmProvider, which ships by default with the LLM Connector Plugin. If you require support with the plugin, please reach out on the plugins' Discord instead.

tip

The plugin also comes with other default providers, which you can try out in the OpenAI Integration Example and Gemini Integration Example. A brief sketch of swapping providers is shown below.
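
As a rough sketch, swapping to another provider only changes the provider instance in the flow block. The OpenaiProvider options below (model, apiKey) are assumptions drawn from the plugin's provider docs, so verify them there before use:

import ChatBot, { Flow } from "react-chatbotify";
import LlmConnector, { OpenaiProvider } from "@rcb-plugins/llm-connector";

// hypothetical sketch: same flow shape as the WebLLM example below, but with
// OpenaiProvider swapped in (constructor options are assumptions, not verified)
const flow: Flow = {
	start: {
		message: "Hello, feel free to ask away!",
		chatDisabled: true,
		transition: 0,
		path: "openai",
	},
	openai: {
		llmConnector: {
			provider: new OpenaiProvider({
				model: "gpt-4.1-nano", // hypothetical model name
				apiKey: "YOUR_OPENAI_API_KEY", // avoid exposing real keys in the browser
			}),
			outputType: "character",
		},
	},
};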

tip

If you expect your LLM responses to contain markdown, consider using the Markdown Renderer Plugin as well!
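
As a minimal sketch, using both plugins together just means registering them in the same plugins array (the package names below are assumptions based on the plugins' READMEs):

import LlmConnector from "@rcb-plugins/llm-connector";
import MarkdownRenderer from "@rcb-plugins/markdown-renderer";

// both plugins are passed to the ChatBot component via the same plugins array
const plugins = [LlmConnector(), MarkdownRenderer()];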

caution

Running models in the browser can be sluggish (especially if a large model is chosen). In production, you should pick a reasonably sized model or proxy your requests to a backend. A lightweight demo project for an LLM proxy can be found here. You may also refer to this article for more details.
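
For illustration, a minimal proxy can be a single server route that forwards chat requests upstream while keeping the API key out of the browser. The sketch below is a hypothetical Express setup (the /api/chat route and upstream URL are illustrative only, not taken from the linked demo project):

import express from "express";

const app = express();
app.use(express.json());

// hypothetical route: forwards the request body to an upstream LLM API,
// so the API key lives only on the server
app.post("/api/chat", async (req, res) => {
	const upstream = await fetch("https://api.openai.com/v1/chat/completions", {
		method: "POST",
		headers: {
			"Content-Type": "application/json",
			Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
		},
		body: JSON.stringify(req.body),
	});
	res.status(upstream.status).json(await upstream.json());
});

app.listen(3001);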

Live Editor
const MyChatBot = () => {
	// initialize the plugin (in a real project, LlmConnector and WebLlmProvider
	// come from @rcb-plugins/llm-connector, and ChatBot from react-chatbotify)
	const plugins = [LlmConnector()];

	// example flow for testing
	const flow: Flow = {
		start: {
			message: "Hello, feel free to ask away!",
			chatDisabled: true,
			transition: 0,
			path: "webllm",
		},
		webllm: {
			llmConnector: {
				// provider configuration guide:
				// https://github.com/React-ChatBotify-Plugins/llm-connector/blob/main/docs/providers/WebLlm.md
				provider: new WebLlmProvider({
					model: 'Qwen2-0.5B-Instruct-q4f16_1-MLC',
				}),
				outputType: 'character',
			},
		},
	};

	return (
		<ChatBot
			settings={{general: {embedded: true}, chatHistory: {storageKey: "example_llm_conversation"}}}
			plugins={plugins}
			flow={flow}
		/>
	);
};

render(<MyChatBot/>)