# Chat

> Develop AI-powered chat interfaces with streaming responses, reasoning capabilities, and tool calling.

Bitrix24 UI provides a set of components designed to build AI-powered chat interfaces. They integrate seamlessly with the [Vercel AI SDK](https://ai-sdk.dev/) for streaming responses, reasoning, tool calling, and more.

## Components

| Component | Description |
| --- | --- |
| ChatMessages | Scrollable message list with auto-scroll and loading indicator. |
| ChatMessage | Individual message bubble with avatar, actions, and slots. |
| ChatPrompt | Enhanced textarea for submitting prompts. |
| ChatPromptSubmit | Submit button with automatic status handling. |
| ChatReasoning | Collapsible block for AI reasoning / thinking process. |
| ChatTool | Collapsible block for AI tool invocation status. |
| ChatShimmer | Text shimmer animation for streaming states. |
| ChatPalette | Layout wrapper for embedding chat in modals or drawers. |
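
Most of these components are designed to be composed together. `ChatPalette` and `ChatShimmer` are not demonstrated in the examples below, so here is a rough sketch of how they might fit together. The slot layout and conditions shown are assumptions based on the component descriptions above — check each component's documentation for the exact API:

```vue
<script setup lang="ts">
import { Chat } from '@ai-sdk/vue'

const chat = new Chat({})
const input = ref('')

function onSubmit() {
  chat.sendMessage({ text: input.value })
  input.value = ''
}
</script>

<template>
  <!-- ChatPalette wraps the messages and prompt for modal/drawer layouts -->
  <B24ChatPalette>
    <B24ChatMessages :messages="chat.messages" :status="chat.status">
      <!-- ChatShimmer can indicate a response that has been submitted but not yet streamed -->
      <B24ChatShimmer v-if="chat.status === 'submitted'" />
    </B24ChatMessages>

    <B24ChatPrompt v-model="input" @submit="onSubmit">
      <B24ChatPromptSubmit :status="chat.status" @stop="chat.stop()" />
    </B24ChatPrompt>
  </B24ChatPalette>
</template>
```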

## Installation

The Chat components are designed to be used with the [Vercel AI SDK](https://ai-sdk.dev/), specifically the [`Chat`](https://ai-sdk.dev/docs/reference/ai-sdk-ui/use-chat) class for managing chat state and streaming responses.

Install the required dependencies:

```bash
pnpm add ai @ai-sdk/gateway @ai-sdk/vue
```

```bash
yarn add ai @ai-sdk/gateway @ai-sdk/vue
```

```bash
npm install ai @ai-sdk/gateway @ai-sdk/vue
```

```bash
bun add ai @ai-sdk/gateway @ai-sdk/vue
```

## Server Setup

Create a server API endpoint to handle chat requests using [`streamText`](https://ai-sdk.dev/docs/reference/ai-sdk-core/stream-text). You can use the [Vercel AI Gateway](https://vercel.com/ai-gateway) to access AI models through a centralized endpoint:

```ts [server/api/chat.post.ts]
import { streamText, convertToModelMessages } from 'ai'
import { gateway } from '@ai-sdk/gateway'

export default defineEventHandler(async (event) => {
  const { messages } = await readBody(event)

  return streamText({
    model: gateway('anthropic/claude-sonnet-4.6'),
    maxOutputTokens: 10000,
    system: 'You are a helpful assistant.',
    messages: await convertToModelMessages(messages)
  }).toUIMessageStreamResponse()
})
```

### Reasoning

To enable [reasoning](https://ai-sdk.dev/docs/ai-sdk-ui/chatbot#reasoning), configure `providerOptions` for your provider ([Anthropic](https://ai-sdk.dev/docs/guides/providers/anthropic#reasoning), [Google](https://ai-sdk.dev/providers/ai-sdk-providers/google-generative-ai#thinking), [OpenAI](https://ai-sdk.dev/docs/guides/providers/openai#reasoning)):

```ts [server/api/chat.post.ts]
import { streamText, convertToModelMessages } from 'ai'
import { gateway } from '@ai-sdk/gateway'

export default defineEventHandler(async (event) => {
  const { messages } = await readBody(event)

  return streamText({
    model: gateway('anthropic/claude-sonnet-4.6'),
    maxOutputTokens: 10000,
    system: 'You are a helpful assistant.',
    messages: await convertToModelMessages(messages),
    providerOptions: {
      anthropic: {
        thinking: {
          type: 'adaptive'
        },
        effort: 'low'
      },
      google: {
        thinkingConfig: {
          includeThoughts: true,
          thinkingLevel: 'low'
        }
      },
      openai: {
        reasoningEffort: 'low',
        reasoningSummary: 'detailed'
      }
    }
  }).toUIMessageStreamResponse()
})
```

### Web Search

Some providers offer built-in web search tools: [Anthropic](https://ai-sdk.dev/docs/guides/providers/anthropic#web-search-tool), [Google](https://ai-sdk.dev/providers/ai-sdk-providers/google-generative-ai#google-search), [OpenAI](https://ai-sdk.dev/providers/ai-sdk-providers/openai#web-search-tool).

```ts
import { streamText, convertToModelMessages } from 'ai'
import { anthropic } from '@ai-sdk/anthropic'
import { gateway } from '@ai-sdk/gateway'

export default defineEventHandler(async (event) => {
  const { messages } = await readBody(event)

  return streamText({
    model: gateway('anthropic/claude-sonnet-4.6'),
    system: 'You are a helpful assistant.',
    messages: await convertToModelMessages(messages),
    tools: {
      web_search: anthropic.tools.webSearch_20250305({})
    }
  }).toUIMessageStreamResponse()
})
```

```ts
import { streamText, convertToModelMessages } from 'ai'
import { google } from '@ai-sdk/google'
import { gateway } from '@ai-sdk/gateway'

export default defineEventHandler(async (event) => {
  const { messages } = await readBody(event)

  return streamText({
    model: gateway('google/gemini-3-flash'),
    system: 'You are a helpful assistant.',
    messages: await convertToModelMessages(messages),
    tools: {
      google_search: google.tools.googleSearch({})
    }
  }).toUIMessageStreamResponse()
})
```

```ts
import { streamText, convertToModelMessages } from 'ai'
import { openai } from '@ai-sdk/openai'
import { gateway } from '@ai-sdk/gateway'

export default defineEventHandler(async (event) => {
  const { messages } = await readBody(event)

  return streamText({
    model: gateway('openai/gpt-5-nano'),
    system: 'You are a helpful assistant.',
    messages: await convertToModelMessages(messages),
    tools: {
      web_search: openai.tools.webSearch({})
    }
  }).toUIMessageStreamResponse()
})
```

### Tool Calling with MCP

Empower your chatbot with advanced tool-calling features using the [Model Context Protocol (MCP)](https://ai-sdk.dev/docs/ai-sdk-core/mcp-tools) from `@ai-sdk/mcp`. MCP enables your AI to perform dynamic actions, such as searching your documentation or executing custom tasks, to provide more relevant and accurate responses.

To get started, install the MCP package:

```bash
npm install @ai-sdk/mcp
```

```bash
pnpm add @ai-sdk/mcp
```

```bash
yarn add @ai-sdk/mcp
```

Then, configure your server endpoint to use MCP tools:

```ts [server/api/chat.post.ts]
import { streamText, convertToModelMessages, stepCountIs } from 'ai'
import { createMCPClient } from '@ai-sdk/mcp'
import { gateway } from '@ai-sdk/gateway'

export default defineEventHandler(async (event) => {
  const { messages } = await readBody(event)

  const httpClient = await createMCPClient({
    transport: { type: 'http', url: 'https://your-app.com/mcp' }
  })
  const tools = await httpClient.tools()

  return streamText({
    model: gateway('anthropic/claude-sonnet-4.6'),
    maxOutputTokens: 10000,
    system: 'You are a helpful assistant. Use your tools to search for relevant information before answering questions.',
    messages: await convertToModelMessages(messages),
    stopWhen: stepCountIs(6),
    tools,
    onFinish: async () => {
      await httpClient.close()
    },
    onError: async ({ error }) => {
      console.error('streamText error:', error)
      await httpClient.close()
    }
  }).toUIMessageStreamResponse()
})
```

> [!TIP]
> As an alternative to the gateway, you can use the [DeepSeek Provider](https://ai-sdk.dev/providers/ai-sdk-providers/deepseek) to access DeepSeek models directly. Install the required dependencies:
> ```bash
> pnpm add ai @ai-sdk/deepseek @ai-sdk/vue
> ```
> ```bash
> yarn add ai @ai-sdk/deepseek @ai-sdk/vue
> ```
> ```bash
> npm install ai @ai-sdk/deepseek @ai-sdk/vue
> ```
> ```bash
> bun add ai @ai-sdk/deepseek @ai-sdk/vue
> ```
> Create a server API endpoint:
> ```ts
> import { streamText, convertToModelMessages } from 'ai'
> import { createDeepSeek } from '@ai-sdk/deepseek'
> export default defineEventHandler(async (event) => {
>   const { messages } = await readBody(event)
>   const deepseek = createDeepSeek({
>     apiKey: process.env.DEEPSEEK_API_KEY ?? ''
>   })
>   return streamText({
>     model: deepseek('deepseek-reasoner'), // or 'deepseek-chat'
>     maxOutputTokens: 10000,
>     system: 'You are a helpful assistant.',
>     messages: await convertToModelMessages(messages)
>   }).toUIMessageStreamResponse()
> })
> ```
> For reasoning, use the `deepseek-reasoner` model:
> ```ts
> import { streamText, convertToModelMessages } from 'ai'
> import { createDeepSeek } from '@ai-sdk/deepseek'
> export default defineEventHandler(async (event) => {
>   const { messages } = await readBody(event)
>   const deepseek = createDeepSeek({
>     apiKey: process.env.DEEPSEEK_API_KEY ?? ''
>   })
>   return streamText({
>     model: deepseek('deepseek-reasoner'), // or 'deepseek-chat'
>     maxOutputTokens: 10000,
>     system: 'You are a helpful assistant.',
>     messages: await convertToModelMessages(messages)
>   }).toUIMessageStreamResponse()
> })
> ```
> For tool calling with MCP:
> ```ts
> import { streamText, convertToModelMessages, stepCountIs, smoothStream } from 'ai'
> import { createMCPClient } from '@ai-sdk/mcp'
> import { createDeepSeek } from '@ai-sdk/deepseek'
> export default defineEventHandler(async (event) => {
>   const { messages } = await readBody(event)
>   if (!messages || !Array.isArray(messages)) {
>     throw createError({ status: 400, message: 'Invalid or missing messages array.' })
>   }
>   const deepseek = createDeepSeek({
>     apiKey: process.env.DEEPSEEK_API_KEY ?? ''
>   })
>   const httpClient = await createMCPClient({
>     transport: { type: 'http', url: 'https://your-app.com/mcp' }
>   })
>   const tools = await httpClient.tools()
>   return streamText({
>     model: deepseek('deepseek-reasoner'), // or 'deepseek-chat'
>     maxOutputTokens: 10000,
>     system: 'You are a helpful assistant. Use your tools to search for relevant information before answering questions.',
>     messages: await convertToModelMessages(messages),
>     experimental_transform: smoothStream(),
>     stopWhen: stepCountIs(6),
>     tools,
>     onFinish: async () => {
>       await httpClient.close()
>     },
>     onError: async ({ error }) => {
>       console.error('streamText error:', error)
>       await httpClient.close()
>     }
>   }).toUIMessageStreamResponse()
> })
> ```

## Client Setup

Use the `Chat` class from `@ai-sdk/vue` to manage chat state and connect to your server endpoint:

```vue
<script setup lang="ts">
import type { UIMessage } from 'ai'
import { isReasoningUIPart, isTextUIPart, isToolUIPart, getToolName } from 'ai'
import { Chat } from '@ai-sdk/vue'
import { isPartStreaming, isToolStreaming } from '@bitrix24/b24ui-nuxt'

const input = ref('')

const chat = new Chat({
  onError(error) {
    console.error(error)
  }
})

function onSubmit() {
  chat.sendMessage({ text: input.value })

  input.value = ''
}
</script>

<template>
  <B24ChatMessages
    :messages="chat.messages"
    :status="chat.status"
  >
    <template #content="{ message }">
      <template
        v-for="(part, index) in message.parts"
        :key="`${message.id}-${part.type}-${index}`"
      >
        <B24ChatReasoning
          v-if="isReasoningUIPart(part)"
          :text="part.text"
          :streaming="isPartStreaming(part)"
        >
          <MDC
            :value="part.text"
            :cache-key="`reasoning-${message.id}-${index}`"
            class="*:first:mt-0 *:last:mb-0"
          />
        </B24ChatReasoning>

        <B24ChatTool
          v-else-if="isToolUIPart(part)"
          :text="getToolName(part)"
          :streaming="isToolStreaming(part)"
        />

        <template v-else-if="isTextUIPart(part)">
          <MDC
            v-if="message.role === 'assistant'"
            :value="part.text"
            :cache-key="`${message.id}-${index}`"
            class="*:first:mt-0 *:last:mb-0"
          />
          <p v-else-if="message.role === 'user'" class="whitespace-pre-wrap">
            {{ part.text }}
          </p>
        </template>
      </template>
    </template>
  </B24ChatMessages>

  <B24ChatPrompt
    v-model="input"
    :error="chat.error"
    @submit="onSubmit"
  >
    <B24ChatPromptSubmit
      :status="chat.status"
      @stop="chat.stop()"
      @reload="chat.regenerate()"
    />
  </B24ChatPrompt>
</template>
```

> [!NOTE]
> In this example, we use the `MDC` component from [`@nuxtjs/mdc`](https://github.com/nuxt-modules/mdc) to render messages as Markdown. Since Bitrix24 UI provides pre-styled prose components, the rendered content is styled automatically.