Meyka AI Chatbot API
Multi-model AI chatbot API supporting OpenAI GPT, Claude, and DeepSeek models. Features streaming responses, custom system prompts, billing tracking, and intelligent stock market analysis with real-time data.
- 🤖 Multiple AI Models: access GPT, Claude, and DeepSeek models
- ⚡ Streaming Support: real-time responses with SSE
- 🛠️ Tool Integration: web search and stock data
- 💰 Usage Billing: pay per token with usage tracking
Authentication
All API requests require authentication using a Bearer token in the Authorization header.
Getting Your API Key
1. Sign up or log in at api.meyka.com
2. Navigate to API Keys
3. Click "Create Key" and name it
4. Copy and secure your API key
Using Your API Key
```
Authorization: Bearer YOUR_API_KEY_HERE
```
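For example, a minimal authenticated request in Python with the requests library might look like the sketch below. The key is a placeholder, and it assumes the chat-list endpoint documented later in this page accepts GET.

```python
import requests

API_KEY = "your_api_key_here"          # placeholder: substitute your real key
BASE_URL = "https://api.meyka.com"

headers = {
    "Authorization": f"Bearer {API_KEY}",
    "Content-Type": "application/json",
}

# Assumption: the list-chats endpoint documented below accepts GET
resp = requests.get(f"{BASE_URL}/api/v1/chats/", headers=headers)
resp.raise_for_status()                # a 401 here usually means a bad or missing key
print(resp.json())
```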
Available Models
Choose from multiple AI providers and models based on your needs.
- OpenAI
- Anthropic
- DeepSeek
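The model is chosen per chat via the model_used field when creating a chat (see the Chats endpoints below). A sketch of creating one chat per provider; gpt-4o-mini comes from the Quick Start, while claude-sonnet-4-5 and deepseek-reasoner are taken from the enable_thinking description further down and are assumed to be valid model_used values:

```python
import requests

headers = {"Authorization": "Bearer your_api_key_here",
           "Content-Type": "application/json"}

# Model identifiers below are taken from elsewhere in this document;
# consult your account's model list for the authoritative set.
for model in ["gpt-4o-mini", "claude-sonnet-4-5", "deepseek-reasoner"]:
    chat = requests.post(
        "https://api.meyka.com/api/v1/chats/",
        headers=headers,
        json={"model_used": model, "title": f"Test chat ({model})"},
    ).json()
    print(model, "->", chat["id"])
```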
Quick Start
Get started with the API in minutes.
Python

```python
import requests

API_KEY = "your_api_key_here"
BASE_URL = "https://api.meyka.com"

headers = {
    "Authorization": f"Bearer {API_KEY}",
    "Content-Type": "application/json"
}

# Create chat
chat = requests.post(
    f"{BASE_URL}/api/v1/chats/",
    headers=headers,
    json={"model_used": "gpt-4o-mini"}
).json()

# Send message
response = requests.post(
    f"{BASE_URL}/api/v1/messages/{chat['id']}/send/",
    headers=headers,
    json={
        "content": "Analyze Tesla stock",
        "stream": False
    }
).json()

print(response["ai_message"])
```
JavaScript

```javascript
const API_KEY = 'your_api_key_here';
const BASE_URL = 'https://api.meyka.com';

const headers = {
  'Authorization': `Bearer ${API_KEY}`,
  'Content-Type': 'application/json'
};

// Create chat
const chat = await fetch(`${BASE_URL}/api/v1/chats/`, {
  method: 'POST',
  headers,
  body: JSON.stringify({ model_used: 'gpt-4o-mini' })
}).then(r => r.json());

// Send message
const response = await fetch(`${BASE_URL}/api/v1/messages/${chat.id}/send/`, {
  method: 'POST',
  headers,
  body: JSON.stringify({
    content: 'Analyze Tesla stock',
    stream: false
  })
}).then(r => r.json());

console.log(response.ai_message);
```
cURL

```bash
# Create chat
curl -X POST https://api.meyka.com/api/v1/chats/ \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model_used": "gpt-4o-mini"}'

# Send message
curl -X POST https://api.meyka.com/api/v1/messages/CHAT_ID/send/ \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "content": "Analyze Tesla stock",
    "stream": false
  }'
```
Chats
Manage chat sessions with AI models
/api/v1/chats/
List all chat sessions
Retrieve all chat sessions for the authenticated user
Responses
Non-Streaming Response
```json
[
  {
    "id": "123e4567-e89b-12d3-a456-426614174000",
    "user_email": "user@example.com",
    "title": "Apple Stock Analysis",
    "model_used": "gpt-4o-mini",
    "created_at": "2025-10-15T10:30:00Z",
    "updated_at": "2025-10-15T10:45:00Z"
  }
]
```
/api/v1/chats/
Create a new chat session
Create a new chat session with model selection
Request Body
| Name | Type | Required | Description |
|---|---|---|---|
| title | string | No | Optional chat title (auto-generated from first message if not provided) |
| model_used | string (default: gpt-4o-mini) | No | AI model to use for this chat |
Responses
/api/v1/chats/{chat_id}/
Get chat details
Retrieve details of a specific chat session
Parameters
| Name | Type | Required | Description |
|---|---|---|---|
| chat_id | string | Yes | Unique chat session ID (UUID) |
Responses
/api/v1/chats/{chat_id}/
Update chat
Update chat title or model
Parameters
| Name | Type | Required | Description |
|---|---|---|---|
| chat_id | string | Yes | Unique chat session ID (UUID) |
Request Body
| Name | Type | Required | Description |
|---|---|---|---|
| title | string | No | New chat title |
| model_used | string | No | AI model to use for this chat |
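A hedged Python sketch of an update request. The HTTP method is not stated on this page, so PATCH is an assumption (typical for partial updates); adjust if your deployment expects PUT.

```python
import requests

headers = {"Authorization": "Bearer your_api_key_here",
           "Content-Type": "application/json"}
chat_id = "123e4567-e89b-12d3-a456-426614174000"  # example UUID from this page

# Assumption: partial updates use PATCH on the chat detail URL.
resp = requests.patch(
    f"https://api.meyka.com/api/v1/chats/{chat_id}/",
    headers=headers,
    json={"title": "Renamed chat"},
)
resp.raise_for_status()
print(resp.json())
```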
Responses
/api/v1/chats/{chat_id}/
Delete chat
Delete a chat session and all its messages
Parameters
| Name | Type | Required | Description |
|---|---|---|---|
| chat_id | string | Yes | Unique chat session ID (UUID) |
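A minimal Python sketch of deleting a chat, assuming the endpoint accepts the HTTP DELETE method:

```python
import requests

headers = {"Authorization": "Bearer your_api_key_here"}
chat_id = "123e4567-e89b-12d3-a456-426614174000"  # example UUID from this page

# Assumption: deletion uses HTTP DELETE on the chat detail URL.
resp = requests.delete(
    f"https://api.meyka.com/api/v1/chats/{chat_id}/",
    headers=headers,
)
resp.raise_for_status()
print("Chat and its messages deleted, status:", resp.status_code)
```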
Responses
Messages
Send messages and receive AI responses with streaming support
/api/v1/messages/{chat_id}/messages/
Get all messages in a chat
Retrieve all messages from a specific chat session
Parameters
| Name | Type | Required | Description |
|---|---|---|---|
| chat_id | string | Yes | Unique chat session ID (UUID) |
Responses
Non-Streaming Response
```json
[
  {
    "id": 1,
    "role": "user",
    "content": "What's the current trend for AAPL?",
    "token_count": 12,
    "cumulative_token_count": 12,
    "created_at": "2025-10-15T10:30:00Z"
  },
  {
    "id": 2,
    "role": "assistant",
    "content": "Apple (AAPL) is currently showing a bullish trend...",
    "token_count": 150,
    "cumulative_token_count": 162,
    "created_at": "2025-10-15T10:30:05Z"
  }
]
```
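The history can be replayed client-side from this response. A small Python sketch, assuming the endpoint accepts GET:

```python
import requests

headers = {"Authorization": "Bearer your_api_key_here"}
chat_id = "123e4567-e89b-12d3-a456-426614174000"  # example UUID from this page

# Assumption: message history is retrieved with GET.
messages = requests.get(
    f"https://api.meyka.com/api/v1/messages/{chat_id}/messages/",
    headers=headers,
).json()

for msg in messages:
    print(f"[{msg['role']}] ({msg['token_count']} tokens) {msg['content']}")
```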
/api/v1/messages/{chat_id}/send/
Send a message
Send a message to the AI chatbot and receive a response. Supports multiple AI models (GPT, Claude, DeepSeek), streaming responses, custom system prompts, and automatic billing. The AI can use tools like web search and stock data retrieval.
Parameters
| Name | Type | Required | Description |
|---|---|---|---|
| chat_id | string | Yes | Unique chat session ID (UUID) |
Request Body
Message content and optional configuration
| Name | Type | Required | Description |
|---|---|---|---|
| content | string | Yes | Your message to the AI chatbot |
| model | string | No | Optional: Override the chat's default model (must match the chat's original model) |
| stream | boolean | No | Enable streaming response (SSE format). When true, the response is delivered as Server-Sent Events. When false (default), the response is a single JSON object. |
| system_prompt | string | No | Optional: Custom system prompt to override default behavior |
| company_name | string (default: Meyka AI) | No | Optional: Company name for branding in responses |
| enable_thinking | boolean | No | Optional: Enable extended thinking for models that support it (claude-sonnet-4-5, deepseek-reasoner, etc.) |
Example
```json
{
  "content": "Analyze Tesla stock performance and recent news",
  "model": "gpt-4o-mini",
  "stream": false,
  "system_prompt": "You are a professional financial advisor specializing in tech stocks.",
  "company_name": "Meyka AI",
  "enable_thinking": false
}
```
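In Python, the same request can be sent as in the sketch below, based on the Quick Start. The chat must already exist, and per the model field above, the model should match the chat's original model.

```python
import requests

headers = {"Authorization": "Bearer your_api_key_here",
           "Content-Type": "application/json"}
chat_id = "123e4567-e89b-12d3-a456-426614174000"  # example UUID from this page

response = requests.post(
    f"https://api.meyka.com/api/v1/messages/{chat_id}/send/",
    headers=headers,
    json={
        "content": "Analyze Tesla stock performance and recent news",
        "stream": False,
        "system_prompt": "You are a professional financial advisor specializing in tech stocks.",
        "company_name": "Meyka AI",
        "enable_thinking": False,
    },
).json()

print(response["ai_message"])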
Responses
Non-Streaming Response
Non-streaming response (stream: false): a single JSON object containing the complete AI message.
```json
{
  "ai_message": "Tesla (TSLA) has shown strong momentum recently. The stock is up 15% over the past month, driven by strong Q3 deliveries and positive market sentiment around EV adoption. Key metrics show revenue growth of 8% YoY with improved margins in the automotive segment.",
  "metadata": {
    "tokens": {
      "user_message": 25,
      "ai_message": 156,
      "conversation_total": 500,
      "context_limit": 128000,
      "remaining": 127500
    },
    "context": {
      "messages_in_context": 10,
      "messages_trimmed": false,
      "cutoff_point": 0
    },
    "model": "gpt-4o-mini",
    "chat_id": "123e4567-e89b-12d3-a456-426614174000",
    "billing": {
      "input_cost": 0.0015,
      "output_cost": 0.006,
      "total_cost": 0.0075,
      "remaining_balance": 24.9925,
      "currency": "USD"
    }
  }
}
```
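The metadata block can be used to track token usage and spend per request. A small sketch reading those fields from the non-streaming response:

```python
# `response` is the parsed JSON returned by the non-streaming /send/ call
# (see the Python example above).
tokens = response["metadata"]["tokens"]
billing = response["metadata"]["billing"]

print(f"Tokens: {tokens['user_message']} in / {tokens['ai_message']} out "
      f"({tokens['remaining']} of {tokens['context_limit']} context remaining)")
print(f"Cost: {billing['total_cost']:.4f} {billing['currency']} "
      f"(balance: {billing['remaining_balance']:.4f})")
```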
Streaming response (stream: true) - Server-Sent Events format
Response is delivered as multiple SSE chunks. Each line starting with 'data:' contains a JSON object. Special events like 'hint' and 'reasoning' indicate tool usage and thinking content.
data: {"content": "Tesla "}
data: {"content": "(TSLA) "}
data: {"content": "has shown "}
event: hint
data: {"content": "Searching the web..."}
data: {"content": "strong momentum "}
event: reasoning
data: {"content": "Analyzing the stock performance data and recent market trends..."}
data: {"content": "recently. The stock is up 15%..."}
data: {"done": true, "message_id": 123, "symbol": "TSLA", "title": "Tesla Stock Analysis", "metadata": {"tokens": {"user_message": 25, "ai_message": 156, "conversation_total": 500, "context_limit": 128000, "remaining": 127500}, "context": {"messages_in_context": 10, "messages_trimmed": false, "cutoff_point": 0}, "model": "gpt-4o-mini", "chat_id": "123e4567-e89b-12d3-a456-426614174000", "billing": {"input_cost": 0.0015, "output_cost": 0.006, "total_cost": 0.0075, "remaining_balance": 24.9925, "currency": "USD"}}}
Status Codes
| Code | Meaning |
|---|---|
| 200 | Success |
| 201 | Created |
| 400 | Bad Request |
| 401 | Unauthorized |
| 402 | Payment Required |
| 404 | Not Found |
| 500 | Server Error |
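In Python, these statuses can be handled explicitly, for instance to distinguish an exhausted balance (402) from an invalid key (401). A sketch:

```python
import requests

headers = {"Authorization": "Bearer your_api_key_here",
           "Content-Type": "application/json"}

resp = requests.post(
    "https://api.meyka.com/api/v1/chats/",
    headers=headers,
    json={"model_used": "gpt-4o-mini"},
)

if resp.status_code in (200, 201):
    print("Chat created:", resp.json()["id"])
elif resp.status_code == 401:
    print("Unauthorized: check your API key")
elif resp.status_code == 402:
    print("Payment required: top up your balance")
elif resp.status_code == 404:
    print("Not found: check the URL or chat ID")
else:
    print("Request failed:", resp.status_code, resp.text)
```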