Set Up Chat Sessions
Learn how to maintain conversation context across multiple requests, enabling natural back-and-forth conversations with AI models through Fusion AI.
How Conversation Context Works
1. Send History: Include previous messages in your request to maintain context.
2. NeuroSwitch Routing: Consistent routing ensures smooth conversation flow.
3. Context Preserved: The AI remembers the conversation and responds appropriately.
Basic Chat Session
Step 1: First Message
First Request
{ "prompt": "Hi! I'm planning a vacation to Japan. Can you help me?", "provider": "neuroswitch", "max_tokens": 300 }
Step 2: Follow-up with History
Second Request
{ "messages": [ { "role": "user", "content": "Hi! I'm planning a vacation to Japan. Can you help me?" }, { "role": "assistant", "content": "I'd be happy to help you plan your Japan vacation! Japan offers incredible experiences..." }, { "role": "user", "content": "What's the best time of year to visit for cherry blossoms?" } ], "provider": "neuroswitch", "max_tokens": 300 }
Message Format
Single Prompt (Simple)
{ "prompt": "Your message here", "provider": "neuroswitch" }
Best for single questions without context
Messages Array (Chat)
{ "messages": [ {"role": "user", "content": "..."}, {"role": "assistant", "content": "..."}, {"role": "user", "content": "..."} ], "provider": "neuroswitch" }
Required for maintaining conversation context
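If your code handles both cases, one approach is a small helper that emits exactly one of the two shapes, which also helps avoid mixing "prompt" and "messages" in a single request. The build_payload function below is a hypothetical helper, not part of the Fusion AI API.

```python
def build_payload(user_message, history=None, provider="neuroswitch"):
    """Return a request body using exactly one of "prompt" or "messages"."""
    if not history:
        # No prior turns: use the simple single-prompt form
        return {"prompt": user_message, "provider": provider}
    # Prior turns exist: append the new user turn to the history
    return {
        "messages": history + [{"role": "user", "content": user_message}],
        "provider": provider,
    }

# Single question -> {"prompt": ...}
print(build_payload("Your message here"))

# Follow-up with context -> {"messages": [...]}
print(build_payload("What's the population there?", history=[
    {"role": "user", "content": "What's the capital of France?"},
    {"role": "assistant", "content": "The capital of France is Paris..."},
]))
```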
JavaScript Chat Implementation
```javascript
class FusionChat {
  constructor(apiKey) {
    this.apiKey = apiKey;
    this.messages = [];
    this.apiUrl = 'https://api.mcp4.ai/chat';
  }

  async sendMessage(userMessage) {
    // Add the user message to the conversation history
    this.messages.push({ role: 'user', content: userMessage });

    try {
      const response = await fetch(this.apiUrl, {
        method: 'POST',
        headers: {
          'Authorization': `Bearer ${this.apiKey}`,
          'Content-Type': 'application/json'
        },
        body: JSON.stringify({
          messages: this.messages,
          provider: 'neuroswitch',
          max_tokens: 500
        })
      });

      const data = await response.json();

      // Add the AI response to the conversation history
      this.messages.push({ role: 'assistant', content: data.response });

      return {
        response: data.response,
        provider: data.provider_used,
        cost: data.cost
      };
    } catch (error) {
      console.error('Chat error:', error);
      throw error;
    }
  }

  clearHistory() {
    this.messages = [];
  }
}

// Usage example
const chat = new FusionChat('sk-fusion-your-api-key');

const response1 = await chat.sendMessage("What's the capital of France?");
console.log(response1.response);
// "The capital of France is Paris..."

const response2 = await chat.sendMessage("What's the population there?");
console.log(response2.response);
// "Paris has a population of approximately 2.1 million..."
```
Python Chat Implementation
```python
import requests


class FusionChat:
    def __init__(self, api_key):
        self.api_key = api_key
        self.messages = []
        self.api_url = 'https://api.mcp4.ai/chat'

    def send_message(self, user_message):
        # Add the user message to the conversation history
        self.messages.append({
            'role': 'user',
            'content': user_message
        })

        try:
            response = requests.post(
                self.api_url,
                headers={
                    'Authorization': f'Bearer {self.api_key}',
                    'Content-Type': 'application/json'
                },
                json={
                    'messages': self.messages,
                    'provider': 'neuroswitch',
                    'max_tokens': 500
                }
            )
            data = response.json()

            # Add the AI response to the conversation history
            self.messages.append({
                'role': 'assistant',
                'content': data['response']
            })

            return {
                'response': data['response'],
                'provider': data['provider_used'],
                'cost': data['cost']
            }
        except Exception as error:
            print(f'Chat error: {error}')
            raise

    def clear_history(self):
        self.messages = []


# Usage example
chat = FusionChat('sk-fusion-your-api-key')

response1 = chat.send_message("Explain machine learning in simple terms")
print(response1['response'])

response2 = chat.send_message("Can you give me a practical example?")
print(response2['response'])  # Will reference the previous explanation
```
Advanced Chat Features
System Messages
Set the AI's behavior and personality at the start of conversations.
{ "messages": [ { "role": "system", "content": "You are a helpful travel assistant specializing in Japan." }, { "role": "user", "content": "Plan my Tokyo itinerary" } ] }
Context Management
Manage conversation length to stay within token limits (see the trimming sketch after this list).

- Keep the last 10-20 messages
- Summarize older context
- Monitor token usage
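Below is a minimal sketch of the trimming approach, assuming you want to keep any system message plus the most recent turns. trim_history is a hypothetical helper; summarization would instead replace the dropped turns with a short summary message.

```python
def trim_history(messages, max_messages=20):
    """Keep the system message (if any) plus the most recent turns."""
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    return system + rest[-max_messages:]

# Example: apply before each request so the payload stays within token limits.
chat.messages = trim_history(chat.messages, max_messages=20)
```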
Best Practices
✅ Do This
- Always include conversation history for multi-turn chats
- Use system messages to set AI behavior
- Monitor token usage to manage costs (see the cost-tracking sketch after these lists)
- Use "neuroswitch" for consistent routing
❌ Avoid This
- Sending follow-up questions without context
- Mixing "prompt" and "messages" parameters
- Letting conversations grow too long
- Switching providers mid-conversation
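To follow the token and cost monitoring advice, one option is to accumulate the cost value returned alongside each reply. This is a sketch built on the FusionChat class above and assumes every response includes the cost and provider_used fields shown in the earlier examples.

```python
# Sketch: accumulate per-request cost across a session.
chat = FusionChat('sk-fusion-your-api-key')
total_cost = 0.0

for question in [
    "Explain machine learning in simple terms",
    "Can you give me a practical example?",
]:
    reply = chat.send_message(question)
    total_cost += reply['cost']
    print(f"{reply['provider']} answered; session cost so far: {total_cost}")
```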
Performance Tips
- First response: 1-3 seconds
- Follow-ups: 0.5-1 second
- NeuroSwitch maintains session affinity
- Context preserved across providers
Master Chat Sessions!
You now know how to build conversational AI applications. Explore advanced features to enhance your chat experience.