- Added "audioCapture" permission to manifest for microphone access. - Introduced DeepSeek as a new AI provider option in the side panel. - Implemented a capture mode selection (tab-only, mic-only, mixed) in the side panel. - Added options to enable/disable the extension and auto-open the assistant window. - Integrated a mic monitor feature with live input level visualization. - Included buttons for requesting microphone permission and granting tab access. - Updated styles for new sections and mic level visualization. - Enhanced model fetching logic to support DeepSeek and improved error handling.
85 lines
3.2 KiB
Markdown
85 lines
3.2 KiB
Markdown
# AI Providers Guide

## Supported AI Providers

Your AI Interview Assistant now supports multiple AI providers. Here's how to set up and use each one:
## 🤖 **OpenAI (GPT)**

- **Models Available**: GPT-4o, GPT-4o-mini, GPT-4-turbo, GPT-3.5-turbo
- **API Key**: Get from [OpenAI Platform](https://platform.openai.com/account/api-keys)
- **Recommended Model**: GPT-4o-mini (good balance of speed and quality)
- **Cost**: Pay per token usage
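As a rough sketch of what a call to this provider looks like, the snippet below builds (but does not send) a request in the shape of OpenAI's Chat Completions API. The helper name is illustrative, not part of the extension's code:

```javascript
// Build (not send) an OpenAI Chat Completions request.
// buildOpenAIRequest is a hypothetical helper, not the extension's actual code.
function buildOpenAIRequest(apiKey, userText) {
  return {
    url: "https://api.openai.com/v1/chat/completions",
    options: {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${apiKey}`, // key from the OpenAI Platform
      },
      body: JSON.stringify({
        model: "gpt-4o-mini", // the guide's recommended default
        messages: [{ role: "user", content: userText }],
      }),
    },
  };
}
```

The request would then be sent with `fetch(req.url, req.options)` and the reply read from `choices[0].message.content`.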
## 🧠 **Anthropic (Claude)**

- **Models Available**: Claude-3.5-Sonnet, Claude-3.5-Haiku, Claude-3-Opus
- **API Key**: Get from [Anthropic Console](https://console.anthropic.com/)
- **Recommended Model**: Claude-3.5-Sonnet (excellent reasoning)
- **Cost**: Pay per token usage
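Anthropic's Messages API differs from OpenAI's in a few ways worth knowing: the key goes in an `x-api-key` header, an `anthropic-version` header is required, and `max_tokens` is mandatory. A sketch under those assumptions (the helper name is illustrative):

```javascript
// Build (not send) an Anthropic Messages API request.
// buildClaudeRequest is a hypothetical helper, not the extension's actual code.
function buildClaudeRequest(apiKey, userText) {
  return {
    url: "https://api.anthropic.com/v1/messages",
    options: {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        "x-api-key": apiKey, // note: not a Bearer token
        "anthropic-version": "2023-06-01",
      },
      body: JSON.stringify({
        model: "claude-3-5-sonnet-latest",
        max_tokens: 1024, // required by the Messages API
        messages: [{ role: "user", content: userText }],
      }),
    },
  };
}
```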
## 🔍 **Google (Gemini)**

- **Models Available**: Gemini-1.5-Pro, Gemini-1.5-Flash, Gemini-Pro
- **API Key**: Get from [Google AI Studio](https://aistudio.google.com/app/apikey)
- **Recommended Model**: Gemini-1.5-Flash (fast and efficient)
- **Cost**: Free tier available, then pay per token
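Gemini's `generateContent` endpoint is shaped differently again: the model name is part of the URL, the API key is passed as a query parameter, and the payload uses `contents`/`parts` rather than chat `messages`. A sketch (helper name illustrative):

```javascript
// Build (not send) a Gemini generateContent request.
// buildGeminiRequest is a hypothetical helper, not the extension's actual code.
function buildGeminiRequest(apiKey, userText) {
  const model = "gemini-1.5-flash"; // the guide's recommended default
  return {
    url: `https://generativelanguage.googleapis.com/v1beta/models/${model}:generateContent?key=${apiKey}`,
    options: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        contents: [{ role: "user", parts: [{ text: userText }] }],
      }),
    },
  };
}
```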
## 🌊 **DeepSeek**

- **Models Available**: DeepSeek-Chat, DeepSeek-Reasoner
- **API Key**: Get from [DeepSeek Platform](https://platform.deepseek.com/)
- **Recommended Model**: DeepSeek-Chat (general use)
- **Cost**: Pay per token usage
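DeepSeek exposes an OpenAI-compatible Chat Completions endpoint, so relative to the OpenAI sketch above only the base URL and model name change (helper name illustrative):

```javascript
// Build (not send) a DeepSeek chat request; the wire format mirrors OpenAI's.
// buildDeepSeekRequest is a hypothetical helper, not the extension's actual code.
function buildDeepSeekRequest(apiKey, userText) {
  return {
    url: "https://api.deepseek.com/chat/completions",
    options: {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${apiKey}`,
      },
      body: JSON.stringify({
        model: "deepseek-chat", // the guide's recommended default
        messages: [{ role: "user", content: userText }],
      }),
    },
  };
}
```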
## 🏠 **Ollama (Local)**

- **Models Available**: Llama3.2, Llama3.1, Mistral, CodeLlama, Phi3
- **Setup**: Install [Ollama](https://ollama.ai/) locally
- **No API Key Required**: Runs completely on your machine
- **Cost**: Free (uses your computer's resources)
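Since Ollama serves a local HTTP API on port 11434 by default, no key is needed; a request sketch might look like this (helper name illustrative):

```javascript
// Build (not send) a request to a local Ollama server's chat endpoint.
// buildOllamaRequest is a hypothetical helper, not the extension's actual code.
function buildOllamaRequest(userText) {
  return {
    url: "http://localhost:11434/api/chat", // Ollama's default local address
    options: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        model: "llama3.2",
        messages: [{ role: "user", content: userText }],
        stream: false, // one JSON reply instead of streamed chunks
      }),
    },
  };
}
```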
## 🚀 **How to Set Up**

### 1. **Choose Your Provider**

- Open the extension side panel
- Select your preferred AI provider from the dropdown

### 2. **Select a Model**

- Choose the specific model you want to use
- Different models trade off capability against speed

### 3. **Add an API Key** (if required)

- Enter your API key for the selected provider
- Ollama doesn't require an API key
- Keys are stored locally in Chrome's extension storage

### 4. **Start Using**

- Click "Start Listening" to begin audio capture
- The extension will use your selected AI provider for responses
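A minimal sketch of how a key could be saved and read back, assuming `chrome.storage.local` as the backing store. The function names and key naming scheme are illustrative, and the snippet falls back to an in-memory stand-in so it also runs outside the extension:

```javascript
// In-memory stand-in for chrome.storage.local, used when this sketch runs
// outside the extension (e.g. in plain Node).
const mem = {};
const store =
  typeof chrome !== "undefined" && chrome.storage
    ? chrome.storage.local
    : {
        set: (items, cb) => { Object.assign(mem, items); if (cb) cb(); },
        get: (keys, cb) => cb(Object.fromEntries(keys.map((k) => [k, mem[k]]))),
      };

// Save one provider's API key under a per-provider storage key.
function saveApiKey(provider, key) {
  return new Promise((resolve) => store.set({ [`apiKey_${provider}`]: key }, resolve));
}

// Read a provider's API key back (undefined if none was saved).
function loadApiKey(provider) {
  return new Promise((resolve) =>
    store.get([`apiKey_${provider}`], (items) => resolve(items[`apiKey_${provider}`]))
  );
}
```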
## 💡 **Tips**

- **For Speed**: Use GPT-4o-mini, Gemini-1.5-Flash, or Claude-3.5-Haiku
- **For Quality**: Use GPT-4o, Claude-3.5-Sonnet, or Gemini-1.5-Pro
- **For Privacy**: Use Ollama (runs locally, no data sent to servers)
- **For Free Usage**: Try Google Gemini's free tier or set up Ollama
## 🔧 **Ollama Setup**

If you want to use Ollama (local AI):

1. Install Ollama from [ollama.ai](https://ollama.ai/)
2. Run: `ollama pull llama3.2` (or your preferred model)
3. Make sure Ollama is running: `ollama serve`
4. Select "Ollama (Local)" in the extension
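To confirm step 3 from code rather than the terminal, one option is to probe Ollama's base URL (the server answers a plain `GET /` when it is up); this sketch just reports reachability:

```javascript
// Returns true if a local Ollama server answers on the default port.
// Requires a fetch-capable runtime (browsers, Node 18+).
async function ollamaIsRunning() {
  try {
    const res = await fetch("http://localhost:11434/");
    return res.ok;
  } catch {
    return false; // connection refused: Ollama isn't running
  }
}
```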
## 🆘 **Troubleshooting**

- **"API key not set"**: Make sure you've entered a valid API key
- **"Failed to connect"**: Check your internet connection (or the Ollama service for local models)
- **"Invalid API key"**: Verify your API key is correct and has sufficient credits
- **Slow responses**: Try switching to a faster model like GPT-4o-mini or Gemini-1.5-Flash
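Messages like these are typically derived from the HTTP status the provider returns. A hypothetical mapping (the exact strings and codes are illustrative; real providers vary) might look like:

```javascript
// Map an HTTP status code from a provider API to a user-facing hint.
// Illustrative only; real providers differ in which codes they return.
function describeApiError(status) {
  if (status === 401 || status === 403) return "Invalid API key";
  if (status === 429) return "Rate limited - wait and retry";
  if (status >= 500) return "Provider outage - try again later";
  return `Unexpected error (HTTP ${status})`;
}
```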
## 🔒 **Privacy & Security**

- API keys are stored locally in Chrome's extension storage
- Only the selected provider receives your audio transcriptions
- The Ollama option keeps everything completely local
- No audio data is stored permanently