Initial setup
78  AI_PROVIDERS_GUIDE.md  Normal file
@@ -0,0 +1,78 @@
# AI Providers Guide

## Supported AI Providers

Your AI Interview Assistant now supports multiple AI providers! Here's how to set up and use each one:

## 🤖 **OpenAI (GPT)**
- **Models Available**: GPT-4o, GPT-4o-mini, GPT-4-turbo, GPT-3.5-turbo
- **API Key**: Get from [OpenAI Platform](https://platform.openai.com/account/api-keys)
- **Recommended Model**: GPT-4o-mini (good balance of speed and quality)
- **Cost**: Pay per token usage

## 🧠 **Anthropic (Claude)**
- **Models Available**: Claude-3.5-Sonnet, Claude-3.5-Haiku, Claude-3-Opus
- **API Key**: Get from [Anthropic Console](https://console.anthropic.com/)
- **Recommended Model**: Claude-3.5-Sonnet (excellent reasoning)
- **Cost**: Pay per token usage

## 🔍 **Google (Gemini)**
- **Models Available**: Gemini-1.5-Pro, Gemini-1.5-Flash, Gemini-Pro
- **API Key**: Get from [Google AI Studio](https://aistudio.google.com/app/apikey)
- **Recommended Model**: Gemini-1.5-Flash (fast and efficient)
- **Cost**: Free tier available, then pay per token

## 🏠 **Ollama (Local)**
- **Models Available**: Llama3.2, Llama3.1, Mistral, CodeLlama, Phi3
- **Setup**: Install [Ollama](https://ollama.ai/) locally
- **No API Key Required**: Runs completely on your machine
- **Cost**: Free (uses your computer's resources)

## 🚀 **How to Set Up**

### 1. **Choose Your Provider**
- Open the extension side panel
- Select your preferred AI provider from the dropdown

### 2. **Select Model**
- Choose the specific model you want to use
- Different models have different capabilities and speeds

### 3. **Add API Key** (if required)
- Enter your API key for the selected provider
- Ollama doesn't require an API key
- Keys are stored securely in Chrome's storage (see the sketch below)
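
For reference, saved keys live under a per-provider `apiKeys` object in `chrome.storage.sync`; this is how the background script looks a key up before each request (taken from `getApiKey` in `background.js`):

```js
// Read the saved key for one provider from the per-provider map in
// chrome.storage.sync. Resolves to undefined if no key has been saved yet.
function getApiKey(provider) {
  return new Promise((resolve) => {
    chrome.storage.sync.get('apiKeys', (result) => {
      const apiKeys = result.apiKeys || {};
      resolve(apiKeys[provider]);
    });
  });
}
```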

### 4. **Start Using**
- Click "Start Listening" to begin audio capture
- The extension will use your selected AI provider for responses (the message it sends under the hood is sketched below)
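
Clicking "Start Listening" sends a `startListening` message that carries your provider and model choice; the background script stores this selection and uses it for every subsequent request. The field names below match the handler in `background.js` (the exact values are whatever you picked in the dropdowns):

```js
// Message the side panel sends to the background script when listening
// starts. aiProvider and model mirror the dropdown selections.
chrome.runtime.sendMessage({
  action: 'startListening',
  aiProvider: 'openai',
  model: 'gpt-4o-mini'
});
```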

## 💡 **Tips**

- **For Speed**: Use GPT-4o-mini, Gemini-1.5-Flash, or Claude-3.5-Haiku
- **For Quality**: Use GPT-4o, Claude-3.5-Sonnet, or Gemini-1.5-Pro
- **For Privacy**: Use Ollama (runs locally, no data sent to servers)
- **For Free Usage**: Try Google Gemini's free tier or set up Ollama

## 🔧 **Ollama Setup**

If you want to use Ollama (local AI):

1. Install Ollama from [ollama.ai](https://ollama.ai/)
2. Run: `ollama pull llama3.2` (or your preferred model)
3. Make sure Ollama is running: `ollama serve`
4. Select "Ollama (Local)" in the extension (a quick connectivity check is sketched after this list)
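
To verify that the extension can reach Ollama, you can hit the same endpoint it uses (`http://localhost:11434/api/generate`, see `background.js`) from your browser's DevTools console. This is a quick sketch; depending on your Ollama CORS settings (`OLLAMA_ORIGINS`) the request may be blocked from some pages:

```js
// Quick connectivity check against the endpoint the extension calls.
fetch('http://localhost:11434/api/generate', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ model: 'llama3.2', prompt: 'Say hello', stream: false })
})
  .then((res) => res.json())
  .then((data) => console.log('Ollama reply:', data.response))
  .catch((err) => console.error('Ollama is not reachable:', err));
```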

## 🆘 **Troubleshooting**

- **"API key not set"**: Make sure you've entered a valid API key
- **"Failed to connect"**: Check your internet connection (or the Ollama service for local models)
- **"Invalid API key"**: Verify your API key is correct and has sufficient credits
- **Slow responses**: Try switching to a faster model like GPT-4o-mini or Gemini-1.5-Flash

## 🔒 **Privacy & Security**

- API keys are stored locally in Chrome's secure storage
- Only the selected provider receives your audio transcriptions
- The Ollama option keeps everything completely local
- No audio data is stored permanently
180  NEW_FEATURES_GUIDE.md  Normal file
@@ -0,0 +1,180 @@
# 🚀 New Features Guide

## 📄 Context Management

### What is Context Management?
Context management allows you to provide additional information (such as your CV, the job description, or company information) to help the AI give more personalized and relevant responses during interviews.

### How to Use Context Management

#### 1. **Upload Files**
- Click the "Upload Files" tab in the Context Management section
- Click "📁 Upload CV/Job Description"
- Select your files (supports TXT, PDF, DOC, DOCX)
- Files will be automatically processed and saved (a sketch of how an upload becomes a saved context follows this list)
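
The upload handler itself isn't shown in this excerpt, but conceptually each file becomes a saved context entry: a `{title, content, type}` object stored under the `contexts` key in `chrome.storage.local`, which is what `getStoredContexts()` in `background.js` reads back. A rough sketch for a plain-text file (the helper name is hypothetical):

```js
// Hypothetical sketch: store an uploaded TXT file as a context entry under
// the 'contexts' key in chrome.storage.local (the key background.js reads).
function saveFileAsContext(file) {
  const reader = new FileReader();
  reader.onload = () => {
    chrome.storage.local.get('contexts', (result) => {
      const contexts = result.contexts || [];
      contexts.push({ title: file.name, content: reader.result, type: 'general' });
      chrome.storage.local.set({ contexts });
    });
  };
  reader.readAsText(file); // PDF/DOC files would need real text extraction first
}
```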

#### 2. **Add Text Directly**
- Click the "Add Text" tab
- Paste your CV, job description, or any relevant context
- Give it a descriptive title (e.g., "My CV", "Job Description")
- Click "💾 Save Context"

#### 3. **Manage Contexts**
- Click the "Manage" tab to see all saved contexts
- Edit existing contexts by clicking "✏️ Edit"
- Delete contexts with "🗑️ Delete"
- Clear all contexts with "🗑️ Clear All Context"

### Context Examples
- **CV/Resume**: Your work experience, skills, education
- **Job Description**: The role you're applying for
- **Company Info**: About the company, their values, recent news
- **Project Details**: Specific projects you want to discuss
- **Technical Skills**: Programming languages, tools, certifications

### How Context Improves Responses
Without context: *"Tell me about your experience with Python."*
AI Response: *"Python is a programming language..."*

With context (your CV uploaded): *"Tell me about your experience with Python."*
AI Response: *"Based on your background, you have 3 years of Python experience at TechCorp, where you built data analysis tools and worked with pandas and scikit-learn..."*
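
Mechanically, this works because the background script joins your saved contexts and appends them to the system prompt before every request. Simplified from `getAIResponse` in `background.js`:

```js
// Simplified from background.js: saved contexts are concatenated and
// appended to the system prompt that accompanies every question.
const contexts = await getStoredContexts(); // [{ title, content, type }, ...]
const contextString = contexts
  .map((ctx) => `${ctx.title}:\n${ctx.content}`)
  .join('\n\n---\n\n');

const systemPrompt =
  'You are a helpful assistant that answers questions briefly and concisely ' +
  'during interviews. Provide clear, professional responses.' +
  (contextString ? `\n\nContext Information:\n${contextString}` : '');
```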

---

## 📱 Multi-Device Listening

### What is Multi-Device Listening?
This feature allows you to use the AI Interview Assistant from other devices (phones, tablets, other computers) while keeping the main processing on your primary Chrome browser.

### How to Enable Multi-Device Access

#### 1. **Enable Remote Access**
- In the extension, scroll to "📱 Multi-Device Listening"
- Click "🌐 Enable Remote Access"
- Wait for the server to start

#### 2. **Get Access Link**
- Once enabled, you'll see an access URL
- Copy the link with "📋 Copy Link"
- A QR code will also be generated

#### 3. **Connect Other Devices**
- Open the access URL on any other device
- The remote device will show a connection interface
- Click "🔗 Connect" to establish the connection
- Click "🎤 Start Listening" to capture audio from that device

### Use Cases for Multi-Device Listening

#### **Phone as Microphone**
- Use your phone's better microphone
- Move around freely during video calls
- Keep your computer focused on the video call

#### **Tablet for Discreet Access**
- Keep the AI responses on a tablet beside you
- Less obvious than looking at your computer screen
- Touch-friendly interface for quick control

#### **Backup Device**
- Have a secondary device ready in case of technical issues
- Multiple people can access the same AI assistant
- Share responses with interview partners

### Technical Details
- **Connection**: Local network connection (devices must be on the same WiFi)
- **Security**: Session-based access with unique IDs
- **Audio Processing**: Audio is captured on the remote device and processed on the main device
- **Responses**: AI responses are sent to all connected devices (the broadcast step is sketched below)
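
Note that the shipped `background.js` only simulates this server ("demo mode"); as its own comments point out, a real deployment would need a companion process that owns an HTTP/WebSocket server. Purely as an illustration of the broadcast step, assuming such a server exists and `activeConnections` holds open sockets (both are assumptions, not the current implementation):

```js
// Hypothetical sketch, not the shipped code: push an AI response to every
// connected remote device. Assumes activeConnections is a Set of open
// WebSocket instances managed by a companion server process.
function broadcastToRemoteDevices(type, data) {
  const message = JSON.stringify({ type, data });
  for (const socket of activeConnections) {
    if (socket.readyState === WebSocket.OPEN) {
      socket.send(message); // e.g. { type: 'aiResponse', data: { response, question } }
    }
  }
}
```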

---

## 🔧 Setup Instructions

### 1. **Reload the Extension**
After the updates, reload the extension in Chrome:
- Go to `chrome://extensions/`
- Find "AI Interview Assistant"
- Click the reload button 🔄

### 2. **Configure Context**
- Add your CV, job description, and relevant information
- Test with a sample question to see improved responses

### 3. **Test Multi-Device (Optional)**
- Enable remote access
- Try connecting from another device on the same network
- Test audio capture and response delivery

---

## 💡 Pro Tips

### Context Management Tips
- **Be Specific**: Include specific technologies, company names, and project details
- **Keep Updated**: Update your context for different interviews and companies
- **Organize**: Use clear titles like "CV - Software Engineer", "Job Desc - Google"
- **Length**: Include full details - the AI will extract the relevant parts

### Multi-Device Tips
- **Network**: Ensure all devices are on the same WiFi network
- **Audio Quality**: Use devices with good microphones for better transcription
- **Positioning**: Place the remote device where it can capture audio clearly
- **Backup**: Always have the main device ready as a backup

### Interview Tips with New Features
- **Preparation**: Upload the job description and your CV before the interview
- **Discreet Usage**: Use a tablet or phone for less obvious AI assistance
- **Practice**: Test the setup before important interviews
- **Fallback**: Have manual notes ready in case of technical issues

---

## 🔒 Privacy & Security

### Context Data
- All context data is stored locally in Chrome's secure storage
- No context data is sent to external servers except during AI processing
- You can delete all context data at any time

### Multi-Device Connection
- Connections are local network only (not internet-accessible)
- Session-based security with unique IDs
- No permanent connections or data storage on remote devices
- All audio processing happens on your main device

### AI Provider Integration
- Context is included in prompts sent to your selected AI provider
- The same privacy policies apply as for your chosen AI service
- Context helps personalize responses but is subject to the AI provider's data handling

---

## 🆘 Troubleshooting

### Context Issues
- **File not uploading**: Try converting to TXT format first
- **Context not applied**: Check that contexts are saved in the "Manage" tab
- **Responses still generic**: Add more specific details to your context

### Multi-Device Issues
- **Can't connect**: Ensure devices are on the same WiFi network
- **No audio capture**: Check microphone permissions on the remote device
- **Connection lost**: Try refreshing the remote access page

### General Issues
- **Extension not working**: Reload the extension in Chrome
- **UI looks broken**: Clear the browser cache and reload
- **Features missing**: Ensure you're using the latest version

---

## 🎯 Next Steps

1. **Add your context** - Start with your CV and current job description
2. **Test responses** - Ask a few sample interview questions to see the improvement
3. **Try multi-device** - Set up remote access and test from another device
4. **Practice** - Use the enhanced features in mock interviews
5. **Customize** - Adjust context for different types of interviews

The AI Interview Assistant is now much more powerful and flexible. Use these features to get more personalized, relevant responses that truly reflect your background and the specific role you're interviewing for!
75  README.md  Normal file
@@ -0,0 +1,75 @@
# AI Interview Assistant Chrome Extension

## Overview

The AI Interview Assistant is a Chrome extension designed to help users during interviews or meetings by providing real-time AI-powered responses to questions. It listens to the audio from the current tab, transcribes the speech, identifies questions, and generates concise answers using OpenAI's GPT model.

<div align="center">
  <img src="screenshot.png" alt="AI Interview Assistant side panel">
</div>

## Features

- Real-time audio capture from the current tab
- Speech-to-text transcription
- Question detection
- AI-powered responses using OpenAI's GPT-3.5-turbo model
- Persistent side panel interface
- Secure API key storage

## Installation

### Prerequisites

- Google Chrome browser (version 114 or later)
- An OpenAI API key

### Steps

1. Clone this repository or download the source code as a ZIP file and extract it.

2. Open Google Chrome and navigate to `chrome://extensions/`.

3. Enable "Developer mode" by toggling the switch in the top right corner.

4. Click on "Load unpacked" and select the directory containing the extension files.

5. The AI Interview Assistant extension should now appear in your list of installed extensions.

## Usage

1. Click on the AI Interview Assistant icon in the Chrome toolbar to open the side panel.

2. Enter your OpenAI API key in the provided input field and click "Save API Key".

3. Click "Start Listening" to begin capturing audio from the current tab.

4. As questions are detected in the audio, they will appear in the "Transcript" section.

5. AI-generated responses will appear in the "AI Response" section.

6. Click "Stop Listening" to end the audio capture.

## Privacy and Security

- The extension only captures audio from the current tab when actively listening.
- Your OpenAI API key is stored securely in Chrome's storage and is only used for making API requests.
- No audio data or transcripts are stored or transmitted beyond what's necessary for generating responses.

## Troubleshooting

- Ensure you have granted the necessary permissions for the extension to access tab audio.
- If you're not seeing responses, check that your API key is entered correctly and that you have sufficient credits on your OpenAI account.
- For any issues, please check the Chrome developer console for error messages.

## Contributing

Contributions to the AI Interview Assistant are welcome! Please feel free to submit pull requests or create issues for bugs and feature requests.

## License

[MIT License](LICENSE)

## Disclaimer

This extension is not affiliated with or endorsed by OpenAI. Use of the OpenAI API is subject to OpenAI's use policies and pricing.
20  assistant.html  Normal file
@@ -0,0 +1,20 @@
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <title>AI Interview Assistant</title>
  <link rel="stylesheet" href="style.css">
</head>
<body>
  <div id="app">
    <h3>AI Interview Assistant</h3>
    <input type="password" id="apiKeyInput" placeholder="Enter your OpenAI API Key here">
    <button id="saveApiKey">Save API Key</button>
    <button id="toggleListening">Start Listening</button>
    <div id="transcript"></div>
    <div id="aiResponse"></div>
  </div>
  <script src="assistant.js"></script>
</body>
</html>
56  assistant.js  Normal file
@@ -0,0 +1,56 @@
document.addEventListener('DOMContentLoaded', function() {
  const toggleButton = document.getElementById('toggleListening');
  const transcriptDiv = document.getElementById('transcript');
  const aiResponseDiv = document.getElementById('aiResponse');
  const apiKeyInput = document.getElementById('apiKeyInput');
  const saveApiKeyButton = document.getElementById('saveApiKey');
  let isListening = false;

  // Load saved API key
  chrome.storage.sync.get('openaiApiKey', (result) => {
    if (result.openaiApiKey) {
      apiKeyInput.value = result.openaiApiKey;
      saveApiKeyButton.textContent = 'API Key Saved';
      saveApiKeyButton.disabled = true;
    }
  });

  apiKeyInput.addEventListener('input', function() {
    saveApiKeyButton.textContent = 'Save API Key';
    saveApiKeyButton.disabled = false;
  });

  saveApiKeyButton.addEventListener('click', function() {
    const apiKey = apiKeyInput.value.trim();
    if (apiKey) {
      chrome.runtime.sendMessage({action: 'setApiKey', apiKey: apiKey});
      saveApiKeyButton.textContent = 'API Key Saved';
      saveApiKeyButton.disabled = true;
    } else {
      alert('Please enter a valid API key');
    }
  });

  toggleButton.addEventListener('click', function() {
    isListening = !isListening;
    toggleButton.textContent = isListening ? 'Stop Listening' : 'Start Listening';

    if (isListening) {
      chrome.runtime.sendMessage({action: 'startListening'});
      transcriptDiv.textContent = 'Listening for questions...';
      aiResponseDiv.textContent = 'The answer will appear here.';
    } else {
      chrome.runtime.sendMessage({action: 'stopListening'});
      transcriptDiv.textContent = '';
      aiResponseDiv.textContent = '';
    }
  });

  chrome.runtime.onMessage.addListener(function(request, sender, sendResponse) {
    if (request.action === 'updateTranscript') {
      transcriptDiv.textContent = request.transcript;
    } else if (request.action === 'updateAIResponse') {
      aiResponseDiv.textContent = request.response;
    }
  });
});
378  background.js  Normal file
@@ -0,0 +1,378 @@
let recognition;
let assistantWindowId = null;
let currentAIConfig = { provider: 'openai', model: 'gpt-4o-mini' };

// AI Service configurations
const aiServices = {
  openai: {
    baseUrl: 'https://api.openai.com/v1/chat/completions',
    headers: (apiKey) => ({
      'Content-Type': 'application/json',
      'Authorization': `Bearer ${apiKey}`
    }),
    formatRequest: (model, question, context = '') => ({
      model: model,
      messages: [
        { role: "system", content: `You are a helpful assistant that answers questions briefly and concisely during interviews. Provide clear, professional responses. ${context ? `\n\nContext Information:\n${context}` : ''}` },
        { role: "user", content: question }
      ],
      max_tokens: 200,
      temperature: 0.7
    }),
    parseResponse: (data) => data.choices[0].message.content.trim()
  },
  anthropic: {
    baseUrl: 'https://api.anthropic.com/v1/messages',
    headers: (apiKey) => ({
      'Content-Type': 'application/json',
      'x-api-key': apiKey,
      'anthropic-version': '2023-06-01'
    }),
    formatRequest: (model, question, context = '') => ({
      model: model,
      max_tokens: 200,
      messages: [
        { role: "user", content: `You are a helpful assistant that answers questions briefly and concisely during interviews. Provide clear, professional responses.${context ? `\n\nContext Information:\n${context}` : ''}\n\nQuestion: ${question}` }
      ]
    }),
    parseResponse: (data) => data.content[0].text.trim()
  },
  google: {
    baseUrl: (apiKey, model) => `https://generativelanguage.googleapis.com/v1beta/models/${model}:generateContent?key=${apiKey}`,
    headers: () => ({
      'Content-Type': 'application/json'
    }),
    formatRequest: (model, question, context = '') => ({
      // Use systemInstruction for instructions/context, and user role for the question
      systemInstruction: {
        role: 'system',
        parts: [{
          text: `You are a helpful assistant that answers questions briefly and concisely during interviews. Provide clear, professional responses.` + (context ? `\n\nContext Information:\n${context}` : '')
        }]
      },
      contents: [{
        role: 'user',
        parts: [{ text: `Question: ${question}` }]
      }],
      generationConfig: {
        maxOutputTokens: 200,
        temperature: 0.7
      }
    }),
    parseResponse: (data) => data.candidates[0].content.parts[0].text.trim()
  },
  ollama: {
    baseUrl: 'http://localhost:11434/api/generate',
    headers: () => ({
      'Content-Type': 'application/json'
    }),
    formatRequest: (model, question, context = '') => ({
      model: model,
      prompt: `You are a helpful assistant that answers questions briefly and concisely during interviews. Provide clear, professional responses.${context ? `\n\nContext Information:\n${context}` : ''}\n\nQuestion: ${question}\n\nAnswer:`,
      stream: false,
      options: {
        temperature: 0.7,
        num_predict: 200
      }
    }),
    parseResponse: (data) => data.response.trim()
  }
};

// Multi-device server state
let remoteServer = null;
let remoteServerPort = null;
let activeConnections = new Set();

chrome.runtime.onMessage.addListener(function(request, sender, sendResponse) {
  if (request.action === 'startListening') {
    if (request.aiProvider && request.model) {
      currentAIConfig = { provider: request.aiProvider, model: request.model };
    }
    startListening();
  } else if (request.action === 'stopListening') {
    stopListening();
  } else if (request.action === 'getAIResponse') {
    getAIResponse(request.question);
  } else if (request.action === 'startRemoteServer') {
    startRemoteServer(request.sessionId, request.port, sendResponse);
    return true; // Keep message channel open for async response
  } else if (request.action === 'stopRemoteServer') {
    stopRemoteServer(sendResponse);
    return true;
  } else if (request.action === 'remoteQuestion') {
    // Handle questions from remote devices
    getAIResponse(request.question);
  }
});

chrome.action.onClicked.addListener((tab) => {
  chrome.sidePanel.open({ tabId: tab.id });
});

chrome.windows.onRemoved.addListener((windowId) => {
  if (windowId === assistantWindowId) {
    assistantWindowId = null;
  }
});

function startListening() {
  chrome.tabs.query({ active: true, currentWindow: true }, (tabs) => {
    if (chrome.runtime.lastError) {
      console.error('Error querying tabs:', chrome.runtime.lastError);
      return;
    }
    if (tabs.length === 0) {
      console.error('No active tab found');
      return;
    }
    const activeTabId = tabs[0].id;
    if (typeof activeTabId === 'undefined') {
      console.error('Active tab ID is undefined');
      return;
    }

    // Check if the current tab is a valid web page (not chrome:// or extension pages)
    const tab = tabs[0];
    if (!tab.url || tab.url.startsWith('chrome://') || tab.url.startsWith('chrome-extension://')) {
      console.error('Cannot capture audio from this type of page:', tab.url);
      chrome.runtime.sendMessage({action: 'updateAIResponse', response: 'Error: Cannot capture audio from this page. Please navigate to a regular website.'});
      return;
    }

    chrome.tabCapture.getMediaStreamId({ consumerTabId: activeTabId }, (streamId) => {
      if (chrome.runtime.lastError) {
        console.error('Error getting media stream ID:', chrome.runtime.lastError);
        const errorMsg = chrome.runtime.lastError.message || 'Unknown error';
        chrome.runtime.sendMessage({action: 'updateAIResponse', response: `Error: ${errorMsg}. Make sure you've granted microphone permissions.`});
        return;
      }
      if (!streamId) {
        console.error('No stream ID received');
        chrome.runtime.sendMessage({action: 'updateAIResponse', response: 'Error: Failed to get media stream. Please try again.'});
        return;
      }
      injectContentScriptAndStartCapture(activeTabId, streamId);
    });
  });
}

function injectContentScriptAndStartCapture(tabId, streamId) {
  chrome.scripting.executeScript({
    target: { tabId: tabId },
    files: ['content.js']
  }, (injectionResults) => {
    if (chrome.runtime.lastError) {
      console.error('Error injecting content script:', chrome.runtime.lastError);
      chrome.runtime.sendMessage({action: 'updateAIResponse', response: 'Error: Failed to inject content script. Please refresh the page and try again.'});
      return;
    }

    // Wait a bit to ensure the content script is fully loaded
    setTimeout(() => {
      chrome.tabs.sendMessage(tabId, { action: 'startCapture', streamId: streamId }, (response) => {
        if (chrome.runtime.lastError) {
          console.error('Error starting capture:', chrome.runtime.lastError);
          const errorMsg = chrome.runtime.lastError.message || 'Unknown error';
          chrome.runtime.sendMessage({action: 'updateAIResponse', response: `Error: ${errorMsg}. Please make sure microphone permissions are granted.`});
        } else {
          console.log('Capture started successfully');
          chrome.runtime.sendMessage({action: 'updateAIResponse', response: 'Listening for audio... Speak your questions!'});
        }
      });
    }, 200); // Increased timeout slightly for better reliability
  });
}

function stopListening() {
  chrome.tabs.query({ active: true, currentWindow: true }, (tabs) => {
    if (chrome.runtime.lastError || tabs.length === 0) {
      console.error('Error querying tabs for stop:', chrome.runtime.lastError);
      return;
    }

    chrome.tabs.sendMessage(tabs[0].id, { action: 'stopCapture' }, (response) => {
      if (chrome.runtime.lastError) {
        console.error('Error stopping capture:', chrome.runtime.lastError);
        // Don't show error to user for stop operation, just log it
      } else {
        console.log('Capture stopped successfully');
        chrome.runtime.sendMessage({action: 'updateAIResponse', response: 'Stopped listening.'});
      }
    });
  });
}

function isQuestion(text) {
  // Simple check for question words or question mark
  const questionWords = ['what', 'when', 'where', 'who', 'why', 'how'];
  const lowerText = text.toLowerCase();
  return questionWords.some(word => lowerText.includes(word)) || text.includes('?');
}

async function getAIResponse(question) {
  try {
    const { provider, model } = currentAIConfig;
    const service = aiServices[provider];

    if (!service) {
      throw new Error(`Unsupported AI provider: ${provider}`);
    }

    // Get saved contexts to include in the prompt
    const contextData = await getStoredContexts();
    const systemContexts = contextData.filter(c => c.type === 'system');
    const generalContexts = contextData.filter(c => c.type !== 'system');

    const systemPromptExtra = systemContexts.length > 0
      ? systemContexts.map(ctx => `${ctx.title}:\n${ctx.content}`).join('\n\n---\n\n')
      : '';

    const contextString = generalContexts.length > 0
      ? generalContexts.map(ctx => `${ctx.title}:\n${ctx.content}`).join('\n\n---\n\n')
      : '';

    // Get API key for the current provider (skip for Ollama)
    let apiKey = null;
    if (provider !== 'ollama') {
      apiKey = await getApiKey(provider);
      if (!apiKey) {
        throw new Error(`${provider.charAt(0).toUpperCase() + provider.slice(1)} API key not set`);
      }
    }

    console.log(`Sending request to ${provider} API (${model})...`);

    // Prepare request configuration
    let url, headers, body;

    if (provider === 'google') {
      url = service.baseUrl(apiKey, model);
      headers = service.headers();
    } else {
      url = service.baseUrl;
      headers = service.headers(apiKey);
    }

    // Inject system prompt extras into question or dedicated field depending on provider
    // For consistency we keep a single system message including systemPromptExtra
    const mergedContext = systemPromptExtra
      ? `${systemPromptExtra}${contextString ? '\n\n---\n\n' + contextString : ''}`
      : contextString;

    body = JSON.stringify(service.formatRequest(model, question, mergedContext));

    const response = await fetch(url, {
      method: 'POST',
      headers: headers,
      body: body
    });

    if (!response.ok) {
      const errorText = await response.text();
      let errorMessage;

      try {
        const errorData = JSON.parse(errorText);
        errorMessage = errorData.error?.message || errorData.message || errorText;
      } catch {
        errorMessage = errorText;
      }

      throw new Error(`Failed to get response from ${provider}: ${response.status} ${response.statusText}\n${errorMessage}`);
    }

    const data = await response.json();
    const answer = service.parseResponse(data);

    // Send response to both local UI and remote devices
    chrome.runtime.sendMessage({action: 'updateAIResponse', response: answer});
    broadcastToRemoteDevices('aiResponse', { response: answer, question: question });

  } catch (error) {
    console.error('Error getting AI response:', error);

    // Provide more specific error messages
    let errorMessage = error.message;
    if (error.message.includes('API key')) {
      errorMessage = `${error.message}. Please check your API key in the settings.`;
    } else if (error.message.includes('Failed to fetch')) {
      if (currentAIConfig.provider === 'ollama') {
        errorMessage = 'Failed to connect to Ollama. Make sure Ollama is running locally on port 11434.';
      } else {
        errorMessage = 'Network error. Please check your internet connection.';
      }
    }

    const fullErrorMessage = 'Error: ' + errorMessage;
    chrome.runtime.sendMessage({action: 'updateAIResponse', response: fullErrorMessage});
    broadcastToRemoteDevices('aiResponse', { response: fullErrorMessage, question: question });
  }
}

async function getApiKey(provider) {
  return new Promise((resolve) => {
    chrome.storage.sync.get('apiKeys', (result) => {
      const apiKeys = result.apiKeys || {};
      resolve(apiKeys[provider]);
    });
  });
}

async function getStoredContexts() {
  return new Promise((resolve) => {
    chrome.storage.local.get('contexts', (result) => {
      resolve(result.contexts || []);
    });
  });
}

// Multi-device server functions
async function startRemoteServer(sessionId, port, sendResponse) {
  try {
    // Note: Chrome extensions can't directly create HTTP servers
    // This is a simplified implementation that would need a companion app
    // For now, we'll simulate the server functionality

    remoteServerPort = port;
    console.log(`Starting remote server on port ${port} with session ${sessionId}`);

    // In a real implementation, you would:
    // 1. Start a local HTTP/WebSocket server
    // 2. Handle incoming connections
    // 3. Route audio data and responses

    // For this demo, we'll just track the state
    sendResponse({
      success: true,
      message: 'Remote server started (demo mode)',
      url: `http://localhost:${port}?session=${sessionId}`
    });

  } catch (error) {
    console.error('Error starting remote server:', error);
    sendResponse({
      success: false,
      error: error.message
    });
  }
}

function stopRemoteServer(sendResponse) {
  remoteServer = null;
  remoteServerPort = null;
  activeConnections.clear();

  console.log('Remote server stopped');
  sendResponse({ success: true });
}

function broadcastToRemoteDevices(type, data) {
  // In a real implementation, this would send data to all connected WebSocket clients
  console.log('Broadcasting to remote devices:', type, data);

  // For demo purposes, we'll just log the broadcast
  if (activeConnections.size > 0) {
    console.log(`Broadcasting ${type} to ${activeConnections.size} connected devices`);
  }
}
86  content.js  Normal file
@@ -0,0 +1,86 @@
let audioContext;
let mediaStream;
let recognition;

chrome.runtime.onMessage.addListener((request, sender, sendResponse) => {
  if (request.action === 'startCapture') {
    startCapture(request.streamId);
    sendResponse({success: true});
  } else if (request.action === 'stopCapture') {
    stopCapture();
    sendResponse({success: true});
  }
  return true; // Keep the message channel open for async responses
});

function startCapture(streamId) {
  navigator.mediaDevices.getUserMedia({
    audio: {
      // Tab-capture constraints must be nested under `mandatory` for
      // getMediaStreamId-based capture to work
      mandatory: {
        chromeMediaSource: 'tab',
        chromeMediaSourceId: streamId
      }
    }
  }).then((stream) => {
    mediaStream = stream;
    audioContext = new AudioContext();
    const source = audioContext.createMediaStreamSource(stream);

    // Initialize speech recognition
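    // Note: webkitSpeechRecognition cannot consume the captured tab stream;
    // it listens on the page's default microphone. The MediaStream source
    // created above is set up but not otherwise used for transcription.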
    recognition = new webkitSpeechRecognition();
    recognition.continuous = true;
    recognition.interimResults = true;

    recognition.onresult = function(event) {
      let finalTranscript = '';
      for (let i = event.resultIndex; i < event.results.length; ++i) {
        if (event.results[i].isFinal) {
          finalTranscript += event.results[i][0].transcript;
        }
      }

      if (finalTranscript.trim() !== '') {
        chrome.runtime.sendMessage({action: 'updateTranscript', transcript: finalTranscript});

        // Check if the transcript contains a question
        if (isQuestion(finalTranscript)) {
          chrome.runtime.sendMessage({action: 'getAIResponse', question: finalTranscript});
        }
      }
    };

    recognition.onerror = function(event) {
      console.error('Speech recognition error:', event.error);
      chrome.runtime.sendMessage({action: 'updateAIResponse', response: `Speech recognition error: ${event.error}. Please try again.`});
    };

    recognition.start();
  }).catch((error) => {
    console.error('Error starting capture:', error);
    let errorMessage = 'Failed to start audio capture. ';
    if (error.name === 'NotAllowedError') {
      errorMessage += 'Please allow microphone access and try again.';
    } else if (error.name === 'NotFoundError') {
      errorMessage += 'No microphone found.';
    } else {
      errorMessage += error.message || 'Unknown error occurred.';
    }
    chrome.runtime.sendMessage({action: 'updateAIResponse', response: errorMessage});
  });
}

function stopCapture() {
  if (mediaStream) {
    mediaStream.getTracks().forEach(track => track.stop());
  }
  if (audioContext) {
    audioContext.close();
  }
  if (recognition) {
    recognition.stop();
  }
}

function isQuestion(text) {
  const questionWords = ['what', 'when', 'where', 'who', 'why', 'how'];
  const lowerText = text.toLowerCase();
  return questionWords.some(word => lowerText.includes(word)) || text.includes('?');
}
68  contentScript.js  Normal file
@@ -0,0 +1,68 @@
function createDraggableUI() {
  const uiHTML = `
    <div id="ai-assistant-ui" class="ai-assistant-container">
      <div id="ai-assistant-header">AI Interview Assistant</div>
      <div id="ai-assistant-content">
        <input type="password" id="apiKeyInput" placeholder="Enter your OpenAI API Key here">
        <button id="saveApiKey">Save API Key</button>
        <button id="toggleListening">Start Listening</button>
        <div id="transcript"></div>
        <div id="aiResponse"></div>
      </div>
    </div>
  `;

  const uiElement = document.createElement('div');
  uiElement.innerHTML = uiHTML;
  document.body.appendChild(uiElement);

  const container = document.getElementById('ai-assistant-ui');
  const header = document.getElementById('ai-assistant-header');

  let isDragging = false;
  let currentX;
  let currentY;
  let initialX;
  let initialY;
  let xOffset = 0;
  let yOffset = 0;

  header.addEventListener('mousedown', dragStart);
  document.addEventListener('mousemove', drag);
  document.addEventListener('mouseup', dragEnd);

  function dragStart(e) {
    initialX = e.clientX - xOffset;
    initialY = e.clientY - yOffset;

    if (e.target === header) {
      isDragging = true;
    }
  }

  function drag(e) {
    if (isDragging) {
      e.preventDefault();
      currentX = e.clientX - initialX;
      currentY = e.clientY - initialY;

      xOffset = currentX;
      yOffset = currentY;

      setTranslate(currentX, currentY, container);
    }
  }

  function dragEnd(e) {
    initialX = currentX;
    initialY = currentY;

    isDragging = false;
  }

  function setTranslate(xPos, yPos, el) {
    el.style.transform = `translate3d(${xPos}px, ${yPos}px, 0)`;
  }
}

createDraggableUI();
24  contentStyle.css  Normal file
@@ -0,0 +1,24 @@
.ai-assistant-container {
  position: fixed;
  top: 20px;
  right: 20px;
  width: 300px;
  background-color: white;
  border: 1px solid #ccc;
  border-radius: 5px;
  box-shadow: 0 2px 10px rgba(0,0,0,0.1);
  z-index: 9999;
}

#ai-assistant-header {
  padding: 10px;
  background-color: #f0f0f0;
  border-bottom: 1px solid #ccc;
  cursor: move;
}

#ai-assistant-content {
  padding: 10px;
}

/* Add more styles as needed */
BIN  icon128.png  Normal file  (binary file not shown, 6.1 KiB)
BIN  icon16.png  Normal file  (binary file not shown, 3.2 KiB)
BIN  icon48.png  Normal file  (binary file not shown, 4.1 KiB)
39  manifest.json  Normal file
@@ -0,0 +1,39 @@
{
  "manifest_version": 3,
  "name": "AI Interview Assistant",
  "version": "1.0",
  "description": "Monitors audio and answers questions in real-time using AI",
  "permissions": [
    "tabCapture",
    "storage",
    "activeTab",
    "scripting",
    "sidePanel"
  ],
  "host_permissions": [
    "https://*/*",
    "http://*/*",
    "https://api.openai.com/*",
    "https://api.anthropic.com/*",
    "https://generativelanguage.googleapis.com/*",
    "http://localhost:11434/*"
  ],
  "background": {
    "service_worker": "background.js"
  },
  "action": {
    "default_icon": {
      "16": "icon16.png",
      "48": "icon48.png",
      "128": "icon128.png"
    }
  },
  "side_panel": {
    "default_path": "sidepanel.html"
  },
  "icons": {
    "16": "icon16.png",
    "48": "icon48.png",
    "128": "icon128.png"
  }
}
20  popup.html  Normal file
@@ -0,0 +1,20 @@
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <title>AI Interview Assistant</title>
  <link rel="stylesheet" href="style.css">
</head>
<body>
  <div id="app">
    <h3>AI Interview Assistant</h3>
    <input type="password" id="apiKeyInput" placeholder="Enter your OpenAI API Key here">
    <button id="saveApiKey">Save API Key</button>
    <button id="toggleListening">Start Listening</button>
    <div id="transcript"></div>
    <div id="aiResponse"></div>
  </div>
  <script src="popup.js"></script>
</body>
</html>
56  popup.js  Normal file
@@ -0,0 +1,56 @@
document.addEventListener('DOMContentLoaded', function() {
  const toggleButton = document.getElementById('toggleListening');
  const transcriptDiv = document.getElementById('transcript');
  const aiResponseDiv = document.getElementById('aiResponse');
  const apiKeyInput = document.getElementById('apiKeyInput');
  const saveApiKeyButton = document.getElementById('saveApiKey');
  let isListening = false;

  // Load saved API key
  chrome.storage.sync.get('openaiApiKey', (result) => {
    if (result.openaiApiKey) {
      apiKeyInput.value = result.openaiApiKey;
      saveApiKeyButton.textContent = 'API Key Saved';
      saveApiKeyButton.disabled = true;
    }
  });

  apiKeyInput.addEventListener('input', function() {
    saveApiKeyButton.textContent = 'Save API Key';
    saveApiKeyButton.disabled = false;
  });

  saveApiKeyButton.addEventListener('click', function() {
    const apiKey = apiKeyInput.value.trim();
    if (apiKey) {
      chrome.runtime.sendMessage({action: 'setApiKey', apiKey: apiKey});
      saveApiKeyButton.textContent = 'API Key Saved';
      saveApiKeyButton.disabled = true;
    } else {
      alert('Please enter a valid API key');
    }
  });

  toggleButton.addEventListener('click', function() {
    isListening = !isListening;
    toggleButton.textContent = isListening ? 'Stop Listening' : 'Start Listening';

    if (isListening) {
      chrome.runtime.sendMessage({action: 'startListening'});
      transcriptDiv.textContent = 'Listening for questions...';
      aiResponseDiv.textContent = 'The answer will appear here.';
    } else {
      chrome.runtime.sendMessage({action: 'stopListening'});
      transcriptDiv.textContent = '';
      aiResponseDiv.textContent = '';
    }
  });

  chrome.runtime.onMessage.addListener(function(request, sender, sendResponse) {
    if (request.action === 'updateTranscript') {
      transcriptDiv.textContent = request.transcript;
    } else if (request.action === 'updateAIResponse') {
      aiResponseDiv.textContent = request.response;
    }
  });
});
335  remote-access.html  Normal file
@@ -0,0 +1,335 @@
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <title>AI Interview Assistant - Remote Access</title>
  <style>
    body {
      font-family: 'Segoe UI', Tahoma, Geneva, Verdana, sans-serif;
      background: linear-gradient(135deg, #667eea 0%, #764ba2 100%);
      margin: 0;
      padding: 20px;
      min-height: 100vh;
      display: flex;
      justify-content: center;
      align-items: center;
    }

    .container {
      background: white;
      border-radius: 15px;
      box-shadow: 0 20px 40px rgba(0,0,0,0.1);
      padding: 30px;
      max-width: 500px;
      width: 100%;
      text-align: center;
    }

    .logo {
      font-size: 48px;
      margin-bottom: 10px;
    }

    h1 {
      color: #2c3e50;
      margin-bottom: 10px;
      font-size: 24px;
    }

    .subtitle {
      color: #666;
      margin-bottom: 30px;
      font-size: 16px;
    }

    .status {
      padding: 15px;
      border-radius: 8px;
      margin-bottom: 20px;
      font-weight: 600;
    }

    .status.connected {
      background: #d5f4e6;
      color: #27ae60;
    }

    .status.disconnected {
      background: #fdf2f2;
      color: #e74c3c;
    }

    .status.connecting {
      background: #e8f4fd;
      color: #3498db;
    }

    .controls {
      display: flex;
      gap: 15px;
      justify-content: center;
      margin-bottom: 30px;
      flex-wrap: wrap;
    }

    button {
      padding: 12px 24px;
      border: none;
      border-radius: 8px;
      cursor: pointer;
      font-size: 16px;
      font-weight: 600;
      transition: all 0.3s ease;
    }

    .primary-btn {
      background: #3498db;
      color: white;
    }

    .primary-btn:hover {
      background: #2980b9;
      transform: translateY(-2px);
    }

    .danger-btn {
      background: #e74c3c;
      color: white;
    }

    .danger-btn:hover {
      background: #c0392b;
    }

    .transcript-section, .response-section {
      margin-bottom: 20px;
      text-align: left;
    }

    .section-title {
      font-weight: 600;
      color: #2c3e50;
      margin-bottom: 10px;
      display: flex;
      align-items: center;
      gap: 8px;
    }

    .content-box {
      background: #f8fafc;
      border: 1px solid #e0e6ed;
      border-radius: 8px;
      padding: 15px;
      min-height: 60px;
      max-height: 150px;
      overflow-y: auto;
      font-size: 14px;
      line-height: 1.5;
    }

    .response-box {
      background: #e8f6fd;
    }

    .device-info {
      background: #f0f4f8;
      padding: 15px;
      border-radius: 8px;
      margin-top: 20px;
      font-size: 14px;
      color: #666;
    }

    @media (max-width: 600px) {
      .container {
        padding: 20px;
        margin: 10px;
      }

      .controls {
        flex-direction: column;
      }

      button {
        width: 100%;
      }
    }
  </style>
</head>
<body>
  <div class="container">
    <div class="logo">🤖</div>
    <h1>AI Interview Assistant</h1>
    <div class="subtitle">Remote Access Portal</div>

    <div id="status" class="status disconnected">
      🔴 Disconnected from main device
    </div>

    <div class="controls">
      <button id="connectBtn" class="primary-btn">🔗 Connect</button>
      <button id="listenBtn" class="primary-btn" disabled>🎤 Start Listening</button>
      <button id="stopBtn" class="danger-btn" disabled>⏹️ Stop</button>
    </div>

    <div class="transcript-section">
      <div class="section-title">
        <span>🎯</span>
        <span>Live Transcript</span>
      </div>
      <div id="transcript" class="content-box">
        Waiting for audio input...
      </div>
    </div>

    <div class="response-section">
      <div class="section-title">
        <span>🧠</span>
        <span>AI Response</span>
      </div>
      <div id="aiResponse" class="content-box response-box">
        AI responses will appear here...
      </div>
    </div>

    <div class="device-info">
      <strong>📱 How to use:</strong><br>
      1. Make sure the main Chrome extension is running<br>
      2. Click "Connect" to establish connection<br>
      3. Start listening to capture audio from this device<br>
      4. Questions will be sent to the main device for AI processing
    </div>
  </div>

  <script>
    const statusEl = document.getElementById('status');
    const connectBtn = document.getElementById('connectBtn');
    const listenBtn = document.getElementById('listenBtn');
    const stopBtn = document.getElementById('stopBtn');
    const transcriptEl = document.getElementById('transcript');
    const aiResponseEl = document.getElementById('aiResponse');

    let isConnected = false;
    let isListening = false;
    let recognition = null;
    let websocket = null;

    // Get session ID from URL
    const urlParams = new URLSearchParams(window.location.search);
    const sessionId = urlParams.get('session');

    if (!sessionId) {
      statusEl.textContent = '❌ Invalid session. Please use the link from the main extension.';
      statusEl.className = 'status disconnected';
    }

    connectBtn.addEventListener('click', connect);
    listenBtn.addEventListener('click', toggleListening);
    stopBtn.addEventListener('click', stopListening);

    function connect() {
      statusEl.textContent = '🔄 Connecting...';
      statusEl.className = 'status connecting';

      // In a real implementation, this would connect to the WebSocket server
      // For demo purposes, we'll simulate the connection
      setTimeout(() => {
        isConnected = true;
        statusEl.textContent = '🟢 Connected to main device';
        statusEl.className = 'status connected';
        connectBtn.textContent = '✅ Connected';
        connectBtn.disabled = true;
        listenBtn.disabled = false;
      }, 1500);
    }

    function toggleListening() {
      if (!isListening) {
        startListening();
      } else {
        stopListening();
      }
    }

    function startListening() {
      if (!('webkitSpeechRecognition' in window)) {
        alert('Speech recognition not supported in this browser');
        return;
      }

      recognition = new webkitSpeechRecognition();
      recognition.continuous = true;
      recognition.interimResults = true;

      recognition.onstart = function() {
        isListening = true;
        listenBtn.textContent = '🔴 Listening...';
        listenBtn.classList.remove('primary-btn');
        listenBtn.classList.add('danger-btn');
        stopBtn.disabled = false;
        transcriptEl.textContent = 'Listening for questions...';
      };

      recognition.onresult = function(event) {
        let finalTranscript = '';
        for (let i = event.resultIndex; i < event.results.length; ++i) {
          if (event.results[i].isFinal) {
            finalTranscript += event.results[i][0].transcript;
          }
        }

        if (finalTranscript.trim() !== '') {
          transcriptEl.textContent = finalTranscript;

          // Check if it's a question and send to main device
          if (isQuestion(finalTranscript)) {
            sendQuestionToMainDevice(finalTranscript);
          }
        }
      };

      recognition.onerror = function(event) {
        console.error('Speech recognition error:', event.error);
        aiResponseEl.textContent = `Speech recognition error: ${event.error}`;
      };

      recognition.start();
    }

    function stopListening() {
      if (recognition) {
        recognition.stop();
      }
      isListening = false;
      listenBtn.textContent = '🎤 Start Listening';
      listenBtn.classList.remove('danger-btn');
      listenBtn.classList.add('primary-btn');
      stopBtn.disabled = true;
      transcriptEl.textContent = 'Stopped listening.';
    }

    function isQuestion(text) {
      const questionWords = ['what', 'when', 'where', 'who', 'why', 'how'];
      const lowerText = text.toLowerCase();
      return questionWords.some(word => lowerText.includes(word)) || text.includes('?');
    }

    function sendQuestionToMainDevice(question) {
      // In a real implementation, this would send the question via WebSocket
      // For demo purposes, we'll just show a processing message
      aiResponseEl.textContent = '🤔 Processing your question...';

      // Simulate AI response after a delay
      setTimeout(() => {
        aiResponseEl.textContent = `Demo response: This is where the AI would respond to "${question.substring(0, 50)}${question.length > 50 ? '...' : ''}"`;
      }, 2000);
    }

    // Auto-connect if session ID is present
    if (sessionId) {
      setTimeout(connect, 1000);
    }
  </script>
</body>
</html>
BIN  screenshot.png  Normal file  (binary file not shown, 36 KiB)
88  sidepanel.html  Normal file
@@ -0,0 +1,88 @@
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <title>AI Interview Assistant</title>
  <link rel="stylesheet" href="style.css">
</head>
<body>
  <div id="app">
    <h3>AI Interview Assistant</h3>

    <div class="ai-provider-section">
      <label for="aiProvider">AI Provider:</label>
      <select id="aiProvider">
        <option value="openai">OpenAI (GPT)</option>
        <option value="anthropic">Anthropic (Claude)</option>
        <option value="google">Google (Gemini)</option>
        <option value="ollama">Ollama (Local)</option>
      </select>
    </div>

    <div class="model-selection">
      <label for="modelSelect">Model:</label>
      <select id="modelSelect">
        <!-- Options will be populated based on provider selection -->
      </select>
    </div>

    <div class="api-key-section">
      <input type="password" id="apiKeyInput" placeholder="Enter your API Key here">
      <button id="saveApiKey">Save API Key</button>
      <div id="apiKeyStatus" class="status-message"></div>
    </div>

    <div class="context-section">
      <h4>📄 Context Management</h4>
      <div class="context-tabs">
        <button class="tab-button active" data-tab="upload">Upload Files</button>
        <button class="tab-button" data-tab="text">Add Text</button>
        <button class="tab-button" data-tab="manage">Manage (0)</button>
      </div>

      <div id="uploadTab" class="tab-content active">
        <input type="file" id="contextFileInput" multiple accept=".txt,.pdf,.doc,.docx" style="display: none;">
        <button id="uploadContextBtn">📁 Upload CV/Job Description</button>
        <div class="upload-info">Supports: PDF, DOC, DOCX, TXT</div>
      </div>

      <div id="textTab" class="tab-content">
        <textarea id="contextTextInput" placeholder="Paste your CV, job description, or any relevant context here..."></textarea>
        <select id="contextTypeSelect">
          <option value="general">General context</option>
          <option value="system">System prompt</option>
          <option value="cv">CV / Resume</option>
          <option value="job_description">Job description</option>
        </select>
        <input type="text" id="contextTitleInput" placeholder="Context title (e.g., 'My CV', 'Job Description')">
        <button id="addContextBtn">💾 Save Context</button>
      </div>

      <div id="manageTab" class="tab-content">
        <div id="contextList"></div>
        <button id="clearAllContextBtn" class="danger-btn">🗑️ Clear All Context</button>
      </div>
    </div>

    <div class="device-section">
      <h4>📱 Multi-Device Listening</h4>
      <div class="device-options">
        <button id="enableRemoteListening">🌐 Enable Remote Access</button>
        <div id="remoteStatus" class="status-message"></div>
        <div id="deviceInfo" class="device-info" style="display: none;">
          <div>📱 <strong>Access from any device:</strong></div>
          <div class="access-url" id="accessUrl"></div>
          <button id="copyUrlBtn">📋 Copy Link</button>
          <div class="qr-code" id="qrCode"></div>
        </div>
      </div>
    </div>

    <button id="toggleListening">Start Listening</button>
    <div id="transcript"></div>
    <div id="aiResponse"></div>
  </div>
  <script src="sidepanel.js"></script>
</body>
</html>
473  sidepanel.js  Normal file
@@ -0,0 +1,473 @@
|
||||
document.addEventListener('DOMContentLoaded', function() {
    const toggleButton = document.getElementById('toggleListening');
    const transcriptDiv = document.getElementById('transcript');
    const aiResponseDiv = document.getElementById('aiResponse');
    const apiKeyInput = document.getElementById('apiKeyInput');
    const saveApiKeyButton = document.getElementById('saveApiKey');
    const aiProviderSelect = document.getElementById('aiProvider');
    const modelSelect = document.getElementById('modelSelect');
    const apiKeyStatus = document.getElementById('apiKeyStatus');

    // Context management elements
    const contextFileInput = document.getElementById('contextFileInput');
    const uploadContextBtn = document.getElementById('uploadContextBtn');
    const contextTextInput = document.getElementById('contextTextInput');
    const contextTypeSelect = document.getElementById('contextTypeSelect');
    const contextTitleInput = document.getElementById('contextTitleInput');
    const addContextBtn = document.getElementById('addContextBtn');
    const contextList = document.getElementById('contextList');
    const clearAllContextBtn = document.getElementById('clearAllContextBtn');

    // Multi-device elements
    const enableRemoteListening = document.getElementById('enableRemoteListening');
    const remoteStatus = document.getElementById('remoteStatus');
    const deviceInfo = document.getElementById('deviceInfo');
    const accessUrl = document.getElementById('accessUrl');
    const copyUrlBtn = document.getElementById('copyUrlBtn');
    const qrCode = document.getElementById('qrCode');

    let isListening = false;
    let remoteServerActive = false;
    // AI Provider configurations
    const aiProviders = {
        openai: {
            name: 'OpenAI',
            models: ['gpt-4o', 'gpt-4o-mini', 'gpt-4-turbo', 'gpt-3.5-turbo'],
            defaultModel: 'gpt-4o-mini',
            apiKeyPlaceholder: 'Enter your OpenAI API Key',
            requiresKey: true
        },
        anthropic: {
            name: 'Anthropic',
            models: ['claude-3-5-sonnet-20241022', 'claude-3-5-haiku-20241022', 'claude-3-opus-20240229'],
            defaultModel: 'claude-3-5-sonnet-20241022',
            apiKeyPlaceholder: 'Enter your Anthropic API Key',
            requiresKey: true
        },
        google: {
            name: 'Google',
            models: ['gemini-1.5-pro', 'gemini-1.5-flash', 'gemini-pro'],
            defaultModel: 'gemini-1.5-flash',
            apiKeyPlaceholder: 'Enter your Google AI API Key',
            requiresKey: true
        },
        ollama: {
            name: 'Ollama',
            models: ['llama3.2', 'llama3.1', 'mistral', 'codellama', 'phi3'],
            defaultModel: 'llama3.2',
            apiKeyPlaceholder: 'No API key required (Local)',
            requiresKey: false
        }
    };
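
    // Note: these model lists are a point-in-time snapshot (the dated Anthropic
    // IDs are pinned model versions). Providers retire and rename models, so
    // the arrays above will need occasional updating.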
    // Load saved settings
    chrome.storage.sync.get(['aiProvider', 'selectedModel', 'apiKeys'], (result) => {
        const savedProvider = result.aiProvider || 'openai';
        const savedModel = result.selectedModel || aiProviders[savedProvider].defaultModel;
        const savedApiKeys = result.apiKeys || {};

        aiProviderSelect.value = savedProvider;
        updateModelOptions(savedProvider, savedModel);
        updateApiKeyInput(savedProvider);

        if (savedApiKeys[savedProvider] && aiProviders[savedProvider].requiresKey) {
            apiKeyInput.value = savedApiKeys[savedProvider];
            updateApiKeyStatus('API Key Saved', 'success');
            saveApiKeyButton.textContent = 'API Key Saved';
            saveApiKeyButton.disabled = true;
        }
    });

    // Load and display saved contexts
    loadContexts();
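
    // Both storage reads above are asynchronous, so the panel briefly shows
    // the default provider/model until the saved-settings callbacks run.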
    // Helper functions
    function updateModelOptions(provider, selectedModel = null) {
        const models = aiProviders[provider].models;
        modelSelect.innerHTML = '';

        models.forEach(model => {
            const option = document.createElement('option');
            option.value = model;
            option.textContent = model;
            if (selectedModel === model || (!selectedModel && model === aiProviders[provider].defaultModel)) {
                option.selected = true;
            }
            modelSelect.appendChild(option);
        });
    }

    function updateApiKeyInput(provider) {
        const providerConfig = aiProviders[provider];
        apiKeyInput.placeholder = providerConfig.apiKeyPlaceholder;
        apiKeyInput.disabled = !providerConfig.requiresKey;
        saveApiKeyButton.disabled = !providerConfig.requiresKey;

        if (!providerConfig.requiresKey) {
            apiKeyInput.value = '';
            updateApiKeyStatus('No API key required', 'success');
        } else {
            updateApiKeyStatus('', '');
        }
    }

    function updateApiKeyStatus(message, type) {
        apiKeyStatus.textContent = message;
        apiKeyStatus.className = `status-message ${type}`;
    }
    // Context Management Functions
    async function loadContexts() {
        const result = await chrome.storage.local.get('contexts');
        const contexts = result.contexts || [];
        displayContexts(contexts);
        updateManageTabCount(contexts.length);
    }
    function displayContexts(contexts) {
        contextList.innerHTML = '';
        if (contexts.length === 0) {
            contextList.innerHTML = '<div class="no-contexts">No context added yet. Add your CV or job description to get better responses!</div>';
            return;
        }

        contexts.forEach((context, index) => {
            // Build items with DOM APIs instead of an innerHTML template:
            // inline onclick attributes are blocked by the extension's CSP,
            // and textContent keeps user-supplied titles/content from being
            // parsed as HTML.
            const contextItem = document.createElement('div');
            contextItem.className = 'context-item';

            const info = document.createElement('div');
            info.className = 'context-item-info';

            const title = document.createElement('div');
            title.className = 'context-item-title';
            title.textContent = context.type ? `${context.title} • ${context.type}` : context.title;

            const preview = document.createElement('div');
            preview.className = 'context-item-preview';
            preview.textContent = context.content.substring(0, 100) + (context.content.length > 100 ? '...' : '');

            info.append(title, preview);

            const actions = document.createElement('div');
            actions.className = 'context-item-actions';

            const editBtn = document.createElement('button');
            editBtn.className = 'edit-btn';
            editBtn.textContent = '✏️ Edit';
            editBtn.addEventListener('click', () => editContext(index));

            const deleteBtn = document.createElement('button');
            deleteBtn.className = 'delete-btn danger-btn';
            deleteBtn.textContent = '🗑️ Delete';
            deleteBtn.addEventListener('click', () => deleteContext(index));

            actions.append(editBtn, deleteBtn);
            contextItem.append(info, actions);
            contextList.appendChild(contextItem);
        });
    }
    function updateManageTabCount(count) {
        const manageTab = document.querySelector('[data-tab="manage"]');
        manageTab.textContent = `Manage (${count})`;
    }
    async function saveContext(title, content) {
        if (!title.trim() || !content.trim()) {
            alert('Please provide both title and content');
            return;
        }

        // Basic guard against extremely large items (> 4 MB)
        const approxBytes = new Blob([content]).size;
        if (approxBytes > 4 * 1024 * 1024) {
            alert('This context is too large to store locally. Please split it into smaller parts.');
            return;
        }

        const result = await chrome.storage.local.get('contexts');
        const contexts = result.contexts || [];

        contexts.push({
            id: Date.now(),
            title: title.trim(),
            content: content.trim(),
            type: (contextTypeSelect && contextTypeSelect.value) || 'general',
            createdAt: new Date().toISOString()
        });

        await chrome.storage.local.set({ contexts: contexts });
        loadContexts();

        // Clear inputs
        contextTitleInput.value = '';
        contextTextInput.value = '';
        if (contextTypeSelect) contextTypeSelect.value = 'general';

        // Switch to the manage tab
        switchTab('manage');
    }
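
    // Note: the 4 MB guard above is a conservative, self-imposed cap, not a
    // documented limit. chrome.storage.local is itself quota-limited (roughly
    // 10 MB by default, more with the "unlimitedStorage" permission), so very
    // large contexts could still make storage.local.set() fail.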
    async function deleteContext(index) {
        if (!confirm('Are you sure you want to delete this context?')) return;

        const result = await chrome.storage.local.get('contexts');
        const contexts = result.contexts || [];
        contexts.splice(index, 1);

        await chrome.storage.local.set({ contexts: contexts });
        loadContexts();
    }
    async function clearAllContexts() {
        if (!confirm('Are you sure you want to delete all contexts? This cannot be undone.')) return;

        await chrome.storage.local.set({ contexts: [] });
        loadContexts();
    }
    function switchTab(tabName) {
        // Update tab buttons
        document.querySelectorAll('.tab-button').forEach(btn => {
            btn.classList.remove('active');
        });
        document.querySelector(`[data-tab="${tabName}"]`).classList.add('active');

        // Update tab content
        document.querySelectorAll('.tab-content').forEach(content => {
            content.classList.remove('active');
        });
        document.getElementById(`${tabName}Tab`).classList.add('active');
    }
    async function processFile(file) {
        return new Promise((resolve, reject) => {
            const reader = new FileReader();

            reader.onload = function(e) {
                resolve({
                    title: file.name,
                    content: e.target.result
                });
            };

            reader.onerror = function() {
                reject(new Error('Failed to read file'));
            };

            if (file.type === 'text/plain') {
                reader.readAsText(file);
            } else {
                // PDF/DOC/DOCX are binary formats, so readAsText() recovers
                // fragments at best. This is a simplified version; production
                // code would extract text with a proper PDF/DOC parser first.
                reader.readAsText(file);
            }
        });
    }
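
    // Note: Manifest V3 extensions cannot load remotely hosted code, so any
    // real PDF/DOCX parser used here would have to be bundled with the
    // extension package.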
    // Multi-device functions
    async function enableRemoteAccess() {
        try {
            remoteStatus.textContent = 'Starting server...';
            remoteStatus.className = 'status-message';

            // Generate a unique session ID
            const sessionId = Math.random().toString(36).substring(2, 15);
            const port = 8765;
            const accessURL = `http://localhost:${port}?session=${sessionId}`;

            // Ask the background script to start the WebSocket server
            // (not implemented in this commit)
            chrome.runtime.sendMessage({
                action: 'startRemoteServer',
                sessionId: sessionId,
                port: port
            }, (response) => {
                // Guard against an undefined response (e.g. no background handler)
                if (response && response.success) {
                    remoteServerActive = true;
                    accessUrl.textContent = accessURL;
                    deviceInfo.style.display = 'block';
                    remoteStatus.textContent = 'Remote access enabled!';
                    remoteStatus.className = 'status-message success';
                    enableRemoteListening.textContent = '🛑 Disable Remote Access';

                    // Generate QR code (simplified)
                    generateQRCode(accessURL);
                } else {
                    remoteStatus.textContent = 'Failed to start server: ' + ((response && response.error) || 'no response from background script');
                    remoteStatus.className = 'status-message error';
                }
            });
        } catch (error) {
            remoteStatus.textContent = 'Error: ' + error.message;
            remoteStatus.className = 'status-message error';
        }
    }
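
    // Caveat: extension pages and service workers cannot open a listening
    // port themselves, so 'startRemoteServer' can only work if the background
    // script delegates to something that can (e.g. a native messaging host or
    // an external relay). An http://localhost URL is also only reachable from
    // this machine; other devices would need the host's LAN address instead.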
    function disableRemoteAccess() {
        chrome.runtime.sendMessage({ action: 'stopRemoteServer' }, (response) => {
            remoteServerActive = false;
            deviceInfo.style.display = 'none';
            remoteStatus.textContent = '';
            enableRemoteListening.textContent = '🌐 Enable Remote Access';
        });
    }
    function generateQRCode(url) {
        // Simple QR code placeholder - in production, render a real QR code
        // for `url` with a QR code library
        qrCode.innerHTML = `
            <div style="border: 2px solid #333; padding: 10px; display: inline-block;">
                <div style="font-size: 8px; font-family: monospace;">QR Code</div>
                <div style="font-size: 6px;">Scan to access</div>
            </div>
        `;
    }
    // Edit/delete are wired up via addEventListener in displayContexts();
    // inline onclick attributes would be blocked by the extension's CSP.
    function editContext(index) {
        chrome.storage.local.get('contexts', (result) => {
            const contexts = result.contexts || [];
            const context = contexts[index];
            if (context) {
                contextTitleInput.value = context.title;
                contextTextInput.value = context.content;
                if (contextTypeSelect) contextTypeSelect.value = context.type || 'general';
                switchTab('text');
                // Remove the old entry directly (bypassing deleteContext's
                // confirmation prompt) so saving the edit doesn't duplicate it
                contexts.splice(index, 1);
                chrome.storage.local.set({ contexts: contexts }, loadContexts);
            }
        });
    }
    // Event listeners
    aiProviderSelect.addEventListener('change', function() {
        const selectedProvider = this.value;
        updateModelOptions(selectedProvider);
        updateApiKeyInput(selectedProvider);

        // Save the provider selection
        chrome.storage.sync.set({
            aiProvider: selectedProvider,
            selectedModel: aiProviders[selectedProvider].defaultModel
        });

        // Load the saved API key for this provider
        chrome.storage.sync.get('apiKeys', (result) => {
            const apiKeys = result.apiKeys || {};
            if (apiKeys[selectedProvider] && aiProviders[selectedProvider].requiresKey) {
                apiKeyInput.value = apiKeys[selectedProvider];
                updateApiKeyStatus('API Key Saved', 'success');
                saveApiKeyButton.textContent = 'API Key Saved';
                saveApiKeyButton.disabled = true;
            } else {
                apiKeyInput.value = '';
                saveApiKeyButton.textContent = 'Save API Key';
                saveApiKeyButton.disabled = !aiProviders[selectedProvider].requiresKey;
            }
        });
    });
    modelSelect.addEventListener('change', function() {
        chrome.storage.sync.set({ selectedModel: this.value });
    });
    apiKeyInput.addEventListener('input', function() {
        if (aiProviders[aiProviderSelect.value].requiresKey) {
            saveApiKeyButton.textContent = 'Save API Key';
            saveApiKeyButton.disabled = false;
            updateApiKeyStatus('', '');
        }
    });
    saveApiKeyButton.addEventListener('click', function() {
        const apiKey = apiKeyInput.value.trim();
        const provider = aiProviderSelect.value;

        if (!aiProviders[provider].requiresKey) {
            return;
        }

        if (apiKey) {
            // Save the API key for the current provider
            chrome.storage.sync.get('apiKeys', (result) => {
                const apiKeys = result.apiKeys || {};
                apiKeys[provider] = apiKey;

                chrome.storage.sync.set({ apiKeys: apiKeys }, () => {
                    saveApiKeyButton.textContent = 'API Key Saved';
                    saveApiKeyButton.disabled = true;
                    updateApiKeyStatus('API Key Saved', 'success');
                });
            });
        } else {
            updateApiKeyStatus('Please enter a valid API key', 'error');
        }
    });
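
    // Note: chrome.storage.sync replicates values (unencrypted) to every
    // browser the user is signed into and caps each item at roughly 8 KB;
    // chrome.storage.local would keep the API keys on this machine only.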
    // Context management event listeners
    document.querySelectorAll('.tab-button').forEach(button => {
        button.addEventListener('click', function() {
            const tabName = this.getAttribute('data-tab');
            switchTab(tabName);
        });
    });
    uploadContextBtn.addEventListener('click', function() {
        contextFileInput.click();
    });
    contextFileInput.addEventListener('change', async function() {
        const files = Array.from(this.files);
        for (const file of files) {
            try {
                const result = await processFile(file);
                await saveContext(result.title, result.content);
            } catch (error) {
                alert('Error processing file: ' + error.message);
            }
        }
        this.value = ''; // Clear the file input
    });
    addContextBtn.addEventListener('click', function() {
        const title = contextTitleInput.value.trim();
        const content = contextTextInput.value.trim();
        saveContext(title, content);
    });
    clearAllContextBtn.addEventListener('click', clearAllContexts);

    // Multi-device event listeners
    enableRemoteListening.addEventListener('click', function() {
        if (remoteServerActive) {
            disableRemoteAccess();
        } else {
            enableRemoteAccess();
        }
    });
    copyUrlBtn.addEventListener('click', function() {
        const originalText = copyUrlBtn.textContent;
        navigator.clipboard.writeText(accessUrl.textContent).then(() => {
            copyUrlBtn.textContent = '✅ Copied!';
            setTimeout(() => {
                copyUrlBtn.textContent = originalText;
            }, 2000);
        }).catch(() => {
            // Clipboard writes can be rejected when the document isn't focused
            copyUrlBtn.textContent = '⚠️ Copy failed';
            setTimeout(() => {
                copyUrlBtn.textContent = originalText;
            }, 2000);
        });
    });
    toggleButton.addEventListener('click', function() {
        isListening = !isListening;
        toggleButton.textContent = isListening ? 'Stop Listening' : 'Start Listening';

        if (isListening) {
            // Send the current AI configuration along with the start request
            const currentProvider = aiProviderSelect.value;
            const currentModel = modelSelect.value;

            chrome.runtime.sendMessage({
                action: 'startListening',
                aiProvider: currentProvider,
                model: currentModel
            });
            transcriptDiv.textContent = 'Listening for questions...';
            aiResponseDiv.textContent = `Using ${aiProviders[currentProvider].name} (${currentModel}). The answer will appear here.`;
        } else {
            chrome.runtime.sendMessage({ action: 'stopListening' });
            transcriptDiv.textContent = '';
            aiResponseDiv.textContent = '';
        }
    });
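
    // The actual audio capture and AI calls happen in the background script;
    // this panel only toggles state and renders whatever the background sends
    // back via the 'updateTranscript' / 'updateAIResponse' messages below.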
    chrome.runtime.onMessage.addListener(function(request, sender, sendResponse) {
        if (request.action === 'updateTranscript') {
            transcriptDiv.textContent = request.transcript;
        } else if (request.action === 'updateAIResponse') {
            aiResponseDiv.textContent = request.response;
        }
    });
});
299
style.css
Normal file
@@ -0,0 +1,299 @@
body {
    width: 100%;
    height: 100%;
    padding: 20px;
    font-family: 'Segoe UI', Tahoma, Geneva, Verdana, sans-serif;
    background-color: #f0f4f8;
    color: #333;
    margin: 0;
    box-sizing: border-box;
}

#app {
    background-color: white;
    border-radius: 8px;
    box-shadow: 0 4px 6px rgba(0, 0, 0, 0.1);
    padding: 20px;
    height: calc(100% - 40px);
    min-width: 76vw;
    display: flex;
    flex-direction: column;
}

h3 {
    font-size: 22px;
    margin-bottom: 20px;
    color: #2c3e50;
    text-align: center;
}

input[type="password"], select {
    width: 100%;
    padding: 10px;
    margin-bottom: 15px;
    border: 1px solid #ddd;
    border-radius: 4px;
    font-size: 14px;
    background-color: white;
}

.ai-provider-section, .model-selection, .api-key-section {
    margin-bottom: 20px;
}

.ai-provider-section label, .model-selection label {
    display: block;
    margin-bottom: 5px;
    font-weight: 600;
    color: #2c3e50;
}

.status-message {
    font-size: 12px;
    margin-top: 5px;
    padding: 5px;
    border-radius: 3px;
}

.status-message.success {
    color: #27ae60;
    background-color: #d5f4e6;
}

.status-message.error {
    color: #e74c3c;
    background-color: #fdf2f2;
}

/* Context Management Styles */
.context-section, .device-section {
    margin-bottom: 20px;
    border: 1px solid #e0e6ed;
    border-radius: 6px;
    padding: 15px;
    background-color: #f8fafc;
}

.context-section h4, .device-section h4 {
    margin: 0 0 15px 0;
    color: #2c3e50;
    font-size: 16px;
}

.context-tabs {
    display: flex;
    margin-bottom: 15px;
    border-bottom: 1px solid #ddd;
}

.tab-button {
    background: none;
    border: none;
    padding: 8px 12px;
    cursor: pointer;
    border-bottom: 2px solid transparent;
    font-size: 14px;
    color: #666;
    margin: 0;
    width: auto;
}

.tab-button.active {
    color: #3498db;
    border-bottom-color: #3498db;
}

.tab-button:hover {
    background-color: #f0f4f8;
}

.tab-content {
    display: none;
}

.tab-content.active {
    display: block;
}

#contextTextInput {
    width: 100%;
    min-height: 100px;
    padding: 10px;
    border: 1px solid #ddd;
    border-radius: 4px;
    resize: vertical;
    font-family: inherit;
    margin-bottom: 10px;
}

#contextTypeSelect {
    width: 100%;
    padding: 8px;
    margin-bottom: 10px;
    border: 1px solid #ddd;
    border-radius: 4px;
    background-color: white;
}

#contextTitleInput {
    width: 100%;
    padding: 8px;
    margin-bottom: 10px;
    border: 1px solid #ddd;
    border-radius: 4px;
}

.upload-info {
    font-size: 12px;
    color: #666;
    margin-top: 5px;
}

.context-item {
    display: flex;
    justify-content: space-between;
    align-items: center;
    padding: 10px;
    border: 1px solid #ddd;
    border-radius: 4px;
    margin-bottom: 8px;
    background-color: white;
}

.context-item-info {
    flex: 1;
}

.context-item-title {
    font-weight: 600;
    color: #2c3e50;
}

.context-item-preview {
    font-size: 12px;
    color: #666;
    margin-top: 2px;
}

.context-item-actions {
    display: flex;
    gap: 5px;
}

.context-item-actions button {
    padding: 4px 8px;
    font-size: 12px;
    margin: 0;
    width: auto;
}

.danger-btn {
    background-color: #e74c3c !important;
}

.danger-btn:hover {
    background-color: #c0392b !important;
}

/* Device Section Styles */
.device-options {
    display: flex;
    flex-direction: column;
    gap: 10px;
}

.access-url {
    background-color: #f0f4f8;
    padding: 10px;
    border-radius: 4px;
    font-family: monospace;
    font-size: 12px;
    word-break: break-all;
    margin: 10px 0;
    border: 1px solid #ddd;
}

.qr-code {
    text-align: center;
    margin-top: 10px;
}

.device-info {
    background-color: white;
    padding: 15px;
    border-radius: 4px;
    border: 1px solid #ddd;
    margin-top: 10px;
}

button {
    width: 100%;
    padding: 10px;
    margin-bottom: 15px;
    background-color: #3498db;
    color: white;
    border: none;
    border-radius: 4px;
    cursor: pointer;
    font-size: 16px;
    transition: background-color 0.3s ease;
}

button:hover {
    background-color: #2980b9;
}

#saveApiKey {
    background-color: #2ecc71;
}

#saveApiKey:hover {
    background-color: #27ae60;
}

#transcript, #aiResponse {
    margin-top: 15px;
    border: 1px solid #ddd;
    padding: 15px;
    min-height: 60px;
    max-height: 150px;
    overflow-y: auto;
    border-radius: 4px;
    font-size: 14px;
    line-height: 1.5;
}

#transcript {
    background-color: #ecf0f1;
}

#aiResponse {
    background-color: #e8f6fd;
}

/* Scrollbar styling */
::-webkit-scrollbar {
    width: 8px;
}

::-webkit-scrollbar-track {
    background: #f1f1f1;
}

::-webkit-scrollbar-thumb {
    background: #888;
    border-radius: 4px;
}

::-webkit-scrollbar-thumb:hover {
    background: #555;
}

button:disabled {
    background-color: #95a5a6;
    cursor: not-allowed;
}

button:disabled:hover {
    background-color: #95a5a6;
}