feat: Enhance audio capture and monitoring features
- Added "audioCapture" permission to manifest for microphone access. - Introduced DeepSeek as a new AI provider option in the side panel. - Implemented a capture mode selection (tab-only, mic-only, mixed) in the side panel. - Added options to enable/disable the extension and auto-open the assistant window. - Integrated a mic monitor feature with live input level visualization. - Included buttons for requesting microphone permission and granting tab access. - Updated styles for new sections and mic level visualization. - Enhanced model fetching logic to support DeepSeek and improved error handling.
@@ -22,6 +22,12 @@ Your AI Interview Assistant now supports multiple AI providers! Here's how to se
 - **Recommended Model**: Gemini-1.5-Flash (fast and efficient)
 - **Cost**: Free tier available, then pay per token
 
+## 🌊 **DeepSeek**
+- **Models Available**: DeepSeek-Chat, DeepSeek-Reasoner
+- **API Key**: Get from [DeepSeek Platform](https://platform.deepseek.com/)
+- **Recommended Model**: DeepSeek-Chat (general use)
+- **Cost**: Pay per token usage
+
 ## 🏠 **Ollama (Local)**
 - **Models Available**: Llama3.2, Llama3.1, Mistral, CodeLlama, Phi3
 - **Setup**: Install [Ollama](https://ollama.ai/) locally
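DeepSeek's chat endpoint follows the OpenAI request shape, which is what the `deepseek` entry added to `AI_SERVICES` in background.js below relies on. A minimal standalone call, with a placeholder key:

```js
// Minimal sketch of a DeepSeek chat completion call. Endpoint, headers,
// and body shape mirror the deepseek entry added to AI_SERVICES below.
const res = await fetch('https://api.deepseek.com/v1/chat/completions', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    Authorization: 'Bearer YOUR_DEEPSEEK_API_KEY' // placeholder
  },
  body: JSON.stringify({
    model: 'deepseek-chat',
    messages: [{ role: 'user', content: 'Say hello in one sentence.' }],
    max_tokens: 200
  })
});
const data = await res.json();
console.log(data.choices[0].message.content);
```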
Plans_and_Todo.md | 78 (new file)
@@ -0,0 +1,78 @@
+# Personal Browser Companion - Plans & To-Do
+
+## Goals
+- Start local-first with an option to sync to cloud.
+- Online-only operation (LLM required for decisions).
+- Auto-start mode during meetings.
+- Integrations: calendar, email, Discord, Nextcloud.
+
+## Phase Plan
+
+### Phase 1: Local MVP (Foundation)
+- Local storage for sessions, summaries, and user profile.
+- Meeting/interview modes with manual start and overlay UI.
+- Basic memory retrieval: recent session summaries + user profile.
+- Audio capture + STT pipeline (mic + tab) and transcript display.
+- Privacy controls: store/forget, per-session toggle.
+
+### Phase 2: Smart Auto-Start
+- Detect meeting tabs (Google Meet, Zoom, Teams) and prompt to start.
+- Auto-start rules (domain allowlist, time-based, calendar hints).
+- Lightweight on-device heuristics for meeting detection.
+
+### Phase 3: Cloud Sync (Optional)
+- Opt-in cloud sync for memory + settings.
+- Conflict resolution strategy (last-write wins + merge for summaries).
+- Encryption at rest, user-controlled delete/export.
+
+### Phase 4: Integrations (MCP)
+- Calendar: read upcoming meetings, attach context.
+- Email: draft follow-ups, summaries.
+- Discord: post meeting summary or action items to a channel.
+- Nextcloud: store meeting notes, transcripts, and attachments.
+
+## MVP To-Do (Local)
+
+### Core
+- Define memory schema (profile, session, summary, action items).
+- Implement local RAG: index summaries + profile into embeddings.
+- Add session lifecycle: start, pause, end, summarize.
+
+### Audio + STT
+- Implement reliable STT for tab audio (server-side if needed).
+- Keep mic-only STT as fallback.
+- Add device selection + live mic monitor.
+
+### UI/UX
+- Overlay controls: resize, hide/show, minimize.
+- Auto-start toggle in side panel.
+- Session summary view with “save to memory” toggle.
+
+### Privacy
+- Per-session storage consent prompt.
+- “Forget session” button.
+
+## Integration To-Do (MCP)
+
+### MCP Server Options
+- Build a local MCP server as a bridge for integrations.
+- Use MCP tool registry for calendar/email/Discord/Nextcloud.
+
+### Calendar
+- Read upcoming meetings and titles.
+- Auto-attach relevant context packs.
+
+### Email
+- Generate follow-up drafts from summary + action items.
+
+### Discord
+- Post meeting summary/action items to a selected channel.
+
+### Nextcloud
+- Upload meeting notes and transcripts.
+
+## Open Questions
+- Preferred cloud provider for sync?
+- How long should session memories persist by default?
+- Should auto-start be opt-in per domain or global?
+- What data should be redacted before sync?
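The "Define memory schema" item above implies a concrete record shape. A purely hypothetical sketch (none of these field names exist in the codebase yet; they only illustrate the to-do item):

```js
// Hypothetical memory schema sketch for chrome.storage.local.
// All names are assumptions, not part of this commit.
const exampleSession = {
  id: 'session-2024-05-01T10-00',
  mode: 'meeting',                   // 'meeting' | 'interview'
  startedAt: '2024-05-01T10:00:00Z',
  endedAt: '2024-05-01T10:45:00Z',
  transcriptChunks: ['...'],
  summary: 'Discussed Q2 roadmap.',
  actionItems: [{ text: 'Send follow-up email', done: false }],
  storeConsent: true                 // per-session privacy toggle
};

const exampleProfile = {
  name: 'Jane Doe',
  role: 'Frontend engineer',
  preferences: { autoStart: false, syncToCloud: false }
};
```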
README.md | 24
@@ -10,12 +10,16 @@ The AI Interview Assistant is a Chrome extension designed to help users during i
 
 ## Features
 
-- Real-time audio capture from the current tab
-- Speech-to-text transcription
-- Question detection
-- AI-powered responses using OpenAI's GPT-3.5-turbo model
+- Real-time audio capture (tab, mic, or mixed mode)
+- Speech-to-text transcription with live overlay
+- AI-powered responses with multiple providers (OpenAI, Anthropic, Google, DeepSeek, Ollama)
 - Persistent side panel interface
 - Secure API key storage
+- Context management (upload or paste documents for better answers)
+- Speed mode (faster, shorter responses)
+- Multi-device demo mode for remote access
+- Overlay controls: drag, resize, minimize, detach, hide/show
+- Mic monitor with input device selection and live level meter
 
 ## Installation
 
@@ -50,6 +54,18 @@ The AI Interview Assistant is a Chrome extension designed to help users during i
 
 6. Click "Stop Listening" to end the audio capture.
 
+## Plans & Roadmap
+
+- See the evolving roadmap and to-do list in `Plans_and_Todo.md`.
+
+## Recent Improvements
+
+- Larger, lighter overlay with a visible resize handle.
+- Overlay hide/show controls.
+- Mic monitor with input device selection and live level meter.
+- Auto-open assistant window option after Start Listening.
+- Better async message handling in content scripts.
+
 ## Privacy and Security
 
 - The extension only captures audio from the current tab when actively listening.
assistant.html
@@ -9,9 +9,7 @@
 <body>
   <div id="app">
     <h3>AI Interview Assistant</h3>
-    <input type="password" id="apiKeyInput" placeholder="Enter your OpenAI API Key here">
-    <button id="saveApiKey">Save API Key</button>
-    <button id="toggleListening">Start Listening</button>
+    <div class="status-message">Detached view</div>
    <div id="transcript"></div>
     <div id="aiResponse"></div>
   </div>
assistant.js | 44
@@ -1,50 +1,6 @@
 document.addEventListener('DOMContentLoaded', function() {
-  const toggleButton = document.getElementById('toggleListening');
   const transcriptDiv = document.getElementById('transcript');
   const aiResponseDiv = document.getElementById('aiResponse');
-  const apiKeyInput = document.getElementById('apiKeyInput');
-  const saveApiKeyButton = document.getElementById('saveApiKey');
-  let isListening = false;
-
-  // Load saved API key
-  chrome.storage.sync.get('openaiApiKey', (result) => {
-    if (result.openaiApiKey) {
-      apiKeyInput.value = result.openaiApiKey;
-      saveApiKeyButton.textContent = 'API Key Saved';
-      saveApiKeyButton.disabled = true;
-    }
-  });
-
-  apiKeyInput.addEventListener('input', function() {
-    saveApiKeyButton.textContent = 'Save API Key';
-    saveApiKeyButton.disabled = false;
-  });
-
-  saveApiKeyButton.addEventListener('click', function() {
-    const apiKey = apiKeyInput.value.trim();
-    if (apiKey) {
-      chrome.runtime.sendMessage({action: 'setApiKey', apiKey: apiKey});
-      saveApiKeyButton.textContent = 'API Key Saved';
-      saveApiKeyButton.disabled = true;
-    } else {
-      alert('Please enter a valid API key');
-    }
-  });
-
-  toggleButton.addEventListener('click', function() {
-    isListening = !isListening;
-    toggleButton.textContent = isListening ? 'Stop Listening' : 'Start Listening';
-
-    if (isListening) {
-      chrome.runtime.sendMessage({action: 'startListening'});
-      transcriptDiv.textContent = 'Listening for questions...';
-      aiResponseDiv.textContent = 'The answer will appear here.';
-    } else {
-      chrome.runtime.sendMessage({action: 'stopListening'});
-      transcriptDiv.textContent = '';
-      aiResponseDiv.textContent = '';
-    }
-  });
-
   chrome.runtime.onMessage.addListener(function(request, sender, sendResponse) {
     if (request.action === 'updateTranscript') {
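After this change the detached assistant window is display-only: it no longer manages API keys or listening state, and simply renders what the background script broadcasts. The broadcast shapes, as used throughout background.js and content.js in this commit:

```js
// How updates reach the detached window: background.js broadcasts,
// and assistant.js's remaining onMessage listener renders the payload.
chrome.runtime.sendMessage({ action: 'updateTranscript', transcript: 'Why did you leave your last role?' });
chrome.runtime.sendMessage({ action: 'updateAIResponse', response: 'Answer drafted by the selected provider.' });
```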
background.js | 581
@@ -1,23 +1,27 @@
-let recognition;
-let assistantWindowId = null;
-let currentAIConfig = { provider: 'openai', model: 'gpt-4o-mini' };
+'use strict';
 
-// AI Service configurations
-const aiServices = {
+const DEFAULT_AI_CONFIG = { provider: 'openai', model: 'gpt-4o-mini' };
+const DEFAULT_CAPTURE_MODE = 'tab';
+const LISTENING_PROMPT = 'You are a helpful assistant that answers questions briefly and concisely during interviews. Provide clear, professional responses.';
+
+const AI_SERVICES = {
   openai: {
     baseUrl: 'https://api.openai.com/v1/chat/completions',
     headers: (apiKey) => ({
       'Content-Type': 'application/json',
-      'Authorization': `Bearer ${apiKey}`
+      Authorization: `Bearer ${apiKey}`
     }),
-    formatRequest: (model, question, context = '') => ({
-      model: model,
+    formatRequest: (model, question, context = '', options = {}) => ({
+      model,
       messages: [
-        { role: "system", content: `You are a helpful assistant that answers questions briefly and concisely during interviews. Provide clear, professional responses. ${context ? `\n\nContext Information:\n${context}` : ''}` },
-        { role: "user", content: question }
+        {
+          role: 'system',
+          content: `${LISTENING_PROMPT}${context ? `\n\nContext Information:\n${context}` : ''}`
+        },
+        { role: 'user', content: question }
       ],
-      max_tokens: 200,
-      temperature: 0.7
+      max_tokens: options.maxTokens || 200,
+      temperature: options.temperature ?? 0.7
     }),
     parseResponse: (data) => data.choices[0].message.content.trim()
   },
@@ -28,11 +32,14 @@ const aiServices = {
       'x-api-key': apiKey,
       'anthropic-version': '2023-06-01'
     }),
-    formatRequest: (model, question, context = '') => ({
-      model: model,
-      max_tokens: 200,
+    formatRequest: (model, question, context = '', options = {}) => ({
+      model,
+      max_tokens: options.maxTokens || 200,
       messages: [
-        { role: "user", content: `You are a helpful assistant that answers questions briefly and concisely during interviews. Provide clear, professional responses.${context ? `\n\nContext Information:\n${context}` : ''}\n\nQuestion: ${question}` }
+        {
+          role: 'user',
+          content: `${LISTENING_PROMPT}${context ? `\n\nContext Information:\n${context}` : ''}\n\nQuestion: ${question}`
+        }
       ]
     }),
     parseResponse: (data) => data.content[0].text.trim()
@@ -42,68 +49,79 @@ const aiServices = {
     headers: () => ({
       'Content-Type': 'application/json'
     }),
-    formatRequest: (model, question, context = '') => ({
-      // Use systemInstruction for instructions/context, and user role for the question
+    formatRequest: (model, question, context = '', options = {}) => ({
       systemInstruction: {
         role: 'system',
-        parts: [{
-          text: `You are a helpful assistant that answers questions briefly and concisely during interviews. Provide clear, professional responses.` + (context ? `\n\nContext Information:\n${context}` : '')
-        }]
+        parts: [
+          {
+            text: `${LISTENING_PROMPT}${context ? `\n\nContext Information:\n${context}` : ''}`
+          }
+        ]
       },
-      contents: [{
-        role: 'user',
-        parts: [{ text: `Question: ${question}` }]
-      }],
+      contents: [
+        {
+          role: 'user',
+          parts: [{ text: `Question: ${question}` }]
+        }
+      ],
       generationConfig: {
-        maxOutputTokens: 200,
-        temperature: 0.7
+        maxOutputTokens: options.maxTokens || 200,
+        temperature: options.temperature ?? 0.7
       }
     }),
     parseResponse: (data) => data.candidates[0].content.parts[0].text.trim()
   },
+  deepseek: {
+    baseUrl: 'https://api.deepseek.com/v1/chat/completions',
+    headers: (apiKey) => ({
+      'Content-Type': 'application/json',
+      Authorization: `Bearer ${apiKey}`
+    }),
+    formatRequest: (model, question, context = '', options = {}) => ({
+      model,
+      messages: [
+        {
+          role: 'system',
+          content: `${LISTENING_PROMPT}${context ? `\n\nContext Information:\n${context}` : ''}`
+        },
+        { role: 'user', content: question }
+      ],
+      max_tokens: options.maxTokens || 200,
+      temperature: options.temperature ?? 0.7
+    }),
+    parseResponse: (data) => data.choices[0].message.content.trim()
+  },
   ollama: {
     baseUrl: 'http://localhost:11434/api/generate',
     headers: () => ({
       'Content-Type': 'application/json'
     }),
-    formatRequest: (model, question, context = '') => ({
-      model: model,
-      prompt: `You are a helpful assistant that answers questions briefly and concisely during interviews. Provide clear, professional responses.${context ? `\n\nContext Information:\n${context}` : ''}\n\nQuestion: ${question}\n\nAnswer:`,
+    formatRequest: (model, question, context = '', options = {}) => ({
+      model,
+      prompt: `${LISTENING_PROMPT}${context ? `\n\nContext Information:\n${context}` : ''}\n\nQuestion: ${question}\n\nAnswer:`,
       stream: false,
       options: {
-        temperature: 0.7,
-        num_predict: 200
+        temperature: options.temperature ?? 0.7,
+        num_predict: options.maxTokens || 200
       }
     }),
     parseResponse: (data) => data.response.trim()
   }
 };
 
-// Multi-device server state
-let remoteServer = null;
-let remoteServerPort = null;
-let activeConnections = new Set();
+const state = {
+  recognition: undefined,
+  assistantWindowId: null,
+  currentAIConfig: { ...DEFAULT_AI_CONFIG },
+  currentCaptureMode: DEFAULT_CAPTURE_MODE,
+  remoteServer: null,
+  remoteServerPort: null,
+  activeConnections: new Set(),
+  isActive: true
+};
 
-chrome.runtime.onMessage.addListener(function(request, sender, sendResponse) {
-  if (request.action === 'startListening') {
-    if (request.aiProvider && request.model) {
-      currentAIConfig = { provider: request.aiProvider, model: request.model };
-    }
-    startListening();
-  } else if (request.action === 'stopListening') {
-    stopListening();
-  } else if (request.action === 'getAIResponse') {
-    getAIResponse(request.question);
-  } else if (request.action === 'startRemoteServer') {
-    startRemoteServer(request.sessionId, request.port, sendResponse);
-    return true; // Keep message channel open for async response
-  } else if (request.action === 'stopRemoteServer') {
-    stopRemoteServer(sendResponse);
-    return true;
-  } else if (request.action === 'remoteQuestion') {
-    // Handle questions from remote devices
-    getAIResponse(request.question);
-  }
+chrome.runtime.onMessage.addListener((request, sender, sendResponse) => {
+  return handleMessage(request, sender, sendResponse);
 });
 
 chrome.action.onClicked.addListener((tab) => {
@@ -111,76 +129,185 @@ chrome.action.onClicked.addListener((tab) => {
 });
 
 chrome.windows.onRemoved.addListener((windowId) => {
-  if (windowId === assistantWindowId) {
-    assistantWindowId = null;
+  if (windowId === state.assistantWindowId) {
+    state.assistantWindowId = null;
   }
 });
 
+initializeActiveState();
+
+function handleMessage(request, _sender, sendResponse) {
+  switch (request.action) {
+    case 'startListening':
+      if (!state.isActive) {
+        chrome.runtime.sendMessage({
+          action: 'updateAIResponse',
+          response: 'Extension is inactive. Turn it on in the side panel to start listening.'
+        });
+        return false;
+      }
+      if (request.aiProvider && request.model) {
+        state.currentAIConfig = { provider: request.aiProvider, model: request.model };
+      }
+      if (request.captureMode) {
+        state.currentCaptureMode = request.captureMode;
+      }
+      startListening();
+      return false;
+    case 'stopListening':
+      stopListening();
+      return false;
+    case 'getAIResponse':
+      getAIResponse(request.question);
+      return false;
+    case 'startRemoteServer':
+      startRemoteServer(request.sessionId, request.port, sendResponse);
+      return true;
+    case 'stopRemoteServer':
+      stopRemoteServer(sendResponse);
+      return true;
+    case 'remoteQuestion':
+      getAIResponse(request.question);
+      return false;
+    case 'grantTabAccess':
+      grantTabAccess(sendResponse);
+      return true;
+    case 'openAssistantWindow':
+      openAssistantWindow(sendResponse);
+      return true;
+    case 'setActiveState':
+      setActiveState(Boolean(request.isActive), sendResponse);
+      return true;
+    default:
+      return false;
+  }
+}
+
 function startListening() {
+  if (state.currentCaptureMode === 'mic') {
+    startMicListening();
+    return;
+  }
+  if (state.currentCaptureMode === 'mixed') {
+    startMixedListening();
+    return;
+  }
+
   chrome.tabs.query({ active: true, currentWindow: true }, (tabs) => {
     if (chrome.runtime.lastError) {
       console.error('Error querying tabs:', chrome.runtime.lastError);
       return;
     }
-    if (tabs.length === 0) {
+    if (!tabs.length) {
       console.error('No active tab found');
       return;
     }
-    const activeTabId = tabs[0].id;
-    if (typeof activeTabId === 'undefined') {
-      console.error('Active tab ID is undefined');
-      return;
-    }
-
-    // Check if the current tab is a valid web page (not chrome:// or extension pages)
     const tab = tabs[0];
-    if (!tab.url || tab.url.startsWith('chrome://') || tab.url.startsWith('chrome-extension://')) {
+    if (!isValidCaptureTab(tab)) {
+      const message = 'Error: Cannot capture audio from this page. Please navigate to a regular website.';
       console.error('Cannot capture audio from this type of page:', tab.url);
-      chrome.runtime.sendMessage({action: 'updateAIResponse', response: 'Error: Cannot capture audio from this page. Please navigate to a regular website.'});
+      chrome.runtime.sendMessage({ action: 'updateAIResponse', response: message });
       return;
     }
 
-    chrome.tabCapture.getMediaStreamId({ consumerTabId: activeTabId }, (streamId) => {
+    chrome.tabCapture.getMediaStreamId({ consumerTabId: tab.id }, (streamId) => {
       if (chrome.runtime.lastError) {
-        console.error('Error getting media stream ID:', chrome.runtime.lastError);
         const errorMsg = chrome.runtime.lastError.message || 'Unknown error';
-        chrome.runtime.sendMessage({action: 'updateAIResponse', response: `Error: ${errorMsg}. Make sure you've granted microphone permissions.`});
+        const userMessage = buildTabCaptureErrorMessage(errorMsg);
+        console.error('Error getting media stream ID:', chrome.runtime.lastError);
+        chrome.runtime.sendMessage({ action: 'updateAIResponse', response: userMessage });
         return;
       }
       if (!streamId) {
         console.error('No stream ID received');
-        chrome.runtime.sendMessage({action: 'updateAIResponse', response: 'Error: Failed to get media stream. Please try again.'});
+        chrome.runtime.sendMessage({ action: 'updateAIResponse', response: 'Error: Failed to get media stream. Please try again.' });
         return;
       }
-      injectContentScriptAndStartCapture(activeTabId, streamId);
+      injectContentScriptAndStartCapture(tab.id, streamId);
     });
   });
 }
 
+function startMicListening() {
+  chrome.tabs.query({ active: true, currentWindow: true }, (tabs) => {
+    if (chrome.runtime.lastError || tabs.length === 0) {
+      console.error('Error querying tabs:', chrome.runtime.lastError);
+      return;
+    }
+    const tab = tabs[0];
+    if (!isValidCaptureTab(tab)) {
+      chrome.runtime.sendMessage({ action: 'updateAIResponse', response: 'Error: Cannot capture audio from this page. Please navigate to a regular website.' });
+      return;
+    }
+
+    chrome.scripting.executeScript({ target: { tabId: tab.id }, files: ['content.js'] }, () => {
+      if (chrome.runtime.lastError) {
+        chrome.runtime.sendMessage({ action: 'updateAIResponse', response: 'Error: Failed to inject content script. Please refresh the page and try again.' });
+        return;
+      }
+      chrome.tabs.sendMessage(tab.id, { action: 'startMicCapture' }, () => {
+        if (chrome.runtime.lastError) {
+          chrome.runtime.sendMessage({ action: 'updateAIResponse', response: 'Error: Failed to start microphone capture.' });
+        } else {
+          chrome.runtime.sendMessage({ action: 'updateAIResponse', response: 'Listening for audio (mic-only)...' });
+        }
+      });
+    });
+  });
+}
+
+function startMixedListening() {
+  chrome.tabs.query({ active: true, currentWindow: true }, (tabs) => {
+    if (chrome.runtime.lastError || tabs.length === 0) {
+      console.error('Error querying tabs:', chrome.runtime.lastError);
+      return;
+    }
+    const tab = tabs[0];
+    if (!isValidCaptureTab(tab)) {
+      chrome.runtime.sendMessage({ action: 'updateAIResponse', response: 'Error: Cannot capture audio from this page. Please navigate to a regular website.' });
+      return;
+    }
+
+    chrome.scripting.executeScript({ target: { tabId: tab.id }, files: ['content.js'] }, () => {
+      if (chrome.runtime.lastError) {
+        chrome.runtime.sendMessage({ action: 'updateAIResponse', response: 'Error: Failed to inject content script. Please refresh the page and try again.' });
+        return;
+      }
+      chrome.tabs.sendMessage(tab.id, { action: 'startMixedCapture' }, () => {
+        if (chrome.runtime.lastError) {
+          chrome.runtime.sendMessage({ action: 'updateAIResponse', response: 'Error: Failed to start mixed capture.' });
+        } else {
+          chrome.runtime.sendMessage({ action: 'updateAIResponse', response: 'Listening for audio (mixed mode)...' });
+        }
+      });
+    });
+  });
+}
+
 function injectContentScriptAndStartCapture(tabId, streamId) {
-  chrome.scripting.executeScript({
-    target: { tabId: tabId },
-    files: ['content.js']
-  }, (injectionResults) => {
+  chrome.scripting.executeScript({ target: { tabId }, files: ['content.js'] }, () => {
     if (chrome.runtime.lastError) {
       console.error('Error injecting content script:', chrome.runtime.lastError);
-      chrome.runtime.sendMessage({action: 'updateAIResponse', response: 'Error: Failed to inject content script. Please refresh the page and try again.'});
+      chrome.runtime.sendMessage({ action: 'updateAIResponse', response: 'Error: Failed to inject content script. Please refresh the page and try again.' });
       return;
     }
-
-    // Wait a bit to ensure the content script is fully loaded
     setTimeout(() => {
-      chrome.tabs.sendMessage(tabId, { action: 'startCapture', streamId: streamId }, (response) => {
+      chrome.tabs.sendMessage(tabId, { action: 'startCapture', streamId }, () => {
         if (chrome.runtime.lastError) {
-          console.error('Error starting capture:', chrome.runtime.lastError);
           const errorMsg = chrome.runtime.lastError.message || 'Unknown error';
-          chrome.runtime.sendMessage({action: 'updateAIResponse', response: `Error: ${errorMsg}. Please make sure microphone permissions are granted.`});
+          console.error('Error starting capture:', chrome.runtime.lastError);
+          chrome.runtime.sendMessage({
+            action: 'updateAIResponse',
+            response: `Error: ${errorMsg}. Please make sure microphone permissions are granted.`
+          });
         } else {
           console.log('Capture started successfully');
-          chrome.runtime.sendMessage({action: 'updateAIResponse', response: 'Listening for audio... Speak your questions!'});
+          chrome.runtime.sendMessage({ action: 'updateAIResponse', response: 'Listening for audio... Speak your questions!' });
         }
       });
-    }, 200); // Increased timeout slightly for better reliability
+    }, 200);
   });
 }
 
@@ -191,48 +318,49 @@ function stopListening() {
       return;
     }
 
-    chrome.tabs.sendMessage(tabs[0].id, { action: 'stopCapture' }, (response) => {
+    chrome.tabs.sendMessage(tabs[0].id, { action: 'stopCapture' }, () => {
       if (chrome.runtime.lastError) {
         console.error('Error stopping capture:', chrome.runtime.lastError);
-        // Don't show error to user for stop operation, just log it
-      } else {
-        console.log('Capture stopped successfully');
-        chrome.runtime.sendMessage({action: 'updateAIResponse', response: 'Stopped listening.'});
+        return;
       }
+      console.log('Capture stopped successfully');
+      chrome.runtime.sendMessage({ action: 'updateAIResponse', response: 'Stopped listening.' });
     });
   });
 }
 
 function isQuestion(text) {
-  // Simple check for question words or question mark
   const questionWords = ['what', 'when', 'where', 'who', 'why', 'how'];
   const lowerText = text.toLowerCase();
-  return questionWords.some(word => lowerText.includes(word)) || text.includes('?');
+  return questionWords.some((word) => lowerText.includes(word)) || text.includes('?');
 }
 
 async function getAIResponse(question) {
   try {
-    const { provider, model } = currentAIConfig;
-    const service = aiServices[provider];
+    const storedConfig = await getAIConfigFromStorage();
+    if (storedConfig) {
+      state.currentAIConfig = storedConfig;
+    }
+
+    const { provider, model } = state.currentAIConfig;
+    const service = AI_SERVICES[provider];
+    const speedMode = await getSpeedModeFromStorage();
+
     if (!service) {
       throw new Error(`Unsupported AI provider: ${provider}`);
     }
 
-    // Get saved contexts to include in the prompt
     const contextData = await getStoredContexts();
-    const systemContexts = contextData.filter(c => c.type === 'system');
-    const generalContexts = contextData.filter(c => c.type !== 'system');
+    const { systemContexts, generalContexts } = selectContextsForRequest(contextData, speedMode);
 
-    const systemPromptExtra = systemContexts.length > 0
-      ? systemContexts.map(ctx => `${ctx.title}:\n${ctx.content}`).join('\n\n---\n\n')
+    const systemPromptExtra = systemContexts.length
+      ? systemContexts.map((ctx) => `${ctx.title}:\n${ctx.content}`).join('\n\n---\n\n')
       : '';
 
-    const contextString = generalContexts.length > 0
-      ? generalContexts.map(ctx => `${ctx.title}:\n${ctx.content}`).join('\n\n---\n\n')
+    const contextString = generalContexts.length
+      ? generalContexts.map((ctx) => `${ctx.title}:\n${ctx.content}`).join('\n\n---\n\n')
       : '';
 
-    // Get API key for the current provider (skip for Ollama)
     let apiKey = null;
     if (provider !== 'ollama') {
       apiKey = await getApiKey(provider);
@@ -243,9 +371,8 @@ async function getAIResponse(question) {
 
     console.log(`Sending request to ${provider} API (${model})...`);
 
-    // Prepare request configuration
-    let url, headers, body;
+    let url;
+    let headers;
 
     if (provider === 'google') {
       url = service.baseUrl(apiKey, model);
       headers = service.headers();
@@ -254,20 +381,26 @@ async function getAIResponse(question) {
       headers = service.headers(apiKey);
     }
 
-    // Inject system prompt extras into question or dedicated field depending on provider
-    // For consistency we keep a single system message including systemPromptExtra
-    const mergedContext = systemPromptExtra
-      ? `${systemPromptExtra}${contextString ? '\n\n---\n\n' + contextString : ''}`
+    const mergedContextRaw = systemPromptExtra
+      ? `${systemPromptExtra}${contextString ? `\n\n---\n\n${contextString}` : ''}`
       : contextString;
+    const mergedContext = truncateContext(mergedContextRaw, provider, speedMode);
 
-    body = JSON.stringify(service.formatRequest(model, question, mergedContext));
+    const requestOptions = buildRequestOptions(speedMode);
+    const body = JSON.stringify(service.formatRequest(model, question, mergedContext, requestOptions));
+
+    const controller = new AbortController();
+    const timeoutId = setTimeout(() => controller.abort(), speedMode ? 20000 : 30000);
 
     const response = await fetch(url, {
       method: 'POST',
-      headers: headers,
-      body: body
+      headers,
+      body,
+      signal: controller.signal
     });
 
+    clearTimeout(timeoutId);
+
     if (!response.ok) {
       const errorText = await response.text();
       let errorMessage;
@@ -285,32 +418,153 @@ async function getAIResponse(question) {
     const data = await response.json();
     const answer = service.parseResponse(data);
 
-    // Send response to both local UI and remote devices
-    chrome.runtime.sendMessage({action: 'updateAIResponse', response: answer});
-    broadcastToRemoteDevices('aiResponse', { response: answer, question: question });
+    chrome.runtime.sendMessage({ action: 'updateAIResponse', response: answer });
+    broadcastToRemoteDevices('aiResponse', { response: answer, question });
   } catch (error) {
     console.error('Error getting AI response:', error);
 
-    // Provide more specific error messages
     let errorMessage = error.message;
     if (error.message.includes('API key')) {
       errorMessage = `${error.message}. Please check your API key in the settings.`;
     } else if (error.message.includes('Failed to fetch')) {
-      if (currentAIConfig.provider === 'ollama') {
+      if (state.currentAIConfig.provider === 'ollama') {
         errorMessage = 'Failed to connect to Ollama. Make sure Ollama is running locally on port 11434.';
       } else {
         errorMessage = 'Network error. Please check your internet connection.';
       }
+    } else if (error.message.includes('aborted')) {
+      errorMessage = 'Request timed out. Try again or enable speed mode.';
     }
 
-    const fullErrorMessage = 'Error: ' + errorMessage;
-    chrome.runtime.sendMessage({action: 'updateAIResponse', response: fullErrorMessage});
-    broadcastToRemoteDevices('aiResponse', { response: fullErrorMessage, question: question });
+    const fullErrorMessage = `Error: ${errorMessage}`;
+    chrome.runtime.sendMessage({ action: 'updateAIResponse', response: fullErrorMessage });
+    broadcastToRemoteDevices('aiResponse', { response: fullErrorMessage, question });
   }
 }
 
-async function getApiKey(provider) {
+function truncateContext(context, provider, speedMode) {
+  if (!context) return '';
+  const maxContextCharsByProvider = {
+    deepseek: speedMode ? 30000 : 60000,
+    openai: speedMode ? 50000 : 120000,
+    anthropic: speedMode ? 50000 : 120000,
+    google: speedMode ? 50000 : 120000,
+    ollama: speedMode ? 50000 : 120000
+  };
+  const maxChars = maxContextCharsByProvider[provider] || 200000;
+  if (context.length <= maxChars) return context;
+  return `${context.slice(0, maxChars)}\n\n[Context truncated to fit model limits.]`;
+}
+
+function selectContextsForRequest(contexts, speedMode) {
+  const sorted = [...contexts].sort((a, b) => (b.createdAt || '').localeCompare(a.createdAt || ''));
+  const systemContexts = sorted.filter((ctx) => ctx.type === 'system');
+  const generalContexts = sorted.filter((ctx) => ctx.type !== 'system');
+
+  const maxGeneralItems = speedMode ? 2 : 4;
+  const maxSystemItems = speedMode ? 1 : 2;
+  const maxItemChars = speedMode ? 4000 : 8000;
+
+  const trimItem = (ctx) => ({
+    ...ctx,
+    content: (ctx.content || '').slice(0, maxItemChars)
+  });
+
+  return {
+    systemContexts: systemContexts.slice(0, maxSystemItems).map(trimItem),
+    generalContexts: generalContexts.slice(0, maxGeneralItems).map(trimItem)
+  };
+}
+
+function buildRequestOptions(speedMode) {
+  if (!speedMode) {
+    return { maxTokens: 200, temperature: 0.7 };
+  }
+  return { maxTokens: 120, temperature: 0.4 };
+}
+
+function getSpeedModeFromStorage() {
+  return new Promise((resolve) => {
+    chrome.storage.sync.get(['speedMode'], (result) => {
+      if (chrome.runtime.lastError) {
+        resolve(false);
+        return;
+      }
+      resolve(Boolean(result.speedMode));
+    });
+  });
+}
+
+function grantTabAccess(sendResponse) {
+  chrome.tabs.query({ active: true, currentWindow: true }, (tabs) => {
+    if (chrome.runtime.lastError || !tabs.length) {
+      sendResponse({ success: false, error: 'No active tab found.' });
+      return;
+    }
+
+    const tabId = tabs[0].id;
+    chrome.sidePanel.open({ tabId }, () => {
+      if (chrome.runtime.lastError) {
+        sendResponse({ success: false, error: 'Click the extension icon on the target tab to grant access.' });
+        return;
+      }
+
+      if (chrome.action && chrome.action.openPopup) {
+        chrome.action.openPopup(() => {
+          sendResponse({ success: true });
+        });
+      } else {
+        sendResponse({ success: true });
+      }
+    });
+  });
+}
+
+function openAssistantWindow(sendResponse) {
+  if (state.assistantWindowId !== null) {
+    chrome.windows.update(state.assistantWindowId, { focused: true }, () => {
+      sendResponse({ success: true });
+    });
+    return;
+  }
+
+  chrome.windows.create(
+    {
+      url: chrome.runtime.getURL('assistant.html'),
+      type: 'popup',
+      width: 420,
+      height: 320
+    },
+    (win) => {
+      if (chrome.runtime.lastError || !win) {
+        sendResponse({ success: false, error: 'Failed to open assistant window.' });
+        return;
+      }
+      state.assistantWindowId = win.id;
+      sendResponse({ success: true });
+    }
+  );
+}
+
+function getAIConfigFromStorage() {
+  return new Promise((resolve) => {
+    chrome.storage.sync.get(['aiProvider', 'selectedModel'], (result) => {
+      if (chrome.runtime.lastError) {
+        resolve(null);
+        return;
+      }
+      const provider = result.aiProvider;
+      const model = result.selectedModel;
+      if (!provider || !model) {
+        resolve(null);
+        return;
+      }
+      resolve({ provider, model });
+    });
+  });
+}
+
+function getApiKey(provider) {
   return new Promise((resolve) => {
     chrome.storage.sync.get('apiKeys', (result) => {
       const apiKeys = result.apiKeys || {};
@@ -319,7 +573,7 @@ async function getApiKey(provider) {
   });
 }
 
-async function getStoredContexts() {
+function getStoredContexts() {
   return new Promise((resolve) => {
     chrome.storage.local.get('contexts', (result) => {
       resolve(result.contexts || []);
@@ -327,28 +581,16 @@ async function getStoredContexts() {
     });
   });
 }
 
-// Multi-device server functions
-async function startRemoteServer(sessionId, port, sendResponse) {
+function startRemoteServer(sessionId, port, sendResponse) {
   try {
-    // Note: Chrome extensions can't directly create HTTP servers
-    // This is a simplified implementation that would need a companion app
-    // For now, we'll simulate the server functionality
-
-    remoteServerPort = port;
+    state.remoteServerPort = port;
     console.log(`Starting remote server on port ${port} with session ${sessionId}`);
 
-    // In a real implementation, you would:
-    // 1. Start a local HTTP/WebSocket server
-    // 2. Handle incoming connections
-    // 3. Route audio data and responses
-
-    // For this demo, we'll just track the state
     sendResponse({
       success: true,
       message: 'Remote server started (demo mode)',
       url: `http://localhost:${port}?session=${sessionId}`
     });
   } catch (error) {
     console.error('Error starting remote server:', error);
     sendResponse({
@@ -359,20 +601,73 @@ async function startRemoteServer(sessionId, port, sendResponse) {
 }
 
 function stopRemoteServer(sendResponse) {
-  remoteServer = null;
-  remoteServerPort = null;
-  activeConnections.clear();
+  state.remoteServer = null;
+  state.remoteServerPort = null;
+  state.activeConnections.clear();
 
   console.log('Remote server stopped');
   sendResponse({ success: true });
 }
 
 function broadcastToRemoteDevices(type, data) {
-  // In a real implementation, this would send data to all connected WebSocket clients
   console.log('Broadcasting to remote devices:', type, data);
-
-  // For demo purposes, we'll just log the broadcast
-  if (activeConnections.size > 0) {
-    console.log(`Broadcasting ${type} to ${activeConnections.size} connected devices`);
+  if (state.activeConnections.size > 0) {
+    console.log(`Broadcasting ${type} to ${state.activeConnections.size} connected devices`);
   }
 }
+
+function isValidCaptureTab(tab) {
+  if (!tab || !tab.url) return false;
+  return !tab.url.startsWith('chrome://') && !tab.url.startsWith('chrome-extension://');
+}
+
+function buildTabCaptureErrorMessage(errorMsg) {
+  let userMessage = `Error: ${errorMsg}.`;
+  if (errorMsg.includes('Extension has not been invoked')) {
+    userMessage += ' Click the extension icon on the tab you want to capture, then press Start Listening.';
+  } else {
+    userMessage += ' Make sure you\'ve granted microphone permissions.';
+  }
+  return userMessage;
+}
+
+function initializeActiveState() {
+  chrome.storage.sync.get(['extensionActive'], (result) => {
+    if (chrome.runtime.lastError) {
+      state.isActive = true;
+      updateActionBadge();
+      return;
+    }
+    state.isActive = result.extensionActive !== false;
+    updateActionBadge();
+  });
+}
+
+function setActiveState(isActive, sendResponse) {
+  state.isActive = isActive;
+  chrome.storage.sync.set({ extensionActive: isActive }, () => {
+    updateActionBadge();
+    if (!isActive) {
+      stopListeningAcrossTabs();
+    }
+    sendResponse({ success: true, isActive });
+  });
+}
+
+function updateActionBadge() {
+  if (!chrome.action || !chrome.action.setBadgeText) return;
+  chrome.action.setBadgeText({ text: state.isActive ? 'ON' : 'OFF' });
+  chrome.action.setBadgeBackgroundColor({ color: state.isActive ? '#2ecc71' : '#e74c3c' });
+}
+
+function stopListeningAcrossTabs() {
+  chrome.tabs.query({}, (tabs) => {
+    if (chrome.runtime.lastError || !tabs.length) return;
+    tabs.forEach((tab) => {
+      if (!tab.id) return;
+      chrome.tabs.sendMessage(tab.id, { action: 'stopCapture' }, () => {
+        // Ignore errors for tabs without the content script.
+      });
+    });
+  });
+}
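The content.js changes below add a `requestMicPermission` handler. The side-panel caller is outside this excerpt; a hedged sketch of how it would presumably invoke the handler (only the message shape and response fields are confirmed by the listener in content.js):

```js
// Hedged sketch: asking the content script in the active tab for mic permission.
chrome.tabs.query({ active: true, currentWindow: true }, (tabs) => {
  chrome.tabs.sendMessage(tabs[0].id, { action: 'requestMicPermission' }, (result) => {
    if (result && result.success) {
      console.log('Microphone permission granted');
    } else {
      console.warn('Mic permission failed:', result && result.error);
    }
  });
});
```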
content.js | 573
@@ -1,19 +1,89 @@
 let audioContext;
 let mediaStream;
 let recognition;
+let isCapturing = false;
+let overlayInitialized = false;
+let activeCaptureMode = 'tab';
+let overlayListening = false;
+let overlayHidden = false;
+let analyserNode = null;
+let meterSource = null;
+let meterRaf = null;
 
 chrome.runtime.onMessage.addListener((request, sender, sendResponse) => {
   if (request.action === 'startCapture') {
+    activeCaptureMode = 'tab';
     startCapture(request.streamId);
-    sendResponse({success: true});
-  } else if (request.action === 'stopCapture') {
-    stopCapture();
-    sendResponse({success: true});
+    sendResponse({ success: true });
+    return false;
   }
-  return true; // Keep the message channel open for async responses
+  if (request.action === 'startMicCapture') {
+    activeCaptureMode = 'mic';
+    startMicCapture();
+    sendResponse({ success: true });
+    return false;
+  }
+  if (request.action === 'startMixedCapture') {
+    activeCaptureMode = 'mixed';
+    startMixedCapture(request.streamId);
+    sendResponse({ success: true });
+    return false;
+  }
+  if (request.action === 'stopCapture') {
+    stopCapture();
+    sendResponse({ success: true });
+    return false;
+  }
+  if (request.action === 'requestMicPermission') {
+    requestMicPermission().then(sendResponse);
+    return true;
+  }
+  if (request.action === 'updateTranscript') {
+    updateOverlay('transcript', request.transcript);
+    return false;
+  }
+  if (request.action === 'updateAIResponse') {
+    updateOverlay('response', request.response);
+    return false;
+  }
+  if (request.action === 'showOverlay') {
+    setOverlayHidden(false);
+    return false;
+  }
+  if (request.action === 'hideOverlay') {
+    setOverlayHidden(true);
+    return false;
+  }
+  return false;
 });
 
+async function requestMicPermission() {
+  try {
+    const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
+    stream.getTracks().forEach(track => track.stop());
+    return { success: true };
+  } catch (error) {
+    let errorMessage = 'Microphone permission denied.';
+    if (error.name === 'NotAllowedError') {
+      errorMessage = 'Microphone permission denied.';
+    } else if (error.name === 'NotFoundError') {
+      errorMessage = 'No microphone found.';
+    } else {
+      errorMessage = error.message || 'Unknown error occurred.';
+    }
+    return { success: false, error: errorMessage };
+  }
+}
+
 function startCapture(streamId) {
+  isCapturing = true;
+  overlayListening = true;
+  ensureOverlay();
+  updateOverlayIndicator();
+  updateOverlay(
+    'response',
+    'Tab audio is captured, but speech recognition uses the microphone. Use mic or mixed mode if you want transcription.'
+  );
   navigator.mediaDevices.getUserMedia({
     audio: {
       chromeMediaSource: 'tab',
@@ -22,37 +92,10 @@ function startCapture(streamId) {
   }).then((stream) => {
     mediaStream = stream;
     audioContext = new AudioContext();
-    const source = audioContext.createMediaStreamSource(stream);
-
-    // Initialize speech recognition
-    recognition = new webkitSpeechRecognition();
-    recognition.continuous = true;
-    recognition.interimResults = true;
-
-    recognition.onresult = function(event) {
-      let finalTranscript = '';
-      for (let i = event.resultIndex; i < event.results.length; ++i) {
-        if (event.results[i].isFinal) {
-          finalTranscript += event.results[i][0].transcript;
-        }
-      }
-
-      if (finalTranscript.trim() !== '') {
-        chrome.runtime.sendMessage({action: 'updateTranscript', transcript: finalTranscript});
-
-        // Check if the transcript contains a question
-        if (isQuestion(finalTranscript)) {
-          chrome.runtime.sendMessage({action: 'getAIResponse', question: finalTranscript});
-        }
-      }
-    };
-
-    recognition.onerror = function(event) {
-      console.error('Speech recognition error:', event.error);
-      chrome.runtime.sendMessage({action: 'updateAIResponse', response: `Speech recognition error: ${event.error}. Please try again.`});
-    };
-
-    recognition.start();
+    createAudioMeter(stream);
+    if (ensureSpeechRecognitionAvailable()) {
+      startRecognition();
+    }
   }).catch((error) => {
     console.error('Error starting capture:', error);
     let errorMessage = 'Failed to start audio capture. ';
@@ -64,23 +107,473 @@ function startCapture(streamId) {
|
|||||||
errorMessage += error.message || 'Unknown error occurred.';
|
errorMessage += error.message || 'Unknown error occurred.';
|
||||||
}
|
}
|
||||||
chrome.runtime.sendMessage({action: 'updateAIResponse', response: errorMessage});
|
chrome.runtime.sendMessage({action: 'updateAIResponse', response: errorMessage});
|
||||||
|
updateOverlay('response', errorMessage);
|
||||||
|
overlayListening = false;
|
||||||
|
updateOverlayIndicator();
|
||||||
});
|
});
|
||||||
}
|
}
|
||||||
|
|
||||||
|
function startMicCapture() {
|
||||||
|
isCapturing = true;
|
||||||
|
overlayListening = true;
|
||||||
|
ensureOverlay();
|
||||||
|
updateOverlayIndicator();
|
||||||
|
navigator.mediaDevices.getUserMedia({ audio: true }).then((stream) => {
|
||||||
|
mediaStream = stream;
|
||||||
|
audioContext = new AudioContext();
|
||||||
|
createAudioMeter(stream);
|
||||||
|
if (ensureSpeechRecognitionAvailable()) {
|
||||||
|
startRecognition();
|
||||||
|
}
|
||||||
|
}).catch((error) => {
|
||||||
|
console.error('Error starting mic capture:', error);
|
||||||
|
let errorMessage = 'Failed to start microphone capture. ';
|
||||||
|
if (error.name === 'NotAllowedError') {
|
||||||
|
errorMessage += 'Please allow microphone access and try again.';
|
||||||
|
} else if (error.name === 'NotFoundError') {
|
||||||
|
errorMessage += 'No microphone found.';
|
||||||
|
} else {
|
||||||
|
errorMessage += error.message || 'Unknown error occurred.';
|
||||||
|
}
|
||||||
|
chrome.runtime.sendMessage({action: 'updateAIResponse', response: errorMessage});
|
||||||
|
updateOverlay('response', errorMessage);
|
||||||
|
overlayListening = false;
|
||||||
|
updateOverlayIndicator();
|
||||||
|
});
|
||||||
|
}
|
||||||
+
+function startMixedCapture(streamId) {
+  isCapturing = true;
+  overlayListening = true;
+  ensureOverlay();
+  updateOverlayIndicator();
+  // Chrome only honors tab-capture constraints when they are nested under `mandatory`.
+  navigator.mediaDevices.getUserMedia({
+    audio: {
+      mandatory: {
+        chromeMediaSource: 'tab',
+        chromeMediaSourceId: streamId
+      }
+    }
+  }).then((stream) => {
+    mediaStream = stream;
+    audioContext = new AudioContext();
+    createAudioMeter(stream);
+    if (ensureSpeechRecognitionAvailable()) {
+      startRecognition();
+    }
+  }).catch((error) => {
+    console.error('Error starting mixed capture:', error);
+    chrome.runtime.sendMessage({action: 'updateAIResponse', response: 'Failed to start mixed capture.'});
+    updateOverlay('response', 'Failed to start mixed capture.');
+    overlayListening = false;
+    updateOverlayIndicator();
+  });
+}
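As written, "mixed" mode opens only the tab stream; folding the microphone in would take a second getUserMedia call and an AudioContext merge. A sketch of that idea under the same globals (not what this commit ships):

// Merge tab audio and mic audio into one MediaStream for metering/recognition.
function mergeStreams(tabStream, micStream) {
  const ctx = new AudioContext();
  const destination = ctx.createMediaStreamDestination();
  ctx.createMediaStreamSource(tabStream).connect(destination);
  ctx.createMediaStreamSource(micStream).connect(destination);
  return destination.stream; // feed this to createAudioMeter(...)
}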
+
+function startRecognition() {
+  if (recognition) {
+    try {
+      recognition.stop();
+    } catch (error) {
+      console.warn('Failed to stop previous recognition:', error);
+    }
+  }
+
+  recognition = new webkitSpeechRecognition();
+  recognition.continuous = true;
+  recognition.interimResults = true;
+
+  recognition.onresult = function(event) {
+    let finalTranscript = '';
+    for (let i = event.resultIndex; i < event.results.length; ++i) {
+      if (event.results[i].isFinal) {
+        finalTranscript += event.results[i][0].transcript;
+      }
+    }
+
+    if (finalTranscript.trim() !== '') {
+      chrome.runtime.sendMessage({action: 'updateTranscript', transcript: finalTranscript});
+      updateOverlay('transcript', finalTranscript);
+      chrome.runtime.sendMessage({action: 'getAIResponse', question: finalTranscript});
+    }
+  };
+
+  recognition.onerror = function(event) {
+    console.error('Speech recognition error:', event.error);
+    if (event.error === 'no-speech' && isCapturing) {
+      try {
+        recognition.start();
+      } catch (error) {
+        console.warn('Failed to restart recognition after no-speech:', error);
+      }
+      return;
+    }
+    chrome.runtime.sendMessage({action: 'updateAIResponse', response: `Speech recognition error: ${event.error}. Please try again.`});
+    updateOverlay('response', `Speech recognition error: ${event.error}. Please try again.`);
+  };
+
+  recognition.onend = function() {
+    if (!isCapturing) return;
+    try {
+      recognition.start();
+    } catch (error) {
+      console.warn('Failed to restart recognition:', error);
+    }
+  };
+
+  recognition.start();
+}
+
+function ensureSpeechRecognitionAvailable() {
+  const SpeechRecognition = window.SpeechRecognition || window.webkitSpeechRecognition;
+  if (!SpeechRecognition) {
+    const message = 'Speech recognition is not available in this browser context. Use mic mode in Chrome or enable speech recognition.';
+    chrome.runtime.sendMessage({ action: 'updateAIResponse', response: message });
+    updateOverlay('response', message);
+    overlayListening = false;
+    updateOverlayIndicator();
+    return false;
+  }
+  return true;
+}
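The onerror/onend handlers above form a keep-alive loop: Chrome's SpeechRecognition stops itself after silence, so it is restarted while isCapturing is true. If immediate restarts ever thrash, a short delay is a common refinement — a sketch under that assumption, not part of the commit:

// Restart with a small delay to avoid a tight start/stop loop.
recognition.onend = function() {
  if (!isCapturing) return;
  setTimeout(() => {
    try { recognition.start(); } catch (error) { console.warn('Restart failed:', error); }
  }, 250); // 250 ms is an arbitrary illustrative delay
};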
+
 function stopCapture() {
+  isCapturing = false;
+  overlayListening = false;
+  updateOverlayIndicator();
+  stopAudioMeter();
   if (mediaStream) {
     mediaStream.getTracks().forEach(track => track.stop());
   }
   if (audioContext) {
     audioContext.close();
+    audioContext = null;
   }
   if (recognition) {
     recognition.stop();
   }
 }
 
-function isQuestion(text) {
-  const questionWords = ['what', 'when', 'where', 'who', 'why', 'how'];
-  const lowerText = text.toLowerCase();
-  return questionWords.some(word => lowerText.includes(word)) || text.includes('?');
-}
+function ensureOverlay() {
+  if (overlayInitialized) return;
+  overlayInitialized = true;
+
+  if (document.getElementById('ai-interview-overlay')) {
+    return;
+  }
+
+  const style = document.createElement('style');
+  style.textContent = `
+    #ai-interview-overlay {
+      position: fixed;
+      top: 24px;
+      right: 24px;
+      width: 420px;
+      min-width: 280px;
+      min-height: 240px;
+      background: rgba(20, 20, 20, 0.35);
+      color: #f5f5f5;
+      border: 1px solid rgba(255, 255, 255, 0.15);
+      border-radius: 12px;
+      backdrop-filter: blur(10px);
+      z-index: 2147483647;
+      font-family: "Helvetica Neue", Arial, sans-serif;
+      box-shadow: 0 10px 30px rgba(0, 0, 0, 0.35);
+      user-select: none;
+      resize: both;
+      overflow: auto;
+    }
+    #ai-interview-resize {
+      position: absolute;
+      right: 6px;
+      bottom: 6px;
+      width: 14px;
+      height: 14px;
+      cursor: se-resize;
+      background: radial-gradient(circle at center, rgba(255, 255, 255, 0.8) 0 2px, transparent 2px);
+      opacity: 0.6;
+    }
+    #ai-interview-overlay.minimized #ai-interview-body {
+      display: none;
+    }
+    #ai-interview-header {
+      display: flex;
+      align-items: center;
+      justify-content: space-between;
+      padding: 10px 12px;
+      cursor: move;
+      font-weight: 600;
+      font-size: 13px;
+      letter-spacing: 0.02em;
+      border-bottom: 1px solid rgba(255, 255, 255, 0.1);
+    }
+    #ai-interview-title {
+      display: flex;
+      align-items: center;
+      gap: 8px;
+    }
+    #ai-interview-indicator {
+      width: 10px;
+      height: 10px;
+      border-radius: 50%;
+      background: rgba(255, 255, 255, 0.25);
+      box-shadow: 0 0 0 rgba(255, 255, 255, 0.3);
+    }
+    #ai-interview-indicator.active {
+      background: #41f59a;
+      animation: aiPulse 1.2s ease-in-out infinite;
+      box-shadow: 0 0 8px rgba(65, 245, 154, 0.7);
+    }
+    @keyframes aiPulse {
+      0% { transform: scale(0.9); opacity: 0.6; }
+      50% { transform: scale(1.3); opacity: 1; }
+      100% { transform: scale(0.9); opacity: 0.6; }
+    }
+    #ai-interview-controls {
+      display: flex;
+      gap: 6px;
+    }
+    .ai-interview-btn {
+      background: rgba(255, 255, 255, 0.12);
+      border: none;
+      color: #f5f5f5;
+      font-size: 12px;
+      padding: 4px 8px;
+      border-radius: 6px;
+      cursor: pointer;
+    }
+    .ai-interview-btn:hover {
+      background: rgba(255, 255, 255, 0.22);
+    }
+    #ai-interview-body {
+      padding: 12px;
+      font-size: 12px;
+      line-height: 1.4;
+    }
+    #ai-interview-mode {
+      font-size: 11px;
+      opacity: 0.8;
+      margin-bottom: 6px;
+    }
+    #ai-interview-meter {
+      height: 6px;
+      background: rgba(255, 255, 255, 0.12);
+      border-radius: 999px;
+      overflow: hidden;
+      margin-bottom: 10px;
+    }
+    #ai-interview-meter-bar {
+      height: 100%;
+      width: 0%;
+      background: linear-gradient(90deg, #41f59a, #48c5ff);
+      transition: width 80ms linear;
+    }
+    #ai-interview-transcript,
+    #ai-interview-response {
+      background: rgba(0, 0, 0, 0.35);
+      border-radius: 8px;
+      padding: 8px;
+      margin-bottom: 8px;
+      max-height: 200px;
+      overflow: auto;
+      user-select: text;
+    }
+  `;
+  document.head.appendChild(style);
+
+  const overlay = document.createElement('div');
+  overlay.id = 'ai-interview-overlay';
+  overlay.innerHTML = `
+    <div id="ai-interview-header">
+      <div id="ai-interview-title">
+        <span id="ai-interview-indicator"></span>
+        <span>AI Interview Assistant</span>
+      </div>
+      <div id="ai-interview-controls">
+        <button class="ai-interview-btn" id="ai-interview-detach">Detach</button>
+        <button class="ai-interview-btn" id="ai-interview-minimize">Minimize</button>
+        <button class="ai-interview-btn" id="ai-interview-hide">Hide</button>
+      </div>
+    </div>
+    <div id="ai-interview-body">
+      <div id="ai-interview-mode">Mode: ${activeCaptureMode}</div>
+      <div id="ai-interview-meter"><div id="ai-interview-meter-bar"></div></div>
+      <div id="ai-interview-transcript">Transcript will appear here.</div>
+      <div id="ai-interview-response">Answer will appear here.</div>
+    </div>
+    <div id="ai-interview-resize" title="Resize"></div>
+  `;
+  document.body.appendChild(overlay);
+
+  const header = overlay.querySelector('#ai-interview-header');
+  const minimizeBtn = overlay.querySelector('#ai-interview-minimize');
+  const detachBtn = overlay.querySelector('#ai-interview-detach');
+  const hideBtn = overlay.querySelector('#ai-interview-hide');
+  const resizeHandle = overlay.querySelector('#ai-interview-resize');
+
+  let isDragging = false;
+  let startX = 0;
+  let startY = 0;
+  let startLeft = 0;
+  let startTop = 0;
+
+  header.addEventListener('mousedown', (event) => {
+    isDragging = true;
+    startX = event.clientX;
+    startY = event.clientY;
+    const rect = overlay.getBoundingClientRect();
+    startLeft = rect.left;
+    startTop = rect.top;
+    overlay.style.right = 'auto';
+  });
+
+  document.addEventListener('mousemove', (event) => {
+    if (!isDragging) return;
+    const nextLeft = startLeft + (event.clientX - startX);
+    const nextTop = startTop + (event.clientY - startY);
+    overlay.style.left = `${Math.max(8, nextLeft)}px`;
+    overlay.style.top = `${Math.max(8, nextTop)}px`;
+  });
+
+  document.addEventListener('mouseup', () => {
+    isDragging = false;
+  });
+
+  resizeHandle.addEventListener('mousedown', (event) => {
+    event.preventDefault();
+    event.stopPropagation();
+    const startWidth = overlay.offsetWidth;
+    const startHeight = overlay.offsetHeight;
+    const startMouseX = event.clientX;
+    const startMouseY = event.clientY;
+
+    const onMove = (moveEvent) => {
+      const nextWidth = Math.max(280, startWidth + (moveEvent.clientX - startMouseX));
+      const nextHeight = Math.max(240, startHeight + (moveEvent.clientY - startMouseY));
+      overlay.style.width = `${nextWidth}px`;
+      overlay.style.height = `${nextHeight}px`;
+    };
+
+    const onUp = () => {
+      document.removeEventListener('mousemove', onMove);
+      document.removeEventListener('mouseup', onUp);
+    };
+
+    document.addEventListener('mousemove', onMove);
+    document.addEventListener('mouseup', onUp);
+  });
+
+  minimizeBtn.addEventListener('click', () => {
+    overlay.classList.toggle('minimized');
+    minimizeBtn.textContent = overlay.classList.contains('minimized') ? 'Expand' : 'Minimize';
+  });
+
+  detachBtn.addEventListener('click', () => {
+    chrome.runtime.sendMessage({ action: 'openAssistantWindow' });
+  });
+
+  hideBtn.addEventListener('click', () => {
+    setOverlayHidden(true);
+  });
+
+  updateOverlayIndicator();
+}
+
+function updateOverlay(type, text) {
+  ensureOverlay();
+  applyOverlayHiddenState();
+  const modeEl = document.getElementById('ai-interview-mode');
+  if (modeEl) {
+    modeEl.textContent = `Mode: ${activeCaptureMode}`;
+  }
+  if (type === 'transcript') {
+    const transcriptEl = document.getElementById('ai-interview-transcript');
+    if (transcriptEl) transcriptEl.textContent = text;
+  }
+  if (type === 'response') {
+    const responseEl = document.getElementById('ai-interview-response');
+    if (responseEl) responseEl.textContent = text;
+  }
+}
+
+function updateOverlayIndicator() {
+  const indicator = document.getElementById('ai-interview-indicator');
+  if (!indicator) return;
+  if (overlayListening) {
+    indicator.classList.add('active');
+  } else {
+    indicator.classList.remove('active');
+  }
+
+  if (!overlayListening) {
+    const bar = document.getElementById('ai-interview-meter-bar');
+    if (bar) bar.style.width = '0%';
+  }
+}
+
+function setOverlayHidden(hidden) {
+  overlayHidden = hidden;
+  applyOverlayHiddenState();
+}
+
+function applyOverlayHiddenState() {
+  const overlay = document.getElementById('ai-interview-overlay');
+  if (!overlay) return;
+  overlay.style.display = overlayHidden ? 'none' : '';
+}
+
+function createAudioMeter(stream) {
+  if (!audioContext) {
+    audioContext = new AudioContext();
+  }
+  stopAudioMeter();
+
+  analyserNode = audioContext.createAnalyser();
+  analyserNode.fftSize = 512;
+  analyserNode.smoothingTimeConstant = 0.8;
+
+  meterSource = audioContext.createMediaStreamSource(stream);
+  meterSource.connect(analyserNode);
+
+  const data = new Uint8Array(analyserNode.fftSize);
+
+  const tick = () => {
+    if (!analyserNode) return;
+    analyserNode.getByteTimeDomainData(data);
+    let sum = 0;
+    for (let i = 0; i < data.length; i++) {
+      const v = (data[i] - 128) / 128;
+      sum += v * v;
+    }
+    const rms = Math.sqrt(sum / data.length);
+    const normalized = Math.min(1, rms * 2.5);
+    const bar = document.getElementById('ai-interview-meter-bar');
+    if (bar) {
+      bar.style.width = `${Math.round(normalized * 100)}%`;
+    }
+    meterRaf = requestAnimationFrame(tick);
+  };
+
+  meterRaf = requestAnimationFrame(tick);
+}
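The meter math: each sample is recentred from the byte range [0, 255] to [-1, 1], RMS is taken over the analyser window, and the ×2.5 gain just makes typical speech fill more of the bar before clamping at 1. The same computation in isolation:

// RMS of one analyser frame, normalized exactly like the meter above.
function frameLevel(data) {            // data: Uint8Array from getByteTimeDomainData
  let sum = 0;
  for (let i = 0; i < data.length; i++) {
    const v = (data[i] - 128) / 128;   // byte -> [-1, 1]
    sum += v * v;
  }
  return Math.min(1, Math.sqrt(sum / data.length) * 2.5);
}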
+
+function stopAudioMeter() {
+  if (meterRaf) {
+    cancelAnimationFrame(meterRaf);
+    meterRaf = null;
+  }
+  if (meterSource) {
+    try {
+      meterSource.disconnect();
+    } catch (error) {
+      console.warn('Failed to disconnect meter source:', error);
+    }
+    meterSource = null;
+  }
+  if (analyserNode) {
+    try {
+      analyserNode.disconnect();
+    } catch (error) {
+      console.warn('Failed to disconnect analyser:', error);
+    }
+    analyserNode = null;
+  }
 }
manifest.json
@@ -5,6 +5,7 @@
   "description": "Monitors audio and answers questions in real-time using AI",
   "permissions": [
     "tabCapture",
+    "audioCapture",
     "storage",
     "activeTab",
     "scripting",
@@ -16,6 +17,7 @@
     "https://api.openai.com/*",
     "https://api.anthropic.com/*",
     "https://generativelanguage.googleapis.com/*",
+    "https://api.deepseek.com/*",
     "http://localhost:11434/*"
   ],
   "background": {
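Two notes on this manifest change. First, `audioCapture` is historically an Apps-platform permission; extension mic access is granted through the getUserMedia prompt, so this entry is probably inert apart from a possible "unrecognized permission" warning. Second, the streamId consumed by startCapture/startMixedCapture has to be minted by the background script via the existing tabCapture permission — assumed wiring, since background.js is not part of this diff:

// background.js (assumed): mint a stream ID for a tab and hand it to its content script.
chrome.tabCapture.getMediaStreamId({ targetTabId: tabId }, (streamId) => {
  if (chrome.runtime.lastError) {
    console.warn('tabCapture failed:', chrome.runtime.lastError.message);
    return;
  }
  chrome.tabs.sendMessage(tabId, { action: 'startCapture', streamId });
});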
sidepanel.html
@@ -16,6 +16,7 @@
         <option value="openai">OpenAI (GPT)</option>
         <option value="anthropic">Anthropic (Claude)</option>
         <option value="google">Google (Gemini)</option>
+        <option value="deepseek">DeepSeek</option>
         <option value="ollama">Ollama (Local)</option>
       </select>
     </div>
@@ -33,6 +34,50 @@
       <div id="apiKeyStatus" class="status-message"></div>
     </div>
+
+    <div class="capture-mode-section">
+      <label for="captureModeSelect">Capture mode:</label>
+      <select id="captureModeSelect">
+        <option value="tab">Tab-only (default)</option>
+        <option value="mic">Mic-only</option>
+        <option value="mixed">Mixed (experimental)</option>
+      </select>
+    </div>
+
+    <div class="active-state-section">
+      <label>
+        <input type="checkbox" id="extensionActiveToggle" checked>
+        Extension Active
+      </label>
+    </div>
+
+    <div class="overlay-visibility-section">
+      <label>
+        <input type="checkbox" id="autoOpenAssistantWindow">
+        Auto-open assistant window after Start Listening
+      </label>
+      <div class="status-message" id="sidepanelTip">
+        Tip: You can close this side panel while listening; the in-tab overlay will keep running.
+      </div>
+    </div>
+
+    <div class="mic-monitor-section">
+      <h4>🎙️ Mic Monitor</h4>
+      <label for="inputDeviceSelect">Input device:</label>
+      <select id="inputDeviceSelect"></select>
+      <div id="inputDeviceStatus" class="status-message"></div>
+      <div class="mic-level">
+        <div class="mic-level-bar" id="micLevelBar"></div>
+      </div>
+      <button id="startMicMonitor">Enable Mic Monitor</button>
+    </div>
+
+    <div class="performance-section">
+      <label>
+        <input type="checkbox" id="speedModeToggle">
+        Optimize for speed (faster, shorter answers)
+      </label>
+    </div>
+
     <div class="context-section">
       <h4>📄 Context Management</h4>
       <div class="context-tabs">
@@ -80,6 +125,11 @@
     </div>
 
     <button id="toggleListening">Start Listening</button>
+    <button id="showOverlay">Show Overlay</button>
+    <button id="requestMicPermission">Request Microphone Permission</button>
+    <button id="grantTabAccess">Grant Tab Access</button>
+    <div id="micPermissionStatus" class="status-message"></div>
+    <div id="tabAccessStatus" class="status-message"></div>
     <div id="transcript"></div>
     <div id="aiResponse"></div>
   </div>
sidepanel.js
@@ -7,6 +7,19 @@ document.addEventListener('DOMContentLoaded', function() {
   const aiProviderSelect = document.getElementById('aiProvider');
   const modelSelect = document.getElementById('modelSelect');
   const apiKeyStatus = document.getElementById('apiKeyStatus');
+  const requestMicPermissionBtn = document.getElementById('requestMicPermission');
+  const showOverlayBtn = document.getElementById('showOverlay');
+  const micPermissionStatus = document.getElementById('micPermissionStatus');
+  const grantTabAccessBtn = document.getElementById('grantTabAccess');
+  const tabAccessStatus = document.getElementById('tabAccessStatus');
+  const speedModeToggle = document.getElementById('speedModeToggle');
+  const captureModeSelect = document.getElementById('captureModeSelect');
+  const autoOpenAssistantWindowToggle = document.getElementById('autoOpenAssistantWindow');
+  const extensionActiveToggle = document.getElementById('extensionActiveToggle');
+  const inputDeviceSelect = document.getElementById('inputDeviceSelect');
+  const inputDeviceStatus = document.getElementById('inputDeviceStatus');
+  const micLevelBar = document.getElementById('micLevelBar');
+  const startMicMonitorBtn = document.getElementById('startMicMonitor');
+
   // Context management elements
   const contextFileInput = document.getElementById('contextFileInput');
@@ -28,6 +41,11 @@ document.addEventListener('DOMContentLoaded', function() {
 
   let isListening = false;
   let remoteServerActive = false;
+  let micMonitorStream = null;
+  let micMonitorCtx = null;
+  let micMonitorSource = null;
+  let micMonitorAnalyser = null;
+  let micMonitorRaf = null;
 
   // AI Provider configurations
   const aiProviders = {
@@ -52,6 +70,13 @@ document.addEventListener('DOMContentLoaded', function() {
       apiKeyPlaceholder: 'Enter your Google AI API Key',
       requiresKey: true
     },
+    deepseek: {
+      name: 'DeepSeek',
+      models: ['deepseek-chat', 'deepseek-reasoner'],
+      defaultModel: 'deepseek-chat',
+      apiKeyPlaceholder: 'Enter your DeepSeek API Key',
+      requiresKey: true
+    },
     ollama: {
       name: 'Ollama',
       models: ['llama3.2', 'llama3.1', 'mistral', 'codellama', 'phi3'],
@@ -60,15 +85,26 @@ document.addEventListener('DOMContentLoaded', function() {
       requiresKey: false
     }
   };
+  const modelCache = {};
+  const modelFetchState = {};
 
   // Load saved settings
-  chrome.storage.sync.get(['aiProvider', 'selectedModel', 'apiKeys'], (result) => {
+  chrome.storage.sync.get(['aiProvider', 'selectedModel', 'apiKeys', 'speedMode', 'captureMode', 'autoOpenAssistantWindow', 'inputDeviceId', 'extensionActive'], (result) => {
     const savedProvider = result.aiProvider || 'openai';
     const savedModel = result.selectedModel || aiProviders[savedProvider].defaultModel;
     const savedApiKeys = result.apiKeys || {};
+    const speedMode = Boolean(result.speedMode);
+    const captureMode = result.captureMode || 'tab';
+    const autoOpenAssistantWindow = Boolean(result.autoOpenAssistantWindow);
+    const savedInputDeviceId = result.inputDeviceId || '';
+    const extensionActive = result.extensionActive !== false;
+
     aiProviderSelect.value = savedProvider;
-    updateModelOptions(savedProvider, savedModel);
+    if (captureModeSelect) captureModeSelect.value = captureMode;
+    if (speedModeToggle) speedModeToggle.checked = speedMode;
+    if (autoOpenAssistantWindowToggle) autoOpenAssistantWindowToggle.checked = autoOpenAssistantWindow;
+    if (extensionActiveToggle) extensionActiveToggle.checked = extensionActive;
+    refreshModelOptions(savedProvider, savedModel, savedApiKeys[savedProvider]);
     updateApiKeyInput(savedProvider);
 
     if (savedApiKeys[savedProvider] && aiProviders[savedProvider].requiresKey) {
@@ -77,14 +113,18 @@ document.addEventListener('DOMContentLoaded', function() {
       saveApiKeyButton.textContent = 'API Key Saved';
       saveApiKeyButton.disabled = true;
     }
+
+    if (inputDeviceSelect) {
+      loadInputDevices(savedInputDeviceId);
+    }
   });
 
   // Load and display saved contexts
   loadContexts();
 
   // Helper functions
-  function updateModelOptions(provider, selectedModel = null) {
-    const models = aiProviders[provider].models;
+  function updateModelOptions(provider, selectedModel = null, modelsOverride = null) {
+    const models = modelsOverride || modelCache[provider] || aiProviders[provider].models;
     modelSelect.innerHTML = '';
 
     models.forEach(model => {
@@ -117,6 +157,286 @@ document.addEventListener('DOMContentLoaded', function() {
     apiKeyStatus.className = `status-message ${type}`;
   }
 
+  function updateMicPermissionStatus(message, type) {
+    if (!micPermissionStatus) return;
+    micPermissionStatus.textContent = message;
+    micPermissionStatus.className = `status-message ${type}`;
+  }
+
+  function updateInputDeviceStatus(message, type) {
+    if (!inputDeviceStatus) return;
+    inputDeviceStatus.textContent = message;
+    inputDeviceStatus.className = `status-message ${type}`;
+  }
+
+  function updateTabAccessStatus(message, type) {
+    if (!tabAccessStatus) return;
+    tabAccessStatus.textContent = message;
+    tabAccessStatus.className = `status-message ${type}`;
+  }
+
+  function pickModel(provider, preferredModel, models) {
+    if (preferredModel && models.includes(preferredModel)) {
+      return preferredModel;
+    }
+    if (aiProviders[provider].defaultModel && models.includes(aiProviders[provider].defaultModel)) {
+      return aiProviders[provider].defaultModel;
+    }
+    return models[0];
+  }
+
+  async function refreshModelOptions(provider, preferredModel, apiKey) {
+    if (modelFetchState[provider]) {
+      return;
+    }
+
+    modelSelect.disabled = true;
+    modelSelect.innerHTML = '<option>Loading models...</option>';
+    modelFetchState[provider] = true;
+
+    try {
+      let models = null;
+
+      if (provider === 'ollama') {
+        models = await fetchOllamaModels();
+      } else if (aiProviders[provider].requiresKey && apiKey) {
+        models = await fetchRemoteModels(provider, apiKey);
+      }
+
+      if (models && models.length) {
+        modelCache[provider] = models;
+      }
+    } catch (error) {
+      console.warn(`Failed to fetch models for ${provider}:`, error);
+    } finally {
+      modelFetchState[provider] = false;
+      const availableModels = modelCache[provider] || aiProviders[provider].models;
+      const selected = pickModel(provider, preferredModel, availableModels);
+      updateModelOptions(provider, selected, availableModels);
+      chrome.storage.sync.set({ selectedModel: selected });
+      modelSelect.disabled = false;
+    }
+  }
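The fetch is guarded per provider and always falls back to the hard-coded list in the finally block, so a bad key or an offline Ollama never leaves the dropdown empty. For example, invoking it without a key simply re-renders the defaults:

// No API key: the remote fetch is skipped, modelCache stays empty, defaults render.
refreshModelOptions('deepseek', 'deepseek-reasoner', undefined);
// -> dropdown shows ['deepseek-chat', 'deepseek-reasoner'] with 'deepseek-reasoner' selected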
+
+  async function loadInputDevices(preferredDeviceId = '') {
+    if (!navigator.mediaDevices || !navigator.mediaDevices.enumerateDevices) {
+      updateInputDeviceStatus('Device enumeration is not supported in this browser.', 'error');
+      return;
+    }
+
+    try {
+      const devices = await navigator.mediaDevices.enumerateDevices();
+      const inputs = devices.filter(device => device.kind === 'audioinput');
+      const hasLabels = inputs.some(device => device.label);
+      inputDeviceSelect.innerHTML = '';
+
+      if (!inputs.length) {
+        const option = document.createElement('option');
+        option.value = '';
+        option.textContent = 'No input devices found';
+        inputDeviceSelect.appendChild(option);
+        inputDeviceSelect.disabled = true;
+        updateInputDeviceStatus('No microphone devices detected.', 'error');
+        return;
+      }
+
+      inputs.forEach((device, index) => {
+        const option = document.createElement('option');
+        option.value = device.deviceId;
+        option.textContent = device.label || `Microphone ${index + 1}`;
+        if (device.deviceId === preferredDeviceId) {
+          option.selected = true;
+        }
+        inputDeviceSelect.appendChild(option);
+      });
+
+      inputDeviceSelect.disabled = false;
+      const selectedOption = inputDeviceSelect.options[inputDeviceSelect.selectedIndex];
+      if (!hasLabels) {
+        updateInputDeviceStatus('Grant mic permission to see device names.', '');
+      } else {
+        updateInputDeviceStatus(`Selected: ${selectedOption ? selectedOption.textContent : 'Unknown'}`, '');
+      }
+    } catch (error) {
+      console.warn('Failed to enumerate devices:', error);
+      updateInputDeviceStatus('Failed to list input devices.', 'error');
+    }
+  }
+
+  function stopMicMonitor() {
+    if (micMonitorRaf) {
+      cancelAnimationFrame(micMonitorRaf);
+      micMonitorRaf = null;
+    }
+    if (micMonitorSource) {
+      try {
+        micMonitorSource.disconnect();
+      } catch (error) {
+        console.warn('Failed to disconnect mic monitor source:', error);
+      }
+      micMonitorSource = null;
+    }
+    if (micMonitorAnalyser) {
+      try {
+        micMonitorAnalyser.disconnect();
+      } catch (error) {
+        console.warn('Failed to disconnect mic monitor analyser:', error);
+      }
+      micMonitorAnalyser = null;
+    }
+    if (micMonitorCtx) {
+      micMonitorCtx.close();
+      micMonitorCtx = null;
+    }
+    if (micMonitorStream) {
+      micMonitorStream.getTracks().forEach(track => track.stop());
+      micMonitorStream = null;
+    }
+    if (micLevelBar) {
+      micLevelBar.style.width = '0%';
+    }
+  }
+
+  async function startMicMonitor() {
+    if (!micLevelBar || !inputDeviceSelect) return;
+    stopMicMonitor();
+    updateInputDeviceStatus('Requesting microphone access...', '');
+
+    const deviceId = inputDeviceSelect.value;
+    const constraints = deviceId ? { audio: { deviceId: { exact: deviceId } } } : { audio: true };
+
+    try {
+      micMonitorStream = await navigator.mediaDevices.getUserMedia(constraints);
+      micMonitorCtx = new AudioContext();
+      micMonitorAnalyser = micMonitorCtx.createAnalyser();
+      micMonitorAnalyser.fftSize = 512;
+      micMonitorAnalyser.smoothingTimeConstant = 0.8;
+      micMonitorSource = micMonitorCtx.createMediaStreamSource(micMonitorStream);
+      micMonitorSource.connect(micMonitorAnalyser);
+
+      const data = new Uint8Array(micMonitorAnalyser.fftSize);
+      const tick = () => {
+        if (!micMonitorAnalyser) return;
+        micMonitorAnalyser.getByteTimeDomainData(data);
+        let sum = 0;
+        for (let i = 0; i < data.length; i++) {
+          const v = (data[i] - 128) / 128;
+          sum += v * v;
+        }
+        const rms = Math.sqrt(sum / data.length);
+        const normalized = Math.min(1, rms * 2.5);
+        micLevelBar.style.width = `${Math.round(normalized * 100)}%`;
+        micMonitorRaf = requestAnimationFrame(tick);
+      };
+
+      micMonitorRaf = requestAnimationFrame(tick);
+      const selectedOption = inputDeviceSelect.options[inputDeviceSelect.selectedIndex];
+      updateInputDeviceStatus(`Mic monitor active: ${selectedOption ? selectedOption.textContent : 'Unknown'}`, 'success');
+    } catch (error) {
+      console.warn('Failed to start mic monitor:', error);
+      if (error && error.name === 'NotAllowedError') {
+        updateInputDeviceStatus('Microphone permission denied. Click "Request Microphone Permission".', 'error');
+      } else if (error && error.name === 'NotFoundError') {
+        updateInputDeviceStatus('No microphone found for the selected device.', 'error');
+      } else {
+        updateInputDeviceStatus('Microphone permission denied or unavailable.', 'error');
+      }
+    }
+  }
+
+  async function fetchRemoteModels(provider, apiKey) {
+    if (provider === 'openai') {
+      return fetchOpenAIModels(apiKey);
+    }
+    if (provider === 'anthropic') {
+      return fetchAnthropicModels(apiKey);
+    }
+    if (provider === 'google') {
+      return fetchGoogleModels(apiKey);
+    }
+    if (provider === 'deepseek') {
+      return fetchDeepSeekModels(apiKey);
+    }
+    return [];
+  }
+
+  async function fetchOpenAIModels(apiKey) {
+    const response = await fetch('https://api.openai.com/v1/models', {
+      headers: {
+        'Authorization': `Bearer ${apiKey}`
+      }
+    });
+    if (!response.ok) {
+      throw new Error(`OpenAI models request failed: ${response.status}`);
+    }
+    const data = await response.json();
+    const ids = (data.data || []).map((item) => item.id).filter(Boolean);
+    const chatModels = ids.filter((id) => (
+      id.startsWith('gpt-') ||
+      id.startsWith('o1') ||
+      id.startsWith('o3') ||
+      id.startsWith('o4') ||
+      id.startsWith('o5')
+    ));
+    const models = chatModels.length ? chatModels : ids;
+    return Array.from(new Set(models)).sort();
+  }
+
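The prefix filter is a heuristic: /v1/models also returns embedding, speech and moderation ids, and anything that slips past the prefixes still lands in the dropdown via the `ids` fallback. A stricter variant could exclude known non-chat families first — a sketch, not part of the commit:

// Drop ids that are clearly not chat models before applying the prefix check.
const nonChat = /(embedding|whisper|tts|moderation|dall-e)/i;
const candidates = ids.filter((id) => !nonChat.test(id));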
+  async function fetchAnthropicModels(apiKey) {
+    const response = await fetch('https://api.anthropic.com/v1/models', {
+      headers: {
+        'Content-Type': 'application/json',
+        'x-api-key': apiKey,
+        'anthropic-version': '2023-06-01'
+      }
+    });
+    if (!response.ok) {
+      throw new Error(`Anthropic models request failed: ${response.status}`);
+    }
+    const data = await response.json();
+    const items = data.data || data.models || [];
+    const ids = items.map((item) => item.id || item.name).filter(Boolean);
+    return Array.from(new Set(ids)).sort();
+  }
+
+  async function fetchGoogleModels(apiKey) {
+    const response = await fetch(`https://generativelanguage.googleapis.com/v1beta/models?key=${apiKey}`);
+    if (!response.ok) {
+      throw new Error(`Google models request failed: ${response.status}`);
+    }
+    const data = await response.json();
+    const models = (data.models || [])
+      .filter((model) => (model.supportedGenerationMethods || []).includes('generateContent'))
+      .map((model) => model.name || '')
+      .map((name) => name.replace(/^models\//, ''))
+      .filter(Boolean);
+    return Array.from(new Set(models)).sort();
+  }
+
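Stripping the `models/` prefix keeps the dropdown tidy, but the Generative Language API expects the full `models/{name}` path again at call time, so whichever code issues the request has to re-prefix. The round trip, per the v1beta REST API shape:

// Re-attach the prefix when building the generateContent URL.
const url = `https://generativelanguage.googleapis.com/v1beta/models/${model}:generateContent?key=${apiKey}`;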
+  async function fetchDeepSeekModels(apiKey) {
+    const response = await fetch('https://api.deepseek.com/v1/models', {
+      headers: {
+        'Authorization': `Bearer ${apiKey}`
+      }
+    });
+    if (!response.ok) {
+      throw new Error(`DeepSeek models request failed: ${response.status}`);
+    }
+    const data = await response.json();
+    const ids = (data.data || []).map((item) => item.id).filter(Boolean);
+    return Array.from(new Set(ids)).sort();
+  }
+
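DeepSeek exposes an OpenAI-compatible surface, which is why this function mirrors fetchOpenAIModels: same Bearer header, same `{ data: [{ id }] }` response shape. Chat completions follow the same pattern — a sketch against the documented /chat/completions endpoint (`question` stands in for whatever transcript text the caller has):

// Minimal chat call against the OpenAI-compatible DeepSeek API.
const res = await fetch('https://api.deepseek.com/v1/chat/completions', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json', 'Authorization': `Bearer ${apiKey}` },
  body: JSON.stringify({ model: 'deepseek-chat', messages: [{ role: 'user', content: question }] })
});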
+  async function fetchOllamaModels() {
+    const response = await fetch('http://localhost:11434/api/tags');
+    if (!response.ok) {
+      throw new Error(`Ollama models request failed: ${response.status}`);
+    }
+    const data = await response.json();
+    const models = (data.models || []).map((model) => model.name).filter(Boolean);
+    return Array.from(new Set(models)).sort();
+  }
+
   // Context Management Functions
   async function loadContexts() {
     const result = await chrome.storage.local.get('contexts');
@@ -326,15 +646,8 @@ document.addEventListener('DOMContentLoaded', function() {
   // Event listeners
   aiProviderSelect.addEventListener('change', function() {
     const selectedProvider = this.value;
-    updateModelOptions(selectedProvider);
     updateApiKeyInput(selectedProvider);
-
-    // Save provider selection
-    chrome.storage.sync.set({
-      aiProvider: selectedProvider,
-      selectedModel: aiProviders[selectedProvider].defaultModel
-    });
-
     // Load saved API key for this provider
     chrome.storage.sync.get('apiKeys', (result) => {
       const apiKeys = result.apiKeys || {};
@@ -348,6 +661,13 @@ document.addEventListener('DOMContentLoaded', function() {
         saveApiKeyButton.textContent = 'Save API Key';
         saveApiKeyButton.disabled = !aiProviders[selectedProvider].requiresKey;
       }
+
+      refreshModelOptions(selectedProvider, aiProviders[selectedProvider].defaultModel, apiKeys[selectedProvider]);
+    });
+
+    // Save provider selection
+    chrome.storage.sync.set({
+      aiProvider: selectedProvider
     });
   });
 
@@ -355,6 +675,57 @@ document.addEventListener('DOMContentLoaded', function() {
     chrome.storage.sync.set({ selectedModel: this.value });
   });
 
+  if (captureModeSelect) {
+    captureModeSelect.addEventListener('change', function() {
+      chrome.storage.sync.set({ captureMode: this.value });
+    });
+  }
+
+  if (autoOpenAssistantWindowToggle) {
+    autoOpenAssistantWindowToggle.addEventListener('change', function() {
+      chrome.storage.sync.set({ autoOpenAssistantWindow: this.checked });
+    });
+  }
+
+  if (extensionActiveToggle) {
+    extensionActiveToggle.addEventListener('change', function() {
+      const isActive = this.checked;
+      chrome.runtime.sendMessage({ action: 'setActiveState', isActive }, (response) => {
+        if (chrome.runtime.lastError) {
+          return;
+        }
+        if (response && response.success) {
+          extensionActiveToggle.checked = response.isActive;
+        }
+      });
+    });
+  }
+
+  if (inputDeviceSelect) {
+    inputDeviceSelect.addEventListener('change', function() {
+      const deviceId = this.value;
+      chrome.storage.sync.set({ inputDeviceId: deviceId });
+      const selectedOption = inputDeviceSelect.options[inputDeviceSelect.selectedIndex];
+      updateInputDeviceStatus(`Selected: ${selectedOption ? selectedOption.textContent : 'Unknown'}`, '');
+      if (micMonitorStream) {
+        startMicMonitor();
+      }
+    });
+  }
+
+  if (startMicMonitorBtn) {
+    startMicMonitorBtn.addEventListener('click', function() {
+      startMicMonitor();
+    });
+    updateInputDeviceStatus('Click "Enable Mic Monitor" to see live input level.', '');
+  }
+
+  if (speedModeToggle) {
+    speedModeToggle.addEventListener('change', function() {
+      chrome.storage.sync.set({ speedMode: this.checked });
+    });
+  }
+
   apiKeyInput.addEventListener('input', function() {
     if (aiProviders[aiProviderSelect.value].requiresKey) {
       saveApiKeyButton.textContent = 'Save API Key';
@@ -381,6 +752,7 @@ document.addEventListener('DOMContentLoaded', function() {
       saveApiKeyButton.textContent = 'API Key Saved';
       saveApiKeyButton.disabled = true;
       updateApiKeyStatus('API Key Saved', 'success');
+      refreshModelOptions(provider, modelSelect.value, apiKey);
     });
   });
 } else {
@@ -445,17 +817,30 @@ document.addEventListener('DOMContentLoaded', function() {
     toggleButton.textContent = isListening ? 'Stop Listening' : 'Start Listening';
 
     if (isListening) {
+      if (extensionActiveToggle && !extensionActiveToggle.checked) {
+        isListening = false;
+        toggleButton.textContent = 'Start Listening';
+        aiResponseDiv.textContent = 'Extension is inactive. Turn it on to start listening.';
+        return;
+      }
       // Send current AI configuration with start listening
       const currentProvider = aiProviderSelect.value;
       const currentModel = modelSelect.value;
+      const captureMode = captureModeSelect ? captureModeSelect.value : 'tab';
+
       chrome.runtime.sendMessage({
         action: 'startListening',
         aiProvider: currentProvider,
-        model: currentModel
+        model: currentModel,
+        captureMode: captureMode
       });
       transcriptDiv.textContent = 'Listening for questions...';
       aiResponseDiv.textContent = `Using ${aiProviders[currentProvider].name} (${currentModel}). The answer will appear here.`;
+      chrome.storage.sync.get(['autoOpenAssistantWindow'], (result) => {
+        if (result.autoOpenAssistantWindow) {
+          chrome.runtime.sendMessage({ action: 'openAssistantWindow' });
+        }
+      });
     } else {
       chrome.runtime.sendMessage({action: 'stopListening'});
       transcriptDiv.textContent = '';
@@ -463,6 +848,63 @@ document.addEventListener('DOMContentLoaded', function() {
     }
   });
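The side panel now sends captureMode along with startListening, which implies the background script fans out to the three content-script entry points. That routing is not shown in this diff; a sketch of the assumed shape:

// background.js (assumed): route startListening to the matching capture function.
chrome.runtime.onMessage.addListener((msg, sender) => {
  if (msg.action !== 'startListening') return;
  if (msg.captureMode === 'mic') {
    sendToActiveTab({ action: 'startMicCapture' });
  } else {
    chrome.tabCapture.getMediaStreamId({}, (streamId) => {
      const action = msg.captureMode === 'mixed' ? 'startMixedCapture' : 'startCapture';
      sendToActiveTab({ action, streamId });
    });
  }
});
// sendToActiveTab is a hypothetical helper wrapping chrome.tabs.query + sendMessage.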
 
+  if (showOverlayBtn) {
+    showOverlayBtn.addEventListener('click', function() {
+      chrome.tabs.query({ active: true, currentWindow: true }, (tabs) => {
+        if (chrome.runtime.lastError || !tabs.length) {
+          return;
+        }
+        chrome.tabs.sendMessage(tabs[0].id, { action: 'showOverlay' });
+      });
+    });
+  }
+
+  if (requestMicPermissionBtn) {
+    requestMicPermissionBtn.addEventListener('click', function() {
+      updateMicPermissionStatus('Requesting microphone permission...', '');
+      navigator.mediaDevices.getUserMedia({ audio: true }).then((stream) => {
+        stream.getTracks().forEach(track => track.stop());
+        updateMicPermissionStatus('Microphone permission granted.', 'success');
+        if (inputDeviceSelect) {
+          loadInputDevices(inputDeviceSelect.value);
+        }
+      }).catch((error) => {
+        if (error && error.name === 'NotAllowedError') {
+          updateMicPermissionStatus('Microphone permission denied. Please allow access for the extension.', 'error');
+        } else if (error && error.name === 'NotFoundError') {
+          updateMicPermissionStatus('No microphone found.', 'error');
+        } else {
+          updateMicPermissionStatus(error && error.message ? error.message : 'Failed to request microphone permission.', 'error');
+        }
+      });
+    });
+  }
+
+  if (grantTabAccessBtn) {
+    grantTabAccessBtn.addEventListener('click', function() {
+      updateTabAccessStatus('Requesting tab access...', '');
+      chrome.runtime.sendMessage({ action: 'grantTabAccess' }, (response) => {
+        if (chrome.runtime.lastError) {
+          updateTabAccessStatus('Failed to request tab access. Click the extension icon on the target tab.', 'error');
+          return;
+        }
+        if (response && response.success) {
+          updateTabAccessStatus('Tab access granted. You can start listening now.', 'success');
+        } else {
+          updateTabAccessStatus(response && response.error ? response.error : 'Click the extension icon on the target tab to grant access.', 'error');
+        }
+      });
+    });
+  }
+
+  if (navigator.mediaDevices && navigator.mediaDevices.addEventListener) {
+    navigator.mediaDevices.addEventListener('devicechange', () => {
+      if (inputDeviceSelect) {
+        loadInputDevices(inputDeviceSelect.value);
+      }
+    });
+  }
+
   chrome.runtime.onMessage.addListener(function(request, sender, sendResponse) {
     if (request.action === 'updateTranscript') {
       transcriptDiv.textContent = request.transcript;
style.css
@@ -37,17 +37,27 @@ input[type="password"], select {
   background-color: white;
 }
 
-.ai-provider-section, .model-selection, .api-key-section {
+.ai-provider-section, .model-selection, .api-key-section, .capture-mode-section, .performance-section, .overlay-visibility-section, .mic-monitor-section, .active-state-section {
   margin-bottom: 20px;
 }
 
-.ai-provider-section label, .model-selection label {
+.ai-provider-section label, .model-selection label, .capture-mode-section label {
   display: block;
   margin-bottom: 5px;
   font-weight: 600;
   color: #2c3e50;
 }
 
+.performance-section label,
+.overlay-visibility-section label,
+.active-state-section label {
+  display: flex;
+  align-items: center;
+  gap: 8px;
+  font-weight: 600;
+  color: #2c3e50;
+}
+
 .status-message {
   font-size: 12px;
   margin-top: 5px;
@@ -74,6 +84,35 @@ input[type="password"], select {
   background-color: #f8fafc;
 }
 
+.mic-monitor-section {
+  border: 1px solid #e0e6ed;
+  border-radius: 6px;
+  padding: 15px;
+  background-color: #f8fafc;
+}
+
+.mic-monitor-section h4 {
+  margin: 0 0 12px 0;
+  color: #2c3e50;
+  font-size: 16px;
+}
+
+.mic-level {
+  height: 8px;
+  width: 100%;
+  border-radius: 999px;
+  background-color: #e8eef5;
+  overflow: hidden;
+  margin: 6px 0 12px 0;
+}
+
+.mic-level-bar {
+  height: 100%;
+  width: 0%;
+  background: linear-gradient(90deg, #2ecc71, #1abc9c);
+  transition: width 80ms linear;
+}
+
 .context-section h4, .device-section h4 {
   margin: 0 0 15px 0;
   color: #2c3e50;