Compare commits

21 Commits

Author SHA1 Message Date
16b3fdabed Add dependency documentation + verification script
Some checks failed
Build and Push Docker Images / Build Backend Image (push) Successful in 14m27s
Build and Push Docker Images / Build Frontend Image (push) Failing after 11m42s
- DEPENDENCIES.md: complete documentation of all dependencies
  * Python backend (requirements.txt)
  * System dependencies (apt packages)
  * Node.js frontend (package.json)
  * Essentia models (28 MB)
  * Required environment variables

- check_dependencies.py: script to verify the installation (import-check sketch below)
  * Tests every Python import
  * Prints a ✅/❌ status line for each package
  * Useful for debugging installation issues
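
A minimal sketch of the import-check pattern the script uses (the full script appears in the diff below); the example package name is only an illustration:

```python
def check_import(module_name, package_name=None):
    """Try to import a module and print a pass/fail line for its pip package."""
    package = package_name or module_name
    try:
        __import__(module_name)
        print(f"✅ {package}")
        return True
    except ImportError as e:
        print(f"❌ {package}: {e}")
        return False

# Module name and pip package name can differ:
check_import("email_validator", "email-validator")
```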

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-12-26 13:03:17 +01:00
eeee538fcd Fix: add email-validator for Pydantic EmailStr
Error: ImportError: email-validator is not installed
Cause: Pydantic's EmailStr requires email-validator
Fix: added email-validator==2.1.0 to requirements.txt
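
A minimal reproduction of the failure, assuming a plain Pydantic v2 install without the email extra:

```python
from pydantic import BaseModel, EmailStr

class LoginRequest(BaseModel):
    email: EmailStr  # needs the optional email-validator package
    password: str

# Without email-validator installed, building/using this model raises:
#   ImportError: email-validator is not installed
# With email-validator==2.1.0 in requirements.txt, validation works:
print(LoginRequest(email="admin@example.com", password="secret"))
```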

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-12-26 13:00:11 +01:00
c366ca5ce0 Include Essentia models in repo + optimize CI/CD
All checks were successful
Build and Push Docker Images / Build Backend Image (push) Successful in 13m24s
Build and Push Docker Images / Build Frontend Image (push) Successful in 4m56s
Problem: the Essentia models (28 MB) were downloaded on every CI/CD build
- Slows down builds (~30 seconds of download)
- Consumes bandwidth
- Point of failure if the Essentia server is down

Solution:
- Commit the 6 models to backend/models/
- Remove the "Download Essentia models" steps from the Gitea workflow
- Remove backend/models/*.pb and *.json from .gitignore

Models included (~28 MB total; loading sketch below):
- discogs-effnet-bs64-1.pb (18 MB) - embedding model
- genre_discogs400-discogs-effnet-1.pb (2 MB) - genre classifier
- genre_discogs400-discogs-effnet-1.json (15 KB) - genre metadata
- mtg_jamendo_moodtheme-discogs-effnet-1.pb (2.6 MB) - mood
- mtg_jamendo_instrument-discogs-effnet-1.pb (2.6 MB) - instruments
- mtg_jamendo_genre-discogs-effnet-1.pb (2.7 MB) - genre alt

Benefits:
- Faster CI/CD builds (~30s saved)
- No external dependency on the Essentia server
- Models versioned alongside the code
- Offline-friendly repo
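
For reference, a sketch of how these models are typically loaded with essentia-tensorflow, based on the model metadata included in this diff; the audio file path and the embedding output tensor name are assumptions:

```python
from essentia.standard import MonoLoader, TensorflowPredictEffnetDiscogs, TensorflowPredict2D

# The genre metadata JSON specifies 16 kHz input and a 400-class sigmoid output.
audio = MonoLoader(filename="track.mp3", sampleRate=16000)()

embeddings = TensorflowPredictEffnetDiscogs(
    graphFilename="backend/models/discogs-effnet-bs64-1.pb",
    output="PartitionedCall:1",  # embedding output (assumed)
)(audio)

genre_activations = TensorflowPredict2D(
    graphFilename="backend/models/genre_discogs400-discogs-effnet-1.pb",
    input="serving_default_model_Placeholder",
    output="PartitionedCall:0",  # per the model metadata
)(embeddings)
```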

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-12-26 10:11:03 +01:00
774cb799a2 Add full JWT authentication (app 100% protected)
Backend:
- New auth.py module with JWT and password handling (token sketch below)
- Endpoint /api/auth/login (public)
- Endpoint /api/auth/me (protected)
- ALL API endpoints protected by require_auth
- Env variables: ADMIN_EMAIL, ADMIN_PASSWORD, JWT_SECRET_KEY
- Dependencies: python-jose, passlib

Frontend:
- Login page (/login)
- AuthGuard component for automatic redirection
- Axios interceptor: adds the JWT token to every request
- 401 error handling: automatic redirect to /login
- Logout button in the header
- Token stored in localStorage

Configuration:
- .env.example updated with the auth variables
- Admin credentials configurable via env

Security:
- No public endpoints (except /api/auth/login and /health)
- Configurable JWT expiration (24h by default)
- Password stored in clear text in env (to be improved with hashing in prod)
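
A minimal sketch of the token flow with python-jose, matching the HS256 setup described here; the secret and claims are placeholders:

```python
from datetime import datetime, timedelta
from jose import JWTError, jwt

SECRET_KEY = "change-me-in-production"  # JWT_SECRET_KEY from the environment

# Issue a token (what /api/auth/login does after checking ADMIN_EMAIL/ADMIN_PASSWORD)
token = jwt.encode(
    {"sub": "admin@example.com", "role": "admin",
     "exp": datetime.utcnow() + timedelta(hours=24)},
    SECRET_KEY, algorithm="HS256",
)

# Verify it on each protected request (what require_auth does via the Bearer header)
try:
    payload = jwt.decode(token, SECRET_KEY, algorithms=["HS256"])
    print(payload["sub"], payload["role"])
except JWTError:
    print("401: could not validate credentials")
```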

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-12-26 10:05:36 +01:00
6ae861ff54 Update .gitea/workflows/docker.yml
All checks were successful
Build and Push Docker Images / Build Backend Image (push) Successful in 12m28s
Build and Push Docker Images / Build Frontend Image (push) Successful in 58s
Restore the backend build to a working state
2025-12-26 00:15:54 +01:00
b74c6b0b40 Fix infinite scan: exclude transcoded and waveforms directories
All checks were successful
Build and Push Docker Images / Build Frontend Image (push) Successful in 57s
Problem: the scanner walked ALL directories, including the generated ones
(transcoded/ and waveforms/), causing:
1. An infinite loop: scan original → create transcoded → re-scan transcoded
2. Segfaults: attempts to transcode files that were already transcoded
3. Duplicates in the database

Solution:
- library.py: exclude transcoded, waveforms, .transcoded, .waveforms
- scanner.py: same exclusion in the CLI

Technique: mutate dirs[:] inside os.walk() to skip those directories.
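
The dirs[:] pruning in question, as applied in library.py and scanner.py (the root path is the mounted /audio library):

```python
import os

EXCLUDED_DIRS = {"transcoded", "waveforms", ".transcoded", ".waveforms"}

for root, dirs, files in os.walk("/audio"):
    # Mutating dirs in place tells os.walk not to descend into generated folders,
    # which breaks the scan -> transcode -> re-scan loop.
    dirs[:] = [d for d in dirs if d not in EXCLUDED_DIRS]
    for name in files:
        print(os.path.join(root, name))
```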

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-12-26 00:08:13 +01:00
4d8fa57ab2 Fix all API calls to use getApiUrl() instead of process.env
All checks were successful
Build and Push Docker Images / Build Frontend Image (push) Successful in 3m31s
Problem: the previous commit only fixed api.ts, but AudioPlayer
and page.tsx still read process.env.NEXT_PUBLIC_API_URL directly,
bypassing the runtime config.

Files fixed:
1. lib/api.ts:
   - Export getApiUrl() for external use

2. app/page.tsx:
   - Import getApiUrl
   - /api/library/scan: process.env → getApiUrl()
   - /api/library/scan/status: process.env → getApiUrl()

3. components/AudioPlayer.tsx:
   - Import getApiUrl
   - /api/audio/waveform: process.env → getApiUrl()
   - /api/audio/stream: process.env → getApiUrl()
   - /api/audio/download: process.env → getApiUrl()

ALL API calls now use the runtime config
(window.__RUNTIME_CONFIG__) on the client side.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-12-24 10:54:38 +01:00
36652ea2cc Fix API URL configuration for production
All checks were successful
Build and Push Docker Images / Build Frontend Image (push) Successful in 3m59s
Problem: the frontend used localhost:8001 instead of the production URL
because NEXT_PUBLIC_API_URL was evaluated at build time rather than at runtime.

Changes:
1. Frontend (api.ts):
   - Replace the static apiClient with a dynamic getApiClient()
   - Every call creates an axios instance with the runtime URL
   - getStreamUrl/getDownloadUrl use getApiUrl() instead of API_BASE_URL
   - Remove the unused default export of apiClient

2. Docker Compose:
   - Set NEXT_PUBLIC_API_URL=https://api.audioclassifier.benoitsz.com
   - Simplify the config (drop the fallback)

The runtime config (window.__RUNTIME_CONFIG__) now works correctly
because it is evaluated on every API call on the client side.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-12-24 10:12:38 +01:00
3b236d6af4 Merge pull request 'Fix localhost en prod' (#4) from prochains-changements into main
All checks were successful
Build and Push Docker Images / Build Frontend Image (push) Successful in 5m45s
Reviewed-on: #4
2025-12-23 15:34:34 +01:00
04603cd5e9 Fix localhost en prod
Summary of changes
I implemented a runtime configuration for the frontend that lets you change the API URL without rebuilding the Docker image. Here is what was done:
📝 Modified files:
frontend/Dockerfile - add the config generation script at startup
frontend/lib/api.ts - read the config from window.__RUNTIME_CONFIG__
frontend/app/layout.tsx - load the config.js script
docker-compose.yml - use the environment variable
.env.example - document the variable
DEPLOYMENT.md - document the runtime configuration
📄 New files:
frontend/generate-config.sh - config generation script
frontend/public/config.js - config file (placeholder)
frontend/README.md - frontend documentation
🚀 To fix the problem in production:
On your server, edit the .env file:

# Public API URL (used by the browser)
NEXT_PUBLIC_API_URL=https://audioclassifier.benoitsz.com:8001

# CORS must accept requests from the frontend
CORS_ORIGINS=https://audioclassifier.benoitsz.com,https://audioclassifier.benoitsz.com:3000
Then:

# Pull the latest changes
git pull

# Rebuild the frontend image (only once)
# Either via Gitea Actions by pushing to main
# Or manually:
# docker build -t git.benoitsz.com/benoit/audio-classifier-frontend:dev -f frontend/Dockerfile .
# docker push git.benoitsz.com/benoit/audio-classifier-frontend:dev

# Restart the containers
docker-compose down
docker-compose up -d
Advantages:
- No rebuild needed after the first deployment
- Flexible configuration via environment variables
- Works in every environment (dev, staging, prod)
- Tested and working locally
The frontend will automatically generate the correct configuration file at container startup, using your server's URL.
2025-12-23 15:33:52 +01:00
64ba7f9006 Merge pull request 'Fix CORS' (#3) from prochains-changements into main
All checks were successful
Build and Push Docker Images / Build Backend Image (push) Successful in 12m50s
Build and Push Docker Images / Build Frontend Image (push) Successful in 7m10s
Reviewed-on: #3
2025-12-23 14:34:05 +01:00
cc2f1d0051 Fix CORS 2025-12-23 14:33:25 +01:00
169a759b57 Fix backend build in Gitea
All checks were successful
Build and Push Docker Images / Build Backend Image (push) Successful in 27m9s
Build and Push Docker Images / Build Frontend Image (push) Successful in 56s
2025-12-23 13:27:50 +01:00
88db8cc9c8 Fix backend build from Gitea 2025-12-23 13:27:33 +01:00
3e225b158f Fix build and actions
Some checks failed
Build and Push Docker Images / Build Backend Image (push) Failing after 37s
Build and Push Docker Images / Build Frontend Image (push) Has been cancelled
2025-12-23 13:23:07 +01:00
8ec8b1aa42 Merge branch 'main' of https://git.benoitsz.com/benoit/Audio-Classifier
Some checks failed
Build and Push Docker Images / Build Backend Image (push) Failing after 38s
Build and Push Docker Images / Build Frontend Image (push) Failing after 2m6s
2025-12-23 13:10:37 +01:00
e3d85f4775 Merge branch 'Backend'
Merge Backend
2025-12-23 13:08:43 +01:00
2a0d022e37 Fix Actions with qwen
Some checks failed
Build and Push Docker Images / Build Backend Image (push) Failing after 37s
Build and Push Docker Images / Build Frontend Image (push) Failing after 46s
2025-12-23 12:10:51 +01:00
5fb56a636f Fix Gitea Actions
Some checks failed
Build and Push Docker Images / Build Backend Image (push) Failing after 38s
Build and Push Docker Images / Build Frontend Image (push) Failing after 46s
2025-12-23 12:03:55 +01:00
721f7b51f7 Add .gitea/workflows/docker.yml
Some checks failed
Build and Push Docker Images / Build Backend Image (push) Failing after 38s
Build and Push Docker Images / Build Frontend Image (push) Failing after 3m15s
2025-12-23 11:24:24 +01:00
54086236c6 Merge pull request 'Backend' (#1) from Backend into main
Reviewed-on: #1
2025-12-23 10:58:10 +01:00
38 changed files with 1724 additions and 91 deletions

View File

@@ -10,7 +10,8 @@
"Bash(curl:*)",
"Bash(docker logs:*)",
"Bash(docker exec:*)",
"Bash(ls:*)"
"Bash(ls:*)",
"Bash(docker build:*)"
]
}
}

View File

@@ -5,7 +5,9 @@ POSTGRES_PASSWORD=audio_password
POSTGRES_DB=audio_classifier
# Backend API
CORS_ORIGINS=http://localhost:3000,http://127.0.0.1:3000
# Use "*" to allow all origins (recommended for development/local deployment)
# Or specify comma-separated URLs for production: http://yourdomain.com,https://yourdomain.com
CORS_ORIGINS=*
API_HOST=0.0.0.0
API_PORT=8000
@@ -15,5 +17,14 @@ ANALYSIS_NUM_WORKERS=4
ESSENTIA_MODELS_PATH=/app/models
AUDIO_LIBRARY_PATH=/path/to/your/audio/library
# Authentication
ADMIN_EMAIL=admin@example.com
ADMIN_PASSWORD=changeme
JWT_SECRET_KEY=your-super-secret-jwt-key-change-this-in-production
JWT_EXPIRATION_HOURS=24
# Frontend
NEXT_PUBLIC_API_URL=http://localhost:8000
# API URL accessed by the browser (use port 8001 since backend is mapped to 8001)
# For production on a remote server, set this to your server's public URL
# Example: NEXT_PUBLIC_API_URL=http://yourserver.com:8001
NEXT_PUBLIC_API_URL=http://localhost:8001

122
.gitea/workflows/docker.yml Normal file
View File

@@ -0,0 +1,122 @@
name: Build and Push Docker Images
on:
push:
branches:
- main
tags:
- 'v*.*.*'
env:
REGISTRY: git.benoitsz.com
IMAGE_BACKEND: audio-classifier-backend
IMAGE_FRONTEND: audio-classifier-frontend
jobs:
build-backend:
name: Build Backend Image
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Login to Gitea Container Registry
uses: docker/login-action@v3
with:
registry: ${{ env.REGISTRY }}
username: ${{ gitea.actor }}
password: ${{ secrets.REGISTRY_TOKEN }}
- name: Determine version
id: version
run: |
if [[ "${{ gitea.ref }}" == refs/tags/v* ]]; then
echo "VERSION=${GITEA_REF#refs/tags/}" >> $GITHUB_OUTPUT
else
echo "VERSION=dev-$(git rev-parse --short HEAD)" >> $GITHUB_OUTPUT
fi
- name: Extract metadata
id: meta
uses: docker/metadata-action@v5
with:
images: ${{ env.REGISTRY }}/${{ gitea.repository_owner }}/${{ env.IMAGE_BACKEND }}
tags: |
type=semver,pattern={{version}}
type=semver,pattern={{major}}.{{minor}}
type=raw,value=latest,enable=${{ startsWith(gitea.ref, 'refs/tags/v') }}
type=raw,value=dev,enable=${{ gitea.ref == 'refs/heads/main' }}
type=sha,prefix=dev-,format=short,enable=${{ gitea.ref == 'refs/heads/main' }}
- name: Build and push backend
uses: docker/build-push-action@v5
with:
context: .
file: ./backend/Dockerfile
push: true
build-args: |
VERSION=${{ steps.version.outputs.VERSION }}
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}
cache-from: type=registry,ref=${{ env.REGISTRY }}/${{ gitea.repository_owner }}/${{ env.IMAGE_BACKEND }}:buildcache
cache-to: type=registry,ref=${{ env.REGISTRY }}/${{ gitea.repository_owner }}/${{ env.IMAGE_BACKEND }}:buildcache,mode=max
build-frontend:
name: Build Frontend Image
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Login to Gitea Container Registry
uses: docker/login-action@v3
with:
registry: ${{ env.REGISTRY }}
username: ${{ gitea.actor }}
password: ${{ secrets.REGISTRY_TOKEN }}
- name: Determine version
id: version
run: |
if [[ "${{ gitea.ref }}" == refs/tags/v* ]]; then
echo "VERSION=${GITEA_REF#refs/tags/}" >> $GITHUB_OUTPUT
else
echo "VERSION=dev-$(git rev-parse --short HEAD)" >> $GITHUB_OUTPUT
fi
- name: Extract metadata
id: meta
uses: docker/metadata-action@v5
with:
images: ${{ env.REGISTRY }}/${{ gitea.repository_owner }}/${{ env.IMAGE_FRONTEND }}
tags: |
type=semver,pattern={{version}}
type=semver,pattern={{major}}.{{minor}}
type=raw,value=latest,enable=${{ startsWith(gitea.ref, 'refs/tags/v') }}
type=raw,value=dev,enable=${{ gitea.ref == 'refs/heads/main' }}
type=sha,prefix=dev-,format=short,enable=${{ gitea.ref == 'refs/heads/main' }}
- name: Build and push frontend
uses: docker/build-push-action@v5
with:
context: .
file: ./frontend/Dockerfile
push: true
build-args: |
VERSION=${{ steps.version.outputs.VERSION }}
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}
cache-from: type=registry,ref=${{ env.REGISTRY }}/${{ gitea.repository_owner }}/${{ env.IMAGE_FRONTEND }}:buildcache
cache-to: type=registry,ref=${{ env.REGISTRY }}/${{ gitea.repository_owner }}/${{ env.IMAGE_FRONTEND }}:buildcache,mode=max

4
.gitignore vendored
View File

@@ -75,10 +75,6 @@ yarn-error.log*
# Docker
postgres_data/
# Essentia models (large files, download separately)
backend/models/*.pb
backend/models/*.json
# Audio analysis cache
*.peaks.json
.audio_cache/

140
DEPENDENCIES.md Normal file
View File

@@ -0,0 +1,140 @@
# Dépendances du projet
## Backend Python (requirements.txt)
### Web Framework
- `fastapi==0.109.0` - Framework web moderne
- `uvicorn[standard]==0.27.0` - Serveur ASGI
- `python-multipart==0.0.6` - Support formulaires multipart
### Database
- `sqlalchemy==2.0.25` - ORM
- `psycopg2-binary==2.9.9` - Driver PostgreSQL
- `pgvector==0.2.4` - Extension vecteurs PostgreSQL
- `alembic==1.13.1` - Migrations de base de données
### Audio Processing
- `librosa==0.10.1` - Analyse audio
- `soundfile==0.12.1` - Lecture/écriture fichiers audio
- `audioread==3.0.1` - Décodage formats audio
- `mutagen==1.47.0` - Métadonnées ID3
### Machine Learning
- `essentia-tensorflow` - Classification genre/mood/instruments (installé via Dockerfile)
- `numpy==1.24.3` - Calcul numérique
- `scipy==1.11.4` - Calcul scientifique
### Configuration & Validation
- `pydantic==2.5.3` - Validation de données
- `pydantic-settings==2.1.0` - Configuration via env vars
- `python-dotenv==1.0.0` - Chargement fichier .env
- `email-validator==2.1.0` - Validation emails (requis par Pydantic EmailStr)
### Authentication
- `python-jose[cryptography]==3.3.0` - JWT tokens
- `passlib[bcrypt]==1.7.4` - Hashing passwords
### Utilities
- `aiofiles==23.2.1` - I/O fichiers asynchrones
- `httpx==0.26.0` - Client HTTP asynchrone
## Dépendances Système (Dockerfile)
### Requis pour le backend
```bash
apt-get install -y \
ffmpeg # Transcodage audio (MP3, etc.)
libsndfile1 # Lecture formats audio
gcc g++ gfortran # Compilation packages Python
libopenblas-dev # Algèbre linéaire optimisée
liblapack-dev # Routines algèbre linéaire
libfftw3-dev # Transformées de Fourier rapides
libavcodec-dev # Codecs audio/vidéo
libavformat-dev # Formats conteneurs
libavutil-dev # Utilitaires FFmpeg
libswresample-dev # Resampling audio
libsamplerate0-dev # Conversion taux d'échantillonnage
libtag1-dev # Métadonnées audio
libchromaprint-dev # Audio fingerprinting
```
## Frontend (package.json)
### Framework
- `next@15.5.6` - Framework React
- `react@19.0.0` - Bibliothèque UI
- `react-dom@19.0.0` - Rendu React
### State Management & Data Fetching
- `@tanstack/react-query@5.62.11` - Gestion état serveur
- `axios@1.7.9` - Client HTTP
### UI & Styling
- `tailwindcss@3.4.17` - Framework CSS utility-first
### Types
- `typescript@5.7.2` - Typage statique
- `@types/react@19.0.1`
- `@types/node@22.10.1`
## Modèles Essentia (inclus dans le repo)
Total: ~28 MB
- `discogs-effnet-bs64-1.pb` (18 MB) - Modèle d'embedding
- `genre_discogs400-discogs-effnet-1.pb` (2 MB) - Classification genre
- `genre_discogs400-discogs-effnet-1.json` (15 KB) - Métadonnées genres
- `mtg_jamendo_moodtheme-discogs-effnet-1.pb` (2.6 MB) - Classification mood
- `mtg_jamendo_instrument-discogs-effnet-1.pb` (2.6 MB) - Classification instruments
- `mtg_jamendo_genre-discogs-effnet-1.pb` (2.7 MB) - Classification genre (alternatif)
## Vérification des dépendances
### Backend
```bash
cd backend
python check_dependencies.py
```
### Build Docker
```bash
# Backend
docker build -t audio-classifier-backend -f backend/Dockerfile .
# Frontend
docker build -t audio-classifier-frontend -f frontend/Dockerfile .
```
## Notes de compatibilité
- **Python**: 3.9 (requis pour essentia-tensorflow)
- **Architecture**: amd64 (meilleure compatibilité Essentia)
- **Node.js**: 20+ (pour Next.js 15)
- **PostgreSQL**: 16+ avec extension pgvector
## Installation locale
### Backend
```bash
cd backend
python -m venv venv
source venv/bin/activate # Windows: venv\Scripts\activate
pip install -r requirements.txt
pip install essentia-tensorflow
```
### Frontend
```bash
cd frontend
npm install
```
## Variables d'environnement requises
Voir `.env.example` pour la liste complète des variables nécessaires.
### Critiques
- `DATABASE_URL` - Connexion PostgreSQL
- `ADMIN_EMAIL` - Email admin
- `ADMIN_PASSWORD` - Mot de passe admin
- `JWT_SECRET_KEY` - Secret pour JWT (générer avec `openssl rand -hex 32`)

View File

@@ -14,7 +14,7 @@ Le système est **100% autonome** - aucune action manuelle requise ! Les modèle
1. **Cloner le projet** :
```bash
git clone <votre-repo>
git clone https://git.benoitsz.com/benoit/Audio-Classifier.git
cd Audio-Classifier
```
@@ -36,6 +36,8 @@ docker-compose up -d
C'est tout ! 🎉
**Note** : Les images Docker sont automatiquement téléchargées depuis git.benoitsz.com. Aucun build nécessaire !
### Premier Scan
1. Ouvrir http://localhost:3000
@@ -202,13 +204,20 @@ cd Audio-Classifier
# Chemin vers musique
AUDIO_LIBRARY_PATH=/mnt/musique
# Domaine public
CORS_ORIGINS=http://votre-domaine.com,https://votre-domaine.com
# URL publique de l'API (IMPORTANT pour le frontend)
# Cette URL est utilisée par le navigateur pour accéder à l'API
# Remplacer par votre domaine ou IP publique + port 8001
NEXT_PUBLIC_API_URL=https://votre-serveur.com:8001
# Domaine public pour CORS (doit inclure l'URL du frontend)
CORS_ORIGINS=https://votre-domaine.com,https://votre-domaine.com:3000
# Credentials BDD (sécurisés !)
POSTGRES_PASSWORD=motdepasse_fort_aleatoire
```
**Important :** Le frontend utilise maintenant une configuration **runtime**, ce qui signifie que vous pouvez changer `NEXT_PUBLIC_API_URL` dans le fichier `.env` et redémarrer les containers sans avoir à rebuilder les images.
4. **Démarrer** :
```bash
docker-compose up -d

View File

@@ -41,8 +41,8 @@ Outil de classification audio automatique capable d'indexer et analyser des bibl
```bash
# 1. Cloner le projet
git clone <repo>
cd audio-classifier
git clone https://git.benoitsz.com/benoit/Audio-Classifier.git
cd Audio-Classifier
# 2. Configurer le chemin audio (optionnel)
echo "AUDIO_LIBRARY_PATH=/chemin/vers/votre/musique" > .env
@@ -53,6 +53,8 @@ docker-compose up -d
**C'est tout !** 🎉
Les images Docker sont automatiquement téléchargées depuis le registry Gitea.
- Frontend : http://localhost:3000
- API : http://localhost:8001
- API Docs : http://localhost:8001/docs
@@ -66,13 +68,26 @@ docker-compose up -d
### ✨ Particularités
- **Aucun téléchargement manuel** : Les modèles Essentia (28 MB) sont inclus dans l'image Docker
- **Images pré-construites** : Téléchargées automatiquement depuis git.benoitsz.com
- **Modèles inclus** : Les modèles Essentia (28 MB) sont intégrés dans l'image
- **Aucune configuration** : Tout fonctionne out-of-the-box
- **Transcodage automatique** : MP3 128kbps créés pour streaming rapide
- **Waveforms pré-calculées** : Chargement instantané
📖 **Documentation complète** : Voir [DEPLOYMENT.md](DEPLOYMENT.md)
### 🛠 Build local (développement)
Si vous voulez builder les images localement, les modèles Essentia doivent être présents dans `backend/models/` (28 MB).
```bash
# Build avec docker-compose
docker-compose -f docker-compose.build.yml build
docker-compose -f docker-compose.build.yml up -d
```
**Note** : Les modèles Essentia (`.pb`, 28 MB) ne sont pas versionnés dans Git. Le workflow CI/CD les télécharge automatiquement depuis essentia.upf.edu pendant le build.
## 📖 Utilisation
### Scanner un dossier

View File

@@ -32,7 +32,7 @@ WORKDIR /app
RUN pip install --no-cache-dir --upgrade pip setuptools wheel
# Copy requirements
COPY requirements.txt .
COPY backend/requirements.txt .
# Install Python dependencies in stages for better caching
# Using versions compatible with Python 3.9
@@ -45,11 +45,11 @@ RUN pip install --no-cache-dir essentia-tensorflow
RUN pip install --no-cache-dir -r requirements.txt
# Copy application code
COPY src/ ./src/
COPY alembic.ini .
COPY backend/src/ ./src/
COPY backend/alembic.ini .
# Copy Essentia models into image
COPY models/ ./models/
# Copy Essentia models into image (28 MB total)
COPY backend/models/ ./models/
RUN ls -lh /app/models
# Expose port

View File

@@ -0,0 +1,76 @@
#!/usr/bin/env python3
"""Check all required dependencies are installed."""
import sys
def check_import(module_name, package_name=None):
"""Try to import a module and report status."""
package = package_name or module_name
try:
__import__(module_name)
print(f"{package}")
return True
except ImportError as e:
print(f"{package}: {e}")
return False
def main():
"""Check all dependencies."""
print("🔍 Checking Python dependencies...\n")
dependencies = [
# Web Framework
("fastapi", "fastapi"),
("uvicorn", "uvicorn"),
("multipart", "python-multipart"),
# Database
("sqlalchemy", "sqlalchemy"),
("psycopg2", "psycopg2-binary"),
("pgvector.sqlalchemy", "pgvector"),
("alembic", "alembic"),
# Audio Processing
("librosa", "librosa"),
("soundfile", "soundfile"),
("audioread", "audioread"),
("mutagen", "mutagen"),
# Scientific
("numpy", "numpy"),
("scipy", "scipy"),
# Configuration
("pydantic", "pydantic"),
("pydantic_settings", "pydantic-settings"),
("dotenv", "python-dotenv"),
("email_validator", "email-validator"),
# Authentication
("jose", "python-jose"),
("passlib", "passlib"),
# Utilities
("aiofiles", "aiofiles"),
("httpx", "httpx"),
# Essentia (optional)
("essentia.standard", "essentia-tensorflow"),
]
all_ok = True
for module, package in dependencies:
if not check_import(module, package):
all_ok = False
print("\n" + "="*50)
if all_ok:
print("✅ All dependencies installed!")
return 0
else:
print("❌ Some dependencies are missing")
print("\nInstall missing dependencies with:")
print(" pip install -r requirements.txt")
return 1
if __name__ == "__main__":
sys.exit(main())

52
backend/models/README.md Normal file
View File

@@ -0,0 +1,52 @@
# Essentia Models
Ce dossier contient les modèles pré-entraînés Essentia-TensorFlow pour la classification audio (28 MB total).
## Modèles requis
Les fichiers suivants sont nécessaires pour le fonctionnement de l'application :
1. **discogs-effnet-bs64-1.pb** (18 MB) - Embedding model
2. **genre_discogs400-discogs-effnet-1.pb** (2 MB) - Genre classifier
3. **genre_discogs400-discogs-effnet-1.json** (15 KB) - Genre metadata
4. **mtg_jamendo_moodtheme-discogs-effnet-1.pb** (2.7 MB) - Mood classifier
5. **mtg_jamendo_instrument-discogs-effnet-1.pb** (2.6 MB) - Instrument classifier
6. **mtg_jamendo_genre-discogs-effnet-1.pb** (2.7 MB) - Alternative genre classifier
## Téléchargement automatique
**Pour les utilisateurs** : Les modèles sont déjà inclus dans les images Docker depuis le registry `git.benoitsz.com`. Aucune action nécessaire.
**Pour le CI/CD** : Les modèles sont téléchargés automatiquement depuis essentia.upf.edu pendant le build (voir `.gitea/workflows/docker.yml`).
**Pour le développement local** : Si vous avez besoin de builder localement, vous devez avoir les modèles dans ce dossier. Ils ne sont pas versionnés dans Git car ils pèsent 28 MB.
### Téléchargement manuel (si nécessaire)
```bash
cd backend/models
# Embedding model (18 MB)
curl -L -O https://essentia.upf.edu/models/feature-extractors/discogs-effnet/discogs-effnet-bs64-1.pb
# Genre classifier (2 MB)
curl -L -O https://essentia.upf.edu/models/classification-heads/genre_discogs400/genre_discogs400-discogs-effnet-1.pb
curl -L -O https://essentia.upf.edu/models/classification-heads/genre_discogs400/genre_discogs400-discogs-effnet-1.json
# Mood classifier (2.7 MB)
curl -L -O https://essentia.upf.edu/models/classification-heads/mtg_jamendo_moodtheme/mtg_jamendo_moodtheme-discogs-effnet-1.pb
# Instrument classifier (2.6 MB)
curl -L -O https://essentia.upf.edu/models/classification-heads/mtg_jamendo_instrument/mtg_jamendo_instrument-discogs-effnet-1.pb
# Alternative genre classifier (2.7 MB)
curl -L -O https://essentia.upf.edu/models/classification-heads/mtg_jamendo_genre/mtg_jamendo_genre-discogs-effnet-1.pb
```
## Source
Tous les modèles proviennent du projet Essentia : https://essentia.upf.edu/models/
## Licence
Ces modèles sont fournis par le Music Technology Group de l'Universitat Pompeu Fabra sous licence permissive pour usage académique et commercial.

Binary file not shown.

View File

@@ -0,0 +1,462 @@
{
"name": "Genre Discogs400",
"type": "Music genre classification",
"link": "https://essentia.upf.edu/models/classification-heads/genre_discogs400/genre_discogs400-discogs-effnet-1.pb",
"version": "1",
"description": "Prediction of 400 music styles in the from the Discogs taxonomy",
"author": "Pablo Alonso",
"email": "pablo.alonso@upf.edu",
"release_date": "2023-05-04",
"framework": "tensorflow",
"framework_version": "2.8.0",
"classes": [
"Blues---Boogie Woogie",
"Blues---Chicago Blues",
"Blues---Country Blues",
"Blues---Delta Blues",
"Blues---Electric Blues",
"Blues---Harmonica Blues",
"Blues---Jump Blues",
"Blues---Louisiana Blues",
"Blues---Modern Electric Blues",
"Blues---Piano Blues",
"Blues---Rhythm & Blues",
"Blues---Texas Blues",
"Brass & Military---Brass Band",
"Brass & Military---Marches",
"Brass & Military---Military",
"Children's---Educational",
"Children's---Nursery Rhymes",
"Children's---Story",
"Classical---Baroque",
"Classical---Choral",
"Classical---Classical",
"Classical---Contemporary",
"Classical---Impressionist",
"Classical---Medieval",
"Classical---Modern",
"Classical---Neo-Classical",
"Classical---Neo-Romantic",
"Classical---Opera",
"Classical---Post-Modern",
"Classical---Renaissance",
"Classical---Romantic",
"Electronic---Abstract",
"Electronic---Acid",
"Electronic---Acid House",
"Electronic---Acid Jazz",
"Electronic---Ambient",
"Electronic---Bassline",
"Electronic---Beatdown",
"Electronic---Berlin-School",
"Electronic---Big Beat",
"Electronic---Bleep",
"Electronic---Breakbeat",
"Electronic---Breakcore",
"Electronic---Breaks",
"Electronic---Broken Beat",
"Electronic---Chillwave",
"Electronic---Chiptune",
"Electronic---Dance-pop",
"Electronic---Dark Ambient",
"Electronic---Darkwave",
"Electronic---Deep House",
"Electronic---Deep Techno",
"Electronic---Disco",
"Electronic---Disco Polo",
"Electronic---Donk",
"Electronic---Downtempo",
"Electronic---Drone",
"Electronic---Drum n Bass",
"Electronic---Dub",
"Electronic---Dub Techno",
"Electronic---Dubstep",
"Electronic---Dungeon Synth",
"Electronic---EBM",
"Electronic---Electro",
"Electronic---Electro House",
"Electronic---Electroclash",
"Electronic---Euro House",
"Electronic---Euro-Disco",
"Electronic---Eurobeat",
"Electronic---Eurodance",
"Electronic---Experimental",
"Electronic---Freestyle",
"Electronic---Future Jazz",
"Electronic---Gabber",
"Electronic---Garage House",
"Electronic---Ghetto",
"Electronic---Ghetto House",
"Electronic---Glitch",
"Electronic---Goa Trance",
"Electronic---Grime",
"Electronic---Halftime",
"Electronic---Hands Up",
"Electronic---Happy Hardcore",
"Electronic---Hard House",
"Electronic---Hard Techno",
"Electronic---Hard Trance",
"Electronic---Hardcore",
"Electronic---Hardstyle",
"Electronic---Hi NRG",
"Electronic---Hip Hop",
"Electronic---Hip-House",
"Electronic---House",
"Electronic---IDM",
"Electronic---Illbient",
"Electronic---Industrial",
"Electronic---Italo House",
"Electronic---Italo-Disco",
"Electronic---Italodance",
"Electronic---Jazzdance",
"Electronic---Juke",
"Electronic---Jumpstyle",
"Electronic---Jungle",
"Electronic---Latin",
"Electronic---Leftfield",
"Electronic---Makina",
"Electronic---Minimal",
"Electronic---Minimal Techno",
"Electronic---Modern Classical",
"Electronic---Musique Concr\u00e8te",
"Electronic---Neofolk",
"Electronic---New Age",
"Electronic---New Beat",
"Electronic---New Wave",
"Electronic---Noise",
"Electronic---Nu-Disco",
"Electronic---Power Electronics",
"Electronic---Progressive Breaks",
"Electronic---Progressive House",
"Electronic---Progressive Trance",
"Electronic---Psy-Trance",
"Electronic---Rhythmic Noise",
"Electronic---Schranz",
"Electronic---Sound Collage",
"Electronic---Speed Garage",
"Electronic---Speedcore",
"Electronic---Synth-pop",
"Electronic---Synthwave",
"Electronic---Tech House",
"Electronic---Tech Trance",
"Electronic---Techno",
"Electronic---Trance",
"Electronic---Tribal",
"Electronic---Tribal House",
"Electronic---Trip Hop",
"Electronic---Tropical House",
"Electronic---UK Garage",
"Electronic---Vaporwave",
"Folk, World, & Country---African",
"Folk, World, & Country---Bluegrass",
"Folk, World, & Country---Cajun",
"Folk, World, & Country---Canzone Napoletana",
"Folk, World, & Country---Catalan Music",
"Folk, World, & Country---Celtic",
"Folk, World, & Country---Country",
"Folk, World, & Country---Fado",
"Folk, World, & Country---Flamenco",
"Folk, World, & Country---Folk",
"Folk, World, & Country---Gospel",
"Folk, World, & Country---Highlife",
"Folk, World, & Country---Hillbilly",
"Folk, World, & Country---Hindustani",
"Folk, World, & Country---Honky Tonk",
"Folk, World, & Country---Indian Classical",
"Folk, World, & Country---La\u00efk\u00f3",
"Folk, World, & Country---Nordic",
"Folk, World, & Country---Pacific",
"Folk, World, & Country---Polka",
"Folk, World, & Country---Ra\u00ef",
"Folk, World, & Country---Romani",
"Folk, World, & Country---Soukous",
"Folk, World, & Country---S\u00e9ga",
"Folk, World, & Country---Volksmusik",
"Folk, World, & Country---Zouk",
"Folk, World, & Country---\u00c9ntekhno",
"Funk / Soul---Afrobeat",
"Funk / Soul---Boogie",
"Funk / Soul---Contemporary R&B",
"Funk / Soul---Disco",
"Funk / Soul---Free Funk",
"Funk / Soul---Funk",
"Funk / Soul---Gospel",
"Funk / Soul---Neo Soul",
"Funk / Soul---New Jack Swing",
"Funk / Soul---P.Funk",
"Funk / Soul---Psychedelic",
"Funk / Soul---Rhythm & Blues",
"Funk / Soul---Soul",
"Funk / Soul---Swingbeat",
"Funk / Soul---UK Street Soul",
"Hip Hop---Bass Music",
"Hip Hop---Boom Bap",
"Hip Hop---Bounce",
"Hip Hop---Britcore",
"Hip Hop---Cloud Rap",
"Hip Hop---Conscious",
"Hip Hop---Crunk",
"Hip Hop---Cut-up/DJ",
"Hip Hop---DJ Battle Tool",
"Hip Hop---Electro",
"Hip Hop---G-Funk",
"Hip Hop---Gangsta",
"Hip Hop---Grime",
"Hip Hop---Hardcore Hip-Hop",
"Hip Hop---Horrorcore",
"Hip Hop---Instrumental",
"Hip Hop---Jazzy Hip-Hop",
"Hip Hop---Miami Bass",
"Hip Hop---Pop Rap",
"Hip Hop---Ragga HipHop",
"Hip Hop---RnB/Swing",
"Hip Hop---Screw",
"Hip Hop---Thug Rap",
"Hip Hop---Trap",
"Hip Hop---Trip Hop",
"Hip Hop---Turntablism",
"Jazz---Afro-Cuban Jazz",
"Jazz---Afrobeat",
"Jazz---Avant-garde Jazz",
"Jazz---Big Band",
"Jazz---Bop",
"Jazz---Bossa Nova",
"Jazz---Contemporary Jazz",
"Jazz---Cool Jazz",
"Jazz---Dixieland",
"Jazz---Easy Listening",
"Jazz---Free Improvisation",
"Jazz---Free Jazz",
"Jazz---Fusion",
"Jazz---Gypsy Jazz",
"Jazz---Hard Bop",
"Jazz---Jazz-Funk",
"Jazz---Jazz-Rock",
"Jazz---Latin Jazz",
"Jazz---Modal",
"Jazz---Post Bop",
"Jazz---Ragtime",
"Jazz---Smooth Jazz",
"Jazz---Soul-Jazz",
"Jazz---Space-Age",
"Jazz---Swing",
"Latin---Afro-Cuban",
"Latin---Bai\u00e3o",
"Latin---Batucada",
"Latin---Beguine",
"Latin---Bolero",
"Latin---Boogaloo",
"Latin---Bossanova",
"Latin---Cha-Cha",
"Latin---Charanga",
"Latin---Compas",
"Latin---Cubano",
"Latin---Cumbia",
"Latin---Descarga",
"Latin---Forr\u00f3",
"Latin---Guaguanc\u00f3",
"Latin---Guajira",
"Latin---Guaracha",
"Latin---MPB",
"Latin---Mambo",
"Latin---Mariachi",
"Latin---Merengue",
"Latin---Norte\u00f1o",
"Latin---Nueva Cancion",
"Latin---Pachanga",
"Latin---Porro",
"Latin---Ranchera",
"Latin---Reggaeton",
"Latin---Rumba",
"Latin---Salsa",
"Latin---Samba",
"Latin---Son",
"Latin---Son Montuno",
"Latin---Tango",
"Latin---Tejano",
"Latin---Vallenato",
"Non-Music---Audiobook",
"Non-Music---Comedy",
"Non-Music---Dialogue",
"Non-Music---Education",
"Non-Music---Field Recording",
"Non-Music---Interview",
"Non-Music---Monolog",
"Non-Music---Poetry",
"Non-Music---Political",
"Non-Music---Promotional",
"Non-Music---Radioplay",
"Non-Music---Religious",
"Non-Music---Spoken Word",
"Pop---Ballad",
"Pop---Bollywood",
"Pop---Bubblegum",
"Pop---Chanson",
"Pop---City Pop",
"Pop---Europop",
"Pop---Indie Pop",
"Pop---J-pop",
"Pop---K-pop",
"Pop---Kay\u014dkyoku",
"Pop---Light Music",
"Pop---Music Hall",
"Pop---Novelty",
"Pop---Parody",
"Pop---Schlager",
"Pop---Vocal",
"Reggae---Calypso",
"Reggae---Dancehall",
"Reggae---Dub",
"Reggae---Lovers Rock",
"Reggae---Ragga",
"Reggae---Reggae",
"Reggae---Reggae-Pop",
"Reggae---Rocksteady",
"Reggae---Roots Reggae",
"Reggae---Ska",
"Reggae---Soca",
"Rock---AOR",
"Rock---Acid Rock",
"Rock---Acoustic",
"Rock---Alternative Rock",
"Rock---Arena Rock",
"Rock---Art Rock",
"Rock---Atmospheric Black Metal",
"Rock---Avantgarde",
"Rock---Beat",
"Rock---Black Metal",
"Rock---Blues Rock",
"Rock---Brit Pop",
"Rock---Classic Rock",
"Rock---Coldwave",
"Rock---Country Rock",
"Rock---Crust",
"Rock---Death Metal",
"Rock---Deathcore",
"Rock---Deathrock",
"Rock---Depressive Black Metal",
"Rock---Doo Wop",
"Rock---Doom Metal",
"Rock---Dream Pop",
"Rock---Emo",
"Rock---Ethereal",
"Rock---Experimental",
"Rock---Folk Metal",
"Rock---Folk Rock",
"Rock---Funeral Doom Metal",
"Rock---Funk Metal",
"Rock---Garage Rock",
"Rock---Glam",
"Rock---Goregrind",
"Rock---Goth Rock",
"Rock---Gothic Metal",
"Rock---Grindcore",
"Rock---Grunge",
"Rock---Hard Rock",
"Rock---Hardcore",
"Rock---Heavy Metal",
"Rock---Indie Rock",
"Rock---Industrial",
"Rock---Krautrock",
"Rock---Lo-Fi",
"Rock---Lounge",
"Rock---Math Rock",
"Rock---Melodic Death Metal",
"Rock---Melodic Hardcore",
"Rock---Metalcore",
"Rock---Mod",
"Rock---Neofolk",
"Rock---New Wave",
"Rock---No Wave",
"Rock---Noise",
"Rock---Noisecore",
"Rock---Nu Metal",
"Rock---Oi",
"Rock---Parody",
"Rock---Pop Punk",
"Rock---Pop Rock",
"Rock---Pornogrind",
"Rock---Post Rock",
"Rock---Post-Hardcore",
"Rock---Post-Metal",
"Rock---Post-Punk",
"Rock---Power Metal",
"Rock---Power Pop",
"Rock---Power Violence",
"Rock---Prog Rock",
"Rock---Progressive Metal",
"Rock---Psychedelic Rock",
"Rock---Psychobilly",
"Rock---Pub Rock",
"Rock---Punk",
"Rock---Rock & Roll",
"Rock---Rockabilly",
"Rock---Shoegaze",
"Rock---Ska",
"Rock---Sludge Metal",
"Rock---Soft Rock",
"Rock---Southern Rock",
"Rock---Space Rock",
"Rock---Speed Metal",
"Rock---Stoner Rock",
"Rock---Surf",
"Rock---Symphonic Rock",
"Rock---Technical Death Metal",
"Rock---Thrash",
"Rock---Twist",
"Rock---Viking Metal",
"Rock---Y\u00e9-Y\u00e9",
"Stage & Screen---Musical",
"Stage & Screen---Score",
"Stage & Screen---Soundtrack",
"Stage & Screen---Theme"
],
"model_types": [
"frozen_model",
"SavedModel",
"onnx"
],
"dataset": {
"name": "Discogs-4M (unreleased)",
"citation": "In-house dataset",
"size": "4M full tracks (3.3M used)",
"metrics": {
"ROC-AUC": 0.95417,
"PR-AUC": 0.20629
}
},
"schema": {
"inputs": [
{
"name": "serving_default_model_Placeholder",
"type": "float",
"shape": [
"batch_size",
1280
]
}
],
"outputs": [
{
"name": "PartitionedCall:0",
"type": "float",
"shape": [
"batch_size",
400
],
"op": "Sigmoid",
"output_purpose": "predictions"
}
]
},
"citation": "@inproceedings{alonso2022music,\n title={Music Representation Learning Based on Editorial Metadata from Discogs},\n author={Alonso-Jim{\\'e}nez, Pablo and Serra, Xavier and Bogdanov, Dmitry},\n booktitle={Conference of the International Society for Music Information Retrieval (ISMIR)},\n year={2022}\n}",
"inference": {
"sample_rate": 16000,
"algorithm": "TensorflowPredict2D",
"embedding_model": {
"algorithm": "TensorflowPredictEffnetDiscogs",
"model_name": "discogs-effnet-bs64-1",
"link": "https://essentia.upf.edu/models/music-style-classification/discogs-effnet/discogs-effnet-bs64-1.pb"
}
}
}

Binary file not shown.

Binary file not shown.

View File

@@ -26,6 +26,11 @@ scipy==1.11.4
pydantic==2.5.3
pydantic-settings==2.1.0
python-dotenv==1.0.0
email-validator==2.1.0
# Authentication
python-jose[cryptography]==3.3.0
passlib[bcrypt]==1.7.4
# Utilities
aiofiles==23.2.1

View File

@@ -1,14 +1,15 @@
"""FastAPI main application."""
from fastapi import FastAPI
from fastapi import FastAPI, Depends
from fastapi.middleware.cors import CORSMiddleware
from contextlib import asynccontextmanager
from ..utils.config import settings
from ..utils.logging import setup_logging, get_logger
from ..models.database import engine, Base
from ..core.auth import require_auth
# Import routes
from .routes import tracks, search, audio, analyze, similar, stats, library
from .routes import tracks, search, audio, analyze, similar, stats, library, auth
# Setup logging
setup_logging()
@@ -62,13 +63,17 @@ async def health_check():
# Include routers
app.include_router(tracks.router, prefix="/api/tracks", tags=["tracks"])
app.include_router(search.router, prefix="/api/search", tags=["search"])
app.include_router(audio.router, prefix="/api/audio", tags=["audio"])
app.include_router(analyze.router, prefix="/api/analyze", tags=["analyze"])
app.include_router(similar.router, prefix="/api", tags=["similar"])
app.include_router(stats.router, prefix="/api/stats", tags=["stats"])
app.include_router(library.router, prefix="/api/library", tags=["library"])
# Auth endpoints (public - no auth required)
app.include_router(auth.router, prefix="/api/auth", tags=["auth"])
# Protected endpoints (auth required for ALL routes)
app.include_router(tracks.router, prefix="/api/tracks", tags=["tracks"], dependencies=[Depends(require_auth)])
app.include_router(search.router, prefix="/api/search", tags=["search"], dependencies=[Depends(require_auth)])
app.include_router(audio.router, prefix="/api/audio", tags=["audio"], dependencies=[Depends(require_auth)])
app.include_router(analyze.router, prefix="/api/analyze", tags=["analyze"], dependencies=[Depends(require_auth)])
app.include_router(similar.router, prefix="/api", tags=["similar"], dependencies=[Depends(require_auth)])
app.include_router(stats.router, prefix="/api/stats", tags=["stats"], dependencies=[Depends(require_auth)])
app.include_router(library.router, prefix="/api/library", tags=["library"], dependencies=[Depends(require_auth)])
@app.get("/", tags=["root"])

View File

@@ -0,0 +1,82 @@
"""Authentication endpoints."""
from datetime import timedelta
from fastapi import APIRouter, HTTPException, status, Depends
from pydantic import BaseModel, EmailStr
from ...core.auth import authenticate_user, create_access_token, get_current_user
from ...utils.config import settings
from ...utils.logging import get_logger
router = APIRouter()
logger = get_logger(__name__)
class LoginRequest(BaseModel):
"""Login request model."""
email: EmailStr
password: str
class LoginResponse(BaseModel):
"""Login response model."""
access_token: str
token_type: str = "bearer"
user: dict
class UserResponse(BaseModel):
"""User response model."""
email: str
role: str
@router.post("/login", response_model=LoginResponse)
async def login(request: LoginRequest):
"""Authenticate user and return JWT token.
Args:
request: Login credentials
Returns:
Access token and user info
Raises:
HTTPException: 401 if credentials are invalid
"""
user = authenticate_user(request.email, request.password)
if not user:
logger.warning(f"Failed login attempt for: {request.email}")
raise HTTPException(
status_code=status.HTTP_401_UNAUTHORIZED,
detail="Incorrect email or password",
headers={"WWW-Authenticate": "Bearer"},
)
# Create access token
access_token_expires = timedelta(hours=settings.JWT_EXPIRATION_HOURS)
access_token = create_access_token(
data={"sub": user["email"], "role": user["role"]},
expires_delta=access_token_expires
)
logger.info(f"User logged in: {user['email']}")
return {
"access_token": access_token,
"token_type": "bearer",
"user": user
}
@router.get("/me", response_model=UserResponse)
async def get_me(current_user: dict = Depends(get_current_user)):
"""Get current authenticated user info.
Args:
current_user: Current user from JWT token
Returns:
User information
"""
return current_user

View File

@@ -41,6 +41,9 @@ def find_audio_files(directory: str) -> list[Path]:
return []
for root, dirs, files in os.walk(directory_path):
# Skip transcoded and waveforms directories
dirs[:] = [d for d in dirs if d not in ['transcoded', 'waveforms', '.transcoded', '.waveforms']]
for file in files:
file_path = Path(root) / file
if file_path.suffix.lower() in AUDIO_EXTENSIONS:

View File

@@ -46,6 +46,9 @@ def find_audio_files(directory: str) -> List[Path]:
logger.info(f"Scanning directory: {directory}")
for root, dirs, files in os.walk(directory_path):
# Skip transcoded and waveforms directories
dirs[:] = [d for d in dirs if d not in ['transcoded', 'waveforms', '.transcoded', '.waveforms']]
for file in files:
file_path = Path(root) / file
if file_path.suffix.lower() in AUDIO_EXTENSIONS:

151
backend/src/core/auth.py Normal file
View File

@@ -0,0 +1,151 @@
"""Authentication utilities."""
from datetime import datetime, timedelta
from typing import Optional
from jose import JWTError, jwt
from passlib.context import CryptContext
from fastapi import HTTPException, status, Depends
from fastapi.security import HTTPBearer, HTTPAuthorizationCredentials
from ..utils.config import settings
from ..utils.logging import get_logger
logger = get_logger(__name__)
# Password hashing
pwd_context = CryptContext(schemes=["bcrypt"], deprecated="auto")
# HTTP Bearer for JWT
security = HTTPBearer()
def verify_password(plain_password: str, hashed_password: str) -> bool:
"""Verify a password against its hash.
Args:
plain_password: Plain text password
hashed_password: Hashed password
Returns:
True if password matches
"""
return pwd_context.verify(plain_password, hashed_password)
def get_password_hash(password: str) -> str:
"""Hash a password.
Args:
password: Plain text password
Returns:
Hashed password
"""
return pwd_context.hash(password)
def create_access_token(data: dict, expires_delta: Optional[timedelta] = None) -> str:
"""Create JWT access token.
Args:
data: Data to encode in token
expires_delta: Token expiration time
Returns:
JWT token string
"""
to_encode = data.copy()
if expires_delta:
expire = datetime.utcnow() + expires_delta
else:
expire = datetime.utcnow() + timedelta(hours=settings.JWT_EXPIRATION_HOURS)
to_encode.update({"exp": expire})
encoded_jwt = jwt.encode(to_encode, settings.JWT_SECRET_KEY, algorithm="HS256")
return encoded_jwt
def verify_token(token: str) -> dict:
"""Verify and decode JWT token.
Args:
token: JWT token string
Returns:
Decoded token payload
Raises:
HTTPException: If token is invalid
"""
try:
payload = jwt.decode(token, settings.JWT_SECRET_KEY, algorithms=["HS256"])
return payload
except JWTError as e:
logger.error(f"Token verification failed: {e}")
raise HTTPException(
status_code=status.HTTP_401_UNAUTHORIZED,
detail="Could not validate credentials",
headers={"WWW-Authenticate": "Bearer"},
)
def authenticate_user(email: str, password: str) -> Optional[dict]:
"""Authenticate user with email and password.
Args:
email: User email
password: User password
Returns:
User data if authenticated, None otherwise
"""
# Check against admin credentials from environment
if email == settings.ADMIN_EMAIL and password == settings.ADMIN_PASSWORD:
return {
"email": email,
"role": "admin"
}
return None
async def get_current_user(credentials: HTTPAuthorizationCredentials = Depends(security)) -> dict:
"""Get current authenticated user from JWT token.
Args:
credentials: HTTP Bearer credentials
Returns:
User data from token
Raises:
HTTPException: If authentication fails
"""
token = credentials.credentials
payload = verify_token(token)
email: str = payload.get("sub")
if email is None:
raise HTTPException(
status_code=status.HTTP_401_UNAUTHORIZED,
detail="Could not validate credentials",
headers={"WWW-Authenticate": "Bearer"},
)
return {
"email": email,
"role": payload.get("role", "user")
}
async def require_auth(current_user: dict = Depends(get_current_user)) -> dict:
"""Dependency to require authentication.
Args:
current_user: Current user from get_current_user
Returns:
Current user data
"""
return current_user

View File

@@ -10,7 +10,8 @@ class Settings(BaseSettings):
DATABASE_URL: str = "postgresql://audio_user:audio_password@localhost:5432/audio_classifier"
# API Configuration
CORS_ORIGINS: str = "http://localhost:3000,http://127.0.0.1:3000"
# Comma-separated list of allowed origins, or use "*" to allow all
CORS_ORIGINS: str = "*"
API_HOST: str = "0.0.0.0"
API_PORT: int = 8000
@@ -20,6 +21,12 @@ class Settings(BaseSettings):
ESSENTIA_MODELS_PATH: str = "./models"
AUDIO_LIBRARY_PATH: str = "/audio"
# Authentication
ADMIN_EMAIL: str = "admin@example.com"
ADMIN_PASSWORD: str = "changeme"
JWT_SECRET_KEY: str = "your-secret-key-change-in-production"
JWT_EXPIRATION_HOURS: int = 24
# Application
APP_NAME: str = "Audio Classifier API"
APP_VERSION: str = "1.0.0"
@@ -33,7 +40,13 @@ class Settings(BaseSettings):
@property
def cors_origins_list(self) -> List[str]:
"""Parse CORS origins string to list."""
"""Parse CORS origins string to list.
If CORS_ORIGINS is "*", allow all origins.
Otherwise, parse comma-separated list.
"""
if self.CORS_ORIGINS.strip() == "*":
return ["*"]
return [origin.strip() for origin in self.CORS_ORIGINS.split(",")]

64
docker-compose.build.yml Normal file
View File

@@ -0,0 +1,64 @@
# Docker Compose pour build local (développement)
# Usage: docker-compose -f docker-compose.build.yml build
services:
postgres:
image: pgvector/pgvector:pg16
container_name: audio_classifier_db
environment:
POSTGRES_USER: ${POSTGRES_USER:-audio_user}
POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:-audio_password}
POSTGRES_DB: ${POSTGRES_DB:-audio_classifier}
ports:
- "5433:5432"
volumes:
- postgres_data:/var/lib/postgresql/data
- ./backend/init-db.sql:/docker-entrypoint-initdb.d/init-db.sql
healthcheck:
test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER:-audio_user}"]
interval: 10s
timeout: 5s
retries: 5
restart: unless-stopped
backend:
build:
context: .
dockerfile: backend/Dockerfile
container_name: audio_classifier_api
depends_on:
postgres:
condition: service_healthy
environment:
DATABASE_URL: postgresql://${POSTGRES_USER:-audio_user}:${POSTGRES_PASSWORD:-audio_password}@postgres:5432/${POSTGRES_DB:-audio_classifier}
CORS_ORIGINS: ${CORS_ORIGINS:-*}
ANALYSIS_USE_CLAP: ${ANALYSIS_USE_CLAP:-false}
ANALYSIS_NUM_WORKERS: ${ANALYSIS_NUM_WORKERS:-4}
ESSENTIA_MODELS_PATH: /app/models
ports:
- "8001:8000"
volumes:
# Mount your audio library (read-write for transcoding and waveforms)
- ${AUDIO_LIBRARY_PATH:-./audio_samples}:/audio
restart: unless-stopped
frontend:
build:
context: .
dockerfile: frontend/Dockerfile
args:
NEXT_PUBLIC_API_URL: http://localhost:8001
container_name: audio_classifier_ui
environment:
# Use localhost:8001 because the browser (client-side) needs to access the API
# The backend is mapped to port 8001 on the host machine
NEXT_PUBLIC_API_URL: http://localhost:8001
ports:
- "3000:3000"
depends_on:
- backend
restart: unless-stopped
volumes:
postgres_data:
driver: local

View File

@@ -19,14 +19,14 @@ services:
restart: unless-stopped
backend:
build: ./backend
image: git.benoitsz.com/benoit/audio-classifier-backend:dev
container_name: audio_classifier_api
depends_on:
postgres:
condition: service_healthy
environment:
DATABASE_URL: postgresql://${POSTGRES_USER:-audio_user}:${POSTGRES_PASSWORD:-audio_password}@postgres:5432/${POSTGRES_DB:-audio_classifier}
CORS_ORIGINS: ${CORS_ORIGINS:-http://localhost:3000}
CORS_ORIGINS: ${CORS_ORIGINS:-*}
ANALYSIS_USE_CLAP: ${ANALYSIS_USE_CLAP:-false}
ANALYSIS_NUM_WORKERS: ${ANALYSIS_NUM_WORKERS:-4}
ESSENTIA_MODELS_PATH: /app/models
@@ -38,15 +38,10 @@ services:
restart: unless-stopped
frontend:
build:
context: ./frontend
args:
NEXT_PUBLIC_API_URL: http://localhost:8001
image: git.benoitsz.com/benoit/audio-classifier-frontend:dev
container_name: audio_classifier_ui
environment:
# Use localhost:8001 because the browser (client-side) needs to access the API
# The backend is mapped to port 8001 on the host machine
NEXT_PUBLIC_API_URL: http://localhost:8001
NEXT_PUBLIC_API_URL: https://api.audioclassifier.benoitsz.com
ports:
- "3000:3000"
depends_on:

View File

@@ -4,23 +4,27 @@ FROM node:20-alpine
WORKDIR /app
# Copy package files
COPY package*.json ./
COPY frontend/package*.json ./
# Install dependencies
RUN npm ci
# Copy application code
COPY . .
COPY frontend/ .
# Build argument for API URL
# Build argument for API URL (used for default build)
ARG NEXT_PUBLIC_API_URL=http://localhost:8001
ENV NEXT_PUBLIC_API_URL=${NEXT_PUBLIC_API_URL}
# Build the application
RUN npm run build
# Copy runtime config generation script
COPY frontend/generate-config.sh /app/generate-config.sh
RUN chmod +x /app/generate-config.sh
# Expose port
EXPOSE 3000
# Start the application
CMD ["npm", "start"]
# Generate runtime config and start the application
CMD ["/bin/sh", "-c", "/app/generate-config.sh && npm start"]

19
frontend/Dockerfile.dev Normal file
View File

@@ -0,0 +1,19 @@
FROM node:20-alpine
# Set working directory
WORKDIR /app
# Copy package files
COPY package*.json ./
# Debug: List files and Node.js version
RUN ls -la && node --version && npm --version
# Install dependencies with more verbose output
RUN npm install --verbose
# Expose port
EXPOSE 3000
# Start the development server
CMD ["npm", "run", "dev"]

93
frontend/README.md Normal file
View File

@@ -0,0 +1,93 @@
# Frontend - Audio Classifier
Frontend Next.js pour Audio Classifier avec configuration runtime.
## Configuration Runtime
Le frontend utilise un système de **configuration runtime** qui permet de changer l'URL de l'API sans rebuilder l'image Docker.
### Comment ça fonctionne
1. Au démarrage du container, le script `generate-config.sh` génère un fichier `/app/public/config.js`
2. Ce fichier contient l'URL de l'API basée sur la variable `NEXT_PUBLIC_API_URL`
3. Le fichier est chargé dans le navigateur via `<Script src="/config.js">`
4. Le code API lit la configuration depuis `window.__RUNTIME_CONFIG__.API_URL`
### Développement Local
```bash
# Installer les dépendances
npm install
# Créer un fichier .env.local
echo "NEXT_PUBLIC_API_URL=http://localhost:8001" > .env.local
# Lancer en mode dev
npm run dev
```
### Production avec Docker
```bash
# Build l'image
docker build -t audio-classifier-frontend -f frontend/Dockerfile .
# Lancer avec une URL personnalisée
docker run -p 3000:3000 \
-e NEXT_PUBLIC_API_URL=https://mon-serveur.com:8001 \
audio-classifier-frontend
```
### Docker Compose
```yaml
frontend:
image: audio-classifier-frontend
environment:
NEXT_PUBLIC_API_URL: ${NEXT_PUBLIC_API_URL:-http://localhost:8001}
ports:
- "3000:3000"
```
## Structure
```
frontend/
├── app/ # Pages Next.js (App Router)
│ ├── layout.tsx # Layout principal (charge config.js)
│ └── page.tsx # Page d'accueil
├── components/ # Composants React
├── lib/ # Utilitaires
│ ├── api.ts # Client API (lit la config runtime)
│ └── types.ts # Types TypeScript
├── public/ # Fichiers statiques
│ └── config.js # Configuration runtime (généré au démarrage)
├── generate-config.sh # Script de génération de config
└── Dockerfile # Image Docker de production
```
## Variables d'Environnement
- `NEXT_PUBLIC_API_URL` : URL de l'API backend (ex: `https://api.example.com:8001`)
## Troubleshooting
### L'API n'est pas accessible
Vérifiez que :
1. La variable `NEXT_PUBLIC_API_URL` est correctement définie
2. Le fichier `/app/public/config.js` existe dans le container
3. Le navigateur peut accéder à l'URL de l'API (pas de CORS, firewall, etc.)
### Voir la configuration active
Ouvrez la console du navigateur et tapez :
```javascript
console.log(window.__RUNTIME_CONFIG__)
```
### Vérifier la config dans le container
```bash
docker exec audio_classifier_ui cat /app/public/config.js
```

View File

@@ -2,6 +2,8 @@ import type { Metadata } from "next"
import { Inter } from "next/font/google"
import "./globals.css"
import { QueryProvider } from "@/components/providers/QueryProvider"
import AuthGuard from "@/components/AuthGuard"
import Script from "next/script"
const inter = Inter({ subsets: ["latin"] })
@@ -17,9 +19,14 @@ export default function RootLayout({
}) {
return (
<html lang="en">
<head>
<Script src="/config.js" strategy="beforeInteractive" />
</head>
<body className={inter.className}>
<QueryProvider>
{children}
<AuthGuard>
{children}
</AuthGuard>
</QueryProvider>
</body>
</html>

124
frontend/app/login/page.tsx Normal file
View File

@@ -0,0 +1,124 @@
"use client"
import { useState } from "react"
import { useRouter } from "next/navigation"
import { getApiUrl } from "@/lib/api"
export default function LoginPage() {
const router = useRouter()
const [email, setEmail] = useState("")
const [password, setPassword] = useState("")
const [error, setError] = useState("")
const [isLoading, setIsLoading] = useState(false)
const handleSubmit = async (e: React.FormEvent) => {
e.preventDefault()
setError("")
setIsLoading(true)
try {
const response = await fetch(`${getApiUrl()}/api/auth/login`, {
method: "POST",
headers: {
"Content-Type": "application/json",
},
body: JSON.stringify({ email, password }),
})
if (!response.ok) {
const data = await response.json()
throw new Error(data.detail || "Login failed")
}
const data = await response.json()
// Store token in localStorage
localStorage.setItem("access_token", data.access_token)
localStorage.setItem("user", JSON.stringify(data.user))
// Redirect to home
router.push("/")
} catch (err) {
setError(err instanceof Error ? err.message : "Login failed")
} finally {
setIsLoading(false)
}
}
return (
<div className="min-h-screen flex items-center justify-center bg-gradient-to-br from-gray-900 via-gray-800 to-gray-900">
<div className="max-w-md w-full mx-4">
<div className="bg-white rounded-lg shadow-2xl p-8">
{/* Logo/Title */}
<div className="text-center mb-8">
<h1 className="text-3xl font-bold text-gray-900 mb-2">
Audio Classifier
</h1>
<p className="text-gray-600">Sign in to continue</p>
</div>
{/* Error message */}
{error && (
<div className="mb-4 p-3 bg-red-50 border border-red-200 text-red-700 rounded-md text-sm">
{error}
</div>
)}
{/* Login form */}
<form onSubmit={handleSubmit} className="space-y-6">
<div>
<label
htmlFor="email"
className="block text-sm font-medium text-gray-700 mb-1"
>
Email
</label>
<input
id="email"
type="email"
required
value={email}
onChange={(e) => setEmail(e.target.value)}
className="w-full px-3 py-2 border border-gray-300 rounded-md focus:outline-none focus:ring-2 focus:ring-blue-500 focus:border-transparent"
placeholder="admin@example.com"
disabled={isLoading}
/>
</div>
<div>
<label
htmlFor="password"
className="block text-sm font-medium text-gray-700 mb-1"
>
Password
</label>
<input
id="password"
type="password"
required
value={password}
onChange={(e) => setPassword(e.target.value)}
className="w-full px-3 py-2 border border-gray-300 rounded-md focus:outline-none focus:ring-2 focus:ring-blue-500 focus:border-transparent"
placeholder="••••••••"
disabled={isLoading}
/>
</div>
<button
type="submit"
disabled={isLoading}
className="w-full bg-blue-600 hover:bg-blue-700 text-white font-medium py-2 px-4 rounded-md transition-colors disabled:bg-blue-400 disabled:cursor-not-allowed"
>
{isLoading ? "Signing in..." : "Sign in"}
</button>
</form>
</div>
{/* Footer */}
<p className="text-center text-gray-400 text-sm mt-6">
Audio Classifier v1.0.0
</p>
</div>
</div>
)
}
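For reference, the login handler above posts the email and password as JSON and reads `access_token` and `user` from the response. The same request can be issued by hand (illustrative only, with placeholder credentials):

```bash
curl -X POST "https://mon-serveur.com:8001/api/auth/login" \
  -H "Content-Type: application/json" \
  -d '{"email": "admin@example.com", "password": "<your-password>"}'
```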

frontend/app/page.tsx

@@ -2,7 +2,8 @@
import { useState, useMemo } from "react"
import { useQuery } from "@tanstack/react-query"
import { getTracks } from "@/lib/api"
import { getTracks, getApiUrl } from "@/lib/api"
import { logout, getUser } from "@/lib/auth"
import type { FilterParams, Track } from "@/lib/types"
import FilterPanel from "@/components/FilterPanel"
import AudioPlayer from "@/components/AudioPlayer"
@@ -52,6 +53,7 @@ export default function Home() {
const [filters, setFilters] = useState<FilterParams>({})
const [page, setPage] = useState(0)
const [currentTrack, setCurrentTrack] = useState<Track | null>(null)
const [isPlaying, setIsPlaying] = useState(false)
const [searchQuery, setSearchQuery] = useState("")
const [isScanning, setIsScanning] = useState(false)
const [scanStatus, setScanStatus] = useState<string>("")
@@ -89,7 +91,7 @@ export default function Home() {
setIsScanning(true)
setScanStatus("Démarrage du scan...")
const response = await fetch(`${process.env.NEXT_PUBLIC_API_URL}/api/library/scan`, {
const response = await fetch(`${getApiUrl()}/api/library/scan`, {
method: 'POST',
})
@@ -102,7 +104,7 @@ export default function Home() {
// Poll scan status
const pollInterval = setInterval(async () => {
try {
const statusResponse = await fetch(`${process.env.NEXT_PUBLIC_API_URL}/api/library/scan/status`)
const statusResponse = await fetch(`${getApiUrl()}/api/library/scan/status`)
const status = await statusResponse.json()
if (!status.is_scanning) {
@@ -159,6 +161,18 @@ export default function Home() {
{tracksData?.total || 0} piste{(tracksData?.total || 0) > 1 ? 's' : ''}
</div>
{/* Logout button */}
<button
onClick={logout}
className="px-3 py-2 text-sm text-slate-600 hover:text-slate-900 hover:bg-slate-100 rounded-lg transition-colors flex items-center gap-2"
title="Déconnexion"
>
<svg className="w-4 h-4" fill="none" stroke="currentColor" viewBox="0 0 24 24">
<path strokeLinecap="round" strokeLinejoin="round" strokeWidth={2} d="M17 16l4-4m0 0l-4-4m4 4H7m6 4v1a3 3 0 01-3 3H6a3 3 0 01-3-3V7a3 3 0 013-3h4a3 3 0 013 3v1" />
</svg>
Logout
</button>
{/* Rescan button */}
<button
onClick={handleRescan}
@@ -233,10 +247,19 @@ export default function Home() {
<div className="flex items-center gap-4">
{/* Play button */}
<button
onClick={() => setCurrentTrack(track)}
onClick={() => {
if (currentTrack?.id === track.id) {
// Toggle play/pause for current track
setIsPlaying(!isPlaying)
} else {
// Switch to new track and start playing
setCurrentTrack(track)
setIsPlaying(true)
}
}}
className="flex-shrink-0 w-12 h-12 flex items-center justify-center bg-orange-500 hover:bg-orange-600 rounded-full transition-colors shadow-sm"
>
{currentTrack?.id === track.id ? (
{currentTrack?.id === track.id && isPlaying ? (
<svg className="w-5 h-5 text-white" fill="currentColor" viewBox="0 0 24 24">
<path d="M6 4h4v16H6V4zm8 0h4v16h-4V4z"/>
</svg>
@@ -347,7 +370,11 @@ export default function Home() {
{/* Fixed Audio Player at bottom */}
<div className="fixed bottom-0 left-0 right-0 z-50">
<AudioPlayer track={currentTrack} />
<AudioPlayer
track={currentTrack}
isPlaying={isPlaying}
onPlayingChange={setIsPlaying}
/>
</div>
</div>
)

frontend/components/AudioPlayer.tsx

@@ -2,13 +2,15 @@
import { useState, useRef, useEffect } from "react"
import type { Track } from "@/lib/types"
import { getApiUrl } from "@/lib/api"
interface AudioPlayerProps {
track: Track | null
isPlaying: boolean
onPlayingChange: (playing: boolean) => void
}
export default function AudioPlayer({ track }: AudioPlayerProps) {
const [isPlaying, setIsPlaying] = useState(false)
export default function AudioPlayer({ track, isPlaying, onPlayingChange }: AudioPlayerProps) {
const [currentTime, setCurrentTime] = useState(0)
const [duration, setDuration] = useState(0)
const [volume, setVolume] = useState(1)
@@ -22,7 +24,7 @@ export default function AudioPlayer({ track }: AudioPlayerProps) {
// Load audio and waveform when track changes
useEffect(() => {
if (!track) {
setIsPlaying(false)
onPlayingChange(false)
setCurrentTime(0)
setWaveformPeaks([])
return
@@ -33,13 +35,13 @@ export default function AudioPlayer({ track }: AudioPlayerProps) {
if (audioRef.current) {
audioRef.current.load()
// Autoplay when track loads
audioRef.current.play().then(() => {
setIsPlaying(true)
}).catch((error: unknown) => {
console.error("Autoplay failed:", error)
setIsPlaying(false)
})
// Autoplay when track loads if isPlaying is true
if (isPlaying) {
audioRef.current.play().catch((error: unknown) => {
console.error("Autoplay failed:", error)
onPlayingChange(false)
})
}
}
}, [track?.id])
@@ -54,7 +56,7 @@ export default function AudioPlayer({ track }: AudioPlayerProps) {
setDuration(audio.duration)
}
}
const handleEnded = () => setIsPlaying(false)
const handleEnded = () => onPlayingChange(false)
audio.addEventListener("timeupdate", updateTime)
audio.addEventListener("loadedmetadata", updateDuration)
@@ -78,7 +80,7 @@ export default function AudioPlayer({ track }: AudioPlayerProps) {
setIsLoadingWaveform(true)
try {
const response = await fetch(
`${process.env.NEXT_PUBLIC_API_URL}/api/audio/waveform/${trackId}`
`${getApiUrl()}/api/audio/waveform/${trackId}`
)
if (response.ok) {
const data = await response.json()
@@ -91,15 +93,24 @@ export default function AudioPlayer({ track }: AudioPlayerProps) {
}
}
const togglePlay = () => {
if (!audioRef.current || !track) return
// Sync playing state with audio element
useEffect(() => {
const audio = audioRef.current
if (!audio) return
if (isPlaying) {
audioRef.current.pause()
audio.play().catch((error: unknown) => {
console.error("Play failed:", error)
onPlayingChange(false)
})
} else {
audioRef.current.play()
audio.pause()
}
setIsPlaying(!isPlaying)
}, [isPlaying, onPlayingChange])
const togglePlay = () => {
if (!audioRef.current || !track) return
onPlayingChange(!isPlaying)
}
const handleVolumeChange = (e: React.ChangeEvent<HTMLInputElement>) => {
@@ -151,7 +162,7 @@ export default function AudioPlayer({ track }: AudioPlayerProps) {
return (
<div className="bg-gray-50 border-t border-gray-300 shadow-lg" style={{ height: '80px' }}>
{/* Hidden audio element */}
{track && <audio ref={audioRef} src={`${process.env.NEXT_PUBLIC_API_URL}/api/audio/stream/${track.id}`} />}
{track && <audio ref={audioRef} src={`${getApiUrl()}/api/audio/stream/${track.id}`} />}
<div className="h-full flex items-center gap-3 px-4">
{/* Play/Pause button */}
@@ -290,7 +301,7 @@ export default function AudioPlayer({ track }: AudioPlayerProps) {
{/* Download button */}
{track && (
<a
href={`${process.env.NEXT_PUBLIC_API_URL}/api/audio/download/${track.id}`}
href={`${getApiUrl()}/api/audio/download/${track.id}`}
download
className="w-8 h-8 flex items-center justify-center text-gray-600 hover:text-gray-900 transition-colors rounded hover:bg-gray-200 flex-shrink-0"
aria-label="Download"

frontend/components/AuthGuard.tsx (new file)

@@ -0,0 +1,37 @@
"use client"
import { useEffect, useState } from "react"
import { useRouter, usePathname } from "next/navigation"
import { isAuthenticated } from "@/lib/auth"
export default function AuthGuard({ children }: { children: React.ReactNode }) {
const router = useRouter()
const pathname = usePathname()
const [isChecking, setIsChecking] = useState(true)
useEffect(() => {
// Skip auth check for login page
if (pathname === "/login") {
setIsChecking(false)
return
}
// Check if user is authenticated
if (!isAuthenticated()) {
router.push("/login")
} else {
setIsChecking(false)
}
}, [pathname, router])
// Show loading while checking auth
if (isChecking && pathname !== "/login") {
return (
<div className="min-h-screen flex items-center justify-center bg-gray-900">
<div className="text-white">Loading...</div>
</div>
)
}
return <>{children}</>
}

frontend/generate-config.sh (new file)

@@ -0,0 +1,15 @@
#!/bin/sh
# Generate runtime configuration file
echo "Generating runtime configuration..."
echo "API URL: ${NEXT_PUBLIC_API_URL:-http://localhost:8001}"
cat > /app/public/config.js << EOF
// Runtime configuration generated at container startup
window.__RUNTIME_CONFIG__ = {
API_URL: '${NEXT_PUBLIC_API_URL:-http://localhost:8001}'
};
EOF
echo "Configuration generated successfully!"
cat /app/public/config.js
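The Dockerfile that invokes this script is not part of this excerpt; the intent, per the README above, is that the image entrypoint regenerates `public/config.js` from the environment before starting Next.js. A minimal sketch of that wiring (the actual start command is an assumption):

```sh
# Hypothetical container entrypoint: write config.js from env vars, then launch the app
./generate-config.sh && exec npm start
```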

frontend/lib/api.ts

@@ -14,28 +14,63 @@ import type {
FilterParams,
} from './types'
const API_BASE_URL = process.env.NEXT_PUBLIC_API_URL || 'http://localhost:8000'
// Get API URL from runtime config (injected at container startup) or fallback to env var
export function getApiUrl(): string {
if (typeof window !== 'undefined' && (window as any).__RUNTIME_CONFIG__) {
return (window as any).__RUNTIME_CONFIG__.API_URL
}
return process.env.NEXT_PUBLIC_API_URL || 'http://localhost:8000'
}
const apiClient = axios.create({
baseURL: API_BASE_URL,
headers: {
'Content-Type': 'application/json',
},
})
// Create axios instance dynamically to use runtime config
function getApiClient() {
const client = axios.create({
baseURL: getApiUrl(),
headers: {
'Content-Type': 'application/json',
},
})
// Add JWT token to requests if available
client.interceptors.request.use((config) => {
if (typeof window !== 'undefined') {
const token = localStorage.getItem('access_token')
if (token) {
config.headers.Authorization = `Bearer ${token}`
}
}
return config
})
// Handle 401 errors (redirect to login)
client.interceptors.response.use(
(response) => response,
(error) => {
if (error.response?.status === 401 && typeof window !== 'undefined') {
localStorage.removeItem('access_token')
localStorage.removeItem('user')
window.location.href = '/login'
}
return Promise.reject(error)
}
)
return client
}
// Tracks
export async function getTracks(params: FilterParams & { skip?: number; limit?: number }): Promise<TracksResponse> {
const response = await apiClient.get('/api/tracks', { params })
const response = await getApiClient().get('/api/tracks', { params })
return response.data
}
export async function getTrack(id: string): Promise<Track> {
const response = await apiClient.get(`/api/tracks/${id}`)
const response = await getApiClient().get(`/api/tracks/${id}`)
return response.data
}
export async function deleteTrack(id: string): Promise<void> {
await apiClient.delete(`/api/tracks/${id}`)
await getApiClient().delete(`/api/tracks/${id}`)
}
// Search
@@ -43,7 +78,7 @@ export async function searchTracks(
query: string,
filters?: { genre?: string; mood?: string; limit?: number }
): Promise<SearchResponse> {
const response = await apiClient.get('/api/search', {
const response = await getApiClient().get('/api/search', {
params: { q: query, ...filters },
})
return response.data
@@ -51,7 +86,7 @@ export async function searchTracks(
// Similar
export async function getSimilarTracks(id: string, limit: number = 10): Promise<SimilarTracksResponse> {
const response = await apiClient.get(`/api/tracks/${id}/similar`, {
const response = await getApiClient().get(`/api/tracks/${id}/similar`, {
params: { limit },
})
return response.data
@@ -59,30 +94,30 @@ export async function getSimilarTracks(id: string, limit: number = 10): Promise<
// Analysis
export async function analyzeFolder(request: AnalyzeFolderRequest): Promise<{ job_id: string }> {
const response = await apiClient.post('/api/analyze/folder', request)
const response = await getApiClient().post('/api/analyze/folder', request)
return response.data
}
export async function getAnalyzeStatus(jobId: string): Promise<JobStatus> {
const response = await apiClient.get(`/api/analyze/status/${jobId}`)
const response = await getApiClient().get(`/api/analyze/status/${jobId}`)
return response.data
}
export async function deleteJob(jobId: string): Promise<void> {
await apiClient.delete(`/api/analyze/job/${jobId}`)
await getApiClient().delete(`/api/analyze/job/${jobId}`)
}
// Audio
export function getStreamUrl(trackId: string): string {
return `${API_BASE_URL}/api/audio/stream/${trackId}`
return `${getApiUrl()}/api/audio/stream/${trackId}`
}
export function getDownloadUrl(trackId: string): string {
return `${API_BASE_URL}/api/audio/download/${trackId}`
return `${getApiUrl()}/api/audio/download/${trackId}`
}
export async function getWaveform(trackId: string, numPeaks: number = 800): Promise<WaveformData> {
const response = await apiClient.get(`/api/audio/waveform/${trackId}`, {
const response = await getApiClient().get(`/api/audio/waveform/${trackId}`, {
params: { num_peaks: numPeaks },
})
return response.data
@@ -90,14 +125,12 @@ export async function getWaveform(trackId: string, numPeaks: number = 800): Prom
// Stats
export async function getStats(): Promise<Stats> {
const response = await apiClient.get('/api/stats')
const response = await getApiClient().get('/api/stats')
return response.data
}
// Health
export async function healthCheck(): Promise<{ status: string }> {
const response = await apiClient.get('/health')
const response = await getApiClient().get('/health')
return response.data
}
export default apiClient
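The net effect of the request interceptor above is that every call made through `getApiClient()` carries the stored JWT. The equivalent request by hand looks roughly like this (illustrative; `$TOKEN` stands for the value the login page saved in localStorage as `access_token`):

```bash
curl -H "Authorization: Bearer $TOKEN" "https://mon-serveur.com:8001/api/tracks?limit=50"
```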

frontend/lib/auth.ts (new file, 34 lines)

@@ -0,0 +1,34 @@
/**
* Authentication utilities
*/
export function getToken(): string | null {
if (typeof window === "undefined") return null
return localStorage.getItem("access_token")
}
export function setToken(token: string): void {
localStorage.setItem("access_token", token)
}
export function removeToken(): void {
localStorage.removeItem("access_token")
localStorage.removeItem("user")
}
export function getUser(): any | null {
if (typeof window === "undefined") return null
const user = localStorage.getItem("user")
return user ? JSON.parse(user) : null
}
export function isAuthenticated(): boolean {
return getToken() !== null
}
export function logout(): void {
removeToken()
if (typeof window !== "undefined") {
window.location.href = "/login"
}
}

frontend/middleware.ts (new file, 20 lines)

@@ -0,0 +1,20 @@
import { NextResponse } from 'next/server'
import type { NextRequest } from 'next/server'
export function middleware(request: NextRequest) {
// Middleware runs on server, can't access localStorage
// Auth check will be done client-side in layout.tsx
return NextResponse.next()
}
export const config = {
matcher: [
/*
* Match all request paths except for the ones starting with:
* - _next/static (static files)
* - _next/image (image optimization files)
* - favicon.ico (favicon file)
*/
'/((?!_next/static|_next/image|favicon.ico).*)',
],
}

frontend/public/config.js (new file)

@@ -0,0 +1,4 @@
// This file will be overwritten at container startup
window.__RUNTIME_CONFIG__ = {
API_URL: 'http://localhost:8001'
};