# Empatica Data Recovery

A desktop tool for downloading and processing Empatica wristwatch data from AWS S3. It works as a GUI application (point-and-click) or from the command line, whichever you prefer.

## Features

- GUI (CustomTkinter) with Connect / Browse / Download screens
- CLI with `list` and `download` subcommands
- S3 file listing with patient/date filters and pagination
- Parallel downloads (8 threads) with a progress bar
- Avro-to-CSV conversion for all Empatica sensors
- CSV merging per patient/date with contiguity validation
- Post-merge cleanup (moves `.avro` files to a subfolder, removes the individual CSVs)
- Dark Slate theme with Xft-enabled Tk for antialiased fonts
- 45 tests (models, AWS client, Avro processor, CSV merger)
## What it does

- Connects to your lab's AWS S3 bucket using your personal key file
- Lists all the `.avro` data files, with filtering by patient and date
- Downloads the ones you pick into organized `Patient_X/YYYY-MM-DD/` folders
- Converts each `.avro` file into per-sensor CSV files (accelerometer, gyroscope, EDA, temperature, etc.)
- Merges CSVs from the same patient/date into combined files for easier analysis
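The parallel-download step can be pictured as a thread pool fanning out over the selected S3 keys, with the 8-worker default matching the thread count mentioned above. This is a minimal sketch, not the tool's actual code; `download_one` stands in for the real boto3 download call.

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def download_all(keys, download_one, max_workers=8):
    """Run download_one over every key concurrently, printing simple progress."""
    results = []
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = [pool.submit(download_one, key) for key in keys]
        for done, future in enumerate(as_completed(futures), start=1):
            results.append(future.result())
            print(f"\rDownloaded {done}/{len(keys)} files", end="", flush=True)
    print()
    return results
```

In practice `download_one` would wrap something like `s3.download_file(bucket, key, local_path)`; keeping it as a parameter makes the concurrency logic easy to test without AWS access.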
## Quick start

### 1. Install

You need Python 3.10+ and conda.
#### Option A: From a local clone (recommended for development)

```bash
# Create the full environment from the provided file
conda env create -f environment.yml
conda activate empatica_recovery
```

This installs everything (Python, an Xft-enabled Tk build for crisp fonts on Linux, and all dependencies) in one step.
#### Option B: Install directly from GitHub

```bash
# Create a conda env with the Xft Tk build (Linux only; skip the tk line on macOS/Windows)
conda create -n empatica_recovery python=3.12 -y
conda activate empatica_recovery
conda install -c conda-forge "tk=8.6.13=xft_h891c84d_3"

# Install the package straight from the repo
pip install "empatica-recovery @ git+ssh://git@git.interactions-team.fr/fouad_boutaleb/empatica_recovery.git"
```
#### Option C: Editable install (for contributors)

```bash
git clone ssh://git@git.interactions-team.fr/fouad_boutaleb/empatica_recovery.git
cd empatica_recovery
conda env create -f environment.yml
conda activate empatica_recovery
# The package is already installed in editable mode by environment.yml; you're good to go
```
### 2. Run the GUI

```bash
empatica-gui
```
Three screens will guide you through:

- **Connect**: pick your AWS key file (`.csv`) and click "Load & Connect"
- **Browse**: filter by patient IDs and/or dates, then check the files you want
- **Download**: choose your output folder, enable CSV conversion and merging, and hit "Start download"
### 3. Or use the command line

```bash
# List all available files
empatica-cli list --key ~/my_aws_key.csv

# List only selected patients (comma-separated IDs and [start-end] ranges)
empatica-cli list --key ~/my_aws_key.csv --patients "5,[5-10]"

# Download everything from Jan 2025, convert to CSV, and merge
empatica-cli download --key ~/my_aws_key.csv \
    --start 2025-01-01 --end 2025-01-31 \
    --csv --merge \
    --output ~/Documents/EmpaticaData
```
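Before `--merge` combines per-file CSVs for a patient/date, the tool validates that the pieces are contiguous in time. The sketch below shows one plausible form of such a check; the timestamp input and the one-second tolerance are illustrative assumptions, not the tool's actual rule.

```python
def is_contiguous(timestamps, max_gap_s=1.0):
    """True if no gap between successive timestamps exceeds max_gap_s seconds.

    `timestamps` is a sorted list of sample times in seconds, e.g. the last
    timestamp of one CSV followed by the first timestamp of the next.
    """
    return all(b - a <= max_gap_s for a, b in zip(timestamps, timestamps[1:]))
```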
## Your AWS key file

Your key file is a `.csv` with three columns:

| Access Key ID | Secret Access Key | S3 Access URL |
|---|---|---|
| AKIA... | wJal... | s3://my-bucket/data/prefix |

Ask your lab admin for this file if you don't have one.
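If you want to reuse the key file in your own scripts, it is easy to parse by hand. A minimal sketch, assuming the exact column headers from the table above; the helper name and return shape are illustrative, not part of the tool's API.

```python
import csv
from urllib.parse import urlparse

def load_key_file(path):
    """Read the first data row and split the S3 URL into bucket + prefix."""
    with open(path, newline="") as f:
        row = next(csv.DictReader(f))
    url = urlparse(row["S3 Access URL"])  # e.g. s3://my-bucket/data/prefix
    return {
        "access_key": row["Access Key ID"],
        "secret_key": row["Secret Access Key"],
        "bucket": url.netloc,            # "my-bucket"
        "prefix": url.path.lstrip("/"),  # "data/prefix"
    }
```

The returned credentials and bucket/prefix pair are exactly what a `boto3` S3 client needs to list and fetch the `.avro` files.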
## Project structure

```
empatica_recovery/
├── src/
│   ├── main.py               # GUI entry point
│   ├── cli.py                # CLI entry point
│   ├── config.py             # App-wide defaults
│   ├── core/
│   │   ├── models.py         # Shared data structures
│   │   ├── aws_client.py     # S3 connect / list / download
│   │   ├── avro_processor.py # .avro → CSV conversion
│   │   └── csv_merger.py     # Merge CSVs across files
│   ├── gui/
│   │   ├── app.py            # Main window + navigation
│   │   ├── frames/           # Connect / Browse / Download screens
│   │   └── widgets/          # File tree, progress panel
│   └── utils/
│       ├── file_utils.py     # File size formatting, paths
│       └── logging_config.py
├── tests/                    # pytest test suite
├── pyproject.toml            # Package definition
└── README.md
```
## Running tests

```bash
pip install -e ".[dev]"
pytest tests/ -v
```
## Requirements

- Python 3.10+
- boto3 (AWS SDK)
- fastavro (Avro file reading)
- pandas (data processing)
- customtkinter (GUI)

All dependencies are installed automatically by `pip install -e .`.