# better_wav2lip
**Repository Path**: shaneluik/better_wav2lip
## Basic Information
- **Project Name**: better_wav2lip
- **Description**: No description available
- **Primary Language**: Python
- **License**: Not specified
- **Default Branch**: master
- **Homepage**: None
- **GVP Project**: No
## Statistics
- **Stars**: 0
- **Forks**: 2
- **Created**: 2024-01-10
- **Last Updated**: 2024-01-25
## Categories & Tags
**Categories**: Uncategorized
**Tags**: None
## README
# Better Wav2Lip model version
Original repo: https://github.com/Rudrabha/Wav2Lip
- [x] Model sizes: 288x288, 384x384, 512x512
- [x] PReLU
- [x] LeakyReLU
- [x] Gradient penalty
- [x] Wasserstein loss
- [x] SAM-UNet: https://github.com/1343744768/Multiattention-UNet
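The gradient penalty and Wasserstein loss items above follow the WGAN-GP training recipe. The sketch below is a hypothetical, dependency-free toy (not code from this repo): it uses a linear critic `D(x) = w*x + b` so the input gradient is the constant `w`, which lets us show the Wasserstein term and the gradient penalty on interpolated samples without an autograd framework.

```python
# Toy WGAN-GP critic loss: Wasserstein term + gradient penalty.
# Hypothetical standalone sketch, NOT this repository's discriminator code.
import random

def critic(x, w, b):
    # Linear critic D(x) = w*x + b; its input gradient dD/dx is w everywhere.
    return w * x + b

def wgan_gp_critic_loss(real, fake, w, b, lam=10.0):
    # Wasserstein term: E[D(fake)] - E[D(real)] (the critic minimizes this).
    wass = sum(critic(x, w, b) for x in fake) / len(fake) \
         - sum(critic(x, w, b) for x in real) / len(real)
    # Gradient penalty on random interpolates x_hat = eps*real + (1-eps)*fake.
    # For the linear critic the input gradient norm is |w| at every x_hat,
    # so each sample contributes (|w| - 1)**2.
    gp = 0.0
    for xr, xf in zip(real, fake):
        eps = random.random()
        x_hat = eps * xr + (1 - eps) * xf   # interpolated sample
        grad_norm = abs(w)                  # ||grad_x D(x_hat)|| for D(x)=w*x+b
        gp += (grad_norm - 1.0) ** 2
    return wass + lam * gp / len(real)

real = [1.0, 2.0, 3.0]
fake = [0.0, 0.5, 1.0]
loss = wgan_gp_critic_loss(real, fake, w=1.0, b=0.0)  # |w| = 1, so zero penalty
```

With `|w| = 1` the penalty vanishes and only the Wasserstein gap remains; any `|w| != 1` adds a positive penalty, which is exactly the 1-Lipschitz constraint WGAN-GP enforces softly.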
Each line in the filelist should be a full path.
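For example, a filelist might look like this (hypothetical paths; adjust to your own dataset layout):

```
/data/wav2lip/preprocessed/speaker1/clip_0001
/data/wav2lip/preprocessed/speaker1/clip_0002
/data/wav2lip/preprocessed/speaker2/clip_0001
```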
First, train SyncNet:
```shell
python3 train_syncnet_sam.py
```
Second, train Wav2Lip-SAM:
```shell
python3 hq_wav2lip_sam_train.py
```
Some demos from Chinese users:
https://github.com/primepake/wav2lip_288x288/issues/89#issue-2047907323
# New Features: DINet full pipeline training
Original repo: https://github.com/MRzzm/DINet
- [ ] SyncNet training using DeepSpeech features and mel spectrograms
- [ ] DINet frame training using mel spectrograms
- [ ] DINet clip training using mel spectrograms
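All three stages above consume mel spectrograms as the audio feature. As a minimal illustration of what that feature is, here is a numpy-only sketch (not this repo's audio pipeline; the sample rate, FFT size, hop length, and mel count are illustrative assumptions):

```python
# Minimal mel-spectrogram computation in plain numpy.
# Hypothetical sketch; the real pipeline's parameters may differ.
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(sr=16000, n_fft=512, n_mels=80):
    # Triangular filters spaced evenly on the mel scale from 0 Hz to sr/2.
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(1, n_mels + 1):
        left, center, right = bins[i - 1], bins[i], bins[i + 1]
        for k in range(left, center):          # rising edge of the triangle
            if center > left:
                fb[i - 1, k] = (k - left) / (center - left)
        for k in range(center, right):         # falling edge of the triangle
            if right > center:
                fb[i - 1, k] = (right - k) / (right - center)
    return fb

def mel_spectrogram(signal, sr=16000, n_fft=512, hop=160, n_mels=80):
    # Frame the signal, window it, take the magnitude STFT, project onto mel filters.
    window = np.hanning(n_fft)
    n_frames = 1 + (len(signal) - n_fft) // hop
    frames = np.stack([signal[i * hop:i * hop + n_fft] * window
                       for i in range(n_frames)])
    mag = np.abs(np.fft.rfft(frames, n_fft, axis=1))   # (n_frames, n_fft//2 + 1)
    return mag @ mel_filterbank(sr, n_fft, n_mels).T   # (n_frames, n_mels)

# One second of a 440 Hz tone at 16 kHz -> a (frames, 80) mel feature matrix.
t = np.arange(16000) / 16000.0
mel = mel_spectrogram(np.sin(2 * np.pi * 440.0 * t))
```

In practice, lip-sync models take log-compressed mel spectrograms windowed to match the video frame rate, so a chunk of consecutive mel frames is paired with each face crop.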
## Citing
To cite this repository:
```bibtex
@misc{Wav2Lip,
  author = {Rudrabha},
  title  = {Wav2Lip: Accurately Lip-syncing Videos In The Wild},
  year   = {2020},
  url    = {https://github.com/Rudrabha/Wav2Lip}
}
```