mirror of https://github.com/leafspark/AutoGGUF
docs: update features and shields
This commit is contained in:
parent e5e18f9966
commit 4aa3eafef8
@@ -18,6 +18,7 @@ # AutoGGUF - automated GGUF model quantizer
<!-- Contribution -->
[](https://github.com/psf/black)
@@ -36,6 +37,7 @@ ## Features
- LoRA conversion and merging
- Preset saving and loading
- AutoFP8 quantization
- GGUF splitting
## Usage
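The features hunk above adds GGUF splitting to the list. As a rough illustration of what that operation involves, here is a minimal Python sketch that shells out to llama.cpp's `gguf-split` tool; the binary name, flags, and file paths are assumptions about the upstream tool, not code taken from AutoGGUF.

```python
# Minimal sketch of GGUF splitting, assuming llama.cpp's gguf-split binary is on PATH.
# Binary name, flags, and paths are illustrative assumptions, not AutoGGUF code.
import subprocess
from pathlib import Path


def split_gguf(model: Path, out_prefix: Path, max_tensors: int = 256) -> None:
    """Split a large GGUF file into shards holding at most `max_tensors` tensors each."""
    subprocess.run(
        [
            "gguf-split",                 # assumed binary name from llama.cpp
            "--split",                    # split mode (the opposite of --merge)
            "--split-max-tensors", str(max_tensors),
            str(model),                   # input GGUF file
            str(out_prefix),              # shards written as <prefix>-00001-of-0000N.gguf
        ],
        check=True,
    )


if __name__ == "__main__":
    split_gguf(Path("models/model-Q4_K_M.gguf"), Path("models/model-Q4_K_M"))
```

Splitting by tensor count (or by `--split-max-size`) keeps individual shards under per-file upload limits on model hosts, which fits naturally with the HuggingFace upload/download feature planned further below.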
@@ -125,15 +127,15 @@ ## Localizations
## Issues
- None!
- Some inconsistent logging
## Planned Features
- Time estimation for quantization
- Actual progress bar tracking
- Quantization file size estimate
- Perplexity testing
- HuggingFace upload/download (coming in the next release)
- AutoFP8 quantization (partially done) and bitsandbytes (coming soon)
- bitsandbytes (coming soon)
## Troubleshooting
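Among the planned features listed above, HuggingFace upload/download maps naturally onto the `huggingface_hub` package. The sketch below shows the general shape under that assumption; the repository IDs and file names are placeholders, and none of this is taken from AutoGGUF's own code.

```python
# Sketch of HuggingFace download/upload for GGUF files via the huggingface_hub package.
# Repo IDs and file names are placeholders, not AutoGGUF defaults; uploading requires a
# write token (e.g. from `huggingface-cli login`).
from huggingface_hub import HfApi, hf_hub_download

# Download one GGUF file from a model repository.
local_path = hf_hub_download(
    repo_id="someuser/SomeModel-GGUF",    # placeholder repository
    filename="somemodel.Q4_K_M.gguf",     # placeholder file name
)

# Upload a locally produced quant back to a repository you control.
api = HfApi()
api.upload_file(
    path_or_fileobj=local_path,
    path_in_repo="somemodel.Q4_K_M.gguf",
    repo_id="someuser/SomeModel-GGUF",
)
```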