mirror of https://github.com/leafspark/AutoGGUF

Merge branch 'main' of https://github.com/leafspark/AutoGGUF

Commit: e09f54dcb7
CHANGELOG.md

```diff
@@ -2,6 +2,15 @@ # Changelog
 
 All notable changes to this project will be documented in this file.
 
+## [1.4.2] - 2024-08-04
+
+### Fixed
+- Resolves bug where Base Model text was shown even when GGML type was selected
+- Improved alignment
+
+### Changed
+- Minor repository changes
+
 ## [1.4.1] - 2024-08-04
 
 ### Added
```
CONTRIBUTING.md

```diff
@@ -49,6 +49,7 @@ ### Commit Types:
 ### Python Styleguide
 
 - Follow PEP 8
+- Please use Black to format your code first
 - Use meaningful variable names
 - Comment your code, but don't overdo it
 
```
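A short example in the spirit of the styleguide above: PEP 8 layout, a meaningful name, and a comment only where it adds information. The function and its name are illustrative, not taken from the repository.

```python
# Illustrative only: a small helper formatted the way the styleguide asks
# (Black-compatible layout, descriptive naming, a docstring over inline noise).

def bytes_to_gib(size_bytes: int) -> float:
    """Convert a raw byte count to GiB, e.g. for displaying model sizes."""
    gib = 1024**3  # 1 GiB in bytes
    return size_bytes / gib
```

Running the snippet through `black` before committing keeps formatting debates out of review.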
README.md (32 lines changed)
```diff
@@ -2,14 +2,26 @@
 
 # AutoGGUF - automated GGUF model quantizer
 
 <!-- Project Status -->
 [](https://github.com/leafspark/AutoGGUF/releases)
 [](https://github.com/leafspark/AutoGGUF/commits)
 []()
 
 <!-- Project Info -->
 [](https://github.com/ggerganov/llama.cpp)
 []()
 [](https://github.com/leafspark/AutoGGUF/blob/main/LICENSE)
 
 <!-- Repository Stats -->
 
 <!-- Contribution -->
 [](https://github.com/psf/black)
 [](https://github.com/leafspark/AutoGGUF/issues)
 [](https://github.com/leafspark/AutoGGUF/pulls)
 
 AutoGGUF provides a graphical user interface for quantizing GGUF models using the llama.cpp library. It allows users to download different versions of llama.cpp, manage multiple backends, and perform quantization tasks with various options.
```
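A minimal sketch of the kind of llama.cpp invocation AutoGGUF automates for the user. The file names here are placeholders, not defaults from the app, and the exact binary name depends on the llama.cpp build (recent releases ship the quantizer as `llama-quantize`):

```shell
# Hypothetical example of the quantization step the GUI wraps.
MODEL_IN="model-f16.gguf"          # placeholder input model
QUANT_TYPE="Q4_K_M"                # one of llama.cpp's quantization presets
MODEL_OUT="model-${QUANT_TYPE}.gguf"

# llama.cpp's quantize tool takes: input file, output file, quant type.
# Printed rather than executed, since the binary path is machine-specific.
echo "./llama-quantize ${MODEL_IN} ${MODEL_OUT} ${QUANT_TYPE}"
```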
````diff
@@ -55,17 +67,19 @@ ### Cross-platform
 
 ### Windows
 ```bash
-build RELEASE/DEV
+build RELEASE | DEV
 ```
 Find the executable in `build/<type>/dist/AutoGGUF.exe`.
 
 ## Dependencies
 
 - PyQt6
 - requests
 - psutil
 - shutil
 - OpenSSL
 - numpy
 - torch
 - safetensors
 - gguf (bundled)
 
 ## Localizations
 
````
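Under the assumption that the dependency list above maps onto a pip requirements file, a sketch might look like the following (`shutil` ships with the Python standard library and `OpenSSL` is a system library, so neither needs a PyPI entry; no version pins are given because none appear in the list):

```text
PyQt6
requests
psutil
numpy
torch
safetensors
```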
```diff
@@ -77,7 +91,7 @@ ## Known Issues
 
 - Saving preset while quantizing causes UI thread crash (planned fix: remove this feature)
 - Cannot delete task while processing (planned fix: disallow deletion before cancelling or cancel automatically)
-- Base Model text still shows when GGML is selected as LoRA type (fix: include text in show/hide Qt layout)
+- ~~Base Model text still shows when GGML is selected as LoRA type (fix: include text in show/hide Qt layout)~~ (fixed in v1.4.2)
 
 ## Planned Features
 
```
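The v1.4.2 fix above amounts to deriving the Base Model row's visibility from the selected LoRA type instead of leaving it static. A minimal sketch of that logic, with the Qt widgets stubbed out (the function and type names are illustrative assumptions, not taken from the repo):

```python
# Hypothetical sketch of the show/hide rule behind the fix.
# A GGML LoRA needs no base model, so its input row should be hidden;
# any other LoRA type keeps the row visible.

def base_model_visible(lora_type: str) -> bool:
    """Return whether the Base Model label and input should be shown."""
    return lora_type != "GGML"
```

In Qt terms, a slot connected to the LoRA-type combo box's change signal would pass this value to `setVisible()` on both the label and the input widget, so the text and the field always toggle together inside the layout.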
```diff
@@ -95,7 +109,7 @@ ## Troubleshooting
 
 ## Contributing
 
-Fork the repo, make your changes, and ensure you have the latest commits when merging. Include a changelog of new features in your pull request description.
+Fork the repo, make your changes, and ensure you have the latest commits when merging. Include a changelog of new features in your pull request description. Read `CONTRIBUTING.md` for more information.
 
 ## User Interface
 
```