forked from mrq/bitsandbytes-rocm
Updated docs (#32) and changelog.
parent 62b6a9399d
commit b844e104b7
CHANGELOG.md (13 lines changed)

@@ -117,3 +117,16 @@ Features:
Bug fixes:

- fixed an issue where too many threads were created in blockwise quantization on the CPU for large tensors
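The bug-fix entry above concerns blockwise quantization, where a tensor is split into fixed-size blocks and each block is scaled by its own absolute maximum. As an illustrative pure-Python sketch only (this is not the bitsandbytes implementation; the function names, block size, and worker cap are made up here), the fix amounts to capping the thread pool instead of spawning a worker per block:

```python
# Illustrative sketch of blockwise absmax quantization with a capped
# thread pool, so large tensors (many blocks) do not create too many
# threads. Not the bitsandbytes implementation.
from concurrent.futures import ThreadPoolExecutor

def quantize_block(block):
    # Scale the block by its absolute maximum, then round into int8 range.
    absmax = max((abs(v) for v in block), default=0.0) or 1.0
    return absmax, [round(v / absmax * 127) for v in block]

def blockwise_quantize(values, block_size=4096, max_workers=8):
    blocks = [values[i:i + block_size]
              for i in range(0, len(values), block_size)]
    # Cap the pool size rather than using one thread per block.
    workers = min(max_workers, max(1, len(blocks)))
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(quantize_block, blocks))
```

Each block is dequantized later by multiplying its int8 codes by `absmax / 127`; keeping the scale per block limits the error introduced by outliers to a single block.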
### 0.35.0

#### CUDA 11.8 support and bug fixes

Features:

- CUDA 11.8 support added and binaries added to the PyPI release.

Bug fixes:

- fixed a bug where overly long directory names would crash the CUDA SETUP #35 (thank you @tomaarsen)
- fixed a bug where CPU installations on Colab would run into an error #34 (thank you @tomaarsen)
- fixed an issue where the default CUDA version with fast-DreamBooth was not supported #52

@@ -10,6 +10,8 @@ Resources:

- [LLM.int8() Paper](https://arxiv.org/abs/2208.07339) -- [LLM.int8() Software Blog Post](https://huggingface.co/blog/hf-bitsandbytes-integration) -- [LLM.int8() Emergent Features Blog Post](https://timdettmers.com/2022/08/17/llm-int8-and-emergent-features/)
## TL;DR

**Requirements**

Linux distribution (e.g. Ubuntu) + CUDA >= 10.0. LLM.int8() requires Turing or Ampere GPUs.

**Installation**:

``pip install bitsandbytes``

@@ -52,6 +54,8 @@ Hardware requirements:

Supported CUDA versions: 10.2 - 11.7

The bitsandbytes library is currently only supported on Linux distributions. Windows is not supported at the moment.

The requirements can best be fulfilled by installing PyTorch via Anaconda. You can install PyTorch by following the ["Get Started"](https://pytorch.org/get-started/locally/) instructions on the official website.

## Using bitsandbytes