Use convert.py to transform ChatGLM-6B into quantized GGML format. For example, to convert the fp16 original model to a q4_0 (int4-quantized) GGML model, run: python3 ...
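The command above is truncated; as a sketch only, a typical invocation of such a converter might look like the following (the `-i`/`-t`/`-o` flags and the `THUDM/chatglm-6b` model identifier are assumptions, not confirmed by the snippet):

```shell
# Hypothetical example: convert the fp16 ChatGLM-6B weights to a q4_0 GGML file.
# -i : input model (local path or Hugging Face repo id) -- assumed flag
# -t : target quantization type (e.g. q4_0)             -- assumed flag
# -o : output GGML file                                  -- assumed flag
python3 convert.py -i THUDM/chatglm-6b -t q4_0 -o chatglm-ggml.bin
```

Check the script's own `--help` output for the actual option names before running it.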
So, I have just upgraded to Python 3.12 and tried to install aiohttp as usual (py -m pip install aiohttp). The full output is below. Searching online, the only answer I found was to downgrade to ...