This gist is a hands-on note for running llama.cpp on various GPUs. It may be out of date as the project evolves.
This is a personal record, so readers may not get an out-of-the-box experience.
Record the verified configs. The project is still developing very fast, so each record is pinned to a specific commit id.

| Impl. | Device | OS | llama.cpp version | 3rd party version | Step |
| ----- | ------ | -- | ----------------- | ----------------- | ---- |
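Since each verified config is pinned to a commit id, a quick way to capture (or reproduce) that id is with plain git. A minimal sketch, assuming llama.cpp has been cloned into `./llama.cpp` (the path is an assumption):

```shell
# Record the exact llama.cpp commit a config was verified against.
# Assumes the repo was cloned to ./llama.cpp; adjust the path as needed.
git -C llama.cpp rev-parse --short HEAD

# To reproduce a recorded config later, check out that same commit:
# git -C llama.cpp checkout <commit-id>
```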