garak.generators.ggml

ggml generator support

This generator works with compiled ggml executables that load models in GGUF format, such as llama.cpp.

Put the path to your ggml executable (e.g. /home/leon/llama.cpp/main) in an environment variable named GGML_MAIN_PATH, and pass the path to the model you want to run either with --model_name on the command line or as the constructor parameter when instantiating LLaMaGgmlGenerator.
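A minimal Python sketch of the setup described above. The executable and model paths are placeholders; the instantiation line is commented out because it requires garak to be installed and the executable to exist:

```python
import os

# Point garak at the compiled ggml/llama.cpp executable.
# The path below is an example; substitute your own build location.
os.environ["GGML_MAIN_PATH"] = "/home/leon/llama.cpp/main"

# The generator reads the executable path from the environment, and
# takes the model path as its name parameter (sketch only):
# from garak.generators.ggml import GgmlGenerator
# generator = GgmlGenerator(name="/path/to/model.gguf")

print(os.environ["GGML_MAIN_PATH"])
```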

Compatibility or other problems? Please let us know!

https://github.com/NVIDIA/garak/issues

class garak.generators.ggml.GgmlGenerator(name='', config_root=<module 'garak._config'>)

Bases: Generator

Generator interface for ggml models in gguf format.

Set the path to the model as the model name, and put the path to the ggml executable in environment variable GGML_MAIN_PATH.

DEFAULT_PARAMS = {'context_len': None, 'exception_on_failure': True, 'first_call': True, 'frequency_penalty': 0.0, 'key_env_var': 'GGML_MAIN_PATH', 'max_tokens': 150, 'presence_penalty': 0.0, 'repeat_penalty': 1.1, 'skip_seq_end': None, 'skip_seq_start': None, 'temperature': 0.8, 'top_k': 40, 'top_p': 0.95}
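The defaults above cover sampling settings and control flags. As an illustration (this dict merge is a generic sketch, not garak's actual configuration mechanism), per-run overrides can be layered over the documented defaults like so:

```python
# Mirror of the documented DEFAULT_PARAMS for this generator.
DEFAULT_PARAMS = {
    "context_len": None,
    "exception_on_failure": True,
    "first_call": True,
    "frequency_penalty": 0.0,
    "key_env_var": "GGML_MAIN_PATH",
    "max_tokens": 150,
    "presence_penalty": 0.0,
    "repeat_penalty": 1.1,
    "skip_seq_end": None,
    "skip_seq_start": None,
    "temperature": 0.8,
    "top_k": 40,
    "top_p": 0.95,
}

# Illustrative override merge: later dict wins for duplicate keys.
overrides = {"temperature": 0.2, "max_tokens": 64}
params = {**DEFAULT_PARAMS, **overrides}
print(params["temperature"], params["max_tokens"], params["top_k"])
```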
command_params()
generator_family_name = 'ggml'
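Since the generator shells out to an executable, the sampling parameters ultimately become command-line flags. The sketch below shows one plausible mapping; the flag names are assumptions modelled on llama.cpp's CLI, and the authoritative mapping is whatever command_params() returns:

```python
# Hypothetical mapping from parameter names to CLI flags, modelled
# on llama.cpp's main example; not garak's actual command_params().
def build_command(main_path, model_path, prompt, params):
    flag_map = {
        "temperature": "--temp",
        "top_k": "--top-k",
        "top_p": "--top-p",
        "repeat_penalty": "--repeat-penalty",
        "max_tokens": "-n",
    }
    argv = [main_path, "-m", model_path, "-p", prompt]
    for key, flag in flag_map.items():
        value = params.get(key)
        if value is not None:
            argv += [flag, str(value)]
    return argv

cmd = build_command(
    "/home/leon/llama.cpp/main",   # hypothetical executable path
    "/path/to/model.gguf",         # hypothetical model path
    "Hello",
    {"temperature": 0.8, "top_k": 40, "max_tokens": 150},
)
print(cmd)
```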