garak.generators.openai
OpenAI API Compatible generators
Supports chat + chatcompletion models. Put your API key in the environment variable documented for the selected generator. Pass the name of the model you want either via the `--model_name` command-line parameter or as an argument to the Generator constructor.
Sources:
* https://platform.openai.com/docs/models/model-endpoint-compatibility
* https://platform.openai.com/docs/model-index-for-researchers
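As a minimal sketch of the documented key lookup (the helper name is illustrative, not garak's internal API), each generator reads its key from the environment variable named by its `ENV_VAR` attribute:

```python
import os

def load_api_key(env_var: str) -> str:
    """Read an API key from the environment variable named by a
    generator's ENV_VAR attribute (e.g. OPENAI_API_KEY)."""
    key = os.environ.get(env_var)
    if key is None:
        raise ValueError(f"Put your API key in the {env_var} environment variable")
    return key

# Placeholder value for demonstration only; never hard-code real keys.
os.environ["OPENAI_API_KEY"] = "sk-example"
key = load_api_key("OPENAI_API_KEY")
```

Unsetting the variable instead raises a `ValueError`, mirroring the documentation's instruction to put the key in the environment before running.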
- class garak.generators.openai.OpenAICompatible(name='', config_root=<module 'garak._config'>)
Bases: Generator
Generator base class for OpenAI compatible text2text restful API. Implements shared initialization and execution methods.
- DEFAULT_PARAMS = {'context_len': None, 'frequency_penalty': 0.0, 'max_tokens': 150, 'presence_penalty': 0.0, 'retry_json': True, 'seed': None, 'stop': ['#', ';'], 'suppressed_params': {}, 'temperature': 0.7, 'top_k': None, 'top_p': 1.0}
- ENV_VAR = 'OPENAICOMPATIBLE_API_KEY'
- active = False
- generator_family_name = 'OpenAICompatible'
- supports_multiple_generations = True
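One plausible reading of `DEFAULT_PARAMS` above, sketched here with a hypothetical helper (`build_request_payload` is not garak's method name): keys listed in `suppressed_params`, bookkeeping-only keys, and parameters left unset (`None`) are dropped before the payload is sent to the API.

```python
DEFAULT_PARAMS = {
    "context_len": None,
    "frequency_penalty": 0.0,
    "max_tokens": 150,
    "presence_penalty": 0.0,
    "retry_json": True,
    "seed": None,
    "stop": ["#", ";"],
    "suppressed_params": set(),
    "temperature": 0.7,
    "top_k": None,
    "top_p": 1.0,
}

def build_request_payload(params: dict) -> dict:
    # Hypothetical helper: drop suppressed keys, garak-side bookkeeping
    # keys, and unset (None) values before calling the completions API.
    suppressed = params.get("suppressed_params", set())
    skip = set(suppressed) | {"suppressed_params", "retry_json", "context_len"}
    return {k: v for k, v in params.items() if k not in skip and v is not None}

payload = build_request_payload(DEFAULT_PARAMS)
```

With the defaults shown, `seed`, `top_k`, and `context_len` never reach the API because they are unset or bookkeeping-only.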
- class garak.generators.openai.OpenAIGenerator(name='', config_root=<module 'garak._config'>)
Bases: OpenAICompatible
Generator wrapper for OpenAI text2text models. Expects the API key in the OPENAI_API_KEY environment variable.
- ENV_VAR = 'OPENAI_API_KEY'
- active = True
- generator_family_name = 'OpenAI'
- class garak.generators.openai.OpenAIReasoningGenerator(name='', config_root=<module 'garak._config'>)
Bases: OpenAIGenerator
Generator wrapper for OpenAI reasoning models, e.g. o1 family.
- DEFAULT_PARAMS = {'context_len': None, 'frequency_penalty': 0.0, 'max_completion_tokens': 1500, 'max_tokens': 150, 'presence_penalty': 0.0, 'retry_json': True, 'seed': None, 'stop': ['#', ';'], 'suppressed_params': {'max_tokens', 'n', 'stop', 'temperature'}, 'temperature': None, 'top_k': None, 'top_p': 1.0}
- supports_multiple_generations = False
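The reasoning-model defaults above can be read as overrides layered on the base defaults: `max_tokens`, `n`, `stop`, and `temperature` are suppressed, and `max_completion_tokens` takes over the output budget. A sketch of that layering as a plain dict merge, assuming subclass `DEFAULT_PARAMS` values shadow the parent's (the variable names here are illustrative):

```python
# Relevant subset of the base OpenAICompatible defaults.
base_defaults = {
    "max_tokens": 150,
    "temperature": 0.7,
    "suppressed_params": set(),
}

# Reasoning-model overrides from OpenAIReasoningGenerator.DEFAULT_PARAMS.
reasoning_overrides = {
    "max_completion_tokens": 1500,
    "temperature": None,
    "suppressed_params": {"max_tokens", "n", "stop", "temperature"},
}

# Subclass values shadow the base class's on key collision.
reasoning_defaults = {**base_defaults, **reasoning_overrides}
```

The net effect: a request built from `reasoning_defaults` would omit the suppressed sampling parameters entirely, which matches the published constraints on OpenAI's reasoning (o1-family) endpoints.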