#llm #ai
Created at 041223
# [Anonymous feedback](https://www.admonymous.co/louis030195)
# [[Epistemic status]]
#shower-thought
Last modified date: 041223
Commit: 0
# Related
# LLM hyperparams cheatsheet
| Parameter | Type | Default | Description | Range |
| ---- | ---- | ---- | ---- | ---- |
| `frequency_penalty` | number | 0 | Penalizes new tokens based on their frequency in the text so far, reducing verbatim repetition. | -2.0 to 2.0 |
| `logit_bias` | map | - | Maps token IDs to bias values that raise or lower the likelihood of those tokens appearing; -100 effectively bans a token, 100 effectively forces it. | -100 to 100 per token |
| `max_tokens` | integer | - | Caps the number of generated tokens. Input tokens plus generated tokens must fit within the model's context length. | model-dependent |
| `n` | integer | 1 | Number of chat completion choices generated per request. Keeping `n` at 1 minimizes cost. | - |
| `presence_penalty` | number | 0 | Penalizes tokens that have already appeared at all, encouraging the model to move to new topics. | -2.0 to 2.0 |
| `seed` | integer | - | Best-effort deterministic sampling: the same seed and parameters should return similar results, but determinism is not guaranteed. | - |
| `stop` | string/array | - | Up to 4 sequences at which the API stops generating further tokens. | - |
| `temperature` | number | 1 | Controls randomness in output (higher = more random, lower = more focused and deterministic). | 0 to 2 |
| `top_p` | number | 1 | Nucleus sampling: only the tokens comprising the top `top_p` probability mass are considered. Generally tune this or `temperature`, not both. | 0 to 1 |
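A minimal sketch of a request body combining these parameters. The model name, token ID, and specific values are illustrative assumptions, not recommendations:

```python
# Sketch of a chat completion request body using the parameters above.
# Model name and the token ID in logit_bias are assumptions for illustration.
payload = {
    "model": "gpt-4",                  # assumed model name
    "messages": [{"role": "user", "content": "Summarize nucleus sampling."}],
    "temperature": 0.7,                # 0 to 2; lower = more deterministic
    "top_p": 1,                        # leave at 1 when tuning temperature
    "frequency_penalty": 0.5,          # -2.0 to 2.0; discourage repetition
    "presence_penalty": 0.3,           # -2.0 to 2.0; nudge toward new topics
    "max_tokens": 256,                 # cap on generated tokens
    "n": 1,                            # one completion keeps cost down
    "seed": 42,                        # best-effort determinism
    "stop": ["\n\n"],                  # stop at a blank line
    "logit_bias": {50256: -100},       # assumed token ID; -100 bans the token
}
```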
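To see what `top_p` actually does, here is a toy sketch of nucleus sampling over an already-computed probability distribution (the function name and inputs are made up for illustration; real implementations work on logits over the full vocabulary):

```python
def nucleus_filter(probs, top_p):
    """Return the indices of the smallest set of tokens, taken in
    descending probability order, whose cumulative mass reaches top_p."""
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, cum = [], 0.0
    for i in order:
        kept.append(i)
        cum += probs[i]
        if cum >= top_p:
            break  # enough mass covered; sampling happens within `kept`
    return kept

# With probabilities [0.5, 0.3, 0.15, 0.05] and top_p = 0.9,
# the first three tokens cover >= 0.9 of the mass, so the rare
# fourth token is excluded from sampling.
```

Lowering `top_p` shrinks the candidate set, which is why small values make output more conservative even at a fixed `temperature`.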