This model’s maximum context length is 4096 tokens. However, you requested 4108 tokens (1012 in the messages, 3096 in the completion). Please reduce the length of the messages or completion.
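The arithmetic in the error is the whole story: 1012 prompt tokens plus a 3096-token completion budget is 4108, which exceeds the 4096-token window by 12 tokens, so either the messages or the requested completion length must shrink. Below is a minimal sketch of one way to avoid this, assuming the openai (>=1.0) Python client and tiktoken for token counting; the model name, margin, and helper names are illustrative, not taken from the original request.

```python
# Cap max_tokens so prompt + completion fits inside the 4096-token window.
# Assumes the openai>=1.0 Python client and tiktoken; helper names are illustrative.
import tiktoken
from openai import OpenAI

CONTEXT_LIMIT = 4096  # model's maximum context length, per the error above


def count_message_tokens(messages, encoding_name="cl100k_base"):
    """Rough token count for a list of chat messages (ignores per-message overhead)."""
    enc = tiktoken.get_encoding(encoding_name)
    return sum(len(enc.encode(m["content"])) for m in messages)


def safe_max_tokens(messages, requested=3096, margin=16):
    """Shrink the completion budget so messages + completion stay under the limit."""
    prompt_tokens = count_message_tokens(messages)  # e.g. 1012 in the error above
    available = CONTEXT_LIMIT - prompt_tokens - margin
    if available <= 0:
        raise ValueError("Messages alone exceed the context window; shorten them.")
    # e.g. 4096 - 1012 - 16 = 3068, less than the 3096 originally requested
    return min(requested, available)


client = OpenAI()
messages = [{"role": "user", "content": "..."}]
resp = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative model with a 4096-token window
    messages=messages,
    max_tokens=safe_max_tokens(messages),
)
```

If the messages alone already approach the limit, capping `max_tokens` is not enough; the other half of the error's advice applies, and the prompt itself has to be truncated or summarized before the request is retried.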