Sometimes I am not able to reproduce the output I see on Langtail elsewhere, even after retrying multiple times. It would be great if I could see the raw request and response from the LLM provider for each generation in the chat UI.
This would also make it possible to check other response values, such as the system fingerprint, while developing a prompt.
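For context, here is a minimal sketch of the kind of raw provider data I mean, using the OpenAI Python SDK directly (the model name is just an example):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # example model; any chat model works
    messages=[{"role": "user", "content": "Hello"}],
)

# The raw response includes fields the chat UI currently hides,
# e.g. system_fingerprint, which identifies the backend configuration
# that served the request -- useful when debugging output that
# cannot be reproduced elsewhere.
print(response.system_fingerprint)

# Full raw response body as JSON.
print(response.model_dump_json(indent=2))
```

Having this same raw request/response pair visible next to each generation in the chat UI would remove the need to re-run the prompt outside Langtail just to inspect these fields.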