Do LLMs estimate uncertainty well in instruction-following?