When the learning signal vanishes for hard prompts during RL (all sampled trajectories are wrong), the LLM self-generates a hint to guide sampling, improving both prompt utilization and LLM performance. When an LLM can’t sample ...
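A minimal sketch of the idea, assuming a group-sampling RL setup with binary rewards: if every trajectory in a group fails, the prompt contributes zero gradient, so a self-generated hint is injected and the group is resampled. The `generate`, `reward`, and `self_hint` functions below are hypothetical stubs standing in for real LLM calls and a real grader.

```python
import random

def generate(prompt, hint=None):
    # Hypothetical stand-in for an LLM rollout: without a hint the
    # prompt is too hard to ever solve; with a hint it usually succeeds.
    p_correct = 0.9 if hint else 0.0
    return "correct" if random.random() < p_correct else "wrong"

def reward(trajectory):
    # Binary outcome reward, as in verifiable-reward RL setups.
    return 1.0 if trajectory == "correct" else 0.0

def self_hint(prompt):
    # Stub: in the real method the LLM itself generates the hint
    # (e.g. a key observation or partial solution) for the hard prompt.
    return f"hint for: {prompt}"

def sample_group(prompt, k=8):
    """Sample k trajectories; if all fail (no reward signal, so the
    prompt would be wasted), inject a self-generated hint and resample."""
    trajs = [generate(prompt) for _ in range(k)]
    if all(reward(t) == 0.0 for t in trajs):
        hint = self_hint(prompt)
        trajs = [generate(prompt, hint=hint) for _ in range(k)]
    return trajs

random.seed(0)
trajs = sample_group("hard math problem", k=8)
print(sum(reward(t) for t in trajs))  # nonzero: the prompt now yields signal
```

The fallback fires only when the whole group fails, so easy and medium prompts are sampled exactly as before; only otherwise-wasted hard prompts get the hint.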
Give this prompt to a freshly installed OpenCode instance. It will clone the repo, install everything, and configure itself. The user only needs to provide their API tokens when asked. Copy everything ...