Towards Human-like AI. An attempt to make AI more general with…
After trying many permutations of the thought-stream lookback range, model temperature, and few-shot examples, the messages produced seem qualitatively worse with a long thought stream than with an arbitrarily short one, though this needs more experimentation and a good benchmark. Intuitively this makes sense: a GPT trained on the internet wouldn't have many training examples of what a human was thinking (at least in a direct-access format like this) before they said or wrote something. I'll need to rethink how thoughts are incorporated, or whether they can be removed entirely. Perhaps thinking is an emergent property of intelligence and does not need to be explicitly included.
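For reference, the lookback experiment can be sketched as a prompt builder that keeps only the most recent N thoughts as context. This is a minimal illustration, not the actual implementation; `build_prompt` and the example thought strings are hypothetical:

```python
def build_prompt(thought_stream, lookback):
    """Keep only the most recent `lookback` thoughts as context
    before asking the model to produce a message."""
    recent = thought_stream[-lookback:] if lookback > 0 else []
    lines = [f"Thought: {t}" for t in recent]
    lines.append("Message:")
    return "\n".join(lines)

# Hypothetical thought stream; sweeping `lookback` varies how much
# of it the model sees when generating the next message.
thoughts = ["I should reply soon", "Keep it brief", "Mention the weather"]
print(build_prompt(thoughts, lookback=1))
# → Thought: Mention the weather
#   Message:
```

Setting `lookback=0` drops thoughts entirely, which is the "removed entirely" condition worth benchmarking against.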
Dec-8-2022, 19:35:29 GMT
- Technology: