The surprising ease and effectiveness of AI in a loop (Interconnected)
AI is still in the foothills of its adoption S-curve, and I love this period of any new technology – the scope of what it can do is unknown, so the main job is to stretch the imagination and try things out.

Anyway, the tech I'm digging recently is a software framework called LangChain (here are the docs) which does something pretty straightforward: it makes it easy to call OpenAI's GPT, say, a dozen times in a loop to answer a single question, and to mix in queries to Wikipedia and other databases.

This is a big deal because of a technique called ReAct, from a paper out of Princeton and Google Research (the ReAct website links to the Nov 2022 paper, sample code, etc). ReAct looks innocuous, but here's the deal: instead of asking GPT to simply do smart-autocomplete on your text, you prompt it to respond in a thought/act/observation loop. Thought: Let's think step by step.
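To make the thought/act/observation loop concrete, here's a minimal sketch of how a ReAct-style controller might work. This is not the LangChain or ReAct paper's actual code: the "model" is a scripted stand-in for GPT, and the `lookup` tool is a toy dictionary rather than a real Wikipedia query, both invented for illustration.

```python
import re

# Hypothetical toy tool: stands in for a Wikipedia (or other database) query.
TOOLS = {
    "lookup": lambda q: {
        "LangChain": "LangChain is a framework for composing LLM calls.",
    }.get(q, "No entry found."),
}

# Scripted replies standing in for what an LLM might return each turn.
SCRIPTED_REPLIES = [
    # Turn 1: the model "thinks", then decides to act by calling a tool.
    "Thought: I should look this up.\nAction: lookup[LangChain]",
    # Turn 2: having seen the observation, it answers.
    "Thought: I now know the answer.\n"
    "Final Answer: LangChain is a framework for composing LLM calls.",
]

def make_scripted_llm(replies):
    """Return a fake LLM that plays back canned replies in order."""
    it = iter(replies)
    return lambda prompt: next(it)

def react(question, llm, tools, max_steps=5):
    """Run a thought/act/observation loop until the model gives a final answer."""
    transcript = f"Question: {question}"
    for _ in range(max_steps):
        reply = llm(transcript)          # Thought + (Action or Final Answer)
        transcript += "\n" + reply
        if "Final Answer:" in reply:
            return reply.split("Final Answer:", 1)[1].strip()
        match = re.search(r"Action: (\w+)\[(.+?)\]", reply)
        if match:                        # act: run the named tool...
            tool, arg = match.groups()
            observation = tools[tool](arg)
            # ...and feed the observation back in for the next turn.
            transcript += f"\nObservation: {observation}"
    return None

answer = react("What is LangChain?", make_scripted_llm(SCRIPTED_REPLIES), TOOLS)
print(answer)
```

The point is the shape of the loop, not the parsing details: the model's output alternates with tool results in one growing transcript, so each "Thought" can build on the previous "Observation".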
Apr-6-2023, 19:10:45 GMT
- Technology