In this post, we’ll talk about why fine-tuning is probably not necessary for your app, and why applying two of the most common techniques to the base GPT models, few-shot prompting and retrieval-augmented generation (RAG), is sufficient for most use cases.
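To make the first of those techniques concrete, here’s a minimal sketch of few-shot prompting a base GPT model through the OpenAI Python client. The model name, the sentiment-classification task, and the `classify_sentiment` helper are placeholders chosen for illustration, not details from this post.

```python
# Minimal few-shot prompting sketch.
# Assumptions: the `openai` package (v1+) is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()

# A handful of worked examples shown in-context; no fine-tuning required.
FEW_SHOT_MESSAGES = [
    {"role": "system", "content": "Classify each review as positive or negative."},
    {"role": "user", "content": "The battery died after two days."},
    {"role": "assistant", "content": "negative"},
    {"role": "user", "content": "Setup took five minutes and it just works."},
    {"role": "assistant", "content": "positive"},
]

def classify_sentiment(review: str) -> str:
    """Append the new review after the examples and let the model follow the pattern."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=FEW_SHOT_MESSAGES + [{"role": "user", "content": review}],
        temperature=0,
    )
    return response.choices[0].message.content.strip()

print(classify_sentiment("The screen scratched within the first week."))
```

A RAG setup works the same way at the prompt level: instead of (or in addition to) hand-written examples, you prepend passages retrieved from your own data to the user message before calling the model.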
We encourage people to work on problems that are neglected by others and large in scale. Unfortunately, those are precisely the problems where people can do the most damage if their approach isn’t carefully thought through. If a problem is very important, then setting back the cause is very bad. If a problem is so neglected that you’re among the first focused on it, then you’ll have a disproportionate influence on the field’s reputation, how likely others are to enter it, and many early decisions that could have path-dependent effects on the field’s long-term success.