Step 1 (1 minute): Create a document index (a collection of documents that will be queried together at run time), then upload documents through either our API endpoint or our UI.
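Here's a minimal sketch of what an API-based upload might look like in Python. The endpoint URL, header name, and field names are assumptions for illustration, not the confirmed contract; check Vellum's API docs for the exact request shape.

```python
# Sketch: uploading a document to an index over HTTP.
# Endpoint path, header, and field names are assumed -- verify against the docs.
import requests

VELLUM_API_KEY = "your-api-key"      # from your Vellum account settings
INDEX_NAME = "my-document-index"     # hypothetical index name

with open("handbook.pdf", "rb") as f:
    response = requests.post(
        "https://documents.vellum.ai/v1/upload-document",  # assumed endpoint
        headers={"X-API-KEY": VELLUM_API_KEY},
        data={
            "add_to_index_names": INDEX_NAME,   # index to attach this document to
            "label": "Employee Handbook",       # human-readable name
        },
        files={"contents": f},
    )
response.raise_for_status()
print(response.json())  # metadata for the newly uploaded document
```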


Step 2 (2 minutes): Once your documents are indexed using your chosen embedding model and chunking strategy, they're stored in a vector database and can be queried through our search API. Specify how many chunks you want returned per query.
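A query against the search API could look something like the sketch below. Again, the endpoint path, payload shape, and response field names are assumptions to show the flow; consult the API reference for the real ones.

```python
# Sketch: querying a document index through the search API.
# Endpoint, payload keys, and response fields are assumed, not confirmed.
import requests

VELLUM_API_KEY = "your-api-key"

response = requests.post(
    "https://predict.vellum.ai/v1/search",  # assumed endpoint
    headers={"X-API-KEY": VELLUM_API_KEY},
    json={
        "index_name": "my-document-index",
        "query": "What is our parental leave policy?",
        "options": {"result_count": 5},     # number of chunks to return
    },
)
response.raise_for_status()
for result in response.json()["results"]:   # assumed response field names
    print(result["text"][:80], "...")
```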


Step 3 (5 minutes): Go to the Vellum Playground, start with one of our predefined prompt templates, do some prompt engineering, add the relevant chunks to your test cases, and confirm the LLM is returning reasonable results.
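Conceptually, "adding the relevant chunks" means injecting the retrieved text into your prompt before it's sent to the LLM. The sketch below shows one way that injection typically works; the template wording is ours for illustration, not one of the predefined templates.

```python
# Illustration: injecting retrieved chunks into a prompt template.
chunks = ["Chunk 1 text ...", "Chunk 2 text ..."]  # e.g. from the search API

context = "\n\n".join(chunks)
prompt = (
    "Answer the question using only the context below.\n\n"
    f"Context:\n{context}\n\n"
    "Question: What is our parental leave policy?"
)
print(prompt)  # paste this into a Playground test case, or send it via the API
```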
