docs: improve overall language on all example pages (#1582)

Refine the language across all example pages in the documentation to
improve clarity and readability.

---------

Co-authored-by: Ayush Chaurasia <ayush.chaurarsia@gmail.com>
Author: Rithik Kumar
Date: 2024-08-31 03:48:11 +05:30
Committed by: GitHub
parent dc72ece847
commit 38015ffa7c
9 changed files with 54 additions and 57 deletions


@@ -1,18 +1,16 @@
-**Evaluation: Assessing Text Performance with Precision 📊💡**
-====================================================================
-Evaluation is a comprehensive tool designed to measure the performance of text-based inputs, enabling data-driven optimization and improvement 📈.
-By leveraging cutting-edge technologies, this provides a robust framework for evaluating reference and candidate texts across various metrics 📊, ensuring high-quality text outputs that meet specific requirements and standards 📝.
+**Evaluation Fundamentals 📊**
+**Text Evaluation 101 📚**
+Using robust framework for assessing reference and candidate texts across various metrics📊, ensure that the text outputs are high-quality and meet specific requirements and standards📝.
 | **Evaluation** | **Description** | **Links** |
 | -------------- | --------------- | --------- |
-| **Evaluating Prompts with Prompttools 🤖** | Compare, visualize & evaluate embedding functions (incl. OpenAI) across metrics like latency & custom evaluation 📈📊 | [![Github](../../assets/github.svg)][prompttools_github] <br>[![Open In Collab](../../assets/colab.svg)][prompttools_colab] |
-| **Evaluating RAG with RAGAs and GPT-4o 📊** | Evaluate RAG pipelines with cutting-edge metrics and tools, integrate with CI/CD for continuous performance checks, and generate responses with GPT-4o 🤖📈 | [![Github](../../assets/github.svg)][RAGAs_github] <br>[![Open In Collab](../../assets/colab.svg)][RAGAs_colab] |
+| **Evaluating Prompts with Prompttools 🤖** | Compare, visualize & evaluate **embedding functions** (incl. OpenAI) across metrics like latency & custom evaluation 📈📊 | [![Github](../../assets/github.svg)][prompttools_github] <br>[![Open In Collab](../../assets/colab.svg)][prompttools_colab] |
+| **Evaluating RAG with RAGAs and GPT-4o 📊** | Evaluate **RAG pipelines** with cutting-edge metrics and tools, integrate with CI/CD for continuous performance checks, and generate responses with GPT-4o 🤖📈 | [![Github](../../assets/github.svg)][RAGAs_github] <br>[![Open In Collab](../../assets/colab.svg)][RAGAs_colab] |
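The page changed above only links out to the Prompttools and RAGAs notebooks. For a concrete sense of what "assessing reference and candidate texts across various metrics" looks like, here is a minimal, dependency-free sketch; it is not taken from either notebook, and the metric and example strings are illustrative assumptions:

```python
from collections import Counter

def overlap_scores(reference: str, candidate: str) -> dict:
    """Score a candidate text against a reference text with token-overlap
    precision, recall, and F1 (a rough stand-in for metrics like ROUGE)."""
    ref_tokens = Counter(reference.lower().split())
    cand_tokens = Counter(candidate.lower().split())
    # Clipped overlap: tokens common to both texts, each counted
    # min(reference count, candidate count) times.
    overlap = sum((ref_tokens & cand_tokens).values())
    precision = overlap / max(sum(cand_tokens.values()), 1)
    recall = overlap / max(sum(ref_tokens.values()), 1)
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return {"precision": precision, "recall": recall, "f1": f1}

if __name__ == "__main__":
    reference = "The quick brown fox jumps over the lazy dog."
    candidate = "A quick brown fox jumped over a lazy dog."
    print(overlap_scores(reference, candidate))
```

The linked notebooks swap this toy metric for richer ones (latency and custom evaluations in Prompttools, faithfulness- and relevancy-style metrics in RAGAs), but the reference-versus-candidate scoring loop has the same shape.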