
# Evaluation: Assessing Text Performance with Precision 📊💡

Evaluation is a comprehensive tool designed to measure the performance of text-based inputs, enabling data-driven optimization and improvement 📈.

## Text Evaluation 101 📚

Using a robust framework for assessing reference and candidate texts across various metrics 📊, you can ensure that text outputs are high quality and meet specific requirements and standards 📝.
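As a minimal sketch of what comparing a reference text against a candidate text can look like, the snippet below computes token-overlap precision, recall, and F1 in plain Python. This is an illustrative toy metric only — the examples linked below use dedicated tooling (prompttools, RAGAs) with far richer metrics, and `token_overlap_scores` is a hypothetical helper, not part of any library.

```python
# Toy reference-vs-candidate comparison: precision/recall/F1 over
# shared lowercase tokens. Illustrative only; real evaluation
# frameworks use semantic and model-based metrics.

def token_overlap_scores(reference: str, candidate: str) -> dict:
    """Return precision, recall, and F1 of shared lowercase tokens."""
    ref_tokens = set(reference.lower().split())
    cand_tokens = set(candidate.lower().split())
    if not ref_tokens or not cand_tokens:
        return {"precision": 0.0, "recall": 0.0, "f1": 0.0}
    overlap = ref_tokens & cand_tokens
    precision = len(overlap) / len(cand_tokens)   # overlap relative to candidate
    recall = len(overlap) / len(ref_tokens)       # overlap relative to reference
    f1 = 0.0 if precision + recall == 0 else (
        2 * precision * recall / (precision + recall)
    )
    return {"precision": precision, "recall": recall, "f1": f1}

scores = token_overlap_scores(
    "LanceDB is a vector database for AI applications",
    "LanceDB is a database for AI",
)
print(scores)  # every candidate token appears in the reference, so precision is 1.0
```

In practice you would replace this lexical overlap with the metrics provided by the evaluation tools in the table below.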

| Evaluation | Description | Links |
|:---|:---|:---|
| Evaluating Prompts with Prompttools 🤖 | Compare, visualize & evaluate embedding functions (incl. OpenAI) across metrics like latency & custom evaluation 📈📊 | Github · Open In Colab |
| Evaluating RAG with RAGAs and GPT-4o 📊 | Evaluate RAG pipelines with cutting-edge metrics and tools, integrate with CI/CD for continuous performance checks, and generate responses with GPT-4o 🤖📈 | Github · Open In Colab |