From e2ca8daee1fdc8586851943350128f9bed8ffca1 Mon Sep 17 00:00:00 2001
From: Prashant Dixit <54981696+PrashantDixit0@users.noreply.github.com>
Date: Wed, 2 Oct 2024 21:15:24 +0530
Subject: [PATCH] docs: saleforce's sfr rag (#1717)

This PR adds Salesforce's newly released SFR RAG
---
 docs/mkdocs.yml | 5 ++++-
 docs/src/rag/sfr_rag.md | 17 +++++++++++++++++
 2 files changed, 21 insertions(+), 1 deletion(-)
 create mode 100644 docs/src/rag/sfr_rag.md

diff --git a/docs/mkdocs.yml b/docs/mkdocs.yml
index 5c91577f..5abf132e 100644
--- a/docs/mkdocs.yml
+++ b/docs/mkdocs.yml
@@ -120,6 +120,7 @@ nav:
       - Graph RAG: rag/graph_rag.md
       - Self RAG: rag/self_rag.md
       - Adaptive RAG: rag/adaptive_rag.md
+      - SFR RAG: rag/sfr_rag.md
       - Advanced Techniques:
         - HyDE: rag/advanced_techniques/hyde.md
         - FLARE: rag/advanced_techniques/flare.md
@@ -247,6 +248,7 @@ nav:
       - Graph RAG: rag/graph_rag.md
       - Self RAG: rag/self_rag.md
       - Adaptive RAG: rag/adaptive_rag.md
+      - SFR RAG: rag/sfr_rag.md
       - Advanced Techniques:
         - HyDE: rag/advanced_techniques/hyde.md
         - FLARE: rag/advanced_techniques/flare.md
@@ -362,4 +364,5 @@ extra:
     - icon: fontawesome/brands/x-twitter
      link: https://twitter.com/lancedb
     - icon: fontawesome/brands/linkedin
-      link: https://www.linkedin.com/company/lancedb
+      link: https://www.linkedin.com/company/lancedb
+
\ No newline at end of file

diff --git a/docs/src/rag/sfr_rag.md b/docs/src/rag/sfr_rag.md
new file mode 100644
index 00000000..9f063575
--- /dev/null
+++ b/docs/src/rag/sfr_rag.md
@@ -0,0 +1,17 @@
+**SFR RAG 📑**
+====================================================================
+Salesforce AI Research introduces SFR-RAG, a 9-billion-parameter language model trained with a strong emphasis on reliable, precise, and faithful contextual generation for real-world RAG use cases and related agentic tasks. These abilities include precise factual knowledge extraction, distinguishing relevant from distracting contexts, citing appropriate sources alongside answers, complex multi-hop reasoning over multiple contexts, consistent format following, and refraining from hallucination on unanswerable queries.
+
+**[Official Implementation](https://github.com/SalesforceAIResearch/SFR-RAG)**
+
+<figure markdown>
+  ![agent-based-rag](https://raw.githubusercontent.com/lancedb/assets/main/docs/assets/rag/salesforce_contextbench.png)
+  <figcaption>Average Scores in ContextualBench: Source</figcaption>
+</figure>
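
To make these contextual abilities concrete, here is a minimal sketch of the kind of pipeline SFR-RAG targets: retrieve supporting passages from a LanceDB table, number them, and prompt the model to answer with citations or to say "unanswerable". The table name, embedding model, toy passages, and prompt wording are illustrative assumptions rather than SFR-RAG's official prompt format, and the final generation call is left to whichever endpoint serves the model.

```python
import lancedb
from sentence_transformers import SentenceTransformer

# Any sentence encoder works for retrieval; MiniLM keeps the example small.
encoder = SentenceTransformer("all-MiniLM-L6-v2")

# Toy corpus indexed into a local LanceDB table (names are illustrative).
passages = [
    "SFR-RAG is a 9-billion-parameter model from Salesforce AI Research.",
    "ContextualBench bundles seven contextual QA benchmarks with consistent setups.",
    "Command-R+ is a much larger model evaluated on the same benchmarks.",
]
db = lancedb.connect("./lancedb")
table = db.create_table(
    "sfr_rag_demo",
    data=[{"text": t, "vector": encoder.encode(t).tolist()} for t in passages],
    mode="overwrite",
)

def build_contextual_prompt(question: str, k: int = 3) -> str:
    """Retrieve top-k passages and format a citation-style contextual prompt."""
    hits = table.search(encoder.encode(question).tolist()).limit(k).to_list()
    contexts = "\n".join(f"[{i + 1}] {hit['text']}" for i, hit in enumerate(hits))
    return (
        "Answer the question using ONLY the numbered contexts below and cite "
        "the numbers you used. If the contexts do not contain the answer, "
        "reply with 'unanswerable'.\n\n"
        f"Contexts:\n{contexts}\n\nQuestion: {question}\nAnswer:"
    )

prompt = build_contextual_prompt("How many parameters does SFR-RAG have?")
# Send `prompt` to wherever SFR-RAG (or any instruction-tuned LLM) is served.
print(prompt)
```

The same prompt can be sent unchanged to a baseline instruction-tuned model, which makes it easy to compare citation quality and refusal behaviour on your own data.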
+
To reliably evaluate LLMs in contextual question-answering for RAG, Salesforce introduced [ContextualBench](https://huggingface.co/datasets/Salesforce/ContextualBench?ref=blog.salesforceairesearch.com), featuring 7 benchmarks such as [HotpotQA](https://arxiv.org/abs/1809.09600?ref=blog.salesforceairesearch.com) and [2WikiHopQA](https://www.aclweb.org/anthology/2020.coling-main.580/?ref=blog.salesforceairesearch.com) with consistent setups. A minimal sketch for loading these benchmarks from the Hugging Face Hub is included at the end of this page.

SFR-RAG outperforms GPT-4o, achieving state-of-the-art results in 3 out of 7 benchmarks, and significantly surpasses Command-R+ while using 10 times fewer parameters. It also excels at handling context, even when facts are altered or conflicting.

[Salesforce AI Research Blog](https://blog.salesforceairesearch.com/sfr-rag/)
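
To experiment with the same evaluation data, ContextualBench can be pulled directly from the Hugging Face Hub with the `datasets` library. The sketch below assumes each benchmark is exposed as a dataset configuration, so it lists the configurations first instead of guessing their names; the exact schema of each example may differ per benchmark.

```python
from datasets import get_dataset_config_names, load_dataset

# List the benchmark configurations bundled in ContextualBench
# (HotpotQA, 2WikiHopQA, ...), then load one of them.
configs = get_dataset_config_names("Salesforce/ContextualBench")
print(configs)

benchmark = load_dataset("Salesforce/ContextualBench", configs[0])
print(benchmark)                   # available splits and their sizes
first_split = next(iter(benchmark))
print(benchmark[first_split][0])   # peek at a single example record
```

Each example can then be run through a citation-style contextual prompt like the one sketched above to check answer accuracy, citation quality, and refusal on unanswerable questions.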