Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks, Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel, Douwe Kiela, 2020, Advances in Neural Information Processing Systems (NeurIPS), DOI: 10.48550/arXiv.2005.11401 - Introduces the foundational Retrieval-Augmented Generation (RAG) architecture, highlighting the need for, and methods of, integrating retrieved knowledge into language model prompts.
Prompt Engineering Guide, Prompt Engineering Guide Contributors, 2024 - A comprehensive online resource providing strategies and techniques for designing effective prompts for large language models, including how to structure prompts with external context.
LangChain Documentation - Prompt Templates, LangChain, 2024 - Official documentation illustrating the use of prompt templates in the LangChain framework; a practical approach for managing and injecting context into LLM prompts.
Lost in the Middle: How Language Models Use Long Contexts, Nelson F. Liu, Kevin Lin, John Hewitt, Ashwin Paranjape, Michele Bevilacqua, Fabio Petroni, Percy Liang, 2023, Transactions of the Association for Computational Linguistics (TACL), DOI: 10.48550/arXiv.2307.03172 - This research investigates how the placement of information within long contexts affects language model performance, specifically discussing phenomena like recency bias relevant to context injection.