Title | Authors | Year | Publication
Distilling Step-by-Step! Outperforming Larger Language Models with Less Training Data and Smaller Model Sizes | Hsieh CY et al. | 2023 | arXiv preprint arXiv:2305.02301
Automatic Prompt Optimization with "Gradient Descent" and Beam Search | Pryzant R et al. | 2023 | arXiv preprint arXiv:2305.03495
ReAct: Synergizing Reasoning and Acting in Language Models | Yao S et al. | 2022 | arXiv preprint arXiv:2210.03629
Chain-of-Thought Prompting Elicits Reasoning in Large Language Models | Wei J et al. | 2022 | Advances in Neural Information Processing Systems
Tree of Thoughts: Deliberate Problem Solving with Large Language Models | Yao S et al. | 2023 | arXiv preprint arXiv:2305.10601
Language Models Enable Simple Systems for Generating Structured Views of Heterogeneous Data Lakes | Arora S et al. | 2023 | arXiv preprint arXiv:2304.09433
Flocks of Stochastic Parrots: Differentially Private Prompt Learning for Large Language Models | Duan H et al. | 2023 | arXiv preprint arXiv:2305.15594
Direct Preference Optimization: Your Language Model Is Secretly a Reward Model | Rafailov R et al. | 2023 | arXiv preprint arXiv:2305.18290
The RefinedWeb Dataset for Falcon LLM: Outperforming Curated Corpora with Web Data, and Web Data Only | Penedo G et al. | 2023 | arXiv preprint arXiv:2306.01116
Attention Is All You Need | Vaswani A et al. | 2017 | Advances in Neural Information Processing Systems