# Awesome Long Chain-of-Thought Reasoning

## Overview
A curated list of resources dedicated to Long Chain-of-Thought (Long-CoT) Reasoning. Following the release of models like OpenAI o1 and DeepSeek-R1, "Long Chain-of-Thought"—performing deep, multi-step internal reasoning before responding—has become a key technique for enhancing logical capabilities in Large Language Models.
This project systematically organizes research in the Long-CoT field, addressing the information fragmentation caused by the area's explosive growth.
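Models in this family typically emit the reasoning trace inline, delimited from the final answer; DeepSeek-R1, for example, wraps its chain of thought in `<think>...</think>` tags. A minimal sketch of separating the two (the tag convention is model-specific, so treat the regex as illustrative):

```python
import re

def split_reasoning(output: str):
    """Split a model response into (reasoning trace, final answer).

    Assumes the DeepSeek-R1-style convention of wrapping the chain of
    thought in <think>...</think> tags; other models use other markers.
    """
    match = re.search(r"<think>(.*?)</think>", output, re.DOTALL)
    if match:
        reasoning = match.group(1).strip()
        answer = output[match.end():].strip()
    else:
        # No explicit trace: treat the whole output as the answer.
        reasoning, answer = "", output.strip()
    return reasoning, answer
```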
## Core Value
- Academic Research Portal: Quick access to SOTA papers and theoretical foundations in Long-CoT
- Development Reference: Index of open-source reasoning chain datasets and RL fine-tuning code
- Technology Trend Tracking: Understanding mainstream long-reasoning model architectures and paradigm evolution
## Resource Categories
| Category | Description |
|---|---|
| Paper Collection | ArXiv academic paper index related to Long-CoT |
| Open-source Implementations | Related GitHub repos (e.g., DeepSeek-R1, Open-O1) |
| Datasets | Data resources for training long-reasoning capabilities |
| Methodologies | Sub-directions including Tree of Thoughts, Self-Correction, RL Fine-tuning |
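To make the Methodologies row concrete: Tree of Thoughts explores several candidate reasoning steps at each depth and keeps only the most promising partial chains, i.e. a beam search over thoughts. A minimal sketch, where `expand` and `score` are placeholders for model calls (sampling next thoughts and judging partial chains, respectively):

```python
def tree_of_thoughts(root, expand, score, beam_width=2, depth=3):
    """Beam search over partial reasoning chains.

    expand(state) -> candidate next states (in practice, LLM samples)
    score(state)  -> heuristic value of a partial chain (in practice, an LLM judge)
    """
    frontier = [root]
    for _ in range(depth):
        # Generate all successors of the current partial chains.
        candidates = [s for state in frontier for s in expand(state)]
        if not candidates:
            break
        # Keep only the beam_width highest-scoring chains.
        frontier = sorted(candidates, key=score, reverse=True)[:beam_width]
    return max(frontier, key=score)
```

With a toy state space (integers, successors `x + 1` and `x * 2`, score = value), `tree_of_thoughts(1, lambda x: [x + 1, x * 2], lambda x: x)` greedily tracks the doubling branch and returns 8 after three steps.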
## Use Cases
- LLM Reasoning Research
- Prompt Engineering & System 2 Thinking exploration
- RLHF/RLAIF-related Reinforcement Learning applications
- Multi-modal long-reasoning tasks
## Quick Start
```bash
# Clone for local browsing
git clone https://github.com/LightChen233/Awesome-Long-Chain-of-Thought-Reasoning.git
```
Alternatively, visit the GitHub repository directly, browse the categorized table of contents in the README, and follow the resource links of interest.