XLang Paper Reading
🧠A curated collection of papers on building and evaluating language model agents via executable language grounding, covering LLM code generation, agents with tool use, web grounding, and robotics research.
A benchmark for evaluating the code generation capabilities of large language models, featuring 1,140 software-engineering-oriented programming tasks with two modes (Complete and Instruct) that test models on complex instructions and diverse function-call scenarios.
A practical guide for building production-ready LLM applications and advanced agents using Python, LangChain, and LangGraph, ideal for developers looking to move AI prototypes to production environments.