This page summarizes the GUIDE-LLM checklist:
- Core items represent the minimum required reporting standards.
- Optional items provide additional context when relevant.
## ✔️ Core Items
| Item | Description |
|---|---|
| A.1 | Purpose(s) for which LLMs were used in this project |
| A.2 | Human-in-the-loop vs. fully automated |
| B.1 | Model name, provider, size, version/ID, date of access, source link |
| B.2 | API/web/local access; chat vs. separate call mode |
| B.3 | Relevant parameters (temperature, max tokens, seed, runs) |
| B.4 | Describe any fine-tuning or customization |
| B.5 | Whether the LLM session retained state across interactions |
| C.1 | Exact prompt(s) reported |
| C.2 | System-wide instructions (if any) |
| D.1 | Handling of personal/sensitive data |
| E.1 | Human validation of LLM outputs |
| E.2 | Any filtering, reformatting, or other post-processing |
| F.1 | Code/notebooks/scripts for LLM calls shared |
| G.1 | Funding, support, or relevant relationships |
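Items B.1–B.3 and F.1 are easiest to satisfy by logging every call's configuration alongside the script that made the call. The following is a minimal sketch of such a log in Python; all model names and values are illustrative, not part of the checklist itself:

```python
import json

# Hypothetical record of one LLM run, covering checklist items B.1-B.3.
run_config = {
    "model": "example-model-1.0",          # B.1: name/size/version (illustrative)
    "provider": "ExampleProvider",         # B.1: provider
    "access_date": "2024-01-15",           # B.1: date of access
    "access_mode": "API, separate calls",  # B.2: API/web/local; chat vs. separate calls
    "temperature": 0.0,                    # B.3: decoding parameters
    "max_tokens": 512,
    "seed": 42,
    "runs": 3,                             # B.3: number of repeated runs
}

# Writing the record to a file kept next to the calling script makes it
# straightforward to share both together (F.1).
with open("llm_run_config.json", "w") as f:
    json.dump(run_config, f, indent=2)
```

A JSON file like this, committed with the code, lets readers reproduce the exact call settings without digging through the script.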
## ➕ Optional Items
| Optional Item | Fields |
|---|---|
| Justification for LLM choice | Performance; Transparency; Reproducibility; Ethical considerations; Others (e.g., cost, ease-of-use) |
| Rationale for prompt design | |
| Comparison against other methods/LLMs | |
| Training data leakage risks addressed | |
| Risk of bias or systematic differences affecting conclusions | |
| Conversation transcripts | |
| Ethical implications of the research | |
| Computational resources | |
Download the full checklist with detailed instructions:
For practical illustrations, see the how-to-use guide, which includes several checklists filled out using real examples.