Clinical cancer staging traditionally requires specialized expertise to interpret complex imaging reports, creating bottlenecks in diagnosis and treatment planning. This constraint becomes particularly challenging when analyzing large datasets for research or quality improvement initiatives across healthcare systems.

Two advanced AI language models demonstrated high accuracy in analyzing prostate cancer staging from PSMA PET-CT scan reports: Gemini 2.5 Pro reached 93.8% and ChatGPT-4o reached 91.3% agreement with expert nuclear medicine specialists. Both systems classified tumor stage, lymph node involvement, metastasis status, and overall disease volume using structured prompts embedded with established clinical criteria. The models processed unstructured Turkish-language radiology reports and extracted standardized staging information with Cohen's kappa values exceeding 0.87, a level conventionally interpreted as almost perfect agreement with expert readers.
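For readers unfamiliar with the agreement statistic, Cohen's kappa corrects raw agreement for the agreement expected by chance. A minimal sketch of the calculation, using entirely made-up N-stage labels (not data from the study):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters over the same items."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: fraction of items both raters label identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement under independence, from each rater's label frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[label] * freq_b.get(label, 0) for label in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical example: AI-extracted vs. expert N-stage labels for 8 reports.
ai     = ["N0", "N1", "N0", "N1", "N0", "N0", "N1", "N0"]
expert = ["N0", "N1", "N0", "N1", "N0", "N1", "N1", "N0"]
print(round(cohens_kappa(ai, expert), 3))  # 7/8 raw agreement -> kappa 0.75
```

Kappa values above roughly 0.81 are conventionally labeled "almost perfect" agreement, which is why the study's values above 0.87 are described as near-expert.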

This performance represents a significant advance in medical AI, as cancer staging requires multi-step clinical reasoning rather than simple pattern recognition. The ability to automatically parse narrative radiology reports could accelerate treatment decisions, enable large-scale retrospective studies, and standardize staging across institutions with varying levels of expertise. However, the study's small sample (80 reports) and single-language (Turkish) testing warrant cautious implementation. While these results suggest AI could serve as a valuable clinical decision support tool, the technology likely functions best as an augmentation to, rather than a replacement for, specialized radiological expertise in oncology care pathways.