How to Auto-optimize Prompts for Domain Tasks? Adaptive Prompting and Reasoning through Evolutionary Domain Knowledge Adaptation

NeurIPS 2025

Abstract

Designing optimal prompts and reasoning processes for large language models (LLMs) on domain-specific tasks is both necessary and challenging in real-world applications. Determining how to integrate domain knowledge, enhance reasoning efficiency, and even provide domain experts with refined knowledge-integration hints are particularly crucial yet unresolved tasks. In this research, we propose Evolutionary Graph Optimization for Prompting (EGO-Prompt), an automated framework for designing better prompts and efficient reasoning processes while providing an enhanced, causal-informed view of the task. EGO-Prompt begins with a general prompt and a fault-tolerant initial Semantic Causal Graph (SCG) description constructed by human experts, which is then automatically refined and optimized to guide LLM reasoning. Recognizing that expert-defined SCGs may be partial or imperfect and that their optimal integration varies across LLMs, EGO-Prompt introduces a novel causal-guided textual gradient process in two steps: first, generating nearly deterministic reasoning guidance from the SCG for each instance, and second, adapting the LLM to effectively utilize the guidance alongside the original input. An iterative optimization algorithm further refines both the SCG and the reasoning mechanism using textual gradients computed against ground-truth labels. We tested the framework on real-world public health, transportation, and human behavior tasks. EGO-Prompt achieves 7.32%–12.61% higher F1 than cutting-edge methods, and allows small models to reach the performance of larger models at under 20% of the original cost. It also outputs a refined, domain-specific SCG that improves interpretability.
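The two-step causal-guided textual gradient process described above can be sketched as a simple optimization loop. This is an illustrative sketch only: the function names (`generate_guidance`, `llm_predict`, `textual_gradient`, `apply_gradient`) are hypothetical placeholders standing in for LLM calls, not the authors' actual API.

```python
# Hypothetical sketch of one EGO-Prompt optimization iteration.
# Real implementations would back each function with an LLM call.

def generate_guidance(scg: str, instance: str) -> str:
    """Step 1: derive near-deterministic reasoning guidance from the SCG."""
    return f"Given causal structure [{scg}], reason about: {instance}"

def llm_predict(prompt: str, guidance: str, instance: str) -> str:
    """Step 2: the LLM uses the guidance alongside the original input.
    Stub standing in for a real model call."""
    return "prediction"

def textual_gradient(pred: str, label: str) -> str:
    """Natural-language feedback comparing prediction with ground truth."""
    return "" if pred == label else f"expected {label}, got {pred}"

def apply_gradient(text: str, gradient: str) -> str:
    """Revise a prompt or SCG description using the textual feedback."""
    return text if not gradient else f"{text} [revised per: {gradient}]"

def ego_prompt_step(prompt: str, scg: str, batch):
    """One iteration: refine both the prompt and the SCG from a batch
    of (instance, ground-truth label) pairs."""
    for instance, label in batch:
        guidance = generate_guidance(scg, instance)
        pred = llm_predict(prompt, guidance, instance)
        grad = textual_gradient(pred, label)
        prompt = apply_gradient(prompt, grad)
        scg = apply_gradient(scg, grad)
    return prompt, scg
```

The loop mirrors the paper's description: guidance is generated per instance from the SCG, the LLM consumes it with the input, and textual gradients from ground truth update both the SCG and the prompt.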

Figures

Overview of the proposed EGO-Prompt framework. (a) LLMs often struggle with domain-specific tasks due to gaps in optimal prompt design and domain knowledge. Existing methods rely on external databases or pre-established graphs. In comparison, EGO-Prompt evolutionarily incorporates expert knowledge at minimal cost. (b) We represent external knowledge as a graph-based structure. A graph-enhanced prompt is then generated to guide the LLM’s reasoning. Both the graph and the prompt are iteratively optimized using textual gradients from ground-truth data.

BibTeX citation

@inproceedings{zhao2025how,
  author    = {Zhao, Yang and Wang, Pu and Yang, Hao Frank},
  title     = {How to Auto-optimize Prompts for Domain Tasks? Adaptive Prompting and Reasoning through Evolutionary Domain Knowledge Adaptation},
  booktitle = {The Thirty-Ninth Annual Conference on Neural Information Processing Systems (NeurIPS)},
  year      = {2025},
}