About Prompt
- Prompt Type – Dynamic
- Prompt Platform – ChatGPT, Grok, DeepSeek, Gemini, Copilot, Midjourney, Meta AI, and more
- Niche – Citizen Service Automation
- Language – English
- Category – Public Sector Applications
- Prompt Title – AI Prompt for Automating Public Service Complaint Categorization
Prompt Details
This prompt is designed for dynamic use across various AI platforms to automate the categorization of public service complaints within the Citizen Service Automation niche. It aims to enhance efficiency and accuracy in processing citizen feedback for public sector applications.
**Prompt Template:**
````
Categorize the following citizen complaint into one or more predefined categories, providing confidence levels for each category and explaining the reasoning behind your classification. Consider the context, sentiment, and specific issues mentioned within the complaint. If the complaint is ambiguous or requires further clarification, indicate this and suggest potential clarifying questions. The complaint may contain multiple issues spanning different categories.
**Predefined Categories:** (Provide a list of your predefined categories here. Example below)
* **Roads and Infrastructure:** Potholes, streetlights, sidewalks, traffic signals, snow removal
* **Waste Management:** Garbage collection, recycling, illegal dumping, yard waste
* **Parks and Recreation:** Park maintenance, playground equipment, recreational programs
* **Public Safety:** Police, fire, emergency services, crime reporting
* **Noise Complaints:** Construction noise, loud music, barking dogs
* **Water and Sewer:** Water quality, leaks, sewer backups, drainage issues
* **Other:** (For complaints not fitting into existing categories)
**Complaint Text:** {{complaint_text}}
**Output Format:**
```json
{
  "categories": [
    {
      "category": "Category Name",
      "confidence": 0.XX,
      "reasoning": "Explanation for categorization"
    },
    // … more categories as needed
  ],
  "clarification_needed": true/false,
  "clarification_questions": [
    "Question 1",
    "Question 2",
    // … more questions as needed
  ]
}
```
````
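Because downstream systems consume this JSON, it is worth validating each model reply against the expected shape before acting on it. The sketch below is one minimal way to do that in Python; `validate_response` and its checks are illustrative, not part of the prompt itself.

```python
import json

# Top-level keys the output format above requires.
REQUIRED_KEYS = {"categories", "clarification_needed", "clarification_questions"}

def validate_response(raw: str) -> dict:
    """Parse a model reply and check it matches the expected output format."""
    data = json.loads(raw)
    missing = REQUIRED_KEYS - data.keys()
    if missing:
        raise ValueError(f"missing keys: {sorted(missing)}")
    for entry in data["categories"]:
        # Each category entry must carry a name, a confidence, and a reasoning.
        if not {"category", "confidence", "reasoning"} <= entry.keys():
            raise ValueError(f"malformed category entry: {entry}")
        if not 0.0 <= entry["confidence"] <= 1.0:
            raise ValueError(f"confidence out of range: {entry['confidence']}")
    return data

reply = (
    '{"categories": [{"category": "Roads and Infrastructure", '
    '"confidence": 0.95, "reasoning": "Streetlight and pothole issues."}], '
    '"clarification_needed": false, "clarification_questions": []}'
)
result = validate_response(reply)
```

Replies that fail validation can be retried or routed to a human rather than silently mis-processed.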
**Example Usage:**
**Predefined Categories:** (Use your specific categories here)
* **Roads and Infrastructure:** … (as above)
* … (rest of your categories)
**Complaint Text:** “The streetlight on the corner of Main Street and Elm Street has been out for a week. It’s very dark and dangerous at night. Also, there’s a large pothole on Main Street a few blocks down that needs to be filled.”
**Expected Output (Example):**
```json
{
  "categories": [
    {
      "category": "Roads and Infrastructure",
      "confidence": 0.95,
      "reasoning": "The complaint mentions a broken streetlight and a pothole, both related to road infrastructure."
    }
  ],
  "clarification_needed": false,
  "clarification_questions": []
}
```
**Dynamic Elements and Best Practices:**
* **Predefined Categories:** This section is crucial for accurate categorization. Ensure the categories are comprehensive, mutually exclusive, and relevant to your specific public service domain. Regularly review and update this list based on incoming complaint data.
* **Complaint Text:** This is the dynamic input where the actual citizen complaint text will be inserted.
* **Confidence Levels:** The model should provide a confidence score (e.g., 0.85) for each assigned category, reflecting the certainty of the classification. This helps identify potential misclassifications and allows for human review when confidence is low.
* **Reasoning:** Requiring the model to explain its reasoning provides transparency and insights into the model’s decision-making process. This is invaluable for debugging and improving the model’s performance.
* **Clarification Needed & Questions:** The model should identify when a complaint is ambiguous or requires further information. It should also suggest specific clarifying questions to be posed to the citizen.
* **JSON Output:** Utilizing a structured JSON output facilitates easy integration with downstream systems and automated workflows.
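The dynamic-input and confidence-level points above can be sketched in a few lines of Python. `PROMPT_TEMPLATE` stands in for the full template, and the 0.7 threshold is an illustrative assumption, not a recommendation:

```python
# Stand-in for the full prompt template above; only the placeholder matters here.
PROMPT_TEMPLATE = (
    "Categorize the following citizen complaint ...\n"
    "**Complaint Text:** {{complaint_text}}"
)

def build_prompt(complaint: str) -> str:
    # Simple string substitution; a real pipeline might use a template engine.
    return PROMPT_TEMPLATE.replace("{{complaint_text}}", complaint)

def triage(result: dict, threshold: float = 0.7):
    """Split assigned categories into auto-accepted vs. human-review queues."""
    accepted = [c for c in result["categories"] if c["confidence"] >= threshold]
    review = [c for c in result["categories"] if c["confidence"] < threshold]
    return accepted, review
```

Routing low-confidence categories to a human reviewer keeps automation gains while containing the cost of misclassifications.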
**Further Refinements:**
* **Sentiment Analysis:** Integrate sentiment analysis to gauge the citizen’s emotional tone (positive, negative, neutral). This adds valuable context to the complaint.
* **Location Information:** If available, incorporate location data (e.g., address, GPS coordinates) from the complaint to improve routing and resolution.
* **Multilingual Support:** Adapt the prompt and predefined categories for multilingual environments to cater to a diverse citizenry.
* **Continuous Improvement:** Regularly evaluate the model’s performance and adjust the prompt, categories, and training data to improve accuracy and efficiency.
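The continuous-improvement step implies a measurable baseline. One simple metric, sketched below under the assumption that a hand-labeled sample of complaints is available, is exact-match accuracy of predicted category sets:

```python
def category_accuracy(predicted: list[list[str]], labeled: list[list[str]]) -> float:
    """Fraction of complaints whose predicted category set exactly matches the labels."""
    if not labeled:
        return 0.0
    correct = sum(1 for p, g in zip(predicted, labeled) if set(p) == set(g))
    return correct / len(labeled)
```

Tracking this over time shows whether changes to the prompt or category list actually help; per-category precision and recall would give a finer-grained picture.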
This dynamic prompt framework provides a robust and adaptable foundation for automating public service complaint categorization across diverse AI platforms, ultimately contributing to a more responsive and efficient citizen service experience.