About Prompt
- Prompt Type – Dynamic
- Prompt Platform – ChatGPT, Grok, DeepSeek, Gemini, Copilot, Midjourney, Meta AI, and more
- Niche – DevOps
- Language – English
- Category – CI/CD
- Prompt Title – DevOps Automation Agent Prompt
Prompt Details
The template is presented first, followed by a detailed explanation of its structure and a practical example.
***
### The Optimized Dynamic AI Prompt for a DevOps Automation Agent (CI/CD)
```
# ------------------------- PROMPT START -------------------------
## 1. Persona and Role ##
Act as a Senior DevOps Automation Engineer with over 15 years of experience specializing in cloud-native technologies, infrastructure as code (IaC), and secure, scalable CI/CD pipeline architecture. You are an expert in [CI_CD_TOOL] and have deep knowledge of [CLOUD_PROVIDER] services, containerization with Docker, and orchestration with Kubernetes. Your primary goal is to create robust, maintainable, and efficient automation scripts that follow industry best practices. You prioritize security, idempotency, and clarity in all your work.
## 2. Context and Goal ##
We are building a CI/CD pipeline for our application named **[APPLICATION_NAME]**. The primary goal is to fully automate the process from code commit to a successful deployment in our **[TARGET_ENVIRONMENT]** environment.
**Project Details:**
* **Application Name:** [APPLICATION_NAME]
* **Repository URL / Code Source:** [GIT_REPOSITORY_URL]
* **Technology Stack:** [TECHNOLOGY_STACK] (e.g., Node.js with React frontend, Python FastAPI backend, Java Spring Boot)
* **Build & Package Manager:** [BUILD_TOOL] (e.g., npm, Maven, pip, Gradle)
* **CI/CD Tool:** [CI_CD_TOOL] (e.g., GitHub Actions, GitLab CI, Jenkins, CircleCI)
* **Cloud Provider:** [CLOUD_PROVIDER] (e.g., AWS, Azure, GCP)
* **Deployment Target:** [DEPLOYMENT_TARGET] (e.g., Kubernetes Cluster (EKS, GKE, AKS), AWS EC2, Azure App Service, Serverless Function (Lambda, Google Cloud Functions))
* **Containerization:** Yes/No. If Yes, the base image is [DOCKER_BASE_IMAGE]. The container registry is [CONTAINER_REGISTRY] (e.g., Amazon ECR, Docker Hub, Google Artifact Registry).
* **Primary Branch for Production/Staging:** [MAIN_BRANCH_NAME] (e.g., main, master, release)
* **Development/Feature Branch Pattern:** [FEATURE_BRANCH_PATTERN] (e.g., `feature/*`, `dev/*`)
## 3. Core Task: Generate CI/CD Pipeline Configuration ##
Generate a complete, production-ready CI/CD pipeline configuration file for **[CI_CD_TOOL]**. The pipeline should be triggered on pushes to the specified branches and/or on pull request creation.
The pipeline must be logically separated into the following stages:
1. **Checkout:** Fetches the source code.
2. **Setup & Dependencies:** Sets up the environment and installs all necessary dependencies.
3. **Lint & Static Analysis:** Checks code quality and style.
4. **Unit & Integration Tests:** Runs all automated tests and generates a test coverage report.
5. **Security Scanning:** Scans for vulnerabilities in code, dependencies, and container images.
6. **Build & Package:** Compiles the application and/or builds the Docker container.
7. **Push Artifacts:** Pushes the container image to the registry or stores build artifacts.
8. **Deploy:** Deploys the application to the **[TARGET_ENVIRONMENT]** environment. This stage should only run on a successful merge to **[MAIN_BRANCH_NAME]**.
## 4. Key Requirements and Constraints ##
* **Security First:**
    * Integrate a static application security testing (SAST) tool like **[SAST_TOOL]** (e.g., SonarQube, Snyk Code).
    * Scan dependencies for known vulnerabilities using a tool like **[SCA_TOOL]** (e.g., OWASP Dependency-Check, Snyk Open Source).
    * If containerized, scan the final Docker image for vulnerabilities using **[IMAGE_SCANNING_TOOL]** (e.g., Trivy, Clair, Snyk Container).
    * Use secrets management best practices. Do not hardcode credentials. Reference them as secrets (e.g., `secrets.MY_SECRET`).
* **Environment Variables:** The pipeline must use environment variables for configuration that changes between environments (e.g., `DATABASE_URL`, `API_KEY`). Provide placeholders for these.
* **Conditional Execution:**
    * The `Deploy` stage MUST only execute on commits/merges to the `[MAIN_BRANCH_NAME]` branch.
    * All other stages (Test, Scan, Build) should run on pull requests targeting `[MAIN_BRANCH_NAME]`.
* **Notifications:** Include a step to send a notification (success or failure) to **[NOTIFICATION_CHANNEL]** (e.g., a Slack channel, Microsoft Teams).
* **Idempotency:** Ensure all deployment scripts are idempotent, meaning they can be run multiple times without causing unintended side effects.
* **Clarity and Maintainability:** The generated configuration file must be well-commented, explaining the purpose of each step and any complex commands. Use descriptive names for jobs and steps.
* **Rollback Strategy:** Include a commented-out or optional manual trigger step for a rollback to the previous stable version. Briefly explain the mechanism (e.g., re-deploying a previous Docker image tag).
## 5. Output Format and Structure ##
Provide the output in two parts:
1. **Pipeline Configuration File:** A single, complete code block containing the pipeline configuration in the correct format for **[CI_CD_TOOL]** (e.g., YAML for GitHub Actions/GitLab CI, Groovy for Jenkinsfile).
2. **Detailed Explanation:** A step-by-step explanation of the generated pipeline. Describe what each stage does, why specific tools were chosen (if applicable), and how to set up the required secrets and environment variables.
## 6. Interactive Clarification ##
If any of the provided details are ambiguous or insufficient for creating a high-quality, production-grade pipeline, ask clarifying questions before generating the full configuration. For example, you might ask about specific testing commands, required cloud permissions, or Kubernetes deployment strategies (e.g., rolling update vs. blue-green).
# ------------------------- PROMPT END -------------------------
```
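To make the "Conditional Execution" requirement concrete, here is a minimal sketch of how it typically maps onto GitHub Actions syntax. The workflow, job, and branch names are illustrative placeholders, not part of the template itself:

```yaml
# Illustrative GitHub Actions skeleton: test/scan/build stages run on
# pull requests, while the deploy job runs only on pushes to the
# primary branch.
name: ci-cd

on:
  push:
    branches: [main]          # substitute [MAIN_BRANCH_NAME]
  pull_request:
    branches: [main]

jobs:
  build-and-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # ... setup, lint, test, scan, and build steps go here ...

  deploy:
    needs: build-and-test
    # Gate the deploy stage on both the event type and the branch.
    if: github.event_name == 'push' && github.ref == 'refs/heads/main'
    runs-on: ubuntu-latest
    steps:
      - run: echo "deploy to [TARGET_ENVIRONMENT]"
```

The same gating idea exists in other tools the template supports (e.g., `rules:` in GitLab CI or `when { branch ... }` in a Jenkinsfile), which is why the prompt states the requirement tool-agnostically.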
---
### Best Practices and Structure Explained
This dynamic prompt is engineered for high-quality results by incorporating several best practices:
1. **Persona and Role Setting:** By instructing the AI to act as a “Senior DevOps Automation Engineer,” you anchor its knowledge base and response style. It will use appropriate terminology, prioritize best practices like security and idempotency, and structure the output professionally.
2. **Clear Context and Goal:** The “Context and Goal” section provides all the necessary background information. This prevents the AI from making incorrect assumptions about the tech stack, cloud provider, or deployment target.
3. **Dynamic Placeholders `[LIKE_THIS]`:** This is the core of a “dynamic” prompt. It turns the prompt into a reusable template. The user can quickly fill in the bracketed variables for any new project, ensuring consistency and saving time.
4. **Explicit Task with Stages:** Instead of just saying “make a CI/CD pipeline,” the prompt breaks the task down into specific, logical stages (Checkout, Test, Scan, Build, Deploy). This guides the AI to generate a comprehensive and well-structured pipeline.
5. **Detailed Constraints and Requirements:** This is one of the most critical sections. It narrows the solution space and forces the AI to adhere to non-negotiable standards. By specifying requirements for security scanning, secrets management, conditional execution, and notifications, you ensure the output is production-ready and not just a simple “hello world” example.
6. **Prescribed Output Format:** The “Output Format” section prevents the AI from delivering a messy, unorganized response. Requesting a code block and a separate explanation makes the output immediately usable and easy to understand.
7. **Encouraging Clarification:** The final instruction for the AI to ask questions is a powerful technique. It mitigates the risk of the AI “hallucinating” or filling in gaps with incorrect information, leading to a more accurate and reliable final product.
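As a small illustration of the secrets-management and notification constraints from section 4, a notification step in a GitHub Actions workflow might look like the sketch below. The secret name and webhook payload are assumptions for illustration; the webhook URL is read from a secret rather than hardcoded:

```yaml
# Sketch: Slack notification step that reads the webhook URL from a
# repository secret and runs whether or not earlier steps succeeded.
- name: Notify Slack
  if: always()   # run on both success and failure
  env:
    SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL }}
  run: |
    curl -X POST -H 'Content-Type: application/json' \
      --data "{\"text\": \"Pipeline for ${{ github.repository }} finished with status: ${{ job.status }}\"}" \
      "$SLACK_WEBHOOK_URL"
```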
---
### Example of the Prompt in Practice
Here is the same prompt template filled out for a specific, real-world scenario: a containerized Node.js application deploying to Amazon EKS using GitHub Actions.
```
# ------------------------- PROMPT START -------------------------
## 1. Persona and Role ##
Act as a Senior DevOps Automation Engineer with over 15 years of experience specializing in cloud-native technologies, infrastructure as code (IaC), and secure, scalable CI/CD pipeline architecture. You are an expert in GitHub Actions and have deep knowledge of AWS services, containerization with Docker, and orchestration with Kubernetes. Your primary goal is to create robust, maintainable, and efficient automation scripts that follow industry best practices. You prioritize security, idempotency, and clarity in all your work.
## 2. Context and Goal ##
We are building a CI/CD pipeline for our application named **QuantumLeap API**. The primary goal is to fully automate the process from code commit to a successful deployment in our **production** environment.
**Project Details:**
* **Application Name:** QuantumLeap API
* **Repository URL / Code Source:** https://github.com/my-org/quantum-leap-api
* **Technology Stack:** Node.js (v18) backend with Express.js, using PostgreSQL as the database.
* **Build & Package Manager:** npm
* **CI/CD Tool:** GitHub Actions
* **Cloud Provider:** AWS
* **Deployment Target:** Amazon EKS (Kubernetes) cluster named `quantum-prod-cluster`. The deployment will be managed via `kubectl apply` on a `k8s-deployment.yaml` file.
* **Containerization:** Yes. The base image is `node:18-alpine`. The container registry is Amazon ECR.
* **Primary Branch for Production/Staging:** main
* **Development/Feature Branch Pattern:** `feature/*`
## 3. Core Task: Generate CI/CD Pipeline Configuration ##
Generate a complete, production-ready CI/CD pipeline configuration file for **GitHub Actions**. The pipeline should be triggered on pushes to the `main` branch and on pull requests targeting `main`.
The pipeline must be logically separated into the following stages:
1. **Checkout:** Fetches the source code.
2. **Setup & Dependencies:** Sets up Node.js and installs npm dependencies.
3. **Lint & Static Analysis:** Runs ESLint.
4. **Unit & Integration Tests:** Runs Jest tests and generates a coverage report.
5. **Security Scanning:** Scans for vulnerabilities in code, dependencies, and container images.
6. **Build & Package:** Builds the Docker container.
7. **Push Artifacts:** Pushes the container image to Amazon ECR.
8. **Deploy:** Deploys the application to the **production** EKS cluster. This stage should only run on a successful push to the `main` branch.
## 4. Key Requirements and Constraints ##
* **Security First:**
    * Integrate a static application security testing (SAST) tool like **Snyk Code**.
    * Scan dependencies for known vulnerabilities using **npm audit**.
    * Scan the final Docker image for vulnerabilities using **Trivy**.
    * Use secrets management best practices. Do not hardcode credentials. Reference them as secrets (e.g., `secrets.AWS_ACCESS_KEY_ID`).
* **Environment Variables:** The pipeline must use environment variables for configuration that changes between environments (e.g., `DATABASE_URL`, `API_KEY`). Provide placeholders for these in the Kubernetes deployment step.
* **Conditional Execution:**
    * The `Deploy` stage MUST only execute on pushes to the `main` branch.
    * All other stages (Test, Scan, Build) should run on pull requests targeting `main`.
* **Notifications:** Include a step to send a notification (success or failure) to **a Slack channel using a webhook URL stored in secrets**.
* **Idempotency:** Ensure the `kubectl apply` command is used for deployment, as it is declarative and idempotent.
* **Clarity and Maintainability:** The generated YAML file must be well-commented, explaining the purpose of each step and any complex commands. Use descriptive names for jobs and steps.
* **Rollback Strategy:** Include a commented-out or optional manual trigger step for a rollback to the previous stable version. Briefly explain how to trigger it and that it would re-apply the manifest with the previous image tag.
## 5. Output Format and Structure ##
Provide the output in two parts:
1. **Pipeline Configuration File:** A single, complete code block containing the GitHub Actions workflow YAML file (e.g., `ci-cd.yml`).
2. **Detailed Explanation:** A step-by-step explanation of the generated pipeline. Describe what each stage does, why specific tools were chosen, and how to set up the required GitHub Actions secrets (`AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, `ECR_REGISTRY`, `SLACK_WEBHOOK_URL`).
## 6. Interactive Clarification ##
If any of the provided details are ambiguous or insufficient for creating a high-quality, production-grade pipeline, ask clarifying questions before generating the full configuration. For example, ask for the path to the `k8s-deployment.yaml` file within the repository and the name of the container in that file that needs its image updated.
# ------------------------- PROMPT END -------------------------
```
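For reference, the push-and-deploy portion of the workflow this filled-in prompt should produce might resemble the following. This is a hedged sketch rather than guaranteed model output: the AWS region, image tag scheme, and manifest path are assumptions that a good model would confirm via the clarification step in section 6.

```yaml
# Sketch of the expected deploy job in the generated GitHub Actions workflow.
deploy:
  if: github.event_name == 'push' && github.ref == 'refs/heads/main'
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v4

    - name: Configure AWS credentials
      uses: aws-actions/configure-aws-credentials@v4
      with:
        aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
        aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
        aws-region: us-east-1   # assumption; substitute the real region

    - name: Log in to Amazon ECR
      uses: aws-actions/amazon-ecr-login@v2

    - name: Build, scan, and push image
      run: |
        docker build -t "${{ secrets.ECR_REGISTRY }}/quantum-leap-api:${{ github.sha }}" .
        # Fail the job if Trivy finds high or critical vulnerabilities.
        trivy image --exit-code 1 --severity HIGH,CRITICAL \
          "${{ secrets.ECR_REGISTRY }}/quantum-leap-api:${{ github.sha }}"
        docker push "${{ secrets.ECR_REGISTRY }}/quantum-leap-api:${{ github.sha }}"

    - name: Deploy to EKS
      run: |
        aws eks update-kubeconfig --name quantum-prod-cluster
        kubectl apply -f k8s-deployment.yaml   # declarative and idempotent
```

Comparing output like this against the requirements in section 4 (branch gating, secrets references, image scanning, idempotent deployment) is a quick way to verify that the model honored the constraints.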