About Prompt
- Prompt Type – Dynamic
- Prompt Platform – ChatGPT, Grok, DeepSeek, Gemini, Copilot, Midjourney, Meta AI, and more
- Niche – Conversational AI
- Language – English
- Category – AI Platforms
- Prompt Title – AI Prompt for Multilingual Question-Answering System Development
Prompt Details
This prompt facilitates the development of a robust, multilingual conversational question-answering system adaptable across various AI platforms. It leverages dynamic variables to specify language, context, question type, desired response format, and evaluation criteria, enabling granular control over the system’s behavior.
**Prompt Template:**
```
You are developing a multilingual conversational question-answering system. Given the following parameters, generate the specified output:
**1. Target Languages (required):**
– List of ISO 639-1 language codes (e.g., en, fr, es, de, zh). Specify at least two languages.
**2. Context (required):**
– Provide the textual context from which the question should be answered. This could be a paragraph, a document, a webpage URL, or a knowledge base identifier. Ensure the context is relevant to the target languages. If using a URL, specify how to access it (e.g., “Retrieve content from URL X using GET request”). If referencing a knowledge base, provide clear instructions for accessing the relevant information.
**3. Question (required):**
– The question to be answered in one of the target languages. Clearly indicate the language used for the question.
**4. Question Type (optional):**
– Specify the type of question, such as:
– Factoid (seeking a specific fact)
– Definition (seeking the meaning of a term)
– Explanation (seeking a detailed explanation)
– Opinion (seeking a subjective viewpoint – use with caution)
– Translation (translating the question to another language)
– Summarization (summarizing the context)
– Other (specify the type)
– If unspecified, the system should infer the question type.
**5. Desired Response Format (optional):**
– Specify the desired format for the answer, such as:
– Text
– JSON (specify the schema)
– XML
– HTML
– Code (specify the language)
– If unspecified, the system should default to a concise textual answer.
**6. Evaluation Criteria (optional):**
– Specify criteria for evaluating the quality of the generated answer, such as:
– Accuracy
– Relevance
– Fluency
– Conciseness
– Completeness
– Objectivity (for factual questions)
– Creativity (for opinion-based questions – use with caution)
– Include specific metrics or guidelines whenever possible (e.g., “Accuracy should be measured by comparing the answer to a gold-standard dataset.”).
**7. Output (required):**
– Based on the provided parameters, generate one of the following:
– **“Answer:”** followed by the answer to the question in the same language as the question. If translation is requested, provide the translated question and the answer in the target translation language.
– **“Code:”** followed by the code necessary to implement a component of the question-answering system, such as a function for retrieving context, processing the question, generating the answer, or evaluating the answer. Specify the programming language for the code.
– **“System Design:”** followed by a design document outlining the architecture and components of the multilingual question-answering system. This should include details on language processing, knowledge representation, information retrieval, and answer generation.
– **“Evaluation Results:”** followed by an evaluation of the system’s performance on a specific dataset or task, based on the specified evaluation criteria.
```
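Parameter 2 of the template allows the context to be supplied as a URL retrieved with a GET request. The snippet below is a minimal sketch of that retrieval step in Python using the `requests` library; the function name, timeout, and absence of HTML cleaning are illustrative assumptions rather than part of the prompt.

```python
import requests


def fetch_context_from_url(url: str, timeout: float = 10.0) -> str:
    """Fetch the raw body of a web page to use as the Context parameter.

    A plain GET request is assumed to be enough; real deployments may need
    HTML stripping, authentication, or a knowledge-base client instead.
    """
    response = requests.get(url, timeout=timeout)
    response.raise_for_status()  # surface retrieval failures instead of answering from nothing
    return response.text


# context = fetch_context_from_url("https://example.com/eiffel-tower")  # hypothetical URL
```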
**Example Usage:**
```
1. Target Languages: en, es
2. Context: “The Eiffel Tower is a wrought-iron lattice tower on the Champ de Mars in Paris, France. It is named after the engineer Gustave Eiffel, whose company designed and built the tower.”
3. Question (en): “Who designed the Eiffel Tower?”
4. Question Type: Factoid
5. Desired Response Format: Text
6. Evaluation Criteria: Accuracy
7. Output: Answer:
```
**Expected Output:**
```
Answer: Gustave Eiffel’s company.
```
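To run this example end to end, the seven parameters can be rendered into a single prompt string and sent to whichever platform you are targeting. The sketch below assumes the OpenAI Python SDK and a placeholder model name purely for illustration; any chat-capable client from the platforms listed above could be substituted.

```python
from openai import OpenAI  # assumption: OpenAI Python SDK; swap in your platform's client

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The seven parameters from the Example Usage above.
parameters = {
    "Target Languages": "en, es",
    "Context": (
        "The Eiffel Tower is a wrought-iron lattice tower on the Champ de Mars in Paris, "
        "France. It is named after the engineer Gustave Eiffel, whose company designed "
        "and built the tower."
    ),
    "Question (en)": "Who designed the Eiffel Tower?",
    "Question Type": "Factoid",
    "Desired Response Format": "Text",
    "Evaluation Criteria": "Accuracy",
    "Output": "Answer:",
}

# Render the numbered parameter block beneath the template's opening instruction.
filled_prompt = (
    "You are developing a multilingual conversational question-answering system. "
    "Given the following parameters, generate the specified output:\n\n"
    + "\n".join(
        f"{i}. {name}: {value}" for i, (name, value) in enumerate(parameters.items(), start=1)
    )
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption: placeholder model name
    messages=[{"role": "user", "content": filled_prompt}],
)
print(response.choices[0].message.content)  # expected to begin with "Answer:"
```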
By modifying the parameters in this template, you can generate various outputs for developing and evaluating your multilingual question-answering system. This dynamic approach allows for extensive experimentation and fine-tuning of the system across diverse languages and contexts. Ensure that the context provided is sufficient and relevant for the question being asked. For complex tasks, break down the prompt into smaller, more manageable sub-prompts to achieve optimal results.
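When Accuracy is chosen as the evaluation criterion, the comparison against a gold-standard dataset mentioned in parameter 6 can be scripted directly. The sketch below uses normalized exact-match scoring over a tiny, hypothetical gold-answer list; production evaluations often prefer softer metrics such as token-level F1, especially across languages.

```python
import string


def normalize(text: str) -> str:
    """Lowercase, strip punctuation, and trim whitespace for lenient matching."""
    return text.lower().translate(str.maketrans("", "", string.punctuation)).strip()


def exact_match_accuracy(predictions: list[str], gold_answers: list[str]) -> float:
    """Fraction of predictions that match their gold answer after normalization."""
    if not gold_answers:
        return 0.0
    matches = sum(
        normalize(pred) == normalize(gold)
        for pred, gold in zip(predictions, gold_answers)
    )
    return matches / len(gold_answers)


# Hypothetical gold-standard pair for the Eiffel Tower example above.
gold = ["Gustave Eiffel's company"]
predicted = ["Answer: Gustave Eiffel's company.".removeprefix("Answer:").strip()]
print(f"Accuracy: {exact_match_accuracy(predicted, gold):.2f}")  # Accuracy: 1.00
```

Normalization keeps the comparison tolerant of casing and punctuation differences; for languages where exact matching is too strict, a token-overlap metric can be swapped in behind the same interface.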