The Mega Prompt Workshop

From Iterative Learnings to a Reusable AI Tool

Introduction & Goals

A "mega prompt" is a comprehensive set of instructions, rules, and strategies that you provide to an LLM before your specific task. It's a "superprompt" designed to front-load all your hard-won knowledge, transforming a general-purpose AI into a specialized tool for a specific task.

Your goal is to build, test, and generalize one mega prompt based on your notes and the class's collective notes from a previous exercise. The final mega prompt Google Doc you create is the main deliverable for this exercise.

Part 1: The "Seed" - Mine the Collective Knowledge

Your old notes—and your classmates'—are the raw material for your new mega prompt. You will use an LLM to parse this unstructured data and extract patterns of success and failure.

Task: Analyze Your Notes

  1. Choose Your Topic: Select one previous exercise (e.g., "Generating Scientific Visuals").
  2. Gather Data: Open your personal notes for that exercise. Then, open the class-wide notes vault and browse 2-3 other students' notes for the same exercise. Look for common themes, tips, or frustrations.
  3. Use an LLM to Summarize: Go to a powerful reasoning LLM (like Gemini 2.5 Pro). You can either attach your notes Google Doc(s) directly or paste their contents into a prompt like the one below, replacing [Your Topic] with your chosen topic.
Example Prompt
Act as a prompt engineering analyst. I am trying to build a 'mega prompt' for [Your Topic]. I have attached my personal notes Google Doc and a few observations from my classmates. Your task is to analyze all of this feedback and extract two lists:
1. A "Success List" of common successful strategies, pro-tips, and specific instructions.
2. A "Failure List" of common problems, pitfalls, and bad outputs.
[Optionally, paste your notes and observations here if you cannot attach files]

Part 2: The First Draft - Building Mega Prompt v1.0

Now you will synthesize your "Success List" into your first mega prompt. The best way to do this is to have an LLM help you.

Task: Generate Mega Prompt v1.0

Use the "Success List" from the previous step to generate a structured set of instructions, paying special attention to the Execution Strategy.

Below is a suggested template, not a "gold standard." Your goal is to create a mega-prompt that works for your task. You should add, remove, or modify these sections. For example, your prompt might not need a 'Style & Tone' section, or it might need a new 'Error Handling' section. Start with this, but make it your own.

Example Prompt
Using the "Success List" you just generated, synthesize these points into a single, coherent "mega prompt" template. Structure the mega prompt with the following sections: - **Persona:** "Act as an expert..." - **Core Task:** A general description of the main goal. - **Context:** Instructions on what source material will be provided. - **Execution Strategy:** HOW the AI must think and apply the rules. (e.g., "For writing, you must first create an outline, then generate paragraph by paragraph." or "For coding, you must first create a file-by-file plan, then generate code for each file sequentially.") - **Rules & Constraints:** A strict, bulleted list of "You must..." and "You must not..." - **Style & Tone:** Instructions on the desired writing style. - **Output Format:** Instructions on how the output should be structured.

Result: You now have your "Mega Prompt v1.0".

Part 3: The "Unit Test" - Applying v1.0

A prompt is only as good as its output. You should first test it on a simple, repeatable task.

Task: Run Your Unit Test

  1. Define Your "Unit Test" Task: Create one simple, "clean" task.
    • Example (for Figures): "Generate Python code for a 2-panel figure showing a line graph and a bar graph using placeholder data." (A sketch of what a passing output might look like appears after this list.)
    • Example (for Related Work): "Write a 3-paragraph related work section based on the attached 2 abstracts."
  2. Run the Test:
    • Start a new, fresh LLM session (critical to avoid context leaks).
    • Attach your "Mega Prompt v1.0" Google Doc file.
    • After the attachment, paste your "Unit Test" task.
    • Run the prompt and save the one-shot output (you can copy this into your Google Doc for reference).
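For the figures unit test above, a passing output might look roughly like the sketch below. This is only an illustration (it assumes matplotlib and NumPy, and invents placeholder data); your own mega prompt's rules about styling, labels, and structure will shape the details.

import numpy as np
import matplotlib.pyplot as plt

# Placeholder data for the sketch
x = np.linspace(0, 10, 50)
y = np.sin(x)
categories = ["A", "B", "C", "D"]
counts = [4, 7, 2, 5]

# One figure with two side-by-side panels
fig, (ax_line, ax_bar) = plt.subplots(1, 2, figsize=(8, 3.5))

# Panel 1: line graph
ax_line.plot(x, y)
ax_line.set_title("Line graph")
ax_line.set_xlabel("x")
ax_line.set_ylabel("sin(x)")

# Panel 2: bar graph
ax_bar.bar(categories, counts)
ax_bar.set_title("Bar graph")
ax_bar.set_xlabel("Category")
ax_bar.set_ylabel("Count")

fig.tight_layout()
plt.show()

Having a concrete "known good" shape like this in mind makes the post-mortem in Part 4 easier: you can check the v1.0 output against it point by point (panel count, labels, placeholder data) rather than judging it by feel.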

Part 4: Post-Mortem & AI-Assisted Refinement to v2.0

Critically evaluate the output from v1.0. This is where you identify the flaws in your prompt. Then, you will use the AI to iterate on the prompt itself.

Task: Analyze and Iterate with an LLM

  1. Analyze the Output: Look at the output from v1.0 and compare it to your prompt. Ask yourself:
    • What instructions did the AI follow perfectly?
    • What instructions did it ignore, misunderstand, or get wrong?
    • Strategy Check: Did the AI actually follow your Execution Strategy?
    • What was bad, missing, or incorrect?
    • What new instructions do you need to add to fix these flaws?
  2. Iterate using an LLM (e.g., in the Canvas): Instead of editing the prompt manually, instruct the AI to make the changes for you. Attach your "Mega Prompt v1.0" Google Doc, and then provide a follow-up prompt with your requested edits. This new, improved version will be your "Mega Prompt v2.0".
Example Iteration Prompt
[Attach your Mega Prompt v1.0 Google Doc] --- Now, I need you to modify the attached mega prompt based on my analysis. The v1.0 prompt produced text that was too verbose and redundant. Please add a new rule to the "Rules & Constraints" section that instructs the AI to use concise language and to actively check for and remove unnecessary or repeated sentences.

Part 5: The "Verification Run" - Applying v2.0

Did your changes work? Test the prompt again using the exact same "Unit Test" task.

Task: Verify Your Fixes

  1. Run the Test (Again):
    • Start another new, fresh LLM session.
    • Attach your "Mega Prompt v2.0" file.
    • After the attachment, paste the exact same "Unit Test" task from Part 3.
  2. Compare: Place the output from v1.0 and v2.0 side-by-side.

Did your v2.0 prompt fix the problems from v1.0? If yes, you are ready for stress testing. If not, repeat Parts 4 and 5 until the unit test passes.

Part 6: The "Stress Test" & Generalizing to v3.0

A prompt that only works on one simple task isn't a "mega prompt"; it's an "overfit" one. Now it's time to test its robustness.

Task: Generalize Your Prompt

  1. Define "Stress Test" Tasks: Create 2-3 new and different versions of your task. They should be more complex or varied.
    • Example (for Figures): (1) "Generate code for a 3-panel figure with a shared X-axis." (2) "Generate code for a scatter plot with a complex legend." (A sketch of the shared X-axis structure appears after this list.)
    • Example (for Related Work): (1) "Write a related work section based on 5 attached abstracts, not 2." (2) "Write a related work section for a paper on a niche topic."
  2. Run the Stress Tests: Apply your "Mega Prompt v2.0" (by attaching it) to each new stress test task (in fresh sessions).
  3. Analyze the Failures: Where did v2.0 fall short? What hidden assumptions did it have?
    • Example: "My v2.0 prompt assumed all plots were simple. It failed when I asked for a shared X-axis."
  4. Iterate to v3.0: Based on these new failures, decide what rules or strategy changes are needed to handle the added complexity. Then edit your prompt again (using the AI-assisted method from Part 4) to create a more robust, generalized "Mega Prompt v3.0". This is your final, battle-tested prompt.
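To make the added complexity of the first figures stress test concrete, here is a rough sketch (again assuming matplotlib and NumPy, with invented placeholder data) of the shared X-axis structure it asks for. A prompt that was only ever tested on independent panels may miss this linkage entirely.

import numpy as np
import matplotlib.pyplot as plt

# Placeholder data shared across all three panels
t = np.linspace(0, 10, 200)
signals = [np.sin(t), np.cos(t), np.sin(2 * t)]
titles = ["Panel A", "Panel B", "Panel C"]

# sharex=True is the key difference from the simple unit test:
# all three panels are linked to the same X-axis.
fig, axes = plt.subplots(3, 1, figsize=(6, 7), sharex=True)

for ax, y, title in zip(axes, signals, titles):
    ax.plot(t, y)
    ax.set_title(title)
    ax.set_ylabel("Amplitude")

axes[-1].set_xlabel("Time")  # only the bottom panel needs an X-axis label
fig.tight_layout()
plt.show()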

Reflection

Final Analysis Questions

Think about the following questions. We will use them as a basis for our class discussion.

  • How did the output from v2.0 compare to v1.0 on the simple "unit test"?
  • How did your "verified" v2.0 prompt perform on the new, harder "stress tests"? What new weaknesses did this reveal?
  • What was the most important change you made to get from v2.0 to v3.0? (Was it a specific rule or a change to the Execution Strategy?)
  • How did mining your classmates' notes help you build a more robust initial prompt?
  • Is this "mega prompt" approach worth the up-front effort, or do you prefer iterative prompting? When would you use each approach?