What Prompt History Is

Prompt History is a comprehensive record of every prompt execution across all AI platforms. It shows all queries you’ve run—whether scheduled or manual—along with the brands mentioned in responses, execution timestamps, and platform details. Unlike Scheduled Prompts (which shows your active tracking setup), Prompt History displays the actual execution results. Think of it as your complete audit trail for AI platform monitoring.

Why Prompt History Matters

Prompt History helps you:
  • Analyze historical trends: See how AI responses evolved over weeks or months
  • Compare platform differences: Understand how ChatGPT, Google AI Overview, and Google AI Mode respond differently to the same query
  • Identify brand mention patterns: Track when your brand (or a competitor) appears in or disappears from responses
  • Audit specific executions: Review individual prompt results to understand what AI platforms cited
  • Export data for reporting: Download execution results for presentations or deeper analysis
This historical view is essential for measuring the impact of your content strategy on AI visibility over time.

Understanding the Prompt History Table

The Prompt History screen displays all executions in a searchable, filterable table.

Query Details Column

Shows the prompt text that was executed and its assigned topic. Prompt text: The exact question or query sent to the AI platform (e.g., “best CRM software for small businesses”). Topic tag: Category assigned to the prompt (e.g., “CRM Solutions”, “Sales Tools”, “Customer Management”). Use this to:
  • Quickly identify which prompts generated specific results
  • Group related queries by topic
  • Understand the context of brand mentions

Brands Mentioned Column

Lists all brands that appeared in the AI-generated response for that execution. This column shows you at a glance which companies the AI platform referenced when answering the query. Brands are displayed as a comma-separated list. Example brands mentioned:
  • “Salesforce, HubSpot, Zoho CRM, Pipedrive, Freshsales, Monday Sales CRM, Insightly…”
Use this to:
  • See if your brand appeared in the response
  • Identify which competitors were mentioned
  • Spot new competitors entering AI responses
  • Track brand mention consistency across executions

Last Run Column

The timestamp when the prompt was executed (e.g., “6 Oct 2025, 2:33 pm”). Use this to:
  • Understand the recency of data
  • Compare executions from different dates
  • Track execution frequency for scheduled prompts

Platform Column

Visual indicator showing which AI platform generated the response. Platform options:
  • Google AI Mode: Google’s AI-powered search mode
  • Google Overview: Google’s AI Overview feature
  • ChatGPT: OpenAI’s ChatGPT responses
Each execution is specific to one platform. If you run the same prompt on multiple platforms, you’ll see separate entries for each. Use this to:
  • Filter results by platform
  • Compare how different platforms respond to the same query
  • Identify platform-specific brand mention patterns

Action Column

Actions you can take on each prompt execution.

View Results

Opens the detailed results page showing:
  • The full AI response text
  • Brand mentions with position rankings
  • Citations and sources referenced
  • Visibility scores and metrics
  • Competitor comparison data
This is your primary action for analyzing individual prompt executions in depth.

Download

Exports the execution data in a downloadable format. What you get:
  • Prompt text and metadata
  • AI response text
  • Brand mentions and positions
  • Citations and URLs
  • Metrics (visibility score, share of voice, etc.)
Use this for reporting, external analysis, or archiving important results.

Delete

Removes the specific execution from your history permanently.
Deleting an execution is permanent and cannot be undone. The execution data will be removed from all analytics and trend calculations. Only delete executions if you’re certain the data is no longer needed.

Search and Filter Options

Search by Name

Use the search bar to find specific prompts quickly. Search by prompt text: Enter keywords from the query (e.g., “CRM software”, “sales automation”). The search filters the execution list in real-time as you type.

Filter by Platform

Select specific AI platforms to narrow results:
  • All Platforms: Show executions from all AI platforms (default)
  • Google AI Mode: Only Google AI Mode executions
  • Google Overview: Only Google AI Overview executions
  • ChatGPT: Only ChatGPT executions
Use case: Compare how the same prompt performs across different platforms by filtering to one platform at a time.

Filter by Topic

Select a topic to view only executions tagged with that category. Use case: Analyze all executions related to a specific theme (e.g., “CRM Comparisons”, “Sales Automation”).

Filter by Date Range

Select a time period to focus on executions within specific dates. Common date ranges:
  • Last 7 days
  • Last 30 days
  • Last 3 months
  • Last 6 months
  • Custom date range
Use case: Analyze how brand mentions changed after publishing new content or launching a campaign.

Execution Count

At the top of the screen, you’ll see the total number of prompt executions (e.g., “1-20 of 394 Prompts”). This shows:
  • Current page range: Which executions you’re viewing (1-20)
  • Total executions: How many prompts have been run across all platforms and topics (394)
Use pagination controls to browse through all executions.

How to Use Prompt History

Compare Brand Mentions Over Time

Goal: See if your brand is gaining or losing visibility in AI responses. Steps:
  1. Search for a specific prompt text
  2. Sort by “Last Run” to see chronological order
  3. Review “Brands Mentioned” column across executions
  4. Note when your brand appeared vs. disappeared
Example: A prompt about “best CRM for real estate agents” shows your brand in 8 out of 10 recent executions, but it didn’t appear in executions from 3 months ago. This indicates improved AI visibility.

Identify Competitor Patterns

Goal: Understand which competitors consistently appear in AI responses. Steps:
  1. Filter by topic (e.g., “CRM Comparisons”)
  2. Review “Brands Mentioned” across multiple executions
  3. Identify brands that appear repeatedly
  4. Note new brands entering responses
Example: You notice a new CRM competitor appears in 5 recent executions but wasn’t mentioned in older results. This signals increased competitive pressure.
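If you export your execution history, the new-brand check above amounts to a set difference between recent and older "Brands Mentioned" lists. A minimal sketch (the records and brand names here are illustrative, not the product's export schema):

```python
# Hypothetical executions: each is the comma-separated "Brands Mentioned"
# string as it appears in the history table.
old_runs = [
    "Salesforce, HubSpot, Zoho CRM",
    "Salesforce, HubSpot, Pipedrive",
]
recent_runs = [
    "Salesforce, HubSpot, Zoho CRM, Attio",
    "Salesforce, Attio, Pipedrive",
]

def brands(runs):
    """Collect the unique brand names mentioned across a list of executions."""
    return {b.strip() for run in runs for b in run.split(",")}

# Brands that appear in recent executions but never in older ones.
new_brands = brands(recent_runs) - brands(old_runs)
print(new_brands)  # {'Attio'}
```

Brands flagged this way are candidates for competitive investigation; confirm by reviewing the detailed results of the executions where they appear.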

Analyze Platform Differences

Goal: Understand how different AI platforms respond to the same query. Steps:
  1. Search for a specific prompt text
  2. Filter by each platform individually
  3. Compare “Brands Mentioned” across platforms
  4. Identify platform-specific brand preferences
Example: ChatGPT consistently mentions your brand for a query, but Google AI Overview does not. This suggests you need to optimize differently for Google’s platform.

Track Topic Performance

Goal: See which topics generate the most brand mentions. Steps:
  1. Filter by topic
  2. Count how many executions mention your brand
  3. Calculate mention rate (your brand mentions / total executions)
  4. Compare mention rates across topics
Example: “CRM Comparisons” topic shows 60% brand mention rate, while “Sales Automation” shows 20%. Focus content efforts on CRM comparison topics.
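The mention-rate arithmetic in steps 3–4 is simply brand mentions divided by total executions. A sketch with made-up counts (matching the 60% / 20% example above, not real data):

```python
# Illustrative counts per topic: (executions mentioning your brand, total executions)
topic_counts = {
    "CRM Comparisons": (12, 20),
    "Sales Automation": (4, 20),
}

def mention_rate(mentions, total):
    """Share of executions in which your brand was mentioned."""
    return mentions / total if total else 0.0

rates = {topic: mention_rate(m, t) for topic, (m, t) in topic_counts.items()}
for topic, rate in sorted(rates.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{topic}: {rate:.0%}")  # CRM Comparisons: 60% / Sales Automation: 20%
```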

Audit Specific Dates

Goal: Measure the impact of content or campaigns on AI visibility. Steps:
  1. Set date range to before and after content publication
  2. Compare brand mentions before vs. after
  3. Review any changes in competitor mentions
  4. Analyze citation patterns in detailed results
Example: After publishing a comprehensive CRM comparison guide, your brand appears in 15 new executions within a week, up from 5 the previous week.
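Steps 1–2 boil down to counting brand mentions in two date windows. A sketch with hypothetical exported records (the dates, field layout, and brand names are assumptions for illustration):

```python
from datetime import date

# Hypothetical exported executions: (run date, brands mentioned in the response)
executions = [
    (date(2025, 9, 29), ["Salesforce", "HubSpot"]),
    (date(2025, 10, 2), ["Salesforce", "YourBrand"]),
    (date(2025, 10, 5), ["YourBrand", "Pipedrive"]),
]
publish_date = date(2025, 10, 1)  # when the new guide went live

def mentions_in_window(records, brand, start, end):
    """Count executions in [start, end) whose response mentions the brand."""
    return sum(1 for d, brands in records if start <= d < end and brand in brands)

before = mentions_in_window(executions, "YourBrand", date(2025, 9, 24), publish_date)
after = mentions_in_window(executions, "YourBrand", publish_date, date(2025, 10, 8))
print(f"before: {before}, after: {after}")  # before: 0, after: 2
```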

Export for Reporting

Goal: Create reports for stakeholders showing AI visibility progress. Steps:
  1. Filter to the relevant time period and topic
  2. Download executions that show strong brand presence
  3. Compile data into presentation format
  4. Highlight trends and improvements
Example: Export all executions from Q3 showing your brand mentions to demonstrate ROI of AI optimization efforts.

Best Practices

1. Review History Weekly

Set a regular cadence to review new prompt executions. Weekly review checklist:
  • Check latest executions for brand mentions
  • Identify any sudden changes in competitor mentions
  • Note new brands appearing in responses
  • Flag executions with unexpected results for deeper analysis

2. Compare Executions for the Same Prompt

When analyzing trends, review multiple executions of the same prompt over time. What to compare:
  • Brand mention consistency
  • Position changes (if your brand moves up or down in responses)
  • Citation source changes
  • Competitor mention frequency

3. Use Filters to Focus Analysis

Don’t try to analyze all 394+ executions at once. Use filters to narrow your focus. Filtering strategy:
  • Start with a specific topic
  • Narrow by date range
  • Filter by platform if needed
  • Search for specific prompt text

4. Download Important Executions

Export executions that show significant wins or losses. Archive executions when:
  • Your brand achieves top mention position
  • A major competitor appears for the first time
  • AI platforms cite your owned content
  • You need evidence for stakeholder reports

5. Delete Only When Necessary

Keep historical data unless it’s genuinely irrelevant or incorrect. Reasons to delete:
  • Duplicate executions from testing
  • Executions with technical errors
  • Irrelevant queries that shouldn’t have been tracked
Don’t delete:
  • Old executions just because they’re old (historical data is valuable)
  • Executions where your brand didn’t appear (useful for gap analysis)

6. Look for Patterns Across Platforms

Don’t analyze platforms in isolation. Compare them to identify platform-specific optimization opportunities. Platform comparison questions:
  • Which platform mentions your brand most frequently?
  • Are competitor mentions consistent across platforms?
  • Do certain platforms prefer specific types of content or citations?

Common Use Cases

Measure Content Impact

Scenario: You published a comprehensive guide and want to see if it improved AI visibility. Approach:
  1. Filter to date range after publication
  2. Search for prompts related to the guide topic
  3. Check if brand mentions increased
  4. Review if AI platforms cite your guide

Track Seasonal Patterns

Scenario: Your product has seasonal demand, and you want to track AI mention patterns. Approach:
  1. Compare executions from the same time last year
  2. Filter by relevant topic
  3. Track brand mention frequency across seasons
  4. Adjust prompt scheduling based on seasonal patterns

Competitive Benchmarking

Scenario: You want to understand your share of AI visibility vs. competitors. Approach:
  1. Filter to a specific topic
  2. Count total executions
  3. Count executions mentioning your brand
  4. Count executions mentioning each competitor
  5. Calculate mention share for each brand
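The mention-share calculation in steps 2–5 can be sketched like this (brand lists are illustrative, not real response data):

```python
from collections import Counter

# Brands mentioned per execution for one topic (illustrative data).
executions = [
    ["Salesforce", "HubSpot", "YourBrand"],
    ["Salesforce", "Pipedrive"],
    ["Salesforce", "HubSpot", "YourBrand", "Zoho CRM"],
    ["HubSpot", "Pipedrive"],
]

total = len(executions)
# How many executions mention each brand at least once.
counts = Counter(brand for run in executions for brand in set(run))

# Mention share: fraction of executions in which each brand appears.
share = {brand: n / total for brand, n in counts.items()}
print(f"YourBrand: {share['YourBrand']:.0%}")    # YourBrand: 50%
print(f"Salesforce: {share['Salesforce']:.0%}")  # Salesforce: 75%
```

Comparing these shares side by side shows at a glance which brands dominate AI responses for the topic.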

Identify New Competitors

Scenario: New competitors might be emerging in AI responses. Approach:
  1. Review recent executions
  2. Look for unfamiliar brands in “Brands Mentioned”
  3. Compare to older executions to confirm they’re new
  4. Investigate new competitors and adjust strategy

Validate Scheduled Prompt Performance

Scenario: You want to ensure scheduled prompts are running correctly and providing useful data. Approach:
  1. Filter to a specific scheduled prompt
  2. Check execution frequency matches schedule
  3. Review brand mention consistency
  4. Determine if the prompt should continue, be adjusted, or be paused

What’s Next

Now that you understand Prompt History:
  • Review your recent executions to identify trends
  • Compare platform performance for your top prompts
  • Export key results for reporting
  • Adjust your scheduled prompts based on historical insights
  • Explore Citation Analysis to understand which URLs are being cited