What Prompt History Is
Prompt History is a comprehensive record of every prompt execution across all AI platforms. It shows all queries you’ve run—whether scheduled or manual—along with the brands mentioned in responses, execution timestamps, and platform details. Unlike Scheduled Prompts (which shows your active tracking setup), Prompt History displays the actual execution results. Think of it as your complete audit trail for AI platform monitoring.
Why Prompt History Matters
Prompt History helps you:
- Analyze historical trends: See how AI responses evolved over weeks or months
- Compare platform differences: Understand how ChatGPT, Google AI Overview, and Google AI Mode respond differently to the same query
- Identify brand mention patterns: Track when your brand (or competitors) appear or disappear from responses
- Audit specific executions: Review individual prompt results to understand what AI platforms cited
- Export data for reporting: Download execution results for presentations or deeper analysis
Understanding the Prompt History Table
The Prompt History screen displays all executions in a searchable, filterable table.
Query Details Column
Shows the prompt text that was executed and its assigned topic. Prompt text: The exact question or query sent to the AI platform (e.g., “best CRM software for small businesses”). Topic tag: Category assigned to the prompt (e.g., “CRM Solutions”, “Sales Tools”, “Customer Management”). Use this to:
- Quickly identify which prompts generated specific results
- Group related queries by topic
- Understand the context of brand mentions
Brands Mentioned Column
Lists all brands that appeared in the AI-generated response for that execution. This column shows you at a glance which companies the AI platform referenced when answering the query. Brands are displayed as a comma-separated list. Example brands mentioned:
- “Salesforce, HubSpot, Zoho CRM, Pipedrive, Freshsales, Monday Sales CRM, Insightly…”
Use this to:
- See if your brand appeared in the response
- Identify which competitors were mentioned
- Spot new competitors entering AI responses
- Track brand mention consistency across executions
Last Run Column
The timestamp when the prompt was executed (e.g., “6 Oct 2025, 2:33 pm”). Use this to:
- Understand the recency of data
- Compare executions from different dates
- Track execution frequency for scheduled prompts
Platform Column
Visual indicator showing which AI platform generated the response. Platform options:
- Google AI Mode: Google’s AI-powered search mode
- Google Overview: Google’s AI Overview feature
- ChatGPT: OpenAI’s ChatGPT responses
Use this to:
- Filter results by platform
- Compare how different platforms respond to the same query
- Identify platform-specific brand mention patterns
Action Column
Actions you can take on each prompt execution.
View Results
Opens the detailed results page showing:
- The full AI response text
- Brand mentions with position rankings
- Citations and sources referenced
- Visibility scores and metrics
- Competitor comparison data
Download
Exports the execution data in a downloadable format. What you get:
- Prompt text and metadata
- AI response text
- Brand mentions and positions
- Citations and URLs
- Metrics (visibility score, share of voice, etc.)
Delete
Removes the specific execution from your history.
Deleting an execution is permanent and cannot be undone. The execution data will be removed from all analytics and trend calculations. Only delete executions if you’re certain the data is no longer needed.
Search and Filter Options
Search by Name
Use the search bar to find specific prompts quickly. Search by prompt text: Enter keywords from the query (e.g., “CRM software”, “sales automation”). The search filters the execution list in real-time as you type.
Filter by Platform
Select specific AI platforms to narrow results:
- All Platforms: Show executions from all AI platforms (default)
- Google AI Mode: Only Google AI Mode executions
- Google Overview: Only Google AI Overview executions
- ChatGPT: Only ChatGPT executions
Filter by Topic
Select a topic to view only executions tagged with that category. Use case: Analyze all executions related to a specific theme (e.g., “CRM Comparisons”, “Sales Automation”).
Filter by Date Range
Select a time period to focus on executions within specific dates. Common date ranges:
- Last 7 days
- Last 30 days
- Last 3 months
- Last 6 months
- Custom date range
Execution Count
At the top of the screen, you’ll see the total number of prompt executions (e.g., “1-20 of 394 Prompts”). This shows:
- Current page range: Which executions you’re viewing (1-20)
- Total executions: How many prompts have been run across all platforms and topics (394)
How to Use Prompt History
Compare Brand Mentions Over Time
Goal: See if your brand is gaining or losing visibility in AI responses. Steps:
- Search for a specific prompt text
- Sort by “Last Run” to see chronological order
- Review “Brands Mentioned” column across executions
- Note when your brand appeared vs. disappeared
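The steps above can be sketched in a few lines of Python, assuming you’ve collected executions (e.g., from downloaded results) into a list of dicts. The `last_run` and `brands` field names here are illustrative, not the export’s actual schema:

```python
from datetime import date

# Hypothetical execution records assembled from downloaded results.
executions = [
    {"last_run": date(2025, 9, 1), "brands": ["Salesforce", "HubSpot", "Zoho CRM"]},
    {"last_run": date(2025, 9, 15), "brands": ["Salesforce", "Pipedrive"]},
    {"last_run": date(2025, 10, 6), "brands": ["Salesforce", "HubSpot", "Pipedrive"]},
]

def presence_timeline(executions, brand):
    """Sort executions chronologically and flag whether `brand` appeared in each."""
    ordered = sorted(executions, key=lambda e: e["last_run"])
    return [(e["last_run"], brand in e["brands"]) for e in ordered]

timeline = presence_timeline(executions, "HubSpot")
# HubSpot appears, drops out, then reappears across the three runs.
```

Gaps in the timeline (False entries between True ones) are exactly the appeared-vs-disappeared moments worth investigating.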
Identify Competitor Patterns
Goal: Understand which competitors consistently appear in AI responses. Steps:
- Filter by topic (e.g., “CRM Comparisons”)
- Review “Brands Mentioned” across multiple executions
- Identify brands that appear repeatedly
- Note new brands entering responses
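Counting repeat appearances is a simple frequency tally. A minimal sketch, again assuming executions loaded as dicts with an illustrative `brands` field:

```python
from collections import Counter

executions = [
    {"brands": ["Salesforce", "HubSpot", "Zoho CRM"]},
    {"brands": ["Salesforce", "Pipedrive"]},
    {"brands": ["Salesforce", "HubSpot"]},
]

# Tally how many executions each brand appeared in.
counts = Counter(b for e in executions for b in e["brands"])
top = counts.most_common(2)
# Salesforce appears in all three executions, HubSpot in two.
```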
Analyze Platform Differences
Goal: Understand how different AI platforms respond to the same query. Steps:
- Search for a specific prompt text
- Filter by each platform individually
- Compare “Brands Mentioned” across platforms
- Identify platform-specific brand preferences
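Set operations make the platform comparison concrete. A sketch under the same assumed record shape, with a hypothetical `platform` field:

```python
executions = [
    {"platform": "ChatGPT", "brands": {"Salesforce", "HubSpot"}},
    {"platform": "Google AI Mode", "brands": {"Salesforce", "Zoho CRM"}},
]

# Collect every brand each platform has mentioned for this prompt.
by_platform = {}
for e in executions:
    by_platform.setdefault(e["platform"], set()).update(e["brands"])

shared = set.intersection(*by_platform.values())        # mentioned everywhere
only_chatgpt = by_platform["ChatGPT"] - by_platform["Google AI Mode"]
```

Brands in `shared` are consistent across platforms; brands in a one-platform difference are the platform-specific preferences worth noting.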
Track Topic Performance
Goal: See which topics generate the most brand mentions. Steps:
- Filter by topic
- Count how many executions mention your brand
- Calculate mention rate (your brand mentions / total executions)
- Compare mention rates across topics
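The mention-rate calculation in the steps above (your brand mentions divided by total executions) can be sketched directly; field names are illustrative:

```python
def mention_rate(executions, brand):
    """Fraction of executions whose response mentioned `brand`."""
    if not executions:
        return 0.0
    hits = sum(1 for e in executions if brand in e["brands"])
    return hits / len(executions)

# Hypothetical executions for one topic, e.g. "CRM Solutions".
crm_execs = [
    {"brands": ["Salesforce", "HubSpot"]},
    {"brands": ["Salesforce"]},
    {"brands": ["HubSpot", "Pipedrive"]},
    {"brands": ["HubSpot"]},
]

rate = mention_rate(crm_execs, "HubSpot")  # 3 of 4 executions -> 0.75
```

Run the same calculation per topic and compare the rates to see where your brand performs best.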
Audit Specific Dates
Goal: Measure the impact of content or campaigns on AI visibility. Steps:
- Set date range to before and after content publication
- Compare brand mentions before vs. after
- Review any changes in competitor mentions
- Analyze citation patterns in detailed results
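The before/after comparison reduces to splitting executions on the publication date and computing a mention rate for each half. A sketch with illustrative field names and dates:

```python
from datetime import date

publication = date(2025, 9, 10)  # hypothetical publish date

executions = [
    {"last_run": date(2025, 9, 1), "brands": {"Salesforce"}},
    {"last_run": date(2025, 9, 20), "brands": {"Salesforce", "HubSpot"}},
    {"last_run": date(2025, 10, 1), "brands": {"HubSpot"}},
]

before = [e for e in executions if e["last_run"] < publication]
after = [e for e in executions if e["last_run"] >= publication]

def rate(execs, brand):
    return sum(brand in e["brands"] for e in execs) / len(execs) if execs else 0.0

# HubSpot: absent before publication, present in every run after.
before_rate, after_rate = rate(before, "HubSpot"), rate(after, "HubSpot")
```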
Export for Reporting
Goal: Create reports for stakeholders showing AI visibility progress. Steps:
- Filter to the relevant time period and topic
- Download executions that show strong brand presence
- Compile data into presentation format
- Highlight trends and improvements
Best Practices
1. Review History Weekly
Set a regular cadence to review new prompt executions. Weekly review checklist:
- Check latest executions for brand mentions
- Identify any sudden changes in competitor mentions
- Note new brands appearing in responses
- Flag executions with unexpected results for deeper analysis
2. Compare Executions for the Same Prompt
When analyzing trends, review multiple executions of the same prompt over time. What to compare:
- Brand mention consistency
- Position changes (if your brand moves up or down in responses)
- Citation source changes
- Competitor mention frequency
3. Use Filters to Focus Analysis
Don’t try to analyze all 394+ executions at once. Use filters to narrow your focus. Filtering strategy:
- Start with a specific topic
- Narrow by date range
- Filter by platform if needed
- Search for specific prompt text
4. Download Important Executions
Export executions that show significant wins or losses. Archive executions when:
- Your brand achieves top mention position
- A major competitor appears for the first time
- AI platforms cite your owned content
- You need evidence for stakeholder reports
5. Delete Only When Necessary
Keep historical data unless it’s genuinely irrelevant or incorrect. Reasons to delete:
- Duplicate executions from testing
- Executions with technical errors
- Irrelevant queries that shouldn’t have been tracked
Don’t delete:
- Old executions just because they’re old (historical data is valuable)
- Executions where your brand didn’t appear (useful for gap analysis)
6. Look for Patterns Across Platforms
Don’t analyze platforms in isolation. Compare them to identify platform-specific optimization opportunities. Platform comparison questions:
- Which platform mentions your brand most frequently?
- Are competitor mentions consistent across platforms?
- Do certain platforms prefer specific types of content or citations?
Common Use Cases
Measure Content Impact
Scenario: You published a comprehensive guide and want to see if it improved AI visibility. Approach:
- Filter to date range after publication
- Search for prompts related to the guide topic
- Check if brand mentions increased
- Review if AI platforms cite your guide
Track Seasonal Trends
Scenario: Your product has seasonal demand, and you want to track AI mention patterns. Approach:
- Compare executions from the same time last year
- Filter by relevant topic
- Track brand mention frequency across seasons
- Adjust prompt scheduling based on seasonal patterns
Competitive Benchmarking
Scenario: You want to understand your share of AI visibility vs. competitors. Approach:
- Filter to a specific topic
- Count total executions
- Count executions mentioning your brand
- Count executions mentioning each competitor
- Calculate mention share for each brand
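The mention-share calculation above is the per-brand count divided by total executions. A minimal sketch over hypothetical records:

```python
from collections import Counter

executions = [
    {"brands": {"Salesforce", "HubSpot"}},
    {"brands": {"Salesforce", "Pipedrive"}},
    {"brands": {"Salesforce", "HubSpot", "Zoho CRM"}},
    {"brands": {"Pipedrive"}},
]

total = len(executions)
counts = Counter(b for e in executions for b in e["brands"])

# Share of executions that mentioned each brand.
share = {brand: n / total for brand, n in counts.items()}
# Salesforce: 3 of 4 executions; HubSpot: 2 of 4.
```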
Identify New Competitors
Scenario: New competitors might be emerging in AI responses. Approach:
- Review recent executions
- Look for unfamiliar brands in “Brands Mentioned”
- Compare to older executions to confirm they’re new
- Investigate new competitors and adjust strategy
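Confirming a brand is genuinely new amounts to a set difference between recent and older executions. A sketch with illustrative data ("Attio" stands in for any unfamiliar brand):

```python
recent = [{"brands": {"Salesforce", "HubSpot", "Attio"}},
          {"brands": {"Salesforce", "Attio"}}]
older = [{"brands": {"Salesforce", "HubSpot"}},
         {"brands": {"Salesforce", "Pipedrive"}}]

def all_brands(executions):
    """Union of every brand mentioned across a list of executions."""
    out = set()
    for e in executions:
        out |= e["brands"]
    return out

# Brands that appear in recent executions but never appeared before.
new_entrants = all_brands(recent) - all_brands(older)
```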
Validate Scheduled Prompt Performance
Scenario: You want to ensure scheduled prompts are running correctly and providing useful data. Approach:
- Filter to a specific scheduled prompt
- Check execution frequency matches schedule
- Review brand mention consistency
- Determine if the prompt should continue, be adjusted, or be paused
What’s Next
Now that you understand Prompt History:
- Review your recent executions to identify trends
- Compare platform performance for your top prompts
- Export key results for reporting
- Adjust your scheduled prompts based on historical insights
- Explore Citation Analysis to understand which URLs are being cited