
Overview

The LLM block sends prompts to AI language models and returns generated text. Use it for content creation, summarization, analysis, or any text-based AI task.

Configuration

Model

Select the AI model to use for generation. Available models:
  • All major models from the leading providers are available, and the list is updated regularly.
Different models have different capabilities, speed, and cost. Choose based on your task requirements.

Output Format

Define how the model returns its response. Options:
  • Text: Plain text response
  • JSON: Structured data response
Select JSON when you need structured output in subsequent blocks.
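When the output format is JSON, later steps can parse the response directly. A minimal sketch in plain Python (not this product's SDK; the field names in the sample payload are hypothetical):

```python
import json

# Hypothetical raw JSON string returned by the LLM block in JSON output format
raw = '{"title": "Market Summary", "key_points": ["Growth of 15%", "New entrants"]}'

data = json.loads(raw)  # parse into a dict a subsequent block can read
print(data["title"])            # the structured fields are now addressable
print(len(data["key_points"]))  # e.g. count of extracted points
```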

Temperature

Controls randomness in the model’s output. Range: 0.0 to 1.0
  • Lower values (0.0 - 0.3): More precise and deterministic responses
  • Higher values (0.7 - 1.0): More creative and varied responses
Default: 0.7

Max Tokens

Limits the length of the generated response.
  • Range: 1 to 32,000 (varies by model)
  • Leave empty for no limit; set a specific value to control output length and cost.
  • Example: 2,500 tokens ≈ 1,875 words
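The example ratio above assumes roughly 0.75 words per English token, which is a rule of thumb rather than an exact figure (it varies by tokenizer and content). A quick sketch of that back-of-the-envelope conversion:

```python
WORDS_PER_TOKEN = 0.75  # rough average for English prose; varies by tokenizer


def estimated_words(max_tokens: int) -> int:
    """Approximate word budget implied by a max token limit."""
    return int(max_tokens * WORDS_PER_TOKEN)


print(estimated_words(2500))  # -> 1875
```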

System Instructions

Define the model’s role, tone, and behavior. This optional field sets context for all messages in this block. Example:
You are an SEO expert who writes clear, actionable content briefs.
Use professional tone and focus on data-driven insights.
System instructions apply to the entire conversation with the model.
Web Search

Allow the model to search the web for current information.
Available for: OpenAI and Perplexity models
When enabled, the model can retrieve up-to-date information from the internet to answer your queries. This is useful for queries requiring recent data, news, or facts beyond the model’s training cutoff.
Note: Web search increases response time and credit usage.

Citations

Include source references in the model’s response.
Requirements: Enable Web Search must be turned on.
Citation Formats:
  1. Include within the answer: Inline citations using [1], [2] format with a References section at the end
    According to recent studies [1], the market grew by 15% [2].
    
    References:
    [1] https://example.com/study
    [2] https://example.com/market-report
    
  2. Separate answer and citations: JSON format with separate answer and citations fields
    {
      "answer": "The market grew by 15%",
      "citations": [
        {"title": "Market Study", "url": "https://example.com/study"}
      ]
    }
    
Select “Separate answer and citations” format when you need to process citations programmatically in subsequent blocks.
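With the “Separate answer and citations” format, a later step can pull out the answer and the source URLs independently. A plain-Python sketch of that parsing (not this product's SDK), using the JSON shape shown above:

```python
import json

# The "Separate answer and citations" JSON format shown above
raw = '''{
  "answer": "The market grew by 15%",
  "citations": [
    {"title": "Market Study", "url": "https://example.com/study"}
  ]
}'''

response = json.loads(raw)
answer = response["answer"]
urls = [c["url"] for c in response["citations"]]  # citation URLs for later steps
```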

Allowed Domains

Limit web search results to specific domains.
Requirements: Enable Web Search must be turned on.
Maximum: 20 domains
Format: Enter domains without the http:// or https:// prefix
  • Correct: wikipedia.org
  • Incorrect: https://wikipedia.org
Examples:
wikipedia.org
nature.com
ncbi.nlm.nih.gov
Model Support:
  • OpenAI: Supports allowed domains only
  • Perplexity: Supports both allowed and disallowed domains (also accepts full URLs for granular filtering)
Use this to ensure responses cite only trusted or relevant sources for your use case.
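Users often paste full URLs rather than bare domains. A small sketch (hypothetical helper, plain Python) of stripping the scheme so an entry matches the expected format:

```python
def normalize_domain(entry: str) -> str:
    """Strip http:// or https:// and any trailing slash from a pasted domain."""
    for prefix in ("https://", "http://"):
        if entry.startswith(prefix):
            entry = entry[len(prefix):]
    return entry.rstrip("/")


print(normalize_domain("https://wikipedia.org"))  # -> wikipedia.org
```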

Messages

Define the conversation between user and assistant.

User Message (Required)

The prompt sent to the AI model. This is where you reference placeholders and define your request. Example:
Analyze the following search results for {{input.keyword}}:

{{step_1.output}}

Create a content outline with 5 main sections.
Click the { } icon to insert placeholders from previous steps or inputs.

Assistant Message

Optional: Pre-fill the start of the assistant’s response to guide output format or structure. Example:
# Content Outline for {{input.keyword}}

## Section 1:
This guides the model to follow your desired format.

Using Liquid Templating in Prompts

You can use Liquid templating syntax in your prompts to create dynamic, data-driven messages. Liquid lets you reference workflow data, apply filters, use loops, and add conditional logic directly in your prompts.

Basic Variable Output

Reference previous step outputs:
Write a blog post about {{step_1.output.topic}}.

Target keyword: {{step_2.output.Keyword}}
Search volume: {{step_2.output.Search_Volume}}

Using Filters

Transform data with filters:
Write a blog post titled "{{step_1.output.title | upcase}}".

The post should be {{step_2.output.word_count | divided_by: 2}} to {{step_2.output.word_count}} words long.

Topic: {{step_1.output.topic | capitalize}}
Extract and join array values:
Write content about {{step_1.output.main_topic}}.

Target these keywords naturally throughout the content:
{{step_2.output | map: "Keyword" | join: ", "}}

Ensure the content covers all these topics.
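The map/join chain above flattens an array of objects into a comma-separated list. A plain-Python equivalent (the sample data is invented) shows the transformation:

```python
# Hypothetical step_2.output: an array of keyword objects
step_2_output = [
    {"Keyword": "email marketing", "Search_Volume": 22000},
    {"Keyword": "drip campaigns", "Search_Volume": 8100},
    {"Keyword": "newsletter tools", "Search_Volume": 5400},
]

# Equivalent of {{step_2.output | map: "Keyword" | join: ", "}}
joined = ", ".join(row["Keyword"] for row in step_2_output)
print(joined)  # -> email marketing, drip campaigns, newsletter tools
```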

Loops in Prompts

Loop through arrays to build dynamic prompts:
Write a comprehensive blog post about email marketing tools.

Include information about these tools:
{% for tool in step_1.output %}
- {{tool.name}}: {{tool.description}}
{% endfor %}

Compare their features and pricing.
Loop with conditions:
Create a keyword strategy document.

Focus on these high-volume keywords:
{% for kw in step_1.output %}
  {% if kw.Search_Volume > 10000 %}
- {{kw.Keyword}} ({{kw.Search_Volume}} monthly searches)
  {% endif %}
{% endfor %}

Provide content ideas for each keyword.
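The conditional loop above keeps only keywords over the volume threshold. In plain Python (with invented sample data), the same filter-and-format step looks like:

```python
# Hypothetical step_1.output: keyword objects with search volumes
step_1_output = [
    {"Keyword": "crm software", "Search_Volume": 33000},
    {"Keyword": "crm for nonprofits", "Search_Volume": 4400},
    {"Keyword": "sales pipeline", "Search_Volume": 18000},
]

# Equivalent of the {% if kw.Search_Volume > 10000 %} guard inside the loop
lines = [
    f"- {kw['Keyword']} ({kw['Search_Volume']} monthly searches)"
    for kw in step_1_output
    if kw["Search_Volume"] > 10000
]
print("\n".join(lines))  # only the two high-volume keywords remain
```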

Conditional Logic

Adjust prompts based on data:
Write a content brief for "{{step_1.output.Keyword}}".

Search volume: {{step_1.output.Search_Volume}}

{% if step_1.output.Keyword_Difficulty_Index > 70 %}
This is a highly competitive keyword. Focus on:
- Long-tail variations
- Unique angles and perspectives
- Building topical authority
{% elsif step_1.output.Keyword_Difficulty_Index > 40 %}
This is a moderately competitive keyword. Focus on:
- High-quality, comprehensive content
- Strong on-page SEO
- Supporting internal links
{% else %}
This is a low-competition keyword. Focus on:
- Basic optimization
- Quick content production
- Capturing traffic quickly
{% endif %}

Complex Prompt Example

Combine multiple Liquid features:
Create an SEO content strategy for "{{step_1.output.main_keyword}}".

## Primary Keyword Analysis
- Keyword: {{step_1.output.main_keyword | capitalize}}
- Volume: {{step_1.output.Search_Volume}}
- Difficulty: {{step_1.output.Keyword_Difficulty_Index}}/100

## Related Keywords
{% assign related_keywords = step_2.output | map: "Keyword" %}
Target these {{related_keywords | size}} related terms:
{% for kw in step_2.output limit:10 %}
{{forloop.index}}. {{kw.Keyword}} ({{kw.Search_Volume}} searches)
{% endfor %}

## Competitor Analysis
{% for competitor in step_3.output %}
- {{competitor.Domain}}: {{competitor.Organic_Keywords}} ranking keywords
{% endfor %}

## Content Requirements
{% if step_1.output.Search_Volume > 50000 %}
Create pillar content (3000+ words) with comprehensive coverage.
{% elsif step_1.output.Search_Volume > 10000 %}
Create standard long-form content (1500-2000 words).
{% else %}
Create focused content (800-1200 words).
{% endif %}

Include:
- Main sections addressing search intent
- {{related_keywords | slice: 0, 5 | join: ", "}} naturally integrated
- Examples and use cases
- Call-to-action

Tone: Professional and informative

Learn More

See Liquid Templating for complete syntax guide including:
  • All available filters (string, number, array)
  • Control flow (if/elsif/else, for loops, unless, case/when)
  • Variables and assignments
  • Common patterns and examples
  • Input/output samples for all syntax

Error Handling

Define what happens if the LLM block fails. Options:
  1. Terminate Workflow: Stop execution immediately
  2. Continue Execution: Proceed to next step despite error
Select based on whether subsequent steps can handle missing data.

Best Practices

  • Use specific, clear prompts for better results
  • Reference previous step outputs using {{step_n.output}} syntax
  • Set appropriate temperature based on task (low for factual, high for creative)
  • Use system instructions to maintain consistent tone across runs
  • Test with different models to find the best balance of quality and speed
  • Set max tokens to control costs for large-scale workflows
  • Enable web search for queries requiring current information or recent events
  • Use allowed domains to limit sources to trusted, authoritative sites
  • Choose “Separate answer and citations” format when processing citations in subsequent blocks
  • Note that web search adds latency and increases credit usage per request

Common Use Cases

Use Case: Configuration Tips
  • Content summarization: Low temperature (0.2), clear system instructions
  • Creative writing: High temperature (0.8-1.0), flexible max tokens
  • Data extraction: JSON output format, structured user prompt
  • SEO content briefs: Medium temperature (0.5-0.7), detailed system role
  • Research with sources: Enable web search, citations on, allowed domains for trusted sources
  • Current event analysis: Enable web search, low temperature (0.3), citations for verification
  • Fact checking: Enable web search, structured citations format for data processing

What’s Next

Now that you understand the LLM block: