Overview
The LLM block sends prompts to AI language models and returns generated text. Use it for content creation, summarization, analysis, or any text-based AI task.
Configuration
Model
Select the AI model to use for generation. All major models are available, and the list is updated regularly.
Output Format
Define how the model returns its response. Options:
- Text: Plain text response
- JSON: Structured data response
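For instance, with JSON output a product-extraction prompt might return structured data like the sketch below (the field names are illustrative; the actual shape depends on your prompt):

```json
{
  "title": "Wireless Noise-Cancelling Headphones",
  "price": 129.99,
  "in_stock": true
}
```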
Temperature
Controls randomness in the model’s output. Range: 0.0 to 1.0
- Lower values (0.0 - 0.3): More precise and deterministic responses
- Higher values (0.7 - 1.0): More creative and varied responses
Max Tokens
Limits the length of the generated response. Range: 1 to 32,000 (varies by model). Leave empty for no limit, or set a specific value to control output length and cost. Example: 2,500 tokens ≈ 1,875 words.
System Instructions
Define the model’s role, tone, and behavior. This optional field sets context for all messages in this block. Example:
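A system instruction for a summarization workflow might read something like this (illustrative only):

```
You are a senior content editor. Summarize text in a neutral, professional
tone and keep every summary under 150 words.
```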
Enable Web Search
Allow the model to search the web for current information. Available for: OpenAI and Perplexity models. When enabled, the model can retrieve up-to-date information from the internet to answer your queries. Useful for queries requiring recent data, news, or facts beyond the model’s training cutoff. Note: Web search increases response time and credit usage.
Citations
Include source references in the model’s response. Requirements: Enable Web Search must be turned on.
Citation Formats:
- Include within the answer: Inline citations using [1], [2] format with a References section at the end
- Separate answer and citations: JSON format with separate answer and citations fields
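With the second format, a response might look like the sketch below. The answer and citations field names come from the description above; the exact structure of each citation entry is an assumption and may vary by model:

```json
{
  "answer": "The telescope launched on December 25, 2021 [1].",
  "citations": [
    { "index": 1, "title": "NASA: James Webb Space Telescope", "url": "https://www.nasa.gov/webb" }
  ]
}
```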
Allowed Domains
Limit web search results to specific domains. Requirements: Enable Web Search must be turned on. Maximum: 20 domains.
Format: Enter domains without the http:// or https:// prefix.
- Correct: wikipedia.org
- Incorrect: https://wikipedia.org
Provider support:
- OpenAI: Supports allowed domains only
- Perplexity: Supports both allowed and disallowed domains (also accepts full URLs for granular filtering)
Messages
Define the conversation between the user and the assistant.
User Message (Required)
The prompt sent to the AI model. This is where you reference placeholders and define your request. Use the { } icon to insert placeholders from previous steps or inputs. Example:
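A typical user message combines an instruction with a placeholder, along these lines (the step reference is illustrative):

```liquid
Summarize the following article in three bullet points:

{{ step_1.output }}
```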
Assistant Message
Optional: Pre-fill the start of the assistant’s response to guide output format or structure. Example:
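For instance, pre-filling the assistant message with the opening of a numbered list nudges the model to continue in that structure (illustrative):

```
Here are the three key points:
1.
```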
Using Liquid Templating in Prompts
You can use Liquid templating syntax in your prompts to create dynamic, data-driven messages. Liquid lets you reference workflow data, apply filters, use loops, and add conditional logic directly in your prompts.
Basic Variable Output
Reference previous step outputs:
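A minimal sketch (the step name step_1 and its output field are illustrative):

```liquid
Summarize this text: {{ step_1.output }}
```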
Using Filters
Transform data with filters:
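For example, standard Liquid filters such as truncate, default, and upcase can reshape values before they reach the model (step and field names are illustrative):

```liquid
Write a headline for: {{ step_1.output | truncate: 80 }}
Author: {{ step_1.author | default: "unknown" | upcase }}
```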
Loops in Prompts
Loop through arrays to build dynamic prompts:
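Assuming a previous step returned a products array, a for loop can list each item (names are illustrative):

```liquid
Compare the following products:
{% for product in step_2.output.products %}
- {{ product.name }}: {{ product.price }}
{% endfor %}
```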
Conditional Logic
Adjust prompts based on data:
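For example, an if/else block can switch the instruction based on an upstream value (the word_count field is an assumption):

```liquid
{% if step_1.output.word_count > 1000 %}
Write a detailed executive summary with section headings.
{% else %}
Write a one-paragraph summary.
{% endif %}
```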
Complex Prompt Example
Combine multiple Liquid features:
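One way such a prompt could look, combining assignment, filters, a loop, and a conditional (all step and field names are illustrative):

```liquid
{% assign reviews = step_2.output.reviews %}
You are analyzing {{ reviews | size }} customer reviews for {{ step_1.output.product_name | capitalize }}.

{% for review in reviews %}
Review {{ forloop.index }} ({{ review.rating }}/5):
{{ review.text | strip | truncate: 200 }}
{% endfor %}

{% if reviews.size > 10 %}
Group your findings into recurring themes.
{% else %}
Summarize each review individually.
{% endif %}
```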
Learn More
See Liquid Templating for a complete syntax guide, including:
- All available filters (string, number, array)
- Control flow (if/elsif/else, for loops, unless, case/when)
- Variables and assignments
- Common patterns and examples
- Input/output samples for all syntax
Error Handling
Define what happens if the LLM block fails. Options:
- Terminate Workflow: Stop execution immediately
- Continue Execution: Proceed to next step despite error
Best Practices
- Use specific, clear prompts for better results
- Reference previous step outputs using {{step_n.output}} syntax
- Set appropriate temperature based on task (low for factual, high for creative)
- Use system instructions to maintain consistent tone across runs
- Test with different models to find the best balance of quality and speed
- Set max tokens to control costs for large-scale workflows
- Enable web search for queries requiring current information or recent events
- Use allowed domains to limit sources to trusted, authoritative sites
- Choose the “Separate answer and citations” format when downstream blocks need to process citations programmatically
- Note that web search adds latency and increases credit usage per request
Common Use Cases
| Use Case | Configuration Tips |
|---|---|
| Content summarization | Low temperature (0.2), clear system instructions |
| Creative writing | High temperature (0.8-1.0), flexible max tokens |
| Data extraction | JSON output format, structured user prompt |
| SEO content briefs | Medium temperature (0.5-0.7), detailed system role |
| Research with sources | Enable web search, citations on, allowed domains for trusted sources |
| Current event analysis | Enable web search, low temperature (0.3), citations for verification |
| Fact checking | Enable web search, structured citations format for data processing |
What’s Next
Now that you understand the LLM block:
- Learn about other AI blocks in Google AI Overview
- See how to connect blocks in Building Workflows
- Use placeholders in Placeholder Referencing