127 AI Patterns for Salesforce: Encoding 20 Years of Expertise into Prompts
Each pattern is a structured AI prompt template. system.md for the prompt. pattern.toml for the metadata. 20+ categories. Three quality tiers. Template variables that inject live org data. This is how you scale Salesforce expertise with AI.
What Is a Pattern?
A pattern is a structured AI prompt template loaded from a directory. Each pattern directory contains two files: system.md (the prompt itself) and pattern.toml (metadata about the prompt). The pattern is not the AI output. It is the instruction set that produces the output. Think of it like a function signature for the LLM.
patterns/
  review_apex/
    system.md       # The prompt
    pattern.toml    # Metadata
  analyze_org_health/
    system.md
    pattern.toml
  create_test_class/
    system.md
    pattern.toml
  ... (124 more)
sf-fabric loads these at startup via the PatternStore:
// From sf-fabric-core/src/pattern.rs
pub struct Pattern {
    pub name: String,                       // Directory name
    pub system_prompt: String,              // Contents of system.md
    pub variables: HashMap<String, String>, // Extracted {{vars}}
    pub metadata: PatternMetadata,
    pub source: PathBuf,
}
impl Pattern {
    pub fn load(dir: &Path) -> Result<Self> {
        let name = dir.file_name()
            .and_then(|n| n.to_str())
            .ok_or_else(|| anyhow!("Invalid pattern directory"))?
            .to_string();
        let system_md = dir.join("system.md");
        let system_prompt = std::fs::read_to_string(&system_md)?;
        let variables = extract_variables(&system_prompt);
        let metadata_path = dir.join("pattern.toml");
        let metadata = if metadata_path.exists() {
            let content = std::fs::read_to_string(&metadata_path)?;
            toml::from_str(&content).unwrap_or_default()
        } else {
            PatternMetadata::default()
        };
        Ok(Self { name, system_prompt, variables, metadata, source: dir.to_path_buf() })
    }
}
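The directory scan itself can be sketched in a few lines. This is a simplified stand-in for the real PatternStore (the function name `discover_patterns` is illustrative, not the actual API): any subdirectory containing a system.md counts as a pattern, and pattern.toml remains optional.

```rust
use std::fs;
use std::path::{Path, PathBuf};

/// Hypothetical sketch: collect every subdirectory of `root` that
/// contains a system.md. The directory name becomes the pattern name;
/// pattern.toml is optional (defaults apply when it is absent).
fn discover_patterns(root: &Path) -> Vec<PathBuf> {
    let mut found = Vec::new();
    if let Ok(entries) = fs::read_dir(root) {
        for entry in entries.flatten() {
            let dir = entry.path();
            if dir.is_dir() && dir.join("system.md").exists() {
                found.push(dir);
            }
        }
    }
    found.sort(); // deterministic order regardless of filesystem
    found
}
```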
Pattern Metadata
The pattern.toml file carries everything the system needs to use the pattern correctly:
# pattern.toml for review_apex
category = "review"
tags = ["apex", "code-quality", "best-practices"]
description = "Reviews Apex code for governor limits, security, bulk patterns, and naming conventions"
difficulty = "intermediate"
requires_org = false
requires_input = true
estimated_tokens = 3500
quality = "gold"
recommended_strategies = ["cot", "governor_aware"]
recommended_models = ["claude-sonnet-4-5-20250929"]
The Rust struct that deserializes this:
#[derive(Debug, Default, Deserialize)] // Default + Deserialize back toml::from_str + unwrap_or_default
#[serde(default)]                      // lets a partial pattern.toml deserialize cleanly
pub struct PatternMetadata {
    pub category: Option<String>,
    pub tags: Vec<String>,
    pub description: Option<String>,
    pub difficulty: Option<String>,      // beginner, intermediate, advanced, expert
    pub requires_org: Option<bool>,      // Needs --org flag?
    pub requires_input: Option<bool>,    // Needs stdin/--message?
    pub estimated_tokens: Option<u32>,   // Cost estimation
    pub recommended_models: Vec<String>,
    pub recommended_strategies: Vec<String>,
    pub quality: Option<String>,         // gold, silver, bronze
}
Each field has a specific purpose in the system:
- category groups patterns for discovery. "review", "create", "analyze", "test", "deploy", "migrate", "security", "performance", "data", "integration".
- difficulty helps users find patterns at their skill level. A beginner pattern might generate a simple trigger. An expert pattern might architect a multi-org integration.
- requires_org tells the CLI whether to require a --org flag. Patterns that use {{sf:*}} variables need a live org connection. Patterns that work on pasted code do not.
- estimated_tokens enables cost estimation before execution. A 3,500-token pattern at Claude Sonnet pricing costs roughly $0.02 per run. Users can decide whether to proceed.
- recommended_strategies tells the system which reasoning strategies pair well with this pattern. A code review pattern benefits from Chain-of-Thought (cot) and Governor-Aware reasoning. A security audit pattern benefits from Security-First and Reflexion.
- quality is the quality tier: gold, silver, or bronze.
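The requires_org and requires_input flags exist so the CLI can fail fast, before tokens are spent. A minimal pre-flight check built on those two fields might look like this (`RunContext` and `preflight` are illustrative names, not real sf-fabric types):

```rust
/// Illustrative run context: what the caller has actually supplied.
struct RunContext {
    has_org: bool,   // was --org passed?
    has_input: bool, // did stdin or --message provide content?
}

/// Hypothetical pre-flight check against the metadata fields above.
/// Refuses to run a pattern whose declared requirements are unmet.
fn preflight(
    requires_org: Option<bool>,
    requires_input: Option<bool>,
    ctx: &RunContext,
) -> Result<(), String> {
    if requires_org.unwrap_or(false) && !ctx.has_org {
        return Err("this pattern uses {{sf:*}} variables; pass --org".into());
    }
    if requires_input.unwrap_or(false) && !ctx.has_input {
        return Err("this pattern needs input via stdin or --message".into());
    }
    Ok(())
}
```

Absent fields default to "not required," which matches the Option<bool> representation in the struct.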
Quality Tiers
Not all 127 patterns are equal. Gold patterns have been tested against 50+ real inputs, reviewed by Salesforce architects, and consistently produce production-quality output. Silver patterns work well but have edge cases. Bronze patterns are functional but need human review of the output.
Quality tier distribution (127 patterns):
  Gold:   41 patterns (32%)
  Silver: 58 patterns (46%)
  Bronze: 28 patterns (22%)
Gold patterns by category:
  review:   review_apex, review_lwc, review_flow,
            review_security, review_soql
  create:   create_trigger, create_test_class,
            create_lwc_component
  analyze:  analyze_org_health, analyze_permissions,
            analyze_governor_usage
  test:     generate_unit_tests, generate_integration_tests
  security: audit_fls, audit_sharing, audit_crud
The quality tier affects how the system presents results. Gold patterns run unattended. Silver patterns include a disclaimer. Bronze patterns include an explicit "review this output carefully" warning. The tier is metadata, not enforcement. The user can override. But the default behavior protects users from trusting low-quality output.
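The tier-to-presentation mapping described above is simple enough to sketch directly. This is an illustrative helper, not the actual sf-fabric code, and the message strings are invented:

```rust
/// Sketch of tier-dependent presentation: gold runs clean, silver gets
/// a disclaimer, bronze (or a missing tier) gets the strongest warning.
fn output_notice(quality: Option<&str>) -> Option<&'static str> {
    match quality {
        Some("gold") => None, // runs unattended, no banner
        Some("silver") => {
            Some("Note: this pattern has known edge cases; spot-check the output.")
        }
        // bronze, unknown tiers, and missing metadata all get the warning
        _ => Some("Review this output carefully before using it."),
    }
}
```

Treating an unknown or missing tier like bronze is the conservative default: the system should never silently upgrade trust.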
The 20+ Categories
The 127 patterns cover the full Salesforce development lifecycle:
Category breakdown:

review (18 patterns)
  review_apex, review_lwc, review_flow, review_soql,
  review_triggers, review_security, review_performance,
  review_test_coverage, review_naming, review_architecture,
  ...

create (24 patterns)
  create_trigger, create_test_class, create_lwc_component,
  create_apex_class, create_flow, create_validation_rule,
  create_permission_set, create_custom_metadata,
  create_platform_event, create_batch_class,
  ...

analyze (16 patterns)
  analyze_org_health, analyze_permissions,
  analyze_governor_usage, analyze_field_usage,
  analyze_automation_inventory, analyze_data_model,
  analyze_integration_landscape, analyze_test_coverage,
  ...

test (12 patterns)
  generate_unit_tests, generate_integration_tests,
  generate_mock_data, generate_test_utilities,
  ...

migrate (10 patterns)
  plan_data_migration, map_field_types,
  generate_migration_script, validate_migration,
  ...

security (9 patterns)
  audit_fls, audit_sharing, audit_crud,
  detect_pii, audit_connected_apps,
  ...

performance (8 patterns)
  optimize_soql, optimize_trigger,
  analyze_batch_performance, detect_ldv,
  ...

deploy (7 patterns)
  generate_package_xml, validate_deployment,
  generate_destructive_changes, plan_release,
  ...

agentforce (10 patterns)
  create_agent_action, create_agent_topic,
  analyze_agent_flow, review_agent_security,
  ...

vertical (13 patterns)
  health_cloud_data_model, financial_cloud_compliance,
  education_cloud_schema, nonprofit_cloud_npsp,
  ...
Template Variables
Patterns use three types of template variables:
1. Generic variables: {{name}}, {{language}}, {{role}}
- Replaced by --var flag or defaults from pattern.toml
- Extracted by regex: \{\{([a-zA-Z_][a-zA-Z0-9_]*)\}\}
2. Reserved sf: variables: {{sf:org_name}}, {{sf:metadata:Account}}
- Resolved by querying live Salesforce orgs
- Requires --org flag
3. Plugin variables: {{plugin:datetime:now}}, {{plugin:file:/path}}
- Resolved by registered TemplatePlugin implementations
- Extensible via the plugin system
The variable extraction happens at load time:
fn extract_variables(content: &str) -> HashMap<String, String> {
    // Note: this character class has no ':', so {{sf:*}} and
    // {{plugin:*}} placeholders are never captured here in the first
    // place; the prefix checks below are defensive.
    let re = Regex::new(
        r"\{\{([a-zA-Z_][a-zA-Z0-9_]*)\}\}"
    ).unwrap();
    let mut vars = HashMap::new();
    for cap in re.captures_iter(content) {
        let var_name = cap[1].to_string();
        // Skip {{input}} and the reserved sf:/plugin: namespaces
        if var_name != "input"
            && !var_name.starts_with("sf:")
            && !var_name.starts_with("plugin:") {
            vars.entry(var_name).or_insert_with(String::new);
        }
    }
    vars
}
The render method applies substitutions in order: {{input}} first, then explicit --var values, then defaults for any placeholders that remain:
pub fn render(
    &self,
    vars: &HashMap<String, String>,
    input: &str
) -> String {
    let mut result = self.system_prompt.clone();
    // Replace {{input}} with user content
    result = result.replace("{{input}}", input);
    // Replace named variables from --var flags
    for (key, value) in vars {
        let placeholder = format!("{{{{{key}}}}}");
        result = result.replace(&placeholder, value);
    }
    // Apply defaults for unreplaced variables
    for (key, default_value) in &self.variables {
        let placeholder = format!("{{{{{key}}}}}");
        if result.contains(&placeholder) {
            result = result.replace(&placeholder, default_value);
        }
    }
    result
}
What a Pattern Looks Like
Here is a real pattern system.md for review_apex (abbreviated):
# IDENTITY
You are a senior Salesforce developer and architect with
20 years of experience. You specialize in Apex code review
with deep knowledge of governor limits, bulkification
patterns, security best practices, and the Salesforce
execution context.
# PURPOSE
Review the provided Apex code for:
1. Governor limit violations and risks
2. Bulkification issues (queries/DML in loops)
3. Security vulnerabilities (CRUD/FLS, injection, XSS)
4. Naming convention adherence
5. Test coverage and testability
6. Error handling and logging
7. Documentation quality
8. Performance optimization opportunities
# OUTPUT FORMAT
Return a structured review with:
- Overall assessment (1-10 score with rationale)
- Critical issues (must fix before deploy)
- Warnings (should fix, not blocking)
- Suggestions (nice-to-have improvements)
- Refactored code (if critical issues found)
For each issue, provide:
- Line number or code reference
- Severity (Critical / Warning / Suggestion)
- Explanation of the problem
- Concrete fix with code example
# CONSTRAINTS
- Do NOT suggest using @future or Queueable unless the
original code has a legitimate async use case
- Do NOT flag Salesforce-standard patterns as issues
(e.g., trigger handler patterns, selector patterns)
- DO flag any SOQL or DML inside a for loop
- DO flag any missing null checks on SOQL results
- DO flag any hardcoded IDs or org-specific references
# INPUT
{{input}}
Notice how specific this is. It does not say "review this code." It says exactly what to look for, in what order, with what severity levels, and with explicit constraints about what NOT to flag. This specificity is the difference between a pattern and a prompt. A prompt is a question. A pattern is an engineering specification for AI behavior.
The Compound Effect
Individual patterns are useful. Chained patterns are transformative. The real power of 127 patterns is not using one at a time. It is composing them into workflows:
Workflow: New Feature Development

  Step 1: analyze_data_model
    Input:  "We need to track customer subscriptions
            with renewal dates and usage metrics"
    Output: ERD with objects, fields, relationships

  Step 2: create_apex_class
    Input:  Output from Step 1 + business requirements
    Output: Apex service class with CRUD operations

  Step 3: review_apex
    Input:  Output from Step 2
    Output: Code review with issues and fixes

  Step 4: create_test_class
    Input:  Output from Step 2 (with Step 3 fixes applied)
    Output: Test class with 90%+ coverage

  Step 5: review_security
    Input:  Output from Steps 2 + 4
    Output: Security audit with CRUD/FLS compliance check

  Step 6: generate_package_xml
    Input:  All components from Steps 1-5
    Output: Deployment package ready for validation
Each step produces output that feeds the next step. The analyze pattern creates context that the create pattern uses. The review pattern catches issues that the test pattern accounts for. The security pattern validates that the create and test patterns did not introduce vulnerabilities.
A senior developer does this workflow mentally. They analyze, design, code, review, test, and secure. They do it well because they have 10+ years of experience internalizing these steps. The pattern chain encodes that same workflow, making it available to a junior developer paired with an LLM.
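Structurally, a chain like this is just a fold: each step consumes the previous step's output. A toy sketch (the step functions here stand in for pattern+LLM calls; none of these names are real sf-fabric APIs):

```rust
/// Toy chain runner: thread each step's output into the next step.
/// Each `fn(&str) -> String` stands in for one pattern execution.
fn run_chain(steps: &[fn(&str) -> String], initial: &str) -> String {
    steps
        .iter()
        .fold(initial.to_string(), |acc, step| step(&acc))
}
```

The real system adds error handling and human checkpoints between steps, but the data flow is exactly this shape.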
With 127 patterns and 13 strategies, there are 127 x 13 = 1,651 possible pattern+strategy combinations. Each combination produces different output tuned for different situations. review_apex with governor_aware strategy emphasizes limit violations. review_apex with security_first strategy emphasizes CRUD/FLS compliance. Same pattern, different lens, different output.
Pattern Economics
Token economics per pattern run:

  System prompt (pattern):   800-3,000 tokens
  Strategy (if applied):     200-500 tokens
  Context (if applied):      500-2,000 tokens
  User input:                100-5,000 tokens
  -------------------------------------------
  Total input:               1,600-10,500 tokens
  Output budget:             1,000-4,000 tokens

Cost per run (Claude Sonnet):
  Low end:  ~$0.02
  High end: ~$0.09
  Average:  ~$0.04

Cost per workflow (6-step chain):
  Average: ~$0.24

Cost per developer per day (20 pattern runs):
  Average: ~$0.80/day

Cost per developer per month:
  Average: ~$17.60/month
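The arithmetic behind these figures is a simple linear model. A sketch, with the per-token prices as stated assumptions (roughly Claude Sonnet list pricing at the time of writing: ~$3 per 1M input tokens, ~$15 per 1M output tokens):

```rust
/// Back-of-the-envelope cost model for one pattern run.
/// Prices are assumptions, not quotes; adjust for your provider.
fn run_cost_usd(input_tokens: u32, output_tokens: u32) -> f64 {
    const INPUT_PER_M: f64 = 3.0;   // USD per 1M input tokens (assumed)
    const OUTPUT_PER_M: f64 = 15.0; // USD per 1M output tokens (assumed)
    (input_tokens as f64 / 1e6) * INPUT_PER_M
        + (output_tokens as f64 / 1e6) * OUTPUT_PER_M
}
```

A mid-range run (3,500 input tokens, 1,500 output tokens) comes out near $0.033, consistent with the ~$0.04 average above once heavier runs are mixed in.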
$17.60 per month per developer for AI-assisted Salesforce development. Compare that to the cost of a senior Salesforce developer ($150-250/hour) doing the same reviews manually. A single code review that takes 30 minutes of senior time costs $75-125. The AI does it in 8 seconds for $0.04.
The pattern library is not replacing senior developers. It is scaling their expertise. One senior developer writes patterns. Every developer on the team uses them. The knowledge compounds instead of staying locked in one person's head.
Building Your Own Patterns
Adding a new pattern to sf-fabric is creating a directory with two files:
mkdir patterns/my_custom_pattern
cat > patterns/my_custom_pattern/system.md <<'EOF'
# IDENTITY
You are a {{role}} specialist.
# PURPOSE
Analyze the provided {{language}} code for {{focus_area}}.
# INPUT
{{input}}
EOF
cat > patterns/my_custom_pattern/pattern.toml <<'EOF'
category = "analyze"
tags = ["custom"]
description = "Custom analysis pattern"
difficulty = "intermediate"
requires_input = true
quality = "bronze"
EOF
The PatternStore picks it up automatically on the next run. No compilation. No registration. No configuration files to edit. Drop a directory with system.md and the system uses it.
This is intentional. The barrier to creating a new pattern should be exactly as low as creating a new text file. Because the person with the expertise to write a good pattern is not always the person who knows how to configure a software system. A Salesforce architect should be able to encode their knowledge without writing any Rust or touching any config.
127 patterns is a starting point. We add new ones every week based on real problems encountered in client engagements. The library grows. The expertise compounds. And every new pattern is immediately available to every user of the system.