Every feature.
No fluff.

This is the full inventory of what Pathfinder does. Not a marketing summary. Not "key highlights." Everything. If you are evaluating this platform against Talend, Informatica, Fivetran, or building your own with Data Loader and spreadsheets, this page tells you exactly what you are getting.

68 audit checks · 15 connector types · 4,695 automated tests · 38 transform functions · 174 AI patterns · 212 backend commands

DuckDB Audit Engine

Connect to any Salesforce org and run a comprehensive audit before touching anything. The audit engine uses DuckDB as its analytical backend for fast, columnar processing of org metadata. 68 registered checks across security, data quality, schema health, and architecture. Results in minutes, not days.

68 Checks in Two Tiers

23 Universal Checks

Apply to every Salesforce org regardless of industry. Security configuration, sharing model, field-level security gaps, API limit consumption, profile and permission set analysis, governor limit exposure, automation complexity, storage utilization, login patterns, and schema health metrics.

8+ Vertical-Specific Checks

Industry-aware rules for nonprofits, healthcare, financial services, and other verticals. The engine auto-detects the installed cloud products (14 cloud types including Sales, Service, Nonprofit, Health Cloud, Financial Services Cloud, Education Cloud) and applies relevant checks.

Cloud Detection (14 Types)

Pathfinder identifies which Salesforce clouds are active in the connected org: Sales Cloud, Service Cloud, Nonprofit Cloud (NPC), NPSP, Health Cloud, Financial Services Cloud, Education Cloud, Commerce Cloud, Experience Cloud, Marketing Cloud, CPQ, Field Service, Tableau CRM, and Pardot. The detected cloud type determines which vertical-specific audit checks run and which migration accelerators apply.

pf-audit CLI

Run audits from the command line, without opening the desktop app, using four commands.

Report Export

Three output formats: JSON (for programmatic consumption and CI/CD pipelines), Markdown (for documentation and Confluence/Notion pages), and HTML (for client-facing branded reports). Each report includes findings organized by severity (Critical, High, Medium, Low, Info), category groupings, affected metadata references, and remediation guidance.

DuckDB Under the Hood

Org metadata (objects, fields, profiles, permission sets, flows, validation rules, Apex classes) is extracted and loaded into DuckDB in-memory tables. Checks are analytical SQL queries that run at columnar speed. Profiling a 500-object org with 10,000 fields takes seconds, not the minutes required by row-by-row analysis. The same DuckDB engine powers data profiling during migration: percentiles, cardinality, pattern detection, and anomaly scoring.

Data Connectors

Fifteen connector types covering the nonprofit CRM ecosystem, file formats, databases, cloud services, and generic APIs. Each connector handles authentication, schema discovery, pagination, and streaming extraction. Paste a URL or connection string and Pathfinder auto-detects the right connector.

| Connector | Direction | Auth | Streaming | What it connects to |
|---|---|---|---|---|
| Bloomerang | Source | API Key | Yes | Donor management CRM for nonprofits. Constituents, gifts, addresses, interactions. |
| Raiser's Edge NXT | Source | OAuth2 | Yes | Blackbaud SKY API. Constituents, gifts, campaigns, notes. 600 calls/minute rate limit handled. |
| DonorPerfect | Source | API Key | Yes | Fundraising CRM. Donors, gifts, pledges, soft credits. |
| Salesforce | Both | OAuth2 JWT | Yes | NPSP or Nonprofit Cloud. SOQL extraction, Composite upsert (200/call), Bulk API v2, Metadata API, Tooling API. |
| Dynamics 365 | Both | OAuth2 | Yes | Microsoft CRM. OData protocol with batch requests. |
| HubSpot | Target | OAuth2 | Partial | Contacts, companies, deals, custom properties, associations. |
| CSV | Both | None | Yes | Any CSV file. Auto-detects encoding (UTF-8, UTF-16, Latin1, Windows-1252), delimiter, and column types. |
| Excel | Both | None | No | XLSX and XLS files. Multi-sheet support. Header row detection. |
| JSON / JSONL | Source | None | Yes | Structured JSON files, line-delimited JSONL. JSONPath for nested record extraction. |
| XML | Source | None | Yes | XML files with XPath-based record extraction and namespace support. |
| Generic REST API | Source | API Key / Bearer / Basic / OAuth2 | Yes | Connect to any REST API. Configure base URL, auth, endpoints, JSONPath, pagination. |
| SQL Database | Source | Connection string | Yes | PostgreSQL, MySQL, SQLite. Schema from information_schema. Read-only extraction. |
| Google Sheets | Source | API Key | Yes | Google Sheets API v4. Paste the spreadsheet URL. Each sheet becomes a data table. |
| Airtable | Source | API Token | Yes | Airtable REST API. Base ID from URL. Tables, fields, linked records. |
| SFTP | Source | Password / SSH Key | Yes | Secure file transfer. Browse remote directory. Downloads files, delegates to CSV/Excel/JSON/XML connectors. |

YAML-Defined Custom Connectors

For source systems not on the list: define a connector in a YAML file. Specify the base URL, auth method, endpoints, JSONPath for record extraction, pagination style, and rate limits. Drop the file in ~/.pathfinder/connectors/ and it appears immediately. No Rust code. No recompilation.
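A custom connector file of this shape might look like the sketch below. Every key name, endpoint, and value here is an assumption for illustration; the actual Pathfinder schema may differ.

```yaml
# Hypothetical connector definition for an unlisted CRM API.
# All field names are illustrative, not the real Pathfinder schema.
name: example-crm
base_url: https://api.example-crm.com/v1
auth:
  method: bearer_token        # or api_key / basic / oauth2
endpoints:
  - name: contacts
    path: /contacts
    records: "$.data[*]"      # JSONPath to the record array in the response
pagination:
  style: cursor
  cursor_param: next_cursor
rate_limit:
  requests_per_minute: 60
```

Dropped into ~/.pathfinder/connectors/, a file like this would describe everything the generic REST machinery needs: where to call, how to authenticate, and how to pull records out of each response.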

Auto-Detection

Paste a URL, file path, or connection string into the Smart Input field. Pathfinder detects the connector type from file extensions (.csv, .xlsx, .json), URL schemes (postgres://, sftp://), known hosts (docs.google.com/spreadsheets, airtable.com, salesforce.com), or HTTP content types, and pre-fills the credential form with extracted values.
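As a sketch of how this kind of detection can work (the mapping tables below are illustrative, not Pathfinder's actual rules):

```python
from urllib.parse import urlparse

# Illustrative connector auto-detection: map file extensions, URL schemes,
# and known hosts to connector types. Names are assumptions for this sketch.
EXTENSIONS = {".csv": "csv", ".xlsx": "excel", ".xls": "excel",
              ".json": "json", ".xml": "xml"}
SCHEMES = {"postgres": "sql", "mysql": "sql", "sftp": "sftp"}
HOSTS = {"docs.google.com": "google_sheets", "airtable.com": "airtable"}

def detect_connector(value: str) -> str:
    parsed = urlparse(value)
    if parsed.scheme in SCHEMES:                 # postgres://, sftp://, ...
        return SCHEMES[parsed.scheme]
    host = parsed.netloc.lower()
    for known, connector in HOSTS.items():       # known SaaS hosts
        if host.endswith(known):
            return connector
    if "salesforce.com" in host:
        return "salesforce"
    for ext, connector in EXTENSIONS.items():    # local file paths
        if value.lower().endswith(ext):
            return connector
    return "generic_rest" if parsed.scheme in ("http", "https") else "unknown"
```

The real implementation also inspects HTTP content types; this sketch stops at the cheap, offline signals.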

Connector SDK

Every connector is built on five reusable SDK modules.

That's connectors and auditing. Keep scrolling.

The transform engine, conflict resolution, quality gates, AI patterns, self-healing pipeline, and identity resolution are all below.

Request early access

Transform Engine

A complete expression language with a Pratt parser, 10 operator precedence levels, and 38 built-in functions. Field references use $SourceField for source values and @TargetField for target lookups. Supports null coalescing (??), optional chaining (?.), ternary conditionals, and string concatenation with the & operator.

38 Built-In Functions

String (16 functions)

UPPER LOWER TRIM LEN CONCAT LEFT RIGHT SUBSTRING REPLACE REGEX_REPLACE REGEX_MATCH SPLIT JOIN PROPER LPAD RPAD

Date (7 functions)

TODAY NOW YEAR MONTH DAY DATE_ADD DATE_DIFF

Conditional (2 functions)

IF CASE

Numeric (6 functions)

ROUND FLOOR CEILING ABS MIN MAX

Null Handling (3 functions)

COALESCE NULLIF ISBLANK

Conversion (4 functions)

TEXT VALUE DOUBLE BLANKVALUE

Lookup Tables

Map source values to target values using named lookup tables. Case-insensitive matching with configurable defaults. Load lookup tables from CSV files or define inline. The LOOKUP(table, key, default) function resolves values during transformation. Tables are cached per migration and serialized in checkpoints.
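A minimal sketch of the case-insensitive lookup behavior described above (the class and its interface are illustrative, not Pathfinder's internals):

```python
# Sketch of a lookup table with case-insensitive matching and a default,
# mirroring the described LOOKUP(table, key, default) semantics.
class LookupTable:
    def __init__(self, mapping: dict[str, str]):
        # Normalize keys once so every lookup is case-insensitive.
        self._mapping = {k.lower(): v for k, v in mapping.items()}

    def lookup(self, key, default=None):
        if key is None:
            return default
        return self._mapping.get(str(key).lower(), default)

states = LookupTable({"CA": "California", "NY": "New York"})
```

Usage: `states.lookup("ca")` resolves despite the case difference, and unknown keys fall back to the configured default.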

Aggregation Functions

SUM, COUNT, AVG, MIN, MAX, and COUNT_DISTINCT with GROUP BY support. Two-pass architecture: first pass collects group values, second pass applies aggregated results.
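The two-pass shape can be sketched like this for SUM (function and field names are illustrative):

```python
from collections import defaultdict

# Sketch of the described two-pass aggregation: pass 1 collects values per
# group key, pass 2 attaches the aggregate back onto every row in the group.
def aggregate_sum(rows, group_by, value_field, out_field):
    groups = defaultdict(float)
    for row in rows:                      # pass 1: collect group totals
        groups[row[group_by]] += row[value_field]
    for row in rows:                      # pass 2: apply aggregated results
        row[out_field] = groups[row[group_by]]
    return rows

gifts = [
    {"donor": "d1", "amount": 100.0},
    {"donor": "d1", "amount": 50.0},
    {"donor": "d2", "amount": 25.0},
]
aggregate_sum(gifts, "donor", "amount", "donor_total")
```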

Row Context Variables

Access $ROW_NUMBER, $BATCH_NUMBER, $TOTAL_ROWS, $OBJECT_NAME, and $MIGRATION_ID in any expression. Set custom variables with $MY_VARIABLE syntax.

Validation Functions

Pre-built validators return structured results with standardized output:

Conflict Detection and Resolution

When source data and target data disagree, Pathfinder detects the conflict, classifies it, scores confidence for automatic resolution, and either resolves it or sends it to human review. Ten conflict types cover every situation from simple value mismatches to missing lookup references.

10 Conflict Types

| Type | Default Severity | What it means |
|---|---|---|
| DataMismatch | Low to Medium | Source and target have different values for the same field |
| DuplicateMatch | High | Source record matches multiple target records |
| MissingInTarget | Low | Source record has no matching target record (new record) |
| SourceWantsNull | Medium | Source value is empty but target has data. Clearing it may lose information. |
| TypeMismatch | High | Data types are incompatible (string in source, number in target) |
| ConstraintViolation | High | Value violates a target system constraint (field length, picklist, format) |
| DependencyMissing | Critical | Referenced record has not been migrated yet (lookup to an Account that does not exist) |
| InvalidPicklistValue | Medium | Value is not in the target picklist/enum. Needs mapping or addition to the picklist. |
| ConcurrentModification | High | Target record was modified after extraction. Source data may be stale. |
| BusinessRuleViolation | Medium | Custom business rule was violated (configurable per organization) |

9 Resolution Strategies

SourceWins

Source values overwrite target values for all conflicting fields

TargetWins

Target values preserved. Source changes rejected.

NewestWins

Most recently modified record wins based on timestamp comparison

OldestWins

Original/first data is authoritative

FieldMerge

Per-field rules: take source for some fields, target for others, concatenate where appropriate

PriorityRules

Configurable rule engine with AND/OR/NOT conditions and field-level actions

CustomRule

Expression-based logic using the transform engine

Skip

Record excluded from migration entirely

ManualReview

Flagged for human decision with full source/target comparison

Automatic Resolution with Confidence Scoring

The auto-resolver evaluates each conflict against four weighted factors: source data quality (40%), target data quality (30%), data recency (20%), and field importance (10%). Conflicts scoring above the configurable confidence threshold (default: 85%) are resolved automatically. Below 85%: human review. Bulk resolution applies a strategy to all conflicts matching a filter (by object, severity, or type).
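The weights come straight from the description above; the factor values and routing function in this sketch are illustrative:

```python
# Weighted confidence score for auto-resolution, per the described factors.
# Factor inputs are normalized 0.0-1.0 scores; names are assumptions.
WEIGHTS = {
    "source_quality": 0.40,
    "target_quality": 0.30,
    "recency": 0.20,
    "field_importance": 0.10,
}

def confidence(factors: dict[str, float]) -> float:
    """Return a 0-100 composite confidence score."""
    return 100 * sum(WEIGHTS[name] * factors[name] for name in WEIGHTS)

def route(factors: dict[str, float], threshold: float = 85.0) -> str:
    # At or above the threshold: resolve automatically. Below: human review.
    return "auto_resolve" if confidence(factors) >= threshold else "manual_review"
```

A conflict with strong source quality but stale data can still land under the threshold, which is exactly what pushes it to human review.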

Rules Engine

Define conflict resolution rules with composable conditions: FieldNameMatches("Phone") AND SeverityEquals(Low) triggers TakeSource. Rules execute in priority order. Supports AND, OR, NOT operators, and field-level actions including TakeSource, TakeTarget, Concatenate, Coalesce, and ApplyTransform.

Halfway through the features page, and we are just getting warmed up.

The migration pipeline, quality system, AI engine, identity resolution, and compliance reporting are still below.

Request early access

8-Stage Migration Pipeline

The MigrationOrchestrator executes an eight-phase pipeline with checkpoint/resume at every phase boundary. If the migration fails at stage 5 of 8, you resume from stage 5 without re-running stages 1 through 4.

  1. Extract records from the source system in configurable batches (50 to 2,000 records)
  2. Transform field values using the expression engine, lookup tables, and validation functions
  3. Validate against target system constraints, required fields, picklist values, and field lengths
  4. Detect conflicts by querying existing target records and comparing field-by-field
  5. Auto-resolve conflicts above the confidence threshold using configured strategies
  6. Handle errors by categorizing failures (6 categories), tracking rates, enforcing thresholds
  7. Upsert records to the target system with external ID matching (Composite or Bulk API)
  8. Post-validate by running reconciliation counts, spot-check comparisons, and relationship integrity checks

Adaptive Batch Sizing

Batch size starts at 200 records and adjusts after every batch based on two signals: if a batch takes longer than 30 seconds, the next batch is 25% smaller; if a batch completes in under 15 seconds with less than 1% errors, the next batch is 10% larger. Batch size is clamped to the active preset's bounds: Conservative (50-200), Balanced (200-500), or Aggressive (500-2,000).
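The adjustment rule reduces to a few lines. This sketch uses the thresholds stated above; the bound defaults are illustrative:

```python
# Adaptive batch sizing: shrink 25% on slow batches, grow 10% on fast,
# clean batches, clamp to the configured bounds. Defaults are illustrative.
def next_batch_size(current: int, elapsed_secs: float, error_rate: float,
                    lo: int = 50, hi: int = 2000) -> int:
    if elapsed_secs > 30:
        current = int(current * 0.75)          # too slow: back off 25%
    elif elapsed_secs < 15 and error_rate < 0.01:
        current = int(current * 1.10)          # fast and clean: grow 10%
    return max(lo, min(hi, current))
```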

Checkpoint/Resume

After every batch, Pathfinder saves a checkpoint containing: migration state, current object index, completed objects summary, current batch position, source pagination cursor, lookup cache snapshot, cumulative statistics, recent errors, and an MD5 checksum for integrity verification. If the app crashes, close it, or the power goes out, resume from exactly where you left off. Not from the beginning of the object. From the last batch.

Parallel Object Processing

Objects with no dependency relationships process concurrently (up to 3 in parallel). Dependency ordering uses Kahn's topological sort on the foreign key graph. If Contact depends on Account, Account processes first. If Account and Campaign have no relationship, they process in parallel.
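Kahn's algorithm on the object dependency graph can be sketched as follows (graph representation and names are illustrative):

```python
from collections import deque

# Kahn's topological sort over the object dependency graph. `deps` maps
# each object to the set of objects it depends on (its lookup targets).
def migration_order(deps: dict[str, set[str]]) -> list[str]:
    indegree = {obj: len(parents) for obj, parents in deps.items()}
    children: dict[str, list[str]] = {obj: [] for obj in deps}
    for obj, parents in deps.items():
        for parent in parents:
            children[parent].append(obj)
    ready = deque(sorted(o for o, d in indegree.items() if d == 0))
    order = []
    while ready:
        obj = ready.popleft()
        order.append(obj)
        for child in children[obj]:
            indegree[child] -= 1
            if indegree[child] == 0:
                ready.append(child)
    if len(order) != len(deps):
        raise ValueError("dependency cycle detected")
    return order

deps = {"Account": set(), "Campaign": set(),
        "Contact": {"Account"}, "Opportunity": {"Account", "Contact"}}
```

Account and Campaign both start with no unmet dependencies, so they are the candidates for parallel processing; Contact and Opportunity queue up as their parents complete.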

Two-Pass Upsert

For migrations where Contact.AccountId references an Account that has not been created yet: Pass 1 creates all records with external IDs but skips lookup fields. Pass 2 queries the Salesforce IDs of created records and updates the lookup fields. Automatic dependency detection identifies which fields need deferral.

Dry Run Mode

Execute the entire pipeline without writing a single record to Salesforce. Extract, transform, validate, detect conflicts, run quality gates. See exactly what would happen, including which records would fail and why, before committing.

Self-Healing Remediation

Nine remediation action types handle the errors that ruin every other migration tool:

| Salesforce Error | Remediation | What happens |
|---|---|---|
| REQUIRED_FIELD_MISSING | ApplyDefault | Applies configured default value, retries the record |
| STRING_TOO_LONG | TruncateField | Truncates to field max length, retries |
| DUPLICATE_VALUE | SkipRecord | Skips the duplicate (already exists in target) |
| Rate limit exceeded | WaitAndRetry | Waits 60 seconds, retries the batch |
| ENTITY_IS_DELETED | SkipRecord | Skips records referencing deleted entities |
| MALFORMED_ID | SkipRecord | Skips records with invalid Salesforce IDs |
| UNABLE_TO_LOCK_ROW | WaitAndRetry | Waits 5 seconds for row lock release, retries |
| Lookup not found | QueueForSecondPass | Defers to second pass after referenced objects exist |

Every remediation is logged in the event log with the original error, action taken, and whether the retry succeeded. Configurable per rule. Maximum 2 remediation attempts per record.

Data Quality System

Five Quality Gates

Every migration passes through five checkpoints. Each gate evaluates a set of checks. If any check fails, the gate turns red and the migration stops. Green gates pass. Yellow gates warn but allow continuation.

12 Field Assertion Types

Attach assertions to any field mapping. Evaluated at Gate 3 against the transformed batch.

  • NotNull with configurable threshold (e.g., 95% of records must be non-null)
  • Unique across all records in the batch
  • InSet with allowed values list (picklist validation)
  • MatchesRegex with custom pattern
  • Range for numeric min/max bounds
  • LengthBetween for string length constraints
  • CustomExpression using the transform engine
  • NoPii detects SSN and credit card patterns
  • ValidEmail checks RFC-compliant format
  • ValidPhone checks digit count by country
  • DateNotFuture rejects future dates
  • Positive rejects zero and negative numbers

29 Pre-Built Quality Rules

Zero-configuration rules that auto-apply based on target object type and detected vertical. Seven categories covering CRM data patterns:

Migration Health Score

Before every migration run, a 0-to-100 health score evaluates six weighted factors: connection health (25%), volume anomaly (20%), data freshness (15%), schema drift (15%), quality trend (15%), and error velocity from the last run (10%). Green (80+): proceed. Yellow (50-79): proceed with caution, specific concerns listed. Red (below 50): migration blocked with explanation.

Data Profiling

Statistical analysis of every field: null percentage, distinct value count, cardinality, min/max/mean/median/standard deviation for numerics, min/max/average length for strings, 25th/75th/95th percentiles, and pattern detection for 16+ data types (email, phone, SSN, zip, URL, UUID, Salesforce ID, currency, dates in multiple formats).

Schema Drift Detection

Compares the current source or target schema against the baseline captured when the mapping was created. Reports added fields, removed fields, type changes, nullable changes, and max length changes. Drift severity: None, Low (added fields only), Medium (10%+ fields changed), High (removed fields or type changes), Critical (20%+ drift or removed fields). High/Critical drift blocks migration.

Your current migration tool does not have a quality gate system.

Or an audit engine. Or a health score. Or adaptive batch sizing. Or a self-healing remediation pipeline. Because it was built for generic ETL, not precision data operations.

Request early access

Identity Resolution

The number one data problem in nonprofit migrations is duplicates. John Smith in Bloomerang, Jon Smyth in the event spreadsheet, and J. Smith in the email list are the same donor. Pathfinder finds them before they become three Salesforce contacts.

Four Matching Signals

Confidence Thresholds

Each match gets a 0-to-100 composite confidence score. Above 85: auto-merge (no human needed). 60 to 84: flagged for human review with side-by-side comparison. Below 60: treated as separate records. Thresholds are configurable per migration.

Salesforce Fabric AI Engine

174 AI patterns for Salesforce development, migration planning, security review, and Agentforce agent design. Not a chatbot. A composable AI composition system with a visual DAG pipeline editor.

Pattern Categories

Analysis (35 patterns)

Apex dependencies, API usage, automation conflicts, component architecture, custom metadata, data quality, Flow inventory, governor limits, org health, relationships, sharing model, storage, technical debt, test coverage

Review (22 patterns)

Apex code, batch Apex, connected apps, destructive changes, Flows, LWC, LWC accessibility, LWC tests, named credentials, package.xml, page layouts, permission sets, platform events, scheduled jobs, SOQL security, triggers, agent guardrails, agent instructions

Architecture (18 patterns)

Agentforce agent design, API strategy, async architecture, audit logging, CDC architecture, data model, integration, mobile strategy, multi-org, multi-tenant, package architecture, permission model, test strategy, case management, experience site, knowledge base, agent testing plan, AgentScript design

Migration (12 patterns)

Apex pattern conversion, Aura to LWC, workflow to Flow, API version migration, custom settings migration, Agentforce migration planning, Flow to agent action, AgentScript to Agentforce conversion

Service/Commerce/Health/Education (23 patterns)

Case management, omnichannel routing, entitlements, knowledge base, service metrics, commerce storefronts, FSC data model, Health Cloud, HIPAA compliance, care plan automation, education data model, student lifecycle

Optimization/Planning/Other (64 patterns)

Apex optimization, automation landscape, SOQL optimization, LWC performance, large data volume, deployment frequency, CI/CD pipeline, git workflow, data archival, duplicate management, Slack integration, Tableau dashboards, agent topic optimization, agent monitoring

14 Reasoning Strategies

Governor-Aware, Security-First, Migration-Safe, Bulkification, Multi-Org, ISV-Compliant, Agentforce Architect, Chain-of-Thought, Tree-of-Thought, Self-Refine, Self-Consistent, Reflexion, Ahead-of-Time, and Decomposition.

DAG Pipeline Editor

Seven node types: Pattern, Source, Sink, Conditional, Loop, Merge, and SubGraph. 15 edge transform operations: trim, uppercase, lowercase, first_line, last_line, first_n, line range, regex extract, JSONPath, template substitution, replace, split, line/word/char count. Conditional routing based on contains, equals, starts_with, ends_with, length, regex, JSONPath, empty/not_empty. Loop iteration over lines, JSON arrays, split delimiters, or JSONPath results with concat, JSON array, or numbered list aggregation.

LLM Support

Anthropic Claude, OpenAI GPT-4, and local Ollama models. Auto-detects available vendors from environment API keys. Streaming token-by-token output. Pattern execution with variable substitution, strategy composition, and context injection from the RAG document store.

Agentforce Integration

Live metadata extraction from connected Salesforce orgs via Tooling API: BotDefinition, BotVersion, BotDialog (Topics), BotDialogAction (Actions). Agent analysis with health scoring, topic overlap detection, guardrail gap identification, and instruction quality review. 10 dedicated Agentforce patterns. AgentScript analysis with complexity scoring and conversion planning.

Compliance, Audit, and Security

Automated Compliance Reports

One-click generation covering six sections: Data Classification (field sensitivity levels), PII Handling (detection methods, masking status, handling actions), Quality Assurance (gate results, error rates), Audit Trail (event log completeness, lineage coverage), Data Integrity (checkpoint counts, rollback availability), and Access Control (credential access log, operator identity). Overall status: Compliant, Conditionally Compliant, or Non-Compliant based on finding severity. Output as structured JSON for programmatic consumption.

Event Log

19 structured event types written as JSON-lines: MigrationStarted, MigrationCompleted, MigrationFailed, ObjectStarted, ObjectCompleted, BatchExtracted, RecordExtracted, FieldTransformed, RecordTransformed, ConflictDetected, ConflictResolved, GateEvaluated, AssertionEvaluated, RecordUpserted, RecordFailed, PiiDetected, RemediationApplied, CircuitBreakerTripped, CheckpointSaved. Query by record ID to replay a single record's full pipeline journey. Query by event type for aggregate analysis.

Data Lineage

Per-field transformation tracking: source field, source value, transform expression, result value, transform type (DirectCopy, Expression, DefaultValue, Lookup, Concatenation, Conditional, Identity), and timestamp. Query: "How did Account.Name get the value 'Acme Corp'?" Answer: "Source field FullName had value 'acme corp', transform UPPER($FullName) produced 'ACME CORP', then PROPER($FullName) produced 'Acme Corp'."

PII Detection and Masking

Detects SSN (XXX-XX-XXXX), credit card numbers (16 digits with separators), email addresses, phone numbers, passport numbers, driver's license numbers, bank accounts, and medical IDs. Four masking strategies: Redact ([REDACTED]), Partial (J**n), Hash ([HASH:a1b2c3d4]), and TypeLabel ([SSN]). Automatic log masking strips PII from event log output, tracing messages, and export files. Field-name-based detection for fields named ssn, credit_card, password, api_key, secret, token, private_key.
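The four masking strategies reduce to simple string rules. This sketch infers the Partial rule (keep first and last character) from the J**n example; the hash algorithm and type label are assumptions:

```python
import hashlib

# Sketch of the four masking strategies. The hash algorithm (SHA-256,
# truncated) and the fixed "[SSN]" label are illustrative assumptions.
def mask(value: str, strategy: str) -> str:
    if strategy == "redact":
        return "[REDACTED]"
    if strategy == "partial":
        if len(value) <= 2:
            return "*" * len(value)
        return value[0] + "*" * (len(value) - 2) + value[-1]
    if strategy == "hash":
        return f"[HASH:{hashlib.sha256(value.encode()).hexdigest()[:8]}]"
    if strategy == "type_label":
        return "[SSN]"   # real implementation labels by detected PII type
    raise ValueError(f"unknown strategy: {strategy}")
```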

Security Model

Native macOS desktop application. Credentials in macOS Keychain (never on disk, never in database, never in environment variables). Per-workspace SQLite database. All processing local. No telemetry. No cloud calls except to the source and target systems being migrated. The only network traffic is the same traffic you would generate using Salesforce Data Loader or Workbench.

Migration Recipes and Accelerators

Recipe Format

A YAML/JSON file encoding a complete migration configuration: source type, target type, object mappings, field mappings with transforms, assertions, conflict strategies, quality rules, and execution config (batch size, timeout, PII detection). Shareable, version-controlled, diffable in Git. Parameterizable with connection references.

Three Pre-Built Accelerators

Each accelerator includes: proven field mappings tested across dozens of migrations, transform expressions for common data transformations, quality rule references, conflict resolution strategies, and execution order with dependency awareness. Load an accelerator, connect your source and target, review and adjust, run.

Data Catalog and Glossary

Full-Text Search

Search any field, object, transform, mapping, connection, quality rule, or recipe across all migrations. Inverted index with tokenization and stop word filtering. Scoring: name matches (3x weight), business terms (2.5x), tags (2x), object name (1.5x), description (1x). "Show me every mapping that touches the Email field" returns results across all migrations with highlighted match context.

Nonprofit Business Glossary

Eight pre-defined terms: Constituent (synonyms: Contact, Donor, Supporter, Member), Gift (Donation, Contribution, Transaction, Payment), Campaign (Appeal, Fund, Initiative, Drive), Household (Family, Household Account), Pledge (Commitment, Promise, Recurring Gift), Soft Credit (Tribute, Honor Gift, Memorial Gift), LYBUNT (Last Year But Unfortunately Not This Year), SYBUNT (Some Year But Unfortunately Not This Year). Search the glossary to understand what nonprofit terminology maps to which Salesforce objects.

Everything Else

Migration Cost Calculator

Estimates total Salesforce API calls before execution: extract calls (based on record count / SOQL batch size), upsert calls (record count / 200), schema discovery calls (2 per object), conflict detection calls, and post-validation calls. Calculates daily API limit usage percentage and estimated duration. Warns when approaching 80% of daily limit. Warns when execution would exceed the limit.
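The arithmetic is back-of-the-envelope. This sketch covers the three components with simple formulas; the SOQL batch size and daily limit used here are illustrative inputs, and conflict-detection and post-validation calls are omitted:

```python
from math import ceil

# Simplified API cost estimate: extract + upsert + schema discovery calls.
# Batch sizes and the daily limit are illustrative, not fixed values.
def estimate_api_calls(records: int, objects: int,
                       soql_batch: int = 2000, upsert_batch: int = 200) -> int:
    extract = ceil(records / soql_batch)     # SOQL pages
    upsert = ceil(records / upsert_batch)    # Composite upsert (200/call)
    schema = 2 * objects                     # 2 discovery calls per object
    return extract + upsert + schema

calls = estimate_api_calls(records=50_000, objects=5)
usage_pct = 100 * calls / 100_000            # against an assumed 100k daily limit
```

For 50,000 records across 5 objects this yields 25 extract calls, 250 upsert calls, and 10 schema calls: a trivial fraction of a typical daily limit, which is the point of estimating before execution.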

Circuit Breaker

Tracks consecutive connector failures. After N failures (default: 5), the circuit opens and all calls fast-fail with a clear error message instead of sending more requests to a broken system. Cooldown period transitions to half-open state for a test call. Success closes the circuit. Failure reopens. Configurable threshold and cooldown.
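The closed/open/half-open state machine can be sketched in a few lines (the injectable clock is for testability; interface names are illustrative):

```python
import time

# Minimal circuit breaker: open after N consecutive failures, half-open
# after the cooldown elapses, close again on a successful test call.
class CircuitBreaker:
    def __init__(self, threshold: int = 5, cooldown_secs: float = 30.0,
                 clock=time.monotonic):
        self.threshold = threshold
        self.cooldown = cooldown_secs
        self.clock = clock
        self.failures = 0
        self.opened_at = None

    def state(self) -> str:
        if self.opened_at is None:
            return "closed"
        if self.clock() - self.opened_at >= self.cooldown:
            return "half_open"       # allow one test call through
        return "open"

    def allow(self) -> bool:
        return self.state() != "open"    # open state fast-fails

    def record_success(self):
        self.failures = 0
        self.opened_at = None            # test call succeeded: close

    def record_failure(self):
        self.failures += 1
        if self.failures >= self.threshold:
            self.opened_at = self.clock()   # trip (or re-trip) the breaker
```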

Rate Limiter

Token bucket algorithm with per-period refill. Concurrent request limiting. Daily quota tracking. Proactive throttling when API response headers indicate remaining quota below 10% of total. Tracks X-RateLimit-Remaining from response headers and slows down before hitting the limit, not after.
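The token-bucket core looks like this. The sketch covers refill and acquisition only; header-based proactive throttling sits on top of it. Capacity and refill rate here are illustrative:

```python
import time

# Token bucket: `capacity` tokens, refilled at a fixed rate. Each API call
# consumes one token; calls are denied when the bucket is empty.
class TokenBucket:
    def __init__(self, capacity: int = 10, refill_per_sec: float = 2.0,
                 clock=time.monotonic):
        self.capacity = capacity
        self.refill_per_sec = refill_per_sec
        self.clock = clock
        self.tokens = float(capacity)
        self.last = clock()

    def _refill(self):
        now = self.clock()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_per_sec)
        self.last = now

    def try_acquire(self) -> bool:
        self._refill()
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```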

Incremental Sync

High water mark based extraction. Configurable timestamp field (default: LastModifiedDate). Stores the last successful high water mark per object in migration state. Subsequent runs only extract records modified after the mark. Adds WHERE clause to source queries automatically. Delta statistics show how many records were skipped.

Salesforce Bulk API v2

For migrations over 10,000 records: create an ingest job, upload records as CSV, poll for completion, parse results. 55x faster than Composite API for large datasets. 500,000 records in approximately 7 minutes instead of 6.5 hours. Auto-selection: Pathfinder uses Composite API for small batches and Bulk API v2 for large ones. CSV generation with proper escaping for commas, quotes, and newlines. Failed record parsing from Bulk API response CSV.

Software-Defined Migration Assets

Declarative asset definitions. Instead of "run migration job #47," define: "Salesforce Account should contain all active donors from Bloomerang with Name = CONCAT(First, Last), Email validated, deduped by email." The asset definition is the source of truth. Running twice produces the same result (idempotent). Freshness SLA checking: stale if not materialized within the configured window. Dependency graph with topological sort and cycle detection.

Schema-as-Code

Export schemas as versioned snapshots. Diff any two versions: added fields, removed fields, type changes, nullable changes, max length changes. Auto-generate migration plans from diffs: create_field (low risk), modify_type (high risk), remove_field (critical risk). Estimated effort per plan. MD5 checksums for integrity verification.

Error Handling and Recovery

Six error categories: Connection, Transform, Validation, Conflict, Load, and Threshold. Three severity levels: Warning (proceed), Error (record skipped), Critical (migration halted). Configurable error thresholds: max total errors (default: 100), max error rate (default: 5%), max consecutive errors (default: 10), per-object error limits (default: 50). Sliding window error rate calculation. Error grouping by type and pattern for batch analysis. Failed record export to CSV. Per-record retry with optional transforms.

Client and Project Management

Per-client project tracking with engagement lifecycle phases. Decision log documenting who approved what and when. Communication notes. Active client context for all operations. Project phases with status tracking.

Reporting

Seven report types: executive summary, detailed migration, data quality, compliance, conflict resolution, error analysis, and audit trail. Four output formats: PDF with custom logos and branding, HTML interactive, Excel with charts, and CSV tabular. Custom report templates. Report comparison across migration runs. Historical trending.

Workspace Management

Per-workspace SQLite database at the workspace directory path. Workspace archival for backup. Import/export workspace configuration. Multiple workspace support with recent workspace history. Workspace health checks. Parallel migration execution within a workspace.

Application Details

Technology Stack

  • Rust backend (168,000 lines of code)
  • React 18 frontend with TypeScript
  • Tauri v2 native macOS desktop shell
  • SQLite via sqlx (compile-time checked queries)
  • macOS Keychain for credential storage
  • Vite for frontend bundling
  • TanStack Query for server state
  • Zustand for client state
  • Tailwind CSS for styling
  • React Flow for DAG visualization

Testing

  • 4,695 automated tests across the workspace
  • pathfinder-core: 2,800+ tests
  • fabric-engine: 292 tests
  • connectors: 875+ tests
  • Tauri commands: 87 tests
  • Vitest for frontend unit tests
  • Playwright for end-to-end tests
  • Criterion benchmarks for performance-critical paths

23 Application Pages

Dashboard, Connections, Mappings, Mapping Dashboard, Mapping Studio, Migrations, Migration Detail, Conflicts, Data Profile, Audit, Duplicate Review, Client Management, Settings, Fabric AI Hub, Pattern Editor, Tapestry Gallery, Salesforce Explorer, Agentforce Explorer, Platform, Docs, and a few more.

212 Backend Commands

Every operation the frontend can perform is a typed Rust function behind a Tauri IPC command. 49 for Fabric/AI, 23 for mapping and collaboration, 17 for platform observability, 12 for client management, 9 for connections, 9 for workspace scaling, 8 for deduplication, 8 for reporting, 6 for migrations, 6 for conflicts, 5 for data audit, 5 for OAuth, 5 for error handling, 4 for exports, 4 for ordering, 4 for settings, 3 for schema, 3 for profiling, 3 for recovery, 3 for Salesforce validation, and more.

That is the platform.

68 audit checks. 15 connectors. 38 transform functions. 10 conflict types. 9 resolution strategies. 29 quality rules. 174 AI patterns. 5 quality gates. An 8-stage pipeline with checkpoint/resume, adaptive batching, self-healing remediation, identity resolution, compliance reporting, DuckDB analytics, and a Konami code. Built in Rust. Tested obsessively. Ready for production data.

Request early access

Back to Pathfinder overview