
Bulk Import Admin Panel

CSV import wizard with intelligent field mapping, duplicate detection, and real-time progress tracking


Overview

The Bulk Import admin panel provides a guided, 5-step wizard for importing large datasets via CSV files. It supports all major data types -- customers, vendors, employees, accounts, items, dimensions, fixed assets, and transactions -- with intelligent field mapping, duplicate detection, and real-time progress tracking.


Supported Object Types

| Object Type | Required Fields | Duplicate Detection |
| --- | --- | --- |
| Customer | Name, email | Tax ID, then name (case-insensitive) |
| Vendor | Name, email, payment terms | Tax ID, then name (case-insensitive) |
| Employee | First name, last name, work email, hire date | Work email |
| Account | Account number, name, type, normal balance | Account number |
| Item | Item number, item name | Item number |
| Dimension Type | Name, display name, applies to | Name |
| Dimension Value | Dimension type, code, name | Type + code |
| Fixed Asset | Asset number, name, entity, cost, useful life, in-service date | Asset number + entity |
| Transaction | Transaction type, entity | Type + reference number + party |
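The tax-ID-then-name fallback shown for customers and vendors can be sketched as follows. This is a minimal illustration; field names such as `tax_id` are assumptions, not the panel's actual schema:

```python
def find_duplicate_customer(row, existing):
    """Locate an existing customer for an incoming CSV row.

    Checks tax ID first, then falls back to a case-insensitive
    name match. `existing` is a list of record dicts; the field
    names here are illustrative.
    """
    tax_id = (row.get("tax_id") or "").strip()
    if tax_id:
        for record in existing:
            if record.get("tax_id") == tax_id:
                return record
    name = (row.get("name") or "").strip().lower()
    if name:
        for record in existing:
            if (record.get("name") or "").strip().lower() == name:
                return record
    return None
```

Matching on tax ID before name avoids false merges between distinct companies that happen to share a trade name.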

The 5-Step Wizard

Step 1: Select Type

Choose the object type you want to import and select the target legal entity. Every imported record is scoped to the entity you select here.

Step 2: Upload CSV

Upload your CSV file by dragging and dropping or browsing. The system:

  • Parses the CSV instantly for a preview
  • Shows the first 50 rows in a scrollable table
  • Displays the total row count and detected column headers
  • Uploads the file to secure cloud storage for processing
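The parse-and-preview step can be approximated with the standard library's `csv` module. This is a sketch of the behavior described above, not the panel's implementation (which parses client-side before upload):

```python
import csv
import io

PREVIEW_ROWS = 50  # the wizard shows the first 50 rows


def preview_csv(text):
    """Parse CSV text for the upload preview: detected column
    headers, total data-row count, and the first PREVIEW_ROWS rows."""
    reader = csv.reader(io.StringIO(text))
    headers = next(reader, [])
    preview, total = [], 0
    for row in reader:
        total += 1
        if total <= PREVIEW_ROWS:
            preview.append(row)
    return {"headers": headers, "total_rows": total, "preview": preview}
```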

Step 3: Field Mapping

This is where the wizard maps your CSV columns to the target fields:

  • Auto-mapping: Column headers are automatically matched using intelligent normalization (lowercasing, stripping whitespace, replacing spaces with underscores)
  • Required fields: Highlighted and must be mapped or given a default value
  • Optional fields: Can be left unmapped
  • Default values: Set fallback values for unmapped required fields (e.g., default currency for all records)
  • Metadata catch-all: Toggle to store all unmapped CSV columns as metadata -- this preserves legacy IDs and custom fields from your source system without schema changes
  • Mapping templates: Save and reuse mapping configurations for recurring imports
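The normalization used for auto-mapping (lowercasing, stripping whitespace, replacing spaces with underscores) can be sketched like this; the function names are illustrative:

```python
def normalize(header):
    """Normalize a column header for auto-mapping: strip surrounding
    whitespace, lowercase, and replace spaces with underscores."""
    return header.strip().lower().replace(" ", "_")


def auto_map(csv_headers, target_fields):
    """Match CSV headers to target field names after normalization.
    Returns {csv_header: target_field} for exact normalized matches;
    unmatched headers are left for manual mapping."""
    targets = {normalize(field): field for field in target_fields}
    mapping = {}
    for header in csv_headers:
        field = targets.get(normalize(header))
        if field is not None:
            mapping[header] = field
    return mapping
```

With this scheme a column named " Company Name " auto-maps to a target field `company_name` without any manual work.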

Step 4: Review

A summary of all settings before processing:

  • Object type and target entity
  • File information and row count
  • Complete field mapping table (CSV column to target field)
  • Default values and import options
  • Duplicate handling mode (skip, update, or error)
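The three duplicate-handling modes behave roughly as follows; this is a sketch with assumed record shapes, not the importer's actual code:

```python
def handle_duplicate(mode, existing, row):
    """Apply the configured duplicate-handling mode when an incoming
    row matches an existing record. Returns (action, record)."""
    if mode == "skip":
        return ("skipped", existing)
    if mode == "update":
        # Overwrite only fields the CSV actually provides a value for.
        updates = {k: v for k, v in row.items() if v not in (None, "")}
        return ("updated", {**existing, **updates})
    if mode == "error":
        raise ValueError(f"Duplicate record: {existing.get('name')}")
    raise ValueError(f"Unknown duplicate mode: {mode}")
```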

Step 5: Progress

Real-time monitoring during import processing:

  • Progress bar with percentage
  • Status badge (pending, validating, importing, completed, failed)
  • Live counters: imported, updated, skipped, errors
  • Error list with row numbers, field names, and record identifiers for easy cross-referencing
  • Duration display on completion
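A progress payload carrying these counters might look like the sketch below; the field names are illustrative, not the actual API contract:

```python
from dataclasses import dataclass


@dataclass
class ImportProgress:
    """Shape of a live progress payload for Step 5 (illustrative)."""
    status: str = "pending"  # pending | validating | importing | completed | failed
    total_rows: int = 0
    imported: int = 0
    updated: int = 0
    skipped: int = 0
    errors: int = 0

    @property
    def percent(self):
        """Processed rows (all outcomes) as a percentage of the total."""
        processed = self.imported + self.updated + self.skipped + self.errors
        return 0 if self.total_rows == 0 else round(100 * processed / self.total_rows)
```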

Import History

The import history page lists all past imports with:

  • Status, record counts, and processing duration
  • Filter by object type
  • Pagination (20 per page)
  • Color-coded status badges
  • Quick link to start a new import

Address Support

Three object types support importing addresses via flat CSV columns:

Customer addresses use billing_ and shipping_ prefixed columns:

  • billing_address_line_1, billing_city, billing_state, billing_postal_code, billing_country
  • shipping_address_line_1, shipping_city, etc.

Vendor addresses use billing_ (primary) and shipping_ (remit-to) prefixed columns with the same field pattern.

Employee addresses use home_ and mailing_ prefixed columns:

  • home_address_line_1, home_city, home_state, etc.
  • mailing_address_line_1, mailing_city, etc.
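Collecting prefixed flat columns into a nested address could work as sketched below; the exact field list is an assumption based on the column names above:

```python
ADDRESS_FIELDS = ["address_line_1", "city", "state", "postal_code", "country"]


def extract_address(row, prefix):
    """Collect prefixed flat CSV columns (e.g. billing_city) into a
    nested address dict. Returns None when no column has a value."""
    address = {f: row.get(f"{prefix}_{f}", "").strip() for f in ADDRESS_FIELDS}
    return address if any(address.values()) else None
```

The same function covers all three object types by swapping the prefix: `billing`/`shipping` for customers and vendors, `home`/`mailing` for employees.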

Metadata Catch-All

When the "Store unmapped columns as metadata" option is enabled:

  1. All CSV columns not mapped to target fields are identified
  2. Values from these columns are collected for each row
  3. The data is stored as structured metadata on each record

This is especially useful for data migration, as it preserves information from the source system (legacy IDs, custom fields, internal codes) without requiring any schema changes. The data is available for reference and can be used for future lookups.
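The three steps above reduce to a small function; this is a sketch, and dropping empty values is an assumption rather than documented behavior:

```python
def collect_metadata(row, mapped_columns):
    """Gather every CSV column that was not mapped to a target field
    into a metadata dict stored on the record. Empty values are
    dropped so metadata stays compact."""
    return {
        column: value
        for column, value in row.items()
        if column not in mapped_columns and str(value).strip() != ""
    }
```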


Transaction Import: Flexible Party Resolution

When importing transactions, parties (vendors, customers, employees) can be identified by multiple identifier types:

Vendor resolution (checked in order):

  1. Internal ID
  2. Vendor name (case-insensitive)
  3. Global vendor ID
  4. External vendor ID (from metadata)
  5. Tax ID

Customer and employee resolution follow similar patterns with their respective identifiers.

Account resolution accepts either account number or external account ID.

All reference data is pre-cached at import start for fast lookups during processing.
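With pre-cached lookup tables, the ordered vendor resolution above becomes a sequence of dictionary probes. The cache key names here are assumptions for illustration:

```python
def resolve_vendor(identifier, cache):
    """Try each identifier type in the documented order against
    pre-built lookup dicts. `cache` maps identifier-type -> dict."""
    lookups = [
        cache["by_internal_id"],
        cache["by_name_lower"],   # case-insensitive name match
        cache["by_global_id"],
        cache["by_external_id"],  # external IDs stored in metadata
        cache["by_tax_id"],
    ]
    key = str(identifier).strip()
    for index, table in enumerate(lookups):
        probe = key.lower() if index == 1 else key
        vendor = table.get(probe)
        if vendor is not None:
            return vendor
    return None
```

Because each probe is a constant-time dict lookup, resolution cost stays flat even for imports with thousands of distinct parties.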


Mapping Templates

Reusable mapping configurations save time for recurring imports:

  • Save a mapping after configuring it in Step 3
  • Load a saved template to auto-populate mappings for future imports
  • Delete templates that are no longer needed
  • Templates are scoped per object type

Error Handling

Validation Errors

Each error includes:

  • Row number for CSV cross-reference
  • Field name causing the error
  • Descriptive error message
  • Record identifier (e.g., company name) for quick identification

Batch Safety

Each batch of records is processed within a database transaction. If any record in a batch fails fatally, that batch rolls back while other batches remain unaffected.
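The batch-per-transaction pattern can be demonstrated with SQLite; this is a minimal sketch against an assumed `customers` table, not the importer's storage layer:

```python
import sqlite3


def import_batch(conn, batch):
    """Insert one batch inside a transaction. A fatal failure rolls
    back only this batch; earlier committed batches are untouched.
    Returns True on commit, False on rollback."""
    try:
        with conn:  # commits on success, rolls back on exception
            conn.executemany(
                "INSERT INTO customers (name, email) VALUES (?, ?)", batch
            )
        return True
    except sqlite3.Error:
        return False
```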

Error Limits

  • First 100 errors are stored and displayed
  • First 100 warnings are stored
  • First 1,000 created/updated record IDs are tracked
  • Processing continues after validation errors (does not abort on individual failures)
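Capping stored errors while letting processing continue is a one-line guard; the error-record fields mirror the validation-error structure described above:

```python
MAX_ERRORS = 100  # only the first 100 errors are stored


def record_error(errors, row_number, field, message, identifier):
    """Append a validation error, capped at MAX_ERRORS. Processing
    continues either way; the cap only bounds what is stored."""
    if len(errors) < MAX_ERRORS:
        errors.append({
            "row": row_number,
            "field": field,
            "message": message,
            "identifier": identifier,
        })
```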

Key Design Principles

| Principle | Benefit |
| --- | --- |
| Memory-efficient streaming | A 500,000-row CSV uses the same memory as a 500-row CSV |
| Name-based duplicate detection | Works reliably with bulk CSV data where emails may be shared |
| Entity-level field injection | Select the legal entity once; it is applied to every record |
| Metadata preservation | Unmapped columns are stored, not discarded |
| Batch-level transactions | One bad record does not corrupt the entire import |
| Reusable templates | Save time on recurring import workflows |
