
Test strategy

This page describes what automated testing means for leekimerp on Frappe / ERPNext: the layers of the pyramid, how `bench run-tests` fits in, and coverage as an org policy rather than a hard number mandated by this repo. Operational evidence lives in Test evidence & CI.

The classic pyramid applies, from a narrow top to a wide base:

- Manual / E2E smoke (UAT, critical paths): narrow / few
- Integration tests (API, DocType, hooks)
- Unit tests (pure logic, helpers): wide base

| Layer | Typical scope in Frappe apps | Notes |
| --- | --- | --- |
| Unit | Pure Python helpers, validation logic isolated from the DB where feasible | Fast; prefer deterministic inputs. |
| Integration | DocType lifecycle, `@frappe.whitelist` methods with a test site, `doc_events` / scheduler behavior | Uses a Bench site; dominant layer for ERP extensions. |
| E2E / manual | Full flows: payroll run, billing → Xero, webhooks from provider sandboxes | Capture in UAT checklists; align with tutorials. |
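The unit layer in practice is plain Python with no `frappe` import and no database access. A minimal sketch, using a hypothetical parsing helper (not an actual leekimerp function) of the kind that belongs there:

```python
import unittest


def split_reference(ref: str) -> tuple[str, int]:
    """Parse a reference like 'INV-0042' into ('INV', 42).

    Hypothetical helper: deterministic, DB-free, so it can be
    tested without a Bench site.
    """
    prefix, _, number = ref.partition("-")
    if not prefix or not number.isdigit():
        raise ValueError(f"malformed reference: {ref!r}")
    return prefix, int(number)


class TestSplitReference(unittest.TestCase):
    def test_parses_prefix_and_number(self):
        self.assertEqual(split_reference("INV-0042"), ("INV", 42))

    def test_rejects_malformed_input(self):
        with self.assertRaises(ValueError):
            split_reference("INV42")
```

Such modules can run under plain `python -m unittest` as well as inside the full `bench run-tests` pass, which keeps the feedback loop fast.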

Exact test modules are defined in the app's `hooks.py` and by team convention; this document does not replace pytest discovery configuration.

From the Bench root, with site context:

```sh
bench --site <site> run-tests --app leekimerp
```

Handover evidence includes the stdout/stderr log, the date, the output of `bench version`, and the git SHA; see Test evidence & CI.
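As a sketch of how that evidence could be stamped onto a saved log, assuming a hypothetical helper (the function name and fields are illustrative; the bench version and SHA would be captured separately from `bench version` and `git rev-parse HEAD`):

```python
from datetime import datetime, timezone


def evidence_header(site: str, bench_version: str, git_sha: str) -> str:
    """Build the header prepended to an archived run-tests log.

    Hypothetical helper for handover evidence: records when the run
    happened, against which site, and at which code revision.
    """
    stamp = datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M UTC")
    return "\n".join([
        f"date: {stamp}",
        f"site: {site}",
        f"bench: {bench_version}",
        f"git: {git_sha}",
    ])
```

The exact format does not matter; what matters is that date, site, bench version, and SHA travel with the log.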

This repository does not enforce a global coverage percentage in CI by default. Recommended policy:

| Area | Expectation |
| --- | --- |
| Critical paths | Payment-adjacent code, payroll hooks, guest webhooks (`allow_guest=True`), and idempotency should have automated tests before large refactors. |
| New whitelisted APIs | Add or extend tests when behavior is non-trivial; cross-check the API inventory. |
| Coverage reports | If enabled (coverage / CI artifact), archive the HTML or summary with the release; optional column in Test evidence. |
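The idempotency expectation can be exercised without a full site. A minimal sketch, assuming a hypothetical handler that deduplicates on the provider's event id (in the real app this would be a whitelisted guest method writing to the database; here a dict stands in):

```python
import unittest

# Stand-in for persisted state keyed by the provider's event id.
processed: dict[str, str] = {}


def handle_event(event_id: str, payload: str) -> str:
    """Apply a webhook event at most once.

    Hypothetical handler: a replay of the same provider event must
    not apply the change twice.
    """
    if event_id in processed:
        return "duplicate"
    processed[event_id] = payload
    return "applied"


class TestWebhookIdempotency(unittest.TestCase):
    def test_replay_is_a_no_op(self):
        processed.clear()
        self.assertEqual(handle_event("evt_1", "paid"), "applied")
        self.assertEqual(handle_event("evt_1", "paid"), "duplicate")
        self.assertEqual(len(processed), 1)
```

The same shape works as an integration test against a test site: send the payload twice and assert that exactly one document was created.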

Set concrete thresholds (e.g. a minimum percentage on `leekimerp/api/`) in your pipeline configuration, and link that policy from your internal wiki and from release notes.
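If the pipeline runs the suite under coverage.py, such a floor can be expressed in a `.coveragerc`; the paths and percentage below are illustrative, not a policy of this repo:

```ini
# .coveragerc (sketch): fail the job when API coverage dips below 80%
[run]
source = leekimerp

[report]
include = leekimerp/api/*
fail_under = 80
```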

The Astro site has its own CI (see Test evidence & CI): `generate:doctypes`, inventory CSV checks, and `npm run build`. That is not Python coverage; treat these as docs quality gates.