Performance and Scaling
This page outlines observed performance KPIs typically used to evaluate Certificate Lifecycle Management (CLM) and Certificate Authority (CA) platforms at enterprise scale. These metrics are derived from industry benchmarks, customer references, and field deployments across large public and private trust environments, and are provided as guidance values rather than hard guarantees.
Actual performance depends on deployment topology, CA and HSM latency, validation methods, certificate volumes, and automation maturity.
KPI Categories
CertiNext performance is generally evaluated across five primary dimensions:

- Certificate request and issuance workflows
- Automation and ACME protocol performance
- UI responsiveness and operator experience
- Discovery and inventory scale
- Platform scalability and concurrency
1. Certificate Request and Issuance KPIs
These KPIs measure how the platform performs when users, APIs, or automation systems submit certificate requests concurrently.
| KPI | Guidance value |
| --- | --- |
| Concurrent certificate request submissions | 200–1,000+ concurrent requests |
| API/UI request acknowledgement (p95) | < 500 ms |
| Policy validation + request acceptance (p95) | 2–5 seconds |
| End-to-end issuance (DV, automated path) | Seconds to minutes (CA & DCV dependent) |
| End-to-end issuance (OV/EV) | Minutes to hours (validation dependent) |
| Request failure rate under load | < 0.1% |
Notes
- End-to-end issuance time is dominated by CA validation and HSM operations, not the CLM layer.
- Mature CLM platforms decouple request ingestion from issuance to maintain responsiveness under load.
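The ingestion/issuance decoupling described in these notes can be sketched with an in-process queue. The names (`submit_request`, `issuance_worker`) and the queue itself are illustrative stand-ins for a real API layer and message broker, not CertiNext internals:

```python
import queue
import threading
import time
import uuid

# Sketch: the front end acknowledges certificate requests immediately and
# hands the slow, CA/HSM-bound issuance work to a background worker, so
# API latency stays flat even when issuance backs up.
request_queue: "queue.Queue[dict]" = queue.Queue()
results: dict = {}

def submit_request(csr: str) -> str:
    """Fast path: record the request, enqueue it, return an ID quickly."""
    request_id = str(uuid.uuid4())
    results[request_id] = "PENDING"
    request_queue.put({"id": request_id, "csr": csr})
    return request_id

def issuance_worker() -> None:
    """Slow path: performs the issuance asynchronously."""
    while True:
        req = request_queue.get()
        time.sleep(0.01)  # stand-in for CA validation + HSM signing
        results[req["id"]] = "ISSUED"
        request_queue.task_done()

threading.Thread(target=issuance_worker, daemon=True).start()

rid = submit_request("-----BEGIN CERTIFICATE REQUEST-----...")
print(results[rid])   # typically still "PENDING": issuance has not finished
request_queue.join()  # block until the async issuance completes
print(results[rid])   # "ISSUED"
```

Because the acknowledgement path only validates and enqueues, its latency is independent of how far issuance has backed up behind a slow CA or HSM.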
2. ACME Protocol and Automation KPIs
ACME is typically the highest-volume transaction path in modern CLM deployments due to short-lived certificates and automated renewals.
| KPI | Guidance value |
| --- | --- |
| ACME newOrder / finalize API latency (p95) | 300–700 ms |
| Certificate issuance throughput | Hundreds to thousands per hour |
| Burst renewal handling | 2–3× normal peak without degradation |
| ACME success rate | > 99.9% |
| Retry / backoff handling | Automatic, no manual intervention |
| Renewal-related outages | Near zero in automated environments |
Notes
- Platforms that lack strong queuing and idempotency controls tend to fail during renewal storms.
- ACME performance is highly sensitive to DNS provider latency and CA responsiveness.
3. UI and Console Performance KPIs
These KPIs measure operator experience for security, PKI, and DevOps teams managing large certificate estates.
| KPI | Guidance value |
| --- | --- |
| Login → dashboard load (p95) | < 2 seconds |
| Certificate inventory page load (p95) | < 2 seconds |
| Certificate detail view (p95) | 1–2 seconds |
| Search and filter operations (p95) | 1–2 seconds |
| Large reports / exports | Seconds to minutes (async) |
Notes
- High-scale deployments rely on server-side pagination, indexing, and caching.
- UI performance degradation is often an early indicator of database sizing issues.
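Server-side pagination at this scale is typically keyset (cursor) based rather than OFFSET based, so each page resumes from the last seen key instead of rescanning every skipped row. A minimal sketch over an in-memory list; the schema is hypothetical, and in SQL the filter would be `WHERE id > :after_id ORDER BY id LIMIT :limit`:

```python
def fetch_page(rows, after_id=None, limit=3):
    """Return (page, next_cursor). `rows` must be sorted by id."""
    start = 0
    if after_id is not None:
        # Resume just past the last key the client saw; default to the
        # end of the list if nothing remains.
        start = next((i for i, r in enumerate(rows) if r["id"] > after_id),
                     len(rows))
    page = rows[start:start + limit]
    # A short page means we reached the end; otherwise hand back a cursor.
    next_cursor = page[-1]["id"] if len(page) == limit else None
    return page, next_cursor

# Tiny stand-in for a certificate inventory table.
inventory = [{"id": i, "cn": f"host{i}.example.com"} for i in range(1, 8)]

cursor, pages = None, []
while True:
    page, cursor = fetch_page(inventory, cursor)
    pages.append([r["id"] for r in page])
    if cursor is None:
        break
print(pages)  # [[1, 2, 3], [4, 5, 6], [7]]
```

Unlike `OFFSET 900000 LIMIT 50`, the keyset lookup cost does not grow with page depth, which is why inventory pages can stay under the 2-second target at 1M+ certificates.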
4. Discovery and Inventory Scale KPIs
Discovery and inventory operations place sustained load on the platform and database.
| KPI | Guidance value |
| --- | --- |
| Certificates tracked per tenant | 100k–1M+ |
| Discovery scan size | Thousands of endpoints per run |
| Discovery execution impact | No visible UI/API degradation |
| Inventory refresh latency | Near-real-time to minutes |
| Orphan / unmanaged certificate detection | Continuous |
Notes
- Discovery workloads are typically offloaded to bots/agents to protect core platform performance.
- Inventory performance depends heavily on database indexing strategy.
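Offloading discovery to a bot with a bounded worker pool is what keeps large scans from competing with the request path. A minimal sketch, where `probe` is a hypothetical stand-in for a real TLS handshake and certificate parse:

```python
from concurrent.futures import ThreadPoolExecutor
import time

def probe(endpoint: str) -> dict:
    """Stand-in for connecting to an endpoint and reading its certificate."""
    time.sleep(0.005)  # simulated network latency
    return {"endpoint": endpoint, "status": "found"}

def run_discovery(endpoints, max_workers: int = 16):
    """Scan all endpoints with at most `max_workers` concurrent probes,
    capping the load a single discovery run can generate."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(probe, endpoints))

targets = [f"host{i}.internal:443" for i in range(100)]
results = run_discovery(targets)
print(len(results), results[0]["status"])  # 100 found
```

Because the pool size, not the endpoint count, bounds concurrency, a run over thousands of endpoints produces steady, predictable load on the bot host and none on the core UI/API tier.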
5. Platform Scalability Characteristics
Observed characteristics of enterprise-grade CLM platforms:
- Horizontally scalable application tier (stateless services)
- Database as the primary scaling constraint
- Separation of UI/API ingestion, policy validation, and issuance/CA interaction
- Ability to scale automation independently of UI usage
- No certificate outages caused by platform-level bottlenecks
Recommended Baseline Configuration (On-Premises Production)
To achieve the above KPI ranges, typical production deployments use:
Application Tier

- 3+ application nodes (or pods)
- 2–4 vCPU, 6–12 GB RAM per node
- Java 21 with a tuned JVM heap
- Autoscaling enabled

Database Tier

- Highly available database with replication
- 4–8 vCPU, 16–32 GB RAM (baseline)
- SSD / NVMe storage
- Strong indexing for inventory and audit logs

Automation / Bots

- Deployed separately from the core platform
- Scaled by number of endpoints and renewal frequency
- Isolated from UI/API workloads
Interpreting These Benchmarks
These KPIs represent what a well-run, mature deployment typically achieves in production, not theoretical maxima. Organizations that consistently meet these benchmarks usually share:
- High automation adoption (ACME, APIs, bots)
- Short certificate lifecycles
- Clear ownership and policy enforcement
- Proper database sizing and monitoring