// free resource

SLA Guide for Internal
Developer Platforms

How to define, measure and enforce platform SLAs before your first incident.

Read Online

Built from real IDP deployments  ·  For platform engineering and SRE teams  ·  Free  ·  No signup required

01 //

Why Platform SLAs Matter

Most IDPs launch without SLAs. The first major incident reveals why that was a mistake, and by then, the political capital to fix it has already been spent.

A platform team without SLAs is operating on trust alone. Your consumers, the engineers building products on top of your IDP, have no formal understanding of what to expect from you. When something breaks, there is no contract to point to, no defined response time, and no shared definition of "resolved". Everything becomes a negotiation at the worst possible moment.

SLAs change that dynamic. They turn the platform team into a service provider with documented commitments, and they give consumers a legitimate basis for escalation when those commitments are not met. More importantly, they force the platform team to measure things, and you cannot improve what you do not measure.

Without SLAs

"You're always firefighting. Nothing is ever officially broken."

Every incident is an informal negotiation. Engineers escalate based on frustration, not facts. The platform team defends based on effort, not outcomes.

With SLAs

"You have a contract with your consumers. Everyone knows what good looks like."

Incidents are handled by protocol. Consumers know when to escalate. The platform team is measured on outcomes they agreed to, and can be proud of hitting.

The cost of inaction

"One 4-hour outage blocking 50 engineers = 200 engineer-hours lost."

At a blended cost of €120/h, that is €24,000 per incident. A platform with a documented 1-hour P1 MTTR SLA limits that exposure to €6,000, and creates the incentive to keep improving.
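The exposure arithmetic above is easy to encode as a sketch; the €120/h blended rate and 50-engineer blast radius are the worked example's assumptions, not benchmarks.

```python
def incident_cost_eur(outage_hours: float, engineers_blocked: int,
                      blended_rate_eur: float = 120.0) -> float:
    """Engineer-hours lost multiplied by the blended hourly cost."""
    return outage_hours * engineers_blocked * blended_rate_eur

# 4-hour outage blocking 50 engineers at €120/h
print(incident_cost_eur(4, 50))  # → 24000.0

# The same incident capped by a 1-hour P1 MTTR SLA
print(incident_cost_eur(1, 50))  # → 6000.0
```

The delta between the two numbers is the business case for the MTTR SLA itself.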

02 //

The 4 SLA Pillars for IDPs

Most SLA frameworks are built for external-facing services. IDPs need a different set of pillars, ones that capture both the technical reliability and the developer experience your consumers actually care about.

Pillar 01 //
Uptime

Availability

The percentage of time the platform is operational and accessible to developers. Measured per capability, not as a single aggregate score for the entire platform.

Recommended Target

99.5% for internal platforms
99.9% for regulated industries

How to Measure

Synthetic uptime monitoring per capability: developer portal, CI/CD pipeline, environment provisioner, observability stack. Monthly aggregation.

⚠ Common Mistake

Measuring a single "platform uptime" instead of per-capability uptime. A 99.9% overall SLA is meaningless if the environment provisioner is down 5% of the time, because that is the capability developers depend on most.

// Example SLA

"The developer portal shall be available 99.5% of the time, measured monthly, excluding scheduled maintenance windows communicated 48 hours in advance."
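A minimal sketch of per-capability monthly aggregation, assuming synthetic checks recorded as pass/fail booleans; the capability names and check counts here are illustrative.

```python
def monthly_availability(checks: list[bool]) -> float:
    """Percentage of successful synthetic checks in the month."""
    if not checks:
        raise ValueError("no checks recorded for this capability")
    return 100.0 * sum(checks) / len(checks)

# ~43,200 one-minute checks in a 30-day month, per capability
capabilities = {
    "developer_portal": [True] * 43000 + [False] * 200,
    "env_provisioner":  [True] * 41040 + [False] * 2160,
}

for name, checks in capabilities.items():
    avail = monthly_availability(checks)
    flag = "OK" if avail >= 99.5 else "BREACH"
    print(f"{name}: {avail:.2f}% [{flag}]")
```

The per-capability loop surfaces the provisioner breach directly instead of diluting it inside one aggregate score.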

Pillar 02 //
Incident Response

Mean Time to Restore (MTTR)

The time from incident detection to full service restoration. The single most important measure of how well your platform team responds when things go wrong.

Recommended Target

P1 (platform unavailable): < 1 hour
P2 (capability degraded): < 4 hours
P3 (minor impact): < 24 hours

How to Measure

Incident tracking from detection timestamp to resolution timestamp. Run a monthly rolling average. Track separately per severity tier.

⚠ Common Mistake

Measuring MTTR from the time an incident is reported rather than when it was detected. This hides the true response time and rewards delayed reporting, exactly the wrong incentive.

// Example SLA

"P1 incidents (platform unavailable) shall be resolved within 1 hour of detection, with status updates communicated to affected teams every 15 minutes until resolution."
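A sketch of MTTR tracking under these definitions; the incident records are hypothetical, and the key detail is that the clock starts at the detection timestamp, not the first consumer report.

```python
from datetime import datetime, timedelta
from statistics import mean

def mttr_by_severity(incidents) -> dict:
    """Mean detection-to-resolution time in minutes, per severity tier.

    incidents: iterable of (severity, detected_at, resolved_at) tuples.
    The clock starts at detection, not at the first consumer report.
    """
    durations = {}
    for severity, detected, resolved in incidents:
        durations.setdefault(severity, []).append(
            (resolved - detected).total_seconds() / 60
        )
    return {sev: mean(vals) for sev, vals in durations.items()}

detected = datetime(2025, 10, 3, 14, 0)
incidents = [
    ("P1", detected, detected + timedelta(minutes=52)),
    ("P2", detected, detected + timedelta(hours=3, minutes=12)),
]
print(mttr_by_severity(incidents))  # → {'P1': 52.0, 'P2': 192.0}
```

Tracking per severity tier keeps a flood of quick P3 fixes from masking a slow P1 response.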

Pillar 03 //
Deployment Quality

Change Failure Rate

The percentage of platform changes (deployments, configuration updates, infrastructure changes) that cause incidents or require rollback within 24 hours.

Recommended Target

< 5% for mature platforms
< 15% for early-stage platforms
DORA elite benchmark: < 5%

How to Measure

Track post-deployment incident rate over a rolling 30-day window. Count both incidents triggered within 24h of a deploy and explicit rollbacks.

⚠ Common Mistake

Not classifying rollbacks as failures. Teams often treat a rollback as a "routine operation" rather than a failed change, which artificially inflates apparent quality and hides systemic testing gaps.

// Example SLA

"No more than 5% of platform releases shall require rollback or cause a P1/P2 incident within 24 hours of deployment, measured over a rolling 30-day window."
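The counting rule above, where rollbacks and 24-hour incident triggers both count as failures, can be sketched as follows; the deploy record shape is an assumption.

```python
from datetime import datetime, timedelta

def change_failure_rate(deploys, now, window_days: int = 30) -> float:
    """% of deploys in the trailing window that count as failed changes.

    A deploy is failed if it was rolled back OR triggered a P1/P2
    incident within 24h; a rollback is a failed change, not routine.
    """
    window = [d for d in deploys
              if timedelta(0) <= now - d["at"] <= timedelta(days=window_days)]
    if not window:
        return 0.0
    failed = sum(1 for d in window
                 if d["rolled_back"] or d["incident_within_24h"])
    return 100.0 * failed / len(window)

now = datetime(2025, 10, 31)
deploys = [
    {"at": now - timedelta(days=2),  "rolled_back": False, "incident_within_24h": False},
    {"at": now - timedelta(days=5),  "rolled_back": True,  "incident_within_24h": False},
    {"at": now - timedelta(days=45), "rolled_back": True,  "incident_within_24h": True},
]
print(change_failure_rate(deploys, now))  # → 50.0
```

Note the 45-day-old failure falls outside the rolling window; only the trailing 30 days count toward the SLA.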

Pillar 04 //
Developer Experience

Developer Experience (DevEx)

Time-to-value and overall quality of the platform experience for developers. The pillar most often missing from IDP SLA frameworks, and the one developers care about most.

Recommended Targets

Env provisioning: < 15 min
CI/CD execution: < 10 min
Support response: < 4 hours
New service onboarding: < 1 day

How to Measure

Automated pipeline timing instrumentation for technical targets. Quarterly developer satisfaction survey (target: > 7/10 NPS-equivalent). Track trend over time, not just point-in-time scores.

⚠ Common Mistake

Measuring only technical SLAs and ignoring developer experience entirely. A platform with 99.9% uptime and an 8-hour environment provisioning time is still failing its consumers; it just has clean graphs.

// Example SLA

"Standard environment provisioning via the self-service portal shall complete within 15 minutes for 95% of requests, measured monthly. Developer satisfaction score shall remain at or above 7/10 on the quarterly survey."
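The 95%-within-15-minutes target is a percentage-of-requests check, not an average. A sketch, assuming per-request durations in minutes are already instrumented:

```python
def provisioning_sla_met(durations_min, threshold_min: float = 15,
                         target_pct: float = 95.0) -> bool:
    """True if at least target_pct of requests finished within threshold_min."""
    if not durations_min:
        return True  # no requests this period, nothing breached
    within = sum(1 for d in durations_min if d <= threshold_min)
    return 100.0 * within / len(durations_min) >= target_pct

# 95 fast requests and 5 slow ones: exactly on target
print(provisioning_sla_met([11] * 95 + [40] * 5))  # → True

# One more slow request tips the month into breach
print(provisioning_sla_met([11] * 94 + [40] * 6))  # → False
```

An average would hide the long tail; the percentage-of-requests check is what the SLA text actually commits to.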

03 //

SLA Tiers Template

Not all platform consumers have the same criticality requirements. A tiered SLA model lets you offer differentiated commitments without fragmenting your operations; most of the tooling is shared across tiers.

Use this template as a starting point. Tier 1 is appropriate for non-critical internal tools and experimentation environments. Tier 2 covers most production IDPs. Tier 3 is for regulated industries or platforms supporting mission-critical services.

                      Tier 1, Essential    Tier 2, Standard    Tier 3, Enterprise
Availability          99.0%                99.5%               99.9%
P1 MTTR               4 hours              2 hours             1 hour
Change Failure Rate   < 15%                < 10%               < 5%
Env Provisioning      < 60 min             < 30 min            < 15 min
Support Response      Next business day    8 hours             4 hours
Status Page           No                   Yes                 Yes, real-time
SLA Credits           No                   No                  Yes

04 //

Enforcement & Reporting

Defining an SLA is the easy part. The operational machinery that enforces it (monthly reports, escalation paths, breach responses) is what determines whether the SLA has any meaning in practice.

Monthly SLA Report

Publish a monthly SLA report within 5 business days of month end. Keep it short: one page of facts, no narrative. Here is a reference structure:

Platform SLA Report, October 2025

Prepared by Platform Engineering · Published 5 Nov 2025

All SLAs Met

Developer Portal availability            99.7%
CI/CD Pipeline availability              99.4%
Environment Provisioning availability    99.9%
P1 incidents (1)                         52 min MTTR
P2 incidents (3)                         3.2h MTTR
Change Failure Rate (30d)                3.2%
Avg env provisioning time                11 min
Developer satisfaction (Q3)              7.4 / 10

Escalation Path

Define escalation before you need it. When an SLA is breached, consumers should not have to figure out who to contact.

Step 01

Consumer Reports Issue

Via status page, Slack channel, or support ticket. Timestamp is logged automatically.

Step 02

Platform Team Triages

Acknowledges within 15 min (P1) or 1h (P2). Classifies severity. Assigns owner.

Step 03

SLA Clock Running

Status updates per SLA cadence. If MTTR target is at risk, proactively escalate.

Step 04

Engineering Leadership

Auto-escalates if SLA breach is confirmed. Triggers credit process if applicable.

SLA Credits & Breach Response

For internal platforms, "SLA credits" typically take the form of documented remediation commitments rather than financial credits. When a Tier 3 SLA is breached, the platform team commits to a written post-incident review within 48 hours, a root cause analysis published to consumers within 5 business days, and a specific mitigation action with an agreed delivery date. This creates accountability without the complexity of internal chargeback, though some organisations do implement chargeback for Tier 3 consumers to align financial incentives.

Tooling Recommendations

The right tooling depends on your maturity level. These are the combinations that work well across most IDP deployments:

Prometheus + Alertmanager

Availability and MTTR measurement. Define SLO recording rules per capability. Alert on error budget burn rate, not raw downtime.
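Burn-rate alerting asks how many times faster than "exactly on budget" you are consuming the error budget. A sketch of the arithmetic behind such a rule; the 10x threshold is illustrative, not one of this guide's targets:

```python
def burn_rate(error_rate: float, slo: float) -> float:
    """Multiple of the sustainable error rate currently being consumed.

    error_rate: observed fraction of failed requests (e.g. 0.05)
    slo:        availability target as a fraction (e.g. 0.995)
    """
    error_budget = 1.0 - slo
    return error_rate / error_budget

# A 99.5% SLO leaves a 0.5% error budget.
# A sustained 5% error rate burns it roughly 10x too fast:
rate = burn_rate(error_rate=0.05, slo=0.995)
print(f"{rate:.1f}x")  # → 10.0x
```

This is the quantity a burn-rate recording rule computes from the request error ratio and (1 - SLO); alerting on it, rather than on raw downtime, pages you only when the budget is genuinely at risk.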

Grafana SLO Dashboards

Visualise SLA compliance against targets. Publish a read-only consumer dashboard; transparency builds trust faster than any status email.

Backstage + TechInsights

Developer experience SLAs. Track onboarding time, golden path adoption, and satisfaction scores. TechInsights plugin handles automated scorecards.

PagerDuty / Incident.io

Incident response and MTTR tracking. Auto-generate post-incident reports. Ensures detection-to-resolution timestamps are captured accurately.

05 //

Common Mistakes

These are the five mistakes that consistently undermine IDP SLA programmes, even at organisations with experienced platform teams.

  • Setting SLAs without tooling to measure them

    An SLA you cannot measure is a promise you cannot keep. Before you publish any SLA, confirm that you have instrumentation in place to track the metric in real time. If you cannot currently measure it, set an internal target first and build the tooling before making a consumer-facing commitment.

  • One SLA for the entire platform instead of per capability

    A single "platform availability" SLA averages away the most important signal. Your environment provisioner failing 3% of the time is invisible inside a 99.7% overall uptime figure, but it is the capability your developers hit five times a day. Measure and commit to SLAs per capability: portal, CI/CD, environments, observability, secrets management.

  • Ignoring developer experience metrics: uptime is not everything

    Technical SLAs tell you whether your platform is running. Developer experience SLAs tell you whether it is working for the people who use it. A platform with 99.9% uptime but a 45-minute environment provisioning time is failing its consumers in a way that no availability graph will reveal. Include at least one developer experience metric in your SLA framework from the start.

  • No escalation path when SLAs are breached

    Documenting what happens when an SLA is met is easy. Documenting what happens when it is not met is what most teams skip. Every SLA should have a corresponding breach response: who gets notified, who owns the incident, what the consumer can expect, and what remediation looks like. Without this, an SLA breach just creates confusion and blame rather than resolution.

  • Never reviewing SLAs as the platform matures

    An SLA set when your platform served 20 developers is wrong for a platform serving 200. Review your SLA targets at least annually, or after any significant change in platform scale, team size, or consumer criticality. Targets that are too easy create complacency. Targets that are too hard create learned helplessness. The right SLA is the one that stretches your team without breaking it.

06 //

Next Steps

Three things you can do this week to start building a platform SLA framework that actually works.

Audit your current platform against the 4 pillars

Run a gap assessment: do you currently measure availability per capability, MTTR, change failure rate, and developer experience? For each pillar you cannot currently measure, define the instrumentation you need.

Choose your SLA tier based on consumer criticality

Talk to your largest consumers and understand what downtime actually costs them. Use that conversation to select the right SLA tier, and to identify the gaps between your current performance and your target commitments.

Book a discovery call: we'll review your SLA framework

Out.Cloud works with platform teams at all stages of SLA maturity. Bring your current framework (or lack of one) and we will give you specific, actionable feedback, no pitch, no pressure.

Book free review
// ready to build it?

Ready to run a platform
with a real SLA?

We'll review your current platform posture and help you define SLA targets you can actually hit, and prove.

Book Discovery Call