{"id":295,"date":"2025-06-23T10:08:31","date_gmt":"2025-06-23T10:08:31","guid":{"rendered":"http:\/\/sreschool.com\/blog\/?p=295"},"modified":"2026-05-05T07:30:01","modified_gmt":"2026-05-05T07:30:01","slug":"request-latency-in-devsecops-a-complete-tutorial","status":"publish","type":"post","link":"https:\/\/sreschool.com\/blog\/request-latency-in-devsecops-a-complete-tutorial\/","title":{"rendered":"Request Latency in DevSecOps: A Complete Tutorial"},"content":{"rendered":"\n<h2 class=\"wp-block-heading\">1. Introduction &amp; Overview<\/h2>\n\n\n\n<h3 class=\"wp-block-heading\">What is Request Latency?<\/h3>\n\n\n\n<p><strong>Request Latency<\/strong> is the time taken between sending a request to a service (e.g., an API or web application) and receiving the first byte of the response. It\u2019s a crucial performance metric in microservices, web applications, and cloud-native architectures.<\/p>\n\n\n\n<p>In DevSecOps, request latency is not just a performance concern\u2014<strong>it intersects with reliability, security, scalability, and compliance<\/strong>.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">History or Background<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Origins in Networking<\/strong>: Latency has always been a core network metric, tracked since the early days of TCP\/IP and HTTP protocols.<\/li>\n\n\n\n<li><strong>Modern Shift<\/strong>: In the cloud-native and microservices era, latency measurement evolved from infrastructure-level metrics to <strong>application and API-specific observability<\/strong>, especially with <strong>SRE, DevOps, and DevSecOps<\/strong> practices.<\/li>\n\n\n\n<li><strong>Tooling Evolution<\/strong>: Tools like Prometheus, Grafana, Datadog, and New Relic now provide deep visibility into latency metrics across distributed systems.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Why Is It Relevant in DevSecOps?<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Security Validation<\/strong>: Latency 
spikes may indicate attacks like DoS, injection attempts, or resource starvation.<\/li>\n\n\n\n<li><strong>Performance Monitoring<\/strong>: Helps ensure SLAs\/SLOs are met in CI\/CD pipelines.<\/li>\n\n\n\n<li><strong>Root Cause Analysis<\/strong>: Correlating latency with build versions, deployments, or misconfigured policies aids faster incident resolution.<\/li>\n\n\n\n<li><strong>Policy Enforcement<\/strong>: CI\/CD gates can be driven by latency metrics (e.g., fail the build if p95 latency &gt; 500ms).<\/li>\n<\/ul>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n\n\n\n<h2 class=\"wp-block-heading\">2. Core Concepts &amp; Terminology<\/h2>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><thead><tr><th><strong>Term<\/strong><\/th><th><strong>Definition<\/strong><\/th><\/tr><\/thead><tbody><tr><td><strong>Latency<\/strong><\/td><td>Time delay between request initiation and response start.<\/td><\/tr><tr><td><strong>p50 \/ p95 \/ p99<\/strong><\/td><td>The 50th, 95th, and 99th percentiles of observed latency; p95 = 500ms means 95% of requests complete within 500ms.<\/td><\/tr><tr><td><strong>SLI\/SLO\/SLA<\/strong><\/td><td>Service Level Indicator \/ Objective \/ Agreement related to latency metrics.<\/td><\/tr><tr><td><strong>Throughput<\/strong><\/td><td>Number of requests per second. 
Under load, higher throughput typically drives latency up as resources saturate.<\/td><\/tr><tr><td><strong>Tail Latency<\/strong><\/td><td>High-percentile (e.g., p99) latencies \u2014 crucial in distributed systems.<\/td><\/tr><tr><td><strong>Cold Start<\/strong><\/td><td>Extra delay on the first request while a runtime is provisioned (common in serverless).<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<h3 class=\"wp-block-heading\">How It Fits into the DevSecOps Lifecycle<\/h3>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><thead><tr><th><strong>Phase<\/strong><\/th><th><strong>Latency Relevance<\/strong><\/th><\/tr><\/thead><tbody><tr><td><strong>Plan<\/strong><\/td><td>Define latency SLOs and SLA metrics<\/td><\/tr><tr><td><strong>Develop<\/strong><\/td><td>Use latency-aware SDKs, monitor API latency during testing<\/td><\/tr><tr><td><strong>Build<\/strong><\/td><td>Add latency thresholds in CI tests<\/td><\/tr><tr><td><strong>Test<\/strong><\/td><td>Run load tests and track latency changes<\/td><\/tr><tr><td><strong>Release<\/strong><\/td><td>Enforce latency checks before deployment<\/td><\/tr><tr><td><strong>Deploy<\/strong><\/td><td>Monitor real-time latency post-deployment<\/td><\/tr><tr><td><strong>Operate<\/strong><\/td><td>Use alerts on latency deviations<\/td><\/tr><tr><td><strong>Monitor<\/strong><\/td><td>Dashboarding &amp; AIOps integration for latency tracking<\/td><\/tr><tr><td><strong>Secure<\/strong><\/td><td>Correlate anomalous latency with intrusion detection<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<h2 class=\"wp-block-heading\">3. 
Architecture &amp; How It Works<\/h2>\n\n\n\n<h3 class=\"wp-block-heading\">Components Involved<\/h3>\n\n\n\n<ol class=\"wp-block-list\">\n<li><strong>Clients \/ Consumers<\/strong>: Web\/mobile apps making HTTP\/gRPC calls.<\/li>\n\n\n\n<li><strong>Load Balancers<\/strong>: AWS ELB, NGINX, HAProxy \u2014 can add or mitigate latency.<\/li>\n\n\n\n<li><strong>Middleware \/ Microservices<\/strong>: The services that execute application logic and downstream database calls.<\/li>\n\n\n\n<li><strong>Monitoring Tools<\/strong>: Prometheus, Grafana, Datadog, ELK Stack.<\/li>\n\n\n\n<li><strong>Tracing Tools<\/strong>: Jaeger, OpenTelemetry \u2014 help pinpoint latency bottlenecks.<\/li>\n<\/ol>\n\n\n\n<h3 class=\"wp-block-heading\">Internal Workflow<\/h3>\n\n\n\n<pre class=\"wp-block-code\"><code>&#091;Client Request] \n    \u2b07\n&#091;Ingress Gateway \/ Load Balancer] \n    \u2b07\n&#091;Service Mesh (e.g., Istio)] \n    \u2b07\n&#091;Microservices (App Code + DB Calls)] \n    \u2b07\n&#091;Response Time Measured at Various Hops] \n    \u2b07\n&#091;Latency Metrics Sent to Monitoring Stack]\n<\/code><\/pre>\n\n\n\n<h3 class=\"wp-block-heading\">Architecture Diagram (Described)<\/h3>\n\n\n\n<p>If a diagram were shown, it would include:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Client &gt; API Gateway &gt; Load Balancer &gt; Service Mesh &gt; App Pod &gt; DB<\/li>\n\n\n\n<li>Arrows between each component labeled with timing (e.g., T1, T2\u2026)<\/li>\n\n\n\n<li>Sidecars collecting metrics<\/li>\n\n\n\n<li>Prometheus scraping endpoints<\/li>\n\n\n\n<li>Grafana dashboard visualizing p50\/p95\/p99<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Integration Points<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>CI\/CD Tools<\/strong>: Jenkins and GitHub Actions can run post-deploy latency tests.<\/li>\n\n\n\n<li><strong>Cloud Providers<\/strong>: AWS CloudWatch and Google Cloud Monitoring (formerly Stackdriver) track latency natively.<\/li>\n\n\n\n<li><strong>Service Meshes<\/strong>: Istio\/Linkerd provide real-time latency 
metrics.<\/li>\n\n\n\n<li><strong>Security Tools<\/strong>: Use latency anomalies to trigger WAF\/DDoS rules.<\/li>\n<\/ul>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n\n\n\n<h2 class=\"wp-block-heading\">4. Installation &amp; Getting Started<\/h2>\n\n\n\n<h3 class=\"wp-block-heading\">Prerequisites<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Kubernetes cluster (e.g., using Minikube or EKS)<\/li>\n\n\n\n<li>Helm installed<\/li>\n\n\n\n<li>Prometheus + Grafana stack<\/li>\n\n\n\n<li>Sample microservices app (like <code>sock-shop<\/code>)<\/li>\n\n\n\n<li><code>kubectl<\/code>, <code>curl<\/code>, <code>hey<\/code> (load testing)<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Step-by-Step Guide<\/h3>\n\n\n\n<h4 class=\"wp-block-heading\">Step 1: Deploy Prometheus + Grafana<\/h4>\n\n\n\n<pre class=\"wp-block-code\"><code>helm repo add prometheus-community https:\/\/prometheus-community.github.io\/helm-charts\nhelm repo update\nhelm install monitoring prometheus-community\/kube-prometheus-stack\n<\/code><\/pre>\n\n\n\n<h4 class=\"wp-block-heading\">Step 2: Deploy Sample App (e.g., <code>sock-shop<\/code>)<\/h4>\n\n\n\n<pre class=\"wp-block-code\"><code>kubectl apply -f https:\/\/raw.githubusercontent.com\/microservices-demo\/microservices-demo\/master\/deploy\/kubernetes\/complete-demo.yaml\n<\/code><\/pre>\n\n\n\n<h4 class=\"wp-block-heading\">Step 3: Enable Latency Scraping<\/h4>\n\n\n\n<p>Ensure services expose <code>\/metrics<\/code> endpoints and that ServiceMonitor resources are configured so Prometheus scrapes them.<\/p>\n\n\n\n<h4 class=\"wp-block-heading\">Step 4: Load Test and Measure<\/h4>\n\n\n\n<pre class=\"wp-block-code\"><code>hey -z 30s -c 10 http:\/\/&lt;app-url&gt;\/api\/catalogue\n<\/code><\/pre>\n\n\n\n<h4 class=\"wp-block-heading\">Step 5: View Latency Metrics in Grafana<\/h4>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Query: <code>histogram_quantile(0.95, sum by (le) (rate(http_request_duration_seconds_bucket[5m])))<\/code> (buckets must be aggregated by <code>le<\/code> before taking the quantile)<\/li>\n\n\n\n<li>Dashboards: Import JSON from Grafana 
dashboards library.<\/li>\n<\/ul>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n\n\n\n<h2 class=\"wp-block-heading\">5. Real-World Use Cases<\/h2>\n\n\n\n<h3 class=\"wp-block-heading\">Use Case 1: API Gateway Throttling in FinTech<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Measure and enforce rate limiting when p99 latency exceeds 1s.<\/li>\n\n\n\n<li>Mitigates fraudulent API floods and DDoS attempts.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Use Case 2: E-commerce Spike Monitoring<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>On sale days, use latency dashboards to auto-scale microservices.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Use Case 3: Healthcare Compliance Monitoring<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Compliance targets can mandate &lt;300ms latency for diagnostic APIs.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Use Case 4: DevSecOps Gate in CI\/CD<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Reject PR merges if latency regression is &gt;10% from baseline.<\/li>\n<\/ul>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n\n\n\n<h2 class=\"wp-block-heading\">6. Benefits &amp; Limitations<\/h2>\n\n\n\n<h3 class=\"wp-block-heading\">Key Benefits<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Early detection of performance bottlenecks<\/li>\n\n\n\n<li>Improved customer experience<\/li>\n\n\n\n<li>Enhanced threat detection<\/li>\n\n\n\n<li>SLA\/SLO compliance enforcement<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Limitations<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Overhead from excessive instrumentation<\/li>\n\n\n\n<li>False positives due to network jitter<\/li>\n\n\n\n<li>May require APM tools with licensing costs<\/li>\n\n\n\n<li>Cannot always distinguish application delays from infrastructure delays<\/li>\n<\/ul>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n\n\n\n<h2 class=\"wp-block-heading\">7. 
Best Practices &amp; Recommendations<\/h2>\n\n\n\n<h3 class=\"wp-block-heading\">Security<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Monitor sudden latency spikes as attack vectors.<\/li>\n\n\n\n<li>Use mTLS and rate-limiting in service mesh.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Performance<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Set alerts on p95\/p99 latency.<\/li>\n\n\n\n<li>Use sidecar proxies like Envoy for non-intrusive tracing.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Maintenance<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Regularly update dashboards and alerting rules.<\/li>\n\n\n\n<li>Correlate latency with deployments.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Compliance &amp; Automation<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Automate latency validation in GitOps workflows.<\/li>\n\n\n\n<li>Include SLI\/SLO checks in release pipelines.<\/li>\n<\/ul>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n\n\n\n<h2 class=\"wp-block-heading\">8. 
Comparison with Alternatives<\/h2>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><thead><tr><th><strong>Metric<\/strong><\/th><th><strong>Request Latency<\/strong><\/th><th><strong>Error Rate<\/strong><\/th><th><strong>Throughput<\/strong><\/th><\/tr><\/thead><tbody><tr><td>Focus<\/td><td>Response Time<\/td><td>Failures<\/td><td>Volume<\/td><\/tr><tr><td>Use in DevSecOps<\/td><td>Perf + Security<\/td><td>Reliability<\/td><td>Scalability<\/td><\/tr><tr><td>Ideal for<\/td><td>Bottleneck analysis<\/td><td>Alerting<\/td><td>Load tracking<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<h3 class=\"wp-block-heading\">When to Choose Latency<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>When SLAs\/SLOs are strict<\/li>\n\n\n\n<li>When performance is linked to compliance (e.g., FHIR APIs)<\/li>\n\n\n\n<li>In microservices where every ms counts<\/li>\n<\/ul>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n\n\n\n<h2 class=\"wp-block-heading\">9. Conclusion<\/h2>\n\n\n\n<h3 class=\"wp-block-heading\">Final Thoughts<\/h3>\n\n\n\n<p>Request latency is not just a performance KPI\u2014it\u2019s a <strong>DevSecOps guardrail<\/strong>. It ensures security, compliance, reliability, and user trust in distributed systems.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Future Trends<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>AI-based latency prediction<\/li>\n\n\n\n<li>Auto-tuning of services based on latency<\/li>\n\n\n\n<li>Integration with Policy-as-Code<\/li>\n<\/ul>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n\n\n\n<p><\/p>\n","protected":false},"excerpt":{"rendered":"<p>1. Introduction &amp; Overview What is Request Latency? 
Request Latency is the time taken between sending a request to a [&hellip;]<\/p>\n","protected":false},"author":2,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1],"tags":[],"class_list":["post-295","post","type-post","status-publish","format-standard","hentry","category-uncategorized"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v26.5 - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>Request Latency in DevSecOps: A Complete Tutorial - SRE School<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/sreschool.com\/blog\/request-latency-in-devsecops-a-complete-tutorial\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Request Latency in DevSecOps: A Complete Tutorial - SRE School\" \/>\n<meta property=\"og:description\" content=\"1. Introduction &amp; Overview What is Request Latency? Request Latency is the time taken between sending a request to a [&hellip;]\" \/>\n<meta property=\"og:url\" content=\"https:\/\/sreschool.com\/blog\/request-latency-in-devsecops-a-complete-tutorial\/\" \/>\n<meta property=\"og:site_name\" content=\"SRE School\" \/>\n<meta property=\"article:published_time\" content=\"2025-06-23T10:08:31+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2026-05-05T07:30:01+00:00\" \/>\n<meta name=\"author\" content=\"priteshgeek\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"priteshgeek\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"4 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"WebPage\",\"@id\":\"https:\/\/sreschool.com\/blog\/request-latency-in-devsecops-a-complete-tutorial\/\",\"url\":\"https:\/\/sreschool.com\/blog\/request-latency-in-devsecops-a-complete-tutorial\/\",\"name\":\"Request Latency in DevSecOps: A Complete Tutorial - SRE School\",\"isPartOf\":{\"@id\":\"https:\/\/sreschool.com\/blog\/#website\"},\"datePublished\":\"2025-06-23T10:08:31+00:00\",\"dateModified\":\"2026-05-05T07:30:01+00:00\",\"author\":{\"@id\":\"https:\/\/sreschool.com\/blog\/#\/schema\/person\/6a53e3870889dd6a65b2e04b7bc3d7db\"},\"breadcrumb\":{\"@id\":\"https:\/\/sreschool.com\/blog\/request-latency-in-devsecops-a-complete-tutorial\/#breadcrumb\"},\"inLanguage\":\"en\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/sreschool.com\/blog\/request-latency-in-devsecops-a-complete-tutorial\/\"]}]},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/sreschool.com\/blog\/request-latency-in-devsecops-a-complete-tutorial\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/sreschool.com\/blog\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Request Latency in DevSecOps: A Complete Tutorial\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/sreschool.com\/blog\/#website\",\"url\":\"https:\/\/sreschool.com\/blog\/\",\"name\":\"SRESchool\",\"description\":\"Master SRE. Build Resilient Systems. 
Lead the Future of Reliability\",\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/sreschool.com\/blog\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en\"},{\"@type\":\"Person\",\"@id\":\"https:\/\/sreschool.com\/blog\/#\/schema\/person\/6a53e3870889dd6a65b2e04b7bc3d7db\",\"name\":\"priteshgeek\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en\",\"@id\":\"https:\/\/sreschool.com\/blog\/#\/schema\/person\/image\/\",\"url\":\"https:\/\/secure.gravatar.com\/avatar\/231a0e8b7a02636f2fbacf8dcf4494cb1cc0d49ecc9a8165fbaeaeeaf102641a?s=96&d=mm&r=g\",\"contentUrl\":\"https:\/\/secure.gravatar.com\/avatar\/231a0e8b7a02636f2fbacf8dcf4494cb1cc0d49ecc9a8165fbaeaeeaf102641a?s=96&d=mm&r=g\",\"caption\":\"priteshgeek\"},\"url\":\"https:\/\/sreschool.com\/blog\/author\/priteshgeek\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Request Latency in DevSecOps: A Complete Tutorial - SRE School","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/sreschool.com\/blog\/request-latency-in-devsecops-a-complete-tutorial\/","og_locale":"en_US","og_type":"article","og_title":"Request Latency in DevSecOps: A Complete Tutorial - SRE School","og_description":"1. Introduction &amp; Overview What is Request Latency? Request Latency is the time taken between sending a request to a [&hellip;]","og_url":"https:\/\/sreschool.com\/blog\/request-latency-in-devsecops-a-complete-tutorial\/","og_site_name":"SRE School","article_published_time":"2025-06-23T10:08:31+00:00","article_modified_time":"2026-05-05T07:30:01+00:00","author":"priteshgeek","twitter_card":"summary_large_image","twitter_misc":{"Written by":"priteshgeek","Est. 
reading time":"4 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"WebPage","@id":"https:\/\/sreschool.com\/blog\/request-latency-in-devsecops-a-complete-tutorial\/","url":"https:\/\/sreschool.com\/blog\/request-latency-in-devsecops-a-complete-tutorial\/","name":"Request Latency in DevSecOps: A Complete Tutorial - SRE School","isPartOf":{"@id":"https:\/\/sreschool.com\/blog\/#website"},"datePublished":"2025-06-23T10:08:31+00:00","dateModified":"2026-05-05T07:30:01+00:00","author":{"@id":"https:\/\/sreschool.com\/blog\/#\/schema\/person\/6a53e3870889dd6a65b2e04b7bc3d7db"},"breadcrumb":{"@id":"https:\/\/sreschool.com\/blog\/request-latency-in-devsecops-a-complete-tutorial\/#breadcrumb"},"inLanguage":"en","potentialAction":[{"@type":"ReadAction","target":["https:\/\/sreschool.com\/blog\/request-latency-in-devsecops-a-complete-tutorial\/"]}]},{"@type":"BreadcrumbList","@id":"https:\/\/sreschool.com\/blog\/request-latency-in-devsecops-a-complete-tutorial\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/sreschool.com\/blog\/"},{"@type":"ListItem","position":2,"name":"Request Latency in DevSecOps: A Complete Tutorial"}]},{"@type":"WebSite","@id":"https:\/\/sreschool.com\/blog\/#website","url":"https:\/\/sreschool.com\/blog\/","name":"SRESchool","description":"Master SRE. Build Resilient Systems. 
Lead the Future of Reliability","potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/sreschool.com\/blog\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en"},{"@type":"Person","@id":"https:\/\/sreschool.com\/blog\/#\/schema\/person\/6a53e3870889dd6a65b2e04b7bc3d7db","name":"priteshgeek","image":{"@type":"ImageObject","inLanguage":"en","@id":"https:\/\/sreschool.com\/blog\/#\/schema\/person\/image\/","url":"https:\/\/secure.gravatar.com\/avatar\/231a0e8b7a02636f2fbacf8dcf4494cb1cc0d49ecc9a8165fbaeaeeaf102641a?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/231a0e8b7a02636f2fbacf8dcf4494cb1cc0d49ecc9a8165fbaeaeeaf102641a?s=96&d=mm&r=g","caption":"priteshgeek"},"url":"https:\/\/sreschool.com\/blog\/author\/priteshgeek\/"}]}},"_links":{"self":[{"href":"https:\/\/sreschool.com\/blog\/wp-json\/wp\/v2\/posts\/295","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/sreschool.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/sreschool.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/sreschool.com\/blog\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/sreschool.com\/blog\/wp-json\/wp\/v2\/comments?post=295"}],"version-history":[{"count":2,"href":"https:\/\/sreschool.com\/blog\/wp-json\/wp\/v2\/posts\/295\/revisions"}],"predecessor-version":[{"id":297,"href":"https:\/\/sreschool.com\/blog\/wp-json\/wp\/v2\/posts\/295\/revisions\/297"}],"wp:attachment":[{"href":"https:\/\/sreschool.com\/blog\/wp-json\/wp\/v2\/media?parent=295"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/sreschool.com\/blog\/wp-json\/wp\/v2\/categories?post=295"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/sreschool.com\/blog\/wp-json\/wp\/v2\/tags?post=295"}],"curies":[{"name":"wp","href":"https:\/\/api.w
.org\/{rel}","templated":true}]}}