Applying Dynatrace AI into our Digital Performance Life: Best of December 2017


Share Your PurePath was my personal program to help Dynatrace AppMon users make sense of their captured application performance data. I analyzed their exported PurePaths and sent my findings back in a PowerPoint. Thanks to the several hundred users who sent me PurePaths over the years, I’ve written numerous blogs based on the problems we discovered. Many of the detected patterns made it into an out-of-the-box feature in AppMon: PurePath Analysis using Automatic Problem Detection!

In our new Dynatrace world, most of this analysis magic happens automatically, behind the scenes, and at a much larger scale. We invested in OneAgent (better quality full stack data), Anomaly Detection (multi-dimensional baselining) and the Dynatrace Artificial Intelligence. If you want to read how the AI works, check out my blog on Dynatrace AI Demystified.

But does it work outside of your demo environments?

Many of our AppMon users, as well as folks who use competitive products and have seen a Dynatrace demo, often wonder: “Looks great in the demo! BUT – what type of problems does Dynatrace detect in non-demo environments? How will it make my life as a Cloud Operator, SRE, DevOps Engineer or Performance Architect easier?”

Educate through Share your AI-Detected Problem!

To help shine a light on automatic problem detection in Dynatrace, I decided to start a new program that I call “Share your AI-Detected Problem”!

Any Dynatrace user (paying or trial) can send me a link or screenshots of their Dynatrace AI-detected problem(s). The purpose is not so much to diagnose the captured data and find the root cause (that step has been automated); it is to educate the larger digital performance community on the types of problems our AI detects and to explain how to access the root cause data for faster problem resolution. I also share my thoughts on building self-healing, auto-remediation scripts for these scenarios. I strongly believe that this is going to be the next major task in our self-driven IT industry!

Now, for this blog I picked three simple scenarios:

  • 3rd party Gemfire service outage resulting in a high end-user service failure rate
  • Broken links (HTTP 404) on newly rolled-out features on the Dynatrace Partner Portal
  • Slow disk on EC2 causing nginx errors and a dynatrace.com slowdown!

Problem Ticket #1: Gemfire Service Outage

This problem was detected during a recent Dynatrace Proof of Concept. Special thanks to my colleagues Lauren, Jeff, Matt and Andrew for sharing this story. They forwarded me email exchanges with the prospect – highlighting the detected impact and the actual root cause. For data privacy reasons, the screenshots have been blurred but I think you can see how helpful the AI was in this particular case:

Step #1: Everything Starts with a Problem Ticket

Every time Dynatrace detects a problem, it opens a problem ticket which stays open until the problem’s impact is resolved. Dynatrace captures all relevant events while the problem is impacting your end users and SLAs. In demos, we most often show the problem details and each automatically correlated event (log messages, infrastructure problems, configuration changes, response time hotspots …) in the Dynatrace UI. In production environments or during Proofs of Concept, our users typically trigger notifications via the Dynatrace Incident Notification Integration (e.g. sending the details to ServiceNow, PagerDuty, VictorOps, OpsGenie, a Lambda function, our mobile app …).
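If you want to experiment with such an integration yourself, here is a minimal, hypothetical sketch of a webhook receiver for a custom problem notification. The payload field names (ProblemID, ProblemTitle, State) are assumptions that depend on how you configure your notification payload template, and the hand-off to your on-call tooling is stubbed out.

```python
# Hypothetical webhook receiver for Dynatrace problem notifications.
# Assumes a custom integration configured to POST a JSON payload with
# fields such as ProblemID, ProblemTitle and State -- adjust these names
# to whatever your payload template actually defines.
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/dynatrace/problem", methods=["POST"])
def handle_problem():
    payload = request.get_json(force=True)
    problem_id = payload.get("ProblemID", "unknown")
    title = payload.get("ProblemTitle", "")
    state = payload.get("State", "")   # e.g. OPEN / RESOLVED

    if state == "OPEN":
        # Hand the ticket off to your on-call tooling (PagerDuty,
        # OpsGenie, ServiceNow, ...) -- stubbed out here.
        print(f"New problem {problem_id}: {title}")
    else:
        print(f"Problem {problem_id} resolved: {title}")

    return jsonify({"status": "received"}), 200

if __name__ == "__main__":
    app.run(port=8080)
```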

Now, let’s get to the first shared problem. The following screenshot is what Dynatrace shows in the problem overview screen for each detected problem. Dynatrace automatically detected that multiple services were impacted over a period of 1h 53mins. It lists all impacted services by name and gives us information about how many service requests were actually impacted.

Problem ticket overview: 1h 53m impact. A canary service in the cloud. Impact to ALL 1.55k dynamic requests per minute!

Step #2: Exploring Problem Details

The full Problem Details – also accessible via the Problem REST API – show us just how much data and how many dependencies Dynatrace analyzed for us, as well as the actual problem, the impact and the root cause:

Dynatrace analyzed 285 million dependencies and data points and tells us which service endpoints are suffering from performance and failure rate spikes!
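If you prefer scripting over clicking through the UI, the same details can be pulled programmatically. The following is a minimal sketch against the v1 Problem REST API; the endpoint path, token format and response field names are based on my understanding of that API, so verify them against your environment’s API documentation. The environment URL, token and problem ID are placeholders.

```python
# Minimal sketch: fetching full problem details (impact, root cause,
# correlated events) via the Problem REST API. Endpoint and field names
# are assumptions based on the v1 API -- verify against your API docs.
import requests

DT_ENV = "https://myenv.live.dynatrace.com"   # placeholder environment URL
API_TOKEN = "dt0c01.XXXX"                     # placeholder API token
PROBLEM_ID = "668"                            # problem number from the UI

resp = requests.get(
    f"{DT_ENV}/api/v1/problem/details/{PROBLEM_ID}",
    headers={"Authorization": f"Api-Token {API_TOKEN}"},
    timeout=10,
)
resp.raise_for_status()

problem = resp.json().get("result", {})
print(problem.get("displayName"), "-", problem.get("status"))

# Correlated events (root-cause candidates, impacted entities, ...) --
# inspect the raw JSON first, as the exact structure may differ.
for event in problem.get("rankedEvents", []):
    print(f"  {event.get('eventType')} on {event.get('entityName')}")
```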

Step #3: Clicking on the Impacted Service to Find Root Cause

On the problem ticket, we can either click into the Impacted Service or into the detected Root Cause section. In our case, the next click is on the Impacted Service – Failure Rate, which has increased to 93%. This brings us to the automated baseline graph, showing how Dynatrace detected this anomaly. All of this happens fully automatically, without having to configure any thresholds or tell Dynatrace which services and endpoints exist. Just install the OneAgent on your hosts. The rest is auto-detected. That’s true zero-configuration monitoring.

In the baseline graph, which is available for all service endpoints across multiple dimensions, the problematic time range gets automatically marked by Dynatrace due to its abnormal behavior:

Failure rate spike and drop in throughput automatically detected by Dynatrace on that particular service – impacting ALL dynamic requests

Tip: Notice the different diagnostics options in the screen above, such as switching between Failure rate and HTTP errors, analyzing Response Time, CPU or Throughput issues (the top tabs) or clicking on the next diagnostics options such as View details of failures or Analyze backtraces. If you want to learn more about these diagnostics options, I suggest you watch my recent Performance Clinic on Basic Diagnostics with Dynatrace.

In our case, we want to see the actual root cause of the increased failure rate. Clicking on View details of failures brings us to that answer:

All failures are HTTP 500s caused by an unavailable Gemfire cache service.

Clicking on the Details button in the bottom left even reveals the actual code that tries to call Gemfire but fails with the ServerRefusedConnectionException.

Summary: The external cache service Gemfire became unavailable. This caused requests on our monitored service to receive HTTP 500s from Gemfire, which ultimately led to a higher failure rate for the end user. If the host running Gemfire had been instrumented with a OneAgent as well, the AI would have automatically pointed us to the crash of that process, which ultimately turned out to be the issue.

Self-Healing thoughts: In a recent blog, I started to write about Self-Healing and listed a couple of auto-remediation examples. In this scenario, we could write self-healing scripts that validate why Gemfire is refusing network connections: it could be a crashed service, a network issue, or a configuration issue in the connection pools on either end (caller and callee). Using the Dynatrace REST API allows us to write better mitigation actions, because all of this root cause data is exposed in the context of the actual end-user-impacting problem.
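To make that concrete, here is a minimal, hypothetical remediation sketch for this Gemfire scenario. It assumes the cache server listens on gemfire-host:40404 and runs as a systemd unit named gemfire (both made-up names); a real script would be triggered by the problem notification and would report its actions back.

```python
# Hypothetical self-healing sketch for the Gemfire scenario. Host, port
# and service name are assumptions -- adjust to your environment.
import socket
import subprocess

GEMFIRE_HOST = "gemfire-host"   # hypothetical hostname
GEMFIRE_PORT = 40404            # default GemFire cache-server port

def gemfire_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to the cache server succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def remediate() -> None:
    if gemfire_reachable(GEMFIRE_HOST, GEMFIRE_PORT):
        # The port answers: more likely a connection-pool or network
        # configuration issue -- escalate instead of restarting blindly.
        print("Gemfire port reachable; escalating for config review")
        return
    print("Gemfire unreachable; attempting service restart")
    subprocess.run(["systemctl", "restart", "gemfire"], check=True)

if __name__ == "__main__":
    remediate()
```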

Problem Ticket #2: Functional Issues on new Feature Rollout

The next problem ticket is from our own Dynatrace production environment, which we use to monitor our key web properties such as our website, blog, community, support portal … – let’s take a peek!

Step #1: Problem Ticket Details

Problem 668 was a problem I looked at while it was still ongoing – hence the problem still being shown in red, indicating that it had been open for the last 22 minutes. This problem shows that Dynatrace not only detects anomalies for the whole service, but also on individual service or REST endpoints (that’s the automated multi-dimensional baselining capability). In the case of Problem 668, Dynatrace detected a failure rate increase to 13% on a specific endpoint we expose on www.dynatrace.com:

Dynatrace highlights that the impact comes from the backend nginx caching layer, causing a 13% failure rate on a single endpoint that we expose via www.dynatrace.com. Fortunately, just a single user was impacted!

Step #2: Root Cause Analysis

At first, it almost seems odd that Dynatrace alerts just because one user is having an issue. But once we dig deeper, we understand why!

Clicking on the Impacted Service details brings us to the Failure Rate graph for www.dynatrace.com. The view gets automatically filtered to the problematic endpoint, which is /data/rfopartner.json. The sudden jump in failure rate triggered the creation of an anomaly event, which then resulted in the creation of that problem ticket:

Clicking on impacted services or root cause brings us to the diagnostics details view, which is automatically filtered to the right endpoint and timeframe.
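As a side note on why such a spike stands out: Dynatrace’s multi-dimensional baselining is far more sophisticated than anything I can show here, but the following naive sketch illustrates the basic idea of comparing an endpoint’s current failure rate against its recent history.

```python
# Naive illustration of baselining a single dimension (failure rate of
# one endpoint). This is NOT Dynatrace's algorithm -- it only shows why
# a jump from ~0% to 13% stands out against recent history.
from statistics import mean, stdev

def is_anomalous(history, current, min_sigma=3.0, min_abs_delta=0.05):
    """Flag the current failure rate if it deviates strongly from history."""
    baseline = mean(history)
    spread = stdev(history) if len(history) > 1 else 0.0
    threshold = baseline + max(min_sigma * spread, min_abs_delta)
    return current > threshold

# last minutes of per-minute failure rates for the endpoint (as fractions)
history = [0.0, 0.0, 0.01, 0.0, 0.0, 0.02, 0.0, 0.0]
print(is_anomalous(history, current=0.13))   # True -> raise an anomaly event
```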

Root cause details for that failure rate spike are just one click away: Analyze failure rate degradation!

17 Broken Link Requests, all coming from the same internal dynalabs.io domain.

Knowing that these 404s are “only” coming from an internal site is good news, as no real end user has seen the problem yet. But why is that? It turns out that this internal domain is a test site used to validate a new partner portal feature that was soon to be released. Automatically detecting this behavior allows our partner portal website team to fix these incorrect links before deploying this version to the live system. You should also check out the YouTube video I did with Stefan Gusenbauer, who showed how we use Dynatrace internally in combination with automated functional regression tests: instead of just relying on the functional test results, we can combine them with the data Dynatrace captured.

Back to this problem ticket: the actual root cause of the 404s was a service hosted on nginx that connects some of the new capabilities of the partner portal with some legacy data. The PurePaths captured for these errors show that the 404s originate in the legacy connector service and how they propagate back to www.dynatrace.com.

Our beloved PurePath
            