If you know your Google Tag Manager, you know that GTM pushes three data layer events into the queue when any page with the container snippet is rendered. Each of these three events signals a specific stage in the page load process. Here are the events (be sure to read my guide on GTM rules to understand further what these events do):

  • gtm.js - This is pushed into the data layer as soon as GTM is initialized and the container is loaded. It is also the default event for any rule without an explicit {{event}} condition. Basically, if you want something to happen at the earliest possible moment, use {{event}} equals gtm.js as the rule

  • gtm.dom - When the DOM has been populated with on-page elements, this event is pushed into the data layer. If you have HTML elements or dependent JavaScript snippets loaded at the very bottom of the page template, having your tag fire upon {{event}} equals gtm.dom will ensure that these latecomers can be used in your tags

  • gtm.load - Once the window has finished loading, along with all images, scripts, and other assets, gtm.load is pushed into the data layer. If you have scripts or DOM elements that take a long while to load, and you want to be 100% sure that they have loaded before your tags fire, using {{event}} equals gtm.load as a firing rule for your tag might be wise
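
The sequence of these three events is easy to observe with a small monitoring snippet. The sketch below wraps dataLayer.push and records any gtm.* event names that come through; the event names are the ones GTM itself pushes, but the wrapper and the simulated pushes at the end are purely illustrative:

```javascript
// Minimal sketch of observing GTM's lifecycle events by wrapping
// dataLayer.push. On a real page, GTM pushes these events itself;
// here they are simulated at the end so the snippet is self-contained.
var dataLayer = (typeof window !== 'undefined' && window.dataLayer) || [];
var seenGtmEvents = [];

var originalPush = dataLayer.push.bind(dataLayer);
dataLayer.push = function (obj) {
  if (obj && typeof obj.event === 'string' && obj.event.indexOf('gtm.') === 0) {
    seenGtmEvents.push(obj.event); // record the lifecycle event name
  }
  return originalPush(obj);
};

// On a normal page load, GTM pushes these in this order:
dataLayer.push({ event: 'gtm.js' });
dataLayer.push({ event: 'gtm.dom' });
dataLayer.push({ event: 'gtm.load' });
```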

Now, having said that, I wanted to test just how accurate gtm.dom and gtm.load are as trigger events. If I were to have my most important tag, GA page tracking, fire upon either one, just how many hits will I miss compared to the default {{url}} matches RegEx .* rule?

I know there will be some losses in accuracy, because any delay in firing a tag increases the risk of the person viewing the page clicking a link or closing the browser before the tag has had a chance to fire. But just how much data is actually lost?

Results in brief: If you don’t want to go through the rest of the article and are just interested in results, here’s what I found. Using gtm.dom as the trigger is almost as reliable as using gtm.js. With gtm.load, you’ll see far more missed hits, but it might still be within an error margin you find acceptable. However, it is important to remember that the actual results will vary depending on your DOM and page load times. If you have a complex page template with a lot of dynamically created content, huge images, lots of external assets, etc., you’ll see a higher error rate than with my humble blog.

The premise

Here’s how I set up the test:

  • I used my own blog as the guinea pig. I wanted an actual “live” environment to test with, and my blog is a pretty good example of a standard GTM setup

  • For exactly 28 days, I had two non-interaction events firing: one upon {{event}} equals gtm.dom and one upon {{event}} equals gtm.load

  • After 28 days, I could compare the number of events to page views to get the number of hits I’d miss if I chose gtm.dom or gtm.load over the default gtm.js
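
In analytics.js terms, each test tag effectively sends a GA event with the non-interaction flag set, so it can't distort bounce rate. The sketch below is my guess at the equivalent hardcoded calls; the category and action strings are placeholders, not the actual tag configuration, and the ga() stub just records calls so the snippet stands alone:

```javascript
// Sketch of what each test tag effectively sends: a non-interaction
// GA event. The ga() stub records calls for illustration; on a live
// page this would be the real analytics.js command queue.
var sentHits = [];
function ga() { sentHits.push(Array.prototype.slice.call(arguments)); }

// Fired by the tag whose rule is {{event}} equals gtm.dom:
ga('send', 'event', 'GTM Test', 'gtm.dom', { nonInteraction: 1 });

// Fired by the tag whose rule is {{event}} equals gtm.load:
ga('send', 'event', 'GTM Test', 'gtm.load', { nonInteraction: 1 });
```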

The test time was from the beginning of Wednesday, 12 March 2014 to the end of Tuesday, 8 April 2014.

Some details about my setup:

(By the way, I’m renaming my TMRs (tags, macros, rules) at some convenient point in the near future, so don’t read too much into my current naming schema.)

There’s a “Dwell and scroll” tag, which starts to work its magic upon gtm.dom. Basically, it waits 30 seconds, looks for a scroll action by the visitor, and if both the timeout and a scroll have taken place, it sends a bounce-rate-killing event to GA.
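
The core of that logic can be sketched as a small tracker that fires exactly once when both conditions (the 30-second dwell and at least one scroll) have been met. Function and event names below are my own placeholders, not the author's actual implementation:

```javascript
// Rough sketch of "Dwell and scroll": fire a single bounce-killing
// event once BOTH a 30-second dwell and at least one scroll have
// happened, and never fire it twice.
function createNoBounceTracker(onQualified) {
  var dwelled = false, scrolled = false, fired = false;
  function maybeFire() {
    if (dwelled && scrolled && !fired) {
      fired = true;
      onQualified(); // e.g. push a 'NoBounce' event to the data layer
    }
  }
  return {
    markDwell:  function () { dwelled = true;  maybeFire(); },
    markScroll: function () { scrolled = true; maybeFire(); }
  };
}

// Browser wiring (the tag itself starts on gtm.dom):
if (typeof window !== 'undefined') {
  var tracker = createNoBounceTracker(function () {
    (window.dataLayer = window.dataLayer || []).push({ event: 'NoBounce' });
  });
  setTimeout(tracker.markDwell, 30000);
  window.addEventListener('scroll', tracker.markScroll);
}
```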

There’s also a tag for my weather script. This is pretty expensive in terms of performance, since it makes two external API calls. However, it only fires during the first page view of a session, and it initiates with gtm.js. The more expensive weather API call is also done asynchronously.
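
The "first page view of a session" behavior suggests a session-scoped guard around the expensive API calls. Here's a sketch of how I'd expect that to look; the storage key name is an assumption, and the storage object is passed in so the guard works with window.sessionStorage or any compatible stand-in:

```javascript
// Sketch of a once-per-session guard: the first call in a session
// returns true (and sets a flag), every later call returns false.
// The key name 'weatherFetched' is a placeholder.
function shouldFetchWeather(storage) {
  if (storage.getItem('weatherFetched')) {
    return false; // already fetched during this session
  }
  storage.setItem('weatherFetched', '1');
  return true;
}

// In the browser:
// if (shouldFetchWeather(window.sessionStorage)) { /* async weather API calls */ }
```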

Finally, there’s my page load time script, set to fire on gtm.load, some event pushes and my listeners.
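
A gtm.load trigger suits a load time script because by then window.onload has fired and the Navigation Timing values are populated. A minimal sketch, using the legacy performance.timing API that was current at the time (the function name is mine):

```javascript
// Sketch of a page load time measurement suited to a gtm.load trigger.
// Takes a Navigation Timing object so the logic can be exercised with
// a fake timing object as well as window.performance.timing.
function computeLoadTimeMs(timing) {
  if (!timing || !timing.loadEventEnd || !timing.navigationStart) {
    return null; // timing not available (or load not finished yet)
  }
  return timing.loadEventEnd - timing.navigationStart;
}

// In the browser:
// var loadTime = computeLoadTimeMs(window.performance && window.performance.timing);
```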

My tag setup is really lightweight. There shouldn’t be any major reason why my tags would cause gtm.dom or gtm.load to be delayed, unless the weather script starts to time out on the external resource calls.

In my GA account, I don’t filter out my own hits; actually, I don’t have a single filter on my blog profile. I know, you probably have a big, nasty look of disgust on your face right now. But you know what, I never thought I’d get enough traffic to care, and now that I do, I still don’t really care. Furthermore, I find it difficult to move to a new, filtered profile, since I don’t have any historical data. OK. Stop chucking that lettuce at me. I’ll go and create a filtered profile right now!

The results - page views

Here’s what I found out:

  • Total page views: 12,167

  • Total gtm.dom events: 12,115 (-52, 99.6%)

  • Total gtm.load events: 11,945 (-222, 98.2%)
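
As a quick arithmetic check, the percentages above are simply events captured divided by total page views, rounded to one decimal:

```javascript
// Share of page views for which the test event was captured,
// rounded to one decimal place.
function keptPercent(events, pageViews) {
  return Math.round((events / pageViews) * 1000) / 10;
}

var domMissed  = 12167 - 12115; // 52 hits missed with gtm.dom
var loadMissed = 12167 - 11945; // 222 hits missed with gtm.load
var domKept  = keptPercent(12115, 12167); // 99.6
var loadKept = keptPercent(11945, 12167); // 98.2
```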

Well, that’s pretty good! Based on this result, I wouldn’t hesitate to recommend using {{event}} equals gtm.dom if you have even the slightest concern that some vital data in the DOM is required in your tags. Also, gtm.load does pretty well, though I do believe that a near 2 percent error rate might be too much for some large eCommerce sites. My site is very lightweight, so a more complex and flashy site with a significantly longer average page load time will surely have more missed gtm.load hits.

However, I had to probe further. If you remember, I had a couple of other events firing on every page view as well. Because of this, I’d like to take a look at visits to see if there are any discrepancies between page views sent and visits recorded.

The results - visits

I performed this analysis by segmenting out visits without a single gtm.dom or gtm.load test event. Here’s what I found:

Hold on… what?

Almost 4 percent of all visits occurred without a single gtm.dom or gtm.load test event. So, I must have visits without a single page view, because the number of visits without these GTM events exceeds the number of pageviews without them. And yes, this confirms my suspicions:

So here’s the deal: I have a bunch of visits without a single gtm.dom or gtm.load event being fired, and almost 85% of these visits don’t have a landing page, i.e. a single page view hasn’t been sent.

Interesting.

Well, when I look at the event catalog for these “ghost visits”, I see a bunch of my adjusted bounce rate events and my weather events.

The interesting thing (not visible in these tables) is that my adjusted bounce rate event actually has more total events than unique events, which means these visits had multiple page loads that didn’t send an actual page view to Google Analytics! How screwed up is that?

Also, because my weather script did fire on a number of occasions, and still my test events weren’t pushed, I’ll have to believe that something interfered with my test events. Remember, my “NoBounce” event waits 30 seconds before firing a hit AND it waits for gtm.dom before initializing. This couldn’t be just a case of gtm.dom and gtm.load not being pushed into the data layer. This was clearly a case of my test scripts just refusing to fire!

Remember also, I don’t have any filters on my profile, so I’m not filtering out page views and just seeing the events. Just over 3 percent of all my visits are completely page-view-less!

This is weird, but I’ll just chalk it up to an error margin associated with increased granularity in measurement. I know I shouldn’t be picking on micro-level phenomena such as this, but it still makes me wonder. Are page-view-less visits thanks to some configuration I have in GTM, or should they be attributed to the visitor?

By the way, I looked through every single report in GA, and they didn’t reveal anything out of the ordinary. It would be interesting to pursue this further, but for the purposes of this test, this is all more just a fascinating detail than anything that you or I can learn from.

Conclusions

Apart from the weirdness with the page-view-less visits, I’m still comfortable in recommending using {{event}} equals gtm.dom for all your tags. If you want to use gtm.load as the trigger, you’ll have to be aware that you will lose a lot more hits, even if the rate is still around just 2 percent. But that’s just with my lightweight setup.

I don’t know whether race conditions had anything to do with the missed hits, since my adjusted bounce rate script also fires on gtm.dom. A huge site with dozens of tags all firing on the same triggers might exhibit more variation in how accurate gtm.dom and gtm.load are as firing rules.

To play it safe, I still recommend having all your critical, independent tags firing as early as possible, i.e. after gtm.js has been pushed into the data layer. However, there’s no reason not to use gtm.dom and gtm.load as trigger rules as well. You’ll just have to be aware that you might be missing some hits.