jQuery’s $(document).ready() event is something that you probably learned about in your earliest exposure to jQuery and then rarely thought about again. The way it abstracts away DOM timing issues is like a warm security blanket for code running in a variety of cold, harsh browser windows.

Between that comforting insurance and the fact that deferring everything until $(document).ready() will never break your code, it’s understandable not to give much thought to its necessity. Wrapping $(document).ready() around initialization code becomes more habit than conscious decision.

However, what if $(document).ready() is slowing you down? In this post, I’m going to show you specific instances where postponing startup code until the document’s ready event slows perceived page load time, leaves your UI needlessly unresponsive, and even causes initialization code to run slower than necessary.


Example: live()

One of the most popular uses for jQuery’s live() is to maintain event handlers on elements that are dynamically created and destroyed over time. Instead of juggling traditional bind() handlers in response to those changes, live()’s event delegation allows you to declare handlers once up-front. Whether targeted elements exist at declaration time, or in the future, one live() handler will apply to them all.
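Under the hood, live() relies on event delegation: a single handler registered high in the DOM inspects each bubbled event’s target, rather than binding to every matching element individually. Here’s a rough plain-JavaScript sketch of the idea (the names are illustrative, not jQuery’s actual implementation):

```javascript
// Illustrative sketch of delegated event handling, not jQuery's
// real implementation. One handler, registered once, can service
// elements that don't exist yet, because the selector test happens
// at event time rather than at bind time.
function makeDelegatedHandler(matches, handler) {
  return function (event) {
    // Walk upward from the element the event originated on,
    // invoking the handler on the target or any ancestor that
    // passes the selector test.
    var node = event.target;
    while (node) {
      if (matches(node)) {
        handler.call(node, event);
      }
      node = node.parentNode || null;
    }
  };
}
```

Because the matching test runs when the event fires, an element added to the page five minutes from now is handled exactly like one that existed at registration time.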

Imagine that we have an application with several slideToggling sidebar blocks which may be dynamically added and removed while the user interacts with the page. You’ve probably seen live() used like this to simplify handling those future changes:

<html>
<head>
  <script type="text/javascript" src="jquery.js"></script>
  <script type="text/javascript">
    // The sidebar event delegation is not registered "here"...
    $(document).ready(function () {
      $('#Sidebar h3').live('click', function () {
        $(this).next().slideToggle();
      });
    });
  </script>
</head>
<body>
  <div id="Sidebar">
    <h3>Title 1</h3>
    <p>Text 1</p>
 
    <h3>Title 2</h3>
    <p>Text 2</p>
  </div>
</body>
<!-- ...but roughly down "here" -->
</html>

That usage is natural when you’re hedging against AJAX-driven changes in the future. The changes that you’re concerned with won’t happen until after the page loads, so it’s intuitive to postpone worrying about them until after the document’s ready event.

However, what would happen if you treated your HTML document’s initial load process the same way as any other dynamic modifications?

<html>
<head>
  <script type="text/javascript" src="jquery.js"></script>
  <script type="text/javascript">
    // The event handler is wired up "here", immediately.
    $('#Sidebar h3').live('click', function () {
      $(this).next().slideToggle();
    });
  </script>
</head>
<body>
  <div id="Sidebar">
    <h3>Title 1</h3>
    <!-- At this point, Title 1 is ready for action. -->
    <p>Text 1</p>
 
    <h3>Title 2</h3>
    <!-- Ditto for Title 2 at this point -->
    <p>Text 2</p>
  </div>
</body>
</html>

Not only does that work just as well as postponing the live() declaration until the document’s ready event, but now the handlers are active during the page loading process. As the browser loads each <h3> element and adds it to the DOM, our click events are immediately ready to be handled.

Benefit: A more responsive UI

To see where the latter approach shines, imagine the page was very large and took several seconds to load, or that a script reference somewhere on the page was timing out. In scenarios like those, jQuery’s document ready event may not fire until long after the targeted elements are visible to your users.

<html>
<head>
  <script type="text/javascript" src="jquery.js"></script>
  <script type="text/javascript">
    // The event handler is wired up "here", immediately.
    $('#Sidebar h3').live('click', function () {
      $(this).next().slideToggle();
    });
 
    // This handler isn't active until...
    $(document).ready(function() {
      $('#Sidebar p').live('click', function() {
        // Important magic goes here.
      });
    });
  </script>
</head>
<body>
  <div id="Sidebar">
    <h3>Title 1</h3>
    <p>Text 1</p>
  </div>
 
  <script src="http://twitter.com/fail-whale.js"></script>
 
  <!-- ...way down here, *after* the script reference times out. -->
</body>
</html>

Why hold live() back until the document is ready? It doesn’t matter whether the selector matches any elements initially; matching elements become active the moment they are rendered and appear on the page.

Benefit: Improved performance

A common criticism of using live() for event delegation is that it requires you to perform an initial selection of all of the elements that it targets. Since event delegation doesn’t require any initial setup on each individual element, this pre-selection is a wasteful performance drag when there are dozens or hundreds of elements targeted.

However, if you register your live() handlers before those elements exist on the page, there is no performance penalty whatsoever. The event delegation can be registered very quickly, yet still works exactly the same as if you had waited until the ready event.

<html>
<head>
  <script type="text/javascript" src="jquery.js"></script>
  <script type="text/javascript">
    // Fast: runs before any of the TRs exist.
    $('#MyTable tr').live('click', function () {
      $(this).toggleClass('highlight');
    });
 
    // Slow: doesn't run until the table is rendered.
    $(document).ready(function() {
      $('#MyTable tr').live('click', function () {
        $(this).toggleClass('highlight');
      });
    });
  </script>
</head>
<body>
  <table id="MyTable">
    <!-- Hundreds or thousands of rows here -->
  </table>
</body>
</html>

Example: $.ajax()

Another situation where $(document).ready() may be holding you back is when you make an AJAX request immediately as a page is loading. Displaying recent Twitter updates is a common example of that:

<html>
<head>
  <script type="text/javascript" src="jquery.js"></script>
  <script type="text/javascript">
    $(document).ready(function() {
      // $.getJSON() request to retrieve Twitter updates.
    });
  </script>
</head>
<body>
  <!-- A typically large page here -->
</body>
<!-- The Twitter request doesn't *begin* until here. -->
</html>

Even though the $.getJSON() snippet is located at the beginning of the page, it isn’t executed until the entire page has loaded and the ready event has fired. Why wait for the page to load before even beginning the AJAX request?

<html>
<head>
  <script type="text/javascript" src="jquery.js"></script>
  <script type="text/javascript">
    // $.getJSON() request to retrieve Twitter updates.
    // The request begins immediately.
  </script>
</head>
<body>
  <!-- A typically large page here -->
</body>
</html>
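For concreteness, here’s one way the inlined request above might look. The Twitter URL, the #Tweets element, and the tweetsToHtml() helper are all illustrative assumptions, not part of the page above; the point is only that the request itself starts immediately, while the DOM write is deferred until the element it touches is guaranteed to exist:

```javascript
// Purely illustrative helper: render an array of tweet objects
// (each assumed to have a "text" property) as list-item markup.
function tweetsToHtml(tweets) {
  var items = [];
  for (var i = 0; i < tweets.length; i++) {
    items.push('<li>' + tweets[i].text + '</li>');
  }
  return '<ul>' + items.join('') + '</ul>';
}

// Guarded so this sketch is self-contained outside a browser; in
// the page's <head> it would just be the $.getJSON() call itself.
if (typeof $ !== 'undefined') {
  // The request begins immediately -- no ready() wrapper around it.
  $.getJSON('http://twitter.com/statuses/user_timeline/username.json?count=5&callback=?',
    function (tweets) {
      // Only the DOM write waits for the document to be ready, in
      // case the response arrives before #Tweets is rendered.
      $(document).ready(function () {
        $('#Tweets').html(tweetsToHtml(tweets));
      });
    });
}
```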

This is a nice improvement even when you’re making a request to a local endpoint, but it’s even more beneficial here: requests to a third-party domain don’t count against the browser’s per-domain connection limit for your own site, so the Twitter request runs in parallel with the rest of the page’s normal loading timeline.

Better yet, since the dynamic script element injection used in a JSONP request is asynchronous, there’s no drawback to initiating the request early. Even if Twitter is slow or down (imagine that), the request won’t drag the page down.
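That asynchrony comes from how JSONP works: instead of using XMLHttpRequest, jQuery injects a script element whose URL names a temporary global callback, and script downloads started this way don’t block the rest of the page. A simplified sketch of the technique (the names are illustrative; jQuery’s real implementation differs in its details):

```javascript
// Pure helper: substitute a real callback name for the callback=?
// placeholder used in JSONP-style URLs.
function buildJsonpUrl(url, callbackName) {
  return url.replace('callback=?', 'callback=' + callbackName);
}

// Simplified JSONP sketch, not jQuery's actual implementation.
function jsonp(url, onData) {
  var name = 'jsonpCallback' + new Date().getTime();
  // Expose a temporary global for the injected script to invoke.
  window[name] = function (data) {
    delete window[name];
    onData(data);
  };
  var script = document.createElement('script');
  script.src = buildJsonpUrl(url, name);
  // Appending the element starts the download asynchronously; a
  // slow or dead endpoint won't hold up the rest of the page.
  document.getElementsByTagName('head')[0].appendChild(script);
}
```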

Benefit: Performance

To visualize how the previous two approaches differ, I used my own site as a guinea pig. First, I wrapped a $.getJSON() request to Twitter in $(document).ready() and placed it in the <head> of my site template.

This is how the site loads in that configuration, taken from Firebug’s Net tab:

Now, here’s the same $.getJSON() request, located in the same position in the <head> of the page, but without the $(document).ready() wrapper:

We just pulled a bit of performance right out of thin air. This isn’t simply perceived performance, which is nice enough, but a truly faster overall load time.

Note: You might notice that the ready event came later in the second example and be concerned that it was due to the early $.getJSON() request. It wasn’t. If you look closely, a blocking <script> reference to one of the page’s local scripts took unusually long in the second run, which pushed everything back by about 700ms.

Conclusion

I think the preceding examples are compelling, but I’m also not suggesting that this is appropriate in every case. Often, it’s best to move every bit of JavaScript to the bottom of your pages (e.g. a public-facing, non-application site like this one). When your scripts are located at the bottom of the page, it doesn’t matter whether you use $(document).ready() or not; everything is effectively running when the document ready event fires anyway.

However, when you’re building the type of script-heavy “application” that calls for placing script references in the document’s <head>, keeping these ideas in mind can have a tangible impact on the performance of your application.