I’m interested in the User Experience costs of third-party scripts. Every client I have averages ~30 third-party scripts, but discussions with stakeholders about reducing them end in “What if we load them all async?” That’s a fair rebuttal because there are right and wrong ways to load third-party scripts, but there is still a cost, a cost that’s passed on to the user. And that’s what I want to investigate.

Privacy is probably the most commonly cited user cost. But it’s never clear to what extent (or even if) privacy has been violated, and people have different tolerances for personal data exposure. There are a few levels of nuance between “Did users who clicked the green button from the homepage buy a product?” and “I doxxed you and put it on the dark web.” Privacy becomes a bit of a boogeyman when discussing third-party script performance. While it’s very important to discuss, I’d like to divorce privacy from this conversation and focus on the tangible costs to a product or application.

To understand the user costs a bit more, I devised a thought experiment on Twitter: Let’s say we have a site with ~300 third-party scripts all responsibly loaded as async, what would be the User Experience costs and risks?

  1. async Code Arrives Before First-Party UI Code
    async only means the download won’t block the parser; execution still happens the moment the script arrives. If your async third-party code arrives before your first-party UI code finishes downloading, its execution will block HTML, CSS, and JS parsing.
  2. DNS Lookup Costs
    Earlier this year I saw Antón Molleda from webhint do some napkin math showing that even 3 DNS lookups would blow your ~5s/170kB-on-3G render budget, leaving you with only ~10kB for actual content. HTTP Archive data suggesting that more third-party scripts increase load times seems to confirm this math.
  3. CPU Contention
    Third-party scripts can block the performance of your first-party UI code. It’s a Battle Royale for available CPU cycles and your first-party code is outnumbered.
  4. Network Contention
    Paul Irish mentioned that, similar to CPU contention, “new network requests could be starving bandwidth from your more important [first-party] requests.” Over HTTP/1.1, your browser opens roughly six parallel connections per host, which means if any part of your first-party UI or single-page app needs another file or a lazy-loaded image, it might have to stand in line for a while.
  5. User Data and Battery Costs
    Mobile data has costs and HTTP requests murder phone batteries. I think it stands to reason that users (whether knowingly or guided by the invisible hand of the free market) will eventually pursue alternatives that are easier on their data plans and batteries (e.g. WeChat over FB Messenger).
  6. Events Overloaded
    This is similar to CPU contention, but lots of scripts bound to scroll, resize, or even click events can reduce UI responsiveness and, in extreme cases, create layout thrashing.
  7. Debugging Surface Area Increases
    While more of an organizational cost, it can turn into a User Experience cost. More scripts increase the surface area for debugging: if something goes wrong, would you rather check 5 places for problems or 300? I recently had a problem with an errant third-party script causing “slow script” errors. It took a whole day to isolate which script was causing the issue, and I still don’t know how that script made its way into the build.
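
To make the DNS math in point 2 concrete, here’s the napkin version as code. The ~53kB cost per lookup is an assumed figure, backed out from the “3 lookups leave you ~10kB” claim, not a measured constant:

```javascript
// Napkin math for point 2: a ~5s render budget on 3G works out to roughly
// 170 kB of transfer. Each extra DNS lookup (plus connection setup) eats
// time equivalent to some amount of transfer — assumed here at ~53 kB.
function remainingBudgetKb(budgetKb, costPerLookupKb, lookups) {
  return Math.max(0, budgetKb - costPerLookupKb * lookups);
}

console.log(remainingBudgetKb(170, 53, 3)); // → 11 (kB left for your actual site)
```

Quibble with the per-lookup figure all you like; the point is that the budget disappears after a handful of extra origins.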
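
For point 6, one defensive move for your own handlers is throttling, so scroll and resize work runs at most once per interval instead of on every event. A minimal sketch (the 100ms interval and onScroll handler are placeholders):

```javascript
// Return a wrapped function that fires at most once per waitMs,
// with a trailing call so the last event isn't dropped.
function throttle(fn, waitMs) {
  let last = 0;
  let timer = null;
  return function (...args) {
    const now = Date.now();
    const remaining = waitMs - (now - last);
    if (remaining <= 0) {
      last = now;
      fn.apply(this, args);
    } else if (!timer) {
      timer = setTimeout(() => {
        timer = null;
        last = Date.now();
        fn.apply(this, args);
      }, remaining);
    }
  };
}

// In the browser you might wire it up like this; passive tells the
// browser the handler won't call preventDefault, so scrolling stays smooth:
// window.addEventListener("scroll", throttle(onScroll, 100), { passive: true });
```

Of course, this only helps with your own listeners; the third-party ones are out of your hands.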

Now, 300 is a lot. I hope you know that. But ~50 or even ~150 third-party requests? I have a couple of clients with that type of setup right now. My co-worker Trent Walton ran some numbers and found the Alexa Top 50 have a median of ~18 third-party requests. I firmly believe you could encounter some or all of the aforementioned side effects even with that level of third-party self-control.
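
If you’re curious where your own site falls, Resource Timing makes the tally easy. A sketch, assuming entries shaped like what `performance.getEntriesByType("resource")` returns (only the `name` URL matters here):

```javascript
// Count requests that go to origins other than your own.
function countThirdPartyRequests(entries, firstPartyOrigin) {
  return entries.filter(
    (entry) => new URL(entry.name).origin !== firstPartyOrigin
  ).length;
}

// In a DevTools console you could run:
// countThirdPartyRequests(performance.getEntriesByType("resource"), location.origin);
```
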

I think some general extrapolations can be made here. Single-page apps and per-route JS bundles are probably most at risk of being choked out by network and CPU contention. Your initial payload might get 100% of the CPU, but subsequent pages might be fighting for a fraction of that. Depending on your bundling strategy, that might work out just fine and execute in a timely manner. And it isn’t only SPAs that might experience problems; any resource might arrive late or execute slowly. The Web is an undependable place, so this shouldn’t be very surprising.

If you think of anything I missed, please let me know. I think we’re all in this sinking third-party boat, so we’ll need each other to get out of it.