The redirect technique behind Twitter's links

While users see the actual link in the user interface (either in their browser or in the Twitter app), the redirect behaviour works with an extra step via t.co, Twitter's link-shortening domain.

In this article, I explain how Twitter outbound clicks work, because users who click a link anywhere in the Twitter interface aren't taken to that page in a straight line. This is how it works:

  1. Link shortening service
    Instead, Twitter first sends users to its link-shortening service, t.co.
  2. New HTML
    That page returns HTML containing a bit of JavaScript (and a refresh meta tag as a no-JavaScript fallback).
  3. To the destination
    You are then redirected to the actual page you expected to go to.
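Step 2 can be made concrete with a minimal reconstruction of what such an interstitial page could look like. This is an illustrative sketch, not Twitter's actual markup: a JavaScript redirect combined with a meta-refresh fallback for no-JavaScript clients.

```javascript
// Illustrative sketch of a t.co-style interstitial page (my reconstruction,
// not Twitter's actual markup): a JavaScript redirect with a meta-refresh
// fallback for clients that have JavaScript disabled.
function buildInterstitial(targetUrl) {
  const encoded = JSON.stringify(targetUrl); // safely quote the URL for the script
  return [
    '<html><head>',
    `<noscript><meta http-equiv="refresh" content="0; url=${targetUrl}"></noscript>`,
    '</head><body>',
    `<script>window.opener = null; location.replace(${encoded})</script>`,
    '</body></html>',
  ].join('\n');
}
```

A browser executes the `location.replace` call; a browser with JavaScript disabled falls back to the meta refresh inside `<noscript>`.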

There are some nuances here though. Your click on a link within the user interface will always go to t.co first. This way, Twitter is able to measure the number of clicks per link, because data is important. But the response isn't always HTML.

Redirect behaviour for real visitors

The response you get depends on your user agent string. This can easily be tested using an online service:

  1. go to an online HTTP status code checker tool;
  2. fill in a t.co link (for example, one that goes to the homepage of my site);
  3. check the result. It will show you a status code 200 right away, without a 301. See the screenshot taken from the test.

But when visiting such a link yourself, you are actually redirected. And all SEO specialists know that redirects are supposed to be 301s or 302s. So what is happening?

HTML is being returned

Instead of doing a server-side redirect, which would typically come with a status code of 301 or 302, t.co just returns HTML. And since that is an existing page, you will get a 200 status code. Below is the JavaScript, part of the full HTML (screenshot) that the browser receives:

window.opener = null; location.replace("")

Redirect behaviour for bots

Bots, however, are treated differently. When going back to the status code checker, you will see an option to change the user agent, aligned to the right below the textarea. It uses your user agent string by default, passing it on as browser information to the URL you are testing.

That way, the receiving domain (in this case t.co) thinks it is dealing with a real browser and likely a real user. Likely, because Lighthouse is an example of an automated tool that sends along browser information but obviously is not a real user. For bots, t.co will do a server-side redirect instead.

Now change the user agent into something that will be identified as a bot, for example the LinkedIn bot or Slackbot. Not sending any user agent information at all might lead to a time-out, as such requests might be filtered and denied by Twitter's link-shortening service.

You will then see that the first page hit ends up with a redirect. See the screenshot below, taken from the test when run as a bot.

When comparing it with the previous screenshot when tested with an actual browser, we see that:

  • there is a 301 right away;
  • as a matter of fact, there is no content-length either, as no HTML or JavaScript is returned.
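The difference between the two tests can be simulated. The sketch below is my own simulation of the observed behaviour, with an assumed bot-detection heuristic (not Twitter's real detection logic): bots get a bare 301 with a Location header and no body, while browsers get a 200 with the JavaScript interstitial.

```javascript
// Simulation of the observed t.co behaviour; the bot heuristic below is an
// assumption for illustration, not Twitter's real detection logic.
const BOT_PATTERN = /bot|crawler|spider|slackbot|linkedinbot/i;

function shortenerResponse(userAgent, targetUrl) {
  if (BOT_PATTERN.test(userAgent || '')) {
    // Bot path: server-side redirect, empty body (hence no content-length).
    return { status: 301, headers: { location: targetUrl }, body: '' };
  }
  // Browser path: an existing page, so a 200 with the JavaScript redirect.
  const body = `<script>window.opener = null; location.replace(${JSON.stringify(targetUrl)})</script>`;
  return { status: 200, headers: { 'content-type': 'text/html' }, body };
}
```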

Why differentiate behaviour between bots and real users?

Obviously, only Twitter knows, but the reasoning might be as follows.

Bots

When using server-side redirects for bots, it's easier for bots to determine the redirect behaviour, because the redirect information is part of the response headers (which is basically how the status code checker tool is able to see the redirects when it's a server-side redirect).
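That convenience can be sketched in code: a bot can resolve the full redirect chain purely from status codes and Location headers, without executing any JavaScript. In the sketch below, `fetchFn` is a stand-in for a real HTTP client and is an assumption for illustration.

```javascript
// Sketch: resolve a redirect chain purely from headers, the way a bot can
// when the shortener does server-side redirects. `fetchFn` stands in for a
// real HTTP client (an assumption for illustration).
function resolveChain(url, fetchFn, maxHops = 5) {
  const chain = [url];
  for (let i = 0; i < maxHops; i++) {
    const res = fetchFn(url);
    if (res.status !== 301 && res.status !== 302) break; // final destination reached
    url = res.headers.location;
    chain.push(url);
  }
  return chain;
}
```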

Real users

Using a JavaScript redirect in the case of a real user will:

  • help pass information to the destination that would not be passed in the case of a server-side redirect.
    By using document.referrer, the destination is able to determine that the visitor came from t.co, which analytics specialists know is part of Twitter/X;
  • prevent adding too much delay to the destination's TTFB in its pagespeed data. TTFB tracking only starts once the JavaScript redirect is executed, not when the user actually clicked the link in the interface/feed.
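On the destination side, that referrer can be checked. Below is a minimal sketch of such an attribution check; in a browser you would pass in `document.referrer`, while the function itself is plain and testable.

```javascript
// Sketch: classify whether a visit was referred by Twitter's link shortener.
// In the browser you would call this as isTwitterReferral(document.referrer).
function isTwitterReferral(referrer) {
  try {
    return new URL(referrer).hostname === 't.co';
  } catch {
    return false; // empty or malformed referrer (e.g. direct traffic)
  }
}
```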