Passkeys: What the Heck and Why?

These things called passkeys sure are making the rounds these days. They were a main attraction at W3C TPAC 2022, gained support in Safari 16, are finding their way into macOS and iOS, and are slated to be the future for password managers like 1Password. They are already supported in Android, and will soon find their way into Chrome OS and Windows in future releases.

Geeky OS security enhancements don’t exactly make big headlines in the front-end community, but it stands to reason that passkeys are going to be a “thing”. And considering how passwords and password apps affect the user experience of things like authentication and form processing, we might want to at least wrap our minds around them, so we know what’s coming.

That’s the point of this article. I’ve been studying and experimenting with passkeys — and the WebAuthn API they are built on top of — for some time now. Let me share what I’ve learned.

Terminology

Here’s the obligatory section of the terminology you’re going to want to know as we dig in. Like most tech, passkeys are fraught with esoteric verbiage and acronyms that are often roadblocks to understanding. I’ll try to demystify several for you here.

  • Relying Party: the server you will be authenticating against. We’ll use “server” to imply the Relying Party in this article.
  • Client: in our case, the web browser or operating system.
  • Authenticator: Software and/or hardware devices that allow generation and storage for public key pairs.
  • FIDO: An open standards body that also creates specifications around FIDO credentials.
  • WebAuthn: The underlying protocol for passkeys, also known as a FIDO2 credential or a single-device FIDO credential.
  • Passkeys: WebAuthn, but with cloud syncing (also called multi-device FIDO credentials, discoverable credentials, or resident credentials).
  • Public Key Cryptography: A generated key pair that includes a private and public key. Depending on the algorithm, it should either be used for signing and verification or encrypting and decrypting. This is also known as asymmetric cryptography.
  • RSA: An acronym of its creators’ names: Rivest, Shamir, and Adleman. RSA is an older, but still useful, family of public key cryptography based on factoring primes.
  • Elliptic Curve Cryptography (ECC): A newer family of cryptography based on elliptic curves.
  • ES256: An elliptic curve public key that uses the ECDSA signing algorithm with SHA-256 for hashing.
  • RS256: Like ES256, but it uses RSA with RSASSA-PKCS1-v1.5 and SHA256.

What are passkeys?

Before we can talk specifically about passkeys, we need to talk about another protocol called WebAuthn (also known as FIDO2). Passkeys are a specification that is built on top of WebAuthn. WebAuthn allows for public key cryptography to replace passwords. We use some sort of security device, such as a hardware key or Trusted Platform Module (TPM), to create private and public keys.

The public key is for anyone to use. The private key, however, cannot be removed from the device that generated it. This was one of the issues with WebAuthn; if you lose the device, you lose access.

Passkeys solve this by providing a cloud sync of your credentials. In other words, what you generate on your computer can now also be used on your phone (though confusingly, there are single-device credentials too).

At the time of writing, only iOS, macOS, and Android provide full support for cloud-synced passkeys, and even then, they are limited by the browser being used. Google and Apple provide an interface for syncing via their Google Password Manager and Apple iCloud Keychain services, respectively.

How do passkeys replace passwords?

In public key cryptography, you can perform what is known as signing. Signing takes a piece of data and then runs it through a signing algorithm with the private key, where it can then be verified with the public key.

Anyone can generate a public key pair, and it’s not attributable to any person since any person could have generated it in the first place. What makes it useful is that only data signed with the private key can be verified with the public key. That’s the portion that replaces a password — a server stores the public key, and we sign in by proving that we hold the other half (i.e., the private key) by signing a random challenge.

As an added benefit, since we’re storing the user’s public keys within a database, there is no longer concern with password breaches affecting millions of users. This reduces phishing, breaches, and a slew of other security issues that our password-dependent world currently faces. If a database is breached, all that’s stored is the user’s public keys, making it virtually useless to an attacker.

No more forgotten emails and their associated passwords, either! The browser will remember which credentials you used for which website — all you need to do is make a couple of clicks, and you’re logged in. You can provide a secondary means of verification to use the passkey, such as biometrics or a pin, but those are still much faster than the passwords of yesteryear.

More about cryptography

Public key cryptography involves having a private and a public key (known as a key pair). The keys are generated together and have separate uses. For example, the private key is intended to be kept secret, and the public key is intended for whomever you want to exchange messages with.

When it comes to encrypting and decrypting a message, the recipient’s public key is used to encrypt a message so that only the recipient’s private key can decrypt the message. In security parlance, this is known as “providing confidentiality”. However, this doesn’t provide proof that the sender is who they say they are, as anyone can potentially use a public key to send someone an encrypted message.

There are cases where we need to verify that a message did indeed come from its sender. In these cases, we use signing and signature verification to ensure that the sender is who they say they are (also known as authenticity). In public key (also called asymmetric) cryptography, this is generally done by signing the hash of a message, so that only the public key can correctly verify it. The hash and the sender’s private key produce a signature after running it through an algorithm, and then anyone can verify the message came from the sender with the sender’s public key.
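
To make this concrete, here’s a minimal sketch of signing and verifying a challenge with the browser’s built-in Web Crypto API, using an ES256-style key pair. This is purely illustrative; WebAuthn does all of this for you under the hood:

// Run inside an async context. Generate an ECDSA key pair on the P-256
// curve (the curve behind ES256); the private key is not extractable.
const keyPair = await crypto.subtle.generateKey(
  { name: 'ECDSA', namedCurve: 'P-256' },
  false,
  ['sign', 'verify']
);

// The "challenge" is just random bytes, like a server would issue
const challenge = crypto.getRandomValues(new Uint8Array(32));

// Sign the challenge with the private key...
const signature = await crypto.subtle.sign(
  { name: 'ECDSA', hash: 'SHA-256' },
  keyPair.privateKey,
  challenge
);

// ...and anyone holding the public key can verify it
const valid = await crypto.subtle.verify(
  { name: 'ECDSA', hash: 'SHA-256' },
  keyPair.publicKey,
  challenge,
  signature
);
console.log(valid); // true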

How do we access passkeys?

To access passkeys, we first need to generate and store them somewhere. Some of this functionality can be provided with an authenticator. An authenticator is any hardware or software-backed device that provides the ability for cryptographic key generation. Think of those one-time passwords you get from Google Authenticator, 1Password, or LastPass, among others.

For example, a software authenticator can use the Trusted Platform Module (TPM) or secure enclave of a device to create credentials. The credentials can then be stored remotely and synced across devices (that’s what makes them passkeys). A hardware authenticator would be something like a YubiKey, which can generate and store keys on the device itself.

To access the authenticator, the browser needs to have access to hardware, and for that, we need an interface. The interface we use here is the Client to Authenticator Protocol (CTAP). It allows access to different authenticators over different mechanisms. For example, we can access an authenticator over NFC, USB, and Bluetooth by utilizing CTAP.

One of the more interesting ways to use passkeys is by connecting your phone over Bluetooth to another device that might not support passkeys. When the devices are paired over Bluetooth, I can log into the browser on my computer using my phone as an intermediary!

The difference between passkeys and WebAuthn

Passkeys and WebAuthn keys differ in several ways. First, passkeys are considered multi-device credentials and can be synced across devices. By contrast, WebAuthn keys are single-device credentials — a fancy way of saying you’re bound to one device for verification.

Second, to authenticate to a server, WebAuthn keys need to provide the user handle for login, after which an allowCredentials list is returned to the client from the server, which informs what credentials can be used to log in. Passkeys skip this step and use the server’s domain name to show which keys are already bound to that site. You’re able to select the passkey that is associated with that server, as it’s already known by your system.

Otherwise, the keys are cryptographically the same; they only differ in how they’re stored and what information they use to start the login process.

The process… in a nutshell

The process for generating a WebAuthn or a passkey is very similar: get a challenge from the server and then use the navigator.credentials.create web API to generate a public key pair. Then, send the challenge and the public key back to the server to be stored.

Upon receiving the public key and challenge, the server validates the challenge and the session from which it was created. If that checks out, the public key is stored, as well as any other relevant information like the user identifier or attestation data, in the database.

The user has one more step — retrieve another challenge from the server and use the navigator.credentials.get API to sign the challenge. We send back the signed challenge to the server, and the server verifies the challenge, then logs us in if the signature passes.

There is, of course, quite a bit more to each step. But that is generally how we’d log into a website using WebAuthn or passkeys.

The meat and potatoes

Passkeys are used in two distinct phases: the attestation and assertion phases.

The attestation phase can also be thought of as the registration phase. You’d normally sign up for a new website with an email and password; in this case, we’d be using our passkey instead.

The assertion phase is similar to how you’d log in to a website after signing up.

Attestation

The navigator.credentials.create API is the focus of our attestation phase. We’re registered as a new user in the system and need to generate a new public key pair. However, we need to specify what kind of key pair we want to generate. That means we need to provide options to navigator.credentials.create.

// The `challenge` is random and has to come from the server
const publicKey: PublicKeyCredentialCreationOptions = {
  challenge: safeEncode(challenge),
  rp: {
    id: window.location.host,
    name: document.title,
  },
  user: {
    id: new TextEncoder().encode(crypto.randomUUID()), // Why not make it random?
    name: 'Your username',
    displayName: 'Display name in browser',
  },
  pubKeyCredParams: [
    {
      type: 'public-key',
      alg: -7, // ES256
    },
    {
      type: 'public-key',
      alg: -256, // RS256
    },
  ],
  authenticatorSelection: {
    userVerification: 'preferred', // Do you want to use biometrics or a pin?
    residentKey: 'required', // Create a resident key e.g. passkey
  },
  attestation: 'indirect', // indirect, direct, or none
  timeout: 60_000,
};
const pubKeyCredential: PublicKeyCredential = await navigator.credentials.create({
  publicKey
});
const {
  id // the key id a.k.a. kid
} = pubKeyCredential;
const pubKey = pubKeyCredential.response.getPublicKey();
const { clientDataJSON, attestationObject } = pubKeyCredential.response;
const { type, challenge, origin } = JSON.parse(new TextDecoder().decode(clientDataJSON));
// Send data off to the server for registration

We’ll get back a PublicKeyCredential, which contains an AuthenticatorAttestationResponse, after creation. The credential has the generated key pair’s ID.

The response provides a couple of bits of useful information. First, we have our public key in this response, and we need to send that to the server to be stored. Second, we also get back the clientDataJSON property which we can decode, and from there, get back the type, challenge, and origin of the passkey.

For attestation, we want to validate the type, challenge, and origin on the server, as well as store the public key with its identifier, e.g. kid. We can also optionally store the attestationObject if we wish. Another useful property to store is the COSE algorithm, which is defined above in our PublicKeyCredentialCreationOptions with alg: -7 or alg: -256, in order to easily verify any signed challenges in the assertion phase.
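
Here’s a rough sketch of that server-side validation in Node/Express-flavored pseudocode. The payload shape, session handling, and db helper are all assumptions for illustration, not a real library API:

// Hypothetical registration endpoint; names and payload shape are made up
app.post('/register', async (req, res) => {
  const { id, clientDataJSON, publicKey, alg } = req.body;

  // Decode and parse the client data produced by the browser
  const clientData = JSON.parse(
    Buffer.from(clientDataJSON, 'base64url').toString('utf8')
  );

  // Validate the type, challenge, and origin against what we expect
  const expectedChallenge = req.session.challenge; // saved when we issued it
  if (
    clientData.type !== 'webauthn.create' ||
    clientData.challenge !== expectedChallenge ||
    clientData.origin !== 'https://example.com'
  ) {
    return res.status(400).send('Attestation validation failed');
  }

  // Store the public key, its identifier (kid), and the COSE algorithm
  await db.storeCredential({ kid: id, publicKey, alg, userId: req.session.userId });
  res.sendStatus(201);
});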

Assertion

The navigator.credentials.get API will be the focus of the assertion phase. Conceptually, this would be where the user logs in to the web application after signing up.

// The `challenge` is random and has to come from the server
const publicKey: PublicKeyCredentialRequestOptions = {
  challenge: new TextEncoder().encode(challenge),
  rpId: window.location.host,
  timeout: 60_000,
};
const publicKeyCredential: PublicKeyCredential = await navigator.credentials.get({
  publicKey,
  mediation: 'optional',
});
const {
  id // the key id, aka kid
} = publicKeyCredential;
const { clientDataJSON, authenticatorData, signature, userHandle } = publicKeyCredential.response;
const { type, challenge, origin } = JSON.parse(new TextDecoder().decode(clientDataJSON));
// Send data off to the server for verification

We’ll again get a PublicKeyCredential with an AuthenticatorAssertionResponse this time. The credential again includes the key identifier.

We also get the type, challenge, and origin from the clientDataJSON again. The signature is now included in the response, as well as the authenticatorData. We’ll need those and the clientDataJSON to verify that this was signed with the private key.

The authenticatorData includes some properties that are worth tracking. First is the SHA-256 hash of the relying party ID, located within the first 32 bytes, which is useful for verifying that the request comes from the expected origin. Second is the signCount, a four-byte counter starting at byte 33 (byte 32 holds the flags). The count is generated by the authenticator and should be compared to its previous value to ensure that nothing fishy is going on with the key. The value should always be 0 when it’s a multi-device passkey and should be greater than the previous signCount when it’s a single-device passkey.
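
Pulling those fields out is only a few lines of JavaScript. A sketch, assuming authenticatorData is an ArrayBuffer:

const bytes = new Uint8Array(authenticatorData);
const rpIdHash = bytes.slice(0, 32); // SHA-256 hash of the relying party ID
const flags = bytes[32]; // bit flags: user presence, user verification, etc.
const signCount = new DataView(authenticatorData).getUint32(33, false); // big-endian counter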

Once you’ve asserted your login, you should be logged in — congratulations! The passkey protocol is pretty great, but it does come with some caveats.

Some downsides

There’s a lot of upside to passkeys; however, there are some issues at the time of this writing. For one thing, passkey support is still somewhat early, with only single-device credentials allowed on Windows and very little support on Linux systems. Passkeys.dev provides a nice table that’s sort of like the Caniuse of this protocol.

Also, Google’s and Apple’s passkeys platforms do not communicate with each other. If you want to get your credentials from your Android phone over to your iPhone… well, you’re out of luck for now. That’s not to say there is no interoperability! You can log in to your computer by using your phone as an authenticator. But it would be much cleaner just to have it built into the operating system and synced without it being locked at the vendor level.

Where are things going?

What does the passkeys protocol of the future look like? It looks pretty good! Once it gains support from more operating systems, there should be an uptake in usage, and you’ll start seeing it used more and more in the wild. Some password managers are even going to support them first-hand.

Passkeys are by no means only supported on the web. Android and iOS will both support native passkeys as first-class citizens. We’re still in the early days of all this, but expect to see it mentioned more and more.

After all, we eliminate the need for passwords, and by doing so, make the world safer for it!

Resources

Here are some more resources if you want to learn more about Passkeys. There’s also a repository and demo I put together for this article.


How to Safely Share Your Email Address on a Website

Spammers are a huge deal nowadays. If you want to share your contact information without getting overwhelmed by spam email, you need a solution. I ran into this problem a few months ago. While I was researching how to solve it, I found different interesting solutions. Only one of them was perfect for my needs.

In this article, I am going to show you how to easily protect your email address from spam bots with multiple solutions. It’s up to you to decide what technique fits your needs.

The traditional case

Let’s say that you have a website. You want to share your contact details, and you don’t want to share only your social links. The email address must be there. Easy, right? You type something like this:

<a href="mailto:email@address.com">Send me an Email</a>

And then you style it according to your tastes.

Well, even if this solution works, it has a problem. It makes your email address available to literally everyone, including website crawlers and all sorts of spam bots. This means that your inbox can be flooded with tons of unwanted rubbish like promotional offers or even phishing campaigns.

We are looking for a compromise. We want to make it hard for bots to get our email addresses, but as simple as possible for normal users.

The solution is obfuscation.

Obfuscation is the practice of making something difficult to understand. This strategy is used with source code for multiple reasons. One of them is hiding the purpose of the source code to make tampering or reverse-engineering more difficult. We will first look at different solutions that are all based on the idea of obfuscation.

The HTML approach

We can think of bots as software that browse the web and crawl through web pages. Once a bot obtains an HTML document, it interprets the content in it and extracts information. This extraction process is called web scraping. If a bot is looking for a pattern that matches the email format, we can try to disguise it by using a different format. For example, we could use HTML comments:

<p>If you want to get in touch, please drop me an email at<!-- fhetydagzzzgjds --> email@<!-- sdfjsdhfkjypcs -->addr<!-- asjoxp -->ess.com</p>

It looks messy, but the user will see the email address like this:

If you want to get in touch, please drop me an email at email@address.com

Pros:

  • Easy to set up.
  • It works with JavaScript disabled.
  • It can be read by assistive technology.

Cons:

  • Spam bots can skip known sequences like comments.
  • It doesn’t work with a mailto: link.

The HTML & CSS approach

What if we use the styling power of CSS to remove some content placed only to fool spam bots? Let’s say that we have the same content as before, but this time we place a span element inside:

<p>If you want to get in touch, please drop me an email at <span class="blockspam" aria-hidden="true">PLEASE GO AWAY!</span> email@<!-- sdfjsdhfkjypcs -->address.com</p>.

Then, we use the following CSS style rule:

span.blockspam {
  display: none;
}

The final user will only see this:

If you want to get in touch, please drop me an email at email@address.com.

…which is the content we truly care about.

Pros:

  • It works with JavaScript disabled.
  • It’s more difficult for bots to get the email address.
  • It can be read by assistive technology.

Con:

  • It doesn’t work with a mailto: link.

The JavaScript approach

In this example, we use JavaScript to make our email address unreadable. Then, when the page is loaded, JavaScript makes the email address readable again. This way, our users can get the email address.

The easiest solution uses the Base64 encoding algorithm to obfuscate the email address. First, we need to encode the email address in Base64. We can use a website like Base64Encode.org to do this: paste in your email address and click the button to encode.

With these few lines of JavaScript, we decode the email address and set the href attribute on the HTML link:

var encEmail = "ZW1haWxAYWRkcmVzcy5jb20="; // "email@address.com" in Base64
const link = document.getElementById("contact");
link.setAttribute("href", "mailto:".concat(atob(encEmail)));

Then we have to make sure the email link includes id="contact" in the markup, like this:

<a id="contact" href="">Send me an Email</a>

We are using the atob method to decode a string of Base64-encoded data. An alternative is to use some basic encryption algorithm like the Caesar cipher, which is fairly straightforward to implement in JavaScript.
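
Here’s what that could look like with ROT13, a Caesar cipher that rotates each letter 13 places (so applying it twice round-trips the text). The rot13 helper and the encoded string are written for this example:

// Rotate each letter 13 places; non-letters pass through untouched
const rot13 = (str) =>
  str.replace(/[a-z]/gi, (char) => {
    const base = char <= "Z" ? 65 : 97;
    return String.fromCharCode(((char.charCodeAt(0) - base + 13) % 26) + base);
  });

// "email@address.com" with its letters rotated by 13
const encEmail = "rznvy@nqqerff.pbz";
const link = document.getElementById("contact");
link.setAttribute("href", "mailto:" + rot13(encEmail));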

Pros:

  • It’s more complicated for bots to get the email address, especially if you use an encryption algorithm.
  • It works with a mailto: link.
  • It can be read by assistive technology.

Con:

  • JavaScript must be enabled on the browser, otherwise, the link will be empty.

The embedded form approach

Contact forms are everywhere. You certainly have used one of them at least once. If you want a way for people to directly contact you, one of the possible solutions is implementing a contact form service on your website.

Formspree is one example of a service that provides you all the benefits of a contact form without worrying about server-side code. Wufoo is too. In fact, there are a bunch you can consider for handling contact form submissions for you.

The first step to using any form service is to sign up and create an account. Pricing varies, of course, as do the features offered between services. But one thing most of them do is provide you with an HTML snippet to embed a form you create into any website or app. Here’s an example I pulled straight from a form I created in my Formspree account:

<form action="https://formspree.io/f/[my-key]" method="POST">
  <label> Your email:
    <input type="email" name="email" />
  </label>
  <label> Your message:
    <textarea name="message"></textarea>
  </label>
  <!-- honeypot spam filtering -->
  <input type="text" name="_gotcha" style="display:none" />
  <button type="submit">Send</button>
</form>

In the first line, you should customize the action based on your endpoint. This form is quite basic, but you can add as many fields as you wish.

Notice the hidden input tag on line 9. This input tag helps you filter the submissions made by regular users and bots. In fact, if Formspree’s back-end sees a submission with that input filled, it will discard it. A regular user wouldn’t do that, so it must be a bot.

Pros:

  • Your email address is safe since it is not public.
  • It works with JavaScript disabled.

Con:

  • Relies on a third-party service (which may be a pro, depending on your needs)

There is one other disadvantage to this solution, but I left it out of the list since it’s quite subjective and depends on your use case. With this solution, you are not sharing your email address. You are giving people a way to contact you. What if people want to email you? What if people are looking for your email address, and they don’t want a contact form? A contact form may be a heavy-handed solution in that sort of situation.

Conclusion

We reached the end! In this tutorial, we talked about different solutions to the problem of sharing your email address online. We walked through different ideas involving HTML code, JavaScript, and even some online services like Formspree for building contact forms. At the end of this tutorial, you should be aware of all the pros and cons of the strategies shown. Now, it’s up to you to pick the most suitable one for your specific use case.


Using SVG in WordPress (2 Helpful Plugin Recommendations)

There is a little legwork to do if you plan on using SVG in WordPress. For fair-enough reasons, WordPress doesn’t allow SVG out of the box. SVG is a markup syntax that has lots of power, including the ability to load other resources and run JavaScript. So, if WordPress were to blanket-ly allow SVG by default, users even with quite limited roles could upload SVG and cause problems, like XSS vulnerabilities.

But say that’s not a problem for your site and you just want to use SVG, gosh darn it. First, let’s be clear what we mean by using SVG in WordPress: uploading SVG through the media uploader and using the SVG images within post content and as featured images.

There is nothing stopping you from, say, using SVG in your templates. Meaning inline <svg> or SVG files you link up as images in your template from your CSS or whatnot. That’s completely fine and you don’t need to do anything special for that to work in WordPress.

Example of using SVG in WordPress: the Media Library is open and shows tile previews of different SVG files.

Taking matters into your own hands

What prevents you from using SVG in WordPress is that the Media Library Uploader rejects the file’s MIME type. To allow SVG in WordPress, you really just need this filter. This would go in your functions.php or a functionality plugin:

// Add SVG to the list of MIME types allowed by the Media Library uploader
function cc_mime_types($mimes) {
  $mimes['svg'] = 'image/svg+xml';
  return $mimes;
}
add_filter('upload_mimes', 'cc_mime_types');

But the problem after that is that the SVG file usually won’t display correctly in the various places it needs to, like the Media Library’s image previews, the Featured Image widget, and possibly even the classic or Block Editor. I have a snippet of CSS that can be injected to fix this. But — and this is kinda why I’m writing this new post — that doesn’t seem to work for me anymore, which has got me thinking.
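
For the record, the injected CSS is generally something along these lines (a sketch of the idea, not the exact snippet):

/* Give SVG files real dimensions in Media Library previews and thumbnails */
img[src$=".svg"] {
  width: 100% !important;
  height: auto !important;
}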

Plugins for using SVG in WordPress

I used to think, eh, why bother, it’s so little code to allow this that I might as well just do it myself with the function. But WordPress, of course, has a way of shifting over time, and since supporting SVG isn’t something WordPress is going to do out of the box, this is actually a great idea for a plugin to handle. That way, the SVG plugin can evolve to handle quirks as WordPress evolves and, theoretically, if enough people use the SVG plugin, it will be maintained.

So, with that, here are a couple of plugin recommendations for using SVG in WordPress.

SVG Support

This is the one I’ve been using lately and it seems to work great for me.

Screenshot of the SVG Support plugin for WordPress in the WordPress Plugin Directory.

I just install it, activate it, and do nothing else. It does have a settings screen, but I don’t need any of those things. I really like how it asks you if it’s OK to load additional CSS on the front-end (for me, it’s not OK, so I leave it off) — although even better would be for the plugin to show you what it’s going to load so you can add it to your own CSS if you want.

The setting to restrict uploading SVG in WordPress to admins is smart, although if you want to be more serious about SVG safety, you could use this next plugin instead…

Safe SVG

This one hasn’t been updated in years, but it goes the extra mile for SVG safety in that it literally sanitizes SVG files as you upload them, and even optimizes them while it adds the SVG in WordPress.

Screenshot of the Safe SVG plugin in the WordPress Plugin Directory.

We have fairly tight editorial control over authors and such here on this site, so the security aspects of this SVG plugin aren’t a big worry to me. Plus, I like to be in charge of my own SVG optimization, so this one isn’t as perfect for me, though I’d probably recommend it to a site with less technical expertise at the site owner level.


Looks like there is Easy SVG Support as well, but it doesn’t seem to be as nice as the SVG Support plugin and hasn’t been updated recently, so I can’t recommend that.

What plugins have you successfully tried for using SVG in WordPress? Any recommendations you’d like to add?


CSS-Based Fingerprinting

Fingerprinting is bad. It’s a term that refers to building up enough metadata about a user that you can essentially figure out who they are. JavaScript has access to all sorts of fingerprinting possibilities, which then combined with the IP address that the server has access to, means fingerprinting is all too common.

You don’t generally think of CSS as being a fingerprinting vector though, and thus “safe” in that way. But Oliver Brotchie has documented an idea that allows for some degree of fingerprinting with CSS alone.

Think of all the @media queries we have. We can test for pointer type with any-pointer. Imagine that for each value, we request a totally unique background-image from a server. If that image was requested, we know those @media queries were true. We can start to fingerprint with something like this:

.pointer {
  background-image: url('/unique-id/pointer=none')
}

@media (any-pointer: coarse) {
  .pointer {
    background-image: url('/unique-id/pointer=coarse')
  }
}

@media (any-pointer: fine) {
  .pointer {
    background-image: url('/unique-id/pointer=fine')
  }
}

Combine that with the fact that we can test for a dark mode preference with prefers-color-scheme, and the fingerprint gets a bit clearer. In fact, it’s the current draft for CSS user preference media queries that Oliver is most concerned about:

Not only will the upcoming draft make this method scalable, but it will also increase its precision. Currently, without alternative means, it is hard to conclusively link every request to a specific visitor as the only feasible way to determine their origin, is to group the requests by the IP address of the connection. However, with the new draft, by generating a randomised string and interpolating it into the URL tag for every visitor, we can accurately identify all requests from said visitor.

There are tons more. We can make media queries that are 1px apart and request a background image for each, perfectly guessing the visitor’s window size. There are probably a dozen or more exotic media queries that are rarely used, but are useful specifically to fingerprinting with CSS. Combine that with @supports queries for all sorts of things to essentially guess the exact browser. And combine that with the classic technique of testing for installation of specific local fonts, and you have a half-decent fingerprinting machine.

@font-face {
  font-family: 'some-font';
  src: local(some font), url('/unique-id/some-font');
}

.some-font {
  font-family:'some-font';
}
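
And the 1px-apart window-size queries mentioned earlier could look something like this (a hand-written sketch of the idea, not Oliver’s actual generated code):

/* One rule for every 1px-wide viewport bucket */
@media (min-width: 1280px) and (max-width: 1280px) {
  .width { background-image: url('/unique-id/width=1280'); }
}
@media (min-width: 1281px) and (max-width: 1281px) {
  .width { background-image: url('/unique-id/width=1281'); }
}
/* ...and so on, for as many widths as you care to test */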

The generated CSS to do it is massive (here’s the Sass to generate it), but apparently it’s heavily reduced once we can use custom properties in URLs.

I’m not heavily worried about it, mostly because I don’t disable JavaScript and JavaScript is so much more widely capable of fingerprinting already. Plus, there are already other types of CSS security vulnerabilities, from reading visited links (which browsers have addressed), keylogging, and user-generated inline styles, among others that folks have pointed out in another article on the topic.

But Oliver’s research on fingerprinting is really good and worthy of a look by everyone who knows more about web security than I do.


HTML Sanitizer API

Three cheers for (draft stage) progress on a Sanitizer API! It’s gospel that you can’t trust user input. And indeed, any app I’ve ever worked on has dealt with bad actors trying to slip in and execute nefarious code somewhere it shouldn’t.

It’s the web developer’s job to clean user input before it is used again on the page (or stored, or used server-side). This is typically done with our own code or with libraries that are pulled in to help. We might write a RegEx to strip anything that looks like HTML (or the like), which carries the risk of bugs and of bad actors finding a way around what our code is doing.
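
For example, a quick hand-rolled version might look like this, along with the kind of input that slips right past it:

// Naive tag-stripping: remove anything that looks like <...>
const naiveStrip = (str) => str.replace(/<[^>]*>/g, "");

naiveStrip('<em onclick="alert(1)">Hello!</em>'); // "Hello!" (looks fine)
naiveStrip('<img src=x onerror=alert(1)'); // unchanged! There is no closing ">",
// so the regex never matches, yet browsers will still parse this as an element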

Instead of user-land libraries or dancing with it ourselves, we could let the browser do it:

// some function that turns a string into real nodes
const untrusted_input = to_node("<em onclick='alert(1);'>Hello!</em>");

const sanitizer = new Sanitizer();
sanitizer.sanitize(untrusted_input);  // <em>Hello!</em>

Then let it continue to be a browser responsibility over time. As the draft report says:

The browser has a fairly good idea of when it is going to execute code. We can improve upon the user-space libraries by teaching the browser how to render HTML from an arbitrary string in a safe manner, and do so in a way that is much more likely to be maintained and updated along with the browser’s own changing parser implementation.

This kind of thing is web standards at its best. Spot something annoying (and/or dangerous) that tons of people have to do, and step in to make it safer, faster, and better.



The Invisible JavaScript Backdoor

An interesting (scary) trick of a nearly undetectable exploit. Wolfgang Ettlinger:

What if a backdoor literally cannot be seen and thus evades detection even from thorough code reviews?

I’ll post the screenshot of the exploit from the post with the actual exploit circled:

If you were really looking super closely you’d probably see that, but I can see how it would be easy to miss as it would avoid any linting problems and doesn’t mess up syntax highlighting at all. Then the way this code is written, the commands are executed:

Each element in the array, the hardcoded commands as well as the user-supplied parameter, is then passed to the exec function. This function executes OS commands.
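
Here’s a rough reconstruction of the pattern (not the exact code from the post). The second destructured name below is U+3164 HANGUL FILLER, a character that renders as blank in most editors yet counts as a valid JavaScript identifier:

const express = require('express');
const { exec } = require('child_process');
const app = express();

app.get('/network_health', (req, res) => {
  // Looks like one query parameter is destructured; it's actually two
  const { timeout, ㅤ } = req.query;
  const commands = [
    'ping -c 1 google.com',
    'curl -s http://example.com/',
    ㅤ // the attacker-controlled, invisible parameter
  ];
  // Every entry, including the invisible one, gets executed as an OS command
  for (const cmd of commands.filter(Boolean)) {
    exec(cmd);
  }
  res.send('ok');
});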

They consider it worthy of change:

The Cambridge team proposes restricting Bidi Unicode characters. As we have shown, homoglyph attacks and invisible characters can pose a threat as well.


Ain’t No Party Like a Third Party

I’d like to tell you something not to do to make your website better. Don’t add any third-party scripts to your site.

That may sound extreme, but at one time it would’ve been common sense. On today’s modern web it sounds like advice from a tinfoil-hat-wearing conspiracy nut. But just because I’m paranoid doesn’t mean they’re not out to get your users’ data.

All I’m asking is that we treat third-party scripts like third-party cookies. They were a mistake.

Browsers are now beginning to block third-party cookies. Chrome is dragging its heels because the same company that makes the browser also runs an advertising business. But even they can’t resist the tide. Third-party cookies are used almost exclusively for tracking. That was never the plan.

In the beginning, there was no state on the web. A client requested a resource from a server. The server responded. Then they both promptly forgot about it. That made it hard to build shopping carts or log-ins. That’s why we got cookies.

In hindsight, cookies should’ve been limited to a same-origin policy from day one. That would’ve solved the problems of authentication and commerce without opening up a huge security hole that has been exploited to track people as they moved from one website to another. The web went from having no state to having too much.

Now that vulnerability is finally being closed. But only for cookies. I would love it if third-party JavaScript got the same treatment.

When you add any third-party file to your website—an image, a stylesheet, a font—it’s a potential vector for tracking. But third-party JavaScript files go one further. They can execute arbitrary code.

Just take a minute to consider the implications of that: any third-party script on your site is allowing someone else to execute code on your web pages. That’s astonishingly unsafe.

It gets better. One of the pieces of code that this invited intruder can execute is the ability to pull in other third-party scripts.

You might think there’s no harm in adding that one little analytics script. Or that one little Google Tag Manager snippet. It’s such a small piece of code, after all. But in doing that, you’ve handed over your keys to a stranger. And now they’re welcoming in all their shady acquaintances.

Request Map Generator is a great tool for visualizing the resources being loaded on any web page. Try pasting in the URL of an interesting article from a news outlet or magazine that someone sent you recently. Then marvel at the sheer size and number of third-party scripts that sneak in via one tiny script element on the original page.

That’s why I recommend that the one thing people can do to make their website better is to not add third-party scripts.

Easier said than done, right? Especially if you’re working on a site that currently relies on third-party tracking for its business model. But that exploitative business model won’t change unless people like us are willing to engage in a campaign of passive resistance.

I know, I know. If you refuse to add that third-party script, your boss will probably say, “Fine, I’ll get someone else to do it. Also, you’re fired.”

This tactic will only work if everyone agrees to do what’s right. We need to have one another’s backs. We need to support one another. The way people support one another in the workplace is through a union.

So I think I’d like to change my answer to the question that’s been posed.

The one thing people can do to make their website better is to unionize.


Don’t Snore on CORS

Whatever, I just needed a title. Everyone’s favorite web security feature has crossed my desk a bunch of times lately and I always feel like that is a sign I should write something because that’s what blogging is.

The main problem with CORS is that developers don’t understand CORS. The basic concept of it is supposed to be easy: don’t run code across origins. Meaning if I, at css-tricks.com, try to fetch some JavaScript from an external URL, like any-other-website.com, the browser will just stop it by default. You’ll see an error in the console. Not allowed.

Unless, that is, the other website sends a header that specifically allows this. My domain can be whitelisted or there could be a wildcard that allows it. There is way more detail here (like preflighting and credentials) and, as ever, the MDN article does a good job on that front.
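
Server-side, that header is a one-liner. A minimal sketch in Node (the allowed origin here is just an example):

const http = require('http');

http.createServer((req, res) => {
  // Let css-tricks.com read this response cross-origin ('*' would allow anyone)
  res.setHeader('Access-Control-Allow-Origin', 'https://css-tricks.com');
  res.setHeader('Content-Type', 'application/json');
  res.end(JSON.stringify({ ok: true }));
}).listen(3000);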

What have traditionally been hair-pulling moments for me are when CORS seems to behave inconsistently. Two requests will go through and a third will fail, which seems inexplicable, but was reproducible. (Perhaps there was a load balancer involved with half-cached headers? Who knows.) Or I’m trying to use a proxy and the proxy stops working. I can’t even remember all the examples, but I bet I’ve been in meetings trying to debug CORS issues over 100 times in my life.

Anyway, those times where CORS have crossed my desk recently:

  • This video, Learn CORS In 6 Minutes, has 10,000 likes and seems to have struck a chord with folks. A non-ironic npm install cors was the solution here.
  • You have to literally tell servers to have the correct headers. So, similar to the video above, I had to do that in a video about Cloudflare Workers, where I used cross-origin (but you don’t have to, which is actually a very cool feature of Cloudflare Workers).
  • Jake’s article “How to win at CORS” which includes a playground.
  • There are browser extensions (like ones for Firefox and Chrome) that yank in CORS headers for you, which feels like a questionable workaround, but I wouldn’t blame anybody for using one in development.
  • I wrote about how easy it is to proxy… anything, including a third-party JavaScript file and make it first-party. Plenty of people pointed out in the comments that doing that totally removes the protection you get from CORS, which is danger-danger. Agreed, unless you 100% control that third-party, it’s quite dangerous.

The Options for Password Revealing Inputs

In HTML, there is a very clear input type for dealing with passwords:

<input type="password">

If you use that, you get the obfuscated bullet-points when you type into it, like:

••••••••

That’s the web trying to help with security. If it didn’t do that, the classic problem is that someone could be peering over your shoulder, spying on what you’re entering. Easier than looking at what keys your fingers press, I suppose.

But UX has evolved a bit, and it’s much more common to have an option like:

☑️ Reveal Password?

I think the idea is that we should have a choice. Most of us can have a quick look about and see if there are prying eyes, and if not, we might choose to reveal our password so we can make sure we type I_luv_T@cos696969 correctly and aren’t made to suffer the agony of typing it wrong 8 times.

So! What to do?

Option 1: Use type="password", then swap it out for type="text" with JavaScript

This is what everybody is doing right now, because it’s the one that actually works across all browsers right now.

const input = document.querySelector(".password-input");
// When an input is checked, or whatever...
if (input.getAttribute("type") === "password") {
  input.setAttribute("type", "text");
} else {
  input.setAttribute("type", "password");
}
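
Wired up to an actual checkbox, the whole thing might look like this (the class names are assumptions about the markup):

const input = document.querySelector(".password-input");
const checkbox = document.querySelector(".reveal-password");

checkbox.addEventListener("change", () => {
  input.setAttribute("type", checkbox.checked ? "text" : "password");
});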

The problem here — aside from it just being kinda weird that you have to change input types just for this — is password manager tools. I wish I had exact details on this, but it doesn’t surprise me that messing with input types might confuse any tool specifically designed to look for and prefill passwords, maybe even the tools built right into browsers themselves.

Option 2: Use -webkit-text-security in CSS

This isn’t a real option because you can’t just have an important feature work in some browsers and not in others. But clearly there was a desire to move this functionality to CSS at some point, as it does work in some browsers.

input[type="password"] {
  -webkit-text-security: square;
}
form.show-passwords input[type="password"] {
  -webkit-text-security: none;
}

Option 3: input-security in CSS

There is an Editor’s Draft spec for input security. It’s basically a toggle value.

form.show-passwords input[type="password"] {
  input-security: none;
}

I like it. Simple. But I don’t think any browser supports it yet. So, kinda realistically, we’re stuck with Option #1 for now.

Securing Your Website With Subresource Integrity

When you load a file from an external server, you’re trusting that the content you request is what you expect it to be. Since you don’t manage the server yourself, you’re relying on the security of yet another third party and increasing the attack surface. Trusting a third party is not inherently bad, but it should certainly be taken into consideration in the context of your website’s security.

A real-world example

This isn’t a purely theoretical danger. Ignoring potential security issues can and has already resulted in serious consequences. On June 4th, 2019, Malwarebytes announced their discovery of a malicious skimmer on the website NBA.com. Due to a compromised Amazon S3 bucket, attackers were able to alter a JavaScript library to steal credit card information from customers.

It’s not only JavaScript that’s worth worrying about, either. CSS is another resource capable of performing dangerous actions such as password stealing, and all it takes is a single compromised third-party server for disaster to strike. But third-party servers can provide invaluable services that we can’t simply go without, such as CDNs that reduce the total bandwidth usage of a site and serve files to the end-user much faster due to location-based caching. So it’s established that we need to sometimes rely on a host that we have no control over, but we also need to ensure that the content we receive from it is safe. What can we do?

Solution: Subresource Integrity (SRI)

SRI is a security policy that prevents the loading of resources that don’t match an expected hash. By doing this, if an attacker were to gain access to a file and modify its contents to contain malicious code, it wouldn’t match the hash we were expecting and wouldn’t execute at all.

Doesn’t HTTPS do that already?

HTTPS is great for security and a must-have for any website, and while it does prevent similar problems (and much more), it only protects against tampering with data-in-transit. If a file were to be tampered with on the host itself, the malicious file would still be sent over HTTPS, doing nothing to prevent the attack.

How does hashing work?

A hashing function takes data of any size as input and returns data of a fixed size as output. Hashing functions would ideally have a uniform distribution. This means that for any input, x, the probability that the output, y, will be any specific possible value is similar to the probability of it being any other value within the range of outputs.

Here’s a metaphor:

Suppose you have a 6-sided die and a list of names. The names, in this case, would be the hash function’s “input” and the number rolled would be the function’s “output.” For each name in the list, you’ll roll the die and keep track of the number rolled by writing it next to the name. If a name is used as input more than once, its corresponding output will always be what it was the first time. For the first name, Alice, you roll 4. For the next, John, you roll 6. Then for Bob, Mary, William, Susan, and Joseph, you get 2, 2, 5, 1, and 1, respectively. If you use “John” as input again, the output will once again be 6. This metaphor describes how hash functions work in essence.

Name (input)    Number rolled (output)
Alice           4
John            6
Bob             2
Mary            2
William         5
Susan           1
Joseph          1

You may have noticed that, for example, Bob and Mary have the same output. For hashing functions, this is called a “collision.” For our example scenario, it inevitably happens. Since we have seven names as inputs and only six possible outputs, we’re guaranteed at least one collision.

A notable difference between this example and a hash function in practice is that practical hash functions are typically deterministic, meaning they don’t make use of randomness like our example does. Rather, it predictably maps inputs to outputs so that each input is equally likely to map to any particular output.

SRI uses a family of hashing functions called the secure hash algorithm (SHA). This is a family of cryptographic hash functions that includes 224, 256, 384, and 512-bit variants. A cryptographic hash function is a more specific kind of hash function with these properties: it is effectively impossible to reverse to find the original input (without already having the corresponding input or brute-forcing), it is collision-resistant, and it is designed so that a small change in the input alters the entire output. SRI supports the 256, 384, and 512-bit variants of the SHA family.

Here’s an example with SHA-256:

The output for hello is:

2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824

And the output for hell0 (with a zero instead of an O) is:

bdeddd433637173928fe7202b663157c9e1881c3e4da1d45e8fff8fb944a4868

You’ll notice that the slightest change in the input will produce an output that is completely different. This is one of the properties of cryptographic hashes listed earlier.

The format you’ll see most frequently for hashes is hexadecimal, which consists of all the decimal digits (0-9) and the letters A through F. One of the benefits of this format is that every two characters represent a byte, and the evenness can be useful for purposes such as color formatting, where a byte represents each color. This means a color without an alpha channel can be represented with only six characters (e.g., red = ff0000).

This space efficiency is also why we use hashing instead of comparing the entirety of a file to the data we’re expecting each time. While 256 bits cannot represent all of the data in a file that is greater than 256 bits without compression, the collision resistance of SHA-256 (and 384, 512) ensures that it’s virtually impossible to find two hashes for differing inputs that match. And as for SHA-1, it’s no longer secure, as a collision has been found.

Interestingly, the appeal of compactness is likely one of the reasons that SRI hashes don’t use the hexadecimal format, and instead use base64. This may seem like a strange decision at first, but when we take into consideration the fact that these hashes will be included in the code and that base64 is capable of conveying the same amount of data as hexadecimal while being 33% shorter, it makes sense. A single character of base64 can be in 64 different states, which is 6 bits worth of data, whereas hex can only represent 16 states, or 4 bits worth of data. So if, for example, we want to represent 32 bytes of data (256 bits), we would need 64 characters in hex, but only 44 characters in base64. When using longer hashes, such as SHA-384/512, base64 saves a great deal of space.

Why does hashing work for SRI?

So let’s imagine there was a JavaScript file hosted on a third-party server that we included in our webpage and we had subresource integrity enabled for it. Now, if an attacker were to modify the file’s data with malicious code, the hash of it would no longer match the expected hash and the file would not execute. Recall that any small change in a file completely changes its corresponding SHA hash, and that hash collisions with SHA-256 and higher are, at the time of this writing, virtually impossible.

Our first SRI hash

So, there are a few methods you can use to compute the SRI hash of a file. One way (and perhaps the simplest) is to use srihash.org, but if you prefer a more programmatic way, you can use:

sha384sum [filename here] | head -c 96 | xxd -r -p | base64
  • sha384sum Computes the SHA-384 hash of a file
  • head -c 96 Trims all but the first 96 characters of the string that is piped into it
    • -c 96 Indicates to trim all but the first 96 characters. We use 96, as it’s the character length of an SHA-384 hash in hexadecimal format
  • xxd -r -p Takes hex input piped into it and converts it into binary
    • -r Tells xxd to receive hex and convert it to binary
    • -p Removes the extra output formatting
  • base64 Simply converts the binary output from xxd to base64

If you decide to use this method, check the table below to see the lengths of each SHA hash.

Hash algorithm | Bits | Bytes | Hex characters
SHA-256 | 256 | 32 | 64
SHA-384 | 384 | 48 | 96
SHA-512 | 512 | 64 | 128

For the head -c [x] command, x will be the number of hex characters for the corresponding algorithm.

MDN also mentions a command to compute the SRI hash:

shasum -b -a 384 FILENAME.js | awk '{ print $1 }' | xxd -r -p | base64

awk '{print $1}' Finds the first section of a string (separated by tab or space) and passes it to xxd. $1 represents the first segment of the string passed into it.

And if you’re running Windows:

@echo off
set bits=384
openssl dgst -sha%bits% -binary %1 | openssl base64 -A > tmp
set /p a= < tmp
del tmp
echo sha%bits%-%a%
pause
  • @echo off prevents the commands that are running from being displayed. This is particularly helpful for ensuring the terminal doesn’t become cluttered.
  • set bits=384 sets a variable called bits to 384. This will be used a bit later in the script.
  • openssl dgst -sha%bits% -binary %1% | openssl base64 -A > tmp is more complex, so let’s break it down into parts.
    • openssl dgst computes a digest of an input file.
    • -sha%bits% uses the variable, bits, and combines it with the rest of the string to be one of the possible flag values, sha256, sha384, or sha512.
    • -binary outputs the hash as binary data instead of a string format, such as hexadecimal.
    • %1 is the first argument passed to the script when it’s run.
    • The first part of the command hashes the file provided as an argument to the script.
    • | openssl base64 -A > tmp converts the binary output piped through it into base64 and writes it to a file called tmp. -A outputs the base64 onto a single line.
  • set /p a= < tmp stores the contents of the file tmp in a variable, a.
  • del tmp deletes the tmp file.
  • echo sha%bits%-%a% prints the SHA hash type along with the base64 hash of the input file.
  • pause prevents the terminal from closing.
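And if you have Node.js available, here’s a minimal sketch that computes the same SRI hash on any platform using only the built-in crypto and fs modules (the filename sri-hash.js is just an example):

// Usage: node sri-hash.js <file> [sha256|sha384|sha512]
const crypto = require('crypto');
const fs = require('fs');

const [file, algorithm = 'sha384'] = process.argv.slice(2);

// Hash the file's raw bytes and encode the digest as base64,
// which is the format the integrity attribute expects.
const digest = crypto
  .createHash(algorithm)
  .update(fs.readFileSync(file))
  .digest('base64');

console.log(`${algorithm}-${digest}`);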

SRI in action

Now that we understand how hashing and SRI hashes work, let’s try a concrete example. We’ll create two files:

// file1.js
alert('Hello, world!');

and:

// file2.js
alert('Hi, world!');

Then we’ll compute the SHA-384 SRI hashes for both:

Filename | SHA-384 hash (base64)
file1.js | 3frxDlOvLa6GGEUwMh9AowcepHRx/rwFT9VW9yL1wv/OcerR39FEfAUHZRrqaOy2
file2.js | htr1LmWx3PQJIPw5bM9kZKq/FL0jMBuJDxhwdsMHULKybAG5dGURvJIXR9bh5xJ9

Then, let’s create a file named index.html:

<!DOCTYPE html>
<html>
  <head>
    <script type="text/javascript" src="./file1.js" integrity="sha384-3frxDlOvLa6GGEUwMh9AowcepHRx/rwFT9VW9yL1wv/OcerR39FEfAUHZRrqaOy2" crossorigin="anonymous"></script>
    <script type="text/javascript" src="./file2.js" integrity="sha384-htr1LmWx3PQJIPw5bM9kZKq/FL0jMBuJDxhwdsMHULKybAG5dGURvJIXR9bh5xJ9" crossorigin="anonymous"></script>
  </head>
</html>

Place all of these files in the same folder and start a server within that folder. For example, run npx http-server inside the folder containing the files and open one of the addresses it provides (such as 127.0.0.1:8080), or use the server of your choice. You should get two alert dialog boxes. The first should say “Hello, world!” and the second, “Hi, world!”

If you modify the contents of the scripts, you’ll notice that they no longer execute. This is subresource integrity in effect. The browser notices that the hash of the requested file does not match the expected hash and refuses to run it.

We can also include multiple hashes for a resource and the strongest hash will be chosen, like so:

<!DOCTYPE html>
<html>
  <head>
    <script
      type="text/javascript"
      src="./file1.js"
      integrity="sha384-3frxDlOvLa6GGEUwMh9AowcepHRx/rwFT9VW9yL1wv/OcerR39FEfAUHZRrqaOy2 sha512-cJpKabWnJLEvkNDvnvX+QcR4ucmGlZjCdkAG4b9n+M16Hd/3MWIhFhJ70RNo7cbzSBcLm1MIMItw
9qks2AU+Tg=="
       crossorigin="anonymous"></script>
    <script 
      type="text/javascript"
      src="./file2.js"
      integrity="sha384-htr1LmWx3PQJIPw5bM9kZKq/FL0jMBuJDxhwdsMHULKybAG5dGURvJIXR9bh5xJ9 sha512-+4U2wdug3VfnGpLL9xju90A+kVEaK2bxCxnyZnd2PYskyl/BTpHnao1FrMONThoWxLmguExF7vNV
WR3BRSzb4g=="
      crossorigin="anonymous"></script>
  </head>
</html>

The browser will choose the hash that is considered to be the strongest and check the file’s hash against it.

Why is there a “crossorigin” attribute?

The crossorigin attribute tells the browser when to send the user credentials with the request for the resource. There are two options to choose from:

Value (crossorigin=) | Description
anonymous | The request will have its credentials mode set to same-origin and its mode set to cors.
use-credentials | The request will have its credentials mode set to include and its mode set to cors.

Request credentials modes mentioned

Credentials mode | Description
same-origin | Credentials will be sent with requests to same-origin domains, and credentials sent from same-origin domains will be used.
include | Credentials will be sent to cross-origin domains as well, and credentials sent from cross-origin domains will be used.

Request modes mentioned

Request mode | Description
cors | The request will be a CORS request, which requires the server to have a defined CORS policy. If not, the request will throw an error.

Why is the “crossorigin” attribute required with subresource integrity?

By default, scripts and stylesheets can be loaded cross-origin. Since subresource integrity prevents a file from loading when the hash of the loaded resource doesn’t match the expected hash, an attacker could load cross-origin resources en masse and test whether the loading fails with specific hashes, thereby inferring information about a user that they otherwise wouldn’t have access to.

When you include the crossorigin attribute, the cross-origin domain must choose to allow requests from the origin the request is being sent from in order for the request to be successful. This prevents cross-origin attacks with subresource integrity.
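For completeness, here’s a sketch of the server-side opt-in. This assumes a hypothetical Express server, but the Access-Control-Allow-Origin response header it sets is standard CORS:

const express = require('express');
const app = express();

// Without this header, a <script crossorigin="anonymous"> request
// from another origin fails the CORS check and never executes.
app.get('/file1.js', (req, res) => {
  res.set('Access-Control-Allow-Origin', 'https://your-site.example');
  res.type('application/javascript');
  res.send("alert('Hello, world!');");
});

app.listen(3000);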

Using subresource integrity with webpack

It probably sounds like a lot of work to recalculate the SRI hashes of each file every time they are updated, but luckily, there’s a way to automate it. Let’s walk through an example together. You’ll need a few things before you get started.

Node.js and npm

Node.js is a JavaScript runtime that, along with npm (its package manager), will allow us to use webpack. To install it, visit the Node.js website and choose the download that corresponds to your operating system.

Setting up the project

Create a folder and give it any name with mkdir [name of folder]. Then type cd [name of folder] to navigate into it. Now we need to set up the directory as a Node project, so type npm init. It will ask you a few questions, but you can press Enter to skip them since they’re not relevant to our example.

webpack

webpack is a library that allows you to automatically combine your files into one or more bundles. With webpack, we will no longer need to manually update the hashes. Instead, webpack will inject the resources into the HTML with the integrity and crossorigin attributes included.

Installing webpack

You’ll need to install webpack and webpack-cli:

npm i --save-dev webpack webpack-cli 

The difference between the two is that webpack contains the core functionalities whereas webpack-cli is for the command line interface.

We’ll edit our package.json to add a scripts section like so:

{
  //... rest of package.json ...,
  "scripts": {
    "dev": "webpack --mode=development"
  }
  //... rest of package.json ...,
}

This enables us to run npm run dev and build our bundle.

Setting up webpack configuration

Next, let’s set up the webpack configuration. This is necessary to tell webpack what files it needs to deal with and how.

First, we’ll need to install a few packages: html-webpack-plugin, webpack-subresource-integrity, style-loader, and css-loader:

npm i --save-dev html-webpack-plugin webpack-subresource-integrity style-loader css-loader

Package name | Description
html-webpack-plugin | Creates an HTML file that resources can be injected into
webpack-subresource-integrity | Computes and inserts subresource integrity information into resources such as <script> and <link rel=…>
style-loader | Applies the CSS styles that we import
css-loader | Enables us to import CSS files into our JavaScript

Setting up the configuration (this lives in a webpack.config.js file at the project root):

const path              = require('path'),
      HTMLWebpackPlugin = require('html-webpack-plugin'),
      SriPlugin         = require('webpack-subresource-integrity');

module.exports = {
  output: {
    // The output file's name
    filename: 'bundle.js',
    // Where the output file will be placed. Resolves to 
    // the "dist" folder in the directory of the project
    path: path.resolve(__dirname, 'dist'),
    // Configures the "crossorigin" attribute for resources 
    // with subresource integrity injected
    crossOriginLoading: 'anonymous'
  },
  // Used for configuring how various modules (files that 
  // are imported) will be treated
  module: {
    // Configures how specific module types are handled
    rules: [
      {
        // Regular expression to test for the file extension.
        // These loaders will only be activated if they match
        // this expression.
        test: /\.css$/,
        // An array of loaders that will be applied to the file
        use: ['style-loader', 'css-loader'],
        // Prevents the accidental loading of files within the
        // "node_modules" folder
        exclude: /node_modules/
      }
    ]
  },
  // webpack plugins alter the function of webpack itself
  plugins: [
    // Plugin that will inject integrity hashes into index.html
    new SriPlugin({
      // The hash functions used (e.g. 
      // <script integrity="sha256- ... sha384- ..." ...
      hashFuncNames: ['sha384']
    }),
    // Creates an HTML file along with the bundle. We will
    // inject the subresource integrity information into 
    // the resources using webpack-subresource-integrity
    new HTMLWebpackPlugin({
      // The file that will be injected into. We can use 
      // EJS templating within this file, too
      template: path.resolve(__dirname, 'src', 'index.ejs'),
      // Whether or not to insert scripts and other resources
      // into the file dynamically. For our example, we will
      // enable this.
      inject: true
    })
  ]
};

Creating the template

We need to create a template to tell webpack what to inject the bundle and subresource integrity information into. Create a folder named src and, inside it, a file named index.ejs:

<!DOCTYPE html>
<html>
  <body></body>
</html>

Now, create an index.js inside the src folder (webpack’s default entry point is ./src/index.js) with the following script. The import on the first line expects a styles.css file next to it; its contents can be anything, even empty:

// Imports the CSS stylesheet
import './styles.css'
alert('Hello, world!');

Building the bundle

Type npm run dev in the terminal. You’ll notice that a folder called dist is created, and inside of it, a file called index.html that looks something like this:

<!DOCTYPE HTML>
<html><head><script defer src="bundle.js" integrity="sha384-lb0VJ1IzJzMv+OKd0vumouFgE6NzonQeVbRaTYjum4ql38TdmOYfyJ0czw/X1a9b" crossorigin="anonymous">
</script></head>
  <body>
  </body>
</html>

The CSS will be included as part of the bundle.js file.

This will not work for files loaded from external servers, nor should it, as cross-origin files that need to constantly update would break with subresource integrity enabled.

Thanks for reading!

That’s all for this one. Subresource integrity is a simple and effective addition that ensures you’re loading only what you expect, protecting your users in the process. And remember: security is more than just one solution, so always be on the lookout for more ways to keep your website safe.


Securing Your Website With Subresource Integrity originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.

Weekly Platform News: Focus Rings, Donut Scope, More em Units, and Global Privacy Control https://css-tricks.com/weekly-platform-news-focus-rings-donut-scope-ditching-em-units-and-global-privacy-control/ Thu, 04 Mar 2021 21:33:01 +0000

In this week’s news, Chrome tackles focus rings, we learn how to get “donut” scope, Global Privacy Control gets big-name adoption, it’s time to ditch pixels in media queries, and a snippet that prevents annoying form validation styling.

Chrome will stop displaying focus rings when clicking buttons

Chrome, Edge, and other Chromium-based browsers display a focus indicator (a.k.a. focus ring) when the user clicks or taps a (styled) button. For comparison, Safari and Firefox don’t display a focus indicator when a button is clicked or tapped; they do so only when the button is focused via the keyboard.

The focus ring will stay on the button until the user clicks somewhere else on the page.

Some developers find this behavior annoying and are using various workarounds to prevent the focus ring from appearing when a button is clicked or tapped. For example, the popular what-input library continuously tracks the user’s input method (mouse, keyboard or touch), allowing the page to suppress focus rings specifically for mouse clicks.

[data-whatintent="mouse"] :focus {
  outline: none;
}

A more recent workaround was enabled by the addition of the CSS :focus-visible pseudo-class to Chromium a few months ago. In the current version of Chrome, clicking or tapping a button invokes the button’s :focus state but not its :focus-visible state. That way, the page can use a suitable selector to suppress focus rings for clicks and taps without affecting keyboard users.

:focus:not(:focus-visible) {
  outline: none;
}

Fortunately, these workarounds will soon become unnecessary. Chromium’s user agent stylesheet recently switched from :focus to :focus-visible, and as a result of this change, button clicks and taps no longer invoke focus rings. The new behavior will first ship in Chrome 90 next month.

The enhanced CSS :not() selector enables “donut scope”

I recently wrote about the A:not(B *) selector pattern that allows authors to select all A elements that are not descendants of a B element. This pattern can be expanded to A B:not(C *) to create a “donut scope.”

For example, the selector article p:not(blockquote *) matches all <p> elements that are descendants of an <article> element but not descendants of a <blockquote> element. In other words, it selects all paragraphs in an article except the ones that are in a block quotation.

The donut shape that gives this scope its name

The New York Times now honors Global Privacy Control

Announced last October, Global Privacy Control (GPC) is a new privacy signal for the web that is designed to be legally enforceable. Essentially, it’s an HTTP Sec-GPC: 1 request header that tells websites that the user does not want their personal data to be shared or sold.

The DuckDuckGo Privacy Essentials extension enables GPC by default in the browser

The New York Times has become the first major publisher to honor GPC. A number of other publishers, including The Washington Post and Automattic (WordPress.com), have committed to honoring it “this coming quarter.”

From NYT’s privacy page:

Does The Times support the Global Privacy Control (GPC)?

Yes. When we detect a GPC signal from a reader’s browser where GDPR, CCPA or a similar privacy law applies, we stop sharing the reader’s personal data online with other companies (except with our service providers).
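If you’re curious what honoring the signal can look like in client-side code, here’s a sketch. The navigator.globalPrivacyControl property comes from the GPC proposal, and browser support still varies:

// Treat a missing property as "no signal".
if (navigator.globalPrivacyControl === true) {
  // Skip initializing any data-selling or data-sharing scripts here.
  console.log('GPC signal detected: not sharing personal data.');
}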

The case for em-based media queries

Some browsers allow the user to increase the default font size in the browser’s settings. Unfortunately, this user preference has no effect on websites that set their font sizes in pixels (e.g., font-size: 20px). In part for this reason, some websites (including CSS-Tricks) instead use font-relative units, such as em and rem, which do respond to the user’s font size preference.

Ideally, a website that uses font-relative units for font-size should also use em values in media queries (e.g., min-width: 80em instead of min-width: 1280px). Otherwise, the site’s responsive layout may not always work as expected.

For example, CSS-Tricks switches from a two-column to a one-column layout on narrow viewports to prevent the article’s lines from becoming too short. However, if the user increases the default font size in the browser to 24px, the text on the page will become larger (as it should) but the page layout will not change, resulting in extremely short lines at certain viewport widths.
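The conversion itself is just a division by the browser’s default font size. Here’s a quick sketch, assuming the conventional 16px default:

// Convert a px breakpoint to em, relative to the 16px browser default.
const pxToEm = (px, base = 16) => `${px / base}em`;

console.log(pxToEm(1280)); // "80em"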

If you’d like to try out em-based media queries on your website, there is a PostCSS plugin that automatically converts min-width, max-width, min-height, and max-height media queries from px to em.

(via Nick Gard)

A new push to bring CSS :user-invalid to browsers

In 2017, Peter-Paul Koch published a series of three articles about native form validation on the web. Part 1 points out the problems with the widely supported CSS :invalid pseudo-class:

  • The validity of <input> elements is re-evaluated on every keystroke, so a form field can become :invalid while the user is still typing the value.
  • If a form field is required (<input required>), it will become :invalid immediately on page load.

Both of these behaviors are potentially confusing (and annoying), so websites cannot rely solely on the :invalid selector to indicate that a value entered by the user is not valid. However, there is the option to combine :invalid with :not(:focus) and even :not(:placeholder-shown) to ensure that the page’s “invalid” styles do not apply to the <input> until the user has finished entering the value and moved focus to another element.

The CSS Selectors module defines a :user-invalid pseudo-class that avoids the problems of :invalid by only matching an <input> “after the user has significantly interacted with it.”

Firefox already supports this functionality via the :-moz-ui-invalid pseudo-class (see it in action). Mozilla now intends to un-prefix this pseudo-class and ship it under the standard :user-invalid name. There are still no signals from other browser vendors, but the Chromium and WebKit bugs for this feature have been filed.


Weekly Platform News: Focus Rings, Donut Scope, More em Units, and Global Privacy Control originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.

Apple declined to implement 16 Web APIs in Safari due to privacy concerns https://css-tricks.com/apple-declined-to-implement-16-web-apis-in-safari-due-to-privacy-concerns/ Fri, 24 Jul 2020 22:25:33 +0000


Why? Fingerprinting. Rather than these APIs being used for what they are meant for, they end up being used for gross ad tech. As in, “hey, we don’t know exactly who you are, but wait, through a script we can tell your phone stopped being idle from 8:00 am to 8:13 am and was near the Bluetooth device JBL BATHROOM, so it’s probably dad taking his morning poop! Let’s show him some ads for nicer speakers and flannel shirts ASAP.”

I’ll pull the complete list here from Catalin Cimpanu’s article:

  • Web Bluetooth – Allows websites to connect to nearby Bluetooth LE devices.
  • Web MIDI API – Allows websites to enumerate, manipulate and access MIDI devices.
  • Magnetometer API – Allows websites to access data about the local magnetic field around a user, as detected by the device’s primary magnetometer sensor.
  • Web NFC API – Allows websites to communicate with NFC tags through a device’s NFC reader.
  • Device Memory API – Allows websites to receive the approximate amount of device memory in gigabytes.
  • Network Information API – Provides information about the connection a device is using to communicate with the network and provides a means for scripts to be notified if the connection type changes.
  • Battery Status API – Allows websites to receive information about the battery status of the hosting device.
  • Web Bluetooth Scanning – Allows websites to scan for nearby Bluetooth LE devices.
  • Ambient Light Sensor – Lets websites get the current light level or illuminance of the ambient light around the hosting device via the device’s native sensors.
  • HDCP Policy Check extension for EME – Allows websites to check for HDCP policies, used in media streaming/playback.
  • Proximity Sensor – Allows websites to retrieve data about the distance between a device and an object, as measured by a proximity sensor.
  • WebHID – Allows websites to retrieve information about locally connected Human Interface Device (HID) devices.
  • Serial API – Allows websites to write and read data from serial interfaces, used by devices such as microcontrollers, 3D printers, and others.
  • Web USB – Lets websites communicate with devices via USB (Universal Serial Bus).
  • Geolocation Sensor (background geolocation) – A more modern version of the older Geolocation API that lets websites access geolocation data.
  • User Idle Detection – Lets website know when a user is idle.

I’m of mixed feelings. I do like the idea of the web being a competitive platform for building any sort of app, and sometimes fancy APIs like this open those doors.

Not to mention that some of these APIs are designed to do responsible things, like detecting connection speeds through the Network Information API and sending less data if you can; the same goes for the Battery Status API.
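As a sketch of that kind of responsible use (navigator.connection is currently Chromium-only, so feature-detect it; loadLightweightAssets is a hypothetical helper):

const connection = navigator.connection;

if (connection && (connection.saveData || connection.effectiveType === '2g')) {
  // Swap in smaller images, skip prefetching, and so on.
  loadLightweightAssets();
}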

This is all a similar situation to :visited in CSS. Have you ever noticed how there are some CSS declarations you can’t use on visited links? JavaScript APIs will even literally lie about the current styling of visited links to make links always appear unvisited. Because fingerprinting.
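Here’s a quick sketch of that lie in action; assume the page styles a:visited differently and the link’s URL really has been visited:

// Assume CSS like: a { color: blue; } a:visited { color: red; }
const link = document.querySelector('a');

// Browsers deliberately report the unvisited style here, so scripts
// can't probe the user's browsing history.
console.log(getComputedStyle(link).color); // always the unvisited color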



Apple declined to implement 16 Web APIs in Safari due to privacy concerns originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.

HTTPS is Easy! https://css-tricks.com/https-is-easy/ Wed, 05 Feb 2020 21:19:46 +0000


I’ve been guilty of publicly bemoaning the complexity of HTTPS. In the past, I’ve purchased SSL certificates from third-party vendors and had trouble installing them. I’ve had certificates expire and had to scramble to fix them. I’ve had to poke and prod hosting companies to help me ensure things were going to renew correctly, and left unsatisfied.

But I’d say in the last few years, this has really chilled out. CSS-Tricks went HTTPS around five years ago, so we’re well past any struggles with insecure content or anything like that. These days my host (and sponsor) Flywheel makes it a flip of a switch in their admin, so it’s entirely a no-brainer. The things I have on Netlify are automatically HTTPS. I also tend to put Cloudflare in front of everything, because of all the flip-switch security and performance things they offer, mostly for free. HTTPS is just getting a lot easier.

The name of this blog post is the name of this little microsite project from Troy Hunt: HTTPS is Easy! It’s a four-part series of five-minute videos walking through the process of adding enterprise-grade security to a site quickly and for free with Cloudflare. It almost feels like an ad for Cloudflare, and I couldn’t care less if it is, but Troy says:

[…] this isn’t a commercial activity on my behalf; Cloudflare didn’t engage me to create this and it’ll come as a surprise to them the first time they see it.

It really is easy and free, so I feel like I’m doing my duty here in making up for past complaints about HTTPS.

Part of what helps me feel more confident is minimizing the number of setups I have for different things. All my WordPress sites are on Flywheel — they aren’t scattered around, so I don’t have to learn multiple systems. All my deployment is through Buddy. All my domains are on a single registrant, so what I learn in managing one domain is useful for all of them. All my sites run through Cloudflare, so I feel confident in my management skills there. This kind of consolidation is good at keeping my stress levels low.


HTTPS is Easy! originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.

“Link In Bio” is a slow knife https://css-tricks.com/link-in-bio-is-a-slow-knife/ Mon, 16 Dec 2019 21:30:47 +0000

Anil Dash:

If Instagram users could post links willy-nilly, they might even be able to connect directly to their users, getting their email addresses or finding other ways to communicate with them. Links represent a threat to closed systems.

On CodePen, we have a TextExpander snippet we use for every single Instagram post we schedule through Buffer and it expands to this:

Looking for the code? It’s open-source on CodePen, follow the link 🔗 in our profile and find the author’s Pen there.

I can’t quite explain it, but I feel taken in by this sleight of hand. My brain goes, “Oh, they just can’t allow links on Instagram posts because it would just turn into a cesspool of spam and bad behavior.” But of course, that isn’t the real reason. Instagram is already a mess of spam, and it’s fairly easy to avoid. Links wouldn’t change that. If anything, they would be a helpful honeypot for catching bad actors. Links just make it easy to leave.

Minor note about linking out: Business accounts with over 10,000 followers can add a URL as a “swipe up” gesture on Instagram Stories. Whoop-de-doo.



“Link In Bio” is a slow knife originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.

Detecting Inactive Users https://css-tricks.com/detecting-inactive-users/ Fri, 13 Dec 2019 14:47:25 +0000


Most of the time you don’t really care about whether a user is actively engaged or temporarily inactive on your application. Inactive, meaning, perhaps they got up to get a drink of water, or more likely, changed tabs to do something else for a bit. There are situations, though, when tracking the user activity and detecting inactive-ness might be handy.

Let’s think about a few examples when you just might need that functionality:

  • tracking article reading time
  • auto saving form or document
  • auto pausing game
  • hiding video player controls
  • auto logging out users for security reasons

I recently encountered a feature that involved that last example, auto logging out inactive users for security reasons.

Why should we care about auto logout?

Many applications give users access to some amount of their personal data. Depending on the purpose of the application, the amount and the value of that data may be different. It may only be user’s name, but it may also be more sensitive data, like medical records, financial records, etc.

There are chances that some users may forget to log out and leave the session open. How many times has it happened to you? Maybe your phone suddenly rang, or you needed to leave immediately, leaving the browser on. Leaving a user session open is dangerous as someone else may use that session to extract sensitive data.

One way to fight this issue involves tracking if the user has interacted with the app within a certain period of time, then trigger logout if that time is exceeded. You may want to show a popover, or perhaps a timer that warns the user that logout is about to happen. Or you may just logout immediately when inactive user is detected.

Going one level down, what we want to do is count the time that’s passed from the user’s last interaction. If that time period is longer than our threshold, we want to fire our inactivity handler. If the user performs an action before the threshold is breached, we reset the counter and start counting again.

This article will show how we can implement this kind of activity-tracking logic, based on this example.

Step 1: Implement tracking logic

Let’s implement two functions. The first will be responsible for resetting our timer each time the user interacts with the app, and the second will handle situation when the user becomes inactive:

  • resetUserActivityTimeout – This will be our method that’s responsible for clearing the existing timeout and starting a new one each time the user interacts with the application.
  • inactiveUserAction – This will be our method that is fired when the user activity timeout runs out.

const INACTIVE_USER_TIME_THRESHOLD = 10 * 60 * 1000; // example value: 10 minutes; tune to your app

let userActivityTimeout = null;

function resetUserActivityTimeout() {
  clearTimeout(userActivityTimeout);
  userActivityTimeout = setTimeout(() => {
    inactiveUserAction();
  }, INACTIVE_USER_TIME_THRESHOLD);
}

function inactiveUserAction() {
  // logout logic
}

OK, so we have methods responsible for tracking the activity but we do not use them anywhere yet.

Step 2: Tracking activation

Now we need to implement methods that are responsible for activating the tracking. In those methods, we add event listeners that will call our resetUserActivityTimeout method when the event is detected. You can listen for as many events as you want, but for simplicity, we will restrict that list to a few of the most common ones.

function activateActivityTracker() {
  window.addEventListener("mousemove", resetUserActivityTimeout);
  window.addEventListener("scroll", resetUserActivityTimeout);
  window.addEventListener("keydown", resetUserActivityTimeout);
  window.addEventListener("resize", resetUserActivityTimeout);
}

That’s it. Our user tracking is ready. The only thing we need to do is to call the activateActivityTracker on our page load.

We can leave it like this, but if you look closer, there is a serious performance issue with the code we just committed. Each time the user interacts with the app, the whole logic runs. That’s fine in principle, but some types of events fire an enormous number of times as the user interacts with the page, even when it isn’t necessary for our tracking. Consider the mousemove event: even if you move your mouse just a touch, it fires dozens of times. This is a real performance killer. We can deal with that issue by introducing a throttler that allows the user activity logic to fire only once per specified time period.

Let’s do that now.

Step 3: Improve performance

First, we need to add one more variable that will keep reference to our throttler timeout.

let userActivityThrottlerTimeout = null;

const USER_ACTIVITY_THROTTLER_TIME = 1000; // example value: reset at most once per second

Then, we create a method that will create our throttler. In that method, we check whether the throttler timeout already exists; if it doesn’t, we create one that will fire resetUserActivityTimeout after a specific period of time. During that period, further user activity will not trigger the tracking logic again. After that time, the throttler timeout is cleared, allowing the next interaction to reset the activity tracker.

function userActivityThrottler() {
  if (!userActivityThrottlerTimeout) {
    userActivityThrottlerTimeout = setTimeout(() => {
      resetUserActivityTimeout();

      clearTimeout(userActivityThrottlerTimeout);
      userActivityThrottlerTimeout = null;
    }, USER_ACTIVITY_THROTTLER_TIME);
  }
}

We just created a new method that should be fired on user interaction, so we need to remember to change the event handlers from resetUserActivityTimeout to userActivityThrottler in our activate logic.

function activateActivityTracker() {
  window.addEventListener("mousemove", userActivityThrottler);
  // ...
}

Bonus: Let’s reVue it!

Now that we have our activity tracking logic implemented, let’s see how we can move that logic to an application built with Vue. We will base the explanation on this example.

First, we need to move all variables into our component’s data; that is the place where all reactive props live.

export default {
  data() {
    return {
      isInactive: false,
      userActivityThrottlerTimeout: null,
      userActivityTimeout: null
    };
  },
// ...

Then we move all our functions to methods:

// ...
  methods: {
    activateActivityTracker() {...},
    resetUserActivityTimeout() {...},
    userActivityThrottler() {...},
    inactiveUserAction() {...}
  },
// ...

Since we are using Vue and its reactivity system, we can drop all direct DOM manipulation (i.e., document.getElementById("app").innerHTML) and depend on our isInactive data property. We can access the data property directly in our component’s template like below.

<template>
  <div id="app">
    <p>User is inactive = {{ isInactive }}</p>
  </div>
</template>

The last thing we need to do is find a proper place to activate the tracking logic. Vue comes with component lifecycle hooks which are exactly what we need — specifically the beforeMount hook. So let’s put it there.

// ...
  beforeMount() {
    this.activateActivityTracker();
  },
// ...

There is one more thing we can do. Since we are using timeouts and register event listeners on window, it is always a good practice to clean up a little bit after ourselves. We can do that in another lifecycle hook, beforeDestroy. Let’s remove all listeners that we registered and clear all timeouts when the component’s lifecycle comes to an end.

// ...
  beforeDestroy() {
    window.removeEventListener("mousemove", this.userActivityThrottler);
    window.removeEventListener("scroll", this.userActivityThrottler);
    window.removeEventListener("keydown", this.userActivityThrottler);
    window.removeEventListener("resize", this.userActivityThrottler);
  
    clearTimeout(this.userActivityTimeout);
    clearTimeout(this.userActivityThrottlerTimeout);
  }
// ...

That’s a wrap!

This example concentrates purely on detecting user interaction with the application, reacting to it, and firing a method when no interaction is detected within a specific period of time. I wanted this example to be as universal as possible, so I’m leaving the implementation of what should happen when an inactive user is detected to you.
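If you’d like a starting point anyway, here’s one possible sketch of inactiveUserAction; the /api/logout endpoint and /login page are hypothetical stand-ins for whatever your app actually uses:

function inactiveUserAction() {
  // End the session server-side, then send the user to the login page.
  fetch('/api/logout', { method: 'POST', credentials: 'include' })
    .finally(() => {
      window.location.href = '/login';
    });
}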

I hope you will find this solution useful in your project!


Detecting Inactive Users originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
