Brenton Cleeland

Website Best Practices


This post discusses practices promoted by checks in ready, a tool that I've made to help developers check for security best practices in their websites. If you're working on a team that's building something served over HTTP you should check it out!

Table of Contents

This table of contents also serves as a quick guide / checklist.

- Redirect HTTP traffic to HTTPS
- Cookies should be set securely
- HSTS headers must be returned
- You should support IPv6
- Return a decent Content-Security-Policy
- Permissions-Policy should exist if the response is HTML
- Don't allow your page to be loaded in a frame
- Referrer-Policy should be same-origin or no-referrer
- Multiple nameservers should be configured
- DNS TTL should be longer than you think
- Don't let browsers detect content-type
- Disable developer tooling in your production environment
- Maybe don't use X-XSS-Protection
- The first part of your HTML is critical
- Set up your favicon and shortcut icon
- Don't use schemaless URLs in your HTML
- Configure subresource integrity
- Don't use shared CDNs for static assets
- Disable X-DNS-Prefetch-Control
- Let your feeds be consumed by browsers
- Compress responses
- Expect-CT is enabled by default now
- Remove headers that leak information
- Use a secure SSL configuration
- CORP, COOP, COEP, huh?
- Tell browsers where to Report-To
- Don't set Expires for Documents
- Do set Cache-Control
- Configure your email securely
- Configure your email securely, even if you don't send email!
- Some well-known files you should return

Redirect HTTP traffic to HTTPS

You should redirect all HTTP requests to HTTPS using an HTTP 301 redirect. This should happen before the request reaches your application, using a reverse proxy sitting in front of it. It's safe to include the requested path in the redirect. When configured correctly this means the user ends up on the page they were expecting.
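As a minimal sketch, assuming example.com is your domain, an nginx server block that performs this redirect might look like:

server {
    listen 80;
    listen [::]:80;
    server_name example.com;

    # preserve the requested path and query string in the redirect
    return 301 https://example.com$request_uri;
}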

You can use the Mozilla SSL Configuration Generator to generate a configuration that handles this redirect in a wide variety of server software.

An alternative to redirecting HTTP to HTTPS is to configure your server not to listen for HTTP connections on port 80 at all. This is a good solution for APIs, CDNs and other URLs that a user will never enter manually into a browser.

Just a warning: I have seen some scanning software flag endpoints that return a 4xx response for HTTP requests. Those scanners are simply looking for a redirect on that HTTP connection.

Don't get fancy with other, unexpected response codes.

Cookies should be set securely

If your page requires cookies you should set them securely and ensure that Javascript on your page cannot access them. MDN's Set-Cookie page gives a good overview of the ways that server-sent cookies can be configured.

Understand the difference between SameSite=Lax and SameSite=Strict. If you can, set cookies to Strict, but be aware that this means cookies will not be sent when requests are initiated from third-party contexts (e.g. when someone links a user to your site). Lax is the default value in all modern browsers but you should set it explicitly to avoid issues with old browsers.

Use Secure in all cookies to ensure that cookies are only available in HTTPS contexts. Setting this helps to prevent accidental leakage of cookies over HTTP connections that could be vulnerable to person-in-the-middle attacks.

Use HttpOnly to prevent cookies from being accessed by Javascript in the client's browser. There are very few use cases for directly accessing the contents of a server-set cookie from Javascript.

Finally, you can use the Path attribute to further restrict cookies. This ensures that the cookie will only be sent to the parts of your application that need it. For example, if you set Path=/admin then the cookie will only be sent when requests are made to the /admin route.

Remember that cookies can be read and manipulated by the client. Even with these settings a user or a malicious browser extension/proxy can change the contents of a cookie. You should never store sensitive information in cookies.

The best practice is to use an HttpOnly cookie to store a session id that the server uses to look up session details from a data store.
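For example, a hardened session cookie (with a hypothetical session id) might be set like this:

Set-Cookie: sessionid=140a3ea8; Path=/; Secure; HttpOnly; SameSite=Strict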

Here's an example Set-Cookie header from an unauthenticated request to Github:

Set-Cookie: _octo=GH1.1.827422477.1670273756; Path=/; Domain=github.com; Expires=Tue, 05 Dec 2023 20:55:56 GMT; Secure; SameSite=Lax

You must understand the European Union's ePrivacy regulations and how those relate to your use of cookies. The regulations require you to gain consent before using any cookies that are not strictly necessary. Even for necessary cookies you should explain to users of your website why they are required and what actions they perform.

HSTS headers must be returned

The HTTP Strict Transport Security (HSTS) header tells browsers to disallow insecure requests after a successful secure request. You should return the Strict-Transport-Security header on all HTTPS requests. Browsers ignore HSTS headers sent on HTTP requests so you do not need to include this header in your HTTP → HTTPS redirect response.

Since you should not be supporting HTTP requests it's safe and recommended to set the max-age of the HSTS header to at least 1 year.

If you're comfortable that all subdomains will be served using HTTPS you should add includeSubDomains to the header. With Let's Encrypt and others providing free certificates there is really no excuse to not support HTTPS on all subdomains unless you have specific applications that cannot be configured to use HTTPS. If this is the case consider moving those applications to another domain and using includeSubDomains for your primary domain names.

Here is an example HSTS header with a one year expiry:

Strict-Transport-Security: max-age=31536000; includeSubDomains

In the wild you will see a wide range of max-age values, with some major websites going significantly beyond the 1 year recommendation. For example: Twitter, Wikipedia, some Google subdomains and Cash App all set max-age to two or more years. For reference, a value of "63072000" represents two years.

If you are confident that you will never need to support insecure requests then you can add the preload directive and follow the instructions on hstspreload.org to be added to the Chrome preload list. This is a list of sites that are hardcoded into Chrome (and other browsers) as being HTTPS only.

If you use the Mozilla SSL Configuration Generator this header will be included in your configuration.

You should support IPv6

You should add an AAAA DNS record that points to the IPv6 address of your web server. Make sure that you configure your web server to listen on IPv6.
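As with the nameserver checks below, you can use dig to confirm the record exists:

dig AAAA <domain>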

If an ISP or mobile carrier does not assign an IPv4 address to each individual customer they will often assign public IPv6 addresses. Connections to websites that only support IPv4 will be sent via an IPv4-in-IPv6 tunnel. Supporting IPv6 will give these users, which includes a large percentage of US cellphones, a better experience.

A handy tool for testing IPv6 support is screenshots.page. It allows you to request screenshots of your website from both an IPv4 and IPv6 connection. You should expect the images to be identical.

Note that ready simply checks for the existence of an AAAA record. This doesn't guarantee that your IPv6 version is working correctly.

Return a decent Content-Security-Policy

A Content Security Policy tells modern browsers where they can load resources from and make connections to. If you are serving an HTML response you must include Content-Security-Policy in your response headers or a <meta http-equiv> tag. You should favour the header unless your page is served by a hosting provider that does not allow modification of the HTTP headers.

The policy is made up of directives that, with a few exceptions, allow you to specify server origins for different resource types. Before writing a policy for your site you should read through MDN's Content Security Policy introduction.

All Content Security Policies should set the default-src directive. Ideally this will be set to 'none', with explicit definitions for all other relevant directives. To simplify your policy this might be set to 'self' to allow resources to be loaded from the same origin.

Never allow default-src to be set to a wildcard value such as https:.

To prevent cross site scripting attacks either default-src or script-src must be set.

script-src defines where Javascript can be loaded from. You should set it to a list of URIs that you know will be used to load Javascript files.

To prevent malicious Javascript connecting to unexpected servers, default-src or connect-src should be configured. If you are not expecting any HTTP requests to be initiated by your Javascript, set connect-src to 'none'. You should make sure that your Privacy Policy aligns with the places that Javascript is allowed to send data from the browser.

You must not include unsafe-inline in script-src or default-src. This allows inline Javascript to be executed and defeats many of the security mechanisms provided by the policy. This applies to all sites but is particularly important if you allow user generated content. Making sure this is not set helps prevent injection of Javascript by in-app browsers and web proxies.

You should not use unsafe-inline in style-src either, but this has fewer security implications.

In addition to the unsafe values, you should avoid values that are effectively wildcards. This includes https:, which was intended to force all assets to be loaded over HTTPS. That's nice in theory but it allows resources to be loaded from all HTTPS servers which is unlikely to be the desired effect.

Unless you are using legacy HTML embeds, object-src 'none' should be set. This disables the legacy <object> and <embed> mechanisms and closes potential security holes.

You should set frame-ancestors 'none' to prevent your page from being loaded in a frame. This is discussed a little more below.
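Putting those directives together, a minimal sketch of a policy for a site that serves all of its own assets might look like the following (you will likely need additional directives for fonts, media and other resource types):

Content-Security-Policy: default-src 'none'; script-src 'self'; style-src 'self'; img-src 'self'; connect-src 'self'; object-src 'none'; frame-ancestors 'none'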

During development and testing use the Content-Security-Policy-Report-Only header to log and report failures without actually blocking the failing directives. You can roll out policy changes all the way to production using this header.

For both the Content-Security-Policy and Content-Security-Policy-Report-Only headers you should include a report-to directive that matches a group in your Report-To header (see below).

Use the CSP Evaluator to check your policy and compare it to Github.com's policy to see a real-world example of a complex, but secure, Content-Security-Policy. The evaluator gives you advice and feedback about each of the directives in your policy.

It's also worth checking out the OWASP Content-Security-Policy Cheatsheet, which takes a more security-focussed look at this header.

Permissions-Policy should exist if the response is HTML

It's a frustrating spec but there's no reason not to support Permissions-Policy in your HTTP response. The Permissions-Policy header allows you to specify an allow list of origins for specific browser features. This lets you ensure, for example, that the microphone cannot be used by third-party Javascript or an embedded frame without your knowledge.

The specification includes an explainer document that gives a great overview.

A common way to define a Permissions-Policy is to define a list of features you know you won't use and specify an empty origin. For example, to disable use of the camera, microphone, autoplaying video and geolocation you could use:

Permissions-Policy: microphone=(),camera=(),autoplay=(),geolocation=()

New features are expected to be added in the future. To ensure backwards compatibility, a feature that is missing from the Permissions-Policy is allowed by default. Keep this in mind when you are updating your policy in the future: you need to stay somewhat up to date with new features that browsers are introducing. A full list of features supported by different browsers is maintained as part of the spec on Github.

The Permissions-Policy will never override client side security prompts for usage of certain features. Expect that browsers will continue to prompt users before allowing access to the camera, microphone, geolocation and related features.

An example of a directive added to the Permissions-Policy by a specific browser is interest-cohort. This was proposed by the Chrome team to give sites the ability to opt out of FLoC.

Don't allow your page to be loaded in a frame

Unless you have a specific use case that requires it you should never allow your HTML document to be loaded in a frame. Loading the page in a frame (or iframe) opens you up to clickjacking attacks and can give the parent document permissions on your page that you are not expecting. It can also be used to hide the true URL from end users.

There are two different settings that can be used to prevent your page being loaded in a frame:

  1. The X-Frame-Options response header
  2. The frame-ancestors directive in your Content-Security-Policy

Both of the above allow you to specify URLs that can load your page in a frame if that's required. Unless you need to support very old browsers you only need to use the frame-ancestors directive.
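For example, to block framing entirely you could return both:

X-Frame-Options: DENY
Content-Security-Policy: frame-ancestors 'none'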

Some security scanning tools will fail you for not including X-Frame-Options in your response headers. Provided you have an adequate Content Security Policy set, and you are not supporting old browsers, you should feel comfortable pushing back.

Referrer-Policy should be same-origin or no-referrer

The Referrer-Policy header tells browsers what information they should send in the Referer header when making HTTP requests that originate from your page.

Setting this to no-referrer will tell the browser to never send the Referer header. same-origin will only send the Referer header when making requests to the same origin.
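For example:

Referrer-Policy: no-referrer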

Note that some Cross Site Request Forgery (CSRF) protections require the Referer header to be set on requests. This is one way that servers can check that the request is coming from the expected page.

Note that a malicious user can set the Referer header to any value they like.

If your site does not use POST forms, use no-referrer. Otherwise, test whether that's possible and fall back to same-origin if it breaks your CSRF protections.

Multiple nameservers should be configured

When setting up your DNS records you should configure at least two, ideally four, nameservers. Most DNS providers give you the ability to host your DNS records on different servers using different domain names.

The more traffic your site has the more nameservers you should have configured. This distributes load for DNS lookups across more servers. Effectively it's a form of load balancing.

For maximum availability, an ideal setup spreads your nameservers across multiple servers, multiple top-level domains and, ideally, more than one provider.

You can use dig ns <domain> to check the nameservers for a domain. An example response from brntn.me looks like:

;; ANSWER SECTION:
brntn.me.		172800	IN	NS	ns-1391.awsdns-45.org.
brntn.me.		172800	IN	NS	ns-1725.awsdns-23.co.uk.
brntn.me.		172800	IN	NS	ns-386.awsdns-48.com.
brntn.me.		172800	IN	NS	ns-574.awsdns-07.net.

The following response for github.com includes a total of eight nameservers from two different providers:

;; ANSWER SECTION:
github.com.		900	IN	NS	dns1.p08.nsone.net.
github.com.		900	IN	NS	dns2.p08.nsone.net.
github.com.		900	IN	NS	dns3.p08.nsone.net.
github.com.		900	IN	NS	dns4.p08.nsone.net.
github.com.		900	IN	NS	ns-1283.awsdns-32.org.
github.com.		900	IN	NS	ns-1707.awsdns-21.co.uk.
github.com.		900	IN	NS	ns-421.awsdns-52.com.
github.com.		900	IN	NS	ns-520.awsdns-01.net.

DNS TTL should be longer than you think

Higher TTL values for DNS records reduce the frequency of DNS requests that clients have to make. Removing the DNS lookup from the loading time will improve performance for returning users. If it's unlikely that you will change your DNS records, consider using a long TTL (around 86400 seconds, or 24 hours).

If you set a long TTL you need to add an additional coordination step to any DNS-related changes. Start by reducing the TTL significantly, then wait for the old TTL to expire before making any updates.
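As a sketch, using hypothetical records and an address from the 192.0.2.0/24 documentation range: lower the TTL, wait out the old TTL, make the change, then restore the long TTL once you're confident:

; step 1: lower the TTL and wait for the old 86400 second TTL to expire
example.com.    300     IN  A   192.0.2.1
; step 2: make the change
example.com.    300     IN  A   192.0.2.2
; step 3: raise the TTL again
example.com.    86400   IN  A   192.0.2.2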

Don't let browsers detect content-type

Use the X-Content-Type-Options header to stop browsers auto-detecting content types by setting it to nosniff. This means that your server needs to return a valid Content-Type for HTML, image, Javascript and CSS responses.
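The header takes a single value:

X-Content-Type-Options: nosniff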

Setting this to nosniff also has a positive effect on the way Cross Origin Read Blocking (CORB) works. You can read more about that in Google's CORB explainer.

Disable developer tooling in your production environment

It is important to ensure that any developer tooling in your production environment is disabled. This will help to protect the integrity of your production environment and, in some cases, improve performance.

Check that debuggers, profilers, source maps, or any other tooling that could be used to inspect or modify the running code are disabled. Leaving these on can cause performance issues, requests timing out (e.g. with an interactive debug session) and potential security issues.

Unless you are deliberately enabling them, you should check that API discoverability tools like Swagger and GraphQL Introspection are turned off for your production environments. Don't give a potential attacker free information about how your APIs work.

If you are using a framework, consult the deployment documentation and make sure that you're following all of the production readiness steps.

You should add automated pre-deployment checks to ensure that your production configuration is correct. For a Python project that might be a simple search for ipdb; if you are using Swagger, it could be a smoke test that ensures /swagger/index.html returns a 404.
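A minimal sketch of such a check in Python, assuming a hypothetical src/ directory and production URL:

import subprocess
import sys
import urllib.error
import urllib.request

# fail the deployment if a debugger import has slipped into the source tree
result = subprocess.run(["grep", "-r", "ipdb", "src/"], capture_output=True)
if result.returncode == 0:
    sys.exit("found ipdb in the source tree")

# ensure Swagger is not discoverable in production
try:
    urllib.request.urlopen("https://example.com/swagger/index.html")
    sys.exit("/swagger/index.html should return a 404")
except urllib.error.HTTPError as error:
    assert error.code == 404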

Maybe don't use X-XSS-Protection

The X-XSS-Protection header can be used to turn on cross-site scripting attack filtering in some older browsers. Modern browsers, including Chrome and Firefox, have removed or never implemented this filtering and ignore the header.

Instead you should use a Content-Security-Policy with a script-src directive that defines where Javascript can be loaded from.

If you must support older browsers, include this header with the value 1; mode=block to enable the strictest filtering.
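In full:

X-XSS-Protection: 1; mode=block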

Some scanning software will complain when this header is missing. As long as you have an appropriate Content Security Policy, and don't support older browsers, you should feel comfortable pushing back.

The first part of your HTML is critical

The first few hundred bytes of HTML are critical to your page rendering time. Specifically, the character encoding must be set in the first 1024 bytes of your HTML document.

Your HTML response must start with a valid doctype. For almost all modern web pages this will be <!doctype html>.

Your <html> tag should include a lang attribute, indicating the language of the page. This is specified as <html lang="en"> for an English language page.

The first tag in your <head> should be <meta charset="utf-8"> (replacing utf-8 with the appropriate charset). This ensures that the browser correctly interprets your document.
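Putting those rules together, a minimal sketch of the first few lines of a document:

<!doctype html>
<html lang="en">
<head>
    <meta charset="utf-8">
    <title>Example page</title>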

Explicitly configuring these settings prevents browsers from detecting the values and helps to ensure that pages are rendered as quickly as possible. You can see the HTML5 Boilerplate documentation for more details about the way the <head> is parsed by modern browsers.

Use Google's Pagespeed Insights to test your rendering performance and see tips for improvements.

Set up your favicon and shortcut icon

Follow Audrey Feldroy's excellent Favicon Cheat Sheet and return an ICO file at /favicon.ico. In addition, you should return a higher resolution PNG or ICO version of the icon via a <link rel="shortcut icon" href=""> tag in your <head>.
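For example, with a hypothetical file name:

<link rel="shortcut icon" href="/icon-196.png" type="image/png">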

See the WHATWG HTML spec for more details on icon links: https://html.spec.whatwg.org/multipage/links.html#rel-icon

Don't use schemaless URLs in your HTML

A schemaless URL starts with // instead of https://. When used in tags like <script>, <link> or <img> this tells the browser to load the asset using the same scheme as the parent document. Since we are always loading pages over HTTPS it's best to be explicit and always use https://.
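For example, with a hypothetical asset URL:

<!-- avoid: inherits the scheme of the parent document -->
<script src="//cdn.example.com/app.js"></script>

<!-- prefer: explicit https -->
<script src="https://cdn.example.com/app.js"></script>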

Using https:// for resource references means that even a page accidentally served over HTTP will still load its assets securely. Better still, prevent the page being served over HTTP at all by redirecting HTTP traffic to HTTPS with a 301 status code, without returning the actual HTML document.

Configure subresource integrity

Subresource Integrity (SRI) prevents resources from being manipulated unexpectedly. A hash of the file is included in the integrity attribute of <script> and <link> tags. This feature is available in all modern browsers.

You must use SRI if you are loading assets from a shared CDN where you do not have full control over the file that is being served. Without SRI, changes to the files being served by the CDN will go undetected, creating a significant security risk.
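A sketch with a hypothetical CDN URL and a placeholder hash:

<!-- the integrity value is a placeholder; generate the real hash for your file -->
<script src="https://cdn.example.com/library.min.js" integrity="sha384-FILE_HASH" crossorigin="anonymous"></script>

You can generate the hash with openssl:

openssl dgst -sha384 -binary library.min.js | openssl base64 -A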

Even for files that you are serving yourself, SRI adds an extra layer of security, ensuring that static files are not manipulated in transit or on disk. Since this value only needs to be calculated at build time, it should have no performance impact.

Don't use shared CDNs for static assets

While the introduction of SRI does help make the use of CDNs for static assets safer, the introduction of HTTP cache partitioning takes away most of the benefits.

In the jQuery days the key benefit of using CDNs for static assets was that users might already have those assets in their browser cache. That meant that big Javascript files used on a large percentage of websites wouldn't need to be repeatedly downloaded. Chrome, Firefox and Safari now all use per-origin caches, meaning that each domain will have its own copy of cached assets.

As noted above, if you use a CDN for static assets you must use SRI for those resources.

Disable X-DNS-Prefetch-Control

Setting X-DNS-Prefetch-Control to off tells browsers not to make DNS requests for links in an HTML document in advance.
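That is:

X-DNS-Prefetch-Control: off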

Browsers have started making these DNS requests early in order to offer performance improvements when the links on a page are clicked. Unfortunately this can leak information about the contents of the page to the user's DNS provider.

Privacy advocates encourage servers to return this header to enhance the privacy of users. This is especially important if your site includes user-controlled content.

Since the performance improvement is negligible you should include this header and improve the privacy of your users.

In addition to providing this header, users can configure their browser to disable DNS prefetching if they are concerned about the privacy implications.

Note that this header is non-standard and is not recommended by Mozilla because of inconsistent browser implementations.

Let your feeds be consumed by browsers

RSS and JSON feeds form an integral part of the open web. You should configure feeds to return the Access-Control-Allow-Origin header to allow them to be read directly from browser-based applications. The value of the header can safely be *.
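For example:

Access-Control-Allow-Origin: *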

By default this cross origin resource sharing (CORS) setup will only allow GET requests to the feeds. You can read more about CORS in MDN's introductory document.

Compress responses

All modern servers support gzip and responses should be compressed if the Accept-Encoding header includes gzip.

HTML compresses extremely well and compressed responses can significantly improve data transfer times. For example, the HTML of my Github dashboard is 33.22kB compressed, and 145.78kB uncompressed. All up the non-image content of the site is 810kB compressed and 3.74MB expanded.

Most reverse proxies and web servers can be configured to compress the response when the browser indicates that it supports it. Most servers will cache the compressed versions, but to maximise performance some servers allow you to pre-compress files. This can be especially beneficial for static assets like Javascript and CSS files.
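A minimal nginx sketch (nginx always compresses text/html responses once gzip is on):

gzip on;
gzip_vary on;
gzip_min_length 1024;
gzip_types text/css application/javascript application/json application/rss+xml image/svg+xml;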

Expect-CT is enabled by default now

The Expect-CT header indicated to browsers that they should check certificate transparency logs for the website's certificate. This is now the default behaviour in all browsers that supported the header (Chromium-based browsers).

You should not include this header in your responses as it is either not supported or not used by browsers previously supporting it.

Some security scanning services might complain about this header if it is missing. Push back. Share the MDN link above explaining that it's the default in all the browsers that supported it.

Remove headers that leak information

Some frameworks, CDNs, servers and gateways include additional headers in the response that can leak information. Generally, you want to avoid including any non-required information in your headers. Any additional information will give an attacker more details about your infrastructure and tooling.

It's especially important to remove any references to specific version numbers. Version numbers can be directly linked to known vulnerabilities, making an attacker's job much easier.

Audit your HTTP response headers by making requests to your site with a tool like Postman or curl. Look out for headers like: Server, X-Server, Via, X-Powered-By, X-AspNet-Version and Served-By.
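For example, using a hypothetical domain:

curl -sI https://example.com | grep -iE 'server|via|powered-by|aspnet|served-by'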

Use a secure SSL configuration

You should only support TLSv1.2 and TLSv1.3 for connections to your server. Older SSL/TLS protocols have known vulnerabilities that make them inappropriate for modern web applications. The only relevant browsers that do not support this configuration are Internet Explorer 8-10, which you would likely prefer not to support anyway.

Use secure 2048 or 3072 bit RSA keys for your SSL certificates. If you use certbot or similar this will be handled for you.

DNS CAA should be configured to ensure that only Certificate Authorities you expect can issue certificates for your domain. The policy should disallow wildcard certificates. This website uses DNS CAA to allow both ZeroSSL and Let's Encrypt to issue certificates using this configuration:

brntn.me.    IN    CAA    0 issue "letsencrypt.org"
brntn.me.    IN    CAA    0 issue "sectigo.com"
brntn.me.    IN    CAA    0 issuewild ";"

Additionally, those CAA records can include accounturi and validationmethods directives to further restrict how ACME clients can issue certificates.

Use the Mozilla SSL Configuration Generator to create your server configurations and the Qualys SSL Server Test to test your configuration. You should be aiming for an A or A+ on the SSL Server Test.

See SSL Labs' SSL and TLS Deployment Best Practices guide for a complete list of practices surrounding HTTPS and certificates.

CORP, COOP, COEP, huh?

Cross Origin Resource Policy (CORP), Cross Origin Opener Policy (COOP) and Cross Origin Embedder Policy (COEP) are part of a set of new HTTP security headers introduced in 2020 / 2021. These headers are supported by all modern browsers and help to prevent both side-channel attacks and cross site scripting attacks.

Scott Helme has the definitive guide on how to use these headers: https://scotthelme.co.uk/coop-and-coep/
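As a sketch, a strict configuration for a standalone site might look like the following, but read the guide above before enabling these, particularly COEP, as they change how cross-origin resources load:

Cross-Origin-Resource-Policy: same-origin
Cross-Origin-Opener-Policy: same-origin
Cross-Origin-Embedder-Policy: require-corp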

Tell browsers where to Report-To

The Report-To header tells browsers where they should report issues. This is a relatively new header but is supported by all modern browsers for reporting Content Security Policy issues.

Here is an example Report-To header from brntn.me:

Report-To: {"group":"default","max_age":31536000,"endpoints":[{"url":"https://brntn.report-uri.com/a/d/g"}],"include_subdomains":true}

Here the endpoint is powered by report-uri.com, but it could be a path on your domain configured to handle these reports. The group is the name of the reporting endpoint that can be referenced in other headers.
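A Content Security Policy can then reference that group by name in its report-to directive:

Content-Security-Policy: default-src 'self'; report-to default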

When using Report-To you should make sure that your Content Security Policy does not include the report-sample keyword. This tells browsers to share a sample of the script that violated the policy with your reporting provider. Sending user scripts to your server invites potential privacy issues, especially where those scripts are being used for accessibility purposes.

Don't set Expires for Documents

For dynamic content you should prefer setting the Cache-Control header over an Expires header. Expires allows you to specify a time after which a document should no longer be considered valid. Setting Expires to an invalid date (i.e. "0") tells a browser or proxy not to cache the document.

The problem with the Expires header is that it relies on the server and client times being accurate. This isn't always going to be reliable and can result in some unexpected behaviour.

A use case where Expires does make sense is for static assets with expiry dates set in the distant future.

Do set Cache-Control

The Cache-Control header gives you more control over how and where a document is cached. Most responses should set a max-age.

HTML documents should use a low max-age value (less than 24 hours). Static assets should support cache-busting and use a large max-age (over 30 days).
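For example, the first header might suit an HTML document and the second a cache-busted static asset (values are illustrative):

Cache-Control: private, max-age=3600
Cache-Control: public, max-age=2592000, immutable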

Use the private directive to indicate to caches that the response is intended for a single user and should only be stored in the user's browser, not in shared caches.

Mark Nottingham provides an in-depth guide to caching which all web developers should read.

Configure your email securely

If you send email from your domain you need to ensure that you have SPF, DMARC and DKIM set up. Email administrators should look into MTA-STS and TLS-RPT to ensure that your email is sent via TLS.

A Sender Policy Framework (SPF) DNS record tells mail servers which servers are allowed to send mail for your domain. This is checked by servers receiving email from your domain. SPF records were previously implemented as a dedicated SPF DNS record type; this has been replaced with a simple TXT record. Your SPF record should include -all to explicitly deny email that is sent from a server that isn't in the allow list.

You should configure your SPF record based on the recommendations of your email provider(s).

You should not return an SPF DNS record, instead favouring the TXT record.

Domain Keys Identified Mail (DKIM) allows a domain owner to cryptographically sign parts of a message to ensure that it hasn't been tampered with. Each sending server for your domain will have its own DKIM DNS record configured. Those records contain a public key which corresponds with the private key used by that sender to sign mail; a reference to the key is included in the email headers. Like the SPF records, you should configure DKIM based on the recommendations of your email provider(s).

The Domain-based Message Authentication, Reporting and Conformance (DMARC) DNS record tells servers what to do with mail that doesn't meet the SPF or DKIM policy. You should use p=reject to block mail that fails, and you should include rua=mailto:<email> to ensure that you receive aggregated notifications of failures. The DMARC record is served as a TXT DNS record on the "_dmarc." subdomain. For example, email sent from "brntn.me" will use the "_dmarc.brntn.me" record.
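As a sketch, an SPF record with a hypothetical provider include, and a matching DMARC record:

example.com.        IN  TXT  "v=spf1 include:_spf.mailprovider.example -all"
_dmarc.example.com. IN  TXT  "v=DMARC1; p=reject; rua=mailto:dmarc-reports@example.com"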

Two guides that are useful to refer to when implementing MX records:

Configure your email securely, even if you don't send email!

If you don't use your domain for sending email you should make sure that mail servers know that. Gov.uk has a detailed guide about how to implement this to help block spoofed emails.

Use a "null MX record" and a deny-all SPF record to prevent email being sent from your domain by others.

The null MX record has a priority of "0" and a value of ".". Most good DNS providers will let you configure this.

An SPF record should be created with the value v=spf1 -all. This tells mail servers that you do not have any sending servers and to deny all email coming from this domain.

If you want notifications for attempts to send email from your domains you can configure a DMARC record with p=reject.
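Putting that together for a hypothetical domain that sends no email:

example.com.        IN  MX   0 .
example.com.        IN  TXT  "v=spf1 -all"
_dmarc.example.com. IN  TXT  "v=DMARC1; p=reject"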

Some well-known files you should return

There are two simple text files that all non-trivial websites should return.

The classic is /robots.txt for telling bots what they can and can't access. There's no rule that states that bots must follow your guidelines here but most well-behaved bots do. Google has a good guide on how to create a robots.txt.

Two quick things to note on your robots.txt:

  1. It's supposed to be accessed by bots. Make sure your firewall isn't blocking access by those user agents.
  2. It can give an attacker information about your site. Avoid using exact paths for things like admin interfaces; instead use wildcards, or follow Google's advice about using a noindex meta tag on those pages (see the sketch below).
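A minimal sketch of a robots.txt using a wildcard rather than an exact admin path (the paths and URL are hypothetical):

User-agent: *
Disallow: /*admin*
Sitemap: https://example.com/sitemap.xml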

You should include details on how to contact your security team in a file at /.well-known/security.txt. The proposed standard for this file is available. At a minimum a contact email address and expiry date should be included.
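A minimal sketch of a security.txt, with a hypothetical contact address:

Contact: mailto:security@example.com
Expires: 2026-12-31T23:59:59.000Z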
