Since the last secure version of TLS is 1.2, it would be easy to believe that TLS 1.3 is only a simple evolution. In fact it is a real breakthrough, and it would have been preferable to call it TLS 2.0, but naming is a complicated business (see the video at the end, at 38:54).
This version brings several notable changes:
- Better speed during negotiation: the handshake now takes a single round trip between client and server (against two previously), measured in "RTT" (Round Trip Time). If the client has already connected to the server, an optimization named "0-RTT" (zero round trip) even allows it to resume a past connection without any new round trip;
- Removal of all the weak or risky cipher suites still supported by TLS 1.2. With TLS 1.3, you are obliged to use strong algorithms, whether for encryption, hashing or block cipher modes;
- No more static keys in RSA and Diffie-Hellman key exchanges. Forward secrecy is now mandatory: session keys are ephemeral and change across exchanges, so it is no longer possible to record the traffic, recover the key later and decrypt the captured traffic afterwards;
- Cryptographic alternatives to the NIST and NSA recommendations, providing more confidence. This is a consequence of Dual_EC_DRBG, a pseudo-random number generator compromised by the NSA, standardized in FIPS 140-2 and widely deployed (see "Security NSA and PRNG", "FUN NSA backdoor in OpenSSL never worked (FIPS 140-2)", "Crypto NIST removes Dual EC DRBG (NSA) from its guide", "Security Dual EC DRBG all history / NSA"). The Curve25519 elliptic curve is supported and offers a free alternative to the NIST and NSA curves;
- Similarly, the free symmetric cipher ChaCha20 and the asymmetric signature scheme EdDSA are supported, providing alternatives to the NIST and NSA algorithms;
- Mandatory authentication of encrypted messages, with in particular two modes: GCM (Galois/Counter Mode) and CCM (Counter with CBC-MAC). For details, I refer you to the Wikipedia diagram, which is rather well done: https://en.wikipedia.org/wiki/Galois/Counter_Mode ;
- And many other adjustments: exchange optimizations, reduction of the amount of data exchanged in clear...
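To see a bit of this from code, here is a minimal sketch using Python's standard ssl module: a client context that refuses anything older than TLS 1.3 (the helper name is mine, not part of the library):

```python
import ssl

def make_tls13_context() -> ssl.SSLContext:
    """Client context that refuses anything older than TLS 1.3.

    With this minimum version set, only the TLS 1.3 cipher suites
    (all of them AEAD, forward-secret key exchange) can be negotiated.
    """
    ctx = ssl.create_default_context()
    ctx.minimum_version = ssl.TLSVersion.TLSv1_3
    return ctx
```

Wrapping a socket with `ctx.wrap_socket(sock, server_hostname=host)` against a TLS 1.3 server then yields `"TLSv1.3"` from `tls.version()` and one of the 1.3 suites, such as TLS_AES_256_GCM_SHA384, from `tls.cipher()`.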
Another long-debated difference is the possibility of intercepting flows by decrypting them. This is quite possible with TLS 1.3, and a protocol dedicated to it has even been added: eTLS (Enterprise TLS), sometimes mockingly called "TLS interception for big losers".
https://www.etsi.org/deliver/etsi_ts/103500_103599/10352303/01.01.01_60/ts_10352303v010101p.pdf#page=8
This protocol, or rather this option of TLS 1.3, uses among other things a static Diffie-Hellman key and allows a third party to retrieve the encrypted traffic along with a copy of this key. Put simply, it disables forward secrecy. Put even more simply: it's poop 💩😋.
If you want to do clean, normal, environment-friendly and human-friendly SSL/TLS interception, just do what you did before: pass all traffic through a proxy with a certificate authority that signs certificates on the fly (all proxies know how to do this, be it Bluecoat, Ironport or Zscaler) and deploy the public part of this certificate authority as a trusted root authority in the certificate store of your workstations, servers (which should not be accessing the Internet directly anyway), your smartphones...
Here is a documentation from Symantec on "ethical" interception 😇: https://www.symantec.com/content/dam/symantec/docs/other-resources/responsibly-intercepting-tls-and-the-impact-of-tls-1.3-en.pdf
On the other hand, you won't be able to put an IDS/IPS on your Internet-facing infrastructure with traffic replication (TAP) to decrypt flows without sitting inline (except by using eTLS, but I'll spare you that monstrosity). Frankly, the interest of an IDS/IPS in this case seems very limited to me if you follow good practices (updates, segmentation, audits...) and if you have, for example, a WAF or equivalent terminating the encryption (or if it is terminated upstream, as with a CDN).
TLS 1.3 is therefore a very good protocol, but it still has two weaknesses:
- To connect to a service, you have to resolve its domain name, which is done with the DNS protocol, which is not encrypted (no, DNSSEC does not encrypt DNS; it only guarantees that the integrity of the response has not been altered);
- The domain name we are trying to reach, located in the SNI field of TLS, is not encrypted, because it is present in the first client request, before the establishment of an encrypted channel.
This information alone (the domain name) is enough to spy on a WiFi network or at a state scale, and to censor. Fortunately, ESNI solves this problem; I will detail it below.
Trusted Recursive Resolver / TRR
Before talking about DNS over HTTPS, we first need to introduce a simple notion: trusted DNS resolvers (lying resolvers are unfortunately frequent, even without talking about hacking). Basically, several browser vendors have partnered with companies like CloudFlare to create domain name resolution services guaranteed not to modify the answers. The browser, which previously used the DNS server configured in the operating system, can thus bypass it and directly query these trusted DNS resolution services.
This is simply a whitelist of trusted servers that act as a relay for DNS queries. They then relay the DNS request to the appropriate party.
In fact... there are two 😉: https://mozilla.cloudflare-dns.com/dns-query and https://dns.google.com/experimental (https://wiki.mozilla.org/Trusted_Recursive_Resolver).
I'll quickly pass over the fact that these trusted servers allow (partial) geolocation, which is useful for CDNs; ideally, the server closest to the user is the one used (classic CDN-style operation).
DNS over HTTPS / DoH
This protocol, described in RFC 8484, requires support for HTTP/2 and its streams in order to avoid losing too much response time.
It is an encapsulation of DNS in HTTP over TLS. The content of a classic DNS request (in wire format) is thus sent over HTTP, encoded in base64url for GET requests and sent unencoded in the body for POST requests.
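As a sketch of the GET variant, using only the standard library: the DNS query in wire format, base64url-encoded with the padding stripped, placed in the `dns` parameter (the function name is mine):

```python
import base64
import struct

def doh_get_url(server: str, name: str, qtype: int = 1) -> str:
    """Build a DoH GET URL (RFC 8484): the DNS query in wire format,
    base64url-encoded with padding stripped, in the "dns" parameter."""
    # DNS header: ID 0 (recommended for DoH cache friendliness),
    # RD flag set, one question, no answer/authority/additional records.
    header = struct.pack("!HHHHHH", 0, 0x0100, 1, 0, 0, 0)
    # QNAME: length-prefixed labels, terminated by a zero byte.
    qname = b"".join(bytes([len(label)]) + label.encode("ascii")
                     for label in name.split(".")) + b"\x00"
    question = qname + struct.pack("!HH", qtype, 1)  # QTYPE (1 = A), QCLASS IN
    payload = base64.urlsafe_b64encode(header + question).rstrip(b"=").decode("ascii")
    return f"{server}?dns={payload}"
```

For www.example.com this produces exactly the GET example given in RFC 8484 (`dns=AAABAAABAAAAAAAAA3d3dwdleGFtcGxlA2NvbQAAAQAB`).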
Here is a tool in Perl (sorry) doing this type of request: https://github.com/bagder/dns2doh
Otherwise, there is curl (in a recent version), where --doh-url sets the DoH server used to resolve the URL being fetched:
~# curl --doh-url https://dns-server.example.com https://example.com
You will tell me that, in order to perform this domain name resolution over HTTPS, you must first perform a classic unencrypted DNS query to obtain the IP address of the TRR server, which sounds like a chicken-and-egg problem. But in the end only the resolver's own name is resolved in clear, which leaks nothing about your real DNS queries. For a perfect solution, you would have to hard-code the IP addresses of the servers, which seems unfeasible.
Encrypted Server Name Indication / ESNI
To every problem its solution: it is once again a TLS extension that solves the problem of the domain name passing in clear text when connecting to a service: Encrypted Server Name Indication.
The host or company wishing to use ESNI must publish a DNS record containing a data structure that includes, among other things, a public key. From this public key, a symmetric key is derived and used to encrypt the domain name in the request.
Note that this potential future standard is still in draft form: https://datatracker.ietf.org/doc/draft-ietf-tls-esni/?include_text=1
For example, here is the DNS record for CloudFlare (the base64-encoded blob is the data structure):
~# dig TXT _esni.cloudflare.com +short
"/wH7nPYtACQAHQAgGFV9e448B0Nkg0dLwKX3cMwHMcJ4PX29THIg/kguXXEAAhMBAQAAAAAXWlIAAAAAABdcTEAAAA="
As for the details, I found few pieces of source code detailing how to parse the structure; here is an example in Python: https://gist.githubusercontent.com/mosajjal/c088d03225287115a2e1fffef82ed25b/raw/fc37b51ac4067975a1c7e70dc0fb61a5781b078b/esni_creator.py
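As a small illustration (my own sketch, not the gist above), the two leading fields of the draft ESNIKeys structure, a 2-byte version followed by a 4-byte checksum, can be read straight out of the record returned by dig:

```python
import base64
import struct

# The record published by CloudFlare, as returned by the dig command above.
RECORD = ("/wH7nPYtACQAHQAgGFV9e448B0Nkg0dLwKX3cMwHMcJ4PX29THIg/kguXXEA"
          "AhMBAQAAAAAXWlIAAAAAABdcTEAAAA=")

def esni_version_and_checksum(record_b64: str):
    """Read the two leading fields of the draft ESNIKeys structure:
    a 2-byte version, then a 4-byte checksum (per the ESNI draft)."""
    stripped = record_b64.rstrip("=")
    # Normalize the padding: the published blob's padding is inconsistent.
    raw = base64.b64decode(stripped + "=" * (-len(stripped) % 4))
    (version,) = struct.unpack("!H", raw[:2])
    return version, raw[2:6]
```

Running it on the record gives version 0xff01, the draft version identifier, followed by the checksum bytes; the public key and cipher suites come later in the structure.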
Since the goal is to hide the name of the visited site, it is strongly recommended to use one key for many services rather than one per service. As you can see, this feature is especially useful for, and advocated by, large hosting companies and CDNs like CloudFlare. Here is an article from CloudFlare on the subject: https://blog.cloudflare.com/esni/
TRR, DoH, ESNI... All this greatly complicates the flow of connecting to a website and relies on a handful of actors, but fortunately it is still possible to work with the old model 😀.
0-RTT and packet replay
Due to the optimization of the TLS exchange, it is possible to replay the first TLS packet sent, provided that the attacker is able to intercept the traffic (WiFi...):
- On the client side, the browser will see a network error, transparent to the user because the browser handles it and replays the request;
- On the server side, this specific request will be seen twice.
In fact, it is possible to replay any TLS packet: https://vnhacker.blogspot.com/2015/12/bad-life-advice-never-give-up-replay.html
The risks are limited because exploitable cases are very rare, and most web applications add unique, non-replayable identifiers to sensitive requests such as transfers or payments.
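As a sketch of that last idea (not any particular vendor's implementation), a server can reject replays by tracking the unique identifiers it has already accepted:

```python
class ReplayGuard:
    """Toy server-side replay protection: each sensitive request carries a
    unique identifier, and any identifier seen before is rejected."""

    def __init__(self):
        self._seen = set()

    def accept(self, request_id: str) -> bool:
        """Return True the first time an identifier is seen, False on replay."""
        if request_id in self._seen:
            return False
        self._seen.add(request_id)
        return True
```

A real deployment would also expire old identifiers and share the set across servers, but the principle is the same: a replayed 0-RTT request arrives with an identifier that has already been consumed.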
As the risk is not zero, some CDNs like CloudFlare only answer certain 0-RTT requests, such as GETs without parameters, and add a specific HTTP header, "Cf-0rtt-Unique: <unique value tied to the session key and the TLS negotiation>". For all the other packets, on the other hand, nothing 😱.