Fwd: [outages] Open Resolver Project
There seem to be a non-trivial number of open resolvers in the few NZ ASes I just checked. It would be good if there were a concerted effort to shut them down, or otherwise neuter them so that they can't be effective amplifiers for DDoS attacks. This is last decade's open mail relay problem, except with less spam, more packet love.

Begin forwarded message:
From: Jared Mauch
Subject: [outages] Open Resolver Project
Date: 7 May 2013 10:52:53 EDT
To: "outages(a)outages.org"
X-Mailer: Apple Mail (2.1503)

The Open Resolver Project made a mistake in the automation process of the weekly data import. If you're unaware of what it is, I suggest looking at the website - openresolverproject.org
If you operate a DNS server, or are a network operator, you should check your IP space for Open Resolvers. These are used in DDoS attacks and pose a significant risk to the global network.
The website has been restored to operation, but please take a moment and check your IPv4 space.
If you want a report based on your ASN, please e-mail dns-scan(a)puck.nether.net from a corporate e-mail address or something matching the RIR database entry.
Thanks,
- jared

_______________________________________________
Outages mailing list
Outages(a)outages.org
https://puck.nether.net/mailman/listinfo/outages
Hi all
Joe Abley's email inspired me to take some action on this. For anyone who
doesn't know me I work at Inspire Net. Here's a brief description of what
we've done so far and what we intend to do.
We contacted the Open Resolver Project and arranged a report of the open
DNS resolvers under our ASN (17705). This is a report that changes each
week, so the first job was to script retrieving this weekly, matching IPs
against customers, and emailing it somewhere useful (a ticketing system).
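The matching step is a small script. Here's a rough stdlib-only Python sketch of the matching part; the report and customer-list formats below are invented for illustration, and the real Open Resolver Project report format may differ:

```python
# Match open-resolver IPs from a weekly report against customer
# assignments. Formats are hypothetical: the report is one IP per
# line, and customers is a list of (name, CIDR prefix) pairs.
import ipaddress

def match_resolvers(report_ips, customers):
    """Return {customer_name: [ips]} for report IPs inside each prefix."""
    nets = [(name, ipaddress.ip_network(cidr)) for name, cidr in customers]
    hits = {}
    for ip_str in report_ips:
        ip = ipaddress.ip_address(ip_str.strip())
        for name, net in nets:
            if ip in net:
                hits.setdefault(name, []).append(ip_str.strip())
                break
    return hits

report = ["192.0.2.7", "198.51.100.20", "203.0.113.9"]
custs = [("alice", "192.0.2.0/28"), ("bob", "198.51.100.0/24")]
print(match_resolvers(report, custs))
# → {'alice': ['192.0.2.7'], 'bob': ['198.51.100.20']}
```

From there, formatting the hits into an email to the ticketing system is routine.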
We then tackled some low hanging fruit. We found two categories of these:
1) A NetComm router (the NB604N) that is in common use on ADSL connections in
NZ. We supplied this, so we took responsibility for it. The particular
firmware that seems to be vulnerable is GAN5.CZ56T-B-NC.NZ-R4B031.EN. Newer
versions are fine; older versions are fine. It seems this version in
particular enables the DNS resolver by default but doesn't activate the
firewall for it. Our helpdesk resolved all of these by arranging a
firmware upgrade.
2) Being a smaller ISP we recognised many people on our processed list as
being a) people we know and b) having some technical clue. These people
have been contacted and encouraged to lock things down. We've supplied
handy links like Spamhaus's report on their big DDoS, Team Cymru's
educational video (http://www.youtube.com/watch?v=XhSTlqYIQnI) and Team
Cymru's guide to fixing the issue. The majority of these customers have
promptly fixed the issue.
With a bit more investigation it seems that the remaining open resolvers
fall into two categories:
1) Bad CPEs. CPEs that run an open resolver by default, open to the world.
TP-Link seems to be the biggest issue here. We didn't supply these, but
intend to give customers a bit of help fixing this.
2) Customers for one reason or another opening their DNS server to the
world (pinhole or firewall entry) AND not configuring an ACL for it. To me,
the practice of voluntarily opening a service to the world that one doesn't
understand is questionable, but the number of clued people that seem to do
this is non-trivial.
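Checking whether a given host is an open resolver amounts to sending it a recursive query from off-net and seeing whether it answers with RA (recursion available) set. A rough stdlib-only Python sketch of that kind of probe - hand-rolled wire format, illustrative only; the Open Resolver Project's actual probe may differ:

```python
# Probe an IP for open recursion: send a recursive DNS query from
# "outside" and see whether the target answers with RA set.
import socket
import struct

def build_query(qname, qtype=1, qid=0x1234):
    """DNS query with RD set: 12-byte header plus one question."""
    header = struct.pack("!HHHHHH", qid, 0x0100, 1, 0, 0, 0)  # RD=1, QDCOUNT=1
    question = b"".join(
        bytes([len(label)]) + label.encode() for label in qname.split(".")
    ) + b"\x00" + struct.pack("!HH", qtype, 1)  # QTYPE=A, QCLASS=IN
    return header + question

def is_open_resolver(ip, timeout=2.0):
    """True if ip answers a recursive query with RA (recursion available)."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    try:
        sock.sendto(build_query("example.com"), (ip, 53))
        data, _ = sock.recvfrom(4096)
    except socket.timeout:
        return False
    finally:
        sock.close()
    flags = struct.unpack("!H", data[2:4])[0]
    return bool(flags & 0x0080)  # RA bit of the header flags
```

Obviously only probe space you're responsible for.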
A key is having a nice, organised person to drive this along; that isn't me
:)
Finally, some rough stats so far.
-Somewhere between 1 and 2% of our customers have this issue. I was
surprised it was so high!
-By tackling my "low hanging fruit" we resolved approx. 15% of the open
resolvers. This was minimal effort.
-At our intended rate of contact, it will take 12 weeks for us to let all of
the affected customers know they have this issue and offer advice on it.
And as a multi-choice question to the industry...
What should we do about the customers who don't fix this issue within a
reasonable time-frame once we've told them about it?
1) Do nothing
2) Contact them again
3) Block international port 53 requests going to them at our border routers
(can be done with minimal effort and load on the routers in question - I'm
quite against this though)
Any questions let me know. Will aim to update again in the future.
Cheers
Dave
On Wed, May 8, 2013 at 3:18 AM, Joe Abley wrote:
There seem to be a non-trivial number of open resolvers in the few NZ ASes I just checked. It would be good if there was a concerted effort into shutting them down, or otherwise neutering them so that they can't be effective amplifiers for DDoS attacks.
This is last decade's open mail relay problem, except with less spam, more packet love.
_______________________________________________
NZNOG mailing list
NZNOG(a)list.waikato.ac.nz
http://list.waikato.ac.nz/mailman/listinfo/nznog
On 11/06/2013, at 11:02 AM, Dave Mill wrote:
1) Do nothing
Does not solve problem
2) Contact them again
They don't care, it's working just fine for them
3) Block international port 53 requests going to them at our border routers (can be done with minimal effort and load on the routers in question - I'm quite against this though)
Will make your phone ring and damage your reputation. You are between a rock and a hard place :-(

How about capturing outbound name resolution traffic and answering with an IP address that resolves to a page providing information on how they can make their Intarweb work again? Unless you reward bad CPE behavior with something that impacts the user experience, I don't think you will make any headway with this class of customer.

regards
Peter Mott
LocalCloud Limited
Business Critical Application Hosting
+64 21 279 4995 -/-
On 11/06/2013, at 1:11 PM, David Robinson wrote:
On 11 June 2013 11:14, Peter Mott wrote:
2) Contact them again
They don't care, it's working just fine for them
Won't the customers care when they get all the overage, or slow down when they blow their cap from being used in a DDoS?
Sure, but it still won't be their fault :-(

regards
Peter Mott
LocalCloud Limited
Business Critical Application Hosting
+64 21 279 4995 -/-
It's a bit of a worry that people can be fined $1000 for downloading a
Hannah Montana album, but we can't do anything about people facilitating
DDoS attacks.
On 11/06/2013 1:14 p.m., Sam Russell wrote:
It's a bit of a worry that people can be fined $1000 for downloading a hannah montana album, but we can't do anything about people facilitating DDoS attacks
Ah uh... careful what you wish for, etc.

Hei konā mai,
-- Juha Saarinen AITTP
Twitter: juhasaarinen
http://juha.saarinen.org
On 11/06/2013, at 1:11 PM, David Robinson wrote:
I'm a networking prostitute so can't name names[1], but where I'm spending some of my time currently, we have an interesting issue where an older CPE we have out with some customers is half participating in a DDoS attack. Half?:

1) A query comes in to the CPE for ANY? isc.org.
2) The CPE asks our recursive nameservers (both of them, for some reason) for the same.
3) Our nameservers send a bunch of packets (about 3.5KB worth, from memory) to the CPE (remember, 2x because it asks both our nameservers).
4) The CPE seems to check whether it should be answering queries received on the WAN interface, and doesn't respond. It also appears not to cache the answer, so next time (1) happens the whole cycle goes again.

So, the CPE doesn't participate in the attack and return results to the (presumably) spoofed source, but it uses up bandwidth.

The other bit we're not sure about is how these CPEs are found - if they never respond, scanning for the CPE shouldn't help. The current theory is that someone is scanning for a unique hostname and monitoring queries, but that has yet to be proven - the plan is to put a CPE online, mirror packets, and investigate.

Something worth noting that I haven't seen mentioned in this thread so far (I skim-read it) - most of these open-recursor attacks, that I've seen, are for ANY? isc.org - I assume because isc.org have a pretty large zone. As a first step you might want to block those queries at your border, if you have the facility to do so.

As for our recursive nameservers, we've got about 3 different sets of IP addresses, for various legacy reasons. All of these are being hit with a large number of queries (that are, as far as we can tell, legitimate) from people outside our network who are using our resolvers for what looks like a number of different reasons. Some of the resolvers have been on these addresses for over 10 years, so it's not surprising.
There's going to be quite a challenge to lock those open resolvers down, and we're debating how to do it at the moment - the industry comms process will be interesting, I'm sure, and I'm sure many people on this list will have a busy day fixing up old boxes that can't when our messages have been ignored :-)

Would be interested in any experience people have with something similar..

-- Nathan Ward

[1] unless we're drinking beer.
As for our recursive nameservers, we've got about 3 different sets of IP addresses, for various legacy reasons. All of these are being hit with a large number of queries (that are as far as we can tell, legitimate) from people outside our network who are using our resolvers for what looks like a number of different reasons. Some of the resolvers have been on these addresses for over 10 years, so it's not surprising.
There's going to be quite a challenge to lock those open resolvers down, and we're debating how to do it at the moment - the industry comms process will be interesting, I'm sure, and I'm sure many people on this list will have a busy day fixing up old boxes that can't when our messages have been ignored :-)
Would be interested in any experience people have with something similar..
In the past I've split off legacy IPs on resolvers to a different server and installed a completely open BIND resolver on it. Log IPs and contact people who are under your control (on your network, I guess).

Then hack BIND to return one IP address as an answer to any standard query. We just did A and MX. That IP points to a server under your control. Install Apache, postfix, courier-pop3d, etc. on there and serve various types of bogus data telling people what to do.

It worked well for me. YMMV. I suppose in your case you might need to somehow redirect DNS requests that originate off-net to this other nameserver at your borders, or configure this DNS server to handle off-net requests a bit differently. From memory, BIND will support that.

Also, I can't recall if it's been mentioned here before, but we used a pretty simple approach to split recursive from authoritative nameservers without breaking any customer DNS. It worked well for us. If anyone wants details, feel free to ask. It truly didn't seem that hard.

Dave
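As an aside: the patched-BIND trick Dave describes can now be had without patching, via BIND's response-policy zones (RPZ, available from BIND 9.8). A minimal sketch under those assumptions - zone names and the walled-garden address 192.0.2.10 are placeholders, not anything from the thread:

```
// named.conf on the walled-garden resolver - a sketch using RPZ
// rather than a patched BIND. 192.0.2.10 stands in for the server
// hosting the "here's how to fix your resolver" pages.
options {
    recursion yes;
    response-policy { zone "walledgarden"; };
};
zone "walledgarden" {
    type master;
    file "db.walledgarden";
};

; db.walledgarden - the apex wildcard matches every query name,
; rewriting all A answers to the walled-garden host.
$TTL 60
@   SOA localhost. root.localhost. ( 1 3600 600 86400 60 )
    NS  localhost.
*   A   192.0.2.10
```

The same rewrite-everything caveats discussed later in the thread (AAAA, CNAME, DNSSEC) apply to this approach too.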
On Jun 11, 2013, at 1:31 PM, Dave Mill wrote:
There's going to be quite a challenge to lock those open resolvers down, and we're debating how to do it at the moment
ACLs.
DNS queries from the outside shouldn't be allowed to hit recursors, which should be functionally separated from authoritative servers:
https://www.box.com/s/72bccbac1636714eb611
-----------------------------------------------------------------------
Roland Dobbins
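For a BIND recursor, the ACL separation Roland points at is a few lines of named.conf. A minimal sketch, with RFC 5737 documentation prefixes standing in for real customer ranges:

```
// named.conf - restrict recursion to your own customers.
// Prefixes below are placeholders; substitute your assigned space.
acl "customers" {
    192.0.2.0/24;
    198.51.100.0/24;
};
options {
    recursion yes;
    allow-recursion   { customers; localnets; };
    allow-query-cache { customers; localnets; };
};
```

Authoritative service then lives on separate servers (or views) with recursion disabled entirely.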
On 11/06/2013, at 6:31 PM, Dave Mill wrote:
In the past I've split off legacy IPs on resolvers to a different server and installed a completely open Bind resolver on it. Log IPs and contact people who are under your control (on your network I guess).
Then hack bind to return one IP address as an answer to any standard query. We just did A and MX. That IP points to a server under your control. Install Apache, postfix, courier-pop3d, etc on there and serve various types of bogus data telling people what to do.
Yeah, tricks like this are fun to do, too :-)

I've wondered also about only spoofing replies for, say, Google for a month or so, before shutting it off entirely. Also, such a thing should (I think) only return A records where a real A record already exists - maybe a patch for BIND or Unbound is needed to do this. Maybe you only spoof A records, and leave CNAME etc. untouched.

What do you do about DNSSEC?
It worked well for me. YMMV. I suppose in your case you might need to somehow redirect DNS requests that originate off-net to this other nameserver at your borders or configure this DNS server to handle off-net requests a bit differently. From memory bind will support that.
Did you have any customers who had multiple Internet connections that had problems? One example I thought of that might be tricky is a friend I have that, for various silly reasons, has two ADSL lines from two different providers, one on a wired ethernet and one on a wireless ethernet. If you receive (via DHCP) a DNS server over the wireless network, are modern operating systems intelligent enough to only send queries to that DNS server out that interface?

I've seen weird things with a particularly annoying VPN client that sometimes leaks DNS queries out a default route, instead of over the VPN..

I'm going to have to do some testing on this, but if someone has already compiled some, I'd vend beer in their direction.

-- Nathan Ward
On 11/06/13 20:41, Nathan Ward wrote:
On 11/06/2013, at 6:31 PM, Dave Mill wrote:
In the past I've split off legacy IPs on resolvers to a different server and installed a completely open Bind resolver on it. Log IPs and contact people who are under your control (on your network I guess).
Then hack bind to return one IP address as an answer to any standard query. We just did A and MX. That IP points to a server under your control. Install Apache, postfix, courier-pop3d, etc on there and serve various types of bogus data telling people what to do.
Yeah, tricks like this are fun to do, too :-)
I've wondered also about only spoofing replies for say, google for a month or so, before shutting it off entirely.
Also, such a thing should (I think) only return A records where a real A record already exists - maybe a patch for bind or unbound is needed to do this..
Please, please avoid doing this at all costs... I've seen those "clever tricks" before and they cause more breakage than desired. Especially those deploying v6 networks see these tricks as a pain, because A records are rewritten but AAAA records are not.
Maybe you only spoof A records, and leave CNAME etc. untouched.
What do you do about DNSSEC?
Break it?
-- Sebastian Castro DNS Specialist .nz Registry Services (New Zealand Domain Name Registry Limited) desk: +64 4 495 2337 mobile: +64 21 400535
On 11/06/2013, at 9:13 PM, Sebastian Castro wrote:
On 11/06/13 20:41, Nathan Ward wrote:
On 11/06/2013, at 6:31 PM, Dave Mill wrote:
Then hack bind to return one IP address as an answer to any standard query. We just did A and MX. That IP points to a server under your control. Install Apache, postfix, courier-pop3d, etc on there and serve various types of bogus data telling people what to do.
Yeah, tricks like this are fun to do, too :-)
I've wondered also about only spoofing replies for say, google for a month or so, before shutting it off entirely.
Also, such a thing should (I think) only return A records where a real A record already exists - maybe a patch for bind or unbound is needed to do this..
Please please avoid to do this at all costs... I've seen those "clever tricks" before and they cause more breakage than desired. Specially those deploying v6 networks see those tricks as a pain, because A records are rewritten but not AAAA records
Do you mean the bit I suggested re. serving only if a record of that type exists, or do you mean spoofing stuff entirely? If you mean the latter, and the choice is to cut the user off entirely or serve them a bunch of banners saying "don't do that, we already told you", I think I'd prefer the latter. Open to opposing views and alternative friendly ways to manage it, other than a simple cut-off.
Maybe you only spoof A records, and leave CNAME etc. untouched.
What do you do about DNSSEC?
Break it?
How many end hosts does that likely impact, in today's world? Do many end hosts care about DNSSEC, or is it just nameservers at ISPs, some businesses, and nerdy households so far?

Is there a way to test, if you're a service provider? I'm not sure the usual JavaScript checks would work well, unless you also provide a large amount of the end users' content. I wonder if the numbers are much different if you're talking about hosts configured with recursive name servers on a different network.

-- Nathan Ward
On 11/06/13 21:26, Nathan Ward wrote:
On 11/06/2013, at 9:13 PM, Sebastian Castro wrote:
On 11/06/13 20:41, Nathan Ward wrote:
On 11/06/2013, at 6:31 PM, Dave Mill wrote:
Then hack bind to return one IP address as an answer to any standard query. We just did A and MX. That IP points to a server under your control. Install Apache, postfix, courier-pop3d, etc on there and serve various types of bogus data telling people what to do.
Yeah, tricks like this are fun to do, too :-)
I've wondered also about only spoofing replies for say, google for a month or so, before shutting it off entirely.
Also, such a thing should (I think) only return A records where a real A record already exists - maybe a patch for bind or unbound is needed to do this..
Please please avoid to do this at all costs... I've seen those "clever tricks" before and they cause more breakage than desired. Specially those deploying v6 networks see those tricks as a pain, because A records are rewritten but not AAAA records
Do you mean the bit I suggested re. serving only if a record of that type exists, or do you mean spoofing stuff entirely?
I understood complete spoofing.
If you mean the latter, and the choice is cut the user off entirely, or server them a bunch of banners saying "don't do that, we already told you", I think I'd prefer the latter. Open to opposing views and alternative friendly ways to manage it, other than simple cut off.
I understand the problem you are trying to solve, and probably if I were in your shoes, I would attempt something similar. But given that I'm on the other side of the road, serving authoritative data, I don't want someone to tamper with my data in transit.
Maybe you only spoof A records, and leave CNAME etc. untouched.
What do you do about DNSSEC?
Break it?
How many end hosts does that likely impact, in todays world? Do many end hosts care about DNSSEC, or is it just nameservers at ISPs, some businesses, and nerdy households so far? Is there a way to test, if you're a service provider? I'm not sure the usual javascript checks would work well, unless you also provide a large amount of the end users' content.
You just touched a very sensitive nerve. One of the missing parts of the DNSSEC ecosystem is how to guarantee the validating resolver is playing nice. Because the current protocol only provides a one-bit flag to signal whether the data was authentic, a non-cooperative resolver can simply lie and tell you... "yeah, this data is valid, see the flag?"

In practice, very few applications use that flag or do anything useful with the signatures, so to answer your question: it will likely go unnoticed.

If you want to test whether someone is using your caching resolver as a forwarder, you can inspect incoming DNS queries to see if the DO (DNSSEC OK) bit is on, or if they query for DNSKEY/DS records. A "normal" end host won't do that. If you are running BIND, you can activate the query log and interpret the flags: http://jpmens.net/2011/02/22/bind-querylog-know-your-flags/
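The DO-bit check Sebastian describes can also be done on raw captures without BIND's query log. In EDNS, DO is the top bit of the "flags" half of the OPT record's TTL field; a rough stdlib-only Python sketch of spotting it in a wire-format query (assumes well-formed packets - not production-grade parsing):

```python
# Detect the DNSSEC OK (DO) bit in a wire-format DNS query.
import struct

def _skip_name(data, off):
    """Skip a (possibly compressed) domain name, return new offset."""
    while True:
        length = data[off]
        if length == 0:
            return off + 1
        if length & 0xC0 == 0xC0:      # compression pointer: 2 bytes
            return off + 2
        off += 1 + length

def query_has_do_bit(data):
    """True if the message carries an OPT record with DO set."""
    qd, an, ns, ar = struct.unpack("!HHHH", data[4:12])
    off = 12
    for _ in range(qd):                # skip the question section
        off = _skip_name(data, off) + 4
    for _ in range(an + ns + ar):      # walk the remaining records
        off = _skip_name(data, off)
        rtype, _, ttl, rdlen = struct.unpack("!HHIH", data[off:off + 10])
        if rtype == 41:                # OPT pseudo-RR
            return bool(ttl & 0x8000)  # DO = bit 15 of the TTL field
        off += 10 + rdlen
    return False
```

Point it at UDP/53 payloads from a capture and count DO-setting clients; those are the forwarder-style users, not "normal" end hosts.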
Cheers, -- Sebastian Castro DNS Specialist .nz Registry Services (New Zealand Domain Name Registry Limited) desk: +64 4 495 2337 mobile: +64 21 400535
On 2013-06-11 22:40 , Sebastian Castro wrote:
You just touched a very sensitive nerve. One of the missing parts of the DNSSEC ecosystem is how to guarantee the validating resolver is playing nice. Because the current protocol only provides a one-bit flag to signal if the data was authentic or not, a non-cooperative resolver can simply lie and tell you... "yeah, this data is valid, see the flag?"
If you don't have external reasons to fully trust the validating resolver (ie, it might lie to you, or the responses might be altered in between the validating resolver and the application), _and_ you don't check the trust chain from known-good root information, then yes, something in the path could lie to you.

There isn't any obvious solution to that which doesn't involve checking a chain of trust in-application. The validating resolver providing more than a one-bit flag stating it has checked it, and it's "all good guv", doesn't help: it can still lie, no matter how you dress that up. Without an external reference, you have the self-signed-certificate problem: in order to trust it, you have to trust it.[0]

Basically, if your "validating resolver" is outside the host you are running on, you're at most crossing your fingers and hoping for the best if you delegate verifying DNSSEC to something else. If the resolver you're trusting is "outside your AS" (eg, upstream provider), you'll need a bunch more fingers to cross (there's a lot of untrusted network in between).[1]

Either you really care that the DNSSEC is valid, in which case you validate it yourself (in-application), from the root, and do something meaningful in the case of (apparent) subterfuge. Or you accept that what you have is at best "a (tiny) bit better than what you had before" and life goes on. External validating resolvers are, at best, a stepping stone on the path towards full DNSSEC deployment. A necessary step in practice, I think. But they can lie to you too. All you've done is moved your "point of total faith" a bit closer to you.

Ewen

PS: In a fully-checking DNSSEC world these DNS tricks (forged data) may well end up being equivalent to "not answering". Which is also an option today. It's just likely to cause more helpdesk calls stating "the Internet is broken". As much as I too dislike forged data, if the choice is between "no answer, it's broken" and "forged data, point at a message saying it's broken", I'd probably lean towards forged data too.

[0] Having, eg, an SSL key for the validating resolver doesn't stop it lying -- it just helps you avoid listening to others lie to you: you gain some certainty on where the lies are coming from.... which helps with attribution, but not with certainty that you're not being lied to.

[1] Maybe you're even crossing your fingers if the validating resolver is on the same host; but if the intruder is in your host, affecting what the validating resolver says, you probably have bigger problems.
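The "one-bit flag" here is literally one bit: AD (authenticated data) is bit 5 of the DNS header flags word, and whoever builds the response chooses its value. A tiny Python illustration of why the flag alone proves nothing:

```python
# The AD bit is just one bit in the 16-bit DNS header flags field.
# Any resolver (or middlebox) on the path can set it, which is the
# point above: "validated" is only a claim unless you verify the
# signature chain yourself.

AD = 0x0020  # bit 5 of the flags field

def claims_validated(flags):
    """What a naive client would conclude from the AD bit."""
    return bool(flags & AD)

honest = 0x8180          # a typical answer: QR=1, RD=1, RA=1, AD=0
forged = honest | AD     # same answer, AD "validated" bit flipped on

print(claims_validated(honest))  # False - no validation claimed
print(claims_validated(forged))  # True - "trust me", says the resolver
```

Nothing in the packet distinguishes a resolver that validated from one that simply set the bit; that trust has to come from outside the protocol.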
On 11/06/13 23:26, Ewen McNeill wrote:
On 2013-06-11 22:40 , Sebastian Castro wrote:
You just touched a very sensitive nerve. One of the missing parts of the DNSSEC ecosystem is how to guarantee the validating resolver is playing nice. Because the current protocol only provides a one-bit flag to signal if the data was authentic or not, a non-cooperative resolver can simply lie and tell you... "yeah, this data is valid, see the flag?"
Thanks, Ewen, for this; you are covering three interesting problems:
1) How do you establish and keep trust in your resolver?
2) How do you know your resolver is not lying to you?
3) How do you ensure traffic between you and your resolver is not tampered with?
If you don't have external reasons to fully trust the validating resolver (ie, it might lie to you, or the responses might be altered in between the validating resolver and the application), _and_ you don't check the trust chain from known good root information, then yes, something in the path could lie to you.
There isn't any obvious solution to that which doesn't involve checking a chain of trust in-application. The validating resolver providing more than a one bit flag stating it has checked it, and it's "all good guv", doesn't help: it can still lie, no matter how you dress that up. Without an external reference, you have the self-signed-certificate problem: in order to trust it, you have to trust it.[0]
Basically if your "validating resolver" is outside the host you are running on, you're at most crossing your fingers and hoping for the best if you delegate verifying DNSSEC to something else. If the resolver you're trusting is "outside your AS" (eg, upstream provider), you'll need a bunch more fingers to cross (there's a lot of untrusted network in between).[1]
This is an example of issue 3 noted above. For this, OpenDNS, for example, proposed a solution called DNSCrypt (http://www.opendns.com/technology/dnscrypt/).

Running a validating resolver locally has been the trend lately, but it's a practice limited to hosts with some computing power; it doesn't sound like a good idea on a smartphone, for example, because it will increase power consumption.

On the matter of locally-running resolvers, NLnetLabs (the authors of Unbound) have a project called dnssec-trigger to do validation on your host: http://www.nlnetlabs.nl/projects/dnssec-trigger/
Either you really care that the DNSSEC is valid, in which case you validate it yourself (in application), from the root. And do something meaningful in the case of (apparent) subterfuge. Or you accept that what you have is at best "a (tiny) bit better than what you had before" and life goes on. External validating resolvers are, at best, a stepping stone on the path towards full DNSSEC deployment. A necessary step in practice, I think. But they can lie to you too. All you've done is moved your "point of total faith" a bit closer to you.
Effectively we are making little gains at every step, but we are still not totally there with solutions for all the issues. There's a lot of work to do :)
-- Sebastian Castro DNS Specialist .nz Registry Services (New Zealand Domain Name Registry Limited) desk: +64 4 495 2337 mobile: +64 21 400535
On Jun 11, 2013, at 12:53 PM, Nathan Ward wrote:
Something worth noting that I haven't seen mentioned in this thread so far (I skim read it) - most of these open recursor attacks, that I've seen, are for ANY? isc.org - I assume because isc.org have a pretty large zone.
Some, not 'most'.
You might want to as a first step block those queries at your border, if you have the facility to do so.
Not optimal - it breaks qmail, for example. Blocking ANY queries for specific domains is sometimes the best thing to do tactically during an attack, but it shouldn't be enacted as policy. And it's important to block undesirable traffic before it reaches the servers.
Instead, installing RRL and utilizing other DNS defensive mechanisms makes more sense.
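RRL (Response Rate Limiting) was a patch to BIND at the time of this thread and is built in from BIND 9.9.4 on. A minimal sketch of enabling it on an authoritative server - the numbers are illustrative starting points, not recommendations:

```
// named.conf (options) - response rate limiting on an authoritative
// server. Tune the values for your own query mix.
options {
    rate-limit {
        responses-per-second 5;  // identical responses per client netblock
        window 5;                // seconds over which rates are measured
        slip 2;                  // every 2nd dropped response sent truncated,
                                 // so legit clients retry over TCP
    };
};
```

The slip/truncate behaviour is what keeps RRL from simply denying service to clients whose addresses are being spoofed.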
Also, here's an example of the sort of logical functional separation which should feature in DNS architectures:
https://www.box.com/s/72bccbac1636714eb611
-----------------------------------------------------------------------
Roland Dobbins
On 11/06/13 5:53 PM, Nathan Ward wrote:
There's going to be quite a challenge to lock those open resolvers down, and we're debating how to do it at the moment - the industry comms process will be interesting, I'm sure, and I'm sure many people on this list will have a busy day fixing up old boxes that can't when our messages have been ignored:-)
Would be interested in any experience people have with something similar..
A year or two ago we restricted access to a previously open resolver that had been in circulation for about a decade. Although we tried quite hard to communicate with everyone, it resulted in a busy day and some grumpy people. I don't think there's any real way around this problem.

I was about to write that I didn't think anyone should be surprised today to have open resolvers disappear, but I just received a fairly indignant email from a current customer in response to our request for them to disable an open resolver, so, well, um... I sent them Dave's links - thanks Dave :^)

Cheers,
Gerard
On Jun 11, 2013, at 3:21 PM, Gerard Creamer wrote:
but just received a fairly indignant email from a current customer in response to our request for them to disable an open resolver, so, well, um... I sent them Dave's links - thanks Dave :^)
If customers refuse to cooperate after being provided with educational links and explanations, referring them to your closest competitor would be a good strategy.
;>
-----------------------------------------------------------------------
Roland Dobbins
On 11/06/13 17:53, Nathan Ward wrote:
On 11/06/2013, at 1:11 PM, David Robinson
wrote: Something worth noting that I haven't seen mentioned in this thread so far (I skim read it) - most of these open recursor attacks, that I've seen, are for ANY? isc.org - I assume because isc.org have a pretty large zone. You might want to as a first step block those queries at your border, if you have the facility to do so.
Actually
As for our recursive nameservers, we've got about 3 different sets of IP addresses, for various legacy reasons. All of these are being hit with a large number of queries (that are as far as we can tell, legitimate) from people outside our network who are using our resolvers for what looks like a number of different reasons. Some of the resolvers have been on these addresses for over 10 years, so it's not surprising.
There's going to be quite a challenge to lock those open resolvers down, and we're debating how to do it at the moment - the industry comms process will be interesting, I'm sure, and I'm sure many people on this list will have a busy day fixing up old boxes that can't when our messages have been ignored :-)
Would be interested in any experience people have with something similar..
-- Nathan Ward
[1] unless we're drinking beer.
_______________________________________________ NZNOG mailing list NZNOG(a)list.waikato.ac.nz http://list.waikato.ac.nz/mailman/listinfo/nznog
-- Sebastian Castro DNS Specialist .nz Registry Services (New Zealand Domain Name Registry Limited) desk: +64 4 495 2337 mobile: +64 21 400535
On 11/06/2013, at 9:06 PM, Sebastian Castro
On 11/06/13 17:53, Nathan Ward wrote:
On 11/06/2013, at 1:11 PM, David Robinson
wrote: Something worth noting that I haven't seen mentioned in this thread so far (I skim read it) - most of these open recursor attacks, that I've seen, are for ANY? isc.org - I assume because isc.org have a pretty large zone. You might want to as a first step block those queries at your border, if you have the facility to do so.
Actually
is being used too, but less frequently. And for everyone's benefit, isc.org is not used because the zone is large, but because the response is large: a bunch of different records under the same label, isc.org.
Sorry, yes, zone is the wrong word, I meant label :-)
The response for
is 3335 bytes, compared with for example which is 1847
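Those response sizes translate directly into amplification. A back-of-the-envelope sketch - the 64-byte default query size is an assumption (IP and UDP headers plus an EDNS0 ANY query), so substitute your own measurements:

```python
def amplification_factor(response_bytes, query_bytes=64):
    """Rough wire-level gain: bytes the victim receives per byte the
    attacker spends. query_bytes=64 is an assumed on-the-wire size for
    an EDNS0 ANY query including IP and UDP headers."""
    return response_bytes / query_bytes

for label, size in [("3335-byte response", 3335),
                    ("1847-byte response", 1847)]:
    print(f"{label}: ~{amplification_factor(size):.0f}x amplification")
```

With those figures, the 3335-byte response comes out around a 52x gain, which is why these responses are so attractive to attackers.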
On 11/06/13 21:12, Nathan Ward wrote:
On 11/06/2013, at 9:06 PM, Sebastian Castro
wrote: On 11/06/13 17:53, Nathan Ward wrote:
On 11/06/2013, at 1:11 PM, David Robinson
wrote: Something worth noting that I haven't seen mentioned in this thread so far (I skim read it) - most of these open recursor attacks, that I've seen, are for ANY? isc.org - I assume because isc.org have a pretty large zone. You might want to as a first step block those queries at your border, if you have the facility to do so.
Actually
is being used too, but less frequently. And for everyone's benefit, isc.org is not used because the zone is large, but because the response is large: a bunch of different records under the same label, isc.org. Sorry, yes, zone is the wrong word, I meant label :-)
The response for
is 3335 bytes, compared with for example which is 1847
is about 2500, for reference. Is that the preferred annotation for a query?
I've used that notation for quite some time, but I'm not certain it is a documented convention.
-- Nathan Ward
-- Sebastian Castro DNS Specialist .nz Registry Services (New Zealand Domain Name Registry Limited) desk: +64 4 495 2337 mobile: +64 21 400535
On 11/06/2013, at 11:02 AM, Dave Mill
... Finally, some rough stats so far.
- Somewhere between 1 - 2% of our customers have this issue. This being so high surprised me!
- By tackling my "low hanging fruit" we resolved approx. 15% of the open resolvers. This was minimal effort.
- At our aimed rate of contact it will take 12 weeks for us to let all of the customers know they have this issue and offer advice on it.
Do you have any initial figures for the cleanup rate on your not-so-low hanging fruit?
What should we do about the customers who don't fix this issue within a reasonable time-frame once we've told them about it?
1) Do nothing
2) Contact them again
3) Block international port 53 requests going to them at our border routers (can be done with minimal effort and load on the routers in question - I'm quite against this though)
Do you have enough monitoring to be able to spot when a customer's open resolver is being used for a DDoS? If so, you can warn them that if they get pulled into a DDoS attack you will disconnect them until they fix their resolver. Maybe you could tell them that even if you don't have enough monitoring.

Cheers, Lloyd
Replies below. On Tue, Jun 11, 2013 at 12:13 PM, Lloyd Parkes < lloyd(a)must-have-coffee.gen.nz> wrote:
On 11/06/2013, at 11:02 AM, Dave Mill
wrote: ...
Finally, some rough stats so far.
- Somewhere between 1 - 2% of our customers have this issue. This being so high surprised me!
- By tackling my "low hanging fruit" we resolved approx. 15% of the open resolvers. This was minimal effort.
- At our aimed rate of contact it will take 12 weeks for us to let all of the customers know they have this issue and offer advice on it.
Do you have any initial figures for the cleanup rate on your not-so-low hanging fruit?
That part is only being kicked off this week. Might have some better stats in, say, 4 weeks' time (allowing 2 weeks for customers to fix issues).
What should we do about the customers who don't fix this issue within a reasonable time-frame once we've told them about it?
1) Do nothing
2) Contact them again
3) Block international port 53 requests going to them at our border routers (can be done with minimal effort and load on the routers in question - I'm quite against this though)
Do you have enough monitoring to be able to spot when a customer's open resolver is being used for a DDOS? If so, you can warn them that if they get pulled in to a DDOS attack you will disconnect them until they fix their resolver. Maybe you could tell them that even if you don't have enough monitoring.
We've looked into our sflow logs on a few customers that we know a few things about. We can see customers of ours being used in what we believe is an amplification attack (many connections from one (presumably spoofed) IP, random src port, dst port 53). We've also looked at the logs of customers of ours who are the target of a large amplification attack - that's pretty scary to see, to say the least.

From what little I've looked at, I'm heading towards the conclusion that the majority of our customers with open DNS resolvers either have been used in DDoSes already or will be in the future. If the Open Resolver Project can compile lists of open DNS resolvers then it's pretty trivial for 'hackers' to come up with the same lists.

As an incentive to have customers fix this we're trying to use the "large amounts of unwanted traffic" reason where possible. With a well-configured, "nice" botnet the traffic levels seem small, but when a botnet goes a bit haywire traffic levels can be very high.

Note, my option 3) is to block port 53 traffic to just these "bad" customers. I'm not in any way talking of blocking all inbound DNS traffic internationally. I still do not like option 3). Peter's options from an earlier email at first glance do seem sane - though still a high impact on customer satisfaction.

Cheers Dave
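The sflow signature described above - many queries to UDP/53 on one customer IP from a single remote address across many source ports - is easy to flag mechanically. A minimal sketch over exported flow tuples; field layout, function name and thresholds are illustrative, not from any particular collector:

```python
from collections import defaultdict

def reflection_suspects(flows, min_queries=100, min_src_ports=50):
    """Flag (resolver_ip, remote_ip) pairs that look like reflection abuse.

    flows: iterable of (src_ip, src_port, dst_ip, dst_port) tuples as
    exported from sflow/netflow. The pattern from the thread: lots of
    queries to UDP/53 on one resolver, all from a single (presumably
    spoofed) source IP, spread across many source ports.
    """
    counts = defaultdict(int)
    ports = defaultdict(set)
    for src_ip, src_port, dst_ip, dst_port in flows:
        if dst_port != 53:
            continue
        counts[(dst_ip, src_ip)] += 1
        ports[(dst_ip, src_ip)].add(src_port)
    return [pair for pair, n in counts.items()
            if n >= min_queries and len(ports[pair]) >= min_src_ports]
```

In practice you'd feed this from your flow collector on a rolling window and raise a ticket per flagged customer IP; the thresholds need tuning against your own baseline traffic.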
On Jun 11, 2013, at 6:02 AM, Dave Mill wrote:
What should we do about the customers who don't fix this issue within a reasonable time-frame once we've told them about it?
Presumably, your AUP prohibits running servers on broadband connections. If so, enforce that provision.
If it doesn't, update the AUP and then enforce it.
If you don't want to prohibit servers in general, then update your AUP to prohibit running open DNS recursors and open mail relays, and then enforce those provisions.
-----------------------------------------------------------------------
Roland Dobbins
On 11/06/13 14:13, Dobbins, Roland wrote:
On Jun 11, 2013, at 6:02 AM, Dave Mill wrote:
What should we do about the customers who don't fix this issue within a reasonable time-frame once we've told them about it?

Presumably, your AUP prohibits running servers on broadband connections. If so, enforce that provision.
If it doesn't, update the AUP and then enforce it.
If you don't want to prohibit servers in general, then update your AUP to prohibit running open DNS recursors and open mail relays, and then enforce those provisions.
Once you've established that a customer's insecure machine is being used in an abusive or illegal fashion, rights-to-terminate are not in question. It's whether the business wants to wear the loss of revenue associated with this, when the impact from a non-technical perspective is minimal-to-nil... enforcing disconnection clauses is easy when the abuse is obvious or deliberate. It's fractionally harder to do in other cases.

My comment to Dave would be roughly similar to this, though: you're protecting the reputation of your brand, your operational costs (bandwidth utilisation, etc.) and preventing illegal behavior / collateral damage to victims - and cost blowouts for the customer. Perhaps the threat of disconnection alone will convince your customer that it is in fact a matter to take seriously?

I for one hope that there aren't any active NZ ISPs who would take the 'it's not a big enough problem to justify cutting them off' line, but the cynic in me says there's gotta be at least a couple :(

Cheers Mark
Mark makes a good point - this may scare some people away, but it could be
a good PR opportunity (in the good sense) - showing that you're being
responsible, that you're willing to actually talk geeky stuff with your
customers - as opposed to the big "have you tried turning it off and on
again" ISPs
On Tue, Jun 11, 2013 at 2:39 PM, Mark Foster
On 11/06/13 14:13, Dobbins, Roland wrote:
On Jun 11, 2013, at 6:02 AM, Dave Mill wrote:
What should we do about the customers who don't fix this issue within a reasonable time-frame once we've told them about it?

Presumably, your AUP prohibits running servers on broadband connections. If so, enforce that provision.
If it doesn't, update the AUP and then enforce it.
If you don't want to prohibit servers in general, then update your AUP to prohibit running open DNS recursors and open mail relays, and then enforce those provisions.
Once you've established that a customer's insecure machine is being used in an abusive or illegal fashion, rights-to-terminate are not in question. It's whether the business wants to wear the loss of revenue associated with this, when the impact from a non-technical perspective is minimal-to-nil... enforcing disconnection clauses is easy when the abuse is obvious or deliberate. It's fractionally harder to do in other cases.

My comment to Dave would be roughly similar to this, though: you're protecting the reputation of your brand, your operational costs (bandwidth utilisation, etc.) and preventing illegal behavior / collateral damage to victims - and cost blowouts for the customer. Perhaps the threat of disconnection alone will convince your customer that it is in fact a matter to take seriously?
I for one hope that there aren't any active NZ ISPs who would take the 'it's not a big enough problem to justify cutting them off' line, but the cynic in me says there's gotta be at least a couple :(
Cheers Mark
On Jun 11, 2013, at 9:53 AM, Sam Russell wrote:
Mark makes a good point - this may scare some people away,
ISPs generally don't want customers who actually cost them money, so scaring those types of customers away is a desirable outcome, IMHO.
;>
-----------------------------------------------------------------------
Roland Dobbins
I've been away from my desk this afternoon and just thought I'd reply to
this whole thread.
I think the cynic in Mark is right. I can't really see us disconnecting a
customer for running an open resolver. We'd be more inclined to try and
help them fix the issue and then potentially block their port 53 traffic
internationally. It's just the way we operate in general.
Our AUP doesn't forbid customers running servers, and I don't think that should change.
I think the biggest scare tactic is the traffic overage. We've seen along
the lines of 25GBytes in 10 mins when a botnet went "a bit wrong".
Obviously not all that traffic made it to the customer involved...
Dave
On Tue, Jun 11, 2013 at 3:02 PM, Dobbins, Roland
On Jun 11, 2013, at 9:53 AM, Sam Russell wrote:
Mark makes a good point - this may scare some people away,
ISPs generally don't want customers who actually cost them money, so scaring those types of customers away is a desirable outcome, IMHO.
;>
----------------------------------------------------------------------- Roland Dobbins
// http://www.arbornetworks.com Luck is the residue of opportunity and design.
-- John Milton
participants (14)
- Dave Mill
- Dave Mill
- David Robinson
- Dobbins, Roland
- Ewen McNeill
- Gerard Creamer
- Joe Abley
- Juha Saarinen
- Lloyd Parkes
- Mark Foster
- Nathan Ward
- Peter Mott
- Sam Russell
- Sebastian Castro