Web Servers: Dual-homing or DNAT/Port Forwarding?
Never do 1 or 3. When your server gets hacked (which is what you should always assume), your internal LAN also gets hacked, bypassing your internal firewall.

You need a firewall in front of your server, which you are doing, and another firewall between the server and your internal network, with no bypassing of any sort. So you get two internal networks: DMZ and Internal.

You can use a second NIC for admin, but it should be in the same subnet as the first and just give you a dedicated admin/backup NIC so as not to impact web traffic.

Avoid DNAT unless you have a good reason to use it (for example, keeping internal numbering intact when you move providers).

For c), what about DNS?
Hi All,
I'm curious to know which of the following methods is more widely used/accepted today for publishing web servers to the Internet.
1) Dual-home the server - place one NIC on the internet and a second NIC on an internal network for administration, or
2) DNAT/Port Forward my external IP to my internal IP
3) Both - Dual home the server onto two private subnets (external/internal) and DNAT/Port Forward the public IP to the external subnet IP
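Mechanically, option 2 just splices an externally visible address/port onto an internal one. A toy user-space sketch of the idea in Python (illustration only — a real DNAT/port-forward is done by the kernel or firewall, not by a proxy process):

```python
import socket
import threading

def pipe(src, dst):
    # Copy bytes from src to dst until src signals EOF, then propagate
    # the EOF downstream so the far end sees the close.
    try:
        while True:
            data = src.recv(4096)
            if not data:
                break
            dst.sendall(data)
    finally:
        try:
            dst.shutdown(socket.SHUT_WR)
        except OSError:
            pass

def forward_one(listener, internal_addr):
    # Accept a single "external" client and splice it to the internal
    # server, the way a DNAT rule maps external ip:port -> internal ip:port.
    client, _ = listener.accept()
    upstream = socket.create_connection(internal_addr)
    t = threading.Thread(target=pipe, args=(upstream, client))
    t.start()
    pipe(client, upstream)
    t.join()
```

The names and structure here are invented for illustration; the point is only that the client never learns the internal address, which is both the appeal and (for logging and reporting) the drawback of option 2.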
In either case:
a) I will be hiding behind a dedicated firewall appliance and not relying on the OS firewalls
b) the internal network will still be in its own subnet, firewalled away from the rest of the network
c) Only HTTP/HTTPS will be permitted from the internet; no RDP, SSH, etc.
d) I will be deploying IPv6 to this machine in the next 12 months, which makes option 1 more attractive
I personally like option 1, but I'm looking to see if there's any facepalm reason I shouldn't do it this way.
Happy holidays!
--
Thanks
Christoph

--
Jean-Francois Pirus | Technical Manager
francois(a)clearfield.com | Mob +64 21 640 779 | DDI +64 9 282 3401
Clearfield Software Ltd | Ph +64 9 358 2081 | www.clearfield.com
On 9/12/2013, at 8:57 pm, Jean-Francois Pirus wrote:
Never do 1 or 3. When your server gets hacked (which is what you should always assume), your internal LAN also gets hacked, bypassing your internal firewall.
You need a firewall in front of your server, which you are doing, and another firewall between the server and your internal network, with no bypassing of any sort. So you get two internal networks: DMZ and Internal.
That’s true, but not true for a couple of reasons. Firstly, the options don’t specify where the firewalls are, so you can have as many firewalls as you like with all three options. Secondly, the argument that a server can get hacked is a slippery slope argument that logically leads to placing hardware firewalls between every single host, because once a host is hacked, any adjacent hosts are now vulnerable according to that argument.

The next thing to remember is that firewalls don’t stop your hosts from getting broken into. That must be the case if you are assuming that your web servers (which are behind firewalls) are going to be broken into. Of course, what you have described is also correct, because it’s current best practice. It is a balance between cost and risk mitigation, and like all balances, you get to adjust it until it feels right for you.

On top of that, Christoph wants to know how to assign addresses to his web servers, which isn’t affected by any firewall placement. I have deployed option 1 in the past (with two layers of firewalls behind it rather than the one that Jean-Francois recommends ;-) and it has worked well. If you have public IP addresses then use them, not only because that’ll set you up well for IPv6, but also because network reporting can get a bit interesting when you have NAT.

Cheers,
Lloyd
Thanks all for the on and off-list replies.
I came to the same conclusion: DNAT wasn't a favoured option.
In my case, we are serving up the dreaded 'custom written app' on ASP.NET, so I am 100% working on the assumption it will be compromised at some point.
I'm now working on the following method - it might be a bit OTT:

INTERNET ---- EXTERNAL FIREWALL ---- (PUBLIC IP) WEB SERVER (DMZ IP) ---- INTERNAL FIREWALL ---- LAN

- The web server's default route is to the internet
- The web server's only other route is the connected DMZ interface
- The external firewall only allows HTTP/HTTPS inbound, nothing outbound, so malware can't call home (easily, anyway)
- The internal firewall only allows communication from the LAN to the DMZ (NAT'd also, so I don't have to add the internal subnet to the DMZ servers' route tables)
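The rule set above can be written down as a small zone-to-zone policy table, which makes the intended flows explicit. A sketch only, with hypothetical zone names; first match wins and anything not explicitly allowed is denied:

```python
# Hypothetical sketch of the policy described above.
RULES = [
    # (src zone,  dst zone,   dst port, verdict)
    ("internet", "dmz",      443,  "allow"),  # HTTPS in from the internet
    ("internet", "dmz",       80,  "allow"),  # HTTP in from the internet
    ("dmz",      "internet", None, "deny"),   # nothing outbound: no call-home
    ("lan",      "dmz",      None, "allow"),  # LAN may reach the DMZ (NAT'd)
    ("dmz",      "lan",      None, "deny"),   # DMZ may never initiate to LAN
]

def verdict(src, dst, port):
    # First matching rule decides; None in a rule's port field is a wildcard.
    for rule_src, rule_dst, rule_port, action in RULES:
        if rule_src == src and rule_dst == dst and rule_port in (None, port):
            return action
    return "deny"  # default deny
```

Writing the policy down this way also makes the key property easy to check: there is no rule at all that lets the DMZ initiate anything, in either direction.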
--
Thanks
Christoph

_______________________________________________
NZNOG mailing list
NZNOG(a)list.waikato.ac.nz
http://list.waikato.ac.nz/mailman/listinfo/nznog
I generally have a VPN on a server which 'phones home' to a heavily hardened VPN concentrator attached to a DMZ/jumphost -> internal networks. If (when) the server gets b0rked, you can simply regen a new cert, invalidate the old one, and away you go.

I've done this with DRAC/iLOs using a small OpenWrt box connected to the iLO to do the VPN smarts (this also gives you a backup channel via cellular if you need it).
-Joel
Hi,

I normally use a combination of "1" and "2". I prefer 1 for weird and "not NAT friendly" protocols, like SIP or some other application. The general rule of thumb is to use number 2 in other cases.

In both setups, remember to deploy local firewalls as well. This will help for the case when a box on the subnet is hacked. My other twist is to deploy "1" without the private NIC, along with local firewalls (and, as you said, a dedicated FW).

Number 1 gets you thinking along the IPv6 route (no pun, and imho :) ) since you have to treat each box as if it was public.

Cheers,
Pieter
On Dec 9, 2013, at 9:36 AM, Christoph Berthoud wrote:
a) I will be hiding behind a dedicated firewall appliance and not relying on the OS firewalls
https://app.box.com/s/a3oqqlgwe15j8svojvzl
Servers really should never be placed behind stateful firewalls - it doesn't actually do any good, it doesn't really make sense (all incoming connections are unsolicited, so there's no state to inspect), and renders them much more vulnerable to DDoS attacks than if the firewalls weren't there.
Network access policy should typically be expressed using stateless ACLs in hardware-based routers or layer-3 switches.
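A stateless ACL of the kind described here is just an ordered, first-match list evaluated against each packet's headers, with no connection-state table consulted at all. A hedged Python sketch (the entry names are invented for illustration):

```python
from collections import namedtuple

AclEntry = namedtuple("AclEntry", "action proto dst_port")
ANY = None  # wildcard field

# Ordered, first match wins, implicit deny at the end -- evaluated per
# packet, with no state kept between packets.
WEB_SERVER_ACL = [
    AclEntry("permit", "tcp", 80),
    AclEntry("permit", "tcp", 443),
    AclEntry("deny", ANY, ANY),
]

def evaluate(acl, proto, dst_port):
    for entry in acl:
        if entry.proto in (ANY, proto) and entry.dst_port in (ANY, dst_port):
            return entry.action
    return "deny"  # implicit deny, as on most router platforms
```

Because every packet is judged on its own headers, there is no state table to exhaust, which is the DDoS-resilience point being made above.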
Same goes for NAT - I've seen horror story after horror story about NATted servers.
-----------------------------------------------------------------------
Roland Dobbins
On Tue, Dec 10, 2013 at 6:11 AM, Dobbins, Roland wrote:
Servers really should never be placed behind stateful firewalls - it doesn't actually do any good, it doesn't really make sense (all incoming connections are unsolicited, so there's no state to inspect), and renders them much more vulnerable to DDoS attacks than if the firewalls weren't there.
Technically true. However, an external level of control (a stateful firewall because they're common; router ACLs are 'faster' but have fewer tools to help you maintain them) is essential to prevent accidental services being enabled, or a compromised box being able to call out to the network for C&C.

Of course, no-one filters the outbound traffic from their servers, do they? :-(

-jim
On Dec 10, 2013, at 2:33 PM, Jim Cheetham wrote:
router ACLs are 'faster' but have fewer tools to help you maintain them)
Actually, all you need to maintain ACLs is good old RCS and RANCID. ;>
Of course, no-one filters the outbound traffic from their servers, do they? :-(
Which one ought to do, with stateless ACLs.
;>
-----------------------------------------------------------------------
Roland Dobbins
On Tue, 10 Dec 2013, Jim Cheetham wrote:
Of course, no-one filters the outbound traffic from their servers, do they? :-(
Funny you should mention that; alarming on "strange" outbound traffic has been quite effective at picking up when a server gets compromised, and filtering has prevented such intrusions from doing as much harm as would otherwise have been the case.

-Martin
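That kind of alarming can start out as something very simple: compare observed egress flows against an allowlist of the ports the server is actually expected to talk out on. A toy sketch with made-up flow tuples and a hypothetical egress profile:

```python
# Toy sketch of alarming on "strange" outbound traffic. Flows are
# (dst_ip, dst_port) tuples; anything outside the expected egress
# profile for this server is flagged for a human to look at.
EXPECTED_EGRESS_PORTS = {53, 123}  # e.g. only DNS and NTP for a web server

def strange_flows(flows):
    return [flow for flow in flows if flow[1] not in EXPECTED_EGRESS_PORTS]
```

Real deployments would key the profile per server and per destination as well, but even this crude version catches a web server suddenly speaking IRC or SMTP to the internet.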
On Tue, Dec 10, 2013 at 8:58 PM, Martin D Kealey wrote:
On Tue, 10 Dec 2013, Jim Cheetham wrote:
Of course, no-one filters the outbound traffic from their servers, do they? :-(
Funny you should mention that; alarming on "strange" outbound traffic has been quite effective at picking up when a server gets compromised, and filtering has prevented such intrusions from doing as much harm as would otherwise have been the case.
Of course, you're all actively implementing BCP38 etc aren't you? :-)
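For anyone who hasn't read it, BCP38 ingress filtering boils down to: drop packets arriving on an interface whose source address could not legitimately originate behind that interface. A sketch of the check using Python's ipaddress module (the interface names and prefixes are invented for illustration):

```python
import ipaddress

# Hypothetical customer-facing interfaces and the prefixes assigned to them.
IFACE_PREFIXES = {
    "cust-a": [ipaddress.ip_network("192.0.2.0/24")],
    "cust-b": [ipaddress.ip_network("198.51.100.0/24")],
}

def bcp38_permit(iface, src_ip):
    # Permit only if the source address belongs to a prefix assigned to the
    # interface the packet arrived on; anything else is treated as spoofed.
    addr = ipaddress.ip_address(src_ip)
    return any(addr in net for net in IFACE_PREFIXES.get(iface, []))
```

On real gear this is expressed as an ingress ACL or strict unicast RPF rather than code, but the decision being made is exactly this membership test.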
participants (9)
- Andy Linton
- Christoph Berthoud
- Dobbins, Roland
- Jean-Francois Pirus
- Jim Cheetham
- Joel Wirāmu Pauling
- Lloyd Parkes
- Martin D Kealey
- Pieter De Wit