L3 Hardening: GWx DDoS Mitigation

In recent years, denial-of-service attacks (DoS), their distributed variants (DDoS) and their newest reflected species (DrDoS/rDDoS) have taken place, and will keep taking place, increasingly often. To explain this very quickly: a denial-of-service (DoS) attack takes place when a single host attacks another host over the network. Distributed denial of service (DDoS) means that many, often geographically dispersed aggressor hosts conduct the attack (trinoo, tfn2k and stacheldraht were famous tools for that purpose around 2001). As you can imagine, the first DDoS attacks were pretty spectacular because of the bandwidth achieved. Afterwards, more sophisticated Layer 7 (application layer) attacks were developed, then reflected attacks, and finally amplification came into play (see Wikipedia for more details). All these attacks are not only proof of weaknesses and/or design errors in the underlying internet protocols or the network service daemons implementing them. They also demonstrate their potential power, as such attacks can be an equally handy and efficient tool for governmental entities and/or their military executive branches with an interest in, for example, wreaking havoc on a country's essential infrastructure. On an even more sophisticated level, such attacks may be conducted as part of a larger operation with the intent to ultimately spoof, intercept or take over certain communications to and from target host(s) or network(s). The much-hyped term "cyberwar" comes to mind, accompanied by the bitter taste of being instrumentalized by the military-industrial complex to justify questionable regulations and defense budget extensions to "make the internet a safer place".

Basic Mitigation Theory

If we look into nature, we see that a river is able to transport a certain amount of water, but when a flood happens because of heavy rainfall (a distributed denial of service taking place, so to speak), the original riverbed is too small to carry all the water, which ultimately finds its own way, forming and rearranging the surrounding landscape and whatever lies in its path. On a larger scale, a single river is usually only one vein of an area's water transportation system, and if floods happen more often, new, smaller rivers may form to fulfill the need for larger overall capacity. The more rivers there are, the more water can and eventually will be transported without the harsh effects of the previous flood. A more complex and dynamic river system is thus potentially able to fully compensate for the initial problem. This split-up principle can also be applied to the mitigation of a large-scale DoS, DDoS or DrDoS/rDDoS attack, as described below on a basic technical level.

Technical GWx Principle

Each of the GWx systems is configured to forward and/or proxy packets for given services to the real IP of the production server. This can be achieved by implementing packet filter or routing rules on incoming Layer 3 IP traffic, or by configurations that implement a dedicated proxy / load balancer on the application layer. If a network or host has, or itself acts as, a single gateway, it can be flooded as soon as the amount of data reaches its maximum bandwidth capacity. A 1 Gbit DDoS attack will therefore most probably fully saturate, and thus take down, a system connected via a single 1 Gbit link at GW0.

But if we implement a second, geographically distant GW1 with the same link speed and use round robin DNS to spread the requests evenly across both gateways, a 1 Gbit attack can no longer fully saturate the bandwidth, as each of the GWx systems will only receive its 50% share of it. A GWx cluster consisting of 3 systems reduces that to even shares of 33:33:33 percent, 4 systems to 25:25:25:25, and so on: with x systems, each one receives overall bandwidth / x. As you can see, this design has scalability built in and can be expanded in real time simply by adding more GWx systems to the cluster and to its underlying round robin DNS configuration.
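To put numbers on that even split, here is a tiny sketch. The hostname is a placeholder for a name carrying one A record per GWx system, and the attack volume is purely illustrative:

```python
import socket

GATEWAYS = 4        # number of GWx systems in the round robin pool
ATTACK_GBIT = 1.0   # hypothetical attack volume in Gbit/s

# Each gateway only ever sees an even share of the total attack traffic:
print(f"{ATTACK_GBIT / GATEWAYS:.2f} Gbit/s per gateway")  # 0.25 for 4 systems

# Round robin DNS in practice: a name with one A record per GWx system is
# answered in rotating order, spreading clients (and attackers) across the
# pool. "www.example.com" stands in for such a name.
for entry in socket.getaddrinfo("www.example.com", 443, proto=socket.IPPROTO_TCP):
    print(entry[4][0])
```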
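The forwarding side mentioned at the beginning of this section can be sketched just as briefly. The following is a minimal application-layer relay to illustrate the idea, not the actual GWx configuration (a real deployment would rather use packet filter / routing rules or a hardened proxy, as described above); IPs and ports are placeholder assumptions:

```python
import socket
import threading

REAL_IP, REAL_PORT = "10.0.0.2", 8080  # hypothetical hidden production server
LISTEN_PORT = 8080                     # public service port on this GWx box

def pipe(src: socket.socket, dst: socket.socket) -> None:
    """Copy bytes one way until either side closes the connection."""
    try:
        while data := src.recv(4096):
            dst.sendall(data)
    except OSError:
        pass
    finally:
        src.close()
        dst.close()

def handle(client: socket.socket) -> None:
    """Relay a client connection to the real server, in both directions."""
    upstream = socket.create_connection((REAL_IP, REAL_PORT))
    threading.Thread(target=pipe, args=(client, upstream), daemon=True).start()
    pipe(upstream, client)

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("0.0.0.0", LISTEN_PORT))
srv.listen()
while True:
    conn, _addr = srv.accept()
    threading.Thread(target=handle, args=(conn,), daemon=True).start()
```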
GWx Hardening

As each and every GWx system will be directly exposed to attack traffic, it should be hardened thoroughly on both the host and the network level. To name only a few measures: basic packet filter rules that drop known-bad, unneeded traffic; blocking, delaying or restricting the bandwidth of hosts that send too many requests within a certain timespan (a minimal sketch of this rate-limiting idea follows the observations list below); or a mechanism to filter out brute-force attacks against certain services or web pages. Extended host and network monitoring also makes sense here, but heavily depends on your research capabilities and on your intention to analyze and further develop your mitigation skillset. Security is a process, and should neither be seen as, nor advertised and marketed as, a snake-oilish product. Last but not least, it is of course crucial to keep the real IP secret and to deploy packet filtering there as well, allowing inbound traffic to the protected services only from the GWx boxes.

Practical Insights and Perspectives

Having dealt with more than 30 large-scale attacks (meaning at least hundreds of megabits up to a few gigabits) in the last two years alone, I observed that they all shared the following characteristics (4x GWx, 2 providers, 4 datacenters):
  • overall attack duration mostly only a few minutes
  • attacks usually shifted by a few minutes
  • maximum and overall attack bandwidth limited
  • attackers unable to fully disrupt GWx-protected services ever since
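As announced in the hardening section above, here is a minimal sketch of per-source rate limiting. It only illustrates the idea, not the actual GWx setup; window size and threshold are assumptions to be tuned per service:

```python
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 10   # sliding window length (assumption, tune per service)
MAX_REQUESTS = 100    # allowed requests per source IP per window (assumption)

_history = defaultdict(deque)  # source IP -> timestamps of recent requests

def allow(src_ip: str) -> bool:
    """Return True if a request from src_ip is still within the allowed rate."""
    now = time.monotonic()
    q = _history[src_ip]
    while q and now - q[0] > WINDOW_SECONDS:  # expire timestamps outside window
        q.popleft()
    if len(q) >= MAX_REQUESTS:
        return False  # over the limit: drop, delay or tarpit the request
    q.append(now)
    return True
```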
As DDoS attacks and certain questionable mitigation techniques (as opposed to low-tech, simple, functional and achievable ones) have recently become a lucrative business model, the "customer" (or rather attacker) most probably pays for a package that limits him to one target IP at a time and, of course, to a limited bandwidth. Staying rather stealthy over the long term also seems to be a plausible requirement for both the DDoS provider and its "customer", so the average attack mostly takes place during high-load periods and lasts rather short, but occurs often, so that fully legitimate clients get really frustrated.

Generally speaking, and leaving aside the fact that core network providers are also able to filter, e.g. via BGP, one efficient way to mitigate DoS, DDoS and DrDoS/rDDoS attacks is to form a cyberarmy of GWx machines, geographically spread all over the world and using different providers and physical datacenters, a technique similarly deployed by the circumventive/anti-censorship Tor network. But the GWx cyberarmy, in contrast to botnets, does not have to consist of hundreds or thousands of machines; a handful of high-bandwidth servers is enough, ideally carefully chosen dedicated root servers, optionally already DDoS-protected within their own network. It can also make sense to maintain a variable list of GWx systems that change IPs or even providers every few months, e.g. if monitoring shows that certain gateways are attacked more often and with more bandwidth (a small sketch of this rotation idea follows below).

In the end, the efficiency of network offense as well as network defense heavily depends on the skillset and creativity of the red and the blue team, respectively. Variability and flexibility have always been, and always will be, an essential part of the road to success, be it for natural species or for survival in a clearly overhyped but nonetheless unambiguously fought cyberwar. From my personal experience, and if you look at the successful spread of many things, be it historically relevant inventions or open source software, simplicity is often the key element of lasting efficiency and widespread usage.
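To illustrate the rotation idea just mentioned: a hedged sketch of how monitoring data could drive the decision which gateways stay in the round robin pool. Hostnames, numbers and the threshold are illustrative assumptions, not values from the actual setup:

```python
# Hypothetical per-gateway monitoring data: average attack Gbit/s observed
# over the last review period. Names and numbers are illustrative only.
ATTACK_SEEN = {
    "gw0.example.net": 0.1,
    "gw1.example.net": 2.4,  # attracting noticeably more attack traffic
    "gw2.example.net": 0.2,
    "gw3.example.net": 0.3,
}
ROTATE_THRESHOLD = 1.0  # Gbit/s; above this, schedule a re-IP / provider move

keep = [gw for gw, gbit in ATTACK_SEEN.items() if gbit < ROTATE_THRESHOLD]
move = [gw for gw in ATTACK_SEEN if gw not in keep]
print("keep in round robin DNS:", keep)
print("rotate IP or provider:  ", move)
```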
