With the rise of IoT and web-facing applications, the DDoS threat is now one to be taken seriously by any organisation providing online services.
Over the past month, I found myself at a number of industry events, including the IoT Tech Expo in London and Dublin Tech Summit. The events were thought-provoking all round, but particular kudos has to go to DTS for its clever StartUp100 programme, where 100 individual start-ups were each given a platform to deliver a five-minute, elevator-style product pitch on a dedicated stage.
Although I left each event with a sense of enthusiasm regarding new developments and improvements to existing technologies, there was no getting away from the increasing number of security questions and threats – and there seem to be more questions than answers right now.
The DDoS Threat
It is very apparent that threat mitigation is becoming a much bigger and more complex field as more and more smart devices are added to the mix (some 5,500,000 new internet-enabled devices every day in 2016). While we can account, to a certain degree, for the security of our own endpoints, there are billions of devices that are either outside our control or that we simply ignore from a security perspective – and DDoS attackers are having a field day as a result.
A DoS (Denial of Service) attack renders a website or web service unavailable to its intended users by flooding the target with resource-consuming spurious requests, overloading the system. In a DDoS (Distributed Denial of Service) attack, this is done using many devices – often tens of thousands – making it effectively impossible to stop the attack by merely blocking offending IP addresses. Perpetrators don’t even need any great technical knowledge to launch a DDoS attack, as there is a whole underground market where DDoS toolkits can be easily purchased or freely obtained.
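To see why blocking individual IP addresses breaks down, consider a toy sketch of per-source rate tracking. This is purely illustrative – the window and threshold values are arbitrary assumptions, not tuning advice – but it shows how a single flooding source is easy to spot, and by extension why tens of thousands of sources each sending modest traffic are not.

```python
from collections import defaultdict, deque

# Toy illustration: flag source IPs that exceed a request-rate threshold.
# WINDOW_SECONDS and MAX_REQUESTS_PER_WINDOW are arbitrary example values.
WINDOW_SECONDS = 10
MAX_REQUESTS_PER_WINDOW = 100

class RateTracker:
    def __init__(self):
        # ip -> timestamps of that source's recent requests
        self.requests = defaultdict(deque)

    def record(self, ip, timestamp):
        """Record a request; return True if this IP now looks like a flood source."""
        q = self.requests[ip]
        q.append(timestamp)
        # Drop timestamps that have aged out of the sliding window
        while q and timestamp - q[0] > WINDOW_SECONDS:
            q.popleft()
        return len(q) > MAX_REQUESTS_PER_WINDOW

tracker = RateTracker()
# One client hammering the service trips the threshold...
flagged = [tracker.record("203.0.113.9", t * 0.01) for t in range(150)]
print(flagged[-1])   # True
# ...while a slow, legitimate client does not.
print(tracker.record("198.51.100.7", 0.0))  # False
```

In a distributed attack, no single source need ever cross a per-IP threshold like this one – which is exactly what makes address-based blocking so ineffective.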
So how do we mitigate risks associated with DDoS?
Well, one method Public Cloud providers are happy to advocate is auto-scaling, where new servers are spun up automatically (or more memory, CPU resources and bandwidth are assigned to existing workloads) in order to absorb the attack. Of course, the risk isn’t mitigated by the cloud provider here: they merely serve up whatever is required to keep services running – and charge accordingly. If your existing usage policy is breached, the provider will, in all probability, null route / black hole filter your traffic until the bill is paid – which brings us to the second method of dealing with an attack.
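The economics of scale-out under attack can be sketched with a generic, proportional scaling rule. The real policy engines of cloud providers are configured rather than hand-written, so the function and parameter names below are illustrative assumptions only – but the arithmetic shows why “just scale” means “just pay”.

```python
import math

def desired_instances(current, cpu_utilisation, target=0.60, max_instances=50):
    """Proportional scaling sketch: grow the fleet so average utilisation
    returns to the target level, capped at max_instances."""
    wanted = math.ceil(current * cpu_utilisation / target)
    return min(max(wanted, 1), max_instances)

# Normal load: 4 instances at 55% utilisation -> no change needed.
print(desired_instances(4, 0.55))   # 4
# Under attack: utilisation spikes to 95% -> scale out (and pay for it).
print(desired_instances(4, 0.95))   # 7
# A large enough flood simply drives you to the cap - and a large bill.
print(desired_instances(40, 0.99))  # 50
```

Note the cap: once an attack pushes you to the configured ceiling (or your spending limit), you are back to degraded service – the attacker has simply made you pay on the way down.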
The blunt instrument known as blackholing (or null routing) creates an IP traffic route that leads to a virtual black hole. It is used to stop other users sharing the infrastructure from suffering service degradation as a result of the primary attack. Unfortunately, this less-than-desirable baby-with-the-bathwater approach takes the victim offline just as effectively as the DDoS attack itself, and remains an unacceptable approach for organisations that rely on always-on Internet services.
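On a Linux router, null routing amounts to a one-line iproute2 route entry. The sketch below only constructs the command; actually running it (for example via subprocess) would require root on a real router, and the deployment context here is an assumption, not a recommendation. Note that the blackholed prefix is the victim’s own address range – which is precisely why this takes the victim offline.

```python
# Sketch of what null routing amounts to with Linux iproute2.
def build_blackhole_cmd(prefix: str) -> list[str]:
    """Return the iproute2 command that sends traffic for `prefix` to a black hole.

    Blackholing is applied to the *victim's* (destination) prefix, so the
    victim goes dark while the shared infrastructure is protected.
    """
    return ["ip", "route", "add", "blackhole", prefix]

cmd = build_blackhole_cmd("203.0.113.0/24")
print(" ".join(cmd))  # ip route add blackhole 203.0.113.0/24
```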
There is a better way
Along with implementing best-practice security measures to reduce the risk in the first place, a number of solutions are available that ‘scrub’ inbound traffic, filtering out the malicious requests and allowing only legitimate traffic to reach the target resources.
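Conceptually, scrubbing is a filter pass over the traffic stream. The toy sketch below assumes requests arrive as (source IP, payload) pairs and that a denylist of flagged sources has already been built (for example by the kind of rate analysis discussed earlier) – both are simplifying assumptions; real scrubbing services use far richer signals than a source list.

```python
# Toy 'scrubbing' pass: forward only traffic from sources that are not flagged.
def scrub(requests, flagged_sources):
    """Filter out traffic from flagged sources; return only clean requests."""
    return [(ip, payload) for ip, payload in requests if ip not in flagged_sources]

traffic = [
    ("203.0.113.9", "GET /"),        # flood source
    ("198.51.100.7", "GET /index"),  # legitimate client
    ("203.0.113.9", "GET /"),        # flood source again
]
clean = scrub(traffic, flagged_sources={"203.0.113.9"})
print(clean)  # [('198.51.100.7', 'GET /index')]
```

The key contrast with blackholing: the legitimate request still gets through.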
Both on-premises and cloud-based solutions are employed by organisations but, in my opinion, as attacks become more complex in their arrangements and increase in peak size, the argument for using cloud-based services becomes the stronger one. One reason for this is that the bandwidth required to absorb an attack, before the malicious data is filtered, can be substantially higher than the organisation has in place – even with burstable bandwidth facilities.
Cloud-based services are custom-designed to deal with the large data volumes involved, and only pass clean data on to the final destination. Under normal circumstances, all data is received directly by the client and packets are merely sampled by the service provider, ensuring there is no latency overhead. Only when an attack has been identified is the data re-routed through the service provider and ‘scrubbed’ prior to delivery, allowing the organisation to continue its operations. Once the attack is over, normal service resumes.
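The on-demand diversion model above can be reduced to a simple routing decision. The attack test used here – a requests-per-second threshold on sampled traffic – is a stand-in assumption; real providers combine many signals. The sketch just shows the shape of the logic: direct delivery by default, scrubbing only while an attack is in progress.

```python
# Sketch of on-demand diversion: traffic flows directly until sampled
# packets suggest an attack, then it is diverted through the scrubbing
# centre. ATTACK_RPS_THRESHOLD is an arbitrary illustrative value.
ATTACK_RPS_THRESHOLD = 10_000

def route_for(sampled_rps: int) -> str:
    """Decide the traffic path from a sampled request rate."""
    return "scrubbing_centre" if sampled_rps > ATTACK_RPS_THRESHOLD else "direct"

print(route_for(800))      # direct (normal traffic, no latency overhead)
print(route_for(250_000))  # scrubbing_centre (diverted, cleaned, delivered)
```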
The truth is that DDoS mitigation comes at a price, and both on-premises and cloud solutions can be expensive. Like disaster recovery, it’s an additional cost that delivers no day-to-day productivity gain, but the commercial and reputational damage that would result from an attack – and the resulting sustained outage – absolutely has to be considered by any organisation as part of its overall risk mitigation strategy.