SYN Attack: Thousands of Requests a Second, Maybe Hacked, Don't Know, Help!
For the last six months our site has been under a severe brute-force SYN flood attack. They keep bombarding a single URL on the server, an XML file. They are not attacking any other URL.
We have removed the XML page from our site, but they still keep sending requests, non-stop, for six months now.
We changed the server's IP just to see what would happen, and they are still sending several thousand requests per second. The requests come from different IPs and different ranges, so you cannot even block the IPs. They seem to be coming from legitimate IPs.
Because of this I have had to pay for an extremely expensive server with 8 GB of RAM, a quad-core processor, etc. Even so, the server still reaches critical load, because these requests are eating up my resources.
Our technical team has been working on every aspect of Apache server security from the beginning: external modules, software firewall, hardware firewall. We still cannot stop them.
We have installed the following modules.
We have worked with the hosting company and their technical team leader; he installed the best Cisco hardware firewall and tried to stop them, but in vain.
We have checked the server to see whether anything on our own site is causing the requests: no extra files have been uploaded, and no text has been added to existing files (i.e., we checked whether we had been hacked). Even so, I still wonder if there is something we do not know about. Can a hack lead to huge amounts of traffic?
We need help stopping these attacks. We have searched a lot and found that sites attacked like this often have only one option: shut down until it stops. I really hope that will not be the case for us. Please let us know if anyone has ideas for dealing with this.
We are willing to try any suggestion that might help, from coding to scripting to modules to firewalls. So please share suggestions, solutions, and ways to get us out of this.
Also, could some part of our own PHP code be causing this? We are ready to check every PHP file to make sure it does not contain any dangerous line of code.
Yes, it seems there is no single IP: the requests come from different IP ranges with no referrer. I have read the article and am working with my Linux team to try it out.
Also, they send requests and we return a 404, but as you said, they don't care about the ACK. They just want to send requests.
I will keep updating the forum with the latest details. Please keep the suggestions coming so we can sort this out.
You say they hit a single URL. How dynamic is the content of that URL? Could you cache it (even if just for a few seconds)? In that case, you could set up an HTTP cache such as Squid. This should take the load off your web server. Of course you still have to handle the load, but at least it evens the game a bit, as the attacker would have to spend much more resources to put you under load. If you can't or don't want to set up Squid, you could perhaps render the file to a static file and have Apache serve it directly without involving something like PHP (assuming you use PHP now). That should take off some load as well.
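As a sketch of that static-file idea (the script name and paths below are illustrative assumptions, not from the original post), you could regenerate the page on a schedule and let Apache serve the result as a plain file:

```
# Hypothetical cron entry: rebuild the XML once a minute, writing to a
# temp file first so a request never sees a half-written feed.
* * * * * php /var/www/scripts/build_feed.php > /var/www/html/feed.xml.tmp && mv /var/www/html/feed.xml.tmp /var/www/html/feed.xml
```

Apache then serves feed.xml as a static file, with no PHP involved per request.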
You said "SYN flood attack", right? If it were a SYN flood attack, you would not know which page they are targeting, because a SYN flood overloads the TCP session pool with bogus SYN requests (only the first SYN packet is sent and no other information).
Just finished checking the full server again: no unintended files or code on the server.
Also, does anyone have an idea for a firewall that drops requests at the entry point for a specific URL? What we have tried so far is IP- and pattern-based filtering, which only slows the attack down; they keep getting smarter and generating new batches of IP addresses.
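One entry-point option sometimes used on Linux is a packet-level string match (a sketch; it assumes the iptables string-match module is available in your kernel, the URL is the one mentioned later in the thread, and a match can be evaded if the request is split across packets):

```shell
# Drop HTTP requests for the attacked URL before Apache ever sees them.
# Requires the ipt_string/xt_string match (kernel 2.6.14+); run as root.
iptables -A INPUT -p tcp --dport 80 \
    -m string --string "GET /rss/test.xml" --algo bm -j DROP
```

This spends a little CPU per packet in the kernel instead of a full Apache request cycle per hit.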
Have they contacted you? What gain do they get from DDoSing you?
Do these HTTP requests have a referrer (probably not, but still)? Maybe someone has included your XML page as <img src="yoursite.si/yourpage.xml"> on a busy forum or webpage, and with enough visitors they are, unbeknownst to them, overloading your site with these requests.
What type of firewall do you have? Some are capable of URL-level filtering while others are not.
What was your setup for mod_security and mod_evasive? They should have dealt with the problem to some extent.
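For comparison, a typical mod_evasive setup looks like this (a sketch; the thresholds are illustrative and need tuning, and since mod_evasive blocks per source IP, a widely distributed attack can still slip under the limits):

```
<IfModule mod_evasive20.c>
    DOSPageCount        5    # hits on the same page per interval, per IP
    DOSPageInterval     1    # page interval, in seconds
    DOSSiteCount        50   # total hits on the site per interval, per IP
    DOSSiteInterval     1
    DOSBlockingPeriod   60   # seconds an offending IP stays blocked
</IfModule>
```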
Red Hat Enterprise Linux Server release 5.2 (Tikanga)
> cat /proc/version
Linux version 2.6.18-92.1.22.el5 (email@example.com) (gcc version 4.1.2 20071124 (Red Hat 4.1.2-42)) #1 SMP Fri Dec 5 09:28:22 EST 2008
# Do not accept source routing
net.ipv4.conf.default.accept_source_route = 0
# Controls the System Request debugging functionality of the kernel
kernel.sysrq = 0
# Controls whether core dumps will append the PID to the core filename
# Useful for debugging multi-threaded applications
kernel.core_uses_pid = 1
# Controls the use of TCP syncookies
net.ipv4.tcp_syncookies = 1
# Controls the maximum size of a message, in bytes
kernel.msgmnb = 65536
# Controls the default maximum size of a message queue
kernel.msgmax = 65536
# Controls the maximum shared segment size, in bytes
kernel.shmmax = 68719476736
# Controls the maximum number of shared memory segments, in pages
kernel.shmall = 4294967296
net.ipv4.tcp_synack_retries = 2
# Enable IP spoofing protection, turn on Source Address Verification
net.ipv4.conf.all.rp_filter = 1
# 65536 seems to be the max it will take
net.ipv4.ip_conntrack_max = 1048576
net.ipv4.tcp_rmem = 4096 87380 8388608
net.ipv4.tcp_wmem = 4096 87380 8388608
net.core.rmem_max = 8388608
net.core.wmem_max = 8388608
net.core.netdev_max_backlog = 5000
net.ipv4.tcp_window_scaling = 1
Are you talking about some intermediate server or proxy? I have not thought much about this. Can you please provide some details? It would be a great help. Also, please provide the steps to achieve it, and its drawbacks, if there are any.
Also, here are the latest updates:
Scanned the server with a rootkit/antispyware scanner; no infection found. Regarding the firewall, we put BFD on top of APF, but the requests are still not going down.
Also, the IP tables are filling up with new IPs, which keeps the network slow. Please advise.
kyberfabrikken's suggestion is a good one, though I'd use HAProxy rather than Squid. HAProxy handles each request with much lower overhead than Apache and will not let malformed requests through. By the time a request has reached Apache, you have already spent far more resources, even if the response is a 404.
HAProxy is a software load balancer/reverse proxy. You would have it as the front-end application that processes incoming requests and then, if they are valid, passes them through to Apache. The point of the exercise is that a given incoming request can be processed, and if necessary rejected, for a smaller amount of system resources than letting Apache process the same request. If you read the applications page you can see how it mitigates attacks such as the recent 'slowloris' exploit by rejecting malformed requests that would otherwise overwhelm Apache. Without full clarification of your attack methodology I couldn't definitively say whether it would be effective or not.
I'd also mention that you need better people configuring your external firewall if they haven't managed to block a request for a specific file. This should be extremely simple with the 'best CISCO hardware firewall', which leads me to suspect your hosting company isn't giving you the full picture, or doesn't have a qualified individual to operate it.
Hey. Look up DEVICE POLLING for your architecture. I use it on my FreeBSD server. You can enable it at the kernel level, and at the device level on certain devices (we're talking about network cards). It puts a queue on the input in terms of how many requests per second can be handled, FIFO style; anything else delivered in that time is dropped and completely ignored. You pick a threshold that keeps your server running smoothly and still allows your real users to browse your sites.
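For reference, enabling device polling on FreeBSD looks roughly like this (a sketch based on the polling(4) manual page; the interface name em0 is an assumption, and the NIC driver must support polling):

```shell
# Kernel config options (rebuild the kernel with these):
#   options DEVICE_POLLING
#   options HZ=1000
# Then enable polling on the interface and cap how much CPU it may use:
ifconfig em0 polling
sysctl kern.polling.user_frac=50
```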
Thanks for all these details; they gave me a clear picture of HAProxy. We will surely look into this option and see whether it slows or stops the requests.
Also, we have a dedicated server. We gave the hosting company full access to the site to stop this attack; they tried for a few days and sent us all the details of what they tried, including the firewall and other modules.
We also verified everything ourselves once they finished the setup, and the technical side was all fine.
All the settings are in place, which is why the site is still running. What we want is to stop the requests from even touching the server. Thanks for the updates.
To block your SYN flood, you need a SYN backlog large enough that your server still accepts normal connections, but not so large that SYN cookies never get used (typically between 5000 and 20000). For this you need to set both net.core.somaxconn and net.ipv4.tcp_max_syn_backlog. You must also reduce tcp_synack_retries to 2 or 3.
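Putting those numbers into sysctl form (a sketch; 10000 is just one value from the suggested 5000-20000 range, and these commands need root):

```shell
# Size the SYN backlog so normal connections still get through
# while SYN cookies can still kick in under flood.
sysctl -w net.core.somaxconn=10000
sysctl -w net.ipv4.tcp_max_syn_backlog=10000
# Fewer SYN-ACK retries means half-open connections expire sooner.
sysctl -w net.ipv4.tcp_synack_retries=2
```

To make the settings persistent, put the same keys in /etc/sysctl.conf.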
Concerning haproxy, there is a feature named "tarpit" which can dramatically slow down the attack when you can identify it, which is your case. The principle consists of keeping the connection open and waiting before returning a 500 status. Since most attack tools run only one connection at a time, you will slow them down. We have already blocked a 20k-connection attack using this method. It is better than rejecting the request because it reduces the traffic and request rate, thus protecting your upstream link and your firewalls.
Here's what you must do:
- download haproxy 1.3.19 (or even 1.4-dev1, which will save you some bandwidth)
- set the frontend maxconn to a large value (40000 for 2 GB of RAM)
- set the global maxconn slightly higher (e.g. 40100)
- define a "timeout tarpit" equal to the time you want to keep an attacker's connection up; 30s is already fine
- set the "maxconn" value on your "server" lines to reflect your Apache MaxClients (slightly lower, so that haproxy does not use up all the Apache slots)
- create a tarpit backend like this:
- create ACLs in the frontend to switch to the tarpit:
acl attack_url url_beg /rss/test.xml
use_backend tarpit if attack_url
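Pulling the steps above together, the configuration might be sketched like this (a hypothetical example for haproxy 1.3; the ports, backend names, and Apache address are assumptions, and the body of the tarpit backend was not shown in the original post, so check it against the haproxy configuration manual):

```
global
    maxconn 40100

defaults
    mode http
    timeout tarpit 30s        # how long an attacker's connection is held

frontend www
    bind :80
    maxconn 40000
    acl attack_url url_beg /rss/test.xml
    use_backend tarpit if attack_url
    default_backend apache

backend apache
    # keep maxconn slightly below Apache's MaxClients
    server local 127.0.0.1:8080 maxconn 200

backend tarpit
    # hold every matching request, then answer 500 after "timeout tarpit"
    reqtarpit .
```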
Note that you will then be able to add as many attack_url entries as you want, and you will be able to combine them with other criteria (source IP, headers, user-agent, ...).
Keep in mind that as long as your link is not saturated, it is possible to do something.