Just been flicking through SMF’s codebase and noticed this in its index.php:

define('SMF', 1);

Then in any scripts that do processing it has:

if (!defined('SMF'))
    die('Hacking attempt...');
I use MVC, with the index calling a bootstrap class that looks at the URL to determine which models and controllers to load. Would it be worth placing something similar in the main index.php, and then using something like the second snippet in all model and controller files, but also logging the potential hacking attempt and serving up a generic status 500 error?
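For an MVC front controller, a minimal sketch of that idea might look like this. The constant name APP and the log message are placeholders of my own, not SMF’s:

```php
<?php
// index.php (front controller): define the sentinel constant first thing.
// The name APP is arbitrary; anything unlikely to clash will do.
define('APP', 1);

// ...bootstrap class, routing, etc. would follow here.

// Then, at the top of every model/controller file:
if (!defined('APP')) {
    // Direct access: log it and serve a generic 500, leaking nothing.
    error_log('Direct file access attempt: ' . ($_SERVER['REQUEST_URI'] ?? 'unknown'));
    http_response_code(500);
    die;
}
```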
I think it would be a good step. If, say, someone tries to access a file directly without it being called through the index file, an error will be displayed if you are extending classes. Something like:
Fatal error: Class 'Model\Model' not found in /var/www/html/sample.php on line 1
This could be a potential problem, as they can now start guessing what classes and methods you have in your files. I adopted this hack check from SMF, as I have used it in the past and thought it was a great idea. It usually needs to be placed before any class declarations and use statements.
It shouldn’t be showing raw PHP output. Is the PHP engine on?
I normally place it like so:

<?php
namespace Sample;

if (!defined('SMF')) {
    die('Hacking attempt..');
}

use \Model\Model;

class Sample extends Model {
    // ...
}
Placed like this, browsing the file directly via its URL won’t throw any errors. The check has to run before the class declaration: it’s the extends of a class that hasn’t been loaded that triggers the fatal error, so if the check only comes after that point it’s too late.
I was trying to access it directly, as if on the local file system! *slaps self for making a silly mistake like that*
I’m thinking that instead of it just dying, it could include a file that creates a connection to the database and logs the hacking attempt there; the user gets served a 500 error and the script then dies. The user for that DB connection will be a separate user which can literally just do inserts into the hacking-attempt log table.
I thought of that a while back, but then I decided not to do it, because you would need to recreate the database connection: when the user accesses the file directly, it won’t have the database connection from wherever you had originally created it.
In theory this means you’ll either have to include a file with the same connection, or write the database connection into every single hack check, which is redundant. So the simplest way is either to include one shared file in all hack checks, or simply not do it at all and just die with a message.
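A sketch of the shared-include idea, with every name here hypothetical. I’m using an in-memory SQLite PDO in the usage below just so the sketch is self-contained; in practice it would be your MySQL DSN with the insert-only user’s credentials:

```php
<?php
// hack_log.php -- shared include for all hack checks. Table and column
// names are made up for this sketch.
function log_hack_attempt(PDO $db, string $uri, string $ip): void
{
    $stmt = $db->prepare(
        'INSERT INTO hack_attempts (uri, ip, attempted_at) VALUES (?, ?, ?)'
    );
    $stmt->execute([$uri, $ip, date('Y-m-d H:i:s')]);
}

// A hack check would then become something like:
// if (!defined('APP')) {
//     require __DIR__ . '/hack_log.php';
//     $db = new PDO(/* insert-only credentials */);
//     log_hack_attempt($db, $_SERVER['REQUEST_URI'], $_SERVER['REMOTE_ADDR']);
//     http_response_code(500);
//     die;
// }
```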
Absolutely not. Or at least not this silly way.
This silly code was popular some 15 years ago, but if you think about it, it has nothing to do with any “hacking”.
There shouldn’t be ANY errors displayed on a live site in the first place.
It always makes me wonder why PHP users are so concerned about displaying the errors they happen to be aware of, but for some reason essentially careless about displaying all the other errors that may occur. Displaying errors has to be turned off for the whole site. ALL errors. In a single place. It makes no sense to handle every single particular error, preventing it from displaying, when there is a PHP ini setting to control the display of errors for the whole site.
Again, are you going to do it manually? For each file? Don’t you have any other job at hand? JFYI, if you turn display_errors off and log_errors on, PHP will already literally do the “logging a potential hacking attempt and serving up a generic status 500 error”. Without a single line of code. Just as default behavior.
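That default behavior boils down to two ini settings (the log path is just an example):

```ini
; php.ini (or the per-directory equivalent your host supports)
display_errors = Off
log_errors = On
error_log = /var/log/php_errors.log
```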
Final question: what are you going to do with these logs anyway? What is your reaction if you see such a log entry? What could you do? Nothing. Yet you will have your logs polluted with useless entries, making it harder to spot a real error. Why not make these errors just impossible to happen instead? Why not put an .htaccess file with the single line
Deny From All
in your application directory and forget all this silly stuff with “hacking” once and for all?
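One caveat I’m fairly sure of: Deny From All is Apache 2.2 syntax; on Apache 2.4+ the equivalent is:

```apacheconf
# .htaccess in the application/library directory
Require all denied
```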
Obviously, but some hosting providers don’t have display_errors turned off by default, which is why it’s better to handle the errors beforehand in case it affects other parts of the code.
Say you have error display turned off and the errors aren’t being logged. This has already happened to me: my hosting provider had error display turned off and the errors weren’t being logged, and I had to change it myself. This is why you should always take proper precautions in case you have no other options.
And I don’t get this whole thing about simplicity. If you don’t take the proper precautions, it’ll come back to bite you in the end. Doing it the simple way is only for lazy developers. Now, if there’s already a way of doing something, there’s no need to reinvent the wheel. However, if that way has flaws, you shouldn’t be using it.
if (!defined('CHECKER')) {
    $location = $_SERVER['REQUEST_URI'];
    $user_ip = $_SERVER['REMOTE_ADDR'];
    if (isset($_SERVER['HTTP_X_FORWARDED_FOR'])) {
        $user_proxy = $_SERVER['HTTP_X_FORWARDED_FOR'];
    } else {
        $user_proxy = 'No proxy detected';
    }
    $date = date('l jS \of F Y h:i:s A');
    error_log(
        "Hacking Attempt at $date!\n"
        . "Location: $location\n"
        . "User IP: $user_ip\n"
        . "User Proxy: $user_proxy\n",
        3,
        'path/to/log/file'
    );
    // trigger_error("Hacking attempt!", E_USER_ERROR);
    die;
}
The other way I’m thinking of is having the MVC’s 404 handler get passed the request, with the URL in the browser address bar staying as what they tried, but having the 404 handler log it as a page not found (I’ll be setting the 404 handler to log 404s to the database anyway), with a flag set for hack attempt, using the same way of detecting a hack attempt.
You have to make up your mind. Either this file is accessed directly, and thus there are no MVC handlers running, or there is an MVC handler to handle the request, in which case the check is a bit useless.
Besides, writing all this stuff in every single file… Well, some PHP users have a real lot of spare time.
I agree with @colshrapnel, this is an ugly solution that does not really track any hacking attempts. It pollutes all PHP files with constructs that tie them to a specific framework/environment, and only makes things more difficult when trying to port them to another environment or use them from other contexts.
Not blocking PHP execution with such defined constants poses no security problem, because in a well structured application a PHP file (apart from index.php) has no executable parts of its own; it just contains class or function definitions, so loading it in a browser will just result in an empty page. If you are paranoid about this, then simply block browser access to such files with a .htaccess.
For tracking such suspicious requests it is enough to observe the server logs. If you have no access to them, or really want a separate logger, then it’s easy to set up a rewrite rule in .htaccess that redirects these requests to a logger script, which can save the logs to a file, database, etc.
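Such a rewrite rule might look like this (the directory names and the logger script are hypothetical):

```apacheconf
# .htaccess in the web root
RewriteEngine On
# Send direct requests for internal directories to a logger script
RewriteRule ^(models|controllers|views)/ log_attempt.php [L]
```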
The only valid place for this if (defined()) technique is PHP template files that send HTML (or another content type) to the browser, because those files will usually send corrupt output when requested on their own. Apart from that, this is an ancient and ugly hack which makes the code messy.
First take a look at your access log and see what you can spot there. The more traffic a site receives, the more “spider” traffic it will receive from script kiddies running software that automatically tries to load pages on the website.
You can either decide to block them, through a software firewall or a hardware firewall solution, depending on the size of the website (i.e. single server, cluster of servers, etc.), or you can decide to just ignore them.
The key is that this is not the kind of “hacking” attempt you need to be worried about, assuming the server is patched and locked down, and you make certain the code base is secure.
This brings me to another very important note regarding security. All of the code base should be above the web path; only the PHP file calling the controller etc. and the files that need to be public (CSS, images and JavaScript) should be in/below the web path and thus publicly accessible.
If this is done, it is impossible for anyone to access any file other than through the proper channels (which, of course, could still be a security risk if the code itself is bad).
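The layout described above might look something like this (the directory names are just an example):

```
/var/www/
    app/                <- above the web root, unreachable by URL
        controllers/
        models/
        bootstrap.php
    public/             <- DocumentRoot; the only publicly served part
        index.php       <- require __DIR__ . '/../app/bootstrap.php';
        css/
        js/
        images/
```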
While most open source packages today do not do this by default, it is still possible to modify most of them to keep the library files above the web path.
I was mostly thinking of the complete software packages that people tend to use today, since this is where most of the security issues happen: people just install the software and believe they are safe for years to come. I guess the forum mentioned by SpacePhoneix, or for example WordPress, are good examples of the trend where all of the code is expected to live inside the web path.
If you use a framework, you would normally write the code yourself, and in those cases, even if it is not above the web path by default, it takes you less than five minutes to move it and update the autoloader.
Not when you’re working on a CodeIgniter project built by contractors who hard-coded $_SERVER['DOCUMENT_ROOT'] everywhere for including files, or on CodeIgniter in general for that matter.
So put the include files where the code expects them, and have a single include statement in that file that points to the spot above the web root where the file really is.
I have never used CodeIgniter, but that sounds like it would be easy to solve if the framework uses a controller/router, since the $_SERVER variables can be changed, allowing you to point it at the correct location instead of the one set by the initial file.
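A sketch of that idea, assuming the hypothetical library path /var/www/app: overwrite the superglobal in the front controller before any of the legacy includes run.

```php
<?php
// Front controller: legacy files build include paths from
// $_SERVER['DOCUMENT_ROOT'], so repoint it at the real library
// location (hypothetical path) before pulling them in.
$_SERVER['DOCUMENT_ROOT'] = '/var/www/app';

// require $_SERVER['DOCUMENT_ROOT'] . '/shared/helpers.php';
```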
It would be, but files are used both inside and outside of CodeIgniter. Some parts of the project have been built using CodeIgniter, and others bypass the framework but include shared files, so the standard framework bootstrap has not occurred.
[off-topic]
CodeIgniter usually ships with a .htaccess containing a RewriteRule to index.php. The latter has two settings for the System and Application folders, which makes swapping CI versions and application locations very easy.
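The usual CI rewrite looks something like this (the exact pattern varies between versions):

```apacheconf
# .htaccess shipped alongside CodeIgniter's index.php
RewriteEngine On
# Anything that isn't a real file or directory goes through index.php
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^(.*)$ index.php/$1 [L]
```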
[/off-topic]