Thoughts on Logging Errors in a DB Table instead of a log file

I’m just curious what some of the arguments are for or against logging errors in a MySQL table instead of a regular log file.

I’m in the process of writing a custom error handler function to be set with set_error_handler. I was going to use the error_log function to write the errors to a log file, but then I started wondering: wouldn’t it be better to have them in a DB table?
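Roughly what I have in mind, as a minimal sketch (the log path and entry format are just placeholders):

```php
<?php

// Minimal sketch of a custom handler that writes to a file.
// The path and the entry format are placeholders.
function my_error_handler($errno, $errstr, $errfile, $errline)
{
    $entry = sprintf(
        '[%s] %d: %s in %s on line %d',
        date('Y-m-d H:i:s'),
        $errno,
        $errstr,
        $errfile,
        $errline
    );

    // Message type 3 appends to the file named in the third argument;
    // error_log() does not add a newline for you.
    error_log($entry . PHP_EOL, 3, '/var/log/app_errors.log');

    // Returning true stops PHP's internal handler from also running.
    return true;
}

set_error_handler('my_error_handler');
```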

Here are some of the Pros/Cons I can think of…

PROS

  1. Can create a nice web interface to view the errors
  2. Can sort by date, error_type, frequency of error_type
  3. Can create other reports based on this data

CONS

  1. More overhead - This is really the only con I can come up with. Executing a MySQL query will take longer than appending to a regular file. Also, if the error occurs in a static area of the site, where no DB connection is required, I would have to open one just to log the error, which is even more overhead.

What do you all think?

What if the error is “cannot connect to database”?

Ha. Good point. I knew there was a reason I should post here before doing this.

I suppose I could add logic to make sure I have a valid database connection first, then, depending on the outcome, log either to the database or to a file — something like the sketch below. Of course, the downside would be that I now have to check two places for errors. But then again, if the majority of the errors are not “cannot connect to database”, it may still be convenient to have most of my errors in a DB table.
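A sketch of that fallback (the mysqli connection and the error_log table are assumptions on my part):

```php
<?php

// Try the database first; fall back to a file if there is no live
// connection or the insert fails. Table/column names are made up.
function log_error($mysqli, $errno, $errstr, $errfile, $errline)
{
    if ($mysqli instanceof mysqli && $mysqli->connect_errno === 0) {
        $stmt = $mysqli->prepare(
            'INSERT INTO error_log (errno, message, file, line, created_at)
             VALUES (?, ?, ?, ?, NOW())'
        );
        if ($stmt !== false
            && $stmt->bind_param('issi', $errno, $errstr, $errfile, $errline)
            && $stmt->execute()) {
            return; // logged to the DB
        }
    }

    // No connection (or the insert itself failed): fall back to a file,
    // so "cannot connect to database" still gets recorded somewhere.
    $entry = sprintf('[%s] %d: %s in %s on line %d',
        date('Y-m-d H:i:s'), $errno, $errstr, $errfile, $errline);
    error_log($entry . PHP_EOL, 3, '/var/log/app_errors.log');
}
```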

What if the error is “I don’t have permissions to write to the log file”?

That’s not a good argument.

  1. More overhead

That’s really irrelevant. Unless you are logging each error with 20 SQL queries (without a stored procedure), you won’t notice any overhead.

anonymousdude: for big projects (big ones, millions of db records, many servers, etc), this is what I do:

  • Normal errors go to Apache log files.
  • Nagios parses them and notifies us of any unknown / new errors found.
  • Application errors get logged to a logging database, so we can easily parse / do reports on them.
    Application errors are errors like: log("this should never happen!"); or } else { log("you need to deal with this $case"); }. Basically, not really errors, but more like loose ends that will have to be covered later on; since they almost never happen, the code is not yet there to deal with them (see the sketch below).
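For illustration, that log() call could just be a thin wrapper around an insert into the logging database; the table, columns, and connection in this sketch are made up:

```php
<?php

// Hypothetical log() helper: application errors go into a logging DB
// so they can be queried and reported on later.
function log_app_error(mysqli $db, $message)
{
    $stmt = $db->prepare(
        'INSERT INTO app_errors (message, created_at) VALUES (?, NOW())'
    );
    $stmt->bind_param('s', $message);
    $stmt->execute();
}

// Matching the example above:
// } else { log_app_error($db, "you need to deal with this $case"); }
```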

Hope this helps.

risoknop: in big apps, when s**t goes down, it comes rolling. So you need to think about your overhead / implementation.
Under normal operation, the overhead is not that big (a few ifs / an insert every few minutes, etc.), but once something is messing up and you’re getting 4000 requests a sec, your overhead will kick in, and the DB servers / network overload and slow down the stuff that used to work… And then that will start reporting that it’s too slow (logging more errors, and slowing things down even more).

In other words, make sure you plan what you log, and don’t just log anything (you don’t want 20 inserts per request…).
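For example, one way to cap the damage is to limit DB inserts per request and spill anything past the cap to a plain file; the cap of 3, the table, and the file path here are arbitrary examples:

```php
<?php

// Sketch of "plan what you log": cap DB inserts per request so a
// cascade of failures doesn't hammer the database.
function log_error_capped(mysqli $db, $message)
{
    static $inserted = 0; // persists across calls within one request

    if ($inserted < 3) {
        $stmt = $db->prepare(
            'INSERT INTO error_log (message, created_at) VALUES (?, NOW())'
        );
        $stmt->bind_param('s', $message);
        $stmt->execute();
        $inserted++;
    } else {
        // Past the cap: the file is cheap even under heavy load.
        error_log($message . PHP_EOL, 3, '/var/log/app_errors.log');
    }
}
```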

Interesting insights. Thanks for the response. I was not aware of Nagios. I looked it up and it sounds pretty interesting, although probably a little more than I need right now.

However, it did give me a few ideas…

  1. Creating some type of cron job to parse the log files myself and then insert the entries into a database.

  2. Having some type of switch that turns off the MySQL logging if s**t really hits the fan. No idea how I would do that right now, but I would think there has to be a way (see the sketch after this list).

  3. Or, like you said, just logging very specific types of errors in MySQL - or maybe only certain custom-defined exceptions.
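On idea 2, the switch could be as simple as a flag file that the error handler checks before touching MySQL. A minimal sketch (the path and names are made up):

```php
<?php

// Sketch of a kill switch for DB logging: a flag file an admin can
// create or remove, checked on every log call.
define('DB_LOG_KILL_SWITCH', '/tmp/disable_db_logging');

function db_logging_enabled()
{
    // If the flag file exists, skip MySQL and log to files only.
    return !file_exists(DB_LOG_KILL_SWITCH);
}

// Inside the error handler:
// if (db_logging_enabled()) { /* INSERT into MySQL */ }
// else { error_log($entry . PHP_EOL, 3, '/var/log/app_errors.log'); }
```

During an incident you’d touch the file to stop DB inserts immediately, and delete it afterwards - no code deploy or config change needed.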