Multithread calls

Dear All,
Below is my code. Each time I receive a connection from a client, I open a new thread and save the data into the DB. What I want to do is open another thread to preprocess the data and only then insert it into the DB. The preprocessing will involve a lot of DB queries. How can I achieve that?

import*;
import java.sql.Connection;

public class commServer {
   public static void main(String[] args) {
      try {
         final ServerSocket serverSocketConn = new ServerSocket(9000);
         while (true) {
            try {
               Socket socketConn1 = serverSocketConn.accept();
               new Thread(new ConnectionHandler(socketConn1)).start();
            } catch (Exception e) {
               System.out.println("MyError:Socket Accepting has been caught in main loop." + e.toString());
            }
         }
      } catch (Exception e) {
         System.out.println("MyError:Socket Conn has been caught in main loop." + e.toString());
      }
   }
}

class ConnectionHandler implements Runnable {
   private Socket receivedSocketConn1;

   ConnectionHandler(Socket receivedSocketConn1) {
      this.receivedSocketConn1 = receivedSocketConn1;
   }

   public void run() {
      Connection dbconn = null;
      BufferedWriter w = null;
      BufferedReader r = null;
      try {
         w = new BufferedWriter(new OutputStreamWriter(receivedSocketConn1.getOutputStream()));
         r = new BufferedReader(new InputStreamReader(receivedSocketConn1.getInputStream()));
         int m = 0, count = 0;
         String line = "";
         String n = "";
         while ((m = != -1) {
            // DB processing here.
         }
      } catch (IOException ex) {
         System.out.println("MyError:IOException has been caught in the main first try");
      } finally {
         try {
            if (w != null) {
            } else {
               System.out.println("MyError:w is null in finally close");
            }
         } catch (IOException ex) {
            System.out.println("MyError:IOException has been caught in w in finally close");
         }
      }
   }
}

Why don't you use the same thread? Instead of just "saving data into the db", it would do both the "preprocessing" and the "saving data into the db".

Dear Jurn,
There is a lot of database query processing involved, so is it still OK for all of it to be in this one thread?

To add some context to this thread:


Your CommServer class already spawns a new Thread in order to keep the socket listener open for new connections.

As the pre-processing and saving is already occurring in its own Thread, what would be the purpose of spawning another Thread? It won't magically complete faster and, if we're talking about a single-core CPU, it might even take longer with the extra state saving and restoration the CPU will have to do to switch Threads.

Dear Rushiku,
So, about the complex query processing needed for the received data: any ideas on how to optimize it further?

Given that I have no idea what your requirements are, what the data looks like, what the tables look like, etc., I can only speak generally.

Write better SQL statements.
Look into Prepared Statements.
Use Stored Procedures.
Refactor your tables such that they link together.
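To make the Prepared Statements suggestion concrete, here is a minimal sketch; the table name, column name, and helper method are made up for illustration, so adapt them to your own schema:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class PreparedLookup {
    // The statement text is constant; only the parameter changes per call,
    // so the DB can parse and plan it once and reuse that plan.
    static final String SQL = "SELECT 1 FROM myTable WHERE myKey = ? LIMIT 1";

    // Hypothetical helper: a single indexed existence check,
    // instead of pulling rows back and comparing them in Java.
    static boolean exists(Connection conn, String key) throws SQLException {
        try (PreparedStatement ps = conn.prepareStatement(SQL)) {
            ps.setString(1, key);
            try (ResultSet rs = ps.executeQuery()) {
                return;
            }
        }
    }
}
```

Besides the plan reuse, binding the value with `setString` also protects you from SQL injection, which string-concatenated queries do not.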

Honestly, every time I’ve run into a situation where my DB code runs poorly due to complexity, it’s because I’m doing it wrong.

Story time: I was brought into a project that was having major problems, everything worked fine with the test DB, but when it came time to store a record on a copy of the production DB, it would always time out (web based app).

After a bit of time spent staring at code, I found the problem. To insert a record, they wanted to make sure it didn’t already exist, fair enough. To accomplish this, they pulled the entire contents of the database to get the number of records contained within the database, then pulled the entire contents of the database x times to check the new data against the existing data (eg: first pull, check new against first record, second pull, check new against 2nd record, etc). You can’t make this stuff up… Anyway, we defined what a unique key would be, polled for that, reducing the number of DB calls from megaX to one, and, as if by magic, that part started working.

Dear Rushiku,
Give me some time and I will send you some of the table structures involved; maybe you can give me some hints? Is that OK?

We’ll do what we can :slight_smile:

rushiku is right… if you’re looking at performance issues, profile and pinpoint the problem.
Optimizing blindly is a bad idea, and creating multiple threads doesn’t sound like it would help the preprocessing!

Dear Rushiku,
Below are my tables. My first table, where I insert the raw data (tripData), is this:

tripID int(11) NOT NULL auto_increment,
latitude float NOT NULL,
longitude double NOT NULL,
speed float NOT NULL,
course float NOT NULL,
dateTimer datetime NOT NULL,
gpsStat tinyint(4) NOT NULL,
unitStat varchar(12) NOT NULL,
battVolt varchar(6) NOT NULL,
deviceSerial varchar(16) NOT NULL,
KEY deviceID (deviceSerial),
KEY dateTimer (dateTimer)

So, say I want to process the data. The first thing I need to do is compare it with the device table: from tripData I match tripData.deviceSerial = device.serialNumber, which gets me the fleetID. Using this fleetID, I look at the user table on device.fleetID = user.fleetID, and from there I get genID. Using the genID I do a select on geoFence, e.g.

geoFence Where fleetID IN (473,34,51,45,67,89,90,102,35,67,88,12,14,15)

Then, based on each geoFenceID, I compare against geoFenceDetails and get all the relevant latitudes and longitudes. If the type is radius, I check whether the point from tripData is inside it; the same goes for a polygon, checking whether the point is in or out. For your information, this is just one of the processing steps (the geoFence one). There is another, but I have left it out so as not to confuse things.
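To make the radius and polygon checks concrete, here is a minimal, self-contained sketch. It uses an equirectangular approximation for the radius test (fine for short distances; use the haversine formula over larger areas) and the standard ray-casting test for the polygon; the method names are made up for illustration:

```java
public class GeoFenceCheck {

    // Approximate distance in meters between two lat/long points
    // (equirectangular approximation, adequate for geofence-sized areas).
    static double distanceMeters(double lat1, double lon1, double lat2, double lon2) {
        double earthRadius = 6371000.0; // meters
        double dLat = Math.toRadians(lat2 - lat1);
        double dLon = Math.toRadians(lon2 - lon1);
        double x = dLon * Math.cos(Math.toRadians((lat1 + lat2) / 2));
        return earthRadius * Math.sqrt(dLat * dLat + x * x);
    }

    // Radius fence: is the point within radiusMeters of the fence center?
    static boolean inRadius(double lat, double lon,
                            double centerLat, double centerLon, double radiusMeters) {
        return distanceMeters(lat, lon, centerLat, centerLon) <= radiusMeters;
    }

    // Polygon fence: ray-casting point-in-polygon test.
    // latitudes/longitudes hold the polygon vertices in order.
    static boolean inPolygon(double lat, double lon,
                             double[] latitudes, double[] longitudes) {
        boolean inside = false;
        int n = latitudes.length;
        for (int i = 0, j = n - 1; i < n; j = i++) {
            if ((latitudes[i] > lat) != (latitudes[j] > lat)
                    && lon < (longitudes[j] - longitudes[i]) * (lat - latitudes[i])
                             / (latitudes[j] - latitudes[i]) + longitudes[i]) {
                inside = !inside;
            }
        }
        return inside;
    }
}
```

The geoFenceDetails rows for one geoFenceID would be loaded into the two arrays before calling `inPolygon`.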

deviceID int(11) NOT NULL auto_increment,
serialNumber varchar(24) NOT NULL,
deviceType int(11) default NULL,
simCardID int(11) NOT NULL,
imeiNumber varchar(24) default NULL,
displayName varchar(24) default NULL,
description varchar(128) default NULL,
lastUpdateDateTime timestamp NOT NULL default CURRENT_TIMESTAMP,
creationDateTime timestamp NULL default NULL,
purchaseDate date default NULL,
firmWareVersion varchar(5) default NULL,
hardWareVersion varchar(5) default '801',
registrationDate date default NULL,
deviceAssigned int(11) NOT NULL default '0',
fleetID int(11) NOT NULL,
PRIMARY KEY (deviceID,serialNumber),
UNIQUE KEY serialNumber (serialNumber),
KEY simCardID (simCardID),
KEY deviceType (deviceType)

userID int(11) NOT NULL auto_increment,
userName varchar(10) NOT NULL,
password varchar(25) NOT NULL,
fleetID int(11) default NULL,
genID varchar(3000) NOT NULL,
UNIQUE KEY userName (userName),
KEY driverID (driverID),
KEY fleetID (fleetID)

geoFenceID int(11) NOT NULL auto_increment,
userID int(11) NOT NULL,
fleetID int(11) NOT NULL,
geoFenceString text NOT NULL,
geoFenceType varchar(55) NOT NULL,
geoFenceName varchar(255) default NULL,
geoFenceLocation varchar(250) NOT NULL,
geoFenceDescription varchar(255) default NULL,
geoFenceUserType varchar(250) NOT NULL,
geoFenceRadius int(11) NOT NULL,
geoFencePreference int(11) NOT NULL,
geoFenceFlag varchar(25) default NULL,
KEY fleetID (fleetID),
KEY userID (userID)

geoFenceDetailsID int(11) NOT NULL auto_increment,
geoFenceID int(11) NOT NULL,
geoFenceLat double NOT NULL,
geoFenceLong double NOT NULL,
PRIMARY KEY (geoFenceDetailsID),
KEY geoFenceID (geoFenceID)

Okay, let me see if I understand this.

You get a tripData entry from somewhere, use its deviceSerial to look up which device it is so you can get the fleetID for the device.

Somehow the fleetID is going to get you one user (despite the fact that, semantically, a fleet can have multiple users and multiple devices), from which you extract the genID.

Using the genID, you pull all the data in geoFence that relates to the user(?), do some calculations, and then…something.

Question, why is genID 3000 characters? It seems that you’re using it to store the IDs of geoFence entries associated to a user, rather than storing the user’s ID in the geoFence entries like nature intended. What are you going to do when the number of entries associated with a user exceeds the size limit of genID?

On second thought, it seems to me that the deviceID should be stored in the geoFence entries as those entries should be unique to a device, while a user may have more than one device.

Dear Rushiku,
The tripData is the data from the GPS devices. OK, in the user table, the genID is actually a collection of fleetIDs, recording the chain when a fleet is a child of another parent fleet. So say the fleetID is 10, its parent is 8, and the parent of 8 is 4; then the genID for fleetID=10 will be ('4','8','10'). I know this is not the best solution, but at the time we had no choice. So we get the geoFence IDs based on fleetID IN ('genIDValue'). I hope you are clear about the genID now. We cannot store the deviceID in the geoFence because a fence will be used by more than one deviceID, not a single device. I hope I am clear; if not, I will give you more information.

Okay, I’m having trouble understanding what you’re trying to do…let me try again.

A tripData comes in, using that information, you pull the user, device and geoFence (area?) the device should be reporting from, then determine if the tripData is originating from within the bounds of the geoFence?

As to genID not being the best solution, it’s not a solution at all. If the fleets have a hierarchy, then there should be a parentID field. This way you have a ‘set and forget’ field you can use to determine the hierarchy rather than a complicated aggregated field that needs constant updating, and you’ll be able to reliably determine the correct structure of the hierarchy, rather than relying on your team members to remember to place the IDs into genID in the correct order. Tip: they will not remember and when it does happen, it will take only one commit of bad data to corrupt the entire DB.
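To sketch the parentID approach: with a nullable parentID column on the fleet table (hypothetical here, since the posted schema does not have one), the genID chain can be derived on demand instead of stored. The in-memory Map below stands in for what would be per-row parentID lookups against the DB:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class FleetHierarchy {
    // Returns the fleet's ancestors plus itself, root first,
    // e.g. fleet 10 with parent 8, whose parent is 4, yields [4, 8, 10].
    // parentOf maps fleetID -> parentID (absent = root fleet); in practice
    // each step would be one indexed query on the parentID column.
    static List<Integer> ancestry(int fleetID, Map<Integer, Integer> parentOf) {
        List<Integer> chain = new ArrayList<>();
        Integer current = fleetID;
        while (current != null && !chain.contains(current)) { // guards against cycles
            chain.add(0, current); // prepend so the root ends up first
            current = parentOf.get(current);
        }
        return chain;
    }
}
```

The result is exactly the list genID stores today, but computed from one authoritative field that never needs aggregate updating.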

Dear Rushiku,
"A tripData comes in; using that information, you pull the user, device and geoFence (area?) the device should be reporting from, then determine if the tripData is originating from within the bounds of the geoFence?" Yes, you are correct here. Yes, the hierarchy is a problem, I know; it was left behind by another developer. I need to find a solution for that, but for the time being I need to solve the other problem first, which is crucial: where should I process the geoFence, in the Java listener or a PHP cron job?

Unless there will be thousands of these requests coming in per minute, I’d say that processing via the commServer would be best.

Like so:

request comes into commServer, commServer spins off the request to ConnectionHandler, ConnectionHandler passes off request information and a DB connection to a processing class. You’re already doing this, more or less.

The processing class pulls the needed data from the DB, does the calculations, saves state, writes to a log (if needed), done.

It seems to me that with some well written SQL you can keep the DB hits to a minimum.
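As a sketch of what "well written SQL" could look like against the posted schema, one join can resolve a device’s own fleet and its fences in a single round trip. This is only an illustration: it deliberately ignores the genID parent-fleet chain, which would still need extra handling:

```java
public class FenceQuery {
    // One round trip instead of two: resolve the device's fleet and pull
    // that fleet's geofences together, keyed on the UNIQUE serialNumber.
    static final String SQL =
        "SELECT g.geoFenceID, g.geoFenceType, g.geoFenceRadius, g.geoFenceString "
      + "FROM device d "
      + "JOIN geoFence g ON g.fleetID = d.fleetID "
      + "WHERE d.serialNumber = ?";
}
```

Run through a PreparedStatement with the incoming tripData.deviceSerial bound to the parameter, this replaces the separate device lookup plus geoFence select with one query.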

In the long run, you’ll need to address the single point of failure that is genID, but you already knew that.

Dear Rushiku,
So what is your best idea: go with processing in the commServer and only then store in the DB, or the second method, store first, then retrieve from the DB and process? What is your opinion here? Did you see my commServer code at the very beginning? Can it handle thousands of connections without a problem, in your opinion?

Your commServer spins off the connection to keep the connection port open for new connections, this must happen or you will only be able to talk to one client at a time.

ConnectionHandler looks like it handles communication with the client and possibly opens a DB connection.

What I would probably do is this: in ConnectionHandler.buildBean, read the client data and build a simple data bean, let’s say TripDataBean.

I would then pass the populated bean and a db connection to another method, maybe saveTripData (or whatever it is you’re doing with it) which would go about the business of figuring out what device the data belongs to and check to see if the trip data originates from within the fence (or whatever, I’m still not clear on your end objective…)

As far as a thousand connections or thousands of connections goes, that depends on your server, the db server and your bandwidth more so than your code.

At any rate, make it work, then make it work fast.

Dear Rushiku,
I don’t quite get your Bean idea. You mean from my ConnectionHandler I call another function, is that it? And then I close the ConnectionHandler, is that right? So that means the method I am using is OK to handle thousands of connections, right? For that to happen, what must I tweak in my MySQL DB, any idea? So far all I have tweaked is the number of connections. Thank you.

A bean is, strictly speaking, just a container to hold the values of a data object.

eg: Your tripData table has the following fields: latitude, longitude, speed, etc.
A tripData bean would have the following variables: latitude, longitude, speed, etc.
In addition to the variables, there will also be gets and sets for each one, like so:

public class TripDataBean {
  private String latitude;
  private String longitude;
  private String speed;

  public String getLatitude() {
    return latitude;

  public void setLatitude(String latitude) {
    this.latitude = latitude;

  // ...and likewise getLongitude/setLongitude, getSpeed/setSpeed, etc.

You would then read the incoming client data and use it to populate the bean. Now you have all the data in an easy-to-access form that you can pass to other methods and classes, rather than trying to pass around the individual bits of data.
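For example, assuming the client sends one comma-separated record per line (the wire format and the `buildBean` helper here are made up for illustration), populating the bean might look like:

```java
public class TripDataParser {
    // Minimal bean for illustration, matching the TripDataBean sketched above.
    public static class TripDataBean {
        private String latitude, longitude, speed;
        public String getLatitude() { return latitude; }
        public void setLatitude(String latitude) { this.latitude = latitude; }
        public String getLongitude() { return longitude; }
        public void setLongitude(String longitude) { this.longitude = longitude; }
        public String getSpeed() { return speed; }
        public void setSpeed(String speed) { this.speed = speed; }
    }

    // Hypothetical wire format: "latitude,longitude,speed" per line.
    static TripDataBean buildBean(String line) {
        String[] parts = line.split(",");
        TripDataBean bean = new TripDataBean();
        bean.setLatitude(parts[0]);
        bean.setLongitude(parts[1]);
        bean.setSpeed(parts[2]);
        return bean;
    }
}
```

ConnectionHandler would call something like `buildBean(r.readLine())` and then hand the bean plus a DB connection to the save/processing method.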

As to the number of connections, the code you have can create thousands of connections, yes. Can your code and hardware handle thousands of connections? I have no idea.