I am in the process of plotting a chart of temperature values against a time value.
The temperature values and times are obtained from a database.
I have managed to plot a graph of, say, 7 points, but the timing is incorrect because I have used an average interval between the start and finish times.
If the temperature readings are taken at a constant ten-minute interval the chart is correct… unfortunately the device can be stopped and later restarted with a different monitoring interval between readings.
Normally I think I’d make the x-axis represent the number of seconds between the first and last times, and then spread the points out proportionally to the number of seconds from the start. The problem is that you’ll end up with four points very close to the left-hand side of the chart and three very close to the right-hand side, as there’s an 11-hour gap between the stop and start times in that example data.
Alternatively, you could code to detect a gap longer than ‘x’ minutes and draw something to denote that the machine was stopped for that period and restarted. So a gap of more than, say, an hour would be represented by the equivalent of a ten-minute gap either side of a dotted vertical.
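To sketch that second idea: a minimal Python example, assuming the timestamps come back from the database as `datetime` objects. The function name, the sample readings, and the one-hour/ten-minute thresholds are all hypothetical; the point is just how a long gap gets compressed to a fixed width either side of where the break marker would be drawn.

```python
from datetime import datetime

# Hypothetical sample: four readings ten minutes apart, an 11-hour stop,
# then three more readings.
times = [datetime(2024, 1, 1, 8, 0),
         datetime(2024, 1, 1, 8, 10),
         datetime(2024, 1, 1, 8, 20),
         datetime(2024, 1, 1, 8, 30),
         datetime(2024, 1, 1, 19, 30),
         datetime(2024, 1, 1, 19, 40),
         datetime(2024, 1, 1, 19, 50)]

def compressed_positions(times, max_gap_s=3600.0, pad_s=600.0):
    """X offsets in 'virtual' seconds: any gap longer than max_gap_s is
    collapsed to 2 * pad_s, i.e. the equivalent of ten minutes either
    side of where a dotted vertical break marker would go."""
    xs = [0.0]
    breaks = []  # virtual x position of each break marker
    for a, b in zip(times, times[1:]):
        d = (b - a).total_seconds()
        if d > max_gap_s:
            breaks.append(xs[-1] + pad_s)  # marker sits mid-gap
            d = 2 * pad_s                  # compress the stopped period
        xs.append(xs[-1] + d)
    return xs, breaks

xs, breaks = compressed_positions(times)
```

With that sample data, the 11-hour stop collapses to a 20-minute slot in virtual time, so the seven points stay evenly spread while the break marker records where the machine was stopped.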
Currently there are more than 12,000 points over a six week period.
The temperatures are correct, but the time values are incorrect, as can be seen in the following chart, where the time intervals are an average taken between the starting and finishing points.
I’m not sure I understand the problem now, and the gap of a few hours makes less and less difference the longer the time period represented on the graph.
To me, the way to do it is to create an x-axis scale length representing the number of seconds (or minutes if that’s accurate enough) between the start time and end time, then plot the temperature for a given time at the x-pos determined by the number of seconds between that time and the start time. So if first-to-last covers a convenient 1000-minute period, then the second point will be at (second_time - first_time) * (scale / (last_time - first_time)). So if the range represents 1000 minutes, and the second point is 250 minutes in, the x-pos is (250 - 0) * (1000 / (1000 - 0)) = 250, and so on.
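That formula can be written directly as a small Python helper, again assuming the database hands back `datetime` objects; the sample readings and function name are made up for illustration.

```python
from datetime import datetime

def x_positions(times, scale):
    """Map each timestamp to an x position on a 0..scale axis,
    proportional to the seconds elapsed since the first reading:
    x = (t - first) * (scale / (last - first))."""
    t0 = times[0]
    total = (times[-1] - t0).total_seconds()
    return [(t - t0).total_seconds() * (scale / total) for t in times]

# Hypothetical sample: four readings ten minutes apart, an 11-hour stop,
# then three more readings.
times = [datetime(2024, 1, 1, 8, 0),
         datetime(2024, 1, 1, 8, 10),
         datetime(2024, 1, 1, 8, 20),
         datetime(2024, 1, 1, 8, 30),
         datetime(2024, 1, 1, 19, 30),
         datetime(2024, 1, 1, 19, 40),
         datetime(2024, 1, 1, 19, 50)]

xs = x_positions(times, 1000.0)
```

This also shows the clustering problem described earlier: the first four positions land within the first ~1.5% of the axis and the last three within the final ~1.5%, because the 11-hour gap dominates the span.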
Well, scaling it as above, you’d need to think of some means of deciding that a gap is too long (longer than twice the average gap of all the data points, perhaps?) and use that to decide you have a ‘gap’, maybe by plotting down to zero on the y-axis and then back up again when data resumes, rather than plotting between points as if nothing had happened.
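The twice-the-average-gap heuristic might look like this, again a sketch with hypothetical names, assuming `datetime` timestamps:

```python
from datetime import datetime

def gap_indices(times, factor=2.0):
    """Indices i where the interval from times[i] to times[i+1] exceeds
    `factor` times the average interval; treat those as stopped periods
    (e.g. plot down to zero and back up) instead of joining the points."""
    deltas = [(b - a).total_seconds() for a, b in zip(times, times[1:])]
    avg = sum(deltas) / len(deltas)
    return [i for i, d in enumerate(deltas) if d > factor * avg]

# Hypothetical sample: four readings ten minutes apart, an 11-hour stop,
# then three more readings.
times = [datetime(2024, 1, 1, 8, 0),
         datetime(2024, 1, 1, 8, 10),
         datetime(2024, 1, 1, 8, 20),
         datetime(2024, 1, 1, 8, 30),
         datetime(2024, 1, 1, 19, 30),
         datetime(2024, 1, 1, 19, 40),
         datetime(2024, 1, 1, 19, 50)]
```

One caveat with this approach: a single very long stop inflates the average, so with many short gaps and one huge one it works well, but several long stops of similar length could mask each other; using the median interval instead of the mean would be more robust.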