calibrating thermocouples

Status
Not open for further replies.

Designer69

Senior Member
Hey guys, I'm not sure if this question belongs here, but maybe there are some I&C guys here who can help.

When calibrating a temp indicator, it is typically not calibrated for the full temp range of the associated thermocouple.

I have heard this is done to improve resolution. Is this true? If so, how does it actually improve the resolution?

thanks
 

gar

Senior Member
Location
Ann Arbor, Michigan
Occupation
EE
110330-1645 EDT

The resolution of an analog signal source is determined by the signal conditioning equipment and the inherent noise generated by the signal source. A thermocouple is an analog signal source.

Calibration of a signal source has nothing to do with the resolution you can achieve with it. I have digital temperature sensors with which I am working. These have a resolution of 1/16 of a degree C, but their uncalibrated accuracy is specified as only +/-0.5 deg C from -10 to +85 deg C, and +/-2 deg C from -55 to +125 deg C. For incremental changes, multiply deg C by 9/5 (180/100) to obtain the deg F change.
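
A minimal Python sketch of that resolution-versus-accuracy distinction, assuming a hypothetical sensor that reports temperature as a signed integer count with a 1/16 deg C LSB (the figures above match 12-bit parts such as the DS18B20):

Code:
def counts_to_celsius(raw_count: int) -> float:
    """Convert a signed sensor count (1/16 deg C per LSB) to deg C."""
    return raw_count / 16.0

def delta_c_to_delta_f(delta_c: float) -> float:
    """Incremental change only: multiply deg C by 9/5 (180/100)."""
    return delta_c * 9.0 / 5.0

temp_c = counts_to_celsius(401)           # 401 counts -> 25.0625 deg C
lsb_f = delta_c_to_delta_f(1.0 / 16.0)    # one LSB is 0.1125 deg F
print(temp_c, lsb_f)
# The sensor resolves 1/16 deg C steps, yet its uncalibrated accuracy
# is still only +/-0.5 deg C over -10 to +85 deg C.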

I believe the short-term stability of the sensor is as good as the resolution.

The useful resolution of a sensor and signal conditioning system may be noticeably poorer than the inherent resolution of the sensor. This will be determined by the combined noise of the two parts of the system.

If you have a system with high resolution and stability but poor accuracy, then by using accurate references you can calibrate the system over whatever working range you want and achieve accuracy comparable with the useful resolution of the system. You can do this at as many points as practical and interpolate between these points. How many points you pick at which to calibrate is a function of the inherent linearity, shape of the curve, and money.
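
A minimal sketch of that multi-point approach in Python, with hypothetical instrument readings taken against accurate references and straight-line interpolation between the calibration points:

Code:
import numpy as np

# Hypothetical calibration table: what the instrument read at each
# accurately known reference temperature (deg C).
readings   = np.array([0.3, 25.4, 50.6, 75.5, 100.2])   # instrument showed
references = np.array([0.0, 25.0, 50.0, 75.0, 100.0])   # references really were

def corrected(reading: float) -> float:
    """Correct a raw reading by interpolating between calibration points."""
    return float(np.interp(reading, readings, references))

print(corrected(37.8))   # a raw reading mapped back toward the true value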

Short and long term stability will determine how often recalibration is necessary.

.
 

wiigelec

Member
Location
Red Desert
For example, you have a temp transmitter ranged from 0-200 dF (degrees Fahrenheit) and the output signal is 4-20 mA, i.e., 4 mA represents 0 dF and 20 mA represents 200 dF. The resolution of this transmitter is 200 dF / 16 mA = 12.5 dF per mA; each mA of output signal represents a 12.5 dF change in temperature.

Now if we range the transmitter from 0-100 dF, we get 100 dF / 16 mA = 6.25 dF per mA; each mA of output signal represents a 6.25 dF change in temperature.

The 0-100 dF ranged transmitter has a higher resolution than the 0-200 dF ranged transmitter...
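
A short Python sketch of that ranging arithmetic, using the two ranges from the example:

Code:
def ma_to_degf(current_ma: float, low_f: float, high_f: float) -> float:
    """Map 4-20 mA loop current onto the configured range: 4 mA -> low_f, 20 mA -> high_f."""
    return low_f + (current_ma - 4.0) / 16.0 * (high_f - low_f)

print(ma_to_degf(12.0, 0.0, 200.0))   # 100.0 dF: 12.5 dF per mA on the 0-200 range
print(ma_to_degf(12.0, 0.0, 100.0))   # 50.0 dF:  6.25 dF per mA on the 0-100 range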
 

Designer69

Senior Member
Awesome explanations, I love this site!

Wiigelec, that makes perfect sense. That was the basic algebra formula I was missing. So it's simply the range of the measurement divided by the range of the output. That formula should govern all resolution criteria. Agreed?
 

G._S._Ohm

Senior Member
Location
DC area
The displayed resolution should probably be 20% or less of the reading change that the instrument is capable of resolving.

The posted instrument accuracy, precision, and repeatability specs will give you some insight into this concept, and possibly the NIST site has an overview. This gets pretty involved pretty quickly, and the more accuracy you want, the worse it gets.
 

gar

Senior Member
Location
Ann Arbor, Michigan
Occupation
EE
110331-1227 EDT

designer69:

Your resolution is not defined by the full scale range of a system, but by the smallest incremental change you can measure.

If two instruments both have a full scale range of 0 to 100 units, but one can only display changes of 1 unit while the other can detect changes of 0.01 units, then the second has 100 times the resolution of the first with the same full scale range.

From the following site:
http://www.dataforth.com/catalog/pdf/an107.pdf
"The sensitivity of a J Type thermocouple is approximately 54 μV/°C. It is obvious from Figure 2 that assuming a linear response for J Type thermocouples could result in nearly two degrees of error."
The associated curve (Figure 2) shows a peak error of about 90 microvolts from 0 to 100 deg C.

To calibrate a temperature sensor you need reference temperatures. Two fairly useful points for the 0 to 100 C range are ice water and boiling water. You need pure water and standard pressure for these to be accurate, or some knowledge that will allow for correction of their actual transition points.

Suppose you accurately calibrate your thermocouple measurement system at these two points to within 1/100 of a deg C. This would be possible with your readout limited to a resolution of 1 deg if internally it had 1/100 deg capability. Now amplify the input to the instrument so you can read from 40 to 50 deg C with a resolution of 0.1 degree. Your reading at 50 deg C will not be accurate to 0.1 deg C; the error will be near 2 deg whether the resolution is increased or not. What you will have achieved is the ability to see a smaller change in temperature in a relative sense.
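
A small Python sketch of that two-point calibration, with hypothetical raw readings at the ice and boiling references. It removes gain and offset error only; the thermocouple's curvature between the points (the roughly 2 deg error above) remains:

Code:
raw_ice,  ref_ice  = 0.7,   0.0     # hypothetical reading in ice water (deg C)
raw_boil, ref_boil = 101.9, 100.0   # hypothetical reading in boiling water (deg C)

gain   = (ref_boil - ref_ice) / (raw_boil - raw_ice)
offset = ref_ice - gain * raw_ice

def corrected(raw: float) -> float:
    """Two-point linear correction; exact only at the two reference points."""
    return gain * raw + offset

print(corrected(50.0))   # gain/offset corrected, but still subject to curvature error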

.
 

Designer69

Senior Member
For example, you have a temp transmitter ranged from 0-200 dF (degrees Fahrenheit) and the output signal is 4-20 mA, i.e., 4 mA represents 0 dF and 20 mA represents 200 dF. The resolution of this transmitter is 200 dF / 16 mA = 12.5 dF per mA; each mA of output signal represents a 12.5 dF change in temperature.


Wiig,

Why not 201/17? There are 201 integers from 0 to 200 and 17 from 4 to 20.
 

gar

Senior Member
Location
Ann Arbor, Michigan
Occupation
EE
110331-1434 EDT

Designer69:

There are not 201 increments 1 increment wide from 0 to 200, but only 200. The first increment ends at 1.000000 and the last at 200.000000. You could define each increment from 1 thru 200, with 1 centered at 0.5 and 200 centered at 199.5. If you wanted, you could make 0 an element of both the + and - ranges and end at +/-199 for your increments. On the other hand, if you were talking about a number system modulo 200, it would probably be defined from 0 thru 199. But our clocks are modulo 12 and range from 1 thru 12. I prefer the clock system based on 0 thru 23, 0 thru 59.

If I have a digital display with accuracy better than 1/10 of reading, then I might like transitions to change at the 0.5 points. Then I have 0 for anything in the range of +/-0.5, +1 for the range 0.5+ thru 1.5, etc., and 200 occurs between 199.5+ and 200.5. So there are 200 possible values from 0 thru 199, with a mean difference between them of 1. This way 0 has a width of 1 unit like everything else.

When you scale something you take the upper value minus the lower value as the range. Do this for both items being scaled and divide. The result is most likely a non-integer value.
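
A quick check of that rule in Python, using the transmitter ranges from earlier in the thread. The endpoints must map exactly onto each other, which a 201/17 scale factor would not do:

Code:
low_f, high_f   = 0.0, 200.0    # temperature range (dF)
low_ma, high_ma = 4.0, 20.0     # signal range (mA)

scale = (high_f - low_f) / (high_ma - low_ma)        # 12.5 dF per mA (200/16)
assert low_f + scale * (high_ma - low_ma) == high_f  # 20 mA lands exactly on 200 dF

bad_scale = 201.0 / 17.0                             # "count the endpoints" version
print(low_f + bad_scale * (high_ma - low_ma))        # about 189.2 dF -- misses full scale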

I prefer to treat 0 thru 9 as one decade, and 10 thru 19 as another, and thus I consider 2000 thru 2009 as a decade. Others consider 2001 thru 2010 as a decade. Logically from a number theory perspective I find the 1 thru 10 approach awkward.

.
 

gadfly56

Senior Member
Location
New Jersey
Occupation
Professional Engineer, Fire & Life Safety
Hey guys, I'm not sure if this question belongs here, but maybe there are some I&C guys here who can help.

When calibrating a temp indicator, it is typically not calibrated for the full temp range of the associated thermocouple.

I have heard this is done to improve resolution. Is this true? If so, how does it actually improve the resolution?

thanks

Gar's explanation is great. You have to be aware that "calibration" has different meanings for different applications. What gar describes is setting the span and zero for a typical field instrument. You should be aware that this type of "calibration" assumes a linear input/output relationship between the measured variable and the instrument. In fact, it's accurate only at the two measured points, and only for as long as the instrument's stability prevents a drift at either point of 1 unit of display.

You can purchase "ice point" calibrators with NIST traceability and DC mV sources to compare your sensor's output at fixed temperatures. Then you construct a "strapping table" of sensor readings vs. "true" inputs. Say your instrument reads 10 Celsius but your table indicates the true value is 9.6 Celsius; you record 9.6 Celsius. Some instruments allow you to program the strapping table into them, allowing direct readout of the calibrated temperature along with interpolation for between-point values.

It's been a long time since I had to deal with this sort of thing, but I think you could also have the instrument perform a least-squares polynomial fit and use the resulting equation to report values. This can be handy for process control situations where you don't want discontinuities to cause the process to "hunt" excessively.
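
A brief Python sketch of that least-squares idea, fitting a low-order polynomial to a hypothetical strapping table (including the 10 -> 9.6 Celsius point mentioned above):

Code:
import numpy as np

# Hypothetical strapping table: sensor readings vs. "true" inputs (deg C).
sensor_readings = np.array([0.4, 10.0, 25.3, 50.7, 75.6, 100.1])
true_inputs     = np.array([0.0,  9.6, 25.0, 50.0, 75.0, 100.0])

# Least-squares fit of a 2nd-order polynomial, then use it to report values.
correct = np.poly1d(np.polyfit(sensor_readings, true_inputs, deg=2))

print(correct(10.0))   # close to 9.6; a smooth curve leaves no discontinuities to hunt on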
 

GeorgeB

ElectroHydraulics engineer (retired)
Location
Greenville SC
Occupation
Retired
Gar's explanation is great. You have to be aware that "calibration" has different meanings for different applications. What gar describes is setting the span and zero for a typical field instrument.
I'm of the opinion that "spanning" and "calibrating" are two different things, which may be identical for linear systems. Calibrating, to me, is matching the output curve (often a straight line) to the (usually nonlinear) input variable. In this case, a thermocouple's electronics "linearizes" the device.

Many thermocouples may be used over a range in excess of 1000 C degrees. If I want to monitor room temperature, I may only need 50-86 F (10-30 C). In the old days, before microprocessors, there were multiple stages which worked over parts of the "transfer function" to make the whole range as linear as practical. As stated in gar's post above, if a J thermocouple is AMPLIFIED (with an appropriate cold junction) without curve correction, and used between 0 and 100 C, the voltage error vs. a straight line is a maximum of about 1.8 C. If I use 2 line segments between 0 and 100 C, I believe I can get that down to less than 0.2 C, less than the variability due to wire characteristics.

BUT, if I only need 10 to 30C (50-86F), the straight line fit will again be closer than the wire effects.

Then, if I convert that to a (say 10-bit, 1024-step) digital signal, my analog-to-digital converted value will have much higher resolution (~50 counts/degree C) than is real. If I use the full 100 C range, same 10-bit conversion, my resolution is 10 counts/degree C ... maybe repeatable, but still it exceeds accuracy. If I take a 1000 C range, the same 10-bit conversion ... 1 degree/count ... resolution and accuracy are PROBABLY in the same range.
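
A quick Python sketch of that counts-per-degree arithmetic for a 10-bit converter:

Code:
adc_steps = 2 ** 10   # 1024 steps from a 10-bit converter

for span_c in (20.0, 100.0, 1000.0):   # 10-30 C, 0-100 C, and 0-1000 C spans
    print(f"{span_c:6.0f} C span: {adc_steps / span_c:6.2f} counts per deg C")
# ~51 counts/deg C on the 20 C span, ~10 on 100 C, ~1 on 1000 C --
# the narrow ranges resolve far more than the accuracy supports.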

It's words, only words.

Think about your voltmeter. 1 volt resolution on a 460V system is pretty good. 1 volt resolution on an automobile system is not very good.
 