Pressure Sensors: The Design Engineers' Guide

Understanding pressure sensor specifications

The pressure sensor spec and its impact on accurate readings

There are many aspects of a pressure sensor that determine whether it is the right choice for a given application. Gauge, absolute or differential, transducer or transmitter, measurement range, fitting style/size, and absolute maximum ratings such as burst pressure are among the most important.

Several sensors may meet the application requirements in these respects. The right choice can then be guided by the factors that affect accuracy, since these ultimately determine whether the pressure measurements are dependable enough to inform the decisions the application makes.
 

Factors affecting accuracy

The major sensor characteristics that influence accuracy are temperature coefficients, temperature hysteresis, pressure hysteresis, and non-linearity.

Applicable temperature coefficients include temperature-related changes to zero offset, sensitivity, and measurement span.

A datasheet may describe accuracy-related characteristics individually, or as an overall accuracy statement calculated as the root of the sum of squares (RSS) of individual factors.
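
As an illustration of the RSS calculation, the sketch below combines three hypothetical error terms; the values are invented for the example, not taken from any particular datasheet:

```python
import math

# Hypothetical individual error terms, each in % of full scale (%FS),
# as they might appear on a datasheet.
non_linearity = 0.05   # %FS
hysteresis    = 0.01   # %FS
temp_error    = 0.30   # %FS

# Root-sum-of-squares (RSS) combines independent error sources into
# a single overall accuracy figure.
rss_accuracy = math.sqrt(non_linearity**2 + hysteresis**2 + temp_error**2)

print(f"Overall RSS accuracy: ±{rss_accuracy:.3f} %FS")  # ±0.304 %FS
```

Note that the RSS figure is dominated by the largest term, which is why a single large temperature error can swamp excellent linearity.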

Note also that accuracy can be expressed as a percentage of the full-scale range, or as a percentage of the reading. Percent of full scale (% F.S) is commonly used, meaning that if the sensor has a full-scale range of 200 psi and is specified as 1% F.S, any reading at any pressure within 0-200 psi is expected to be within ±2 psi of the true pressure.

Alternatively, if the accuracy is stated as a percentage of reading, 1% accuracy at 200 psi would again translate to an error of ±2 psi. However, at 100 psi the error would be only ±1 psi. Clearly the error cannot tend towards zero at 0 psi: at lower readings, the datasheet may quote an absolute figure, say ±0.4 psi, for pressures below a stated threshold.
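
The difference between the two conventions can be sketched as follows, using the 200 psi, 1% example above; the ±0.4 psi low-reading floor is an assumed figure for illustration:

```python
FULL_SCALE = 200.0    # psi
ACCURACY_PCT = 1.0    # 1% in both schemes

def error_full_scale(reading_psi):
    # % of full scale: the same absolute error band at every pressure
    return FULL_SCALE * ACCURACY_PCT / 100.0

def error_of_reading(reading_psi, floor_psi=0.4):
    # % of reading: the band shrinks with the reading, down to an
    # absolute floor quoted for low pressures (hypothetical 0.4 psi here)
    return max(reading_psi * ACCURACY_PCT / 100.0, floor_psi)

print(error_full_scale(200.0))   # ±2.0 psi
print(error_full_scale(100.0))   # ±2.0 psi
print(error_of_reading(200.0))   # ±2.0 psi
print(error_of_reading(100.0))   # ±1.0 psi
```

At full scale the two conventions agree; everywhere below it, percent-of-reading gives the tighter bound.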
 

Understanding the datasheet

Temperature errors are expressed over a range, called the Compensated Temperature Range (CTR), which is usually narrower than the operating temperature range. The snapshot below, taken from the datasheet for the TE Connectivity 1210 piezoresistive sensor, illustrates how various sources of error are expressed among the sensor’s key parameters. Understanding the various types of errors and how they are calculated can help when making comparisons between different sensors and choosing the most suitable component for a given application.

Parameter | Minimum | Typical | Maximum | Units | Notes
Span | 75 | 100 | 150 | mV | 1
Span (2 psi version) | 30 | | 60 | mV | 1
Zero pressure output | -2 | | 2 | mV |
Pressure non-linearity | -0.1 | ±0.05 | 0.1 | %Span | 2
Pressure hysteresis | -0.05 | ±0.01 | 0.05 | %Span |
Input/output resistance | 2500 | 4400 | 6000 | Ω |
Temperature error – span | -0.5 | ±0.3 | 0.5 | %Span | 3
Temperature error – zero | -0.5 | ±0.1 | 0.5 | %Span | 3
Thermal hysteresis – zero | | ±0.1 | | %Span | 3
Supply current | | 1.5 | 2.0 | mA |
Response time (10% to 90%) | | 1.0 | | ms | 4
Output noise (10 Hz to 1 kHz) | | 1.0 | | μV p-p | 5
Long-term stability (offset & span) | | ±0.1 | | %Span | 5
Pressure overload | | | 3X rated | | 6
Compensated temperature | 0 | | 50 | °C |
Operating temperature | -40 | | +125 | °C |
Storage temperature | -50 | | +150 | °C |
Weight | | | 3 | grams |
Solder temperature: 250°C max, 5 sec.
Media: non-corrosive dry gases compatible with silicon, pyrex, RTV, gold, ceramic, nickel, and aluminum


Parameters for TE Connectivity's 1210 piezoresistive pressure sensor, including offsets and temperature coefficients affecting accuracy

Notes

  1. Ratiometric to supply current
  2. Best fit straight line
  3. Maximum temperature error between 0°C and 50°C with respect to 25°C. For 2 psi devices, Temperature Error – Zero is ±1%
  4. For a zero-to-full scale pressure step change
  5. Long term stability over a one year period with constant current and temperature
  6. 2X maximum for 100psi device. 20psi maximum for 2 and 5psi devices

Temperature coefficient of zero offset

The sensor’s zero offset is the output when pressure on both sides of the diaphragm is equal. This is expressed as the Zero Pressure Output in the datasheet snapshot above. A constant offset can be trimmed out at manufacture, but the offset also changes with temperature.

The temperature coefficient of zero offset, or Temperature Error – Zero in the datasheet above (alternatively referred to as TCZ) is calculated by measuring the difference between the offset output at the standard temperature and at the lower and upper limits of the Compensated Temperature Range (CTR), and expressing the larger of the two differences as a ratio of full-scale.
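
A minimal sketch of the TCZ calculation, using invented offset readings and a hypothetical 100 mV full-scale span:

```python
# Hypothetical zero-offset readings (mV) at the standard temperature
# and at the limits of the compensated temperature range (CTR).
offset_25c = 0.0    # mV at 25 °C (standard temperature)
offset_0c  = -0.4   # mV at the lower CTR limit
offset_50c = 0.3    # mV at the upper CTR limit
span_fs    = 100.0  # mV, full-scale span

# TCZ: the larger of the two offset shifts, as a percentage of span
tcz = max(abs(offset_0c - offset_25c),
          abs(offset_50c - offset_25c)) / span_fs * 100.0
print(f"Temperature error – zero: ±{tcz:.2f} %span")  # ±0.40 %span
```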

Temperature coefficient sensitivity

Sensitivity as quoted in the datasheet quantifies the change in output per unit change in applied pressure. It’s typically affected by the excitation voltage and expressed in terms of output millivolts per applied volt of excitation voltage (mV/V).

The sensitivity may change with operating conditions, particularly temperature. The sensitivity shift across the Compensated Temperature Range (CTR) is expressed as a percentage of full scale per °C change in temperature. The illustration below shows how the temperature coefficient of sensitivity is expressed for Amphenol NPA series surface-mount sensors.

Parameter | Minimum | Typical | Maximum | Units | Notes
Pressure range | | 0.36 to 1 | | psi | 10"H₂O = 2.5 kPa
Excitation | | 1.5 | | mA | 10 VDC max
Input impedance | | 5000 ± 20% | | Ω |
Output impedance | | 5000 ± 20% | | Ω |
Zero offset | | ±75 | | |
Full scale output | | 40 to 120 | | mV | 10"H₂O
 | | 75 to 135 | | mV | 1 psi
Linearity | | ±0.25 | | %FSO | BFSL
Pressure hysteresis | | ±0.20 | | %FSO |
Temperature coefficient of zero | | ±30 | | μV/V/°C |
Temperature coefficient of resistance | | 0.29 | | %/°C |
Temperature coefficient of sensitivity | | -0.2 | | %FSO/°C |
Thermal hysteresis of zero | | ±0.15 | | %FSO |
Position sensitivity | | 0.2 | | %FSO |

 

Temperature coefficient of sensitivity as expressed in the Amphenol NPA series datasheet

Temperature coefficient of measurement span

The magnitude of the sensor's full-scale output is affected by temperature. This is called Temperature Error – Span in the TE datasheet sample, and may also be referred to as the temperature coefficient of span (TCS). It is calculated in a similar way to the TCZ: the full-scale output at the upper and lower CTR limits is compared with the full-scale output at the standard temperature, and the larger of the two differences is expressed either as a total percentage of span over the CTR (as in the TE datasheet) or as a coefficient in percent per degree (%/°C).
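
The TCS calculation can be sketched with invented span measurements (a hypothetical 100 mV nominal span, compared at the CTR limits):

```python
# Hypothetical full-scale span (mV) at the standard temperature
# and at the compensated temperature range (CTR) limits.
span_25c = 100.0   # mV at 25 °C
span_0c  = 99.6    # mV at the lower CTR limit
span_50c = 100.3   # mV at the upper CTR limit
deg_from_ref = 25.0  # °C from 25 °C to either CTR limit

# Larger of the two span shifts relative to the standard temperature
shift = max(abs(span_0c - span_25c), abs(span_50c - span_25c))
tcs_total = shift / span_25c * 100.0          # % of span over the CTR
tcs_per_deg = tcs_total / deg_from_ref        # %/°C, if quoted per degree
print(f"±{tcs_total:.2f} %span  (±{tcs_per_deg:.3f} %/°C)")
```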

Pressure hysteresis and temperature hysteresis

A sensor may give different readings for the same measured pressure, depending on whether the pressure has increased or decreased to reach the measured value. Key factors that cause pressure hysteresis include the characteristics of the diaphragm or strain-gauge material.

Pressure sensors can also exhibit temperature hysteresis, which results in a different pressure reading being produced at a given pressure and temperature depending on whether the temperature has increased or decreased to the value at which the measurement is taken. Temperature hysteresis is influenced by measurement conditions such as dwell time and temperature range, and is expressed as a percentage of full scale over the CTR.

Non-linearity


A graphical representation of non-linearity using the Best Fit Straight Line method

Non-linearity expresses the difference between the actual output of the sensor and the predicted response according to its typical performance. Non-linear responses can be affected by factors such as temperature, humidity, and vibration or other disturbances. Non-linearity can be expressed mathematically, as a percentage:

Non-linearity (%) = (Din(max) / INf.s.) × 100

where:

  • Din(max) is the maximum input deviation
  • INf.s. is the maximum, full-scale input


Non-linearity can also be shown graphically (see right), illustrating how the output voltage can deviate across the full-scale range. In this context, linearity can be quantified using the Best Fit Straight Line (BFSL) method, which uses mathematical regression to plot the straight line that gives equal weighting to points above and below the line.
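
A minimal sketch of the BFSL approach, fitting an ordinary least-squares line to hypothetical calibration points and reporting the worst-case deviation as a percentage of full-scale output:

```python
# Hypothetical calibration points (invented for this example)
pressures = [0.0, 25.0, 50.0, 75.0, 100.0]   # psi
outputs   = [0.0, 24.8, 50.1, 75.3, 100.0]   # mV

# Ordinary least-squares fit: this is the line that balances
# points above and below it (the "best fit straight line")
n = len(pressures)
mean_x = sum(pressures) / n
mean_y = sum(outputs) / n
slope = (sum((x - mean_x) * (y - mean_y)
             for x, y in zip(pressures, outputs))
         / sum((x - mean_x) ** 2 for x in pressures))
intercept = mean_y - slope * mean_x

# Worst-case deviation from the fitted line, as a % of full-scale output
full_scale_output = max(outputs) - min(outputs)
max_dev = max(abs(y - (slope * x + intercept))
              for x, y in zip(pressures, outputs))
non_linearity = max_dev / full_scale_output * 100.0
print(f"BFSL non-linearity: ±{non_linearity:.2f} %FS")  # ±0.21 %FS
```

Replacing the regression line with the line joining the first and last points would give the terminal-line figure instead, which is typically larger.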


Terminal line method for calculating non-linearity
Alternative methods may be used, such as the terminal line technique, which expresses non-linearity as the maximum deviation from a straight line joining the zero and full-scale points (see left). The terminal line method eliminates zero-point and full-span errors, which simplifies recalibration if a sensor is replaced in the field.  

The datasheet should state which method has been used. A note in the TE datasheet above tells the reader that the BFSL method was used to calculate the 1210’s typical non-linearity to be ±0.05 %span.

High-linearity pressure sensors can be produced by optimising the construction of the sensor, such as the diaphragm mounting, building the sensor using high-quality materials, and applying electronic compensation.

Several other parameters can affect the sensor accuracy, and should be considered when choosing the right sensor for a given application. These include resolution, dynamic characteristics, and long-term stability, as we’ll now explore.

Resolution

Resolution is the smallest incremental change in pressure that can be displayed at the output. It may be expressed as a proportion of the reading or the full-scale range, or as an absolute figure. Depending on the application, the pressure resolution may be easily related to real-world performance: a pressure sensor with 3mbar resolution, used in a depth gauge, will allow depth-measurement resolution of 3cm in water. Note that a sensor’s accuracy cannot be greater than its resolution.
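
The depth-gauge figure can be checked with the hydrostatic relation P = ρgh; assuming fresh water, 3 mbar of pressure resolution corresponds to roughly 3 cm of depth:

```python
RHO_WATER = 1000.0  # kg/m³, fresh water assumed
G = 9.81            # m/s², standard gravity

def depth_resolution_cm(pressure_res_mbar):
    # Hydrostatic relation: P = rho * g * h  ->  h = P / (rho * g)
    pressure_pa = pressure_res_mbar * 100.0   # mbar -> Pa
    return pressure_pa / (RHO_WATER * G) * 100.0  # metres -> cm

print(f"{depth_resolution_cm(3.0):.2f} cm")  # ~3.06 cm per 3 mbar
```

The exact figure depends on water density, so a salt-water depth gauge would resolve slightly finer depth steps for the same pressure resolution.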

Response time and dynamic performance

Response time is an expression of the sensor’s ability to change and stabilise at the new value, within the specified tolerance, in response to a change in the applied pressure. The response time may be different depending on whether the change is positive- or negative-going.

The datasheet may quote response time as a time constant, which is the time for the sensor signal to change from zero to 63.2% of full-scale range when an instantaneous full-scale change in pressure is applied.
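
Assuming a simple first-order response, the fraction of the final value reached after a given number of time constants can be sketched as:

```python
import math

def settled_fraction(t_ms, tau_ms):
    # First-order step response: fraction of the final value
    # reached a time t after an instantaneous pressure step
    return 1.0 - math.exp(-t_ms / tau_ms)

TAU = 1.0  # hypothetical 1 ms time constant
print(f"{settled_fraction(1.0, TAU) * 100:.1f}%")  # 63.2% after one tau
print(f"{settled_fraction(5.0, TAU) * 100:.1f}%")  # 99.3% after five tau
```

This is why a sensor needs roughly five time constants to settle within 1% of the new value, a useful rule of thumb when comparing a quoted time constant against a 10%-to-90% response time.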

Faster-acting sensors may be described in terms of their frequency response, or flat frequency, which is the maximum pressure-change frequency that can be converted into an output signal without distortion.

Dynamic linearity is an important parameter in applications that must monitor rapidly changing pressure. It can be influenced not only by the response time, but also by other characteristics such as amplitude and phase distortion.

Long-term stability or natural drift

Sensor accuracy tends to drift over time, due to ageing, environmental factors, and other application-related influences. Such drift is not predictable, and may have a positive or negative coefficient. Referring to the datasheet sample above, TE expresses the long-term stability as a percentage of the full-scale range, over a period of one year and assuming constant current and temperature. Hence the stability quoted in the datasheet can only be used as a guide, not as a guarantee of performance in the target application.

Other operational factors to consider

In this article, we’ve described key factors that affect the accuracy of a pressure sensor. Depending on the application, some aspects such as dynamic performance or resolution may be less important than others like linearity or temperature-related drift.

Once the optimum sensor has been selected on paper, it’s important to remember that other factors such as the equipment design, and day-to-day use can also influence pressure-sensing accuracy on setup and in the longer term.

Improper installation, for example, is often the underlying cause if a system fails to deliver the expected accuracy when deployed. This could be prevented by design, or by ensuring the equipment is shipped with clear installation instructions.

Application-related variables such as temperature, specific gravity of monitored fluids, dielectric characteristics, turbulence, changes in atmospheric pressure, or unexpected obstructions, blockages or vapour locks may also impair accuracy. Taking any likely effects into account when designing the equipment, and where possible selecting sensors that are immune or benefit from suitable compensation, can help to mitigate or avoid unacceptable inaccuracy.

And, of course, ensuring initial calibration, with regular recalibration at suitable intervals, is essential to safeguard long-term accuracy.
