Leak testing can be done by charging a closed vessel to some pressure and monitoring the change in pressure, but there are limits to its usefulness.
A major limit is the resolution of the monitoring system relative to the minimum leak rate that has to be detected.
For instance, detecting a leak of one bubble (call a bubble 1 mL for the sake of argument) from a 100 L vessel submerged in water would require a resolution of better than 1 part in 100,000 (1 mL in 100,000 mL), and the quoted resolution of a display is usually only plus or minus a few counts in the lowest digits.
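To put rough numbers on that (my own back-of-envelope, not measured data: an isothermal ideal gas and a hypothetical 5000 kPa absolute charge, the same order as the example further down):

```python
# Sketch: pressure change from losing one 1 mL bubble out of a 100 L charge.
# Isothermal ideal gas, so dP/P ~ dV/V. All figures are illustrative.

vessel_volume_mL = 100_000        # 100 L vessel, expressed in mL
bubble_volume_mL = 1              # one "bubble" of leaked gas
charge_pressure_kPa = 5000        # hypothetical absolute charge pressure

fraction_lost = bubble_volume_mL / vessel_volume_mL        # 1e-05
delta_p_Pa = charge_pressure_kPa * 1000 * fraction_lost    # ~50 Pa

print(f"Need roughly 1 part in {vessel_volume_mL // bubble_volume_mL:,} of reading, "
      f"i.e. about {delta_p_Pa:.0f} Pa at {charge_pressure_kPa} kPa.")
```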
A pressure-charged closed vessel is a closed system, so the gas laws apply: combine Boyle's and Charles's laws and, at constant volume, the pressure tracks the absolute temperature. The change in pressure due to a low leak rate in a (relatively) large vessel would not be noticeable, because the pressure changes caused by temperature changes would mask the pressure change due to the leak.
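For a sense of scale (same hypothetical 5000 kPa absolute charge, ideal gas at constant volume, and an assumed 1 K temperature drift during the test - again my numbers, not measurements):

```python
# Sketch of the temperature effect. Constant volume, ideal gas: P/T = const,
# so dP ~ P * dT / T. All values are illustrative assumptions.

p_abs_kPa = 5000          # absolute charge pressure
t_ambient_K = 293.15      # about 20 degC
dT_K = 1.0                # a modest 1 K drift during the test

dP_temp_kPa = p_abs_kPa * dT_K / t_ambient_K   # ~17 kPa
dP_leak_kPa = 0.05                             # the 1 mL-in-100 L signal from above

print(f"A 1 K drift moves the pressure by about {dP_temp_kPa:.0f} kPa, "
      f"versus about {dP_leak_kPa} kPa from the leak itself.")
```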
Although a system might display an analog value to 5 or 6 digits (particularly one that works in floating-point math), that does not mean the low-order digits are usable, or that they are precise or accurate.
For a 5000 kPa (50 bar) system to display a reading of 5,000,000 Pa down to the last pascal with any degree of accuracy or precision is not realistic with industrial-grade instrumentation.
The best industrial pressure transmitters are 0.01% accurate, which is 1 part in 10,000 - a factor of about 500 (nearly three orders of magnitude) coarser than the 1-part-in-5,000,000 resolution implied by reading 5,000,000 Pa to the last pascal.
At 5000 kPa, an industrial analog transmitter with 0.1% accuracy gives a reading good to about plus or minus 5 kPa. The question then becomes: what leak rate needs to be detected, and over what period of time, to declare that the device 'passes'? Just the cooling of a freshly compressed charge, or the warming of gas that has expanded out of a high-pressure source (like a gas bottle), will show up as a pressure change.
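A back-of-envelope sketch of what that accuracy implies (my assumptions: the same 100 L vessel and 5000 kPa charge as above, and requiring the pressure drop to clear twice the plus/minus 5 kPa band before it is believed):

```python
# Sketch: how much gas must escape before the pressure drop stands clear
# of the transmitter's uncertainty band. All values are illustrative.

p_abs_kPa = 5000              # absolute charge pressure
vessel_volume_mL = 100_000    # the 100 L vessel from the earlier example
accuracy_kPa = 5.0            # +/- 0.1% of 5000 kPa
detectable_drop_kPa = 2 * accuracy_kPa   # require the drop to clear the band

# Isothermal ideal gas: dV/V ~ dP/P, so the gas lost (measured at charge
# pressure) before the drop becomes trustworthy is roughly:
lost_volume_mL = vessel_volume_mL * detectable_drop_kPa / p_abs_kPa

print(f"About {lost_volume_mL:.0f} mL of charge gas must escape before "
      f"a {detectable_drop_kPa:.0f} kPa drop is distinguishable from noise.")
```

On those assumptions, roughly 200 mL of gas - a couple of hundred bubble-sized volumes - has to escape before the pressure reading alone says anything definite, which is why the waiting period matters.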
My personal experience is that leak detection via pressure charge is only good for rather catastrophic leaks due to production faults - bad seams/welds that bleed off pressure quite rapidly.