
Reconstruction parameters.

What is the difference between the tags PARTAG_PROJACQUIRED and PARTAG_PROJRECON?

There is, unfortunately, some mixture of historical and ambiguity issues here. In recent COBRA versions (Cobra6 and Cobra7), PARTAG_PROJACQUIRED is not effective and has no influence on the reconstruction process; it exists only for compatibility purposes.

Can you give more detail on PARTAG_SCALEFACTOR and OPTTAG_SLICESCALE?

Let's assume that we are dealing with raw data. We typically get 15-bit ADC values as the input signal, giving a dynamic range of [0, 32767]. Then we take the logarithm and calibrate against air (bright field). This converts the input signal to log attenuation with a dynamic range of [0, 15] (floating point). The next step is filtering, which converts the signal to the range [-15, 15]. The next step, accumulation, is done in integer arithmetic, so one has to scale the range [-15, 15] up as much as possible to avoid overflow and underflow. The scale factor 600 is the default for biological objects, and this is what you get if you put -1.
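To make this chain concrete, here is a minimal sketch of the preprocessing described above (all names are illustrative, not COBRA's actual API; log2 is assumed because it maps a 15-bit signal exactly onto [0, 15], and the filter kernel is only a stand-in for the real ramp filter):

    import numpy as np

    def preprocess(raw, bright_field, scale_factor=600):
        """Illustrative sketch: 15-bit ADC counts -> scaled integer projections."""
        raw = np.clip(raw.astype(np.float64), 1, None)             # avoid log(0)
        bright = np.clip(bright_field.astype(np.float64), 1, None)
        # Log-calibrate against air: 0 for unattenuated rays, up to ~15
        # for fully absorbed ones (log2 of a 15-bit signal is at most 15).
        attenuation = np.log2(bright) - np.log2(raw)
        # Filtering introduces negative lobes, widening the range to
        # roughly [-15, +15]; a toy high-pass kernel stands in here.
        kernel = np.array([-0.25, 0.5, -0.25])
        filtered = np.apply_along_axis(
            lambda row: np.convolve(row, kernel, mode="same"), 1, attenuation)
        # Scale up for integer accumulation: with the default factor 600
        # the values stay within roughly [-9000, +9000].
        return np.round(filtered * scale_factor).astype(np.int32)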
Please keep in mind that (as stated in the User Manual, part 6) the auto-detection procedure may fail, as any auto-detection can, and it is better to set this factor yourself after analyzing the file preprocd_proj.000.
The picture is a little different if you have high-contrast objects, such as metal, in the projection. The filtered image gets a bigger dynamic range (higher gradients), so one should apply a reduced scale factor. For dental cases we have always recommended 300-500 instead of 600 (100 is probably too low).
Regarding the difference between 100 and 600: with a scale factor of 100, the "biological regions" of your projections get a dynamic range of approximately [-1800, +1800] instead of 24000, which is totally fine.
Also please keep in mind that PARTAG_SCALEFACTOR is "compensated out" before slice dumping. Slices reconstructed with 100 or 1000 as PARTAG_SCALEFACTOR will have the same density numbers (up to the level of digital noise) as long as OPTTAG_SLICESCALE is kept the same.
Output signal is the result of the following operations:

  S_prepro         = PreproOperations(S_input)
  S_normalized     = S_prepro * PARTAG_SCALEFACTOR
  S_bp_accumulated = Backprojecting(S_normalized)
  S_slice_output   = S_bp_accumulated / Number_Of_Projections / PARTAG_SCALEFACTOR * Normalized_Factor * OPTTAG_SLICESCALE
So PARTAG_SCALEFACTOR does not affect the amplitude of the output result, though choosing it too high or too low may cause overflow or underflow. The purpose of PARTAG_SCALEFACTOR is to push the [-15, +15] scale of S_prepro to a scale suitable for the integer operations in the BP accumulation process.
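The cancellation is easy to verify numerically. In the hedged sketch below the backprojector is reduced to a plain sum over projections (illustrative only, not COBRA's actual code); two different scale factors produce the same output up to integer rounding noise:

    import numpy as np

    rng = np.random.default_rng(0)
    s_prepro = rng.uniform(-15, 15, size=(360, 256))   # 360 filtered projections
    n_proj, normalized_factor, slice_scale = 360, 1.0, 1.0

    def reconstruct(scale_factor):
        s_normalized = np.round(s_prepro * scale_factor).astype(np.int64)
        s_accumulated = s_normalized.sum(axis=0)       # stand-in for backprojection
        return (s_accumulated / n_proj / scale_factor
                * normalized_factor * slice_scale)

    a, b = reconstruct(100), reconstruct(1000)
    print(np.max(np.abs(a - b)))   # tiny: only the quantization noise differs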

Is it always better to run a detector at its full resolution?

Image quality depends not only on detector resolution but also on the reconstruction voxel size; the two have to match to produce optimal reconstructed images. A typical implementation of Feldkamp reconstruction is the so-called voxel-driven algorithm. Its particular drawback is that each voxel is treated as a point in 3D space: during backprojection, a voxel is projected onto the projection as a point and receives its value from that spot, not from an area.

To illustrate the effect, say your reconstruction cube is 512^3 and your projection is 2048^2. Then the voxels of any vertical slice get contributions from at most 512^2 * 4 = 1024^2 pixels of the projection (the factor of 4 comes from bilinear interpolation). The remaining detector pixels (2048^2 - 1024^2 of them) are never read. Making the projection resolution much finer therefore only makes the reconstruction noisier: finer pixels each collect fewer counts, so the few pixels a voxel actually samples are individually noisier, while the information in the unused pixels is discarded. Beyond the balance point Cube resolution ~= Projection resolution / 2, you gain nothing in the final images but noise. In particular, to reconstruct a 512 cube it is better to down-sample the projections from 2048 to 1024.
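The 4-pixel footprint is easy to see in code. Below is a hedged sketch of the voxel-driven inner step (not COBRA's actual implementation): no matter how fine the detector grid is, each voxel update reads exactly the four pixels around one projected point.

    import numpy as np

    def backproject_voxel(projection, u, v):
        """Bilinear interpolation at the projected voxel position (u, v), in pixels.

        Only the 4 neighbours of (u, v) contribute -- hence the
        512^2 * 4 = 1024^2 pixel footprint per slice mentioned above.
        """
        u0, v0 = int(np.floor(u)), int(np.floor(v))
        fu, fv = u - u0, v - v0
        return ((1 - fu) * (1 - fv) * projection[v0,     u0]     +
                fu       * (1 - fv) * projection[v0,     u0 + 1] +
                (1 - fu) * fv       * projection[v0 + 1, u0]     +
                fu       * fv       * projection[v0 + 1, u0 + 1])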

What are these text files found in the COBRA work directory: ANGLE_ARRAY, HORTILTING_ARRAY, ORIGDETDIST_ARRAY, etc.?

These text files, residing in the COBRA work directory, contain the effective (applied) parameters on a per-projection basis. If the dataset has, for example, a uniform U-offset, then UOFFSETS_ARRAY contains the same number repeated for every projection.
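Assuming the files are plain text with one effective value per projection (the exact format is an assumption here, not a specification), inspecting them takes a couple of lines:

    import numpy as np

    uoffsets = np.loadtxt("UOFFSETS_ARRAY")   # one value per projection (assumed)
    if np.ptp(uoffsets) == 0:
        print("uniform U-offset:", uoffsets[0])
    else:
        print("U-offset varies from", uoffsets.min(), "to", uoffsets.max())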

Could you tell us which operations you perform to create and apply the projection matrices?

In our experience, such matrices are used when the tube/detector trajectory is substantially complex and cannot be described by a set of parameters that is uniform for the entire scan, so every tube/detector position has to get an individual description. If the detector is isotropic and orthogonal (this is usually the case), then in the most general case one needs nine parameters per position. They can be presented in different forms; one such form might be, for example:

  • XYZ for position of the focal spot
  • XYZ for position of the detector center
  • Three Euler angles for the detector plane orientation.

In 3D computing, projective matrices are more popular (and typically more convenient), though they carry essentially the same information.
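For illustration, a 3x4 projective matrix can be assembled from exactly the nine parameters listed above. This is a hedged sketch, not COBRA's actual code: the Euler convention, the axis and sign conventions, and the pixel-pitch handling are all assumptions and may differ from any particular scanner description.

    import numpy as np

    def euler_to_rows(rx, ry, rz):
        """Rotation whose rows are the detector u-axis, v-axis and normal
        in world coordinates (Z*Y*X convention -- one assumption of many)."""
        cx, sx = np.cos(rx), np.sin(rx)
        cy, sy = np.cos(ry), np.sin(ry)
        cz, sz = np.cos(rz), np.sin(rz)
        Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
        Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
        Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
        return Rz @ Ry @ Rx

    def projection_matrix(src, det_center, euler, pitch_u, pitch_v):
        """3x4 matrix mapping homogeneous world points to detector pixels
        (pixel coordinates measured relative to the detector center)."""
        R = euler_to_rows(*euler)          # rows: e_u, e_v, n
        e_u, e_v, n = R
        sdd = np.dot(det_center - src, n)  # source-detector distance along n
        # Intrinsics: focal length in pixels plus the principal point,
        # i.e. where the perpendicular from the source hits the detector.
        K = np.array([[sdd / pitch_u, 0, np.dot(src - det_center, e_u) / pitch_u],
                      [0, sdd / pitch_v, np.dot(src - det_center, e_v) / pitch_v],
                      [0, 0, 1.0]])
        extrinsics = np.hstack([R, -(R @ src)[:, None]])  # world -> source frame
        return K @ extrinsics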
So it is no surprise that the matrices you get from our site describe a complex scanner with, in particular, unstable U- and V-offsets (a case of complex gantry behavior). This is what matrices are for. Obviously, one could also use them to describe good mechanics, but keep in mind that the corresponding reconstruction branch is about 50% slower in CPU mode.
Unfortunately, we have to say that Exxim does not have the technology/know-how for calibrating complex scanners. However, we can utilize such information if somebody provides the matrices with the data. For example, a major German medical device manufacturer uses this approach extensively in their C-arm products, so COBRA can reconstruct their datasets. We also have an idea of how this company does calibration, but it is literally a high-level idea and certainly not a ready-to-use technology.
At the same time, it is possible that you do not actually need matrices; you may just need a more precise conventional calibration (though I am speculating a little here). The basic parameters might work well enough for you; they just have to be more accurate (a correct xxm file). Exxim has that technology, and it might be a reasonable starting point unless you are 100% sure that the scanner is really that complex.
Please also keep in mind that if your gantry is "wobbly", the approach of using matrices is most likely a road to nowhere. Some manufacturers use matrices mostly because (for various reasons) they use non-circular trajectories, while all the mechanics is still highly reproducible. Unfortunately, we do not know of any technology that can help if the mechanics is not that good.
Below, for reference (FYI), is how such a mapping is applied.
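This is a hedged sketch, not COBRA's actual code; it assumes one 3x4 homogeneous matrix P per projection and pixel coordinates relative to the detector center:

    import numpy as np

    def map_voxel(P, xyz):
        """Map a world-space voxel center to detector pixel coordinates
        with a 3x4 homogeneous projection matrix P."""
        uw, vw, w = P @ np.append(xyz, 1.0)
        return uw / w, vw / w              # dehomogenize

    # Usage with the (hypothetical) projection_matrix() sketched earlier:
    # P = projection_matrix(src, det_center, euler, pitch_u, pitch_v)
    # u, v = map_voxel(P, np.array([10.0, -3.0, 25.0]))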
