Generate & track synthetic images#

We start with some imports…

import dolfin # https://fenicsproject.org

import dolfin_warp as dwarp # https://github.com/mgenet/dolfin_warp

import lib_viewer

Synthetic data generation#

Image generation#

Let us introduce a small tool to generate simple synthetic images. Here we define all the image parameters.

# Image parameters
n_dim    = 2           # dimension (2 or 3)
L        = [1.]*n_dim  # spatial extent
n_voxels = [100]*n_dim # spatial discretization
T        = 1.          # temporal extent
n_frames = 21          # temporal discretization

# Structure (i.e., object) parameters
#    (For now we will consider a simple square object.
#    (For the translation case it is convenient to start with an uncentered square,
#    (whereas for the other cases it is convenient if the square is centered.
structure_type = "box"; structure_Xmin = [0.1, 0.3]; structure_Xmax = [0.5, 0.7]
# structure_type = "box"; structure_Xmin = [0.3, 0.3]; structure_Xmax = [0.7, 0.7]

# Texture parameters
#    ("no" means the image will be the characteristic function of the object in its current position,
#    (which is the simplest model for standard (i.e., untagged) MRI images,
#    (whereas "tagging" means the signal over the object will be sqrt(abs(sin(pi*x/s))*abs(sin(pi*y/s))),
#    (which is the simplest model for tagged MRI images.
texture_type = "tagging"; texture_s = 0.1
# texture_type = "no"

# Noise parameters
noise_level = 0
# noise_level = 0.1
# noise_level = 0.2
# noise_level = 0.3

# Transformation parameters
#    (For now we consider basic transformations of the object.
deformation_type = "translation"; deformation_Dx = 0.4; deformation_Dy = 0.
# deformation_type = "rotation"; deformation_Cx = 0.5; deformation_Cy = 0.5; deformation_Rz = 45.
# deformation_type = "compression"; deformation_Cx = 0.5; deformation_Cy = 0.5; deformation_Exx = -0.20
# deformation_type = "shear"; deformation_Cx = 0.5; deformation_Cy = 0.5; deformation_Fxy = +0.20

# Evolution parameters
#    ("linear" means the object will transform linearly in time,
#    (reaching its prescribed deformation on the last frame.
evolution_type = "linear"

# Case
case  = deformation_type
case += "-"+texture_type
if (noise_level > 0):
    case += "-"+str(noise_level)

And then actually generate the images.

# Image properties
images = {
    "n_dim":n_dim,
    "L":L,
    "n_voxels":n_voxels,
    "T":T,
    "n_frames":n_frames,
    "data_type":"float",
    "folder":"E1",
    "basename":"images-"+case}

# Structure (i.e., object) properties
if (structure_type == "box"):
    structure = {"type":"box", "Xmin":structure_Xmin, "Xmax":structure_Xmax}

# Texture properties
if   (texture_type == "no"):
    texture = {"type":"no"}
elif (texture_type == "tagging"):
    texture = {"type":"tagging", "s":texture_s}

# Noise properties
if (noise_level == 0):
    noise = {"type":"no"}
else:
    noise = {"type":"normal", "stdev":noise_level}

# Deformation properties
if   (deformation_type == "translation"):
    deformation = {"type":"translation", "Dx":deformation_Dx, "Dy":deformation_Dy}
elif (deformation_type == "rotation"):
    deformation = {"type":"rotation", "Cx":deformation_Cx, "Cy":deformation_Cy, "Rz":deformation_Rz}
elif (deformation_type == "compression"):
    deformation = {"type":"homogeneous", "X0":deformation_Cx, "Y0":deformation_Cy, "Exx":deformation_Exx}
elif (deformation_type == "shear"):
    deformation = {"type":"homogeneous", "X0":deformation_Cx, "Y0":deformation_Cy, "Fxy":deformation_Fxy}

# Evolution properties
if (evolution_type == "linear"):
    evolution = {"type":"linear"}

# Generate images
dwarp.generate_images(
    images=images,
    structure=structure,
    texture=texture,
    noise=noise,
    deformation=deformation,
    evolution=evolution,
    verbose=0)
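
As a quick sanity check, we can list the files that were just written (a simple sketch, assuming dwarp.generate_images writes one .vti file per frame into the folder and basename defined above).

import glob

vti_files = sorted(glob.glob("E1/images-"+case+"_*.vti"))
print(len(vti_files))  # expected to equal n_frames if one file is written per frame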

We visualize them using itkwidgets.

viewer = lib_viewer.Viewer(
    images="E1/images-"+case+"_*.vti")
viewer.view()

Ground truth generation#

Later on we will track the object throughout the images using finite elements. To assess the quality of the tracking, it is convenient to have a ground truth, in the form of a finite element solution over a mesh. We now create the mesh, and generate this ground truth.

Mesh#

# Mesh parameter
n_elems = 1
# Generate mesh
mesh = dolfin.RectangleMesh(
    dolfin.Point(structure_Xmin),
    dolfin.Point(structure_Xmax),
    n_elems,
    n_elems,
    "crossed")

Ground truth#

# Generate ground truth
dwarp.compute_warped_mesh(
    working_folder="E1",
    working_basename="ground_truth"+"-"+deformation_type,
    images=images,
    structure=structure,
    deformation=deformation,
    evolution=evolution,
    mesh=mesh,
    verbose=0)
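
For the default translation case with linear evolution, the ground-truth displacement field is spatially uniform and grows linearly in time. Here is a minimal sketch (a hypothetical helper written for illustration, not part of dolfin_warp).

def ground_truth_displacement(t):
    # Hypothetical helper: uniform displacement at time t for the
    # "translation" + "linear" case, reaching (Dx, Dy) at t = T
    return [t/T*deformation_Dx, t/T*deformation_Dy]

print(ground_truth_displacement(T/2))  # halfway through the sequence: [0.2, 0.0]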

We can now visualize the images and meshes. (To better see the mesh on top of the images, select the “Wireframe” mode for the mesh).

viewer = lib_viewer.Viewer(
    images="E1/images-"+case+"_*.vti",
    meshes="E1/ground_truth-"+deformation_type+"_*.vtu")
viewer.view()

Motion tracking#

Motion tracking#

We now introduce a motion tracking tool. Here we define all the tracking parameters.

# Regularization strength
#    (It must be ≥ 0 and < 1.
regul_level = 0.

# Regularization type
#    ("continuous-elastic" means the image term is penalized with the strain energy of the displacement field.
#    ("continuous-equilibrated" means the image term is penalized with the equilibrium gap of the displacement field.
#    ("discrete-equilibrated" means the image term is penalized with the discrete linear equilibrium gap of the displacement field.
# regul_type = "continuous-linear-elastic"
# regul_type = "continuous-linear-equilibrated"
regul_type = "continuous-elastic"
# regul_type = "continuous-equilibrated"
# regul_type = "discrete-simple-elastic"
# regul_type = "discrete-simple-equilibrated"
# regul_type = "discrete-linear-equilibrated"
# regul_type = "discrete-linear-equilibrated-tractions-normal"
# regul_type = "discrete-linear-equilibrated-tractions-tangential"
# regul_type = "discrete-linear-equilibrated-tractions-normal-tangential"
# regul_type = "discrete-equilibrated"
# regul_type = "discrete-equilibrated-tractions-normal"
# regul_type = "discrete-equilibrated-tractions-tangential"
# regul_type = "discrete-equilibrated-tractions-normal-tangential"

# Regularization model
#    ("hooke" means the linear isotropic Hooke law (infinitesimal strain/linearized elasticity).
#    ("ciarletgeymonatneohookean" means the hyperelastic potential made of Ciarlet-Geymonat & neo-Hookean terms (finite strain/nonlinear elasticity).
if any([_ in regul_type for _ in ["linear", "simple"]]):
    regul_model = "hooke"
else:
    regul_model = "ciarletgeymonatneohookean"

And then actually run the tracking.

# Perform tracking
dwarp.warp(
    working_folder="E1",
    working_basename="tracking-"+case,
    #
    images_folder="E1",
    images_basename="images-"+case,
    #
    mesh=mesh,
    #
    regul_type=regul_type,
    regul_model=regul_model,
    regul_level=regul_level,
    #
    relax_type="constant",
    tol_dU=1e-2,
    #
    write_VTU_files=True,
    write_VTU_files_with_preserved_connectivity=True)

We can now visualize the images and tracked meshes. (To better see the mesh on top of the images, select the “Wireframe” mode for the mesh).

viewer = lib_viewer.Viewer(
    images="E1/images-"+case+"_*.vti",
    meshes="E1/tracking-"+case+"_*.vtu")
viewer.view()

Tracking error#

We can compute a normalized tracking error by comparing the tracked displacement \(\underline{U}\) and the ground truth displacement \(\underline{\bar U}\):

\[ e := \dfrac{\sqrt{\frac{1}{T}\int_{t=0}^{T} \frac{1}{\left|\Omega_0\right|}\int_{\Omega_0} \left\|\underline{U} - \underline{\bar U}\right\|^2 d\Omega_0~dt}}{\sqrt{\frac{1}{T}\int_{t=0}^{T} \frac{1}{\left|\Omega_0\right|}\int_{\Omega_0} \left\|\underline{\bar U}\right\|^2 d\Omega_0~dt}} \]

(Watch out: the meshes must match, since the error is actually computed at the discrete level!)
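
Before calling the dolfin_warp routine below, here is a minimal NumPy sketch of the discrete counterpart of this error, assuming hypothetical arrays U and Ubar of nodal displacements with shape (n_frames, n_nodes, n_dim) and a shared node ordering.

import numpy

def normalized_tracking_error(U, Ubar):
    # Hypothetical arrays of nodal displacements, shape (n_frames, n_nodes, n_dim);
    # discrete counterpart of the normalized error defined above
    num = numpy.sqrt(numpy.mean(numpy.sum((U - Ubar)**2, axis=-1)))
    den = numpy.sqrt(numpy.mean(numpy.sum(Ubar**2, axis=-1)))
    return num/den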

dwarp.compute_displacement_error(
    working_folder="E1",
    working_basename="tracking-"+case,
    ref_folder="E1",
    ref_basename="ground_truth-"+deformation_type,
    verbose=0)

References#

[Claire, Hild & Roux (2004). A finite element formulation to identify damage fields: The equilibrium gap method. International Journal for Numerical Methods in Engineering, 61(2), 189–208.]

[Veress, Gullberg & Weiss (2005). Measurement of Strain in the Left Ventricle during Diastole with cine-MRI and Deformable Image Registration. Journal of Biomechanical Engineering, 127(7), 1195.]

[Réthoré, Roux & Hild (2009). An extended and integrated digital image correlation technique applied to the analysis of fractured samples: The equilibrium gap method as a mechanical filter. European Journal of Computational Mechanics.]

[Leclerc, Périé, Roux & Hild (2010). Voxel-Scale Digital Volume Correlation. Experimental Mechanics, 51(4), 479–490.]

[Genet, Stoeck, von Deuster, Lee & Kozerke (2018). Equilibrated Warping: Finite Element Image Registration with Finite Strain Equilibrium Gap Regularization. Medical Image Analysis, 50, 1–22.]

[Lee & Genet (2019). Validation of Equilibrated Warping—Image Registration with Mechanical Regularization—On 3D Ultrasound Images. In Coudière, Ozenne, Vigmond & Zemzemi (Eds.), Functional Imaging and Modeling of the Heart (FIMH) (Vol. 11504, pp. 334–341). Springer International Publishing.]

[Berberoğlu, Stoeck, Moireau, Kozerke & Genet (2019). Validation of Finite Element Image Registration-based Cardiac Strain Estimation from Magnetic Resonance Images. PAMM, 19(1).]

[Genet (2023). Finite strain formulation of the discrete equilibrium gap principle: application to mechanically consistent regularization for large motion tracking. Comptes Rendus Mécanique, 351, 429–458.]