10. Non-Cartesian MR image reconstruction¶

In this tutorial we will reconstruct an MRI image from radially undersampled k-space measurements. Let us denote by $\Omega$ the undersampling mask; the under-sampled Fourier transform then reads $F_{\Omega}$.

Import neuroimaging data¶

We use the toy datasets available in pysap, more specifically a 2D brain slice and the radial under-sampling scheme. We compare the zero-order image reconstruction with compressed-sensing reconstructions (analysis vs. synthesis formulation), using the FISTA algorithm for the synthesis formulation and the Condat-Vu algorithm for the analysis formulation.

Recall that the synthesis formulation reads (minimization in the sparsifying domain):

$$ \widehat{z} = \text{arg}\,\min_{z\in \mathbb{C}^{n_\Psi}} \frac{1}{2} \|y - F_\Omega \Psi^* z \|_2^2 + \lambda \|z\|_1 $$

and the image solution is given by $\widehat{x} = \Psi^*\widehat{z}$. For an orthonormal wavelet transform, we have $n_\Psi=n$ while for a frame we may have $n_\Psi > n$.

while the analysis formulation consists in minimizing the following cost function (minimization in the image domain):

$$ \widehat{x} = \text{arg}\,\min_{x\in \mathbb{C}^n} \frac{1}{2} \|y - F_\Omega x\|_2^2 + \lambda \|\Psi x\|_1 \,. $$
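In both formulations the only non-smooth term is the weighted L1 norm, whose proximity operator is soft thresholding — the shrinkage rule that `SparseThreshold` applies with `thresh_type="soft"`. A minimal sketch for real coefficients:

```python
import numpy as np

def soft_threshold(z, lam):
    # Proximity operator of lam * ||z||_1: shrink each coefficient
    # toward zero by lam, zeroing the small ones (promotes sparsity).
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)
```

For example, with a threshold of 1.0, the coefficients `[-2.0, -0.5, 0.3, 1.5]` become `[-1.0, 0.0, 0.0, 0.5]`: everything below the threshold is annihilated, the rest is shrunk.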

  • Author: Chaithya G R & Philippe Ciuciu
  • Date: 01/06/2021
  • Target: ATSI MSc students, Paris-Saclay University
In [1]:
# Import the MRI reconstruction operators
from mri.operators import NonCartesianFFT, WaveletUD2, WaveletN
'''
NonCartesianFFT : non-Cartesian Fourier transform operator, used for spiral or radial MRI sampling trajectories.
WaveletN : orthogonal wavelet transform (e.g. Symmlet-8 or Daubechies), used for the synthesis-formulation compressed-sensing reconstruction.
WaveletUD2 : undecimated bi-orthogonal wavelet transform, used for the analysis-formulation compressed-sensing reconstruction.
'''
# Import MRI processing utilities
from mri.operators.utils import convert_locations_to_mask, \
    gridded_inverse_fourier_transform_nd
'''
convert_locations_to_mask : converts a list of k-space sample coordinates into a binary mask over the image grid.
gridded_inverse_fourier_transform_nd : gridded (interpolation-based) inverse Fourier transform, typically used as an initial estimate for MRI reconstruction.
'''
# Import the MRI image reconstructor
from mri.reconstructors import SingleChannelReconstructor
'''
SingleChannelReconstructor : iterative single-channel MRI reconstructor supporting compressed-sensing optimization algorithms such as FISTA, POGM and Condat-Vu.
'''

# Import the MRI sample data
from pysap.data import get_sample_data

# Third party import
from modopt.math.metrics import ssim
from modopt.opt.linear import Identity
from modopt.opt.proximity import SparseThreshold
'''
ssim : structural similarity index (SSIM), used to assess the quality of a reconstruction against the reference MRI image.
Identity : identity linear transformation, i.e. leaves the data unchanged; used here so that thresholding acts directly on the wavelet coefficients.
SparseThreshold : sparse thresholding operator (L1 regularization / soft-thresholding shrinkage), used to enforce the sparsity constraint in compressed-sensing reconstruction.
'''

import numpy as np
import matplotlib.pyplot as plt

Loading input data¶

In [2]:
image = get_sample_data('2d-mri').data.astype(np.complex64)

# Obtain MRI non-cartesian mask
radial_mask = get_sample_data("mri-radial-samples")                
kspace_loc = radial_mask.data                                                           # a list of sampled k-space coordinates (x, y) 
In [3]:
print(kspace_loc)
[[-0.5         0.        ]
 [-0.49988383  0.        ]
 [-0.49956852  0.        ]
 ...
 [-0.48881698  0.09940265]
 [-0.48928857  0.09949855]
 [-0.48956504  0.09955477]]

View Input data¶

In [4]:
plt.subplot(1, 2, 1)
plt.imshow(np.abs(image), cmap='gray')                                                  # True image
plt.title("MRI Data")
plt.subplot(1, 2, 2)
plt.imshow(convert_locations_to_mask(kspace_loc, image.shape), cmap='gray')             # a list of sampled k-space coordinates (x, y) -> 2D binary mask  
plt.title("K-space Sampling Mask")
plt.show()

$\leadsto$   The code above generates the sampling mask:

  • First, we obtain a list of sampled k-space coordinates $(x, y)$ using kspace_loc = get_sample_data("mri-radial-samples").data

  • Then, we convert this list of sampled k-space coordinates to a 2D binary mask using convert_locations_to_mask(kspace_loc, image.shape)
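Conceptually, `convert_locations_to_mask` bins each normalized frequency in $[-0.5, 0.5)$ into the corresponding pixel of the k-space grid. An illustrative re-implementation (an assumption about the behavior, not the library code):

```python
import numpy as np

def locations_to_mask(kspace_loc, shape):
    # Bin each normalized frequency in [-0.5, 0.5) to an integer grid index
    # and mark it in a binary mask over the k-space grid.
    shape = np.asarray(shape)
    idx = np.floor((kspace_loc + 0.5) * shape).astype(int)
    idx = np.clip(idx, 0, shape - 1)          # guard the upper edge
    mask = np.zeros(tuple(shape), dtype=bool)
    mask[idx[:, 0], idx[:, 1]] = True
    return mask

loc = np.array([[-0.5, 0.0], [0.0, 0.0], [0.25, -0.25]])
mask = locations_to_mask(loc, (8, 8))
```

Here the three sample locations light up three distinct pixels of an 8×8 mask; the center of k-space, `(0, 0)`, maps to the middle of the grid.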

Generate the kspace¶

From the 2D brain slice and the acquisition mask, we retrospectively undersample the k-space using the radial mask. We then reconstruct the zero-order solution as a baseline.

Get the locations of the kspace samples

In [5]:
# Get the locations of the kspace samples and the associated observations
# fourier_op = NonCartesianFFT(samples=kspace_loc, shape=image.shape, implementation='gpuNUFFT')
# kspace_obs = fourier_op.op(image)[0]
fourier_op = NonCartesianFFT(samples=kspace_loc, shape=image.shape, implementation='finufft')
'''
NonCartesianFFT : defines the non-Cartesian Fourier transform operator, used for irregular k-space sampling (radial, spiral, ... trajectories).
samples=kspace_loc : the coordinates of the k-space sample locations.
shape=image.shape : the shape of the MRI image (matches the input image).
implementation='finufft' : use finufft to compute the NUFFT (non-uniform Fourier transform).
fourier_op is now an operator object that maps an image to its non-Cartesian k-space samples.
'''
kspace_obs = fourier_op.op(image)
/home/home/miniconda3/envs/tensor/lib/python3.11/site-packages/mrinufft/_utils.py:94: UserWarning: Samples will be rescaled to [-pi, pi), assuming they were in [-0.5, 0.5)
  warnings.warn(

Gridded solution

In [6]:
grid_space = np.linspace(-0.5, 0.5, num=image.shape[0])
grid2D = np.meshgrid(grid_space, grid_space)
'''
np.linspace(-0.5, 0.5, num=image.shape[0]) : uniformly spaced coordinates in [-0.5, 0.5].
image.shape[0] : the number of points, chosen to match the image size.
np.meshgrid(grid_space, grid_space) : 2D grid of coordinates used to remap the non-uniform samples onto a regular grid.
Result: grid2D is the 2D coordinate grid used for the interpolation.
'''

grid_soln = gridded_inverse_fourier_transform_nd(kspace_loc, kspace_obs,
                                                 tuple(grid2D), 'linear')   # linear interpolation

'''
gridded_inverse_fourier_transform_nd() : inverse Fourier transform of the k-space data via gridding (interpolation).

kspace_loc : non-Cartesian k-space sample locations (the samples of NonCartesianFFT).
kspace_obs : under-sampled k-space observations (the result of fourier_op.op(image)).
tuple(grid2D) : the regular grid coordinates (converted to a tuple).
'linear' : use linear interpolation for the gridding step.
Result: grid_soln is the gridded MRI image reconstruction.
'''

base_ssim = ssim(grid_soln, image)
plt.imshow(np.abs(grid_soln), cmap='gray')
plt.title('Gridded solution : SSIM = ' + str(np.around(base_ssim, 3)))
plt.show()

$\leadsto$   Here we apply the Fourier operator ( fourier_op.op ) and its adjoint ( gridded_inverse_fourier_transform_nd ) to compute the zero-order solution:

  • fourier_op = NonCartesianFFT(samples=kspace_loc, shape=image.shape, implementation='finufft')

    where we define the non-Cartesian Fourier transform (NonCartesianFFT) operator, which is applied only at the sampled locations ( kspace_loc ); here we use 'finufft' as the implementation.

    We then apply this operator to the image, kspace_obs = fourier_op.op(image), to obtain the undersampled k-space data.

  • To compute the zero-order solution, we apply the adjoint of the operator: grid_soln = gridded_inverse_fourier_transform_nd(kspace_loc, kspace_obs, tuple(grid2D), 'linear') , which serves as a basic gridded (interpolation-based) reconstruction.
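Conceptually, the forward operator and its adjoint form a non-uniform DFT pair: the forward map evaluates the image's Fourier transform at the scattered sample locations, and the adjoint maps scattered samples back to the image grid. A slow, dense sketch of that pair (illustrative only; `NonCartesianFFT` uses fast NUFFT algorithms such as finufft, not this matrix form):

```python
import numpy as np

def ndft_op(image, loc):
    # Forward non-uniform DFT: evaluate the 2D Fourier transform of `image`
    # at arbitrary normalized frequencies loc in [-0.5, 0.5)^2.
    n = image.shape[0]
    grid = np.arange(n) - n // 2                    # centered pixel coordinates
    gx, gy = np.meshgrid(grid, grid, indexing='ij')
    E = np.exp(-2j * np.pi * (np.outer(loc[:, 0], gx.ravel())
                              + np.outer(loc[:, 1], gy.ravel())))
    return E @ image.ravel()

def ndft_adjoint(kspace, loc, n):
    # Adjoint (conjugate transpose of the forward matrix): maps scattered
    # k-space samples back onto the image grid.
    grid = np.arange(n) - n // 2
    gx, gy = np.meshgrid(grid, grid, indexing='ij')
    E = np.exp(-2j * np.pi * (np.outer(loc[:, 0], gx.ravel())
                              + np.outer(loc[:, 1], gy.ravel())))
    return (E.conj().T @ kspace).reshape(n, n)
```

The two functions satisfy the adjoint identity $\langle Fx, y\rangle = \langle x, F^H y\rangle$, which is exactly what iterative reconstructions rely on.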

FISTA optimization¶

We now want to refine the zero-order solution using a FISTA optimization. The cost function is the sum of the proximity cost and the gradient cost.

In [38]:
# Setup the operators
linear_op = WaveletN(wavelet_name="sym8", nb_scales=4)
regularizer_op = SparseThreshold(Identity(), 6 * 1e-7, thresh_type="soft")
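The FISTA scheme that `reconstructor.reconstruct(optimization_alg='fista')` will run can be sketched on a dense toy problem (a minimal illustration of the iteration, not the library's implementation; here `A` stands in for $F_\Omega \Psi^*$):

```python
import numpy as np

def soft(z, t):
    # soft thresholding: prox of t * ||.||_1
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def fista(A, y, lam, n_iter=200):
    # FISTA for: min_z 0.5 * ||y - A z||^2 + lam * ||z||_1  (dense toy version)
    L = np.linalg.norm(A, 2) ** 2                        # Lipschitz constant of the gradient
    z = np.zeros(A.shape[1]); x_old = z.copy(); t = 1.0
    for _ in range(n_iter):
        x = soft(z - A.T @ (A @ z - y) / L, lam / L)     # gradient step + prox
        t_new = (1 + np.sqrt(1 + 4 * t ** 2)) / 2
        z = x + (t - 1) / t_new * (x - x_old)            # momentum (extrapolation)
        x_old, t = x, t_new
    return x_old
```

With `lam = 0` the scheme reduces to accelerated gradient descent and converges to the least-squares solution, which makes it easy to sanity-check.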

Generate operators¶

In [ ]:
# Setup Reconstructor
reconstructor = SingleChannelReconstructor(
    fourier_op=fourier_op,                                              # NonCartesianFFT operator restricted to the sampled locations (kspace_loc)
    linear_op=linear_op,                                                # Symmlet-8 wavelet transform with 4 scales for a multi-scale sparse representation
    regularizer_op=regularizer_op,                                      # soft thresholding <-> L1 norm, lambda = 6e-7, applied through Identity()
    gradient_formulation='synthesis',                                   # synthesis formulation
    verbose=1,
)
WARNING: Making input data immutable.
Lipschitz constant is 17.828099441528323

Synthesis formulation: FISTA optimization¶

We now want to refine the zero-order solution using a FISTA optimization. The cost function is the sum of the proximity cost and the gradient cost.

In [ ]:
image_rec, costs, metrics = reconstructor.reconstruct(
    kspace_data=kspace_obs,                                              # kspace_obs = fourier_op.op(image)
    optimization_alg='fista',                                            # fista
    num_iterations=200,
)

recon_ssim = ssim(image_rec, image)
plt.imshow(np.abs(image_rec), cmap='gray')
plt.title('FISTA Reconstruction : SSIM = ' + str(np.around(recon_ssim, 3)))
plt.show()


POGM reconstruction¶

In [ ]:
image_rec2, costs2, metrics2 = reconstructor.reconstruct(
    kspace_data=kspace_obs,                                                   # kspace_obs = fourier_op.op(image)
    optimization_alg='pogm',                                                  # pogm
    num_iterations=200,
)

recon2_ssim = ssim(image_rec2, image)
plt.imshow(np.abs(image_rec2), cmap='gray')
plt.title('POGM Reconstruction : SSIM = ' + str(np.around(recon2_ssim, 3)))
plt.show()


Q1  

  • Synthesis CS reconstruction: Compare the two optimization algorithms, namely FISTA and POGM.

    Make your own comments on

    • the final image quality,

    • convergence speed (play with the number of iterations),

    • computation cost, etc.

    • Additionally, you may vary the number of scales in the wavelet transform (i.e. WaveletN(wavelet_name="sym8", nb_scales=4)).

In [46]:
import pysap  # needed for pysap.Image in the Condat-Vu branch

def reconstruction(optimization_alg_, num_iterations_, nb_scales_):
    # Synthesis formulation (FISTA / POGM): orthogonal Symmlet-8 wavelet transform
    if optimization_alg_ in ("fista", "pogm"):
        linear_op = WaveletN(wavelet_name="sym8", nb_scales=nb_scales_)
        gradient_formulation = 'synthesis'
    # Analysis formulation (Condat-Vu): undecimated bi-orthogonal wavelet transform
    elif optimization_alg_ == "condatvu":
        linear_op = WaveletUD2(wavelet_id=24, nb_scale=nb_scales_)
        gradient_formulation = 'analysis'

    # Soft thresholding <-> L1 norm, lambda = 6e-7, applied through Identity()
    regularizer_op = SparseThreshold(Identity(), 6 * 1e-7, thresh_type="soft")

    reconstructor = SingleChannelReconstructor(
        fourier_op=fourier_op,                      # NonCartesianFFT operator restricted to the sampled locations (kspace_loc)
        linear_op=linear_op,
        regularizer_op=regularizer_op,
        gradient_formulation=gradient_formulation,
        verbose=1,
    )

    x_final, costs, metrics = reconstructor.reconstruct(
        kspace_data=kspace_obs,                     # kspace_obs = fourier_op.op(image)
        optimization_alg=optimization_alg_,
        num_iterations=num_iterations_,
    )
    # Condat-Vu iterates in the image domain; take the magnitude for display
    image_rec = pysap.Image(data=np.abs(x_final)) if optimization_alg_ == "condatvu" else x_final

    recon_ssim = ssim(image_rec, image)
    return image_rec, recon_ssim
In [ ]:
vector_optimization_alg = ["fista", "pogm", "condatvu"]
vector_num_iterations = [20, 200, 500, 1000]
vector_nb_scales = [2, 4, 8, 16]


for p in ["fista", "pogm"]:
    optimization_alg_ = p

    fig, axs = plt.subplots(len(vector_num_iterations), len(vector_nb_scales), figsize=(12, 12))
    for i in range(len(vector_num_iterations)):
        num_iterations_ = vector_num_iterations[i]

        for j in range(len(vector_nb_scales)):
            nb_scales_ = vector_nb_scales[j]

            image_rec, recon_ssim = reconstruction (optimization_alg_, num_iterations_, nb_scales_)

            axs[i, j].imshow(np.abs(image_rec), cmap='gray')
            axs[i, j].set_xlabel(f"nb_scales = {nb_scales_} SSIM = {recon_ssim:.3f}")
            axs[i, j].xaxis.set_label_position('top')  # move the xlabel to the top
            axs[i, j].set_ylabel(f"nb_iter = {num_iterations_}")

    # Add a main title (suptitle) to the figure 
    fig.suptitle(f"Reconstruction Results for Optimization Alg: {p}", fontsize=16)

    # Display the figure with adjusted layout
    plt.tight_layout(rect=[0, 0, 1, 0.96])  # Adjust layout to fit suptitle
    plt.show()


$\leadsto$   Synthesis CS reconstruction: comparing the two optimization algorithms, FISTA and POGM.

$\circ$ When we vary the number of scales nb_scales in the wavelet transform, the final image quality ( SSIM ) remains unchanged up to three decimal places for both algorithms.

$\circ$ As num_iterations increases, the quality of the reconstructed image first improves, up to num_iterations ≃ 80; beyond that point the image quality degrades and then gradually stabilizes, again for both algorithms.

$\circ$ Regarding the final image quality, under the same settings ( nb_scales, num_iterations ), the difference between the two algorithms is generally insignificant.

In [ ]:
vector_optimization_alg = ["fista", "pogm", "condatvu"]
vector_num_iterations = np.concatenate([
    np.linspace(20, 1000, 50), 
    np.linspace(1001, 10000, 10)
]) # [20, 200, 500, 1000]
nb_scales_ = 4

vector_SSIM_FISTA = []
vector_SSIM_POGM = []

optimization_alg_ = "fista"
for i in range(len(vector_num_iterations)):
    num_iterations_ = int(vector_num_iterations[i])
    image_rec, recon_ssim = reconstruction (optimization_alg_, num_iterations_, nb_scales_)
    vector_SSIM_FISTA.append(recon_ssim)

optimization_alg_ = "pogm"
for j in range(len(vector_num_iterations)):
    num_iterations_ = int(vector_num_iterations[j])
    image_rec, recon_ssim = reconstruction (optimization_alg_, num_iterations_, nb_scales_)
    vector_SSIM_POGM.append(recon_ssim)

plt.figure(figsize=(10, 6))

# FISTA plot
plt.plot(vector_num_iterations, vector_SSIM_FISTA, linestyle='-', color='red', 
        markersize=8, markerfacecolor='red', label='FISTA')

# POGM plot
plt.plot(vector_num_iterations, vector_SSIM_POGM, linestyle='-', color='blue', 
        markersize=8, markerfacecolor='blue', label='POGM')

# Labels and title
plt.xlabel('num_iterations')
plt.ylabel('recon_ssim')
plt.title('SSIM Score vs Number of Iterations')

# Grid and legend
# plt.grid(True)
plt.legend()

# Show the plot
plt.show()


The results show that the maximum SSIM score for the FISTA algorithm occurs at num_iterations = 80 . Below, we display the corresponding reconstructed image:

  • num_iterations = 80 (corresponding to the maximum SSIM score)

In this scenario, we fixed nb_scales = 4 .

In [ ]:
# Find the index and value of the maximum vector_SSIM_FISTA.
max_index_FISTA = np.argmax(vector_SSIM_FISTA)
SSIM_corresponding_max_index_FISTA = vector_SSIM_FISTA[max_index_FISTA]
num_iterations_corresponding_max_index_FISTA = vector_num_iterations[max_index_FISTA]
image_corresponding_max_index_FISTA, recon_ssim_FISTA= reconstruction("fista", int(num_iterations_corresponding_max_index_FISTA), 4)

fig, axes = plt.subplots(1, 2, figsize=(8, 4))

axes[0].imshow(np.abs(image_corresponding_max_index_FISTA), cmap='gray')
axes[0].set_title("FISTA" + f"  num_iter : {int(num_iterations_corresponding_max_index_FISTA)}" +  f" : SSIM = {recon_ssim_FISTA:.3f}")
axes[0].axis('off')

axes[1].imshow(np.abs(grid_soln), cmap='gray')
axes[1].set_title('Gridded solution : SSIM = ' + str(np.around(base_ssim, 3)))
axes[1].axis('off')

plt.tight_layout()
plt.show()


The results show that the maximum SSIM score for the POGM algorithm occurs at num_iterations = 60 . Below, we display the corresponding reconstructed image:

  • num_iterations = 60 (corresponding to the maximum SSIM score)

In this scenario, we fixed nb_scales = 4 .

In [ ]:
# Find the index and value of the maximum vector_SSIM_POGM.
max_index_POGM = np.argmax(vector_SSIM_POGM)
SSIM_corresponding_max_index_POGM = vector_SSIM_POGM[max_index_POGM]
num_iterations_corresponding_max_index_POGM = vector_num_iterations[max_index_POGM]
image_corresponding_max_index_POGM, recon_ssim_POGM= reconstruction("pogm", int(num_iterations_corresponding_max_index_POGM), 4)

fig, axes = plt.subplots(1, 2, figsize=(8, 4))

axes[0].imshow(np.abs(image_corresponding_max_index_POGM), cmap='gray')
axes[0].set_title("POGM" + f"  num_iter : {int(num_iterations_corresponding_max_index_POGM)}" +  f" : SSIM = {recon_ssim_POGM:.3f}")
axes[0].axis('off')

axes[1].imshow(np.abs(grid_soln), cmap='gray')
axes[1].set_title('Gridded solution : SSIM = ' + str(np.around(base_ssim, 3)))
axes[1].axis('off')

plt.tight_layout()
plt.show()


Q3   Compare the differences in image quality on the FISTA solutions between Cartesian and non-Cartesian undersampling.

$\leadsto$   Under the same conditions ( linear_op = WaveletN(wavelet_name="sym8", nb_scales=4) ):

$\circ$ In the non-Cartesian setting, with regularizer_op = SparseThreshold(Identity(), 6 * 1e-7, thresh_type="soft"), the algorithm converges faster than in the Cartesian case.

$\circ$ The quality of the non-Cartesian undersampling reconstruction ( num_iter = 80, SSIM = 0.791 ) is superior to that of the Cartesian one ( num_iter = 940, SSIM = 0.753 ).

Analysis formulation: Condat-Vu reconstruction¶

In [ ]:
#linear_op = WaveletN(wavelet_name="sym8", nb_scales=4)
linear_op = WaveletUD2(
    wavelet_id=24,
    nb_scale=4,
)

regularizer_op = SparseThreshold(Identity(), 6 * 1e-7, thresh_type="soft")
In [ ]:
reconstructor = SingleChannelReconstructor(
    fourier_op=fourier_op,                                                    # NonCartesianFFT operator restricted to the sampled locations (kspace_loc)
    linear_op=linear_op,                                                      # undecimated bi-orthogonal wavelet transform (4 decomposition scales) for the sparsity constraint
    regularizer_op=regularizer_op,                                            # soft thresholding <-> L1 norm, lambda = 6e-7, applied through Identity()
    gradient_formulation='analysis',                                          # analysis formulation
    verbose=1,
)
Lipschitz constant is 17.82810153961182
WARNING: Making input data immutable.
In [14]:
import pysap
In [ ]:
x_final, costs, metrics = reconstructor.reconstruct(
    kspace_data=kspace_obs,                                                    # kspace_obs = fourier_op.op(image)
    optimization_alg='condatvu',                                               # Condat-Vu
    num_iterations=200,
)
image_rec = pysap.Image(data=np.abs(x_final))                               
plt.imshow(np.abs(image_rec), cmap='gray')
recon_ssim = ssim(image_rec, image)
plt.title('Condat-Vu Reconstruction\nSSIM = ' + str(recon_ssim))
plt.show()


Q2   Analysis vs Synthesis CS reconstruction. Reproduce the experiment you did in the previous Notebook on Cartesian CS MRI reconstruction. Can you draw the same conclusions?

  • Analysis vs Synthesis CS reconstruction: Use the Condat-Vu algorithm to minimize the analysis formulation.

  • First, use the same sparsifying transform (i.e., linear_op set to Daubechies 8) and compare the results between the two formulations.

  • Second, use an undecimated wavelet transform (linear_op = WaveletUD2) and rerun the analysis formulation. Comment on the results.
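Before wiring it to the MRI operators, the Condat-Vu primal-dual iteration itself can be sketched on a dense toy analysis problem (illustrative only, not the library's implementation; here `A` stands in for $F_\Omega$, `Psi` for the wavelet transform, and the step sizes follow the usual convergence condition):

```python
import numpy as np

def condat_vu(A, Psi, y, lam, n_iter=5000):
    # Primal-dual Condat-Vu for: min_x 0.5 * ||A x - y||^2 + lam * ||Psi x||_1
    Lf = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the data-fit gradient
    nP2 = np.linalg.norm(Psi, 2) ** 2       # ||Psi||^2
    sigma = 0.5
    tau = 1.0 / (Lf / 2 + nP2)              # ensures 1/tau - sigma * ||Psi||^2 >= Lf/2
    x = np.zeros(A.shape[1])
    u = np.zeros(Psi.shape[0])
    for _ in range(n_iter):
        # primal step: gradient of the data term plus the dual variable pulled back
        x_new = x - tau * (A.T @ (A @ x - y) + Psi.T @ u)
        # dual step: prox of the conjugate of lam * ||.||_1 = projection onto [-lam, lam]
        u = np.clip(u + sigma * (Psi @ (2 * x_new - x)), -lam, lam)
        x = x_new
    return x
```

Unlike FISTA, no proximity operator of the composite term $\|\Psi x\|_1$ is ever needed; the wavelet transform only appears through forward and adjoint applications, which is what makes the analysis formulation with a redundant transform (WaveletUD2) tractable.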

In [11]:
def reconstruction_v1(Wavelet_, gradient_formulation_, num_iterations_):
    # Setup the sparsifying transform
    if Wavelet_ == "WaveletN":
        linear_op = WaveletN(wavelet_name="sym8", nb_scales=4)        # Symmlet-8 orthogonal wavelet transform, 4 scales
    elif Wavelet_ == "WaveletUD2":
        linear_op = WaveletUD2(wavelet_id=24, nb_scale=4)             # undecimated bi-orthogonal wavelet transform, 4 scales

    # Soft thresholding <-> L1 norm, lambda = 6e-7, applied through Identity()
    regularizer_op = SparseThreshold(Identity(), 6 * 1e-7, thresh_type="soft")

    reconstructor = SingleChannelReconstructor(
        fourier_op=fourier_op,                                        # NonCartesianFFT operator restricted to the sampled locations (kspace_loc)
        linear_op=linear_op,
        regularizer_op=regularizer_op,
        gradient_formulation=gradient_formulation_,                   # 'synthesis' or 'analysis'
        verbose=1,
    )

    # FISTA minimizes the synthesis formulation, Condat-Vu the analysis one
    optimization_alg = 'fista' if gradient_formulation_ == "synthesis" else 'condatvu'
    x_final, costs, metrics = reconstructor.reconstruct(
        kspace_data=kspace_obs,                                       # kspace_obs = fourier_op.op(image)
        optimization_alg=optimization_alg,
        num_iterations=num_iterations_,
    )
    # Condat-Vu iterates in the image domain; take the magnitude for display
    image_rec = pysap.Image(data=np.abs(x_final)) if optimization_alg == 'condatvu' else x_final

    recon_ssim = ssim(image_rec, image)
    return image_rec, recon_ssim
In [ ]:
vector_Wavelet=["WaveletN", "WaveletN", "WaveletUD2"]
vector_gradient_formulation=["synthesis", "analysis", "analysis"]
vector_num_iterations = [20, 100, 400, 500, 600]


fig, axs = plt.subplots(len(vector_num_iterations), 3, figsize=(12, 12))

for i in range(len(vector_num_iterations)) :
    for j in range(3):
        num_iterations_ = vector_num_iterations[i]

        Wavelet_ = vector_Wavelet[j]
        gradient_formulation_ = vector_gradient_formulation[j]

        image_rec, recon_ssim = reconstruction_v1(Wavelet_, gradient_formulation_, num_iterations_)

        # Display the reconstructed image
        axs[i, j].imshow(np.abs(image_rec), cmap='gray')
        
        axs[i, j].set_xlabel(f"{Wavelet_}: SSIM = {recon_ssim:.3f}")
        axs[i, j].xaxis.set_label_position('top')  # move the xlabel to the top
        axs[i, j].set_ylabel(f"num_iter = {num_iterations_}")
    

axs[0, 0].set_title("'synthesis' + 'fista'")
axs[0, 1].set_title("'analysis' + 'condatvu'")
axs[0, 2].set_title("'analysis' + 'condatvu'")

# Add a main title (suptitle) to the figure
fig.suptitle("Iterative Reconstruction Results")

# Display the figure with adjusted layout
plt.tight_layout(rect=[0, 0, 1, 0.96])  # Adjust layout to fit suptitle
plt.show()


$\leadsto$   Analysis vs synthesis CS reconstruction, using the Condat-Vu algorithm to minimize the analysis formulation.

$\circ$ First, using the same sparsifying transform ( linear_op = WaveletN(wavelet_name="sym8", nb_scales=4) ), the 'synthesis' formulation yields better reconstruction quality than the 'analysis' formulation.

$\circ$ Second, using an undecimated wavelet transform ( linear_op = WaveletUD2(wavelet_id=24, nb_scale=4) ) and re-running the 'analysis' formulation, the reconstruction quality is only slightly below the first case (i.e., WaveletN with 'synthesis' ) for num_iterations < 20 ; for num_iterations > 20 , however, the WaveletUD2 'analysis' reconstruction is significantly worse than the WaveletN 'synthesis' one.

In [ ]:
vector_Wavelet=["WaveletN", "WaveletN", "WaveletUD2"]
vector_gradient_formulation=["synthesis", "analysis", "analysis"]
vector_num_iterations = np.concatenate([
    np.linspace(10, 400, 20), 
    np.linspace(401, 1000, 10)
]) # [20, 200, 500, 1000]

matrix_recon_ssim = np.zeros((len(vector_num_iterations), 3))

for i in range(len(vector_num_iterations)) :
    for j in range(3):
        num_iterations_ = vector_num_iterations[i]

        Wavelet_ = vector_Wavelet[j]
        gradient_formulation_ = vector_gradient_formulation[j]

        image_rec, recon_ssim = reconstruction_v1(Wavelet_, gradient_formulation_, int(num_iterations_))

        matrix_recon_ssim[i][j] = recon_ssim

plt.figure(figsize=(10, 6))

# WaveletN Synthesis FISTA plot
plt.plot(vector_num_iterations, matrix_recon_ssim[:, 0], linestyle='-', color='red', 
        markersize=8, markerfacecolor='red', label='WaveletN Synthesis FISTA')

# WaveletN Analysis Condat-Vu plot
plt.plot(vector_num_iterations, matrix_recon_ssim[:, 1], linestyle='-', color='blue', 
        markersize=8, markerfacecolor='blue', label='WaveletN Analysis Condat-Vu')

# WaveletUD2 Analysis Condat-Vu plot
plt.plot(vector_num_iterations, matrix_recon_ssim[:, 2], linestyle='-', color='green', 
        markersize=8, markerfacecolor='green', label='WaveletUD2 Analysis Condat-Vu')


# Labels and title
plt.xlabel('num_iterations')
plt.ylabel('recon_ssim')
plt.title('SSIM Score vs Number of Iterations')

# Grid and legend
# plt.grid(True)
plt.legend()

# Show the plot
plt.show()
