11. Self-calibrated CS-pMRI image reconstruction from undersampled Cartesian data¶

In this tutorial we reconstruct an MRI image from Cartesian undersampled k-space data.

Let us denote by $\Omega$ the undersampling mask; the undersampled Fourier transform then reads $F_{\Omega}$. We use the toy datasets available in pysap, more specifically a 2D brain slice and an undersampled Cartesian acquisition over 32 channels. We compare the zero-order image reconstruction with self-calibrated multi-coil compressed sensing reconstructions (analysis vs. synthesis formulation), using the FISTA algorithm for the synthesis formulation and the Condat-Vu algorithm for the analysis formulation. The multi-coil data $(y_\ell)_\ell$ are collected across $L$ channels. The sensitivity maps $(S_\ell)_\ell$ are automatically calibrated from the central portion of k-space (e.g. 5%) for all channels $\ell=1, \ldots, L$. Recall that the synthesis formulation of the Cartesian CS-pMRI problem reads (minimization in the sparsifying domain):

$$ \widehat{z} = \text{arg}\,\min_{z\in \mathbb{C}^{n_\Psi}} \frac{1}{2} \sum_{\ell=1}^L\|y_\ell - F_\Omega S_\ell \Psi^*z \|_2^2 + \lambda \|z\|_1 $$

and the image solution is given by $\widehat{x} = \Psi^*\widehat{z}$. For an orthonormal wavelet transform we have $n_\Psi=n$, while for a frame we may have $n_\Psi > n$. The analysis formulation consists in minimizing the following cost function (minimization in the image domain):

$$ \widehat{x} = \text{arg}\,\min_{x\in \mathbb{C}^n} \frac{1}{2} \sum_{\ell=1}^L \|y_\ell - F_\Omega S_\ell x\|_2^2 + \lambda \|\Psi x\|_1 \,. $$
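Both formulations rely on the same $\ell_1$ proximal step, soft thresholding; this is what the `SparseThreshold` operator used below applies to the wavelet coefficients. A minimal, illustrative sketch:

```python
import numpy as np

def soft_threshold(z, lam):
    """Proximal operator of lam * ||z||_1: shrink each coefficient
    toward zero by lam and zero out anything smaller than lam.
    Also valid for complex coefficients, since np.sign(z) = z / |z|."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

# 3.0 -> 2.0, -0.5 -> 0.0, 1.2 -> 0.2 for lam = 1.0
coeffs = soft_threshold(np.array([3.0, -0.5, 1.2]), 1.0)
```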

  • Author: Chaithya G R & Philippe Ciuciu
  • Date: 01/06/2021, update: 02/13/2024
  • Target: ATSI MSc students, Paris-Saclay University
In [154]:
# Import the MRI reconstruction operators
# Package import
from mri.operators import FFT, WaveletN
'''
WaveletN : decimated orthogonal wavelet transform (e.g. Symmlet-8 or Daubechies), used for the synthesis-formulation CS reconstruction.
WaveletUD2 : undecimated bi-orthogonal wavelet transform, used for the analysis-formulation CS reconstruction.
'''
# Import the MRI processing utilities
from mri.operators.utils import convert_mask_to_locations
'''
convert_mask_to_locations : 2D binary mask -> a list of sampled k-space coordinates (x, y)
'''
# Import the MRI image reconstructors
from mri.reconstructors.utils.extract_sensitivity_maps \
    import get_Smaps, extract_k_space_center_and_locations
from mri.reconstructors import SelfCalibrationReconstructor
'''
get_Smaps : extracts the sensitivity maps (SMaps) from k-space, used for multi-channel (pMRI) reconstruction.
extract_k_space_center_and_locations : extracts the sampled k-space center, used for self-calibration.
SelfCalibrationReconstructor : reconstructs the MRI image with a self-calibrated strategy.
'''

# Import the MRI sample data
from pysap.data import get_sample_data

# Third party import
from modopt.math.metrics import ssim
from modopt.opt.linear import Identity
from modopt.opt.proximity import SparseThreshold
'''
ssim : structural similarity index (SSIM), used to assess the quality of the reconstruction against the reference MRI image.
Identity : identity transform, i.e. leaves the data unchanged; commonly used as a placeholder linear operator in the optimization.
SparseThreshold : sparse thresholding (L1 regularization / soft thresholding), enforcing the sparsity constraint in CS reconstruction.
'''

import numpy as np
import matplotlib.pyplot as plt

Loading input data¶

In [155]:
# Loading input data
cartesian_ref_image = get_sample_data('2d-pmri')
image = np.linalg.norm(cartesian_ref_image, axis=0)
# Obtain MRI cartesian mask
mask = get_sample_data("cartesian-mri-mask").data

# View Input
plt.subplot(1, 2, 1)
plt.imshow(np.abs(image), cmap='gray')
plt.title("MRI Data")
plt.subplot(1, 2, 2)
plt.imshow(mask, cmap='gray')
plt.title("K-space Sampling Mask")
plt.show()
In [156]:
mask_sampled = np.where(mask==1)
# Note: np.where returns a (rows, cols) tuple, so np.size counts every sample
# twice; the true undersampling factor is 2x the printed value (about 5.2).
print(512**2/np.size(mask_sampled))
2.6122448979591835
In [157]:
print( f" image data base : \n {cartesian_ref_image.shape}")
 image data base : 
 (32, 512, 512)
In [158]:
print( f"image shape : \n {image.shape} \n ")

print( f"sampled lines : \n {(np.where(mask[:,0]==1))}")
image shape : 
 (512, 512) 
 
sampled lines : 
 (array([  8,  21,  35,  38,  66,  74,  95, 100, 130, 134, 142, 146, 166,
       173, 177, 180, 182, 183, 185, 195, 197, 198, 203, 206, 207, 210,
       215, 219, 220, 221, 222, 223, 225, 226, 232, 235, 237, 238, 239,
       241, 244, 245, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256,
       257, 258, 259, 260, 261, 262, 264, 265, 266, 267, 270, 271, 272,
       277, 278, 283, 285, 290, 292, 293, 294, 295, 297, 301, 302, 304,
       306, 320, 326, 339, 341, 345, 358, 387, 388, 393, 396, 400, 411,
       436, 447, 462, 465, 488, 489, 506]),)

$\leadsto$   The code above loads the mask; the key line is

mask = get_sample_data("cartesian-mri-mask").data

which imports the existing cartesian-mri-mask provided by the package.

In [159]:
# Alternative design of a 1D variable-density sampling (VDS) mask

# Generate random phase-encoding locations
from mrinufft.trajectories.tools import get_random_loc_1d

'''
image.shape[0] : image width/height (assuming a square MRI image)
accel=8 : acceleration factor of 8, i.e. keep 8x fewer k-space lines (faster MRI acquisition)
center_prop=0.1 : the central 10% of k-space is always sampled (the low frequencies matter most)
pdf='gaussian' : draw the random locations from a Gaussian density (dense near the center, sparse at the edges)
'''
phase_encoding_locs = get_random_loc_1d(image.shape[0], accel=8, center_prop=0.1, pdf='gaussian')
print(f"phase_encoding_locs: \n {phase_encoding_locs}"  +
      f"\n\nmin(phase_encoding_locs): \n {min(phase_encoding_locs)}" +
      f"\n\nmax(phase_encoding_locs): \n {max(phase_encoding_locs)}")

# Rescale the phase-encoding locations
'''
+ 0.5 : shift the locations from [-0.5, 0.5) to [0, 1)
* image.shape[0] : map them to the pixel index range (0 ~ image.shape[0])
.astype(int) : convert to integer row indices
'''
phase_encoding_locs = ((phase_encoding_locs + 0.5) * image.shape[0]).astype(int)

# Build the k-space sampling mask
mask = np.zeros(image.shape, dtype=bool)    # start from an all-zero mask
mask[phase_encoding_locs] = 1               # set the selected phase-encoding rows to 1

# View Input
plt.subplot(1, 2, 1)
plt.imshow(np.abs(image), cmap='gray')
plt.title("MRI Data")
plt.subplot(1, 2, 2)
plt.imshow(mask, cmap='gray')
plt.title("K-space Sampling Mask")
plt.show()
phase_encoding_locs: 
 [ 0.          0.00195312 -0.00195312  0.00390625 -0.00390625  0.00585938
 -0.00585938  0.0078125  -0.0078125   0.00976562 -0.00976562  0.01171875
 -0.01171875  0.01367188 -0.01367188  0.015625   -0.015625    0.01757812
 -0.01757812  0.01953125 -0.01953125  0.02148438 -0.02148438  0.0234375
 -0.0234375   0.02539062 -0.02539062  0.02734375 -0.02734375  0.02929688
 -0.02929688  0.03125    -0.03125     0.03320312 -0.03320312  0.03515625
 -0.03515625  0.03710938 -0.03710938  0.0390625  -0.0390625   0.04101562
 -0.04101562  0.04296875 -0.04296875  0.04492188 -0.04492188  0.046875
 -0.046875    0.04882812 -0.04882812  0.06054688 -0.05078125  0.06640625
 -0.0546875   0.07226562 -0.05859375  0.07617188 -0.06835938  0.078125
 -0.0703125   0.08398438 -0.078125    0.09960938 -0.0859375   0.1015625
 -0.08984375  0.10351562 -0.109375    0.11914062 -0.11328125  0.12695312
 -0.1171875   0.140625   -0.12304688  0.14453125 -0.1484375   0.15039062
 -0.15234375  0.1640625  -0.15625     0.16601562 -0.171875    0.17578125
 -0.17773438  0.18359375 -0.18554688  0.1953125  -0.20117188  0.19921875
 -0.21679688  0.21679688 -0.21875     0.24023438 -0.22851562  0.27148438
 -0.26171875  0.27734375 -0.29296875  0.3046875  -0.29492188  0.32226562
  0.34375     0.3515625   0.36914062  0.40234375  0.41210938  0.49804688]

min(phase_encoding_locs): 
 -0.294921875

max(phase_encoding_locs): 
 0.498046875
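The variable-density idea above can be sketched independently of mrinufft. Below is a toy 1D Gaussian-density line sampler (illustrative only, not the `get_random_loc_1d` implementation; the function name and the Gaussian width are assumptions):

```python
import numpy as np

def gaussian_vds_mask(n, accel=8, center_prop=0.1, seed=0):
    """Toy 1D variable-density sampler: always keep the central
    center_prop fraction of lines, then draw the remaining line
    budget from a Gaussian density over the other locations."""
    rng = np.random.default_rng(seed)
    n_lines = n // accel                         # total line budget
    half = int(n * center_prop) // 2
    center = np.arange(n // 2 - half, n // 2 + half)
    pdf = np.exp(-0.5 * ((np.arange(n) - n / 2) / (n / 6)) ** 2)
    pdf[center] = 0                              # center already sampled
    pdf /= pdf.sum()
    n_rand = max(n_lines - center.size, 0)
    rand = rng.choice(n, size=n_rand, replace=False, p=pdf)
    line_mask = np.zeros(n, dtype=bool)
    line_mask[center] = True
    line_mask[rand] = True
    return line_mask
```

With `accel=8`, exactly `512 // 8 = 64` of the 512 phase-encoding lines are kept, the central 10% always included.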

Generate the kspace¶

From the 2D brain slice and the Cartesian acquisition mask, we retrospectively undersample the k-space. We then reconstruct the zero-order solution as a baseline.

In [160]:
# Define the Fourier operator (FFT) restricted to the sampled locations
fourier_op = FFT(mask=mask, shape=image.shape,
                 n_coils=cartesian_ref_image.shape[0])

# Generate the undersampled k-space data
'''
.op(cartesian_ref_image) applies the forward Fourier transform (FFT) and keeps only the sampled locations given by the mask.
kspace_obs is the undersampled k-space of the image, with shape (32, 512, 512).
'''
kspace_obs = fourier_op.op(cartesian_ref_image)
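Conceptually, the per-coil forward operator applies a centered orthonormal FFT and keeps only the sampled locations, while its adjoint zero-fills the missing samples before the inverse FFT. A minimal sketch (the actual FFT operator in pysap-mri additionally handles coil stacking and normalization internally):

```python
import numpy as np

def masked_fft2(img, mask):
    """Forward operator F_Omega: centered orthonormal 2D FFT,
    then zero out the non-sampled k-space locations."""
    k = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(img), norm="ortho"))
    return k * mask

def masked_ifft2(kspace, mask):
    """Adjoint operator: zero-fill the missing samples,
    then apply the centered inverse FFT."""
    zf = np.fft.ifftshift(kspace * mask)
    return np.fft.fftshift(np.fft.ifft2(zf, norm="ortho"))
```

Because the FFT is orthonormal and the mask is diagonal, `masked_ifft2` is the exact adjoint of `masked_fft2`, and with a full mask the pair is a perfect round trip.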

Zero order solution¶

In [161]:
# Zero order solution

# Perform the inverse Fourier transform
zero_soln = np.linalg.norm(fourier_op.adj_op(kspace_obs), axis=0)
base_ssim = ssim(zero_soln, image)
plt.imshow(np.abs(zero_soln), cmap='gray')
plt.title('Zero Order Solution : SSIM = ' + str(np.around(base_ssim, 3)))
plt.show()

$\leadsto$   Here we apply the Fourier operator and its adjoint to compute the zero-order solution:

  • fourier_op = FFT(mask=mask, shape=image.shape, n_coils=cartesian_ref_image.shape[0])

    defines the Fourier transform (FFT) operator restricted to the sampled locations given by the mask;

    we then apply this operator to the image, kspace_obs = fourier_op.op(cartesian_ref_image), to obtain the undersampled k-space data.

  • To compute the zero-order solution, we simply apply the adjoint of the operator, zero_soln = np.linalg.norm(fourier_op.adj_op(kspace_obs), axis=0), which serves as a basic reconstruction.

Q1  

  • Observe and comment on the improved image quality of the zero-filled adjoint FFT reconstruction when using multi-coil data as compared to single-coil data.
In [162]:
fig, axs = plt.subplots(1, 2, figsize=(10, 5))

# Multi-coil zero-filled reconstruction: adjoint FFT per coil,
# then sum-of-squares combination across the 32 channels
fourier_op_multi_coil = FFT(mask=mask, shape=image.shape,
                 n_coils=cartesian_ref_image.shape[0])
kspace_obs_multi_coil = fourier_op_multi_coil.op(cartesian_ref_image)
zero_soln_multi_coil = np.linalg.norm(fourier_op_multi_coil.adj_op(kspace_obs_multi_coil), axis=0)
base_ssim_multi_coil = ssim(zero_soln_multi_coil, image)
axs[0].imshow(np.abs(zero_soln_multi_coil), cmap='gray')
axs[0].set_title('Zero Order Solution (multi_coil) : SSIM = ' + str(np.around(base_ssim_multi_coil, 3)))
axs[0].axis('off')

# Single-coil zero-filled reconstruction: adjoint FFT of the first channel only
fourier_op_single_coil = FFT(mask=mask, shape=image.shape,
                 n_coils=1)
kspace_obs_single_coil = fourier_op_single_coil.op(cartesian_ref_image[0])
zero_soln_single_coil = fourier_op_single_coil.adj_op(kspace_obs_single_coil)
base_ssim_single_coil = ssim(zero_soln_single_coil, image)
axs[1].imshow(np.abs(zero_soln_single_coil), cmap='gray')
axs[1].set_title('Zero Order Solution (single_coil) : SSIM = ' + str(np.around(base_ssim_single_coil, 3)))
axs[1].axis('off')

plt.tight_layout()
plt.show()

$\leadsto$   There is a noticeable improvement in the reconstructed image quality when using multi-coil data compared to single-coil data:

  • With multi-coil data, the sum-of-squares combination exploits the complementary sensitivities of the coils, yielding a clearer reconstruction;

  • With single-coil data, this complementary information is not available.
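The multi-coil combination used above is the root-sum-of-squares (SOS); the following sketch shows it matches the `np.linalg.norm(..., axis=0)` call in the code:

```python
import numpy as np

def sum_of_squares(coil_images):
    """Root-sum-of-squares combination of per-coil images:
    (n_coils, H, W) complex -> (H, W) real magnitude image."""
    return np.sqrt(np.sum(np.abs(coil_images) ** 2, axis=0))

rng = np.random.default_rng(0)
coils = rng.normal(size=(4, 8, 8)) + 1j * rng.normal(size=(4, 8, 8))
sos = sum_of_squares(coils)  # identical to np.linalg.norm(coils, axis=0)
```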

In [ ]:
# Obtain SMaps
kspace_loc = convert_mask_to_locations(mask)
'''
convert_mask_to_locations(mask) : converts the mask into k-space sampling coordinates.
kspace_loc : the sampled k-space locations, used to compute the sensitivity maps (SMaps).
'''
Smaps, SOS = get_Smaps(
    k_space=kspace_obs,
    img_shape=fourier_op.shape,
    samples=kspace_loc,
    thresh=(0.01, 0.01),    # The cutoff threshold in each kspace direction
                            # between 0 and kspace_max (0.5)
    min_samples=kspace_loc.min(axis=0),
    max_samples=kspace_loc.max(axis=0),
    mode='gridding',
    method='linear',
    n_cpu=-1,
)
'''
get_Smaps() : computes the MRI sensitivity maps (SMaps), i.e. the spatial response of each coil, used for parallel MRI (pMRI) reconstruction.
Smaps : the sensitivity map of each coil, with Smaps.shape = (n_coils, 512, 512) (one 512x512 map per coil).
SOS : the sum-of-squares combination of the multi-channel data.
'''
# Visualize 15 of the coil sensitivity maps in a 3 x 5 grid
h = 3; w = 5
f, axs = plt.subplots(h, w)
for i in range(h):
    for j in range(w):
        axs[i, j].imshow(np.abs(Smaps[w * i + j]))   # coil index w*i + j (0..14)
        axs[i, j].axis('off')
plt.subplots_adjust(wspace=0, hspace=0)
plt.show()

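The self-calibration idea behind get_Smaps can be sketched directly: keep only the central fraction of k-space per coil, inverse-transform to low-resolution coil images, and normalize by their sum-of-squares. This is illustrative only (function name and `thresh` convention are assumptions; the real `get_Smaps` additionally supports gridding of arbitrary sample locations):

```python
import numpy as np

def estimate_smaps(kspace, thresh=0.05):
    """Toy self-calibration from centered Cartesian k-space
    (n_coils, nx, ny): low-pass per coil, then SOS-normalize."""
    n_coils, nx, ny = kspace.shape
    cx, cy = int(nx * thresh / 2), int(ny * thresh / 2)
    low = np.zeros_like(kspace)
    sl = (slice(None), slice(nx // 2 - cx, nx // 2 + cx),
          slice(ny // 2 - cy, ny // 2 + cy))
    low[sl] = kspace[sl]                     # keep only the k-space center
    coil_imgs = np.fft.fftshift(
        np.fft.ifft2(np.fft.ifftshift(low, axes=(-2, -1))), axes=(-2, -1))
    sos = np.sqrt((np.abs(coil_imgs) ** 2).sum(axis=0))
    return coil_imgs / (sos + 1e-12)         # unit SOS per pixel
```

By construction the estimated maps satisfy $\sum_\ell |S_\ell(r)|^2 \approx 1$ at every pixel, which is what makes the SOS image a natural coil combination.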

In [169]:
import os
print(os.environ.get('CUDA_HOME'))
/usr/local/cuda-12.6
In [170]:
import pysap

from mri.operators import NonCartesianFFT, WaveletUD2, WaveletN
from mri.operators.utils import convert_locations_to_mask, \
    gridded_inverse_fourier_transform_nd
from mri.reconstructors import SingleChannelReconstructor
from pysap.data import get_sample_data

# Third party import
from modopt.math.metrics import ssim
from modopt.opt.linear import Identity
from modopt.opt.proximity import SparseThreshold
import numpy as np
import matplotlib.pyplot as plt

FISTA optimization¶

We now refine the zero-order solution with a FISTA optimization. The cost function is the sum of the gradient (data-fidelity) term and the proximity (regularization) term.
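FISTA alternates a gradient step on the data-fidelity term with the soft-thresholding proximal step, plus a momentum extrapolation. A toy version for $\min_x \frac{1}{2}\|Ax - y\|_2^2 + \lambda\|x\|_1$ with a plain matrix $A$ (illustrative; in the reconstructor, $A$ plays the role of the composition $F_\Omega S_\ell \Psi^*$):

```python
import numpy as np

def fista(A, y, lam, n_iter=100):
    """Minimal FISTA for 0.5*||A x - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    z, t = x.copy(), 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ z - y)             # gradient step at the extrapolated point
        v = z - grad / L
        x_new = np.sign(v) * np.maximum(np.abs(v) - lam / L, 0)  # soft thresholding
        t_new = (1 + np.sqrt(1 + 4 * t ** 2)) / 2                # momentum update
        z = x_new + (t - 1) / t_new * (x_new - x)
        x, t = x_new, t_new
    return x
```

On noiseless data with a small `lam`, the iterates recover a sparse signal to high accuracy, which is the same mechanism the synthesis reconstruction exploits in the wavelet domain.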

In [171]:
# Setup the operators
linear_op = WaveletN(
    wavelet_name='sym8',
    nb_scale=4,
)
regularizer_op = SparseThreshold(Identity(), 1.5e-8, thresh_type="soft")
In [172]:
# Setup Reconstructor
reconstructor = SelfCalibrationReconstructor(
    fourier_op=fourier_op,
    linear_op=linear_op,
    regularizer_op=regularizer_op,
    gradient_formulation='synthesis',
    kspace_portion=0.01,
    verbose=1,
)
In [173]:
image_rec, costs, metrics = reconstructor.reconstruct(
    kspace_data=kspace_obs,
    optimization_alg='fista',
    num_iterations=200,
)
recon_ssim = ssim(image_rec, image)
plt.imshow(np.abs(image_rec), cmap='gray')
plt.title('Iterative FISTA Reconstruction : SSIM = ' + str(np.around(recon_ssim, 3)))
plt.show()
WARNING: Making input data immutable.
WARNING: Making input data immutable.
Lipschitz constant is 1.097248267178452
 - mu:  1.5e-08
 - lipschitz constant:  1.097248267178452
 - data:  (512, 512)
 - wavelet:  <mri.operators.linear.wavelet.WaveletN object at 0x7fa510498310> - 4
 - max iterations:  200
 - image variable shape:  (512, 512)
 - alpha variable shape:  (291721,)
----------------------------------------
Starting optimization...
100%|██████████| 200/200 [01:59<00:00,  1.67it/s]
 - final iteration number:  200
 - final log10 cost value:  6.0
 - converged:  False
Done.
Execution time:  119.48237159199198  seconds
----------------------------------------


POGM reconstruction¶

In [174]:
image_rec2, costs, metrics = reconstructor.reconstruct(
    kspace_data=kspace_obs,
    optimization_alg='pogm',
    num_iterations=200,
)
recon2_ssim = ssim(image_rec2, image)
plt.imshow(np.abs(image_rec2), cmap='gray')
plt.title('Iterative POGM Reconstruction : SSIM = ' + str(np.around(recon2_ssim, 3)))
plt.show()
WARNING: Making input data immutable.
Lipschitz constant is 1.097188038847508
 - mu:  1.5e-08
 - lipschitz constant:  1.097188038847508
 - data:  (512, 512)
 - wavelet:  <mri.operators.linear.wavelet.WaveletN object at 0x7fa510498310> - 4
 - max iterations:  200
 - image variable shape:  (1, 512, 512)
----------------------------------------
Starting optimization...
100%|██████████| 200/200 [02:07<00:00,  1.57it/s]
 - final iteration number:  200
 - final log10 cost value:  6.0
 - converged:  False
Done.
Execution time:  127.13897731900215  seconds
----------------------------------------


Q2  

  • Do the same for the CS-based (synthesis formulation) solutions (FISTA or POGM).
In [175]:
print(fourier_op.op(cartesian_ref_image)[0].shape)
(512, 512)
In [176]:
def reconstruction(coil):
    if coil == "multi_coil":
        fourier_op = FFT(mask=mask, shape=image.shape,
                 n_coils=cartesian_ref_image.shape[0])
        kspace_obs = fourier_op.op(cartesian_ref_image)
    elif coil == "single_coil":
        fourier_op = FFT(mask=mask, shape=image.shape,
                 n_coils=1)
        # keep a (1, 512, 512) coil axis for the reconstructor
        kspace_obs = fourier_op.op(cartesian_ref_image[0])[None]

    # Setup the operators
    linear_op = WaveletN(
        wavelet_name='sym8',
        nb_scale=4,
    )
    regularizer_op = SparseThreshold(Identity(), 1.5e-8, thresh_type="soft")

    # Setup Reconstructor
    reconstructor = SelfCalibrationReconstructor(
        fourier_op=fourier_op,
        linear_op=linear_op,
        regularizer_op=regularizer_op,
        gradient_formulation='synthesis',
        kspace_portion=0.01,
        verbose=1,
    )

    image_rec, costs, metrics = reconstructor.reconstruct(
        kspace_data=kspace_obs,
        optimization_alg='pogm',
        num_iterations=200,
    )
    recon_ssim = ssim(image_rec, image)

    return image_rec, recon_ssim
In [177]:
fig, axs = plt.subplots(1, 2, figsize=(10, 5))

POGM_soln_multi_coil, POGM_ssim_multi_coil = reconstruction("multi_coil")
axs[0].imshow(np.abs(POGM_soln_multi_coil), cmap='gray')
axs[0].set_title('Iterative POGM Reconstruction (multi_coil) : SSIM = ' + str(np.around(POGM_ssim_multi_coil, 3)))
axs[0].axis('off')

POGM_soln_single_coil, POGM_ssim_single_coil = reconstruction("single_coil")
axs[1].imshow(np.abs(POGM_soln_single_coil), cmap='gray')
axs[1].set_title('Iterative POGM Reconstruction (single_coil) : SSIM = ' + str(np.around(POGM_ssim_single_coil, 3)))
axs[1].axis('off')

plt.tight_layout()
plt.show()
WARNING: Making input data immutable.
Lipschitz constant is 1.0971993561659803
 - mu:  1.5e-08
 - lipschitz constant:  1.0971993561659803
 - data:  (512, 512)
 - wavelet:  <mri.operators.linear.wavelet.WaveletN object at 0x7fa524173390> - 4
 - max iterations:  200
 - image variable shape:  (1, 512, 512)
----------------------------------------
WARNING: Making input data immutable.
Starting optimization...
100%|██████████| 200/200 [02:06<00:00,  1.58it/s]
 - final iteration number:  200
 - final log10 cost value:  6.0
 - converged:  False
Done.
Execution time:  126.77331336899078  seconds
----------------------------------------
WARNING: Making input data immutable.
WARNING: Making input data immutable.
Lipschitz constant is 1.099999988109965
 - mu:  1.5e-08
 - lipschitz constant:  1.099999988109965
 - data:  (512, 512)
 - wavelet:  <mri.operators.linear.wavelet.WaveletN object at 0x7fa526496250> - 4
 - max iterations:  200
 - image variable shape:  (1, 512, 512)
----------------------------------------
Starting optimization...
100%|██████████| 200/200 [00:21<00:00,  9.51it/s]
 - final iteration number:  200
 - final log10 cost value:  6.0
 - converged:  False
Done.
Execution time:  21.0308540550177  seconds
----------------------------------------

$\leadsto$   There is a noticeable improvement in the reconstructed image quality when using multi-coil data compared to single-coil data in the iterative POGM reconstruction:

  • With multi-coil data, the reconstruction exploits the complementary sensitivities of the coils and yields a clearer image;

  • With single-coil data, this complementary information is not available.

Analysis formulation: Condat-Vu reconstruction¶
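Before running the reconstructor, the Condat-Vu primal-dual scheme can be illustrated on a toy 1D analysis problem, $\min_x \frac{1}{2}\|x - y\|_2^2 + \lambda\|Dx\|_1$, with the finite-difference operator $D$ standing in for $\Psi$ and the identity standing in for $F_\Omega S_\ell$ (all names below are illustrative). The dual update is the prox of the conjugate of $\lambda\|\cdot\|_1$, i.e. the projection onto the $\ell_\infty$ ball of radius $\lambda$, and the step sizes satisfy the condition $1/\tau - \sigma\|D\|^2 \geq \beta/2$ that the reconstructor's log reports:

```python
import numpy as np

def condat_vu_tv(y, lam, n_iter=300):
    """Toy Condat-Vu solver for 0.5*||x - y||^2 + lam*||D x||_1
    with D the 1D forward-difference operator."""
    n = y.size
    D = np.diff(np.eye(n), axis=0)      # (n-1, n) finite-difference operator
    # beta = 1 (gradient Lipschitz), ||D||^2 <= 4 -> 1/0.3 - 0.5*4 >= 0.5
    sigma, tau = 0.5, 0.3
    x, u = y.astype(float).copy(), np.zeros(n - 1)
    for _ in range(n_iter):
        # primal step: gradient of the data-fidelity term plus dual feedback
        x_new = x - tau * ((x - y) + D.T @ u)
        # dual step: prox of the conjugate of lam*||.||_1 = clip to [-lam, lam]
        u = np.clip(u + sigma * (D @ (2 * x_new - x)), -lam, lam)
        x = x_new
    return x
```

A constant signal is a fixed point (its total variation is already zero), while a step signal gets its jump shrunk, which is the 1D analogue of the edge-preserving smoothing the analysis formulation applies through the undecimated wavelet transform.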

In [178]:
linear_op = WaveletUD2(
    wavelet_id=24,
    nb_scale=4,
)
regularizer_op = SparseThreshold(Identity(), 4e-7, thresh_type="soft")
In [179]:
# Setup Reconstructor
reconstructor = SelfCalibrationReconstructor(
    fourier_op=fourier_op,
    linear_op=linear_op,
    regularizer_op=regularizer_op,
    gradient_formulation='analysis',
    verbose=1,
)
In [180]:
x_final, costs, metrics = reconstructor.reconstruct(
    kspace_data=kspace_obs,
    optimization_alg='condatvu',
    num_iterations=200,
)
image_rec = pysap.Image(data=np.abs(x_final))
plt.imshow(np.abs(image_rec), cmap='gray')
recon_ssim = ssim(image_rec, image)
plt.title('Condat-Vu Reconstruction\nSSIM = {}'.format(recon_ssim))
plt.show()
WARNING: Making input data immutable.
Lipschitz constant is 1.0833965993430223
WARNING: <class 'mri.operators.linear.wavelet.WaveletUD2'> does not inherit an operator parent.
 - mu:  4e-07
 - lipschitz constant:  1.0833965993430223
 - tau:  0.9448185977588914
 - sigma:  0.5
 - rho:  1.0
 - std:  None
 - 1/tau - sigma||L||^2 >= beta/2:  True
 - data:  (512, 512)
 - wavelet:  <mri.operators.linear.wavelet.WaveletUD2 object at 0x7fa5107ba0d0> - 4
 - max iterations:  200
 - number of reweights:  0
 - primal variable shape:  (512, 512)
 - dual variable shape:  (2621440,)
----------------------------------------
Starting optimization...
100%|██████████| 200/200 [05:38<00:00,  1.69s/it]
 - final iteration number:  200
 - final cost value:  1000000.0
 - converged:  False
Done.
Execution time:  339.19452833998366  seconds
----------------------------------------


Q3   Analysis vs synthesis CS-pMRI reconstruction:

Using the same sparsifying transform (i.e. linear_op set to Daubechies 8),

compare the results between the analysis (Condat-Vu algorithm) and synthesis (FISTA) self-calibrated formulations in the multi-coil acquisition setting.

In [181]:
def reconstruction_v1(optimization_alg_):

    fourier_op = FFT(mask=mask, shape=image.shape,
                 n_coils=cartesian_ref_image.shape[0])
    kspace_obs = fourier_op.op(cartesian_ref_image)

    # Setup the operators
    linear_op = WaveletN(
        wavelet_name='db8',
        nb_scale=4,
    )
    regularizer_op = SparseThreshold(Identity(), 1.5e-8, thresh_type="soft")

    if optimization_alg_ == "fista":

        # Setup Reconstructor
        reconstructor = SelfCalibrationReconstructor(
            fourier_op=fourier_op,
            linear_op=linear_op,
            regularizer_op=regularizer_op,
            gradient_formulation='synthesis',
            kspace_portion=0.01,
            verbose=1,
        )

        image_rec, costs, metrics = reconstructor.reconstruct(
            kspace_data=kspace_obs,
            optimization_alg='fista',
            num_iterations=200,
        )
        recon_ssim = ssim(image_rec, image)

    elif optimization_alg_ == "condatvu":
        # Setup Reconstructor
        reconstructor = SelfCalibrationReconstructor(
            fourier_op=fourier_op,
            linear_op=linear_op,
            regularizer_op=regularizer_op,
            gradient_formulation='analysis',
            verbose=1,
        )

        x_final, costs, metrics = reconstructor.reconstruct(
            kspace_data=kspace_obs,
            optimization_alg='condatvu',
            num_iterations=200,
        )
        image_rec = pysap.Image(data=np.abs(x_final))
        recon_ssim = ssim(image_rec, image)

    return image_rec, recon_ssim
In [182]:
fig, axs = plt.subplots(1, 2, figsize=(10, 5))


FISTA_soln_multi_coil, FISTA_ssim_multi_coil = reconstruction_v1("fista")
axs[0].imshow(np.abs(FISTA_soln_multi_coil), cmap='gray')
axs[0].set_title('Iterative FISTA Reconstruction (multi_coil) : SSIM = ' + str(np.around(FISTA_ssim_multi_coil, 3)))
axs[0].axis('off')

CondatVu_soln_multi_coil, CondatVu_ssim_multi_coil = reconstruction_v1("condatvu")

axs[1].imshow(np.abs(CondatVu_soln_multi_coil), cmap='gray')
axs[1].set_title('Iterative CondatVu Reconstruction (multi_coil) : SSIM = ' + str(np.around(CondatVu_ssim_multi_coil, 3)))
axs[1].axis('off')

plt.tight_layout()
plt.show()
/home/home/miniconda3/envs/tensor/lib/python3.11/site-packages/mri/reconstructors/utils/extract_sensitivity_maps.py:86: UserWarning: Data Values seem to have rank 3 (>2). Using is_fft for now.
  warnings.warn('Data Values seem to have rank '
WARNING: Making input data immutable.
WARNING: Making input data immutable.
Lipschitz constant is 1.0972648969380223
 - mu:  1.5e-08
 - lipschitz constant:  1.0972648969380223
 - data:  (512, 512)
 - wavelet:  <mri.operators.linear.wavelet.WaveletN object at 0x7fa524134590> - 4
 - max iterations:  200
 - image variable shape:  (512, 512)
 - alpha variable shape:  (291721,)
----------------------------------------
Starting optimization...
100%|██████████| 200/200 [01:56<00:00,  1.71it/s]
 - final iteration number:  200
 - final log10 cost value:  6.0
 - converged:  False
Done.
Execution time:  116.62128731000121  seconds
----------------------------------------
WARNING: Making input data immutable.
WARNING: Making input data immutable.
WARNING: <class 'mri.operators.linear.wavelet.WaveletN'> does not inherit an operator parent.
Lipschitz constant is 1.0833981375885406
 - mu:  1.5e-08
 - lipschitz constant:  1.0833981375885406
 - tau:  0.9599701299126275
 - sigma:  0.5
 - rho:  1.0
 - std:  None
 - 1/tau - sigma||L||^2 >= beta/2:  True
 - data:  (512, 512)
 - wavelet:  <mri.operators.linear.wavelet.WaveletN object at 0x7fa52648ae90> - 4
 - max iterations:  200
 - number of reweights:  0
 - primal variable shape:  (512, 512)
 - dual variable shape:  (291721,)
----------------------------------------
Starting optimization...
100%|██████████| 200/200 [02:00<00:00,  1.66it/s]
 - final iteration number:  200
 - final cost value:  1000000.0
 - converged:  False
Done.
Execution time:  120.3171403669985  seconds
----------------------------------------

With the same operator settings,

linear_op = WaveletN(wavelet_name='db8', nb_scale=4)

regularizer_op = SparseThreshold(Identity(), 1.5e-8, thresh_type="soft")

and

num_iterations=200

the analysis (Condat-Vu) self-calibrated formulation yields a better reconstructed image quality than the synthesis (FISTA) formulation in the multi-coil acquisition setting.

Configuration¶

In [183]:
!nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2024 NVIDIA Corporation
Built on Tue_Oct_29_23:50:19_PDT_2024
Cuda compilation tools, release 12.6, V12.6.85
Build cuda_12.6.r12.6/compiler.35059454_0
In [184]:
import tensorflow as tf
print(tf.__version__)  # TensorFlow version
print(tf.config.list_physical_devices('GPU'))  # available GPU devices
2.18.0
[PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]
In [185]:
import torch
print(torch.__version__)  # PyTorch version
print(torch.cuda.is_available())  # is a GPU available?
print(torch.cuda.get_device_name(0))  # name of the GPU in use
print(torch.version.cuda)  # CUDA version
2.7.0.dev20250224+cu126
True
NVIDIA T1200 Laptop GPU
12.6