I. INTRODUCTION
The phase contrast imaging technique developed by Zernike [1] is noninvasive, which makes it suitable for observing live cells. However, phase contrast images typically contain halo artifacts that obscure the details along the boundaries of the specimen, making them unsuitable for direct image segmentation or measurement. Therefore, image restoration is crucial for producing artifact-free images, which are useful for automated image segmentation and analysis.
The linear imaging model for microscopic image formation can be described mathematically using a point spread function (PSF), which indicates how a point in the object space is spread out in the image. Thus, the relation between the ideal image x and the recorded image y is a convolution on a background followed by the addition of noise:

y = h ∗ x + b + n,   (1)

where ∗ denotes 2D convolution, h is the PSF, and b and n indicate the background and noise, respectively. The first step of image restoration is to remove the background to produce a flat background. Background estimation methods have been described previously [2–5]. After preprocessing, (1) can be written in matrix notation

y = Hx + n,   (2)

where H is the convolution matrix corresponding to the PSF.
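For concreteness, the forward model (1) can be simulated in a few lines. The following sketch uses an assumed toy PSF, background, and noise level; none of these values come from the paper.

```python
# Simulation of the linear imaging model (1): y = h * x + b + n.
# The PSF h, background b, and noise level are illustrative assumptions.
import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(0)
x = np.zeros((64, 64))
x[24:40, 24:40] = 1.0                       # ideal image: a bright square
h = np.full((5, 5), 1.0 / 25.0)             # toy PSF (uniform blur)
b = 0.1 * np.ones_like(x)                   # constant background
n = 0.01 * rng.standard_normal(x.shape)     # additive Gaussian noise

y = fftconvolve(x, h, mode="same") + b + n  # recorded image
```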
The PSF is often band-limited, such that its frequency response displays zero or near-zero values at high frequencies. Therefore, directly inverting the PSF on the blurred image amplifies noise enormously [6–9], and problem (2) is well known to be ill-posed. A classical way to overcome an ill-posed problem is to replace it with a well-posed one by incorporating regularization terms into the least-squares estimation. Tikhonov regularization is the most commonly used technique for least-squares estimation [10, 11].
Image restoration problems with constraints have been investigated widely [12–17]. In this paper, we formulate the image restoration problem as a constrained convex optimization problem and propose a finite step method (FSM) for its solution. The outline of this paper is as follows. In Section II, we describe the image restoration model as the minimization of a convex cost function, which consists of a least-squares fitting term incorporating a Gaussian filter and of regularization terms, subject to nonnegativity constraints, and we propose an FSM to solve the resulting optimization problem. In Section III, we apply the proposed method to the phase contrast microscopic image restoration problem; we provide numerical examples and compare our method with that of Yin et al. [18] and with the gradient projection method (GPM) [19]. Finally, we conclude this work in Section IV.
II. METHODS
As a preprocessing step, it is first necessary to remove the background from recorded microscopic images to produce a flat background and compensate for non-uniform illumination [20]. We estimate the background using the “rolling ball” algorithm proposed previously [2]. The corrected image is computed by subtracting the estimated background image from the recorded image. Next, we apply a Gaussian filter to the corrected image to reduce the influence of noise on the performance of the image restoration algorithm [21]. We compensate for the additional blurring introduced by the Gaussian filter by convolving the PSF with the same Gaussian filter. Therefore, we consider the following model for restoration of the microscopic image:

min ½‖G(Hx − y)‖² + (λ/2)‖Lx‖² + (μ/2)‖x‖² subject to x ≥ 0,   (3)

where G is a Gaussian filter matrix, L is a Laplacian matrix defining the smoothness regularization, and λ, μ are the regularization parameters. Problem (3) can be rewritten as follows:

min f(x) = ½xᵀCx − dᵀx subject to x ∈ Ω = {x ∈ ℝⁿ | x ≥ 0},   (4)

where

C = (GH)ᵀGH + λLᵀL + μI, d = (GH)ᵀGy.
It is easy to verify that C is a symmetric positive definite matrix, f(·) is a strictly convex quadratic function, and the constraint set is convex. Therefore, problem (4) is a constrained convex optimization problem.
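A minimal sketch of the preprocessing pipeline described above is given below. It assumes scikit-image's rolling_ball as one available implementation of the rolling ball algorithm [2]; the radius and Gaussian width sigma are assumed values, not parameters reported in the paper.

```python
# Preprocessing: background removal and Gaussian smoothing of both the
# corrected image and the PSF, as described in the text.
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.restoration import rolling_ball

def preprocess(recorded, psf, radius=50, sigma=1.0):
    background = rolling_ball(recorded, radius=radius)  # estimate b
    corrected = recorded - background                   # flat background
    y = gaussian_filter(corrected, sigma)               # G applied to image
    psf_g = gaussian_filter(psf, sigma)                 # G applied to PSF
    return y, psf_g
```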
We consider the constrained quadratic minimization problem

min f(x) = ½xᵀCx − dᵀx,   (5)

subject to x ∈ Ω = {x ∈ ℝⁿ | x ≥ 0},   (6)

where C is a positive definite symmetric matrix and d ∈ ℝⁿ is a given vector. We formulate the optimality condition for problem (5)–(6).
Theorem 1. Let x* ∈ Ω. Then x* is a solution of problem (5)–(6) if and only if

Cx* − d ≥ 0, (x*)ᵀ(Cx* − d) = 0.   (7)
Proof. Necessity. We construct a Lagrange function for problem (5)–(6),

L(x, λ₀, λ) = λ₀f(x) − λᵀx,

where λ₀ ∈ ℝ and λ ∈ ℝⁿ are Lagrange multipliers. Let x* be a solution of the considered problem. We write the optimality condition for problem (5)–(6) at the point x* using the Lagrange function. Then, there exist a number λ₀* ≥ 0 and a vector λ* ≥ 0, with (λ₀*, λ*) ≠ 0, such that the following condition holds:

λ₀*f′(x*) − λ* = 0, (λ*)ᵀx* = 0.   (8)

It is obvious that λ₀* ≠ 0: if λ₀* = 0, then (8) gives λ* = 0, which contradicts (λ₀*, λ*) ≠ 0. Hence, we may take λ₀* = 1, and condition (8) is equivalent to the following form:

Cx* − d = λ*, λ* ≥ 0, (λ*)ᵀx* = 0.   (9)

If λ* = 0, then Cx* − d = 0 and (x*)ᵀ(Cx* − d) = 0. Hence, condition (7) holds trivially. In the same way, condition (7) is easily obtained in the case of λ* ≠ 0. Thus, we have established that conditions (7) and (9) are equivalent.
Sufficiency. By the convexity of the function f(x), we have f(x) − f(x*) ≥ (x − x*)ᵀf′(x*). Given that f′(x*) = Cx* − d, this inequality becomes f(x) − f(x*) ≥ xᵀ(Cx* − d) − (x*)ᵀ(Cx* − d). By virtue of condition (7), the first term on the right-hand side is nonnegative for every x ∈ Ω and the second term vanishes. Hence, f(x) − f(x*) ≥ 0 for all x ∈ Ω, which completes the proof.
By introducing the index set I(x) = {i | xᵢ = 0, 1 ≤ i ≤ n} at a point x ∈ Ω, the optimality condition (7) for problem (5)–(6) can be reformulated as follows.
Theorem 2. A point x* ∈ Ω is a solution of problem (5)–(6) if and only if

(Cx* − d)ᵢ ≥ 0 for i ∈ I(x*), and (Cx* − d)ᵢ = 0 for i ∉ I(x*).   (10)
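Condition (10) translates directly into a numerical test. The sketch below checks it up to an assumed tolerance `tol`:

```python
import numpy as np

def is_optimal(C, d, x, tol=1e-8):
    """Check the optimality condition (10) at a feasible point x >= 0."""
    g = C @ x - d                    # gradient f'(x) = Cx - d
    active = x <= tol                # index set I(x)
    return bool(np.all(g[active] >= -tol) and
                np.all(np.abs(g[~active]) <= tol))
```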
We define the auxiliary problem in the following form:

min f(x) subject to xᵢ = 0, i ∈ I,   (11)

where I is a subset of the indices {1,2,…,n}.
The following lemma establishes a connection between problems (5) – (6) and (11).
Lemma 2. If x* is a solution of problem (5)–(6), then it is also a solution of problem (11) for I = I(x*).
Proof. If I(x*) = ∅, then it is obvious that x* > 0, and Theorem 2 implies that f′(x*) = Cx* − d = 0. On the other hand, x* = C⁻¹d is the solution of (11) for I = ∅. Thus, the assertion of the lemma holds in this case. Let I(x*) ≠ ∅. We write the Lagrange function for problem (11),

L(x, λ₀, λI) = λ₀f(x) + Σ_{j∈I} λⱼxⱼ,
where λ₀ ∈ ℝ and λI = {λⱼ | j ∈ I} are Lagrange multipliers, m(I) is the cardinality of the set I, and I = I(x*). As x* is a solution of problem (5)–(6), Theorem 2 applies at x*. Set λ₀ = 1 and λⱼ = −(Cx* − d)ⱼ for j ∈ I(x*). Then,

(Cx* − d)ⱼ + λⱼ = 0, j ∈ I(x*),   (12)

and, by condition (10),

(Cx* − d)ⱼ = 0, j ∉ I(x*).   (13)

Combining conditions (12) and (13), we have

f′(x*) + Σ_{j∈I(x*)} λⱼeⱼ = 0,   (14)

where eⱼ denotes the j-th coordinate vector. This is the necessary condition for problem (11) with Lagrange multipliers at the point x*. As problem (11) is convex, (14) is also a sufficient condition. Consequently, the point x* is the solution of problem (11), which completes the proof.
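Because problem (11) only fixes the components indexed by I at zero, it reduces to an unconstrained quadratic problem in the remaining variables, which can be solved by the conjugate gradient method. A sketch, assuming dense C and d for simplicity:

```python
import numpy as np
from scipy.sparse.linalg import cg

def solve_auxiliary(C, d, I):
    """Solve problem (11): minimize f(x) subject to x_i = 0 for i in I."""
    n = C.shape[0]
    free = np.setdiff1d(np.arange(n), np.fromiter(I, dtype=int))
    x = np.zeros(n)
    if free.size:                              # reduced problem in R^{n-m(I)}
        x_free, info = cg(C[np.ix_(free, free)], d[free])
        x[free] = x_free
    return x                                   # free components may be negative
```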
Before we consider the numerical method for solving problem (5)–(6), we introduce the definition of a stationary point.
Definition 1. A point v ∈ ℝⁿ is called a stationary point for problem (5)–(6) if v ∈ Ω and v is a solution of (11) for some I ⊂ {1,2,…,n}.
Proposition 1. There exists an FSM for solving problem (5) – (6).
Proof. As the set {1,2,…,n} has a finite number of subsets I, and problem (11) has a unique solution by virtue of the strong convexity of the function f(x), the number of stationary points of problem (5)–(6) is finite. By Lemma 2, to find the solution of problem (5)–(6), it is sufficient to examine all of its stationary points and identify the one that minimizes the value of the function f(x). As problem (11) is an unconstrained minimization problem in the space ℝⁿ⁻ᵐ⁽ᴵ⁾, it can be solved by the conjugate gradient method in a finite number of steps, at most n − m(I). Thus, finding the solution to problem (5)–(6) finishes in a finite number of steps.
In practice, of course, a brute-force search through all the stationary points of problem (5)–(6) would require too much computation even for moderate values of n. We therefore introduce a more efficient iterative method for finding stationary points of problem (5)–(6). This method can be divided into two stages. The first stage involves moving from a feasible point x0 ∈ Ω to a stationary point x̅ ∈ Ω, such that f(x̅) ≤ f(x0). In the second stage, we check whether the stationary point x̅ is a solution of problem (5)–(6), and if not, we find a feasible point x̃, such that f(x̃) < f(x̅).
As a result, we construct a sequence of stationary points along which the function f(x) is strictly decreasing, so it is impossible to return to a stationary point obtained previously. Since the number of stationary points is finite, when the process of generating the sequence terminates, the current stationary point is a solution of problem (5)–(6). We consider these schemes in detail.
Scheme 1. Moving from a feasible point to a stationary point.
Let x0 ∈ Ω be a given point. We need to find a stationary point x̅, such that f(x̅) ≤ f(x0). For this purpose, we construct sequences xk ∈ Ω, k = 1,2,…, and x̅k ∈ ℝⁿ, k = 0,1,2,…, as follows. Assume that xk ∈ Ω has already been constructed for a given k = 0,1,…; then we take as x̅k the solution of problem (11) for I = Ik = I(xk), computed by the conjugate gradient method.
Note that

f(x̅k) ≤ f(xk),   (15)

as xk ∈ ΩI = {x ∈ ℝⁿ | xᵢ = 0, i ∈ I(xk)}, i.e., xk is feasible for problem (11) with I = I(xk).
There are two possible cases: x̅k ∈ Ω or x̅k ∉ Ω. If x̅k ∈ Ω, then by definition x̅k is a stationary point of problem (5)–(6). Let x̅k ∉ Ω. Then, we construct a point

xk+1 = xk + λk(x̅k − xk),   (16)

where the parameter λk is determined from the following condition:

λk = max{λ ∈ [0, 1] | xk + λ(x̅k − xk) ∈ Ω},   (17)

and obviously, λk ≠ 1.
Using the definition of Ω in problem (5) – (6) and definition (17), we can find an explicit formula to calculate the parameter λk.
Indeed, from definition (17), we have

xk + λ(x̅k − xk) ≥ 0,

or, componentwise,

xᵢᵏ + λ(x̅ᵢᵏ − xᵢᵏ) ≥ 0, i = 1,2,…,n.

This inequality holds trivially for all i ∈ I(xk), as xᵢᵏ = x̅ᵢᵏ = 0, and for all i with x̅ᵢᵏ ≥ 0, as λ ≤ 1. For the remaining indices, we obtain λ ≤ xᵢᵏ/(xᵢᵏ − x̅ᵢᵏ). We define the index set

Jk = {i ∉ I(xk) | x̅ᵢᵏ < 0}.

As x̅k ∉ Ω, there exists a number j ∈ {1,2,…,n}\I(xk) such that x̅ⱼᵏ < 0. Furthermore, xⱼᵏ > 0. Hence, Jk ≠ ∅. Then, it is easy to see that λk is defined as

λk = min{xᵢᵏ/(xᵢᵏ − x̅ᵢᵏ) | i ∈ Jk}.
It is obvious that 0 < λk < 1. By construction of the sequence {xk}, we have xk+1 ∈ Ω. On the other hand, taking into account (15) and the convexity of f(x), we conclude that

f(xk+1) ≤ (1 − λk)f(xk) + λkf(x̅k) ≤ f(xk).   (18)
We verify that I(xk) ⊂ I(xk+1) and I(xk) ≠ I(xk+1). Indeed, if i ∈ I(xk), then xᵢᵏ = x̅ᵢᵏ = 0, and therefore xᵢᵏ⁺¹ = 0, i.e., i ∈ I(xk+1). On the other hand, for the index j₀ ∈ Jk at which the minimum defining λk is attained, it is clear that the j₀-th component of xk+1 equals zero, and therefore j₀ ∈ I(xk+1), but j₀ ∉ I(xk). Consequently, the set I(xk+1) contains at least one more element than I(xk). Thus, the next approximation xk+1 ∈ Ω is constructed, such that f(xk+1) ≤ f(xk), and the set I(xk+1) is strictly larger than I(xk).
However, the set I(xk) ⊂ {1,2,…,n} cannot expand indefinitely. Therefore, the described process terminates at some k-th iteration with x̅k ∈ Ω, and x̅ = x̅k is a stationary point of problem (5)–(6). Thus, by virtue of (15) and (18), we have f(x̅) ≤ f(xk) ≤ f(xk−1) ≤ … ≤ f(x0).
Note that if f(x̅) = f(x0), then the uniqueness of the solution of problem (11) for I = I(x0) implies x̅ = x0. Thus, x̅ = x0 or f(x̅) < f(x0).
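Under the notation above, one pass of Scheme 1 can be sketched as follows; it reuses `solve_auxiliary` from the earlier sketch, and `tol` is an assumed numerical tolerance:

```python
import numpy as np

def scheme1(C, d, x0, tol=1e-12):
    """Move from a feasible point x0 to a stationary point (Scheme 1)."""
    x = x0.copy()
    while True:
        I = np.flatnonzero(x <= tol)             # I(x^k)
        xbar = solve_auxiliary(C, d, I)          # solution of (11)
        if np.all(xbar >= -tol):
            return np.maximum(xbar, 0.0)         # xbar in Omega: stationary
        J = np.flatnonzero(xbar < -tol)          # index set J_k
        lam = np.min(x[J] / (x[J] - xbar[J]))    # explicit formula for lambda_k
        x = x + lam * (xbar - x)                 # step (16)
        x[x <= tol] = 0.0                        # I(x^{k+1}) strictly grows
```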
Scheme 2. Check the optimality of the stationary point.
Let the stationary point x̅ = x̅k be obtained as the result of the previous scheme. It is necessary to check whether x̅ is the solution of problem (5)–(6) or to find a point x̃, such that f(x̃) < f(x̅). For this purpose, we first check the optimality condition (7) at the point x̅, i.e.,

Cx̅ − d ≥ 0, x̅ᵀ(Cx̅ − d) = 0.
If this condition holds, then we conclude that x̅ is a solution of the problem. Otherwise, we need to perform one iteration of the GPM [19], starting from the point x̅.
We construct the point x(α) as follows:

x(α) = PrΩ(x̅ − αf′(x̅)), α > 0,

where PrΩ(·) denotes the projection onto Ω, i.e., (PrΩ(z))ᵢ = max{zᵢ, 0}. Then, the parameter α is determined from the condition

f(x(α)) < f(x̅).

A number α̅ > 0 satisfying the inequality f(PrΩ(x̅ − α̅f′(x̅))) < f(x̅) can be found in a finite number of steps by checking this inequality sequentially for α = λᵏ, k = 0,1,…, where λ is a fixed number from (0,1), until it holds. After the number α̅ is found, x̃ = PrΩ(x̅ − α̅f′(x̅)) is constructed. In this way, we construct the FSM to solve problem (5)–(6), as shown in Table 1.
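Combining the two schemes gives the overall method. The sketch below uses the `scheme1` and `is_optimal` helpers from above; the backtracking factor λ = 0.5 and the iteration cap are assumed choices:

```python
import numpy as np

def fsm(C, d, x0, tol=1e-8, lam=0.5, max_outer=1000):
    """Finite step method for problem (5)-(6)."""
    f = lambda z: 0.5 * z @ (C @ z) - d @ z
    x = np.maximum(x0, 0.0)
    for _ in range(max_outer):
        x = scheme1(C, d, x)                           # Scheme 1
        if is_optimal(C, d, x, tol):                   # condition (7)/(10)
            return x
        g = C @ x - d                                  # Scheme 2: one GPM step
        alpha, fx = 1.0, f(x)
        while alpha > 1e-16 and f(np.maximum(x - alpha * g, 0.0)) >= fx:
            alpha *= lam                               # alpha = lam^k backtracking
        x = np.maximum(x - alpha * g, 0.0)             # new feasible point
    return x
```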
III. EXPERIMENTAL RESULTS
The proposed method was tested using a phase contrast microscopic image sequence of bovine aortic endothelial cells (BAEC), provided by Professor Takeo Kanade’s Cell Image Analysis Group in collaboration with Lee E. Weiss, Research Professor at the Robotics Institute of Carnegie Mellon University, and with Phil Campbell, Research Professor at the Institute for Complex Engineered Systems of Carnegie Mellon University (Cell Image Analysis Data Archive, retrieved April 2, 2013 from http://zzyin.vasc.ri.cmu.edu/archive/index.php). Image sequences were acquired using a Leica DMI 6000B phase contrast microscope at ×10 magnification at 5-min intervals over 16 h; 210 images in total were captured at a resolution of 1040×1392 pixels per image.
In our experiment, we used the theoretical PSF for phase contrast microscopy, which is given as [18]

PSF(u, v) = δ(u, v) − airy(√(u² + v²)),

where airy(r) is an obscured Airy pattern [22],

airy(r) = (R·J₁(2πRr) − W·J₁(2πWr))/r,

with the parameters R = 4000 and W = 800, and J₁(·) is the first-order Bessel function of the first kind. The kernel size of the PSF is 11×11. We set the initialization as x0 = 0.
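A sketch of the kernel construction follows. The radial sampling step `dr` is an assumed value, since the physical pixel spacing is not stated above, and the kernel is scaled only for numerical convenience:

```python
import numpy as np
from scipy.special import j1   # first-order Bessel function of the first kind

def airy(r, R=4000.0, W=800.0):
    """Obscured Airy pattern [22]."""
    r = np.where(r == 0.0, 1e-12, r)          # avoid division by zero at r = 0
    return (R * j1(2 * np.pi * R * r) - W * j1(2 * np.pi * W * r)) / r

def phase_contrast_psf(size=11, dr=1e-4):
    """11x11 phase contrast PSF: PSF(u, v) = delta(u, v) - airy(r)."""
    c = size // 2
    u, v = np.meshgrid(np.arange(size) - c, np.arange(size) - c)
    psf = -airy(np.hypot(u, v) * dr)
    psf[c, c] += 1.0                          # discrete delta at the center
    return psf / np.abs(psf).sum()            # scale for numerical convenience
```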
In Figure 1, the first column shows sample phase contrast microscopic images with increasing cell densities in the field of view. The second column shows the corresponding restoration results. After restoration, the cells are revealed as brighter pixels against a uniformly black background.

We evaluate the proposed method according to the convergence speed and the accuracy of cell counting in comparison with the method of Yin et al. and the GPM. To evaluate the convergence speed of the image restoration methods, we used the relative reconstruction error defined in [23],

ε(k) = ‖x(k) − x‖/‖x‖,

where x is the image to be reconstructed and x(k) is the reconstruction after k iterations. Figure 2 shows the relative reconstruction error as a function of the number of iterations for three sample images, together with the elapsed times for each method. In all experiments, the FSM outperformed the method of Yin et al. and the GPM in both the number of iterations and the total computational time, even though the time for a single FSM iteration was higher than that of either of the other methods.
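For reference, this error measure amounts to the following helper, where `x` denotes the reference image:

```python
import numpy as np

def relative_error(xk, x):
    """Relative reconstruction error after k iterations."""
    return np.linalg.norm(xk - x) / np.linalg.norm(x)
```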
We denote the cell and background as positive (P) and negative (N), respectively.
The accuracy of cell counting is defined as

accuracy = TP/(TP + FP + FN) × 100%,

where true positive (TP) represents a cell correctly identified as a cell, false positive (FP) represents background incorrectly identified as a cell, and false negative (FN) represents a cell incorrectly identified as background. Human experts counted cells manually in the 23rd, 85th, and 136th sample images from the image sequence for use as benchmarks. Table 2 shows the accuracy of cell counting. The FSM gives the highest accuracy for all sample images.
Table 2. Accuracy of cell counting (%).

| Methods | Image 23 | Image 85 | Image 136 |
|---|---|---|---|
| Yin et al. method | 66.99 | 67.43 | 76.03 |
| GPM | 80.40 | 84.85 | 85.97 |
| FSM | 83.82 | 85.60 | 88.55 |
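Given TP, FP, and FN counts from a detection result, the accuracy values in Table 2 follow directly from the definition above:

```python
def counting_accuracy(tp, fp, fn):
    """Cell counting accuracy (%) as defined in the text."""
    return 100.0 * tp / (tp + fp + fn)
```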
IV. CONCLUSION
The halo effect in phase contrast microscopy causes difficulty for automated image analysis. Image formation in phase contrast microscopy has been modeled as a convolution of the specimen with the point spread function of the microscope, superimposed on a background and distorted by additive noise. Based on this linear imaging model, we formulated image restoration as a constrained convex optimization problem and incorporated a Gaussian filter into the model to reduce the influence of noise on the restoration result. We proposed an FSM to solve the resulting optimization problem. The proposed method is fast and converges to the global minimum. In the restored images, cells appear as bright objects on a uniformly zero background, and the halo effect is eliminated completely. The proposed method also outperforms the compared methods in cell counting accuracy. In conclusion, the proposed method is a promising technique for image restoration, and its results can be used for high-quality image processing and analysis tasks, such as cell segmentation and cell counting.