Extraction of Canine Cataract Object for Developing Handy Pre-diagnostic Tool with Fuzzy Stretching and ART2 Learning

Article information

Int. J. Fuzzy Log. Intell. Syst., Vol. 16, No. 1, pp. 21-26, March 2016
Publication date (electronic): March 31, 2016
doi: https://doi.org/10.5391/IJFIS.2016.16.1.21
Department of Computer Engineering, Silla University, Busan, Korea
Correspondence to: Kwang Baek Kim (gbkim@silla.ac.kr)
Received: March 1, 2016; Revised: March 14, 2016; Accepted: March 24, 2016.

Abstract

Canine cataract develops with aging and, if not treated in time, can cause blindness or require surgical treatment. The first observation must be made by the pet owner, but owners usually have neither the proper equipment nor the knowledge to recognize the abnormality. In this paper, we propose an intelligent image processing method that extracts a cataract-suspicious object from photographs taken with non-professional equipment, such as an ordinary digital camera or a cellular phone, so that even casual dog owners can make a pre-diagnosis of this surgery-requiring disease as early as possible. The experiment shows that the proposed method is successful in most cases, except when the dog's hair color is similar to that of the cataract.

1. Introduction

A cataract is an opacity within the lens. Like humans, dogs develop cataracts with age (often from about 8 years of age). A cataract can cause blurred vision, the entire lens may eventually become diffusely cloudy, and all functional vision may be lost [1]. An extensive cross-sectional survey revealed that the prevalence of cataract in the general canine population increases with age and that, by the age of 13.5 years, none of the dogs in the study population was free of some degree of lens opacity [2]. Numerous theories have been advocated as the cause of cataract. Cataracts may develop within weeks or slowly over years, in one or both eyes [3]. Treatment of canine cataract ranges from eye drops that delay its progression to artificial lens insertion by surgery, and the surgical methodology is decided by the age of the dog and its symptoms, in consideration of postoperative treatment [4]. Although cataract is a very serious disorder, treatment can be easier and simpler if dog owners have proper awareness of the disease and treatment is performed in time [1].

A pet dog usually signals to its owner that its health is at risk by unusual behavior or by changes in its body. In canine cataract cases, the dog may show a greater degree of attachment to its owner than usual and/or stagger while walking. However, without deep knowledge of canine diseases, owners tend to neglect such signs and depend only on regular checks by veterinarians, which can make the situation worse [5]. The first step in diagnosing cataract and other eye-related diseases in pet dogs is to identify abnormalities of the lens structure, while the final decision on whether and when surgery is necessary is up to the veterinarian. Many computer-assisted techniques have been developed to assist the medical expert, such as ultrasonography [6, 7], magnetic resonance imaging (MRI) [8], and a very recent endoscopic evaluation technique [9]. Furthermore, if characteristic features for the diagnosis can be extracted, machine learning techniques may be applied in this field [10].

However, since the patient is a dog with very limited ability to communicate its abnormalities to humans, a pre-diagnostic tool is needed for pet owners who have limited knowledge of animal diseases [5]. That is, people may need a handy pre-diagnosis software tool that indicates whether the pet has a cataract-suspicious object of non-negligible size. A complete evaluation of the eye by a veterinary ophthalmologist will then determine whether cataract treatment is necessary. Thus, in this paper, we propose an intelligent image processing method that detects cataract-suspicious dog eyeballs in ordinary cellular phone photographs for casual pet owners. The purpose of the system is to alert the pet owner as soon as possible when the pet shows eye-related abnormal behavior. Our goal in this research is therefore to extract cataract-suspicious objects from normal cellular phone photographs. The system does not need to be as accurate as the medical doctor's tools, such as ultrasonography or MRI, but its role is to draw the public's attention to their pets' complaints for preventive healthcare. Unfortunately, there has been no notable research on this type of pre-diagnostic canine cataract screening other than our previous effort [11].
In that study, we used bilinear interpolation to extend pixels in the image, but that approach often suffers from aliasing, and thus the extracted object had a non-negligible error. In this paper, we apply intelligent quantization with ART2 learning [12] to overcome that problem. Figure 1 shows the overall procedure of the proposed method.

Figure 1

Cataract extraction processes.

2. Fuzzy Stretching for Enhancing the Brightness Contrast

In this paper, the input image is an ordinary digital camera image and thus contains irregular pixel values; it may not have enough brightness contrast between the "bright" and the "dark" sides. The first task of our software is to find the boundary lines of the cataract-suspicious object. We therefore stretch the brightness values toward 0 and 255 as follows, so that the brightness contrast is effectively exaggerated and the boundary lines can be found as accurately as possible. The procedure is a modified version of the fuzzy stretching technique [13] for enhancing brightness contrast. First, the average brightness X_m of the M x N image is computed as Eq. (1).

(1) $X_m = \frac{1}{M \times N} \sum_{l=0}^{255} X_l$

Let $X_m$ be the average brightness value of the image of size $M \times N$, and let $X_h$ and $X_l$ be the brightest and darkest pixel values. The distances from the mean to the brightest and to the darkest pixels are defined as in Eq. (2).

(2) $D_{max} = X_h - X_m, \quad D_{min} = X_m - X_l$

The brightness adjustment value is computed as shown in Eq. (3).

(3) $\text{if } (X_m > 128),\ adjustment = 255 - X_m$
    $\text{else if } (X_m \le D_{min}),\ adjustment = D_{min}$
    $\text{else if } (X_m \le D_{max}),\ adjustment = D_{max}$
    $\text{else},\ adjustment = X_m$

Thus, the maximum, minimum, and center points of the brightness, which form the fuzzy membership triangle, are defined as follows:

(4) $I_{max} = X_m + adjustment, \quad I_{min} = X_m - adjustment$
(5) $I_{mid} = \frac{I_{max} + I_{min}}{2}$

The membership function of each pixel in the region of interest is given in Figure 2.

Figure 2

Fuzzy membership function.

where $I_{min}$ and $I_{max}$ are the minimum and maximum brightness values of the given region and $I_{mid}$ is the midpoint of the two. The cut point $\alpha_{cut}$ in Figure 2 is computed as follows:

(6) $\text{if } (I_{min} > 0),\ \alpha_{cut} = \frac{I_{min}}{I_{max}}; \quad \text{else } \alpha_{cut} = 0.5$

The degree of membership of a pixel, $\mu(X)$, is defined as in Eq. (7).

(7) $\text{if } (X \le I_{min}) \text{ or } (X \ge I_{max}),\ \mu(X) = 0$
    $\text{if } (X > I_{mid}),\ \mu(X) = \frac{I_{max} - X}{I_{max} - I_{mid}}$
    $\text{if } (X < I_{mid}),\ \mu(X) = \frac{X - I_{min}}{I_{mid} - I_{min}}$
    $\text{if } (X = I_{mid}),\ \mu(X) = 1$

The upper limit value ($\beta$) and the lower limit value ($\alpha$) are defined as the highest and lowest brightness values among the pixels whose membership degree exceeds the cut point $\alpha_{cut}$. These limits are then applied in Eq. (8) to compute the final stretched value of each pixel.

(8) $f(X) = \frac{X - \alpha}{\beta - \alpha} \times 255$
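As an illustration only, the following Python sketch puts Eqs. (1)-(8) together for a grayscale image stored as a NumPy array. It is not the C# implementation used in our system; the function name fuzzy_stretch, the use of NumPy, and the small guards for degenerate images are choices made here for the sketch.

import numpy as np

def fuzzy_stretch(gray):
    # Illustrative sketch of Eqs. (1)-(8); gray is a 2-D uint8 array.
    x = gray.astype(np.float64)
    x_m = x.mean()                          # Eq. (1): average brightness
    x_h, x_l = x.max(), x.min()
    d_max, d_min = x_h - x_m, x_m - x_l     # Eq. (2)

    # Eq. (3): brightness adjustment value
    if x_m > 128:
        adjustment = 255 - x_m
    elif x_m <= d_min:
        adjustment = d_min
    elif x_m <= d_max:
        adjustment = d_max
    else:
        adjustment = x_m

    i_max, i_min = x_m + adjustment, x_m - adjustment   # Eq. (4)
    i_mid = (i_max + i_min) / 2.0                       # Eq. (5)

    # Eq. (6): alpha-cut of the triangular membership function
    alpha_cut = i_min / i_max if i_min > 0 else 0.5

    # Eq. (7): membership degree of every pixel
    mu = np.zeros_like(x)
    upper = (x > i_mid) & (x < i_max)
    lower = (x > i_min) & (x < i_mid)
    mu[upper] = (i_max - x[upper]) / (i_max - i_mid)
    mu[lower] = (x[lower] - i_min) / (i_mid - i_min)
    mu[x == i_mid] = 1.0

    # stretching limits: extreme brightness values whose membership
    # exceeds the alpha-cut (fall back to the full range if none do)
    selected = x[mu > alpha_cut]
    if selected.size == 0:
        selected = x
    beta, alpha = selected.max(), selected.min()
    if beta == alpha:                       # flat image: nothing to stretch
        return gray.copy()

    stretched = (x - alpha) / (beta - alpha) * 255.0    # Eq. (8)
    return np.clip(stretched, 0, 255).astype(np.uint8)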

After stretching, a noise removal process is needed; we therefore apply a simple binarization and the associated image processing operations, such as erosion and dilation, to form the labeled object that is a suspicious cataract region (a minimal sketch of this step follows below). The effect of fuzzy stretching is shown in Figure 3.
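A minimal sketch of this clean-up step is given below, assuming a fixed threshold of 128 and a 3x3 structuring element; both values are illustrative assumptions, not taken from the paper, and scipy.ndimage stands in for the morphological routines of our C# implementation.

import numpy as np
from scipy import ndimage

def clean_binary(stretched, threshold=128):
    # Simple binarization of the fuzzy-stretched image; the threshold
    # value is an assumption, the paper does not specify it.
    binary = stretched >= threshold
    # erosion followed by dilation removes small specks of noise while
    # roughly preserving the shape of the remaining object
    struct = np.ones((3, 3), dtype=bool)
    binary = ndimage.binary_erosion(binary, structure=struct)
    binary = ndimage.binary_dilation(binary, structure=struct)
    # label connected components so the suspicious object can be picked out
    labels, num = ndimage.label(binary)
    return labels, num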

Figure 3

Effect of fuzzy stretching. (a) Input image. (b) Fuzzy stretched image.

3. Binarization Using ART2 Learning Based Quantization

Since the input image of our software is not produced by the regular medical equipment used in hospitals but by various casual digital devices such as cellular phone cameras, the cataract region of the image may consist of pixels of various colors. Thus, we quantize them with an ART2 learning procedure, and binarization is applied afterward. ART2 learning is a type of neural network learning that repeats until the change of the cluster center vectors becomes negligible, using the already learned patterns. The detailed steps of ART2 applied in this paper are shown in Table 1, and its effect is shown in Figure 4.

Table 1

Applied ART2 algorithm
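As a rough sketch of the steps in Table 1, the Python code below clusters pixel vectors with a winner-take-all search, a vigilance (distance) test, and an incremental center update. The vigilance value, the iteration limit, and the NumPy-based formulation are illustrative assumptions and not the parameters of our C# implementation.

import numpy as np

def art2_quantize(pixels, vigilance=20.0, max_iter=10):
    # Rough sketch of the ART2-style clustering in Table 1.
    # pixels: (N, d) array of pixel vectors (d = 3 for RGB, d = 1 for gray).
    # vigilance and max_iter are illustrative values only.
    centers = []   # cluster center vectors w_j
    counts = []    # number of patterns assigned to each cluster

    for _ in range(max_iter):
        prev = [c.copy() for c in centers]
        for x in pixels:
            x = x.astype(np.float64)
            if not centers:                      # first pattern opens cluster 0
                centers.append(x)
                counts.append(1)
                continue
            # Step 2: winner cluster = nearest center
            dists = [np.linalg.norm(x - c) for c in centers]
            j = int(np.argmin(dists))
            if dists[j] <= vigilance:
                # Step 3: within the radius, include x and move the center
                centers[j] = (x + centers[j] * counts[j]) / (counts[j] + 1)
                counts[j] += 1
            else:
                # too far from every cluster: x forms a new cluster
                centers.append(x)
                counts.append(1)
        # Step 5: stop when the center vectors no longer change
        if len(prev) == len(centers) and all(
                np.allclose(p, c) for p, c in zip(prev, centers)):
            break

    centers = np.array(centers)
    # quantization: label every pixel with its nearest cluster center
    diffs = pixels[:, None, :].astype(np.float64) - centers[None, :, :]
    labels = np.argmin(np.linalg.norm(diffs, axis=2), axis=1)
    return centers, labels

For an RGB image img of shape (H, W, 3), this sketch would be invoked as centers, labels = art2_quantize(img.reshape(-1, 3)), after which labels.reshape(H, W) gives the quantized cluster map.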

Figure 4

The effect of ART2 quantization. (a) Input image. (b) ART2 quantization.

After quantization, the binarization procedure is performed based on the brightness values of the cluster centers; a small sketch of this step follows, and the result is shown in Figure 5.
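For example, continuing the sketch above, each pixel can be mapped to white or black according to the brightness of its cluster center; the threshold of 128 is again an illustrative assumption rather than a value taken from the paper.

import numpy as np

def binarize_by_centers(centers, labels, shape, threshold=128):
    # centers: (C, d) cluster centers from the quantization step;
    # labels: (N,) cluster index per pixel; shape: (H, W) of the image.
    # The threshold value 128 is an assumption made for illustration.
    brightness = centers.mean(axis=1)            # brightness of each center
    binary = (brightness[labels] >= threshold).reshape(shape)
    return binary.astype(np.uint8) * 255         # white = cataract-suspicious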

Figure 5

The effect of binarization. (a) Quantized image. (b) Binarized image.

4. Extracting Cataract with 8-Directional Contour Tracking

After binarization, we apply 8-directional contour tracing [14] to form the target cataract object. Figure 6 shows the scan directions of the contour tracing. The tracing is performed twice (from top to bottom and from bottom to top) for reliability, and a labeling procedure is then applied to form the oval shape of the target; erosion and dilation operators are applied during this process. Figure 7 demonstrates the tracing result.
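The sketch below illustrates one pass of such a tracer: a Moore-style boundary follower that visits the 8 neighbors of the current contour point in clockwise order. It is a simplified stand-in for our implementation; in particular, it performs only a single top-to-bottom pass and omits the second bottom-to-top pass and the labeling described above.

import numpy as np

# the 8 tracing directions, clockwise, starting from "east" (row, col offsets)
DIRS = [(0, 1), (1, 1), (1, 0), (1, -1), (0, -1), (-1, -1), (-1, 0), (-1, 1)]

def trace_contour(binary):
    # Return the 8-directional outer contour of the first object found
    # in a binary image (nonzero = object) as a list of (row, col) points.
    rows, cols = binary.shape
    start = None
    for r in range(rows):                  # first object pixel in raster order
        for c in range(cols):
            if binary[r, c]:
                start = (r, c)
                break
        if start:
            break
    if start is None:
        return []

    contour = [start]
    current, prev_dir = start, 0
    while True:
        for k in range(8):
            # resume the clockwise search two directions back from the
            # last move, so the tracer hugs the object boundary
            d = (prev_dir + 6 + k) % 8
            r, c = current[0] + DIRS[d][0], current[1] + DIRS[d][1]
            if 0 <= r < rows and 0 <= c < cols and binary[r, c]:
                current, prev_dir = (r, c), d
                break
        else:                              # isolated single pixel: stop
            return contour
        if current == start:               # contour closed: stop
            return contour
        contour.append(current)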

Figure 6

Eight-directional contour tracing.

Figure 7

The effect of 8-directional contour tracing. (a) Binarized image. (b) 8-directional traced image.

Figure 8 demonstrates the cataract extraction procedure with histogram analysis.

Figure 8

Cataract extraction process. (a) Histogram before noise removal. (b) Histogram after noise removal. (c) Contour tracing. (d) Extracting cataract.

5. Experiment

The system is implemented in Visual Studio 2010 C# on a PC with an Intel(R) Core(TM) i7-4700 CPU @ 2.40 GHz and 8.0 GB RAM. Forty real-world dog eye photographs (30 with cataract and 10 without cataract) are used in this experiment.

Figure 9 shows a snapshot of the implemented system with an example of extracting a cataract-suspicious object from an ordinary photograph.

Figure 9

Snapshot of the implemented system.

As one can see from Table 2, which summarizes the experimental results, the proposed software does not produce any false negatives, but there are a few cases of failed extraction. In most cases, the proposed method successfully extracts the cataract when the image contains one and extracts nothing when the image contains no cataract. However, in the case of Figure 10(b), the software could not discriminate the white hair around the eye from the cataract in the ART2 clustering process, which resulted in an inaccurate extraction. Otherwise, the proposed method is sufficiently effective in extracting canine cataract.

Table 2

Experiment result

Figure 10

Successful and failed cases of cataract extraction. (a) Various successful cataract extraction. (b) Failed cataract extraction. (c) Image without cataract.

6. Conclusion

Computer-assisted medical tools are usually designed for medical doctors, who have deep domain knowledge, to make more accurate decisions. From the viewpoint of public health management, however, ordinary people should also have a chance to notice a possible disease as early as possible. Especially when the patient is a pet dog, which has limited ability to communicate an uncomfortable body condition, it is important that the anomaly be observed as soon as possible on the pet owner's side. In this paper, we propose an intelligent computer vision methodology to extract canine cataract from digital camera photographs. A series of carefully designed image processing algorithms, including fuzzy stretching, ART2 learning for quantization, 8-directional contour tracing, and subsequent noise removal processes, enables us to extract canine cataract from images taken with non-professional equipment such as a cellular phone camera. Unfortunately, there were failed extraction cases in which the hair color of the dog was inseparable from the cataract, but otherwise the proposed method is verified as effective for casual pet dog owners to check, as early as possible, whether their dog has a cataract problem. We expect that a similar vision-based methodology can be applied to extract glaucoma and to give a pre-diagnosis of such abnormalities as soon as possible.

Notes

Conflict of Interest

No potential conflict of interest relevant to this article was reported.

References

1. Raghuvanshi, P.D.S., and S.K. Maiti. Canine cataracts and its management: an overview. Journal of Animal Research. 3(1):17–26. 2013.
2. Williams, D.L., M.F. Heath, and C. Wallis. Prevalence of canine cataract: preliminary results of a cross-sectional study. Veterinary Ophthalmology. 7(1):29–35. 2004. http://dx.doi.org/10.1111/j.1463-5224.2004.00317.x.
3. McCalla, T.L. Cataract in dogs. Animal Eye Care LLC, Bellingham. http://www.waltham.com. 2005.
4. Yoo, S.J. An outline about a canine cataract. Journal of the Korean Veterinary Medical Association. 40(8):708–716. 2004.
5. Kim, K.B., D.H. Song, and Y.W. Woo. Machine intelligence can guide pet dog health pre-diagnosis for casual owner: a neural network approach. International Journal of Bio-Science and Bio-Technology. 6(2):83–90. 2014. http://dx.doi.org/10.14257/ijbsbt.2014.6.2.08.
6. Martins, B.C., A.P. Ribeiro, J.P.D. Ortiz, C.B.S. Lisbão, A.L.G. Souza, D. Brooks, and J.L. Laus. Ultrasonographic analysis of senile cataractous lens of dogs and its correlation to phacoemulsification. Arquivo Brasileiro de Medicina Veterinaria e Zootecnia. 63(5):1104–1112. 2011. http://dx.doi.org/10.1590/S0102-09352011000500010.
7. Dar, M., D.K. Tiwari, D.B. Patil, and P.V. Parikh. B-scan ultrasonography of ocular abnormalities: a review of 182 dogs. Iranian Journal of Veterinary Research. 15(2):122–126. 2013.
8. Lizak, M.J., K. Mori, T.L. Ceckler, R.S. Balaban, and P.F. Kador. Quantitation of galactosemic cataracts in dogs using magnetization transfer contrast-enhanced magnetic resonance imaging. Investigative Ophthalmology & Visual Science. 37(11):2219–2227. 1996.
9. Abd-Elhamid, M.A., K.M. Ali, and A.M. Ayman. Endoscopic evaluation for the anterior and posterior segment of the eye: a new and useful technique for diagnosis of glaucoma in dogs. Life Science Journal. 11(11):233–237. 2014.
10. Gupta, S., and A.M. Karandikar. Diagnosis of diabetic retinopathy using machine learning. Journal of Research and Development. 3(2):1–6. 2015. http://dx.doi.org/10.4172/jrd.1000127.
11. Kim, K.B., H.J. Park, and D.H. Song. Extracting canine cataract object from normal cellular phone image for casual pet. In: Proceedings of the 7th International Conference on Information, Taipei, Taiwan; 2015.
12. Carpenter, G.A., and S. Grossberg. ART 2: self-organization of stable category recognition codes for analog input patterns. Applied Optics. 26(23):4919–4930. 1987. http://dx.doi.org/10.1364/AO.26.004919.
13. Kim, K.B., and D.H. Song. Defect detection method using fuzzy stretching and ART2 learning from ceramic images. International Journal of Software Engineering and Its Applications. 8(9):29–38. 2014.
14. Gonzalez, R.C., and R.E. Woods. Digital Image Processing. 2nd ed. Upper Saddle River, NJ: Prentice Hall; 2002.

Biography

Kwang Baek Kim received his M.S. and Ph.D. degrees from the Department of Computer Science, Pusan National University, Busan, Korea, in 1993 and 1999, respectively. Since 1997, he has been a professor at the Department of Computer Engineering, Silla University, Korea. He is currently an associate editor of the Journal of Intelligence and Information Systems and The Open Artificial Intelligence Journal (USA). His research interests include fuzzy neural networks and their applications, bioinformatics, and image processing.

E-mail : gbkim@silla.ac.kr


Table 1

Applied ART2 algorithm

Step 1. Let $x_k$ be the $k$th input pattern and $O_j$ be the center of the $j$th cluster.
 Definitions:
  • Set of input patterns $X = \{x_1, x_2, \ldots, x_N\}$
  • Set of clusters $O = \{o_1, o_2, \ldots, o_C\}$
  • $N$ = number of input patterns
  • $C$ = number of clusters
  • $T$ = total number of iterations
Step 2. Select the winner cluster $O_{j^*}$ that satisfies
 $O_{j^*} = \min_{j} \lVert x_k - w_{jk} \rVert$
Step 3. Perform the similarity test on the new input pattern. If the input pattern is within the radius of the winner cluster, it is included in the cluster and the center is adjusted as
 $w_{j^*k}^{new} = \frac{x_k + w_{j^*k}^{old} \times |Cluster_{j^*}^{old}|}{|Cluster_{j^*}^{old}| + 1}$
 If the distance between the input and the center is larger than the vigilance parameter, the input is independent of the cluster and forms a new cluster by itself.
Step 4. Repeat Step 1 to Step 3 for all input patterns.
Step 5. Stop learning when the number of repetitions exceeds the predefined number or there is no change in the center vectors.

Table 2

Experiment result

Image               Extracted   Failed   Total
With cataract           10         0       10
Without cataract        28         2       30
Total                   38         2       40