
International Journal of Clinical & Medical Images

2376-0249

Clinical-Medical Image - International Journal of Clinical & Medical Images (2023) Volume 10, Issue 5

Optical Coherence Tomography Imitation by Fundus Image Processing and Machine Learning

Author(s): Ahmet Narmanli*

Department of Engineering, Middle East Technical University, 07070 Konyaaltı/Antalya, Turkey

*Corresponding Author:
Ahmet Narmanli
Department of Engineering
Middle East Technical University
07070 Konyaaltı/Antalya, Turkey
Tel: +90 5352184672
E-mail: ahmet@uraltelekom.com, narmanli.ahmet@gmail.com

Received: 20 April 2023, Manuscript No. ijcmi-23-96686; Editor assigned: 21 April 2023, Pre QC No. P-96686; Reviewed: 05 May 2023, QC No. Q-96686; Revised: 10 May 2023, Manuscript No. R-96686; Published: 17 May 2023, DOI:10.4172/2376-0249.1000895

Citation: Narmanli A. (2023) Optical Coherence Tomography Imitation by Fundus Image Processing. Int J Clin Med Imaging 10: 895.

Copyright: © 2023 Narmanli A. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.


Abstract

Glaucoma is a leading cause of blindness worldwide. The disease alters the structure of the optic nerve head. Screening for the condition typically requires manually estimating the size of the structural elements of the optical disc over an extended period, usually from color fundus photographs. Other techniques, such as optical coherence tomography (OCT), have been developed to compute diagnostic measurements of the optic nerve head automatically; unfortunately, OCT is expensive and requires specialized imaging equipment. Hence, to support the assessment of essential disorders such as glaucoma, this paper presents a methodology for producing approximate OCT-like optical disc surface diagrams without tomography, using image processing and machine learning procedures applied to fundus photographs. The stages entail extracting an optical disc ROI with an object detection model such as a Single Shot Detector and enhancing the depth of the cup across the relevant area with a manually tuned colormap filter. Several perspective-based 3D views are then generated to clarify the geometry of each sample's optical disc. The results depict the efficiency of the proposed approach. The goal is to give medical personnel a better visual aid for diagnosing irregularities.

Introduction

Fundus images are widely used for the initial evaluation of ocular disorders. When an ophthalmologist sees slight changes in the optic nerve head (ONH) in a patient's fundus photographs, they might recommend therapy. Optical coherence tomography (OCT), on the other hand, is a rapid imaging method for quantitatively evaluating the retina's layers. Retinal layer morphology can be studied using OCT images to paint a complete picture of eye disease [1]. OCT has become increasingly popular as a medical imaging method in recent years and has been adopted as a diagnostic tool in cosmetics, cardiology and ophthalmology. OCT can create volumetric cross-sectional images of tissues, allowing tissue structure and characteristics to be investigated without causing any damage to the tissues themselves [2].

Detecting sight-threatening disorders requires optical disc visualization at the appropriate clinical stage. OCT, a tomography-based ophthalmic technique, produces cross-sectional scans of ocular structures. Unfortunately, OCT is not widely deployed because of its high price and restricted field of view. A retinal fundus image, a photograph of the back of the eye, is a cost-effective, non-invasive imaging technique that documents the retina's anatomy and structure and accurately visualizes the retinal vasculature. Researchers have therefore turned their attention to methods based on digital image processing and machine learning, and optical disc glaucoma research in particular is in high demand along this path [3]. Machine learning (ML) is a research branch that allows computers to learn without being explicitly programmed. In contrast to traditional computer programs, machine learning software adjusts the details of its algorithms as it gathers more experience and data. As a result of this ability to learn from experience, ML systems can produce insightful predictions within the settings of their algorithms. Machine learning is used in various fields, including healthcare [4], object detection [5], computer vision [6], sports [7], medical imaging [8], education [9] and many more.

Accordingly, this paper aims to use image processing and ML techniques to produce OCT-like images from retinal fundus images in order to detect medical anomalies such as glaucoma. Previous techniques with alternative approaches, such as those using stereo pictures [10,11], address OCT imitation through image processing. These methods shed light on extracting depth data from the optical disc zone: the basic concept is to apply image processing to photographs taken at varying angles to generate a depth gradient. That line of research focuses on image-processing strategies that prioritize accuracy over visual appeal. Nevertheless, such techniques rely on two images to depict depth, making them impractical for most applications. In addition to meeting quality requirements, the proposed approach enables 3D depth visualization to be extracted from a single optical disc image, and it is argued to be more robust than the alternatives while using a less complex methodology.

The significant contributions of this research are as follows. An image equivalent to the OCT output is obtained by applying a series of processing steps to fundus images. The quality of the 3D image is evaluated using the ROI of the optical disc, obtained with an object detection network; only images meeting specific criteria enter the pipeline, which improves performance and reliability. The accuracy of the generated 3D images, such as the optical disc depth or cup width, can be checked by comparing them with the corresponding OCT images. The segmentation result is applied to the 3D image to enrich it with additional features. Such a glaucoma-oriented approach may ultimately lead to a complete comprehension of the cup and disc area in the image, aiding medical specialists in diagnosing the anomaly.

The paper is organized as follows. Section II reviews the literature on using fundus and OCT images for glaucoma detection and on producing different forms of images with image processing and ML techniques. The proposed architecture is presented in Section III. Section IV describes the pre-processing steps performed on the input fundus image and the optical disc detection. The methodology adopted for OCT replication is presented in Section V. Section VI discusses the results and the efficiency of the proposed approach. Finally, the conclusion is presented in Section VII.

Literature Review

A decade ago, image processing techniques were already familiar tools for detecting diseases from medical modalities. Nevertheless, image processing alone could not handle large datasets, which created the need for machine learning techniques that can process large amounts of data in minutes and produce optimized models with improved accuracy. ML methods for unstructured data such as images require an intermediate feature extraction step involving various image processing techniques. Hence, many researchers apply ML and image processing techniques to fundus and OCT images to detect medical abnormalities such as glaucoma in the eye.

Various researchers have used OCT and fundus images for segmenting the optic nerve head, the disc and other structures. Others focus on producing richer imaging from fundus images by adopting image processing and ML techniques.

The study by Miri MS, et al. [12] presented a multimodal method for detecting and segmenting the optic disc and cup borders by combining complementary information from fundus pictures and spectral-domain optical coherence tomography (SD-OCT) volumes. The task is posed as an optimization problem and solved with a machine-learning, graph-based approach. Specifically, a random forest classifier creates separate in-region cost functions for the foreground, middle-ground and background areas. The volumes are then resampled into radial scans, making the Bruch's Membrane Opening (BMO) end-points more visible. The disc-boundary cost function is built with a random forest classifier, like the in-region cost functions, except that the features are derived from the radial projection image using the Haar Stationary Wavelet Transform (SWT). The method is tested in a leave-one-out manner on 25 multimodal image pairs from 25 subjects.

Cazañas-Gordón A, et al. [13] presented a method that uses pairs of stereo fundus photographs to estimate the thickness of the optic nerve head structure and create a 3D model of the optic nerve head and its structural elements. Reconstructing the ONH in three dimensions from disparity calculations of markers in two stereoscopic fundus images offers a low-cost alternative to high-end imaging technologies such as OCT. The findings demonstrate that, compared with fundus photography and stereoscopy, 3D modeling gives access to morphological characteristics that are not visible in traditional fundus photography, improving the diagnostic information available for determining optical disc depth.

Alternatively, Tavakkoli A, et al. [14] introduced a deep-learning conditional generative adversarial network (GAN) that can transform fundus photographs into fluorescein angiography (FA) images. The proposed GAN significantly outperforms two other state-of-the-art generative algorithms in producing anatomically accurate angiograms with fidelity comparable to FA pictures. Expert studies further confirm that the approach generates FA images of such high quality that they cannot be distinguished from clinical angiograms.

On the other hand, Toğaçar M [15] proposed a deep learning (DL) approach that processes fundus images through pre-processing stages, namely morphological gradient and segmentation approaches, with the aim of improving the performance of CNN models in identifying diabetic retinopathy. Two freely available datasets are used for experimentation and the Atom Search Optimization approach is used to improve accuracy. The suggested method achieves a highest accuracy of 99.81%.

Many researchers also work on classifying fundus and OCT images of eye diseases and detecting glaucoma. Correspondingly, Gaddipati DJ and Sivaswamy J [16] introduced a method that uses images from both the OCT and fundus modalities to train a model that maps fundus features onto the OCT feature space, so that glaucoma can be detected from a single modality (fundus). On a diverse sample of 568 photographs, the suggested model outperformed a model trained with only fundus characteristics, achieving an AUC of 0.9429 and a sensitivity of 0.9044. Cross-validation on over 1,600 images showed that the suggested model outperformed the state-of-the-art method by 8% to 18% on both datasets.

Likewise, Wang M, et al. [17] used a DL method for the four-class classification of ocular diseases. The authors analyze retinal OCT images with three convolutional neural network (CNN) models of five, seven and nine layers to identify the various retinal layers, extract useful information, track new deviations and forecast various eye deformities. The findings show that, with a classification accuracy of 96.5%, the suggested model beats manual ophthalmological diagnosis.

Similar to the above studies, Kumar Y and Gupta S [18] used ML-based techniques trained and validated on numerous photographs of eye disorders, covering diabetic macular edema (DME), choroidal neovascularization (CNV), drusen, glaucoma, cataracts and normal eyes. Different transfer learning models were used to predict eye illnesses: basic CNN, deep CNN, AlexNet 2, Xception, Inception V3, ResNet50 and DenseNet121. The simulation outcomes show that ResNet50 beat all other methods, achieving a validation accuracy of 98.9%. The Xception model also performed well, with an accuracy of 98.4%, training and validation losses of 0.15 and 0.05, respectively, and a best root mean squared error of 0.22.

Likewise, Bandyopadhyay S, et al. [19] proposed an automated method for detecting diabetic retinopathy and glaucoma. The technique uses image processing methods such as the discrete wavelet transform as a pre-processing step for feature extraction. Different ML techniques, including k-nearest neighbors, multilayer perceptron, multinomial Naïve Bayes, decision tree and ensemble methods such as voting and stacking, are used to detect the diseases. Among all the techniques, the stacking ensemble performs best, achieving an accuracy of 79.87%.

Fathima and Subhija also suggested an effective glaucoma detection technique using retinal fundus images and optical coherence tomography (OCT) images of the same eye. The structural feature analysis of the retinal fundus image compares K-means clustering and Otsu thresholding methods, while the OCT image is used to calculate the thickness of the retinal nerve fiber layer (RNFL). Fundus and OCT images are classified as glaucomatous, non-glaucomatous or glaucoma suspect using a multiclass Support Vector Machine (SVM). A public dataset and photographs from an eye hospital comprising glaucomatous, non-glaucomatous and glaucoma-suspect cases are used in this work. The accuracy of the suggested approach is 90% for fundus images and 92% for OCT images [20].

The literature analysis shows that the stage of glaucoma can be determined using both the retinal fundus image and the OCT image. The problem with OCT imaging, however, is that it is rarely available and expensive, whereas fundus images can be obtained at a meager cost. Most of the research uses image processing and ML-based solutions either to produce one modality's images from another or to detect glaucoma from fundus and OCT images.

Additionally, previous research builds on optical attributes of fundus imagery and obtains depth information from that perspective. Such calculations locate the optical disc in images taken at different angles and derive the 3D result from them, with quality requirements and optical disc cropping handled along the way. In contrast, this paper presents an approach that combines image processing and ML techniques in a single process flow from the raw fundus image to the final 3D image. The previous methods still provide a more precise picture of the depth with less noise; they aim at precision, whereas the proposed approach claims robustness.

Proposed architecture

An overarching architecture for the suggested approach is provided to give a well-structured perspective of the process. Figure 1 shows the proposed system architecture, depicting the complete process from the input fundus image to cropping, evaluating the optical disc's quality and obtaining the 3D imaging. The relevant models and processing steps can be swapped to create a different system with an extended method, such as segmentation or classification for anomaly diagnosis. As depicted in Figure 1, the proposed architecture describes the overall process for converting fundus images into a 3D optical disc map. Various pre-processing steps are performed, such as cropping and resizing the image and cropping the optical disc. The quality of the optic disc is then assessed, the colormap is applied and finally a 3D optical disc map is generated. A detailed description of the proposed methodology is given in the sections below.


Figure 1. Proposed architecture of 3D OCT imitation process with descriptions.

Pre-Processing steps

Before beginning the image processing step that creates the 3D surface, it is essential to select suitable images and apply the proper filters to enhance the optical disc area. An object detection model that locates optical discs is another component needed to advance toward a start-to-end application.

Optical disc detection: An object detection model is used to find the optical disc in arbitrary fundus images. A collection of annotated optical disc and fovea locations was used to fit this model. Since these areas have dominant characteristics, the detection task is relatively easy to solve, and with a collection of open-source fundus images such models become reasonably simple to create after a few training iterations. Although YOLO, EfficientDet or CenterNet could also be employed, a Single Shot Detector (SSD) structure is used in this implementation. Figure 2 illustrates the model's prediction for a randomly selected fundus image.


Figure 2. Optical disc detection on a random fundus image predicted by the model.
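The paper does not include the detection code itself. As a rough sketch of how an off-the-shelf SSD could be run on a fundus image to obtain the disc ROI, assuming a pretrained TensorFlow Hub model stands in for the author's fine-tuned optic-disc detector (file names and the "take the top-scoring box" rule are illustrative assumptions):

```python
import cv2
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub

# Illustrative pretrained SSD from TF Hub; the paper fine-tunes its own SSD
# on annotated optic-disc/fovea boxes rather than using this generic model.
detector = hub.load("https://tfhub.dev/tensorflow/ssd_mobilenet_v2/2")

image_bgr = cv2.imread("fundus.jpg")                         # raw fundus photograph
image_rgb = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2RGB)
inputs = tf.convert_to_tensor(image_rgb[np.newaxis, ...], dtype=tf.uint8)

outputs = detector(inputs)                                   # dict of detection tensors
boxes = outputs["detection_boxes"][0].numpy()                # normalized [ymin, xmin, ymax, xmax]
scores = outputs["detection_scores"][0].numpy()

h, w = image_rgb.shape[:2]
best = int(np.argmax(scores))                                # assume the top box is the optic disc
ymin, xmin, ymax, xmax = boxes[best]
disc_roi = image_bgr[int(ymin * h):int(ymax * h), int(xmin * w):int(xmax * w)]
disc_roi = cv2.resize(disc_roi, (320, 320))                  # match the 320x320 size used downstream
```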

Data quality is crucial for the success of this object detection step and filtering out outlier data supports the results. To maintain a healthy dataset for object detection, pre-annotated datasets such as RIGA [22] can be used, with bounding boxes created around the optical disc areas from the segmentation masks using OpenCV's boundingRect function [21]. Figure 3 displays a real-world illustration in which the bounding box of the optical disc segmentation markings is first detected and then transferred onto the direct color fundus image for the object detection dataset.


Figure 3. Optical disc object detection dataset through segmentation annotation on a fundus image. (a) The bounding rectangle of the mask with the boundingRect function and (b) The extracted points are applied to the fundus image to create the object detection dataset.
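A minimal sketch of this dataset-preparation step, assuming the RIGA-style annotations are available as binary optic-disc masks aligned with the fundus photographs (the file names are placeholders):

```python
import cv2

# Binary mask where the annotated optic disc is white (255) and the background is black.
mask = cv2.imread("riga_disc_mask.png", cv2.IMREAD_GRAYSCALE)
fundus = cv2.imread("riga_fundus.png")

# Take the largest mask contour and its bounding rectangle, as in OpenCV's boundingRect tutorial [21].
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))

# The (x, y, w, h) box becomes the object-detection label for the color fundus image;
# the crop itself can also be saved for the later quality-assessment stage.
disc_crop = fundus[y:y + h, x:x + w]
cv2.imwrite("optic_disc_crop.png", disc_crop)
print(f"bounding box: x={x}, y={y}, w={w}, h={h}")
```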

After creating the training set, the remaining tasks can be accomplished using established model training frameworks. Detecting optical discs is a relatively easy problem for object detection algorithms, where a deeper model merely costs inference time; hence, models with small input sizes (320 × 320 px) can be chosen.

Optical disc quality assessment: The exemplary optical discs for 3-dimensional visualization are selected using a dedicated quality assessment model. A set of cropped optical discs from the raw data is categorized as "good" or "reject" to train the model to distinguish optical discs with an apparent cup zone from those that are blurry or in which no cup can be detected.

MobileNetV3 [23] was used as the model to keep the run-time system lightweight. It successfully separates a set of 320 × 320 optical discs, with equal numbers of "good" and "reject" samples in TFRecord format, with an accuracy of 88%. The binary classification is thresholded at 0.77 on the 0 (reject) to 1 (good) spectrum to keep only edge-quality discs. To ensure that the model focuses on the intended region, the cup zone, a Grad-CAM [24] visualization is established. Visualizations for particular types of camera sources are shown in Figure 4.


Figure 4. Grad-CAM visualization of the optical disc quality assessment model on images classified as "good."
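The quality model itself is not published with the paper; the following is a minimal Keras sketch of a comparable binary "good"/"reject" classifier built on a MobileNetV3 backbone, with the 0.77 decision threshold mentioned above. The head layers, dropout rate and training details are assumptions, not the author's exact configuration.

```python
import tensorflow as tf

# MobileNetV3-Small backbone with a single sigmoid output: 1 = "good", 0 = "reject".
backbone = tf.keras.applications.MobileNetV3Small(
    input_shape=(320, 320, 3), include_top=False, pooling="avg", weights="imagenet")
quality_model = tf.keras.Sequential([
    backbone,
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
quality_model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# ... train on the balanced "good"/"reject" TFRecord set, then gate crops at 0.77 ...
def is_acceptable(disc_crop_batch, threshold=0.77):
    """Return a boolean per crop indicating whether predicted quality exceeds the edge threshold."""
    scores = quality_model.predict(disc_crop_batch, verbose=0)
    return scores[:, 0] > threshold
```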

OCT Replication

The Grad-CAM output displays a typical cup-focused pattern for samples classified as "good." Since the cup in the image is the primary cue for identifying a usable optical disc, this result is a prelude to the more advanced steps. Establishing a dataset for such an implementation requires self-dedicated labeling of the cropped optical discs received from the previous step. Since no obvious or general dataset exists for quality labeling of optical discs, a set must be created with apparent cups on optical discs from the desired fundus image sources. Taking well-shot optical discs from clean dataset sources (such as RIGA, REFUGE and others) as "good" and collecting arbitrary defective optical discs as "reject" samples is a workable way to establish a training set. Thresholds and other parameters of the custom implementation may vary. With a reliable optical disc prepared, the OCT replication is generated within the steps specified below. Such steps are tied to the optic disc type and may vary within an acceptable error margin.

Custom colormap application: The image mapping is altered to give a negative-like view that supports the 3-dimensional depth. Such a view is acquired by designing a custom colormap based on an inverse triangular enumeration of the color layers. A robust colormap set is achieved by tuning the bottom value iteratively, as shown below.

map_red = [255, 254, …, 89, 88, 89, …, 175, 176]

map_green = [255, 254, …, 35, 34, 35, …, 67, 68]

map_blue = [255, 254, …, 56, 55, 57, …, 109, 110]

This colormap scheme was obtained by intuitive trials to create a relative negative rendering of the image for depth-related purposes. Since it is primarily empirical, its structure can be tuned for alternative applications. The core idea of the colormap is to map the lower and higher ends of the RGB range to high output values and the mid-range to low values, which amplifies the edges in the surface matrix of the optical disc.

Equation 1 shows the general formulation of the colormap design, where i is the 8-bit input intensity of channel c and m_c denotes the tuned bottom value of that channel (m_red = 88, m_green = 34, m_blue = 55):

map_c(i) = 255 − i, for 0 ≤ i ≤ 255 − m_c
map_c(i) = i − 255 + 2·m_c, for 255 − m_c < i ≤ 255   (1)

After applying the colormap through OpenCV's lookup table (LUT) function [25] to several sources of quality-approved optical disc images, a reliable result is achieved, as shown in Figure 5. With reasonable iterations over different sources, this custom colormap produces the desired depth-visualizing negative image.


Figure 5. Custom colormap applied to a selection of optical discs from different camera and data sources.
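A minimal OpenCV sketch of this lookup-table application is given below. The triangular shape and the per-channel bottom values (88, 34, 55) follow the map lists above; the exact intermediate entries of the published maps were hand-tuned, so this idealized LUT is an approximation, and the file names are placeholders.

```python
import cv2
import numpy as np

def triangular_lut(bottom):
    """Descend from 255 to `bottom`, then climb back up, over 256 entries (Equation 1)."""
    down = np.arange(255, bottom - 1, -1)             # 255, 254, ..., bottom
    up = np.arange(bottom + 1, 2 * bottom + 1)        # bottom+1, ..., 2*bottom
    return np.concatenate([down, up]).astype(np.uint8)

# Bottom values tuned empirically in the paper for the blue, green and red layers (BGR order).
lut_bgr = np.stack([triangular_lut(55),               # blue
                    triangular_lut(34),               # green
                    triangular_lut(88)],              # red
                   axis=-1).reshape(1, 256, 3).astype(np.uint8)

disc = cv2.imread("optic_disc_crop.png")              # quality-approved 320x320 crop
mapped = cv2.LUT(disc, lut_bgr)                       # per-channel lookup, as with OpenCV's LUT [25]
cv2.imwrite("optic_disc_colormapped.png", mapped)
```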

Overall surface generation: With the essential tools in place, a start-to-end process generates the mesh surface for the optical disc area to verify the expected functionality. The crop size is specifically chosen to match the input size of the quality assessment model.

1) Extracting the optical disc: As the first step, the optical disc area is cropped from an arbitrary fundus image to the desired shape and qualified by the quality assessment model. The resulting optical disc image can be examined in Figure 6.


Figure 6. Cropped optical disc on selected quality filters for surface visualization approach.

2) Applying the colormap: The second step applies the custom colormap to enhance the depth of the cup area. The resulting image can be viewed in Figure 7. This color-filtering technique is the central element of the design, making it possible to derive the image's surface layer. The colormap can also provide a clearer view of the cup zone for segmentation annotation in glaucoma ML pipelines.


Figure 7. Colormap applied to an optical disc image to enhance the depth features of the cup zone.

3) Extracting the green layer: When the layers of the filtered image are examined, the green layer turns out to be the dominant carrier of the depth features. A grayscale image is produced, as depicted in Figure 8, after the cup-area image has also been center-cropped. A moderate crop of the optical disc is safe because the optical disc detection and quality models concentrate on the disc's center.


Figure 8. Green layer extracted as a grayscale image with a center cropped area.
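A small illustration of this step follows; the crop fraction is an assumption (the paper only states that a moderate center crop is safe) and the file names are placeholders.

```python
import cv2

mapped = cv2.imread("optic_disc_colormapped.png")     # output of the colormap step
green = mapped[:, :, 1]                               # OpenCV stores BGR, so index 1 is the green layer

# Moderate center crop around the cup; the exact fraction is an illustrative choice.
h, w = green.shape
margin_y, margin_x = h // 4, w // 4
cup_region = green[margin_y:h - margin_y, margin_x:w - margin_x]
cv2.imwrite("green_layer_center_crop.png", cup_region)
```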

4) Generating surfaces and profiles: As the last step, a simple mesh is generated from the resulting grayscale image and the depth is visualized in three dimensions with matplotlib tools [26]. Since the depth features were made dominant in the previous enhancement steps, a reliable result is achieved, as shown in Figure 9.


Figure 9. 3-dimensional depth visualization of the subjected optical disc image.
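A minimal matplotlib sketch of this grayscale-to-surface step, in the spirit of the recipe cited in [26]; the Gaussian smoothing and figure styling are assumptions rather than the author's exact settings.

```python
import cv2
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # noqa: F401  (registers the 3D projection)

gray = cv2.imread("green_layer_center_crop.png", cv2.IMREAD_GRAYSCALE).astype(float)
gray = cv2.GaussianBlur(gray, (9, 9), 0)              # optional smoothing before meshing

# Build the mesh grid: pixel coordinates in x/y, grayscale intensity as depth (z).
ys, xs = np.mgrid[0:gray.shape[0], 0:gray.shape[1]]

fig = plt.figure()
ax = fig.add_subplot(111, projection="3d")
ax.plot_surface(xs, ys, gray, cmap="inferno")         # "inferno" highlights the cup geography
ax.set_xlabel("x (px)")
ax.set_ylabel("y (px)")
ax.set_zlabel("intensity")
plt.show()
```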

The colormap for the mesh surface is set to "inferno" since it accentuates the area's topography and makes the cup zone easier to perceive. A series of further visualizations is derived from the resulting mesh surface; features are extracted from it to support better validation.

A profile view of the 3D plot is obtained by adjusting the viewpoint parameters, giving a better perspective on the result. The resulting plot is shown in Figure 10.


Figure 10. Side profile view of the generated mesh surface of the optical disc.

As further steps, projections of the surface onto the x-axis and y-axis are generated. This implementation is intended to support medical examination, as shown in Figure 11.


Figure 11. Projection views of the optical disc surface with respect to (a) the x-axis and (b) the y-axis.

As the last visualization, the z-axis projection is produced to enhance the view of the optical disc area on the same "inferno" colormap. The resulting image gives a better understanding of the depth, as shown in Figure 12.


Figure 12. z-axis projection of the resultant mesh surface to enhance the view of the optical disc and cup area.
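The paper does not spell out how the profile and projection views are produced. One plausible reading, sketched below, uses a low-elevation 3D viewpoint for the side profile, intensity profiles through the crop center for the x/y projections and a top-down "inferno" heatmap for the z projection; the viewpoint angles and profile positions are illustrative assumptions.

```python
import cv2
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # noqa: F401  (registers the 3D projection)

gray = cv2.imread("green_layer_center_crop.png", cv2.IMREAD_GRAYSCALE).astype(float)
ys, xs = np.mgrid[0:gray.shape[0], 0:gray.shape[1]]

fig = plt.figure(figsize=(10, 8))

# Side profile (Figure 10): the same surface viewed nearly edge-on by tuning the viewpoint.
ax1 = fig.add_subplot(2, 2, 1, projection="3d")
ax1.plot_surface(xs, ys, gray, cmap="inferno")
ax1.view_init(elev=5, azim=-90)                       # illustrative viewpoint parameters

# x- and y-axis projections (Figure 11): depth profiles through the image center.
ax2 = fig.add_subplot(2, 2, 2)
ax2.plot(gray[gray.shape[0] // 2, :]); ax2.set_title("profile along x")
ax3 = fig.add_subplot(2, 2, 3)
ax3.plot(gray[:, gray.shape[1] // 2]); ax3.set_title("profile along y")

# z-axis projection (Figure 12): top-down view of the surface on the same colormap.
ax4 = fig.add_subplot(2, 2, 4)
ax4.imshow(gray, cmap="inferno"); ax4.set_title("z projection")

plt.tight_layout()
plt.show()
```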

Hence, the resulting image set gives a better understanding of the cup area and optical disc characteristics, supporting professional medical diagnosis.

Comparison with OCT: To establish a reliable basis for the developed image processing algorithm, a comparison with OCT data was mandatory. A specimen OCT image set together with the original fundus photograph is therefore taken as the subject.

1) Generating surface and profile: Using the method described in the previous steps, the optical disc area is cropped from the fundus image that accompanies the OCT scan and the results are listed for visual comparison. At first view, the results are consistent with the surface and cross-sectional image of the optical disc. The resulting image grid for the comparison can be examined in Figure 13.


Figure 13. Comparison of the 3D visualization of the optical disc area obtained by image processing of the direct fundus image with the corresponding OCT outputs.

Hence, the visual consistency of the results provides a basis of reliability; checking the depth and area of the optical disc numerically is a further test of the image processing-based visualization.

2) Checking for numerical integrity: A ratio test of the pertinent regions is used to assess the depth and area produced by the proposed OCT-imitation system; a ratio compatible with the actual OCT measurements indicates reliability. The numerical comparison is conducted on the grayscale values of the final green-layer image. The thresholds are chosen so that the cup and disc portions fit the corresponding image regions, rather than treating the entire image as the cup, as follows:

min(img_green) ≤ cup_mask ≤ min(img_green) + 3·std(img_green)

This three-standard-deviation bound provides a reliable result for the visual output of the image; in the tests involving this particular image, the corresponding disc area is stated as follows:

min(img_green) ≤ disc_mask ≤ min(img_green) + 5.3·std(img_green)

The subsequent computation sums the 8-bit values of the pertinent regions, collected out to the disc's border, to give the depth volumes of the cup and disc areas in Equation 2:

Vol_cup = Σ(img_green[cup_mask])   (2)

Vol_disc = Σ(img_green[disc_mask])
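A numpy sketch of these thresholding and volume computations, following the bounds and sums given above (the input file name is a placeholder for the final green-layer crop):

```python
import cv2
import numpy as np

img_green = cv2.imread("green_layer_center_crop.png", cv2.IMREAD_GRAYSCALE).astype(float)

lo, sd = img_green.min(), img_green.std()
cup_mask = img_green <= lo + 3.0 * sd                 # pixels within 3 standard deviations of the minimum
disc_mask = img_green <= lo + 5.3 * sd                # wider band covering the whole disc

area_cup, area_disc = int(cup_mask.sum()), int(disc_mask.sum())
vol_cup = float(img_green[cup_mask].sum())            # Equation 2: sum of 8-bit values inside each mask
vol_disc = float(img_green[disc_mask].sum())

print(f"area ratio (disc/cup):   {area_disc / area_cup:.6f}")
print(f"volume ratio (disc/cup): {vol_disc / vol_cup:.6f}")
```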

Results and Discussion

Since the corresponding values are also available from the tomography scan outputs, the pixel-based values of the proposed method can be compared against the OCT metric values, as in Table 1.

Table 1: Numerical comparison of OCT and image processing.

                           DSP (pixels)    OCT (normalized)
Area (Disc)                9221            2.228054
Area (Cup)                 5968            1.468776
Volume (Disc)              557468          0.741159
Volume (Cup)               486972          0.631675
Area ratio (Disc/Cup)      1.545074        1.516946
Volume ratio (Disc/Cup)    1.144764        1.173323

Table 1 shows that the area ratio of the proposed method is 1.54 in pixel terms, while the corresponding OCT metric value is 1.51. Similarly, the volume ratio of the proposed method is 1.14, while the OCT value is 1.17. Hence, the results show that the image processing visualization is compatible with the ground-truth values. Although this implementation has quality and pre-processing restrictions, it can nevertheless be a less expensive option. Consequently, the efficiency of the proposed approach is demonstrated.
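The ratios and relative differences quoted from Table 1 can be reproduced directly with a few lines of arithmetic:

```python
# Values taken from Table 1.
area_disc_px, area_cup_px = 9221, 5968
vol_disc_px, vol_cup_px = 557468, 486972
area_ratio_oct, vol_ratio_oct = 1.516946, 1.173323

area_ratio_px = area_disc_px / area_cup_px            # 1.545074
vol_ratio_px = vol_disc_px / vol_cup_px               # 1.144764

print(f"area ratio:   image processing {area_ratio_px:.6f} vs OCT {area_ratio_oct:.6f} "
      f"({abs(area_ratio_px - area_ratio_oct) / area_ratio_oct:.2%} difference)")
print(f"volume ratio: image processing {vol_ratio_px:.6f} vs OCT {vol_ratio_oct:.6f} "
      f"({abs(vol_ratio_px - vol_ratio_oct) / vol_ratio_oct:.2%} difference)")
```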

Conclusion

Image processing applied to arbitrary fundus images can produce a surface for the optical disc area that meets specific requirements with a fair amount of success. Such an implementation may help medical experts during the segmentation and visualization stages of optical disc pre-processing. The application's primary contribution is experimentation with different colormap configurations, demonstrating how the right color filter can quickly resolve difficulties in obtaining robust images. Broader ranges of images should be used to tune the parameters and gain a better grasp of this type of technique in fundus photography. The suggested lightweight and uncomplicated solution for the relevant stages can be configured and carried forward into subsequent operations.

For the particular example comparing OCT and image processing results on the optical disc, the overall error in the disc/cup area ratio is ~1.8% and in the disc/cup volume ratio ~2.4%. OCT reports a disc/cup area ratio of 1.516, while the proposed approach gives 1.545; for the volume counterpart, OCT gives 1.173 and the proposed method 1.144. The final practical point is that this tool is primarily developed to support glaucoma diagnosis; it is not a definitive diagnostic on its own, because fundus imaging carries inherent uncertainty, and OCT remains the quick way to obtain high-resolution results. A primary component of a glaucoma diagnosis is the cup-to-disc ratio (CDR), and the proposed application provides a quickly constructed interface that helps diagnosticians identify the cup and optical disc zones more precisely.

Keywords

Optical coherence tomography; Optic disc; Optic cup; Fundus; Glaucoma; 3D Mapping; Medical image processing; Fundus imaging

Conflict of Interest

The author has no conflicts of interest to disclose.

References

[1] Arnold MJ and Reynolds KE. (2003). Hedonic shopping motivations. J Retail 79: 77-95.

[2] James GF, Costas P, Stephen AB and Mark EB. (2000). Optical coherence tomography: An emerging technology for biomedical imaging and optical biopsy. Neoplasia 2: 9-25.

[3] Bulut E, Celebi ARC, Dokur M and Dayi O. (2021). Analysis of trending topics in glaucoma articles from an altmetric perspective. Int Ophthalmol 41: 2125-2137.

[4] Israni P. (2019). Breast Cancer Diagnosis (BCD) model using machine learning. Int J Eng Innov 8: 4456-4463.

[5] Israni D and Mewada H. (2018). Identity retention of multiple objects under extreme occlusion scenarios using feature descriptors. J Commun Softw Syst 14: 290-301.

[6] Alolaiwy M, Tanik M and Jololian L. (2021). From CNNs to adaptive filter design for digital image denoising using reinforcement q-learning. In SoutheastCon 2021: 1-8.

[7] Israni D and Mewada H. (2018). Feature descriptor based identity retention and tracking of players under intense occlusion in soccer videos. Int J Intell Syst 11: 31-41.

[8] Abhishek K. (2022). News article classification using a transfer learning approach. ICRITO: 1-6.

[9] Kanaparthi V. (2022). Examining natural language processing techniques in the education and healthcare fields. Int J Eng Adv Technol 12: 8–18.

[10] Xu J, Chutatape O, Zheng C and Kuan PCT. (2006). Three dimensional optic disc visualisation from stereo images via dual registration and ocular media optical correction. Br J Ophthalmol 90: 181-185.

[11] Bansal M, Sizintsev M, Eledath J, Sawhney H and Pearson DJ, et al. (2013). 3D optic disc reconstruction via a global fundus stereo algorithm. Annu Int Conf IEEE Eng Med Biol Soc 52: 5877-5882.

[12] Miri MS, Abràmoff MD, Lee K, Niemeijer M and Wang JK, et al. (2015). Multimodal segmentation of optic disc and cup from SD-OCT and color fundus photographs using a machine-learning graph-based approach. IEEE Trans Med Imaging 34: 1854-1866.

[13] Cazañas-Gordón A, Parra-Mora E and da Silva Cruz LA. (2021). 3D modeling of the optic nerve head of glaucomatous eyes using fundus stereo images. Telecoms Conference: 1-5.

[14] Tavakkoli A, Kamran SA, Hossain KF and Zuckerbrod SL. (2020). A novel deep learning conditional generative adversarial network for producing angiography images from retinal fundus photographs. Sci Rep 10: 1-15.

[15] Toğaçar M. (2022). Detection of retinopathy disease using morphological gradient and segmentation approaches in fundus images. Comput Methods Programs Biomed 214: 106579.

[16] Gaddipati DJ and Sivaswamy J. (2021). Glaucoma assessment from fundus images with fundus to OCT feature space mapping. ACM Trans Inf Syst 3: 1-15.

[17] Wang M, Yao J, Zhang G, Guan B and Wang X, et al. (2021). ParallelNet: Multiple backbone network for detection tasks on thigh bone fracture. Multimedia Systems 1417-1438.

[18] Kumar Y and Gupta S. (2023). Deep transfer learning approaches to predict glaucoma, cataract, choroidal neovascularization, diabetic macular edema, drusen and healthy eyes: An experimental review. Arch Comput 30: 521-541.

[19] Bandyopadhyay S, Bose P, Dutta S and Goyal V. (2021). Detection of glaucoma and diabetes through image processing and machine learning approaches.

[20] CS F. (2019). Glaucoma detection using fundus images and OCT images. ICSEE.

[21] https://docs.opencv.org/3.4/dd/d49/tutorial_py_contour_features.html

[22] https://deepblue.lib.umich.edu/data/concern/data_sets/3b591905z

[23] Howard A, Sandler M, Chu G, Chen LC and Chen B, et al. (2019). Searching for mobilenetv3. IEEE Winter Conf Appl Comput Vis: 1314-1324.

[24] https://github.com/ismailuddin/gradcam-tensorflow-2

[25] https://github.com/Milchreis/OpenCV-LUT-Editor

[26] https://www.tutorialspoint.com/how-to-create-a-surface-plot-from-a-greyscale-image-with-matplotlib
