DeepCAN: A Modular Deep Learning System for Automated Cell Counting and Viability Analysis
Access
info:eu-repo/semantics/embargoedAccess
Date
2022
Author
Eren, Furkan
Aslan, Mete
Kanarya, Dilek
Uysallı, Yigit
Aydin, Musa
Kiraz, Berna
Aydın, Ömer
Kiraz, Alper
Citation
EREN, Furkan, Mete ASLAN, Dilek KANARYA, Yiğit UYSALLI, Musa AYDIN, Berna KİRAZ, Ömer AYDIN & Alper KİRAZ. "DeepCAN: A Modular Deep Learning System for Automated Cell Counting and Viability Analysis". Generic Colorized Journal, 20 (2022): 1-9.
Abstract
Precise and quick monitoring of key cytometric features such as cell count, cell size, cell morphology, and DNA content is crucial for applications in biotechnology, medical sciences, and cell culture research. Traditionally, image cytometry relies on a hemocytometer accompanied by visual inspection by an operator under a microscope. This approach is prone to error due to the subjective decisions of the operator. Recently, deep learning approaches have emerged as powerful tools enabling quick and highly accurate image cytometric analyses that are easily generalizable to different cell types. Leading to simpler, more compact, and less expensive solutions, these approaches have revealed image cytometry as a viable alternative to flow cytometry or Coulter counting. In this study, we demonstrate a modular deep learning system, DeepCAN, that provides a complete solution for automated cell counting and viability analysis. DeepCAN employs three neural network blocks, called Parallel Segmenter, Cluster CNN, and Viability CNN, trained for initial segmentation, cluster separation, and cell viability analysis, respectively. The Parallel Segmenter and Cluster CNN blocks achieve highly accurate segmentation of individual cells, while the Viability CNN block performs viability classification. A modified U-Net network, a well-known deep neural network model for bioimage analysis, is used in Parallel Segmenter, while the LeNet-5 architecture and its modified version, called Opto-Net, are used for Cluster CNN and Viability CNN, respectively. We train the Parallel Segmenter using 15 images of A2780 cells and 5 images of yeast cells, containing 14742 individual cell images in total. Similarly, 6101 and 5900 A2780 cell images are employed for training the Cluster CNN and Viability CNN models, respectively. A total of 2514 individual A2780 cell images are used to test the overall segmentation performance of Parallel Segmenter combined with Cluster CNN, revealing high Precision/Recall/F1-Score values of 96.52%/96.45%/98.06%, respectively. The overall cell counting/viability analysis performance of DeepCAN is tested with A2780 (2514 cells), A549 (601 cells), Colo (356 cells), and MDA-MB-231 (887 cells) cell images, revealing high counting/viability analysis accuracies of 96.76%/99.02%, 93.82%/95.93%, 92.18%/97.90%, and 85.32%/97.40%, respectively.
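A minimal sketch of how such a three-block pipeline could be composed is given below. It is not the authors' implementation: the module names, layer sizes, and input shapes are illustrative assumptions chosen only to show the modular structure described in the abstract (whole-frame segmentation, followed by per-crop cluster and viability classification), expressed here in PyTorch.

```python
# Illustrative sketch of a three-block DeepCAN-style pipeline.
# All module definitions and shapes are assumptions, not the authors' code.
import torch
import torch.nn as nn


class ParallelSegmenterSketch(nn.Module):
    """Stand-in for the modified U-Net: predicts a per-pixel foreground mask."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, kernel_size=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x)


class LeNet5Classifier(nn.Module):
    """LeNet-5-style classifier for 32x32 grayscale cell crops."""
    def __init__(self, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(6, 16, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 5 * 5, 120), nn.ReLU(),
            nn.Linear(120, 84), nn.ReLU(),
            nn.Linear(84, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))


class DeepCANSketch(nn.Module):
    """Compose the three blocks: segment the frame, then classify each crop."""
    def __init__(self):
        super().__init__()
        self.segmenter = ParallelSegmenterSketch()
        self.cluster_cnn = LeNet5Classifier(num_classes=2)    # single cell vs. cluster
        self.viability_cnn = LeNet5Classifier(num_classes=2)  # live vs. dead

    def forward(self, frame, cell_crop):
        mask = self.segmenter(frame)              # whole-frame segmentation
        is_cluster = self.cluster_cnn(cell_crop)  # per-crop cluster decision
        viability = self.viability_cnn(cell_crop) # per-crop viability decision
        return mask, is_cluster, viability


if __name__ == "__main__":
    model = DeepCANSketch()
    frame = torch.rand(1, 1, 256, 256)  # grayscale microscope frame
    crop = torch.rand(1, 1, 32, 32)     # one segmented cell crop
    mask, is_cluster, viability = model(frame, crop)
    print(mask.shape, is_cluster.shape, viability.shape)
```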