STAR Protocols: a deep learning framework for analyzing cellular images in microbiology

Before you begin

Deep learning (DL) has proven to be extremely effective in addressing a range of major biological challenges, including protein structure prediction,4 DNA sequencing,5 and drug discovery.6 The application of DL has expanded into the microbiological field,7 particularly in cellular image analysis, where several long-standing challenges remain to be addressed.

One challenge is that parasites exhibit markedly different morphological features across their complex life cycles,8 and the shape and size of cells can vary considerably,9 making the classification and detection of different parasites and cells difficult. Additionally, obtaining high-quality, in-focus microscopic images can be challenging7 owing to factors such as the diffraction barrier and defects in optical systems.10

DL-based cellular image analysis can address these problems to some extent. However, the black-box nature of DL often leads to unexplainable results. Incorporating the knowledge and insights of domain experts into the modeling process can mitigate this, yet most DL-based methods for cellular image analysis have not considered the importance of knowledge from microbiologists.11,12 In addition, these methods are highly specialized and lack detailed instructions for most microbiologists. As a result, developing accurate and easy-to-use DL models for cellular image analysis in microbiology remains challenging.

To address these challenges, this protocol introduces a knowledge-integrated DL framework for cellular image analysis in microbiology. Building upon previous studies from our group,1,2,3 it provides a comprehensive guide to implementing a wide spectrum of tasks (i.e., classification, detection, and reconstruction) in cellular image analysis. The following sections describe how the DL models integrate human expert knowledge and provide step-by-step instructions accessible to both beginners and professionals.

Description of the methods

This protocol introduces three DL models integrated with knowledge from microbiologists, namely deep cycle transfer learning (DCTL),1 geometric-feature spectrum ExtremeNet (GFS-ExtremeNet),2 and correcting out-of-focus microscopic images (COMI).3 These models are designed for the classification, detection, and reconstruction of cellular images in microbiology, respectively.

DCTL and COMI are both based on cycle generative adversarial networks (CycleGAN),13 as illustrated in Figure 1A. CycleGAN comprises two generator-discriminator pairs, two types of neural networks with distinct roles: generators transform input images into a different style, while discriminators judge whether an image is synthesized or real. Unlike traditional GANs, the cycle network topology does not require one-to-one pairing of source images (DomainX) and target images (DomainY), as is the case in DCTL.
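To make this topology concrete, the sketch below wires up two placeholder generator-discriminator pairs in PyTorch (one of the frameworks listed in the key resources table). It is a minimal illustration of the cycle structure, not the authors' implementation: the tiny networks, tensor sizes, and variable names are assumptions made for readability.

```python
# Minimal sketch of the CycleGAN topology: two generator-discriminator pairs.
# The tiny convolutional networks are placeholders, not the real architectures.
import torch
import torch.nn as nn

class TinyGenerator(nn.Module):
    """Placeholder image-to-image generator (DomainX -> DomainY, or the reverse)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1), nn.Tanh(),
        )
    def forward(self, x):
        return self.net(x)

class TinyDiscriminator(nn.Module):
    """Placeholder discriminator: scores whether an image is real or synthesized."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(16, 1, 4, stride=2, padding=1),
        )
    def forward(self, x):
        return self.net(x)

# The four networks that make up the cycle topology.
G_XY, G_YX = TinyGenerator(), TinyGenerator()    # DomainX -> DomainY, DomainY -> DomainX
D_X, D_Y = TinyDiscriminator(), TinyDiscriminator()

x = torch.randn(1, 3, 64, 64)  # an unpaired image from DomainX
y = torch.randn(1, 3, 64, 64)  # an unpaired image from DomainY

synthetic_y = G_XY(x)            # translate X into the style of Y
restored_x = G_YX(synthetic_y)   # cycle back: should resemble the original x
synthetic_x = G_YX(y)            # the reverse cycle
restored_y = G_XY(synthetic_x)

# Discriminators judge real vs. synthesized images in each domain.
real_score_y, fake_score_y = D_Y(y), D_Y(synthetic_y)
real_score_x, fake_score_x = D_X(x), D_X(synthetic_x)
```

Note that the two image domains never need to be paired: each training step only requires one batch drawn from DomainX and one from DomainY.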

In DCTL, X represents morphologically similar macroscopic objects, while Y denotes the parasites to be recognized. GeneratorXY transforms the macroscopic images in DomainX into corresponding parasite-style images, SyntheticY. GeneratorYX then restores the images in SyntheticY back to the original macroscopic images, RestorationX. Another cycle performs the same process in reverse. Finally, the discriminators distinguish the generated images from the original images, and their feedback helps the generators improve the quality of the synthesized images.
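In the standard CycleGAN formulation that DCTL builds on,13 this round trip is enforced by a cycle-consistency loss that penalizes the difference between an input and its restoration in both directions (the exact weighting used in DCTL may differ):

$$\mathcal{L}_{\mathrm{cyc}}(G_{XY}, G_{YX}) = \mathbb{E}_{x \sim \mathrm{DomainX}}\big[\lVert G_{YX}(G_{XY}(x)) - x \rVert_1\big] + \mathbb{E}_{y \sim \mathrm{DomainY}}\big[\lVert G_{XY}(G_{YX}(y)) - y \rVert_1\big]$$

The full objective adds adversarial terms for the two discriminators, so that SyntheticX and SyntheticY become indistinguishable from real images in their respective domains.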

Building upon the CycleGAN backbone, DCTL incorporates human expert knowledge through two supplementary feature extractors, as shown in Figure 1B. Using four groups of extreme points, it calculates the microscopic and macroscopic correlation (MMC)1 to find the morphologically similar macroscopic object for each parasite as a quantitative knowledge representation (Figure 2A). CycleGAN then learns the morphological information from these two image domains and teaches the supplementary feature extractors to identify different parasites. Each feature extractor is trained on both original and synthetic images using a cross-entropy loss function.14 Once training is complete, the supplementary Feature ExtractorY can be applied to classify the four types of parasites.
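A minimal sketch of this training step is shown below, assuming a generic ResNet-18 backbone, random stand-in tensors, and the standard PyTorch cross-entropy loss; the actual DCTL feature extractors, data pipeline, and hyperparameters differ and are provided in the linked repository.

```python
# Sketch: train a supplementary feature extractor on a mixture of original and
# synthetic parasite images with a cross-entropy loss. Backbone, tensors, and
# hyperparameters are illustrative assumptions, not the DCTL implementation.
import torch
import torch.nn as nn
import torchvision

num_classes = 4  # the four parasite types mentioned in the text

# A generic CNN backbone standing in for Feature ExtractorY.
feature_extractor_y = torchvision.models.resnet18(num_classes=num_classes)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(feature_extractor_y.parameters(), lr=1e-4)

# Stand-in batches: in practice these come from DomainY (original parasite
# images) and SyntheticY (images produced by GeneratorXY).
original_images = torch.randn(8, 3, 224, 224)
synthetic_images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, num_classes, (16,))  # class labels for both batches

# One training step on the combined original + synthetic batch.
batch = torch.cat([original_images, synthetic_images], dim=0)
logits = feature_extractor_y(batch)
loss = criterion(logits, labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()

# After training, the extractor classifies new parasite images.
predictions = feature_extractor_y(original_images).argmax(dim=1)
```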

Key resources table

REAGENT or RESOURCE | SOURCE | IDENTIFIER

Software and algorithms
Anaconda | Anaconda v2.4.0 | https://www.anaconda.com/
Spyder | Spyder v5.3.3 | https://www.spyder-ide.org/
Python3 | Python v3.7.16 | https://www.python.org/
Tensorflow | Tensorflow v1.15.0 | https://www.tensorflow.org/
Tensorboard | Tensorboard v1.15.0 | https://pypi.org/project/tensorboard/
Tensorflow-estimator | Tensorflow-estimator v1.15.1 | https://pypi.org/project/tensorflow-estimator/
Pytorch | Pytorch v1.2.0 | https://pytorch.org/
Torchvision | Torchvision v0.4.0 | https://pytorch.org/
Keras | Keras v2.2.4 | https://keras.io/
Keras-contrib | Keras-contrib v2.0.8 | https://github.com/keras-team/keras-contrib
H5py | H5py v2.10.0 | https://www.h5py.org/
Scikit-learn | Scikit-learn v1.0.2 | https://scikit-learn.org/stable/
Matplotlib | Matplotlib v3.5.3 | https://matplotlib.org/
Scikit-image | Scikit-image v0.17.2 | https://scikit-image.org/
Opencv-python | Opencv-python v4.6.0.66 | https://pypi.org/project/opencv-python/
Pycocotools | Pycocotools v2.0.5 | https://pypi.org/project/pycocotools/2.0.5/
Tqdm | Tqdm v4.64.1 | https://tqdm.github.io/releases/
Pandas | Pandas v1.3.5 | https://pandas.pydata.org/
Numpy | Numpy v1.21.6 | https://numpy.org/
Protobuf | Protobuf v3.19.0 | https://pypi.org/project/protobuf/
Tensorflow-gpu | Tensorflow-gpu v1.15.0 | https://www.tensorflow.org/
cuDNN | cuDNN v7.6.5 | https://developer.nvidia.com/cudnn
Cudatoolkit | Cudatoolkit v10.0.130 | https://developer.nvidia.com/cuda-toolkit

Other
Codes and Datasets | Github | https://github.com/ruijunfeng/A-knowledge-integrated-deep-learning-framework-for-cellular-image-analysis-in-parasite-microbiology
Computing platform | Windows 10 | https://www.microsoft.com/en-au/software-download/windows10

Note: This protocol was performed on a computer with an Intel(R) Xeon(R) E5-2630 v3 CPU @ 2.40 GHz, two NVIDIA RTX 2080 Ti graphics cards, and 32 GB of memory, running Windows 10. A computer with more graphics cards is recommended to accelerate training and evaluation.
