TY - INPR
A1 - Liu, Ziwen
A1 - Hirata Miyasaki, Eduardo
A1 - Pradeep, Soorya
A1 - Rahm, Johanna Viola
A1 - Foley, Christian
A1 - Chandler, Talon
A1 - Ivanov, Ivan E.
A1 - Woosley, Hunter O.
A1 - Lao, Tiger
A1 - Balasubramanian, Akilandeswari
A1 - Marreiros, Rita
A1 - Liu, Chad
A1 - Leonetti, Manuel D.
A1 - Aviner, Ranen
A1 - Arias, Carolina
A1 - Jacobo, Adrian
A1 - Mehta, Shalin B.
T1 - Robust virtual staining of landmark organelles
T2 - bioRxiv
N2 - Correlative dynamic imaging of cellular landmarks, such as nuclei and nucleoli, cell membranes, nuclear envelope and lipid droplets is critical for systems cell biology and drug discovery, but challenging to achieve with molecular labels. Virtual staining of label-free images with deep neural networks is an emerging solution for correlative dynamic imaging. Multiplexed imaging of cellular landmarks from scattered light and subsequent demultiplexing with virtual staining leaves the light spectrum for imaging additional molecular reporters, photomanipulation, or other tasks. Current approaches for virtual staining of landmark organelles are fragile in the presence of nuisance variations in imaging, culture conditions, and cell types. We report training protocols for virtual staining of nuclei and membranes robust to variations in imaging parameters, cell states, and cell types. We describe a flexible and scalable convolutional architecture, UNeXt2, for supervised training and self-supervised pre-training. The strategies we report here enable robust virtual staining of nuclei and cell membranes in multiple cell types, including human cell lines, neuromasts of zebrafish and stem cell (iPSC)-derived neurons, across a range of imaging conditions. We assess the models by comparing the intensity, segmentations, and application-specific measurements obtained from virtually stained and experimentally stained nuclei and cell membranes. The models rescue missing labels, non-uniform expression of labels, and photobleaching. We share three pre-trained models (VSCyto3D, VSNeuromast, and VSCyto2D) and a PyTorch-based pipeline (VisCy) for training, inference, and deployment that leverages current community standards for image data and metadata.
Y1 - 2024
UR - http://publikationen.ub.uni-frankfurt.de/frontdoor/index/index/docId/86497
UR - https://nbn-resolving.org/urn:nbn:de:hebis:30:3-864978
UR - https://www.biorxiv.org/content/10.1101/2024.05.31.596901v3
IS - 2024.05.31.596901v3
PB - bioRxiv
ER -