Forecasting Whole-Brain Neuronal Activity from Volumetric Video
Immer, Alexander, Lueckmann, Jan-Matthis, Chen, Alex Bo-Yuan, Li, Peter H., Petkova, Mariela D., Iyer, Nirmala A., Dev, Aparna, Ihrke, Gudrun, Park, Woohyun, Petruncio, Alyson, Weigel, Aubrey, Korff, Wyatt, Engert, Florian, Lichtman, Jeff W., Ahrens, Misha B., Jain, Viren, Januszewski, Michał
Large-scale neuronal activity recordings with fluorescent calcium indicators are increasingly common, yielding high-resolution 2D or 3D videos. Traditional analysis pipelines reduce this data to 1D traces by segmenting regions of interest, leading to inevitable information loss. Inspired by the success of deep learning on minimally processed data in other domains, we investigate the potential of forecasting neuronal activity directly from volumetric videos. To capture long-range dependencies in high-resolution volumetric whole-brain recordings, we design a model with large receptive fields, which allow it to integrate information from distant brain regions. We explore the effects of pre-training and perform extensive model selection, analyzing spatio-temporal trade-offs for generating accurate forecasts. Our model outperforms trace-based forecasting approaches on ZAPBench, a recently proposed benchmark for whole-brain activity prediction in zebrafish, demonstrating the advantages of preserving the spatial structure of neuronal activity.
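To make the large-receptive-field idea concrete, here is a minimal sketch, not the authors' architecture: a stack of dilated 3D convolutions whose dilation doubles per layer, so the receptive field grows exponentially with depth and can span distant brain regions. The module name, layer widths, and the framing of past frames as input channels are all illustrative assumptions.

```python
import jax
import jax.numpy as jnp
import flax.linen as nn

class DilatedVolumeForecaster(nn.Module):
    """Hypothetical forecaster: dilated 3D convs give a large receptive field."""
    features: int = 16
    num_layers: int = 4  # receptive field spans 2**(num_layers + 1) - 1 voxels per axis

    @nn.compact
    def __call__(self, x):
        # x: (batch, depth, height, width, context_frames) -- past activity
        # volumes stacked along the channel axis.
        for i in range(self.num_layers):
            d = 2 ** i  # dilations 1, 2, 4, 8 enlarge the receptive field exponentially
            x = nn.Conv(self.features, kernel_size=(3, 3, 3),
                        kernel_dilation=(d, d, d), padding='SAME')(x)
            x = nn.relu(x)
        # Project features back to a single predicted activity volume.
        return nn.Conv(1, kernel_size=(1, 1, 1))(x)

# Toy usage: forecast the next volume from 4 past frames of a 32x64x64 recording.
model = DilatedVolumeForecaster()
video = jnp.zeros((1, 32, 64, 64, 4))
params = model.init(jax.random.PRNGKey(0), video)
next_frame = model.apply(params, video)  # shape (1, 32, 64, 64, 1)
```

Dilated convolutions are one standard way to grow a receptive field without pooling away spatial resolution; the paper's actual model may achieve its long-range integration differently.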