Fish were maintained at approximately 28°C on a 14/10 h light/dark cycle with standard husbandry procedures. The zebrafish lines Tg(-5.5sws1:EGFP)kj9, Tg(-3.2sws2:mCherry)mi2007, Tg(trβ2:tdTomato), Tg(-3.2sws2:EGFP), and Tg(gnat2:H2A-CFP), and the pigment mutant ruby, carrying albino (slc45a2)b4/b4 and roya9/a9, were used. All animal procedures were approved by the Institutional Animal Care and Use Committee at the University of Michigan.
Generation of transgenic zebrafish with nuclear-localized photoconvertible (green-to-red) EOS protein expressed specifically in UV cones:
The multisite Gateway-based tol2 kit system was used to generate expression vectors. In brief, the 5' entry clone p5E-5.5sws1, the middle entry clone pME-nEOS (a gift from Dr. David Raible), and the 3' entry clone p3E-polyA were assembled into the destination vector pDestTol2pA using LR Clonase II Plus enzyme (Thermo Fisher Scientific). Embryos of the transparent ruby genetic background were injected at the 1-cell stage with 1 nL of solution containing 25 pg plasmid DNA and 25 pg tol2 transposase mRNA. Founders (F0) with germline transmission of the transgene were identified by outcrossing to wild-type animals, and their F1 progeny were screened for nEOS expression at 4 days post fertilization.
nEOS photoconversion and imaging:
Photoconversion of nEOS protein was performed on ruby; Tg(sws1:nEOS) fish. Juvenile zebrafish (0.7 to 0.88 cm standard body length) were anesthetized with 0.672 mg/ml Tricaine-S (MS-222; Western Chemical Inc., Ferndale, WA), placed dorsal side down on a 50 mm glass-bottom petri dish with a No. 1.5 coverslip (MatTek Corporation, Ashland, MA), and held in place with damp Kimwipes. Imaging and photoconversion were performed with a Leica TCS SP8 LSCM (Leica Microsystems, Wetzlar, Germany) equipped with a Leica 40X PL APO CS2 water immersion lens (1.1 NA, 650 μm working distance). Green-to-red photoconversion of nEOS protein was performed with a 405 nm diode laser at 400 Hz scan speed with a resolution of 512 x 512 pixels in the xy dimension at a single optical plane. Pre- and post-photoconversion images were captured with the White Light Laser tuned to 506 nm for nEOS (green) and 573 nm for nEOS (red). Leica HyD hybrid detectors were tuned to 516-525 nm for nEOS (green) and 620-761 nm for nEOS (red).
Tracking nuclear positions in photoconverted regions:
In the photoconversion experiments, we observe the same region of the same retina in live fish at two different times. Given a nucleus at one time point, we want to find the same nucleus in the image from the other time point. One image of the region is taken immediately after photoconversion, which we call day 0. Across fish, we vary the interval between photoconversion and the second observation (two days after photoconversion at the earliest and four days at the latest). We call the second time point day 2-4.
At both observation times for each fish, we have an image with two channels: one corresponds to the color of the photoconverted fluorescent protein, and the other to the color of the non-photoconverted fluorescent protein. For the image analysis below, we use the photoconverted channel at both times. The image is three-dimensional, and the plane that contains the UV cone nuclei (i.e., where the fluorescent protein is localized) is mostly parallel to the x-y plane. This fact allows us to perform most of the computations for tracking each nucleus from one image to the other on two-dimensional projections.
To each z-slice of the stack, we apply a two-dimensional Wiener filter (wiener2; MATLAB 2016B Image Processing Toolbox, MathWorks) with a filter size of eight pixels, which is approximately one micron. This filter removes noisy specks (i.e., spikes in intensity at small length scales). We then compute a two-dimensional projection by summing over z. The photoconverted UV cones are in the middle of the image, and the intensity in the photoconverted channel is significantly weaker for UV cones near the edge of the image. This provides a reference boundary by which we can identify common nuclei (i.e., which nucleus in the day 2-4 image corresponds to a specific nucleus in the day 0 image).
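As an illustration, this filtering-and-projection step could be sketched in Python as follows. The analysis itself was done in MATLAB with wiener2; here scipy.signal.wiener stands in, and the stack dimensions and synthetic data are arbitrary:

```python
import numpy as np
from scipy.signal import wiener

def denoise_and_project(zstack):
    """Apply a 2D Wiener filter to each z-slice, then sum over z.

    zstack: 3D array (z, y, x) of the photoconverted channel.
    Returns a 2D (y, x) projection.
    """
    # 8-pixel filter window, roughly one micron at this magnification
    filtered = np.stack([wiener(s, mysize=8) for s in zstack])
    return filtered.sum(axis=0)

# Synthetic stand-in for a raw z-stack (Poisson-like shot noise)
rng = np.random.default_rng(0)
stack = rng.poisson(50, size=(10, 64, 64)).astype(float)
proj = denoise_and_project(stack)
print(proj.shape)  # (64, 64)
```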
We perform an image registration, computing the combination of rotation and translation that maximizes the normalized cross-correlation between the two images (normxcorr2; MATLAB 2016B Image Processing Toolbox, MathWorks). We then segment nuclei in the two images. Because the intensity of UV cone nuclei varies significantly across the image, we use both adaptive thresholding (adaptthresh; MATLAB 2016B Image Processing Toolbox, MathWorks) and a low absolute threshold. We morphologically open the thresholded image, followed by morphological closing. We fill holes in the image (imfill; MATLAB 2016B Image Processing Toolbox, MathWorks) and clear objects touching the border of the image (imclearborder; MATLAB 2016B Image Processing Toolbox, MathWorks). We perform minimal manual correction of these segmentations. Having aligned the two images and segmented the nuclei, we track each nucleus from one image to the other by computing, for each nucleus in the day 0 image, its nearest neighbor in the day 2-4 image (knnsearch; MATLAB 2016B, MathWorks). As a sanity check, for each nucleus in the day 2-4 image we compute its nearest neighbor in the day 0 image and verify that the two calculations agree for each nucleus. We manually correct any errors.
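The nearest-neighbor tracking and its sanity check can be sketched as follows (a Python stand-in for MATLAB's knnsearch, using scipy's k-d tree; the centroid coordinates are made-up examples):

```python
import numpy as np
from scipy.spatial import cKDTree

def match_nuclei(day0_xy, day24_xy):
    """Match each day-0 centroid to its nearest day-2-4 centroid,
    keeping only mutual nearest neighbors (the sanity check).
    Non-mutual pairs would be flagged for manual correction."""
    fwd = cKDTree(day24_xy).query(day0_xy)[1]  # day 0 -> day 2-4
    bwd = cKDTree(day0_xy).query(day24_xy)[1]  # day 2-4 -> day 0
    return [(i, j) for i, j in enumerate(fwd) if bwd[j] == i]

# Hypothetical segmented-nucleus centroids (microns), after registration
day0 = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
day24 = np.array([[0.5, 0.2], [10.1, -0.3], [0.2, 9.8]])
print(match_nuclei(day0, day24))  # [(0, 0), (1, 1), (2, 2)]
```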
Following this segmentation and identification of common nuclei between the two images, we estimate the three-dimensional position of each nucleus from the raw z-stacks rather than from a post-processed version. We identify a circular region of radius two and a half microns in the xy-plane, centered on each segmented nucleus. This radius is larger than the nuclear radius in the xy-plane but small enough not to encompass other nuclei. This circular region defines a pillar in the z-direction. To estimate the three-dimensional position of each nucleus in both images, we use the raw z-stacks, computing the center of intensity of each pillar (i.e., the weighted average of voxel positions in the pillar, where the weights are the voxel intensities). At the end of this entire procedure, for each nucleus common to both images, we know its position at both time points.
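A minimal Python sketch of the center-of-intensity calculation, assuming a (z, y, x) array and a pillar radius given in pixels (the uniform-intensity example below is synthetic, used only to show that the centroid lands where expected):

```python
import numpy as np

def pillar_center_of_intensity(zstack, cx, cy, radius_px):
    """Intensity-weighted centroid of the cylindrical 'pillar' of voxels
    within radius_px of (cx, cy) in the xy-plane, through the raw z-stack."""
    nz, ny, nx = zstack.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    mask = (xx - cx) ** 2 + (yy - cy) ** 2 <= radius_px ** 2
    w = zstack * mask                    # zero out voxels outside the pillar
    zz = np.arange(nz)[:, None, None]
    total = w.sum()
    return ((w * xx).sum() / total,      # x center of intensity
            (w * yy).sum() / total,      # y center of intensity
            (w * zz).sum() / total)      # z center of intensity

# A uniform-intensity stack puts the centroid at the pillar center in xy
# and at the middle of the stack in z.
stack = np.ones((4, 32, 32))
x, y, z = pillar_center_of_intensity(stack, cx=16, cy=16, radius_px=5)
print(x, y, z)  # 16.0 16.0 1.5
```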
Statistical significance of growing grain boundaries in live fish:
We have images in which we can identify newly incorporated Y-Junctions lining up into grain boundaries. These are images of UV cone nuclei near the retinal margin (i.e., where the layer grows by addition of post-mitotic cells). These images are oriented such that the margin is parallel to the y-axis. Our field of view in these images contains approximately forty rows of UV cones and forty columns of UV cones.
To identify grain boundaries that are already visible immediately after photoconversion, we trace rows of UV cones at the retinal margin. If, immediately after photoconversion, the row direction rotates by ten degrees or more about a group of defects at the margin, we call the group of Y-Junctions in between the rotated rows a grain boundary. (As a robustness check, we also repeat the analysis with this arbitrary threshold set to twelve and to fourteen degrees.) Based on this criterion, twelve of the eighteen samples have grain boundaries near the retinal margin at the time of photoconversion. Since some samples have two grain boundaries, in total we observe fifteen grain boundaries.
All subsequent analysis is based on the later image (i.e., two, three, or four days after photoconversion). We trace rows in the later image and identify newly inserted rows (i.e., newly incorporated Y-Junctions), and we again identify the old defects within each grain boundary (i.e., those not newly incorporated). We calculate a one-dimensional coordinate for the location of each grain boundary in the later image: the average of the y-coordinates (the axis approximately parallel to the margin in the image) of all old defects (i.e., those not newly incorporated) within the grain boundary. We are interested in how close, along the y-direction, newly incorporated Y-Junctions are to the nearest grain boundary in the image.
Suppose that in the image there is only one grain boundary identifiable both at the time of photoconversion and at later imaging, located at coordinate y_gb. The image spans from y=0 to y=y_max. For each new Y-Junction, we generate one hundred thousand random Y-Junction positions, uniformly distributed from y=0 to y=y_max (rand; MATLAB 2016B, MathWorks). We calculate the distance between each of these one hundred thousand random positions and the grain boundary at y_gb, and call the resulting vector of distances δ_rand. We also store the actual distance, which we call δ_actual, between each observed newly incorporated Y-Junction position in the image and the nearest grain boundary in the image.
Suppose instead that in the image there are two grain boundaries identifiable both at the time of photoconversion and at later imaging, with coordinates y_(gb,1) and y_(gb,2). Again the image spans from y=0 to y=y_max, and for each new Y-Junction we generate one hundred thousand random Y-Junction positions, uniformly distributed from y=0 to y=y_max (rand; MATLAB 2016B, MathWorks). We calculate the distance between each of these random positions and the nearest grain boundary (at either y_(gb,1) or y_(gb,2)), and call the resulting vector of distances δ_rand. We also store the actual distance, δ_actual, between each newly incorporated Y-Junction position and its nearest grain boundary.
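A minimal Python sketch of this null model, covering both the one- and two-boundary cases (a stand-in for the MATLAB rand-based procedure; the boundary positions and y_max below are made-up numbers):

```python
import numpy as np

def null_distances(gb_positions, y_max, n_rand=100_000, seed=0):
    """Distances from uniformly random Y-Junction positions in [0, y_max]
    to the nearest grain boundary (one or two boundaries per image)."""
    rng = np.random.default_rng(seed)
    y_rand = rng.uniform(0.0, y_max, size=n_rand)
    gb = np.asarray(gb_positions, dtype=float)
    # Distance of each random position to each boundary; keep the nearest.
    return np.abs(y_rand[:, None] - gb[None, :]).min(axis=1)

# Illustrative two-boundary image: boundaries at y = 40 and 160 µm,
# image spanning 0 to 200 µm.
d = null_distances([40.0, 160.0], y_max=200.0)
print(d.shape)  # (100000,)
```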
Based on the procedure outlined above, for each new Y-Junction we have a vector of length one hundred thousand, δ_rand, and a scalar, δ_actual. If, after random incorporation with respect to the grain boundaries in the image, a newly incorporated Y-Junction moved toward the nearest grain boundary at a speed of one row per two days (with spacing between rows approximately equal to six microns, i.e., 3 μm/day), the distribution of distances to the nearest grain boundary would become max(δ_rand − (3 μm/day)·Δt, 0), applied elementwise, where Δt is the time between photoconversion and later imaging (i.e., two, three, or four days).
We have thirty-seven new Y-Junctions across the twelve samples with at least one grain boundary at the retinal margin. We compare the thirty-seven scalar values of δ_actual to the vector obtained by concatenating max(δ_rand − (3 μm/day)·Δt, 0) across all thirty-seven defects; this concatenated vector has length three million seven hundred thousand. We test whether the distribution of δ_actual has the same median as this concatenated vector, assigning a p-value via the Mann-Whitney U-test (ranksum; MATLAB 2016B Statistics and Machine Learning Toolbox, MathWorks).
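The comparison above can be sketched in Python, with scipy.stats.mannwhitneyu in place of MATLAB's ranksum; the δ values below are synthetic placeholders, not the measured data:

```python
import numpy as np
from scipy.stats import mannwhitneyu

SPEED = 3.0  # µm/day: one row (~6 µm) per two days

def shifted_null(delta_rand, dt_days):
    """Null distances after hypothetical motion toward the nearest
    grain boundary, clipped at zero (elementwise max)."""
    return np.maximum(delta_rand - SPEED * dt_days, 0.0)

# Synthetic inputs: one 100,000-long null vector per defect (37 defects),
# each with its own interval dt of 2, 3, or 4 days.
rng = np.random.default_rng(1)
delta_rand = rng.uniform(0, 60, size=(37, 100_000))
dt = rng.choice([2, 3, 4], size=37)
null_all = np.concatenate([shifted_null(r, t) for r, t in zip(delta_rand, dt)])
delta_actual = rng.uniform(0, 60, size=37)  # placeholder observed distances

p = mannwhitneyu(delta_actual, null_all).pvalue
print(null_all.size)  # 3700000
```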