Deep-prior ODEs augment fluorescence imaging with chemical sensors

Physical model

We set out from the chemical reaction (1), which models the binding process. We consider that the total number of sensors stot(x) = s(x, t) + sb(x, t) is constant over time. We also assume that the spatial diffusion of the sensor is negligible. Both assumptions are common for fluorescent sensors31,67,68. The temporal evolution of sb is then given by$$\frac{{{\rm{d}}}{s}_{b}({{\bf{x}}},t)}{{{\rm{d}}}t}=-{k}_{b}{s}_{b}({{\bf{x}}},t)+{k}_{f}s({{\bf{x}}},t)c{({{\bf{x}}},t)}^{{n}_{{{\rm{H}}}}}.$$
(4)
Substituting s = stot − sb into (4) and dividing by stot leads to$$\frac{{{\rm{d}}}{\tilde{s}}_{b}({{\bf{x}}},t)}{{{\rm{d}}}t}=-{k}_{b}{\tilde{s}}_{b}({{\bf{x}}},t)+{k}_{f}(1-{\tilde{s}}_{b}({{\bf{x}}},t))c{({{\bf{x}}},t)}^{{n}_{{{\rm{H}}}}},$$
(5)
where \({\tilde{s}}_{b}({{\bf{x}}},t)=\frac{{s}_{b}({{\bf{x}}},t)}{{s}_{{{\rm{tot}}}}({{\bf{x}}})}\) denotes the proportion of bound sensors.

We use g(x, t) to denote the fluorescence received by the imaging setup, which is emitted by the sensors. We model it as$$g({{\bf{x}}},t)={g}_{0}({{\bf{x}}})+{q}_{e}({{\bf{x}}}){\tilde{s}}_{b}({{\bf{x}}},t),$$
(6)
where g0(x) and qe(x) are the fluorescent background and a concentration-to-fluorescence factor, respectively31. These variables indirectly account for several unknown factors such as the quantum yield or the small emissions of the unbound sensor.

Together, Eqs. (5) and (6) relate the CSI concentration to the fluorescence measurements. Our physical model is then$$\begin{array}{rc}{{\mathcal{H}}} ({{\bf{x}}},t;c,{g}_{0},{q}_{e})=g({{\bf{x}}},t;{\tilde{s}}_{b},{g}_{0},{q}_{e}),\\ {{\mbox{where}}}\,\,{\tilde{s}}_{b}({{\bf{x}}},t;c)\,{\mbox{satisfies}}\,\,\,(5).\end{array}$$
(7)
Equation (7) can be understood as the following algorithm: (i) solve (5) for the proportion \({\tilde{s}}_{b}({{\bf{x}}},t;c)={\tilde{s}}_{b}({{\bf{x}}},t)\) of bound sensors starting from a given concentration c; (ii) compute the fluorescence \(g({{\bf{x}}},t;{\tilde{s}}_{b},{g}_{0},{q}_{e})=g({{\bf{x}}},t)\) using (6) given the solution \({\tilde{s}}_{b}\), as well as g0(x) and qe(x). The resulting \({{\mathcal{H}}}({{\bf{x}}},t;c,{g}_{0},{q}_{e})\) is the spatiotemporal fluorescence predicted by the model given c, g0, and qe (a minimal numerical sketch of this two-step procedure is given after Eq. (8)).

Deep spatiotemporal prior

In the main text, we formulated our inverse problem as$$\begin{array}{rclll}\left({c}^{\star },{g}_{0}^{\star },{q}_{e}^{\star }\right)\in \arg {\min}_{c,{g}_{0},{q}_{e}}\,& {{\mathcal{D}}}\left({{\mathcal{H}}}(c,{g}_{0},{q}_{e}),\,{g}_{m}\right)\\ & {\hskip -29pt} +{{\mathcal{R}}}(c,{g}_{0},{q}_{e}).\end{array}$$
(8)
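To make the two-step procedure concrete, the following minimal NumPy sketch evaluates the forward map \({{\mathcal{H}}}\) on a discretized concentration movie: a per-pixel backward Euler integration of (5) (anticipating the update of Eq. (12)) followed by the affine fluorescence model (6). The function and variable names are ours and do not refer to the published implementation.

```python
import numpy as np

def forward_model(c, g0, qe, kf, kb, n_hill, dt, sb0=0.0):
    """Sketch of the forward map H for a concentration movie c of shape (n_pixels, n_t).

    Step (i): integrate the binding ODE (5) per pixel with backward Euler steps,
    the same update that appears later in Eq. (12).
    Step (ii): apply the affine fluorescence model (6).
    """
    n_pixels, n_t = c.shape
    c_pow = c ** n_hill                      # Hill-type dependence on the CSI
    s_b = np.empty_like(c)                   # proportion of bound sensors
    s_prev = np.full(n_pixels, sb0)          # initial bound proportion
    for t in range(n_t):
        s_prev = (s_prev + dt * kf * c_pow[:, t]) / (1.0 + dt * (kf * c_pow[:, t] + kb))
        s_b[:, t] = s_prev
    return g0[:, None] + qe[:, None] * s_b   # background + scaled bound proportion
```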
Our alternative regularization strategy consists of reparameterizing the distribution of the concentration as the output of a neural network c(x, t) = fθ(x, z(t)) with the parameters θ and the time-dependent latent vector z(t). The minimization problem (8) then becomes$$\begin{array}{rlll}\hfill\left({{{\boldsymbol{\theta }}}}^{\star },{{{\bf{z}}}}^{\star },{g}_{0}^{\star },{q}_{e}^{\star }\right) &\in & \arg {\min }_{{{\boldsymbol{\theta }}},{{\bf{z}}},{g}_{0},{q}_{e}} & {{\mathcal{D}}}\left({{\mathcal{H}}}\left({f}_{{{\boldsymbol{\theta }}}}({{\bf{x}}},{{\bf{z}}}(t)),{g}_{0},{q}_{e}\right),{g}_{m}\right)\\ && &+{{{\mathcal{R}}}}_{p}({g}_{0},{q}_{e}),\hfill \\ \hfill {c}^{\star }({{\bf{x}}},t) &= & {f}_{{{{\boldsymbol{\theta }}}}^{\star }}({{\bf{x}}},{{{\bf{z}}}}^{\star }(t)).&\end{array}$$
(9)
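As a toy illustration of this reparameterization, the snippet below defines a small PyTorch generator that maps a latent vector z(t) to one non-negative concentration frame. It is a stand-in that we provide for orientation only; the actual architecture of fθ is the one given in Table 1, and the Softplus output is just one possible way of enforcing non-negativity.

```python
import torch

class ToyDecoder(torch.nn.Module):
    """Toy stand-in for f_theta: maps a latent vector z(t) to an (nx, ny) image.

    Not the architecture of Table 1; it only illustrates how the concentration
    is reparameterized as the output of a generator network.
    """
    def __init__(self, latent_dim=3, nx=96, ny=68):
        super().__init__()
        self.nx, self.ny = nx, ny
        self.fc = torch.nn.Linear(latent_dim, 16 * (nx // 4) * (ny // 4))
        self.up = torch.nn.Sequential(
            torch.nn.ConvTranspose2d(16, 8, 4, stride=2, padding=1), torch.nn.ReLU(),
            torch.nn.ConvTranspose2d(8, 1, 4, stride=2, padding=1), torch.nn.Softplus(),
        )

    def forward(self, z):                        # z: (n_frames, latent_dim)
        h = self.fc(z).view(-1, 16, self.nx // 4, self.ny // 4)
        return self.up(h)                        # (n_frames, 1, nx, ny), non-negative
```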
Note that we apply the deep prior to the concentration alone because the background and the scaling do not have a temporal component; we only regularize these two over space with \({{{\mathcal{R}}}}_{p}\).

The benefits of this method are twofold. First, a spatial prior is enforced by an untrained deep-image prior69,70. Second, we enforce temporal regularity on the sequence by restricting the latent variables to a manifold. A single neural network generates the whole sequence, while the design of the latent vectors encodes the temporal proximity of consecutive frames.

We remark that the resulting CSI concentration is in arbitrary units, owing to missing information such as the conversion efficiency or the quantum yield. If the concentration can be measured at any point in space or time, this information can be readily incorporated into DUSK for automatic calibration. Alternatively, experimental calibration procedures may be used to recover the exact concentration afterwards30,50,51,52,53.

Parametric latent space

We propose a parameterization of deep-image priors that adapts to the fast and slow dynamics that appear in biological signaling. To this end, we represent our latent space with a flexible parametric curve$${{\bf{z}}}(t)={\sum }_{k=0}^{K-1}{{{\bf{b}}}}_{k}\varphi \left(\frac{t}{\Delta t}-k\right).$$
(10)
Here, the latent vector is parameterized by a number K of shifted basis functions φ( ⋅ ) with coefficients \({{{\bf{b}}}}_{k}\in {{\mathbb{R}}}^{L}\)71. We chose a basis of cubic B-splines with a stepsize Δt. The coefficients bk are directly optimized in place of z when solving (9) (a numerical sketch of this evaluation is given after Eq. (15)). The number of basis functions (i.e., knots) determines the flexibility of the curve and, consequently, how much temporal regularity is enforced. This intuitive tradeoff is similar to the one offered by more traditional regularization methods such as total variation72.

Discretization of the physical model

To discretize (5) and (6), we sample \({\tilde{s}}_{b},c\), and g on an equispaced spatial grid with Nx × Ny points and at Nt = T/Δt time points with time step Δt. Here, we assume that all the sampled functions are compactly supported in the domain \(\Omega \times \left[0,T\right)\) with \(\Omega \subset {{\mathbb{R}}}^{2}\). The samples of \({\tilde{s}}_{b},g\), and c are concatenated into the matrices \({{\bf{S}}},{{\bf{G}}},{{\bf{C}}}\in {{\mathbb{R}}}^{N\times {N}_{t}}\) with N = NxNy, respectively. Similarly, the samples of g0 and qe are concatenated into the vectors \({{{\bf{g}}}}_{0},{{{\bf{q}}}}_{e}\in {{\mathbb{R}}}_{\ge 0}^{N}\). To solve (5), we use a backward Euler scheme. We then obtain the discrete forward model \({{\bf{H}}}:{{\mathbb{R}}}^{N\times {N}_{t}}\to {{\mathbb{R}}}^{N\times {N}_{t}}\) defined as$${{{\bf{H}}}}_{{{\bf{p}}}}({{{\bf{C}}}}^{\odot {n}_{{{\rm{H}}}}})={{{\bf{g}}}}_{0}{{{\bf{1}}}}_{{N}_{t}}^{T}+{{\bf{diag}}}({{{\bf{q}}}}_{e}){{\bf{S}}},$$
(11)
where \({{\bf{p}}}={[{{{\bf{g}}}}_{0}^{T},{{{\bf{q}}}}_{e}^{T}]}^{T}\in {{\mathbb{R}}}_{\ge 0}^{2N}\), the symbol \({(\cdot )}^{\odot {n}_{{{\rm{H}}}}}\) denotes the Hadamard power (i.e., each matrix element is raised to the power of nH), and the entries of S are computed recursively as$${s}_{n,{n}_{t}}=\frac{{s}_{n,{n}_{t}-1}+\Delta t{k}_{f}{({c}_{n,{n}_{t}})}^{{n}_{{{\rm{H}}}}}}{1+\Delta t({k}_{f}{({c}_{n,{n}_{t}})}^{{n}_{{{\rm{H}}}}}+{k}_{b})},$$
(12)
for n = 1, …, N and nt = 1, …, Nt.

Problem formulation in the discrete domain

Now that we are equipped with a discretized physical model, we present our variational framework to recover the concentration distribution from measured fluorescence images. In practice, we may acquire the images at a time step ΔtM = DΔt (\(D\in {\mathbb{N}}\)) larger than the one used in (11). To recover the concentration, we aim to solve the minimization problem$$\begin{array}{rcl}({{{\bf{C}}}}^{\star },{{{\bf{p}}}}^{\star })\in \arg {\min }_{{{\bf{C}}}\in {{\mathbb{R}}}^{N\times {N}_{t}},{{\bf{p}}}\in {{\mathbb{R}}}_{\ge 0}^{2N}}&\parallel \! \! {{{\bf{H}}}}_{{{\bf{p}}}}({{{\bf{C}}}}^{\odot {n}_{{{\rm{H}}}}}){{{\bf{M}}}}_{D}-{{\bf{G}}}{\parallel }_{1} \hfill\\ &+{\tau }_{p}{{{\mathcal{R}}}}_{{{\bf{p}}}}({{\bf{p}}})+{{{\mathcal{R}}}}_{{{\bf{C}}}}({{\bf{C}}}),\hfill\end{array}$$
(13)
where \({{{\bf{M}}}}_{D}\in {{\mathbb{R}}}^{{N}_{t}\times ({N}_{t}/D)}\) encodes the temporal downsampling operation and the matrix \({{\bf{G}}}\in {{\mathbb{R}}}^{N\times ({N}_{t}/D)}\) denotes the measurements. The ℓ1-norm serves as the data-fidelity term, which we found to be robust to the noise present in the measurements. The regularization terms \({{{\mathcal{R}}}}_{{{\bf{p}}}}:{{\mathbb{R}}}^{2N}\to {{\mathbb{R}}}_{\ge 0}\) and \({{{\mathcal{R}}}}_{{{\bf{C}}}}:{{\mathbb{R}}}^{N\times {N}_{t}}\to {{\mathbb{R}}}_{\ge 0}\) enforce some prior knowledge about the parameters p and the concentration, respectively. In this work, \({{{\mathcal{R}}}}_{{{\bf{p}}}}\) only regularizes qe with the total variation and a small trade-off parameter τp > 0 (ref. 72).

Deep spatiotemporal priors in the discrete domain

To mitigate the ill-posedness of (13), we aim to enforce some regularity along space and time on the concentration C. To that end, we reconstruct the concentration using a deep spatiotemporal prior40,41,42. In this framework, the concentration at time \(t\in \left[0,T\right)\) is the output c(θ, t) = fθ(z(t)) of a convolutional neural network \({{{\bf{f}}}}_{{{\boldsymbol{\theta }}}}:{{\mathbb{R}}}^{L}\to {{\mathbb{R}}}^{N}\) parameterized by \({{\boldsymbol{\theta }}}\in {{\mathbb{R}}}^{P}\). By design, the concentration is sampled on an equispaced spatial grid with N points, but the time can be sampled arbitrarily.

To recover the concentration, we then solve the minimization problem$$\begin{array}{rcl}({{{\boldsymbol{\theta }}}}^{*},{{{\bf{p}}}}^{*})\in \arg {\min }_{{{\boldsymbol{\theta }}}\in {{\mathbb{R}}}^{P},{{\bf{p}}}\in {{\mathbb{R}}}_{\ge 0}^{2N}}&\parallel \!\! {{{\bf{H}}}}_{{{\bf{p}}}}({{\bf{C}}}{({{\boldsymbol{\theta }}},\Delta t)}^{\odot {n}_{{{\rm{H}}}}}){{{\bf{M}}}}_{D}-{{\bf{G}}}{\parallel }_{1}\\ & {\hskip -70pt}+{\tau }_{p}{{{\mathcal{R}}}}_{{{\bf{p}}}}({{\bf{p}}})\end{array}$$
(14)
with$${{\bf{C}}}({{\boldsymbol{\theta }}},\Delta t)=[{{\bf{c}}}({{\boldsymbol{\theta }}},0),{{\bf{c}}}({{\boldsymbol{\theta }}},\Delta t),\ldots,{{\bf{c}}}({{\boldsymbol{\theta }}},\Delta t({N}_{t}-1))].$$
(15)
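The latent curve of Eq. (10), which produces the per-frame latent vectors z(0), z(Δt), … fed to fθ in (15), can be evaluated with a cubic B-spline kernel as in the minimal NumPy sketch below. This is our own illustrative implementation; in DUSK, the coefficients bk are optimized jointly with θ, as discussed next.

```python
import numpy as np

def cubic_bspline(x):
    """Centered cubic B-spline kernel phi, supported on [-2, 2]."""
    x = np.abs(np.asarray(x, dtype=float))
    out = np.zeros_like(x)
    near, far = x < 1, (x >= 1) & (x < 2)
    out[near] = 2.0 / 3.0 - x[near] ** 2 + 0.5 * x[near] ** 3
    out[far] = (2.0 - x[far]) ** 3 / 6.0
    return out

def latent_trajectory(b, t, dt):
    """Evaluate z(t) = sum_k b_k * phi(t/dt - k), as in Eq. (10).

    b : (K, L) spline coefficients, t : (n_t,) sample times, dt : knot spacing.
    Returns an (n_t, L) array with one latent vector per frame, e.g. for
    K = 30, L = 3, and t = dt * np.arange(n_t).
    """
    K = b.shape[0]
    basis = cubic_bspline(t[:, None] / dt - np.arange(K)[None, :])  # (n_t, K)
    return basis @ b                                                # (n_t, L)
```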
We represent our latent space with a parametric curve (see (10)) and optimize the coefficients bk along with the parameters θ. With a slight abuse of notation, the parameters θ will now encompass the coefficients \({\{{{{\bf{b}}}}_{k}\in {{\mathbb{R}}}^{L}\}}_{k=0}^{K-1}\) as well.

Architecture and optimization

The architecture of fθ is detailed in Table 1. The network layers are applied sequentially from the top to the bottom of the table, starting with an input of shape (1 × 3). It is noteworthy that the sequence of concentration images is represented with an underparameterized neural network. For instance, if (Nx × Ny × Nt) = (96 × 68 × 244) and K = 30, L = 3, there are 149,925 + 90 parameters to optimize, which amounts to about 10% of the total number of recovered pixels. We do not specify validation or training sets because our framework is training-free, i.e., the network remains untrained.

Table 1 Architecture of the network fθ

We optimize all the variables in (14) using the AMSGrad algorithm73 with a constant learning rate of 0.01, a tolerance \({\epsilon }_{{{\rm{tol}}}}=1{0}^{-12}\), and a maximum of \(1{0}^{4}\) iterations. We set the tradeoff parameter \({\tau }_{p}=1{0}^{-5}\). The number of knots K is chosen by grid search, either by maximizing a metric on simulated data or by visual inspection on real data. We enforce that \({({{{\bf{g}}}}_{0})}_{k} > 0\) and \({({{{\bf{q}}}}_{e})}_{k} > 1{0}^{-6}\) by projecting violating values to the respective bound after each gradient update (a minimal sketch of this projected loop is given at the end of this subsection).

For k = 1, …, N, we initialize \({({{{\bf{q}}}}_{e})}_{k}=Q\) and \({({{{\bf{g}}}}_{0})}_{k}=\min ({({{\bf{G}}})}_{k,\cdot })\), i.e., with the minimal value reached during the measurements for each pixel. On simulated data, Q was chosen among the values [1, 10, 100] to maximize the regressed signal-to-noise ratio (RSNR, see (18)). On real data, we set Q = 1 for all sensors except jGCaMP7f. We observed that, with this setting, the DUSK results on jGCaMP7f did not fit the measurements sufficiently well, owing to the inability of the neural network to reach the very large concentration values attributable to the high Hill coefficient of jGCaMP7f (nH = 3.1).

For numerical stability, we chose to include the effect of the Hill coefficient in fθ such that \({{\bf{c}}}{({{\boldsymbol{\theta }}},t)}^{{n}_{{{\rm{H}}}}}={{{\bf{f}}}}_{{{\boldsymbol{\theta }}}}({{\bf{z}}}(t))\). Unless specified otherwise, the estimates displayed are the nHth root of the output of fθ.

In Table 2, we present the kinetic coefficients. They were measured experimentally for the sensors in the datasets from ref. 21 that we use. Together with the number of knots of the latent spline, they are the only hyperparameters to set in our framework.

Table 2 Parameters of the sensor kinetics

We set the time window \(\left[0,T\right)\) so as not to cut the signal of interest (i.e., a burst of APs). In addition, the memory cost could limit the duration of the sequence; this was not detrimental to the quality of reconstruction, as past values have little impact on future values after some time. Note that the principle of DUSK is not bound to a convolutional architecture with a fixed spatial discretization; alternatives with a continuous spatial representation exist74. Similarly, we adopt the backward Euler scheme for temporal discretization for its efficiency and simplicity; other adaptive schemes are possible too.
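To make the optimization settings above concrete, here is a minimal PyTorch sketch of the projected loop: the AMSGrad variant of Adam with a fixed learning rate, a simple stopping tolerance, and a projection of g0 and qe onto their bounds after every update. Only the reported settings (learning rate 0.01, at most 10,000 iterations, tolerance 10⁻¹², and the bounds on g0 and qe) come from the text; the loss callable, the exact stopping rule, and all names are our own assumptions.

```python
import torch

def optimize_with_projection(loss_fn, theta_params, g0, qe,
                             n_iter=10_000, lr=0.01, tol=1e-12, qe_min=1e-6):
    """Hedged sketch of the projected optimization of (14).

    loss_fn      : callable returning the l1 data term plus tau_p * TV(q_e)
    theta_params : iterable of network weights and spline coefficients b_k
    g0, qe       : background and scaling tensors with requires_grad=True
    """
    opt = torch.optim.Adam(list(theta_params) + [g0, qe], lr=lr, amsgrad=True)
    prev_loss = float("inf")
    for _ in range(n_iter):
        opt.zero_grad()
        loss = loss_fn()
        loss.backward()
        opt.step()
        with torch.no_grad():                            # project violating entries
            g0.clamp_(min=torch.finfo(g0.dtype).tiny)    # enforce (g0)_k > 0
            qe.clamp_(min=qe_min)                        # enforce (qe)_k >= 1e-6
        if abs(prev_loss - loss.item()) < tol:           # our guess at the stopping rule
            break
        prev_loss = loss.item()
    return loss.item()
```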
All experiments were run on a Linux workstation with an Intel Xeon Gold 6226R CPU (2.90 GHz), 4 × 16 GB of RAM, and an NVIDIA RTX A6000 GPU (48 GB).

Temporal regularity

In this work, the basis functions \({\{\varphi (\cdot -{t}_{k})\}}_{k=0}^{K-1}\) in (10) are cubic B-splines. In Supplementary Note 10, we show the effect of our temporal regularization on the recovered concentration distribution. As the number of knots K decreases, the temporal traces over the ROI show fewer fluctuations in both the predicted fluorescence and the concentration. This behavior parallels the effect of the tradeoff parameter in classical spatial regularization.

Simulation pipeline

Using a space colonization method75, we designed an astrocyte-like sample fully contained in the image domain \(\Omega \subset {{\mathbb{R}}}^{2}\) (see Fig. 2B, bottom). We used a 2D reaction-diffusion equation to simulate the propagation of the CSI, where the diffusion coefficient in the branches (Ωbranches ⊂ Ω) is higher than in the background. The concentration thus propagates rapidly along the branches and more slowly elsewhere. The signal contains two distinct traveling waves with little dispersion: one wave in the branches with higher velocity and amplitude, and another in the background with lower velocity and amplitude. We thus solve the partial differential equation$$\frac{\partial c({{\bf{x}}},t)}{\partial t}= {{{\boldsymbol{\nabla }}}}_{{{\bf{x}}}}\cdot (D({{\bf{x}}}){{{\boldsymbol{\nabla }}}}_{{{\bf{x}}}}c({{\bf{x}}},t))+{k}_{r}p({{\bf{x}}},t)c({{\bf{x}}},t)\\ -{k}_{d}({{\bf{x}}}) c ({{\bf{x}}},t)+s({{\bf{x}}},t),\\ \frac{\partial p({{\bf{x}}},t)}{\partial t}= -{k}_{r}p({{\bf{x}}},t) c ({{\bf{x}}},t),$$
(16)
where ∇x and ∇x ⋅ denote the spatial gradient and the spatial divergence, respectively. The function \(D:\Omega \to {{\mathbb{R}}}_{\ge 0}\) is the spatially varying diffusion coefficient. The quantity \(p:\Omega \times {{\mathbb{R}}}_{\ge 0}\to {{\mathbb{R}}}_{\ge 0}\) is a precursor in the autocatalytic reaction \({{\rm{p}}}+{{\rm{c}}}{\to }^{{{{\rm{k}}}}_{r}}2\,{{\rm{c}}}\) with rate kr. The chemical species is degraded at a rate \({k}_{d}:\Omega \to {{\mathbb{R}}}_{\ge 0}\). The spatially varying decay rate allows us to obtain different traveling waves in the branches and in the background. The source term s(x, t) models a stress signal that is compactly supported in both space and time. More precisely, s is nonzero, with value s0 = 1, only during the first two frames and over a localized area at the center of the image (i.e., in the soma of the astrocyte). We solve (16) with a backward Euler scheme and a finite-element solver76 for the temporal and spatial discretizations, respectively. To ensure numerical stability, we simulate with the time step ΔtEuler = 0.5Δt and downsample the computed concentration by a factor of two to obtain CGT.

The fluorescence images are then simulated using (11) with constant parameters \({({{{\bf{g}}}}_{0})}_{k}=0.25\) and \({({{{\bf{q}}}}_{e})}_{k}=10\) for k ∈ [1, …, N]. The sensor kinetics kf, kb correspond to those of jGCaMP8s (Table 2), but the Hill coefficient is set to nH = 1. The recorded measurement images \({{\bf{G}}}\in {{\mathbb{R}}}^{N\times ({N}_{t}/D)}\) are then generated according to$${{\bf{G}}}={{{\bf{H}}}}_{{{\bf{p}}}}\left({{{\bf{C}}}}_{{{\rm{GT}}}}^{\odot {n}_{{{\rm{H}}}}}\right)+{{\bf{N}}},$$
(17)
where each (k, l)th entry of \({{\bf{N}}}\in {{\mathbb{R}}}^{N\times ({N}_{t}/D)}\) is a realization of a signal-dependent Gaussian random variable \({{\mathcal{N}}}(0,{\sigma }_{k,l}^{2})\). We emulate Poisson noise by setting \({\sigma }_{k,l}^{2}={({{{\bf{H}}}}_{{{\bf{p}}}}\left({{{\bf{C}}}}_{{{\rm{GT}}}}^{\odot {n}_{{{\rm{H}}}}}\right))}_{k,l}/B\) with photon budget B = 25.

The size of the astrocyte-like sample is 40 × 40 μm², discretized with (128 × 128) pixels (corresponding to a pixel length of ~0.3 μm). We acquired 128 frames at an acquisition rate of 200 Hz (frame period of 5 ms). We set the diffusion coefficient to D(x) = 3.91 × 10⁻⁴ μm²/s for x ∈ Ωbranches, and to D(x) = 9.77 × 10⁻⁷ μm²/s otherwise. The reaction rate is set to kr = 1, and the decay rate to kd(x) = 0.03 for x ∈ Ωbranches and kd(x) = 0.3 otherwise. We set the initial conditions to c(x, 0) = 0 and p(x, 0) = 1 for x ∈ Ωbranches, and to p(x, 0) = 0.75 otherwise. We impose homogeneous Dirichlet boundary conditions on ∂Ω. The speed of the traveling wave then reaches about 31.25 μm/s, which is consistent with values measured in astrocytes77,78.

Quantitative evaluation of the performance

To measure the performance of reconstruction, we use the regressed signal-to-noise ratio (RSNR) over the whole image sequence,$${{\rm{RSNR}}}({{\bf{C}}},\hat{{{\bf{C}}}})={\max }_{a,b\in {\mathbb{R}}}\,20{\log }_{10}\left(\frac{\parallel \!\! {{\bf{C}}}{\parallel }_{F}}{\parallel \!\! a\hat{{{\bf{C}}}}+b{{{\bf{1}}}}_{N,{N}_{t}}-{{\bf{C}}}{\parallel }_{F}}\right),$$
(18)
where C and \(\hat{{{\bf{C}}}}\) are the ground truth and the reconstructed concentration, respectively. The RSNR adapts the standard signal-to-noise ratio, which compares the magnitude of the reconstruction error to the magnitude of the ground truth, to account for possible shifts and scalings of the signal. It is expressed on a logarithmic scale; larger values indicate better performance. A numerical sketch of both evaluation metrics is given at the end of this section.

To quantify the detectability of a peak (i.e., a fluorescence or calcium rise) within a spatial ROI, we compute a sensitivity index defined as79$${d}^{{\prime} }({\mu }_{{{\rm{s}}}},{\mu }_{{{\rm{bg}}}},{\sigma }_{{{\rm{s}}}},{\sigma }_{{{\rm{bg}}}})=\frac{| {\mu }_{{{\rm{s}}}}-{\mu }_{{{\rm{bg}}}}| }{\sqrt{\frac{{\sigma }_{{{\rm{s}}}}^{2}+{\sigma }_{{{\rm{bg}}}}^{2}}{2}}},$$
(19)
where μs, μbg (σs, σbg) denote the median value (median absolute deviation) at the peak occurrence and in the background (e.g., before the rise), respectively.

Remarks about the physical model

While originally intended to be an integer, the Hill coefficient nH is now generally understood as an empirical indicator of cooperative binding. In this interpretation, the coefficient often takes non-integer values, which can be indicative of additional (hidden) sequential reactions34 or dependent binding sites31. The coefficients kf, kb, and nH in (1), on which Hill's equation is based, are the ones that are usually measured experimentally when designing (or using) new sensors. For example, the values in Table 2 were measured for the same GCaMP8 dataset that we use in this article.

All this suggests that the behavior of some chemical sensors can be approximated by the phenomenological reaction model (1) with the three experimental parameters. We tested the quality of our reconstruction using mixing and unmixing experiments, where the behavior of the underlying concentration is "known" (see Supplementary Note 3). In addition, we offer further comments on the suitability of model (1) in Supplementary Notes 7 and 8. When available, more complex chemical models can be plugged into the framework seamlessly by replacing (5) with another set of ODEs. This might yield better estimates, but it requires one to work out the model and to devise ways of measuring what are often (too) many parameters (see Supplementary Note 7).

While the fluorescence of most sensors (e.g., GCaMP) increases upon binding to the corresponding CSI as per (6), a few sensors see their fluorescence decrease instead. More specifically, the light they emit is proportional to the concentration of the unbound sensor. Such sensors are readily compatible with our framework: one simply replaces the bound proportion \({\tilde{s}}_{b}\) in (6) by the unbound proportion. One example is the H2O2 sensor used to study signaling in plants, which we explored in ref. 33.

Statistics and reproducibility

We applied our computational framework to experimental video data of calcium sensors collected for ref. 21. These data are available at ref. 80. The videos that we analyzed were randomly selected from the 1.4 TB dataset for jGCaMP8s, jGCaMP8m, and jGCaMP7f. No other data were excluded. Sample sizes are specified in the manuscript for each experiment. We did not use blinding or randomization because there were no experimental groups.

Reporting summary

Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article.
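As announced above, we close with a minimal NumPy sketch of the two evaluation metrics of Eqs. (18) and (19). The affine regression inside the RSNR is solved in closed form by least squares; the function names are ours.

```python
import numpy as np

def rsnr(c_true, c_hat):
    """Regressed SNR of Eq. (18): fit the best affine map a*c_hat + b first."""
    x, y = c_hat.ravel(), c_true.ravel()
    A = np.stack([x, np.ones_like(x)], axis=1)
    (a, b), *_ = np.linalg.lstsq(A, y, rcond=None)        # least-squares a, b
    err = a * x + b - y
    return 20.0 * np.log10(np.linalg.norm(y) / np.linalg.norm(err))

def sensitivity_index(trace, peak_idx, bg_idx):
    """Sensitivity index d' of Eq. (19), using medians and MADs as in the text."""
    mad = lambda v: np.median(np.abs(v - np.median(v)))
    mu_s, mu_bg = np.median(trace[peak_idx]), np.median(trace[bg_idx])
    s_s, s_bg = mad(trace[peak_idx]), mad(trace[bg_idx])
    return np.abs(mu_s - mu_bg) / np.sqrt((s_s**2 + s_bg**2) / 2.0)
```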
