Tutorial Contents

Fluorescent image pre-processing

Photo-bleaching

Remove exponential drop

ΔF/F0

Fluorescent (calcium) image pre-processing

DataView is primarily intended for analysing electrophysiological data, but it has some facilities that may be useful for fluorescent image analysis when fluorescence intensity is recorded as a series of values at equally spaced time intervals. For example, in calcium imaging experiments, a transient increase in fluorescence in a localized patch of tissue indicates a brief increase in the concentration of free calcium within that patch, which in turn might indicate an increase in synaptic and/or spiking activity in that region.

DataView does not have any facility to interact with a camera directly. The assumption is that image capture is done externally and that the data are then exported in a format that DataView can read. This might typically be a CSV file, with each column containing the time series from a separate region of interest. The columns are then imported as separate traces in DataView.

Photo-bleach compensation

In a calcium imaging experiment, the amount of fluorescence depends on the calcium concentration, but it also depends on the amount of the calcium-sensitive fluorophore present. One common problem is that when a fluorophore is exposed to light at its excitation frequency, it can be gradually but irreversibly destroyed in a process known as photo-bleaching. This means that the emission of fluorescent light decays over time, and this can contaminate any signals due to variations in calcium concentration.

photo-bleaching
An example of photo-bleaching with a calcium-sensitive dye.

The precise molecular mechanism of photo-bleaching is not entirely clear, but empirical studies show that it often follows a negative mono- or bi-exponential time course, with a decline to some steady-state value representing the intrinsic tissue fluorescence plus that emitted by any unbleachable fluorophore. The background fluorescence can thus often be fitted by one of the following equations:

\[V_t = V_{\infty} + (V_0 - V_{\infty} )e^{-t/\tau_m} \qquad\textsf{mono-exponential}\]

\[V_t = V_{\infty} + w_0(V_0 - V_{\infty} )e^{-t/\tau_{\,0}} + w_1(V_0 - V_{\infty} )e^{-t/\tau_{\,1}} \qquad\textsf{bi-exponential}\]

where V0 is the fluorescence value at the start of exposure, Vt is the value at time t from the start, V∞ is the value after the exponential decline stabilises, τm is the time constant of the mono-exponential decline, τ0 and τ1 are the time constants of the 1st and 2nd exponents of the bi-exponential decline, and w0 and w1 are the corresponding weightings of those two exponents.
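
These models can be written directly in code. The following is a minimal Python/NumPy sketch, with function and parameter names chosen by us to mirror the symbols above (it is not DataView's own code):

```python
import numpy as np

def mono_exp(t, v_inf, v0, tau):
    """Mono-exponential decline from V0 towards V-infinity with time constant tau."""
    return v_inf + (v0 - v_inf) * np.exp(-t / tau)

def bi_exp(t, v_inf, v0, w0, w1, tau0, tau1):
    """Bi-exponential decline with two weighted time constants."""
    return (v_inf
            + w0 * (v0 - v_inf) * np.exp(-t / tau0)
            + w1 * (v0 - v_inf) * np.exp(-t / tau1))
```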

If the raw data have a reasonable fit to one of these functions they can be transformed to compensate for the exponential decline. This is usually done by dividing the raw values by the fitted exponential values on a point-by-point basis. If the data were an exact fit to the curve, this would make the trace have a value of 1 throughout its length. The absolute numerical value of fluorescence intensity can be restored simply by multiplying all corrected values by the first uncorrected intensity level (i.e. the level before any photo-bleaching has occurred). Deviations from the fitted curve caused by transient changes in calcium concentration then become apparent as deviations from this value.

This is a proportional compensation method. An alternative linear method is to just subtract the fitted curve from the raw data (making all values 0 for a perfect fit), and then to add back the initial value.
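
As a minimal sketch of the two compensation methods (assuming `raw` is the recorded trace and `fit` is the fitted bleach curve evaluated at the same time points; using the first raw sample as "the level before any photo-bleaching has occurred" is one reasonable choice):

```python
import numpy as np

def compensate_proportional(raw, fit):
    """Divide out the fitted bleach curve, then restore absolute intensity."""
    raw, fit = np.asarray(raw, dtype=float), np.asarray(fit, dtype=float)
    return raw / fit * raw[0]

def compensate_linear(raw, fit):
    """Subtract the fitted bleach curve, then add back the initial intensity."""
    raw, fit = np.asarray(raw, dtype=float), np.asarray(fit, dtype=float)
    return raw - fit + raw[0]
```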

One consequence of proportional compensation is that a deviation from the baseline that occurs early in the raw recording, before much photo-bleaching has occurred, will end up as a smaller signal in the compensated record than a deviation of the same absolute raw size that occurs late in the recording, after substantial photo-bleaching. However, this is often what is wanted, since the later signal represents a greater fractional change from baseline (and hence presumably a bigger calcium signal) than the earlier signal. A downside of proportional compensation is that any non-fluorescent noise in the signal is also amplified more in the later part of the recording than the earlier.

A major problem with either method is that the genuine calcium-related signal is included in the data from which the fit is derived, and thus the fit equation may not be a true representation of baseline photo-bleaching. This means that quantitative values derived after such photo-bleach compensation should be treated with considerable caution.

Remove Exponential Drop

The exponential drop example file contains 8 traces representing calcium fluorescence levels recorded at 8 different regions of interest in a fly larva that has been genetically engineered to express a calcium-sensitive fluorophore. There is a very obvious exponential-like decline in the signal level over the period of the recording in each trace.

The program makes an initial estimate of the parameters of a mono-exponential decline, and the display at the top-right of the dialog shows the raw signal with this initial exponential curve superimposed as a red line (for an explanation of parts of the dialog box not mentioned in this tutorial, press F1 to see the on-line help). The line is a reasonable, but not exceptionally good, fit to the data.

Note the BIC value of 1800. The interpretation of the BIC is described elsewhere, but the take-home message here is that lower BIC values indicate a better model fit than higher values.
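
The formula DataView uses is described in its own documentation; for orientation, a widely used form of the BIC for least-squares fits with Gaussian errors is:

\[BIC = n\,\ln\!\left(\frac{RSS}{n}\right) + k\,\ln(n)\]

where n is the number of data points, RSS is the residual sum of squares of the fit, and k is the number of fitted parameters. The k ln(n) term penalises extra parameters, so a more complex model only achieves a lower BIC if it reduces the residuals by enough to outweigh that penalty.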

Once the fit has been run, the BIC value drops to 460, confirming that the optimised curve is a better fit than the initial estimate.

When the bi-exponential fit stabilizes, note that the BIC value has dropped to 126, indicating that the improvement in fit brought about by the double exponentials is “worth it” compared to the mono-exponential, even though it requires more parameters. The transformed data are now reasonably linear when viewed in the Preview window.

Remove exponential drop dialog
Fitting a bi-exponential curve to data and removing it to compensate for photo-bleaching in a calcium fluorescence intensity time-series.
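
The same mono- versus bi-exponential comparison can be sketched in Python using scipy.optimize.curve_fit rather than DataView's own fitting engine. The synthetic trace, starting values, and Gaussian-error BIC formula below are our own choices for illustration:

```python
import numpy as np
from scipy.optimize import curve_fit

def mono_exp(t, v_inf, v0, tau):
    return v_inf + (v0 - v_inf) * np.exp(-t / tau)

def bi_exp(t, v_inf, v0, w0, tau0, tau1):
    # w1 is taken as 1 - w0 here to remove the redundancy between v0 and the
    # weights; the dialog's own parameterisation may differ.
    return v_inf + (v0 - v_inf) * (w0 * np.exp(-t / tau0)
                                   + (1 - w0) * np.exp(-t / tau1))

def bic(y, y_fit, n_params):
    """Gaussian-error BIC for a least-squares fit (one common formulation)."""
    n = len(y)
    rss = np.sum((y - y_fit) ** 2)
    return n * np.log(rss / n) + n_params * np.log(n)

# Synthetic bleaching trace with two time constants plus noise.
rng = np.random.default_rng(0)
t = np.linspace(0, 100, 2000)
y = 200 + 480 * np.exp(-t / 8) + 320 * np.exp(-t / 40) + rng.normal(0, 5, t.size)

p_mono, _ = curve_fit(mono_exp, t, y, p0=[y[-1], y[0], 20.0])
p_bi, _ = curve_fit(bi_exp, t, y, p0=[y[-1], y[0], 0.5, 5.0, 50.0], maxfev=5000)

bic_mono = bic(y, mono_exp(t, *p_mono), p_mono.size)
bic_bi = bic(y, bi_exp(t, *p_bi), p_bi.size)
print(f"mono-exponential BIC = {bic_mono:.0f}")
print(f"bi-exponential BIC = {bic_bi:.0f}")
```

With data like these, the bi-exponential model should return the lower BIC, mirroring the behaviour described above.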


What Could Possibly Go Wrong?

The fit process stops either when the convergence criterion is met, which means that the percentage change in parameter values is less than the specified criterion for 3 successive iterations, or when the number of iterations reaches the set Maximum iterations. In the latter case, you can run the fit again to see whether the criterion can be reached with further iterations, or just work with the parameter values that have been achieved so far.

If the data are a poor fit to the model, the fit parameters may go off to extreme and obviously incorrect values. This is flagged by a message saying the fit “Failed to converge”, followed by restoration of the original parameters. You could try reducing the iteration Step size to see whether that helps, or manually adjusting the starting conditions to try to get a better initial fit. But if the data are too distorted, automatic fitting may be impossible, and you may have to either abandon the transformation or accept the best fit that you can achieve by manual adjustment.
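
As an illustration of the stopping rule described above (a sketch only, not DataView's code; the default criterion value and run length here are assumptions):

```python
def converged(param_history, criterion_pct=0.01, run_length=3):
    """True once every parameter has changed by less than criterion_pct percent
    over each of the last run_length successive iterations."""
    if len(param_history) <= run_length:
        return False
    recent = param_history[-(run_length + 1):]
    for older, newer in zip(recent[:-1], recent[1:]):
        worst = max((abs((n - o) / o) * 100.0
                     for n, o in zip(newer, older) if o != 0), default=0.0)
        if worst >= criterion_pct:
            return False
    return True
```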

Manually Transform a Single Trace

At this stage, you could save the individual compensated trace to a file simply by clicking the Transform button. Before doing this you should select whether to write a new file or overwrite the original (only possible for native DataView files), and whether to add the transformed trace as a new trace, or to replace the existing raw trace.

Auto-Transform Multiple Traces

There are 8 traces in the exponential drop file, and it would be tedious to transform each one manually in turn. However, we can batch process traces so that the steps are carried out automatically.

The auto transform process takes each selected trace in turn and tries to fit first a mono- and then a bi-exponential curve. Whichever fit has the lower BIC value is used to transform that trace. The Progress box shows which traces have been processed, the BIC values and number of iterations for each exponential, and the exponential type used in the transform.

exponential drop progress
The progress report dialog after processing multiple traces to remove an exponential drop caused by photo-bleaching in a calcium fluorescent image.


The Progress dialog above shows that the bi-exponential fit for trace 4 did not converge within the Maximum iterations (500), but that its BIC was still (just) below that for the mono-exponential and so has been used. The bi-exponential fit for trace 5 shows the “failed to converge” message, and so the mono-exponential fit, which converged successfully, has been used. [Note that you cannot compare BIC values between traces – they are only useful for comparing fits of different models to the same data.]
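
The same idea, extended to several traces, could be outlined as follows. This is a sketch of the batch logic only, again using scipy rather than DataView's fitting engine; `traces` is assumed to be a 2-D array with one region of interest per column, and failed fits are simply skipped rather than reported as in the Progress box:

```python
import numpy as np
from scipy.optimize import curve_fit

def mono(t, v_inf, v0, tau):
    return v_inf + (v0 - v_inf) * np.exp(-t / tau)

def bi(t, v_inf, v0, w0, tau0, tau1):
    return v_inf + (v0 - v_inf) * (w0 * np.exp(-t / tau0)
                                   + (1 - w0) * np.exp(-t / tau1))

def bic(y, y_fit, k):
    n = len(y)
    return n * np.log(np.sum((y - y_fit) ** 2) / n) + k * np.log(n)

def auto_transform(t, traces):
    """Fit mono- and bi-exponentials to each trace (one per column), keep the
    lower-BIC fit, and return proportionally compensated traces."""
    out = traces.astype(float).copy()
    for i in range(traces.shape[1]):
        y = traces[:, i].astype(float)
        candidates = []
        for model, p0 in ((mono, [y[-1], y[0], t[-1] / 5]),
                          (bi, [y[-1], y[0], 0.5, t[-1] / 10, t[-1] / 2])):
            try:
                p, _ = curve_fit(model, t, y, p0=p0, maxfev=5000)
                y_fit = model(t, *p)
                candidates.append((bic(y, y_fit, p.size), y_fit))
            except RuntimeError:
                pass  # this model failed to converge; try the other one
        if candidates:
            best_fit = min(candidates, key=lambda c: c[0])[1]
            out[:, i] = y / best_fit * y[0]  # proportional compensation
    return out
```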

ΔF/F0

When comparing time-series changes in fluorescence between different tissues and different experiments, it is usual to normalize the levels in some way because unavoidable differences in fluorophore loading and tissue characteristics can significantly affect absolute fluorescence levels.

One fairly standard procedure in fluorescent image analysis is to use the signal-to-baseline ratio (SBR), also called ∆F/F0 normalization. In this, F0 is a measure of the baseline fluorescence in the resting state, and ∆F is the moment-by-moment deviation from that baseline. Thus:

\[SBR_t = \frac{\Delta{F_t}}{F_0} = \frac{F_t-F_0}{F_0}\]

There are several different methods for determining F0 described in the literature, and some of these are available in DataView.

The Camera baseline is normally left at 0, but can be set to a positive value if the imaging camera has a non-zero output even in darkness. This value is subtracted from both Ft and F0 during normalization (unless the F0 value is set explicitly by the user - see below).

The F0 offset is also normally left at 0, but can be set to a positive value and added to the divisor during normalization if there is a risk that on-the-fly F0 calculation might yield values very close to zero.
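
As a minimal sketch of this arithmetic (function name is ours, and the exact order of operations inside DataView may differ in detail), assuming `f` is the raw fluorescence trace and `f0` the chosen baseline:

```python
import numpy as np

def delta_f_over_f0(f, f0, camera_baseline=0.0, f0_offset=0.0):
    """Signal-to-baseline ratio dF/F0 for a fluorescence trace.

    The camera baseline is subtracted from both the trace and F0, and the
    F0 offset is added to the divisor to guard against a near-zero F0.
    """
    f = np.asarray(f, dtype=float) - camera_baseline
    f0 = f0 - camera_baseline
    return (f - f0) / (f0 + f0_offset)

# Example: F0 taken as the mean of an initial quiescent period.
trace = np.array([100.0, 101.0, 99.0, 150.0, 140.0, 102.0])
print(delta_f_over_f0(trace, f0=trace[:3].mean()))
```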

There are five options for obtaining F0 values in DataView.

  1. F0 is set to the average Ft value measured from the trace over the whole recording. This may be appropriate in a preparation that is continuously active, where it is not possible to measure a "resting" value. This can yield negative ∆F/F0 values.
  2. F0 is set to the average Ft value measured from the trace over a user-specified time window in the recording. This may be appropriate if a preparation is initially quiescent, and then something is done to activate it. The F0 measurement can be made during the quiescent period.
  3. F0 is set to an explicit value chosen by the user. This could be useful if one of the traces contained a recording from a non-reactive region of tissue. The average value of this trace could be obtained from the Analyse: Measure data: Whole-trace statistics menu command and used directly as F0 for the other traces.
  4. The percentile filter option continuously updates F0 throughout the recording. It does this by passing a sliding window of user-specified duration over the data, and setting F0 to the specified percentile value (typically 20%) within the window (e.g. Mu et al., 2019). This not only normalizes Ft, but also filters the ∆F/F0 values to emphasize transient changes at the expense of plateaus. However, it should be used with care because this filter type is not well characterized from a theoretical signal-processing perspective.
  5. The lowest sum process finds the user-specified window of data in the trace with the lowest average value in the recording, and uses this as F0. This may be useful if quiescent periods of data occur at unpredictable times within a recording (options 4 and 5 are sketched in code below).
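
The percentile-filter and lowest-sum approaches (options 4 and 5 above) might look something like this in Python. This is an illustration only; the edge handling of the sliding window is an assumption rather than a description of DataView's behaviour:

```python
import numpy as np
from scipy.ndimage import percentile_filter

def f0_percentile(trace, window_samples, percentile=20):
    """Running F0: the given percentile within a sliding window (option 4)."""
    return percentile_filter(np.asarray(trace, dtype=float), percentile,
                             size=window_samples, mode="nearest")

def f0_lowest_sum(trace, window_samples):
    """Fixed F0: the mean of the window with the lowest average value (option 5)."""
    trace = np.asarray(trace, dtype=float)
    window_means = np.convolve(trace, np.ones(window_samples) / window_samples,
                               mode="valid")
    return window_means.min()
```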

Note that in all these cases except the user-specified value, the F0 value is calculated independently for each trace and applied to that trace only. If you wish to apply the F0 value derived from one trace to all the traces in a recording, you should calculate it separately using the various analysis procedures in DataView, and then set F0 to this user-specified value.