Detecting "silent" periods using activity levels

Sometimes it is appropriate to automate the detection of “silent” (or at least quiet) periods of a trace, so that the level of background noise can be specified on an objective basis. DataView defines the activity level within a time period in either of two ways:

  1. The activity level is the sum of squared deviations from the mean (the SSD) within that period. (For a period of fixed duration the SSD is directly related to the standard deviation, but is quicker to calculate.) Thus “busy” regions with a high level of variability, and therefore a large SSD, are regarded as highly active, while regions with a low SSD are regarded as relatively silent. This might be an appropriate metric for use with an extracellular recording.
  2. The activity level is the simple sum of values within the period, with low values indicating less activity. This might be an appropriate metric for use with an intracellular recording.
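The two metrics can be sketched in Python (a minimal illustration of the definitions above; the function name `activity_level` is hypothetical and not part of DataView):

```python
import numpy as np

def activity_level(segment, method="ssd"):
    """Activity level of a trace segment.

    method="ssd": sum of squared deviations from the segment mean
                  (high variability => high activity; suited to
                  extracellular recordings).
    method="sum": simple sum of values (low values => low activity;
                  suited to intracellular recordings).
    """
    segment = np.asarray(segment, dtype=float)
    if method == "ssd":
        return np.sum((segment - segment.mean()) ** 2)
    return segment.sum()

# A "busy" segment has a much larger SSD than a quiet one of the same length.
quiet = np.array([0.1, -0.1, 0.05, -0.05])
busy = np.array([2.0, -1.5, 1.8, -2.2])
print(activity_level(busy) > activity_level(quiet))
```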

The program slides a 20 ms “window” across the visible data trace, calculating the SSD as it advances on a point-by-point basis. It records the 20 ms regions which have the highest and lowest SSDs, and adds two events encompassing those regions to the selected event channel. These should be visible in the main display view, and should be located over a silent inter-burst interval and a large burst. The events have the tag labels Min and Max associated with them.
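The sliding-window search described above can be sketched as follows. This is an illustrative reimplementation, not DataView's own code; the cumulative-sum identity SSD = Σx² − (Σx)²/n makes the point-by-point update cheap:

```python
import numpy as np

def find_extreme_windows(trace, rate_hz, window_ms=20.0):
    """Slide a fixed-duration window across the trace one sample at a
    time, and return the start indices of the windows with the lowest
    ("Min") and highest ("Max") SSD."""
    n = max(1, int(round(window_ms * 1e-3 * rate_hz)))  # window in samples
    x = np.asarray(trace, dtype=float)
    # Cumulative sums of x and x^2 give every window's sums in O(1) each.
    c1 = np.concatenate(([0.0], np.cumsum(x)))
    c2 = np.concatenate(([0.0], np.cumsum(x * x)))
    s1 = c1[n:] - c1[:-n]          # per-window sum of x
    s2 = c2[n:] - c2[:-n]          # per-window sum of x^2
    ssd = s2 - s1 * s1 / n         # sum of squared deviations per window
    return int(np.argmin(ssd)), int(np.argmax(ssd))

# Quiet first half, noisy second half, sampled at 10 kHz (20 ms = 200 samples).
rng = np.random.default_rng(0)
trace = np.concatenate([0.01 * rng.standard_normal(1000),
                        1.00 * rng.standard_normal(1000)])
mn, mx = find_extreme_windows(trace, rate_hz=10000)
```

The Min window should land in the quiet half and the Max window in the noisy half, mirroring the events DataView tags in the selected channel.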

The maximum activity level event in channel b is associated with what appears to be a noise artefact in the record, while the minimum activity level event is centred over a flat section of the data trace where there is clear signal drop-out. This indicates two things: first, that the automatic detection is correctly finding regions of high and low variability in the trace; and second, that it is dangerous to use automatic detection methods without actually looking at the data first!

Threshold outlier detection

One use for this automatic detection method is to have an objective method of determining the best “silent” period from which to derive trace statistics for setting thresholds. This is particularly useful when batch processing several files (but note the problem with data glitches mentioned above).

The statistics are now measured from the event in channel a which has the tag label Min. This is the label applied by the Activity level process earlier (note that the match is case sensitive).

The cursor marking the threshold moves downwards slightly, because the statistics are now measured from a region pre-selected for having the lowest standard deviation, whereas before they were measured from a user-defined region that just happened to have a rather low standard deviation.

The cursor now moves to an obviously incorrect location because the event in channel b with the Min label includes a section of data with a zero-valued glitch, giving a standard deviation which is far too small.
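To see why the glitch wrecks the threshold, consider a hedged sketch. The mean-plus-k·SD rule below is an assumption for illustration only; DataView's actual threshold formula may differ:

```python
import numpy as np

def threshold_from_quiet(segment, k=3.0):
    """Hypothetical threshold rule: mean + k standard deviations of a
    'silent' segment (illustrative, not DataView's exact formula)."""
    seg = np.asarray(segment, dtype=float)
    return seg.mean() + k * seg.std()

rng = np.random.default_rng(1)
noise = rng.normal(0.0, 0.1, 2000)   # genuine background noise
glitch = noise.copy()
glitch[500:1500] = 0.0               # signal drop-out: a run of zero samples
# The flat zero-valued stretch shrinks the standard deviation, dragging
# the threshold far too close to the baseline.
print(threshold_from_quiet(glitch) < threshold_from_quiet(noise))
```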

The Threshold dialog actually has a built-in facility for finding the least active region of data in the visible display, which can save time if this is all that is needed.

The Threshold preview should now display two vertical dashed blue cursors indicating the region of least activity within the preview window. It is up to the user to set the preview window appropriately.