Metadata-Version: 2.1
Name: ncrar-abr
Version: 2024.2.14
Summary: Evoked potential analysis software
Author-email: Brad Buran <bburan@alum.mit.edu>, Brad Buran <buran@ohsu.edu>, Brad Buran <bradley.buran@va.gov>
Maintainer-email: Brad Buran <bburan@alum.mit.edu>, Brad Buran <buran@ohsu.edu>, Brad Buran <bradley.buran@va.gov>
License: BSD 3-Clause License
        
        Copyright (c) 2019, Brad Buran
        All rights reserved.
        
        Redistribution and use in source and binary forms, with or without
        modification, are permitted provided that the following conditions are met:
        
        1. Redistributions of source code must retain the above copyright notice, this
           list of conditions and the following disclaimer.
        
        2. Redistributions in binary form must reproduce the above copyright notice,
           this list of conditions and the following disclaimer in the documentation
           and/or other materials provided with the distribution.
        
        3. Neither the name of the copyright holder nor the names of its
           contributors may be used to endorse or promote products derived from
           this software without specific prior written permission.
        
        THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
        AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
        IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
        DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
        FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
        DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
        SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
        CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
        OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
        OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
        
Project-URL: homepage, https://github.com/NCRAR/ncrar-abr
Project-URL: documentation, https://github.com/NCRAR/ncrar-abr
Project-URL: repository, https://github.com/NCRAR/ncrar-abr
Classifier: Operating System :: OS Independent
Classifier: Programming Language :: Python :: 3
Classifier: License :: OSI Approved :: BSD License
Requires-Python: >=3.9
Description-Content-Type: text/markdown
License-File: LICENSE.txt
Requires-Dist: atom
Requires-Dist: enaml[qt6-pyside]
Requires-Dist: numpy
Requires-Dist: scipy
Requires-Dist: pandas
Requires-Dist: matplotlib
Requires-Dist: openpyxl

# ABR analysis

This program facilitates the analysis of auditory evoked responses (tested with auditory brainstem responses and compound action potentials). You can visualize the waveform series collected during a single experiment, identify the threshold, and extract the amplitude and latency of each individual peak in the waveform.

The program works with the text file exported by the IHS system at the NCRAR.

## Installing

### Getting started

The simplest way to get started is to download the [Anaconda Python Distribution](https://www.anaconda.com/distribution/). Once installed, you will have new programs available in your start menu. Open `Anaconda Prompt` and type the following sequence of commands to install the program:

    conda create -n ncrar-abr "python>=3.9"
    conda activate ncrar-abr
    pip install ncrar-abr
    conda run -n ncrar-abr ncrar-abr-make-shortcuts

Note that previous versions of the software automatically added a shortcut to the Start menu. This shortcut must now be created explicitly by running `ncrar-abr-make-shortcuts`, as shown in the last command above (you can run it any time after `pip install ncrar-abr`). You will then have a shortcut in the Windows Start menu.

### Running pilot versions

If you wish to test a newer version of the ABR program without losing your current copy (e.g., to test a new feature), you can install the new version alongside your main one. The best way to do this is to open the `Anaconda Prompt` and type:

    conda create -n ncrar-abr-test "python>=3.9"
    conda activate ncrar-abr-test
    pip install ncrar-abr



## Usage

![Main interface](docs/abr.png)

The main interface for the program allows you to configure the following settings. All settings are required:
<dl>
    <dt><strong>Analyzer</strong></dt>
    <dd>Your initials. This will be saved as part of the output filename that is generated.</dd>
    <dt><strong>File format</strong></dt>
    <dd>The type of file you are going to be analyzing. Right now only the IHS text export is supported.</dd>
    <dt><strong>Calibration file</strong></dt>
    <dd>An Excel file mapping the nominal (i.e., values entered in the IHS system) to the actual (i.e., measured) stimulus levels.</dd>
    <dt><strong>Latencies file</strong></dt>
    <dd>An Excel file providing suggested latencies for each stimulus frequency and wave. This facilitates peak-picking.</dd>
    <dt><strong>Measure waves</strong></dt>
    <dd>Waves you wish to measure. If none are checked, then only threshold is measured.</dd>
    <dt><strong>Filter?</strong></dt>
    <dd>If unchecked, no filtering is applied. Otherwise, apply the band-pass filter.</dd>
    <dt><strong>Study folder</strong></dt>
    <dd>Folder containing study data. This is only required if you are using the "Launch batch", "Export analysis" or "Compare raters" options.</dd>
</dl>

All settings are saved so that you do not have to re-enter them each time you open the program.

### Required files

#### Calibration file

The calibration file is required; it ensures that the output file generated by the analysis program records the correct (calibrated) level. Every experiment must have an entry in the calibration file. A separate calibration file can be maintained for each study if desired. The spreadsheet must contain a single tab with the following columns:

* `IHS system number`: The system number (not including IHS), e.g., 5453 or 7141.
* `IHS system booth`: This is for your internal reference and is not used by the ABR program.
* `Calibration date`: Day calibration was run. If the date matches the experiment date, it will be assumed that the calibration was run before the data was collected.
* `Calibration frequency`: Frequency of stimulus (or Click).
* `Actual level`: Actual (i.e., measured) level for the level dialed in on the IHS.
* `Level on the IHS`: Level as entered on the IHS system.

We are fairly strict about the calibration requirements. When loading an experiment, the following steps are performed:

* Find the most recent calibration performed before or on the experiment date. If the most recent calibration is older than six months, an error is reported and the file is not loaded.
* Look for the stimulus frequency and level in the calibration file. Since the stimulus level in the experiment file is what was entered on the IHS system, it is matched against the `Level on the IHS` column. If the exact stimulus frequency and level combination is missing, an error is reported and the file is not loaded.

![Calibration error example](docs/cal-error.png)
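The lookup steps above can be sketched with pandas (a simplified illustration using the spreadsheet's column names; the function name, the six-month cutoff approximated as 183 days, and the error messages are assumptions, not the program's actual code):

```python
import pandas as pd

def lookup_calibrated_level(cal, experiment_date, frequency, level,
                            max_age_days=183):
    """Return the calibrated (actual) level for a nominal IHS level.

    `cal` is a DataFrame with the columns described above; this sketch
    is illustrative, not the program's actual API.
    """
    # Consider only calibrations performed on or before the experiment date.
    prior = cal[cal["Calibration date"] <= experiment_date]
    if prior.empty:
        raise ValueError("No calibration on or before the experiment date")

    most_recent = prior["Calibration date"].max()
    if (experiment_date - most_recent).days > max_age_days:
        raise ValueError("Most recent calibration is older than six months")

    # The experiment file stores the nominal level, so match it against
    # the `Level on the IHS` column for the exact frequency/level combo.
    match = prior[
        (prior["Calibration date"] == most_recent)
        & (prior["Calibration frequency"] == frequency)
        & (prior["Level on the IHS"] == level)
    ]
    if match.empty:
        raise ValueError(f"No calibration entry for {frequency} at {level} dB")
    return match["Actual level"].iloc[0]
```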

#### Latencies file

The latencies file contains suggested latencies (mean and standard deviation) for each wave of interest. Since latencies can vary with stimulus, there is a row for each stimulus. Be sure there is an entry for each stimulus and wave that you wish to analyze. Enter Click as `click` or `Click`. Tonebursts must be entered in units of kHz (e.g., `1`, `3`, `4`, `6`). The mean and standard deviations for each wave act as priors that weight the peak detection algorithm so it is more likely to correctly identify the peak with minimal adjustment. The latencies must be provided in an Excel spreadsheet in a tab called `latencies`. When you run the batch export of a study folder, it will generate an Excel spreadsheet with the appropriate `latencies` tab (this will allow you to use your historical data to guide future analyses).
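The prior weighting described above can be illustrated with a Gaussian centered on the suggested latency (a minimal sketch, not the program's actual peak-detection algorithm; the function name is hypothetical):

```python
import numpy as np

def weight_candidates(candidate_latencies, mean, sd):
    """Weight candidate peak latencies by a Gaussian prior.

    `mean` and `sd` come from the latencies file for a given stimulus
    and wave. Candidates near the expected latency get weights near 1,
    so the picker favors them with minimal manual adjustment.
    """
    lat = np.asarray(candidate_latencies, dtype=float)
    return np.exp(-0.5 * ((lat - mean) / sd) ** 2)
```

In practice, a picker might select the candidate maximizing `amplitude * weight` rather than raw amplitude alone.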

### Functions

<dl>
    <dt><strong>Launch basic</strong></dt>
    <dd>This opens the standard version of the program, whose interface lets you visualize multiple data files at once (by tearing off and rearranging the dock windows). You can drag and drop one or more IHS text files onto the window. A separate tab is opened for each frequency in each file (i.e., two frequencies in each of three files will open six tabs). The interface allows you to "tear off" tabs and place them side by side.</dd>
    <dt><strong>Launch batch</strong></dt>
    <dd>This will automatically scan the study folder for data that needs to be scored by the specified analyzer. All unprocessed data will be shown to you. As you complete and save each analysis, the program will immediately move to the next dataset. The page up/down keys can be used to move through the analyses without rating them. The algorithm does *not* check whether the filter settings and/or measured waves match those in an existing analysis (it only checks whether an analyzed file with the rater's initials exists).</dd>
    <dt><strong>Export analysis</strong></dt>
    <dd>This will scan the full study folder and create a summary Excel spreadsheet with three tabs (threshold, waves, latencies). The latencies tab is in a format that can be used by the `latencies file` option in the GUI.</dd>
</dl>

### Output format

The amplitude and latency of each point are saved along with the threshold of the series. If the point is part of a subthreshold waveform, the additive inverse of the latency is saved (i.e. when parsing the file, subthreshold data can be recognized by negative latencies). Amplitudes from subthreshold points can be used to estimate the noise floor if desired. If a peak was marked as unscorable, it will appear as NaN in the file.
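When parsing the output yourself, the sign and NaN conventions can be handled as in this sketch (the column names are hypothetical; only the negative-latency and NaN conventions come from the description above):

```python
import pandas as pd

def classify_points(df):
    """Annotate analysis output rows (hypothetical column names).

    Subthreshold points are stored with the additive inverse of their
    latency, and unscorable peaks appear as NaN.
    """
    out = df.copy()
    out["unscorable"] = out["latency"].isna()       # NaN marks unscorable peaks
    out["subthreshold"] = out["latency"] < 0        # negative latency marks subthreshold
    out["latency"] = out["latency"].abs()           # recover the true latency
    return out
```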

### Interface

![Main interface](docs/interface.png)

#### General interaction

Each waveform is bandpass filtered using a Butterworth filter. This filtering process removes the baseline shift as well as high-frequency noise that may interfere with the peak-finding algorithm. The primary mode of interaction with the program is via the keyboard. You may navigate through the waveform stack via the up/down arrows and select a point via the corresponding number key (1-5).  Once a point is selected (it will turn to a white square), you can move it along the waveform using the right/left arrow keys.  Since the algorithm relies on the location of P1-5 to compute the best possible estimate of N1-5, you should correct the location of P1-5 before asking the algorithm to estimate N1-5.  You may also specify threshold by navigating to the appropriate waveform (via the up/down arrows) and hitting the "t" key.
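The filtering step can be approximated with SciPy as follows (the cutoff frequencies and filter order here are illustrative placeholders, not the program's actual settings):

```python
import numpy as np
from scipy import signal

def bandpass_filter(waveform, fs, lowcut=300, highcut=3000, order=1):
    """Zero-phase Butterworth band-pass filter.

    Removes DC/baseline shift (below `lowcut`) and high-frequency
    noise (above `highcut`). `fs` is the sampling rate in Hz.
    The defaults are assumptions for illustration only.
    """
    sos = signal.butter(order, [lowcut, highcut], btype="bandpass",
                        fs=fs, output="sos")
    # sosfiltfilt filters forward and backward, so peak latencies
    # are not shifted by the filter's phase response.
    return signal.sosfiltfilt(sos, waveform)
```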

The typical sequence of steps is as follows:

* Load the waveform and adjust the view to your liking. You can toggle between raw and normalized mode by pressing "n". If the topmost waveforms are off-screen, you can move them down by pressing Shift+Down. If the bottommost waveforms are off-screen, you can move them up by pressing Shift+Alt+Up.
* Specify threshold by using the arrow keys to navigate to the level with threshold and pressing "t". If all waveforms are below threshold, you can press "Alt+Up" (or click "All below threshold"). If all waveforms are above threshold, you can press "Alt+Down" (or click "All above threshold").
* Press "I" to automatically guess the location of the positive peaks. Navigate up/down through the waveforms and select peaks by pressing the corresponding number key (1 ... 5). Once a peak is selected, you can fine-tune its position. Right/left arrows snap to the next identified peak. Hold down the Alt key when using the right/left arrows to adjust the peak in fine increments.
* Once you are happy with the locations of the positive peaks, press "I" again to guess the location of the negative peaks. This time, you select the negative peaks by pressing Alt at the same time you press the corresponding number key.
* Once you are happy with the analysis, you can save it by hitting "S". A pop-up will indicate that the data was successfully saved.

#### Additional details

The current waveform is displayed as a thick, black line.  Once a threshold is specified, subthreshold waveforms are indicated by a gray line.  The selected point is indicated by a white square.  Negativities are indicated by triangles, positivities as squares.  Red is P1/N1, yellow is P2/N2, green is P3/N3, light blue is P4/N4, and dark blue is P5/N5.

The following keys can be used when analyzing a waveform:

<dl>
    <dt><strong>Up/Down arrows</strong></dt>
    <dd>Select the previous/next waveform in the series.</dd>
    <dt><strong>Right/Left arrows</strong></dt>
    <dd>Move a toggled peak left or right along the waveform.  Movement of the
        peak will "snap" to estimated peaks in the waveform.  To adjust the peak
        in fine increments, hold down the Alt key simultaneously.</dd>
    <dt><strong>Number keys 1-5</strong></dt>
    <dd>Select the corresponding peak on the current waveform.  To select N1-5,
        hold down Alt while pressing the corresponding number.</dd>
    <dt><strong>I</strong></dt>
    <dd>Estimates P1-5 for all waveforms on the first press and N1-5 for all
        waveforms on the second press.  Subsequent presses have no effect.</dd>
    <dt><strong>U</strong></dt>
    <dd>Updates the guess for the corresponding P or N of successive waveforms
        based on the position of the currently toggled P or N.</dd>
    <dt><strong>N</strong></dt>
    <dd>Toggles the normalized view of the waveform.</dd>
    <dt><strong>+/- keys</strong></dt>
    <dd>Increases/decreases the scaling factor of the waveform.</dd>
    <dt><strong>S</strong></dt>
    <dd>Saves the amplitude and latency of the peaks.</dd>
    <dt><strong>T</strong></dt>
    <dd>Sets the threshold to the current waveform.</dd>
    <dt><strong>Alt+Up</strong></dt>
    <dd>Indicate that all waveforms are below threshold.</dd>
    <dt><strong>Alt+Down</strong></dt>
    <dd>Indicate that all waveforms are above threshold.</dd>
    <dt><strong>Shift+Up</strong></dt>
    <dd>Shift the topmost waveforms up.</dd>
    <dt><strong>Shift+Down</strong></dt>
    <dd>Shift the topmost waveforms down.</dd>
    <dt><strong>Shift+Alt+Up</strong></dt>
    <dd>Shift the bottom-most waveforms up.</dd>
    <dt><strong>Shift+Alt+Down</strong></dt>
    <dd>Shift the bottom-most waveforms down.</dd>
    <dt><strong>Delete</strong></dt>
    <dd>Toggle the selected peak as unscorable.</dd>
</dl>

## Changes

### Version 1.0.0

*Change in build system*

We have stopped using `conda-build` to create Anaconda packages; these packages take over an hour to build and are fragile. Instead, we have switched to a GitHub Actions workflow that builds a package and automatically uploads it to PyPI.

*Loading data from IHS file*

After receiving feedback from IHS, many values that were hard-coded into the file reader for the IHS text export have been corrected. The original code made the following assumptions:

* For stimulus levels 105 dB peSPL or greater, the waveform scaling factor was hard-coded as 1/3.37e2.
* For stimulus levels less than 105 dB peSPL, the waveform scaling factor was hard-coded as 1/6.74e2.

However, the correct formula for scaling factor is based on the number of sweeps and gain as recorded in the file. If you analyzed data using older versions of this software and the following assumptions hold:

* For stimulus levels 105 dB peSPL or greater, gain was set to 100 (see "amp. gain" in file header of IHS text export to find this value) and sweeps was set to 1024.
* For stimulus levels less than 105 dB peSPL, gain was set to 100 (see "amp. gain" in file header of IHS text export to find this value) and sweeps was set to 2048.

Then, you can multiply the peak amplitudes by 1.004368915372173 so that data analyzed using older versions of this software can be compared with data analyzed with versions 2023.06.02 or later. There are some important exceptions to note. For example, the IHS system in the lime booth was calibrated for a 105 dB peSPL stimulus that is entered as 104 dB peSPL in IHS. Even though this is a 105 dB peSPL stimulus, the incorrect scaling factor was applied; thus, in addition to multiplying the data by 1.004368915372173, you must also multiply by two.
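Under those assumptions, the correction can be expressed as a small helper (hypothetical; the constant and the factor of two come from the text above):

```python
LEGACY_CORRECTION = 1.004368915372173

def correct_legacy_amplitude(amplitude, lime_104_dB=False):
    """Correct a peak amplitude analyzed with pre-2023.06.02 versions.

    Assumes the gain/sweep conditions described above held. The
    `lime_104_dB` flag covers the lime-booth stimulus entered as
    104 dB peSPL, which additionally received the wrong scaling factor.
    This helper is an illustration, not part of the program.
    """
    corrected = amplitude * LEGACY_CORRECTION
    if lime_104_dB:
        corrected *= 2  # undo the wrong (halved) scaling factor
    return corrected
```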

Further, it appears that different IHS systems may export a different "zero point". All waveforms saved to the file include roughly 12.5 msec of pre-stimulus baseline, but the exact duration varies from system to system. We originally assumed a 12.8 msec pre-stimulus baseline; thus, data acquired on the lime system using versions pre-dating this one will have latencies that are 0.9 msec longer than expected.
