Rick Gerkin
Carnegie Mellon University
Pittsburgh, United States

Speaker at Workshop 4

Will talk about: NeuroElectro and NeuronUnit

Bio sketch:

Rick Gerkin develops frameworks for data-driven validation of scientific models, with a focus on neurophysiology. The central goal of this work is to bring the development and evaluation of models closer to the "unit-testing" workflow of software development, lending clarity and focus to the state and utility of modeling projects. He also contributes to the NeuroElectro project, which curates electrophysiological data about neuron types. In addition, he investigates how olfactory perception, learning, and behavior are encoded in the physiological responses of ensembles of neurons in the mammalian olfactory bulb: he records the activity of these neurons while animals perform olfactory tasks, and applies statistical and modeling techniques to understand how these neurons work together to represent odor information.

Talk abstract:

Rigorously validating a quantitative scientific model requires comparing its predictions against an unbiased selection of experimental observations according to sound statistical criteria. Developing new models thus requires a comprehensive and contemporary understanding of competing models, relevant data, and statistical best practices. Today, developing such an understanding requires an encyclopedic knowledge of the literature. Unfortunately, in rapidly growing fields like neuroscience, this is becoming increasingly untenable, even for the most conscientious scientists. For new scientists, it can be a significant barrier to entry.

Software engineers seeking to verify, validate, and contribute to a complex software project rely not only on volumes of human-readable documentation, but on suites of simple executable tests, called "unit tests". Drawing inspiration from this practice, we have developed SciUnit, an easy-to-use framework for developing "model validation tests": executable functions, here written in Python. These tests generate predictions from a specified class of scientific models and statistically validate them against a relevant empirical observation, producing a score that indicates the agreement between model and data. Suites of such validation tests, collaboratively developed by a scientific community in common repositories, produce up-to-date statistical summaries of the state of the field. Here we detail this test-driven workflow and introduce it to the neuroscience community. As an initial example, we describe NeuronUnit, a library that builds upon SciUnit and integrates with several existing neuroinformatics resources to support validating single-neuron models against data gathered by neurophysiologists.
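
To make this workflow concrete, the sketch below shows the shape of a SciUnit-style validation test, following the capability/model/test/score pattern from SciUnit's documentation. The specific names (ProducesRestingPotential, ConstantCellModel, RestingPotentialTest) and the observation values are illustrative assumptions chosen for this example, not part of the released NeuronUnit API.

    import sciunit
    from sciunit.scores import ZScore

    # Capability: what a model must be able to do in order to take this test.
    class ProducesRestingPotential(sciunit.Capability):
        """Declares that a model can report a resting membrane potential (mV)."""
        def get_resting_potential(self):
            raise NotImplementedError("Models must implement get_resting_potential")

    # A trivial model implementing that capability.
    class ConstantCellModel(sciunit.Model, ProducesRestingPotential):
        def __init__(self, v_rest, name=None):
            self.v_rest = v_rest
            super().__init__(name=name, v_rest=v_rest)

        def get_resting_potential(self):
            return self.v_rest

    # The validation test: turns an observation and a prediction into a score.
    class RestingPotentialTest(sciunit.Test):
        required_capabilities = (ProducesRestingPotential,)
        score_type = ZScore

        def generate_prediction(self, model):
            return model.get_resting_potential()

        def compute_score(self, observation, prediction):
            z = (prediction - observation['mean']) / observation['std']
            return ZScore(z)

    # Hypothetical observation, of the kind that could be curated from NeuroElectro.
    observation = {'mean': -65.0, 'std': 3.0}  # mV
    test = RestingPotentialTest(observation, name="Resting potential")
    model = ConstantCellModel(v_rest=-63.0, name="Constant cell")
    score = test.judge(model)  # runs generate_prediction, then compute_score
    print(score)               # e.g. a Z score of ~0.67

In NeuronUnit, tests of this shape obtain their observations from resources such as NeuroElectro, and collections of tests can be grouped into a sciunit.TestSuite and judged against many models at once, yielding the up-to-date statistical summaries described above.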