Videos of animal behavior are used to quantify researcher-defined behaviors of interest to study neural function, gene mutations, and pharmacological therapies. Behaviors of interest are often scored manually, which is time-consuming, limited to few behaviors, and variable across researchers. We created DeepEthogram: software that uses supervised machine learning to convert raw video pixels into an ethogram, the behaviors of interest present in each video frame. DeepEthogram is designed to be general-purpose and applicable across species, behaviors, and video-recording hardware. It uses convolutional neural networks to compute motion, extract features from motion and images, and classify features into behaviors. Behaviors are classified with above 90% accuracy on single frames in videos of mice and flies, matching expert-level human performance. DeepEthogram accurately predicts rare behaviors, requires little training data, and generalizes across subjects. A graphical interface allows beginning-to-end analysis without end-user programming. DeepEthogram's rapid, automatic, and reproducible labeling of researcher-defined behaviors of interest may accelerate and enhance supervised behavior analysis.

The analysis of animal behavior is a common approach in a wide range of biomedical research fields, including basic neuroscience research (Krakauer et al., 2017), translational analysis of disease models, and development of therapeutics. For example, researchers study behavioral patterns of animals to investigate the effect of a gene mutation, understand the efficacy of potential pharmacological therapies, or uncover the neural underpinnings of behavior. In some cases, behavioral tests allow quantification of behavior through tracking an animal's location in space, such as in the three-chamber assay, open-field arena, Morris water maze, and elevated plus maze (EPM) (Pennington, 2019). Increasingly, researchers are finding that important details of behavior involve subtle actions that are hard to quantify, such as changes in the prevalence of grooming in models of anxiety (Peça et al., 2011), licking a limb in models of pain (Browne, 2017), and manipulation of food objects for fine sensorimotor control (Neubarth, 2020; Sauerbrei et al., 2020). To quantify these observations, the most commonly used approach, to our knowledge, is for researchers to manually watch videos with a stopwatch to count the time each behavior of interest is exhibited (Figure 1A). In these cases, researchers often closely observe videos of animals and then develop a list of behaviors they want to measure. This approach takes immense amounts of researcher time, often equal to or greater than the duration of the video per individual subject. Also, because this approach requires manual viewing, often only one or a small number of behaviors are studied at a time. Furthermore, scoring of behaviors can vary greatly between researchers, especially as new researchers are trained (Segalin, 2020), and can be subject to bias. In addition, researchers often do not label the video frames when specific behaviors occur, precluding subsequent analysis and review of behavior bouts, such as bout durations and the transition probability between behaviors.
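Once behaviors are labeled on every frame, downstream statistics like bout durations and the transition probabilities between behaviors follow directly from the per-frame label sequence. The sketch below illustrates this idea only; the function name and interface are our own assumptions, not part of DeepEthogram's API.

```python
import numpy as np

def bouts_and_transitions(frame_labels, fps=30.0):
    """Illustrative sketch (not DeepEthogram's API): given one behavior label
    per video frame, return bout durations in seconds for each behavior and a
    matrix of transition probabilities between consecutive bouts."""
    labels = np.asarray(frame_labels)
    # A bout is a maximal run of identical labels; find where runs change.
    change = np.flatnonzero(labels[1:] != labels[:-1]) + 1
    starts = np.concatenate(([0], change))
    ends = np.concatenate((change, [len(labels)]))
    bout_labels = labels[starts]
    # Collect each bout's duration (frames / fps) under its behavior label.
    durations = {}
    for lab, s, e in zip(bout_labels, starts, ends):
        durations.setdefault(lab, []).append((e - s) / fps)
    # Count transitions between consecutive bouts, then normalize rows.
    behaviors = sorted(set(bout_labels.tolist()))
    idx = {b: i for i, b in enumerate(behaviors)}
    counts = np.zeros((len(behaviors), len(behaviors)))
    for a, b in zip(bout_labels[:-1], bout_labels[1:]):
        counts[idx[a], idx[b]] += 1
    row = counts.sum(axis=1, keepdims=True)
    probs = np.divide(counts, row, out=np.zeros_like(counts), where=row > 0)
    return durations, behaviors, probs
```

For example, the label sequence `["groom", "groom", "groom", "walk", "walk", "groom"]` contains two grooming bouts and one walking bout, with grooming always followed by walking and vice versa.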