Sample landmark localization results. Circles mark estimates from our algorithm; gold stars mark ground truth.
Discovering Useful Parts for Pose Estimation in Sparsely Annotated Datasets
Our work introduces a novel way to increase pose estimation accuracy by discovering parts from unannotated regions of training images. The discovered parts are used to generate more accurate appearance likelihoods for traditional part-based models like Pictorial Structures [13] and its derivatives. Our experiments on images of a hawkmoth in flight show that the proposed approach significantly improves over existing work [27] for this application, while also being more generally applicable. It localizes landmarks at least twice as accurately as a baseline based on a Mixture of Pictorial Structures (MPS) model. Our unique High-Resolution Moth Flight (HRMF) dataset is made publicly available with annotations.
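As a rough illustration of the kind of part-based scoring a Pictorial Structures model performs (a generic sketch, not the formulation used in the paper), the snippet below combines per-part appearance log-likelihood maps with quadratic deformation costs relative to a root part in a star-shaped model. The part names, the alpha weight, and the brute-force search over root locations are assumptions made for illustration.

# Illustrative star-shaped Pictorial Structures scoring sketch (not the paper's
# exact model): each part contributes an appearance log-likelihood plus a
# quadratic deformation cost relative to the root part.
import numpy as np

def best_configuration(appearance, offsets, alpha=1.0):
    """appearance: dict part -> HxW log-likelihood map (must include "root").
    offsets: dict part -> expected (dy, dx) offset of that part from the root.
    Returns the root location, per-part locations, and the total score."""
    H, W = appearance["root"].shape
    ys, xs = np.mgrid[0:H, 0:W]

    best_total, best_root, best_parts = -np.inf, None, None
    for ry in range(H):
        for rx in range(W):
            total = appearance["root"][ry, rx]
            parts = {}
            for name, off in offsets.items():
                # deformation cost: squared distance from the expected offset
                deform = alpha * ((ys - (ry + off[0])) ** 2 + (xs - (rx + off[1])) ** 2)
                score_map = appearance[name] - deform
                idx = np.unravel_index(np.argmax(score_map), score_map.shape)
                total += score_map[idx]
                parts[name] = idx
            if total > best_total:
                best_total, best_root, best_parts = total, (ry, rx), parts
    return best_root, best_parts, best_total

In practice this maximization is done efficiently with distance transforms; the brute-force loop here is only meant to make the appearance and deformation terms explicit.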
M. Breslav, T. L. Hedrick, S. Sclaroff and M. Betke, "Discovering Useful Parts for Pose Estimation in Sparsely Annotated Datasets," Proceedings of the IEEE Winter Conference on Applications of Computer Vision, Lake Placid, NY, March 2016. [Paper] [Video] [Poster] [Slides]
Top row: input images. Bottom row: automatic 3D pose estimates.
3D Pose Estimation of Bats in the Wild
Vision-based methods have gained popularity as a tool for helping to analyze the behavior of bats. However, for bats in the wild, there are still no tools capable of estimating and subsequently analyzing articulated 3D bat pose. We propose a model-based multi-view articulated 3D bat pose estimation framework for this novel problem. Key challenges include the large search space associated with articulated 3D pose, the ambiguities that arise from 2D projections of 3D bodies, and the low-resolution image data available to us. Our method uses multi-view camera geometry and temporal constraints to reduce the state space of possible articulated 3D bat poses and finds an optimal set using a Markov Random Field based model. Our experiments use real video data of flying bats and gold-standard annotations by a bat biologist. Our results show, for the first time in the literature, articulated 3D pose estimates generated automatically for video sequences of bats flying in the wild. The average differences between estimates produced by our method and those based on gold-standard annotations ranged from 16 to 21 degrees (approximately 17% to 23%) for body orientation and from 14 to 26 degrees (approximately 7% to 14%) for wing joint angles.
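As a loose sketch of how a reduced set of pose hypotheses per frame can be combined with unary and pairwise costs along a temporal chain (the actual cost terms, candidate generation, and graph structure in the paper differ), the code below picks one candidate per frame by minimizing a chain-structured energy with the Viterbi algorithm. The cost functions are placeholders supplied by the caller.

# Minimal sketch: choose one pose hypothesis per frame from a reduced candidate
# set by minimizing unary + pairwise costs along a temporal chain (Viterbi).
# The cost terms below are placeholders, not the paper's actual terms.
import numpy as np

def viterbi_chain(unary, pairwise):
    """unary: list of length T; unary[t] holds costs for the K_t candidates at frame t.
    pairwise: function (t, i, j) -> cost of moving from candidate i at t-1 to j at t.
    Returns the minimum-cost sequence of candidate indices."""
    T = len(unary)
    cost = [np.asarray(unary[0], dtype=float)]
    back = []
    for t in range(1, T):
        K_prev, K_t = len(cost[-1]), len(unary[t])
        trans = np.array([[pairwise(t, i, j) for j in range(K_t)] for i in range(K_prev)])
        total = cost[-1][:, None] + trans            # shape (K_prev, K_t)
        back.append(np.argmin(total, axis=0))        # best predecessor per candidate
        cost.append(np.min(total, axis=0) + np.asarray(unary[t], dtype=float))
    # backtrack from the cheapest final candidate
    path = [int(np.argmin(cost[-1]))]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t - 1][path[-1]]))
    return path[::-1]

For example, unary[t] could hold multi-view reprojection errors for each candidate pose at frame t, and pairwise(t, i, j) a measure of how much the pose changes between consecutive frames.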
M. Breslav, N. Fuller, S. Sclaroff and M. Betke, "3D Pose Estimation of Bats in the Wild," Proceedings of the IEEE Winter Conference on Applications of Computer Vision, Steamboat Springs, CO, March 2014. [Paper] [Video] [Poster] [Slides]
Shown above are three shape-time signals, each representing the shape and trajectory information of one of the three individual bats.
A System for Estimating the Wing Beat Frequency of a Bat given Thermal-Infrared Video
Biologists and aerospace engineers are both interested in how bats fly. Biologists want to understand the group behavior of bats, and aerospace engineers want to know the characteristics of a bat's flight. To further these broad research goals, this work explores how the wing beat frequency of individual bats can be automatically estimated from thermal infrared video data. An estimate of the wing beat frequency of an individual bat can be incorporated into a position tracker to produce more accurate 3D trajectories. These trajectories can then be analyzed so inferences can be made about bat behavior and flight. Additionally, wing beat estimates for bats in the wild may reveal interesting patterns previously unobserved or yet unknown. The focus of this work is on using 2D shapes across time to estimate when a bat is repeating its 3D pose. Repetition of a 3D pose is used to estimate periodicity, which in turn yields the wing beat frequency estimate. Our methods are validated experimentally on 20 bats observed in the wild.
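As a minimal sketch of the underlying idea of estimating periodicity from a shape-time signal, the snippet below recovers a dominant frequency from a sampled 1D signal. The choice of signal (silhouette area per frame) and estimator (Fourier peak) are assumptions for illustration and not necessarily those used in the paper.

# Illustrative sketch: estimate a dominant frequency from a 1D shape-time
# signal (e.g., the bat's 2D silhouette area per frame) via the FFT.
import numpy as np

def dominant_frequency(signal, fps):
    """signal: 1D array sampled once per frame; fps: camera frame rate (Hz)."""
    x = np.asarray(signal, dtype=float)
    x = x - x.mean()                           # remove the DC component
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    return freqs[np.argmax(spectrum[1:]) + 1]  # skip the zero-frequency bin

# Example: a noisy 8 Hz oscillation sampled at an assumed 125 Hz camera rate.
t = np.arange(0, 2, 1.0 / 125)
area = 100 + 10 * np.sin(2 * np.pi * 8 * t) + np.random.randn(len(t))
print(dominant_frequency(area, fps=125))       # approximately 8.0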
M. Breslav, N. W. Fuller and M. Betke, "Vision System for Wing Beat Analysis of Bats in the Wild," Proceedings of the Workshop on Visual Observation and Analysis of Animal and Insect Behavior (VAIB), held in conjunction with the International Conference on Pattern Recognition (ICPR), Tsukuba, Japan, November 2012. [Paper] [Slides]
Shown above is a segmented contour from an endobronchial ultrasound image, which is then reconstructed in 3D.
3D Segmentation and Reconstruction of Endobronchial Ultrasound
To make Endobronchial Ultrasound (EBUS) useful, it needs to be fused with other modalities such as MDCT data. This paper describes automatic methods for 3D segmentation of a region of interest (ROI) in EBUS data, as well as 3D reconstruction of the ROI, which allows measurements to be taken.
X. Zang, M. Breslav and W. E. Higgins, "3D Segmentation and Reconstruction of Endobronchial Ultrasound," SPIE Medical Imaging 2013: Ultrasonic Imaging, Tomography, and Therapy, Johan G. Bosch and Marvin M. Doyley, eds., Orlando, FL, vol. 8675, 9-14 February 2013. [Paper]
Shown above is a sample 3D reconstruction of an airway-tree phantom imaged with EBUS.
3D Reconstruction of 2D Endobronchial Ultrasound
One key challenge doctors face during a biopsy procedure is how to accurately navigate inside the body to reach the biopsy site. Furthermore, when a biopsy site is located outside the wall that lines the human airway tree, there is no way to see it with a bronchoscope. The doctor is forced to guess where to puncture the wall and how to proceed in order to reach the biopsy site. A mistake can lead to serious consequences for the patient. To address this problem, our lab purchased an endobronchial ultrasound (EBUS) probe, which can "see" through surfaces by emitting ultrasound signals and measuring the characteristics of the returning echoes. The probe generates 2D images as it continuously scans a portion of the body. The bulk of the work in my Master's thesis addresses how to reconstruct a 3D volume from these 2D EBUS images. Several 3D reconstruction algorithms are tested experimentally, and novel experiments were designed to quantify the success of the reconstructed volumes. Details are available in the thesis cited below.
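As a generic sketch of one simple freehand-ultrasound reconstruction strategy, pixel nearest-neighbor insertion, the code below maps each tracked 2D frame into a 3D voxel grid using its known pose. The thesis evaluates several reconstruction algorithms; this is only an illustration of the general idea, and the pose convention, spacing parameters, and overwrite policy are assumptions.

# Generic sketch of pixel nearest-neighbor insertion: each tracked 2D image is
# placed into a 3D voxel grid using its known pose. Not the thesis's specific
# method, just one common baseline for freehand ultrasound reconstruction.
import numpy as np

def insert_slice(volume, image, pose, spacing, voxel_size):
    """volume: 3D array of accumulated intensities.
    image: 2D ultrasound frame; pose: 4x4 transform from the image plane to world space.
    spacing: mm per pixel; voxel_size: mm per voxel."""
    H, W = image.shape
    v, u = np.mgrid[0:H, 0:W]
    # homogeneous pixel coordinates on the image plane (z = 0)
    pts = np.stack([u * spacing, v * spacing, np.zeros_like(u), np.ones_like(u)], axis=-1)
    world = pts.reshape(-1, 4).astype(float) @ pose.T        # map pixels into world space
    idx = np.round(world[:, :3] / voxel_size).astype(int)    # nearest voxel per pixel
    inside = np.all((idx >= 0) & (idx < np.array(volume.shape)), axis=1)
    volume[idx[inside, 0], idx[inside, 1], idx[inside, 2]] = image.reshape(-1)[inside]
    return volume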
M. Breslav, "3D Reconstruction of 2D Endobronchial Ultrasound," Master's Thesis, Department of Electrical Engineering, The Pennsylvania State University, 2010. [Thesis]