The ability to accurately recognize elementary surgical gestures is a stepping stone to automated surgical assessment and surgical training. However, as the pool of subjects grows, variation in surgical technique and unanticipated motion make it increasingly difficult to build robust statistical models of gestures. This paper examines the applicability of advanced modeling techniques from automated speech recognition to the problem of increasing variability in surgical motion. In particular, we demonstrate the effectiveness of automatically bootstrapped user-adaptive models on diverse data acquired from the da Vinci surgical robot.
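As a hedged illustration only (the abstract does not name the adaptation method), a standard user-adaptation technique borrowed from speech recognition is maximum a posteriori (MAP) adaptation, which shrinks a user-specific estimate toward a pooled, user-independent model. The sketch below shows MAP adaptation of a single Gaussian mean; the function name, the prior-strength parameter `tau`, and the example values are illustrative assumptions, not the paper's actual models.

```python
# Hedged sketch: MAP adaptation of a Gaussian mean, a common
# speaker-adaptation technique in ASR. The abstract does not specify
# the exact adaptation scheme used for the surgical-gesture models;
# this is an illustrative assumption.
def map_adapt_mean(prior_mean, user_samples, tau=10.0):
    """Blend the pooled (user-independent) prior mean with the new
    user's sample mean; tau controls how strongly the prior is trusted."""
    n = len(user_samples)
    sample_sum = sum(user_samples)
    return (tau * prior_mean + sample_sum) / (tau + n)

# A pooled model trained on many subjects estimates a gesture feature
# mean of 0.0; a new subject's few observations center near 1.0.
pooled_mean = 0.0
new_user_samples = [0.9, 1.1, 1.0, 0.8, 1.2]
adapted = map_adapt_mean(pooled_mean, new_user_samples, tau=10.0)
# With few samples the adapted mean stays close to the prior; as more
# user data accumulates, it moves toward the user's own sample mean.
```

This captures the general idea behind bootstrapped user-adaptive models: a small amount of per-user data refines a model trained on a diverse pool of subjects, rather than training each user's model from scratch.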