In this work, a novel multi-modal device was created that allows data to be collected simultaneously from three noninvasive sensor modalities. Force myography (FMG), surface electromyography (sEMG), and inertial measurement unit (IMU) sensors were integrated into a wearable armband and used to collect signal data while subjects performed gestures important for the activities of daily living (ADL). An established machine learning algorithm was used to decode the signals and predict the gesture being held (i.e., the user's intent), which could be used to control a prosthetic device. Using all three modalities yielded the most accurate and consistent classification results, providing statistically significant improvements over most other modality combinations. Clinical relevance: The use of three sensing modalities can improve gesture-based control of upper-limb prosthetics.
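The fusion approach described above can be sketched as concatenating per-window feature vectors from the three modalities and training a single classifier on the combined representation. The abstract does not name the specific algorithm, channel counts, or feature set, so the sketch below uses assumed values (8 FMG channels, 8 sEMG channels, 6 IMU channels, linear discriminant analysis as the classifier) on synthetic data purely for illustration:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_per_class, n_classes = 60, 4  # assumed: 4 gestures, 60 windows each

# Synthetic per-window features for each modality.
# Channel counts (8 FMG, 8 sEMG, 6 IMU) are illustrative assumptions.
y = np.repeat(np.arange(n_classes), n_per_class)
parts = []
for n_channels in (8, 8, 6):  # FMG, sEMG, IMU
    class_means = rng.normal(0.0, 2.0, size=(n_classes, n_channels))
    parts.append(class_means[y] + rng.normal(0.0, 1.0, size=(len(y), n_channels)))

# Early (feature-level) fusion: concatenate modality features per window.
X = np.hstack(parts)  # shape: (240, 22)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0
)
clf = LinearDiscriminantAnalysis().fit(X_tr, y_tr)
accuracy = clf.score(X_te, y_te)
print(f"fused-modality accuracy: {accuracy:.2f}")
```

Dropping one or two of the three feature blocks before fusion gives a simple way to reproduce the modality-combination comparison the abstract reports.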