Feature Selection for Learning to Predict Outcomes of Compute Cluster Jobs with Application to Decision Support

Proc Int Conf Comput Sci Comput Intell. 2020 Dec;2020:1231-1236. doi: 10.1109/CSCI51800.2020.00230.

Abstract

We present a machine learning framework and a new test bed for data mining from the Slurm Workload Manager for high-performance computing (HPC) clusters. The focus is on finding a method of selecting features to support decisions: helping users decide whether to resubmit failed jobs with boosted CPU and memory allocations or to migrate them to a computing cloud. This task is cast as both supervised classification and regression learning and, more broadly, as a sequential problem-solving task suitable for reinforcement learning. Selecting relevant features can improve training accuracy, reduce training time, and yield a more comprehensible model, with an intelligent system that can explain its predictions and inferences. We present a supervised learning model trained on a Simple Linux Utility for Resource Management (Slurm) data set of HPC jobs, using three different techniques for feature selection: linear regression, lasso, and ridge regression. Because our data set represents both HPC jobs that failed and jobs that succeeded, the resulting model is reliable, less likely to overfit, and generalizable. Our model achieved an R² of 95% with 99% accuracy. We identified five predictors each for the CPU and memory properties.

Keywords: HPC; feature analysis; predictive analytics; user modeling.
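
As a rough illustration of the feature-selection step described in the abstract, the sketch below uses scikit-learn's Lasso (with Ridge and plain LinearRegression as drop-in alternatives) to rank Slurm-style job attributes by coefficient magnitude. The column names (e.g., req_cpus, req_mem_gb) and the synthetic data are assumptions made purely for illustration; they are not the paper's actual feature set or results.

```python
# Hypothetical sketch: lasso-based feature selection on Slurm-style job records.
# Column names and data below are illustrative assumptions, not the paper's data set.
import numpy as np
import pandas as pd
from sklearn.linear_model import Lasso, Ridge, LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 500
jobs = pd.DataFrame({
    "req_cpus":     rng.integers(1, 64, n),    # requested CPU cores
    "req_mem_gb":   rng.integers(1, 256, n),   # requested memory (GB)
    "time_limit_h": rng.uniform(0.5, 48, n),   # wall-clock limit (hours)
    "num_nodes":    rng.integers(1, 8, n),     # nodes requested
    "priority":     rng.uniform(0, 1, n),      # scheduler priority score
})
# Illustrative regression target: peak memory actually used by the job (GB).
y = 0.6 * jobs["req_mem_gb"] + 2.0 * jobs["num_nodes"] + rng.normal(0, 5, n)

# Fit a lasso regressor on standardized features; predictors whose coefficients
# survive the L1 penalty are treated as the selected features. Ridge or
# LinearRegression can be swapped in to compare coefficient rankings, in the
# spirit of the paper's three feature-selection techniques.
model = make_pipeline(StandardScaler(), Lasso(alpha=0.1))
model.fit(jobs, y)

coefs = pd.Series(model.named_steps["lasso"].coef_, index=jobs.columns)
selected = coefs[coefs.abs() > 1e-6].abs().sort_values(ascending=False)
print("Selected predictors (ranked by |coefficient|):")
print(selected)
```

In this kind of setup, the L1 penalty drives uninformative coefficients to exactly zero, so the surviving features form the candidate predictor set; repeating the fit with Ridge or ordinary linear regression gives a cross-check on which attributes consistently carry weight.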