Probabilistic Guaranteed Gradient Learning-Based Spark Advance Self-Optimizing Control for Spark-Ignited Engines

IEEE Trans Neural Netw Learn Syst. 2018 Oct;29(10):4683-4693. doi: 10.1109/TNNLS.2017.2767293. Epub 2017 Dec 4.

Abstract

In spark-ignited (SI) engines, the spark advance (SA) controls the combustion phase, which has a significant impact on efficiency. Because combustion is stochastic, online self-optimizing control (SOC) of SA to maximize the indicated fuel conversion efficiency (IFCE) forms a stochastic optimization problem over a static map. Gradient-based optimization algorithms using periodic dithers are effective for such problems; however, decision sequences corrupted by periodic dithers are undesirable in online SA SOC. To obtain a proper decision sequence for this problem, a gradient-descent-based, dither-free SOC scheme iteratively updates the decision using probabilistic guaranteed gradient learning (PGGL). The PGGL approach approximates the gradient from the statistical distribution of past samples, adaptively adjusting the sample size to achieve a probabilistic accuracy target. The proposed scheme thus both guarantees the accuracy of gradient learning and adapts the sample size during learning, achieving a tradeoff between a rapid response and a stable decision sequence. Moreover, the convergence of the resulting decision sequence is analyzed with respect to the probability distribution. Finally, experimental validation on an SI engine test bench shows that the proposed PGGL-based SOC scheme keeps engine operation near the optimal IFCE with a fast response and stable SA behavior, under both steady and mildly transient conditions.
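The abstract's core idea — estimate the efficiency gradient from the scatter of past samples, and keep sampling until the gradient's sign is statistically reliable before updating the decision — can be sketched in a few lines. The sketch below is an illustrative toy, not the paper's PGGL algorithm: the efficiency map `ifce`, the optimum `SA_OPT`, the noise levels, the minimum sample count, and the confidence threshold `z` are all assumed values, and the least-squares slope with a z-sigma sign test merely stands in for the paper's probabilistic guarantee.

```python
import math
import random

random.seed(0)

SA_OPT = 25.0  # hypothetical MBT spark advance (deg BTDC); assumed for the toy map


def ifce(sa):
    """Toy efficiency map: concave in SA with cycle-to-cycle combustion noise.
    Stands in for the measured indicated fuel conversion efficiency."""
    return 0.38 - 2e-4 * (sa - SA_OPT) ** 2 + random.gauss(0.0, 2e-3)


def slope_with_guarantee(samples, z=1.645, n_min=8):
    """Least-squares slope of IFCE vs. SA over collected (sa, ifce) samples,
    plus a flag that becomes True once zero lies outside the z-sigma
    confidence interval of the slope (a probabilistic sign guarantee)."""
    n = len(samples)
    if n < n_min:
        return 0.0, False
    mx = sum(s for s, _ in samples) / n
    my = sum(e for _, e in samples) / n
    sxx = sum((s - mx) ** 2 for s, _ in samples)
    if sxx < 1e-9:
        return 0.0, False
    slope = sum((s - mx) * (e - my) for s, e in samples) / sxx
    rss = sum((e - my - slope * (s - mx)) ** 2 for s, e in samples)
    se = math.sqrt(rss / (n - 2) / sxx)  # standard error of the slope estimate
    return slope, abs(slope) > z * se


def self_optimize(sa0=15.0, step=1.0, n_max=200, iters=14):
    """Dither-free loop: at each decision point, accumulate samples (using
    only a small natural spread in the commanded SA, no periodic dither)
    until the gradient sign is statistically reliable or the sample budget
    runs out, then take a sign-based gradient-ascent step on SA."""
    sa = sa0
    for _ in range(iters):
        samples = []
        while True:
            s = sa + random.gauss(0.0, 0.5)  # natural actuation spread
            samples.append((s, ifce(s)))
            slope, ok = slope_with_guarantee(samples)
            if ok or len(samples) >= n_max:
                break
        sa += step * (1.0 if slope > 0 else -1.0)  # decision update
    return sa


final_sa = self_optimize()
```

Note how the sample size adapts on its own: far from the optimum the slope is large, so few samples suffice to certify its sign and the SA responds quickly; near the optimum the slope shrinks, so the loop spends its full budget and the decision sequence settles, which mirrors the rapid-response/stable-sequence tradeoff described above.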

Publication types

  • Research Support, Non-U.S. Gov't