Information content in data sets for a nucleated-polymerization model

J Biol Dyn. 2015;9(1):172-97. doi: 10.1080/17513758.2015.1050465. Epub 2015 Jun 5.

Abstract

We illustrate the use of statistical tools (asymptotic theories of standard error quantification using appropriate statistical models, bootstrapping, and model comparison techniques), in addition to sensitivity analysis, that may be employed to determine the information content in data sets. We do this in the context of recent models [S. Prigent, A. Ballesta, F. Charles, N. Lenuzza, P. Gabriel, L.M. Tine, H. Rezaei, and M. Doumic, An efficient kinetic model for assemblies of amyloid fibrils and its application to polyglutamine aggregation, PLoS ONE 7 (2012), e43273. doi:10.1371/journal.pone.0043273] for nucleated polymerization in proteins, about which very little is known regarding the underlying mechanisms; thus, the methodology we develop here may be of great help to experimentalists. We conclude that the investigated data sets will support, with reasonable levels of uncertainty, only the estimation of the parameters related to the early steps of the aggregation process.
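The abstract mentions bootstrapping as one route to quantifying parameter uncertainty. As a minimal sketch of that general idea, not the authors' actual procedure or model, the following Python snippet fits a hypothetical two-parameter aggregation curve (`aggregate_mass`, with rate `k` and plateau `m_max`, both invented for illustration) to synthetic data and uses a residual bootstrap to estimate standard errors; the nucleated-polymerization model of Prigent et al. is not reproduced here.

```python
# Illustrative residual-bootstrap standard errors for a hypothetical
# two-parameter aggregation curve (not the model analysed in the paper).
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

def aggregate_mass(t, k, m_max):
    """Hypothetical saturating aggregation curve: polymerized mass vs. time."""
    return m_max * (1.0 - np.exp(-k * t))

# Synthetic "experimental" data standing in for an aggregation time course.
t = np.linspace(0.0, 48.0, 40)          # hours
y = aggregate_mass(t, 0.15, 1.0) + rng.normal(0.0, 0.03, t.size)

# Ordinary least-squares fit of the two parameters.
p_hat, _ = curve_fit(aggregate_mass, t, y, p0=[0.1, 0.8])
residuals = y - aggregate_mass(t, *p_hat)

# Residual bootstrap: rebuild data sets from resampled residuals and refit.
boot = []
for _ in range(500):
    y_star = aggregate_mass(t, *p_hat) + rng.choice(residuals, size=t.size, replace=True)
    p_star, _ = curve_fit(aggregate_mass, t, y_star, p0=p_hat)
    boot.append(p_star)
boot = np.asarray(boot)

# Bootstrap standard errors; wide intervals signal poorly identified parameters.
se = boot.std(axis=0, ddof=1)
print(f"k     = {p_hat[0]:.4f} +/- {se[0]:.4f}")
print(f"m_max = {p_hat[1]:.4f} +/- {se[1]:.4f}")
```

In the spirit of the paper's comparison of methods, such bootstrap standard errors can be checked against asymptotic ones (e.g. from the covariance matrix returned by `curve_fit`) to gauge which parameters the data actually constrain.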

Keywords: 49Q12; 62P10; 64B10; 65M32; information content; inverse problems; polyglutamine and aggregation modelling; sensitivity; uncertainty quantification.

Publication types

  • Research Support, N.I.H., Extramural
  • Research Support, Non-U.S. Gov't
  • Research Support, U.S. Gov't, Non-P.H.S.

MeSH terms

  • Algorithms
  • Amyloid / chemistry*
  • Kinetics
  • Models, Biological
  • Models, Statistical
  • Peptides / chemistry*
  • Polymerization
  • Sensitivity and Specificity

Substances

  • Amyloid
  • Peptides
  • polyglutamine