Compact Model Quality and Benchmarking
Compact modeling is hard: it is easy to make mistakes in the formulation of a model, in its implementation, and in parameter extraction. This talk will discuss how the adoption of Verilog-A, the evolution of benchmark tests (unfortunately, still primarily for MOS transistor modeling), the CMC requirement that standard models include QA (quality assurance) tests and “golden” reference results, and VAMPyRE have all helped elevate the baseline “health” of compact models. Many challenges remain: models continue to be released with numerical issues, and parameter extraction procedures can still yield values that cause unphysical behavior. At present, we rely on non-artificial intelligence (“institutional knowledge”) to try, unsuccessfully, to prevent such problems from recurring. This talk will show how some numerical issues can be (manually) detected and resolved, review benchmarks for AI/ML compact models and parameter extraction, and speculate on avenues for future improvement.
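As a minimal sketch of the kind of numerical issue the talk refers to (a derivative discontinuity, or “kink,” in a model equation), the following toy Python check sweeps a finite-difference derivative across a suspected breakpoint. The current expression here is purely illustrative, not taken from any real compact model or from the talk itself:

```python
def i_d(v):
    # Toy "current" expression with a hidden kink at v = 0.5:
    # the value is continuous there, but the slope jumps from 0 to 1.
    # (Purely illustrative; not a real device model.)
    return 0.0 if v < 0.5 else v - 0.5

def didv(v, h=1e-6):
    # Central finite-difference estimate of the derivative.
    return (i_d(v + h) - i_d(v - h)) / (2 * h)

# Scan across the suspected breakpoint and flag jumps in the derivative,
# which would show up as glitches in simulated conductance.
vs = [0.5 + k * 1e-3 for k in range(-5, 6)]
derivs = [didv(v) for v in vs]
jump = max(abs(derivs[k + 1] - derivs[k]) for k in range(len(derivs) - 1))
print(jump)  # a large jump signals a derivative discontinuity
```

A Verilog-A model with such a kink would typically be repaired by replacing the conditional with a smooth limiting function so that the derivative is continuous everywhere.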