So far, the engineering community has been driven mostly by a physics-based modeling approach. This approach involves careful observation of a physical phenomenon of interest, development of a partial understanding of it, expression of that understanding in the form of mathematical equations, and ultimately the solution of these equations (a minimal sketch of this workflow follows the two lists below). Because of this partial understanding and the numerous assumptions made along the way from observation to solution, a large portion of the important governing physics gets ignored. Even the applicability of high-fidelity simulators with minimal assumptions has so far been limited to the offline design phase. Despite these drawbacks, several qualities make these models attractive:
- sound foundations from first principles
- interpretability
- generalizability
- existence of robust theories for the analysis of stability and uncertainty
However, most of these models
- are computationally expensive
- do not adapt to new scenarios automatically
- can be susceptible to numerical instabilities.
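As a concrete, hedged illustration of the workflow described above, the sketch below models an assumed damped mass-spring system: the partial understanding (Newton's second law with linear damping) is expressed as an ordinary differential equation and solved numerically. The parameter values and initial conditions are invented for illustration, not taken from the text.

```python
# Minimal sketch of the physics-based workflow: observe a phenomenon
# (a mass on a damped spring), express the partial understanding as an
# equation of motion, and solve it numerically.
import numpy as np
from scipy.integrate import solve_ivp

m, c, k = 1.0, 0.4, 2.0  # mass, damping, stiffness (assumed values)

def rhs(t, y):
    """m*x'' + c*x' + k*x = 0, rewritten as a first-order system."""
    x, v = y
    return [v, -(c * v + k * x) / m]

# Solve from an assumed initial displacement of 1.0 and zero velocity.
sol = solve_ivp(rhs, t_span=(0.0, 20.0), y0=[1.0, 0.0], dense_output=True)
t = np.linspace(0.0, 20.0, 200)
x = sol.sol(t)[0]  # displacement trajectory predicted by the model
```

Note that every term in the equation is interpretable, and the same equations generalize to any initial condition; however, any physics not written into the model (nonlinear friction, say) is simply absent from the prediction.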
Data-driven modeling: With the abundance of big data, cutting-edge and easy-to-use open-source machine learning libraries, cheap computational infrastructure, and high-quality, readily available training resources, data-driven modeling has become very popular. In contrast to the physics-based approach, these models rest on the assumption that data is a manifestation of both known and unknown physics, and hence that, when trained with an ample amount of data, they will learn the full physics on their own (a minimal sketch follows the two lists below). This approach, in particular deep learning, has started to achieve human-level performance in several tasks that were until recently considered impossible for computers. Some of the advantages of these models are
- online learning capability
- computational efficiency for inference
- accuracy even for very challenging problems, as long as the training, validation, and test data are prepared properly.
However, due to their
- data-hungry and black-box nature
- poor generalizability
- inherent bias
- lack of a robust theory for the analysis of model stability,
their acceptability in high-stakes applications like digital twins and autonomous systems will remain fairly limited.
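For contrast with the physics-based sketch above, here is a minimal illustration of the data-driven route, under the assumption that a set of (t, x) measurements of some damped oscillation is available: no governing equations are supplied, and the model must infer the physics from the data alone. The synthetic "measurements" and network size are illustrative assumptions.

```python
# Minimal sketch of the data-driven workflow: no governing equations,
# just (input, output) pairs fitted by a small neural network.
import numpy as np
from sklearn.neural_network import MLPRegressor

# Synthetic damped-oscillation "measurements", standing in for sensor data.
t = np.linspace(0.0, 20.0, 200)
x = np.exp(-0.2 * t) * np.cos(1.4 * t)

model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000, random_state=0)
model.fit(t.reshape(-1, 1), x)  # the model must infer the physics from data alone

x_pred = model.predict(t.reshape(-1, 1))  # accurate in-distribution, but
# extrapolation beyond t = 20 is unreliable: the poor generalizability noted above.
```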
It can be concluded that neither modeling approach is an ideal candidate for use in safety-critical applications. At a bare minimum, one desires four characteristics in any modeling approach:
- computational efficiency and accuracy
- trustworthiness
- generalizability
- self-evolution
Hybrid Analysis and Modelling / Hybrid Analytics / Hybrid Modelling
HAM / HM / HA is a new paradigm in modeling that combines the interpretability, robust foundations, and physical understanding of the physics-based approach with the accuracy, computational efficiency, and automatic pattern-identification capabilities of advanced data-driven machine learning and artificial intelligence algorithms, producing results with reduced uncertainty.
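One common way to realize this combination (a minimal sketch under assumed parameter values, not the only form of HAM) is residual modeling: a simplified, interpretable physics-based core predicts what the known physics can explain, and a data-driven component is trained only on the discrepancy between that prediction and measurements. The `physics_model` helper, the synthetic measurements, and all numbers below are illustrative assumptions.

```python
# Hedged sketch of hybrid analysis and modeling: keep an interpretable
# physics-based core and learn only the residual (the physics the
# simplified model misses) from data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

t = np.linspace(0.0, 20.0, 400)

def physics_model(t):
    """Simplified first-principles core: undamped oscillation only."""
    return np.cos(1.4 * t)

# Synthetic "measurements" containing physics the simple core ignores
# (damping), standing in for real observations.
measured = np.exp(-0.2 * t) * np.cos(1.4 * t)

# Data-driven component learns only the discrepancy, not the full physics.
residual = measured - physics_model(t)
corrector = RandomForestRegressor(n_estimators=200, random_state=0)
corrector.fit(t.reshape(-1, 1), residual)

# Hybrid prediction: interpretable core + learned correction.
hybrid = physics_model(t) + corrector.predict(t.reshape(-1, 1))
```

Because the data-driven component only corrects a physically sound core rather than replacing it, the hybrid retains the interpretability and generalizability of the physics-based part, while the learned correction supplies the accuracy that the simplified equations alone cannot.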