Physics-informed learning is of growing importance for scientific and engineering problems. "Physics-informed" simply means that the learning process is constrained by physical or engineering principles; for instance, conservation of mass, momentum, or energy can be imposed during training. In the parlance of machine learning, such imposed constraints are called regularizers, so physics-informed learning amounts to adding regularization that imposes or enforces physical priors.

There are four major stages in machine learning: 1) determining a high-level task or objective, 2) collecting and curating the training data, 3) identifying the model architecture and parameterization, and 4) choosing an optimization strategy to determine the parameters of the model from the data. Known physics (e.g., invariances, symmetries, conservation laws, constraints) may be incorporated at each of these stages. For example, rotational invariance is often incorporated by augmenting the training data with rotated copies, and translational invariance is often captured using convolutional neural network architectures. In kernel-based techniques, such as Gaussian process regression and support vector machines, symmetries can be imposed by means of rotation-invariant, translation-invariant, and symmetric covariance kernels. Additional physics and prior knowledge may be incorporated as extra loss functions or constraints in the optimization problem. The sketches below illustrate three of these mechanisms.
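As a concrete illustration of incorporating invariance through the training data, the following minimal sketch augments an image batch with rotated copies. The array shapes and the choice of 90-degree rotation steps are assumptions for illustration, not details from the text.

```python
import numpy as np

def augment_with_rotations(images, n_angles=4):
    """Append rotated copies of each image (here 90-degree steps) so a
    downstream model sees rotated versions of every training example."""
    rotated = [np.rot90(images, k=k, axes=(1, 2)) for k in range(n_angles)]
    return np.concatenate(rotated, axis=0)

batch = np.random.rand(8, 28, 28)          # hypothetical batch of 28x28 images
augmented = augment_with_rotations(batch)  # shape (32, 28, 28)
```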
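For the kernel-based setting, one standard example of a rotation- and translation-invariant covariance is the squared-exponential (RBF) kernel, which depends on the inputs only through the distance ||x - x'||. The length scale and the sanity check below are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(X, Y, length_scale=1.0):
    """k(x, x') = exp(-||x - x'||^2 / (2 l^2)). Depending only on the
    pairwise distance makes the kernel translation- and rotation-invariant."""
    sq_dists = (
        np.sum(X**2, axis=1)[:, None]
        + np.sum(Y**2, axis=1)[None, :]
        - 2.0 * X @ Y.T
    )
    return np.exp(-np.maximum(sq_dists, 0.0) / (2.0 * length_scale**2))

X = np.random.randn(5, 3)
Q, _ = np.linalg.qr(np.random.randn(3, 3))  # random orthogonal transform
# Rotating all inputs by Q leaves the covariance matrix unchanged.
assert np.allclose(rbf_kernel(X, X), rbf_kernel(X @ Q.T, X @ Q.T))
```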
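Finally, a minimal sketch of physics entering as an extra loss term, assuming PyTorch and a simple decay law du/dt = -k u chosen purely for illustration: the model is fit to observations while a residual penalty enforces the known dynamics at collocation points.

```python
import torch

torch.manual_seed(0)
k = 1.0  # assumed decay rate for the illustrative ODE du/dt = -k u

# Small network surrogate for u(t).
model = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)

t_data = torch.rand(64, 1)                       # synthetic observation times
u_data = torch.exp(-k * t_data)                  # exact solution used as "data"
t_phys = torch.rand(256, 1, requires_grad=True)  # collocation points

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(2000):
    opt.zero_grad()
    loss_data = torch.mean((model(t_data) - u_data) ** 2)  # fit the data
    u = model(t_phys)
    du_dt = torch.autograd.grad(u.sum(), t_phys, create_graph=True)[0]
    loss_phys = torch.mean((du_dt + k * u) ** 2)  # physics residual penalty
    (loss_data + 0.1 * loss_phys).backward()      # regularized objective
    opt.step()
```

The relative weight on the physics term (0.1 here) is a tunable hyperparameter that trades off fidelity to the data against fidelity to the imposed physical law.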
Our mission is to leverage data and AI to automatically discover the underlying mechanisms and mathematical representations that best explain observations across a variety of evolving natural systems. We aim to use these mathematical formulations responsibly to gain insight, make predictions, and discover governing equations. We also aim to improve machine learning methods for time-series data and to understand their limits.