Boosting inter‐ply fracture toughness data on carbon nanotube‐engineered carbon composites for prognostics


Bibliographic Details
Main Author: Joshi, Sunil Chandrakant
Other Authors: School of Mechanical and Aerospace Engineering
Format: Article
Language: English
Published: 2021
Subjects:
Online Access:https://hdl.handle.net/10356/146340
Institution: Nanyang Technological University
Description
Summary: Building predictive analytics for engineering materials requires large datasets for machine learning (ML). Gathering such data can be demanding because of the challenges involved in producing specialty specimens and conducting ample experiments, and detailed numerical simulations also require considerable effort. Smaller datasets are still viable, but they need to be boosted systematically for ML. A newly developed, knowledge-based data boosting (KBDB) process, named COMPOSITES, helps enlarge a dataset logically without further experimentation or detailed simulation. This paper discusses the process and its successful use on a combination of mode-I and mode-II inter-ply fracture toughness (IPFT) data for carbon nanotube (CNT) engineered carbon fiber reinforced polymer (CFRP) composites. The amount of CNT added to strengthen the mid-ply interface of the CFRP versus the improvement in IPFT is studied. A simpler way of combining mode-I and mode-II IPFT values to predict delamination resistance is presented. Every step of the 10-step KBDB process, its significance, and its implementation are explained, and the results are presented. KBDB not only added a number of data points reliably, but also revealed the boundaries and limitations of the augmented dataset. Such an authentically boosted dataset is vital for successful ML.
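The abstract mentions combining mode-I and mode-II IPFT values into a single delamination-resistance measure, but this record does not specify the paper's combination rule. As a hedged illustration only, a common textbook approach is the Benzeggagh-Kenane (B-K) mixed-mode criterion, sketched below; the function name, the exponent value, and the sample toughness values are assumptions for demonstration and are not taken from the paper.

```python
def bk_mixed_mode_toughness(g_ic, g_iic, mode_mixity, eta=2.0):
    """Benzeggagh-Kenane (B-K) mixed-mode critical energy release rate.

    g_ic, g_iic  -- pure mode-I and mode-II fracture toughness (kJ/m^2)
    mode_mixity  -- G_II / (G_I + G_II), 0 = pure mode I, 1 = pure mode II
    eta          -- material-dependent B-K exponent (2.0 is a placeholder)
    """
    if not 0.0 <= mode_mixity <= 1.0:
        raise ValueError("mode mixity must lie in [0, 1]")
    # Interpolates between the two pure-mode toughness values.
    return g_ic + (g_iic - g_ic) * mode_mixity ** eta


# Illustrative (hypothetical) values for a CFRP laminate:
g_c = bk_mixed_mode_toughness(g_ic=0.5, g_iic=1.2, mode_mixity=0.5)
```

At a mode mixity of 0 the expression reduces to the pure mode-I toughness, and at 1 to the pure mode-II toughness, which is a quick sanity check for any such combination rule.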