Abstract
The greatest challenge in creating digital material twins from μCT images is the lack of a robust and versatile tool for segmenting the μCT images and post-processing the segmented volumes into a finite-element (FE) mesh. Here, we have used a deep convolutional neural network (DCNN) to segment μCT images of a multi-layer plain-woven fabric. First, a set of raw 2D image slices extracted from the gray-scale volume of a single-layer fabric was used to train the DCNN against manually annotated images. The trained DCNN was then tested on “unseen” manually segmented images, achieving more than 96% global accuracy. Moreover, the trained DCNN was used to segment unseen images from a multi-layer stack of the fabric with good accuracy. A novel procedure based on the “watershed segmentation” technique was also successfully developed to separate individual yarns from connected yarn cross-sections during post-processing of the segmented volumes. The work presented here provides a robust and efficient framework for segmenting μCT images of woven fabrics to generate their digital material twins and FE meshes.
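The abstract describes a two-stage pipeline: a DCNN first produces a semantic segmentation of the μCT slices, and a watershed-based post-processing step then splits touching yarn cross-sections into individual yarns. The snippet below is a minimal, generic sketch of that second idea using scipy and scikit-image; it is not the paper's actual procedure, and the function name `separate_yarns` and the `min_distance` parameter are illustrative assumptions.

```python
# Generic marker-based watershed sketch for splitting touching yarn
# cross-sections in a binary segmentation mask (illustrative only,
# not the authors' implementation).
import numpy as np
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def separate_yarns(binary_mask: np.ndarray, min_distance: int = 15) -> np.ndarray:
    """Return a labeled image with one integer label per yarn cross-section.

    `min_distance` (assumed value) sets the minimum pixel spacing between
    watershed markers; it should roughly match half a yarn width.
    """
    # Distance to the background: yarn centres become local maxima.
    distance = ndi.distance_transform_edt(binary_mask)

    # Place one marker per presumed yarn centre.
    peak_coords = peak_local_max(distance, min_distance=min_distance,
                                 labels=binary_mask)
    markers = np.zeros(binary_mask.shape, dtype=int)
    markers[tuple(peak_coords.T)] = np.arange(1, len(peak_coords) + 1)

    # Flood the inverted distance map; each basin becomes one yarn label.
    return watershed(-distance, markers, mask=binary_mask)
```

Applied slice by slice (or extended to 3D distance maps), this kind of marker-based watershed separates connected yarn regions that a purely semantic segmentation leaves merged.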
| Original language | British English |
| --- | --- |
| Article number | 109091 |
| Journal | Composites Science and Technology |
| Volume | 217 |
| DOIs | |
| State | Published - 5 Jan 2022 |
Keywords
- CT analysis
- Deep learning
- Fabrics/textiles
- Microstructures
- Process modeling