Ultra-broad sensing range, high sensitivity textile pressure sensors with heterogeneous fibre architecture and molecular interconnection strategy
Main Authors:
Other Authors:
Format: Article
Language: English
Published: 2024
Subjects:
Online Access: https://hdl.handle.net/10356/180770
Institution: Nanyang Technological University
Summary: Textile-based pressure sensors have garnered extensive attention owing to their potential in health monitoring, wearable human–machine interfaces (HMIs), and soft robotics. High sensitivity over a broad sensing range is highly desired yet challenging for textile pressure sensors because of the incompressibility of matrix materials and the stiffening of microstructures. Herein, we developed a high-performance pressure sensor based on a silk/3-aminopropyltriethoxysilane/titanium carbide (Silk/APTES/MXene) film. The silk is designed with a heterogeneous fibre architecture, which enables rich, multi-level contact patterns and compensates for the effect of structural stiffening. Furthermore, APTES molecules enhance the interfacial bonding between MXene and silk, improving the mechanical stability of the sensor. Benefiting from these advantages, the Silk/APTES/MXene sensor achieves a high sensitivity of 17.1 kPa⁻¹ over an ultra-broad sensing range of up to 3.3 MPa (R² = 0.997), an ultra-low detection limit (0.25 Pa), and negligible fatigue after 5000 loading–unloading cycles. With these merits, we demonstrated its capability for a range of human motion detection tasks (e.g., foot movement and arm/wrist/finger bending), achieving an overall classification accuracy of 95% with a convolutional neural network (CNN). More significantly, we built a complete intelligent human–machine dialogue system and a patient–audience dialogue system, spanning lip/sign-language recognition, real-time screen display, and voice output.
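The reported sensitivity (17.1 kPa⁻¹ with R² = 0.997 across the 3.3 MPa range) corresponds to the slope of a linear fit of the sensor's relative response against applied pressure. Below is a minimal sketch of that calibration step; the pressure and ΔI/I₀ values are invented for illustration, and only the target slope and R² figures come from the record.

```python
# Hypothetical sketch: estimating piezoresistive sensitivity S = d(ΔI/I0)/dP
# from calibration data via linear least squares. The data points below are
# made up for illustration; only the ~17.1 kPa^-1 slope and R^2 = 0.997
# figures come from the abstract.
import numpy as np

# Illustrative calibration data: applied pressure (kPa) and relative
# current change ΔI/I0 (dimensionless), assuming a roughly linear response
# up to the 3.3 MPa (3300 kPa) upper limit reported in the abstract.
pressure_kpa = np.array([0.0, 500.0, 1000.0, 1500.0, 2000.0, 2500.0, 3300.0])
delta_i_over_i0 = np.array([0.0, 8600.0, 17100.0, 25500.0, 34400.0, 42700.0, 56400.0])

# Linear fit ΔI/I0 = S * P + b: the slope S is the sensitivity in kPa^-1.
slope, intercept = np.polyfit(pressure_kpa, delta_i_over_i0, 1)

# Coefficient of determination R^2 to check linearity over the full range.
pred = slope * pressure_kpa + intercept
ss_res = np.sum((delta_i_over_i0 - pred) ** 2)
ss_tot = np.sum((delta_i_over_i0 - delta_i_over_i0.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot

print(f"sensitivity S ≈ {slope:.1f} kPa^-1, R^2 = {r_squared:.3f}")
```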
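The record attributes the 95% motion-classification accuracy to a CNN but does not describe the network itself. The following is a minimal, hypothetical 1-D CNN sketch in PyTorch for classifying windows of pressure-sensor signals into motion categories; the class count, window length, and every layer size are assumptions, not the authors' architecture.

```python
# Hypothetical sketch of a 1-D CNN classifier for pressure-sensor time
# series (e.g., foot movement vs. arm/wrist/finger bending). The abstract
# reports ~95% accuracy with a CNN but gives no architecture details, so
# all hyperparameters here are illustrative assumptions.
import torch
import torch.nn as nn

NUM_CLASSES = 6   # assumed number of motion categories
WINDOW = 256      # assumed number of samples per pressure-signal window

class MotionCNN(nn.Module):
    def __init__(self):
        super().__init__()
        # Two conv/pool stages downsample the window by a factor of 16.
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(4),
        )
        self.classifier = nn.Linear(32 * (WINDOW // 16), NUM_CLASSES)

    def forward(self, x):  # x: (batch, 1, WINDOW)
        z = self.features(x)
        return self.classifier(z.flatten(1))

model = MotionCNN()
dummy = torch.randn(8, 1, WINDOW)  # a batch of 8 fake signal windows
print(model(dummy).shape)          # -> torch.Size([8, 6])
```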