Machine learning-based multi-modal information perception for soft robotic hands

This paper focuses on multi-modal Information Perception (IP) for Soft Robotic Hands (SRHs) using Machine Learning (ML) algorithms. A flexible Optical Fiber-based Curvature Sensor (OFCS) is fabricated, consisting of a Light-Emitting Diode (LED), photosensitive detector, and optical fiber. Bending the roughened optical fiber lowers the transmitted light intensity, which reflects the curvature of the soft finger. Together with the curvature and pressure information, multi-modal IP is performed to improve the recognition accuracy. Recognition of gesture, object shape, size, and weight is implemented with multiple ML approaches, including the Supervised Learning Algorithms (SLAs) of K-Nearest Neighbor (KNN), Support Vector Machine (SVM), and Logistic Regression (LR), and the unSupervised Learning Algorithm (un-SLA) of K-Means Clustering (KMC). Moreover, Optical Sensor Information (OSI), Pressure Sensor Information (PSI), and Double-Sensor Information (DSI) are adopted to compare the recognition accuracies. The experimental results demonstrate that the proposed sensors and recognition approaches are feasible and effective. The recognition accuracies obtained using the above ML algorithms and three modes of sensor information are higher than 85 percent for almost all combinations. Moreover, DSI is more accurate than single-modal sensor information, and the KNN algorithm with DSI outperforms the other combinations in recognition accuracy.
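
The abstract reports that combining both sensor modalities (DSI) with a KNN classifier gives the best recognition accuracy. The sketch below is not the authors' code; it only illustrates, with scikit-learn and synthetic data, how such a comparison of OSI, PSI, and DSI across KNN, SVM, and LR could be set up. The feature dimensions, class count, and all data values are assumptions made for the example.

```python
# Minimal sketch (assumed setup, not from the paper): compare classifiers on
# optical-only (OSI), pressure-only (PSI), and combined (DSI) feature sets.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Hypothetical dataset: one curvature reading per finger (OSI) and one
# fingertip pressure reading per finger (PSI) for each grasp, 4 object classes.
n_samples, n_classes = 400, 4
osi = rng.normal(size=(n_samples, 5))            # optical-fiber curvature features
psi = rng.normal(size=(n_samples, 5))            # pressure features
labels = rng.integers(0, n_classes, n_samples)   # object-shape labels

feature_sets = {"OSI": osi, "PSI": psi, "DSI": np.hstack([osi, psi])}
classifiers = {
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "SVM": SVC(kernel="rbf"),
    "LR": LogisticRegression(max_iter=1000),
}

for feat_name, X in feature_sets.items():
    X_train, X_test, y_train, y_test = train_test_split(
        X, labels, test_size=0.3, random_state=0)
    for clf_name, clf in classifiers.items():
        clf.fit(X_train, y_train)
        acc = accuracy_score(y_test, clf.predict(X_test))
        print(f"{feat_name} + {clf_name}: accuracy = {acc:.2f}")
```

With real curvature and pressure recordings in place of the synthetic arrays, this loop reproduces the kind of modality-versus-classifier accuracy table the abstract describes.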

Detailed description

Main author: Huang, H.
Co-author: Lin, J.
Format: BB
Language: en_US
Published: IEEE Xplore, 2020
Subjects: multi-modal sensors; optical fiber; gesture recognition; object recognition; Soft Robotic Hands (SRHs); Machine Learning (ML)
Online access: http://tailieuso.tlu.edu.vn/handle/DHTL/9863
id oai:localhost:DHTL-9863
record_format dspace
spelling oai:localhost:DHTL-9863 2020-12-07T09:01:39Z Machine learning-based multi-modal information perception for soft robotic hands Huang, H. Lin, J. Wu, L. Fang, B. Wen, Z. Sun, F. multi-modal sensors optical fiber gesture recognition object recognition Soft Robotic Hands (SRHs) Machine Learning (ML) This paper focuses on multi-modal Information Perception (IP) for Soft Robotic Hands (SRHs) using Machine Learning (ML) algorithms. A flexible Optical Fiber-based Curvature Sensor (OFCS) is fabricated, consisting of a Light-Emitting Diode (LED), photosensitive detector, and optical fiber. Bending the roughened optical fiber lowers the transmitted light intensity, which reflects the curvature of the soft finger. Together with the curvature and pressure information, multi-modal IP is performed to improve the recognition accuracy. Recognition of gesture, object shape, size, and weight is implemented with multiple ML approaches, including the Supervised Learning Algorithms (SLAs) of K-Nearest Neighbor (KNN), Support Vector Machine (SVM), and Logistic Regression (LR), and the unSupervised Learning Algorithm (un-SLA) of K-Means Clustering (KMC). Moreover, Optical Sensor Information (OSI), Pressure Sensor Information (PSI), and Double-Sensor Information (DSI) are adopted to compare the recognition accuracies. The experimental results demonstrate that the proposed sensors and recognition approaches are feasible and effective. The recognition accuracies obtained using the above ML algorithms and three modes of sensor information are higher than 85 percent for almost all combinations. Moreover, DSI is more accurate than single-modal sensor information, and the KNN algorithm with DSI outperforms the other combinations in recognition accuracy. 2020-12-07T09:00:41Z 2020-12-07T09:00:41Z 2019 BB http://tailieuso.tlu.edu.vn/handle/DHTL/9863 en_US Tsinghua Science and Technology, (2020), Volume 25, Number 2, pp. 255-269 application/pdf IEEE Xplore
institution Trường Đại học Thủy Lợi
collection DSpace
language en_US
topic multi-modal sensors
optical fiber
gesture recognition
object recognition
Soft Robotic Hands (SRHs)
Machine Learning (ML)
spellingShingle multi-modal sensors
optical fiber
gesture recognition
object recognition
Soft Robotic Hands (SRHs)
Machine Learning (ML)
Huang, H.
Machine learning-based multi-modal information perception for soft robotic hands
description This paper focuses on multi-modal Information Perception (IP) for Soft Robotic Hands (SRHs) using Machine Learning (ML) algorithms. A flexible Optical Fiber-based Curvature Sensor (OFCS) is fabricated, consisting of a Light-Emitting Diode (LED), photosensitive detector, and optical fiber. Bending the roughened optical fiber lowers the transmitted light intensity, which reflects the curvature of the soft finger. Together with the curvature and pressure information, multi-modal IP is performed to improve the recognition accuracy. Recognition of gesture, object shape, size, and weight is implemented with multiple ML approaches, including the Supervised Learning Algorithms (SLAs) of K-Nearest Neighbor (KNN), Support Vector Machine (SVM), and Logistic Regression (LR), and the unSupervised Learning Algorithm (un-SLA) of K-Means Clustering (KMC). Moreover, Optical Sensor Information (OSI), Pressure Sensor Information (PSI), and Double-Sensor Information (DSI) are adopted to compare the recognition accuracies. The experimental results demonstrate that the proposed sensors and recognition approaches are feasible and effective. The recognition accuracies obtained using the above ML algorithms and three modes of sensor information are higher than 85 percent for almost all combinations. Moreover, DSI is more accurate than single-modal sensor information, and the KNN algorithm with DSI outperforms the other combinations in recognition accuracy.
author2 Lin, J.
author_facet Lin, J.
Huang, H.
format BB
author Huang, H.
author_sort Huang, H.
title Machine learning-based multi-modal information perception for soft robotic hands
title_short Machine learning-based multi-modal information perception for soft robotic hands
title_full Machine learning-based multi-modal information perception for soft robotic hands
title_fullStr Machine learning-based multi-modal information perception for soft robotic hands
title_full_unstemmed Machine learning-based multi-modal information perception for soft robotic hands
title_sort machine learning-based multi-modal information perception for soft robotic hands
publisher IEEE Xplore
publishDate 2020
url http://tailieuso.tlu.edu.vn/handle/DHTL/9863
work_keys_str_mv AT huangh machinelearningbasedmultimodalinformationperceptionforsoftrobotichands
_version_ 1768589792735395840