• Title/Summary/Keyword: Wearable sensors


A Study on Smart Clothing Products Based on Smart Clothing Patent Application Technology (스마트 의류의 제품 사례 연구 -스마트 의류 특허출원 기술을 중심으로-)

  • Lee, Jaekyong;Choo, Hojung;Kim, Hayeon
    • Journal of the Korean Society of Clothing and Textiles, v.45 no.1, pp.28-45, 2021
  • The importance of smart clothing as a product is increasingly emphasized as further growth of the smart clothing market is expected. Although there is considerable understanding of and sympathy for the potential of smart clothing in the mass consumer market, commercialization has not been actively carried out. This study aims to improve understanding of product development directions, with a focus on technical benefits, so that smart clothing can reach customers as a wearable device. Major technologies used in smart clothing are identified through an analysis of the status of smart clothing patent application technologies in Korea. Smart clothing is divided into three types based on reaction mechanism and functional scope: passive smart, active smart, and advanced smart clothing. We present smart clothing products and discuss the product features of each type. According to the analysis, smart clothing products not only employed technologies such as sensors, controllers, and actuators in passive, active, and advanced smart systems, but also provided new services by converging big data and AI technologies. Future directions for new smart clothing product development are also discussed in the conclusion.

Design and Implementation of CNN-Based Human Activity Recognition System using WiFi Signals (WiFi 신호를 활용한 CNN 기반 사람 행동 인식 시스템 설계 및 구현)

  • Chung, You-shin;Jung, Yunho
    • Journal of Advanced Navigation Technology, v.25 no.4, pp.299-304, 2021
  • Existing human activity recognition systems detect activities through devices such as wearable sensors and cameras. However, these methods require additional devices and incur extra cost, and cameras in particular raise privacy issues. Using WiFi infrastructure that is already installed can solve this problem. In this paper, we propose a CNN-based human activity recognition system that uses the channel state information (CSI) of WiFi signals, and present the design and implementation of an accelerated hardware structure. The system defines four possible behaviors during studying in an indoor environment and classifies the WiFi channel state information with a convolutional neural network (CNN), achieving an average accuracy of 91.86%. In addition, for acceleration, we present the design of a hardware accelerator for the fully connected layer, which accounts for the largest computational load in the CNN classifier. In a performance evaluation on an FPGA device, the accelerated design achieved a 4.28 times faster computation time than the software-based system.
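
As a rough illustration of the kind of CNN classifier the abstract describes, the sketch below builds a small 2D CNN over CSI windows in PyTorch. The window size (64 time steps), the 52 subcarriers, the layer sizes, and the four-class output are assumptions for illustration; this is not the authors' architecture, and the FPGA-accelerated fully connected layer is not modeled.

```python
# Minimal sketch, assuming CSI windows of 64 time steps x 52 subcarriers and
# 4 activity classes; layer sizes are illustrative, not the paper's design.
import torch
import torch.nn as nn

class CSIActivityCNN(nn.Module):
    def __init__(self, n_subcarriers=52, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 64x52 -> 32x26
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 32x26 -> 16x13
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * (n_subcarriers // 4), 64), nn.ReLU(),
            nn.Linear(64, n_classes),             # scores for 4 activities
        )

    def forward(self, x):                         # x: (batch, 1, 64, n_subcarriers)
        return self.classifier(self.features(x))

model = CSIActivityCNN()
dummy = torch.randn(8, 1, 64, 52)                 # a batch of synthetic CSI windows
print(model(dummy).shape)                         # torch.Size([8, 4])
```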

Lifelog Analysis and Future using Artificial Intelligence in Healthcare (헬스케어에서 인공지능을 활용한 라이프로그 분석과 미래)

  • Park, Minseo
    • The Journal of the Convergence on Culture Technology, v.8 no.2, pp.1-6, 2022
  • A lifelog is a digital record of an individual collected from various digital sensors, and includes activity amount, sleep information, weight change, body mass, muscle mass, fat mass, etc. Recently, as wearable devices have become common, large amounts of high-quality lifelog data are being produced. Lifelog data reflect the state of an individual's body and can be used not only for personal health care but also for investigating the causes and treatment of diseases. At present, however, services do not reflect AI/ML-based correlation analysis or personalization; they remain at the level of presenting simple records or fragmentary statistics. Therefore, this paper examines the correlations and relationships between lifelog data and disease, as well as AI/ML techniques applied to lifelog data, and proposes an AI/ML-based lifelog data analysis process. The analysis process is demonstrated with data collected from an actual Galaxy Watch. Finally, we propose a future convergence service roadmap that includes lifelog data, diet, health information, and disease information.
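
As a small, hedged illustration of the first step such an AI/ML-based analysis process might take, the sketch below computes a correlation matrix over a toy lifelog table with pandas. The column names and synthetic values are hypothetical; the paper's actual Galaxy Watch schema and pipeline are not reproduced.

```python
# Minimal sketch: correlation analysis over a synthetic lifelog table.
# Column names and values are hypothetical placeholders.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 200                                   # 200 synthetic daily records
lifelog = pd.DataFrame({
    "steps":      rng.integers(1_000, 15_000, n),
    "sleep_min":  rng.integers(240, 540, n),
    "weight_kg":  rng.normal(70, 5, n),
    "resting_hr": rng.integers(55, 85, n),
})

# A correlation matrix is an interpretable first step before fitting any ML model
print(lifelog.corr().round(2))
```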

Centralized Machine Learning Versus Federated Averaging: A Comparison using MNIST Dataset

  • Peng, Sony;Yang, Yixuan;Mao, Makara;Park, Doo-Soon
    • KSII Transactions on Internet and Information Systems (TIIS), v.16 no.2, pp.742-756, 2022
  • A flood of information has accompanied the rise of the internet and digital devices in the fourth industrial revolution era. Every millisecond, massive amounts of structured and unstructured data are generated; smartphones, wearable devices, sensors, and self-driving cars are just a few examples of devices that now produce massive amounts of data in our daily lives. Machine learning has been adopted to recognize patterns in data across many areas, including the healthcare, government, banking, and military sectors. However, the conventional machine learning model requires data owners to upload their information to a single central location for model training. This classical model causes data owners to worry about the risks of transferring private information, because traditional machine learning requires pushing their data to the cloud for training. Furthermore, training machine learning and deep learning models requires massive computing resources. Thus, many researchers have turned to a new paradigm known as federated learning. Federated learning trains artificial intelligence models over distributed clients while preserving the privacy of the data owners' information. Hence, this paper implements Federated Averaging with a deep neural network to classify handwritten digit images while protecting sensitive data, and compares the centralized machine learning model with Federated Averaging. The results show that the centralized model outperforms federated learning in terms of accuracy, but this classical model carries additional risk, such as privacy concerns, because the data are stored in a central data center. The MNIST dataset was used in this experiment.
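
The core of Federated Averaging is a server-side weighted average of client model parameters, with weights proportional to each client's local dataset size. The sketch below shows that aggregation rule on synthetic NumPy parameter lists; client-side training and the paper's actual DNN are omitted.

```python
# Minimal sketch of the FedAvg aggregation step: the server averages client
# parameters weighted by local dataset size. The arrays below are synthetic
# placeholders, not the paper's trained MNIST model.
import numpy as np

def fedavg(client_weights, client_sizes):
    """Weighted average of per-client parameter lists (one list entry per layer)."""
    total = sum(client_sizes)
    return [
        sum(w[layer] * (n / total) for w, n in zip(client_weights, client_sizes))
        for layer in range(len(client_weights[0]))
    ]

# Three clients, each holding two parameter tensors (e.g., one layer's weights and bias)
clients = [[np.random.randn(784, 10), np.random.randn(10)] for _ in range(3)]
sizes = [6000, 12000, 2000]               # unequal local dataset sizes
global_params = fedavg(clients, sizes)
print([p.shape for p in global_params])   # [(784, 10), (10,)]
```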

Stretchable Sensor Array Based on Lead-Free Piezoelectric Composites Made of BaTiO3 Nanoparticles and Polymeric Matrix (BaTiO3 압전나노입자와 폴리머로 제작된 비납계 압전복합체의 스트레쳐블 압전 센서 어레이로의 적용 연구)

  • Bae, Jun Ho;Ham, Seong Su;Park, Sung Cheol;Park, Kwi-Il
    • Journal of Sensor Science and Technology, v.31 no.5, pp.312-317, 2022
  • Piezoelectric energy harvesting has attracted increasing attention over the last decade as a means of generating sustainable and long-lasting energy from wasted mechanical energy. To develop self-powered wearable devices, piezoelectric materials should be flexible, stretchable, and bio-eco-friendly. This study proposes the fabrication of stretchable piezoelectric composites by dispersing perovskite-structured BaTiO3 nanoparticles in an Ecoflex polymeric matrix. In particular, a stretchable piezoelectric sensor array was fabricated via a simple and cost-effective spin-coating process, exploiting a piezoelectric composite comprising BaTiO3 nanoparticles, the Ecoflex matrix, and stretchable Ag-coated textile electrodes. The fabricated sensor generated an output voltage of ~4.3 V under repeated compressive deformation. Moreover, the piezoelectric sensor array exhibited robust mechanical stability over ~5,000 cycles of mechanical pushing. The finite element method, using the COMSOL Multiphysics simulation program, was employed to support the measured output performance of the fabricated device. Finally, the stretchable piezoelectric sensor array can be used as a self-powered touch sensor that effectively detects and distinguishes mechanical stimuli, such as pressing by a human finger, demonstrating its potential as a stretchable, lead-free, and scalable piezoelectric sensor array.

Automatic identification and analysis of multi-object cattle rumination based on computer vision

  • Yueming Wang;Tiantian Chen;Baoshan Li;Qi Li
    • Journal of Animal Science and Technology, v.65 no.3, pp.519-534, 2023
  • Rumination in cattle is closely related to their health, which makes automatic monitoring of rumination an important part of smart pasture operations. However, manual monitoring of cattle rumination is laborious, and wearable sensors are often harmful to the animals. Thus, we propose a computer vision-based method to automatically identify multi-object cattle rumination and to calculate the rumination time and number of chews for each cow. The heads of the cattle in the video were first tracked with a multi-object tracking algorithm that combined the You Only Look Once (YOLO) algorithm with the kernelized correlation filter (KCF). Images of each cow's head were saved at a fixed size and numbered. Then, a rumination recognition algorithm was constructed with parameters obtained using the frame difference method, and the rumination time and number of chews were calculated. The rumination recognition algorithm was applied to the head image of each cow to automatically detect multi-object cattle rumination. To verify the feasibility of this method, the algorithm was tested on multi-object cattle rumination videos, and the results were compared with those produced by human observation. The experimental results showed that the average error in rumination time was 5.902% and the average error in the number of chews was 8.126%. Rumination identification and the calculation of rumination information are performed automatically by computer with no manual intervention, providing a new contactless rumination identification method for multiple cattle and technical support for smart pasture operations.
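
As a hedged illustration of the frame-difference idea mentioned in the abstract, the sketch below counts motion bursts in a sequence of grayscale head crops by thresholding the mean inter-frame difference. The synthetic clip and the adaptive threshold are illustrative; the YOLO/KCF tracking stage and the authors' actual parameters are not reproduced.

```python
# Minimal sketch: approximate chewing as periodic bursts of inter-frame motion
# over a tracked head crop. The synthetic data and threshold are illustrative.
import numpy as np

def count_chews(frames, threshold=None):
    """frames: (T, H, W) grayscale head crops; returns the number of motion bursts."""
    diffs = np.abs(np.diff(frames.astype(np.float32), axis=0)).mean(axis=(1, 2))
    if threshold is None:
        threshold = diffs.mean()                     # simple adaptive threshold
    active = diffs > threshold
    return int(np.sum(active[1:] & ~active[:-1]))    # count rising edges

# Synthetic clip: low-amplitude noise plus a periodic brightness oscillation
T, H, W = 300, 64, 64
t = np.arange(T)
frames = 60 * np.sin(2 * np.pi * t / 30)[:, None, None] + 5 * np.random.rand(T, H, W)
print("estimated chew-like motion bursts:", count_chews(frames))
```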

Effects of the Selection of Deformation-related Variables on Accuracy in Relative Position Estimation via Time-varying Segment-to-Joint Vectors (시변 분절-관절 벡터를 통한 상대위치 추정시 변형관련 변수의 선정이 추정 정확도에 미치는 영향)

  • Lee, Chang June;Lee, Jung Keun
    • Journal of Sensor Science and Technology, v.31 no.3, pp.156-162, 2022
  • This study estimates the relative position between body segments using segment orientations and segment-to-joint-center (S2J) vectors. In many wearable motion tracking technologies, the S2J vector is treated as a constant, based on the assumption that rigid body segments are connected by a mechanical ball joint. However, human body segments are deformable non-rigid bodies connected via ligaments and tendons; therefore, the S2J vector should be determined as a time-varying vector rather than a constant. In this regard, our previous study (2021) proposed a method for determining the time-varying S2J vector from a learning dataset using regression. Because that method uses a deformation-related variable to account for the deformation of S2J vectors, the optimal variable must be determined, in terms of estimation accuracy, for each motion and segment. In this study, we investigated the effects of deformation-related variables on the estimation accuracy of the relative position. The experimental results showed that estimation accuracy was highest when the flexion and adduction angles of the shoulder were selected as the deformation-related variables for the sternum-to-upper-arm relation, and the flexion angles of the shoulder and elbow for the upper-arm-to-forearm relation. Furthermore, the cases with multiple deformation-related variables were more accurate by an average of 2.19 mm than the cases with a single variable.
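
Where the abstract describes estimating relative position from segment orientations and S2J vectors, the underlying kinematic relation (with constant S2J vectors and an ideal ball joint) is p2 - p1 = R1·r1 - R2·r2. The sketch below evaluates that relation with NumPy; the orientations and S2J vectors are hypothetical, and the paper's time-varying, regression-based S2J correction is not reproduced.

```python
# Minimal sketch, assuming rigid segments meeting at a common joint center:
# p1 + R1 @ r1 = p2 + R2 @ r2 = p_joint, hence p2 - p1 = R1 @ r1 - R2 @ r2.
# The time-varying, regression-based S2J vectors of the paper are not modeled.
import numpy as np

def rotz(deg):
    """Rotation about the z-axis by `deg` degrees (stand-in for a sensor orientation)."""
    a = np.radians(deg)
    return np.array([[np.cos(a), -np.sin(a), 0.0],
                     [np.sin(a),  np.cos(a), 0.0],
                     [0.0,        0.0,       1.0]])

def relative_position(R1, r1, R2, r2):
    return R1 @ r1 - R2 @ r2

R_upper_arm = rotz(30)                              # hypothetical upper-arm orientation
R_forearm   = rotz(90)                              # hypothetical forearm orientation
r_upper_to_elbow = np.array([0.0, -0.30, 0.0])      # S2J vector in upper-arm frame (m)
r_fore_to_elbow  = np.array([0.0,  0.25, 0.0])      # S2J vector in forearm frame (m)
print(relative_position(R_upper_arm, r_upper_to_elbow, R_forearm, r_fore_to_elbow))
```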

Evaluation of Output Performance of Flexible Thermoelectric Energy Harvester Made of Organic-Inorganic Thermoelectric Films Based on PEDOT:PSS and PVDF Matrix (PEDOT:PSS 및 PVDF 기반의 유-무기 열전 필름으로 제작된 플렉서블 열전 에너지 하베스터의 발전 성능 평가)

  • Yujin Na;Kwi-Il Park
    • Korean Journal of Materials Research, v.33 no.7, pp.295-301, 2023
  • Thermoelectric (TE) energy harvesting, which converts available thermal resources into electrical energy, is attracting significant attention because it enables wireless and self-powered electronics. Recently, as demand for portable and wearable electronic devices and sensors increases, organic-inorganic TE films with polymeric matrices are being studied to realize flexible thermoelectric energy harvesters (f-TEHs). Here, we developed flexible organic-inorganic TE films with p-type Bi0.5Sb1.5Te3 powder and polymeric matrices such as poly(3,4-ethylenedioxythiophene):poly(styrene sulfonate) (PEDOT:PSS) and poly(vinylidene fluoride) (PVDF). The fabricated TE film with a PEDOT:PSS matrix and 1 wt% multi-walled carbon nanotubes (MWCNTs) exhibited a power factor of 3.96 µW·m⁻¹·K⁻², about 2.8 times higher than that of the PVDF-based TE film. We also fabricated f-TEHs using both types of TE films and investigated their TE output performance. The f-TEH made of PEDOT:PSS-based TE films harvested a maximum load voltage of 3.4 mV, a load current of 17.4 µA, and an output power of 15.7 nW at a temperature difference of 25 K, whereas the f-TEH with PVDF-based TE films generated 0.6 mV, 3.3 µA, and 0.54 nW. This study broadens research on methods to improve TE efficiency and on the development of flexible organic-inorganic TE films and f-TEHs.
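
For readers unfamiliar with the power-factor figure quoted above, it follows from PF = S²σ, where S is the Seebeck coefficient and σ the electrical conductivity. The sketch below evaluates this definition with placeholder values; the numbers are hypothetical and are not the measured properties of the PEDOT:PSS- or PVDF-based films.

```python
# Minimal sketch of the thermoelectric power factor PF = S^2 * sigma.
# The Seebeck coefficient and conductivity below are hypothetical placeholders.
seebeck_V_per_K = 20e-6                   # S = 20 uV/K (assumed)
sigma_S_per_m = 1.0e4                     # sigma = 1e4 S/m (assumed)

power_factor = seebeck_V_per_K ** 2 * sigma_S_per_m   # units: W * m^-1 * K^-2
print(f"power factor = {power_factor * 1e6:.2f} uW/(m*K^2)")
```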

Development of a Web Platform System for Worker Protection using EEG Emotion Classification (뇌파 기반 감정 분류를 활용한 작업자 보호를 위한 웹 플랫폼 시스템 개발)

  • Ssang-Hee Seo
    • Journal of Internet of Things and Convergence, v.9 no.6, pp.37-44, 2023
  • As a primary technology of Industry 4.0, human-robot collaboration (HRC) requires additional measures to ensure worker safety. Previous studies on avoiding collisions between collaborative robots and workers mainly detect collisions using sensors and cameras attached to the robot. This approach requires complex algorithms to continuously track robots, people, and objects, and has the disadvantage that it cannot respond quickly to changes in the work environment. The present study implements a web-based platform that manages collaborative robots by recognizing workers' emotions, specifically their perception of danger, during the collaborative process. To this end, we developed a web-based application that collects and stores emotion-related brain waves via a wearable device; a deep-learning model that extracts and classifies the characteristics of neutral, positive, and negative emotions; and an Internet of Things (IoT) interface program that controls motor operation according to the classified emotion. We conducted a comparative analysis of the system's performance using a public open dataset and a dataset collected through actual measurement, achieving validation accuracies of 96.8% and 70.7%, respectively.
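
As a hedged sketch of the last stage in the pipeline described above, the snippet below maps a three-class emotion prediction (neutral/positive/negative) to a motor command. The class order, threshold, and command names are hypothetical; the paper's EEG deep-learning model and IoT interface protocol are not reproduced.

```python
# Minimal sketch: map a classifier's (neutral, positive, negative) probabilities
# to a robot motor command. Labels, threshold, and commands are assumptions.
import numpy as np

EMOTIONS = ["neutral", "positive", "negative"]

def decide_motor_command(class_probs, danger_threshold=0.6):
    """Slow or stop the collaborative robot when a negative (danger) state is likely."""
    label = EMOTIONS[int(np.argmax(class_probs))]
    if label == "negative" and class_probs[2] >= danger_threshold:
        return "STOP"
    if label == "negative":
        return "SLOW"
    return "RUN"

print(decide_motor_command(np.array([0.1, 0.2, 0.7])))   # STOP
print(decide_motor_command(np.array([0.3, 0.5, 0.2])))   # RUN
```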

An Attention-based Temporal Network for Parkinson's Disease Severity Rating using Gait Signals

  • Huimin Wu;Yongcan Liu;Haozhe Yang;Zhongxiang Xie;Xianchao Chen;Mingzhi Wen;Aite Zhao
    • KSII Transactions on Internet and Information Systems (TIIS), v.17 no.10, pp.2627-2642, 2023
  • Parkinson's disease (PD) is a typical chronic neurodegenerative disease involving dopamine concentration, which can disrupt motor activity and cause different degrees of gait disturbance depending on PD severity. Because current clinical PD diagnosis is a complex, time-consuming, and challenging task that relies on physicians' subjective evaluation of visual observations, gait disturbance has been extensively explored to enable automatic PD diagnosis and severity rating and to provide auxiliary information for physicians' decisions, using gait data from various acquisition devices. Among these devices, wearable sensors have the advantage of flexibility, since they do not limit the wearer's range of activity in this application scenario. In this paper, an attention-based temporal network (ATN) is designed for the time-series structure of gait data (vertical ground reaction force signals) from foot sensor systems, to learn the discriminative differences related to PD severity levels hidden in the sequential data. The structure of the proposed method is inspired by the Transformer network, given its success in extracting temporal information, and contains three modules: a preprocessing module that maps intra-moment features, a feature extractor that computes complex gait characteristics of the whole signal sequence along the temporal dimension, and a classifier that makes the final decision on PD severity. Experiments conducted on the public PDgait dataset of VGRF signals verify the proposed model's validity and show promising classification performance compared with several existing methods.
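
To make the three-module structure concrete, the sketch below assembles a generic Transformer-encoder classifier over VGRF sequences using PyTorch built-ins: a linear layer as the intra-moment (preprocessing) mapping, a Transformer encoder as the temporal feature extractor, and a linear head as the severity classifier. The channel count, sequence length, layer sizes, and three severity classes are assumptions; this is not the authors' exact ATN.

```python
# Minimal sketch, assuming VGRF sequences of shape (batch, T, 16 force channels)
# and 3 severity classes; sizes are illustrative, not the paper's ATN configuration.
import torch
import torch.nn as nn

class GaitAttentionClassifier(nn.Module):
    def __init__(self, n_channels=16, d_model=64, n_classes=3):
        super().__init__()
        self.embed = nn.Linear(n_channels, d_model)      # intra-moment feature mapping
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4,
                                           dim_feedforward=128, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)   # temporal extractor
        self.head = nn.Linear(d_model, n_classes)        # severity decision

    def forward(self, x):                                # x: (batch, T, n_channels)
        h = self.encoder(self.embed(x))
        return self.head(h.mean(dim=1))                  # average over time steps

model = GaitAttentionClassifier()
dummy = torch.randn(4, 100, 16)                          # 4 synthetic gait sequences
print(model(dummy).shape)                                # torch.Size([4, 3])
```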