• Title/Summary/Keyword: Data Collecting


Perceptions and Trends of Digital Fashion Technology - A Big Data Analysis - (빅데이터 분석을 이용한 디지털 패션 테크에 대한 인식 연구)

  • Song, Eun-young; Lim, Ho-sun
    • Fashion & Textile Research Journal / v.23 no.3 / pp.380-389 / 2021
  • This study aimed to reveal the perceptions and trends of digital fashion technology through an informational approach. A big data analysis was conducted on text collected from the web environment between April 2019 and April 2021. Keywords were derived through text mining and network analysis, and the structure of perceptions of digital fashion technology was identified. Using Textom, we collected 8,144 texts after data refinement, conducted appearance-frequency and centrality analyses, and visualized the results with a word cloud and N-grams. Matrices were also generated from the top 70 words by appearance frequency, and a structural equivalence analysis was performed. The results were presented as network visualizations and dendrograms. Fashion, digital, and technology were the most frequently mentioned terms, and platform, digital transformation, and startup also appeared with high frequency. Clustering produced four clusters: marketing using fashion, digital technology, startups, and augmented reality/virtual reality technology. Future research on startups and smart factories with technologies based on stable platforms is needed. The results of this study contribute to the fashion industry's knowledge of digital fashion technology and can serve as a foundation for research on related topics.
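
As an illustration of the frequency, N-gram, and top-word matrix steps this abstract describes, the sketch below reimplements them in plain Python on a toy corpus; the study itself used Textom, so the documents, stop words, and tokenization here are stand-in assumptions.

```python
# Minimal sketch of frequency, bigram, and co-occurrence analysis;
# the two toy documents stand in for the study's 8,144 refined texts.
from collections import Counter
from itertools import pairwise  # Python 3.10+

docs = [
    "digital fashion technology platform startup",
    "fashion platform digital transformation startup",
]
stopwords = {"and", "the"}  # hypothetical refinement step
tokens_per_doc = [[t for t in d.split() if t not in stopwords] for d in docs]

# Appearance-frequency analysis (a word cloud is drawn from this Counter)
freq = Counter(t for doc in tokens_per_doc for t in doc)
print(freq.most_common(5))

# Bigram (N-gram, n = 2) analysis
bigrams = Counter(b for doc in tokens_per_doc for b in pairwise(doc))
print(bigrams.most_common(5))

# Co-occurrence matrix over the top words: the input to network analysis
top = [w for w, _ in freq.most_common(70)]
index = {w: i for i, w in enumerate(top)}
matrix = [[0] * len(top) for _ in range(len(top))]
for doc in tokens_per_doc:
    present = {t for t in doc if t in index}
    for a in present:
        for b in present:
            if a != b:
                matrix[index[a]][index[b]] += 1
```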

Proposal of AI-based Digital Forensic Evidence Collecting System

  • Jang, Eun-Jin; Shin, Seung-Jung
    • International Journal of Internet, Broadcasting and Communication / v.13 no.3 / pp.124-129 / 2021
  • As the era of the Fourth Industrial Revolution is in full swing, public interest in related technologies such as artificial intelligence, big data, and blockchain is increasing. As artificial intelligence technology is used across industrial fields, the need for research methods that incorporate it in related fields is also growing. Among digital forensic investigation techniques, evidence collection is a critical procedure in investigations that must prove a specific person's suspected acts. However, evidence may be damaged, whether through intentional tampering or other physical causes, and evidence collection is limited in such situations. Therefore, this paper proposes an artificial-intelligence-based evidence collection system that analyzes the many image files reported by citizens in real time, so that the location, user information, and shooting time of each image file can be checked visually. When this system is applied, data collected in real time as prospective evidence is expected to be usable as actual evidence, and risk-area analysis through big data analysis is also expected to become possible.
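
The abstract's core step, checking a reported image's location, user information, and shooting time, can be illustrated with EXIF metadata extraction. The sketch below uses Pillow and a hypothetical file name; it is an assumption about one building block, not the authors' system.

```python
# Minimal sketch: read shooting time, device, and GPS tags from an image's EXIF.
from PIL import Image, ExifTags

def extract_evidence_metadata(path: str) -> dict:
    """Return EXIF fields relevant to when/where a photo was taken."""
    exif = Image.open(path).getexif()
    # Map numeric tag ids to readable names
    named = {ExifTags.TAGS.get(tag_id, tag_id): value
             for tag_id, value in exif.items()}
    return {
        "datetime": named.get("DateTime"),   # shooting time
        "camera": named.get("Model"),        # device hint toward user info
        "gps_tags": exif.get_ifd(0x8825),    # 0x8825 = GPSInfo IFD pointer
    }

print(extract_evidence_metadata("reported_photo.jpg"))  # hypothetical file
```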

Systolic blood pressure measurement algorithm with mmWave radar sensor

  • Shi, JingYao; Lee, KangYoon
    • KSII Transactions on Internet and Information Systems (TIIS) / v.16 no.4 / pp.1209-1223 / 2022
  • Blood pressure is one of the key physiological parameters for assessing human health and can indicate whether cardiovascular function is healthy. In general, what we call blood pressure refers to arterial blood pressure. Blood pressure fluctuates greatly and, under the influence of various factors, varies even with each heartbeat. Continuous blood pressure measurement is therefore particularly important for more accurate diagnosis. Long-term continuous monitoring is difficult to achieve with traditional measurement methods because the measuring instruments must be worn continuously. Radar technology, on the other hand, is not easily affected by environmental factors and has strong penetration capability. In this study, we used machine learning to develop a linear blood pressure prediction model with data from a public database. The radar sensor measures the subject, obtains the pulse waveform data, calculates the pulse transit time, and derives the blood pressure through linear regression analysis. We confirmed the model's feasibility to facilitate follow-up research, such as integrating other sensors, collecting temperature, heartbeat, and respiratory data, and seeking medical treatment promptly when abnormalities occur.
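
A minimal sketch of the linear-model step described above: regressing systolic blood pressure (SBP) on pulse transit time (PTT). The values are invented for illustration; in the study, PTT comes from radar pulse waveforms and the reference pressures from a public database.

```python
# Sketch of SBP ~ PTT linear regression; all numbers are hypothetical.
import numpy as np
from sklearn.linear_model import LinearRegression

ptt = np.array([[0.21], [0.20], [0.19], [0.18], [0.17]])  # seconds
sbp = np.array([112.0, 116.0, 121.0, 127.0, 133.0])       # mmHg (SBP rises as PTT falls)

model = LinearRegression().fit(ptt, sbp)
print(model.coef_, model.intercept_)      # fitted slope and intercept
print(model.predict([[0.185]]))           # SBP estimate for a new PTT reading
```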

Precision nutrition: approach for understanding intra-individual biological variation (정밀영양: 개인 간 대사 다양성을 이해하기 위한 접근)

  • Kim, Yangha
    • Journal of Nutrition and Health / v.55 no.1 / pp.1-9 / 2022
  • In the past few decades, great progress has been made in understanding the interaction between nutrition and health status. Despite this wealth of knowledge, health problems related to nutrition continue to increase, which leads us to postulate that the continuing trend may result from a lack of consideration of intra-individual biological variation in dietary responses. Precision nutrition utilizes personal information such as age, gender, lifestyle, dietary intake, environmental exposure, genetic variants, the microbiome, and epigenetics to provide better dietary advice and interventions. Recent technological advances in artificial intelligence, big data analytics, cloud computing, and machine learning have made it possible to process data on a scale and in ways that were previously impossible. A big data platform is built by collecting numerous parameters such as meal features, medical metadata, lifestyle variation, genome diversity, and microbiome composition. Sophisticated techniques based on machine learning algorithms can then integrate and interpret multiple factors and provide dietary guidance at a personalized or stratified level. The development of a suitable machine learning algorithm would make it possible to suggest a personalized diet or functional food based on analysis of intra-individual metabolic variation. This novel precision nutrition approach might become one of the most exciting and promising ways of improving health conditions, especially in the context of non-communicable disease prevention.
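
To illustrate the kind of multi-factor integration the abstract describes (not the paper's own method), the sketch below trains a model on hypothetical meal, lifestyle, and microbiome features to predict an individual metabolic response; every column name and value is an assumption.

```python
# Sketch: predict a postprandial glucose response from heterogeneous inputs.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

data = pd.DataFrame({
    "age": [34, 58, 41, 29, 63, 47],
    "carb_g": [80, 45, 60, 95, 50, 70],                          # meal feature
    "sleep_h": [7.5, 6.0, 8.0, 5.5, 7.0, 6.5],                   # lifestyle
    "bifido_abundance": [0.12, 0.04, 0.09, 0.02, 0.07, 0.05],    # microbiome
    "glucose_response": [28, 55, 35, 70, 42, 50],                # target (mg/dL rise)
})

X, y = data.drop(columns="glucose_response"), data["glucose_response"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.33, random_state=0)

model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
# Personalized or stratified guidance would rank candidate meals by this estimate
print(model.predict(X_te))
```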

Concept and Construction Direction of Marine Digital Twin considering the Characteristics of Marine Information (해양정보 특성을 고려한 해양 디지털트윈 개념 및 구축방향)

  • Choi, Tae-seok; Choi, Yun-soo; Kim, Jae-myeong; Song, Hyun-Ho; Min, Byeong-heon; Lee, Sang-min
    • Journal of Internet Computing and Services / v.23 no.1 / pp.39-47 / 2022
  • The digital twin is positioned as a way to establish a digital management system for core infrastructure by collecting real-world data and implementing it in virtual space. However, Korea currently has no integrated three-dimensional marine information analysis tools and technologies, and unlike the land domain, digitizing dynamic marine information requires new 3D modeling and data processing technologies, which limits implementation. Therefore, by analyzing trends related to marine digital twins and digital twin technology elements, this study presents development directions in four areas of the marine digital twin: structure, data, modeling, and the utilization platform.

Application of Point Cloud Based Hull Structure Deformation Detection Algorithm (포인트 클라우드 기반 선체 구조 변형 탐지 알고리즘 적용 연구)

  • Song, Sang-ho; Lee, Gap-heon; Han, Ki-min; Jang, Hwa-sup
    • Journal of the Society of Naval Architects of Korea / v.59 no.4 / pp.235-242 / 2022
  • As ship condition inspection technology has developed, research on collecting, analyzing, and diagnosing condition information has become active. For ships, related research has analyzed, detected, and classified major hull failures such as cracks and corrosion using 2D and 3D data. However, for geometric deformation such as indents and bulges, 2D data is limited for detection, so 3D data is needed to exploit spatial feature information. In this study, we aim to detect the positions of hull structural deformation. We built a specimen based on actual hull structural deformation and acquired a point cloud from a model scanned with a 3D scanner. In the obtained point cloud, deformations (outliers) are found by combining an octree data structure with a RANSAC algorithm that finds the best-matching model in the dataset.
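
A minimal sketch of this point-cloud idea, under the assumption that the undeformed plate is roughly planar: fit the dominant surface with RANSAC and treat points far from it (the outliers) as deformation candidates. It uses Open3D; the file name and thresholds are hypothetical, and the paper's exact model-fitting combination may differ.

```python
# Sketch: RANSAC plane fit on a scanned hull specimen; outliers = indents/bulges.
import open3d as o3d

pcd = o3d.io.read_point_cloud("hull_specimen.ply")  # hypothetical scan file

# Optional octree spatial index over the cloud, as in the abstract
octree = o3d.geometry.Octree(max_depth=6)
octree.convert_from_point_cloud(pcd)

# RANSAC: best-fitting plane model for the plate surface
plane_model, inlier_idx = pcd.segment_plane(
    distance_threshold=0.003,  # meters; points beyond this are outliers
    ransac_n=3,
    num_iterations=1000,
)

deformed = pcd.select_by_index(inlier_idx, invert=True)  # keep only outliers
print(f"{len(deformed.points)} candidate deformation points")
```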

Real Estate Industry in the Era of Technology 5.0

  • Sun Ju KIM
    • The Journal of Economics, Marketing and Management / v.11 no.6 / pp.9-22 / 2023
  • Purpose: This paper aims to suggest ways to apply the leading technologies of Industry 5.0 to the housing welfare field, the tasks involved, and the policy implications. Research design, data, and methodology: The analysis method of this study is a literature review. The analysis proceeded as follows: technology trends and characteristics of Industry 5.0 were investigated and analyzed; methods of applying Technology 5.0 in the industrial field were then examined; finally, the application areas of each technology and the challenges to be solved in the process were presented. Results: The results of the analysis are 1) the accessibility and diffusion of technology, meaning that all citizens should have equal access to and use of the latest technology; to this end, appropriate use of technology and development of user-centered interfaces are needed; 2) data protection and privacy, since residential welfare technologies may face risks such as personal information leakage and hacking while collecting and analyzing residents' data; and 3) the stability, economic feasibility, and sustainability of the technology. Conclusions: The policy implications include 1) enhancing technology education and promotion to improve technology accessibility for groups such as low-income households, rural areas, and the elderly; 2) strengthening security policies and regulations to safeguard resident data and mitigate hacking risks; 3) standardization of technology; and 4) investment and support in R&D.

Comparison and Application of Deep Learning-Based Anomaly Detection Algorithms for Transparent Lens Defects (딥러닝 기반의 투명 렌즈 이상 탐지 알고리즘 성능 비교 및 적용)

  • Hanbi Kim; Daeho Seo
    • Journal of Korean Society of Industrial and Systems Engineering / v.47 no.1 / pp.9-19 / 2024
  • Deep learning-based computer vision anomaly detection algorithms are widely used in various fields. Especially in manufacturing, the difficulty of collecting abnormal data compared with normal data, and the challenge of defining all potential abnormalities in advance, have led to increasing demand for unsupervised learning methods that rely on normal data only. In this study, we conducted a comparative analysis of deep learning-based unsupervised algorithms that define and detect the abnormalities that can occur when transparent contact lenses are immersed in a liquid solution. We first validated the unsupervised algorithms on the existing anomaly detection benchmark dataset, MVTec AD, and then applied them to our problem. The existing benchmark consists primarily of solid objects, whereas our experiments judge the shape and presence of lenses submerged in liquid. Among the algorithms analyzed, EfficientAD showed an AUROC and F1-score of 0.97 in image-level tests. However, its F1-score dropped to 0.18 in pixel-level tests, making it difficult to localize where the abnormalities occurred. Despite this, EfficientAD demonstrated excellent image-level performance in classifying normal and abnormal instances, suggesting that, with the collection and training of large-scale data in real industrial settings, even better performance can be expected.
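
A minimal sketch of the image-level evaluation reported above: computing AUROC and F1 from per-image anomaly scores. The scores and labels are invented; in the study they would come from a model such as EfficientAD.

```python
# Sketch: image-level AUROC and F1 from hypothetical anomaly scores.
import numpy as np
from sklearn.metrics import roc_auc_score, f1_score

y_true = np.array([0, 0, 0, 1, 1, 0, 1, 0])  # 1 = defective lens image
scores = np.array([0.10, 0.20, 0.15, 0.90, 0.70, 0.30, 0.85, 0.05])

auroc = roc_auc_score(y_true, scores)        # threshold-free ranking metric
y_pred = (scores >= 0.5).astype(int)         # fixed threshold for F1
f1 = f1_score(y_true, y_pred)
print(f"image-level AUROC={auroc:.2f}, F1={f1:.2f}")
```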

A Study for Quality Improvement of Three-dimensional Body Measurement Data (3차원 인체치수 조사 자료의 품질 개선을 위한 연구)

  • Park, Sun-Mi; Nam, Yun-Ja; Park, Jin-Woo
    • Journal of the Ergonomics Society of Korea / v.28 no.4 / pp.117-124 / 2009
  • To inspect the quality of data collected in a large-scale body measurement and investigation project, a proper data editing process must be established. Three-dimensional body measurement may contain measuring errors caused by the measurer's proficiency or by changes in the subject's posture. Errors may also arise in the algorithms that convert the information obtained from the three-dimensional scanner into numerical values, and in processing the large volume of individual data. When such errors occur, the quality of the measured data deteriorates, which in turn reduces the quality of any statistics based on it. This study therefore suggests a way to improve the quality of three-dimensional body measurement data by proposing a working procedure that identifies and corrects data errors throughout the whole data processing procedure (collecting, processing, and analyzing) of the 2004 Size Korea Three-dimensional Body Measurement Project. The study was carried out in three stages. First, we detected erroneous data by examining logical relations among variables under each edit rule. Second, we detected suspicious data through independent examination of individual variable values by sex and age. Finally, we examined a scatter-plot matrix of many variables to consider the relationships among them; this simple graphical tool helps reveal whether suspicious data exist in the data set. As a result, we detected some erroneous data in the raw data and found that the main errors stem not from the three-dimensional body measurement system itself but from the subjects' original three-dimensional shape data. By correcting these erroneous data, we enhanced the data quality.
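
The three editing stages described above can be sketched on a hypothetical measurement table: logical edit rules between variables, per-group range checks, and a scatter-plot matrix. All bounds, column names, and values below are illustrative assumptions, not Size Korea rules.

```python
# Sketch of the three-stage data editing procedure on toy body measurements.
import pandas as pd
from pandas.plotting import scatter_matrix  # plotting requires matplotlib

df = pd.DataFrame({
    "sex": ["F", "F", "M", "M", "M"],
    "stature": [1620, 1655, 1745, 1780, 1730],    # mm (hypothetical)
    "crotch_height": [740, 770, 1850, 820, 800],  # mm; row 2 breaks the rule
    "waist_circ": [680, 710, 840, 2900, 800],     # mm; row 3 is implausible
})

# Stage 1: logical edit rule -- crotch height must be below stature
logical_errors = df[df["crotch_height"] >= df["stature"]]

# Stage 2: variable-wise plausibility range by sex (hypothetical mm bounds)
limits = {"F": (550, 1200), "M": (600, 1400)}
lo = df["sex"].map(lambda s: limits[s][0])
hi = df["sex"].map(lambda s: limits[s][1])
suspicious = df[(df["waist_circ"] < lo) | (df["waist_circ"] > hi)]

# Stage 3: scatter-plot matrix to eyeball multivariate outliers
scatter_matrix(df[["stature", "crotch_height", "waist_circ"]])

print("logical errors:", logical_errors.index.tolist())
print("suspicious:", suspicious.index.tolist())
```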

Proposal of Process Model for Research Data Quality Management (연구데이터 품질관리를 위한 프로세스 모델 제안)

  • Na-eun Han
    • Journal of the Korean Society for Information Management / v.40 no.1 / pp.51-71 / 2023
  • This study analyzed the government data quality management model, the big data quality management model, and the data lifecycle model for research data management, and identified the components common to these quality management models. Such models are designed and proposed according to the data lifecycle or based on the PDCA (Plan-Do-Check-Act) model, depending on the characteristics of the target data being managed, and they commonly include components for planning, collection and construction, operation and utilization, and preservation and disposal. Based on this, the study proposes a process model for research data quality management; in particular, it discusses the quality management to be performed in the series of processes from collection to service on a research data platform, in the stages of planning, construction and operation, and utilization. This study is significant in providing a knowledge base for methods of implementing research data quality management.
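
As a rough illustration (not the paper's formal model), the sketch below encodes the four common components identified above as a pipeline in which each stage carries quality checks; the stage names follow the abstract, while the checks and record fields are hypothetical.

```python
# Sketch: a stage pipeline with per-stage quality check hooks.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Stage:
    name: str
    checks: list[Callable[[dict], bool]] = field(default_factory=list)

    def run(self, record: dict) -> bool:
        # A stage passes only if every quality check passes
        return all(check(record) for check in self.checks)

# Components common to the reviewed quality management models
pipeline = [
    Stage("planning", [lambda r: "management_plan" in r]),
    Stage("collection_and_construction", [lambda r: r.get("metadata") is not None]),
    Stage("operation_and_utilization",
          [lambda r: r.get("access_level") in {"open", "restricted"}]),
    Stage("preservation_and_disposal", [lambda r: "retention_period" in r]),
]

record = {"management_plan": "...", "metadata": {}, "access_level": "open",
          "retention_period": "10y"}  # hypothetical research-data record
for stage in pipeline:
    print(stage.name, "passed" if stage.run(record) else "failed")
```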