• Title/Summary/Keyword: computer technology (컴퓨터기술)

Search Results: 11,373 (processing time: 0.039 seconds)

How to Identify Customer Needs Based on Big Data and Netnography Analysis (빅데이터와 네트노그라피 분석을 통합한 온라인 커뮤니티 고객 욕구 도출 방안: 천기저귀 온라인 커뮤니티 사례를 중심으로)

  • Soonhwa Park;Sanghyeok Park;Seunghee Oh
    • Information Systems Review
    • /
    • v.21 no.4
    • /
    • pp.175-195
    • /
    • 2019
  • This study combined big data and netnography analysis to examine the needs and behaviors of consumers in an online community. Big data analysis readily identifies correlations but struggles to establish causality; to overcome this limitation, we applied netnography analysis alongside it. The netnography methodology excels at grasping context, but analyzing a large volume of data accumulated over a long period is time-consuming and costly. This study therefore first searched for patterns in the overall data through big data analysis, identified the outliers that warrant netnography analysis, and then performed netnography analysis only on the periods immediately before and after those outliers. As a result, the causes of the phenomena revealed by the big data analysis could be explained through the netnography analysis, which also uncovered internal structural changes in the community that big data analysis alone does not easily reveal. The study was thus able to explain much of the online consumer behavior that had been difficult to understand, as well as the contextual semantics in unstructured data that big data analysis misses. The integrated big data-netnography model proposed in this study can serve as an effective tool for discovering new consumer needs in the online environment.
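The pattern-then-outlier step described above can be sketched as a simple anomaly scan over aggregate community activity; periods around each anomaly are flagged for manual netnography review. The monthly post counts and the 2-sigma threshold below are illustrative assumptions, not the study's data.

```python
# Sketch: scan aggregate activity for outliers, then expand each outlier
# into a review window (before and after) for qualitative analysis.

def find_outlier_periods(monthly_posts, sigma=2.0, window=1):
    n = len(monthly_posts)
    mean = sum(monthly_posts) / n
    std = (sum((x - mean) ** 2 for x in monthly_posts) / n) ** 0.5
    outliers = [i for i, x in enumerate(monthly_posts)
                if abs(x - mean) > sigma * std]
    # The study analyzes the context surrounding each anomaly,
    # not just the spike itself.
    review = sorted({j for i in outliers
                     for j in range(max(0, i - window), min(n, i + window + 1))})
    return outliers, review

counts = [120, 115, 130, 125, 118, 410, 122, 119]  # month 5 is a spike
outliers, review = find_outlier_periods(counts)    # [5], [4, 5, 6]
```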

5G Network Resource Allocation and Traffic Prediction based on DDPG and Federated Learning (DDPG 및 연합학습 기반 5G 네트워크 자원 할당과 트래픽 예측)

  • Seok-Woo Park;Oh-Sung Lee;In-Ho Ra
    • Smart Media Journal
    • /
    • v.13 no.4
    • /
    • pp.33-48
    • /
    • 2024
  • With the advent of 5G, characterized by Enhanced Mobile Broadband (eMBB), Ultra-Reliable Low Latency Communications (URLLC), and Massive Machine Type Communications (mMTC), efficient network management and service provision are becoming increasingly critical. This paper proposes a novel approach to the key challenges of 5G networks, namely ultra-high speed, ultra-low latency, and ultra-reliability, by dynamically optimizing network slicing and resource allocation using machine learning (ML) and deep learning (DL) techniques. The proposed methodology uses prediction models for network traffic and resource allocation and employs Federated Learning (FL) to optimize network bandwidth and latency while enhancing privacy and security. Specifically, the paper covers the implementation of algorithms and models such as Random Forest and LSTM, presenting methodologies for automating and adding intelligence to 5G network operations. Finally, the performance gains achievable by applying ML and DL to 5G networks are validated through performance evaluation and analysis, and solutions for optimizing network slicing and resource management are proposed for various industrial applications.
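The Federated Learning component rests on a server-side aggregation step; a minimal sketch of Federated Averaging (FedAvg) is shown below, in which a central server combines locally trained model weights without collecting raw traffic data from each base station. The client weights and data sizes are illustrative, not from the paper.

```python
# Minimal FedAvg sketch: average per-client weight vectors,
# weighted by each client's local data size.

def fed_avg(client_weights, client_sizes):
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [sum(w[k] * n for w, n in zip(client_weights, client_sizes)) / total
            for k in range(dim)]

# Three base stations with different amounts of local traffic data.
weights = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
sizes = [100, 100, 200]
global_w = fed_avg(weights, sizes)  # -> [3.5, 4.5]
```

Each client would then continue local training from `global_w`, so only model parameters, never raw traffic records, leave the base station.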

The Use of an Iliac Branch Device: Single-Center Study of Endovascular Preservation of Internal Iliac Artery Flow (장골 분지 장치 사용: 내장골동맥 흐름의 혈관내 보존에 대한 단일 기관의 경험)

  • Hyeseung Lee;Jeong-min Lee;Soongu Cho;JungUi Hong
    • Journal of the Korean Society of Radiology
    • /
    • v.84 no.6
    • /
    • pp.1339-1349
    • /
    • 2023
  • Purpose To determine the efficacy and safety of iliac branch device (IBD) implantation and to evaluate its limitations based on 7 years of experience at a single center. Materials and Methods This single-center study included patients with bilateral common iliac artery aneurysms (CIAAs). We reviewed follow-up CT scans for internal iliac artery (IIA) patency and IBD-related complications. A retrospective analysis was performed, and the overall survival rate and freedom-from-reintervention rate were reported according to the Kaplan-Meier method. Results Of the 38 patients with CIAAs, only 10 (12 CIAAs) were suitable for IBD treatment. Five patients underwent unilateral IBD insertion with contralateral IIA embolization; three of them (60%) showed claudication, but symptoms resolved within 6 months. The 7-year freedom-from-IBD-related-reintervention rate was 77.8%. No procedure-related deaths occurred. Conclusion IBD offers good technical success and long-term patency rates; however, anatomical factors frequently limit its application, particularly in Asians. Additionally, unilateral IIA embolization showed relatively mild complications and a good prognosis and can therefore be performed safely for anatomically complex aortoiliac aneurysms.
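The survival and freedom-from-reintervention rates above are computed with the Kaplan-Meier method; a generic product-limit estimator is sketched below. The follow-up times and event flags are illustrative, not the study's patient data.

```python
# Kaplan-Meier product-limit estimator: S(t) is multiplied by
# (n - d) / n at each distinct event time, where n is the number
# still at risk and d the number of events at that time.

def kaplan_meier(times, events):
    """Return [(t, S(t))] at each distinct event time.
    times: follow-up in months; events: 1 = event occurred, 0 = censored."""
    data = sorted(zip(times, events))
    s = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        d = sum(e for tt, e in data if tt == t)   # events at time t
        n = sum(1 for tt, _ in data if tt >= t)   # still at risk at time t
        if d > 0:
            s *= (n - d) / n
            curve.append((t, s))
        i += sum(1 for tt, _ in data if tt == t)  # skip all entries at time t
    return curve

# One event at 6 months, one at 12; the rest censored.
curve = kaplan_meier([6, 12, 12, 24, 36], [1, 0, 1, 0, 0])
```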

Automated Data Extraction from Unstructured Geotechnical Report based on AI and Text-mining Techniques (AI 및 텍스트 마이닝 기법을 활용한 지반조사보고서 데이터 추출 자동화)

  • Park, Jimin;Seo, Wanhyuk;Seo, Dong-Hee;Yun, Tae-Sup
    • Journal of the Korean Geotechnical Society
    • /
    • v.40 no.4
    • /
    • pp.69-79
    • /
    • 2024
  • Field geotechnical data are obtained from various field and laboratory tests and documented in geotechnical investigation reports. Digitizing these geotechnical parameters is essential for efficient design and construction, but current practice relies on manual data entry, which is time-consuming, labor-intensive, and error-prone. This study therefore proposes an automatic data extraction method for geotechnical investigation reports using image-based deep learning models and text-mining techniques. A deep-learning-based page classification model and a text-searching algorithm classified geotechnical investigation report pages with 100% accuracy. Computer vision algorithms identified valid data regions within report pages, and text analysis matched and extracted the corresponding geotechnical data. The proposed model was validated on a dataset of 205 geotechnical investigation reports, achieving an average data extraction accuracy of 93.0%. Finally, a user-interface-based program was developed to make the extraction model practical to apply: it allows users to upload PDF files of geotechnical investigation reports, analyzes them automatically, and extracts data for review and editing. This approach is expected to improve the efficiency and accuracy of digitizing geotechnical investigation reports and building geotechnical databases.
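A minimal sketch of the text-searching side of such a pipeline is shown below: a report page is classified as a boring-log page if it contains enough of the expected field labels. The keyword list, threshold, and function names are illustrative assumptions; the paper pairs this kind of check with an image-based deep-learning page classifier.

```python
# Hypothetical keyword-hit page classifier for geotechnical report pages.
BORING_LOG_KEYWORDS = ["depth", "soil description", "N-value", "sample", "groundwater"]

def classify_page(page_text, keywords=BORING_LOG_KEYWORDS, min_hits=3):
    """Return True if the page contains at least min_hits expected labels."""
    text = page_text.lower()
    hits = sum(1 for kw in keywords if kw.lower() in text)
    return hits >= min_hits

page = "Depth (m) | Soil Description | N-Value | Groundwater level: 3.2 m"
classify_page(page)  # True: 4 of the 5 expected labels are present
```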

Research on Optimized Operating Systems for Implementing High-Efficiency Small Wind Power Plants (고효율 소형 풍력 발전소 구현을 위한 최적화 운영 체계 연구)

  • Young-Bu Kim;Jun-Mo Park
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.25 no.2
    • /
    • pp.94-99
    • /
    • 2024
  • Recently, wind power has been gaining attention as a highly efficient renewable energy source, leading to various technological developments worldwide. Typically, wind power is deployed as large wind farms with many turbines installed in areas rich in wind resources. In developing countries or regions isolated from the power grid, however, off-grid small wind power systems are emerging as an efficient alternative. Operating and expanding such off-grid systems efficiently requires a real-time monitoring system that can actively respond to excessive wind speeds and other environmental factors, together with an Energy Storage System (ESS) that ensures a stable supply of the generated power to small areas or facilities. The implemented system monitors turbine RPM, power generation, brake operation, and other parameters to maintain an optimal operating environment. The developed small wind power system can be applied to remote road lighting, marine leisure facilities, mobile communication base stations, and similar uses, contributing to the growth of the RE100 industry ecosystem.
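The monitoring system's core decision, responding to excessive wind speed or rotor speed, can be sketched as a simple threshold check. Both limit values below are hypothetical assumptions, not figures from the paper.

```python
# Hypothetical safety limits for a small wind turbine.
SAFE_MAX_WIND_MS = 25.0   # cut-out wind speed (m/s), assumed
SAFE_MAX_RPM = 400.0      # rotor speed limit, assumed

def should_brake(wind_speed_ms, rotor_rpm):
    """Return True if the monitoring system should engage the turbine brake."""
    return wind_speed_ms > SAFE_MAX_WIND_MS or rotor_rpm > SAFE_MAX_RPM

should_brake(12.0, 250.0)   # False: normal operation
should_brake(31.5, 250.0)   # True: excessive wind speed
```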

Creating and Utilization of Virtual Human via Facial Capturing based on Photogrammetry (포토그래메트리 기반 페이셜 캡처를 통한 버추얼 휴먼 제작 및 활용)

  • Ji Yun;Haitao Jiang;Zhou Jiani;Sunghoon Cho;Tae Soo Yun
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.25 no.2
    • /
    • pp.113-118
    • /
    • 2024
  • Recently, advances in artificial intelligence and computer graphics have led to the emergence of various virtual humans across media such as movies, advertisements, broadcasts, games, and social networking services (SNS). In advertising and marketing centered on virtual influencers in particular, virtual humans have already proven to be an important promotional tool for businesses in terms of time and cost efficiency. In Korea, the virtual influencer market is in its nascent stage, and both large corporations and startups are preparing to launch new virtual-influencer services without clearly established practices. Because development processes are rarely disclosed publicly, however, these companies face significant development expenses. To address these challenges, this paper implements a photogrammetry-based facial capture system for creating realistic virtual humans and explores how the resulting models can be applied. The paper also examines a workflow that is optimal in terms of cost and quality, using Unreal Engine-based MetaHuman modeling to simplify the complex CG pipeline from facial capture to final animation. Finally, the paper presents cases where virtual humans have been used in SNS marketing, such as on Instagram, and demonstrates the performance of the proposed workflow by comparing the Unreal Engine-based pipeline with traditional CG work.

An Efficient Dual Queue Strategy for Improving Storage System Response Times (저장시스템의 응답 시간 개선을 위한 효율적인 이중 큐 전략)

  • Hyun-Seob Lee
    • Journal of Internet of Things and Convergence
    • /
    • v.10 no.3
    • /
    • pp.19-24
    • /
    • 2024
  • Recent advances in large-scale data processing technologies such as big data, cloud computing, and artificial intelligence have increased the demand for high-performance storage devices in data centers and enterprise environments. In particular, the response speed of storage devices is a key factor in overall system performance. Solid state drives (SSDs) based on the Non-Volatile Memory Express (NVMe) interface are gaining traction, but new bottlenecks arise when large data I/O requests from multiple hosts must be handled simultaneously. SSDs typically process host requests by queuing them sequentially in an internal queue. When requests with long transfer lengths are processed first, shorter requests wait longer, increasing the average response time. Data transfer timeouts and data partitioning have been proposed to address this problem, but they do not provide a fundamental solution. In this paper, we propose a dual-queue-based scheduling scheme (DQBS) that manages the data transfer order using two queues: one ordered by request arrival and the other by transfer length. The request arrival time and transfer length are then considered together to determine an efficient data transfer order. This enables balanced processing of long and short requests, reducing the overall average response time. Simulation results show that the proposed method outperforms the existing sequential processing method. This study presents a scheduling technique that maximizes data transfer efficiency in high-performance SSD environments and is expected to contribute to the development of next-generation high-performance storage systems.
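One simplified reading of the dual-queue idea can be sketched as follows: requests are kept in arrival order in one queue and sorted by transfer length in the other, and the scheduler serves the shortest pending request unless the oldest one has waited past an aging threshold. The specific rule and threshold below are illustrative; the paper's policy combines arrival order and transfer length more generally.

```python
import heapq
from collections import deque

class DualQueueScheduler:
    """Illustrative dual-queue scheduler: shortest-first with aging."""

    def __init__(self, max_wait=4):
        self.fifo = deque()      # (req_id, arrival_tick), arrival order
        self.by_length = []      # min-heap of (transfer_length, req_id)
        self.clock = 0
        self.max_wait = max_wait

    def submit(self, req_id, length):
        self.fifo.append((req_id, self.clock))
        heapq.heappush(self.by_length, (length, req_id))
        self.clock += 1

    def next_request(self):
        # Serve the oldest request if it has aged past the threshold
        # (prevents starvation); otherwise serve the shortest request.
        oldest_id, arrived = self.fifo[0]
        if self.clock - arrived >= self.max_wait:
            chosen = oldest_id
        else:
            chosen = self.by_length[0][1]
        self.fifo = deque(x for x in self.fifo if x[0] != chosen)
        self.by_length = [x for x in self.by_length if x[1] != chosen]
        heapq.heapify(self.by_length)
        self.clock += 1
        return chosen

sched = DualQueueScheduler(max_wait=4)
sched.submit("A", 100)  # long request arrives first
sched.submit("B", 10)
sched.submit("C", 5)
order = [sched.next_request() for _ in range(3)]  # shortest first, then aging
```

Here the short request C is served first, but the aging threshold then forces the long request A through before B, so neither request class starves.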

The way to make training data for deep learning model to recognize keywords in product catalog image at E-commerce (온라인 쇼핑몰에서 상품 설명 이미지 내의 키워드 인식을 위한 딥러닝 훈련 데이터 자동 생성 방안)

  • Kim, Kitae;Oh, Wonseok;Lim, Geunwon;Cha, Eunwoo;Shin, Minyoung;Kim, Jongwoo
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.1
    • /
    • pp.1-23
    • /
    • 2018
  • Since the start of the 21st century, the growth of the internet and information and communication technologies has produced a wide range of high-quality services. In particular, the e-commerce industry, in which Amazon and eBay stand out, has grown explosively. As e-commerce grows and more products are registered at online shopping malls, customers can easily compare products and find what they want to buy. This growth has also created a problem, however: with so many products registered, customers struggle to find what they really need in the flood of products. A search with a generalized keyword returns too many products, while a search with detailed product attributes returns few, because concrete product attributes are rarely registered as text. Automatically recognizing the text in images offers a solution. Because the bulk of product details is presented in catalogs in image format, most product information cannot be found by the current text-based search systems. If the information in these images can be converted to text, customers can search by product details, making shopping more convenient. Existing OCR (Optical Character Recognition) programs can recognize text in images, but they are difficult to apply to catalogs because they perform poorly under common catalog conditions, such as small text or inconsistent fonts. This research therefore proposes a way to recognize keywords in catalogs using deep learning, the state of the art in image recognition since the 2010s.
The Single Shot MultiBox Detector (SSD), a well-regarded object detection model, can be used with its structure redesigned to account for the differences between text and general objects. However, because deep learning models are trained by supervised learning, the SSD model requires a large amount of labeled training data. Manually labeling the locations and classes of text in catalogs raises several problems: keywords may be missed through human error; the process is too time-consuming given the scale of data needed, or too costly if many workers are hired to shorten the time; and finding images that contain specific keywords to be trained is itself difficult. To solve this data problem, this research developed a program that creates training data automatically. The program composes catalog-like images containing various keywords and pictures and saves the location information of each keyword at the same time. With this program, data can be collected efficiently, and the performance of the SSD model improves: the model achieved a recognition rate of 81.99% when trained on 20,000 generated samples. The research also tested how characteristics of the training data affect text recognition performance. The number of labeled keywords, the addition of overlapping keyword labels, the presence of unlabeled keywords, the spacing between keywords, and differences in background images were all found to influence SSD performance. These findings can guide performance improvements for the SSD model and for other deep-learning-based text recognition systems through higher-quality data.
The SSD model redesigned to recognize text in images and the program developed for creating training data are expected to contribute to improved search systems in e-commerce: suppliers can spend less time registering product keywords, and customers can search for products using the details written in catalogs.
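The data-generation idea can be sketched as composing synthetic catalog layouts: known keywords are placed at random positions and each placement's bounding box is recorded as a label, so no manual annotation is needed. The canvas dimensions, character sizes, and function name below are illustrative assumptions; the actual program renders images (fonts, backgrounds, pictures) for training the SSD detector.

```python
import random

def generate_layout(keywords, canvas_w=600, canvas_h=800,
                    char_w=12, char_h=20, seed=0):
    """Place each keyword at a random position and record its bounding box.
    Returns a list of (class, x_min, y_min, x_max, y_max) labels."""
    rng = random.Random(seed)
    labels = []
    for kw in keywords:
        w = char_w * len(kw)                 # naive fixed-width text size
        x = rng.randint(0, canvas_w - w)
        y = rng.randint(0, canvas_h - char_h)
        labels.append((kw, x, y, x + w, y + char_h))
    return labels

labels = generate_layout(["cotton", "waterproof", "100%"])
```

Because the generator places the keywords itself, every label is exact by construction, which is the property that manual annotation cannot guarantee.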

A Methodology of Multimodal Public Transportation Network Building and Path Searching Using Transportation Card Data (교통카드 기반자료를 활용한 복합대중교통망 구축 및 경로탐색 방안 연구)

  • Cheon, Seung-Hoon;Shin, Seong-Il;Lee, Young-Ihn;Lee, Chang-Ju
    • Journal of Korean Society of Transportation
    • /
    • v.26 no.3
    • /
    • pp.233-243
    • /
    • 2008
  • Recognition of the importance and roles of public transportation is growing because of traffic problems in many cities. Despite this paradigm shift, previous research on public transportation trip assignment has several limitations. In multimodal public transportation networks in particular, many factors must be considered, such as transfers, operational time schedules, waiting time, and travel cost. Since the metropolitan integrated transfer discount system was introduced, transfers between modes have increased, changing users' route choices. Moreover, with the advent of the smart card, a high-technology public transportation card, users' travel information can be recorded automatically, giving researchers a new analytical basis for multimodal public transportation networks. This paper proposes a methodology for building new multimodal public transportation networks from transportation card data using computer programming methods. First, we propose a method for building integrated transportation networks based on bus and urban railroad stations, in order to make full use of the travel information in transportation card data. Second, we show how to connect broken transfer links with computer-based programming techniques, which helps solve the transfer problems of existing transportation networks. Lastly, we present a methodology for finding users' paths across modes in the resulting multimodal network. With the proposed methodology, multimodal public transportation networks can be built easily from existing bus and urban railroad station coordinates, and large-scale networks can be constructed without extra work such as manually connecting transfer links.
In the end, this study can contribute to solving the intermodal path-finding problem, which remains unsolved in existing transportation networks.
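Path finding on such a network can be sketched with a standard shortest-path search in which bus and rail stations are separate nodes joined by transfer links that carry an extra time penalty. The network, travel times, and 5-minute transfer penalty below are illustrative assumptions, not the paper's data.

```python
import heapq

def shortest_path(graph, start, goal):
    """Dijkstra over a dict: node -> list of (neighbor, minutes).
    Returns total minutes, or None if goal is unreachable."""
    dist = {start: 0}
    pq = [(0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            return d
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return None

TRANSFER = 5  # minutes added on a bus<->rail transfer link (assumed)
graph = {
    "bus_A":  [("bus_B", 10)],
    "bus_B":  [("rail_B", TRANSFER)],   # transfer link at station B
    "rail_B": [("rail_C", 8)],
}
shortest_path(graph, "bus_A", "rail_C")  # 23 minutes
```

Modeling each mode's stop as its own node means transfer behavior (and its penalty) is encoded entirely in the link data, which is what station coordinates plus card records can supply.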

Development of a New Cardiac and Torso Phantom for Verifying the Accuracy of Myocardial Perfusion SPECT (심근관류 SPECT 검사의 정확도 검증을 위한 새로운 심장.흉부 팬텀의 개발)

  • Yamamoto, Tomoaki;Kim, Jung-Min;Lee, Ki-Sung;Takayama, Teruhiko;Kitahara, Tadashi
    • Journal of radiological science and technology
    • /
    • v.31 no.4
    • /
    • pp.389-399
    • /
    • 2008
  • Corrections for attenuation, scatter, and resolution are important for improving the accuracy of single photon emission computed tomography (SPECT) image reconstruction. In particular, heart movement due to respiration and beating causes errors in these corrections. Myocardial phantoms are used to verify correction methods, but current phantoms differ from the actual human body in many respects, so results obtained with a phantom are often regarded as removed from clinical data. We developed a new phantom that reproduces the structure of the human thorax more faithfully. The new phantom has a small mediastinum that simulates the anatomy in which the lung adjoins the anterior, lateral, and apical surfaces of the myocardium. The container was made of acrylic, water-equivalent material was used for the mediastinum, and polyurethane foam solidified in epoxy resin was used for the lungs. Five myocardium phantoms of different sizes were developed for quantitative gated SPECT (QGS), with the septa of all five designed to sit at the same position. Liver and gallbladder phantoms were also attached, each adjustable in height. The volumes of the five cardiac ventricles were 150.0, 137.3, 83.1, 42.7, and 38.6 ml, respectively. SPECT was performed on the new phantom, and the differences between images were examined after the correction methods were applied. Three-dimensional tomography of the myocardium was reconstructed well, and subjective evaluations showed the differences among the various corrections. With the new cardiac and torso phantom, the effects of the various corrections could be demonstrated on SPECT images and QGS results.