• Title/Summary/Keyword: 다중특징 (multiple features)

Search Results: 1,192

Improved Estimation of Hourly Surface Ozone Concentrations using Stacking Ensemble-based Spatial Interpolation (스태킹 앙상블 모델을 이용한 시간별 지상 오존 공간내삽 정확도 향상)

  • KIM, Ye-Jin;KANG, Eun-Jin;CHO, Dong-Jin;LEE, Si-Woo;IM, Jung-Ho
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • v.25 no.3
    • /
    • pp.74-99
    • /
    • 2022
  • Surface ozone is produced by photochemical reactions of nitrogen oxides (NOx) and volatile organic compounds (VOCs) emitted from vehicles and industrial sites, and it adversely affects vegetation and the human body. In South Korea, ozone is monitored in real time at stations (i.e., point measurements), but it is difficult to monitor and analyze its continuous spatial distribution. In this study, surface ozone concentrations were interpolated to a spatial resolution of 1.5 km every hour using the stacking ensemble technique, followed by 5-fold cross-validation. The base models for the stacking ensemble were cokriging, multiple linear regression (MLR), random forest (RF), and support vector regression (SVR), while MLR was used as the meta model, taking all base-model results as additional input variables. The results showed that the stacking ensemble model yielded better performance than the individual base models, with an average R of 0.76 and RMSE of 0.0065 ppm over the 2020 study period. The surface ozone distribution generated by the stacking ensemble model had a wider range than those produced by the base models, with a spatial pattern consistent with terrain and urbanization variables. The proposed model is not only capable of producing the hourly spatial distribution of ozone but is also highly applicable to calculating daily maximum 8-hour ozone concentrations.
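As a rough illustration of the stacking setup described in this abstract, the sketch below combines MLR, RF, and SVR base models with an MLR meta model and 5-fold cross-validation using scikit-learn; cokriging is omitted because scikit-learn provides no cokriging estimator, and the input features and ozone values are synthetic placeholders rather than the study's data.

    # Minimal sketch of the stacking ensemble described above (scikit-learn).
    # The cokriging base model is omitted; inputs and targets are synthetic.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor, StackingRegressor
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import KFold, cross_val_score
    from sklearn.svm import SVR

    rng = np.random.default_rng(0)
    X = rng.random((500, 6))   # e.g. terrain, urbanization, meteorological variables per station
    y = 0.03 + 0.02 * X[:, 0] + 0.005 * rng.standard_normal(500)   # hourly ozone (ppm)

    base_models = [
        ("mlr", LinearRegression()),
        ("rf", RandomForestRegressor(n_estimators=200, random_state=0)),
        ("svr", SVR(C=1.0, epsilon=0.001)),
    ]
    # Meta model is MLR; passthrough=True gives it the original inputs in
    # addition to the base-model predictions, approximating the paper's design.
    stack = StackingRegressor(estimators=base_models,
                              final_estimator=LinearRegression(),
                              passthrough=True, cv=5)

    scores = cross_val_score(stack, X, y,
                             cv=KFold(n_splits=5, shuffle=True, random_state=0),
                             scoring="neg_root_mean_squared_error")
    print("5-fold RMSE:", -scores.mean())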

Similar Contents Recommendation Model Based On Contents Meta Data Using Language Model (언어모델을 활용한 콘텐츠 메타 데이터 기반 유사 콘텐츠 추천 모델)

  • Donghwan Kim
    • Journal of Intelligence and Information Systems
    • /
    • v.29 no.1
    • /
    • pp.27-40
    • /
    • 2023
  • With the increase in the spread of smart devices and the impact of COVID-19, the consumption of media contents through smart devices has increased significantly. Along with this trend, the amount of media contents viewed through OTT platforms is growing, which makes contents recommendation on these platforms more important. Previous contents-based recommendation research has mostly utilized metadata that describes the characteristics of the contents, and relatively few studies have utilized the contents' own descriptive text metadata. In this paper, various text data describing the contents, including titles and synopses, were used to recommend similar contents. KLUE-RoBERTa-large, a Korean language model with excellent performance, was used to train the model on the text data. A dataset of over 20,000 contents metadata records, including title, synopsis, composite genre, director, actor, and hashtag information, was used as training data. To feed the various text features into the language model, the features were concatenated using special tokens that mark each feature. The test set was designed to assess the model's similarity classification ability in a relative and objective manner, using a three-contents comparison method and applying multiple inspections when labeling the test set. Genre classification and hashtag classification prediction tasks were used to fine-tune the embeddings for the contents meta text data. As a result, the hashtag classification model showed an accuracy of over 90% on the similarity test set, more than 9% better than the baseline language model. Through hashtag classification training, the language model's ability to classify similar contents improved, which demonstrates the value of using a language model for contents-based filtering.
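The following sketch shows one way to concatenate content metadata fields with feature-marking special tokens before encoding them with KLUE-RoBERTa-large via Hugging Face transformers. The token names ([TITLE], [SYNOPSIS], etc.) and the field layout are illustrative assumptions, not the paper's exact scheme.

    # Sketch: mark each metadata field with a special token, concatenate, and encode.
    from transformers import AutoModel, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("klue/roberta-large")
    model = AutoModel.from_pretrained("klue/roberta-large")

    # Hypothetical feature-marking tokens (not necessarily those used in the paper).
    special_tokens = ["[TITLE]", "[SYNOPSIS]", "[GENRE]", "[DIRECTOR]", "[ACTOR]", "[HASHTAG]"]
    tokenizer.add_special_tokens({"additional_special_tokens": special_tokens})
    model.resize_token_embeddings(len(tokenizer))   # make room for the new tokens

    content = {
        "title": "Some Drama",
        "synopsis": "Two protagonists meet and ...",
        "genre": "romance, comedy",
        "hashtags": "#healing #daily",
    }
    text = (f"[TITLE] {content['title']} [SYNOPSIS] {content['synopsis']} "
            f"[GENRE] {content['genre']} [HASHTAG] {content['hashtags']}")

    inputs = tokenizer(text, truncation=True, max_length=512, return_tensors="pt")
    embedding = model(**inputs).last_hidden_state[:, 0]   # sentence-level embedding
    print(embedding.shape)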

A Study on Land Surface Temperature Changes in Redevelopment Area Using Landsat Satellite Images : Focusing on Godeok-dong and Dunchon-dong in Gangdong-gu, Seoul (Landsat 위성영상을 활용한 재건축 지역의 지표 온도 변화에 관한 연구 : 서울특별시 강동구의 고덕동과 둔촌동을 중심으로)

  • Jihoon HAN;Chul SON
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • v.26 no.2
    • /
    • pp.42-54
    • /
    • 2023
  • The population of Korea is concentrated in metropolitan areas, and low-density residential areas are being transformed into high-density residential areas through redevelopment to meet this demand. However, large-scale redevelopment over a short period has a negative impact on the urban climate, such as generating a heat island effect due to the reduction of urban green areas. In this study, the change in land surface temperature from 2013 to 2022 in the redevelopment areas of Godeok-dong and Dunchon-dong, Gangdong-gu, Seoul, was analyzed using Landsat 8 satellite images. In the Godeok-dong area, the difference in surface temperature was analyzed for the target redevelopment area, forest area, mixed forest and urban area, and low-density residential area; in the Dunchon-dong area, for the target redevelopment area, forest area, and low-density residential area. The difference in surface temperature was analyzed through multiple regression analysis conducted yearly over three different stages of the redevelopment period. The results of the multiple regression analysis show that in both areas, the land surface temperature of the target redevelopment area was higher than that of the forest area and lower than that of the low-density residential area. These results occurred because the low-density residential areas in Godeok-dong and Dunchon-dong had a lower green area ratio and a higher building-to-land ratio than the target redevelopment areas. The results of this study suggest that even when low-density residential areas are transformed into high-density areas, managing green areas and the building-to-land ratio can help lessen the urban heat island effect.
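The sketch below runs a multiple regression of land surface temperature on land-use category and year with statsmodels; it mirrors the form of the analysis described above, but the data frame is synthetic rather than the study's Landsat-derived values.

    # Illustrative regression: LST explained by land-use category and year (synthetic data).
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(1)
    categories = ["redevelopment", "forest", "low_density_residential"]
    df = pd.DataFrame({
        "land_use": rng.choice(categories, size=300),
        "year": rng.integers(2013, 2023, size=300),
    })
    base = {"forest": 24.0, "redevelopment": 27.0, "low_density_residential": 29.0}
    df["lst"] = df["land_use"].map(base) + 0.1 * (df["year"] - 2013) + rng.normal(0, 1, 300)

    # Treatment coding: coefficients are temperature differences relative to the forest baseline.
    model = smf.ols("lst ~ C(land_use, Treatment(reference='forest')) + year", data=df).fit()
    print(model.summary())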

A Study on the Effect of SNS Marketing Characteristics on Formation of Hair Shop Image and Visiting Intention (SNS 마케팅 특성이 헤어샵 이미지 형성과 방문의도에 미치는 영향 연구)

  • Kyu-ri Lee;In-Sil Kwak
    • Journal of Digital Policy
    • /
    • v.3 no.2
    • /
    • pp.1-14
    • /
    • 2024
  • The purpose of this study was to analyze the effect of SNS marketing characteristics on hair shop image formation and visit intention in the hair beauty industry. SNS marketing is a strategy for carrying out marketing activities on modern social media platforms through interaction with customers, information provision, information reliability, and playfulness. The study analyzed how these characteristics of SNS marketing affect the formation of hair shop images and customers' visit intention in the hair beauty industry. A total of 307 customers with experience using hair-related SNS were surveyed. The questionnaire included items on SNS marketing characteristics, hair shop image, and visit intention, and the collected data were statistically analyzed using SPSS 26.0. The research questions were addressed using frequency analysis, factor analysis, reliability analysis, correlation analysis, simple regression analysis, multiple regression analysis, and mediated regression analysis. The results showed that information provision, information reliability, playfulness, and interaction, the characteristics of SNS marketing, have a positive effect on the formation of hair shop images. The hair shop image, in turn, had a positive effect on visit intention, and it was found to play a mediating role between the SNS marketing characteristics and visit intention. These findings provide important insights for improving image formation and customer visit intention in the hair beauty industry through SNS marketing.
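The study's mediation result was obtained with SPSS; the sketch below only illustrates the three-step mediated-regression logic (SNS marketing characteristic → hair shop image → visit intention) with statsmodels on synthetic data, so the variable names and effect sizes are placeholders.

    # Conceptual three-step mediated regression (Baron & Kenny style), synthetic data.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(2)
    n = 307
    sns_marketing = rng.normal(0, 1, n)                      # e.g. playfulness score
    shop_image = 0.6 * sns_marketing + rng.normal(0, 1, n)   # mediator
    visit_intention = 0.5 * shop_image + 0.1 * sns_marketing + rng.normal(0, 1, n)
    df = pd.DataFrame({"x": sns_marketing, "m": shop_image, "y": visit_intention})

    step1 = smf.ols("y ~ x", data=df).fit()       # total effect of the SNS characteristic
    step2 = smf.ols("m ~ x", data=df).fit()       # effect on the mediator (shop image)
    step3 = smf.ols("y ~ x + m", data=df).fit()   # direct effect controlling for the mediator
    print(step1.params["x"], step2.params["x"], step3.params["x"], step3.params["m"])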

Analysis of Research Trends in Deep Learning-Based Video Captioning (딥러닝 기반 비디오 캡셔닝의 연구동향 분석)

  • Lyu Zhi;Eunju Lee;Youngsoo Kim
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.13 no.1
    • /
    • pp.35-49
    • /
    • 2024
  • Video captioning technology, a significant outcome of the integration of computer vision and natural language processing, has emerged as a key research direction in artificial intelligence. It aims at automatic understanding and linguistic expression of video content, enabling computers to transform the visual information in videos into text. This paper analyzes the research trends in deep learning-based video captioning and categorizes them into four main groups: CNN-RNN-based, RNN-RNN-based, multimodal-based, and Transformer-based models, explaining the concept of each type of model and discussing its features, pros, and cons. The paper also lists the datasets and performance evaluation methods commonly used in the video captioning field. The datasets encompass diverse domains and scenarios, offering extensive resources for training and validating video captioning models. The discussion of performance evaluation covers the major evaluation metrics and provides practical references for evaluating model performance from various angles. Finally, future research tasks for video captioning are presented: major challenges that need continued improvement, such as maintaining temporal consistency and accurately describing dynamic scenes, which add complexity in real-world applications, as well as new tasks to be studied, such as temporal relationship modeling and multimodal data integration.
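To make the CNN-RNN category concrete, the sketch below is a minimal PyTorch skeleton of that family: a small CNN encodes each frame, the frame features are pooled into a video feature, and an LSTM decodes a caption. The layer sizes, vocabulary, and pooling choice are placeholders, not any surveyed model's actual architecture.

    # Minimal CNN-RNN video captioning skeleton (PyTorch), for illustration only.
    import torch
    import torch.nn as nn

    class CnnRnnCaptioner(nn.Module):
        def __init__(self, vocab_size=1000, feat_dim=256, hidden_dim=512):
            super().__init__()
            self.cnn = nn.Sequential(                 # toy per-frame encoder
                nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, feat_dim))
            self.embed = nn.Embedding(vocab_size, feat_dim)
            self.lstm = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
            self.out = nn.Linear(hidden_dim, vocab_size)

        def forward(self, frames, captions):
            # frames: (B, T, 3, H, W); captions: (B, L) token ids
            b, t = frames.shape[:2]
            feats = self.cnn(frames.flatten(0, 1)).view(b, t, -1).mean(dim=1)  # video feature
            tokens = self.embed(captions)
            # Condition the decoder by prepending the video feature to the token sequence.
            inputs = torch.cat([feats.unsqueeze(1), tokens], dim=1)
            hidden, _ = self.lstm(inputs)
            return self.out(hidden[:, 1:])            # one logit vector per caption position

    model = CnnRnnCaptioner()
    logits = model(torch.randn(2, 8, 3, 64, 64), torch.randint(0, 1000, (2, 12)))
    print(logits.shape)   # torch.Size([2, 12, 1000])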

Online Host and Its Impact on Live Streaming Commerce Performance: The Moderating Role of Product Type (온라인 호스트가 라이브 스트리밍 커머스 성과에 미치는 영향: 제품 유형의 조절 역할을 중심으로)

  • Xuanting Jin;Minghao Huang;Dongwon Lee
    • Information Systems Review
    • /
    • v.25 no.1
    • /
    • pp.213-231
    • /
    • 2023
  • With the rapid development of live streaming commerce, the online host, as an information source, plays a critical role in live streaming performance. However, the impact of different product types on the relationship between online hosts and live streaming performance has received little study. Based on the elaboration likelihood model (ELM) and information source theory, this study empirically investigates which factors influence the sales of live streaming commerce and how product type moderates the relationship between them. The analysis of 11,422 live streaming commerce records collected over four months, from October 10, 2021 to February 10, 2022, shows that, among the factors related to source credibility and attractiveness, multi-channel network (MCN) affiliation and the number of followers positively affect the sales volume of live streaming commerce, whereas the reputation score negatively affects sales. Moreover, the moderating effect of product type (i.e., the ratio of involvement products) on these relationships is confirmed. The findings enrich the literature on live streaming commerce performance. The limitations and future research directions are also discussed.
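A moderating effect of the kind tested here is commonly estimated with interaction terms; the sketch below shows that pattern with statsmodels, where the involvement-product ratio moderates the host-related factors. The variable names and synthetic data are illustrative, not the study's actual model specification.

    # Sketch: moderation via interaction terms in OLS (synthetic data).
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(3)
    n = 2000
    df = pd.DataFrame({
        "log_followers": rng.normal(10, 1, n),
        "reputation": rng.normal(0, 1, n),
        "mcn": rng.integers(0, 2, n),              # MCN-affiliated host or not
        "involvement_ratio": rng.uniform(0, 1, n), # share of high-involvement products
    })
    df["log_sales"] = (0.4 * df["log_followers"] + 0.3 * df["mcn"] - 0.2 * df["reputation"]
                       + 0.25 * df["log_followers"] * df["involvement_ratio"]
                       + rng.normal(0, 1, n))

    # The ":involvement_ratio" interaction terms capture how product type moderates each effect.
    model = smf.ols("log_sales ~ (log_followers + reputation + mcn) * involvement_ratio",
                    data=df).fit()
    print(model.summary())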

Atomic Layer Deposition Method for Polymeric Optical Waveguide Fabrication (원자층 증착 방법을 이용한 폴리머 광도파로 제작)

  • Eun-Su Lee;Kwon-Wook Chun;Jinung Jin;Ye-Jun Jung;Min-Cheol Oh
    • Korean Journal of Optics and Photonics
    • /
    • v.35 no.4
    • /
    • pp.175-183
    • /
    • 2024
  • Research into optical signal processing using photonic integrated circuits (PICs) has been actively pursued in various fields, including optical communication, optical sensors, and quantum optics. Among the materials used in PIC fabrication, polymers have attracted significant interest due to their unique characteristics. To fabricate polymer-based PICs, establishing an accurate manufacturing process for the cross-sectional structure of an optical waveguide is crucial. For stable device performance and high yield in mass production, a process with high reproducibility and a wide tolerance for variation is necessary. This study proposes an efficient method for fabricating polymer optical-waveguide devices by introducing the atomic layer deposition (ALD) process. Compared to conventional photoresist or metal-film deposition methods, the ALD process enables more precise fabrication of the optical waveguide's core structure. Polyimide optical waveguides with a core size of 1.8 × 1.6 μm² are fabricated using the ALD process, and their propagation losses are measured. Additionally, a multimode interference (MMI) optical-waveguide power-splitter device is fabricated and characterized. Throughout the fabrication, no cracking issues are observed in the etching-mask layer, the vertical profiles of the waveguide patterns are excellent, and the propagation loss is below 1.5 dB/cm. These results confirm that the ALD process is a suitable method for the mass production of high-quality polymer photonic devices.

Comparison of Deep Learning Frameworks: About Theano, Tensorflow, and Cognitive Toolkit (딥러닝 프레임워크의 비교: 티아노, 텐서플로, CNTK를 중심으로)

  • Chung, Yeojin;Ahn, SungMahn;Yang, Jiheon;Lee, Jaejoon
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.2
    • /
    • pp.1-17
    • /
    • 2017
  • A deep learning framework is software designed to help develop deep learning models. Its important functions include automatic differentiation and GPU utilization. The list of popular deep learning frameworks includes Caffe (BVLC) and Theano (University of Montreal). Recently, Microsoft's deep learning framework, Microsoft Cognitive Toolkit, was released under an open-source license, following Google's Tensorflow a year earlier. Early deep learning frameworks were developed mainly for research at universities. Beginning with the release of Tensorflow, however, companies such as Microsoft and Facebook have joined the competition in framework development. Given this trend, Google and other companies are expected to continue investing in deep learning frameworks to take the initiative in the artificial intelligence business. From this point of view, we think it is a good time to compare deep learning frameworks, so we compare three frameworks that can be used as a Python library: Google's Tensorflow, Microsoft's CNTK, and Theano, which is in a sense a predecessor of the other two. The most common and important function of deep learning frameworks is automatic differentiation. Essentially all the mathematical expressions of deep learning models can be represented as computational graphs, which consist of nodes and edges. Partial derivatives on each edge of a computational graph can then be obtained, and with these partial derivatives the software can compute the derivative of any node with respect to any variable by applying the chain rule of calculus. First, the convenience of coding is, in order, CNTK, Tensorflow, and Theano. This criterion is based simply on code length; the learning curve and ease of coding are not the main concern. By this criterion, Theano was the most difficult to implement with, and CNTK and Tensorflow were somewhat easier. With Tensorflow, weight variables and biases must be defined explicitly. The reason CNTK and Tensorflow are easier to implement with is that these frameworks provide more abstraction than Theano. We should mention, however, that low-level coding is not always bad: it gives flexibility. With low-level coding such as in Theano, we can implement and test any new deep learning models or search methods we can think of. Regarding execution speed, there is no meaningful difference among the frameworks. According to the experiment, the execution speeds of Theano and Tensorflow are very similar, although the experiment was limited to a CNN model. In the case of CNTK, the experimental environment was not kept identical: the CNTK code had to be run in a PC environment without a GPU, where code executes as much as 50 times slower than with a GPU. We concluded, however, that the difference in execution speed was within the range of variation caused by the different hardware setups. In this study, we compared three deep learning frameworks: Theano, Tensorflow, and CNTK. According to Wikipedia, there are 12 available deep learning frameworks, and 15 different attributes differentiate them. Some of the important attributes include the interface language (Python, C++, Java, etc.) and the availability of libraries for various deep learning models such as CNN, RNN, and DBN. If a user implements a large-scale deep learning model, support for multiple GPUs or multiple servers is also important. And for those learning deep learning, the availability of sufficient examples and references also matters.
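Since automatic differentiation via computational graphs is the central function compared in this paper, the tiny framework-free example below traces the idea once by hand: forward-evaluate the graph for y = (w*x + b)^2, then back-propagate partial derivatives edge by edge with the chain rule. It illustrates what the frameworks automate; it is not any framework's API.

    # Chain-rule walk over a small computational graph: y = (w*x + b)**2.
    def forward_backward(w, x, b):
        # forward pass: record intermediate node values
        u = w * x + b           # node u
        y = u ** 2              # node y

        # backward pass: multiply partial derivatives along each edge
        dy_du = 2 * u           # dy/du
        dy_dw = dy_du * x       # dy/dw = dy/du * du/dw, with du/dw = x
        dy_db = dy_du * 1.0     # dy/db = dy/du * du/db, with du/db = 1
        return y, dy_dw, dy_db

    print(forward_backward(w=3.0, x=2.0, b=1.0))   # (49.0, 28.0, 14.0)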

Identification of the Environmentally Problematic Input/Environmental Emissions and Selection of the Optimum End-of-pipe Treatment Technologies of the Cement Manufacturing Process (시멘트 제조공정의 환경적 취약 투입물/환경오염물 파악 및 최적종말처리 공정 선정)

  • Lee, Joo-Young;Kim, Yoon-Ha;Lee, Kun-Mo
    • Journal of Korean Society of Environmental Engineers
    • /
    • v.39 no.8
    • /
    • pp.449-455
    • /
    • 2017
  • Process input data, including materials and energy, and process output data, including product, co-products, and environmental emissions, were collected and analyzed for the reference and target processes to evaluate process performance. Environmentally problematic inputs/environmental emissions of the manufacturing processes were identified using these data. The significant process inputs contributing to each of the environmental emissions were identified using multiple regression analysis between the process inputs and environmental emissions. The optimum combination of end-of-pipe technologies for treating the environmental emissions, considering economic aspects, was determined using linear programming. Cement manufacturing processes in Korea and the EU producing the same type of cement were chosen for the case study. The environmentally problematic inputs/environmental emissions of the domestic cement manufacturing processes include coal, dust, and SOx. The multiple regression analysis between process inputs and environmental emissions revealed that CO2 emission was influenced most by coal, followed by the input raw materials and gypsum; SOx emission was influenced by coal, and dust emission by gypsum followed by raw material. Optimization of the end-of-pipe technologies treating dust showed that a combination of 100% of the electrostatic precipitator and 2.4% of the fiber filter gives the lowest cost. For SOx, a combination of 100% of the dry addition process and 25.88% of the wet scrubber gives the lowest cost. A salient feature of this research is that it proposes a method for identifying environmentally problematic inputs/environmental emissions of manufacturing processes, in particular the cement manufacturing process; another is that it shows a method for selecting the optimum combination of end-of-pipe treatment technologies.
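The end-of-pipe optimization described above can be framed as a small linear program; the sketch below solves one such instance with scipy.optimize.linprog, choosing what fraction of the dust stream each technology treats so that a required overall removal is met at minimum cost. The cost figures and removal efficiencies are hypothetical, not the paper's data.

    # Hedged LP sketch: cheapest mix of dust-treatment technologies meeting a removal target.
    from scipy.optimize import linprog

    # decision variables: fraction of the stream treated by
    # [electrostatic precipitator, fiber filter]
    cost = [1.0, 2.5]            # hypothetical relative cost per unit of stream treated
    removal = [0.95, 0.99]       # hypothetical removal efficiencies
    required_removal = 0.973     # overall removal that must be achieved

    # minimize cost @ x  subject to  removal @ x >= required_removal, 0 <= x_i <= 1
    res = linprog(c=cost,
                  A_ub=[[-removal[0], -removal[1]]],   # -removal @ x <= -required_removal
                  b_ub=[-required_removal],
                  bounds=[(0, 1), (0, 1)],
                  method="highs")
    print(res.x)   # fraction of the stream routed to each technology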

Evaluation of Agro-Climatic Index Using Multi-Model Ensemble Downscaled Climate Prediction of CMIP5 (상세화된 CMIP5 기후변화전망의 다중모델앙상블 접근에 의한 농업기후지수 평가)

  • Chung, Uran;Cho, Jaepil;Lee, Eun-Jeong
    • Korean Journal of Agricultural and Forest Meteorology
    • /
    • v.17 no.2
    • /
    • pp.108-125
    • /
    • 2015
  • The agro-climatic index is one way to assess the climate resources of a particular agricultural area with respect to agricultural production; it can be a key indicator of agricultural productivity, providing the basic information needed to apply various farming techniques and to estimate crop growth and yield from climate resources such as air temperature, solar radiation, and precipitation. However, the agro-climatic index is not absolute and can change. Recently, many studies that consider the uncertainty of future climate change have been conducted using a multi-model ensemble (MME) approach, developing and improving dynamic and statistical downscaling of Global Climate Model (GCM) output. In this study, agro-climatic indices for the Korean Peninsula, namely growing degree days with a 5°C base, plant period based on 5°C, crop period based on 10°C, and frost-free days, were calculated to assess their spatio-temporal variations and uncertainties under climate change; the downscaled historical (1976-2005) and near-future (2011-2040) RCP climate scenarios of AR5 were applied to the calculation. The results showed that the four agro-climatic indices calculated from nine individual GCMs, as well as from the MME, agreed with the indices calculated from observed data. It was confirmed that the MME, as well as each individual GCM, reproduced the past climate well over the four major river basins of South Korea (Han, Nakdong, Geum, and Seumjin-Yeoungsan). However, spatial downscaling still needs further improvement, since the agro-climatic indices of some individual GCMs showed spatial distributions over the four river basins that differed from the observed indices. The four agro-climatic indices of the Korean Peninsula are projected to increase in the nine individual GCMs and the MME under the future climate scenarios. The differences and uncertainties of the agro-climatic indices were not reduced simply by adding more models to the ensemble; further research is still required, although the differences began to improve when three or four individual GCMs were combined in this study. The agro-climatic indices derived and evaluated in this study will serve as a baseline for assessing abnormal agro-climatic indices and agro-productivity indices in future research.
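As a concrete example of one index above, the sketch below computes growing degree days with a 5°C base using the standard daily formula, GDD = Σ max(0, (Tmax + Tmin)/2 − 5); the temperature series is synthetic rather than the downscaled CMIP5 output.

    # Growing degree days (base 5 °C) from daily max/min temperatures (synthetic series).
    import numpy as np

    def growing_degree_days(tmax, tmin, base=5.0):
        """Accumulate max(0, mean daily temperature - base) over the period."""
        tmean = (np.asarray(tmax) + np.asarray(tmin)) / 2.0
        return float(np.sum(np.maximum(tmean - base, 0.0)))

    rng = np.random.default_rng(4)
    tmax = 15 + 10 * np.sin(np.linspace(0, np.pi, 365)) + rng.normal(0, 2, 365)
    tmin = tmax - 8
    print(round(growing_degree_days(tmax, tmin), 1), "degree-days above 5 °C")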