An Alternative Approach for Setting Equilibrium Prices of Sericultural Products (잠사류의 균형 가격모색)

  • 이질현
    • Journal of Sericultural and Entomological Science
    • /
    • no.12
    • /
    • pp.47-50
    • /
    • 1970
  • There are many factors affecting the development of the sericultural industry in Korea. Setting a rational pricing system for sericultural products is one of the Korean Government's important activities to improve incentives for producers. For many years, prices were determined on the basis of production costs, including a certain level of profit. Some cost items are in conflict between cocoon producers and the silk-reeling industry; government officials have to evaluate these conflicts and estimate the consequences of their decisions, so the final decision has often been a political one. This analysis aims to provide an alternative method of setting the prices of sericultural products. The equilibrium criteria employed are based on the economic principle that the equilibrium condition is determined by the relationship between the marginal productivity of input factors and factor prices. To obtain the related information, Cobb-Douglas functions were fitted using the KIST computer, with data obtained mostly from the Bank of Korea and the Ministry of Agriculture and Forestry. An important assumption is that the opportunity costs of factor inputs are the same in both cocoon production and the silk-reeling industry. The major findings are as follows. 1) The sum of the production elasticity coefficients in the silk-reeling industry is greater than one: the industry operates under increasing returns to scale and is therefore expected to develop as capital-intensive and large-scale. 2) Cocoon-producing farmers operate under decreasing returns to scale and are expected to continue cocoon farming as labor-intensive, small-scale operations, assuming the present level of production technology. As commercial farming develops, resources used in cocoon production will be shifted to more profitable crops. 3) The price elasticity of production is higher in cocoon production than in the silk-reeling industry, so the effect of price changes on domestic production is expected to come from cocoon producers. 4) Based on the analysis of marginal productivities and the opportunity costs of resources, the cocoon price should be increased by 8-16 percent, or the standard silk price by 6-8 percent, to meet the equilibrium price condition. There were possibilities of overvaluation of the opportunity cost of resources in the silk-reeling industry, or of income transferred from farmers to the industry. It is recommended that prices meeting the equilibrium conditions be set at 72 percent for cocoon and 28 percent for silk-reeling costs, based on the standard level of export prices.
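The equilibrium criterion described above, equating a factor's marginal productivity with its price, can be read directly off a fitted Cobb-Douglas function. A minimal sketch with synthetic data (all numbers are illustrative; the paper's KIST computations are not reproduced here):

```python
import numpy as np

# Cobb-Douglas production function: Q = A * L^a * K^b.
# Taking logs gives a linear model, log Q = log A + a*log L + b*log K,
# which can be fitted by ordinary least squares.
rng = np.random.default_rng(0)
L = rng.uniform(10, 100, 200)                                  # labor input
K = rng.uniform(10, 100, 200)                                  # capital input
Q = 2.0 * L**0.4 * K**0.5 * np.exp(rng.normal(0, 0.05, 200))   # output

X = np.column_stack([np.ones_like(L), np.log(L), np.log(K)])
(logA, a, b), *_ = np.linalg.lstsq(X, np.log(Q), rcond=None)

# a + b > 1 would indicate increasing returns to scale (silk reeling);
# a + b < 1 indicates decreasing returns (cocoon farming).
returns_to_scale = a + b

# Marginal productivity of labor at the sample means: MP_L = a * Q / L.
# The equilibrium condition compares MP_L with the factor price (wage).
MP_L = a * Q.mean() / L.mean()
```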


Predicting the Performance of Recommender Systems through Social Network Analysis and Artificial Neural Network (사회연결망분석과 인공신경망을 이용한 추천시스템 성능 예측)

  • Cho, Yoon-Ho;Kim, In-Hwan
    • Journal of Intelligence and Information Systems
    • /
    • v.16 no.4
    • /
    • pp.159-172
    • /
    • 2010
  • The recommender system is one of the possible solutions to assist customers in finding the items they would like to purchase. To date, a variety of recommendation techniques have been developed. One of the most successful is Collaborative Filtering (CF), which has been used in a number of different applications such as recommending Web pages, movies, music, articles, and products. CF identifies customers whose tastes are similar to those of a given customer and recommends items those customers have liked in the past. Numerous CF algorithms have been developed to increase the performance of recommender systems. Broadly, there are memory-based CF algorithms, model-based CF algorithms, and hybrid CF algorithms that combine CF with content-based techniques or other recommender systems. While many researchers have focused their efforts on improving CF performance, the theoretical justification of CF algorithms is lacking; that is, we do not know much about how and why CF works. Furthermore, the relative performances of CF algorithms are known to be domain- and data-dependent. It is very time-consuming and expensive to implement and launch a CF recommender system, and a system unsuited to the given domain provides customers with poor-quality recommendations that easily annoy them. Therefore, predicting the performance of CF algorithms in advance is practically important and needed. In this study, we propose an efficient approach to predict the performance of CF. Social Network Analysis (SNA) and an Artificial Neural Network (ANN) are applied to develop our prediction model. CF can be modeled as a social network in which customers are nodes and purchase relationships between customers are links. SNA facilitates an exploration of the topological properties of the network structure that are implicit in the data used for CF recommendations.
An ANN model is developed through an analysis of network topology, including network density, inclusiveness, clustering coefficient, network centralization, and Krackhardt's efficiency. Network density, expressed as a proportion of the maximum possible number of links, captures the density of the whole network, while the clustering coefficient captures the degree to which the network contains localized pockets of dense connectivity. Inclusiveness refers to the number of nodes included within the various connected parts of the social network. Centralization reflects the extent to which connections are concentrated in a small number of nodes rather than distributed equally among all nodes. Krackhardt's efficiency characterizes how dense the social network is beyond what is barely needed to keep the group even indirectly connected. We use these social network measures as input variables of the ANN model; as the output variable, we use the recommendation accuracy measured by the F1-measure. To evaluate the effectiveness of the ANN model, sales transaction data from H department store, one of the well-known department stores in Korea, was used. A total of 396 experimental samples were gathered, of which 40%, 40%, and 20% were used for training, testing, and validation, respectively. Five-fold cross-validation was also conducted to enhance the reliability of the experiments. The input variable measuring process consists of the following three steps: analysis of customer similarities, construction of a social network, and analysis of social network patterns. We used NetMiner 3 and UCINET 6.0 for SNA, and Clementine 11.1 for ANN modeling. The experiments showed that the ANN model has an estimated accuracy of 92.61% and an RMSE of 0.0049. Thus, our prediction model helps decide whether CF is useful for a given application with certain data characteristics.
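The topological measures used as ANN inputs can be computed from a purchase network. A small sketch using the networkx library on an invented toy graph (the H department store data is not public, so all edges here are illustrative):

```python
import networkx as nx

# Toy customer network: nodes are customers, an edge means the two
# customers' purchase histories overlap (data invented for illustration).
G = nx.Graph([(1, 2), (1, 3), (2, 3), (3, 4), (4, 5)])
G.add_node(6)  # an isolated customer

n = G.number_of_nodes()
density = nx.density(G)                  # links / maximum possible links
clustering = nx.average_clustering(G)    # localized pockets of connectivity
inclusiveness = sum(1 for v in G if G.degree(v) > 0) / n

# Freeman degree centralization: how concentrated links are on a few nodes.
degrees = [d for _, d in G.degree()]
dmax = max(degrees)
centralization = sum(dmax - d for d in degrees) / ((n - 1) * (n - 2))

features = [density, clustering, inclusiveness, centralization]  # ANN inputs
```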

Satellite-Based Cabbage and Radish Yield Prediction Using Deep Learning in Kangwon-do (딥러닝을 활용한 위성영상 기반의 강원도 지역의 배추와 무 수확량 예측)

  • Hyebin Park;Yejin Lee;Seonyoung Park
    • Korean Journal of Remote Sensing
    • /
    • v.39 no.5_3
    • /
    • pp.1031-1042
    • /
    • 2023
  • In this study, a deep learning model was developed to predict the yield of cabbage and radish, two of the five major vegetables under supply and demand management, using Landsat 8 satellite images. To predict the yield of cabbage and radish in Gangwon-do from 2015 to 2020, satellite images from June to September, the growing period of cabbage and radish, were used. The normalized difference vegetation index, enhanced vegetation index, leaf area index, and land surface temperature were employed as input data for the yield model. Crop yields can be effectively predicted using satellite images because satellites collect continuous spatiotemporal data on the global environment. Based on a model developed in a previous study, a model adapted to these input data was proposed in this study. Using time series satellite images, a convolutional neural network, a deep learning model, was used to predict crop yield. Landsat 8 provides images every 16 days, but it is difficult to acquire images, especially in summer, due to weather conditions such as clouds. Accordingly, yield prediction was conducted by splitting the season into two periods: June to July and August to September. Yield prediction was performed using a machine learning approach and reference models, and modeling performance was compared. The model's performance and early predictability were assessed using year-by-year cross-validation and early prediction. The findings of this study could serve as a basic study for predicting the yield of field crops in Korea.
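The vegetation indices used as model inputs are simple band arithmetic on Landsat 8 reflectances. A hedged sketch (band values are invented, and the standard EVI coefficients are assumed, since the abstract does not list them):

```python
import numpy as np

# Vegetation indices from Landsat 8 OLI surface reflectance
# (B2 = blue, B4 = red, B5 = NIR); sample values are invented.
def ndvi(nir, red):
    return (nir - red) / (nir + red)

def evi(nir, red, blue, G=2.5, C1=6.0, C2=7.5, L=1.0):
    # Standard EVI coefficients; assumed, not taken from the paper.
    return G * (nir - red) / (nir + C1 * red - C2 * blue + L)

nir = np.array([0.40, 0.35])
red = np.array([0.10, 0.08])
blue = np.array([0.05, 0.04])

vi = ndvi(nir, red)   # dense vegetation typically gives NDVI around 0.6-0.8
ei = evi(nir, red, blue)
```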

Spatiotemporal Removal of Text in Image Sequences (비디오 영상에서 시공간적 문자영역 제거방법)

  • Lee, Chang-Woo;Kang, Hyun;Jung, Kee-Chul;Kim, Hang-Joon
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.41 no.2
    • /
    • pp.113-130
    • /
    • 2004
  • Most multimedia data contain text to emphasize the meaning of the data, to present additional explanations about the situation, or to translate different languages. However, such text makes it difficult to reuse the images and distorts not only the original images but also their meanings. Accordingly, this paper proposes an approach based on support vector machines (SVMs) and spatiotemporal restoration for automatic text detection and removal in video sequences. Given two consecutive frames, first, text regions in the current frame are detected by an SVM-based texture classifier. Second, two stages are performed to restore the regions occluded by the detected text: temporal restoration in consecutive frames and spatial restoration in the current frame. Utilizing text motion and background difference, an input video sequence is classified, and a temporal restoration scheme appropriate to its class is applied. Such a combination of temporal and spatial restoration shows great potential for the automatic detection and removal of objects of interest in various kinds of video sequences, and is applicable to many applications such as translation of captions and replacement of indirect advertisements in videos.
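The SVM-based texture classification step can be sketched as follows. The two features used here (patch mean and patch standard deviation) are stand-ins chosen for illustration; the abstract does not specify the paper's actual texture features:

```python
import numpy as np
from sklearn.svm import SVC

# Toy stand-in for an SVM texture classifier: text patches are
# high-contrast, background patches are smooth. All data is synthetic.
rng = np.random.default_rng(1)
text_patches = rng.integers(0, 256, (100, 64)).astype(float)  # 8x8 patches
bg_patches = rng.normal(128.0, 5.0, (100, 64))                # smooth regions

patches = np.vstack([text_patches, bg_patches])
features = np.column_stack([patches.mean(axis=1), patches.std(axis=1)])
labels = np.array([1] * 100 + [0] * 100)   # 1 = text, 0 = background

clf = SVC(kernel="rbf", gamma="scale").fit(features, labels)
train_acc = clf.score(features, labels)
```

In the paper's setting the positive detections would then be passed to the temporal/spatial restoration stages; here the classifier is only trained and scored on the synthetic patches.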

Estimation of Global Horizontal Insolation over the Korean Peninsula Based on COMS MI Satellite Images (천리안 기상영상기 영상을 이용한 한반도 지역의 수평면 전일사량 추정)

  • Lee, Jeongho;Choi, Wonseok;Kim, Yongil;Yun, Changyeol;Jo, Dokki;Kang, Yongheack
    • Korean Journal of Remote Sensing
    • /
    • v.29 no.1
    • /
    • pp.151-160
    • /
    • 2013
  • Recently, although many efforts have been made to estimate insolation over the Korean Peninsula based on satellite imagery, most of them have utilized overseas satellite imagery. This paper aims to estimate insolation over the Korean Peninsula based on Korean geostationary satellite imagery. It utilizes level 1 data and the level 2 cloud product of COMS MI, the first meteorological satellite of Korea, and the OMI image of NASA as input data. The Kawamura physical model, known to be suitable for East Asia, is applied. Daily global horizontal insolation was estimated using satellite images taken every fifteen minutes from May 2011 to April 2012, and the estimates were compared with ground-based measurements. The estimated and observed daily insolations are highly correlated, with an R² value of 0.86. The error rates of monthly average insolation were within ±15% at most stations, and the annual average error rate of global horizontal insolation ranged from -5% to 5% except for Seoul. The experimental results show that the COMS MI-based approach has good potential for estimating insolation over the Korean Peninsula.
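The validation statistics quoted above (R² and relative error rates of average insolation) can be computed as follows; the insolation values are invented for illustration:

```python
import numpy as np

# Validation metrics for estimated vs. observed daily insolation.
# The values below are invented; units are kWh/m^2/day.
def r_squared(est, obs):
    ss_res = np.sum((obs - est) ** 2)
    ss_tot = np.sum((obs - obs.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

def error_rate(est, obs):
    # Relative error of mean insolation, in percent (monthly or annual).
    return 100.0 * (est.mean() - obs.mean()) / obs.mean()

obs = np.array([3.1, 4.2, 5.0, 5.8, 5.5, 4.0])
est = np.array([3.0, 4.4, 4.9, 6.0, 5.3, 4.1])

r2 = r_squared(est, obs)
err = error_rate(est, obs)
```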

The Efficiency Determinants to Port Cargo Equipment on Container Terminals to DEA & Tobit Model (DEA와 Tobit 모형에 따른 컨테이너 터미널의 하역장비 효율성 결정요인)

  • Park, Hong-Gyun
    • Journal of Korea Port Economic Association
    • /
    • v.26 no.3
    • /
    • pp.1-17
    • /
    • 2010
  • This paper focuses on measuring the efficiency of container yards at container terminals in Busan and Gwangyang using the Data Envelopment Analysis (DEA) approach. It analyzes the relative efficiency of 11 container terminals based on data for the period between 2006 and 2009 to offer a fresh perspective. The applied framework takes container cranes, transtainer cranes, and yard tractors as inputs and container transshipment volume as the output. Through the analysis, the differences between the impacts of container cranes, transtainer cranes, yard tractors, and top handlers & reach stackers on container yard efficiency are measured, and the associations between the three input factors are analyzed. This paper also employs a heteroscedastic Tobit model to show the impact of explanatory variables on container yard efficiency, taking into consideration strategies for the operation of container cranes, transtainer cranes, and yard tractors in the container yard.
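The DEA efficiency scores can be obtained from the input-oriented CCR envelopment linear program. A sketch using scipy with invented terminal data (the paper's 11-terminal dataset and its three inputs are not reproduced; one input and one output are used here for brevity):

```python
import numpy as np
from scipy.optimize import linprog

# Input-oriented CCR DEA, envelopment form, for DMU o:
#   min theta  s.t.  sum_j lam_j * x_j <= theta * x_o,
#                    sum_j lam_j * y_j >= y_o,  lam >= 0.
# Decision vector is [theta, lam_1, ..., lam_n].
def ccr_efficiency(X, Y, o):
    m, n = X.shape                                   # m inputs, n DMUs
    s = Y.shape[0]                                   # s outputs
    c = np.r_[1.0, np.zeros(n)]                      # minimize theta
    A_in = np.hstack([-X[:, [o]], X])                # sum lam*x - theta*x_o <= 0
    A_out = np.hstack([np.zeros((s, 1)), -Y])        # -sum lam*y <= -y_o
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(m), -Y[:, o]],
                  bounds=[(0, None)] * (n + 1))
    return res.x[0]

# Invented data for three terminals: one input (equipment units),
# one output (transshipment volume).
X = np.array([[2.0, 4.0, 3.0]])
Y = np.array([[1.0, 1.0, 1.5]])
eff = [ccr_efficiency(X, Y, o) for o in range(X.shape[1])]
```

The resulting scores (1.0 for efficient terminals, below 1.0 otherwise) would then serve as the dependent variable of the Tobit regression.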

A Program Transformational Approach for Rule-Based Hangul Automatic Programming (규칙기반 한글 자동 프로그램을 위한 프로그램 변형기법)

  • Hong, Seong-Su;Lee, Sang-Rak;Sim, Jae-Hong
    • The Transactions of the Korea Information Processing Society
    • /
    • v.1 no.1
    • /
    • pp.114-128
    • /
    • 1994
  • It is very difficult for a nonprofessional programmer in Korea to write a program in a very-high-level language such as V, REFINE, GIST, or SETL, because the semantic primitives of these languages are based on predicate calculus, sets, mappings, or restricted natural language, and it takes time to become familiar with them. In this paper, we suggest a method to reduce such difficulties by programming with declarative, procedural, and aggregate constructs, and we design and implement an experimental knowledge-based automatic programming system called HAPS (Hangul Automatic Program System). The input to HAPS is a specification such as a Hangul abstract algorithm and datatype or Hangul procedural constructs, and its output is a C program. The method of operation is based on rule-based program transformation, and the problem area is general. The control structure of HAPS accepts the program specification, transforms it according to the appropriate rules in the rule base, and stores the transformed specification in the global database, repeating this procedure until the target C program is fully constructed.
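The transform-until-done control structure described for HAPS can be sketched as a fixpoint loop over rewrite rules. The rules and the toy source language below are invented for illustration (the real system transforms Hangul algorithm and datatype specifications into C):

```python
import re

# Fixpoint rule application in the spirit of HAPS's control structure:
# apply rewrite rules to the specification until no rule fires.
RULES = [
    (re.compile(r"repeat (\w+) from (\w+) to (\w+):"),
     r"for (\1 = \2; \1 <= \3; \1++) {"),
    (re.compile(r"print (\w+)"), r'printf("%d\\n", \1);'),
    (re.compile(r"\bend\b"), "}"),
]

def transform(spec):
    changed = True
    while changed:                      # repeat until a fixpoint is reached
        changed = False
        for pattern, template in RULES:
            new = pattern.sub(template, spec)
            if new != spec:
                spec, changed = new, True
    return spec

code = transform("repeat i from 0 to 9: print i end")
```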


Development of a surrogate model based on temperature for estimation of evapotranspiration and its use for drought index applicability assessment (증발산 산정을 위한 온도기반의 대체모형 개발 및 가뭄지수 적용성 평가)

  • Kim, Ho-Jun;Kim, Kyoungwook;Kwon, Hyun-Han
    • Journal of Korea Water Resources Association
    • /
    • v.54 no.11
    • /
    • pp.969-983
    • /
    • 2021
  • Evapotranspiration, one of the hydrometeorological components, is considered an important variable for water resource planning and management and is primarily used as input data for hydrological models such as water balance models. The FAO56 PM method has been recommended as the standard approach for estimating reference evapotranspiration with relatively high accuracy. However, the FAO56 PM method is often challenging to apply because it requires many hydrometeorological variables. For this reason, the Hargreaves equation has been widely adopted to estimate reference evapotranspiration. In this study, the parameters of the Hargreaves equation were calibrated with relatively long-term data within a Bayesian framework. Statistical indices (CC, RMSE, IoA) were used to validate the model; the monthly RMSE for the validation period was 7.94-24.91 mm/month. The results confirmed that the accuracy was significantly improved compared to the existing Hargreaves equation. Further, an evaporative demand drought index (EDDI) based on the evaporative demand (E0) was proposed. To confirm its effectiveness, this study evaluated the estimated EDDI for the recent drought events of 2014-2015 and 2018, along with precipitation and the SPI. In the evaluation of the Han River watershed in 2018, the weekly EDDI increased to more than 2, confirming that the EDDI more effectively detects the onset of drought caused by heatwaves. The EDDI can be used as a drought index, particularly for monitoring heatwave-driven flash droughts, alongside the SPI.
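The Hargreaves equation referred to above is, in its standard form, ET0 = 0.0023 · Ra · (Tmean + 17.8) · sqrt(Tmax − Tmin). A minimal sketch with the standard (uncalibrated) constants, which the paper recalibrates in a Bayesian framework; the sample inputs are invented:

```python
import math

# Hargreaves reference evapotranspiration (mm/day):
#   ET0 = C * Ra * (Tmean + OFF) * sqrt(Tmax - Tmin)
# Ra is extraterrestrial radiation in evaporation-equivalent mm/day.
# C = 0.0023 and OFF = 17.8 are the standard constants.
def hargreaves_et0(t_max, t_min, ra, c=0.0023, off=17.8):
    t_mean = (t_max + t_min) / 2.0
    return c * ra * (t_mean + off) * math.sqrt(t_max - t_min)

et0 = hargreaves_et0(t_max=30.0, t_min=20.0, ra=15.0)  # a warm summer day
```

Only temperature extremes and (computable) extraterrestrial radiation are needed, which is why the equation is attractive when the full FAO56 PM variable set is unavailable.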

Twitter Issue Tracking System by Topic Modeling Techniques (토픽 모델링을 이용한 트위터 이슈 트래킹 시스템)

  • Bae, Jung-Hwan;Han, Nam-Gi;Song, Min
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.2
    • /
    • pp.109-122
    • /
    • 2014
  • People nowadays create a tremendous amount of data on Social Network Services (SNS). In particular, the incorporation of SNS into mobile devices has resulted in massive amounts of data generation, thereby greatly influencing society. This is an unmatched phenomenon in history, and now we live in the Age of Big Data. SNS data satisfies the conditions of Big Data: the amount of data (volume), data input and output speeds (velocity), and the variety of data types (variety). If the trend of an issue can be discovered in SNS Big Data, this information can serve as an important new source for the creation of new value, because it covers the whole of society. In this study, a Twitter Issue Tracking System (TITS) is designed and implemented to meet the needs of analyzing SNS Big Data. TITS extracts issues from Twitter texts and visualizes them on the web. The proposed system provides four functions: (1) provide the topic keyword set corresponding to the daily ranking; (2) visualize the daily time series graph of a topic over a month; (3) present the importance of a topic through a treemap based on the score system and frequency; (4) visualize the daily time series graph of keywords retrieved by keyword search. The present study analyzes the Big Data generated by SNS in real time. SNS Big Data analysis requires various natural language processing techniques, including stop-word removal and noun extraction, to process various unrefined forms of unstructured data. In addition, such analysis requires the latest big data technology to rapidly process a large amount of real-time data, such as the Hadoop distributed system or NoSQL, an alternative to relational databases. We built TITS on Hadoop to optimize the processing of big data, because Hadoop is designed to scale from single-node computing to thousands of machines.
Furthermore, we use MongoDB, which is classified as a NoSQL database. MongoDB is an open-source, document-oriented database that provides high performance, high availability, and automatic scaling. Unlike relational databases, MongoDB has no schemas or tables; its most important goals are data accessibility and data processing performance. In the Age of Big Data, visualization is attractive to the Big Data community because it helps analysts examine data easily and clearly. Therefore, TITS uses the d3.js library as its visualization tool. This library is designed for creating Data-Driven Documents that bind the document object model (DOM) to data; the interaction with data is easy and useful for managing real-time data streams with smooth animation. In addition, TITS uses Bootstrap, a set of pre-configured style sheets and JavaScript plug-ins, to build the web system. The TITS Graphical User Interface (GUI) is designed using these libraries and is capable of detecting issues on Twitter in an easy and intuitive manner. The proposed work demonstrates the superiority of our issue detection techniques by matching detected issues with corresponding online news articles. The contributions of the present study are threefold. First, we suggest an alternative approach to real-time big data analysis, which has become an extremely important issue. Second, we apply a topic modeling technique used in various research areas, including Library and Information Science (LIS), and confirm the utility of storytelling and time series analysis. Third, we develop a web-based system and make it available for the real-time discovery of topics. The present study conducted experiments with nearly 150 million tweets in Korea during March 2013.

A Framework on 3D Object-Based Construction Information Management System for Work Productivity Analysis for Reinforced Concrete Work (철근콘크리트 공사의 작업 생산성 분석을 위한 3차원 객체 활용 정보관리 시스템 구축방안)

  • Kim, Jun;Cha, Heesung
    • Korean Journal of Construction Engineering and Management
    • /
    • v.19 no.2
    • /
    • pp.15-24
    • /
    • 2018
  • Despite recognition of the need for productivity information and its importance, the feedback of productivity information is not well established in the construction industry. Effective use of productivity information is required to improve the reliability of construction planning; however, in many cases on-site productivity information is hardly managed effectively, relying instead on the experience and/or intuition of project participants. Based on a literature review and expert interviews, the authors recognized that one possible solution is a systematic approach to dealing with productivity information on construction job sites. The new system should not be burdensome to users and should offer purpose-oriented information management, an easy-to-follow information structure, real-time information feedback, and recognition of productivity-related factors. Based on these preliminary investigations, this study proposed a framework for a novel system that facilitates the effective management of construction productivity information. The system utilizes the SketchUp software, which has good user accessibility, minimizing additional data input and the related workload. The proposed system inputs, processes, and outputs the pertinent information through a four-stage process: preparation, input, processing, and output. The inputted construction information is classified into a Task Breakdown Structure (TBS) and a Material Breakdown Structure (MBS), which are constructed by referring to the standard specification of building construction, and is converted into productivity information. The converted information is also visualized graphically on screen, allowing users to use the productivity information from the job site.
The productivity information management system proposed in this study was pilot-tested for practical applicability and information availability in a real construction project. Very positive results were obtained for the usability and applicability of the system, and benefits are expected based on its validity test. If the proposed system is used in the planning stage of construction and productivity information is continuously accumulated, the expected effectiveness of this study would conceivably be further enhanced.