• Title/Summary/Keyword: modeling system


RPC Correction of KOMPSAT-3A Satellite Image through Automatic Matching Point Extraction Using Unmanned Aerial Vehicle Imagery (무인항공기 영상 활용 자동 정합점 추출을 통한 KOMPSAT-3A 위성영상의 RPC 보정)

  • Park, Jueon;Kim, Taeheon;Lee, Changhui;Han, Youkyung
    • Korean Journal of Remote Sensing
    • /
    • v.37 no.5_1
    • /
    • pp.1135-1147
    • /
    • 2021
  • In order to geometrically correct high-resolution satellite imagery, a sensor modeling process that restores the geometric relationship between the satellite sensor and the ground surface at the image acquisition time is required. High-resolution satellites generally provide RPC (Rational Polynomial Coefficient) information, but the vendor-provided RPC includes geometric distortion caused by the position and orientation of the satellite sensor. GCPs (Ground Control Points) are generally used to correct RPC errors. The most common way to acquire GCPs is a field survey that obtains accurate ground coordinates. However, it is often difficult to locate GCPs in a satellite image due to image quality, land cover change, relief displacement, etc. By using image maps acquired from various sensors as reference data, GCP collection can be automated through image matching algorithms. In this study, the RPC of a KOMPSAT-3A satellite image was corrected using matching points extracted from UAV (Unmanned Aerial Vehicle) imagery. We propose a pre-processing method for extracting matching points between UAV imagery and a KOMPSAT-3A satellite image. To this end, we compared the characteristics of matching points extracted by independently applying SURF (Speeded-Up Robust Features) and phase correlation, which are representative feature-based and area-based matching methods, respectively. The RPC adjustment parameters were calculated using the matching points extracted by each algorithm. To verify the performance and usability of the proposed method, it was compared with the GCP-based RPC correction result. The GCP-based method improved the correction accuracy by 2.14 pixels for the sample and 5.43 pixels for the line compared to the vendor-provided RPC.
In the proposed method using SURF and phase correlation, the accuracy in the sample direction improved by 0.83 and 1.49 pixels, and that in the line direction improved by 4.81 and 5.19 pixels, respectively, compared to the vendor-provided RPC. The experimental results show that the proposed method using UAV imagery is a viable alternative to the GCP-based method for RPC correction.
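As a sketch of the correction step described above: once matching points are extracted, the vendor RPC can be bias-corrected in image space. The function below assumes, hypothetically, the simplest shift-only adjustment, estimating constant sample/line offsets from the residuals between RPC-projected and matched coordinates; the paper's actual adjustment parameters may follow a richer (e.g., affine) model.

```python
def estimate_rpc_bias(matches):
    """Estimate a shift-only RPC bias from matching points.

    matches: list of pairs ((proj_sample, proj_line), (obs_sample, obs_line)),
    where 'proj' comes from the vendor RPC and 'obs' from image matching
    (e.g., SURF or phase correlation against UAV reference imagery).
    Returns (d_sample, d_line) offsets to add to RPC projections.
    """
    n = len(matches)
    d_sample = sum(obs[0] - proj[0] for proj, obs in matches) / n
    d_line = sum(obs[1] - proj[1] for proj, obs in matches) / n
    return d_sample, d_line


def apply_bias(proj, bias):
    """Apply the estimated offsets to an RPC-projected image coordinate."""
    return proj[0] + bias[0], proj[1] + bias[1]
```

In practice the residuals would come from many matching points, and the offsets would be solved by least squares rather than a plain mean when higher-order terms are included.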

Derivation of Green Infrastructure Planning Factors for Reducing Particulate Matter - Using Text Mining - (미세먼지 저감을 위한 그린인프라 계획요소 도출 - 텍스트 마이닝을 활용하여 -)

  • Seok, Youngsun;Song, Kihwan;Han, Hyojoo;Lee, Junga
    • Journal of the Korean Institute of Landscape Architecture
    • /
    • v.49 no.5
    • /
    • pp.79-96
    • /
    • 2021
  • Green infrastructure planning represents landscape planning measures to reduce particulate matter. This study aimed to derive factors that may be used in planning green infrastructure for particulate matter reduction using text mining techniques. A range of analyses was carried out by focusing on keywords such as 'particulate matter reduction plan' and 'green infrastructure planning elements'. The analyses included Term Frequency-Inverse Document Frequency (TF-IDF) analysis, centrality analysis, related word analysis, and topic modeling analysis, carried out via text mining on collected previous research, policy reports, and laws. First, the TF-IDF analysis results were used to classify the major keywords relating to particulate matter and green infrastructure into three groups: (1) environmental issues (e.g., particulate matter, environment, carbon, and atmosphere), (2) target spaces (e.g., urban, park, and local green space), and (3) application methods (e.g., analysis, planning, evaluation, development, ecological aspect, policy management, technology, and resilience). Second, the centrality analysis results were similar to those of TF-IDF; the central connectors to the major keywords were confirmed to be 'Green New Deal' and 'vacant land'. The related word analysis verified that planning green infrastructure for particulate matter reduction requires planning forests and ventilation corridors. Additionally, moisture must be considered for microclimate control. It was also confirmed that utilizing vacant space, establishing mixed forests, introducing particulate matter reduction technology, and understanding the system may be important for effective green infrastructure planning. Topic modeling was used to classify the planning elements of green infrastructure into ecological, technological, and social functions.
The planning elements of the ecological function were classified into morphological aspects (e.g., urban forest, green space, wall greening) and functional aspects (e.g., climate control, carbon storage and absorption, provision of habitats, and biodiversity for wildlife). The planning elements of the technical function were classified into themes including the disaster prevention functions of green infrastructure, buffer effects, stormwater management, water purification, and energy reduction. The planning elements of the social function were classified into themes such as community function, improving the health of users, and scenery improvement. These results suggest that green infrastructure planning for particulate matter reduction requires approaches related to key concepts such as resilience and sustainability. In particular, green infrastructure planning elements need to be applied in order to reduce exposure to particulate matter.
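The TF-IDF weighting used in the keyword analysis above can be sketched in a few lines. This is a generic illustration of the standard tf × log(N/df) form, not the paper's exact implementation or corpus:

```python
import math
from collections import Counter

def tf_idf(docs):
    """docs: list of tokenized documents (lists of terms).
    Returns one {term: weight} dict per document, with
    weight = term frequency x log(N / document frequency)."""
    n = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc))  # count each term once per document
    weights = []
    for doc in docs:
        tf = Counter(doc)
        total = len(doc)
        weights.append({t: (c / total) * math.log(n / df[t])
                        for t, c in tf.items()})
    return weights
```

A term appearing in every document gets zero weight under this form, which is exactly why TF-IDF surfaces distinctive keywords such as 'Green New Deal' rather than ubiquitous ones.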

Prediction of a hit drama with a pattern analysis on early viewing ratings (초기 시청시간 패턴 분석을 통한 대흥행 드라마 예측)

  • Nam, Kihwan;Seong, Nohyoon
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.4
    • /
    • pp.33-49
    • /
    • 2018
  • The impact of a TV drama's success on ratings and channel promotion effectiveness is very high, and its cultural and business impact has been demonstrated through the Korean Wave. Therefore, early prediction of a blockbuster TV drama is very important from the strategic perspective of the media industry. Previous studies have tried to predict the audience ratings and success of dramas with various methods, but most have made simple predictions using intuitive factors such as the main actor and time slot, which limits their predictive power. In this study, we propose a model for predicting the popularity of a drama by analyzing customers' viewing patterns based on various theories. This is not only a theoretical contribution but also a practical one, as the model can be used by actual broadcasting companies. We collected data on 280 TV mini-series dramas broadcast over terrestrial channels for 10 years, from 2003 to 2012. From these data, we selected the 45 most highly ranked and the 45 least highly ranked TV dramas and analyzed their viewing patterns in 11 steps. The assumptions and conditions for modeling are based on existing studies, on the opinions of actual broadcasters, and on data mining techniques. We then developed a prediction model by measuring the viewing-time distance (difference) using Euclidean and correlation methods; the sum of these distances is termed similarity in our study. Through this similarity measure, we predicted the success of dramas from the viewers' initial viewing-time pattern distribution over episodes 1-5. To confirm that the model is not sensitive to the choice of measure, various distance measures were applied and the model was checked for robustness. Once the model was established, we further improved its predictive power using a grid search.
Furthermore, we classified viewers who had watched more than 70% of a newly broadcast drama's total airtime as "passionate viewers". We then compared the percentage of passionate viewers between the most highly ranked and the least highly ranked dramas, so that the possibility of a blockbuster TV mini-series can be determined. We find that the initial viewing-time pattern is the key factor for predicting blockbuster dramas: our model correctly classified blockbuster dramas with 75.47% accuracy from the initial viewing-time pattern analysis. This paper shows a high prediction rate while suggesting an audience measurement method different from existing ones. Currently, broadcasters rely heavily on a few famous actors, the so-called star system, and face more severe competition than ever due to rising production costs, a long-term recession, aggressive investment by comprehensive programming channels, and large corporations; everyone is in a financially difficult situation. The basic revenue model of these broadcasters is advertising, and advertising is priced on audience ratings as a basic index. The drama market involves high uncertainty, as demand is difficult to forecast due to the nature of the commodity, while dramas contribute substantially to the financial success of a broadcaster's content. Therefore, to minimize the risk of failure, analyzing the distribution of initial viewing time can practically help the related companies establish response strategies (scheduling, marketing, story changes, etc.). We also found that audience behavior is crucial to the success of a program, and we define a viewing-loyalty measure of how enthusiastically a program is watched.
By calculating the loyalty of these passionate viewers, we can successfully predict the success of a program. This way of calculating loyalty can also be applied to other platforms, and to marketing programs such as highlights, script previews, making-of content, characters, games, and other marketing projects.
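The distance-based classification described above can be illustrated as follows. This is a hedged sketch of the idea (Euclidean and correlation distances over early viewing-time vectors, summed against known hits and flops), with hypothetical function names and toy patterns rather than the paper's data:

```python
import math

def euclidean(a, b):
    """Euclidean distance between two viewing-time vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def correlation_distance(a, b):
    """1 - Pearson correlation, so smaller means more similar."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return 1 - cov / (sa * sb)

def predict_hit(pattern, hit_patterns, flop_patterns, dist=euclidean):
    """Classify a new drama's early viewing-time pattern by comparing
    the summed distance (the 'similarity' above) to known hits
    versus known flops."""
    d_hit = sum(dist(pattern, p) for p in hit_patterns)
    d_flop = sum(dist(pattern, p) for p in flop_patterns)
    return "hit" if d_hit < d_flop else "flop"
```

Swapping `dist=correlation_distance` into `predict_hit` is the kind of measure substitution the robustness check above performs.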

The Study of Radiation Exposed dose According to 131I Radiation Isotope Therapy (131I 방사성 동위원소 치료에 따른 피폭 선량 연구)

  • Chang, Boseok;Yu, Seung-Man
    • Journal of the Korean Society of Radiology
    • /
    • v.13 no.4
    • /
    • pp.653-659
    • /
    • 2019
  • The purpose of this study is to measure the air dose rate from discharged patients who were administered high-dose ¹³¹I treatment, and to predict the radiation exposure of members of the public. The dosimetric evaluation was performed according to distance and angle using three copper rings in 30 patients who were treated with over 200 mCi of high-dose iodine therapy. Two observers took measurements with a GM survey meter at 8 azimuth angles and three distances (50, 100, and 150 cm) for precise radiation dose measurement. We set up three predictive simulations to calculate the exposure dose based on these data. The highest radiation dose rate was observed at a measuring angle of 0° at a height of 1 m. The average dose rate at each distance was taken as the average over the azimuth angles. The maximum external radiation dose rates were 214 ± 16.5, 59 ± 9.1, and 38 ± 5.8 µSv/h at 50, 100, and 150 cm, respectively. If a high-dose iodine treatment patient travels for 5 hours on public transportation, an unspecified person in an adjacent seat at 50 cm is exposed to a radiation dose of 1.14 mSv. A person who provides care for 4 days at a distance of 1 m from a patient wearing a urine bag receives a maximum radiation dose of 6.5 mSv. The maximum dose a guardian can receive is 1.08 mSv at a distance of 1.5 m over 7 days. The annual radiation dose limit is exceeded in a short time when our radiation dose prediction model is applied to members of the public around patients undergoing iodine therapy. This study can help suggest reasonable guidelines for protecting the general public after discharge of patients administered high-dose iodine.
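The public-exposure figures above follow from a simple dose = rate × time accumulation. A minimal sketch, assuming a constant dose rate (a conservative simplification, since it ignores the roughly 8-day half-life of ¹³¹I and biological clearance):

```python
def cumulative_dose_mSv(dose_rate_uSv_per_h, hours):
    """Accumulated external dose at a fixed distance, assuming the
    measured dose rate stays constant over the exposure period.
    Input rate in microsieverts/hour; result in millisieverts."""
    return dose_rate_uSv_per_h * hours / 1000.0
```

For example, 5 hours at the ~214-230 µSv/h measured at 50 cm gives roughly 1.1 mSv, consistent with the 1.14 mSv transit scenario reported above.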

A Study on the Effect of Person-Job Fit and Organizational Justice Recognition on the Job Competency of Small and Medium Enterprises Workers (중소기업 종사자들의 직무 적합성과 조직 공정성 인식이 직무역량에 미치는 영향에 관한 연구)

  • Jung, Hwa;Ha, Kyu Soo
    • Asia-Pacific Journal of Business Venturing and Entrepreneurship
    • /
    • v.14 no.3
    • /
    • pp.73-84
    • /
    • 2019
  • Despite decades of work experience, workers at small and medium-sized enterprises (SMEs) have yet to make inroads, after retirement, into the self-employed sector that would utilize the job competency they accumulated at work. Unlike large companies, SMEs do not have a proper system for improving the long-term job competency of their employees, as they focus on immediate performance. It is therefore necessary to analyze the independent variables affecting the job competency of SME employees to derive practical implications for SME personnel management. Preceding studies analyze the independent variables affecting job competency in specialized fields such as health care, public service, and IT, but analysis of general SME workers is insufficient. Based on prior studies, this study set person-job fit and organizational justice as the independent variables affecting the job competency (the dependent variable) of general SME workers. The sub-variables were knowledge, skills, experience, and desire for person-job fit, and distributive, procedural, and deployment justice for organizational justice. A survey of SME employees in Korea was conducted from February to March 2019 using 5-point Likert scales; responses were retrieved from 323 people and analyzed empirically using the SPSS and AMOS statistical packages. Among the four sub-variables of person-job fit, knowledge, skills, and experience were shown to have a significant impact on job competency, while desire was not. Among the three sub-variables of organizational justice, deployment justice has a significant impact on job competency, but distributive and procedural justice do not.
Personnel managers of SMEs can improve the job competency of their employees by appropriately utilizing variables such as knowledge, skills, experience, and deployment at each stage, including recruitment, deployment, and promotion. Future job competency modeling studies are needed to overcome the limitation of this study, which did not objectively measure job competency.

Development of A Material Flow Model for Predicting Nano-TiO2 Particles Removal Efficiency in a WWTP (하수처리장 내 나노 TiO2 입자 제거효율 예측을 위한 물질흐름모델 개발)

  • Ban, Min Jeong;Lee, Dong Hoon;Shin, Sangwook;Lee, Byung-Tae;Hwang, Yu Sik;Kim, Keugtae;Kang, Joo-Hyon
    • Journal of Wetlands Research
    • /
    • v.24 no.4
    • /
    • pp.345-353
    • /
    • 2022
  • A wastewater treatment plant (WWTP) is a major gateway for engineered nano-particles (ENPs) entering water bodies. However, existing studies have reported that many WWTPs exceed the No Observed Effect Concentration (NOEC) for ENPs in the effluent, and thus they need to be designed or operated to control ENPs more effectively. Understanding and predicting ENP behavior in the unit processes and the whole process of a WWTP is the key first step in developing strategies for controlling ENPs in a WWTP. This study aims to provide a modeling tool for predicting the behavior and removal efficiency of ENPs in a WWTP as a function of process characteristics and major operating conditions. The developed model considers four unit processes for water treatment (primary clarifier, bioreactor, secondary clarifier, and tertiary treatment unit). Additionally, the model simulates the sludge treatment system as a single process that integrates multiple unit processes, including thickeners, digesters, and dewatering units. The simulated ENP was nano-sized TiO2 (nano-TiO2), assuming that its behavior in a WWTP is dominated by attachment to suspended solids (SS), while dissolution and transformation are insignificant. The attachment of nano-TiO2 to SS was incorporated into the model equations using the apparent solid-liquid partition coefficient (Kd) under an equilibrium assumption between the solid and liquid phases, and a steady-state condition for nano-TiO2 was assumed. Furthermore, an MS Excel-based user interface was developed to provide a user-friendly environment for nano-TiO2 removal efficiency calculations. Using the developed model, a preliminary simulation was conducted to examine how the solids retention time (SRT), a major operating variable, affects the removal efficiency of nano-TiO2 particles in a WWTP.
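The core of the equilibrium partitioning assumption described above can be sketched as follows. The Kd, SS, and solids-removal values are hypothetical illustrations; the paper's full mass-balance model across all unit processes is not reproduced here:

```python
def sorbed_fraction(kd, ss):
    """Fraction of nano-TiO2 attached to suspended solids under
    equilibrium Kd partitioning: f = Kd*SS / (1 + Kd*SS).
    kd: apparent solid-liquid partition coefficient [L/mg]
    ss: suspended solids concentration [mg/L]"""
    x = kd * ss
    return x / (1.0 + x)

def settling_removal(kd, ss, solids_removal):
    """Nano-TiO2 removal in a settling unit, assuming only the
    sorbed fraction settles out with the removed solids."""
    return sorbed_fraction(kd, ss) * solids_removal
```

Under this assumption, anything that raises SS contact or solids capture (e.g., a longer SRT raising mixed-liquor solids) raises nano-TiO2 removal, which is the kind of sensitivity the SRT simulation above explores.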

Study on the Multilevel Effects of Integrated Crisis Intervention Model for the Prevention of Elderly Suicide: Focusing on Suicidal Ideation and Depression (노인자살예방을 위한 통합적 위기개입모델 다층효과 연구: 자살생각·우울을 중심으로)

  • Kim, Eun Joo;Yook, Sung Pil
    • 한국노년학 (Journal of the Korean Gerontological Society)
    • /
    • v.37 no.1
    • /
    • pp.173-200
    • /
    • 2017
  • This study is designed to verify the actual effect on elderly suicide prevention of the integrated crisis intervention service that has been widely provided across local communities in Gyeonggi Province, focusing on the integrated crisis intervention model developed for the prevention of elderly suicide. The model and its manual were developed for local communities by integrating crisis intervention theory, which contains a local community's integrated system approach, with stress vulnerability theory. For the analysis of the effect, geriatric depression and suicidal ideation scales were adopted, and the data were collected as follows: from 258 people in the first preliminary test; from 184 people in a second test after the integrated crisis intervention service had been provided for 6 months; and from 124 people in a follow-up test 2-3 years later using the backward tracing method. For the analysis, the researchers used the R statistical computing environment to conduct test equating and vertical scaling between measurement points, then conducted descriptive statistics and univariate analysis of variance, and performed multilevel modeling using Bayesian estimation. As a result, the integrated crisis intervention model developed for elderly suicide prevention was found to have a statistically significant effect on reducing elderly depression and suicidal ideation in the follow-up measurement after the implementation of the crisis intervention, relative to the first preliminary scores. The model was effective to the extent of 0.56 for the reduction of depression and 0.39 for the reduction of suicidal ideation.
However, the backward tracing test conducted 2-3 years after the first crisis intervention found that the improved values returned to their original state, showing that the effect of the intervention is not maintained over the long term. Multilevel analysis was conducted on factors such as service type (professional counseling, medication, peer counseling), client characteristics (sex, age), counselor characteristics (age, career, major), and the interaction between counselor characteristics and intervention affecting depression and suicidal ideation. Only medication significantly reduced suicidal ideation, and when the counselor majored in counseling, suicidal ideation was further reduced significantly through interaction with professional counseling. Furthermore, as the characteristics of suicide prevention experts were found to moderate the intervention effect when applying the integrated crisis intervention model, primary consideration should be given to the counseling ability of these experts.

A Study on the Determinants of Blockchain-oriented Supply Chain Management (SCM) Services (블록체인 기반 공급사슬관리 서비스 활용의 결정요인 연구)

  • Kwon, Youngsig;Ahn, Hyunchul
    • Knowledge Management Research
    • /
    • v.22 no.2
    • /
    • pp.119-144
    • /
    • 2021
  • Recently, as competition in the market has evolved from competition among companies to competition among their supply chains, companies are striving to enhance their supply chain management (SCM). In particular, as blockchain technology with various technical advantages is combined with SCM, many domestic manufacturing and distribution companies are now considering the adoption of blockchain-oriented SCM (BOSCM) services. Thus, examining the factors affecting the use of blockchain-oriented SCM is an important academic topic. However, most prior studies on blockchain and SCM have designed their research models based on the Technology Acceptance Model (TAM) or the Unified Theory of Acceptance and Use of Technology (UTAUT), which are suitable for explaining individuals' acceptance of information technology rather than companies'. Against this background, this study presents a novel blockchain-oriented SCM acceptance model based on the Technology-Organization-Environment (TOE) framework, taking the company as the unit of analysis. In addition, the Value-based Adoption Model (VAM) is applied to the research model to comprehensively consider the benefits and sacrifices caused by a new information system. To validate the proposed research model, survey responses were collected from 126 companies, and the model was verified by applying PLS-SEM (Partial Least Squares Structural Equation Modeling) to the data of 122 of them. As a result, 'business innovation', 'tracking and tracing', 'security enhancement', and 'cost' from the technology viewpoint were found to significantly affect 'perceived value', which in turn affects 'intention to use blockchain-oriented SCM'. Also, 'organization readiness' was found to affect 'intention to use' with statistical significance. However, 'complexity' and 'regulation environment' were found to have little impact on 'perceived value' and 'intention to use', respectively.
The findings of this study are expected to contribute to preparing practical and policy alternatives for facilitating the adoption of blockchain-oriented SCM in Korean firms.

Analysis of the Effect of Objective Functions on Hydrologic Model Calibration and Simulation (목적함수에 따른 매개변수 추정 및 수문모형 정확도 비교·분석)

  • Lee, Gi Ha;Yeon, Min Ho;Kim, Young Hun;Jung, Sung Ho
    • Journal of Korean Society of Disaster and Security
    • /
    • v.15 no.1
    • /
    • pp.1-12
    • /
    • 2022
  • An automatic optimization technique is used to estimate the optimal parameters of a hydrologic model, and different hydrologic responses can result depending on the objective function. In this study, the parameters of an event-based rainfall-runoff model were estimated using various objective functions, the reproducibility of the hydrograph under each objective function was evaluated, and appropriate objective functions were proposed. As the rainfall-runoff model, the storage function model (SFM), a lumped hydrologic model used for runoff simulation in the current Korean flood forecasting system, was selected. To evaluate hydrograph reproducibility for each objective function, 9 rainfall events were selected for the Cheoncheon basin, the upstream basin of Yongdam Dam, and 7 widely used objective functions were selected for parameter estimation of the SFM for each rainfall event. Then, the reproducibility of the hydrographs simulated using the optimal parameter sets based on the different objective functions was analyzed. As a result, RMSE, NSE, and RSR, whose objective functions include an error-squared term, showed the highest accuracy for all rainfall events except Event 7. In addition, PBIAS and VE, which include an error term relative to the observed flow, also showed relatively stable hydrograph reproducibility. However, MIA, which adjusts parameters sensitive to high flow and low flow simultaneously, showed very low hydrograph reproducibility.
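For reference, the error-squared and bias-type objective functions compared above have standard definitions; below is a sketch using common textbook forms, which may differ in detail from the exact variants used in the study:

```python
import math

def rmse(obs, sim):
    """Root mean square error (error-squared family)."""
    return math.sqrt(sum((o - s) ** 2 for o, s in zip(obs, sim)) / len(obs))

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit."""
    mean_o = sum(obs) / len(obs)
    num = sum((o - s) ** 2 for o, s in zip(obs, sim))
    den = sum((o - mean_o) ** 2 for o in obs)
    return 1.0 - num / den

def rsr(obs, sim):
    """RMSE normalized by the standard deviation of observations."""
    mean_o = sum(obs) / len(obs)
    sd = math.sqrt(sum((o - mean_o) ** 2 for o in obs) / len(obs))
    return rmse(obs, sim) / sd

def pbias(obs, sim):
    """Percent bias relative to total observed flow
    (sign conventions vary across the literature)."""
    return 100.0 * sum(s - o for o, s in zip(obs, sim)) / sum(obs)
```

RMSE, NSE, and RSR all penalize squared errors and so emphasize peak-flow fit, which is consistent with their strong hydrograph reproducibility reported above; PBIAS instead measures the relative volume error.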

Knowledge graph-based knowledge map for efficient expression and inference of associated knowledge (연관지식의 효율적인 표현 및 추론이 가능한 지식그래프 기반 지식지도)

  • Yoo, Keedong
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.4
    • /
    • pp.49-71
    • /
    • 2021
  • Users who intend to utilize knowledge to actively solve given problems proceed by cross-wise and sequential exploration of associated knowledge related to each other in terms of certain criteria, such as content relevance. A knowledge map is a diagram or taxonomy giving an overview of the knowledge currently managed in a knowledge base, and supports users' knowledge exploration based on certain relationships between knowledge. A knowledge map, therefore, must be expressed in a networked form by linking related knowledge based on certain types of relationships, and should be implemented by deploying proper technologies or tools specialized in defining and inferring them. To this end, this study suggests a methodology for developing a knowledge graph-based knowledge map using a graph database (Graph DB), known to be well suited to expressing and inferring the relationships between entities stored in a knowledge base. The procedures of the proposed methodology are modeling graph data; creating nodes, properties, and relationships; and composing knowledge networks by combining the identified links between knowledge. Among the various Graph DBs, Neo4j is used in this study for its high credibility and applicability, demonstrated through a wide variety of application cases. To examine the validity of the proposed methodology, a knowledge graph-based knowledge map is implemented with the Graph DB, and a performance comparison test is performed by applying previous research's data to check whether this study's knowledge map can yield the same level of performance as the previous one did. The previous case built a process-based knowledge map using ontology technology, which identifies links between related knowledge based on the sequences of tasks producing, or being activated by, knowledge.
In other words, since a task is not only activated by knowledge as an input but also produces knowledge as an output, input and output knowledge are linked as a flow by the task. Also, since a business process is composed of affiliated tasks that fulfill the purpose of the process, the knowledge networks within a business process can be derived from the sequences of the tasks composing the process. Therefore, using Neo4j, the considered processes, tasks, and knowledge, as well as the relationships among them, are defined as nodes and relationships so that knowledge links can be identified based on the task sequences. The knowledge network formed by aggregating the identified knowledge links is a knowledge map that functions as a knowledge graph, so its performance needs to be tested against the validation results of the previous research. The performance test examines two aspects, the correctness of the knowledge links and the possibility of inferring new types of knowledge: the former is examined using 7 questions, and the latter is checked by extracting two new types of knowledge. As a result, the knowledge map constructed through the proposed methodology showed the same level of performance as the previous one, and handled knowledge definition and knowledge relationship inference more efficiently. Furthermore, compared to the previous research's ontology-based approach, this study's Graph DB-based approach showed more beneficial functionality in intensively managing only the knowledge of interest, dynamically defining knowledge and relationships by reflecting various meanings from situations to purposes, agilely inferring knowledge and relationships through Cypher-based queries, and easily creating new relationships by aggregating existing ones.
This study's artifacts can be applied to implement a user-friendly knowledge exploration function reflecting users' cognitive processes toward associated knowledge, and can further underpin the development of an intelligent knowledge base that expands autonomously through the inference-driven discovery of new knowledge and relationships. Beyond this, the study has immediate utility for implementing the networked knowledge map essential to satisfying contemporary users searching for the proper knowledge to use.
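The process-based link rule described above (input knowledge → task → output knowledge) can be sketched without a Graph DB at all. The toy code below mirrors the inference a Cypher pattern such as `MATCH (a:Knowledge)-[:INPUT_TO]->(:Task)-[:PRODUCES]->(b:Knowledge)` would perform in Neo4j; the node labels, relationship types, and data here are hypothetical illustrations, not the paper's schema:

```python
def knowledge_links(tasks):
    """Derive knowledge-to-knowledge links from task definitions.
    tasks: ordered list of (task_name, input_knowledge, output_knowledge).
    A link (a, b) means knowledge a flows into knowledge b via a task,
    following the rule that a task activated by input knowledge
    produces its output knowledge."""
    links = set()
    for _name, inputs, outputs in tasks:
        for a in inputs:
            for b in outputs:
                links.add((a, b))
    return links
```

Aggregating these links over all tasks in a process yields the networked knowledge map; in the Graph DB version the same aggregation is expressed declaratively as a query rather than nested loops.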