• Title/Summary/Keyword: Information technology process


Prioritizing Noxious Liquid Substances (NLS) for Preparedness Against Potential Spill Incidents in Korean Coastal Waters (해상 유해액체물질(NLS) 유출사고대비 물질군 선정에 관한 연구)

  • Kim, Young-Ryun;Choi, Jeong-Yun;Son, Min-Ho;Oh, Sangwoo;Lee, Moonjin;Lee, Sangjin
    • Journal of the Korean Society of Marine Environment & Safety
    • /
    • v.22 no.7
    • /
    • pp.846-853
    • /
    • 2016
  • This study prioritizes Noxious Liquid Substances (NLS) transported by sea, using a risk-based database containing 596 chemicals, to prepare against NLS incidents. The 158 chemicals transported in Korean waters during 2014 and 2015 were prioritized and then grouped into four categories (ranked 0-3) based on measures for preparedness against incidents. In order to establish an effective preparedness system against NLS spill incidents on a national scale, a compiling process for NLS chemicals ranked 2-3 should be carried out and managed together with an initiative for NLS chemicals ranked 0-1. It is also advisable to manage NLS chemicals ranked 0-1 after considering the characteristics of the NLS specifically transported through a given port, since the types and characteristics of the relevant NLS chemicals differ from port to port. In addition, three designated regions are suggested: 1) the southern sector of the East Sea (Ulsan and Busan); 2) the central sector of the South Sea (Gwangyang and Yeosu); and 3) the northern sector of the West Sea (Pyeongtaek, Daesan and Incheon). These regions should be considered special management sectors, with strengthened surveillance, and with the equipment, materials and chemicals used for pollution response prepared in advance at NLS spill incident response facilities. In the near future, the risk database should be supplemented with specific information on chronic toxicity and updated on a regular basis. Furthermore, scientific ecotoxicological data for marine organisms should be collated and expanded in a systematic way. A system allowing for the identification of Hazardous and Noxious Substances (HNS), noting the relevant volumes transported in Korean waters, should also be established as soon as possible to allow for better management of HNS spill incidents at sea.

GOCI-II Capability of Improving the Accuracy of Ocean Color Products through Fusion with GK-2A/AMI (GK-2A/AMI와 융합을 통한 GOCI-II 해색 산출물 정확도 개선 가능성)

  • Lee, Kyeong-Sang;Ahn, Jae-Hyun;Park, Myung-Sook
    • Korean Journal of Remote Sensing
    • /
    • v.37 no.5_2
    • /
    • pp.1295-1305
    • /
    • 2021
  • Satellite-derived ocean color products are required to effectively monitor clear open-ocean and coastal water regions for various research fields. For this purpose, accurate correction of atmospheric effects is essential. Currently, the Geostationary Ocean Color Imager (GOCI)-II ground segment uses meteorological reanalysis fields, such as those from the European Centre for Medium-Range Weather Forecasts (ECMWF) or the National Centers for Environmental Prediction (NCEP), to correct gas absorption by water vapor and ozone. In this process, uncertainties may arise from the low spatiotemporal resolution of the meteorological data. In this study, we develop a water vapor absorption correction model for GK-2A-combined GOCI-II atmospheric correction that uses Advanced Meteorological Imager (AMI) total precipitable water (TPW) information, through radiative transfer model simulations. We also investigate the impact of the developed model on GOCI-II products. Overall, the errors with and without water vapor absorption correction in the top-of-atmosphere (TOA) reflectance at 620 nm and 680 nm are only 1.3% and 0.27%, indicating that the water vapor absorption model has no significant effect there. However, the GK-2A-combined water vapor absorption model has a large impact at the 709 nm channel, revealing errors of 6 to 15% depending on the solar zenith angle (SZA) and the TPW. We also found more significant impacts of the GK-2A-combined water vapor absorption model on Rayleigh-corrected reflectance at all GOCI-II spectral bands. The errors generated in the TOA reflectance are greatly amplified, showing large errors of 1.46~4.98, 7.53~19.53, 0.25~0.64, 14.74~40.5, 8.2~18.56, and 5.7~11.9% for the bands from 620 nm to 865 nm, respectively, depending on the SZA. This study emphasizes that the water vapor correction model can affect the accuracy and stability of ocean color products, and implies that the accuracy of GOCI-II ocean color products can be improved through fusion with GK-2A/AMI.
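The gas-absorption step described above can be sketched with a simple Beer-law-style two-way transmittance model. This is not the operational GOCI-II algorithm; the function names and the coefficients `a` and `b` are hypothetical, chosen only to illustrate how a TPW value from GK-2A/AMI and the solar/view geometry would enter the correction:

```python
import math

def wv_transmittance(tpw_cm, sza_deg, vza_deg, a=0.02, b=0.8):
    """Two-way water-vapor transmittance for a near-IR band.

    tpw_cm  : total precipitable water (cm), e.g. from GK-2A/AMI
    sza_deg : solar zenith angle; vza_deg: view zenith angle
    a, b    : band-dependent absorption coefficients (hypothetical values)
    """
    # Air-mass factor for the sun-to-surface-to-sensor path
    m = 1.0 / math.cos(math.radians(sza_deg)) + 1.0 / math.cos(math.radians(vza_deg))
    return math.exp(-a * (m * tpw_cm) ** b)

def correct_toa_reflectance(rho_toa, tpw_cm, sza_deg, vza_deg):
    # Remove water-vapor absorption from the observed TOA reflectance
    return rho_toa / wv_transmittance(tpw_cm, sza_deg, vza_deg)
```

Higher TPW or a larger solar zenith angle lowers the transmittance and thus enlarges the correction, consistent with the larger 709 nm errors the abstract reports at high SZA and TPW.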

Development and Performance Evaluation of Multi-sensor Module for Use in Disaster Sites of Mobile Robot (조사로봇의 재난현장 활용을 위한 다중센서모듈 개발 및 성능평가에 관한 연구)

  • Jung, Yonghan;Hong, Junwooh;Han, Soohee;Shin, Dongyoon;Lim, Eontaek;Kim, Seongsam
    • Korean Journal of Remote Sensing
    • /
    • v.38 no.6_3
    • /
    • pp.1827-1836
    • /
    • 2022
  • Disasters occur unexpectedly and are difficult to predict, and their scale and damage are increasing compared to the past; sometimes one disaster can develop into another. Among the four stages of disaster management, search and rescue are carried out in the response stage when an emergency occurs, so personnel such as firefighters deployed to the scene are exposed to considerable risk. In this respect, robots are a technology with high potential to reduce damage to human life and property during the initial response at a disaster site. In addition, Light Detection And Ranging (LiDAR) can acquire 3D information over a relatively wide range using a laser; its high accuracy and precision make it a very useful sensor given the characteristics of a disaster site. Therefore, in this study, development and experiments were conducted so that a robot could perform real-time monitoring at a disaster site. A multi-sensor module was developed by combining LiDAR, an Inertial Measurement Unit (IMU) sensor, and a computing board. This module was mounted on the robot, and a customized Simultaneous Localization and Mapping (SLAM) algorithm was developed. A method for stably mounting the multi-sensor module on the robot to maintain optimal accuracy at disaster sites was studied. To check the performance of the module, SLAM was tested inside a disaster building, and various SLAM algorithms were compared in terms of distance error. As a result, PackSLAM, developed in this study, showed lower error than the other algorithms, showing its potential for application at disaster sites. In the future, to further enhance usability at disaster sites, various experiments will be conducted in a rough-terrain environment with many obstacles.
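The distance comparison used to rank the SLAM candidates can be illustrated with a simple trajectory RMSE. The checkpoint coordinates and the "baseline" label below are hypothetical; only the evaluation idea — lower positional error wins — comes from the text:

```python
import math

def rmse(estimated, reference):
    """Root-mean-square error between estimated and reference 3D positions (metres)."""
    assert len(estimated) == len(reference)
    sq = [sum((e - r) ** 2 for e, r in zip(p, q)) for p, q in zip(estimated, reference)]
    return math.sqrt(sum(sq) / len(sq))

# Hypothetical surveyed checkpoints inside a test building
reference = [(0.0, 0.0, 0.0), (5.0, 0.0, 0.0), (5.0, 4.0, 0.0)]
trajectories = {
    "PackSLAM": [(0.02, 0.01, 0.0), (5.03, -0.02, 0.01), (4.97, 4.02, 0.0)],
    "baseline": [(0.10, 0.05, 0.0), (5.20, -0.10, 0.05), (4.80, 4.15, 0.02)],
}
# Rank algorithms by trajectory error, best (lowest RMSE) first
ranking = sorted(trajectories, key=lambda k: rmse(trajectories[k], reference))
```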

Exploring Differences of Student Response Characteristics between Computer-Based and Paper-Based Tests: Based on the Results of Computer-Based NAEA and Paper-Based NAEA (컴퓨터 기반 평가와 지필평가 간 학생 응답 특성 탐색 -컴퓨터 기반 국가수준 학업성취도 평가 병행 시행 결과를 중심으로-)

  • Jongho Baek;Jaebong Lee;Jaok Ku
    • Journal of The Korean Association For Science Education
    • /
    • v.43 no.1
    • /
    • pp.17-28
    • /
    • 2023
  • In line with the entry into the digital-based intelligent information society, the science curriculum emphasizes the cultivation of scientific competencies, and computer-based testing (CBT) is drawing attention for the assessment of such competencies. CBT has advantages in developing high-fidelity items and in establishing a feedback system by accumulating results in a database. However, issues remain to be solved, such as securing the validity of assessment results, lower measurement efficiency, and an increased management burden. To examine students' responses to the new assessment tools introduced in the transition from paper-based testing (PBT) to CBT, we analyzed the results of the PBT and the CBT conducted in the 2021 National Assessment of Educational Achievement (NAEA). In particular, we sought to find the effect on student achievement when only the mode of assessment was changed without changing the items, and the effect when the items incorporated technology-enhanced features that take advantage of CBT. The analysis covered 7,137 third-grade middle school students, each taking one of three assessments: the PBT or one of two kinds of CBT. After the assessment, the percentage of correct answers and the item discrimination were collected for each group, and expert opinions on response characteristics were gathered through a council of eight science teachers with NAEA experience. According to the results, there was no significant difference between students' achievement on the PBT and on the CBT-M (the simple mode-conversion type of CBT), indicating that no mode effect appeared. However, the percentage of correct answers for constructed-response items was somewhat higher in the CBT, a result attributed to the convenience of responding.
On the other hand, among the items to which technology-enhanced functions were applied following the introduction of CBT, there were items whose correct-answer rate differed by more than 10 percentage points from that of similar items. The analysis of option response rates suggests that students' level of understanding can be grasped more closely through innovative items developed with technology-enhanced functions. Based on these results, we discuss points to consider when introducing CBT and developing CBT items, and present implications.
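The mode-effect check — comparing correct-answer rates between the PBT group and the CBT-M group — can be sketched as a two-proportion z-test. The group sizes and rates below are hypothetical placeholders, not the study's figures:

```python
import math

def two_proportion_z(p1, n1, p2, n2):
    """z statistic for the difference between two correct-answer rates."""
    p = (p1 * n1 + p2 * n2) / (n1 + n2)              # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))  # pooled standard error
    return (p1 - p2) / se

# Hypothetical example: similar rates between modes -> no mode effect at alpha = .05
z_same = two_proportion_z(0.62, 2379, 0.63, 2379)
# A gap this large between item variants would flag a mode or item effect
z_diff = two_proportion_z(0.55, 2379, 0.66, 2379)
```

A |z| below 1.96 for `z_same` is the pattern the study reports between the PBT and the CBT-M.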

A Study on the Moderating Effect of Corporate Agility on the Intention to Accept Digital Transformation in Passenger Transportation Business (여객자동차운송사업의 디지털 트랜스포메이션 수용의도에 대한 기업민첩성의 조절효과 연구)

  • Baek, Woon-heung;Lee, So-young
    • Journal of Venture Innovation
    • /
    • v.7 no.3
    • /
    • pp.1-23
    • /
    • 2024
  • This study empirically analyzed the factors affecting acceptance intention when introducing digital transformation in the passenger transportation business, and examined the moderating effect of corporate agility. Based on the technology acceptance model (TAM) and the technology-organization-environment (TOE) framework, the study set usefulness of introduction, ease of introduction, digital transformation capability, CEO support, customer demand, and competitive pressure as independent variables, and analyzed 316 valid samples with corporate agility as a moderating variable. As a result of the analysis, ease of introduction, digital transformation capability, CEO support, customer demand, and competitive pressure all had a significant influence on the intention to accept digital transformation; digital transformation capability had the greatest influence. This suggests that a company's ability to use digital technology effectively is an essential factor in the success of digital transformation. On the other hand, usefulness of introduction did not significantly affect acceptance intention, which suggests that concerns about costs and risks can have a greater impact on the introduction of digital transformation. In addition, corporate agility had a positive moderating effect on the relationships between the independent variables and acceptance intention, indicating that more agile companies can more effectively solve the challenges that arise when introducing digital transformation. In particular, when corporate agility is high, the effect of usefulness of introduction turns positive. This study is the first attempt to systematically analyze digital transformation in the passenger transportation business; it has academic significance and, in practice, confirms the importance of digital transformation capability and CEO support.
However, since the scope of the study is limited to the passenger transportation business, generalization is limited; future research should extend to various industries and fields.
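A moderating effect of the kind reported here means the slope of acceptance intention on a predictor changes with agility. A minimal simple-slopes sketch in pure Python; the survey scores are hypothetical, not the study's data:

```python
def slope(x, y):
    """OLS slope of y on x: cov(x, y) / var(x)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var = sum((a - mx) ** 2 for a in x)
    return cov / var

# Hypothetical scale scores: usefulness of introduction vs. acceptance intention
low_agility  = ([2, 3, 4, 5, 6], [3.0, 3.1, 3.0, 3.2, 3.1])  # near-flat slope
high_agility = ([2, 3, 4, 5, 6], [2.5, 3.0, 3.6, 4.1, 4.6])  # steep positive slope
```

In a regression framework the same pattern appears as a significant coefficient on the usefulness-by-agility interaction term.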

Electronic Word-of-Mouth in B2C Virtual Communities: An Empirical Study from CTrip.com (B2C虚拟社区中的电子口碑: 关于携程旅游网的实证研究)

  • Li, Guoxin;Elliot, Statia;Choi, Chris
    • Journal of Global Scholars of Marketing Science
    • /
    • v.20 no.3
    • /
    • pp.262-268
    • /
    • 2010
  • Virtual communities (VCs) have developed rapidly, with more and more people participating in them to exchange information and opinions. A virtual community is a group of people who may or may not meet one another face to face, and who exchange words and ideas through the mediation of computer bulletin boards and networks. A business-to-consumer virtual community (B2CVC) is a commercial group that creates a trustworthy environment intended to motivate consumers to be more willing to buy from an online store. B2CVCs create a social atmosphere through information contributions such as recommendations, reviews, and ratings of buyers and sellers. Although the importance of B2CVCs has been recognized, few studies have examined members' word-of-mouth behavior within these communities. This study proposes a model of involvement, satisfaction, trust, "stickiness," and word-of-mouth in a B2CVC and explores the relationships among these elements based on empirical data. The objectives are threefold: (i) to empirically test a B2CVC model that integrates measures of beliefs, attitudes, and behaviors; (ii) to better understand the nature of these relationships, specifically through word-of-mouth as a measure of revenue generation; and (iii) to better understand the role of B2CVC stickiness in CRM marketing. The model incorporates three key elements concerning community members: (i) their beliefs, measured in terms of their involvement assessment; (ii) their attitudes, measured in terms of their satisfaction and trust; and (iii) their behavior, measured in terms of site stickiness and their word-of-mouth. Involvement is considered the motivation for consumers to participate in a virtual community. For B2CVC members, information searching and posting have been proposed as the main purposes for their involvement.
Satisfaction has been reviewed as an important indicator of a member's overall community evaluation, and conceptualized by different levels of member interactions with their VC. The formation and expansion of a VC depend on the willingness of members to share information and services. Researchers have found that trust is a core component facilitating the anonymous interaction in VCs and e-commerce, and therefore trust-building in VCs has been a common research topic. It is clear that the success of a B2CVC depends on the stickiness of its members to enhance purchasing potential. Opinions communicated and information exchanged between members may represent a type of written word-of-mouth. Therefore, word-of-mouth is one of the primary factors driving the diffusion of B2CVCs across the Internet. Figure 1 presents the research model and hypotheses. The model was tested through an online survey of CTrip Travel VC members. A total of 243 collected questionnaires was reduced to 204 usable questionnaires through an empirical process of data cleaning. The study's hypotheses examined the extent to which involvement, satisfaction, and trust influence B2CVC stickiness and members' word-of-mouth. Structural Equation Modeling tested the hypotheses, and the structural model fit indices were within accepted thresholds: ${\chi}^2$/df was 2.76, NFI was .904, IFI was .931, CFI was .930, and RMSEA was .017. Results indicated that involvement has a significant influence on satisfaction (p<0.001, ${\beta}$=0.809). The proportion of variance in satisfaction explained by members' involvement was over half (adjusted $R^2$=0.654), reflecting a strong association. The effect of involvement on trust was also statistically significant (p<0.001, ${\beta}$=0.751), with 57 percent of the variance in trust explained by involvement (adjusted $R^2$=0.563).
When the construct "stickiness" was treated as a dependent variable, the proportion of variance explained by trust and satisfaction was relatively low (adjusted $R^2$=0.331). Satisfaction did have a significant influence on stickiness, with ${\beta}$=0.514. However, unexpectedly, the influence of trust was not significant (p=0.231, t=1.197), so that hypothesis was rejected. The importance of stickiness in the model was underscored by its effect on e-WOM, with ${\beta}$=0.920 (p<0.001); stickiness explained over eighty percent of the variance in e-WOM (adjusted $R^2$=0.846). Overall, the results supported the hypothesized relationships between members' involvement in a B2CVC and their satisfaction with and trust of it. However, trust, a traditional measure in behavioral models, had no significant influence on stickiness in the B2CVC environment. This study contributes to the growing body of literature on B2CVCs, specifically addressing gaps in the academic research by integrating measures of beliefs, attitudes, and behaviors in one model. The results provide additional insights into behavioral factors in a B2CVC environment, helping to sort out relationships between traditional measures and relatively new ones. For practitioners, identifying factors, such as member involvement, that strongly influence B2CVC member satisfaction can help focus technological resources in key areas. Global e-marketers can develop marketing strategies directly targeting B2CVC members. In the global tourism business, they can target Chinese members of a B2CVC by providing special discounts for active community members or by developing early-adopter programs to encourage stickiness in the community. Future studies, with more sophisticated modeling, are called for to expand the measurement of B2CVC member behavior and to conduct experiments across industries, communities, and cultures.
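The incremental fit indices quoted for this model (NFI, CFI) are simple functions of the model and null-model chi-square values. A sketch of the standard formulas; the chi-square inputs in the test are hypothetical, since the study does not report its null-model values:

```python
def nfi(chi2_model, chi2_null):
    """Normed Fit Index: proportional chi-square improvement over the null model."""
    return (chi2_null - chi2_model) / chi2_null

def cfi(chi2_model, df_model, chi2_null, df_null):
    """Comparative Fit Index, based on noncentrality estimates max(chi2 - df, 0)."""
    d_model = max(chi2_model - df_model, 0)
    d_null = max(chi2_null - df_null, 0)
    return 1 - d_model / max(d_model, d_null)
```

Values of .90 or above on both indices, as reported here, are conventionally read as acceptable fit.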

An Ontology-Based Movie Contents Recommendation Scheme Using Relations of Movie Metadata (온톨로지 기반 영화 메타데이터간 연관성을 활용한 영화 추천 기법)

  • Kim, Jaeyoung;Lee, Seok-Won
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.3
    • /
    • pp.25-44
    • /
    • 2013
  • Accessing movie contents has become easier with the advent of smart TV, IPTV, and web services that can be used to search for and watch movies. In this situation, users increasingly search for movie contents that match their preferences. However, since the amount of available movie content is so large, users need considerable effort and time to find it. Hence, there has been much research on recommending personalized items through the analysis and clustering of user preferences and user profiles. In this study, we propose a recommendation system that uses an ontology-based knowledge base. Our ontology can represent not only relations between movie metadata but also relations between metadata and user profiles. The relations between metadata can show similarity between movies. In order to build the knowledge base, our ontology model considers two aspects: the movie metadata model and the user model. In building the ontology-based movie metadata model, we select as the main metadata the genre, actor/actress, keywords, and synopsis, which affect users' movie choices. The user model contains demographic information about the user and relations between the user and movie metadata. In our model, the movie ontology model consists of seven concepts (Movie, Genre, Keywords, Synopsis Keywords, Character, and Person), eight attributes (title, rating, limit, description, character name, character description, person job, person name), and ten relations between concepts. For our knowledge base, we input individual data for 14,374 movies for each concept in the contents ontology model. This movie metadata knowledge base is used to search for movies related to metadata the user finds interesting, and it can find similar movies through relations between concepts. We also propose an architecture for movie recommendation, consisting of four components.
The first component searches for candidate movies based on the demographic information of the user. In this component, we divide users into groups according to demographic information so that movies can be recommended for each group, and we define the rules for deciding a user's group. This component generates the query used to search for candidate movies for recommendation. The second component searches for candidate movies based on user preferences. When users choose a movie, they consider metadata such as genre, actor/actress, synopsis, and keywords; users input their preferences, and the system searches for movies based on them. Unlike existing movie recommendation systems, the proposed system can find similar movies through relations between concepts. Each metadata item of the recommended candidate movies has a weight that is used to decide the recommendation order. The third component merges the results of the first and second components. In this step, we calculate the weight of each movie using the weight values of its metadata and then sort the movies by weight. The fourth component analyzes the result of the third component, decides the level of contribution of each metadata item, and applies the contribution weight to the metadata. Finally, we use the result of this step as the recommendation for users. We tested the usability of the proposed scheme using a web application implemented with JSP, JavaScript, and the Protégé API. In our experiment, we collected results from 20 men and women, ranging in age from 20 to 29, using 7,418 movies with ratings of at least 7.0. We provided the Top-5, Top-10, and Top-20 recommended movies to users, who then chose the movies that interested them. The average number of interesting movies chosen was 2.1 in the Top-5, 3.35 in the Top-10, and 6.35 in the Top-20.
This is better than the results yielded by using each metadata item individually.
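The third component's merge-and-sort step can be sketched as a weighted score over matched metadata. The field names and weights below are hypothetical placeholders, not the weights the system actually assigns:

```python
def merge_candidates(demographic_hits, preference_hits, metadata_weights):
    """Merge two candidate sets and score each movie by its matched metadata.

    demographic_hits / preference_hits map movie -> list of matched metadata fields;
    metadata_weights maps field -> contribution weight (hypothetical values below).
    """
    scores = {}
    for hits in (demographic_hits, preference_hits):
        for movie, fields in hits.items():
            scores[movie] = scores.get(movie, 0.0) + sum(metadata_weights[f] for f in fields)
    # Sort by total weight, highest first, to fix the recommendation order
    return sorted(scores, key=scores.get, reverse=True)

weights = {"genre": 0.5, "actor": 0.3, "keyword": 0.2}   # hypothetical weights
order = merge_candidates({"Movie A": ["genre"], "Movie B": ["keyword"]},
                         {"Movie A": ["actor"], "Movie C": ["genre", "keyword"]},
                         weights)
```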

Animal Infectious Diseases Prevention through Big Data and Deep Learning (빅데이터와 딥러닝을 활용한 동물 감염병 확산 차단)

  • Kim, Sung Hyun;Choi, Joon Ki;Kim, Jae Seok;Jang, Ah Reum;Lee, Jae Ho;Cha, Kyung Jin;Lee, Sang Won
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.4
    • /
    • pp.137-154
    • /
    • 2018
  • Animal infectious diseases, such as avian influenza and foot-and-mouth disease, occur almost every year and cause huge economic and social damage to the country. To prevent this, the quarantine authorities have made various human and material efforts, but infectious diseases have continued to occur. Avian influenza was first reported in 1878 and rose to a national issue due to its high lethality. Foot-and-mouth disease is considered the most critical animal infectious disease internationally. In nations free of the disease, foot-and-mouth disease is recognized as an economic or political disease because it restricts international trade by complicating the import of processed and non-processed livestock, and because quarantine is costly. In a society where the whole nation is connected as a single living zone, there is no way to fully prevent the spread of infectious disease. Hence, there is a need to detect the occurrence of disease and take action before it spreads. As soon as a human or animal infectious disease is confirmed, an epidemiological investigation of the confirmed cases is carried out, and measures are taken to prevent the spread of disease according to the results. The foundation of an epidemiological investigation is figuring out where one has been and whom one has met. From a data perspective, this can be defined as predicting the cause of a disease outbreak, the outbreak location, and future infections by collecting and analyzing geographic data and relational data. Recently, attempts have been made to develop infectious disease prediction models using Big Data and deep learning technology, but there is little active research in the form of model-building studies and case reports.
KT and the Ministry of Science and ICT have been carrying out big data projects since 2014, as part of national R&D projects, to analyze and predict the routes of livestock-related vehicles. To prevent animal infectious diseases, the researchers first developed a prediction model based on regression analysis using vehicle movement data. After that, more accurate prediction models were constructed using machine learning algorithms such as Logistic Regression, Lasso, Support Vector Machine, and Random Forest. In particular, the prediction model for 2017 added the risk of diffusion to facilities, and its performance was improved by tuning the model's hyper-parameters in various ways. The Confusion Matrix and ROC Curve show that the model constructed in 2017 is superior to the earlier machine learning model. The difference between the 2016 model and the 2017 model is that the later model also used visit information on facilities such as feed factories and slaughterhouses, and its bird livestock information, previously limited to chicken and duck, was expanded to goose and quail. In addition, in 2017 an explanation of the results was added to help the authorities make decisions and to establish a basis for persuading stakeholders. This study reports an animal infectious disease prevention system constructed on the basis of Big Data on hazardous vehicle movements, farms, and the environment. The significance of this study is that it describes the evolution of a prediction model using Big Data in the field; the model is expected to become more complete if the form of the viruses is taken into consideration. This will contribute to data utilization and analysis model development in related fields. We also expect the system constructed in this study to provide more effective prevention.
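The evaluation step mentioned above — Confusion Matrix and ROC — can be sketched in a few lines of pure Python. The labels and scores in the test are toy values, not the project's data:

```python
def confusion_matrix(y_true, y_pred):
    """Counts (tp, fp, fn, tn) for binary infection predictions per farm."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    return tp, fp, fn, tn

def auc(y_true, scores):
    """Area under the ROC curve via the rank (Mann-Whitney U) formulation."""
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    # Fraction of positive/negative pairs ranked correctly (ties count half)
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

Comparing these two summaries across model versions is how the 2017 model's superiority over the earlier one would be demonstrated.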

Design and Implementation of MongoDB-based Unstructured Log Processing System over Cloud Computing Environment (클라우드 환경에서 MongoDB 기반의 비정형 로그 처리 시스템 설계 및 구현)

  • Kim, Myoungjin;Han, Seungho;Cui, Yun;Lee, Hanku
    • Journal of Internet Computing and Services
    • /
    • v.14 no.6
    • /
    • pp.71-84
    • /
    • 2013
  • Log data, which record the multitude of information created when operating computer systems, are utilized in many processes, from carrying out computer system inspection and process optimization to providing customized user optimization. In this paper, we propose a MongoDB-based unstructured log processing system in a cloud environment for processing the massive amount of log data of banks. Most of the log data generated during banking operations come from handling a client's business. Therefore, in order to gather, store, categorize, and analyze the log data generated while processing the client's business, a separate log data processing system needs to be established. However, the realization of flexible storage expansion functions for processing a massive amount of unstructured log data and executing a considerable number of functions to categorize and analyze the stored unstructured log data is difficult in existing computer environments. Thus, in this study, we use cloud computing technology to realize a cloud-based log data processing system for processing unstructured log data that are difficult to process using the existing computing infrastructure's analysis tools and management system. The proposed system uses the IaaS (Infrastructure as a Service) cloud environment to provide a flexible expansion of computing resources and includes the ability to flexibly expand resources such as storage space and memory under conditions such as extended storage or rapid increase in log data. Moreover, to overcome the processing limits of the existing analysis tool when a real-time analysis of the aggregated unstructured log data is required, the proposed system includes a Hadoop-based analysis module for quick and reliable parallel-distributed processing of the massive amount of log data. 
Furthermore, because the HDFS (Hadoop Distributed File System) stores data by generating copies of the block units of the aggregated log data, the proposed system offers automatic restore functions, allowing the system to continue operating after recovering from a malfunction. Finally, by establishing a distributed database using the NoSQL-based MongoDB, the proposed system provides methods of effectively processing unstructured log data. Relational databases such as MySQL have complex schemas that are inappropriate for processing unstructured log data; moreover, their strict schemas make it hard to expand across nodes by distributing the stored data when the amount of data rapidly increases. NoSQL does not provide the complex computations that relational databases may provide, but it can easily expand through node dispersion when the amount of data increases rapidly; it is a non-relational database with a structure appropriate for processing unstructured data. NoSQL data models are usually classified as Key-Value, column-oriented, and document-oriented types. Of these, the representative document-oriented data model, MongoDB, which has a free schema structure, is used in the proposed system. MongoDB was chosen because it makes it easy to process unstructured log data through a flexible schema structure, facilitates flexible node expansion when the amount of data is rapidly increasing, and provides an Auto-Sharding function that automatically expands storage. The proposed system is composed of a log collector module, a log graph generator module, a MongoDB module, a Hadoop-based analysis module, and a MySQL module.
When the log data generated across each bank's entire client business process are sent to the cloud server, the log collector module collects and classifies the data according to log type and distributes them to the MongoDB module and the MySQL module. The log graph generator module generates the log analysis results of the MongoDB module, the Hadoop-based analysis module, and the MySQL module per analysis time and type of aggregated log data, and provides them to the user through a web interface. Log data requiring real-time analysis are stored in the MySQL module and provided in real time by the log graph generator module. The log data aggregated per unit time are stored in the MongoDB module and plotted in graphs according to the user's various analysis conditions. The aggregated log data in the MongoDB module are processed in a parallel-distributed fashion by the Hadoop-based analysis module. A comparative evaluation against a log data processing system that uses only MySQL, covering log insertion and query performance, demonstrates the proposed system's superiority. Moreover, an optimal chunk size is confirmed through a MongoDB log data insert performance evaluation over various chunk sizes.
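The collector's routing decision described above can be sketched without a live database: real-time log types go to the relational (MySQL) store, everything else to the schema-free (MongoDB) store for later batch analysis. The category names and JSON shape are hypothetical, and the stores are stand-in lists rather than actual pymongo/MySQL clients:

```python
import json
import time

REALTIME_TYPES = {"transaction", "auth_failure"}   # hypothetical real-time categories

def classify_and_route(raw_line, mongo_store, mysql_store):
    """Collector step: parse one log line and route it by type.

    Real-time logs go to the MySQL store for immediate graphing; the rest
    go to the MongoDB store for later Hadoop-based batch analysis.
    """
    record = json.loads(raw_line)
    record["collected_at"] = time.time()            # stamp at collection time
    target = mysql_store if record.get("type") in REALTIME_TYPES else mongo_store
    target.append(record)
    return record
```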

A Study on 3D Scan Technology for Finding the Archetype of Youngbeokji in Seongnagwon Garden (성락원 영벽지의 원형 파악을 위한 3D 스캔기술 연구)

  • Lee, Won-Ho;Kim, Dong-Hyun;Kim, Jae-Ung;Park, Dong-Jin
    • Journal of the Korean Institute of Traditional Landscape Architecture
    • /
    • v.31 no.3
    • /
    • pp.95-105
    • /
    • 2013
  • This study was performed to identify the archetype of the Youngbeokji space located in Seongnagwon (Scenic Site No. 35), through the acquisition of high-precision 3D data on the site, including the surrounding terrain. The results are summarized as follows. First, surveying the stone structures within Youngbeokji provides important clues for finding the archetype of an earlier era, and the 3D scanning method offers an easy way to survey the whole structure, including the surrounding terrain. Second, the measurement results are as follows: the surveyed bedrock measured 7,665 mm from south to north and 7,326 mm from east to west; the stone structure is a square of 1,665 mm × 1,721 mm, and its interior is hemispherical with a diameter of 1,664 mm; and in the lower portion of the bedrock, on the south-to-north side, traces of a fallen-out section measuring 1,006 mm × 328 mm were discovered. Third, the internal terrain of Youngbeokji was recorded using a multiresolution approach: the wide-area scan data were merged, converted to polygon data, and meshed, with the fine scan data converted and processed as well; high-resolution photographs were then overlapped with the 3D terrain data to produce the final result. Fourth, as a result of this work, a waterway running out to the west of the stone structure was confirmed. The stone structure, in the bangjiwondo form (a square pond with a round island), is estimated to be a seokji, a hydroponic facility artificially carved into the bedrock. This also made it possible to compare the present condition with data from the 1960s. Through the acquired precise 3D data, the excavated archetype can be preserved in perpetuity in digital form, and the data can be used by a wide variety of professionals in the future.
These data are intended to serve as a foundation for establishing conservation and management measures. In addition, we expect the ease of the 3D scanning method demonstrated in this garden to benefit future research.