• Title/Summary/Keyword: Automated analysis system


A Study on the Choice of Main Entry in German Cataloging Rules; a comparison with the title entry in the Orient (독일목록규칙의 기본기입선정에 관한 연구)

  • Kim Tae-soo
    • Journal of the Korean Society for Library and Information Science
    • /
    • v.21
    • /
    • pp.61-101
    • /
    • 1991
  • This study reviews the development of the main entry principle in German cataloging codes, with special emphasis on RAK. With regard to the functions of the catalog, a comparison is made between the traditional title main entry of the Orient and the author main entry of the West. The analysis confirms that RAK has adopted various criteria for the choice of entries. For works whose title pages name persons who played different roles, as well as for related works and works of mixed responsibility, determining the entry is a complex and time-consuming process with no absolute value. Corporate entries raise further problems, including confirmation of the originator (Urheber), the choice between the territorial authority concerned and the corporate body depending on the nature of the publication, and the unique bibliographical situation of treaties. The code thus lacks any absolute criterion for selecting entries, and the main entry principle it adopts has lost its significance for the purposes of cataloging. Moreover, with the emergence of the ISBD and the spread of automated cataloging, all entries are equal as access points. This eliminates the need for the personal judgment the present code requires in choosing the main entry, and would bring uniformity and standardization to cataloging practice. In cataloging theory, the title entry is a more direct and better-developed finding device than the author entry. Introducing a unit card system beginning with the title, as adopted in KCR3, would therefore be desirable, and the complicated rules for the choice of entry could be dropped from cataloging codes. Most user studies show that catalog users place higher value on the title entry as a finding device, and with unit entries every entry is equal as an access point.
This means that designating any given entry as the main entry is unnecessary in cataloging codes; the title entry offers a simpler and more direct approach to works. This study argues that the traditional title entry of Korea is superior, in cataloging theory, to the author main entry of the Western world, and recommends that abandoning the author main entry in cataloging codes be considered in the future.


Distribution Analysis of Land Surface Temperature about Seoul Using Landsat 8 Satellite Images and AWS Data (Landsat 8 위성영상과 AWS 데이터를 이용한 서울특별시의 지표면 온도 분포 분석)

  • Lee, Jong-Sin;Oh, Myoung-Kwan
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.20 no.1
    • /
    • pp.434-439
    • /
    • 2019
  • Recently, interest in urban temperature change and land surface temperature change has been increasing because of weather phenomena driven by global warming and the heat island effect caused by urbanization. In Korea, weather data such as temperature and precipitation have been collected since 1904, and there are currently 96 ASOS stations and 494 AWS weather observation stations. However, because a ground network provides point data at each installation site, meteorological values away from the measurement points must be predicted through interpolation. In this study, to improve the spatial resolution of land surface temperature measurement, the surface temperature was calculated from satellite imagery and its applicability was analyzed. For this purpose, Landsat 8 OLI/TIRS images of Seoul were obtained for each season and converted to surface temperature by applying the NASA equation to the thermal bands. The ground reference was temperature data measured by AWS; since AWS temperature data are station-based point data, they were interpolated by the Kriging method for comparison with the Landsat imagery. Comparing the satellite-derived surface temperature with the AWS temperature data, seasonal RMSE values ranged from ±2.11℃ to ±3.84℃, and the applicability of the Landsat imagery was ranked in the order of fall, winter, summer, and spring.
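The thermal-band-to-temperature conversion described in the abstract can be sketched as follows. This is a generic single-band approach, not the paper's exact procedure: the rescaling and thermal constants below are typical Landsat 8 Band 10 metadata values (real values should be read from the scene's MTL file), and the emissivity figure is an illustrative assumption.

```python
import math

# Typical Landsat 8 Band 10 constants (illustrative; read from scene MTL file)
ML, AL = 3.342e-4, 0.1          # radiance rescaling gain and offset
K1, K2 = 774.8853, 1321.0789    # thermal conversion constants

def land_surface_temp_c(dn, emissivity=0.95):
    """Convert a Band 10 digital number to land surface temperature (deg C)."""
    radiance = ML * dn + AL                     # TOA spectral radiance
    bt = K2 / math.log(K1 / radiance + 1.0)     # at-sensor brightness temp (K)
    # single-band emissivity correction (Band 10 center ~10.895 um)
    wavelength, rho = 10.895e-6, 1.438e-2       # m, m*K (h*c / Boltzmann)
    lst = bt / (1.0 + (wavelength * bt / rho) * math.log(emissivity))
    return lst - 273.15
```

In practice this conversion is applied per pixel over the whole thermal raster before comparison with interpolated station data.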

Comparison and Analysis of Drought Index based on MODIS Satellite Images and ASOS Data for Gyeonggi-Do (경기도 지역에 대한 MODIS 위성영상 및 지점자료기반 가뭄지수의 비교·분석)

  • Yu-Jin, KANG;Hung-Soo, KIM;Dong-Hyun, KIM;Won-Joon, WANG;Han-Eul, LEE;Min-Ho, SEO;Yun-Jae, CHOUNG
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • v.25 no.4
    • /
    • pp.1-18
    • /
    • 2022
  • Currently, the Korea Meteorological Administration evaluates meteorological drought by region using SPI6 (Standardized Precipitation Index 6), based on 6-month cumulative precipitation. However, the SPI is calculated from precipitation alone at 69 weather stations, so drought, which arises for complex reasons, cannot be accurately determined. Therefore, the purpose of this study is to calculate and compare the SPI, which considers only precipitation, and the SDCI (Scaled Drought Condition Index), which considers precipitation, vegetation index, and temperature, for Gyeonggi-do. In addition, the advantages and disadvantages of the station-data-based and satellite-image-based drought indices were identified from the results of the SPI-SDCI comparison. MODIS (Moderate Resolution Imaging Spectroradiometer) satellite image data, ASOS (Automated Synoptic Observing System) data, and kriging were used to calculate the SDCI. For the precipitation duration, SDCI1, SDCI3, and SDCI6 were calculated by applying 1-month, 3-month, and 6-month periods, respectively, to the 8 stations in 2014. As a result, unlike the SPI, the SDCI showed drought patterns beginning about two months earlier, and drought by city and county in Gyeonggi-do was well revealed. This showed that combining satellite image data with station data improves the characterization of drought index change patterns and increases the possibility of drought prediction in wet areas as well as in existing dry areas.
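The SDCI's combination of precipitation, temperature, and vegetation can be illustrated with a minimal sketch. The min-max scaling and the weights below are illustrative assumptions (the abstract does not state the exact weights used); in the SDCI literature precipitation is typically weighted most heavily.

```python
import numpy as np

def scaled(x, inverse=False):
    """Min-max scale a series to [0, 1]; `inverse` flips variables for which
    high values indicate drought (e.g. land surface temperature)."""
    s = (x - x.min()) / (x.max() - x.min())
    return 1.0 - s if inverse else s

def sdci(precip, lst, vi, weights=(0.5, 0.25, 0.25)):
    """Weighted combination of scaled precipitation, temperature, and
    vegetation index; lower values indicate more severe drought."""
    w_p, w_t, w_v = weights
    return (w_p * scaled(precip)
            + w_t * scaled(lst, inverse=True)
            + w_v * scaled(vi))
```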

Improvement and Estimation of Effect for Speed Limit Tolerance (속도위반 단속 허용범위 개선안 제시 및 효과 추정)

  • Su-hwan Jeong;Kyeung-hee Han;Min-ho Lee;Choul-ki Lee
    • The Journal of The Korea Institute of Intelligent Transport Systems
    • /
    • v.22 no.2
    • /
    • pp.164-181
    • /
    • 2023
  • In a low speed limit environment, the speed limit tolerance of automated traffic enforcement devices is very high, which is one of the main factors behind the low compliance rate. In this study, we therefore aimed to improve the speed limit tolerance and present a new standard. The operator- and user-side errors that can cause drivers to speed were analyzed, and based on the results, an improved tolerance was proposed that accounts for enforcement device error and GPS speed error. In addition, the long-term expected safety effects, such as the accident rate and severity, were estimated from the operator's perspective under the improved tolerance. The estimates showed that the speed limit compliance rate, the accident rate, the rate of change in the number of severe accidents due to speed change, and the pedestrian traffic accident mortality rate all improved in every speed limit environment. Introducing the proposed improvement is therefore expected to improve road safety significantly.
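One way a tolerance could combine device error and GPS speed error, as the abstract describes, is sketched below. The error magnitudes and the root-sum-square combination are hypothetical illustrations, not the study's actual formula.

```python
import math

def enforcement_threshold(speed_limit_kph, device_err_pct=3.0, gps_err_kph=2.0):
    """Hypothetical tolerance: combine the device's percentage error and an
    absolute GPS speed error (root-sum-square) and add that margin to the
    posted limit. The error figures here are illustrative assumptions."""
    device_err = speed_limit_kph * device_err_pct / 100.0
    margin = math.sqrt(device_err ** 2 + gps_err_kph ** 2)
    return speed_limit_kph + margin
```

Under these assumed errors, a 30 km/h limit would yield an enforcement threshold of roughly 32 km/h, far tighter than the tolerances the study identifies in low-limit environments.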

Development of System for Real-Time Object Recognition and Matching using Deep Learning at Simulated Lunar Surface Environment (딥러닝 기반 달 표면 모사 환경 실시간 객체 인식 및 매칭 시스템 개발)

  • Jong-Ho Na;Jun-Ho Gong;Su-Deuk Lee;Hyu-Soung Shin
    • Tunnel and Underground Space
    • /
    • v.33 no.4
    • /
    • pp.281-298
    • /
    • 2023
  • Continuous research efforts are being devoted to unmanned mobile platforms for lunar exploration, and there is an ongoing demand for real-time information processing to accurately determine the positioning and mapping of areas of interest on the lunar surface. To apply deep learning processing and analysis techniques to practical rovers, research on software integration and optimization is imperative. In this study, a foundational investigation was conducted on real-time analysis of virtual lunar base construction site images, aimed at automatically quantifying the spatial information of key objects. The study involved transitioning from an existing region-based object recognition algorithm to a bounding-box-based algorithm, enhancing both object recognition accuracy and inference speed. To facilitate object matching training on extensive data, the Batch Hard Triplet Mining technique was introduced, and both the training and inference processes were optimized. Furthermore, an improved software system for object recognition and identical-object matching was integrated, accompanied by visualization software for automatically matching identical objects within input images. Using simulated satellite-view video data for training and video captured from a moving platform for inference, identical-object matching was successfully trained and executed. The outcomes suggest the feasibility of building 3D spatial information from continuously captured video of mobile platforms and using it to position objects within regions of interest. These findings are expected to contribute to an integrated automated on-site system for video-based construction monitoring and control of significant target objects at future lunar base construction sites.
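The Batch Hard Triplet Mining technique named in the abstract can be sketched with NumPy, following the standard batch-hard formulation (Hermans et al.); the margin value below is an assumption.

```python
import numpy as np

def batch_hard_triplet_loss(embeddings, labels, margin=0.2):
    """Batch-hard mining: for each anchor, take the farthest same-label
    sample (hardest positive) and the nearest different-label sample
    (hardest negative) in the batch, then apply the hinge triplet loss."""
    n = len(labels)
    d = np.linalg.norm(embeddings[:, None] - embeddings[None, :], axis=-1)
    same = labels[:, None] == labels[None, :]
    losses = []
    for i in range(n):
        pos = d[i][same[i] & (np.arange(n) != i)]   # same-label distances
        neg = d[i][~same[i]]                        # different-label distances
        if pos.size and neg.size:
            losses.append(max(0.0, pos.max() - neg.min() + margin))
    return float(np.mean(losses)) if losses else 0.0
```

When classes are already well separated in embedding space the loss is zero; mining the hardest pairs per batch is what makes training on extensive data efficient.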

An Efficient Estimation of Place Brand Image Power Based on Text Mining Technology (텍스트마이닝 기반의 효율적인 장소 브랜드 이미지 강도 측정 방법)

  • Choi, Sukjae;Jeon, Jongshik;Subrata, Biswas;Kwon, Ohbyung
    • Journal of Intelligence and Information Systems
    • /
    • v.21 no.2
    • /
    • pp.113-129
    • /
    • 2015
  • Place branding is an important income-generating activity: it gives special meaning to a specific location and produces identity and communal value grounded in an understanding of the place-branding concept. Many fields, such as marketing, architecture, and urban planning, contribute to creating an impressive brand image, and a place brand well recognized by both Koreans and foreigners creates significant economic effects. Research on building a strategic and detailed place brand image has been carried out, most prominently by Anholt, who surveyed two million people from 50 countries. Such survey-based investigation, however, requires a great deal of labor and expense, so more affordable, objective, and effective research methods are needed. The purpose of this paper is to find a way to measure the intensity of a place brand image objectively and at low cost through text mining. The proposed method extracts keywords and the factors constructing the place brand image from related web documents, and from these measures the brand image intensity of the specific location. The performance of the proposed methodology was verified through comparison with Anholt's city image consistency index ranking of 50 cities around the world. Four methods were compared. First, the RANDOM method ranks the cities in the experiment artificially. The HUMAN method uses a questionnaire given to 9 volunteers who are well acquainted with both brand management and the cities to be evaluated; they rank the cities, and their rankings are compared with Anholt's results. The TM method applies the proposed text mining approach with all evaluation criteria.
The TM-LEARN method, an extension of TM, selects significant evaluation items within every criterion and evaluates the cities with the selected items only. RMSE is used as the metric to compare the evaluation results. The experimental results are as follows. First, compared to evaluation targeting ordinary people, the proposed method was more accurate. Second, compared to the traditional survey method, it requires far less time and cost because it is automated. Third, it is timely, because the evaluation can be repeated at any point in time. Fourth, unlike Anholt's method, which evaluates only pre-specified cities, it is applicable to any location. Finally, it has relatively high objectivity because it is based on open data. As a result, the proposed text mining approach to city image evaluation shows validity in terms of accuracy, cost-effectiveness, timeliness, scalability, and reliability. It provides managers in the public and private sectors with clear guidelines for brand management. Local officials, for example, could use it to formulate strategies and enhance the image of their places efficiently: rather than conducting heavy questionnaires, they could quickly monitor the current place image and proceed to a formal place image survey only when the results of the proposed method are out of the ordinary, whether they indicate an opportunity or a threat.
Moreover, by combining the proposed method with morphological analysis, extraction of meaningful facets of the place brand from text, sentiment analysis, and more, marketing strategists and civil engineering professionals may obtain deeper and more abundant insights toward better place brand images. In the future, a prototype system will be implemented to demonstrate the feasibility of the idea proposed in this paper.
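Comparing a computed city ranking against Anholt's ranking with RMSE, as in the evaluation above, reduces to a small helper; `rank_rmse` is a hypothetical name used here for illustration.

```python
import math

def rank_rmse(ranking_a, ranking_b):
    """RMSE between two rankings of the same set of cities: 0 means the
    rankings agree exactly, larger values mean greater disagreement."""
    pos_b = {city: i for i, city in enumerate(ranking_b)}
    sq_errors = [(i - pos_b[city]) ** 2 for i, city in enumerate(ranking_a)]
    return math.sqrt(sum(sq_errors) / len(sq_errors))
```

Each of the four methods (RANDOM, HUMAN, TM, TM-LEARN) would be scored this way against the Anholt reference ranking, and the method with the smallest RMSE judged closest.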

High-resolution medium-range streamflow prediction using distributed hydrological model WRF-Hydro and numerical weather forecast GDAPS (분포형 수문모형 WRF-Hydro와 기상수치예보모형 GDAPS를 활용한 고해상도 중기 유량 예측)

  • Kim, Sohyun;Kim, Bomi;Lee, Garim;Lee, Yaewon;Noh, Seong Jin
    • Journal of Korea Water Resources Association
    • /
    • v.57 no.5
    • /
    • pp.333-346
    • /
    • 2024
  • High-resolution medium-range streamflow prediction is crucial for sustainable water quality and aquatic ecosystem management. For reliable medium-range streamflow predictions, it is necessary to understand the characteristics of the forcings and to effectively utilize weather forecast data with low spatio-temporal resolution. In this study, we present a comparative analysis of medium-range streamflow predictions using the distributed hydrological model WRF-Hydro and the numerical weather forecast Global Data Assimilation and Prediction System (GDAPS) in the Geumho River basin, Korea. Multiple forcings, ground observations (AWS & ASOS), the numerical weather forecast (GDAPS), and the Global Land Data Assimilation System (GLDAS), were ingested to investigate the performance of streamflow predictions with a high-resolution WRF-Hydro configuration. In terms of mean areal accumulated rainfall, GDAPS overestimated by 36% to 234% and GLDAS reanalysis data overestimated by 80% to 153% compared to AWS & ASOS. Streamflow predictions using AWS & ASOS achieved KGE and NSE values of 0.6 or higher at the Kangchang station. Meanwhile, GDAPS-based streamflow predictions showed high variability, with KGE values ranging from 0.871 to -0.131 depending on the rainfall event. Although the peak flow error of GDAPS was larger than or similar to that of GLDAS, its peak flow timing error was smaller: the average timing errors of AWS & ASOS, GDAPS, and GLDAS were 3.7 hours, 8.4 hours, and 70.1 hours, respectively. Medium-range streamflow predictions using GDAPS and high-resolution WRF-Hydro may provide useful information for water resources management, especially regarding the occurrence and timing of peak flow, albeit with high uncertainty in flood magnitude.
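The KGE and NSE scores used above to assess the streamflow predictions have standard definitions, sketched here for reference:

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is perfect, 0 matches the mean of obs."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

def kge(obs, sim):
    """Kling-Gupta efficiency: combines correlation (r), variability ratio
    (alpha) and bias ratio (beta); 1 is perfect."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    r = np.corrcoef(obs, sim)[0, 1]
    alpha = sim.std() / obs.std()
    beta = sim.mean() / obs.mean()
    return 1.0 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)
```

Unlike NSE, KGE can go well below zero (as in the -0.131 reported for some GDAPS events) when bias or variability errors dominate.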

A Study on the Design of the Grid-Cell Assessment System for the Optimal Location of Offshore Wind Farms (해상풍력발전단지의 최적 위치 선정을 위한 Grid-cell 평가 시스템 개념 설계)

  • Lee, Bo-Kyeong;Cho, Ik-Soon;Kim, Dae-Hae
    • Journal of the Korean Society of Marine Environment & Safety
    • /
    • v.24 no.7
    • /
    • pp.848-857
    • /
    • 2018
  • Recently, new renewable energy sources, including solar power, waves, and fuel cells, have been actively developed around the world. In particular, floating offshore wind farms have been developed to save costs through large-scale production, use high-quality wind power, and minimize noise damage in the ocean area. The development of floating wind farms requires an evaluation under the Maritime Safety Audit Scheme of the Maritime Safety Act in Korea, and floating wind farms shall be assessed by applying the line and area concept for systematic development, management, and utilization of specified sea waters; appropriate evaluation methods and standards are also required. In this study, proper standards for marine traffic surveys and assessments were established, and a systematic treatment for assessing marine spatial areas was studied. First, a marine traffic data collector using AIS or radar was designed to conduct marine traffic surveys. In addition, assessment methods such as historical tracks, traffic density, and marine traffic pattern analysis applying the line and area concept were proposed. Marine traffic density can be evaluated spatially and temporally with an adjustable grid-cell scale, and marine traffic pattern analysis was proposed for assessing ship movement patterns in transit or work areas. Finally, the conceptual design of a Marine Traffic and Safety Assessment Solution (MaTSAS) was completed that can automatically collect and assess marine traffic data. Automated, systematic collection, analysis, and retrieval of marine traffic data makes it possible to minimize inaccurate estimation due to human errors such as data omission or misprints. This study can provide reliable assessment results reflecting the line and area concept according to sea area usage.
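The grid-cell traffic-density assessment described above can be sketched as simple 2D binning of AIS position reports; the cell size and the function name are illustrative choices, and the adjustable `cell_deg` parameter corresponds to the adjustable grid-cell scale in the text.

```python
import numpy as np

def traffic_density(lons, lats, bounds, cell_deg=0.01):
    """Bin AIS position reports into a lon/lat grid; each cell's count is a
    simple spatial measure of marine traffic density."""
    lon0, lon1, lat0, lat1 = bounds
    nx = int(np.ceil((lon1 - lon0) / cell_deg))
    ny = int(np.ceil((lat1 - lat0) / cell_deg))
    grid, _, _ = np.histogram2d(lons, lats, bins=[nx, ny],
                                range=[[lon0, lon1], [lat0, lat1]])
    return grid
```

Temporal density follows by computing one such grid per time window and comparing the sequence of grids.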

A Study on the Field Data Applicability of Seismic Data Processing using Open-source Software (Madagascar) (오픈-소스 자료처리 기술개발 소프트웨어(Madagascar)를 이용한 탄성파 현장자료 전산처리 적용성 연구)

  • Son, Woohyun;Kim, Byoung-yeop
    • Geophysics and Geophysical Exploration
    • /
    • v.21 no.3
    • /
    • pp.171-182
    • /
    • 2018
  • We performed seismic field data processing using an open-source software package (Madagascar) to verify whether it is applicable to field data, which have a low signal-to-noise ratio and high velocity uncertainty. Madagascar, built around Python-based workflows, is considered well suited to developing processing technologies because of its multidimensional data analysis capabilities and reproducibility. However, this open-source software has not been widely used for field data processing because of its complicated interfaces and data structure. To verify its effectiveness on field data, we applied Madagascar to a typical seismic processing flow including data loading, geometry build-up, F-K filtering, predictive deconvolution, velocity analysis, normal moveout correction, stacking, and migration. The field data for the test were acquired in the Gunsan Basin, Yellow Sea, using a streamer of 480 channels and 4 air-gun arrays. The results at each processing step were compared with those from Landmark's ProMAX (SeisSpace R5000), a commercial processing package. Madagascar showed relatively high efficiency in data I/O and management as well as reproducibility, and it performed quick and exact calculations in automated procedures such as stacking velocity analysis. There were no remarkable differences in the results after applying the signal enhancement flows of the two packages. For the deeper part of the subsurface image, however, the commercial software produced better results, simply because it offers various de-multiple flows and interactive environments for delicate processing work that Madagascar lacks.
Considering that many researchers around the world are developing data processing algorithms for Madagascar, open-source software of this kind can be expected to see wide use in commercial-level processing, with the strengths of expandability, cost-effectiveness, and reproducibility.
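An F-K (frequency-wavenumber) filter of the kind applied in the processing flow can be sketched with NumPy's 2D FFT. This is a generic dip filter under assumed sampling parameters, not Madagascar's implementation.

```python
import numpy as np

def fk_filter(gather, dt, dx, vmin):
    """Generic dip (F-K) filter: remove energy whose apparent velocity
    f/k falls below vmin, i.e. steeply dipping linear noise."""
    spec = np.fft.fft2(gather)                                # t-x -> F-K
    f = np.abs(np.fft.fftfreq(gather.shape[0], dt))[:, None]  # Hz
    k = np.abs(np.fft.fftfreq(gather.shape[1], dx))[None, :]  # cycles/m
    v_app = f / np.maximum(k, 1e-12)                          # apparent velocity
    keep = v_app >= vmin                  # keep fast (near-vertical) events
    return np.real(np.fft.ifft2(spec * keep))
```

In a real flow the mask edges would be tapered to avoid ringing; the hard cutoff here keeps the sketch short.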

The Effect of Penalizing Wrong Answers Upon the Omission Response in the Computerized Modified Multiple-choice Testing (컴퓨터화 변형 선다형 시험 방식에서 감점제가 시험 점수와 반응 포기에 미치는 영향)

  • Song, Min Hae;Park, Jooyong
    • Korean Journal of Cognitive Science
    • /
    • v.28 no.4
    • /
    • pp.315-328
    • /
    • 2017
  • Even though assessment using information and communication technology will most likely lead the future of educational assessment, there is little domestic research on this topic. Computerized assessment will not only cut costs but also measure students' performance in ways not possible before. In this context, this study introduces a tool that can overcome the problems of multiple-choice tests, the most widely used type of assessment in the current Korean educational setting. Multiple-choice tests, in which options are presented with the questions, are efficient in that grading can be automated; however, they allow students who do not know the answer to find the correct answer among the options. Park (2005) developed a computerized modified multiple-choice testing system (CMMT) using the interactivity of computers, which presents the question first and shows the options only briefly when the student requests them. The present study examined whether penalizing wrong answers could lower the likelihood of students choosing an answer from the options when they do not know the correct one. 116 students were tested with directions stating that wrong answers, but not omitted responses, would be penalized. There were 4 experimental conditions: 2 penalty levels (high or low), each in either the traditional multiple-choice or the CMMT format. The results were analyzed using a two-way ANOVA on the number of omitted responses, the test score, and the self-report score. The analysis showed that the number of omitted responses was significantly higher for the CMMT format and that test scores were significantly lower when the penalty was high. The possibility of applying CMMT-format tests with penalties for wrong answers in actual testing settings was addressed, and the need for further research in the cognitive sciences to develop computerized assessment tools was discussed.
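The logic of penalizing wrong answers can be made concrete with the expected value of a blind guess; the well-known result is that a penalty of 1/(n-1) on an n-option item makes guessing exactly worthless. This is a general illustration, not the specific penalty levels used in the study.

```python
def expected_guess_score(n_options, penalty):
    """Expected value of a blind guess on one item: +1 point with
    probability 1/n, minus `penalty` points otherwise. A penalty of
    1/(n-1) makes the expected gain from guessing exactly zero."""
    p_correct = 1.0 / n_options
    return p_correct * 1.0 - (1.0 - p_correct) * penalty
```

With a harsher penalty the expected value turns negative, which is what makes omitting a response rational for a student who does not know the answer.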