• Title/Summary/Keyword: Geometric Algorithms (기하 알고리즘)


Measurement and Quality Control of MIROS Wave Radar Data at Dokdo (독도 MIROS Wave Radar를 이용한 파랑관측 및 품질관리)

  • Jun, Hyunjung;Min, Yongchim;Jeong, Jin-Yong;Do, Kideok
    • Journal of Korean Society of Coastal and Ocean Engineers / v.32 no.2 / pp.135-145 / 2020
  • Waves are commonly observed either by direct methods, which measure the water surface elevation with a wave buoy or pressure gauge, or by remote-sensing methods. Wave buoys and pressure gauges produce high-quality wave data but carry a high risk of instrument damage or loss and high maintenance costs offshore. Remote methods such as radar, by contrast, are easy to maintain because the equipment is installed on land, but their accuracy is somewhat lower than that of direct observation. This study investigates the data quality of the MIROS Wave and Current Radar (MWR) installed at Dokdo and improves the quality of its remote wave observations using data from the wave buoy (CWB) operated by the Korea Meteorological Administration. We developed and applied three types of wave data quality control: 1) the combined use (Optimal Filter) of the filters designed by MIROS (Reduce Noise Frequency, Phillips Check, Energy Level Check); 2) the Spike Test Algorithm (Spike Test) developed by the OOI (Ocean Observatories Initiative); and 3) a new filter (H-Ts QC) based on the relationship between significant wave height and period. With these three quality controls applied, the MWR data show reasonable reliability for significant wave height, but errors remain in the significant wave period, so further improvement is required. Moreover, since the MWR data differ somewhat from the CWB data for high waves over 3 m, further work such as the collection and analysis of long-term remote wave observations and additional filter development is necessary.
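
As a rough illustration of the two non-proprietary checks named above, the sketch below implements a window-based spike test in the spirit of the OOI algorithm and a height-period plausibility check based on deep-water wave steepness. The window size, deviation threshold, and steepness limit are illustrative assumptions, not the paper's parameters.

```python
# Hedged sketch of two QC checks of the kind described in the abstract.
import numpy as np

def spike_test(x, window=5, n_std=3.0):
    """Flag samples deviating from the local (windowed) mean by more than
    n_std local standard deviations, in the spirit of the OOI spike test."""
    x = np.asarray(x, dtype=float)
    flags = np.zeros(x.size, dtype=bool)
    half = window // 2
    for i in range(half, x.size - half):
        neigh = np.delete(x[i - half:i + half + 1], half)  # exclude center
        mu, sd = neigh.mean(), neigh.std()
        if sd > 0 and abs(x[i] - mu) > n_std * sd:
            flags[i] = True
    return flags

def hts_check(hs, ts, max_steepness=1/10):
    """Flag records whose height/period pair implies an implausibly steep
    wave: deep-water wavelength L0 ~ 1.56 * T^2 (m), so the steepness
    Hs / L0 should stay below a physical limit (threshold assumed here)."""
    hs, ts = np.asarray(hs, float), np.asarray(ts, float)
    return hs / (1.56 * ts**2) > max_steepness

# toy usage
hs = np.array([1.2, 1.3, 6.5, 1.4, 1.2])   # the 6.5 m value looks like a spike
ts = np.array([6.0, 6.1, 3.0, 6.2, 6.0])   # 6.5 m at 3 s is also too steep
print(spike_test(hs), hts_check(hs, ts))
```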

Selective Word Embedding for Sentence Classification by Considering Information Gain and Word Similarity (문장 분류를 위한 정보 이득 및 유사도에 따른 단어 제거와 선택적 단어 임베딩 방안)

  • Lee, Min Seok;Yang, Seok Woo;Lee, Hong Joo
    • Journal of Intelligence and Information Systems / v.25 no.4 / pp.105-122 / 2019
  • Dimensionality reduction is one of the methods for handling big data in text mining. The density of the data must be considered, since it strongly influences sentence classification performance; higher-dimensional data require more computation and can lead to high computational cost and overfitting, so dimension reduction is necessary to improve model performance. Diverse methods have been proposed, ranging from merely reducing noise in the data, such as misspellings or informal text, to incorporating semantic and syntactic information. Moreover, the representation and selection of text features affect classifier performance in sentence classification, one of the fields of Natural Language Processing. The common goal of dimension reduction is to find a latent space that is representative of the raw data in the observation space. Existing methods utilize various algorithms, such as feature extraction and feature selection, and additionally use word embeddings, which learn low-dimensional vector-space representations of words that capture semantic and syntactic information. To improve performance, recent studies have suggested modifying the word dictionary according to the positive and negative scores of pre-defined words. The basic idea of this study is that similar words have similar vector representations: once a feature selection algorithm identifies unimportant words, we assume that words similar to them also have no impact on sentence classification. This study proposes two ways to achieve more accurate classification, which conduct selective word elimination under specific rules and construct word embeddings based on Word2Vec. To identify words of low importance in the text, we use the information gain algorithm to measure importance and cosine similarity to find similar words. First, we eliminate words with comparatively low information gain values from the raw text and build word embeddings. Second, we additionally remove words that are similar to those with low information gain values and build word embeddings. Finally, the filtered text and word embeddings are fed to deep learning models: a Convolutional Neural Network and an Attention-Based Bidirectional LSTM. This study uses customer reviews of Kindle products on Amazon.com, IMDB, and Yelp as datasets and classifies each with the deep learning models. Reviews that received more than five helpful votes, with a helpful-vote ratio over 70%, were classified as helpful reviews. Since Yelp shows only the number of helpful votes, we randomly sampled 100,000 reviews with more than five helpful votes from among 750,000 reviews. Minimal preprocessing, such as removing numbers and special characters from the text, was applied to each dataset. To evaluate the proposed methods, we compared their performance against Word2Vec and GloVe embeddings that use all the words, and showed that one of the proposed methods outperforms the all-word embeddings: removing unimportant words yields better performance, although removing too many words lowers it.
For future research, diverse preprocessing approaches and an in-depth analysis of word co-occurrence for measuring similarity between words should be considered. Also, we applied the proposed method only with Word2Vec; other embedding methods such as GloVe, fastText, and ELMo could be combined with the proposed elimination methods, making it possible to explore the combinations of embedding and elimination methods.
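
A minimal sketch of the selection idea, assuming information gain can be approximated with sklearn's mutual information and using gensim's Word2Vec for cosine similarity; the toy corpus, the bottom-20% cutoff, and the 0.9 similarity threshold are illustrative, not the paper's settings.

```python
# Hedged sketch: score words by information gain (approximated with
# mutual information), drop low-scoring words, then also drop words whose
# Word2Vec vectors are highly similar to the dropped ones.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import mutual_info_classif
from gensim.models import Word2Vec

docs = ["great read highly recommend", "terrible plot waste of time",
        "loved the characters", "boring and predictable"]
labels = [1, 0, 1, 0]

# 1) information gain per vocabulary word
vec = CountVectorizer()
X = vec.fit_transform(docs)
ig = mutual_info_classif(X, labels, discrete_features=True)
vocab = np.array(vec.get_feature_names_out())
low_ig = set(vocab[ig < np.percentile(ig, 20)])   # bottom 20% by IG

# 2) expand the removal set with cosine-similar words
w2v = Word2Vec([d.split() for d in docs], vector_size=50, min_count=1, seed=1)
expanded = set(low_ig)
for w in low_ig:
    if w in w2v.wv:
        expanded |= {s for s, sim in w2v.wv.most_similar(w, topn=3) if sim > 0.9}

filtered = [" ".join(t for t in d.split() if t not in expanded) for d in docs]
print(filtered)  # filtered text then feeds the CNN / BiLSTM classifiers
```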

High-Speed Implementation and Efficient Memory Usage of Min-Entropy Estimation Algorithms in NIST SP 800-90B (NIST SP 800-90B의 최소 엔트로피 추정 알고리즘에 대한 고속 구현 및 효율적인 메모리 사용 기법)

  • Kim, Wontae;Yeom, Yongjin;Kang, Ju-Sung
    • Journal of the Korea Institute of Information Security & Cryptology / v.28 no.1 / pp.25-39 / 2018
  • NIST (National Institute of Standards and Technology) has recently published the second draft of SP 800-90B, the document for evaluating the security of entropy sources, a key element of cryptographic random number generators (RNGs), and has provided a tool implemented in Python. In SP 800-90B, the security evaluation of an entropy source is a process of estimating min-entropy with several estimators, divided into an IID track and a non-IID track. On the IID track, the entropy source is estimated only with the MCV estimator; on the non-IID track, it is estimated with 10 estimators including MCV. The running time of NIST's tool on the non-IID track is approximately 20 minutes, and its memory usage exceeds 5.5 GB. For evaluation agencies that must repeatedly evaluate various samples, and for developers or researchers who must run experiments in various environments, estimating entropy with the tool can be inconvenient and, depending on the environment, impossible. In this paper, we propose high-speed implementations and an efficient memory usage technique for the min-entropy estimation algorithms of SP 800-90B. Our major achievements are three methods for improved speed and reduced memory usage: exploiting the advantages of C++ to speed up the MultiMCW estimator, rebuilding the data storage structure to reduce the memory and improve the speed of MultiMMC, and rebuilding the data structure to improve the speed of LZ78Y. The tool incorporating our proposed methods is 14 times faster and uses 13 times less memory than NIST's tool.
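
For reference, the MCV estimator mentioned above, the single estimator used on the IID track, is fully specified in SP 800-90B; a minimal sketch in Python follows (the paper's optimizations target the C++ side and the heavier MultiMCW/MultiMMC/LZ78Y estimators, which are not reproduced here).

```python
# MCV (Most Common Value) min-entropy estimate per SP 800-90B.
import math
import random
from collections import Counter

def mcv_min_entropy(samples):
    """Estimate min-entropy per sample from the most common value."""
    L = len(samples)
    p_hat = Counter(samples).most_common(1)[0][1] / L
    # upper bound of a 99% confidence interval on p_hat (z = 2.576)
    p_u = min(1.0, p_hat + 2.576 * math.sqrt(p_hat * (1 - p_hat) / (L - 1)))
    return -math.log2(p_u)

# toy usage: uniform random bytes give a bit under 8 bits/sample
random.seed(0)
print(mcv_min_entropy([random.randrange(256) for _ in range(100000)]))
```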

Development of Analytic Hierarchy Process for Solving Dependence Relation between Multicriteria (다기준 평가항목간 중복도를 반영한 AHP 기법 개발)

  • 송기한;홍상연;정성봉;전경수
    • Journal of Korean Society of Transportation / v.20 no.7 / pp.15-22 / 2002
  • Transportation project appraisal should be precise in order to increase social welfare and efficiency, yet projects have traditionally been evaluated with a single-criterion analysis such as benefit/cost analysis. That method cannot assess qualitative items or resolve clashes of interest among various groups. Multi-criteria analysis can address these problems, and Saaty developed one such method, the AHP (Analytic Hierarchy Process). In AHP, a project is evaluated through weighted scores of the criteria and alternatives, obtained from a questionnaire given to specialists. AHP rests on some strict assumptions, such as reciprocal comparison, homogeneity, expectation, and independence among the criteria, but assuming that each criterion is independent of the others is problematic for two reasons. First, in real situations there is rarely perfect independence between criteria. Second, individuals, even specialists in the area, do not perceive the degree of dependence identically. This paper develops a modified AHP method for handling such dependence between criteria. First of all, in this method, the degree of dependence that specialists perceive between criteria is surveyed and incorporated into the weighted scores. The study proposes three methods to implement this idea: the first incorporates the degree of dependence in the first step of calculating the weighted scores, while the other two adjust the weighted scores from the basic AHP for the dependence; one distributes the cross-weighted score to each criterion at a constant ratio, and the other splits it using fuzzy measures such as Bel and Pl. Finally, to validate these methods, the paper applies them to evaluate alternatives for mitigating public complaints about a rail line in an urban area in Korea.
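
For context, here is a minimal sketch of the standard AHP weighting step that the proposed methods modify: weights are taken from the principal eigenvector of a reciprocal pairwise-comparison matrix. The dependence adjustment itself is not reproduced here, and the sample matrix is illustrative.

```python
# Standard AHP criterion weighting via the principal eigenvector.
import numpy as np

def ahp_weights(pairwise):
    """Principal-eigenvector weights from a reciprocal comparison matrix."""
    vals, vecs = np.linalg.eig(np.asarray(pairwise, dtype=float))
    w = np.real(vecs[:, np.argmax(np.real(vals))])
    return w / w.sum()   # normalize (also fixes the eigenvector's sign)

# 3 criteria; A[i][j] = how much more important criterion i is than j
A = [[1,   3,   5],
     [1/3, 1,   2],
     [1/5, 1/2, 1]]
print(ahp_weights(A))  # roughly [0.65, 0.23, 0.12]
```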

A Study of a Non-commercial 3D Planning System, Plunc for Clinical Applicability (비 상업용 3차원 치료계획시스템인 Plunc의 임상적용 가능성에 대한 연구)

  • Cho, Byung-Chul;Oh, Do-Hoon;Bae, Hoon-Sik
    • Radiation Oncology Journal / v.16 no.1 / pp.71-79 / 1998
  • Purpose: The objective of this study is to introduce our installation of Plunc, a non-commercial 3D planning system, and to confirm its clinical applicability in various treatment situations. Materials and Methods: We obtained the source code of Plunc, offered by the University of North Carolina, and installed it on a Pentium Pro 200 MHz PC (128 MB RAM, Millennium VGA) running Linux. To examine the accuracy of dose distributions calculated by Plunc, we input beam data for the 6 MV photon beam of our linear accelerator (Siemens MXE 6740), including tissue-maximum ratios, scatter-maximum ratios, attenuation coefficients, and wedge filter shapes. We then compared dose distributions calculated by Plunc (percent depth dose, PDD; dose profiles with and without wedge filters; oblique incident beams; and dose distributions under an air gap) with measured values. Results: Plunc operated in almost real time, apart from about 10 seconds spent on full-volume dose distribution and dose-volume histogram (DVH) calculations on the PC described above. Compared with measurements for irradiation at 90 cm SSD and 10 cm isocenter depth, the PDD curves calculated by Plunc were accurate within 1% except in the buildup region. For dose profiles with and without wedge filters, the calculated values were accurate within 2% except in the low-dose region outside the field, where Plunc showed a 5% dose reduction. For the oblique incident beam, agreement was good except in the low-dose region below 30% of the isocenter dose. For the dose distribution under an air gap, there was a 5% error in the central-axis dose. Conclusion: By comparing Plunc photon dose calculations with measurements, we confirmed that Plunc achieves acceptable accuracy of about 2-5% in typical treatment situations, comparable to commercial planning systems using correction-based algorithms. Plunc does not yet include electron beam planning; however, it is possible to implement electron dose calculation modules or more accurate photon dose calculations in the Plunc system. Plunc is useful for overcoming many limitations of 2D planning systems in clinics where a commercial 3D planning system is not available.
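
As a rough illustration of the kind of comparison reported in the Results, the sketch below checks the difference between calculated and measured PDD curves against a 1% tolerance; all numbers are illustrative placeholders, not the paper's data.

```python
# Hedged sketch: percent difference between calculated and measured
# percent-depth-dose curves, normalized to the dose maximum.
import numpy as np

depth = np.array([1.5, 5.0, 10.0, 15.0, 20.0])            # cm (illustrative)
pdd_measured   = np.array([100.0, 86.0, 67.0, 52.0, 40.5])
pdd_calculated = np.array([100.0, 86.5, 66.4, 52.3, 40.1])

# error is already expressed in % of D_max, since PDD is normalized to 100
err = pdd_calculated - pdd_measured
print("max |error|: %.1f%%" % np.abs(err).max())
print("within 1% tolerance:", bool(np.all(np.abs(err) <= 1.0)))
```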


A Reflectance Normalization Via BRDF Model for the Korean Vegetation using MODIS 250m Data (한반도 식생에 대한 MODIS 250m 자료의 BRDF 효과에 대한 반사도 정규화)

  • Yeom, Jong-Min;Han, Kyung-Soo;Kim, Young-Seup
    • Korean Journal of Remote Sensing / v.21 no.6 / pp.445-456 / 2005
  • Land surface parameters should be determined with sufficient accuracy because they play an important role in climate change near the ground. Since surface reflectance exhibits strong anisotropy, off-nadir viewing results in a strong dependence of the observations on the Sun-target-sensor geometry, which contributes random noise produced by surface angular effects. The principal objective of this study is to provide a database of accurate surface reflectance, with the angular effects removed, from MODIS 250 m reflective-channel data over Korea. The MODIS (Moderate Resolution Imaging Spectroradiometer) sensor provides visible and near-infrared channel reflectance at 250 m resolution on a daily basis. The successive analytic processing steps were first performed on a per-pixel basis to remove cloudy pixels. Geometric distortion was corrected by nearest-neighbor resampling using a 2nd-order polynomial obtained from the geolocation information of the MODIS data set. To correct the surface anisotropy effects, this paper applied a semiempirical kernel-driven Bidirectional Reflectance Distribution Function (BRDF) model. The algorithm inverts the kernel-driven model against the angular components, such as viewing zenith angle, solar zenith angle, viewing azimuth angle, and solar azimuth angle, from the reflectance observed by the satellite. We first assemble sets of model observations over a 31-day period to fit the BRDF model. In the next step, nadir-view reflectance normalization is carried out by adjusting the angular components separated by the BRDF model for each spectral band and each pixel. Modeled reflectance values show good agreement with measured reflectance values, with an overall RMSE (Root Mean Square Error) of about 0.01 (maximum 0.03). Finally, we provide a normalized surface reflectance database consisting of 36 images over Korea for 2001.
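
A minimal sketch of the kernel-driven inversion and nadir normalization described above, assuming the volumetric and geometric kernel values have already been computed for each observation's Sun-target-sensor geometry; the kernel numbers below are made up for illustration.

```python
# Hedged sketch: fit R = f_iso + f_vol*K_vol + f_geo*K_geo by least squares
# over a multi-day stack, then rescale each observation to nadir view.
import numpy as np

def invert_brdf(refl, k_vol, k_geo):
    """Least-squares fit of (f_iso, f_vol, f_geo) for one pixel/band."""
    A = np.column_stack([np.ones_like(k_vol), k_vol, k_geo])
    f, *_ = np.linalg.lstsq(A, refl, rcond=None)
    return f

def normalize_to_nadir(refl_obs, f, k_obs, k_nadir):
    """Scale observed reflectance by the modeled nadir/observed ratio."""
    model = lambda k: f[0] + f[1] * k[0] + f[2] * k[1]
    return refl_obs * model(k_nadir) / model(k_obs)

# toy usage: a 31-day stack for one pixel with made-up kernel values;
# real k_nadir would come from evaluating the kernels at 0 view zenith.
rng = np.random.default_rng(0)
k_vol, k_geo = rng.normal(0, 0.3, 31), rng.normal(-1, 0.3, 31)
refl = 0.25 + 0.05 * k_vol - 0.03 * k_geo + rng.normal(0, 0.005, 31)
f = invert_brdf(refl, k_vol, k_geo)
print(normalize_to_nadir(refl[0], f, (k_vol[0], k_geo[0]), (0.0, -1.3)))
```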

Development of a Feasibility Evaluation Model for Apartment Remodeling with the Number of Households Increasing at the Preliminary Stage (노후공동주택 세대수증가형 리모델링 사업의 기획단계 사업성평가 모델 개발)

  • Koh, Won-kyung;Yoon, Jong-sik;Yu, Il-han;Shin, Dong-woo;Jung, Dae-woon
    • Korean Journal of Construction Engineering and Management / v.20 no.4 / pp.22-33 / 2019
  • The government has steadily revised and developed laws and systems to activate the remodeling of apartments in response to the problems of aged apartment stock. Despite such efforts, however, remodeling has yet to take off. Among the many reasons, this study notes that there has been no tool for reasonable profitability judgments and decision making at the preliminary stage of a remodeling project. Thus, a feasibility evaluation model was developed. Profitability judgments are generally made after the conceptual design, but the decision to pursue a remodeling project is made at the preliminary stage, so a feasibility evaluation model is required at that stage. Accordingly, this study developed a feasibility evaluation model for determining preliminary-stage profitability. Construction costs, business expenses, financial expenses, and sales revenue were calculated using the information available at the initial stage and remodeling variables derived from existing cases. Through this process, we developed an algorithm that gives an overview of the return on investment. The preliminary-stage feasibility evaluation model was then applied to three cases to verify its applicability; the difference between the model's forecasts and the actual values was less than 5%, which indicates high applicability. If the case base is expanded in the future, the model will be a useful tool for practical work. The feasibility evaluation model developed in this study will support decision making by union members, and if applied in different regions, it is expected to help local governments gauge the scale of possible remodeling projects.
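
A minimal sketch of the model's bottom line, an overview of return on investment from preliminary-stage cost and revenue estimates; the function and all figures are hypothetical placeholders, since the paper's actual cost items and coefficients are not given here.

```python
# Hedged sketch: ROI overview from early-stage estimates (all hypothetical).
def remodeling_roi(construction_cost, business_expense,
                   financial_expense, sales_revenue):
    """Return-on-investment overview for a household-increasing remodel:
    revenue from newly added units against total project cost."""
    total_cost = construction_cost + business_expense + financial_expense
    return (sales_revenue - total_cost) / total_cost

# e.g., costs in billions of KRW for a hypothetical complex
print(f"ROI: {remodeling_roi(95.0, 8.0, 4.5, 120.0):.1%}")  # -> 11.6%
```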

An adaptive digital watermark using spatial masking (공간 마스킹을 이용한 적응적 디지털 워터 마크)

  • 김현태
    • Journal of the Korea Institute of Information Security & Cryptology / v.9 no.3 / pp.39-52 / 1999
  • In this paper, we propose a new watermarking technique for the copyright protection of images. The proposed technique is based on a spatial masking method with a spatial scale parameter. In general, a watermark becomes more robust against various attacks, but degrades image quality, as its amplitude increases; conversely, it becomes perceptually more invisible but more vulnerable to attacks as its amplitude decreases. It is therefore quite complex to find the right compromise between the robustness of the watermark and its visibility. We note that spread-spectrum watermarking alone is not robust enough: some areas of the image, such as textured regions, can tolerate strong watermark signals, whereas large smooth areas cannot. To keep the watermark invisible in such areas, the spatial masking characteristics of the HVS (Human Visual System) should be exploited: in textured regions the magnitude of the watermark can be large, whereas in smooth regions it should be small. As a result, the proposed watermarking algorithm is intended to satisfy both the robustness of the watermark and the quality of the image. Experimental results show that the proposed algorithm is robust to image deformations (such as compression, noise addition, image scaling, clipping, and collusion attacks).
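
A minimal sketch of the adaptive idea, assuming local standard deviation as a stand-in for the spatial masking measure: a pseudo-random spread-spectrum mark is scaled by a texture map so textured regions receive a stronger watermark than smooth ones. The window size and base amplitude are illustrative, not the paper's parameters.

```python
# Hedged sketch: texture-adaptive spread-spectrum watermark embedding.
import numpy as np

def embed_adaptive(img, key=0, base_alpha=2.0, win=3):
    img = img.astype(float)
    # local standard deviation as a simple spatial-masking measure
    pad = np.pad(img, win // 2, mode="edge")
    local_sd = np.zeros_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            local_sd[i, j] = pad[i:i + win, j:j + win].std()
    mask = local_sd / (local_sd.max() + 1e-9)     # 0 (smooth) .. 1 (texture)
    rng = np.random.default_rng(key)
    w = rng.choice([-1.0, 1.0], size=img.shape)   # spread-spectrum mark
    return np.clip(img + base_alpha * mask * w, 0, 255)

# toy usage on a random 8-bit "image"
img = np.random.default_rng(1).integers(0, 256, (64, 64))
marked = embed_adaptive(img)
print(float(np.abs(marked - img).mean()))  # average embedding strength
```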

Current status and future of insect smart factory farm using ICT technology (ICT기술을 활용한 곤충스마트팩토리팜의 현황과 미래)

  • Seok, Young-Seek
    • Food Science and Industry / v.55 no.2 / pp.188-202 / 2022
  • In the insect industry, as the scope of application expands from pet insects and natural enemies to feed, edible, and medicinal insects, demand for the quality control of insect raw materials is increasing, as is interest in securing the safety of insect products. In scaling up the industry, controlling the temperature, humidity, and air quality in the insect breeding room and preventing the spread of pathogens and other pollutants are important success factors, requiring a controlled environment under an operating system. European commercial insect breeding facilities have attracted considerable investor interest, and insect companies are building large-scale production facilities; this became possible after the EU approved the use of insect protein as feedstock for fish farming in July 2017. Other fields, such as food and medicine, have also accelerated the application of cutting-edge technology. In the future, the global insect industry is expected to become increasingly subdivided into systems that purchase eggs or small larvae from suppliers and focus on fattening the larvae, i.e., the production raw material, until the insects mature; systems that handle the entire production process from egg laying through harvesting and the initial pre-treatment of larvae; and large-scale production systems that cover all stages of insect larva production plus further processing steps such as milling, fat removal, and protein or fat fractionation. In Korea, research and development on insect smart factory farms using artificial intelligence and ICT is accelerating, so that insects can serve as carbon-free materials in secondary industries such as natural plastics or natural molding materials, as well as in existing feed and food uses. A Korean-style customized breeding system for shortening the breeding period or enhancing functionality is expected to be developed soon.

Evaluating efficiency of Vertical MLC VMAT plan for nasopharyngeal carcinoma (비인두암 Vertical MLC VMAT plan 유용성 평가)

  • Chae, Seung Hoon;Son, Sang Jun;Lee, Je Hee
    • The Journal of Korean Society for Radiation Therapy / v.33 / pp.127-135 / 2021
  • Purpose: The purpose of this study is to evaluate the efficiency of a Vertical MLC VMAT plan (VMV plan) using 273° and 350° collimator angles, compared to a Complemental MLC VMAT plan (CMV plan) using 20° and 340° collimator angles, for nasopharyngeal carcinoma. Materials & Methods: Thirty patients treated for nasopharyngeal carcinoma with the VMAT technique were retrospectively selected. The cases were planned in Eclipse with the PO and AcurosXB algorithms using two 6 MV 360° arcs, with collimator angles of 273° and 350°. The Complemental MLC VMAT plans are based on the existing treatment plans and share all their parameters except the collimator angles. For dosimetric evaluation, the dose-volumetric (DV) parameters of the planning target volume (PTV) and organs at risk (OARs) were calculated for all VMAT plans. MCSv (modulation complexity score for VMAT), MU, and treatment time were also compared. In addition, Pearson's correlation analysis was performed to check for correlations between the difference in MCSv and the difference in each evaluation index of the two treatment plans. Results: For the PTV evaluation indices, the CI of PTV_67.5 improved by 3.76% in the VMV plan; for the OARs, the dose reduction in the spinal cord (-14.05%) and brain stem (-9.34%) was remarkable. Dose reductions were also confirmed for the parotid glands (left: -5.38%, right: -5.97%), visual organs (left optic nerve: -4.88%, right optic nerve: -5.80%, optic chiasm: -6.12%, left lens: -6.12%, right lens: -5.26%), auditory organs (left: -11.74%, right: -12.31%), and thyroid gland (-2.02%). The difference in MCSv between the two plans showed a significant negative (-) correlation with the difference in CI of PTV_54 (r = -0.55) and of PTV_48 (r = -0.43); the spinal cord (r = 0.40), brain stem (r = 0.34), and both salivary glands (left: r = 0.36, right: r = 0.37) showed positive (+) correlations (for all values, p < .05). Conclusion: Compared to the CMV plan, the VMV plan is considered helpful for improving the quality of the treatment plan by allowing the MLC to be modulated more efficiently.
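
A minimal sketch of the correlation analysis named in the Methods, using scipy's Pearson r on made-up stand-in data for the 30 per-patient differences; the paper's actual r and p values are those reported in the Results above.

```python
# Hedged sketch: Pearson correlation between per-patient differences in
# MCSv and differences in a plan metric (CI here), on illustrative data.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
d_mcsv = rng.normal(0.0, 0.02, 30)              # MCSv(VMV) - MCSv(CMV), toy
d_ci = -0.5 * d_mcsv + rng.normal(0, 0.01, 30)  # CI difference, toy

r, p = pearsonr(d_mcsv, d_ci)
print(f"r = {r:.2f}, p = {p:.3f}")  # expect a negative correlation here
```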