Title/Summary/Keyword: Test Generation

Comparative Analysis among Radar Image Filters for Flood Mapping (홍수매핑을 위한 레이더 영상 필터의 비교분석)

  • Kim, Daeseong;Jung, Hyung-Sup;Baek, Wonkyung
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.34 no.1 / pp.43-52 / 2016
  • Owing to the characteristics of microwave signals, radar satellite imagery can be used for flood detection regardless of weather or time of day. As flood detection methods have developed, detection rates for flooded areas have increased. Because floods cause extensive damage, flooded areas must be distinguished from non-flooded areas accurately; therefore, not only image resolution but also the filtering process is critical, and resolution degradation during filtering must be minimized. Although the resolution of radar images has improved with technology, little attention has been paid to filtering methods well suited to flood detection. The purpose of this study is therefore to identify the most appropriate filtering method for flood detection by comparing three filters: the Lee filter, the Frost filter, and the NL-means filter. Each filter was applied to the same radar image, the filtered images were compared, and the flood maps derived from them were then compared in turn. As a result, the Frost and NL-means filters were more effective at removing speckle noise than the Lee filter. The Frost filter, however, caused severe resolution degradation while removing the noise. The NL-means filter did not eliminate the shadow effect, one of the main causes of false detection, as well as the other filters did; nevertheless, its result showed the best detection rate because shadow pixels make up a relatively small fraction of the entire image. The Kappa coefficient was 0.81 for the NL-means filtered image, versus 0.55, 0.64, and 0.74 for the unfiltered, Lee filtered, and Frost filtered images, respectively. The NL-means filter also removed speckle noise without resolution degradation, so flooded areas could be distinguished effectively from other areas in the NL-means filtered image.
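
As a rough illustration of this comparison (not the authors' code), the sketch below applies a basic Lee filter and scikit-image's NL-means to a simulated speckled image, thresholds each result into a binary flood map, and scores it with Cohen's Kappa; the input image, threshold, and reference map are placeholders.

```python
import numpy as np
from scipy.ndimage import uniform_filter
from skimage.restoration import denoise_nl_means, estimate_sigma

def lee_filter(img, size=7):
    """Basic Lee speckle filter from local mean/variance statistics."""
    mean = uniform_filter(img, size)
    var = np.maximum(uniform_filter(img**2, size) - mean**2, 0)
    noise_var = var.mean()                  # crude global speckle estimate
    return mean + var / (var + noise_var) * (img - mean)

def kappa(pred, truth):
    """Cohen's Kappa for a binary flood / non-flood map."""
    pred, truth = pred.ravel(), truth.ravel()
    po = (pred == truth).mean()                        # observed agreement
    pe = (pred.mean() * truth.mean()
          + (1 - pred.mean()) * (1 - truth.mean()))    # chance agreement
    return (po - pe) / (1 - pe)

rng = np.random.default_rng(0)
sar = rng.gamma(1.0, 1.0, (256, 256)).astype(np.float32)  # speckled stand-in
truth = uniform_filter(sar, 9) < 0.9        # placeholder reference flood map

nlm = denoise_nl_means(sar, h=0.8 * estimate_sigma(sar))
for name, img in [("Lee", lee_filter(sar)), ("NL-means", nlm)]:
    flood = img < np.percentile(img, 30)    # darkest pixels treated as water
    print(name, "kappa:", round(kappa(flood, truth), 3))
```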

A Study on Oxygen Reduction Reaction of PtM Electrocatalysts Synthesized by a Modified Polyol Process (수정된 폴리올 방법을 적용하여 합성한 PtM 촉매들의 산소환원반응성 연구)

  • Yang, Jongwon;Hyun, Kyuwhan;Chu, Cheunho;Kwon, Yongchai
    • Applied Chemistry for Engineering / v.25 no.1 / pp.78-83 / 2014
  • In this study, we evaluated the performance and characteristics of carbon-supported PtM (M = Ni and Y) alloy catalysts (PtM/Cs) synthesized by a modified polyol method. With the PtM/Cs employed as cathode catalysts for the oxygen reduction reaction (ORR) in proton exchange membrane fuel cells (PEMFCs), their catalytic and ORR activities and electrical performance were investigated and compared with those of commercial Pt/C. Particle sizes, particle distributions, and electrochemically active surface areas (EAS) were measured by TEM and cyclic voltammetry (CV), while ORR activity and electrical performance were explored using linear sweep voltammetry with rotating disk and rotating ring-disk electrodes as well as PEMFC single-cell tests. TEM and CV measurements show that the PtM/Cs have particle sizes and EAS comparable to those of Pt/C. In terms of ORR activity, the PtM/Cs showed half-wave potential, kinetic current density, number of electrons transferred per oxygen molecule, and H2O2 production (%) equivalent or superior to commercial Pt/C. Consistent with these three-electrode results, in PEMFC single-cell tests the current density measured at 0.6 V and the maximum power density of cells adopting the PtM/C catalysts were better than those of cells adopting the Pt/C catalyst. It is therefore concluded that PtM/C catalysts synthesized by the modified polyol method can deliver ORR catalytic capability and PEMFC performance equivalent or superior to the commercial Pt/C catalyst.
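
The electron transfer number and H2O2 yield mentioned above are conventionally derived from rotating ring-disk electrode currents. Below is a minimal sketch of those standard relations; the currents are hypothetical and the collection efficiency N is a typical value, since the paper's measured numbers are not reproduced here.

```python
def rrde_metrics(i_disk, i_ring, N=0.37):
    """Standard RRDE relations: n = 4*Id/(Id + Ir/N) electrons per O2,
    %H2O2 = 200*(Ir/N)/(Id + Ir/N). N is the ring collection efficiency."""
    ir_corr = i_ring / N                 # ring current corrected for N
    n = 4 * i_disk / (i_disk + ir_corr)
    h2o2 = 200 * ir_corr / (i_disk + ir_corr)
    return n, h2o2

# hypothetical currents in amperes, for illustration only
n, h2o2 = rrde_metrics(i_disk=1.2e-3, i_ring=2.0e-5)
print(f"n = {n:.2f} electrons/O2, H2O2 yield = {h2o2:.1f}%")
```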

Improved Method of License Plate Detection and Recognition using Synthetic Number Plate (인조 번호판을 이용한 자동차 번호인식 성능 향상 기법)

  • Chang, Il-Sik;Park, Gooman
    • Journal of Broadcast Engineering / v.26 no.4 / pp.453-462 / 2021
  • A large amount of license plate data is required for car license plate recognition, and the data must be balanced from older plate designs to the latest ones. Because it is difficult to obtain real data spanning past to current plates, license plate recognition studies based on deep learning increasingly rely on synthetic license plates. Synthetic data differ from real data, however, so various data augmentation techniques are used to close the gap. Existing augmentation pipelines simply use methods such as brightness, rotation, affine transformation, blur, and noise. In this paper, we combine these augmentation methods with a style transformation method that converts synthetic data to the style of real-world data. In addition, real license plate images are noisy when captured from a distance or in dark environments, and recognizing characters directly from such input leads to frequent misrecognition. To improve character recognition, we applied DeblurGAN-v2 as an image quality enhancement step, increasing the accuracy of license plate recognition. YOLO-v5 was used as the deep learning model for both license plate detection and number recognition. To assess the performance of the synthetic license plate data, we constructed a test set from license plates we collected ourselves. License plate detection without style conversion recorded 0.614 mAP; applying the style transformation improved detection to 0.679 mAP. In addition, the successful recognition rate was 0.872 without image enhancement and 0.915 with enhancement, confirming the improvement.
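
As a sketch of the baseline augmentation pipeline named above (brightness, rotation, affine transformation, blur, and noise), here using torchvision; the hyperparameters are illustrative and not taken from the paper.

```python
import torch
from torchvision import transforms

class AddGaussianNoise:
    """torchvision has no built-in noise transform, so define one."""
    def __init__(self, std=0.05):
        self.std = std
    def __call__(self, tensor):
        return (tensor + torch.randn_like(tensor) * self.std).clamp(0, 1)

plate_aug = transforms.Compose([
    transforms.ColorJitter(brightness=0.4),              # brightness
    transforms.RandomRotation(degrees=5),                # rotation
    transforms.RandomAffine(degrees=0,
                            translate=(0.05, 0.05),
                            shear=5),                    # affine transform
    transforms.GaussianBlur(kernel_size=5),              # blur
    transforms.ToTensor(),
    AddGaussianNoise(std=0.05),                          # noise
])
# usage: augmented = plate_aug(pil_image)  # pil_image: a synthetic plate
```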

Isolation and Identification of Lactic Acid Bacteria with Probiotic Activities from Kimchi and Their Fermentation Properties in Milk (전통 김치로부터 Probiotic 유산균의 분리 및 우유 발효 특성)

  • Lim, Young-Soon;Kim, JiYoun;Kang, HyeonCheol
    • Journal of Dairy Science and Biotechnology / v.37 no.2 / pp.115-128 / 2019
  • Lactic acid bacteria obtained from traditional kimchi were screened on the basis of their caseinolytic activity and lactose utilization and examined for use as a starter with probiotic activity. Thirty-two strains were selected as lactic acid producing bacteria on BCP agar, and two strains (KC23 and KF26) with more than 90% resistance to both acid and bile salts were chosen. The two strains were identified as L. plantarum (KC23) and L. paracasei (KF26) by the API 50 CHL system and 16S rRNA sequence analysis. L. plantarum KC23 was finally selected based on its biochemical characteristics, including lactose and raffinose utilization. Its free tyrosine content in 10% skimmed milk medium increased rapidly, from 24.1 μg/mL after 8 h to 43.9 μg/mL after 16 h, and its caseinolytic clear zone of 12 mm was greater than the 9 mm zone of commercial L. acidophilus CSLA. The bacterium exhibited mesophilic growth, reaching 8.9×10^8 CFU/mL at pH 4.25 when incubated at 37°C for 12 h. Moreover, L. plantarum KC23 showed antibacterial activity, forming clear zones of 8-13 mm against the five pathogens tested, and its adherence was 2.23-fold higher than that of LGG. The acidity of 10% skimmed milk fermented for 12 h was 0.74%.

Accuracy Analysis of Target Recognition according to EOC Conditions (Target Occlusion and Depression Angle) using MSTAR Data (MSTAR 자료를 이용한 EOC 조건(표적 폐색 및 촬영부각)에 따른 표적인식 정확도 분석)

  • Kim, Sang-Wan;Han, Ahrim;Cho, Keunhoo;Kim, Donghan;Park, Sang-Eun
    • Korean Journal of Remote Sensing / v.35 no.3 / pp.457-470 / 2019
  • Automatic Target Recognition (ATR) using Synthetic Aperture Radar (SAR) has attracted attention in surveillance, reconnaissance, and national security owing to its all-weather, day-and-night imaging capability. However, automatically identifying targets in real situations remains difficult under varying observational and environmental conditions. In this paper, ATR problems under Extended Operating Conditions (EOC) were investigated. In particular, we considered partial occlusion of the target (10% to 50%) and differences in depression angle between the training data (17°) and the test data (30° and 45°). To simulate various occlusion conditions, the SARBake algorithm was applied to Moving and Stationary Target Acquisition and Recognition (MSTAR) images. ATR accuracy was evaluated using the template matching and AdaBoost algorithms. In the depression angle experiments, the target identification rate of both algorithms decreased by more than 30% as the depression angle changed from 45° to 30°: template matching achieved an accuracy of about 75.88%, while AdaBoost performed better at about 86.80%. Under partial occlusion, the accuracy of template matching dropped sharply even with slight occlusion (from 95.77% with no occlusion to 52.69% at 10% occlusion). AdaBoost again performed better, with 85.16% accuracy under no occlusion and 68.48% at 10% occlusion; even at 50% occlusion, AdaBoost still achieved 52.48%, far higher than template matching (below 30%).
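
A minimal sketch of a template-matching classifier of the kind used as a baseline above (peak normalized cross-correlation over a small template bank); it is not the authors' implementation, and the template and chip arrays are random placeholders for MSTAR data.

```python
import numpy as np
from skimage.feature import match_template

def classify(chip, templates):
    """Return the class whose template correlates best with the chip."""
    scores = {}
    for label, tmpl in templates.items():
        # peak NCC score over all template positions within the chip
        scores[label] = match_template(chip, tmpl).max()
    return max(scores, key=scores.get)

rng = np.random.default_rng(0)
templates = {"T72": rng.random((32, 32)),      # stand-ins for class templates
             "BMP2": rng.random((32, 32))}
chip = rng.random((64, 64))                    # stand-in for a target chip
print("predicted class:", classify(chip, templates))
```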

Report on the Improvement of the In Vitro Specimen Reception Environment System in Nuclear Medicine (핵의학과 검체 접수 환경시스템의 개선사례 보고)

  • Kim, Jung In;Kang, Mi Ji;Kim, Na Kyung;Park, Ji Sol;Kwon, Won Hyun;Lee, Kyung Jae
    • The Korean Journal of Nuclear Medicine Technology / v.25 no.2 / pp.29-34 / 2021
  • Purpose: The sample reception environment in nuclear medicine has changed little compared with 20 years ago; sample preparation for in vitro tests still relies on generating a daughter specimen from the parent specimen. In this study, we introduce a method that automatically removes sample caps with automated decapper equipment while registering the samples at reception at the same time, together with a provisional reception system. Materials and Methods: In 2019, we set out to acquire a device that automatically removes the caps of patients' blood samples, the same equipment used in the Department of Laboratory Medicine (Vacuette® Unicap Belt Decapper, Greiner Bio-One, Austria). The purchase was delayed, however, owing to differences in tube size, budget, and space. In January 2020, we borrowed domestic automatic decapper equipment and modified it to suit our laboratory environment. After 9 months, we were able to introduce a system that automatically removes the cap of a patient's blood sample and, at the same time, automatically registers the test at reception. The provisional reception system also made it possible to confirm the arrival of a specimen within a short time. Results: With the automatic decapper in use, sample caps were removed automatically and reception proceeded simultaneously, shortening sample preparation time by about 20 minutes and making the work markedly more efficient. It also helped prevent musculoskeletal disorders in examiners caused by repetitive wrist use. After the provisional reception system was adopted, patients could be discharged more quickly, and phone calls to confirm sample arrival decreased. Conclusion: Most hospitals staff the nuclear medicine in vitro laboratory with about four employees; for departments performing this work with minimal personnel, automatic decapper equipment and a provisional reception system are effective.

Prediction of Key Variables Affecting NBA Playoffs Advancement: Focusing on 3 Points and Turnover Features (미국 프로농구(NBA)의 플레이오프 진출에 영향을 미치는 주요 변수 예측: 3점과 턴오버 속성을 중심으로)

  • An, Sehwan;Kim, Youngmin
    • Journal of Intelligence and Information Systems / v.28 no.1 / pp.263-286 / 2022
  • This study collected NBA statistics for 32 seasons (1990 to 2022) by web crawling, examined the variables of interest through exploratory data analysis, and generated related derived variables. Unused variables were removed in a data-cleaning step, and correlation analysis, t-tests, and ANOVA were performed on the remaining variables. For each variable of interest, the difference in means between teams that advanced to the playoffs and those that did not was tested; to corroborate this, the differences in means among three ranking-based groups (upper/middle/lower) were also confirmed. Only the current season's data were used as the test set, and 5-fold cross-validation was performed by splitting the remaining data into training and validation sets. Overfitting was ruled out by comparing the cross-validation results with the final results on the test set and confirming that the performance metrics did not differ. Because the raw data are of high quality and satisfy the statistical assumptions, most models performed well despite the small dataset. This study not only predicts NBA game results and classifies playoff advancement with machine learning, but also checks whether the variables of interest appear among the most important input attributes. Visualizing SHAP values made it possible to go beyond plain feature-importance scores and to compensate for the inconsistency of importance estimates as variables are entered or removed. Many variables related to three-pointers and turnovers, the features of interest in this study, were indeed among the major variables affecting playoff advancement in the NBA. Although this study is similar to existing sports analytics work in covering topics such as game results, playoffs, and championship prediction, and in comparatively analyzing several machine learning models, it differs in that the features of interest were specified in advance and statistically verified before being compared with the machine learning results, and it is further differentiated by presenting explanatory visualizations using SHAP, one of the XAI methods.
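
A minimal sketch of the evaluation pipeline described above: 5-fold cross-validation on a tree ensemble, then SHAP values for feature importance. The feature names (three-point and turnover stand-ins) and data are hypothetical, not the study's dataset.

```python
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = pd.DataFrame({
    "fg3_pct": rng.normal(0.36, 0.02, 300),    # 3-point percentage
    "tov_per_game": rng.normal(14, 2, 300),    # turnovers per game
    "win_pct": rng.normal(0.5, 0.1, 300),
})
# synthetic playoff label, for illustration only
y = (X["win_pct"] + 0.5 * X["fg3_pct"] > 0.68).astype(int)

model = RandomForestClassifier(n_estimators=200, random_state=0)
print("5-fold CV accuracy:", cross_val_score(model, X, y, cv=5).mean())

model.fit(X, y)
sv = shap.TreeExplainer(model).shap_values(X)
sv_pos = sv[1] if isinstance(sv, list) else sv[..., 1]  # class-1 values
shap.summary_plot(sv_pos, X)   # beeswarm of per-feature SHAP values
```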

Prediction of Spring Flowering Timing in Forested Area in 2023 (산림지역에서의 2023년 봄철 꽃나무 개화시기 예측)

  • Jihee Seo;Sukyung Kim;Hyun Seok Kim;Junghwa Chun;Myoungsoo Won;Keunchang Jang
    • Korean Journal of Agricultural and Forest Meteorology / v.25 no.4 / pp.427-435 / 2023
  • Changes in flowering time due to weather fluctuations affect plant growth and ecosystem dynamics, so accurate prediction of flowering timing is crucial for effective forest ecosystem management. This study uses process-based models to predict 2023 flowering timing for five major tree species in Korean forests. The models are developed from nine years (2009-2017) of flowering data for Abeliophyllum distichum, Robinia pseudoacacia, Rhododendron schlippenbachii, Rhododendron yedoense f. poukhanense, and Sorbus commixta, observed across 28 regions of the country, including mountain sites. Weather data from the Automatic Mountain Meteorology Observation System (AMOS) and the Korea Meteorological Administration (KMA) serve as model inputs. The Single Triangle Degree Days (STDD) and Growing Degree Days (GDD) models, known for their strong performance, are employed to predict flowering dates, with daily temperatures at 1 km spatial resolution obtained by merging AMOS and KMA data. To improve prediction accuracy nationwide, random forest machine learning is used to generate region-specific correction coefficients. Applying these coefficients yields minimal prediction errors, particularly for Abeliophyllum distichum, Robinia pseudoacacia, and Rhododendron schlippenbachii, with root mean square errors (RMSEs) of 1.2, 0.6, and 1.2 days, respectively. Model performance is evaluated with ten random sampling tests per species, selecting the model with the highest R2. With the correction coefficients applied, the models achieve R2 values of 0.07 to 0.7 (except for Sorbus commixta) and a final explanatory power of 0.75-0.9. This study provides valuable insight into seasonal changes in plant phenology and can help identify honey harvesting seasons affected by abnormal weather, such as that of Robinia pseudoacacia. Detailed flowering-timing information for various species and regions deepens understanding of the relationship between climate and plant phenology.
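
A minimal sketch of a thermal-time model in the spirit of the GDD model named above: growing degree days accumulate from a start date, and flowering is predicted on the day the sum first crosses a species-specific forcing threshold. The base temperature, threshold, and synthetic temperature series are assumptions, not the paper's calibrated values.

```python
import numpy as np

def predict_flowering_doy(tmax, tmin, t_base=5.0, f_star=150.0, start_doy=1):
    """Predict the day of year when cumulative GDD first reaches f_star."""
    tmean = (np.asarray(tmax) + np.asarray(tmin)) / 2.0
    gdd = np.maximum(tmean - t_base, 0.0)        # daily thermal forcing
    cum = np.cumsum(gdd[start_doy - 1:])         # accumulate from start date
    crossed = np.nonzero(cum >= f_star)[0]
    return int(start_doy + crossed[0]) if crossed.size else None

rng = np.random.default_rng(0)
doy = np.arange(1, 366)
# synthetic annual temperature cycle, for illustration only
tmin = 10 * np.sin((doy - 105) * 2 * np.pi / 365) + 2 + rng.normal(0, 2, 365)
tmax = tmin + 8
print("predicted flowering DOY:", predict_flowering_doy(tmax, tmin))
```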

Performance analysis of Frequent Itemset Mining Technique based on Transaction Weight Constraints (트랜잭션 가중치 기반의 빈발 아이템셋 마이닝 기법의 성능분석)

  • Yun, Unil;Pyun, Gwangbum
    • Journal of Internet Computing and Services / v.16 no.1 / pp.67-74 / 2015
  • In recent years, frequent itemset mining that considers the importance of each item has been intensively studied as an important issue in the data mining field. According to the strategy used to exploit item importance, such approaches are classified as weighted frequent itemset mining, frequent itemset mining using transactional weights, and utility itemset mining. In this paper, we perform an empirical analysis of frequent itemset mining algorithms based on transactional weights. These algorithms compute a weight for each transaction from the weights of the items it contains, and discover weighted frequent itemsets on the basis of item frequency and transaction weight. Consequently, the importance of a given transaction can be seen through database analysis, since a transaction's weight is higher when it contains many high-value items. We analyze the advantages and disadvantages of the best-known algorithms in this area and compare their performance. As a representative of frequent itemset mining using transactional weights, WIS introduced the concept and strategies of transactional weights; further state-of-the-art algorithms, WIT-FWIs, WIT-FWIs-MODIFY, and WIT-FWIs-DIFF, also extract itemsets using the weight information. To mine weighted frequent itemsets efficiently, these three algorithms use a special lattice-like data structure called the WIT-tree. Because each WIT-tree node stores item information such as the item and its transaction IDs, no additional database scan is needed once the WIT-tree has been constructed. In particular, whereas traditional algorithms perform many database scans to mine weighted itemsets, the WIT-tree-based algorithms avoid this overhead by reading the database only once. The algorithms also generate each new itemset of length N+1 from two different itemsets of length N. To discover new weighted itemsets, WIT-FWIs combines itemsets using the information of the transactions that contain them; WIT-FWIs-MODIFY reduces the operations needed to calculate the frequency of a new itemset; and WIT-FWIs-DIFF uses the difference of two transaction lists. To compare and analyze the algorithms in various environments, we use real datasets of two types (dense and sparse) and measure runtime and maximum memory usage; a scalability test further evaluates each algorithm's stability as the database size changes. As a result, WIT-FWIs and WIT-FWIs-MODIFY show the best performance on the dense dataset, while on the sparse dataset WIT-FWI-DIFF mines more efficiently than the others. Compared with the WIT-tree-based algorithms, WIS, based on the Apriori technique, is the least efficient because on average it requires far more computation than the others.
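
A minimal sketch of the transaction-weight idea underlying these algorithms (not the WIT-tree structure itself): each transaction's weight is taken as the mean of its item weights, and an itemset's weighted support is the weight share of the transactions containing it. The item weights, transactions, and threshold are illustrative.

```python
from itertools import combinations

item_weight = {"a": 0.9, "b": 0.6, "c": 0.3, "d": 0.8}
transactions = [{"a", "b"}, {"a", "c", "d"}, {"b", "d"}, {"a", "b", "d"}]

def tw(t):
    """Transaction weight = mean weight of the items it contains."""
    return sum(item_weight[i] for i in t) / len(t)

total = sum(tw(t) for t in transactions)

def weighted_support(itemset):
    """Weight share of the transactions that contain the itemset."""
    return sum(tw(t) for t in transactions if itemset <= t) / total

# brute-force enumeration of itemsets above a weighted-support threshold
items = sorted({i for t in transactions for i in t})
for k in range(1, len(items) + 1):
    for combo in combinations(items, k):
        s = weighted_support(set(combo))
        if s >= 0.4:
            print(set(combo), round(s, 3))
```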