• Title/Summary/Keyword: 이동 알고리즘 (moving algorithm)


Scolytidae, Platypodidae, Bostrichidae and Lyctidae Intercepted from Imported Timbers at Busan Port Entry (부산항의 수입재에서 검출된 나무좀과, 긴나무좀과, 개나무좀과 및 가루나무좀과의 종류)

  • 최은정;추호렬;이동운;이상명;박종균
    • Korean journal of applied entomology
    • /
    • v.42 no.3
    • /
    • pp.173-184
    • /
    • 2003
  • Beetles belonging to the families Scolytidae, Platypodidae, Bostrichidae, and Lyctidae intercepted from imported timbers at Busan port were investigated from March 1 to November 30, 2000, and the host timbers and exporting countries were examined. A total of 52 species within 23 genera was intercepted from 19 species of timbers or logs imported from 15 countries. In Scolytidae, 35 species of 16 genera in three subfamilies were identified: 12 species in Xyleborus, 6 species in Ips, 3 species in Trypodendron, 2 species in Arixyleborus, and 12 species each in a different genus, including Alinphagous. Scolytidae were intercepted from 16 species of timbers in 13 genera imported from 11 countries, with the largest numbers taken from Malaysian lauan. In Platypodidae, 9 species of one genus (Platypus) were intercepted from 6 species of timbers in 4 genera imported from 6 countries including Australia; the largest numbers were again taken from Malaysian lauan. In Bostrichidae, 5 species of 4 genera in two subfamilies were intercepted from 6 species of timbers in 4 genera imported from four countries. In Lyctidae, Trogoxylon sp., Minthea sp., and Minthea rugicollis were intercepted from 3 species of timbers in 2 genera imported from 3 countries.

CNN-based Recommendation Model for Classifying HS Code (HS 코드 분류를 위한 CNN 기반의 추천 모델 개발)

  • Lee, Dongju;Kim, Gunwoo;Choi, Keunho
    • Management & Information Systems Review
    • /
    • v.39 no.3
    • /
    • pp.1-16
    • /
    • 2020
  • The current tariff declaration system requires the declarant to calculate the tax amount and pay it under their own responsibility; in principle, the duty and responsibility of the self-reporting payment system rest solely with the taxpayer, who must calculate and pay the tax accurately. If this duty is not fulfilled, the tax shortfall is collected and an additional penalty tax is imposed on the taxpayer. For this reason, item classification, together with tariff assessment, is among the most difficult tasks and can pose a significant risk to entities if items are misclassified, so import declarations are often entrusted to customs experts for a substantial fee. The purpose of this study is to classify the HS items to be reported upon import declaration and to suggest the HS codes to be recorded on the declaration. HS items were classified using the images attached to the item-classification decision cases published by the Korea Customs Service. For image classification, CNN, a deep learning algorithm commonly used for image recognition, was adopted, and the Vgg16, Vgg19, ResNet50, and Inception-V3 models were compared. To improve classification accuracy, two datasets were created: Dataset1 selected the five HS codes with the most images, and Dataset2 divided Chapter 87, the chapter with the most images among the two-digit HS codes, into five types. Classification accuracy was highest when the model was trained on Dataset2 with Inception-V3, while ResNet50 showed the lowest accuracy. The study identified the possibility of HS item classification based on the first image registered in an item-classification decision case, and its second contribution is that HS item classification, which had not been attempted before, was carried out with CNN models.
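As a concrete illustration of the CNN-based classification described above, the following is a minimal transfer-learning sketch assuming a pretrained Inception-V3 backbone from Keras and an image folder organized by HS-code class; the directory names, image size, and training hyperparameters are illustrative assumptions, not values reported in the paper.

```python
# Hypothetical sketch: fine-tuning Inception-V3 to classify HS-code images.
# The directory layout ("hs_images/<class>/*.jpg"), batch size, and epochs are
# illustrative assumptions, not settings reported in the paper.
import tensorflow as tf
from tensorflow.keras import layers, models

IMG_SIZE = (299, 299)   # Inception-V3's native input resolution
NUM_CLASSES = 5         # both datasets in the study use five classes

train_ds = tf.keras.utils.image_dataset_from_directory(
    "hs_images/train", image_size=IMG_SIZE, batch_size=32)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "hs_images/val", image_size=IMG_SIZE, batch_size=32)

# Pretrained Inception-V3 backbone without its ImageNet classification head.
base = tf.keras.applications.InceptionV3(
    weights="imagenet", include_top=False, input_shape=IMG_SIZE + (3,))
base.trainable = False  # freeze the backbone and train only the new head

model = models.Sequential([
    layers.Rescaling(1.0 / 127.5, offset=-1.0),  # scale pixels to [-1, 1]
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.3),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=10)
```

The same classification head can be swapped onto Vgg16, Vgg19, or ResNet50 backbones to reproduce the model comparison.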

List-event Data Resampling for Quantitative Improvement of PET Image (PET 영상의 정량적 개선을 위한 리스트-이벤트 데이터 재추출)

  • Woo, Sang-Keun;Ju, Jung Woo;Kim, Ji Min;Kang, Joo Hyun;Lim, Sang Moo;Kim, Kyeong Min
    • Progress in Medical Physics
    • /
    • v.23 no.4
    • /
    • pp.309-316
    • /
    • 2012
  • Multimodal imaging techniques have been developed rapidly to improve diagnosis and the evaluation of therapeutic effects. Despite integrated hardware, registration accuracy is degraded by discrepancies between the multimodal images and by insufficient counts arising from the different acquisition methods of each modality. The purpose of this study was to improve the PET image by event-data resampling, based on an analysis of the data format, noise, and statistical properties of small-animal PET list data. Inveon PET list-mode data were acquired as a 10-min static scan starting 60 min after injection of 37 MBq/0.1 ml $^{18}F$-FDG via the tail vein. The list-mode data consist of 48-bit packets, each divided into an 8-bit header and a 40-bit payload. Realigned sinograms were generated from event data resampled from the original list mode by adjusting the LOR location, by simple event magnification, and by nonparametric bootstrap. Sinograms were reconstructed with the 2D OSEM algorithm using 16 subsets and 4 iterations. The prompt coincidence count was 13,940,707 as reported in the PET data header and 13,936,687 as measured from the list-event data. With simple event magnification, the maximum value improved from 1.336 to 1.743, but noise also increased. The resampling efficiency of the PET data was assessed from images de-noised and improved by shifting the payload values of sequential packets. The bootstrap resampling technique produced PET images with improved noise and statistical properties. The list-event data resampling method should help improve registration accuracy and early-diagnosis efficiency.
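The 48-bit packet layout and the bootstrap resampling step lend themselves to a short sketch; the byte order, file name, and the absence of header-based event filtering below are illustrative assumptions rather than details of the Inveon list-mode format.

```python
# Hypothetical sketch of nonparametric bootstrap resampling of list-mode events.
# The packet layout (8-bit header + 40-bit payload) follows the abstract; the
# byte order and field meanings are illustrative assumptions.
import numpy as np

def read_packets(path):
    """Read 48-bit (6-byte) packets and split each into header and payload."""
    raw = np.fromfile(path, dtype=np.uint8).reshape(-1, 6)
    header = raw[:, 0]                       # 8-bit header
    payload = raw[:, 1:].astype(np.uint64)   # five payload bytes (40 bits)
    # Pack the payload bytes into one integer per event (big-endian assumed).
    value = np.zeros(len(raw), dtype=np.uint64)
    for i in range(5):
        value = (value << np.uint64(8)) | payload[:, i]
    return header, value

def bootstrap_events(events, rng=None):
    """Draw a bootstrap replicate: resample events with replacement."""
    rng = np.random.default_rng() if rng is None else rng
    idx = rng.integers(0, len(events), size=len(events))
    return events[idx]

# header, events = read_packets("scan.lst")   # file name is illustrative
# replicate = bootstrap_events(events)        # feed into sinogram binning / OSEM
```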

Ecoclimatic Map over North-East Asia Using SPOT/VEGETATION 10-day Synthesis Data (SPOT/VEGETATION NDVI 자료를 이용한 동북아시아의 생태기후지도)

  • Park Youn-Young;Han Kyung-Soo
    • Korean Journal of Agricultural and Forest Meteorology
    • /
    • v.8 no.2
    • /
    • pp.86-96
    • /
    • 2006
  • Ecoclimap-1, a complete global database of surface parameters at 1-km resolution, was presented previously. It is intended to initialize the soil-vegetation-atmosphere transfer schemes in meteorological and climate models. Surface parameters in the Ecoclimap-1 database are provided as per-class values over an ecoclimatic base map produced by simply merging land cover and climate maps. The principal objective of this ecoclimatic map is to capture the intra-class variability of the vegetation life cycle that a usual land cover map cannot describe. Even with the ecoclimatic map that combines land cover and climate, however, the intra-class variability remained too high within some classes. In this study a new strategy is defined: the information contained in S10 NDVI SPOT/VEGETATION profiles is used to split a land cover class into more homogeneous sub-classes, through an intra-class unsupervised sub-clustering methodology instead of simple merging. This study was performed to provide a new ecoclimatic map over Northeast Asia within the framework of the Ecoclimap-2 global surface-parameter database. We used the University of Maryland's 1-km Global Land Cover Database (UMD) and a climate map to determine the initial number of clusters for intra-class sub-clustering. An unsupervised classification process using six years of NDVI profiles allows different behaviors to be discriminated within each land cover class. We checked the spatial coherence of the classes and, where necessary, aggregated clusters having similar NDVI time-series profiles. The mapping system yielded 29 ecosystems for the study area. For climate-related studies, this new ecosystem map may be useful as a base map for constructing the Ecoclimap-2 database and for improving the quality of the surface climatology in climate models.
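A minimal sketch of the intra-class sub-clustering step described above, assuming k-means over per-pixel 10-day NDVI profiles followed by a correlation-based aggregation of similar clusters; the abstract specifies only an unsupervised classification with a cluster-aggregation step, so the algorithm choice and merging threshold here are assumptions.

```python
# Hypothetical sketch of intra-class sub-clustering of NDVI time series.
# The clustering algorithm (k-means) and the merging threshold are illustrative
# assumptions; array shapes follow the 10-day (S10) synthesis described above.
import numpy as np
from sklearn.cluster import KMeans

def subcluster_class(ndvi_profiles, n_init_clusters, merge_corr=0.95):
    """Split one land-cover class into sub-classes from its NDVI profiles.

    ndvi_profiles   : (n_pixels, n_decades) array of 10-day NDVI values
    n_init_clusters : initial cluster count taken from the climate map
    merge_corr      : clusters whose mean profiles correlate above this are merged
    """
    km = KMeans(n_clusters=n_init_clusters, n_init=10, random_state=0)
    labels = km.fit_predict(ndvi_profiles)

    # Aggregation step: merge clusters with very similar mean NDVI profiles.
    means = [ndvi_profiles[labels == k].mean(axis=0) for k in range(n_init_clusters)]
    mapping = list(range(n_init_clusters))
    for i in range(n_init_clusters):
        for j in range(i + 1, n_init_clusters):
            if np.corrcoef(means[i], means[j])[0, 1] > merge_corr:
                mapping[j] = mapping[i]
    return np.array([mapping[label] for label in labels])
```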

Development of Convertor supporting Multi-languages for Mobile Network (무선전용 다중 언어의 번역을 지원하는 변환기의 구현)

  • Choe, Ji-Won;Kim, Gi-Cheon
    • The KIPS Transactions:PartC
    • /
    • v.9C no.2
    • /
    • pp.293-296
    • /
    • 2002
  • UP Link is a commercial product that converts HTML into HDML so that Internet WWW content can be displayed in mobile environments; when the UP browser accesses HTML pages, the agent in UP Link drives the converter to change the HTML into HDML. I-Mode, developed by NTT-Docomo of Japan, has accumulated a large body of content through its long and stable commercial service, and Micro Explorer, developed in the Stinger project, also provides many additional functions. In this paper, we designed and implemented a WAP converter that can accept both C-HTML and mHTML content. The C-HTML format used by I-Mode is a subset of HTML, and the mHTML format used by ME is similar to C-HTML, so content providers can develop C-HTML content more easily than WAP content. Since C-HTML, mHTML, and WML are all used in the mobile environment, the limited transmission capacity of a single page is also similar. The converter first builds a tag-matching table between the source and target markup and then applies the conversion algorithm to it (see the sketch below). If no matching element is found, it substitutes tags supported only by WML so that the content is displayed in the best possible shape. As a result, over 90% of the content can be converted.
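A minimal sketch of the tag-matching-table approach mentioned above, converting C-HTML markup to WML; the table entries, the regular-expression tag handling, and the fallback rule are illustrative assumptions, not the converter's actual rule set.

```python
# Hypothetical sketch of a tag-matching-table conversion from C-HTML to WML.
# The table below is illustrative; attributes are dropped for brevity.
import re

# Source C-HTML tag -> target WML tag (None means no direct counterpart).
TAG_TABLE = {
    "html": "wml",
    "body": "card",
    "br": "br",
    "a": "a",
    "b": None,   # no WML equivalent; handled by the fallback rule below
}

def convert_tag(match):
    slash, name = match.group(1), match.group(2).lower()
    target = TAG_TABLE.get(name)
    if target is None:
        # Fallback: drop unmapped tags so the page still renders in WML.
        return ""
    return f"<{slash}{target}>"

def chtml_to_wml(markup):
    """Replace every tag via the matching table; unmapped tags are removed."""
    return re.sub(r"<(/?)(\w+)[^>]*>", convert_tag, markup)

# print(chtml_to_wml("<html><body>Hello <b>mobile</b> world<br></body></html>"))
```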

Detection of Arctic Summer Melt Ponds Using ICESat-2 Altimetry Data (ICESat-2 고도계 자료를 활용한 여름철 북극 융빙호 탐지)

  • Han, Daehyeon;Kim, Young Jun;Jung, Sihun;Sim, Seongmun;Kim, Woohyeok;Jang, Eunna;Im, Jungho;Kim, Hyun-Cheol
    • Korean Journal of Remote Sensing
    • /
    • v.37 no.5_1
    • /
    • pp.1177-1186
    • /
    • 2021
  • Because Arctic melt ponds play an important role in determining the interannual variation of sea ice extent and changes in the Arctic environment, it is crucial to monitor them with high accuracy. Ice, Cloud, and Land Elevation Satellite-2 (ICESat-2), NASA's latest altimetry satellite, observes global surface elevation with a green (532 nm) laser. Compared with the CryoSat-2 altimetry satellite, whose along-track resolution is 250 m, ICESat-2 is expected to provide much more detailed information about Arctic melt ponds thanks to its along-track resolution of 70 cm. The basic products of ICESat-2 are the surface height and the number of reflected photons. To aggregate the neighborhood information around each photon, 10 m along-track segments of photons were used, and the standard deviation of the height and the total number of photons were calculated for each segment. Because melt ponds have a smoother surface than sea ice, the lower height variation over melt ponds distinguishes them from the surrounding ice. Once the melt ponds were extracted, the number of photons per segment was used to separate ponds covered with open water from ponds covered with specular ice: photons are absorbed much more strongly in water-covered ponds than in ice-covered ponds, so the photon count per segment distinguishes the two. As a result, the suggested melt pond detection method was able to classify sea ice, water-covered melt ponds, and ice-covered melt ponds. A qualitative analysis was conducted using Sentinel-2 optical imagery, and the suggested method successfully classified water- and ice-covered ponds that were difficult to distinguish in the Sentinel-2 optical images. Lastly, the pros and cons of melt pond detection using satellite altimetry and optical images are discussed.
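A minimal sketch of the per-segment statistics used above to separate sea ice, water-covered melt ponds, and ice-covered melt ponds; the threshold values are illustrative placeholders, since the paper derives its decision rules from its own ICESat-2 data.

```python
# Hypothetical sketch of classifying 10 m ICESat-2 segments from photon height
# variability and photon counts. The thresholds below are assumed placeholders.
import numpy as np

SEG_LEN = 10.0        # along-track segment length in metres
STD_THRESH = 0.05     # height std-dev below which a segment is a melt pond (assumed)
PHOTON_THRESH = 50    # photon count separating ice-covered from water-covered ponds (assumed)

def classify_segments(along_track, height):
    """Classify 10 m segments from photon along-track distance (m) and height (m)."""
    seg_id = np.floor(along_track / SEG_LEN).astype(int)
    labels = {}
    for s in np.unique(seg_id):
        h = height[seg_id == s]
        if h.std() >= STD_THRESH:
            labels[s] = "sea ice"                  # rough surface -> sea ice
        elif h.size >= PHOTON_THRESH:
            labels[s] = "ice-covered melt pond"    # smooth and strongly reflecting
        else:
            labels[s] = "water-covered melt pond"  # smooth, photons absorbed by water
    return labels
```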

Rainfall image DB construction for rainfall intensity estimation from CCTV videos: focusing on experimental data in a climatic environment chamber (CCTV 영상 기반 강우강도 산정을 위한 실환경 실험 자료 중심 적정 강우 이미지 DB 구축 방법론 개발)

  • Byun, Jongyun;Jun, Changhyun;Kim, Hyeon-Joon;Lee, Jae Joon;Park, Hunil;Lee, Jinwook
    • Journal of Korea Water Resources Association
    • /
    • v.56 no.6
    • /
    • pp.403-417
    • /
    • 2023
  • In this research, a methodology was developed for constructing an appropriate rainfall image database for estimating rainfall intensity from CCTV video. The database was built in the Large-Scale Climate Environment Chamber of the Korea Conformity Laboratories, which can control variables that are highly irregular and variable in real environments. A total of 1,728 scenarios were designed under five different experimental conditions, from which 36 scenarios and 97,200 frames were selected. Rain streaks were extracted with the k-nearest neighbor algorithm by calculating the difference between each image and the background. To prevent overfitting, only data whose pixel values exceeded a set threshold relative to the average pixel value of each image were selected. The area with the maximum pixel variability, found by shifting a window in 10-pixel steps, was set as the representative 180×180 area of the original image. After resizing to 120×120 as input for the convolutional neural network model, image augmentation was performed under unified shooting conditions. 92% of the data fell within an absolute PBIAS of 10%. The final results of this study clearly have the potential to enhance the accuracy and efficacy of existing real-world CCTV systems through transfer learning.
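A minimal sketch of the representative-area selection described above, sliding a 180×180 window in 10-pixel steps over a rain-streak image and resizing the most variable patch to 120×120 for CNN input; the use of standard deviation as the variability measure and the OpenCV-based background difference are illustrative assumptions.

```python
# Hypothetical sketch of selecting the representative 180x180 area with the
# highest pixel variability (assumed: standard deviation) in 10-pixel steps,
# then resizing it to 120x120 for CNN input.
import numpy as np
import cv2

def representative_patch(gray, win=180, step=10, out_size=(120, 120)):
    """Return the win x win patch with the highest pixel variability, resized."""
    h, w = gray.shape
    best_std, best_xy = -1.0, (0, 0)
    for y in range(0, h - win + 1, step):
        for x in range(0, w - win + 1, step):
            patch = gray[y:y + win, x:x + win]
            s = patch.std()          # variability measure (assumed: std-dev)
            if s > best_std:
                best_std, best_xy = s, (y, x)
    y, x = best_xy
    return cv2.resize(gray[y:y + win, x:x + win], out_size)

# frame = cv2.imread("frame_0001.png", cv2.IMREAD_GRAYSCALE)      # illustrative paths
# background = cv2.imread("background.png", cv2.IMREAD_GRAYSCALE)
# streaks = cv2.absdiff(frame, background)   # rain-streak layer via background difference
# cnn_input = representative_patch(streaks)
```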

Quantitative Differences between X-Ray CT-Based and $^{137}Cs$-Based Attenuation Correction in Philips Gemini PET/CT (GEMINI PET/CT의 X-ray CT, $^{137}Cs$ 기반 511 keV 광자 감쇠계수의 정량적 차이)

  • Kim, Jin-Su;Lee, Jae-Sung;Lee, Dong-Soo;Park, Eun-Kyung;Kim, Jong-Hyo;Kim, Jae-Il;Lee, Hong-Jae;Chung, June-Key;Lee, Myung-Chul
    • The Korean Journal of Nuclear Medicine
    • /
    • v.39 no.3
    • /
    • pp.182-190
    • /
    • 2005
  • Purpose: There are differences between the standardized uptake values (SUV) of PET images attenuation-corrected with X-ray CT and those corrected with $^{137}Cs$. Because several factors can produce such differences, it is important to identify their source. Since only X-ray CT and $^{137}Cs$ transmission data are used for attenuation correction in the Philips GEMINI PET/CT scanner, the transformation of these data into attenuation coefficients usable for 511 keV photons has to be verified. The aim of this study was to evaluate the accuracy of the CT measurement and to compare CT-based and $^{137}Cs$-based attenuation correction on this scanner. Methods: For all experiments, CT was set to 40 keV (120 kVp) and 50 mAs. To evaluate the accuracy of the CT measurement, a CT performance phantom was scanned and the Hounsfield units (HU) of its regions were compared with the true values. For the comparison of CT- and $^{137}Cs$-based attenuation correction, transmission scans of an elliptical lung-spine-body phantom and an electron density CT phantom composed of various materials, such as water, bone, brain, and adipose, were performed using CT and $^{137}Cs$. The attenuation coefficients transformed from these data were compared with each other and with the true 511 keV attenuation coefficients acquired using $^{68}Ge$ on an ECAT EXACT 47 scanner. In addition, CT- and $^{137}Cs$-derived attenuation coefficients and $^{18}F$-FDG SUV values measured in regions of normal and pathological uptake in patient data were compared. Results: The HU of all regions in the CT performance phantom measured with GEMINI PET/CT were equivalent to the known true values. CT-based attenuation coefficients were about 10% lower than those from $^{68}Ge$ in the bony region of the NEMA ECT phantom. Attenuation coefficients derived from $^{137}Cs$ data were also slightly higher than those from CT data in the images of the electron density CT phantom and of the patients' bodies. However, the SUV values in images attenuation-corrected with $^{137}Cs$ were lower than those in images corrected with CT, with a percent difference of about 15%. Conclusion: Although the HU measured with this scanner were accurate, the accuracy of the conversion from CT data into 511 keV attenuation coefficients was limited in bony regions. The discrepancies in the transformed attenuation coefficients and SUV values between the CT- and $^{137}Cs$-based data suggest that further optimization of data acquisition and processing parameters is necessary for this scanner.
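To illustrate where the bony-region bias can arise, the following is a minimal sketch of the commonly used bilinear conversion from CT numbers to 511 keV attenuation coefficients; the breakpoint and slope values are typical literature values, not the transform actually implemented in the GEMINI scanner evaluated here.

```python
# Hypothetical sketch of a bilinear HU -> 511 keV attenuation conversion.
# The breakpoint and slopes are typical textbook values, included only to show
# how bone-region bias can arise; they are not the vendor's transform.
def hu_to_mu_511(hu, mu_water=0.096, breakpoint=0.0, bone_slope=0.000051):
    """Convert a Hounsfield unit value to an approximate 511 keV mu (1/cm)."""
    if hu <= breakpoint:
        # Soft tissue / lung: scale linearly between air (mu = 0) and water.
        return mu_water * (1.0 + hu / 1000.0)
    # Bone: use a shallower slope, because the photoelectric contribution that
    # raises HU at CT energies matters much less at 511 keV.
    return mu_water + bone_slope * hu

# Example: mu at 511 keV for water (0 HU) and dense bone (1000 HU).
# print(hu_to_mu_511(0.0), hu_to_mu_511(1000.0))
```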

A Study on the Effect of Network Centralities on Recommendation Performance (네트워크 중심성 척도가 추천 성능에 미치는 영향에 대한 연구)

  • Lee, Dongwon
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.1
    • /
    • pp.23-46
    • /
    • 2021
  • Collaborative filtering, which is often used in personalized recommendation, is recognized as a very useful technique for finding similar customers and recommending products to them based on their purchase history. However, the traditional collaborative filtering technique has difficulty calculating similarities for new customers or products, because similarities are computed from direct connections and common features among customers. For this reason, hybrid techniques that additionally use content-based filtering have been designed. Meanwhile, efforts have been made to solve these problems by applying the structural characteristics of social networks: similarities are calculated indirectly through other customers placed between the two customers of interest. This means creating a customer network based on purchase data and calculating the similarity between two customers from the features of the network that indirectly connects them. Such a similarity can be used as a measure to predict whether the target customer will accept a recommendation, and the centrality metrics of the network can be used to calculate it. Different centrality metrics matter in that they may affect recommendation performance differently; furthermore, the effect of these metrics on performance may vary depending on the recommender algorithm. In addition, recommendation techniques using network analysis can be expected to increase recommendation performance not only for new customers or products but also for the entire set of customers and products. By treating a customer's purchase of an item as a link between the customer and the item on the network, predicting user acceptance of a recommendation becomes predicting whether a new link will be created between them. Because classification models fit this binary link-prediction problem, decision tree, k-nearest neighbors (KNN), logistic regression, artificial neural network, and support vector machine (SVM) models were selected for the study. The data for the performance evaluation were order records collected from an online shopping mall over four years and two months; the first three years and eight months of data were organized into the social network used in the experiment, and the records of the following four months were used to train and evaluate the recommender models. Experiments with the centrality metrics applied to each model show that the recommendation acceptance rates of the centrality metrics differ across algorithms at a meaningful level. Four commonly used centrality metrics were analyzed: degree centrality, betweenness centrality, closeness centrality, and eigenvector centrality. Eigenvector centrality records the lowest performance in all models except the support vector machine. Closeness centrality and betweenness centrality show similar performance across all models. Degree centrality ranks in the middle across models, while betweenness centrality always ranks higher than degree centrality. Finally, closeness centrality shows distinct differences in performance according to the model: it ranks first in logistic regression, artificial neural network, and decision tree with numerically high performance, but records very low rankings with low performance in the support vector machine and k-nearest neighbors models. As the experimental results reveal, in a classification model, network centrality metrics over the subnetwork connecting two nodes can effectively predict the connectivity between those nodes in a social network, and each metric performs differently depending on the classification model. This implies that choosing appropriate metrics for each algorithm can lead to higher recommendation performance. In general, betweenness centrality guarantees a high level of performance in any model, and introducing closeness centrality could be considered to obtain higher performance for certain models.
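A minimal sketch of the centrality-based link-prediction setup described above, assuming a small bipartite customer-item graph built with networkx and a logistic-regression classifier; the toy graph, the way the two endpoint centralities are concatenated into features, and the label construction are illustrative assumptions.

```python
# Hypothetical sketch: node centralities as link-prediction features for
# recommendation. The toy purchase graph, features, and labels are illustrative.
import networkx as nx
import numpy as np
from sklearn.linear_model import LogisticRegression

def centrality_features(G, pairs):
    """Concatenate four centralities of both endpoints of each (customer, item) pair."""
    metrics = [nx.degree_centrality(G),
               nx.betweenness_centrality(G),
               nx.closeness_centrality(G),
               nx.eigenvector_centrality(G, max_iter=1000)]
    return np.array([[m[u] for m in metrics] + [m[v] for m in metrics]
                     for u, v in pairs])

# Toy purchase network: customers c1..c3, items i1..i3 (illustrative only).
G = nx.Graph([("c1", "i1"), ("c1", "i2"), ("c2", "i2"),
              ("c3", "i2"), ("c3", "i3")])

# Positive pairs are existing purchase links; negative pairs are absent links.
train_pairs = [("c1", "i1"), ("c2", "i2"), ("c2", "i3"), ("c1", "i3")]
train_labels = [1, 1, 0, 0]

clf = LogisticRegression().fit(centrality_features(G, train_pairs), train_labels)

# Predict whether a new customer-item link (a recommendation acceptance) is likely.
print(clf.predict_proba(centrality_features(G, [("c3", "i1")])))
```

The same feature matrix can be fed to the other classifiers in the study (decision tree, KNN, neural network, SVM) to compare how each centrality metric affects performance.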