• Title/Summary/Keyword: 계산 알고리즘 (computational algorithms)


Investigation of SO2 Effect on TOMS O3 Retrieval from OMI Measurement in China (OMI 위성센서를 이용한 중국 지역에서 TOMS 오존 산출에 대한 이산화황의 영향 조사 연구)

  • Choi, Wonei;Hong, Hyunkee;Kim, Daewon;Ryu, Jae-Yong;Lee, Hanlim
    • Korean Journal of Remote Sensing / v.32 no.6 / pp.629-637 / 2016
  • In this study, we identified the $SO_2$ effect on $O_3$ retrieval from Ozone Monitoring Instrument (OMI) measurements over a Chinese industrial region from 2005 through 2007. The planetary boundary layer (PBL) $SO_2$ data measured by the OMI sensor were used. OMI-Total Ozone Mapping Spectrometer (TOMS) total $O_3$ was compared with OMI-Differential Optical Absorption Spectroscopy (DOAS) total $O_3$ under various $SO_2$ conditions in the PBL. The difference between OMI-TOMS and OMI-DOAS total $O_3$ (T-D) shows a dependency on $SO_2$ (correlation coefficient R = 0.36). Since aerosol has been reported to cause uncertainty in both OMI-TOMS and OMI-DOAS total $O_3$ retrievals, the aerosol effect on the relationship between PBL $SO_2$ and T-D was investigated with changing Aerosol Optical Depth (AOD). The aerosol effect on this relationship is negligible, with similar slopes ($1.83{\leq}slope{\leq}2.36$) between PBL $SO_2$ and T-D under various AOD conditions. We also found that the rates of change in T-D per 1.0 DU change of $SO_2$ in the PBL, middle troposphere (TRM), and upper troposphere and stratosphere (STL) are 1.6 DU, 3.9 DU, and 4.9 DU, respectively. This shows that the altitude where $SO_2$ exists can affect the value of T-D, which could be due to the reduced absolute radiance sensitivity in the boundary layer at 317.5 nm, the wavelength used to retrieve OMI-TOMS ozone in the boundary layer.
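
The core statistics quoted above (the correlation coefficient R and the slope of T-D against PBL $SO_2$) can be sketched as follows. The data values are illustrative placeholders, not measurements from the paper.

```python
# Pearson correlation and least-squares slope between PBL SO2 and the
# OMI-TOMS minus OMI-DOAS total O3 difference (T-D). Illustrative data.
from math import sqrt

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def ols_slope(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

so2_du = [0.1, 0.5, 1.0, 1.5, 2.0, 2.5]     # PBL SO2 (DU), illustrative
t_minus_d = [0.3, 1.2, 2.1, 3.2, 3.9, 5.0]  # T-D (DU), illustrative

r = pearson_r(so2_du, t_minus_d)
slope = ols_slope(so2_du, t_minus_d)        # DU of T-D per DU of SO2
```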

Comparisons between the Two Dose Profiles Extracted from Leksell GammaPlan and Calculated by Variable Ellipsoid Modeling Technique (렉셀 감마플랜(LGP)에서 추출된 선량 분포와 가변 타원체 모형화기술(VEMT)에 의해 계산된 선량 분포 사이의 비교)

  • Hur, Beong Ik
    • Journal of the Korean Society of Radiology / v.11 no.1 / pp.9-17 / 2017
  • A high degree of precision and accuracy in Gamma Knife radiosurgery (GKRS) is a fundamental requirement for therapeutic success. Elaborate radiation delivery and steep dose fall-off gradients are applied clinically, necessitating a dedicated quality assurance (QA) program to guarantee dosimetric and geometric accuracy and to reduce all the risk factors that can occur in GKRS. In this study, as a part of QA, we verified the accuracy of the single-shot dose profiles used in the algorithm of the Gamma Knife Perfexion (PFX) treatment planning system, employing the Variable Ellipsoid Modeling Technique (VEMT). We evaluated the dose distributions of single shots directed to the center of a spherical ABC phantom with a diameter of 160 mm on the Gamma Knife PFX, for collimating configurations of 4, 8, and 16 mm along the x, y, and z axes. The PFX treatment planning system used in GKRS is Leksell GammaPlan (LGP) ver. 10.1.1. Specifically, the width at the 50% isodose level, i.e., the Full Width at Half Maximum (FWHM), was verified under the condition that a patient's head is simulated as a sphere with a diameter of 160 mm. All dose profiles along the x, y, and z axes predicted by VEMT were excellently consistent with the dose profiles from LGP within specifications (${\leq}1mm$ at the 50% isodose level), except for a small difference in FWHM and penumbra (20%-80% isodose levels) along the z axis for the 4 mm and 8 mm collimating configurations. The maximum discrepancy of FWHM was less than 2.3% for all collimating configurations. The maximum discrepancy of penumbra occurred for the 8 mm collimator along the z axis. The differences in FWHM and penumbra between the dose distributions obtained with VEMT and LGP are too small to be of clinical significance in GKRS. Such verification reinforces confidence in the accuracy of GKRS, on which clinical application must ultimately rest. The results of this study can serve as a reference for medical physicists involved in GKRS worldwide. Using VEMT as an independent check on LGP, the validity of the dose distributions for all collimating configurations can be confirmed through the regular preventive maintenance program, clinically assuring accurate treatment for GKRS patients. The use of VEMT is thus expected to become a part of QA that helps verify and operate the system safely.
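
The FWHM and penumbra checks described above reduce to finding where a 1-D dose profile crosses given isodose levels. A minimal sketch with linear interpolation follows; the triangular profile is synthetic test data, not an actual LGP or VEMT profile.

```python
# Find isodose-level crossings of a 1-D dose profile, then derive the
# FWHM (50% width) and the 20%-80% penumbra. Synthetic profile data.
def crossings(xs, ds, level):
    """Return x positions where the profile crosses `level` (% of max)."""
    thr = level / 100.0 * max(ds)
    out = []
    for i in range(len(ds) - 1):
        d0, d1 = ds[i], ds[i + 1]
        if (d0 - thr) * (d1 - thr) < 0:      # sign change -> crossing
            t = (thr - d0) / (d1 - d0)       # linear interpolation
            out.append(xs[i] + t * (xs[i + 1] - xs[i]))
        elif d0 == thr:
            out.append(xs[i])
    return out

def fwhm(xs, ds):
    c = crossings(xs, ds, 50.0)
    return c[-1] - c[0]

def penumbra(xs, ds):
    """Mean 80%-20% edge width over the two profile edges."""
    c20 = crossings(xs, ds, 20.0)
    c80 = crossings(xs, ds, 80.0)
    left = c80[0] - c20[0]
    right = c20[-1] - c80[-1]
    return (left + right) / 2.0

xs = list(range(-10, 11))                                    # position (mm)
ds = [max(0.0, 1.0 - abs(x) / 10.0) * 100.0 for x in xs]     # relative dose (%)
```

For this triangular profile the 50% level is crossed at ±5 mm, so the FWHM is exactly 10 mm.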

Design of a Bit-Serial Divider in GF($2^{m}$) for Elliptic Curve Cryptosystem (타원곡선 암호시스템을 위한 GF($2^{m}$)상의 비트-시리얼 나눗셈기 설계)

  • 김창훈;홍춘표;김남식;권순학
    • The Journal of Korean Institute of Communications and Information Sciences / v.27 no.12C / pp.1288-1298 / 2002
  • To implement an elliptic curve cryptosystem in GF($2^m$) at high speed, a fast divider is required. Although a bit-parallel architecture is well suited for high-speed division operations, an elliptic curve cryptosystem requires a large m (at least 163) to provide sufficient security. In other words, since the bit-parallel architecture has an area complexity of $O(m^2)$, it is not suited for this application. In this paper, we propose a new serial-in serial-out systolic array for computing division operations in GF($2^m$) using the standard basis representation. Based on a modified version of the binary extended greatest common divisor algorithm, we obtain a new data dependence graph and design an efficient bit-serial systolic divider. The proposed divider has $O(m)$ time complexity and $O(m)$ area complexity. If input data come in continuously, the proposed divider can produce division results at a rate of one per m clock cycles, after an initial delay of 5m-2 cycles. Analysis shows that the proposed divider provides a significant reduction in both chip area and computational delay time compared to previously proposed systolic dividers with the same I/O format. Since the proposed divider can perform division operations at high speed with reduced chip area, it is well suited for the division circuit of an elliptic curve cryptosystem. Furthermore, since the proposed architecture does not restrict the choice of irreducible polynomial, and has a unidirectional data flow and regularity, it provides high flexibility and scalability with respect to the field size m.
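
The arithmetic being accelerated can be sketched in software. The paper derives its hardware from a modified binary extended GCD; the sketch below instead inverts via Fermat's little theorem ($b^{2^m-2} = b^{-1}$) purely to illustrate standard-basis GF($2^m$) division, and uses a toy field GF($2^4$) rather than the cryptographic sizes (m ≥ 163) discussed above.

```python
# Division in GF(2^4) with standard-basis (polynomial) representation,
# irreducible polynomial x^4 + x + 1. Elements are 4-bit integers.
M = 4
IRRED = 0b10011          # x^4 + x + 1

def gf_mul(a, b):
    """Carry-less multiply of a and b, reduced modulo IRRED."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & (1 << M):
            a ^= IRRED
    return r

def gf_inv(b):
    """Invert b via Fermat: b^(2^m - 2) = b^-1 in GF(2^m)."""
    r = 1
    for _ in range((1 << M) - 2):
        r = gf_mul(r, b)
    return r

def gf_div(a, b):
    """a / b = a * b^-1 in GF(2^m)."""
    return gf_mul(a, gf_inv(b))
```

For example, $x \cdot (x^3 + 1) = x^4 + x \equiv 1 \pmod{x^4+x+1}$, so `gf_inv(0b0010)` is `0b1001`.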

GPU-based dynamic point light particles rendering using 3D textures for real-time rendering (실시간 렌더링 환경에서의 3D 텍스처를 활용한 GPU 기반 동적 포인트 라이트 파티클 구현)

  • Kim, Byeong Jin;Lee, Taek Hee
    • Journal of the Korea Computer Graphics Society / v.26 no.3 / pp.123-131 / 2020
  • This study proposes a real-time lighting algorithm for scenes in which each of more than 100,000 moving particles acts as a light source. Two 3D textures are used to dynamically determine the range of influence of each light: the first holds light color and the second holds light direction information. Each frame goes through two steps. The first step updates, in a compute shader, the particle information required for 3D texture initialization and rendering. Each particle position is converted to the sampling coordinates of the 3D texture, and based on these coordinates, the first 3D texture accumulates the color sum of the particle lights affecting each voxel, while the second 3D texture accumulates the sum of the direction vectors from each voxel to the particle lights. The second step operates in the general rendering pipeline. First, based on the world position of the polygon to be rendered, the exact sampling coordinates of the 3D texture updated in the first step are calculated. Since the sampling coordinates correspond 1:1 to the size of the 3D texture and the size of the game world, the world coordinates of the pixel are used as the sampling coordinates. Lighting is then carried out based on the sampled color and light direction vector. The 3D texture corresponds 1:1 to the actual game world and assumes a minimum unit of 1 m, so in areas smaller than 1 m, artifacts such as staircase patterns occur due to the limited resolution. Interpolation and supersampling are performed during texture sampling to mitigate these problems. Measurements of the time taken to render a frame showed that with 262,144 particle lights, 146 ms was spent in the forward lighting pipeline and 46 ms in the deferred lighting pipeline; with 1,024,766 particle lights, the forward pipeline took 214 ms and the deferred pipeline 104 ms.
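
The first (compute-shader) pass described above can be sketched on the CPU: map each particle light's world position to a voxel of the two conceptual "3D textures" and accumulate (a) the summed light color and (b) the summed direction vector from the voxel center to the light. The grid size and particle data below are illustrative, not the paper's configuration.

```python
# Voxel accumulation pass for dynamic particle lights; world assumed
# 1 m per voxel, matching the 1:1 world-to-texture mapping above.
GRID = 8                       # voxels per axis, illustrative

def world_to_voxel(pos):
    return tuple(min(GRID - 1, max(0, int(c))) for c in pos)

def accumulate(particles):
    """particles: list of (position_xyz, color_rgb) tuples."""
    color_tex = {}             # voxel -> summed RGB (first 3D texture)
    dir_tex = {}               # voxel -> summed direction (second 3D texture)
    for pos, color in particles:
        v = world_to_voxel(pos)
        center = tuple(i + 0.5 for i in v)
        d = tuple(p - c for p, c in zip(pos, center))   # voxel -> light
        cr = color_tex.get(v, (0.0, 0.0, 0.0))
        color_tex[v] = tuple(a + b for a, b in zip(cr, color))
        dr = dir_tex.get(v, (0.0, 0.0, 0.0))
        dir_tex[v] = tuple(a + b for a, b in zip(dr, d))
    return color_tex, dir_tex

# Two lights falling in the same voxel: their colors sum, and their
# opposite offsets from the voxel center cancel in the direction sum.
particles = [((1.25, 1.25, 1.25), (1.0, 0.0, 0.0)),
             ((1.75, 1.75, 1.75), (0.0, 1.0, 0.0))]
color_tex, dir_tex = accumulate(particles)
```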

CNN-based Recommendation Model for Classifying HS Code (HS 코드 분류를 위한 CNN 기반의 추천 모델 개발)

  • Lee, Dongju;Kim, Gunwoo;Choi, Keunho
    • Management & Information Systems Review / v.39 no.3 / pp.1-16 / 2020
  • The current tariff reporting system requires taxpayers to calculate the tax amount themselves and to pay it on their own responsibility. In other words, in principle, the duty and responsibility of the reporting-payment system are imposed only on the taxpayer, who is required to calculate and pay the tax accurately; if the taxpayer fails to fulfill this duty, additional tax is imposed by collecting the shortfall together with penalties. For this reason, item classification, together with tariff assessment, is among the most difficult tasks and can pose a significant risk to entities if items are misclassified. Consequently, import declarations are entrusted to customs brokers, who are customs experts, for a substantial fee. The purpose of this study is to classify the HS items to be reported upon import declaration and to indicate the HS codes to be recorded on the import declaration. HS items were classified using the images attached to the item classification decision cases published by the Korea Customs Service. For image classification, CNN, a deep learning algorithm commonly used for image recognition, was employed, and the VGG16, VGG19, ResNet50, and Inception-V3 models were used among CNN models. To improve classification accuracy, two datasets were created: Dataset1 selected the five HS code types with the most images, and Dataset2 consisted of five types within Chapter 87, the chapter with the most images among the 2-digit HS codes. Classification accuracy was highest when the model was trained on Dataset2, the corresponding model being Inception-V3, while ResNet50 had the lowest classification accuracy. The study identified the possibility of HS item classification based on the first item image registered in an item classification decision case, and its second contribution is that HS item classification through a CNN model, which had not been attempted before, was attempted.
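
The models compared above (VGG16/19, ResNet50, Inception-V3) all stack the same two primitives over the HS-item images: convolution and pooling. The dependency-free sketch below shows those primitives on a tiny grayscale "image"; it is illustrative of the building blocks only, not the paper's pipeline, and the kernel shown is a placeholder.

```python
# Minimal CNN building blocks: valid 2-D convolution (CNN convention,
# i.e. cross-correlation) and 2x2 max pooling, on nested lists.
def conv2d(img, kernel):
    """Valid 2-D convolution (no padding, stride 1)."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(img) - kh + 1):
        row = []
        for j in range(len(img[0]) - kw + 1):
            s = sum(img[i + a][j + b] * kernel[a][b]
                    for a in range(kh) for b in range(kw))
            row.append(s)
        out.append(row)
    return out

def max_pool2(img):
    """2x2 max pooling, stride 2."""
    return [[max(img[i][j], img[i][j + 1], img[i + 1][j], img[i + 1][j + 1])
             for j in range(0, len(img[0]) - 1, 2)]
            for i in range(0, len(img) - 1, 2)]

image = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1]]
edge_kernel = [[1, -1]]          # responds to vertical intensity edges
fmap = conv2d(image, edge_kernel)
pooled = max_pool2(fmap)
```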

Construction of Gene Network System Associated with Economic Traits in Cattle (소의 경제형질 관련 유전자 네트워크 분석 시스템 구축)

  • Lim, Dajeong;Kim, Hyung-Yong;Cho, Yong-Min;Chai, Han-Ha;Park, Jong-Eun;Lim, Kyu-Sang;Lee, Seung-Su
    • Journal of Life Science / v.26 no.8 / pp.904-910 / 2016
  • Complex traits are determined by the combined effects of many loci and are affected by gene networks or biological pathways. Systems biology approaches play an important role in identifying candidate genes related to complex diseases or traits at the system level. Gene network analysis has been performed with diverse types of methods, such as gene co-expression, gene regulatory relationships, protein-protein interaction (PPI), and genetic networks. Moreover, network-based methods for predicting gene functions have been described, such as graph-theoretic methods, neighborhood-counting-based methods, and weighted functions. However, there is only a limited number of such studies in livestock. The present study systematically analyzed genes associated with 102 types of economic traits based on the Animal Trait Ontology (ATO) and identified their relationships based on the gene co-expression network and the PPI network in cattle. We then constructed the two types of gene network databases and a network visualization system (http://www.nabc.go.kr/cg). Gene co-expression network analysis of bovine gene expression values was used to generate the co-expression network. The PPI network was constructed from the Human Protein Reference Database based on the orthologous relationships between human and cattle. Finally, candidate genes and their network relationships were identified for each trait; they were topologically central, with large degree and betweenness centrality (BC) values in the gene network. The Ontle program was applied to generate the database and to visualize the gene network results. This information will serve as a valuable resource for exploiting genomic functions that influence economically and agriculturally important traits in cattle.
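
The two network measures named above, degree and betweenness centrality (BC), can be sketched directly. Brute-force shortest-path enumeration is fine for the toy graph below; the gene names are placeholders, not genes reported in the paper.

```python
# Degree and betweenness centrality on a tiny PPI-style graph.
from collections import deque
from itertools import combinations

def shortest_paths(adj, s, t):
    """Enumerate all shortest s-t paths via BFS predecessor lists."""
    dist, preds, q = {s: 0}, {s: []}, deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v], preds[v] = dist[u] + 1, [u]
                q.append(v)
            elif dist[v] == dist[u] + 1:
                preds[v].append(u)
    if t not in dist:
        return []
    out = []
    def back(v, suffix):
        if v == s:
            out.append([s] + suffix)
        for p in preds[v]:
            back(p, [v] + suffix)
    back(t, [])
    return out

def betweenness(adj, v):
    """Fraction of shortest paths (per node pair) passing through v."""
    score = 0.0
    for s, t in combinations([n for n in adj if n != v], 2):
        paths = shortest_paths(adj, s, t)
        if paths:
            score += sum(1 for p in paths if v in p) / len(paths)
    return score

# Toy graph: GeneA is a hub connected to three other genes, so it has
# both the largest degree and the largest betweenness centrality.
adj = {"GeneA": ["GeneB", "GeneC", "GeneD"],
       "GeneB": ["GeneA"], "GeneC": ["GeneA"], "GeneD": ["GeneA"]}
degree = {g: len(nbrs) for g, nbrs in adj.items()}
```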

Detection of Arctic Summer Melt Ponds Using ICESat-2 Altimetry Data (ICESat-2 고도계 자료를 활용한 여름철 북극 융빙호 탐지)

  • Han, Daehyeon;Kim, Young Jun;Jung, Sihun;Sim, Seongmun;Kim, Woohyeok;Jang, Eunna;Im, Jungho;Kim, Hyun-Cheol
    • Korean Journal of Remote Sensing / v.37 no.5_1 / pp.1177-1186 / 2021
  • As Arctic melt ponds play an important role in determining the interannual variation of the sea ice extent and changes in the Arctic environment, it is crucial to monitor them with high accuracy. Ice, Cloud, and Land Elevation Satellite-2 (ICESat-2), NASA's latest altimeter satellite based on a green laser (532 nm), observes the global surface elevation. Compared to the CryoSat-2 altimetry satellite, whose along-track resolution is 250 m, ICESat-2 is expected to provide much more detailed information about Arctic melt ponds thanks to its high along-track resolution of 70 cm. The basic products of ICESat-2 are the surface height and the number of reflected photons. To aggregate the neighboring information around a specific ICESat-2 photon, segments of photons with 10 m length were used, and the standard deviation of the height and the total number of photons were calculated for each segment. As melt ponds have a smoother surface than sea ice, the lower height variation over melt ponds distinguishes them from sea ice. Once the melt ponds were extracted, the number of photons per segment was used to classify melt ponds covered with open water and those covered with specular ice. As photons are absorbed much more in water-covered melt ponds than in melt ponds with specular ice, the number of photons per segment can distinguish water- and ice-covered ponds. As a result, the suggested melt pond detection method was able to classify sea ice, water-covered melt ponds, and ice-covered melt ponds. A qualitative analysis was conducted using Sentinel-2 optical imagery: the suggested method successfully classified water- and ice-covered ponds that were difficult to distinguish in Sentinel-2 optical images. Lastly, the pros and cons of melt pond detection using satellite altimetry and optical images were discussed.
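
The segment-based decision rule described above can be sketched as follows: split photons into 10 m along-track segments, use height variability to separate melt ponds from sea ice, then use photon count to separate open-water ponds from specular-ice ponds. The thresholds and photon values are illustrative, not the paper's calibrated values.

```python
# Classify 10 m photon segments into sea ice, open-water melt ponds,
# and ice-covered melt ponds. Thresholds are illustrative assumptions.
from statistics import pstdev

SEG_LEN = 10.0          # segment length (m)
STD_THR = 0.05          # height-std threshold (m): below -> pond surface
COUNT_THR = 20          # photons/segment: below -> open-water pond

def classify(photons):
    """photons: list of (along_track_m, height_m). Returns segment labels."""
    segs = {}
    for x, h in photons:
        segs.setdefault(int(x // SEG_LEN), []).append(h)
    labels = {}
    for seg, hs in sorted(segs.items()):
        if pstdev(hs) >= STD_THR:
            labels[seg] = "sea ice"          # rough surface
        elif len(hs) < COUNT_THR:
            labels[seg] = "open-water pond"  # smooth, strongly absorbing
        else:
            labels[seg] = "ice-covered pond" # smooth, weakly absorbing
    return labels

# Illustrative track: rough segment, flat dense segment, flat sparse segment.
photons = [(i * 0.3, 0.2 if i % 2 else 0.0) for i in range(30)]   # segment 0
photons += [(10.0 + i * 0.3, 0.0) for i in range(30)]             # segment 1
photons += [(20.0 + i * 1.0, 0.0) for i in range(5)]              # segment 2
labels = classify(photons)
```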

Spatial Downscaling of Ocean Colour-Climate Change Initiative (OC-CCI) Forel-Ule Index Using GOCI Satellite Image and Machine Learning Technique (GOCI 위성영상과 기계학습 기법을 이용한 Ocean Colour-Climate Change Initiative (OC-CCI) Forel-Ule Index의 공간 상세화)

  • Sung, Taejun;Kim, Young Jun;Choi, Hyunyoung;Im, Jungho
    • Korean Journal of Remote Sensing / v.37 no.5_1 / pp.959-974 / 2021
  • The Forel-Ule Index (FUI) classifies the colors of inland water and seawater found in nature into 21 grades ranging from indigo blue to cola brown. FUI has been analyzed in connection with eutrophication, water quality, and the light characteristics of water systems in many studies, and its potential as a new water quality index that simultaneously contains optical information on water quality parameters has been suggested. In this study, the 4 km Ocean Colour-Climate Change Initiative (OC-CCI) FUI was spatially downscaled to a resolution of 500 m using Geostationary Ocean Color Imager (GOCI) data and Random Forest (RF) machine learning. The RF-derived FUI was then examined in terms of its correlation with various water quality parameters measured in coastal areas and its spatial distribution and seasonal characteristics. The results showed that the RF-derived FUI achieved higher accuracy (coefficient of determination (R2) = 0.81, root mean square error (RMSE) = 0.7784) than the GOCI-derived FUI estimated by Pitarch's OC-CCI FUI algorithm (R2 = 0.72, RMSE = 0.9708). The RF-derived FUI showed a high correlation with five water quality parameters, Total Nitrogen, Total Phosphorus, Chlorophyll-a, Total Suspended Solids, and Transparency, with correlation coefficients of 0.87, 0.88, 0.97, 0.65, and -0.98, respectively. The temporal pattern of the RF-derived FUI reflected well the physical relationship with various water quality parameters, with a strong seasonality. These findings suggest the potential of the high-resolution FUI for coastal water quality management around the Korean Peninsula.
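
The downscaling setup implies a training-pair construction step: every fine-resolution (500 m) predictor cell is matched to the coarse (4 km) OC-CCI FUI cell that contains it, yielding (predictor, label) pairs for the Random Forest. The sketch below shows that matchup on toy grids; it is an inferred data-preparation step under stated assumptions, not the paper's actual code.

```python
# Match 500 m predictor cells to the containing 4 km FUI label cell.
COARSE_RES = 4000.0     # m
FINE_RES = 500.0        # m
RATIO = int(COARSE_RES / FINE_RES)   # 8 fine cells per coarse cell per axis

def build_pairs(fine_predictors, coarse_fui):
    """fine_predictors: 2-D grid of feature vectors at 500 m;
    coarse_fui: 2-D grid of FUI labels at 4 km."""
    pairs = []
    for i, row in enumerate(fine_predictors):
        for j, feats in enumerate(row):
            label = coarse_fui[i // RATIO][j // RATIO]
            pairs.append((feats, label))
    return pairs

# Toy grids: a 16x16 fine grid (features = cell indices here) over a
# 2x2 coarse FUI grid, i.e. 8x8 fine cells per coarse cell.
fine = [[(i, j) for j in range(16)] for i in range(16)]
coarse = [[1, 2], [3, 4]]
pairs = build_pairs(fine, coarse)
```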

A Study on the Retrieval of River Turbidity Based on KOMPSAT-3/3A Images (KOMPSAT-3/3A 영상 기반 하천의 탁도 산출 연구)

  • Kim, Dahui;Won, You Jun;Han, Sangmyung;Han, Hyangsun
    • Korean Journal of Remote Sensing / v.38 no.6_1 / pp.1285-1300 / 2022
  • Turbidity, the measure of the cloudiness of water, is used as an important index for water quality management. Turbidity can vary greatly in small river systems, which affects water quality in national rivers; therefore, the generation of high-resolution spatial information on turbidity is very important. In this study, a turbidity retrieval model using Korea Multi-Purpose Satellite-3 and -3A (KOMPSAT-3/3A) images was developed for high-resolution turbidity mapping of the Han River system, based on the eXtreme Gradient Boosting (XGBoost) algorithm. To this end, the top-of-atmosphere (TOA) spectral reflectance was calculated from a total of 24 KOMPSAT-3/3A images and 150 Landsat-8 images, and the Landsat-8 TOA spectral reflectance was cross-calibrated to the KOMPSAT-3/3A bands. The turbidity measured by the National Water Quality Monitoring Network was used as a reference dataset. As input variables, the TOA spectral reflectance at the locations of in situ turbidity measurements, spectral indices (the normalized difference vegetation index, normalized difference water index, and normalized difference turbidity index), and Moderate Resolution Imaging Spectroradiometer (MODIS)-derived atmospheric products (atmospheric optical thickness, water vapor, and ozone) were used. Furthermore, by analyzing the KOMPSAT-3/3A TOA spectral reflectance at different turbidities, a new spectral index, the new normalized difference turbidity index (nNDTI), was proposed and added as an input variable to the turbidity retrieval model. The XGBoost model showed excellent retrieval performance, with a root mean square error (RMSE) of 2.70 NTU and a normalized RMSE (NRMSE) of 14.70% compared to in situ turbidity, and the nNDTI proposed in this study was the most important variable. The developed model was applied to the KOMPSAT-3/3A images to map high-resolution river turbidity, making it possible to analyze the spatiotemporal variations of turbidity. Through this study, we confirmed that KOMPSAT-3/3A images are very useful for retrieving high-resolution, accurate spatial information on river turbidity.
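
The abstract does not give the band combination of the proposed nNDTI, so it is not reproduced here. The sketch below only shows the generic normalized-difference form that the listed spectral indices (NDVI, NDWI, NDTI) share, with the conventional red/green NDTI as an example; the reflectance values are illustrative, not KOMPSAT-3/3A measurements.

```python
# Generic normalized-difference spectral index and the conventional
# red/green NDTI. Inputs are TOA reflectances (dimensionless).
def normalized_difference(b1, b2):
    """Generic (b1 - b2) / (b1 + b2) spectral index."""
    if b1 + b2 == 0:
        return 0.0           # guard against zero-reflectance pixels
    return (b1 - b2) / (b1 + b2)

def ndti(red, green):
    """Conventional NDTI: turbid water reflects more in red than green."""
    return normalized_difference(red, green)
```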

Composition of Curriculums and Textbooks for Speed-Related Units in Elementary School (초등학교에서 속력 관련 단원의 교육과정 및 교과서 내용 구성에 관한 논의)

  • Jhun, Youngseok
    • Journal of Korean Elementary Science Education / v.41 no.4 / pp.658-672 / 2022
  • The unique teaching and learning difficulties of speed-related units in elementary school science stem mainly from students' lack of mathematical thinking ability and of procedural knowledge of speed measurement, and curriculums and textbooks must be constructed with these in mind. To identify implications for composing a new science curriculum and relevant textbooks, this study reviewed the structure and contents of the speed-related units of the three curriculums from the 2007 revised curriculum to the 2015 revised curriculum, together with the resulting textbooks, and examined their relevance in light of the literature. Results showed that the current content carries the risk of having students merely calculate the speed of an object through a memorized mechanical algorithm rather than grasp the multifaceted relation between traveled distance, duration, and speed. Findings also highlighted the need to reorganize the curriculum and textbooks to offer students the opportunity to learn the meaning of speed step by step, by visualizing materials such as double number lines and by dealing with simple numbers that are easy to calculate and understand intuitively. In addition, this paper discussed the urgency of improving inquiry performance, including process skills, by having students observe and measure an actual object's movement, display it as a graph, and interpret it, rather than merely interpreting given data. Lastly, although the current curriculum and textbooks emphasize the connection with daily life in their application aspects, they also deal with dynamics-related content somewhat removed from kinematics, the main learning content of the unit; hence, it is necessary to reorganize the contents around speed-related cases so that students can grasp the concept of speed and use it in their everyday lives. With regard to the new curriculum and textbooks, this study proposes that students be provided the opportunity to study core topics systematically and deeply, rather than excluding content that is difficult to learn and challenging to teach, so that students realize the value of science and enjoy learning it.