• Title/Summary/Keyword: maximum entropy (최대엔트로피)

Search Results: 167

Introducing multi-layer structure for the better estimation of evapotranspiration (증발산 산정 향상을 위한 다층 구조 도입)

  • Choi, Kwanghun; Paik, Kyungrock
    • Proceedings of the Korea Water Resources Association Conference / 2022.05a / pp.65-65 / 2022
  • Just as some sunlight reaches the floor of even a dense forest, solar radiation falls on both the leaves of vegetation and the soil, giving rise to transpiration and evaporation respectively. Reflecting this fact should help improve existing evapotranspiration methods and yield better estimates. In this study, we view the evaporating surface as a multi-layer structure divided vertically into a soil layer and a canopy layer, and introduce a method that computes evapotranspiration in each layer. Because the evaporating surface is partitioned vertically, the environmental conditions of each layer can be characterized with meteorological data observed at a height representative of that layer. Moreover, the influence of vegetation on evapotranspiration can be captured through the radiative energy input to each layer, which varies with vegetation vigor, and through the Bowen ratio, which varies with the opening and closing of stomata. We assessed whether the multi-layer structure is valid for practical evapotranspiration estimation against eddy covariance measurements provided by Fluxnet. Evapotranspiration in each layer was computed with a method built on Maximum Entropy Production (MEP) theory, which holds that change within a system proceeds in the direction that maximizes entropy production under the given constraints, and the radiative energy inputs to the canopy and soil layers were compared on the basis of the observed evapotranspiration. The radiation absorption computed for the canopy layer tracked the seasonal cycle of the deciduous forest well, showing that the multi-layer structure is a suitable approach both for improving evapotranspiration estimates and for analyzing evapotranspiration in terms of hydrology-vegetation interactions. (A simple layer-partitioning sketch follows this entry.)

  • PDF
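To make the layer-wise idea concrete, here is a minimal sketch that splits net radiation between a canopy and a soil layer with a Beer-Lambert interception term and converts each layer's share to latent heat with a layer-specific Bowen ratio. This is a simple energy-balance stand-in, not the MEP formulation the study actually uses, and the extinction coefficient, LAI, and Bowen-ratio values are illustrative assumptions.

```python
import math

def partition_radiation(rn, lai, k=0.5):
    """Split net radiation rn (W/m^2) between canopy and soil layers.

    The canopy intercepts a Beer-Lambert fraction 1 - exp(-k * LAI);
    the remainder reaches the soil surface. k = 0.5 is an assumed
    extinction coefficient, not a value from the paper.
    """
    f_canopy = 1.0 - math.exp(-k * lai)
    return rn * f_canopy, rn * (1.0 - f_canopy)

def latent_heat(available_energy, bowen_ratio):
    """Latent heat flux LE from Rn = H + LE with Bowen ratio B = H / LE."""
    return available_energy / (1.0 + bowen_ratio)

# Example: 600 W/m^2 of net radiation over a mid-season deciduous canopy.
rn_canopy, rn_soil = partition_radiation(rn=600.0, lai=3.0)
le_canopy = latent_heat(rn_canopy, bowen_ratio=0.3)   # transpiration
le_soil = latent_heat(rn_soil, bowen_ratio=1.0)       # soil evaporation
print(f"canopy LE = {le_canopy:.1f} W/m^2, soil LE = {le_soil:.1f} W/m^2")
```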

Application of Texture Feature Analysis Algorithm used the Statistical Characteristics in the Computed Tomography (CT): A base on the Hepatocellular Carcinoma (HCC) (전산화단층촬영 영상에서 통계적 특징을 이용한 질감특징분석 알고리즘의 적용: 간세포암 중심으로)

  • Yoo, Jueun; Jun, Taesung; Kwon, Jina; Jeong, Juyoung; Im, Inchul; Lee, Jaeseung; Park, Hyonghu; Kwak, Byungjoon; Yu, Yunsik
    • Journal of the Korean Society of Radiology / v.7 no.1 / pp.9-15 / 2013
  • In this study, a texture feature analysis (TFA) algorithm for the automatic recognition of liver disease in computed tomography (CT) images is proposed, and a computer-aided diagnosis (CAD) system for hepatocellular carcinoma (HCC) is designed around it; the performance of each parameter of the algorithm was compared and evaluated. In each HCC image, a region of analysis (ROA, window size 40×40 pixels) was set, and the HCC recognition rate was calculated for six TFA parameters (average gray level, average contrast, measure of smoothness, skewness, measure of uniformity, and entropy). TFA proved significant as a measure of the HCC recognition rate: the measure of uniformity recognized best; average contrast, measure of smoothness, and skewness were relatively high; and average gray level and entropy showed relatively low recognition rates. The high-recognition parameters (maximum 97.14%, minimum 82.86%) can be used to identify HCC lesions in images and to assist early clinical diagnosis, improving diagnostic efficiency over current practice. Future work should add effective, quantitative analysis and study criteria for generalized disease recognition. (A sketch of the six statistical measures follows this entry.)
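The six parameters named above are the classic first-order statistical texture measures; a minimal sketch following the standard histogram-moment definitions (Gonzalez & Woods) is given below. Whether the paper normalizes smoothness and skewness in exactly this way is an assumption.

```python
import numpy as np

def texture_features(roa):
    """Six first-order texture measures of an 8-bit grayscale ROA."""
    z = np.arange(256, dtype=float)
    hist = np.bincount(roa.astype(np.uint8).ravel(), minlength=256)
    p = hist / hist.sum()                          # gray-level probabilities
    mean = (z * p).sum()                           # average gray level
    var = ((z - mean) ** 2 * p).sum()
    contrast = np.sqrt(var)                        # average contrast (std. dev.)
    smoothness = 1.0 - 1.0 / (1.0 + var / 255.0**2)    # normalized variance
    skewness = ((z - mean) ** 3 * p).sum() / 255.0**2  # normalized 3rd moment
    uniformity = (p ** 2).sum()
    entropy = -(p[p > 0] * np.log2(p[p > 0])).sum()
    return mean, contrast, smoothness, skewness, uniformity, entropy

# Example on a random 40x40 window standing in for an HCC ROA.
roa = np.random.randint(0, 256, size=(40, 40))
print(texture_features(roa))
```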

Entropy-based Discrimination of Hand and Elbow Movements Using ECoG Signals (엔트로피 기반 ECoG 신호를 이용한 손과 팔꿈치 움직임 추론)

  • Kim, Ki-Hyun; Cha, Kab-Mun; Rhee, Kiwon; Chung, Chun Kee; Shin, Hyun-Chool
    • Journal of IKEEE / v.17 no.4 / pp.505-510 / 2013
  • In this paper, a method of estimating hand and elbow movements from electrocorticogram (ECoG) signals is proposed. Surface electromyogram (EMG) signals and multi-channel ECoG signals were recorded from patients simultaneously. The movements estimated were closing and then opening the hand and bending the elbow inward, performed at the patients' free will rather than induced by external stimuli. The surface EMG signals were used to locate movement time points, and the ECoG signals were used to estimate the movements. To extract the characteristics of the individual movements, the ECoG signals were divided into six bands (the entire band and the δ, θ, α, β, and γ bands), the information entropy of each band was obtained, and maximum likelihood estimation was used to classify the movements. The experiments showed an average performance of 74% with the γ band, higher than with any other band. The recordings were divided into three time sections around the movement time points, and the 'before' section, which includes the readiness potential, was compared with the 'onset' section; their estimation success rates were 66% and 65%, respectively, indicating that the readiness potential can also be exploited. (A band-entropy feature sketch follows this entry.)
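To make the feature-extraction step concrete, the sketch below band-passes a signal into the conventional δ-γ bands and computes the Shannon entropy of each band's amplitude histogram. The band edges, sampling rate, and histogram bin count are assumptions, and the maximum-likelihood classification stage is omitted.

```python
import numpy as np
from scipy.signal import butter, filtfilt

# Conventional band edges in Hz; the paper's exact cutoffs are not stated
# in the abstract, so these are assumptions.
BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 100)}

def band_entropy(x, fs, low, high, n_bins=64):
    """Shannon entropy (bits) of the amplitude histogram of a band-passed signal."""
    b, a = butter(4, [low / (fs / 2), high / (fs / 2)], btype="band")
    y = filtfilt(b, a, x)
    counts, _ = np.histogram(y, bins=n_bins)
    p = counts / counts.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

fs = 512.0                            # assumed sampling rate
x = np.random.randn(int(5 * fs))      # stand-in for one ECoG channel segment
features = {name: band_entropy(x, fs, lo, hi) for name, (lo, hi) in BANDS.items()}
print(features)   # one entropy feature per band, fed to the ML classifier
```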

On Information Theoretical Research of the Korean Language (한국어의 정보이론적 연구 방향)

  • Lee, Jae-Hong; Yi, Chae-Hag
    • Annual Conference on Human and Language Technology / 1992.10a / pp.367-375 / 1992
  • Unlike other languages, Korean forms a syllable from initial (초성), medial (중성), and final (종성) graphemes. The graphemes composing a syllable can be regarded as random variables according to the statistics of their occurrence, and the correlation among graphemes within a syllable is expressed by inter-grapheme conditional probabilities and entropies. Syllables in turn combine into words, and the syllables composing a word can likewise be regarded as random variables, with the correlation among syllables within a Korean word expressed by inter-syllable conditional probabilities and entropies. Because the number of possible syllables is very large, however, conditional probabilities at the level of initial, medial, and final graphemes are a more effective index of inter-syllable correlation than syllable-level conditional probabilities. Such information-theoretic study of Korean requires, as basic data, the frequency distribution of Korean words; the only comprehensive survey to date is the 1956 '우리말 말수 사용의 잦기 조사'. Analyzing how the information-theoretic properties of Korean change over time calls for periodic surveys of Korean word frequencies. Results at the level of initial, medial, and final graphemes are expected to be useful for Korean speech recognition and synthesis, natural language processing, cryptography, linguistics, phonetics, and the standardization of Korean character codes. As the division of the peninsula continues, the languages of South and North Korea are diverging; partial efforts against this divergence include the unification of romanization schemes, and alongside them an information-theoretic comparison of the South and North Korean languages should also be undertaken. (A toy conditional-entropy computation follows this entry.)

  • PDF
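As a toy illustration of the inter-syllable conditional probabilities and entropies discussed above, the sketch below estimates H(next syllable | current syllable) from bigram counts. The word list is a stand-in, not data from the paper or the 1956 frequency survey.

```python
import math
from collections import Counter

def conditional_entropy(pairs):
    """H(Y|X) in bits, estimated from (x, y) bigram counts."""
    joint = Counter(pairs)
    marginal = Counter(x for x, _ in pairs)
    n = sum(joint.values())
    h = 0.0
    for (x, y), c in joint.items():
        p_xy = c / n                    # joint probability p(x, y)
        p_y_given_x = c / marginal[x]   # conditional probability p(y | x)
        h -= p_xy * math.log2(p_y_given_x)
    return h

# Adjacent-syllable pairs from a tiny stand-in word list.
words = ["하늘", "하루", "나라", "나무", "바다"]
pairs = [(w[i], w[i + 1]) for w in words for i in range(len(w) - 1)]
print(f"H(next | current) = {conditional_entropy(pairs):.3f} bits")  # 0.800
```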

A Blind Watermarking Algorithm using CABAC for H.264/AVC Main Profile (H.264/AVC Main Profile을 위한 CABAC-기반의 블라인드 워터마킹 알고리즘)

  • Seo, Young-Ho; Choi, Hyun-Jun; Lee, Chang-Yeul; Kim, Dong-Wook
    • The Journal of Korean Institute of Communications and Information Sciences / v.32 no.2C / pp.181-188 / 2007
  • This paper proposes a watermark embedding/extracting method using CABAC (Context-based Adaptive Binary Arithmetic Coding), the entropy coder of the main profile of MPEG-4 Part 10 H.264/AVC. The algorithm selects blocks, and coefficients within a block, based on the contexts extracted from the relationships to adjacent blocks and coefficients. Considering both the absolute value of the selected coefficient and the watermark bit, a watermark bit is embedded either without modifying the coefficient or by replacing the coefficient's LSB (Least Significant Bit) with the watermark bit, which makes it hard for an attacker to locate the watermarked positions. By selecting a few coefficients near the DC coefficient according to the contexts, the algorithm also satisfies the robustness requirement. In experiments with attacks of various kinds and strengths, the error ratio of the extracted watermark was at most 5.02%, confirming a very high level of robustness. Because the watermark is embedded during the context modeling and binarization stages of CABAC, the additional computation for locating and selecting coefficients is very small. Consequently, the method is expected to be very useful in applications where video must be compressed right after acquisition. (A minimal LSB-embedding sketch follows.)
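The LSB-replacement step can be sketched as follows. This is a generic illustration of writing a bit into a quantized coefficient's least significant bit while preserving sign, not the paper's context-based block/coefficient selection rule.

```python
def embed_bit(coeff, wbit):
    """Embed one watermark bit in the LSB of a quantized coefficient.

    The sign is preserved and the bit replaces the magnitude's LSB; if the
    LSB already equals the bit, the coefficient is left unmodified.
    """
    mag, sign = abs(coeff), 1 if coeff >= 0 else -1
    return sign * ((mag & ~1) | wbit)

def extract_bit(coeff):
    """Recover the watermark bit from the coefficient's magnitude LSB."""
    return abs(coeff) & 1

coeffs = [7, -4, 9, -12]              # stand-in quantized coefficients
watermark = [1, 0, 0, 1]
marked = [embed_bit(c, b) for c, b in zip(coeffs, watermark)]
assert [extract_bit(c) for c in marked] == watermark
print(marked)                         # [7, -4, 8, -13]
```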

Design of a Pipelined Binary Arithmetic Encoder for H.264/AVC (H.264/AVC를 위한 파이프라인 이진 산술 부호화기 설계)

  • Yun, Jae-Bok; Park, Tae-Geun
    • Journal of the Institute of Electronics Engineers of Korea SD / v.44 no.6 s.360 / pp.42-49 / 2007
  • Among the entropy coding schemes used to improve compression efficiency in H.264/AVC, CABAC (Context-based Adaptive Binary Arithmetic Coding) has high hardware complexity, and fast computation is difficult because of the data dependency in its bit-serial process. In this paper, the proposed architecture efficiently organizes the renormalization process of the binary arithmetic encoder, a key part of CABAC in H.264/AVC. An input symbol is encoded at every clock cycle regardless of how many renormalization iterations it requires. The architecture also handles bitsOutstanding values of up to 127, which covers the carry-generation problem, and encodes input symbols without stalling. The proposed three-stage pipelined architecture was synthesized with the 0.18-um Dongbu-Anam standard cell library and operates at 290 MHz. (A behavioral sketch of the renormalization loop follows.)
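A behavioral software model of the loop being pipelined, following the RenormE procedure of the H.264/AVC standard, is sketched below (the input values are arbitrary). The paper's hardware contribution is precisely to remove this per-iteration serial dependency, which a software loop cannot show.

```python
def renorm(cod_i_low, cod_i_range, outstanding, out_bits):
    """Behavioral model of CABAC encoder renormalization (RenormE).

    Doubles codIRange until it is back above 0x100, emitting decided bits
    and deferring undecided ones via the bitsOutstanding counter.
    """
    def put_bit(b):
        out_bits.append(b)
        while outstanding[0] > 0:         # flush deferred (outstanding) bits
            out_bits.append(1 - b)
            outstanding[0] -= 1

    while cod_i_range < 0x100:
        if cod_i_low < 0x100:
            put_bit(0)
        elif cod_i_low >= 0x200:
            cod_i_low -= 0x200
            put_bit(1)
        else:                             # undecided interval: defer the bit
            cod_i_low -= 0x100
            outstanding[0] += 1
        cod_i_low <<= 1
        cod_i_range <<= 1
    return cod_i_low, cod_i_range

bits, outstanding = [], [0]
low, rng = renorm(0x0F0, 0x040, outstanding, bits)
print(bits, outstanding[0], hex(low), hex(rng))   # [0] 1 0x1c0 0x100
```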

Contrast Enhancement based on Gaussian Region Segmentation (가우시안 영역 분리 기반 명암 대비 향상)

  • Shim, Woosung
    • Journal of Broadcast Engineering / v.22 no.5 / pp.608-617 / 2017
  • Contrast-enhancement methods suffer from problems such as over-enhancement on non-Gaussian histogram distributions and a trade-off between enhancement efficiency and brightness preservation. To enhance contrast across diverse histogram distributions, the image is first segmented into regions with Gaussian distributions, and contrast is then enhanced in each region. Specifically, the image is segmented into several regions by fitting a GMM (Gaussian Mixture Model) with k-means clustering and EM (Expectation-Maximization) in the L*a*b* color space, producing a region map and a probability map. A local contrast-enhancement algorithm is then applied that shifts region means to minimize the overlap between regions and performs brightness-preserving histogram equalization. Experiments show that, measured by AMBE (Absolute Mean Brightness Error) and AE (Average Entropy), the proposed region-based method maintains brightness and represents detail better than conventional methods. (A GMM segmentation sketch follows this entry.)
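A minimal segmentation sketch in the spirit of the pipeline described above is given below, producing the region map and probability map from a GMM fitted in L*a*b* space. The number of regions and the use of all three Lab channels are assumptions; the subsequent mean-shifting and brightness-preserving equalization steps are omitted.

```python
import numpy as np
from skimage import color
from sklearn.mixture import GaussianMixture

def gaussian_region_maps(rgb, n_regions=3):
    """Segment an RGB image into Gaussian regions in L*a*b* space.

    Returns a hard region map (labels) and a soft probability map
    (per-pixel component responsibilities).
    """
    lab = color.rgb2lab(rgb)
    pixels = lab.reshape(-1, 3)
    # k-means initialization followed by EM, mirroring the described pipeline.
    gmm = GaussianMixture(n_components=n_regions, init_params="kmeans",
                          covariance_type="full", random_state=0).fit(pixels)
    region_map = gmm.predict(pixels).reshape(rgb.shape[:2])
    prob_map = gmm.predict_proba(pixels).reshape(*rgb.shape[:2], n_regions)
    return region_map, prob_map

# Example on random pixels standing in for a test image.
rgb = np.random.rand(64, 64, 3)
regions, probs = gaussian_region_maps(rgb)
print(regions.shape, probs.shape)     # (64, 64) (64, 64, 3)
```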

A study on the Equilibrium sorption of Silk fibroin by Reactive dye. (견에 대한 반응성 염료의 평형론적 연구)

  • 오병주; 탁태문
    • Journal of Sericultural and Entomological Science / v.27 no.2 / pp.40-46 / 1985
  • The equilibrium sorption of C.I. Reactive Blue 19 and C.I. Acid Blue 138 on silk fibroin was investigated at 50°C, 70°C, and 90°C over the pH range from 2.0 to 10.5. The results are summarized as follows: 1) The sorption of the reactive dye increased as the pH of the dyeing solution and the temperature decreased, while fixation reached its maximum at pH 8.5 and 70°C. 2) In the acidic region, the sorption behavior of the acid dye was similar to that of the reactive dye, and the Langmuir adsorption constant increased with decreasing pH. 3) The Langmuir constant of both dyes decreased with increasing temperature, while the standard affinity increased. 4) The reaction of both dyes was exothermic, and the values of ΔS° were positive. 5) The sorption of the dyes on silk fibroin could be described by Langmuir adsorption and Nernst distribution in the lower pH region. (The two isotherm forms are written out below.)

  • PDF
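For reference, the two isotherms invoked in the abstract have the following standard forms; the notation here is generic, not the paper's:

```latex
% Langmuir adsorption: [D]_f is sorbed dye, [D]_s dye in solution,
% S the saturation (site) capacity, K the Langmuir adsorption constant.
[D]_f = \frac{S \, K \, [D]_s}{1 + K \, [D]_s}

% Nernst (linear partition) distribution with partition coefficient K_N.
[D]_f = K_N \, [D]_s
```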

FPGA Design of Motion JPEG2000 Encoder for Digital Cinema (디지털 시네마용 Motion JPEG2000 인코더의 FPGA 설계)

  • Seo, Young-Ho; Choi, Hyun-Jun; Kim, Dong-Wook
    • The Journal of Korean Institute of Communications and Information Sciences / v.32 no.3C / pp.297-305 / 2007
  • In this paper, a Motion JPEG2000 coder, which the Digital Cinema Initiatives (DCI), an organization composed of the major movie studios, has set as the standard for image compression, was implemented on a target FPGA. The lifting-based DWT (Discrete Wavelet Transform) and Tier-1 of EBCOT (Embedded Block Coding with Optimized Truncation), the major functional modules of JPEG2000, were built as dedicated hardware, while the Tier-2 process was implemented in software. For digital cinema, the tile size was set to support 1024×1024 pixels, and three entropy encoders were used to ensure real-time operation. Described in Verilog-HDL, the hardware used 32,470 LEs of an Altera Stratix EP1S80 and operated stably at 150 MHz. (A lifting-step sketch of the reversible 5/3 DWT follows.)
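The lifting DWT mentioned above, in its reversible 5/3 form used by JPEG2000, reduces to two integer lifting steps per level; a 1-D software sketch with symmetric boundary handling is given below. The paper implements this (plus EBCOT Tier-1) as dedicated hardware, which the sketch does not model.

```python
def dwt53_1d(x):
    """One level of the reversible 5/3 lifting DWT (1-D, even length)."""
    n = len(x)
    assert n % 2 == 0 and n >= 4
    half = n // 2
    # Predict step: highpass coefficients from odd samples.
    d = [x[2*k + 1] - (x[2*k] + x[2*k + 2 if 2*k + 2 < n else n - 2]) // 2
         for k in range(half)]
    # Update step: lowpass coefficients from even samples.
    s = [x[2*k] + (d[k - 1 if k > 0 else 0] + d[k] + 2) // 4
         for k in range(half)]
    return s, d

def idwt53_1d(s, d):
    """Inverse lifting, demonstrating perfect integer reconstruction."""
    half = len(s)
    x = [0] * (2 * half)
    for k in range(half):
        x[2*k] = s[k] - (d[k - 1 if k > 0 else 0] + d[k] + 2) // 4
    for k in range(half):
        nxt = x[2*k + 2] if 2*k + 2 < 2 * half else x[2*half - 2]
        x[2*k + 1] = d[k] + (x[2*k] + nxt) // 2
    return x

sig = [10, 12, 14, 13, 9, 8, 7, 11]
s, d = dwt53_1d(sig)
assert idwt53_1d(s, d) == sig         # lossless round trip
print(s, d)                           # [10, 15, 10, 8] [0, 2, 0, 4]
```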

Word Sense Similarity Clustering Based on Vector Space Model and HAL (벡터 공간 모델과 HAL에 기초한 단어 의미 유사성 군집)

  • Kim, Dong-Sung
    • Korean Journal of Cognitive Science / v.23 no.3 / pp.295-322 / 2012
  • In this paper, similar word senses are clustered by applying the vector space model and HAL (Hyperspace Analog to Language). HAL measures the correlation between words over a context window of a certain size (Lund and Burgess 1996). The similarity between a word pair is the cosine similarity of the vector space model, which reduces the distortion between high-frequency and low-frequency words (Salton et al. 1975, Widdows 2004). PCA (Principal Component Analysis) and SVD (Singular Value Decomposition) are used to reduce the large number of dimensions of the similarity matrix. For sense-similarity clustering, both supervised and unsupervised learning are adopted: clustering for the unsupervised case, and SVM (Support Vector Machine), a Naive Bayes classifier, and the Maximum Entropy method for the supervised case. (A small HAL/cosine sketch follows this entry.)

  • PDF
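A toy version of the HAL construction and cosine-similarity measurement described above is sketched below; the window size, distance weighting, corpus, and the use of plain SVD for dimension reduction are assumptions standing in for the paper's setup.

```python
import numpy as np

def hal_matrix(tokens, vocab, window=4):
    """HAL-style co-occurrence matrix: context words within the window
    contribute weights that decrease with distance from the target word."""
    idx = {w: i for i, w in enumerate(vocab)}
    m = np.zeros((len(vocab), len(vocab)))
    for i, w in enumerate(tokens):
        for j in range(max(0, i - window), i):
            m[idx[w], idx[tokens[j]]] += window - (i - j) + 1
    return m

def cosine(u, v):
    """Cosine similarity, the length-normalized VSM measure."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

tokens = "the cat sat on the mat the dog sat on the rug".split()
vocab = sorted(set(tokens))
m = hal_matrix(tokens, vocab)
# SVD-based reduction of the (here tiny) co-occurrence space.
u, sv, _ = np.linalg.svd(m, full_matrices=False)
vecs = u[:, :3] * sv[:3]
i, j = vocab.index("cat"), vocab.index("dog")
print(f"sim(cat, dog) = {cosine(vecs[i], vecs[j]):.3f}")
```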