• Title/Summary/Keyword: window size


The Effect of Vanadium(V) Oxide Content of V2O5-WO3/TiO2 Catalyst on the Nitrogen Oxides Reduction and N2O Formation (질소산화물 환원과 N2O 생성에 있어서 V2O5-WO3/TiO2 촉매의 V2O5 함량 영향)

  • Kim, Jin-Hyung;Choi, Joo-Hong
    • Korean Chemical Engineering Research
    • /
    • v.51 no.3
    • /
    • pp.313-318
    • /
    • 2013
  • In order to investigate the effect of $V_2O_5$ loading of the $V_2O_5-WO_3/TiO_2$ catalyst on NO reduction and $N_2O$ formation, an experimental study was carried out in a differential reactor using powder catalysts. NO reduction and ammonia oxidation were investigated over catalysts composed of 1~8 wt% $V_2O_5$ on a fixed composition of $WO_3$ (9 wt%) on $TiO_2$ powder. The $V_2O_5-WO_3/TiO_2$ catalysts showed NO reduction activity even below $200^{\circ}C$. However, for the 1 wt% $V_2O_5$ catalyst, the lowest temperature at which NO reduction exceeded 99.9% for an NO concentration of 700 ppm was $340^{\circ}C$, with a very narrow temperature window. As the $V_2O_5$ content of the catalyst increased, this temperature shifted lower and the temperature window widened, finally reaching an activation temperature range of $220{\sim}340^{\circ}C$ for the 6 wt% $V_2O_5$ catalyst. The 8 wt% $V_2O_5$ catalyst showed lower activity than the 6 wt% catalyst over the full temperature range, and NO reduction activity above $340^{\circ}C$ decreased as the $V_2O_5$ content increased. According to the ammonia oxidation results, the active site for NO reduction over $V_2O_5-WO_3/TiO_2$ catalysts is mainly related to $V_2O_5$ particles exposed on the bare surface with an appropriate size, which should not be so large as to stimulate $N_2O$ formation at temperatures above $320^{\circ}C$. Currently, $V_2O_5-WO_3/TiO_2$ catalysts are operated at $350{\sim}450^{\circ}C$ to treat NOx in the effluent gas of industrial plants. However, in order to save energy and to reduce the secondary pollutant $N_2O$ formed in the high-temperature process, the use of a $V_2O_5-WO_3/TiO_2$ catalyst with higher $V_2O_5$ content is recommended as a low-temperature catalyst suitable for operation at $250{\sim}320^{\circ}C$.

Detection of Surface Changes by the 6th North Korea Nuclear Test Using High-resolution Satellite Imagery (고해상도 위성영상을 활용한 북한 6차 핵실험 이후 지표변화 관측)

  • Lee, Won-Jin;Sun, Jongsun;Jung, Hyung-Sup;Park, Sun-Cheon;Lee, Duk Kee;Oh, Kwan-Young
    • Korean Journal of Remote Sensing
    • /
    • v.34 no.6_4
    • /
    • pp.1479-1488
    • /
    • 2018
  • On September 3rd 2017, strong artificial seismic signals from North Korea were detected by the KMA (Korea Meteorological Administration) seismic network. The epicenter was estimated to be at the Punggye-ri nuclear test site, and the event was the most powerful to date. It had not been studied well owing to limited accessibility and the lack of geodetic measurements. Therefore, we used remote sensing data to analyze surface changes around the Mt. Mantap area. First, we tried to detect surface deformation using the InSAR method with Advanced Land Observation Satellite-2 (ALOS-2) data. Even though ALOS-2 uses the long-wavelength L-band, the method did not work well in this particular case because of decorrelation in the interferogram, most likely caused by large deformation near Mt. Mantap. To overcome this decorrelation, we applied the offset tracking method to measure the deformation. However, this method is sensitive to the window kernel size, so we applied various window sizes from 32 to 224 in 16 steps. We retrieved a 2D surface deformation of up to about 3 m on the west side of Mt. Mantap. Second, we used Pleiades-A/B high-resolution optical satellite images acquired before and after the 6th nuclear test. We detected widespread surface damage around the top of Mt. Mantap, such as landslides and a suspected collapse area. This phenomenon may have been caused by a very strong underground nuclear explosion. High-resolution satellite images can thus be used to analyze non-accessible areas.
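
The window-size dependence of offset tracking can be illustrated with a brute-force cross-correlation sketch in Python. This is not the authors' implementation: `track_offset`, its parameters, and the search radius are hypothetical, and the `window_sizes` list is only one reading of "32 to 224 in 16 steps".

```python
import numpy as np

def track_offset(ref_img, sec_img, row, col, win=64, search=8):
    """Estimate the (drow, dcol) pixel offset of the window centered at (row, col)."""
    h = win // 2
    ref = ref_img[row - h:row + h, col - h:col + h].astype(float)
    ref = (ref - ref.mean()) / (ref.std() + 1e-12)
    best, best_off = -np.inf, (0, 0)
    for dr in range(-search, search + 1):                 # brute-force search grid
        for dc in range(-search, search + 1):
            sec = sec_img[row + dr - h:row + dr + h,
                          col + dc - h:col + dc + h].astype(float)
            sec = (sec - sec.mean()) / (sec.std() + 1e-12)
            score = float(np.mean(ref * sec))             # normalized cross-correlation
            if score > best:
                best, best_off = score, (dr, dc)
    return best_off

# One reading of "window sizes from 32 to 224 in 16 steps" used in the study.
window_sizes = list(range(32, 225, 16))
```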

The Adaptive Personalization Method According to Users Purchasing Index : Application to Beverage Purchasing Predictions (고객별 구매빈도에 동적으로 적응하는 개인화 시스템 : 음료수 구매 예측에의 적용)

  • Park, Yoon-Joo
    • Journal of Intelligence and Information Systems
    • /
    • v.17 no.4
    • /
    • pp.95-108
    • /
    • 2011
  • This is a study of a personalization method that intelligently adapts the level of clustering to a customer's purchasing index. In the e-business era, many companies gather customers' demographic and transactional information such as age, gender, purchasing date, and product category. They use this information to predict customers' preferences or purchasing patterns so that they can provide more customized services. The conventional Customer-Segmentation method provides customized services for each customer group: it clusters the whole customer set into groups based on similarity and builds a predictive model for each resulting group. Thus, it keeps the number of predictive models manageable and supplements customers who do not have enough data to build a good predictive model with the data of other, similar customers. However, this method often fails to provide highly personalized services to each customer, which is especially important for VIP customers. Furthermore, it clusters customers who already have a considerable amount of data together with customers who have only a small amount, which increases computational cost unnecessarily without significant performance improvement. The other conventional method, the 1-to-1 method, provides more customized services than the Customer-Segmentation method because a predictive model is built using only the data of the individual customer. It not only provides highly personalized services but also builds relatively simple and less costly models for each customer. However, the 1-to-1 method does not produce a good predictive model when a customer has only a small amount of data; if a customer has an insufficient number of transactions, its performance deteriorates. In order to overcome the limitations of these two conventional methods, we suggest a new method, called the Intelligent Customer Segmentation method, that provides adaptively personalized services according to the customer's purchasing index. The suggested method clusters customers according to their purchasing index, so that predictions for customers with few purchases are based on data from more intensively clustered groups, while VIP customers, who already have a considerable amount of data, are clustered to a much lesser extent or not at all (see the sketch after this abstract). The main idea is to apply clustering only when the number of transactions of the target customer is less than a predefined criterion data size. In order to find this criterion, we suggest an algorithm called sliding window correlation analysis. The algorithm aims to find the transactional data size at which the performance of the 1-to-1 method degrades sharply due to data sparsity. After finding this criterion data size, we apply the conventional 1-to-1 method to customers who have more data than the criterion, and apply the clustering technique to those who have less, so that at least the criterion amount of data can be used in the model-building process. We apply the two conventional methods and the newly suggested method to Nielsen's beverage purchasing data to predict customers' purchasing amounts and purchasing categories.
We use two data mining techniques (Support Vector Machine and Linear Regression) and two types of performance measures (MAE and RMSE) to predict the two dependent variables. The results show that the suggested Intelligent Customer Segmentation method outperforms the conventional 1-to-1 method in many cases and achieves the same level of performance as the Customer-Segmentation method at much lower computational cost.
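
A minimal sketch of the adaptive rule the abstract describes: an individual model when the customer has at least the criterion number of transactions, otherwise a model built on the cluster's pooled data. It assumes scikit-learn's LinearRegression as the learner; the data structures and function names are illustrative, not the paper's code.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def build_adaptive_models(transactions, clusters, criterion):
    """transactions: {customer_id: (X, y)}; clusters: {customer_id: cluster_id}."""
    models, pooled = {}, {}
    for cid, (X, y) in transactions.items():
        if len(y) >= criterion:                     # enough data: 1-to-1 model
            models[cid] = LinearRegression().fit(X, y)
        else:                                       # sparse data: pool by cluster
            pooled.setdefault(clusters[cid], []).append((X, y))
    cluster_models = {
        k: LinearRegression().fit(np.vstack([X for X, _ in parts]),
                                  np.concatenate([y for _, y in parts]))
        for k, parts in pooled.items()
    }
    for cid in transactions:                        # sparse customers share a cluster model
        models.setdefault(cid, cluster_models[clusters[cid]])
    return models
```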

Double Queue CHOKe Mechanism for Congestion Control (이중 큐 CHOKe 방식을 사용한 혼잡제어)

  • 최기현;신호진;신동렬
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.28 no.11A
    • /
    • pp.867-875
    • /
    • 2003
  • Current end-to-end congestion control depends only on information from the end points (three duplicate ACK packets) and generally responds slowly to network congestion. This mechanism cannot avoid TCP global synchronization, in which the TCP congestion window size fluctuates during congestion periods. Furthermore, if the RTT (Round Trip Time) is large, three duplicate ACK packets are not a reliable congestion signal, because the congestion might already have disappeared and the host may keep sending packets until it receives the three duplicate ACKs. Recently, there has been increasing interest in complementing end-to-end congestion control with AQM (Active Queue Management) to improve the performance of TCP protocols. AQM is a variation of RED-based congestion control. In this paper, we first evaluate the effectiveness of current AQM schemes such as RED, CHOKe, ARED, FRED, and SRED over traffic with different rates and over traffic with a mixture of responsive and non-responsive flows. In particular, the CHOKe mechanism shows greater unfairness, especially when more unresponsive flows share a link. We then propose a new AQM scheme based on the CHOKe mechanism, called DQC (Double Queue CHOKe), which uses two FIFO queues before applying the CHOKe mechanism for adaptive congestion control. Simulation shows that it protects congestion-sensitive flows from congestion-causing flows well and performs better than other AQM schemes. We also use the partial state information proposed in LRURED to improve our mechanism.
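
For context, a minimal sketch of the plain CHOKe admission test that DQC builds on: an arriving packet is compared with a randomly drawn packet from the queue, and both are dropped if they belong to the same flow, penalizing unresponsive flows. This is the textbook CHOKe step, not the paper's double-queue extension; `red_drop`, `flow_id`, and the threshold names are assumptions.

```python
import random

def choke_enqueue(queue, pkt, avg_qlen, min_th, max_th, red_drop):
    """queue: list of packets with a .flow_id field; red_drop(avg, lo, hi) -> bool."""
    if avg_qlen > min_th and queue:
        victim = random.choice(queue)       # draw a random packet from the queue
        if victim.flow_id == pkt.flow_id:   # same flow: drop both packets
            queue.remove(victim)
            return False
    if avg_qlen > max_th or red_drop(avg_qlen, min_th, max_th):
        return False                        # normal RED drop decision
    queue.append(pkt)                       # admit the packet
    return True
```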

Terrain Referenced Navigation Simulation using Area-based Matching Method and TERCOM (영역기반 정합 기법 및 TERCOM에 기반한 지형 참조 항법 시뮬레이션)

  • Lee, Bo-Mi;Kwon, Jay-Hyoun
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.28 no.1
    • /
    • pp.73-82
    • /
    • 2010
  • TERCOM (TERrain COntour Matching), one of the terrain referenced navigation techniques used in cruise missile navigation systems, is still under development. In this study, TERCOM based on an area-based matching algorithm and an extended Kalman filter is analyzed through simulation. For area-based matching, the mean square difference (MSD) and cross-correlation matching algorithms are applied. The simulation assumes that a barometric altimeter, a radar altimeter, and the SRTM DTM are loaded on board, and that the vehicle navigates along a square track for 545 seconds at a velocity of 1,000 km per hour. The MSD and cross-correlation matching algorithms show standard deviations of position error of 99.6 m and 34.3 m, respectively. The correlation matching algorithm appears to be less sensitive than the MSD algorithm to topographic undulation, and the position accuracy of both algorithms depends strongly on the terrain. Therefore, an algorithm that remains reliable over terrain with little undulation needs to be developed for dependable terrain referenced navigation. Furthermore, studies on determining the proper matching window size for long-term flights and the best terrain database resolution for a given flight velocity and area should be conducted.
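
A small sketch of the area-based matching step described above: the measured terrain window is compared against DTM patches around the predicted position, and the position minimizing the mean square difference (MSD) is taken as the fix. Illustrative only; the array layout, search radius, and function name are assumptions.

```python
import numpy as np

def msd_match(measured, dtm, row0, col0, search=10):
    """measured: k x k terrain height window; dtm: full height grid (2-D array)."""
    k = measured.shape[0]
    best, best_pos = np.inf, (row0, col0)
    for r in range(row0 - search, row0 + search + 1):
        for c in range(col0 - search, col0 + search + 1):
            patch = dtm[r:r + k, c:c + k]
            msd = float(np.mean((measured - patch) ** 2))   # mean square difference
            if msd < best:
                best, best_pos = msd, (r, c)
    return best_pos
```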

Caching and Concurrency Control in a Mobile Client/Server Computing Environment (이동 클라이언트/서버 컴퓨팅환경에서의 캐싱 및 동시성 제어)

  • Lee, Sang-Geun;Hwang, Jong-Seon;Lee, Won-Gyu;Yu, Heon-Chang
    • Journal of KIISE:Software and Applications
    • /
    • v.26 no.8
    • /
    • pp.974-987
    • /
    • 1999
  • In a mobile computing environment, caching of frequently accessed data has been shown to be a useful technique for reducing contention on the narrow bandwidth of wireless channels. However, traditional client/server strategies for supporting transactional cache consistency, which require extensive communication between a client and a server, are not appropriate in a mobile client/server computing environment. In this paper, we propose a new protocol, called OCC-UTS (Optimistic Concurrency Control with Update TimeStamp), to support transactional cache consistency in a mobile client/server computing environment by utilizing broadcast-based solutions to the cache invalidation problem. The consistency check on accessed data and the commitment protocol are implemented in a truly distributed fashion as an integral part of the cache invalidation process, with most of the burden of the consistency check offloaded to mobile clients. Our experiments, based on an analytical model, substantiate the basic idea and study the performance characteristics. Experimental results show that the OCC-UTS protocol, even without a local cache, outperforms the competing protocols, and the more frequently a mobile client accesses data items, the more efficient the OCC-UTS protocol with a local cache becomes. With respect to disconnection, tolerance to disconnection is improved if the invalidation broadcast window size is extended.
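
A rough sketch of the broadcast-based invalidation idea that OCC-UTS relies on, including the effect of the invalidation broadcast window on disconnected clients. The report layout, field names, and timestamps are assumptions for illustration, not the paper's actual message format.

```python
def apply_invalidation_report(cache, report, last_sync, window):
    """cache: {item_id: (value, cached_ts)};
    report: {'broadcast_ts': t, 'updates': {item_id: update_ts}}."""
    if report["broadcast_ts"] - last_sync > window:
        cache.clear()                          # disconnected longer than the window: drop everything
        return report["broadcast_ts"]
    for item_id, update_ts in report["updates"].items():
        if item_id in cache and update_ts > cache[item_id][1]:
            del cache[item_id]                 # invalidate only the stale entries
    return report["broadcast_ts"]
```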

Application of Texture Feature Analysis Algorithm used the Statistical Characteristics in the Computed Tomography (CT): A base on the Hepatocellular Carcinoma (HCC) (전산화단층촬영 영상에서 통계적 특징을 이용한 질감특징분석 알고리즘의 적용: 간세포암 중심으로)

  • Yoo, Jueun;Jun, Taesung;Kwon, Jina;Jeong, Juyoung;Im, Inchul;Lee, Jaeseung;Park, Hyonghu;Kwak, Byungjoon;Yu, Yunsik
    • Journal of the Korean Society of Radiology
    • /
    • v.7 no.1
    • /
    • pp.9-15
    • /
    • 2013
  • In this study, a texture feature analysis (TFA) algorithm for automatic recognition of liver disease on computed tomography (CT) images is suggested, and a computer-aided diagnosis (CAD) system for hepatocellular carcinoma (HCC) is designed by applying the algorithm. The performance of each algorithm was compared and evaluated. In the HCC images, a region of analysis (ROA; window size $40{\times}40$ pixels) was set, and the HCC recognition rate was calculated for six TFA parameters (average gray level, average contrast, measure of smoothness, skewness, measure of uniformity, and entropy). As a result, TFA was found to be significant as a measure of the HCC recognition rate. The measure of uniformity gave the highest recognition rate; average contrast, measure of smoothness, and skewness were relatively high; and average gray level and entropy showed relatively low recognition rates. Accordingly, the algorithms with high recognition rates (maximum 97.14%, minimum 82.86%) can be used to identify HCC lesions in images and assist early clinical diagnosis. If applied in treatment planning, the diagnostic efficiency of early clinical diagnosis would be improved. In future work, effective and quantitative analysis should be added, and criteria for generalized disease recognition need to be studied.
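
A sketch of the six histogram-based texture measures listed in the abstract, computed over a 40x40 ROA. These are the standard first-order statistical texture descriptors; the normalization constants are assumptions rather than the paper's exact definitions.

```python
import numpy as np

def texture_features(roa, levels=256):
    """roa: 2-D uint8 region of analysis (e.g. a 40x40 window)."""
    hist = np.bincount(roa.ravel(), minlength=levels).astype(float)
    p = hist / hist.sum()                                  # gray-level probabilities
    g = np.arange(levels)
    mean = float(np.sum(g * p))                            # average gray level
    var = float(np.sum((g - mean) ** 2 * p))
    contrast = var ** 0.5                                  # average contrast (std. dev.)
    smoothness = 1.0 - 1.0 / (1.0 + var / (levels - 1) ** 2)
    skewness = float(np.sum((g - mean) ** 3 * p)) / (levels - 1) ** 2
    uniformity = float(np.sum(p ** 2))
    entropy = float(-np.sum(p[p > 0] * np.log2(p[p > 0])))
    return mean, contrast, smoothness, skewness, uniformity, entropy
```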

Automatic Extraction of Eye and Mouth Fields from Face Images using MultiLayer Perceptrons and Eigenfeatures (고유특징과 다층 신경망을 이용한 얼굴 영상에서의 눈과 입 영역 자동 추출)

  • Ryu, Yeon-Sik;O, Se-Yeong
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.37 no.2
    • /
    • pp.31-43
    • /
    • 2000
  • This paper presents a novel algorithm for extraction of the eye and mouth fields (facial features) from 2D gray-level face images. First of all, it has been found that eigenfeatures, derived from the eigenvalues and eigenvectors of the binary edge data set constructed from the eye and mouth fields, are very good features for locating these fields. The eigenfeatures extracted from positive and negative training samples of the facial features are used to train a MultiLayer Perceptron (MLP) whose output indicates the degree to which a particular image window contains the eye or the mouth. Second, to ensure robustness, an ensemble network consisting of multiple MLPs is used instead of a single MLP; the output of the ensemble network is the average of the field locations found by the constituent MLPs. Finally, in order to reduce computation time, we extract a coarse search region for the eyes and mouth using prior information about face images. The advantages of the proposed approach include that only a small number of frontal faces are sufficient to train the networks and that, furthermore, they generalize well to non-frontal poses and even to other people's faces. It was also experimentally verified that the proposed algorithm is robust against slight variations of facial size and pose owing to the generalization characteristics of neural networks.
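
A schematic of the window-scanning step described above: each candidate window of the binary edge image is projected onto pre-trained eigenvectors to form an eigenfeature vector, which an MLP then scores. The eigenvector matrix, mean vector, and `mlp_score` callable are assumed to come from training done elsewhere; the window size and scan step are illustrative.

```python
import numpy as np

def scan_for_field(edge_img, eigvecs, mean_vec, mlp_score, win=(16, 32), step=4):
    """Return the top-left corner of the window with the highest MLP score."""
    wh, ww = win
    best, best_pos = -np.inf, None
    for r in range(0, edge_img.shape[0] - wh + 1, step):
        for c in range(0, edge_img.shape[1] - ww + 1, step):
            patch = edge_img[r:r + wh, c:c + ww].ravel().astype(float)
            feat = eigvecs.T @ (patch - mean_vec)   # eigenfeature projection
            score = float(mlp_score(feat))          # degree of containing the eye/mouth
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos
```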


Improvement of Fetal Heart Rate Extraction from Doppler Ultrasound Signal (도플러 초음파 신호에서의 태아 심박 검출 개선)

  • Kwon, Ja Young;Lee, Yu Bin;Cho, Ju Hyun;Lee, Yoo Jin;Choi, Young Deuk;Nam, Ki Chang
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.49 no.9
    • /
    • pp.328-334
    • /
    • 2012
  • Continuous fetal heart beat monitoring has assisted clinicians in assuring fetal well-being during the antepartum and intrapartum periods. Fetal heart rate (FHR) is an important indicator of fetal health during pregnancy. Doppler ultrasound is a very useful method for measuring FHR non-invasively. Although it has been commonly used in the clinic, inaccurate heart rate readings have not been completely resolved. The objective of this study is to improve the FHR detection algorithm for Doppler ultrasound signals with a simple method. We modified the autocorrelation function to enhance signal periodicity and adopted an adaptive window size and shift for the data segment to be analyzed. The proposed method was applied to real measured data, and the beat-to-beat FHR estimation results were verified to be comparable with reference fetal ECG data. This simple and effective method is expected to be implementable in embedded systems.
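
A minimal sketch of autocorrelation-based FHR estimation in the spirit of the abstract: the dominant autocorrelation lag within the physiological range gives the beat interval, from which the rate and the next (adaptive) analysis window can be derived. The sampling rate, bpm limits, and function names are assumptions, not the paper's implementation.

```python
import numpy as np

def estimate_fhr(envelope, fs, min_bpm=60, max_bpm=240):
    """envelope: 1-D Doppler envelope segment; fs: sampling rate in Hz."""
    x = envelope - np.mean(envelope)
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]   # one-sided autocorrelation
    lo = int(fs * 60.0 / max_bpm)                       # shortest plausible beat interval
    hi = int(fs * 60.0 / min_bpm)                       # longest plausible beat interval
    lag = lo + int(np.argmax(ac[lo:hi]))                # dominant periodicity (samples)
    return 60.0 * fs / lag, lag                         # (beats per minute, lag)

def next_window(lag, beats=4):
    """Adapt the analysis window to a few beat intervals and shift by one beat."""
    return beats * lag, lag
```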

Evaluation of clinical outcomes of implants placed into the maxillary sinus with a perforated sinus membrane: a retrospective study

  • Kim, Gwang-Seok;Lee, Jae-Wang;Chong, Jong-Hyon;Han, Jeong Joon;Jung, Seunggon;Kook, Min-Suk;Park, Hong-Ju;Ryu, Sun-Youl;Oh, Hee-Kyun
    • Maxillofacial Plastic and Reconstructive Surgery
    • /
    • v.38
    • /
    • pp.50.1-50.6
    • /
    • 2016
  • Background: The aim of this study was to evaluate the clinical outcomes of implants placed within a maxillary sinus with a perforated sinus membrane using the lateral window approach. Methods: We examined the medical records of patients who had implants placed within a maxillary sinus with a perforated sinus membrane by the lateral approach at the Department of Oral and Maxillofacial Surgery of Chonnam National University Dental Hospital from January 2009 to December 2015. There were 41 patients (male:female = 28:13). The mean age of the patients was $57.2{\pm}7.2$ years at the time of operation (range, 20-76 years), and the mean follow-up duration was 2.1 years (range, 0.5-5 years) after implant placement. Regarding the method of sinus elevation, only the lateral approach was included in this study. Results: Ninety-nine implants were placed in 41 patients whose sinus membranes were perforated during the lateral approach. The perforated sinus membranes were repaired with a resorbable collagen membrane. Simultaneous implant placement with sinus bone grafting was performed in 37 patients, whereas delayed placement was done in four patients. The average residual bone height was $3.4{\pm}2.0mm$ in cases of simultaneous implant placement and $0.6{\pm}0.9mm$ in cases of delayed placement. Maxillary bone grafts with implant placement performed on patients with a perforated maxillary sinus membrane did not fail, and the cumulative implant survival rate was 100%. Conclusions: In patients with perforation of the sinus mucosa, sinus elevation and implant placement are possible regardless of the location and size of the membrane perforation. Repair using a resorbable collagen membrane is a predictable and reliable technique.