• Title/Summary/Keyword: Window performance

A Study on the Efficiency of Container Ports in the Mediterranean Sea (지중해 컨테이너항만의 효율성 분석에 관한 연구)

  • Ibrahim, Ousama Ibrahim Hassan;Kim, Hyun Deok
    • Journal of Korea Port Economic Association
    • /
    • v.37 no.2
    • /
    • pp.91-105
    • /
    • 2021
  • The current increasing size of container vessels affects container ports' operations. Containerization has changed the inter-modal handling process, bringing more flexibility and convenience to the shipping industry. It is therefore crucial to analyze the efficiency of container ports at the regional level. Such efficiency analysis provides a powerful management tool for port operators and shipping managers in the Mediterranean market, and it also helps form an information base for planning new regional and national port operations. This paper aims to analyze the technical efficiency of major Mediterranean container ports. It establishes a model of port performance and efficiency through empirical tests of various factors. Using panel data collected from 48 DMUs (decision making units), this study attempts to provide an empirical basis for port efficiency relative to other factors in total port performance. Due to the complexity of the various activities carried out at container ports, the study focuses only on technical efficiency at the level of the Mediterranean container port. Unlike the practice of cross-sectional data analysis, this study uses panel data in a DEA window analysis, as originally established by Charnes et al. (1985). The main focus is the relative technical efficiency of 12 container ports from 7 countries in the Mediterranean market. The selection of ports is based on their high handling capability and rankings in the World Top 100 (Containerization International, 2018).
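As a rough illustration of DEA window analysis (not the authors' actual model: the port names, the single input/output pair, and the 3-year window width below are invented), the sketch slides a window over panel data and scores every port-year against the best performer inside that window. With one input and one output, input-oriented CCR efficiency reduces to the output/input ratio normalized by the window maximum:

```python
# Sketch of DEA window analysis on panel data (single input, single output).
# With one input and one output, CCR efficiency reduces to the ratio
# (output / input) divided by the best such ratio in the reference set.

def window_dea(panel, width):
    """panel: {(port, year): (input, output)}.
    Returns {(port, year, window_start_year): efficiency}."""
    years = sorted({y for (_, y) in panel})
    scores = {}
    for start in range(len(years) - width + 1):
        window_years = years[start:start + width]
        # Each port-year inside the window is treated as a separate DMU.
        dmus = {k: v for k, v in panel.items() if k[1] in window_years}
        best = max(out / inp for inp, out in dmus.values())
        for (port, year), (inp, out) in dmus.items():
            scores[(port, year, window_years[0])] = (out / inp) / best
    return scores

panel = {
    ("PortA", 2016): (100.0, 50.0), ("PortA", 2017): (100.0, 60.0),
    ("PortA", 2018): (100.0, 70.0), ("PortB", 2016): (100.0, 100.0),
    ("PortB", 2017): (100.0, 100.0), ("PortB", 2018): (100.0, 90.0),
}
scores = window_dea(panel, width=3)
```

Each port appears once per window it belongs to, which is how window analysis tracks an efficiency trend over time rather than a single cross-sectional score.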

Synthesis of Nitrogen-Doped Porous Carbon Fibers Derived from Coffee Waste and Their Electrochemical Application (커피 폐기물 기반의 질소가 포함된 다공성 탄소 섬유의 제조 및 전기화학적 응용)

  • Dong Hyun Kim;Min Sang Kim;Suk Jekal;Jiwon Kim;Ha-Yeong Kim;Yeon-Ryong Chu;Chan-Gyo Kim;Hyung Sub Sim;Chang-Min Yoon
    • Journal of the Korea Organic Resources Recycling Association
    • /
    • v.31 no.1
    • /
    • pp.57-68
    • /
    • 2023
  • In this study, coffee waste was recycled into nitrogen-doped porous carbon fibers as an active material for high-energy EDLCs (electric double-layer capacitors). The coffee waste was mixed with polyvinylpyrrolidone and dissolved in dimethylformamide. The mixture was then electrospun to fabricate coffee waste-derived nanofibers (Bare-CWNF), followed by carbonization under a nitrogen atmosphere at 900℃. The as-synthesized carbonized coffee waste-derived nanofibers (Carbonized-CWNF) maintained the fibrous form of Bare-CWNF while preserving the nitrogen content. The electrochemical performance was analyzed for carbonized coffee waste (Carbonized-CW)-, carbonized PAN-derived nanofiber (Carbonized-PNF)-, and Carbonized-CWNF-based electrodes in the operating voltage window of -1.0 to 0.0 V. Among the electrodes, the Carbonized-CWNF-based electrode exhibited the highest specific capacitance of 123.8 F g-1 at 1 A g-1, owing to the presence of nitrogen and its porous structure. As a result, the nitrogen-containing porous carbon fibers synthesized from coffee waste showed excellent electrochemical performance as electrodes for high-energy EDLCs. The experiments designed in this study successfully demonstrated the recycling of coffee waste, a plant-based biomass that causes environmental pollution, into high-energy materials, also attaining eco-friendliness.
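For context, the specific capacitance reported above is conventionally computed from a galvanostatic charge-discharge curve as C = I·Δt / (m·ΔV). The discharge time below is invented to reproduce the headline figure and is not taken from the study:

```python
# Gravimetric specific capacitance from a galvanostatic discharge curve:
#   C = I * dt / (m * dV)   [F/g]
# I: discharge current (A), dt: discharge time (s),
# m: active-material mass (g), dV: voltage window (V).

def specific_capacitance(current_a, discharge_s, mass_g, window_v):
    return current_a * discharge_s / (mass_g * window_v)

# At a current of 1 A per gram of active material over a 1.0 V window,
# a hypothetical 123.8 s discharge corresponds to 123.8 F/g.
c = specific_capacitance(current_a=1.0, discharge_s=123.8, mass_g=1.0, window_v=1.0)
```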

An Iterative, Interactive and Unified Seismic Velocity Analysis (반복적 대화식 통합 탄성파 속도분석)

  • Suh Sayng-Yong;Chung Bu-Heung;Jang Seong-Hyung
    • Geophysics and Geophysical Exploration
    • /
    • v.2 no.1
    • /
    • pp.26-32
    • /
    • 1999
  • Among the various seismic data processing sequences, velocity analysis is the most time-consuming and man-hour-intensive processing step. For production seismic data processing, a good velocity analysis tool as well as a high-performance computer is required. The tool must give fast and accurate velocity analysis. There are two different approaches to velocity analysis: batch and interactive. In batch processing, a velocity plot is made at every analysis point. Generally, the plot consists of a semblance contour, a super gather, and a stack panel. The interpreter chooses the velocity function by analyzing the velocity plot. The technique is highly dependent on the interpreter's skill and requires human effort. As high-speed graphic workstations have become more popular, various interactive velocity analysis programs have been developed. Although these programs enable faster picking of velocity nodes using a mouse, their main improvement is simply the replacement of the paper plot by the graphic screen. The velocity spectrum is highly sensitive to the presence of noise, especially the coherent noise often found in the shallow region of marine seismic data. For accurate velocity analysis, this noise must be removed before the spectrum is computed. Also, velocity analysis must be carried out by carefully choosing the location of the analysis point and accurately computing the spectrum. The analyzed velocity function must be verified by mute and stack, and the sequence must be repeated several times. Therefore an iterative, interactive, and unified velocity analysis tool is highly desirable. An interactive velocity analysis program, xva (X-Window based Velocity Analysis), was developed. The program handles all processes required in velocity analysis, such as composing the super gather, computing the velocity spectrum, NMO correction, mute, and stack.
Most parameter changes give the final stack via a few mouse clicks, thereby enabling iterative and interactive processing. A simple trace indexing scheme is introduced, and a program to make the index of the Geobit seismic disk file was developed. The index is used to reference the original input, i.e., the CDP sort, directly. A transformation technique of the mute function between the T-X domain and the NMOC domain is introduced and adopted in the program. The result of the transform is similar to the remove-NMO technique in suppressing shallow noise such as direct waves and refracted waves. However, it has two improvements: no interpolation error and very fast computation. By introducing this technique, mute times can easily be designed in the NMOC domain and applied to the super gather in the T-X domain, thereby producing a more accurate velocity spectrum interactively. The xva program consists of 28 files, 12,029 lines, 34,990 words and 304,073 characters. The program references Geobit utility libraries and can be installed in a Geobit-preinstalled environment. It runs in an X-Window/Motif environment, with its menu designed according to the Motif style guide. A brief usage of the program is discussed. The program allows fast and accurate seismic velocity analysis, which is necessary for computing AVO (Amplitude Versus Offset)-based DHI (Direct Hydrocarbon Indicator) and for making high-quality seismic sections.
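The velocity spectrum discussed above is typically a semblance panel: for each trial velocity the gather is NMO-corrected and the coherence of amplitudes across offsets is measured, peaking at the true velocity. A minimal sketch on a synthetic spike gather follows (the geometry and pulse shape are invented; this is not the xva implementation):

```python
import math

# Semblance-based velocity analysis on a synthetic CDP gather.
# Each trace holds a Gaussian pulse at the hyperbolic moveout time
#   t(x) = sqrt(t0^2 + x^2 / v^2),
# so the semblance is maximal at the true NMO velocity.

DT = 0.004                                    # sample interval (s)
NT = 500                                      # samples per trace
OFFSETS = [200.0 * i for i in range(1, 6)]    # offsets (m), invented
T0 = 0.5                                      # zero-offset time (s)
V_TRUE = 1500.0                               # true NMO velocity (m/s)

def make_trace(offset):
    t_arr = math.sqrt(T0 ** 2 + (offset / V_TRUE) ** 2)
    return [math.exp(-((i * DT - t_arr) / 0.01) ** 2) for i in range(NT)]

gather = [make_trace(x) for x in OFFSETS]

def semblance(t0, velocity):
    # Sample each trace at its predicted NMO-corrected arrival time.
    amps = []
    for x, trace in zip(OFFSETS, gather):
        t = math.sqrt(t0 ** 2 + (x / velocity) ** 2)
        amps.append(trace[min(NT - 1, round(t / DT))])
    num = sum(amps) ** 2
    den = len(amps) * sum(a * a for a in amps)
    return num / den if den > 0 else 0.0

candidates = [1300.0, 1400.0, 1500.0, 1600.0, 1700.0]
best_v = max(candidates, key=lambda v: semblance(T0, v))
```

In practice the spectrum is computed over a time window around each t0 rather than a single sample, but the coherence measure is the same.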

Recent Progress in Air-Conditioning and Refrigeration Research : A Review of Papers Published in the Korean Journal of Air-Conditioning and Refrigeration Engineering in 2012 (설비공학 분야의 최근 연구 동향 : 2012년 학회지 논문에 대한 종합적 고찰)

  • Han, Hwataik;Lee, Dae-Young;Kim, Sa Ryang;Kim, Hyun-Jung;Choi, Jong Min;Park, Jun-Seok;Kim, Sumin
    • Korean Journal of Air-Conditioning and Refrigeration Engineering
    • /
    • v.25 no.6
    • /
    • pp.346-361
    • /
    • 2013
  • This article reviews the papers published in the Korean Journal of Air-Conditioning and Refrigeration Engineering during 2012. It is intended to assess the status of current research in the areas of heating, cooling, ventilation, sanitation, and indoor environments of buildings and plant facilities. The conclusions are as follows: (1) Research on thermal and fluid engineering has been reviewed in the groups of fluid machinery, pipes and valves, fuel cells and power plants, ground-coupled heat pumps, and general heat and mass transfer systems. Research issues mainly focus on new and renewable energy systems, such as fuel cells, ocean thermal energy conversion power plants, and ground-coupled heat pump systems. (2) Work in the heat transfer area has been reviewed in the categories of heat transfer characteristics, pool boiling and condensing heat transfer, and industrial heat exchangers. Studies on heat transfer characteristics included results for natural convection in a square enclosure with two hot circular cylinders, a non-uniform grooved tube considering tube expansion, a single-tube annular baffle system, a broadcasting LED light with an ion wind generator, the mechanical properties and microstructure of SA213 P92 boiler pipe steel, and a flat plate using multiple tripping wires. In the area of pool boiling and condensing heat transfer, studies on the design of a micro-channel heat exchanger for a heat pump, numerical simulation of a heat pump evaporator considering the pressure drop in the distributor and capillary tubes, critical heat flux on a Thermoexcel-E enhanced surface, and the performance of a fin-and-tube condenser with non-uniform air distribution and different tube types were actively carried out.
In the area of industrial heat exchangers, studies were performed on a plate-heat-exchanger-type dehumidifier, a fin-tube heat exchanger, an electric circuit transient analogy model of a vertical closed-loop ground heat exchanger, the heat transfer characteristics of a double-skin window for a plant factory, a regenerative heat exchanger depending on its porous structure, and various types of plate heat exchangers. (3) In the field of refrigeration, various studies were executed to improve refrigeration system performance and to evaluate the applicability of alternative refrigerants and new components. Various topics were presented in the area of the refrigeration cycle, with research issues mainly focused on enhancing system performance. In the alternative refrigerant area, studies on CO2, an R32/R152a mixture, and R1234yf were performed. Studies on the design and performance analysis of various compressors and evaporators were executed. (4) In building mechanical system research, twenty-nine studies were conducted to achieve effective design of mechanical systems and to maximize the energy efficiency of buildings. The topics of the studies included heating and cooling, HVAC systems, ventilation, renewable energy systems, and lighting systems in buildings. New designs and performance tests using numerical methods and experiments provide useful information and key data which can improve the energy efficiency of buildings. (5) In the field of the architectural environment, studies for various purposes, such as indoor environment, building energy, and renewable energy, were performed. In particular, building-energy-related research and renewable energy systems have been mainly studied, reflecting interest in global climate change and efforts by government and architectural specialists to reduce building energy consumption. In addition, much research has been conducted regarding indoor environments.

A Study of Anomaly Detection for ICT Infrastructure using Conditional Multimodal Autoencoder (ICT 인프라 이상탐지를 위한 조건부 멀티모달 오토인코더에 관한 연구)

  • Shin, Byungjin;Lee, Jonghoon;Han, Sangjin;Park, Choong-Shik
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.3
    • /
    • pp.57-73
    • /
    • 2021
  • Maintenance and failure prevention through anomaly detection of ICT infrastructure is becoming important. System monitoring data is multidimensional time series data, which raises the difficulty of considering both the characteristics of multidimensional data and those of time series data. When dealing with multidimensional data, the correlation between variables should be considered. Existing methods, such as probability- or linear-model-based and distance-based approaches, degrade due to the limitation known as the curse of dimensionality. In addition, time series data is typically preprocessed with sliding-window techniques and time series decomposition for autocorrelation analysis; these techniques increase the dimension of the data, so they need to be supplemented. Anomaly detection is an old research field: statistical methods and regression analysis were used in the early days, and there are now active studies applying machine learning and artificial neural network technology. Statistically based methods are difficult to apply when data is non-homogeneous and do not detect local outliers well. The regression analysis method learns a regression formula based on parametric statistics and detects abnormality by comparing predicted and actual values. Anomaly detection using regression analysis has the disadvantage that performance degrades when the model is not solid or the data contains noise or outliers, and it carries the restriction that the training data must be free of such noise and outliers. An autoencoder using artificial neural networks is trained to produce output as similar as possible to its input. It has many advantages over existing probability and linear models, cluster analysis, and supervised learning: it can be applied to data that does not satisfy a probability distribution or a linearity assumption,
and it can be trained unsupervised, without labeled data. However, it is limited in identifying local outliers in multidimensional data, and the dimension of the data greatly increases due to the characteristics of time series data. In this study, we propose CMAE (Conditional Multimodal Autoencoder), which enhances anomaly detection performance by considering local outliers and time series characteristics. First, we applied a Multimodal Autoencoder (MAE) to improve the identification of local outliers in multidimensional data. Multimodal models are commonly used to learn different types of inputs, such as voice and images; the different modals share the autoencoder's bottleneck and thereby learn correlations. In addition, a CAE (Conditional Autoencoder) was used to learn the characteristics of time series data effectively without increasing the dimension of the data. Conditional inputs usually take categorical variables, but in this study time was used as the condition, in order to learn periodicity. The proposed CMAE model was verified by comparison with a Unimodal Autoencoder (UAE) and a Multimodal Autoencoder (MAE). The restoration performance of the autoencoders over 41 variables was confirmed for the proposed model and the comparison models. Restoration performance differs by variable; restoration operates well for the Memory, Disk, and Network modals, for which the loss value is small in all three autoencoder models. The Process modal did not show a significant difference across the three models, and the CPU modal showed excellent performance in CMAE. A ROC curve was prepared for evaluating anomaly detection performance in the proposed and comparison models, and AUC, accuracy, precision, recall, and F1-score were compared. On all indicators, the performance ranked in the order CMAE, MAE, UAE.
In particular, the recall was 0.9828 for CMAE, confirming that it detects almost all anomalies. The accuracy of the model was also improved, to 87.12%, and the F1-score was 0.8883, which is considered suitable for anomaly detection. In practical terms, the proposed model has an additional advantage beyond the performance improvement: techniques such as time series decomposition and sliding windows add procedures that must be managed, and the resulting dimensional increase can slow inference. The proposed model is easy to apply to practical tasks in terms of inference speed and model management.
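One way to read the "time as condition" idea above: instead of widening the input window, the timestamp is encoded as cyclic features and fed to the autoencoder alongside the raw metrics, so periodicity is available without any dimensional blow-up. A minimal sketch (the 24-hour period and feature layout are assumptions, not the paper's exact encoding):

```python
import math

# Encode a timestamp's position in a daily cycle as (sin, cos) features and
# concatenate them with the metric vector. A conditional autoencoder then
# sees "when" without the dimension growth of sliding-window preprocessing.

def conditional_input(metrics, hour_of_day, period=24.0):
    angle = 2.0 * math.pi * (hour_of_day % period) / period
    return list(metrics) + [math.sin(angle), math.cos(angle)]

x = conditional_input([0.7, 0.1, 0.3], hour_of_day=6)   # metrics at 6 AM
```

The encoding is periodic by construction: hour 0 and hour 24 map to identical features, unlike a raw hour number, which would place them at opposite ends of the scale.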

The Design of Broadband Ultrasonic Transducers for Fish Species Identification - Bandwidth Enhancement of an Ultrasonic Transducer Using Double Acoustic Matching Layers - (어종식별을 위한 광대역 초음파 변환기의 설계 ( III ) - 이중음향정합층을 이용한 초음파 변환기의 대역폭 확장 -)

  • 이대재
    • Journal of the Korean Society of Fisheries and Ocean Technology
    • /
    • v.34 no.1
    • /
    • pp.85-95
    • /
    • 1998
  • Broadband ultrasonic transducers have been designed for obtaining broadband echo signals from fish schools in relation to the identification of fish species. The broadening of bandwidth was achieved by attaching double acoustic matching layers to the front face of a Tonpilz transducer consisting of an aluminum head, a piezoelectric ring, and a brass tail. To evaluate performance characteristics such as the transmitting voltage response (TVR), the constructed transducers were tested experimentally and numerically in a water tank while changing parameters such as the impedances and thicknesses of the head, tail, and matching layers. Also, the developed transducer was excited by a chirp signal and the received chirp waveforms were analyzed. According to the measured TVR results, the available 3 dB bandwidth of the transducer with double matching layers of a 7 mm thick $Al_2O_3$/epoxy composite and an 18 mm thick polyurethane window was 7.3 kHz with a center frequency of 38.8 kHz, and the maximum and minimum TVR values in this frequency region were 135.7 dB and 132.7 dB re $1\;{\mu}Pa/V$ at 1 m, respectively. Also, the available 3 dB bandwidth of the transducer with double matching layers of an 11 mm thick $Al_2O_3$/epoxy composite and a 15 mm thick polyurethane window was 6.2 kHz with a center frequency of 38.6 kHz, and the maximum TVR value in that frequency region was 136.3 dB re $1\;{\mu}Pa/V$ at 1 m. Reasonable agreement between the experimental and numerical TVR results was achieved. The frequency-dependent characteristics of the experimentally observed chirp signals closely matched the measured TVR results. These results suggest that there is potential for increasing the bandwidth by varying other parameters in the transducer design and the material of the acoustic matching layers.
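For background, acoustic matching layers are commonly designed around a quarter-wavelength thickness and a geometric-mean impedance between the two media. The sound speeds and impedances below are illustrative textbook values, not the paper's measured parameters:

```python
import math

# Classic single-layer quarter-wave matching design:
#   thickness = c / (4 * f)              (quarter wavelength in the layer)
#   Z_layer   = sqrt(Z_source * Z_load)  (geometric-mean impedance)

def quarter_wave_thickness(sound_speed_mps, freq_hz):
    return sound_speed_mps / (4.0 * freq_hz)

def matching_impedance(z_source, z_load):
    return math.sqrt(z_source * z_load)

# Illustrative: a polyurethane-like layer (c ~ 1800 m/s) at 38.8 kHz gives a
# thickness of the same order as the 15-18 mm windows reported above.
t = quarter_wave_thickness(1800.0, 38.8e3)   # ~0.0116 m, i.e. ~12 mm
z = matching_impedance(30e6, 1.5e6)          # piezoceramic (~30 MRayl) to water
```

Real designs (including the paper's double-layer stack) deviate from these single-layer formulas, which is why the study sweeps thicknesses numerically and experimentally.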

The Adaptive Personalization Method According to Users Purchasing Index : Application to Beverage Purchasing Predictions (고객별 구매빈도에 동적으로 적응하는 개인화 시스템 : 음료수 구매 예측에의 적용)

  • Park, Yoon-Joo
    • Journal of Intelligence and Information Systems
    • /
    • v.17 no.4
    • /
    • pp.95-108
    • /
    • 2011
  • This is a study of a personalization method that intelligently adapts the level of clustering to a customer's purchasing index. In the e-biz era, many companies gather customers' demographic and transactional information such as age, gender, purchasing date, and product category. They use this information to predict customers' preferences or purchasing patterns so that they can provide more customized services. The conventional Customer-Segmentation method provides customized services for each customer group. This method clusters the whole customer set into groups based on similarity and builds a predictive model for each resulting group. Thus it limits the number of predictive models and provides more data for customers who do not have enough data of their own to build a good predictive model, by using the data of other similar customers. However, this method often fails to provide highly personalized services to each customer, which is especially important for VIP customers. Furthermore, it clusters customers who already have a considerable amount of data together with customers who have little data, which increases computational cost unnecessarily without significant performance improvement. The other conventional method, the 1-to-1 method, provides more customized services than the Customer-Segmentation method since the predictive model is built using only the individual customer's data. This method not only provides highly personalized services but also builds a relatively simple and less costly model for each customer. However, the 1-to-1 method does not produce a good predictive model when a customer has only a small amount of data; if a customer has an insufficient number of transactions, its performance deteriorates.
In order to overcome the limitations of these two conventional methods, we suggest a new method, called the Intelligent Customer Segmentation method, that provides adaptively personalized services according to the customer's purchasing index. The suggested method clusters customers according to their purchasing index, so that predictions for customers who purchase less are based on data from more intensively clustered groups, while VIP customers, who already have a considerable amount of data, are clustered to a much lesser extent or not clustered at all. The main idea is to apply clustering when the number of transactions of the target customer is less than a predefined criterion data size. In order to find this criterion, we suggest an algorithm called sliding window correlation analysis, which aims to find the transactional data size at which the performance of the 1-to-1 method decreases sharply due to data sparsity. After finding this criterion data size, we apply the conventional 1-to-1 method to customers who have more data than the criterion, and apply clustering to those who have less, until at least the criterion amount of data is available for model building. We apply the two conventional methods and the newly suggested method to Nielsen's beverage purchasing data to predict customers' purchasing amounts and purchasing categories. We use two data mining techniques (Support Vector Machine and Linear Regression) and two performance measures (MAE and RMSE) to predict the two dependent variables. The results show that the suggested Intelligent Customer Segmentation method outperforms the conventional 1-to-1 method in many cases and produces the same level of performance as the Customer-Segmentation method at much lower computational cost.
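The adaptive rule described above can be sketched as a simple dispatcher: customers with at least `criterion` transactions get an individual (1-to-1) model, and the rest are pooled into cluster models until enough data is available. Function and parameter names here are illustrative, not from the paper:

```python
# Route each customer to a 1-to-1 model or a cluster model based on how many
# transactions they have, mirroring the adaptive-personalization idea.

def route_customers(transaction_counts, criterion):
    """transaction_counts: {customer_id: n_transactions}.
    Returns {customer_id: "1-to-1" or "cluster"}."""
    return {
        cid: ("1-to-1" if n >= criterion else "cluster")
        for cid, n in transaction_counts.items()
    }

routing = route_customers({"vip": 500, "casual": 12, "new": 3}, criterion=50)
```

In the paper's scheme the criterion itself is estimated by sliding window correlation analysis (finding the data size below which 1-to-1 performance collapses); here it is simply a given parameter.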

Development of Acquisition and Analysis System of Radar Information for Small Inshore and Coastal Fishing Vessels - Suppression of Radar Clutter by CFAR - (연근해 소형 어선의 레이더 정보 수록 및 해석 시스템 개발 - CFAR에 의한 레이더 잡음 억제 -)

  • 이대재;김광식;신형일;변덕수
    • Journal of the Korean Society of Fisheries and Ocean Technology
    • /
    • v.39 no.4
    • /
    • pp.347-357
    • /
    • 2003
  • This paper describes the suppression of sea clutter on a marine radar display using a cell-averaging CFAR (constant false alarm rate) technique, and the analysis of radar echo signal data in relation to the estimation of ARPA functions and the detection of the shadow effect in clutter returns. The echo signal was measured using an X-band radar located at Pukyong National University, with a horizontal beamwidth of $3.9^{\circ}$, a vertical beamwidth of $20^{\circ}$, a pulsewidth of $0.8\;{\mu}s$ and a transmitted peak power of 4 kW. The suppression of sea clutter was investigated for probabilities of false alarm between $10^{-0.25}$ and $10^{-1.0}$. The performance of cell-averaging CFAR was also compared with that of an ideal fixed threshold. The motion vectors and trajectories of ships were extracted, and the shadow effect in clutter returns was analyzed. The results obtained are summarized as follows: 1. The ARPA plotting results and motion vectors for acquired targets, extracted by analyzing the echo signal data, were displayed on the PC-based radar system, and the continuous trajectories of ships were tracked in real time. 2. To suppress sea clutter in a noisy environment, a cell-averaging CFAR processor with a total CFAR window of 47 samples (20+20 reference cells, 3+3 guard cells, and the cell under test) was designed. On a particular data set acquired at Suyong Man, Busan, Korea, when the probability of false alarm applied to the designed processor was $10^{-0.75}$, the suppression of radar clutter was significantly improved. The results suggest that the designed cell-averaging CFAR processor is very effective in uniform clutter environments. 3. It is concluded that cell-averaging CFAR may give a considerable improvement in the suppression of uniform sea clutter compared to the ideal fixed threshold. 4. The effective height of a target, estimated by analyzing the shadow effect in clutter returns for a number of range bins behind the target as seen from the radar antenna, was approximately 1.2 m, and this height information can be used to extract the shape parameter of the tracked target.
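A minimal sketch of the cell-averaging CFAR window described above (20+20 reference cells, 3+3 guard cells, and the cell under test). The constant-amplitude "clutter" and the target level are synthetic, and the threshold multiplier uses the standard CA-CFAR relation alpha = N·(Pfa^(-1/N) - 1):

```python
# Cell-averaging CFAR over a 1-D range profile.
# Window per test cell: 20+20 reference cells, 3+3 guard cells (47 total).

N_REF, N_GUARD = 20, 3

def ca_cfar(profile, pfa):
    n = 2 * N_REF                              # reference cells averaged
    alpha = n * (pfa ** (-1.0 / n) - 1.0)      # CA-CFAR threshold multiplier
    half = N_REF + N_GUARD
    detections = []
    for i in range(half, len(profile) - half):
        left = profile[i - half:i - N_GUARD]           # 20 cells left
        right = profile[i + N_GUARD + 1:i + half + 1]  # 20 cells right
        noise = sum(left) + sum(right)
        if profile[i] > alpha * noise / n:
            detections.append(i)
    return detections

# Synthetic test: uniform clutter level 1.0 with one strong target.
profile = [1.0] * 100
profile[50] = 10.0
hits = ca_cfar(profile, pfa=10 ** -0.75)
```

Because the threshold adapts to the local reference-cell average, the constant clutter floor produces no detections while the target stands out; this is the sense in which CFAR keeps the false alarm rate constant as the clutter level changes.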

Double Queue CHOKe Mechanism for Congestion Control (이중 큐 CHOKe 방식을 사용한 혼잡제어)

  • 최기현;신호진;신동렬
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.28 no.11A
    • /
    • pp.867-875
    • /
    • 2003
  • Current end-to-end congestion control depends only on information from the end points (three duplicate ACK packets) and generally responds slowly to network congestion. This mechanism cannot avoid TCP global synchronization, in which the TCP congestion window size fluctuates during congestion periods. Furthermore, if the RTT (Round Trip Time) increases, three duplicate ACK packets are no longer accurate congestion signals, because the congestion might already have disappeared while the host keeps sending packets until it receives them. Recently there has been increasing interest in improving end-to-end congestion control using AQM (Active Queue Management) to improve the performance of TCP protocols. AQM is a variation of RED-based congestion control. In this paper, we first evaluate the effectiveness of current AQM schemes such as RED, CHOKe, ARED, FRED and SRED over traffic with different rates and over traffic with mixed responsive and non-responsive flows, respectively. In particular, the CHOKe mechanism shows greater unfairness, especially when more unresponsive flows exist in a shared link. We then propose a new AQM scheme based on the CHOKe mechanism, called DQC (Double Queue CHOKe), which uses two FIFO queues before applying the CHOKe mechanism for adaptive congestion control. Simulation shows that it works well in protecting congestion-sensitive flows from congestion-causing flows and exhibits better performance than other AQM schemes. We also use partial state information, proposed in LRU-RED, to improve our mechanism.
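For reference, the core CHOKe admission step works as follows: when the queue is congested, an arriving packet is compared with a randomly chosen packet already in the queue, and if both belong to the same flow, both are dropped. Heavy (unresponsive) flows occupy more queue slots, so they are penalized more often. A minimal single-queue sketch (the DQC double-queue arrangement is not modeled here):

```python
import random
from collections import deque

# CHOKe: on arrival during congestion, draw a random packet from the queue;
# if it belongs to the same flow as the arrival, drop both.

def choke_admit(queue, flow_id, min_th, capacity, rng=random):
    """Returns True if the arriving packet was enqueued."""
    if len(queue) > min_th:                  # congestion: run the CHOKe test
        victim_idx = rng.randrange(len(queue))
        if queue[victim_idx] == flow_id:     # flow match: drop both packets
            del queue[victim_idx]
            return False
    if len(queue) < capacity:
        queue.append(flow_id)
        return True
    return False                             # queue full: tail drop

q = deque(["udp"] * 10)                      # queue dominated by one flow
dropped_pair = not choke_admit(q, "udp", min_th=5, capacity=20)
admitted_tcp = choke_admit(q, "tcp", min_th=5, capacity=20)
```

With the queue entirely occupied by one flow, an arrival from that flow always matches the random victim and both are dropped, while an arrival from a different flow never matches and is admitted, which is exactly the differential pressure CHOKe applies to unresponsive flows.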

Caching and Concurrency Control in a Mobile Client/Server Computing Environment (이동 클라이언트/서버 컴퓨팅환경에서의 캐싱 및 동시성 제어)

  • Lee, Sang-Geun;Hwang, Jong-Seon;Lee, Won-Gyu;Yu, Heon-Chang
    • Journal of KIISE:Software and Applications
    • /
    • v.26 no.8
    • /
    • pp.974-987
    • /
    • 1999
  • In a mobile computing environment, caching of frequently accessed data has been shown to be a useful technique for reducing contention on the narrow bandwidth of wireless channels. However, traditional client/server strategies for supporting transactional cache consistency, which require extensive communication between a client and a server, are not appropriate in a mobile client/server computing environment. In this paper, we propose a new protocol, called OCC-UTS (Optimistic Concurrency Control with Update TimeStamp), to support transactional cache consistency in a mobile client/server computing environment by utilizing broadcast-based solutions to the cache invalidation problem. The consistency check on accessed data and the commitment protocol are implemented in a truly distributed fashion as an integral part of the cache invalidation process, with most of the burden of consistency checking placed on the mobile clients. Experiments based on an analytical model substantiate the basic idea and study the performance characteristics. Experimental results show that the OCC-UTS protocol achieves higher transaction throughput than competing protocols, and that the more frequently a mobile client accesses data items, the more efficient OCC-UTS with local caching becomes. With respect to disconnection, tolerance is improved by enlarging the invalidation broadcast window.
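The invalidation broadcast window mentioned above can be sketched as follows: the server periodically broadcasts the items updated within the last `window` time units; a client that was disconnected longer than the window must discard its whole cache, otherwise it drops only the invalidated items. Names and the report format are illustrative, not the exact OCC-UTS message layout:

```python
# Broadcast-based cache invalidation with a bounded history window.
# report: list of (item, update_time) covering the last `window` time units.

def apply_invalidation(cache, last_heard, now, window, report):
    """cache: {item: value}. Returns the (possibly purged) cache."""
    if now - last_heard > window:
        return {}                  # disconnected too long: drop everything
    return {
        item: value
        for item, value in cache.items()
        if not any(r_item == item and t > last_heard for r_item, t in report)
    }

cache = {"a": 1, "b": 2, "c": 3}
report = [("a", 95), ("x", 97)]    # items updated recently
kept = apply_invalidation(cache, last_heard=90, now=100, window=30, report=report)
stale = apply_invalidation(cache, last_heard=60, now=100, window=30, report=report)
```

This also shows the disconnection trade-off from the abstract: a larger window lets longer-disconnected clients keep their caches, at the cost of longer broadcast reports.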