• Title/Summary/Keyword: Error Performance


A Method of Reproducing the CCT of Natural Light using the Minimum Spectral Power Distribution for each Light Source of LED Lighting (LED 조명의 광원별 최소 분광분포를 사용하여 자연광 색온도를 재현하는 방법)

  • Yang-Soo Kim;Seung-Taek Oh;Jae-Hyun Lim
    • Journal of Internet Computing and Services / v.24 no.2 / pp.19-26 / 2023
  • Humans have adapted and evolved under natural light. In modern times, however, as people spend more of their day indoors, disturbances of the biological rhythm have become a problem. To solve this problem, research is being conducted on lighting that reproduces the correlated color temperature (CCT) of natural light as it varies from sunrise to sunset. To reproduce the CCT of natural light, a luminaire is built from multiple LED light sources with different CCTs; a control index DB is then constructed by measuring and collecting the optical characteristics of the input-current combinations for each light source over hundreds to thousands of steps, and the lighting is controlled through an optical-characteristic matching method using this DB. The problem with this control method is that the finer the steps of the input-current combinations, the greater the time and economic cost. In this paper, an LED lighting control method that applies interpolation and combination calculations to a minimum set of spectral power distribution (SPD) measurements for each light source is proposed to reproduce the CCT of natural light. First, five minimum SPD measurements per channel were collected for an LED luminaire composed of light-source channels with different CCTs and implementing a 256-step input-current control function for each channel. Interpolation was performed on this minimum SPD information to generate 256-step SPDs for each channel, and the SPDs for all control combinations of the luminaire were generated through combination calculation of the per-channel SPDs. Illuminance and CCT were calculated from the generated SPDs, a control index DB was constructed, and the CCT of natural light was reproduced through a matching technique. In the performance evaluation, the CCT of natural light was reproduced within an average error rate of 0.18% while meeting the recommended indoor illuminance standard.
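The core of the proposed method is cheap to prototype: measure a handful of SPDs per channel, interpolate them to all 256 current steps, sum the channel SPDs for a control combination, and convert the mixed SPD to a CCT. The sketch below illustrates this pipeline under loud assumptions: the SPD data are random stand-ins, the color-matching functions are crude Gaussian approximations rather than the tabulated CIE 1931 curves, and the CCT comes from McCamy's approximation.

```python
import numpy as np

# Wavelength grid (nm) and crude Gaussian stand-ins for the CIE 1931
# color-matching functions -- placeholders, not the real tabulated CMFs.
wl = np.arange(380, 781, 5)
def gauss(mu, sigma): return np.exp(-0.5 * ((wl - mu) / sigma) ** 2)
xbar = 1.056 * gauss(599, 38) + 0.362 * gauss(442, 16)
ybar = 1.014 * gauss(556, 47)
zbar = 1.839 * gauss(446, 21)

# Five measured SPDs per channel at sparse current steps (hypothetical data).
measured_steps = np.array([0, 64, 128, 192, 255])
rng = np.random.default_rng(0)
spd_measured = np.abs(rng.normal(1.0, 0.1, (5, 5, wl.size)))  # (channel, step, wl)

# Interpolate each channel's SPD to all 256 input-current steps.
steps = np.arange(256)
spd_all = np.empty((5, 256, wl.size))
for ch in range(5):
    for k in range(wl.size):
        spd_all[ch, :, k] = np.interp(steps, measured_steps, spd_measured[ch, :, k])

def cct_from_spd(spd):
    """CCT via McCamy's approximation from CIE 1931 chromaticity."""
    X, Y, Z = (spd * xbar).sum(), (spd * ybar).sum(), (spd * zbar).sum()
    x, y = X / (X + Y + Z), Y / (X + Y + Z)
    n = (x - 0.3320) / (0.1858 - y)
    return 449 * n**3 + 3525 * n**2 + 6823.3 * n + 5520.33

# Combination calculation: sum channel SPDs for one control combination.
combo = (10, 200, 45, 128, 0)          # one input-current step per channel
spd_mix = sum(spd_all[ch, s] for ch, s in enumerate(combo))
print(f"CCT of this combination: {cct_from_spd(spd_mix):.0f} K")
```

Building the control index DB then amounts to evaluating `cct_from_spd` (plus illuminance) over the generated combinations and storing the results for the matching step.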

K-DEV: A Borehole Deviation Logging Probe Applicable to Steel-cased Holes (철재 케이싱이 설치된 시추공에서도 적용가능한 공곡검층기 K-DEV)

  • Song, Yoonho;Jo, Yeonguk;Kim, Seungdo;Lee, Tae Jong;Kim, Myungsun;Park, In-Hwa;Lee, Heuisoon
    • Geophysics and Geophysical Exploration / v.25 no.4 / pp.167-176 / 2022
  • We designed a borehole deviation survey tool applicable to steel-cased holes, K-DEV, and developed a prototype for a depth of 500 m, aiming to develop our own equipment for securing deep subsurface characterization technologies. K-DEV is equipped with sensors that provide digital output with verified high performance, and it is compatible with the logging winch systems used in Korea. The K-DEV prototype has a nonmagnetic stainless-steel housing with an outer diameter of 48.3 mm, which was tested in the laboratory for water resistance up to 20 MPa and for durability by running it into a 1-km-deep borehole. We confirmed the operational stability and data repeatability of the prototype by continuously logging up and down to a depth of 600 m. A high-precision micro-electro-mechanical system (MEMS) gyroscope was used as the gyro sensor of the K-DEV prototype, which is crucial for azimuth determination in cased holes. Additionally, we devised an accurate trajectory survey algorithm employing unscented Kalman filtering and data fusion for optimization. A borehole test with K-DEV and a commercial logging tool produced sufficiently similar results. Furthermore, the accumulation of error due to drift of the MEMS gyro over time was successfully overcome by compensating with stationary measurements taken in the same attitude at the wellhead before and after logging, as demonstrated by results nearly identical to those in the open hole. We believe these test applications confirmed the soundness of the K-DEV development methodology as well as the operational stability and data reliability of the prototype.
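The drift-compensation step admits a compact illustration. Assuming, as a simplification, that the MEMS gyro drift is linear in time, the two stationary wellhead measurements taken in the same attitude before and after the run bracket the accumulated drift, which can then be subtracted from every sample. The data below are synthetic; the paper's actual processing (unscented Kalman filtering and data fusion) is more elaborate.

```python
import numpy as np

# Hypothetical azimuth log (degrees) with timestamps (s); the probe is held
# stationary in the same attitude at the wellhead before and after the run.
t = np.linspace(0, 3600, 721)                     # one-hour logging run
true_az = 37.0                                    # wellhead attitude, unknown bias-free value
drift_rate = 0.004                                # deg/s, unknown in practice
az = true_az + drift_rate * t + np.random.default_rng(1).normal(0, 0.05, t.size)

# Stationary readings bracketing the run; their difference is pure drift.
az_pre, t_pre = az[0], t[0]
az_post, t_post = az[-1], t[-1]
est_rate = (az_post - az_pre) / (t_post - t_pre)  # assume drift is linear in time

# Remove the linearly interpolated drift from every sample.
az_corrected = az - est_rate * (t - t_pre)
print(f"estimated drift rate: {est_rate:.4f} deg/s")
print(f"residual bias after correction: {az_corrected.mean() - true_az:+.3f} deg")
```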

Estimation for Ground Air Temperature Using GEO-KOMPSAT-2A and Deep Neural Network (심층신경망과 천리안위성 2A호를 활용한 지상기온 추정에 관한 연구)

  • Taeyoon Eom;Kwangnyun Kim;Yonghan Jo;Keunyong Song;Yunjeong Lee;Yun Gon Lee
    • Korean Journal of Remote Sensing / v.39 no.2 / pp.207-221 / 2023
  • This study suggests deep neural network models for estimating air temperature from Level 1B (L1B) datasets of GEO-KOMPSAT-2A (GK-2A). The temperature at 1.5 m above the ground affects not only daily life but also weather warnings such as those for cold and heat waves. Many studies have estimated air temperature from the land surface temperature (LST) retrieved from satellites, because air temperature has a strong relationship with LST. However, the LST algorithm, a Level 2 output of GK-2A, works only on clear-sky pixels. To overcome cloud effects, we apply a deep neural network (DNN) model that estimates air temperature from L1B data, which are radiometrically and geometrically calibrated from the raw satellite data, and compare it with a linear regression model between LST and air temperature. The root mean square error (RMSE) of the air temperature of the model outputs is used to evaluate the models. The in-situ air temperature data from 95 stations numbered 2,496,634, and the ratios of datasets paired with LST and with L1B were 42.1% and 98.4%, respectively. Data from 2020 and 2021 were used for training, and data from 2022 for validation. The DNN model is designed with an input layer taking 16 channels and four hidden fully connected layers to estimate air temperature. Using the 16 bands of L1B, the DNN achieved an RMSE of 2.22℃, better than the baseline model's RMSE of 3.55℃ under clear-sky conditions, and the total RMSE including overcast samples was 3.33℃. This suggests that the DNN is able to overcome cloud effects. However, it showed different characteristics in the seasonal and hourly analyses, and solar information needs to be appended as an input to make a general DNN model, because the summer and winter seasons showed low coefficients of determination with high standard deviations.
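The abstract specifies the DNN only as 16 input channels and four hidden fully connected layers, so the sketch below fills in the rest (layer widths, activation, loss) with our own assumptions; it is a shape-checking skeleton, not the authors' model.

```python
import torch
import torch.nn as nn

class AirTempDNN(nn.Module):
    """16 L1B channels in, four hidden fully connected layers, one
    air-temperature output. Hidden width and ReLU are our assumptions."""
    def __init__(self, n_channels: int = 16, hidden: int = 128):
        super().__init__()
        layers, width = [], n_channels
        for _ in range(4):                      # four hidden FC layers
            layers += [nn.Linear(width, hidden), nn.ReLU()]
            width = hidden
        layers.append(nn.Linear(width, 1))      # regress 1.5 m air temperature
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)

model = AirTempDNN()
x = torch.randn(32, 16)                          # batch of 16-channel pixels
loss = nn.MSELoss()(model(x).squeeze(-1), torch.randn(32))
loss.backward()                                  # RMSE = sqrt(MSE) at eval time
print(f"dummy training loss: {loss.item():.3f}")
```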

Verification of Multi-point Displacement Response Measurement Algorithm Using Image Processing Technique (영상처리기법을 이용한 다중 변위응답 측정 알고리즘의 검증)

  • Kim, Sung-Wan;Kim, Nam-Sik
    • KSCE Journal of Civil and Environmental Engineering Research / v.30 no.3A / pp.297-307 / 2010
  • Recently, maintenance engineering and technology for civil and building structures have begun to draw much attention, and the number of structures that need to be evaluated for structural safety due to deterioration and performance degradation is rapidly increasing. When stiffness decreases because of structural deterioration and member cracks, the dynamic characteristics of a structure change. It is therefore important to correctly evaluate the damaged areas and the extent of damage by analyzing the dynamic characteristics of the actual behavior of a structure. In general, the typical measurement instruments used for structural monitoring are dynamic instruments. With existing dynamic instruments, it is not easy to obtain reliable data when the cable connecting the sensors to the device is long, and the one-to-one connection between each sensor and the instrument is uneconomical. Therefore, a method for measuring vibration from a long range without attaching sensors is required. Representative non-contact methods for measuring the vibration of structures are the laser Doppler effect, GPS-based methods, and image processing techniques. The laser Doppler method shows relatively high accuracy but is uneconomical, while the GPS method requires expensive equipment, carries its own signal error, and has a limited sampling rate. In contrast, the image-signal method is simple and economical, and is suitable for obtaining the vibration and dynamic characteristics of inaccessible structures. Recently, many researchers have used camera image signals instead of sensors. However, the existing method, which records a single target point attached to a structure and then measures vibration using image processing, is relatively limited in its measurement targets. Therefore, this study conducted a shaking-table test and a field load test to verify the validity of a method that can measure multi-point displacement responses of structures using an image processing technique.
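A common way to realize multi-point displacement measurement from video is normalized cross-correlation template matching on several targets at once; the sketch below illustrates that idea with OpenCV. The video file name, target coordinates, and pixel-to-millimetre scale are all hypothetical, and the paper's own algorithm may differ in detail.

```python
import cv2
import numpy as np

# Track several target templates across video frames with normalized
# cross-correlation; each target's motion gives one displacement response.
cap = cv2.VideoCapture("shaking_table_test.avi")   # hypothetical file name
ok, first = cap.read()
gray0 = cv2.cvtColor(first, cv2.COLOR_BGR2GRAY)

# Initial pixel locations (x, y) of the targets on the structure (assumed).
targets = [(120, 80), (300, 80), (480, 80)]
half = 16
templates = [gray0[y - half:y + half, x - half:x + half] for x, y in targets]

displacements = []          # per-frame (dx, dy) of each target, in pixels
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    frame_disp = []
    for (x0, y0), tpl in zip(targets, templates):
        res = cv2.matchTemplate(gray, tpl, cv2.TM_CCOEFF_NORMED)
        _, _, _, (bx, by) = cv2.minMaxLoc(res)     # best-match top-left corner
        frame_disp.append((bx + half - x0, by + half - y0))
    displacements.append(frame_disp)
cap.release()

# Convert pixels to millimetres with a scale factor calibrated from a known
# dimension in the scene (assumed value here).
mm_per_px = 0.8
disp_mm = np.asarray(displacements, dtype=float) * mm_per_px
```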

Evaluation of the Usefulness of MapPHAN for the Verification of Volumetric Modulated Arc Therapy Planning (용적세기조절회전치료 치료계획 확인에 사용되는 MapPHAN의 유용성 평가)

  • Woo, Heon;Park, Jang Pil;Min, Jae Soon;Lee, Jae Hee;Yoo, Suk Hyun
    • The Journal of Korean Society for Radiation Therapy / v.25 no.2 / pp.115-121 / 2013
  • Purpose: When the latest linear accelerator and new measurement equipment are introduced to an institution, analyzing the process of verifying their usefulness and the problems arising while preparing them for clinical application should be helpful to institutions that introduce this equipment in the future. Materials and Methods: All measurements used a TrueBEAM STX (Varian, USA), and the dose distributions for each energy and irradiation condition were calculated using a computerized treatment planning system (Eclipse ver 10.0.39, Varian, USA). The measurement performance of MapCHECK 2 and its sources of error were analyzed against the calculations. To verify the performance of MapCHECK 2, each energy (6X, 6X-FFF, 10X, 10X-FFF, 15X) was measured at a field size of $10{\times}10$ cm in the gantry $0^{\circ}$ and $180^{\circ}$ directions. To confirm how the CT values of the IGRT couch affect the measurements, CT number pairs of -800 (carbon) & -950 (air inside the couch) and -100 & -950 were assigned, and measurements were made with the 6X-FFF and 15X energies at a field size of $10{\times}10$ in the gantry $180^{\circ}$, $135^{\circ}$, and $275^{\circ}$ directions; the HU values allocated to MapPHAN were compared using the treatment planning computer. To examine the measurement error caused by the sharp edges of MapPHAN, its gantry-direction dependence was measured in three ways: first, with the phantom set up vertically, at gantry $90^{\circ}$ and $270^{\circ}$ with 6X-FFF and 15X, respectively; second, with the phantom set up horizontally, at a field size of $10{\times}10$ in the gantry $90^{\circ}$, $45^{\circ}$, $315^{\circ}$, and $270^{\circ}$ directions with 6X-FFF and 15X, respectively; and third, with an open arc without intensity modulation. Results: The basic performance of MapCHECK 2, the couch attenuation measurements, the HU values assigned to MapPHAN, and the calculation accuracy for the angled edges of MapPHAN were all within valid ranges and did not affect the measurement error. Of the three gantry-direction dependence tests, the first, with the phantom vertical at gantry $270^{\circ}$ (relative $0^{\circ}$) and $90^{\circ}$ (relative $180^{\circ}$), gave -1.51, 0.83% for 6X-FFF and -0.63, -0.22% for 15X, showing no effect in the AP/PA direction. With the phantom set horizontally at gantry $90^{\circ}$ and $270^{\circ}$, the differences were 4.37, 2.84% for 6X-FFF and -9.63, -13.32% for 15X; the MapPHAN values measured in the lateral direction were thus not within the valid range, as confirmed by gamma values exceeding the 3% pass criterion. For the open arc with 6X-FFF and 15X at a field size of $10{\times}10$ cm over a $360^{\circ}$ rotation, the dose distribution showed a pass rate of nearly 90%. Conclusion: Based on the above results, because of the gantry-direction dependence of MapPHAN for laterally directed beams, it is suitable for measuring relative dose distributions by gamma values but cannot be considered capable of accurate absolute dose measurement. To confirm treatment plans more accurately and to reduce the tolerance for VMAT deliveries with lateral rotational beams, measuring accurate absolute isodoses using MapCHECK 2 in combination with the IMF (Isocentric Mounting Fixture) will minimize the impact of the angular dependence.
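The gamma pass rates quoted above combine a dose-difference criterion with a distance-to-agreement (DTA) criterion. For readers unfamiliar with the metric, here is a minimal brute-force 2D gamma computation on synthetic dose grids; real QA software uses the same definition with interpolation and much faster search.

```python
import numpy as np

def gamma_pass_rate(ref, meas, spacing_mm=1.0, dd=0.03, dta_mm=3.0):
    """Brute-force global 2D gamma (dd = dose-difference criterion,
    dta = distance to agreement). ref/meas: dose grids on the same spacing."""
    search = int(np.ceil(dta_mm / spacing_mm))    # search radius in pixels
    dmax = ref.max()
    ny, nx = ref.shape
    gamma = np.full_like(ref, np.inf, dtype=float)
    for j in range(ny):
        for i in range(nx):
            for dj in range(-search, search + 1):
                for di in range(-search, search + 1):
                    jj, ii = j + dj, i + di
                    if not (0 <= jj < ny and 0 <= ii < nx):
                        continue
                    dist2 = (dj**2 + di**2) * spacing_mm**2
                    dose2 = ((meas[jj, ii] - ref[j, i]) / (dd * dmax)) ** 2
                    g = np.sqrt(dist2 / dta_mm**2 + dose2)
                    gamma[j, i] = min(gamma[j, i], g)
    return 100.0 * (gamma <= 1.0).mean()

# Synthetic example: a Gaussian "field" and a slightly shifted measurement.
y, x = np.mgrid[0:40, 0:40]
ref = np.exp(-((x - 20)**2 + (y - 20)**2) / 120.0)
meas = np.exp(-((x - 21)**2 + (y - 20)**2) / 120.0)
print(f"gamma pass rate (3%/3 mm): {gamma_pass_rate(ref, meas):.1f}%")
```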


A Comparative Analysis of Social Commerce and Open Market Using User Reviews in Korean Mobile Commerce (사용자 리뷰를 통한 소셜커머스와 오픈마켓의 이용경험 비교분석)

  • Chae, Seung Hoon;Lim, Jay Ick;Kang, Juyoung
    • Journal of Intelligence and Information Systems / v.21 no.4 / pp.53-77 / 2015
  • Mobile commerce provides a convenient shopping experience in which users can buy products without the constraints of time and space. Mobile commerce has already set off a mega trend in Korea, with a market size estimated at approximately 15 trillion won (KRW) for 2015 thus far. In the Korean market, social commerce and the open market are the key components, and social commerce overwhelms the open market in terms of the number of users. From the industry's point of view, quick market entry and content curation are considered the major success factors behind the rapid growth of social commerce in the market. However, empirical research and analysis by academics to explain the success of social commerce is still insufficient. Going forward, social commerce and the open market in Korean mobile commerce are expected to compete intensively, so it is important to conduct an empirical analysis of the differences in user experience between them. This paper is an exploratory study presenting a comparative analysis of social commerce and the open market regarding user experience, based on mobile users' reviews. First, this study collected approximately 10,000 user reviews of social commerce and open market applications listed on Google Play. The collected reviews were classified into topics, such as perceived usefulness and perceived ease of use, through LDA topic modeling. Then, sentiment analysis and co-occurrence analysis were conducted on the topics of perceived usefulness and perceived ease of use. The results demonstrated that social commerce users have a more positive experience in terms of service usefulness and convenience than open market users in the mobile commerce market. Social commerce has provided positive user experiences in service areas such as 'delivery,' 'coupon,' and 'discount,' while the open market has faced user complaints about technical problems and inconveniences such as 'login error,' 'view details,' and 'stoppage.' This result shows that social commerce performs well in user service experience, owing to its aggressive marketing campaigns and investments in building logistics infrastructure. The open market, however, still has mobile optimization problems, since it has not resolved the user complaints and inconveniences stemming from technical problems. This study presents an exploratory research method for analyzing user experience through an empirical approach to user reviews. In contrast to previous studies, which conducted surveys to analyze user experience, this study uses empirical analysis of user reviews, reflecting users' vivid and actual experiences. Specifically, by using an LDA topic model and TAM, this study presents a methodology for analyzing user reviews that is effective because it divides the reviews into service areas and technical areas from a new perspective. The methodology not only proves the differences in user experience between social commerce and the open market, but also provides a deep understanding of user experience in Korean mobile commerce.
In addition, the results have important implications for social commerce and the open market by proving that user insights can be utilized in establishing competitive and groundbreaking strategies in the market. The limitations and directions for follow-up studies are as follows. A follow-up study will need to design a more elaborate text-analysis technique; this study could not fully clean the user reviews, which inherently contain typos and mistakes. This study has shown that user reviews are an invaluable source for analyzing user experience, and its methodology can be expected to further expand comparative research on services using user reviews. Even at this moment, users around the world are posting reviews of their service experiences with mobile game, commerce, and messenger applications.
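LDA topic modeling of app reviews, as used above, is straightforward to reproduce at toy scale. The sketch below runs scikit-learn's LDA on four invented review snippets; the topic count and corpus are placeholders, not the study's actual data or settings.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Toy review corpus standing in for the ~10,000 Google Play reviews.
reviews = [
    "fast delivery and great coupon discounts",
    "login error again, app stoppage when I view details",
    "coupon applied, delivery arrived next day",
    "cannot view details, login error after update",
]

# Bag-of-words counts, then a two-topic LDA model; two topics is illustrative
# of the paper's split into service-area vs. technical-area topic groups.
vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(reviews)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

terms = vec.get_feature_names_out()
for k, comp in enumerate(lda.components_):
    top = [terms[i] for i in comp.argsort()[-4:][::-1]]
    print(f"topic {k}: {', '.join(top)}")

# Per-review topic mixture, usable to route reviews to sentiment analysis.
print(lda.transform(X).round(2))
```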

A Prospective Randomized Comparative Clinical Trial Comparing the Efficacy between Ondansetron and Metoclopramide for Prevention of Nausea and Vomiting in Patients Undergoing Fractionated Radiotherapy to the Abdominal Region (복부 방사선치료를 받는 환자에서 발생하는 오심 및 구토에 대한 온단세트론과 메토클로프라미드의 효과 : 제 3상 전향적 무작위 비교임상시험)

  • Park Hee Chul;Suh Chang Ok;Seong Jinsil;Cho Jae Ho;Lim John Jihoon;Park Won;Song Jae Seok;Kim Gwi Eon
    • Radiation Oncology Journal / v.19 no.2 / pp.127-135 / 2001
  • Purpose: This study is a prospective randomized clinical trial comparing the efficacy and complications of anti-emetic drugs for the prevention of nausea and vomiting after radiotherapy, which has moderate emetogenic potential. The aim was to investigate whether the anti-emetic efficacy of ondansetron $(Zofran^{\circledR})$ 8 mg bid (Group O) is better than that of metoclopramide 5 mg tid (Group M) in patients undergoing fractionated radiotherapy to the abdominal region. Materials and Methods: Study entry was restricted to patients who met the following eligibility criteria: histologically confirmed malignant disease; no distant metastasis; performance status not worse than ECOG grade 2; and no previous chemotherapy or radiotherapy. Between March 1997 and February 1998, 60 patients were enrolled in this study. All patients signed a written statement of informed consent prior to enrollment. Blinding was maintained by administering an identical number of tablets, including one dose of matching placebo for Group O. The extent of nausea, appetite loss, and the number of emetic episodes were recorded every day using a diary card, and the mean scores of nausea and appetite loss and the mean number of emetic episodes were computed at weekly intervals. Results: A prescription error occurred in one patient, diary cards were not returned by three patients owing to premature refusal of treatment, and the card of one patient was excluded from the analysis because she had a history of treatment for neurosis. As a result, the analysis comprised 55 patients. Patient and radiotherapy characteristics were similar, except that the mean age was $52.9{\pm}11.2$ in Group M and $46.5{\pm}9.5$ in Group O; this age difference was statistically significant. The mean weekly scores of nausea, appetite loss, and emetic episodes were higher in Group M than in Group O, and in Group M the symptoms were most significant in the 5th week. In a panel data analysis using a mixed procedure, treatment group was the only significant factor explaining the differences in weekly scores for all three symptoms. Ondansetron $(Zofran^{\circledR})$ 8 mg bid and metoclopramide 5 mg tid were well tolerated without significant side effects, and there were no clinically important changes in vital signs or clinical laboratory parameters with either drug. Conclusion: Given that younger patients have higher emetogenic potential, the age difference between the two treatment groups may have lowered the statistical power of the analysis. Nevertheless, there were significant differences favoring the ondansetron group with respect to the severity of nausea, vomiting, and loss of appetite. We conclude that ondansetron is a more effective anti-emetic agent for controlling radiotherapy-induced nausea, vomiting, and loss of appetite, without significant toxicity, compared with the commonly used drug metoclopramide. However, some patients suffered emesis despite the administration of ondansetron; possible strategies to improve the prevention and treatment of radiotherapy-induced emesis must be studied further.


An Iterative, Interactive and Unified Seismic Velocity Analysis (반복적 대화식 통합 탄성파 속도분석)

  • Suh Sayng-Yong;Chung Bu-Heung;Jang Seong-Hyung
    • Geophysics and Geophysical Exploration / v.2 no.1 / pp.26-32 / 1999
  • Among the various seismic data processing sequences, velocity analysis is the most time-consuming and man-hour-intensive processing step. For production seismic data processing, a good velocity analysis tool as well as a high-performance computer is required; the tool must give fast and accurate velocity analysis. There are two different approaches to velocity analysis: batch and interactive. In batch processing, a velocity plot is made at every analysis point. Generally, the plot consists of a semblance contour, a super gather, and a stack panel, and the interpreter chooses the velocity function by analyzing the plot. This technique is highly dependent on the interpreter's skill and requires human effort. As high-speed graphic workstations have become more popular, various interactive velocity analysis programs have been developed. Although these programs enable faster picking of velocity nodes using a mouse, their main improvement is simply the replacement of the paper plot by the graphic screen. The velocity spectrum is highly sensitive to the presence of noise, especially the coherent noise often found in the shallow region of marine seismic data. For accurate velocity analysis, this noise must be removed before the spectrum is computed. Also, the velocity analysis must be carried out by carefully choosing the location of the analysis point and accurately computing the spectrum. The analyzed velocity function must be verified by mute and stack, and the sequence usually must be repeated. Therefore, an iterative, interactive, and unified velocity analysis tool is highly desirable. Such an interactive velocity analysis program, xva (X-Window based Velocity Analysis), was developed. The program handles all processes required in velocity analysis, such as composing the super gather, computing the velocity spectrum, NMO correction, mute, and stack. Most parameter changes yield the final stack via a few mouse clicks, thereby enabling iterative and interactive processing. A simple trace indexing scheme is introduced, and a program to make the index of the Geobit seismic disk file was developed; the index is used to reference the original input, i.e., the CDP sort, directly. A transformation technique for the mute function between the T-X domain and the NMOC domain is introduced and adopted in the program. The result of the transform is similar to the remove-NMO technique in suppressing shallow noise such as the direct wave and the refracted wave, but it has two improvements: no interpolation error and very fast computing time. With this technique, the mute times can easily be designed in the NMOC domain and applied to the super gather in the T-X domain, thereby producing a more accurate velocity spectrum interactively. The xva program consists of 28 files, 12,029 lines, 34,990 words, and 304,073 characters. It references the Geobit utility libraries and can be installed in a Geobit-preinstalled environment. The program runs in the X-Window/Motif environment, and its menu is designed according to the Motif style guide. A brief usage of the program is discussed. The program allows fast and accurate seismic velocity analysis, which is necessary for computing the AVO (Amplitude Versus Offset) based DHI (Direct Hydrocarbon Indicator) and for making high-quality seismic sections.
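The velocity spectrum at the heart of this workflow is typically a semblance panel: NMO-correct the CDP gather for each trial velocity and measure how coherently the traces stack. The following sketch computes such a panel on a synthetic single-reflector gather; it is a textbook semblance implementation, not code from xva.

```python
import numpy as np

def semblance(gather, offsets, dt, velocities, window=11):
    """Velocity spectrum by semblance over NMO-corrected CDP gathers.
    gather: (n_samples, n_traces); offsets in m; dt in s; velocities in m/s."""
    ns, ntr = gather.shape
    t0 = np.arange(ns) * dt                      # zero-offset two-way times
    spec = np.zeros((ns, len(velocities)))
    kern = np.ones(window) / window              # short time-smoothing window
    for iv, v in enumerate(velocities):
        # NMO correction: t(x) = sqrt(t0^2 + x^2 / v^2)
        nmo = np.empty_like(gather)
        for j, x in enumerate(offsets):
            tx = np.sqrt(t0**2 + (x / v) ** 2)
            nmo[:, j] = np.interp(tx, t0, gather[:, j], left=0.0, right=0.0)
        num = np.sum(nmo, axis=1) ** 2           # energy of the stacked trace
        den = ntr * np.sum(nmo**2, axis=1) + 1e-12
        spec[:, iv] = np.convolve(num, kern, "same") / np.convolve(den, kern, "same")
    return spec

# Synthetic gather: one reflector at t0 = 0.8 s with v = 2000 m/s.
dt, offsets = 0.004, np.arange(12) * 100.0
gather = np.zeros((500, 12))
for j, x in enumerate(offsets):
    idx = int(round(np.sqrt(0.8**2 + (x / 2000.0) ** 2) / dt))
    gather[idx, j] = 1.0
vels = np.arange(1500, 3000, 50.0)
spec = semblance(gather, offsets, dt, vels)
print("best velocity:", vels[spec[int(round(0.8 / dt))].argmax()], "m/s")
```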


A New Exploratory Research on Franchisor's Provision of Exclusive Territories (가맹본부의 배타적 영업지역보호에 대한 탐색적 연구)

  • Lim, Young-Kyun;Lee, Su-Dong;Kim, Ju-Young
    • Journal of Distribution Research / v.17 no.1 / pp.37-63 / 2012
  • In the franchise business, exclusive sales territory (sometimes EST below) protection is a very important issue from economic, social, and political points of view. It affects the growth and survival of both franchisor and franchisee and often raises issues of social and political conflict. When franchisees are not familiar with the related laws and regulations, the franchisor has a strong chance of exploiting this. Exclusive sales territory protection by the manufacturer and distributors (wholesalers or retailers) means a sales-area restriction under which only certain distributors have the right to sell products or services. A distributor who has been granted exclusive sales territories can protect its own territory but may be prohibited from entering other regions. Even though exclusive sales territory is a critical problem in the franchise business, there is little rigorous research, based on empirical data, about its reasons, results, evaluation, and future direction. This paper tries to address the problem not only in terms of logical and nomological validity, but also through empirical validation. In pursuing an empirical analysis, we take into account the difficulties of real data collection and of statistical analysis techniques. We use a set of disclosure document data collected by the Korea Fair Trade Commission, instead of the conventional survey method, which is usually criticized for its measurement error. Existing theories about exclusive sales territory can be summarized into two groups, as shown in the table below. The first concerns the effectiveness of exclusive sales territory from both the franchisor's and the franchisee's points of view. In fact, the outcome of exclusive sales territory can be positive for franchisors but negative for franchisees, and positive in terms of sales but negative in terms of profit; therefore, variables and viewpoints should be set properly. The other group concerns the motives or reasons why exclusive sales territory is protected. The reasons can be classified into four groups: industry characteristics, franchise system characteristics, the capability to maintain an exclusive sales territory, and strategic decisions. Within these four groups of reasons there are more specific variables and theories, as below. Based on these theories, we develop nine hypotheses, which are briefly shown in the last table below with the results. To validate the hypotheses, data were collected from the government (FTC) homepage, which is an open source. The sample consists of 1,896 franchisors and contains about three years of operating data, from 2006 to 2008. Within the sample, 627 have an exclusive sales territory protection policy, and those with such a policy are not evenly distributed over the 19 representative industries. Additional data were also collected from other government agency homepages, such as Statistics Korea, and we combined data from various secondary sources to create meaningful variables, as shown in the table below. All variables are dichotomized by mean or median split if they are not inherently dichotomous by definition, since each hypothesis is composed of multiple variables and there is no solid statistical technique that incorporates all these conditions to test the hypotheses. This paper uses a simple chi-square test because the hypotheses and theories are built upon quite specific conditions, such as industry type, economic condition, company history, and various strategic purposes.
It is almost impossible to find samples that satisfy all these conditions, and they cannot be manipulated in experimental settings. More advanced statistical techniques work well on clean data without exogenous variables, but not on real, complex data. The chi-square test is applied by grouping the samples into four cells with two criteria: whether they protect an exclusive sales territory or not, and whether they satisfy the conditions of each hypothesis. A hypothesis is supported when the proportion of sample franchisors that satisfy the conditions and protect the exclusive sales territory significantly exceeds the proportion that satisfy the conditions but do not protect it. In fact, the chi-square test is equivalent to a Poisson regression, which allows more flexible application. As a result, only three hypotheses are accepted. When the attitude toward risk is high, so that the royalty fee is determined according to sales performance, EST protection yields poor results, as expected. When the franchisor protects the EST in order to recruit franchisees easily, EST protection yields better results. Also, when EST protection is intended to improve the efficiency of the franchise system as a whole, it shows better performance: high efficiency is achieved because the EST prohibits free riding by franchisees who would exploit others' marketing efforts, and it encourages proper investment and distributes franchisees evenly across multiple regions. The other hypotheses are not supported by the significance testing. Exclusive sales territory should be protected from proper motives and administered for mutual benefit. Legal restrictions driven by a government agency like the FTC can be misused and cause misunderstandings, so more careful monitoring of real practices and more rigorous studies by both academics and practitioners are needed.
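The test itself is a standard 2x2 contingency analysis. As an illustration with invented cell counts (the totals match the sample of 1,896 franchisors and 627 EST protectors, but the split by hypothesis condition is made up), it can be run in a few lines:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical 2x2 split: rows = satisfies the hypothesis condition or not,
# columns = protects EST or not. Counts are illustrative only.
table = np.array([[260, 410],     # condition satisfied:   protect / not
                  [367, 859]])    # condition unsatisfied: protect / not

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")

# The hypothesis is supported when franchisors satisfying the condition
# protect EST significantly more often than those that do not.
prop = table[:, 0] / table.sum(axis=1)
print(f"protection rate | condition: {prop[0]:.2%} vs {prop[1]:.2%}")
```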


Scalable Collaborative Filtering Technique based on Adaptive Clustering (적응형 군집화 기반 확장 용이한 협업 필터링 기법)

  • Lee, O-Joun;Hong, Min-Sung;Lee, Won-Jin;Lee, Jae-Dong
    • Journal of Intelligence and Information Systems / v.20 no.2 / pp.73-92 / 2014
  • An adaptive clustering-based collaborative filtering technique is proposed to solve the fundamental problems of collaborative filtering, such as the cold-start problem, the scalability problem, and the data sparsity problem. Previous collaborative filtering techniques make recommendations based on the predicted preference of a user for a particular item, using a similar-item subset and a similar-user subset composed from users' preferences for items. For this reason, if the density of the user preference matrix is low, the reliability of the recommendation system decreases rapidly and the difficulty of creating the similar-item and similar-user subsets increases. In addition, as the scale of the service increases, the time needed to create these subsets increases geometrically, and the response time of the recommendation system grows. To solve these problems, this paper suggests a collaborative filtering technique that actively adapts conditions to the model and adopts concepts from context-based filtering. The technique consists of four major methodologies. First, the items and the users are clustered according to their feature vectors, and an inter-cluster preference between each item cluster and user cluster is then estimated. With this method, the run time for creating a similar-item or similar-user subset can be economized, the reliability of the recommendation system can be made higher than when only the user preference information is used, and the cold-start problem can be partially solved. Second, recommendations are made using the previously composed item and user clusters and the inter-cluster preferences between them. In this phase, a list of items is made for a user by examining the item clusters in decreasing order of the inter-cluster preference of the user's cluster, and by selecting and ranking the items according to the predicted or recorded user preference information. With this method, the creation of the recommendation model bears the highest load of the recommendation system, which minimizes the load at run time; hence the scalability problem is addressed and a large-scale recommendation system can be operated with highly reliable collaborative filtering. Third, missing user preference information is predicted using the item and user clusters, which mitigates the problem caused by the low density of the user preference matrix. Existing studies used either item-based or user-based prediction; in this paper, Hao Ji's idea, which uses both, is improved. The reliability of the recommendation service can be improved by combining the predictive values of both techniques under the conditions of the recommendation model. By predicting user preferences from the item or user clusters, the time required for prediction can be reduced, and missing preferences can be predicted at run time. Fourth, the item and user feature vectors are made to learn from subsequent user feedback, applying normalized user feedback to the vectors.
This mitigates the problems caused by adopting concepts from context-based filtering, such as building the item and user feature vectors from the user profile and item properties; those problems stem from the limitation of quantifying the qualitative features of items and users. Therefore, the elements of the user and item feature vectors are made to match one to one, and when user feedback on a particular item is obtained, it is applied to the opposing feature vector. Verification of this method was accomplished by comparing its performance with existing hybrid filtering techniques, using two measures: MAE (mean absolute error) and response time. By MAE, the technique was confirmed to improve the reliability of the recommendation system; by response time, it was found suitable for a large-scale recommendation system. This paper suggests an adaptive clustering-based collaborative filtering technique with high reliability and low time complexity, but it has some limitations: the technique focuses on reducing time complexity, so an improvement in reliability was not expected. The next topic will be to improve this technique with rule-based filtering.
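The inter-cluster preference idea in the first two methodologies can be sketched compactly: cluster users and items, average the observed ratings between each pair of clusters, and fill missing preferences from that table. The sketch below uses toy data and k-means as the clustering step; the cluster counts and feature vectors are illustrative assumptions, not the paper's configuration.

```python
import numpy as np
from sklearn.cluster import KMeans

# Toy ratings matrix: users x items, 0 = missing preference.
rng = np.random.default_rng(0)
R = rng.integers(0, 6, size=(40, 30)).astype(float)
mask = R > 0

n_uc, n_ic = 4, 3
# Cluster users/items using their (sparse) rating rows/columns as feature vectors.
u_lab = KMeans(n_clusters=n_uc, n_init=10, random_state=0).fit_predict(R)
i_lab = KMeans(n_clusters=n_ic, n_init=10, random_state=0).fit_predict(R.T)

# Inter-cluster preference: mean observed rating per (user cluster, item cluster).
pref = np.zeros((n_uc, n_ic))
for uc in range(n_uc):
    for ic in range(n_ic):
        block = R[np.ix_(u_lab == uc, i_lab == ic)]
        obs = block[block > 0]
        pref[uc, ic] = obs.mean() if obs.size else R[mask].mean()

def predict(u, i):
    """Fill a missing preference from the inter-cluster mean."""
    return pref[u_lab[u], i_lab[i]]

# Recommend: rank one user's unseen items by predicted preference.
u = 7
unseen = np.where(~mask[u])[0]
ranked = unseen[np.argsort([-predict(u, i) for i in unseen])]
print("top items for user 7:", ranked[:5])
```

Because the expensive work (clustering and the `pref` table) happens at model-build time, a run-time request reduces to two label lookups and a table read, which is the scalability argument the abstract makes.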