• Title/Summary/Keyword: Reduction of work time

Search Results: 454

Optimal supervised LSA method using selective feature dimension reduction (선택적 자질 차원 축소를 이용한 최적의 지도적 LSA 방법)

  • Kim, Jung-Ho;Kim, Myung-Kyu;Cha, Myung-Hoon;In, Joo-Ho;Chae, Soo-Hoan
    • Science of Emotion and Sensibility
    • /
    • v.13 no.1
    • /
    • pp.47-60
    • /
    • 2010
  • Most classification research has used kNN (k-Nearest Neighbor) and SVM (Support Vector Machine), which are learning-based models, and the Bayesian classifier and NNA (Neural Network Algorithm), which are statistics-based methods. However, these face space and time limitations when classifying the vast number of web pages on today's internet. Moreover, most classification studies use a uni-gram feature representation, which poorly captures the real meaning of words. Korean web page classification faces additional problems because Korean words often carry multiple meanings (polysemy). For these reasons, LSA (Latent Semantic Analysis) is proposed for classification in this environment (large data sets and word polysemy). LSA uses SVD (Singular Value Decomposition), which decomposes the original term-document matrix into three matrices and reduces their dimension. This SVD step creates a new low-dimensional semantic space for representing vectors, which makes classification efficient and allows the latent meaning of words or documents (e.g., web pages) to be analyzed. Although LSA classifies well, it has drawbacks. As SVD reduces the matrix dimensions and creates the new semantic space, it considers which dimensions represent the vectors well, but not which dimensions discriminate them well. This is why LSA does not improve classification performance as much as expected. In this paper, we propose a new LSA that selects optimal dimensions to both discriminate and represent vectors well, minimizing these drawbacks and improving performance. The proposed method shows better and more stable performance than other LSA variants in low-dimensional spaces. In addition, we derive further improvement in classification by creating and selecting features, reducing stopwords, and statistically weighting specific values.
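The SVD-based dimension reduction described in the abstract can be sketched as follows; the toy term-document matrix and the choice of k are illustrative assumptions, not data from the paper:

```python
import numpy as np

def lsa_reduce(term_doc, k):
    """Project documents into a k-dimensional latent semantic space
    via truncated SVD, the core step of standard LSA."""
    U, s, Vt = np.linalg.svd(term_doc, full_matrices=False)
    # Keep only the k largest singular values; each document column
    # becomes a k-dimensional vector in the new semantic space.
    return (np.diag(s[:k]) @ Vt[:k, :]).T

# Toy 4-term x 3-document count matrix (hypothetical).
X = np.array([[2., 0., 1.],
              [1., 1., 0.],
              [0., 3., 1.],
              [0., 1., 2.]])
docs_2d = lsa_reduce(X, 2)  # 3 documents, each now a 2-dim vector
```

Note that the paper's contribution is selecting dimensions for discrimination rather than simply keeping the top k; the sketch shows only the standard LSA baseline it improves on.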


Analysis of Greenhouse Thermal Environment by Model Simulation (시뮬레이션 모형에 의한 온실의 열환경 분석)

  • 서원명;윤용철
    • Journal of Bio-Environment Control
    • /
    • v.5 no.2
    • /
    • pp.215-235
    • /
    • 1996
  • Thermal analysis by mathematical model simulation makes it possible to reasonably predict the heating and/or cooling requirements of greenhouses located in various geographical and climatic environments. Another advantage of the model simulation technique is that it makes it possible to select an appropriate heating system, set up an energy utilization strategy, schedule seasonal crop patterns, and site new greenhouse ranges. In this study, the control pattern for greenhouse microclimate is categorized as cooling and heating. A dynamic model was adopted to simulate heating requirements and/or energy conservation effectiveness, such as energy saving by night-time thermal curtains, estimation of Heating Degree-Hours (HDH), and long-term prediction of greenhouse thermal behavior. On the other hand, the cooling effects of ventilation, shading, and a pad & fan system were partly analyzed by a static model. Experimental work with a small model greenhouse of 1.2 m × 2.4 m showed that cooling the greenhouse by spraying cold water directly on the cover surface, or by recirculating cold water through heat exchangers, would be effective for summer cooling. The mathematical model developed for greenhouse simulation is highly applicable because it reflects various climatic factors such as temperature, humidity, beam and diffuse solar radiation, and wind velocity. The model was closely verified against weather data obtained through a long-period greenhouse experiment. Most of the material relating to greenhouse heating or cooling components was obtained from the greenhouse model simulated mathematically using typical-year (1987) data of Jinju, Gyeongnam, but some of the material relating to greenhouse cooling was obtained from model experiments, including analysis of the cooling effect of water sprayed directly on the greenhouse roof surface. The results are summarized as follows: 1. 
The heating requirements of the model greenhouse were highly related to the minimum temperature set for the greenhouse. The night-time setting temperature is much more influential on heating energy requirements than the day-time setting; therefore, the night-time setting temperature should be carefully determined and controlled. 2. HDH data obtained by the conventional method are estimated from a long-term average weather temperature together with the standard base temperature (usually 18.3°C). Such data can serve only as a relative comparison criterion for heating load and are not applicable to calculating greenhouse heating requirements, because of the limited climatic factors considered and the inappropriate base temperature. Comparing the HDH data with the simulation results shows that a heating system designed from HDH data will probably overshoot the actual heating requirement. 3. The energy-saving effect of the night-time thermal curtain, as well as the estimated heating requirement, is sensitively related to weather conditions: the thermal curtain adopted for simulation showed high effectiveness, saving more than 50% of the annual heating requirement. 4. Ventilation performance during warm seasons is mainly influenced by the air exchange rate, with some variation depending on greenhouse structure, weather, and cropping conditions. For air exchange rates above 1 volume per minute, the reduction of temperature rise in both greenhouse types becomes modest with additional ventilation capacity; therefore, the desirable ventilation capacity is taken to be 1 air change per minute, the recommended rate for common greenhouses. 5. 
In a fully cropped glass-covered greenhouse, under clear weather at 50% RH and a continuous 1 air change per minute, the temperature drop in a 50% shaded greenhouse and in a pad & fan greenhouse was 2.6°C and 6.1°C, respectively. The temperature in the control greenhouse under continuous air change was 36.6°C, which was 5.3°C above ambient; as a result, the greenhouse temperature could be maintained 3°C below ambient. At 80% RH, however, it was impossible to drop the greenhouse temperature below ambient, because the temperature reduction possible with the pad & fan system was then not more than 2.4°C. 6. Over the 3 months of hot summer, assuming the greenhouse is cooled only when its temperature rises above 27°C, the relationship between ambient RH and greenhouse temperature drop (ΔT) was formulated as: ΔT = -0.077RH + 7.7. 7. Time-dependent cooling effects of each of, or combinations of, ventilation, 50% shading, and a pad & fan system of 80% efficiency were predicted over one typical summer day. When the greenhouse was cooled only by 1 air change per minute, greenhouse air temperature was 5°C above outdoor temperature; neither method alone can drop greenhouse air temperature below outdoor temperature, even when fully cropped. When both systems were operated together, greenhouse air temperature could be held about 2.0-2.3°C below ambient. 8. When cool water of 6.5-8.5°C was sprayed on the greenhouse roof surface at a flow rate of 1.3 liter/min per unit greenhouse floor area, greenhouse air temperature could be dropped to 16.5-18.0°C, about 10°C below the ambient temperature of 26.5-28.0°C at that time. 
The most important requirement for cooling greenhouse air effectively with water spray may be an ample source of cool water, such as ground water or cold water produced by a heat pump. Future work will focus not only on analyzing the feasibility of heat-pump operation but also on finding the relationships between greenhouse air temperature (Tg), spray water temperature (Tw), water flow rate (Q), and ambient temperature (To).
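As a minimal numeric sketch of two quantities discussed in this abstract, HDH accumulation against the 18.3°C base and the reported regression ΔT = -0.077RH + 7.7 (the hourly temperatures below are invented for illustration):

```python
# Invented hourly outdoor temperatures (°C) for one cool day.
temps = [10.2, 9.8, 9.5, 9.1, 8.8, 9.0, 10.5, 12.3,
         14.8, 16.9, 18.5, 19.7, 20.4, 20.1, 19.2, 17.8,
         15.9, 14.2, 13.0, 12.1, 11.5, 11.0, 10.6, 10.3]
BASE = 18.3  # conventional standard base temperature (°C)

# Heating Degree-Hours: sum of hourly deficits below the base.
hdh = sum(max(BASE - t, 0.0) for t in temps)

def cooling_drop(rh):
    """Greenhouse temperature drop dT (°C) vs. ambient RH (%),
    from the regression reported in result 6: dT = -0.077*RH + 7.7."""
    return -0.077 * rh + 7.7
```

As the abstract notes, HDH computed this way is only a relative load indicator; the regression shows evaporative cooling losing effectiveness as humidity rises.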


A Study on the Verification of an Indoor Test of a Portable Penetration Meter Using the Cone Penetration Test Method (자유낙하 콘관입시험법을 활용한 휴대용 다짐도 측정기의 실내시험을 통한 검증 연구)

  • Park, Geoun Hyun;Yang, An Seung
    • Journal of the Korean GEO-environmental Society
    • /
    • v.20 no.2
    • /
    • pp.41-48
    • /
    • 2019
  • Soil compaction is one of the most important activities in civil works, including road construction, airport construction, port construction, and backfilling of structures. Soil compaction in road construction, in particular, can be categorized into subgrade compaction and roadbed compaction, and is significant work that, when done poorly, can cause defective construction due to a lack of compaction. There are currently many types of compaction test; the plate bearing test and the unit-weight-of-soil test based on the sand cone method are commonly used to measure the degree of compaction, but many other methods are under development because it is difficult to secure economic efficiency. For this research, a portable penetration meter called the Free-Fall Penetration Test (FFPT) meter was developed and manufactured. A homogeneous sample was obtained from a construction site and the soil was classified through a sieve analysis test in order to perform grain-size analysis and a specific gravity test for an indoor test. The principle of the FFPT is that a penetration needle installed at the tip of an object dropped in free fall under gravity measures the depth of penetration into the road surface after subgrade or roadbed compaction is completed; the degree of compaction is obtained through the unit-weight-of-soil test according to the sand cone method, and the relationship between the degree of compaction and the penetration depth of the needle is verified. The maximum allowable grain size of the soil is 2.36 mm. For $A_1$ compaction, a trend line was developed from the test performed at a drop height of 10 cm, and the coefficient of determination of the trend line was $R^2=0.8677$; for $D_2$ compaction, the coefficient of determination was $R^2=0.9815$ when testing at a drop height of 20 cm. 
The free-fall test was carried out with the drop height adjusted from 10 cm to 50 cm in increments of 10 cm. This study compares and analyzes the correlation between the degree of compaction obtained from the unit-weight-of-soil test based on the sand cone method and the penetration depth of the needle obtained from the FFPT meter. A portable penetration tester is expected to make it easy to test the degree of compaction at many construction sites, and to reduce the time, equipment, and manpower that are the disadvantages of the current compaction test, ultimately contributing to accurate and simple measurement of the degree of compaction as well as greater economic feasibility.
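The trend-line fitting and R² computation described above can be sketched in plain Python; the depth/compaction pairs are hypothetical, not the paper's measurements:

```python
# Hypothetical paired measurements: penetration depth (mm) from the
# FFPT meter vs. degree of compaction (%) from the sand cone method.
depth = [12.0, 10.5, 9.2, 8.1, 7.0, 6.2]
compaction = [88.0, 90.5, 92.0, 94.1, 95.8, 97.0]

n = len(depth)
mx = sum(depth) / n
my = sum(compaction) / n
sxy = sum((x - mx) * (y - my) for x, y in zip(depth, compaction))
sxx = sum((x - mx) ** 2 for x in depth)

slope = sxy / sxx                  # least-squares trend line slope
intercept = my - slope * mx
ss_res = sum((y - (slope * x + intercept)) ** 2
             for x, y in zip(depth, compaction))
ss_tot = sum((y - my) ** 2 for y in compaction)
r_squared = 1.0 - ss_res / ss_tot  # coefficient of determination
```

A negative slope is expected here: deeper needle penetration indicates looser, less compacted soil.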

Simultaneous Multiple Transmit Focusing Method with Orthogonal Chirp Signal for Ultrasound Imaging System (초음파 영상 장치에서 직교 쳐프 신호를 이용한 동시 다중 송신집속 기법)

  • 정영관;송태경
    • Journal of Biomedical Engineering Research
    • /
    • v.23 no.1
    • /
    • pp.49-60
    • /
    • 2002
  • Receive dynamic focusing with an array transducer can provide near-optimal resolution only in the vicinity of the transmit focal depth. A customary method to increase the depth of field is to combine several beams with different focal depths, with an accompanying decrease in frame rate. In this paper, we present a simultaneous multiple transmit focusing method in which chirp signals focused at different depths are transmitted at the same time. These chirp signals are mutually orthogonal in the sense that the autocorrelation function of each signal has a narrow mainlobe width and low sidelobe levels, and the cross-correlation function of any pair of the signals has values smaller than the sidelobe levels of each autocorrelation function. This means that each chirp signal can be separated from the combined received signals and compressed into a short pulse, which is then individually focused by a separate receive beamformer. Next, the individually focused beams are combined to form a frame of the image. Theoretically, any two chirp signals defined over two non-overlapping frequency bands are mutually orthogonal. In the present work, however, a fractional overlap of adjacent frequency bands is permitted in order to design more chirp signals within a given transducer bandwidth. The elevation of the cross-correlation values due to the frequency overlap can be reduced by alternating the direction of the frequency sweep of adjacent chirp signals. We also observe that the proposed method provides better images when the low-frequency chirp is focused at a near point and the high-frequency chirp at a far point along the depth; better lateral resolution is obtained in the far field with reasonable SNR due to the SNR gain in pulse compression imaging.
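A rough numeric illustration of the orthogonality idea: two linear chirps on largely separate bands with opposite sweep directions, whose cross-correlation peak stays well below the autocorrelation peak. The sampling rate, duration, and band edges are all assumed values, not the paper's design:

```python
import numpy as np

fs = 40e6   # sampling rate (Hz), assumed
T = 10e-6   # chirp duration (s), assumed
t = np.arange(0, T, 1 / fs)

def chirp(f0, f1):
    """Linear FM chirp sweeping f0 -> f1 over duration T."""
    k = (f1 - f0) / T
    return np.cos(2 * np.pi * (f0 * t + 0.5 * k * t**2))

# Two chirps on separate bands; sweep directions alternated,
# as the paper suggests, to keep cross-correlation low.
c1 = chirp(2e6, 5e6)     # low band, up-sweep
c2 = chirp(8e6, 5.5e6)   # high band, down-sweep

auto_peak = np.max(np.abs(np.correlate(c1, c1, "full")))
cross_peak = np.max(np.abs(np.correlate(c1, c2, "full")))
```

The separability condition in the abstract corresponds to `cross_peak` staying below the sidelobe level of each autocorrelation, so each compressed pulse can be recovered from the summed echo.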

The Diagnosis of Work Connectivity between Local Government Departments -Focused on Busan Metropolitan City IT Project - (지자체 부서 간 업무연계성 진단 -부산광역시 정보화사업을 중심으로 -)

  • JI, Sang-Tae;NAM, Kwang-Woo
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • v.21 no.3
    • /
    • pp.176-188
    • /
    • 2018
  • Modern urban problems are increasingly a complex mix that cannot be solved by the power of a single department, and the necessity of establishing a cooperation system based on data communication between departments is increasing. Therefore, this study analyzed Busan Metropolitan City's IT projects from 2014 to 2018 in order to understand the utilization and sharing status of departmental data, from the viewpoint that cooperation between departments can start from the sharing of data with high common utility. In addition, based on the results of an FGI (Focus Group Interview) conducted with officials of the departments responsible for the informatization projects, we verified the results of the data status analysis. At the same time, we established the necessity of data linkage between departments through SNA (Social Network Analysis) and identified the data that should be shared first in the future. As a result, most of the information systems currently use limited data only within the department that produced the data, and most of the linked data was concentrated in the information department. Therefore, this study suggested the following solutions. First, in order to prevent overlapping investments caused by the operation of individual departments and to share information, it is necessary to build a small platform that ties departments with high connectivity to each other into small blocks. Second, a local-level process is needed to develop data standards as an extension of national standards, so that the information can be used in various fields. Third, we proposed a system that can integrate various types of information based on address and location information through the application of a cloud-based GIS platform. The results of this study are expected to contribute to building a cooperation system between departments through the expansion of information sharing with cost reduction.
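The SNA step, finding where linked data concentrates, can be sketched with a hand-rolled degree count; the department names and links below are invented for illustration:

```python
from collections import Counter

# Invented data-sharing links between departments (producer, consumer),
# in the spirit of the SNA described above.
links = [
    ("planning", "information"), ("transport", "information"),
    ("environment", "information"), ("information", "planning"),
    ("transport", "environment"),
]

# Degree centrality by hand: the department touching the most links
# is the natural anchor for a shared "small platform" block.
degree = Counter()
for src, dst in links:
    degree[src] += 1
    degree[dst] += 1

hub, hub_degree = degree.most_common(1)[0]
```

In this toy network the hub is the information department, mirroring the study's finding that linked data was concentrated there.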

A Review Study on Major Factors Influencing Chlorine Disappearances in Water Storage Tanks (저수조 내 잔류염소 감소에 미치는 주요 영향 인자에 관한 문헌연구)

  • Noh, Yoorae;Kim, Sang-Hyo;Choi, Sung-Uk;Park, Joonhong
    • Journal of Korean Society of Disaster and Security
    • /
    • v.9 no.2
    • /
    • pp.63-75
    • /
    • 2016
  • For safe water supply, residual chlorine has to be maintained in tap water above a certain level, from the drinking water treatment plant to the final end-point tap. However, according to the current literature, approximately 30-60% of residual chlorine is lost along the water supply pathway. These losses may be attributed to the current tendency of water supply managers to reduce chlorine dosage at treatment plants, aqueous-phase decomposition of residual chlorine in supply pipes, accelerated chlorine decomposition at high temperature during summer, leakage or losses of residual chlorine from old supply pipes, and disappearance of residual chlorine in water storage tanks. Because of these, it is difficult to rule out the possibility that residual chlorine concentrations fall below the regulatory level. In addition, there is concern that regulatory compliance for residual chlorine in water storage tanks cannot always be guaranteed by the current design method, in which only storage capacity and/or hydraulic retention time are used as design factors, without considering the other physico-chemical processes involved in chlorine disappearance in a storage tank. To circumvent the limitations of the current design method, mathematical models for aqueous chlorine decomposition, sorption of chlorine onto wall surfaces, and mass transfer into the air phase via evaporation were selected from the literature, and residual chlorine reduction behavior in water storage tanks was numerically simulated. The model simulation revealed that the major factors influencing residual chlorine disappearance in water storage tanks are the water quality (organic pollutant concentration) of the tap water entering the storage tank, the hydraulic dispersion developed by the inflow of tap water into the tank, and the sorption capacity of the tank wall. 
The findings from this work provide useful information for developing novel designs and technologies for minimizing residual chlorine disappearance in water storage tanks.
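A minimal lumped sketch of the loss pathways named above, treating aqueous decay, wall sorption, and evaporation each as first-order processes with assumed rate constants (not the paper's calibrated values):

```python
import math

# Assumed first-order rate constants (1/h) for the three loss
# pathways modeled above: bulk aqueous decay, wall sorption,
# and evaporation to the air phase. Illustrative only.
k_bulk, k_wall, k_evap = 0.05, 0.02, 0.01
k_total = k_bulk + k_wall + k_evap

def chlorine(c0, hours):
    """Residual chlorine (mg/L) remaining after `hours` in the tank,
    lumping all three first-order pathways into one exponential decay."""
    return c0 * math.exp(-k_total * hours)

c24 = chlorine(1.0, 24)  # concentration after one day of retention
```

Even this crude model shows why hydraulic retention time alone is an incomplete design factor: the effective rate constant depends on water quality and wall sorption, not just residence time.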

A Case Study of the Performance and Success Factors of ISMP(Information Systems Master Plan) (정보시스템 마스터플랜(ISMP) 수행 성과와 성공요인에 관한 사례연구)

  • Park, So-Hyun;Lee, Kuk-Hie;Gu, Bon-Jae;Kim, Min-Seog
    • Information Systems Review
    • /
    • v.14 no.1
    • /
    • pp.85-103
    • /
    • 2012
  • ISMP is a method of writing the user requirements clearly in the RFP (Request for Proposal) of IS development projects. Unlike conventional methods of RFP preparation, which describe the user requirements of target systems in a rather superficial manner, ISMP systematically identifies the business needs and the status of information technology, analyzes the user requirements in detail, and defines the specific functions of the target systems. By increasing the clarity of the RFP, the scale and complexity of the related work can be calculated accurately, responding companies can prepare proposals clearly, and the fairness of evaluating the many proposals can be improved as well. Above all, the chronic problems in this field, i.e., misunderstanding and conflict between users and developers, excessive burden on developers, etc., can be resolved. This study is a case study that analyzes the execution process, execution accomplishments, problems, and success factors of two pilot projects that introduced ISMP for the first time. The ISMP procedures performed at the actual sites were verified, and how the user needs were described in the RFP was examined. Satisfaction with the ISMP-based RFP was found to be high compared to the conventional RFP. Although some problems occurred, such as difficulties in RFP preparation and increased workload due to the lack of understanding and execution experience of ISMP, overall there were positive effects such as establishment of the scope of the target systems, improved information sharing and cooperation between users and developers, seamless communication between issuing customer corporations and IT service companies, and reduction of changes in user requirements. 
As a result of conducting action-research-style in-depth interviews with the persons in charge of the actual work, the following ISMP success factors were derived: prior consensus on the need for ISMP, acquisition of execution resources resulting from the support of the CEO and CIO, and selection of the specification level of the user requirements. The results of this study will provide useful field information to corporations considering adopting ISMP and to IT service firms, and present meaningful suggestions on future research directions to researchers in the field of IT service competitive advantage.


Pareto Ratio and Inequality Level of Knowledge Sharing in Virtual Knowledge Collaboration: Analysis of Behaviors on Wikipedia (지식 공유의 파레토 비율 및 불평등 정도와 가상 지식 협업: 위키피디아 행위 데이터 분석)

  • Park, Hyun-Jung;Shin, Kyung-Shik
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.3
    • /
    • pp.19-43
    • /
    • 2014
  • The Pareto principle, also known as the 80-20 rule, states that roughly 80% of the effects come from 20% of the causes for many events, including natural phenomena. It has been recognized as a golden rule in business, with wide application of such findings as 20 percent of customers accounting for 80 percent of total sales. On the other hand, the Long Tail theory, pointing out that "the trivial many" produces more value than "the vital few," has gained popularity in recent times with a tremendous reduction of distribution and inventory costs through the development of ICT (Information and Communication Technology). This study started with a view to illuminating how these two primary business paradigms, the Pareto principle and the Long Tail theory, relate to the success of virtual knowledge collaboration. The importance of virtual knowledge collaboration is soaring in this era of globalization and virtualization, transcending geographical and temporal constraints. Many previous studies on knowledge sharing have focused on the factors that affect knowledge sharing, seeking to boost individual knowledge sharing and resolve the social dilemma caused by the fact that rational individuals are likely to consume rather than contribute knowledge. Knowledge collaboration can be defined as the creation of knowledge not only by sharing knowledge, but also by transforming and integrating such knowledge. From this perspective, the relative distribution of knowledge sharing among participants can count as much as the absolute amount of individual knowledge sharing. In particular, whether a greater contribution by the upper 20 percent of participants in knowledge sharing will enhance the efficiency of overall knowledge collaboration is an issue of interest. This study deals with the effect of this distribution of knowledge sharing on the efficiency of knowledge collaboration, and is extended to reflect work characteristics. 
All analyses were conducted on actual behavioral data instead of self-reported questionnaire surveys. More specifically, we analyzed the collaborative behaviors of the editors of 2,978 English Wikipedia featured articles, the best quality grade of articles in English Wikipedia. We adopted the Pareto ratio, the ratio of the number of knowledge contributions of the upper 20 percent of participants to the total number of knowledge contributions made by all participants of an article group, to examine the effect of the Pareto principle. In addition, the Gini coefficient, which represents the inequality of income among a group of people, was applied to reveal the effect of inequality of knowledge contribution. Hypotheses were set up based on the assumption that a higher ratio of knowledge contribution by more highly motivated participants will lead to higher collaboration efficiency, but that if the ratio gets too high, collaboration efficiency will deteriorate because overall informational diversity is threatened and the knowledge contribution of less motivated participants is discouraged. Cox regression models were formulated for each of the focal variables, the Pareto ratio and the Gini coefficient, with seven control variables such as the number of editors involved in an article, the average time between successive edits of an article, and the number of sections a featured article has. The dependent variable of the Cox models is the time from article initiation to promotion to featured-article level, indicating the efficiency of knowledge collaboration. To examine whether the effects of the focal variables vary depending on the characteristics of a group task, we classified the 2,978 featured articles into two categories, academic and non-academic, where academic articles are those referencing at least one paper published in an SCI, SSCI, A&HCI, or SCIE journal. 
We assumed that academic articles are more complex, entail more information processing and problem solving, and thus require more skill variety and expertise. The analysis results indicate the following. First, the Pareto ratio and the inequality of knowledge sharing relate in a curvilinear fashion to the collaboration efficiency in an online community, promoting it up to an optimal point and undermining it thereafter. Second, this curvilinear effect on collaboration efficiency is more pronounced for more academic tasks in an online community.
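The two focal variables can be computed as follows; the edit counts below are invented for illustration, not data from the study:

```python
# Hypothetical knowledge contributions (edit counts) per participant
# for one article group.
edits = [120, 45, 30, 12, 8, 5, 4, 3, 2, 1]

# Pareto ratio: share of all contributions made by the top 20%
# of participants.
ranked = sorted(edits, reverse=True)
top_n = max(1, int(len(ranked) * 0.2))
pareto_ratio = sum(ranked[:top_n]) / sum(ranked)

def gini(xs):
    """Gini coefficient (0 = perfect equality, ->1 = max inequality),
    computed from the sorted cumulative-contribution formula."""
    xs = sorted(xs)
    n = len(xs)
    cum = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * cum) / (n * sum(xs)) - (n + 1) / n
```

In this toy group the top 20% of editors account for about 72% of the edits, the kind of concentration whose curvilinear effect on promotion time the Cox models test.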

Transport Properties of CO2 and CH4 using Poly(ether-block-amide)/GPTMS Hybrid Membranes (Poly(ether-block-amide)/GPTMS 하이브리드 분리막을 이용한 이산화탄소와 메탄의 투과특성)

  • Lee, Keun Chul;Kim, Hyunjoon
    • Korean Chemical Engineering Research
    • /
    • v.54 no.5
    • /
    • pp.653-658
    • /
    • 2016
  • Poly(ether-block-amide) (PEBAX®) resin is a thermoplastic elastomer combining linear chains of hard, rigid polyamide blocks interspaced with soft, flexible polyether blocks. It is believed that the hard polyamide blocks provide mechanical strength and permselectivity, whereas gas transport occurs primarily through the soft polyether blocks. The objective of this work was to investigate the gas permeation properties of carbon dioxide and methane for the PEBAX®-1657 membrane, and to compare them with those obtained for another grade of pure PEBAX®, PEBAX®-2533, and for PEBAX®-based hybrid membranes. The PEBAX®-based hybrid membranes were obtained by a sol-gel process using GPTMS ((3-glycidoxypropyl)trimethoxysilane) as the only inorganic precursor. The molecular structure and morphology of the membranes were analyzed by $^{29}Si$-NMR, DSC, and SEM. The PEBAX®-2533 membrane exhibited higher gas permeability coefficients than the PEBAX®-1657 membrane, which is explained by its increased chain mobility. In contrast, the ideal separation factor of $CO_2/CH_4$ for the PEBAX®-1657 membrane was higher than for the PEBAX®-2533 membrane, which is explained by the decrease of diffusion selectivity caused by the increased chain mobility. For the PEBAX®/GPTMS hybrid membranes, gas permeability coefficients decreased with reaction time, and the permeability coefficient of $CH_4$ decreased more significantly than that of $CO_2$. This can be explained by the reduction of chain mobility caused by the sol-gel process and by the strong affinity of the PEO segment for $CO_2$. Compared with the pure PEBAX®-1657 membrane, the ideal separation factor of $CO_2/CH_4$ for the PEBAX®/GPTMS hybrid membrane decreased by 4.5%, while the gas permeability coefficient of $CO_2$ increased 3.5 times.
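The ideal separation factor used above is simply the ratio of the pure-gas permeability coefficients; the numbers below are illustrative, not the paper's measurements:

```python
def permeability(flux, thickness, dp):
    """Permeability coefficient P = J * l / dp for steady-state
    pure-gas permeation (units as supplied by the caller)."""
    return flux * thickness / dp

# Illustrative pure-gas permeability coefficients (Barrer), assumed.
P_CO2 = 110.0
P_CH4 = 6.5

# Ideal separation factor for the CO2/CH4 gas pair.
alpha_CO2_CH4 = P_CO2 / P_CH4
```

The paper's trade-off appears directly in this ratio: anything that raises $CO_2$ permeability faster than $CH_4$ permeability raises the ideal separation factor, and vice versa.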

Development of Measurement Scale for Korean Scaling Fear-1.0 and Related Factors (한국형 스켈링공포(KSF 1.0)의 측정도구 개발 및 관련요인)

  • Cho, Myung-Sook;Lee, Sung-Kook
    • Journal of dental hygiene science
    • /
    • v.9 no.3
    • /
    • pp.327-338
    • /
    • 2009
  • This study was conducted to develop an instrument for multidimensional measurement of Korean scaling fear (KSF)-1.0 and to analyze related factors. A sample of 720 subjects (scaling patients and community members) was studied in Daegu city from November 2008 to March 2009. The authors first conceptualized the KSF; item generation, item reduction, and questionnaire formatting were performed in the development stage. Item descriptives, missing percentage, item internal consistency, and item discriminant validity were analyzed at the item level, and descriptives and floor and ceiling effects were analyzed at the scale level. Cronbach's alpha, test-retest, inter-dimension correlations, and factor analysis were performed to evaluate the validity and reliability of the new instrument, and confirmatory factor analysis was performed to evaluate model fit. The results at the item level and scale level were acceptable except for item discriminant validity. Reliability was high in the test-retest, with a correlation coefficient range of 0.92-0.96 (Cronbach's alpha 0.96-0.98), and there was no significant difference in the paired t-test. Item internal consistency (Pearson correlation coefficients 0.39-0.95) was also high. The result of the exploratory factor analysis matched the intended dimensional structure, and the confirmatory factor analysis revealed that the dimensional-structure model fit well ($x^2$=1245.66, df=146, p=0.0000; GFI=0.85; AGFI=0.80; RMSEA=0.10). Factors related to KSF by multiple regression were gender ($\beta$=0.28, p=0.0004) and toothbrushing method ($\beta$=-0.15, p=0.0053) among scaling patients, while gender ($\beta$=0.25, p=0.0002), educational level ($\beta$=0.14, p=0.0155), toothbrushing method ($\beta$=-0.09, p=0.0229), and daily workout time ($\beta$=-0.10, p=0.0055) were significantly associated with KSF in the no-scaling group. 
In conclusion, the results of this study reveal that the newly developed measurement scale is a reliable and valid instrument for measuring KSF in dental hygiene patients and community members. We recommend that further research develop the instrument for Korean scaling fear further.
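Cronbach's alpha, the reliability statistic used above, can be computed from item-score columns; the item scores below are invented for illustration:

```python
import statistics

def cronbach_alpha(items):
    """Cronbach's alpha for a list of item columns, each a list of
    respondent scores of equal length. Uses population variances
    consistently, which leaves the variance ratio unchanged."""
    k = len(items)
    item_vars = sum(statistics.pvariance(col) for col in items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent sums
    total_var = statistics.pvariance(totals)
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Three hypothetical fear-scale items scored by five respondents.
items = [[4, 5, 3, 5, 4],
         [4, 4, 3, 5, 4],
         [5, 5, 3, 4, 4]]
alpha = cronbach_alpha(items)
```

Values near the study's 0.96-0.98 range indicate items measuring the same construct; this toy example lands lower because it has only three items.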
