• Title/Summary/Keyword: software error

Search results: 1,342

Basic Study on the Development of Analytical Instrument for Liquid Pig Manure Component Using Near Infra-Red Spectroscopy (근적외선 분광법을 이용한 돈분뇨 액비 성분분석기 개발을 위한 기초 연구)

  • Choi, D.Y.;Kwag, J.H.;Park, C.H.;Jeong, K.H.;Kim, J.H.;Song, J.I.;Yoo, Y.H.;Chung, M.S.;Yang, C.B.
    • Journal of Animal Environmental Science, v.13 no.2, pp.113-120, 2007
  • This study was conducted to measure nitrogen (N), phosphate (P2O5), potassium (K2O), organic matter (OM), and moisture content of liquid pig manure by near-infrared spectroscopy (NIRS), and to develop a simple analytical instrument for measuring these components. Transmittance spectra of the liquid pig manure samples were measured with a NIRS instrument in the wavelength range of 400 to 2,500 nm. Multiple linear regression and partial least squares regression were used for calibration. The correlation coefficient (RSQ) and standard error of calibration (SEC) obtained for nitrogen were 0.9190 and 2.1649, respectively. The RSQ values for phosphate, potassium, organic matter, and moisture content were 0.9749, 0.5046, 0.9883, and 0.9777, and the SEC values were 0.5019, 1.9252, 0.1180, and 0.0789, respectively. These results indicate that the components of liquid pig manure can be determined rapidly through NIR analysis. The simple analytical instrument for liquid pig manure consisted of a tungsten halogen lamp as the light source, a sample holder, a quartz cell, an SM 301 spectrometer as the spectrum analyzer, a power supply, electronics, a computer, and software. Results showed that the instrument developed can approximately predict the phosphate, organic matter, and moisture content of liquid pig manure when compared with analysis by NIRS. The low predictability for potassium, however, needs further investigation. Overall, the experiment showed that the simple analytical instrument is reliable, feasible, and practical for analyzing liquid pig manure.
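The RSQ and SEC figures quoted above are standard calibration statistics. As an illustration only (the abstract does not give the authors' exact formulas), a minimal sketch of how such statistics are commonly computed from reference values and NIRS-predicted values:

```python
import math

def rsq(y_ref, y_pred):
    # Squared Pearson correlation between reference and predicted values
    n = len(y_ref)
    my, mp = sum(y_ref) / n, sum(y_pred) / n
    cov = sum((a - my) * (b - mp) for a, b in zip(y_ref, y_pred))
    vy = sum((a - my) ** 2 for a in y_ref)
    vp = sum((b - mp) ** 2 for b in y_pred)
    return cov * cov / (vy * vp)

def sec(y_ref, y_pred):
    # Standard error of calibration: RMS of residuals with n-1 degrees of freedom
    n = len(y_ref)
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(y_ref, y_pred)) / (n - 1))
```

A perfect calibration gives rsq = 1 and sec = 0; the values reported above (e.g. RSQ 0.5046 for potassium) would come out of exactly this kind of comparison against wet-chemistry reference measurements.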


Design of a Crowd-Sourced Fingerprint Mapping and Localization System (군중-제공 신호지도 작성 및 위치 추적 시스템의 설계)

  • Choi, Eun-Mi;Kim, In-Cheol
    • KIPS Transactions on Software and Data Engineering, v.2 no.9, pp.595-602, 2013
  • WiFi fingerprinting is well known as an effective localization technique for indoor environments. However, this technique requires a large number of pre-built fingerprint maps covering the entire space. Moreover, due to environmental changes, these maps have to be rebuilt or updated periodically by experts. As a way to avoid this problem, crowd-sourced fingerprint mapping has attracted much interest from researchers. This approach lets many volunteer users share the WiFi fingerprints they collect in a specific environment, so crowd-sourced fingerprinting can keep fingerprint maps automatically up to date. In most previous systems, however, individual users were asked to enter their positions manually to build their local fingerprint maps. Moreover, those systems have no principled mechanism for keeping fingerprint maps clean by detecting and filtering out erroneous fingerprints collected from multiple users. In this paper, we present the design of a crowd-sourced fingerprint mapping and localization (CMAL) system. The proposed system can not only automatically build and/or update WiFi fingerprint maps from fingerprint collections provided by multiple smartphone users, but also simultaneously track their positions using the up-to-date maps. The CMAL system consists of multiple clients that run on individual smartphones to collect fingerprints, and a central server that maintains a database of fingerprint maps. Each client contains a particle filter-based WiFi SLAM engine that tracks the smartphone user's position and builds a local fingerprint map. The server adopts a Gaussian interpolation-based error filtering algorithm to maintain the integrity of the fingerprint maps. Through various experiments, we show the high performance of our system.
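As a rough sketch of the fingerprint-matching idea underlying such systems (the actual CMAL engine uses a particle filter-based WiFi SLAM, which is not reproduced here), a nearest-neighbor lookup over RSSI vectors could look like the following; all names are hypothetical:

```python
def locate(observed, fp_map):
    """Return the mapped position whose stored fingerprint is closest
    to the observed {ap_id: rssi_dbm} reading in RSSI space.
    fp_map: list of (position, {ap_id: rssi_dbm}) entries."""
    def rssi_distance(a, b):
        shared = set(a) & set(b)  # access points visible in both readings
        if not shared:
            return float("inf")
        return (sum((a[k] - b[k]) ** 2 for k in shared) / len(shared)) ** 0.5

    position, _ = min(fp_map, key=lambda entry: rssi_distance(observed, entry[1]))
    return position
```

A crowd-sourced system differs mainly in how fp_map is populated: each client's SLAM engine contributes (position, fingerprint) pairs, and the server's error filtering decides which contributions to keep.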

Effect of abutment superimposition process of dental model scanner on final virtual model (치과용 모형 스캐너의 지대치 중첩 과정이 최종 가상 모형에 미치는 영향)

  • Yu, Beom-Young;Son, Keunbada;Lee, Kyu-Bok
    • The Journal of Korean Academy of Prosthodontics, v.57 no.3, pp.203-210, 2019
  • Purpose: The purpose of this study was to verify the effect of the abutment superimposition process on the final virtual model when scanning single-crown and 3-unit bridge models with a dental model scanner. Materials and methods: Gypsum models for single crowns and 3-unit bridges were manufactured for evaluation, and working casts with removable dies were made using the Pindex system. A dental model scanner (3Shape E1 scanner) was used to obtain a CAD reference model (CRM) and a CAD test model (CTM). The CRM was scanned without removal after dividing the abutments in the working cast. The CTM was then scanned with the divided abutments separated, and superimposed on the CRM (n=20). Finally, three-dimensional analysis software (Geomagic Control X) was used to compute the root mean square (RMS) deviation, and the Mann-Whitney U test was used for statistical analysis (α=.05). Results: The mean RMS for single full crown abutments was 10.93 μm, and the mean RMS for 3-unit bridge abutments was 6.9 μm. The RMS means of the two groups showed a statistically significant difference (P<.001). In addition, the positive and negative errors averaged 9.83 μm and -6.79 μm for single abutments, and 6.22 μm and -3.3 μm for 3-unit bridge abutments, respectively. The mean positive and negative errors were both statistically significantly lower for 3-unit bridge abutments (P<.001). Conclusion: Although the number of abutments increased during the scanning of the working cast with removable dies, the error due to the superimposition of abutments did not increase. The error was significantly higher for single abutments, but within the range of clinically acceptable scan accuracy.
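The RMS deviation reported by analysis software such as Geomagic Control X is, in essence, the root mean square of the point-wise distances between the superimposed models, with positive and negative deviations also summarized separately. A minimal sketch (illustrative only, not the vendor's implementation):

```python
def rms(deviations):
    # Root mean square of point-wise deviations (e.g. in micrometers)
    return (sum(d * d for d in deviations) / len(deviations)) ** 0.5

def signed_error_means(deviations):
    # Mean positive and mean negative deviation, as reported in the abstract
    pos = [d for d in deviations if d > 0]
    neg = [d for d in deviations if d < 0]
    mean = lambda xs: sum(xs) / len(xs) if xs else 0.0
    return mean(pos), mean(neg)
```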

Scalable Collaborative Filtering Technique based on Adaptive Clustering (적응형 군집화 기반 확장 용이한 협업 필터링 기법)

  • Lee, O-Joun;Hong, Min-Sung;Lee, Won-Jin;Lee, Jae-Dong
    • Journal of Intelligence and Information Systems, v.20 no.2, pp.73-92, 2014
  • An adaptive clustering-based collaborative filtering technique is proposed to address the fundamental problems of collaborative filtering: the cold-start problem, the scalability problem, and the data sparsity problem. Previous collaborative filtering techniques make recommendations based on the user's predicted preference for a particular item, using a similar-item subset and a similar-user subset composed from users' preferences for items. For this reason, if the density of the user preference matrix is low, the reliability of the recommendation system decreases rapidly, and creating the similar-item and similar-user subsets becomes more difficult. In addition, as the scale of the service increases, the time needed to create these subsets grows geometrically, and the response time of the recommendation system increases accordingly. To solve these problems, this paper suggests a collaborative filtering technique that actively adapts to the conditions of the model and adopts concepts from context-based filtering. The technique consists of four major methodologies. First, items and users are clustered according to their feature vectors, and an inter-cluster preference between each item cluster and user cluster is estimated. With this method, the runtime for creating a similar-item or similar-user subset is reduced, the reliability of the recommendation system is higher than when only user preference information is used to create those subsets, and the cold-start problem is partially solved. Second, recommendations are made using the previously composed item and user clusters and the inter-cluster preferences between them.
In this phase, a list of items is made for each user by examining the item clusters in descending order of the inter-cluster preference of the cluster to which the user belongs, then selecting and ranking items according to the predicted or recorded user preference information. With this method, the model-creation phase bears the highest load of the recommendation system, minimizing the load at runtime; the scalability problem is therefore addressed, and large-scale recommendation can be performed with highly reliable collaborative filtering. Third, missing user preference information is predicted using the item and user clusters, mitigating the problem caused by the low density of the user preference matrix. Existing studies used either item-based or user-based prediction. In this paper, Hao Ji's idea, which uses both item-based and user-based prediction, is improved: the reliability of the recommendation service is increased by combining the predictive values of both techniques according to the conditions of the recommendation model. By predicting user preferences from the item or user clusters, the time required for prediction is reduced, and missing user preferences can be predicted at runtime. Fourth, the item and user feature vectors are updated by learning from subsequent user feedback: normalized user feedback is applied to the item and user feature vectors. This mitigates the problems caused by using concepts from context-based filtering, such as item and user feature vectors based on user profiles and item properties; those problems stem from the difficulty of quantifying the qualitative features of items and users.
Therefore, the elements of the user and item feature vectors are matched one to one, and when user feedback on a particular item is obtained, it is applied to the opposite feature vector. This method was verified by comparing its performance with existing hybrid filtering techniques, using two measures: MAE (mean absolute error) and response time. By MAE, the technique was confirmed to improve the reliability of the recommendation system; by response time, it was found to be suitable for a large-scale recommendation system. This paper thus proposes an adaptive clustering-based collaborative filtering technique with high reliability and low time complexity, but it has some limitations. The technique focuses on reducing time complexity, so a large improvement in reliability was not expected. Future work will improve this technique with rule-based filtering.
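As an illustrative sketch of the cluster-level prediction step (the paper's full model, with inter-cluster preferences and combined item/user-based prediction, is richer than this), a missing rating can be estimated from the ratings that the user's cluster has given to the item's cluster; all names here are hypothetical:

```python
def predict_rating(user, item, ratings, user_cluster, item_cluster):
    """Estimate a missing preference as the mean rating that members of the
    user's cluster gave to items in the item's cluster.
    ratings: {(user_id, item_id): rating}"""
    uc, ic = user_cluster[user], item_cluster[item]
    pool = [r for (u, i), r in ratings.items()
            if user_cluster[u] == uc and item_cluster[i] == ic]
    return sum(pool) / len(pool) if pool else None
```

The scalability gain described above comes from this shape of computation: the expensive clustering is done once at model-creation time, and a runtime prediction only aggregates over one cluster pair instead of searching the full user-item matrix for neighbors.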

A Study for the development of the Korean orthodontic bracket (한국형 교정치료용 Bracket의 개발에 관한 연구)

  • Chang, Young-Il;Yang, Won-Sik;Nahm, Dong-Seok;Moon, Seong-cheol
    • The Korean Journal of Orthodontics, v.30 no.5 s.82, pp.565-578, 2000
  • The aim of this study was the development of a straight-wire appliance (SWA) suitable for the treatment of Koreans. To accomplish this, Korean adults with normal occlusion were selected by the following criteria: 1) no functional abnormality in the craniofacial area, 2) good dental arch form and posterior occlusal relationship, 3) Angle Class I occlusal relationship, 4) no history of orthodontic or prosthodontic treatment and, in particular, no dental treatment on the labial and buccal surfaces of the teeth, 5) good facial profile. Impressions were taken of the upper and lower dental arches of the selected normal-occlusion subjects, and orthodontic dental stone models were fabricated. Five well-trained orthodontists examined the stone models to select study samples satisfying Andrews' six keys to optimal occlusion; 155 pairs of dental stone models (92 male, 63 female) were finally selected. Three-dimensional digitization was performed with a coordinate measuring machine (CMM, MPC802, WEGU-Messtechnik, Germany), and angulation, inclination, in-and-out, molar offset angle, and arch form were measured with measuring software to obtain data for the development of the SWA. Before the measurement, an error study was performed on the 3-D digitization with the CMM, and the reliability of the computerized measuring method adopted in this study was analyzed against the conventional manual method presented by Andrews. The results were as follows: 1. Equidistant digitization with mesh sizes of 0.25 mm, 0.5 mm, and 1.0 mm was acceptable for 3-D digitization of dental stone models with the CMM, and digitization with a 1.0 mm mesh size is recommendable in terms of efficiency. 2. The computerized measuring method with 3-D digitization was more reliable than the manual measuring method of Andrews. 3. Data were collected for the development of an SWA suited to the morphological characteristics of Koreans using the computerized measuring method with 3-D digitization.


Development and Performance Evaluation of an Animal SPECT System Using Philips ARGUS Gamma Camera and Pinhole Collimator (Philips ARGUS 감마카메라와 바늘구멍조준기를 이용한 소동물 SPECT 시스템의 개발 및 성능 평가)

  • Kim, Joong-Hyun;Lee, Jae-Sung;Kim, Jin-Su;Lee, Byeong-Il;Kim, Soo-Mee;Choung, In-Soon;Kim, Yu-Kyeong;Lee, Won-Woo;Kim, Sang-Eun;Chung, June-Key;Lee, Myung-Chul;Lee, Dong-Soo
    • The Korean Journal of Nuclear Medicine, v.39 no.6, pp.445-455, 2005
  • Purpose: We developed an animal SPECT system using a clinical Philips ARGUS scintillation camera and a pinhole collimator with specially manufactured small apertures. In this study, we evaluated the physical characteristics of this system and its biological feasibility for animal experiments. Materials and Methods: A rotating station for small animals using a step motor, together with operating software, was developed. Pinhole inserts with small apertures (0.5, 1.0, and 2.0 mm diameter) were manufactured, and physical parameters including planar spatial resolution, sensitivity, and reconstructed resolution were measured for some apertures. To measure the size of the usable field of view as a function of distance from the focal point, multiple line sources manufactured with equal spacing were scanned and the number of lines within the field of view was counted. Using a Tc-99m line source 0.5 mm in diameter and 12 mm in length placed at the exact center of the field of view, planar spatial resolution was measured as a function of distance. A calibration factor for obtaining FWHM values in mm was calculated from the planar image of two separated line sources. A Tc-99m point source 1 mm in diameter was used for the measurement of system sensitivity. In addition, SPECT data of a micro phantom with cold and hot line inserts, and of a rat brain after intravenous injection of [I-123]FP-CIT, were acquired and reconstructed using a filtered back projection algorithm for the pinhole collimator. Results: The size of the usable field of view was proportional to the distance from the focal point, and the relationship could be fitted to a linear equation (y = 1.4x + 0.5, x: distance). System sensitivity and planar spatial resolution at 3 cm, measured using the 1.0 mm aperture, were 71 cps/MBq and 1.24 mm, respectively. In the SPECT image of the rat brain with [I-123]FP-CIT acquired using the 1.0 mm aperture, the distribution of dopamine transporter in the striatum was well identified in each hemisphere. Conclusion: We verified that this new animal SPECT system with the Philips ARGUS scanner and small apertures has sufficient performance for small animal imaging.
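The reported field-of-view relationship can be applied directly. A small sketch evaluating the fitted line y = 1.4x + 0.5 (the abstract does not state the units of x and y, so they are left to the caller):

```python
def usable_fov(distance, slope=1.4, intercept=0.5):
    # Linear fit reported in the abstract: usable FOV = 1.4 * distance + 0.5
    return slope * distance + intercept
```

In a pinhole geometry this kind of linear growth with distance is expected, since the projected cone of acceptance widens as the object moves away from the focal point.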

Development of a Predictive Model Describing the Growth of Listeria monocytogenes in Fresh-Cut Vegetables (샐러드용 신선 채소에서의 Listeria monocytogenes 성장예측모델 개발)

  • Cho, Joon-Il;Lee, Soon-Ho;Lim, Ji-Su;Kwak, Hyo-Sun;Hwang, In-Gyun
    • Journal of Food Hygiene and Safety, v.26 no.1, pp.25-30, 2011
  • In this study, predictive mathematical models were developed to describe the growth kinetics of Listeria monocytogenes in mixed fresh-cut vegetables, among the most popular ready-to-eat foods in the world, as a function of temperature (4, 10, 20, and 30°C). At the specified storage temperatures, the primary growth curves fit well (r² = 0.916~0.981) with the Gompertz and Baranyi equations used to determine the specific growth rate (SGR). A polynomial model for the natural-logarithm-transformed SGR as a function of temperature was obtained by nonlinear regression (Prism, version 4.0, GraphPad Software). As the storage temperature decreased from 30°C to 4°C, the SGR decreased. The polynomial model was identified as an appropriate secondary model for the SGR on the basis of statistical indices such as mean square error (MSE = 0.002718 for Gompertz, 0.055186 for Baranyi), bias factor (Bf = 1.050084 for Gompertz, 1.931472 for Baranyi), and accuracy factor (Af = 1.160767 for Gompertz, 2.137181 for Baranyi). The results indicate that L. monocytogenes growth was affected mainly by temperature, and the equation developed from the Gompertz model (SGR = -0.1606 + 0.0574·T + 0.0009·T²) was more effective than the equation developed from the Baranyi model (SGR = 0.3502 - 0.0496·T + 0.0022·T²) for predicting the specific growth rate of L. monocytogenes in mixed fresh-cut vegetables.
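The two secondary models quoted above, together with the bias and accuracy factors used to compare them, can be sketched as follows. The standard definitions Bf = 10^mean(log10(pred/obs)) and Af = 10^mean(|log10(pred/obs)|) are assumed here, since the abstract does not spell them out:

```python
import math

def sgr_gompertz(temp_c):
    # Secondary model from the abstract (Gompertz-derived SGR)
    return -0.1606 + 0.0574 * temp_c + 0.0009 * temp_c ** 2

def sgr_baranyi(temp_c):
    # Secondary model from the abstract (Baranyi-derived SGR)
    return 0.3502 - 0.0496 * temp_c + 0.0022 * temp_c ** 2

def bias_factor(pred, obs):
    # Bf = 1 means no systematic over- or under-prediction
    return 10 ** (sum(math.log10(p / o) for p, o in zip(pred, obs)) / len(obs))

def accuracy_factor(pred, obs):
    # Af = 1 means perfect agreement; larger means less accurate
    return 10 ** (sum(abs(math.log10(p / o)) for p, o in zip(pred, obs)) / len(obs))
```

With these definitions, the reported Bf of 1.05 for the Gompertz-based model versus 1.93 for the Baranyi-based one is what supports the conclusion that the former predicts the SGR more reliably.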

Recommending Core and Connecting Keywords of Research Area Using Social Network and Data Mining Techniques (소셜 네트워크와 데이터 마이닝 기법을 활용한 학문 분야 중심 및 융합 키워드 추천 서비스)

  • Cho, In-Dong;Kim, Nam-Gyu
    • Journal of Intelligence and Information Systems, v.17 no.1, pp.127-138, 2011
  • The core service of most research portal sites is providing relevant research papers to researchers matching their research interests. This kind of service is effective and easy to use only when a user can provide correct and concrete information about a paper, such as its title, authors, and keywords. Unfortunately, however, most users of this service are not acquainted with concrete bibliographic information, which implies that most users inevitably go through repeated trial and error with keyword-based search. Retrieving a relevant research paper is especially difficult when a user is a novice in the research domain and does not know appropriate keywords. In this case, a user must search iteratively: i) perform an initial search with an arbitrary keyword, ii) acquire related keywords from the retrieved papers, and iii) perform another search with the acquired keywords. This usage pattern implies that the service quality and user satisfaction of a portal site are strongly affected by its keyword management and search mechanism. To overcome this kind of inefficiency, some leading research portal sites have adopted association rule mining-based keyword recommendation, similar to the product recommendations of online shopping malls. However, keyword recommendation based only on association analysis has the limitation that it can show only a simple and direct relationship between two keywords; the association analysis itself is unable to present the complex relationships among many keywords in adjacent research areas. To overcome this limitation, we propose a hybrid approach for establishing an association network among keywords used in research papers.
The keyword association network can be established by the following phases : i) a set of keywords specified in a certain paper are regarded as co-purchased items, ii) perform association analysis for the keywords and extract frequent patterns of keywords that satisfy predefined thresholds of confidence, support, and lift, and iii) schematize the frequent keyword patterns as a network to show the core keywords of each research area and connecting keywords among two or more research areas. To estimate the practical application of our approach, we performed a simple experiment with 600 keywords. The keywords are extracted from 131 research papers published in five prominent Korean journals in 2009. In the experiment, we used the SAS Enterprise Miner for association analysis and the R software for social network analysis. As the final outcome, we presented a network diagram and a cluster dendrogram for the keyword association network. We summarized the results in Section 4 of this paper. The main contribution of our proposed approach can be found in the following aspects : i) the keyword network can provide an initial roadmap of a research area to researchers who are novice in the domain, ii) a researcher can grasp the distribution of many keywords neighboring to a certain keyword, and iii) researchers can get some idea for converging different research areas by observing connecting keywords in the keyword association network. Further studies should include the following. First, the current version of our approach does not implement a standard meta-dictionary. For practical use, homonyms, synonyms, and multilingual problems should be resolved with a standard meta-dictionary. Additionally, more clear guidelines for clustering research areas and defining core and connecting keywords should be provided. Finally, intensive experiments not only on Korean research papers but also on international papers should be performed in further studies.
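The thresholds mentioned in phase ii) — support, confidence, and lift — have standard definitions, and a toy version of the pairwise keyword analysis (illustrative only; the authors used SAS Enterprise Miner for this step) could look like:

```python
from itertools import combinations

def keyword_pair_stats(papers, min_support=0.0):
    """papers: list of keyword sets, one per paper.
    Returns {(a, b): (support, confidence_a_to_b, lift)} for co-occurring pairs."""
    n = len(papers)
    kw_count, pair_count = {}, {}
    for kws in papers:
        for k in kws:
            kw_count[k] = kw_count.get(k, 0) + 1
        for a, b in combinations(sorted(kws), 2):
            pair_count[(a, b)] = pair_count.get((a, b), 0) + 1
    stats = {}
    for (a, b), c in pair_count.items():
        support = c / n                      # fraction of papers with both keywords
        if support < min_support:
            continue
        confidence = c / kw_count[a]         # P(b | a)
        lift = confidence / (kw_count[b] / n)  # > 1 means positive association
        stats[(a, b)] = (support, confidence, lift)
    return stats
```

The surviving pairs are exactly the edges of the keyword network described above: keywords with many strong edges inside one area become core keywords, and keywords bridging two areas become connecting keywords.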

Age-related Changes of the Finger Photoplethysmogram in Frequency Domain Analysis (연령증가에 따른 지첨용적맥파의 주파수 영역에서의 변화)

  • Nam, Tong-Hyun;Park, Young-Bae;Park, Young-Jae;Shin, Sang-Hoon
    • The Journal of the Society of Korean Medicine Diagnostics, v.12 no.1, pp.42-62, 2008
  • Objectives: It is well known that some parameters of the photoplethysmogram (PPG) acquired by time-domain contour analysis can be used as markers of vascular aging, but the previous studies of frequency-domain analysis of the PPG have provided only restrictive and fragmentary information. The aim of the present investigation was to determine whether the harmonics extracted from the PPG using a fast Fourier transform could be used as an index of vascular aging. Methods: The PPG was measured in 600 recruited subjects for 30-second durations. To grasp the gross age-related change of the PPG waveform, we grouped subjects according to gender and age and averaged the PPG signal over one pulse cycle. To calculate the conventional indices of vascular aging, we selected 5-6 pulse cycles in which the baseline was relatively stable and then acquired the coordinates of the inflection points. For the frequency-domain analysis, we performed a power spectral analysis on the 30-second PPG signals using a fast Fourier transform and dissociated the harmonic components from the PPG signals. Results: A final number of 390 subjects (174 males and 216 females) were included in the statistical analysis. The normalized power of the harmonics decreased with age, and on a logarithmic scale the reduction of the normalized power in the third (r=-0.492, P<0.0001), fourth (r=-0.621, P<0.0001), and fifth harmonics (r=-0.487, P<0.0001) was prominent. In a multiple linear regression analysis, the stiffness index, reflection index, and corrected upstroke time influenced the normalized power of the harmonics on a logarithmic scale. Conclusions: The normalized harmonic power decreased with age in healthy subjects and may be less error-prone due to the essential attributes of frequency-domain analysis. Therefore, we expect that the normalized harmonic power density can be useful as a vascular aging marker.
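As a minimal sketch of the harmonic-power idea (the authors' exact preprocessing is not given in the abstract; here a single averaged pulse cycle is assumed, so the k-th harmonic falls in the k-th DFT bin):

```python
import math

def normalized_harmonic_powers(one_cycle, n_harmonics=5):
    """DFT power at harmonics 1..n_harmonics of a single pulse cycle,
    normalized so the listed harmonics sum to 1."""
    n = len(one_cycle)
    powers = []
    for k in range(1, n_harmonics + 1):
        re = sum(x * math.cos(2 * math.pi * k * t / n) for t, x in enumerate(one_cycle))
        im = sum(-x * math.sin(2 * math.pi * k * t / n) for t, x in enumerate(one_cycle))
        powers.append(re * re + im * im)
    total = sum(powers)
    return [p / total for p in powers]
```

The age-related finding above corresponds, in this representation, to the entries for the third through fifth harmonics shrinking relative to the fundamental as the waveform loses its high-frequency detail.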


Quality Assurance of Volumetric Modulated Arc Therapy Using the Dynalog Files (다이나로그 파일을 이용한 부피세기조절회전치료의 정도관리)

  • Kang, Dong-Jin;Jung, Jae-Yong;Shin, Young-Joo;Min, Jung-Whan;Kim, Yon-Lae;Yang, Hyung-jin
    • Journal of Radiological Science and Technology, v.39 no.4, pp.577-585, 2016
  • The purpose of this study was to evaluate the accuracy of beam-delivery QA software using MLC dynalog files, for VMAT plans based on the AAPM TG-119 protocol. A Clinac iX with a built-in 120-leaf MLC was used to acquire the MLC dynalog files to be imported into MobiusFX (MFX). To establish the VMAT plans, the Oncentra RTP system was used, and target and organ structures were contoured in an Im'RT phantom. The dose distribution was evaluated using the gamma index, and the point dose was evaluated using a CC13 ion chamber in the Im'RT phantom. For the point dose, the mean relative error between measured and calculated values was 1.41 ± 0.92% (target) and 0.89 ± 0.86% (OAR), and the confidence limits were 3.21 (96.79%, target) and 2.58 (97.42%, OAR). For the dose distribution, in the case of Delta4PT the average passing rates were 99.78 ± 0.2% (3%/3 mm) and 96.86 ± 1.76% (2%/2 mm); in the case of MFX, the average passing rates were 99.90 ± 0.14% (3%/3 mm) and 97.98 ± 1.97% (2%/2 mm). The confidence limits (CL) were, for Delta4PT, 0.62 (99.38%, 3%/3 mm) and 6.6 (93.4%, 2%/2 mm), and for MFX, 0.38 (99.62%, 3%/3 mm) and 5.88 (94.12%, 2%/2 mm). In this study, we performed a VMAT QA method using dynamic MLC log files, compared against a diode array. All analyzed results satisfied the acceptance criteria based on the TG-119 protocol.
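The gamma index used for the dose-distribution comparison combines a dose-difference criterion and a distance-to-agreement criterion. A simplified 1-D sketch (illustrative only; tools such as MFX and the Delta4 software implement this in 3-D with interpolation, and dose_crit here is an absolute dose value, e.g. 3% of a reference dose computed beforehand):

```python
def gamma_1d(ref_dose, eval_dose, positions, dose_crit, dist_crit):
    """Per reference point, minimize
    sqrt((dose diff / dose_crit)^2 + (position diff / dist_crit)^2)
    over all evaluated points."""
    gammas = []
    for i, d_ref in enumerate(ref_dose):
        g2 = min(((d - d_ref) / dose_crit) ** 2
                 + ((positions[j] - positions[i]) / dist_crit) ** 2
                 for j, d in enumerate(eval_dose))
        gammas.append(g2 ** 0.5)
    return gammas

def passing_rate(gammas):
    # Percentage of points with gamma <= 1, the figure reported in the abstract
    return 100.0 * sum(g <= 1.0 for g in gammas) / len(gammas)
```

The 3%/3 mm and 2%/2 mm passing rates quoted above are this percentage evaluated under two choices of (dose_crit, dist_crit); tightening the criteria lowers the passing rate, as the reported numbers show.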