• Title/Summary/Keyword: Information Error


A Study on Significance Testing of Driver's Visual Behavior due to the VMS Message Display Forms on the Road (도로상 VMS 표출방식별 운전자 유의성 검증에 관한 연구)

  • Kum, Ki-Jung;Son, Young-Tae;Bae, Deok-Mo;Son, Seung-Neo
    • International Journal of Highway Engineering
    • /
    • v.7 no.4 s.26
    • /
    • pp.151-162
    • /
    • 2005
  • The Variable Message Sign (VMS), which gives drivers direct information on the state of traffic congestion and helps prevent accidents, is among the most effective means of providing information in an Advanced Transportation Management System. VMS installation and operation are currently based on the Guidelines on the Use of Variable Message Signs issued by the Ministry of Construction & Transportation in November 1999; these guidelines mainly address physical aspects such as message characteristics (font, character size, line spacing, word spacing) and sign position among the general installation standards. However, the display forms of VMS messages have been used without verification of their effectiveness, and operation has centered on the stationary display mode. This paper carries out significance testing of VMS display forms from the driver's viewpoint to support more practical and effective information delivery. For the experiments, a 3D simulation was developed, characteristics of drivers' visual cognition behavior (conspicuity, legibility, and comprehensibility) were selected, and each condition (day vs. night, 80 km/h vs. 100 km/h) was evaluated. In particular, an Eye Marker Recorder was used to measure reading time (legibility), which improved objectivity and reduced observational error. The results showed that conspicuity ranked Flashing > Stationary > Scroll, legibility showed no significant difference between the Flashing and Stationary forms, and comprehensibility ranked Flashing > Stationary > Scroll.
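As an illustration of the significance testing described in this abstract, the following minimal Python sketch compares Eye Marker Recorder reading times (legibility) across the three display forms. The data are hypothetical, and the abstract does not specify the exact statistical test, so a one-way ANOVA plus a pairwise t-test are used here only as stand-ins.

```python
# Minimal sketch of a significance test on legibility (reading time) across VMS
# display forms. All sample values are hypothetical placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
reading_time = {
    "flashing":   rng.normal(1.8, 0.3, 30),   # hypothetical reading times (s)
    "stationary": rng.normal(1.9, 0.3, 30),
    "scroll":     rng.normal(2.6, 0.4, 30),
}

# One-way ANOVA: does mean reading time differ across the three display forms?
f_stat, p_value = stats.f_oneway(*reading_time.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Pairwise check corresponding to the reported finding that Flashing and
# Stationary legibility do not differ significantly.
t_stat, p_pair = stats.ttest_ind(reading_time["flashing"], reading_time["stationary"])
print(f"Flashing vs. Stationary: t = {t_stat:.2f}, p = {p_pair:.4f}")
```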


A Case Study on Forecasting Inbound Calls of Motor Insurance Company Using Interactive Data Mining Technique (대화식 데이터 마이닝 기법을 활용한 자동차 보험사의 인입 콜량 예측 사례)

  • Baek, Woong;Kim, Nam-Gyu
    • Journal of Intelligence and Information Systems
    • /
    • v.16 no.3
    • /
    • pp.99-120
    • /
    • 2010
  • Due to the widespread, frequent use of non-face-to-face services by customers, there have been many attempts to improve customer satisfaction using the huge amounts of data accumulated through non-face-to-face channels. A call center is usually regarded as one of the most representative non-face-to-face channels. Therefore, it is important that a call center has enough agents to offer a high level of customer satisfaction. However, keeping too many agents increases the operational costs of a call center by increasing labor costs. Predicting and calculating the appropriate size of a call center's workforce is therefore one of the most critical success factors of call center management. For this reason, most call centers currently establish a WFM (Work Force Management) department to estimate the appropriate number of agents and devote much effort to predicting the volume of inbound calls. In real-world applications, inbound call prediction is usually performed based on the intuition and experience of a domain expert. In other words, a domain expert typically predicts the volume of calls by calculating the average call volume over certain periods and adjusting that average according to his/her subjective estimation. However, this kind of approach has fundamental limitations in that the prediction may be strongly affected by the expert's personal experience and competence. It is often the case that one domain expert predicts inbound calls quite differently from another if the two experts hold different opinions on which variables are influential and how to prioritize them. Moreover, it is almost impossible to logically clarify the process of an expert's subjective prediction. To overcome the limitations of subjective call prediction, most call centers are currently adopting a WFMS (Workforce Management System) package in which experts' best practices are systemized. With a WFMS, a user can predict the volume of calls by calculating the average call volume for each day of the week, excluding some eventful days. However, a WFMS requires a large capital outlay during the early stage of system establishment. Moreover, it is hard to reflect new information in the system when factors affecting the call volume have changed. In this paper, we attempt to devise a new model for predicting inbound calls that is not only grounded in theory but also easily applicable to real-world applications. Our model was mainly developed with the interactive decision tree technique, one of the most popular techniques in data mining. Therefore, we expect that our model can predict inbound calls automatically based on historical data, and that it can utilize experts' domain knowledge during tree construction. To analyze the accuracy of our model, we performed intensive experiments on a real case from one of the largest car insurance companies in Korea. In the case study, the prediction accuracies of the two devised models and the traditional WFMS are analyzed with respect to various allowable error rates. The experiments reveal that our two data mining-based models outperform the WFMS in predicting the volume of accident calls and fault calls in most of the experimental situations examined.
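To make the decision-tree idea above concrete, here is a minimal Python sketch of a tree-based inbound-call predictor using scikit-learn. The calendar features, simulated call counts, and tree depth are hypothetical; the paper's interactive tree construction and its actual variables are not reproduced.

```python
# Minimal sketch of a decision-tree inbound-call predictor in the spirit of the
# approach described above. Feature names and data are hypothetical.
import numpy as np
import pandas as pd
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(1)
n_days = 365
df = pd.DataFrame({
    "day_of_week": rng.integers(0, 7, n_days),   # 0 = Monday ... 6 = Sunday
    "is_holiday":  rng.integers(0, 2, n_days),
    "month":       rng.integers(1, 13, n_days),
})
# Hypothetical daily call volume with weekday/holiday effects plus noise.
df["calls"] = (900 - 120 * df["is_holiday"] - 60 * (df["day_of_week"] >= 5)
               + rng.normal(0, 40, n_days)).round()

# A shallow tree keeps the splits interpretable, so a domain expert can review
# (and, in an interactive setting, override) each branching rule.
model = DecisionTreeRegressor(max_depth=3, min_samples_leaf=14)
model.fit(df[["day_of_week", "is_holiday", "month"]], df["calls"])

next_day = pd.DataFrame({"day_of_week": [2], "is_holiday": [0], "month": [7]})
print("Predicted inbound calls:", int(model.predict(next_day)[0]))
```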

Modern Paper Quality Control

  • Olavi Komppa
    • Proceedings of the Korea Technical Association of the Pulp and Paper Industry Conference
    • /
    • 2000.06a
    • /
    • pp.16-23
    • /
    • 2000
  • The increasing functional demands on top-quality printing papers and packaging paperboards, and especially the rapid developments in electronic printing processes and various computer printers during the past few years, set new targets and requirements for modern paper quality. Most of these paper grades today have a relatively high filler content, are moderately or heavily calendered, and have many coating layers for the best appearance and performance. In practice, this means that many of the traditional quality assurance methods, mostly designed to measure papers made of pure native pulp only, cannot reliably (or at all) be used to analyze or rank the quality of modern papers. Hence, the introduction of new measurement techniques is necessary to assure and further develop paper quality today and in the future. Paper formation, i.e. the small-scale (millimeter-scale) variation of basis weight, is the most important quality parameter in papermaking because of its influence on practically all the other quality properties of paper. The ideal paper would be completely uniform, so that the basis weight of each small point (area) measured would be the same. In practice, of course, this is not possible, because there always exist relatively large local variations in paper. However, these small-scale basis weight variations are the major cause of many other quality problems, including calender blackening, uneven coating results, uneven printing results, etc. The traditionally used visual inspection or optical measurement of the paper does not give a reliable understanding of the material variations in the paper, because in the modern papermaking process the optical behavior of paper is strongly affected by the use of, e.g., fillers, dyes, or coating colors. Furthermore, the opacity (optical density) of the paper changes at different process stages such as wet pressing and calendering. The greatest advantage of using the beta transmission method to measure paper formation is that it can be very reliably calibrated to measure the true basis weight variation of all kinds of paper and board, independently of sample basis weight or paper grade. This makes it possible to measure, compare, and judge papers made of different raw materials or different colors, and even to measure heavily calendered, coated, or printed papers. Scientific research in paper physics has shown that the orientation of the top-layer (paper surface) fibers of the sheet plays the key role in paper curling and cockling, causing the typical practical problems (paper jams) with modern fax and copy machines, electronic printing, etc. On the other hand, the fiber orientation in the surface and middle layers of the sheet controls the bending stiffness of paperboard. Therefore, a reliable measurement of paper surface fiber orientation provides a powerful tool to investigate and predict paper curling and cockling tendency, and provides the necessary information to fine-tune the manufacturing process for optimum quality. Many papers, especially heavily calendered and coated grades, resist liquid and gas penetration very strongly, being beyond the measurement range of traditional instruments or resulting in inconveniently long measuring times per sample. The increased surface hardness and the use of filler minerals and mechanical pulp make a reliable, non-leaking sample contact with the measurement head a challenge of its own. Paper surface coating, as expected, creates a layer with completely different permeability characteristics compared to the other layers of the sheet. The latest developments in sensor technologies have made it possible to reliably measure gas flow under well-controlled conditions, allowing investigation of the gas penetration of open structures, such as cigarette paper, tissue, or sack paper, and, in the low-permeability range, analysis of even fully greaseproof papers, silicone papers, and heavily coated papers and boards, or even the detection of defects in barrier coatings. Even nitrogen or helium may be used as the gas, giving completely new possibilities to rank products or to find correlations with critical process or converting parameters. All modern paper machines include many on-line measuring instruments that provide the necessary information for automatic process control systems. Hence, the reliability of the information obtained from the different sensors is vital for good optimization and process stability. If any of these on-line sensors does not operate exactly as planned (having even a small measurement error or malfunction), the process control will set the machine to operate away from the optimum, resulting in loss of profit or eventual problems in quality or runnability. To assure optimum operation of the paper machines, a novel quality assurance policy for the on-line measurements has been developed, including control procedures utilizing traceable, accredited standards for the best reliability and performance.
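Since the abstract defines formation as the small-scale variation of basis weight, the following minimal Python sketch quantifies it as a coefficient of variation over a simulated basis weight map, as one might obtain from a beta transmission measurement. The grid size, grammage values, and the use of the coefficient of variation as a formation index are illustrative assumptions, not the instrument vendor's algorithm.

```python
# Minimal sketch: formation quantified as small-scale basis weight variation on a
# hypothetical beta-transmission basis weight map.
import numpy as np

rng = np.random.default_rng(2)
# Simulated 100 x 100 mm map at 1 mm resolution, nominal 80 g/m2 paper.
basis_weight = 80.0 + rng.normal(0.0, 3.0, size=(100, 100))

mean_bw = basis_weight.mean()
sigma_bw = basis_weight.std(ddof=1)
cov_percent = 100.0 * sigma_bw / mean_bw   # coefficient of variation, %

print(f"Mean basis weight : {mean_bw:.1f} g/m2")
print(f"Local std. dev.   : {sigma_bw:.2f} g/m2")
print(f"Formation (CoV)   : {cov_percent:.2f} %")
```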

Lung cancer, chronic obstructive pulmonary disease and air pollution (대기오염에 의한 폐암 및 만성폐색성호흡기질환 -개인 흡연력을 보정한 만성건강영향평가-)

  • Sung, Joo-Hon;Cho, Soo-Hun;Kang, Dae-Hee;Yoo, Keun-Young
    • Journal of Preventive Medicine and Public Health
    • /
    • v.30 no.3 s.58
    • /
    • pp.585-598
    • /
    • 1997
  • Background: Although there are growing concerns about the adverse health effects of air pollution, not much evidence on the health effects of current air pollution levels has yet been accumulated in Korea. This study was designed to evaluate the chronic health effects of air pollution using Korean Medical Insurance Corporation (KMIC) data and air quality data. Medical insurance data in Korea have some drawbacks in accuracy, but they also have strengths, especially their national coverage, unified ID system, and individual information, which enable various data linkages and chronic health effect studies. Method: This study utilized the data of the Korean Environmental Surveillance System Study (Surveillance Study), which covers asthma, acute bronchitis, chronic obstructive pulmonary disease (COPD), cardiovascular diseases (congestive heart failure and ischemic heart disease), all cancers, accidents, and congenital anomalies, i.e., mainly potential environmental diseases. We reconstructed a nested case-control study with the Surveillance Study data and air pollution data in Korea. Among the 1,037,210 insured who completed a questionnaire and physical examination in 1992, disease-free (for chronic respiratory disease and cancer) persons between the ages of 35 and 64 with smoking status information were selected to reconstruct a cohort of 564,991 persons. The cohort was followed up to 1995 (1992-5), and the subjects who developed the diseases covered by the Surveillance Study were selected. Finally, the patients with address information and available air pollution data were retained as the 'final subjects'. Cases were defined as all lung cancer cases (424) and COPD admission cases (89), while the control groups were all patients other than the two case groups among the 'final subjects'. That is, the cases are putative chronic environmental diseases, while the controls are mainly acute environmental diseases. For exposure, air quality data from 73 monitoring sites between 1991 and 1993 were analyzed to surrogate the air pollution exposure level of the corresponding areas (58 areas). Data for five major air pollutants, TSP, $O_3,\;SO_2$, CO, and NOx, were available, and the area means were applied to the residents of each local area. The 3-year arithmetic mean values and the counts of days violating the long-term and short-term standards during the period were used as indices of exposure. A multiple logistic regression model was applied. All analyses were performed adjusting for current and past smoking history, age, and gender. Results: Plain arithmetic means of pollutant levels did not reveal any relation to the risk of lung cancer or COPD, while the cumulative counts of non-attainment days did. All pollutant indices failed to show significant positive findings for COPD excess. Lung cancer risks were significantly and consistently associated with increases in the $O_3$ and CO exceedance counts (at the corrected error level of 0.017), and less strongly and consistently with $SO_2$ and TSP. $O_3$ and CO were estimated to increase the risk of lung cancer by 2.04 and 1.46 times respectively, the maximal probable risks, derived from comparing a more polluted area (95%) with a cleaner area (5%). Conclusions: Although not decisive due to potential misclassification of exposure, these results were drawn from a relatively conservative interpretation and could be used as evidence of chronic health effects, especially for lung cancer. $O_3$ might be a candidate promoter of lung cancer, while CO should be considered a surrogate measure of motor vehicle emissions. The control selection in this study could have been less appropriate for COPD, and further evaluation in another setting might be necessary.
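The abstract describes a multiple logistic regression of case status on exposure indices, adjusting for smoking history, age, and gender. The following minimal Python sketch shows that model form on simulated data; the variable names, coefficients, and data are hypothetical placeholders rather than the study's dataset.

```python
# Minimal sketch of the multiple logistic regression described above: lung cancer
# case status regressed on an exceedance-day count, adjusting for smoking history,
# age, and gender. All data here are simulated placeholders.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 5000
df = pd.DataFrame({
    "o3_exceed_days": rng.poisson(5, n),          # non-attainment day count
    "current_smoker": rng.integers(0, 2, n),
    "past_smoker":    rng.integers(0, 2, n),
    "age":            rng.integers(35, 65, n),
    "male":           rng.integers(0, 2, n),
})
logit_p = (-6.0 + 0.08 * df["o3_exceed_days"] + 1.0 * df["current_smoker"]
           + 0.5 * df["past_smoker"] + 0.04 * (df["age"] - 50) + 0.3 * df["male"])
df["lung_cancer"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

X = sm.add_constant(df[["o3_exceed_days", "current_smoker", "past_smoker", "age", "male"]])
result = sm.Logit(df["lung_cancer"], X).fit(disp=False)
print(np.exp(result.params))   # odds ratios per unit increase in each covariate
```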


Modeling and mapping fuel moisture content using equilibrium moisture content computed from weather data of the automatic mountain meteorology observation system (AMOS) (산악기상자료와 목재평형함수율에 기반한 산림연료습도 추정식 개발)

  • Lee, HoonTaek;WON, Myoung-Soo;YOON, Suk-Hee;JANG, Keun-Chang
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • v.22 no.3
    • /
    • pp.21-36
    • /
    • 2019
  • Dead fuel moisture content is a key variable in fire danger rating, as it affects fire ignition and behavior. This study evaluates simple regression models estimating the moisture content of a standardized 10-h fuel stick (10-h FMC) at three sites with different characteristics (urban, and outside/inside the forest). Equilibrium moisture content (EMC) was used as the independent variable, and in-situ measured 10-h FMC was used as the dependent variable and validation data. 10-h FMC spatial distribution maps were created for the dates with the most frequent fire occurrence during 2013-2018. The 10-h FMC values on those dates were also analyzed to investigate under which 10-h FMC conditions forest fires are likely to occur. As a result, the fitted equations could explain a considerable part of the variance in 10-h FMC (62~78%). Compared with the validation data, the models performed well, with $R^2$ ranging from 0.53 to 0.68, root mean squared error (RMSE) from 2.52% to 3.43%, and bias from -0.41% to 1.10%. When the 10-h FMC model fitted for one site was applied to the other sites, $R^2$ remained the same while RMSE and bias increased up to 5.13% and 3.68%, respectively. The major deficiency of the 10-h FMC model was that it poorly captured the difference between 10-h FMC and EMC in the drying process after rainfall. From the analysis of the 10-h FMC on the dates when fires occurred, more than 70% of the fires occurred under a 10-h FMC condition of less than 10.5%. Overall, the present study suggests a simple model estimating 10-h FMC with acceptable performance. Applying the 10-h FMC model to the automatic mountain meteorology observation system (AMOS) was successfully tested to produce a national-scale 10-h FMC spatial distribution map. These data will provide fundamental information for forest fire research and will support policy makers.
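The model described here is a simple regression of 10-h FMC on EMC, validated with R-squared, RMSE, and bias. The minimal Python sketch below fits and evaluates that model form on simulated data; the coefficients and observations are hypothetical, and the paper's EMC computation from weather data is not reproduced.

```python
# Minimal sketch: 10-h fuel moisture content (FMC, %) estimated from equilibrium
# moisture content (EMC, %) with ordinary least squares, then validated with
# R^2, RMSE, and bias as in the abstract. Data and coefficients are hypothetical.
import numpy as np

rng = np.random.default_rng(4)
emc = rng.uniform(5, 25, 200)                          # hourly EMC from weather data
fmc_obs = 1.2 + 0.85 * emc + rng.normal(0, 2.5, 200)   # in-situ 10-h fuel stick FMC

# Ordinary least squares fit: fmc = a + b * emc
b, a = np.polyfit(emc, fmc_obs, 1)
fmc_pred = a + b * emc

ss_res = np.sum((fmc_obs - fmc_pred) ** 2)
ss_tot = np.sum((fmc_obs - fmc_obs.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
rmse = np.sqrt(np.mean((fmc_pred - fmc_obs) ** 2))
bias = np.mean(fmc_pred - fmc_obs)
print(f"FMC = {a:.2f} + {b:.2f} * EMC | R2 = {r2:.2f}, RMSE = {rmse:.2f}%, bias = {bias:.2f}%")
```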

A Study on Damage factor Analysis of Slope Anchor based on 3D Numerical Model Combining UAS Image and Terrestrial LiDAR (UAS 영상 및 지상 LiDAR 조합한 3D 수치모형 기반 비탈면 앵커의 손상인자 분석에 관한 연구)

  • Lee, Chul-Hee;Lee, Jong-Hyun;Kim, Dal-Joo;Kang, Joon-Oh;Kwon, Young-Hun
    • Journal of the Korean Geotechnical Society
    • /
    • v.38 no.7
    • /
    • pp.5-24
    • /
    • 2022
  • The current performance evaluation of slope anchors qualitatively determines the physical bonding between the anchor head and the ground, as well as cracks or breakage of the anchor head. However, such performance evaluation does not measure these primary factors quantitatively, so time-dependent management of the anchors is almost impossible. This study evaluates a 3D numerical model built by SfM, which combines UAS images with terrestrial LiDAR, to collect numerical data on the damage factors, and utilizes the data for the quantitative maintenance of the anchor system once it is installed on slopes. The UAS 3D model, which often shows relatively low precision in the z-coordinate for vertical objects such as slopes, is combined with terrestrial LiDAR scan data to improve the accuracy of the z-coordinate measurement. After validating the system, a field test was conducted with ten anchors installed on a slope with arbitrarily damaged heads. The damage (such as cracks, breakage, and rotational displacement) was detected and numerically evaluated through the orthogonal projection of the measurement system. The results show that the introduced system, at a resolution of 8K, can detect cracks with an aperture of less than 0.3 mm within an error range of 0.05 mm. The system can also successfully detect the volume of the damaged part, showing that the maximum damaged area of the anchor head was within 3% of the original design guideline. The ground adhesion of the anchor head, for which the z-coordinate is highly relevant, was originally almost impossible to measure with the UAS 3D numerical model alone because of its blind spots. However, by applying the combined system, the elevation differences between the anchor bottom and the irregular ground surface were identified, so that the average value at 20 different locations could be calculated for the ground adhesion. Additionally, rotation angles and displacements of the anchor head of less than 1" were detected. From these observations, the 3D numerical model is shown to be valid for obtaining quantitative data on anchor damage. Such data collection could form a database to be used as a fundamental resource for quantitative anchor damage evaluation in the future.
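The ground-adhesion check above averages elevation differences between the anchor bottom and the irregular ground surface at 20 locations. The minimal Python sketch below illustrates one way such averaging could be implemented on synthetic point samples; the nearest-neighbor gap calculation and all coordinates are assumptions for illustration, not the paper's processing chain.

```python
# Minimal sketch of averaging anchor-to-ground elevation gaps at sampled
# locations. The point sets below are synthetic stand-ins for the merged
# UAS/terrestrial-LiDAR model.
import numpy as np

rng = np.random.default_rng(5)
# Synthetic (x, y, z) samples: 20 points along the anchor-head bottom edge and a
# denser patch of irregular ground surface around it (units: meters).
anchor_bottom = np.column_stack([np.linspace(0, 1, 20), np.zeros(20),
                                 0.30 + rng.normal(0, 0.002, 20)])
ground = np.column_stack([rng.uniform(-0.1, 1.1, 2000), rng.uniform(-0.1, 0.1, 2000),
                          0.28 + rng.normal(0, 0.01, 2000)])

gaps = []
for px, py, pz in anchor_bottom:
    # Nearest ground point in the horizontal (x, y) plane.
    d2 = (ground[:, 0] - px) ** 2 + (ground[:, 1] - py) ** 2
    gz = ground[np.argmin(d2), 2]
    gaps.append(pz - gz)

print(f"Mean anchor-to-ground gap over 20 locations: {np.mean(gaps) * 1000:.1f} mm")
```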

Scalable Collaborative Filtering Technique based on Adaptive Clustering (적응형 군집화 기반 확장 용이한 협업 필터링 기법)

  • Lee, O-Joun;Hong, Min-Sung;Lee, Won-Jin;Lee, Jae-Dong
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.2
    • /
    • pp.73-92
    • /
    • 2014
  • An Adaptive Clustering-based Collaborative Filtering Technique is proposed to solve the fundamental problems of collaborative filtering, such as the cold-start problem, the scalability problem, and the data sparsity problem. Previous collaborative filtering techniques make recommendations based on a user's predicted preference for a particular item, using a similar-item subset and a similar-user subset composed from users' preferences for items. For this reason, if the density of the user preference matrix is low, the reliability of the recommendation system decreases rapidly, and creating a similar-item subset and a similar-user subset becomes more difficult. In addition, as the scale of the service increases, the time needed to create a similar-item subset and a similar-user subset increases geometrically, and the response time of the recommendation system increases accordingly. To solve these problems, this paper suggests a collaborative filtering technique that actively adapts conditions to the model and adopts concepts from context-based filtering. The technique consists of four major methodologies. First, the items and users are clustered according to their feature vectors, and an inter-cluster preference between each item cluster and user cluster is then derived. With this method, the run time for creating a similar-item subset or similar-user subset can be reduced, the reliability of the recommendation system can be made higher than when only the user preference information is used to create those subsets, and the cold-start problem can be partially solved. Second, recommendations are made using the previously composed item and user clusters and the inter-cluster preference between each item cluster and user cluster. In this phase, a list of items is made for each user by examining the item clusters in order of the size of the inter-cluster preference of the user cluster to which the user belongs, and by selecting and ranking the items according to the predicted or recorded user preference information. With this method, the model-creation phase bears the highest load of the recommendation system, which minimizes the load at run time; therefore, the scalability problem is addressed and a large-scale recommendation system can be operated with highly reliable collaborative filtering. Third, the missing user preference information is predicted using the item and user clusters. This mitigates the problem caused by the low density of the user preference matrix. Existing studies used either item-based prediction or user-based prediction; in this paper, Hao Ji's idea, which uses both item-based and user-based prediction, was improved. The reliability of the recommendation service can be improved by combining the predictive values of both techniques according to the condition of the recommendation model. By predicting the user preference based on the item or user clusters, the time required to predict the user preference can be reduced, and missing user preferences can be predicted at run time. Fourth, the item and user feature vectors are made to learn from subsequent user feedback. This phase applies normalized user feedback to the item and user feature vectors. This method mitigates the problems caused by adopting concepts from context-based filtering, namely that the item and user feature vectors are based on the user profile and item properties, and that quantifying the qualitative features of items and users is inherently limited. Therefore, the elements of the user and item feature vectors are made to match one to one, and when user feedback on a particular item is obtained, it is applied to the opposite feature vector. Verification of this method was accomplished by comparing its performance with existing hybrid filtering techniques. Two measures were used for verification: MAE (Mean Absolute Error) and response time. Using MAE, this technique was confirmed to improve the reliability of the recommendation system; using the response time, this technique was found to be suitable for a large-scale recommendation system. This paper suggests an Adaptive Clustering-based Collaborative Filtering Technique with high reliability and low time complexity, but it has some limitations. The technique focuses on reducing the time complexity, so a large improvement in reliability was not expected. Future work will improve this technique with rule-based filtering.
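As a rough illustration of the clustering-based prediction idea in this abstract, the minimal Python sketch below clusters users and items, computes an inter-cluster preference matrix from observed ratings, and fills a missing user-item preference from it. The rating matrix, cluster counts, and the use of raw ratings as clustering features are hypothetical simplifications; the paper's feature vectors and feedback learning are not reproduced.

```python
# Minimal sketch: cluster users and items, derive inter-cluster preferences, and
# predict a missing preference from the (user cluster, item cluster) mean.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(6)
n_users, n_items = 200, 100
ratings = rng.integers(1, 6, size=(n_users, n_items)).astype(float)
ratings[rng.random((n_users, n_items)) < 0.8] = np.nan   # sparse preference matrix

filled = np.nan_to_num(ratings, nan=0.0)                 # simple clustering features
user_cluster = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(filled)
item_cluster = KMeans(n_clusters=6, n_init=10, random_state=0).fit_predict(filled.T)

# Inter-cluster preference: mean observed rating for each user/item cluster pair.
inter = np.zeros((8, 6))
for uc in range(8):
    for ic in range(6):
        block = ratings[np.ix_(user_cluster == uc, item_cluster == ic)]
        inter[uc, ic] = np.nanmean(block) if np.any(~np.isnan(block)) else np.nanmean(ratings)

def predict(user, item):
    """Predict a missing preference from the inter-cluster preference matrix."""
    return inter[user_cluster[user], item_cluster[item]]

print("Predicted preference of user 3 for item 7:", round(predict(3, 7), 2))
```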

Research Trend Analysis Using Bibliographic Information and Citations of Cloud Computing Articles: Application of Social Network Analysis (클라우드 컴퓨팅 관련 논문의 서지정보 및 인용정보를 활용한 연구 동향 분석: 사회 네트워크 분석의 활용)

  • Kim, Dongsung;Kim, Jongwoo
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.1
    • /
    • pp.195-211
    • /
    • 2014
  • Cloud computing services provide IT resources as services on demand. This is considered a key concept that will lead a shift from an ownership-based paradigm to a new pay-for-use paradigm, which can reduce the fixed cost of IT resources and improve flexibility and scalability. As IT services, cloud services have evolved from earlier, similar computing concepts such as network computing, utility computing, server-based computing, and grid computing, so research into cloud computing is highly related to and combined with various relevant computing research areas. To find promising research issues and topics in cloud computing, it is necessary to understand the research trends in cloud computing more comprehensively. In this study, we collect bibliographic information and citation information for cloud computing related research papers published in major international journals from 1994 to 2012, and analyze macroscopic trends and changes in the citation network among papers and in the co-occurrence network of keywords by utilizing social network analysis measures. Through the analysis, we can identify the relationships and connections among research topics in cloud computing related areas and highlight new potential research topics. In addition, we visualize dynamic changes of research topics relating to cloud computing using a proposed cloud computing "research trend map." A research trend map visualizes the positions of research topics in a two-dimensional space. The frequency of keywords (X-axis) and the rate of increase in the degree centrality of keywords (Y-axis) are used as the two dimensions of the research trend map. Based on the values of the two dimensions, the two-dimensional space of a research map is divided into four areas: maturation, growth, promising, and decline. An area with high keyword frequency but a low rate of increase in degree centrality is defined as a mature technology area; the area where both keyword frequency and the rate of increase in degree centrality are high is defined as a growth technology area; the area where keyword frequency is low but the rate of increase in degree centrality is high is defined as a promising technology area; and the area where both keyword frequency and the rate of increase in degree centrality are low is defined as a declining technology area. Based on this method, cloud computing research trend maps make it possible to easily grasp the main research trends in cloud computing and to explain the evolution of research topics. According to the analysis of citation relationships, research papers on security, distributed processing, and optical networking for cloud computing rank at the top based on the PageRank measure. From the analysis of keywords in research papers, cloud computing and grid computing showed high centrality in 2009, and keywords dealing with main elemental technologies such as data outsourcing, error detection methods, and infrastructure construction showed high centrality in 2010~2011. In 2012, security, virtualization, and resource management showed high centrality. Moreover, it was found that interest in the technical issues of cloud computing has increased gradually. From the annual cloud computing research trend maps, it was verified that security is located in the promising area, virtualization has moved from the promising area to the growth area, and grid computing and distributed systems have moved to the declining area. The study results indicate that distributed systems and grid computing received a lot of attention as similar computing paradigms in the early stage of cloud computing research. The early stage of cloud computing research was a period focused on understanding and investigating cloud computing as an emergent technology and linking it to relevant established computing concepts. After the early stage, security and virtualization technologies became the main issues in cloud computing, which is reflected in the movement of security and virtualization technologies from the promising area to the growth area in the cloud computing research trend maps. Moreover, this study reveals that current research in cloud computing has rapidly shifted from a focus on technical issues to a focus on application issues, such as SLAs (Service Level Agreements).
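To make the research trend map construction concrete, the minimal Python sketch below computes keyword degree centrality in two co-occurrence graphs, takes the change as the "rate of increase," and assigns each keyword to one of the four quadrants by keyword frequency. The co-occurrence edges, frequencies, and quadrant thresholds are hypothetical toy values, not the paper's dataset.

```python
# Minimal sketch of the research trend map quadrants: keyword frequency (x-axis)
# vs. increase in degree centrality (y-axis). Toy data throughout.
import networkx as nx

# Keyword co-occurrence graphs for two consecutive periods (hypothetical edges).
g_prev = nx.Graph([("cloud", "grid"), ("cloud", "security"), ("grid", "distributed")])
g_curr = nx.Graph([("cloud", "security"), ("cloud", "virtualization"),
                   ("security", "virtualization"), ("cloud", "SLA")])
freq = {"cloud": 120, "security": 60, "virtualization": 45, "grid": 20,
        "distributed": 15, "SLA": 10}      # hypothetical keyword frequencies

c_prev = nx.degree_centrality(g_prev)
c_curr = nx.degree_centrality(g_curr)

FREQ_CUT, GROWTH_CUT = 40, 0.0             # illustrative quadrant thresholds
for kw, f in freq.items():
    growth = c_curr.get(kw, 0.0) - c_prev.get(kw, 0.0)   # increase in centrality
    if f >= FREQ_CUT:
        area = "growth" if growth > GROWTH_CUT else "maturation"
    else:
        area = "promising" if growth > GROWTH_CUT else "decline"
    print(f"{kw:14s} freq={f:4d} d_centrality={growth:+.2f} -> {area}")
```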

Quality Assurance of Patients for Intensity Modulated Radiation Therapy (세기조절방사선치료(IMRT) 환자의 QA)

  • Yoon Sang Min;Yi Byong Yong;Choi Eun Kyung;Kim Jong Hoon;Ahn Seung Do;Lee Sang-Wook
    • Radiation Oncology Journal
    • /
    • v.20 no.1
    • /
    • pp.81-90
    • /
    • 2002
  • Purpose: To establish and verify a proper and practical IMRT (intensity-modulated radiation therapy) patient QA (quality assurance) program. Materials and Methods: An IMRT QA program consisting of 3 steps and 16 items was designed, and its validity was examined by applying it to 9 patients and 12 IMRT cases at various sites. The three-step QA program consists of an RTP-related QA, a treatment information flow QA, and a treatment delivery QA procedure. The evaluation of organ constraints, the validity of the point dose, and the dose distribution are the major issues in the RTP-related QA procedure. The leaf sequence file generation, the evaluation of the MLC control file, the comparison with the dry run film, and the IMRT field simulation image are included in the treatment information flow QA procedure. The patient setup QA, the verification of the IMRT treatment fields on the patients, and the examination of the data in the Record & Verify system make up the treatment delivery QA procedure. Results: The point dose measurement results of 10 cases showed good agreement with the RTP calculation, within $3\%$. One case showed more than a $3\%$ difference and the other case showed more than $5\%$, which was outside the tolerance level. We could not find any differences of more than 2 mm between the RTP leaf sequence and the dry run film. Film dosimetry and the dose distribution from the phantom plan showed the same tendency, but quantitative analysis was not possible because of the nature of film dosimetry. No error was found in the MLC control file, and one mis-registration case was found before treatment. Conclusion: This study shows the usefulness and the necessity of an IMRT patient QA program. The whole procedure of this program should be performed, especially by institutions that have just started to accumulate experience. However, the program is complex and time consuming; therefore, we propose practical and essential QA items for institutions in which IMRT is performed as a routine procedure.
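The point-dose part of the QA compares measured doses against RTP-calculated doses and judges them against 3% and 5% levels. The minimal Python sketch below shows that kind of tolerance check; the dose values and the exact action taken at each level are hypothetical illustrations, not the institution's protocol.

```python
# Minimal sketch of a point-dose tolerance check: measured vs. RTP-calculated
# doses flagged against 3% and 5% levels. Dose values are hypothetical.
calc_cGy = [180.0, 200.0, 210.0, 195.0]        # RTP-calculated point doses
meas_cGy = [182.1, 207.5, 198.3, 196.0]        # measured point doses

for i, (calc, meas) in enumerate(zip(calc_cGy, meas_cGy), start=1):
    diff = 100.0 * (meas - calc) / calc
    if abs(diff) <= 3.0:
        status = "pass (<= 3%)"
    elif abs(diff) <= 5.0:
        status = "review (3-5%)"
    else:
        status = "fail (> 5%)"
    print(f"case {i}: calc {calc:.1f} cGy, meas {meas:.1f} cGy, diff {diff:+.1f}% -> {status}")
```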

DISEASE DIAGNOSED AND DESCRIBED BY NIRS

  • Tsenkova, Roumiana N.
    • Proceedings of the Korean Society of Near Infrared Spectroscopy Conference
    • /
    • 2001.06a
    • /
    • pp.1031-1031
    • /
    • 2001
  • The mammary gland is made up of remarkably sensitive tissue, which has the capability of producing a large volume of secretion, milk, under normal or healthy conditions. When bacteria enter the gland and establish an infection (mastitis), inflammation is initiated, accompanied by an influx of white cells from the blood stream, altered secretory function, and changes in the volume and composition of the secretion. Cell numbers in milk are closely associated with inflammation and udder health. These somatic cell counts (SCC) are accepted as the international standard measurement of milk quality in dairy and for mastitis diagnosis. NIR spectra of unhomogenized composite milk samples from 14 cows (healthy and mastitic), taken 7 days after parturition and during the next 30 days of lactation, were measured. Different multivariate analysis techniques were used to diagnose the disease at a very early stage and to determine how the spectral properties of milk vary with its composition and animal health. A PLS model for prediction of somatic cell count (SCC) based on NIR milk spectra was built. The best accuracy of determination for the 1100-2500 nm range was found using smoothed absorbance data and 10 PLS factors. The standard error of prediction for an independent validation set of samples was 0.382, the correlation coefficient 0.854, and the variation coefficient 7.63%. It was found that SCC determination by NIR milk spectra is indirect and based on the related changes in milk composition. From the spectral changes, we learned that when mastitis occurred, the most significant factors that simultaneously influenced the milk spectra were the alteration of milk proteins and changes in the ionic concentration of milk. This was consistent with the results we subsequently obtained when 2DCOS was applied. Two-dimensional correlation analysis of NIR milk spectra was done to assess the changes in milk composition that occur when somatic cell count (SCC) levels vary. The synchronous correlation map revealed that when SCC increases, protein levels increase while water and lactose levels decrease. Results from the analysis of the asynchronous plot indicated that changes in water and fat absorptions occur before those of other milk components. In addition, the technique was used to assess the changes in milk during a period when SCC levels did not vary appreciably. Results indicated that the milk components are in equilibrium and no appreciable change in a given component was seen with respect to another. This was found in both healthy and mastitic animals. However, milk components were found to vary with SCC content regardless of the range considered. This important finding demonstrates that 2-D correlation analysis may be used to track even subtle changes in milk composition in individual cows. To find the right SCC threshold for mastitis diagnosis at the cow level, classification of milk samples was performed using soft independent modeling of class analogy (SIMCA) and different spectral data pretreatments. Two SCC levels, 200 000 cells/$m\ell$ and 300 000 cells/$m\ell$, were set up and compared as thresholds to discriminate between healthy and mastitic cows. The best detection accuracy was found with 200 000 cells/$m\ell$ as the threshold for mastitis and smoothed absorbance data: 98% of the milk samples in the calibration set and 87% of the samples in the independent test set were correctly classified. When the spectral information was studied, it was found that the successful mastitis diagnosis was based on revealing the spectral changes related to the corresponding changes in milk composition. NIRS, combined with different methods of spectral data processing, can provide a faster and nondestructive alternative to current methods for mastitis diagnosis and a new insight into disease understanding at the molecular level.
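The abstract describes a PLS calibration with 10 factors relating NIR milk spectra to SCC. The minimal Python sketch below shows that workflow with scikit-learn on simulated spectra; the wavelength grid, the log-SCC target, and the train/test split are assumptions, and the paper's smoothing and SIMCA classification steps are not reproduced.

```python
# Minimal sketch of a 10-factor PLS calibration for SCC from NIR milk spectra,
# evaluated with a standard error of prediction (SEP) and correlation coefficient.
# Spectra and SCC values are simulated placeholders.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(7)
n_samples, n_wavelengths = 120, 700          # e.g. 1100-2500 nm at 2 nm steps
spectra = rng.normal(0.5, 0.05, (n_samples, n_wavelengths))
log_scc = 5.0 + spectra[:, 200] * 3 + rng.normal(0, 0.2, n_samples)  # synthetic target

train, test = slice(0, 90), slice(90, None)
pls = PLSRegression(n_components=10)
pls.fit(spectra[train], log_scc[train])
pred = pls.predict(spectra[test]).ravel()

sep = np.sqrt(np.mean((pred - log_scc[test]) ** 2))   # standard error of prediction
r = np.corrcoef(pred, log_scc[test])[0, 1]
print(f"SEP = {sep:.3f} log(SCC), r = {r:.3f}")
```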
