• Title/Summary/Keyword: measuring

Search Results: 22,846

Usefulness of Troponin-I, Lactate, and C-reactive Protein as Prognostic Markers in Critically Ill Non-cardiac Patients (비 순환기계 중환자의 예후 인자로서의 Troponin-I, Lactate, C-reactive protein의 유용성)

  • Cho, Yu Ji;Ham, Hyeon Seok;Kim, Hwi Jong;Kim, Ho Cheol;Lee, Jong Deok;Hwang, Young Sil
    • Tuberculosis and Respiratory Diseases
    • /
    • v.58 no.6
    • /
    • pp.562-569
    • /
    • 2005
  • Background: Severity scoring systems are useful for predicting the outcome of critically ill patients, but they are complicated and cost-ineffective. Simple serologic markers, including troponin-I, lactate, and C-reactive protein (CRP), have been proposed to predict outcome. The aim of this study was to evaluate the prognostic value of troponin-I, lactate, and CRP in critically ill non-cardiac patients. Methods: From September 2003 to June 2004, 139 patients (age 63.3±14.7 years, M:F = 88:51) admitted to the MICU of Gyeongsang National University Hospital with non-cardiac critical illness were enrolled. Severity-of-illness and multi-organ failure scores (Acute Physiology and Chronic Health Evaluation II, Simplified Acute Physiology Score II, and Sequential Organ Failure Assessment) were evaluated, and troponin-I, lactate, and CRP were measured within 24 hours of MICU admission. Each value was compared between survivors and non-survivors at the 10th and 30th day after ICU admission, mortality was compared between the normal and abnormal groups for each marker at the same time points, and the correlations between each value and the severity scores were assessed. Results: Troponin-I and CRP, but not lactate, were significantly higher in non-survivors than in survivors at the 10th day (survivors: 1.018±2.58 ng/ml and 98.48±69.24 mg/L vs. non-survivors: 4.208±10.23 ng/ml and 137.69±70.18 mg/L; p<0.05). Troponin-I, lactate, and CRP were all significantly higher in non-survivors at the 30th day (survivors: 0.99±2.66 ng/ml, 8.02±9.54 ng/dl, and 96.87±68.83 mg/L vs. non-survivors: 3.36±8.74 ng/ml, 15.42±20.57 ng/dl, and 131.28±71.23 mg/L; p<0.05). Mortality was significantly higher in the abnormal troponin-I, lactate, and CRP groups than in the corresponding normal groups at the 10th day (28.1%, 31.6%, and 18.9% vs. 11.0%, 15.8%, and 0%) and the 30th day (38.6%, 47.4%, and 25.8% vs. 15.9%, 21.7%, and 14.3%) (p<0.05). Troponin-I and lactate were significantly correlated with the SAPS II score (r²=0.254 and 0.365, p<0.05). Conclusion: Measuring troponin-I, lactate, and CRP levels upon admission may be useful for predicting the outcome of critically ill non-cardiac patients.
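The statistics in this abstract reduce to two familiar operations: a two-group comparison of marker levels and a correlation of a marker with a severity score. A minimal Python (SciPy) sketch using invented toy numbers, since the study's raw data are not available from the abstract; the abstract does not state which test was used, so Welch's t-test here is an assumption.

```python
import numpy as np
from scipy import stats

# Hypothetical marker values (ng/ml), for illustration only
survivors     = np.array([0.4, 0.9, 1.2, 2.0, 0.7])
non_survivors = np.array([2.5, 4.1, 6.0, 3.3, 5.2])

# Survivor vs. non-survivor comparison of a marker (e.g., troponin-I at day 10)
t_stat, p_value = stats.ttest_ind(survivors, non_survivors, equal_var=False)

# Correlation of a marker with the SAPS II severity score
# (the study reports r^2 = 0.254 for troponin-I and 0.365 for lactate)
saps2  = np.array([30, 42, 55, 38, 61])
marker = np.array([0.5, 1.8, 4.2, 1.1, 5.0])
r, p_corr = stats.pearsonr(saps2, marker)
print(f"t={t_stat:.2f} p={p_value:.3f}  r^2={r**2:.3f} p={p_corr:.3f}")
```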

System Development for Measuring Group Engagement in the Art Center (공연장에서 다중 몰입도 측정을 위한 시스템 개발)

  • Ryu, Joon Mo;Choi, Il Young;Choi, Lee Kwon;Kim, Jae Kyeong
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.3
    • /
    • pp.45-58
    • /
    • 2014
  • Korean cultural contents are spreading worldwide as the Korean Wave sweeps across the globe, and each country keeps investing in its culture industry to improve its national brand and create high added value. Performing contents are an important source of arousal in the entertainment industry, and raising audiences' arousal, their confidence in a product, and their positive attitudes is a key goal for advertisers; cultural contents are in the same situation. If audiences trust a piece of content, they pass information about it to the people around them, spreading word of mouth. Accordingly, many researchers have tried to measure individual arousal through statistical surveys, physiological responses, body movement, and facial expression. First, statistical surveys cannot measure each person's arousal in real time, and reliable responses are hard to obtain after the audience has already watched the contents. Second, measuring physiological responses requires sensors to be installed at each person's chair or space, and it is difficult to handle the volume of information the sensors provide in real time. Third, body movement is easy to capture with a camera, but it is difficult to set up the experimental conditions and to measure and interpret the meaning of the body language. Lastly, studies of facial expression measure expressions, eye tracking, and face pose. Most previous studies of arousal and interest are limited to the reaction of a single person and are hard to apply to multiple audience members: they require particular conditions, such as controlled room lighting, and are confined to one person in a special laboratory environment. Arousal also needs to be measured during the contents themselves, which is hard to define, and audience reactions are not easy to collect immediately. Since many audience members watch a performance in a theater at once, we propose a system that measures multi-audience reactions in real time during a performance. We use difference-image analysis for the multi-audience, but it is weak in a dark field, so an IR camera is used to capture images from dark areas during recording. In addition, we present a Multi-Audience Engagement Index (MAEI) and an algorithm that calculates it from the sound value, audience movement, and eye-tracking values, supplemented by a mobile survey; to improve and verify the accuracy of the MAEI, we compare it against the mobile survey, and the result is then sent to a reporting system and proposed to interested persons. Mobile surveys are easy and fast, minimize visitors' discomfort, and can provide additional information. The mobile application communicates with the database, storing real-time information on visitors' attitudes toward the content, and the database can provide a different survey each time based on the information already collected. Example survey items include: Impressive scene, Satisfied, Touched, Interested, and Didn't pay attention. The suggested system consists of three parts: an External Device, a Server, and an Internal Device. The External Device records the multi-audience in the dark field with the IR camera and captures the sound signal; it also runs the mobile survey application and sends the data to the ERD server DB. The Server part contains the contents' data, such as each scene's weight values and the group audience weight index, along with the camera control program and the algorithm, and it calculates the Multi-Audience Engagement Index. The Internal Device presents the Multi-Audience Engagement Index through a Web UI, in print, and on a field monitor. Our system was test-operated by Mogencelab in the DMC display exhibition hall located in Sangam-dong, Mapo-gu, Seoul, and we are still collecting visitor data daily. If this system can identify the factors behind audience arousal, it will be very useful for creating contents.
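To make the difference-image idea concrete, here is a minimal sketch of frame-difference motion scoring and a linear MAEI-style combination. The weights, the threshold, and the linear form are illustrative assumptions; the paper's actual per-scene weights and algorithm are not given in the abstract.

```python
import numpy as np

def motion_score(prev_frame, frame, threshold=25):
    """Difference-image analysis for one audience region: the fraction of
    pixels whose grayscale value changed by more than `threshold` between
    two consecutive IR-camera frames (uint8 arrays)."""
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    return float(np.mean(diff > threshold))

def engagement_index(sound, motion, gaze, weights=(0.4, 0.3, 0.3)):
    """Hypothetical MAEI-style combination of normalized (0-1) signals.
    The real MAEI also folds in per-scene weights and mobile-survey data."""
    return weights[0] * sound + weights[1] * motion + weights[2] * gaze
```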

Efficient Topic Modeling by Mapping Global and Local Topics (전역 토픽의 지역 매핑을 통한 효율적 토픽 모델링 방안)

  • Choi, Hochang;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.3
    • /
    • pp.69-94
    • /
    • 2017
  • Recently, increasing demand for big data analysis has been driving the vigorous development of related technologies and tools. In addition, the development of IT and the increased penetration rate of smart devices are producing a large amount of data. As a result, data analysis technology is rapidly becoming popular, and attempts to acquire insights through data analysis are continuously increasing; big data analysis will become more important in various industries for the foreseeable future. Big data analysis has generally been performed by a small number of experts and delivered to those who commission the analysis. However, growing interest in big data analysis has stimulated computer programming education and the development of many data analysis programs. Accordingly, the entry barriers to big data analysis are gradually lowering and data analysis technology is spreading, so big data analysis is increasingly expected to be performed by the demanders of the analysis themselves. Along with this, interest in various kinds of unstructured data, especially text data, is continually increasing. The emergence of new web-based platforms and techniques is bringing about the mass production of text data and active attempts to analyze it, and the results of text analysis are being utilized in various fields. Text mining is a concept that embraces various theories and techniques for text analysis. Among the many text mining techniques used for various research purposes, topic modeling is one of the most widely used and studied. Topic modeling is a technique that extracts the major issues from a large set of documents, identifies the documents that correspond to each issue, and provides the identified documents as a cluster; it is evaluated as very useful in that it reflects the semantic elements of documents. Traditional topic modeling is based on the distribution of key terms across the entire document collection, so the entire collection must be analyzed at once to identify the topic of each document. This makes the analysis process take a long time when topic modeling is applied to many documents, and it causes a scalability problem: the processing time increases exponentially with the number of analysis objects. The problem is particularly noticeable when the documents are distributed across multiple systems or regions. To overcome these problems, a divide-and-conquer approach can be applied to topic modeling: dividing a large number of documents into sub-units and deriving topics by repeating topic modeling on each unit. This method makes it possible to run topic modeling on a large number of documents with limited system resources and can improve processing speed. It can also significantly reduce analysis time and cost, since documents can be analyzed in each location or place without first combining them. Despite these advantages, however, the method has two major problems. First, the relationship between the local topics derived from each unit and the global topics derived from the entire collection is unclear: local topics can be identified in each sub-unit, but global topics cannot. Second, a method for measuring the accuracy of the proposed methodology needs to be established; that is, assuming the global topics are the ideal answer, the deviation of the local topics from the global topics needs to be measured. Because of these difficulties, this approach has not been studied as much as other topic modeling methods. In this paper, we propose a topic modeling approach that solves the above two problems. First, we divide the entire document cluster (global set) into sub-clusters (local sets) and generate a reduced global set (RGS) consisting of delegated documents extracted from each local set. We address the first problem by mapping RGS topics to local topics. We then verify the accuracy of the proposed methodology by detecting whether documents are assigned the same topic in the global and local results. Using 24,000 news articles, we conduct experiments to evaluate the practical applicability of the proposed methodology. Through an additional experiment, we confirmed that the proposed methodology can provide results similar to topic modeling on the entire collection, and we also propose a reasonable method for comparing the results of both approaches.
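A minimal sketch of the divide-and-conquer idea in Python (scikit-learn): fit one LDA model on a reduced "global" sample and one per local set, then map each local topic to its most similar global topic by cosine similarity of the topic-word distributions. The delegated-document selection below is a crude stand-in; the paper's RGS construction and mapping criterion are more elaborate than this.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.metrics.pairwise import cosine_similarity

def map_local_to_global(local_sets, n_topics=10):
    """local_sets: list of lists of raw document strings (the sub-clusters)."""
    all_docs = [doc for s in local_sets for doc in s]
    vec = CountVectorizer(max_features=5000).fit(all_docs)

    # Reduced global set: here simply every k-th document stands in for the
    # paper's "delegated documents" extracted from each local set.
    reduced = all_docs[::max(len(local_sets), 1)]
    g_lda = LatentDirichletAllocation(n_components=n_topics, random_state=0)
    g_lda.fit(vec.transform(reduced))

    mappings = []
    for s in local_sets:
        l_lda = LatentDirichletAllocation(n_components=n_topics, random_state=0)
        l_lda.fit(vec.transform(s))
        # topic-word distributions: rows of components_
        sim = cosine_similarity(l_lda.components_, g_lda.components_)
        mappings.append(sim.argmax(axis=1))  # local topic i -> closest global topic
    return mappings
```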

Construction of a Consumer Confidence Index Based on Sentiment Analysis Using News Articles (뉴스기사를 이용한 소비자의 경기심리지수 생성)

  • Song, Minchae;Shin, Kyung-shik
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.3
    • /
    • pp.1-27
    • /
    • 2017
  • It is known that economic sentiment indexes and macroeconomic indicators are closely related, because economic agents' judgments and forecasts of business conditions affect economic fluctuations. For this reason, consumer sentiment or confidence provides steady fodder for business and is treated as an important piece of economic information. In Korea, private consumption and the consumer sentiment index are closely linked, making the index a very important economic indicator for evaluating and forecasting the domestic economic situation. However, despite offering relevant insights into private consumption and GDP, the traditional survey-based approach to measuring consumer confidence has several limitations. One weakness is that it takes considerable time to research, collect, and aggregate the data: if an urgent issue arises, timely information is not announced until the end of the month. In addition, the survey only contains information derived from its questionnaire items, so it can fail to capture the direct effects of newly arising issues, and it faces potential declines in response rates and erroneous responses. It is therefore necessary to find a way to complement it. For this purpose, we construct and assess an index designed to measure consumer economic sentiment using sentiment analysis. Unlike survey-based measures, our index relies on textual analysis to extract sentiment from economic and financial news articles. Text data such as news articles and SNS posts are timely and cover a wide range of issues; because such sources can quickly capture the economic impact of specific economic issues, they have great potential as economic indicators. Of the two main approaches to the automatic extraction of sentiment from text, we apply the lexicon-based approach, using sentiment dictionaries of words annotated with their semantic orientations. In creating the sentiment dictionaries, we enter the semantic orientation of individual words manually and do not attempt a full linguistic analysis (one involving word senses or argument structure); this is a limitation of our research, and further work in that direction remains possible. In this study, we generate a time series index of economic sentiment in the news. The construction of the index consists of three broad steps: (1) collecting a large corpus of economic news articles on the web, (2) applying lexicon-based sentiment analysis to score each article in terms of sentiment orientation (positive, negative, or neutral), and (3) constructing an economic sentiment index of consumers by aggregating the monthly time series for each sentiment word. In line with existing scholarly assessments of the relationship between the consumer confidence index and macroeconomic indicators, any new index should be assessed for its usefulness, which we examine by comparing it with other economic indicators and the CSI. To check the usefulness of the new sentiment-based index, trend and cross-correlation analyses are carried out to analyze the relations and lag structure, and we analyze forecasting power using one-step-ahead out-of-sample prediction. In almost all experiments, the news sentiment index correlates strongly with related contemporaneous key indicators. Furthermore, in most cases, news sentiment shocks predict future economic activity, and in head-to-head comparisons the news sentiment measures outperform the survey-based sentiment index (CSI). Policy makers want to understand consumer and public opinion about existing or proposed policies, and such opinions enable government decision-makers to respond quickly by monitoring various web media, SNS, and news articles. Textual data such as news articles and social networks (Twitter, Facebook, and blogs) are generated at high speed and cover a wide range of issues, giving them great potential as economic indicators. Although research using unstructured data in economic analysis is in its early stages, the utilization of such data is expected to increase greatly once its usefulness is confirmed.
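The three construction steps map onto a few lines of Python (pandas). The lexicon below is a toy English stand-in for the study's manually built Korean dictionary, and the net-tone scoring formula is an assumption; only the collect / score / aggregate-monthly pipeline itself comes from the abstract.

```python
import pandas as pd

POS = {"growth", "recovery", "expansion", "gain"}    # toy lexicon; the study
NEG = {"recession", "decline", "slump", "loss"}      # built its own dictionary

def article_score(text):
    """Net tone of one article: (#positive - #negative) / #sentiment words."""
    words = text.lower().split()
    pos = sum(w in POS for w in words)
    neg = sum(w in NEG for w in words)
    return (pos - neg) / max(pos + neg, 1)

def monthly_sentiment_index(articles):
    """articles: DataFrame with 'date' (datetime) and 'text' columns."""
    scored = articles.assign(score=articles["text"].map(article_score))
    return scored.set_index("date")["score"].resample("M").mean()

# Usefulness check as in the study: contemporaneous correlation with the
# survey-based CSI, e.g. monthly_sentiment_index(articles).corr(csi_series)
```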

A Methodology to Develop a Curriculum based on National Competency Standards - Focused on Methodology for Gap Analysis - (국가직무능력표준(NCS)에 근거한 조경분야 교육과정 개발 방법론 - 갭분석을 중심으로 -)

  • Byeon, Jae-Sang;Ahn, Seong-Ro;Shin, Sang-Hyun
    • Journal of the Korean Institute of Landscape Architecture
    • /
    • v.43 no.1
    • /
    • pp.40-53
    • /
    • 2015
  • To train manpower that meets the requirements of the industrial field, the introduction of the National Qualification Framework (hereinafter NQF) based on National Competency Standards (hereinafter NCS) was decided in 2001, led by the Office for Government Policy Coordination. For landscape architecture within the construction field, a pilot "NCS - Landscape Architecture" was developed in 2008 and test-operated for three years starting in 2009. In particular, as the 'realization of a competence-based society, rather than one based on educational background' was adopted as one of the major projects of the Park Geun-hye government (inaugurated in 2013), the NCS system was constructed on a nationwide scale as a concrete means of putting this into practice. However, because the NCS developed by the state specifies ideal job-performing abilities, it has weaknesses: it cannot reflect actual operational differences such as student levels between universities, problems in securing equipment and professors, and constraints on the number of current curricula. For a soft landing into a practical curriculum, the gap between the current curriculum and the NCS must first be clearly analyzed. Gap analysis is the initial-stage methodology for reorganizing an existing curriculum into an NCS-based curriculum: based on the ability-unit elements and performance standards of each NCS ability unit, the degree of agreement with the existing curriculum within the department is rated and analyzed on a 1-to-5 Likert scale, as sketched in the example below. Universities wishing to operate NCS in the future can thereby measure the gap and the level of agreement between their current curriculum and the NCS, securing a basic tool to verify the applicability of NCS and the effectiveness of further development and operation. The advantages of reorganizing a curriculum through gap analysis are, first, that it provides a quantitative index of the NCS adoption rate for each department, which can be connected to government financial support projects, and, second, that it provides an objective standard of sufficiency or insufficiency when reorganizing into an NCS-based curriculum. In other words, when introducing the relevant NCS subdivision, the insufficient ability units and ability-unit elements can be extracted, and the supplementary matters for each ability-unit element in each existing subject can be extracted at the same time, providing direction for detailed class programs and the opening of basic subjects. The Ministry of Education and the Ministry of Employment and Labor must bring in people from industry to actively develop and supply NCS standards at a practical level, so that the requirements of the industrial field are systematically reflected in education, training, and qualification, and universities wishing to apply NCS must reorganize their curricula to connect work and qualification based on NCS. To enable this, each university must consider the prospects of the relevant industry and the relationship between the faculty resources within the university and local industry in order to clearly select the NCS subdivision to be applied. Afterwards, gap analysis should be used in the NCS-based curriculum reorganization to establish the direction of the reorganization more objectively and rationally, so that the university can participate efficiently in the process-evaluation-type qualification system.
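A minimal sketch of the gap-analysis bookkeeping described above, in Python (pandas). The ability units, ratings, and the cutoff for flagging a unit as insufficient are all invented for illustration; only the 1-5 Likert rating of coincidence per ability-unit element comes from the text.

```python
import pandas as pd

# One row per NCS ability-unit element: experts rate how well the current
# curriculum coincides with it on the 1-5 Likert scale described above.
ratings = pd.DataFrame({
    "ability_unit": ["planting design", "planting design",
                     "landscape construction", "landscape construction"],
    "element":      ["planting plan", "grading plan",
                     "cost estimation", "site supervision"],
    "coincidence":  [4, 2, 1, 3],        # 5 = fully covered by existing subjects
})

ratings["gap"] = 5 - ratings["coincidence"]      # distance from full coverage
gap_by_unit = ratings.groupby("ability_unit")["gap"].mean()

# Flag ability units whose average gap exceeds an (assumed) cutoff: these are
# the units needing new subjects or supplementary class programs.
print(gap_by_unit[gap_by_unit > 1.5])
```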

Ensemble Learning with Support Vector Machines for Bond Rating (회사채 신용등급 예측을 위한 SVM 앙상블학습)

  • Kim, Myoung-Jong
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.2
    • /
    • pp.29-45
    • /
    • 2012
  • Bond rating is regarded as an important event for measuring the financial risk of companies and for determining the investment returns of investors. As a result, predicting companies' credit ratings by applying statistical and machine learning techniques has been a popular research topic. The statistical techniques traditionally used in bond rating include multiple regression, multiple discriminant analysis (MDA), logistic models (LOGIT), and probit analysis. However, one major drawback is that they rest on strict assumptions: linearity, normality, independence among predictor variables, and pre-existing functional forms relating the criterion variables and the predictor variables. These strict assumptions have limited their application to the real world. Machine learning techniques used in bond rating prediction models include decision trees (DT), neural networks (NN), and the Support Vector Machine (SVM). SVM in particular is recognized as a new and promising method for classification and regression analysis. SVM learns a separating hyperplane that maximizes the margin between two categories. It is simple enough to be analyzed mathematically and leads to high performance in practical applications. SVM implements the structural risk minimization principle and seeks to minimize an upper bound on the generalization error. In addition, the SVM solution may be a global optimum, so overfitting is unlikely to occur. SVM also does not require many data samples for training, since it builds prediction models using only the representative samples near the boundaries, called support vectors. A number of experimental studies have shown that SVM has been successfully applied in a variety of pattern recognition fields. However, there are three major drawbacks that can degrade SVM's performance. First, SVM was originally proposed for binary classification problems. Methods for combining SVMs for multi-class classification, such as One-Against-One and One-Against-All, have been proposed, but they do not perform as well in multi-class classification as SVM does in binary classification. Second, approximation algorithms (e.g., decomposition methods or the sequential minimal optimization algorithm) can be used to reduce the computation time of multi-class problems, but they can deteriorate classification performance. Third, multi-class prediction suffers from the data imbalance problem, which occurs when the number of instances in one class greatly outnumbers the number of instances in another. Such data sets often cause a default classifier to be built due to the skewed boundary, reducing classification accuracy. SVM ensemble learning is one machine learning approach to coping with these drawbacks. Ensemble learning is a method for improving the performance of classification and prediction algorithms, and AdaBoost is one of the most widely used ensemble learning techniques. AdaBoost constructs a composite classifier by sequentially training classifiers while increasing the weights on the misclassified observations through iterations: observations incorrectly predicted by previous classifiers are chosen more often than correctly predicted ones. Boosting thus attempts to produce new classifiers that better predict the examples on which the current ensemble's performance is poor, which reinforces the training of the misclassified observations of the minority class. This paper proposes a multiclass Geometric Mean-based Boosting algorithm (MGM-Boost) to resolve the multiclass prediction problem. Since MGM-Boost introduces the notion of the geometric mean into AdaBoost, its learning process can take account of the geometric mean-based accuracy and errors across classes, as sketched below. This study applies MGM-Boost to a real-world bond rating case for Korean companies to examine its feasibility. Ten-fold cross-validation was performed three times with different random seeds to ensure that the comparison among the three classifiers did not happen by chance. In each 10-fold cross-validation, the entire data set is first partitioned into ten equal-sized sets, and each set is in turn used as the test set while the classifier trains on the other nine; that is, the cross-validated folds are tested independently for each algorithm. Through these steps, we obtained results for the classifiers on each of the 30 experiments. In the comparison of arithmetic mean-based prediction accuracy between individual classifiers, MGM-Boost (52.95%) shows higher prediction accuracy than both AdaBoost (51.69%) and SVM (49.47%). MGM-Boost (28.12%) also shows higher prediction accuracy than AdaBoost (24.65%) and SVM (15.42%) in terms of geometric mean-based prediction accuracy. A t-test is used to examine whether the performance of each classifier over the 30 folds differs significantly; the results indicate that the performance of MGM-Boost differs significantly from the AdaBoost and SVM classifiers at the 1% level. These results mean that MGM-Boost can provide robust and stable solutions to multi-class problems such as bond rating.
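A sketch of the two ingredients in Python (scikit-learn): geometric mean-based accuracy, and a standard AdaBoost-style loop over SVM base learners (the SAMME variant for multiclass). This is not the authors' MGM-Boost: how the geometric mean enters the reweighting is their contribution and is not specified in the abstract, so only the evaluation metric below uses it.

```python
import numpy as np
from sklearn.svm import SVC

def gmean_accuracy(y_true, y_pred, classes):
    """Geometric mean of per-class recalls: the multiclass accuracy notion
    that motivates MGM-Boost (a single weak class drives it toward zero)."""
    recalls = [np.mean(y_pred[y_true == c] == c) for c in classes]
    return float(np.prod(recalls) ** (1.0 / len(recalls)))

def boosted_svm(X, y, rounds=10):
    """Standard SAMME AdaBoost with SVM base learners."""
    classes = np.unique(y)
    n, K = len(y), len(classes)
    w = np.full(n, 1.0 / n)
    models, alphas = [], []
    for _ in range(rounds):
        clf = SVC(kernel="rbf", C=1.0).fit(X, y, sample_weight=w * n)
        pred = clf.predict(X)
        err = w[pred != y].sum() / w.sum()
        if err <= 0 or err >= 1 - 1.0 / K:      # perfect, or worse than chance
            break
        alpha = np.log((1 - err) / err) + np.log(K - 1)
        w *= np.exp(alpha * (pred != y))        # up-weight misclassified cases
        w /= w.sum()
        models.append(clf)
        alphas.append(alpha)
    return models, alphas, classes
```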

Measuring the Public Service Quality Using Process Mining: Focusing on N City's Building Licensing Complaint Service (프로세스 마이닝을 이용한 공공서비스의 품질 측정: N시의 건축 인허가 민원 서비스를 중심으로)

  • Lee, Jung Seung
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.4
    • /
    • pp.35-52
    • /
    • 2019
  • As public services are provided in various forms, including e-government, public demand regarding the quality of public services is increasing. Although continuous measurement and improvement of public service quality are needed, traditional surveys are costly and time-consuming and have limitations. Therefore, an analytical technique is needed that can measure the quality of public services quickly and accurately at any time, based on the data the services themselves generate. In this study, we analyzed the quality of public services based on data, using process mining techniques on the building licensing complaint service of N city. This service was chosen because it can supply the data necessary for the analysis, and the approach can be spread to other institutions through public service quality management. We conducted process mining on a total of 3,678 building license complaint cases in N city over the two years from January 2014, and identified the process maps and the departments with high frequency and long processing times. According to the analysis, some departments were crowded at certain points in time while others handled relatively few cases, and there was reasonable suspicion that an increase in the number of complaints increases the time required to complete them. The time required to complete a complaint varied from the same day to a year and 146 days. The cumulative frequency of the top four departments (the Sewage Treatment Division, the Waterworks Division, the Urban Design Division, and the Green Growth Division) exceeded 50%, and the cumulative frequency of the top nine departments exceeded 70%: the heavily involved departments were few, and the load among departments was highly unbalanced. Most complaint services follow a variety of different process patterns. The analysis shows that the number of 'complement' (supplementation) decisions has the greatest impact on the length of a complaint. This is interpreted as follows: a 'complement' decision requires a physical period in which the complainant supplements and re-submits the documents, so a lengthy period is needed until the whole complaint is completed. To solve this problem, the overall processing time of complaints can be reduced drastically by preparing documents thoroughly before filing. By clarifying and disclosing the causes of and solutions to 'complement' decisions, one of the important pieces of data in the system, the complainant can prepare in advance and be confident that documents prepared using the disclosed public information will pass, making the progress of a complaint sufficiently predictable. Documents prepared from pre-disclosed information are likely to be processed without problems, which not only shortens the processing period but also improves work efficiency from the processor's point of view by eliminating the need for renegotiation or repeated tasks. The results of this study can be used to find the departments with heavy complaint burdens at certain points in time and to manage workforce allocation among departments flexibly. In addition, the analysis of the patterns of departments participating in consultations, by complaint characteristics, can be used for automation or recommendation when selecting the consulting department. Furthermore, by using the various data generated during the complaint process together with machine learning techniques, the patterns of the complaint process can be discovered; implementing such an algorithm in the system can support the automation and intelligence of civil complaint processing, as in the sketch below. This study is expected to suggest future public service quality improvements through process mining analysis of civil services.
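A minimal sketch of the core measurements in Python (pandas), over a hypothetical event log with one row per processing step. The file name and the column names (case_id, department, decision, timestamp) are assumptions; the study's actual log schema is not given.

```python
import pandas as pd

log = pd.read_csv("complaints.csv", parse_dates=["timestamp"])
# assumed columns: case_id, department, decision, timestamp

# Lead time per complaint (the study observed same-day to 1 year and 146 days)
lead = log.groupby("case_id")["timestamp"].agg(lambda t: t.max() - t.min())

# Departmental load: cumulative share of steps (top 4 departments > 50%)
dept_share = log["department"].value_counts(normalize=True).cumsum()
print(dept_share.head(9))

# Relation between 'complement' decisions and lead time per case
n_comp = (log["decision"] == "complement").groupby(log["case_id"]).sum()
summary = pd.DataFrame({"lead_time": lead, "n_complement": n_comp})
print(summary.groupby("n_complement")["lead_time"].mean())
```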

A Study on People Counting in Public Metro Service using Hybrid CNN-LSTM Algorithm (Hybrid CNN-LSTM 알고리즘을 활용한 도시철도 내 피플 카운팅 연구)

  • Choi, Ji-Hye;Kim, Min-Seung;Lee, Chan-Ho;Choi, Jung-Hwan;Lee, Jeong-Hee;Sung, Tae-Eung
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.2
    • /
    • pp.131-145
    • /
    • 2020
  • In line with the trend of industrial innovation, IoT technology, used in a variety of fields, is emerging as a key element in the creation of new business models and the provision of user-friendly services when combined with big data. The data accumulated from Internet-of-Things (IoT) devices is being used in many ways to build convenience-based smart systems, since it enables customized intelligent services through analysis of user environments and patterns. Recently it has been applied to innovation in the public domain, for example in smart cities and smart transportation, such as solving traffic and crime problems using CCTV. In particular, when planning underground services or establishing a passenger-flow control information system to enhance the convenience of citizens and commuters amid the congestion of public transportation such as subways and urban railways, both the ease of securing real-time service data and the stability of security must be considered comprehensively. However, previous studies that utilize image data face limitations: object detection performance degrades under privacy constraints and abnormal conditions. The IoT device-based sensor data used in this study is free from privacy issues because it does not require identifying individuals, so it can be effectively utilized to build intelligent public services for unspecified people. We utilize IoT-based infrared sensor devices for an intelligent pedestrian tracking system in a metro service that many people use daily, with the temperature data measured by the sensors transmitted in real time. The experimental environment for collecting real-time sensor data was established at the equally spaced midpoints of a 4×4 grid on the ceiling of subway entrances where actual passenger movement is high, and the temperature change was measured for objects entering and leaving the detection spots. The measured data were preprocessed by setting reference values for the 16 distinct areas and calculating the difference between the temperature in each area and its reference value per unit of time; this corresponds to a methodology that maximizes the visibility of movement within the detection area. In addition, the magnitude of the data was increased tenfold in order to reflect temperature differences between areas more sensitively: for example, if the temperature collected from a sensor at a given time was 28.5℃, the value was changed to 285 for analysis. The data collected from the sensors thus have the characteristics of time series data and of image data with 4×4 resolution. Reflecting these characteristics of the measured, preprocessed data, we finally propose a hybrid algorithm, CNN-LSTM (Convolutional Neural Network - Long Short-Term Memory), that combines a CNN, which performs well for image classification, with an LSTM, which is especially suitable for analyzing time series data. In this study, the CNN-LSTM algorithm is used to predict the number of people passing through one of the 4×4 detection areas. We verified the validity of the proposed model through a performance comparison with other artificial intelligence algorithms: the Multi-Layer Perceptron (MLP), Long Short-Term Memory (LSTM), and RNN-LSTM (Recurrent Neural Network - Long Short-Term Memory). In the experiment, the proposed CNN-LSTM hybrid model showed the best predictive performance compared to MLP, LSTM, and RNN-LSTM. By utilizing the proposed devices and models, various metro services, such as real-time monitoring of public transport facilities and congestion-based emergency response services, are expected to be provided without legal issues concerning personal information. However, the data were collected from only one side of the entrances, and data collected over a short period were used for the prediction, so verification of the model's applicability in other environments remains a limitation. In the future, the proposed model is expected to become more reliable if experimental data are collected in a sufficient variety of environments or if training data are augmented with measurements from other sensors.
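A minimal Keras sketch of the described architecture: each sample is a sequence of 4×4 thermal frames, a CNN is applied per frame, and an LSTM summarizes the sequence to predict a passing count. The sequence length, filter sizes, and layer widths are assumptions; the abstract fixes only the 4×4 input, the tenfold scaling with per-cell references, and the CNN-then-LSTM combination.

```python
import numpy as np
from tensorflow.keras import layers, models

TIMESTEPS = 30  # assumed frames per sample; not specified in the abstract

model = models.Sequential([
    layers.Input(shape=(TIMESTEPS, 4, 4, 1)),    # sequence of 4x4 thermal grids
    layers.TimeDistributed(
        layers.Conv2D(16, (2, 2), activation="relu", padding="same")),
    layers.TimeDistributed(layers.Flatten()),
    layers.LSTM(32),                             # temporal motion pattern
    layers.Dense(1, activation="relu"),          # predicted number of passers
])
model.compile(optimizer="adam", loss="mse")

def preprocess(frames, reference):
    """Preprocessing as described: scale temperatures tenfold (28.5 -> 285)
    and subtract a per-cell reference value for each of the 16 areas."""
    return frames * 10.0 - reference             # reference already scaled
```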

Quality Assurance for Intensity Modulated Radiation Therapy (세기조절방사선치료(Intensity Modulated Radiation Therapy; IMRT)의 정도보증(Quality Assurance))

  • Cho Byung Chul;Park Suk Won;Oh Do Hoon;Bae Hoonsik
    • Radiation Oncology Journal
    • /
    • v.19 no.3
    • /
    • pp.275-286
    • /
    • 2001
  • Purpose: To set up quality assurance (QA) procedures for implementing intensity modulated radiation therapy (IMRT) clinically, and to report the QA procedures performed for one patient with prostate cancer. Materials and methods: P3IMRT (ADAC) and a linear accelerator (Siemens) with a multileaf collimator (MLC) were used to implement IMRT. First, the positional accuracy and reproducibility of the MLC and the leaf transmission factor were evaluated. RTP commissioning was performed again to account for small-field effects. After RTP recommissioning, a test plan for a C-shaped PTV was made using 9 intensity modulated beams, and the calculated isocenter dose was compared with the dose measured in a solid water phantom. As patient-specific IMRT QA, one patient with prostate cancer was planned using 6 beams with a total of 74 segmented fields. The same beams were used to recalculate the dose in a solid water phantom. The doses of these beams were measured with a 0.015 cc micro-ionization chamber, a diode detector, films, and an array detector and compared with the calculated doses. Results: The positioning accuracy of the MLC was about 1 mm, and the reproducibility was around 0.5 mm. For the leaf transmission factor for 10 MV photon beams, interleaf leakage was measured as 1.9% and midleaf leakage as 0.9% relative to a 10×10 cm² open field. Penumbra measurements with film, the diode detector, the micro-ionization chamber, and a conventional 0.125 cc chamber showed that the 80-20% penumbra width measured with the 0.125 cc chamber was 2 mm larger than that of film, meaning that a 0.125 cc ionization chamber is unacceptable for measuring small fields such as a 0.5 cm beamlet. After RTP recommissioning, the discrepancy between the measured and calculated dose profiles for a small field of 1×1 cm² was less than 2%. The isocenter dose of the C-shaped PTV test plan, measured twice with the micro-ionization chamber in the solid phantom, showed errors of up to 12% for individual beams, but the total delivered dose agreed with the calculation within 2%. The transverse dose distribution measured with EC-L film generally agreed with the calculated one. The isocenter dose for the patient, measured in the solid phantom, agreed within 1.5%. On-axis dose profiles of each individual beam at the position of the central leaf, measured with film and the array detector, showed that in the out-of-field region the calculated dose underestimates by about 2%, while inside the field the measured dose agreed within 3%, except at some positions. Conclusion: Tighter quality control of the MLC is necessary for IMRT than for conventional large-field treatment, and QA procedures to check intensity patterns more efficiently need to be developed. In conclusion, we set up an appropriate QA procedure for IMRT consisting of a series of verifications, including measurement of the absolute dose at the isocenter with a micro-ionization chamber, film dosimetry to verify intensity patterns, and measurement with an array detector to compare off-axis dose profiles.
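The agreement figures quoted above reduce to simple percent-difference arithmetic. A small sketch of the two checks, where the exact normalization and the 3% tolerance are assumptions (the abstract quotes 2-3% agreement but does not define the comparison formulas):

```python
import numpy as np

def point_dose_diff(measured, calculated):
    """Percent difference of a point dose (e.g., at the isocenter),
    as in 'agreed with the calculated within 2%'."""
    return 100.0 * (measured - calculated) / calculated

def profile_within_tolerance(measured, calculated, tol_pct=3.0):
    """Fraction of off-axis profile points whose dose difference,
    normalized to the profile maximum, is within tol_pct percent."""
    diff = 100.0 * np.abs(measured - calculated) / np.max(calculated)
    return float(np.mean(diff <= tol_pct))
```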


A study on lead exposure indices of male workers exposed to lead for less than 1 year in storage battery industries (축전지 제조업에서 입사 1년 미만 남자 사원들의 연 노출 지표치에 관한 연구)

  • HwangBo, Young;Kim, Yong-Bae;Lee, Gap-Soo;Lee, Sung-Soo;Ahn, Kyu-Dong;Lee, Byung-Kook;Kim, Joung-Soon
    • Journal of Preventive Medicine and Public Health
    • /
    • v.29 no.4 s.55
    • /
    • pp.747-764
    • /
    • 1996
  • This study was intended to obtain useful information for the health management of lead-exposed workers and to determine a biological monitoring interval for the early period of exposure, by measuring lead exposure indices and work duration in all male workers (n=433) employed for less than 1 year in 6 storage battery industries, and in 49 males not exposed to lead as controls. The examined variables were blood lead concentration (PBB), zinc protoporphyrin concentration (ZPP), hemoglobin (HB), and personal history; the lead concentration in air (PBA) in the workplace was also measured. According to the geometric mean of the lead concentration in air, the factories were grouped into three categories: A, below 0.05 mg/m³; B, between 0.05 and 0.10 mg/m³; and C, above 0.10 mg/m³. The results were as follows: 1. The mean blood lead concentration (PBB), ZPP concentration, and hemoglobin (HB) in all male workers exposed to lead for less than 1 year in the storage battery industries were 29.5±12.4 µg/100ml, 52.9±30.0 µg/100ml, and 15.2±1.1 g/100ml. 2. The corresponding means in the control group were 5.8±1.6 µg/100ml, 30.8±12.7 µg/100ml, and 15.7±1.6 g/100ml, the lead indices being much lower than in the exposed group. 3. The mean blood lead and ZPP concentrations were 21.9±7.6 µg/100ml and 41.4±12.6 µg/100ml in group A; 29.8±11.6 µg/100ml and 52.6±27.9 µg/100ml in group B; and 37.2±13.5 µg/100ml and 66.3±40.7 µg/100ml in group C. Significant differences were found among the three factory groups classified by the geometric mean of the air lead concentration (p<0.01), group A being the lowest. 4. The mean blood lead concentration by work duration was 24.1±12.4 µg/100ml for 1-2 months of work, 29.2±13.4 µg/100ml for 3-4 months, and 28.9-34.5 µg/100ml for workers with longer work durations. Significant differences were found among the work duration groups (p<0.05). 5. The mean ZPP concentration by work duration was 40.6±18.0 µg/100ml for 1-2 months, 53.4±38.4 µg/100ml for 3-4 months, and 51.5-60.4 µg/100ml for longer durations. Significant differences were found among the work duration groups (p<0.05). 6. Among all 433 workers, 18.2% had a PBB concentration higher than 40 µg/100ml and 7.1% had a ZPP concentration higher than 100 µg/100ml; in factory group A, these proportions were 0.9% and 0.0%; in group B, 17.1% and 6.9%; and in group C, 39.4% and 15.4%. 7. The proportions of all 433 workers with a blood lead concentration lower than 25 µg/100ml and a ZPP concentration lower than 50 µg/100ml were 39.7% and 61.9%, respectively; in factory group A, 65.5% and 82.3%; in group B, 36.1% and 60.2%; and in group C, 19.2% and 43.3%. 8. Blood lead concentration (r=0.177, p<0.01), ZPP concentration (r=0.135, p<0.01), log ZPP (r=0.170, p<0.01), and hemoglobin (r=0.096, p<0.05) showed statistically significant correlations with work duration (months). ZPP concentration (r=0.612, p<0.01) and log ZPP (r=0.614, p<0.01) showed statistically significant correlations with blood lead concentration. 9. The slope of the simple linear regression of blood lead concentration (dependent variable) on work duration (months, independent variable) was less steep in workplaces with low air lead concentrations than in poor working conditions with a high geometric mean air lead concentration. The results indicate that new employees should be provided with biological monitoring, including a blood lead concentration test, and with education about personal hygiene and workplace management within 3-4 months of employment.
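Finding 9's regression is a one-liner in Python (SciPy); the numbers below are invented toy values, since the per-worker data are not published (the study's actual results: r = 0.177, p < 0.01 for blood lead vs. work duration, n = 433).

```python
import numpy as np
from scipy import stats

duration = np.array([1, 2, 3, 4, 6, 8, 10, 12])                 # months (toy)
pbb      = np.array([22., 25., 28., 30., 29., 31., 33., 34.])   # ug/100ml (toy)

r, p = stats.pearsonr(duration, pbb)
reg = stats.linregress(duration, pbb)   # slope flattens in cleaner workplaces
print(f"r={r:.3f}  p={p:.3g}  slope={reg.slope:.2f} ug/100ml per month")
```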
