• Title/Summary/Keyword: Performance Standard

Search Result 6,942

Diagnosis of Obstructive Sleep Apnea Syndrome Using Overnight Oximetry Measurement (혈중산소포화도검사를 이용한 폐쇄성 수면무호흡증의 진단)

  • Youn, Tak;Park, Doo-Heum;Choi, Kwang-Ho;Kim, Yong-Sik;Woo, Jong-Inn;Kwon, Jun-Soo;Ha, Kyoo-Seob;Jeong, Do-Un
    • Sleep Medicine and Psychophysiology, v.9 no.1, pp.34-40, 2002
  • Objectives: The gold standard for diagnosing obstructive sleep apnea syndrome (OSAS) is nocturnal polysomnography (NPSG). This is rather expensive and somewhat inconvenient, however, and consequently simpler and cheaper alternatives to NPSG have been proposed. Oximetry is appealing because of its widespread availability and ease of application. In this study, we evaluated whether oximetry alone can be used to diagnose or screen for OSAS. The diagnostic performance of an analysis algorithm using arterial oxygen saturation (SaO2), based on the 'dip index', the mean SaO2, and CT90 (the percentage of time spent at SaO2 < 90%), was compared with that of NPSG. Methods: Fifty-six patients referred for NPSG to the Division of Sleep Studies at Seoul National University Hospital were randomly selected. For each patient, NPSG with oximetry was carried out. Three variables were obtained from the oximetry data: the dip index most linearly correlated with the respiratory disturbance index (RDI) from NPSG, the mean SaO2, and CT90; each was compared against the diagnosis from NPSG. In each case, the sensitivity, specificity, and positive and negative predictive values of the oximetry data were calculated. Results: Thirty-nine of the fifty-six patients were diagnosed with OSAS by NPSG. Mean RDI was 17.5, mean SaO2 was 94.9%, and mean CT90 was 5.1%. The dip index [4%-4sec] was most linearly correlated with RDI (r=0.861). With dip index [4%-4sec] ≥ 2 as the diagnostic criterion, we obtained a sensitivity of 0.95, specificity of 0.71, positive predictive value of 0.88, and negative predictive value of 0.86. Using mean SaO2 ≤ 97%, we obtained a sensitivity of 0.95, specificity of 0.41, positive predictive value of 0.79, and negative predictive value of 0.78. Using CT90 ≥ 5%, we obtained a sensitivity of 0.28, specificity of 1.00, positive predictive value of 1.00, and negative predictive value of 0.38. Conclusions: The dip index [4%-4sec] and mean SaO2 ≤ 97% obtained from nocturnal oximetry data are helpful in the diagnosis of OSAS. CT90 ≤ 5% can also be used in excluding OSAS.
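
For reference, the sensitivity, specificity, and predictive values quoted above follow from the standard 2x2 confusion-matrix definitions. The sketch below is illustrative only; it is not taken from the paper, and the example counts are made up to roughly reproduce the reported values for the dip index criterion.

```python
# Hypothetical illustration: 2x2 confusion-matrix metrics for a screening criterion
# (e.g., dip index [4%-4sec] >= 2) against the NPSG reference diagnosis.
def screening_metrics(tp, fp, fn, tn):
    """Return sensitivity, specificity, PPV, and NPV from 2x2 counts."""
    sensitivity = tp / (tp + fn)   # true positives among all OSAS patients
    specificity = tn / (tn + fp)   # true negatives among all non-OSAS patients
    ppv = tp / (tp + fp)           # probability of OSAS given a positive test
    npv = tn / (tn + fn)           # probability of no OSAS given a negative test
    return sensitivity, specificity, ppv, npv

# Made-up counts for 56 patients (39 OSAS, 17 non-OSAS), chosen only to
# roughly match the reported 0.95 / 0.71 / 0.88 / 0.86 figures.
print(screening_metrics(tp=37, fp=5, fn=2, tn=12))
```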


Visual Media Education in Visual Arts Education (미술교육에 있어서 시각적 미디어를 통한 조형교육에 관한 연구)

  • Park Ji-Sook
    • Journal of Science of Art and Design, v.7, pp.64-104, 2005
  • Visual media transmit images and information reproduced in large quantities, such as photography, film, television, video, advertisement, and computer images. To correspond to students' future reception and recognition of culture, visual arts education must make arrangements for the study of visual culture. 'Visual culture' refers to the cultural phenomena of visual images conveyed via visual media, which include not only the categories of traditional arts such as painting, sculpture, print, and design, but also the performance arts, including fashion shows and carnival parades, and the mass and electronic media such as photography, film, television, video, advertisement, cartoon, animation, and computer images. In the world of visual media, the image functions as an essential medium of communication. Therefore, the culture of today is called the 'era of image culture', having shifted from an alphabet-centered era to an image-centered one. The image, conveyed via visual media, has become a dominant means of communication in large parts of human life, so the image can be designated as a typical aspect of today's visual culture. As an essential medium of communication, the image plays an important role in contemporary society. Digital images are produced in two ways. One is the conversion of analogue images, such as actual pictures, photographs, or film, into digital ones through digitization by a digital camera or scanner acting as an analogue/digital converter. The other is processing by computer drawing or the modeling of objects, which is appropriate for the production of pictorial and surreal images. Digital images produced in the latter way can be divided into pixel form and vector form. A vector is a line linking a point of departure to a point of end, which organizes the information; the computer stores each line's reference location and the locations relative to one another. The digital image shows far more 'perfectness' than any other visual medium. Digital imagery has been evolving in diverse directions, such as the production of geometrical or organic image compositing, interactive art, multimedia art, and web art, in which the computer is applied as an extended tool of painting. The digitalized copy, with its endless reproduction of the original, is sometimes even interpreted as an extension of the print. Visual art is no longer a simple activity of representation by a painter or sculptor, but is now intimately associated with the application of media. There are problems with images conveyed via visual media. First, the image via media does not reflect reality as it is, but reflects an artificially manipulated world, that is, a virtual reality. Second, the introduction of digital effects and the development of image-processing technology have heightened the spectacle of destructive and violent scenes. Third, children tend to recognize the interactive images of computer games and virtual reality as reality, or truth. Education needs not only to point out the ill effects of mass media and prevent the younger generation from being damaged by them, but also to offer the knowledge and know-how to cope actively with social and cultural circumstances. Visual media education is one of the essential methods for contemporary and future human beings amid the overflow of image information. The fostering of 'visual literacy' can be considered the very purpose of visual media education. This is a way to lead individuals to become, as far as possible, discerning, active consumers and producers of visual media in life.
The elements of 'visual literacy' can be divided into a faculty of recognition related to visual media, a faculty of critical reception, a faculty of appropriate application, a faculty of active work, and a faculty of creative modeling, all of which are promoted at the same time by the education of visual literacy. In conclusion, the education of visual literacy guides students to comprehend and discriminate visual image media carefully, receive them critically, apply them properly, and produce them creatively and voluntarily. Moreover, it leads to artistic activity by means of new media. This education can be approached and enhanced by connection and integration with real life. The visual arts and their education play an important role in the digital era, which depends on visual communication via image information. The visual media of today function as essential elements both in daily life and in the arts. Students can soundly understand the visual phenomena of today by means of visual media and apply them as an expressive tool of life culture as well. A new recognition and valuation of visual image and media education is required to cultivate the capability of actively and uprightly dealing with the changes in the history of civilization. 1) Visual media education helps to cultivate a sensibility for images, which reacts to and deals with the circumstances. 2) It helps students to comprehend contemporary arts and culture via new media. 3) It supplies a chance for students to experience visual modeling by means of new media. 4) It offers educational opportunities with images that have temporality and spatiality, and thereby increases the number of discerning persons. 5) Modeling activity via new media leads students to be continuously interested in the study and production of plastic arts. 6) It raises the ability of visual communication needed to deal with the image-information society. 7) Education in digital images is also significant for cultivating people of talent for the future image-information society. To correspond to the changing and developing social and cultural circumstances, and to the forms in which students receive and recognize them, visual arts education must arrange a field of study on the new visual culture. Besides, a program needs to be developed at a more systematic and active level in relation to visual media education. Educational contents should be extended to the media for visual images, that is, photography, film, television, video, computer graphics, animation, music video, computer games, and multimedia. Each medium must be approached separately, because each maintains its own modes and peculiarities according to the form in which its message is conveyed. Concrete and systematic teaching methods and the quality of education must be researched and developed, centering on the development of a course of study. Teachers' foundational capability for teaching should be cultivated for visual media education. In this case, attention must be paid to the fact that the technological level of media is secondary, because school education does not intend to train expert and skillful producers, but to lay stress on essential aesthetic education with visual media in the social and cultural context, with respect to the consumer, including the cultured person.


An Analysis of the Moderating Effects of User Ability on the Acceptance of an Internet Shopping Mall (인터넷 쇼핑몰 수용에 있어 사용자 능력의 조절효과 분석)

  • Suh, Kun-Soo
    • Asia Pacific Journal of Information Systems, v.18 no.4, pp.27-55, 2008
  • Due to the increasing and intensifying competition in the Internet shopping market, it has been recognized as very important to develop an effective policy and strategy for acquiring loyal customers. For this reason, web site designers need to know if a new Internet shopping mall (ISM) will be accepted. Researchers have been working on identifying factors for explaining and predicting user acceptance of an ISM. Some studies, however, revealed inconsistent findings on the antecedents of user acceptance of a website. Lack of consideration for individual differences in user ability is believed to be one of the key reasons for the mixed findings. The elaboration likelihood model (ELM) and several studies have suggested that individual differences in ability play a moderating role on the relationship between the antecedents and user acceptance. Despite the critical role of user ability, little research has examined this role in the Internet shopping mall context. The purpose of this study is to develop a user acceptance model that considers the moderating role of user ability in the context of Internet shopping. This study was initiated to see the ability of the technology acceptance model (TAM) to explain the acceptance of a specific ISM. According to TAM, which is one of the most influential models for explaining user acceptance of IT, an intention to use IT is determined by usefulness and ease of use. Given that interaction between the user and the website takes place through the web interface, the decisions to accept and continue using an ISM depend on these beliefs. However, TAM neglects to consider the fact that many users would not stick to an ISM until they trust it, even though they may think it useful and easy to use. The importance of trust for user acceptance of an ISM has been raised by relational views. The relational view emphasizes the trust-building process between the user and the ISM, and the user's trust in the website is a major determinant of user acceptance. The proposed model extends and integrates the TAM and relational views on user acceptance of an ISM by incorporating usefulness, ease of use, and trust. User acceptance is defined as a user's intention to reuse a specific ISM, and user ability is introduced into the model as a moderating variable. Here, user ability is defined as the degree of experience, knowledge, and skills regarding Internet shopping sites. The research model proposes that the ease of use, usefulness, and trust of an ISM are key determinants of user acceptance. In addition, this paper hypothesizes that the effects of the antecedents (i.e., ease of use, usefulness, and trust) on user acceptance may differ among users. In particular, this paper proposes a moderating effect of a user's ability on the relationship between the antecedents and the user's intention to reuse. The research model with eleven hypotheses was derived and tested through a survey that involved 470 university students. For each research variable, this paper used measurement items recognized for reliability and widely used in previous research, slightly modified to suit the research context. The reliability and validity of the research variables were tested using Cronbach's alpha and internal consistency reliability (ICR) values, the standardized factor loadings of the confirmatory factor analysis, and average variance extracted (AVE) values. The LISREL method was used to test the fit of the research model and its six related hypotheses.
Key findings of the results are summarized in the following. First, TAM's two constructs, ease of use and usefulness, directly affect user acceptance. In addition, ease of use indirectly influences user acceptance by affecting trust. This implies that users tend to trust a shopping site and visit it repeatedly when they perceive a specific ISM as easy to use. Accordingly, designing a shopping site that allows users to navigate with heuristic and minimal clicks for finding information and products within the site is important for improving the site's trust and acceptance. Usefulness, however, was not found to influence trust. Second, among the three belief constructs (ease of use, usefulness, and trust), trust was empirically supported as the most important determinant of user acceptance. This implies that users require trustworthiness from an Internet shopping site to become repeat visitors of an ISM. Providing a sense of safety and eliminating the anxiety of online shoppers in relation to privacy, security, delivery, and product returns are critically important conditions for acquiring repeat visitors. Hence, in addition to usefulness and ease of use as in TAM, trust should be a fundamental determinant of user acceptance in the context of Internet shopping. Third, the user's ability in using an Internet shopping site played a moderating role. For users with low ability, ease of use was found to be a more important factor in deciding to reuse the shopping mall, whereas usefulness and trust had stronger effects on users with high ability. Applying the ELM to these findings, we can suggest that experienced and knowledgeable ISM users tend to elaborate on such usefulness aspects as efficient and effective shopping performance and on such trust factors as the ability, benevolence, integrity, and predictability of a shopping site before they become repeat visitors of the site. In contrast, novice users tend to rely on low-elaboration features, such as perceived ease of use. The existence of moderating effects suggests that different individuals evaluate an ISM from different perspectives. Expert users are more interested in the outcome of the visit (usefulness) and trustworthiness (trust) than novice visitors. The latter evaluate the ISM in a more superficial manner, focusing on the novelty of the site and on other instrumental beliefs (ease of use). This is consistent with the insights proposed by the heuristic-systematic model. According to the heuristic-systematic model, users act on the principle of minimum effort. Thus, the user considers an ISM heuristically, focusing on those aspects that are easy to process and evaluate (ease of use). When the user has sufficient experience and skills, the user will change to systematic processing, where they will evaluate more complex aspects of the site (its usefulness and trustworthiness). This implies that an ISM has to provide a minimum level of ease of use to make it possible for a user to evaluate its usefulness and trustworthiness. Ease of use is a necessary but not sufficient condition for the acceptance and use of an ISM. Overall, the empirical results generally support the proposed model and identify the moderating effect of user ability. More detailed interpretations and implications of the findings are discussed. The limitations of this study are also discussed to provide directions for future research.
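
The moderation described above was tested with LISREL in the paper. As a rough illustration of the general idea of a moderating effect, the sketch below fits a moderated regression with interaction terms between each antecedent and user ability; the data file and column names are hypothetical assumptions, not the study's instruments.

```python
# Rough illustration (not the paper's LISREL analysis): testing whether user
# ability moderates the effects of trust, usefulness, and ease of use on the
# intention to reuse an ISM, via a moderated regression with interaction terms.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("ism_survey.csv")   # hypothetical survey data

# Mean-center the predictors so the interaction terms are easier to interpret.
for col in ["trust", "usefulness", "ease_of_use", "ability"]:
    df[col] = df[col] - df[col].mean()

model = smf.ols(
    "reuse_intention ~ ease_of_use + usefulness + trust + ability"
    " + trust:ability + usefulness:ability + ease_of_use:ability",
    data=df,
).fit()
print(model.summary())   # significant interaction terms suggest moderation by ability
```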

Ensemble of Nested Dichotomies for Activity Recognition Using Accelerometer Data on Smartphone (Ensemble of Nested Dichotomies 기법을 이용한 스마트폰 가속도 센서 데이터 기반의 동작 인지)

  • Ha, Eu Tteum;Kim, Jeongmin;Ryu, Kwang Ryel
    • Journal of Intelligence and Information Systems, v.19 no.4, pp.123-132, 2013
  • As smartphones are equipped with various sensors, such as the accelerometer, GPS, gravity sensor, gyroscope, ambient light sensor, and proximity sensor, there have been many research works on making use of these sensors to create valuable applications. Human activity recognition is one such application, motivated by various welfare applications such as support for the elderly, measurement of calorie consumption, analysis of lifestyles, and analysis of exercise patterns. One of the challenges faced when using smartphone sensors for activity recognition is that the number of sensors used should be minimized to save battery power. When the number of sensors used is restricted, it is difficult to realize a highly accurate activity recognizer, or classifier, because it is hard to distinguish between subtly different activities relying on only limited information. The difficulty becomes especially severe when the number of different activity classes to be distinguished is very large. In this paper, we show that a fairly accurate classifier can be built that distinguishes ten different activities by using data from only a single sensor, the smartphone accelerometer. The approach that we take to dealing with this ten-class problem is to use the ensemble of nested dichotomies (END) method, which transforms a multi-class problem into multiple two-class problems. END builds a committee of binary classifiers in a nested fashion using a binary tree. At the root of the binary tree, the set of all the classes is split into two subsets of classes by using a binary classifier. At a child node of the tree, a subset of classes is again split into two smaller subsets by using another binary classifier. Continuing in this way, we can obtain a binary tree where each leaf node contains a single class. This binary tree can be viewed as a nested dichotomy that can make multi-class predictions. Depending on how a set of classes is split into two subsets at each node, the final tree that we obtain can be different. Since there can be some classes that are correlated, a particular tree may perform better than the others. However, we can hardly identify the best tree without deep domain knowledge. The END method copes with this problem by building multiple dichotomy trees randomly during learning, and then combining the predictions made by each tree during classification. The END method is generally known to perform well even when the base learner is unable to model complex decision boundaries. As the base classifier at each node of the dichotomy, we have used another ensemble classifier called the random forest. A random forest is built by repeatedly generating decision trees, each time with a different random subset of features and a bootstrap sample. By combining bagging with random feature subset selection, a random forest enjoys the advantage of having more diverse ensemble members than simple bagging. As an overall result, our ensemble of nested dichotomies can actually be seen as a committee of committees of decision trees that can deal with a multi-class problem with high accuracy. The ten classes of activities that we distinguish in this paper are 'Sitting', 'Standing', 'Walking', 'Running', 'Walking Uphill', 'Walking Downhill', 'Running Uphill', 'Running Downhill', 'Falling', and 'Hobbling'.
The features used for classifying these activities include not only the magnitude of the acceleration vector at each time point but also the maximum, the minimum, and the standard deviation of the vector magnitude within a time window of the last 2 seconds. For experiments to compare the performance of END with those of other methods, the accelerometer data were collected every 0.1 seconds for 2 minutes for each activity from 5 volunteers. Among the 5,900 (= 5 × (60 × 2 - 2) / 0.1) data points collected for each activity (the data for the first 2 seconds are discarded because they do not have time window data), 4,700 have been used for training and the rest for testing. Although 'Walking Uphill' is often confused with some other similar activities, END has been found to classify all of the ten activities with a fairly high accuracy of 98.4%. In contrast, the accuracies achieved by a decision tree, a k-nearest neighbor classifier, and a one-versus-rest support vector machine have been observed as 97.6%, 96.5%, and 97.6%, respectively.
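
A minimal sketch of the END idea described above: randomly built nested dichotomies with random-forest base classifiers, whose per-tree class probabilities are averaged. This is not the authors' implementation; the tree count, forest size, and feature matrix are assumptions.

```python
# Minimal sketch of an ensemble of nested dichotomies (END) with random-forest
# base classifiers, in the spirit of the approach described above.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

class NestedDichotomy:
    """A single randomly built nested dichotomy over a set of class labels."""
    def __init__(self, rng):
        self.rng = rng

    def fit(self, X, y, classes=None):
        self.classes_ = np.array(sorted(set(y))) if classes is None else classes
        if len(self.classes_) == 1:
            self.leaf_ = self.classes_[0]
            return self
        self.leaf_ = None
        # Randomly split the class set into two non-empty subsets.
        perm = self.rng.permutation(self.classes_)
        cut = self.rng.integers(1, len(perm))
        self.left_classes_, self.right_classes_ = perm[:cut], perm[cut:]
        side = np.isin(y, self.right_classes_).astype(int)   # 0 = left, 1 = right
        self.clf_ = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, side)
        mask_l, mask_r = side == 0, side == 1
        self.left_ = NestedDichotomy(self.rng).fit(X[mask_l], y[mask_l], self.left_classes_)
        self.right_ = NestedDichotomy(self.rng).fit(X[mask_r], y[mask_r], self.right_classes_)
        return self

    def predict_proba(self, X):
        """Return per-class probabilities as a dict of class label -> array."""
        if self.leaf_ is not None:
            return {self.leaf_: np.ones(len(X))}
        p_right = self.clf_.predict_proba(X)[:, 1]
        probs = {}
        for c, p in self.left_.predict_proba(X).items():
            probs[c] = (1.0 - p_right) * p
        for c, p in self.right_.predict_proba(X).items():
            probs[c] = p_right * p
        return probs

def end_predict(trees, X):
    """Average the class probabilities of several random dichotomy trees."""
    classes = trees[0].classes_
    avg = np.zeros((len(X), len(classes)))
    for t in trees:
        probs = t.predict_proba(X)
        for j, c in enumerate(classes):
            avg[:, j] += probs[c]
    return classes[np.argmax(avg, axis=1)]

# Usage: X holds window features (magnitude, max, min, std of acceleration, ...)
# and y the ten activity labels; build, say, 10 random dichotomy trees.
# rng = np.random.default_rng(42)
# trees = [NestedDichotomy(rng).fit(X_train, y_train) for _ in range(10)]
# y_pred = end_predict(trees, X_test)
```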

Export Control System based on Case Based Reasoning: Design and Evaluation (사례 기반 지능형 수출통제 시스템 : 설계와 평가)

  • Hong, Woneui;Kim, Uihyun;Cho, Sinhee;Kim, Sansung;Yi, Mun Yong;Shin, Donghoon
    • Journal of Intelligence and Information Systems, v.20 no.3, pp.109-131, 2014
  • As the demand for nuclear power plant equipment continues to grow worldwide, the importance of handling nuclear strategic materials is also increasing. While the number of cases submitted for the export of nuclear-power commodities and technology is dramatically increasing, preadjudication (or, more simply, prescreening) of strategic materials has so far been done by experts with long experience and extensive field knowledge. However, there is a severe shortage of experts in this domain, not to mention that it takes a long time to develop an expert. Because human experts must manually evaluate all the documents submitted for export permission, the current practice of nuclear material export control is neither time-efficient nor cost-effective. To alleviate the problem of relying on costly human experts only, our research proposes a new system designed to help field experts make their decisions more effectively and efficiently. The proposed system is built upon case-based reasoning, which in essence extracts key features from the existing cases, compares the features with the features of a new case, and derives a solution for the new case by referencing similar cases and their solutions. Our research proposes a framework for a case-based reasoning system, designs a case-based reasoning system for the control of nuclear material exports, and evaluates the performance of alternative keyword extraction methods (fully automatic, fully manual, and semi-automatic). A keyword extraction method is an essential component of the case-based reasoning system, as it is used to extract the key features of the cases. The fully automatic method used TF-IDF, which is a widely used de facto standard method for representative keyword extraction in text mining. TF (Term Frequency) is based on the frequency count of a term within a document, showing how important the term is within that document, while IDF (Inverse Document Frequency) is based on the infrequency of the term across the document set, showing how uniquely the term represents the document. The results show that the semi-automatic approach, which is based on the collaboration of machine and human, is the most effective solution regardless of whether the human is a field expert or a student majoring in nuclear engineering. Moreover, we propose a new approach to computing nuclear document similarity along with a new framework for document analysis. The proposed nuclear document similarity algorithm considers both document-to-document similarity (α) and document-to-nuclear-system similarity (β) in order to derive a final score (γ) for the decision of whether the presented case concerns strategic material or not. The final score (γ) represents the document similarity between the past cases and the new case. The score is derived not only from conventional TF-IDF but also from a nuclear-system similarity score, which takes the context of the nuclear system domain into account. Finally, the system retrieves the top-3 documents stored in the case base that are considered the most similar cases with regard to the new case, and provides them together with a degree of credibility. With this final score and the credibility score, it becomes easier for a user to see which documents in the case base are more worth looking up, so that the user can make a proper decision at relatively lower cost. The evaluation of the system has been conducted by developing a prototype and testing it with field data.
The system workflows and outcomes have been verified by the field experts. This research is expected to contribute to the growth of the knowledge service industry by proposing a new system that can effectively reduce the burden of relying on costly human experts for the export control of nuclear materials and that can be considered a meaningful example of a knowledge service application.
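
A rough sketch of the similarity scoring described above: TF-IDF cosine similarity between the new case and past cases (α), combined with a domain score (β, document-to-nuclear-system similarity) into a final score (γ) used to rank the most similar cases. The β values, the weighting scheme, and the example texts are illustrative assumptions, not the paper's formulas.

```python
# Illustrative combination of TF-IDF based case similarity (alpha) with a
# domain similarity score (beta) into a final ranking score (gamma).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

past_cases = ["reactor coolant pump seal assembly for pressurized water reactor",
              "uranium enrichment centrifuge rotor tube specification"]
new_case = "coolant pump impeller for a pressurized water reactor"

vec = TfidfVectorizer()
case_matrix = vec.fit_transform(past_cases)
new_vec = vec.transform([new_case])

alpha = cosine_similarity(new_vec, case_matrix)[0]   # document-to-document similarity
beta = [0.8, 0.3]                                    # assumed nuclear-system similarity scores
w = 0.6                                              # assumed mixing weight
gamma = [w * a + (1 - w) * b for a, b in zip(alpha, beta)]

# Retrieve the top-3 most similar past cases (only two exist here) by gamma.
top = sorted(range(len(gamma)), key=lambda i: gamma[i], reverse=True)[:3]
print([(i, round(gamma[i], 3)) for i in top])
```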

The Classification System and Information Service for Establishing a National Collaborative R&D Strategy in Infectious Diseases: Focusing on the Classification Model for Overseas Coronavirus R&D Projects (국가 감염병 공동R&D전략 수립을 위한 분류체계 및 정보서비스에 대한 연구: 해외 코로나바이러스 R&D과제의 분류모델을 중심으로)

  • Lee, Doyeon;Lee, Jae-Seong;Jun, Seung-pyo;Kim, Keun-Hwan
    • Journal of Intelligence and Information Systems, v.26 no.3, pp.127-147, 2020
  • The world is suffering from numerous human and economic losses due to the novel coronavirus infection (COVID-19). The Korean government established a strategy to overcome the national infectious disease crisis through research and development. It is difficult to find distinctive features and changes in a specific R&D field when using the existing technical classification or the science and technology standard classification. Recently, a few studies have been conducted to establish a classification system that provides information about the investment research areas of infectious diseases in Korea through a comparative analysis of Korean government-funded research projects. However, these studies did not provide the information necessary for establishing cooperative research strategies among countries in the infectious disease field, which is required as an execution plan to achieve the goals of national health security and fostering new growth industries. Therefore, it is necessary to study information services based on a classification system and classification model for establishing a national collaborative R&D strategy. A classification system with seven categories (Diagnosis_biomarker, Drug_discovery, Epidemiology, Evaluation_validation, Mechanism_signaling pathway, Prediction, and Vaccine_therapeutic antibody) was derived by reviewing infectious disease-related national-funded research projects of South Korea. A classification model was trained by combining Scopus data with a bidirectional RNN model. The classification performance of the final model was robust, with an accuracy of over 90%. To conduct the empirical study, the infectious disease classification system was applied to the coronavirus-related research and development projects of major countries, drawn from the STAR Metrics (National Institutes of Health) and NSF (National Science Foundation) databases of the United States (US), the CORDIS (Community Research & Development Information Service) of the European Union (EU), and the KAKEN (Database of Grants-in-Aid for Scientific Research) of Japan. It can be seen that the research and development activity on infectious diseases (coronavirus) in major countries is mostly concentrated in the Prediction category, which deals with predicting success in clinical trials at the new-drug development stage or predicting toxicity that causes side effects. The intriguing result is that, for all of these nations, the portion of national investment in the Vaccine_therapeutic antibody category, which covers research and development aimed at the development of vaccines and treatments, was very small (5.1%). This indirectly explains the poor progress in developing vaccines and treatments. Based on the comparative analysis of the investment status of coronavirus-related research projects by country, it was found that the US and Japan invest relatively evenly in all infectious disease-related research areas, while Europe has relatively large investments in specific research areas such as Diagnosis_biomarker. Moreover, information on major coronavirus-related research organizations in major countries was provided by the classification system, thereby supporting the establishment of international collaborative R&D projects.
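
The paper trains a bidirectional RNN on Scopus data to assign projects to the seven categories. The sketch below is an illustrative minimal version, assuming an LSTM-based bidirectional encoder and made-up vocabulary and sequence-length settings; it is not the authors' architecture or hyperparameters.

```python
# Illustrative bidirectional RNN text classifier for the seven
# infectious-disease categories (hyperparameters are assumptions).
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 7      # Diagnosis_biomarker, Drug_discovery, Epidemiology, ...
VOCAB_SIZE = 20000   # assumed vocabulary size
MAX_LEN = 300        # assumed maximum abstract length in tokens

model = models.Sequential([
    layers.Input(shape=(MAX_LEN,), dtype="int32"),
    layers.Embedding(VOCAB_SIZE, 128),
    layers.Bidirectional(layers.LSTM(64)),        # bidirectional recurrent encoder
    layers.Dense(64, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# model.fit(x_train, y_train, validation_data=(x_val, y_val), epochs=5)
```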

A Deep Learning Based Approach to Recognizing Accompanying Status of Smartphone Users Using Multimodal Data (스마트폰 다종 데이터를 활용한 딥러닝 기반의 사용자 동행 상태 인식)

  • Kim, Kilho;Choi, Sangwoo;Chae, Moon-jung;Park, Heewoong;Lee, Jaehong;Park, Jonghun
    • Journal of Intelligence and Information Systems, v.25 no.1, pp.163-177, 2019
  • As smartphones are getting widely used, human activity recognition (HAR) tasks for recognizing the personal activities of smartphone users with multimodal data have been actively studied recently. The research area is expanding from the recognition of the simple body movements of an individual user to the recognition of low-level behavior and high-level behavior. However, HAR tasks for recognizing interaction behavior with other people, such as whether the user is accompanying or communicating with someone else, have received less attention so far. Previous research on recognizing interaction behavior has usually depended on audio, Bluetooth, and Wi-Fi sensors, which are vulnerable to privacy issues and require much time to collect enough data, whereas physical sensors, including the accelerometer, magnetic field, and gyroscope sensors, are less vulnerable to privacy issues and can collect a large amount of data within a short time. In this paper, a method for detecting accompanying status based on a deep learning model, using only multimodal physical sensor data (accelerometer, magnetic field, and gyroscope), was proposed. The accompanying status was defined as part of the user's interaction behavior, namely whether the user is accompanying an acquaintance at a close distance and whether the user is actively communicating with that acquaintance. A framework based on convolutional neural networks (CNN) and long short-term memory (LSTM) recurrent networks for classifying accompanying and conversation was proposed. First, a data preprocessing method was introduced, consisting of time synchronization of the multimodal data from different physical sensors, data normalization, and sequence data generation. We applied nearest-neighbor interpolation to synchronize the timestamps of data collected from different sensors. Normalization was performed for each x, y, z axis value of the sensor data, and the sequence data were generated according to the sliding window method. Then the sequence data became the input to the CNN, where feature maps representing local dependencies of the original sequence are extracted. The CNN consisted of 3 convolutional layers and had no pooling layer, in order to maintain the temporal information of the sequence data. Next, LSTM recurrent networks received the feature maps, learned long-term dependencies from them, and extracted features. The LSTM recurrent networks consisted of two layers, each with 128 cells. Finally, the extracted features were used for classification by a softmax classifier. The loss function of the model was the cross-entropy function, and the weights of the model were randomly initialized from a normal distribution with a mean of 0 and a standard deviation of 0.1. The model was trained using the adaptive moment estimation (Adam) optimization algorithm, and the mini-batch size was set to 128. We applied dropout to the input values of the LSTM recurrent networks to prevent overfitting. The initial learning rate was set to 0.001, and it decayed exponentially by a factor of 0.99 at the end of each training epoch. An Android smartphone application was developed and released to collect data. We collected smartphone data for a total of 18 subjects. Using the data, the model classified accompanying and conversation with 98.74% and 98.83% accuracy, respectively. Both the F1 score and the accuracy of the model were higher than those of a majority-vote classifier, a support vector machine, and a deep recurrent neural network.
In future research, we will focus on more rigorous multimodal sensor data synchronization methods that minimize the timestamp differences. In addition, we will further study transfer learning methods that enable models trained on the training data to transfer to evaluation data that follows a different distribution. It is expected that a model capable of exhibiting robust recognition performance against changes in the data that are not considered in the model learning stage will be obtained.
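
The architecture described above (three convolutional layers without pooling, dropout on the LSTM input, two 128-cell LSTM layers, a softmax output, Adam with an initial learning rate of 0.001, batch size 128, and a 0.99 per-epoch learning-rate decay) can be sketched as follows. The window length, channel count, filter sizes, and number of filters are assumptions not given in the abstract.

```python
# Sketch of the described CNN + LSTM pipeline for accompanying-status
# classification (window length, channels, and filter settings are assumed).
import tensorflow as tf
from tensorflow.keras import layers, models

WINDOW = 100     # assumed time steps per sliding window
CHANNELS = 9     # x/y/z of accelerometer, magnetic field, and gyroscope

model = models.Sequential([
    layers.Input(shape=(WINDOW, CHANNELS)),
    layers.Conv1D(64, 5, padding="same", activation="relu"),   # no pooling,
    layers.Conv1D(64, 5, padding="same", activation="relu"),   # to keep the
    layers.Conv1D(64, 5, padding="same", activation="relu"),   # temporal axis
    layers.Dropout(0.5),                        # dropout on the LSTM input
    layers.LSTM(128, return_sequences=True),
    layers.LSTM(128),
    layers.Dense(2, activation="softmax"),      # accompanying vs. not
])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
              loss="categorical_crossentropy", metrics=["accuracy"])

# Exponential learning-rate decay of 0.99 per epoch, as described above.
schedule = tf.keras.callbacks.LearningRateScheduler(lambda epoch, lr: lr * 0.99)
# model.fit(x_train, y_train, batch_size=128, epochs=50, callbacks=[schedule])
```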

Color-related Query Processing for Intelligent E-Commerce Search (지능형 검색엔진을 위한 색상 질의 처리 방안)

  • Hong, Jung A;Koo, Kyo Jung;Cha, Ji Won;Seo, Ah Jeong;Yeo, Un Yeong;Kim, Jong Woo
    • Journal of Intelligence and Information Systems, v.25 no.1, pp.109-125, 2019
  • As interest in intelligent search engines increases, various studies have been conducted to extract and utilize product-related features intelligently. In particular, when users search for goods in e-commerce search engines, the 'color' of a product is an important feature that describes the product. Therefore, it is necessary to deal with synonyms of color terms in order to produce accurate results for users' color-related queries. Previous studies have suggested a dictionary-based approach to processing synonyms for color features. However, the dictionary-based approach has the limitation that it cannot handle unregistered color terms in user queries. To overcome this limitation of the conventional methods, this research proposes a model which extracts RGB values from an Internet search engine in real time and outputs similar color names based on the designated color information. First, a color term dictionary was constructed for basic color search, which includes color names and the R, G, B values of each color from the Korean standard digital color palette program and the Wikipedia color list. The dictionary was made more robust by adding 138 color names transliterated from English into Korean, together with their corresponding RGB values. As a result, the final color dictionary includes a total of 671 color names and corresponding RGB values. The proposed method starts from the specific color a user searched for. Then, the presence of the searched color in the built color dictionary is checked. If the color exists in the dictionary, its RGB values are used as the reference values of the retrieved color. If the searched color does not exist in the dictionary, the top-5 Google image search results for the searched color are crawled, and average RGB values are extracted from a certain middle area of each image. To extract the RGB values from the images, a variety of different methods were attempted, since simply taking the average of the RGB values of the center area of an image has limits. As a result, clustering the RGB values in a certain area of each image and taking the average value of the densest cluster as the reference value showed the best performance. Based on the reference RGB values of the searched color, the RGB values of all the colors in the previously constructed color dictionary are compared, and a color list is created with the colors within a range of ±50 for each R value, G value, and B value. Finally, using the Euclidean distance between these candidates and the reference RGB values of the searched color, the colors with the highest similarity, up to five, become the final outcome. To evaluate the usefulness of the proposed method, we performed an experiment in which 300 color names and their corresponding RGB values were obtained through questionnaires and used to compare the RGB values obtained from four different methods, including the proposed method. The average Euclidean distance in CIE-Lab using our method was about 13.85, a relatively low distance compared to 3088 for the case using the synonym dictionary only and 30.38 for the case using the dictionary with the Korean synonym website WordNet. The case without the clustering step of the proposed method showed an average Euclidean distance of 13.88, which implies that the DBSCAN clustering of the proposed method can reduce the Euclidean distance.
This research suggests a new color synonym processing method based on RGB values that combines the dictionary method with real-time synonym processing for new color names. This method overcomes the limitations of the conventional dictionary-based approach. This research can contribute to improving the intelligence of e-commerce search systems, especially for the color search feature.
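
A minimal sketch of the matching procedure described above: DBSCAN clustering of sampled pixel RGB values, the densest cluster's mean as the reference color, a ±50-per-channel candidate filter, and Euclidean-distance ranking. The DBSCAN parameters and the toy dictionary are illustrative assumptions, not the paper's settings or its 671-entry dictionary.

```python
# Cluster an image region's RGB values, take the densest cluster's mean as the
# reference color, then rank nearby dictionary colors by Euclidean distance.
import numpy as np
from sklearn.cluster import DBSCAN

def reference_rgb(pixels, eps=10, min_samples=20):
    """pixels: (n, 3) array of RGB values sampled from the image's middle area."""
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(pixels)
    valid = labels[labels >= 0]
    if len(valid) == 0:                      # fall back to a plain average
        return pixels.mean(axis=0)
    densest = np.bincount(valid).argmax()    # cluster with the most members
    return pixels[labels == densest].mean(axis=0)

def closest_colors(ref, color_dict, window=50, k=5):
    """Return up to k dictionary colors within +/-window per channel, nearest first."""
    candidates = {name: rgb for name, rgb in color_dict.items()
                  if all(abs(r - c) <= window for r, c in zip(ref, rgb))}
    return sorted(candidates,
                  key=lambda n: np.linalg.norm(np.array(candidates[n]) - ref))[:k]

# Toy dictionary and synthetic pixel sample (values are illustrative only).
color_dict = {"scarlet": (255, 36, 0), "crimson": (220, 20, 60), "navy": (0, 0, 128)}
pixels = np.random.default_rng(0).normal((230, 30, 40), 8, size=(500, 3))
print(closest_colors(reference_rgb(pixels), color_dict))
```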

A Study for Improvement of Nursing Service Administration (병원 간호행정 개선을 위한 연구)

  • 박정호
    • Journal of Korean Academy of Nursing, v.3 no.1, pp.13-40, 1972
  • Much has been changed in the field of hospital administration in the wake of the rapid development of science, technology, and systematic hospital management. However, we still have a long way to go in organization, in the quality of hospital employees, hospital equipment, and facilities, and in financial support in order to achieve proper hospital management. The above factors greatly affect the ability of hospitals to fulfill their obligations in patient care and nursing services. The purpose of this study is to determine the optimal methods of standardization and quality nursing so as to improve present nursing services through investigations and analyses of various problems concerning nursing administration. This study was undertaken during the six-month period from October 1971 to March 1972. Forty-one comprehensive hospitals were selected from amongst the 139 in the whole country. These were categorized according to the specific purposes of their establishment: 7 university hospitals, 18 national or public hospitals, 12 religious hospitals, and 4 enterprise hospitals. The following conclusions have been reached from information obtained through interviews with the nursing directors in charge of nursing administration in each hospital and from further investigations concerning the purposes of establishment, organization, personnel arrangements, working conditions, practices of service, and budgets of the nursing service department. 1. Nursing administration and its activities in this country have been uncritically adopted from those of the developed countries. It is necessary for us to re-establish a medical and nursing system adequate for our social environment through continuous study and research. 2. The survey shows that the 7 university hospitals were chiefly concerned with education, medical care, and research; the 18 national or public hospitals with medical care, public health, and charity work; the 12 religious hospitals with medical care, charity, and missionary work; and the 4 enterprise hospitals with public health, medical care, and charity work. In general, the main purposes of the hospitals were those of charity organizations in the pursuit of medical care, education, and public benefit. 3. The survey shows that, in general, hospital facilities rate 64 per cent and medical care 60 per cent against a 100 per cent optimum basis in accordance with the medical treatment law and the approved criteria for training hospitals. In these respects, university hospitals have achieved the highest standards, followed by religious, enterprise, and national or public ones, in that order. 4. The ages of the nursing directors range from 30 to 50. The level of education achieved by most of the directors is graduation from a nursing technical high school or a three-year nursing junior college; very few have graduated from college or have taken graduate courses. 5. As for the career tenure of nurses in the hospitals: one-third of the nurses, or 38 per cent, have worked less than one year; those in the category of one to two years represent 24 per cent. This means that a total of 62 per cent of the career nurses have been practicing their profession for less than two years. Career nurses with over 5 years of experience number only 16 per cent; therefore, the efficiency of nursing services has been rated very low. 6.
As for the standard of education of the nurses: 62 per cent of them have taken a three-year course of nursing in junior colleges, and 22 per cent in nursing technical high schools. College graduate nurses come to only 15 per cent, and those with graduate courses only 0.4 per cent. This indicates that most of the nurses are from nursing technical high schools and three-year nursing junior colleges. Accordingly, it is advisable that nursing services be divided according to their functions, such as professional nurses, technical nurses, and nurse's aides. 7. The survey also shows that the purpose of nursing service administration has been regulated in writing in 74 per cent of the hospitals and not regulated in writing in 26 per cent. The general purposes of nursing are as follows: patient care, assistance in medical care, and education. The main purpose of these nursing services is to establish proper operational and personnel management which focuses on in-service education. 8. The nursing service department belongs to the medical department in almost 60 per cent of the hospitals. Even where the nursing service department is formally separated, about 24 per cent of the hospitals regard it as a functional unit within the medical department. Only 5 per cent of the hospitals keep the department as a separate one. To the contrary, approximately 12 per cent of the hospitals have not established a nursing service department at all but subordinate it to another department. In this respect, a new hospital organization is required that acknowledges the independent function of the nursing department. In 76 per cent of the hospitals there are advisory committees under the nursing department, such as a dormitory self-regulating committee, an in-service education committee, and a nursing procedure and policy committee. 9. Personnel arrangement and working conditions of nurses: 1) The ratio of nurses to patients is as follows: in university hospitals, 1 to 2.9 for hospitalized patients and 1 to 4.0 for out-patients; in religious hospitals, 1 to 2.3 for hospitalized patients and 1 to 5.4 for out-patients. Grouped together, this indicates that one nurse covers 2.2 hospitalized patients and 4.3 out-patients on a daily basis. The current medical treatment law stipulates that one nurse should care for 2.5 hospitalized patients or 30.0 out-patients. Therefore the statistics indicate that nursing services are being performed with an insufficient number of nurses to cover out-patients. The current law concerns only the minimum number of nurses and disregards the number of nurses required for operating rooms, recovery rooms, delivery rooms, newborn baby rooms, central supply rooms, and emergency rooms. Accordingly, amendment of the medical treatment law has been requested. 2) The ratio of doctors to nurses: in university hospitals, the ratio is 1 to 1.1; in national or public hospitals, 1 to 0.8; in religious hospitals, 1 to 0.5; and in private hospitals, 1 to 0.7. The average ratio is 1 to 0.8, whereas generally the ideal ratio is 3 to 1. Since the number of doctors working in hospitals has recently been increasing, the nursing services have consequently been overloaded, sacrificing services to the patients. 3) The ratio of nurses to clerical staff is 1 to 0.4. However, the ideal ratio is 5 to 1, that is, 1 to 0.2. This means that there are far more clerical personnel relative to the nursing staff than the ideal.
4) The ratio of nurses to nurse's aides: the average of 2.5 to 1 indicates that much of the nursing service is delegated to nurse's aides owing to the shortage of registered nurses. This is a main cause of the deterioration in the quality of nursing services. It is a real problem in the quest for better nursing services that certain hospitals employ a disproportionate number of nurse's aides in order to meet financial requirements. 5) As for working conditions, most hospitals employ a three-shift day with 8 hours of duty each. However, certain hospitals still use two shifts a day. 6) As for the working environment, most of the hospitals lack welfare and hygienic facilities. 7) The salary basis is highest in the private university hospitals, with enterprise hospitals next, and religious hospitals and national or public ones lowest. 8) Employment is made through paper screening, and the appointment of nurses is conditional upon the favorable opinion of the nursing directors. 9) The turnover ratio for one year in 1971 averaged 29 per cent. The reasons for leaving indicate that the most common is marriage, at up to 40 per cent, and the next is overseas employment. This high turnover ratio further causes the deterioration of efficiency in nursing services and supplementary activities. The hospital authorities concerned should take this matter into deep consideration in order to reduce turnover. 10) The importance of in-service education is well recognized and established. It has been noted that on-the-job training of nurses has been most active, with nursing directors taking charge of the orientation programs for newly employed nurses. However, it is most necessary that a comprehensive study be made of instructors, contents, and methods of education, with a separate section for in-service education. 10. Nursing service activities: 1) Division of services and job descriptions are urgently required. 81 per cent of the hospitals keep written regulations of services in accordance with nursing service manuals; 19 per cent of the hospitals do not keep written regulations. Most hospitals delegate to the nursing directors or certain supervisors the power to stipulate service regulations. 21 per cent of the hospitals have policy committees, standardization committees, and advisory committees to proceed with the stipulation of regulations. 2) Approximately 81 per cent of the hospitals have service channels in which directors, supervisors, head nurses, and staff nurses perform their appropriate services according to the service plans and make up the service reports. In approximately 19 per cent of the hospitals the staff perform their nursing services without utilizing these channels. 3) In the performance of nursing services, a ward manual is considered the most important aid, utilized in about 32 per cent of hospitals; 25 per cent of hospitals indicate they use a Kardex; 17 per cent use ward rounding; and others take advantage of work sheets or coordination with other departments through conferences. 4) About 78 per cent of hospitals have records which indicate the status of personnel, and 22 per cent have not. 5) It has been advised that morale among nurses may be increased, ensuring more efficient services, by their being able to exchange opinions and views with each other.
6) The satisfactory performance of nursing services relies on the following factors to the degrees indicated: approximately 32 per cent on systematic nursing activities and services; 27 per cent on the head nurse's ability for nursing diagnosis; 22 per cent on an effective supervisory system; 16 per cent on hospital facilities and proper supply; and 3 per cent on effective in-service education. This means that nurses, supervisors, head nurses, and directors play the most important roles in the performance of nursing services. 11. About 87 per cent of the hospitals do not have separate budgets for their nursing departments, and only 13 per cent of the hospitals have separate budgets. It is recommended that the planning and execution of the nursing administration be delegated to the pertinent administrators in order to bring about improved performance and activities in nursing services.


DEVELOPMENT OF STATEWIDE TRUCK TRAFFIC FORECASTING METHOD BY USING LIMITED O-D SURVEY DATA (한정된 O-D조사자료를 이용한 주 전체의 트럭교통예측방법 개발)

  • 박만배
    • Proceedings of the KOR-KST Conference, 1995.02a, pp.101-113, 1995
  • The objective of this research is to test the feasibility of developing a statewide truck traffic forecasting methodology for Wisconsin by using Origin-Destination surveys, traffic counts, classification counts, and other data that are routinely collected by the Wisconsin Department of Transportation (WisDOT). Development of a feasible model will permit estimation of future truck traffic for every major link in the network. This will provide the basis for improved estimation of future pavement deterioration. Pavement damage rises exponentially as axle weight increases, and trucks are responsible for most of the traffic-induced damage to pavement. Consequently, forecasts of truck traffic are critical to pavement management systems. The Pavement Management Decision Supporting System (PMDSS) prepared by WisDOT in May 1990 combines pavement inventory and performance data with a knowledge base consisting of rules for evaluation, problem identification, and rehabilitation recommendation. Without a reasonable truck traffic forecasting methodology, PMDSS is not able to project pavement performance trends in order to make assessments and recommendations for future years. However, none of WisDOT's existing forecasting methodologies has been designed specifically for predicting truck movements on a statewide highway network. For this research, the Origin-Destination survey data available from WisDOT, covering two stateline areas, one county, and five cities, are analyzed and zone-to-zone truck trip tables are developed. The resulting Origin-Destination Trip Length Frequency (OD TLF) distributions by trip type are applied to the Gravity Model (GM) for comparison with comparable TLFs from the GM. The gravity model is calibrated to obtain friction factor curves for the three trip types: Internal-Internal (I-I), Internal-External (I-E), and External-External (E-E). Both "macro-scale" calibration and "micro-scale" calibration are performed. The comparison of the statewide GM TLF with the OD TLF for the macro-scale calibration does not provide suitable results because the available OD survey data do not represent an unbiased sample of statewide truck trips. For the "micro-scale" calibration, "partial" GM trip tables that correspond to the OD survey trip tables are extracted from the full statewide GM trip table. These "partial" GM trip tables are then merged and a partial GM TLF is created. The GM friction factor curves are adjusted until the partial GM TLF matches the OD TLF. Three friction factor curves, one for each trip type, resulting from the micro-scale calibration produce a reasonable GM truck trip model. A key methodological issue for GM calibration involves the use of multiple friction factor curves versus a single friction factor curve for each trip type in order to estimate truck trips with reasonable accuracy. A single friction factor curve for each of the three trip types was found to reproduce the OD TLFs from the calibration database. Given the very limited trip generation data available for this research, additional refinement of the gravity model using multiple friction factor curves for each trip type was not warranted. In traditional urban transportation planning studies, the zonal trip productions and attractions and region-wide OD TLFs are available. However, for this research, the information available for the development of the GM model is limited to Ground Counts (GC) and a limited set of OD TLFs.
The GM is calibrated using the limited OD data, but the OD data are not adequate to obtain good estimates of truck trip productions and attractions. Consequently, zonal productions and attractions are estimated using zonal population as a first approximation. Then, Selected Link based (SELINK) analyses are used to adjust the productions and attractions and possibly recalibrate the GM. The SELINK adjustment process involves identifying the origins and destinations of all truck trips that are assigned to a specified "selected link" as the result of a standard traffic assignment. A link adjustment factor is computed as the ratio of the actual volume for the link (ground count) to the total assigned volume. This link adjustment factor is then applied to all of the origin and destination zones of the trips using that "selected link". Selected link based analyses are conducted by using both 16 selected links and 32 selected links. The SELINK analysis using 32 selected links provides the least %RMSE in the screenline volume analysis. In addition, the stability of the GM truck estimating model is preserved by using 32 selected links with three SELINK adjustments; that is, the GM remains calibrated despite substantial changes in the input productions and attractions. The coverage of zones provided by 32 selected links is satisfactory. Increasing the number of repetitions beyond four is not reasonable because the stability of the GM model in reproducing the OD TLF reaches its limits. The total volume of truck traffic captured by 32 selected links is 107% of total trip productions. But more importantly, SELINK adjustment factors for all of the zones can be computed. Evaluation of the travel demand model resulting from the SELINK adjustments is conducted by using screenline volume analysis, functional class and route specific volume analysis, area specific volume analysis, production and attraction analysis, and Vehicle Miles of Travel (VMT) analysis. Screenline volume analysis using four screenlines with 28 check points is used to evaluate the adequacy of the overall model. The total trucks crossing the screenlines are compared to the ground count totals. LV/GC ratios of 0.958 using 32 selected links and 1.001 using 16 selected links are obtained. The %RMSE for the four screenlines is inversely proportional to the average ground count totals by screenline. The magnitude of the %RMSE for the four screenlines resulting from the fourth and last GM run using 32 and 16 selected links is 22% and 31% respectively. These results are similar to the overall %RMSE achieved for the 32 and 16 selected links themselves, of 19% and 33% respectively. This implies that the SELINK analysis results are reasonable for all sections of the state. Functional class and route specific volume analysis is possible by using the available 154 classification count check points. The truck traffic crossing the Interstate highways (ISH) with 37 check points, the US highways (USH) with 50 check points, and the State highways (STH) with 67 check points is compared to the actual ground count totals. The magnitude of the overall link volume to ground count ratio by route does not show any specific pattern of over- or underestimation. However, the %RMSE for the ISH shows the least value while that for the STH shows the largest value. This pattern is consistent with the screenline analysis and the overall relationship between %RMSE and ground count volume groups.
Area specific volume analysis provides another broad statewide measure of the performance of the overall model. The truck traffic in the North area with 26 check points, the West area with 36 check points, the East area with 29 check points, and the South area with 64 check points is compared to the actual ground count totals. The four areas show similar results. No specific patterns in the LV/GC ratio by area are found. In addition, the %RMSE is computed for each of the four areas. The %RMSEs for the North, West, East, and South areas are 92%, 49%, 27%, and 35% respectively, whereas the average ground counts are 481, 1383, 1532, and 3154 respectively. As for the screenline and volume range analyses, the %RMSE is inversely related to average link volume. The SELINK adjustments of productions and attractions resulted in a very substantial reduction in the total in-state zonal productions and attractions. The initial in-state zonal trip generation model can now be revised with a new trip production rate (total adjusted productions/total population) and a new trip attraction rate. Revised zonal production and attraction adjustment factors can then be developed that only reflect the impact of the SELINK adjustments that cause increases or decreases from the revised zonal estimates of productions and attractions. Analysis of the revised production adjustment factors is conducted by plotting the factors on the state map. The east area of the state, including the counties of Brown, Outagamie, Shawano, Winnebago, Fond du Lac, and Marathon, shows comparatively large values of the revised adjustment factors. Overall, both small and large values of the revised adjustment factors are scattered around Wisconsin. This suggests that more independent variables beyond just population are needed for the development of the heavy truck trip generation model. More independent variables, including zonal employment data (office employees and manufacturing employees) by industry type, zonal private trucks owned, and zonal income data, which are not currently available, should be considered. A plot of the frequency distribution of the in-state zones as a function of the revised production and attraction adjustment factors shows the overall adjustment resulting from the SELINK analysis process. Overall, the revised SELINK adjustments show that the productions for many zones are reduced by a factor of 0.5 to 0.8, while the productions for relatively few zones are increased by factors from 1.1 to 4, with most of the factors in the 3.0 range. No obvious explanation for the frequency distribution could be found. The revised SELINK adjustments overall appear to be reasonable. The heavy truck VMT analysis is conducted by comparing the 1990 heavy truck VMT forecasted by the GM truck forecasting model, 2.975 billion, with the WisDOT computed data. This gives an estimate that is 18.3% less than the WisDOT computation of 3.642 billion VMT. The WisDOT estimates are based on sampling the link volumes for USH, STH, and CTH, which implies potential error in sampling the average link volume. The WisDOT estimate of heavy truck VMT cannot be tabulated by the three trip types, I-I, I-E (E-I), and E-E. In contrast, the GM forecasting model shows that the proportion of E-E VMT out of total VMT is 21.24%. In addition, tabulation of heavy truck VMT by route functional class shows that the proportion of truck traffic traversing the freeways and expressways is 76.5%.
Only 14.1% of total freeway truck traffic is I-I trips, while 80% of total collector truck traffic is I-I trips. This implies that freeways are traversed mainly by I-E and E-E truck traffic, while collectors are used mainly by I-I truck traffic. Other tabulations, such as average heavy truck speed by trip type, average travel distance by trip type, and the VMT distribution by trip type, route functional class, and travel speed, are useful information for highway planners to understand the characteristics of statewide heavy truck trip patterns. Heavy truck volumes for the target year 2010 are forecasted by using the GM truck forecasting model. Four scenarios are used. For better forecasting, ground count-based segment adjustment factors are developed and applied. ISH 90 & 94 and USH 41 are used as example routes. The forecasting results using the ground count-based segment adjustment factors are satisfactory for long range planning purposes, but additional ground counts would be useful for USH 41. Sensitivity analysis provides estimates of the impacts of the alternative growth rates, including information about changes in the trip types using key routes. The network-based GM can easily model scenarios with different rates of growth in rural versus urban areas, small versus large cities, and in-state zones versus external stations.
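
As a rough illustration of the two computations described above, the sketch below implements a generic gravity-model trip distribution and a SELINK-style link adjustment factor (ground count divided by assigned link volume). The zone data, the friction factor form, and the link assignment are toy assumptions, not WisDOT's calibrated model.

```python
# Toy gravity model and SELINK-style link adjustment factor.
import numpy as np

def gravity_trips(productions, attractions, travel_time, friction):
    """Trip table T_ij = P_i * A_j * F(t_ij) / sum_k(A_k * F(t_ik))."""
    F = friction(travel_time)                          # friction factors F(t_ij)
    denom = (attractions * F).sum(axis=1, keepdims=True)
    return productions[:, None] * attractions * F / denom

# Toy example: 3 zones, assumed exponential friction factor curve.
P = np.array([1000.0, 600.0, 400.0])                   # zonal productions
A = np.array([800.0, 900.0, 300.0])                    # zonal attractions
t = np.array([[5.0, 20.0, 35.0],
              [20.0, 5.0, 15.0],
              [35.0, 15.0, 5.0]])                      # zone-to-zone travel times
T = gravity_trips(P, A, t, friction=lambda t: np.exp(-0.08 * t))

# SELINK-style adjustment: the link adjustment factor is the ratio of the
# ground count to the volume assigned to the selected link.
assigned_volume = T[0, 1] + T[1, 0]                    # trips assumed to use the link
ground_count = 450.0
adjustment = ground_count / assigned_volume            # applied to the trip-end zones
print(round(adjustment, 3))
```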
