• Title/Summary/Keyword: 문제의 구조 (the structure of the problem)

Search Results: 9,949

Development of Geometrical Quality Control Real-time Analysis Program using an Electronic Portal Imaging (전자포탈영상을 이용한 기하학적 정도관리 실시간 분석 프로그램의 개발)

  • Lee, Sang-Rok;Jung, Kyung-Yong;Jang, Min-Sun;Lee, Byung-Gu;Kwon, Young-Ho
    • The Journal of Korean Society for Radiation Therapy / v.24 no.2 / pp.77-84 / 2012
  • Purpose: To develop a geometrical quality control real-time analysis program using electronic portal imaging to replace the film evaluation method. Materials and Methods: A set of geometrical quality control items was established with the Eclipse treatment planning system (Version 8.1, Varian, USA) using the Electronic Portal Imaging Device (EPID), after resolving the problems caused by the fixed substructure of the linear accelerator (CL-iX, Varian, USA). An electronic portal image (single exposure before plan) was created at the treatment room's 4DTC (Version 10.2, Varian, USA), and a beam was irradiated for each item. The entire set of electronic portal images was then acquired in Off-line Review and evaluated with a self-developed geometrical quality control real-time analysis program. For evaluation, the intra-fraction error was analyzed by executing the procedure 5 times in a row under identical conditions on the same day, and to confirm the inter-fraction error, the procedure was executed for 10 days under identical conditions and compared with the film evaluation method using an Iso-align™ quality control device. Measurement and analysis times were recorded separately as the time from device setup to data acquisition and the time from acquisition to the completion of analysis, and user convenience and the execution processes were compared. Results: The intra-fraction errors averaged 0.1, 0.2, 0.3, and 0.2 mm for light-radiation field coincidence, the collimator rotation axis, the couch rotation axis, and the gantry rotation axis, respectively. The inter-fraction errors, checked over 10 days of continuous quality control, averaged 1.7, 1.4, 0.7, and 1.1 mm for the same items. The average measurement times were 36 minutes for the film evaluation method and 15 minutes for the electronic portal imaging system, and the average analysis times were 30 and 22 minutes, respectively. Conclusion: Electronic portal imaging proved to be an efficient tool for geometrical quality control. It not only reduces costs by eliminating film, but also shortens measurement and analysis times, enhances user convenience, and streamlines the execution process by removing film-developing steps. In addition, images evaluated with the self-developed geometrical quality control real-time analysis program can be processed as data, which supports the storage of the results.
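
A hedged sketch of the repeated-measurement analysis described above: computing an intra-fraction error from five same-day measurements and an inter-fraction error across ten daily sessions. The numbers and the exact error definitions are illustrative assumptions, not the paper's; only numpy is assumed.

```python
import numpy as np

# Hypothetical offsets (mm) for one QC item, e.g. the collimator rotation
# axis, measured 5 times in a row under identical conditions on one day.
same_day = np.array([0.18, 0.22, 0.19, 0.21, 0.20])

# One illustrative intra-fraction error: mean absolute deviation from the
# session mean (the paper reports averages of 0.1-0.3 mm per item).
intra_error = np.mean(np.abs(same_day - same_day.mean()))

# Hypothetical daily offsets (mm) over 10 consecutive days for the same item.
daily = np.array([1.5, 1.8, 1.6, 1.9, 1.7, 1.4, 1.8, 1.7, 1.6, 1.9])
inter_error = daily.mean()   # illustrative inter-fraction error

print(f"intra-fraction: {intra_error:.2f} mm, inter-fraction: {inter_error:.2f} mm")
```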


A Study on the Relationship of Learning, Innovation Capability and Innovation Outcome (학습, 혁신역량과 혁신성과 간의 관계에 관한 연구)

  • Kim, Kui-Won
    • Journal of Korea Technology Innovation Society / v.17 no.2 / pp.380-420 / 2014
  • We increasingly see the importance of employees acquiring sufficient expert or innovation capability to prepare for ever-growing uncertainties in their operation domains. Despite this, there has not been enough research on how the operational inputs to employees' innovation outcomes, innovation activities such as the acquisition, exercise, and promotion of employees' innovation capability, and the resulting innovation outcomes interact with each other. This is believed to be because most current research on innovation focuses on the country, industry, and corporate-entity levels rather than on an individual corporation's innovation inputs, outcomes, and activities themselves. This study therefore departs from the prevalent frames and views on innovation and focuses on the strategic policies required to enhance an organization's innovation capabilities by quantitatively analyzing employees' innovation outcomes and the innovation activities most relevant to them. The research model offers both a linear and a structural model of the trio of learning, innovation capability, and innovation outcome, and tests four hypotheses: Hypothesis 1] Different levels of innovation capability produce different innovation outcomes (accepted, p = 0.000 < 0.05). Hypothesis 2] Different amounts of learning time produce different innovation capabilities (rejected, p = 0.199 and 0.220 > 0.05). Hypothesis 3] Different amounts of learning time produce different innovation outcomes (accepted, p = 0.000 < 0.05). Hypothesis 4] Innovation capability acts as a significant mediating variable in the relationship between the amount of learning time and innovation outcome (structural modeling test). The structural model, after the t-tests on Hypotheses 1 through 4, shows that irregular on-the-job training and e-learning directly affect the learning time factor, while job experience level, employment period, and capability level measurement directly affect the innovation capability factor. This is further supported by the finding that patent time directly affects the innovation capability factor rather than the learning time factor. Through the four hypotheses, this study proposes the following measures to maximize an organization's innovation outcome: first, frequent irregular on-the-job training based on an e-learning system; second, efficient management of employment period, job skill levels, etc. through active sponsorship and energization of communities of practice (CoP) as a form of irregular learning; and third, an innovation outcome function of the form Yᵢ = f(e, i, s, t, w) + ε, soundly based on a smart system of capability level measurement. This innovation outcome function is what the study considers the most appropriate and important reference model.

A Case Study of Artist-centered Art Fair for Popularizing Art Market (미술 대중화를 위한 작가중심형 아트페어 사례 연구)

  • Kim, Sun-Young;Yi, Eni-Shin
    • Journal of the Korea Academia-Industrial cooperation Society / v.19 no.2 / pp.279-292 / 2018
  • Unlike the global art market, which recovered rapidly from the impact of the 2008 Global Financial Crisis, the Korean art market has not yet fully recovered. The gallery-oriented distribution system, the weak functioning of the primary art market, and a market structure centered on a small number of collectors make it difficult for emerging and mid-career artists to enter the market and, as a result, deepen the economic polarization of artists. In addition, the high price of artworks restricts participation by the general public. This study began from the view that raising public interest and participation in the art market is urgent. To this end, we noted that public awareness of art transactions can be a starting point for strengthening the fragile art market, focusing on the 'Artist-centered Art Fair' rather than existing art fairs. To examine the contribution of such an art fair to the popularization of the art market, we analyzed the case of the 'Visual Artist Market (VAM)' project of the Korea Arts Management Service. The results show that the 'Artist-centered Art Fair' focuses on providing market-entry opportunities to emerging and mid-career artists rather than on the interests of distributors, and promotes the popularization of the art market by offering low-priced works to the general public. The 'Artist-centered Art Fair' also appears to play a primary role in the public sector in fostering solid groups of artists and establishing healthy distribution networks in the Korean art market. In the long run, however, its sustainable development should be promoted through indirect support, such as a publicity platform or consumer finance support, rather than direct support.

A study on the reduction on magnetic susceptible artifacts through the usage of silicon (실리콘을 이용한 자화율 인공물의 감소에 관한 연구)

  • Choi, Kwan-Woo;Lee, Ho-Beom
    • Journal of the Korea Academia-Industrial cooperation Society / v.13 no.12 / pp.5937-5942 / 2012
  • This study used silicone, whose density is similar to that of human tissue, to compensate for uneven regions in contact with air in order to reduce susceptibility artifacts. The subjects were 16 normal volunteers, and the regions examined were those with many uneven, structurally complicated areas where susceptibility artifacts readily form because of the large surface area in contact with air. A 3.0 T superconducting magnetic resonance device was used, and SPIR images, which are sensitive to susceptibility differences, were obtained as sagittal planes along a line extending through the metatarsals and phalanges, including the middle of the longitudinal arch and the five distal phalanges. The analysis compared the SNR and CNR before and after applying silicone to determine how reducing the susceptibility difference between tissue and air reduces susceptibility artifacts; paired-sample t-tests were used for the statistical analysis. The results showed that susceptibility artifacts were reduced in images of the uneven regions compensated with silicone. The SNR increased significantly from 3.91±1.33 before application to 21.69±4.52 after application, and the CNR decreased significantly from 28.97±8.20 before application to 4.88±2.14 after application. In conclusion, the method does not affect the voxel, yet it compensates for the fundamental problem of the susceptibility difference between air and the body. Its application is simple, and the study is significant in proposing a low-cost, highly efficient way to reduce susceptibility artifacts.
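
A minimal sketch of the paired-sample t-test used above, with per-subject SNR values simulated from the reported group statistics (3.91±1.33 before, 21.69±4.52 after, n = 16); numpy and scipy are assumed, and the simulated values are illustrative, not the study's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Simulated per-subject SNR before/after applying silicone (illustrative).
snr_before = rng.normal(3.91, 1.33, size=16)
snr_after = rng.normal(21.69, 4.52, size=16)

# Paired-sample t-test: each subject serves as its own control.
t_stat, p_value = stats.ttest_rel(snr_after, snr_before)
print(f"t = {t_stat:.2f}, p = {p_value:.2e}")
```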

A Study on the Development of a Home Mess-Cleanup Robot Using an RFID Tag-Floor (RFID 환경을 이용한 홈 메스클린업 로봇 개발에 관한 연구)

  • Kim, Seung-Woo;Kim, Sang-Dae;Kim, Byung-Ho;Kim, Hong-Rae
    • Journal of the Korea Academia-Industrial cooperation Society / v.11 no.2 / pp.508-516 / 2010
  • An autonomous and automatic home mess-cleanup robot is newly developed in this paper. Vacuum cleaners have lightened the burden of household chores, but the operational labor they entail is still severe. Recently, cleaning robots were commercialized to address this, but they were not fully successful either, because they could not handle mess-cleanup: the clean-up of large trash and the arrangement of newspapers, clothes, etc. We therefore develop a new home mess-cleanup robot (McBot) to overcome this problem. The robot needs agile navigation and a novel manipulation system for mess-cleanup. The autonomous navigation system must be controlled for full scanning of the living room and precise tracking of the desired path. It must also be able to recognize its own absolute position and orientation and to distinguish the messed objects to be cleaned up from obstacles that should merely be avoided. The manipulator, which is not needed in a vacuum-cleaning robot, distinguishes large trash to be cleaned from messed objects to be arranged, adapts to the form of the messed objects, and carries them properly to their destinations. In particular, in this paper, we describe our approach for achieving accurate localization using RFID for home mess-cleanup robots. Finally, the effectiveness of the developed McBot is confirmed through live tests of the mess-cleanup task.
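
The abstract does not detail the RFID localization method, so the following is only a generic sketch of one common tag-floor scheme, not necessarily McBot's: tags are embedded at known floor coordinates, and the robot's position is estimated from the tags its reader currently detects. The tag map and centroid rule are assumptions.

```python
# Hypothetical RFID tag-floor localization: tags at known floor positions;
# the position estimate is the centroid of the currently detected tags.
TAG_MAP = {               # tag ID -> (x, y) in meters (assumed layout)
    "A1": (0.0, 0.0), "A2": (0.5, 0.0),
    "B1": (0.0, 0.5), "B2": (0.5, 0.5),
}

def estimate_position(detected_tags):
    """Centroid of the known coordinates of all currently detected tags."""
    coords = [TAG_MAP[t] for t in detected_tags if t in TAG_MAP]
    if not coords:
        raise ValueError("no known tags in read range")
    xs, ys = zip(*coords)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

print(estimate_position(["A2", "B2"]))  # -> (0.5, 0.25)
```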

T-Cache: a Fast Cache Manager for Pipeline Time-Series Data (T-Cache: 시계열 배관 데이타를 위한 고성능 캐시 관리자)

  • Shin, Je-Yong;Lee, Jin-Soo;Kim, Won-Sik;Kim, Seon-Hyo;Yoon, Min-A;Han, Wook-Shin;Jung, Soon-Ki;Park, Se-Young
    • Journal of KIISE: Computing Practices and Letters / v.13 no.5 / pp.293-299 / 2007
  • Intelligent pipeline inspection gauges (PIGs) are inspection vehicles that move along the inside of a (gas or oil) pipeline and acquire signals (also called sensor data) from their surrounding rings of sensors. By analyzing the signals captured by intelligent PIGs, we can detect pipeline defects, such as holes, curvatures, and other potential causes of gas explosions. Two major data access patterns are apparent when an analyst accesses the pipeline signal data. The first is a sequential pattern, where the analyst reads the sensor data only once, in sequence. The second is a repetitive pattern, where the analyst repeatedly reads the signal data within a fixed range; this is the dominant pattern in analyzing the signal data. Existing PIG software reads signal data directly from the server at every user's request, incurring network transfer and disk access costs. It works well only for the sequential pattern, not for the more dominant repetitive pattern. This problem becomes serious in a client/server environment where several analysts analyze the signal data concurrently. To tackle it, we devise a fast in-memory cache manager, called T-Cache, which treats pipeline sensor data as multiple time-series and caches those time-series efficiently. To the best of the authors' knowledge, this is the first research on caching pipeline signals on the client side. We propose a new caching unit, the signal cache line: a set of time-series signal data covering a fixed distance. We also provide the data structures used in T-Cache, including smart cursors, and the associated algorithms. Experimental results show that T-Cache performs much better for the repetitive pattern in terms of disk I/Os and elapsed time. Even for the sequential pattern, T-Cache shows almost the same performance as a system without caching, indicating that its caching overhead is negligible.
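
A hedged sketch of the caching unit described above, not the authors' implementation: an in-memory LRU cache whose entries are "signal cache lines", blocks of time-series samples covering a fixed distance along the pipeline. The line size, capacity, and fetch callback are assumptions.

```python
from collections import OrderedDict

CACHE_LINE_METERS = 10.0   # distance covered by one cache line (assumed)
MAX_LINES = 1024           # cache capacity in lines (assumed)

class TCacheSketch:
    """LRU cache keyed by (sensor_id, line_index); one entry holds the
    time-series samples for a fixed distance range along the pipeline."""

    def __init__(self, fetch_from_server):
        self._lines = OrderedDict()          # (sensor, line) -> samples
        self._fetch = fetch_from_server      # fallback: server + disk read

    def read(self, sensor_id, distance):
        key = (sensor_id, int(distance // CACHE_LINE_METERS))
        if key in self._lines:               # hit: refresh LRU order
            self._lines.move_to_end(key)
            return self._lines[key]
        samples = self._fetch(sensor_id, key[1])  # miss: fetch a whole line
        self._lines[key] = samples
        if len(self._lines) > MAX_LINES:     # evict least recently used
            self._lines.popitem(last=False)
        return samples
```

Under this scheme, repeated reads within a fixed range hit the same cache lines, avoiding the server round-trips that dominate the repetitive pattern.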

Clinical Characteristics and Prognosis of Neonatal Seizures (신생아 경련의 임상적 양상 및 예후에 관한 고찰)

  • Kim, Chang Wu;Jang, Chang Hwan;Kim, Heng Mi;Choe, Byung Ho;Kwon, Soon Hak
    • Clinical and Experimental Pediatrics / v.46 no.12 / pp.1253-1259 / 2003
  • Background: Seizures in the neonate are relatively common, and their clinical features differ from those in children and adults. This study aimed to provide the clinical profiles of neonatal seizures in our hospital. Methods: A total of 41 newborns with seizures were enrolled over a period of three years. They were evaluated with special reference to risk factors, neurologic examinations, laboratory data, neuroimaging studies, EEG findings, seizure types, response to treatment, and prognosis. Results: The average age at seizure onset was 6.1±4.6 days; the most common type was multifocal clonic seizure (42% of patients), and 24% had subtle seizures. Known risk factors included abnormal delivery history, birth asphyxia, and electrolyte imbalance; however, the cause remained obscure in about 20% of cases. More than 50 percent showed abnormal lesions on neuroimaging studies, such as brain hemorrhage, periventricular leukomalacia, brain infarction, cortical dysplasia, and hydrocephalus, and 17 of 32 patients showed abnormal electroencephalographic patterns. Phenobarbital was tried as the first-line antiepileptic drug, and phenytoin was added if it failed to control the seizures. Treatment was terminated in the majority of patients during the hospital stay. The overall prognosis was relatively good, except for those with an abnormal EEG background or congenital central nervous system malformations. Conclusion: Neonatal seizures may permanently disrupt brain development. Better understanding of their clinical profiles and appropriate management may reduce neurological disability in later childhood.

Effect of Bacillus Strains on the Chungkook-jang Processing (1) Changes of the Components and Enzyme Activities During Chungkookjang-koji Preparation (균주(菌株)를 달리한 청국장의 제조(製造)에 관(關)한 연구(硏究) 제1보(第1報)-청국장메주 발효과정중(醱酵過程中)의 성분(成分)과 효소력(酵素力)-)

  • Lee, Hyun-Ja;Suh, Jung-Sook
    • Journal of Nutrition and Health / v.14 no.2 / pp.97-104 / 1981
  • To study the changes in components and enzyme activities during Chungkookjang-koji preparation, kojis were prepared with Bacillus natto, with Bacillus subtilis, and by the traditional method. The temperature of the koji materials during preparation differed markedly among the experimental groups. The contents of ethyl alcohol, reducing sugar, amino nitrogen, and water-soluble nitrogen changed with the koji preparation stage and the experimental group. Amylase and protease activities changed irregularly during standing, were weak overall, and did not differ remarkably among the groups.


Improving Bidirectional LSTM-CRF model Of Sequence Tagging by using Ontology knowledge based feature (온톨로지 지식 기반 특성치를 활용한 Bidirectional LSTM-CRF 모델의 시퀀스 태깅 성능 향상에 관한 연구)

  • Jin, Seunghee;Jang, Heewon;Kim, Wooju
    • Journal of Intelligence and Information Systems / v.24 no.1 / pp.253-266 / 2018
  • This paper proposes a sequence tagging methodology to improve the performance of NER (Named Entity Recognition) in a QA system. To retrieve the correct answers stored in a database, the user's query must be translated into a database language such as SQL (Structured Query Language), so that the computer can interpret it; this requires identifying the classes and data names contained in the database. The existing method, which looks up the query words in the database to recognize entities, cannot disambiguate homonyms or multi-word phrases because it ignores the context of the user's query. When there are multiple matches, all of them are returned, so the query admits many interpretations and the computation becomes expensive. To overcome this, this study reflects the contextual meaning of the query using a Bidirectional LSTM-CRF. We also address a weakness of neural models, their inability to identify unseen words, by using an ontology-knowledge-based feature. Experiments were conducted on an ontology knowledge base for the music domain, and the performance was evaluated. To evaluate the proposed Bidirectional LSTM-CRF accurately, we converted words appearing in the training queries into unseen words, testing whether words that were in the database but unseen during training could still be identified correctly. As a result, the model recognized entities in context and recognized unseen words without retraining, and the overall entity recognition performance was confirmed to improve.
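
The abstract does not specify how the ontology feature is encoded; as a hedged sketch, one common realization is a gazetteer flag per ontology class, concatenated to each token embedding before the BiLSTM layer. The ontology entries below are hypothetical; only numpy is assumed.

```python
import numpy as np

# Hypothetical ontology entries for a music domain (entity surface forms).
ONTOLOGY = {"artist": {"bts", "iu"}, "album": {"love yourself"}}

def ontology_features(tokens):
    """One flag per ontology class for each token: 1.0 if the token appears
    as (part of) an entry of that class, else 0.0. The resulting matrix
    would be concatenated with the word embeddings fed to the BiLSTM."""
    classes = sorted(ONTOLOGY)
    feats = np.zeros((len(tokens), len(classes)))
    for i, tok in enumerate(tokens):
        for j, cls in enumerate(classes):
            if any(tok.lower() in entry.split() for entry in ONTOLOGY[cls]):
                feats[i, j] = 1.0
    return feats

print(ontology_features(["play", "love", "yourself", "by", "bts"]))
```

Because the flag comes from the ontology rather than the training data, it can fire on words the tagger never saw in training, which is how such a feature helps with unseen words.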

The Effect of Meta-Features of Multiclass Datasets on the Performance of Classification Algorithms (다중 클래스 데이터셋의 메타특징이 판별 알고리즘의 성능에 미치는 영향 연구)

  • Kim, Jeonghun;Kim, Min Yong;Kwon, Ohbyung
    • Journal of Intelligence and Information Systems / v.26 no.1 / pp.23-45 / 2020
  • Big data is being created in a wide variety of fields, such as medical care, manufacturing, logistics, sales, and social media, and dataset characteristics are correspondingly diverse. To secure competitiveness, companies need to improve their decision-making capacity using classification algorithms. However, most practitioners do not have sufficient knowledge of which classification algorithm suits a specific problem. In other words, determining the appropriate classification algorithm for a dataset's characteristics has been a task requiring expertise and effort, because the relationship between dataset characteristics (called meta-features) and the performance of classification algorithms is not fully understood. Moreover, there has been little research on meta-features reflecting the characteristics of multi-class datasets. The purpose of this study is therefore to analyze empirically whether the meta-features of multi-class datasets have a significant effect on the performance of classification algorithms. We grouped the meta-features of multi-class datasets into two factors, data structure and data complexity, and selected seven representative meta-features. Among these, we included the Herfindahl-Hirschman Index (HHI), originally a market concentration index, to replace the IR (Imbalance Ratio), and we developed a new index, the Reverse ReLU Silhouette Score, for the meta-feature set. From the UCI Machine Learning Repository, six representative datasets (Balance Scale, PageBlocks, Car Evaluation, User Knowledge-Modeling, Wine Quality (red), Contraceptive Method Choice) were selected. Each dataset was classified using the algorithms selected in the study (KNN, Logistic Regression, Naïve Bayes, Random Forest, and SVM). For each dataset, 10-fold cross-validation was applied; oversampling from 10% to 100% was applied to each fold, and the meta-features of the dataset were measured. The selected meta-features are HHI, Number of Classes, Number of Features, Entropy, Reverse ReLU Silhouette Score, Nonlinearity of Linear Classifier, and Hub Score; F1-score was the dependent variable. The results showed that the six meta-features, including the Reverse ReLU Silhouette Score and the HHI proposed in this study, have a significant effect on classification performance: (1) the HHI proposed in this study was significant for classification performance; (2) unlike the number of classes, the number of features has a significant positive effect; (3) the number of classes has a negative effect; (4) entropy has a significant effect; (5) the Reverse ReLU Silhouette Score is significant at the 0.01 level; and (6) the nonlinearity of linear classifiers has a significant negative effect. The analyses by individual classification algorithm were consistent with these results, except that in the regression analysis for the Naïve Bayes algorithm, unlike the other algorithms, the number of features was not significant.
This study makes two theoretical contributions: (1) two new meta-features (the HHI and the Reverse ReLU Silhouette Score) were shown to be significant, and (2) the effects of data characteristics on classification performance were investigated through meta-features. Its practical contributions are as follows: (1) the results can be used to develop a system that recommends classification algorithms according to dataset characteristics; (2) because data characteristics differ, many data scientists search for the optimal algorithm by repeatedly adjusting algorithm parameters, wasting hardware, cost, time, and manpower, and this study can reduce that waste. The study is expected to be useful for machine learning and data mining researchers, practitioners, and developers of machine learning-based systems. The paper consists of an introduction, related research, the research model, experiments, and a conclusion and discussion.
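
For reference, the HHI of a class distribution is the sum of squared class shares, and class entropy is the Shannon entropy of those shares. A minimal sketch of both as dataset meta-features follows (numpy assumed; the Reverse ReLU Silhouette Score is the authors' own index and is not reproduced here).

```python
import numpy as np

def class_hhi(labels):
    """Herfindahl-Hirschman Index of the class distribution: the sum of
    squared class shares. Higher values mean a more imbalanced dataset."""
    _, counts = np.unique(labels, return_counts=True)
    shares = counts / counts.sum()
    return float(np.sum(shares ** 2))

def class_entropy(labels):
    """Shannon entropy of the class distribution, in nats."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log(p)))

y = np.array([0, 0, 0, 0, 1, 1, 2, 2, 2, 2])   # toy 3-class labels
print(class_hhi(y), class_entropy(y))          # -> 0.36 and ~1.05
```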