• Title/Summary/Keyword: Technology Application

Application of MicroPACS Using the Open Source (Open Source를 이용한 MicroPACS의 구성과 활용)

  • You, Yeon-Wook;Kim, Yong-Keun;Kim, Yeong-Seok;Won, Woo-Jae;Kim, Tae-Sung;Kim, Seok-Ki
    • The Korean Journal of Nuclear Medicine Technology / v.13 no.1 / pp.51-56 / 2009
  • Purpose: Most hospitals have introduced PACS, and use of the system continues to expand. Meanwhile, a small-scale PACS, called MicroPACS, can be built using open source programs. The aim of this study was to demonstrate the utility of operating a MicroPACS as a substitute back-up device for conventional storage media such as CDs and DVDs, alongside the full PACS already in use. The study describes how to set up a MicroPACS with open source software and assesses its storage capability, stability, compatibility, and the performance of operations such as query and retrieve. Materials and Methods: 1. First, we searched for open source software meeting the following criteria for building the MicroPACS: (1) it must run on the Windows operating system; (2) it must be freeware; (3) it must be compatible with the PET/CT scanner; (4) it must be easy to use; (5) it must not limit storage capacity; (6) it must support DICOM. 2. (1) To evaluate data storage performance, we compared the time required to back up data with the open source software against optical discs (CDs and DVD-RAMs), and likewise compared the time needed to retrieve data from the system and from the optical discs. (2) To estimate work efficiency, we measured the time spent locating data on CDs, DVD-RAMs, and the MicroPACS; seven technologists participated in this study. 3. To evaluate the stability of the software, we examined whether any data were lost while the system was maintained for one year, and compared this with the number of errors found in 500 randomly selected CDs. Results: 1. Among 11 open source packages, we chose the Conquest DICOM Server, which uses MySQL as its database management system. 2. (1) Comparison of back-up and retrieval times (min) showed the following: DVD-RAM (5.13, 2.26) vs. Conquest DICOM Server (1.49, 1.19) on the GE DSTE (p<0.001); CD (6.12, 3.61) vs. Conquest (0.82, 2.23) on the GE DLS (p<0.001); CD (5.88, 3.25) vs. Conquest (1.05, 2.06) on the SIEMENS scanner. (2) The time (sec) needed to locate data was: CD (156±46), DVD-RAM (115±21), and Conquest DICOM Server (13±6). 3. There was no data loss (0%) over one year, during which 12,741 PET/CT studies were stored in 1.81 TB of storage. In contrast, 14 errors were found among the 500 CDs (2.8%). Conclusions: A MicroPACS can be set up with open source software, and its performance was excellent. The system built with open source proved more efficient and more robust than the back-up process using CDs or DVD-RAMs. We believe the MicroPACS can serve as an effective data storage device, provided its operators continue to develop and systematize it.
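
For readers unfamiliar with the query/retrieve operations evaluated above, the sketch below shows how a study-level DICOM C-FIND query against a Conquest-style server might look using the pynetdicom library. This is an illustration only, not part of the original study; the host, port, AE title, and patient ID are placeholder assumptions.

```python
# Illustrative only: study-level DICOM C-FIND query against a Conquest-style
# server, as a MicroPACS client might issue it. Host, port, AE title, and
# patient ID below are placeholder assumptions.
from pydicom.dataset import Dataset
from pynetdicom import AE
from pynetdicom.sop_class import StudyRootQueryRetrieveInformationModelFind

ae = AE(ae_title="MICROPACS_CLIENT")
ae.add_requested_context(StudyRootQueryRetrieveInformationModelFind)

query = Dataset()
query.QueryRetrieveLevel = "STUDY"
query.PatientID = "12345"        # placeholder patient ID
query.StudyDate = ""             # empty = return this attribute for each match
query.StudyInstanceUID = ""

assoc = ae.associate("127.0.0.1", 5678)   # assumed Conquest host and port
if assoc.is_established:
    responses = assoc.send_c_find(query, StudyRootQueryRetrieveInformationModelFind)
    for status, identifier in responses:
        # 0xFF00 / 0xFF01 are the 'pending' statuses carrying a matched study
        if status and status.Status in (0xFF00, 0xFF01) and identifier:
            print(identifier.StudyInstanceUID, identifier.StudyDate)
    assoc.release()
```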

Utility of Wide Beam Reconstruction in Whole Body Bone Scan (전신 뼈 검사에서 Wide Beam Reconstruction 기법의 유용성)

  • Kim, Jung-Yul;Kang, Chung-Koo;Park, Min-Soo;Park, Hoon-Hee;Lim, Han-Sang;Kim, Jae-Sam;Lee, Chang-Ho
    • The Korean Journal of Nuclear Medicine Technology / v.14 no.1 / pp.83-89 / 2010
  • Purpose: The Wide Beam Reconstruction (WBR) algorithm provided by UltraSPECT, Ltd. (USA) improves image resolution by removing the effect of the collimator's line spread function and by suppressing noise; it controls the resolution and noise level automatically and yields high image quality. The aim of this study was to evaluate the usefulness of WBR for whole-body bone scans in clinical applications. Materials and Methods: Planar line-source and single photon emission computed tomography (SPECT) spatial resolution measurements were performed on an INFINA (GE, Milwaukee, WI) gamma camera equipped with low energy high resolution (LEHR) collimators. Line-source measurements were acquired at 200 kcps and 300 kcps. For the SPECT phantom studies, spatial resolution was analyzed while varying the matrix size. A clinical evaluation was also performed in forty-three patients referred for bone scans. In the first group, the scan speed was varied between 20 and 30 cm/min with an administered dose of 740 MBq (20 mCi) of ⁹⁹ᵐTc-HDP; in the second group, the dose of ⁹⁹ᵐTc-HDP was varied between 740 and 1,110 MBq (20 and 30 mCi) at the same scan speed. The acquired data were reconstructed using both the typical clinical protocol in use and the WBR protocol. Patient information was removed and a blind reading was done for each reconstruction method; for each reading, a questionnaire was completed in which the reader rated the images on a scale of 1 to 5 points. Results: Planar WBR data improved resolution by more than 10%; the full width at half maximum (FWHM) of the WBR data improved by about 16% (standard: 8.45, WBR: 7.09). SPECT WBR data improved resolution by about 50% (FWHM, standard: 3.52, WBR: 1.65). In the clinical evaluation, there was no statistically significant difference between the two methods in the bone-to-soft-tissue ratio or image resolution scores (first group p=0.07, second group p=0.458). Conclusion: The WBR method shortens the acquisition time of bone scans while providing improved image quality, and it allows the dose of radiopharmaceuticals, and therefore the radiation dose, to be reduced. The WBR method can therefore be applied to a wide range of clinical applications, providing clinical value as well as image quality.
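
The resolution figures above are full width at half maximum (FWHM) values taken from line-source profiles. As a rough illustration only (the profile and pixel size below are invented, and this is not the analysis software used in the study), FWHM can be estimated from a sampled line spread function by interpolating the half-maximum crossings:

```python
import numpy as np

def fwhm(profile, pixel_size_mm=1.0):
    """Estimate the FWHM (in mm) of a 1-D line spread function by linear
    interpolation at half of the peak value."""
    profile = np.asarray(profile, dtype=float)
    half = profile.max() / 2.0
    above = np.where(profile >= half)[0]
    left, right = above[0], above[-1]

    def cross(i, j):
        # linear interpolation of the index where the profile crosses `half`
        return i + (half - profile[i]) / (profile[j] - profile[i]) * (j - i)

    x_left = cross(left - 1, left) if left > 0 else float(left)
    x_right = cross(right + 1, right) if right < len(profile) - 1 else float(right)
    return (x_right - x_left) * pixel_size_mm

# Invented Gaussian-like profile sampled on a 1 mm pixel grid (sigma = 3 mm)
profile = np.exp(-0.5 * ((np.arange(41) - 20) / 3.0) ** 2)
print(round(fwhm(profile), 2))   # expect roughly 2.355 * 3 = 7.06 mm
```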

A Consideration of Apron's Shielding in Nuclear Medicine Working Environment (PET검사 작업환경에 있어서 APRON의 방어에 대한 고찰)

  • Lee, Seong-wook;Kim, Seung-hyun;Ji, Bong-geun;Lee, Dong-wook;Kim, Jeong-soo;Kim, Gyeong-mok;Jang, Young-do;Bang, Chan-seok;Baek, Jong-hoon;Lee, In-soo
    • The Korean Journal of Nuclear Medicine Technology / v.18 no.1 / pp.110-114 / 2014
  • Purpose: Advances in PET/CT scanners have shortened examination times and made the examination more widely available, and the number of PET/CT examinations has increased continuously. However, this also increases the radiation exposure of the workers performing them. This study measured the shielding rate of an apron against the high-energy radiation of ¹⁸F-FDG and the shielding effect obtained when workers wore an apron during PET/CT examinations, and compared this shielding rate with that for ⁹⁹ᵐTc, in order to minimize the exposure dose of radiation workers. Materials and Methods: The study covered 10 patients who visited our hospital for PET/CT examinations over 8 days, from May 2nd to May 10th, 2013. The ¹⁸F-FDG distribution room, the patient relaxing room (the waiting room after ¹⁸F-FDG injection), and the PET/CT scan room were chosen as measurement locations, and the change in dose rate was measured with and without an apron. For accurate measurement, the distance from the patient or source was fixed at 1 m. The same method was applied to a ⁹⁹ᵐTc source in order to compare the dose reduction provided by the apron. Results: 1) With only the L-block in the ¹⁸F-FDG distribution room, the average dose rate was 0.32 μSv; with the L-block plus apron it was 0.23 μSv. The differences in dose and dose rate between the two cases were 0.09 μSv and 26%, respectively. 2) Without an apron in the relaxing room, the average dose rate was 33.1 μSv; with an apron it was 22.3 μSv, a difference of 10.8 μSv and 33%. 3) Without an apron in the PET/CT room, the average dose rate was 6.9 μSv; with an apron it was 5.5 μSv, a difference of 1.4 μSv and 25%. 4) For ⁹⁹ᵐTc, the average dose rate without an apron was 23.7 μSv and with an apron 5.5 μSv, a difference of 18.2 μSv and 77%. Conclusion: In these measurements, ⁹⁹ᵐTc injected into patients showed an average shielding rate of 77%, whereas ¹⁸F-FDG showed a relatively low shielding rate of 27%. Comparing the sources alone, the shielding rate was 17% for ¹⁸F-FDG and 77% for ⁹⁹ᵐTc. Although the shielding effect for ¹⁸F-FDG was lower than for ⁹⁹ᵐTc, the apron still provided some shielding against ¹⁸F-FDG. Wearing an apron appropriate for high-energy emitters such as ¹⁸F-FDG should therefore help minimize the exposure dose of radiation workers.
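
The shielding rates quoted above are simply the percentage reduction in average dose rate with the apron relative to without it. The short sketch below re-derives them from the reported averages; it is a worked check, not part of the study, and small differences from the published percentages come from rounding of the reported averages.

```python
# Shielding rate = (dose rate without apron - dose rate with apron) / without * 100.
# The averages below are the values reported in the Results section; small
# differences from the published percentages reflect rounding of those averages.
measurements = {
    "18F-FDG distribution room (L-block vs. L-block + apron)": (0.32, 0.23),
    "18F-FDG patient relaxing room": (33.1, 22.3),
    "18F-FDG PET/CT scan room": (6.9, 5.5),
    "99mTc source": (23.7, 5.5),
}

for site, (without_apron, with_apron) in measurements.items():
    reduction = (without_apron - with_apron) / without_apron * 100
    print(f"{site}: {reduction:.0f}% reduction")
```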

Pareto Ratio and Inequality Level of Knowledge Sharing in Virtual Knowledge Collaboration: Analysis of Behaviors on Wikipedia (지식 공유의 파레토 비율 및 불평등 정도와 가상 지식 협업: 위키피디아 행위 데이터 분석)

  • Park, Hyun-Jung;Shin, Kyung-Shik
    • Journal of Intelligence and Information Systems / v.20 no.3 / pp.19-43 / 2014
  • The Pareto principle, also known as the 80-20 rule, states that roughly 80% of the effects come from 20% of the causes for many events, including natural phenomena. It has been recognized as a golden rule in business, with wide application of findings such as 20 percent of customers accounting for 80 percent of total sales. On the other hand, the Long Tail theory, which points out that "the trivial many" produce more value than "the vital few," has gained popularity in recent times with the tremendous reduction of distribution and inventory costs brought by the development of ICT (information and communication technology). This study set out to illuminate how these two primary business paradigms, the Pareto principle and the Long Tail theory, relate to the success of virtual knowledge collaboration. The importance of virtual knowledge collaboration is soaring in this era of globalization and virtualization, transcending geographical and temporal constraints. Many previous studies on knowledge sharing have focused on the factors that affect knowledge sharing, seeking to boost individual knowledge sharing and to resolve the social dilemma caused by the fact that rational individuals tend to consume rather than contribute knowledge. Knowledge collaboration can be defined as the creation of knowledge not only by sharing knowledge but also by transforming and integrating it. From this perspective, the relative distribution of knowledge sharing among participants can matter as much as the absolute amount of individual knowledge sharing. In particular, whether a larger contribution by the upper 20 percent of participants enhances the efficiency of overall knowledge collaboration is an issue of interest. This study examines the effect of this kind of knowledge-sharing distribution on the efficiency of knowledge collaboration and extends the analysis to reflect work characteristics. All analyses were conducted on actual behavioral data rather than self-reported questionnaire surveys. More specifically, we analyzed the collaborative behaviors of the editors of 2,978 English Wikipedia featured articles, the highest quality grade of articles in English Wikipedia. We adopted the Pareto ratio, the ratio of the number of knowledge contributions made by the upper 20 percent of participants to the total number of knowledge contributions in an article group, to examine the effect of the Pareto principle. In addition, the Gini coefficient, which represents the inequality of income within a group of people, was applied to capture the inequality of knowledge contribution. Hypotheses were set up based on the assumption that a higher ratio of knowledge contribution by more highly motivated participants will lead to higher collaboration efficiency, but that if the ratio becomes too high, collaboration efficiency will deteriorate because overall informational diversity is threatened and the knowledge contribution of less motivated participants is discouraged. Cox regression models were formulated for each of the focal variables, Pareto ratio and Gini coefficient, with seven control variables such as the number of editors involved in an article, the average time between successive edits of an article, and the number of sections a featured article has. The dependent variable of the Cox models is the time from article initiation to promotion to featured-article status, indicating the efficiency of knowledge collaboration. To examine whether the effects of the focal variables vary depending on the characteristics of the group task, we classified the 2,978 featured articles into two categories, academic and non-academic, where academic articles are those that reference at least one paper published in an SCI, SSCI, A&HCI, or SCIE journal. We assumed that academic articles are more complex, entail more information processing and problem solving, and thus require more skill variety and expertise. The analysis results indicate the following. First, the Pareto ratio and the inequality of knowledge sharing relate in a curvilinear fashion to collaboration efficiency in an online community, promoting it up to an optimal point and undermining it thereafter. Second, this curvilinear effect of the Pareto ratio and of the inequality of knowledge sharing on collaboration efficiency is more pronounced for more academic tasks.
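
Both focal variables can be computed directly from per-editor contribution counts. The sketch below is illustrative only (the edit counts are invented); it shows one straightforward way to obtain the Pareto ratio, i.e. the share of contributions made by the top 20% of editors, and the Gini coefficient of the contribution distribution.

```python
import numpy as np

def pareto_ratio(contributions, top_share=0.2):
    """Share of all contributions made by the top `top_share` of contributors."""
    c = np.sort(np.asarray(contributions, dtype=float))[::-1]
    k = max(1, int(np.ceil(top_share * len(c))))
    return c[:k].sum() / c.sum()

def gini(contributions):
    """Gini coefficient of the contribution distribution (0 = equal, ~1 = maximal inequality)."""
    c = np.sort(np.asarray(contributions, dtype=float))
    n = len(c)
    cum_share = np.cumsum(c) / c.sum()
    # standard Lorenz-curve based formula
    return (n + 1 - 2 * cum_share.sum()) / n

# Invented edit counts for the editors of one featured article
edits = [120, 45, 30, 9, 7, 5, 3, 2, 2, 1]
print(round(pareto_ratio(edits), 3), round(gini(edits), 3))
```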

An Ontology Model for Public Service Export Platform (공공 서비스 수출 플랫폼을 위한 온톨로지 모형)

  • Lee, Gang-Won;Park, Sei-Kwon;Ryu, Seung-Wan;Shin, Dong-Cheon
    • Journal of Intelligence and Information Systems / v.20 no.1 / pp.149-161 / 2014
  • The export of domestic public services to overseas markets faces many potential obstacles, stemming from differences in export procedures, target services, and socio-economic environments. To alleviate these problems, a business incubation platform, as an open business ecosystem, can be a powerful instrument to support the decisions taken by participants and stakeholders. In this paper, we propose an ontology model and its implementation process for a business incubation platform with an open and pervasive architecture to support public service exports. For the conceptual model of the platform ontology, export case studies are used for requirements analysis. The conceptual model shows the basic structure, with vocabulary and its meaning, the relationships between ontologies, and key attributes. For the implementation and testing of the ontology model, the logical structure is edited using the Protégé editor. The core engine of the business incubation platform is the simulator module, in which the various contexts of export businesses are captured, defined, and shared with other modules through ontologies. It is well known that an ontology, in which concepts and their relationships are represented using a shared vocabulary, is an efficient and effective tool for organizing meta-information to develop structural frameworks in a particular domain. The proposed model consists of five ontologies derived from a requirements survey of major stakeholders and their operational scenarios: service, requirements, environment, enterprise, and country. The service ontology contains several components that can find and categorize public services through a case analysis of public service exports. Key attributes of the service ontology fall into the categories objective, requirements, activity, and service. The objective category, which has sub-attributes including operational body (organization) and user, acts as a reference to search and classify public services. The requirements category relates to the functional needs at a particular phase of system (service) design or operation; its sub-attributes are user, application, platform, architecture, and social overhead. The activity category represents business processes during the operation and maintenance phase and has sub-attributes including facility, software, and project unit. The service category, with sub-attributes such as target, time, and place, acts as a reference to sort and classify the public services. The requirements ontology is derived from the basic and common components of public services and target countries. Its key attributes are business, technology, and constraints: business requirements represent the needs of processes and activities for public service export; technology represents the technological requirements for operating public services; and constraints represent the business laws, regulations, or cultural characteristics of the target country. The environment ontology is derived from case studies of target countries for public service operation. Its key attributes are user, requirements, and activity: users include stakeholders in public services, from citizens to operators and managers; the requirements attribute represents managerial and physical needs during operation; and the activity attribute represents the business processes in detail.
The enterprise ontology is introduced from a previous study, and its attributes are activity, organization, strategy, marketing, and time. The country ontology is derived from the demographic and geopolitical analysis of the target country, and its key attributes are economy, social infrastructure, law, regulation, customs, population, location, and development strategies. The priority list of target services for a certain country and/or the priority list of target countries for a certain public service is generated by a matching algorithm. These lists are used as input seeds to simulate consortium partners and government policies and programs. In the simulation, the environmental differences between Korea and the target country can be customized through a gap analysis and a work-flow optimization process. When the process gap between Korea and the target country is too large for a single corporation to cover, a consortium is considered as an alternative, and various alternatives are derived from the capability index of enterprises. For financial packages, a mix of various foreign aid funds can be simulated at this stage. It is expected that the proposed ontology model and the business incubation platform can be used by various participants in the public service export market. It could be especially beneficial to small and medium-sized businesses that have relatively fewer resources and less experience with public service export. We also expect that the open and pervasive service architecture in a digital business ecosystem will help stakeholders find new opportunities through information sharing and collaboration on business processes.
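
The paper generates priority lists by matching attributes across the ontologies; the matching algorithm itself is not spelled out here, so the sketch below is only a hypothetical illustration of the idea: ranking candidate target countries for a given public service by how many of the service's recorded requirements the country's environment already covers. All attribute names and data are assumptions.

```python
# Hypothetical matching sketch: score how well each country's environment
# covers the requirements recorded for a public service, then rank countries.
# Attribute names and example data are invented for illustration.
service_requirements = {"e-government platform", "broadband network", "privacy law"}

country_profiles = {
    "Country A": {"broadband network", "privacy law", "open data portal"},
    "Country B": {"e-government platform", "broadband network"},
    "Country C": {"privacy law"},
}

def coverage(requirements, capabilities):
    """Fraction of the service's requirements already satisfied by a country."""
    return len(requirements & capabilities) / len(requirements)

priority_list = sorted(
    country_profiles,
    key=lambda c: coverage(service_requirements, country_profiles[c]),
    reverse=True,
)
for country in priority_list:
    score = coverage(service_requirements, country_profiles[country])
    print(country, round(score, 2))
```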

Deep Learning-based Professional Image Interpretation Using Expertise Transplant (전문성 이식을 통한 딥러닝 기반 전문 이미지 해석 방법론)

  • Kim, Taejin;Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.26 no.2 / pp.79-104 / 2020
  • Recently, as deep learning has attracted attention, its use is being considered as a way to solve problems in various fields. In particular, deep learning is known to perform very well when applied to unstructured data such as text, sound, and images, and many studies have proven its effectiveness. Owing to the remarkable development of deep learning for text and images, interest in image captioning technology and its applications is increasing rapidly. Image captioning is a technique that automatically generates relevant captions for a given image by handling image comprehension and text generation simultaneously. Despite its high entry barrier, which requires analysts to process both image and text data, image captioning has established itself as one of the key fields in AI research owing to its wide applicability, and many studies have been conducted to improve its performance in various respects. Recent work attempts to create advanced captions that not only describe an image accurately but also convey the information contained in the image in a more sophisticated way. Despite these efforts, it is difficult to find studies that interpret images from the perspective of domain experts rather than that of the general public. Even for the same image, the parts of interest may differ according to the professional field of the person viewing it, and the way of interpreting and expressing the image also differs with the level of expertise. The public tends to recognize an image from a holistic and general perspective, that is, by identifying the image's constituent objects and their relationships, whereas domain experts tend to focus on the specific elements needed to interpret the image based on their expertise. This implies that the meaningful parts of an image differ depending on the viewer's perspective, even for the same image, and image captioning needs to reflect this. Therefore, in this study we propose a method to generate captions specialized for each domain by utilizing the expertise of experts in the corresponding domain. Specifically, after pre-training on a large amount of general data, the expertise of the field is transplanted through transfer learning with a small amount of expertise data. However, simply applying transfer learning with expertise data may introduce another problem: learning simultaneously from captions with different characteristics can cause a so-called 'inter-observation interference' problem, which makes it difficult to learn each characteristic point of view purely. When learning from a vast amount of data, most of this interference is averaged out and has little impact on the results; in contrast, for fine-tuning on a small amount of data, its impact can be relatively large. To solve this problem, we propose a novel 'Character-Independent Transfer-learning' scheme that performs transfer learning independently for each characteristic.
To confirm the feasibility of the proposed methodology, we performed experiments using the results of pre-training on the MSCOCO dataset, which comprises 120,000 images and about 600,000 general captions. In addition, following the advice of an art therapist, about 300 pairs of images and expertise captions were created, and these data were used for the expertise transplantation experiments. The experiments confirmed that captions generated by the proposed methodology reflect the perspective of the transplanted expertise, whereas captions generated by learning only on general data contain much content irrelevant to the expert interpretation. In this paper, we thus propose a novel approach to specialized image interpretation, presenting a method that uses transfer learning to generate captions specialized for a specific domain. In the future, by applying the proposed methodology to expertise transplantation in various fields, we expect that further research will be actively conducted to address the shortage of expertise data and to improve the performance of image captioning.
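
To make the transfer-learning step above concrete, the following PyTorch-style sketch shows the general recipe of transplanting expertise into a pretrained captioning model by freezing the image encoder and fine-tuning the language side on a small expert-caption set. The architecture, layer sizes, and training details are assumptions for illustration and do not reproduce the paper's model or its character-independent learning scheme.

```python
import torch
import torch.nn as nn

# Sketch of expertise transplantation: a captioning model pretrained on general
# data (e.g., MSCOCO) keeps its image encoder frozen, while the caption decoder
# is fine-tuned on a small expert-caption set. Shapes and names are illustrative.

class CaptioningModel(nn.Module):
    def __init__(self, vocab_size=10000, embed_dim=256, hidden_dim=512):
        super().__init__()
        self.encoder = nn.Sequential(          # stands in for a pretrained CNN
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, embed_dim),
        )
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.decoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, vocab_size)

    def forward(self, images, captions):
        feats = self.encoder(images).unsqueeze(1)      # (B, 1, E) image feature
        tokens = self.embed(captions)                  # (B, T, E) caption tokens
        out, _ = self.decoder(torch.cat([feats, tokens], dim=1))
        return self.head(out[:, :-1, :])               # position t predicts token t

model = CaptioningModel()
# model.load_state_dict(torch.load("pretrained_general_captioner.pt"))  # assumed checkpoint

# Freeze the general-purpose image encoder; fine-tune only the language side
for p in model.encoder.parameters():
    p.requires_grad = False

optimizer = torch.optim.Adam(
    [p for p in model.parameters() if p.requires_grad], lr=1e-4
)
criterion = nn.CrossEntropyLoss()

# One fine-tuning step on a (tiny) expert-caption batch; data is random here
images = torch.randn(4, 3, 64, 64)
captions = torch.randint(0, 10000, (4, 12))
optimizer.zero_grad()
logits = model(images, captions)
loss = criterion(logits.reshape(-1, logits.size(-1)), captions.reshape(-1))
loss.backward()
optimizer.step()
print(float(loss))
```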

Comparison of Results According to Reaction Conditions of Thyroglobulin Test (Thyroglobulin 검사의 반응조건에 따른 결과 비교 분석)

  • Joung, Seung-Hee;Lee, Young-Ji;Moon, Hyung-Ho;Yoo, So-yoen;Kim, Nyun-Ok
    • The Korean Journal of Nuclear Medicine Technology / v.21 no.1 / pp.39-43 / 2017
  • Purpose: Thyroglobulin (Tg) is a biological marker of differentiated thyroid carcinoma (DTC), produced by normal thyroid tissue or thyroid cancer tissue. The Tg value in DTC patients is therefore the most specific indicator for judging whether recurrence has occurred or residual thyroid cancer is present. Thyroid cancer is currently the most common cancer in Korea, and about 90% of cases are differentiated thyroid cancer. The number of patients tested for thyroid disease has also increased, and accurate and prompt results are required. However, the incubation step of the Tg assay in our hospital takes about 24 hours, delaying the reporting of results, so we could not satisfy the requirements of the clinical departments and patients. To meet these requirements, we conducted experiments that shortened the incubation time, using both company B's kit currently in use and company C's kit used in other hospitals. Through these experiments we evaluated the correlation between the original and shortened methods, sought the optimal reaction time to satisfy the needs of the departments and patients, and aimed to improve competitiveness relative to the EIA test. Materials and Methods: In September 2016, we tested samples from 65 patients with company B's kit and company C's kit under several incubation conditions. The first method was 37°C shaking for 2 hr/2 hr; the second, room-temperature (RT) shaking for 3 hr/2 hr; the third, 37°C shaking for 1 hr/1 hr. The fourth method was RT shaking for 3 hr, the original method of company C's kit; the fifth shortened the incubation to RT shaking for 2 hr; and the sixth was 37°C shaking for 2 hr. We then compared the correlations and coefficients of each method. Results: For company B's kit in current use, compared with its original method: the first method (37°C shaking 2 hr/2 hr) gave R²=0.5906 for values below 1.0 ng/mL and R²=0.9597 for values of 1.0 ng/mL or above; the second method (RT shaking 3 hr/2 hr) gave R²=0.7262 below 1.0 ng/mL and R²=0.9566 at or above 1.0 ng/mL; the third method (37°C shaking 1 hr/1 hr) gave R²=0.7728 below and R²=0.8904 above. For company C's kit, the original method (RT shaking 3 hr) gave R²=0.7542 below 1.0 ng/mL and R²=0.9711 above; the fifth method (RT shaking 2 hr) gave R²=0.5477 below and R²=0.9231 above; and the sixth method (37°C shaking 2 hr) gave R²=0.2848 below and R²=0.9028 above. Conclusion: Samples with values of 1.0 ng/mL or higher showed relatively high correlation in all six methods, but the correlation was relatively low below 1.0 ng/mL. In particular, company C's 37°C shaking 2 hr method showed large fluctuations at low concentrations of 1.0 ng/mL or less. We therefore plan to continue testing the time, equipment, and incubation temperature for the RT shaking 2 hr method and the 37°C shaking 1 hr/1 hr method of company C, which showed relatively high correlation. Afterward, an appropriate shortened protocol can be selected through additional experiments such as recovery, dilution, and sensitivity tests, providing more accurate and prompt results to the clinical departments and remaining competitive with the EIA test.
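
The comparisons above amount to regressing shortened-protocol results against the original method and reporting R² separately for samples below and at or above 1.0 ng/mL. A minimal sketch of that calculation (the paired Tg values are invented):

```python
import numpy as np
from scipy import stats

def r_squared(reference, shortened):
    """Coefficient of determination between two sets of paired Tg measurements."""
    slope, intercept, r_value, p_value, std_err = stats.linregress(reference, shortened)
    return r_value ** 2

# Invented paired Tg results (ng/mL): original protocol vs. shortened protocol
reference = np.array([0.2, 0.5, 0.8, 1.5, 4.0, 12.0, 35.0, 80.0])
shortened = np.array([0.3, 0.4, 0.9, 1.4, 4.3, 11.1, 36.5, 78.0])

low = reference < 1.0
print("R2 below 1.0 ng/mL:", round(r_squared(reference[low], shortened[low]), 3))
print("R2 at/above 1.0 ng/mL:", round(r_squared(reference[~low], shortened[~low]), 3))
```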

Development of a Stock Trading System Using M & W Wave Patterns and Genetic Algorithms (M&W 파동 패턴과 유전자 알고리즘을 이용한 주식 매매 시스템 개발)

  • Yang, Hoonseok;Kim, Sunwoong;Choi, Heung Sik
    • Journal of Intelligence and Information Systems / v.25 no.1 / pp.63-83 / 2019
  • Investors prefer to look for trading points based on the shapes that appear in charts rather than on complex analyses such as corporate intrinsic value analysis or technical indicator analysis. Pattern analysis, however, is difficult and has been computerized far less than users need. In recent years there have been many studies of stock price patterns using machine learning techniques, including neural networks, in the field of artificial intelligence (AI). In particular, the development of IT has made it easier to analyze huge numbers of charts to find patterns that can predict stock prices. Although short-term price forecasting performance has improved, long-term forecasting power remains limited, so such models are used for short-term trading rather than long-term investment. Other studies have focused on mechanically and accurately identifying patterns that earlier techniques could not recognize, but these can be vulnerable in practice, because whether the patterns found are suitable for trading is a separate question. Typically, when a meaningful pattern is found, a point matching the pattern is identified and performance is measured after n days on the assumption that a purchase was made at that point in time. Because this approach calculates virtual returns, it can diverge considerably from reality. Whereas existing research tries to find patterns with price-predictive power, this study proposes to define the patterns first and to trade when a pattern with a high probability of success appears. The M & W wave patterns published by Merrill (1980) are simple because they can be distinguished by five turning points. Although some of these patterns have been reported to have price predictability, no performance reports from actual markets were available. The simplicity of a pattern consisting of five turning points also has the advantage of reducing the cost of increasing pattern-recognition accuracy. In this study, 16 upward-reversal patterns and 16 downward-reversal patterns are reclassified into ten groups so that they can be easily implemented in a system, and only the one pattern with the highest success rate in each group is selected for trading. Patterns that had a high probability of success in the past are likely to succeed in the future, so we trade when such a pattern occurs. The evaluation reflects a realistic situation because both the buy and the sell are assumed to have been executed. We tested three ways of calculating turning points. The first, the minimum change rate zig-zag method, removes price movements below a certain percentage and then identifies the vertices. In the second, the high-low line zig-zag method, a high price that meets the n-day high price line is taken as a peak, and a low price that meets the n-day low price line is taken as a valley. In the third, the swing wave method, a central high price that is higher than the n high prices to its left and right is taken as a peak, and a central low price that is lower than the n low prices to its left and right is taken as a valley. The swing wave method was superior to the other methods in our tests, which we interpret to mean that trading after confirming the completion of a pattern is more effective than trading while the pattern is still forming.
Because the number of possible cases in this simulation was far too large to search exhaustively for patterns with high success rates, genetic algorithms (GA) were the most suitable solution. We also performed the simulation using walk-forward analysis (WFA), which separates the test period from the application period, so that the system could respond appropriately to market changes. We optimized at the portfolio level, because optimizing the variables for each individual stock carries a risk of over-optimization; we therefore set the number of constituent stocks to 20 to increase the effect of diversification while avoiding over-optimization. We tested the KOSPI market by dividing it into six categories. The small-cap portfolio was the most successful and the high-volatility portfolio was second best, which suggests that some price volatility is needed for patterns to form, but that the highest volatility is not necessarily best.
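
Of the three turning-point definitions tested, the swing wave rule performed best: a price is a peak if it is higher than the n prices on each side, and a valley if it is lower. A minimal sketch of that rule (the prices and n below are invented, and tie handling is simplified):

```python
def swing_turning_points(prices, n=2):
    """Return (index, 'peak'|'valley') for points that are strictly higher or
    lower than the n neighbours on each side (a simplified swing wave rule)."""
    points = []
    for i in range(n, len(prices) - n):
        window = prices[i - n:i] + prices[i + 1:i + n + 1]
        if prices[i] > max(window):
            points.append((i, "peak"))
        elif prices[i] < min(window):
            points.append((i, "valley"))
    return points

# Invented daily closing prices
prices = [100, 103, 108, 105, 101, 99, 104, 110, 107, 102, 98, 103]
print(swing_turning_points(prices, n=2))
```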

The Building Plan of Online ADR Model related to the International Commercial Transaction Dispute Resolution (국제상거래 분쟁해결을 위한 온라인 ADR 모델 구축방안)

  • Kim Sun-Kwang;Kim Jong-Rack;Hong Sung-Kyu
    • Journal of Arbitration Studies / v.15 no.2 / pp.3-35 / 2005
  • The significance of Online ADR lies in the prompt and economical resolution of disputes by applying information and communication technology (the Internet) to existing ADR. However, if promptness and economic efficiency are overemphasized, the fairness and appropriateness of dispute resolution may be compromised, and Online ADR will consequently be belittled and criticized as second-class justice. In addition, because communication in Online ADR is mostly text-based, it is difficult to investigate cases and to create the atmosphere and dynamic interaction that are possible in face-to-face dispute resolution. Despite these difficulties, Online ADR is expanding not only online but also offline owing to its advantages such as promptness, low cost, and improved resolution methods, and it is expected to develop rapidly as electronic government adopts it. Accordingly, the following points must be addressed for the continuous development of Online ADR. First, in legal and institutional terms, it is necessary to establish a framework law on ADR. Such a framework law, comprehending existing mediation and arbitration, should include provisions for Online ADR, which uses electronic communication means; however, it is too early to establish a separate law for Online ADR, because Online ADR must develop on the theoretical foundations of ADR. Second, although Online ADR is expanding rapidly, it may take time for it to become established as a tool of dispute resolution. As discussed earlier, if the amount of money in dispute is large or the dispute is complicated, Online ADR may even have a negative effect on resolution. It is therefore necessary to apply Online ADR to minor or domestic cases in the early stage, accumulating experience and correcting errors. Moreover, in order to settle numerous disputes effectively, Online ADR cases should be analyzed systematically and classified by type so that similar disputes may be settled automatically, and these requirements should be reflected in the development of Online ADR systems. Third, the application of Online ADR is being extended to consumer disputes, domain name disputes, commercial disputes, legal disputes, and so on; millions of cases are settled through Online ADR, and 115 Online ADR sites are in operation throughout the world. Online ADR therefore requires continuous rather than temporary attention, and the mediators and arbitrators participating in it should be educated more intensively in negotiation and information technologies. In particular, government-led research projects should be promoted to establish an Online ADR model, supported by comprehensive research on mediation, arbitration, and Online ADR. Fourth, the most important factors in the continuous development and expansion of Online ADR are securing confidence in it and publicizing it to users. To this end, incentives and rewards should be given to specialists such as lawyers when they participate in Online ADR as mediators and arbitrators, in order to improve their expertise, and from the early stage the government and public institutions should take the initiative in promoting Online ADR so that parties involved in disputes recognize its substantial contribution to dispute resolution.
Lastly, dispute resolution through Online ADR is currently performed by organizations such as the Korea Institute for Electronic Commerce and the Korea Consumer Protection Board, and partially by the Korean Commercial Arbitration Board. Online ADR is expected to expand into offline commercial disputes in the future; in response, the Korean Commercial Arbitration Board, as the organization for commercial dispute resolution, needs to be restructured.

The Efficiency and Performance of Porous Film Containing Freshness Maintenance Ingredients (신선도 유지성분을 포함한 다공성 필름의 성능과 효능)

  • Kim, Kyeong-Yee;Lee, Eun-Kyung
    • Food Science and Preservation / v.16 no.6 / pp.810-816 / 2009
  • To identify effective food packaging compounds that could significantly affect the freshness of stored food, the efficiency and performance of a porous polypropylene film containing mustard oil as a freshness maintenance ingredient were studied by GC-MS analysis and by storage testing of bread. The AITC (allyl isothiocyanate)-emitting properties of films impregnated with mustard oil were evaluated by GC-MS; AITC, extracted from mustard oil, was used in the vapor phase as an effective antimicrobial agent. Films were prepared under four different conditions (abbreviated 25SF1, 25SF2, 50LF, and IAF), and the amounts of AITC inside vinyl packs made from the four films were measured. The results showed that the 25SF2 film (width 25 mm, length 20 cm) emitted a greater amount of AITC than the 50LF film (width 50 mm, length 20 cm). We also confirmed that gas emission was greater between the film layers than from the interior of the film. In storage testing with the various films at 35°C for 25 days, the 25SF2 film preserved bread markedly better than the 50LF film, consistent with the fact that the 25SF2 film emitted the largest amount of AITC. The AITC emission capacities of 2 cm film samples were measured in bottles of various volumes (43 mL, 500 mL, 1,000 mL) under both closed and open conditions. In the closed system, the AITC content for the film in the 43 mL bottle was much higher than for the other bottles, while in the open system AITC was emitted rapidly, with relatively little residual gas emission after 4 days. Mustard oil is therefore a useful freshness maintenance ingredient; analysis of the kinetics of AITC emission from the various films was helpful for developing films with optimal antimicrobial effects and will allow such films to be applied in food packaging systems.