• Title/Summary/Keyword: time-domain methods

Search Result 866

Evaluation of Drainage Improvement Effect Using Geostatistical Analysis in Poorly Drained Sloping Paddy Soil (경사지 배수불량 논에서 배수개선 효과의 지구통계적 기법을 이용한 평가)

  • Jung, Ki-Yuol;Yun, Eul-Soo;Park, Ki-Do;Park, Chang-Young
    • Korean Journal of Soil Science and Fertilizer
    • /
    • v.43 no.6
    • /
    • pp.804-811
    • /
    • 2010
  • The lower portion of a sloping paddy field normally contains excessive moisture and a higher water table, caused by the inflow of groundwater from the upper part of the field, resulting in a non-uniform water content distribution. Four drainage methods, namely Open Ditch, Vinyl Barrier, Pipe Drainage, and Tube Bundle, were installed for multiple land use within 1 m of the lower edge of the upper embankment of sloping alluvial paddy fields. Knowledge of the spatial variability of soil water properties is of primary importance for the management of agricultural lands. This study was conducted to evaluate the effect of drainage on the spatial variability of soil water content using geostatistical analysis. Soil water content was measured with a TDR (Time Domain Reflectometry) sensor after the installation of subsurface drainage on a regular square grid in an 80 m by 20 m alluvial sloping paddy field located at Oesan-ri, Buk-myeon, Changwon-si ($35^{\circ}22^{\prime}$ N, $128^{\circ}35^{\prime}$). To obtain the most accurate field information, the sampling grid was divided into a 3 m by 3 m unit mesh for each of the four drainage types. The results showed that the spatial variance of soil water content was reduced by subsurface drainage, and soybean yield showed the same trend. The semivariogram "sill" of soil water content was 9.7 for Pipe Drainage, 86.2 for Open Ditch, 66.8 for Vinyl Barrier, and 15.7 for Tube Bundle.
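
A hedged sketch of how the geostatistical step could look: the empirical semivariogram below is computed from synthetic grid data standing in for the 3 m TDR mesh; the function, grid spacing, and values are illustrative assumptions, not the paper's data or software.

```python
# Minimal empirical-semivariogram sketch (assumed, not from the paper).
import numpy as np

def empirical_semivariogram(coords, values, lag_edges):
    """Return the average semivariance gamma(h) for each lag-distance bin."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    sq_diff = 0.5 * (values[:, None] - values[None, :]) ** 2
    iu = np.triu_indices(len(values), k=1)          # each point pair once
    d, sq_diff = d[iu], sq_diff[iu]
    gamma = []
    for lo, hi in zip(lag_edges[:-1], lag_edges[1:]):
        mask = (d >= lo) & (d < hi)
        gamma.append(sq_diff[mask].mean() if mask.any() else np.nan)
    return np.array(gamma)

# Synthetic 3 m grid of soil water content (%) for illustration only.
rng = np.random.default_rng(0)
xx, yy = np.meshgrid(np.arange(0, 30, 3), np.arange(0, 30, 3))
coords = np.column_stack([xx.ravel(), yy.ravel()]).astype(float)
values = 25 + 0.3 * coords[:, 0] + rng.normal(0, 2, len(coords))

lags = np.arange(0, 30, 3)
gamma = empirical_semivariogram(coords, values, lags)
# The "sill" reported in the abstract corresponds to the plateau that
# gamma(h) reaches at large lag distances (roughly gamma[-1] here).
print(gamma)
```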

The Analysis on the Relationship between Firms' Exposures to SNS and Stock Prices in Korea (기업의 SNS 노출과 주식 수익률간의 관계 분석)

  • Kim, Taehwan;Jung, Woo-Jin;Lee, Sang-Yong Tom
    • Asia pacific journal of information systems
    • /
    • v.24 no.2
    • /
    • pp.233-253
    • /
    • 2014
  • Can the stock market really be predicted? Stock market prediction has attracted much attention from many fields, including business, economics, statistics, and mathematics. Early research on stock market prediction was based on random walk theory (RWT) and the efficient market hypothesis (EMH). According to the EMH, stock markets are largely driven by new information rather than present and past prices; since new information is unpredictable, the stock market will follow a random walk. Despite these theories, Schumaker [2010] asserted that people keep trying to predict the stock market by using artificial intelligence, statistical estimates, and mathematical models. Mathematical approaches include Percolation Methods, Log-Periodic Oscillations, and Wavelet Transforms to model future prices. Examples of artificial intelligence approaches that deal with optimization and machine learning are Genetic Algorithms, Support Vector Machines (SVM), and Neural Networks. Statistical approaches typically predict the future by using past stock market data. Recently, financial engineers have started to predict stock price movement patterns by using SNS data. SNS is a place where people's opinions and ideas flow freely and affect others' beliefs on certain things. Through word-of-mouth in SNS, people share product usage experiences, subjective feelings, and the commonly accompanying sentiment or mood with others. An increasing number of empirical analyses of sentiment and mood are based on textual collections of public user-generated data on the web. Opinion mining is a domain of data mining that extracts public opinions expressed in SNS. There have been many studies on opinion mining from Web sources such as product reviews, forum posts, and blogs. In relation to this literature, we try to understand the effect of firms' SNS exposures on stock prices in Korea. Similarly to Bollen et al. [2011], we empirically analyze the impact of SNS exposures on stock return rates. We use Social Metrics by Daum Soft, an SNS big data analysis company in Korea. Social Metrics provides trends and public opinions in Twitter and blogs by using natural language processing and analysis tools. It collects the sentences circulated on Twitter in real time, breaks these sentences down into word units, and then extracts keywords. In this study, we classify firms' exposures in SNS into two groups: positive and negative. To test the correlation and causation relationship between SNS exposures and stock price returns, we first collect 252 firms' stock prices and the KRX100 index in the Korea Stock Exchange (KRX) from May 25, 2012 to September 1, 2012. We also gather the public attitudes (positive, negative) about these firms from Social Metrics over the same period. We conduct regression analysis between stock prices and the number of SNS exposures. Having checked the correlation between the two variables, we perform a Granger causality test to see the direction of causation between them. The research result is that the number of total SNS exposures is positively related with stock market returns. The number of positive mentions also has a positive relationship with stock market returns. In contrast, the number of negative mentions has a negative relationship with stock market returns, but this relationship is not statistically significant. This means that the impact of positive mentions is statistically bigger than the impact of negative mentions.
We also investigate whether the impacts are moderated by industry type and firm size. We find that the impact of SNS exposures is bigger for IT firms than for non-IT firms, and bigger for small firms than for large firms. The results of the Granger causality test show that changes in stock price returns are caused by SNS exposures, while causation in the other direction is not significant. Therefore, the relationship between SNS exposures and stock prices has uni-directional causality. The more a firm is exposed in SNS, the more likely its stock price is to increase, while stock price changes may not cause more SNS mentions.
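
The causality check described above can be reproduced in outline with standard time-series tooling; the sketch below is a hedged illustration using statsmodels' `grangercausalitytests` on synthetic stand-in series (the variable names, lag order, and data are assumptions, not the paper's).

```python
# Minimal Granger-causality sketch on synthetic daily series.
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
n = 70  # roughly one quarter of trading days, echoing the study period
sns_mentions = rng.poisson(lam=50, size=n).astype(float)   # hypothetical SNS exposure counts
returns = 0.002 * (sns_mentions - sns_mentions.mean()) + rng.normal(0, 1, n)  # toy returns

data = pd.DataFrame({"return": returns, "mentions": sns_mentions})

# statsmodels convention: test whether the 2nd column Granger-causes the 1st.
# Direction reported as significant in the paper: mentions -> returns.
grangercausalitytests(data[["return", "mentions"]], maxlag=3)

# Reverse direction, reported as not significant: returns -> mentions.
grangercausalitytests(data[["mentions", "return"]], maxlag=3)
```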

Ensemble of Nested Dichotomies for Activity Recognition Using Accelerometer Data on Smartphone (Ensemble of Nested Dichotomies 기법을 이용한 스마트폰 가속도 센서 데이터 기반의 동작 인지)

  • Ha, Eu Tteum;Kim, Jeongmin;Ryu, Kwang Ryel
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.4
    • /
    • pp.123-132
    • /
    • 2013
  • As smartphones are equipped with various sensors such as the accelerometer, GPS, gravity sensor, gyros, ambient light sensor, proximity sensor, and so on, there have been many research works on making use of these sensors to create valuable applications. Human activity recognition is one such application, motivated by various welfare applications such as support for the elderly, measurement of calorie consumption, analysis of lifestyles, analysis of exercise patterns, and so on. One of the challenges faced when using smartphone sensors for activity recognition is that the number of sensors used should be minimized to save battery power. When the number of sensors used is restricted, it is difficult to realize a highly accurate activity recognizer, or classifier, because it is hard to distinguish between subtly different activities relying on only limited information. The difficulty gets especially severe when the number of different activity classes to be distinguished is very large. In this paper, we show that a fairly accurate classifier can be built that can distinguish ten different activities by using data from only a single sensor, i.e., the smartphone accelerometer. The approach that we take to dealing with this ten-class problem is to use the ensemble of nested dichotomies (END) method, which transforms a multi-class problem into multiple two-class problems. END builds a committee of binary classifiers in a nested fashion using a binary tree. At the root of the binary tree, the set of all the classes is split into two subsets of classes by using a binary classifier. At a child node of the tree, a subset of classes is again split into two smaller subsets by using another binary classifier. Continuing in this way, we obtain a binary tree where each leaf node contains a single class. This binary tree can be viewed as a nested dichotomy that can make multi-class predictions. Depending on how a set of classes is split into two subsets at each node, the final tree that we obtain can be different. Since there can be some classes that are correlated, a particular tree may perform better than the others. However, we can hardly identify the best tree without deep domain knowledge. The END method copes with this problem by building multiple dichotomy trees randomly during learning and then combining the predictions made by each tree during classification. The END method is generally known to perform well even when the base learner is unable to model complex decision boundaries. As the base classifier at each node of the dichotomy, we have used another ensemble classifier called the random forest. A random forest is built by repeatedly generating a decision tree, each time with a different random subset of features, using a bootstrap sample. By combining bagging with random feature subset selection, a random forest enjoys the advantage of having more diverse ensemble members than simple bagging. As an overall result, our ensemble of nested dichotomies can be seen as a committee of committees of decision trees that can deal with a multi-class problem with high accuracy. The ten classes of activities that we distinguish in this paper are 'Sitting', 'Standing', 'Walking', 'Running', 'Walking Uphill', 'Walking Downhill', 'Running Uphill', 'Running Downhill', 'Falling', and 'Hobbling'.
The features used for classifying these activities include not only the magnitude of the acceleration vector at each time point but also the maximum, the minimum, and the standard deviation of the vector magnitude within a time window covering the last 2 seconds, etc. For experiments comparing the performance of END with those of other methods, accelerometer data were collected every 0.1 seconds for 2 minutes for each activity from 5 volunteers. Among the 5,900 ($= 5 \times (60 \times 2 - 2)/0.1$) samples collected for each activity (the data for the first 2 seconds are discarded because they do not yet have time-window data), 4,700 were used for training and the rest for testing. Although 'Walking Uphill' is often confused with some other similar activities, END was found to classify all ten activities with a fairly high accuracy of 98.4%. In comparison, the accuracies achieved by a decision tree, a k-nearest neighbor classifier, and a one-versus-rest support vector machine were 97.6%, 96.5%, and 97.6%, respectively.
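
To make the nested-dichotomy idea concrete, the sketch below builds a single random nested dichotomy with random-forest base classifiers, in the spirit of the method described above; it is a simplified illustration (class splits, hyperparameters, and feature layout are assumptions), not the authors' implementation.

```python
# One random nested dichotomy with random-forest base classifiers (sketch).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

class NestedDichotomy:
    """Recursively split the label set in two; train a binary RF at each node."""

    def __init__(self, rng=None):
        self.rng = rng or np.random.default_rng()

    def fit(self, X, y):
        X, y = np.asarray(X), np.asarray(y)
        self.classes_ = np.unique(y)
        if len(self.classes_) == 1:
            self.leaf_ = self.classes_[0]
            return self
        self.leaf_ = None
        # Randomly split the current label set into two non-empty subsets.
        perm = self.rng.permutation(self.classes_)
        cut = self.rng.integers(1, len(perm))
        self.left_set_, self.right_set_ = set(perm[:cut]), set(perm[cut:])
        side = np.array([0 if label in self.left_set_ else 1 for label in y])
        self.clf_ = RandomForestClassifier(n_estimators=100).fit(X, side)
        self.left_ = NestedDichotomy(self.rng).fit(X[side == 0], y[side == 0])
        self.right_ = NestedDichotomy(self.rng).fit(X[side == 1], y[side == 1])
        return self

    def predict(self, X):
        X = np.asarray(X)
        if self.leaf_ is not None:
            return np.full(len(X), self.leaf_, dtype=object)
        side = self.clf_.predict(X)
        out = np.empty(len(X), dtype=object)
        if np.any(side == 0):
            out[side == 0] = self.left_.predict(X[side == 0])
        if np.any(side == 1):
            out[side == 1] = self.right_.predict(X[side == 1])
        return out

# An END would train several such trees with different random splits and
# combine their predictions (e.g., by voting or averaging class probabilities).
```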

A New Approach to Automatic Keyword Generation Using Inverse Vector Space Model (키워드 자동 생성에 대한 새로운 접근법: 역 벡터공간모델을 이용한 키워드 할당 방법)

  • Cho, Won-Chin;Rho, Sang-Kyu;Yun, Ji-Young Agnes;Park, Jin-Soo
    • Asia pacific journal of information systems
    • /
    • v.21 no.1
    • /
    • pp.103-122
    • /
    • 2011
  • Recently, numerous documents have been made available electronically. Internet search engines and digital libraries commonly return query results containing hundreds or even thousands of documents. In this situation, it is virtually impossible for users to examine complete documents to determine whether they might be useful to them. For this reason, some on-line documents are accompanied by a list of keywords specified by the authors in an effort to guide users by facilitating the filtering process. In this way, a set of keywords is often considered a condensed version of the whole document and therefore plays an important role in document retrieval, Web page retrieval, document clustering, summarization, text mining, and so on. Since many academic journals ask authors to provide a list of five or six keywords on the first page of an article, keywords are most familiar in the context of journal articles. However, many other types of documents do not yet benefit from the use of keywords, including Web pages, email messages, news reports, magazine articles, and business papers. Although the potential benefit is large, the implementation itself is the obstacle; manually assigning keywords to all documents is a daunting task, or even impractical, in that it is extremely tedious and time-consuming and requires a certain level of domain knowledge. Therefore, it is highly desirable to automate the keyword generation process. There are mainly two approaches to achieving this aim: the keyword assignment approach and the keyword extraction approach. Both approaches use machine learning methods and require, for training purposes, a set of documents with keywords already attached. In the former approach, there is a given vocabulary, and the aim is to match its terms to the texts. In other words, the keyword assignment approach seeks to select the words from a controlled vocabulary that best describe a document. Although this approach is domain dependent and is not easy to transfer and expand, it can generate implicit keywords that do not appear in a document. In the latter approach, on the other hand, the aim is to extract keywords with respect to their relevance in the text, without a prior vocabulary. In this approach, automatic keyword generation is treated as a classification task, and keywords are commonly extracted based on supervised learning techniques. Thus, keyword extraction algorithms classify candidate keywords in a document into positive or negative examples. Several systems, such as Extractor and Kea, were developed using the keyword extraction approach. The most indicative words in a document are selected as keywords for that document, and as a result, keyword extraction is limited to terms that appear in the document. Therefore, keyword extraction cannot generate implicit keywords that are not included in a document. According to the experimental results of Turney, about 64% to 90% of keywords assigned by authors can be found in the full text of an article. Conversely, this also means that 10% to 36% of the keywords assigned by authors do not appear in the article and cannot be generated through keyword extraction algorithms. Our preliminary experiment also shows that 37% of keywords assigned by authors are not included in the full text. This is the reason why we have decided to adopt the keyword assignment approach. In this paper, we propose a new approach for automatic keyword assignment, namely IVSM (Inverse Vector Space Model). The model is based on the vector space model,
which is a conventional information retrieval model that represents documents and queries by vectors in a multidimensional space. IVSM generates an appropriate keyword set for a specific document by measuring the distance between the document and the keyword sets. The keyword assignment process of IVSM is as follows: (1) calculating the vector length of each keyword set based on each keyword weight; (2) preprocessing and parsing a target document that does not have keywords; (3) calculating the vector length of the target document based on term frequency; (4) measuring the cosine similarity between each keyword set and the target document; and (5) generating keywords that have high similarity scores. Two keyword generation systems were implemented applying IVSM: an IVSM system for a Web-based community service and a stand-alone IVSM system. First, the IVSM system was implemented in a community service for sharing knowledge and opinions on current trends such as fashion, movies, social problems, and health information. The stand-alone IVSM system is dedicated to generating keywords for academic papers and has been tested on a number of academic papers, including those published by the Korean Association of Shipping and Logistics, the Korea Research Academy of Distribution Information, the Korea Logistics Society, the Korea Logistics Research Association, and the Korea Port Economic Association. We measured the performance of IVSM by the number of matches between the IVSM-generated keywords and the author-assigned keywords. According to our experiment, the precision of IVSM applied to the Web-based community service and to academic journals was 0.75 and 0.71, respectively. The performance of both systems is much better than that of baseline systems that generate keywords based on simple probability. Also, IVSM shows performance comparable to Extractor, a representative keyword extraction system developed by Turney. As the number of electronic documents increases, we expect that the IVSM proposed in this paper can be applied to many electronic documents in Web-based communities and digital libraries.
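
The five-step matching process can be illustrated with a small cosine-similarity sketch; the keyword sets, weights, and sample text below are illustrative assumptions, not the paper's data or its exact weighting scheme.

```python
# Toy IVSM-style matching: rank weighted keyword-set vectors against a document.
import math
import re
from collections import Counter

def term_vector(text):
    """Step (2)-(3) style preprocessing: lowercase, tokenize, term frequency."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(u, v):
    dot = sum(u[t] * v.get(t, 0.0) for t in u)
    norm_u = math.sqrt(sum(w * w for w in u.values()))
    norm_v = math.sqrt(sum(w * w for w in v.values()))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0

# Step (1): keyword sets with (toy) keyword weights.
keyword_sets = {
    "logistics": {"port": 2.0, "shipping": 1.5, "distribution": 1.0},
    "finance":   {"stock": 2.0, "return": 1.5, "volatility": 1.0},
}

# Steps (2)-(3): build the term-frequency vector of a target document.
target_text = "Port throughput and shipping routes drive regional distribution costs."
doc_vec = term_vector(target_text)

# Steps (4)-(5): rank keyword sets by cosine similarity; keep the top scorers.
scores = {name: cosine(vec, doc_vec) for name, vec in keyword_sets.items()}
for name, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score:.3f}")
```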

The Building Plan of Online ADR Model related to the International Commercial Transaction Dispute Resolution (국제상거래 분쟁해결을 위한 온라인 ADR 모델 구축방안)

  • Kim Sun-Kwang;Kim Jong-Rack;Hong Sung-Kyu
    • Journal of Arbitration Studies
    • /
    • v.15 no.2
    • /
    • pp.3-35
    • /
    • 2005
  • The meaning of Online ADR lies in the prompt and economical resolution of disputes by applying the information and communication element (the Internet) to existing ADR. However, if promptness and economic efficiency are overemphasized, the fairness and appropriateness of dispute resolution may be compromised, and consequently Online ADR will be belittled and criticized as second-class trials. In addition, as communication in Online ADR is mostly conducted through text, it is difficult to investigate cases and to create the atmosphere and induce the dynamic feelings that are possible in dispute resolution through face-to-face contact. Despite such difficulties, Online ADR is expanding its area not only online but also offline due to its advantages such as promptness, low expense, and improved resolution methods, and it is expected to develop rapidly as electronic government has decided to adopt it. Accordingly, the following points must be focused on for its continuous development. First, in the legal and institutional aspects of the development of Online ADR, it is necessary to establish a framework law on ADR. A framework law on ADR comprehending existing mediation and arbitration should be established, and it must include Online ADR, which utilizes electronic communication means. However, it is too early to establish a separate law for Online ADR, because Online ADR must develop based on the theoretical system of ADR. Second, although Online ADR is expanding rapidly, it may take time for it to become established as a tool of dispute resolution. As discussed earlier, if the amount of money in dispute is large or the dispute is complicated, Online ADR may additionally have a negative effect on the resolution of the dispute. Thus, it is necessary to apply Online ADR to minor or domestic cases in the early stage, accumulating experience and correcting errors. Moreover, in order to settle numerous disputes effectively, Online ADR cases should be analyzed systematically and classified by type so that similar disputes may be settled automatically, and these requirements should be reflected in the development of Online ADR systems. Third, the application of Online ADR is being expanded to consumer disputes, domain name disputes, commercial disputes, legal disputes, and so on; millions of cases are settled through Online ADR, and 115 Online ADR sites are in operation throughout the world. Thus Online ADR requires not temporary but continuous attention, and mediators and arbitrators participating in Online ADR should be more intensively educated on negotiation and information technologies. In particular, government-led research projects should be promoted to establish an Online ADR model, and these projects should be supported by comprehensive research on mediation, arbitration, and Online ADR. Fourth, what is most important in the continuous development and expansion of Online ADR is to secure confidence in Online ADR and to advertise it to users. For this, incentives and rewards should be given to specialists such as lawyers when they participate in Online ADR as mediators and arbitrators, in order to improve their expertise. Furthermore, from the early stage, the government and public institutions should take the initiative in promoting Online ADR so that parties involved in disputes recognize its substantial contribution to dispute resolution.
Lastly, dispute resolution through Online ADR is performed by organizations such as the Korea Institute for Electronic Commerce and the Korea Consumer Protection Board, and partially by the Korean Commercial Arbitration Board. Online ADR is expected to expand into offline commercial disputes in the future. In response, the Korean Commercial Arbitration Board, as an organization for commercial dispute resolution, needs to be restructured.

The Effect of Nuclear Overhauser Enhancement in Liver and Heart $^{31}P$ NMR Spectra Localized by 2D Chemical Shift Technique (이차원 화학변위 기법을 이용한 간 및 심장 $^{31}P$ 자기공명분광에서의 Nuclear Overhauser 효과에 대한 연구)

  • Ryeom Hun-Kyu;Lee Jongmin;Kim Yong-Sun;Lee Sang-Kwon;Suh Kyung-Jin;Bae Sung-Jin;Chang Yongmin
    • Investigative Magnetic Resonance Imaging
    • /
    • v.8 no.2
    • /
    • pp.94-99
    • /
    • 2004
  • Purpose : To investigate the signal enhancement produced by the NOE effect in in vivo $^{31}P$ MRS of human heart muscle and liver, and to evaluate the enhancement ratios of the phosphorus metabolites that are important in $^{31}P$ MRS of each organ. Materials and Methods : Ten normal subjects (M:F = 8:2, age range = 24-32 yrs) were included for in vivo $^{31}P$ MRS measurements on a 1.5 T whole-body MRI/MRS system using a $^1H$-$^{31}P$ dual-tuned surface coil. A two-dimensional chemical shift imaging (2D CSI) pulse sequence was employed in all $^{31}P$ MRS measurements. First, $^{31}P$ MRS was performed without the NOE effect, and then the same 2D CSI data acquisitions were repeated with the NOE effect. After postprocessing the raw MRS data in the time domain, the signal enhancements in percent were estimated for the major metabolites. Results : The calculated NOE enhancements for liver $^{31}P$ MRS were $\alpha$-ATP (7%), $\beta$-ATP (9%), $\gamma$-ATP (17%), Pi (1%), PDE (19%), and PME (31%). Because there is no creatine kinase activity in the liver, the PCr signal is absent. For cardiac $^{31}P$ MRS, the whole-body coil gave better scout images and thus better localization than the surface coil. In the $^{31}P$ cardiac multi-voxel spectra, the DPG signal increased from left to right according to the amount of blood included. The calculated enhancements for cardiac $^{31}P$ MRS were $\alpha$-ATP (12%), $\beta$-ATP (19%), $\gamma$-ATP (30%), PCr (34%), Pi (20%), PDE (51%), and DPG (72%). Conclusion : Our results revealed that the NOE effect was more pronounced in heart muscle than in liver, reflecting different coupling to the $^1H$ spin system and thus different heteronuclear cross-relaxation.
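
For orientation, percent NOE enhancement of this kind is conventionally computed from the metabolite signal acquired with and without the NOE preparation; the expression below is the standard definition, stated here as an assumption since the abstract does not spell out its exact formula.

```latex
% Percent NOE enhancement, where S_NOE and S_0 are the peak areas of a
% metabolite acquired with and without NOE, respectively (assumed convention).
\[
  \text{enhancement}\,(\%) \;=\; \frac{S_{\mathrm{NOE}} - S_{0}}{S_{0}} \times 100
\]
```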

Effect of Protein Kinase C Inhibitor (PKCI) on Radiation Sensitivity and c-fos Transcription Activity (Protein Kinase C Inhibitor (PKCI)에 의한 방사선 민감도 변화와 c-fos Proto-oncogene의 전사 조절)

  • Choi Eun Kyung;Chang Hyesook;Rhee Yun-Hee;Park Kun-Koo
    • Radiation Oncology Journal
    • /
    • v.17 no.4
    • /
    • pp.299-306
    • /
    • 1999
  • Purpose : The human genetic disorder ataxia-telangiectasia (AT) is a multisystem disease characterized by extreme radiosensitivity. The recent identification of the gene mutated in AT, ATM, and the demonstration that it encodes a domain homologous to phosphatidylinositol 3-kinase (PI3-K), the catalytic subunit of an enzyme involved in transmitting signals from the cell surface to the nucleus, provide support for a role of this gene in signal transduction. Although ionizing radiation is known to induce c-fos transcription, nothing is known about how the ATM- or PKCI-mediated signal transduction pathway modulates c-fos gene transcription and expression. Here we have studied the effect of PKCI on radiation sensitivity and c-fos transcription in normal and AT cells. Materials and Methods: Normal (LM217) and AT (AT5BIVA) cells were transfected with a PKCI expression plasmid, and the overexpression and integration of PKCI were evaluated by northern blotting and polymerase chain reaction, respectively. LM and AT cells transfected with the PKCI expression plasmid were exposed to 5 Gy of radiation, harvested 48 hours after irradiation, and examined for apoptosis with the TUNEL method. The c-fos transcription activity was studied by performing a CAT assay of the reporter gene after transfection of a c-fos CAT plasmid into AT and LM cells. Results: Our results demonstrate for the first time a role of PKCI in the radiation sensitivity and c-fos expression of LM and AT cells. PKCI increased radiation-induced apoptosis in LM cells but reduced apoptosis in AT cells. The basal c-fos transcription activity is 70 times lower in AT cells than in LM cells. The c-fos transcription activity was repressed by overexpression of PKCI in LM cells but not in AT cells. After induction of c-fos by Ras protein, overexpression of PKCI repressed c-fos transcription in LM cells but not in AT cells. Conclusion: Overexpression of PKCI increased radiation sensitivity and repressed c-fos transcription in LM cells but not in AT cells. These results may be a reason for the increased radiation sensitivity of AT cells. PKCI may be involved in an ionizing radiation-induced signal transduction pathway responsible for radiation sensitivity and c-fos transcription. The data also provide evidence for a novel transcriptional difference between LM and AT cells.

Balance of Power and the Relative Military Capacity - Empirical Analysis and Implication to North East Asia - (세력균형(power balance)에서의 군사력 수준과 동북아시아에 주는 함의)

  • Kim, Myung-soo
    • Strategy21
    • /
    • s.38
    • /
    • pp.112-162
    • /
    • 2015
  • This study set out to confirm or review the balance of power theory by applying scientific methods to empirical cases. Though there are several kinds of national power, this study treats military power as the crucial power when it comes to war and peace. The research examined balance and imbalance by comparing relative military power between nations or groups of nations. Relative military power is compared by statistically converting values into standard variables within the same domain and then calculating national power values that synthesize different empirical factors. The empirical cases are drawn mainly from European countries, the USA, and Japan prior to the First and Second World Wars, as well as the USA, the Soviet Union, and Northeast Asia during the Cold War era. In addition, the balance of power theory has been redefined in order to review how states act upon changes of power, as described in the theory. The redefined theory states that the relative level of military power between nations determines whether peace and a balance of power are maintained. If military power falls within the range required to keep power in equilibrium, peace and balance can be achieved; the opposite unbalances military power and causes conflict. As the relative military level between nations changes, nations seek to establish groups of nations via military cooperation such as alliances, which also shifts the relative military power between groups of nations. Thus, in order to achieve a balance of power, a nation seeks to strengthen its own military power (self-help) while pursuing military cooperation (or alliance), which in turn changes the relative military power between groups of nations. In other words, if there is a balance of power between nations, there is a balance of power between groups of nations as well. In this framework, WWI and WWII broke out due to the imbalance of military force between nations and groups of nations, and peace was maintained during the Cold War due to the balance of military force. WWI resulted from an imbalance of military cooperation between two powerful groups of states, and WWII occurred because of the imbalance among the states; peace was maintained during the Cold War by cooperation of military power and balance among the states. Imbalance among continental states is more threatening than imbalance among maritime states, and a balance of power built from both army and naval forces is also feasible. For the two variables, the ratio of military power required for balance is found to be 67% when the variable ratio of balance is set at 100%, and the standard value for balance is 0.86. Military power exists as a range, and this range is what destabilizes the international system, causing nations to supplement their military power. These results make it possible to calculate and compare states' military power, and how the balance of power leads to war or peace has thus been studied through scientific review. Military conflict is highly possible given the already unbalanced military power of Northeast Asian countries if the US draws its power back to America. China and Japan are constantly building up their military forces. Korea's military force, on the other hand, is inferior, so its survival could be threatened as the international situation changes, and it is difficult to achieve a drastic increase in military force as Germany did.
In particular, constructing a naval force demands a great deal of time, but it has the benefit that naval power can overcome the imbalance between continental and maritime states.

The Application of Dynamic Acquisition with Motion Correction for Static Image (동적 영상 획득 방식을 이용한 정적 영상의 움직임 보정)

  • Yoon, Seok-Hwan;Seung, Jong-Min;Kim, Kye-Hwan;Kim, Jae-Il;Lee, Hyung-Jin;Kim, Jin-Eui;Kim, Hyun-Joo
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.14 no.1
    • /
    • pp.46-53
    • /
    • 2010
  • Purpose: A static image in a nuclear medicine study should be acquired without motion; however, it is difficult to acquire a motion-free static image from seriously ill or elderly patients. Such movements decrease the reliability of quantitative and qualitative analysis, so re-examination was inevitable in some cases. Consequently, in order to mitigate motion artifacts, the authors substituted a dynamic acquisition technique with motion correction for the static acquisition. Materials and Methods: A capillary tube and an IEC body phantom were used. First, the static image was acquired for 60 seconds, while the dynamic images were acquired with a protocol of 2 sec/frame × 30 frames under the same parameters, and the frames were summed into one image afterwards. Also, minimal motion and excessive motion were applied during additional dynamic acquisitions, and coordinate correction along the X and Y axes was applied to the frames in which motion artifacts occurred, while severely blurred frames were deleted. Finally, resolution and counts were compared between the static image and the summed dynamic images before and after motion correction, and the frequency content was analysed after transforming the spatial domain into the frequency domain with a 2D FFT. As a supplementary examination, a blind test was performed by the nuclear medicine department staff. Results: First, the resolution of the static image and of the summed dynamic image without motion was 8.32 mm and 8.37 mm on the X-axis and 8.30 mm and 8.42 mm on the Y-axis, respectively. The counts were 484 kcounts and 485 kcounts, so there was nearly no difference. Second, the resolution of the image with minimal motion after motion correction was 8.66 mm on the X-axis and 8.85 mm on the Y-axis with 469 kcounts, while without motion correction it was 21.81 mm, 24.02 mm, and 469 kcounts, respectively. This shows that the image with minimal motion and motion correction has a resolution similar to that of the static image. Lastly, the resolution of the images with excessive motion after motion correction was 9.09 mm on the X-axis and 8.83 mm on the Y-axis with 469 kcounts, while without motion correction it was 47.35 mm, 40.46 mm, and 255 kcounts, respectively. Although there was a difference in counts because of the deletion of blurred frames, similar resolution was obtained. When the image was transformed into the frequency domain, the high-frequency content was decreased by the movement but was restored after motion correction. In the blind test, there was no difference between the image with motion correction and the static image without motion. Conclusion: There was no significant difference between the static image and the summed dynamic image. This technique can be applied to patients who have difficulty remaining still during imaging, so that image quality and the reliability of quantitative analysis can be improved, and the re-examination rate should decrease considerably. However, motion correction has limits: more time is required to image patients when motion correction is applied. Also, the decrease in total counts due to the deletion of severely blurred frames should be taken into account, and an appropriate number of frames should be acquired.
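
A minimal sketch (illustrative, not the clinical software) of the frame-based correction described above: each dynamic frame is aligned to the first frame by an X/Y shift, frames that moved too far are dropped as "severely blurred", and the remainder are summed. The centre-of-mass registration and the threshold are assumptions for illustration.

```python
# Frame-based motion correction sketch: shift, reject, then sum dynamic frames.
import numpy as np
from scipy import ndimage

def motion_corrected_sum(frames, max_shift_px=10.0):
    """frames: array of shape (n_frames, ny, nx); returns a summed static-like image."""
    reference = frames[0].astype(float)
    ref_com = np.array(ndimage.center_of_mass(reference))
    corrected = []
    for frame in frames:
        com = np.array(ndimage.center_of_mass(frame.astype(float)))
        dy, dx = ref_com - com               # X/Y coordinate correction
        if np.hypot(dy, dx) > max_shift_px:  # treat as severely blurred: delete
            continue
        corrected.append(ndimage.shift(frame.astype(float), shift=(dy, dx), order=1))
    return np.sum(corrected, axis=0)

# Example with synthetic data standing in for the 2 sec/frame x 30 frame protocol.
rng = np.random.default_rng(1)
frames = rng.poisson(lam=5.0, size=(30, 64, 64)).astype(float)
static_like = motion_corrected_sum(frames)

# A frequency-domain check of the kind described in the abstract could compare
# high-frequency content via np.fft.fftshift(np.fft.fft2(static_like)).
```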

The Prediction of DEA based Efficiency Rating for Venture Business Using Multi-class SVM (다분류 SVM을 이용한 DEA기반 벤처기업 효율성등급 예측모형)

  • Park, Ji-Young;Hong, Tae-Ho
    • Asia pacific journal of information systems
    • /
    • v.19 no.2
    • /
    • pp.139-155
    • /
    • 2009
  • For the last few decades, many studies have tried to explore and unveil venture companies' success factors and unique features in order to identify the sources of such companies' competitive advantages over their rivals. Venture companies have shown a tendency to give high returns to investors, generally by making the best use of information technology, and for this reason many venture companies are keen on attracting investors' attention. Investors generally make their investment decisions by carefully examining the evaluation criteria of the alternatives. To them, credit rating information provided by international rating agencies such as Standard and Poor's, Moody's, and Fitch is a crucial source regarding such pivotal concerns as a company's stability, growth, and risk status. But this type of information is generated only for companies issuing corporate bonds, not for venture companies. Therefore, this study proposes a method for evaluating venture businesses, presenting recent empirical results using financial data of Korean venture companies listed on KOSDAQ in the Korea Exchange. In addition, this paper uses a multi-class SVM for the prediction of the DEA-based efficiency rating of venture businesses derived from our proposed method. Our approach sheds light on ways to locate efficient companies generating a high level of profits. Above all, in determining effective ways to evaluate a venture firm's efficiency, it is important to understand the major contributing factors of such efficiency. Therefore, this paper is constructed on the basis of the following two ideas for classifying which companies are the more efficient venture companies: i) making DEA-based multi-class ratings for sample companies and ii) developing a multi-class SVM-based efficiency prediction model for classifying all companies. First, Data Envelopment Analysis (DEA) is a non-parametric multiple input-output efficiency technique that measures the relative efficiency of decision-making units (DMUs) using a linear programming based model. It is non-parametric because it requires no assumption about the shape or parameters of the underlying production function. DEA has already been widely applied for evaluating the relative efficiency of DMUs. Recently, a number of DEA-based studies have evaluated the efficiency of various types of companies, such as internet companies and venture companies, and DEA has also been applied to corporate credit ratings. In this study we utilized DEA for sorting venture companies by efficiency-based ratings. The Support Vector Machine (SVM), on the other hand, is a popular technique for solving data classification problems. In this paper, we employed SVM to classify the efficiency ratings of IT venture companies according to the results of DEA. The SVM method was first developed by Vapnik (1995). As one of many machine learning techniques, SVM is grounded in statistical learning theory. Thus far, the method has shown good performance, especially in generalization capacity for classification tasks, resulting in numerous applications in many areas of business. SVM is basically an algorithm that finds the maximum-margin hyperplane, i.e., the hyperplane giving the maximum separation between classes. The support vectors are the points closest to the maximum-margin hyperplane. If the classes cannot be separated directly, a kernel function can be used.
In the case of nonlinear class boundaries, the inputs can be transformed into a high-dimensional feature space: the original input space is mapped into a high-dimensional dot-product space. Many studies have applied SVM to bankruptcy prediction, financial time-series forecasting, and credit rating estimation. In this study we employed SVM to develop a data mining-based efficiency prediction model, using the Gaussian radial basis function as the kernel. For multi-class SVM, we adopted the one-against-one binary classification approach and two all-together methods, proposed by Weston and Watkins (1999) and Crammer and Singer (2000), respectively. In this research, we used corporate information on 154 companies listed on the KOSDAQ market of the Korea Exchange, obtaining the companies' financial information for 2005 from KIS (Korea Information Service, Inc.). Using these data, we made multi-class ratings with DEA efficiency scores and built a data mining-based multi-class prediction model. Among the three multi-classification approaches, the hit ratio of the Weston and Watkins method was the best on the test data set. In multi-classification problems such as efficiency ratings of venture businesses, it is very useful for investors to know the class within a one-class error when it is difficult to identify the exact class in the actual market. So we also present accuracy within one-class errors, and the Weston and Watkins method showed 85.7% accuracy on our test samples. We conclude that the DEA-based multi-class approach for venture businesses generates more information than the binary classification problem, regardless of efficiency level. We believe this model can help investors in decision making, as it provides a reliable tool for evaluating venture companies in the financial domain. For future research, we perceive the need to enhance such areas as the variable selection process, kernel parameter selection, generalization, and the sample size for multi-class classification.
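
As a rough illustration of the classification step (not the paper's pipeline, and without the Weston-Watkins or Crammer-Singer all-together formulations, which need a dedicated solver), the sketch below trains an RBF-kernel SVM with scikit-learn's SVC, which handles multi-class problems with the one-against-one scheme, on synthetic stand-ins for financial features and DEA-based classes.

```python
# Multi-class RBF-SVM sketch for DEA-style efficiency ratings (synthetic data).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_firms, n_features, n_classes = 154, 6, 4    # sizes loosely echoing the study
X = rng.normal(size=(n_firms, n_features))    # hypothetical financial ratios
y = rng.integers(0, n_classes, size=n_firms)  # hypothetical DEA efficiency classes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
model.fit(X_tr, y_tr)

pred = model.predict(X_te)
exact_hit = np.mean(pred == y_te)
within_one = np.mean(np.abs(pred - y_te) <= 1)  # "within one-class error" style metric
print(f"exact hit ratio: {exact_hit:.3f}, within-one-class: {within_one:.3f}")
```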