
Preliminary Report of the 1998-1999 Patterns of Care Study of Radiation Therapy for Esophageal Cancer in Korea (식도암 방사선 치료에 대한 Patterns of Care Study (1998-1999)의 예비적 결과 분석)

  • Hur, Won-Joo;Choi, Young-Min;Lee, Hyung-Sik;Kim, Jeung-Kee;Kim, Il-Han;Lee, Ho-Jun;Lee, Kyu-Chan;Kim, Jung-Soo;Chun, Mi-Son;Kim, Jin-Hee;Ahn, Yong-Chan;Kim, Sang-Gi;Kim, Bo-Kyung
    • Radiation Oncology Journal / v.25 no.2 / pp.79-92 / 2007
  • Purpose: For the first time, a nationwide survey was conducted in the Republic of Korea to determine the basic parameters of radiation therapy for esophageal cancer and to build a solid cooperative system for the Korean Patterns of Care Study (PCS) database. Materials and Methods: During 1998-1999, 246 biopsy-confirmed esophageal cancer patients who received radiotherapy were enrolled from 23 institutions in South Korea. Random sampling was based on the power allocation method. Patient parameters and specific information on tumor characteristics and treatment methods were collected and registered through the web-based PCS system. The data were analyzed with the chi-squared test. Results: The median age of the patients was 62 years. The male to female ratio was about 91 to 9, an absolute male predominance. Performance status was ECOG 0 or 1 in 82.5% of the patients. Diagnostic procedures included an esophagogram (228 patients, 92.7%), endoscopy (226 patients, 91.9%), and a chest CT scan (238 patients, 96.7%). Squamous cell carcinoma was diagnosed in 96.3% of the patients; mid-thoracic esophageal cancer was most prevalent (110 patients, 44.7%), and 135 patients presented with clinical stage III disease. Fifty-seven patients received radiotherapy alone, and 37 patients received surgery with adjuvant postoperative radiotherapy. Half of the patients (123) received chemotherapy together with RT, and 70 of them (56.9%) received it as concurrent chemoradiotherapy. The most frequently used chemotherapy was a combination of cisplatin and 5-FU. Most patients received radiotherapy with either 6 MV (116 patients, 47.2%) or 10 MV photons (87 patients, 35.4%). Radiotherapy was delivered through a conventional AP-PA field for 206 patients (83.7%) without a CT plan, with a median delivered dose of 3,600 cGy. The median total dose of postoperative radiotherapy was 5,040 cGy, while for non-operative patients the median total dose was 5,970 cGy. Thirty-four patients received intraluminal brachytherapy with high-dose-rate Iridium-192, delivered at a median dose of 300 cGy per fraction, typically in 3-4 fractions. The most frequent complication during radiotherapy was esophagitis (155 patients, 63.0%). Conclusion: This study will provide guidelines and benchmark data for the evaluation and treatment of esophageal cancer patients at radiation facilities in Korea and for the cooperative system of the Korean PCS. Although some differences were noted between institutions, there was no major difference in treatment modalities or RT techniques.
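A chi-squared test of independence on a contingency table, as named above, is the standard way to compare practice patterns between institutions. A minimal sketch in Python; the counts below are invented for illustration and are not the study's data:

```python
# Hypothetical sketch of a chi-squared test of independence, as used to
# compare treatment patterns across institutions. Counts are invented.
from scipy.stats import chi2_contingency

# Rows: two institution groups; columns: RT alone vs. chemoradiotherapy.
table = [[30, 45],
         [25, 60]]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, p={p:.3f}, dof={dof}")
```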

Machine learning-based corporate default risk prediction model verification and policy recommendation: Focusing on improvement through stacking ensemble model (머신러닝 기반 기업부도위험 예측모델 검증 및 정책적 제언: 스태킹 앙상블 모델을 통한 개선을 중심으로)

  • Eom, Haneul;Kim, Jaeseong;Choi, Sangok
    • Journal of Intelligence and Information Systems / v.26 no.2 / pp.105-129 / 2020
  • This study uses corporate data from 2012 to 2018, when K-IFRS was applied in earnest, to predict default risk. The dataset totaled 10,545 rows and 160 columns, including 38 from the statement of financial position, 26 from the statement of comprehensive income, 11 from the statement of cash flows, and 76 financial ratio indices. Unlike most prior studies, which used the default event itself as the learning target, this study calculated default risk from each company's market capitalization and stock price volatility based on the Merton model. This solves the data imbalance problem caused by the scarcity of default events, which has been pointed out as a limitation of the existing methodology, and also reflects the differences in default risk that exist among ordinary (non-defaulted) companies. Because the model was trained only on corporate information that is also available for unlisted companies, default risk can be properly derived for unlisted companies without stock price information. This makes it possible to provide stable default risk assessment services to companies, such as small and medium-sized enterprises and startups, whose default risk is difficult to determine with traditional credit rating models. Although predicting corporate default risk with machine learning has recently been studied actively, model bias issues remain because most studies make predictions with a single model. A stable and reliable valuation methodology is required for calculating default risk, given that a company's default risk information is very widely used in the market and sensitivity to differences in default risk is high; strict standards are also required for the calculation method. The credit rating method stipulated by the Financial Services Commission in the Financial Investment Regulations calls for the preparation of evaluation methods, including verification of their adequacy, in consideration of past statistical data and experience with credit ratings and changes in future market conditions. This study reduced the bias of individual models by using stacking ensemble techniques that combine various machine learning models, making it possible to capture complex nonlinear relationships between default risk and corporate information while keeping the short computation time that is an advantage of machine learning-based default risk prediction. To produce the sub-model forecasts used as input to the stacking ensemble model, the training data were divided into seven folds, and the sub-models were trained on the divided sets to produce forecasts. To compare predictive power, Random Forest, MLP, and CNN models were trained on the full training data, and the predictive power of each model was verified on the test set. The analysis showed that the stacking ensemble model exceeded the predictive power of the Random Forest model, which performed best among the single models. Next, to check for statistically significant differences, pairs of forecasts were constructed between the stacking ensemble model and each individual model. Because the Shapiro-Wilk test showed that none of the pairs followed a normal distribution, the nonparametric Wilcoxon rank-sum test was used to check whether the two sets of forecasts in each pair differed significantly. The analysis showed that the stacking ensemble model's forecasts differed significantly from those of the MLP and CNN models. In addition, this study provides a methodology by which existing credit rating agencies can apply machine learning-based default risk prediction, given that traditional credit rating models can also be incorporated as sub-models when calculating the final default probability. The stacking ensemble technique proposed in this study can also help satisfy the requirements of the Financial Investment Business Regulations through the combination of various sub-models. We hope that this research will be used as a resource to increase practical adoption by overcoming and improving the limitations of existing machine learning-based models.
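As a rough illustration of the stacking idea described above, out-of-fold forecasts from several sub-models become the input features of a second-stage model. This is only a sketch: the base learners, the ridge meta-learner, and the synthetic data are assumptions, not the paper's exact configuration (the seven-fold split mirrors the paper's division of the training data):

```python
# Hedged sketch of a stacking ensemble for a continuous default-risk target.
# Base learners, folds, and the ridge meta-learner are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold, cross_val_predict
from sklearn.neural_network import MLPRegressor

X, y = make_regression(n_samples=500, n_features=20, noise=0.1, random_state=0)

base_models = [
    RandomForestRegressor(n_estimators=100, random_state=0),
    MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=1000, random_state=0),
]

# Out-of-fold forecasts from each sub-model become meta-features.
cv = KFold(n_splits=7, shuffle=True, random_state=0)
meta_features = np.column_stack(
    [cross_val_predict(m, X, y, cv=cv) for m in base_models]
)

# The meta-learner combines the sub-model forecasts.
meta_model = Ridge().fit(meta_features, y)

# At prediction time, refit the base models on all data and stack their outputs.
preds = np.column_stack([m.fit(X, y).predict(X) for m in base_models])
print(meta_model.predict(preds)[:5])
```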

Korean Sentence Generation Using Phoneme-Level LSTM Language Model (한국어 음소 단위 LSTM 언어모델을 이용한 문장 생성)

  • Ahn, SungMahn;Chung, Yeojin;Lee, Jaejoon;Yang, Jiheon
    • Journal of Intelligence and Information Systems / v.23 no.2 / pp.71-88 / 2017
  • Language models were originally developed for speech recognition and language processing. Given a set of example sentences, a language model predicts the next word or character from sequential input data. N-gram models have been widely used, but they cannot model correlations among input units efficiently, since they are probabilistic models based on the frequency of each unit in the training set. Recently, as deep learning algorithms have developed, recurrent neural network (RNN) and long short-term memory (LSTM) models have been widely used as neural language models (Ahn, 2016; Kim et al., 2016; Lee et al., 2016). These models can reflect dependencies between objects entered sequentially into the model (Gers and Schmidhuber, 2001; Mikolov et al., 2010; Sundermeyer et al., 2012). To train a neural language model, texts need to be decomposed into words or morphemes. However, since a training set of sentences generally includes a huge number of words or morphemes, the dictionary becomes very large, which increases model complexity. In addition, word-level or morpheme-level models can generate only the vocabulary contained in the training set. Furthermore, for highly morphological languages such as Turkish, Hungarian, Russian, Finnish, or Korean, morpheme analyzers are more likely to introduce errors in the decomposition process (Lankinen et al., 2016). Therefore, this paper proposes a phoneme-level language model for Korean based on LSTMs. A phoneme, such as a vowel or a consonant, is the smallest unit that composes Korean text. We constructed language models using three or four LSTM layers. Each model was trained using stochastic gradient descent as well as more advanced optimization algorithms: Adagrad, RMSprop, Adadelta, Adam, Adamax, and Nadam. A simulation study was done with Old Testament texts using the deep learning package Keras with a Theano backend. After pre-processing, the dataset contained 74 unique characters, including vowels, consonants, and punctuation marks. We then constructed input vectors of 20 consecutive characters with the following (21st) character as the output. In total, 1,023,411 input-output pairs were included in the dataset, divided into training, validation, and test sets in a 70:15:15 ratio. All simulations were conducted on a system equipped with an Intel Xeon CPU (16 cores) and an NVIDIA GeForce GTX 1080 GPU. We compared the loss on the validation set, the perplexity on the test set, and the training time of each model. All optimization algorithms except stochastic gradient descent showed similar validation loss and perplexity, clearly superior to those of stochastic gradient descent, which also took the longest to train for both the 3- and 4-layer LSTM models. On average, the 4-LSTM-layer model took 69% longer to train than the 3-LSTM-layer model, yet its validation loss and perplexity were not significantly better and even became worse under specific conditions. On the other hand, when the automatically generated sentences were compared, the 4-LSTM-layer model tended to generate sentences closer to natural language than the 3-LSTM-layer model. Although the completeness of the generated sentences differed slightly between the models, sentence generation performance was quite satisfactory under all simulation conditions: the models generated only legitimate Korean letters, and postposition use and verb conjugation were almost grammatically perfect. The results of this study are expected to be widely used in Korean language processing and speech recognition, which are the basis of artificial intelligence systems.
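A minimal sketch of the kind of phoneme(character)-level LSTM language model described above, in Keras: one-hot windows of 20 characters predict the 21st character over a 74-symbol vocabulary. The layer width and other hyperparameters are assumptions, not the paper's reported settings:

```python
# Hedged sketch of a character-level LSTM language model (3-LSTM-layer variant).
# Layer sizes are assumptions; the dummy data stand in for the corpus.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

vocab_size = 74   # unique phonemes/characters after pre-processing
seq_len = 20      # input window length

model = keras.Sequential([
    keras.Input(shape=(seq_len, vocab_size)),   # one-hot encoded window
    layers.LSTM(256, return_sequences=True),
    layers.LSTM(256, return_sequences=True),
    layers.LSTM(256),
    layers.Dense(vocab_size, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy")

# Dummy one-hot input-output pairs (window -> next character).
X = np.eye(vocab_size)[np.random.randint(0, vocab_size, (32, seq_len))]
y = np.eye(vocab_size)[np.random.randint(0, vocab_size, 32)]
model.fit(X, y, epochs=1, verbose=0)

# Perplexity is exp(mean cross-entropy loss) on held-out data.
print("perplexity ~", np.exp(model.evaluate(X, y, verbose=0)))
```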

A Study on Improvement on National Legislation for Sustainable Progress of Space Development Project (우주개발사업의 지속발전을 위한 국내입법의 개선방향에 관한 연구)

  • Lee, Kang-Bin
    • The Korean Journal of Air & Space Law and Policy / v.25 no.1 / pp.97-158 / 2010
  • The purpose of this paper is to examine the contents of national legislation relating to space development in Korea and to suggest improvements for the sustainable progress of Korea's space development projects. Korea launched its first satellite, KITSAT-1, in 1992. The National Space Committee established the Space Development Promotion Basic Plan in 2007, which called for the development of a total of 13 satellites by 2010, a space launch vehicle by 2020, and the launch of a moon exploration spacecraft by 2021. Korea built a space center at Oenarodo, Goheung, in June 2009; the first small launch vehicle, KSLV-1, was launched from the Naro Space Center in August 2009, and its second launch was made in June 2010. The United Nations has adopted five treaties relating to the development of outer space: the Outer Space Treaty of 1967, the Rescue and Return Agreement of 1968, the Liability Convention of 1972, the Registration Convention of 1974, and the Moon Treaty of 1979. All five treaties have come into force. Korea has ratified the Outer Space Treaty, the Rescue and Return Agreement, the Liability Convention, and the Registration Convention, but not the Moon Treaty. Most developed countries have enacted national legislation relating to the development of outer space, for example: the National Aeronautics and Space Act of 1958 and the Commercial Space Act of 1998 in the United States, the Outer Space Act of 1986 in the United Kingdom, the act of 1961 establishing the National Space Center (CNES) in France, the Canadian Space Agency Act of 1990 in Canada, the Space Basic Act of 2008 in Japan, and the Law on Space Activity of 1993 in Russia. There are currently three national acts relating to space development in Korea: the Aerospace Industry Development Promotion Act of 1987, the Outer Space Development Promotion Act of 2005, and the Outer Space Damage Compensation Act of 2008. The Ministry of Knowledge Economy announced a full amendment draft of the Aerospace Industry Development Promotion Act in December 2009, whose main contents are as follows: (1) changing the title of the act to the Aerospace Industry Promotion Act; (2) newly defining terms such as air flight test place; (3) establishing an aerospace industry basic plan and an aerospace industry committee; (4) projects for promoting the aerospace industry; (5) exploration development and international joint development; (6) cooperative research and development; (7) mutual benefit projects; (8) projects furthering the basis of the aerospace industry; (9) activating aerospace industry clusters; (10) designating air flight test places; (11) abolishing the designation and assistance of specific enterprises; and (12) abolishing the inspection of performance and quality. The Outer Space Development Promotion Act should be revised with regard to the following matters: (1) the overlap in the legal system between the Outer Space Development Promotion Act and the Aerospace Industry Development Promotion Act; (2) the distribution and adjustment of the national R&D budget for space development between the National Space Committee and the National Science Technology Committee; (3) the consideration and preservation of the environment in space development; and (4) taking legal action and maintaining the legal system for policy and regulation relating to space development. The Outer Space Damage Compensation Act should be revised with regard to the following matters: (1) the definitions of space damage and indirect damage; (2) the currency unit for the limit of compensation liability; (3) the joint liability and compensation claim rights of the launching person of a space object; and (4) the establishment of a Space Damage Compensation Council. In Korea, space tourism is expected to become possible in 2013, and a manned spaceship is planned to be introduced and operated in the same year. It is therefore necessary to develop policies to promote the commercial space transportation industry, and to properly maintain the current Aviation Law and space development-related laws and regulations for that purpose.

Stock Price Prediction by Utilizing Category Neutral Terms: Text Mining Approach (카테고리 중립 단어 활용을 통한 주가 예측 방안: 텍스트 마이닝 활용)

  • Lee, Minsik;Lee, Hong Joo
    • Journal of Intelligence and Information Systems / v.23 no.2 / pp.123-138 / 2017
  • Since the stock market is driven by traders' expectations, studies have attempted to predict stock price movements by analyzing various sources of text data. Research has been conducted not only on the relationship between text data and stock price fluctuations, but also on trading stocks based on news articles and social media responses. Studies that predict stock price movements have applied classification algorithms to a term-document matrix, as in other text mining approaches. Because documents contain many words, it is better to select the words that contribute most when building the term-document matrix: words with too little frequency or importance are removed, and words can also be selected by measuring how much each word contributes to correctly classifying a document. The conventional approach is to collect all documents to be analyzed and select the words that influence classification. In this study, we instead analyze the documents for each individual stock and select the words that are irrelevant to all categories as neutral words, then extract the words around each selected neutral word and use them to generate the term-document matrix. The neutral-word approach starts from the idea that stock movements are less related to the presence of the neutral words themselves, while the words surrounding a neutral word are more likely to affect stock price movements. The generated term-document matrix is then fed into an algorithm that classifies stock price fluctuations. In this study, we first removed stop words and selected neutral words for each stock, excluding from the selection any words that also appear in news articles about other stocks. Through an online news portal, we collected four months of news articles on the top 10 stocks by market capitalization. We used three months of news data as training data and applied the remaining month of articles to the model to predict the next day's stock price movements. We used SVM, boosting, and Random Forest models for prediction. The stock market was open for a total of 80 days during the four months (2016/02/01-2016/05/31); the first 60 days served as the training set and the remaining 20 days as the test set. The proposed word selection algorithm showed better classification performance than word selection based on sparsity. This study predicted stock price movements by collecting and analyzing news articles on the top 10 stocks by market capitalization, estimating fluctuations with a term-document-matrix-based classification model and comparing the existing sparsity-based word extraction method with the suggested method. The suggested method differs from conventional word extraction in that it uses not only the news articles about the corresponding stock but also news about other stocks to determine which words to extract: it removes not only the words that appear in both rises and falls but also the words that appear commonly in news about other stocks. When prediction accuracy was compared, the suggested method showed higher accuracy. A limitation of this study is that prediction was framed as classifying rises and falls, and the experiment covered only the top ten stocks, which do not represent the entire stock market. In addition, investment performance is difficult to demonstrate because the direction of a price movement and the rate of return may differ. Therefore, further research is needed using more stocks and predicting returns through trading simulations.
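A hedged sketch of the neutral-word idea as described: features are built from the words around assumed neutral terms rather than from the terms themselves, and a classifier is trained on the resulting term-document matrix. The neutral list, window size, toy documents, and SVM choice are illustrative assumptions, not the paper's configuration:

```python
# Hedged sketch: build features from the words AROUND assumed "category
# neutral" terms, then classify price direction. All inputs are invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import SVC

neutral_words = {"today", "announced"}   # hypothetical neutral terms
window = 2                               # words kept on each side

def context_features(doc: str) -> str:
    """Keep only the words within `window` positions of a neutral word."""
    tokens = doc.split()
    kept = []
    for i, tok in enumerate(tokens):
        if tok in neutral_words:
            kept.extend(tokens[max(0, i - window):i])
            kept.extend(tokens[i + 1:i + 1 + window])
    return " ".join(kept)

docs = [
    "shares rose sharply today after strong earnings",
    "the firm announced weak guidance and shares fell",
]
labels = [1, 0]  # 1 = price up, 0 = price down (hypothetical)

vec = CountVectorizer()
X = vec.fit_transform(context_features(d) for d in docs)
clf = SVC(kernel="linear").fit(X, labels)
print(clf.predict(X))
```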

A Study on Usefulness of Specific Agents with Liver Disease at MRI Imaging: Comparison with Ferucarbotran and Gd-EOB-DTPA Contrast Agents (간 병변 특이성 조영제 자기공명영상에 대한 연구: Ferucarbotran과 Gd-EOB-DTPA 조영제의 비교)

  • Lee, Jae-Seung;Goo, Eun-Hoe;Park, Cheol-Soo;Lee, Sun-Yeob;Choi, Yong-Seok
    • Progress in Medical Physics / v.20 no.4 / pp.235-243 / 2009
  • The purpose of this experiment was to compare the detection and characterization of liver diseases on MR imaging with Ferucarbotran (SPIO) and Gd-EOB-DTPA (Primovist) contrast agents in diffuse liver disease. A total of 50 patients (25 men and 25 women, mean age 50 years) with liver diseases were examined on a 3.0 T scanner (GE Excite HD, General Electric Medical Systems) with an 8-channel body coil, using the LAVA and MGRE pulse sequences, to compare diseases and contrast uptake. All images were acquired at the same locations before and after Ferucarbotran and Gd-EOB-DTPA administration. The contrast-to-noise ratios of Ferucarbotran and Gd-EOB-DTPA with the MGRE and LAVA pulse sequences were 3.08±0.12 and 7.00±0.27 in HCC, 3.62±0.13 and 2.60±0.23 in hyperplastic nodules, 1.70±0.09 and 2.60±0.23 in metastases, 2.12±0.28 and 5.86±0.28 in FNH, and 4.45±0.28 and 1.73±0.02 in abscesses; an ANOVA test was used to evaluate the diagnostic performance for each disease (p<0.05). In conclusion, both techniques demonstrated the detection and characterization of liver diseases well.
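The contrast-to-noise ratios reported above are typically computed from ROI statistics. A minimal sketch under the common convention CNR = (S_lesion - S_liver) / SD_noise; both this convention and the values below are assumptions for illustration, not the paper's measurements:

```python
# Hedged sketch: contrast-to-noise ratio (CNR) from ROI statistics.
# Signal and noise samples are invented for illustration.
import numpy as np

lesion_roi = np.array([312.0, 305.0, 298.0, 320.0])   # lesion signal samples
liver_roi = np.array([210.0, 205.0, 198.0, 215.0])    # adjacent liver signal
background_noise = np.array([4.8, 5.1, 5.3, 4.9])     # air/background ROI

cnr = (lesion_roi.mean() - liver_roi.mean()) / background_noise.std(ddof=1)
print(f"CNR = {cnr:.2f}")
```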

IMAGING SIMULATIONS FOR THE KOREAN VLBI NETWORK(KVN) (한국우주전파관측망(KVN)의 영상모의실험)

  • Jung, Tae-Hyun;Rhee, Myung-Hyun;Roh, Duk-Gyoo;Kim, Hyun-Goo;Sohn, Bong-Won
    • Journal of Astronomy and Space Sciences / v.22 no.1 / pp.1-12 / 2005
  • The Korean VLBI Network (KVN) will open new fields of research in astronomy, geodesy, and earth science using three new 21 m radio telescopes, expanding our ability to observe the Universe in the millimeter regime. The imaging capability of radio interferometry depends strongly on the antenna configuration, source size, declination, and the shape of the target. In this paper, imaging simulations are carried out with the KVN system configuration. Five test images were used: a point source, multiple point sources, uniform spheres of two different sizes relative to the synthesized beam of the KVN, and a Very Large Array (VLA) image of Cygnus A. The declination for the full-time simulation was set to +60 degrees, and the observation time range was -6 to +6 hours around transit. Simulations were done at 22 GHz, one of the KVN observing frequencies. All simulations and data reductions were run with the Astronomical Image Processing System (AIPS) software package. Since the KVN array has a resolution of about 6 mas (milliarcseconds) at 22 GHz, when the model source is approximately the beam size or smaller, the ratio of peak intensity to RMS is about 10,000:1 and 5,000:1; when the model source is larger than the beam size, this ratio falls to a very low range of about 115:1 and 34:1. This is due to the lack of short baselines and the small number of antennas. We compared the coordinates of the model images with those of the cleaned images; the result shows nearly perfect correspondence except in the case of the 12 mas uniform sphere. Therefore, the main astronomical targets for the KVN will be compact sources, for which the KVN will have excellent astrometric performance.
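The quoted ~6 mas resolution follows from the interferometric diffraction limit, roughly theta ~ lambda / B_max. A short sketch of the arithmetic; the ~500 km maximum baseline is an assumption for illustration, not a quoted KVN parameter:

```python
# Hedged sketch: diffraction-limited resolution of an interferometer,
# consistent with the ~6 mas figure quoted at 22 GHz. B_max is assumed.
import math

c = 299_792_458.0   # speed of light, m/s
freq = 22e9         # observing frequency, Hz
b_max = 500e3       # assumed maximum baseline, m

wavelength = c / freq
theta_rad = wavelength / b_max
theta_mas = math.degrees(theta_rad) * 3600e3   # radians -> milliarcseconds
print(f"resolution ~ {theta_mas:.1f} mas")
```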

Structural Relationships Among Factors to Adoption of Telehealth Service (원격의료서비스 수용요인의 구조적 관계 실증연구)

  • Kim, Sung-Soo;Ryu, See-Won
    • Asia pacific journal of information systems / v.21 no.3 / pp.71-96 / 2011
  • Within the traditional medical delivery system, patients residing in medically vulnerable areas, those with limited mobility, and nursing facility residents have had limited access to good healthcare services. However, information and communication technology (ICT) provides a convenient and useful means of overcoming distance and time constraints, and its integration with biomedical science and technology offers new, high-quality medical services. Rapid technological advancement is therefore expected to play a pivotal role in bringing about innovation across a wide range of medical service areas, such as medical management, testing, diagnosis, and treatment; in offering new and improved healthcare services; and in effecting dramatic changes in current medical services. The aging population and the rise of chronic diseases have increased medical expenses. In response to the growing demand for efficient healthcare services, telehealth services based on ICT are being emphasized globally. So far, telehealth services have been implemented mainly in pilot projects, system development, and technological research. With the service about to be implemented in earnest, it is necessary to study its overall acceptance by consumers, which is expected to contribute to the development and activation of a variety of services. This study therefore empirically examines the structural relationships among the acceptance factors for telehealth services based on the Technology Acceptance Model (TAM). Data were collected by showing audiovisual material on telehealth services to online panels and asking them to respond to a structured questionnaire, a procedure known as the information acceleration method. Among the 1,165 adult respondents, 608 valid samples were chosen; the rest were excluded because of incomplete answers or because they exceeded the allotted time. To test the reliability and validity of the scale items, we carried out reliability and factor analyses, and to explore the causal relationships among the latent variables, we conducted a structural equation modeling analysis using AMOS 7.0 and SPSS 17.0. The outcomes are as follows. First, service quality, innovativeness of medical technology, and social influence had statistically significant effects on the perceived ease of use and perceived usefulness of the telehealth service, and these two factors had a positive impact on willingness to accept the service. In addition, social influence had a direct, significant effect on intention to use, consistent with the TAM as applied in previous research on technology acceptance. This shows that the research model proposed in the study effectively explains the acceptance of the telehealth service. Second, the model revealed that information privacy concerns had an insignificant impact on the perceived ease of use of the telehealth service. This suggests that concerns over information protection and security have declined as information technology has matured, so that improvements in the quality of medical services kept privacy concerns from acting as an inhibiting factor in telehealth acceptance. Thus, when other factors have a strong impact on ease of use and usefulness, such concerns may become irrelevant in the early period of technology acceptance. However, as other studies have revealed, users' information privacy concerns are a major factor affecting technology acceptance, so caution must be exercised in interpreting this result, and further study is required. Numerous information technologies with outstanding performance and innovativeness nevertheless attract few consumers. A revised bill for those urgently in need of telehealth services is about to be approved in the National Assembly. As telemedicine is implemented between doctors and patients, a wide range of systems that improve the quality of healthcare services will be designed. In this sense, this study of consumer acceptance of telehealth services is meaningful and offers solid academic evidence; based on its implications, it can be expected to contribute to the activation of telehealth services. Further study is needed on acceptance factors such as motivation to remain healthy, healthcare involvement, health knowledge, and control of health-related behavior, in order to develop tailored services for customer segments based on health factors. Future work may also draw on cognitive behavior models other than the TAM, such as the health belief model.
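Among the reliability checks mentioned above, Cronbach's alpha is the usual scale-reliability statistic. A minimal sketch with invented 5-point responses; the formula is standard, but the data are not the study's:

```python
# Hedged sketch: Cronbach's alpha for a multi-item scale. Data are invented.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x scale-items matrix of ratings."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(0)
base = rng.integers(1, 6, size=(100, 1))                       # latent tendency
responses = np.clip(base + rng.integers(-1, 2, (100, 4)), 1, 5)  # 4 correlated items
print(f"alpha = {cronbach_alpha(responses):.2f}")
```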

Development of a Small Animal Positron Emission Tomography Using Dual-layer Phoswich Detector and Position Sensitive Photomultiplier Tube: Preliminary Results (두층 섬광결정과 위치민감형광전자증배관을 이용한 소동물 양전자방출단층촬영기 개발: 기초실험 결과)

  • Jeong, Myung-Hwan;Choi, Yong;Chung, Yong-Hyun;Song, Tae-Yong;Jung, Jin-Ho;Hong, Key-Jo;Min, Byung-Jun;Choe, Yearn-Seong;Lee, Kyung-Han;Kim, Byung-Tae
    • The Korean Journal of Nuclear Medicine / v.38 no.5 / pp.338-343 / 2004
  • Purpose: The purpose of this study was to develop a small animal PET using a dual-layer phoswich detector to minimize the parallax error that degrades spatial resolution at the outer part of the field-of-view (FOV). Materials and Methods: The simulation tool GATE (Geant4 Application for Tomographic Emission) was used to derive optimal parameters for a small PET scanner, and the scanner was developed using those parameters. Lutetium oxyorthosilicate (LSO) and lutetium-yttrium aluminate-perovskite (LuYAP) were used to construct the dual-layer phoswich crystal. 8×8 arrays of LSO and LuYAP pixels, 2 mm×2 mm×8 mm in size, were coupled to a 64-channel position-sensitive photomultiplier tube. The system consisted of 16 detector modules arranged in a ring configuration (ring inner diameter 10 cm, FOV 8 cm). The data from the phoswich detector modules were fed into an ADC board in the data acquisition and preprocessing PC via sockets, a decoder block, an FPGA board, and a bus board, which were linked to the master PC that stored the event data on hard disk. Results: In a preliminary test of the system, reconstructed images were obtained using a pair of detectors, and sensitivity and spatial resolution were measured. Spatial resolution was 2.3 mm FWHM and sensitivity was 10.9 cps/μCi at the center of the FOV. Conclusion: The radioactivity distribution patterns were accurately represented in the sinograms and images obtained by the PET with a pair of detectors. These preliminary results indicate that developing a high-performance small animal PET is promising.
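An FWHM figure like the 2.3 mm quoted above is conventionally obtained by fitting a Gaussian to a point-source profile and converting the fitted sigma. A sketch with synthetic profile data; the Gaussian-fit convention is an assumption about how the measurement was made here:

```python
# Hedged sketch: spatial resolution as FWHM from a point-source profile.
# Profile data are synthetic; FWHM = 2*sqrt(2*ln 2) * sigma for a Gaussian.
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, a, mu, sigma):
    return a * np.exp(-((x - mu) ** 2) / (2 * sigma ** 2))

x = np.linspace(-10, 10, 81)   # position across the source, mm
profile = gaussian(x, 1.0, 0.0, 1.0) \
    + np.random.default_rng(0).normal(0, 0.01, x.size)  # noisy measurement

(a, mu, sigma), _ = curve_fit(gaussian, x, profile, p0=(1.0, 0.0, 2.0))
fwhm = 2 * np.sqrt(2 * np.log(2)) * abs(sigma)           # ~2.355 * sigma
print(f"FWHM = {fwhm:.2f} mm")
```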

Norm-referenced criteria for strength of the elbow joint for Korean high school baseball players using isokinetic equipment (Focusing on Seoul and Gyeonggi-do) (등속성 장비를 이용하여 한국고교야구선수 주관절 근력 평가기준치 설정: (서울 및 경기도 중심으로))

  • Kim, Su-Hyun;Lee, Jin-Wook
    • Journal of the Korea Academia-Industrial cooperation Society / v.18 no.10 / pp.442-447 / 2017
  • The purpose of this study was to establish norm-referenced criteria for the isokinetic strength of the elbow joint in Korean high school baseball players. Two hundred and one high school baseball players participated, none of whom had any medical problem with their upper limbs. The elbow flexion/extension test was conducted four times at a speed of 60°/sec. The HUMAC NORM (CSMI, USA) system was used to obtain the peak torque and peak torque per body weight. The results were presented as norm-referenced criterion values using the 5-point Cajori scale, which consists of five stages (6.06%, 24.17%, 38.30%, 24.17%, and 6.06%). The peak torques of the elbow flexor and extensor at an angular velocity of 60°/sec were 37.88±8.14 Nm and 44.59±11.79 Nm, and the peak torques per body weight of the elbow flexor and extensor were 50.06±8.66 Nm and 58.28±12.84 Nm, respectively. The reference values of the peak torque and peak torque per body weight of the elbow flexor and extensor were set at an angular velocity of 60°/sec. On the basis of these results, the following conclusions were drawn. Proper studies on elbow joint strength are lacking, even though the elbow is the most common injury site in baseball players. Therefore, standard muscle strength values are needed to prevent elbow joint injuries and improve performance. The criteria for peak torque and peak torque per body weight established herein will provide useful information for high school baseball players, baseball coaches, athletic trainers, and sports injury rehabilitation specialists during injury recovery and return to rehabilitation, and they can be utilized as objective clinical assessment data.
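The five-stage scale above is norm-referenced: grade boundaries are derived from the sample mean and standard deviation. A hedged sketch assuming the common convention of cut points at mean ± 1.5 SD and ± 0.5 SD (which yields stage percentages close to, though not exactly, those quoted); the grade labels and example value are illustrative:

```python
# Hedged sketch: 5-grade norm-referenced cut scores from mean and SD.
# Cut points at +/-1.5 SD and +/-0.5 SD are an assumed convention; the
# mean and SD are the reported 60 deg/sec elbow flexor peak torques.
mean, sd = 37.88, 8.14   # Nm, elbow flexor peak torque at 60 deg/sec

cuts = [mean + z * sd for z in (-1.5, -0.5, 0.5, 1.5)]
grades = ["very low", "low", "average", "high", "very high"]

def grade(torque: float) -> str:
    """Classify a peak torque value against the derived cut scores."""
    for cut, g in zip(cuts, grades):
        if torque < cut:
            return g
    return grades[-1]

print([f"{c:.1f}" for c in cuts])   # grade boundaries in Nm
print(grade(45.0))                  # example classification -> "high"
```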