• Title/Summary/Keyword: detection measure

Search Results: 1,328

Simultaneous Detection of Seven Phosphoproteins in a Single Lysate Sample during Oocyte Maturation Process (난자성숙 과정의 단일 시료에서 일곱 가지 인산화 단백질의 동시 분석 방법)

  • Yoon, Se-Jin;Kim, Yun-Sun;Kim, Kyeoung-Hwa;Yoon, Tae-Ki;Lee, Woo-Sik;Lee, Kyung-Ah
    • Clinical and Experimental Reproductive Medicine, v.36 no.3, pp.187-197, 2009
  • Objective: Phosphorylation and dephosphorylation of proteins are important in regulating cellular signaling pathways. A bead-based multiplex phosphorylation assay was conducted to detect the phosphorylation of seven proteins at once, maximizing the information obtained from a single lysate of stage-specific mouse oocytes. Methods: Cumulus-oocyte complexes (COCs) were cultured for 2 h, 8 h, and 16 h, respectively, to address the phosphorylation status of seven target proteins during the oocyte maturation process. We analyzed changes in phosphorylation at the germinal vesicle (GV, 0 h), germinal vesicle breakdown (GVBD, 2 h), metaphase I (MI, 8 h), and metaphase II (MII, 16 h in vitro or in vivo) stages of mouse oocytes using the Bio-Plex phosphoprotein assay system. We chose seven target proteins: three mitogen-activated protein kinases (MAPKs), ERK1/2, JNK, and p38 MAPK, and four other well-known signaling molecules, Akt, GSK-3α/β, IκBα, and STAT3, and measured their phosphorylation status. Western blot analysis and kinase inhibitor treatment for ERK1/2, JNK, and Akt during in vitro maturation of oocytes were conducted for confirmation. Results: Phosphorylation of ERK1/2, JNK, p38 MAPK, and STAT3 increased more than 3-fold and up to 20-fold, while phosphorylation of the other three signaling molecules, Akt, GSK-3α/β, and IκBα, increased less than 3-fold. All of these results except for Akt were statistically significant (p<0.05). Conclusion: This is the first report of a new and valuable method for measuring many phosphoproteins simultaneously in a minute sample such as an oocyte lysate. All three MAPKs, ERK1/2, JNK, and p38 MAPK, are involved in the process of mouse oocyte maturation. In addition, STAT3 might be an important regulator of oocyte maturation, while Akt phosphorylation at serine 473 may not be involved in the regulation of oocyte maturation.
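A minimal sketch of the fold-change readout this abstract describes, assuming hypothetical replicate MFI values: each stage is compared with the GV (0 h) baseline and tested with a t-test, mirroring the reported >3-fold to 20-fold increases at p<0.05.

```python
# Hedged sketch: fold change of a phosphoprotein signal vs. the GV baseline.
# All values are hypothetical; the study used the Bio-Plex system readout.
import numpy as np
from scipy import stats

gv_baseline = np.array([120.0, 135.0, 128.0])    # GV (0 h) replicate MFIs (hypothetical)
mii_signal = np.array([1450.0, 1520.0, 1390.0])  # MII (16 h), e.g. phospho-ERK1/2

fold_change = mii_signal.mean() / gv_baseline.mean()
t_stat, p_value = stats.ttest_ind(mii_signal, gv_baseline)
print(f"fold change: {fold_change:.1f}x, p = {p_value:.4f}")
```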

Detection of Surface Changes by the 6th North Korea Nuclear Test Using High-resolution Satellite Imagery (고해상도 위성영상을 활용한 북한 6차 핵실험 이후 지표변화 관측)

  • Lee, Won-Jin;Sun, Jongsun;Jung, Hyung-Sup;Park, Sun-Cheon;Lee, Duk Kee;Oh, Kwan-Young
    • Korean Journal of Remote Sensing, v.34 no.6_4, pp.1479-1488, 2018
  • On September 3rd, 2017, strong artificial seismic signals from North Korea were detected by the KMA (Korea Meteorological Administration) seismic network. The epicenter was estimated to be the Punggye-ri nuclear test site, and the event was the most powerful to date. It had not been studied well owing to limited accessibility and the lack of geodetic measurements. Therefore, we used remote sensing data to analyze surface changes around the Mt. Mantap area. First, we tried to detect surface deformation using the InSAR method with Advanced Land Observation Satellite-2 (ALOS-2) data. Even though ALOS-2 uses the long-wavelength L-band, InSAR did not work well in this particular case because of decorrelation in the interferogram, most likely due to large deformation near the Mt. Mantap area. To overcome this limitation, we applied the offset tracking method to measure deformation. However, this method is affected by the window kernel size, so we applied various window sizes from 32 to 224 in steps of 16. We retrieved a 2D surface deformation of up to about 3 m on the west side of Mt. Mantap. Second, we used Pléiades-A/B high-resolution optical satellite images acquired before and after the 6th nuclear test. We detected widespread surface damage around the top of Mt. Mantap, such as landslides and a suspected collapse area. This phenomenon may have been caused by a very strong underground nuclear explosion test. High-resolution satellite images can thus be used to analyze non-accessible areas.
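The offset-tracking step can be illustrated with a minimal sketch: for a given window (kernel) size, the pixel offset between co-registered pre- and post-event patches is estimated from the peak of their zero-mean cross-correlation. This is an illustrative reading only; the actual ALOS-2 processing chain (oversampling, geocoding, quality masking) is not reproduced, and the kernel sweep below assumes "32 to 224" means steps of 16 pixels.

```python
import numpy as np
from scipy.signal import correlate2d

def estimate_offset(pre_patch: np.ndarray, post_patch: np.ndarray):
    """Estimate the (dy, dx) shift of post_patch relative to pre_patch."""
    pre = pre_patch - pre_patch.mean()
    post = post_patch - post_patch.mean()
    corr = correlate2d(post, pre, mode="same")        # zero-mean cross-correlation
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    center = (corr.shape[0] // 2, corr.shape[1] // 2)
    return peak[0] - center[0], peak[1] - center[1]

# Hypothetical kernel sweep, one reading of "from 32 to 224 in 16 steps":
window_sizes = range(32, 225, 16)
```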

The Present Status and a Proposal of the Prospective Measures for Parasitic Diseases Control in Korea (우리나라 기생충병관리의 현황(現況)과 효율적방안에 관(關)한 연구(硏究))

  • Loh, In-Kyu
    • Journal of Preventive Medicine and Public Health, v.3 no.1, pp.1-16, 1970
  • The present status of control measures for helminthic infections of public health importance in Korea was surveyed in 1969, and the following results were obtained. The parasitic examination and Ascaris treatment activities for positives carried out during 1966 to 1969 produced poor results and could not decrease the infection rate; these activities need to be improved or strengthened. The mass treatment activities for paragonimiasis and clonorchiasis in the areas designated by the Ministry of Health were carried out during 1965 to 1968 with no good results in decreasing the estimated number of patients. There were too many pharmaceutical companies producing many kinds of anthelmintics; it may be better to reduce the number of anthelmintics produced and control their quality. Human feces, the most important source of helminthic infections, were generally not treated in sanitary ways because of the poor sewerage system and the lack of sewage treatment plants in urban areas, and insanitary latrines in rural areas. Field soil specimens (170) were collected from 34 of the 55 urban and tourist areas where the use of night soil as fertilizer has been prohibited by regulation, and examined for parasite contamination; Ascaris eggs were detected in 44%. Vegetables (64 specimens each) from supply agents of parasite-free vegetables and from general markets were collected and examined for parasite contamination; Ascaris eggs were detected in 25% and 36%, respectively. The parasite control activities and the parasitological examination techniques in the health centers of the country were not satisfactory. The budget of the Ministry of Health for parasite control was very poor. The actual expenditure needed for the cellophane thick smear technique was 8 Won per specimen. In principle, the control of helminthic infections should be directed toward breaking the chain of events in the life cycle of the parasites and eliminating the environmental and host factors concerned with the infections, and the following methods may be pointed out. 1) Mass treatment might be done to eliminate human reservoirs of an infection. 2) Animal reservoirs related to human infections might be eliminated. 3) The excreta of reservoirs, particularly human feces, should be treated in sanitary ways by means of a sanitary sewerage system and sewage treatment plants in urban areas, and sanitary latrines such as water-borne latrines, aqua privies and pit latrines in rural areas. National economic development and prohibition of the habit of using night soil as fertilizer are very important factors in achieving this purpose. 4) The control of vehicles and intermediate hosts might be done by means of prohibiting soil contamination with parasites, food sanitation, insect control and snail control. 5) Insanitary attitudes and bad habits related to parasitic infections might be improved by prohibiting the habit of using night soil as fertilizer and by improving eating habits and personal hygiene. 6) Chemoprophylactic measures and vaccination may be effective in preventing infections, or the development of a parasite to an adult in the body once the body has been invaded by parasites. Further studies and development of these kinds of measures are needed.


Customer Behavior Prediction of Binary Classification Model Using Unstructured Information and Convolution Neural Network: The Case of Online Storefront (비정형 정보와 CNN 기법을 활용한 이진 분류 모델의 고객 행태 예측: 전자상거래 사례를 중심으로)

  • Kim, Seungsoo;Kim, Jongwoo
    • Journal of Intelligence and Information Systems, v.24 no.2, pp.221-241, 2018
  • Deep learning has been getting a lot of attention recently. The deep learning technique applied in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) and in AlphaGo is the Convolutional Neural Network (CNN). CNN is characterized in that the input image is divided into small sections to recognize partial features, which are then combined to recognize the whole. Deep learning technologies are expected to bring many changes to our lives, but until now their applications have been limited to image recognition and natural language processing. The use of deep learning techniques for business problems is still at an early research stage. If their performance is proven, they can be applied to traditional business problems such as marketing response prediction, fraudulent transaction detection, bankruptcy prediction, and so on. It is therefore a meaningful experiment to diagnose the possibility of solving business problems using deep learning technologies, based on the case of online shopping companies, which have big data, find it relatively easy to identify customer behavior, and have high utilization value. In online shopping companies in particular, the competitive environment is changing rapidly and becoming more intense, so the analysis of customer behavior for maximizing profit is becoming more and more important. In this study, we propose a 'CNN model of heterogeneous information integration' using CNN as a way to improve the prediction of customer behavior in online shopping enterprises. The proposed model learns from both structured and unstructured information through a convolutional neural network combined with a multi-layer perceptron. To optimize its performance, we evaluate three architectural elements, 'heterogeneous information integration', 'unstructured information vector conversion', and 'multi-layer perceptron design', and confirm the proposed model based on the results. The target variables for predicting customer behavior are defined as six binary classification problems: re-purchaser, churn, frequent shopper, frequent refund shopper, high-amount shopper, and high-discount shopper. To verify the usefulness of the proposed model, we conducted experiments using actual transaction, customer, and VOC (voice of customer) data of a specific online shopping company in Korea. Data extraction criteria were defined for 47,947 customers who registered at least one VOC in January 2011 (one month). The customer profiles of these customers, a total of 19 months of trading data from September 2010 to March 2012, and the VOCs posted during one month were used. The experiment is divided into two stages. In the first stage, we evaluate the three architectural elements that affect the performance of the proposed model and select optimal parameters; in the second, we evaluate the performance of the proposed model. Experimental results show that the proposed model, which combines both structured and unstructured information, is superior to NBC (Naïve Bayes classification), SVM (support vector machine), and ANN (artificial neural network). It is therefore significant that the use of unstructured information contributes to predicting customer behavior, and that CNN can be applied to solve business problems as well as image recognition and natural language processing problems. The experiments also confirm that CNN is effective in understanding and interpreting the meaning of context in textual VOC data, and this empirical research based on the actual data of an e-commerce company shows that very meaningful information can be extracted, for the prediction of customer behavior, from VOC data written directly by customers in text format. Finally, the various experiments provide useful information for future research related to parameter selection and performance.
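A minimal sketch (PyTorch, with hypothetical layer sizes and names) of the general idea this abstract describes: VOC text is embedded and passed through a convolutional layer, the pooled text features are concatenated with structured customer features, and a multi-layer perceptron produces a binary prediction such as churn. The authors' actual architecture and hyperparameters are not reproduced here.

```python
import torch
import torch.nn as nn

class HybridCNN(nn.Module):
    """Sketch: CNN over VOC tokens + MLP over [text ; structured] features."""
    def __init__(self, vocab_size=10000, embed_dim=64, n_structured=20):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.conv = nn.Conv1d(embed_dim, 32, kernel_size=3, padding=1)
        self.pool = nn.AdaptiveMaxPool1d(1)
        self.mlp = nn.Sequential(
            nn.Linear(32 + n_structured, 64), nn.ReLU(),
            nn.Linear(64, 1), nn.Sigmoid(),          # one binary target, e.g. churn
        )

    def forward(self, tokens, structured):
        x = self.embedding(tokens).transpose(1, 2)   # (batch, embed, seq)
        x = self.pool(torch.relu(self.conv(x))).squeeze(-1)
        return self.mlp(torch.cat([x, structured], dim=1))

model = HybridCNN()
prob = model(torch.randint(0, 10000, (8, 50)), torch.randn(8, 20))  # shape (8, 1)
```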

The Effect of Pleural Thickening on the Impairment of Pulmonary Function in Asbestos Exposed Workers (석면취급 근로자에서 늑막비후가 폐기능에 미치는 영향)

  • Kim, Jee-Won;Ahn, Hyeong-Sook;Kim, Kyung-Ah;Lim, Young;Yun, Im-Goung
    • Tuberculosis and Respiratory Diseases, v.42 no.6, pp.923-933, 1995
  • Background: Pleural abnormality is the most common respiratory change caused by asbestos dust inhalation, and other asbestos-related diseases can develop even after cessation of asbestos exposure. We therefore conducted an epidemiologic study to investigate whether pleural abnormality is associated with pulmonary function change and which factors influence pulmonary function impairment. Methods: Two hundred and twenty-two asbestos workers from 9 industries using asbestos in Korea were selected, and the sectional asbestos fiber concentration was measured. A questionnaire, chest X-ray and PFT were also performed. All the data were analyzed by Student's t-test and chi-square test using SAS. Regression analysis was performed to evaluate important factors affecting the change of pulmonary function, namely smoking, exposure concentration, exposure period and the existence of pleural thickening. Results: 1) In all nine industries except two, the airborne asbestos fiber concentration was less than the average permissible concentration. PFT was performed on 222 workers; 88.3% were male, their mean age was 41 ± 9 years, and the duration of asbestos exposure was 10.6 ± 7.8 years. 2) Chest X-rays showed normal findings (89.19%), inactive pulmonary tuberculosis (2.7%), pleural thickening (7.66%) and suspected reticulonodular shadow (0.9%). 3) The mean values of height, smoking status and asbestos fiber concentration did not differ between subjects with pleural thickening and the others, but age, cumulative pack-years and the duration of asbestos exposure were higher in subjects with pleural thickening. 4) All PFT indices were lower in subjects with pleural thickening than in subjects without. 5) Simple regression analysis showed a significant correlation between FEF75, which is sensitive to small airway obstruction, and cumulative smoking pack-years, the duration of asbestos exposure and the asbestos fiber concentration. 6) Multiple regression analysis showed that all pulmonary function indices decreased as cumulative smoking pack-years increased, especially the indices sensitive to small airway obstruction. Pleural thickening was associated with reductions in FVC, FEV1, PEFR and FEF25. Conclusion: The higher the asbestos fiber concentration and the longer the duration of asbestos exposure, the greater the reduction in FEF50 and FEF75. PFT is therefore important in the early detection of small airway obstruction. Furthermore, pleural thickening without asbestos-related parenchymal lung disease is associated with reduced pulmonary function.
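The multiple-regression step can be sketched as an ordinary least squares model of a pulmonary function index on the exposure factors the study names; the variable names and placeholder data below are hypothetical.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical rows standing in for the 222 workers' records.
df = pd.DataFrame({
    "fef75": [78, 65, 82, 59, 71, 88, 54, 90],            # e.g. % predicted
    "pack_years": [10, 25, 0, 30, 15, 5, 40, 0],
    "exposure_years": [8, 15, 5, 20, 12, 3, 25, 2],
    "fiber_conc": [0.1, 0.4, 0.05, 0.6, 0.2, 0.1, 0.7, 0.05],
    "pleural_thickening": [0, 1, 0, 1, 0, 0, 1, 0],
})
model = smf.ols(
    "fef75 ~ pack_years + exposure_years + fiber_conc + pleural_thickening",
    data=df,
).fit()
print(model.params)   # signs and sizes of the fitted coefficients
```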


Research Trend Analysis Using Bibliographic Information and Citations of Cloud Computing Articles: Application of Social Network Analysis (클라우드 컴퓨팅 관련 논문의 서지정보 및 인용정보를 활용한 연구 동향 분석: 사회 네트워크 분석의 활용)

  • Kim, Dongsung;Kim, Jongwoo
    • Journal of Intelligence and Information Systems, v.20 no.1, pp.195-211, 2014
  • Cloud computing services provide IT resources as services on demand. This is considered a key concept that will lead a shift from an ownership-based paradigm to a new pay-for-use paradigm, which can reduce the fixed cost of IT resources and improve flexibility and scalability. As IT services, cloud services have evolved from earlier, similar computing concepts such as network computing, utility computing, server-based computing, and grid computing, so research into cloud computing is highly related to and combined with various relevant computing research areas. To seek promising research issues and topics in cloud computing, it is necessary to understand the research trends in cloud computing more comprehensively. In this study, we collect bibliographic information and citation information for cloud computing related research papers published in major international journals from 1994 to 2012, and analyze macroscopic trends and network changes in the citation relationships among papers and the co-occurrence relationships of keywords by utilizing social network analysis measures. Through the analysis, we can identify the relationships and connections among research topics in cloud computing related areas and highlight new potential research topics. In addition, we visualize dynamic changes of research topics relating to cloud computing using a proposed cloud computing "research trend map." A research trend map visualizes positions of research topics in two-dimensional space, with the frequencies of keywords (X-axis) and the rates of increase in the degree centrality of keywords (Y-axis) as its two dimensions. Based on the values of the two dimensions, the space of a research map is divided into four areas: maturation, growth, promising, and decline. An area with high keyword frequency but a low rate of increase of degree centrality is defined as a mature technology area; the area where both keyword frequency and the increase rate of degree centrality are high is defined as a growth technology area; the area where keyword frequency is low but the rate of increase in degree centrality is high is defined as a promising technology area; and the area where both keyword frequency and the rate of increase of degree centrality are low is defined as a declining technology area. Based on this method, cloud computing research trend maps make it possible to easily grasp the main research trends in cloud computing and to explain the evolution of research topics. According to the results of an analysis of citation relationships, research papers on security, distributed processing, and optical networking for cloud computing rank at the top by the PageRank measure. From the analysis of keywords in research papers, cloud computing and grid computing showed high centrality in 2009, and keywords dealing with main elemental technologies such as data outsourcing, error detection methods, and infrastructure construction showed high centrality in 2010~2011. In 2012, security, virtualization, and resource management showed high centrality. Moreover, interest in the technical issues of cloud computing was found to increase gradually. From the annual cloud computing research trend maps, it was verified that security is located in the promising area, virtualization has moved from the promising area to the growth area, and grid computing and distributed systems have moved to the declining area. The study results indicate that distributed systems and grid computing received a lot of attention as similar computing paradigms in the early stage of cloud computing research, a period focused on understanding and investigating cloud computing as an emergent technology by linking it to relevant established computing concepts. After the early stage, security and virtualization technologies became main issues in cloud computing, which is reflected in their movement from the promising area to the growth area in the research trend maps. Moreover, this study revealed that current research in cloud computing has rapidly shifted from a focus on technical issues to a focus on application issues, such as SLAs (Service Level Agreements).
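The four quadrants of the research trend map can be expressed as a small classification rule over (keyword frequency, increase rate of degree centrality); the threshold values and keyword data below are hypothetical.

```python
def trend_quadrant(freq, centrality_growth, freq_cut=50, growth_cut=0.1):
    """Quadrant labels as defined for the research trend map."""
    if freq >= freq_cut and centrality_growth >= growth_cut:
        return "growth"
    if freq >= freq_cut:
        return "maturation"
    if centrality_growth >= growth_cut:
        return "promising"
    return "decline"

# Hypothetical (frequency, centrality growth) pairs for a few keywords.
keywords = {"security": (30, 0.25), "virtualization": (80, 0.20),
            "grid computing": (90, -0.05), "SLA": (15, 0.02)}
for kw, (f, g) in keywords.items():
    print(kw, "->", trend_quadrant(f, g))
```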

Rough Set Analysis for Stock Market Timing (러프집합분석을 이용한 매매시점 결정)

  • Huh, Jin-Nyung;Kim, Kyoung-Jae;Han, In-Goo
    • Journal of Intelligence and Information Systems, v.16 no.3, pp.77-97, 2010
  • Market timing is an investment strategy used to obtain excess returns from the financial market. In general, detection of market timing means determining when to buy and sell so as to earn excess returns from trading. In many market timing systems, trading rules have been used as an engine to generate trade signals. On the other hand, some researchers have proposed rough set analysis as a proper tool for market timing because, through a control function, it does not generate a trade signal when the pattern of the market is uncertain. Numeric data for rough set analysis must be discretized, because rough sets only accept categorical data. Discretization searches for proper "cuts" in numeric data that determine intervals; all values lying within an interval are transformed into the same value. In general, there are four methods for data discretization in rough set analysis: equal frequency scaling, expert knowledge-based discretization, minimum entropy scaling, and naïve and Boolean reasoning-based discretization. Equal frequency scaling fixes a number of intervals, examines the histogram of each variable, and then determines cuts so that approximately the same number of samples falls into each interval. Expert knowledge-based discretization determines cuts according to the knowledge of domain experts, through literature review or interviews with experts. Minimum entropy scaling implements an algorithm that recursively partitions the value set of each variable so that a local measure of entropy is optimized. Naïve and Boolean reasoning-based discretization obtains categorical values by naïvely scaling the data, then finds optimized discretization thresholds through Boolean reasoning. Although rough set analysis is promising for market timing, there is little research on the impact of the various data discretization methods on trading performance with rough set analysis. In this study, we compare stock market timing models using rough set analysis with various data discretization methods. The research data used in this study are the KOSPI 200 from May 1996 to October 1998. The KOSPI 200 is the underlying index of the KOSPI 200 futures, the first derivative instrument in the Korean stock market. It is a market-value-weighted index consisting of 200 stocks selected by criteria on liquidity and their status in the corresponding industries, including manufacturing, construction, communication, electricity and gas, distribution and services, and financing. The total number of samples is 660 trading days. In addition, this study uses popular technical indicators as independent variables. The experimental results show that the most profitable method for the training sample is naïve and Boolean reasoning, but expert knowledge-based discretization is the most profitable method for the validation sample; moreover, expert knowledge-based discretization produced robust performance for both the training and validation samples. We also compared rough set analysis with a decision tree, using C4.5 for the comparison. The results show that rough set analysis with expert knowledge-based discretization produced more profitable rules than C4.5.
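Of the four discretization methods compared, equal frequency scaling is the simplest to sketch: quantile-based cuts are chosen so that roughly the same number of samples falls into each interval. The data and interval count below are hypothetical.

```python
import numpy as np

def equal_frequency_cuts(values: np.ndarray, n_intervals: int) -> np.ndarray:
    """Cut points such that each of n_intervals bins holds ~equal sample counts."""
    quantiles = np.linspace(0, 1, n_intervals + 1)[1:-1]   # interior quantiles only
    return np.quantile(values, quantiles)

indicator = np.random.default_rng(0).normal(size=660)  # e.g. 660 trading days
cuts = equal_frequency_cuts(indicator, 4)
categories = np.digitize(indicator, cuts)   # categorical codes for rough set input
```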

Analysis of Variation for Parallel Test between Reagent Lots in in-vitro Laboratory of Nuclear Medicine Department (핵의학 체외검사실에서 시약 lot간 parallel test 시 변이 분석)

  • Chae, Hong Joo;Cheon, Jun Hong;Lee, Sun Ho;Yoo, So Yeon;Yoo, Seon Hee;Park, Ji Hye;Lim, Soo Yeon
    • The Korean Journal of Nuclear Medicine Technology, v.23 no.2, pp.51-58, 2019
  • Purpose: In in-vitro laboratories of nuclear medicine departments, when the reagent lot changes, a comparability test or parallel test is performed to determine whether the results between lots are reliable. The standard most commonly used in domestic laboratories is to obtain the %difference from the difference in results between two lots of reagents; many laboratories then set the acceptance criterion at less than 20% at low concentrations and less than 10% at medium and high concentrations. If a result deviates from this range, the test is considered failed and is repeated until the result falls within the standard range. In this study, several tests performed in nuclear medicine in-vitro laboratories were selected to analyze parallel test results and to establish customized %difference criteria for each test. Materials and Methods: From January to November 2018, the results of parallel tests for reagent lot changes were analyzed for 7 items: thyroid-stimulating hormone (TSH), free thyroxine (FT4), carcinoembryonic antigen (CEA), CA-125, prostate-specific antigen (PSA), HBs-Ab and insulin. The RIA-MAT 280 system, which adopts the IRMA principle, was used for TSH, FT4, CEA, CA-125 and PSA. TECAN automated dispensing equipment and a GAMMA-10 counter were used for the insulin test. For the HBs-Ab test, HAMILTON automated dispensing equipment and a Cobra gamma-ray counter were used. Separate reagents, customized calibrators and quality control materials were used in this experiment. Results (%difference given as Max / Mean / Median / Min; p-value by t-test > 0.05 for all items):
1. TSH: C-1 (low) 14.8 / 4.4 / 3.7 / 0.0; C-2 (middle) 10.1 / 4.2 / 3.7 / 0.0
2. FT4: C-1 (low) 10.0 / 4.2 / 3.9 / 0.0; C-2 (high) 9.6 / 3.3 / 3.1 / 0.0
3. CA-125: C-1 (middle) 9.6 / 4.3 / 4.3 / 0.3; C-2 (high) 6.5 / 3.5 / 4.3 / 0.4
4. CEA: C-1 (low) 9.8 / 4.2 / 3.0 / 0.0; C-2 (middle) 8.7 / 3.7 / 2.3 / 0.3
5. PSA: C-1 (low) 15.4 / 7.6 / 8.2 / 0.0; C-2 (middle) 8.8 / 4.5 / 4.8 / 0.9
6. HBs-Ab: C-1 (middle) 9.6 / 3.7 / 2.7 / 0.2; C-2 (high) 8.9 / 4.1 / 3.6 / 0.3
7. Insulin: C-1 (middle) 8.7 / 3.1 / 2.4 / 0.9; C-2 (high) 8.3 / 3.2 / 1.5 / 0.1
In some low-concentration measurements, the %difference rose above 10 to nearly 15 percent, because the target value was calculated at a lower concentration. In addition, when a value was measured immediately after Standard level 6, the highest reagent standard in the dispensing sequence, the result may have been affected by a hook effect. Overall, there was no significant difference with the lot change of quality control material (p-value > 0.05). Conclusion: Variation between reagent lots is not large in immunoradiometric assays, likely because the selected items have relatively high detection rates in the immunoradiometric method and because of repeated remeasurements. In most test results the difference was less than 10 percent, within the standard range. TSH control level 1 and PSA control level 1, which have low-concentration target values, exceeded 10 percent more than twice, but never approached 20 percent. Longer-term observation is therefore required to obtain more homogeneous average results and laboratory-specific acceptance criteria for each item, and further studies considering various variables are advised.
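The acceptance rule described in this abstract reduces to a short check; the sketch below assumes one common definition of %difference (absolute difference relative to the mean of the two lot results), with hypothetical example values.

```python
def percent_difference(old_lot: float, new_lot: float) -> float:
    """%difference between two lots, relative to their mean (assumed definition)."""
    mean = (old_lot + new_lot) / 2
    return abs(old_lot - new_lot) / mean * 100

def passes_parallel_test(old_lot, new_lot, low_concentration: bool) -> bool:
    limit = 20.0 if low_concentration else 10.0   # criteria cited in the abstract
    return percent_difference(old_lot, new_lot) < limit

# e.g. a low-concentration control measured on the old and new lot (hypothetical)
print(passes_parallel_test(0.48, 0.53, low_concentration=True))
```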