• Title/Summary/Keyword: Question Generation (문제 생성)

Does the Gut Microbiota Regulate a Cognitive Function? (장내미생물과 인지기능은 서로 연관되어 있는가?)

  • Choi, Jeonghyun;Jin, Yunho;Kim, Joo-Heon;Hong, Yonggeun
    • Journal of Life Science / v.29 no.6 / pp.747-753 / 2019
  • Cognitive decline is characterized by reduced long- and short-term memory and attention span, and by increased depression and anxiety. Such decline is associated with various degenerative brain disorders, especially Alzheimer's disease (AD) and Parkinson's disease (PD). The growth of elderly populations suffering from cognitive decline creates social problems, imposes economic burdens, and poses safety threats; all of these problems have been extensively researched over the past several decades. Possible causes of cognitive decline include metabolic and hormonal imbalance, infection, medication abuse, and neuronal changes associated with aging. However, no treatment for cognitive decline is available. In neurodegenerative diseases, changes in the gut microbiota and gut metabolites can alter molecular expression and neurobehavioral symptoms. Changes in the gut microbiota affect memory loss in AD via the downregulation of NMDA receptor expression and increased glutamate levels. Furthermore, the use of probiotics has resulted in neurological improvement in an AD model. PD and gut microbiota dysbiosis are directly linked; this interrelationship affects the development of constipation, a secondary symptom in PD. In a PD model, the administration of probiotics prevented neuronal death by increasing butyrate levels. Dysfunction of the blood-brain barrier (BBB) has been identified in both AD and PD. Increased BBB permeability is also associated with gut microbiota dysbiosis, which leads to the destruction of microtubules via systemic inflammation. Notably, metabolites of the gut microbiota may trigger either the development or the attenuation of neurodegenerative disease. Here, we discuss the correlation between cognitive decline and the gut microbiota.

Accuracy Analysis of Target Recognition according to EOC Conditions (Target Occlusion and Depression Angle) using MSTAR Data (MSTAR 자료를 이용한 EOC 조건(표적 폐색 및 촬영부각)에 따른 표적인식 정확도 분석)

  • Kim, Sang-Wan;Han, Ahrim;Cho, Keunhoo;Kim, Donghan;Park, Sang-Eun
    • Korean Journal of Remote Sensing / v.35 no.3 / pp.457-470 / 2019
  • Automatic Target Recognition (ATR) using Synthetic Aperture Radar (SAR) has attracted attention in the fields of surveillance, reconnaissance, and national security owing to its all-weather, day-and-night imaging capability. However, automatically identifying targets in real situations remains difficult under various observational and environmental conditions. In this paper, ATR problems under Extended Operating Conditions (EOC) were investigated. In particular, we considered partial occlusion of the target (10% to 50%) and differences in depression angle between the training data ($17^{\circ}$) and the test data ($30^{\circ}$ and $45^{\circ}$). To simulate various occlusion conditions, the SARBake algorithm was applied to Moving and Stationary Target Acquisition and Recognition (MSTAR) images. ATR accuracy was evaluated using the template matching and Adaboost algorithms. Experimental results on the depression angle showed that the target identification rates of the two algorithms decreased by more than 30% between the depression angles of $45^{\circ}$ and $30^{\circ}$. The accuracy of template matching was about 75.88%, while Adaboost showed better results with an accuracy of about 86.80%. In the case of partial occlusion, the accuracy of template matching decreased significantly even under slight occlusion (from 95.77% with no occlusion to 52.69% with 10% occlusion). The Adaboost algorithm performed better, with an accuracy of 85.16% with no occlusion and 68.48% with 10% occlusion. Even with 50% occlusion, Adaboost achieved an accuracy of 52.48%, far higher than template matching (less than 30% at 50% occlusion).
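
As a rough illustration of the template matching step evaluated in this abstract, here is a minimal numpy sketch assuming normalized cross-correlation against per-class templates; the random arrays, class names, and the crude row-zeroing "occlusion" are all illustrative stand-ins (the paper builds templates from MSTAR training chips at a $17^{\circ}$ depression angle and simulates occlusion with SARBake).

```python
import numpy as np

def ncc_score(image_chip, template):
    """Normalized cross-correlation between an image chip and a class template."""
    a = image_chip - image_chip.mean()
    b = template - template.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def classify_by_template(chip, class_templates):
    """Assign the chip to the class whose template correlates best with it."""
    scores = {label: ncc_score(chip, t) for label, t in class_templates.items()}
    return max(scores, key=scores.get), scores

def occlude(chip, fraction, rng):
    """Zero out a contiguous block of rows to mimic partial occlusion
    (the paper uses the SARBake algorithm; this is only a stand-in)."""
    out = chip.copy()
    n_rows = int(round(fraction * chip.shape[0]))
    start = rng.integers(0, chip.shape[0] - n_rows + 1)
    out[start:start + n_rows, :] = 0.0
    return out

# Toy demonstration with random "SAR chips" standing in for MSTAR data.
rng = np.random.default_rng(0)
templates = {"T72": rng.random((64, 64)), "BMP2": rng.random((64, 64))}
test_chip = templates["T72"] + 0.1 * rng.standard_normal((64, 64))
for frac in (0.0, 0.1, 0.5):
    label, _ = classify_by_template(occlude(test_chip, frac, rng), templates)
    print(f"occlusion {frac:.0%}: predicted {label}")
```

Run on real chips, a correlation-based classifier of this kind degrades quickly as occlusion removes the pixels the template relies on, which is consistent with the sharp drop the paper reports for template matching under even 10% occlusion.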

A Study Concerning the Background of Formation in Deleuze's System (들뢰즈 체계의 형성 배경에 대한 연구 - 칸트 선험철학 체계 그 심연으로부터의 역류 -)

  • Kim, Dae-hyeon
    • Journal of the Daesoon Academy of Sciences / v.37 / pp.329-355 / 2021
  • The objective of this paper is to reveal that the formation of Deleuze's system results from a backflow of the 'ideal of pure reason' in Kant's system. I try to seize upon the keyword of his main book, Difference and Repetition, and examine the mutual transformation between Deleuze's transcendental empiricism and Kant's transcendentalism. When analyzing Deleuze's system, most researchers tend to focus on anti-Hegelianism, but it is more appropriate to adopt Kant as the starting point when tracing the deployment of the system directly. Fundamentally, Deleuze differs from Hegel in his approach to observing the entire ground of thought. Even if Deleuze surely has the capability of becoming in the dialectical context, the systemic environment in which his dialectics is applied differs from the outset. While Hegel follows the way of origin and copy, a system that begins from a preceding point of origin, Deleuze follows the way of copy and recopy, a system that begins without a point of origin. This characteristic of Deleuze's system originates directly from idealistic play. In fact, we can anticipate and identify in his book that he refers to Kant as having accepted the tradition of empiricism. Therefore, the main content of this paper is an overview of Kant's influence on Deleuze's system. Tracing these ideas back to Kant's system, in the cohabitation of empiricism and rationalism that Kant felicitously voiced, there emerges a definitude of world recognition. This occurs through cohabitation, which is both deconstructed and integrated by Deleuze, and therein definitude is turned into a vision of prosperity. With regard to the vision of prosperity that spans definitude to recognition, a philosopher has the right to select a philosophical system, because methodological selection in philosophy is not a problem of legitimacy so much as of the needs of the times. Deleuze's choice resulted in the opening of a Pandora's box in an abyss, and secret contents have in turn risen sharply.

Application of Plant Flavonoids as Natural Antioxidants in Poultry Production (가금 생산에서 천연 항산화제로서 식물성 Flavonoids의적용)

  • Seomoon, Kang-Min;Jang, In-Surk
    • Korean Journal of Poultry Science / v.49 no.4 / pp.211-220 / 2022
  • Poultry are exposed to extremely high levels of oxidative stress as a consequence of the excessive production of reactive oxygen species (ROS) induced by endogenous and exogenous stressors, such as high stocking densities, thermal stress, and environmental and feed contamination, along with factors associated with intensive breeding systems. Oxidative stress promotes lipid peroxidation, DNA damage, and inflammation, which can have detrimental effects on the health of birds. Over the course of evolution, birds have developed antioxidant defense mechanisms that contribute to maintaining homeostasis when they are exposed to endogenous and exogenous stressors. The primary antioxidant defense systems are enzymatic and non-enzymatic in nature and play roles in protecting cells from ROS attack. Recently, plant flavonoids, which have been established to reduce oxidative stress, have attracted considerable attention as potential feed additives. Flavonoids are a group of polyphenolic compounds that can stabilize ROS by binding them to structural compounds, and they can promote the elimination of ROS by inducing the expression of antioxidant enzymes. However, although flavonoids can contribute to reducing lipid peroxidation and thereby enhance the antioxidant capacity of birds, they have low solubility in the gastrointestinal tract; consequently, it is necessary to develop a delivery technology that can facilitate the effective intestinal absorption of these compounds. Furthermore, it is important to determine dietary levels of flavonoids by assessing their exact antioxidant effects in the gastrointestinal tract, where the concentrations of dietary flavonoids are highest. It is also necessary to examine the expression of transcription factors and vitagenes associated with the efficient antioxidant effects induced by flavonoids. It is anticipated that the application of flavonoids as natural antioxidants will become a particularly important field in the poultry industry.

SF Movie Star Trek Series and the Motif of Time Travel (SF영화 <스타트랙> 시리즈와 시간여행의 모티프)

  • Noh, Shi-Hun
    • Journal of Popular Narrative / v.25 no.1 / pp.165-191 / 2019
  • The purpose of this article is to elucidate why the motif of time travel recurs in science fiction narratives by examining the functions of this motif, in both its narrative and non-narrative aspects, in the Star Trek film series. Star Trek IV: The Voyage Home (1986) aims to attract the audience's interest in the story through the use of plausible time travel in the form of the slingshot effect, which causes the spacecraft to fly at very high speed around an astronomical object. The movie also touches upon the predestination paradox arising from a change of history, in that it describes a formula for transparent aluminum that did not exist at the time. The film also evokes the ideology of ecology by placing humpback whales in the central narrative, responding to the real-world whale protection movement of the period. Star Trek VIII: First Contact (1996) interests the audience in the narrative through the warp drive, a virtual device that enables travel faster than light and a signature visual of Star Trek, witnessed at the moment of its birth through time travel. The film emphasizes the continuation of peaceful efforts by warning of the destruction that nuclear war can bring upon humanity. It engages with pacifism and idealism by stressing the importance of cooperation between countries in the real world, leading the audience to anticipate the creation of the United Federation of Planets through encounters with extraterrestrials. Star Trek: The Beginning (2009) heightens interest through the idea of time travel to the past, this time using a black hole and the parallel universe created thereby. The parallel universe functions as a reboot, allowing a new story to be created on an alternate timeline while maintaining the original storyline. In addition, this film repeats the themes of pacifism and idealism shown in the 1996 film through the confrontation between Spock (and Starfleet) and Nero, the destruction of Vulcan and Romulus, and the cooperation of humans and Vulcans. Ultimately, time travel in the three Star Trek films functions as a narrative tool that maximizes the audience's interest in the story and allows it to develop freely. It also functions, in the non-narrative aspect, as an ideal vehicle for commenting on current problems. The significance of this paper lies in stressing the possibility that the motif of time travel in SF narratives will continue to evolve as it recurs in different forms, as described above.

An Intelligence Support System Research on KTX Rolling Stock Failure Using Case-based Reasoning and Text Mining (사례기반추론과 텍스트마이닝 기법을 활용한 KTX 차량고장 지능형 조치지원시스템 연구)

  • Lee, Hyung Il;Kim, Jong Woo
    • Journal of Intelligence and Information Systems / v.26 no.1 / pp.47-73 / 2020
  • A KTX rolling stock is a system consisting of several machines, electrical devices, and components, and its maintenance requires considerable expertise and experience. In the event of a failure, the knowledge and experience of the maintainer determine the time required and the quality of the corrective work, so the resulting availability of the vehicle varies. Although problem solving is generally based on fault manuals, experienced and skilled professionals can quickly diagnose faults and take action by applying personal know-how. Since this knowledge exists in tacit form, it is difficult to pass on completely to a successor, and previous studies have developed case-based rolling stock expert systems to turn it into data-driven knowledge. Nonetheless, research on the KTX rolling stock most commonly used on main lines, and on systems that extract the meaning of text to search for similar cases, is still lacking. Therefore, this study proposes an intelligent support system that provides an action guide for newly occurring failures by using the know-how of rolling stock maintenance experts as problem-solving examples. For this purpose, a case base was constructed by collecting rolling stock failure data generated from 2015 to 2017, and an integrated dictionary was built from the case base to cover the essential terminology and failure codes specific to the railway rolling stock sector. Given a new failure, the deployed case base is searched, the three most similar past failure cases are extracted, and the actual actions taken in those cases are proposed as a diagnostic guide. To overcome the limitations of the keyword-matching case retrieval used in previous case-based-reasoning studies of rolling stock failure expert systems, this study applied various dimensionality reduction techniques that calculate similarity while accounting for the semantic relationships among failure descriptions, and verified their usefulness through experiments. Three algorithms were applied to extract the characteristics of each failure and measure the cosine distance between the resulting vectors: Non-negative Matrix Factorization (NMF), Latent Semantic Analysis (LSA), and Doc2Vec. Precision, recall, and F-measure were used to assess the performance of the proposed actions. Analysis of variance confirmed that the performance differences among five algorithms (the three above, an algorithm that randomly retrieves failure cases with identical failure codes, and an algorithm that applies cosine similarity directly to word vectors) were statistically significant. In addition, optimal settings for practical application were derived by verifying how performance varies with the number of reduced dimensions. The analysis showed that direct word-based cosine similarity performed better than NMF and LSA, and that the Doc2Vec-based algorithm performed best of all. Furthermore, within an appropriate range, performance improved as the number of dimensions increased. Through this study, we confirmed the usefulness of effective methods for extracting data characteristics and converting unstructured data when applying case-based reasoning in the specialized field of KTX rolling stock, where most attributes are recorded as text. Text mining is being applied in many areas, but such studies are still lacking in environments like the present one, with numerous specialized terms and limited access to data. In this regard, it is significant that this study is the first to present an intelligent diagnostic system that suggests actions by retrieving cases with text mining techniques that extract failure characteristics, complementing keyword-based case search. It is expected to serve as a foundational study for developing diagnostic systems that can be used immediately in the field.
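
As a minimal sketch of the retrieval idea described above, assuming hypothetical failure descriptions and actions in place of the actual KTX case base, the snippet below vectorizes cases with TF-IDF, reduces dimensionality with truncated SVD (the LSA variant; the paper also tests NMF and Doc2Vec), and returns the top three cases by cosine similarity:

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical failure descriptions and actions standing in for the case base.
case_base = [
    "traction motor overheating alarm during acceleration",
    "pantograph contact loss and catenary voltage drop",
    "brake cylinder pressure low on trailing bogie",
    "traction converter fault code with motor temperature rise",
]
actions = [
    "inspect motor cooling fan and blower duct",
    "check pantograph carbon strip and contact force",
    "bleed brake line and test cylinder seals",
    "reset converter and verify motor temperature sensor",
]

# TF-IDF followed by truncated SVD approximates the LSA step in the paper.
tfidf = TfidfVectorizer()
X = tfidf.fit_transform(case_base)
lsa = TruncatedSVD(n_components=3, random_state=0)
X_lsa = lsa.fit_transform(X)

def retrieve(new_failure, k=3):
    """Return the k most similar past cases and their recorded actions."""
    q = lsa.transform(tfidf.transform([new_failure]))
    sims = cosine_similarity(q, X_lsa).ravel()
    top = np.argsort(sims)[::-1][:k]
    return [(case_base[i], actions[i], float(sims[i])) for i in top]

for case, action, sim in retrieve("motor overheating fault while accelerating"):
    print(f"{sim:.2f}  {case}  ->  {action}")
```

In a real deployment the vectorizer would be built on the integrated dictionary of domain terms and failure codes the study describes; here the vocabulary comes straight from the toy corpus.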

A Coexistence Model in a Dynamic Platform with ICT-based Multi-Value Chains: focusing on Healthcare Service (ICT 기반 다중 가치사슬의 동적 플랫폼에서의 공존 모형: 의료서비스를 중심으로)

  • Lee, Hyun Jung;Chang, Yong Sik
    • Journal of Intelligence and Information Systems / v.23 no.1 / pp.69-93 / 2017
  • The development of ICT has led to the diversification and transformation of supply and demand in markets. It has also created a variety of values differentiated from those in existing markets. The result is a new type of market that can include multiple value chains, drawn from ICT-created markets as well as existing ones; we define this new type of market as a platform. On the platform, multiple value chains can coexist with multiple values. In a real market, when a new type of value chain enters an existing market, it generally conflicts with the existing value chain. Conflict among multiple value chains in a market arises because the value chains share limited market resources, such as suppliers, consumers, services, or products. In other words, if there are multiple value chains on the platform, then conflicts, overlaps, creations, or losses of value can occur among them. To solve this problem, we introduce coexistence factors that reduce the conflicts and move the platform toward market equilibrium. At the same time, it is possible to create values differentiated from the existing market and to augment the total value of the platform. In the early era of ICT development, ICT was introduced to improve the efficiency and effectiveness of value chains in existing markets. However, as its role changed from supporter to promoter of the market, ICT came to drive variations of value chains and the creation of various values in markets. For instance, Uber created a new value chain with an ICT-based service and new resources, namely new suppliers and consumers. When Uber and traditional taxi services operate at the same time on a taxi service platform, values can be created, or conflicts can arise, between the new and old value chains. As in the Uber and traditional taxi example, if there are conflicts among multiple value chains, it is necessary to minimize them so that the value chains can coexist and create added value on the platform. It is therefore important to predict and discuss the possible conflicts between new and old value chains; these conflicts should be resolved to reach market equilibrium with multiple value chains on the platform. That is, we discuss the possibility of the coexistence of multiple value chains, comprising a variety of suppliers and customers, on one platform. To do this, we focus on healthcare markets, which are now popular both globally and domestically. There is a wide variety of healthcare services, such as traditional, tele-, and intelligent healthcare, meaning that multiple suppliers, consumers, and services form the components of different value chains on the same platform. The platform can be shared by different values that are created, or overlapped through conflict and loss of value, across the value chains. As stated, we focus on healthcare services to show whether a platform can be shared by different value chains, such as traditional, tele-, and intelligent healthcare services and products, and whether it is possible to increase the value of each value chain as well as the total value of the platform. The results show that both the value of each value chain and the total value of the platform can be increased. Finally, we propose a coexistence model to overcome the conflict problem and demonstrate, through experimentation, the possibility of coexistence among the value chains.

A Time Series Graph based Convolutional Neural Network Model for Effective Input Variable Pattern Learning : Application to the Prediction of Stock Market (효과적인 입력변수 패턴 학습을 위한 시계열 그래프 기반 합성곱 신경망 모형: 주식시장 예측에의 응용)

  • Lee, Mo-Se;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems / v.24 no.1 / pp.167-181 / 2018
  • Over the past decade, deep learning has been in the spotlight among machine learning algorithms. In particular, CNNs (Convolutional Neural Networks), known as an effective solution for recognizing and classifying images and voices, have been widely applied to classification and prediction problems. In this study, we investigate how to apply CNNs to business problem solving. Specifically, we propose applying a CNN to stock market prediction, one of the most challenging tasks in machine learning research. As mentioned, CNNs are strong at interpreting images. Thus, the model proposed in this study adopts a CNN as a binary classifier that predicts the stock market direction (upward or downward) using time series graphs as its inputs. That is, our proposal is to build a machine learning algorithm that mimics the experts called 'technical analysts', who examine graphs of past price movements and predict future price movements. Our proposed model, named CNN-FG (Convolutional Neural Network using Fluctuation Graph), consists of five steps. In the first step, it divides the dataset into intervals of 5 days. In step 2, it creates time series graphs for the divided dataset. Each graph is drawn as a $40{\times}40$ pixel image, and the graph of each independent variable is drawn in a different color. In step 3, the model converts the images into matrices: each image becomes a combination of three matrices expressing its color values on the R (red), G (green), and B (blue) scales. In the next step, it splits the graph images into training and validation datasets; we used 80% of the total dataset for training and the remaining 20% for validation. Finally, CNN classifiers are trained on the images of the training dataset. Regarding the parameters of CNN-FG, we adopted two convolution filters ($5{\times}5{\times}6$ and $5{\times}5{\times}9$) in the convolution layers. In the pooling layer, a $2{\times}2$ max pooling filter was used. The numbers of nodes in the two hidden layers were set to 900 and 32, respectively, and the number of nodes in the output layer was set to 2 (one for the prediction of an upward trend, the other for a downward trend). The activation functions for the convolution and hidden layers were set to ReLU (Rectified Linear Unit), and that for the output layer to the Softmax function. To validate CNN-FG, we applied it to the prediction of the KOSPI200 over 2,026 days in eight years (2009 to 2016). To match the proportions of the two groups in the dependent variable (i.e., tomorrow's stock market movement), we selected 1,950 samples by random sampling. Finally, we built the training dataset from 80% of the total dataset (1,560 samples) and the validation dataset from the remaining 20% (390 samples). The independent variables of the experimental dataset comprised twelve technical indicators popularly used in previous studies, including Stochastic %K, Stochastic %D, Momentum, ROC (rate of change), LW %R (Larry Williams' %R), A/D oscillator (accumulation/distribution oscillator), OSCP (price oscillator), and CCI (commodity channel index). To confirm the superiority of CNN-FG, we compared its prediction accuracy with those of other classification models. Experimental results showed that CNN-FG outperforms LOGIT (logistic regression), ANN (artificial neural network), and SVM (support vector machine) with statistical significance. These empirical results imply that converting time series business data into graphs and building CNN-based classification models on those graphs can be effective in terms of prediction accuracy. Thus, this paper sheds light on how to apply deep learning techniques to the domain of business problem solving.
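
For readers who want to see the described architecture concretely, here is a short Keras sketch of CNN-FG as specified above; the abstract gives the filter shapes, pooling size, hidden layer widths, activations, and output layer, but not the exact placement of pooling, so pooling after each convolution is an assumption, and the random batch only stands in for the fluctuation-graph images.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Sketch of CNN-FG: 40x40 RGB fluctuation-graph images, two 5x5 convolution
# layers with 6 and 9 filters, 2x2 max pooling (assumed after each
# convolution), dense layers of 900 and 32 ReLU units, and a 2-way softmax
# for upward vs. downward market direction.
model = keras.Sequential([
    keras.Input(shape=(40, 40, 3)),
    layers.Conv2D(6, kernel_size=5, activation="relu"),
    layers.MaxPooling2D(pool_size=2),
    layers.Conv2D(9, kernel_size=5, activation="relu"),
    layers.MaxPooling2D(pool_size=2),
    layers.Flatten(),
    layers.Dense(900, activation="relu"),
    layers.Dense(32, activation="relu"),
    layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Dummy batch standing in for the graph images (the study's real inputs are
# the 1,560 training / 390 validation KOSPI200 fluctuation graphs).
x = np.random.rand(8, 40, 40, 3).astype("float32")
y = np.random.randint(0, 2, size=8)
model.fit(x, y, epochs=1, verbose=0)
print(model.predict(x[:2], verbose=0))  # per-class probabilities
```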

Corporate Default Prediction Model Using Deep Learning Time Series Algorithm, RNN and LSTM (딥러닝 시계열 알고리즘 적용한 기업부도예측모형 유용성 검증)

  • Cha, Sungjae;Kang, Jungseok
    • Journal of Intelligence and Information Systems / v.24 no.4 / pp.1-32 / 2018
  • Corporate defaults affect not only the stakeholders of bankrupt companies, including managers, employees, creditors, and investors, but also ripple through the local and national economy. Before the Asian financial crisis, the Korean government analyzed only SMEs and tried to improve the forecasting power of a single default prediction model rather than developing various corporate default models. As a result, even large corporations, the so-called chaebol enterprises, went bankrupt. Even after that, analysis of past corporate defaults remained focused on specific variables, and when the government carried out restructuring immediately after the global financial crisis, it focused only on certain main variables such as the debt ratio. A multifaceted study of corporate default prediction models is essential to protect diverse interests and to avoid situations like the Lehman Brothers case of the global financial crisis, in which total collapse occurs in a single moment. The key variables in corporate defaults vary over time: comparing the analyses of Beaver (1967, 1968) and Altman (1968) with the study of Deakin (1972) shows that the major factors affecting corporate failure have changed, and Grice (2001) likewise found shifts in the importance of the predictive variables of Zmijewski's (1984) and Ohlson's (1980) models. However, past studies have used static models, and most do not consider changes that occur over time. Therefore, to construct consistent prediction models, it is necessary to compensate for time-dependent bias by means of a time series algorithm that reflects dynamic change. Centered on the global financial crisis, which had a significant impact on Korea, this study uses 10 years of annual corporate data, from 2000 to 2009. The data are divided into training, validation, and test sets covering 7, 2, and 1 years, respectively. To construct a bankruptcy model that is consistent over time, we first train a time series deep learning model using the data before the financial crisis (2000~2006). Parameter tuning of the existing models and the deep learning time series algorithm is conducted on validation data that include the financial crisis period (2007~2008). As a result, we construct a model that shows patterns similar to the training results and exhibits excellent predictive power. After that, each bankruptcy prediction model is rebuilt by merging the training and validation data (2000~2008) and applying the optimal parameters from the validation step. Finally, each corporate default prediction model is evaluated and compared on the test data (2009) using the models trained over the nine years, demonstrating the usefulness of the corporate default prediction model based on the deep learning time series algorithm. In addition, by adding Lasso regression to the existing variable selection methods (multiple discriminant analysis and the logit model), we show that the deep learning time series model based on the three bundles of variables is useful for robust corporate default prediction. The definition of bankruptcy used is the same as that of Lee (2015). The independent variables include financial information, such as the financial ratios used in previous studies. Multivariate discriminant analysis, the logit model, and the Lasso regression model are used to select the optimal variable groups. The predictive performance of the multivariate discriminant analysis model proposed by Altman (1968), the logit model proposed by Ohlson (1980), non-time-series machine learning algorithms, and deep learning time series algorithms is compared. Corporate data suffer from nonlinear variables, multicollinearity among variables, and lack of data. The logit model handles nonlinearity, the Lasso regression model addresses the multicollinearity problem, and the deep learning time series algorithm, using a variable data generation method, compensates for the lack of data. Big data technology is moving from simple human analysis toward automated AI analysis and, eventually, intertwined AI applications. Although the study of corporate default prediction models using time series algorithms is still in its early stages, the deep learning algorithm is much faster than regression analysis at corporate default prediction modeling and offers greater predictive power. Amid the Fourth Industrial Revolution, the Korean government and governments overseas are working hard to integrate such systems into the everyday life of their nations and societies, yet deep learning time series research for the financial industry remains insufficient. This is an initial study of deep learning time series analysis of corporate defaults, and it is hoped that it will serve as comparative material for non-specialists beginning studies that combine financial data with deep learning time series algorithms.
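
Since the abstract names RNN/LSTM but gives no architecture details, here is a generic, minimal Keras sketch of an LSTM default classifier over yearly sequences of financial ratios; the shapes (7 years, 10 ratios), the layer width, and the random data are illustrative assumptions, not the study's configuration:

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Toy stand-in data: 500 firms, 7 yearly observations of 10 financial ratios
# each, with a binary default label. The study's real inputs are Korean
# corporate data from 2000-2009.
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 7, 10)).astype("float32")
y = rng.integers(0, 2, size=500)

model = keras.Sequential([
    keras.Input(shape=(7, 10)),             # (time steps, financial ratios)
    layers.LSTM(32),                        # summarizes the firm's trajectory
    layers.Dense(1, activation="sigmoid"),  # estimated probability of default
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=2, batch_size=64, verbose=0)
print(model.predict(X[:3], verbose=0).ravel())  # default probabilities
```

The point of the sequential input, as the abstract argues, is that the model sees each firm's trajectory of ratios rather than a single static snapshot, which is what distinguishes it from the discriminant analysis and logit baselines.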

Analysis of Systemic Pesticide Imidacloprid and Its Metabolites in Pepper using QuEChERS and LC-MS/MS (QuEChERS 전처리와 LC-MS/MS를 이용한 고추 중 침투성농약 Imidacloprid 및 대사물질 동시분석법)

  • Seo, Eun-Kyung;Kim, Taek-Kyum;Hong, Su-Myeong;Kwon, Hye-Yong;Kwon, Ji-Hyung;Son, Kyung-Ae;Kim, Jang-Eok;Kim, Doo-Ho
    • The Korean Journal of Pesticide Science / v.17 no.4 / pp.264-270 / 2013
  • Imidacloprid is a systemic insecticide that acts as an insect neurotoxin. It is used to control pests such as aphids and other sucking insects on fruits and vegetables. Systemic pesticides move inside a crop after absorption by the plant and are converted into a variety of metabolites, which can raise safety concerns for agricultural products. A method for the simultaneous determination of a pesticide and its metabolites is therefore needed to monitor their presence in agricultural products and to study the fate of the pesticide in plants. The aim of this study was to establish a simultaneous analysis method for imidacloprid and its metabolites (imidacloprid guanidine, imidacloprid olefin, imidacloprid urea, and 6-chloronicotinic acid) in red pepper using the QuEChERS method and LC-MS/MS. The QuEChERS method was modified because $MgSO_4$ salts decreased the recovery of 6-chloronicotinic acid during extraction. Imidacloprid and its metabolites were extracted with acetonitrile containing 1% glacial acetic acid, and the extracts were purified by QuEChERS cleanup with primary secondary amine (PSA) and $C_{18}$ sorbents and analyzed by LC-MS/MS in ESI positive mode. Standard calibration curves were constructed from matrix-matched standards, and their correlation coefficients were higher than 0.999. Recovery studies were carried out on spiked pepper blank samples at four concentration levels (0.01, 0.04, 0.1, and 0.4 mg/kg). The average recoveries of imidacloprid and its metabolites were in the range of 70~120% with < 20% RSD. These results indicate that the method using QuEChERS and LC-MS/MS is suitable for the simultaneous determination of imidacloprid and its metabolites in red pepper.
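
To make the validation criteria quoted above concrete (calibration linearity with r > 0.999, recoveries of 70~120% with < 20% RSD), here is a small Python sketch of the underlying arithmetic; all numbers are invented for illustration and are not the paper's data:

```python
import numpy as np

# Matrix-matched calibration: fit detector response against spiking level.
conc = np.array([0.01, 0.04, 0.1, 0.4])           # spiking levels, mg/kg
peak_area = np.array([1020, 4110, 10150, 40480])  # illustrative responses

slope, intercept = np.polyfit(conc, peak_area, 1)
r = np.corrcoef(conc, peak_area)[0, 1]
print(f"calibration: y = {slope:.1f}x + {intercept:.1f}, r = {r:.4f}")

# Percent recovery and RSD from replicate measurements of a spiked blank
# sample (here, five replicates at the 0.1 mg/kg level).
measured = np.array([0.093, 0.098, 0.101, 0.095, 0.099])  # mg/kg found
recovery = measured / 0.1 * 100
rsd = measured.std(ddof=1) / measured.mean() * 100
print(f"mean recovery {recovery.mean():.1f}%, RSD {rsd:.1f}%")
```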