• Title/Summary/Keyword: mobile search

Search results: 765

A Study of the Beauty Commerce Customer Segment Classification and Application based on Machine Learning: Focusing on Untact Service (머신러닝 기반의 뷰티 커머스 고객 세그먼트 분류 및 활용 방안: 언택트 서비스 중심으로)

  • Sang-Hyeak Yoon;Yoon-Jin Choi;So-Hyun Lee;Hee-Woong Kim
    • Information Systems Review / v.22 no.4 / pp.75-92 / 2020
  • As population and generation structures change and information technology and smartphones spread, more and more customers tend to avoid face-to-face interactions. This trend matches the efficiency and immediacy that modern, technology-savvy customers expect, so distribution companies built around offline networks are actively trying to shift their sales and services to untact (contactless) channels. Untact services have recently expanded into many fields, but beauty products are difficult to recommend through such services because suitable options depend heavily on skin type and condition. Although there have been many studies on recommendations and recommendation systems in the online beauty field, most of them develop recommendation algorithms from survey or social data; few classify customer segments based on user information such as skin type and product preference. This study therefore classifies customer segments with the K-prototypes algorithm, a machine learning technique, using customer information and the search-log data of a mobile application, one of the untact services in the beauty field, and proposes an untact marketing strategy based on the resulting segments. The study extends the previous literature by applying machine learning to segment classification, and it is practically meaningful in that it reflects the new untact consumption trend and suggests a concrete plan that can be used in untact services of the beauty field.
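As an illustration of the kind of mixed-type clustering this abstract describes, the sketch below uses the open-source kmodes package's KPrototypes implementation; the feature names, values, and cluster count are hypothetical stand-ins, not the study's actual variables or data.

```python
# Illustrative sketch of K-prototypes customer segmentation (not the study's actual data).
import pandas as pd
from kmodes.kprototypes import KPrototypes

# Hypothetical customer records: numeric search-log features plus categorical skin profile.
customers = pd.DataFrame({
    "monthly_searches": [12, 3, 45, 7, 30, 22],                 # numeric
    "avg_session_min": [4.2, 1.1, 9.8, 2.5, 6.0, 5.3],          # numeric
    "skin_type": ["dry", "oily", "combination", "dry", "sensitive", "oily"],          # categorical
    "preferred_category": ["skincare", "makeup", "skincare", "haircare", "skincare", "makeup"],
})

X = customers.to_numpy()
categorical_idx = [2, 3]  # column positions of the categorical attributes

# K-prototypes mixes k-means distance (numeric) with k-modes matching (categorical).
kproto = KPrototypes(n_clusters=3, init="Cao", random_state=42)
segments = kproto.fit_predict(X, categorical=categorical_idx)

customers["segment"] = segments
print(customers.groupby("segment").size())
```

In a real setting, the resulting segment labels would then be profiled (e.g. by dominant skin type and search behavior) to design segment-specific untact marketing actions.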

Business Application of Convolutional Neural Networks for Apparel Classification Using Runway Image (합성곱 신경망의 비지니스 응용: 런웨이 이미지를 사용한 의류 분류를 중심으로)

  • Seo, Yian;Shin, Kyung-shik
    • Journal of Intelligence and Information Systems / v.24 no.3 / pp.1-19 / 2018
  • A large amount of data is now available for research and business sectors to extract knowledge from. This data can take the form of unstructured data such as audio, text, and images and can be analyzed with deep learning. Deep learning is now widely used for various estimation, classification, and prediction problems; in particular, the fashion business adopts deep learning techniques for apparel recognition, apparel search and retrieval engines, and automatic product recommendation. The core model behind these applications is image classification using Convolutional Neural Networks (CNN). A CNN is made up of neurons that learn parameters such as weights as inputs pass through the network toward the outputs. Its layer structure is well suited to image classification: convolutional layers generate feature maps, pooling layers reduce the dimensionality of those maps, and fully connected layers classify the extracted features. However, most classification models have been trained on online product images taken under controlled conditions, such as the apparel item itself or a professional model wearing it. Such images may not train the model effectively when the goal is to classify street-fashion or walking images, which are taken in uncontrolled situations and involve people's movement and unexpected poses. We therefore propose to train the model with a runway apparel image dataset that captures this mobility, exposing the classifier to far more variable data and improving its adaptation to diverse query images. To achieve both convergence and generalization, we apply transfer learning to our training network. Since transfer learning in CNNs consists of pre-training and fine-tuning stages, we divide training into two steps. First, we pre-train the architecture on the large-scale ImageNet dataset, which contains 1.2 million images in 1,000 categories including animals, plants, activities, materials, instrumentation, scenes, and foods. We use GoogLeNet as the main architecture because it achieved high accuracy with efficiency in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC). Second, we fine-tune the network on our own runway image dataset. Because no publicly available runway dataset existed, we collected 2,426 images from Google Image Search covering 32 major fashion brands, including Anna Molinari, Balenciaga, Balmain, Brioni, Burberry, Celine, Chanel, Chloe, Christian Dior, Cividini, Dolce and Gabbana, Emilio Pucci, Ermenegildo, Fendi, Giuliana Teso, Gucci, Issey Miyake, Kenzo, Leonard, Louis Vuitton, Marc Jacobs, Marni, Max Mara, Missoni, Moschino, Ralph Lauren, Roberto Cavalli, Sonia Rykiel, Stella McCartney, Valentino, Versace, and Yves Saint Laurent. We perform 10-fold experiments to account for the random generation of training data, and the proposed model achieves 67.2% accuracy on the final test. To the best of our knowledge, no previous study has trained an apparel image classifier on a runway image dataset; our contribution is the idea of training with images that capture all possible postures, which we denote as mobility, using our own runway apparel image dataset. Moreover, by applying transfer learning and using the checkpoints and parameters provided by TensorFlow Slim, we reduce the time spent training the classifier to about 6 minutes per experiment. The model can be used in many business applications where the query image is a runway image, a product image, or a street-fashion image: runway query images can support brand search in a mobile application during fashion week, street-style query images can be classified and labeled by brand or style during fashion editorial work, and website query images can be processed by e-commerce services that provide item information or recommend similar items.
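The pre-train/fine-tune recipe described in this abstract can be sketched roughly as follows. The paper used GoogLeNet with TensorFlow Slim; this sketch assumes PyTorch with torchvision (0.13 or later) instead, purely to illustrate replacing the 1000-way ImageNet head with a 32-brand classifier, and the optimizer settings are illustrative.

```python
# Hedged sketch of the pre-train / fine-tune idea with torchvision's GoogLeNet.
import torch
import torch.nn as nn
from torchvision import models

NUM_BRANDS = 32  # the paper's 32 fashion brands

# 1) Pre-training stage: start from ImageNet weights instead of training from scratch.
model = models.googlenet(weights=models.GoogLeNet_Weights.IMAGENET1K_V1)

# 2) Fine-tuning stage: swap the 1000-way ImageNet classifier for a 32-way head
#    and train on the runway image dataset (data loading omitted here).
model.fc = nn.Linear(model.fc.in_features, NUM_BRANDS)

optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

def fine_tune_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One optimization step on a batch of runway images (shape [N, 3, 224, 224])."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```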

An Interactive Cooking Video Query Service System with Linked Data (링크드 데이터를 이용한 인터랙티브 요리 비디오 질의 서비스 시스템)

  • Park, Woo-Ri;Oh, Kyeong-Jin;Hong, Myung-Duk;Jo, Geun-Sik
    • Journal of Intelligence and Information Systems / v.20 no.3 / pp.59-76 / 2014
  • The revolution of smart media such as smartphones, smart TVs, and tablets has made it easy for people to access content and related information anywhere and anytime. These characteristics have changed user behavior from passively watching content to actively engaging with it. Video is a widely used multimedia resource for conveying information effectively, and people not only watch video content but also search for information related to specific objects that appear in it. However, because existing videos provide no information through the content itself, viewers must use extra views or devices to find that information. The interaction between user and media is therefore becoming a major concern, and the demand for direct interaction and instant information is growing: the digital media environment can no longer be a one-way information service that requires users to search the internet manually for what they need. Solving this inconvenience requires an interactive service that supports information exchange between people and video content, or among people themselves. Many researchers have recognized the importance of such interactive services, but only a few services provide interactive video, and with restricted functionality. This research focuses on the cooking domain for an interactive cooking video query service. Cooking continues to attract wide public attention, and with smart media devices users can easily watch cooking videos. Although cooking videos contain rich information such as cooking scenes and explanations for each recipe step, their one-way nature does not allow viewers to interactively obtain more information about the content. Cooking videos have attracted academic research on several cooking-related problems, but only a few studies have addressed interactive services for cooking video, and they are still insufficient to support user interaction. In this paper, we propose an interactive cooking video query service system that uses linked data to provide interaction functionality to users. A linked recipe schema is used to handle the linked data, and the linked data approach allows queries to be constructed systematically when users interact with cooking videos. Because the current version of the schema is not sufficient for user interaction, we add classes, data properties, and relations to it. A web crawler extracts recipe information from allrecipes.com, and all extracted recipe information is transformed into ontology instances by an instance generator we developed. To provide the query function, hundreds of questions from cooking video web sites such as BBC Food, Foodista, and Fine Cooking were collected and analyzed; after question generalization, the questions were clustered into eleven question types and summarized into four categories. The proposed system provides a UI (User Interface) and UX (User Experience) environment that lets users watch cooking videos while obtaining the necessary additional information through an extra information layer. Because responsive web design is applied, the proposed system can be used in both PC and mobile environments. In addition, by employing linked data to provide information that matches the current context, the system enables interaction between user and video on various smart media devices. Two methods are used to evaluate the proposed system. First, through a questionnaire-based method, system usability is measured by comparing the proposed system with an existing web site. Second, the answer accuracy for user interaction is measured to inspect the information to be offered. The experimental results show that the proposed system receives a favorable evaluation and provides accurate answers for user interaction.
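A minimal sketch of how a linked-data query layer of this kind might answer an ingredient question, assuming rdflib and schema.org-style property names as stand-ins for the paper's extended linked recipe schema; the tiny in-memory graph and the recipe itself are invented for illustration.

```python
# Minimal sketch: answering "what ingredients does this recipe need?" over linked data.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF

SCHEMA = Namespace("https://schema.org/")  # stand-in for the paper's linked recipe schema

g = Graph()
recipe = URIRef("http://example.org/recipe/kimchi-stew")
g.add((recipe, RDF.type, SCHEMA.Recipe))
g.add((recipe, SCHEMA.name, Literal("Kimchi Stew")))
g.add((recipe, SCHEMA.recipeIngredient, Literal("kimchi")))
g.add((recipe, SCHEMA.recipeIngredient, Literal("pork belly")))
g.add((recipe, SCHEMA.recipeIngredient, Literal("tofu")))

# The query an interactive layer could build when the viewer asks about ingredients.
query = """
    SELECT ?ingredient WHERE {
        ?r a schema:Recipe ;
           schema:name ?name ;
           schema:recipeIngredient ?ingredient .
        FILTER(?name = "Kimchi Stew")
    }
"""
for row in g.query(query, initNs={"schema": SCHEMA}):
    print(row.ingredient)
```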

A Mobile Landmarks Guide : Outdoor Augmented Reality based on LOD and Contextual Device (모바일 랜드마크 가이드 : LOD와 문맥적 장치 기반의 실외 증강현실)

  • Zhao, Bi-Cheng;Rosli, Ahmad Nurzid;Jang, Chol-Hee;Lee, Kee-Sung;Jo, Geun-Sik
    • Journal of Intelligence and Information Systems / v.18 no.1 / pp.1-21 / 2012
  • In recent years, the mobile phone has evolved extremely quickly. It is now equipped with a high-quality color display, a high-resolution camera, and real-time accelerated 3D graphics, and it includes features such as a GPS sensor and a digital compass. This evolution lets application developers use the power of smartphones to create rich environments offering a wide range of services and exciting possibilities. In outdoor mobile AR research to date, there are many popular location-based AR services such as Layar and Wikitude, but these systems have a major limitation: the AR content is rarely overlaid accurately on the real target. Another line of research is context-based AR using image recognition and tracking, where the AR content is precisely overlaid on the target, but real-time performance is limited by retrieval time and such systems are hard to deploy over large areas. In our work we combine the advantages of location-based AR and context-based AR: the system first finds the surrounding landmarks and then performs recognition and tracking on them. The proposed system consists of two major parts, a landmark browsing module and an annotation module. In the landmark browsing module, users can view augmented virtual information (information media) such as text, pictures, and video in their smartphone viewfinder when they point the phone at a certain building or landmark. Landmark recognition is applied for this purpose, and SURF point-based features are used in the matching process because of their robustness. To keep image retrieval and matching fast enough for real-time tracking, we exploit contextual device information (GPS and digital compass) to select from the database only the nearest landmarks lying in the pointed direction; the query image is matched only against this selected data, which significantly increases matching speed. The second part is the annotation module. Instead of only viewing the augmented information media, users can create virtual annotations based on linked data. Full knowledge of the landmark is not required: users can simply look for an appropriate topic by keyword search in the linked data, which helps the system find the target URI and generate the correct AR content. To recognize target landmarks, images of each selected building or landmark are captured from different angles and distances, a procedure that effectively builds a connection between the real building and the virtual information in the Linked Open Data. In our experiments, the search range in the database is reduced by clustering images into groups according to their coordinates, using a grid-based clustering method together with user location information to restrict the retrieval range. Whereas existing research using clusters and GPS information reports retrieval times of around 70~80 ms, our approach reduces the retrieval time to around 18~20 ms on average, so the total processing time drops from 490~540 ms to 438~480 ms. The performance improvement becomes more pronounced as the database grows, demonstrating that the proposed system is efficient and robust in many cases.
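The two-stage idea (contextual filtering by GPS and compass, then local-feature matching against only the selected landmarks) might be sketched as below. ORB is used here as a freely available substitute for the paper's SURF features (SURF requires a non-free OpenCV build), and the distance and angle thresholds are illustrative assumptions.

```python
# Sketch: filter candidate landmarks by GPS/heading first, then match features
# only against that reduced set, so retrieval stays fast enough for real time.
import math
import cv2

def nearby_candidates(user_lat, user_lon, user_heading_deg, landmarks,
                      max_dist_m=300.0, max_angle_deg=45.0):
    """Keep only landmarks that are close and roughly in the pointed direction."""
    selected = []
    for lm in landmarks:  # each lm: {"name", "lat", "lon", "descriptors"}
        dlat = (lm["lat"] - user_lat) * 111_000.0                       # rough metres per degree
        dlon = (lm["lon"] - user_lon) * 111_000.0 * math.cos(math.radians(user_lat))
        dist = math.hypot(dlat, dlon)
        bearing = math.degrees(math.atan2(dlon, dlat)) % 360.0
        angle_off = min(abs(bearing - user_heading_deg),
                        360.0 - abs(bearing - user_heading_deg))
        if dist <= max_dist_m and angle_off <= max_angle_deg:
            selected.append(lm)
    return selected

def best_match(query_gray, candidates, min_good=20):
    """Match the camera frame against each candidate's stored ORB descriptors."""
    orb = cv2.ORB_create(nfeatures=1000)
    _, query_desc = orb.detectAndCompute(query_gray, None)
    if query_desc is None:
        return None
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    best, best_count = None, 0
    for lm in candidates:
        matches = matcher.match(query_desc, lm["descriptors"])
        good = [m for m in matches if m.distance < 50]
        if len(good) >= min_good and len(good) > best_count:
            best, best_count = lm, len(good)
    return best
```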

Comparisons of Popularity- and Expert-Based News Recommendations: Similarities and Importance (인기도 기반의 온라인 추천 뉴스 기사와 전문 편집인 기반의 지면 뉴스 기사의 유사성과 중요도 비교)

  • Suh, Kil-Soo;Lee, Seongwon;Suh, Eung-Kyo;Kang, Hyebin;Lee, Seungwon;Lee, Un-Kon
    • Asia Pacific Journal of Information Systems / v.24 no.2 / pp.191-210 / 2014
  • As mobile devices that can connect to the Internet have spread and networking has become possible whenever and wherever, the Internet has become central to the dissemination and consumption of news, and the ways news is gathered, disseminated, and consumed have changed greatly. In traditional news media such as magazines and newspapers, expert editors determined which events were worth deploying their staff or freelancers to cover and which stories from newswires or other sources would be printed. They also determined how these stories would be displayed in terms of page placement, space allocation, type size, photographs, and other graphic elements. In turn, readers, as news consumers, judged the importance of news not only by its subject and content but also through subsidiary cues such as its location and presentation. Their judgments reflected an acceptance that these expert editors had the knowledge and ability not only to serve as gatekeepers, determining what news was valuable and important, but also to rank that value and importance. News assembled, dispensed, and consumed in this manner can be called expert-based recommended news. In the era of Internet news, however, the gatekeeping role of expert editors has been greatly diminished. Many Internet news sites offer a huge volume of news on diverse topics from many media companies, in many cases eliminating the expert editor's gatekeeper role. One result has been to turn news users from passive recipients into active searchers for news that reflects their interests or tastes. To cope with information overload and make users' searches more efficient, Internet news sites have introduced numerous recommendation techniques, and recommendation based on popularity is one of the most frequently used. The popularity-based approach shows a list of news items that have been read and shared by many people, based on user behavior such as clicks, evaluations, and sharing; the "most-viewed list," "most-replied list," and "real-time issue" features found on news sites belong to this category. Given that collective intelligence is the premise of popularity-based recommendation, such recommendations would seem highly important, because stories read and shared by many people are presumably more likely to be better than those preferred by only a few. However, these recommendations may reflect a popularity bias: stories judged likely to be popular are placed where they are most noticeable, so they are more likely to be continuously exposed and to remain on popularity-based recommendation lists. Popular news stories are not necessarily the most important ones for readers. Given that many people rely on popularity-based recommended news and that this approach greatly affects patterns of news use, reviewing whether popularity-based recommendations actually reflect important news is an indispensable step. In this study, therefore, the popularity-based news recommendations of an Internet news portal were compared with the top placements of news in printed newspapers, and news users' judgments of which stories were personally and socially important were analyzed. The study was conducted in two stages. In the first stage, content analysis was used to compare the popularity-based news recommendations of an Internet news site with the expert-based news recommendations of printed newspapers. Five days of news stories were collected. The "most-viewed list" of the Naver portal site was used for the popularity-based recommendations; the expert-based recommendations were represented by the top news pieces of five major daily newspapers: the Chosun Ilbo, the JoongAng Ilbo, the Dong-A Daily News, the Hankyoreh Shinmun, and the Kyunghyang Shinmun. In the second stage, along with the stories collected in the first stage, some Internet news stories and some printed-newspaper stories that the two channels did not have in common were randomly extracted and used in online questionnaire surveys asking about the importance of the selected stories. According to our analysis, only 10.81% of the popularity-based news recommendations were similar in content to the expert-based news judgments, so the content of popularity-based recommendations appears to be quite different from that of expert-based recommendations. The differences in importance between the two groups of stories were also analyzed: the two groups did not differ significantly in personal importance, but the expert-based recommendations ranked higher in social importance. This study is theoretically important in examining popularity-based news recommendations from the two viewpoints of collective intelligence and popularity bias and in using both qualitative (content analysis) and quantitative (questionnaire) methods. It also sheds light on the differences between media channels that fulfill an agenda-setting function and Internet news sites that treat news from a market viewpoint.
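The first-stage comparison amounts to measuring how much of the popularity-based list also appears among the expert-based selections; a toy version of that overlap calculation is shown below with invented story identifiers (the study's 10.81% figure came from manual content coding, not from matching identifiers like this).

```python
# Toy illustration of comparing a popularity-based list with expert-based front pages.
popular_stories = {"story_a", "story_b", "story_c", "story_d"}  # hypothetical "most-viewed" items
expert_stories = {"story_c", "story_e", "story_f", "story_g"}   # hypothetical top newspaper items

shared = popular_stories & expert_stories
overlap_rate = len(shared) / len(popular_stories)  # share of popular items also chosen by editors
print(f"{overlap_rate:.2%} of popularity-based stories match expert selections")
```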

A Study of Performance Analysis on Effective Multiple Buffering and Packetizing Method of Multimedia Data for User-Demand Oriented RTSP Based Transmissions Between the PoC Box and a Terminal (PoC Box 단말의 RTSP 운용을 위한 사용자 요구 중심의 효율적인 다중 수신 버퍼링 기법 및 패킷화 방법에 대한 성능 분석에 관한 연구)

  • Bang, Ji-Woong;Kim, Dae-Won
    • Journal of Korea Multimedia Society / v.14 no.1 / pp.54-75 / 2011
  • PoC (Push-to-talk over Cellular) is an integrated technology covering group voice calls, video calls, and Internet-based multimedia services. If a PoC user cannot participate in a PoC session for reasons such as an emergency or a drained battery, the user can rely on the PoC Box, which plays a role similar to the MM Box in MMS (Multimedia Messaging Service). The RTSP (Real-Time Streaming Protocol) method is recommended for transmission sessions between the PoC Box and a terminal. Existing VOD services use wired networks and therefore transmit large RTSP packets, whereas the PoC service operates in wireless communication environments. Packet loss in wired environments is relatively lower than in wireless environments, so in the PoC service buffering latency arises from play-out delay, that is, asynchronous playback of audio and video content. These problems make it difficult for users to find the information they want while the media content is playing. In this paper, the following techniques and methods are proposed and their performance and superiority verified through testing: a cross-over dual reception buffering technique, an advance-partition multi-reception buffering technique, and an on-demand multi-reception buffering technique, designed to let users pick up information effectively from media transmitted over RTSP within a short time when searching media and to reduce playback delay; and a same-priority packetization transmission method and a priority-based packetization transmission method for packetizing media data for transmission. Simulation-based functional evaluation showed that, with respect to media retrieval behavior, the proposed multi-reception buffering and packetization methods outperform the existing single-reception buffering method by 6-9 points in terms of effectiveness and quality. Among them, the on-demand multi-reception buffering technique combined with the same-priority packetization transmission method handles users' media search requests most promptly, scoring 3-24 points higher than the other combinations. In addition, users could find the information they wanted much more quickly, since a large amount of information is received within a short, focused media retrieval period.
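A minimal sketch of the priority-based packetization idea, assuming that playback-critical audio packets should leave the send queue before video packets; the packet structure and priority values are invented for illustration and are not the paper's implementation.

```python
# Illustrative priority-based packetization queue (not the paper's actual implementation).
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class MediaPacket:
    priority: int                 # lower value = sent earlier (e.g. audio before video)
    timestamp_ms: int
    payload: bytes = field(compare=False)
    kind: str = field(compare=False, default="video")

class PriorityPacketizer:
    """Queue that releases high-priority media packets first for RTSP transmission."""
    def __init__(self):
        self._queue = []

    def push(self, packet: MediaPacket) -> None:
        heapq.heappush(self._queue, packet)

    def pop_next(self) -> MediaPacket:
        return heapq.heappop(self._queue)

# Audio is given priority 0 and video priority 1, so playback-critical audio goes out first.
pkt_queue = PriorityPacketizer()
pkt_queue.push(MediaPacket(priority=1, timestamp_ms=40, payload=b"\x00" * 1200, kind="video"))
pkt_queue.push(MediaPacket(priority=0, timestamp_ms=40, payload=b"\x00" * 160, kind="audio"))
print(pkt_queue.pop_next().kind)  # -> "audio"
```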

Review on Usefulness of EPID (Electronic Portal Imaging Device) (EPID (Electronic Portal Imaging Device)의 유용성에 관한 고찰)

  • Lee, Choong Won;Park, Do Keun;Choi, A Hyun;Ahn, Jong Ho;Song, Ki Weon
    • The Journal of Korean Society for Radiation Therapy / v.25 no.1 / pp.57-67 / 2013
  • Purpose: More and more EPID-equipped devices are now in use, replacing the film that used to be used for checking patient set-up and for dosimetry during radiation therapy. Accordingly, this article evaluates the accuracy of position verification and the usefulness of dosimetry when using an electronic portal imaging device. Materials and Methods: Fifty publications were retrieved from the Korean Society of Radiotherapeutic Technology, the Korean Society for Radiation Oncology, and PubMed using the search terms "EPID", "Portal dosimetry", "Portal image", "Dose verification", "Quality control", "Cine mode", "Quality assurance", and "In vivo dosimetry". The usefulness of EPID was analyzed by classifying the material into the history of EPID and dosimetry, set-up verification, and the characteristics of EPID. Results: EPID developed from the first generation (liquid-filled ionization chamber), through the second generation (camera-based fluoroscopy), to the third generation (amorphous-silicon EPID). Imaging modes can be divided into EPID mode, cine mode, and integrated mode. When evaluating absolute dose accuracy, EPID showed errors within 1% and EDR2 film within 3%, confirming that EPID measures errors more accurately than film. When the dose distribution of the reference plane calculated by the treatment planning system was gamma-analyzed against the planes measured with EDR2 film and EPID, both film and EPID showed fewer than 2% of pixels with gamma values exceeding 1 (γ > 1) under both the 3%/3 mm and 2%/2 mm criteria. Comparing workloads, a full-course IMRT QA took approximately 110 minutes with EDR2 film and approximately 55 minutes with EPID. Conclusion: EPID can easily replace the complicated and cumbersome film and ionization chamber conventionally used for dosimetry and set-up verification, and it proved to be a very efficient and accurate dosimetry device for quality assurance of IMRT (intensity-modulated radiation therapy). Because cine-mode imaging with EPID allows tumors to be located in real time without additional dose, in the lung and liver, which move with the diaphragm, and in rectal cancer patients with unstable positioning, it may help deliver the most optimal radiotherapy for patients.
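For reference, the gamma analysis mentioned in the results is normally defined as in Low et al.'s standard formulation, restated below as background (it is not a formula taken from the reviewed articles): D_e is the evaluated (measured) dose, D_r the reference (planned) dose, Δd_M the distance-to-agreement criterion (e.g. 3 mm), and ΔD_M the dose-difference criterion (e.g. 3%); a point passes when γ ≤ 1.

```latex
% Standard gamma-index definition underlying the 3%/3 mm and 2%/2 mm criteria
\Gamma(\mathbf{r}_e,\mathbf{r}_r)
  = \sqrt{\frac{\lVert \mathbf{r}_r-\mathbf{r}_e\rVert^{2}}{\Delta d_M^{2}}
        + \frac{\bigl(D_r(\mathbf{r}_r)-D_e(\mathbf{r}_e)\bigr)^{2}}{\Delta D_M^{2}}},
\qquad
\gamma(\mathbf{r}_e)=\min_{\mathbf{r}_r}\,\Gamma(\mathbf{r}_e,\mathbf{r}_r)
```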


An Embedding/Extracting Method of Audio Watermark Information for High Quality Stereo Music (고품질 스테레오 음악을 위한 오디오 워터마크 정보 삽입/추출 기술)

  • Bae, Kyungyul
    • Journal of Intelligence and Information Systems / v.24 no.2 / pp.21-35 / 2018
  • Since the introduction of MP3 players, CD recordings have gradually vanished and the music consumption environment has shifted to mobile devices. The introduction of smart devices has further increased the use of music through the playback, mass storage, and search functions integrated into smartphones and tablets. When MP3 players first appeared, the bitrate of compressed music content was generally 128 Kbps, but as demand for higher quality grew, 384 Kbps content appeared, and recently music in the lossless FLAC (Free Lossless Audio Codec) format has become popular. The download services of many Korean music sites are divided into unlimited downloads with technical protection and limited downloads without technical protection. Digital Rights Management (DRM) is used as the technical protection measure for unlimited downloads, but such music can only be played on authenticated devices with DRM installed; even music purchased by the user cannot be used on other devices. Conversely, for music that is limited in quantity but not technically protected, there is no way to act against anyone who redistributes it, and for high-quality formats such as FLAC the loss is greater. In this paper, the author proposes an audio watermarking technology for copyright protection of high-quality stereo music. Two kinds of information, "Copyright" and "Copy_free", are generated using a turbo code. Each watermark consists of 9 bytes (72 bits); when the turbo code is applied for error correction, the amount of information to be inserted increases to 222 bits. The 222-bit watermark is then expanded to 1024 bits to be robust against additional errors and is finally used as the watermark inserted into the stereo music. The turbo code can recover the raw data when less than 15% of the code is damaged by an attack on the watermarked content; the expansion to 1024 bits further increases the probability of recovering the 222 bits from damaged content, making the watermark itself more resistant to attack. The proposed algorithm uses quantization in the DCT domain so that the watermark can be detected efficiently and the SNR is improved when the stereo music is converted to mono. As a result, the SNR exceeded 40 dB on average, an improvement of more than 10 dB over traditional quantization methods; this is a very significant result, corresponding to roughly a tenfold relative improvement in sound quality. In addition, the watermark can be extracted from music samples shorter than one second, and it was completely extracted from all sub-one-second samples even after MP3 compression at a bitrate of 128 Kbps, whereas the conventional quantization method largely fails to extract the watermark even from samples roughly ten times as long (about 10 seconds). Since the watermark embedded into the music is 72 bits long, it provides sufficient capacity for the information a music service needs: $2^{72}$ can identify about $4.7\times10^{21}$ items, enough to distinguish every piece of music distributed worldwide, so it can serve as an identifier and be used for copyright protection of high-quality music services. The proposed algorithm can be applied not only to high-quality audio but also to the development of watermarking algorithms for other multimedia such as UHD (Ultra High Definition) TV and high-resolution images. Moreover, as digital devices advance, users are demanding high-quality music, and artificial intelligence assistants are arriving together with high-quality music and streaming services; the results of this study can be used to protect the rights of copyright holders in these industries.
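A heavily simplified sketch of the quantization-in-DCT embedding idea follows; the frame length, quantization step, and carrier coefficient are illustrative assumptions, and the paper's turbo coding, 1024-bit expansion, and stereo-to-mono handling are omitted.

```python
# Toy quantization-index-modulation style embedding of watermark bits into DCT coefficients.
import numpy as np
from scipy.fftpack import dct, idct

FRAME = 1024   # samples per frame (assumption)
STEP = 0.05    # quantization step for the carrier coefficient (assumption)
COEF = 10      # index of the mid-frequency DCT coefficient that carries one bit (assumption)

def embed_bits(signal: np.ndarray, bits: list[int]) -> np.ndarray:
    out = signal.astype(np.float64).copy()
    for i, bit in enumerate(bits):
        frame = out[i * FRAME:(i + 1) * FRAME]
        if len(frame) < FRAME:
            break
        c = dct(frame, norm="ortho")
        # Quantize the carrier coefficient to an even or odd lattice point depending on the bit.
        q = np.round(c[COEF] / STEP)
        if int(q) % 2 != bit:
            q += 1
        c[COEF] = q * STEP
        out[i * FRAME:(i + 1) * FRAME] = idct(c, norm="ortho")
    return out

def extract_bits(signal: np.ndarray, n_bits: int) -> list[int]:
    bits = []
    for i in range(n_bits):
        frame = signal[i * FRAME:(i + 1) * FRAME].astype(np.float64)
        c = dct(frame, norm="ortho")
        bits.append(int(np.round(c[COEF] / STEP)) % 2)  # read back the lattice parity
    return bits
```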

Noise-robust electrocardiogram R-peak detection with adaptive filter and variable threshold (적응형 필터와 가변 임계값을 적용하여 잡음에 강인한 심전도 R-피크 검출)

  • Rahman, MD Saifur;Choi, Chul-Hyung;Kim, Si-Kyung;Park, In-Deok;Kim, Young-Pil
    • Journal of the Korea Academia-Industrial cooperation Society / v.18 no.12 / pp.126-134 / 2017
  • There have been numerous studies on extracting the R-peak from electrocardiogram (ECG) signals. However, most detection methods are complicated to implement in a real-time portable electrocardiograph and have the disadvantage of requiring a large amount of computation. R-peak detection requires pre- and post-processing to handle baseline drift and to remove noise from the commercial power supply in the ECG data. Adaptive filtering is widely used for R-peak detection, but the R-peak cannot be detected when the input falls below a threshold value; moreover, noise can lead to an erroneous threshold, causing problems in handling the P-peak and T-peak values. We propose a robust R-peak detection algorithm with low complexity and simple computation to solve these problems. The proposed scheme removes baseline drift in the ECG signal with an adaptive filter to resolve the difficulties of threshold extraction, and it automatically derives an appropriate threshold from the minimum and maximum values of the filtered ECG signal. To detect the R-peak from the ECG signal, we propose a threshold neighborhood search technique. Experiments confirmed the improved R-peak detection accuracy of the proposed method and showed a detection speed suitable for a mobile system thanks to the reduced amount of computation. The experimental results show that heart rate detection accuracy and sensitivity were very high (about 100%).
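The overall pipeline (baseline removal, a threshold derived automatically from the filtered signal's minimum and maximum, then peak picking) might be sketched as below; the median-filter stand-in for the adaptive filter, the sampling rate, and the threshold fraction are illustrative guesses, not the paper's parameters.

```python
# Simplified R-peak detection: remove baseline drift, derive a threshold from the
# filtered signal's range, then search for peaks above it.
import numpy as np
from scipy.signal import find_peaks, medfilt

FS = 360  # sampling rate in Hz (assumption, e.g. MIT-BIH recordings)

def detect_r_peaks(ecg: np.ndarray) -> np.ndarray:
    # Baseline drift estimate via a wide median filter (a simple stand-in for the
    # paper's adaptive filter), subtracted from the raw signal.
    baseline = medfilt(ecg, kernel_size=int(0.6 * FS) | 1)  # odd kernel length required
    detrended = ecg - baseline

    # Variable threshold derived automatically from the filtered signal's min/max.
    threshold = detrended.min() + 0.6 * (detrended.max() - detrended.min())

    # R-peaks must clear the threshold and be at least 200 ms apart (refractory period).
    peaks, _ = find_peaks(detrended, height=threshold, distance=int(0.2 * FS))
    return peaks
```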

Method Development for the Profiling Analysis of Endogenous Metabolites by Accurate-Mass Quadrupole Time-of-Flight (Q-TOF) LC/MS (LC/TOFMS를 이용한 생체시료의 내인성 대사체 분석법 개발)

  • Lee, In-Sun;Kim, Jin-Ho;Cho, Soo-Yeul;Shim, Sun-Bo;Park, Hye-Jin;Lee, Jin-Hee;Lee, Ji-Hyun;Hwang, In-Sun;Kim, Sung-Il;Lee, Jung-Hee;Cho, Su-Yeon;Choi, Don-Woong;Cho, Yang-Ha
    • Journal of Food Hygiene and Safety / v.25 no.4 / pp.388-394 / 2010
  • Metabolomics aims at the comprehensive qualitative and quantitative analysis of wide arrays of endogenous metabolites in biological samples. It has shown particular promise in toxicology, drug development, functional genomics, systems biology, and clinical diagnosis. In this study, a high-resolution mass spectrometry technique, time-of-flight (TOF) MS, was validated for the investigation of amino acids, sugars, and fatty acids. Rat urine and serum samples were extracted with each of the selected solvents (50% acetonitrile, 100% acetonitrile, acetone, methanol, water, and ether). We established an optimized liquid chromatography/time-of-flight mass spectrometry (LC/TOFMS) system and selected appropriate columns, mobile phases, fragment energies, and collision energies capable of detecting 17 metabolites. The spectral data collected from LC/TOFMS were tested by ANOVA. The results obtained with the LC/TOFMS technique indicated that (1) the MS and MS/MS parameters were optimized and the most abundant product ion of each metabolite was selected for monitoring, and (2) based on the design-of-experiments analysis, methanol yielded the optimal extraction efficiency. The results of this study are therefore expected to be useful in endogenous metabolite research as a validated SOP for endogenous amino acids, sugars, and fatty acids.
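The solvent comparison step, testing whether extraction efficiency differs across solvents, can be illustrated with a one-way ANOVA in scipy; the peak-area values below are entirely invented.

```python
# Toy one-way ANOVA comparing metabolite peak areas across extraction solvents
# (values are invented; the study analyzed LC/TOFMS spectra of rat urine and serum).
from scipy import stats

peak_area = {
    "methanol": [10.2, 10.8, 11.1],
    "acetonitrile": [7.9, 8.3, 8.1],
    "acetone": [6.5, 6.9, 7.0],
}

f_stat, p_value = stats.f_oneway(*peak_area.values())
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")  # a small p suggests solvents differ in efficiency
```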