• Title/Summary/Keyword: User Evaluation (사용자 평가)


Development of A Material Flow Model for Predicting Nano-TiO2 Particles Removal Efficiency in a WWTP (하수처리장 내 나노 TiO2 입자 제거효율 예측을 위한 물질흐름모델 개발)

  • Ban, Min Jeong;Lee, Dong Hoon;Shin, Sangwook;Lee, Byung-Tae;Hwang, Yu Sik;Kim, Keugtae;Kang, Joo-Hyon
    • Journal of Wetlands Research / v.24 no.4 / pp.345-353 / 2022
  • A wastewater treatment plant (WWTP) is a major gateway for engineered nanoparticles (ENPs) entering water bodies. However, existing studies have reported that many WWTPs exceed the No Observed Effect Concentration (NOEC) for ENPs in the effluent, and thus they need to be designed or operated to control ENPs more effectively. Understanding and predicting ENP behavior in the unit processes and the whole process train of a WWTP is the key first step in developing strategies for controlling ENPs using a WWTP. This study aims to provide a modeling tool for predicting the behavior and removal efficiency of ENPs in a WWTP as a function of process characteristics and major operating conditions. In the developed model, four unit processes for water treatment (primary clarifier, bioreactor, secondary clarifier, and tertiary treatment unit) were considered. Additionally, the model simulates the sludge treatment system as a single process that integrates multiple unit processes, including thickeners, digesters, and dewatering units. The simulated ENP was nano-sized TiO2 (nano-TiO2), assuming that its behavior in a WWTP is dominated by attachment to suspended solids (SS), while dissolution and transformation are insignificant. The attachment of nano-TiO2 to SS was incorporated into the model equations using the apparent solid-liquid partition coefficient (Kd) under the assumption of equilibrium between the solid and liquid phases, and a steady-state condition for nano-TiO2 was assumed. Furthermore, an MS Excel-based user interface was developed to provide a user-friendly environment for the nano-TiO2 removal efficiency calculations. Using the developed model, a preliminary simulation was conducted to examine how the solids retention time (SRT), a major operating variable, affects the removal efficiency of nano-TiO2 particles in a WWTP.
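
A minimal sketch of the core calculation described above, assuming a single completely mixed unit at steady state: under the equilibrium Kd assumption, the fraction of nano-TiO2 attached to SS is Kd·SS/(1 + Kd·SS), and the attached fraction is removed in proportion to SS removal. This is an illustration only, not the authors' Excel-based model; the parameter values are hypothetical.

```python
# Minimal sketch (not the authors' Excel model): equilibrium Kd partitioning of
# nano-TiO2 between the liquid phase and suspended solids in one unit at steady state.

def nano_tio2_removal(c_in_ug_L, ss_mg_L, kd_L_mg, ss_removal):
    """Estimate nano-TiO2 removal in one treatment unit.

    c_in_ug_L  : influent total nano-TiO2 concentration (ug/L)
    ss_mg_L    : suspended solids concentration (mg/L)
    kd_L_mg    : apparent solid-liquid partition coefficient Kd (L/mg)
    ss_removal : fraction of SS removed by the unit (0-1)
    """
    # Fraction of nano-TiO2 attached to SS under solid-liquid equilibrium
    f_particulate = kd_L_mg * ss_mg_L / (1.0 + kd_L_mg * ss_mg_L)
    # Attached particles leave with the settled solids; the dissolved-phase
    # fraction passes through (dissolution/transformation neglected)
    removed = c_in_ug_L * f_particulate * ss_removal
    c_out = c_in_ug_L - removed
    return c_out, removed / c_in_ug_L

# Example: hypothetical primary clarifier with 60% SS removal
c_out, efficiency = nano_tio2_removal(c_in_ug_L=100.0, ss_mg_L=200.0,
                                      kd_L_mg=0.05, ss_removal=0.6)
print(f"effluent: {c_out:.1f} ug/L, removal: {efficiency:.0%}")
```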

The development of resources for the application of 2020 Dietary Reference Intakes for Koreans (2020 한국인 영양소 섭취기준 활용 자료 개발)

  • Hwang, Ji-Yun;Kim, Yangha;Lee, Haeng Shin;Park, EunJu;Kim, Jeongseon;Shin, Sangah;Kim, Ki Nam;Bae, Yun Jung;Kim, Kirang;Woo, Taejung;Yoon, Mi Ock;Lee, Myoungsook
    • Journal of Nutrition and Health / v.55 no.1 / pp.21-35 / 2022
  • The recommended meal composition allows the general public to organize meals using the number of servings of foods from each of the six food groups (grains, meat·fish·eggs·beans, vegetables, fruits, milk·dairy products and oils·sugars) to meet the Dietary Reference Intakes for Koreans (KDRIs) without calculating complex nutritional values. Through an integrated analysis of data from the 6th to 7th Korean National Health and Nutrition Examination Surveys (2013-2018), representative foods for each food group were selected, and the amounts of representative foods per person were derived based on energy. Based on the estimated energy requirement (EER) by age and gender from the KDRIs, a total of 12 kinds of diets were suggested by differentiating meal compositions by age (1-2, 3-5, 6-11, 12-18, 19-64, 65-74 and ≥ 75 years) and gender. The 2020 Food Balance Wheel included the sixth food group (oils and sugars) to raise public awareness and avoid confusion in the practical use of the model by industries or individuals in reducing the consistently increasing intake of oils and sugars. To promote the everyday use of the Food Balance Wheel and the recommended meal compositions among the general public, a poster of the Food Balance Wheel was created in five languages (Korean, English, Japanese, Vietnamese and Chinese) along with card news. A survey was conducted to provide a basis for categorizing nutritional problems by life cycle and developing customized web-based messages for the public. Based on the survey results, two types of card news were produced for the general public and for youth. Additionally, an educational program was developed through a series of processes, such as prioritizing educational topics, setting educational goals for each stage, and creating a detailed educational system chart and teaching-learning plans for the development of educational materials and media.

A study on the derivation and evaluation of flow duration curve (FDC) using deep learning with a long short-term memory (LSTM) networks and soil water assessment tool (SWAT) (LSTM Networks 딥러닝 기법과 SWAT을 이용한 유량지속곡선 도출 및 평가)

  • Choi, Jung-Ryel;An, Sung-Wook;Choi, Jin-Young;Kim, Byung-Sik
    • Journal of Korea Water Resources Association / v.54 no.spc1 / pp.1107-1118 / 2021
  • Climate change brought on by global warming has increased the frequency of floods and droughts on the Korean Peninsula, along with the casualties and physical damage resulting from them. Preparing for and responding to these water disasters requires national-level planning for water resource management. In addition, watershed-level management of water resources requires flow duration curves (FDC) derived from continuous data based on long-term observations. Traditionally, physical rainfall-runoff models have been widely used in water resource studies to generate duration curves. However, a number of recent studies have explored the use of data-based deep learning techniques for runoff prediction. Physical models produce hydraulically and hydrologically reliable results, but they require a high level of understanding and may take longer to operate. Data-based deep learning techniques, on the other hand, offer the benefits of lower input data requirements and shorter operation times; however, the relationship between input and output data is processed in a black box, making it impossible to consider hydraulic and hydrological characteristics. This study chose one model from each category. For the physical model, long-term data without gaps were generated through parameter calibration of the Soil and Water Assessment Tool (SWAT), a physical model whose applicability has been tested in Korea and other countries. These data were used as training data for the Long Short-Term Memory (LSTM) network, a data-based deep learning technique. An analysis of the time-series data found that, during the calibration period (2017-2018), the Nash-Sutcliffe Efficiency (NSE) and the determination coefficient used for fit comparison were higher for SWAT by 0.04 and 0.03, respectively, indicating that the SWAT results are superior to the LSTM results. In addition, the annual time-series data from the two models were sorted in descending order and the resulting flow duration curves were compared with the duration curve based on the observed flow: the NSE values for the SWAT and LSTM models were 0.95 and 0.91, respectively, and the determination coefficients were 0.96 and 0.92. These findings indicate that both models yield good performance. Even though the LSTM's simulation accuracy in the low-flow sections needs improvement, the LSTM appears widely applicable for deriving flow duration curves in large basins, where model development and operation take a long time due to the vast amount of input data, and in ungauged basins with insufficient input data.
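
As an illustration of the two evaluation steps described above (sorting flows in descending order to obtain a flow duration curve, then scoring the fit with NSE), the following sketch uses hypothetical daily flows; it is not the authors' SWAT or LSTM code.

```python
# Minimal sketch: derive a flow duration curve by sorting daily flows in descending
# order, and compute the Nash-Sutcliffe Efficiency (NSE) between observed and
# simulated duration curves.
import numpy as np

def flow_duration_curve(flows):
    """Return exceedance probabilities (%) and the flows sorted in descending order."""
    q = np.sort(np.asarray(flows))[::-1]
    ranks = np.arange(1, len(q) + 1)
    exceedance = 100.0 * ranks / (len(q) + 1)   # Weibull plotting position
    return exceedance, q

def nse(obs, sim):
    """Nash-Sutcliffe Efficiency: 1 - sum((obs-sim)^2) / sum((obs-mean(obs))^2)."""
    obs, sim = np.asarray(obs), np.asarray(sim)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

# Hypothetical daily flows (m^3/s) for one year
rng = np.random.default_rng(0)
observed = rng.lognormal(mean=2.0, sigma=1.0, size=365)
simulated = observed * rng.normal(1.0, 0.1, size=365)   # stand-in model output

p_obs, fdc_obs = flow_duration_curve(observed)
p_sim, fdc_sim = flow_duration_curve(simulated)
print(f"NSE between observed and simulated FDC: {nse(fdc_obs, fdc_sim):.3f}")
```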

Development of deep learning structure for complex microbial incubator applying deep learning prediction result information (딥러닝 예측 결과 정보를 적용하는 복합 미생물 배양기를 위한 딥러닝 구조 개발)

  • Hong-Jik Kim;Won-Bog Lee;Seung-Ho Lee
    • Journal of IKEEE / v.27 no.1 / pp.116-121 / 2023
  • In this paper, we develop a deep learning structure for a complex microbial incubator that applies deep learning prediction result information. The proposed complex microbial incubator consists of pre-processing of complex microbial data, conversion of the complex microbial data structure, design of a deep learning network, training of the designed deep learning network, and development of a GUI applied to the prototype. In the complex microbial data preprocessing, one-hot encoding is applied to the amounts of molasses, nutrients, plant extract, salt, etc. required for microbial culture, and min-max normalization is applied to the pH measured as a result of the culture and to the microbial cell count. In the complex microbial data structure conversion, the preprocessed data are converted into a graph structure by connecting the water temperature and the number of microbial cells, and then expressed as an adjacency matrix and attribute information to be used as input data for the deep learning network. In the deep learning network design, a graph convolutional network specialized for graph structures is designed to learn the complex microbial data. The designed network uses a cosine loss function so that training proceeds in the direction of minimizing the error. The GUI applied to the prototype shows the target pH (3.8 or less) and cell count (10⁸ or more) of the complex microorganisms in an order suitable for culturing according to the water temperature selected by the user. To evaluate the performance of the proposed microbial incubator, experiments conducted by authorized testing institutes showed an average pH of 3.7 and a complex microorganism cell count of 1.7 × 10⁸. Therefore, the effectiveness of the deep learning structure for the complex microbial incubator applying the deep learning prediction result information proposed in this paper was demonstrated.
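
The following sketch illustrates the kind of graph convolutional network with a cosine loss described in the abstract. The layer sizes, feature layout, and training data are assumptions, not the authors' exact network.

```python
# Minimal sketch (assumed architecture): a graph convolutional layer over an
# adjacency matrix of culture-condition nodes, trained with a cosine loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GCNLayer(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        # Symmetrically normalized adjacency with self-loops: D^-1/2 (A+I) D^-1/2
        a_hat = adj + torch.eye(adj.size(0))
        d_inv_sqrt = torch.diag(a_hat.sum(dim=1).pow(-0.5))
        return F.relu(self.linear(d_inv_sqrt @ a_hat @ d_inv_sqrt @ x))

class IncubatorGCN(nn.Module):
    """Predicts normalized [pH, cell count] per node from node attributes."""
    def __init__(self, in_dim, hidden_dim=16, out_dim=2):
        super().__init__()
        self.gc1 = GCNLayer(in_dim, hidden_dim)
        self.out = nn.Linear(hidden_dim, out_dim)

    def forward(self, x, adj):
        return self.out(self.gc1(x, adj))

# Hypothetical data: 5 nodes (culture conditions), 8 attributes each
x = torch.rand(5, 8)                   # one-hot ingredients + min-max scaled values
adj = (torch.rand(5, 5) > 0.5).float()
adj = ((adj + adj.t()) > 0).float()    # make the graph undirected
target = torch.rand(5, 2)              # normalized [pH, cell count]

model = IncubatorGCN(in_dim=8)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(100):
    pred = model(x, adj)
    # Cosine loss: minimize 1 - cosine similarity between prediction and target
    loss = (1.0 - F.cosine_similarity(pred, target, dim=1)).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```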

Research on Making a Disaster Situation Management Intelligent Based on User Demand (사용자 수요 기반의 재난 상황관리 지능화에 관한 연구)

  • Seon-Hwa Choi;Jong-Yeong Son;Mi-Song Kim;Heewon Yoon;Shin-Hye Ryu;Sang Hoon Yoon
    • Korean Journal of Remote Sensing / v.39 no.5_2 / pp.811-825 / 2023
  • In line with the government's push to promote intelligent administrative services through data utilization, the disaster and safety management field is also pursuing data-driven policies and building systems for responding efficiently to new and complex disasters and for establishing scientific and systematic safety policies. However, it is still difficult to grasp the on-site situation quickly and accurately when a disaster occurs, and simply displaying vast amounts of data is not sufficient to provide the information needed for situation judgment and response. This paper focuses on deriving specific needs to make disaster situation management work more intelligent and efficient by utilizing intelligent information technology. Through individual interviews with workers at the Central Disaster and Safety Status Control Center, we investigated the scope of disaster situation management work and how practitioners use the main functions of the geographic information system (GIS)-based integrated situation management system. In addition, the data built into the system were reclassified by purpose and characteristics to assess the status of the data in the GIS-based integrated situation management system. To derive what is needed to make disaster situation management more intelligent and efficient using intelligent information technology, three strategies were established: quickly and accurately identifying on-site situations, making data-based situation judgments, and supporting efficient situation management tasks. Implementation tasks were then defined, and their priorities were determined through analytic hierarchy process (AHP) analysis of task importance. As a result, 24 implementation tasks were derived. The analysis showed that, to make situation management efficient, intelligent information technology is needed for collecting, analyzing, and managing video and sensor data, and for tasks that take a long time or are prone to error when performed by humans, namely collecting situation-related data and preparing reports. We conclude that, among the situation management intelligence strategies, technology development can be pursued first for the strategies with the highest importance scores, namely quickly and accurately identifying on-site situations and supporting efficient situation management work.
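
For readers unfamiliar with the AHP step mentioned above, the following sketch derives priority weights from a pairwise comparison matrix via the principal eigenvector and checks the consistency ratio. The example matrix and strategy names are hypothetical, not the study's survey data.

```python
# Minimal sketch (illustrative): AHP priority weights from a pairwise comparison
# matrix using the principal eigenvector, with a consistency ratio check.
import numpy as np

def ahp_weights(pairwise):
    """Return priority weights and the consistency ratio for a pairwise matrix."""
    a = np.asarray(pairwise, dtype=float)
    n = a.shape[0]
    eigvals, eigvecs = np.linalg.eig(a)
    k = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()                                    # normalized priority weights
    ci = (eigvals[k].real - n) / (n - 1)            # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12}.get(n, 1.12)   # random index (Saaty)
    return w, ci / ri

# Hypothetical comparison of three strategies on Saaty's 1-9 scale
strategies = ["on-site identification", "data-based judgment", "task support"]
matrix = [[1,   3,   5],
          [1/3, 1,   2],
          [1/5, 1/2, 1]]
weights, cr = ahp_weights(matrix)
for name, w in zip(strategies, weights):
    print(f"{name}: {w:.3f}")
print(f"consistency ratio: {cr:.3f}")   # generally acceptable if below 0.1
```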

Accelerometer-based Gesture Recognition for Robot Interface (로봇 인터페이스 활용을 위한 가속도 센서 기반 제스처 인식)

  • Jang, Min-Su;Cho, Yong-Suk;Kim, Jae-Hong;Sohn, Joo-Chan
    • Journal of Intelligence and Information Systems / v.17 no.1 / pp.53-69 / 2011
  • Vision- and voice-based technologies are commonly utilized for human-robot interaction. But it is widely recognized that the performance of vision- and voice-based interaction systems deteriorates by a large margin in real-world situations due to environmental and user variance. Human users need to be very cooperative to get reasonable performance, which significantly limits the usability of vision- and voice-based human-robot interaction technologies. As a result, touch screens are still the major medium of human-robot interaction for real-world applications. To improve the usability of robots for various services, alternative interaction technologies should be developed to complement the problems of vision- and voice-based technologies. In this paper, we propose the use of an accelerometer-based gesture interface as one such alternative, because accelerometers are effective in detecting the movements of the human body, while their performance is not limited by environmental contexts such as lighting conditions or a camera's field of view. Moreover, accelerometers are widely available nowadays in many mobile devices. We tackle the problem of classifying the acceleration signal patterns of the 26 English alphabet letters, which is one of the essential repertoires for realizing robot-based education services. Recognizing 26 English handwriting patterns from accelerometers is a very difficult task to take on because of the large number of pattern classes and the complexity of each pattern. The most difficult similar problem previously undertaken was recognizing the acceleration signal patterns of 10 handwritten digits; most previous studies dealt with pattern sets of 8~10 simple and easily distinguishable gestures useful for controlling home appliances, computer applications, robots, etc. Good features are essential for the success of pattern recognition. To improve the discriminative power over the complex English alphabet patterns, we extracted 'motion trajectories' from the input acceleration signal and used them as the main feature. Investigative experiments showed that classifiers based on trajectory features performed 3%~5% better than those using raw features, e.g., the acceleration signal itself or statistical figures. To minimize the distortion of the trajectories, we applied a simple but effective set of smoothing and band-pass filters. It is well known that acceleration patterns for the same gesture are very different among different performers. To tackle this problem, online incremental learning is applied so that the system adapts to each user's distinctive motion properties. Our system is based on instance-based learning (IBL), where each training sample is memorized as a reference pattern. Brute-force incremental learning in IBL continuously accumulates reference patterns, which is a problem because it not only slows down classification but also degrades recall performance. Regarding the latter phenomenon, we observed a tendency that, as the number of reference patterns grows, some reference patterns contribute more to false positive classifications. Thus, we devised an algorithm for optimizing the reference pattern set based on the positive and negative contribution of each reference pattern. The algorithm is performed periodically to remove reference patterns that have a very low positive contribution or a high negative contribution.
Experiments were performed on 6,500 gesture patterns collected from 50 adults aged 30~50. Each alphabet was performed 5 times per participant using a Nintendo® Wii™ remote. The acceleration signal was sampled at 100 Hz on 3 axes. The mean recall rate over all alphabets was 95.48%. Some alphabets recorded very low recall rates and exhibited very high pairwise confusion rates. Major confusion pairs were D (88%) and P (74%), I (81%) and U (75%), and N (88%) and W (100%). Though W was recalled perfectly, it contributed much to the false positive classification of N. Comparing with major previous results from VTT (96% for 8 control gestures), CMU (97% for 10 control gestures) and Samsung Electronics (97% for 10 digits and a control gesture), we find that the performance of our system is superior in terms of the number of pattern classes and the complexity of the patterns. Using our gesture interaction system, we conducted 2 case studies of robot-based edutainment services. The services were implemented on various robot platforms and mobile devices including the iPhone™. The participating children exhibited improved concentration and reacted actively to the service with our gesture interface. To prove the effectiveness of our gesture interface, a test was taken by the children after they experienced an English teaching service. The test results showed that those who played with the gesture interface-based robot content scored 10% higher than those taught conventionally. We conclude that the accelerometer-based gesture interface is a promising technology for real-world robot-based services and content, complementing the limits of today's conventional interfaces, e.g., touch screens, vision and voice.
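
The following sketch illustrates the kind of periodic reference-pattern pruning described above for instance-based learning. The scoring rule and thresholds are assumptions, not the authors' exact algorithm.

```python
# Minimal sketch (illustrative; scoring rule and thresholds are assumptions):
# periodically prune reference patterns in an instance-based learner according to
# their positive and negative contributions to past classifications.
from dataclasses import dataclass

@dataclass
class ReferencePattern:
    label: str          # e.g. the alphabet this trajectory represents
    trajectory: list    # smoothed motion-trajectory feature
    positive: int = 0   # times it supported a correct classification
    negative: int = 0   # times it supported a false positive

def prune_references(references, min_positive=1, max_negative=5):
    """Keep patterns that help recall and drop those that mostly cause confusion."""
    kept = []
    for ref in references:
        if ref.negative > max_negative and ref.negative > ref.positive:
            continue    # high negative contribution: drop
        if ref.positive + ref.negative >= 10 and ref.positive < min_positive:
            continue    # frequently matched but never helpful: drop
        kept.append(ref)
    return kept

# Usage: after each classification, increment the positive/negative counters on the
# matched reference patterns, then call prune_references() periodically.
```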

Image Quality Evaluation of CsI:Tl and Gd2O2S Detectors in the Indirect-Conversion DR System (간접변환방식 DR장비에서 CsI:Tl과 Gd2O2S의 검출기 화질 평가)

  • Kong, Changgi;Choi, Namgil;Jung, Myoyoung;Song, Jongnam;Kim, Wook;Han, Jaebok
    • Journal of the Korean Society of Radiology / v.11 no.1 / pp.27-35 / 2017
  • The purpose of this study was to investigate the characteristics of CsI:Tl and Gd2O2S detectors with an indirect conversion method in a DR (digital radiography) system, by obtaining images of a thick chest phantom, a medium-thickness thigh phantom, and a thin hand phantom and analyzing the SNR and CNR. As a result of measuring the SNR and CNR according to the thickness of the subject, the SNR and CNR were higher in the CsI:Tl detector than in the Gd2O2S detector when the medium-thickness thigh phantom and the thin hand phantom were scanned. However, with the thick chest phantom, the Gd2O2S detector showed higher SNR at 80~125 kVp and higher CNR at 80~110 kVp than the CsI:Tl detector. Both the SNR and CNR increased as the tube voltage increased. In the medium-thickness thigh phantom, the SNR and CNR of the CsI:Tl detector increased at 40~50 kVp and then decreased as the tube voltage increased, while those of the Gd2O2S detector increased at 40~60 kVp and then decreased as the tube voltage increased. In the thin hand phantom, the SNR and CNR of the CsI:Tl detector decreased at low tube voltages, increased as the tube voltage increased, but decreased again at 100~110 kVp, while the SNR and CNR of the Gd2O2S detector decreased as the tube voltage increased. The MTF of the CsI:Tl detector was 6.02~90.90% higher than that of the Gd2O2S detector at 0.5~3 lp/mm, and its DQE was 66.67~233.33% higher. In conclusion, although the CsI:Tl detector showed higher MTF and DQE values, the cheaper Gd2O2S detector had higher SNR and CNR than the expensive CsI:Tl detector in a certain tube voltage range with the thick chest phantom. For chest X-ray examinations, using the Gd2O2S detector rather than the CsI:Tl detector can therefore yield chest images of excellent quality, which will be useful in practice. Moreover, price/performance should be considered when determining the detector type from the viewpoint of the user.
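
The SNR and CNR comparisons above follow the usual ROI-based definitions, sketched below with hypothetical pixel data; the ROI layout is an assumption, not the study's measurement protocol.

```python
# Minimal sketch (illustrative): SNR and CNR computed from pixel values in signal
# and background regions of interest (ROIs) of a phantom image.
import numpy as np

def snr(roi_signal):
    """Signal-to-noise ratio of a region of interest: mean / standard deviation."""
    roi = np.asarray(roi_signal, dtype=float)
    return roi.mean() / roi.std()

def cnr(roi_signal, roi_background):
    """Contrast-to-noise ratio between a signal ROI and a background ROI."""
    s, b = np.asarray(roi_signal, float), np.asarray(roi_background, float)
    noise = np.sqrt((s.std() ** 2 + b.std() ** 2) / 2.0)
    return abs(s.mean() - b.mean()) / noise

# Hypothetical 50x50-pixel ROIs from a phantom image
rng = np.random.default_rng(1)
signal_roi = rng.normal(loc=1200, scale=30, size=(50, 50))
background_roi = rng.normal(loc=900, scale=35, size=(50, 50))
print(f"SNR: {snr(signal_roi):.1f}, CNR: {cnr(signal_roi, background_roi):.1f}")
```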

Impact of Semantic Characteristics on Perceived Helpfulness of Online Reviews (온라인 상품평의 내용적 특성이 소비자의 인지된 유용성에 미치는 영향)

  • Park, Yoon-Joo;Kim, Kyoung-jae
    • Journal of Intelligence and Information Systems / v.23 no.3 / pp.29-44 / 2017
  • In Internet commerce, consumers are heavily influenced by product reviews written by other users who have already purchased the product. However, as product reviews accumulate, it takes a lot of time and effort for consumers to check the massive number of reviews individually. Moreover, carelessly written product reviews actually inconvenience consumers. Thus, many online vendors provide mechanisms to identify the reviews that customers perceive as most helpful (Cao et al. 2011; Mudambi and Schuff 2010). For example, some online retailers, such as Amazon.com and TripAdvisor, allow users to rate the helpfulness of each review and use this feedback to rank and re-order them. However, many reviews receive only a few feedback votes or none at all, making it hard to identify their helpfulness. Also, since it takes time to accumulate feedback, newly written reviews do not have enough of it; for example, only 20% of the reviews in the Amazon Review Dataset (McAuley and Leskovec, 2013) have more than 5 feedback votes (Yan et al., 2014). The purpose of this study is to analyze the factors affecting the usefulness of online product reviews and to derive a forecasting model that selectively provides product reviews that can be helpful to consumers. To do this, we extracted the various linguistic, psychological, and perceptual elements included in product reviews using text-mining techniques and identified the determinants among these elements that affect the usefulness of product reviews. In particular, considering that the characteristics of product reviews and the determinants of usefulness can differ between apparel products (experiential goods) and electronic products (search goods), the characteristics of the product reviews were compared within each product group and the determinants were established for each. This study used 7,498 apparel product reviews and 106,962 electronic product reviews from Amazon.com. To understand a review text, we first extract linguistic and psychological characteristics from the review texts, such as word count and the levels of emotional tone and analytical thinking embedded in the text, using the widely adopted text analysis software LIWC (Linguistic Inquiry and Word Count). We then explore the descriptive statistics of the review texts for each category and statistically compare their differences using t-tests. Lastly, we perform regression analysis using the data mining software RapidMiner to identify the determinant factors. As a result of comparing and analyzing the product review characteristics of electronic products and apparel products, it was found that reviewers used more words as well as longer sentences when writing reviews for electronic products. As for the content characteristics, electronic product reviews included more analytic words, carried more clout, and related more to cognitive processes (CogProc) than the apparel product reviews, in addition to including more words expressing negative emotions (NegEmo). On the other hand, the apparel product reviews included more personal, authentic, and positive emotions (PosEmo) and perceptual processes (Percept) compared to the electronic product reviews. Next, we analyzed the determinants of product review usefulness in the two product groups.
As a result, in both product groups, product reviews that were perceived as useful had high product ratings from reviewers, contained a larger total number of words and many expressions involving perceptual processes, and expressed fewer negative emotions. In addition, apparel product reviews with many comparative expressions, a low expertise index, and concise content with fewer words per sentence were perceived to be useful. In the case of electronic product reviews, those that were analytical with a high expertise index, and that contained many authentic expressions, cognitive processes, and positive emotions (PosEmo), were perceived to be useful. These findings are expected to help consumers effectively identify useful product reviews in the future.
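
The following sketch illustrates the general feature-extraction-plus-regression workflow described above, using simple hand-rolled text features as stand-ins for the LIWC variables and scikit-learn in place of RapidMiner; the reviews and helpfulness scores are hypothetical.

```python
# Minimal sketch (illustrative; features and data are hypothetical stand-ins for the
# LIWC/RapidMiner pipeline): extract simple text features from reviews and regress
# them on a helpfulness score.
import numpy as np
from sklearn.linear_model import LinearRegression

def text_features(review):
    """Very rough proxies for word count, sentence length, and negative tone."""
    words = review.lower().split()
    sentences = [s for s in review.replace("!", ".").split(".") if s.strip()]
    negative = {"bad", "poor", "broken", "terrible", "worst"}
    return [
        len(words),                                     # word count
        len(words) / max(len(sentences), 1),            # words per sentence
        sum(w.strip(",.") in negative for w in words),  # negative-emotion words
    ]

reviews = [
    "Great jacket, fits well and the fabric feels durable.",
    "Terrible. Broke after one week. Worst purchase ever.",
    "The battery life is excellent and setup was easy. Highly recommended.",
    "Bad screen, poor build quality, would not buy again.",
]
helpfulness = np.array([0.9, 0.4, 0.85, 0.3])           # hypothetical vote ratios

X = np.array([text_features(r) for r in reviews])
model = LinearRegression().fit(X, helpfulness)
print("coefficients:", dict(zip(["words", "words/sentence", "neg_words"],
                                model.coef_.round(3))))
```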

Efficient Topic Modeling by Mapping Global and Local Topics (전역 토픽의 지역 매핑을 통한 효율적 토픽 모델링 방안)

  • Choi, Hochang;Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.23 no.3 / pp.69-94 / 2017
  • Recently, increasing demand for big data analysis has been driving the vigorous development of related technologies and tools. In addition, the development of IT and the increased penetration of smart devices are producing large amounts of data. As a result, data analysis technology is rapidly becoming popular, and attempts to acquire insights through data analysis have been continuously increasing, which means that big data analysis will become more important in various industries for the foreseeable future. Big data analysis is generally performed by a small number of experts and delivered to those who requested the analysis. However, growing interest in big data analysis has stimulated computer programming education and the development of many data analysis programs. Accordingly, the entry barriers to big data analysis are gradually lowering and the technology is spreading, so big data analysis is increasingly expected to be performed by the demanders of the analysis themselves. Along with this, interest in various kinds of unstructured data is continually increasing, and a lot of attention is focused on text data in particular. The emergence of new web-based platforms and techniques has brought about the mass production of text data and active attempts to analyze it, and the results of text analysis have been utilized in various fields. Text mining is a concept that embraces various theories and techniques for text analysis. Among the many text mining techniques used for various research purposes, topic modeling is one of the most widely used and studied. Topic modeling is a technique that extracts the major issues from a large number of documents, identifies the documents corresponding to each issue, and provides the identified documents as a cluster. It is regarded as very useful in that it reflects the semantic elements of the documents. Traditional topic modeling is based on the distribution of key terms across the entire document set; thus, it is essential to analyze the entire set at once to identify the topic of each document. This makes the analysis take a long time when topic modeling is applied to a large number of documents. In addition, it has a scalability problem: the processing time increases exponentially with the number of analysis targets. This problem is particularly noticeable when the documents are distributed across multiple systems or regions. To overcome these problems, a divide-and-conquer approach can be applied to topic modeling, dividing a large number of documents into sub-units and deriving topics by repeatedly applying topic modeling to each unit. This method can be used for topic modeling on a large number of documents with limited system resources and can improve processing speed. It can also significantly reduce analysis time and cost by making it possible to analyze documents in each location without combining all the documents to be analyzed. However, despite these many advantages, the method has two major problems. First, the relationship between the local topics derived from each unit and the global topics derived from the entire document set is unclear; local topics can be identified in each unit, but global topics cannot. Second, a method for measuring the accuracy of the proposed methodology must be established; that is, assuming the global topics are the ideal answer, the deviation of the local topics from the global topics needs to be measured.
Owing to these difficulties, this approach has not been studied as much as other topic modeling approaches. In this paper, we propose a topic modeling approach to solve the above two problems. First, we divide the entire document cluster (the global set) into sub-clusters (local sets) and generate a reduced global set (RGS) that consists of representative documents extracted from each local set. We address the first problem by mapping RGS topics to local topics. Along with this, we verify the accuracy of the proposed methodology by checking whether each document is assigned to the same topic in the global and local results. Using 24,000 news articles, we conduct experiments to evaluate the practical applicability of the proposed methodology. In an additional experiment, we confirmed that the proposed methodology can provide results similar to topic modeling over the entire set, and we also proposed a reasonable method for comparing the results of the two approaches.
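
The following sketch illustrates the divide-and-conquer idea discussed above: fit LDA separately on each local set and map each local topic to its most similar global topic by cosine similarity of the topic-word distributions. The corpus, sub-clusters, and mapping rule are illustrative assumptions, not the authors' exact procedure.

```python
# Minimal sketch (illustrative): local LDA runs mapped onto global topics via
# cosine similarity of topic-word distributions over a shared vocabulary.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "stock market prices rise as investors buy shares",
    "bank interest rates and market outlook for investors",
    "team wins the championship game with a late goal",
    "players and coach celebrate the league title win",
    "new smartphone camera and battery reviewed by users",
    "software update improves phone battery performance",
]
local_sets = [documents[:3], documents[3:]]   # two hypothetical sub-clusters

# Shared vocabulary so topic-word vectors are comparable across runs
vectorizer = CountVectorizer(stop_words="english").fit(documents)
n_topics = 3

def fit_lda(docs):
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=0)
    lda.fit(vectorizer.transform(docs))
    # Normalize rows to get topic-word probability distributions
    return lda.components_ / lda.components_.sum(axis=1, keepdims=True)

global_topics = fit_lda(documents)
for i, local_docs in enumerate(local_sets):
    local_topics = fit_lda(local_docs)
    mapping = cosine_similarity(local_topics, global_topics).argmax(axis=1)
    print(f"local set {i}: local topic -> global topic mapping: {mapping}")
```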

Analysis of shopping website visit types and shopping pattern (쇼핑 웹사이트 탐색 유형과 방문 패턴 분석)

  • Choi, Kyungbin;Nam, Kihwan
    • Journal of Intelligence and Information Systems / v.25 no.1 / pp.85-107 / 2019
  • Online consumers browse products belonging to a particular product line or brand with the intent to purchase, or simply browse a wide range of products without making a purchase. Research on the behavior and purchases of online consumers has progressed steadily, and related services and applications based on consumer behavior data have been developed in practice. In recent years, customization strategies and recommendation systems have been utilized thanks to the development of big data technology, and attempts are being made to optimize users' shopping experiences. However, even with such attempts, it remains very unlikely that an online consumer who visits a website will proceed to the purchase stage. This is because online consumers do not visit a website only to purchase products; they use and browse websites differently according to their shopping motives and purposes. Therefore, analyzing the various types of visits, not only purchase visits, is important for understanding the behavior of online consumers. In this study, we performed a clustering analysis of sessions based on the clickstream data of an e-commerce company in order to explain the diversity and complexity of online consumers' search behavior and to typify it. For the analysis, we converted more than 8 million page-level data points into visit-level sessions, resulting in a total of over 500,000 website visit sessions. For each visit session, 12 characteristics such as page views, duration, search diversity, and page type concentration were extracted for the clustering analysis. Considering the size of the data set, we performed the analysis using the Mini-Batch K-means algorithm, which has advantages in learning speed and efficiency while maintaining clustering performance similar to that of standard K-means. The optimal number of clusters was found to be four, and differences in session characteristics and purchase rates were identified for each cluster. Online consumers visit a website several times, learn about a product, and then decide to purchase. To analyze this purchasing process over several visits, we constructed visit sequence data for each consumer based on the navigation patterns derived from the clustering analysis. The visit sequence data consist of the series of visits leading up to a purchase, where the items constituting each sequence are the cluster labels derived above. We separately constructed sequence data for consumers who made purchases and for consumers who only explored products without purchasing during the same period, and then applied sequential pattern mining to extract frequent patterns from each data set. The minimum support was set to 10%, and each frequent pattern consists of a sequence of cluster labels. While some patterns were common to both data sets, other frequent patterns were derived from only one of them. Through a comparative analysis of the extracted frequent patterns, we found that consumers who made purchases showed a visiting pattern of repeatedly searching for a specific product before deciding to purchase it.
The implication of this study is that we typify the search behavior of online consumers using large-scale clickstream data and analyze the resulting patterns to explain the purchasing process from a data-driven point of view. Most studies on the typology of online consumers have focused on the characteristics of each type and the key factors that distinguish the types. In this study, we typified the behavior of online consumers and further analyzed how the types are ordered into series of search patterns. In addition, online retailers will be able to improve purchase conversion through marketing strategies and recommendations tailored to the various visit types, and to evaluate the effect of such strategies through changes in consumers' visit patterns.
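
The following sketch illustrates the session clustering step described above: Mini-Batch K-means over standardized session features, followed by a per-cluster purchase-rate comparison. The features and data are hypothetical stand-ins for the study's 12 session characteristics.

```python
# Minimal sketch (illustrative; features and data are hypothetical): cluster visit
# sessions with Mini-Batch K-means and compare purchase rates per cluster.
import numpy as np
from sklearn.cluster import MiniBatchKMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
n_sessions = 10_000

# Hypothetical session features: page views, duration (s), search diversity,
# page-type concentration
features = np.column_stack([
    rng.poisson(12, n_sessions),
    rng.exponential(300, n_sessions),
    rng.uniform(0, 1, n_sessions),
    rng.uniform(0, 1, n_sessions),
])
purchased = rng.random(n_sessions) < 0.05        # hypothetical purchase flags

X = StandardScaler().fit_transform(features)
kmeans = MiniBatchKMeans(n_clusters=4, batch_size=1024, random_state=0, n_init=10)
labels = kmeans.fit_predict(X)

for cluster in range(4):
    mask = labels == cluster
    print(f"cluster {cluster}: {mask.sum():>5} sessions, "
          f"purchase rate {purchased[mask].mean():.2%}")
```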