• Title/Summary/Keyword: object-based analysis

Class Classification and Validation of a Musculoskeletal Risk Factor Dataset for Manufacturing Workers (제조업 노동자 근골격계 부담요인 데이터셋 클래스 분류와 유효성 검증)

  • Young-Jin Kang;Jeong, Seok Chan
    • The Journal of Bigdata
    • /
    • v.8 no.1
    • /
    • pp.49-59
    • /
    • 2023
  • The safety and health standards for the manufacturing industry cover a wide range of items, but according to the classification criteria for sickness and accident victims they can be divided into work-related diseases and musculoskeletal disorders. Musculoskeletal disorders occur frequently in manufacturing and can reduce labor productivity and weaken manufacturing competitiveness. In this paper, to detect musculoskeletal risk factors for manufacturing workers, we defined a musculoskeletal workload factor analysis, harmful working postures, and key-point matching, and constructed a dataset for Artificial Intelligence (AI) training. To check the effectiveness of the proposed dataset, AI algorithms such as YOLO, Lite-HRNet, and EfficientNet were used for training and validation. In our experiments, human detection accuracy was 99%, key-point matching accuracy for detected persons was 88% @AP0.5, and the accuracy of working-posture evaluation obtained by integrating the inferred key-point positions was 72.2% for LEGS, 85.7% for NECK, 81.9% for TRUNK, 79.8% for UPPERARM, and 92.7% for LOWERARM. We also discuss the need for further research on deep-learning-based prevention of musculoskeletal disorders.
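
The abstract describes the pipeline only at a high level (person detection, key-point matching, posture evaluation). As a purely illustrative sketch, and not the authors' implementation, the snippet below shows one way inferred key points could be turned into a trunk-posture class by thresholding a joint angle; the key-point names, thresholds, and class labels are assumptions.

```python
import numpy as np

def angle(a, b, c):
    """Angle (degrees) at joint b formed by points a-b-c."""
    v1 = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    v2 = np.asarray(c, dtype=float) - np.asarray(b, dtype=float)
    cosang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

def trunk_posture(keypoints):
    """Hypothetical rule: classify trunk flexion from the shoulder-hip-knee angle.
    `keypoints` maps names to (x, y) image coordinates from a pose estimator."""
    flexion = 180.0 - angle(keypoints["shoulder"], keypoints["hip"], keypoints["knee"])
    if flexion < 20:
        return "upright"
    elif flexion < 60:
        return "moderate bending"
    return "severe bending"

# Example usage with made-up coordinates
kp = {"shoulder": (100, 50), "hip": (100, 150), "knee": (120, 250)}
print(trunk_posture(kp))
```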

GIS-based Estimation of Climate-induced Soil Erosion in Imha Basin (기후변화에 따른 임하댐 유역의 GIS 기반 토양침식 추정)

  • Lee, Khil Ha;Lee, Geun Sang;Cho, Hong Yeon
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.28 no.3D
    • /
    • pp.423-429
    • /
    • 2008
  • The objective of this study is to estimate the potential effects of climate change and land use on soil erosion in mid-eastern Korea. Precipitation simulated by the CCCma climate model for 2030-2050 is used to predict soil erosion, and the results are compared with observations. The simulation results allow a relative comparison of the impact of climate change on soil erosion between the current condition and the predicted future condition. Expected land-use changes driven by socio-economic change and plant growth driven by rising temperature are taken into account in a comprehensive way. Mean precipitation increases by 17.7% (24.5%) for the A2 (B2) scenario during 2030-2050 compared with the observation period (1966-1998). In general, predicted soil erosion for the B2 scenario is larger than that for the A2 scenario. Predicted soil erosion increases by 48%~90% under climate change, except in scenarios 1 and 2. Predicted soil erosion under the influence of temperature-induced faster plant growth, a higher evapotranspiration rate, and the fertilization effect (scenarios 5 and 6) is approximately 25% less than that in scenarios 3 and 4. Based on these results, precipitation and the corresponding soil erosion are likely to increase in the future, and care needs to be taken in the study area.
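
The abstract does not name the erosion model. GIS-based assessments of this kind commonly rely on the (Revised) Universal Soil Loss Equation, so the following is offered only as a reminder of that standard formulation, not as the model actually used here:

```latex
A = R \cdot K \cdot LS \cdot C \cdot P
```

where A is the annual soil loss, R the rainfall erosivity factor, K the soil erodibility factor, LS the slope length and steepness factor, C the cover management factor, and P the support practice factor. In such a formulation, changed precipitation enters through R and land-use or vegetation change through C and P, which is one way scenarios of the kind described above can be compared.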

A Store Recommendation Procedure in Ubiquitous Market for User Privacy (U-마켓에서의 사용자 정보보호를 위한 매장 추천방법)

  • Kim, Jae-Kyeong;Chae, Kyung-Hee;Gu, Ja-Chul
    • Asia pacific journal of information systems
    • /
    • v.18 no.3
    • /
    • pp.123-145
    • /
    • 2008
  • Recently, as information and communication technology has developed, the ubiquitous environment has been discussed from diverse perspectives. A ubiquitous environment is one in which data can be transferred through networks regardless of physical space, virtual space, time, or location. Realizing a ubiquitous environment requires Pervasive Sensing technology, which enables users' data to be recognized without a border between physical and virtual space. In addition, diverse up-to-date technologies are necessary: Context-Awareness technology, which constructs the context around the user by sharing the data obtained through Pervasive Sensing, and linkage technology, which prevents information loss across wired and wireless networks and databases. In particular, Pervasive Sensing is regarded as an essential technology that enables user-oriented services by recognizing users' needs even before they ask. Through these technologies, the ubiquitous environment acquires characteristics such as ubiquity, abundance of data, mutuality, high information density, individualization, and customization. Among them, information density refers to the accessible amount and quality of information, which is stored in bulk with assured quality through Pervasive Sensing technology. Using this, companies can provide personalized contents (or information) to target customers. Above all, there is a growing body of research on recommender systems that provide what customers need even when they do not explicitly state their needs. In a commerce environment, recommender systems are well known for their positive effect of enlarging selling opportunities and reducing customers' search costs, since they find and provide information in advance according to customers' traits and preferences. Recommender systems have proved their usefulness through various methodologies and experiments conducted in many different fields since the mid-1990s. Most research on recommender systems to date has taken products or information in internet or mobile contexts as its object, and there has been little research on recommending an adequate store to customers in a ubiquitous environment. In a ubiquitous environment, customers' behaviors can be tracked even while they are purchasing in an offline marketplace, in the same way as in an online market space. Unlike the existing internet space, in a ubiquitous environment interest is growing in stores that provide information according to customers' traffic lines. In other words, the same product can be purchased in several different stores, and the preferred store may differ among customers according to personal preferences such as the traffic line between stores, location, atmosphere, quality, and price. Krulwich (1997) developed Lifestyle Finder, which recommends a product and a store by using demographic information and purchasing information generated in internet commerce. Also, Fano (1998) created Shopper's Eye, an information-providing system.
It shows information about the store closest to the customer's present location once the customer has sent a to-buy list. Sadeh (2003) developed MyCampus, which recommends appropriate information and a store in accordance with the schedule saved on a customer's mobile device. Keegan and O'Hare (2004) proposed EasiShop, which provides suitable store information, including price, after-sales service, and accessibility, after analyzing the to-buy list and the customer's current location. However, Krulwich (1997), being based on the online commerce context, does not reflect the characteristics of physical space; Keegan and O'Hare (2004) only provide information about stores related to a product; and Fano (1998) does not fully consider the relationship between store preference and the store itself. The most recent of these, Sadeh (2003), experimented on a campus with a recommender system that reflects situation and preference information in addition to the characteristics of the physical space. Yet these studies share a potential problem: they are based on customers' location and preference information, which raises privacy concerns. According to Al-Muhtadi (2002), Beresford and Stajano (2003), and Ren (2006), the primary point of controversy in a ubiquitous environment is the invasion of privacy and personal information. Additionally, as noted by Srivastava (2000), individuals want to remain anonymous to protect their personal information. Therefore, in this paper, we suggest a methodology for recommending stores in a U-market, based on the ubiquitous environment, that does not use personal information, in order to protect personal information and privacy. The main idea behind the suggested methodology is the Feature Matrices model (FM model; Shahabi and Banaei-Kashani, 2003), which uses clusters of customers' similar transaction data and is similar to collaborative filtering. Unlike collaborative filtering, however, this methodology avoids personal information and privacy problems because it does not know exactly who the customer is. The methodology is compared with a single-trait model (vector model) such as visitor logs, to examine the actual improvement in recommendation when context information is used. Since real U-market data are hard to obtain, we experimented with actual data from a real department store, augmented with context information. The U-market recommendation procedure proposed in this paper is divided into four major phases. The first phase collects and preprocesses data to analyze customers' shopping patterns; the traits of shopping patterns are expressed as N-dimensional feature matrices. In the second phase, similar shopping patterns are grouped into clusters and the representative pattern of each cluster is derived; the distance between shopping patterns is calculated by the Projected Pure Euclidean Distance (Shahabi and Banaei-Kashani, 2003). The third phase finds the representative pattern most similar to a target customer, while the customer's shopping information is traced and saved dynamically. In the fourth phase, the next store is recommended based on the physical distance between the stores in the representative pattern and the target customer's present location. In this research, we evaluated the accuracy of the recommendation method on actual data from a department store.
Because of technological difficulties in real-time tracking, we extracted purchasing-related information and added context information to each transaction. As a result, recommendation based on the FM model, which uses both purchasing and context information, was more stable and accurate than that of the vector model. In addition, recommendations became more precise as more shopping information was accumulated. Realistically, because the ubiquitous environment cannot yet be fully realized, we could not reflect every kind of context, but more explicit analysis is expected to become attainable once a practical system is implemented.
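
As a purely illustrative sketch of the four-phase procedure described above (not the authors' implementation): shopping patterns are clustered, a target customer is matched to the nearest representative pattern, and the next store is recommended by physical distance. KMeans and plain Euclidean distance stand in for the paper's clustering step and the Projected Pure Euclidean Distance; all data and store coordinates are hypothetical.

```python
import numpy as np
from sklearn.cluster import KMeans

# Phases 1-2 (sketch): cluster N-dimensional shopping-pattern feature vectors
# and keep each cluster's centroid as its representative pattern.
rng = np.random.default_rng(0)
patterns = rng.random((200, 8))               # 200 customers, 8 pattern features
km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(patterns)
representatives = km.cluster_centers_

# Phase 3 (sketch): match a target customer's (anonymous) running pattern
# to the nearest representative pattern; plain Euclidean distance here.
target = rng.random(8)
rep_idx = int(np.argmin(np.linalg.norm(representatives - target, axis=1)))

# Phase 4 (sketch): among stores favored by that representative pattern,
# recommend the one physically closest to the customer's current location.
stores_of_rep = {0: [(1.0, 2.0), (3.5, 0.5)], 1: [(0.2, 0.8)], 2: [(2.2, 2.2)],
                 3: [(4.0, 1.0)], 4: [(1.5, 3.0)]}          # cluster -> store (x, y)
location = np.array([1.2, 1.8])
candidates = np.array(stores_of_rep[rep_idx])
next_store = candidates[np.argmin(np.linalg.norm(candidates - location, axis=1))]
print("recommended store coordinates:", next_store)
```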

A Study on Detection Methodology for Influential Areas in Social Network using Spatial Statistical Analysis Methods (공간통계분석기법을 이용한 소셜 네트워크 유력지역 탐색기법 연구)

  • Lee, Young Min;Park, Woo Jin;Yu, Ki Yun
    • Journal of Korean Society for Geospatial Information Science
    • /
    • v.22 no.4
    • /
    • pp.21-30
    • /
    • 2014
  • Lately, with the vitalization of various social media, new influentials have secured a large following on social networks. There has been considerable research on these influential people in social networks, but it has limitations with respect to the location information of Location-Based Social Network Services (LBSNS). Therefore, the purpose of this study is to propose a spatial detection methodology and application plan, using spatial statistical analysis methods, for influentials who comment on diverse social and cultural issues in LBSNS. Twitter was used as the data source, and 168,040 Twitter messages were collected in Seoul over a one-month period. In addition, 'politics,' 'economy,' and 'IT' were set as categories, and hot-issue keywords were assigned to each category. An exposure index for detecting influentials with respect to the hot-issue keywords was then derived, and the exposure index for each administrative unit of Seoul was calculated through a spatial join operation. Moreover, an influential index that considers the spatial dependence of the exposure index was derived to extract the influential areas in the top 5% of the index and to analyze their spatial distribution characteristics and spatial correlation. The experimental results demonstrated that the spatial correlation coefficient was relatively high, above 0.3, within the same category, and the correlation between the politics and economy categories was also above 0.3. On the other hand, the correlation between the politics and IT categories was very low at 0.18, and that between the economy and IT categories was also very weak at 0.15. This study is significant in that it characterizes influentials from a spatial-information perspective, and it can be usefully applied in the field of gCRM in the future.
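
As an illustrative sketch of the spatial-join aggregation step only (assuming geopandas; the file names, column names, and the simple tweet count are hypothetical, not the authors' exposure index):

```python
import geopandas as gpd

# Hypothetical inputs: point geometries for tweets matching a hot-issue keyword,
# and polygon geometries for Seoul administrative units identified by 'adm_cd'.
tweets = gpd.read_file("keyword_tweets.geojson")
districts = gpd.read_file("seoul_districts.shp")

# Spatial join: attach to each tweet the district polygon that contains it,
# then count tweets per district as a simple exposure index.
joined = gpd.sjoin(tweets, districts, how="inner", predicate="within")
exposure = (joined.groupby("adm_cd").size()
                  .rename("exposure_index")
                  .reset_index())
districts = districts.merge(exposure, on="adm_cd", how="left")
districts["exposure_index"] = districts["exposure_index"].fillna(0)
print(districts[["adm_cd", "exposure_index"]].head())
```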

Analysis of Image Distortion on Magnetic Resonance Diffusion Weighted Imaging

  • Cho, Ah Rang;Lee, Hae Kag;Yoo, Heung Joon;Park, Cheol-Soo
    • Journal of Magnetics
    • /
    • v.20 no.4
    • /
    • pp.381-386
    • /
    • 2015
  • The purpose of this study is to improve the diagnostic efficiency of clinical examinations by establishing guidelines for more precise imaging, based on a comparative analysis of signal intensity and image distortion depending on the location of the object along the X axis during magnetic resonance diffusion-weighted imaging (MR DWI). We arranged a self-produced phantom of 44 reagent bottles (external diameter 16 mm, height 55 mm) in 4 rows and 11 columns in an acrylic box, with 45 mm between bottle centers, and filled them with water and margarine to simulate fat. We used a 3T Skyra scanner and an 18-channel body array coil. Coronal images were obtained in the right-to-left (RL) direction with a slice thickness of 3 mm, slice gap of 0 mm, field of view (FOV) of 450 × 450 mm², repetition time (TR) of 5000 ms, echo time (TE) of 73/118 ms, matrix of 126 × 126, 15 slices, a scan time of 9 min 45 s, and 3 excitations (NEX), with phase encoding as a diffusion-weighted imaging parameter. For scanning, b-values of 0, 400, and 1,400 s/mm² were used, and a T2 fat-saturation (FS) image was also obtained. We then comparatively analyzed the differences in image distortion and signal intensity depending on the location along the X axis relative to the isocenter of the patient table, using Image J for the image analysis and SPSS v18.0 for the statistics. On the T2 fat-saturation image, there was little difference in image distortion or signal intensity between fat and water, but the average value depending on the X-axis location was statistically significant (p < 0.05). On the DWI images with b-values of 0 and 400, there was no significant difference up to the 2nd column right and left of the center of the patient table; however, signal intensity declined and image distortion appeared from the 3rd column, and both effects worsened rapidly at the 4th column. With a b-value of 1,400, there was little difference in the 1st column right and left of the center of the patient table; however, image distortion appeared from the 2nd column with no change in signal intensity, the signal decreased from the 3rd column, and both signal intensity and image distortion then degraded rapidly. At that point, the outermost of the 11 reagent bottles could not be identified in the image and only 9 bottles were visible, and nothing could be identified from the 5th column. The average value depending on the X-axis location was again statistically significant. On the T2 FS image, image distortion and signal intensity degraded significantly beyond 180 mm from the center of the patient table; on the diffusion-weighted image, they degraded significantly beyond 90 mm and became unidentifiable beyond 180 mm. Therefore, for examinations in which the patient cannot be positioned at the isocenter, care must be taken to obtain images of diagnostic value.
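
For context only, since the abstract does not state it: the standard monoexponential model relating the b-value to the diffusion-weighted signal is

```latex
S(b) = S_0 \, e^{-b \cdot \mathrm{ADC}}
```

where S_0 is the signal at b = 0 s/mm² and ADC is the apparent diffusion coefficient; higher b-values give stronger diffusion weighting and lower signal, which is one reason results are typically examined per b-value, as in the b = 0, 400, and 1,400 s/mm² comparisons above.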

The Evaluation of Crime Prevention Environment for Cultural Heritage using the 3D Visual Exposure Index (3D 시각노출도를 이용한 문화재 범죄예방환경의 평가)

  • Kim, Choong-Sik
    • Journal of the Korean Institute of Traditional Landscape Architecture
    • /
    • v.35 no.1
    • /
    • pp.68-82
    • /
    • 2017
  • Strengthening surveillance is one of the most important factors in the crime prevention environment for cultural heritage, but it is difficult to evaluate and diagnose on site. For this reason, surveillance enhancement has been assessed by digitally modelling the shape of the cultural heritage, the topography, and the trees. The purpose of this study is to develop an evaluation method for the crime prevention environment of cultural heritage using the 3D Visual Exposure index (3DVE), which can quantitatively evaluate surveillance enhancement in three dimensions. For the study, the evaluation factors were divided into natural, organizational, mechanical, and integrated surveillance. To conduct the analysis, the buildings, terrain, walls, and trees of the study site were modeled in three dimensions, and the analysis program was developed using Unity 3D. Considering a person's working area, surveillance points can be analyzed separately at head and waist height. To verify the feasibility of the 3DVE as an analysis tool, we assessed the crime prevention environment by digitally modeling the Donam Seowon (Historic Site No. 383) located in Nonsan. As a result, it was possible to identify problems in the patrol circulation, blind spots, and weak points in the natural, mechanical, and organizational surveillance of Donam Seowon. The results of the 3DVE were displayed in 3D drawings, so that positions and objects could be identified clearly. Surveillance during the daytime is higher in the order of natural, mechanical, and organizational surveillance, while surveillance during the night is higher in the order of organizational, mechanical, and natural surveillance. The lower the work area, the more easily it is shielded, so the waist position also needs to be evaluated. Blind spots can be found and displayed by calculating the surveillance range according to the specification, installation location, and height of the CCTV. Organizational surveillance, which was found to be complementary to mechanical surveillance, needs to be analyzed at the vulnerable times when crime might occur. Furthermore, it is noted that the analysis of integrated surveillance can be effective in examining security lighting, CCTV, patrol circulation, and other factors. This study was able to diagnose the crime prevention environment by simulating the actual situation. Based on this study, follow-up research should be conducted to evaluate and compare alternatives for designing the crime prevention environment.
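
The abstract does not detail how the 3DVE is computed. A common line-of-sight approach, sketched below only as an assumption and not as the authors' Unity 3D implementation, scores a target point by the fraction of observer points (e.g., CCTV or head/waist positions) that can see it; spherical obstacles are used here purely for brevity.

```python
import numpy as np

def blocked(p, q, center, radius):
    """True if the straight segment p -> q passes through a spherical obstacle."""
    p, q, center = (np.asarray(v, dtype=float) for v in (p, q, center))
    d = q - p
    t = np.clip(np.dot(center - p, d) / np.dot(d, d), 0.0, 1.0)
    closest = p + t * d                      # point on the segment nearest the centre
    return np.linalg.norm(center - closest) < radius

def visual_exposure(target, observers, obstacles):
    """Fraction of observer points with an unobstructed line of sight to the target."""
    visible = sum(
        not any(blocked(obs, target, c, r) for c, r in obstacles)
        for obs in observers
    )
    return visible / len(observers)

# Hypothetical example: two CCTV positions and one tree modelled as a sphere.
observers = [(0.0, 0.0, 3.0), (10.0, 0.0, 3.0)]   # x, y, z in metres
obstacles = [((5.0, 0.0, 2.0), 1.5)]              # (centre, radius)
print(visual_exposure((5.0, 5.0, 1.0), observers, obstacles))
```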

A Study on the Verification of an Indoor Test of a Portable Penetration Meter Using the Cone Penetration Test Method (자유낙하 콘관입시험법을 활용한 휴대용 다짐도 측정기의 실내시험을 통한 검증 연구)

  • Park, Geoun Hyun;Yang, An Seung
    • Journal of the Korean GEO-environmental Society
    • /
    • v.20 no.2
    • /
    • pp.41-48
    • /
    • 2019
  • Soil compaction is one of the most important activities in civil works, including road, airport, and port construction and the backfilling of structures. In road construction in particular, compaction can be categorized into subgrade compaction and roadbed compaction, and insufficient compaction can lead to poor construction quality. Currently, many types of compaction tests exist; the plate bearing test and the unit-weight-of-soil test based on the sand cone method are commonly used to measure the degree of compaction, but because it is difficult to secure economic efficiency with these, many other methods are under development. For this research, a portable penetration meter based on the Free-Fall Penetration Test (FFPT) was developed and manufactured. A homogeneous sample was obtained from a construction site and the soil was classified through a sieve analysis, and a grain-size analysis and specific gravity test were performed for the indoor test. The principle of the FFPT is that a penetration needle installed at the tip of a free-falling body measures the depth of penetration into the surface after subgrade or roadbed compaction has been completed; the degree of compaction is obtained separately through the unit-weight-of-soil test according to the sand cone method, and the relationship between the degree of compaction and the penetration depth is then verified. The maximum allowable grain size of the soil is 2.36 mm. The free-fall test was carried out with the drop height varied from 10 cm to 50 cm in 10 cm increments. For A₁ compaction, a trend line was developed from the test performed at a drop height of 10 cm, with a coefficient of determination of R² = 0.8677, while for D₂ compaction the trend line from the 20 cm drop height gave R² = 0.9815. This study compares and analyzes the correlation between the degree of compaction obtained from the sand-cone unit-weight-of-soil test and the penetration depth obtained from the FFPT meter. It is expected that the portable penetration tester will make it easy to test the degree of compaction at many construction sites, reduce the time, equipment, and manpower required by current compaction tests, and ultimately contribute to accurate, simple, and more economical measurement of the degree of compaction.
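
As an illustrative sketch with hypothetical numbers (not the paper's measurements), fitting the trend line between penetration depth and degree of compaction and reporting its coefficient of determination could look like this:

```python
import numpy as np

# Hypothetical paired measurements: penetration depth (mm) from the free-fall
# test and degree of compaction (%) from the sand cone method.
depth = np.array([18.0, 16.5, 15.2, 13.8, 12.1, 10.9])
compaction = np.array([88.0, 90.5, 92.0, 94.0, 96.5, 98.0])

# Least-squares trend line: compaction ≈ slope * depth + intercept
slope, intercept = np.polyfit(depth, compaction, 1)
predicted = slope * depth + intercept

# Coefficient of determination R²
ss_res = np.sum((compaction - predicted) ** 2)
ss_tot = np.sum((compaction - compaction.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot
print(f"compaction = {slope:.3f} * depth + {intercept:.3f}, R^2 = {r_squared:.4f}")
```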

Exploring the contextual factors of episodic memory: dissociating distinct social, behavioral, and intentional episodic encoding from spatio-temporal contexts based on medial temporal lobe-cortical networks (일화기억을 구성하는 맥락 요소에 대한 탐구: 시공간적 맥락과 구분되는 사회적, 행동적, 의도적 맥락의 내측두엽-대뇌피질 네트워크 특징을 중심으로)

  • Park, Jonghyun;Nah, Yoonjin;Yu, Sumin;Lee, Seung-Koo;Han, Sanghoon
    • Korean Journal of Cognitive Science
    • /
    • v.33 no.2
    • /
    • pp.109-133
    • /
    • 2022
  • Episodic memory consists of a core event and its associated contexts. Although the role of the hippocampus and its neighboring regions in contextual representation during encoding has become increasingly evident, it remains unclear how these regions handle context-specific information other than spatio-temporal contexts. Using high-resolution functional MRI, we explored the patterns of medial temporal lobe (MTL) and cortical involvement during the encoding of various types of contextual information (i.e., the journalistic 5W1H): "Who did it?," "Why did it happen?," "What happened?," "When did it happen?," "Where did it happen?," and "How did it happen?" Participants answered the six contextual questions while viewing simple experimental events consisting of two faces and one object on the screen. The MTL was divided into sub-regions by hierarchical clustering of resting-state data. General linear model analyses revealed stronger activation of MTL sub-regions, the prefrontal cortex (PFC), and the inferior parietal lobule (IPL) during social (Who), behavioral (How), and intentional (Why) contextual processing than during spatio-temporal (Where/When) contextual processing. To further investigate the functional networks underlying this dissociation of contextual encoding, a multivariate pattern analysis was conducted with features selected as the task-based connectivity links between the hippocampal subfields and the PFC/IPL. Social, behavioral, and intentional contextual processing were each successfully classified against spatio-temporal contextual processing. Thus, specific contexts in episodic memory, namely social, behavioral, and intentional contexts, involve functional connectivity patterns that are distinct from those for spatio-temporal contextual memory.
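
As a generic stand-in for the multivariate pattern analysis described above (not the authors' pipeline), the sketch below classifies one context type against the spatio-temporal baseline from connectivity-link features using a linear SVM with cross-validation; the feature matrix and labels are synthetic.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

# Hypothetical feature matrix: one row per sample, one column per
# hippocampal-subfield-to-PFC/IPL connectivity link (values are made up).
rng = np.random.default_rng(42)
X = rng.standard_normal((80, 120))          # 80 samples, 120 connectivity features
y = np.repeat([0, 1], 40)                   # 0 = spatio-temporal, 1 = social context

# Linear SVM with standardization, evaluated by 5-fold cross-validation,
# as a generic stand-in for the paper's multivariate pattern analysis.
clf = make_pipeline(StandardScaler(), SVC(kernel="linear", C=1.0))
scores = cross_val_score(clf, X, y, cv=5)
print("mean classification accuracy:", scores.mean())
```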

Automatic gasometer reading system using selective optical character recognition (관심 문자열 인식 기술을 이용한 가스계량기 자동 검침 시스템)

  • Lee, Kyohyuk;Kim, Taeyeon;Kim, Wooju
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.2
    • /
    • pp.1-25
    • /
    • 2020
  • In this paper, we suggest an application system architecture that provides an accurate, fast, and efficient automatic gasometer reading function. The system captures a gasometer image with a mobile device camera, transmits the image to a cloud server over a private LTE network, and analyzes the image to extract the device ID and gas usage amount by selective optical character recognition based on deep learning. In general, an image contains many types of characters, and optical character recognition extracts all of them. Some applications, however, need to ignore characters that are not of interest and focus only on specific types. For example, an automatic gasometer reading system only needs to extract the device ID and gas usage amount from gasometer images in order to bill users. Character strings that are not of interest, such as the device type, manufacturer, manufacturing date, and specifications, are not valuable to the application. Thus, the application has to analyze only the region of interest and specific types of characters to extract valuable information. We adopted CNN (Convolutional Neural Network) based object detection and CRNN (Convolutional Recurrent Neural Network) technology for selective optical character recognition, which analyzes only the region of interest for selective character information extraction. We built three neural networks for the application system: the first is a convolutional neural network that detects the regions of interest containing the gas usage amount and device ID character strings; the second is another convolutional neural network that transforms the spatial information of a region of interest into sequential feature vectors; and the third is a bi-directional long short-term memory network that converts the sequential information into character strings by mapping feature vectors to characters. In this research, the character strings of interest are the device ID and the gas usage amount: the device ID consists of 12 Arabic numerals and the gas usage amount consists of 4-5 Arabic numerals. All system components are implemented in the Amazon Web Services cloud with an Intel Xeon E5-2686 v4 CPU and an NVIDIA Tesla V100 GPU. The system architecture adopts a master-slave processing structure for efficient and fast parallel processing, coping with about 700,000 requests per day. The mobile device captures a gasometer image and transmits it to the master process in the AWS cloud. The master process runs on the Intel Xeon CPU and pushes reading requests from mobile devices onto an input queue with a FIFO (First In, First Out) structure. Each slave process consists of the three types of deep neural networks that conduct character recognition and runs on the NVIDIA GPU module. The slave process continuously polls the input queue for recognition requests. If there are requests from the master process in the input queue, the slave process converts the image into the device ID string, the gas usage amount string, and the position information of those strings, returns the information to the output queue, and switches back to idle mode to poll the input queue. The master process gets the final information from the output queue and delivers it to the mobile device. We used a total of 27,120 gasometer images for training, validation, and testing of the three deep neural networks.
Of these, 22,985 images were used for training and validation and 4,135 for testing. The 22,985 images were randomly split at an 8:2 ratio into training and validation sets for each training epoch. The 4,135 test images were categorized into five types (normal, noise, reflex, scale, and slant): normal data are clean images, noise means images with noise signals, reflex means images with light reflection in the gasometer region, scale means images with a small object size due to long-distance capture, and slant means images that are not horizontally level. The final character string recognition accuracies for the device ID and gas usage amount on normal data are 0.960 and 0.864, respectively.
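
The second and third networks described above follow the usual CRNN pattern (convolutional feature extraction followed by a bidirectional LSTM over the width dimension). The PyTorch-style model below is only an illustration of that idea; the layer sizes, the digit-only alphabet, and the CTC-style per-timestep output are assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class CRNN(nn.Module):
    """Illustrative CRNN: a CNN encodes the cropped region of interest into a
    width-wise feature sequence, and a bidirectional LSTM maps it to per-timestep
    character logits (e.g., for CTC decoding of digit strings)."""
    def __init__(self, num_classes=11):          # 10 digits + blank (assumption)
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2, 2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2, 2),
            nn.Conv2d(128, 256, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d((2, 1), (2, 1)),         # shrink height faster than width
        )
        self.rnn = nn.LSTM(256 * 4, 256, num_layers=2,
                           bidirectional=True, batch_first=True)
        self.fc = nn.Linear(512, num_classes)

    def forward(self, x):            # x: (B, 1, 32, W) grayscale crop from the detector
        f = self.cnn(x)              # (B, 256, 4, W/4)
        b, c, h, w = f.shape
        f = f.permute(0, 3, 1, 2).reshape(b, w, c * h)   # (B, W/4, 1024)
        seq, _ = self.rnn(f)
        return self.fc(seq)          # per-timestep class logits along the width axis

# Example usage with a dummy 32x128 crop
logits = CRNN()(torch.zeros(1, 1, 32, 128))
print(logits.shape)                  # (1, 32, 11)
```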

A Study on People Counting in Public Metro Service using Hybrid CNN-LSTM Algorithm (Hybrid CNN-LSTM 알고리즘을 활용한 도시철도 내 피플 카운팅 연구)

  • Choi, Ji-Hye;Kim, Min-Seung;Lee, Chan-Ho;Choi, Jung-Hwan;Lee, Jeong-Hee;Sung, Tae-Eung
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.2
    • /
    • pp.131-145
    • /
    • 2020
  • In line with the trend of industrial innovation, IoT technology applied in a variety of fields is emerging as a key element in creating new business models and providing user-friendly services when combined with big data. Data accumulated from Internet-of-Things (IoT) devices are being used in many ways to build convenience-oriented smart systems, since they enable customized intelligent services through analysis of the user environment and patterns. Recently, IoT technology has been applied to innovation in the public domain, for example in smart cities and smart transportation, such as solving traffic and crime problems using CCTV. In particular, the ease of securing real-time service data and the stability of security must be considered comprehensively when planning underground services or establishing a movement-amount control information system to enhance the convenience of citizens and commuters amid congestion of public transportation such as subways and urban railways. However, previous studies that utilize image data face limitations: privacy issues and reduced object-detection performance under abnormal conditions. The IoT device-based sensor data used in this study are free from privacy issues because they do not identify individuals, and can therefore be effectively utilized to build intelligent public services for many unspecified people. We utilize IoT-based infrared sensor devices for an intelligent pedestrian tracking system in a metro service that many people use daily, with the temperature data measured by the sensors transmitted in real time. The experimental environment for collecting real-time sensor data was established at equally spaced points forming a 4×4 grid on the ceiling of subway entrances where passenger movement is high, and the temperature change of objects entering and leaving the detection spots was measured. The measured data went through preprocessing in which reference values for the 16 areas were set and the differences between the temperatures in the 16 areas and their reference values were calculated per unit of time; this corresponds to a methodology that maximizes movement within the detection area. In addition, the data values were scaled by a factor of 10 to reflect temperature differences between areas more sensitively; for example, if the temperature collected from a sensor at a given time was 28.5℃, the analysis used the value 285. Thus, the data collected from the sensors have the characteristics of both time-series data and image data with 4×4 resolution. Reflecting the characteristics of the measured and preprocessed data, we propose a hybrid algorithm, CNN-LSTM (Convolutional Neural Network-Long Short-Term Memory), that combines a CNN, which excels at image classification, with an LSTM, which is especially suitable for analyzing time-series data. In this study, the CNN-LSTM algorithm is used to predict the number of passing persons in one of the 4×4 detection areas.
We validated the proposed model by comparing its performance with other artificial intelligence algorithms: Multi-Layer Perceptron (MLP), Long Short-Term Memory (LSTM), and RNN-LSTM (Recurrent Neural Network-Long Short-Term Memory). In the experiment, the proposed CNN-LSTM hybrid model showed the best predictive performance among MLP, LSTM, and RNN-LSTM. By utilizing the proposed devices and models, various metro services, such as real-time monitoring of public transport facilities and congestion-based emergency response services, are expected to be provided without legal issues regarding personal information. However, the data were collected from only one side of the entrances and over a short period, so the model's applicability in other environments still needs to be verified. In the future, the proposed model is expected to become more reliable if experimental data are collected in a sufficient variety of environments or if additional training data are obtained from other sensors.
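
As an illustrative sketch of the hybrid idea described above (not the authors' model), the PyTorch module below encodes each 4×4 temperature-difference frame with a small CNN, feeds the per-frame features to an LSTM, and regresses the passing-person count; all layer sizes are assumptions.

```python
import torch
import torch.nn as nn

class CNNLSTMCounter(nn.Module):
    """Illustrative CNN-LSTM: each 4x4 temperature-difference frame is encoded by a
    small CNN, the per-frame features feed an LSTM, and the final hidden state
    predicts the number of passing persons for the detection area."""
    def __init__(self, hidden=64):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Flatten(),                      # 32 * 4 * 4 = 512 features per frame
        )
        self.lstm = nn.LSTM(512, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)       # regression on the count

    def forward(self, x):                      # x: (batch, time, 1, 4, 4)
        b, t = x.shape[:2]
        feats = self.cnn(x.reshape(b * t, 1, 4, 4)).reshape(b, t, -1)
        _, (h, _) = self.lstm(feats)
        return self.head(h[-1])                # (batch, 1) predicted count

# Example usage with a dummy sequence of 30 preprocessed 4x4 frames
model = CNNLSTMCounter()
print(model(torch.zeros(2, 30, 1, 4, 4)).shape)   # (2, 1)
```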