• Title/Abstract/Keyword: Video Data Collection System


A Study of Arrow Performance using Artificial Neural Network (Artificial Neural Network를 이용한 화살 성능에 대한 연구)

  • Jeong, Yeongsang;Kim, Sungshin
    • Journal of the Korean Institute of Intelligent Systems / Vol. 24, No. 5 / pp.548-553 / 2014
  • To evaluate the performance of arrows manufactured through a production process, it is common to rely on the personal experience of hunters who have used bows and arrows for a long time, of technicians who produce leisure and sports equipment, and of experts in related industries. The dispersion of the arrow's impact points obtained from repeated shooting experiments is also an important indicator for evaluating arrow performance. There is ongoing research on evaluating arrow performance using the dispersion of impact points and flight images of the arrow obtained from a high-speed camera. However, research dealing with the relationship between the distribution of impact points and the characteristics of the arrow (length, weight, spine, overlap, straightness) is insufficient. Therefore, this paper proposes both a system that describes the distribution of impact points as a numerical representation and a correlation model between arrow characteristics and impact points. The inputs of the model are the arrow's characteristics (spine, straightness), and the output is the MAD (mean absolute distance) of the triangular coordinates obtained from three repeated shots taken while rotating the nock by 120 degrees. Input-output data were collected to train the correlation model, and an ANN (artificial neural network) was used to implement it.
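The MAD feature above can be made concrete. A minimal sketch, assuming MAD means the mean absolute distance of the three impact points from their centroid (the abstract names the measure but does not give its exact formula, so this definition is an assumption):

```python
import math

def mad_of_impact_points(points):
    """Mean absolute distance (MAD) of impact points from their centroid.

    `points` is a list of (x, y) coordinates, e.g. the triangle obtained
    from three shots taken while rotating the nock by 120 degrees.
    NOTE: this definition of MAD is an assumption; the abstract only
    names the measure.
    """
    cx = sum(p[0] for p in points) / len(points)
    cy = sum(p[1] for p in points) / len(points)
    return sum(math.hypot(p[0] - cx, p[1] - cy) for p in points) / len(points)

# Three hypothetical impact points forming a triangle around the aim point.
print(mad_of_impact_points([(0.0, 1.0), (-0.87, -0.5), (0.87, -0.5)]))
```

A tighter grouping (points closer to their centroid) yields a smaller MAD, which is why the value can serve as a single performance indicator for an arrow.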

Evaluation of the Utilization Potential of High-Resolution Optical Satellite Images in Port Ship Management: A Case Study on Berth Utilization in Busan New Port (고해상도 광학 위성영상의 항만선박관리 활용 가능성 평가: 부산 신항의 선석 활용을 대상으로)

  • Hyunsoo Kim;Soyeong Jang;Tae-Ho Kim
    • Korean Journal of Remote Sensing / Vol. 39, No. 5_4 / pp.1173-1183 / 2023
  • Over the past 20 years, Korea's overall import and export cargo volume has increased at an average annual rate of approximately 5.3%, and about 99% of that cargo is still transported by sea. Recent increases in maritime cargo volume, together with factors such as the COVID-19 pandemic and international conflicts, have aggravated congestion in maritime logistics, so continuous monitoring of ports has become crucial. Various ground observation systems and Automatic Identification System (AIS) data have been used to monitor ports, and numerous preliminary studies have been conducted for the efficient operation of container terminals and for cargo volume prediction. However, compared with large ports, the ports of small and developing countries face difficulties in monitoring due to environmental issues and aging infrastructure. Recently, with the increasing utility of artificial satellites, preliminary studies have used satellite imagery for the continuous collection of maritime cargo data and for establishing ocean monitoring systems over vast and hard-to-reach areas. This study aims to visually detect ships docked at berths in Busan New Port using high-resolution satellite imagery and to quantitatively evaluate berth utilization rates. Using high-resolution imagery from Compact Advanced Satellite 500-1 (CAS500-1), Korea Multi-Purpose Satellite-3 (KOMPSAT-3), PlanetScope, and Sentinel-2A, ships docked within the port's berths were visually detected, and the berth utilization rate was calculated against the total number of ships that the berths can accommodate. The results showed berth utilization rates of 0.67, 0.7, and 0.59 on June 2, 2022, fluctuating with the time of satellite image capture. On June 3, 2022, the value remained at 0.7, indicating a consistent berth utilization rate despite changes in ship types. A higher berth utilization rate indicates more active operations at the berth.
This information can assist in basic planning of new ship operation schedules, since congested berths can lead to longer waiting times for ships at anchorage, potentially resulting in increased freight rates. The duration of operations at a berth can vary from several hours to several days. Calculating changes in ships at berths from differences in satellite image capture times showed variations in ship presence even with a time difference of only 4 minutes and 49 seconds. With short observation intervals and high-resolution satellite imagery, continuous monitoring within ports can be achieved. In addition, using satellite imagery to monitor changes in ships at berths at minute-level increments could prove useful for ports in small and developing countries where harbor management is not well established, offering valuable insights and solutions.
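The berth utilization rate described above reduces to a simple ratio. A minimal sketch with hypothetical ship counts (the paper reports only the resulting rates, e.g. 0.67, 0.7, and 0.59 for the three captures on June 2, 2022):

```python
def berth_utilization(ships_docked, total_berth_capacity):
    """Berth utilization rate: ships visually detected at berths divided by
    the total number of ships the berths can accommodate."""
    if total_berth_capacity <= 0:
        raise ValueError("total_berth_capacity must be positive")
    return round(ships_docked / total_berth_capacity, 2)

# Hypothetical counts chosen only to illustrate the calculation.
print(berth_utilization(14, 21))
print(berth_utilization(7, 10))
```

Comparing rates computed from images taken minutes apart is what reveals short-term changes in ship presence at the berths.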

Noise Removal Method for a Micro Channel Plate in a Gas Field Ion Source System (가스장 이온원 시스템에서 마이크로 채널 플레이트의 잡음 제거 방법)

  • Han, Cheol-Su;Park, In-Yong;Jo, Bok-Rae;Park, Chang-Jun;An, Sang-Jeong
    • Proceedings of the Korean Vacuum Society Conference / Abstracts of the 46th Winter Annual Meeting of the Korean Vacuum Society, 2014 / pp.422.2-422.2 / 2014
  • A gas field ionization source (GFIS) is being studied as a light source for ion microscopes, which offer better resolution than electron microscopes; it is characterized by a large angular current density, a small virtual source size, and a narrow energy spread. To develop a GFIS with these advantages, it is very important to observe the shape of the ion beam it generates, and a micro channel plate (MCP) is typically used in the observation system. An MCP amplifies the secondary electrons generated by the energy of particles entering its channels by a factor of thousands to more than millions and projects them onto a phosphor screen, making weak signals observable as an image. The MCP's large gain makes it easy to amplify small signals for observation, so it is well suited to observing the ion beam generated by the GFIS method (beam currents at the pA level). However, even with an MCP, the amplified beam intensity is very weak, so accurately observing the generated beam shape requires lengthening the exposure time of the camera that photographs the MCP's phosphor screen, which increases the data collection time. This presentation analyzes the effect of MCP noise on the observation of GFIS ion-beam images and introduces a method to remove it, in order to shorten the time required for beam observation. In this study, the MCP noise characteristics contained in GFIS emission-beam images were analyzed through field ion microscope (FIM) experiments, and digital image processing was used to remove the MCP noise from the emission-beam images so that only the beam image could be extracted. By applying the proposed method to a GFIS emission-beam observation system, the emitted beam could be observed with a shorter exposure time than with the conventional method, and an improved beam image was obtained thanks to the noise removal. The shortened observation time and improved beam images enable more efficient development of the single-atom ion beams essential for ion microscopes, and the digital image processing can be applied to automating GFIS beam generation. In addition, since the MCP exposure time required for image acquisition is shorter than with the conventional method, the approach offers significant advantages for preventing shortened equipment lifetime and for equipment maintenance.

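The noise-removal step above can be sketched as a subtraction of a separately characterized noise image. This is a minimal pure-Python sketch under an assumption: the abstract says digital image processing removed MCP noise characterized by FIM experiments, but does not specify the operation, so the clamped subtraction below is illustrative only.

```python
def remove_mcp_noise(beam_image, noise_image, threshold=0):
    """Subtract a measured MCP noise image from a beam image, clamping at 0.

    Both images are 2D lists of pixel intensities. Pixels whose difference
    does not exceed `threshold` are suppressed entirely.
    NOTE: the exact operation used in the study is not published; this is
    an assumed, simplified version.
    """
    return [
        [max(b - n, 0) if b - n > threshold else 0
         for b, n in zip(brow, nrow)]
        for brow, nrow in zip(beam_image, noise_image)
    ]

beam  = [[5, 40, 6], [7, 80, 9]]   # hypothetical phosphor-screen frame
noise = [[4, 3, 5],  [6, 4, 8]]    # hypothetical MCP noise frame
print(remove_mcp_noise(beam, noise))
```

Removing a stable noise floor is what allows a shorter camera exposure to still resolve the weak beam spot.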

Development of Crack Detection System for Highway Tunnels using Imaging Device and Deep Learning (영상장비와 딥러닝을 이용한 고속도로 터널 균열 탐지 시스템 개발)

  • Kim, Byung-Hyun;Cho, Soo-Jin;Chae, Hong-Je;Kim, Hong-Ki;Kang, Jong-Ha
    • Journal of the Korea Institute for Structural Maintenance and Inspection / Vol. 25, No. 4 / pp.65-74 / 2021
  • To efficiently inspect the rapidly increasing number of aging tunnels in many developed countries, many inspection methodologies using imaging equipment and image processing have been proposed. However, most existing methodologies evaluated their performance on clean concrete surfaces over a limited area where no other objects exist. This paper therefore proposes a six-step framework for developing a tunnel crack detection deep learning model, based mainly on negative-sample (non-crack object) training and Cascade Mask R-CNN. The framework consists of six steps: searching for cracks in images captured from real tunnels, labeling cracks at the pixel level, training a deep learning model, collecting non-crack objects, retraining the deep learning model with the collected non-crack objects, and constructing the final training dataset. To implement the framework, Cascade Mask R-CNN, an instance segmentation model, was trained with 1561 general crack images and 206 non-crack images. To examine the applicability of the trained model to real-world tunnel crack detection, field testing was conducted on tunnel spans about 200 m long where electric wires and lights are prevalent. In the experiment, the trained model showed 99% precision and 92% recall, demonstrating the excellent field applicability of the proposed framework.
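The precision and recall figures quoted above follow the standard definitions. A minimal sketch with hypothetical true-positive/false-positive/false-negative counts chosen only to reproduce the reported 99% / 92% (the paper does not publish the raw counts):

```python
def precision_recall(tp, fp, fn):
    """Standard precision and recall from detection counts.

    precision = TP / (TP + FP): how many detections were real cracks.
    recall    = TP / (TP + FN): how many real cracks were detected.
    """
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return precision, recall

# Hypothetical counts; only the resulting 0.99 / 0.92 match the paper.
p, r = precision_recall(tp=92, fp=1, fn=8)
print(round(p, 2), round(r, 2))
```

High precision matters here because false alarms from wires and lights are exactly what negative-sample training is meant to suppress.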

The Audience Behavior-based Emotion Prediction Model for Personalized Service (고객 맞춤형 서비스를 위한 관객 행동 기반 감정예측모형)

  • Ryoo, Eun Chung;Ahn, Hyunchul;Kim, Jae Kyeong
    • Journal of Intelligence and Information Systems / Vol. 19, No. 2 / pp.73-85 / 2013
  • In today's information society, the importance of knowledge services that create value from information grows by the day. With the development of IT, information has become easy to collect and use, and many companies in a variety of industries actively use customer information for marketing. Since the start of the 21st century, companies have actively used culture and the arts to manage their corporate image and for marketing closely linked to their commercial interests. However, it is difficult for companies to attract or maintain consumers' interest through technology alone, so cultural activities have become a common tool of differentiation among firms. Many firms have used the customer experience as a new marketing strategy to respond effectively to competitive markets. Accordingly, the need for personalized services that provide a new experience based on a personal profile containing the characteristics of the individual is emerging rapidly. Personalized service using a customer's individual profile information, such as language, symbols, behavior, and emotions, is therefore very important today. Through it, we can judge the interaction between people and content and maximize customer experience and satisfaction. There are various related works on customer-centered service; in particular, emotion recognition research has emerged recently. Existing research has mostly performed emotion recognition using bio-signals, and most studies focus on voice and facial expressions, which show large emotional changes. However, there are several difficulties in predicting people's emotions caused by the limitations of equipment and service environments. In this paper, we therefore develop an emotion prediction model based on a vision-based interface to overcome these limitations. Emotion recognition based on people's gestures and postures has been studied by several researchers.
This paper developed a model that recognizes people's emotional states through body gestures and postures using the difference image method, and we found an optimal validated model for predicting four kinds of emotions. The proposed model aims to automatically determine and predict four human emotions (sadness, surprise, joy, and disgust). To build the model, an event booth was installed in KOCCA's lobby, and suitable stimulating movies were shown to collect participants' body gestures and postures as their emotions changed. We then extracted body movements using the difference image method and refined the data to build the proposed model with a neural network. The model used three time-frame sets (20 frames, 30 frames, and 40 frames), and we adopted the set with the best performance. Before building the three models, the entire set of 97 samples was divided into learning, test, and validation sets. The model was constructed using an artificial neural network: we used the back-propagation algorithm as the learning method, set the learning rate and momentum rate to 10%, and used the sigmoid function as the transfer function. We designed a three-layer perceptron network with one hidden layer and four output nodes. Based on the test set, learning was stopped at 50,000 events after reaching the minimum error, in order to explore the stopping point. We finally evaluated each model's accuracy and found the best model for predicting each emotion. The results showed prediction accuracies of 100% for sadness and 96% for joy with the 20-frame model, and 88% for surprise and 98% for disgust with the 30-frame model. The findings of our research are expected to provide an effective algorithm for personalized service in various industries such as advertising, exhibitions, and performances.
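The difference image method mentioned above is simple to state: subtract consecutive frames and keep only pixels that changed noticeably. A minimal pure-Python sketch (frame values and the threshold are illustrative assumptions, not the paper's settings):

```python
def difference_image(prev_frame, curr_frame, threshold=25):
    """Difference image: per-pixel absolute difference between consecutive
    frames, binarized by a threshold so only moved regions remain."""
    return [
        [1 if abs(c - p) > threshold else 0
         for p, c in zip(prow, crow)]
        for prow, crow in zip(prev_frame, curr_frame)
    ]

def movement_amount(diff):
    """Number of changed pixels: a simple per-frame movement feature that
    could feed a model such as the neural network described above."""
    return sum(sum(row) for row in diff)

prev = [[10, 10, 10], [10, 10, 10]]
curr = [[10, 90, 10], [10, 95, 12]]   # hypothetical gesture movement
d = difference_image(prev, curr)
print(movement_amount(d))   # 2 pixels moved beyond the threshold
```

Aggregating such movement features over 20, 30, or 40 frames gives the time-frame sets the paper compares.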

A Study on forest fires Prediction and Detection Algorithm using Intelligent Context-awareness sensor (상황인지 센서를 활용한 지능형 산불 이동 예측 및 탐지 알고리즘에 관한 연구)

  • Kim, Hyeng-jun;Shin, Gyu-young;Woo, Byeong-hun;Koo, Nam-kyoung;Jang, Kyung-sik;Lee, Kang-whan
    • Journal of the Korea Institute of Information and Communication Engineering / Vol. 19, No. 6 / pp.1506-1514 / 2015
  • In this paper, we propose a forest fire prediction and detection system that provides fire prediction and detection using context-awareness sensors. Because a fire spreads over a wide area, it is difficult to detect its occurrence with a single camera sensor. We propose an algorithm that acquires temperature, humidity, CO2, and flame-presence information in real time, compares the data against multiple conditions, and analyzes and determines weights according to the complex situation of the fire. In addition, the system can differentially manage intensive fire detection and prediction by dividing the state of the fire zone. Therefore, we propose an algorithm that determines prediction and detection from fire parameters such as temperature, humidity, CO2, and flame presence in real time using context-awareness sensors, and we also suggest an algorithm that provides the path of fire diffusion and predicts a secure safety zone.
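The weighted multi-condition decision described above can be sketched as follows. The weights, thresholds, and alarm level are hypothetical: the paper describes weighting sensor conditions but does not publish its values.

```python
# Hypothetical weights and thresholds for each context-awareness sensor.
WEIGHTS = {"temperature": 0.3, "humidity": 0.2, "co2": 0.3, "flame": 0.2}
THRESHOLDS = {"temperature": 45.0, "humidity": 30.0, "co2": 1000.0}

def fire_score(temperature, humidity, co2, flame_detected):
    """Weighted multi-condition score: each sensor that crosses its
    threshold contributes its weight. Humidity contributes when LOW,
    since dry air raises fire risk."""
    score = 0.0
    if temperature > THRESHOLDS["temperature"]:
        score += WEIGHTS["temperature"]
    if humidity < THRESHOLDS["humidity"]:
        score += WEIGHTS["humidity"]
    if co2 > THRESHOLDS["co2"]:
        score += WEIGHTS["co2"]
    if flame_detected:
        score += WEIGHTS["flame"]
    return score

def is_fire(temperature, humidity, co2, flame_detected, alarm_level=0.5):
    """Declare a fire when the combined weighted score reaches the alarm level."""
    return fire_score(temperature, humidity, co2, flame_detected) >= alarm_level

print(is_fire(60.0, 20.0, 1500.0, True))   # all conditions met
print(is_fire(25.0, 60.0, 400.0, False))   # normal readings
```

Combining several cheap sensors this way is what lets the system cover areas a single camera cannot.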

Design and Implementation of OpenCV-based Inventory Management System to build Small and Medium Enterprise Smart Factory (중소기업 스마트공장 구축을 위한 OpenCV 기반 재고관리 시스템의 설계 및 구현)

  • Jang, Su-Hwan;Jeong, Jopil
    • The Journal of the Institute of Internet, Broadcasting and Communication / Vol. 19, No. 1 / pp.161-170 / 2019
  • Small and medium-sized factories that mass-produce a wide variety of products handle a large number of items, wasting manpower and expense on inventory management. In addition, there is no way to check inventory status in real time, and these factories suffer economic damage from excess inventory and stock shortages. There are many ways to build a real-time data collection environment, but most are difficult for small and medium-sized companies to afford, so their smart factories face a difficult reality in which appropriate countermeasures are hard to find. In this paper, we implemented an extension of the existing inventory management method through character extraction from labels carrying barcodes and QR codes, which are widely adopted in current product management, and evaluated its effect. Technically, the system manages the input and output of products through computer image processing: preprocessing with OpenCV for the automatic recognition and classification of stock labels and barcodes, optical character recognition (OCR) with the Google Vision API, and barcode recognition with ZBar. We propose a method to manage inventory by real-time image recognition on a Raspberry Pi without using expensive equipment.
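The preprocessing stage can be illustrated without the real dependencies. A pure-Python binarization sketch of the kind of step applied before OCR and barcode decoding (the actual system uses OpenCV, the Google Vision API, and ZBar; none of those calls are shown here, and the threshold is an assumption):

```python
def binarize(gray_image, threshold=128):
    """Fixed-threshold binarization of a grayscale image (2D list).

    Pixels brighter than `threshold` become white (255), the rest black (0).
    This illustrates the preprocessing idea only; a production pipeline
    would use OpenCV with adaptive or Otsu thresholding.
    """
    return [[255 if px > threshold else 0 for px in row] for row in gray_image]

# A hypothetical 1-row barcode scanline: dark bars become 0, background 255.
scanline = [[200, 30, 40, 210, 220, 25, 35, 190]]
print(binarize(scanline))
```

A clean black-and-white image like this is what makes the downstream OCR and barcode decoders reliable on cheap camera hardware such as a Raspberry Pi module.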

Intelligent Motion Pattern Recognition Algorithm for Abnormal Behavior Detections in Unmanned Stores (무인 점포 사용자 이상행동을 탐지하기 위한 지능형 모션 패턴 인식 알고리즘)

  • Young-june Choi;Ji-young Na;Jun-ho Ahn
    • Journal of Internet Computing and Services / Vol. 24, No. 6 / pp.73-80 / 2023
  • The recent steep increase in the minimum hourly wage has increased the burden of labor costs, and the share of unmanned stores is growing in the aftermath of COVID-19. As a result, theft targeting unmanned stores is also increasing. To prevent such theft, "Just Walk Out" systems have been introduced that use LiDAR sensors, weight sensors, and similar devices, or stores are checked manually through continuous CCTV monitoring. However, the more expensive the sensors used, the higher the initial and running costs of operating the store, and CCTV verification is limited because it is difficult for managers to monitor around the clock. In this paper, we propose an AI image-processing fusion algorithm that replaces these sensor- and human-dependent parts, detects customers who perform abnormal behaviors such as theft at a cost low enough for unmanned stores, and provides cloud-based notifications. This paper also verifies the accuracy of each component, based on behavior-pattern data collected from unmanned stores, using motion capture with MediaPipe, object detection with YOLO, and the fusion algorithm, and proves the performance of the fusion algorithm through various scenario designs.
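A fusion step of the kind described can be sketched as a simple decision rule combining the two component outputs. The rule, the score, and the threshold below are assumptions for illustration; the paper validates its own fusion algorithm on data collected in unmanned stores.

```python
def fuse(pose_anomaly_score, detected_objects, score_threshold=0.7):
    """Hypothetical fusion rule: flag abnormal behavior only when the
    motion-pattern anomaly score (e.g. from pose estimation) is high AND
    the object detector confirms a person is present in the frame.

    `pose_anomaly_score` is assumed to be in [0, 1]; `detected_objects`
    is a list of class labels from an object detector.
    """
    return pose_anomaly_score >= score_threshold and "person" in detected_objects

print(fuse(0.85, ["person", "shelf"]))   # suspicious motion + person
print(fuse(0.85, ["shelf"]))             # no person detected
print(fuse(0.30, ["person"]))            # normal motion
```

Requiring agreement between the two models is the usual reason to fuse them: it suppresses false alarms that either component would raise alone.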

Application of Support Vector Regression for Improving the Performance of the Emotion Prediction Model (감정예측모형의 성과개선을 위한 Support Vector Regression 응용)

  • Kim, Seongjin;Ryoo, Eunchung;Jung, Min Kyu;Kim, Jae Kyeong;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems / Vol. 18, No. 3 / pp.185-202 / 2012
  • Since the value of information has been recognized in the information society, the use and collection of information have become important. Like an artistic painting, a facial expression contains a wealth of information and can be described in a thousand words. Following this idea, there have recently been a number of attempts to provide customers and companies with intelligent services that perceive human emotions through facial expressions. For example, the MIT Media Lab, a leading organization in this research area, has developed a human emotion prediction model and applied its studies to commercial business. In the academic area, conventional methods such as Multiple Regression Analysis (MRA) and Artificial Neural Networks (ANN) have been applied to predict human emotion in prior studies. However, MRA is generally criticized for its low prediction accuracy; this is inevitable, since MRA can only explain a linear relationship between the dependent variable and the independent variables. To mitigate the limitations of MRA, some studies, such as Jung and Kim (2012), have used ANN as an alternative and reported that ANN generated more accurate predictions than statistical methods like MRA. However, ANN has also been criticized for overfitting and the difficulty of network design (e.g., setting the number of layers and the number of nodes in the hidden layers). Against this background, we propose a novel model using Support Vector Regression (SVR) to increase prediction accuracy. SVR is an extended version of the Support Vector Machine (SVM) designed to solve regression problems. The model produced by SVR depends only on a subset of the training data, because the cost function for building the model ignores any training data that is close (within a threshold ε) to the model's prediction.
Using SVR, we tried to build a model that can measure the levels of arousal and valence from facial features. To validate the usefulness of the proposed model, we collected data on facial reactions while providing appropriate visually stimulating contents and extracted features from the data. Next, preprocessing steps were taken to choose statistically significant variables. In total, 297 cases were used for the experiment. As comparative models, we also applied MRA and ANN to the same data set. For SVR, we adopted the ε-insensitive loss function and a grid search to find the optimal values of parameters such as C, d, σ², and ε. For ANN, we adopted a standard three-layer backpropagation network with a single hidden layer. The learning rate and momentum rate of the ANN were set to 10%, and we used the sigmoid function as the transfer function of the hidden and output nodes. We repeated the experiments while varying the number of nodes in the hidden layer among n/2, n, 3n/2, and 2n, where n is the number of input variables. The stopping condition for the ANN was set to 50,000 learning events, and we used MAE (Mean Absolute Error) as the measure for performance comparison. From the experiment, we found that SVR achieved the highest prediction accuracy on the hold-out data set compared to MRA and ANN. Regardless of the target variable (the level of arousal or the level of positive/negative valence), SVR showed the best performance on the hold-out data set. ANN also outperformed MRA but showed considerably lower prediction accuracy than SVR for both target variables. The findings of our research are expected to be useful to researchers and practitioners who wish to build models for recognizing human emotions.
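The ε-insensitive loss that distinguishes SVR, and the MAE measure used for comparison, can both be written directly. A minimal sketch; ε and the sample values are illustrative, not the paper's tuned parameters:

```python
def eps_insensitive_loss(y_true, y_pred, eps=0.1):
    """ε-insensitive loss used by SVR: residuals within ±ε cost nothing,
    which is why only a subset of the training data (the support vectors)
    ends up shaping the model."""
    return sum(max(abs(t - p) - eps, 0.0) for t, p in zip(y_true, y_pred))

def mae(y_true, y_pred):
    """Mean Absolute Error, the performance measure used in the paper."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

y_true = [1.0, 2.0, 3.0]
y_pred = [1.05, 2.5, 3.0]   # hypothetical model predictions
print(eps_insensitive_loss(y_true, y_pred))  # only the large residual pays
print(mae(y_true, y_pred))
```

In practice the grid search mentioned in the abstract would evaluate such a loss across candidate values of C, d, σ², and ε and keep the best combination.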

A Study of the Reactive Movement Synchronization for Analysis of Group Flow (그룹 몰입도 판단을 위한 움직임 동기화 연구)

  • Ryu, Joon Mo;Park, Seung-Bo;Kim, Jae Kyeong
    • Journal of Intelligence and Information Systems / Vol. 19, No. 1 / pp.79-94 / 2013
  • Recently, high-value-added business has been growing steadily in the culture and arts area. To generate high value from a performance, audience satisfaction is necessary. Flow is a critical factor in satisfaction, and it should be induced from the audience and measured. To evaluate the audience's interest in and emotion toward content, producers and investors need an index for measuring flow. But it is neither easy to define flow quantitatively nor to collect the audience's reactions immediately. Previous studies evaluated group flow as the sum of the average values of each person's reactions. The flow or "good feeling" of each audience member was extracted from the face, especially changes in expression, and from body movement. But it was not easy to handle the large amount of real-time data from the sensor signals, and it was difficult to set up the experimental devices, in terms of both cost and environment, because every participant needed a personal sensor to record physical signals and a camera placed in front of the head to capture facial expressions. Therefore, a simpler system is needed to analyze group flow. This study provides a method for measuring audience flow through group synchronization at the same time and place. To measure synchronization, we built a real-time processing system using difference images and developed a Group Emotion Analysis (GEA) program for the flow judgment model. A difference image is obtained by subtracting the previous camera frame from the present frame, which yields the movement variation of the audience's reaction. After measuring the audience's reactions, synchronization is divided into Dynamic State Synchronization and Static State Synchronization.
Dynamic State Synchronization accompanies the audience's active reactions, while Static State Synchronization corresponds to stillness of the audience. Dynamic State Synchronization can be caused by the audience's surprised reactions to scary, creepy, or reversal scenes, while Static State Synchronization is triggered by moving or sad scenes. Therefore, we showed the audience several short movies containing the various kinds of scenes mentioned above, which made them sad, made them clap, gave them the creeps, and so on. To check the movement of the audience, we defined critical points α and β: Dynamic State Synchronization was meaningful when the movement value was over critical point β, while Static State Synchronization was effective under critical point α. β was derived from the clapping movement of 10 teams instead of using the average amount of movement. After checking the reactive movement of the audience, the percentage ratio was calculated by dividing the number of people reacting by the total number of people. A total of 37 teams were formed at the "2012 Seoul DMC Culture Open" and took part in the experiments. First, the audience was induced to clap by staff; second, a basic scene was shown to neutralize their emotions; third, a flow scene was displayed; fourth, a reversal scene was introduced. Then 24 of the teams were shown amusing and creepy scenes, and the other 10 teams were shown a sad scene. The audience clapped and laughed at the amusing scene, shook their heads or hid by closing their eyes at the creepy scene, and fell silent at the sad or touching scene. If the ratio was over about 80%, the group could be judged as synchronized and flow as achieved. As a result, the audience showed similar reactions to similar stimulation at the same time and place.
With additional normalization and experiments, the flow factor can be found through synchronization in much bigger groups, and this should be useful for planning contents.
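The synchronization judgment above reduces to counting how many audience members cross a critical point and comparing the ratio to a cutoff. A minimal sketch; the α, β, and cutoff values are hypothetical (the paper derives β from the clapping movement of 10 teams and judges flow at roughly an 80% ratio):

```python
# Hypothetical critical points for per-person movement values
# (e.g. changed-pixel counts from a difference image).
ALPHA = 5.0    # below this: static (still) reaction
BETA = 50.0    # above this: dynamic (active) reaction

def sync_ratio(movements, mode):
    """Share of audience members whose movement crosses the critical point.

    `movements` holds one movement value per person; `mode` is
    'dynamic' (count values over BETA) or 'static' (count values under ALPHA).
    """
    if mode == "dynamic":
        reacting = [m for m in movements if m > BETA]
    else:
        reacting = [m for m in movements if m < ALPHA]
    return len(reacting) / len(movements)

def is_group_flow(movements, mode, cutoff=0.8):
    """Judge group synchronization (and hence flow) at the given cutoff."""
    return sync_ratio(movements, mode) >= cutoff

# A hypothetical creepy scene: 9 of 10 people react actively.
print(is_group_flow([80, 90, 75, 60, 55, 83, 91, 70, 66, 10], "dynamic"))
```

Using a ratio rather than an average is what keeps one unusually still or unusually animated viewer from dominating the group-level judgment.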