• Title/Summary/Keyword: industrial computer vision


Development of Chicken Carcass Segmentation Algorithm using Image Processing System (영상처리 시스템을 이용한 닭 도체 부위 분할 알고리즘 개발)

  • Cho, Sung-Ho;Lee, Hyo-Jai;Hwang, Jung-Ho;Choi, Sun;Lee, Hoyoung
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.22 no.3
    • /
    • pp.446-452
    • /
    • 2021
  • As higher standards for food are demanded, consumption of chicken meat that satisfies increasingly segmented preferences is growing. In March 2003, the quality criteria for chicken carcasses announced by the Livestock Quality Assessment Service defined quality grades according to fecal contamination and the size and weight of blood spots and bruises. However, human inspection cannot grade mass-produced products consistently, which is the key difficulty in grading thousands of chicken carcasses. This paper proposes a computer vision algorithm for non-destructive inspection that identifies chicken carcass parts according to these detailed standards. To inspect carcasses conveyed at high speed, image calibration is applied to provide robustness against external lighting interference. Separation of the chicken from the background is achieved by a series of image-processing steps: binarization based on Expectation Maximization, erosion, and labeling. For shape analysis of the carcasses, features are extracted that capture geometric information. Applied to 78 chicken carcass samples, the algorithm was effective in segmenting carcasses from the background and analyzing their geometric features.
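The abstract names a three-step separation pipeline: binarization based on Expectation Maximization, then erosion, then labeling. A minimal NumPy sketch of those steps might look as follows; the EM formulation, the 4-neighbour erosion, and all function names are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

def em_threshold(pixels, iters=50):
    """Fit a two-component 1D Gaussian mixture by EM and return the
    midpoint of the two means as a binarization threshold."""
    mu = np.array([pixels.min(), pixels.max()], dtype=float)
    sigma = np.array([pixels.std() + 1e-6] * 2)
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: per-pixel responsibilities of each component
        d = (pixels[:, None] - mu) / sigma
        logp = -0.5 * d ** 2 - np.log(sigma) + np.log(pi)
        r = np.exp(logp - logp.max(axis=1, keepdims=True))
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate means, spreads and mixing weights
        n = r.sum(axis=0)
        mu = (r * pixels[:, None]).sum(axis=0) / n
        sigma = np.sqrt((r * (pixels[:, None] - mu) ** 2).sum(axis=0) / n) + 1e-6
        pi = n / len(pixels)
    return mu.mean()

def erode(mask):
    """One pass of binary erosion: a pixel survives only if it and its
    four direct neighbours are all foreground."""
    m = np.pad(mask, 1)
    return m[1:-1, 1:-1] & m[:-2, 1:-1] & m[2:, 1:-1] & m[1:-1, :-2] & m[1:-1, 2:]

def label(mask):
    """4-connected component labeling by iterative flood fill."""
    labels = np.zeros(mask.shape, dtype=int)
    count = 0
    for i, j in zip(*np.nonzero(mask)):
        if labels[i, j]:
            continue
        count += 1
        stack = [(i, j)]
        while stack:
            y, x = stack.pop()
            if labels[y, x]:
                continue
            labels[y, x] = count
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                        and mask[ny, nx] and not labels[ny, nx]):
                    stack.append((ny, nx))
    return labels, count
```

Running these three steps in sequence on a grayscale image yields a binary mask, a noise-cleaned mask, and per-carcass regions whose geometry can then be measured.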

Actual Cases of Internet of Thing on Smart City Industry (스마트시티 산업에서의 사물인터넷 적용 사례 연구)

  • Lee, Seong-Hoon;Shim, Dong-Hee;Lee, Dong-Woo
    • Journal of Convergence Society for SMB
    • /
    • v.6 no.4
    • /
    • pp.65-70
    • /
    • 2016
  • A smart city is an urban development vision that integrates information and communication technology (ICT) with the Internet of Things (IoT). The goal of building a smart city is to improve the quality of life by using urban informatics and technology to make services more efficient and to meet residents' needs. Many of today's devices are used across various industrial domains and transfer their information over the Internet; this situation is what we call the IoT. We studied various application examples of the IoT in the smart city industry. In this paper, we describe two actual cases: a smart park system and a smart bin.

A Study on the Implementation of RFID-based Autonomous Navigation System for Robotic Cellular Phone(RCP)

  • Choe, Jae-Il;Choi, Jung-Wook;Oh, Dong-Ik;Kim, Seung-Woo
    • Institute of Control, Robotics and Systems (ICROS): Conference Proceedings
    • /
    • 2005.06a
    • /
    • pp.457-462
    • /
    • 2005
  • The industrial and economic importance of the cellular phone (CP) is growing rapidly. Combined with IT technology, the CP is currently one of the most attractive technologies of all. However, unless a breakthrough is found, its growth may soon slow down. Robot technology (RT) is considered one of the most promising next-generation technologies. Unlike the industrial robots of the past, today's robots require advanced technologies such as soft computing, human-friendly interfaces, interaction techniques, speech recognition, and object recognition, among many others. In this study, we present a new technological concept named the Robotic Cellular Phone (RCP), which combines RT and CP, with the vision of opening a new direction for the advancement of CP, IT, and RT together. RCP consists of three sub-modules, including RCP^Mobility and RCP^Interaction. RCP^Mobility, the main focus of this paper, is an autonomous navigation system that combines RT mobility with the CP. Through RCP^Mobility, we can provide the CP with robotic functionalities such as auto-charging and real-world robotic entertainment; eventually, the CP may become a robotic pet for its owner. RCP^Mobility consists of various controllers, the two main ones being the trajectory controller and the self-localization controller. While the trajectory controller is responsible for the wheel-based navigation of the RCP, the self-localization controller provides localization information for the moving RCP. Using the coordinate information acquired from the RFID-based self-localization controller, the trajectory controller refines the RCP's movement to achieve better navigation. In this paper, a prototype system developed for RCP^Mobility is presented. We describe the overall structure of the system and provide experimental results of RCP navigation.
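The abstract describes a trajectory controller that refines movement using coordinates from an RFID-based self-localization controller, but gives no formulas. A common, simple scheme for this kind of setup is a weighted centroid over the coordinates of detected floor tags, blended with the odometry estimate; the sketch below is purely illustrative and not the paper's actual controller:

```python
def weighted_centroid(tags):
    """Estimate the robot position as the centroid of the detected RFID
    tags, weighted by reading strength (a stronger reading suggests the
    tag is closer). Each tag is an (x, y, weight) triple."""
    total = sum(w for _, _, w in tags)
    x = sum(tx * w for tx, _, w in tags) / total
    y = sum(ty * w for _, ty, w in tags) / total
    return x, y

def fuse(odom, rfid, alpha=0.7):
    """Correct the dead-reckoned odometry estimate toward the RFID fix;
    alpha weights the odometry term."""
    return tuple(alpha * o + (1 - alpha) * r for o, r in zip(odom, rfid))
```

The fused position would then feed back into wheel-level trajectory control.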


A Study on Sensor-Based Upper Full-Body Motion Tracking on HoloLens

  • Park, Sung-Jun
    • Journal of the Korea Society of Computer and Information
    • /
    • v.26 no.4
    • /
    • pp.39-46
    • /
    • 2021
  • In this paper, we propose a motion recognition method for mixed reality as required in industrial settings. Industrial tasks involve movements (grasping, lifting, and carrying) across the whole upper body, from trunk motion to arm motion. Rather than heavy motion-capture equipment or vision-based devices such as Kinect, we use a setup composed of sensors and wearable devices: two IMU sensors capture trunk and shoulder movement, and a Myo armband captures arm movement. Real-time data from a total of four sensor streams are fused to enable motion recognition over the entire upper body. In the experiment, the sensors were attached to actual clothing, and objects were manipulated through synchronization. As a result, the synchronization-based method showed no errors in either large or small movements. Finally, in the performance evaluation, the HoloLens averaged 50 frames for one-handed operation and 60 frames for two-handed operation.
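The abstract says the IMU and armband streams are fused but does not state how. A textbook approach for this kind of wearable orientation tracking is a complementary filter per joint angle; the sketch below is a generic illustration under that assumption, not the paper's actual fusion method:

```python
def complementary_filter(gyro_rates, accel_angles, dt=0.01, alpha=0.98):
    """Fuse a gyro rate stream (deg/s) with accelerometer tilt readings
    (deg): integrate the gyro for smooth short-term motion and pull the
    estimate toward the accelerometer to cancel long-term drift."""
    angle = accel_angles[0]
    estimates = []
    for rate, acc in zip(gyro_rates, accel_angles):
        angle = alpha * (angle + rate * dt) + (1 - alpha) * acc
        estimates.append(angle)
    return estimates
```

One such filter per tracked joint gives drift-bounded angles that can drive an avatar's trunk and arms in real time.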

Implementation of A Thin Film Hydroponic Cultivation System Using HMI

  • Gyu-Seok Lee;Tae-Sung Kim;Myeong-Chul Park
    • Journal of the Korea Society of Computer and Information
    • /
    • v.29 no.4
    • /
    • pp.55-62
    • /
    • 2024
  • In this paper, we propose a thin-film hydroponic plant cultivator using an HMI display and IoT technology. Existing plant cultivators are difficult to manage because of soil-based cultivation, and their open growing environments make it hard to optimize environmental conditions; in addition, because immediate control is difficult, plant growth is delayed. To solve these problems, a cultivation environment was built by connecting an MCU with sensors and linking it to an HMI display so that environmental information can be checked and quickly controlled. A case was also applied to minimize fluctuations in environmental conditions. The implemented thin-film hydroponic cultivation system made soil management easier, improved functionality through operation and control, and made environmental information easy to read on the display. Crop cultivation experiments comparing an existing cultivator with the hydroponic cultivator confirmed faster growth. Future research will optimize growth information by transmitting and storing cultivation-environment data and by linking and comparing growth information using vision cameras, which is expected to enable efficient and stable plant cultivation.
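The abstract describes checking sensor readings and "quickly controlling" the environment via the MCU, without giving the control logic. A typical minimal scheme for such cultivators is hysteresis (bang-bang) control per actuator; the function and thresholds below are illustrative assumptions, not taken from the paper:

```python
def control(reading, setpoint, band, actuator_on):
    """Hysteresis (bang-bang) control: switch the actuator on below
    setpoint - band, off above setpoint + band, and otherwise keep the
    previous state to avoid rapid toggling."""
    if reading < setpoint - band:
        return True
    if reading > setpoint + band:
        return False
    return actuator_on
```

The same function can serve a heater (temperature), a pump (water level), or a grow light (lux), each with its own setpoint and dead band shown on the HMI.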

Sparse Class Processing Strategy in Image-based Livestock Defect Detection (이미지 기반 축산물 불량 탐지에서의 희소 클래스 처리 전략)

  • Lee, Bumho;Cho, Yesung;Yi, Mun Yong
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.26 no.11
    • /
    • pp.1720-1728
    • /
    • 2022
  • The Industry 4.0 era has opened with the development of artificial intelligence technology, and the realization of smart farms incorporating ICT is receiving great attention in the livestock industry. Among the enabling technologies, quality management of livestock products and operations based on computer vision and artificial intelligence is key. However, the insufficient number of livestock images available for model training, and the severely imbalanced ratio of labels for recognizing specific defective states, are major obstacles to related research and development. To overcome these problems, this study proposes combining oversampling and adversarial case generation techniques to effectively utilize the scarce labels for successful defect detection. In addition, experiments comparing the performance and time cost of the applicable techniques were conducted. The experiments confirm the validity of the proposed methods, and utilization strategies are drawn from the results.
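The abstract pairs oversampling with adversarial case generation for the sparse defect class. A minimal sketch of the oversampling half, with small Gaussian jitter standing in for the adversarial perturbations (which would require a trained model's gradients), is shown below; the function and its parameters are illustrative, not the paper's implementation:

```python
import numpy as np

def oversample_with_jitter(X, y, minority, target, noise=0.05, seed=0):
    """Randomly oversample the minority class up to `target` examples;
    each duplicate gets small Gaussian noise, a crude stand-in for the
    adversarial-style augmentation the paper pairs with oversampling."""
    rng = np.random.default_rng(seed)
    idx = np.nonzero(y == minority)[0]
    need = target - len(idx)
    picks = rng.choice(idx, size=need, replace=True)
    X_new = X[picks] + rng.normal(0.0, noise, X[picks].shape)
    return np.vstack([X, X_new]), np.concatenate([y, np.full(need, minority)])
```

A gradient-based variant (e.g. FGSM) would replace the noise term with the sign of the loss gradient with respect to the input.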

A Comprehensive Survey of Lightweight Neural Networks for Face Recognition (얼굴 인식을 위한 경량 인공 신경망 연구 조사)

  • Yongli Zhang;Jaekyung Yang
    • Journal of Korean Society of Industrial and Systems Engineering
    • /
    • v.46 no.1
    • /
    • pp.55-67
    • /
    • 2023
  • Lightweight face recognition, one of the most popular and long-standing topics in computer vision, has developed vigorously and is widely used in real-world applications thanks to smaller parameter counts, fewer floating-point operations, and smaller model sizes. However, few surveys have reviewed lightweight models and reimplemented them using the same computing resources and training dataset. In this survey article, we present a comprehensive review of recent research advances in end-to-end, efficient, lightweight face recognition models and reimplement several of the most popular ones. We first give an overview of face recognition with lightweight models. Then, based on how the models are constructed, we categorize them into: (1) manually designed lightweight FR models, (2) pruned models for face recognition, (3) efficient automatic architecture design based on neural architecture search, (4) knowledge distillation, and (5) low-rank decomposition. As examples, we also introduce SqueezeFaceNet and EfficientFaceNet, obtained by pruning SqueezeNet and EfficientNet. Additionally, we reimplement different lightweight models and present a detailed performance comparison on nine test benchmarks. Finally, challenges and future work are discussed. Our survey makes three main contributions: first, the categorization makes lightweight models easy to identify, so new lightweight models for face recognition can be explored; second, the comprehensive performance comparison helps practitioners choose a model when deploying a state-of-the-art end-to-end face recognition system on mobile devices; third, the stated challenges and future trends can inspire further work.

FRS-OCC: Face Recognition System for Surveillance Based on Occlusion Invariant Technique

  • Abbas, Qaisar
    • International Journal of Computer Science & Network Security
    • /
    • v.21 no.8
    • /
    • pp.288-296
    • /
    • 2021
  • Automated face recognition at runtime is becoming increasingly important in surveillance and urban security. It is a difficult task given a constantly changing image landscape with varying features and attributes. For a system to be useful in industrial settings, its efficiency must not be compromised when running on roads, at intersections, and in busy streets, yet recognition under such uncontrolled circumstances is a major problem in real-life applications. This paper addresses the main problem of face recognition when the full face is not visible (occlusion). Occlusion is a common occurrence, as any person can change their appearance by wearing a scarf or sunglasses, or merely by growing a mustache or beard. Such discrepancies in facial appearance are frequently encountered in uncontrolled settings and can defeat security systems based on face recognition. Although these variations are very common in real environments, they have received comparatively little study, and only recently have researchers focused on them. Existing state-of-the-art techniques suffer from several limitations, most significantly low usability and poor response time in case of any calamity. In this paper, an improved face recognition system called FRS-OCC is developed to solve the occlusion problem. To build FRS-OCC, color and texture features are extracted, and an incremental learning algorithm (Learn++) selects the more informative ones; a trained stacked autoencoder (SAE) deep learning model then recognizes the face. Overall, FRS-OCC introduces algorithms that improve response time to guarantee a benchmark quality of service in any situation. The AR face dataset is used to test and evaluate the proposed system. On average, FRS-OCC outperformed other state-of-the-art methods, achieving an SE of 98.82%, an SP of 98.49%, an AC of 98.76%, and an AUC of 0.9995. The results indicate that FRS-OCC can be used in surveillance applications.
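The reported SE, SP, and AC figures are standard confusion-matrix metrics; for readers unfamiliar with the abbreviations, they compute as below (the counts in the usage note are made-up illustrations, not the paper's data):

```python
def metrics(tp, fn, tn, fp):
    """Sensitivity (SE), specificity (SP) and accuracy (AC) from raw
    confusion-matrix counts: true/false positives and negatives."""
    se = tp / (tp + fn)
    sp = tn / (tn + fp)
    ac = (tp + tn) / (tp + fn + tn + fp)
    return se, sp, ac
```

For example, 90 true positives, 10 false negatives, 80 true negatives and 20 false positives give SE 0.90, SP 0.80, AC 0.85; AUC additionally requires the classifier's ranked scores, not just counts.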

Position Improvement of a Mobile Robot by Real Time Tracking of Multiple Moving Objects (실시간 다중이동물체 추적에 의한 이동로봇의 위치개선)

  • Jin, Tae-Seok;Lee, Min-Jung;Tack, Han-Ho;Lee, In-Yong;Lee, Joon-Tark
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.18 no.2
    • /
    • pp.187-192
    • /
    • 2008
  • The Intelligent Space (ISpace) offers challenging research fields for surveillance, human-computer interfacing, networked camera conferencing, industrial monitoring, and service and training applications. ISpace is a space in which many intelligent devices, such as computers and sensors, are distributed. For these devices to cooperate, it is very important that the system know location information in order to offer useful services. To achieve these goals, we present a method for representing, tracking, and following humans by fusing distributed multiple vision systems in ISpace, with application to pedestrian tracking in a crowd. This paper describes appearance-based tracking of unknown objects with the distributed vision system in intelligent space. First, we discuss how object color information is obtained and how the color-appearance-based model is constructed from these data. Then, we discuss the global color model built from local color information. The learning process within the global model and the experimental results are also presented.
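The color-appearance model the abstract mentions is commonly realized as a normalized color histogram, with tracked objects re-identified across cameras by a histogram similarity measure. The sketch below follows that common pattern; the histogram layout and the intersection measure are illustrative assumptions, not the paper's exact model:

```python
import numpy as np

def color_histogram(pixels, bins=8):
    """Normalized per-channel color histogram over an (N, 3) pixel array,
    used as a simple appearance model for a tracked object."""
    h = np.concatenate([
        np.histogram(pixels[:, c], bins=bins, range=(0, 256))[0]
        for c in range(3)
    ]).astype(float)
    return h / h.sum()

def intersection(h1, h2):
    """Histogram-intersection similarity in [0, 1]; 1 means identical."""
    return np.minimum(h1, h2).sum()
```

A global model can then be formed by averaging the local histograms each camera computes for the same object, making the match robust to any single camera's viewpoint.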

A Study on Multi-Object Tracking Method using Color Clustering in ISpace (컬러 클러스터링 기법을 이용한 공간지능화의 다중이동물체 추적 기법)

  • Jin, Tae-Seok;Kim, Hyun-Deok
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.11 no.11
    • /
    • pp.2179-2184
    • /
    • 2007
  • The Intelligent Space (ISpace) offers challenging research fields for surveillance, human-computer interfacing, networked camera conferencing, industrial monitoring, and service and training applications. ISpace is a space in which many intelligent devices, such as computers and sensors, are distributed. For these devices to cooperate, it is very important that the system know location information in order to offer useful services. To achieve these goals, we present a method for representing, tracking, and following humans by fusing distributed multiple vision systems in ISpace, with application to pedestrian tracking in a crowd. This paper describes appearance-based tracking of unknown objects with the distributed vision system in intelligent space. First, we discuss how object color information is obtained and how the color-appearance-based model is constructed from these data. Then, we discuss the global color model built from local color information. The learning process within the global model and the experimental results are also presented.