• Title/Summary/Keyword: sRGB

Development of a Program for Calculating Typhoon Wind Speed and Data Visualization Based on Satellite RGB Images for Secondary-School Textbooks (인공위성 RGB 영상 기반 중등학교 교과서 태풍 풍속 산출 및 데이터 시각화 프로그램 개발)

  • Chae-Young Lim;Kyung-Ae Park
    • Journal of the Korean earth science society
    • /
    • v.45 no.3
    • /
    • pp.173-191
    • /
    • 2024
  • Typhoons are significant meteorological phenomena that cause interactions among the ocean, atmosphere, and land within Earth's system. In particular, wind speed, a key characteristic of typhoons, is influenced by various factors such as central pressure, trajectory, and sea surface temperature. Therefore, a comprehensive understanding based on actual observational data is essential. In the 2015 revised secondary school textbooks, typhoon wind speed is presented through text and illustrations; hence, exploratory activities that promote a deeper understanding of wind speed are necessary. In this study, we developed a data visualization program with a graphical user interface (GUI) to facilitate the understanding of typhoon wind speeds with simple operations during the teaching-learning process. The program uses as input red-green-blue (RGB) image data of Typhoons Mawar, Guchol, and Bolaven, which occurred in 2023, from the Korean geostationary satellite GEO-KOMPSAT-2A (GK-2A). The program calculates typhoon wind speeds from user-input coordinates of cloud movement around the typhoon and visualizes the wind speed distribution from input parameters such as central pressure, storm radius, and maximum wind speed. The GUI-based program developed in this study can be applied without error to typhoons observed by GK-2A and enables scientific exploration based on actual observations beyond the limitations of textbooks. This allows students and teachers to collect, process, analyze, and visualize real observational data without needing a paid program or professional coding knowledge. This approach is expected to foster digital literacy, an essential competency for the future.
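As a rough illustration of the wind-speed calculation described above (great-circle distance a tracked cloud feature travels between two consecutive satellite images, divided by the imaging interval), the following sketch uses hypothetical coordinates and a hypothetical 10-minute interval, not values from the paper:

```python
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_KM = 6371.0

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two lat/lon points (degrees)."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * EARTH_RADIUS_KM * asin(sqrt(a))

def cloud_wind_speed_ms(p0, p1, dt_minutes):
    """Wind speed (m/s) from a cloud feature tracked between two images."""
    dist_km = haversine_km(p0[0], p0[1], p1[0], p1[1])
    return dist_km * 1000.0 / (dt_minutes * 60.0)

# Hypothetical (lat, lon) positions of one cloud feature, 10 minutes apart.
speed = cloud_wind_speed_ms((20.0, 130.0), (20.05, 130.05), 10.0)
```

Tracking several such features around the typhoon center would give the kind of wind-speed distribution the program visualizes.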

Development of surface detection model for dried semi-finished product of Kimbukak using deep learning (딥러닝 기반 김부각 건조 반제품 표면 검출 모델 개발)

  • Tae Hyong Kim;Ki Hyun Kwon;Ah-Na Kim
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.17 no.4
    • /
    • pp.205-212
    • /
    • 2024
  • This study developed a deep learning model that distinguishes the front (with garnish) and back (without garnish) surfaces of the dried semi-finished product (dried bukak) for the screening operation performed before a robot's vacuum gripper transfers the dried bukak to the oil heater. For model training and verification, RGB images of the front and back surfaces of 400 dried bukak, treated by data preprocessing, were obtained. YOLO-v5 was used as the base structure of the deep learning model. Region and surface-information labeling and data augmentation techniques were applied to the acquired images. Parameters including mAP, mIoU, accuracy, recall, precision, and F1-score were selected to evaluate the performance of the developed YOLO-v5-based surface detection model. The mAP and mIoU for the front surface were 0.98 and 0.96, respectively, and for the back surface, 1.00 and 0.95, respectively. The binary classification results for the front and back classes were accuracy 98.5%, recall 98.3%, precision 98.6%, and F1-score 98.4%. As a result, the developed model can classify the surface information of the dried bukak using RGB images, and it can be used to develop a robot-automated system for the surface detection process before deep frying.
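The binary front/back figures quoted above are standard confusion-matrix metrics; a minimal sketch, with hypothetical counts rather than the paper's data:

```python
def binary_metrics(tp, fp, fn, tn):
    """Accuracy, precision, recall, F1 for a two-class (front/back) classifier."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

# Hypothetical counts for 400 test images (197 of 200 correct per class).
acc, prec, rec, f1 = binary_metrics(tp=197, fp=3, fn=3, tn=197)
```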

Automatic Detecting of Joint of Human Body and Mapping of Human Body using Humanoid Modeling (인체 모델링을 이용한 인체의 조인트 자동 검출 및 인체 매핑)

  • Kwak, Nae-Joung;Song, Teuk-Seob
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.15 no.4
    • /
    • pp.851-859
    • /
    • 2011
  • In this paper, we propose a method that automatically extracts the silhouette and joints from consecutive input images and tracks the joints to trace the object, for interaction between human and computer. The proposed method also represents human actions by mapping the human body using the joints. To implement the algorithm, we model the human body with 14 joints, with reference to body size. The proposed method converts the RGB color image acquired through a single camera to hue, saturation, and value images and extracts the body's silhouette using the difference between the background and the input. Then we automatically extract the joints using the corner points of the extracted silhouette and the data of the body model. The motion of the object is tracked by applying a block-matching method to the areas around the joints across consecutive images, and the human motion is mapped using the positions of the joints. The proposed method is applied to test videos, and the results show that it automatically extracts the joints and effectively maps the human body using the detected joints. The human action is also aptly expressed, reflecting the locations of the joints.
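The background-difference silhouette step can be sketched roughly as below; the threshold and the tiny synthetic frames are illustrative assumptions, and the paper's full HSV-based procedure is not reproduced:

```python
import numpy as np

def extract_silhouette(background, frame, threshold=30):
    """Binary silhouette: pixels whose color differs enough from the
    background frame (max absolute difference over the RGB channels)."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16)).max(axis=-1)
    return diff > threshold

# Hypothetical 4x4 frames: a bright "body" region against a dark background.
bg = np.zeros((4, 4, 3), dtype=np.uint8)
fr = bg.copy()
fr[1:3, 1:3] = 200
mask = extract_silhouette(bg, fr)
```

Corner detection on the resulting mask boundary, constrained by the 14-joint body model, would then locate the joints.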

Real-time Multiple Stereo Image Synthesis using Depth Information (깊이 정보를 이용한 실시간 다시점 스테레오 영상 합성)

  • Jang Se hoon;Han Chung shin;Bae Jin woo;Yoo Ji sang
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.30 no.4C
    • /
    • pp.239-246
    • /
    • 2005
  • In this paper, we generate a virtual right image corresponding to the input left image by using given RGB texture data and 8-bit gray-scale depth data. We first transform the depth data to disparity data and then produce the virtual right image with this disparity. We also propose a stereo image synthesis algorithm that adapts to the viewer's position, and a real-time processing algorithm using a fast LUT (look-up table) method. Finally, we could synthesize a total of eleven stereo images with different viewpoints in real time for an SD-quality texture image with 8-bit depth information.
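The depth-to-disparity mapping and the pixel-shift synthesis can be sketched as follows; the linear LUT scaling and the `max_disparity` value are illustrative assumptions, not the paper's calibration:

```python
import numpy as np

def depth_to_disparity(depth, max_disparity=32):
    """Map 8-bit depth (0 = far, 255 = near) to pixel disparity via a LUT."""
    lut = np.round(np.arange(256) / 255.0 * max_disparity).astype(np.int32)
    return lut[depth]

def synthesize_right_view(left, depth, max_disparity=32):
    """Shift each left-image pixel left by its disparity to form the right view
    (holes are left as zeros; a real renderer would fill them)."""
    h, w = depth.shape
    right = np.zeros_like(left)
    disp = depth_to_disparity(depth, max_disparity)
    for y in range(h):
        for x in range(w):
            xr = x - disp[y, x]
            if 0 <= xr < w:
                right[y, xr] = left[y, x]
    return right

# Hypothetical 1x4 row with uniform "near" depth -> disparity of 1 pixel.
left = np.array([[10, 20, 30, 40]], dtype=np.uint8)
depth = np.full((1, 4), 255, dtype=np.uint8)
right = synthesize_right_view(left, depth, max_disparity=1)
```

Precomputing the LUT once per frame set is what makes the per-pixel work cheap enough for real time.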

Detecting Boundaries between Different Color Regions in Color Codes

  • Kwon B. H.;Yoo H. J.;Kim T. W.
    • Proceedings of the IEEK Conference
    • /
    • 2004.08c
    • /
    • pp.846-849
    • /
    • 2004
  • Compared to the bar code, which is widely used for commercial product management, the color code is advantageous in both appearance and the number of combinations, and it has application areas complementary to RFID's. However, due to severe distortion of the color component values, which easily exceeds 50% of the scale, color codes have had difficulty finding industrial applications. To improve the accuracy of color-code recognition, it is better to statistically process an entire color region and then determine its color than to process a few samples selected from the region. For this purpose, we suggest a technique to detect edges between color regions, which is indispensable for accurate segmentation of the color regions. We first transformed the RGB color image to the HSI and YIQ color models, and extracted the I- and Y-components from them, respectively. We then performed Canny edge detection on each component image. Each edge image usually had some edges missing; however, since the resulting edge images were complementary, we could obtain an optimal edge image by combining them.
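The two component images named above (HSI intensity and YIQ luma) are standard transforms; a minimal sketch, with the Canny step itself omitted:

```python
import numpy as np

def hsi_intensity(rgb):
    """I component of HSI: the mean of R, G, B."""
    return rgb.mean(axis=-1)

def yiq_luma(rgb):
    """Y component of YIQ (NTSC luma weights)."""
    return rgb @ np.array([0.299, 0.587, 0.114])

def combine_edges(edges_i, edges_y):
    """Union of the two binary edge maps: edges missing in one component
    image are often present in the other."""
    return edges_i | edges_y

# Hypothetical pixel with channels (R, G, B) in [0, 1].
pixel = np.array([0.3, 0.6, 0.9])
i_val = hsi_intensity(pixel)
y_val = yiq_luma(pixel)
merged = combine_edges(np.array([[True, False]]), np.array([[False, False]]))
```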

A Fast Implementation of JPEG and Its Application to Multimedia Service in Mobile Handset

  • Jeong Gu-Min;Jung Doo-Hee;Na Seung-Won;Lee Yang-Sun
    • Journal of Korea Multimedia Society
    • /
    • v.8 no.12
    • /
    • pp.1649-1657
    • /
    • 2005
  • In this paper, a fast implementation of JPEG is discussed and its application to multimedia service for the mobile wireless internet is presented. A fast JPEG player is developed for mobile handsets based on several fast algorithms. In the color transformation, RCT is adopted instead of ICT for the JPEG source. For the most time-consuming part, the DCT, the binDCT reduces the decoding time. In upsampling and RGB conversion, the transformation from YCbCr to 16-bit RGB is performed in a single step. In some parts, assembly language is used for speed. An implementation of multimedia service in a mobile handset is also described using MJPEG (Motion JPEG) and QCELP (Qualcomm Code Excited Linear Prediction Coding). MJPEG and QCELP are used for video and sound, which are synchronized in the handset. For MJPEG playback, the decoder is implemented in software on the MSM 5500 baseband chip using the fast JPEG decoder; for QCELP playback, the handset's embedded QCELP player is used. The implemented multimedia player achieves fast playback while preserving image quality.
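The one-step YCbCr-to-16-bit-RGB idea can be illustrated with a direct conversion to RGB565; BT.601 full-range coefficients are assumed here, and the paper's actual LUT-based routine is not reproduced:

```python
def ycbcr_to_rgb565(y, cb, cr):
    """Convert one YCbCr pixel (BT.601, full range) directly to 16-bit RGB565,
    skipping the intermediate 24-bit RGB buffer."""
    r = y + 1.402 * (cr - 128)
    g = y - 0.344136 * (cb - 128) - 0.714136 * (cr - 128)
    b = y + 1.772 * (cb - 128)
    clamp = lambda v: max(0, min(255, int(v)))
    r, g, b = clamp(r), clamp(g), clamp(b)
    # Pack: 5 bits red, 6 bits green, 5 bits blue.
    return ((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3)

mid_gray = ycbcr_to_rgb565(128, 128, 128)
white = ycbcr_to_rgb565(255, 128, 128)
```

On a fixed-point baseband chip, the multiplications above would typically be replaced by per-component lookup tables, which is the kind of speed-up the paper describes.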

Distance Measuring Method for Motion Capture Animation (모션캡쳐 애니메이션을 위한 거리 측정방법)

  • Lee, Heei-Man;Seo, Jeong-Man;Jung, Suun-Key
    • The KIPS Transactions:PartB
    • /
    • v.9B no.1
    • /
    • pp.129-138
    • /
    • 2002
  • In this paper, a distance measuring algorithm for motion capture using a color stereo camera is proposed. Color markers attached to the articulations of an actor are captured by stereo color video cameras, and the color region in the captured images matching each marker's color is separated from the other colors by finding the dominant wavelength of the colors. Color data in the RGB (red, green, blue) color space are converted to the CIE (Commission Internationale de l'Eclairage) color space for the purpose of calculating wavelength. The dominant wavelength is selected from a histogram of neighboring wavelengths. The motion of a character in cyberspace is controlled by a program using the distance information of the moving markers.
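The RGB-to-CIE conversion step can be sketched as follows, assuming linear sRGB primaries with a D65 white point; the spectral-locus intersection that yields the actual dominant wavelength is omitted:

```python
import numpy as np

# Linear RGB -> CIE XYZ (sRGB primaries, D65 white point).
RGB2XYZ = np.array([[0.4124, 0.3576, 0.1805],
                    [0.2126, 0.7152, 0.0722],
                    [0.0193, 0.1192, 0.9505]])

def rgb_to_xy(rgb):
    """CIE xy chromaticity of a linear-RGB color. The dominant wavelength is
    then found by intersecting the line from the white point through (x, y)
    with the spectral locus (locus lookup not shown)."""
    X, Y, Z = RGB2XYZ @ np.asarray(rgb, dtype=float)
    s = X + Y + Z
    return X / s, Y / s

# White should land on the D65 white point, roughly (0.3127, 0.3290).
wx, wy = rgb_to_xy((1.0, 1.0, 1.0))
```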

An Object Recognition Method Based on Depth Information for an Indoor Mobile Robot (실내 이동로봇을 위한 거리 정보 기반 물체 인식 방법)

  • Park, Jungkil;Park, Jaebyung
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.21 no.10
    • /
    • pp.958-964
    • /
    • 2015
  • In this paper, an object recognition method based on depth information from the RGB-D camera Xtion is proposed for an indoor mobile robot. First, the RANdom SAmple Consensus (RANSAC) algorithm is applied to the point cloud obtained from the RGB-D camera to detect and remove the floor points. Next, the remaining point cloud is segmented into each object's point cloud by k-means clustering, and the normal vector of each point is obtained using a k-d tree search. The obtained normal vectors are classified into 18 classes by a trained multi-layer perceptron and used as features for object recognition. To distinguish one object from another, the similarity between them is measured using the Levenshtein distance. To verify the effectiveness and feasibility of the proposed object recognition method, experiments are carried out with several similar boxes.
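The Levenshtein distance used for the similarity measurement is a standard dynamic program; a minimal sketch (the class-label strings below are hypothetical, standing in for sequences of the 18 normal-vector classes):

```python
def levenshtein(a, b):
    """Edit distance between two sequences, computed row by row."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

# Two objects with similar class-label sequences score a small distance.
d = levenshtein("kitten", "sitting")
```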

A Local Feature-Based Robust Approach for Facial Expression Recognition from Depth Video

  • Uddin, Md. Zia;Kim, Jaehyoun
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.10 no.3
    • /
    • pp.1390-1403
    • /
    • 2016
  • Facial expression recognition (FER) plays a very significant role in computer vision, pattern recognition, and image processing applications such as human-computer interaction, as it provides sufficient information about people's emotions. For video-based FER, depth cameras can be better candidates than RGB cameras: a person's face cannot be easily recognized from distance-based depth videos, so depth cameras also resolve some privacy issues that can arise with RGB faces. A good FER system relies heavily on the extraction of robust features as well as on the recognition engine. In this work, an efficient novel approach is proposed to recognize facial expressions from time-sequential depth videos. First, efficient Local Binary Pattern (LBP) features are obtained from the time-sequential depth faces and then transformed by Generalized Discriminant Analysis (GDA) to make them more robust; finally, the LBP-GDA features are fed into Hidden Markov Models (HMMs) to train on and recognize different facial expressions. The proposed depth-based facial expression recognition approach is compared to conventional approaches such as Principal Component Analysis (PCA), Independent Component Analysis (ICA), and Linear Discriminant Analysis (LDA), and it outperforms the others with better recognition rates.
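The basic 3x3 LBP operator mentioned above can be sketched as follows; this is the standard formulation, not necessarily the exact variant used in the paper:

```python
import numpy as np

def lbp_codes(img):
    """Basic 3x3 LBP: each interior pixel becomes an 8-bit code, one bit per
    neighbour, set when that neighbour is >= the center pixel."""
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = img.shape
    center = img[1:-1, 1:-1]
    codes = np.zeros((h - 2, w - 2), dtype=np.int32)
    for bit, (dy, dx) in enumerate(offsets):
        neigh = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (neigh >= center).astype(np.int32) << bit
    return codes.astype(np.uint8)

# A flat patch yields code 255; a local maximum at the center yields code 0.
flat_code = lbp_codes(np.full((3, 3), 5, dtype=np.uint8))
peak = np.zeros((3, 3), dtype=np.uint8)
peak[1, 1] = 10
peak_code = lbp_codes(peak)
```

A histogram of these codes over a depth face, further projected by GDA, forms the observation sequence fed to the HMMs.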

Variations in the functions of Pitta Dosha as per gender and Prakriti

  • Agrawal, Sonam;Gehlot, Sangeeta
    • CELLMED
    • /
    • v.7 no.4
    • /
    • pp.18.1-18.8
    • /
    • 2017
  • The Tridosha theory, the cornerstone of Ayurvedic physiology, governs all the functions of the human body and mind. The Tridosha are responsible for determining one's Prakriti, and their functional status may vary between genders and among different Prakriti. No research work is available that assesses the functions of the Dosha by objective parameters. Therefore, this study was planned to find the variation in functional status of different types of Pitta, using certain objective parameters, in 201 young healthy volunteers of both genders belonging to different Prakriti. Serum levels of triglycerides, cholesterol, total protein, and glucose were estimated for Pachaka Pitta; hemoglobin concentration for Ranjaka Pitta; visual acuity for Alochaka Pitta; memory and reaction time for Sadhaka Pitta; and RGB value for Bhrajaka Pitta. Except for the functioning of Bhrajaka Pitta, the variation in functional status of each type of Pitta was not the same across the different Prakriti of both genders. However, these findings were not significant, which may be due to the small sample size and homogeneous population. We therefore propose that sex differences be considered when planning and evaluating studies based on Prakriti.