• Title/Summary/Keyword: Processing Optimization


Deep Learning Braille Block Recognition Method for Embedded Devices (임베디드 기기를 위한 딥러닝 점자블록 인식 방법)

  • Hee-jin Kim;Jae-hyuk Yoon;Soon-kak Kwon
    • Journal of Korea Society of Industrial Information Systems
    • /
    • v.28 no.4
    • /
    • pp.1-9
    • /
    • 2023
  • In this paper, we propose a method for real-time braille block recognition on embedded devices through deep learning. First, a deep learning model for braille block recognition is trained on a high-performance computer, and a model lightweighting tool is then applied so that the model can run on an embedded device. To recognize the walking information of the braille blocks, an algorithm determines the path using the distance from the braille blocks in the image. After detecting braille blocks, bollards, and crosswalks with the YOLOv8 model in video captured by the embedded device, the walking information is recognized through the braille block path discrimination algorithm. We apply the model lightweighting tool to YOLOv8 to detect braille blocks in real time: the precision of the YOLOv8 model weights is lowered from the existing 32 bits to 8 bits, and the model is optimized with the TensorRT optimization engine. Comparing the lightweight model produced by the proposed method with the original model, the path recognition accuracy is 99.05%, almost the same as the original, while the recognition time is reduced by 59%, processing about 15 frames per second.
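The 32-bit to 8-bit precision reduction described above can be illustrated with a minimal sketch of symmetric per-tensor INT8 quantization in NumPy. This is a generic illustration of the idea only, not the authors' TensorRT-based pipeline; all names here are hypothetical.

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Symmetric per-tensor quantization of float32 weights to int8."""
    scale = np.max(np.abs(weights)) / 127.0  # map the largest magnitude to 127
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float32 weights from int8 values."""
    return q.astype(np.float32) * scale

# Toy weight tensor standing in for one layer of the detection model
rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.1, size=1000).astype(np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
max_err = float(np.max(np.abs(w - w_hat)))  # rounding error is at most scale / 2
```

Production toolchains like TensorRT additionally calibrate activation ranges on sample data, but the weight-side storage saving (4 bytes down to 1 per parameter) follows from exactly this kind of mapping.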

Machine Learning of a Model to Estimate the Nitrogen Ion State Using Training Data from a Plasma Sheath Monitoring Sensor (Plasma Sheath Monitoring Sensor 데이터를 활용한 질소이온 상태예측 모형의 기계학습)

  • Jung, Hee-jin;Ryu, Jinseung;Jeong, Minjoong
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2022.05a
    • /
    • pp.27-30
    • /
    • 2022
  • The plasma process, which has many advantages in efficiency and environmental impact compared to conventional process methods, is widely used in semiconductor manufacturing. The plasma sheath is a dark region observed between the plasma bulk and the surrounding chamber wall or the electrode. The Plasma Sheath Monitoring Sensor (PSMS) measures, in real time, the voltage difference between the plasma and the electrode and the RF power applied to the electrode. The PSMS data are therefore expected to correlate strongly with the state of the plasma in the chamber. In this study, a model for predicting the state of nitrogen ions in the plasma chamber is trained by deep learning techniques using PSMS data. PSMS data measured in experiments with different power and pressure settings were used as training data, and the ratio, flux, and density of nitrogen ions measured in the plasma bulk and on the Si substrate were used as labels. The results of this study are expected to form the basis of artificial intelligence technology for the optimization and real-time precise control of plasma processes in the future.
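As a minimal stand-in for the sensor-to-label mapping described above, the sketch below fits an ordinary least-squares model from two hypothetical PSMS features (sheath voltage difference and applied RF power) to a synthetic ion-density label. The feature names, value ranges, and data are illustrative assumptions, not the study's dataset or its deep learning model.

```python
import numpy as np

# Hypothetical PSMS features per sample: [voltage difference (V), RF power (W)]
rng = np.random.default_rng(1)
X = rng.uniform([50.0, 100.0], [300.0, 1000.0], size=(64, 2))

# Synthetic nitrogen ion density label (arbitrary units) with small noise
true_w = np.array([0.02, 0.005])
y = X @ true_w + 1.5 + rng.normal(0.0, 0.01, size=64)

# Fit density ~ X . w + b by ordinary least squares (a linear stand-in
# for the deep model trained in the study)
A = np.hstack([X, np.ones((64, 1))])
w_hat, *_ = np.linalg.lstsq(A, y, rcond=None)
pred = A @ w_hat
r2 = 1.0 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
```

The same feature/label framing carries over directly when the linear model is swapped for a neural network.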


Computer Vision-based Continuous Large-scale Site Monitoring System through Edge Computing and Small-Object Detection

  • Kim, Yeonjoo;Kim, Siyeon;Hwang, Sungjoo;Hong, Seok Hwan
    • International conference on construction engineering and project management
    • /
    • 2022.06a
    • /
    • pp.1243-1244
    • /
    • 2022
  • In recent years, the growing interest in off-site construction has led to factories scaling up their manufacturing and production processes in the construction sector. Consequently, continuous large-scale site monitoring in low-variability environments, such as prefabricated components production plants (precast concrete production), has gained increasing importance. Although many studies on computer vision-based site monitoring have been conducted, challenges for deploying this technology for large-scale field applications still remain. One of the issues is collecting and transmitting vast amounts of video data. Continuous site monitoring systems are based on real-time video data collection and analysis, which requires excessive computational resources and network traffic. In addition, it is difficult to integrate various object information with different sizes and scales into a single scene. Various sizes and types of objects (e.g., workers, heavy equipment, and materials) exist in a plant production environment, and these objects should be detected simultaneously for effective site monitoring. However, with the existing object detection algorithms, it is difficult to simultaneously detect objects with significant differences in size because collecting and training massive amounts of object image data with various scales is necessary. This study thus developed a large-scale site monitoring system using edge computing and a small-object detection system to solve these problems. Edge computing is a distributed information technology architecture wherein the image or video data is processed near the originating source, not on a centralized server or cloud. By inferring information from the AI computing module equipped with CCTVs and communicating only the processed information with the server, it is possible to reduce excessive network traffic. 
Small-object detection is an innovative method to detect objects of different sizes by cropping the raw image, setting the appropriate number of rows and columns for image splitting based on the target object size. This enables the detection of small objects from cropped and magnified images, and the detected small objects can then be expressed in the original image. In the inference process, this study used the YOLO-v5 algorithm, known for its fast processing speed and widely used for real-time object detection. This method could effectively detect large and even small objects that were difficult to detect with existing object detection algorithms. When the large-scale site monitoring system was tested, it performed well in detecting small objects, such as workers in a large-scale view of construction sites, which were detected inaccurately by the existing algorithms. Our next goal is to incorporate various safety monitoring and risk analysis algorithms into this system, such as collision risk estimation based on the time-to-collision concept, enabling the optimization of safety routes by accumulating workers' paths and inferring risky areas from workers' trajectory patterns. Through such developments, this continuous large-scale site monitoring system can guide a construction plant's safety management system more effectively.
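The splitting-and-remapping step described above can be sketched as follows. This is a hypothetical illustration of the cropping idea, not the authors' implementation; the detection step itself (YOLO-v5 on each tile) is omitted, and all names are assumptions.

```python
from typing import List, Tuple

Box = Tuple[float, float, float, float]  # (x1, y1, x2, y2) in tile coordinates

def tile_offsets(width: int, height: int, rows: int, cols: int) -> List[Tuple[int, int, int, int]]:
    """Split an image into rows x cols tiles; return (x, y, w, h) for each tile."""
    tw, th = width // cols, height // rows
    return [(c * tw, r * th, tw, th) for r in range(rows) for c in range(cols)]

def to_original(box: Box, tile: Tuple[int, int, int, int]) -> Box:
    """Map a detection from tile coordinates back into the original image."""
    ox, oy, _, _ = tile
    x1, y1, x2, y2 = box
    return (x1 + ox, y1 + oy, x2 + ox, y2 + oy)

tiles = tile_offsets(1920, 1080, 2, 3)   # 2 rows x 3 cols of 640x540 tiles
det = (100.0, 50.0, 140.0, 120.0)        # small object found inside one tile
mapped = to_original(det, tiles[4])      # tile at row 1, col 1 -> offset (640, 540)
```

Each tile is detected at full input resolution, so a worker occupying a few dozen pixels of a wide CCTV frame fills many more pixels of its tile, which is what makes small objects recoverable.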


A Deep Learning-based Real-time Deblurring Algorithm on HD Resolution (HD 해상도에서 실시간 구동이 가능한 딥러닝 기반 블러 제거 알고리즘)

  • Shim, Kyujin;Ko, Kangwook;Yoon, Sungjoon;Ha, Namkoo;Lee, Minseok;Jang, Hyunsung;Kwon, Kuyong;Kim, Eunjoon;Kim, Changick
    • Journal of Broadcast Engineering
    • /
    • v.27 no.1
    • /
    • pp.3-12
    • /
    • 2022
  • Image deblurring aims to remove blur that can be generated while shooting pictures by the movement of objects, camera shake, defocus, and so forth. With the rise in popularity of smartphones, it is common to carry a portable digital camera daily, so image deblurring techniques have recently become more significant. Image deblurring was originally studied with traditional optimization techniques; with the recent attention on deep learning, deblurring methods based on convolutional neural networks have been actively proposed. However, most of them have been developed with a focus on better performance, so their slow speed makes them difficult to use in real situations. To tackle this problem, we propose a novel deep learning-based deblurring algorithm that can operate in real time at HD resolution. In addition, we improved the training and inference process so that the performance of our model could be increased without any significant effect on the speed, and the speed without any significant effect on the performance. As a result, our algorithm achieves real-time performance by processing 33.74 frames per second at 1280×720 resolution. Furthermore, it shows excellent performance relative to its speed, with a PSNR of 29.78 and an SSIM of 0.9287 on the GoPro dataset.
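The PSNR figure quoted above follows the standard definition, shown here on a toy image rather than the GoPro dataset:

```python
import numpy as np

def psnr(reference: np.ndarray, restored: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB between two same-shaped images."""
    mse = np.mean((reference.astype(np.float64) - restored.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

# Toy 4x4 grayscale image with a single 10-level error on one of 16 pixels
ref = np.full((4, 4), 100, dtype=np.uint8)
noisy = ref.copy()
noisy[0, 0] = 110
value = psnr(ref, noisy)
```

Because PSNR is a log of the inverse mean squared error, a deblurred output at 29.78 dB has roughly the MSE of an image whose pixels deviate by about 8 gray levels on average from the sharp ground truth.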

Understanding of Generative Artificial Intelligence Based on Textual Data and Discussion for Its Application in Science Education (텍스트 기반 생성형 인공지능의 이해와 과학교육에서의 활용에 대한 논의)

  • Hunkoog Jho
    • Journal of The Korean Association For Science Education
    • /
    • v.43 no.3
    • /
    • pp.307-319
    • /
    • 2023
  • This study aims to explain the key concepts and principles of text-based generative artificial intelligence (AI), which has been receiving increasing interest and utilization, focusing on its application in science education. It also highlights the potential and limitations of utilizing generative AI in science education, providing insights for its implementation and research. Recent generative AI, predominantly based on transformer models consisting of encoders and decoders, has shown remarkable progress through context understanding and the optimization of reward models and reinforcement learning using human feedback. In particular, it can perform various functions such as writing, summarizing, keyword extraction, evaluation, and feedback, based on its ability to understand diverse user questions and intents. It also offers practical utility in diagnosing learners and structuring educational content from examples provided by educators. However, it is necessary to examine concerns regarding the limitations of generative AI, including the potential for conveying inaccurate facts or knowledge, bias resulting from overconfidence, and uncertainty about its impact on user attitudes or emotions. Moreover, the responses provided by generative AI are probabilistic, being based on response data from many individuals, which raises concerns about limiting the insightful and innovative thinking that may offer different perspectives or ideas. In light of these considerations, this study provides practical suggestions for the positive utilization of AI in science education.

Analysis of the Effectiveness of Big Data-Based Six Sigma Methodology: Focus on DX SS (빅데이터 기반 6시그마 방법론의 유효성 분석: DX SS를 중심으로)

  • Kim Jung Hyuk;Kim Yoon Ki
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.13 no.1
    • /
    • pp.1-16
    • /
    • 2024
  • Over recent years, 6 Sigma has become a key methodology in manufacturing for quality improvement and cost reduction. However, challenges have arisen due to the difficulty in analyzing large-scale data generated by smart factories and its traditional, formal application. To address these limitations, a big data-based 6 Sigma approach has been developed, integrating the strengths of 6 Sigma and big data analysis, including statistical verification, mathematical optimization, interpretability, and machine learning. Despite its potential, the practical impact of this big data-based 6 Sigma on manufacturing processes and management performance has not been adequately verified, leading to its limited reliability and underutilization in practice. This study investigates the efficiency impact of DX SS, a big data-based 6 Sigma, on manufacturing processes, and identifies key success policies for its effective introduction and implementation in enterprises. The study highlights the importance of involving all executives and employees and researching key success policies, as demonstrated by cases where methodology implementation failed due to incorrect policies. This research aims to assist manufacturing companies in achieving successful outcomes by actively adopting and utilizing the methodologies presented.

Cost Analysis of the Recent Projects for Overseas Vanadium Metallurgical Processing Plants (해외 바나듐 제련 플랜트 관련 사업 비용 분석)

  • Gyuri Kim;Sang-hun Lee
    • Resources Recycling
    • /
    • v.33 no.3
    • /
    • pp.3-11
    • /
    • 2024
  • This study addressed the cost structure of metallurgical plants for vanadium recovery or production that were previously planned or implemented. Vanadium metallurgy consists of several sub-processes, such as pretreatment, roasting, leaching, precipitation, and filtration, to finally produce vanadium pentoxide. Substantial costs are required for such plants, largely divided into CAPEX (capital expenditure) and OPEX (operational expenditure). The capacities (feed input rates) and vanadium contents vary across the target projects of this study, but the final production rates and grades of vanadium pentoxide showed relatively small differences. In addition, a noticeable correlation is found between capacity and specific operating cost: the cost follows a steadily decreasing non-linear curve with an exponent of about -0.3. Therefore, for plant capacities below 100,000 tons per year, the specific operating cost decreases rapidly as capacity increases, whereas the cost remains relatively stable in the capacity range of 0.6 to 1.2 million tons per year. From a technical perspective, effective optimization of the metallurgical plant can be achieved by improving the vanadium recovery rate in the pretreatment and/or roasting-leaching processes. Finally, the results of this study should be updated through future research with ongoing field verification and further detailed cost analysis.
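A quick check of what the reported exponent of about -0.3 implies, assuming cost-per-ton scales as capacity raised to that power (the capacities below are illustrative, not the study's project data):

```python
def specific_cost_ratio(cap_a: float, cap_b: float, exponent: float = -0.3) -> float:
    """Ratio of specific operating costs when capacity goes from cap_a to cap_b,
    assuming cost-per-ton ~ capacity ** exponent (exponent from the abstract)."""
    return (cap_b / cap_a) ** exponent

# Doubling capacity at any scale cuts cost per ton by about 19% in relative
# terms; at small scale the absolute saving is large, while in the
# 0.6-1.2 Mt/y range it applies to an already-low base cost.
small_plant = 1 - specific_cost_ratio(50_000, 100_000)
large_plant = 1 - specific_cost_ratio(600_000, 1_200_000)
```

A pure power law is scale-free in relative terms, so the flattening the study observes at high capacity is about absolute cost levels, consistent with the curve approaching a low plateau.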

Integrated Data Safe Zone Prototype for Efficient Processing and Utilization of Pseudonymous Information in the Transportation Sector (교통분야 가명정보의 효율적 처리 및 활용을 위한 통합데이터안심구역 프로토타입)

  • Hyoungkun Lee;Keedong Yoo
    • The Journal of The Korea Institute of Intelligent Transport Systems
    • /
    • v.23 no.3
    • /
    • pp.48-66
    • /
    • 2024
  • Under the three amended data laws and the Data Industry Act of Korea, systems for pseudonymous data integration and Data Safe Zones have been operated separately by designated agencies, placing a burden on SMEs, startups, and general users because of complicated and inefficient procedures. An over-stringent pseudonymization policy intended to prevent data breaches has also compromised data quality. Such systems should be improved to ensure convenience of use and data quality. This paper proposes a prototype of an Integrated Data Safe Zone based on redesigned and optimized pseudonymization workflows. Conventional pseudonymization workflows were redesigned by applying the amended guidelines and selectively revising existing guidelines for business process redesign. The proposed prototype quantitatively outperformed the conventional one: a 6-fold increase in time efficiency, a 1.28-fold cost reduction, and a 1.3-fold improvement in data quality.
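The abstract does not specify which pseudonymization techniques the workflows use; as one common primitive, a keyed hash can replace a direct identifier with a deterministic pseudonym that stays joinable across datasets. The sketch below uses hypothetical names and an illustrative key.

```python
import hashlib
import hmac

def pseudonymize(identifier: str, secret_key: bytes) -> str:
    """Replace a direct identifier with a keyed-hash pseudonym (HMAC-SHA256).
    Unlike a plain hash, a keyed hash resists dictionary attacks as long as
    the key is stored separately from the pseudonymized data."""
    return hmac.new(secret_key, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

key = b"demo-key-kept-apart-from-the-data"  # illustrative only
p1 = pseudonymize("driver-042", key)
p2 = pseudonymize("driver-042", key)  # same input, same key -> same pseudonym
```

Determinism is what allows pseudonymized transportation records from different sources to be integrated on the same subject without ever exposing the original identifier.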

Optimization of Processing Process for Functional Anchovy Fish Sauce in Addition with Raw Sea Tangle (다시마를 첨가한 기능성 멸치액젓 제조조건 확립)

  • Jeong, Min-Hong;Jeong, Woo-Young;Gyu, Hyeon-Jin;Jeong, Sang-Won;Park, Hun-Kyu;Cho, Young-Je;Shim, Kil-Bo
    • Journal of Fisheries and Marine Sciences Education
    • /
    • v.25 no.6
    • /
    • pp.1408-1418
    • /
    • 2013
  • To investigate the quality properties of functional anchovy fish sauce with added raw sea tangle, 2%, 5%, and 10% (w/w) sea tangle was added to 25% (w/w) salted anchovy and then fermented at 20°C. During the fermentation period, the amino nitrogen content increased in all groups, reaching its highest level at 450 days of fermentation with 11.99±0.08, 12.51±0.08, and 11.95±0.08 mg/mL at 2%, 5%, and 10% addition of raw sea tangle, respectively; thereafter, the content remained at a similar level. VBN content increased continuously until 270 days of fermentation, with 208.10±3.50, 210.00±4.10, and 215.15±1.50 mg/100 mL at 2%, 5%, and 10% addition, respectively. Alginic acid recovery increased gradually over the fermentation period, showing the highest values at 540 days with 67.00, 67.25, and 67.90% at 2%, 5%, and 10% addition, respectively. Dietary fiber recovery increased rapidly at the beginning of fermentation and then decreased slowly as fermentation progressed; the highest recovery was at 30 days with 18.7, 18.6, and 17.9%, and the lowest at 360 days with 8.7 and 11.1% at 2% and 10% addition, respectively, and at 450 days with 11.4% at 5% addition. The lowest fucoidan contents were observed at 30 days of fermentation with 0.07% at both 2% and 5% addition, and at 90 days with 0.10% at 10% addition. The groups with different sea tangle concentrations were not significantly different in any property; however, all newly developed products met the standard guideline of the Korea Food and Drug Administration. The highest fucoidan contents were at 270 days, showing 0.24, 0.25, and 0.23% at 2%, 5%, and 10% addition, respectively. The optimal process for functional anchovy fish sauce with raw sea tangle is 2% addition of raw sea tangle and fermentation for more than 450 days. The results obtained in this study indicate that fish sauce with added sea tangle is superior in taste and functionality to traditional fish sauce and could be a competitive fermented fishery product.

Quality Properties and Processing Optimization of Mackerel (Scomber japonicus) Sausage (수세 횟수 및 첨가제 비율에 따른 고등어(Scomber japonicus) 소시지의 품질 특성 및 제조조건 최적화)

  • Kim, Koth-Bong-Woo-Ri;Jeong, Da-Hyun;Bark, Si-Woo;Kang, Bo-Kyeong;Pak, Won-Min;Kang, Ja-Eun;Park, Hong-Min;Ahn, Dong-Hyun
    • Journal of the Korean Society of Food Science and Nutrition
    • /
    • v.42 no.10
    • /
    • pp.1656-1663
    • /
    • 2013
  • Processing conditions for mackerel sausage were optimized for the number of washes (0, 1, 2, and 3 times) and the percentages of various additives: salt (1, 1.5, 2, and 3%), phosphate complex (0.1, 0.3, and 0.5%), sugar (1, 2, and 3%), and corn starch (1, 3, and 5%). The whiteness of the sausage increased significantly with the number of washes, but the whiteness of sausages prepared with the additives did not differ greatly. Conditions of two washes with 2% salt, 2% sugar, and 5% corn starch gave the highest hardness and gel strength, whereas the groups supplemented with phosphate complex showed no considerable differences from the control. In the sensory evaluation, the sausage prepared with two washes scored higher than the control for color, aroma, and overall preference, and the sausage supplemented with 2% salt, 2% sugar, and 5% corn starch scored highest in overall preference; there was no significant difference for the phosphate complex. Therefore, these results suggest that the optimal conditions for improving the texture and sensory properties of mackerel sausage are two washes with 2% salt, 0.5% phosphate complex, 2% sugar, and 5% corn starch.