• Title/Summary/Keyword: computer algorithms

Search Results: 3,790

Automatic Walking Guide for Visually Impaired People Utilizing an Object Recognition Technology (객체 인식 기술을 활용한 시각장애인 자동 보행 안내)

  • Chang, Jae-Young;Lee, Gyu
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.22 no.2 / pp.115-121 / 2022
  • As city environments have become increasingly crowded, pedestrian roads contain many obstacles that interfere with the walking of visually impaired people. Typical examples include bollards, parking barriers, and standing signs, which rarely hinder sighted pedestrians but can injure blind people through collisions. Many solutions have been proposed to address this problem, but their use in practical environments is limited by restrictions such as outdoor-only operation, inaccurate obstacle sensing, and the need for special devices. In this paper, we propose a new method that automatically detects obstacles on pedestrian roads and warns of collision risk in advance using only the sensors embedded in a typical mobile phone. The proposed method supports the walking of the visually impaired by announcing both the type of obstacle appearing ahead and the remaining distance to it. To accomplish this, we apply an object recognition technology based on recent deep learning algorithms to identify obstacles in real-time video. In addition, we estimate the distance to an obstacle from the number of steps taken and the pedestrian's stride length. Compared with existing walking-support technologies for the visually impaired, the proposed method ensures efficient and safe walking with only simple devices, regardless of location.
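
The step-and-stride distance estimate described in the abstract can be illustrated with a short sketch. This is a minimal illustration rather than the authors' implementation; the function names, the stride value, and the warning threshold are assumptions made for the example.

```python
# Minimal sketch: estimate remaining distance to a detected obstacle
# from step count and stride length (illustrative only; not the paper's code).

def remaining_distance_m(steps_walked: int, stride_m: float,
                         initial_distance_m: float) -> float:
    """Distance left to the obstacle after walking `steps_walked` steps."""
    walked = steps_walked * stride_m
    return max(initial_distance_m - walked, 0.0)

def warn_if_close(obstacle_label: str, distance_m: float,
                  threshold_m: float = 2.0) -> str:
    """Compose a warning message once the obstacle is within the threshold."""
    if distance_m <= threshold_m:
        return f"Warning: {obstacle_label} about {distance_m:.1f} m ahead"
    return f"{obstacle_label} detected, {distance_m:.1f} m ahead"

# Example: a bollard first seen 6 m away; the user has a 0.7 m stride
# and has taken 6 steps since detection.
d = remaining_distance_m(steps_walked=6, stride_m=0.7, initial_distance_m=6.0)
print(warn_if_close("bollard", d))  # -> Warning: bollard about 1.8 m ahead
```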

Anomaly detection and attack type classification mechanism using Extra Tree and ANN (Extra Tree와 ANN을 활용한 이상 탐지 및 공격 유형 분류 메커니즘)

  • Kim, Min-Gyu;Han, Myung-Mook
    • Journal of Internet Computing and Services / v.23 no.5 / pp.79-85 / 2022
  • Anomaly detection is a method for detecting and blocking abnormal data flows in ordinary users' data sets. The previously established approach is signature-based detection, which detects and defends against attacks using the signatures of already known attacks. Its advantage is a low false positive rate, but it is very vulnerable to zero-day or modified attacks. Anomaly detection, by contrast, suffers from a high false positive rate but can identify, detect, and block zero-day and modified attacks, so related studies are being actively conducted. In this study, we address these anomaly detection mechanisms and propose a new mechanism that performs both anomaly detection and attack type classification while compensating for the high false positive rate mentioned above. The experiment was conducted with five configurations chosen to reflect the characteristics of various algorithms, and the model showing the best accuracy is proposed as the result of this study. After an attack is detected by applying the Extra Tree and a three-layer ANN at the same time, the attack type of the data classified as an attack is determined using the Extra Tree. Verification was performed on the NSL-KDD data set, and the accuracy was 99.8%, 99.1%, 98.9%, 98.7%, and 97.9% for Normal, DoS, Probe, U2R, and R2L, respectively. This configuration showed superior performance compared to the other models.
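
A two-stage detect-then-classify pipeline of this kind can be sketched with scikit-learn's ExtraTreesClassifier and MLPClassifier. The sketch below assumes numeric features, binary attack labels, and multi-class attack-type labels, and it combines the two detectors with a simple agreement rule; it is not the authors' exact configuration.

```python
# Sketch of a two-stage pipeline: (1) binary anomaly detection,
# (2) attack-type classification for samples flagged as attacks.
# Feature preprocessing and the NSL-KDD loading step are assumed.
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.neural_network import MLPClassifier

# X: numeric feature matrix, y_binary: 0 = normal / 1 = attack,
# y_type: attack category (e.g., DoS, Probe, U2R, R2L) -- assumed inputs.
def train_two_stage(X, y_binary, y_type):
    detector_tree = ExtraTreesClassifier(n_estimators=100, random_state=0)
    detector_ann = MLPClassifier(hidden_layer_sizes=(64, 32, 16),  # a "three-layer" ANN
                                 max_iter=300, random_state=0)
    detector_tree.fit(X, y_binary)
    detector_ann.fit(X, y_binary)

    attack_mask = y_binary == 1
    type_classifier = ExtraTreesClassifier(n_estimators=100, random_state=0)
    type_classifier.fit(X[attack_mask], y_type[attack_mask])
    return detector_tree, detector_ann, type_classifier

def predict_two_stage(models, X_new):
    detector_tree, detector_ann, type_classifier = models
    # Flag as an attack only when both detectors agree (one possible combination rule).
    is_attack = (detector_tree.predict(X_new) == 1) & (detector_ann.predict(X_new) == 1)
    labels = np.array(["Normal"] * len(X_new), dtype=object)
    if is_attack.any():
        labels[is_attack] = type_classifier.predict(X_new[is_attack])
    return labels
```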

Development of Open Set Recognition-based Multiple Damage Recognition Model for Bridge Structure Damage Detection (교량 구조물 손상탐지를 위한 Open Set Recognition 기반 다중손상 인식 모델 개발)

  • Kim, Young-Nam;Cho, Jun-Sang;Kim, Jun-Kyeong;Kim, Moon-Hyun;Kim, Jin-Pyung
    • KSCE Journal of Civil and Environmental Engineering Research / v.42 no.1 / pp.117-126 / 2022
  • The number of bridge structures in Korea is continuously increasing in both number and size, and the number of old bridges in service for more than 30 years is also growing steadily. Bridge aging is treated as a serious social problem not only in Korea but around the world, and the existing manpower-centered inspection method is revealing its limitations. Recently, various bridge damage detection studies using deep learning-based image processing algorithms have been conducted, but because of the limitations of available bridge damage data sets, most of them are restricted to a single damage type, cracks, and rely on a closed-set classification model. When such a detection method is applied to actual bridge images, serious misrecognition can occur for input images of unknown classes, such as backgrounds or other objects. In this study, five types of bridge damage, including cracks, were defined, a data set was built and used to train a deep learning model, and an open set recognition-based multiple damage recognition model applying the OpenMax algorithm was constructed. Classification and recognition performance was then evaluated on an open set that included untrained images, and the results were analyzed.
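
The core idea of open set recognition, rejecting inputs that belong to none of the trained classes, can be shown with a much simpler confidence-threshold rule than OpenMax. The sketch below is only that simplified surrogate; the class names and the threshold value are assumptions, and OpenMax itself additionally fits Weibull models on activation distances.

```python
# Simplified open-set decision rule: accept the top class only when its
# softmax confidence is high enough; otherwise return "unknown".
# Stand-in for illustration; not the OpenMax algorithm used in the paper.
import numpy as np

DAMAGE_CLASSES = ["crack", "spalling", "efflorescence", "exposed_rebar", "leakage"]  # assumed names

def softmax(logits: np.ndarray) -> np.ndarray:
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

def open_set_predict(logits: np.ndarray, threshold: float = 0.8) -> str:
    probs = softmax(logits)
    top = int(np.argmax(probs))
    if probs[top] < threshold:
        return "unknown"          # e.g., background or an untrained object
    return DAMAGE_CLASSES[top]

# Example: a confident crack prediction vs. an ambiguous background image.
print(open_set_predict(np.array([6.0, 1.0, 0.5, 0.2, 0.1])))   # -> crack
print(open_set_predict(np.array([1.2, 1.1, 1.0, 0.9, 0.8])))   # -> unknown
```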

A Study on Tire Surface Defect Detection Method Using Depth Image (깊이 이미지를 이용한 타이어 표면 결함 검출 방법에 관한 연구)

  • Kim, Hyun Suk;Ko, Dong Beom;Lee, Won Gok;Bae, You Suk
    • KIPS Transactions on Software and Data Engineering / v.11 no.5 / pp.211-220 / 2022
  • Recently, research on smart factories, triggered by the 4th industrial revolution, has been actively conducted. Accordingly, the manufacturing industry is carrying out various studies to improve productivity and quality based on deep learning technology with robust performance. This paper studies a method of detecting tire surface defects at the visual inspection stage of the tire manufacturing process and introduces a tire surface defect detection method that uses a depth image acquired with a 3D camera. The tire surface depth images dealt with in this study suffer from low contrast caused by the shallow depth of the tire surface and from differences in the reference depth value caused by the data acquisition environment. Moreover, owing to the nature of the manufacturing industry, the algorithm must provide not only good detection performance but also real-time processing. Therefore, this paper studies relatively simple methods of normalizing the depth image so that the tire surface defect detection algorithm does not require a complex processing pipeline, and conducts a comparative experiment between a general normalization method and the proposed normalization method using YOLO V3, which can satisfy both detection performance and speed. The experimental results confirm that the proposed normalization method improves performance by about 7% in terms of mAP@0.5, showing that the proposed method is effective.
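
The contrast and reference-depth problems described above are commonly handled by rescaling each depth image relative to its own background level before detection. The snippet below is a generic per-image normalization sketch under assumed units and range values, not the specific procedure proposed in the paper.

```python
# Generic per-image depth normalization: remove the image-specific reference
# depth (estimated as the median background level) and stretch the shallow
# depth range to 0-255 so defects become visible to a detector such as YOLO.
import numpy as np

def normalize_depth(depth: np.ndarray, depth_range_mm: float = 5.0) -> np.ndarray:
    """depth: 2-D array of depth values in millimetres (assumed input format)."""
    reference = np.median(depth)                 # per-image reference plane
    relative = depth - reference                 # remove acquisition offset
    clipped = np.clip(relative, -depth_range_mm, depth_range_mm)
    scaled = (clipped + depth_range_mm) / (2 * depth_range_mm)  # map to 0..1
    return (scaled * 255).astype(np.uint8)

# Example with synthetic data: a flat surface at ~812 mm with a 3 mm-deep scratch.
depth = np.full((4, 8), 812.0)
depth[2, 3:6] -= 3.0
img = normalize_depth(depth)
print(img.min(), img.max())   # the scratch region maps to a darker band
```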

Efficient Privacy-Preserving Duplicate Elimination in Edge Computing Environment Based on Trusted Execution Environment (신뢰실행환경기반 엣지컴퓨팅 환경에서의 암호문에 대한 효율적 프라이버시 보존 데이터 중복제거)

  • Koo, Dongyoung
    • KIPS Transactions on Computer and Communication Systems / v.11 no.9 / pp.305-316 / 2022
  • With the flood of digital data arising from the Internet of Things and big data, cloud service providers that process and store vast amounts of data from multiple users can apply duplicate data elimination techniques for efficient data management. The edge computing paradigm, introduced as an extension of cloud computing, improves the user experience by mitigating problems such as network congestion toward a central cloud server and reduced computational efficiency. However, adding a new edge device that is not fully trusted may increase computational complexity, since additional cryptographic operations are needed to preserve data privacy during duplicate identification and elimination. In this paper, we propose a duplicate data elimination protocol with improved efficiency that preserves data privacy through an optimized user-edge-cloud communication framework based on a trusted execution environment. Directly sharing secret information between the user and the central cloud server minimizes the computational burden on edge devices and enables the cloud service provider to use efficient encryption algorithms. Users also benefit from offloading data to edge devices, which enables duplicate elimination and independent operation. Experiments show the efficiency of the proposed scheme, including up to a 78x improvement in computation during the data outsourcing process compared with a previous study that does not exploit a trusted execution environment in an edge computing architecture.
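
Privacy-preserving deduplication over ciphertexts commonly relies on deterministic, content-derived keys so that identical plaintexts produce identical ciphertexts, which the storage side can match without reading the data. The sketch below shows only that generic convergent-encryption idea; it is not the protocol proposed in the paper, and the key handling inside a trusted execution environment is omitted.

```python
# Generic convergent-encryption sketch for ciphertext deduplication:
# the key is derived from the content itself, so equal files encrypt to
# equal ciphertexts and the server can detect duplicates by tag alone.
# (Illustrative only; real schemes add key wrapping, TEE attestation, etc.)
import hashlib

store = {}  # tag -> ciphertext, standing in for cloud/edge storage

def convergent_key(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def encrypt(data: bytes, key: bytes) -> bytes:
    # Toy XOR "cipher" used only to keep the sketch dependency-free;
    # a real system would use an authenticated cipher such as AES-GCM.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def upload(data: bytes) -> str:
    key = convergent_key(data)
    ciphertext = encrypt(data, key)
    tag = hashlib.sha256(ciphertext).hexdigest()   # duplicate-detection tag
    if tag in store:
        return "duplicate: stored only a reference"
    store[tag] = ciphertext
    return "new data: ciphertext stored"

print(upload(b"sensor reading 42"))   # new data: ciphertext stored
print(upload(b"sensor reading 42"))   # duplicate: stored only a reference
```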

Trends in the Use of Artificial Intelligence in Medical Image Analysis (의료영상 분석에서 인공지능 이용 동향)

  • Lee, Gil-Jae;Lee, Tae-Soo
    • Journal of the Korean Society of Radiology / v.16 no.4 / pp.453-462 / 2022
  • In this paper, the artificial intelligence (AI) technology used in the field of medical image analysis was analyzed through a literature review. Literature searches were conducted on PubMed, ResearchGate, Google, and Cochrane Review using key words. The searches returned 114 abstracts, of which 98 were reviewed after excluding 16 duplicates. In the reviewed literature, AI is applied to classification, localization, disease detection, disease segmentation, and assessing the fit of registered images. In machine learning (ML), the practice of extracting features beforehand and feeding the extracted feature values into a neural network is disappearing; instead, the field is shifting toward deep learning (DL) methods with multiple hidden layers, in which feature extraction is handled within the DL process. This shift is attributed to increased computer memory, improved computation speed, and the construction of big data. For the analysis of medical images using AI to be applied to medical care, the role of physicians is important: physicians must be able to interpret and analyze the predictions of AI algorithms. Additional medical education and professional development are needed for existing physicians to understand AI, and a revised curriculum for learners in medical school also appears necessary.
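
The shift the review describes, from hand-crafted features fed to a small classifier toward deeper networks trained on raw pixels, can be made concrete with a toy comparison. The sketch below uses synthetic data and scikit-learn classifiers purely as an illustration; it is not drawn from any of the reviewed studies.

```python
# Sketch of the shift described above: (a) classic ML with hand-crafted
# features fed to a small classifier vs. (b) a deeper network trained on raw
# pixel values, where feature learning happens inside the model.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
images = rng.random((200, 16, 16))                       # toy grayscale "images"
labels = (images.mean(axis=(1, 2)) > 0.5).astype(int)    # toy binary target

# (a) Classic ML: features (mean, std) are extracted by hand first.
features = np.stack([images.mean(axis=(1, 2)), images.std(axis=(1, 2))], axis=1)
clf_ml = MLPClassifier(hidden_layer_sizes=(8,), max_iter=500, random_state=0)
clf_ml.fit(features, labels)

# (b) DL-style: raw pixels go in, multiple hidden layers learn the features.
pixels = images.reshape(len(images), -1)
clf_dl = MLPClassifier(hidden_layer_sizes=(128, 64, 32), max_iter=500, random_state=0)
clf_dl.fit(pixels, labels)

print(clf_ml.score(features, labels), clf_dl.score(pixels, labels))
```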

A Case Study on Utilizing Open-Source Software SDL in C Programming Language Learning (C 프로그래밍 언어 학습에 공개 소스 소프트웨어 SDL 활용 사례 연구)

  • Kim, Sung Deuk
    • Journal of Practical Engineering Education / v.14 no.1 / pp.1-10 / 2022
  • Learning the C programming language in electronics education is an important basic course for understanding computer programming and acquiring the ability to use microprocessors in embedded systems. To concentrate on basic grammar and algorithms, the common teaching method is to write programs based on C standard library functions in the console window and to learn theory and practice in parallel. However, when a student wants to start a project or move to a more advanced stage after acquiring basic knowledge of the C language, using only the C standard library in the console window limits what the student can express or control with a C program. To make it easier for students to use graphics and multimedia resources and to increase educational value, this paper studies a case of applying Simple DirectMedia Layer (SDL), an open-source software library, to the C programming language learning process. An SDL-based programming course, offered after completion of the basic curriculum conducted in the console window, is introduced, and its educational value is evaluated through a survey. More than 56% of the respondents expressed positive opinions in terms of improved application ability, stimulated interest, and overall usefulness, while fewer than 4% held negative opinions.

CycleGAN Based Translation Method between Asphalt and Concrete Crack Images for Data Augmentation (데이터 증강을 위한 순환 생성적 적대 신경망 기반의 아스팔트와 콘크리트 균열 영상 간의 변환 기법)

  • Shim, Seungbo
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.21 no.5 / pp.171-182 / 2022
  • The safe use of a structure requires it to be maintained in an undamaged state; thus, a typical factor that determines the safety of a structure is the presence of cracks. Cracks arise from various causes, damage the structure in various ways, and appear in different shapes. Worse, if these cracks are left unattended, the risk of structural failure increases and can lead to catastrophe. Hence, methods of checking structural damage using deep learning and computer vision technology have recently been introduced. These methods usually presuppose a large amount of training image data, yet the amount of training image data is always insufficient, and this shortage particularly degrades the performance of deep learning crack detection algorithms. In this study, a method of augmenting crack image data based on an image translation technique was therefore developed. The method obtains crack image data for training a deep learning neural network model by transforming an asphalt crack image into a concrete crack image, or vice versa. By increasing the diversity of the training data in this way, the method is expected to enable the development of a more robust crack detection algorithm.
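
The cycle-consistency idea at the heart of CycleGAN-style translation (asphalt to concrete and back should reproduce the original image) can be sketched briefly. The generators below are small placeholders and the adversarial losses are omitted; this is an assumed, minimal illustration, not the authors' architecture or training code.

```python
# Minimal sketch of the CycleGAN cycle-consistency loss used for augmentation.
import torch
import torch.nn as nn

class TinyGenerator(nn.Module):
    """Placeholder image-to-image generator (a real one would be a ResNet/U-Net)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, kernel_size=3, padding=1), nn.Tanh(),
        )
    def forward(self, x):
        return self.net(x)

G_asphalt_to_concrete = TinyGenerator()
G_concrete_to_asphalt = TinyGenerator()
l1 = nn.L1Loss()

asphalt = torch.rand(1, 3, 128, 128)   # toy asphalt crack image batch
concrete = torch.rand(1, 3, 128, 128)  # toy concrete crack image batch

# Forward and backward cycles; adversarial losses are omitted for brevity.
fake_concrete = G_asphalt_to_concrete(asphalt)
recovered_asphalt = G_concrete_to_asphalt(fake_concrete)
fake_asphalt = G_concrete_to_asphalt(concrete)
recovered_concrete = G_asphalt_to_concrete(fake_asphalt)

cycle_loss = l1(recovered_asphalt, asphalt) + l1(recovered_concrete, concrete)
cycle_loss.backward()   # gradients for both generators
print(float(cycle_loss))
```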

Development of UDP based Massive VLBI Data Transfer Program (UDP 기반의 대용량 VLBI 데이터 전송 프로그램 개발)

  • Song, Min-Gyu;Kim, Hyun-Goo;Sohn, Bong-Won;Wi, Seog-Oh;Kang, Yong-Woo;Yeom, Jae-Hwan;Byun, Do-Young;Han, Seog-Tae
    • Journal of Korea Society of Industrial Information Systems / v.15 no.5 / pp.37-51 / 2010
  • In this paper, we discuss program implementation and system optimization for the effective transfer of huge amounts of data. In VLBI, which observes celestial bodies using radio telescopes hundreds to thousands of kilometers apart, each VLBI observatory must transfer up to terabytes of observed data. For this reason, e-VLBI research based on advanced networks is being actively carried out to transfer data efficiently. Following this trend, this paper discusses the design and implementation of a system for data transfer at Gbps rates. As the data transfer protocol, we use UDP to design a data transmission program with much higher speeds than currently available via VTP (VLBI Transport Protocol). The Tsunami-UDP algorithm is applied in implementing the data transfer program so that transmission performance is maximized, and observed data can be transferred faster and more reliably through optimization of the computer systems at each VLBI station.
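
The block-oriented UDP transfer style used by Tsunami-UDP (a UDP data stream of numbered blocks, with retransmission of missing blocks negotiated over a separate TCP control channel) can be sketched on the sending side as follows. The host, port, block size, and file name are arbitrary example values, and the control channel is omitted; this is not the program developed in the paper.

```python
# Minimal sketch of block-oriented UDP bulk transfer: each datagram carries a
# sequence number so the receiver can request retransmission of missing blocks
# (Tsunami-UDP does this over a separate TCP control channel, omitted here).
import socket
import struct

BLOCK_SIZE = 8192          # payload bytes per datagram (example value)
DEST = ("127.0.0.1", 46224)

def send_file_udp(path: str) -> int:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    seq = 0
    with open(path, "rb") as f:
        while True:
            payload = f.read(BLOCK_SIZE)
            if not payload:
                break
            # 8-byte big-endian sequence number prefixed to each block
            sock.sendto(struct.pack("!Q", seq) + payload, DEST)
            seq += 1
    sock.close()
    return seq  # number of blocks sent

# Example call (hypothetical file name):
# blocks = send_file_udp("vlbi_scan_0001.dat")
# print(blocks, "blocks sent")
```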

Log Collection Method for Efficient Management of Systems using Heterogeneous Network Devices (이기종 네트워크 장치를 사용하는 시스템의 효율적인 관리를 위한 로그 수집 방법)

  • Jea-Ho Yang;Younggon Kim
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.23 no.3 / pp.119-125 / 2023
  • As IT infrastructure operation has advanced, methods for managing systems have become widely adopted, and recent research has focused on improving system management using Syslog. However, utilizing log data collected through these methods is challenging, because the logs are produced in various formats that require expert analysis. This paper proposes a system that uses edge computing to distribute the collection of Syslog data and preprocesses duplicate data before storing it in a central database. In addition, the system builds a data dictionary to classify and count data in real time, and restricts the transmission of already registered data to the central database. This approach maintains predefined patterns in the data dictionary, controls duplicate data and temporal duplicates, and enables refined data to be stored in the central database, thereby securing fundamental data for big data analysis. The proposed algorithms and procedures are demonstrated through simulations and examples. Real Syslog data, including the extracted examples, is used to verify that the necessary information is extracted accurately from the log data and that the classification and storage processes execute successfully. The system can serve as an efficient solution for collecting and managing log data in edge environments and offers potential benefits in terms of technology diffusion.
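
An edge-side data dictionary of the kind described can be sketched as a pattern counter that forwards only first-seen patterns to the central database. The pattern rule below (masking numbers and IP addresses) is an assumption made for illustration, not the paper's exact preprocessing.

```python
# Sketch of an edge-side "data dictionary" that normalizes syslog lines into
# patterns, counts repeats locally, and forwards only first-seen patterns.
import re
from collections import Counter

pattern_counts = Counter()   # pattern -> number of occurrences at this edge node

def to_pattern(syslog_line: str) -> str:
    """Mask variable tokens (numbers, IPs) so recurring messages share one key."""
    masked = re.sub(r"\d+\.\d+\.\d+\.\d+", "<ip>", syslog_line)
    return re.sub(r"\d+", "<num>", masked)

def ingest(syslog_line: str, forward):
    """Count the line locally; forward to the central DB only on first sight."""
    key = to_pattern(syslog_line)
    pattern_counts[key] += 1
    if pattern_counts[key] == 1:
        forward(key)    # unseen pattern -> send to central database

# Example usage with a stand-in forwarder:
sent = []
ingest("sshd[1023]: Failed password for root from 10.0.0.5 port 2201", sent.append)
ingest("sshd[2044]: Failed password for root from 10.0.0.9 port 6312", sent.append)
print(len(sent), pattern_counts)   # 1 forwarded pattern, counted twice
```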