• Title/Summary/Keyword: a simple algorithm


Evaluation of Low-cost MEMS Acceleration Sensors to Detect Earthquakes

  • Lee, Jangsoo;Kwon, Young-Woo
    • Journal of the Korea Society of Computer and Information
    • /
    • v.25 no.5
    • /
    • pp.73-79
    • /
    • 2020
  • As the number of earthquakes on the Korean Peninsula gradually increases, much research has been actively conducted to detect earthquakes quickly and accurately. Because traditional seismic stations are expensive to install and operate, recent research has focused on detecting earthquakes using low-cost MEMS sensors. In this article, we evaluate how a low-cost MEMS acceleration sensor installed in a smartphone can be used to detect earthquakes. To this end, we installed about 280 smartphones at various locations in Korea to collect acceleration data and then assessed the installed sensors' noise floor through PSD calculation. The noise floor computed from the PSD determines the magnitude of earthquake that the installed MEMS acceleration sensors can detect. During the last few months of real operation, we collected acceleration data from 200 of the 280 installed smartphones and computed their PSDs. Based on our experiments, the MEMS acceleration sensor installed in a smartphone is capable of observing and detecting earthquakes of magnitude 3.5 or more occurring within 10 km of the epicenter. During this period, the smartphone acceleration sensors recorded an earthquake of magnitude 3.5 in Miryang on December 30, 2019, and it was confirmed as an earthquake using STA/LTA, a simple earthquake detection algorithm. The earthquake detection system using MEMS acceleration sensors is expected to detect the increasing number of earthquakes more quickly and accurately.
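
The STA/LTA trigger mentioned in the abstract compares short-term and long-term averages of signal energy. A minimal Python sketch of such a trigger is shown below; the window lengths, threshold, and function names are illustrative assumptions, not the values or code used in the paper.

```python
import numpy as np

def sta_lta_trigger(accel, fs, sta_win=1.0, lta_win=30.0, threshold=4.0):
    """Flag samples where the short-term/long-term average ratio of the
    signal energy exceeds a threshold (classic STA/LTA triggering)."""
    energy = np.asarray(accel, dtype=float) ** 2
    n_sta = max(int(sta_win * fs), 1)   # short window length in samples
    n_lta = max(int(lta_win * fs), 1)   # long window length in samples

    # Moving averages of the energy over the short and long windows.
    sta = np.convolve(energy, np.ones(n_sta) / n_sta, mode="same")
    lta = np.convolve(energy, np.ones(n_lta) / n_lta, mode="same")

    ratio = sta / np.maximum(lta, 1e-12)  # guard against division by zero
    return np.flatnonzero(ratio > threshold)
```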

An Explicit Dynamic Memory Management Scheme in Java Run-Time Environment (자바 실행시간 환경에서 명시적인 동적 메모리 관리 기법)

  • 배수강;이승룡;전태웅
    • Journal of KIISE:Software and Applications
    • /
    • v.30 no.1_2
    • /
    • pp.58-72
    • /
    • 2003
  • Unlike C or C++ programming environments, where memory is released explicitly with free or delete, objects created by the keyword new in Java are automatically managed by the garbage collector inside the Java Virtual Machine (JVM). This frees application programmers from the burden of memory management. The garbage collector, however, inherently has its own run-time execution overhead and can therefore significantly degrade JVM performance. In order to mitigate this burden, we propose a novel dynamic memory management scheme for the Java environment. In the proposed method, application programmers can explicitly manage objects in a simple way, which reduces the run-time overhead incurred while the garbage collector is running. To accomplish this, a Java application first calls APIs implemented in native Java, which then call JVM-dependent subroutines, preserving Java's portability. In this way, we not only sustain stability in the execution environment but also improve garbage collector performance simply by calling the APIs. Our simulation study shows that the proposed scheme improves the execution time of a mark-and-sweep garbage collector by 10.07 to 52.24 percent.
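
The scheme itself is implemented as Java APIs backed by JVM-dependent native subroutines, which are not reproduced here. The sketch below is only a Python analogy of the underlying idea: letting the programmer explicitly release objects (here via a pool) so that fewer dead objects are left for the garbage collector to reclaim. The class and method names are hypothetical.

```python
class ObjectPool:
    """Reuse explicitly released objects so that fewer allocations ever
    become garbage; a loose analogy to explicit memory management APIs."""

    def __init__(self, factory):
        self._factory = factory   # callable that builds a fresh object
        self._free = []           # objects the programmer has released

    def acquire(self):
        # Reuse a released object if available, otherwise allocate a new one.
        return self._free.pop() if self._free else self._factory()

    def release(self, obj):
        # The programmer declares that obj is no longer needed.
        self._free.append(obj)

# Usage: pool = ObjectPool(list); buf = pool.acquire(); ...; pool.release(buf)
```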

Design of Optimized Fuzzy Controller by Means of HFC-based Genetic Algorithms for Rotary Inverted Pendulum System (회전형 역 진자 시스템에 대한 계층적 공정 경쟁 기반 유전자 알고리즘을 이용한 최적 Fuzzy 제어기 설계)

  • Jung, Seung-Hyun;Choi, Jeoung-Nae;Oh, Sung-Kwun
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.18 no.2
    • /
    • pp.236-242
    • /
    • 2008
  • In this paper, we propose an optimized fuzzy controller based on Hierarchical Fair Competition-based Genetic Algorithms (HFCGA) for a rotary inverted pendulum system. We adopt a fuzzy controller to control the rotary inverted pendulum, and the fuzzy rules of the controller are designed based on the design methodology of the Linear Quadratic Regulator (LQR) controller. Simple Genetic Algorithms (SGAs) are well known as optimization algorithms supporting global search, and there is a long list of successful uses of GAs reported in different application domains. It should be stressed, however, that GAs can still get trapped in sub-optimal regions of the search space due to premature convergence. Accordingly, parallel genetic algorithms (PGAs) were developed to mitigate the effect of premature convergence. In particular, as one of the diverse types of PGA, HFCGA has emerged as an effective optimization mechanism for dealing with very large search spaces. We use HFCGA to optimize the parameters of the fuzzy controller. A comparative analysis between simulation and practical experiment demonstrates that the proposed HFCGA-based fuzzy controller leads to superb performance in comparison with the conventional LQR controller as well as the SGA-based fuzzy controller.
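
The HFC principle organizes the population into layered subpopulations with fitness admission thresholds, so that strong individuals do not prematurely dominate the whole population. The Python sketch below illustrates one generation of such a scheme under simplifying assumptions (truncation selection, user-supplied mutate and fitness functions); it is not the paper's HFCGA implementation or its fuzzy-controller encoding.

```python
def hfc_generation(demes, thresholds, mutate, fitness):
    """One generation of a toy Hierarchical Fair Competition GA.

    demes[i] holds individuals admitted by thresholds[i] (ascending order);
    after variation, every individual migrates to the highest deme whose
    admission threshold its fitness meets.
    """
    offspring = [mutate(parent) for deme in demes for parent in deme]
    survivors = [ind for deme in demes for ind in deme]

    # Redistribute everyone according to the admission thresholds.
    new_demes = [[] for _ in demes]
    for ind in offspring + survivors:
        level = 0
        for i, t in enumerate(thresholds):
            if fitness(ind) >= t:
                level = i
        new_demes[level].append(ind)

    # Keep each deme at a fixed size via truncation selection.
    size = max(len(d) for d in demes)
    return [sorted(d, key=fitness, reverse=True)[:size] for d in new_demes]
```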

Stereo Vision based on Planar Algebraic Curves (평면대수곡선을 기반으로 한 스테레오 비젼)

  • Ahn, Min-Ho;Lee, Chung-Nim
    • Journal of KIISE:Software and Applications
    • /
    • v.27 no.1
    • /
    • pp.50-61
    • /
    • 2000
  • Recently, stereo vision based on conics has received much attention from many authors. Conics have many attractive features, such as their matrix expression, efficient correspondence checking, and the abundance of conic shapes in the real world. Extensions to higher-degree algebraic curves have met with limited success. Although irreducible algebraic curves are rather rare in the real world, lines and conics are abundant, and their products provide good examples of higher-degree algebraic curves. We consider plane algebraic curves of an arbitrary degree $n \geq 2$ with a fully calibrated stereo system and present closed-form solutions to both the correspondence and reconstruction problems. Let $f_1, f_2$ be the image curves, $\pi$ the plane, and $VC_P(g)$ the cone with generator (plane) curve $g$ and vertex $P$. Then the relation $VC_{O_1}(f_1) = VC_{O_1}(VC_{O_2}(f_2) \cap \pi)$ gives polynomial equations in the coefficients $d_1, d_2, d_3$ of the plane $\pi$. After some manipulation, we obtain an extremely simple polynomial equation in a single variable whose unique real positive root plays the key role. This is then followed by evaluating $O(n^2)$ polynomials of a single variable at that root. This is in contrast to past works, which usually involve a simultaneous system of multivariate polynomial equations. We checked our algorithm using synthetic as well as real-world images.
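
The reconstruction step described above reduces to a single-variable polynomial equation whose unique real positive root is then used to evaluate the remaining $O(n^2)$ polynomials. A small Python helper for that root-selection step might look as follows; the tolerance and error handling are assumptions, and the surrounding cone-intersection algebra is not reproduced.

```python
import numpy as np

def unique_positive_real_root(coeffs, tol=1e-9):
    """Return the unique real positive root of a univariate polynomial
    given by its coefficients in descending order (as for numpy.roots)."""
    roots = np.roots(coeffs)
    positive = [r.real for r in roots if abs(r.imag) < tol and r.real > 0]
    if len(positive) != 1:
        raise ValueError("expected exactly one positive real root")
    return positive[0]
```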


Developing a New Algorithm for Conversational Agent to Detect Recognition Error and Neologism Meaning: Utilizing Korean Syllable-based Word Similarity (대화형 에이전트 인식오류 및 신조어 탐지를 위한 알고리즘 개발: 한글 음절 분리 기반의 단어 유사도 활용)

  • Jung-Won Lee;Il Im
    • Journal of Intelligence and Information Systems
    • /
    • v.29 no.3
    • /
    • pp.267-286
    • /
    • 2023
  • Conversational agents such as AI speakers rely on voice conversation for human-computer interaction, and voice recognition errors often occur in conversational situations. Recognition errors in user utterance records can be categorized into two types. The first type is misrecognition, where the agent fails to recognize the user's speech entirely. The second type is misinterpretation, where the user's speech is recognized and a service is provided, but the interpretation differs from the user's intention. Among these, misinterpretation errors require separate error detection because they are recorded as successful service interactions. In this study, various text separation methods were applied to detect misinterpretation. For each separation method, the similarity of consecutive utterance pairs was computed using word embedding and document embedding techniques, which convert words and documents into vectors. This approach goes beyond simple word-based similarity calculation and explores a new method for detecting misinterpretation errors. Real user utterance records were used to train and develop a detection model by applying patterns of misinterpretation error causes. The results revealed that the most significant improvement came from initial consonant extraction when detecting misinterpretation errors caused by unregistered neologisms, and comparison with the other separation methods revealed different error types. This study has two main implications. First, for misinterpretation errors that are difficult to detect because they are not recognized as errors, the study proposed diverse text separation methods and identified a novel one that improved performance remarkably. Second, if this approach is applied to conversational agents or voice recognition services requiring neologism detection, the patterns of errors occurring from the voice recognition stage can be specified. The study proposed and verified that, even for interactions not categorized as errors, services can be provided according to the results the user intended.
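
Initial consonant (choseong) extraction, which the study found most effective for neologism-related errors, can be done directly with Unicode code-point arithmetic for precomposed Hangul syllables. The sketch below pairs it with a simple Jaccard similarity as a stand-in for the embedding-based similarity used in the paper; the function names and the similarity choice are assumptions.

```python
CHOSEONG = list("ㄱㄲㄴㄷㄸㄹㅁㅂㅃㅅㅆㅇㅈㅉㅊㅋㅌㅍㅎ")  # 19 initial consonants

def initial_consonants(text):
    """Replace every precomposed Hangul syllable with its initial consonant."""
    out = []
    for ch in text:
        code = ord(ch)
        if 0xAC00 <= code <= 0xD7A3:                  # Hangul syllable block
            out.append(CHOSEONG[(code - 0xAC00) // (21 * 28)])
        else:
            out.append(ch)
    return "".join(out)

def jaccard_similarity(a, b):
    """Character-level Jaccard similarity between two decomposed strings."""
    sa, sb = set(a), set(b)
    return len(sa & sb) / len(sa | sb) if (sa | sb) else 1.0

# e.g. initial_consonants("신조어") -> "ㅅㅈㅇ"
```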

A Methodology for Making Military Surveillance System to be Intelligent Applied by AI Model (AI모델을 적용한 군 경계체계 지능화 방안)

  • Changhee Han;Halim Ku;Pokki Park
    • Journal of Internet Computing and Services
    • /
    • v.24 no.4
    • /
    • pp.57-64
    • /
    • 2023
  • The ROK military faces a significant challenge in its vigilance mission due to demographic problems, particularly the aging population and the population cliff. This study demonstrates the crucial role of the 4th industrial revolution and its core artificial intelligence algorithms in maximizing work efficiency within the Command & Control room by mechanizing simple tasks. To achieve a fully developed military surveillance system, we chose multi-object tracking (MOT) technology as the essential artificial intelligence component, in line with our goal of an intelligent and automated surveillance system. Additionally, we prioritized data visualization and the user interface to ensure system accessibility and efficiency. These complementary elements come together to form a cohesive software application. The CCTV video data for this study were collected from the cameras installed at the 1st and 2nd main gates of the 00 unit, with the cooperation of the Command & Control room. Experimental results indicate that an intelligent and automated surveillance system delivers more information to the operators in the room. However, it is important to acknowledge the limitations of the software system developed in this study; by highlighting these limitations, we present a future direction for the development of military surveillance systems.
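
The abstract does not specify which MOT model was used, but a core step of most trackers is associating existing tracks with new detections. The sketch below shows a generic IoU-based association using the Hungarian algorithm; the box format, threshold, and function names are assumptions, not the system described above.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    union = ((a[2] - a[0]) * (a[3] - a[1]) +
             (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / (union + 1e-9)

def associate(tracks, detections, min_iou=0.3):
    """Match track boxes to detection boxes by maximizing total IoU."""
    if not tracks or not detections:
        return []
    cost = np.array([[1.0 - iou(t, d) for d in detections] for t in tracks])
    rows, cols = linear_sum_assignment(cost)
    return [(r, c) for r, c in zip(rows, cols) if 1.0 - cost[r, c] >= min_iou]
```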

Cavitation signal detection based on time-series signal statistics (시계열 신호 통계량 기반 캐비테이션 신호 탐지)

  • Haesang Yang;Ha-Min Choi;Sock-Kyu Lee;Woojae Seong
    • The Journal of the Acoustical Society of Korea
    • /
    • v.43 no.4
    • /
    • pp.400-405
    • /
    • 2024
  • When cavitation noise occurs at ship propellers, the level of underwater radiated noise increases abruptly, which can be a critical threat factor, particularly for naval vessels, as it increases the probability of detection. Therefore, accurately and promptly assessing cavitation signals is crucial for improving the survivability of submarines. Traditionally, techniques for determining cavitation occurrence have mainly relied on checking whether acoustic or vibration levels measured by sensors exceed a certain threshold, or on the Detection of Envelope Modulation On Noise (DEMON) method. However, such techniques depend on a physical understanding of the cavitation phenomenon and on subjective criteria drawn from user experience, and they involve multiple procedures, which motivates the development of techniques for early automatic recognition of cavitation signals. In this paper, we propose an algorithm that automatically detects cavitation occurrence based on simple statistical features, reflecting cavitation characteristics, extracted from acoustic signals measured by sensors attached to the hull. The performance of the proposed technique is evaluated with respect to the number of sensors and the model test conditions. It was confirmed that, by sufficiently training on the cavitation characteristics reflected in signals measured by a single sensor, the occurrence of cavitation signals can be determined.
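
As a rough illustration of statistics-based detection, the sketch below computes a few generic time-series features (RMS, kurtosis, crest factor) per signal frame and applies fixed thresholds. These particular features and threshold values are assumptions for illustration, not necessarily those selected in the paper, where they would be learned from labeled model-test data.

```python
import numpy as np

def frame_features(frame):
    """RMS, kurtosis, and crest factor of one signal frame."""
    x = np.asarray(frame, dtype=float)
    rms = np.sqrt(np.mean(x ** 2))
    centered = x - x.mean()
    kurtosis = np.mean(centered ** 4) / (np.mean(centered ** 2) ** 2 + 1e-12)
    crest = np.max(np.abs(x)) / (rms + 1e-12)
    return np.array([rms, kurtosis, crest])

def detect_cavitation(frame, thresholds=(0.0, 4.5, 3.0)):
    """Flag a frame as cavitation when all statistics exceed their thresholds."""
    return bool(np.all(frame_features(frame) > np.asarray(thresholds)))
```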

A Comparative Study of Subset Construction Methods in OSEM Algorithms using Simulated Projection Data of Compton Camera (모사된 컴프턴 카메라 투사데이터의 재구성을 위한 OSEM 알고리즘의 부분집합 구성법 비교 연구)

  • Kim, Soo-Mee;Lee, Jae-Sung;Lee, Mi-No;Lee, Ju-Hahn;Kim, Joong-Hyun;Kim, Chan-Hyeong;Lee, Chun-Sik;Lee, Dong-Soo;Lee, Soo-Jin
    • Nuclear Medicine and Molecular Imaging
    • /
    • v.41 no.3
    • /
    • pp.234-240
    • /
    • 2007
  • Purpose: In this study we propose a block-iterative method for reconstructing Compton scattered data. This study shows that the well-known expectation maximization (EM) approach, along with its accelerated version based on the ordered subsets principle, can be applied to the problem of image reconstruction for a Compton camera. This study also compares several methods of constructing subsets for optimal performance of our algorithms. Materials and Methods: Three reconstruction algorithms were implemented: simple backprojection (SBP), EM, and ordered subset EM (OSEM). For OSEM, the projection data were grouped into subsets in a predefined order. Three different schemes for choosing nonoverlapping subsets were considered: scatter angle-based subsets, detector position-based subsets, and subsets based on both scatter angle and detector position. EM and OSEM with 16 subsets were performed with 64 and 4 iterations, respectively. The performance of each algorithm was evaluated in terms of computation time and normalized mean-squared error. Results: Both EM and OSEM clearly outperformed SBP in all aspects of accuracy. OSEM with 16 subsets and 4 iterations, which is equivalent to the standard EM with 64 iterations, was approximately 14 times faster in computation time than the standard EM. In OSEM, all three schemes for choosing subsets yielded similar results in computation time as well as normalized mean-squared error. Conclusion: Our results show that the OSEM algorithm, which has proven useful in emission tomography, can also be applied to the problem of image reconstruction for a Compton camera. With properly chosen subset construction methods and a moderate number of subsets, our OSEM algorithm significantly improves computational efficiency while keeping the original quality of the standard EM reconstruction. The OSEM algorithm with subsets based on both scatter angle and detector position is the most suitable choice.
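
One plausible way to build scatter angle-based nonoverlapping subsets is to sort the recorded events by scatter angle and deal them out round-robin, so that every subset spans the full angular range. The sketch below shows that grouping step only; the em_update function in the usage comment is hypothetical, and the paper's exact grouping rule may differ.

```python
import numpy as np

def angle_based_subsets(scatter_angles, n_subsets):
    """Group Compton events into ordered, nonoverlapping subsets by scatter angle.

    Events are sorted by angle and dealt out round-robin so that each subset
    covers the whole angular range. Returns one index array per subset.
    """
    order = np.argsort(np.asarray(scatter_angles))
    return [order[s::n_subsets] for s in range(n_subsets)]

# Toy OSEM loop (one EM-style update per subset per iteration):
# for it in range(n_iterations):
#     for subset in angle_based_subsets(angles, 16):
#         image = em_update(image, events[subset])   # hypothetical update step
```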

Liver Splitting Using 2 Points for Liver Graft Volumetry (간 이식편의 체적 예측을 위한 2점 이용 간 분리)

  • Seo, Jeong-Joo;Park, Jong-Won
    • The KIPS Transactions:PartB
    • /
    • v.19B no.2
    • /
    • pp.123-126
    • /
    • 2012
  • This paper proposes a method to separate a liver into left and right lobes for simple and exact volumetry of the liver graft on abdominal MDCT (Multi-Detector Computed Tomography) images before living-donor liver transplantation. Using this algorithm, a medical team can accurately evaluate the liver graft with minimal interaction between the team and the system, helping ensure the donor's and recipient's safety. On the segmented liver image, 2 points (PMHV, a point in the Middle Hepatic Vein, and PPV, a point at the beginning of the right branch of the Portal Vein) are selected to separate the liver into left and right lobes. The middle hepatic vein is automatically segmented using PMHV, and the cutting line is decided on the basis of the segmented vein. The liver is then separated by connecting the cutting line and PPV, and the volume and ratio of the liver graft are estimated. To verify its accuracy, the volume estimated using the 2 points was compared with the volume from manual segmentation by a diagnostic radiologist and with the graft weight measured during surgery. The mean ${\pm}$ standard deviation of the differences between the actual weights and the estimated volumes was $162.38{\pm}124.39cm^3$ for manual segmentation and $107.69{\pm}97.24cm^3$ for the 2-point method. The correlation coefficient between the actual weight and the manually estimated volume is 0.79, and the correlation coefficient between the actual weight and the volume estimated using the 2 points is 0.87. After the 2 points are selected, the time required to separate the liver into left and right lobes and compute their volumes was measured to confirm that the algorithm can be used in real time during surgery. The mean ${\pm}$ standard deviation of the processing time is $57.28{\pm}32.81$ seconds per data set ($149.17{\pm}55.92$ slices).
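
As a greatly simplified stand-in for the MHV-guided cut described above, the sketch below splits a binary liver mask with a single plane that passes through the two seed points and is parallel to the craniocaudal (z) axis; the graft volume then follows from the voxel count times the voxel volume. The coordinate conventions and the planar cut itself are assumptions, not the paper's cutting-line construction.

```python
import numpy as np

def split_mask_by_plane(mask, p_mhv, p_pv):
    """Split a 3-D binary mask into two parts with a plane through two points.

    The plane contains p_mhv, p_pv and the z-axis direction; which part is
    'left' or 'right' depends on the ordering of the two points.
    """
    p_mhv, p_pv = np.asarray(p_mhv, float), np.asarray(p_pv, float)
    normal = np.cross(p_pv - p_mhv, np.array([0.0, 0.0, 1.0]))  # plane normal

    idx = np.argwhere(mask)                       # voxel coordinates in the mask
    side = (idx - p_mhv) @ normal > 0             # which side of the plane

    part_a = np.zeros_like(mask, dtype=bool)
    part_b = np.zeros_like(mask, dtype=bool)
    part_a[tuple(idx[side].T)] = True
    part_b[tuple(idx[~side].T)] = True
    return part_a, part_b

# Volume of one part in cm^3: part_a.sum() * voxel_volume_cm3
```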

Implementation of Motion Detection based on Extracting Reflected Light using 3-Successive Video Frames (3개의 연속된 프레임을 이용한 반사된 빛 영역추출 기반의 동작검출 알고리즘 구현)

  • Kim, Chang Min;Lee, Kyu Woong
    • KIISE Transactions on Computing Practices
    • /
    • v.22 no.3
    • /
    • pp.133-138
    • /
    • 2016
  • Motion detection algorithms based on difference images are classified into background subtraction and previous-frame subtraction. 1) Background subtraction is a convenient and effective method for detecting foreground objects against a stationary background. In real-world scenarios, however, especially outdoors, this restriction (i.e., a stationary background) often turns out to be impractical, since the background may not be stable. 2) Previous-frame subtraction is a simple technique for detecting motion in an image; the difference between two frames depends on the amount of motion that occurs from one frame to the next. Both of these straightforward methods fail when the object moves only slightly and slowly. In order to deal with this problem efficiently, in this paper we present a motion detection algorithm that incorporates the "reflected light area", which is generated during frame production, together with difference images. The algorithm combines multiple difference images with a bitwise AND operation. This process combines the accuracy of background subtraction with the environmental adaptability of previous-frame subtraction and reduces noise. The performance of the proposed method is demonstrated by assessing each method on samples from the CASIA Gait database.
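
The multi-frame scheme described above can be written compactly: two absolute difference images are thresholded and combined with a bitwise AND, which suppresses ghosting and much of the frame-to-frame noise. The sketch below shows that core step only (the reflected-light-area handling is omitted), and the threshold value is an illustrative assumption.

```python
import numpy as np

def three_frame_motion(f1, f2, f3, threshold=25):
    """Motion mask from three successive grayscale frames via thresholded
    difference images combined with a bitwise AND."""
    f1, f2, f3 = (np.asarray(f, dtype=np.int16) for f in (f1, f2, f3))
    d12 = np.abs(f2 - f1) > threshold   # motion between frames 1 and 2
    d23 = np.abs(f3 - f2) > threshold   # motion between frames 2 and 3
    return d12 & d23                    # boolean motion mask
```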