• Title/Summary/Keyword: Complex Matching

Fault Pattern Extraction Via Adjustable Time Segmentation Considering Inflection Points of Sensor Signals for Aircraft Engine Monitoring (센서 데이터 변곡점에 따른 Time Segmentation 기반 항공기 엔진의 고장 패턴 추출)

  • Baek, Sujeong
    • Journal of Korean Society of Industrial and Systems Engineering
    • /
    • v.44 no.3
    • /
    • pp.86-97
    • /
    • 2021
  • As mechatronic systems have various complex functions and require high performance, automatic fault detection is necessary for secure operation in manufacturing processes. For automatic, real-time fault detection in modern mechatronic systems, multiple sensor signals are collected by Internet of Things technologies. Traditional statistical control charts and machine learning approaches show significant results when the normal operating state follows a unified, solid density model, but they have limitations when normal-state signals are scattered; for this reason, pattern extraction and matching approaches have received much attention. Signal discretization-based pattern extraction is one of the popular signal analyses: it reduces the size of the given datasets as much as possible while highlighting significant, inherent signal behaviors. Since general pattern extraction methods are usually conducted with a fixed size of time segmentation, they can easily cut off significant behaviors, and consequently the performance of the extracted fault patterns is reduced. In this regard, adjustable time segmentation is proposed to extract more meaningful fault patterns from multiple sensor signals. By considering inflection points of the signals, we determine the optimal cut-points of the time segments in each sensor signal. In addition, to clarify the inflection points, we apply a Savitzky-Golay filter to the original datasets. To validate and verify the performance of the proposed segmentation, a dataset collected from an aircraft engine (provided by the NASA prognostics center) is used for fault pattern extraction. As a result, the proposed adjustable time segmentation shows better performance in fault pattern extraction.
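
The segmentation idea in this abstract, smoothing with a Savitzky-Golay filter and cutting at inflection points, can be sketched as follows. The window length, polynomial order, and the sign-change test on the second derivative are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np
from scipy.signal import savgol_filter

def inflection_cut_points(signal, window=11, poly=3):
    """Smooth a 1-D sensor signal with a Savitzky-Golay filter and
    return indices where the second derivative changes sign
    (candidate inflection points, i.e., segment boundaries)."""
    smoothed = savgol_filter(signal, window_length=window, polyorder=poly)
    d2 = np.gradient(np.gradient(smoothed))
    # A sign change of the second derivative marks an inflection point.
    signs = np.sign(d2)
    return np.where(np.diff(signs) != 0)[0] + 1

# Example: a sine wave has an inflection point near its zero crossing at pi.
t = np.linspace(0, 2 * np.pi, 200)
cuts = inflection_cut_points(np.sin(t))
```

Each sensor signal would get its own cut-point list, so segment lengths adapt to the signal's shape instead of a fixed window.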

A pilot study of an automated personal identification process: Applying machine learning to panoramic radiographs

  • Ortiz, Adrielly Garcia;Soares, Gustavo Hermes;da Rosa, Gabriela Cauduro;Biazevic, Maria Gabriela Haye;Michel-Crosato, Edgard
    • Imaging Science in Dentistry
    • /
    • v.51 no.2
    • /
    • pp.187-193
    • /
    • 2021
  • Purpose: This study aimed to assess the usefulness of machine learning and automation techniques to match pairs of panoramic radiographs for personal identification. Materials and Methods: Two hundred panoramic radiographs from 100 patients (50 males and 50 females) were randomly selected from a private radiological service database. Initially, 14 linear and angular measurements of the radiographs were made by an expert. Eight ratio indices derived from the original measurements were applied to a statistical algorithm to match radiographs from the same patients, simulating a semi-automated personal identification process. Subsequently, measurements were automatically generated using a deep neural network for image recognition, simulating a fully automated personal identification process. Results: Approximately 85% of the radiographs were correctly matched by the automated personal identification process. In a limited number of cases, the image recognition algorithm identified 2 potential matches for the same individual. No statistically significant differences were found between measurements performed by the expert on panoramic radiographs from the same patients. Conclusion: Personal identification might be performed with the aid of image recognition algorithms and machine learning techniques. This approach will likely facilitate the complex task of personal identification by performing an initial screening of radiographs and matching ante-mortem and post-mortem images from the same individuals.
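
As a rough illustration of the semi-automated matching step, pairs of radiographs can be matched by nearest-neighbour distance between their measurement-ratio vectors. The data below are synthetic; the paper's 8 ratio indices and its statistical algorithm are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: two radiographs per person, each summarised by
# 8 ratio indices (stand-ins for the paper's measurements).
n_people = 100
base = rng.normal(size=(n_people, 8))          # per-person "true" indices
first = base + rng.normal(scale=0.05, size=base.shape)   # earlier image
second = base + rng.normal(scale=0.05, size=base.shape)  # later image

# Match each "second" radiograph to the closest "first" one.
dists = np.linalg.norm(second[:, None, :] - first[None, :, :], axis=2)
matches = dists.argmin(axis=1)
accuracy = (matches == np.arange(n_people)).mean()
```

Because within-person measurement noise is much smaller than between-person variation, nearest-neighbour matching recovers nearly all pairs in this toy setting.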

Associations of unspecified pain, idiopathic pain and COVID-19 in South Korea: a nationwide cohort study

  • Kim, Namwoo;Kim, Jeewuan;Yang, Bo Ram;Hahm, Bong-Jin
    • The Korean Journal of Pain
    • /
    • v.35 no.4
    • /
    • pp.458-467
    • /
    • 2022
  • Background: Few studies have investigated unspecified or idiopathic pain associated with COVID-19. This study aimed to provide the incidence rates of unspecified pain and idiopathic pain in patients with COVID-19 for 90 days after COVID-19 diagnosis. Methods: A propensity score matched cohort was used, including all patients with COVID-19 in South Korea, and their electronic medical records were analyzed. The control group consisted of those who had not had tests for COVID-19 at all. Unspecified pain diagnoses consisted of pain-related diagnoses included in ICD-10 Chapter XVIII. Idiopathic pain disorders included fibromyalgia, temporomandibular joint disorders, headaches, chronic prostatitis, complex regional pain syndrome, atypical facial pain, irritable bowel syndrome, and interstitial cystitis. Results: After matching, the number of participants in each group was 7,911. For most unspecified pain, the incidences were higher in the COVID-19 group (11.7%; 95% confidence interval [CI], 11.0-12.5) than in the control group (6.5%; 95% CI, 6.0-7.1). For idiopathic pain, only headaches had a significantly higher incidence in the COVID-19 group (6.6%; 95% CI, 6.1-7.2) than in the control group (3.7%; 95% CI, 3.3-4.1). However, using a different control group that included only patients who visited a hospital at least once for any reason, the incidences of most unspecified and idiopathic pain were higher in the control group than in the COVID-19 group. Conclusions: Patients with COVID-19 might be at a higher risk of experiencing unspecified pain in the acute phase or after recovery compared with individuals who had not had tests for COVID-19.
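
The propensity score matched cohort described here can be illustrated with a minimal 1:1 greedy nearest-neighbour matching sketch. The covariates, sample size, and logistic model below are hypothetical stand-ins, not the study's actual design.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical covariates (e.g., age, comorbidity score) and exposure flags.
X = rng.normal(size=(500, 2))
treated = (X[:, 0] + rng.normal(scale=1.0, size=500)) > 0.5

# 1. Fit a propensity model: P(exposure | covariates).
ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

# 2. Greedy 1:1 nearest-neighbour matching on the propensity score,
#    removing each control from the pool once it is used.
controls = np.where(~treated)[0].tolist()
pairs = []
for i in np.where(treated)[0]:
    j = min(controls, key=lambda c: abs(ps[c] - ps[i]))
    pairs.append((i, j))
    controls.remove(j)
```

After matching, outcome incidences (here, pain diagnoses) are compared between the paired groups rather than the raw cohorts.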

A Study on the Effectiveness of Investment Protection in North Korea (대북 투자보호의 실효성 제고 방안에 대한 고찰)

  • Hyun-suk Oh
    • Journal of Arbitration Studies
    • /
    • v.33 no.2
    • /
    • pp.53-83
    • /
    • 2023
  • The investment agreement prepared at the beginning of inter-Korean economic cooperation in 2000 can be evaluated as very ineffective: it was a product of mutual political and diplomatic compromise rather than an effective protection for South Korean investment assets. South Korean companies suffered large losses due to the freezing of assets in the Mount Geumgang district and the closure of the Kaesong Industrial Complex, but they did not receive practical relief for those damages because of institutional vulnerabilities. Currently, North Korea is under international economic sanctions of the UN Security Council, so the resumption of inter-Korean economic cooperation is still far off; nevertheless, North Korea's human resources and geographical location remain attractive to South Korean investors. Therefore, if the strained relations between the two Koreas recover and the international economic sanctions on North Korea are eased, Korean companies' investment in North Korea will resume. However, the previous inter-Korean investment agreement system was a fictional system that was ineffective. If these safeguards are not reorganized when economic cooperation resumes, unfair damage to Korean companies will be repeated. The core of an improved investment guarantee system is not a bilateral system between the two Koreas but the establishment of a multilateral system through North Korea's inclusion in the international economy. Specifically, this includes encouraging North Korea to join international agreements on the enforcement of arbitral awards, securing subrogation rights through membership in international insurance organizations such as MIGA, and creating matching funds with international financial organizations. Through this new approach, it will be possible to improve the safety of Korean companies' investment in North Korea and, ultimately, to lay the foundation for mutual development through inter-Korean economic cooperation.

Revitalization and Support Policies of Closed Schools at the Age of Low Fertility and Super-Aging - Focusing on Closed School in Japan - (저출산·초고령화시대의 폐교 활용 및 지원시책 연구 - 일본의 사례를 중심으로 -)

  • Byun, Kyeonghwa;Yoo, Changgeun
    • Journal of the Korean Institute of Rural Architecture
    • /
    • v.25 no.3
    • /
    • pp.27-35
    • /
    • 2023
  • This study aims to provide implications for Korea's policies on the efficient use of closed schools by identifying how closed schools are revitalized in Japan and what supportive measures exist. In Japan, a total of 2,215 schools closed from 1992 to 2001 and 8,580 from 2002 to 2020, with 10,709 closures occurring from 1992 to 2020, an average of about 369 per year. In terms of the overall trend, the share of closed schools put into use has increased, from 70% in 2013 to 74% in 2020. The characteristics of this use can be summarized as follows. First, the uses of closed schools are becoming more diverse. Second, closed schools are most often revitalized as educational facilities for residents, followed by social sports facilities, social education facilities, and cultural facilities. Third, industrial use of closed schools is increasing, as they are turned into corporate facilities and start-up support facilities. In order to promote the use of closed schools, the Ministry of Education, Culture, Sports, Science and Technology in Japan simplified and made more flexible the property disposal procedures in 2008. Since 2010, the disclosure of information on closed facilities and the matching service between providers and users have been unified through the "Let's Connect to the Future ~ Closed School for All" project. The Cabinet Office, four other offices, and five central government ministries also advocate the use of closed schools by promoting subsidy support projects.

Place Assimilation in OT

  • Lee, Sechang
    • Proceedings of the KSPS conference
    • /
    • 1996.10a
    • /
    • pp.109-116
    • /
    • 1996
  • In this paper, I explore the possibility that the nature of place assimilation can be captured in terms of the OCP within Optimality Theory (McCarthy & Prince 1993, 1995; Prince & Smolensky 1993). In derivational models, each assimilatory process would be expressed through a different autosegmental rule. What any such model misses, however, is the clear generalization that all of those processes have the effect of avoiding a configuration in which two consonantal place nodes are adjacent across a syllable boundary, as illustrated in (1): (equation omitted). In a derivational model, it is a coincidence that across languages there are changes that modify a structure of the form (1a) into another structure that does not have adjacent consonantal place nodes (1b). OT allows us to express this effect through a constraint, given in (2), that forbids adjacent place nodes: (2) OCP(PL): Adjacent place nodes are prohibited. At this point, a question arises as to how consonantal and vocalic place nodes are formally distinguished in the output for the purpose of applying the OCP(PL). Moreover, the OCP(PL) would affect complex onsets and codas equally, as well as coda-onset clusters in languages that have them, such as English. To remedy this problem, following McCarthy (1994), I assume that the canonical markedness constraint is a prohibition defined over no more than two segments, α and β: that is, *{α, β} with appropriate conditions imposed on α and β. I propose the OCP(PL) again in the following format: (3) OCP(PL) (table omitted), where α and β are the target and the trigger of place assimilation, respectively. The '*' is a reminder that, in this format, constraints specify negative targets or prohibited configurations; any structure matching the specifications violates the constraint. In correspondence terms, the meaning of the OCP(PL) is this: the constraint is violated if a consonantal place α is immediately followed by a consonantal place β on the surface. One advantage of this format is that the OCP(PL) is also invoked in dealing with place assimilation within a complex coda (e.g., sink [si(equation omitted)k]): we can make the constraint scan only the consonantal clusters, excluding any intervening vowels. Finally, onset clusters typically do not undergo place assimilation; I propose that onsets be protected by a constraint which ensures that the coda, not the onset, loses its place feature.
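
The "scan only the consonantal clusters" reading of OCP(PL) can be mimicked with a toy evaluator. The segment inventory and place-feature assignments below are illustrative assumptions, not the paper's, and a shared place node in a homorganic cluster is approximated by identical place labels.

```python
# Toy place features: consonants carry a consonantal place node,
# vowels (absent from the table) do not.
PLACE = {
    "p": "LAB", "b": "LAB", "m": "LAB",
    "t": "COR", "d": "COR", "n": "COR",
    "k": "DOR", "g": "DOR",
}

def ocp_pl_violations(segments):
    """Count adjacent pairs of distinct consonantal place nodes.
    Homorganic (assimilated) clusters are treated as sharing one
    place node, so they do not violate the constraint."""
    violations = 0
    for a, b in zip(segments, segments[1:]):
        if a in PLACE and b in PLACE and PLACE[a] != PLACE[b]:
            violations += 1
    return violations

# "anpa": heterorganic n-p cluster -> one violation.
# "ampa": assimilated m-p cluster (one shared LAB node) -> none.
# "ata" : a vowel intervenes, so no adjacent place nodes -> none.
```

In an OT tableau, candidates like [anpa] would then incur one OCP(PL) mark while the assimilated candidate [ampa] incurs none, pushing assimilation to apply.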

Microwave Absorbing Properties of Iron Particles-Rubber Composites in Mobile Telecommunication Frequency Band (이동통신주파수 대역에서 순철 분말-고무 복합체 Sheet의 전파흡수특성)

  • Kim, Sun-Tae;Kim, Sant-Keun;Kim, Sung-Soo;Yoon, Yeo-Choon;Lee, Kyung-Sub;Choi, Kwang-Bo
    • Journal of the Korean Magnetics Society
    • /
    • v.14 no.4
    • /
    • pp.131-137
    • /
    • 2004
  • With the aim of developing thin electromagnetic wave absorbers for the mobile telecommunication frequency band (0.8-2.0 GHz), we investigate in this study the high-frequency magnetic, dielectric, and microwave absorbing properties of iron particles dispersed in a rubber matrix. The major experimental variables are the particle shape (sphere and flake) and the initial particle size (in the range 5-70 μm) of the iron powders. High magnetic permeability and dielectric permittivity can be obtained in composites containing thin plate-shaped (flake) iron particles (whose thickness is less than the skin depth at GHz frequencies), which can be produced by mechanical forging of spherical iron powders using an attrition mill. This result is attributed to the reduction of eddy current loss (increase of permeability) and the increase of space charge polarization (increase of permittivity). The optimum initial particle size is found to be about 10 μm for the attainment of the material parameters (particularly, the real part of the complex permeability) satisfying the wave impedance matching condition. With iron powders controlled in size and shape as absorbent fillers in the rubber matrix, the absorber thickness can be reduced to about 0.7 mm with respect to -5 dB reflection loss (70% power absorption) in the mobile telecommunication frequency band.
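
The impedance-matching design problem mentioned here is conventionally evaluated with the standard single-layer, metal-backed transmission-line model, which relates the complex permittivity and permeability of the sheet to its reflection loss. The material values below are illustrative, not the paper's measured data.

```python
import numpy as np

def reflection_loss_db(eps_r, mu_r, thickness_m, freq_hz):
    """Reflection loss of a single metal-backed absorber layer
    (transmission-line model; eps_r and mu_r are complex)."""
    c = 299_792_458.0  # speed of light, m/s
    # Normalised input impedance Z_in / Z_0 at the front face.
    z = np.sqrt(mu_r / eps_r) * np.tanh(
        1j * 2 * np.pi * freq_hz * thickness_m * np.sqrt(mu_r * eps_r) / c
    )
    return 20 * np.log10(np.abs((z - 1) / (z + 1)))

# Illustrative (not measured) lossy material values near 1 GHz:
rl = reflection_loss_db(eps_r=20 - 1j * 2, mu_r=3 - 1j * 1.5,
                        thickness_m=0.7e-3, freq_hz=1.0e9)
```

Matching occurs where Z_in approaches the free-space impedance (z → 1), driving the reflection loss strongly negative; raising the real part of the permeability lets that happen at a smaller thickness, which is the paper's design lever.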

Single-layered Microwave Absorbers containing Carbon nanofibers and NiFe particles (탄소나노섬유와 NiFe 분말을 함유한 단층형 전자기파 흡수체)

  • Park, Ki-Yeon;Han, Jae-Hung;Lee, Sang-Bok;Kim, Jin-Bong;Yi, Jin-Woo;Lee, Sang-Kwan
    • Composites Research
    • /
    • v.21 no.5
    • /
    • pp.9-14
    • /
    • 2008
  • Carbon nanofibers (CNFs) were used as dielectric lossy materials and NiFe particles as magnetic lossy materials. A total of twelve specimens of three types of radar absorbing materials (RAMs), namely dielectric, magnetic, and mixed, were fabricated. Their complex permittivities and permeabilities in the range of 2-18 GHz were measured using the transmission line technique. Parametric studies of the reflection loss characteristics of each specimen were performed to design the single-layered RAMs. The mixed RAMs generally showed improved absorbing characteristics with a thinner matching thickness. One of the mixed RAMs, MD3, with a thickness of 2.00 mm, had a 10 dB absorbing bandwidth of 4.0 GHz in the X-band (8.2-12.4 GHz). It also showed a very broad 10 dB absorbing bandwidth, as wide as 6.0 GHz, in the Ku-band (12.0-18.0 GHz) when the thickness was tuned to 1.49 mm. The experimental results for several selected specimens were in very good agreement with the simulated ones in terms of the overall reflection loss characteristics and the 10 dB absorbing bandwidth.

A Methodology for Automatic Multi-Categorization of Single-Categorized Documents (단일 카테고리 문서의 다중 카테고리 자동확장 방법론)

  • Hong, Jin-Sung;Kim, Namgyu;Lee, Sangwon
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.3
    • /
    • pp.77-92
    • /
    • 2014
  • Recently, numerous documents, including unstructured data and text, have been created due to the rapid increase in the usage of social media and the Internet. Each document is usually assigned a specific category for the convenience of users. In the past, this categorization was performed manually; however, with manual categorization, not only can the accuracy of the categorization not be guaranteed, but the process also requires a large amount of time and incurs huge costs. Many studies have been conducted on the automatic assignment of categories to overcome the limitations of manual categorization. Unfortunately, most of these methods cannot be applied to categorizing complex documents with multiple topics, because they assume that one document can be assigned to only one category. To overcome this limitation, some studies have attempted to assign each document to multiple categories. However, they are also limited in that their learning process requires training on a multi-categorized document set; these methods therefore cannot be applied to multi-categorization of most documents unless multi-categorized training sets are provided. To remove the requirement of a multi-categorized training set imposed by traditional multi-categorization algorithms, we propose a new methodology that can extend the category of a single-categorized document to multiple categories by analyzing the relationships among categories, topics, and documents. First, we find the relationship between documents and topics by using the results of topic analysis on single-categorized documents. Second, we construct a correspondence table between topics and categories by investigating the relationship between them. Finally, we calculate a matching score for each document with respect to multiple categories.
A document is then classified into a category if and only if its matching score is higher than a predefined threshold; for example, a document can be classified into the three categories whose matching scores exceed the threshold. The main contribution of our study is that our methodology can improve the applicability of traditional multi-category classifiers by generating multi-categorized documents from single-categorized ones. Additionally, we propose a module for verifying the accuracy of the proposed methodology. For performance evaluation, we performed intensive experiments with news articles. News articles are clearly categorized by theme, and vulgar language and slang are less frequent in them than in other typical text documents. We collected news articles from July 2012 to June 2013. The articles exhibit large variations in the number of articles per category, because readers have different levels of interest in each category and because events occur with different frequencies in each category. To minimize distortion of the results caused by these differences, we extracted 3,000 articles equally from each of eight categories, so the total number of articles used in our experiments was 24,000. The eight categories were "IT Science," "Economy," "Society," "Life and Culture," "World," "Sports," "Entertainment," and "Politics." Using the collected news articles, we calculated document/category correspondence scores from the topic/category and document/topic correspondence scores; the document/category correspondence score indicates the degree of correspondence of a document to a certain category. As a result, we could suggest two additional categories for each of 23,089 documents. 
Precision, recall, and F-score were 0.605, 0.629, and 0.617, respectively, when only the top 1 predicted category was evaluated, whereas they were 0.838, 0.290, and 0.431 when the top 1-3 predicted categories were considered. Interestingly, there was a large variation among the eight categories in precision, recall, and F-score.
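
The document/category correspondence score described above, combining document/topic and topic/category correspondences and then thresholding, can be sketched as a simple matrix product. The matrices and the threshold below are made-up illustrations, not values from the paper.

```python
import numpy as np

# Hypothetical correspondences: the paper derives these from topic
# analysis of single-categorized documents; the numbers are made up.
doc_topic = np.array([[0.7, 0.2, 0.1],    # per-document topic weights
                      [0.1, 0.1, 0.8]])
topic_cat = np.array([[0.9, 0.1],         # topic/category correspondence
                      [0.5, 0.5],
                      [0.1, 0.9]])

# Document/category matching score: combine the two correspondences.
doc_cat = doc_topic @ topic_cat

# Classify a document into every category whose score clears a threshold.
threshold = 0.5
extra_categories = doc_cat > threshold
```

Document 0 is dominated by topic 0 and so clears the threshold only for category 0, while document 1 clears it only for category 1; lowering the threshold yields more multi-category assignments.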

Caricaturing using Local Warping and Edge Detection (로컬 와핑 및 윤곽선 추출을 이용한 캐리커처 제작)

  • Choi, Sung-Jin;Bae, Hyeon;Kim, Sung-Shin;Woo, Kwang-Bang
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.13 no.4
    • /
    • pp.403-408
    • /
    • 2003
  • Caricature generally means a representation, especially pictorial or literary, in which the subject's distinctive features or peculiarities are deliberately exaggerated to produce a comic or grotesque effect. In other words, a caricature is a rough sketch (dessin) made by detecting features of a human face and exaggerating or warping them. Many computer-based methods for making a caricature image from a human face have been developed. In this paper, we propose a new caricaturing system. The system takes a real-time or supplied image as input, processes it in four steps, and finally creates a caricatured image. The four processing steps are as follows. The first step detects a face in the input image. The second step extracts special coordinate values as facial geometric information. The third step deforms the face image using a local warping method and the coordinate values acquired in the second step. In the fourth step, the system transforms the deformed image into an improved edge image using a fuzzy Sobel method and then creates the final caricatured image. The resulting caricaturing system is simpler than other existing systems in the way it creates a caricatured image, and it does not need complex algorithms involving many image processing methods such as image recognition, transformation, and edge detection.
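
The fourth step's edge image can be approximated with a plain Sobel gradient magnitude plus a threshold. This is only a stand-in for the paper's fuzzy Sobel method, and the synthetic image and threshold value are assumptions.

```python
import numpy as np
from scipy.ndimage import sobel

def edge_image(gray, threshold=0.25):
    """Normalised Sobel gradient magnitude, thresholded to a
    line-drawing-like binary edge map."""
    gx = sobel(gray.astype(float), axis=1)  # horizontal gradient
    gy = sobel(gray.astype(float), axis=0)  # vertical gradient
    mag = np.hypot(gx, gy)
    mag = mag / (mag.max() + 1e-12)         # scale to [0, 1]
    return mag > threshold

# A synthetic image with a bright square yields edges on its border only.
img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0
edges = edge_image(img)
```

A fuzzy variant would replace the hard threshold with membership functions over the gradient magnitude, which is what lets the paper's system keep faint facial contours.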