• Title/Summary/Keyword: Divide-and-Conquer


Divide and Conquer Strategy for CNN Model in Facial Emotion Recognition based on Thermal Images (얼굴 열화상 기반 감정인식을 위한 CNN 학습전략)

  • Lee, Donghwan;Yoo, Jang-Hee
    • Journal of Software Assessment and Valuation / v.17 no.2 / pp.1-10 / 2021
  • The ability to recognize human emotions by computer vision is a very important task with many potential applications. Therefore, the demand for emotion recognition using not only RGB images but also thermal images is increasing. Compared to RGB images, thermal images have the advantage of being less affected by lighting conditions but require a more sophisticated recognition method because of their low-resolution sources. In this paper, we propose a Divide and Conquer-based CNN training strategy to improve the performance of facial thermal image-based emotion recognition. The proposed method first trains a model to classify similar, difficult-to-classify emotion classes, identified by confusion matrix analysis, into the same class group, and then divides the problem so that each class group is recognized again as the actual emotions it contains. In experiments, the proposed method improved accuracy in all tests compared with recognizing all the presented emotions with a single CNN model.
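The grouping step described in the abstract, in which mutually confused emotion classes are merged via confusion-matrix analysis, can be sketched in plain Python. This is only an illustration of the idea, not the authors' code; the function name, the confusion threshold, and the example matrix are all hypothetical.

```python
# Sketch of the two-stage Divide and Conquer idea (a reading of the abstract,
# not the authors' implementation): emotion classes that a first-stage model
# frequently confuses are merged into one group; a second-stage model would
# then re-separate each group into actual emotions.

def confused_groups(confusion, threshold):
    """Union-find over classes whose mutual confusion rate exceeds threshold."""
    n = len(confusion)
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for i in range(n):
        row_sum = sum(confusion[i]) or 1
        for j in range(n):
            if i != j and confusion[i][j] / row_sum > threshold:
                parent[find(i)] = find(j)  # merge confused classes

    groups = {}
    for c in range(n):
        groups.setdefault(find(c), []).append(c)
    return list(groups.values())


# Hypothetical confusion matrix for 4 emotion classes: classes 0 and 1 are
# often mistaken for each other, so they end up in the same group.
cm = [
    [70, 25, 3, 2],
    [22, 71, 4, 3],
    [2, 3, 90, 5],
    [1, 2, 6, 91],
]
print(confused_groups(cm, threshold=0.2))  # classes 0 and 1 grouped
```

A second-stage classifier would then be trained per multi-class group to resolve the grouped labels back into individual emotions.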

An Automated Outsole Inspection System Using Scale Block and Divide-and-Conquer Technique (눈금 블록과 분할정복 기법을 이용한 신발 밑창 자동 검사 시스템)

  • Kim, Do-Hyeon;Kang, Dong-Koo;Cha, Eui-Young
    • Journal of Institute of Control, Robotics and Systems / v.8 no.8 / pp.625-632 / 2002
  • We propose an outsole measurement/inspection system to improve the quality of shoe products. It uses the Divide-and-Conquer technique to measure the length of a shoe's outsole. First, it detects the edge positions of the outsole's toe and heel from each image frame using a unique scale block we define, and calculates the outsole's length as the distance between the two edge positions. Then it compensates the total length of the outsole using the side image of the outsole. Next, it classifies the outsole as an inferior good if the measurement error is bigger than 5.8 mm. Testing with various kinds of outsoles showed that 95% accuracy was achieved within a 1 mm allowable error range. In conclusion, the proposed inspection system is effective and useful in the measurement/inspection process of shoe products, as well as other material objects.
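The length measurement and pass/fail decision described above can be sketched as follows. This is a hypothetical illustration only: the scale-block edge detection itself is not reproduced, and `mm_per_pixel`, the pixel coordinates, and the nominal length are made-up values.

```python
# Minimal sketch of the inspection decision from the abstract (illustrative
# only): outsole length is the distance between the detected toe and heel
# edge positions, and a part is rejected if the error exceeds 5.8 mm.
import math

def outsole_length(toe_xy, heel_xy, mm_per_pixel):
    """Outsole length in mm from toe/heel edge pixel positions."""
    dx = toe_xy[0] - heel_xy[0]
    dy = toe_xy[1] - heel_xy[1]
    return math.hypot(dx, dy) * mm_per_pixel

def is_inferior(measured_mm, nominal_mm, tolerance_mm=5.8):
    """Classify as inferior when measurement error exceeds the tolerance."""
    return abs(measured_mm - nominal_mm) > tolerance_mm

# Hypothetical edge positions and calibration factor.
length = outsole_length((620, 48), (20, 50), mm_per_pixel=0.5)
print(round(length, 1), is_inferior(length, nominal_mm=300.0))  # 300.0 False
```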

The Difference between Short and Long Intramedullary Nailing as the Treatment for Unstable Intertrochanteric Femoral Fracture (AO/OTA 31-A2) in Elderly Patients (고령환자에서 발생한 불안정성 대퇴골 전자간부 골절(AO/OTA 31-A2)의 치료 시 골수강내 금속정의 길이에 따른 추시 결과)

  • Shin, Won Chul;Lee, Eun Sung;Suh, Kuen Tak
    • Journal of the Korean Orthopaedic Association / v.52 no.1 / pp.25-32 / 2017
  • Purpose: The purpose of this study was to analyze the radiological and clinical outcomes in elderly patients with unstable intertrochanteric femur fractures in accordance with the length of the intramedullary nail. Materials and Methods: Between August 2009 and December 2014, a total of 139 patients-older than 65 years of age with AO/OTA classification of 31-A2 unstable intertrochanteric femur fracture-who had been followed up for at least 1 year after treatment with internal fixation using an intramedullary nail were enrolled in this retrospective control study. The subjects were classified into two groups according to the length of the intramedullary nail: 106 patients in the short group (group I) and 33 patients in the long group (group II). For radiological assessment, the reduction state, time to union, and implant-related complications were examined. The clinical outcomes were assessed by preoperative hemoglobin, operating time, intraoperative bleeding amount, blood transfusion rate, hospitalization period, and the Charnley hip pain scoring system at the final follow-up. Results: The postoperative radiographs showed good or acceptable reduction in all cases. The mean time to radiologic bone union was 4.8 months, with no difference between the two groups. With respect to surgical time, group II took longer (57.87 minutes) than group I (45.65 minutes) (p=0.003). Intraoperative bleeding in group II was greater (288.78 ml) than in group I (209.90 ml) (p=0.046). The clinical results at the final follow-up were satisfactory in both groups. Conclusion: In cases of good reduction of the fracture in the treatment of unstable intertrochanteric femur fracture accompanying a posteromedial fragment in elderly patients, both groups-long and short intramedullary nails-showed satisfactory radiological and clinical outcomes.

Quicksort Using Range Pivot (범위 피벗 퀵정렬)

  • Lee, Sang-Un
    • Journal of the Korea Society of Computer and Information / v.17 no.4 / pp.139-145 / 2012
  • Generally, Quicksort selects its pivot from the leftmost, rightmost, middle, or a random location in the array. This paper suggests a Quicksort using a middle-range pivot $P_0$ that is continually divided by 2. The method first finds the minimum value $L$ and maximum value $H$ in the list $A$ of length $n$, computes the initial pivot key $P_0=(H+L)/2$, and swaps $a[i]{\geq}P_0$, $a[j]<P_0$ until $i=j$ or $i>j$. After the swaps, list $A_0$ is separated into two lists $a[1]{\leq}A_1{\leq}a[j]$ and $a[i]{\leq}A_2{\leq}a[n]$, whose pivot values are selected as $P_1=P_0/2$ and $P_2=P_0+P_1$. This process is repeated until the length of a partial list is two; when a list has length two and $a[1]>a[2]$, the elements are swapped as $a[1]{\leftrightarrow}a[2]$. This method has a simpler pivot-selection process than Quicksort and improves the worst-case computational complexity from $O(n^2)$ to $O(n{\log}n)$.
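The procedure in the abstract can be reconstructed as runnable Python. This is a reading of the method, not the author's code; for robustness the sketch recomputes the value range of each sublist instead of deriving the next pivots arithmetically as $P_1=P_0/2$ and $P_2=P_0+P_1$ (those formulas correspond to halving the value range when the minimum is zero).

```python
# Range-pivot Quicksort sketch: the pivot is not an element but the midpoint
# of the value range of the current sublist; length-2 sublists are finished
# with a single compare-and-swap, as described in the abstract.

def range_pivot_sort(a):
    """Sort list a in place; returns a for convenience."""
    _sort(a, 0, len(a) - 1)
    return a

def _sort(a, lo, hi):
    if hi - lo == 1:                 # length-2 sublist: one compare-and-swap
        if a[lo] > a[hi]:
            a[lo], a[hi] = a[hi], a[lo]
        return
    if hi <= lo:
        return
    low, high = min(a[lo:hi + 1]), max(a[lo:hi + 1])
    if low == high:                  # all elements equal: already sorted
        return
    pivot = (low + high) / 2         # range midpoint, P = (H + L) / 2
    i, j = lo, hi
    while True:                      # swap a[i] >= P with a[j] < P
        while a[i] < pivot:
            i += 1
        while a[j] >= pivot:
            j -= 1
        if i > j:
            break
        a[i], a[j] = a[j], a[i]
        i += 1
        j -= 1
    _sort(a, lo, j)                  # sublist with values below P
    _sort(a, i, hi)                  # sublist with values at or above P

print(range_pivot_sort([5, 3, 8, 1, 9, 2, 7]))  # [1, 2, 3, 5, 7, 8, 9]
```

Because the pivot is a value rather than an element, each partition splits the value range in half, which is the source of the claimed $O(n{\log}n)$ worst case for bounded-range keys.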

Efficient Topic Modeling by Mapping Global and Local Topics (전역 토픽의 지역 매핑을 통한 효율적 토픽 모델링 방안)

  • Choi, Hochang;Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.23 no.3 / pp.69-94 / 2017
  • Recently, increasing demand for big data analysis has been driving the vigorous development of related technologies and tools. In addition, the development of IT and the increased penetration rate of smart devices are producing a large amount of data. Owing to this phenomenon, data analysis technology is rapidly becoming popular, and attempts to acquire insights through data analysis have been continuously increasing. This means that big data analysis will become more important in various industries for the foreseeable future. Big data analysis is generally performed by a small number of experts and delivered to each demander of analysis. However, growing interest in big data analysis has spurred computer programming education and the development of many programs for data analysis. Accordingly, the entry barriers to big data analysis are gradually lowering and data analysis technology is spreading; as a result, big data analysis is expected to be performed by the demanders of analysis themselves. Along with this, interest in various kinds of unstructured data is continually increasing, with particular attention focused on text data. The emergence of new platforms and techniques using the web has brought about the mass production of text data and active attempts to analyze it, and the results of text analysis have been utilized in various fields. Text mining is a concept that embraces various theories and techniques for text analysis. Among the many text mining techniques utilized for various research purposes, topic modeling is one of the most widely used and studied. Topic modeling is a technique that extracts the major issues from a large set of documents, identifies the documents that correspond to each issue, and provides the identified documents as a cluster. It is evaluated as a very useful technique in that it reflects the semantic elements of the documents.
Traditional topic modeling is based on the distribution of key terms across the entire document set. Thus, it is essential to analyze the entire set at once to identify the topic of each document. This causes long analysis times when topic modeling is applied to many documents, and it creates a scalability problem: an exponential increase in processing time as the number of analysis objects grows. This problem is particularly noticeable when the documents are distributed across multiple systems or regions. To overcome these problems, a divide and conquer approach can be applied to topic modeling: a large number of documents is divided into sub-units, and topics are derived by repeating topic modeling on each unit. This method enables topic modeling on a large number of documents with limited system resources and can improve the processing speed of topic modeling. It can also significantly reduce analysis time and cost, since documents in each location can be analyzed without first combining all of the analysis objects. However, despite these advantages, the method has two major problems. First, the relationship between the local topics derived from each unit and the global topics derived from the entire document set is unclear: local topics can be identified in each unit, but global topics cannot. Second, a method for measuring the accuracy of the proposed methodology must be established; that is, assuming the global topics are the ideal answer, the deviation of the local topics from the global topics needs to be measured. Owing to these difficulties, this approach has not been studied sufficiently compared with other work on topic modeling. In this paper, we propose a topic modeling approach that solves the above two problems.
First, we divide the entire document cluster (global set) into sub-clusters (local sets) and generate a reduced entire document cluster (RGS, reduced global set) consisting of delegate documents extracted from each local set. We address the first problem by mapping RGS topics to local topics. Along with this, we verify the accuracy of the proposed methodology by detecting whether documents are assigned the same topic in the global and local results. Using 24,000 news articles, we conduct experiments to evaluate the practical applicability of the proposed methodology. Through an additional experiment, we confirmed that the proposed methodology can provide results similar to topic modeling over the entire set, and we also propose a reasonable method for comparing the results of both approaches.
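The local-to-global mapping step can be illustrated with a small sketch in which topics are plain term-weight dictionaries and each local topic is matched to its most similar RGS topic by cosine similarity. This is an assumed formulation for illustration, not the paper's implementation; the topic names and weights are hypothetical.

```python
# Illustrative sketch of mapping local topics to reduced-global-set (RGS)
# topics: each topic is a dict of term weights, and a local topic is mapped
# to the RGS topic with the highest cosine similarity.
import math

def cosine(t1, t2):
    """Cosine similarity of two term-weight dictionaries."""
    terms = set(t1) | set(t2)
    dot = sum(t1.get(w, 0.0) * t2.get(w, 0.0) for w in terms)
    n1 = math.sqrt(sum(v * v for v in t1.values()))
    n2 = math.sqrt(sum(v * v for v in t2.values()))
    return dot / (n1 * n2) if n1 and n2 else 0.0

def map_local_to_global(local_topics, global_topics):
    """For each local topic, pick the most similar global (RGS) topic."""
    return {
        name: max(global_topics, key=lambda g: cosine(dist, global_topics[g]))
        for name, dist in local_topics.items()
    }

# Hypothetical RGS topics and local topics from one sub-cluster.
rgs = {"economy": {"market": 0.5, "stock": 0.4},
       "sports": {"game": 0.6, "team": 0.3}}
local = {"L1": {"stock": 0.7, "market": 0.2},
         "L2": {"team": 0.5, "game": 0.4}}
print(map_local_to_global(local, rgs))  # {'L1': 'economy', 'L2': 'sports'}
```

With such a mapping, documents assigned to a local topic can be compared against the global assignment of the same documents, which is the accuracy check the abstract describes.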

Clinical Features of Simple Bronchial Anthracofibrosis which is not Associated with Tuberculosis (비결핵성 기관지탄분섬유화증의 임상 양상)

  • Lee, Hee-Seub;Maeng, Joo-Hee;Park, Pae-Gun;Jang, Jin-Gun;Park, Wan;Ryu, Dae-Sik;Kang, Gil-Hyun;Jung, Bock-Hyun
    • Tuberculosis and Respiratory Diseases / v.53 no.5 / pp.510-518 / 2002
  • Background: Bronchial anthracofibrosis (BAF) is a dark black or brown pigmentation of multiple large bronchi, associated with fibrotic stenosis or obliteration, that is found incidentally during diagnostic bronchoscopy. Some reports have suggested endobronchial tuberculosis or tuberculous lymphadenitis as a possible cause of BAF; however, some BAF patients do not have any medical history of tuberculosis. The aim of this study was to elucidate the clinical features of simple BAF patients without associated tuberculosis. Methods: We reviewed patient charts retrospectively and interviewed all BAF patients who were followed up for 1 year or more. Among the 114 BAF patients, 43 patients (38%) had no associated tuberculosis, cancer, or pneumoconiosis. The clinical characteristics, radiological findings, and associated pulmonary diseases of these patients were evaluated. Results: Most patients were elderly, non-smoking housewives who resided in farming villages. The common respiratory symptoms were dyspnea, cough, and hemoptysis. The predominant X-ray findings were multiple bronchial wall thickening (89%), bronchial narrowing or atelectasis (76%), and mediastinal lymph node enlargement with/without calcification (78%). Pulmonary function tests usually showed mild obstructive ventilatory abnormalities, but no patient showed a restrictive ventilatory pattern, and the patients were frequently affected with chronic bronchitis (51%), post-obstructive pneumonia (40%), and chronic asthma (4%). Conclusion: Because BAF is frequently associated with chronic bronchitis and obstructive pneumonia as well as tuberculosis, careful clinical evaluation and accurate differential diagnosis are more essential than empirical anti-tuberculous medication.

A Study on Algorithm for the Wavelength and Routing Assignment Problem on All-optical Tree Networks (전광 트리 네트워크에서 파장 및 경로설정 문제를 해결하는 알고리즘에 관한 연구)

  • Kim, Soon-Seok;Yeo, Sang-Su;Kim, Sung-Kwon
    • The Transactions of the Korea Information Processing Society / v.7 no.12 / pp.3952-3963 / 2000
  • This paper considers the WRA (Wavelength and Routing Assignment) problem on all-optical tree networks using WDM (Wavelength Division Multiplexing). Because of technical constraints, requests sharing a link on an all-optical network must be assigned different wavelengths. On this basis, we give a polynomial-time algorithm that assigns wavelengths to all paths of an arbitrary tree network using the divide and conquer method. The time complexity of this algorithm is O(Q·R), in which Q is the number of request nodes for all-to-all communication in the tree topology and R is the maximum number of wavelengths. We also implemented our algorithm in C on a Pentium II 233 MHz machine and analyzed the performance results.
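The paper's divide-and-conquer algorithm is not reproduced here, but the underlying WDM constraint (two lightpaths sharing a fiber link must receive different wavelengths) can be illustrated with a simple first-fit sketch. The data structures and example paths are hypothetical.

```python
# First-fit sketch of the WDM wavelength-clash constraint (not the paper's
# divide-and-conquer algorithm): each lightpath receives the smallest
# wavelength index that is unused on every link it traverses.

def assign_wavelengths(paths):
    """paths: list of edge sets; returns one wavelength index per path."""
    used = {}      # edge -> set of wavelengths already assigned on that link
    result = []
    for edges in paths:
        w = 0
        while any(w in used.get(e, set()) for e in edges):
            w += 1  # smallest wavelength free on every link of the path
        for e in edges:
            used.setdefault(e, set()).add(w)
        result.append(w)
    return result

# Three hypothetical paths on a small tree: the first two share link (a, b),
# so they must get different wavelengths; the third reuses wavelength 0.
paths = [
    {("a", "b"), ("b", "c")},
    {("a", "b")},
    {("b", "d")},
]
print(assign_wavelengths(paths))  # [0, 1, 0]
```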


Finding the Minimum MBRs Embedding K Points (K개의 점 데이터를 포함하는 최소MBR 탐색)

  • Kim, Keonwoo;Kim, Younghoon
    • Journal of KIISE / v.44 no.1 / pp.71-77 / 2017
  • There has been a recent spate in the usage of mobile devices equipped with GPS sensors, such as smartphones. This trend enables the posting of geo-tagged messages (i.e., multimedia messages with GPS locations) on social media such as Twitter and Facebook, and the volume of such spatial data is growing rapidly. However, the relationship between the location and the content of messages is not always shown explicitly in such geo-tagged messages. Thus, the need arises to reorganize search results to find the relationship between keywords and the spatial distribution of messages. We find the smallest minimum bounding rectangle (MBR) embedding k or more points in order to find the densest rectangle of data, which can be useful in location search systems. In this paper, we suggest efficient algorithms to discover such a group of nearby 2-dimensional spatial data points as an MBR. The efficiency of our proposed algorithms is confirmed experimentally with synthetic and real data sets.
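A brute-force baseline for the problem, useful for checking any efficient algorithm against small inputs, enumerates candidate rectangles bounded by input point coordinates. This is not one of the paper's algorithms, just an illustrative reference implementation with made-up example data.

```python
# Brute-force baseline for "smallest MBR embedding k points": among
# axis-aligned rectangles whose sides pass through input coordinates, find
# one of minimum area containing at least k points.
from itertools import combinations_with_replacement as cwr

def min_mbr_k(points, k):
    """Smallest-area axis-aligned rectangle covering >= k of the points."""
    xs = sorted({p[0] for p in points})
    ys = sorted({p[1] for p in points})
    best_area, best_rect = float("inf"), None
    for x1, x2 in cwr(xs, 2):          # candidate vertical sides (x1 <= x2)
        for y1, y2 in cwr(ys, 2):      # candidate horizontal sides
            inside = sum(x1 <= x <= x2 and y1 <= y <= y2 for x, y in points)
            area = (x2 - x1) * (y2 - y1)
            if inside >= k and area < best_area:
                best_area, best_rect = area, (x1, y1, x2, y2)
    return best_rect, best_area

pts = [(0, 0), (1, 1), (2, 2), (9, 9), (1, 2)]
print(min_mbr_k(pts, 3))  # the dense cluster near the origin wins
```

An optimal MBR always has its sides touching input points, which is why restricting candidates to input coordinates loses nothing; the cost here is O(n^5), so this serves only as a correctness oracle for faster methods.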

Chromatic Number Algorithm for Exam Scheduling Problem (시험 일정 계획 수립 문제에 관한 채색 수 알고리즘)

  • Lee, Sang-Un
    • Journal of the Korea Society of Computer and Information / v.20 no.4 / pp.111-117 / 2015
  • The exam scheduling problem has been classified as a nondeterministic polynomial time-complete (NP-complete) problem because no polynomial-time algorithm that obtains the exact solution is known yet. Guéret et al. obtained the solution using linear programming with $O(m^4)$ time complexity. This paper, on the other hand, suggests a chromatic number algorithm with O(m) time complexity. The proposed algorithm first converts the original data into an incompatibility matrix over modules and a graph. Then, it packs the minimum-degree vertex (module) and the vertices not adjacent to it into a bin $B_i$ with color $C_i$, so that exams finish within a minimum time period while meeting the incompatibility constraints. As a result of experiments, this algorithm reduces the $O(m^4)$ of linear programming to O(m) time complexity for the exam scheduling problem and obtains the same solution as linear programming.
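The packing step described above can be sketched as follows. This is a reconstruction from the abstract, not the paper's code, and the module names and incompatibility graph are hypothetical.

```python
# Sketch of the bin-packing coloring from the abstract: repeatedly start a
# new color bin with a minimum-degree unassigned module, then add every
# unassigned module not adjacent to anything already in the bin.

def exam_bins(incompat):
    """incompat: dict module -> set of modules it may not share a slot with."""
    unassigned = set(incompat)
    bins = []
    while unassigned:
        # Minimum-degree vertex among the remaining modules starts the bin.
        start = min(unassigned, key=lambda m: len(incompat[m] & unassigned))
        bin_ = [start]
        for m in sorted(unassigned - {start}):
            if all(m not in incompat[b] for b in bin_):
                bin_.append(m)              # compatible with the whole bin
        bins.append(sorted(bin_))
        unassigned -= set(bin_)
    return bins

# Hypothetical incompatibility graph: math clashes with physics and stats.
g = {
    "math": {"physics", "stats"},
    "physics": {"math"},
    "stats": {"math"},
    "art": set(),
}
print(exam_bins(g))  # [['art', 'math'], ['physics', 'stats']]
```

Each bin corresponds to one exam period (one color), so the number of bins is the schedule length the algorithm tries to minimize.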

Internal Fixation of Proximal Humerus Fracture with Polyaxial Angular Stable Locking Compression Plate in Patients Older Than 65 Years (65세 이상의 상완골 근위부 골절 환자에서 다축 각안정 잠김 압박 금속판을 이용한 내고정술)

  • Lee, Ki-Won;Choi, Young-Joon;Ahn, Hyung-Sun;Kim, Chung-Hwan;Hwang, Jae-Kwang;Kang, Jeong-Ho;Choo, Han-Ho;Park, Jun-Seok;Kim, Tae-Kyung
    • Clinics in Shoulder and Elbow / v.15 no.1 / pp.25-31 / 2012
  • Purpose: The clinical and radiographic outcomes of internal fixation performed on patients over the age of 65 with proximal humerus fractures, using a polyaxial angular stable locking compression plate (Non-Contact-Bridging proximal humerus plate, Zimmer, Switzerland, NCB), were evaluated. Materials and Methods: Thirty-two patients over the age of 65, among the proximal humerus fractures treated with the NCB plate between August 2007 and January 2011, were chosen as the subjects. The average age of the patients was 71 years, and the average postoperative follow-up period was 11.5 months. The fractures included 14 two-part and 18 three-part fractures. The clinical results were evaluated using the visual analog scale (VAS) score and the Constant score. The radiological results were evaluated by time to union and the Paavolainen method, which measures the neck-shaft angle. Results: At the last follow-up examination, the mean VAS score was 3 points and the mean Constant score was 64.5 points, with bone union achieved an average of 16.2 weeks after surgery in all cases. The mean neck-shaft angle was 125.9, and by the Paavolainen method 24 cases had good results while 8 cases had fair results at the last follow-up. There was 1 case of delayed union with cerclage wire failure, and 3 cases of subacromial impingement. There were no complications such as loss of reduction, nonunion, screw loosening, or avascular necrosis of the humeral head. Conclusion: Internal fixation using an NCB plate was considered an effective surgical method for treating proximal humerus fractures in elderly patients, in whom fixation of the fracture and maintenance of reduction are difficult.