• Title/Summary/Keyword: divide-by N

Search Results: 75

An Efficient Clustering Algorithm for Massive GPS Trajectory Data (대용량 GPS 궤적 데이터를 위한 효율적인 클러스터링)

  • Kim, Taeyong; Park, Bokuk; Park, Jinkwan; Cho, Hwan-Gue
    • Journal of KIISE / v.43 no.1 / pp.40-46 / 2016
  • Digital road map generation is primarily based on satellite photography or on-site manual survey work, so creating and updating road maps requires a great deal of time and a large budget. Consequently, researchers have tried to develop automated map generation systems using GPS trajectory data sets obtained from public vehicles. A fundamental problem in this road generation procedure is the extraction of representative trajectories, such as main roads. Extracting a representative trajectory requires a base data set of piecewise line segments (GPS trajectories) that have close starting and ending points, so geometrically similar trajectories are clustered before one representative trajectory is extracted from them. This paper proposes a new divide-and-conquer approach that partitions the whole map region into regular grid sub-spaces and then finds similar trajectories by sweeping the sub-spaces. The Fréchet distance measure is applied to compute the similarity between a pair of trajectories. We conducted experiments using a set of real GPS data with more than 500 vehicle trajectories obtained from Gangnam-gu, Seoul. The experiments show that our grid partitioning approach is fast and stable and can be used in real applications for vehicle trajectory clustering.
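The similarity measure named above, the Fréchet distance, can be sketched as follows. This is only an illustrative sketch under assumed parameters (a discrete Fréchet distance on point sequences and a hypothetical grid cell size), not the paper's implementation, which sweeps grid sub-spaces over a much larger trajectory set.

```python
# Illustrative sketch (not the paper's code): discrete Frechet distance between
# two GPS polylines, plus a simple grid key used to group nearby trajectories.
from functools import lru_cache
from math import hypot

def discrete_frechet(p, q):
    """p, q: lists of (x, y) points. Returns the discrete Frechet distance."""
    @lru_cache(maxsize=None)
    def c(i, j):
        d = hypot(p[i][0] - q[j][0], p[i][1] - q[j][1])
        if i == 0 and j == 0:
            return d
        if i == 0:
            return max(c(0, j - 1), d)
        if j == 0:
            return max(c(i - 1, 0), d)
        return max(min(c(i - 1, j), c(i - 1, j - 1), c(i, j - 1)), d)
    return c(len(p) - 1, len(q) - 1)

def grid_key(point, cell=0.001):
    """Map a point to a regular grid cell (the cell size is an assumed parameter)."""
    return (int(point[0] // cell), int(point[1] // cell))

# Example: two nearly parallel trajectories yield a small Frechet distance.
a = [(0.0, 0.0), (1.0, 0.1), (2.0, 0.0)]
b = [(0.0, 0.2), (1.0, 0.3), (2.0, 0.2)]
print(discrete_frechet(a, b))   # 0.2
```

For long trajectories the recursive memoization above should be replaced by an iterative dynamic-programming table to avoid recursion-depth limits.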

Accuracy Comparison as World Geodetic Datum Transformation of 1/1000 Digital Map (1/1,000 수치지형도의 세계측지계 변환에 따른 정확도 비교)

  • Yun, Seok-Jin; Park, Joung-Hyun; Park, Joon-Kyu
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.27 no.2 / pp.169-175 / 2009
  • As the national standard of measurement changes to the world geodetic system under the revised surveying law, existing 1/1,000 digital maps must be transformed to the world geodetic system, and a standard strategy is needed to minimize the confusion and error caused by this conversion of the geodetic surveying standard. The conversion of digital maps must therefore be performed efficiently and consistently according to the relevant official notice. Using the 1/1,000 digital maps, local geodetic system coordinates, and world geodetic system coordinates that had been used in the UIS business of Pusan city as common points, we analyzed the distortion quantity with KASM Trans Ver 2.2. In the distortion calculation for the whole of Pusan, the number of areas with an error over 0.05 m was 35 for X(N) and 43 for Y(E). Because some work sections showed particularly large errors, we divided the city into three areas, A, B, and C, and analyzed them separately. In this analysis, errors of more than 0.05 m occurred in only one X(E) case in area B and in one X(N) and one Y(E) case in area C. In conclusion, for a large area like Pusan, considering the distortion quantity, dividing the region, and then transforming to the world geodetic system is an effective method.
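As a rough illustration of the distortion check described above, the sketch below flags common points whose transformation residuals exceed the 0.05 m tolerance. The point records and field names are hypothetical; the actual transformation and distortion modelling in the paper were done with KASM Trans Ver 2.2.

```python
# Hedged sketch: flag common points whose datum-transformation residuals exceed
# the 0.05 m tolerance discussed above. The record layout is an assumption.
TOLERANCE_M = 0.05

def flag_large_residuals(points):
    """points: iterable of dicts with transformed and reference N/E coordinates (m)."""
    flagged = []
    for p in points:
        dn = abs(p["n_transformed"] - p["n_reference"])   # X(N) residual
        de = abs(p["e_transformed"] - p["e_reference"])   # Y(E) residual
        if dn > TOLERANCE_M or de > TOLERANCE_M:
            flagged.append((p["id"], dn, de))
    return flagged
```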

An Improvement in K-NN Graph Construction using re-grouping with Locality Sensitive Hashing on MapReduce (MapReduce 환경에서 재그룹핑을 이용한 Locality Sensitive Hashing 기반의 K-Nearest Neighbor 그래프 생성 알고리즘의 개선)

  • Lee, Inhoe; Oh, Hyesung; Kim, Hyoung-Joo
    • KIISE Transactions on Computing Practices / v.21 no.11 / pp.681-688 / 2015
  • The k-nearest neighbor (k-NN) graph construction is an important operation with many web-related applications, including collaborative filtering, similarity search, and many others in data mining and machine learning. Despite its many elegant properties, the brute-force k-NN graph construction method has a computational complexity of O(n^2), which is prohibitive for large-scale data sets. Thus, the (key, value)-based distributed framework MapReduce is increasingly combined with Locality Sensitive Hashing, which is efficient for high-dimensional and sparse data. Following a two-stage strategy, we use locality sensitive hashing to divide users into small subsets and then calculate the similarity between pairs within each subset using a brute-force method on MapReduce. The candidate-group generation stage is critical, since brute-force calculation is performed in the following step; however, existing methods do not prevent overly large candidate groups. In this paper, we propose an efficient algorithm for approximate k-NN graph construction that regroups candidate groups. Experimental results show that our approach is more effective than existing methods in terms of graph accuracy and scan rate.
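The two-stage strategy described above can be sketched as follows: random-hyperplane LSH assigns users to candidate groups, and brute-force k-NN runs inside each group. This is an in-memory illustration, not the paper's MapReduce implementation, and the simple group-size cap only stands in for the proposed regrouping step.

```python
# Illustrative sketch: LSH bucketing followed by brute-force k-NN within buckets.
import numpy as np

def lsh_buckets(vectors, n_planes=4, seed=0):
    """Hash each row vector to a signature of sign bits against random hyperplanes."""
    rng = np.random.default_rng(seed)
    planes = rng.standard_normal((n_planes, vectors.shape[1]))
    bits = (vectors @ planes.T) > 0                        # one signature bit per plane
    buckets = {}
    for idx, key in enumerate(map(tuple, bits)):
        buckets.setdefault(key, []).append(idx)
    return buckets

def knn_within_buckets(vectors, buckets, k=3, max_group=1000):
    """Brute-force k-NN inside each candidate group; max_group crudely caps group size."""
    graph = {}
    for members in buckets.values():
        group = members[:max_group]
        sub = vectors[group]
        d = np.linalg.norm(sub[:, None, :] - sub[None, :, :], axis=-1)
        for row, i in enumerate(group):
            order = np.argsort(d[row])
            graph[i] = [group[j] for j in order[1:k + 1]]  # skip self at position 0
    return graph

vecs = np.random.default_rng(1).standard_normal((200, 16))
graph = knn_within_buckets(vecs, lsh_buckets(vecs))
```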

Automated Cell Counting Method for HeLa Cells Image based on Cell Membrane Extraction and Back-tracking Algorithm (세포막 추출과 역추적 알고리즘 기반의 HeLa 세포 이미지 자동 셀 카운팅 기법)

  • Kyoung, Minyoung; Park, Jeong-Hoh; Kim, Myoung gu; Shin, Sang-Mo; Yi, Hyunbean
    • Journal of KIISE / v.42 no.10 / pp.1239-1246 / 2015
  • Cell counting is extensively used to analyze cell growth in biomedical research, and automated cell counting methods have been developed to provide a more convenient means of analyzing cell growth. However, many challenges remain in improving the counting accuracy for cells that proliferate abnormally, divide rapidly, and cluster easily, such as cancer cells. In this paper, we present an automated cell counting method for HeLa cells, which are widely used as a reference in cancer research. We recognize and classify the morphological conditions of the cells using a cell segmentation algorithm based on cell membrane extraction, and we then apply a cell back-tracking algorithm to improve the counting accuracy in cell clusters that have indistinct boundary lines. The experimental results indicate that our proposed segmentation method identifies individual cells more accurately than existing methods and, consequently, improves the cell counting accuracy.
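For contrast with the membrane-extraction and back-tracking approach above, the sketch below shows the naive thresholding-plus-connected-components baseline that fails on clustered cells (touching cells merge into one component). The threshold and minimum-area values are assumptions, not values from the paper.

```python
# Naive baseline sketch, not the paper's method: threshold a grayscale image and
# count connected components; clustered cells merge, which is the failure case
# the proposed membrane-extraction / back-tracking method addresses.
import numpy as np
from scipy import ndimage

def naive_cell_count(gray, threshold=128, min_area=50):
    """gray: 2-D uint8 array. threshold and min_area are assumed parameters."""
    mask = gray > threshold
    labels, n = ndimage.label(mask)                      # connected-component labeling
    sizes = ndimage.sum(mask, labels, range(1, n + 1))   # pixel count per component
    return int(np.sum(np.asarray(sizes) >= min_area))    # discard small noise blobs
```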

The Expression of CD 18 on Ischemia- Reperfusion Injury of TRAM Flap of Rats (흰쥐의 복직근피부피판에 일으킨 허혈-재관류 손상에서 CD18의 발현)

  • Yoon, Sang Yup; Lee, Taik Jong; Hong, Joon Pio
    • Archives of Plastic Surgery / v.33 no.6 / pp.737-741 / 2006
  • Purpose: This study evaluated the expression pattern of CD18 (leukocyte adhesion glycoprotein) in ischemia-reperfusion injury of the TRAM flap in rats, in order to obtain more information about ischemia-reperfusion injury and, in the future, to support the development of specific drugs that improve TRAM flap survival. Methods: A TRAM flap supplied by a single-pedicle superior epigastric artery and vein was elevated in 60 Sprague-Dawley rats. The rats were divided into 6 groups (each n=10): Group O, sham, no ischemia-reperfusion injury; Group I, 2-hour reperfusion after 4-hour ischemia; Group II, 4-hour reperfusion after 4-hour ischemia; Group III, 8-hour reperfusion after 4-hour ischemia; Group IV, 12-hour reperfusion after 4-hour ischemia; and Group V, 24-hour reperfusion after 4-hour ischemia. The study consisted of gross examination of flap survival and flow cytometry of CD18 on neutrophils. Results: Gross measurement of the flaps showed different survival rates in groups I (71%), II (68%), III (37%), IV (34%), and V (34%). All experimental groups showed increased CD18 expression compared to group O. CD18 expression rose rapidly in ascending order through groups I, II, and III, but was maintained at that level in groups IV and V. Conclusion: These results can be used in studies to develop drugs capable of reducing ischemia-reperfusion injury in microsurgical breast reconstruction.

H-Plane 8-Way Rectangular Waveguide Power Divider Using Y-Junction (Y-Junction을 이용한 H-평면 8-Way 구형 도파관 전력 분배기)

  • Lee, Sang-Heun; Yoon, Ji-Hwan; Yoon, Young-Joong; Kim, Jun-Yeon; Lee, Woo-Sang; Park, Seul-Gi
    • The Journal of Korean Institute of Electromagnetic Engineering and Science / v.23 no.2 / pp.151-158 / 2012
  • This paper proposes an H-plane 8-way rectangular waveguide power divider using Y-junctions. A general N-way power divider can be composed of multi-stage T-junctions; however, if the output ports are closely spaced, the matching characteristic cannot be improved with T-junctions alone because of the space limitation. In this case, other types of 3-port junctions must be used at the final output stage, so Y-junctions are used together with T-junctions in this paper. The proposed Y-junction uses a tapered-line impedance transformer and inductive irises to improve the impedance matching characteristic. The 8-way power divider using Y-junctions was fabricated and measured. The measured return loss and insertion loss from the input port to an output port are -30.8 dB and -9.3 dB at the operating frequency, respectively, and the measured maximum phase difference is about 1°. Therefore, the proposed power divider is useful for various microwave systems that need to divide the input power equally, such as feed networks for array antennas.
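The reported figures can be sanity-checked with a short calculation: an ideal lossless 8-way equal split attenuates each output by 10·log10(8) ≈ 9.03 dB, so the measured 9.3 dB insertion loss corresponds to roughly 0.27 dB of excess loss per path.

```python
# Worked check of the abstract's numbers for an N-way equal-split divider.
from math import log10

n_ways = 8
ideal_split_db = 10 * log10(n_ways)          # ideal equal split: ~9.03 dB per output
measured_insertion_loss_db = 9.3             # magnitude of the reported -9.3 dB
excess_loss_db = measured_insertion_loss_db - ideal_split_db
print(f"ideal {ideal_split_db:.2f} dB, excess ≈ {excess_loss_db:.2f} dB")
```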

ISAAC : An Integrated System with User Interface for Sentence Analysis (ISAAC :문장분석용 통합시스템 및 사용자 인터페이스)

  • Kim, Gon; Kim, Min-Chan; Bae, Jae-Hak; Lee, Jong-Hyuk
    • The KIPS Transactions: Part B / v.11B no.1 / pp.107-116 / 2004
  • This paper introduces ISAAC (An Interface for Sentence Analysis & Abstraction with Cogitation), which provides an integrated user interface for sentence analysis. ISAAC integrates the various linguistic tools and resources necessary for sentence analysis. Most such tools and resources are developed and accumulated independently, so when analyzing sentences with them it is difficult for the analyst to manage and control the information produced at each step. We therefore integrated the available tools and resources and built ISAAC to provide a consistent, user-oriented interface to each function. The sentence analysis process is divided into 14 steps, which ISAAC handles with four modules: (1) syntactic analysis of the sentence, (2) retrieval of a root word, (3) search for category information in Roget's Thesaurus, and (4) search for category information in OfN (Ontology for Narratives). With ISAAC, the overall 14-step process is thus reduced to 4 steps, which can improve the analyst's throughput by a factor of about 3.5 or more. Furthermore, since ISAAC takes over the tedious transcription required at each step, we expect it to help the analyst maintain the accuracy of sentence analysis.

Comparison of Efficacy of Propofol When Used with or without Remifentanil during Conscious Sedation with a Target-Controlled Infuser for Impacted Teeth Extraction

  • Sung, Juhan; Kim, Hyun-Jeong; Choi, Yoon Ji; Lee, Soo Eon; Seo, Kwang-Suk
    • Journal of The Korean Dental Society of Anesthesiology / v.14 no.4 / pp.213-219 / 2014
  • Background: Clinical use of propofol together with remifentanil for intravenous sedation is increasing, but there is not enough research to establish the proper target concentrations when these drugs are infused with a target controlled infusion (TCI) pump in dental treatment. In this study, we compared the efficacy of TCI conscious sedation and the target concentration of propofol when it was used with or without remifentanil during conscious sedation with a TCI pump for the surgical extraction of impacted teeth. Methods: After IRB approval, the charts of all patients who had undergone surgical extraction of impacted teeth under propofol TCI sedation over 6 months were selected and reviewed. After reviewing the charts, we divided the patients into two groups. In one group (group 1), only propofol was used for sedation, with an initial effect-site concentration of 1 µg/ml (n = 33); in the other group (group 2), both propofol and remifentanil were infused, with initial effect-site concentrations of 0.6 µg/ml and 1 ng/ml, respectively (n = 25). For each group, the average propofol target concentration was measured. In addition, we compared heart rate, respiratory rate, systolic and diastolic blood pressure, and oxygen saturation, as well as BIS, sedation scores (OAAS/S), and subjective satisfaction scores. Results: Between groups 1 and 2 there were no significant differences in demographics (age, weight, and height) or total sedation time. However, the total infused dose and the effect-site target concentration of propofol were 163.8 ± 74.5 mg and 1.13 ± 0.21 µg/ml in group 1, and 104.3 ± 46.5 mg and 0.72 ± 0.26 µg/ml in group 2, with an effect-site target concentration of remifentanil of 1.02 ± 0.21 ng/ml. During sedation, there were no differences in overall vital signs, BIS, or OAAS/S between the two groups (P > 0.05). However, patients in group 2 had decreased pain sensation during sedation. Conclusions: Co-administration of propofol with remifentanil via a TCI for the surgical extraction of impacted teeth may be safe and effective compared to administration of propofol alone.

A Study on the Fashion Illustration of 17th Century (17세기 복식디자인화에 관한 연구)

  • 이순홍; 황수정
    • The Research Journal of the Costume Culture / v.2 no.2 / pp.395-413 / 1994
  • Costume is a mirror of the diverse life styles and attitudes of human life; it has a meaning beyond "clothing". Fashion illustration expresses these costumes in pictures, so in a historical sense it can be called a 'mirror of costumes'. The purpose of this study is to find the meaning of the fashion illustration of the 17th century, regarded as the first of its kind, and to examine its characteristics and the costumes of the 17th century, re-spotlighting the fashion illustrators and painters related to fashion illustration in those days. The study covers Western Europe and is based on the literature. The fashion illustrations of the 17th century were designed by painters and fashion illustrators such as Wenceslaus Hollar, Abraham Bosse, Jacques Callot, Jean de St Jean, N. Bonar, A. Trouvain, and A. Arnoult in France. The characteristics of 17th-century fashion illustration are as follows: 1. Modern civil consciousness began to awaken in the 17th century. As the subject of costume culture moved from the noble class to the working class, which began to enjoy new freedom, fashion illustration shifted toward conveying social class and occupation. 2. The fashion illustrations of the 17th century showed strong realism, which became a basis of modern painting. 3. Most of them were costume plates; rather than adding intended forecasts, they described costumes faithfully as a record. However, fashion illustration from the middle of the 17th century onward was designed with fashion in mind. 4. The fashion illustration of the 17th century can be said to be the first form of today's, as expressed in Wenceslaus Hollar's work, which presented popular costumes from front and back with delicate description of details such as accessories. 5. They were transmitted internationally by fashion magazines; Le Mercure Galant, which printed mode plates in 1678, was the first modern fashion magazine aimed at general readers. The fashion illustrations of the 17th century can be divided into those for the court, those for the working classes, and costume plates. The illustrations for the court were designed by court painters; the court costume of the early period was the Spanish Mode and that of the late period reflected French court culture, with baroque elements such as bundles of ribbons and lace decoration. The illustrations for the working class, on the other hand, were influenced by Netherlandish styles and were designed for good function and heavy use, reflecting a puritanical creed of life. In this situation, the costume plates directed the fashion of those days; they were distributed widely, which may be seen as an attempt at popularization. The fashion illustrations of the 17th century thus had a transmissible character and artistic expression, and on this basis we can look into the fashion illustrations of today.


Efficient Skew Estimation for Document Images Based on Selective Attention (선택적 주의집중에 의한 문서영상의 효율적인 기울어짐 추정)

  • Gwak, Hui-Gyu; Kim, Su-Hyeong
    • Journal of KIISE: Software and Applications / v.26 no.10 / pp.1193-1203 / 1999
  • In this paper we propose a skew estimation algorithm for English and Korean document images. The proposed method adopts a selective attention strategy, in which we choose a region of interest containing a cluster of text components and then apply a Hough transform to that region. The skew estimation process consists of two steps. In the coarse step, we divide the entire image into several regions and compute the skew angle of each region by accumulating the slopes of lines connecting any two components in the region; the skew angle is estimated within the range of ±45° with a maximum error of ±1°. We then select the region whose accumulator has the most frequent slope and take the corresponding angle as the rough skew angle of the image. In the refine step, a Hough transform is applied to the selected region within ±1° of the angle computed in the coarse step, with an angular resolution of 0.1°. Based on this selective attention strategy, we can minimize the time cost and maximize the accuracy of the skew estimation. We measured the performance of the proposed method in an experiment with 2,016 images of various English and Korean documents. The average run time is 0.19 second on a Pentium 200 MHz PC, and the average error is ±0.08°. We also demonstrated the superiority of our algorithm by comparing its performance with that of other well-known methods in the literature.
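The coarse/refine search described above can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the coarse step accumulates component-pair slopes at 1° resolution, and the refine step sweeps 0.1° candidates around the coarse estimate using an assumed scoring callback (e.g., a Hough-style projection score).

```python
# Hedged sketch of the two-step coarse/refine skew estimation idea.
import numpy as np

def coarse_angle(centroids):
    """centroids: (N, 2) array of text-component centers in one region.
    Returns the most frequent pairwise slope, in degrees, within +/-45."""
    acc = np.zeros(91)                                   # bins for -45..+45 degrees
    n = len(centroids)
    for i in range(n):
        for j in range(i + 1, n):
            dx, dy = centroids[j] - centroids[i]
            ang = np.degrees(np.arctan2(dy, dx))
            if -45 <= ang <= 45:
                acc[int(round(ang)) + 45] += 1
    return float(np.argmax(acc) - 45)

def refine_angle(coarse, score):
    """Sweep +/-1 degree around the coarse estimate in 0.1-degree steps.
    `score(angle)` is an assumed callback, e.g. a Hough-style row-projection score."""
    candidates = np.arange(coarse - 1.0, coarse + 1.0 + 1e-9, 0.1)
    return float(max(candidates, key=score))
```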