• Title/Summary/Keyword: 알고리즘화

Search Results: 6,413 (processing time: 0.036 seconds)

A Study on the Applicability of the Crack Measurement Digital Data Graphics Program for Field Investigations of Buildings Adjacent to Construction Sites (건설 현장 인접 건물의 현장 조사를 위한 균열 측정 디지털 데이터 그래픽 프로그램 적용 가능성에 관한 연구)

  • Ui-In Jung;Bong-Joo Kim
    • Journal of the Korean Recycled Construction Resources Institute
    • /
    • v.12 no.1
    • /
    • pp.63-71
    • /
    • 2024
  • Advances in construction technology have enabled a wide range of projects such as redevelopment, the undergrounding of roads, and the expansion of subway and metropolitan rail networks. However, the growing number of construction projects in existing urban centers and neighborhoods has led to more damage and disputes involving adjacent buildings and their residents, as well as more safety accidents caused by the aging of existing buildings. In this study, digital data were applied to a graphics program to objectify the progression of cracks: the formation of cracks and increases in their length and width were compared through photographic images, and the degree of cracking was expressed numerically. Applying the program resolved the errors caused by subjective judgments of crack change, a noted shortcoming of existing field surveys. If the program is supplemented and improved through use, its reliability should increase enough for it to be employed universally in the building diagnosis process. As a follow-up study, an extraction algorithm should be applied to the digital graphic data program so that it can calculate crack length and width by itself, without human intervention in the preprocessing work, and track overall changes in the building.
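The follow-up the abstract proposes, computing crack length and width without manual preprocessing, could start from something as simple as thresholding a grayscale patch. The sketch below is a hypothetical illustration only; the function name, threshold, and pixel scale are assumptions, not the paper's method:

```python
import numpy as np

def measure_crack(gray, threshold=60, mm_per_px=0.1):
    """Very rough crack metrics from a grayscale patch (hypothetical
    stand-in for an extraction algorithm): dark pixels are treated as
    crack, length is taken along the row axis, and width as the mean
    crack thickness per occupied row."""
    mask = gray < threshold                    # dark pixels = crack
    rows = mask.any(axis=1)                    # rows the crack passes through
    length_mm = rows.sum() * mm_per_px
    width_mm = mask.sum(axis=1)[rows].mean() * mm_per_px if rows.any() else 0.0
    return length_mm, width_mm

# Synthetic 100x100 patch: bright background, 3-px-wide vertical crack.
img = np.full((100, 100), 200, dtype=np.uint8)
img[10:90, 48:51] = 20
length, width = measure_crack(img)
```

A real implementation would need skeletonization and calibration against the photographic scale, but the pixel-counting idea is the same.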

Analysis of Keywords in national river occupancy permits by region using text mining and network theory (텍스트 마이닝과 네트워크 이론을 활용한 권역별 국가하천 점용허가 키워드 분석)

  • Seong Yun Jeong
    • Smart Media Journal
    • /
    • v.12 no.11
    • /
    • pp.185-197
    • /
    • 2023
  • This study used text mining and network theory to extract, from the permit register, which had previously served only as a simple record of occupancy permit information, information useful for occupancy applications and for performing permit tasks. Based on text mining, after normalization steps such as stopword removal and morphological analysis, we analyzed and compared vocabulary frequencies and topic models across five regions: Seoul/Gyeonggi, Gyeongsang, Jeolla, Chungcheong, and Gangwon. By applying four centrality algorithms widely used in network theory, namely degree, closeness, betweenness, and eigenvector centrality, we identified keywords that occupy a central position or act as intermediaries in the network. A comprehensive analysis of vocabulary frequency, topic modeling, and network centrality showed that the keyword 'installation' was the most influential in all regions. This is believed to reflect the many permits issued by the Ministry of Environment's permit management offices for constructing facilities or installing structures. In addition, keywords related to road facilities, flood control facilities, underground facilities, power/communication facilities, and sports/park facilities were found to occupy central positions or act as intermediaries in the topic models and networks. Most keywords followed a Zipf's-law statistical distribution, with low occurrence frequencies and low distribution ratios.
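The simplest of the four centrality measures named in the abstract is degree centrality. A minimal sketch on a toy keyword co-occurrence graph (the edges below are invented for illustration, not taken from the permit registers) shows how a keyword like 'installation' surfaces as most central:

```python
from collections import defaultdict

# Toy co-occurrence edges between permit keywords (illustrative only).
edges = [("installation", "road"), ("installation", "structure"),
         ("installation", "flood control"), ("road", "structure")]

# Build an undirected adjacency structure.
adj = defaultdict(set)
for a, b in edges:
    adj[a].add(b)
    adj[b].add(a)

# Degree centrality: neighbor count normalized by (n - 1).
n = len(adj)
degree_centrality = {v: len(nbrs) / (n - 1) for v, nbrs in adj.items()}
central = max(degree_centrality, key=degree_centrality.get)
```

Closeness, betweenness, and eigenvector centrality follow the same pattern but weight paths through the network rather than direct neighbors.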

Analysis of Significance between SWMM Computer Simulation and Artificial Rainfall on Rainfall Runoff Delay Effects of Vegetation Unit-type LID System (식생유니트형 LID 시스템의 우수유출 지연효과에 대한 SWMM 전산모의와 인공강우 모니터링 간의 유의성 분석)

  • Kim, Tae-Han;Choi, Boo-Hun
    • Journal of the Korean Institute of Landscape Architecture
    • /
    • v.48 no.3
    • /
    • pp.34-44
    • /
    • 2020
  • In order to suggest directions for analyzing the performance of ecological components based on a vegetation-based LID system model, this study analyzes the statistical significance between SWMM computer simulations and monitoring results obtained with rainfall and run-off simulation devices, and provides basic data required for a preliminary system design. The study also comprehensively reviews the soil, vegetation model, and analysis plans of a vegetation-based LID system, which were less addressed in previous studies, and suggests a direction for quantifying performance that could substitute for device-type LID systems. After 40 minutes of artificial rainfall monitoring, the test group zone and the control group zone recorded maximum rainfall intensities of 142.91 mm/hr (n=3, sd=0.34) and 142.24 mm/hr (n=3, sd=0.90), respectively. Compared to the hyetograph, low rainfall intensity was reproduced in the 10-minute and 50-minute sections, and high rainfall intensity was confirmed in the 20-, 30-, and 40-minute sections. As for rainwater run-off delay effects, run-off intensity in the test group zone was reduced by 79.8%, recording 0.46 mm/min at the 50-minute point, when run-off intensity was highest in the control group zone. In the computer simulation, run-off intensity in the test group zone was reduced by 99.1%, recording 0.05 mm/min at the 50-minute point, when run-off intensity was highest. The maximum rainfall run-off intensity in the test group zone (Dv=30.35, NSE=0.36) was 0.77 mm/min in artificial rainfall monitoring and 1.06 mm/min in the SWMM computer simulation, in both cases at the 70-minute point. Likewise, the control group zone (Dv=17.27, NSE=0.78) recorded 2.26 mm/min and 2.38 mm/min, respectively, at the 50-minute point.
By statistically assessing the significance between the rainfall and run-off simulation systems and the SWMM computer simulations, this study was able to suggest a preliminary design direction for the rainwater run-off reduction performance of an LID system planted with a single vegetation type. By comprehensively examining the LID system's soil and vegetation models and analysis methods, it was also able to compile parameter quantification plans for the vegetation and soil sectors that can be aligned with a preliminary design. However, the use of a single-vegetation LID system introduced physical variables, and follow-up studies are required on algorithms for calibrating the statistical significance between monitoring and computer simulation results.
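The agreement statistics quoted above (Dv, NSE) follow standard hydrological definitions; a minimal sketch, with made-up run-off intensity series standing in for the monitored and simulated hydrographs, might look like:

```python
def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 means perfect agreement,
    0 means the model is no better than the observed mean."""
    mean_obs = sum(obs) / len(obs)
    num = sum((o - s) ** 2 for o, s in zip(obs, sim))
    den = sum((o - mean_obs) ** 2 for o in obs)
    return 1 - num / den

def dv(obs, sim):
    """Deviation of run-off volume (%), observed vs. simulated."""
    return abs(sum(obs) - sum(sim)) / sum(obs) * 100

# Invented run-off intensities (mm/min) at 5 time steps.
obs = [0.0, 0.5, 2.26, 1.1, 0.3]
sim = [0.0, 0.4, 2.38, 1.0, 0.2]
```

These are the same two measures reported per zone in the abstract (e.g., Dv=17.27, NSE=0.78 for the control group zone).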

Ontology-based User Customized Search Service Considering User Intention (온톨로지 기반의 사용자 의도를 고려한 맞춤형 검색 서비스)

  • Kim, Sukyoung;Kim, Gunwoo
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.4
    • /
    • pp.129-143
    • /
    • 2012
  • Recently, the rapid progress of standardized web technologies and the worldwide proliferation of web users have brought an explosive increase in the production and consumption of information documents on the web. In addition, most companies produce, share, and manage a huge number of information documents needed to perform their business, and they also collect, store, and manage at their own discretion many web documents published on the web. Along with this increase in the documents that companies must manage, the need for solutions that locate information documents accurately among huge information sources has grown, and the market for search engine solutions is expanding accordingly. The most important function among the many that a search engine provides is locating accurate information documents within huge information sources. The major metric for evaluating the accuracy of a search engine is relevance, which consists of two measures: precision and recall. Precision is a measure of exactness, that is, what percentage of the information returned as true answers actually is such, whereas recall is a measure of completeness, that is, what percentage of the true answers is retrieved. These two measures are weighted differently according to the applied domain. If information must be searched exhaustively, as with patent documents and research papers, it is better to increase recall; on the other hand, when the amount of information is small, it is better to increase precision. Most existing web search engines use a keyword search method that returns web documents containing the search words entered by a user. This method has the virtue of locating all matching web documents quickly, even when many search words are entered.
However, this method has a fundamental limitation: it does not consider the search intention of the user, so it retrieves irrelevant results along with relevant ones, and additional time and effort are needed to sort the relevant results out of everything the search engine returns. That is, keyword search can increase recall, but it makes it difficult to locate the web documents a user actually wants, because it provides no means of understanding the user's intention and reflecting it in the search process. This research therefore suggests a new method that combines an ontology-based search solution with the core search functions of existing search engine solutions, enabling a search engine to provide optimal results by inferring the user's search intention. To that end, we build an ontology containing the concepts of a specific domain and the relationships among them. The ontology is used to infer synonyms of the search keywords entered by a user, so that the user's search intention is reflected in the search process more actively than in existing search engines. Based on the proposed method, we implemented a prototype search system and tested it in the patent domain, experimenting on retrieving documents relevant to a patent. The experiment shows that our system increases both recall and precision and improves search productivity through an improved user interface that lets a user interact with the search system effectively. In future research, we will validate the superior performance of our prototype by comparing it with other search engine solutions and will extend the applied domain to other information search settings such as portals.
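The precision and recall measures defined in the abstract can be computed directly from retrieved and relevant document sets; the document ids below are hypothetical:

```python
def precision_recall(retrieved, relevant):
    """Precision = exactness (share of retrieved items that are relevant);
    recall = completeness (share of relevant items that were retrieved)."""
    retrieved, relevant = set(retrieved), set(relevant)
    hits = retrieved & relevant
    precision = len(hits) / len(retrieved) if retrieved else 0.0
    recall = len(hits) / len(relevant) if relevant else 0.0
    return precision, recall

# Hypothetical document ids: the engine returned 1-4, the true answers are 2, 3, 5.
p, r = precision_recall(retrieved=[1, 2, 3, 4], relevant=[2, 3, 5])
```

As the abstract notes, exhaustive domains such as patent search favor raising recall even at some cost in precision.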

The Estimation Model of an Origin-Destination Matrix from Traffic Counts Using a Conjugate Gradient Method (Conjugate Gradient 기법을 이용한 관측교통량 기반 기종점 OD행렬 추정 모형 개발)

  • Lee, Heon-Ju;Lee, Seung-Jae
    • Journal of Korean Society of Transportation
    • /
    • v.22 no.1 s.72
    • /
    • pp.43-62
    • /
    • 2004
  • Conventionally, origin-destination matrices have been estimated by expanding sampled data obtained from roadside interviews and household travel surveys. In the survey process, the larger the sample size, the greater the cost and time required, which limits the approach. Estimating the O-D matrix from observed traffic count data has been applied as a method of overcoming this limitation, and the gradient model is known as one of the most popular techniques. However, although the gradient model can minimize the error between observed and estimated traffic volumes, the structure of the prior O-D matrix cannot be maintained exactly; that is, unwanted changes may occur. For this reason, this study adopts a conjugate gradient algorithm that takes two factors into account: estimating the O-D matrix while maintaining the structure of the prior O-D matrix, and minimizing the error between observed and estimated traffic volumes. The study validates the model on a simple network and then applies it to a large-scale network. Several findings emerge from the tests. First, regarding consistency, the upper level of the model clearly plays a key role through its internal relationship with the lower level. Second, regarding estimation precision, the estimation error lies within the tolerance interval. Furthermore, the structure of the estimated O-D matrix does not change much and conserves the attributes of the prior matrix.
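The paper's two-level formulation is not reproduced here, but its core idea, least-squares fitting of observed link counts with a term that keeps the estimate near the prior O-D matrix, can be sketched with a plain conjugate gradient solver. The tiny network, counts, prior, and weight `lam` below are all invented for illustration:

```python
import numpy as np

def cg_solve(M, rhs, x0, iters=50, tol=1e-10):
    """Plain conjugate gradient for the symmetric positive definite
    system M x = rhs, started from x0."""
    x = x0.copy()
    r = rhs - M @ x
    p = r.copy()
    for _ in range(iters):
        rs = r @ r
        if rs < tol:
            break
        Mp = M @ p
        alpha = rs / (p @ Mp)
        x = x + alpha * p
        r = r - alpha * Mp
        p = r + (r @ r / rs) * p   # beta = rs_new / rs_old
    return x

# Hypothetical 2-link / 3-OD-pair assignment map A, observed counts b,
# and a prior O-D vector x_prior the estimate should stay close to.
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])
b = np.array([100.0, 90.0])
x_prior = np.array([60.0, 40.0, 40.0])
lam = 0.1                                  # weight preserving prior structure

# Normal equations of min ||A x - b||^2 + lam * ||x - x_prior||^2.
M = A.T @ A + lam * np.eye(3)
rhs = A.T @ b + lam * x_prior
x = cg_solve(M, rhs, x_prior)
```

The regularization term is what plays the role the abstract assigns to "maintaining the prior O-D matrix structure": without it, the count-matching solution can drift arbitrarily far from the prior.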

The Optimal Configuration of Arch Structures Using Force Approximate Method (부재력(部材力) 근사해법(近似解法)을 이용(利用)한 아치구조물(構造物)의 형상최적화(形狀最適化)에 관한 연구(研究))

  • Lee, Gyu Won;Ro, Min Lae
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.13 no.2
    • /
    • pp.95-109
    • /
    • 1993
  • In this study, the optimal configuration of arch structures is obtained with a decomposition technique. The object is to provide a method for optimizing the shapes of both two-hinged and fixed arches. The optimal-configuration problem includes interaction formulas and working-stress and buckling-stress constraints, on the assumption that the arch rib can be approximated by a finite number of straight members. On the first level, buckling loads are calculated from the relation between the stiffness matrix and the geometric stiffness matrix using the Rayleigh-Ritz method, and the number of structural analyses is reduced by approximating member forces through sensitivity analysis using the design-space approach. The objective function is formulated as the total weight of the structure, and the constraints are derived from the working stress, the buckling stress, and the side limits. On the second level, the nodal point coordinates of the arch structure are used as design variables, and the objective function is again the weight. By treating the nodal coordinates as design variables, the optimization can be reduced to an unconstrained optimal design problem, which is easy to solve. Numerical comparisons with results obtained for several arch structures with various shapes and constraints show that the convergence rate is very fast regardless of constraint type and arch configuration, and the optimal configurations obtained in this study are almost identical to those from other results. The total weight could be decreased by 17.7%-91.7% when an optimal configuration was achieved.
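Computing buckling loads from the relation between the stiffness matrix and the geometric stiffness matrix, as on the first level above, amounts to the generalized eigenproblem K v = lambda Kg v, whose smallest eigenvalue is the critical load factor. A minimal sketch with invented 2x2 matrices (not the paper's arch model):

```python
import numpy as np

# Hypothetical reduced stiffness matrix K and geometric stiffness
# matrix Kg; values are invented for illustration.
K = np.array([[4.0, -1.0],
              [-1.0, 2.0]])
Kg = np.array([[1.0, 0.0],
               [0.0, 1.0]])

# Smallest eigenvalue of K v = lam * Kg v is the buckling load factor.
# With Kg = I this reduces to an ordinary symmetric eigenproblem.
lam = np.linalg.eigvalsh(np.linalg.solve(Kg, K)).min()
```

For a general (non-identity) Kg the problem is still symmetric-definite and is usually handed to a generalized symmetric eigensolver rather than formed via an explicit inverse.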

Development of a Small Gamma Camera Using NaI(Tl)-Position Sensitive Photomultiplier Tube for Breast Imaging (NaI(Tl) 섬광결정과 위치민감형 광전자증배관을 이용한 유방암 진단용 소형 감마카메라 개발)

  • Kim, Jong-Ho;Choi, Yong;Kwon, Hong-Seong;Kim, Hee-Joung;Kim, Sang-Eun;Choe, Yearn-Seong;Lee, Kyung-Han;Kim, Moon-Hae;Joo, Koan-Sik;Kim, Byung-Tae
    • The Korean Journal of Nuclear Medicine
    • /
    • v.32 no.4
    • /
    • pp.365-373
    • /
    • 1998
  • Purpose: The conventional gamma camera is not ideal for scintimammography because its large detector (${\sim}500mm$ in width) leads to high cost and low image quality. We are developing a small gamma camera dedicated to breast imaging. Materials and Methods: The small gamma camera system consists of a NaI(Tl) crystal ($60 mm{\times}60 mm{\times}6 mm$) coupled to a Hamamatsu R3941 position sensitive photomultiplier tube (PSPMT), a resistor chain circuit, preamplifiers, nuclear instrument modules, an analog-to-digital converter, and a personal computer for control and display. The PSPMT was read out using standard resistive charge division, which multiplexes the 34 crossed-wire anode channels into 4 signals ($X^+,\;X^-,\;Y^+,\;Y^-$). These signals were individually amplified by four preamplifiers and then shaped and amplified by amplifiers. The signals were discriminated and digitized via a triggering signal and used to localize the position of an event by applying Anger logic. Results: The intrinsic sensitivity of the system was approximately 8,000 counts/sec/${\mu}Ci$. High-quality flood and hole-mask images were obtained. A breast phantom containing spheres of $2{\sim}7 mm$ diameter was successfully imaged with a parallel-hole collimator. The image displayed accurate size and activity distribution over the imaging field of view. Conclusion: We have successfully developed a small gamma camera using a NaI(Tl)-PSPMT and nuclear instrument modules. The small gamma camera developed in this study might improve the diagnostic accuracy of scintimammography by imaging the breast optimally.
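Anger logic over the four multiplexed signals reduces to a normalized centroid: the event position along each axis is the difference of the two end signals divided by their sum. The signal values below are invented:

```python
def anger_position(xp, xm, yp, ym):
    """Anger logic on the four multiplexed PSPMT signals
    (X+, X-, Y+, Y-): normalized charge centroid per axis."""
    x = (xp - xm) / (xp + xm)
    y = (yp - ym) / (yp + ym)
    return x, y

# Invented signal amplitudes: more charge on X+ pulls the event
# toward the positive-x edge; Y signals balanced means y = 0.
x, y = anger_position(3.0, 1.0, 2.0, 2.0)
```

The normalization by total charge makes the position estimate independent of the deposited energy, which is why the same logic works across the photopeak window.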

Estimation of Reliability of Real-time Control Parameters for Animal Wastewater Treatment Process and Establishment of an Index for Supplemental Carbon Source Addition (가축분뇨처리공정의 자동제어 인자 신뢰성 평가 및 적정 외부탄소원 공급량 지표 확립)

  • Pak, JaeIn;Ra, Jae In
    • Journal of Animal Science and Technology
    • /
    • v.50 no.4
    • /
    • pp.561-572
    • /
    • 2008
  • The responses of real-time control parameters such as ORP, DO, and pH to the conditions of a biological animal wastewater treatment process were examined to evaluate the stability of real-time control using each parameter. An optimal index for supplemental carbon source addition based on the NOx-N level was also determined, considering the denitrification rate due to endogenous respiration of microorganisms and residual organic matter in the liquor. The experiment was performed with a lab-scale sequencing batch reactor (SBR) with a working volume of 45 L. The distinctive nitrogen break point (NBP) on the ORP- and DO-time profiles, which marks the termination of nitrification, began to disappear when a low NH4-N loading rate was maintained. The NBP on the ORP- and DO-time profiles was also no longer observed when high NOx-N was loaded into the reactor, and the sensitivity of ORP became dull as the NOx-N level increased. However, the distinctive NBP occurred consistently on the pH(mV)-time profile, which maintained its characteristic pattern. This stable occurrence of the NBP on the pH(mV)-time profile persisted even at very high NOx-N:NH4-N ratios (over 80:1) in the reactor, and the point could easily be detected by tracking the moving slope change (MSC) of the curve. Revelation of the NBP on the pH(mV)-time profile and recognition of the real-time control point using the MSC were stable at NOx-N levels above 300 mg/L in the reactor. The distinctive NBP also persisted on the pH(mV)-time profile even at an STOC (soluble total organic carbon) level of 10,000 mg/L, and recognition of the NBP by tracing the MSC remained feasible, whereas the corresponding point on the ORP- and DO-time profiles began to disappear as the STOC level increased. The denitrification rate due to endogenous respiration and residual organic matter was about 0.4 mg/L·hr, and it was found that 0.83 would be acceptable as an index for supplemental carbon source addition when a safety factor of 0.1 was applied.
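Detecting a break point by tracking the moving slope change (MSC) of a profile can be sketched as below. The window size, sign convention, and synthetic pH(mV) series are assumptions for illustration, not the paper's exact procedure:

```python
def moving_slope_change(values, window=3):
    """Return the first index (in slope coordinates) where the moving
    slope of a profile bends from falling to rising; a rough stand-in
    for MSC-based nitrogen break point detection."""
    slopes = [(values[i + window] - values[i]) / window
              for i in range(len(values) - window)]
    for i in range(1, len(slopes)):
        if slopes[i - 1] < 0 <= slopes[i]:
            return i
    return None

# Synthetic pH(mV) profile: falls during nitrification, bends upward
# at the break point, then recovers.
profile = [50, 40, 30, 20, 10, 5, 8, 14, 22, 30]
idx = moving_slope_change(profile)
```

On noisy reactor signals the slope would be smoothed (e.g., a moving average) before the sign test, but the bend-detection logic is the same.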

Development of JPEG2000 Viewer for Mobile Image System (이동형 의료영상 장치를 위한 JPEG2000 영상 뷰어 개발)

  • 김새롬;정해조;강원석;이재훈;이상호;신성범;유선국;김희중
    • Progress in Medical Physics
    • /
    • v.14 no.2
    • /
    • pp.124-130
    • /
    • 2003
  • Currently, as a consequence of PACS (Picture Archiving and Communication System) implementation, many hospitals are replacing conventional film-based interpretation of diagnostic medical images with digital-format interpretation, in which images can also be saved and retrieved. However, the big limitation of PACS is considered to be its lack of mobility. The purpose of this study was to determine the optimal communication packet size, considering the conditions of wireless communication, after encoding medical images with the JPEG2000 compression method, and to embody an auto-error-correction technique that prevents the loss of packets during wireless communication. A PC-class server was installed with capabilities to load and collect data, save images, and connect with other networks. Image data were compressed using the JPEG2000 algorithm, which supports high energy density and compression ratio, for communication over a wireless network, and were transmitted in the block units coded by JPEG2000 to prevent packet loss in the wireless network. When JPEG2000 image data were decoded on a PDA (Personal Digital Assistant), decoding was instantaneous for an MR (Magnetic Resonance) head image of 256${\times}$256 pixels, while it took approximately 5 seconds for a CR (Computed Radiography) chest image of 800${\times}$790 pixels. In transmission of the image data using a CDMA 1X module, 256 bytes/sec was considered a stable transmission rate, but packets were lost at intervals at a transmission rate of 1 Kbyte/sec. Even above 1 Kbyte/sec, however, packets were not lost over wireless LAN. Current PACS are not compatible with wireless networks because they lack an interface between wired and wireless. Thus, the mobile JPEG2000 image viewing system was developed to complement mobility, a limitation of PACS. Moreover, the weak connections of the wireless network were compensated by retransmitting image data within set limits. The results of this study are expected to serve as an interface between current wired-network PACS and mobile devices.
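The retransmission-based recovery described above can be modeled minimally as resending each coded block until it is acknowledged or a retry budget runs out. The packet contents, ACK-style channel, and retry limit below are invented for illustration, not the viewer's actual protocol:

```python
def send_with_retransmit(packets, channel, max_retries=3):
    """Resend each packet until the channel acknowledges it (returns
    True) or the retry budget is exhausted; toy recovery model."""
    delivered = []
    for pkt in packets:
        for attempt in range(max_retries + 1):
            if channel(pkt):          # True = ACK received
                delivered.append(pkt)
                break
    return delivered

# Lossy toy channel: drops every packet once, then delivers it.
seen = set()
def channel(pkt):
    if pkt in seen:
        return True
    seen.add(pkt)
    return False

out = send_with_retransmit([b"hdr", b"blk1", b"blk2"], channel)
```

Because JPEG2000 code blocks decode independently, a bounded retry budget degrades image quality gracefully instead of stalling the whole transfer.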

List-event Data Resampling for Quantitative Improvement of PET Image (PET 영상의 정량적 개선을 위한 리스트-이벤트 데이터 재추출)

  • Woo, Sang-Keun;Ju, Jung Woo;Kim, Ji Min;Kang, Joo Hyun;Lim, Sang Moo;Kim, Kyeong Min
    • Progress in Medical Physics
    • /
    • v.23 no.4
    • /
    • pp.309-316
    • /
    • 2012
  • Multimodal imaging techniques have developed rapidly to improve diagnosis and the evaluation of therapeutic effects. Despite integrated hardware, registration accuracy is reduced by discrepancies between the multimodal images and by insufficient counts arising from the different acquisition methods of each modality. The purpose of this study was to improve PET images by resampling event data, based on analysis of the data format, noise, and statistical properties of small-animal PET list data. Inveon PET list-mode data were acquired as a 10-min static scan beginning 60 min after injection of 37 MBq/0.1 ml $^{18}F$-FDG via the tail vein. The list-mode data consist of 48-bit packets, each divided into an 8-bit header and a 40-bit payload. Realigned sinograms were generated from event data resampled from the original list mode by adjusting LOR locations, by simple event magnification, and by nonparametric bootstrap. Sinograms were reconstructed into images using the 2D OSEM algorithm with 16 subsets and 4 iterations. Prompt coincidences totaled 13,940,707 counts as measured from the PET data header and 13,936,687 counts as measured from analysis of the list-event data. With simple event magnification of the PET data, the maximum improved from 1.336 to 1.743, but noise also increased. The resampling efficiency of the PET data was assessed from the de-noised and improved image obtained by shift operations on the payload values of sequential packets. The bootstrap resampling technique yielded PET images with improved noise and statistical properties. The list-event data resampling method should help improve registration accuracy and the efficiency of early diagnosis.
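Nonparametric bootstrap resampling of list events, as named in the abstract, amounts to drawing events with replacement from the recorded stream to build a new stream of the same size. The event tuples and seed below are hypothetical, not the Inveon packet format:

```python
import random

def bootstrap_events(events, n=None, seed=0):
    """Nonparametric bootstrap: resample list-mode events with
    replacement into a new event stream (default: same size)."""
    rng = random.Random(seed)
    n = n or len(events)
    return [rng.choice(events) for _ in range(n)]

# Hypothetical (crystal_pair, energy_keV) coincidence events.
events = [(12, 511), (40, 480), (12, 505), (77, 511)]
resampled = bootstrap_events(events)
```

Repeating the resampling and reconstructing each replicate gives an empirical estimate of the noise and statistical properties of the image, which is the use the paper describes.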