• Title/Summary/Keyword: 알고리즘화 (algorithmization)


A Study on the Applicability of the Crack Measurement Digital Data Graphics Program for Field Investigations of Buildings Adjacent to Construction Sites (건설 현장 인접 건물의 현장 조사를 위한 균열 측정 디지털 데이터 그래픽 프로그램 적용 가능성에 관한 연구)

  • Ui-In Jung;Bong-Joo Kim
    • Journal of the Korean Recycled Construction Resources Institute
    • /
    • v.12 no.1
    • /
    • pp.63-71
    • /
    • 2024
  • Advances in construction technology have enabled a wide range of projects, such as redevelopment, the undergrounding of roads, and the expansion of subway and metro rail networks. The resulting growth of construction work in established urban centers and neighborhoods has, however, increased damage to and disputes between neighboring buildings and residents, as well as safety accidents due to the aging of existing buildings. In this study, digital data were applied to a graphics program to objectify the progression of cracks: photographic images were compared to track crack formation and growth in length and width, and the degree of cracking was expressed numerically. Applying the program resolved the error caused by subjective judgment of crack change, a noted shortcoming of existing field surveys. With supplementation and improvement during use, the program's reliability should allow it to be used universally in the building diagnosis process. As a follow-up study, an extraction algorithm should be applied to the digital graphic data program so that it can calculate crack length and width by itself, without human intervention in preprocessing, and check the overall change of the building.
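The comparison the abstract describes can be sketched in a few lines. This is a hypothetical illustration, not the paper's program: it assumes each survey photo has already been binarized into a crack mask (1 = crack pixel, 0 = background) in preprocessing, and that a pixel-to-millimetre scale is known.

```python
# Hypothetical sketch: quantifying crack growth between two survey photos.
# Assumes binarized crack masks and a known mm-per-pixel scale (illustrative).

def crack_metrics(mask):
    """Return (pixel area, vertical extent, max width) of a crack mask."""
    rows = [r for r, row in enumerate(mask) if any(row)]
    area = sum(sum(row) for row in mask)
    length = max(rows) - min(rows) + 1          # extent along the crack axis
    width = max(sum(row) for row in mask)       # widest cross-section in pixels
    return area, length, width

def crack_growth(before, after, mm_per_pixel=0.1):
    """Compare two surveys and report growth in millimetres (scale assumed)."""
    a0, l0, w0 = crack_metrics(before)
    a1, l1, w1 = crack_metrics(after)
    return {
        "length_growth_mm": (l1 - l0) * mm_per_pixel,
        "width_growth_mm": (w1 - w0) * mm_per_pixel,
        "area_growth_px": a1 - a0,
    }
```

Reporting the change numerically, rather than by visual comparison, is what removes the subjective-judgment error the study targets.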

Evaluation of Hazardous Zones by Evacuation Scenario under Disasters on Training Ships (실습선 재난 시 피난 시나리오 별 위험구역 평가)

  • SangJin Lim;YoonHo Lee
    • Journal of the Korean Society of Marine Environment & Safety
    • /
    • v.30 no.2
    • /
    • pp.200-208
    • /
    • 2024
  • The occurrence of a fire on a training ship with a large number of people on board can lead to severe casualties; hence the Seafarers' Act and the Safety of Life at Sea (SOLAS) convention emphasize the importance of the abandon-ship drill. In this study, the training ship Segero of Mokpo National Maritime University, which carries a large complement, was selected as the target ship, and the likelihood and severity of fire accidents on each deck were predicted through a preliminary hazard analysis (PHA), a qualitative risk assessment. Additionally, assuming a fire in a high-risk area, evacuation time and population density were simulated to predict the risk quantitatively. The total evacuation time was predicted to be longest, at 501 s, in the meal-time scenario, in which the population was concentrated in one area. Depending on the scenario, some decks reached relatively high population densities of over 1.4 pers/m2, causing evacuee flow to stagnate. The results of this study are expected to serve as basic data for developing training scenarios for training ships by quantifying evacuation time and population density under various evacuation scenarios, and the research can be extended in the future through comparison of mathematical models with experimental values.
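The density check underlying the scenario evaluation is simple to state in code. The sketch below is illustrative only: zone names and areas are invented, and the 1.4 pers/m2 threshold is the congestion figure cited in the abstract.

```python
# Illustrative density check: flag zones whose occupant density exceeds the
# 1.4 pers/m^2 congestion threshold cited in the study. Data here are toy values.

CONGESTION_THRESHOLD = 1.4  # persons per square metre

def density_by_zone(occupants, areas):
    """occupants: {zone: head count}; areas: {zone: usable floor area in m^2}."""
    return {z: occupants[z] / areas[z] for z in occupants}

def congested_zones(occupants, areas, threshold=CONGESTION_THRESHOLD):
    """Zones where evacuee flow is likely to stagnate, sorted by name."""
    d = density_by_zone(occupants, areas)
    return sorted(z for z, v in d.items() if v > threshold)
```

A scenario-by-scenario run of such a check over simulated occupant positions is what lets high-risk decks be ranked quantitatively.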

Evaluation method for interoperability of weapon systems applying natural language processing techniques (자연어처리 기법을 적용한 무기체계의 상호운용성 평가방법)

  • Yong-Gyun Kim;Dong-Hyen Lee
    • Journal of The Korean Institute of Defense Technology
    • /
    • v.5 no.3
    • /
    • pp.8-17
    • /
    • 2023
  • Current weapon systems operate as complex systems-of-systems to which various standards and protocols apply, so there is a risk that smooth information exchange will fail during combined and joint operations on the battlefield. Interoperability between weapon systems, which enables precise strikes on key targets through rapid situational judgment, is a key element in the conduct of war. Since fielding, the Korean military has needed to change configurations and improve the performance of a large amount of software and hardware, but there is no system for verifying the impact of such changes on interoperability, and no related test tools or facilities exist. In addition, during combined and joint training, errors frequently occur when the detailed operating methods or software of weapon/force support systems are arbitrarily changed before use. Periodic verification of interoperability between weapon systems is therefore necessary. Rather than having people schedule an evaluation period and conduct the evaluation once, AI should continuously evaluate interoperability between weapon and force support systems 24 hours a day to advance warfighting capability. To this end, preliminary research was conducted to improve defense interoperability by applying natural language processing techniques (① the Word2Vec model, ② the FastText model, ③ the Swivel model), using published algorithms and source code. Based on the results of this experiment, we present a methodology (automated evaluation of interoperability requirements and level measurement through natural language processing models) for implementing an automated defense interoperability evaluation tool that does not rely on humans.
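The matching step such an automated evaluator needs can be illustrated with a cosine-similarity score between requirement statements. This is a deliberately simplified stand-in: a real pipeline would replace the bag-of-words vectors below with the Word2Vec/FastText/Swivel sentence embeddings the authors tested, and the 0.5 threshold is an arbitrary assumption.

```python
# Simplified stand-in for embedding-based requirement matching: score how
# closely two interoperability statements agree via cosine similarity of
# term-count vectors (a real system would use learned word embeddings).

import math
from collections import Counter

def term_vector(text):
    return Counter(text.lower().split())

def cosine_similarity(a, b):
    va, vb = term_vector(a), term_vector(b)
    dot = sum(va[t] * vb[t] for t in va)
    na = math.sqrt(sum(v * v for v in va.values()))
    nb = math.sqrt(sum(v * v for v in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

def meets_requirement(spec, implementation_note, threshold=0.5):
    """Crude automated check: pass pairs whose similarity clears the threshold."""
    return cosine_similarity(spec, implementation_note) >= threshold
```

Running such a scorer continuously over requirement/implementation pairs is the "24 hours a day" evaluation loop the abstract argues for.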


A Study on Dementia Prediction Models and Commercial Utilization Strategies Using Machine Learning Techniques: Based on Sleep and Activity Data from Wearable Devices (머신러닝 기법을 활용한 치매 예측 모델과 상업적 활용 전략: 웨어러블 기기의 수면 및 활동 데이터를 기반으로)

  • Youngeun Jo;Jongpil Yu;Joongan Kim
    • Information Systems Review
    • /
    • v.26 no.2
    • /
    • pp.137-153
    • /
    • 2024
  • This study aimed to support the early diagnosis and management of dementia, which is increasing in aging societies, and to suggest commercial utilization strategies by leveraging digital healthcare technologies, particularly lifelog data collected from wearable devices. By introducing new approaches to dementia prevention and management, it sought to contribute to the field of dementia prediction and prevention. The research utilized 12,184 records of lifelog information (sleep and activity data) and dementia diagnosis data, based on medical pathological diagnoses, collected from 174 individuals aged between 60 and 80. A multidimensional dataset including sleep and activity data was standardized, and various machine learning algorithms were compared; the random forest model showed the highest ROC-AUC score, indicating superior performance. Furthermore, an ablation test evaluating the effect of excluding sleep- and activity-related variables on the model's predictive power confirmed that regular sleep and activity have a significant influence on dementia prevention. Lastly, by exploring the developed model's potential for commercial use, the study proposed new directions for the commercial spread of dementia prevention systems.
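The model-selection metric named above, ROC-AUC, has a compact rank-based definition worth making concrete: it equals the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative one (the Mann-Whitney formulation). The sketch below computes it from scratch; it is generic, not the study's code.

```python
# ROC-AUC via the Mann-Whitney pairwise comparison: fraction of
# positive/negative pairs in which the positive case outscores the negative
# (ties count half). Equivalent to the area under the ROC curve.

def roc_auc(labels, scores):
    """labels: 1 = case, 0 = control; scores: model risk scores."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

A score of 0.5 means the model ranks cases no better than chance; 1.0 means every case outranks every control, which is why the metric suits imbalanced screening data like dementia prediction.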

Analysis of Significance between SWMM Computer Simulation and Artificial Rainfall on Rainfall Runoff Delay Effects of Vegetation Unit-type LID System (식생유니트형 LID 시스템의 우수유출 지연효과에 대한 SWMM 전산모의와 인공강우 모니터링 간의 유의성 분석)

  • Kim, Tae-Han;Choi, Boo-Hun
    • Journal of the Korean Institute of Landscape Architecture
    • /
    • v.48 no.3
    • /
    • pp.34-44
    • /
    • 2020
  • In order to suggest directions for the performance analysis of ecological components based on a vegetation-based LID system model, this study analyzes the statistical significance between SWMM computer simulations and monitoring results obtained with rainfall and run-off simulation devices, and provides the basic data required for a preliminary system design. The study also comprehensively reviews the soil, vegetation model, and analysis plans of a vegetation-based LID system, which were less addressed in previous studies, and suggests a direction for performance quantification that could substitute for a device-type LID system. After 40 minutes of artificial rainfall monitoring, the test group zone and the control group zone recorded maximum rainfall intensities of 142.91 mm/hr (n=3, sd=0.34) and 142.24 mm/hr (n=3, sd=0.90), respectively. Compared with the hyetograph, low rainfall intensity was reproduced in the 10-minute and 50-minute sections, and high rainfall intensity was confirmed in the 20-, 30-, and 40-minute sections. As for rainwater run-off delay, run-off intensity in the test group zone was reduced by 79.8%, recording 0.46 mm/min at the 50-minute point at which run-off intensity was highest in the control group zone. In the computer simulation, run-off intensity in the test group zone was reduced by 99.1%, recording 0.05 mm/min at the same peak point. The maximum rainfall run-off intensity in the test group zone (Dv=30.35, NSE=0.36) was 0.77 mm/min in artificial rainfall monitoring and 1.06 mm/min in the SWMM computer simulation, in both cases at the 70-minute point; likewise, the control group zone (Dv=17.27, NSE=0.78) recorded 2.26 mm/min and 2.38 mm/min, respectively, at the 50-minute point.
By statistically assessing the significance between the rainfall and run-off simulating systems and the SWMM computer simulations, this study was able to suggest a preliminary design direction for the rainwater run-off reduction performance of an LID system planted with a single vegetation type. By comprehensively examining the LID system's soil and vegetation models and analysis methods, it also compiled parameter quantification plans for the vegetation and soil sectors that can be aligned with a preliminary design. However, physical variables arose from the use of a single-vegetation LID system, and follow-up studies are required on algorithms for calibrating the statistical significance between monitoring and computer simulation results.
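The two goodness-of-fit statistics quoted above, Dv and NSE, have standard definitions that can be sketched directly; the code below is a generic implementation, not the study's, and reads Dv as the percentage deviation of total simulated volume from observed volume.

```python
# Standard model-fit statistics for observed vs. simulated runoff series.

def dv_percent(observed, simulated):
    """Volume deviation: percent difference of total simulated vs. observed volume."""
    vo, vs = sum(observed), sum(simulated)
    return 100.0 * abs(vo - vs) / vo

def nse(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit; <= 0 is no better
    than predicting the observed mean at every time step."""
    mean_o = sum(observed) / len(observed)
    num = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    den = sum((o - mean_o) ** 2 for o in observed)
    return 1.0 - num / den
```

Under this reading, the control zone's NSE=0.78 indicates a substantially better simulation fit than the test zone's NSE=0.36, consistent with the calibration difficulties the study attributes to the single-vegetation system.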

Ontology-based User Customized Search Service Considering User Intention (온톨로지 기반의 사용자 의도를 고려한 맞춤형 검색 서비스)

  • Kim, Sukyoung;Kim, Gunwoo
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.4
    • /
    • pp.129-143
    • /
    • 2012
  • Recently, the rapid progress of standardized web technologies and the worldwide proliferation of web users have brought an explosive increase in the production and consumption of information documents on the web. In addition, most companies produce, share, and manage a huge number of information documents needed to perform their businesses, and they also collect, store, and manage at their discretion many web documents published on the web. Along with this increase in documents to be managed, the need for a solution that locates documents more accurately among a huge number of information sources has grown, and the search engine solution market is expanding accordingly. The most important functionality a search engine provides is locating accurate documents within a huge body of sources. The major metric for evaluating a search engine's accuracy is relevance, which consists of two measures: precision and recall. Precision is a measure of exactness, that is, what percentage of the information returned as true answers actually is such, whereas recall is a measure of completeness, that is, what percentage of the true answers are retrieved. These two measures are weighted differently according to the domain: for exhaustive searches, as with patent documents and research papers, it is better to increase recall, whereas when the amount of information is small, it is better to increase precision. Most existing web search engines use a keyword search method that returns web documents containing the search words entered by a user. This method has the virtue of locating all matching web documents quickly, even when many search words are entered.
However, it has the fundamental limitation of not considering the user's search intention, thereby retrieving irrelevant results along with relevant ones, so additional time and effort are needed to sort the relevant results out of everything the engine returns. That is, keyword search can increase recall, but it makes it difficult to locate the web documents a user actually wants, because it provides no means of understanding the user's intention and reflecting it in the search process. This research therefore suggests a new method that combines an ontology-based search solution with the core functionality of existing search engine solutions, enabling a search engine to provide optimal results by inferring the user's search intention. To that end, we build an ontology containing the concepts of a specific domain and the relationships among them. The ontology is used to infer synonyms of the search keywords a user enters, so that the user's intention is reflected in the search process more actively than in existing engines. Based on the proposed method, we implement a prototype search system and test it in the patent domain, where we experiment with retrieving documents relevant to a patent. The experiment shows that our system increases both recall and precision and improves search productivity through an improved user interface that lets users interact with the system effectively. In future research, we will validate the prototype's performance against other search engine solutions and extend the method to other information-search domains such as portals.
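The core mechanism, expanding the user's keywords with ontology-derived synonyms before matching and then scoring results with precision and recall, can be sketched compactly. The tiny synonym table below is a hypothetical stand-in for the domain ontology, and the whitespace tokenization is a simplification.

```python
# Sketch of ontology-backed query expansion plus the precision/recall
# measures defined above. SYNONYMS is a toy stand-in for a real ontology.

SYNONYMS = {"patent": {"patent", "ip"}}

def expand(keywords):
    """Replace each keyword with its ontology synonym set (or itself)."""
    out = set()
    for k in keywords:
        out |= SYNONYMS.get(k, {k})
    return out

def search(documents, keywords):
    """Return documents sharing at least one term with the expanded query."""
    terms = expand(keywords)
    return {d for d in documents if terms & set(d.lower().split())}

def precision_recall(retrieved, relevant):
    tp = len(retrieved & relevant)
    precision = tp / len(retrieved) if retrieved else 0.0
    recall = tp / len(relevant) if relevant else 0.0
    return precision, recall
```

The expansion step is what lifts recall (documents phrased with synonyms are no longer missed), while restricting expansion to ontology-sanctioned synonyms, rather than arbitrary related words, is what protects precision.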

The Estimation Model of an Origin-Destination Matrix from Traffic Counts Using a Conjugate Gradient Method (Conjugate Gradient 기법을 이용한 관측교통량 기반 기종점 OD행렬 추정 모형 개발)

  • Lee, Heon-Ju;Lee, Seung-Jae
    • Journal of Korean Society of Transportation
    • /
    • v.22 no.1 s.72
    • /
    • pp.43-62
    • /
    • 2004
  • Conventionally, origin-destination (O-D) matrices have been estimated by expanding sampled data obtained from roadside interviews and household travel surveys. In the survey process, the larger the sample, the greater the cost and time required for error testing, which limits the approach. Estimating the O-D matrix from observed traffic count data has been applied to overcome this limitation, and the gradient model is one of the most popular techniques. However, although the gradient model can minimize the error between observed and estimated traffic volumes, the structure of the prior O-D matrix cannot be maintained exactly; that is, unwanted changes may occur. For this reason, this study adopts a conjugate gradient algorithm that accounts for two factors: estimating the O-D matrix while preserving the structure of the prior O-D matrix, and minimizing the error between observed and estimated traffic volumes. The model is validated on a simple network and then applied to a large-scale network. The tests yield several findings. First, regarding consistency, the upper level of the model clearly plays a key role through its internal relationship with the lower level. Second, regarding estimation precision, the estimation error lies within the tolerance interval. Furthermore, the structure of the estimated O-D matrix does not change much and preserves some of its prior attributes.
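The estimation idea can be illustrated with a generic conjugate gradient solver: fit an O-D vector x so that assigned link volumes A·x match observed counts b while a penalty keeps x near the prior x0, which reduces (via the normal equations) to solving a symmetric positive-definite system such as (AᵀA + αI)x = Aᵀb + αx0. The solver below is a textbook CG sketch on toy data, not the paper's bi-level model.

```python
# Textbook conjugate gradient for a symmetric positive-definite system
# mat @ x = rhs, as would arise from a counts-matching objective with a
# prior-preserving penalty. Converges in at most n iterations in exact arithmetic.

def conjugate_gradient(mat, rhs, x, iters=50, tol=1e-12):
    n = len(rhs)
    mv = lambda m, v: [sum(m[i][j] * v[j] for j in range(n)) for i in range(n)]
    r = [rhs[i] - y for i, y in enumerate(mv(mat, x))]   # initial residual
    p = r[:]                                             # first search direction
    rs = sum(v * v for v in r)
    for _ in range(iters):
        ap = mv(mat, p)
        alpha = rs / sum(p[i] * ap[i] for i in range(n)) # exact line search
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * ap[i] for i in range(n)]
        rs_new = sum(v * v for v in r)
        if rs_new < tol:
            break
        p = [r[i] + (rs_new / rs) * p[i] for i in range(n)]  # conjugate update
        rs = rs_new
    return x
```

Because each CG direction is conjugate to the previous ones, the iterates move toward the counts-matching solution without the uncontrolled drift from the prior that the plain gradient model exhibits.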

The Optimal Configuration of Arch Structures Using Force Approximate Method (부재력(部材力) 근사해법(近似解法)을 이용(利用)한 아치구조물(構造物)의 형상최적화(形狀最適化)에 관한 연구(研究))

  • Lee, Gyu Won;Ro, Min Lae
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.13 no.2
    • /
    • pp.95-109
    • /
    • 1993
  • In this study, the optimal configuration of arch structures is determined by a decomposition technique. The objective is to provide a method for optimizing the shapes of both two-hinged and fixed arches. The optimal-configuration problem includes interaction formulas and working stress and buckling stress constraints, on the assumption that arch ribs can be approximated by a finite number of straight members. On the first level, buckling loads are calculated from the relation between the stiffness matrix and the geometric stiffness matrix using the Rayleigh-Ritz method, and the number of structural analyses is decreased by approximating member forces through sensitivity analysis using the design-space approach. The objective function is formulated as the total weight of the structure, and the constraints comprise the working stress, the buckling stress, and side limits. On the second level, the nodal point coordinates of the arch structure are used as design variables, with the weight again taken as the objective function. Treating the nodal coordinates as design variables reduces the optimization to an unconstrained optimal design problem, which is easy to solve. Numerical comparisons with results obtained from tests on several arch structures with various shapes and constraints show that the convergence rate is very fast regardless of constraint type and arch configuration, and the optimal configurations obtained in this study are almost identical to those from other results. The total weight could be decreased by 17.7%-91.7% when an optimal configuration was achieved.
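The first-level buckling computation rests on the standard eigenproblem det(K − λ·Kg) = 0, where K is the stiffness matrix and Kg the geometric stiffness matrix. For a toy 2×2 Rayleigh-Ritz reduction (an illustrative assumption, far smaller than any real arch model) the determinant is a quadratic in λ, so the critical load factor can be found in closed form:

```python
# Critical buckling load factor as the smallest positive root of
# det(K - t*Kg) = 0, expanded by hand for 2x2 matrices.

import math

def critical_load_factor(K, Kg):
    a = Kg[0][0] * Kg[1][1] - Kg[0][1] * Kg[1][0]        # t^2 coefficient
    b = -(K[0][0] * Kg[1][1] + K[1][1] * Kg[0][0]
          - K[0][1] * Kg[1][0] - K[1][0] * Kg[0][1])     # t coefficient
    c = K[0][0] * K[1][1] - K[0][1] * K[1][0]            # constant term
    disc = math.sqrt(b * b - 4 * a * c)
    roots = [(-b - disc) / (2 * a), (-b + disc) / (2 * a)]
    return min(t for t in roots if t > 0)
```

In the full method this eigenvalue is recomputed as the shape changes, which is why the sensitivity-based approximation of member forces matters: it avoids a structural reanalysis at every design step.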


Development of a Small Gamma Camera Using NaI(Tl)-Position Sensitive Photomultiplier Tube for Breast Imaging (NaI(Tl) 섬광결정과 위치민감형 광전자증배관을 이용한 유방암 진단용 소형 감마카메라 개발)

  • Kim, Jong-Ho;Choi, Yong;Kwon, Hong-Seong;Kim, Hee-Joung;Kim, Sang-Eun;Choe, Yearn-Seong;Lee, Kyung-Han;Kim, Moon-Hae;Joo, Koan-Sik;Kim, Byung-Tae
    • The Korean Journal of Nuclear Medicine
    • /
    • v.32 no.4
    • /
    • pp.365-373
    • /
    • 1998
  • Purpose: The conventional gamma camera is not ideal for scintimammography because its large detector (~500 mm wide) entails high cost and low image quality. We are developing a small gamma camera dedicated to breast imaging. Materials and Methods: The system consists of a NaI(Tl) crystal (60 mm × 60 mm × 6 mm) coupled to a Hamamatsu R3941 position-sensitive photomultiplier tube (PSPMT), a resistor chain circuit, preamplifiers, nuclear instrument modules, an analog-to-digital converter, and a personal computer for control and display. The PSPMT was read out using standard resistive charge division, which multiplexes the 34 crossed-wire anode channels into four signals (X+, X-, Y+, Y-). These signals were individually amplified by four preamplifiers and then shaped and amplified by amplifiers. The signals were discriminated and digitized via a trigger signal and used to localize the position of each event by applying Anger logic. Results: The intrinsic sensitivity of the system was approximately 8,000 counts/sec/μCi. High-quality flood and hole-mask images were obtained. A breast phantom containing 2-7 mm diameter spheres was successfully imaged with a parallel-hole collimator. The image displayed accurate size and activity distribution over the imaging field of view. Conclusion: We have successfully developed a small gamma camera using a NaI(Tl)-PSPMT detector and nuclear instrument modules. The small gamma camera developed in this study might improve the diagnostic accuracy of scintimammography by imaging the breast optimally.
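The Anger-logic localization mentioned above has a common formulation worth spelling out: each coordinate is the normalized difference of the opposing charge-division signals, scaled to the detector size. The scaling below assumes the 60 mm crystal from the abstract; the paper's exact calibration is not given, so treat this as a sketch.

```python
# Anger-logic position estimate from the four resistive charge-division
# signals (X+, X-, Y+, Y-): normalized charge difference scaled to the
# 60 mm field of view assumed from the crystal size.

def anger_position(xp, xm, yp, ym, fov_mm=60.0):
    """Event position (x, y) in mm, with (0, 0) at the detector center."""
    x = (xp - xm) / (xp + xm) * fov_mm / 2.0
    y = (yp - ym) / (yp + ym) * fov_mm / 2.0
    return x, y
```

Normalizing by the summed charge makes the position estimate independent of the event's total energy deposit, which is why only four multiplexed signals suffice for 34 anode wires.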


Estimation of Reliability of Real-time Control Parameters for Animal Wastewater Treatment Process and Establishment of an Index for Supplemental Carbon Source Addition (가축분뇨처리공정의 자동제어 인자 신뢰성 평가 및 적정 외부탄소원 공급량 지표 확립)

  • Pak, JaeIn;Ra, Jae-In
    • Journal of Animal Science and Technology
    • /
    • v.50 no.4
    • /
    • pp.561-572
    • /
    • 2008
  • The responses of real-time control parameters such as ORP, DO, and pH to the conditions of a biological animal wastewater treatment process were examined to evaluate the stability of real-time control with each parameter. An optimum index for supplemental carbon source addition based on the NOx-N level was also determined, taking into account the denitrification rate due to endogenous respiration of microorganisms and residual organic matter in the liquor. The experiment was performed with a lab-scale sequencing batch reactor (SBR) with a working volume of 45 L. The distinctive nitrogen break point (NBP) on the ORP- and DO-time profiles, which marks the termination of nitrification, began to disappear when a low NH4-N loading rate was maintained. The NBP on the ORP- and DO-time profiles was also no longer observed when high NOx-N was loaded into the reactor, and the sensitivity of ORP became dull as the NOx-N level increased. However, the distinctive NBP occurred consistently on the pH(mV)-time profile, which maintained its characteristic pattern. This stable occurrence of the NBP on the pH(mV)-time profile persisted even at very high NOx-N:NH4-N ratios (over 80:1) in the reactor, and the point could be easily detected by tracking the moving slope change (MSC) of the curve. Revelation of the NBP on the pH(mV)-time profile and recognition of the real-time control point using the MSC were stable at NOx-N levels above 300 mg/L in the reactor. The distinctive NBP also persisted on the pH(mV)-time profile even at a soluble total organic carbon (STOC) level of 10,000 mg/L, and it remained recognizable by tracing the MSC, whereas the corresponding point on the ORP- and DO-time profiles began to disappear as the STOC level increased. The denitrification rate due to endogenous respiration and residual organic matter was about 0.4 mg/L·hr, and it was found that 0.83 would be acceptable as an index for supplemental carbon source addition when a safety factor of 0.1 was applied.
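The moving-slope-change detection described for the pH(mV)-time profile can be sketched as a simple sign-change test on a windowed slope estimate. The window length and the negative-to-positive criterion below are illustrative assumptions, not the study's tuned settings.

```python
# Illustrative NBP detection: estimate the moving slope of a pH(mV)-time
# series and report the first sample where it turns from negative to
# non-negative (the break-point signature tracked via MSC).

def moving_slope(values, window=3):
    """Per-sample slope estimate: difference across a sliding window."""
    return [(values[i + window] - values[i]) / window
            for i in range(len(values) - window)]

def detect_nbp(values, window=3):
    """Index (in the original series) of the first negative-to-positive
    slope change, or None if the profile never turns."""
    s = moving_slope(values, window)
    for i in range(1, len(s)):
        if s[i - 1] < 0 <= s[i]:
            return i + window  # map back to the original series position
    return None
```

In a real-time controller this test would run on the live signal, so the carbon dose can be triggered at the break point rather than on a fixed schedule.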