• Title/Summary/Keyword: Complexity analysis

A Study on the Potential Risk Analysis for the Safety Management in the Formwork (거푸집공사 안전관리를 위한 잠재적 위험 분석에 관한 연구)

  • Shin, Yoon-Seok
    • Journal of the Korea Institute of Building Construction / v.21 no.2 / pp.121-128 / 2021
  • As construction projects have grown in size and complexity, the frequency of serious accidents in the construction industry has increased. Formwork accounts for a particularly high proportion of accidents on building construction sites, and many previous studies have approached their prevention from diverse perspectives. However, their effectiveness has been limited, in part because many workers and managers pay little attention to unsafe factors in formwork, so the potential risks are rarely considered. This study therefore proposes a realistic and proactive method for analyzing these potential risks by quantitatively assessing the hazards arising from unsafe factors in formwork. To verify the applicability of the proposed methodology, a group survey was carried out and the results were compared with those of the traditional importance-performance analysis (IPA) technique. The proposed methodology identified unsafe factors with potential risk that the IPA did not detect. This study is thus expected to contribute to the proactive prevention of serious construction accidents in formwork by enabling more efficient safety management. (A minimal sketch of a generic IPA quadrant classification follows this entry.)
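
The study benchmarks its potential-risk assessment against importance-performance analysis (IPA). As a point of reference only, the following Python sketch shows a generic IPA quadrant classification; the formwork factor names, survey scores, and mean-based quadrant boundaries are illustrative assumptions, not data or methods taken from the paper.

```python
# Generic importance-performance analysis (IPA) quadrant classification.
# Factor names and survey scores are illustrative placeholders.
import numpy as np

factors = ["shoring installation", "form stripping", "material hoisting", "edge protection"]
importance = np.array([4.6, 4.1, 3.2, 4.8])   # mean survey importance (1-5 scale)
performance = np.array([3.1, 4.3, 3.0, 2.7])  # mean survey performance (1-5 scale)

# Quadrant boundaries: grand means of each axis (a common IPA convention).
imp_mid, perf_mid = importance.mean(), performance.mean()

for name, imp, perf in zip(factors, importance, performance):
    if imp >= imp_mid and perf < perf_mid:
        quadrant = "Concentrate here (high importance, low performance)"
    elif imp >= imp_mid:
        quadrant = "Keep up the good work"
    elif perf < perf_mid:
        quadrant = "Low priority"
    else:
        quadrant = "Possible overkill"
    print(f"{name:22s} -> {quadrant}")
```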

Analysis of solute transport in rivers using a stochastic storage model (확률론적 저장대모형을 이용한 하천에서의 물질혼합거동 해석)

  • Kim, Byunguk;Seo, Il Won;Kwon, Siyoon;Jung, Sung Hyun;Yun, Se Hun
    • Journal of Korea Water Resources Association / v.54 no.5 / pp.335-345 / 2021
  • One-dimensional solute transport models have been developed over recent decades to predict the behavior and fate of solutes in rivers. The transient storage model (TSM) is the most widely used because its simple conceptualization accommodates the complexity of natural rivers. However, the TSM depends heavily on parameters that cannot be measured directly, and it represents the late-time behavior of concentration curves as an exponential function, which has been shown to be unsuitable for actual solute behavior in natural rivers. In this study, we propose a stochastic approach to solute transport analysis, describe the model development and its application to a natural river, and compare the results with those of the TSM. To validate the proposed model, a tracer test was carried out over a 4.85 km reach of Gam Creek, a first-order tributary of the Nakdong River, South Korea. Comparing the power-law slope of the tails of the breakthrough curves, the stochastic storage model yielded an average error rate of 0.24, more accurate than the 14.03 and 1.87 obtained with the advection-dispersion model and the TSM, respectively. The study demonstrates that a power-law residence time distribution is appropriate for the hyporheic zone of Gam Creek. (A minimal sketch of the kind of late-time tail-slope estimate used for this comparison follows this entry.)
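
The headline comparison above rests on the power-law slope of the late-time tail of a breakthrough curve. The sketch below illustrates one straightforward way to estimate such a slope by linear regression in log-log space; the synthetic concentration curve and the choice of tail window are assumptions for illustration, not the authors' data or fitting procedure.

```python
# Estimate the power-law slope of the late-time tail of a breakthrough curve:
# for C(t) ~ t^(-b) at late time, -b is the slope of log C versus log t.
# The synthetic tracer curve and tail window are illustrative assumptions.
import numpy as np

t = np.linspace(1.0, 600.0, 600)       # time since injection (min)
c = t ** -1.6 * np.exp(-5.0 / t)       # synthetic concentration curve
peak = np.argmax(c)

# Use the portion of the curve well past the peak as the "late-time tail".
tail = slice(peak + 50, None)
slope, intercept = np.polyfit(np.log(t[tail]), np.log(c[tail]), 1)

print(f"late-time power-law slope: {slope:.2f}")  # ~ -1.6 for this synthetic curve
```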

Discovering abstract structure of unmet needs and hidden needs in familiar use environment - Analysis of Smartphone users' behavior data (일상적 사용 환경에서의 잠재니즈, 은폐니즈의 추상구조 발견 - 스마트폰 사용자의 행동데이터 수집 및 해석)

  • Shin, Sung Won;Yoo, Seung Hun
    • Design Convergence Study / v.16 no.6 / pp.169-184 / 2017
  • In everyday products and services such as smartphones, many needs remain unexpressed compared with those that users state openly. Finding these 'inconveniences in familiar use' creates opportunities to expand value within existing product and service areas. Much related work has studied how to define hidden needs and how to uncover them, but because that work typically focuses on developing new products or services, it is difficult to apply to hidden needs in familiar use. In this study, we redefine hidden needs in the context of daily familiarity and approach their discovery in a new way. Because users cannot articulate what they want and their needs are too complex to explain clearly, the problem cannot be treated quantitatively. For this reason, the basic data type chosen was user behavior data with all verbal description excluded: screenshots of the smartphone. We applied qualitative coding techniques to derive integrated rules and patterns from the individual data, overcoming the limitations of qualitative analysis based on unstructured data. Through this process, we not only extracted meaningful clues for understanding hidden needs but also confirmed, by reviewing their relevance to actual market trends, the potential of this approach as a way to discover hidden needs. Although the process of finding hidden needs is not easy to systematize, we expect it to serve as a reference frame for further studies on discovering hidden needs.

Maritime Safety Tribunal Ruling Analysis using SentenceBERT (SentenceBERT 모델을 활용한 해양안전심판 재결서 분석 방법에 대한 연구)

  • Bori Yoon;SeKil Park;Hyerim Bae;Sunghyun Sim
    • Journal of the Korean Society of Marine Environment & Safety / v.29 no.7 / pp.843-856 / 2023
  • The global surge in maritime traffic has resulted in an increased number of ship collisions, leading to significant economic, environmental, physical, and human damage. The causes of these maritime accidents are multifaceted, often arising from a combination of crew judgment errors, negligence, complex navigation routes, weather conditions, and technical deficiencies in the vessels. Given the intricate nuances and contextual information inherent in each incident, a methodology capable of deeply understanding the semantics and context of sentences is required. Accordingly, this study used the SentenceBERT model to analyze maritime safety tribunal decisions on ship collision incidents in the Busan Sea area over the last 20 years. The analysis revealed important keywords potentially responsible for these incidents, and a cluster analysis based on the frequency of specific keyword appearances was conducted and visualized. This information can serve as foundational data for the preemptive identification of accident causes and the development of strategies for collision prevention and response. (A minimal sketch of a SentenceBERT embedding-and-clustering workflow follows this entry.)
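
A minimal sketch of the embedding-and-clustering workflow the abstract describes, using the sentence-transformers and scikit-learn libraries; the model name, the sample sentences, and the number of clusters are assumptions, not the study's actual configuration.

```python
# Embed tribunal-style sentences with a SentenceBERT model and cluster them.
# Model choice, sample sentences, and k are illustrative assumptions.
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

sentences = [
    "The officer on watch failed to keep a proper lookout in dense fog.",
    "The vessel did not sound the prescribed fog signals.",
    "Engine failure left the ship unable to take avoiding action.",
    "The crossing vessel did not give way as required by the rules.",
]

model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")
embeddings = model.encode(sentences)   # shape: (n_sentences, embedding_dim)

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(embeddings)
for sentence, label in zip(sentences, kmeans.labels_):
    print(label, sentence)
```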

The Effect of Meta-Features of Multiclass Datasets on the Performance of Classification Algorithms (다중 클래스 데이터셋의 메타특징이 판별 알고리즘의 성능에 미치는 영향 연구)

  • Kim, Jeonghun;Kim, Min Yong;Kwon, Ohbyung
    • Journal of Intelligence and Information Systems / v.26 no.1 / pp.23-45 / 2020
  • Big data is being generated in a wide variety of fields such as medical care, manufacturing, logistics, sales, and SNS, and the characteristics of the resulting datasets are equally diverse. To secure competitiveness, companies need to improve their decision-making capacity using classification algorithms, yet most lack sufficient knowledge of which classification algorithm suits a given problem area. Determining the appropriate algorithm for the characteristics of a dataset has therefore been a task requiring expertise and effort, because the relationship between dataset characteristics (called meta-features) and the performance of classification algorithms is not fully understood. Moreover, there has been little research on meta-features that reflect the characteristics of multi-class data. The purpose of this study is to analyze empirically whether the meta-features of multi-class datasets have a significant effect on the performance of classification algorithms. The meta-features were grouped into two factors, data structure and data complexity, and seven representative meta-features were selected. Among them, we included the Herfindahl-Hirschman Index (HHI), originally a market concentration index, to replace the imbalance ratio (IR), and we developed a new index, the Reverse ReLU Silhouette Score. Six representative datasets from the UCI Machine Learning Repository (Balance Scale, PageBlocks, Car Evaluation, User Knowledge-Modeling, Wine Quality (red), and Contraceptive Method Choice) were selected, and each was classified with the algorithms chosen for the study (KNN, Logistic Regression, Naïve Bayes, Random Forest, and SVM). For each dataset, 10-fold cross-validation was applied, oversampling from 10% to 100% was performed for each fold, and the meta-features were measured: HHI, number of classes, number of features, entropy, Reverse ReLU Silhouette Score, nonlinearity of linear classifier, and hub score. F1-score was selected as the dependent variable. The results showed that six meta-features, including the proposed Reverse ReLU Silhouette Score and HHI, have a significant effect on classification performance: (1) the HHI meta-feature proposed in this study was significant; (2) the number of features, unlike the number of classes, has a significant positive effect; (3) the number of classes has a significant negative effect; (4) entropy has a significant effect; (5) the Reverse ReLU Silhouette Score is significant at the 0.01 level; and (6) the nonlinearity of linear classifiers has a significant negative effect. The analyses by individual classification algorithm were largely consistent, except that in the per-algorithm regression analysis the Naïve Bayes algorithm, unlike the others, showed no significant effect of the number of features.
This study makes two theoretical contributions: (1) two new meta-features (HHI and the Reverse ReLU Silhouette Score) were shown to be significant, and (2) the effects of data characteristics on classification performance were investigated using meta-features. Practically, (1) the results can be used to develop a system that recommends classification algorithms according to dataset characteristics, and (2) because data characteristics differ, many data scientists search for the optimal algorithm by repeatedly adjusting parameters, wasting hardware, cost, time, and manpower, which this study can help reduce. The findings are expected to be useful for machine learning and data mining researchers, practitioners, and developers of machine learning-based systems. The paper consists of an introduction, related research, the research model, experiments, and conclusion and discussion. (A minimal sketch of the HHI meta-feature computation follows this entry.)
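
Of the meta-features listed, the HHI can be computed directly from a dataset's class distribution. The sketch below shows one plausible implementation; the class counts are invented examples, and the paper's Reverse ReLU Silhouette Score is not reproduced here because its definition is given in the paper itself.

```python
# Herfindahl-Hirschman Index (HHI) of a class distribution: the sum of squared
# class proportions. Higher values indicate a more concentrated (imbalanced)
# dataset. The class counts below are illustrative, not the paper's datasets.
import numpy as np

def hhi(class_counts):
    p = np.asarray(class_counts, dtype=float)
    p = p / p.sum()
    return float(np.sum(p ** 2))

balanced = [100, 100, 100]    # three equally sized classes
imbalanced = [280, 15, 5]     # one dominant class

print(f"HHI (balanced):   {hhi(balanced):.3f}")   # 0.333 for three equal classes
print(f"HHI (imbalanced): {hhi(imbalanced):.3f}") # approaches 1.0 as imbalance grows
```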

The Adoption and Diffusion of Semantic Web Technology Innovation: Qualitative Research Approach (시맨틱 웹 기술혁신의 채택과 확산: 질적연구접근법)

  • Joo, Jae-Hun
    • Asia pacific journal of information systems / v.19 no.1 / pp.33-62 / 2009
  • Internet computing is a disruptive IT innovation, and the Semantic Web can be considered one as well, because Semantic Web technology has the potential to reduce information overload and enable semantic integration through capabilities such as semantics and machine-processability. How should organizations adopt the Semantic Web, and what factors affect the adoption and diffusion of this innovation? Most studies on the adoption and diffusion of innovations use empirical analysis as a quantitative research methodology in the post-implementation stage, and the positivist demand for theoretical rigor has been criticized for sacrificing relevance to practice. Rapid advances in technology call for studies that are relevant to practice, and because the Semantic Web is in its infancy, a quantitative approach to the factors affecting its adoption is not realistically possible. At this early stage, however, practitioners and researchers need a model and guidelines for the adoption and diffusion of the technology innovation. The purpose of this study is therefore to present a model of the adoption and diffusion of the Semantic Web and to offer propositions as guidelines for successful adoption, using a qualitative research method comprising multiple case studies and in-depth interviews. The researcher conducted face-to-face interviews with 15 people and 2 further interviews by telephone and e-mail to saturate the categories. Nine interviews, including the 2 telephone interviews, were with nine user organizations adopting the technology innovation, and the others were with three supply organizations. Semi-structured interviews were used to collect data; the interviews were recorded on a digital voice recorder and transcribed verbatim, yielding 196 pages of transcripts from about 12 hours of interviews. Triangulation of evidence was achieved by examining each organization's website and various documents, such as brochures and white papers. The researcher read the transcripts several times, underlined core words, phrases, and sentences, and then applied open coding, forming initial categories of information about the phenomenon by segmenting the data. QSR NVivo version 8.0 was used to group sentences containing similar concepts. The 47 categories derived from the interview data were grouped into 21 categories, from which six factors were named; five factors affecting adoption of the Semantic Web were identified. The first factor is demand pull, including requirements for improving the search and integration services of existing systems and for creating new services. Second, from the perspective of technology push, environmental conduciveness, reference models, uncertainty, technology maturity, potential business value, government sponsorship programs, promising prospects for technology demand, complexity, and trialability affect adoption. Third, absorptive capacity plays an important role in adoption. Fourth, supplier competence includes communication with and training for users, and the absorptive capacity of the supply organization. Fifth, over-expectation, which creates a gap between users' expectations and perceived benefits, has a negative impact on adoption. Finally, a factor comprising a critical mass of ontology, budget, and visible effects is identified as a determinant of routinization and infusion. The researcher proposed a model of the adoption and diffusion of the Semantic Web that represents the relationships between the six factors and adoption/diffusion as the dependent variables, and derived six propositions from the model as guidelines for practitioners and as a research model for further studies. Proposition 1: Demand pull influences the adoption of the Semantic Web. Proposition 1-1: The stronger the requirements for improving existing services, the more successfully the Semantic Web is adopted. Proposition 1-2: The stronger the requirements for new services, the more successfully the Semantic Web is adopted. Proposition 2: Technology push influences the adoption of the Semantic Web. Proposition 2-1: From the perspective of user organizations, technology-push forces such as environmental conduciveness, reference models, potential business value, and government sponsorship programs have a positive impact on adoption, while uncertainty and low technology maturity have a negative impact. Proposition 2-2: From the perspective of suppliers, technology-push forces such as environmental conduciveness, reference models, potential business value, government sponsorship programs, and promising prospects for technology demand have a positive impact on adoption, while uncertainty, low technology maturity, complexity, and low trialability have a negative impact. Proposition 3: Absorptive capacities such as formal organizational support systems, officers' or managers' competence in analyzing technology characteristics, their passion or willingness, and top management support are positively associated with successful adoption of the Semantic Web innovation from the perspective of user organizations. Proposition 4: Supplier competence has a positive impact on the absorptive capacities of user organizations and on technology-push forces. Proposition 5: The greater the expectation gap between users and suppliers, the later the Semantic Web is adopted. Proposition 6: Post-adoption activities such as budget allocation, reaching a critical mass, and sharing ontology to offer sustainable services are positively associated with successful routinization and infusion of the Semantic Web innovation from the perspective of user organizations.

MCP, Kernel Density Estimation and LoCoH Analysis for the Core Area Zoning of the Red-crowned Crane's Feeding Habitat in Cheorwon, Korea (철원지역 두루미 취식지의 핵심지역 설정을 위한 MCP, 커널밀도측정법(KDE)과 국지근린지점외곽연결(LoCoH) 분석)

  • Yoo, Seung-Hwa;Lee, Ki-Sup;Park, Chong-Hwa
    • Korean Journal of Environment and Ecology / v.27 no.1 / pp.11-21 / 2013
  • We identified the core feeding sites of the Red-crowned Crane (Grus japonensis) in Cheorwon, Korea using three analysis techniques, MCP (minimum convex polygon), KDE (kernel density estimation), and LoCoH (local nearest-neighbor convex hull), and discussed the differences among the methods and the meaning of their results. The utilization distribution data were taken from a distribution map of the Red-crowned Crane in Cheorwon recorded on 17 February 2012. The extent of the distribution area by MCP analysis was 140 km². The extents of the core feeding area were 33.3 km² (KDE, 1,000 m bandwidth), 25.7 km² (KDE, CVh bandwidth), and 19.7 km² (KDE, LSCVh bandwidth). The extent, number, and shape complexity of the core areas decreased, and the size of each core area shrank, as the bandwidth decreased (default: 1,000 m; CVh: 554.6 m; LSCVh: 329.9 m). We suggest the CVh value as an appropriate KDE bandwidth for delineating the crane's core area. In the LoCoH analysis, the extent of the distribution range and core area increased and merged into a larger core area as the k value increased; the appropriate value for selecting the core area was k = 24, giving a core area of 18.2 km², or 16.5% of the total distribution area. The LoCoH analysis yielded two core areas, fewer than the KDE analysis. Because most Korean studies and presentations using KDE have not reported the exact bandwidth used, KDE studies should state the bandwidth value explicitly. (A minimal sketch of MCP and KDE computations on synthetic location points follows this entry.)
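
As a rough illustration of the MCP and KDE steps described above, the following sketch computes a convex-hull area and a kernel density surface over synthetic location points using SciPy; the coordinates are invented, and SciPy's default bandwidth selection differs from the fixed 1,000 m, CVh, and LSCVh bandwidths compared in the paper.

```python
# Minimum convex polygon (MCP) area and a kernel density estimate over
# synthetic location points. Coordinates are illustrative assumptions.
import numpy as np
from scipy.spatial import ConvexHull
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
points = rng.normal(loc=[127.2, 38.2], scale=[0.03, 0.02], size=(200, 2))

# MCP: area of the convex hull enclosing all locations (square degrees here;
# project to metres before reporting km² in a real analysis).
hull = ConvexHull(points)
print(f"MCP area: {hull.volume:.5f} square degrees")  # .volume is area in 2D

# KDE: evaluate the utilization density on a grid; as a crude "core area" cut,
# count the grid cells whose density exceeds the median grid value.
kde = gaussian_kde(points.T)
xs = np.linspace(points[:, 0].min(), points[:, 0].max(), 100)
ys = np.linspace(points[:, 1].min(), points[:, 1].max(), 100)
xx, yy = np.meshgrid(xs, ys)
density = kde(np.vstack([xx.ravel(), yy.ravel()])).reshape(xx.shape)

threshold = np.quantile(density, 0.5)
core_fraction = (density >= threshold).mean()
print(f"fraction of grid cells above the median density: {core_fraction:.2f}")
```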

Phase Image Analysis in Conduction Disturbance Patients (심실내 전도장애 환자에서의 99mTc-RBC Gated Blood-Pool Scintigraphy을 통한 Phase Image Analysis)

  • Kwak, Byeng-Su;Choi, Si-Wan;Kang, Seung-Sik;Park, Ki-Nam;Lee, Kang-Wook;Jeon, Eun-Seok;Park, Chong-Hun
    • The Korean Journal of Nuclear Medicine / v.28 no.1 / pp.44-51 / 1994
  • The normal His-Purkinje system provides nearly synchronous activation of the right (RV) and left (LV) ventricles; when His-Purkinje conduction is abnormal, the resulting sequence of ventricular contraction is correspondingly abnormal. These mechanical consequences have been difficult to demonstrate because of the complexity and rapidity of the events. To determine the relationship between phase changes and abnormalities of ventricular conduction, we performed phase image analysis of 99mTc-RBC gated blood pool scintigrams in patients with intraventricular conduction disturbances (24 with complete left bundle branch block (C-LBBB), 15 with complete right bundle branch block (C-RBBB), 13 with Wolff-Parkinson-White (WPW) syndrome, and 10 controls). The results were as follows. 1) The ejection fraction (EF), peak ejection rate (PER), and peak filling rate (PFR) of the LV on gated blood pool scintigraphy (GBPS) were significantly lower in patients with C-LBBB than in controls (44.4±13.9% vs 69.9±4.2%, 2.48±0.98 vs 3.51±0.62, and 1.76±0.71 vs 3.38±0.92, respectively; p<0.05). 2) In the phase angle analysis of the LV, the standard deviation (SD), the full width at half maximum of the phase angle histogram (FWHM), and the range of the phase angle were significantly greater in patients with C-LBBB than in controls (20.6±18.1 vs 8.6±1.8, 22.5±9.2 vs 16.0±3.9, and 95.7±31.7 vs 51.3±5.4, respectively; p<0.05). 3) There was no significant difference in EF, PER, or PFR between patients with WPW syndrome and controls. 4) The standard deviation and range of the phase angle were significantly higher in patients with WPW syndrome than in controls (10.6±2.6 vs 8.6±1.8, p<0.05; 69.8±11.7 vs 51.3±5.4, p<0.001), with no difference between the two groups in FWHM. 5) Phase image analysis revealed a relatively uniform phase across both ventricles in patients with normal conduction, but a markedly delayed phase in the left ventricle of patients with LBBB. 6) In 10 of the 13 cases of WPW syndrome (77%), the site of preexcitation could be localized by phase image analysis. We conclude that phase image analysis provides an accurate noninvasive method for detecting the mechanical consequences of a wide variety of abnormal electrical activation patterns in the ventricles. (A minimal sketch of a first-harmonic phase image computation follows this entry.)
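
Phase images of this kind are commonly derived from the first Fourier harmonic of each pixel's time-activity curve. The sketch below illustrates that generic computation on a synthetic 16-frame gated sequence; the frame count, image size, and simulated contraction delay are assumptions, not the study's acquisition parameters.

```python
# Per-pixel phase image from a gated blood-pool sequence: take the first
# Fourier harmonic of each pixel's time-activity curve and use its angle.
# The synthetic 16-frame sequence stands in for real scintigraphy data.
import numpy as np

frames, height, width = 16, 8, 8
t = np.arange(frames)

# Synthetic counts: every pixel follows one cardiac cycle, with the right half
# of the image contracting 90 degrees later than the left half.
phase_delay = np.where(np.arange(width) < width // 2, np.pi / 6, np.pi / 6 + np.pi / 2)
delay = np.broadcast_to(phase_delay, (height, width))
counts = 100 + 20 * np.cos(2 * np.pi * t[:, None, None] / frames - delay[None, :, :])

# First harmonic along the time axis; negate its angle so that a later
# contraction maps to a larger phase angle.
first_harmonic = np.fft.fft(counts, axis=0)[1]
phase = (-np.degrees(np.angle(first_harmonic))) % 360

print("mean phase, left half :", round(float(phase[:, : width // 2].mean()), 1))  # ~30
print("mean phase, right half:", round(float(phase[:, width // 2:].mean()), 1))   # ~120
```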

Analysis of Characteristics in Ara River Basin Using Fractal Dimension (프랙탈 차원을 이용한 아라천 유역특성 분석)

  • Hwang, Eui-Ho;Lee, Eul-Rae;Lim, Kwang-Suop;Jung, Kwan-Sue
    • Journal of Korea Water Resources Association / v.44 no.10 / pp.831-841 / 2011
  • In this study, assuming that the geographical characteristics of the river basin exhibit self-similarity, fractal dimensions were used to quantify the complexity of the terrain. The area exponent and the Hurst exponent were applied to estimate the fractal dimension by means of spatial analysis. The fractal dimensions calculated from the area exponent and the Hurst exponent were 2.008~2.074 and 2.132~2.268, respectively, with R² values of 94.9% and 87.1%, which are relatively high. Analysis of the spatial self-similarity parameters showed that the gently sloping terrain of the established urban area along the Ara Waterway is close to a two-dimensional fractal. In addition, the relationships between the fractal dimension and geographical elements were identified. These results indicate that the fractal dimension can serve as a representative value of basin characteristics. (A minimal sketch of a generic box-counting fractal-dimension estimate follows this entry.)
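
The paper estimates fractal dimension via the area and Hurst exponents; as a simpler, generic illustration of quantifying spatial complexity with a fractal dimension, the sketch below uses box counting on a synthetic binary raster. The pattern and box sizes are assumptions, and this estimator differs from the one used in the study.

```python
# Box-counting estimate of fractal dimension for a binary raster (for example
# a rasterized stream pattern). Generic estimator with an illustrative pattern.
import numpy as np

def box_count_dimension(grid, box_sizes):
    counts = []
    n = grid.shape[0]
    for size in box_sizes:
        occupied = 0
        for i in range(0, n, size):
            for j in range(0, n, size):
                if grid[i:i + size, j:j + size].any():
                    occupied += 1
        counts.append(occupied)
    # Slope of log(count) versus log(1/size) is the box-counting dimension.
    slope, _ = np.polyfit(np.log(1.0 / np.array(box_sizes)), np.log(counts), 1)
    return slope

n = 256
rng = np.random.default_rng(1)
# Synthetic "channel" pattern: a random walk across the raster.
grid = np.zeros((n, n), dtype=bool)
row = n // 2
for col in range(n):
    row = int(np.clip(row + rng.integers(-1, 2), 0, n - 1))
    grid[row, col] = True

print(f"box-counting dimension: {box_count_dimension(grid, [2, 4, 8, 16, 32]):.2f}")
```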

Classification of Performance Types for Knowledge Intensive Service Supporting SMEs Using Clustering Techniques: Focused on the Case of K Research Institute (클러스터링 기법을 활용한 중소기업 지원 지식서비스의 성과유형 분류: K 연구원 사례를 중심으로)

  • Lee, Jungwoo;Kim, Sung Jin;Kim, Min Kwan;Yoo, Jae Young;Hahn, Hyuk;Park, Hun;Han, Chang-Hee
    • The Journal of Society for e-Business Studies / v.22 no.3 / pp.87-103 / 2017
  • In recent years, many small and medium-sized manufacturing companies have pursued process and product innovation through public knowledge services. K Research Institute provides different types of knowledge services in combination, and this complexity makes it difficult to analyze the performance of its knowledge service programs precisely. In this study, we derived performance items from a bottom-up viewpoint rather than the top-down approach of selecting items used in previous performance analyses. As a result, 74 items were identified from 82 companies in the K Research Institute case book and refined to a final set of 17 items. A case-performance matrix was then constructed and populated with binary data for analysis. K-means clustering identified three clusters: 'enhancement of core competitiveness (product and patent),' 'expansion of domestic and overseas markets,' and 'improvement of operational efficiency.' (A minimal sketch of K-means clustering on a binary case-performance matrix follows this entry.)
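
A minimal sketch of K-means clustering (k = 3) on a binary case-by-performance-item matrix, mirroring the kind of analysis described above; the matrix, item labels, and cluster summaries are made-up placeholders rather than the K Research Institute data.

```python
# K-means (k = 3) on a binary case-by-performance-item matrix.
# The matrix and item labels are illustrative placeholders.
import numpy as np
from sklearn.cluster import KMeans

items = ["new product", "patent filed", "export contract",
         "domestic sales up", "defect rate down", "lead time down"]

# Rows: supported companies; columns: whether each performance item was observed.
cases = np.array([
    [1, 1, 0, 0, 0, 0],
    [1, 1, 0, 1, 0, 0],
    [0, 0, 1, 1, 0, 0],
    [0, 0, 1, 1, 0, 1],
    [0, 0, 0, 0, 1, 1],
    [0, 1, 0, 0, 1, 1],
])

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(cases)
for label in range(3):
    members = cases[kmeans.labels_ == label]
    dominant = [items[i] for i in np.argsort(members.mean(axis=0))[::-1][:2]]
    print(f"cluster {label}: {members.shape[0]} cases, dominant items: {dominant}")
```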