• Title/Summary/Keyword: 계층구조 (hierarchical structure)

Search Results: 2,889

Benefit·Cost Analysis of Combine Method Using Hollow Precast Concrete Column (중공 PC기둥 복합공법의 편익-비용 분석)

  • Kim, Jae-Yeob;Park, Byeong-Hun;Lee, Ung-Kyun
    • Journal of the Korea Institute of Building Construction
    • /
    • v.16 no.5
    • /
    • pp.429-436
    • /
    • 2016
  • Because of the shortage of construction workers caused by rising labor costs and an aging workforce, construction times have been extended. As a solution, the construction time of high-rise buildings can be reduced by adopting precast concrete construction methods. Most relevant studies have focused on the development and structural analysis of such methods rather than on their construction management. Therefore, this study focused on the construction management of the hollow precast concrete column (HPC) method. The objective of this study was to evaluate the performance of the HPC method through the analytic hierarchy process (AHP) and benefit-cost analysis. The evaluation criteria were selected after a gap analysis of the available literature and expert interviews. A questionnaire survey was administered to professionals with ample experience in precast concrete construction for the pairwise evaluation of the benefits and costs of the HPC method. The results show that the benefits of the HPC method outweighed its costs. Therefore, the HPC method is a suitable substitute for the half-slab method.
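
A minimal sketch of the kind of pairwise-comparison arithmetic an AHP-based benefit-cost evaluation relies on (the criteria, matrix entries, and scores below are hypothetical, not the paper's survey data): criterion weights are taken from the principal eigenvector of a pairwise comparison matrix, and the benefit-cost ratio is the ratio of the weighted scores.

```python
# Hypothetical AHP-style benefit-cost sketch (not the paper's data or criteria).
import numpy as np

def ahp_weights(pairwise: np.ndarray) -> np.ndarray:
    """Return criterion weights as the normalized principal eigenvector."""
    eigvals, eigvecs = np.linalg.eig(pairwise)
    principal = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
    return principal / principal.sum()

# Assumed 3x3 pairwise comparison matrices for benefit and cost criteria.
benefit_cmp = np.array([[1, 3, 5],
                        [1/3, 1, 2],
                        [1/5, 1/2, 1]])
cost_cmp = np.array([[1, 2, 4],
                     [1/2, 1, 2],
                     [1/4, 1/2, 1]])

benefit_w = ahp_weights(benefit_cmp)
cost_w = ahp_weights(cost_cmp)

# Hypothetical scores of the HPC method on each criterion (0..1 scale).
benefit_scores = np.array([0.8, 0.6, 0.7])
cost_scores = np.array([0.5, 0.4, 0.6])

bc_ratio = (benefit_w @ benefit_scores) / (cost_w @ cost_scores)
print(f"benefit-cost ratio: {bc_ratio:.2f}")  # > 1 means benefits outweigh costs
```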

Unsupervised Noun Sense Disambiguation using Local Context and Co-occurrence (국소 문맥과 공기 정보를 이용한 비교사 학습 방식의 명사 의미 중의성 해소)

  • Lee, Seung-Woo;Lee, Geun-Bae
    • Journal of KIISE:Software and Applications
    • /
    • v.27 no.7
    • /
    • pp.769-783
    • /
    • 2000
  • In this paper, in order to disambiguate Korean noun word senses, we define a local context and explain how to extract it from a raw corpus. Following the intuition that two different nouns are likely to have similar meanings if they occur in the same local context, we use, as a clue, the words that occur in the same local context as the target noun. This method increases the usability of the extracted knowledge and makes it possible to disambiguate the senses of infrequent words. We can also overcome the data sparseness problem by extending the verbs in a local context. The sense of a target noun is decided by its maximum similarity to the previously learned clues. The similarity between two words is computed as their concept distance in the sense hierarchy borrowed from WordNet. By gradually reducing the multiplicity of clues in the process of computing the maximum similarity, we can speed up subsequent calculations. When a target noun has more than two local contexts, we assign a weight according to the type of each local context to reflect the differences in the strength of the semantic restriction of local contexts. As another knowledge source, we obtain co-occurrence information from dictionary definitions and example sentences for the target noun. This is used to support local contexts and helps to select the most appropriate sense of the target noun. Through experiments with the proposed method, we found that the applicability of local contexts is very high and that co-occurrence information can supplement local contexts to improve precision. In spite of the high ambiguity of the target nouns used in our experiments, we achieved higher performance (89.8%) than supervised methods that use a sense-tagged corpus.
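
A minimal sketch of sense selection by maximum similarity in a sense hierarchy, in the spirit of the abstract; it uses the English WordNet through NLTK's path similarity as a stand-in, so the Korean sense inventory, local-context extraction, clue weighting, and co-occurrence support described above are all omitted.

```python
# Sketch of sense selection by maximum concept-distance similarity
# (English WordNet via NLTK as a stand-in; the paper's Korean sense
# inventory, local-context extraction and weighting are omitted).
from nltk.corpus import wordnet as wn

def best_sense(target_noun: str, clue_nouns: list[str]):
    """Pick the sense of target_noun most similar to any clue noun's senses."""
    best, best_score = None, 0.0
    for sense in wn.synsets(target_noun, pos=wn.NOUN):
        for clue in clue_nouns:
            for clue_sense in wn.synsets(clue, pos=wn.NOUN):
                # path_similarity is inversely related to distance in the hierarchy
                score = sense.path_similarity(clue_sense) or 0.0
                if score > best_score:
                    best, best_score = sense, score
    return best, best_score

# Hypothetical example: clue nouns gathered from the same local context.
sense, score = best_sense("bank", ["money", "loan"])
print(sense, score)  # expected to prefer the financial-institution sense
```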

Evaluation of Multivariate Stream Data Reduction Techniques (다변량 스트림 데이터 축소 기법 평가)

  • Jung, Hung-Jo;Seo, Sung-Bo;Cheol, Kyung-Joo;Park, Jeong-Seok;Ryu, Keun-Ho
    • The KIPS Transactions:PartD
    • /
    • v.13D no.7 s.110
    • /
    • pp.889-900
    • /
    • 2006
  • Even though sensor networks differ in user requests and data characteristics depending on the application area, existing research on the stream data transmission problem has focused on improving the performance of individual methods rather than considering the original characteristics of stream data. In this paper, we introduce a hierarchical or distributed sensor network architecture and data model, and then evaluate multivariate data reduction methods suited to user requirements and data features so that reduction methods can be applied selectively. To assess the relative performance of the multivariate data reduction methods, we used conventional techniques, such as Wavelet, HCL (Hierarchical Clustering), Sampling, and SVD (Singular Value Decomposition), together with experimental data sets such as multivariate time series, synthetic data, and robot execution failure data. The experimental results show that the SVD and Sampling methods are superior to Wavelet and HCL with respect to relative error ratio and execution time. In particular, since the relative error ratio of each data reduction method differs according to data characteristics, good performance is obtained by selecting the reduction method appropriate to the experimental data set. The findings reported in this paper can serve as a useful guideline for the design and construction of sensor network applications involving multivariate stream data.
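
A minimal sketch of one of the reduction techniques compared above: rank-k SVD reconstruction of a multivariate stream window and the corresponding relative error ratio. The window size, number of variables, and ranks are hypothetical, not the paper's experimental settings.

```python
# Hypothetical rank-k SVD reduction of a multivariate stream window and its
# relative error ratio (Frobenius norm); the data and ranks are made up.
import numpy as np

rng = np.random.default_rng(0)
window = rng.normal(size=(256, 8))   # 256 time steps x 8 sensor variables

def svd_reduce(X: np.ndarray, k: int) -> np.ndarray:
    """Reconstruct X from its k largest singular values/vectors."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U[:, :k] * s[:k] @ Vt[:k, :]

for k in (1, 2, 4):
    approx = svd_reduce(window, k)
    rel_err = np.linalg.norm(window - approx) / np.linalg.norm(window)
    print(f"rank {k}: relative error ratio = {rel_err:.3f}")
```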

Distribution of Biomass and Production of Pinus rigida and Pinus rigida×taeda Plantation in Kwangju District (광주지방(光州地方)의 리기다소나무 및 리기테다소나무조림지(造林地)의 물질생산량(物質生産量)에 관(關)한 연구(研究))

  • Lee, Kyong Jae;Kim, Kap Duk;Kim, Jae Saeng;Park, In Hyeop
    • Journal of Korean Society of Forest Science
    • /
    • v.69 no.1
    • /
    • pp.28-35
    • /
    • 1985
  • To estimate the aboveground biomass of 22-year-old Pinus rigida and Pinus rigida ${\times}$ taeda plantations, experimental plots of $200m^2$ located in Kwangju, Jeollanam-do were selected. Nine sample trees per plot, selected taking account of the DBH distribution, were felled, and diagrams of the oven-dry weight distribution of stem, branch, and needle for each 1 m segment were constructed. Logarithmic regression equations between the dry weight of each component (stems, branches, and needles) and the variable $(DBH)^2{\cdot}H$ were obtained. The aboveground standing crop was estimated to be 71.61 and 142.32 tons of dry matter per hectare in the P. rigida and P. rigida ${\times}$ taeda stands, respectively. The net production was estimated as 10.81 and 10.46 t/ha/yr and the net assimilation rate as 1.32 and 1.00 kg/kg/yr in the P. rigida and P. rigida ${\times}$ taeda stands, respectively, and the efficiency of needles in producing stem was 0.97 and 0.81 kg/kg/yr in the same order.
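
A minimal sketch of the kind of allometric (logarithmic) regression described above, fitting ln W = a + b ln(D^2 H) for one component; the DBH, height, and stem-weight values are made up for illustration, not the paper's measurements.

```python
# Hypothetical allometric fit: ln(dry weight) = a + b * ln(DBH^2 * H).
# The DBH (cm), height (m), and stem-weight (kg) values below are made up.
import numpy as np

dbh = np.array([10.2, 12.5, 14.1, 16.8, 18.0, 20.3, 22.7, 24.4, 26.1])
height = np.array([8.5, 9.2, 10.1, 11.0, 11.6, 12.4, 13.0, 13.5, 14.2])
stem_weight = np.array([18.0, 28.5, 39.0, 58.0, 70.0, 95.0, 125.0, 150.0, 180.0])

x = np.log(dbh**2 * height)
y = np.log(stem_weight)
b, a = np.polyfit(x, y, 1)          # slope b, intercept a

# Back-transformed allometric equation: W = exp(a) * (D^2 H)^b
predict = lambda d, h: np.exp(a) * (d**2 * h) ** b
print(f"ln W = {a:.3f} + {b:.3f} ln(D^2 H); W(15 cm, 10 m) = {predict(15, 10):.1f} kg")
```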

Subnet Generation Scheme based on Deep Learning for Healthcare Information Gathering (헬스케어 정보 수집을 위한 딥 러닝 기반의 서브넷 구축 기법)

  • Jeong, Yoon-Su
    • Journal of Digital Convergence
    • /
    • v.15 no.3
    • /
    • pp.221-228
    • /
    • 2017
  • With the recent development of IoT technology, medical services using IoT technology are increasing in many medical institutions that provide healthcare services. However, as the number of IoT sensors attached to the user's body increases, the healthcare information transmitted to the server becomes more complex, which increases the time required to analyze the user's healthcare information at the server. In this paper, we propose a deep learning based healthcare information management scheme that collects and processes, at the server, the large amount of healthcare information delivered through user-attached IoT devices. The proposed scheme assigns an attribute value to the healthcare information transmitted to the server, constructs subnets according to those attribute values, and extracts the association information between subnets as a seed to group them into a hierarchical structure. The server extracts optimized information that can improve the observation speed and accuracy of the user's treatment and prescription by applying deep learning to the grouped healthcare information. The performance evaluation shows that the proposed scheme improves the processing speed of medical services in the healthcare service model by 14.1% on average and reduces server overhead by 6.7% compared with the conventional technique. The accuracy of healthcare information extraction was 10.1% higher than with the conventional method.
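
A rough sketch of the attribute-based grouping step described above; the attribute names, records, and co-occurrence association measure are hypothetical, and the deep learning stage is omitted. Records are bucketed into subnets by attribute value, and subnets are then linked by how often their attributes co-occur in the same record, which serves as the seed for hierarchical grouping.

```python
# Hypothetical sketch of attribute-based subnet grouping; attribute names,
# records, and the co-occurrence association measure are made up, and the
# deep learning stage described in the abstract is omitted.
from collections import defaultdict
from itertools import combinations

records = [
    {"heart_rate": "high", "activity": "rest"},
    {"heart_rate": "high", "glucose": "normal"},
    {"glucose": "high", "activity": "walk"},
    {"heart_rate": "normal", "activity": "walk"},
]

# Step 1: bucket record ids into subnets keyed by (attribute, value).
subnets = defaultdict(list)
for i, rec in enumerate(records):
    for attr, val in rec.items():
        subnets[(attr, val)].append(i)

# Step 2: association between subnets = number of records they share;
# these links act as seeds for grouping subnets into a hierarchy.
association = {}
for (a, ids_a), (b, ids_b) in combinations(subnets.items(), 2):
    shared = len(set(ids_a) & set(ids_b))
    if shared:
        association[(a, b)] = shared

for pair, strength in sorted(association.items(), key=lambda kv: -kv[1]):
    print(pair, "->", strength)
```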

Studies of the Efficiency of Wearable Input Interface (웨어러블 입력장치의 인터페이스 효율성에 관한 연구)

  • Lee, Seun-Young;Hong, Ji-Young;Chae, Haeng-Suk;Han, Kwang-Hee
    • Science of Emotion and Sensibility
    • /
    • v.10 no.4
    • /
    • pp.583-601
    • /
    • 2007
  • The desktop interface is not suitable for environments in which mobile devices are commonly used while moving, because it requires too much attention. In addition, the miniaturization of mobile devices increases the workload of using them, slows operation, and causes more errors. A study of the appropriate level of input interface for this changing environment is therefore needed. In terms of mobile device input style and the complexity of the menu hierarchy, this study looks for ways to decrease workload when performing primary tasks and using a mobile device simultaneously while moving. The input styles were classified into gesture input, button input, and pointing input. Accuracy and speed were measured while performing dual tasks, consisting of a menu searching task and a figure memory task, with one of the three input styles. By changing the level of the menu hierarchy in the menu searching task, the accuracy of task execution was examined. These experiments were conducted in a standing state and a moving state. In both states the pointing input style was the most accurate in task execution but the slowest in speed. In contrast, the gesture input style was not highly accurate but was the fastest. This shows that the gesture input style is suitable for conditions that require speedy processing rather than accurate execution while moving.

Granite Suite and Supersuite for the Triassic Granites in South Korea (우리나라 트라이아스기 화강암의 스위트/슈퍼스위트 분류)

  • Jwa Yong-Joo;Kim Jong-Sun;Kim Kun-Ki
    • The Journal of the Petrological Society of Korea
    • /
    • v.14 no.4 s.42
    • /
    • pp.226-236
    • /
    • 2005
  • Using the concept of the granite suite/supersuite, we hierarchically classified the Triassic granites in South Korea, which have spatio-temporally close relationships with each other. Among the Triassic granites in the Okcheon belt (western Yeongnam massif), the Baegrok granite and the Jeomchon granite can be grouped into one suite, the Baegrok suite, whereas the Cheongsan granite forms the Cheongsan suite. These two suites can be grouped again into a larger supersuite, the Baegrok supersuite, on the basis of the similarity of their source rocks and the contrasts in their petrographic and geochemical characteristics. Three Triassic granites in the Gyeongsang basin, the Yeongdeok granite, the Yeonghae granite, and the Cheongsong granite, can be grouped into the Yeongdeok suite, the Yeonghae suite, and the Cheongsong suite, respectively. These three suites can be grouped again into a larger supersuite, the Yeongdeok supersuite, on the same basis. The Nd-Sr isotopic signatures of the Baegrok supersuite are quite distinct from those of the Yeongdeok supersuite, indicating that the source materials of the two granitic magmas were not identical. The source rocks of the Baegrok supersuite are thought to be a mixture of two crustal components of the Yeongnam massif, whereas those of the Yeongdeok supersuite are thought to be a mixture of depleted mantle with crustal components of the Yeongnam massif. The fact that the two contemporaneous granite supersuites were derived from different sources can be explained by differences in the tectonic environments in which the granitic magmas were produced.

Development of Computation Model for Traffic Accidents Risk Index - Focusing on Intersection in Chuncheon City - (교통사고 위험도 지수 산정 모델 개발 - 춘천시 교차로를 중심으로 -)

  • Shim, Kywan-Bho;Hwang, Kyung-Soo
    • International Journal of Highway Engineering
    • /
    • v.11 no.3
    • /
    • pp.61-74
    • /
    • 2009
  • This study develops a computation model for a traffic accident risk index that applies traffic significance levels for road user groups, road and street network areas, population groups, and so on, through numerical formulas or models, as a countermeasure to reduce the occurrence rate of traffic accidents. Abroad, risk is estimated for tangent sections and intersections through such estimation models and used to select improvement measures, and in Korea these externally developed models are applied, partly modified, in some projects. However, the accuracy of adapting external models to domestic conditions, rather than developing models independently, is questionable. Therefore, in this study, intersection evaluation elements were identified for 96 intersections from the traffic accident occurrence records, geometric structure, control type, traffic volume, turning traffic volume, and so on, and the final variables were selected through correlation analysis of the extracted evaluation elements. With the finally selected variables of signal type, number of lanes, and intersection form, an intersection risk model was developed through ANOVA of the three variables, and risk discrimination models for signalized and unsignalized intersections were also developed.
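
A minimal sketch of the ANOVA step mentioned above, using a one-way analysis of variance to test whether mean accident counts differ by one of the candidate factors (signal type); the accident counts and groups are made up, not the study's 96-intersection data.

```python
# Hypothetical one-way ANOVA sketch: do mean accident counts differ by
# intersection signal type? The counts below are made up, not the study's data.
from scipy import stats

signalized      = [12, 9, 15, 11, 14, 10]   # accidents/year, signalized intersections
unsignalized    = [5, 7, 4, 6, 8, 5]        # accidents/year, unsignalized intersections
flashing_signal = [8, 9, 7, 10, 6, 9]       # accidents/year, flashing-signal intersections

f_stat, p_value = stats.f_oneway(signalized, unsignalized, flashing_signal)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
# A small p-value suggests signal type is a useful factor in the risk model.
```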

k-Interest Places Search Algorithm for Location Search Map Service (위치 검색 지도 서비스를 위한 k관심지역 검색 기법)

  • Cho, Sunghwan;Lee, Gyoungju;Yu, Kiyun
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.31 no.4
    • /
    • pp.259-267
    • /
    • 2013
  • GIS-based web map services have become all the more accessible to the public. Among them, location query services are the most frequently used, but they are currently restricted to a single keyword search. Although demand is increasing for services that query multiple keywords corresponding to sequential activities (banking, having lunch, watching a movie, and other activities) at various POI locations, such a service has yet to be provided. The objective of this paper is to develop the k-IPS algorithm for quickly and accurately querying multiple POIs input by internet users and locating the search results on a web map. The algorithm utilizes the hierarchical tree structure of the $R^*$-tree indexing technique to produce overlapping geometric regions. By using a recursive $R^*$-tree index based spatial join process, the performance of the conventional spatial join operation was improved. The performance of the algorithm was tested by applying spatial queries with 2, 3, and 4 POIs selected from a set of 159 keywords. About 90% of the test queries were answered within 0.1 second. The algorithm proposed in this paper is expected to be used to provide a variety of location-based query services, for which demand is increasing, to conveniently support citizens' daily activities.
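
A minimal sketch of an R-tree based multi-keyword POI lookup in the spirit of the abstract; it uses the Python rtree package and a naive per-keyword nearest-neighbour combination rather than the paper's recursive $R^*$-tree spatial join, and all POIs and coordinates are hypothetical.

```python
# Sketch of a multi-keyword POI lookup over R-tree indexes; uses the Python
# 'rtree' package (libspatialindex). The POIs, coordinates, and the naive
# nearest-neighbour combination are hypothetical, not the paper's method.
from rtree import index

# Hypothetical POIs per keyword: id -> (x, y)
pois = {
    "bank":       {1: (2.0, 3.0), 2: (8.0, 1.0)},
    "restaurant": {3: (2.5, 3.5), 4: (9.0, 9.0)},
    "cinema":     {5: (3.0, 2.5), 6: (7.5, 8.0)},
}

# Build one R-tree per keyword.
trees = {}
for kw, points in pois.items():
    idx = index.Index()
    for pid, (x, y) in points.items():
        idx.insert(pid, (x, y, x, y))   # a point stored as a degenerate rectangle
    trees[kw] = idx

def k_interest_places(query_xy, keywords):
    """Return, per keyword, the POI nearest to the query location."""
    x, y = query_xy
    result = {}
    for kw in keywords:
        nearest_id = next(iter(trees[kw].nearest((x, y, x, y), 1)))
        result[kw] = (nearest_id, pois[kw][nearest_id])
    return result

print(k_interest_places((2.2, 3.1), ["bank", "restaurant", "cinema"]))
```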

Video Compression using Characteristics of Wavelet Coefficients (웨이브렛 계수의 특성을 이용한 비디오 영상 압축)

  • 문종현;방만원
    • Journal of Broadcast Engineering
    • /
    • v.7 no.1
    • /
    • pp.45-54
    • /
    • 2002
  • This paper proposes a video compression algorithm using the characteristics of wavelet coefficients. The proposed algorithm provides a lower bit rate and a faster running time while guaranteeing reconstructed image quality as perceived by the human visual system. In this approach, each frame of the video sequence is decomposed into a pyramid structure of subimages with various resolutions, using the multiresolution capability of the discrete wavelet transform. Similarities between two neighboring frames are then obtained from the low-frequency subband, which contains the important information of an image, and motion information is extracted from the similarity criteria. Four region selection filters are designed according to the similarity criteria, and compression is carried out by encoding the coefficients in the preservation regions and replacement regions of the high-frequency subbands. The region selection filters classify the high-frequency subbands into preservation regions and replacement regions based on the similarity criteria, and the coefficients in the replacement regions are replaced by those of a reference frame or reduced to zero according to block-based similarities between the reference frame and successive frames. Encoding is carried out by quantizing and arithmetic-coding the wavelet coefficients in the preservation and replacement regions separately. The reference frame is updated at the lowest point when the curve of similarity rates shows a concave pattern. Simulation results show that the proposed algorithm provides a high compression ratio with adequate image quality. It also outperforms the previous algorithm of Milton in image quality, compression ratio, and running time, achieving a compression ratio of less than 0.2 bpp, a PSNR of 32 dB, and a running time of 10 ms for a standard video image of 352${\times}$240 pixels.
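
A rough sketch of the subband manipulation described above, using PyWavelets: the current frame's high-frequency (detail) subbands are either kept or replaced with the reference frame's coefficients depending on the similarity measured on the low-frequency subband. The frames, wavelet, and threshold are hypothetical, and the region selection filters, quantization, and arithmetic coding stages are omitted.

```python
# Rough sketch of similarity-driven replacement of high-frequency wavelet
# subbands using PyWavelets; the frames, wavelet, and threshold are made up,
# and the quantization / arithmetic coding stages in the abstract are omitted.
import numpy as np
import pywt

rng = np.random.default_rng(1)
reference = rng.random((240, 352))
current = reference + 0.01 * rng.random((240, 352))   # nearly identical frame

ref_coeffs = pywt.wavedec2(reference, "haar", level=2)
cur_coeffs = pywt.wavedec2(current, "haar", level=2)

# Similarity measured on the low-frequency (approximation) subband.
ref_ll, cur_ll = ref_coeffs[0], cur_coeffs[0]
similarity = 1.0 - np.linalg.norm(cur_ll - ref_ll) / np.linalg.norm(ref_ll)

SIM_THRESHOLD = 0.95
out_coeffs = [cur_coeffs[0]]
for ref_h, cur_h in zip(ref_coeffs[1:], cur_coeffs[1:]):
    if similarity >= SIM_THRESHOLD:
        # Replacement region: reuse the reference frame's detail coefficients,
        # so only the approximation subband needs to be encoded for this frame.
        out_coeffs.append(ref_h)
    else:
        # Preservation region: keep the current frame's detail coefficients.
        out_coeffs.append(cur_h)

reconstructed = pywt.waverec2(out_coeffs, "haar")
print(f"similarity={similarity:.3f}, "
      f"reconstruction error={np.abs(reconstructed - current).mean():.4f}")
```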