• Title/Summary/Keyword: Pattern Design (패턴 설계)


A Study on the Development of a Home Mess-Cleanup Robot Using an RFID Tag-Floor (RFID 환경을 이용한 홈 메스클린업 로봇 개발에 관한 연구)

  • Kim, Seung-Woo;Kim, Sang-Dae;Kim, Byung-Ho;Kim, Hong-Rae
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.11 no.2
    • /
    • pp.508-516
    • /
    • 2010
  • An autonomous and automatic home mess-cleanup robot is newly developed in this paper. Thus far, vacuum-cleaners have lightened the burden of household chores, but the operational labor they still require has been severe. Recently, a cleaning robot was commercialized to solve this problem, but it too was unsuccessful because it left the problem of mess-cleanup unsolved, namely the clean-up of large trash and the arrangement of newspapers, clothes, etc. Hence, we develop a new home mess-cleanup robot (McBot) to overcome this problem completely. The robot needs the capability for agile navigation and a novel manipulation system for mess-cleanup. The autonomous navigation system has to be controlled for full scanning of the living room and for precise tracking of the desired path. It must also be able to recognize its own absolute position and orientation and to distinguish the messed objects that are to be cleaned up from obstacles that should merely be avoided. The manipulator, which is not needed in a vacuum-cleaning robot, has the function of distinguishing the large trash that is to be cleaned from the messed objects that are to be arranged. It needs to use its discretion with regard to the form of the messed objects and to carry these objects properly to their destinations. In particular, in this paper, we describe our approach for achieving accurate localization using RFID for home mess-cleanup robots. Finally, the effectiveness of the developed McBot is confirmed through live tests of the mess-cleanup task.
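
The abstract does not give the localization algorithm itself; a minimal sketch of one common RFID tag-floor scheme, in which tag IDs map to known grid coordinates and heading is estimated from successive readings, is shown below. The tag layout, spacing, and function names are illustrative assumptions, not the McBot implementation.

```python
# Minimal sketch, assuming a tag-floor whose tag IDs map to known grid
# coordinates; the mapping, spacing, and heading estimate are illustrative,
# not the McBot implementation.
import math

TAG_SPACING_M = 0.5  # hypothetical grid pitch

# Hypothetical lookup table: tag ID -> (x, y) position on the floor [m]
TAG_POSITIONS = {
    "A1": (0.0, 0.0), "A2": (0.5, 0.0),
    "B1": (0.0, 0.5), "B2": (0.5, 0.5),
}


def localize(tag_id: str) -> tuple[float, float]:
    """Absolute position of the robot from the tag it is currently over."""
    return TAG_POSITIONS[tag_id]


def heading_deg(prev_tag: str, curr_tag: str) -> float:
    """Coarse orientation estimate from two consecutive tag readings."""
    (x0, y0), (x1, y1) = TAG_POSITIONS[prev_tag], TAG_POSITIONS[curr_tag]
    return math.degrees(math.atan2(y1 - y0, x1 - x0))


print(localize("B2"))           # (0.5, 0.5)
print(heading_deg("A1", "B2"))  # 45.0
```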

Sewer Decontamination Mechanism and Pipe Network Monitoring and Fault Diagnosis of Water Network System Based on System Analysis (시스템 해석에 기초한 하수관망 오염 매카니즘과 관망 모니터링 및 이상진단)

  • Kang, OnYu;Lee, SeungChul;Kim, MinJeong;Yu, SuMin;Yoo, ChangKyoo
    • Korean Chemical Engineering Research
    • /
    • v.50 no.6
    • /
    • pp.980-987
    • /
    • 2012
  • Nonpoint source pollution causes leaks and overflows, depending on the state of the sewer network, and aggravates the pollution load of the receiving water system as it is washed off into the sewer. Accordingly, the need for an efficient sewer monitoring system that can manage sewage flow rate, water quality, inflow/infiltration, and overflow has grown for sewer maintenance and the prevention of environmental pollution. However, sewer monitoring is not easy, since the sewer network is built underground and its structure and connections are complex. In this study, a sewer decontamination mechanism, together with pipe network monitoring and fault diagnosis of the water network system, is proposed on the basis of system analysis. First, the pollution removal pattern and the behavior of contaminants in the sewer pipe network are analyzed using a sewer process simulation program, the stormwater and wastewater management model for experts (XP-SWMM). Second, sewer network fault diagnosis is performed using multivariate statistical monitoring to track water quality in the sewer and to detect sewer leakage and bursts. Analysis of the decontamination mechanism under static and dynamic system conditions showed that the loads of total nitrogen (TN) and total phosphorus (TP) during rainfall are much greater than during dry weather, which aggravates the pollution load of the water system. Accordingly, the sewer outflow in the pipe network is analyzed with respect to the increased flow and pollutant inflow caused by rainfall. The proposed sewer network monitoring and fault diagnosis technique can be used effectively for continuous monitoring as well as for nonpoint source pollution management of urban watersheds.
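
The abstract names multivariate statistical monitoring without specifying the method; a common choice for this kind of fault detection is PCA with Hotelling's T² and SPE (Q) statistics, sketched below as an assumption rather than the authors' exact procedure.

```python
# Illustrative sketch only: PCA-based monitoring with Hotelling's T^2 and SPE,
# a common multivariate statistical monitoring scheme (not necessarily the
# method used in the paper).
import numpy as np


def fit_pca(X_normal: np.ndarray, n_components: int):
    """Fit PCA on normal-operation water-quality data (rows = samples)."""
    mean = X_normal.mean(axis=0)
    std = X_normal.std(axis=0)
    Z = (X_normal - mean) / std
    # Principal directions from the SVD of the scaled data
    _, s, Vt = np.linalg.svd(Z, full_matrices=False)
    P = Vt[:n_components].T                       # loadings
    lam = (s[:n_components] ** 2) / (len(Z) - 1)  # component variances
    return mean, std, P, lam


def monitoring_statistics(x_new: np.ndarray, mean, std, P, lam):
    """Return (T^2, SPE) for one new sample; large values flag a fault."""
    z = (x_new - mean) / std
    t = P.T @ z                        # scores
    t2 = float(np.sum(t ** 2 / lam))   # Hotelling's T^2
    residual = z - P @ t
    spe = float(residual @ residual)   # squared prediction error (Q)
    return t2, spe
```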

Influence of Column Aspect Ratio on the Hysteretic Behavior of Slab-Column Connection (슬래브-기둥 접합부의 이력거동에 대한 기둥 형상비의 영향)

  • Choi, Myung-Shin;Cho, In-Jung;Ahn, Jong-Mun;Shin, Sung-Woo
    • Journal of the Korea Concrete Institute
    • /
    • v.19 no.4
    • /
    • pp.515-525
    • /
    • 2007
  • This investigation presents the results of laboratory tests on four reinforced concrete flat-plate interior connections with the elongated rectangular column supports that are widely used in tall residential buildings. The purpose of this study is to evaluate the effect of the column aspect ratio (${\beta}_c = c_1/c_2$, the ratio of the column side length in the direction of lateral loading, $c_1$, to the side length perpendicular to it, $c_2$) on the hysteretic behavior under earthquake-type loading. The column aspect ratio was varied from 0.5 to 3 ($c_1/c_2 = 1/2,\;1/1,\;2/1,\;3/1$), and the column perimeter was held constant at 1,200 mm so that the nominal vertical shear strength $(V_c)$ was uniform. Other design parameters, namely the slab flexural reinforcement ratio $(\rho)$ and the concrete strength $(f_{ck})$, were kept constant at $\rho = 1.0\%$ and $f_{ck} = 40\;MPa$, respectively. A gravity shear load $(V_g)$ equal to 30 percent of the nominal vertical shear strength of the specimen $(0.3V_o)$ was applied. Experimental observations on the punching failure pattern, the peak lateral load and story drift ratio at punching failure, the stiffness degradation and energy dissipation in the hysteresis loops, and the steel and concrete strain distributions near the column support were examined and discussed with respect to the different column aspect ratios. The eccentric shear stress model of ACI 318-05 was evaluated against the experimental results, and the fractions of the unbalanced moment transferred by shear and by flexure in the design code were analyzed based on the test data.
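
For context, the eccentric shear stress model of ACI 318-05 evaluated here superposes direct punching shear and a fraction $\gamma_v$ of the unbalanced moment on the critical section; a standard statement of the model (taken from the code provisions, not reproduced from this paper) is:

$$v_u = \frac{V_u}{b_o d} + \frac{\gamma_v M_u\, c_{AB}}{J_c}, \qquad \gamma_v = 1 - \gamma_f, \qquad \gamma_f = \frac{1}{1 + \tfrac{2}{3}\sqrt{b_1/b_2}}$$

where $b_o$ is the perimeter of the critical section, $d$ the effective slab depth, $M_u$ the unbalanced moment, $c_{AB}$ the distance from the centroid of the critical section to the face where the stress is computed, $J_c$ a property of the critical section analogous to the polar moment of inertia, and $b_1$, $b_2$ the critical-section dimensions parallel and perpendicular to the direction of moment transfer.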

An Analysis of Elementary School Students' Interpretation of Data Characteristics by Cognitive Style (초등학생의 인지양식에 따른 자료해석 특성 분석)

  • Lim, Sung-Man;Son, Hee-Jung;Yang, Il-Ho
    • Journal of The Korean Association For Science Education
    • /
    • v.31 no.1
    • /
    • pp.78-98
    • /
    • 2011
  • The purpose of this study was to analyze the characteristics of elementary school students' interpretation of data according to cognitive style. The participants were sixth-grade elementary students who could use integrated inquiry process skills. They were divided into two groups, analytic and wholistic cognitive style, according to their responses on the Cognitive Style Analysis, and then performed a scientific data-interpretation activity. To collect data, participants recorded their results on the activity sheets, and the researcher videotaped the sessions and interviewed the participants after the activity to obtain additional data. The findings were as follows. First, the study analyzed the characteristics of data interpretation in different interpretation situations according to cognitive style; for example, in the intermediate state, analytic-cognitive-style students showed high achievement in identifying variables, while wholistic-cognitive-style students actively used prior knowledge to interpret data. Second, regarding the direction of data interpretation and the preference for data types, wholistic-cognitive-style students showed relatively strong perception of information through a top-down approach, whereas analytic-cognitive-style students usually used a bottom-up approach, gradually expanding detailed information toward an answer to the scientific question, and showed a preference for data presented in tables. Based on these results, this study aims to help establish data-interpretation strategies suited to learners' cognitive styles and proposes instructional designs that present data requiring various interpretation strategies in order to develop learners' data-interpretation ability.

A MVC Framework for Visualizing Text Data (텍스트 데이터 시각화를 위한 MVC 프레임워크)

  • Choi, Kwang Sun;Jeong, Kyo Sung;Kim, Soo Dong
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.2
    • /
    • pp.39-58
    • /
    • 2014
  • As the importance of big data and related technologies continues to grow in industry, visualizing the results of big data processing and analysis has come into the spotlight. Visualization gives people an effective and clear understanding of analysis results, and it also serves as the GUI (Graphical User Interface) that mediates communication between people and analysis systems. To ease development and maintenance, these GUI parts should be loosely coupled from the parts that process and analyze data, and implementing such a loosely coupled architecture calls for design patterns such as MVC (Model-View-Controller), which is intended to minimize coupling between the UI and the data-processing parts. Big data can be classified into structured and unstructured data, and visualizing structured data is relatively easy compared to unstructured data. Nevertheless, as the use and analysis of unstructured data has spread, visualization systems are usually developed anew for each project to overcome the limitations of traditional visualization systems designed for structured data. The situation is even harder for text data, which accounts for a large share of unstructured data, because the technologies for analyzing it, such as linguistic analysis, text mining, and social network analysis, are complex and not standardized. This makes it difficult to reuse the visualization system of one project in other projects; we assume the reason is a lack of commonality in the design of visualization systems with expansion to other systems in mind. In this research, we suggest a common information model for visualizing text data and propose a comprehensive, reusable framework for text visualization, TexVizu. We first survey representative research in text visualization and identify common elements and common patterns across various cases. We then review and analyze these elements and patterns from three viewpoints: structural, interactive, and semantic. On this basis we design an integrated model of text data that represents the elements needed for visualization. The structural viewpoint identifies structural elements of text documents, such as title, author, and body. The interactive viewpoint identifies the types of relations and interactions between text documents, such as posts, comments, and replies. The semantic viewpoint identifies semantic elements extracted by linguistic analysis of the text, represented as tags that classify entity types such as people, place or location, time, and event. We then extract and select common requirements for visualizing text data, categorized into four types: structure information, content information, relation information, and trend information. Each requirement type comprises the required visualization techniques, the data, and the goal (what the user wants to know). These are the key requirements for designing a framework in which the visualization system remains loosely coupled from the data-processing and analysis systems. Finally, we designed TexVizu, a common text visualization framework that is reusable and extensible across visualization projects by collaborating with various Text Data Loaders and Analytical Text Data Visualizers through common interfaces such as ITextDataLoader and IATDProvider. TexVizu comprises an Analytical Text Data Model, Analytical Text Data Storage, and an Analytical Text Data Controller; external components are specified only through the interfaces required to collaborate with the framework. As an experiment, we applied the framework to two text visualization systems: a social opinion mining system and an online news analysis system.
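
As a rough illustration of how such loosely coupled interfaces might look: the interface names ITextDataLoader and IATDProvider come from the abstract, but the method names, signatures, and data shapes below are assumptions for the sketch, not the framework's actual API.

```python
# Illustrative sketch only: ITextDataLoader / IATDProvider are named in the
# abstract, but the methods and data shapes here are assumptions.
from abc import ABC, abstractmethod
from dataclasses import dataclass, field


@dataclass
class AnalyticalTextData:
    """Integrated model: structural, interactive, and semantic elements."""
    structural: dict = field(default_factory=dict)   # e.g. title, author, body
    interactive: dict = field(default_factory=dict)  # e.g. post/comment/reply links
    semantic: dict = field(default_factory=dict)     # e.g. people/place/time/event tags


class ITextDataLoader(ABC):
    """Loads raw text documents from some source (file, crawler, DB, ...)."""
    @abstractmethod
    def load(self, source: str) -> list[str]: ...


class IATDProvider(ABC):
    """Turns raw documents into analytical text data for the model."""
    @abstractmethod
    def provide(self, documents: list[str]) -> AnalyticalTextData: ...


class AnalyticalTextDataController:
    """Controller in the MVC sense: wires loader and provider to views."""
    def __init__(self, loader: ITextDataLoader, provider: IATDProvider):
        self.loader = loader
        self.provider = provider

    def build_model(self, source: str) -> AnalyticalTextData:
        documents = self.loader.load(source)
        return self.provider.provide(documents)
```

A concrete visualizer would then read only from AnalyticalTextData, keeping the view layer independent of whichever loader or analyzer produced the data.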

Failure Behavior and Separation Criterion for Strengthened Concrete Members with Steel Plates (강판과 콘크리트 접착계면의 파괴거동 및 박리특성)

  • 오병환;조재열;차수원
    • Journal of the Korea Concrete Institute
    • /
    • v.14 no.1
    • /
    • pp.126-135
    • /
    • 2002
  • The plate bonding technique has been widely used for strengthening existing concrete structures, although it often suffers from premature failures such as interface separation and rip-off. This premature failure problem has not been well explored yet, especially in terms of the local failure mechanism around the interface at the plate ends. The purpose of the present study is therefore to identify the local failure of strengthened plates and to derive a separation criterion for the interface of the plates. To this end, a comprehensive experimental program was set up: double-lap pull-out tests representing pure shear and half-beam tests representing combined flexure and shear were performed. The main experimental parameters include plate thickness, adhesive thickness, and plate end arrangement. The strains along the longitudinal direction of the steel plates were measured, and the shear stresses were calculated from those measured strains. The effects of plate thickness, bonded length, and plate end treatment were also clarified from the test results. Nonlinear finite element analysis was performed and compared with the test results; the interface properties were modeled to represent the separation failure behavior of strengthened members, and the predicted cracking patterns as well as the maximum failure loads agree well with the test data. The relation between the maximum shear and normal stresses at the interface was derived to propose a separation failure criterion for strengthened members. The present study allows a more realistic analysis and design of flexural members externally strengthened with steel plates.
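
The abstract does not reproduce the derived criterion itself; separation criteria of this kind are often written as an interaction between the interfacial shear stress $\tau$ and the normal (peeling) stress $\sigma$ at the plate end, for instance an elliptical envelope of the form below, shown only as an illustrative generic shape and not as the relation derived in the paper:

$$\left(\frac{\tau_{max}}{\tau_u}\right)^2 + \left(\frac{\sigma_{max}}{\sigma_u}\right)^2 = 1$$

where $\tau_u$ and $\sigma_u$ denote the interfacial shear and tensile bond strengths; separation is predicted once the combined stress state at the plate end reaches the envelope.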

Development of the Risk Evaluation Model for Rear End Collision on the Basis of Microscopic Driving Behaviors (미시적 주행행태를 반영한 후미추돌위험 평가모형 개발)

  • Chung, Sung-Bong;Song, Ki-Han;Park, Chang-Ho;Chon, Kyung-Soo;Kho, Seung-Young
    • Journal of Korean Society of Transportation
    • /
    • v.22 no.6
    • /
    • pp.133-144
    • /
    • 2004
  • A model and a measure that can evaluate the risk of rear-end collision are developed. Most traffic accidents involve multiple causes, such as the human factor, the vehicle factor, and the highway element, at any given time, so these factors should be considered in analyzing the risk of an accident and in developing safety models. Although most risky situations and accidents on the road result from a driver's poor response to various stimuli, many researchers have modeled risk or accidents by analyzing only the stimuli, without considering the driver's response, and the reliability of those models has therefore been low. Thus, in developing the model, driver behaviors such as reaction time and deceleration rate are considered. In the past, most studies tried to analyze the relationship between risk and accidents directly, but because of the difficulty of identifying the directional relationships between these factors, they built models from indirect factors such as volume, speed, and so on. However, a closer look at the relationship between risk and accidents shows that they are linked by driver behavior: depending on the driver, the risk present in the road-vehicle system may be ignored or may draw the driver's attention. Therefore, an accident depends on how a driver handles risk, and what is more closely related to accident occurrence is not the risk itself but the risk as responded to by the driver. In this study, driver behaviors are therefore incorporated in the model, and three accident-related concepts are introduced to reflect them; safe stopping distance and accident occurrence probability are used for better understanding and more reliable modeling of the risk. An index representing the risk is also developed, based on measures used to evaluate noise levels, and for comparing risk across situations an equivalent risk level that accounts for intensity and duration is derived by means of a weighted average. Validation is performed with field surveys on an expressway in Seoul, where a test vehicle collected traffic flow data such as deceleration rate, speed, and spacing. Based on these data, the risk by section, lane, and traffic flow condition is evaluated and compared with accident data and traffic conditions; the evaluated risk levels correspond closely to the patterns of actual traffic conditions and accident counts. The model and method developed in this study can be applied in various fields, such as safety assessment of traffic flow, establishment of operation and management strategies for reliable traffic flow, and safety testing of control algorithms in advanced safety vehicles.
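
The abstract refers to safe stopping distance and a duration-weighted equivalent risk level without giving formulas; the sketch below uses the standard kinematic stopping-distance relation and a simple time-weighted average as stand-ins for the paper's actual definitions.

```python
# Illustrative sketch: the formulas below are standard/placeholder choices,
# not the definitions used in the paper.

def stopping_distance(speed_mps: float, reaction_time_s: float,
                      decel_mps2: float) -> float:
    """Distance covered during reaction plus braking: v*t_r + v^2 / (2a)."""
    return speed_mps * reaction_time_s + speed_mps ** 2 / (2.0 * decel_mps2)


def rear_end_margin(spacing_m: float, v_follow: float, v_lead: float,
                    reaction_time_s: float, a_follow: float, a_lead: float) -> float:
    """Positive margin means the follower can stop behind the leader."""
    d_follow = stopping_distance(v_follow, reaction_time_s, a_follow)
    d_lead = v_lead ** 2 / (2.0 * a_lead)   # leader is assumed already braking
    return spacing_m + d_lead - d_follow


def equivalent_risk_level(risks: list[float], durations_s: list[float]) -> float:
    """Duration-weighted average risk over an observation period."""
    total = sum(durations_s)
    return sum(r * d for r, d in zip(risks, durations_s)) / total
```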

Development of a High Heat Load Test Facility KoHLT-1 for a Testing of Nuclear Fusion Reactor Components (핵융합로부품 시험을 위한 고열부하 시험시설 KoHLT-1 구축)

  • Bae, Young-Dug;Kim, Suk-Kwon;Lee, Dong-Won;Shin, Hee-Yun;Hong, Bong-Guen
    • Journal of the Korean Vacuum Society
    • /
    • v.18 no.4
    • /
    • pp.318-330
    • /
    • 2009
  • A high heat flux test facility using a graphite heating panel, called KoHLT-1, was constructed and is presently in operation at the Korea Atomic Energy Research Institute. Its major purpose is to carry out thermal cycle tests to verify the integrity of HIP (hot isostatic pressing) bonded Be mockups, which were fabricated to develop HIP joining technology for bonding dissimilar metals, i.e., Be to CuCrZr and CuCrZr to SS316L, for the ITER (International Thermonuclear Experimental Reactor) first wall. KoHLT-1 consists of a graphite heating panel, a box-type test chamber with water-cooling jackets, an electrical DC power supply, a water-cooling system, an evacuation system, a He gas system, and diagnostics, all housed in an authorized laboratory with a special ventilation system for Be handling. The graphite heater is placed between two mockups, and the gap between the heater and each mockup is adjusted to $2{\sim}3\;mm$. We designed and fabricated several graphite heating panels to provide various heating areas depending on the tested mockups and to have electrical resistances of $0.2{\sim}0.5$ ohms during high-temperature operation. The heater is connected to an electrical DC power supply rated at 100 V/400 A. The heat flux is easily controlled by a pre-programmed control system consisting of a personal computer and a multifunction module. The heat fluxes on the two mockups are deduced from the flow rate and the coolant inlet/outlet temperatures by a calorimetric method. Thermal cycle tests of various Be mockups have been carried out; the reliability of KoHLT-1 for long-term operation at high heat flux was verified, and its broad applicability is promising.
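
The calorimetric estimate mentioned here follows the usual steady-state energy balance on the coolant; written explicitly with standard symbols (assumptions, since the paper's notation is not quoted in the abstract):

$$q'' = \frac{\dot{m}\,c_p\,(T_{out} - T_{in})}{A_h}$$

where $\dot{m}$ is the coolant mass flow rate, $c_p$ its specific heat, $T_{in}$ and $T_{out}$ the coolant inlet and outlet temperatures, and $A_h$ the heated surface area of the mockup.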

A Study on the Determination of Distribution Types of Point Rainfall Data and the Probable Rainfall Depth According to Project Life - Focused on Three Stations in Korea: Seoul, Pusan, and Taegu - (지점우량 자료의 분포형 설정과 내용안전년수에 따르는 확률강우량에 관한 고찰 - 국내 3개지점 서울, 부산 및 대구를 중심으로 -)

  • Lee, Won-Hwan;Lee, Gil-Chun;Jeong, Yeon-Gyu
    • Water for future
    • /
    • v.5 no.1
    • /
    • pp.27-36
    • /
    • 1972
  • This thesis studies the probable rainfall depth in three major areas of Korea: Seoul, Pusan, and Taegu. The purpose of the paper is to analyze rainfall in connection with the safe planning of hydraulic structures and with the project life; the methodology is the statistical treatment of the rainfall data from the three areas. The scheme of the paper is as follows. (1) Complementation of the rainfall data: the maximum values among those obtained by three methods (the Fourier series method, the trend diagram method, and the mean value method) were selected, and with them the missing rainfall data were filled in from the standpoint of preventing calamities. (2) Statistical treatment of the data: the data were ordered from smallest to largest, transformed by $\log$, $\sqrt{x}$, $\sqrt[3]{x}$, $\sqrt[4]{x}$, and $\sqrt[5]{x}$, and their statistics were computed on an electronic computer. (3) Examination and determination of the optimum distribution types: the distribution types of the rainfall data were examined by the $\chi^2$ test, and part of the data was rejected in order to obtain normal rainfall distribution types; in this way the optimum distribution types were determined. (4) Computation of the probable rainfall depth for the project life: the interrelation between the return period and the project life was studied, and the probable rainfall depth corresponding to the project life is presented. In conclusion, we established the optimum distribution types of the rainfall depths, formulated the optimum distributions, and presented a chart of the probable rainfall depth in terms of the factor of safety and the project life.
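
The interrelation between return period and project life examined in item (4) is conventionally expressed through the risk of at least one exceedance during the project life; the standard relation (stated here for context, not quoted from the paper) is:

$$R = 1 - \left(1 - \frac{1}{T}\right)^{n}$$

where $T$ is the return period of the design rainfall, $n$ the project life in years, and $R$ the probability that the design rainfall is exceeded at least once during the project life.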

Measurement of Backscattering Coefficients of Rice Canopy Using a Ground Polarimetric Scatterometer System (지상관측 레이다 산란계를 이용한 벼 군락의 후방산란계수 측정)

  • Hong, Jin-Young;Kim, Yi-Hyun;Oh, Yi-Sok;Hong, Suk-Young
    • Korean Journal of Remote Sensing
    • /
    • v.23 no.2
    • /
    • pp.145-152
    • /
    • 2007
  • The polarimetric backscattering coefficients of a paddy rice field, an experimental plot belonging to the National Institute of Agricultural Science and Technology in Suwon, were measured with ground-based polarimetric scatterometers at 1.8 and 5.3 GHz throughout a growth season, from transplanting to harvest (May to October 2006). The scatterometers consist of a vector network analyzer with a time-gating function and a polarimetric antenna set, and are calibrated with a single-target calibration technique using a trihedral corner reflector to obtain VV-, HV-, VH-, and HH-polarized backscattering coefficients from the measurements. The polarimetric backscattering coefficients were measured at incidence angles of $30^{\circ},\;40^{\circ},\;50^{\circ}\;and\;60^{\circ}$, with 30 independent samples for each incidence angle at each frequency. During the measurement period, ground truth data, including fresh and dry biomass, plant height, stem density, leaf area, specific leaf area, and moisture content, were also collected for each measurement. The temporal variations of the measured backscattering coefficients as well as of the measured plant height, LAI (leaf area index), and biomass were analyzed, and the measured polarimetric backscattering coefficients were compared with the rice growth parameters. The measured plant height increases monotonically, whereas the measured LAI increases only until the ripening period and decreases afterwards. The measured backscattering coefficients are fitted with polynomial expressions as functions of growth age, LAI, and plant height for each polarization, frequency, and incidence angle. At larger incidence angles, the correlation of the L-band signatures with rice growth was higher than that of the C-band signatures, and the HH-polarized backscattering coefficients were found to be more sensitive than the VV-polarized coefficients to growth age and the other input parameters. To derive functions for estimating rice growth, it is necessary to divide the data according to growth periods marked by qualitative changes such as panicle initiation, flowering, or heading.
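
The polynomial fits described above can be reproduced in form with an ordinary least-squares fit of the backscattering coefficient against a growth variable such as LAI; the sketch below uses made-up placeholder values rather than the paper's measurements, and shows only the fitting procedure.

```python
# Illustrative sketch: the measurements below are made-up placeholders,
# not the paper's data; only the fitting procedure is shown.
import numpy as np

# Hypothetical HH-polarized backscattering coefficients (dB) vs. LAI
lai = np.array([0.5, 1.2, 2.0, 3.1, 4.0, 4.6, 3.8])
sigma0_hh_db = np.array([-18.0, -14.5, -12.0, -10.2, -9.0, -8.4, -8.9])

# Second-order polynomial fit: sigma0 = a*LAI^2 + b*LAI + c
coeffs = np.polyfit(lai, sigma0_hh_db, deg=2)
fit = np.poly1d(coeffs)

print("fitted coefficients (a, b, c):", coeffs)
print("predicted sigma0 at LAI = 2.5:", fit(2.5), "dB")
```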