• Title/Summary/Keyword: block graph

Introduction to System Modeling and Verification of Digital Phase-Locked Loop (디지털 위상고정루프의 시스템 모델링 및 검증 방법 소개)

  • Shinwoong, Kim
    • Journal of IKEEE / v.26 no.4 / pp.577-583 / 2022
  • After linear phase-domain modeling of the phase-locked loop is performed and the design parameters of each block are set with the stability of the system in mind, Verilog-HDL-based modeling can be carried out to confirm fast operation characteristics. This paper proposes Verilog-HDL modeling that includes DCO noise and the DTC nonlinear characteristic. Once the modeling is complete, a time-domain transient simulation can be run to check the feasibility and functionality of the proposed PLL system, and the phase noise of the functional-model-based system design can then be verified against the ideal phase noise graph. For the same simulated time span (6 us), the Verilog-HDL-based model (1.43 seconds) ran 484 times faster than the analog transistor-level design (692 seconds) implemented in a TSMC 0.18-㎛ process.
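
As a rough illustration of the behavioral-modeling idea (in Python rather than the paper's Verilog-HDL), the sketch below steps a first-order phase-domain loop with additive DCO noise and a toy cubic DTC nonlinearity; every gain and noise figure is a hypothetical placeholder, not a value from the paper.

```python
import numpy as np

# Toy discrete-time phase-domain PLL model (all values hypothetical).
K_LOOP = 0.05          # first-order loop gain per reference cycle (assumed)
DTC_INL = 0.02         # toy cubic DTC nonlinearity coefficient (assumed)
DCO_NOISE_RMS = 1e-3   # DCO phase noise accumulated per cycle [rad] (assumed)

rng = np.random.default_rng(0)
phase_err = 1.0        # initial phase error [rad]

for _ in range(2000):
    # The DTC nonlinearity distorts the measured phase error
    measured = phase_err + DTC_INL * phase_err**3
    # First-order loop correction plus the DCO's random phase walk
    phase_err += -K_LOOP * measured + rng.normal(0.0, DCO_NOISE_RMS)

print(f"residual |phase error| = {abs(phase_err):.4g} rad")
```

Such a model trades transistor-level accuracy for speed, which is the trade-off the reported 484x simulation speed-up reflects.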

Applicability of Huff Model & ABM Method for Discharge Capacity of Sewer Pipe (하수관거 통수능 해석을 위한 Huff 모형과 ABM 법의 적용성 분석)

  • Hyun, Inhwan;Jeon, SeungHui;Kim, Dooil
    • Journal of Korean Society of Water and Wastewater / v.36 no.4 / pp.229-237 / 2022
  • Sewer capacity design in South Korea has been based on the Huff model or the rational equation and has often failed to determine optimal capacity, resulting in frequent urban flooding or over-sizing. A time distribution of rainfall (i.e., the Huff or ABM method) can be used instead of a rainfall hyetograph obtained from statistical analysis of past rainfalls. In this study, the Huff and ABM methods, which predict the time distribution of rainfall intensity and are widely used with the SWMM to calculate sewer pipe discharge capacity, were compared with the standard rainfall intensity hyetograph of Seoul. For rainfall durations of 30 to 180 minutes, the rainfall intensity calculated by the Huff model tended to fall below the standard rainfall intensity during the initial 5-10 minutes, which would produce an under-design of 10% to 30% or more. In addition, over the interval from the end of those initial 5-10 minutes to the end of the rainfall duration, the Huff model produced intensities larger than those from the standard rainfall intensity equation, which would result in an over-design of 10% to 30%. For relatively long durations of 360 minutes (6 hours) to 1,440 minutes (24 hours), the Huff model showed 60 to 90% lower rainfall intensities in the early stages of rainfall, but the under-design problem resolved as the rainfall duration elapsed. By contrast, the alternating block method (ABM) matched the standard rainfall intensity hyetograph of Seoul accurately over the entire period for every assumed rainfall duration.
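
The alternating block method itself is standard: cumulative depths are read off an IDF curve at multiples of the time step, differenced into blocks, and the blocks are rearranged alternately around a central peak. A minimal Python sketch follows; the IDF formula is a generic placeholder, not Seoul's standard rainfall intensity equation.

```python
# Alternating block method (ABM) sketch with a placeholder IDF curve.
def idf_intensity(duration_min: float) -> float:
    """Hypothetical IDF relation: mean intensity [mm/h] over a duration [min]."""
    return 6000.0 / (duration_min + 40.0)

def abm_hyetograph(total_min: int, dt_min: int) -> list[float]:
    n = total_min // dt_min
    # Cumulative depth [mm] over the first k*dt minutes, from the IDF curve
    cum = [idf_intensity(k * dt_min) * (k * dt_min) / 60.0 for k in range(1, n + 1)]
    blocks = sorted(
        [cum[0]] + [cum[k] - cum[k - 1] for k in range(1, n)], reverse=True
    )
    # Place the largest block in the middle, then alternate left and right
    ordered = [0.0] * n
    mid = n // 2
    offsets = [0] + [s * i for i in range(1, n) for s in (-1, 1)]
    for depth, off in zip(blocks, offsets):
        ordered[mid + off] = depth
    return ordered  # rainfall depth per dt interval [mm]

print(abm_hyetograph(total_min=180, dt_min=10))
```

Because every block is taken directly from the IDF curve, the ABM hyetograph reproduces the design intensity for every sub-duration, consistent with the close match to Seoul's standard hyetograph reported above.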

A Case Study of Artificial Intelligence Education for Graduate School of Education (교육 대학원에서의 인공지능 교육 사례)

  • Han, Kyujung
    • Proceedings of the Korea Association of Information Education Conference (한국정보교육학회 학술대회논문집) / 2021.08a / pp.401-409 / 2021
  • This study is a case study of an artificial intelligence education course in a graduate school of education. The main educational contents consisted of understanding and practicing machine learning, data analysis, hands-on artificial intelligence using Entry, and artificial intelligence with physical computing. A survey on the educational effect after the course showed that the students ranked the Entry AI blocks and the Blacksmith board as a physical computing tool as their first choices to apply in the elementary classroom. In addition, the data analysis area was effective in linking mathematics data and graph education. As a physical computing tool, Husky Lens, with its built-in image processing functions, is useful and extensible for self-driving car maker education. Suggestions for desirable AI education include offering courses by level and reinforcing education on data collection and analysis.

The Effects of Occupational Therapy Intervention Using Fully Immersive Virtual Reality Device on Upper Extremity Function of Patients With Chronic Stroke: Case Study (완전 몰입형 가상현실 기기를 이용한 작업치료 중재가 만성 뇌졸중 환자의 상지기능에 미치는 영향: 사례연구)

  • Han, Soul;Yoo, Eun-Young
    • Therapeutic Science for Rehabilitation / v.7 no.2 / pp.17-27 / 2018
  • Objective: The purpose of this study was to investigate the effect of an occupational therapy intervention using a fully immersive virtual reality device on the upper extremity function of patients with chronic stroke. Methods: This study used a single-subject (ABA) design. The subject was a chronic stroke patient with left lateral deviation. Four baseline sessions, 12 intervention sessions, and 4 return-to-baseline sessions were conducted, for a total of 20 sessions over 10 weeks. The occupational therapy intervention with the fully immersive virtual reality device was applied for 30 minutes per session. BBT and WMFT evaluations were performed at each session and the results were plotted as a line graph. Results: The patient's upper limb function improved. In the return-to-baseline phase, the effect of treatment was confirmed after removal of the intervention, but no significant further changes were observed. Conclusion: Occupational therapy intervention with a fully immersive virtual reality device was found to be effective for upper limb function in chronic stroke patients. However, the maintenance effect of the treatment was not substantial, so an easy-to-use home intervention program needs to be developed.

An Emulation System for Efficient Verification of ASIC Design (ASIC 설계의 효과적인 검증을 위한 에뮬레이션 시스템)

  • 유광기;정정화
    • Journal of the Korean Institute of Telematics and Electronics C / v.36C no.10 / pp.17-28 / 1999
  • In this paper, an ASIC emulation system called ACE (ASIC Emulator) is proposed. It can produce a prototype of the target ASIC in a short time and verify the function of the ASIC circuit immediately. ACE consists of emulation software, comprising an EDIF reader, a library translator, a technology mapper, a circuit partitioner, and an LDF generator, and emulation hardware, including an emulation board and a logic analyzer. Technology mapping consists of three steps: circuit partitioning and extraction of logic functions, minimization of logic functions, and grouping of logic functions. During these procedures, the number of basic logic blocks and the maximum level are minimized by assigning outputs to the same block so that product terms and input variables are shared as much as possible. The circuit partitioner obtains chip-level netlists satisfying constraints on the routing structure of the emulation board as well as on the architecture of the FPGA chip. A new partitioning algorithm is proposed whose objective function is the minimization of the number of interconnections among FPGA chips and among groups of FPGA chips. The routing structure of the emulation board takes advantage of complete-graph and partial-crossbar structures in order to minimize the interconnection delay between FPGA chips regardless of circuit size. The logic analyzer displays the waveforms of user-designated probe signals on a PC monitor. In order to evaluate the performance of the proposed emulation system, a video quad-splitter, a commercial ASIC, was implemented on the emulation board. Experimental results show that it operated in real time at 14.3 MHz and functioned perfectly.
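
Minimizing interconnections among FPGA chips is an instance of min-cut netlist partitioning. The paper's algorithm is not reproduced here; as a generic illustration, the following Python sketch performs one Kernighan-Lin-style improvement pass on a two-way partition.

```python
# Generic single-pass min-cut improvement for a two-way netlist partition.
# A textbook Kernighan-Lin-style sketch, not the paper's algorithm.
def cut_size(edges, side):
    """Number of nets crossing the partition boundary."""
    return sum(1 for u, v in edges if side[u] != side[v])

def improve_partition(nodes, edges, side):
    """Swap a node pair across the cut whenever the swap reduces the cut."""
    best = cut_size(edges, side)
    for u in nodes:
        for v in nodes:
            if side[u] == side[v]:
                continue
            side[u], side[v] = side[v], side[u]       # trial swap
            trial = cut_size(edges, side)
            if trial < best:
                best = trial                           # keep the improvement
            else:
                side[u], side[v] = side[v], side[u]   # revert
    return best

nodes = ["a", "b", "c", "d", "e", "f"]
edges = [("a", "b"), ("a", "c"), ("b", "c"), ("d", "e"), ("d", "f"), ("c", "d")]
side = {"a": 0, "b": 1, "c": 0, "d": 1, "e": 0, "f": 1}
print(improve_partition(nodes, edges, side), side)  # cut shrinks from 4 to 1
```

Production partitioners add FPGA pin and capacity constraints on top of this objective, as the abstract's routing-structure constraints suggest.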

Development of The Safe Driving Reward System for Truck Digital Tachograph using Hyperledger Fabric (하이퍼레저 패브릭을 이용한 화물차 디지털 운행기록 단말기의 안전운행 보상시스템 구현)

  • Kim, Yong-bae;Back, Juyong;Kim, Jongweon
    • Journal of Internet Computing and Services / v.23 no.3 / pp.47-56 / 2022
  • The safe driving reward system aims to reduce the loss of life and property by reducing the occurrence of accidents: it motivates safe driving and encourages active participation by providing direct rewards to vehicle drivers who drive safely. Whereas the existing digital tachograph aims to limit dangerous driving by recording the driving status of the vehicle, the safe driving reward system is a complementary measure that increases the accident-prevention effect by inducing safe driving with a financial reward. In other words, in areas where accidents due to speeding are frequent, a direct reward is provided to motivate safe driving and prevent traffic accidents when safe-driving instructions, such as complying with the speed limit, maintaining distance between vehicles, and driving in designated lanes, are followed. Since these safe-driving data and reward histories must be managed transparently and safely, the reward evidence and histories were built on the permissioned (closed) blockchain Hyperledger Fabric. However, while a blockchain system guarantees transparency and safety, its low data processing speed is a problem. In this study, sequential block generation was as slow as 10 TPS (transactions per second); after applying an acceleration function, a high-performance network of 1,000 TPS or more was implemented.
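
As a loose illustration of the reward layer (the paper's Hyperledger Fabric chaincode is not shown here), the Python sketch below scores driving records against the three safe-driving rules mentioned: speed compliance, headway maintenance, and designated-lane driving. Every field name and threshold is an assumption for the example.

```python
from dataclasses import dataclass

@dataclass
class DrivingRecord:
    speed_kmh: float
    speed_limit_kmh: float
    headway_s: float            # time gap to the vehicle ahead [s]
    in_designated_lane: bool

def reward_points(records: list[DrivingRecord]) -> int:
    """One point per record in which all three safe-driving rules are met."""
    return sum(
        1
        for r in records
        if r.speed_kmh <= r.speed_limit_kmh
        and r.headway_s >= 2.0              # assumed minimum safe headway
        and r.in_designated_lane
    )

trip = [
    DrivingRecord(78, 80, 2.5, True),       # compliant: earns a point
    DrivingRecord(92, 80, 1.2, True),       # speeding and tailgating: no point
]
print(reward_points(trip))  # -> 1
```

On the ledger side, one common way to push a permissioned network past the ~10 TPS of naive sequential submission is to batch many such scored records into a single transaction, though the specific acceleration function used in the paper is not detailed in the abstract.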

Design and Implementation of MongoDB-based Unstructured Log Processing System over Cloud Computing Environment (클라우드 환경에서 MongoDB 기반의 비정형 로그 처리 시스템 설계 및 구현)

  • Kim, Myoungjin;Han, Seungho;Cui, Yun;Lee, Hanku
    • Journal of Internet Computing and Services / v.14 no.6 / pp.71-84 / 2013
  • Log data, which record the multitude of information created when operating computer systems, are utilized in many processes, from computer system inspection and process optimization to customized user optimization. In this paper, we propose a MongoDB-based unstructured log processing system in a cloud environment for processing the massive amounts of log data generated by banks. Most of the log data generated during banking operations come from handling clients' business. Therefore, in order to gather, store, categorize, and analyze the log data generated while processing a client's business, a separate log data processing system needs to be established. However, in existing computing environments it is difficult to realize the flexible storage expansion needed for massive amounts of unstructured log data and to execute the many functions required to categorize and analyze them. Thus, in this study, we use cloud computing technology to realize a cloud-based log data processing system for unstructured log data that are difficult to handle with the analysis tools and management systems of existing computing infrastructure. The proposed system uses an IaaS (Infrastructure as a Service) cloud environment to provide flexible expansion of computing resources, including storage space and memory, under conditions such as extended storage or a rapid increase in log data. Moreover, to overcome the processing limits of existing analysis tools when real-time analysis of the aggregated unstructured log data is required, the proposed system includes a Hadoop-based analysis module for quick and reliable parallel-distributed processing of the massive amounts of log data. Furthermore, because HDFS (Hadoop Distributed File System) stores the aggregated log data in replicated block units, the proposed system offers automatic restore functions that allow it to continue operating after recovering from a malfunction. Finally, by establishing a distributed database using NoSQL-based MongoDB, the proposed system provides a method of effectively processing unstructured log data. Relational databases such as MySQL have rigid schemas that are inappropriate for processing unstructured log data, and such strict schemas make it difficult to expand across nodes when rapidly growing data must be distributed over many nodes. NoSQL does not provide the complex computations that relational databases offer, but it can easily expand the database through node dispersion when the amount of data increases rapidly; it is a non-relational database with a structure appropriate for processing unstructured data. NoSQL data models are usually classified as key-value, column-oriented, and document-oriented types. Of these, the proposed system uses MongoDB, the representative document-oriented data model with a free schema structure. MongoDB is adopted because its flexible schema makes it easy to process unstructured log data, it facilitates node expansion when the amount of data is increasing rapidly, and it provides an Auto-Sharding function that automatically expands storage. The proposed system is composed of a log collector module, a log graph generator module, a MongoDB module, a Hadoop-based analysis module, and a MySQL module. When the log data generated over the entire client business process of each bank are sent to the cloud server, the log collector module collects and classifies the data according to log type and distributes them to the MongoDB module and the MySQL module. The log graph generator module generates the results of the log analysis performed by the MongoDB module, the Hadoop-based analysis module, and the MySQL module, per analysis time and type of the aggregated log data, and provides them to the user through a web interface. Log data that require real-time analysis are stored in the MySQL module and served in real time by the log graph generator module. The aggregated log data per unit time are stored in the MongoDB module and plotted as graphs according to the user's various analysis conditions. The aggregated log data in the MongoDB module are processed in parallel-distributed fashion by the Hadoop-based analysis module. A comparative evaluation of log data insertion and query performance against a system that uses only MySQL demonstrates the proposed system's superiority. Moreover, an optimal chunk size is confirmed through a MongoDB log data insert performance evaluation over various chunk sizes.
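
The schema-free storage idea is easy to see with MongoDB's document model. The following Python sketch (using pymongo; database, collection, and field names are hypothetical) inserts heterogeneous log documents into one collection and runs the kind of per-type aggregation the log graph generator module would issue.

```python
from datetime import datetime, timezone
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
logs = client["bank_logs"]["events"]        # hypothetical database/collection

# Unstructured logs: documents in the same collection may carry different fields.
logs.insert_many([
    {"ts": datetime.now(timezone.utc), "type": "login", "branch": "A01"},
    {"ts": datetime.now(timezone.utc), "type": "transfer", "amount": 150000,
     "channel": "mobile"},
])

# Count events per log type, grouped server-side.
for row in logs.aggregate([{"$group": {"_id": "$type", "n": {"$sum": 1}}}]):
    print(row["_id"], row["n"])
```

A rigid relational schema would need a migration for each new field, whereas the document model absorbs new log shapes as they appear, which is the property the abstract leans on.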

Mechanical Characteristics of the Rift, Grain and Hardway Planes in Jurassic Granites, Korea (쥬라기 화강암류에서 발달된 1번 면, 2번 면 및 3번 면의 역학적 특성)

  • Park, Deok-Won
    • Korean Journal of Mineralogy and Petrology / v.33 no.3 / pp.273-291 / 2020
  • The strength characteristics of the three orthogonal splitting planes, known as the rift, grain, and hardway planes in granite quarries, were examined. R, G, and H specimens were obtained from block samples of Jurassic granites in the Geochang and Hapcheon areas; the long axes of these three specimens are perpendicular to each of the three planes. First, a chart showing the scaling characteristics of the three graphs related to the uniaxial compressive strengths of the R, G, and H specimens was made. In order of increasing strength, the graphs for the three specimens are arranged H < G < R. The angles of inclination of the graphs, which suggest the degree of uniformity of the texture within each specimen, were compared; the angles for the H specimens (θH, 24.0°~37.3°) are the lowest among the three. Second, the scaling characteristics of the three graphs for RG, GH, and RH specimens, each representing the combination of the mean compressive strengths of two specimens, were derived. These three graphs, taking various N-shaped forms, are arranged in the order GH < RH < RG. Third, a correlation chart between the strength difference (Δσt) and the angle of inclination (θ) was made; the two parameters follow an exponential correlation with an exponent (λ) of -0.003. In both granites, the angle of inclination (θRH) of the RH graph is the lowest. Fourth, six types of charts were made showing the correlations among the three kinds of compressive strengths of the three specimens and the five parameters of the two sets of microcracks aligned parallel to the compressive load applied to each specimen. From these charts for the Geochang and Hapcheon granites, the mean value (0.877) of the correlation coefficients (R²) for total density (Lt), along with those for frequency (N, 0.872) and density (ρ, 0.874), is the highest. In addition, the mean value (0.829) of the correlation coefficients associated with the mean compressive strengths is higher than those for the minimum (0.768) and maximum (0.804) compressive strengths of the three specimens. Fifth, the distributional characteristics of the Brazilian tensile strengths, measured in directions parallel to the above two sets of microcracks in the three specimens from Geochang granite, were derived. From the related chart, the three graphs of these tensile strengths for the R, G, and H specimens show the order H(R1+G1) < G(R2+H1) < R(R1+G1). The ordering of the three tensile-strength graphs is consistent with that of the compressive-strength graphs; therefore, the compressive strengths of the three specimens are proportional to the three types of tensile strengths. Sixth, the correlation coefficients between the three tensile strengths at each cumulative number (N=1~10) from the above three graphs and the five parameters corresponding to each graph were derived. The mean values of the correlation coefficients for each parameter from the 10 correlation charts increase in the order density (0.763) < total length (0.817) < frequency (0.839) < mean length (Lm, 0.901) ≤ median length (Lmed, 0.903). Seventh, correlation charts between the compressive strengths and the tensile strengths of the three specimens were made; they were divided into nine types based on the three kinds of compressive strengths and the five groups (A~E) of tensile strengths. From the related charts, for the mean and maximum compressive strengths (though not the minimum), the correlation coefficient increases rapidly as the tensile strength increases.
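
The reported exponential relation between the strength difference (Δσt) and the angle of inclination (θ) can be recovered with ordinary nonlinear least squares. The Python sketch below shows the procedure on synthetic placeholder points chosen to be roughly consistent with λ ≈ -0.003; they are not the paper's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def model(theta, a, lam):
    """Exponential relation: delta_sigma = a * exp(lam * theta)."""
    return a * np.exp(lam * theta)

theta = np.array([24.0, 30.0, 37.0, 45.0, 60.0])   # inclination angle [deg]
dsigma = np.array([52.0, 51.1, 50.0, 48.8, 46.7])  # synthetic strength differences

(a, lam), _ = curve_fit(model, theta, dsigma, p0=(55.0, -0.003))
print(f"fitted exponent lambda = {lam:.4f}")       # ~ -0.003 by construction
```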

Studies on Genetic Analysis by the Diallel Crosses in $F_2$ Generation of Cowpea (Vigna sinensis Savi.) (동부 Diallel Cross $F_2$ 세대의 유전분석에 관한 연구)

  • Kim, J.H.;Ko, M.S.;Chang, K.Y.
    • Korean Journal of Crop Science / v.28 no.2 / pp.216-226 / 1983
  • Genetic studies on the $F_2$ generation of a set of half-diallel crosses involving six cowpea varieties were conducted using a randomized block design with three replications to determine combining ability, gene action, and the relationships between parents and their $F_2$ hybrids. Twelve agronomic characters were observed: days to flowering, days from flowering to maturity, days to maturity, diameter of stem, length of internode, number of branches per plant, length of pod, number of pods per plant, number of grains per pod, number of grains per plant, 100-grain weight, and grain weight per plot. The $F_2$ generation of this diallel set of crosses was analysed for each character according to the method of Jinks and Hayman. The results obtained are summarized as follows: 1. Vr-Wr graphical analyses: seven characters, namely days to flowering, number of branches per plant, length of pod, number of pods per plant, number of grains per plant, 100-grain weight, and grain weight per plot, appeared to be partially dominant, while overdominance was found for days from flowering to maturity, days to maturity, length of internode, and number of grains per pod. Diameter of stem indicated partial dominance near complete dominance. 2. Estimates of genetic variance components: in the degree of dominance, eight characters, namely days to flowering, days from flowering to maturity, days to maturity, length of internode, number of pods per plant, number of grains per pod, number of grains per plant, and grain weight per plot, were larger than 1. The characters days from flowering to maturity, number of branches per plant, and number of grains per plant were found to have negative values for the degree of mean dominance ($H_1$/D), unlike the other characters. On the other hand, the apparent asymmetry of dominant and recessive alleles ($H_2$/$4H_1$) produced comparatively low estimates for days from flowering to maturity, length of internode, number of branches per plant, and number of grains per pod. 3. Analyses of combining ability: the mean square values of GCA (general combining ability) appeared to be more important than those of SCA (specific combining ability) for most characters; among them, grain weight per plot showed the highest mean square value in both GCA and SCA. 4. Effects of combining ability: variety J78 showed the highest GCA effects in five characters (days to flowering, days to maturity, number of pods per plant, number of grains per plant, and grain weight per plot). SCA effects differed among parents, characters, and crosses, but the crosses TVu 1857 $\times$ TVu 2885 and TVu 2702 $\times$ J78 were shown to have high SCA effects on yield.
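
For reference, the Vr-Wr quantities in a Hayman-style diallel analysis are the within-array variance (Vr) and the covariance of each array with the parental values (Wr). The Python sketch below computes them from a small synthetic half-diallel table of $F_2$ means; the numbers are illustrative only.

```python
import numpy as np

# table[i][j] = F2 mean of the cross between parents i and j
# (diagonal entries are the parents themselves); values are synthetic.
table = np.array([
    [10.0, 11.2, 12.1, 11.5],
    [11.2, 13.0, 12.8, 12.2],
    [12.1, 12.8, 14.5, 13.4],
    [11.5, 12.2, 13.4, 12.6],
])
parents = np.diag(table)

for r in range(table.shape[0]):
    vr = np.var(table[r], ddof=1)                 # within-array variance
    wr = np.cov(table[r], parents, ddof=1)[0, 1]  # array-parent covariance
    print(f"array {r}: Vr = {vr:.3f}  Wr = {wr:.3f}")
```

Plotting Wr against Vr and inspecting where the regression line cuts the Wr axis is what distinguishes partial dominance (positive intercept) from overdominance (negative intercept) in the analyses summarized above.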
