• Title/Summary/Keyword: STEP-Based Data Model

Numerical Simulation of Heat Transfer in Chip-in-Board Package (Chip-in-Board 패키지의 열전달 해석)

  • Park, Joon Hyoung;Shim, Hee Soo;Kim, Sun Kyoung
    • Transactions of the Korean Society of Mechanical Engineers B / v.37 no.1 / pp.75-79 / 2013
  • Demands for semiconductor devices are dramatically increasing, and advancements in fabrication technology are allowing a step-up in the number of devices per unit area. As a result, semiconductor devices require higher heat dissipation, and thus, cooling solutions have become important for guaranteeing their operational reliability. In particular, in chip-in-board packages, in which chips and passives are embedded in the substrates for efficient device layout, heat dissipation is of greater importance. In this study, a thermal model for layers of different materials has been proposed, and then, the heat transfer has been simulated by imposing a set of appropriate boundary conditions. Heat generation can be predicted based on the results, which will be utilized as practical data for actual package design.
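
The layered-package idea above can be sketched as a series thermal-resistance calculation; the layer names, dimensions, and material properties below are illustrative assumptions, not values from the paper:

```python
# Minimal 1D steady-state sketch: heat conducted through stacked package
# layers modeled as thermal resistances in series (illustrative values only).

layers = [
    # (name, thickness [m], conductivity [W/(m*K)])
    ("die",       0.3e-3,  150.0),   # silicon
    ("adhesive",  0.05e-3, 1.0),
    ("substrate", 1.0e-3,  0.8),     # FR-4-like board
]

area = 10e-3 * 10e-3          # 10 mm x 10 mm footprint [m^2]
power = 2.0                   # dissipated power [W]

# Series thermal resistance: R = sum(t_i / (k_i * A))
r_total = sum(t / (k * area) for _, t, k in layers)
delta_t = power * r_total     # temperature rise across the stack [K]

print(f"R_total = {r_total:.2f} K/W, dT = {delta_t:.1f} K")
```

A full simulation as in the paper would replace this lumped model with a 3D conduction solve under convective boundary conditions, but the resistance network is the usual first sanity check.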

The Effect of Payment Method of Community Medical Provider on Medical Care Use of Community Residents (지역사회 의료공급자의 지불보상체계상의 특징이 지역사회 주민의 의료이용에 미치는 영향: 미국사례분석)

  • Lim, Jae-Young
    • Health Policy and Management / v.15 no.2 / pp.16-36 / 2005
  • Due to the asymmetry of information between doctor and patient, it has long been believed that doctors can affect patients' decisions to purchase medical care. Based on this notion, the doctor's reimbursement method has been suggested as an effective policy device for improving the efficiency of patients' medical care use, by way of its effect on the doctor's practice pattern. Using the Community Tracking Study (CTS) household and physician data sets, which include not only various information on patients' medical care use but also doctors' practice arrangements and sources of practice revenue, this paper investigates the effect of community doctors' reimbursement methods on community patients' medical care use, controlling for patients' socio-demographic characteristics and community doctors' practice types. In estimating the econometric model, the endogeneity of individual health insurance purchase was corrected with an instrumental-variables approach, and because of a sample selection problem, Heckman's two-step estimation method was used to strengthen the robustness of the estimates. The empirical results show that as the average share of community doctors' practice revenue determined by prospective methods increases, community patients' total out-of-pocket medical cost decreases. These results suggest that as doctors' practice revenues come to be determined mainly by prospective methods, such as capitation, doctors become more conscious of practice costs, which affects their practice patterns and in turn reduces their patients' use of medical care.
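
The estimation strategy can be illustrated with a toy Heckman two-step on synthetic data; the probit fit via `scipy.optimize`, the coefficients, and the data-generating process are a sketch, not the paper's CTS specification:

```python
# Heckman two-step sketch: step 1 fits a probit selection equation by
# maximum likelihood; step 2 adds the inverse Mills ratio as a regressor
# in the outcome equation to correct sample-selection bias.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 5000
z = rng.normal(size=n)                       # instrument: affects selection only
x = rng.normal(size=n)                       # outcome covariate
u = rng.normal(size=n)                       # selection-equation error
eps = 0.8 * u + 0.6 * rng.normal(size=n)     # outcome error, correlated with u
sel = (0.5 + 1.0 * z + 0.5 * x + u) > 0      # who is observed
y = (2.0 + 1.0 * x + eps)[sel]               # outcome seen only if selected

# Step 1: probit selection equation P(sel=1) = Phi(W @ gamma).
W = np.column_stack([np.ones(n), z, x])
def nll(g):
    p = norm.cdf(W @ g).clip(1e-9, 1 - 1e-9)
    return -(sel * np.log(p) + (~sel) * np.log(1 - p)).sum()
gamma = minimize(nll, np.zeros(3)).x

# Step 2: OLS on the selected sample with the inverse Mills ratio added.
idx = (W @ gamma)[sel]
mills = norm.pdf(idx) / norm.cdf(idx)
X = np.column_stack([np.ones(sel.sum()), x[sel], mills])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print("slope on x:", round(coef[1], 3), "(true value 1.0)")
```

Without the Mills-ratio term, the correlation between selection and the outcome error would bias the estimates, which is exactly the problem the paper's use of Heckman's method addresses.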

A Study on the Learner's Recognition of Project Instruction in Automobile Electricity Fields of Engineering Technology Education (자동차 전장 분야 공학기술교육에서 프로젝트 수업에 관한 학습자 인식 연구)

  • Park, Sung-Jong;Han, Myoung-Seok
    • Journal of Engineering Education Research / v.11 no.3 / pp.63-69 / 2008
  • This study provides a program to promote effective project instruction. A four-step learning model consisting of preparation, planning, implementation, and evaluation was adapted to a college course in the automobile electricity field. The purpose of this study was to document the project process from the learner's point of view and to examine the effect of project instruction on the perceptions of learners who completed the project course. Data from 28 learners in the hardware and software automobile electricity fields of a college were collected and analyzed statistically by t-test at the .05 level of significance. The study concludes that a successful project requires not only motivating active group effort and cooperative relationships between group members, but also communication and presentation skills.
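
As a hedged illustration of the statistical test used above (the survey scores below are invented, not the study's data), an independent two-sample t-test at the .05 level looks like:

```python
# Compare two groups' mean survey scores with an independent-samples t-test.
from scipy.stats import ttest_ind

hardware_group = [4.2, 3.9, 4.5, 4.1, 3.8, 4.4, 4.0, 4.3, 3.7, 4.6, 4.1, 4.2, 3.9, 4.4]
software_group = [3.6, 3.9, 3.5, 4.0, 3.7, 3.8, 3.4, 3.9, 3.6, 4.1, 3.5, 3.8, 3.7, 3.6]

res = ttest_ind(hardware_group, software_group)
print(f"t = {res.statistic:.2f}, p = {res.pvalue:.4f}")
print("significant at .05" if res.pvalue < 0.05 else "not significant at .05")
```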

A Study on Autonomous Stair-climbing System Using Landing Gear for Stair-climbing Robot (계단 승강 로봇의 계단 승강 시 랜딩기어를 활용한 자율 승강 기법에 관한 연구)

  • Hwang, Hyun-Chang;Lee, Won-Young;Ha, Jong-Hee;Lee, Eung-Hyuck
    • Journal of IKEEE / v.25 no.2 / pp.362-370 / 2021
  • In this paper, we propose an autonomous stair-climbing system based on data from ToF sensors and an IMU, for stair-climbing robots serving passive wheelchair users. The autonomous stair-climbing system is controlled by separating the timing of landing-gear operation by location and by utilizing state machines. To validate the approach, we constructed standard model stairs and ran experiments on them. In an experiment measuring the attack angle, the average error in operating the landing gear was 2.19% and the average error of the attack angle was 2.78%, and the step division and state transitions of the autonomous stair-climbing system were verified. These results suggest that the proposed techniques will reduce the constraints faced by the transportation handicapped.
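
The landing-gear state machine described above might be sketched as follows; the states, thresholds, and sensor readings are hypothetical stand-ins, not the paper's actual design:

```python
# Toy state machine: a ToF distance reading drives transitions, separating
# the timing at which each landing gear is deployed during a climb.
from enum import Enum, auto

class State(Enum):
    APPROACH = auto()       # driving toward the stair
    FRONT_GEAR = auto()     # front landing gear deployed on the first step
    REAR_GEAR = auto()      # rear landing gear deployed
    CLIMBING = auto()       # both gears engaged, ascending

STEP_DETECT_MM = 150        # hypothetical ToF threshold for a stair riser

def next_state(state: State, tof_mm: float) -> State:
    """Advance the climb sequence when the ToF sensor sees the riser."""
    if state is State.APPROACH and tof_mm < STEP_DETECT_MM:
        return State.FRONT_GEAR
    if state is State.FRONT_GEAR and tof_mm < STEP_DETECT_MM:
        return State.REAR_GEAR
    if state is State.REAR_GEAR:
        return State.CLIMBING
    return state

# Simulated ToF readings as the robot approaches and mounts a step.
state = State.APPROACH
for reading in [800, 400, 140, 130, 120]:
    state = next_state(state, reading)
print(state)
```

The real system would fuse IMU attitude with the ToF distances before each transition, but the state-per-location structure is the same idea.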

Effects of Pahs and Pcbs and Their Toxic Metabolites on Inhibition of Gjic and Cell Proliferation in Rat Liver Epithelial Wb-F344 Cells

  • Miroslav, Machala;Jan, Vondracek;Katerina, Chramostova;Lenka, Sindlerova;Pavel, Krcmar;Martina, Pliskova;Katerina, Pencikova;Brad, Upham
    • Environmental Mutagens and Carcinogens / v.23 no.2 / pp.56-62 / 2003
  • Liver progenitor cells could form a potential target cell population for both tumor-initiating and tumor-promoting chemicals. Induction of drug-metabolizing and antioxidant enzymes, including AhR-dependent CYP1A1, NQO-1, and AKR1C9, was detected in the rat liver epithelial WB-F344 "stem-like" cells. Additionally, WB-F344 cells express a functional, wild-type form of p53 protein, a biomarker of genotoxic events, and connexin 43, a basic structural unit of the gap junctions that form an important type of intercellular communication. In this cellular model, two complementary assays have been established for detecting the modes of action associated with tumor promotion: inhibition of gap junctional intercellular communication (GJIC) and proliferative activity in confluent cells. We found that PAHs and PCBs that are AhR agonists released WB-F344 cells from contact inhibition, increasing both DNA synthesis and cell numbers. Genotoxic effects of some PAHs that lead to apoptosis and cell cycle delay might interfere with the proliferative activity of PAHs. In contrast, the nongenotoxic low-molecular-weight PAHs and non-dioxin-like PCB congeners, abundant in the environment, did not significantly affect the cell cycle or cell proliferation; however, both groups of compounds inhibited GJIC in WB-F344 cells. Release from contact inhibition by a mechanism possibly involving AhR activation, inhibition of GJIC, and genotoxic events induced by environmental contaminants are three modes of action that could play an important role in the carcinogenic effects of toxic compounds. The relative potencies to inhibit GJIC, to induce AhR-mediated activity, and to release cells from contact inhibition were determined for a large series of PAHs and PCBs and their metabolites.
In vitro bioassays based on detection of events at the cellular level (deregulation of GJIC and/or proliferation) or determination of receptor-mediated activities in both "stem-like" and hepatocyte-like liver cellular models are valuable tools for detecting the modes of action of polyaromatic hydrocarbons. Together with concentration data, they may serve as a first step in risk assessment.

Evaluation of various nutrients removal models by using the data collected from stormwater wetlands and considerations for improving the nitrogen removal (인공습지에서 영양소 제거 설계모델 검토 및 질소제거 개선방안에 대한 고찰)

  • Park, Kisoo;Kim, Youngchul
    • Journal of Wetlands Research / v.19 no.1 / pp.90-102 / 2017
  • In this study, various types of nutrient models were tested using two years of water quality data collected from a stormwater wetland in Korea. Based on the results, the most important factor influencing nitrogen removal was the hydraulic loading rate, which indicates that the surface area of a wetland is more important than its volumetric capacity; the model proposed by WEF gave the least error between measured and calculated values. For phosphorus, assuming a power relationship between the rate constant and temperature produced the best predictions, but temperature was the most sensitive parameter affecting phosphorus removal. In addition, denitrification was always the limiting step for nitrogen removal in this particular wetland, mostly due to the lack of a carbon source and the high dissolved oxygen concentration. Finally, several alternatives to improve nitrogen removal are proposed and discussed, including proper arrangement and designation of wetland elements and the use of floating plants or synthetic fiber mats to control the oxygen level and capture algal particles.
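
A common form of the first-order areal nutrient design model evaluated in studies like this (the k-C* family) can be sketched as follows; the coefficient values here are illustrative assumptions, not the paper's calibration:

```python
# First-order areal (k-C*) wetland model with an Arrhenius-style
# temperature correction; halving the hydraulic loading rate (i.e.
# doubling wetland area for the same flow) improves removal.
import math

def outlet_conc(c_in, k20, q, temp_c, c_star=0.0, theta=1.06):
    """C_out = C* + (C_in - C*) * exp(-k/q).

    c_in   : inlet concentration [mg/L]
    k20    : areal rate constant at 20 C [m/yr]
    q      : hydraulic loading rate [m/yr]
    temp_c : water temperature [C]
    c_star : background concentration [mg/L]
    theta  : temperature correction factor
    """
    k = k20 * theta ** (temp_c - 20.0)
    return c_star + (c_in - c_star) * math.exp(-k / q)

print(round(outlet_conc(10.0, k20=35.0, q=100.0, temp_c=15.0), 2))
print(round(outlet_conc(10.0, k20=35.0, q=50.0, temp_c=15.0), 2))
```

The dependence on `q` alone (not depth or volume) is what makes surface area, rather than volumetric capacity, the controlling design variable, consistent with the study's finding.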

True Orthoimage Generation from LiDAR Intensity Using Deep Learning (딥러닝에 의한 라이다 반사강도로부터 엄밀정사영상 생성)

  • Shin, Young Ha;Hyung, Sung Woong;Lee, Dong-Cheon
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.38 no.4 / pp.363-373 / 2020
  • Over the last decades, numerous studies on orthoimage generation have been carried out. Traditional methods require the exterior orientation parameters of aerial images, precise 3D object modeling data, and a DTM (Digital Terrain Model) to detect and recover occlusion areas; furthermore, automating this complicated process is a challenging task. In this paper, we propose a new concept for true orthoimage generation using DL (Deep Learning). DL is rapidly being adopted in a wide range of fields. In particular, the GAN (Generative Adversarial Network) is a DL model used for various tasks in image processing and computer vision. The generator tries to produce results similar to real images, while the discriminator judges images as fake or real until the results are satisfactory; this mutually adversarial mechanism improves the quality of the results. Experiments were performed with the GAN-based Pix2Pix model, using IR (Infrared) orthoimages and intensity from LiDAR data provided by the German Society for Photogrammetry, Remote Sensing and Geoinformation (DGPF) through the ISPRS (International Society for Photogrammetry and Remote Sensing). Two approaches were implemented: (1) one-step training with intensity data and high-resolution orthoimages, and (2) recursive training with intensity data and color-coded low-resolution intensity images for progressive enhancement of the results. The two methods provided similar quality based on FID (Fréchet Inception Distance) measures; however, when the quality of the input data is close to the target image, better results could be obtained by increasing the number of epochs. This paper is an early experimental study of the feasibility of DL-based true orthoimage generation, and further improvement will be necessary.
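
The FID measure used to compare the two training approaches is the Fréchet distance between Gaussians fitted to feature embeddings of the two image sets; the sketch below uses random toy features rather than real Inception activations:

```python
# FID(a, b) = ||mu_a - mu_b||^2 + Tr(C_a + C_b - 2 * sqrtm(C_a @ C_b))
import numpy as np
from scipy.linalg import sqrtm

def fid(feats_a, feats_b):
    mu_a, mu_b = feats_a.mean(axis=0), feats_b.mean(axis=0)
    cov_a = np.cov(feats_a, rowvar=False)
    cov_b = np.cov(feats_b, rowvar=False)
    covmean = sqrtm(cov_a @ cov_b)
    if np.iscomplexobj(covmean):   # numerical noise can yield tiny imaginary parts
        covmean = covmean.real
    diff = mu_a - mu_b
    return diff @ diff + np.trace(cov_a + cov_b - 2.0 * covmean)

rng = np.random.default_rng(1)
a = rng.normal(0.0, 1.0, size=(2000, 8))   # "real" features
b = rng.normal(0.5, 1.0, size=(2000, 8))   # shifted "generated" features
print(round(float(fid(a, a)), 4))           # identical sets score ~0
print(float(fid(a, b)))                     # shifted set scores higher
```

Lower FID means the generated-image feature distribution is closer to the real one, which is why it serves as the comparison metric between the one-step and recursive approaches.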

A CF-based Health Functional Recommender System using Extended User Similarity Measure (확장된 사용자 유사도를 이용한 CF-기반 건강기능식품 추천 시스템)

  • Sein Hong;Euiju Jeong;Jaekyeong Kim
    • Journal of Intelligence and Information Systems / v.29 no.3 / pp.1-17 / 2023
  • With the recent rapid development of ICT (Information and Communication Technology) and the popularization of digital devices, the online market continues to grow, and we live in a flood of information. Customers thus face information overload, requiring much time and money to select products, so personalized recommender systems have become an essential methodology. Collaborative Filtering (CF) is the most widely used recommender system. Traditional recommender systems mainly utilize quantitative data such as rating values, resulting in poor recommendation accuracy, because quantitative data cannot fully reflect users' preferences. To solve this problem, studies that reflect qualitative data, such as review contents, are being actively conducted. In this study, text mining was used to quantify user review contents. General CF consists of three steps: user-item matrix generation, Top-N neighborhood group search, and Top-K recommendation list generation. We propose a recommendation algorithm that applies an extended similarity measure, which utilizes quantified review contents in addition to user rating values. After calculating review similarity by applying TF-IDF, Word2Vec, and Doc2Vec techniques to the review contents, the extended similarity is created by combining the user rating similarity and the review similarity. To verify this, we used user ratings and review data from the "Health and Personal Care" category of the e-commerce site Amazon. The proposed recommendation model using the extended similarity measure outperformed the traditional model using only rating-based similarity. In addition, among the text mining techniques, the similarity obtained with TF-IDF performed best in the neighborhood group search and recommendation list generation steps.
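
The extended-similarity idea can be sketched as a weighted sum of rating similarity and review-text similarity; the toy data, the hand-rolled TF-IDF, and the weight `alpha` are illustrative assumptions, not the paper's implementation:

```python
# Extended similarity = alpha * cosine(rating vectors)
#                     + (1 - alpha) * cosine(TF-IDF review vectors).
import math
from collections import Counter

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def tfidf_vectors(docs):
    """Tiny TF-IDF: term frequency weighted by inverse document frequency."""
    vocab = sorted({w for d in docs for w in d.split()})
    n = len(docs)
    df = {w: sum(w in d.split() for d in docs) for w in vocab}
    out = []
    for d in docs:
        tf = Counter(d.split())
        out.append([tf[w] * math.log((1 + n) / (1 + df[w])) for w in vocab])
    return out

# Toy users: ratings over the same 4 items, plus one review sentence each.
ratings = {"u1": [5, 4, 1, 2], "u2": [4, 5, 2, 1], "u3": [1, 2, 5, 5]}
reviews = {"u1": "great energy vitamin helped sleep",
           "u2": "vitamin helped energy levels",
           "u3": "no effect waste of money"}

vecs = dict(zip(reviews, tfidf_vectors(list(reviews.values()))))

def extended_similarity(a, b, alpha=0.5):
    return alpha * cosine(ratings[a], ratings[b]) + (1 - alpha) * cosine(vecs[a], vecs[b])

print(round(extended_similarity("u1", "u2"), 3))
print(round(extended_similarity("u1", "u3"), 3))
```

Users u1 and u2 agree in both ratings and review vocabulary, so their extended similarity exceeds that of u1 and u3; the same combined score would then drive the Top-N neighbor search.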

Development of Information Extraction System from Multi Source Unstructured Documents for Knowledge Base Expansion (지식베이스 확장을 위한 멀티소스 비정형 문서에서의 정보 추출 시스템의 개발)

  • Choi, Hyunseung;Kim, Mintae;Kim, Wooju;Shin, Dongwook;Lee, Yong Hun
    • Journal of Intelligence and Information Systems / v.24 no.4 / pp.111-136 / 2018
  • In this paper, we propose a methodology for extracting answer information for queries from various types of unstructured documents collected from multiple sources on the web, in order to expand a knowledge base. The proposed methodology comprises the following steps: 1) collect relevant documents from Wikipedia, Naver encyclopedia, and Naver news for "subject-predicate" separated queries and classify the proper documents; 2) determine whether each sentence is suitable for extracting information and derive a confidence score; 3) based on the predicate feature, extract the information from the suitable sentences and derive the overall confidence of the extraction result. To evaluate the performance of the information extraction system, we selected 400 queries from SK Telecom's artificial intelligence speaker. Compared with the baseline model, the proposed system achieves a higher performance index. The contribution of this study is a sequence tagging model based on bi-directional LSTM-CRF that uses the predicate feature of the query; with it, we developed a robust model that maintains high recall even across the various types of unstructured documents collected from multiple sources. Information extraction for knowledge base expansion must take into account the heterogeneous characteristics of source-specific document types, and the proposed methodology proved to extract information effectively from various document types, whereas previous research performs poorly on document types that differ from the training data.
In addition, this study prevents unnecessary extraction attempts on documents that do not include the answer, through a process that predicts the suitability of documents and sentences for information extraction before the extraction step. This matters because information extraction for knowledge base expansion targets unstructured documents on the real web, where there is no guarantee that a document contains the correct answer; previous machine reading comprehension studies show low precision in this setting because they frequently attempt to extract an answer even from documents with no correct answer. The policy of predicting document and sentence suitability thus contributes to maintaining extraction performance in a real web environment. The limitations of this study and future research directions are as follows. First, data preprocessing: the unit of knowledge extraction is determined through morphological analysis based on the open-source KoNLPy Python package, and extraction can fail when the morphological analysis is performed improperly; a more advanced morphological analyzer is needed to improve the extraction results. Second, entity ambiguity: the information extraction system cannot distinguish different entities that share the same name. If several people with the same name appear in the news, the system may not extract information about the intended query.
Future research should therefore take measures to disambiguate entities with the same name. Third, evaluation query data: we selected 400 user queries collected from SK Telecom's interactive artificial intelligence speaker to evaluate performance, and developed an evaluation data set of 800 documents (400 questions × 7 articles per question: 1 Wikipedia, 3 Naver encyclopedia, 3 Naver news), judging whether each includes a correct answer. To ensure the external validity of the study, it is desirable to evaluate the system on more queries; this is a costly, manual activity, but future research should undertake it. It is also necessary to develop a Korean benchmark data set for information extraction from multi-source web documents, to build an environment in which results can be evaluated more objectively.
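
The sentence-suitability filtering step can be illustrated with a toy rule-based scorer (a stand-in for the paper's learned confidence model; the example names, cues, and threshold are hypothetical):

```python
# Only attempt answer extraction from sentences that look likely to
# contain the answer, to avoid spurious extractions on the open web.
def suitability(sentence: str, subject: str, predicate_cues: list[str]) -> float:
    """Toy score: subject must appear; each predicate cue adds confidence."""
    if subject not in sentence:
        return 0.0
    hits = sum(cue in sentence for cue in predicate_cues)
    return min(1.0, 0.5 + 0.25 * hits)

THRESHOLD = 0.7
sentences = [
    "Marie Curie was born in Warsaw in 1867.",
    "The weather in Warsaw is cold in winter.",
    "Marie Curie won two Nobel Prizes.",
]
cues = ["born", "birth"]
candidates = [s for s in sentences if suitability(s, "Marie Curie", cues) >= THRESHOLD]
print(candidates)
```

Only the sentence matching both the subject and a predicate cue survives filtering; in the paper this gate is a learned model, but the effect on precision is the same in kind.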

DEM_Comp Software for Effective Compression of Large DEM Data Sets (대용량 DEM 데이터의 효율적 압축을 위한 DEM_Comp 소프트웨어 개발)

  • Kang, In-Gu;Yun, Hong-Sik;Wei, Gwang-Jae;Lee, Dong-Ha
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.28 no.2 / pp.265-271 / 2010
  • This paper discusses a new software package, DEM_Comp, developed for effectively compressing large digital elevation model (DEM) data sets based on Lempel-Ziv-Welch (LZW) compression and Huffman coding. DEM_Comp was developed in the C++ language for Windows operating systems, tested on various sites with different territorial attributes, and the results were evaluated. Recently, high-resolution DEMs have been obtained using new equipment and the related technologies of LiDAR (Light Detection and Ranging) and SAR (Synthetic Aperture Radar). DEM compression is useful because it reduces disk space or transmission bandwidth. Generally, data compression is divided into two processes: i) analyzing the relationships in the data and ii) deciding on the compression and storage methods. DEM_Comp uses a three-step compression algorithm applying a regular-grid DEM, Lempel-Ziv compression, and Huffman coding. When pre-processing alone was used on high- and low-relief terrain, the efficiency was approximately 83%, but after completing all three steps of the algorithm, it increased to 97%. Compared with general commercial compression software, these results show approximately 14% better performance. DEM_Comp as developed in this research offers a more efficient way of distributing, storing, and managing large high-resolution DEMs.
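
The dictionary-coding stage of such a pipeline can be illustrated with textbook LZW (this is the classic algorithm, not DEM_Comp's actual implementation; a Huffman pass over the emitted codes would follow):

```python
# Classic LZW: build a dictionary of repeated substrings on the fly and
# emit one code per longest known match; runs of repeated elevation
# values, as in flat DEM regions, compress well.
def lzw_compress(data: str) -> list[int]:
    table = {chr(i): i for i in range(256)}
    w, out = "", []
    for ch in data:
        wc = w + ch
        if wc in table:
            w = wc
        else:
            out.append(table[w])
            table[wc] = len(table)    # grow the dictionary
            w = ch
    if w:
        out.append(table[w])
    return out

def lzw_decompress(codes: list[int]) -> str:
    table = {i: chr(i) for i in range(256)}
    w = chr(codes[0])
    out = [w]
    for code in codes[1:]:
        entry = table[code] if code in table else w + w[0]
        out.append(entry)
        table[len(table)] = w + entry[0]
        w = entry
    return "".join(out)

sample = "121212121255555555"       # repetitive, like flat terrain rows
codes = lzw_compress(sample)
print(len(sample), "chars ->", len(codes), "codes")
assert lzw_decompress(codes) == sample
```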