• Title/Summary/Keyword: 계산영역 (computational domain)

Search Results: 3,846, Processing Time: 0.039 seconds

The High temperature stability limit of talc, $Mg_3Si_4O_{10}(OH)_2$ (An experimental study on the high-temperature stability field of talc, $Mg_3Si_4O_{10}(OH)_2$)

  • 조동수;김형식
    • The Journal of the Petrological Society of Korea
    • /
    • v.6 no.2
    • /
    • pp.123-132
    • /
    • 1997
  • In the system $MgO-SiO_2-H_2O$, talc [$Mg_3Si_4O_{10}(OH)_2$] has been synthesized hydrothermally at 200 MPa and $600^{\circ}C$ from an oxide mixture with the bulk composition of talc. An oxide mixture with the bulk composition of anthophyllite [$Mg_7Si_8O_{22}(OH)_2$] converted to talc, enstatite ($MgSiO_3$), and quartz at 200 MPa and $750^{\circ}C$ with excess $H_2O$. In low- to medium-pressure metamorphism, the enstatite-talc assemblage is metastable relative to anthophyllite through the reaction talc + 4 enstatite = anthophyllite (Greenwood, 1963). The high-temperature stability of talc is bounded by the dehydration reaction to anthophyllite rather than that to enstatite (Greenwood, 1963; Chernosky et al., 1985). Therefore the enstatite-talc-quartz assemblage obtained in our experiment at 200 MPa and $750^{\circ}C$ from the oxide mixture with the bulk composition of anthophyllite is a metastable assemblage. Hydrothermal experiments were performed at 41 to 243 MPa and 680 to $760^{\circ}C$ with a starting material composed of synthetic talc, enstatite, and quartz. Talc or enstatite grew during the runs, and no extra phases, including anthophyllite, nucleated. Based on the increase or decrease of each phase in the run products, one of the possible reactions is talc = 3 enstatite + quartz + $H_2O$. The reversal bracket of the reaction is 699 to $700^{\circ}C$ at 100 MPa. Talc is stable up to $740^{\circ}C$ at 200 MPa, and enstatite grew at $680^{\circ}C$, 40 MPa and at $760^{\circ}C$, 250 MPa. Though the high-temperature limit of talc around 200 MPa is bounded thermodynamically by the reaction 7 talc = 3 anthophyllite + 4 quartz + 4 $H_2O$, talc persisted beyond that reaction up to the reaction talc = 3 enstatite + quartz + $H_2O$.

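
The phase relations in the abstract above hinge on three balanced reactions; as a quick plausibility check, their stoichiometry can be verified programmatically (a sketch; the per-formula-unit element counts are taken directly from the mineral formulas quoted in the abstract):

```python
# Element counts per formula unit for the phases named in the abstract.
talc          = {"Mg": 3, "Si": 4, "O": 12, "H": 2}   # Mg3Si4O10(OH)2
enstatite     = {"Mg": 1, "Si": 1, "O": 3}            # MgSiO3
quartz        = {"Si": 1, "O": 2}                     # SiO2
water         = {"O": 1, "H": 2}                      # H2O
anthophyllite = {"Mg": 7, "Si": 8, "O": 24, "H": 2}   # Mg7Si8O22(OH)2

def totals(side):
    """Sum element counts over (coefficient, phase) pairs on one side."""
    out = {}
    for coeff, phase in side:
        for el, n in phase.items():
            out[el] = out.get(el, 0) + coeff * n
    return out

# talc = 3 enstatite + quartz + H2O
assert totals([(1, talc)]) == totals([(3, enstatite), (1, quartz), (1, water)])
# 7 talc = 3 anthophyllite + 4 quartz + 4 H2O
assert totals([(7, talc)]) == totals([(3, anthophyllite), (4, quartz), (4, water)])
# talc + 4 enstatite = anthophyllite
assert totals([(1, talc), (4, enstatite)]) == totals([(1, anthophyllite)])
```
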
Semantic Access Path Generation in Web Information Management (A study on the formation of semantic access paths in the management of Web information)

  • Lee, Wookey
    • Journal of the Korea Society of Computer and Information
    • /
    • v.8 no.2
    • /
    • pp.51-56
    • /
    • 2003
  • The structuring of Web information supports a strong user-side viewpoint: a user wants to satisfy his or her own needs when exploring a specific Web site. Using the depth-first and breadth-first algorithms, the Web information is abstracted into a hierarchical structure. A prototype system is suggested in order to visualize the structure and represent its semantic significance. As a motivating example, a Web test site is suggested and analyzed with respect to several keywords. As future research, the Web site model should be extended to the whole WWW, and an accurate assessment function needs to be devised by which the several suggested models can be evaluated.

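
The hierarchical abstraction described in the abstract above can be sketched with a breadth-first traversal that keeps, for every page, the first access path by which it was reached; the link graph below is invented for illustration:

```python
from collections import deque

def bfs_tree(links, root):
    """Abstract a link graph into a hierarchy: each page is attached to
    the parent through which it is first reached (breadth-first order)."""
    parent = {root: None}
    queue = deque([root])
    while queue:
        page = queue.popleft()
        for target in links.get(page, []):
            if target not in parent:       # first access path wins
                parent[target] = page
                queue.append(target)
    return parent

# A toy site: cross links exist, but the tree keeps one access path per page.
links = {"/": ["/news", "/about"],
         "/news": ["/news/1", "/about"],
         "/about": ["/contact"]}
tree = bfs_tree(links, "/")
# "/about" is reachable from both "/" and "/news"; BFS keeps the shorter path.
```

A depth-first variant would differ only in using a stack instead of a queue, which tends to produce deeper, narrower hierarchies.
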
A Study on Intuitive IoT Interface System using 3D Depth Camera (A study on an intuitive Internet of Things interface system using a 3D depth camera)

  • Park, Jongsub;Hong, June Seok;Kim, Wooju
    • The Journal of Society for e-Business Studies
    • /
    • v.22 no.2
    • /
    • pp.137-152
    • /
    • 2017
  • The decline in the price of IT devices and the development of the Internet have created a new field called the Internet of Things (IoT). IoT, which creates new services by connecting everyday objects to the Internet, is pioneering new forms of business not seen before, in combination with Big Data, and its potential uses can be said to be unlimited. Standardization bodies are also actively working on the smooth interconnection of IoT devices. However, these efforts overlook one issue: in order to control IoT equipment or acquire information from it, interworking details (IP address, Wi-Fi, Bluetooth, NFC, etc.) must be handled and related application software or apps must be developed separately for each device. To solve these problems, existing research has used augmented reality based on GPS or markers; however, a separate marker is required, and the marker is recognized only at close range. Likewise, studies using a GPS address with a 2D camera could not recognize the distance to the target device, making an active interface difficult to implement. In this study, we use a 3D depth-recognition camera installed on a smartphone and calculate spatial coordinates automatically, without a separate marker, by combining the distance measurement with the phone's sensor information. Querying these coordinates identifies the IoT equipment and enables information acquisition from, and control of, the corresponding device. From the user's point of view, this reduces the burden of device interworking and app installation. Furthermore, if this technology is applied to public services and smart glasses, it will reduce duplicated investment in software development and broaden public services.
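
The coordinate calculation described above (a depth reading combined with the phone's orientation sensors) can be sketched with simple trigonometry. This is a simplified model with assumed sensor conventions (yaw measured from the +y axis, pitch positive upward, no lens distortion), not the authors' implementation:

```python
import math

def device_world_position(cam_x, cam_y, cam_z, yaw_deg, pitch_deg, depth_m):
    """Project a depth-camera range reading into world coordinates using the
    phone's orientation sensors (yaw about the vertical axis, pitch up/down)."""
    yaw = math.radians(yaw_deg)
    pitch = math.radians(pitch_deg)
    horiz = depth_m * math.cos(pitch)        # horizontal component of the ray
    return (cam_x + horiz * math.sin(yaw),
            cam_y + horiz * math.cos(yaw),
            cam_z + depth_m * math.sin(pitch))

# Looking along the +y axis (yaw 0) and level (pitch 0) at a device 2 m away,
# from a camera held 1.5 m above the ground:
x, y, z = device_world_position(0.0, 0.0, 1.5, 0.0, 0.0, 2.0)
# -> (0.0, 2.0, 1.5)
```
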

GPU-based dynamic point light particles rendering using 3D textures for real-time rendering (A GPU-based implementation of dynamic point-light particles using 3D textures in a real-time rendering environment)

  • Kim, Byeong Jin;Lee, Taek Hee
    • Journal of the Korea Computer Graphics Society
    • /
    • v.26 no.3
    • /
    • pp.123-131
    • /
    • 2020
  • This study proposes a real-time rendering algorithm for lighting when each of more than 100,000 moving particles exists as a light source. Two 3D textures are used to dynamically determine the range of influence of each light: the first holds light color and the second holds light-direction information. Each frame goes through two steps. The first step, based on a compute shader, updates the particle information required for 3D-texture initialization and rendering. The particle position is converted to the sampling coordinates of the 3D texture, and based on these coordinates the first 3D texture accumulates the color sum of the particle lights affecting each voxel, while the second accumulates the sum of the direction vectors from each voxel to those particle lights. The second step operates in the ordinary rendering pipeline. From the world position of the polygon being rendered, the exact sampling coordinates of the 3D texture updated in the first step are calculated. Since the sampling coordinates correspond 1:1 to the size of the 3D texture and the size of the game world, the world coordinates of the pixel are used directly as sampling coordinates. Lighting is then carried out using the sampled color and light-direction vector. The 3D texture corresponds 1:1 to the actual game world with a minimum unit of 1 m, so in areas smaller than 1 m staircase artifacts occur due to the resolution limit; interpolation and supersampling during texture sampling are performed to mitigate them. Measurements of per-frame rendering time showed 146 ms for the forward lighting pipeline and 46 ms for the deferred lighting pipeline with 262,144 particles, and 214 ms (forward) and 104 ms (deferred) with 1,024,766 particle lights.
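
The first-pass accumulation described above can be sketched on the CPU with NumPy (in the paper it runs in a compute shader). The 1 m-per-voxel, world-to-texture 1:1 mapping follows the abstract; the grid size, particle data, and influence radius are invented for illustration:

```python
import numpy as np

SIZE = 8                       # world is SIZE x SIZE x SIZE metres, 1 m per voxel
color_tex = np.zeros((SIZE, SIZE, SIZE, 3), dtype=np.float32)  # summed light colors
dir_tex = np.zeros((SIZE, SIZE, SIZE, 3), dtype=np.float32)    # summed voxel->light directions

def splat_particle(pos, color, radius=1):
    """Accumulate one particle light into the voxels it influences."""
    ix, iy, iz = (int(c) for c in pos)
    for x in range(max(ix - radius, 0), min(ix + radius + 1, SIZE)):
        for y in range(max(iy - radius, 0), min(iy + radius + 1, SIZE)):
            for z in range(max(iz - radius, 0), min(iz + radius + 1, SIZE)):
                color_tex[x, y, z] += color
                d = np.array(pos) - (np.array([x, y, z]) + 0.5)  # voxel centre -> light
                n = np.linalg.norm(d)
                if n > 0:
                    dir_tex[x, y, z] += d / n

# Second pass idea: a pixel at world position p samples both textures directly,
# because world coordinates map 1:1 onto texture coordinates.
splat_particle((4.5, 4.5, 4.5), np.array([1.0, 0.5, 0.0], dtype=np.float32))
sampled = color_tex[4, 4, 4]   # sampling coordinate == world coordinate
```
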

NEW ANTIDEPRESSANTS IN CHILD AND ADOLESCENT PSYCHIATRY (New antidepressants in the field of child and adolescent psychiatry)

  • Lee, Soo-Jung
    • Journal of the Korean Academy of Child and Adolescent Psychiatry
    • /
    • v.14 no.1
    • /
    • pp.12-25
    • /
    • 2003
  • Objectives: As an increasing number of new antidepressants have been introduced into clinical practice, pharmacological understanding has broadened. These changes require new information and theories to be incorporated into the treatment of children with depressive disorders. In light of this new knowledge, this review recapitulates the characteristics of the new antidepressants and considers the pivotal issues in developing guidelines for the treatment of depression in childhood and adolescence. Methods: Searching the PubMed online database for articles with the key words 'new', 'antidepressants', and 'children' yielded ninety-seven review-article headings. The author selected the articles pertinent to either treatment guidelines or the psychopharmacology of new antidepressants. When required, articles about the clinical effectiveness of individual antidepressants were searched separately. In addition, safety information on new antidepressants was acquired by browsing the official sites of the United States Food and Drug Administration and the Department of Health and Human Services. Results: 1) For the clinical course, treatment phases, and treatment outcomes, the reviews or treatment guidelines adopted information from adult treatment guidelines. 2) Systematic and critical reviews unambiguously concluded that selective serotonin reuptake inhibitors (SSRIs) excelled tricyclic antidepressants (TCAs) in both efficacy and side-effect profiles, and were recommended as the first-line choice for the treatment of children with depressive disorders. 3) New antidepressants generally lacked treatment experience and randomized controlled clinical trials. 4) SSRIs and other new antidepressants, when used together, might produce pharmacokinetic and/or pharmacodynamic drug-drug interactions. 5) The difference in the clinical effectiveness of antidepressants between children and adults should be addressed from a developmental perspective, which requires further evidence. Conclusion: Treatment guidelines for the pharmacological treatment of childhood and adolescent depression can be constructed on the basis of clinical-trial findings and practical experience. Treatment guidelines serve best as a frame of reference from which a clinician can make reasonable decisions in a particular therapeutic situation. To fulfill this role, guidelines should be updated as soon as new research data become available.

Three-Dimensional High-Frequency Electromagnetic Modeling Using Vector Finite Elements (High-frequency 3-D electromagnetic survey modeling using vector finite elements)

  • Son Jeong-Sul;Song Yoonho;Chung Seung-Hwan;Suh Jung Hee
    • Geophysics and Geophysical Exploration
    • /
    • v.5 no.4
    • /
    • pp.280-290
    • /
    • 2002
  • Three-dimensional (3-D) electromagnetic (EM) modeling algorithm has been developed using finite element method (FEM) to acquire more efficient interpretation techniques of EM data. When FEM based on nodal elements is applied to EM problem, spurious solutions, so called 'vector parasite', are occurred due to the discontinuity of normal electric fields and may lead the completely erroneous results. Among the methods curing the spurious problem, this study adopts vector element of which basis function has the amplitude and direction. To reduce computational cost and required core memory, complex bi-conjugate gradient (CBCG) method is applied to solving complex symmetric matrix of FEM and point Jacobi method is used to accelerate convergence rate. To verify the developed 3-D EM modeling algorithm, its electric and magnetic field for a layered-earth model are compared with those of layered-earth solution. As we expected, the vector based FEM developed in this study does not cause ny vector parasite problem, while conventional nodal based FEM causes lots of errors due to the discontinuity of field variables. For testing the applicability to high frequencies 100 MHz is used as an operating frequency for the layer structure. Modeled fields calculated from developed code are also well matched with the layered-earth ones for a model with dielectric anomaly as well as conductive anomaly. In a vertical electric dipole source case, however, the discontinuity of field variables causes the conventional nodal based FEM to include a lot of errors due to the vector parasite. Even for the case, the vector based FEM gave almost the same results as the layered-earth solution. The magnetic fields induced by a dielectric anomaly at high frequencies show unique behaviors different from those by a conductive anomaly. 
Since our 3-D EM modeling code can reflect the effect from a dielectric anomaly as well as a conductive anomaly, it may be a groundwork not only to apply high frequency EM method to the field survey but also to analyze the fold data obtained by high frequency EM method.
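
The solver combination in the abstract, a bi-conjugate-gradient-type Krylov method for a complex *symmetric* matrix with point-Jacobi preconditioning, can be sketched as follows. This is the standard conjugate-orthogonal CG (COCG) variant with unconjugated inner products, offered as an assumption about the method family, not the authors' code:

```python
import numpy as np

def cocg_jacobi(A, b, tol=1e-10, max_iter=200):
    """Jacobi-preconditioned CG-type solver for a complex *symmetric*
    (A == A.T, not Hermitian) system A x = b.  Note the unconjugated
    inner products r @ z, which exploit the symmetry."""
    minv = 1.0 / np.diag(A)                 # point-Jacobi preconditioner
    x = np.zeros_like(b)
    r = b - A @ x
    z = minv * r
    p = z.copy()
    rz = r @ z                              # unconjugated dot product
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        z = minv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# A small, diagonally dominant complex symmetric test system.
n = 6
rng = np.random.default_rng(0)
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = B + B.T + 10 * n * np.eye(n)            # symmetric, not Hermitian
b = rng.standard_normal(n) + 1j * rng.standard_normal(n)
x = cocg_jacobi(A, b)
```

The point worth noting is the `r @ z` product without complex conjugation: it is what lets a CG-style short recurrence work on a complex symmetric (rather than Hermitian) FEM matrix.
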

Computerized Multiple 15-hue tests for Quantifying Color Vision Acuity (A computerized multiple 15-hue test for the quantitative evaluation of color vision acuity)

  • Ko S.T.;Hong S.C.;Choi M.J.
    • Journal of Biomedical Engineering Research
    • /
    • v.21 no.3 s.61
    • /
    • pp.321-331
    • /
    • 2000
  • Multiple 15-hue tests were designed and implemented on a PC in this study so as to evaluate color vision acuity quickly and quantitatively. The difficulty of each test was controlled by the value of CDBACC (the color difference between adjacent color chips) calculated using a CIELAB formula. The multiple 15-hue tests consist of eight hue tests (tests 3-10) and three basic-color (red, green, blue) tests (tests 11-13). The 15 colors used for the hue tests were specified by 15 color coordinates located at a constant distance (d = 2, 3, 5, 7, 10, 20, 30, 40) from the white reference in the CIE chromaticity coordinate system and separated by a constant color difference (CDBACC = 0.75, 1.1, 1.8, 2.5, 3.5, 7.5, 11, 14) from the adjacent chips. The color coordinates of the 15 chips for the basic-color tests were spaced equally by a constant color difference (6.87 for the green test, 7.27 for the red test, 7.86 for the blue test) from the white reference along the red, green, and blue axes. Thirty normal subjects who were not color blind underwent the multiple 15-hue tests. Most of the subjects correctly arranged the color chips for tests with CDBACC greater than 5, whereas no one answered correctly for those with CDBACC less than 2. Rapid changes in the number of subjects arranging correctly took place when the CDBACC of the test was between 2 and 4.5. In the basic-color tests, unlike the hue tests with similar CDBACC values, the subjects arranged the color chips even less correctly. The JNCD (just noticeable color difference), a measure of color vision acuity, was found to be about 3 on average for the subjects; it was chosen as the CDBACC value at which about $50\%$ of the subjects failed to arrange the color chips successfully. The ERCCA (error rate of color chip arrangement) for the test with CDBACC equal to the JNCD was about $20\%$. It is expected that the multiple 15-hue tests implemented on a PC in this study will be an economical tool for quickly and quantitatively evaluating color vision acuity, and accordingly the tests can be used for early diagnosis in the large population of potential patients suffering from diseases (e.g., diabetes, glaucoma) that may induce changes in color vision acuity.

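
The CDBACC quantity in the abstract above is a color difference computed "using a CIELAB formula"; assuming the simple CIE76 form, it is the Euclidean distance in $L^*a^*b^*$ space (the chip values below are invented for illustration):

```python
import math

def delta_e_ab(lab1, lab2):
    """CIE76 color difference: Euclidean distance in CIELAB space."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(lab1, lab2)))

# Two adjacent chips differing only slightly in a* and b*:
chip1 = (50.0, 10.0, 10.0)     # (L*, a*, b*)
chip2 = (50.0, 12.0, 11.5)
d = delta_e_ab(chip1, chip2)   # -> 2.5, below the ~3 JNCD reported above
```
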
Influence of Gating and Attenuation-correction for Diagnostic Performance of Usual Rest/stress Myocardial Perfusion SPECT in Coronary Artery Disease (The influence of gating and attenuation correction on the diagnostic performance of rest/stress myocardial perfusion SPECT in coronary artery disease)

  • Lee, Dong-Soo;Yeo, Jeong-Seok;So, Young;Cheon, Gi-Jeong;Kim, Kyeong-Min;Chung, June-Key;Lee, Myung-Chul
    • The Korean Journal of Nuclear Medicine
    • /
    • v.33 no.2
    • /
    • pp.131-142
    • /
    • 1999
  • Purpose: Either gated myocardial perfusion SPECT or attenuation-corrected SPECT can be used to improve specificity in the diagnosis of coronary artery disease. We investigated whether gating or attenuation correction improved the diagnostic performance of rest/stress perfusion SPECT in patients with an intermediate pre-test likelihood of coronary artery disease. Materials and Methods: Sixty-eight patients underwent rest attenuation-corrected Tl-201/dipyridamole-stress gated attenuation-corrected Tc-99m-MIBI SPECT using an ADAC Vertex camera (M:F = 29:39, aged $59{\pm}12$ years, coronary artery stenosis ${\geq}70%$; one-vessel: 13, two-vessel: 18, three-vessel: 8, normal: 29). Using a five-point scale, three physicians graded the post-test likelihood of coronary artery disease for each arterial territory (1: normal, 2: possibly normal, 3: equivocal, 4: possibly abnormal, 5: abnormal). Sensitivity, specificity, and the area under the receiver-operating-characteristic curve were compared for each reader between three methods: (A) non-attenuation-corrected SPECT; (B) gated SPECT added to (A); and (C) attenuation-corrected SPECT added to (B). Results: When grade 3 was used as the criterion for coronary artery disease, no differences in sensitivity or specificity were found between the three methods for any reader. The areas under the receiver-operating-characteristic curves likewise revealed no differences between the modalities (p>0.05). Conclusion: In patients at intermediate risk of coronary artery disease, gated SPECT and attenuation-corrected SPECT did not improve diagnostic performance.

Study on Production Performance of Shale Gas Reservoir using Production Data Analysis (A study on the production behavior of shale gas wells using production data analysis)

  • Lee, Sun-Min;Jung, Ji-Hun;Sin, Chang-Hoon;Kwon, Sun-Il
    • Journal of the Korean Institute of Gas
    • /
    • v.17 no.4
    • /
    • pp.58-69
    • /
    • 2013
  • This paper presents production data analysis for two production wells in a Canadian shale gas field, applying the proper analysis method according to the production performance characteristics of each well. For production well A, whose production history shows high variation, the analysis was performed using both time and superposition time. First, the flow regimes were classified with a log-log plot; as a result, only transient flow appeared. The area of the stimulated reservoir volume (SRV) analyzed from the flowing-material-balance plot was then calculated as 180 acres using time and 240 acres using superposition time, and the original gas in place (OGIP) was estimated as 15 and 20 Bscf, respectively. However, because the SRV area was not derived from boundary-dominated flow data, it was regarded as a minimum. Production forecasting was therefore conducted for a range of b exponents and SRV areas. As a result, the estimated ultimate recovery (EUR) increased 1.2 and 1.4 times for b exponents of 0.5 and 1, respectively, and increasing the SRV area from 240 to 360 acres raised the EUR 1.3 times. For production well B, formation compressibility and permeability varying with overburden stress were applied to analyze the overpressured reservoir. Compared with the case that ignored these geomechanical factors, the SRV area increased 1.4 times and the OGIP 1.5 times. The analysis shows that predictions of future productivity, including OGIP and EUR, may differ considerably depending on the analysis method; proper techniques, such as pseudo-time, superposition time, and geomechanical factors, must therefore be applied according to the production data in order to obtain accurate results.
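
The sensitivity of EUR to the b exponent reported above can be illustrated with the standard Arps decline model (the initial rate, decline constant, and abandonment time below are invented for illustration and do not describe the wells in the study):

```python
import math

def arps_rate(qi, di, b, t):
    """Arps decline rate: exponential when b == 0, hyperbolic for 0 < b <= 1."""
    if b == 0:
        return qi * math.exp(-di * t)
    return qi / (1.0 + b * di * t) ** (1.0 / b)

def eur(qi, di, b, t_end, steps=20000):
    """Cumulative production to abandonment time t_end (trapezoidal rule)."""
    dt = t_end / steps
    return sum(0.5 * (arps_rate(qi, di, b, i * dt)
                      + arps_rate(qi, di, b, (i + 1) * dt)) * dt
               for i in range(steps))

# Illustrative well: 10 MMscf/d initial rate, 0.4 %/day initial decline, 30 years.
qi, di, days = 10.0, 0.004, 30 * 365
eur_b05, eur_b10 = eur(qi, di, 0.5, days), eur(qi, di, 1.0, days)
# A larger b exponent flattens the late-time decline and raises the EUR.
```
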

Comparison of Acting Style Between 2D Hand-drawn Animation and 3D Computer Animation : Focused on Expression of Emotion by Using Close-up (A comparison of acting styles between 2D hand-drawn animation and 3D computer animation: focused on the expression of emotion using close-ups)

  • Moon, Jaecheol;Kim, Yumi
    • Cartoon and Animation Studies
    • /
    • s.36
    • /
    • pp.147-165
    • /
    • 2014
  • Around the turn of the 21st century, there was a major technological shift in the animation industry. With the development of reality-based computer graphics, major American animation studios replaced the hand-drawn method with 3D computer graphics. Traditional animation was known for simplified shapes, such as circles and triangles, that make characters' movements distinct from those in live-action feature films. Computer-generated animation has largely replaced it, but remains under criticism that automated movement and reality-like graphics devalue the aesthetics of animation. Although hand-drawn animation is still produced, 3D computer graphics have taken the commercial lead, and character acting has changed in ways that call for detailed investigation. First, the changes in the acting of 3D characters can be traced to human-like rigging methods that mimic human movement mechanisms. Also, whereas hair and clothing were part of hand-drawn characters' acting, they are now hidden inside the mathematical simulation of 3D graphics, leaving only the body for acting. Second, looking at "stretch and squash," which represents the distinctive movement of animation, through the lens of the medium, a paradox arises. Hand-drawn animation is produced frame by frame, and subtle changes make the animated frames shiver. This slight shivering acts as an aesthetic distinction of animated feature films, but it can also require exaggerated movements to hide the shivering. By contrast, the acting of 3D animation uses calculated movements that may seem exaggerated compared with human acting, but are far more moderate and static than hand-drawn acting. Moreover, 3D computer graphics add a third dimension that allows more intuitive movement: animators may no longer need fine drawing skills so much as directing skills to animate characters intuitively in 3D space. On the assumption that technological advancement and changes in artistic expression are inseparable, this paper compares the acting of the 3D animation studio Pixar and the classical drawing studio Disney to investigate character acting styles and movements.