• Title/Summary/Keyword: Numerical Information


A Study on the Bottom-Emitting Characteristics of Blue OLED with 7-Layer Laminated Structure (7층 적층구조 배면발광 청색 OLED의 발광 특성 연구)

  • Gyu Cheol Choi;Duck-Youl Kim;SangMok Chang
    • Clean Technology / v.29 no.4 / pp.244-248 / 2023
  • Displays play an important role in delivering large amounts of information quickly, and research is underway to reproduce colors close to natural colors. In particular, the light-emitting structure of displays is being studied as a way of expressing accurate and rich colors. With advancing technology and device miniaturization, the need for small, highly visible, energy-efficient displays continues to grow. Various efforts are being made to improve OLED efficiency, such as improving carrier injection, structuring devices so that electrons and holes recombine in numerical balance, and developing materials with high luminous efficiency. In this study, the electrical and optical properties of a seven-layer stacked bottom-emitting blue OLED device were analyzed. 4,4'-Bis(carbazol-9-yl)biphenyl:Ir(difppy)2(pic), a blue light-emitting material that is easy to process and allows high efficiency and brightness, was used. The OLED devices were fabricated in situ under a high vacuum of 5×10⁻⁸ Torr or less using a Sunicel Plus 200 system. The experiment used a seven-layer structure in which electron and hole blocking layers (EBL and HBL) were added to a five-layer structure consisting of electron and hole injection layers (EIL and HIL) and electron and hole transport layers (ETL and HTL) around the emitting layer. Analysis of the electrical and optical properties showed that the device in which an EBL and an HBL were inserted to prevent color diffusion exhibited excellent color purity. The results of this study are expected to contribute greatly to the R&D foundation and practical use of blue OLED display devices.

Survey of coastal topography using images from a single UAV (단일 UAV를 이용한 해안 지형 측량)

  • Noh, Hyoseob;Kim, Byunguk;Lee, Minjae;Park, Yong Sung;Bang, Ki Young;Yoo, Hojun
    • Journal of Korea Water Resources Association / v.56 no.spc1 / pp.1027-1036 / 2023
  • Coastal topographic information is crucial in coastal management, but point-measurement-based approaches, which are labor intensive, are generally applied to land and underwater areas separately. This study introduces an efficient method enabling land and underwater surveys with a single unmanned aerial vehicle (UAV). The method applies two different algorithms to measure land topography and water depth, respectively, from UAV imagery, and merges the results to reconstruct the whole coastal digital elevation model. The landside terrain is acquired using the Structure-from-Motion Multi-View Stereo technique with spatial scan imagery. Independently, the underwater bathymetry is retrieved by applying a depth-inversion technique to a drone-acquired wave-field video. After merging the two digital elevation models into a local coordinate system, interpolation is performed for areas where terrain measurement is not feasible, ultimately yielding a continuous nearshore terrain. We applied the proposed survey technique to Jangsa Beach, South Korea, and verified that detailed terrain characteristics, such as the berm, can be measured. The proposed UAV-based survey method offers significant efficiency in terms of time, cost, and safety compared to existing methods.
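  The merging-and-interpolation step described above lends itself to a short sketch. Below is a minimal Python illustration, assuming two co-registered elevation grids with NaN marking cells each method could not measure; the function names, toy grid values, and the use of SciPy's griddata are illustrative assumptions, not the paper's implementation.

```python
"""Sketch: merge a land DEM and a bathymetry grid, then fill gaps.
Hypothetical stand-in for the paper's merging/interpolation step."""
import numpy as np
from scipy.interpolate import griddata

def merge_and_fill(land_dem, bathy_dem):
    """Combine two co-registered elevation grids (NaN = no data),
    preferring land values where both exist, then interpolate gaps."""
    merged = np.where(np.isnan(land_dem), bathy_dem, land_dem)

    # Interpolate remaining gaps (e.g., the surf zone neither method
    # covers) from the surrounding valid cells.
    rows, cols = np.indices(merged.shape)
    valid = ~np.isnan(merged)
    return griddata(
        (rows[valid], cols[valid]),  # known cell coordinates
        merged[valid],               # known elevations
        (rows, cols),                # query every grid cell
        method="linear",
    )

# Toy example: 5x5 grids with a gap between land and water coverage.
land = np.full((5, 5), np.nan); land[:, :2] = [2.0, 1.5]
bathy = np.full((5, 5), np.nan); bathy[:, 3:] = [-0.5, -1.2]
print(merge_and_fill(land, bathy))
```

  The same pattern scales to real rasters once both grids share one local coordinate system, which is the paper's precondition for the merge.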

Classification and discrimination of excel radial charts using the statistical shape analysis (통계적 형상분석을 이용한 엑셀 방사형 차트의 분류와 판별)

  • Seungeon Lee;Jun Hong Kim;Yeonseok Choi;Yong-Seok Choi
    • The Korean Journal of Applied Statistics / v.37 no.1 / pp.73-86 / 2024
  • An Excel radial chart is a very useful graphical method for conveying information about numerical data. However, it is not easy to discriminate or classify many individuals with it. In this case, after representing each individual of a radial chart as a shape, we can apply statistical shape analysis. For a radial chart, as many landmarks are formed as there are variables representing the characteristics of the object, so we consider the shape obtained by connecting them with lines. If the shape becomes complicated because of the large number of variables, it is difficult to grasp even when visualized as a radial chart. Principal component analysis (PCA) is therefore performed on the variables to create a visually effective shape. Classification tables and classification rates are obtained by applying traditional discriminant analysis, support vector machines (SVM), and artificial neural networks (ANN), before and after PCA. In addition, the difference in discrimination between two coordinate systems, generalized Procrustes analysis (GPA) coordinates and Bookstein coordinates, is compared. Bookstein coordinates are obtained by removing the position, rotation, and scale of the shape with respect to the baseline landmarks, and they show a higher classification rate than GPA coordinates.
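  To make the landmark and Bookstein-registration steps concrete, here is a minimal sketch assuming one individual's variable values are placed on equally spaced spokes, as in a radial chart; the baseline convention and all numbers are illustrative assumptions, not the paper's data or preprocessing.

```python
"""Sketch: one radial-chart row -> landmarks -> Bookstein coordinates."""
import numpy as np

def radial_landmarks(values):
    """Place one landmark per variable on equally spaced spokes,
    at a radius equal to the variable's value."""
    p = len(values)
    angles = 2 * np.pi * np.arange(p) / p
    return values * np.exp(1j * angles)  # landmarks in the complex plane

def bookstein_coords(z, base=(0, 1)):
    """Register the shape so the two baseline landmarks map to -1/2 and
    +1/2 on the real axis (one common convention); this removes
    location, rotation, and scale."""
    i, j = base
    w = (z - z[i]) / (z[j] - z[i]) - 0.5
    return np.column_stack([w.real, w.imag])

# Toy example: a 6-variable individual from a radial chart.
x = np.array([3.0, 2.5, 4.1, 3.3, 2.8, 3.9])
print(bookstein_coords(radial_landmarks(x)))
```

  The registered coordinates, rather than the raw chart values, are what the discriminant, SVM, and ANN classifiers would then consume.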

The effect of perineural injection therapy on neuropathic pain: a retrospective study

  • Haekyu Kim;Hyae Jin Kim;Young-Hoon Jung;Wangseok Do;Eun-Jung Kim
    • Journal of Dental Anesthesia and Pain Medicine / v.24 no.1 / pp.47-56 / 2024
  • Background: Among the various pain-related diseases encountered in the clinic, neuropathic pain is difficult to treat. Numerous methods have been proposed to treat it, such as medication, nerve block with lidocaine, or neurolysis with alcohol or phenol. Recently, perineural injection using dextrose instead of lidocaine was proposed. This study was designed to compare the effects of perineural injection therapy (PIT) with buffered 5% dextrose or 0.5% lidocaine on neuropathic pain. Methods: Data were collected from the pain clinic database from August 1st, 2019 to December 31st, 2022, without any personal information. The inclusion criteria were patients diagnosed with postherpetic neuralgia (PHN), trigeminal neuralgia (TN), complex regional pain syndrome (CRPS), or peripheral neuropathy (PN) who had undergone PIT with buffered 5% dextrose (Dextrose group) or 0.5% lidocaine (Lidocaine group) for pain control. Patient data, namely sex, age, and pain score (numerical rating scale, NRS), were collected before PIT. NRS scores, side effects, and satisfaction grades (excellent, good, fair, or poor) were collected one week after each of the four PIT sessions and two weeks after the last PIT. Results: Overall, 112 subjects were enrolled: 89 patients in the Dextrose group and 23 in the Lidocaine group. Because the number of patients in the Lidocaine group was too small for statistical analysis, only the trend in the Lidocaine group was observed for each disease. There were no significant side effects in either group except for a few bruises at the injection site. NRS scores in the Dextrose group were reduced significantly for all diagnoses except CRPS; the Lidocaine group showed a trend of pain reduction only in PHN. The Dextrose group, except for CRPS, also showed increased satisfaction two weeks after the final PIT. Conclusion: These results suggest that PIT with buffered 5% dextrose may have a good effect on neuropathic pain, without significant side effects, except in patients with CRPS. This may offer practitioners a new tool in their quest to help patients with neuropathic pain.

A Study on the Revitalization of Tourism Industry through Big Data Analysis (한국관광 실태조사 빅 데이터 분석을 통한 관광산업 활성화 방안 연구)

  • Lee, Jungmi;Liu, Meina;Lim, Gyoo Gun
    • Journal of Intelligence and Information Systems / v.24 no.2 / pp.149-169 / 2018
  • Korea is currently accumulating a large amount of data in public institutions based on the open public data policy and the "Government 3.0" initiative. A great deal of data is being accumulated in the tourism field in particular. However, academic discussion utilizing tourism data is still limited. Moreover, the openness of data on restaurants, hotels, and online tourism information, and the use of SNS big data in tourism, are still limited, so utilization through tourism big data analysis remains low. In this paper, we analyzed the factors influencing foreign tourists' satisfaction with Korea from numerical data, using data mining techniques and R programming. We sought ways to revitalize the tourism industry by analyzing about 36,000 records of the "Survey on the actual situation of foreign tourists" conducted from 2013 to 2015 by the Korea Culture & Tourism Research Institute. To do this, we analyzed the factors with a strong influence on foreign tourists' 'satisfaction', 'revisit intention', and 'recommendation' variables, and then analyzed the practical influence of those factors. As a first step, we integrated the foreign tourist survey data from 2013 to 2015 stored in the tourism information system, eliminated variables inconsistent with the research purpose, and modified some variables to improve the accuracy of the analysis. We then analyzed the factors affecting the dependent variables using the data mining methods of SPSS IBM Modeler 16.0: decision trees (C5.0, CART, CHAID, QUEST), an artificial neural network, and logistic regression. The seven variables with the greatest effect on each dependent variable were derived. The seven major variables influencing 'overall satisfaction' were sightseeing spot attraction, food satisfaction, accommodation satisfaction, traffic satisfaction, guide service satisfaction, number of visited places, and country; food satisfaction and sightseeing spot attraction had the greatest influence. The seven variables with the greatest influence on 'revisit intention' were country, travel motivation, activity, food satisfaction, best activity, guide service satisfaction, and sightseeing spot attraction; the most influential were food satisfaction and the Korean Wave travel motivation. Lastly, the seven variables with the greatest influence on 'recommendation intention' were country, sightseeing spot attraction, number of visited places, food satisfaction, activity, tour guide service satisfaction, and cost; the most influential were country, sightseeing spot attraction, and food satisfaction. In addition, to understand the influence of each independent variable more deeply, we used R programming to quantify it. Food satisfaction and sightseeing spot attraction had greater effects on overall satisfaction than the other influential variables. For revisit intention, the Korean Wave travel motive had a higher β value than the other variables; a policy that enhances Korean Wave tourist attractions will therefore be needed to lead to substantial revisits. Lastly, for recommendation, as with satisfaction, sightseeing spot attraction and food satisfaction had higher β values than the other variables. From this analysis, we found that 'food satisfaction' and 'sightseeing spot attraction' were common factors influencing all three dependent variables ('overall satisfaction', 'revisit intention', and 'recommendation'), and that they significantly affected satisfaction with travel in Korea. The purpose of this study is to examine how to attract more foreign tourists to Korea through big data analysis. The results are expected to serve as basic data for analyzing tourism data and establishing effective tourism policy, and as material for an activation plan that can contribute to the development of tourism in Korea.
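  The study's variable-importance ranking was produced in SPSS IBM Modeler and R; as a rough analogue only, the sketch below ranks survey variables with a scikit-learn decision tree. All column names and data are synthetic stand-ins, not the survey's actual records.

```python
"""Sketch: ranking survey variables by influence on a satisfaction
outcome with a decision tree, loosely analogous to the paper's
SPSS Modeler / R workflow. Columns and data are invented."""
import numpy as np
import pandas as pd
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 1000
# Hypothetical stand-ins for a few of the survey variables.
df = pd.DataFrame({
    "food_satisfaction": rng.integers(1, 6, n),
    "sightseeing_attraction": rng.integers(1, 6, n),
    "accommodation_satisfaction": rng.integers(1, 6, n),
    "traffic_satisfaction": rng.integers(1, 6, n),
    "guide_service_satisfaction": rng.integers(1, 6, n),
})
# Synthetic target: overall satisfaction driven mainly by food and sights.
score = 0.5 * df["food_satisfaction"] + 0.4 * df["sightseeing_attraction"]
df["satisfied"] = (score + rng.normal(0, 0.5, n) > 2.7).astype(int)

X, y = df.drop(columns="satisfied"), df["satisfied"]
tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, y)

# Impurity-based importance plays the role of the paper's variable ranking.
for name, imp in sorted(zip(X.columns, tree.feature_importances_),
                        key=lambda t: -t[1]):
    print(f"{name:28s} {imp:.3f}")
```

  On such data the tree recovers food satisfaction and sightseeing attraction as the top features, mirroring the pattern the abstract reports.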

An Analysis of Model Bias Tendency in Forecast for the Interaction between Mid-latitude Trough and Movement Speed of Typhoon Sanba (중위도 기압골과 태풍 산바의 이동속도와의 상호작용에 대한 예측에서 모델 바이어스 경향분석)

  • Choi, Ki-Seon;Wongsaming, Prapaporn;Park, Sangwook;Cha, Yu-Mi;Lee, Woojeong;Oh, Imyong;Lee, Jae-Shin;Jeong, Sang-Boo;Kim, Dong-Jin;Chang, Ki-Ho;Kim, Jiyoung;Yoon, Wang-Sun;Lee, Jong-Ho
    • Journal of the Korean earth science society / v.34 no.4 / pp.303-312 / 2013
  • Typhoon Sanba was selected to describe the bias tendency of the Korea Meteorological Administration (KMA) Global Data Assimilation Prediction System (GDAPS) in forecasting the interaction between a mid-latitude trough and the movement speed of a typhoon. We used the KMA GDAPS analyses and forecasts initiated at 00 UTC 15 September 2012 from the historical typhoon record, via the Typhoon Analysis and Prediction System (TAPS) and the Combined Meteorological Information System-3 (COMIS-3). Sea level pressure fields illustrated the development of low-level mid-latitude cyclogenesis in relation to the jet maximum at 500 hPa. The study found that after Sanba entered the mid-latitude domain, its movement speed was forecast to accelerate. Typically, Sanba interacted with the mid-latitude westerlies at the front of the mid-latitude trough; this occurred when Sanba was nearing recurvature at 00 and 06 UTC 17 September. The KMA GDAPS sea level pressure forecasts produced a low-level mid-latitude cyclone that was weaker than the analyzed field. As a result, the mid-latitude circulations affecting Sanba's movement speed were slower than in the KMA GDAPS analysis, which was traced to a weak mid-tropospheric jet maximum at 500 hPa. In conclusion, the KMA GDAPS forecast tends to show a bias toward slow movement speed when Sanba interacted with the mid-latitude trough.

Hierarchical Overlapping Clustering to Detect Complex Concepts (중복을 허용한 계층적 클러스터링에 의한 복합 개념 탐지 방법)

  • Hong, Su-Jeong;Choi, Joong-Min
    • Journal of Intelligence and Information Systems / v.17 no.1 / pp.111-125 / 2011
  • Clustering is a process of grouping similar or related documents into clusters and assigning a meaningful concept to each cluster. Clustering thereby facilitates fast and accurate search for relevant documents by narrowing the search range to the collections of documents belonging to related clusters. Effective clustering requires techniques for identifying similar documents and grouping them into a cluster, and for discovering the concept most relevant to each cluster. One problem that often appears in this context is the detection of a complex concept that overlaps several simple concepts at the same hierarchical level. Previous clustering methods were unable to identify and represent a complex concept that belongs to several different clusters at the same level of the concept hierarchy, and could not validate the semantic hierarchical relationship between a complex concept and each of the simple concepts. To solve these problems, this paper proposes a new clustering method that identifies and represents complex concepts efficiently. We developed the Hierarchical Overlapping Clustering (HOC) algorithm, which modifies traditional agglomerative hierarchical clustering to allow overlapping clusters at the same level of the concept hierarchy. The HOC algorithm represents the clustering result not by a tree but by a lattice, in order to detect complex concepts. We developed a system that employs the HOC algorithm for complex concept detection. The system operates in three phases: 1) preprocessing of documents, 2) clustering using the HOC algorithm, and 3) validation of the semantic hierarchical relationships among the concepts in the lattice obtained as a result of clustering. The preprocessing phase represents each document as an x-y coordinate in a two-dimensional space based on the weights of the terms appearing in the documents. First, it applies stopword removal and stemming to extract index terms. Each index term is then assigned a TF-IDF weight, and the x-y coordinate of each document is determined by combining the TF-IDF values of its terms. The clustering phase uses the HOC algorithm, in which the similarity between documents is calculated as the Euclidean distance. Initially, a cluster is generated for each document by grouping the documents closest to it. Then the distance between every pair of clusters is measured and the closest clusters are merged into a new cluster; this process repeats until the root cluster is generated. In the validation phase, feature selection is applied to check whether the cluster concepts built by the HOC algorithm have meaningful hierarchical relationships. Feature selection extracts key features from a document by identifying and weighting its important and representative terms. To select key features correctly, a method is needed to determine how much each term contributes to the class of the document. Among several methods achieving this goal, this paper adopts the χ² statistic, which measures the degree of dependency of a term t on a class c and represents the relationship between t and c as a numerical value. To demonstrate the effectiveness of the HOC algorithm, a series of performance evaluations was carried out using the well-known Reuters-21578 news collection. The results showed that the HOC algorithm contributes greatly to detecting and producing complex concepts by generating the concept hierarchy as a lattice structure.
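  The χ² dependency score used in the validation phase has a standard 2×2 contingency form in the text-categorization literature; the sketch below computes it, with toy counts that are illustrative rather than taken from the paper.

```python
"""Sketch: chi-square term-class dependency score, standard 2x2 form."""

def chi_square(A, B, C, D):
    """A: docs in class c containing term t, B: docs outside c containing t,
    C: docs in c lacking t, D: docs outside c lacking t."""
    N = A + B + C + D
    denom = (A + C) * (B + D) * (A + B) * (C + D)
    return N * (A * D - C * B) ** 2 / denom if denom else 0.0

# Toy counts: a term concentrated inside the class scores high; a term
# appearing at the same rate inside and outside the class scores zero.
print(chi_square(A=40, B=5, C=10, D=445))   # strongly class-dependent
print(chi_square(A=10, B=90, C=40, D=360))  # independent -> 0.0
```

  A high score marks a term as representative of the class, which is how the validation phase judges whether a cluster's concept is meaningful.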

An Electrical Conductivity Reconstruction for Evaluating Bone Mineral Density : Simulation (골 밀도 평가를 위한 뼈의 전기 전도도 재구성: 시뮬레이션)

  • 최민주;김민찬;강관석;최흥호
    • Journal of Biomedical Engineering Research / v.25 no.4 / pp.261-268 / 2004
  • Osteoporosis is a clinical condition in which the amount of bone tissue is reduced and the likelihood of fracture is increased. It is known that the electrical properties of bone are related to its density; in particular, the electrical resistance of bone decreases as bone loss increases. This implies that the electrical properties of bone may be a useful parameter for diagnosing osteoporosis, provided they can be readily measured. This study attempted to evaluate the electrical conductivity of bone using electrical impedance tomography (EIT). It may not be easy in general to obtain an EIT image of bone, owing to the large difference (about two orders of magnitude) in electrical properties between bone and the surrounding soft tissue. In the present study, we took an adaptive mesh regeneration technique originally developed for the detection of two-phase boundaries and modified it to reconstruct the electrical conductivity inside a boundary whose geometry is given. Numerical simulation was carried out for a tibia phantom: a circular cylindrical phantom (radius 40 mm) containing an ellipsoidal homogeneous tibia bone (semi-axes of 17 mm and 15 mm) surrounded by soft tissue, with the bone located 15 mm above the center of the circular cross section. The electrical conductivity of the soft tissue was set to 4 mS/cm, and that of the bone was varied from 0.01 to 1 mS/cm. The simulation included measurement errors in order to examine their effects. The results showed that, if the measurement error was kept below 5%, the reconstructed electrical conductivity of the bone was within 10% error. The accuracy increased with the electrical conductivity of the bone, as expected, indicating that the present technique provides more accurate information for osteoporotic bones. It should be noted that the simulation is based on a simple two-phase image of the bone and the surrounding soft tissue, with the anatomical information provided. Nevertheless, the study indicates that the EIT technique may be used as a new means of detecting the bone loss that leads to osteoporotic fractures.

Development of a Dose Calibration Program for Various Dosimetry Protocols in High Energy Photon Beams (고 에너지 광자선의 표준측정법에 대한 선량 교정 프로그램 개발)

  • Shin Dong Oh;Park Sung Yong;Ji Young Hoon;Lee Chang Geon;Suh Tae Suk;Kwon Soo IL;Ahn Hee Kyung;Kang Jin Oh;Hong Seong Eon
    • Radiation Oncology Journal / v.20 no.4 / pp.381-390 / 2002
  • Purpose: To develop dose calibration programs for the IAEA TRS-277 and AAPM TG-21 protocols, based on the air kerma calibration factor (or the cavity-gas calibration factor), and for the IAEA TRS-398 and AAPM TG-51 protocols, based on the absorbed-dose-to-water calibration factor, so as to avoid the errors associated with these calculation procedures. Materials and Methods: Currently, the most widely used dosimetry protocols for high energy photon beams are based on the air kerma calibration factor, following the IAEA TRS-277 and the AAPM TG-21. However, these have a somewhat complex formalism, and the improvement of accuracy is limited by uncertainties in the physical quantities. Recently, the IAEA and the AAPM published protocols based on the absorbed-dose-to-water calibration factor: the IAEA TRS-398 and the AAPM TG-51. The formalism and physical parameters of each protocol were strictly applied in these four dose calibration programs. The tables and graphs of physical data and the ion chamber information were digitized and incorporated into a database. The programs were developed in Visual C++ to be user friendly in a Windows environment, following the recommendations of each protocol. Results: The dose calibration programs developed for the four protocols accept information about the dosimetry system, the beam quality characteristics, the measurement conditions, and the dosimetry results, minimizing inter-user variations and errors during the calculation procedure. It was also possible to compare the absorbed-dose-to-water data of the four protocols at a single reference point. Conclusion: Since the programs hold the physical parameter tables, graphs, and ion chamber information in numerical, database form, the errors associated with the calculation procedures and with different users could be eliminated. It was possible to analyze and compare the major differences between the dosimetry protocols, since the programs were designed to be user friendly and to accurately calculate the correction factors and absorbed dose. Accurate dose calculations in high energy photon beams can be expected when users select and perform the appropriate dosimetry protocol.
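  As a concrete illustration of what such a program computes, here is a minimal sketch of the TG-51 absorbed-dose-to-water formalism, one of the four protocols; the numerical inputs are invented for illustration, and the programs' actual interface and databases are not reproduced.

```python
"""Sketch: core of the AAPM TG-51 absorbed-dose-to-water formalism,
D_w^Q = M * k_Q * N_D,w^Co-60. Input values below are illustrative."""

T_REF_C, P_REF_KPA = 22.0, 101.33  # TG-51 reference conditions

def p_tp(temp_c, press_kpa):
    """Temperature-pressure correction for a vented ion chamber."""
    return ((273.2 + temp_c) / (273.2 + T_REF_C)) * (P_REF_KPA / press_kpa)

def dose_to_water(m_raw, n_dw_co60, k_q, temp_c, press_kpa,
                  p_ion=1.0, p_pol=1.0, p_elec=1.0):
    """Fully corrected reading M = P_ion * P_TP * P_elec * P_pol * M_raw,
    then D_w^Q = M * k_Q * N_D,w^Co-60."""
    m = p_ion * p_tp(temp_c, press_kpa) * p_elec * p_pol * m_raw
    return m * k_q * n_dw_co60

# Illustrative numbers only: a 6 MV beam with an assumed k_Q of 0.992.
print(dose_to_water(m_raw=20.05e-9,    # C, raw collected charge
                    n_dw_co60=5.08e7,  # Gy/C, calibration factor
                    k_q=0.992, temp_c=21.0, press_kpa=100.8,
                    p_ion=1.003, p_pol=1.001))  # ~1.02 Gy
```

  Encoding the correction factors and parameter tables once, as the programs do, is what removes the inter-user variation the abstract describes.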

Evaluation of Setup Uncertainty on the CTV Dose and Setup Margin Using Monte Carlo Simulation (몬테칼로 전산모사를 이용한 셋업오차가 임상표적체적에 전달되는 선량과 셋업마진에 대하여 미치는 영향 평가)

  • Cho, Il-Sung;Kwark, Jung-Won;Cho, Byung-Chul;Kim, Jong-Hoon;Ahn, Seung-Do;Park, Sung-Ho
    • Progress in Medical Physics / v.23 no.2 / pp.81-90 / 2012
  • The effect of setup uncertainties on the CTV dose, and the correlation between setup uncertainties and the setup margin, were evaluated by Monte Carlo based numerical simulation. Patient-specific information from an IMRT treatment plan for rectal cancer designed on the Varian Eclipse planning system, including the planned dose distribution and the tumor volume of a rectal cancer patient, was used in the Monte Carlo simulation program. The simulation program was developed for this study on Linux using open source packages, GNU C++ and the ROOT data analysis framework. All misalignments of the patient setup were assumed to follow the central limit theorem, so systematic and random errors were generated according to Gaussian statistics with a given standard deviation as the simulation input parameter. After the setup error simulations, the change of dose in the CTV was analyzed from the simulation results. To verify the conventional margin recipe, the correlation between setup error and setup margin was compared with the margin formula developed for three-dimensional conformal radiation therapy. The simulation was performed a total of 2,000 times for each simulation input of systematic and random errors independently, with the standard deviation used to generate patient setup errors varied from 1 mm to 10 mm in 1 mm steps. For systematic errors, the minimum CTV dose $D_{min}^{syst}$ decreased from 100.4% to 72.50% and the mean dose $\bar{D}_{syst}$ decreased from 100.45% to 97.88%, while the standard deviation of the dose distribution in the CTV increased from 0.02% to 3.33%. Random errors likewise reduced the mean and minimum dose to the CTV: the minimum CTV dose $D_{min}^{rand}$ was reduced from 100.45% to 94.80% and the mean dose $\bar{D}_{rand}$ decreased from 100.46% to 97.87%. As with systematic errors, the standard deviation of the CTV dose $\Delta D_{rand}$ increased, from 0.01% to 0.63%. After calculating the margin size for each systematic and random error, the "population ratio" was introduced and applied to verify the margin recipe. The conventional margin formula was found to satisfy the margin objective for IMRT treatment of rectal cancer. The developed Monte Carlo based simulation program should be useful for studying patient setup error and dose coverage of the CTV under varying margin sizes and setup errors.
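  A minimal one-dimensional analogue of the described simulation is sketched below: Gaussian systematic and random shifts are applied to an idealized dose profile and the CTV mean and minimum doses are tallied. The profile shape, fraction number, and the margin recipe 2.5Σ + 0.7σ (the common 3D-CRT formula, assumed here to be the one the abstract refers to) are all assumptions, not the study's actual plan data.

```python
"""Sketch: 1-D Monte Carlo of setup errors against a dose profile."""
import numpy as np

rng = np.random.default_rng(42)

def dose_profile(x, edge=30.0, penumbra=3.0):
    # Idealized 1-D dose: ~100% inside +-edge mm, sigmoid fall-off.
    return 100.0 / (1.0 + np.exp((np.abs(x) - edge) / penumbra))

ctv = np.linspace(-20, 20, 81)        # CTV spans +-20 mm of the field
n_trials, n_fractions = 2000, 25

def simulate(sigma_syst, sigma_rand):
    # One systematic shift per trial, one random shift per fraction;
    # per-point dose is averaged over fractions, then summarized.
    d_mean = np.empty(n_trials)
    d_min = np.empty(n_trials)
    for i in range(n_trials):
        shift_s = rng.normal(0.0, sigma_syst)
        shifts_r = rng.normal(0.0, sigma_rand, n_fractions)
        dose = dose_profile(ctv[None, :] + shift_s
                            + shifts_r[:, None]).mean(axis=0)
        d_mean[i] = dose.mean()
        d_min[i] = dose.min()
    return d_mean.mean(), d_min.mean()

def margin(Sigma, sigma):
    # Widely used 3D-CRT recipe: 2.5*Sigma + 0.7*sigma (mm).
    return 2.5 * Sigma + 0.7 * sigma

for s in (1.0, 5.0, 10.0):
    mean_d, min_d = simulate(s, s)
    print(f"SD {s:4.1f} mm: mean CTV dose {mean_d:6.2f}%, "
          f"min {min_d:6.2f}%, margin {margin(s, s):4.1f} mm")
```

  As in the study, the minimum CTV dose degrades much faster than the mean as the error standard deviation grows, which is exactly what a margin recipe must compensate for.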