• Title/Summary/Keyword: Evaluation Models


Introduction and Evaluation of the Production Method for Chlorophyll-a Using Merging of GOCI-II and Polar Orbit Satellite Data (GOCI-II 및 극궤도 위성 자료를 병합한 Chlorophyll-a 산출물 생산방법 소개 및 활용 가능성 평가)

  • Hye-Kyeong Shin;Jae Yeop Kwon;Pyeong Joong Kim;Tae-Ho Kim
    • Korean Journal of Remote Sensing / v.39 no.6_1 / pp.1255-1272 / 2023
  • Satellite-based chlorophyll-a concentration, produced as a long-term time series, is crucial for global climate change research, and producing gap-free data by merging time-composited or multi-satellite data is essential. However, studies on satellite-based chlorophyll-a concentration in the waters around the Korean Peninsula have mainly focused on evaluating seasonal characteristics or proposing algorithms suited to specific research areas using a single ocean color sensor. In this study, a merged dataset of remote sensing reflectance from the geostationary sensor GOCI-II and polar-orbiting sensors (MODIS, VIIRS, OLCI) was used to achieve high spatial coverage of chlorophyll-a concentration in the waters around the Korean Peninsula. The spatial coverage of our results increased by approximately 30% compared to polar-orbiting sensor data, effectively compensating for gaps caused by clouds. Additionally, we aimed to quantitatively assess accuracy through comparison with the global chlorophyll-a composite data provided by the Ocean Colour Climate Change Initiative (OC-CCI) and GlobColour, along with in-situ observation data. Because of the limited number of in-situ observations, however, we could not obtain statistically significant results, although we did observe a tendency toward underestimation relative to the global data. Furthermore, to evaluate practical applications in response to marine disasters such as red tides, we qualitatively compared our results with a red tide event in the East Sea in 2013; the results resembled OC-CCI more than standalone geostationary sensor results. We plan to use the generated data in future research on artificial intelligence models for prediction and anomaly analysis, and we anticipate the results will be useful for monitoring chlorophyll-a events in the coastal waters around Korea.
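As a rough illustration of the gap-filling idea described in this abstract, the sketch below merges several same-grid chlorophyll-a fields by averaging whichever sensors have a valid (cloud-free) pixel at each location. The array names and the plain unweighted mean are illustrative assumptions, not the paper's actual merging scheme.

```python
import numpy as np

def merge_chla(fields):
    """Merge same-grid chlorophyll-a fields (NaN = cloud / no data)
    by averaging whichever sensors have a valid pixel."""
    stack = np.stack(fields)                # (n_sensors, H, W)
    return np.nanmean(stack, axis=0)        # NaN only where no sensor saw the pixel

# Toy 2x2 grids: GOCI-II fills pixels the polar orbiters missed, and vice versa.
goci2 = np.array([[0.5, np.nan], [0.8, 1.2]])
modis = np.array([[0.6, np.nan], [np.nan, 1.0]])
viirs = np.array([[np.nan, 0.9], [0.7, np.nan]])
print(merge_chla([goci2, modis, viirs]))    # no gaps remain in this toy case
```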

A Study on Knowledge Entity Extraction Method for Individual Stocks Based on Neural Tensor Network (뉴럴 텐서 네트워크 기반 주식 개별종목 지식개체명 추출 방법에 관한 연구)

  • Yang, Yunseok;Lee, Hyun Jun;Oh, Kyong Joo
    • Journal of Intelligence and Information Systems / v.25 no.2 / pp.25-38 / 2019
  • Selecting high-quality information that meets users' interests and needs from the flood of content is becoming ever more important as information generation accelerates. Efforts are being made to better reflect user intent in search results, rather than treating an information request as a simple string. Large IT companies such as Google and Microsoft are also focusing on knowledge-based technologies, including search engines, that provide users with satisfaction and convenience. Finance, in particular, is a field where text data analysis is expected to be useful and promising, because new information is constantly generated and the earlier the information, the more valuable it is. Automatic knowledge extraction can be effective in areas such as the financial sector, where the flow of information is vast and new information continually emerges. However, automatic knowledge extraction faces several practical difficulties. First, it is difficult to build corpora from different fields with the same algorithm and to extract good-quality triples. Second, manually labeling text data becomes harder as the scope of knowledge grows and patterns are constantly updated. Third, performance evaluation is difficult due to the characteristics of unsupervised learning. Finally, defining the problem of automatic knowledge extraction is not easy because of the ambiguous conceptual characteristics of knowledge. To overcome these limits and improve the semantic performance of stock-related information search, this study extracts knowledge entities using a neural tensor network and evaluates their performance. Unlike prior work, the purpose of this study is to extract knowledge entities related to individual stock items. Various but relatively simple data processing methods are applied in the presented model to solve the problems of previous research and to enhance the model's effectiveness. From these processes, this study has three significances. First, it presents a practical and simple automatic knowledge extraction method. Second, it shows that performance evaluation is possible through a simple problem definition. Finally, it increases the expressiveness of the knowledge by generating input data on a sentence basis without complex morphological analysis. The results of the empirical analysis and an objective performance evaluation method are also presented. For the empirical study confirming the usefulness of the presented model, experts' reports on 30 individual stocks, the top 30 items by publication frequency from May 30, 2017 to May 21, 2018, are used. The total number of reports is 5,600; 3,074 reports, about 55% of the total, are designated as the training set, and the remaining 45% as the testing set. Before constructing the model, all training-set reports are classified by stock, and their entities are extracted using the KKMA named entity recognition tool. For each stock, the top 100 entities by appearance frequency are selected and vectorized using one-hot encoding. Then, using the neural tensor network, one score function per stock is trained.
Thus, when a new entity from the testing set appears, its score can be calculated with every score function, and the stock whose function yields the highest score is predicted as the item related to that entity. To evaluate the presented model, we confirm its prediction power, and whether the score functions are well constructed, by calculating the hit ratio over all testing-set reports. The presented model shows 69.3% hit accuracy on the testing set of 2,526 reports; this hit ratio is meaningfully high despite several constraints on the research. Looking at per-stock prediction performance, only three stocks, LG ELECTRONICS, KiaMtr, and Mando, show performance far below average; this may be due to interference from other similar items and the generation of new knowledge. In this paper, we propose a methodology for finding the key entities, or combinations of them, needed to search for related information in accordance with the user's investment intention. Graph data are generated using only the named entity recognition tool and applied to the neural tensor network without a field-specific learning corpus or word vectors. The empirical test confirms the effectiveness of the presented model as described above. Some limits and areas for improvement remain, however; most notably, the markedly poor performance on a few stocks shows the need for further research. Finally, through the empirical study, we confirmed that the learning method presented here can be used to semantically match new text information with the related stocks.
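The scoring step can be pictured with a minimal neural-tensor-network-style sketch. The pairing of each entity with a learned per-stock embedding, and all dimensions below, are illustrative assumptions, since the abstract does not spell out the exact network inputs.

```python
import torch
import torch.nn as nn

class NTNScore(nn.Module):
    """One NTN-style score function (one per stock):
    score(e) = u^T tanh(e^T W[1:k] s + V [e; s] + b),
    where s is a learned embedding for that stock."""
    def __init__(self, dim, k=4):
        super().__init__()
        self.W = nn.Parameter(torch.randn(k, dim, dim) * 0.01)  # bilinear tensor slices
        self.V = nn.Linear(2 * dim, k)                          # standard linear term
        self.u = nn.Linear(k, 1, bias=False)                    # output layer
        self.s = nn.Parameter(torch.randn(dim) * 0.01)          # stock embedding

    def forward(self, e):                        # e: (batch, dim) one-hot entity vectors
        bilinear = torch.einsum('bd,kdj,j->bk', e, self.W, self.s)
        linear = self.V(torch.cat([e, self.s.expand_as(e)], dim=1))
        return self.u(torch.tanh(bilinear + linear)).squeeze(-1)

# Prediction: score a new entity with every stock's function and take the argmax.
dim, n_stocks = 100, 30
funcs = [NTNScore(dim) for _ in range(n_stocks)]
entity = torch.eye(dim)[[7]]                     # one-hot vector for entity index 7
scores = torch.stack([f(entity) for f in funcs])
print('predicted stock index:', scores.argmax().item())
```

Training would push the scores of entities drawn from a stock's own reports above the others; the hit ratio in the abstract then measures how often this argmax matches the report's actual stock.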

Assessment Study on Educational Programs for the Gifted Students in Mathematics (영재학급에서의 수학영재프로그램 평가에 관한 연구)

  • Kim, Jung-Hyun;Whang, Woo-Hyung
    • Communications of Mathematical Education / v.24 no.1 / pp.235-257 / 2010
  • The contemporary belief is that the creatively talented can create new knowledge and lead national development, so many countries around the world take an interest in Gifted Education. The U.S.A., England, Russia, Germany, Australia, Israel, and Singapore enforce laws on Gifted Education to offer Gifted Classes, and the Korean government likewise created a Gifted Improvement Act in January 2000, with its Enforcement Ordinance announced in April 2002; this initiative made Gifted Education possible. The Enforcement Ordinance was revised in October 2008, mainly to expand the opportunity for Gifted Education to students with special education needs. One such measure offers Gifted Education to many more of the Gifted by establishing Special Classes at each school. It is also important that the quality of Gifted Education be maintained alongside the expansion of opportunity. Social opinion holds that it would be reckless merely to expand the opportunity for Gifted Education; therefore, assessment of Teaching and Learning Programs for the Gifted is indispensable. In this study, 3 middle schools were selected for their Teaching and Learning Programs in mathematics. Each school's 1st grade program was reviewed and analyzed through comparative tables of Regular and Gifted Education Programs. The content taught was also reviewed, and the programs were evaluated against assessment standards revised and modified from the present teaching and learning programs in mathematics. The following research issues were set up to assess the formation of content areas and the appropriateness of Teaching and Learning Programs for the Gifted in mathematics. A. Does the formation of Special Class content areas comply with the 7th national curriculum? 1. Which content areas of the regular curriculum are applied in the programs? 2. Between Enrichment and Selection in the Curriculum for the Gifted, which is applied in the programs? 3. Are the content areas organized and delivered properly? B. Are the Programs for the Gifted appropriate? 1. Are the educational goals of the Programs aligned with those of Gifted Education in mathematics? 2. Does the content of each program reflect the characteristics of mathematically Gifted students and express their mathematical talents? 3. Are the Teaching and Learning models and methods diverse enough to express their talents? 4. Can the assessment of each program reflect the learning goals and content, and enhance Gifted students' thinking ability? The conclusions are as follows: First, the best content areas for teaching the mathematically Gifted were found to be Numeration, Arithmetic, Geometry, Measurement, Probability, Statistics, and Letters and Expressions. The Enrichment and Selection areas within the Curriculum for the Gifted were offered in many ways so that Giftedness could be fully enhanced. Second, the educational goals of the Teaching and Learning Programs for mathematically Gifted students accorded with the directions of mathematical education and its philosophy, and reflected success in reaching the curriculum's goals of improving creativity, thinking ability, and problem-solving ability. To accomplish these goals, visualization, symbolization, phasing, and exploring strategies were used effectively.
Many different lecture types, including cooperative learning and discovery learning, were applied to accomplish the goals of the Teaching and Learning models. For Teaching and Learning activities, various strategies and models were used to express the students' talents, including experiments, exploration, application, estimation, guessing, discussion (conjecture and refutation), reconsideration, and so on. There was no mention to the students of evaluation or paper exams. While the program activities were performed, educational goals and assessment methods were reflected; that is, products, performance assessment, and portfolios were mainly used rather than paper assessment alone.

A Study of Anomaly Detection for ICT Infrastructure using Conditional Multimodal Autoencoder (ICT 인프라 이상탐지를 위한 조건부 멀티모달 오토인코더에 관한 연구)

  • Shin, Byungjin;Lee, Jonghoon;Han, Sangjin;Park, Choong-Shik
    • Journal of Intelligence and Information Systems / v.27 no.3 / pp.57-73 / 2021
  • Maintenance and failure prevention through anomaly detection of ICT infrastructure are becoming important. System monitoring data are multidimensional time series data, which are difficult to handle because the characteristics of multidimensional data and of time series data must be considered together. With multidimensional data, correlations between variables should be considered, and existing probability-based, linear, and distance-based methods degrade due to the curse of dimensionality. In addition, time series data are typically preprocessed with sliding windows and time series decomposition for autocorrelation analysis; these techniques increase the dimensionality of the data, so they need to be supplemented. Anomaly detection is an old research field: statistical methods and regression analysis were used in the early days, and there are now active efforts to apply machine learning and artificial neural network technology. Statistically based methods are difficult to apply when data are non-homogeneous and do not detect local outliers well. Regression-based methods learn a regression formula using parametric statistics and detect anomalies by comparing predicted and actual values; their performance drops when the model is not solid or the data contain noise or outliers, and they carry the restriction that training data should be free of noise and outliers. The autoencoder, an artificial neural network, is trained to reproduce its input as closely as possible. It has many advantages over existing probability and linear models, cluster analysis, and supervised learning: it can be applied to data that do not satisfy probability-distribution or linearity assumptions, and it can learn in an unsupervised manner, without labeled training data. However, it remains limited in identifying local outliers in multidimensional data, and the dimensionality of the data is greatly increased by the characteristics of time series data. In this study, we propose a CMAE (Conditional Multimodal Autoencoder) that enhances anomaly detection performance by considering local outliers and time series characteristics. First, we applied a Multimodal Autoencoder (MAE) to improve the identification of local outliers in multidimensional data. Multimodal models are commonly used to learn different types of inputs, such as voice and images; the different modalities share the autoencoder's bottleneck and thereby learn their correlations. In addition, a CAE (Conditional Autoencoder) was used to learn the characteristics of the time series effectively without increasing the dimensionality of the data. Conditional inputs are generally categorical variables, but in this study time was used as the condition in order to learn periodicity. The proposed CMAE model was verified by comparison with a Unimodal Autoencoder (UAE) and a Multimodal Autoencoder (MAE). Reconstruction performance over 41 variables was confirmed for the proposed and comparison models. Reconstruction performance differs by variable: the Memory, Disk, and Network modalities are reconstructed well, with small loss values, in all three autoencoder models.
The Process modality showed no significant difference across the three models, and the CPU modality performed best in CMAE. ROC curves were prepared to evaluate anomaly detection performance in the proposed and comparison models, and AUC, accuracy, precision, recall, and F1-score were compared. On all indicators, performance ranked in the order CMAE, MAE, UAE. In particular, the recall of CMAE was 0.9828, confirming that it detects almost all anomalies. Model accuracy also improved, to 87.12%, and the F1-score was 0.8883, which is considered suitable for anomaly detection. In practical terms, the proposed model has a further advantage beyond the performance improvement: techniques such as time series decomposition and sliding windows add procedures that must be managed, and the resulting dimensional increase can slow inference, whereas the proposed model is easy to apply to practical tasks in terms of inference speed and model management.
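A minimal sketch of the conditional multimodal idea follows, assuming two modalities and a time-of-day condition encoded as sine/cosine; layer sizes and the exact conditioning scheme are assumptions for illustration, not the paper's architecture.

```python
import torch
import torch.nn as nn

class CMAE(nn.Module):
    """Minimal conditional multimodal autoencoder: one encoder/decoder
    per modality, a shared bottleneck, and a time condition appended
    to both encoder and decoder inputs."""
    def __init__(self, modal_dims, cond_dim=2, latent=8):
        super().__init__()
        self.encoders = nn.ModuleList(
            nn.Sequential(nn.Linear(d + cond_dim, 16), nn.ReLU()) for d in modal_dims)
        self.to_latent = nn.Linear(16 * len(modal_dims), latent)   # shared bottleneck
        self.decoders = nn.ModuleList(
            nn.Sequential(nn.Linear(latent + cond_dim, 16), nn.ReLU(), nn.Linear(16, d))
            for d in modal_dims)

    def forward(self, modals, cond):
        hs = [enc(torch.cat([x, cond], dim=1)) for enc, x in zip(self.encoders, modals)]
        z = self.to_latent(torch.cat(hs, dim=1))                   # modalities mix here
        return [dec(torch.cat([z, cond], dim=1)) for dec in self.decoders]

# Toy usage: CPU and Memory modalities; hour of day as a sin/cos condition.
cpu, mem = torch.randn(32, 5), torch.randn(32, 4)
hour = torch.rand(32, 1) * 24
cond = torch.cat([torch.sin(2 * torch.pi * hour / 24),
                  torch.cos(2 * torch.pi * hour / 24)], dim=1)
recon = CMAE([5, 4])([cpu, mem], cond)
loss = sum(nn.functional.mse_loss(r, x) for r, x in zip(recon, [cpu, mem]))
# At inference, an unusually large reconstruction error flags an anomaly.
```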

Design and Implementation of MongoDB-based Unstructured Log Processing System over Cloud Computing Environment (클라우드 환경에서 MongoDB 기반의 비정형 로그 처리 시스템 설계 및 구현)

  • Kim, Myoungjin;Han, Seungho;Cui, Yun;Lee, Hanku
    • Journal of Internet Computing and Services / v.14 no.6 / pp.71-84 / 2013
  • Log data, which record the multitude of information created when operating computer systems, are utilized in many processes, from computer system inspection and process optimization to customized user services. In this paper, we propose a MongoDB-based unstructured log processing system in a cloud environment for processing the massive log data of banks. Most of the log data generated during banking operations come from handling clients' business. Therefore, in order to gather, store, categorize, and analyze the log data generated while processing a client's business, a separate log data processing system needs to be established. However, realizing flexible storage expansion for massive amounts of unstructured log data, and executing the considerable number of functions needed to categorize and analyze them, is difficult in existing computing environments. Thus, in this study, we use cloud computing technology to realize a cloud-based log data processing system for unstructured log data that are difficult to process with the existing computing infrastructure's analysis tools and management systems. The proposed system uses an IaaS (Infrastructure as a Service) cloud environment to provide flexible expansion of computing resources, such as storage space and memory, under conditions such as extended storage or a rapid increase in log data. Moreover, to overcome the processing limits of existing analysis tools when real-time analysis of the aggregated unstructured log data is required, the proposed system includes a Hadoop-based analysis module for quick and reliable parallel-distributed processing of the massive log data. Furthermore, because HDFS (Hadoop Distributed File System) stores data by replicating blocks of the aggregated log data, the proposed system offers automatic recovery so that it continues operating after a malfunction. Finally, by establishing a distributed database with the NoSQL-based MongoDB, the proposed system processes unstructured log data effectively. Relational databases such as MySQL have complex schemas that are inappropriate for unstructured log data, and their strict schemas prevent node expansion when stored data must be distributed across nodes as data volume rapidly increases. NoSQL does not provide the complex computations of relational databases, but it can easily expand through node dispersion when data volume grows rapidly; it is a non-relational database with a structure appropriate for unstructured data. NoSQL data models are usually classified as key-value, column-oriented, and document-oriented types. Of these, MongoDB, a representative document-oriented store with a free schema structure, is used in the proposed system: it makes it easy to process unstructured log data through its flexible schema, facilitates node expansion as data volume rapidly grows, and provides an Auto-Sharding function that automatically expands storage.
The proposed system is composed of a log collector module, a log graph generator module, a MongoDB module, a Hadoop-based analysis module, and a MySQL module. When the log data generated over each bank's entire client business process are sent to the cloud server, the log collector module collects and classifies them by log data type and distributes them to the MongoDB module and the MySQL module. The log graph generator module generates the log-analysis results of the MongoDB module, the Hadoop-based analysis module, and the MySQL module, per analysis time and type of the aggregated log data, and provides them to the user through a web interface. Log data requiring real-time analysis are stored in the MySQL module and served in real time by the log graph generator module. The log data aggregated per unit time are stored in the MongoDB module and plotted as graphs according to the user's analysis conditions, and are parallel-distributed and processed by the Hadoop-based analysis module. A comparative evaluation of log insertion and query performance against a system that uses only MySQL demonstrates the proposed system's superiority. Moreover, an optimal chunk size is identified through MongoDB log-insertion performance evaluations over various chunk sizes.
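As a small sketch of why a schema-free store suits such logs, documents with different shapes can be written to one MongoDB collection and still be aggregated together; the database, collection, and field names here are hypothetical, not the paper's.

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")   # assumed local instance
logs = client["bank"]["client_logs"]                # hypothetical db / collection

# Unstructured logs: each document may carry a different set of fields.
logs.insert_many([
    {"type": "transfer", "ts": "2013-06-01T09:00:00", "amount": 150000, "branch": "A01"},
    {"type": "login",    "ts": "2013-06-01T09:00:02", "device": "atm",  "retries": 1},
    {"type": "error",    "ts": "2013-06-01T09:00:05", "msg": "timeout", "svc": "ledger"},
])

# Counts per log type for the graph generator; no fixed schema is required.
for row in logs.aggregate([{"$group": {"_id": "$type", "n": {"$sum": 1}}}]):
    print(row)
```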

Water Quality and Ecosystem Health Assessments in Urban Stream Ecosystems (도심하천 생태계에서의 수질 및 생태건강성 평가)

  • Kim, Hyun-Mac;Lee, Jae-Hoon;An, Kwang-Guk
    • Korean Journal of Environmental Biology / v.26 no.4 / pp.311-322 / 2008
  • The objectives of this study were to analyze chemical water quality and physical habitat characteristics in two urban streams (Miho and Gap streams), along with evaluations of fish community structure and ecosystem health, through fish composition and guild analyses during 2006-2007. Concentrations of BOD and COD averaged 3.5 and 5.7 mg/L in the urban streams, while TN and TP averaged 5.1 mg/L and 274 μg/L, indicating a eutrophic state. Organic pollution and eutrophication were most intense in the downstream reaches of both streams. A total of 34 fish species were found in the two streams, and the most abundant species was Zacco platypus (32-42% of the total). In both streams, the relative abundance of sensitive species was low (23%) while those of tolerant species and omnivores were high (45% and 52%), indicating tolerance and trophic guilds typical of urban streams in Korea. According to the multi-metric models of the Stream Ecosystem Health Assessment (SEHA), model values were 19 and 24 in Miho Stream and Gap Stream, respectively. Habitat analysis showed QHEI (Qualitative Habitat Evaluation Index) values of 123 and 135 in the two streams, respectively. The minimum SEHA and QHEI values were observed in the downstream reaches of both streams, mainly attributable to the chemical pollution evident in the water quality parameters. The SEHA model values were strongly correlated with conductivity (r=-0.530, p=0.016), BOD (r=-0.578, p<0.01), COD (r=-0.603, p<0.01), and nutrients (TN, TP: r>0.40, p<0.05). The model applied in this study thus appears to be a useful tool that reflects chemical water quality in urban streams. Overall, this study suggests that consistent ecological monitoring is required in urban streams for conservation, along with ecological restoration of the degraded downstream reaches.
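For readers unfamiliar with the statistics reported above, the correlation of model scores with a water quality variable can be computed as below; the arrays are made-up placeholder values, not the study's data.

```python
from scipy.stats import pearsonr

# Hypothetical site-level values, not the study's data.
seha = [24, 19, 31, 15, 22, 27, 12, 18]          # multi-metric health scores
bod  = [3.1, 4.2, 1.8, 5.6, 3.5, 2.4, 6.0, 4.8]  # BOD, mg/L

r, p = pearsonr(seha, bod)
print(f"r={r:.3f}, p={p:.3f}")  # a negative r means health falls as BOD rises
```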

Development of Systematic Process for Estimating Commercialization Duration and Cost of R&D Performance (기술가치 평가를 위한 기술사업화 기간 및 비용 추정체계 개발)

  • Jun, Seoung-Pyo;Choi, Daeheon;Park, Hyun-Woo;Seo, Bong-Goon;Park, Do-Hyung
    • Journal of Intelligence and Information Systems / v.23 no.2 / pp.139-160 / 2017
  • Technology commercialization creates real economic value by linking a company's R&D processes and outputs to the market, and it matters because it lets a company gain and maintain a sustained competitive advantage. For a specific technology to be commercialized, it passes through the stages of technology planning, research and development, and commercialization, a process that takes considerable time and money. The duration and cost of technology commercialization are therefore important decision information for determining a market entry strategy, and they are even more important for a technology investor who wants to rationally evaluate the technology's value. Estimating them scientifically is thus very important, yet research on technology commercialization is insufficient and related methodologies are lacking. In this study, we propose an evaluation model that can estimate the duration and cost of commercializing R&D technology for small and medium-sized enterprises. To accomplish this, we collected public data from the National Science & Technology Information Service (NTIS) and SME technical statistics survey data provided by the Small and Medium Business Administration, and used these data to develop an estimation model of the commercialization duration and cost of R&D performance based on the market approach, one of the technology valuation methods. Specifically, we defined the commercialization process as consisting of development planning, development progress, and commercialization, and derived key variables such as stage-wise R&D costs and durations, factors of the technology itself, factors of the technology development, and environmental factors. First, given the data, we estimated the cost and duration at each technology readiness level (basic research, applied research, development research, prototype production, commercialization) for each industry classification. Then we developed and verified a research model for each industry classification. The results can be summarized as follows. First, the results can be reflected in a technology valuation model and used to estimate the objective economic value of a technology. The duration and cost from the technology development stage to the commercialization stage are critical factors that strongly influence the discounting of the technology's future sales, so this study can contribute to more reliable technology valuation by estimating commercialization duration and cost scientifically from past data. Second, we verified models from various fields, including statistical models and data mining models: the statistical models help identify the important factors for estimating commercialization duration and cost, while the data mining models yield rules or algorithms that can be applied to an advanced technology valuation system. Finally, this study reaffirms the importance of commercialization costs and durations, which previous studies have not actively examined. The results confirm the significant factors affecting commercialization cost and duration, and show that these factors differ by industry classification.
Practically, the results of this study can be reflected in technology valuation systems provided by national research institutes, giving R&D staff access to sophisticated technology valuation. The relevant logic or algorithms can be implemented independently and reflected directly in such a system, so practitioners can put them to use immediately. In conclusion, the results of this study offer substantial contributions, both theoretical and practical.
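As a toy illustration of why the estimated duration and cost matter for valuation, the sketch below discounts future sales that only begin after the commercialization period and deducts the estimated commercialization cost. The simple discounted-cash-flow form and all numbers are assumptions for illustration, not the paper's estimation model.

```python
def technology_value(annual_sales, start_year, years, rate, commercialization_cost):
    """Toy discounted value: sales begin only after the estimated
    commercialization duration, and the estimated cost is deducted."""
    pv_sales = sum(annual_sales / (1 + rate) ** t
                   for t in range(start_year, start_year + years))
    return pv_sales - commercialization_cost

# A longer estimated duration or a higher estimated cost lowers the value.
print(technology_value(500, start_year=3, years=5, rate=0.10, commercialization_cost=400))
print(technology_value(500, start_year=5, years=5, rate=0.10, commercialization_cost=400))
```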

Analysis of Significance between SWMM Computer Simulation and Artificial Rainfall on Rainfall Runoff Delay Effects of Vegetation Unit-type LID System (식생유니트형 LID 시스템의 우수유출 지연효과에 대한 SWMM 전산모의와 인공강우 모니터링 간의 유의성 분석)

  • Kim, Tae-Han;Choi, Boo-Hun
    • Journal of the Korean Institute of Landscape Architecture / v.48 no.3 / pp.34-44 / 2020
  • In order to suggest directions for performance analysis of ecological components based on a vegetation-based LID system model, this study analyzes the statistical significance between SWMM computer simulations and monitoring results obtained with rainfall and run-off simulation devices, and provides the basic data required for a preliminary system design. The study also comprehensively reviews the vegetation-based LID system's soil, vegetation model, and analysis plans, which previous studies addressed less, and suggests a direction for performance quantification that could substitute for a device-type LID system. During 40 minutes of artificial rainfall monitoring, the test group zone and the control group zone recorded maximum rainfall intensities of 142.91 mm/hr (n=3, SD=0.34) and 142.24 mm/hr (n=3, SD=0.90), respectively. Compared to the hyetograph, low rainfall intensity was reproduced in the 10-minute and 50-minute sections, and high rainfall intensity was confirmed in the 20-minute, 30-minute, and 40-minute sections. As for rainwater run-off delay effects, run-off intensity in the test group zone was reduced by 79.8%, recording 0.46 mm/min at the 50-minute point when run-off intensity was highest in the control group zone. In the computer simulation, run-off intensity in the test group zone was reduced by 99.1%, recording 0.05 mm/min at the 50-minute point when run-off intensity was highest. The maximum rainfall run-off intensity in the test group zone (Dv=30.35, NSE=0.36) was 0.77 mm/min in artificial rainfall monitoring and 1.06 mm/min in the SWMM computer simulation, in both cases at the 70-minute point; likewise, the control group zone (Dv=17.27, NSE=0.78) recorded 2.26 mm/min and 2.38 mm/min, respectively, at the 50-minute point. By statistically assessing the significance between the rainfall and run-off simulation systems and the SWMM computer simulations, this study was able to suggest a preliminary design direction for the rainwater run-off reduction performance of an LID system planted with a single vegetation type. By comprehensively examining the LID system's soil and vegetation models and analysis methods, it was also able to compile parameter quantification plans for the vegetation and soil sectors that align with a preliminary design. However, the use of a single vegetation type in the LID system introduced physical variability, and follow-up studies are required on algorithms for calibrating the statistical significance between monitoring and computer simulation results.
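The goodness-of-fit statistics quoted above (NSE, Dv) can be computed from paired observed and simulated series as in the sketch below; the short arrays are placeholders rather than the study's measurements, and the percentage form of Dv is one common convention.

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is perfect; <= 0 is no better than the mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def dv(obs, sim):
    """Deviation of run-off volume, as a percentage of the observed volume."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 100 * (obs.sum() - sim.sum()) / obs.sum()

# Placeholder run-off intensities (mm/min) at 10-minute steps, not the study's data.
observed  = [0.0, 0.1, 0.4, 0.8, 1.1, 0.9, 0.5]
simulated = [0.0, 0.0, 0.3, 0.9, 1.2, 0.8, 0.4]
print(f"NSE={nse(observed, simulated):.2f}, Dv={dv(observed, simulated):.1f}%")
```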

Recent Progress in Air Conditioning and Refrigeration Research -A Review of Papers Published in the Korean Journal of Air-Conditioning and Refrigeration Engineering in 2000 and 2001- (공기조화, 냉동 분야의 최근 연구 동향 -2000년 및 2001년 학회지 논문에 대한 종합적 고찰 -)

  • 강신형;한화택;조금남;이승복;조형희;김민수
    • Korean Journal of Air-Conditioning and Refrigeration Engineering / v.14 no.12 / pp.1102-1139 / 2002
  • A review has been carried out of the papers published in the Korean Journal of Air-Conditioning and Refrigeration Engineering in 2000 and 2001, focusing on the current status of research in heating, cooling, ventilation, sanitation, and building environment. The conclusions are as follows. (1) Most fundamental studies on fluid flow were related to heat transport in facilities. Drop formation and rivulet flow on solid surfaces were interesting topics related to condensation augmentation. Research on the micro-environment considering flow, heat, and humidity was also of interest for comfortable living environments, and can be extended to biological aspects. Development of high-performance, low-noise fans and blowers was a continuing topic, and well-developed CFD technologies were widely applied to the development of facilities and their systems. (2) Most papers on heat transfer analysis and heat exchangers dealt with convection, evaporation, and channel flow for heat exchanger design applications. Numerical heat transfer simulation studies were performed and reported to show heat transfer characteristics. Both experimental and numerical studies on heat exchangers were reported, while few papers were available on system analysis including the heat exchanger. (3) A review of recent studies on heat pump systems shows that performance analysis and control of heat pumps have been performed through various simulations and experiments, and research papers on multi-type heat pump systems increased significantly. Studies on heat pipes examined changes in working characteristics and structure experimentally. Research on phase change was carried out steadily, and operation strategies for encapsulated ice storage tanks were reported experimentally in several papers. (4) Recent studies on refrigeration/air-conditioning systems focused on system performance and efficiency for new alternative refrigerants. Evaporation and condensation heat transfer characteristics were investigated for various tube shapes and new alternative refrigerants. Studies on components of refrigeration/air-conditioning systems examined the efficiency of various compressors and the performance of new expansion devices. In addition to the thermophysical properties of refrigerant mixtures, studies on new refrigerants were also carried out; however, research on two-phase flow seemed insufficient. (5) Recent studies on absorption cooling systems investigated heat and mass transfer phenomena to improve absorber performance, presenting various experimental data and proposing several simulation models. Recent studies on ducts and ventilation proposed ventilation indices to quantify ventilation performance in buildings and tunnels; the main efforts focused on applying ventilation effectiveness in practice, either numerically using computational fluid dynamics or experimentally using tracer gas techniques. (6) Recent studies on indoor thermal environments and building service systems mainly addressed innovative ideas such as underfloor air-conditioning systems, personal environmental modules, and radiant floor cooling.
Also highlighted were new approaches for minimizing energy consumption and improving indoor environmental conditions through predictive control of HVAC systems, various building energy management activities, and cost-benefit analysis for economic evaluation.

Hue Shift Model and Hue Correction in High Luminance Display (고휘도 디스플레이의 색상이동모델과 색 보정)

  • Lee, Tae-Hyoung;Kwon, Oh-Seol;Park, Tae-Yong;Ha, Yeong-Ho
    • Journal of the Institute of Electronics Engineers of Korea SP / v.44 no.4 s.316 / pp.60-69 / 2007
  • The human eye usually experiences a loss of color sensitivity at high luminance levels and perceives a discrepancy in color between high- and normal-luminance displays, generally known as a hue shift. Accordingly, this paper models the hue-shift phenomenon and proposes a hue-correction method that provides a perceptual match between high- and normal-luminance displays. The hue-shift values are determined by perceived hue matching experiments. The phenomenon is first observed at three lightness levels; that is, the luminance ratio between the high- and normal-luminance displays is held constant while the perceived hue matching experiments are performed. To quantify the hue-shift phenomenon over the whole hue angle, color patches with the same lightness are first created, equally spaced around the hue circle. These patches are then displayed one by one on both displays at the fixed luminance ratio between them. Next, the hue value of each patch on the high-luminance display is adjusted by observers until the perceived hue of the patches on both displays appears visually identical. After the hue-shift values are obtained, they are fit piecewise so that the shifted-hue amount can be approximated for an arbitrary hue value of any pixel in a high-luminance display and then used for correction. Essentially, the input RGB values of an image are converted to CIELAB values, from which LCh (lightness, chroma, and hue) values are calculated to obtain the hue of every pixel. These hue values are shifted by the amounts given by the hue-shift model's functions. Finally, corrected CIELAB values are calculated from the corrected hue values, and output RGB values are estimated for all pixels. For evaluation, an observer preference test was performed on the hue-shift results, and almost all observers concluded that images corrected by the hue-shift model visually matched the images on the normal-luminance display.
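A minimal sketch of the correction pipeline described above follows, assuming a hypothetical piecewise-linear hue-shift function in place of the paper's fitted values; the sRGB/CIELAB conversions are delegated to scikit-image.

```python
import numpy as np
from skimage import color

def hue_shift_amount(h_deg):
    """Hypothetical piecewise-linear hue-shift model (degrees); the paper
    instead fits such a function from perceived hue matching experiments."""
    knots  = np.array([0, 60, 120, 180, 240, 300, 360])
    shifts = np.array([0, -4,  3,   5,  -2,   1,   0])   # made-up values
    return np.interp(h_deg, knots, shifts)

def correct_hue(rgb):
    lab = color.rgb2lab(rgb)                       # sRGB -> CIELAB
    a, b = lab[..., 1], lab[..., 2]
    c = np.hypot(a, b)                             # chroma
    h = np.degrees(np.arctan2(b, a)) % 360         # hue angle
    h = (h + hue_shift_amount(h)) % 360            # apply the modeled shift
    lab[..., 1] = c * np.cos(np.radians(h))
    lab[..., 2] = c * np.sin(np.radians(h))
    return color.lab2rgb(lab)                      # back to sRGB

img = np.random.rand(4, 4, 3)                      # toy image in [0, 1]
out = correct_hue(img)
```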