• Title/Summary/Keyword: user sensitivity


Rank-level Fusion Method That Improves Recognition Rate by Using Correlation Coefficient (상관계수를 이용하여 인식률을 향상시킨 rank-level fusion 방법)

  • Ahn, Jung-ho;Jeong, Jae Yeol;Jeong, Ik Rae
    • Journal of the Korea Institute of Information Security & Cryptology, v.29 no.5, pp.1007-1017, 2019
  • Currently, most biometric systems authenticate users with a single piece of biometric information. This approach suffers from many problems, such as noise, sensitivity to data quality, spoofing, and a ceiling on the recognition rate. One way to address these problems is to use multiple biometric traits: a multi-biometric authentication system fuses the information from each trait into new information and then authenticates the user with it. Among fusion methods, score-level fusion is widely used, but it requires a normalization step, and even for identical data the recognition rate varies with the normalization method chosen. Rank-level fusion has been proposed because it does not require normalization; however, existing rank-level fusion methods achieve lower recognition rates than score-level fusion methods. To solve this problem, we propose a rank-level fusion method that uses correlation coefficients and achieves a higher recognition rate than score-level fusion. Experiments compare the recognition rate of existing rank-level fusion methods with that of the proposed method on iris data (CASIA V3) and face data (FERET V1), and also against score-level fusion methods. As a result, the recognition rate improves by about 0.3% to 3.3%.
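
    The abstract does not give the paper's exact fusion rule, so the following is only a minimal sketch of one plausible reading: a weighted Borda count in which each matcher's weight is derived from the correlation of its rank list with the other matchers' rank lists. The weighting rule and all names are assumptions, not the paper's method.

    ```python
    # Minimal sketch of correlation-weighted rank-level fusion (weighted Borda
    # count). Hypothetical assumption: matchers whose rank lists are less
    # correlated with the others carry more independent information and get
    # larger weights.
    import numpy as np

    def fuse_ranks(rank_matrix: np.ndarray) -> np.ndarray:
        """rank_matrix[m, c]: rank given by matcher m to candidate c (1 = best).
        Returns fused candidate indices, best first."""
        n_matchers, _ = rank_matrix.shape
        corr = np.corrcoef(rank_matrix)               # pairwise correlation of rank lists
        # Mean absolute correlation of each matcher with the others (drop self-corr).
        mean_corr = (np.abs(corr).sum(axis=1) - 1) / (n_matchers - 1)
        weights = 1.0 - mean_corr + 1e-9              # decorrelated => heavier (assumption)
        weights /= weights.sum()
        fused_score = weights @ rank_matrix           # weighted rank sum (lower = better)
        return np.argsort(fused_score)

    # Example: two matchers (e.g., iris and face) ranking four enrolled identities.
    ranks = np.array([[1, 3, 2, 4],
                      [2, 1, 4, 3]], dtype=float)
    print(fuse_ranks(ranks))                          # fused identity ordering
    ```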

A Study on Converting the Data of Probability of Hit(Ph) for OneSAF Model (OneSAF 모델을 위한 명중률 데이터 변환 방법)

  • Kim, Gun In;Kang, Tae Ho;Seo, Woo Duck;Pyun, Jae Jung
    • Journal of the Korea Society for Simulation, v.29 no.3, pp.83-91, 2020
  • To use the OneSAF model for defense M&S analysis, the most critical factor is the acquisition of input data. It is difficult for model users to determine inputs such as the probability of hit (Ph) and the probability of kill (Pk). These data can be obtained directly through live fire during development and operational tests, but doing so demands considerable time and resources. In this paper, we review possible ways to obtain Ph and Pk and introduce several data-production methodologies. In particular, the error budget method is presented to convert the Ph (%) data of the AWAM model into the error (mil) data of the OneSAF model, and a conversion method that derives adjusted results from the JMEM is also introduced. To demonstrate the usefulness of the proposed method, the probability of hit was calculated with the error budget method; more accurate data were obtained when the error budget method and the projected target area taken from a published photograph were used together. The importance of the Ph calculation was demonstrated by a sensitivity analysis of Ph on combat effectiveness. This paper emphasizes the importance of determining Ph data and of improving the reliability of the M&S system through steady collection and analysis of Ph data.
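
    The abstract does not spell out the error-budget computation. As a rough illustration of the Ph(%)-to-error(mil) conversion it refers to, the sketch below inverts a textbook hit-probability model (independent zero-mean normal errors, rectangular target) to recover a dispersion in mils from a given Ph. The target geometry, bracket, and numbers are hypothetical, not the paper's procedure.

    ```python
    # Illustrative conversion of a hit probability Ph into an aiming dispersion
    # sigma (mil), assuming equal independent normal errors in azimuth and
    # elevation and a rectangular target (a generic textbook formulation).
    from math import erf, sqrt
    from scipy.optimize import brentq

    def hit_probability(sigma_mil: float, width_mil: float, height_mil: float) -> float:
        """P(hit) on a w x h target for dispersion sigma on both axes."""
        px = erf(width_mil / (2.0 * sqrt(2.0) * sigma_mil))
        py = erf(height_mil / (2.0 * sqrt(2.0) * sigma_mil))
        return px * py

    def sigma_from_ph(ph: float, width_mil: float, height_mil: float) -> float:
        """Numerically invert Ph -> sigma (sigma bracketed in [1e-3, 100] mil)."""
        return brentq(lambda s: hit_probability(s, width_mil, height_mil) - ph,
                      1e-3, 100.0)

    # Example: a 2.3 m x 2.3 m projected target at 2 km spans ~1.15 mil per axis.
    print(sigma_from_ph(0.7, 1.15, 1.15))   # dispersion (mil) implied by Ph = 70%
    ```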

Relationship between Smartphone Addiction and Sensory Processing Ability of Preschool Children (학령전기 아동의 스마트폰 중독과 감각처리능력과의 관계)

  • Kim, Chae-Hyeon;Kim, Kyeong-Mi;Chang, Moon-Young;Jung, Hyerim
    • The Journal of Korean Academy of Sensory Integration, v.19 no.2, pp.1-11, 2021
  • Objective : The purpose of this study was to compare sensory processing ability across smartphone addiction levels in preschool children and to investigate the correlation between smartphone addiction level and sensory processing ability within the smartphone addiction group. Method : The subjects were 324 children, 124 in the addiction group and 200 in the normal user group. Measurements were a questionnaire on the general characteristics of the subjects, a smartphone addiction scale, and the Short Sensory Profile (SSP). Data were analyzed with descriptive statistics, independent t-tests, and Pearson correlation analysis in SPSS 22.0. Results : There was a significant difference in the total SSP score and in all sub-domains between the addiction and normal use groups (p<0.05). In the smartphone addiction group, the SSP total score (r=-.278), auditory filtering (r=-.293), and visual/auditory sensitivity (r=-.393) were negatively correlated with smartphone addiction level. Conclusion : This study confirmed a difference in sensory processing ability between preschool children in the addiction and normal use groups, and an association between sensory processing ability and smartphone addiction within the addiction group. It is significant in that it provides basic data for preventing smartphone addiction.
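
    As a minimal sketch of the reported analysis pipeline (independent-samples t-test between groups, Pearson correlation within the addiction group), here is a Python equivalent with hypothetical data standing in for the study's SPSS 22.0 workflow; group sizes follow the abstract, all values are invented.

    ```python
    # Sketch of the described analysis on hypothetical data: between-group t-test
    # on SSP totals, then within-group Pearson correlation with addiction score.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    ssp_addiction = rng.normal(140, 15, 124)    # hypothetical SSP totals, n=124
    ssp_normal = rng.normal(155, 15, 200)       # hypothetical SSP totals, n=200
    addiction_score = rng.normal(60, 10, 124)   # hypothetical addiction-scale scores

    # Group comparison: independent-samples t-test on SSP totals.
    t, p = stats.ttest_ind(ssp_addiction, ssp_normal)
    print(f"t = {t:.2f}, p = {p:.4f}")

    # Within-group association: Pearson r between addiction score and SSP total.
    r, p_r = stats.pearsonr(addiction_score, ssp_addiction)
    print(f"r = {r:.3f}, p = {p_r:.4f}")
    ```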

Personalized Cooling Management System with Thermal Imaging Camera (열화상 카메라를 적용한 개인 맞춤형 냉각관리 시스템)

  • Lee, Young-Ji;Lee, Joo-Hyun;Lee, Seung-Ho
    • Journal of IKEEE, v.25 no.4, pp.782-785, 2021
  • In this paper, we propose a personalized cooling management system that uses a thermal imaging camera. The proposed equipment controls the amount of cold air according to the difference between the user's skin temperature before and after the procedure. When the skin temperature falls abnormally low, the cold air supply is cut off to prevent safety accidents. Replacing a contact skin-temperature sensor with thermal-camera temperature measurement is economical, and the temperature can be visualized in the thermal image. In addition, for the safety of the system, the equipment improves the sensitivity of the skin-distance measurement by calculating the focal length with a dual laser pointer. The performance of the proposed equipment was evaluated at an externally accredited testing institute. First, the measured temperature range was -100℃~-160℃, wider than the -150℃~-160℃ of the best equipment currently used in the field (cryo generation/USA), and the measurement error was ±3.2%~±3.5%, better than the field-leading ±5% (CRYOTOP/China). Second, the distance accuracy was below ±4.0%, superior to the field-leading ±5% (CRYOTOP/China). Third, nitrogen consumption was confirmed to be at most 0.15 L/min, far below the field-leading 6 L/min (POLAR BEAR/USA). Therefore, the personalized cooling management system with a thermal imaging camera proposed in this paper was judged to perform excellently.
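
    The paper does not describe its control software; the sketch below only illustrates the stated safety behavior (cut off the cold air when skin temperature is abnormally low, otherwise cool toward a target before/after temperature difference) using a simulated camera. The threshold, polling behavior, and every name are hypothetical.

    ```python
    # Sketch of the safety logic: monitor skin temperature via a (simulated)
    # thermal camera and gate the cold-air supply accordingly.
    import random

    SAFE_SKIN_TEMP_C = 5.0                 # hypothetical safety cutoff (deg C)

    class SimulatedThermalCamera:
        """Stand-in for a real thermal-camera SDK; cools ~0.4 deg C per reading."""
        def __init__(self, start_c: float):
            self.temp_c = start_c
        def read_skin_temp_c(self) -> float:
            self.temp_c -= 0.4 + random.uniform(-0.05, 0.05)
            return self.temp_c

    def run_cooling(camera, baseline_c: float, target_delta_c: float) -> None:
        cold_air_on = True
        while cold_air_on:
            temp = camera.read_skin_temp_c()
            if temp < SAFE_SKIN_TEMP_C:                 # abnormally low: safety cutoff
                print(f"{temp:.1f} C: abnormally low, cold air cut off")
                cold_air_on = False
            elif baseline_c - temp >= target_delta_c:   # desired cooling reached
                print(f"{temp:.1f} C: target cooling reached, stopping")
                cold_air_on = False
            else:
                print(f"{temp:.1f} C: cooling...")

    run_cooling(SimulatedThermalCamera(start_c=32.0), baseline_c=32.0, target_delta_c=10.0)
    ```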

Design and Implementation of a Data-Driven Defect and Linearity Assessment Monitoring System for Electric Power Steering (전동식 파워 스티어링을 위한 데이터 기반 결함 및 선형성 평가 모니터링 시스템의 설계 구현)

  • Lawal Alabe Wale;Kimleang Kea;Youngsun Han;Tea-Kyung Kim
    • Journal of Internet of Things and Convergence, v.9 no.2, pp.61-69, 2023
  • In recent years, due to heightened environmental awareness, Electric Power Steering (EPS) has been increasingly adopted as the steering control unit in manufactured vehicles. This has brought numerous benefits, such as improved steering power, elimination of hydraulic hose leaks, and reduced fuel consumption. However, EPS systems rely on sensors to respond to driver actions, so the consistency of each sensor's linear variation is integral to the stability of the steering response. To ensure quality control, a reliable method for detecting defects and assessing linearity is required to evaluate the sensitivity of the EPS sensor to changes in its internal design characteristics. This paper proposes a data-driven defect and linearity assessment monitoring system that analyzes EPS component defects and linearity based on vehicle-speed interval division. The approach is validated experimentally using data collected from an EPS test jig and is further enhanced by a Graphical User Interface (GUI). The developed system performs defect detection with an accuracy of 0.99 and produces a linearity assessment score at varying vehicle speeds.
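
    The abstract does not define the linearity score. Below is a minimal sketch of one plausible realization of "linearity assessment based on vehicle-speed interval division": bin samples by speed and score each bin by the R² of a linear fit of sensor output against steering input. Variable names, bin edges, and the synthetic data are hypothetical.

    ```python
    # Sketch: per-speed-bin linearity score as R^2 of a least-squares line.
    import numpy as np

    def linearity_by_speed(speed, steer_in, sensor_out, bin_edges):
        scores = {}
        for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
            mask = (speed >= lo) & (speed < hi)
            if mask.sum() < 3:
                continue
            x, y = steer_in[mask], sensor_out[mask]
            slope, intercept = np.polyfit(x, y, 1)      # least-squares line
            resid = y - (slope * x + intercept)
            scores[(lo, hi)] = 1.0 - resid.var() / y.var()  # R^2 linearity score
        return scores

    # Hypothetical test-jig data: near-linear sensor with speed-dependent noise.
    rng = np.random.default_rng(1)
    speed = rng.uniform(0, 120, 2000)
    steer = rng.uniform(-1, 1, 2000)
    out = 2.0 * steer + rng.normal(0, 0.01 + 0.001 * speed)
    print(linearity_by_speed(speed, steer, out, bin_edges=[0, 40, 80, 120]))
    ```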

Lifetime Reliability Based Life-Cycle Cost-Effective Optimum Design of Steel Bridges (생애 신뢰성에 기초한 강교의 LCC최적설계)

  • Lee, Kwang Min;Cho, Hyo Nam;Cha, CheolJun;Kim, Seong Hun
    • KSCE Journal of Civil and Environmental Engineering Research, v.26 no.1A, pp.75-89, 2006
  • This paper presents a practical and realistic Life-Cycle Cost (LCC) optimum design methodology for steel bridges that considers the time effect of bridge reliability under environmental stressors such as corrosion and heavy truck traffic. The LCC functions considered in the optimization consist of initial cost, expected life-cycle maintenance cost, and expected life-cycle rehabilitation costs, the last including repair/replacement costs, loss of contents, fatality and injury losses, road user costs, and indirect socio-economic losses. Assessing the life-cycle rehabilitation costs requires the annual probability of failure, which depends on the prior and updated load and resistance histories. For this purpose, the Nowak live load model and a modified corrosion propagation model accounting for corrosion initiation, corrosion rate, and repainting effects are adopted in this study. The proposed methodology is applied to the LCC optimum design of an actual steel box girder bridge with three continuous spans (40 m + 50 m + 40 m = 130 m), and sensitivity analyses of steel type, local corrosion environment, average daily traffic volume, and discount rate are performed to investigate how design parameters and conditions affect LCC-effectiveness. The numerical investigation shows that the local corrosion environment and the volume of truck traffic significantly influence the LCC-effective optimum design of steel bridges, so these conditions should be treated as crucial parameters in optimum LCC-effective design.
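
    The abstract lists the cost components but not the cost model itself. The following is a generic discounted expected-LCC formulation consistent with those components; all symbols are illustrative assumptions, not the paper's notation.

    ```latex
    % Generic discounted expected life-cycle cost over a service life T.
    E[C_{\mathrm{LCC}}] = C_I
      + \sum_{t=1}^{T} \frac{C_M(t)}{(1+r)^t}
      + \sum_{t=1}^{T} \frac{P_f(t)\,\bigl(C_{\mathrm{rep}} + C_{\mathrm{loss}} + C_{\mathrm{user}} + C_{\mathrm{soc}}\bigr)}{(1+r)^t}
    ```

    Here C_I is the initial cost, C_M(t) the expected maintenance cost in year t, P_f(t) the annual probability of failure, C_rep, C_loss, C_user, and C_soc the repair/replacement, contents/fatality/injury, road user, and indirect socio-economic losses, r the discount rate, and T the service life.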

Enhancing A Neural-Network-based ISP Model through Positional Encoding (위치 정보 인코딩 기반 ISP 신경망 성능 개선)

  • DaeYeon Kim;Woohyeok Kim;Sunghyun Cho
    • Journal of the Korea Computer Graphics Society, v.30 no.3, pp.81-86, 2024
  • The Image Signal Processor (ISP) converts RAW images captured by the camera sensor into user-preferred sRGB images. Although RAW images carry more information useful for image processing than sRGB images, they are rarely shared because of their large size. Moreover, the actual ISP pipeline of a camera is not disclosed, making the inverse process difficult to model. Consequently, research has been conducted on learning the conversion between sRGB and RAW. Recently, ParamISP [1], which improves on earlier simple network structures by directly incorporating camera parameters (exposure time, sensitivity, aperture size, and focal length) to mimic the operation of a real camera ISP, has been proposed. However, existing studies, including ParamISP [1], do not consider the degradation caused by lens shading, optical aberration, and lens distortion, which limits their restoration performance. This study introduces positional encoding to let a camera ISP neural network better handle lens-induced degradation. The proposed positional encoding method is suited to camera ISP networks that learn on image patches: by reflecting each patch's spatial context within the full image, it enables more precise restoration than existing models.
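
    The abstract does not specify the encoding itself. A minimal sketch of the general idea is shown below: sinusoidal positional encoding of each patch's coordinates within the full frame, appended to the patch as extra input channels. Frequencies, shapes, and normalization are illustrative assumptions, not the paper's design.

    ```python
    # Sketch: give a patch-based ISP network the patch's position in the full
    # sensor frame via sinusoidal positional-encoding channels.
    import math
    import torch

    def positional_channels(xy_grid: torch.Tensor, n_freqs: int = 4) -> torch.Tensor:
        """xy_grid: (2, H, W) coordinates normalized to [-1, 1] w.r.t. the FULL
        frame (not the patch). Returns (4 * n_freqs, H, W) encoding channels."""
        feats = []
        for k in range(n_freqs):
            freq = (2.0 ** k) * math.pi
            feats += [torch.sin(freq * xy_grid), torch.cos(freq * xy_grid)]
        return torch.cat(feats, dim=0)

    # Example: a 128x128 patch whose top-left corner sits at (x=512, y=256) in a
    # hypothetical 4000x3000 frame; coordinates are normalized to the full frame.
    H = W = 128
    ys = torch.arange(256, 256 + H) / 3000 * 2 - 1
    xs = torch.arange(512, 512 + W) / 4000 * 2 - 1
    grid = torch.stack(torch.meshgrid(ys, xs, indexing="ij"), dim=0)  # (2, H, W)

    patch = torch.rand(4, H, W)            # RAW patch (e.g., packed RGGB planes)
    inp = torch.cat([patch, positional_channels(grid)], dim=0)
    print(inp.shape)                       # (4 + 16, 128, 128)
    ```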

Anomaly Detection for User Action with Generative Adversarial Networks (적대적 생성 모델을 활용한 사용자 행위 이상 탐지 방법)

  • Choi, Nam woong;Kim, Wooju
    • Journal of Intelligence and Information Systems, v.25 no.3, pp.43-62, 2019
  • Anomaly detection was once dominated by methods that judged whether an observation was abnormal using statistics derived from the data. This was workable while data were simple and low-dimensional, so classical statistical methods could be effective. However, as data have grown complex in the era of big data, it has become difficult to analyze and predict industrial data accurately in the conventional way, and supervised learning algorithms based on SVMs and decision trees came into use. Supervised models, however, predict test data accurately only when the training classes are balanced, while most data generated in industry have imbalanced classes, so their predictions are not always valid. To overcome these drawbacks, many studies now use unsupervised models that are not influenced by the class distribution, such as autoencoders or generative adversarial networks (GANs). In this paper, we propose a method to detect anomalies using generative adversarial networks. AnoGAN, introduced by Schlegl et al. (2017), is a convolutional model that performs anomaly detection on medical images. In contrast, anomaly detection for sequence data with GANs has received far less attention than for image data. Li et al. (2018) proposed a model based on LSTM, a type of recurrent neural network, to classify anomalies in numerical sequence data, but it was not applied to categorical sequence data, nor did it use the feature matching method of Salimans et al. (2016). This suggests that much remains to be tried in anomaly classification of sequence data with generative adversarial networks. To learn the sequence data, both parts of the GAN are built from LSTMs: the generator is a 2-layer stacked LSTM with 32-dimensional and 64-dimensional hidden layers, and the discriminator is an LSTM with a 64-dimensional hidden layer. Whereas existing work on sequence anomaly detection derives anomaly scores from entropy values of the probabilities assigned to the actual data, in this paper anomaly scores are derived with the feature matching technique. In addition, the latent-variable optimization process was designed with an LSTM to improve model performance. The modified generative adversarial model was more precise than the autoencoder in all experiments and approximately 7% higher in accuracy. The GAN also outperformed the autoencoder in robustness: because it learns the data distribution from real categorical sequence data, it is not swayed by a single normal example, whereas the autoencoder is. In the robustness test, the accuracy of the autoencoder was 92% and that of the adversarial network 96%; in terms of sensitivity, the autoencoder reached 40% and the adversarial network 51%.
Experiments were also conducted to show how much performance changes with the structure used to optimize the latent variables; as a result, sensitivity improved by about 1%. These results offer a new perspective on optimizing the latent variables, which had previously received relatively little attention.
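
    As a rough illustration of the feature-matching anomaly score described above, the sketch below compares the discriminator's intermediate LSTM features of a real sequence with those of its generator reconstruction. The discriminator's 64-dimensional hidden layer follows the abstract; the embedding, the scoring details, and all data are assumptions.

    ```python
    # Sketch of a feature-matching anomaly score with an LSTM discriminator.
    import torch
    import torch.nn as nn

    class LSTMDiscriminator(nn.Module):
        def __init__(self, n_tokens: int, hidden: int = 64):
            super().__init__()
            self.embed = nn.Embedding(n_tokens, 32)
            self.lstm = nn.LSTM(32, hidden, batch_first=True)
            self.head = nn.Linear(hidden, 1)

        def features(self, seq: torch.Tensor) -> torch.Tensor:
            out, _ = self.lstm(self.embed(seq))
            return out[:, -1]                  # last hidden state as feature vector

        def forward(self, seq: torch.Tensor) -> torch.Tensor:
            return torch.sigmoid(self.head(self.features(seq)))

    def feature_matching_score(disc, real_seq, recon_seq) -> torch.Tensor:
        """Anomaly score: L2 distance between discriminator features of the real
        sequence and of its reconstruction from the optimized latent variable."""
        with torch.no_grad():
            return torch.norm(disc.features(real_seq) - disc.features(recon_seq), dim=1)

    # Example with random token sequences (recon would come from the generator
    # after optimizing the latent variable to reproduce each real sequence).
    disc = LSTMDiscriminator(n_tokens=50)
    real = torch.randint(0, 50, (8, 20))
    recon = torch.randint(0, 50, (8, 20))
    print(feature_matching_score(disc, real, recon))   # higher = more anomalous
    ```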

Small-Angle X-ray Scattering Station 4C2 BL of Pohang Accelerator Laboratory for Advance in Korean Polymer Science

  • Yoon, Jin-Hwan;Kim, Kwang-Woo;Kim, Je-Han;Heo, Kyu-Young;Jin, Kyeong-Sik;Jin, Sang-Woo;Shin, Tae-Joo;Lee, Byeong-Du;Rho, Ye-Cheol;Ahn, Byung-Cheol;Ree, Moon-Hor
    • Macromolecular Research, v.16 no.7, pp.575-585, 2008
  • There are two beamlines (BLs), 4C1 and 4C2, at the Pohang Accelerator Laboratory that are dedicated to small angle X-ray scattering (SAXS). The 4C1 BL was constructed in early 2000 and is open to public users, including both domestic and foreign researchers. In 2003, construction of the second SAXS BL, 4C2, was completed, and commissioning and user support began. The 4C2 BL uses the same bending-magnet light source as the 4C1 BL. The 4C1 BL uses a synthetic double multilayer monochromator, whereas the 4C2 BL uses a Si(111) double crystal monochromator for both small angle and wide angle X-ray scattering. In the 4C2 BL, the collimating mirror is positioned behind the monochromator in order to enhance the beam flux and energy resolution, and a toroidal focusing mirror is positioned in front of the monochromator to increase the beam flux and eliminate higher harmonics. The 4C2 BL also contains a digital cooled charge-coupled detector, which has a wide dynamic range and good sensitivity to weak scattering, making it suitable for a range of SAXS and wide angle X-ray scattering experiments. The general performance of the 4C2 BL was initially tested using standard samples and further confirmed by the experience of users during three years of operation. In addition, several grazing incidence X-ray scattering measurements were carried out at the 4C2 BL.

A Template-based Interactive University Timetabling Support System (템플릿 기반의 상호대화형 전공강의시간표 작성지원시스템)

  • Chang, Yong-Sik;Jeong, Ye-Won
    • Journal of Intelligence and Information Systems, v.16 no.3, pp.121-145, 2010
  • University timetabling, which depends on each university's educational environment, is an NP-hard problem: the amount of computation required to find solutions increases exponentially with problem size. For many years there have been numerous studies on university timetabling, driven by the need to generate timetables automatically for students' convenience and effective lessons, and to allocate subjects, lecturers, and classrooms effectively. Timetables are classified into course timetables and examination timetables; this study focuses on the former. In general, the course timetable for liberal arts is scheduled by the office of academic affairs, while the course timetable for major subjects is scheduled by each department. Our analysis of current departmental course timetabling revealed several problems. First, it is time-consuming and inefficient for each department to do the routine, repetitive timetabling work manually. Second, many classes are concentrated into a few time slots, which decreases the effectiveness of students' classes. Third, several major subjects may overlap required liberal-arts subjects in the same time slots, forcing students to choose only one of the overlapping subjects. Fourth, many subjects are taught by the same lecturers every year, and most lecturers prefer the same time slots as the previous year, so it is helpful for departments to reuse previous timetables. To solve these problems and support effective course timetabling in each department, this study proposes a university timetabling support system with two phases. In the first phase, a department generates a timetable template from the most similar previous timetable case, based on case-based reasoning. In the second phase, the department schedules the timetable through an interactive user interface under the timetabling criteria, based on a rule-based approach. The study is illustrated with Hanshin University. We classified timetabling criteria into intrinsic and extrinsic criteria. The intrinsic criteria are three criteria related to lecturer, class, and classroom, all hard constraints. The extrinsic criteria are four criteria related to 'the numbers of lesson hours' by the lecturer, 'prohibition of lecture allocation to specific day-hours' for committee members, 'the number of subjects in the same day-hour,' and 'the use of common classrooms.' Within 'the numbers of lesson hours' by the lecturer there are three criteria: 'minimum number of lesson hours per week,' 'maximum number of lesson hours per week,' and 'maximum number of lesson hours per day.' All extrinsic criteria are hard constraints except 'minimum number of lesson hours per week,' which is treated as a soft constraint. In addition, we proposed two indices: one measuring the similarity between the subjects of the current semester and those of previous timetables, and one evaluating the distribution degree of a scheduled timetable. Similarity is measured by comparing two attributes (subject name and lecturer) between the current semester and a previous semester. The distribution-degree index, based on information entropy, indicates how subjects are spread across the timetable. To show the study's viability, we implemented a prototype system and performed experiments with real data from Hanshin University.
The average similarity of the most similar cases across all departments was estimated at 41.72%, which means that a timetable template generated from the most similar case will be helpful. Sensitivity analysis shows that the distribution degree increases when 'the number of subjects in the same day-hour' criterion is set to more than 90%.
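
    The abstract describes the distribution-degree index only as "based on information entropy". The sketch below shows one standard way such an index could be realized: normalized entropy of the subject counts per day-hour slot, higher when subjects are spread more evenly. The normalization and the slot model are assumptions, not the paper's exact definition.

    ```python
    # Sketch: entropy-based distribution degree of a timetable.
    import math
    from collections import Counter

    def distribution_degree(slot_assignments, n_available_slots: int) -> float:
        """slot_assignments: one (day, hour) entry per scheduled subject-hour.
        Returns entropy normalized by the maximum log2(n_available_slots)."""
        counts = Counter(slot_assignments)
        total = sum(counts.values())
        entropy = -sum((c / total) * math.log2(c / total) for c in counts.values())
        return entropy / math.log2(n_available_slots)

    # Concentrated timetable (subjects piled into two slots) vs. a spread-out one,
    # over a hypothetical 5-day x 5-hour grid (25 available slots).
    concentrated = [("Mon", 10)] * 4 + [("Tue", 10)] * 4
    spread = [("Mon", 9), ("Mon", 11), ("Tue", 10), ("Wed", 13),
              ("Thu", 9), ("Thu", 15), ("Fri", 10), ("Fri", 14)]
    print(distribution_degree(concentrated, 25))  # low: classes are concentrated
    print(distribution_degree(spread, 25))        # higher: classes are spread out
    ```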