• Title/Summary/Keyword: software engineering

Search Result 12,283, Processing Time 0.044 seconds

Behavioural Analysis of Password Authentication and Countermeasure to Phishing Attacks - from User Experience and HCI Perspectives (사용자의 패스워드 인증 행위 분석 및 피싱 공격시 대응방안 - 사용자 경험 및 HCI의 관점에서)

  • Ryu, Hong Ryeol;Hong, Moses;Kwon, Taekyoung
    • Journal of Internet Computing and Services
    • /
    • v.15 no.3
    • /
    • pp.79-90
    • /
    • 2014
  • User authentication based on ID and PW has been widely used. As the Internet has become a growing part of people's lives, the number of times users enter IDs/PWs has increased across a variety of services. People have learned the authentication procedure so thoroughly that they enter their ID/PW without conscious attention. This is referred to as the adaptive unconscious: a set of mental processes that takes in information and produces judgements and behaviors within a second, without our conscious awareness. Most people have signed up for various websites with a small number of IDs/PWs, because they rely on their memory to manage them. Human memory decays with the passage of time, and pieces of knowledge in memory tend to interfere with each other. For these reasons, people may enter an invalid ID/PW. The characteristics of ID/PW authentication described above therefore lead to human vulnerabilities: people reuse a few PWs across websites, manage IDs/PWs from memory, and enter them unconsciously. Exploiting these human vulnerabilities, information leakage attacks such as phishing and pharming have been increasing exponentially. In the past, information leakage attacks exploited vulnerabilities of hardware, operating systems, software, and so on; however, most current attacks tend to exploit human factors instead. Attacks based on human vulnerabilities are called social-engineering attacks, and malicious social-engineering techniques such as phishing and pharming are now among the biggest security problems. Phishing attempts to obtain valuable information such as IDs/PWs, while pharming steals personal data by redirecting a website's traffic to a fraudulent copy of a legitimate website.
Screens of the fraudulent copies used in both phishing and pharming attacks are almost identical to those of legitimate websites, and pharming sites can even carry a deceptive URL. Without the support of prevention and detection techniques such as vaccines and reputation systems, it is therefore difficult for users to determine intuitively whether a site is a phishing or pharming site or a legitimate one. Previous research on phishing and pharming attacks has mainly studied technical solutions. In this paper, we focus on human behaviour when users are confronted by phishing and pharming attacks without knowing it. We conducted an attack experiment to find out how many IDs/PWs are leaked by phishing and pharming attacks. We first configured the experimental settings under the same conditions as phishing and pharming attacks and built a phishing site for the experiment. We then recruited 64 voluntary participants, asked them to log in to our experimental site, and conducted a questionnaire survey with each participant about the experiment. Through the attack experiment and the survey, we observed whether participants' passwords were leaked when logging in to the experimental phishing site, and how many of each participant's distinct passwords were leaked. We found that most participants logged in to the site unconsciously and that ID/PW management dependent on human memory caused the leakage of multiple passwords. Users should actively utilize reputation systems, and online service providers should support prevention techniques that let users intuitively determine whether a site is a phishing site.

Research on ITB Contract Terms Classification Model for Risk Management in EPC Projects: Deep Learning-Based PLM Ensemble Techniques (EPC 프로젝트의 위험 관리를 위한 ITB 문서 조항 분류 모델 연구: 딥러닝 기반 PLM 앙상블 기법 활용)

  • Hyunsang Lee;Wonseok Lee;Bogeun Jo;Heejun Lee;Sangjin Oh;Sangwoo You;Maru Nam;Hyunsik Lee
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.12 no.11
    • /
    • pp.471-480
    • /
    • 2023
  • Construction order volume in South Korea grew significantly, from 91.3 trillion won in public orders in 2013 to a total of 212 trillion won in 2021, particularly in the private sector. As the domestic and overseas markets grew, the scale and complexity of EPC (Engineering, Procurement, Construction) projects increased, and risk management of project management and ITB (Invitation to Bid) documents became a critical issue. The time granted to construction companies in the bidding process following an EPC project award is limited, and it is extremely challenging to review all the risk terms in an ITB document due to manpower and cost constraints. Previous research attempted to categorize the risk terms in EPC contract documents and detect them with AI, but data-related problems, such as the limited availability of labeled data and class imbalance, restricted practical use. Therefore, this study aims to develop an AI model that categorizes contract terms in detail according to the FIDIC Yellow 2017 (Fédération Internationale Des Ingénieurs-Conseils contract terms) standard, rather than defining and classifying risk terms as in previous research. A multi-label text classification function is necessary because the contract terms that need detailed review may vary depending on the scale and type of the project. To enhance the performance of the classification model, we developed an ELECTRA PLM (Pre-trained Language Model) capable of efficiently learning the context of text data from the pre-training stage, and conducted a four-step experiment to validate the model's performance. As a result, an ensemble of the self-developed ITB-ELECTRA model and Legal-BERT achieved the best performance, with a weighted average F1-score of 76% in the classification of 57 contract terms.
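The ensemble step described above can be sketched as simple soft voting over the two models' per-class probabilities. This is a minimal illustration, not the authors' implementation; the probability arrays below are hypothetical stand-ins for the outputs of ITB-ELECTRA and Legal-BERT on one contract clause.

```python
import numpy as np

def soft_vote(probs_a, probs_b, weight_a=0.5):
    """Weighted soft-voting ensemble of two classifiers' class probabilities."""
    return weight_a * probs_a + (1.0 - weight_a) * probs_b

# Hypothetical per-class probabilities for one ITB clause
p_electra = np.array([0.10, 0.70, 0.20])   # e.g. ITB-ELECTRA output
p_bert    = np.array([0.05, 0.55, 0.40])   # e.g. Legal-BERT output

ensemble = soft_vote(p_electra, p_bert)    # [0.075, 0.625, 0.30]
predicted_class = int(np.argmax(ensemble)) # the ensemble's predicted term class
```

In practice the weight and the per-model probabilities would come from validation performance and the fine-tuned PLMs, respectively.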

Evaluations of Spectral Analysis of in vitro 2D-COSY and 2D-NOESY on Human Brain Metabolites (인체 뇌 대사물질에서의 In vitro 2D-COSY와 2D-NOESY 스펙트럼 분석 평가)

  • Choe, Bo-Young;Woo, Dong-Cheol;Kim, Sang-Young;Choi, Chi-Bong;Lee, Sung-Im;Kim, Eun-Hee;Hong, Kwan-Soo;Jeon, Young-Ho;Cheong, Chae-Joon;Kim, Sang-Soo;Lim, Hyang-Sook
    • Investigative Magnetic Resonance Imaging
    • /
    • v.12 no.1
    • /
    • pp.8-19
    • /
    • 2008
  • Purpose: To investigate the 3-bond and spatial connectivity of human brain metabolites through scalar coupling and dipolar nuclear Overhauser effect/enhancement (NOE) interactions, using 2D correlation spectroscopy (COSY) and 2D NOE spectroscopy (NOESY) techniques. Materials and Methods: All 2D experiments were performed on a Bruker Avance 500 (11.8 T) with a z-shield gradient triple resonance cryoprobe at 298 K. Human brain metabolite samples were prepared with 10% $D_2O$. Two-dimensional spectra with 2048 data points comprised 320 averaged free induction decays (FIDs), with a repetition delay of 2 sec. TopSpin 2.0 software was used for post-processing. Seven metabolites were targeted: N-acetyl aspartate (NAA), creatine (Cr), choline (Cho), glutamine (Gln), glutamate (Glu), myo-inositol (Ins), and lactate (Lac). Results: Symmetrical 2D-COSY and 2D-NOESY spectra were successfully acquired: COSY cross peaks were observed only in the 1.0-4.5 ppm range, whereas NOESY cross peaks were observed in the 1.0-4.5 ppm range and at 7.9 ppm. In the 2D-COSY data, cross peaks between the methyl protons ($CH_3$(3)) at 1.33 ppm and the methine proton (CH(2)) at 4.11 ppm were observed in Lac. Cross peaks between the methylene protons ($CH_2$(3,$H_{\alpha}$)) at 2.50 ppm and the methylene protons ($CH_2$(3,$H_{\beta}$)) at 2.70 ppm were observed in NAA. Cross peaks between the methine proton (CH(5)) at 3.27 ppm and the methine protons (CH(4,6)) at 3.59 ppm, between the methine protons (CH(1,3)) at 3.53 ppm and the methine protons (CH(4,6)) at 3.59 ppm, and between the methine protons (CH(1,3)) at 3.53 ppm and the methine proton (CH(2)) at 4.05 ppm were observed in Ins. In the 2D-NOESY data, cross peaks between the NH proton at 8.00 ppm and the methyl protons ($CH_3$) were observed in NAA, and cross peaks between the methyl protons ($CH_3$(3)) at 1.33 ppm and the methine proton (CH(2)) at 4.11 ppm were observed in Lac.
Cross peaks between the methyl protons ($CH_3$) at 3.03 ppm and the methylene protons ($CH_2$) at 3.93 ppm were observed in Cr. Cross peaks between the methylene protons ($CH_2$(3)) at 2.11 ppm and the methylene protons ($CH_2$(4)) at 2.35 ppm, and between the methylene protons ($CH_2$(3)) at 2.11 ppm and the methine proton (CH(2)) at 3.76 ppm, were observed in Glu. Cross peaks between the methylene protons ($CH_2$(3)) at 2.14 ppm and the methine proton (CH(2)) at 3.79 ppm were observed in Gln. Cross peaks between the methine proton (CH(5)) at 3.27 ppm and the methine protons (CH(4,6)) at 3.59 ppm, and between the methine protons (CH(1,3)) at 3.53 ppm and the methine proton (CH(2)) at 4.05 ppm, were observed in Ins. Conclusion: The present study demonstrated that in vitro 2D-COSY and NOESY represent the 3-bond and spatial connectivity of human brain metabolites through scalar coupling and dipolar NOE interactions. This study could aid a better understanding of the interactions between human brain metabolites in in vivo 2D-COSY studies.


Implementation of An Automatic Authentication System Based on Patient's Situations and Its Performance Evaluation (환자상황 기반의 자동인증시스템 구축 및 성능평가)

  • Ham, Gyu-Sung;Joo, Su-Chong
    • Journal of Internet Computing and Services
    • /
    • v.21 no.4
    • /
    • pp.25-34
    • /
    • 2020
  • In current medical information systems, an environment is constructed in which biometric data generated by IoT devices or medical equipment connected to a patient can be stored in a medical information server and monitored at the same time. The patient's biometric data, medical information, and personal information are also easily accessible after simple ID/PW-only authentication via the mobile terminal of the medical staff. However, this way of accessing medical information needs to be improved to protect the patient's personal information, and a fast authentication path is needed for first aid. In this paper, we implemented an automatic authentication system based on the patient's situation and evaluated its performance. The patient's situation was graded into normal and emergency, and it was determined in real time from the biometric data arriving from the ward. If the patient's situation is an emergency, an emergency message including an emergency code is sent to the mobile terminal of the medical staff, who then attempt automatic authentication to access the patient's upper-level medical information. Automatic authentication combines user authentication (ID/PW, emergency code) and mobile terminal authentication (the medical staff's role, working hours, and work location). After user authentication, mobile terminal authentication proceeds automatically without additional intervention by the medical staff. After completing all authentications, the medical staff are granted authorization according to their role and the patient's situation, and can access the patient's graded medical information and personal information through the mobile terminal.
We protected the patient's medical information by limiting the medical staff's access according to the patient's situation, and provided automatic authentication without additional intervention in emergency situations. We conducted a performance evaluation to verify the implemented automatic authentication system.
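The two-stage check described above can be sketched as a simple rule-based function. This is a minimal illustration under assumed policies; the role set, duty hours, and location rule below are hypothetical, since the abstract does not specify the actual authorization policy.

```python
from datetime import time

def authenticate_terminal(role, now, location, patient_situation):
    """Hypothetical second-stage terminal check (role, working hours, location)
    that runs automatically after ID/PW (+ emergency code) user authentication."""
    allowed_roles = {"doctor", "nurse"}        # assumed medical-staff roles
    on_duty = time(8, 0) <= now <= time(20, 0) # assumed working hours
    in_ward = location == "ward"               # assumed work-location rule
    if patient_situation == "emergency":
        # Emergency: working hours are waived so staff can reach
        # the patient's upper-level medical information immediately
        return role in allowed_roles and in_ward
    return role in allowed_roles and on_duty and in_ward
```

For example, a nurse in the ward at 22:00 would be granted access for an emergency patient but not for a patient in a normal situation.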

A Polarization-based Frequency Scanning Interferometer and the Measurement Processing Acceleration based on Parallel Programing (편광 기반 주파수 스캐닝 간섭 시스템 및 병렬 프로그래밍 기반 측정 고속화)

  • Lee, Seung Hyun;Kim, Min Young
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.50 no.8
    • /
    • pp.253-263
    • /
    • 2013
  • The Frequency Scanning Interferometry (FSI) system, one of the most promising optical surface measurement techniques, generally achieves superior optical performance compared with other 3-dimensional measuring methods, because its hardware remains fixed during operation and only the light frequency is scanned over a specific spectral band, without vertical scanning of the target surface or the objective lens. An FSI system collects a set of interference fringe images while changing the frequency of the light source. It then transforms the intensity data of the acquired images into frequency information and calculates the height profile of target objects by frequency analysis based on the Fast Fourier Transform (FFT). However, FSI still suffers from optical noise on target surfaces and from relatively long processing times due to the number of images acquired during the frequency scanning phase. To address these issues: 1) a Polarization-based Frequency Scanning Interferometry (PFSI) is proposed for robustness against optical noise. It consists of a tunable laser as the light source, a ${\lambda}/4$ plate in front of the reference mirror, a ${\lambda}/4$ plate in front of the target object, a polarizing beam splitter (PBS), a polarizer in front of the image sensor, a polarizer in front of the fiber-coupled light source, and a ${\lambda}/2$ plate between the PBS and the light-source polarizer. With the proposed system, the polarization technique resolves the problem of low-contrast fringe images and allows control of the light distribution between the object beam and the reference beam. 2) A signal-processing acceleration method is proposed for PFSI, based on a parallel processing architecture consisting of parallel hardware and software such as a Graphics Processing Unit (GPU) and the Compute Unified Device Architecture (CUDA). As a result, the processing time reaches the takt-time level of real-time processing.
Finally, the proposed system is evaluated in terms of accuracy and processing speed through a series of experiments, and the results show the effectiveness of the proposed system and method.
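The FFT step at the heart of FSI can be sketched per pixel as follows. This is a simplified illustration, assuming an ideal linear frequency scan and using the relation that the fringe frequency with respect to optical frequency is $2h/c$ for an optical path difference (height) $h$; the function name and interface are illustrative, not the authors' implementation.

```python
import numpy as np

def height_from_intensity(intensity, dnu, c=3e8):
    """Estimate one pixel's height (half the optical path difference) from its
    intensity trace over a linear frequency scan, via the dominant FFT bin.
    intensity: samples I_k recorded at optical-frequency steps of dnu (Hz)."""
    spectrum = np.abs(np.fft.rfft(intensity - np.mean(intensity)))
    k = int(np.argmax(spectrum[1:])) + 1       # dominant bin, skipping DC
    f_fringe = k / (len(intensity) * dnu)      # fringe cycles per Hz of scan
    return c * f_fringe / 2.0                  # phase = 2*pi*(2h)*nu/c -> h
```

In the real system this analysis runs for every pixel of the fringe image stack, which is why offloading it to the GPU with CUDA brings the processing time down to real-time levels.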

Development of Drawing & Specification Management System Using 3D Object-based Product Model (3차원 객체기반 모델을 이용한 설계도면 및 시방서관리 시스템 구축)

  • Kim Hyun-nam;Wang Il-kook;Chin Sang-yoon
    • Korean Journal of Construction Engineering and Management
    • /
    • v.1 no.3 s.3
    • /
    • pp.124-134
    • /
    • 2000
  • In construction projects, design information, which should contain accurate product information in a systematic way, needs to remain applicable throughout a project's life cycle. However, paper-based 2D drawings and related documents make it difficult to communicate and share the owner's and architect's intentions and requirements effectively, and to build a corporate knowledge base through ongoing projects, due to the lack of interoperability between task- or function-oriented software and the burden of handling massive amounts of information. Meanwhile, computer and information technologies are developing so rapidly that practitioners find it hard to adopt them efficiently in the industry. The 3D modeling capabilities of CAD systems have advanced enormously and enable users to associate 3D models with other relevant information. However, representing all design information in a CAD system still requires a great deal of effort and cost, and such a sophisticated system is difficult to manage. This research focuses on the transition period from 2D-based design information management to 3D-based management, in which the two approaches coexist. It proposes a compound 2D/3D CAD system that presents general design information using 3D models integrated with 2D CAD drawings for detailed design information. We developed an integrated information management system for design drawings and specifications that associates 2D drawings with 3D models, with the 2D drawings representing detailed designs and parts that are hard to express as 3D objects. To do this, the related management processes were analyzed to build an information model, which in turn became the basis of the integrated information management system.


Suggestion of Computational Thinking-Scientific Inquiry (CT-SI) Model through the Exploration of the Relationship Between Scientific Problem Solving Process and Computational Thinking (과학적 문제해결과정과 컴퓨팅 사고의 관련성 탐색을 통한 컴퓨팅 사고 기반 과학 탐구(CT-SI) 모형의 제안)

  • Hwang, Yohan;Mun, Kongju
    • Journal of Science Education
    • /
    • v.44 no.1
    • /
    • pp.92-111
    • /
    • 2020
  • The 2015 revised science curriculum and the NGSS (Next Generation Science Standards) suggest computational thinking as an inquiry skill or competency. Interest in computational thinking has increased particularly since the Ministry of Education began requiring software education in 2014. However, there is still insufficient discussion on how to integrate computational thinking into science education. Therefore, this study aims to prepare a way to integrate the elements of computational thinking into scientific inquiry by analyzing the related literature. To achieve this goal, we summarized various definitions of the elements of computational thinking and analyzed general problem-solving processes and the scientific inquiry process to develop and suggest the model. We also examined integrated problem-solving cases from the computer science field and summarized the elements of the Computational Thinking-Scientific Inquiry (CT-SI) model. We asked scientists to explain their research processes in terms of these elements, and based on their explanations we developed a 'problem-finding' CT-SI model and a 'problem-solving' CT-SI model, both of which were reviewed by scientists. The 'problem-finding' model is relevant for selecting information and analyzing problems in theoretical research, while the 'problem-solving' model suits engineering problem-solving processes that use a general research process and engineering design. In addition, two teachers evaluated whether these models could be used in the secondary school curriculum. The models developed in this study are linked with scientific inquiry, and they will help enhance the practices of 'collecting, analyzing and interpreting data' and 'using mathematical thinking and computers' suggested in the 2015 revised curriculum.

Evaluation of the Accuracy for Respiratory-gated RapidArc (RapidArc를 이용한 호흡연동 회전세기조절방사선치료 할 때 전달선량의 정확성 평가)

  • Sung, Jiwon;Yoon, Myonggeun;Chung, Weon Kuu;Bae, Sun Hyun;Shin, Dong Oh;Kim, Dong Wook
    • Progress in Medical Physics
    • /
    • v.24 no.2
    • /
    • pp.127-132
    • /
    • 2013
  • The positions of the internal organs change continually and periodically inside the body due to respiration. To reduce this respiration-induced uncertainty in dose localization, one can use respiratory-gated radiotherapy, in which the radiation beam is delivered only during a specific period of the breathing cycle. The main disadvantages of this method are the usually long treatment time, the considerable effort required during treatment, and the limitations on patient selection. In this sense, the combination of the Real-time Position Management (RPM) system and volumetric intensity-modulated arc radiotherapy (RapidArc) is promising, since it offers a short treatment time compared with conventional respiratory-gated treatments. In this study, we evaluated the accuracy of respiratory-gated RapidArc treatment. Six patient cases were used, and each case was planned with the RapidArc technique using the Varian ECLIPSE v8.6 planning system. For quality assurance (QA), a MatriXX detector and I'mRT software were used. The results show that more than 97% of the area yields a gamma value less than one under the 3% dose difference and 3 mm distance-to-agreement criteria, indicating that for the gated RapidArc cases the measured dose matches the treatment plan's dose distribution well.
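The gamma criterion used in the QA above combines a dose-difference tolerance with a distance-to-agreement tolerance. A simplified global gamma analysis on 1D dose profiles can be sketched as follows; this is an illustrative sketch of the 3%/3mm idea, not the I'mRT implementation, and all names are assumptions.

```python
import numpy as np

def gamma_pass_rate(ref, meas, spacing, dd=0.03, dta=3.0):
    """Simplified global gamma analysis on 1D dose profiles.
    ref, meas: reference (planned) and measured dose samples on the same grid;
    spacing: grid step in mm; dd: dose tolerance as a fraction of max reference
    dose; dta: distance-to-agreement tolerance in mm. Returns % of points with
    gamma <= 1."""
    x = np.arange(len(ref)) * spacing
    dmax = ref.max()
    gammas = []
    for i, d in enumerate(meas):
        # gamma at point i = min over reference points of the combined metric
        g2 = ((x - x[i]) / dta) ** 2 + ((ref - d) / (dd * dmax)) ** 2
        gammas.append(np.sqrt(g2.min()))
    return 100.0 * np.mean(np.asarray(gammas) <= 1.0)
```

A measurement identical to the plan passes at 100%; a grossly scaled dose falls below that, mirroring how the study's >97% pass rate indicates close agreement.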

A Road Luminance Measurement Application based on Android (안드로이드 기반의 도로 밝기 측정 어플리케이션 구현)

  • Choi, Young-Hwan;Kim, Hongrae;Hong, Min
    • Journal of Internet Computing and Services
    • /
    • v.16 no.2
    • /
    • pp.49-55
    • /
    • 2015
  • According to statistics on traffic accidents over the recent 5 years, more traffic accidents happened at night than during the day. Traffic accidents have various causes, and one major cause is inappropriate or missing street lights, which confuse drivers' sight and lead to accidents. In this paper, we designed and implemented a smartphone-based lane luminance measurement application that stores the driver's location, driving information, and lane luminance into a database in real time, in order to identify inappropriate street light facilities and areas without street lights. The application is implemented in a native C/C++ environment using the Android NDK, which improves execution speed over code written in Java or other languages. To measure road luminance, the input image in the RGB color space is converted to the YCbCr color space, and the Y value gives the luminance of the road. The application detects the road lane and stores the computed lane luminance in the database server. It receives the road video image from the smartphone's camera and reduces the computational cost by restricting processing to a region of interest (ROI) of the input images. The ROI is converted to a grayscale image, and the Canny edge detector is applied to extract the outline of the lanes. After that, the Hough line transform is applied to obtain the candidate lane group, and both lanes of the road are selected by a lane detection algorithm that uses the gradients of the candidate lanes. When both lanes are detected, a triangular area is set up with a height of 20 pixels down from the intersection of the lanes, and the road luminance is estimated from this area. The Y value is calculated from the R, G, and B values of each pixel in the triangle.
The average Y value of the pixels is scaled to a range from 0 to 100 to express the road luminance, and each value is rendered with a color between black and green. After analyzing the road lane video for the luminance of the road about 60 meters ahead, we store the car's location, obtained from the smartphone's GPS sensor, in the database server by wireless communication every 10 minutes. We expect the collected road luminance information to warn drivers for safe driving and to effectively improve renovation plans for road luminance management.
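The RGB-to-Y conversion and 0-100 scaling described above amount to the standard ITU-R BT.601 luma formula. A minimal sketch follows (the app itself works in C/C++ via the NDK; the function names here are illustrative):

```python
def rgb_to_luma(r, g, b):
    """ITU-R BT.601 luma (the Y of YCbCr) from 8-bit R, G, B values."""
    return 0.299 * r + 0.587 * g + 0.114 * b

def scale_luma_0_100(y):
    """Map 8-bit luma (0-255) onto the app's 0-100 road-luminance scale."""
    return y * 100.0 / 255.0
```

For instance, a pure white pixel (255, 255, 255) yields the maximum luma and maps to 100 on the road-luminance scale.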

A Performance Evaluation of the e-Gov Standard Framework on PaaS Cloud Computing Environment: A Geo-based Image Processing Case (PaaS 클라우드 컴퓨팅 환경에서 전자정부 표준프레임워크 성능평가: 공간영상 정보처리 사례)

  • KIM, Kwang-Seob;LEE, Ki-Won
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • v.21 no.4
    • /
    • pp.1-13
    • /
    • 2018
  • Both Platform as a Service (PaaS), one of the cloud computing service models, and the e-government (e-Gov) standard framework from the Ministry of the Interior and Safety (MOIS) provide developers with practical computing environments for building web-based services. Web application developers in the geo-spatial information field can utilize and deploy middleware and common functions provided by either cloud-based services or the e-Gov standard framework. However, there are as yet few studies of their applicability and performance in actual geo-spatial information applications, and the motivation of this study was to investigate the relevance of these technologies and platforms. We assessed the applicability of these computing environments and evaluated their performance after deploying a test application: a spatial image processing service using Web Processing Service (WPS) 2.0 on the e-Gov standard framework. The test service was supported by Cloud Foundry, one of the open-source PaaS cloud platforms. Using these components, the performance of the test system under 300 and 500 concurrent threads was assessed through a comparison of two configurations: a service deployed on the PaaS alone, and the same service using the e-Gov framework on the PaaS. The performance measurements were based on recording the response time to users' requests over 3,600 seconds. According to the experimental results, all the tested cases of the e-Gov framework on the PaaS showed better performance. We expect the e-Gov standard framework on a PaaS cloud to be an important building block for web-based spatial information services, especially in the public sector.
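The response-time measurement under concurrent load can be sketched as follows. This is a minimal illustration, since the actual test harness is not described in the abstract; `request_fn` is a hypothetical stand-in for one WPS request to the deployed service.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def measure_response_times(request_fn, n_threads, n_requests):
    """Issue n_requests concurrently on n_threads and return the mean
    per-request response time in seconds. request_fn is a hypothetical
    stand-in for one service call (e.g. an HTTP request to the test WPS)."""
    def timed_call(_):
        t0 = time.perf_counter()
        request_fn()                        # the request under test
        return time.perf_counter() - t0     # this request's response time
    with ThreadPoolExecutor(max_workers=n_threads) as pool:
        times = list(pool.map(timed_call, range(n_requests)))
    return sum(times) / len(times)

# Example: a dummy 10 ms "request" measured under 4 concurrent threads
avg = measure_response_times(lambda: time.sleep(0.01), n_threads=4, n_requests=8)
```

The study's comparison would run such a measurement at 300 and 500 threads for 3,600 seconds against each of the two deployments.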