• Title/Summary/Keyword: Complex network model


Factors Influencing the Adoption of Location-Based Smartphone Applications: An Application of the Privacy Calculus Model (스마트폰 위치기반 어플리케이션의 이용의도에 영향을 미치는 요인: 프라이버시 계산 모형의 적용)

  • Cha, Hoon S.
    • Asia Pacific Journal of Information Systems / v.22 no.4 / pp.7-29 / 2012
  • Smartphones and their applications (i.e., apps) are increasingly penetrating consumer markets. According to a recent report from the Korea Communications Commission, nearly 50% of mobile subscribers in South Korea are smartphone users, accounting for over 25 million people. In particular, the importance of the smartphone has risen as a geospatially aware device that provides various location-based services (LBS) through its GPS capability. Popular LBS include maps and navigation, traffic and transportation updates, shopping and coupon services, and location-sensitive social network services. Overall, the emerging location-based smartphone apps (LBA) offer significant value by providing greater connectivity, personalization, and information and entertainment in a location-specific context. However, the rapid growth of LBA and their benefits has been accompanied by concerns over the collection and dissemination of individual users' personal information through ongoing tracking of their location, identity, preferences, and social behaviors. Most LBA users tend to agree and consent to the LBA provider's terms and privacy policy on the use of location data in order to obtain the immediate services. This tendency further increases the potential risks of unprotected exposure of personal information and serious invasions and breaches of individual privacy. To address the complex issues surrounding LBA, particularly from the user's behavioral perspective, this study applied the privacy calculus model (PCM) to explore the factors that influence the adoption of LBA. According to PCM, consumers engage in a dynamic adjustment process in which privacy risks are weighed against the benefits of information disclosure. Consistent with the principal notion of PCM, we investigated how individual users make a risk-benefit assessment in which personalized service and locatability act as benefit-side factors and information privacy risks act as a risk-side factor accompanying LBA adoption. In addition, we considered the moderating role of trust in the service provider on the inhibiting effect of privacy risks on user intention to adopt LBA. Further, we included perceived ease of use and usefulness as additional constructs to examine whether the technology acceptance model (TAM) can be applied in the context of LBA adoption. The research model with ten hypotheses was tested using data gathered from 98 respondents through a quasi-experimental survey method. During the survey, each participant was asked to navigate a website where an experimental simulation of an LBA allowed the participant to purchase time- and location-sensitive discounted tickets for nearby stores. Structural equation modeling using partial least squares validated the instrument and the proposed model. The results showed that six out of ten hypotheses were supported. Regarding the core PCM, H2 (locatability → intention to use LBA) and H3 (privacy risks → intention to use LBA) were supported, while H1 (personalization → intention to use LBA) was not. Further, we could not find any interaction effects (personalization × privacy risks, H4, and locatability × privacy risks, H5) on the intention to use LBA. In terms of privacy risks and trust, as mentioned above, we found a significant negative influence of privacy risks on intention to use (H3) and a positive influence of trust, which supported H6 (trust → intention to use LBA). The moderating effect of trust on the negative relationship between privacy risks and intention to use LBA was tested and confirmed, supporting H7 (privacy risks × trust → intention to use LBA). Two of the hypotheses regarding TAM, H8 (perceived ease of use → perceived usefulness) and H9 (perceived ease of use → intention to use LBA), were supported; however, H10 (perceived usefulness → intention to use LBA) was not. The results of this study offer the following key findings and implications. First, PCM was found to be a good analysis framework in the context of LBA adoption. Many of the hypotheses in the model were confirmed, and the high R² value (i.e., 51%) indicated a good fit of the model. In particular, locatability and privacy risks were found to be appropriate PCM-based antecedent variables. Second, the moderating effect of trust in the service provider suggests that the same marginal change in the level of privacy risks may differentially influence the intention to use LBA. That is, as privacy risks increasingly become an important social issue and negatively influence the intention to use LBA, it is critical for LBA providers to build consumer trust and confidence to mitigate this negative impact. Lastly, we could not find sufficient evidence that the intention to use LBA is influenced by perceived usefulness, which has been well supported in most previous TAM research. This suggests that future research should examine the validity of applying TAM, and further extend or modify it, in the context of LBA and similar smartphone apps.
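For readers unfamiliar with how a moderating effect such as H7 (privacy risks × trust) is typically tested, a minimal sketch follows. It is not the authors' code: it uses synthetic data and an ordinary interaction-term regression with statsmodels rather than the PLS structural equation modeling used in the paper.

```python
# Minimal sketch (not from the paper): testing a moderation effect like
# H7 (privacy risks x trust -> intention to use LBA) with an interaction term.
# Synthetic data; plain OLS is shown only to illustrate the idea.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 98                                    # sample size reported in the abstract
risk = rng.normal(size=n)                 # perceived privacy risks (standardized)
trust = rng.normal(size=n)                # trust in the service provider
# Hypothetical data-generating process: trust weakens the negative risk effect.
intention = -0.5 * risk + 0.4 * trust + 0.3 * risk * trust + rng.normal(scale=0.5, size=n)

X = sm.add_constant(np.column_stack([risk, trust, risk * trust]))
model = sm.OLS(intention, X).fit()
print(model.summary())                    # a significant interaction term indicates moderation
```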


Study on water quality prediction in water treatment plants using AI techniques (AI 기법을 활용한 정수장 수질예측에 관한 연구)

  • Lee, Seungmin; Kang, Yujin; Song, Jinwoo; Kim, Juhwan; Kim, Hung Soo; Kim, Soojun
    • Journal of Korea Water Resources Association / v.57 no.3 / pp.151-164 / 2024
  • In water treatment plants supplying potable water, managing the chlorine concentration in processes involving pre-chlorination or intermediate chlorination requires process control. To address this, research has been conducted on water quality prediction techniques utilizing AI technology. This study developed an AI-based predictive model for automating the process control of chlorine disinfection, targeting the prediction of residual chlorine concentration downstream of sedimentation basins in water treatment processes. The AI-based model, which learns from past water quality observation data to predict future water quality, offers a simpler and more efficient approach than complex physicochemical and biological water quality models. The model was tested by predicting the residual chlorine concentration downstream of the sedimentation basins at Plant, using multiple regression models and AI-based models such as Random Forest and LSTM, and the results were compared. For optimal prediction of residual chlorine concentration, the input-output structure of the AI model included the residual chlorine concentration upstream of the sedimentation basin, turbidity, pH, water temperature, electrical conductivity, inflow of raw water, alkalinity, NH3, etc. as independent variables, and the desired residual chlorine concentration of the effluent from the sedimentation basin as the dependent variable. The independent variables were selected from data observable at the water treatment plant that influence the residual chlorine concentration downstream of the sedimentation basin. The analysis showed that, for Plant, the model based on Random Forest had the lowest error compared to multiple regression models, neural network models, model trees, and other Random Forest models. The optimal prediction of residual chlorine concentration downstream of the sedimentation basin presented in this study is expected to enable real-time control of chlorine dosing in earlier treatment stages, thereby enhancing water treatment efficiency and reducing chemical costs.
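A minimal sketch of the input-output structure described above, using scikit-learn's RandomForestRegressor on a hypothetical table of plant observations. The file and column names are assumptions for illustration, not the paper's data.

```python
# Minimal sketch (not the authors' model): predicting residual chlorine
# downstream of the sedimentation basin from upstream water-quality variables.
# "plant_observations.csv" and the column names are hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

df = pd.read_csv("plant_observations.csv")       # hypothetical observation log
features = ["cl_upstream", "turbidity", "ph", "water_temp",
            "conductivity", "raw_inflow", "alkalinity", "nh3"]
target = "cl_downstream"

X_train, X_test, y_train, y_test = train_test_split(
    df[features], df[target], test_size=0.2, shuffle=False)   # preserve time order

model = RandomForestRegressor(n_estimators=300, random_state=0)
model.fit(X_train, y_train)
print("MAE:", mean_absolute_error(y_test, model.predict(X_test)))
```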

A Study on Training Dataset Configuration for Deep Learning Based Image Matching of Multi-sensor VHR Satellite Images (다중센서 고해상도 위성영상의 딥러닝 기반 영상매칭을 위한 학습자료 구성에 관한 연구)

  • Kang, Wonbin; Jung, Minyoung; Kim, Yongil
    • Korean Journal of Remote Sensing / v.38 no.6_1 / pp.1505-1514 / 2022
  • Image matching is a crucial preprocessing step for the effective utilization of multi-temporal and multi-sensor very high resolution (VHR) satellite images. Deep learning (DL) methods, which are attracting widespread interest, have proven to be an efficient approach to measuring the similarity between image pairs quickly and accurately by extracting complex and detailed features from satellite images. However, image matching of VHR satellite images remains challenging because the results of DL models depend on the quantity and quality of the training dataset, and because creating training datasets from VHR satellite images is difficult. Therefore, this study examines the feasibility of a DL-based method for matching pair extraction, the most time-consuming process during image registration. This paper also analyzes how the configuration of the training dataset affects accuracy when the dataset for DL-based image matching is developed from an existing multi-sensor VHR image database that may contain bias. For this purpose, the training dataset was composed of correct and incorrect matching pairs, created by assigning true and false labels to image pairs extracted with a grid-based Scale-Invariant Feature Transform (SIFT) algorithm from a total of 12 multi-temporal and multi-sensor VHR images. The Siamese convolutional neural network (SCNN), proposed for matching pair extraction, was trained on the constructed dataset; it measures similarity by passing the two images in parallel through two identical convolutional neural network branches. The results of this study confirm that data acquired from a VHR satellite image database can be used as a DL training dataset and indicate the potential to improve the efficiency of the matching process through appropriate configuration of multi-sensor images. Given their stable performance, DL-based image matching techniques using multi-sensor VHR satellite images are expected to replace existing manual feature extraction methods and to develop further into an integrated DL-based image registration framework.
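A minimal PyTorch-style sketch of the Siamese structure described above: two image patches pass through one shared convolutional branch, and a small head predicts match versus non-match. Layer sizes and patch dimensions are illustrative assumptions, not the paper's architecture.

```python
# Minimal sketch (not the paper's network): a Siamese CNN for patch matching.
# Two patches share one convolutional branch; the head outputs a match logit.
import torch
import torch.nn as nn

class SiameseCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.branch = nn.Sequential(            # shared (weight-tied) convolutional branch
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.head = nn.Sequential(              # similarity head on concatenated features
            nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, patch_a, patch_b):
        fa, fb = self.branch(patch_a), self.branch(patch_b)
        return self.head(torch.cat([fa, fb], dim=1))      # logit: match / no match

model = SiameseCNN()
a = torch.randn(8, 1, 64, 64)    # patches around keypoints from image A (e.g., grid-based SIFT)
b = torch.randn(8, 1, 64, 64)    # candidate patches from image B
labels = torch.randint(0, 2, (8, 1)).float()              # true / false matching pairs
loss = nn.BCEWithLogitsLoss()(model(a, b), labels)
loss.backward()
```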

Real data-based active sonar signal synthesis method (실데이터 기반 능동 소나 신호 합성 방법론)

  • Yunsu Kim; Juho Kim; Jongwon Seok; Jungpyo Hong
    • The Journal of the Acoustical Society of Korea / v.43 no.1 / pp.9-18 / 2024
  • The importance of active sonar systems is growing because underwater targets are becoming quieter and ambient noise is increasing with maritime traffic. However, the low signal-to-noise ratio of the echo signal, caused by multipath propagation, various clutter, ambient noise, and reverberation, makes it difficult to identify underwater targets using active sonar. Attempts have been made to apply data-based methods such as machine learning and deep learning to improve the performance of underwater target recognition systems, but the nature of sonar datasets makes it difficult to collect enough data for training. Methods based on mathematical modeling have mainly been used to compensate for insufficient active sonar data; however, such methods have limitations in accurately simulating complex underwater phenomena. Therefore, in this paper, we propose a sonar signal synthesis method based on a deep neural network. To apply a neural network model to sonar signal synthesis, the proposed method adapts the attention-based encoder and decoder, the main modules of the Tacotron model widely used in speech synthesis, to sonar signals. A signal more similar to the actual signal can be synthesized by training the proposed model on a dataset collected by placing a simulated target in an actual marine environment. To verify the performance of the proposed method, a Perceptual Evaluation of Audio Quality test was conducted, and the score difference from the actual signal was within -2.3 across a total of four different environments. These results demonstrate that the active sonar signal generated by the proposed method approximates the actual signal.
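A heavily simplified sketch of an attention-based encoder-decoder of the kind referenced above, written here as a generic sequence-to-sequence frame predictor. All module choices, dimensions, and input meanings are assumptions for illustration; this is not the authors' adapted Tacotron model.

```python
# Minimal sketch (not the proposed model): an attention-based encoder-decoder
# that maps a conditioning sequence to output signal frames, in the spirit of
# Tacotron-style synthesis. Sizes and inputs are illustrative assumptions.
import torch
import torch.nn as nn

class Seq2SeqSynth(nn.Module):
    def __init__(self, in_dim=40, out_dim=80, hid=128):
        super().__init__()
        self.encoder = nn.GRU(in_dim, hid, batch_first=True)
        self.attn = nn.MultiheadAttention(hid, num_heads=4, batch_first=True)
        self.decoder = nn.GRU(out_dim, hid, batch_first=True)
        self.proj = nn.Linear(hid, out_dim)            # projects to an output frame

    def forward(self, cond, prev_frames):
        enc, _ = self.encoder(cond)                    # encode conditioning features
        dec, _ = self.decoder(prev_frames)             # decode from previous frames (teacher forcing)
        ctx, _ = self.attn(dec, enc, enc)              # attend decoder states over encoder outputs
        return self.proj(dec + ctx)                    # predicted frames

model = Seq2SeqSynth()
cond = torch.randn(2, 50, 40)       # hypothetical features describing target/environment
frames = torch.randn(2, 120, 80)    # previous output frames (spectrogram-like)
target = torch.randn(2, 120, 80)
loss = nn.MSELoss()(model(cond, frames), target)
loss.backward()
```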

Effect of food-related lifestyle, and SNS use and recommended information utilization on dining out (혼밥 및 외식소비 관련 식생활라이프스타일과 SNS 이용 및 추천정보활용의 영향)

  • Jin A Jang
    • Journal of Nutrition and Health / v.56 no.5 / pp.573-588 / 2023
  • Purpose: This study aimed to examine social networking service (SNS) use and recommended information utilization (SURU) according to the food-related lifestyles (FRLs) of consumers and to analyze how the interaction between FRL and SURU affects the practice of eating alone and visiting restaurants. Methods: Data on 4,624 adults in their 20s to 50s were collected from the 2021 Consumer Behavior Survey for Food. Statistical methods included factor analysis, K-means cluster analysis, the complex samples general linear model, the complex samples Rao-Scott χ2 test, and the general linear model. Results: Three factors were extracted from the FRL data: convenience pursuit, rational consumption pursuit, and gastronomy pursuit, and the subjects were classified into three groups, namely the rational consumption, convenient gastronomy, and smart gourmet groups. An examination of the difference in SURU according to the FRL showed that the smart gourmet group had the highest score. Analysis of the effects of FRL and SURU on eating alone revealed that both the main effect and the interaction effect were significant (p < 0.01, p < 0.001). The higher the SURU, the higher the frequency of eating alone in the convenience pursuit and gastronomy pursuit groups. The main and interaction effects of FRL and SURU on the frequency of eating out were also significant (p < 0.01, p < 0.001). In all FRL groups, the higher the SURU level, the higher the frequency of visiting restaurants; the two groups with convenience and gastronomic tendencies showed a steeper increase. Conclusion: This study provides important basic data for research on consumer behavior related to food SNS, market segmentation of restaurant consumers, and the development of marketing strategies using SNS.
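A minimal sketch of the factor-analysis-then-clustering step named in the Methods, shown as a generic scikit-learn workflow on hypothetical survey items. It is not the authors' SPSS complex-samples procedure; the file name, item columns, and the three-factor / three-cluster choice are taken loosely from the abstract.

```python
# Minimal sketch (not the authors' analysis): extract food-related lifestyle
# factors from survey items, then cluster respondents with K-means.
# "frl_items.csv" and its columns are hypothetical Likert-scale items.
import pandas as pd
from sklearn.decomposition import FactorAnalysis
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

survey = pd.read_csv("frl_items.csv")
X = StandardScaler().fit_transform(survey)

factors = FactorAnalysis(n_components=3, random_state=0).fit_transform(X)
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(factors)

survey["frl_group"] = clusters        # e.g., rational consumption / convenient gastronomy / smart gourmet
print(survey["frl_group"].value_counts())
```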

A Study on Mission Critical Factors for Software Test Enhancement in Information Technologies Development of Public Sector (Mission Critical 공공 정보화 구축 시험평가 개선 지표 연구)

  • Lee, Byung-hwa; Lim, Sung-ryel
    • Journal of Internet Computing and Services / v.16 no.6 / pp.97-107 / 2015
  • Korea has ranked first in the UN e-Government Survey for three consecutive years. In keeping with this accomplishment, budget execution has grown consistently in accordance with Korea's Government 3.0 policy and vision, leading to an increase in large informatization projects. Especially in mission-critical public-sector infrastructure that affects many people, there is growing demand for high-quality information systems built with new technologies to meet the complex needs of citizens. The national defense information system, a representative domain in this area, has established high military competency by applying breakthrough technologies. Network-oriented national defense knowledge informatization was set as the vision for efficient national defense management, and efforts have been made to realize this vision by advancing the national defense information system and its informatization implementation framework. This research studies new quality indicators for the test and evaluation (T&E) of national defense informatization projects, a representative example of mission-critical public-sector infrastructure. We studied international standards and guidelines, analyzed actual T&E cases, and applied them to the inspection items currently in use under the e-Government Act (Act No. 12346, announced 2014.1.28, enforced 2014.7.29). As a result of a productivity analysis, based on the hypothesis that the suggested model is applied to T&E of national defense informatization projects, we confirmed that T&E productivity can be enhanced by assessing reliability, expertise, and safety as evaluation factors.

Technique for Concurrent Processing Graph Structure and Transaction Using Topic Maps and Cassandra (토픽맵과 카산드라를 이용한 그래프 구조와 트랜잭션 동시 처리 기법)

  • Shin, Jae-Hyun
    • KIPS Transactions on Software and Data Engineering / v.1 no.3 / pp.159-168 / 2012
  • Relationships have become an important factor in new IT environments such as SNS, cloud, and Web 3.0, and these relationships generate transactions. However, existing relational databases and graph databases do not process both the graph structures representing these relationships and the transactions. In this paper, we propose a technique that can process graph structures and transactions concurrently in a scalable complex network system. The proposed technique simultaneously stores and navigates graph structures and transactions using the Topic Maps data model. Topic Maps is an ontology language for implementing the Semantic Web (Web 3.0); it has been used to navigate information through associations between information resources. The architecture of the proposed technique was designed and implemented using Cassandra, a column-oriented NoSQL database, to ensure that it can handle big-data-scale workloads through distributed processing. Finally, experiments compared storage and query processing between a typical RDBMS (Oracle) and the proposed technique on the same data source and the same queries. The results show that relationships can be expressed without joins, making the proposed technique a sufficient alternative to the RDBMS in this role.
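A minimal sketch of how topic-map-style associations might be laid out in Cassandra so that a node's relationships are read back without joins. The keyspace, table, and column names are hypothetical and use the Python cassandra-driver; this is not the paper's implementation.

```python
# Minimal sketch (not the paper's design): storing topics and their associations
# in Cassandra so that one topic's relationships are a single partition read.
# Keyspace, table, and column names are hypothetical.
from cassandra.cluster import Cluster

session = Cluster(["127.0.0.1"]).connect()
session.execute("""
    CREATE KEYSPACE IF NOT EXISTS topicmap
    WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1}""")
session.set_keyspace("topicmap")
session.execute("""
    CREATE TABLE IF NOT EXISTS association (
        topic_id text, assoc_type text, target_id text, tx_id text,
        PRIMARY KEY (topic_id, assoc_type, target_id))""")

# Write one relationship (edge) together with the transaction that produced it.
session.execute(
    "INSERT INTO association (topic_id, assoc_type, target_id, tx_id) VALUES (%s, %s, %s, %s)",
    ("user:alice", "follows", "user:bob", "tx-0001"))

# Navigating alice's relationships requires no join: one partition is read.
for row in session.execute(
        "SELECT assoc_type, target_id FROM association WHERE topic_id = %s", ("user:alice",)):
    print(row.assoc_type, row.target_id)
```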

The Impact of CPO Characteristics on Organizational Privacy Performance (개인정보보호책임자의 특성이 개인정보보호 성과에 미치는 영향)

  • Wee, Jiyoung; Jang, Jaeyoung; Kim, Beomsoo
    • Asia Pacific Journal of Information Systems / v.24 no.1 / pp.93-112 / 2014
  • As personal data breaches have become a problem both domestically and globally, the number of organizations appointing chief privacy officers (CPOs) is increasing. Related Korean laws, the 'Personal Data Protection Act' and the 'Act on Promotion of Information and Communications Network Utilization and Information Protection, etc.', require organizations that process personal data to appoint CPOs. Research on the characteristics and role of the CPO is called for as the importance of the position is increasingly emphasized. There is much research, based on the upper echelon theory, on the role of top management and its impact on organizational performance. This study investigates how the characteristics of the CPO influence organizational privacy performance. The definition of the CPO varies depending on industry, organization size, and the required responsibility and authority. This study defines the CPO as 'a person who takes responsibility for all duties related to handling the organization's privacy.' This research assumes that CPO characteristics such as role, personality, and background knowledge influence organizational privacy performance, and it applies to the CPO the parts of upper echelon research that relate executives' (CEOs, CIOs, etc.) characteristics to performance. First, following Mintzberg's and other managerial role classifications, the information, strategic, and diplomacy roles are defined as the roles of the CPO. Second, from the 'Big Five' taxonomy of personality proposed in 1990, extraversion and conscientiousness are drawn as the personality characteristics of the CPO. Third, prior research suggests that combined knowledge of technology, law, and business is necessary for the CPO; technical, legal, and business background knowledge are therefore drawn as the background knowledge of the CPO. To test this model empirically, 120 samples of data collected from CPOs of domestic organizations were used. Factor analysis was carried out, convergent and discriminant validity were verified using SPSS and SmartPLS, and the causal relationships between the CPO's role, personality, and background knowledge and organizational privacy performance were analyzed. The results show that the CPO's diplomacy role and strategic role have significant impacts on organizational privacy performance. This reveals that the CPO's active communication with other organizations is needed, and that a differentiated privacy policy or strategy is also important. Legal background knowledge and technical background knowledge were also found to be significant determinants of organizational privacy performance. In addition, the CPO's conscientiousness has a positive impact on organizational privacy performance. The practical implications of this study are as follows. First, the research can serve as a yardstick when companies select CPOs and vest authority in them. Second, CPOs themselves can use these results to judge which abilities to concentrate on in developing their careers. Cultural and social values, citizens' consensus on the right to privacy, and the expected role of the CPO will change over time. Future long-term, time-series research can reveal these changes and offer practical implications for information privacy policy making in government and private organizations.

Random Noise Addition for Detecting Adversarially Generated Image Dataset (임의의 잡음 신호 추가를 활용한 적대적으로 생성된 이미지 데이터셋 탐지 방안에 대한 연구)

  • Hwang, Jeonghwan; Yoon, Ji Won
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.12 no.6 / pp.629-635 / 2019
  • In deep learning models, differentiation is implemented by error back-propagation, which enables the model to learn from errors and update its parameters. Taking advantage of huge improvements in computing power, this can find globally (or locally) optimal parameter values even in complex models. However, deliberately generated data points can 'fool' models and degrade performance measures such as prediction accuracy. Not only do these adversarial examples reduce performance, they are also not easily detectable by the human eye. In this work, we propose a method for detecting adversarial datasets through random noise addition. We exploit the fact that when random noise is added, the prediction accuracy of a non-adversarial dataset remains almost unchanged, whereas that of an adversarial dataset changes. In a simulation experiment, we set the attack method (FGSM, saliency map) and the noise level (0-19, with a maximum pixel value of 255) as independent variables and the difference in prediction accuracy after noise addition as the dependent variable. We succeeded in extracting a threshold that separates non-adversarial from adversarial datasets and used it to detect adversarial datasets.
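A minimal sketch of the detection idea described above: add random noise of a chosen level to a candidate image dataset and flag it as adversarial when the accuracy shift exceeds a threshold. The model object, datasets, and the particular threshold value are placeholders, not the authors' experimental setup.

```python
# Minimal sketch (not the authors' experiment): detect a suspect image dataset
# by measuring how much prediction accuracy changes after random noise is added.
# `model` is any classifier with a predict() method; the threshold is hypothetical.
import numpy as np

def accuracy(model, images, labels):
    preds = model.predict(images).argmax(axis=1)
    return float((preds == labels).mean())

def accuracy_shift(model, images, labels, noise_level, seed=0):
    """Accuracy change after adding integer noise in [-noise_level, noise_level]."""
    rng = np.random.default_rng(seed)
    noise = rng.integers(-noise_level, noise_level + 1, size=images.shape)
    noisy = np.clip(images.astype(np.int32) + noise, 0, 255).astype(images.dtype)
    return abs(accuracy(model, images, labels) - accuracy(model, noisy, labels))

def looks_adversarial(model, images, labels, noise_level=10, threshold=0.05):
    # Clean data barely moves; adversarially perturbed data (e.g., FGSM,
    # saliency-map attacks) typically shows a larger accuracy shift.
    return accuracy_shift(model, images, labels, noise_level) > threshold
```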

Evaluation Research on the Protection and Regeneration of the Urban Historical and Cultural District of Pingjiang Road, Suzhou, China (중국 쑤저우 평강로 도시역사문화거리 보존 및 재생사업 평가연구)

  • Geng, Li; Yoon, Ji-Young
    • The Journal of the Korea Contents Association / v.21 no.5 / pp.561-580 / 2021
  • This study analyzes the historical and cultural streets of Pingjiang Road in the city of Suzhou by examining the development and conservation of the area, and it investigates the street's development, reorganization, and current state. The paper comprehensively compares, collates, and investigates four historical and cultural districts, including Insadong and Samcheong-dong in South Korea and South Luogu Lane in China. Based on initial research and analysis, the paper frames cultural, economic, and societal perspectives as non-physical factors, and spatial structure, road structure, and building maintenance as physical factors. The study is significant in that it provides an evaluation model for the preservation and regeneration of historical and cultural streets by presenting a view of the combined development of non-physical and physical elements on Pingjiang Road. In addition, further optimization and targeted research are needed in areas that remain insufficient, such as the maintenance and development of visitor programs and signage systems and the continuous development of historical and cultural network platforms, combined with on-site surveys, so that basic reference data for the street can be provided.