• Title/Summary/Keyword: model complexity

Search Result 1,977

Estimation of the Effects of Daily Walking Hours and Days on the Mental Health of Urban Residents - The Case in Seoul - (주거지역 가로환경 및 일상 걷기가 정신 건강에 미치는 영향 - 서울시 대상으로 -)

  • Koo, Bonyu;Baek, Seungjoo;Yoon, Heeyeun
    • Journal of the Korean Institute of Landscape Architecture
    • /
    • v.52 no.1
    • /
    • pp.87-100
    • /
    • 2024
  • This study aimed to investigate the impact of the quality of the street environment in residential areas on the mental health of urban residents, considering the frequency of street use. Using a zero-inflated negative binomial regression model, the study analyzed the influence of walking frequency and the street environment on depressive symptoms of urban residents. The research focused on Seoul, South Korea, in 2017, with depressive symptoms as the dependent variable and street environment variables, walking variables, and individual characteristics as independent variables. Additionally, the study explored the interaction effect of street greenery and walking frequency to analyze the synergistic impact of walking in green spaces on mental health. The findings indicate that a higher ratio of street green areas is associated with fewer depressive symptoms. Increased walking frequency is linked to a reduction in depressive symptoms or a weaker manifestation of such symptoms. The interaction effect confirms that more frequent walking in green spaces is associated with weaker depressive symptoms. A lower ratio of visual complexity is also correlated with reduced depressive symptoms. This study contributes to addressing urban residents' mental health issues at the community level by emphasizing the importance of the street green environment in residential areas.
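
A minimal sketch (not the authors' code; the file and column names are hypothetical) of fitting a zero-inflated negative binomial model with a street-greenery by walking-frequency interaction, as the abstract describes, using statsmodels:

```python
# Sketch only: assumes a survey table with hypothetical column names.
import pandas as pd
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedNegativeBinomialP

df = pd.read_csv("seoul_survey_2017.csv")          # assumed file name
df["green_x_walk"] = df["green_ratio"] * df["walk_freq"]   # interaction term

X = sm.add_constant(df[["green_ratio", "walk_freq", "green_x_walk",
                        "visual_complexity", "age", "income"]])
y = df["depressive_symptoms"]                      # count outcome

# exog_infl models the excess zeros (residents reporting no symptoms at all)
zinb = ZeroInflatedNegativeBinomialP(y, X, exog_infl=X, p=2)
result = zinb.fit(method="bfgs", maxiter=500, disp=False)
print(result.summary())
```

In such a model, a negative coefficient on the interaction term would correspond to the reported finding that more frequent walking in greener streets is associated with weaker depressive symptoms.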

The Effect of Technostress on User Resistance and End-User Performance (테크노스트레스가 사용자 저항과 성과에 미치는 영향)

  • Kyoung-June Kim;Ki-Dong Lee
    • Information Systems Review
    • /
    • v.19 no.4
    • /
    • pp.63-85
    • /
    • 2017
  • Recent information technology has achieved remarkable progress in almost all areas where it can be applied. However, this technology also causes technostress, such as fear and pressure on individuals, due to events such as the threat of job loss. Technostress is becoming an important factor that can affect user performance and productivity in a future society centered on information technology, and it deserves considerable study in both academic and practical settings. Because the effects of technostress on individual performance remain ambiguous, academic research is needed to establish these effects. This study aimed to clarify the direct and indirect effects of technostress on information technology end-users. We developed a research model that integrates innovation resistance and technostress theory based on previous studies and analyzed a questionnaire of 317 respondents. Analysis with a PLS structural equation model and the procedure of Baron and Kenny (1986) indicated that rapid change, connectivity, reliability, and complexity are crucial factors driving technostress with information technology. Technostress was found to affect end-user performance only indirectly, through innovation resistance. This study provides new implications for the relationship between technostress and performance or productivity in the IS field.
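
A rough sketch (assumed data and construct names) of the Baron and Kenny (1986) regression steps the abstract cites, here testing whether innovation resistance mediates the effect of technostress on end-user performance; the paper's actual analysis uses a PLS structural equation model, which this sketch does not reproduce:

```python
# Sketch only: column names and the survey file are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("technostress_survey.csv")   # assumed 317-respondent data

# Step 1: the predictor must affect the mediator
m1 = smf.ols("resistance ~ technostress", data=df).fit()
# Step 2: the predictor must affect the outcome
m2 = smf.ols("performance ~ technostress", data=df).fit()
# Step 3: mediator and predictor together; mediation is indicated when the
# direct technostress coefficient shrinks or becomes non-significant
m3 = smf.ols("performance ~ technostress + resistance", data=df).fit()

for name, m in [("step1", m1), ("step2", m2), ("step3", m3)]:
    print(name, m.params.round(3).to_dict())
```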

Scalable Collaborative Filtering Technique based on Adaptive Clustering (적응형 군집화 기반 확장 용이한 협업 필터링 기법)

  • Lee, O-Joun;Hong, Min-Sung;Lee, Won-Jin;Lee, Jae-Dong
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.2
    • /
    • pp.73-92
    • /
    • 2014
  • An Adaptive Clustering-based Collaborative Filtering Technique was proposed to solve the fundamental problems of collaborative filtering, such as the cold-start problem, the scalability problem, and the data sparsity problem. Previous collaborative filtering techniques make recommendations based on the predicted preference of a user for a particular item, using a similar-item subset and a similar-user subset composed from users' preferences for items. For this reason, if the density of the user preference matrix is low, the reliability of the recommendation system decreases rapidly and composing the similar-item and similar-user subsets becomes more difficult. In addition, as the scale of the service increases, the time needed to create the similar-item and similar-user subsets grows geometrically, and the response time of the recommendation system increases accordingly. To solve these problems, this paper suggests a collaborative filtering technique that actively adapts its conditions to the model and adopts concepts from context-based filtering. The technique consists of four major methodologies. First, the items and the users are clustered according to their feature vectors, and an inter-cluster preference between each item cluster and user cluster is estimated. With this method, the run time for creating a similar-item or similar-user subset can be saved, the reliability of the recommendation system can be made higher than when only user preference information is used to create those subsets, and the cold-start problem can be partially solved. Second, recommendations are made using the previously composed item and user clusters and the inter-cluster preference between each item cluster and user cluster. In this phase, a list of items is made for a user by examining the item clusters in order of the size of the inter-cluster preference of the cluster to which the user belongs, and then selecting and ranking items according to the predicted or recorded user preference information. With this method, the creation of the recommendation model carries the highest load of the recommendation system, which minimizes the load at run time. Therefore, the scalability problem is addressed and a large-scale recommendation system can be operated with highly reliable collaborative filtering. Third, the missing user preference information is predicted using the item and user clusters, which mitigates the problem caused by the low density of the user preference matrix. Existing studies used either item-based prediction or user-based prediction; this paper improves on Hao Ji's idea of using both. The reliability of the recommendation service can be improved by combining the predictive values of both techniques according to the conditions of the recommendation model. By predicting the user preference based on the item or user clusters, the time required to predict the user preference is reduced, and missing user preferences can be predicted at run time. Fourth, the item and user feature vectors are updated by learning from subsequent user feedback. This phase applies normalized user feedback to the item and user feature vectors.
This method mitigates the problems caused by adopting concepts from context-based filtering, such as building item and user feature vectors from user profiles and item properties; those problems stem from the difficulty of quantifying the qualitative features of items and users. Therefore, the elements of the user and item feature vectors are made to match one to one, and when user feedback for a particular item is obtained, it is applied to the corresponding feature vector on the opposite side. The method was verified by comparing its performance with existing hybrid filtering techniques using two measures: MAE (Mean Absolute Error) and response time. On MAE, the technique was confirmed to improve the reliability of the recommendation system, and on response time it was found to be suitable for a large-scale recommendation system. This paper proposed an Adaptive Clustering-based Collaborative Filtering Technique with high reliability and low time complexity, but it has some limitations: because the technique focuses on reducing time complexity, a large improvement in reliability was not expected. The next step is to improve the technique with rule-based filtering.
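
A toy sketch (not the paper's implementation) of the cluster-based idea described above: users and items are clustered offline, an inter-cluster preference matrix is pre-computed, and run-time recommendation only ranks item clusters and the unrated items inside them:

```python
# Sketch only: random toy ratings stand in for a real user-item matrix.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
R = rng.integers(0, 6, size=(200, 100)).astype(float)   # user x item ratings, 0 = missing

user_clusters = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(R)
item_clusters = KMeans(n_clusters=10, n_init=10, random_state=0).fit_predict(R.T)

# inter-cluster preference: mean observed rating between each user/item cluster pair
pref = np.zeros((8, 10))
for uc in range(8):
    for ic in range(10):
        block = R[np.ix_(user_clusters == uc, item_clusters == ic)]
        observed = block[block > 0]
        pref[uc, ic] = observed.mean() if observed.size else 0.0

def recommend(user_id, top_n=5):
    """Rank item clusters by inter-cluster preference, then unrated items inside them."""
    uc = user_clusters[user_id]
    order = np.argsort(-pref[uc])                        # best item clusters first
    candidates = [i for ic in order
                  for i in np.where(item_clusters == ic)[0]
                  if R[user_id, i] == 0]
    return candidates[:top_n]

print(recommend(user_id=3))
```

The heavy clustering and inter-cluster preference computation happens once, offline, which reflects the paper's goal of pushing the load into model creation and keeping run-time response light.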

A MVC Framework for Visualizing Text Data (텍스트 데이터 시각화를 위한 MVC 프레임워크)

  • Choi, Kwang Sun;Jeong, Kyo Sung;Kim, Soo Dong
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.2
    • /
    • pp.39-58
    • /
    • 2014
  • As the importance of big data and related technologies continues to grow in industry, visualizing the results of processing and analyzing big data has become a highlighted topic. Visualization gives people an effective and clear way to understand analysis results, and it also serves as the GUI (Graphical User Interface) that supports communication between people and analysis systems. To ease development and maintenance, these GUI parts should be loosely coupled from the parts that process and analyze data, and implementing such a loosely coupled architecture requires design patterns such as MVC (Model-View-Controller), which minimizes coupling between the UI part and the data-processing part. Big data can be classified into structured and unstructured data, and visualizing structured data is relatively easy compared with unstructured data. Nevertheless, as the use and analysis of unstructured data have spread, visualization systems are usually developed separately for each project to overcome the limitations of traditional visualization systems designed for structured data. For text data, which covers a huge part of unstructured data, visualization is even more difficult. This stems from the complexity of text-analysis technologies such as linguistic analysis, text mining, and social network analysis, and from the fact that these technologies are not standardized, which makes it harder to reuse the visualization system of one project in other projects. We assume the reason is a lack of commonality in the design of visualization systems with expansion to other systems in mind. In this research, we suggest a common information model for visualizing text data and propose a comprehensive and reusable framework, TexVizu, for visualizing text data. First, we survey representative research in text visualization and identify common elements and common patterns across various cases. We then review and analyze these elements and patterns from three viewpoints: structural, interactive, and semantic. Based on this, we design an integrated model of text data that represents the elements for visualization. The structural viewpoint identifies structural elements of various text documents, such as title, author, and body. The interactive viewpoint identifies the types of relations and interactions between text documents, such as post, comment, and reply. The semantic viewpoint identifies semantic elements that are extracted from linguistic analysis of the text and represented as tags classifying entity types such as people, place or location, time, and event. We then extract and select common requirements for visualizing text data, categorized into four types: structure information, content information, relation information, and trend information. Each type of requirement comprises the required visualization techniques, data, and goal (what to know). These requirements are the key to designing a framework in which the visualization system remains loosely coupled from the data processing and analysis systems.
Finally, we designed a common text visualization framework, TexVizu, which is reusable and extensible across visualization projects by collaborating with various Text Data Loaders and Analytical Text Data Visualizers through common interfaces such as ITextDataLoader and IATDProvider. TexVizu comprises an Analytical Text Data Model, an Analytical Text Data Storage, and an Analytical Text Data Controller; the external components are specified by the interfaces required to collaborate with the framework. As an experiment, we also applied the framework to two text visualization systems: a social opinion mining system and an online news analysis system.
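
A small sketch (assumptions only, not the actual TexVizu code) of how the ITextDataLoader and IATDProvider interfaces and the model/controller separation described above could be expressed:

```python
# Sketch only: class and field names are illustrative, not the framework's API.
from abc import ABC, abstractmethod
from dataclasses import dataclass, field

@dataclass
class AnalyticalTextData:           # Model: structural, interactive, semantic elements
    title: str
    body: str
    relations: list = field(default_factory=list)    # e.g. post/comment/reply links
    tags: dict = field(default_factory=dict)          # e.g. {"people": [...], "place": [...]}

class ITextDataLoader(ABC):         # external component: fills the analytical data storage
    @abstractmethod
    def load(self, source: str) -> list[AnalyticalTextData]: ...

class IATDProvider(ABC):            # external component: serves analytical text data to a visualizer
    @abstractmethod
    def provide(self, query: str) -> list[AnalyticalTextData]: ...

class AnalyticalTextDataController:  # Controller: mediates between loader/provider and the view
    def __init__(self, loader: ITextDataLoader, provider: IATDProvider):
        self.loader, self.provider = loader, provider

    def render(self, source: str, query: str):
        self.loader.load(source)                   # populate the model
        for doc in self.provider.provide(query):   # the View would consume these
            print(doc.title, doc.tags)
```

Keeping the loader and provider behind abstract interfaces is what lets a controller like this stay unchanged when the analysis back end (opinion mining, news analysis, and so on) is swapped.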

Performance Evaluation of Machine Learning and Deep Learning Algorithms in Crop Classification: Impact of Hyper-parameters and Training Sample Size (작물분류에서 기계학습 및 딥러닝 알고리즘의 분류 성능 평가: 하이퍼파라미터와 훈련자료 크기의 영향 분석)

  • Kim, Yeseul;Kwak, Geun-Ho;Lee, Kyung-Do;Na, Sang-Il;Park, Chan-Won;Park, No-Wook
    • Korean Journal of Remote Sensing
    • /
    • v.34 no.5
    • /
    • pp.811-827
    • /
    • 2018
  • The purpose of this study is to compare machine learning and deep learning algorithms for crop classification using multi-temporal remote sensing data. To this end, the impacts of (1) hyper-parameters and (2) training sample size on machine learning and deep learning algorithms were compared and analyzed for Haenam-gun, Korea, and Illinois State, USA. In the comparison experiment, a support vector machine (SVM) was applied as the machine learning algorithm and a convolutional neural network (CNN) as the deep learning algorithm. In particular, a 2D-CNN considering two-dimensional spatial information and a 3D-CNN that extends the 2D-CNN with a time dimension were applied. The experiment showed that, across the various hyper-parameters considered, the optimal hyper-parameter values of the CNN defined in the two study areas were more similar than those of the SVM. Based on this result, although optimizing a CNN model takes much time, it is considered possible to apply transfer learning that extends an optimized CNN model to other regions. In the experiments with varying training sample size, the impact on the CNN was larger than on the SVM, and this impact was exaggerated in Illinois State, which has heterogeneous spatial patterns. In addition, the lowest classification performance of the 3D-CNN was observed in Illinois State, which is considered to be due to over-fitting caused by the complexity of the model. That is, although the training accuracy of the 3D-CNN model was high, its classification performance was relatively degraded by the heterogeneous patterns and the noise in the input data. This result implies that a proper classification algorithm should be selected considering the spatial characteristics of the study area, and that a large amount of training samples is necessary to guarantee higher classification performance with CNNs, particularly the 3D-CNN.
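
A schematic sketch (random toy data; the patch size, band count, and class count are assumed) of the type of comparison reported above: an SVM on per-pixel spectra versus a 2D-CNN on image patches, evaluated while the training sample size varies:

```python
# Sketch only: synthetic data stands in for the multi-temporal imagery.
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score
from tensorflow import keras

n_bands, patch, n_classes = 10, 5, 6                  # assumed bands, patch size, crop classes
X_patch = np.random.rand(2000, patch, patch, n_bands).astype("float32")
y = np.random.randint(0, n_classes, 2000)
X_pixel = X_patch[:, patch // 2, patch // 2, :]       # centre-pixel spectra for the SVM

for n_train in (200, 500, 1000):                      # impact of training sample size
    tr, te = slice(0, n_train), slice(1000, 2000)

    svm = SVC(kernel="rbf", C=10, gamma="scale").fit(X_pixel[tr], y[tr])
    svm_acc = accuracy_score(y[te], svm.predict(X_pixel[te]))

    cnn = keras.Sequential([
        keras.layers.Conv2D(32, 3, activation="relu", input_shape=(patch, patch, n_bands)),
        keras.layers.Conv2D(64, 3, activation="relu"),
        keras.layers.Flatten(),
        keras.layers.Dense(n_classes, activation="softmax"),
    ])
    cnn.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    cnn.fit(X_patch[tr], y[tr], epochs=5, batch_size=32, verbose=0)
    cnn_acc = cnn.evaluate(X_patch[te], y[te], verbose=0)[1]

    print(f"n_train={n_train}: SVM={svm_acc:.2f}, 2D-CNN={cnn_acc:.2f}")
```

With real imagery, the gap between the two curves as n_train shrinks is what the study uses to argue that CNNs, and especially the 3D-CNN, need larger training samples than the SVM.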

Development of Agent-based Platform for Coordinated Scheduling in Global Supply Chain (글로벌 공급사슬에서 경쟁협력 스케줄링을 위한 에이전트 기반 플랫폼 구축)

  • Lee, Jung-Seung;Choi, Seong-Woo
    • Journal of Intelligence and Information Systems
    • /
    • v.17 no.4
    • /
    • pp.213-226
    • /
    • 2011
  • In a global supply chain, the scheduling problems of large products such as ships, airplanes, space shuttles, assembled constructions, and automobiles are complicated by nature. New scheduling systems are often developed to reduce this inherent computational complexity; as a result, a problem can be decomposed into small sub-problems, each handled by an independent small scheduling system, which are then integrated back into the initial problem. As one of the authors experienced, DAS (Daewoo Shipbuilding Scheduling System) adopted a two-layered hierarchical architecture in which the individual scheduling systems, composed of a high-level dock scheduler, DAS-ERECT, and low-level assembly plant schedulers, DAS-PBS, DAS-3DS, DAS-NPS, and DAS-A7, each search for the best schedule under their own constraints. Moreover, the steep growth of communication technology and logistics makes it possible to introduce distributed multi-national production plants in which different parts are produced by designated plants. Therefore, vertical and lateral coordination among the decomposed scheduling systems is necessary. No standard coordination mechanism for multiple scheduling systems exists, even though various scheduling systems exist in the area of scheduling research. Previous research on coordination mechanisms has mainly focused on external conversation without a capacity model, and prior research in the agent area has focused heavily on agent-based coordination but has not been developed for the scheduling domain. Previous research on agent-based scheduling has paid ample attention to internal coordination of the scheduling process, which has not been efficient. In this study, we suggest a general framework for agent-based coordination of multiple scheduling systems in a global supply chain, with the purpose of designing a standard coordination mechanism. To do so, we first define an individual scheduling agent responsible for its own plant and a meta-level coordination agent involved with each individual scheduling agent, and we suggest variables and values describing these agents, represented in Backus-Naur Form. Second, we suggest scheduling-agent communication protocols for each agent topology, classified by system architecture, the existence or absence of a coordinator, and the direction of coordination. If a coordinating agent exists, an individual scheduling agent can communicate with another individual agent indirectly through the coordinator; otherwise, it must communicate with the other agent directly. To apply an agent communication language specifically to the scheduling coordination domain, we additionally define an inner language that suitably expresses scheduling coordination. The scheduling agent communication language is devised for communication among agents independent of domain and adopts three message layers: an ACL layer, a scheduling coordination layer, and an industry-specific layer. The ACL layer is a domain-independent outer language layer, the scheduling coordination layer contains the terms necessary for scheduling coordination, and the industry-specific layer expresses the industry specification.
Third, to improve the efficiency of communication among scheduling agents and avoid possible infinite loops, we suggest a look-ahead load balancing model that supports monitoring participating agents and analyzing their status. To build the look-ahead load balancing model, the status of participating agents must be monitored, and above all, the amount of shared information should be considered: if complete information is collected, the cost of updating and maintaining the shared information increases even though the frequency of communication decreases. Therefore, the level of detail and the updating period of shared information should be decided contingently. By means of this standard coordination mechanism, coordination processes of multiple scheduling systems can easily be modeled into a supply chain. Finally, we apply the mechanism to the shipbuilding domain and develop a prototype system consisting of a dock-scheduling agent, four assembly-plant-scheduling agents, and a meta-level coordination agent. A series of experiments using real-world data is used to empirically examine the mechanism. The results show that the effect of the agent-based platform on coordinated scheduling is evident in terms of the number of tardy jobs, tardiness, and makespan.
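
A simplified sketch (field names are invented) of a three-layer coordination message like the one described above, with a domain-independent ACL envelope, a scheduling-coordination layer, and an industry-specific shipbuilding payload:

```python
# Sketch only: illustrates the layered message structure, not the paper's exact grammar.
import json
from dataclasses import dataclass, asdict

@dataclass
class CoordinationMessage:
    # ACL layer: domain-independent envelope
    performative: str          # e.g. "request", "inform"
    sender: str
    receiver: str
    # scheduling coordination layer
    action: str                # e.g. "reschedule", "report-load"
    horizon: tuple             # planning window (start, end)
    # industry-specific layer (shipbuilding terms)
    content: dict              # e.g. block, dock, due date

msg = CoordinationMessage(
    performative="request", sender="meta-coordinator", receiver="DAS-A7",
    action="reschedule", horizon=("2011-09-01", "2011-12-31"),
    content={"block": "A7-13", "dock": "D1", "due": "2011-10-20"},
)
print(json.dumps(asdict(msg), indent=2))   # serialized message exchanged between agents
```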

Development of a Safety and Health Expense Prediction Model in the Construction Industry (건설업 산업안전보건관리비 예측 모델 개발 - 일반건설공사(갑)의 공사비 50억미만 공사를 대상으로 -)

  • Yeom, Dong Jun;Lee, Mi Young;Oh, Se Wook;Han, Seung Woo;Kim, Young Suk
    • Korean Journal of Construction Engineering and Management
    • /
    • v.16 no.6
    • /
    • pp.63-72
    • /
    • 2015
  • The importance of the appropriate use and procurement of the Safety and Health Expense has been increasing along with the recent increase of construction projects in height, size, and complexity. However, the current standards for calculating the Safety and Health Expense have shown limitations in reflecting the properties and environment of a construction project because of the classification method of the Safety and Health Expense Rate. Therefore, the purpose of this study is to develop a prediction model for the Safety and Health Expense that can take the different environments and properties of construction projects into account. The study uses multiple regression analysis on the Safety and Health Expense of Ordinary(A) projects of less than 5 billion WON. The results show that multiple regression analysis reduces the error rate to 4.38%, whereas the current standard calculation method shows 18.48%. Therefore, the suggested model provides reliable Safety and Health Expense prediction values that consider the properties of the project, and the results are expected to contribute to effective safety management by providing the appropriate amount of Safety and Health Expense for a project. In this study, only projects of less than 5 billion WON have been considered in the analysis; therefore, more data are required in future studies to suggest an overall Safety and Health Expense prediction model that covers the whole construction industry.
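
A minimal sketch (the predictor names are hypothetical) of the multiple-regression approach described above, predicting the Safety and Health Expense from project attributes and reporting a mean error rate against actual expenses:

```python
# Sketch only: assumes a table of historical projects with hypothetical columns.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("safety_expense_projects.csv")    # assumed historical project data

model = smf.ols(
    "safety_expense ~ construction_cost + duration_months + floors + site_area",
    data=df,
).fit()
print(model.summary())

# mean error rate of the fitted model versus actual expenses, as in the comparison above
pred = model.predict(df)
error_rate = ((pred - df["safety_expense"]).abs() / df["safety_expense"]).mean() * 100
print(f"mean error rate: {error_rate:.2f}%")
```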

Characteristics of Pollution Loading from Kyongan Stream Watershed by BASINS/SWAT. (BASINS/SWAT 모델을 이용한 경안천 유역의 오염부하 배출 특성)

  • Jang, Jae-Ho;Yoon, Chun-Gyeong;Jung, Kwang-Wook;Lee, Sae-Bom
    • Korean Journal of Ecology and Environment
    • /
    • v.42 no.2
    • /
    • pp.200-211
    • /
    • 2009
  • A mathematical modeling program called the Soil and Water Assessment Tool (SWAT), developed by the USDA, was applied to the Kyongan stream watershed. It was run under the BASINS (Better Assessment Science for Integrating point and Non-point Sources) program, and the model was calibrated and validated using KTMDL monitoring data from 2004~2008. The model efficiency for flow ranged from very good to fair in comparisons between simulated and observed data, and for the water quality parameters it fell in a range similar to that of flow. The model reliability and performance were within expectation considering the complexity of the watershed and pollutant sources. In the yearly (2004~2008) pollutant load estimation, pollutant loadings in 2006 were higher than in the other years because of high precipitation and flow. Average non-point source (NPS) pollution rates were 30.4%, 45.3%, and 28.1% for SS, TN, and TP, respectively. The NPS pollutant loading for SS, TN, and TP during the monsoon rainy season (June to September) was about 61.8~88.7% of the total NPS pollutant loading, and the flow volume was also in a similar range. SS concentration depended on precipitation and pollution loading patterns, but TN and TP concentrations were not necessarily high during the rainy season and showed a decreasing trend with increasing water flow. SWAT under BASINS was applied to the Kyongan stream watershed successfully without difficulty, and it was found that the model could be used conveniently to assess watershed characteristics and to estimate pollutant loading, including point and non-point sources, at the watershed scale.
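
A small sketch (assumed column names) of one common way to quantify the model efficiency used when calibrating and validating SWAT flow against monitoring data; the Nash-Sutcliffe efficiency shown here is a standard criterion, though the abstract does not state which statistic the authors used:

```python
# Sketch only: observed and simulated flow series come from a hypothetical file.
import numpy as np
import pandas as pd

df = pd.read_csv("kyongan_flow_2004_2008.csv")     # assumed observed vs simulated flow
obs = df["observed_flow"].to_numpy()
sim = df["simulated_flow"].to_numpy()

# Nash-Sutcliffe efficiency: 1.0 is a perfect fit; lower values grade toward fair/poor
nse = 1 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)
print(f"Nash-Sutcliffe efficiency: {nse:.2f}")
```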

A Study on e-Healthcare Business Model: Focusing on Business Ecosystem Approach (e헬스케어 비즈니스모델에 관한 연구: 비즈니스생태계 접근 중심으로)

  • Kim, Youngsoo;Jung, Jai-Jin
    • Asia-Pacific Journal of Business Venturing and Entrepreneurship
    • /
    • v.14 no.1
    • /
    • pp.167-185
    • /
    • 2019
  • As most G-20 countries expect medical spending to grow rapidly over the next few decades, the burden of healthcare costs continues to grow globally due to the increase in the elderly population and chronic illnesses and the ongoing quality improvement of healthcare services. Under the rapidly changing technological environment of healthcare and IT convergence, the problem may become even bigger if it is not properly recognized and prepared for. In the context of this paradigm shift and the growing problems of the medical field, complex responses in technical, institutional, and business aspects are urgently needed. The key is to derive a business model that is appropriate for businesses that integrate IT into the medical field. With the arrival of the 4th industrial revolution, new technologies such as the Internet of Things have been applied to e-healthcare, and the need for new business models has emerged. In the e-healthcare of the Internet era, the traditional firm-based business model prevailed; however, due to the dynamics and complexity of the Internet of Things, a business ecosystem-based approach is needed. In this paper, as a result of research on the e-healthcare business ecosystem based on emerging technologies such as the Internet of Things, we present and analyze the major success factors of the ecosystem based on a three-layer structure of the e-healthcare business ecosystem. The three layers were defined as (1) Infrastructure Layer, (2) Character Layer, and (3) Stakeholder Layer. As the key success factors for the e-healthcare business ecosystem, the following four factors are suggested: (1) introduction of the iHealthcare concept, (2) expansion of the business ecosystem, (3) innovation of the business ecosystem change process, and (4) innovation of business ecosystem leadership.

Planting Design Strategy for a Large-Scale Park Based on the Regional Ecological Characteristics - A Case of the Central Park in Gwangju, Korea - (지역의 생태적 특성을 반영한 대형공원의 식재계획 전략 - 광주광역시 중앙근린공원을 사례로 -)

  • Kim, Miyeun
    • Journal of the Korean Institute of Landscape Architecture
    • /
    • v.49 no.3
    • /
    • pp.11-28
    • /
    • 2021
  • Because of its size and complex characteristics, a large park is not often newly created within an existing urban area, and there has been a lack of research on planting design methodologies for large parks. This study aims to elucidate how ecological ideas can be applied to planting practice from a designer's perspective and eventually suggests a planting design framework for an actual case, the Central Park in the City of Gwangju. The framework consists of a spatial structure of planting areas intended to connect and unite the separated green patches, adapt to changes in existing vegetation patterns, maintain the visual continuity of the landscape, and organize the whole open-space system. It can be used in the spatial planning and planting design phase, in which the landscape designer applies it flexibly according to the design intentions and an understanding of the physical, social, and aesthetic characteristics of the site. The significance of this approach is, first, that it can maintain the ecological and visual consistency of both the existing and introduced landscapes as a whole in spite of their intrinsic complexity and largeness, and second, that it can help respond efficiently to unexpected changes in the landscape. In the case study, a comprehensive site analysis is conducted before developing the framework. In particular, wetlands and grasslands are identified as potential wildlife habitat, which critically determines the vegetation patterns of the green area. Accordingly, lists of plant communities are presented along with the planting scheme for their shape, layout, and relations. The model of each plant community is developed in response to the structure of the surrounding natural landscape; however, it is not designed to evolve into a specific plant community but is rather a conceptual model of ecological potentials. Therefore, the model can be applied with great flexibility, using other plant communities as alternatives as long as their characteristics suit the physical conditions. Even though this research provides valuable implications for landscape planning and design in similar circumstances, there are several limitations to be overcome in further research. First, more sufficient field surveys of wildlife habitats are needed, which would help generate a more concrete planting model. Second, a landscape management plan should be included considering the condition of the existing forest, in particular the afforested landscapes. Last, there is a lack of quantitative data for the models of some plant communities.