• Title/Summary/Keyword: Computer System

A study on the employment preparation cost and attitude of college student for Job-seeking (국내 대학생의 취업태도 및 취업준비 비용에 관한 연구)

  • Chung, Bhum-Suk;Jeong, Hwa-Min
    • Management & Information Systems Review
    • /
    • v.33 no.4
    • /
    • pp.1-19
    • /
    • 2014
  • This study focuses on university students' job attitudes and the cost of employment preparation. Nowadays, many university and college students spend considerable money on employment preparation, such as foreign language study, obtaining various certificates, orthodontic treatment, and clothing for employment interviews. This study surveyed the employment preparation costs and job attitudes of 484 university and college students; the collected data were analyzed with SPSS 12.0 using frequency analysis, factor analysis, reliability assessment, correlation tests, t-tests, and one-way ANOVA. University students spent more on employment preparation, such as language training abroad, private training, and clothing, than college students. Students in the social sciences spent more on language training abroad and clothing than students in computer science and design. Female students spent more than male students on orthodontic treatment. The costs of language training abroad, private training, and clothing were affected by the socioeconomic background of the student's home. Regarding job attitudes, university students felt more positive than college students about employment efficacy and their perception of the educational environment. In sum, the cost of employment preparation differed by university type, major, sex, and the socioeconomic background of the home, and employment efficacy and perception of the educational environment also differed between university and college students. Therefore, to improve job attitudes and develop students' ability to prepare for employment, educational programs should be arranged in schools and continued research is needed.

Flexible Specialization: A New Paradigm for Modern Industrial Society ? (柔軟的 專門化(Flexible Specialization) : 현대 産業社會의 새로운 패러다임 ?)

  • Lee, Deog-An
    • Journal of the Korean Geographical Society
    • /
    • v.28 no.2
    • /
    • pp.148-162
    • /
    • 1993
  • There is much speculation that modern capitalist society is undergoing a fundamental and qualitative change towards flexible specialization. The purpose of this study is to examine this hypothesis. This paper focuses on: the idea of flexible specialization; the significance of this transition; industrial districts; and the implications of this new production system for Korean industrial space. The main arguments of this study are as follows. First, as different groups of researchers apply the idea of flexible specialization according to their own specifications, the current debate on the topic has not been very fruitful. Not surprisingly, the concept of flexible specialization has been conflated with subcontracting. This integration of subcontracting into flexible specialization systems is inappropriate, however, because the two concepts have different historical contexts. The other cause of this controversy is the concept's inherent weakness, conceptual ambiguity; thus, today's flexibility becomes tomorrow's rigidity. Secondly, the transition towards flexible specialization has been only partially achieved, even in advanced capitalist countries. The application of dualistic explanatory frameworks, such as rigidity versus flexibility, mass production versus small-lot multi-product production, and de-skilling versus re-skilling, has greatly exaggerated the transformation from Fordism to post-Fordism; there is no intermediary between the two poles. Considering that the workers allocated to Fordist mass-production assembly lines are not as numerous as one might imagine, the shift from mass to flexible production has only limited implications for the transformation of the capitalist economy. Thirdly, the 'industrial district' controversy has contributed to highlighting the importance of small firms and areas as production spaces. The agglomeration of small firms in specific areas is common in Korea, but it is quite different from the industrial district based on flexible specialization. The Korean phenomenon stems from close interactions with a major parent firm rather than interactions among flexible, specialized, autonomous, and technology-intensive small firms. Most Korean subcontractors are still low-skilled, labour-intensive, and heavily dependent on their major parent firms. Thus, the assertion that the Seoul Metropolitan Area has adopted flexible specialization has no basis. Fourthly, the main concern of flexible specialization is small firms. However, the corporate organizations that need product diversification and technological specialization are oligopolistic large corporations typified by multinational corporations, and it is precisely these organizations that mostly adopt Fordist mass-production methods. The problem of product diversification will resolve itself if economic internationalization progresses further; what matters more for business success is the quality and price competitiveness of firms rather than product diversification. Lastly, in order to dispel further misunderstanding of this issue, it is imperative that the conceptual ambiguity be resolved urgently. This study recommends the adoption of more specific and direct terminology (such as factory automation, computer design, out-sourcing, the exploitation of part-time labour, and job redesign) rather than ideological terms (such as Taylorism, Fordism, neo-Taylorism, neo-Fordism, post-Fordism, flexible specialization, and peripheral post-Fordism). As the debate on this topic has just started, we still have a long way to go until consensus is reached.

A New Approach to Mobile Device Design - focused on the Communication Tool & it's GUI for Office Workers in the Near Future - (모바일 기기 디자인의 새로운 접근 - 근 미래 작업환경에서의 커뮤니케이션 도구 디자인과 GUI 연구를 중심으로 -)

  • Yang, Sung-Ho
    • Archives of design research
    • /
    • v.19 no.2 s.64
    • /
    • pp.31-42
    • /
    • 2006
  • This study originates from the following questions: what will the office of the future be like, and what technology will we rely upon most to communicate with colleagues or to access business information? In today's office environment, new technology has compelled a new work paradigm and has greatly expanded the individual's capability to work more productively and efficiently. However, even though new computer technology has changed the business world rapidly, it is very difficult to perceive the changes that have taken place. The aim of the study was to create a mobile tool for office workers that successfully supports their work and communication, exploring the future work environment from a five-year technological and social perspective. As a result of this study, the bON brings new visions to mobile professionals via various interfaces. The bON, a mobile device, is both a work system and a communication system for office workers. As an integrated tool for working and communicating, it forms the basis for a mobile information gateway that is equally capable of functioning as a mobile desk. The basic underlying idea is that all formal meeting places and hallways in the office are equipped with large wall-mounted screens; the bON collaborates with these media in various ways to enhance productivity and efficiency. The main challenge for the bON, enhancing both mobility and the quality of information, is met by using new technology, including bendable and flexible displays, soft-material displays, and sensors. To answer the strong need for mobility, the overall size of the device is fairly small, with the screen rolled inside the device. For the graphical user interface, moreover, a new technique called the Multi-layering Interface was adopted to stretch the user's visual limits and suggest a new direction in designing mobile devices equipped with small displays.

A Construction of TMO Object Group Model for Distributed Real-Time Services (분산 실시간 서비스를 위한 TMO 객체그룹 모델의 구축)

  • 신창선;김명희;주수종
    • Journal of KIISE: Computer Systems and Theory
    • /
    • v.30 no.5_6
    • /
    • pp.307-318
    • /
    • 2003
  • In this paper, we design and construct a TMO object group that provides guaranteed real-time services in distributed object computing environments, and verify that the model executes correct distributed real-time services. The TMO object group we suggest is based on TINA's object group concept. The model consists of TMO objects with real-time properties and components that support the object management service and the real-time scheduling service within the TMO object group. TMO objects can be duplicated or non-duplicated across distributed systems. Our model can execute guaranteed distributed real-time services on COTS middleware without being restricted to a specific ORB or operating system. To achieve the goals of our model, we defined the concept of the TMO object and the structure of the TMO object group, and designed and implemented the functions and interactions of the components in the object group. The TMO object group includes a Dynamic Binder object and a Scheduler object, supporting the object management service and the real-time scheduling service, respectively. The Dynamic Binder object provides the dynamic binding service that selects the appropriate one among duplicated TMO objects for a client's request, and the Scheduler object provides the real-time scheduling service that determines the priority of tasks executed by an arbitrary TMO object for clients' service requests. To verify the execution of our model, we implemented the Dynamic Binder object and the Scheduler object by extending known algorithms: a binding priority algorithm for the dynamic binding service and the EDF algorithm for the real-time scheduling service. Finally, from the numerical analysis results, we verified that our TMO object group model supports the dynamic binding service for duplicated or non-duplicated TMO objects, as well as the real-time scheduling service for an arbitrary TMO object requested by clients.
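The EDF (Earliest Deadline First) policy used by the Scheduler object can be sketched minimally as a priority queue keyed on absolute deadlines; the task names and deadline values below are illustrative assumptions, not data from the paper.

```python
import heapq

def edf_order(requests):
    """Earliest Deadline First: serve pending requests in order of the
    nearest absolute deadline (heapq pops the smallest key first)."""
    heap = [(deadline, name) for name, deadline in requests]
    heapq.heapify(heap)
    order = []
    while heap:
        _, name = heapq.heappop(heap)
        order.append(name)
    return order

# Hypothetical client requests: (task name, absolute deadline in ms)
requests = [("render", 30), ("bind", 10), ("log", 50), ("query", 20)]
print(edf_order(requests))  # -> ['bind', 'query', 'render', 'log']
```

In a real scheduler the queue would be consulted each time a task completes or arrives, but the ordering rule is the same.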

Dynamics of Technology Adoption in Markets Exhibiting Network Effects

  • Hur, Won-Chang
    • Asia pacific journal of information systems
    • /
    • v.20 no.1
    • /
    • pp.127-140
    • /
    • 2010
  • The benefit that a consumer derives from the use of a good often depends on the number of other consumers purchasing the same good or other compatible items. This property, known as network externality, is significant in many IT-related industries. Over the past few decades, network externalities have been recognized in the context of physical networks such as the telephone and railroad industries. Today, as many products are provided as systems consisting of compatible components, the appreciation of network externality is becoming increasingly important. Network externalities have been extensively studied by economists seeking to explain new phenomena resulting from rapid advancements in ICT (Information and Communication Technology). As a result of these efforts, a new body of theories for the 'New Economy' has been proposed. The bottom-line argument of such theories is that technologies subject to network effects exhibit multiple equilibria and will finally lock into a monopoly, with one standard cornering the entire market. They emphasize that such "tippiness" is a typical characteristic of networked markets: multiple incompatible technologies rarely coexist, and the switch to a single, leading standard occurs suddenly. Moreover, it is argued that this standardization process is path dependent and the ultimate outcome unpredictable. With incomplete information about other actors' preferences, there can be excess inertia: consumers who only moderately favor the change are insufficiently motivated to start the bandwagon rolling, but would get on it once it started to roll. This startup problem can prevent the adoption of any standard at all, even one preferred by everyone. Conversely, excess momentum is another possible outcome, for example, if a sponsoring firm uses low prices during early periods of diffusion. The aim of this paper is to analyze the dynamics of the adoption process in markets exhibiting network effects by focusing on two factors: switching and agent heterogeneity. Switching is an important factor that should be considered in analyzing the adoption process: an agent's switching invokes switching by other adopters, producing a positive feedback process that can significantly complicate adoption. Agent heterogeneity also plays an important role in shaping the early development of the adoption process, which in turn has a significant impact on its later development. The effects of these two factors are analyzed by developing an agent-based model (ABM), a computer-based simulation methodology that offers many advantages over traditional analytical approaches. The model is designed so that agents have diverse preferences regarding technology and are allowed to switch their previous choice. The simulation results showed that adoption processes in a market exhibiting network effects are significantly affected by the distribution of agents and the occurrence of switching. In particular, both weak heterogeneity and strong network effects cause agents to start switching early, which expedites the emergence of 'lock-in.' When network effects are strong, agents are easily affected by changes in early market shares; this causes agents to switch earlier and in turn speeds up the market's tipping. The same effect is found for highly homogeneous agents: when agents are highly homogeneous, the market starts to tip toward one technology rapidly, and its choice is not always consistent with the population's initial inclination. Increased volatility and faster lock-in raise the possibility that the market will reach an unexpected outcome. The primary contribution of this study is the elucidation of the role of parameters characterizing the market in the development of the lock-in process, and the identification of conditions under which such unexpected outcomes occur.
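The interaction of switching, heterogeneity, and network strength described above can be illustrated with a toy agent-based model. The utility form, the evenly spread preferences, and all parameter values below are illustrative assumptions, not the paper's actual model.

```python
def simulate(n_agents=201, steps=50, net_strength=1.5, hetero=1.0):
    """Toy agent-based model of adoption between technologies A and B.
    Utility of a choice = intrinsic preference + a network effect
    proportional to current market share; switching is always allowed."""
    # heterogeneous intrinsic preferences for A over B, spread evenly
    pref = [-hetero + 2 * hetero * i / (n_agents - 1) for i in range(n_agents)]
    choice = [1 if p >= 0 else 0 for p in pref]          # 1 = A, 0 = B
    for _ in range(steps):
        share_a = sum(choice) / n_agents                 # current market share
        for i in range(n_agents):
            u_a = pref[i] + net_strength * share_a       # utility of adopting A
            u_b = net_strength * (1.0 - share_a)         # utility of adopting B
            choice[i] = 1 if u_a >= u_b else 0           # agents may switch
    return sum(choice) / n_agents

# Strong network effects + weak heterogeneity: the market tips to one standard
tipped = simulate(net_strength=3.0, hetero=0.2)
# Weak network effects + strong heterogeneity: both technologies coexist
mixed = simulate(net_strength=0.3, hetero=2.0)
print(tipped, mixed)
```

Even this stripped-down version reproduces the qualitative result: strong network effects relative to preference spread make early share changes self-reinforcing until lock-in, while strong heterogeneity lets incompatible technologies coexist.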

Deep Learning Architectures and Applications (딥러닝의 모형과 응용사례)

  • Ahn, SungMahn
    • Journal of Intelligence and Information Systems
    • /
    • v.22 no.2
    • /
    • pp.127-142
    • /
    • 2016
  • A deep learning model is a kind of neural network that allows multiple hidden layers. There are various deep learning architectures, such as convolutional neural networks, deep belief networks, and recurrent neural networks. These have been applied to fields like computer vision, automatic speech recognition, natural language processing, audio recognition, and bioinformatics, where they have been shown to produce state-of-the-art results on various tasks. Among those architectures, convolutional neural networks and recurrent neural networks are classified as supervised learning models, and in recent years these supervised models have gained more popularity than unsupervised models such as deep belief networks, because they have shown notable applications in the fields mentioned above. Deep learning models can be trained with the backpropagation algorithm. Backpropagation is an abbreviation for "backward propagation of errors" and is a common method of training artificial neural networks, used in conjunction with an optimization method such as gradient descent. The method calculates the gradient of an error function with respect to all the weights in the network; the gradient is fed to the optimization method, which in turn uses it to update the weights in an attempt to minimize the error function. Convolutional neural networks use a special architecture that is particularly well adapted to classifying images. This architecture makes convolutional networks fast to train, which in turn helps us train deep, multi-layer networks that are very good at classifying images. These days, deep convolutional networks are used in most neural networks for image recognition. Convolutional neural networks rest on three basic ideas: local receptive fields, shared weights, and pooling. Local receptive fields mean that each neuron in a hidden layer is connected to only a small region of the input (or previous layer's) neurons. Shared weights mean that the same weights and bias are used for each of the local receptive fields, so all the neurons in the hidden layer detect exactly the same feature, just at different locations in the input image. In addition to the convolutional layers just described, convolutional neural networks also contain pooling layers, usually placed immediately after convolutional layers; a pooling layer simplifies the information in the output from the convolutional layer. Recent convolutional network architectures have 10 to 20 hidden layers and billions of connections between units. Training deep learning networks took weeks several years ago, but thanks to progress in GPUs and algorithmic enhancements, training time has been reduced to several hours. Neural networks with time-varying behavior are known as recurrent neural networks or RNNs. A recurrent neural network is a class of artificial neural network in which connections between units form a directed cycle. This creates an internal state that allows the network to exhibit dynamic temporal behavior; unlike feedforward neural networks, RNNs can use their internal memory to process arbitrary sequences of inputs. Early RNN models turned out to be very difficult to train, harder even than deep feedforward networks. The reason is the unstable gradient problem, i.e., vanishing and exploding gradients. The gradient can get smaller and smaller as it is propagated back through layers, making learning in early layers extremely slow. The problem actually gets worse in RNNs, since gradients are propagated backward not just through layers but through time; if the network runs for a long time, the gradient can become extremely unstable and hard to learn from. It has become possible to incorporate an idea known as long short-term memory units (LSTMs) into RNNs. LSTMs make it much easier to get good results when training RNNs, and many recent papers make use of LSTMs or related ideas.
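The "backward propagation of errors" plus gradient descent loop described above can be sketched on a tiny one-hidden-layer network; the XOR task, layer sizes, learning rate, and iteration count are illustrative choices, not taken from the survey.

```python
import numpy as np

# Minimal backpropagation sketch: a one-hidden-layer sigmoid network
# learning XOR by gradient descent on the squared-error function.
rng = np.random.default_rng(0)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])

W1 = rng.normal(0, 1, (2, 4)); b1 = np.zeros(4)   # input -> hidden
W2 = rng.normal(0, 1, (4, 1)); b2 = np.zeros(1)   # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

losses = []
for _ in range(5000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    err = out - y
    losses.append(float((err ** 2).mean()))
    # backward pass: gradient of the error function w.r.t. all weights
    d_out = err * out * (1 - out)            # sigmoid derivative at the output
    d_h = (d_out @ W2.T) * h * (1 - h)       # error propagated to hidden layer
    # gradient descent update, step size 0.5
    W2 -= 0.5 * (h.T @ d_out); b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * (X.T @ d_h);  b1 -= 0.5 * d_h.sum(axis=0)

print(f"loss {losses[0]:.3f} -> {losses[-1]:.3f}")
```

The two-line backward pass is the whole algorithm in miniature: each layer's error signal is the next layer's error signal pushed back through the weights and scaled by the local activation derivative.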

3D Histology Using the Synchrotron Radiation Propagation Phase Contrast Cryo-microCT (방사광 전파위상대조 동결미세단층촬영법을 활용한 3차원 조직학)

  • Kim, Ju-Heon;Han, Sung-Mi;Song, Hyun-Ouk;Seo, Youn-Kyung;Moon, Young-Suk;Kim, Hong-Tae
    • Anatomy & Biological Anthropology
    • /
    • v.31 no.4
    • /
    • pp.133-142
    • /
    • 2018
  • 3D histology is an imaging method for obtaining 3D structural information about cells or tissues. Synchrotron radiation propagation phase contrast micro-CT has been used as a 3D imaging method. However, simple phase contrast micro-CT does not give sufficient micro-structural information when the specimen contains soft elements, as is the case with many biomedical tissue samples. The purpose of this study is to develop a new technique to enhance the phase contrast effect for soft tissue imaging. Experiments were performed at the imaging beamlines of the Pohang Accelerator Laboratory (PAL). The biomedical tissue samples, kept frozen, were mounted on a computer-controlled precision stage and rotated in $0.18^{\circ}$ increments through $180^{\circ}$. An X-ray shadow of a specimen was converted into a visual image on the surface of a CdWO4 scintillator, magnified using a microscope objective lens (X5 or X20), and captured with a digital CCD camera. 3-dimensional volume images of the specimen were obtained by applying a filtered back-projection algorithm to the projection images using the software package OCTOPUS. Surface reconstruction, volume segmentation, and rendering were performed using Amira software. In this study, we found that synchrotron phase contrast imaging of frozen tissue samples has higher contrast for soft tissue than that of non-frozen samples. In conclusion, synchrotron radiation propagation phase contrast cryo-microCT imaging offers a promising tool for non-destructive, high-resolution 3D histology.

Economic Impact of HEMOS-Cloud Services for M&S Support (M&S 지원을 위한 HEMOS-Cloud 서비스의 경제적 효과)

  • Jung, Dae Yong;Seo, Dong Woo;Hwang, Jae Soon;Park, Sung Uk;Kim, Myung Il
    • KIPS Transactions on Computer and Communication Systems
    • /
    • v.10 no.10
    • /
    • pp.261-268
    • /
    • 2021
  • Cloud computing is a computing paradigm in which users can utilize computing resources in a pay-as-you-go manner. In a cloud system, resources can be dynamically scaled up and down according to the user's demand, so that the total cost of ownership can be reduced. Modeling and Simulation (M&S) is a well-known method of obtaining engineering analyses and results through CAE software without physical experiments. In general, M&S technology is utilized in Finite Element Analysis (FEA), Computational Fluid Dynamics (CFD), Multibody Dynamics (MBD), and optimization. The work procedure in M&S is divided into pre-processing, analysis, and post-processing steps. Pre- and post-processing are GPU-intensive jobs consisting of 3D modeling through CAE software, whereas analysis is CPU- or GPU-intensive. Because a general-purpose desktop needs a great deal of time to analyze complicated 3D models, CAE software requires a high-end CPU- and GPU-based workstation to run fluently; in other words, executing M&S absolutely requires high-performance computing resources. To mitigate the cost of equipping such substantial computing resources, we propose the HEMOS-Cloud service, an integrated cloud and cluster computing environment. The HEMOS-Cloud service provides CAE software and computing resources to users in industry or academia who want to experience M&S. In this paper, the economic ripple effect of the HEMOS-Cloud service was analyzed using inter-industry analysis. The estimated results, using expert-guided coefficients, are a production inducement effect of KRW 7.4 billion, a value-added effect of KRW 4.1 billion, and an employment-inducing effect of 50 persons per KRW 1 billion.
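The inter-industry analysis behind such ripple-effect estimates can be sketched as a Leontief input-output model: total induced production is the Leontief inverse applied to the new final demand. The 3-sector coefficient matrix and demand figures below are made-up illustrations, not the study's actual coefficients.

```python
import numpy as np

# Input coefficient matrix A: A[i][j] = input from sector i needed per
# unit of sector j's output (hypothetical 3-sector economy).
A = np.array([[0.20, 0.10, 0.00],
              [0.10, 0.30, 0.20],
              [0.00, 0.10, 0.10]])
# New final demand created by the service, in billion KRW (illustrative)
demand = np.array([10.0, 5.0, 2.0])

# Production inducement: x = (I - A)^-1 d, the Leontief inverse times demand
leontief_inverse = np.linalg.inv(np.eye(3) - A)
production = leontief_inverse @ demand

print("induced production by sector:", production.round(2))
print("total production inducement:", round(production.sum(), 2))
```

Because each unit of final demand also requires intermediate inputs from other sectors, the induced production always exceeds the direct demand; that gap is the "ripple" the study quantifies.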

Changes in Growth and Bioactive Compounds of Lettuce According to CO2 Tablet Treatment in the Nutrient Solution of Hydroponic System (수경재배 양액 내 탄산정 처리에 의한 상추의 생육 및 생리활성물질 함량 변화)

  • Bok, Gwonjeong;Noh, Seungwon;Kim, Youngkuk;Nam, Changsu;Jin, Chaelin;Park, Jongseok
    • Journal of Bio-Environment Control
    • /
    • v.30 no.1
    • /
    • pp.85-93
    • /
    • 2021
  • To investigate changes in lettuce growth and bioactive compounds caused by CO2 tablet treatment of the nutrient solution in hydroponic cultivation, we used solid CO2 tablets commercially available in the Netherlands. The experiment consisted of 0.5-fold, 1-fold, and 2-fold treatment groups, with no treatment as a control. The atmospheric CO2 concentration in the chamber after CO2 tablet treatment was highest, at 472.2 µL·L-1, in the 2-fold treatment immediately after treatment, and the pH of the nutrient solution decreased the most, to pH 6.03, in the 2-fold treatment. Over time, the CO2 concentration and pH recovered to pre-treatment levels. Leaf width and leaf area of lettuce showed the highest values, 17.1 cm and 1,067.14 cm², in the 2-fold CO2 tablet treatment, while fresh weight and dry weight of the above-ground part were highest, at 63.87 g and 3.08 g, in the 0.5-fold treatment. The root length of lettuce was longest (28.4 cm) in the control, but there was no significant difference in fresh weight and dry weight among the treatments. It was observed that CO2 tablet treatment shortened the root length of the lettuce and induced many lateral roots. In addition, a growth disorder in which the roots turned black was observed, but it had no negative effect on the growth of the above-ground part. Analysis of the bioactive compounds of lettuce under CO2 tablet treatment detected chlorogenic acid and quercetin. Quantitative analysis showed that in the 1-fold treatment chlorogenic acid increased by 249% compared to the control, while quercetin decreased by 37%. A comparison of DPPH radical scavenging ability, an indicator of antioxidant activity, showed significantly higher values in the control and 0.5-fold treatment than in the 1-fold and 2-fold treatments. This suggests that carbonated water treatment is effective in increasing the growth and bioactive compounds of hydroponic lettuce.

An Outlier Detection Using Autoencoder for Ocean Observation Data (해양 이상 자료 탐지를 위한 오토인코더 활용 기법 최적화 연구)

  • Kim, Hyeon-Jae;Kim, Dong-Hoon;Lim, Chaewook;Shin, Yongtak;Lee, Sang-Chul;Choi, Youngjin;Woo, Seung-Buhm
    • Journal of Korean Society of Coastal and Ocean Engineers
    • /
    • v.33 no.6
    • /
    • pp.265-274
    • /
    • 2021
  • Outlier detection research in ocean data has traditionally been performed using statistical and distance-based machine learning algorithms. Recently, AI-based methods have received a lot of attention and so-called supervised learning methods that require classification information for data are mainly used. This supervised learning method requires a lot of time and costs because classification information (label) must be manually designated for all data required for learning. In this study, an autoencoder based on unsupervised learning was applied as an outlier detection to overcome this problem. For the experiment, two experiments were designed: one is univariate learning, in which only SST data was used among the observation data of Deokjeok Island and the other is multivariate learning, in which SST, air temperature, wind direction, wind speed, air pressure, and humidity were used. Period of data is 25 years from 1996 to 2020, and a pre-processing considering the characteristics of ocean data was applied to the data. An outlier detection of actual SST data was tried with a learned univariate and multivariate autoencoder. We tried to detect outliers in real SST data using trained univariate and multivariate autoencoders. To compare model performance, various outlier detection methods were applied to synthetic data with artificially inserted errors. As a result of quantitatively evaluating the performance of these methods, the multivariate/univariate accuracy was about 96%/91%, respectively, indicating that the multivariate autoencoder had better outlier detection performance. Outlier detection using an unsupervised learning-based autoencoder is expected to be used in various ways in that it can reduce subjective classification errors and cost and time required for data labeling.