• Title/Summary/Keyword: Voice problem

Search Results: 339

Poststructural Feminist Theology and Christian Education (후기구조주의 여성 신학과 기독교교육)

  • Joo, Yunsoo
    • Journal of Christian Education in Korea, v.65, pp.81-102, 2021
  • In church tradition, cultural misappropriation has often legitimized unjust hierarchy rather than challenging it. Under the rubric of culturalism, Christian education has likewise served to justify oppressive systems and maintain the status quo. The feminist theologian Rebecca Chopp argues that contemporary Western culture has intensified narcissistic individualism and self-referentiality, supporting the powerful while forcing the marginalized into silence. Chopp insists that the role, nature, and mission of Christianity are to provide the Word, and words, of emancipatory transformation. She advocates a poststructural feminist theology that aims at renewing the socio-symbolic order of society by criticizing the assumptions underlying language, culture, and politics. This study reviews an interview with an Asian-American couple and discloses the underlying assumptions and hegemony that have contributed to maintaining a male-dominated system. I suggest that Christian education for emancipatory transformation should encourage oppressed women to reflect critically on the existing order and to restore their own voice through constructive intervention that facilitates "plurivocity" and "problem-posing" dialogue. Proclamation of the transformative Word can empower marginalized people to envision alternatives to monotheistic patriarchal modernism.

Christian Education for Sustainability (지속가능성을 위한 기독교교육)

  • Yoo, Eunju
    • Journal of Christian Education in Korea, v.72, pp.293-316, 2022
  • This paper argues that Christian education for sustainability should be a spiritual education that fosters the spirituality of contentment, poverty, sharing, and justice, based on a critical review of the concept of "Sustainable Development" (SD), which emerged to overcome today's climate and ecological crisis. To this end, the efforts and limitations of international cooperation in overcoming deepening environmental problems are considered first: economic interests have taken priority over environmental issues, and with the 'Green Revolution' there has been a huge gap between the ideal and the reality, so a sustainable society has remained distant. Faced with this situation, Christian theology has been trying to build a new theological paradigm, breaking away from the limits of an anthropocentric and afterlife-centered theology grounded in the dichotomy between the physical and the spiritual. Building on such theological reflection, Christian education needs to carry out spiritual education focused on how to deal with the problem of human greed for the restoration of the created world. In this regard, this paper presents the spirituality of contentment, poverty, sharing, and justice, and suggests, for developing it, deep theological reflection and repentance concerning human greed, listening to victims' voices, moderation in consumption, investigation of alternative cases, and decisive action to reveal the truth from the victims' point of view.

Mature Market Sub-segmentation and Its Evaluation by the Degree of Homogeneity (동질도 평가를 통한 실버세대 세분군 분류 및 평가)

  • Bae, Jae-ho
    • Journal of Distribution Science, v.8 no.3, pp.27-35, 2010
  • As the population, buying power, and intensity of self-expression of the elderly generation increase, its importance as a market segment is also growing. Therefore, the mass-marketing strategy for the elderly generation must be changed to a micro-marketing strategy based on the results of sub-segmentation that suitably captures the characteristics of this generation. Furthermore, as the customer access strategy is decided by sub-segmentation, proper segmentation is one of the key success factors for micro-marketing. Segments or sub-segments differ from sectors, because segmentation or sub-segmentation for micro-marketing is based on the homogeneity of customer needs. Theoretically, complete segmentation would reveal a single voice; in practice it is impossible for reasons of economy, effectiveness, and so on. To obtain a single voice from a segment, we sometimes need to divide it into many individual cases, in which case there would be too many segments to deal with. On the other hand, to maximize market access performance, fewer segments are preferred. In this paper, we use the term "sub-segmentation" instead of "segmentation," because we divide a specific segment into more detailed segments. To sub-segment the elderly generation, this paper takes their lifestyles and life stages into consideration. To reflect these aspects, various surveys and several rounds of expert interviews and focus group interviews (FGIs) were performed. Using the results of these qualitative surveys, we define six sub-segments of the elderly generation. This paper uses five rules to divide the elderly generation: (1) mutually exclusive and collectively exhaustive (MECE) sub-segmentation, (2) important life stages, (3) notable lifestyles, (4) a minimum number of easily classifiable sub-segments, and (5) significant differences in voices among the sub-segments. The most critical point for dividing the elderly market is whether children are married; the other points are source of income, gender, and occupation. In this paper, the elderly market is divided into six sub-segments. As mentioned, the number of sub-segments is a key point for a successful marketing approach: too many sub-segments lead to narrow substantiality or a lack of actionability, while too few have no effect. Therefore, creating the optimum number of sub-segments is a critical problem faced by marketers. This paper presents a method of evaluating the fitness of sub-segments that was deduced from the preceding surveys. The presented method uses the degree of homogeneity (DoH) to measure the adequacy of sub-segments, calculated from quantitative survey questions: the ratio of significantly homogeneous questions to the total number of survey questions is the DoH. A significantly homogeneous question is one in which a single case is selected significantly more often than the others, as determined by a hypothesis test. Here the null hypothesis (H0) is that there is no significant difference between the selection of one case and that of the others, so the number of significantly homogeneous questions is the number of questions for which the null hypothesis is rejected.
To calculate the DoH, we conducted a quantitative survey (total sample size 400; 60 questions; 4-5 answer cases per question). The first sub-segment (no unmarried offspring, earns a living independently) has a sample size of 113. The second (no unmarried offspring, economically supported by its offspring) has a sample size of 57. The third (unmarried offspring, male, employed) has a sample size of 70. The fourth (unmarried offspring, male, not employed) has a sample size of 45. The fifth (unmarried offspring, female, with either the woman herself or her husband employed) has a sample size of 63. The last (unmarried offspring, female, with neither the woman nor her husband employed) has a sample size of 52. Statistically, the sample size of each sub-segment is sufficiently large, so we use the z-test for testing hypotheses. At a significance level of 0.05, the DoHs of the six sub-segments are 1.00, 0.95, 0.95, 0.87, 0.93, and 1.00, respectively; at a significance level of 0.01, they are 0.95, 0.87, 0.85, 0.80, 0.88, and 0.87. These results show that the first sub-segment is the most homogeneous category, while the fourth has more variety in its needs. With a sufficiently large sample, further segmentation within a given sub-segment would be better; however, as the fourth sub-segment is smaller than the others, more detailed segmentation was not performed. A critical point for a successful micro-marketing strategy is measuring the fit of a sub-segment, yet until now there have been no robust rules for doing so. This paper presents a method of evaluating the fit of sub-segments, which should be very helpful for deciding the adequacy of sub-segmentation. However, it has some limitations that prevent it from being fully robust: (1) the method is restricted to quantitative questions; (2) deciding which types of questions to include in the calculation is difficult; (3) DoH values depend on how the questions are composed. Despite these limitations, this paper has presented a useful method for conducting adequate sub-segmentation. We believe the method can be applied widely in many areas, and the results of the sub-segmentation of the elderly generation can serve as a reference for mature marketing.
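
As an illustration of the DoH computation described above, here is a minimal Python sketch (not the paper's code; the uniform-share null hypothesis and the one-sided test are assumptions where the abstract leaves details open):

```python
import numpy as np
from scipy.stats import norm

def is_homogeneous(counts, alpha=0.05):
    """One-proportion z-test: is the most frequent answer chosen
    significantly more often than the assumed uniform share 1/k?"""
    counts = np.asarray(counts, dtype=float)
    n, k = counts.sum(), len(counts)
    p_hat = counts.max() / n              # observed share of the top answer
    p0 = 1.0 / k                          # expected share under H0 (assumption)
    z = (p_hat - p0) / np.sqrt(p0 * (1.0 - p0) / n)
    return z > norm.ppf(1.0 - alpha)      # H0 rejected: significantly homogeneous

def degree_of_homogeneity(questions, alpha=0.05):
    """DoH = significantly homogeneous questions / total questions."""
    flags = [is_homogeneous(c, alpha) for c in questions]
    return sum(flags) / len(flags)

# Hypothetical answer counts for one sub-segment (three 4-option questions)
answers = [[70, 18, 15, 10], [40, 35, 22, 16], [60, 25, 15, 13]]
print(degree_of_homogeneity(answers, alpha=0.05))
```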

Feasibility of Deep Learning Algorithms for Binary Classification Problems (이진 분류문제에서의 딥러닝 알고리즘의 활용 가능성 평가)

  • Kim, Kitae;Lee, Bomi;Kim, Jong Woo
    • Journal of Intelligence and Information Systems, v.23 no.1, pp.95-108, 2017
  • Recently, AlphaGo, a Go (Baduk) artificial intelligence program by Google DeepMind, won a decisive victory against Lee Sedol. Many people thought that machines would not be able to beat a human at Go because, unlike in chess, the number of possible move sequences exceeds the number of atoms in the universe, but the result was the opposite of what people predicted. After the match, artificial intelligence was highlighted as a core technology of the fourth industrial revolution and attracted attention from various application domains; in particular, deep learning drew attention as the core artificial intelligence technique used in the AlphaGo algorithm. Deep learning is already being applied to many problems and shows especially good performance in image recognition. It also performs well on high-dimensional data such as voice, images, and natural language, where it was difficult to obtain good performance with existing machine learning techniques. In contrast, it is hard to find deep learning research on traditional business data and structured data analysis. In this study, we investigated whether deep learning techniques can be used not only for recognizing high-dimensional data but also for binary classification problems in traditional business data analysis, such as customer churn analysis, marketing response prediction, and default prediction, and we compared the performance of deep learning techniques with that of traditional artificial neural network models. The experimental data are telemarketing response data from a bank in Portugal, with input variables such as age, occupation, loan status, and the number of previous telemarketing contacts, and a binary target variable recording whether the customer intends to open an account. To evaluate the applicability of deep learning algorithms and techniques to binary classification, we compared various models using the CNN and LSTM algorithms and the dropout technique, which are widely used in deep learning, with MLP models, a traditional artificial neural network. Since not all network design alternatives can be tested, the experiment was conducted under restricted settings for the number of hidden layers, the number of neurons per hidden layer, the number of output filters, and the conditions for applying dropout. The F1 score was used to evaluate the models, showing how well each classifies the class of interest rather than overall accuracy. The deep learning techniques were applied as follows. The CNN algorithm recognizes features by reading values adjacent to a specific value, but in business data the distance between fields usually carries no meaning because the fields are independent. In this experiment, we therefore set the CNN filter size to the number of fields, so that the characteristics of a whole record are learned at once, and added a hidden layer to make decisions based on the extracted features. For the model with two LSTM layers, the input direction of the second layer was reversed relative to the first in order to reduce the influence of field position.
For the dropout technique, neurons were dropped with a probability of 0.5 in each hidden layer. The experimental results show that the model with the highest F1 score was the CNN model using dropout, followed by the MLP model with two hidden layers using dropout. Several findings emerged from the experiment. First, models using dropout make slightly more conservative predictions than those without it and generally classify better. Second, CNN models show better classification performance than MLP models; this is interesting because CNN performed well on binary classification problems, to which it has rarely been applied, as well as in the fields where its effectiveness is proven. Third, the LSTM algorithm appears unsuitable for binary classification problems because its training time is too long relative to the performance improvement. From these results, we can confirm that some deep learning algorithms can be applied to solve business binary classification problems.
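
For intuition, the CNN-with-dropout configuration described above might look like the following Keras sketch (my illustration, not the authors' code; the field count and layer widths are assumptions). The F1 score used for evaluation is the harmonic mean of precision and recall, F1 = 2PR/(P + R).

```python
import tensorflow as tf
from tensorflow.keras import layers, models

n_fields = 16  # hypothetical number of input fields after encoding

model = models.Sequential([
    layers.Input(shape=(n_fields, 1)),      # one tabular record as a 1-D sequence
    # The filter spans all fields at once, so the whole record's
    # characteristics are learned in a single convolution step.
    layers.Conv1D(filters=32, kernel_size=n_fields, activation="relu"),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),    # extra hidden layer for the decision
    layers.Dropout(0.5),                    # each neuron dropped with p = 0.5
    layers.Dense(1, activation="sigmoid"),  # binary target: opens an account or not
])
model.compile(optimizer="adam", loss="binary_crossentropy")
# F1 would then be computed from predictions on a held-out set,
# e.g. with sklearn.metrics.f1_score.
```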

Privilege and Immunity of Information and Data from Aviation Safety Program in United States (미국 항공안전데이터 프로그램의 비공개 특권과 제재 면제에 관한 연구)

  • Moon, Joon-Jo
    • The Korean Journal of Air & Space Law and Policy, v.23 no.2, pp.137-172, 2008
  • The earliest safety data programs, the FDR and CVR, were electronic reporting systems that generate data "automatically." The FDR program, originally instituted in 1958, had no publicly available protections against sanctions by the FAA or an airline, although there are agreements and union contracts forbidding the use of FDR data for FAA enforcement actions; the FDR program still has the least formalized protections. With the advent of the CVR program in 1966, the precursor to the current FAR 91.25 was already in place, having been promulgated in 1964; it stated that the FAA would not use CVR data for enforcement actions. In 1982, Congress began restricting the disclosure of CVR tapes and transcripts, and in 1994 it added further clarification of their availability in civil-litigation discovery. Thus, CVR data have more definitive protections in place than FDR data. The ASRS was the first non-automatic reporting system; built into its original design in 1975 was a promise of limited protection from enforcement sanctions, a promise further codified in an FAR in 1979. As with the CVR, from its inception the ASRS had some protections built in for a person who might have had a safety problem. However, the program did not (and to this day does not) explicitly deal with issues of use by airlines, litigants, or the public media, although it appears that an airline will either take a non-punitive stance if an ASRS report is filed or ignore the fact that it has been filed at all. The FAA worked with several U.S. airlines in the early 1990s on developing ASAP programs and issued an Advisory Circular about the program in 1997. From its inception, the ASAP program contained some FAA enforcement protections and company discipline protections, although protection against litigation disclosure and public disclosure was not added until 2003, when FAA Order 8000.82 was promulgated, placing the program under the protections of FAR 193, which had been added in 2001. The FOQA program, first instituted through a demonstration program in 1995, did not contain protections against sanctions. Now, however, the FAA cannot take enforcement action based on FOQA safety data, and an airline is limited to "corrective action" under the program. Union contracts can exclude FOQA from the realm of disciplinary action, although in practice airlines may require retraining if there is no contract forbidding it. The data are protected against disclosure for litigation and public-media purposes by FAA Order 8000.81, issued in 2003, which placed FOQA under the protections of FAR 193. The figure in the paper shows when each program began, and when each statute, regulation, or order became effective for that program.

Influences of Unilateral Mandibular Block Anesthesia on Motor Speech Abilities (편측 하악전달마취가 운동구어능력에 미치는 영향)

  • Yang, Seung-Jae;Seo, In-Hyo;Kim, Mee-Eun;Kim, Ki-Suk
    • Journal of Oral Medicine and Pain, v.31 no.1, pp.59-67, 2006
  • In clinical settings, some patients complain of speech problems due to dysesthesia or anesthesia following dental surgical procedures involving local anesthesia. However, it is not clear whether sensory problems in the orofacial region influence motor speech abilities. The purpose of this study was to investigate whether transitory sensory impairment of the mandibular nerve by local anesthesia influences motor speech abilities, and thus to evaluate the possibility that dysesthesia of the mandibular nerve distorts motor speech. The subjects were 7 men and 3 women whose right inferior alveolar nerve, lingual nerve, and long buccal nerve were anesthetized with 1.8 mL lidocaine containing 1:100,000 epinephrine. All subjects were instructed to self-estimate the degree of anesthesia in the affected region and their speech discomfort on a VAS before anesthesia and at 30 seconds and 30, 60, 90, 120, and 150 minutes after anesthesia. To evaluate speech problems objectively, words and sentences for testing speech speed, diadochokinetic rate, intonation, tremor, and articulation were recorded at each time point and evaluated using a Computerized Speech Lab®. Articulation was evaluated by a speech-language clinician. The results indicated that subjective speech discomfort and depth of anesthesia increased with time until 60 minutes after anesthesia and then decreased, and that the degree of subjective speech discomfort was correlated with the self-estimated depth of anesthesia. Meanwhile, there was no significant difference in the objective assessment items, including speech speed, diadochokinetic rate, intonation, and tremor, and no anesthesia-related change in articulation. Based on these results, sensory impairment of the unilateral mandibular nerve does not appear to deteriorate motor speech abilities, despite individuals' complaints of speech discomfort.

Exploring the Agency of a Student Leader in Collaborative Scientific Modeling Classes in an Elementary School (초등학교의 협력적 과학 모델링 수업에서 나타난 리더의 행위주체성 탐색)

  • Uhm, Janghee;Kim, Heui-Baik
    • Journal of The Korean Association For Science Education, v.41 no.4, pp.339-358, 2021
  • This study explores the agency of a student leader, expressed through efforts to distribute power and encourage participation in elementary scientific modeling classes, and analyzes the contexts in which the leader's agency was expressed and in which the development of a collective agency was constrained. The participants were 22 fifth-grade students, and the leader's agency was analyzed by examining his words and actions. At the outset of the study, the leader held the most power, performing all the activities as the sole authority in a non-cooperative participation pattern. However, with reflection and help from the researcher, the leader recognized the problem and facilitated the participation of other students; he developed an identity as a teacher and demonstrated agency. The leader's agentic behaviors fall into three aspects. First, in the cognitive aspect, the leader helped other students participate in modeling by sharing his knowledge. Second, in the normative aspect, he made rules to give all students an equal voice. Third, in the emotional aspect, he acknowledged the students' contributions, increasing their confidence. The leader's agency temporarily helped the group overcome the student hierarchy, facilitating a cooperative participation pattern. However, the development of a collective agency was constrained: the leader's power was only partially redistributed, and the other students did not position themselves as his equals. For the leader's agency to develop into a collective agency, it is necessary to redistribute the leader's power more equally and to change the students' perceptions.

A Study of Anomaly Detection for ICT Infrastructure using Conditional Multimodal Autoencoder (ICT 인프라 이상탐지를 위한 조건부 멀티모달 오토인코더에 관한 연구)

  • Shin, Byungjin;Lee, Jonghoon;Han, Sangjin;Park, Choong-Shik
    • Journal of Intelligence and Information Systems, v.27 no.3, pp.57-73, 2021
  • Maintenance and failure prevention through anomaly detection of ICT infrastructure are becoming important. System monitoring data are multidimensional time series data, which makes it difficult to account simultaneously for the characteristics of multidimensional data and those of time series data. With multidimensional data, correlations between variables must be considered, and existing probability-based, linear, and distance-based methods degrade because of the curse of dimensionality. In addition, time series data are commonly preprocessed with sliding-window techniques and time series decomposition for autocorrelation analysis; these techniques increase the dimensionality of the data and therefore need to be supplemented. Anomaly detection is an old research field: statistical methods and regression analysis were used in the early days, and there are now active studies applying machine learning and artificial neural network technology. Statistically based methods are difficult to apply when data are non-homogeneous and do not detect local outliers well. Regression-based methods learn a regression formula based on parametric statistics and detect abnormality by comparing predicted and actual values; their performance drops when the model is not solid or when the data contain noise or outliers, and they carry the restriction that training data with noise or outliers should not be used. An autoencoder, an artificial neural network trained to produce output as similar as possible to its input, has many advantages over existing probability and linear models, cluster analysis, and supervised learning: it can be applied to data that do not satisfy probability-distribution or linearity assumptions, and it can be trained without labeled data. However, it is still limited in identifying local outliers in multidimensional data, and the dimensionality of the data grows greatly because of the characteristics of time series data. In this study, we propose a Conditional Multimodal Autoencoder (CMAE) that enhances anomaly detection performance by considering local outliers and time series characteristics. First, a Multimodal Autoencoder (MAE) was applied to improve the identification of local outliers in multidimensional data. Multimodal architectures are commonly used to learn different types of inputs, such as voice and images; the modalities share the autoencoder's bottleneck, where correlations between them are learned. In addition, a Conditional Autoencoder (CAE) was used to learn the characteristics of the time series effectively without increasing the dimensionality of the data. Conditional inputs are generally categorical variables, but in this study time was used as the condition so that periodicity could be learned. The proposed CMAE model was verified by comparison with a Unimodal Autoencoder (UAE) and a Multimodal Autoencoder (MAE). Reconstruction performance over 41 variables was examined for the proposed and comparison models; it differs by variable, and the Memory, Disk, and Network modalities reconstructed well, with small loss values, in all three autoencoder models.
The Process modality showed no significant difference across the three models, while the CPU modality performed best in CMAE. ROC curves were prepared to evaluate anomaly detection performance in the proposed and comparison models, and AUC, accuracy, precision, recall, and F1-score were compared. On all indicators, performance ranked in the order CMAE, MAE, and UAE. In particular, the recall of CMAE was 0.9828, confirming that it detects almost all anomalies. Model accuracy also improved, to 87.12%, and the F1-score was 0.8883, which is considered suitable for anomaly detection. In practical terms, the proposed model has an additional advantage beyond performance: techniques such as time series decomposition and sliding windows add procedures that must be managed, and the dimensional increase they cause can slow inference, whereas the proposed model's inference speed and ease of model management make it easy to apply to practical tasks.
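
A minimal sketch of the CMAE idea (my illustration under assumed dimensions and layer sizes, not the authors' implementation): each modality gets its own encoder and decoder, all modalities share one bottleneck, and a time encoding is fed in as the condition.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

# Assumed split of the 41 monitoring variables into five modalities.
dims = {"cpu": 8, "memory": 8, "disk": 8, "network": 8, "process": 9}
time_dim = 2  # e.g. sin/cos encoding of time of day (the condition)

inputs = {m: layers.Input(shape=(d,), name=m) for m, d in dims.items()}
t_in = layers.Input(shape=(time_dim,), name="time")

# Per-modality encoders feed one shared bottleneck, where
# cross-modality correlations are learned; time conditions the code.
encoded = [layers.Dense(8, activation="relu")(x) for x in inputs.values()]
code = layers.Dense(8, activation="relu")(layers.Concatenate()(encoded + [t_in]))

# Per-modality decoders reconstruct each input from the shared code.
outputs = [layers.Dense(dims[m], name=f"{m}_out")(
               layers.Dense(8, activation="relu")(code)) for m in dims]

cmae = Model(inputs=[*inputs.values(), t_in], outputs=outputs)
cmae.compile(optimizer="adam", loss="mse")
# At inference, the anomaly score is the total reconstruction error
# across modalities; scores above a threshold flag an anomaly.
```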

SANET-CC : Zone IP Allocation Protocol for Offshore Networks (SANET-CC : 해상 네트워크를 위한 구역 IP 할당 프로토콜)

  • Bae, Kyoung Yul;Cho, Moon Ki
    • Journal of Intelligence and Information Systems, v.26 no.4, pp.87-109, 2020
  • Currently, thanks to major strides in wired and wireless communication technology, a variety of IT services are available on land, and this trend is leading to increasing demand for IT services on vessels at sea as well. Demand for services such as two-way digital data transmission, the Web, and apps is expected to rise to the level available on land. However, while a high-speed information communication network is easily accessible on land because it rests on fixed infrastructure such as APs and base stations, this is not the case on the water; as a result, a radio-network-based voice communication service is usually used at sea. To solve this problem, an additional frequency for digital data exchange was allocated, and a ship ad-hoc network (SANET) that utilizes this frequency was proposed. Instead of satellite communication, which is costly to install and use, SANET was developed to provide various IP-based IT services to ships at sea. Connectivity between land base stations and ships is important in a SANET: to have this connection, a ship must be a member of the network with an IP address assigned. This paper proposes a SANET-CC protocol that allows ships to be assigned their own IP addresses. SANET-CC propagates several non-overlapping IP addresses through the entire network, from land base stations to ships, in the form of a tree. Ships obtain their own IP addresses through the exchange of simple request and response messages with land base stations or with M-ships that can allocate IP addresses. SANET-CC can therefore eliminate the IP collision prevention (Duplicate Address Detection) process as well as the network separation or integration caused by ship movement. Various simulations were performed to verify the applicability of this protocol to SANET, with the following outcomes. First, with SANET-CC, about 91% of the ships in the network received IP addresses under all tested circumstances, 6% higher than in existing studies, and the results suggest that adjusting the variables to each port's environment could improve this further. Second, vessels received IP addresses in an average of 10 seconds regardless of conditions, a 50% reduction from the 20-second average of the previous study; considering that existing studies covered 50 to 200 vessels while this study covers 100 to 400, the efficiency gain can be much higher. Third, existing studies have not been able to derive optimal values for the protocol variables, because their results showed no consistent pattern across variables, meaning optimal values could not be set for each port under diverse environments. This paper, however, shows that the result values of the variables exhibit a consistent pattern, which is significant in that the protocol can be applied to each port by adjusting the variable values. It was also confirmed that, regardless of the number of ships, the IP allocation ratio was most efficient, at about 96 percent, when the waiting time after an IP request was 75 ms, and that the tree structure maintained a stable network configuration when the number of IPs exceeded 30,000.
Fourth, this study can be used to design networks that support intelligent maritime control systems and services offshore without satellite communication, and once LTE-M is established, it can also serve various intelligent services.
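
To make the allocation scheme concrete, here is a toy sketch (assumed details, not the protocol specification): the base station pre-partitions a pool of non-overlapping address blocks and delegates sub-blocks down the tree on each request/response exchange, so Duplicate Address Detection is never needed.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """A land base station or an M-ship able to allocate addresses."""
    name: str
    free_blocks: list = field(default_factory=list)  # disjoint (start, end) ranges

    def handle_request(self, ship: "Node") -> int:
        """Answer a ship's IP request: assign one address and delegate a
        sub-block so the ship can serve as an M-ship further down the tree."""
        start, end = self.free_blocks.pop()
        ship_ip = start                 # unique by construction: blocks never overlap
        if start + 1 <= end:
            mid = (start + 1 + end) // 2
            ship.free_blocks.append((start + 1, mid))  # delegated sub-block
            if mid + 1 <= end:
                self.free_blocks.append((mid + 1, end))
        return ship_ip

base = Node("land-station", free_blocks=[(1, 1000)])
ship_a, ship_b = Node("ship-A"), Node("ship-B")
print(ship_a.name, "->", base.handle_request(ship_a))    # allocated by the base
print(ship_b.name, "->", ship_a.handle_request(ship_b))  # allocated via M-ship A
```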