1. INTRODUCTION
1.1 Research background
"The definition of a human-machine interface is separated into broad and narrow senses. The general human-machine interface is a core component in the human-machine system" [1]. A narrow human-machine interface, on the other hand, refers to a Human-Computer Interface (HCI) in a computer environment and is also called a user interface, or UI for short.
Mixed reality technology is being applied in a growing range of fields and is gradually maturing, and its unique holographic experience brings visual impact across those fields. The development of mixed reality technology has introduced a brand-new user interface mode for information presentation, distinct from other interface designs. A mixed reality interface is an unconventional interface.
Mixed reality interface design differs fundamentally from other interface design. A standard screen interface is designed for two-dimensional vision, whereas a mixed reality interface offers a space-aware experience. It is an interactive interface that integrates the virtual world and the natural world, with general consideration of distance, size, and depth.
Because mixed reality applications are still few, research on their interface design is also scarce, and there is no established reference system for such design, which directly affects the user experience. Therefore, this article considers the layout, presentation, interaction, and other factors of the interface together with the actual environment from the standpoint of spatial perception, and summarizes a reference system for mixed reality interface design.
1.2 Research purpose
The purpose of this research is threefold. First, by deconstructing mixed reality spatial perception, we analyze the visual composition hierarchy of mixed reality under spatial perception. Then we analyze the elements of mixed reality interface design to obtain its constituent elements and window classification modes. Finally, we conduct a data analysis to verify whether the mixed reality interface design meets the three user experience elements of Usability, Availability, and Attraction. On the design basis of reducing the interaction cost of completing tasks and reducing the user's cognitive load, this provides specific and compelling design specifications for interface design.
1.3 Research methods
The paper proceeds through the following steps:
1) Read relevant literature to determine the theoretical investigation basis of spatial perception, mixed reality, and interface design.
2) Analyze hierarchical information based on the spatial perception of mixed reality and determine the hierarchical composition of the visual space.
3) Analyze the design of a mixed reality interface based on space perception, and draw up its constituent elements and window classification mode.
4) Verify the proposed constituent elements and classification models through practical cases, and verify whether the mixed reality interface design meets the three user experience elements of Usability, Availability, and Attraction through data analysis.
2. THEORETICAL INVESTIGATION
2.1 The spatial perception theory
"Space perception refers to the individual's perception of the distance, shape, size, orientation, and other spatial characteristics of objects in the space in which they are located. It is an important clue for observing the relationship and changes of objects in space." [2] Space perception is the product of the collaborative activity of a variety of sensory organs, including vision, hearing, touch, and movement, with the visual system playing the leading role. Spatial perception includes shape perception, size perception, distance perception, depth perception, and orientation perception. "Spatial cognition is a kind of internalized reflection and structured spatial thinking." [3] "It is also an individual's own sensory description of space, which is defined as the way in which the individual obtains, organizes, stores, and recalls information. It is the interaction of the individual with the three-dimensional environment in daily life." [4]
This research mainly involves "visual space perception" and "depth perception." Visual space perception means using the eyes as receptors to perceive information in the spatial environment; the background and arrangement of objects cause the human eye to produce depth perception, which is an important clue for observing the spatial depth relationships of objects. "The dexterity of vision is not only that it can be used at will for consciousness, but also that it is an indispensable thing in thinking work." [5] In depth perception, vision uses various subjective and objective conditions in the environment as depth cues, and this depth information is used to judge the distance of the perceived object.
2.2 The concept of mixed reality
Mixed reality uses a camera with spatial depth analysis to detect and scan the real scene in all directions and to build real-time 3D model information in the kernel, constructing a digital model of the whole scene to determine the fixed positions of superimposed digital information and achieving nearly absolute positional stability. Digital virtual information, such as interface windows and animated images, can then be viewed through a head-mounted display device, floating in the air and closely connected with real objects. The virtual digital objects coexist with the physical environment and interact with it in real time to achieve a holographic image experience.
"Mixed reality is the result of mixing the physical world with the digital world. Mixed reality is the next evolution in the interaction of humans, computers, and the environment. It unlocks the possibilities previously limited by our imagination. (Fig. 1)"[6] "Since mixed reality is also based on natural human perception, it is essentially a WYSIWYG human-computer interaction interface. This interface liberates humans from the complicated and profound computer user interface, goes over the tedious parameter selection, and returns to the original human sensory channel so that people can intuitively understand the world."[7]
Fig. 1. Mixed reality schematic diagram.
2.3 Mixed reality interface design
"Interactive interface refers to the medium of communication between humans and machines, that is, the medium through which information is exchanged and transmitted. The design of the interactive interface is particularly focused on and emphasizes being user-centric." [4] While a conventional interface is usually abbreviated UI, the mixed reality interface is abbreviated MUI (Mixed reality User Interface), a user interface that combines visual depth information with the real world (Fig. 2). Depth is the most prominent feature of MUI design: it is depth that users intuitively perceive and that brings a sense of distance, the embodiment of depth perception within spatial perception. Moreover, the interface the user sees combines the real world and virtual content and is presented together in vision.
Fig. 2. Schematic diagram.
Different from two-dimensional screen interface design, mixed reality interface design is a three-dimensional display of the interface. Unlike an ordinary interface, it can be moved, stacked, and zoomed in and out at will through interaction; it is a three-dimensional, hierarchical, and dynamic window interface. Moreover, what the mixed reality interface finally presents is not only the part of the interface that integrates reality but the entire view the user sees after integration, that is, the scene that exists in the real world itself plus the virtual content overlay, presented together as the interface the user sees.
3. EMPIRICAL RESEARCH
3.1 Deconstructing the visual performance of mixed reality through spatial perception
Mixed reality is mainly the process of presenting virtual interfaces, scenes, and information in space. Therefore, in the design process, we need to think about the combination, layout, presentation, and interaction methods of virtual objects and scenes with natural objects and real scenes, as well as the spatial relationships between scenes and people, between people and natural objects, and between people and virtual objects.
In the process of mixed reality application design, the fundamental law of application scene presentation is clarified by deconstructing spatial perception. From the standpoint of visual law, the application scene is set to have three layers of visual space: the resident interface layer (A), the follow-and-surround layer (B), and the mixed reality layer (C) (Fig. 3). In the actual design process, some specific information, such as a critical indicator, always needs to be visible to the user; such information is fixed relative to the human eye, and this layer is defined as the resident interface layer. The follow-and-surround layer is positioned relative to the person: the position of the information changes as the person's position changes. If the distance between the information and the person is assumed fixed, then the information on this layer lies on a spherical space centered on the person. The mixed reality layer is positioned relative to objects, and the information on this layer is integrated with the real scene.
Fig. 3. Three visual layers-visual level performance diagram.
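The three-layer classification above can be summarized by the coordinate frame each layer's content is anchored to. The sketch below is a hypothetical illustration (not from the paper): the resident layer is head-locked, the follow-and-surround layer is body-relative, and the mixed reality layer is world-anchored; all names in the code are invented for this example.

```python
from dataclasses import dataclass
from enum import Enum, auto

class VisualLayer(Enum):
    RESIDENT = auto()          # A: fixed in the user's view (head-locked)
    FOLLOW_SURROUND = auto()   # B: on a sphere centered on the person
    MIXED_REALITY = auto()     # C: anchored to real-world objects

@dataclass
class UIElement:
    name: str
    layer: VisualLayer

def anchor_frame(element: UIElement) -> str:
    """Return the coordinate frame an element is positioned in."""
    if element.layer is VisualLayer.RESIDENT:
        return "head"    # moves with every head rotation
    if element.layer is VisualLayer.FOLLOW_SURROUND:
        return "body"    # fixed distance from the person, follows position
    return "world"       # fixed to a scanned real-world location
```

For instance, a critical indicator would be a `UIElement` on the `RESIDENT` layer and therefore anchored to the head frame, while a label attached to a real object would live on the `MIXED_REALITY` layer in the world frame.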
Clearly expressing the hierarchy of application interface information helps in understanding the application software design, and combining the information presentation methods of the different levels forms the final application software design. Information distributed in real space also has essential location characteristics that distinguish physical from virtual information. For the user, information in spatial perception can be separated into information positioned relative to the person and information positioned relative to objects.
3.2 MUI constituent elements under spatial perception theory
The MUI is the interface under mixed reality technology. It is composed of two elements, real-world images and virtual-world images, in short, "real" and "virtual". HoloLens, as the most representative MR device, serves as the analysis carrier. Compared with other traditional UI designs, the MUI is displayed through an optical technical solution and lacks the separation of a screen or video frame. The real-world image is displayed within the field of view of the human eye, while the viewing angle of the device determines the display range of the virtual-world image. The field of view of MR equipment is smaller than that of the human eye, and the final composition of the MUI is shown in the figure below (left and right in Fig. 4).
Fig. 4. FOV diagram.
When designing, the explicit design content is the "virtual" among the constituent elements, while the ideal design effect of that virtual content within the "real" is also considered. Using Donald Norman's concept from Design Psychology 3, this combination of virtual and real is the designer's conceptual model; what the system actually presents, however, is the virtual and the real under the two fields of view (FOV).
3.3 MUI window mode establishment and interactive mode
Establishing the MUI's constituent elements enables a better understanding of the MUI, a subdivision of MUI windows, and design reference standards for better MUI design. The window interface is divided into two types, "virtual" and "real", which can be used in combination or individually.
The first type: using the HoloLens device in a natural environment, the interface window moves entirely with the device's field of view, and other input devices are needed to control the cursor. As shown in the figure, this is defined as the type A window mode (Fig. 5).
Fig. 5. Schematic diagram of type A window mode.
The second type: after putting on the device in the natural environment, the reference objects are the natural environment and natural objects. When the device's field of view leaves the window interface, the interface window disappears from the device screen; when the field of view returns to the original position, the interface window is seen again. In other words, the interface window is relatively fixed in space. As shown in the figure, this is defined as the type B window mode (Fig. 6).
Fig. 6. Schematic diagram of type B window mode.
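The difference between the two window modes can be expressed as a simple visibility rule. The sketch below is illustrative only: a type A window is view-locked and therefore always on screen, while a type B window is world-anchored and visible only while its direction falls inside the device's field of view. The 52-degree default FOV is an assumed placeholder, not a device specification.

```python
def is_visible_type_a() -> bool:
    """Type A window: moves with the device's field of view, so it is
    always on screen."""
    return True

def is_visible_type_b(window_bearing_deg: float, gaze_bearing_deg: float,
                      device_fov_deg: float = 52.0) -> bool:
    """Type B window: fixed relative to the environment; visible only
    while its bearing lies within the device's horizontal FOV.
    The 52-degree default is an assumed illustrative value."""
    # Smallest angular difference between window and gaze directions.
    diff = abs((window_bearing_deg - gaze_bearing_deg + 180.0) % 360.0 - 180.0)
    return diff <= device_fov_deg / 2.0
```

Under this rule, a type B window 10 degrees off the gaze direction is visible, while one 90 degrees away has left the device screen, matching the behavior described above.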
There are also two input methods for the interface design. The first is the cursor (Fig. 7a). On the HoloLens terminal, the cursor in the MUI simulates the focus of the user's eyes, which is defined as "gaze-targeting" in the application design. The interaction relies on the user's ability to acquire the target, so the user's focus should be fully considered in the design. The second is gesture operation (Fig. 7b, 7c). Although gesture operation is one of the mainstream interaction methods, to realize gesture interaction in mixed reality applications the hand making the gesture must be within the machine's field of view, that is, within the angular range that the glasses can recognize and give feedback on. When designing, we must fully consider the validity and accuracy of gestures.
Fig. 7. (a) Cursor diagram and (b, c) Gesture diagrams.
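Gaze targeting can be modeled as casting a ray from the user's head and intersecting it with an interface plane. The sketch below is a minimal, hypothetical illustration of this idea (not HoloLens API code); all function names are invented. It returns the cursor hit point on the plane, or None when the user is looking away from it.

```python
def _dot(a, b):
    """Dot product of two 3D vectors given as sequences."""
    return sum(x * y for x, y in zip(a, b))

def gaze_cursor(head_pos, gaze_dir, plane_point, plane_normal):
    """Intersect the gaze ray (head_pos + t * gaze_dir, t >= 0) with the
    plane through plane_point with normal plane_normal.
    Returns the hit point, or None if the ray misses the plane."""
    denom = _dot(gaze_dir, plane_normal)
    if abs(denom) < 1e-9:          # gaze parallel to the interface plane
        return None
    t = _dot([p - h for p, h in zip(plane_point, head_pos)],
             plane_normal) / denom
    if t < 0:                      # interface plane is behind the user
        return None
    return tuple(h + t * d for h, d in zip(head_pos, gaze_dir))
```

For example, looking straight ahead along +z at an interface plane 2 m away and facing the user, `gaze_cursor((0, 0, 0), (0, 0, 1), (0, 0, 2), (0, 0, -1))` places the cursor at that 2 m point, which is consistent with the display distances discussed later.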
4. CASE PRACTICE
4.1 Design principles
According to the definition of the three-layer visual space of mixed reality (the resident interface layer, the follow-and-surround layer, and the mixed reality layer), the mixed reality interface in this case is designed at the resident interface layer, and the window modes are used for the design arrangements. This case is the MUI design of an education APP. The design content mainly includes the initial interface and the main page.
The resident interface layer contains the entry interface of the application software and related interfaces during its use. Therefore, this visual level mostly holds data-type information such as application software descriptions; important information is usually concise and clear, and the colors are not overly conspicuous, mainly serving to enrich the interface design. At the same time, from the standpoint of spatial perception, the design of distance, size, and shape is crucial. The input methods are mainly gaze and gesture interaction. The user's line of sight is a ray, and the interface is placed on this ray at a certain distance from the user's eyes: 1.25-5 meters is the best display distance, and it is not recommended to display objects within 85 cm.
4.2 Design concept
This practical case is the interface design of a photography darkroom education APP. The focus of the interface is to guide the user's attention to the core position, where the most critical information is placed. The overall layout balances the position and attention of the primary information and the secondary information. The design content mainly includes the initial interface and the main page.
The initial interface includes the name and information of the software. The design mainly displays the software name and sets a close button; clicking it enters the main interface. Since this interface always remains present, the type A window of the window classification is used (Fig. 8a). In some application designs, however, the initial interface is set to close automatically.
Fig. 8. A conceptual drawing of (a) A window and (b) B window.
In the main interface design, each part of the software is presented directly in the main interface, and the learning links are displayed intuitively. Unlike the initial interface, the main interface layout is not limited to a fixed box but uses hexagons of different sizes, adding vividness and conveying the structure of the software intuitively. When the line of sight moves to the main interface, the interface information can be seen; when the user turns their head and looks away, it cannot. This is the type B window of the window classification (Fig. 8b).
The primary color of the interface is a fresh mint green with a matching color system, and the guide arrows use bright orange to form a striking contrast. The mixed reality interface need not be limited to a square window; the use of polygonal design makes the layout more vivid (Fig. 9). Additional visual feedback is designed for the input states of interactive objects: the object's default idle state, the gaze cursor state, and the gesture click and pressed state.
Fig. 9. Design diagram.
4.3 Presentation of MUI
The interface window is built as a 3D model to give it a three-dimensional effect. Since the equipment used in a photography darkroom is primarily dark, the interface design uses fresh and bright colors. The graphic design and 3D modeling are completed and imported into Unity. The interactive design follows the visual hierarchy rules, window modes, and input forms of mixed reality interface design, and the designed interface is presented in mixed reality as shown in the following figure (Fig. 10).
Fig. 10. Mixed reality interface rendering.
5. DATA ANALYSIS
5.1 Research questions and questionnaire design
This paper conducts case practice through experimental research on the constituent elements of interface design and the window classification modes. The main research problem is to verify whether the mixed reality interface design meets the three user experience elements of usability, availability, and attraction.
According to the characteristics of the three main factors proposed, corresponding questionnaire questions were set up (Table 1), and the questionnaire was answered on a five-point Likert scale.
Table 1. Questionnaire.
5.2 Descriptive analysis
This questionnaire survey was conducted online, mainly by issuing an electronic questionnaire through the professional survey platform "Questionnaire Star". Since the APP is photographic darkroom education software, the target audience is art students at school, all of whom had taken part in the previous study as users of the mixed reality darkroom photography software; the questionnaire designed for the interface was administered to them again. A total of 100 questionnaires were issued. After manual and machine checks, questionnaires with untrue or incomplete information were excluded, leaving 91 valid questionnaires.
Among them, 41 respondents are male, 45.05%, and 50 are female, 54.95%. There are 49 art majors, 53.85%; 13 humanities majors, 14.29%; and 29 science and engineering majors, 31.87%. There are 21 freshmen, 20 sophomores, 19 juniors, 16 seniors, and 15 master's students.
5.3 Reliability and validity and confirmatory factor analysis
From the data in the following table, the factor loadings are all greater than 0.7, the coefficient reliabilities are also greater than 0.5, and the corresponding measurement errors are all less than 0.5. The data show that the indicators of usability, availability, and attraction have good reliability and explain the questionnaire items well.
Table 2. Reliability and validity and confirmatory factor analysis.
The CR values are 0.804, 0.896, and 0.859, all greater than 0.6, indicating that the questionnaire items contained in the three factors have solid correlations and good internal consistency. The AVE values are 0.577, 0.742, and 0.671, all greater than 0.5, indicating that the comprehensive explanatory ability of the item measurement is strong and that the three factors explain the studied main factors well. The Cronbach's Alpha values are 0.797, 0.889, and 0.854, all above 0.7, indicating that the reliability of the questionnaire and of the questionnaire data is very high.
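The CR and AVE figures reported above follow the standard confirmatory-factor-analysis formulas computed from standardized factor loadings. The sketch below shows those formulas; the loadings in the usage example are hypothetical values for illustration, not the paper's data.

```python
def composite_reliability(loadings):
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error
    variances), where each error variance is 1 - loading^2 for a
    standardized loading."""
    s = sum(loadings)
    error = sum(1.0 - l * l for l in loadings)
    return s * s / (s * s + error)

def average_variance_extracted(loadings):
    """AVE = mean of the squared standardized loadings."""
    return sum(l * l for l in loadings) / len(loadings)

# Hypothetical loadings for one factor (illustration only).
lam = [0.80, 0.75, 0.72]
cr = composite_reliability(lam)
ave = average_variance_extracted(lam)
```

With these example loadings, CR comes out near 0.80 and AVE near 0.57, both clearing the 0.6 and 0.5 thresholds used in the analysis above.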
These data show that the coefficient indicators of the measured variables are good. The three factors of usability, availability, and attraction fully demonstrate that the mixed reality interface design elements are in line with the user experience.
6. CONCLUSION AND FUTURE WORK
Based on the theory of spatial perception, this paper constructs the three-layer visual space definition of mixed reality scenes, the constituent elements of interface design, and the window classification modes. Through a practical case and confirmatory factor analysis, it is concluded that the mixed reality interface design meets the three user experience elements of Usability, Availability, and Attraction. Therefore, the constituent elements and window classification modes of interface design can provide a design system reference for mixed reality interface design, helping users better receive interface information and supplying specific and effective design specifications for real-world application design.
Future work: 1) There are few cases in this study, and only confirmatory factor analysis was done; in the future, more cases will be used to study the relationship between interface design and user experience. 2) Interface design is one part of mixed reality applications; in the later stage, mixed reality game development will be carried out, which, in addition to the interface, will involve analysis and research of scenes, characters, and storylines.
REFERENCES
- Y. Kojima, Y. Yasumuro, H. Sasaki, I. Kanaya, O. Oshiro, T. Kuroda, S. Manabe, and K. Chihara, "Hand Manipulation of Virtual Objects in Wearable Augmented Reality," Seventh International Conference on Virtual Systems and Multimedia, pp. 463-469, 2001.
- T.L. Jianzhang, The Research of Spatial Depth Perception Applied to Augmented Reality, Fengjia University, 2019.
- D. Medyckyj-Scott and H.M. Hearnshaw, Human Factors in Geographical Information Systems, Halsted Press, Div. of John Wiley & Sons, 1993.
- D.A. Ouyang and Z. Ling, "The Development of the Concept of Space for School Children," Geographical Research Report, Vol. 72, pp. 166-204, 1983.
- S.R. Arnheim, translated by T. Shouyao, Visual Thinking: Aesthetic Intuition Psychology, Chengdu: Sichuan People's Publishing House, p. 24, 1998.
- S.G. Lim and C.Y. Kim, "A Study on the Change of Digital Visuality in the 21st Century through Lacanian Perspective; From the Perspective of Digital Frame Expansion," Journal of Korea Multimedia Society, Vol. 21, No. 5, pp. 638-647, 2018. https://doi.org/10.9717/KMMS.2018.21.5.638
- S.Z. Yi, Introduction to Augmented Reality Technology, Beijing: National Defense Industry Press, 2014.
- T.H. Wentao, Internet Product Interface Design and Evaluation based on User Research, Nanjing University of Aeronautics and Astronautics, 2013.