Running is a highly popular form of exercise, but incorrect running posture sustained over an extended period can lead to severe knee injuries. Smart textiles have recently demonstrated significant potential for continuous motion monitoring. This study designed and developed a smart legging with a resistive textile sensor network to monitor lower-body motion. The study consists of three main parts. First, we tested textile sensors for linearity and robustness to determine a basic sensor unit capable of capturing the characteristics of running postures. Next, optimal sensor placement was determined through comparison experiments, and a sensor network was proposed. Finally, using an LSTM model trained on data gathered from six participants, we developed a smart legging system capable of distinguishing three types of improper running posture from normal posture with 99.1% accuracy. The evaluation revealed that the smart legging system has the potential to help users adjust their running posture and prevent knee injury through continuous monitoring and multi-modal feedback.[......]
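The pipeline summarized above (windows of resistive-sensor readings fed to an LSTM that outputs one of four posture classes) can be sketched as follows. This is a minimal illustration with untrained random weights; the sensor count, window length, hidden size, and function names are assumptions for illustration, not the authors' actual configuration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, U, b):
    """One LSTM cell update; gates stacked as [input, forget, cell, output]."""
    H = h.size
    z = W @ x + U @ h + b                          # pre-activations, (4*H,)
    i, f = sigmoid(z[:H]), sigmoid(z[H:2*H])       # input / forget gates
    g, o = np.tanh(z[2*H:3*H]), sigmoid(z[3*H:])   # candidate / output gate
    c = f * c + i * g
    h = o * np.tanh(c)
    return h, c

def classify_window(window, W, U, b, Wy, by):
    """window: (time, n_sensors) strain readings -> posture class id 0..3."""
    H = Wy.shape[1]
    h, c = np.zeros(H), np.zeros(H)
    for x in window:                               # unroll over time steps
        h, c = lstm_step(x, h, c, W, U, b)
    return int(np.argmax(Wy @ h + by))             # final hidden state -> class

# Assumed sizes: 8 sensor channels, hidden size 16, 4 posture classes
rng = np.random.default_rng(0)
N_SENSORS, HIDDEN, N_CLASSES = 8, 16, 4
W = rng.normal(size=(4 * HIDDEN, N_SENSORS)) * 0.1
U = rng.normal(size=(4 * HIDDEN, HIDDEN)) * 0.1
b = np.zeros(4 * HIDDEN)
Wy = rng.normal(size=(N_CLASSES, HIDDEN)) * 0.1
by = np.zeros(N_CLASSES)

window = rng.normal(size=(100, N_SENSORS))         # 100 samples of 8 channels
label = classify_window(window, W, U, b, Wy, by)
print(label)                                       # an integer class id in 0..3
```

In a real system the weights would of course be learned from labeled running data rather than drawn at random; the sketch only shows how a sensor window flows through the recurrence to a class decision.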
Textile sensors have demonstrated significant potential in next-generation wearable systems due to their excellent performance and unobtrusive nature. By building specialized sensing networks and algorithms, textile-based wearable systems can estimate the continuous motion angles of human joints with high accuracy. This article offers a systematic review aimed at identifying key challenges in this field and encouraging further applications of textile strain sensor networks within the human–computer interaction (HCI) community. To achieve this, we conducted an exhaustive literature search across four major databases: IEEE Xplore, PubMed, Scopus, and Web of Science, spanning January 2016 to August 2023. Applying inclusion and exclusion criteria, we narrowed 2684 results down to 24 relevant papers. To analyze these studies, we proposed a framework that incorporates both technical aspects – such as textile strain sensors, sensor placement, algorithms, and technical evaluations – and contextual factors like target users, wearability, and application scenarios. Our analysis uncovered two critical research gaps: first, there is an incongruity between the development of textile-based wearables and the advancements in textile sensors; second, there is a noticeable absence of contextual design considerations in this specific domain. To address these issues, we offer discussions and recommendations from three perspectives: 1) enhancing the robustness of textile-sensing networks, 2) improving wearability, and 3) expanding application scenarios.[......]
Autonomous agents, including service robots, must adhere to moral values, legal regulations, and social norms to interact effectively with humans. A vital aspect of this is learning the ownership relationships between humans and the items they carry, which brings practical benefits and a deeper understanding of human social norms. The proposed framework enables robots to learn item ownership relationships autonomously or through user interaction. The autonomous learning component is based on Human-Object Interaction (HOI) detection, through which the robot acquires knowledge of item ownership by recognizing correlations between human-object interactions. The interactive learning component allows natural interaction between users and the robot, enabling users to demonstrate item ownership by presenting items to the robot. The learning process is divided into four stages to address the challenges posed by changing item ownership in real-world scenarios. While many aspects of ownership relationship learning remain unexplored, this research aims to explore and design general approaches to item ownership learning in service robots with respect to their applicability and robustness. In future work, we will evaluate the performance of the proposed framework through a case study.[......]
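The core idea above (autonomous ownership inference from repeated human-object interactions, with explicit user demonstration as a stronger signal) can be illustrated with a minimal frequency-based sketch. The event weights, thresholds, and names below are assumptions for illustration only, not the paper's actual HOI pipeline.

```python
from collections import Counter, defaultdict

class OwnershipModel:
    """Minimal sketch: accumulate detected (person, item, interaction)
    events and treat the person who interacts with an item most often,
    above a confidence threshold, as its likely owner."""

    def __init__(self, min_events=3, min_share=0.6):
        self.counts = defaultdict(Counter)   # item -> Counter of persons
        self.min_events = min_events         # assumed evidence threshold
        self.min_share = min_share           # assumed majority threshold

    def observe(self, person, item, interaction="carry"):
        """Autonomous learning: one HOI detection event."""
        # Weight 'stronger' interaction classes more than incidental ones.
        weight = {"carry": 2, "use": 2, "touch": 1}.get(interaction, 1)
        self.counts[item][person] += weight

    def demonstrate(self, person, item):
        """Interactive learning: the user presents the item to the robot."""
        self.counts[item][person] += 10      # explicit demonstration dominates

    def owner(self, item):
        c = self.counts[item]
        total = sum(c.values())
        if total < self.min_events:
            return None                      # not enough evidence yet
        person, n = c.most_common(1)[0]
        return person if n / total >= self.min_share else None

# Example: three carry events for alice vs. one touch event for bob
m = OwnershipModel()
for _ in range(3):
    m.observe("alice", "red_mug", "carry")
m.observe("bob", "red_mug", "touch")
print(m.owner("red_mug"))  # alice (6 of 7 weighted events)
```

Because explicit demonstrations carry a large weight, an interactive hand-over can quickly override stale autonomous evidence, which loosely mirrors the framework's need to handle changing ownership over time.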
This study investigated how drivers manage take-over when a silent or alerted failure of automated lateral control occurs after monotonous hands-off driving with partial automation. Twenty-two drivers with varying levels of prior ADAS experience participated in the driving simulator experiment. The failures were injected into the driving scenario on curved road segments, accompanied by either a visual-auditory alert or no change in the HMI. Results indicated that drivers could rarely maintain lane keeping when automated steering was disabled silently, but most drivers safely managed the alerted failure situation within the ego lane. The silent failure yielded significantly longer take-over times and generally worse lateral control quality. In contrast, poor longitudinal control performance was observed in the alerted condition due to heavier brake usage. An expert-based controllability assessment method was introduced in this study. The silent lateral failure situation during monotonous hands-off driving was rated as uncontrollable, while the alerted situation was basically controllable. Participants expressed preferences regarding the take-over requests (TORs), and the importance of conveying the reasons for TORs was also demonstrated.[......]
Service robots have been applied in an increasing number of scenarios, including homes, hospitals, offices, schools, and hotels. To ensure the usability of the interaction interface and process, the design of human-robot interaction (HRI) for service robot applications involves various interaction modalities, different robot forms, and different physical behaviors of robots in space. This makes it challenging to find low-cost, high-fidelity prototyping methods that support design exploration in the early stages of service robot application design. Currently, many rapid prototyping techniques have been applied to the design exploration stage, such as paper prototypes, storyboards, and video prototypes. However, these methods have limitations, including low fidelity and detachment from the environmental context. Researchers have also been exploring new prototyping methods to meet the design exploration and testing requirements of HRI design. Some studies have explored using VR to test HRI prototypes, most of them focusing on technical aspects and the performance of HRI capabilities. Other studies focus on specific HRI design aspects: interaction mechanisms, anthropomorphic appearance, social acceptance, and so on. Approaches focusing on these aspects are not suitable for prototyping and testing the overall interaction scenarios of service robot applications. Based on the characteristics of virtual reality technology, this paper explores how virtual reality can support multi-user collaborative design of service robot applications in a virtual environment. To address this issue, we propose a system for supporting collaborative design of service robot applications; its framework and implementation are described in detail in the paper.
Specifically, the system enables multiple users (designers or stakeholders) to enter a virtual environment in real time using head-mounted VR devices. Users can select appropriate environment model assets based on the target application scenario and perform bodystorming "on site". Using a modular robot-building tool, users can add virtual robots to the space, add or delete functional components, and adjust the position, size, and orientation of the components. The system's Wizard-of-Oz (WoZ) module allows users to control the robot's movement and component status. The graphical UI is embedded into the robot's physical display via WebView, and the system supports simulating the graphical UI interaction process. After completing the initial application concept ideation and virtual robot design, users can apply WoZ and role-playing techniques to act out the human-robot interaction process in order to evaluate and optimize the interaction design. In addition, the recorded video of the performance can support subsequent design discussions. The system is implemented using the Unity game engine, and users interact with it through Oculus Quest headsets and controllers. Design activity cases based on the system will be evaluated and discussed to analyze its strengths and weaknesses. Finally, we discuss the limitations of this work and future research directions for supporting the design of service robot applications using virtual reality.[......]
Traditional Chinese embroideries have a long history and play a central role in China's textile Intangible Cultural Heritage (ICH). At the same time, smart textiles have become a dominant trend in textile development, and thermochromic textile interfaces have attracted increasing exploration. To study how a computational thermochromic interface may contribute to the transmission of traditional embroidery craftsmanship, we investigated a novel color-changing embroidery interface and explored various prototypes in multiple scenarios, including future cockpits, fitness promotion, and household items.[......]
Stroke is a cardiovascular and cerebrovascular disease that disproportionately affects the elderly. Patients’ functional disabilities can be reduced through effective rehabilitation training. However, due to a lack of hospital resources and a desire for family contact, patients frequently discontinue in-hospital rehabilitation training sessions and return home to their local communities. Such a shift emphasizes the value of home- and community-based rehabilitation, in which patients perform daily training with remote support from therapists. This survey discusses technologies that assist stroke rehabilitation in the following aspects: (1) technologies for home-based stroke rehabilitation; (2) technologies for community-based stroke rehabilitation; and (3) technologies for therapists’ engagement in remote rehabilitation. We present a comprehensive overview of technologies that support home- and community-based stroke rehabilitation, as well as insights into future research themes.[......]
Explanations have become increasingly vital for communicating to human drivers the reasons behind the decision-making of autonomous vehicles (AVs), particularly in tactical-level driving tasks. Focusing on lane-changing scenarios, we examine whether providing tactical-level explanations, and additionally a confirmation option, influences drivers’ decision-making, trust, and emotional experience. Thirty participants were equally assigned to three groups: indicator (I), explanation (E), and explanation + confirmation (EC), each experiencing four lane-changing scenarios in a driving simulator. Real-time question probes and interviews were used to understand drivers’ decision-making processes, and post-drive questionnaires on trust and emotional experience were administered. Results indicated that merely providing tactical-level explanations had little effect on drivers’ trust and experience, but led to worse decision-making performance. The option to confirm lane changes after an explanation promoted drivers’ trust, but had mixed effects on decision-making performance. Situational trust and decision-making performance varied significantly across lane-changing scenarios.[......]
Benefiting from progress in the field of explainable artificial intelligence (XAI), explanations have become increasingly promising in the autonomous vehicle (AV) context. Providing explanations has been shown to be vital for human-AV interaction, but what and how to explain remain open questions. This study seeks to bridge the areas of XAI and human-AV interaction by combining the perspectives of both users and researchers. In this paper, a conceptual framework of explanation models is proposed to indicate which aspects to explain in human-AV interaction. Based on the framework, we introduce a scenario-based and question-driven method, the SQX-canvas, to guide the workflow of generating explanations from users’ demands in a given AV scenario. To make an initial validation of the method, a co-design workshop involving researchers and users was conducted, with four AV scenarios presented in the form of video clips. Participants produced explanation concepts and expressed their attitudes towards the AV scenarios following the “scenario, question and explanation” process. Users’ demands for explanations varied noticeably across scenarios, and findings as well as limitations are discussed. This method can provide implications for research and practice on facilitating transparent human-AV interaction.[......]
Before being widely adopted in real life, emerging technologies and design concepts require appropriate user studies to explore user demands. The smart cockpit is a typical field driven by cutting-edge technologies, and vehicles are becoming increasingly intelligent touchpoints empowered by V2X technologies. A repeated, correlative, and continuous framework was employed for future-oriented user studies, such as studies of the smart cockpit's connectivity capability in the context of V2X, by presenting Participatory Design Fictions with Mixed Reality, which aim to stimulate participants' imagination and gather their views and discussions about the future. Thematic analysis, discourse analysis, and creative analysis were adopted to evaluate this framework and method. Results indicated that, as an effective tool and method, Participatory Design Fictions with Mixed Reality provided researchers with deeper insights into the preferable futures articulated by different groups when conducting future-oriented, demand-mining user studies.[......]