Researchers have previously noted the importance of including all interested parties, including caregivers, in the development of assistive technology [42, 43]. While some researchers have included users in the design process [44, 45], they more often focus on the technical challenges during development, and only receive feedback from users during evaluation.
We have previously worked with Henry, Jane, and others to develop assistive robotic technologies [15, 17]. Lessons learned from these efforts led to the development of the system reported here. During the development of this system, we met with Henry and Jane Evans weekly using video-conferencing software. We conducted remote evaluations of new functionality approximately monthly to explore design ideas and receive user feedback. We used the insights gained through these discussions and evaluations to identify both user needs and system improvements to enable effective use by individuals from the target population.
We have previously described their involvement and some aspects of the web interface design [46], and present the final hardware and software system used in the subsequent evaluations here. Because many commercially-available assistive input devices can provide single-button mouse-type input to a web browser, designing assistive technology for use with standard mouse-type input simplifies system development, reduces the need to develop specialized interfaces [12], and promotes accessibility across medical conditions, impairments, and preferences (Fig 2).
Brain-computer interfaces and other novel assistive input devices can also provide this type of input, making them a complementary technology [19, 24, 47, 48]. Additionally, while designed for use by individuals with motor impairments, this access method is also applicable to non-motor-impaired operators, and so is representative of universal design [49].
Individuals with diverse disease or injury conditions likely have diverse and possibly changing levels of impairment. These individuals may choose to use a variety of commercially-available, off-the-shelf input devices that enable single-button mouse-type input, which can be used to operate our robotic body surrogate. These devices make our system accessible across a range of sources of impairment and personal preferences. Also, system developers only need to support a single mode of interaction, reducing development and support effort.
Examples: (blue line) an individual with ALS may have limited hand function and choose to use a head-tracking mouse; (orange line) an individual with spinal muscular atrophy (SMA) may experience upper-extremity weakness and prefer a voice-controlled mouse; (green line) an individual with a spinal cord injury (SCI) may retain only voluntary eye movement and use an eye-gaze-based mouse. All three of these individuals can operate our system without modification, making it accessible across types and sources of motor impairment.
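The practical consequence of this design choice can be sketched in code: every supported device, whatever its sensing mechanism, reduces to the same minimal event stream. The event type and handler below are illustrative only, not taken from the actual system.

```python
from dataclasses import dataclass

# Hypothetical sketch: a head-tracking mouse, a voice-controlled mouse,
# and an eye-gaze mouse all ultimately emit the same minimal events, so
# the interface only has to support one input model.

@dataclass
class PointerEvent:
    x: float        # cursor position in screen coordinates
    y: float
    clicked: bool   # single-button state; no right-click, scroll, or drag

def handle_event(event: PointerEvent) -> str:
    """Dispatch a single-button mouse-type event, regardless of device."""
    if event.clicked:
        return f"activate control under cursor at ({event.x}, {event.y})"
    return f"hover at ({event.x}, {event.y})"
```

Because the handler never sees which physical device produced the event, supporting a new assistive device requires no interface changes at all.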
Other robotic manipulation interfaces, such as ROS RViz, often rely on 3D-rendered displays of the robot and surrounding environment, and require the user to manipulate both the robot and the virtual camera view. In a prior study, we found that even able-bodied users with experience in virtual 3D modeling had difficulty controlling a robot effectively using this type of interface, despite training and a brief practice session [37]. By contrast, our interface presents a consistent first-person perspective from the robot's camera, which aids the user in assuming the role of the robot, as this is similar to the perspective from which individuals experience their daily lives.
In addition to the video stream, the interface displays other sensor data from the robot in an integrated manner. If the fabric-based tactile sensors on the arms or base detect contact, a red dot or square, respectively, appears in the camera view at the location of contact (Fig 4A and 4B). If contact occurs outside of the camera view, the nearest edge or corner of the screen flashes red (Fig 4C).
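The underlying geometry is straightforward: project the 3D contact point into the camera image, and if the projection falls outside the view, clamp it to the nearest screen edge or corner. The sketch below assumes a simple pinhole camera model with made-up intrinsics; it is not the system's actual implementation.

```python
# Illustrative sketch of in-view vs. out-of-view contact display,
# assuming a pinhole camera model. Image size and focal lengths
# below are arbitrary placeholder values.

WIDTH, HEIGHT = 640, 480
FX = FY = 525.0              # assumed focal lengths, in pixels
CX, CY = WIDTH / 2, HEIGHT / 2

def project(point_cam):
    """Pinhole projection of a point in the camera frame (z forward)."""
    x, y, z = point_cam
    return (FX * x / z + CX, FY * y / z + CY)

def contact_indicator(point_cam):
    """Return where and how to display a detected contact."""
    u, v = project(point_cam)
    if 0 <= u < WIDTH and 0 <= v < HEIGHT:
        return ("dot", (u, v))       # red dot at the contact location
    # Outside the view: clamp to the nearest edge/corner and flash it red.
    return ("flash", (min(max(u, 0), WIDTH - 1),
                      min(max(v, 0), HEIGHT - 1)))
```

A contact directly ahead of the camera lands in the image center and is drawn as a dot; a contact far to the side clamps to the image border and triggers the edge flash.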
[Figure caption fragments: (A) Contact on the forearm against the table edge. (B) The control ring appears parallel to the floor to convey vertical height. (A) The view before the 3D Peek. (D) The 3D Peek view holds for 2.]
The interface uses modal control, where the same input has a different output depending upon the active mode (Fig 6). Modal control introduces the opportunity for mode errors, in which a correct command given in the wrong mode produces an undesired result [54], and can create mode-switching delays [55]. We reduce mode-switching delays by making all primary modes selectable from a top-level menu on the left of the screen, and by allowing the server-side control components to run concurrently, which means that mode switching only requires client-side interface changes.
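The structure of this modal dispatch can be sketched as follows. Mode names and actions here are illustrative; the point is that switching modes only swaps the client-side mapping from input to action, while all server-side controllers keep running.

```python
# Hedged sketch of modal dispatch. The same click routes to a different
# action depending on the active mode; switching modes changes only this
# client-side mapping, with no server-side restart.

class Interface:
    def __init__(self):
        self.mode = "drive"
        # Top-level menu: every primary mode reachable in one click,
        # reducing mode-switching delays.
        self.actions = {
            "drive":      lambda x, y: f"drive toward ({x}, {y})",
            "left_hand":  lambda x, y: f"move left gripper toward ({x}, {y})",
            "right_hand": lambda x, y: f"move right gripper toward ({x}, {y})",
        }

    def switch_mode(self, mode: str):
        self.mode = mode    # client-side change only

    def click(self, x, y):
        # Same input, different output depending on the active mode --
        # which is also exactly how mode errors arise.
        return self.actions[self.mode](x, y)
```

The same `click(x, y)` call produces different robot behavior per mode, which is why the distinct per-mode AR elements described next matter for avoiding mode errors.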
To help avoid mode errors, each mode uses visually distinct AR elements to convey the current mapping from the mouse cursor and single button to robot motions (Fig 6, S1 Video), and to display relevant sensor data in the appropriate context in the camera view. [Figure caption fragments: (A) 3D virtual orientation controls around the end effector. (B) Hovering over the blue arrow hides the other arrows and shows a yellow preview. (C) After sending a command, a green virtual gripper shows the active goal. (D) Gripper position after rotating to the left from A. (E) Hovering over the green arrow hides the other arrows and shows a preview. (F) Gripper position after rotating upward from E.] These modes also include a number of novel attributes.
As such, we now provide a more detailed description. Clicking a location results in the gripper moving toward both the clicked 2D location in the video and the clicked 3D location in the real world, where the real-world 3D location is defined by the clicked point on the virtual disk; each click moves the gripper one step on a horizontal plane toward that point. The selected step size remains in effect for all movements of the selected hand until adjusted by the user. Inset up and down arrow buttons move the gripper one step vertically up or down. As a whole, this novel interface simplifies Cartesian motions with respect to the environment, and also provides some of the advantages of direct manipulation interfaces [51].
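The click-to-step behavior can be sketched as a small geometric computation. This is an illustrative reconstruction, assuming x-y is the horizontal plane; coordinate conventions and the landing behavior near the target are assumptions, not the system's documented implementation.

```python
import math

# Illustrative sketch: one click on the virtual disk moves the gripper a
# fixed-size step on the horizontal plane toward the clicked 3D point.
# The step size persists until the user changes it.

def horizontal_step(gripper, target, step_size):
    """Return the gripper position after one step toward `target`.

    `gripper` and `target` are (x, y, z) tuples; motion is restricted
    to the horizontal (x-y) plane, matching the inset up/down arrows
    that handle vertical motion separately.
    """
    dx, dy = target[0] - gripper[0], target[1] - gripper[1]
    dist = math.hypot(dx, dy)
    if dist <= step_size:                # within one step: land on target
        return (target[0], target[1], gripper[2])
    scale = step_size / dist
    return (gripper[0] + dx * scale,
            gripper[1] + dy * scale,
            gripper[2])                  # height (z) is unchanged
```

Repeated clicks on the same disk location therefore advance the gripper in equal increments, giving the user fine-grained, predictable control over approach distance.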
The virtual orientation arrows are rendered so that they always appear in the same location and orientation relative to the gripper as the gripper moves. This display reduces visual clutter around the gripper [56] while providing a consistent interaction: the gripper always rotates in the direction indicated by the arrow, unlike alternative interfaces that provide button-pads of directional arrows, where the user must mentally map each command to the current orientation of the gripper.
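Keeping the arrows fixed relative to the gripper amounts to defining each arrow as a constant offset in the gripper's own frame and re-transforming it by the gripper's current pose every frame. The sketch below simplifies the pose to a position plus a yaw angle; a real implementation would use full 3D rotations (e.g., quaternions).

```python
import math

# Hedged sketch: each orientation arrow has a fixed offset in the
# gripper's frame. Transforming that offset by the gripper's current
# pose every frame keeps the arrow in the same place relative to the
# gripper, however the gripper moves. Yaw-only rotation is a
# simplification for illustration.

def rotate_z(point, yaw):
    """Rotate a 3D point about the z-axis by `yaw` radians."""
    x, y, z = point
    c, s = math.cos(yaw), math.sin(yaw)
    return (c * x - s * y, s * x + c * y, z)

def arrow_world_position(gripper_pos, gripper_yaw, arrow_offset):
    """World position of an arrow defined by an offset in the gripper frame."""
    ox, oy, oz = rotate_z(arrow_offset, gripper_yaw)
    return (gripper_pos[0] + ox, gripper_pos[1] + oy, gripper_pos[2] + oz)
```

Because the arrow's offset is expressed in the gripper frame, an arrow that points "left of the gripper" keeps pointing left of the gripper after any rotation, so the displayed direction and the commanded rotation never diverge.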
In both the position and orientation sub-modes, the user can open or close the gripper using sliders in the bottom left or right corner of the screen. While closing, the gripper attempts to grasp items gently but securely using the method described in [57]. When the cursor hovers over any end effector control, a yellow, semi-transparent, virtual gripper appears where the command being considered would send the gripper, providing a preview [16, 52].
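The key property of this preview is that it is a pure function of the current pose and the hovered command: nothing is sent to the robot until the user commits. The command vocabulary and step size below are invented for illustration.

```python
# Illustrative sketch: previews are computed from the current pose and
# the hovered command without commanding the robot. Command names and
# the step size are assumptions, not the system's actual vocabulary.

STEP = 0.05  # assumed step size, in meters

def goal_pose(current, command):
    """Pure function: where would this command send the gripper?"""
    x, y, z = current
    moves = {"up": (0, 0, STEP), "down": (0, 0, -STEP)}
    dx, dy, dz = moves[command]
    return (x + dx, y + dy, z + dz)

def on_hover(current, command):
    # Yellow semi-transparent virtual gripper at the would-be goal.
    return ("yellow_preview", goal_pose(current, command))

def on_send(current, command):
    # Green semi-transparent virtual gripper marks the active goal
    # and disappears once the goal is reached.
    return ("green_goal", goal_pose(current, command))
```

Hovering and sending share the same goal computation, so the yellow preview always shows exactly where the green goal marker will appear if the command is sent.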
This preview allows users to rapidly evaluate the correctness of their currently selected input, helping them avoid errors or make adjustments without actually moving the robot and possibly causing unintended interactions, such as knocking over a glass. When a command is sent, a green, semi-transparent, virtual gripper appears at the goal location and disappears once the goal is reached. Study 1 characterizes the benefit (or lack thereof) provided by a robotic body surrogate in terms of a modified version of the ARAT [58-60], and also establishes a performance baseline with respect to this assessment.
Study 1 uses questionnaires to measure perceptions of the robotic system. Our inclusion criteria required that participants be at least 18 years of age, fluent in written and spoken English, able to operate a computer mouse or an equivalent assistive device, and score nine or fewer points on the ARAT with both upper limbs. We prescreened participants verbally before enrollment.
We obtained written informed consent from all participants, and all procedures were carried out according to the approved protocol guidelines.
We asked participants about their prior use (if any) of robots, video games, and computer-aided design software, as well as for details about their computer system, Internet bandwidth, and mouse device. The low ARAT score required for inclusion made this remote assessment feasible. Participants who passed prescreening, but did not meet the criteria for impairment based on the ARAT, did not advance to the next session.
During the second session, participants used the web-based interface to operate a PR2 robot through a guided training session, which introduced all the features of the control interface, and included grasping and placing two plastic bottles from a tabletop. Participants then completed a training evaluation task using the robot without guidance. The task required coordinated use of system features to grasp a box from a nearby shelf and return it to a table in the same room. Participants who failed to complete the training task in under 35 minutes did not proceed further in the study.