<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1">
<style type="text/css" style="display:none;"><!-- P {margin-top:0;margin-bottom:0;} --></style>
</head>
<body dir="ltr">
<div id="divtagdefaultwrapper" dir="ltr" style="font-size: 12pt; color: rgb(0, 0, 0); font-family: Calibri, Helvetica, sans-serif, "EmojiFont", "Apple Color Emoji", "Segoe UI Emoji", NotoColorEmoji, "Segoe UI Symbol", "Android Emoji", EmojiSymbols;">
<p><span style="font-family:Arial,Helvetica,sans-serif; font-size:11pt">Hello everyone!</span></p>
<p><br>
</p>
<p><span style="font-family:Arial,Helvetica,sans-serif; font-size:11pt">At this week's lunch seminar, visiting researcher
</span><span style="font-family:Arial,Helvetica,sans-serif; font-size:11pt">Rie Kamikubo</span><span style="font-family:Arial,Helvetica,sans-serif; font-size:11pt"> will be talking about her recent work investigating how to match sighted mobility assistants
with people who are blind or have visual impairments. The working title and abstract are included below.</span></p>
<p><br>
<span style="font-family:Arial,Helvetica,sans-serif; font-size:11pt">Lunch will be provided and
</span><span style="font-family:Arial,Helvetica,sans-serif; font-size:11pt">we hope to see you there!</span></p>
<p><br>
</p>
<p><span style="font-family:Arial,Helvetica,sans-serif; font-size:11pt">- Amy</span></p>
<p><br>
</p>
<p><br>
</p>
<p><span style="font-family:Arial,Helvetica,sans-serif; font-size:11pt">------</span></p>
<p><br>
</p>
<p><span style="font-family:Arial,Helvetica,sans-serif; font-size:11pt">Speaker: </span>
<span style="font-family:Arial,Helvetica,sans-serif; font-size:11pt">Rie Kamikubo</span></p>
<p><span><br>
</span></p>
<p><span style="font-family:Arial,Helvetica,sans-serif; font-size:11pt">Title: </span>
<span style="font-family:Arial,Helvetica,sans-serif; font-size:11pt">How to Match the Sighted with the Visually Impaired in Remote Mobility Assistance</span></p>
<p><span><br>
</span></p>
<p><span><span style="font-family:Arial,Helvetica,sans-serif; font-size:11pt">Description:
</span><span style="font-family:Arial,Helvetica,sans-serif; font-size:11pt">Pedestrians with visual impairments often rely on sighted guides for mobility assistance, but such guides are not always available and do not foster independent mobility. Recent technological
strides have produced mobile applications for sighted guidance services that offer audio/video communication with sighted workers. However, little is known about how assigning assistants with different types of expertise affects the overall
user experience of remote mobility assistance. Our exploratory study investigates the influence of 1) the assistant's unfamiliarity with the user's environment and 2) the assistant's limited sighted-guide experience, by analyzing the concerns, discourse, and perspectives of pedestrians
and assistants during remote interaction on navigation tasks. We observed benefits when assistants were familiar with the environment, but also complications when they lacked the awareness to describe directional cues in context, something experienced sighted guides were able to offer.
To suggest suitable conditions for matching sighted assistants with visually impaired users, our insights help direct the training and requirements of remote assistants so as to strengthen and complement their levels of expertise.</span></span><br>
</p>
</div>
</body>
</html>