{"id":655938,"date":"2020-05-05T09:39:31","date_gmt":"2020-05-05T16:39:31","guid":{"rendered":"https:\/\/new-cm-edgedigital.pages.dev\/en-us\/research\/?p=655938"},"modified":"2020-05-05T09:39:31","modified_gmt":"2020-05-05T16:39:31","slug":"vroom-giving-body-to-telepresence","status":"publish","type":"post","link":"https:\/\/new-cm-edgedigital.pages.dev\/en-us\/research\/blog\/vroom-giving-body-to-telepresence\/","title":{"rendered":"VROOM: Giving body to telepresence"},"content":{"rendered":"<p><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter wp-image-656361 \" src=\"https:\/\/new-cm-edgedigital.pages.dev\/en-us\/research\/wp-content\/uploads\/2020\/05\/VROOM-Final-Hero-Asset-.png\" alt=\"Senior Researcher Sean Rintel, using VROOM, interacts with Research Assistant Priscilla Wong, who is wearing a HoloLens. Sean\u2019s photorealistic avatar is clapping along with Priscilla.\" width=\"716\" height=\"430\" srcset=\"https:\/\/new-cm-edgedigital.pages.dev\/en-us\/research\/wp-content\/uploads\/2020\/05\/VROOM-Final-Hero-Asset-.png 640w, https:\/\/new-cm-edgedigital.pages.dev\/en-us\/research\/wp-content\/uploads\/2020\/05\/VROOM-Final-Hero-Asset--300x180.png 300w\" sizes=\"auto, (max-width: 716px) 100vw, 716px\" \/><\/p>\n<p><em>Editor&#8217;s Note: This post was written collaboratively by Brennan Jones, Sunny Zhang, Priscilla Wong, and Sean Rintel and told from the first-person perspective of Brennan Jones.<\/em><\/p>\n<p>One of my life missions is to connect people, and I\u2019ve been pursuing this mission through research projects that bring remote friends, couples, conference attendees, emergency workers, and search and rescue volunteers together. 
So when I joined the <a href=\"https:\/\/new-cm-edgedigital.pages.dev\/en-us\/research\/theme\/future-of-work\/\">Future of Work<\/a> theme at <a href=\"https:\/\/new-cm-edgedigital.pages.dev\/en-us\/research\/lab\/microsoft-research-cambridge\/\">Microsoft Research Cambridge<\/a> for a summer internship in 2019, I was excited. The theme is led by <a href=\"https:\/\/new-cm-edgedigital.pages.dev\/en-us\/research\/people\/asellen\/\">Abigail Sellen<\/a>, Deputy Director of the lab and a pioneer in video-mediated communication, and I\u2019d be supervised by Senior Researcher <a href=\"https:\/\/new-cm-edgedigital.pages.dev\/en-us\/research\/people\/serintel\/\">Sean Rintel<\/a>, who leads the <a href=\"https:\/\/new-cm-edgedigital.pages.dev\/en-us\/research\/project\/socially-intelligent-meetings\/\">Socially Intelligent Meetings<\/a> workstream. Of course, the irony wasn\u2019t lost on me that I had to travel to the United Kingdom from my home in Vancouver, Canada, to work on video collaboration. This also meant being over 4,600 miles and an eight-hour time difference away from my girlfriend, <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/www.linkedin.com\/in\/yaying-zhang\/\">Sunny Zhang<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, a Microsoft Software Development Engineer in Vancouver.<\/p>\n<p>We stayed in touch through daily video chats and messaging and even took advantage of a more advanced way of connecting: telepresence robots. Effectively video chat on wheels, a telepresence robot allows a remote individual to drive around another place and see what\u2019s going on from the robot\u2019s camera while people in the space can see the remote individual on the robot\u2019s screen. The Cambridge lab had a Suitable Technologies Beam robot, so late one afternoon, during the first week of my internship, Sunny \u201cbeamed in\u201d for a tour. 
Rather than me carrying Sunny around on my phone or laptop, she \u201cwalked\u201d <em>with<\/em> me; the robot gave her physical and mobile autonomy. She even made a special friend\u2014a mini Wall-E robot sitting on my colleague Martin Grayson\u2019s desk. Martin made Wall-E dance, and in response, Sunny rotated her robot body to dance, cementing their robot friendship.<\/p>\n<div id=\"attachment_655965\" style=\"width: 1034px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" aria-describedby=\"caption-attachment-655965\" class=\"wp-image-655965 size-large\" src=\"https:\/\/new-cm-edgedigital.pages.dev\/en-us\/research\/wp-content\/uploads\/2020\/05\/VROOM-Image-1v2-1024x448.jpg\" alt=\"A series of three photos chronicling Sunny Zhang\u2019s tour of the Microsoft Research Cambridge lab via telepresence robot.\" width=\"1024\" height=\"448\" srcset=\"https:\/\/new-cm-edgedigital.pages.dev\/en-us\/research\/wp-content\/uploads\/2020\/05\/VROOM-Image-1v2-1024x448.jpg 1024w, https:\/\/new-cm-edgedigital.pages.dev\/en-us\/research\/wp-content\/uploads\/2020\/05\/VROOM-Image-1v2-300x131.jpg 300w, https:\/\/new-cm-edgedigital.pages.dev\/en-us\/research\/wp-content\/uploads\/2020\/05\/VROOM-Image-1v2-768x336.jpg 768w, https:\/\/new-cm-edgedigital.pages.dev\/en-us\/research\/wp-content\/uploads\/2020\/05\/VROOM-Image-1v2.jpg 1364w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><p id=\"caption-attachment-655965\" class=\"wp-caption-text\">Researcher Brennan Jones and his girlfriend, Microsoft Software Development Engineer Sunny Zhang, used a Suitable Technologies Beam robot to stay connected when Brennan took an internship at Microsoft Research Cambridge, putting over 4,600 miles of distance between the couple. While \u201cvisiting\u201d the lab, Sunny made friends with colleague Martin Grayson\u2019s Wall-E robot (far left).<\/p><\/div>\n<p>We took a selfie as a memento of our time there together. 
But Sunny was still trapped and flattened on the robot\u2019s monitor, much like video chat on a laptop or phone. And from her perspective, I was trapped and flattened on her screen. There was a wall between us. I wanted it to feel more like she was there with me and also wanted <em>her<\/em> to feel more like she was present.<\/p>\n<p>This desire to connect in meaningful ways, both in our personal and professional lives, is in our nature and is the motivation behind <a href=\"https:\/\/new-cm-edgedigital.pages.dev\/en-us\/research\/publication\/vroom-virtual-robot-overlay-for-online-meetings\/\">Virtual Robot Overlay for Online Meetings<\/a>, or VROOM for short, an <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/chi2020.acm.org\/\">ACM CHI Conference on Human Factors in Computing Systems (CHI 2020)<span class=\"sr-only\"> (opens in new tab)<\/span><\/a> Late-Breaking Work. VROOM is a two-way telepresence system with two aims. The first is to help a remote individual feel that a remote physical place belongs as much to them as to the local people in it. The second is to help local people feel that a remote individual is with them in the same physical space\u2014be they colleagues, friends, or partners. 
VROOM is our story of making<em> being there remotely <\/em>a reality.<\/p>\n<div class=\"yt-consent-placeholder\" role=\"region\" aria-label=\"Video playback requires cookie consent\" data-video-id=\"9ZZ-YdUU01w\" data-poster=\"https:\/\/img.youtube.com\/vi\/9ZZ-YdUU01w\/maxresdefault.jpg\"><iframe aria-hidden=\"true\" tabindex=\"-1\" title=\"VROOM: Virtual Robot Overlay for Online Meetings\" width=\"500\" height=\"281\" data-src=\"https:\/\/www.youtube-nocookie.com\/embed\/9ZZ-YdUU01w?feature=oembed&rel=0&enablejsapi=1\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\" referrerpolicy=\"strict-origin-when-cross-origin\" allowfullscreen><\/iframe><\/p>\n<div class=\"yt-consent-placeholder__overlay\"><button class=\"yt-consent-placeholder__play\"><svg width=\"42\" height=\"42\" xmlns=\"http:\/\/www.w3.org\/2000\/svg\" aria-hidden=\"true\" focusable=\"false\"><g fill=\"none\" fill-rule=\"evenodd\"><circle fill=\"#000\" opacity=\".556\" cx=\"21\" cy=\"21\" r=\"21\"\/><path stroke=\"#FFF\" d=\"M27.5 22l-12 8.5v-17z\"\/><\/g><\/svg><span class=\"yt-consent-placeholder__label\">Video playback requires cookie consent<\/span><\/button><\/div>\n<\/div>\n<h3>The ingredients: Mobility, immersion, and presence<\/h3>\n<p>Traditional video chat has enabled people to attend meetings remotely, partake in virtual classroom activities, and connect with family members overseas. However, it has obvious physical and spatial limitations. Static cameras with small fields of view restrict how much we can see of one another, make it difficult to refer to things in the other space, and\u2014of course\u2014deny us the choice of looking and moving around one another\u2019s space. 
To overcome these limitations, researchers have explored more exotic solutions, such as combining 360\u00b0 cameras with virtual reality (VR), using augmented reality (AR) headsets to see full-body avatars of others in one\u2019s own space, and employing robotic telepresence.<\/p>\n<p>Usually, virtual space is created via 3D modeling, like in fantasy scenes in VR games, and the people in it are embodied in 3D-illustrated avatars. But it can also be a real place captured by a 360\u00b0 camera, such as in immersive 360\u00b0 VR films. The 360\u00b0 camera works as a \u201cremote eye\u201d for the user, providing the feeling of being <em>in<\/em> the place. Further, instead of a static camera, <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3064663.3064707\">researchers have explored attaching 360\u00b0 cameras onto local individuals<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, enabling remote people to share the local person\u2019s perspective, or <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3173574.3173933\">onto a telepresence robot to enable both immersion and autonomy for the remote individual<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>.<\/p>\n<p>While VR and 360\u00b0 cameras can give remote individuals the illusion they\u2019re in the local space, AR can give local individuals the illusion the remote person is there in the space with them. AR technology maps the physical environment and overlays digital content on top of it. 
A well-known research example is <a href=\"https:\/\/new-cm-edgedigital.pages.dev\/en-us\/research\/project\/holoportation-3\/\">Holoportation<\/a>, in which an individual in a space surrounded by cameras is seen live in photorealistic full-size 3D video by a person in another space using a <a href=\"https:\/\/new-cm-edgedigital.pages.dev\/en-us\/hololens\">Microsoft HoloLens<\/a>. A commercial application is <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/spatial.io\/\">Spatial<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, in which people\u2019s facial selfies are mapped onto 3D-illustrated avatars. Individuals see one another\u2019s avatars via a HoloLens or other head-mounted display.<\/p>\n<p style=\"text-align: left;\">As impressively convincing as these experiences are, there are two common limitations regarding mobility. First, since individuals\u2019 rooms are laid out differently, avatars may appear to walk through walls or stand on tables, breaking the illusion of presence. There are approaches to solve that using an <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/arxiv.org\/abs\/1910.05998\">appropriately mapped mutual space<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, but that doesn\u2019t solve the second limitation: no matter how good the mutual mapping, remote individuals can\u2019t explore a remote environment by themselves. They must be in a meeting with others, and only those others can see them, and only within the mapped mutual space.<\/p>\n<p>These three technologies (telepresence robots, VR, and AR) feel so close to what we want but, without a way to integrate them, <em>just<\/em> miss the mark. What if they could be combined? 
Life sometimes prepares all the ingredients for you; you just need a little motivation to put them together, and with some added spice, you discover a great new dish.<\/p>\n<p>This is exactly what happened for us. Our motivation had begun building a year before my Microsoft internship. During a hike, Sunny asked about attaching an avatar of a remote person to a telepresence robot to help local individuals feel like the remote person was there with them. I loved the idea and built on it: What if you also attached a 360\u00b0 camera to the robot to livestream the local space to the remote person in VR to help <em>them<\/em> feel like they were there? As can happen when you get tied up with life and other research, we didn\u2019t talk much about the idea after that. Little did we know, an opportunity to gather the essential ingredients would present itself: the annual internal Microsoft Hackathon.<\/p>\n<div id=\"attachment_655953\" style=\"width: 991px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" aria-describedby=\"caption-attachment-655953\" class=\"wp-image-655953 size-full\" src=\"https:\/\/new-cm-edgedigital.pages.dev\/en-us\/research\/wp-content\/uploads\/2020\/05\/VROOM-Image-2.png\" alt=\"On the left, two colleagues \u2014 one using a telepresence robot \u2014 talk and wave to one another. 
On the right, the same photo with a rough sketch of a full-size avatar in yellow drawn over the telepresence robot and a rough sketch of a mixed reality headset drawn over the colleague occupying the physical space.\" width=\"981\" height=\"523\" srcset=\"https:\/\/new-cm-edgedigital.pages.dev\/en-us\/research\/wp-content\/uploads\/2020\/05\/VROOM-Image-2.png 981w, https:\/\/new-cm-edgedigital.pages.dev\/en-us\/research\/wp-content\/uploads\/2020\/05\/VROOM-Image-2-300x160.png 300w, https:\/\/new-cm-edgedigital.pages.dev\/en-us\/research\/wp-content\/uploads\/2020\/05\/VROOM-Image-2-768x409.png 768w, https:\/\/new-cm-edgedigital.pages.dev\/en-us\/research\/wp-content\/uploads\/2020\/05\/VROOM-Image-2-710x380.png 710w\" sizes=\"auto, (max-width: 981px) 100vw, 981px\" \/><p id=\"caption-attachment-655953\" class=\"wp-caption-text\">Brennan Jones, Sunny Zhang, and their teammates combined telepresence robots\u2019 mobility, VR\u2019s immersion, and AR\u2019s presence in pursuing an enhanced experience with VROOM. Sunny illustrated the idea of attaching an avatar of a remote person to a telepresence robot on a photo of Brennan meeting a colleague, Banu Saat\u00e7i, when he was using the robot.<\/p><\/div>\n<h3>Microsoft Hackathon: The perfect opportunity to put idea into action<\/h3>\n<p>While our telepresence tour of the Cambridge lab that first week of my internship may have helped bring the idea back to mind, the hackathon at the end of July spurred us into action. During the hackathon, every Microsoft employee has free time to work on any project they want, with anyone in the company. We told Sean our idea, and he was immediately excited about it. We quickly formed a team of colleagues from Cambridge and Vancouver and turned our eight-hour time difference into an advantage. When the Cambridge group finished working for the day, we handed everything over to the Vancouver group. 
There was always someone working on the project!<\/p>\n<p>By the end of the week, we had a demo experience hard-coded with Sean as our test remote user. A local user wearing a HoloLens could see a cartoon avatar of Sean standing on a hoverboard, moving with the telepresence robot via marker tracking. On the remote side, Sean wore a Windows Mixed Reality VR headset, through which he got a 360\u00b0 view of the remote space streaming live from a camera on the robot. Although local individuals needed a HoloLens to see Sean\u2019s avatar, he could drive the robot freely around the local space without needing to be in a meeting with anyone. This brought the space <em>to<\/em> Sean, helping him feel a sense of ownership akin to that of the local people actually in it. The space was <em>his<\/em> to explore, <em>his<\/em> to take in, and <em>his<\/em> to be present in.<\/p>\n<p>We were closer to our vision, but not quite there yet. The avatar was static and non-expressive, and the live video stream had low quality and high latency. Luckily, Sean approached me after the hackathon about pivoting the remainder of my internship to improving VROOM. I jumped at the chance. Sunny was on board, too, and we were joined by Sean\u2019s Research Assistant, <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/uclic.ucl.ac.uk\/people\/priscilla-wong\">Priscilla Wong<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, who would help us manage a study comparing standard robotic telepresence to VROOM telepresence (expected to be published later this year).<\/p>\n<h3>Stepping out of the monitor<\/h3>\n<p>Over the next two months, we upgraded the system and ran the user study. 
The improved VROOM incorporated the following important adjustments:<\/p>\n<ul>\n<li>We used a newer 360\u00b0 camera to increase the quality and reduce the latency of the video stream.<\/li>\n<li>We increased the fidelity of the avatar, changing it from a cartoon to a photorealistic representation of the remote individual. The Avatar Maker Pro Unity library was used to create the head, which we then combined with a Unity standard animated body.<\/li>\n<li>We made the avatar more expressive by animating it and rigging the head and arms to move according to remote individuals\u2019 actions, as detected by a gyroscope in the VR headset and handheld VR controllers, respectively. When remote individuals looked around in the VR view, their avatar\u2019s head turned; when their hands moved, their avatar\u2019s arms moved. We also gave remote individuals a first-person view of their avatar body. When they looked down, they could see their shoulders, arms, torso, legs, and feet.<\/li>\n<li>Simple actions initiated by remote individuals triggered animations on their avatar. When they drove the robot, their avatar\u2019s legs walked; when they spoke, the avatar\u2019s mouth opened and closed.<\/li>\n<li>Some canned animations, such as blinking and slight body movements when the avatar was idle, rounded out the illusion of an embodied version of the remote individual.<\/li>\n<\/ul>\n<div id=\"attachment_655950\" style=\"width: 1034px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" aria-describedby=\"caption-attachment-655950\" class=\"wp-image-655950 size-large\" src=\"https:\/\/new-cm-edgedigital.pages.dev\/en-us\/research\/wp-content\/uploads\/2020\/05\/VROOM-Image-3-1024x377.jpg\" alt=\"On the left, a cartoon representation of a person overlaid on a telepresence robot moving in the hallway of an office. 
On the right, a photorealistic representation of a person overlaid on a telepresence robot moving in the hallway of an office.\" width=\"1024\" height=\"377\" srcset=\"https:\/\/new-cm-edgedigital.pages.dev\/en-us\/research\/wp-content\/uploads\/2020\/05\/VROOM-Image-3-1024x377.jpg 1024w, https:\/\/new-cm-edgedigital.pages.dev\/en-us\/research\/wp-content\/uploads\/2020\/05\/VROOM-Image-3-300x111.jpg 300w, https:\/\/new-cm-edgedigital.pages.dev\/en-us\/research\/wp-content\/uploads\/2020\/05\/VROOM-Image-3-768x283.jpg 768w, https:\/\/new-cm-edgedigital.pages.dev\/en-us\/research\/wp-content\/uploads\/2020\/05\/VROOM-Image-3.jpg 1137w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><p id=\"caption-attachment-655950\" class=\"wp-caption-text\">For the Microsoft Hackathon, the VROOM demo (left) included a cartoon avatar of team member Sean Rintel. Through HoloLens, local users could see Sean\u2019s avatar moving through the halls with the telepresence robot. After the hackathon, the team increased the fidelity of the avatar, changing it from a cartoon to a photorealistic representation (right).<\/p><\/div>\n<p>For us, the improvements showed the potential to bring telepresence to a whole new level. Remote individuals were finally able to \u201cstep out\u201d from the monitor, having the freedom to explore and more fully immerse themselves in the distant space while also expressing themselves more. They could \u201cwalk around\u201d, clap, high-five their local counterparts, extend their arms, and move their head. In turn, those in the local space could better understand their intent thanks to nonverbal cues like head direction and arm gestures.<\/p>\n<p>While the underlying technologies still need more unification, we think of VROOM and similar VR and telepresence technologies as a bridge between people, environments, and experiences. 
VROOM is our story of connecting people.<\/p>\n<p><em>Special thanks to all the hackathon team members (from left to right in the photo below), who have helped tremendously with this work: <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/www.linkedin.com\/in\/leonyanglyu\/\">Software Development Engineer Leon Lu,<span class=\"sr-only\"> (opens in new tab)<\/span><\/a> Brennan Jones, Sunny Zhang, <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"http:\/\/minnie-liu.com\/\">Software Engineer Minnie Liu,<span class=\"sr-only\"> (opens in new tab)<\/span><\/a> <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/www.linkedin.com\/in\/xucao1\/\">Software Engineer Xu Cao<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, Sean Rintel, Software Engineer He Huang, <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/www.linkedin.com\/in\/yannickzj\/?originalSubdomain=ca\">Software Engineer Zhao Jun<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, <a href=\"https:\/\/new-cm-edgedigital.pages.dev\/en-us\/research\/people\/jws\/\">Senior Researcher James Scott<\/a>, and Software Development Engineer Matthew Gan (not pictured).<\/em><\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter wp-image-655941 size-full\" src=\"https:\/\/new-cm-edgedigital.pages.dev\/en-us\/research\/wp-content\/uploads\/2020\/05\/VROOM-Image-4.png\" alt=\"Small cardboard cutouts of VROOM hackathon team members propped up on a table. 
Each cutout has a photograph of the team member\u2019s face paired with a cartoon superhero body.\" width=\"1002\" height=\"276\" srcset=\"https:\/\/new-cm-edgedigital.pages.dev\/en-us\/research\/wp-content\/uploads\/2020\/05\/VROOM-Image-4.png 1002w, https:\/\/new-cm-edgedigital.pages.dev\/en-us\/research\/wp-content\/uploads\/2020\/05\/VROOM-Image-4-300x83.png 300w, https:\/\/new-cm-edgedigital.pages.dev\/en-us\/research\/wp-content\/uploads\/2020\/05\/VROOM-Image-4-768x212.png 768w\" sizes=\"auto, (max-width: 1002px) 100vw, 1002px\" \/><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Editor&#8217;s Note: This post was written collaboratively by Brennan Jones, Sunny Zhang, Priscilla Wong, and Sean Rintel and told from the first-person perspective of Brennan Jones. One of my life missions is to connect people, and I\u2019ve been pursuing this mission through research projects that bring remote friends, couples, conference attendees, emergency workers, and search [&hellip;]<\/p>\n","protected":false},"author":38838,"featured_media":656361,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"msr-url-field":"","msr-podcast-episode":"","msrModifiedDate":"","msrModifiedDateEnabled":false,"ep_exclude_from_search":false,"_classifai_error":"","msr-author-ordering":null,"msr_hide_image_in_river":0,"footnotes":""},"categories":[1],"tags":[],"research-area":[],"msr-region":[],"msr-event-type":[],"msr-locale":[268875],"msr-post-option":[],"msr-impact-theme":[],"msr-promo-type":[],"msr-podcast-series":[],"class_list":["post-655938","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-research-blog","msr-locale-en_us"],"msr_event_details":{"start":"","end":"","location":""},"podcast_url":"","podcast_episode":"","msr_research_lab":[199561],"msr_impact_theme":[],"related-publications":[],"related-downloads":[],"related-videos":[],"related-academic-programs":[],"related-groups":[],"r
elated-projects":[898182,639096,483294,241727],"related-events":[641571],"related-researchers":[{"type":"guest","value":"brennan-jones","user_id":"656034","display_name":"Brennan  Jones","author_link":"<a href=\"https:\/\/brennanjones.com\/\" aria-label=\"Visit the profile page for Brennan  Jones\">Brennan  Jones<\/a>","is_active":true,"last_first":"Jones, Brennan ","people_section":0,"alias":"brennan-jones"}],"msr_type":"Post","featured_image_thumbnail":"<img width=\"640\" height=\"384\" src=\"https:\/\/new-cm-edgedigital.pages.dev\/en-us\/research\/wp-content\/uploads\/2020\/05\/VROOM-Final-Hero-Asset-.png\" class=\"img-object-cover\" alt=\"Senior Researcher Sean Rintel, using VROOM, interacts with Research Assistant Priscilla Wong, who is wearing a HoloLens. Sean\u2019s photorealistic avatar is clapping along with Priscilla.\" decoding=\"async\" loading=\"lazy\" srcset=\"https:\/\/new-cm-edgedigital.pages.dev\/en-us\/research\/wp-content\/uploads\/2020\/05\/VROOM-Final-Hero-Asset-.png 640w, https:\/\/new-cm-edgedigital.pages.dev\/en-us\/research\/wp-content\/uploads\/2020\/05\/VROOM-Final-Hero-Asset--300x180.png 300w\" sizes=\"auto, (max-width: 640px) 100vw, 640px\" \/>","byline":"<a href=\"https:\/\/brennanjones.com\/\" title=\"Go to researcher profile for Brennan  Jones\" aria-label=\"Go to researcher profile for Brennan  Jones\" data-bi-type=\"byline author\" data-bi-cN=\"Brennan  Jones\">Brennan  Jones<\/a>","formattedDate":"May 5, 2020","formattedExcerpt":"Editor&#039;s Note: This post was written collaboratively by Brennan Jones, Sunny Zhang, Priscilla Wong, and Sean Rintel and told from the first-person perspective of Brennan Jones. 
One of my life missions is to connect people, and I\u2019ve been pursuing this mission through research projects that&hellip;","locale":{"slug":"en_us","name":"English","native":"","english":"English"},"_links":{"self":[{"href":"https:\/\/new-cm-edgedigital.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/posts\/655938","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/new-cm-edgedigital.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/new-cm-edgedigital.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/new-cm-edgedigital.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/users\/38838"}],"replies":[{"embeddable":true,"href":"https:\/\/new-cm-edgedigital.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/comments?post=655938"}],"version-history":[{"count":5,"href":"https:\/\/new-cm-edgedigital.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/posts\/655938\/revisions"}],"predecessor-version":[{"id":656388,"href":"https:\/\/new-cm-edgedigital.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/posts\/655938\/revisions\/656388"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/new-cm-edgedigital.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/media\/656361"}],"wp:attachment":[{"href":"https:\/\/new-cm-edgedigital.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/media?parent=655938"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/new-cm-edgedigital.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/categories?post=655938"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/new-cm-edgedigital.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/tags?post=655938"},{"taxonomy":"msr-research-area","embeddable":true,"href":"https:\/\/new-cm-edgedigital.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/research-area?post=655938"},{"taxonomy":"msr-region","embeddable":true,"href":"https:\/\/new-cm-edgedigital.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/msr-region?post=655938"},{"taxo
nomy":"msr-event-type","embeddable":true,"href":"https:\/\/new-cm-edgedigital.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/msr-event-type?post=655938"},{"taxonomy":"msr-locale","embeddable":true,"href":"https:\/\/new-cm-edgedigital.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/msr-locale?post=655938"},{"taxonomy":"msr-post-option","embeddable":true,"href":"https:\/\/new-cm-edgedigital.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/msr-post-option?post=655938"},{"taxonomy":"msr-impact-theme","embeddable":true,"href":"https:\/\/new-cm-edgedigital.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/msr-impact-theme?post=655938"},{"taxonomy":"msr-promo-type","embeddable":true,"href":"https:\/\/new-cm-edgedigital.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/msr-promo-type?post=655938"},{"taxonomy":"msr-podcast-series","embeddable":true,"href":"https:\/\/new-cm-edgedigital.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/msr-podcast-series?post=655938"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}