The Sony Wow! Studio was pretty great last year, with full-body haptic suits for VR experiences and a playable Resident Evil trailer that let you shoot zombies until they ripped your guts out and Milla Jovovich had to put you down with multiple gunshots…and you got to feel all of it.
This year it was more the Sony “Meh.” Studio. On the other hand, it was a lot easier to see the technologies on display as precursors to all sorts of things.


What looked like a smaller version of Kuri was Xperia Hello!, basically a mobile Alexa. I mean, obviously that’s the next stage of Alexa and Google Home, but I’m not sure Sony’s approach of a soda-bottle-sized hands-free cellphone is the winner.

At this point the layout and flow of the space really became a problem, so hopefully next year they will work with macro designers as well as micro. The sequel to last year’s Superception demo continued Sony’s project of combining and expanding human senses. This one was almost entirely vision-based, manipulating your visual perception to convince you that you were a mosquito. Five or six million science fiction plots exploded in my head (none of them involving being a mosquito), a lot of them having to do with telepresence and consciousness projection. Combined with the ANA Avatar XPRIZE challenge for robot avatars, it looks like Japan is all in on enabling people to put themselves into created bodies. Could be interesting for society, and for war.
The multi-speaker surround-sound demonstration, meant to complement the VR and AR advances in vision, was really underwhelming; I get better immersion in my home theater. I didn’t even bother playing the VR soccer kick, because I’ve done stuff like that a hundred times and it is what it is. Combined later with the Superception projection, though, this could be amazing.

Put it all together, and I expect to be able to convince myself I’m in an Aibo, move it around and interact with the world that way, then have my face and body doing whatever some hacker decides to do in mixed reality with my image.
So here’s a thought: I could go someplace, largely unrecognized because I’m going there in a drone or robot dog, while CCTV cameras are convinced I’m actually going somewhere else as a 3D video model of me is overlaid on the footage.
Finally, where last year I used a paint pen to write on the plywood graffiti wall on the way out, this year I drew with my finger in light on a table via a projected-light interface, and found my message displayed on that same plywood wall as I left. Sensors, light, and mapping.
