The Sony Wow! Studio was pretty great last year, with full-body haptic suits for VR experiences and a playable trailer for Resident Evil that let you shoot zombies until they ripped your guts out and Milla Jovovich had to put you down with multiple gunshots…and you got to feel all of it.
This year it was more the Sony “Meh” Studio. On the other hand, it was a lot easier to see the technology on display as a precursor to all sorts of things.
Rather than humanoid robots that strive to be as lifelike as possible, Sony introduced the upgraded Aibo and a smaller version of Kuri. Do you remember Aibo, the robot dog from about ten years ago? It’s back and, I have to say, much, much better. Aibo moves more fluidly and naturally, and has a ton of sensors. This cute-but-still-hard-plastic dog seeks out its owners, learns what makes them happy, and gradually grows accustomed to wider environments. It can differentiate between family members, and has an app for teaching it new tricks. Aibo uses deep learning to analyze the sounds and images coming through that array of sensors, and uses cloud data to learn from the experiences of other Aibo units and owners. I don’t know what the Aibos at the SXSW show made of the constant, changing crowds, but they seemed as happy as the baby goats at Viceland. Sony says it will only sell Aibo in Japan, and I was surprised there was no “smart home” element built in.
What looked like a smaller version of Kuri was Xperia Hello!, basically a mobile Alexa. I mean, obviously that’s the next stage of Alexa and Google Home, but I’m not sure Sony’s approach of a soda-bottle-sized hands-free cellphone is the winner.
At this point the layout and flow of the space really became a problem, so hopefully next year they will work with macro designers as well as micro. The sequel to last year’s Superception continued to build on their project of combining and expanding human senses. This one was almost entirely vision-based, manipulating the brain’s visual cortex to convince you that you were a mosquito. Five or six million science fiction plots exploded in my head (none of them involving being a mosquito), many of them having to do with telepresence and consciousness projection. Combined with the ANA XPRIZE challenge for robot avatars, it looks like Japan is all in on enabling people to put themselves into created bodies. Could be interesting for society, and for war.
The multi-speaker surround sound demonstration, meant to match the advances in VR and AR visuals, was really underwhelming. I get better immersion in my home theater. I didn’t even bother playing the VR soccer kick, because I’ve done stuff like that a hundred times and it is what it is. Combined later with the Superception projection, though, this could be amazing.
Pretty much everything else was about mapping where you are in three dimensions, what you look like, and how you move. They have a “reverse green screen” that models a person and then drops them into footage of the set (#deepfakes2.0); a motion sensor with a crazy frame rate, demonstrated by three people playing air hockey with one physical puck (and multiple virtual pucks thrown in); cameras that map your head perfectly, giving you a model to use in video; and a Kinect-based dance/sports game that was frustrating.
Altogether, I expect to be able to convince myself I’m in an Aibo, move it around and interact with the world that way, then have my face and body doing whatever some hacker decides to do in mixed reality with my image.
So here’s a thought: I could go someplace largely unrecognized, because I’m going there in a drone or a robot dog, while CCTV cameras are convinced I’m actually somewhere else because a 3D video model of me is overlaid on their footage.
Finally: where last year I used a paint pen to write on the plywood graffiti wall on the way out, this year I used my finger to draw in light on a table via a projected-light interface, and found my message projected on that same graffiti wall as I left. Sensors, light, and mapping.