For those of you who have bought, or received as a gift, a 4K UHD TV: congratulations! You have not invested in a faddish piece of home entertainment consumer electronics whose premium feature, like 3D TV's, is practically useless for lack of content.
The big difference with this evolutionary step in displays, as compared to the 2000s over-the-air (OTA) broadcast transition to HDTV, is that this time it's over-the-top (OTT) content streaming providers that didn't exist back then, such as Netflix and Amazon, that have raised the bar, and consumers are buying. The rate of adoption for UHD TV is impressive and appears to be steeper than HDTV's was.
Netflix has been busy creating content at a quality level many Hollywood television studios have not been able to keep pace with, and has set the standard for itself. However, that is beginning to change. Crown Media, owner of the Hallmark Channel, now requires producers to deliver content in 4K. It's only a matter of time before other broadcasters follow suit.
On the professional side, HDCAM SR tape as a mastering format is diminishing. It's been a few years now since broadcasters began taking digital delivery of program content, usually in the form of a ProRes 422 HQ QuickTime file transmitted on the Aspera platform. The issue with ProRes QuickTime is that it is by nature an insecure format: anyone who possesses the file can open it. This has created a need for a content container format along the lines of the encrypted DCP (digital cinema package) used to distribute theatrical content. Many, including Netflix, are banking on IMF (Interoperable Master Format) as the universal container for 4K UHD HDR masters. Tape will live on, though as LTO for physical archiving purposes.
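The difference between a bare ProRes file and a packaged, verifiable master can be pictured in a few lines. The Python sketch below shows only the manifest-plus-hash idea that package formats build on; it is not the actual IMF or DCP specification (those use XML composition playlists, asset maps, and per-recipient AES encryption via KDMs), and the file names here are invented for illustration.

```python
# Toy sketch of the "package with a manifest" idea behind formats like IMF/DCP:
# essence files are listed with content hashes so a recipient can verify that
# nothing was swapped or corrupted in transit. File names are hypothetical.
import hashlib

def build_manifest(assets):
    """Map each asset name to a SHA-256 hash of its bytes."""
    return {name: hashlib.sha256(data).hexdigest() for name, data in assets.items()}

def verify(manifest, assets):
    """Return names of assets whose bytes no longer match the manifest."""
    return [name for name, data in assets.items()
            if hashlib.sha256(data).hexdigest() != manifest.get(name)]

package = {"feature_v.mxf": b"video essence", "feature_a.mxf": b"audio essence"}
manifest = build_manifest(package)
print(verify(manifest, package))      # [] -- package intact
package["feature_v.mxf"] = b"tampered"
print(verify(manifest, package))      # ['feature_v.mxf'] -- altered asset flagged
```

A real package goes further, of course: encryption keyed per recipient is what makes a DCP unopenable by anyone who merely possesses the file.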
Meanwhile, the Blu-ray distribution window sees new life as a 4K UHD medium bringing Hollywood movies into the home through Sony's PlayStation 4 Pro. It seemed only a few years ago that Blu-ray's days were numbered, with the emergence of on-demand streaming libraries of big-title movies. But given that titles aren't consistently or even permanently available on all platforms, and given the recent obsolescence of the .mp3 music encoding format, there appears to still be value in owning entertainment on a physical medium the purchaser can hold onto and play any number of times without worrying that the right they licensed will be arbitrarily taken away.
With VR arcades and virtual cafes featuring LBE (location-based experiences) and OOH (out-of-home) experiences, mainly on the HTC Vive, popping up in cities all over the world, it's time for creators and their content to step up their game. This article explores a few details on hardware, software, and what the HTC Vive will be in the future.
Visitors to the VR Pavilion at NAB 2016 experience room scale VR in the HTC Vive headset. Photo Copyright Robb Cohen Photography and Video.
For the last 18 months I have been testing pre-releases and full releases on various VR platforms, including Samsung's untethered Gear VR, the Oculus Rift, the HTC Vive, and various forms of CAVE and VPT environments with which our festivals introduce audiences young and old to this new medium. Besides the interesting one-offs – from experimental narrative using photogrammetry and gesture capture to 3D spatialized audio chambers – I currently have over 75 roomscale or "six degrees of freedom" (aka 6DoF) VR titles in my Steam account, and I have been getting to know the HTC Vive at its best and worst.
HTC, or is it Valve? Bet on ideas, not plastic
The Vive (or HTC Vive tech, specifically) is the most advanced publicly available tech for roomscale experiences at this time, but what about tomorrow? HTC jumped at the chance to be relevant again when joining with Valve. Just as with touchscreens on phones, HTC wanted to be first out of the gate and cash in on emerging tech, while Valve had the hardware, software, platform, ideas, and talent to hand over to HTC's brand machine to make it a hit. As HTC runs in place, Valve has the Steam platform to improve for all creators on whatever equipment comes next.
Valve is interested in competition in the roomscale hardware world, freely offering its tech to anyone wanting to build their own. Keeping up to date with Valve and Steam will ensure the best possible experiences as they arrive.
Sitting, standing or extra controllers
'VR will change humanity' – this should be true for all. People with mobility issues, disabilities, and physical limitations want just as much access to games as the able-bodied. Some experiences lend themselves well to this, but there is always room for more. Teleporting within an experience has been done many ways and has become a smooth way to travel through a space without needing to physically move.
Roomscale should be designed for the specialty hardware it uses; a gamepad like the Xbox controller, often used for games, requires both hands, breaks immersion, and is not for the inexperienced. The future will include more eye tracking and improved use of the positional trackers.
Same old same old
Don't be fooled. When deciding where to spend your money, check reviews, look for demos, and avoid getting caught in re-skins of the same game. Yes, it might be only $3, but those purchases add up, and you want something original (at the very least, something the developer spent more than a weekend on).
Just as important as checking reviews: remember to review the titles you try yourself! Sites like indiegamereviewer.com and Roomscalist.com can only cover so many, and the Steam community needs help guiding potential buyers and those looking to use their time at an arcade wisely.
Roomscale is an exciting way to explore real and alternate realities of our world while at home or at a location/event made to showcase this ever evolving technology. Follow Valve, create your own Steam account and prepare for the thrilling experiences to come.
Often we see images from science fiction speculating on a future of animated holograms floating before the controllers of the world. In actual fact, the world itself will become the interface. As all surfaces become artificially intelligent, we will rapidly iterate and coalesce the disparate parts into a unified, mediated, seamless dance.
As we move into the era of artificially intelligent or networked objects, we will need to find a more contemporary narrative for the way in which the world of once seemingly innocuous and inanimate things suddenly factors into an emotional and communicative paradigm; your running shoes will talk to your refrigerator.
More importantly, there will be a narrative that forms between those various networked items as you develop an emotional connection to them, like Pee-wee saying good morning to everyone in his playhouse. I posit that when I am in my bed and can make a single macro command statement to my automatic speech recognition device – and it changes the temperature, dims the lights to an exact percentage, locks the front door, turns on my security cameras and my computer, records my daily to-do list, orders me a taxi, and has food delivered within 45 minutes – I have effectively created a form of virtual reality; but, when hybridized in this manner, the term begins to fall apart.
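That single "macro command" amounts to one recognized phrase fanning out into an ordered list of device actions. The sketch below is entirely hypothetical – the device names, actions, and dispatch mechanism are invented for illustration and correspond to no real smart-home API (Alexa routines and the like each have their own).

```python
# Hypothetical macro table: one spoken phrase -> many device actions.
# All device names and action strings are invented for illustration.
macros = {
    "good night": [
        ("thermostat", "set_temperature", 20),   # degrees C
        ("lights", "dim", 15),                   # percent
        ("front_door", "lock", None),
        ("cameras", "arm", None),
        ("computer", "record_todo", None),
    ],
}

def run_macro(phrase, dispatch):
    """Look up a recognized phrase and dispatch each of its actions in order."""
    for device, action, arg in macros.get(phrase, []):
        dispatch(device, action, arg)

log = []
run_macro("good night", lambda device, action, arg: log.append((device, action, arg)))
print(len(log))  # 5 -- five actions triggered by one phrase
```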
Everything is alive at Pee-Wee’s Playhouse
Cutting Edge Vintage
We should not assume that this will be expressly a digital shift. In fact, as IoT makes previously dumb objects responsive and artificially intelligent, heirlooms and collectibles will become animated like Frankenstein’s monster. As Simon Jenkins writes in the Guardian:
“The resurgence of retro technology is neither negative nor a hipster fad. My landline is simply better than my mobile, as my FM radio is better than my digital one. Photographers say that pictures printed from film are superior to digitised ones. A DJ knows that a vinyl groove holds a deeper bass line.”
My question is: how do I make sense of this in a meaningful way, in a philosophical way, in a way that will matter not only to me but to those around me whom I want to draw into the world of the play? What is really going on here is an ontological exercise, wherein we are learning about the very nature of being. As we reevaluate those objects imbued with sentimental meaning and memories, like holograms, we might begin to reconsider the very nature of our perception up to this point as a whole. Is everything already animated by unseen forces – meshnets at the atomic and molecular level that communicate their presence to one another?
This is one of those liminal moments, occurring perhaps every quarter-century, where the elite descend from the ivory towers to the street to speak with the street punks, and together they figure out what exactly is going on and where to go next. These beautiful, transient moments are when humanity truly takes its finest form and discusses among itself how to evolve next.
The self-reflexivity of our nature, further enhanced by this sort of technology, is something quite unlike any advancement we have seen before. Yet its effects are exactly like those of so many technological disruptions of the past, and the same dynamics emerge: we all resist at first, we all fear the implications and the consequences, and at the same time we immediately begin to absorb it, contemplate it, and move forward from that new platform.
Before I get too far ahead – let’s relist just some of the ways in which VR will impact the culture: real estate, tourism, design, scientific research, archaeology, small and big business commerce and communication, Fintech, psycho and physiotherapy, tele-robotics, entertainment and storytelling, journalism, social exchange, training and simulation, data visualization, education and so much more.
VR is not a stepping stone to Augmented Reality, as many posit; we have fully left the linear behind us. We now live in a paradigm that doesn't wait for its user to catch up, comprehend, or even become aware that it exists. There is no polite booklet filled with instructions. The rate of iteration and innovation is so intense and networked that we can only grab an oar and learn to navigate the rapids. Indeed, soon enough VR and AR will provide an interface for a hyper-connected world of things – each with its own sensors: ears, eyes, sniffers, autonomous mobility, even the ability to repair and heal itself, to learn, and to advance its own agenda. If we are lucky, we will gain enough influence over this movement to instill a sense of conscience into the AI.
CES this year was rife with ASR (automatic speech recognition) devices embedded in everything from cars and anthropomorphic robots to door locks, thermostats, hotel rooms, and medical equipment. In 2017 the brand winner in this regard was Amazon's Alexa technology – from a company whose own engineers shared their surprise at the many ways their tech has been appropriated by innovators and entrepreneurs. In previous years at NAB – North America's largest tech showcase, typically populated by industry leaders and trendsetters – we have seen virtual reality rapidly infiltrate every sector of the entertainment and information technology conversation.
Robert Scoble (far right) discussing the future of tech at NAB 2016. His latest book – “The Fourth Transformation” – focuses exclusively on Virtual and Augmented Reality – photo copyright the author
Hack To the Future
While it is easy to imagine and speculate, it is important to remember that the original intent of an invention is often rerouted to newer and more interesting uses. With the Microsoft Kinect we already saw the ability of the machine to read our heart rate, blood pressure, and body temperature, all of which will provide biofeedback to be used in any way imaginable. It has a massive hacking and modding community applying and reverse-engineering its technology for many novel applications.
One of the modes of intuitive interaction coming from the V/A/MR revolution is, of course, gaze-based navigation. Recently FOVE sold 7,000 units into the growing Asian VRcade LBE market. From moderating an exercise regimen to detecting anxiety, honesty, or arousal of any type, the technology will respond and engage with us in whatever way our future selves may find useful. But as we rapidly introduce always-on tech that monitors us for commands – be it through eye tracking, automatic speech recognition, or wearables – we introduce a whole new level of privacy concern as we submit minutiae about our emotional responses to content. How will we build in sufficient safeguards to protect users from the exploitation of always-on devices while avoiding regulation that stunts blue-sky creativity?
Of course, some of the brightest minds in the popular sphere – Hawking, Musk, Gates, Kurzweil, Minsky, Mann – also caution, without reservation, about the dangers of AI running amok. Do we build in an expiration date, as Tyrell did for replicants in Blade Runner? At what point do pseudo-sentient beings have rights? If we are monitored, do we in kind have the ability to monitor back?
At the VRTO World Conference and Expo in Toronto in 2016, Professor Steve Mann, CTO of Meta, presented his Code of Ethics for Humanistic Augmentation, in which he described the effects of surveillance and sousveillance: the right to know that one is being monitored, and the right to disclose that one is monitoring in return.
Anyone who has spent any significant amount of time in a powerfully social game like World of Warcraft, Second Life, Sims Online or even Facebook or Snapchat – understands how profound a connection can emerge, not just between a couple but a tribe, a community especially as the neural network between those connections strengthens. But there has still always been a thick layer of abstraction in this 2D UI.
Today some of the leading platforms for social VR include AltSpace, High Fidelity, VRChat, Project Sansar, and soon Facebook. Once we are operating with other human agents in a social virtual or augmented reality – one with a spatial context and a persistent horizon, let alone haptics, olfactory augmentation, or other sensory stimuli – the sense of presence and shared memory will be as potent and meaningful as anything in the Meatverse, if not more so. There is something extremely powerful about this intermediary, a proxy reality that permits humans to connect more directly and wholly.
Through this, we can reach people who may otherwise be emotionally or physically unavailable, address issues that may otherwise be too taboo, difficult or unlikely to be fielded in other contexts.
The Maligned Leading the Blind
Of course, with this there must come respect from the platform and for the platform: if people are to surrender themselves intellectually, socially, and emotionally in this manner, they must be protected – the spaces and methods for communication must be protected in their favor. There needs to be a shared ethics upon which we agree in order to realize the potential of this transition to transportive technology. We must also always remember that we are essentially hijacking the evolutionary neurological functions of a human being so as to trick them into believing something is real that is not actually there. With that must come the sense of responsibility that comes with taking another person's trust and mental health into your hands.
How will we open our minds, hearts, and bodies to this radical change towards networked, artificially intelligent, and neuroscientific tech so as to both protect ourselves and each other, while benefiting from its unpredictable exponential capability for renewal and evolution? This should always be at the forefront in considering how to create the most compelling and transformative content, the same way we test cars, furniture, and toys for safety before deploying them into the home – always bearing in mind, and leaving open, the possibility that these products and the tools to create them may be utilized in ways we could never have expected. Let's hope and build for that.
This is a work in progress by author Keram Malicki-Sanchez who is the executive director of the VRTO Virtual & Augmented Reality World Conference & Expo and the FIVARS International Festival of Virtual & Augmented Reality Stories.
About six years ago, I began hearing some in my social circle predict from time to time that movie theaters would become extinct, save for a handful scattered around some cities for festivals or nostalgic screenings. Then-rising ticket costs, food, and parking would be no match for the economics and ease of putting together a decent living-room home theater, equipped with a digital surround sound system and a flat-screen TV.
The comparison didn't really hit home for me; instead it reminded me of a director of photography who, making small talk at a party in 1991, predicted the end of editing because interactive DVD movies would let viewers become their own "picture editors."
If those prognosticators have cocooned in their homes since then, in front of their mid-2000s plasma HDTV flat screens, they've missed a generation of amazing theatrical exhibition advancements.
Of all the post-production and exhibition methods proposed in the past few years, High Dynamic Range (HDR) has endured and continues to grow. Dolby has been at the forefront with its Dolby Vision technology, impressively paired with Dolby Atmos theatrical sound systems for the Dolby Cinema experience.
Very simply, HDR means the display or projection of deeper blacks, brighter whites, and much more dynamic color, all of which become very important in bigger theaters with a long throw distance between the projector and its massive screen. There are now several HDR Dolby Cinema installations in AMC theaters across the US.
Meanwhile, research continues into ways of minimizing reflections from surfaces in the theater and ambient light in the theatrical environment, to give viewers the richest possible contrast experience.
The Barco Escape immersive format has also continued to move forward. This viewing experience surrounds the audience with screens on three sides – left, center, and right – which, depending on where the viewer is seated, either fill the peripheral vision or create a panoramic image.
Although not as widely adopted as Dolby Cinema, several Barco Escape installations are now in place in US theaters; they were showcased in the summer of 2016 with the release of Star Trek Beyond in the Barco Escape format. The production company Minds Eye Entertainment is currently finishing a slate of sci-fi, action, and thriller films destined for Barco Escape screens.
Several creative production concepts were being promoted at NAB 2015, all seeming to suggest that the old cliché of "fix it in post" has given way to "make it in post." Workflows were floated that spoke to high-frame-rate shooting; 4K, 6K, and higher resolutions; high dynamic range; immersive picture and sound; and virtual reality.
The main theme that stood out across these processes is that, more than ever, you can kick the creative can down the road to post, rather than "baking in" elements of your photography during production, such as exposure and framing.
Of these concepts, the one that has begun to be adopted in significant numbers is the 4K shoot, post, and delivery chain. That's because, as long as you are equipped with the bandwidth to handle the higher data volume, much of the production R&D is already behind us. HDTV was introduced with an abundance of consumer technology but little content. It appears we will now have the inverse: producers future-proofing their intellectual property by producing in 4K while everyone waits for mass consumer adoption of 4K entertainment and display technologies.
While the step up to 4K bandwidth can be handled easily in production and post with standardized workflows, it's just as easy to forget that compounding creative production concepts places an exponential demand on bandwidth, to the point where a workflow can be crippled.
For example, a production may start with a baseline of 4K shoot and post because that's what it is required to deliver. So it shoots with a Red Dragon camera at a nominal 4K resolution with 7:1 compression. But then the director or DP wishes to shoot 5K at 5:1 compression so that shots can be reframed tighter later in post, which effectively doubles the data. As well, the location is a vast room interior with floor-to-ceiling windows on a sunny day, so the DP wishes to shoot in Red's HDRx mode, which doubles the data again. And it's a two-camera shoot, so double it once more. How does that affect everything else down the chain?
On a given shoot day, this example could generate around 2 terabytes of camera data. That means more camera cards in circulation between the cameras and the DMT (data management technician), whose job it is to back up the media on set onto the many RAID 5 hard drives as well as the travel drives that go back to editorial and the lab, where it is redundantly backed up again onto hard drives and LTO tapes. Assuming the DMT, editorial, and the lab are sufficiently staffed and equipped to turn around dailies on that amount of bandwidth at today's processing power (they won't be), the costs of camera card rentals, hard drives, and LTO archiving have gone up through three successive doublings – a factor of eight – and additional time demands are placed on labor and workstation processing.
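The successive doublings in this example can be tallied with a back-of-envelope calculator. The 0.25 TB/day single-camera baseline below is an assumed figure for illustration, not a published REDCODE data rate; only the multipliers come from the scenario described above.

```python
# Back-of-envelope estimator for daily camera-original data volume.
# ASSUMPTION: the 0.25 TB/day baseline (one camera, 4K, 7:1 compression,
# 24 fps) is an invented illustrative figure, not a published Red spec.

def daily_data_tb(baseline_tb=0.25,
                  res_scale=1.0,    # pixel-count ratio vs. the 4K baseline
                  comp_scale=1.0,   # data increase from lighter compression
                  hdrx=False,       # Red's HDRx records a second exposure track
                  cameras=1,
                  fps_scale=1.0):   # frame-rate multiple vs. the 24 fps baseline
    total = baseline_tb * res_scale * comp_scale * cameras * fps_scale
    if hdrx:
        total *= 2.0                # second exposure doubles the data
    return total

# The scenario above: 5K at 5:1 instead of 4K at 7:1, HDRx on, two cameras.
res = (5 / 4) ** 2   # ~1.56x the pixels of 4K
comp = 7 / 5         # 1.4x more data per pixel at 5:1 vs. 7:1
print(round(daily_data_tb(res_scale=res, comp_scale=comp, hdrx=True, cameras=2), 2))
# ~2.19 TB/day, close to the "around 2 terabytes" figure
```

The same function shows why higher frame rates compound the problem: setting fps_scale to 96/24 multiplies everything by four again.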
This doesn't even get into higher frame rates: shoot at, say, 96 frames per second and everything quadruples again. While it's all an easy settings change in the camera, it's important to know the implications for workflow and budget – to be sure you can afford what becomes exponentially creative, and voluminous.