Issue: Asia-Pacific II 2015
Article no.: 4
Topic: A stage for Wildey
Author: Srividhya Srinivasan
Title: Co-Founder
Organisation: Amagi

About author

Srividhya Srinivasan is the Co-Founder of Amagi

Srividhya Srinivasan loves creating innovative and disruptive technology products that challenge the industry status quo. Vidhya brings about 20 years of technology and product-development experience to building pioneering products at Amagi.

Vidhya guides product engineering and is responsible for overall delivery and operations. Under her leadership, Amagi has successfully delivered 99.99 percent SLAs in cloud-based broadcasting for global TV networks and targeted advertising.
Prior to Amagi, Vidhya co-founded Impulsesoft in 1998, a leader in wireless audio technology. At Impulsesoft, Vidhya built next-generation wireless products with OEMs in Asia and Europe. After the acquisition by NASDAQ-listed semiconductor major SiRF, Vidhya continued to architect new products and to integrate them with SiRF’s larger product portfolio.

Vidhya began her career as a software engineer at Texas Instruments after graduating in computer science engineering.

Article abstract

Monetization versus user experience, and innovative product placement, in the age of smart television and immersive experiences.

Full Article

In 1964, Douglas Wildey’s science-fiction animated series Jonny Quest premiered on ABC. Its gadget-laden adventures ignited a curiosity for immersive video experiences around the globe, and the show ran for two successful seasons.

Over the last couple of years, the entertainment industry has witnessed the introduction of high-end wearable technology that can simulate virtual environments with almost the same fidelity as that animated series once depicted. Major players in the entertainment and gaming industries are now vying for their share of the virtual reality pie, and everyone seems smitten with the idea of delivering alternate-reality environments to their customers. This surge of interest is supported by the fact that the technology isn’t merely another device: it’s an extension of the human condition.
In a multidimensional global economy, no service provider can offer the same physical experience to consumers everywhere. To put this in a localized context, imagine that an extremely popular rock band from Spain tours the world before releasing its latest studio single. Fans and groupies will fight it out to secure tickets at the venues around the globe; some will make the cut, some won’t. At least, that is how it works in the traditional scenario.
Now picture this: in a slightly time-warped utopia of personalized entertainment, we don’t put the band on stage at all. Instead, we place them in a custom “Live Immersion” studio, where the members of the band play their scheduled tracks with all of their equipment arranged exactly as it would be on stage. We deploy 360°-capable drone cameras in the studio environment, allowing us to capture video from multiple perspectives. We also record sound at different levels based on the distance of any point in 3D space from each sound source. For example, audio captured at a point close to the lead vocalist and distant from the lead percussionist would emphasize the vocals over the drums. The idea is to create a custom environment with realistic video and audio monitoring.
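To make the distance-based mixing idea concrete, here is a minimal sketch, assuming each instrument is captured as its own track with a known 3D position and the listener’s mix weights each track by inverse distance. Every name here (the function, the stand-in tracks) is a hypothetical illustration, not part of any real broadcast system.

```python
import numpy as np

def mix_for_listener(listener_pos, sources):
    """Mix source tracks with inverse-distance weights.

    sources: list of (position, samples) pairs; positions are 3D points,
    samples are equal-length NumPy arrays. Hypothetical names throughout.
    """
    weights = []
    for pos, _ in sources:
        d = np.linalg.norm(np.asarray(listener_pos, float) - np.asarray(pos, float))
        weights.append(1.0 / (1.0 + d))  # closer sources weigh more
    total = sum(weights)
    mix = np.zeros_like(sources[0][1], dtype=float)
    for w, (_, samples) in zip(weights, sources):
        mix += (w / total) * samples  # normalized weighted sum
    return mix

# A listener standing next to the vocalist hears mostly vocals:
vocals = np.ones(4)        # stand-in for a vocal track
drums = np.full(4, 10.0)   # stand-in for a drum track
out = mix_for_listener((0, 0, 0), [((0, 1, 0), vocals), ((20, 0, 0), drums)])
```

Moving the listener’s position toward the drum kit would shift the same mix the other way, which is the whole point of capturing per-source audio.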
Now comes the fun part. With recent advances in custom 3D design, it is possible to simulate various kinds of live venue, place the band inside them and stream the entire show live. What separates broadcasting as we handle it now from broadcasting in this utopia is the wearable interface, which gives users a truly immersive experience. In this scenario, end-users purchase digital passes, priced to compete with real-life concert tickets, that are valid for the duration of the show. Each user, wearing a 3D device, connects to the digital ‘concert’ and watches their favourite band play live. Borrowing 3D technology from massively multiplayer gaming environments, we can render a crowd in the scene and perhaps even let users choose whom they want to stand next to.
If you’re in Manhattan and your spouse is on a business trip to Shanghai, who’s to say the two of you can’t catch a Rolling Stones concert together in Los Angeles? With immersive experiences pushing the limits of artificial interactivity, maybe you’ll even get to hold hands during such a simulcast event!
Beyond the wow factor, such an interface lets us explore an entirely new era of product placement. In this hypothetical environment, where the end-user is digitally placed inside an interactive show, we can safely assume that digitally created humanoids can be engineered into the simulation. And right there, the absence of absolute reality can be turned to advertising advantage.
How?
Let’s say that teenage music enthusiast Natasha purchases a pass to this simulcast concert and is effectively positioned inside the crowd. She’s captivated by the blaring music and is enjoying her digitally enhanced experience. The advertiser then displays a crate of canned beverages right next to her, inside the simulation. This sparks a feeling of thirst in Natasha, and such a scenario shifts our idea of advertising from a stop-gap solution to a perpetual format. The advertisement does not interrupt the concert as a pop-up or a commercial break. Instead, the product simply exists in front of the user, pretty much all the time. And we can tailor the placement to the specifics of any given simulation: beverages at digital concerts, sports gear or fan merchandise during a live sports event, and so on.
Purchasing the product can be very simple. We might include user controls that direct viewers to e-commerce portals, letting them pay and have the product delivered to their shipping addresses. Or, to make the transaction a little more exciting, we could fold the shopping experience into the simulation itself, where the user hands digital currency (Bitcoin, for example) to the seller’s humanoid agent and completes the purchase without leaving the simulcast environment.
The next big leap for Smart TVs is individually tailored content. At the broadcast level, we have successfully engineered the targeting of advertisements and content by regional demographic behaviour. What if the same theory were applied to an individual?
As a popular example of multi-layered content, the John Cusack starrer 1408 famously had three alternate endings, for television, theatrical and online audiences. And while audiences around the world marvelled at that voluntary inclusion of possibilities, we have also seen audiences react: many fans of the popular sitcom ‘How I Met Your Mother’ rose in fury against the original series finale and demanded an alternate ending. After days of speculation, the network executives at CBS revealed that the home-video DVD edition of the sitcom would include one.
Given the volatility of viewership and empathy, it is wise to plan such events well in advance. In 1408, one major difference between the endings was whether the protagonist survived. The outcome of a cinematic experience, positive or negative, can shape public perception of any creative giant. At this juncture, we can imagine a future in which this paradox is resolved by predictive analysis of a viewer’s emotional response, and by the delivery of individually tailored content based on their past feedback and reviews.
The internet and its repositories have become increasingly invasive over the years; IMDb, Netflix and other online movie databases already hold a wealth of individual ratings that show how different people react to various cinematic stimuli. We can then link a viewer’s post-cinematic emotions to their digital shopping behaviour too. A man who expects the warrior protagonist to make it out of the battlefield alive gets to see his hero emerge victorious, and is then shown an advertisement for personal safety equipment. This is beyond the paradigm of contextual advertising; we might need a whole new word for what it can become.
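As a toy illustration of choosing an ending from past ratings, the sketch below averages a viewer’s historical scores per ending ‘tone’ and picks the variant whose tone they have rated highest. All names, titles and scores are hypothetical, and a real recommender would of course be far more sophisticated.

```python
def pick_ending(viewer_ratings, ending_tags):
    """Choose the ending variant whose tone the viewer rates highest.

    viewer_ratings: {title: (tone, score)} drawn from past reviews.
    ending_tags: {variant_name: tone}. Hypothetical names throughout.
    """
    totals, counts = {}, {}
    for tone, score in viewer_ratings.values():
        totals[tone] = totals.get(tone, 0) + score
        counts[tone] = counts.get(tone, 0) + 1
    averages = {tone: totals[tone] / counts[tone] for tone in totals}
    # Unrated tones default to 0, so familiar tones win ties.
    return max(ending_tags, key=lambda v: averages.get(ending_tags[v], 0))

history = {
    "Film A": ("uplifting", 9),
    "Film B": ("uplifting", 8),
    "Film C": ("tragic", 4),
}
choice = pick_ending(history, {"hero_survives": "uplifting",
                               "hero_dies": "tragic"})
```

This viewer scores uplifting endings at 8.5 on average against 4 for tragic ones, so the survivor’s ending would be streamed to them.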
Linked devices can expand this idea of futuristic television, opening non-linear information and revenue channels for broadcasters. Through apps such as Shazam, we already have technology that identifies music by sampling audio. What if your tablet could identify the content playing on your living-room TV just by sampling the audio, and then present a list of references on its own screen? The information can be as diverse as required, based on exactly what you are watching. If you are an Animal Planet enthusiast watching a show about the defensive habits of tropical elephants, your tablet can auto-generate a link to deeper information on the subject, or perhaps show you an advertisement for a cool beverage to beat the equatorial heat!
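The identification step can be sketched as a vastly simplified fingerprint match. Real systems such as Shazam index spectral landmarks; the toy sign-pattern below, with its made-up catalogue and sample values, only illustrates the sample-and-match idea.

```python
def fingerprint(samples):
    """Toy fingerprint: the rise/fall pattern between adjacent samples.

    A hypothetical stand-in for real spectral fingerprinting.
    """
    return tuple(1 if b > a else 0 for a, b in zip(samples, samples[1:]))

def identify(clip, catalogue):
    """Slide the clip's fingerprint along each known track until it matches."""
    fp = fingerprint(clip)
    for title, track in catalogue.items():
        track_fp = fingerprint(track)
        for i in range(len(track_fp) - len(fp) + 1):
            if track_fp[i:i + len(fp)] == fp:
                return title
    return None  # unknown content

catalogue = {
    "elephant documentary": [3, 1, 4, 1, 5, 9, 2, 6],
    "cooking show": [2, 7, 1, 8, 2, 8, 1, 8],
}
match = identify([1, 5, 9, 2], catalogue)
```

Once the track (or TV show) is identified, the tablet can look up related links or advertisements keyed to that title.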
Smart Television is no longer restricted to the idea of internet-enabled display devices. It is time to consider integrating our lives with what we watch in the comfort of our living spaces. It is time to throw the remote control away and dive into the startling depths of the human mind for an answer.