A virtual journey through sound

David Robinson, Senior Strategist

Here at TheTin we run monthly Tinnovation sessions, a chance to explore and discuss new trends, tools and technologies. They often lead to internal projects, which give us a practical way to work with something new, free from the usual deadline constraints and taking more risks than we would on client work. Our latest Tinnovation project, Band Explorer VR, is up and running – but how did we do it? Senior Strategist David and Designer Daisy talk us through the process…

Project inception

Following on from a Tinnovation session focusing on VR, we knew we wanted to have a go at building something for ourselves, but there had to be a reason for doing so. The underlying goal was to produce something where the medium itself allowed for a solution with benefits over and above a more traditional 2D experience – not VR just for the sake of it. The end product should allow the user to do something faster, more easily or in a more engaging way.

Tech considerations

Initially, we thought about using Unity, a mature platform with lots of built-in VR capabilities and cross-platform support from (almost) one code base. It’s possible to code in C# or UnityScript (based on JavaScript), skills we have in-house, but it’s not a tool we have extensive experience of working with, and it’s geared towards richer experiences than we had the resources to invest in.

It took only five minutes looking at A-Frame to realise we had found our solution. A-Frame is built on top of three.js, but instead of coding every object by hand it has lots of primitive shapes built in. You don’t even need to create a scene, camera or renderer, as these are set up automatically. It’s built around the traditional building blocks of a web page – DOM elements – extended with JavaScript, which makes it easier to keep the code and the layout separate. It incorporates an entity-component-system architecture that allows for easy extensibility, and it has a great community of developers actively working together to build a wealth of ever-expanding components and tools.
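
To give a flavour of how little setup A-Frame needs, here is a minimal sketch of a complete scene – the script version, shapes and colours are illustrative, not the exact ones we used:

<html>
  <head>
    <script src="https://aframe.io/releases/0.5.0/aframe.min.js"></script>
  </head>
  <body>
    <!-- a-scene creates the scene, camera and renderer automatically -->
    <a-scene>
      <!-- primitive shapes are plain DOM elements -->
      <a-box position="-1 0.5 -3" color="#4CC3D9"></a-box>
      <a-sphere position="0 1.25 -5" radius="1.25" color="#EF2D5E"></a-sphere>
      <a-sky color="#222222"></a-sky>
    </a-scene>
  </body>
</html>

Because everything above is standard markup, the layout stays in the DOM while behaviour lives in JavaScript components – exactly the separation we were after.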

Design research

The first thing that needed to be done was some fairly extensive research into the topic - and thankfully, there’s plenty of material out there for the newbie to get started with. As we discussed in a recent Tinnovation session on design trends, aesthetic standards for VR are still being figured out, but there are a few established rules to follow. Around the same time that Cardboard was launched, Google released a handy set of guidelines covering the most basic principles. The key takeaways: always maintain head tracking, keep the user at a constant velocity when they’re moving through the app, avoid sudden brightness changes, and anchor the user to their environment.
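
To make the constant-velocity rule concrete, here is a hedged sketch in A-Frame markup (the rig ID and values are illustrative): a linear easing on a position animation moves the camera at a steady speed, where an ease-in/out curve would accelerate the user and risk discomfort.

<a-entity id="camera-rig" position="0 0 0">
  <!-- easing="linear" means constant velocity; easing curves would accelerate the user -->
  <a-animation attribute="position" to="0 0 -10" dur="4000" easing="linear"></a-animation>
  <a-camera></a-camera>
</a-entity>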

UX

Once I had a better understanding of what I was dealing with, it was time to start designing. But before we could leap headfirst into the design world of tomorrow, we had to think about the basic UX of the thing. Who would be using it? What did we want our main features to be? What would be the user’s primary journey? It turns out that even when designing for cutting-edge tech, it’s helpful to start with something familiar - good old pen and paper. I drew up some user flows and some initial layout sketches, including an ideal interface based on familiar objects like vinyl records and their sleeves, which would be simplified down the line.

Eventually, we came up with a wireframe of sorts for our experience - though we could only achieve so much of our vision with pen and paper. There was little point in creating an extensive mockup in Photoshop or Illustrator in the limited timeframe we had, especially when we weren’t sure what would work style-wise in the VR environment - so it was time to strap on the headsets and start testing.

Trial and error…and error…

It was through a pretty long process of trial and error that we figured out what worked in terms of the general look and feel of our experience. We kept in mind all the basics from our research: users don’t like spaces that are too bright, so we kept the colour palette dark. Floating text was a no, so we made sure to align any copy with objects in the scene. We also wanted to make use of the depth of the VR space while keeping objects further out visible, which was a tricky problem to solve.
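
Anchoring copy to an object, for instance, is just a matter of nesting elements in A-Frame - a rough sketch with placeholder values:

<!-- the label is a child of the panel, so it stays anchored as the panel moves -->
<a-entity position="0 1.6 -2">
  <a-plane width="1.2" height="1.2" color="#111111"></a-plane>
  <a-text value="Artist name" align="center" color="#CCCCCC" position="0 -0.75 0.01"></a-text>
</a-entity>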

There were some limitations with A-Frame - some animated transitions had to be left out so that the experience could run smoothly. We also faced a few challenges in getting the audio to play in a way that made sense - we didn’t want the user to be turning their head and triggering sounds every second - so we decided to have the user click to play audio, avoiding any sound clashes. We added a simple media player UI too, allowing the user to skip between tracks, pause if they wanted to, and generate a playlist they could listen to in Spotify later on.
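
In A-Frame terms, click-to-play is a one-liner on the sound component. The sketch below uses a gaze cursor and a placeholder audio file rather than our actual Spotify preview wiring:

<a-scene>
  <!-- the cursor lets the user 'click' by gazing at an element -->
  <a-camera>
    <a-cursor></a-cursor>
  </a-camera>
  <!-- on: click means the track only plays when the user deliberately selects it -->
  <a-sphere id="artist-head" position="0 1.5 -3" radius="0.5"
            sound="src: url(preview.mp3); on: click"></a-sphere>
</a-scene>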

We were close to achieving the vision set out in our sketches, but it needed something more than just floating heads in a black space. We needed to make the environment a little more ‘real’ - adding a simple horizon with a recognisable landscape made a huge difference. A sky, a ground and a subtle gradient gave the space some character. The user was no longer staring into an empty black void and was instead in an environment that felt at least a little familiar. Spotify’s own collection of artist images completed the interface.
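
The environment itself only took a couple of extra elements - a hedged sketch, with illustrative colours:

<!-- a dark sky and a ground plane turn the void into a place -->
<a-sky color="#0b0b1a"></a-sky>
<a-plane rotation="-90 0 0" width="100" height="100" color="#16162a"></a-plane>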

Back to development

So we had our layout and our interface. Audio was playing, playlist saving was functional, and it all worked rather well in a desktop browser. But this was a VR project, and whilst it ran fine on powerful desktop hardware, we always wanted this to be a mobile experience too.

The complications of the bleeding edge

The amazing team working on A-Frame are exceedingly active, and each new version brings performance enhancements across the board, but browsers are updated less frequently. We were developing with beta versions of browsers, and the WebVR spec (which allows integration with VR headsets, plus hardware acceleration) had yet to be implemented in stable releases. As such, performance was well below where we wanted it to be.
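
At the time, checking for WebVR support meant feature-detecting the (since superseded) navigator.getVRDisplays API - a hedged sketch of the kind of check involved:

<script>
  if (navigator.getVRDisplays) {
    // WebVR 1.1 is available: headsets and hardware acceleration can be used
    navigator.getVRDisplays().then(function (displays) {
      console.log('VR displays found:', displays.length);
    });
  } else {
    // fall back to polyfilled, magic-window style rendering
    console.log('No native WebVR support in this browser');
  }
</script>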

And then, in the first week of February, Chrome 56 for Android came out of beta. The impact was huge. Performance increased dramatically. The lag was greatly reduced, the image was far less pixelated, and everything just worked.

So how did we do?

We are delighted with where we have got to: a stable build, running on publicly accessible browsers, which anyone can use. There are still issues to overcome, but we have learned so much.

But does it meet our original goal - to produce something in VR which works better than its 2D counterpart?

We believe it does. 

We did have to drop some features. We wanted to enable voice control, but due to a “feature” of Chrome it would always have to be preceded by a button press, as it isn’t possible to have the microphone constantly listening for commands. But overall we achieved pretty much everything we set out to do.
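
With the Web Speech API, that button-press workaround looks roughly like the sketch below - the #voice-button element is hypothetical, but starting recognition inside a user-initiated event is what satisfies Chrome’s gesture requirement:

<script>
  var SpeechRecognition = window.SpeechRecognition || window.webkitSpeechRecognition;
  var button = document.querySelector('#voice-button'); // hypothetical trigger element
  button.addEventListener('click', function () {
    var recognition = new SpeechRecognition();
    recognition.onresult = function (event) {
      console.log('Heard:', event.results[0][0].transcript);
    };
    // starting inside a click handler satisfies the user-gesture requirement
    recognition.start();
  });
</script>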

Look out for more developments in the future - but for now, please enjoy Band Explorer VR.
