No designers were hurt in the making of this book.

In collaboration with:

THIS EBOOK UNCOVERS TASTEBUD-TINGLING INTERACTIVE ILLUSTRATIONS MADE WITH WEBGL SHADERS, ARTWORK-MAKING WEB AUDIO APIS, HIGHLY CONVERTING ECOMMERCE LOOKBOOKS, GLITCHY WEBGL BUBBLES, UI AND THREE.JS SCENES RENDERED AS PIXI.JS TEXTURES IN THE CANVAS, AND GLOBALLY CONNECTED MULTI-DEVICE EXPERIENCES.

We have many faces. We are the recognition and prestige given to your website for your hard work and creativity, we are the expert jury that scores your projects, we are the blog featuring the latest on design and development, we are the inspiring conferences uniting the best of the digital industry in iconic cities all over the world.

We are Awwwards. Never stop evolving.

The following case studies are true stories. They reveal the secrets and steps taken by top digital agencies to create some of the most unique websites of the year. Be warned: what you are about to witness may severely inspire you.

Protest Sportswear by Build in Amsterdam (ECOMMERCE)

Paper Planes by Active Theory (DEVELOPER)

Cavalier: Conqueror of Excellence by Your Majesty (EXPERIMENTAL)

KIKK Festival 2016 by Dogstudio (USER’S CHOICE)

Falter Inferno by Wild (SITE OF THE YEAR)

Sound in Color by Resn (AGENCY)

DIGITAL THINKERS CONF. LOS ANGELES, JUNE 1-2

Join top tech and design agencies to discover the future of the web, make inspiring connections and immerse yourself in the best talent of the digital design community.

E-commerce Site of the Year 2016

Protest Sportswear

Build in Amsterdam is an award-winning design and development shop.

Build in Amsterdam

As our name implies, we love to build things. Simple things. Beautiful things. Crazy things. Intuitive things. Things that don’t exist yet. Things that make even the most cynical, seen-it-before people say “hey, that’s pretty cool”.

We make things 100% tailored to the project at hand. We dig deep to build things that are actually useful instead of things that just lie there and look pretty. We build these things because we love to build.

Build in Amsterdam was founded by Tim Weers and myself, two advertising veterans on a mission to close the gap between branding & eCommerce. We work in small teams of passionate builders — designers, developers, writers and strategists — to create award-winning work for both local and international brands.

At their annual award ceremony, Awwwards rewards the talent and effort of the best web developers, designers and agencies in the world. In this year’s edition, in early February, Build in Amsterdam won the prestigious award for eCommerce Site of the Year for Protest Sportswear. Build in Amsterdam’s Creative Director Daan Klaver explains the ins and outs of this case.

Introduction

Sportswear brand Protest asked us to create a site that would bridge the gap between story and store. They wanted to combine an immersive brand experience with an incredible eCommerce platform.

WooCommerce and WordPress

We prefer working with WooCommerce and WordPress because they enable the combination of storytelling and eCommerce in one user-friendly CMS. Furthermore, it’s an open-source platform, giving our clients the freedom to choose another agency down the line. We want our clients to stay with us because they want to, not because they have to.

An example of a product detail page

Concept: Shoppable Lookbook

Every aspect of the website was built for conversion. Therefore we turned the entire website into one big shoppable lookbook. Whenever you see a campaign image, that image is clickable. The products in that image then appear and can be placed in your shopping bag immediately. This innovative combination of branding and sales creates a seamless shopping experience in a highly converting eCommerce platform.

Design

Protest is an ‘affordable sportswear brand’. This implies that the design should be high quality, but accessible.

We wanted the design to stand out from the competition, so where most fashion brands choose black and white, we went for color. You will find no black on the website, not even in text. The typography is inspired by the blue ballpoint ink that fills countless pages of travel journals, diaries and notebooks around the world. The simple aesthetic of handwritten notes reverberates in the blue texts and the highlighted captions (yellow marker) on the website. The key elements of the website are in modern, soft-toned colors, and for every product we used a unique background color, extracted from the image next to it.

Social Wall

To further this seamless experience, we created a Social Wall where visitors can see and shop the latest products straight from the Instagram overview.

Team Riders

Protest has a big group of top athletes they call their team riders.

On the “Team page” you find their favourite items, music, and action pics and clips. Once again, the action pics and videos are clickable and the products in them can be purchased directly.

Animation

To add to the seamless user experience, we integrated animation that is not only beautiful and intuitive, but at the same time very functional. On an eCommerce platform of this scale the use of animation is an absolute rarity.

Animation throughout the entire platform

Mix and Match Tool

Choose a product and swipe your way to the perfect matching items, using our intuitive and fun-to-use tool. You can even mix and match on colours. And to make it more fun we added a Shuffle function.

Choose your combination and shop it straight away

Mobile

Mobile is key for a well-converting eCommerce platform.

So for our mobile design we stuck to our shoppable lookbook concept, but we rethought every interactive element and focused on fast functionalities to make it feel like an app. The platform is fully responsive, for all phones and tablets.

The Results

Since the launch of their new platform, Protest has increased their online turnover by 150%.

And that is what satisfies us the most about this project. Winning awards is absolutely fantastic, but improving the business success of our clients is the primary goal in every project we start.

Developer Award of the Year 2016

Paper Planes

Active Theory

is a creative digital production studio based in Venice, California. They focus on making polished and innovative digital experiences using web technology.

Paper Planes started as a simple thought: “What if you could throw a paper plane from one screen to another?” After gradual work and brainstorming, we shared the idea with our friends at Droga5, who helped us bring it to the biggest possible stage: Google I/O 2016.

The heart of our concept was to bring people together from all over the world using the power of the web - an instant connection to one another by a simple visit to a website. Modern web technology, specifically JavaScript and WebGL, powered the experience on every screen. From the 50-foot screen on stage at I/O to the thousands of mobile devices in the crowd, one codebase was written, which drastically reduced development time, allowed for more iteration, and increased the time available for polish and animation.

Fans simply visited a URL on their mobile device and were prompted to create their own plane by adding a stamp that is pre-filled with their location.

Once a plane is created on mobile, a simple throwing gesture launches the plane into the virtual world. During the pre-Keynote, attendees would immediately see their planes fly onto the 50-foot screen on stage. Users at home or at an I/O viewing party would see their planes fly onto a desktop screen browsing the same URL, with no syncing required.

Launched publicly on Peace Day 2016, Paper Planes is available on the web at paperplanes.world and can also be downloaded as an Android app on Google Play.

Introduction

Paper Planes was featured at Google I/O 2016 on May 18th as a pre-Keynote event, bringing together attendees and outside viewers in the 30 minutes leading up to Sundar Pichai taking the stage.

Later, users could come back to the mobile site and see where their planes were caught around the world. Each stamp on the plane read like a passport, and a 3D earth highlighted the flight path and distance travelled.

Besides making their own planes, users could gesture their phone like a net to catch a plane that had been thrown from elsewhere, and pinch to open it, revealing where it had visited.

Look & Feel

The aesthetic of Paper Planes aimed for a feeling of child-like playfulness and relaxation. We used a blend of pink, blue and teal pastel colors to create a sky gradient that constantly changed during the experience.

The earth was shaded with similar tones, but we used specular lighting and a teal rim glow to help distinguish it from the background. The earth and sky were a constantly mixing and changing pair that needed to look good regardless of the earth’s rotation or sky hue.

Taking advantage of the strong graphics card, we polished the big-screen rendering with a few extra details, such as gentle waves on the ocean, created by moving each vertex along the face of the Earth’s sphere using polar coordinate math.

Additionally, reflections on the ocean were added by rendering the entire scene from the center of the earth to a CubeCamera and passing the resulting cube map into the ocean’s shader, generating real-time reflections.

Flocking

As planes were thrown into our world, they began to flock together and fly around the Earth. In order to animate a large number of planes at once, a technique called Instanced Buffer Geometries was used, which allowed us to render thousands of planes with a single GPU draw command, manipulating each plane’s position on the GPU in shaders.

Flocking is a complicated mathematical process that is tricky to performantly implement in code. Two techniques were tested: first, computing the entire flocking simulation in a number of shaders running on the GPU; and second, creating a simulation on the CPU and sharing the load between several WebWorker threads.

We were surprised to find that using threads was almost equal in performance to the GPGPU method. Since writing simulation code is much simpler in JavaScript than inside shaders, we went with the second option. We implemented the exact same simulation on mobile by simply reducing the number of threads and the number of planes to be simulated.
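As a rough sketch of the instancing idea (not Active Theory's actual code; the counts, attribute names and animation are invented), one shared plane geometry plus a per-instance offset attribute renders the whole flock in a single draw call, with positions animated in the vertex shader:

```javascript
// Minimal instancing sketch (assumes a recent Three.js; older versions use
// addAttribute instead of setAttribute). One geometry, one mesh, one draw call.
import * as THREE from 'three';

const COUNT = 5000; // hypothetical flock size

const base = new THREE.PlaneGeometry(1, 0.5);
const geometry = new THREE.InstancedBufferGeometry();
geometry.index = base.index;
geometry.attributes.position = base.attributes.position;
geometry.attributes.uv = base.attributes.uv;

// One offset per plane, uploaded to the GPU once and reused every frame.
const offsets = new Float32Array(COUNT * 3);
for (let i = 0; i < COUNT; i++) {
  offsets.set(
    [(Math.random() - 0.5) * 100, (Math.random() - 0.5) * 100, (Math.random() - 0.5) * 100],
    i * 3
  );
}
geometry.setAttribute('aOffset', new THREE.InstancedBufferAttribute(offsets, 3));

const material = new THREE.ShaderMaterial({
  uniforms: { uTime: { value: 0 } },
  vertexShader: /* glsl */ `
    attribute vec3 aOffset;
    uniform float uTime;
    void main() {
      // Animate on the GPU: bob each instance, using its offset as a phase.
      vec3 p = position + aOffset;
      p.y += sin(uTime + aOffset.x) * 0.5;
      gl_Position = projectionMatrix * modelViewMatrix * vec4(p, 1.0);
    }
  `,
  fragmentShader: /* glsl */ `
    void main() { gl_FragColor = vec4(0.9, 0.95, 1.0, 1.0); }
  `,
  side: THREE.DoubleSide
});

const flock = new THREE.Mesh(geometry, material); // COUNT planes, one draw call
```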

WebGL Text

Locations where planes originated cycled through during the event, creating an impactful connection for people around the world.

We ran into a bug on Chrome for Windows where, no matter what we tried, compositing CSS text animations onto WebGL incurred a severe performance hit, dropping our frame rate below 60 frames per second.

The solution involved generating a sprite sheet image with all of the characters on it, along with a text file containing the font metrics corresponding to each letter on the sprite sheet. Then, in the 3D scene, a quad for each letter was created with the corresponding section of the sprite drawn onto it. We were then able to animate each letter’s position and opacity as if it were any other Three.js object.

An added benefit was that the text was entirely in the 3D environment, allowing planes to occlude it as they flew into the experience!
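A hedged sketch of that glyph-quad idea (the metrics format and function are invented for illustration): each character becomes a small textured quad whose UVs point at that glyph's region of the sprite sheet.

```javascript
// Sketch: build animatable text meshes from a sprite sheet plus a metrics map,
// assumed to look like { "A": { x: 0, y: 0, w: 20, h: 28 }, ... } in pixels.
import * as THREE from 'three';

function createText(string, texture, metrics) {
  const group = new THREE.Group();
  const { width: sheetW, height: sheetH } = texture.image;
  let cursor = 0;

  for (const char of string) {
    const m = metrics[char];
    if (!m) continue;

    const geometry = new THREE.PlaneGeometry(m.w, m.h);
    // Remap the quad's UVs to this glyph's rectangle on the sheet.
    const uv = geometry.attributes.uv;
    for (let i = 0; i < uv.count; i++) {
      uv.setXY(
        i,
        (m.x + uv.getX(i) * m.w) / sheetW,
        1 - (m.y + (1 - uv.getY(i)) * m.h) / sheetH
      );
    }

    const letter = new THREE.Mesh(
      geometry,
      new THREE.MeshBasicMaterial({ map: texture, transparent: true })
    );
    letter.position.x = cursor + m.w / 2;
    cursor += m.w; // naive advance; real metrics would include kerning
    group.add(letter); // each letter now animates like any other 3D object
  }
  return group;
}
```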

Optimizing for many devices

Paper Planes demonstrates how a single build using modern web technology can reach a wide range of devices - from a 50-foot screen at I/O to many types of mobile phones, desktops and laptops. As new features were added, adjustments for different devices were needed and implemented after beta.

A great example is the number of planes in the flock. While on the I/O screen thousands of planes could be distributed across 8 WebWorker threads, on mobile devices the exact same system was used and scaled down to a few hundred planes across 2 threads.

On desktop, we took into account the type of graphics card a visitor has and measured how long the CPU takes to calculate some basic trigonometry in a loop. We wrote code with these factors in mind and decided the right number of planes and threads to render the flock and get smooth performance.

The Challenge

7,000 attendees and 530 organized viewing parties in over 100 countries around the world, including over 1 million people from China, were expected to tune in for the event. We needed to ensure that each device and screen would connect reliably and each message would send instantly.

For this global experience to work and perform at scale, we set up a network of WebSocket servers to manage connections, route messages and handle the magic behind the scenes.

What is a WebSocket?

A WebSocket is a web technology that allows for a persistent “socket” connection between a client and a server. Once established, both sides can send and receive messages directly over the network with very low latency and a message footprint of just a few bytes.
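A minimal Socket.IO example of that two-way connection (a generic sketch with hypothetical event names, not the production code): once connected, either side can push messages at any time with no request/response round trip.

```javascript
// server.js (Node.js)
const io = require('socket.io')(8080);
io.on('connection', (socket) => {
  // Push a message to this client whenever something happens.
  socket.emit('plane:incoming', { origin: 'Amsterdam, NL' });
  // Receive messages from the client just as directly.
  socket.on('plane:throw', (plane) => console.log('caught', plane));
});

// client.js (in the browser)
const socket = io('http://localhost:8080');
socket.on('plane:incoming', (plane) => { /* add the plane to the 3D scene */ });
socket.emit('plane:throw', { origin: 'Sydney, AU' });
```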

WebSocket Network

Underlying our installation for I/O was a network of servers on Google Cloud Platform. We used built-in geocoding headers to get approximate geographic locations for stamps, and Socket.IO to connect all devices over WebSockets, funneling messages to the graphics machine at I/O that rendered the experience.

I/O attendees were connected directly to the main server that fed into the graphics machine, while those outside I/O were connected to the server nearest them, which relayed messages to the main server as well as to any desktop computers viewing the experience in that region.

Funnel Server

The core of the network was the funnel server. All messages were routed through the funnel before being displayed on the 50-foot screen on stage. Messages from attendees at I/O, both paint splats and paper planes, were routed directly to the funnel. To cater to the global audience, additional relay servers handled connections from outside of the event location in Mountain View. Global participants were routed to the nearest relay server based on their location. These relay servers would then dispatch collections of messages to the funnel server.

[Diagram: relay servers for the US West Coast, US East Coast and the rest of the world (us-central, us-east, Europe, Asia), each with 16 CPU cores running 16 socket servers, feeding a 32-core funnel server connected to a primary and a backup GPU desktop driving the Google I/O stage display.]

Hardware

This server infrastructure was set up using Compute Engine on the Google Cloud Platform. The main funnel server comprised 32 CPU cores, each serving a dedicated Socket.IO server on its own thread and port. Relay servers, each with 16 CPU cores, were set up in Google Cloud Platform regions around the world to create sub-networks in us-central, us-east, asia-east, and europe-west.

Software

The WebSocket servers were Linux VMs running multiple Node.js processes and Socket.IO servers on individual ports. The number of threads scaled based on the number of CPU cores. Just like the client-side codebase, the servers also ran from a single codebase, with custom modules and configurations set depending on each server’s function and region. Code was deployed across all servers in one go, allowing for rapid development and testing.

As connections were established, the server would register the client, allowing each end to keep track of the other. In the case of an intermittent network failure, broken connections would reconnect as soon as the network was restored. Each process could safely restart in just a few seconds, and all connections from mobile devices and screens to the server would be automatically re-established without the user noticing.

Load Balancing

Real-time stats, including the number of concurrent connections and average response times, were maintained for each WebSocket server. Tracking this data allowed us to continually route new connections to the server with the greatest number of available connections.

From our load tests, we recognized that each relay server was capable of safely handling 16,000 concurrent socket connections without any performance drop. Triggers were in place to switch over to additional relay servers instantly if any server reached the maximum number of concurrent connections.
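One way the per-core layout described under Software could be wired up (an assumed sketch using Node's cluster module; the real server code is not public): one Socket.IO process per CPU core, each bound to its own port, respawned on crash so clients can reconnect within seconds.

```javascript
const cluster = require('cluster');
const os = require('os');

const BASE_PORT = 4000; // hypothetical base port

if (cluster.isMaster) {
  // Fork one Socket.IO worker per CPU core, each with its own port.
  os.cpus().forEach((_, i) => forkWorker(BASE_PORT + i));
} else {
  const port = Number(process.env.SOCKET_PORT);
  const io = require('socket.io')(port);
  io.on('connection', (socket) => {
    // register the client, route messages upstream, etc.
  });
  console.log(`Socket.IO server listening on :${port}`);
}

function forkWorker(port) {
  const worker = cluster.fork({ SOCKET_PORT: port });
  // If a process dies, respawn it on the same port; clients reconnect.
  worker.on('exit', () => forkWorker(port));
}
```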

Outside of these servers, Google App Engine and Datastore were used to save and retrieve plane and stamp data. Google Cloud Storage was used to store all static assets.

Geolocation

Google App Engine provides approximate geolocation data with each request header as a built-in service. This data, along with IP detection for known Google IP addresses, was used to determine whether a user was at the event or, if not, which city they were connecting from.

With this location data, planes were caught, stamped and thrown across the globe, waiting to be caught again. The city and country location was stamped on the plane along the way. In addition, there was no need to fiddle around with inputting codes to establish the mobile-to-desktop connection. By simply loading the experience on a phone or desktop, the user connected right away to the correct sub-network based on their location.

Conclusion

With users participating from over 180 countries, the reception has been humbling. Extending an age-old pastime like paper plane throwing to connect people from all over the world through a simple gesture was an amazing feeling. Paper Planes shows the power of the web through a simple experience that connected strangers from all around the world.

Experimental Site of the Year 2016

Cavalier: Conqueror of Excellence

Your Majesty Co.

is a design and technology firm based in New York and Amsterdam that powers digital product and brand experiences for Netflix, adidas, Samsung, Spotify, BMW, Coca-Cola, Universal Music Group, Bentley, Absolut, American Express and Red Bull.

Your Majesty’s culture is based on three principles: Chivalry, Courtesy and Excellence. These aren’t just words for our About Page. They’re values that guide everything we do. The Cavalier Project is an internal initiative we have created in order to develop interactive experiences that combine our passion for research and development — staying on top of emerging technology — with our core principles.

Posture & Balance

To launch the project, we developed Posture & Balance. It’s an interactive game that uses new technologies to explore our value of excellence.

The Game

Posture & Balance conveys a sense of knighthood in modern days — the pursuit of excellence. The trailer, poster and intro of the game are all very dark and only show a light in the far distance. Upon starting to play, the scene slowly brightens. The light becomes a goal to reach, just as Your Majesty constantly reaches for excellence.

When finishing the game, after galloping for 40 seconds, the light becomes brighter to indicate that the goal has been reached — making it to the end and getting a step closer to excellence.

Game Design

The design and animation showcase Your Majesty’s brand identity in motion. It’s influenced by heraldry and chivalry, updated and repurposed for the modern digital world. The game design is polished and well crafted, set in a dark, mystical and scary forest: a forest of obstacles that need to be overcome to reach the light of the castle, just like our creative process of working through projects.

The knight and his horse, dressed in heavy metal armor, charge through the dirt, dodging trees and branches — demanding the best from yourself. Not just challenging the forest and your goal, but also your inner thoughts and fears.

Process

We started with the story we wanted to tell. Then we used the project as a way to learn new technologies and improve the capabilities of our agency. Our Art Director taught himself Cinema 4D to create style frames as a base for the WebGL development.

The complete game was inspired by medieval times, with a modern twist. This concept was applied to every aspect of the game, from the game scenes, to the UI, animations, sound design and music. To promote the game we created a matte painting as a poster and a fully 3D animated trailer. All of these elements were made in-house.

Animation

The trailer has a similar dark atmosphere to the game’s start screen. Preparing for battle, creating tension and mystery by using close crops of the horse. Not showing too much, so there’s something left to imagine.

For the logo animations we used a mix of elegant and technical styles, tightly tied to the Your Majesty brand. These animations were created in Adobe After Effects and then exported as SVG animations using the bodymovin plugin to keep file sizes down, browser performance up and support for both regular and retina screens.
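Playing one of those exports with bodymovin comes down to a single call (a minimal sketch; the container id and JSON path are invented for illustration):

```javascript
// Load and play an After Effects animation exported as JSON by bodymovin.
bodymovin.loadAnimation({
  container: document.getElementById('logo'), // any empty DOM node
  renderer: 'svg',       // vector output: crisp on regular and retina screens
  loop: false,
  autoplay: true,
  path: 'animations/logo.json' // hypothetical path to the exported JSON
});
```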

The main buttons and page transitions use smoke-like effects to tie in with the mystery of the scene, and were another technical challenge in their own right.

Game World

The game world consists of two large procedurally generated tiles. Whenever the player gets to the beginning of a new tile, the previous tile is recycled, reconfigured, and placed beyond the tile the player is now on. This process continues indefinitely. Each tile has a grid layout with an inaccessible outer area and a central path where nuggets and rising pillars are placed. In early prototypes the tiles were populated randomly. However, this resulted in unpredictable and often undesirable patterns.

Later in the process we began curating the chaos by introducing a set of configurable rules. This gave us a number of key values we could use to control the difficulty and pacing of the gameplay.

We wanted to prevent pillars from clumping up too much, and to make sure there was always a clear path through the tile. To achieve this, pillars were spread out row by row using randomized steps.

We also wanted to make sure all the nuggets could be collected. Their positioning is governed by a jagged line, never placing more than one nugget on a row. After play testing we settled on a configuration where nuggets are never more than two lanes apart, which worked well with the sense of speed we wanted to achieve.

The grid also removed the need to perform physical collision detection. Instead, pillar and nugget positions are stored in a two-dimensional array. As the player moves through the tile, their position is converted into an index into this array. Collision detection is done by checking if the player index overlaps with any stored object. This saves a lot of processing compared to doing bounding box intersection tests, and is still accurate (we promise).

Three.js & Shader Animation

Posture & Balance is built on top of Three.js. This is the go-to framework for many WebGL projects, as it has a robust feature set and a great community. It also provides nice hooks if you want to get closer to the hardware without having to manage all the nuts and bolts of the WebGL API. We used these to create shader-powered animations and post-processing effects, which played a big part in the overall aesthetic of the game.

The pillars, nugget collisions, and the stretchy speed particles are all primarily animated on the GPU. They are set up as Buffer Geometries with custom Buffer Attributes. These attributes are used alongside uniforms in extended Phong and Basic materials, where the animation state is calculated inside the vertex shader. This approach is similar to morph targets, but instead of being linear, the interpolation is defined by arbitrary logic.

http://codepen.io/toreholmberg/pen/akKEEB

While harder to set up, this approach has huge performance benefits. The standard way of animating is updating the transformation matrix of a mesh and sending its 16 floats to the GPU. As the number of objects increases, this quickly becomes a bottleneck, and rendering slows down. With our approach, all the data needed to calculate an animation state is already in graphics memory. The animation state is then controlled by a handful of uniform values, which enables the GPU to crunch numbers undisturbed.

http://codepen.io/toreholmberg/pen/JKkrrW

After launching Posture & Balance we continued working on this animation system, which is now available as a Three.js extension on GitHub (https://github.com/zadvorsky/three.bas).
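A toy version of that idea (using a plain ShaderMaterial rather than the extended Phong/Basic materials described above; attribute and uniform names are invented): the per-vertex animation data lives in a buffer attribute, and a single uniform drives every vertex on the GPU.

```javascript
import * as THREE from 'three';

const geometry = new THREE.BoxGeometry(1, 1, 1);

// Per-vertex delay, uploaded once; interpolation logic runs in the shader.
const count = geometry.attributes.position.count;
const delays = new Float32Array(count);
for (let i = 0; i < count; i++) delays[i] = Math.random();
geometry.setAttribute('aDelay', new THREE.BufferAttribute(delays, 1));

const material = new THREE.ShaderMaterial({
  uniforms: { uProgress: { value: 0 } }, // one float controls the animation
  vertexShader: /* glsl */ `
    attribute float aDelay;
    uniform float uProgress;
    void main() {
      // Arbitrary (non-linear) interpolation, unlike morph targets.
      float t = clamp(uProgress - aDelay, 0.0, 1.0);
      vec3 p = position * (1.0 + t * t * 2.0);
      gl_Position = projectionMatrix * modelViewMatrix * vec4(p, 1.0);
    }
  `,
  fragmentShader: `void main() { gl_FragColor = vec4(1.0); }`
});

// Per frame: material.uniforms.uProgress.value += delta;
// no transformation matrices need to be re-sent to the GPU.
```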

Interface & Performance

One of the main goals of this project was experimentation. To this end we decided to use Pixi.js for the interface layer. We wanted to see how it would compare to traditional DOM rendering. This approach resulted in some benefits, some challenges, and a lot of learnings.

As we continued to add visual effects to the game, we started noticing performance issues. After some investigation, we were surprised to find out this was primarily caused by the interface layer. We then set out to identify and eliminate as many bottlenecks as possible.

The first thing we changed was our HTML display stack. Initially, Three and Pixi each had their own HTML canvas. The Pixi canvas was transparent and lay on top. During the optimization phase we decided to merge the two. Instead of being in the DOM, Three rendered into an off-screen canvas, which was used as a texture for a Pixi sprite.

The next part of the optimization phase revolved around reducing the draw count of Pixi. This is a property of the WebGLRenderer that tells you how many separate drawing operations are performed each frame. Getting this number as low as possible is key to a smooth frame rate, as it reduces the overhead from CPU-to-GPU communication.

The first thing we did was merge our textures into sprite sheets, one for regular and one for retina displays. This way WebGL textures do not need to be updated when rendering different objects. This helps Pixi to group objects into larger batches which can be drawn simultaneously.

Next we addressed text rendering. As it turns out, doing this through Pixi is comparatively slow. Each time a string is updated (like the score and time displays), the text is drawn to an off-screen canvas, which is then sent to the GPU as a texture. Excess traffic like this has a negative impact on performance. This could be addressed with Bitmap Fonts, which effectively turn text into images that can be added to a sprite sheet. However, this severely limits flexibility, as Bitmap Fonts scale very poorly. We opted to remove text from Pixi altogether, rendering it in the DOM instead.

The final major bottleneck was the Pixi graphics API, which we used to create some simple interface elements. The main culprit here was the gallop indicator, which had about 60 parts. With the graphics API, each piece demanded its own draw call. We solved this by adding differently colored squares to our sprite sheet. These were then scaled for the gallop indicator and some other interface elements.

In the end we managed to reduce the number of draw calls during gameplay to just 2: one for the interface layer, and one for the game world.
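The merged display stack behind those two draw calls could be wired up roughly like this (an assumed sketch using Pixi v4-era API, not Your Majesty's production code): Three renders the game into an off-screen canvas, which becomes the texture of a Pixi sprite drawn underneath the UI.

```javascript
import * as THREE from 'three';
import * as PIXI from 'pixi.js';

const threeCanvas = document.createElement('canvas'); // never added to the DOM
const threeRenderer = new THREE.WebGLRenderer({ canvas: threeCanvas });
const pixiRenderer = new PIXI.WebGLRenderer(window.innerWidth, window.innerHeight);
document.body.appendChild(pixiRenderer.view); // the only canvas on the page

const stage = new PIXI.Container();
const gameTexture = PIXI.Texture.fromCanvas(threeCanvas);
const gameSprite = new PIXI.Sprite(gameTexture);
stage.addChild(gameSprite);     // game world at the bottom
// stage.addChild(uiContainer); // interface layer on top

function render(scene, camera) {
  threeRenderer.render(scene, camera);  // draw the 3D world off-screen
  gameTexture.baseTexture.update();     // re-upload the canvas (v4 API)
  pixiRenderer.render(stage);           // composite world + UI together
  requestAnimationFrame(() => render(scene, camera));
}
```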

Debugging Tools

The Chrome debugging tools were invaluable in finding and fixing these performance pain points. We used the JavaScript CPU Profiler to record performance during gameplay. Then we examined the generated chart, looking for “fat” frames where processing took longer than allotted.

The graph that Chrome generates provides a detailed breakdown of what your application is doing each frame. It’s easy to visually identify blocks where frame rate issues arise, as these frames take more space. Then it’s a matter of digging in, figuring out where repeating bottlenecks occur, and trying to find ways to optimize.

Web Audio

Audio plays a big role in Posture & Balance. We used Howler.js to control the loading and playing of sounds. We also used two Web Audio features to further enhance the experience. The first was controlling the playback rate of the gallop sound effect to match the speed of the horse. The second was using a PannerNode to position sounds inside the game world. This causes the volume and stereo origin of some sounds to change as you move left and right.

These features were fairly straightforward to implement, and added a lot of depth to the experience. The best part is that this works out of the box on most modern browsers.
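Stripped down to raw Web Audio, those two touches look roughly like this (a generic sketch; in the game, loading and playback ran through Howler.js, and the function and values here are invented):

```javascript
const ctx = new AudioContext();

function playGallop(buffer) {
  const source = ctx.createBufferSource();
  source.buffer = buffer;
  source.loop = true;

  // Position the sound in the game world; moving it left/right shifts
  // its volume and stereo origin.
  const panner = ctx.createPanner();
  panner.setPosition(0, 0, 1);

  source.connect(panner);
  panner.connect(ctx.destination);
  source.start();

  return {
    // Match the gallop to the horse's speed (1 = normal rate).
    setSpeed: (speed) => { source.playbackRate.value = speed; },
    // Slide the sound sideways as the player strafes.
    setX: (x) => panner.setPosition(x, 0, 1)
  };
}
```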

Sound Design

The 3D sound design, which we made in collaboration with Studio Takt, is a big part of the full experience. In the beginning the sound and music go from very mysterious, dark and empty, to more intense, pushing you to keep riding. Once you reach your goal, the music becomes euphoric.

The sound design, just like the game design, contains a mix of modern and medieval influences. During gameplay the sound design provides feedback when certain actions occur, like picking up a gold nugget, hitting a tree, correctly speeding up by hitting the space bar, or even notifying the user that they are running on the left or right side of the game. We did this by balancing the sound effects and music on the left and right audio channels. This gives it a true 3D immersive experience.

To top things off, we hired a voice actor to introduce the concept of Cavalier and congratulate the user at the end with the score results. Based on the score results, the user will either be praised for their excellence or told to get back on their horse.

Crafting Posture & Balance was a fun and rewarding experience. There was tight collaboration between design and development, resulting in a distinct aesthetic and a highly polished overall experience.

Listen to the Cavalier music loops on SoundCloud: soundcloud.com/yourmajestyco/sets/cavalier

Challenge yourself at cavalierchallenge.com

User’s Choice Site of the Year 2016

KIKK Festival

Dogstudio is a design and technology firm with offices in Belgium and Chicago. We work with innovative brands including The Museum of Science and Industry of Chicago, The Kennedy Center of Washington, Dragone, and Quanta Magazine. Since 2011 we have been organising and designing our own festival, the KIKK Festival, which attracts over 10,000 people for two days of conferences, workshops, exhibitions, parties and more.

Building the website for the KIKK Festival is always an exciting but stress-generating time of the year for us. Not only is it a great opportunity to create something entirely new based on the current year’s theme for the festival, but there are also expectations to deliver an experience worthy of the escalating fame of the festival.

This year’s theme being interferences, we explored many directions before being seduced by the case of the soap bubble, a visually appealing yet really simple and visible representation of light interference. Based on that, we quickly created this year’s key visual, which was then used as a basis for thinking about all the outputs we needed to produce.

Having this bubble as the first thing you saw seemed logical, and making it slightly interactive through the use of WebGL sounded like the right thing to do. To ensure consistency throughout the project and to convey this sense of interference, we decided to add an extra layer of glitches, either randomly appearing over images (WebGL, here we go again) or through GIF-powered transitions.

Technical Approaches

For the KIKK Festival website, we use a dedicated server in partnership with our hosting partner cBlue, who provide a great SLA for our servers. With this infrastructure we can ensure high scalability during the “buzz” time of the website. For the server architecture we use a LEMP stack (Linux, Nginx, MariaDB and PHP), complemented by a Varnish server that brings a speedy caching system.

The back-end is a self-made CMS solution called Emulsion that provides all the basic CMS behaviors: content pages, navigation, emailing and media content, but also an easy way to develop new modules for specific uses. In the case of the KIKK website, we developed modules for events (conferences, workshops, market), speakers, partners, etc.

The front-end is based on our own boilerplate. For the stylesheets we use Sass with our homemade mixin and function library. The JavaScript is written in ES6 and transpiled with Babel. We use webpack as our module bundler. The most fun part of the front-end development was the creation of the bubble on the homepage and the glitches on the pictures. For that, we used Three.js and wrote some shaders. To refine the rendering of the bubble, we used a small prototype to play with parameters until we were happy with the result. Optimization was also a great challenge during this project.
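As a rough illustration of a glitch-style image shader (our own toy example, not Dogstudio's production shader; band count and thresholds are invented), horizontal slices of the texture are shifted whenever a time-based trigger fires:

```javascript
import * as THREE from 'three';

const glitchMaterial = new THREE.ShaderMaterial({
  uniforms: {
    uMap: { value: new THREE.TextureLoader().load('picture.jpg') }, // any image
    uTime: { value: 0 }
  },
  vertexShader: /* glsl */ `
    varying vec2 vUv;
    void main() {
      vUv = uv;
      gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
    }
  `,
  fragmentShader: /* glsl */ `
    uniform sampler2D uMap;
    uniform float uTime;
    varying vec2 vUv;

    float rand(float n) { return fract(sin(n) * 43758.5453); }

    void main() {
      vec2 uv = vUv;
      // Chop the image into horizontal bands and shift random bands sideways.
      float band = floor(uv.y * 20.0);
      float trigger = step(0.95, rand(band + floor(uTime * 8.0)));
      uv.x += trigger * (rand(band) - 0.5) * 0.1;
      gl_FragColor = texture2D(uMap, uv);
    }
  `
});
// Per frame: glitchMaterial.uniforms.uTime.value = clock.getElapsedTime();
```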

For development, we use Docker containers that provide the same stack we use in production. We store the source code on a GitLab server, which also provides the deploy mechanism with GitLab CI. With that structure we can push small adjustments or fixes to staging, then to production, very quickly.

In the end, and thanks to our team’s involvement throughout the whole project, the KIKK website was once again a fast-paced, high-expectations, low-satisfaction project for our team; but then again, considering it won September’s Site of the Month and Site of the Year 2016 (Users’ Choice) on Awwwards, it seems like we might have achieved something decent.

Site of the Year 2016

Falter Inferno

Wild

is an award-winning interactive agency based in Vienna. They focus on conception, design and development of innovative campaigns, websites and apps.

For Falter Inferno, everything started with the already completed 30-second TV spot. Our main challenge was to bring the amazing illustrations to life interactively. In addition, our goal was to create a really smooth experience on both desktop and mobile.

To do that we made a few very specific choices regarding the technologies we were going to use. We built a framework based on WebGL using Three.js that would allow us to build the various scenes out quickly.

We didn’t want to rely on JavaScript being performant enough to do all the real-time scene manipulations we needed, so we built custom shaders for almost everything you see moving throughout the experience, whether particles, transitions or noise overlays. After everything was in place, the project went through an extensive fine-tuning phase, and further performance improvements were made.

The folder structure laying out the frontend framework components for each chapter

Step 1 - Creating the Framework

We wanted to work as synchronously as possible between our design and development departments. In order to do so, we created a framework that would define the same structure for each scene in the experience. That way our designers would know exactly which format they had to supply the assets in and what types of effects we would be able to re-use throughout scenes. The framework still gave us enough flexibility to add further unique shaders/effects per scene if necessary.

It consisted of:

1. A sprite sheet for all graphical elements in the scene, generated via Texture Packer.
2. A JSON file that contained the information for placement, size and z-offset of each element in the scene.
3. An animate.js file describing all the animation loops and mouse/touch interactions.
4. A setup.js file taking care of all initial setup work.

A scene of Falter Inferno being assembled in the Three.js online editor

Step 2 - Assembling the Scenes

After we had created the framework we started to assemble the various scenes. We used the Three.js scene editor threejs.org/editor to get the work done efficiently. Having this visual tool not only allowed us to work faster, but it also gave the designers and developers the opportunity to sit side by side and fine tune placement and amount of parallax behaviour.

Step 3 - Adding Shaders

To make the scenes feel more alive we created custom WebGL shaders. Some of them were re-usable, such as film noise and particle shaders, and some of them were scene-specific. Shaders always consist of a vertex and a fragment shader.

Vertex shaders affect the geometry of a 3D object, whereas fragment shaders affect the texturing and lighting. You can pass values from JavaScript to your shader code, feeding in user input such as speed of movement and mouse/touch position. Because most of the performance-heavy code is calculated on the graphics card, taking only a few input values, the whole experience stays beautifully smooth - even on mobile.
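A bare-bones example of that vertex/fragment pair plus JavaScript-fed uniforms (a generic sketch, not Wild's production shader; the uniform names and effects are invented):

```javascript
import * as THREE from 'three';

const material = new THREE.ShaderMaterial({
  uniforms: {
    uSpeed: { value: 0 },                   // speed of movement, set from JS
    uMouse: { value: new THREE.Vector2() }  // mouse/touch position, set from JS
  },
  vertexShader: /* glsl */ `
    uniform vec2 uMouse;
    varying vec2 vUv;
    void main() {
      vUv = uv;
      // Vertex shader: bend the geometry slightly toward the pointer.
      vec3 p = position;
      p.xy += uMouse * 0.1;
      gl_Position = projectionMatrix * modelViewMatrix * vec4(p, 1.0);
    }
  `,
  fragmentShader: /* glsl */ `
    uniform float uSpeed;
    varying vec2 vUv;
    void main() {
      // Fragment shader: texturing/lighting; here, brightness from speed.
      gl_FragColor = vec4(vec3(vUv, 1.0) * (0.5 + uSpeed * 0.5), 1.0);
    }
  `
});

// The heavy lifting happens on the GPU; JavaScript only updates a few values.
window.addEventListener('mousemove', (e) => {
  material.uniforms.uMouse.value.set(
    (e.clientX / window.innerWidth) * 2 - 1,
    -(e.clientY / window.innerHeight) * 2 + 1
  );
});
```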

If you want a more in-depth introduction to writing custom shader code, you should definitely take a look at this article by Paul Lewis - it’s an oldie but a goodie :) aerotwist.com/tutorials/an-introduction-to-shaders-part-1

Step 4 - Tuning the Experience

We worked very closely between designers and developers to get the timing of animations and the feeling of interactions just right. We also went all out on decreasing the load time and improving performance. For example, we saved space in the sprite sheets by splitting up the frames of an animation into the PNG’s color channels.
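A sketch of how that channel-packing trick can be read back in a shader (our own reconstruction, not Wild's actual code): three grayscale animation frames live in the red, green and blue channels of one PNG, and the fragment shader picks the channel for the current frame.

```javascript
const frameFragmentShader = /* glsl */ `
  uniform sampler2D uSheet;   // sprite sheet with frames packed into channels
  uniform int uFrame;         // current frame index modulo 3
  varying vec2 vUv;

  void main() {
    vec4 texel = texture2D(uSheet, vUv);
    // Select the channel that holds this frame's grayscale image.
    float value = uFrame == 0 ? texel.r : (uFrame == 1 ? texel.g : texel.b);
    gl_FragColor = vec4(vec3(value), value); // white artwork, alpha from value
  }
`;
```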

Summary

Falter Inferno was one of our longer-running projects, taking roughly 2.5 months from kick-off to completion. We spent a lot of time tweaking animations and making the performance great on mobile. We also had some amazing illustrations to work with. In the end, we think the effort paid off, and we hope you like the experience :)

Technologies

FRONTEND: WebGL / Three.js / JavaScript
SERVER: Amazon S3

A sprite sheet for Falter Inferno, with some animations baked into separate color channels to save space

Agency of the Year 2016

Sound in Color

Resn

is a creative digital agency. We infect minds with gooey interactive experiences and saturate the internet with digital miracles.

Discovery

The challenge for this HP Spectre campaign was to create a unique piece of artwork from each unique sound source. An artwork that did not look computer-generated; something you would deem wall-worthy.

Breaking this challenge down at its highest level, we identified two very important steps to tackle:

• SOUND - It needed an easy and intuitive art creation flow, from which a user could input some sound and...
• COLOUR - Create an artwork that not only represented the message but looked like a beautiful, unique painting.

Artwork Creation

We looked at art. Lots of it. In most of the masters we observed that human touch, that sense of organic quality. In many you could trace the brushstrokes, see the flourishes and sense the rhythm with which each was carefully placed.

How could we combine these nuances with all the complexities observed in audio? When recorded, a sound is represented by a succession of frequencies, the peaks and troughs of a waveform. What if we mapped each frequency to a color?

That method sounded ideal, but it would have only allowed us to create an artwork by looking at the sound sample piece by piece... each individual peak and trough represented. That wasn’t good enough for us... we wanted to literally picture the whole sound sample as an entire piece too.

Stuck on that problem we went back to square one.

Back to the painters. How does a painter make/manipulate their colors?

From the original tube of paint to the final color applied to the canvas, a painter will use a palette to adjust the color to the right tone. We replicated that process. The color tubes would be our input color frequencies; we then combine them in order to represent the sound as a whole.

We ended up with a great diversity of colors, representing not only the sound but the whole campaign.

The final step was again created by observing the painting process. Using brushes, the painter will move the colors and mix them on the canvas.

In that manner, we wanted the sound to influence the way the brush would be used. We quickly realised that the composition, direction and balance of the brush are key to a good aesthetic. We tried a few options.

Step 1: Obtain the artwork colors from the sound frequencies of the recording.
Step 2: Create a canvas by blurring and scaling the colors.
Step 3: Use the recording to create brush strokes and texture.

Technical Approach

We came up with a two-step process to create the final artworks.

Step 1 (during recording) - Painting the Background

By analysing the recorded audio, colors and brushes were assigned to different frequency brackets. If a particular frequency was detected, those corresponding colors were applied to the background.

Each individual brush was used as a texture on a plane that gets its motion and transformation in relation to the frequency of the sound. We ended up with a maximum of 20 of those planes at a time so as not to overload performance.

To fill the artwork / canvas with a maximum of 20 floating planes, we needed to make sure anything drawn throughout the process remained and dissolved on the canvas as if it were real paint. The trick to achieving this was keeping the previous frame of what was being drawn and printing it again alongside the new frame, so that we are always one frame ahead. This was achieved with an FBO (frame buffer object) system. At the end of the background creation, the canvas was blurred to create a consistent color field.
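A condensed sketch of that FBO feedback trick (assumed setup with a newer Three.js API; the scene wiring is invented for illustration): two render targets are swapped every frame so the previous frame can be drawn back under the new one, letting paint persist and slowly dissolve.

```javascript
import * as THREE from 'three';

const size = 1024;
let prev = new THREE.WebGLRenderTarget(size, size);
let next = new THREE.WebGLRenderTarget(size, size);

// `feedbackScene` is assumed to contain a full-screen quad using this
// material (which re-prints the previous frame), plus this frame's brushes.
const feedbackMaterial = new THREE.MeshBasicMaterial({ map: prev.texture });

function renderPaint(renderer, feedbackScene, camera) {
  feedbackMaterial.map = prev.texture; // print last frame under the new one
  renderer.setRenderTarget(next);
  renderer.render(feedbackScene, camera);
  renderer.setRenderTarget(null);

  [prev, next] = [next, prev]; // ping-pong: always one frame ahead
}
```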

To get the colors right and harmonic we used the HSB color space. HSB stands for hue, saturation and brightness, and it is more intuitive than RGB. It helped us get the right spectrum of colors based on the frequencies and made it easier to keep the contrast and luminosity consistent by only offsetting the hue. Tints close to blue represented low frequencies, whilst tints towards yellow were for higher frequencies. Those colors are then used as a tint base for the brushes being displayed.
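A toy version of that frequency-to-hue mapping (the ranges are invented, and Three.js exposes HSL rather than HSB, which serves the same purpose here): the hue slides from blue for low frequencies toward yellow for high ones, while saturation and lightness stay fixed.

```javascript
import * as THREE from 'three';

function colorForFrequency(frequency, minFreq = 50, maxFreq = 8000) {
  // Normalize the frequency into 0..1 (hypothetical bounds).
  const t = Math.min(Math.max((frequency - minFreq) / (maxFreq - minFreq), 0), 1);
  // Hue 0.66 is blue, hue 0.16 is yellow; only the hue is offset.
  const hue = 0.66 - t * 0.5;
  return new THREE.Color().setHSL(hue, 0.7, 0.55);
}

// e.g. tint a brush plane: brushMaterial.color = colorForFrequency(440);
```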

The size of the paintbrush was affected by the volume of the sound, giving additional variation.

Step 2 (during playback) - Applying Brush Strokes / Color Mixing

Adding the human touch... now that the background was complete, we needed to add a sense of brush strokes and movement to the final artwork. This was done by displacing the background using a set of ten unique brushes.

To get this effect we used the output image / background made throughout the first step and processed it with another image used for displacing pixels. This second texture is made out of another set of brushes. They are drawn, scaled and moved in relation to the audio frequencies. The result is used as a displacement map to move the pixels of the first output image.

By digitally pushing and pulling the color of the background in a diagonal direction, we were able to mix colors and achieve the sense of brush strokes.
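A sketch of that displacement pass (our own approximation of the technique, with invented uniform names): the background from step 1 is dragged diagonally by an amount read from the brush texture.

```javascript
const displaceFragmentShader = /* glsl */ `
  uniform sampler2D uBackground; // output of step 1
  uniform sampler2D uBrushMap;   // brushes drawn/scaled to the audio
  varying vec2 vUv;

  void main() {
    // Brightness of the brush map decides how far pixels are dragged.
    float amount = texture2D(uBrushMap, vUv).r;
    vec2 diagonal = normalize(vec2(1.0, 1.0));
    vec2 displacedUv = vUv + diagonal * amount * 0.05;
    gl_FragColor = texture2D(uBackground, displacedUv);
  }
`;
```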

Site Design & Art Direction

Artwork

It was highly important for all artworks to have a unique visual output where each differed from another based on the sound recorded. Yet at the same time, having a sense of art direction and overarching style meant creating a set of colors which complemented each other and designing a group of brushes to be used.

Each frequency bracket was assigned color values that gave enough range of difference but that would also work well with each other. Color selection required constant testing throughout the experience.

The algorithm creating the artworks picks a series of brushes to use at the moment of applying the paint strokes. Rather than using a code-based paint brush, each brush was designed to look appealing on the canvas. Having a limited set of brushes gave the experience a more human touch.

The Homepage

To entice users to start creating artworks we wanted to create a visually heroic landing page. It featured an animated, interactive background that mimicked the final artwork, but not entirely, so as not to give away the final product.

The Waveform

We wanted to create an audio waveform that served two purposes during different moments of the experience. The first use was to show direct, one to one, feedback of you recording audio whereby the waveform would stretch and bend to the audio input it received. The second use was as a playback device where the user listened back to their audio as their artwork was being brushed.

To turn the input waveform into the playback version, we simply compressed its width, which in turn created a waveform...

Sharing the Love

The Sound in Color experience was created for the HP Reinvent Giving campaign, centered around sharing special moments with friends and family. This notion of sharing played an important part in the final goal of the website - sharing your final art piece digitally and physically.

At the end of the website, the user was provided multiple options to engage in this notion of giving. Firstly, the user could share their artwork on Twitter or Facebook, or download an image to post on Instagram. Yet more importantly, the user could download a high-resolution printable PDF version to print themselves, as well as have their artwork printed on canvas.

How did we create a print-ready file from a website?

The rendering of the artwork on the website is only created at a standard web resolution, which would not be high enough for a user to print on paper. To create the high-resolution version, we built a back-end system on a server using a high-end graphics card to replicate the front-end visuals. When a user recorded audio and created an artwork, all of the data for the colors, brushes, positional information and brush strokes was stored in a JSON file. This file was then sent to the back-end and the artwork recreated at a much higher resolution.
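A hypothetical shape for that stored artwork data (all field names invented; the real schema is Resn's own): everything needed to replay the painting deterministically at print resolution.

```javascript
const artworkData = {
  colors: ['#3a6ea5', '#f2c14e', '#e07a5f'],  // tints derived from the recording
  brushes: [3, 8, 5],                         // which of the ten brushes were used
  strokes: [
    { brush: 8, x: 0.31, y: 0.62, scale: 1.4, rotation: 0.8, frame: 120 },
    { brush: 3, x: 0.74, y: 0.18, scale: 0.9, rotation: 2.1, frame: 240 }
  ]
};
// Sent to the back-end, which replays it with the same code at a much
// higher resolution on a server-side GPU.
```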

By utilizing a third-party printing provider, we were able to generate an artwork at a quality ready for canvas print.

A special thanks to all the agencies for their collaboration on this project. Still hungry for more? Coming up: juicy themes such as interaction in VR, WebGL best practices, big data, web performance optimization and more. Volume 3 - coming soon!

Published by Awwwards - 2017