For VJs with access to a fast upstream Internet connection, one place to perform is over the net itself, by streaming the output of VDMX (or any other Syphon output enabled application) to Twitch.
In this video tutorial we'll be looking at the simple steps involved with setting up streaming with Twitch from VDMX.
In the OBS Preferences under Stream enter the Stream Key found in your Twitch Dashboard.
Tip: Depending on your Internet connection you may want to vary the stream settings in the OBS Preferences as well.
Once you've got the initial basic OBS setup steps out of the way, launch VDMX. You can start from a new project, load a template, or open any existing project. Next add a Syphon Output to your workspace if one does not already exist and enable the layer from VDMX you wish to stream over Twitch.
Launch OBS and create a new Scene. Add a video source and choose "Game Capture (Syphon)" from the list of available options. After setting a display name for the source a panel will appear asking you to select which Syphon feed you would like to receive. Choose the output from VDMX that you wish to publish over Twitch. In the main OBS canvas you may need to resize the image from VDMX to fill the entire broadcast region. If needed you can go back to VDMX and adjust your layer / canvas size to match the needed aspect ratio.
The final step is to click the "Start Streaming" button in the main OBS interface. At this point your output from VDMX should be streaming to your Twitch page. When you are finished click the "Stop Streaming" button in the main OBS interface or select the "Stop Streaming" option from the OBS menu.
Tips and Notes:
The Syphon Output plugin can be easily added to existing VDMX projects and templates.
Use OBS to receive Syphon from VDMX and stream to Twitch.
Go to the Twitch dashboard to find your Stream Key to use in the OBS preferences.
One of the biggest nights for live visual performers is New Years Eve, and there is one responsibility in particular that can be unexpectedly tricky to get right: handling the countdown to midnight. Often it means coordinating with other performers or workers at the venue. Depending on what is required of you during the show there are a few different approaches you may want to take when setting up your projects. It also helps to know in advance what your options are for running a visual countdown.
When creating a countdown to midnight, there are two main details to decide:
Will the countdown be automated by a clock or external source, or manually advanced?
What will it look like? Will it be a 10 second count down, or a clock display that counts up to midnight? Will it be pre-rendered or live generated? What happens when midnight is reached?
In this tutorial we'll start with a discussion about how to answer each of these questions and then look at a few different ideas to get you started with the available approaches.
Manually Advanced Clocks vs Automated Clocks
Depending on your needs there are two different directions you can go in for running the timer behind the countdown.
As an example, at many events there is a person leading the crowd in a countdown, and you may be required to keep in sync with them. In some cases the easiest and most flexible approach is to create 10 different still images and manually trigger them one by one. Similarly you may find it useful to create a movie file that starts with an exact 10 second countdown animation that a person can read along with. Another route, which can be driven manually or by an automated clock, is to use an interactive live rendered composition created in ISF, Vuo or Quartz Composer that is displayed on screen.
In other cases you may need to automate the timing to be in sync with the time of a computer or another event. An example of an external event you may want to keep in sync with is a televised countdown from a local broadcaster. Very often these broadcasts are playing in homes and other public venues around the world and it can sometimes be nice to just cut to this instead of creating your own countdown. For this you can get a digital HDTV receiver and an HDMI capture device (such as a Blackmagic Mini Recorder) to access the live feed like any other camera connected to your computer. Do note that in some venues you may need to get special permission to use live televised feeds as part of your set.
Regardless of whether your clock runs automatically or is advanced manually, unless you are using a live feed as your timer, you may have some creative control over what the display looks like. In the second part of this tutorial we'll discuss the design considerations that go into making a clock, followed by a quick look at how the pre-rendered countdown movie was made in Motion and how the three different ISF based example compositions are written in GLSL.
Within this topic there are two initial decisions to make. The first major design consideration is whether you will be counting down from 10 or would prefer to display a clock that approaches midnight. The major technical consideration is whether you want to pre-render media files (such as a movie or still images), or plan to use a live rendered option such as ISF, Vuo or Quartz Composer.
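If you go the live rendered route, the timing math underneath is the same whatever tool renders the display. As a minimal sketch of that core calculation (in Python here purely for illustration; the example compositions that accompany this tutorial are written in ISF / GLSL):

```python
from datetime import datetime, timedelta

def seconds_until_midnight(now=None):
    """Return how many seconds remain until the next midnight."""
    now = now or datetime.now()
    midnight = (now + timedelta(days=1)).replace(hour=0, minute=0,
                                                 second=0, microsecond=0)
    return (midnight - now).total_seconds()

remaining = seconds_until_midnight()
if remaining <= 10:
    # Final stretch: show the big 10-9-8... countdown digits.
    print(int(remaining) + 1)
else:
    # Before then: show a clock of the time remaining, e.g. "1:23:45".
    print(timedelta(seconds=int(remaining)))
```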
Whether using pre-rendered or live generated material, you may want to design materials that make use of alpha channels so that they can be overlaid on top of other media you are playing, or used as masking layers during composition.
One of the common questions for VJs working alongside musicians is what is the best way to keep the tempo of all of the software being used by the different performers perfectly in sync. Ableton Link is a new technology developed by Ableton that answers this by synchronizing musical beat, tempo, and phase across multiple applications running on multiple devices, including VDMX.
When using Ableton Link, applications connected to a local network discover each other automatically and form a musical session in which each participant can perform independently: anyone can start or stop while still staying on the beat and in tempo. Anyone can change the tempo, and the others will follow. Anyone can join or leave without disrupting the session. Ableton Link compensates for network latencies and requires almost no setup to get working. The focus here is to facilitate improvisation between performers – there is no master controller, and unlike timecode protocols such as MTC and LTC, the absolute song time is not broadcast, only the position within the current measure.
To sync the BPM and measure position of a Clock plugin in VDMX simply look for the “Enable” button where it says Ableton Link in the plugin options panel in the Workspace Inspector. Once activated the text will update to indicate the number of active peers on the network who are also currently using Ableton Link.
You can also quickly get started with a basic example setup in VDMX by choosing the "Ableton Link Demo" option from the Templates menu.
Note that while Waveclock audio based BPM detection can be active for a Clock plugin at the same time as Ableton Link, this is generally not recommended, as you will be forcing other peers to slave to this timing. MIDI Clock input cannot be used at the same time as Ableton Link.
In this video tutorial we'll be looking at how to use Ableton Link in VDMX to sync the BPM with other software and some related useful techniques that can help you get the most out of this powerful new protocol.
While the Hardware Learn Mode and detect methods are often the fastest way for a VJ to connect sliders, buttons and other elements from a physical controller to their corresponding UI items in VDMX, sometimes it can also be useful to manually type in these addresses instead. For example, you may need to do this if some item on the controller sends multiple MIDI values at the same time, or when attempting to set up a project from a spec sheet while the device isn't actually plugged in.
This setup technique can be accomplished by using the “Receive” tab in the UI Inspector where individual receivers for an interface control can be directly configured.
The + / - buttons are used to add receivers to the control. Once created you can click on the text string to manually enter an address path to receive MIDI, OSC, DMX, keyboard or any internal data-source available in VDMX.
Along with being able to directly set one or more address paths to listen to, the inspector and its sub-inspector contain lots of other options for setting how the incoming values should be interpreted or translated before being applied. For example, the directionality of a slider can be flipped by enabling the “Invert Val?” toggle in the sub-inspector.
Like most interface items in VDMX, the Color Pickers can send their current state over the DMX protocol for syncing with lighting consoles and fixtures making it possible for a VJ to control both the visuals and lights at a live event. In this video tutorial we'll set up a DMX controllable lighting fixture and set up a Control Surface plugin with interface items for adjusting each of the available parameters.
Before getting started you may need to configure your system Network settings in System Preferences and the DMX / ArtNet settings in the VDMX preferences. The exact settings will vary depending on the equipment being used. Also note that the hardware being used may come with additional setup software you may need to run.
System Preferences and ENTTEC NMU setup program to configure the hardware.
The VDMX5 Preferences for setting universe addresses for each DMX input and output port.
Simple DMX Fixture, “ENTTEC Open DMX Ethernet” and Thunderbolt to Ethernet adapter.
For this basic setup we have a single port ENTTEC Open DMX Ethernet converter but any ArtNet compatible device will work. We're connected directly to the device over cat-5 using a Thunderbolt to Ethernet adapter as pictured above. The fixture being used accepts 4 DMX channels, 1 for strobe / brightness level and 3 for the RGB color.
To begin we'll add a Slider to the Control Surface and set it to send on channel 1 for the strobe rate / brightness level.
Next to control the color of our light we'll add a Color Wheel element and use the UI Inspector to add a DMX Sender. From the sub-inspector we can adjust the channel offset to match the template of the fixture.
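VDMX builds and sends the ArtNet packets for us, but it can help demystify the protocol to see what actually goes over the wire. Here is a hedged sketch in Python that lights our hypothetical 4-channel fixture red at full brightness; the node's IP address is a placeholder, and in a real project you would just let VDMX do this:

```python
import socket

def artdmx_packet(universe, channels):
    """Build an ArtDMX packet (Art-Net's DMX data opcode, 0x5000)."""
    data = bytes(channels).ljust(2, b"\x00")  # payload must be at least 2 bytes
    return (
        b"Art-Net\x00"                      # packet ID
        + (0x5000).to_bytes(2, "little")    # OpCode: ArtDMX
        + (14).to_bytes(2, "big")           # protocol version
        + bytes([0, 0])                     # sequence (0 = off), physical port
        + universe.to_bytes(2, "little")    # SubUni + Net bytes
        + len(data).to_bytes(2, "big")      # channel count
        + data
    )

# Our fixture: channel 1 = strobe / brightness, channels 2-4 = RGB.
frame = [255, 255, 0, 0]                    # full brightness, pure red
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(artdmx_packet(0, frame), ("10.0.0.10", 6454))  # node IP is hypothetical
```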
Notes and Next Steps:
The UI Inspector Sending tab for the Color Wheel showing DMX Sender options.
Similar to but distinct from MIDI, each DMX port represents a single “Universe” consisting of 512 “Channels” of values ranging from 0-255.
If multiple universes are needed there are devices with multiple ports, or you can connect several single port devices to a network router or switch.
While many VJs and musicians use equipment designed to look like a traditional instrument for running their visuals, another great way to trigger video clips and FX during a performance is with gaming controllers.
In this tutorial we'll look at three different examples of using such setups in VDMX, but as always the real fun is adding them into your own projects and templates.
1. HID (Human Input Device) Game Controllers
Using the “HID Input” plugin you can receive button and joystick data from many generic USB or Bluetooth based game pads. Once selected in the device menu any detected elements from the controller will be available for hardware learn and assignment as standard data-sources.
To get started quickly with using the standard bluetooth PS3 HID Controller with VDMX try using the example template which includes a pre-mapped Control Surface plugin for each element on the device. This technique can be useful for visualizing the current state of a controller or for testing behaviors when a device isn't available to play with.
Note that the HID specification also includes lots of other types of devices that can be mapped in interesting ways for VJing.
2. Keyboard Input Based Game Controllers
Some game pad controllers such as the X-Arcade consoles show up as a USB keyboard and send standard key commands when buttons are pressed or joysticks are moved. When using these controllers no extra plugin is needed in VDMX, each assignment can be made using hardware learn mode or manually entering the data-source paths in the UI Inspector pane.
It is important to remember that since these are registered as keyboard presses, they will only be picked up by VDMX when it is the frontmost application. You may also want to consider this if you need to live type text during your performance.
3. WiiMotes
Similar to using HID based inputs, the “WiiMote” plugin in VDMX will make the buttons, joysticks and accelerometers of a bluetooth based WiiMote available as standard data-sources. For more information also see the in depth WiiMote tutorial page.
For this video tutorial we'll start by enabling Bluetooth communication and detecting the controller with the WiiMote plugin in VDMX. Once the basic configuration is working we can begin to put together a sample project designed to get the most out of combining accelerometers with game style buttons for performing live visuals.
Tips and Notes:
Assign buttons from the right-click detect, UI Inspector, or Hardware Learn.
Pair Bluetooth HID and WiiMote controllers in the System Preferences.
We used the X-Arcade Tankstick + Trackball model in this tutorial.
One of the most powerful techniques for combining multiple layers of video into an output is the use of masking, also known as luma keying. With this process, two video sources are combined to create a "cut out" layer that can be composited over other images like a collage instead of simply blending them together. This style is commonly used in music videos, graphic design, and live VJ performance. Masking is also an important technique when projection mapping video onto surfaces.
The basic idea behind this process is easy to learn and can be repeated any number of times within your VDMX project to create complex layered visual scenes.
Typically for our mask we'll want to use a video source that has clearly defined shapes or patterns, preferably black and white or high contrast. A few examples of good masks, some of which we'll use for this example, are pictured below.
When applying the mask to a layer using the Layer Mask FX, the regions that are white will become textured by the source video, and the regions that are black will show through to video playing on the background layers behind it. Areas that are gray will mix between the two.
Foreground Layer before mask is applied.
Masking Layer provides the "cut out" shape.
Foreground Layer after the mask is applied.
Composited over a background.
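In code terms the Layer Mask FX boils down to a per-pixel linear interpolation between layers, weighted by the mask's luminance. A minimal numpy sketch of the same rule (with random frames standing in for real video):

```python
import numpy as np

def apply_luma_mask(foreground, background, mask):
    """Composite per the luma keying rule described above: white mask
    regions show the foreground, black regions show the background,
    and gray regions mix between the two."""
    # Approximate luma as the channel average; a production shader
    # would use weighted R/G/B coefficients instead.
    luma = mask.mean(axis=-1, keepdims=True)
    return foreground * luma + background * (1.0 - luma)

# Float RGB frames in the 0.0-1.0 range; random data stands in for video.
h, w = 720, 1280
fg, bg, mask = (np.random.rand(h, w, 3) for _ in range(3))
out = apply_luma_mask(fg, bg, mask)
```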
When creating layered compositions using luma keying, a useful tip is to organize media files into three types: overlays, masks and backgrounds. The masks will provide the "cut out" shapes that appear over the background clips and the overlays will be the texture video that fills our shapes. In this video tutorial we'll begin by loading our files onto separate pages for these purposes.
After the media files are loaded we'll move to the Workspace Inspector in the Layers tab where the layers from our scene can be managed. For the basic example we'll create three layers: a Foreground, Background, and Mask.
As detailed in the applying a mask to a layer tutorial, once this basic layout is prepared we need to adjust two layer settings and add an FX. First, in the Layer Composition Controls for the Mask Layer, set the Opacity to 0.0 (or use the hide/show button) so that the mask is not visible in the final output directly.
Next on the Foreground Layer set the Composition Mode to "OpenGL-Over" (or "VVSource-Atop") and add a "Layer Mask.fs" FX with its Mask Image input set to receive from the "Mask Layer" that we've created.
Once this is complete, to make our scene more visually interesting we'll repeat this process to add a midground layer that appears in between the foreground and background layers.
Tips and Notes:
Use the OpenGL-Over composition mode on foreground and overlay layers.
Use the Chroma Mask FX to convert blue screen footage into a grayscale mask.
Text layers and other built in sources are also useful as mask generators.
Fun dome projections during our visit to the LES Girls Club.
Note: This is an updated tutorial topic – Recently the Unity3D game engine was updated so that the personal edition could use 3rd party add-ons, a feature that was previously limited to their pro version.
While mainly designed for cross platform game development, the Unity3D engine is used by many Mac VJs for creating 3D worlds and other real-time generated graphics for visual performance. By connecting these environments to other VJ applications like VDMX over Syphon and OSC we can control these worlds and mix, process and output the virtual camera signals from a scene like any other live media source.
In this guest tutorial we're joined by Dave Pentecost, manager of the planetarium at the Lower East Side Girls Club of NYC. Along with weekly tours of the known universe in their dome, Dave teaches the girls how to use the software needed to make their own digital creations to project into the space.
This example begins by creating a simple scene in Unity3D that contains a plane and a basic character package known as Ethan.
Once this scene is created the next step is to add the Syphon asset package to the Unity3D project. For each of our three surfaces (Ethan's body, glasses and the ground) a different material is created. The next step is to create a matching layer for each of the surfaces in VDMX to provide video streams for display in the scene using the Syphon Output plugin.
Tips and Notes:
Visit the Github page for the latest version of the Syphon plugin for Unity3D. No additional installation is needed to publish or receive video over Syphon in VDMX.
Enable the "Run in Background" option in the Unity3D Resolution and Presentation settings to keep the scene active when VDMX or other VJ applications are being used.
After getting VDMX to send video to Unity3D, try publishing a camera in the Unity3D environment to Syphon and receiving it on a layer in VDMX.
We really enjoyed this tutorial from the DocOptic crew who were gracious enough to let us share it here with some extra notes.
DocOptic is an independent team of artists with a background in the creation and performance of live visuals, 3D motion graphics, and music production. We also enjoy sharing knowledge with new and existing audio-visual performers through the creation of tutorials that explain how various live visual software works and how it can be used in live performances.
This tutorial goes over some of our favorite tips we use to improve our workflow while using VDMX including keyboard shortcuts, BPM automation, presets, and more. Also covered are a few techniques using our most used features of VDMX such as the Alpha Mask effect and using application windows as media sources.
Tips and Notes:
Save and recall the states of your parameters using Presets.
Remove footage backgrounds using the Alpha and Layer Mask effects.
Create a Control Surface to easily access your favorite effects and parameters.
Use marks to trigger different playback positions of your media.
Use other applications as media sources with Window Inputs.
Automate parameters to BPM to keep your visuals in sync with audio.
For musicians working in Ableton Live or other multitrack production software, one of the most useful tricks for driving real-time visuals is to output each track on a different set of audio channels before they are mixed together, giving more accurate per-sound results when performing audio analysis in VDMX.
In this video tutorial we'll look at how to accomplish this technique by using the Soundflower audio routing system extension which allows passing of audio streams between applications.
This same idea can be used with bands or sound boards by using a multi-channel audio input that receives each instrument before it is mixed.
To begin, set the output device in the Live audio preferences to use Soundflower 64 and configure the needed output channels. Next switch each track in the main Live view to send on a different external channel send instead of the master feed. If needed, use the Soundflowerbed utility to route the audio to your speakers for preview.
Once the audio side has been set up, in VDMX we'll create an Audio Analysis plugin for each of the tracks playing in Live. For each plugin a new set of frequency bands can be tuned to the incoming sounds. The individual data-sources are available for controlling the FX and source parameters of any layer.
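The Audio Analysis plugins handle all of this inside VDMX, but to make the idea concrete, here is a rough sketch of per-channel level metering on a multichannel input using the python-sounddevice library; the device name and channel count are assumptions that depend on your Soundflower configuration:

```python
import numpy as np
import sounddevice as sd  # pip install sounddevice

def analyze(indata, frames, time, status):
    """Compute an RMS level per channel, one channel per Live track."""
    levels = np.sqrt((indata ** 2).mean(axis=0))
    print(" ".join(f"{lv:.3f}" for lv in levels))

# Device name is an assumption; list devices with sd.query_devices().
with sd.InputStream(device="Soundflower (64ch)", channels=8, callback=analyze):
    sd.sleep(10_000)  # analyze for ten seconds
```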
One of the popular controllers used by VJs is the Korg nanoKONTROL, a versatile set of sliders, knobs and buttons that can be easily mapped to different setups. The goal of this more generalized setup is to provide a good standard VJ rig for this controller that includes 4 layers with playback / mixing / color adjustment, clip / page switching along with a set of both manual and audio reactive FX that can be individually enabled.
For this project the layout is split into three sections, with playback control on the left, mixing on the middle 4 sliders, and audio reactivity on the right 4 sliders.
4 Layer VDMX setup for Korg nanoKONTROL
Example project with controller and display
Main controller layout
In the playback controls section the buttons under the 'TRACK' heading are connected to changing pages and moving to a random / previous / next clip on the current page. Below the 'MARKER' heading are buttons for adjusting the movie rate, direction and jumping the playback backwards and forwards.
The mixing section in the middle contains the controls for adjusting the opacity slider and luma cutoff knob for each of the 4 layers. Additionally each layer has 3 optional style FX (Invert, Mirror, RGB trails) that can be enabled using the S, M, R buttons corresponding to each layer.
The audio reactivity section is used to adjust the gain level for the FX enabled on the corresponding layer. Like with the mixing section the S, M, and R buttons are used to enable the Shake, Blur and Dot Screen reactivity. Above each slider is a knob for manually shifting the hue of the layer for changing the color palette of your visuals.
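Under the hood each slider, knob and button on the nanoKONTROL simply sends a MIDI control change message that VDMX normalizes to a 0.0-1.0 range. As a hedged sketch of what hardware learn is listening for (using the python-rtmidi library; the CC numbers here are illustrative assumptions, since the actual mapping depends on your controller's scene settings):

```python
import rtmidi  # pip install python-rtmidi

# CC numbers are assumptions for illustration; use hardware learn or the
# Korg editor to confirm the mapping of your own nanoKONTROL.
OPACITY_CCS = {0: "layer 1", 1: "layer 2", 2: "layer 3", 3: "layer 4"}

def on_midi(event, _data):
    message, _delta = event
    status, controller, value = message
    if (status & 0xF0) == 0xB0 and controller in OPACITY_CCS:  # control change
        print(f"{OPACITY_CCS[controller]} opacity -> {value / 127.0:.2f}")

midi_in = rtmidi.MidiIn()
port = next(i for i, n in enumerate(midi_in.get_ports()) if "nanoKONTROL" in n)
midi_in.open_port(port)
midi_in.set_callback(on_midi)
input("Listening for MIDI, press return to quit...")
```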
Though VDMX provides many plugins and built-in datasources to automate control of sliders, buttons and other interface controls, many people like to use their own algorithms to generate control information to use as part of their performance or installations.
In this tutorial we will look at using Quartz Composer to create a “Random Walk” value generator for VDMX. This composition can be downloaded and installed to be used as is, or as a starting point for your own more complex creations.
To begin, from within VDMX choose “Open Assets Folder In Finder” and drag the “Random Walk” file into the plugins folder. The composition will then be available from the Workspace Inspector, and any published outputs will be available as data-sources and video streams. As with other plugins, we can have multiple copies running at the same time.
Controlling a parameter with the output from Random Walk works just like using any other data-source. In most cases you can make this assignment by right-clicking on an interface control and selecting it from the appropriate sub-menu. The UI Inspector panel can also be used to configure an interface item to receive data from the composition and customize how it is applied.
The Random Walk plugin itself has two basic controls: Step Size (the maximum amount the value can change in a single step) and Wrap Mode (whether values wrap around or bounce off at the edges). Each of these can be set from the plugin control window.
How The Random Walk Example Works
For those interested in how the Random Walk composition was created, in this second part we will open the file in the Quartz Composer Editor to take a closer look. When editing the composition, you can follow the provided notes for each section.
The process for a random walk involves four basic steps:
Input maximum step size.
Generate random numbers and accumulate.
Apply wrapping function.
Publish and visualize results.
The wrapping functions are used to convert the un-ranged random numbers into a 0 to 1 range that can be used by VDMX to control parameters. A sawtooth wave provides a wrap at the edge, whereas a triangle wave causes a reflection at the edge.
Additional parameters, such as adding a button to reset the accumulator back to its starting value, can be added using Input Splitter objects.
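For reference, the same four steps can be condensed into a few lines of code. This is a sketch of the idea in Python rather than Quartz Composer, with the sawtooth and triangle wrapping behaviors matching the Wrap Mode toggle described earlier:

```python
import random

def random_walk(step_size=0.05, wrap=True, start=0.5):
    """Generate an endless random walk constrained to the 0.0-1.0 range,
    mirroring the Step Size and Wrap Mode controls of the composition."""
    value = start
    while True:
        value += random.uniform(-step_size, step_size)  # accumulate a random step
        if wrap:
            value %= 1.0                 # sawtooth: wrap around at the edges
        else:
            value = abs(value) % 2.0     # triangle: reflect off the edges
            if value > 1.0:
                value = 2.0 - value
        yield value

walk = random_walk(step_size=0.1, wrap=False)
for _ in range(5):
    print(next(walk))
```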
Notes:
Use the "Open Assets Folder in Finder" to locate the custom "plugins" folder.
Quartz Composer and Vuo plugins can be added from the Workspace Inspector.
Editing the "Random Walk" plugin in Quartz Composer.
Expanding on our previous look into using Processing along with VDMX, in this tutorial we will look at how to use Processing to post images received from VDMX to Twitter. This technique can be used at live events, as part of video installations, or to create simple bots.
Set Up The Twitter Account Access Tokens
Before we can post any data to a Twitter account, for security purposes we need to generate access tokens that are associated with it. Once we have finished this step we will have four special password token strings to put into our Processing sketch that allow it to access the account. These are the “Consumer Key”, “Consumer Secret”, “Access Token” and “Access Token Secret”, which are found by going to apps.twitter.com while logged into your user account.
If you are creating a new bot, before starting you'll want to create a new Twitter account to post with. Alternatively you can post using your own personal user account.
1. Create a new Twitter account if needed.
2. Go to apps.twitter.com and click "Create New App"
3. Click "Manage keys and access tokens"
4. Look for these four strings (they will be different for you)
Set Up The Processing Sketch
Fill in the OAuth Consumer Keys and Access Tokens
Begin by downloading the example Sketch for this tutorial and installing the needed 3rd party Library files for Processing. In this particular case you'll want to import Syphon, oscP5, and Simple Tweet using the Library manager.
Open the Sketch and in the setup() function look for the area where the access tokens for your account can be entered, replacing the ... text with your own strings – be sure to keep the quote marks around your token strings.
After you've placed your account tokens into the provided places in the code, this sketch can be run exactly as is without any additional modifications. You may also want to adjust the rendering dimensions to match the output of your VDMX canvas, set it to receive a particular Syphon feed instead of the first one available, adjust the OSC listening port (defaults to 12000), or change the default message that is posted along with images.
With this sketch you can make posts to Twitter in one of two ways:
Click on the output window.
Send an OSC message to the address "/tweet": If this is a float, a value of 1.0 results in a tweet with the last used text. If this is a string, a tweet is made using this string as the message text.
Additionally you can send strings to "/tweetstring" to set the text for the next tweet without immediately tweeting.
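Later in this tutorial VDMX will be the one sending these messages, but they can come from any OSC capable tool. For instance, a quick test using the python-osc library (the port matches the sketch's default of 12000):

```python
from pythonosc.udp_client import SimpleUDPClient  # pip install python-osc

client = SimpleUDPClient("127.0.0.1", 12000)  # the sketch's default OSC port

# Stage the caption for the next post without tweeting yet...
client.send_message("/tweetstring", "Live visuals, fresh from VDMX")

# ...then trigger the post itself. A float of 1.0 tweets the last used text;
# sending a string instead would both set the text and tweet in one step.
client.send_message("/tweet", 1.0)
```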
Once the Processing Sketch is ready to receive Syphon and OSC, you can use VDMX, or any other Syphon output enabled application, to post images to Twitter. At this point, if desired you can simply use the VDMX project file that is provided as a starting point, or you can follow along with the final section below to learn more about how this was put together.
Within VDMX we can start by loading in a video generator that will create the test card images to be published. Next add a Syphon Output plugin and select which layer you wish to publish from the list.
Next we'll configure the OSC sending. If you haven't already set up VDMX to send OSC to Processing, open the Preferences and go to the OSC section under Output Ports and add a new row, setting the IP address to "127.0.0.1" and the port to 12000. To create an interface in VDMX add a Control Surface plugin that contains a button and a text field control. Use the UI inspector to set the button send to Processing over OSC using the address path "/tweet" and for the text field using the address path "/tweetstring".
Once this is ready, we can run the Processing Sketch and test it by writing some words in the text field and then clicking the button – this should result in the posting of an image and your text to Twitter. If this fails, you can look in the console for Processing to see if there are any errors.
To automate the process of posting, an LFO plugin can be used as a timer to bang the button from the control surface at regular intervals. For once an hour, the LFO can be set to loop after 3600 seconds.
If you have been on the internet this last week, you may have heard about GDPR, a new set of base guidelines from the European Union for companies and other entities to follow when dealing with personal information from their customers.
As we've been updating our own privacy policy and server to meet these standards, one of the most important details we ran into was picking just the right animated graphic to go with our new privacy policy page and we thought why not give the image a personal touch and make it with VDMX?
In this quick video tutorial we'll demonstrate how to create a hacker / glitch visual style for making still images or short video loops – source material for animated GIFs and profile pics in situations where you want to apply some digital processing to your face, whether for privacy or for fun.
For this tutorial you'll either need a webcam, or some pre-recorded material (videos or still images) of the person or object who will be the subject in the final result.
Making the template
One aspect of VDMX is that it can be used to create custom workflows for projects, even if you are just capturing still images to use online. If you'd like to skip to the next section, you can download the completed template from this lesson.
In this example we will begin by building a setup that allows us to load in media files, apply styling using FX, and capture the output, along with preview windows to visualize what is happening at each stage. As an extension, a second media bin, page, and hidden layer can be added to the workspace as a way to reload the final results, so that they can be previewed without switching between software, or remixed further. After we reach a good baseline configuration, the layout can be stored as a preset to be restored later as a starting point.
Tip: In addition to the Poly Glitch FX demonstrated in this video, you may want to try other filters. In particular check out the Glitch, Retro, Film categories to obscure your identity online. Try the LUT section for specialized color palette tinting styling.
Stylizing the output
Once our basic workflow template is prepared, we can spend some time trying out a few different styles of output. This part involves a bit of play, and you can load in additional media or try out different FX to get your own unique styling. Here we will walk through a few different stylings and techniques that might be useful as starting points.
The first uses a still image as a source, applying a Duotone to separate some of the facial features from the background and a Mirror to duplicate the visible half to create a whole face. The ASCII Art and Sphere Map FX are then added to create a retro hacker styling.
To get this same style with a live camera input, instead of using a Duotone, a Motion Mask can be used to separate a moving face from the background, which also works nicely with this retro style. Another layer can be used to add a bit of noise as texturing.
For the final version demonstrated, instead of a secondary layer for noise, a colored line generator with filters can be used to make a laser background that is masked by the shape of the face from the top layer. A Vignette or Shape Mask FX can be useful to create a circular result.
Tips and Notes
Set the canvas size to match the resolution of the images you plan to upload.
Use the Movie Recorder to capture video and images to disk.
Right-click on media files to locate them in the Finder.
Make GIFs
Upload movies files to giphy.com or use other free tools to create animated GIF files.
Along with the technical tutorials on how to use VDMX and other software tools, one of the main focuses of this site is topics related to the field of performing live visuals. In this post we'll be looking at some of the techniques used to bring a show from an idea through to an actual production, covering the areas of:
Mood boarding: A primer, or “mood board,” is used to gather ideas for the overall style and palette for the visual design. This may include a collection of colors, graphics, textures, image references, screen grabs and sketches.
Storyboarding: A storyboard takes the elements derived from the mood board and places them in time, typically matching up events such as style changes with important moments in other elements of the show production, such as the music or theater scene changes.
Pre-production: During pre-production any prepared material, such as video files, still images, interactive generators, custom FX, that are needed for the show are created and arranged in the performance software for rehearsals.
Technical riders: Technical rider documents are often created as a way to clearly describe the broad technical aspects of a show production, including details like equipment lists, wiring diagrams, stage layouts, venue requirements, and contact information for people involved.
It is important to note that not every show will need all of these stages of production, and many shows will have other phases of development not covered here; these are simply four of the most common points along the path of creating an event from start to finish.
For this write up we are joined by Candystations and several other artists who have shared some of their own mood boards, technical riders and other planning documents for us to include here as examples.
Mood Boards
Bang On A Can early concept via Candystations
A visual design typically starts with a mood board that helps capture the desired looks and feels for the project. This may include a collection of words, colors, graphics, textures, image references, screen grabs, sketches or other materials that contain some relevant details. There is no set process for creating one: for some people tools like Pinterest are useful, while others will simply keep a folder full of files on their hard drive. Simple image collages can be created to explore juxtapositions between ideas.
If the visuals will correspond to music, actors, or other events, this is an opportunity to consider the relationships and conceptual mappings between these elements. The Historical Sound to Vision Primer from Candystations is a great "cheat sheet" of ideas that provides some common ways that sonic and visual elements can work together in a performance.
Once the materials have been gathered, they can be arranged and organized by concept. Again this can happen simply by using Quicklook and folders in the Finder, a rich text editor such as TextEdit, or a photo editing tool that can create a PDF document with additional notes. The completed document can be useful when you are working alone, but is especially important when working with collaborators or clients to help express the creative direction you want to explore, sometimes even before you've officially begun work on a project. It can also serve as a starting point for the next stages we'll discuss, creating storyboards and pre-production.
Page from Planning for “Age Of Adz” via Candystations, a mood board that evolved into story boards with pre-planning notes.
Storyboards
After the theme, setting, and mood for a performance or a production are planned out in mood boards the next step is often to plan out, or “storyboard” a script for choreographing various forms to music, actors, or other timing elements of the show.
In the animation industry, storyboards are composed of “extremes and in-betweens.” Extremes are moments that set the exact mood, emotion, or key image in a sequence. In-betweens are the transitional frames that move from one extreme to the next.
Basic animation loop story board.
In the world of film production a storyboard is also commonly known as a shooting board. Here we essentially have a series of frames, with drawings of the sequence of events in a film, like a comic book or photo novella version of the film. Preparing these documents can help film directors, cinematographers and clients visualize the scenes and how they connect together.
For a show production the same ideas from both animation and film storyboards can apply; you may have storyboards that show the broad changes in themes between and over the course of individual songs or acts, and you may have storyboards that focus on the details of specific changes within a verse or scene.
At this point work can begin on the actual creation of any special resources that will be needed for the live performance itself; this can include rendered images, photographs, videos, film, writing code for any real-time generators or FX, and going through the process of arranging these materials in the show production software.
A pre-production planning guide based on the story board phase can be a useful way to coordinate work between several people. It is fairly common to find collaborators, assistants or other means to deal with time constraints and to fill in any gaps in your own knowledge.
“We worked with Aaron Rogosin to voice and animate the moon. After scripting out the moon’s lines, we filmed Aaron speaking them, then motion tracked the movements of his face as he talked. We mapped the video to a high-res photo of the moon we got from NASA. Then we split the lines into single clips we could trigger with a TouchOSC soundboard and exported them as HAP Alpha.”
If you are planning out a show in detail in advance, you can use tools such as the Cue List plugin in VDMX or QLab to pre-program as much of the event coordination as possible. When using multiple pieces of software you can use timing protocols such as LTC or MTC, or control data protocols such as MIDI, OSC and DMX to synchronize events.
In the later stages of pre-production, you may begin to prepare the final materials for the show, doing things like exporting to different resolutions or aspect ratios to accommodate each venue, and converting media files into HAP for optimized playback on the show playback systems.
Technical Riders
Regardless of the complexity of the production, a technical rider can be a useful document to prepare, even if it is just for yourself. Traditionally the rider contains all of the information that a venue and other agencies involved with the production will need in advance of the show to meet your needs. A rider will often contain information such as gear requirements, stage layout and cable wiring diagrams, along with contact information for people to reach in case there are follow up questions or issues that need to be resolved.
Some important questions to ask when making a technical rider include:
What types of displays does the output go to? What kind of connectors are used?
What is the actual display resolution? What aspect ratio?
Am I bringing the projector or displays? How does that get set up?
Where am I setting up my table? Side stage, FOH, or on stage? Is there enough power access there? Visibility to the stage?
Is an audio feed needed? If so, where does it come from? What kind of cable is used?
What camera feeds and other video inputs will be used during the show? How do they get from their sources to your computers?
Do you need to get special content for each venue or event that isn't covered by pre-production? For example, logos or banners for a festival.
Is there safe storage space? Green room access?
How much time is needed for setup and soundcheck?
Who are the people involved and how can they be reached? Consider everyone you might need to contact: venue contacts, production assistants, runners, lighting designer (LD) and touring managers.
Like the other areas of this tutorial, there is no set process or template for making a technical rider; you can use a simple text editor, an image editing / layout app, spreadsheet software, or any other tool that you are comfortable with. In the gallery below you can find some example technical riders provided by various visual artists and other samples, such as the infamous Iggy Pop and the Stooges rider.
Jon Hopkins 2018 via Dan Tombs
Pixel Mapping Template via Azael Ferrer
First page of the rider for Iggy Pop and the Stooges
Liars via Dan Moore
Via Adrien Cognac
Setup for 11/30/01
Even if they are just for yourself, a wiring diagram can be useful when preparing for a gig.
Beyond the items you include in the technical rider to share with other people, it can also be useful to prepare your own personal checklist of gear to pack and other useful information before the day of a big event or leaving to go on the road. Some standard items to consider are:
A comprehensive list of all cables, adapters, computers and other equipment that belongs to you.
A label maker for tagging your personal equipment.
A toolkit that includes screwdrivers, extra screws, a measuring tape, flashlight, magnifying glass, a work knife and anything else that might be needed to make quick repairs. A set of work gloves can also be useful.
USB thumb drive for quick file handoffs.
Backup hard drive with all your media files.
Gaffers tape.
Cable ties.
Phone chargers.
Extra batteries.
A snack.
A list of the stores nearest to the venue where replacement electronics, instruments or other equipment can be purchased.
And perhaps one of the most important details to remember is that even with the best advanced planning things will go wrong. In these situations we suggest taking a little advice from The Hitchhiker's Guide To The Galaxy – DON'T PANIC!
These are just some samples of how mood boards, story boards, pre-planning notes and technical riders come together. Please share your own materials and experiences in the comments section below!
Along with the basic controls of inverting values and applying basic math equations, number FX chains can be used to adjust the values of data-sources before they are applied to sliders. In this example the 'Fall' FX will be applied to an audio analysis level to create a falling style before being applied to a VU meter generator.
This tutorial begins from having a layer loaded with a VU meter ISF generator along with audio analysis, control surface and preview window plugins.
The setup is accomplished in a few steps:
Right+click on the slider in the control surface and select 'Filter 2' from the list of Audio Analysis data-sources (the 'clap' region)
From the UI Inspector, click the 'Edit Num FX Chain' button to reveal the number FX chain for the receiver
Add the 'Fall' FX to the receiver and adjust the value to the desired fall rate
Click the 'Close' button to close the inspector
Control+drag from the slider in the Control Surface to the 'audioLevel' slider in the Layer Source controls.
Note that number FX chains are available for every data-source receiver for each slider; in this example we used a Control Surface slider as a go between, but this can be directly applied to the 'audioLevel' slider as well.
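If you are curious what the 'Fall' FX is doing to the numbers, the behavior is easy to approximate in a few lines. This sketch is an illustration of the concept, not VDMX's exact implementation; the real FX runs once per frame and its rate parameter may be scaled differently:

```python
def fall_filter(fall_rate):
    """Approximate the 'Fall' number FX: output tracks the input instantly
    on the way up, but can only descend by fall_rate per update."""
    current = 0.0
    def step(value):
        nonlocal current
        current = value if value > current else max(value, current - fall_rate)
        return current
    return step

vu = fall_filter(fall_rate=0.02)
for level in [0.9, 0.1, 0.1, 0.1, 0.1]:  # a clap followed by near silence
    print(round(vu(level), 2))            # 0.9, 0.88, 0.86, 0.84, 0.82
```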
Tips and Notes:
Inspect a slider then use the Edit Num FX Chain button from the sub-inspector.
Control+Drag from Control Surface UI elements to create quick assignments.
Advanced tip: Use a Control Surface slider to create an interface for the fall rate.
Also see the tutorial on using the OSCQuery Client plugin which can be used to create dedicated interface controls to remotely control other software. The Control Surface plugin can be used to publish parameters from your local workspace.
The OSCQuery protocol makes it easy for software that supports OSC to access each others parameters for remote control, without a lengthy setup process. Within VDMX there are a few ways to take advantage of this and in this tutorial we will focus on using the built-in OSCQuery Browser window which can be used to browse the address space of a server, send OSC messages and add OSC sending elements to our workspace.
The built-in OSCQuery Browser Window can be opened from the Window menu or by using the cmd+5 keyboard shortcut. From this panel you can access, browse and search the namespaces of other applications. For each of the listed OSC address destinations at the remote server you can:
Use the provided interface control to quickly send test data.
Drag the listed items onto UI elements in VDMX (such as sliders, buttons, and color wheels – this also works with the list of variables in the Cue List plugin inspector) to automatically configure OSC sending to the remote hosts.
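Under the hood, OSCQuery is ordinary JSON served over HTTP, which is why web browsers and other generic tools can speak it too. A hedged sketch of roughly what the Browser window does when it loads a server, written with Python's standard library (the port is an assumption; check the server's preferences for the real one):

```python
import json
from urllib.request import urlopen

# Requesting the root path returns the server's full OSC namespace as JSON.
with urlopen("http://127.0.0.1:4321/") as reply:
    namespace = json.load(reply)

def walk(node, path="/"):
    """Print every OSC address the server exposes, with its type tags."""
    for name, child in node.get("CONTENTS", {}).items():
        child_path = path.rstrip("/") + "/" + name
        if "TYPE" in child:
            print(child_path, child["TYPE"])
        walk(child, child_path)

walk(namespace)
```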
Tips and Notes:
The built-in OSCQuery Browser window, viewing OSC parameters from a remote control surface.
The Control Surface is one of the most versatile plugins in VDMX, making it possible to create sets of custom interface elements that can be used to control nearly any aspect of your workspace or send MIDI / OSC / DMX to other systems. The Control Surface plugin also has the ability to publish its list of parameters over a local area network using the OSCQuery protocol so that other software can remotely browse and control almost any aspect of your VDMX project.
In this video tutorial we'll be looking at the basics of using the OSCQuery protocol from within the Control Surface, and three ways those parameters can be accessed from software running on other devices: using our free OSCQuery Browser utility, another copy of VDMX, and a web browser running on an iPhone. This same technique of accessing published parameters can be used with other OSCQuery enabled tools such as Mitti, Mad Mapper and libossia.
The steps covered in this tutorial are:
From the Workspace Inspector > Plugins, add a Control Surface plugin to the project.
From the Control Surface inspector, add a slider, two buttons, a pop-up button, a color picker, a text field and a 2D point picker.
Use the UI Inspector to adjust interface settings: Make one button into a momentary and configure the pop-up button with menu options.
On a second computer (or on the same computer if only using a single machine to follow along), open the OSCQuery Browser, and observe that the list of parameters from the Control Surface can be browsed and remotely controlled.
On a second computer, in VDMX, the OSCQuery Client plugin is used to create interfaces that remotely access the published parameters from the first computer.
Webpage control: Go to VDMX Preferences > OSC > OSC Query and click the URL for "Interactive HTML Interface" – a web page controller will appear in a web browser. Load this URL on mobile devices, tablets and other computers on the local network to control published parameters from VDMX.
VDMX and the web controller support the optional 'bi-directional' communication part of the OSCQuery specification – note that as we adjust parameters within one device / application, the values are updated in the displays of other software to match. This feature can be disabled at the top of the web controller for low bandwidth situations.
Tips and Notes:
The basic parts of the Control Surface: Setup inspector, main interface with controls and the UI inspector for adjusting settings of individual interface elements.
Go to the VDMX preferences > OSC > OSCQuery to find the Interactive HTML Interface link.
A Control Surface plugin and corresponding web controller views (in dark and light stylings) on a Mac.
The corresponding interface viewed in Safari on an iPhone.
The OSCQuery protocol makes it easy for software that supports OSC to access each others parameters for remote control, without a lengthy setup process. Within VDMX there are a few ways to take advantage of this and in this tutorial we will focus on using the OSCQuery Client plugin which can be used to browse the address space of a server and add UI controls that are automatically configured to send to it.
Basic Usage of OSCQuery Client in VDMX
In some ways the OSCQuery Client is similar to the Control Surface plugin, with the important difference that the UI controls being viewed and manipulated are representations of parameters in the remote software. For convenience, like the Control Surface plugin, the values associated with these parameters are also published locally as data-sources within VDMX.
Step 1: From the inspector for the OSCQuery Client plugin you can select a remote server to connect to and which elements you’d like to remotely control.
Step 2: Use the UI elements in the main plugin interface to remotely control the parameters of the remote system.
Step 3: Parameters are also available as local data-sources to control sliders, buttons and other UI elements.
You can use the OSCQuery Client alongside a variety of different software, including other copies of VDMX. In this video demonstration, a Control Surface plugin is used on Computer 1 to publish parameters which automatically appear in the listing on Computer 2. Once these UI elements are added to the OSCQuery Client plugin on Computer 2, any changes made on either system are automatically kept in sync. If needed additional computers can be made to work in sync by connecting to VDMX on Computer 1. More information on using the Control Surface plugin to publish parameters from VDMX can be found in the Control Surface documentation and the OSCQuery Introduction Tutorial.
Though not supported by all software, when possible the OSCQuery Client and Control Surface plugins will establish a bi-directional connection and update the listing and values of UI elements in response to web socket notifications and OSC messages.
Using OSCQuery Client to Control Visuals
In the second part of this tutorial we'll look at an example that demonstrates using the OSCQuery Client to remotely control visuals in another copy of VDMX running on another computer.
From the OSCQuery Client inspector, the parameters of each of the four Control Surface plugins on the second computer can be browsed and added. From the 'My Controls' tab the local ordering of the parameters can be adjusted.
As a final detail, an NDI® Output plugin is added to the computer generating the visuals and streamed back to the control machine for preview. This step can be repeated to send each individual layer as its own stream, within the bandwidth limits of your network.
Tips and Notes:
The OSCQuery Browser window can be used to quickly view the address space and access the parameters of remote servers, and more.
Multiple OSCQuery Client plugins can be used to remotely control different servers at the same time.
The NDI® protocol from NewTek is a way to publish and receive audio / video streams over a network as a way to share live feeds between systems. From within VDMX, any number of video streams can be both output to the network and input from other applications.
In this tutorial we'll look at taking layers in VDMX and publishing them for other NDI® enabled software to access. When finished you may want to move on to the tutorial covering receiving NDI® audio / video streams in VDMX.
Tip: For a quick demonstration try the "NDI® Output Example" option from the Templates menu in VDMX.
Publishing NDI® Audio / Video Streams
The NDI® Output plugin can be used to publish video and audio streams from VDMX to other client software and devices that support the protocol. Like other plugins, you can add them to your project from the Plugins section of the Workspace Inspector. This process can be repeated for as many video streams as your system and network bandwidth can handle.
Step 1: Add NDI® Output plugin to a project from Workspace Inspector.
Step 2: Select desired video stream and optional audio device to publish. If needed adjust any options in the plugin inspector.
Step 3: Video stream can be received by other software on the local network.
Note: While NDI® can be used to send video streams between applications on the same computer, to get the best performance and quality it is typically best to use the Syphon Output plugin when possible.
Plugin Settings
From the main NDI® Output plugin interface you can adjust the following settings:
Video Source: Sets which video stream from VDMX is published.
Black: Overrides the current output with a black frame of the same resolution.
Audio Source: Sets the audio input device that is streamed along with the video.
Mute: Overrides the current audio output with silence.
From the inspector panel additional options for NDI® publishing can be set:
Alpha: Enables sending of alpha channel with the stream. Turning this on increases the bandwidth used for the stream.
Send as RGB: Enables using the higher quality RGB mode for sending. Turning this on increases the bandwidth used for the stream.
Crop (left/right/top/bottom): Set the number of pixels to crop off from the incoming video signal before publishing.
Resize: Optionally resize the video stream before it is published.
Throttle FPS: Optionally limit the rate at which video is published in frames per second.
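To build some intuition for how these options trade off against network load, here is a back-of-the-envelope calculation. The figures are for raw uncompressed frames, so they overstate what NDI® actually sends after compression, but the relative costs of RGB, alpha, resizing and FPS throttling hold:

```python
def stream_bandwidth_mbps(width, height, fps, bytes_per_pixel):
    """Rough uncompressed bandwidth estimate for a video stream."""
    return width * height * fps * bytes_per_pixel * 8 / 1_000_000

# Illustrative only: real NDI streams are compressed well below these rates.
print(stream_bandwidth_mbps(1920, 1080, 60, 2))  # UYVY-style, ~2 B/px
print(stream_bandwidth_mbps(1920, 1080, 60, 4))  # RGBA, ~4 B/px: double
print(stream_bandwidth_mbps(1280, 720, 30, 2))   # resized + throttled
```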
Tip: From the VDMX Preferences under the NDI® tab you can enter a list of remote sources outside of your local network to connect to.