Our video-setup for conferences
Powerpoint and visual content have become an important part of seminars and conferences. With our video-rig we can make sure that the presentations run correctly on the projectors, and that the transitions run smoothly.
In some rare cases I have been to conferences where the customer has managed this without external help, using just one PC with one large Powerpoint presentation that contains everything in a single file in the correct order. However, this solution requires a lot of planning, it is not very flexible, and it can be fragile. It might not even be possible if presenters show up with files in different formats: Powerpoint in 16:9 and 4:3, Keynote files and videos. Videos can be converted so they run inside Powerpoint, but it is not recommended to put a lot of large video files into one Powerpoint presentation. By having an external solution for video playback, you take some of the burden off the computer running Powerpoint, and you can potentially get higher quality video.
In our standard setup we can show content from several sources, all connected to our video mixer:
– two PCs running Powerpoint and PDFs
– a Mac running Keynote and Qlab
– a hardware video player for video content
– stills for logos, the program, etc.; we can load 20 pictures into the internal memory of our mixer
– the last input on our mixer can take a camera, or an external computer if the presenter wants to control their own PC. This is very common when, for example, the presenter wants to show something on the Internet.
When presenters bring their own computer on stage, they automatically get a screen where they can see their own notes. Now that we have removed this computer, we need a monitor on the stage that shows the same thing: the so-called "presenter view", with the current slide, the next slide and the speaker notes. Technically we solve this by taking two separate outputs from each computer. On the PCs, the first output is a mirror of the internal laptop screen and the second is an extended screen. On the Mac we also use two separate outputs, since Keynote can work with three different screens.
All the outputs from the computers go to the same video mixer, where the picture is distributed to two separate outputs: the "program output" goes to the projector for the audience, and the "aux output" goes to the monitor on the stage. What is shown on the monitor depends on what is shown on the main projector, and the mapping hardly ever changes: if the main projector shows the main output from PC 1, the monitor should show the mirrored laptop output from PC 1. I found out that I could pre-program what goes to the monitor output based on what is on the main output, so I no longer have to spend energy manually selecting what goes to the aux monitor. However, I still have the option to override the pre-programming and select the aux output manually, in those rare cases where the default aux setting can't be used.
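To make the "aux follows program" idea concrete, here is a minimal sketch in Python. The author sets this up with macros in the mixer itself; the input numbering in `AUX_FOLLOW_MAP` and the `atem.set_aux_source` helper below are hypothetical placeholders for whatever control library you use, not real API calls or the actual configuration.

```python
# Sketch: "aux follows program" routing, assuming a hypothetical ATEM helper object.
# The input numbers are examples only, not the author's actual patch.

# program input -> source to send to the stage monitor (aux output)
AUX_FOLLOW_MAP = {
    1: 2,   # PC 1 slides on program  -> PC 1 mirrored laptop screen on aux
    3: 4,   # PC 2 slides on program  -> PC 2 mirrored laptop screen on aux
    5: 6,   # Mac Keynote on program  -> Mac presenter display on aux
    7: 7,   # video player on program -> mirror it on the stage monitor
}

def on_program_change(atem, new_program_input, manual_override=None):
    """Called whenever the program bus changes source."""
    if manual_override is not None:
        atem.set_aux_source(manual_override)      # hypothetical call
        return
    aux_source = AUX_FOLLOW_MAP.get(new_program_input, new_program_input)
    atem.set_aux_source(aux_source)               # hypothetical call
```

The manual override corresponds to the rare cases mentioned above where the default aux setting can't be used.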
In the past I have always used Qlab on a Mac for external video playback. In my current setup I have replaced this with a dedicated video player, and I believe this solution is better. The video player (a Hyperdeck Studio Mini) and the video mixer (an ATEM Television Studio) are controlled from a laptop over an Ethernet network. With a program called JustMacros I can make these boxes talk to each other and write custom scripts to control their behaviour. A macro is a command that can perform many operations with a single button click. Now, when I put the Hyperdeck player on the main output, it automatically starts playing. When the video finishes while it is still projected on the main screen, the picture automatically cuts back to whatever is on the preview bus, and once the video is off the screen, the player moves to the beginning of the next clip in the playlist. With this I get "broadcast-style" video playback with no delays or black frames. If, for example, I want to show an external video in the middle of a Powerpoint presentation, I just have to push the cut button once, and when the video is finished we are automatically back in the presentation.
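For a flavour of how this kind of automation can work, here is a minimal Python sketch that talks to a Hyperdeck over its text-based Ethernet protocol on TCP port 9993. The author does this with JustMacros scripts; the IP address, the polling loop and the `cut_back_to_preview` callback below are hypothetical simplifications, not the actual macros.

```python
import socket
import time

HYPERDECK_IP = "192.168.10.50"   # example address, not the author's network
HYPERDECK_PORT = 9993            # HyperDeck Ethernet protocol port

def send_command(sock, command):
    """Send one HyperDeck protocol command and return the raw reply."""
    sock.sendall((command + "\n").encode("ascii"))
    time.sleep(0.1)
    return sock.recv(4096).decode("ascii", errors="replace")

def play_and_cut_back(cut_back_to_preview):
    """Start playback, then cut away once the clip has finished.

    `cut_back_to_preview` is a hypothetical callback that tells the
    ATEM to take whatever is on the preview bus back to program.
    """
    with socket.create_connection((HYPERDECK_IP, HYPERDECK_PORT)) as sock:
        sock.recv(4096)                      # read the connection banner
        send_command(sock, "play")           # start the current clip
        while True:
            reply = send_command(sock, "transport info")
            if "status: play" not in reply:  # clip has reached the end
                break
            time.sleep(0.5)
        cut_back_to_preview()
```

Triggering a routine like this whenever the Hyperdeck channel is taken to program gives the "push cut once and forget it" behaviour described above.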
In order to make the videos ready for the Hyperdeck player, I need to convert them to the right format. Currently I use ProRes 422, 1080p, 60 fps. This process can take some time, but in my opinion it is definitely worth it, and I would do the same even if I were using Qlab for video playback. In fact, the preferred format for Qlab is the same as for the Hyperdeck (ProRes), which means I have a great backup solution if something happens to the Hyperdeck. We normally get videos in all kinds of formats and compression types, and the only way to make sure a video will play back properly is to convert it to a format you have tried and tested.
I use my MacBook and Apple Compressor for converting. The finished videos are copied to memory cards that can be read by both the Hyperdeck and the MacBook.
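Apple Compressor is the tool used here; purely as an illustration of the target format, this is a sketch of an equivalent conversion done with ffmpeg from Python (assuming ffmpeg is installed; the 59.94 fps rate matches the switcher format discussed later in the post).

```python
import subprocess
from pathlib import Path

def convert_to_prores(src: Path, dst_dir: Path) -> Path:
    """Convert a source video to ProRes 422, 1080p, 59.94 fps for the Hyperdeck."""
    dst = dst_dir / (src.stem + ".mov")
    subprocess.run(
        [
            "ffmpeg", "-i", str(src),
            "-c:v", "prores_ks", "-profile:v", "2",   # profile 2 = ProRes 422
            "-pix_fmt", "yuv422p10le",
            "-vf", "scale=1920:1080",
            "-r", "60000/1001",                        # 59.94 fps
            "-c:a", "pcm_s16le",                       # uncompressed audio
            str(dst),
        ],
        check=True,
    )
    return dst
```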
My video mixer is a Blackmagic ATEM Television Studio HD. I have talked to AV technicians who are sceptical of using this mixer for presentation work because it doesn't have scalers on its inputs and outputs; they prefer more expensive models from Roland, Analog Way and Barco. Although working without scalers has a learning curve, I've found it easy to work around the missing feature. The trick is to stick with one video format, set up all the computers in advance, and buy a few external stand-alone scalers to use on external video inputs. You also need one for the output in case the projector doesn't accept your chosen format (although so far I have never run into a projector that doesn't accept 1080p at 59.94 Hz). By following these guidelines I have never run into a situation where I regretted not having scalers on every channel in the mixer.
Audio
I don't use the headphone outputs on the computers; instead I use the embedded audio through HDMI. The Blackmagic switcher has an internal audio mixer where I can set the levels. Using AFV (audio follows video) I can make sure that only the audio from the channel shown on the main projector gets through. This way I can check the levels without having to mute the audio outputs manually.
I've found that using embedded HDMI audio is very reliable: it's easy to set up, the analog output of the Blackmagic sounds great, and by using the same DA converter for all video sources the audio quality is always the same. I don't have to deal with bad sound cards and lousy Windows audio drivers.
The only time I will use the headphone output of a computer is if the person mixing audio wants individual control of one of the sources.
The frame rate discussion
Although I am in Europe, where broadcasters use PAL frame rates, I have chosen NTSC 59.94 Hz as my main frame rate in the Blackmagic switcher. (When set to 59.94, you can still use 60p sources through the mixer; the two rates can be treated as one for most practical purposes in this setup.)
I have several reasons for choosing NTSC:
– I want as high a frame rate as possible, to reduce latency and get video that is as smooth as possible.
– Although we are in PAL-land, there is no guarantee that all video content arrives at a PAL frame rate; you must be prepared for everything. Most videos from YouTube are in 30 fps, for example. 59.94 fps handles 24, 30 and 60 fps material better, and works fine for 25 and 50 (it is better to convert up than down).
– 60 fps is a more common refresh rate for computer screens and projectors; some screens don't support 50 Hz at all.
– The biggest argument against NTSC is that the camera companies I work with will most likely use PAL. But even if I ran at 50p, the chances are high that I would still need a scaler in between, since their camera or switcher may output 25p or 50i. So it's easier to always use a scaler for external sources and stick to that. (By the way, I use a Datavideo DAC-70 for these tasks.)
For a list of our video equipment, follow this link.