Audio Visual Design Guidelines

Ingest Methods & Best Practice

November 22, 2018 aetm

Ingest covers the audiovisual sources, video encoding, and digital file creation aspects of a lecture capture system.

Video Encoding Hardware and Software

Lecture capture ingest may be achieved by a hardware appliance or by software:

  • Common features of hardware appliances:
    • Digital video inputs (HDMI, DVI, or SDI)
    • Analogue audio inputs (mic or line level)
    • Capture connected USB video and audio devices (webcams, microphones, etc.)
    • Digital video output port
    • Line level output port
    • Ethernet port (video streaming over IP network, upload to shared file system or ftp, third party control)
    • Storage (onboard, SD card, or USB storage media)
    • PiP function (multiple digital video inputs)
  • Common features of recording software:
    • Software running on computer device, or mobile device (tablet or phone)
    • Recording function may be included as a feature of Unified Communications or webinar software
    • Capture desktop (share screen)
    • Capture connected USB video and audio devices (webcams, microphones, etc.)
    • Capture digital video input
    • PiP function

Multi-Stream Recording

Multi-stream recording (aka multi-camera recording) is a lecture recording method whereby multiple individual video sources are captured in real time and saved as a single recording session. During playback, all streams are frame-synchronised and viewers may freely switch between or overlay (PiP) the available video content streams.

Example: A recorded lecture session includes two available video streams: the presenter’s shared screen showing a slide deck, and a camera showing the presenter’s face. Because the slide deck contains the important visual information, the viewer may make a selection in the video playback software to show only the slides and hide the presenter-camera stream.

This feature requires that the ingest hardware or software is compatible with the video content management system.
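Frame synchronisation of multiple streams can be illustrated with a small sketch. This is an assumption about one common approach, not a description of any specific product: each stream carries a wall-clock start timestamp, and the earlier stream is trimmed by a whole number of frames so both begin on the same instant.

```python
from fractions import Fraction

def sync_offset_frames(start_a: float, start_b: float, fps: Fraction) -> int:
    """Number of frames to trim from the earlier-starting stream so that
    both streams begin on the same wall-clock instant.

    start_a, start_b: recording start times in seconds (same clock).
    fps: shared frame rate of both streams.
    A positive result means stream A started earlier and is trimmed.
    """
    return round((start_b - start_a) * float(fps))

# Example: the screen-capture stream starts 0.2 s before the camera
# stream at 25 fps, so 5 frames are trimmed from the screen capture.
offset = sync_offset_frames(0.0, 0.2, Fraction(25))  # -> 5
```

In practice the recording system keeps both streams on a common clock during capture, so playback software only needs per-frame timestamps to switch or overlay streams without drift.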

Encoded Video Best Practice

Video encoding best practice as it relates to lecture capture:

  • Confirm the video resolution, colour bit depth, and framerate match the quality requirements for recorded content
  • Confirm the video encoding data rate and compression settings match the quality requirements for recorded content
  • Calculate the estimated hours of recorded content to be created each week. Confirm the total recorded content will not exceed the ICT architecture’s data storage capacity or network transmission rate as the library of recorded content grows
  • A copy of encoded video content may be archived at the highest possible quality to future-proof the video content library should the ICT architecture be upgraded

Video Capture Methods

Digital video capture best practice:

  • Confirm the camera, or video-outputting device specifications match the quality requirements for recorded content
  • Confirm any video processing or signal extension (e.g. HDBaseT) present in the signal chain will not compress, degrade, or downscale the recorded content
  • Confirm the video resolution, colour bit depth, and framerate match the quality requirements for recorded content
  • High-bandwidth Digital Content Protection (HDCP) will block the digital video output (DisplayPort, DVI, HDMI) of a device if an unauthorised receiving device is detected. Lecture capture and video recording devices are not authorised receivers and cannot record HDCP-protected content
  • It is illegal to use adaptors, converters, or software to bypass or disable HDCP
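When checking whether a signal chain can carry the required video quality without downscaling, a rough uncompressed data-rate calculation is useful. This sketch ignores blanking intervals, so real link bandwidth requirements are somewhat higher:

```python
def raw_video_gbps(width: int, height: int, fps: int,
                   bits_per_pixel: int = 24) -> float:
    """Approximate uncompressed video data rate in gigabits per second.

    bits_per_pixel defaults to 24 (8-bit 4:4:4 colour). This counts
    active pixels only and excludes blanking, audio, and overheads.
    """
    return width * height * fps * bits_per_pixel / 1e9

# Example: 1080p60 at 8-bit 4:4:4 colour is roughly 2.99 Gbps of
# active video, well within typical HDBaseT or HDMI link capacity;
# 4K60 at the same colour depth is roughly four times that.
rate_1080p60 = raw_video_gbps(1920, 1080, 60)
rate_4k60 = raw_video_gbps(3840, 2160, 60)
```

Comparing these figures against each extender’s or processor’s rated bandwidth helps confirm no device in the chain is silently compressing or downscaling the signal.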


Common sources and processing equipment deployed for digital video ingest:

  • Sources:
    • Cameras – pan/tilt/zoom
    • Cameras – fixed shot
    • Overhead document cameras
    • Webcams
    • Mobile device camera and microphone
    • Computer presentation content
    • Media playback devices
    • Video conference
  • Signal processing equipment:
    • Audio visual switcher / matrix switcher (switch between video source inputs)
    • Picture-in-Picture (PiP) / windowing processor (combines multiple video sources into a single video signal)
    • Annotation processor (captures touchscreen or keyboard/mouse activity and renders a digital annotation overlay onto an incoming video signal)
    • Video distribution amplifier (may be used to deploy a redundant video encoding device for critical applications)

Audio Capture Methods

Audio capture best practice:

  • Microphones:
    • When a single person’s voice must be heard clearly and distinctly, use lapel or handheld microphones with cardioid, supercardioid, or hypercardioid capsules. Position a handheld microphone just below the person’s chin, or clip a lapel microphone to the lapel.
    • When sound from the whole room must be captured, whether a group discussion or general room activity, use omnidirectional or multi-array microphones. These may be suspended from the ceiling or placed on a table central to the general sound source area.
    • Where possible, reduce the loudness of any ambient room noise captured by microphones
    • Where possible, reduce acoustic reflections from sound within the room (refer to the chapter on Acoustics)
  • Line level Audio:
    • Ensure recorded audio is free of any hum or noise that may be caused by poor grounding
    • Where possible, use balanced line level for signal transmission
  • Use a noise gate to attenuate ambient room sound when no speech or programme audio is present
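The noise-gate bullet above can be illustrated with a minimal sketch. In practice gating is configured in the audio DSP; the pure-Python version below is an assumption-laden illustration of the principle only, with sample values normalised to the range -1.0 to 1.0:

```python
def noise_gate(samples, threshold=0.02, attenuation=0.0):
    """Very simple per-sample noise gate.

    Samples whose absolute level falls below the threshold are scaled
    by `attenuation` (fully muted by default); louder samples pass
    through unchanged. Real gates add attack/release smoothing so the
    gate does not produce audible clicks as it opens and closes.
    """
    return [s if abs(s) >= threshold else s * attenuation
            for s in samples]

# Quiet ambient samples (below 0.02) are muted; speech-level
# samples pass through unchanged.
gated = noise_gate([0.01, -0.5, 0.005, 0.3])  # -> [0.0, -0.5, 0.0, 0.3]
```

Setting the threshold just above the measured ambient noise floor, and using partial attenuation rather than full muting, keeps the gate from sounding unnatural.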


Common sources and processing equipment deployed for audio capture:

  • Sources:
    • Microphones (wired or wireless)
    • Audio playback devices (PC, media player, et al)
  • Signal processing equipment:
    • Audio Digital Signal Processor (DSP)
    • Audio mixer
    • Audio distribution amplifier
