- Microphones may be fixed or portable (wireless). When deployed thoughtfully they help deliver clean, intelligible speech to the audience and other systems.
- A fixed microphone is usually provided at any lectern and is typically of flexible ‘gooseneck’ construction for ease of adjustment. Microphones employing a cardioid pickup pattern help limit unwanted audio and maintain a good signal to noise ratio. Typical fixed microphones still provide a balanced analogue output for simplicity and compatibility.
- Wireless microphones are used in most theatres and larger flat-floor spaces to provide freedom of movement to presenters. Balanced analogue audio is still a common output, with a network connection increasingly delivering multi-channel audio to and from professional equipment, as well as providing system event monitoring.
- Handheld wireless microphones are preferred by some presenters and generally employ a cardioid pickup pattern
- Lapel microphones leave presenters with both hands free. Most presenters aren’t skilled in microphone technique and may not position the lapel mic well, so an omnidirectional pickup pattern is usually more appropriate.
- Headset microphones position the microphone capsule at a fixed distance from the presenter’s mouth and offer the best chance of clear speech pickup. As the required input gain is lower than for an omnidirectional lapel microphone, the result is typically a higher signal to noise ratio and better gain before feedback.
Some users dislike headsets, and they may not be appropriate for all common learning spaces, but they provide excellent utility in spaces such as wet labs.
- Specialist microphones are increasingly used for conferencing and collaboration systems and may be ceiling, wall or desk mounted.
- Designers should be careful when using boundary microphones. Because they are deployed as a ‘catch-all’ solution, they acquire background noise at a higher level, which can cause unwanted masking in assistive listening systems.
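The inverse-square law behind this caution can be illustrated with a short sketch. The levels used are illustrative assumptions (a talker producing 70 dB SPL measured at 0.3 m, diffuse room noise at 45 dB SPL), not measured values:

```python
import math

def spl_at(spl_ref, ref_dist, dist):
    """Free-field speech level falls ~6 dB per doubling of distance (1/r law)."""
    return spl_ref - 20 * math.log10(dist / ref_dist)

NOISE = 45.0  # assumed diffuse room noise, dB SPL (similar at any mic position)

for name, dist in [("gooseneck at 0.3 m", 0.3), ("boundary mic at 2.0 m", 2.0)]:
    speech = spl_at(70.0, 0.3, dist)  # assumed talker: 70 dB SPL at 0.3 m
    print(f"{name}: speech {speech:.1f} dB SPL, S/N {speech - NOISE:.1f} dB")
```

Under these assumptions the boundary microphone’s signal to noise ratio is roughly 16 dB worse than the close gooseneck’s, which is why its pickup can mask speech in downstream assistive listening feeds.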
- The requirement for discrete, installed source devices in typical learning and meeting spaces has lessened with the increasing reliance on user-provided devices (BYODs) and network-delivered content. Typical provisions include:
- ‘Resident’ PC and provision for connection of BYODs via commonly-supported cable types.
- A network presentation gateway which allows network connection of BYODs to presentation systems
- Dedicated AV decoders for receiving network video streams
- IPTV, MATV or other content appliances
- Audio is almost universally embedded in digital video streams from these devices, but provision for analogue audio input may be appropriate in some spaces. Consider the functionality required, and what analogue inputs need to be afforded (e.g. 3.5mm stereo jack, XLR mic or line input(s) for an event mixer).
- Videoconferencing, capture and collaboration systems are regularly encountered and may deliver two-way audio via analogue, embedded digital or USB.
Mixing, routing and processing
Institutions increasingly rely on the use of digital signal processing (DSP) techniques to handle acquisition, mixing and processing and maintain high signal to noise ratios throughout. Audio DSPs can be tightly integrated with room control systems and lessen the potential for well-intentioned tampering.
Processor size and functionality will be determined by the required number and type of inputs/outputs and the complexity of the planned activities. Modern AV switchers/routers often include an audio DSP which may provide adequate flexibility and functionality.
Whilst DSP resources have traditionally been deployed as part of each local AV system, it is now viable to pool them at a building or campus level. Some organisations will find benefit in deploying a dedicated DSP in a particular space or on a particular floor; others may prefer to leverage a centralised deployment across a building or campus. The appropriate topology should be based on the functionality required by all users, including those programming and supporting the systems.
The selection of audio DSPs within an organisation should consider the needs of the support staff who will manage them remotely. Where practicable, the number of DSP types should be reduced so that appropriate staff can be trained on the relevant configuration and management software. Once an institution has selected its DSP platform, it should consider training key technical staff: many manufacturers provide online training, and those intending to develop configurations and programming may benefit from additional classroom training.
Designers should leverage the diagnostic tools provided in DSPs. Signal flow tracing, test tools, metering and remote audio monitoring allow technical staff to pinpoint specific problem areas, and to deploy and manage incident response more effectively.
For those institutions deploying rooms built upon standardised designs, a common approach is to develop a standard “site file” with a typical layout, naming, I/O configuration and testing tools.
Institutions in Australia and New Zealand benefit from good access to manufacturers and distributors, and may choose to seek their advice on development of these site files. This approach may help ensure a good balance of standardisation and the flexibility needed to cover most scenarios.
Site files should be provided before the build commences, so any issues can be identified prior to final system commissioning by the AV integrator or internal staff.
Audio mixing and processing equipment generally:
- Must be able to be reliably interfaced to the room control system
- Must accept input audio in all formats required in the space, e.g.:
- Line- and microphone level analogue audio with phantom power for condenser mics
- Multi-channel digital audio via a network stream or other industry-standard interface (e.g. USB)
- May include one or more de-embedders to extract audio from an HDMI stream for downstream mixing and processing
- Must provide discrete signals to all output and capture devices; typically analogue audio at line level or digital audio in the institution’s preferred format
- Must include all mixing, routing, dynamics and monitoring functionality to support the planned use cases
- Fixed-architecture DSPs provide defined signal paths and routing with limited processing
- Open-architecture DSPs require skilled specialists to define the internal audio architecture using a library of configurable components. They generally allow the creation of larger, more complex systems

Where video- or teleconferencing is integrated with room AV, Acoustic Echo Cancellation (AEC) must be employed to prevent undesirable effects caused when incoming audio is picked up by microphones and retransmitted.
- AEC may be achieved in hardware (DSP) or software (many ‘soft’ VC codecs), but only one AEC instance should occur. If software AEC cannot be disabled, additional hardware processing may be detrimental
- In large spaces, it may be appropriate to define multiple AEC zones, such that microphones are referenced to their local loudspeakers
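The principle behind AEC can be sketched as an adaptive filter that learns the room’s echo path and subtracts its estimate from the microphone signal. The sketch below uses a normalised LMS (NLMS) update against a synthetic far-end signal and an invented four-tap echo path; production AEC is considerably more sophisticated (double-talk detection, non-linear processing and so on):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic far-end (incoming VC) audio and an invented room echo path.
fs = 8000
far_end = rng.standard_normal(fs)           # 1 s of far-end signal at 8 kHz
echo_path = np.array([0.0, 0.6, 0.3, 0.1])  # hypothetical impulse response
mic = np.convolve(far_end, echo_path)[:fs]  # mic hears only the echo here

# NLMS adaptive filter: estimate the echo path, subtract the estimate.
taps, mu, eps = 8, 0.5, 1e-6
w = np.zeros(taps)        # adaptive filter coefficients
x_buf = np.zeros(taps)    # delay line of recent far-end samples
err = np.zeros(fs)
for n in range(fs):
    x_buf = np.roll(x_buf, 1)
    x_buf[0] = far_end[n]
    echo_est = w @ x_buf
    e = mic[n] - echo_est                            # residual sent far-end
    err[n] = e
    w += (mu / (x_buf @ x_buf + eps)) * e * x_buf    # NLMS update

# After adaptation the residual should sit far below the raw echo level.
raw_db = 10 * np.log10(np.mean(mic[-1000:] ** 2))
res_db = 10 * np.log10(np.mean(err[-1000:] ** 2) + 1e-12)
print(f"echo suppressed by {raw_db - res_db:.1f} dB")
```

Defining multiple AEC zones amounts to running an instance like this per zone, with each zone’s microphones referenced to the far-end signal driving its local loudspeakers.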
In all cases, the configuration and commissioning of audio processing and mixing systems must be undertaken by staff with appropriate direct experience in assembling audio systems and tuning them to the physical environment. PA system commissioning is one area where the requisite skills are largely the same as they were before the advent of digital processing.
Audio systems feed a number of discrete outputs including:
- Speech and programme loudspeaker systems may be discrete or combined as a single system
- Programme systems are typically configured as stereo; however, a mono arrangement is more appropriate for some layouts, and multi-channel systems may be required in those labs and theatres that need discrete audio zones/channels or immersive audio.
- Voice systems are generally mono, but multiple channels may be provided so each can be delayed slightly to improve intelligibility over distance (e.g. at a speed of sound of roughly 343 m/s, a loudspeaker 10 m further from the talker requires around 29 ms of delay).
- Recording and conferencing:
- One or more audio feeds are required to each system for capture of intelligible audio
- Programme capture may be stereo or mono, depending on the capture device
- Speech may feed a separate input or be mixed with programme audio. In the latter case, ensure speech is clearly audible above programme. Ducking can be utilised if required, and should be implemented sensitively to avoid the distracting ‘pumping’ effect caused by too-rapid release.
- Assistive listening systems:
- A mono sum of speech and programme is provided to most assistive listening systems, ensuring speech is clearly audible above programme
- Speech must be derived from those individual microphones actually in use and should specifically exclude audio from room effect/boundary mics, which can dramatically reduce the signal to noise ratio, degrading intelligibility and amenity for hearing-impaired listeners.
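The ducking behaviour described above (a fast attack so programme drops promptly under speech, and a slow release to avoid audible pumping) can be sketched as a simple gain envelope. All time constants and the duck depth here are illustrative assumptions, not a product configuration:

```python
import math

def ducking_gain(speech_active, fs=48000, attack_ms=20, release_ms=800,
                 duck_db=-12.0):
    """Per-sample linear programme gain for a speech-activity mask."""
    atk = math.exp(-1.0 / (fs * attack_ms / 1000))   # fast smoothing coeff
    rel = math.exp(-1.0 / (fs * release_ms / 1000))  # slow smoothing coeff
    target_duck = 10 ** (duck_db / 20)               # -12 dB -> ~0.25 linear
    g, out = 1.0, []
    for active in speech_active:
        target = target_duck if active else 1.0
        coeff = atk if target < g else rel           # duck fast, recover slowly
        g = coeff * g + (1 - coeff) * target
        out.append(g)
    return out

# 0.5 s of speech followed by 0.5 s of silence at 48 kHz
mask = [True] * 24000 + [False] * 24000
gains = ducking_gain(mask)
```

A release several hundred milliseconds long lets programme audio recover gradually between phrases; shortening `release_ms` toward the attack time reproduces the ‘pumping’ the text warns against.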