User Guide

Description

These layouts have been developed to preserve stereophony and allow audio professionals to mix in a spatial format that carries over all of the mixing knowledge from traditional stereo & surround audio mixing. A notable feature of these formats is that there is no active processing during playback. Studio environments can therefore monitor the mix without needing to compensate for additional active filters, room modeling or reverb processes being added to the mix outside of the audio engineer's control, as can happen in other formats but not with Mach1 Spatial. This also means that, on the integration side, the Mach1 Spatial APIs are extremely lightweight for developers to integrate as they see fit (based on a few rough guidelines), streamlining the mixing and delivery pipeline. Lastly, this means the audio engineer is responsible for what 'spatial audio' means in their mix and has full creative control to make something realistic, define new rules of 'spatial audio', or provide both in the same mix.

This approach is a virtual, algorithmic version of Vector Base Panning (VVBP). Unlike object-oriented audio or ambisonics, which are designed to simulate soundfield reconstruction, VVBP is designed to distribute audio correctly and take full advantage of the phantom sound sources created around the listener; the problem is that, until now, this approach has only been successful with an array of loudspeakers physically recreating the sound. Mach1 Spatial's virtual Vector Base Panning (VVBP) lets users pan directly into their spatial mix without requiring them to recreate an approximation of the spatial audio field during playback, with everything correctly down-mixed to stereo for ideal playback scenarios.
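
As a rough illustration of the playback side (a minimal sketch, not the actual Mach1Decode API), the decode step can be thought of as producing a left/right gain pair per spatial channel for the listener's current orientation, and the stereo output is simply the weighted sum of the channels:

```python
# Minimal sketch of the idea: per-channel left/right gains (which the real
# Mach1Decode API derives from yaw/pitch/roll) weight each spatial channel
# into a plain stereo pair. The gains below are made-up placeholder values.
def downmix_to_stereo(channel_samples, gains):
    """channel_samples: one sample per spatial channel; gains: (gain_l, gain_r) per channel."""
    left = sum(sample * gain_l for sample, (gain_l, _) in zip(channel_samples, gains))
    right = sum(sample * gain_r for sample, (_, gain_r) in zip(channel_samples, gains))
    return left, right

# Example: one frame of a 4-channel mix with illustrative gains for one orientation.
print(downmix_to_stereo([0.5, -0.25, 0.1, 0.0],
                        [(0.7, 0.1), (0.1, 0.7), (0.4, 0.4), (0.2, 0.2)]))
```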

Mach1 Spatial Patented Multichannel Layouts

Mach1 Spatial 4ch (M1Spatial_4)

This layout is designed for a reduced channel count and only supports listener changes on the horizontal plane. Only yaw movements are applied in the stereo down-mix algorithm.

Mach1 Spatial 8ch (M1Spatial_8) [Default layout]

This layout is designed for its ease of understanding and its quality while using only 8 channels of audio to provide 3 degrees of freedom for target listeners. Yaw, pitch and roll movements are all applied in the stereo down-mix algorithm.

Mach1 Spatial 14ch (M1Spatial_14)

This layout is designed to raise the spatial panning resolution further and has been used for proper interoperability with surround channel-bed formats. Yaw, pitch and roll movements are all applied in the stereo down-mix algorithm.

Multichannel Deliverable

Deliverables are flexible, but ideally a deliverable should be a single multichannel file of 4, 8 or 14 channels; no additional metadata is required to run it through the Mach1Decode API during playback to create the exact stereo image. Prepare your multichannel mix, plus an optional head-locked stereo mix, for distribution or layback to a target video via the M1-Transcoder application.

Mach1 Spatial System Tools

Mach1 System Helper (m1-system-helper)

The Mach1 Spatial System allows automatic communication between all plugins and standalone applications. This is done via a background service, "M1-System-Helper", which sets up a temporary and lightweight network between all the plugins and apps during runtime. The following ports are used behind the scenes on launch of plugins and standalone applications:

User Defined Ports: 6345, 6346
These ports are defined via the `settings.json` file in the common User Data folder of your OS within a `Mach1` folder.

Ports: 9001-9004 : 9100 : 10000-11000 : 12345
Ports are searched and configured on launch for these ranges on the localhost of the user's computer.

Mach1 Orientation Manager (m1-orientationmanager)

The m1-orientationmanager is a background service that allows automatic communication between any plugin or standalone app that supports orientation data.

Mach1 Panner (M1-Panner)

The M1-Panner is modeled on the UI of many common surround panners and is based on a system of divergence rather than an actual 3D space representation. The M1-Panner's main UI window is a top-down view of the mixing environment. The concentric circles around the center give the user references to different divergence amounts. It is safe to assume that the middle circle represents the estimated divergence of common ambisonic encoding, while pushing beyond the second concentric circle allows the user to pan sounds to isolated positions in the mix (allowing endless creative possibilities while still contained in a single spatial audio mix). It is still recommended to automate your volume outside of the M1-Panner plugin to simulate distance and attenuation. The M1-Panner also has a UI slider for altitude, or vertical panning, of that track.

The M1-Panner changes depending on whether it is placed on a mono, stereo or multichannel track (stereo and multichannel modes are still in development). When in a multichannel mode, the M1-Panner operates with the same automation as the mono mode; however, it introduces new functions to rotate and spread the stereo emitters closer to or further from center.

Click the Overlay button to activate a 2D-to-3D panning window that snaps to the Pro Tools native video player and translates your 2D mouse movements into 3D orientation movements. There is also a gain slider, set to +6 dB by default, to compensate for pan law in this format.
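
For reference, converting the gain slider's dB value to the linear gain applied to the signal uses the standard amplitude formula (a quick sketch, not plugin code):

```python
def db_to_linear(db):
    # Standard amplitude conversion: +6 dB roughly doubles the signal.
    return 10.0 ** (db / 20.0)

print(round(db_to_linear(6.0), 3))   # ~1.995, the default pan-law makeup gain
print(round(db_to_linear(0.0), 3))   # 1.0, unity gain
```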

Mach1 Monitor (M1-Monitor)

The M1-Monitor adds a monitoring stage, using the decoding math shared in our SDK, so the user or audio engineer is able to monitor and hear their spatial audio mix during the post-production and mixing process. The M1-Monitor contains sliders for Yaw, Pitch and Roll so the user can reference orientation with the mouse during the mix process.

The M1-Monitor automatically connects with the M1-Player to receive input orientation from the M1-Player's mouse controlled orientation. Both the M1-Monitor and the M1-Player simultaneously receive orientation from any 3rd party IMU or head tracking solution supported by the background service "M1-OrientationManager". The M1-Monitor also sends transport location automatically to the M1-Player.

Mach1 Transcoder (M1-Transcoder)

This standalone application is used to safely apply/encode the user's audio mix to their video content inside the video container. The Transcoder renders the final video needed for delivery on available platforms/apps, and can also down-mix the spatial audio as closely as possible to ambisonics for other deliverables.

Mach1 Player (M1-Player)

The M1-Player gives users an additional way to simulate orientation angles beyond the Monitor. Once launched, the M1-Player links to the Monitor and controls the Monitor's orientation without requiring keyboard focus. Simply drag a video into the M1-Player after launch to load a 360 video.

Hotkey Options
  • 'Z' – Switches between flat mode and spherical 360 mode
  • 'O' – Turns on reference angle overlay image onto the video
  • 'D' – Switches between Stereoscopic and Monoscopic videos
  • 'H' – Hold this key to view M1-Player statistics

Game Engine Deployment

The following section details the current spatial audio implementation for game engines. It covers installation into standard projects, general use and deployment, and the features currently available for implementation.

More information can be found on the Mach1 Spatial YouTube channel.

Unreal Engine

Mach1 Spatial Unreal Engine Plugin

The Mach1 Spatial Plugin installs classes that can be used in Unreal Engine's Editor for deploying different forms of Mach1 Spatial audio formats. The classes call the Mach1Decode library for each target platform/device and expose the following features:

Features
  • SourcePoint (Default Rotator): Allows the deployed audio/mix to rotate to a specific point in relation to the user's position/orientation. This gives positional abilities to the audio without changing anything in the deployed audio/mix (see the sketch after this list).
  • Custom Attenuation Curves: When active, the user is able to attach an attenuation curve to the Actor, relative to the user's position, to simulate any desired distance falloff for the deployed audio/mix. Use in groups to create crossfade scenarios with complete control.
  • SourcePoint Closest Point (Rotator Plane): When active, this assigns the SourcePoint feature to a plane/wall and calculates the user's closest point to that plane/wall to be used as the SourcePoint location per update/tick.
  • Check Yaw/Pitch: Allows the user to ignore certain orientation inputs to that deployed audio/mix.
  • Fade Settings: These actors have incorporated fade settings making it a clean addition to add Fade In or Fade Out settings when needed.
  • Custom Node Activation & Node Settings: These classes/actors can be controlled with Blueprint Nodes however your project may need.
  • Zone Interruption (Mute Inside/Outside): If the camera enters the referenced zone of one of the SpatialSound objects, the output volume for that object is turned to 0.
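
To illustrate what the SourcePoint features compute conceptually, the sketch below derives a yaw/pitch rotation from the listener's position toward a source point. The coordinate conventions here are assumptions for illustration and differ per engine; this is not the plugin's actual code.

```python
import math

def rotator_toward_point(listener_pos, source_pos):
    """Return (yaw, pitch) in degrees pointing from the listener to the source point."""
    dx = source_pos[0] - listener_pos[0]   # assumed: +x = right
    dy = source_pos[1] - listener_pos[1]   # assumed: +y = forward
    dz = source_pos[2] - listener_pos[2]   # assumed: +z = up
    yaw = math.degrees(math.atan2(dx, dy)) % 360.0
    pitch = math.degrees(math.atan2(dz, math.hypot(dx, dy)))
    return yaw, pitch

# A source one unit ahead and one unit to the right sits at 45 degrees of yaw.
print(rotator_toward_point((0.0, 0.0, 0.0), (1.0, 1.0, 0.0)))  # (45.0, 0.0)
```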

Unity

Mach1 Spatial Unity Package

Leverages Unity scripts to call functions from Mach1Decode library per target platform with the following features:

Features
  • SourcePoint (Default Rotator): Allows the deployed audio/mix to rotate to a specific point in relation to the user's position/orientation.
  • Custom Attenuation Curves: When active the user is able to place an attenuation curve on the object, relative to the user's position.
  • SourcePoint Closest Point (Rotator Plane): Assigns the SourcePoint feature to a plane/wall and calculates the user's closest point to it.
  • Check Yaw/Pitch/Roll: Allows the user to ignore certain orientation inputs to that deployed audio/mix.
  • Zone Interruption (Mute Inside/Outside): Controls output volume based on camera position relative to SpatialSound objects.
  • Custom Loading: Control when to asynchronously load Mach1 audio/mixes or whether to just load on start.

M1 Spatial SDK Integration

Please contact info@mach1.tech if you are interested in importing the Mach1Decode API or Mach1Transcode API to your app to allow support for playing back Mach1 Spatial mixes as well as transcode and convert all spatial/surround audio formats from/to each other.

More information about the APIs can be found in our Developer Documentation

Mach1 Spatial System Hacker Help

As multichannel and spatial audio mixing becomes more widely needed, we want to enable content creators to control our plugins with any device they create, or any new device on the market, to suit their workflow and pipeline needs.

UDP/OSC Ports

Control the M1-Monitor or M1-Player by sending orientation data via OSC messages or raw UDP formatted as follows:

Format
String (address), Float (yaw), Float (pitch), Float (roll)
Remember not to open the device/app that would normally use this port, or you will have a port conflict and see no data transmission.
Port Assignments
  • 9899: BoseAR Devices sending orientation data to M1-Monitor
  • 9901: M1-MNTRCTRL iOS App sending orientation data to M1-Player

Orientation OSC Data

For custom orientation transmission we expect the following Euler YPR angles sent to the address `/orientation` (an example follows the ranges below):

Angle Ranges
  • float [0] = Yaw | 0.0 -> 360.0 | left->right ++
  • float [1] = Pitch | -90.0 -> 90.0 | down->up ++
  • float [2] = Roll | -90.0 -> 90.0 | left-ear-down->right-ear-down ++
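
As an example, the snippet below (a sketch assuming the third-party python-osc package and localhost) sends a single orientation update in that format to the M1-Monitor port listed above:

```python
from pythonosc.udp_client import SimpleUDPClient  # pip install python-osc

client = SimpleUDPClient("127.0.0.1", 9899)            # M1-Monitor orientation port (see above)
client.send_message("/orientation", [90.0, 0.0, 0.0])  # yaw, pitch, roll in degrees
```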

Other UDP/OSC Ports

Port Ranges
  • 10001->10200: M1-Panner instances connecting to m1-system-helper
  • 10201->10300: M1-Monitor instances connecting to m1-system-helper
  • 10301->10400: M1-Player instances connecting to m1-system-helper

Ports in these ranges are searched and configured on launch on the localhost of the user's computer, and working connections are established using unused ports within each range.
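
A minimal sketch of that kind of search (illustrative only, not the m1-system-helper implementation): try to bind each port in the range and use the first one that is free.

```python
import socket

def first_free_port(start, end, host="127.0.0.1"):
    for port in range(start, end + 1):
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as probe:
            try:
                probe.bind((host, port))
                return port          # bind succeeded, so the port is unused
            except OSError:
                continue             # port already in use, try the next one
    raise RuntimeError(f"no free port between {start} and {end}")

print(first_free_port(10001, 10200))
```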

Mach1 Panner

Interface


Description

The Mach1 Panner allows users to encode input audio tracks/channels in their preferred DAW (Digital Audio Workstation) to the Mach1 Spatial format. The process is a simple signal distribution process with no other DSP related effects, allowing complete 1:1 encoding into our Vector Based Panning format—Mach1 Spatial.

Plugin Parameters

General Parameters

Azimuth
Range: 0 → 180 → -180 → 0 (clockwise)
Rotates the input sound source(s) around the central listener location, where 0 is front-facing and 180/-180 is rear-facing.
Diverge
Range: 100 → 0 → -100
Distributes the spread of the input sound source(s) toward or away from center, controlling the divergence of the signal, i.e. how the signal is distributed around the listener. 100 is fully forward on the current Rotation angle, 0 is center panning (mono divergence) and -100 is the inverse of the current Rotation angle (see the sketch after this parameter list).
X
Range: -100 → 100
Moves the input sound source(s) along the up/down axis in the top down view.
Y
Range: -100 → 100
Moves the input sound source(s) along the left/right axis in the top down view.
Z
Range: -100 → 100
Moves the input sound source(s) along the Z axis (up/down elevation).
Gain
Range: -96dB → +24dB
Controls input gain for input sound source(s) before encoding.
Level Meters
Visual indicators showing the output levels of each channel in the selected output Mach1 Spatial configuration.
Overlay
Allows the user to pan to picture by overlaying the reticle on an equirectangular 360 video.
Isotropic
When enabled, the signal uses a more realistic distribution model that distributes the signal equally on the vertical plane. When disabled, the signal will crossfade between the top and bottom channels in a "Periphonic" panner mode.
Equal Power
Maintains consistent power levels no matter what the Diverge parameter is set to. Can only be enabled when Isotropic is also enabled.
Auto Gain
Automatically adjusts the gain of the input signal based on the number of output channels to maintain a consistent level with the input.
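
The relationship between Azimuth/Diverge and the X/Y position on the top-down grid can be pictured with the simple polar-to-Cartesian sketch below (an illustration with assumed sign conventions, not the plugin's internal math):

```python
import math

def azimuth_diverge_to_xy(azimuth_deg, diverge):
    """Map Azimuth (0 = front, clockwise) and Diverge (-100..100) to a grid position."""
    theta = math.radians(azimuth_deg)
    x = diverge * math.sin(theta)   # assumed: +x toward the right of the grid
    y = diverge * math.cos(theta)   # assumed: +y toward the front of the grid
    return x, y

print(azimuth_diverge_to_xy(0.0, 100.0))    # fully front: (0.0, 100.0)
print(azimuth_diverge_to_xy(90.0, 100.0))   # fully right: (100.0, ~0.0)
print(azimuth_diverge_to_xy(0.0, -100.0))   # negative diverge inverts the azimuth direction
```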

Input and Output Options

Input Selection
Configure the input channel configuration to encode from. The plugin can handle many variations of input channel configurations as long as the plugin is instantiated with the correct number of channels. It is advised to put M1-Panner and M1-Monitor on busses that can supply more channels than needed to allow flexibility while mixing.
Output Selection
Select the output channel configuration to encode to. Options include Mach1 Spatial (4ch), Mach1 Spatial (8ch) and Mach1 Spatial (14ch).
Lock
Locks the current output setting to prevent changes from the M1-Monitor. If you change the (input) Decode configuration, the M1-Monitor will automatically change the output setting on all instances of M1-Panner unless the Lock parameter is enabled.

Stereo Parameters

M1-Panner mode for stereo input channels/tracks of audio (L & R)

Stereo Spread [S SPREAD]
Adds/reduces the distance between the Left & Right input reticles (see the sketch after this list).
Stereo Orbit Angle [S ROTATE]
Rotates Left & Right reticles around center reticle axis for more creative control during mixing.
Stereo Auto Orbit [AUTO ORBIT]
When active, the Stereo Orbit Angle will automatically adjust to point toward the center of the M1-Panner grid.
Stereo Balance [S PAN]
Input balance/pan between Left & Right input channels.
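
A rough sketch of how the stereo reticles could be placed from these parameters; the geometry and sign conventions are assumptions for illustration, not the plugin's internal code:

```python
import math

def stereo_reticles(center_x, center_y, spread, orbit_deg):
    """Place the L/R reticles `spread` apart, rotated by the orbit angle around the centre."""
    theta = math.radians(orbit_deg)
    dx, dy = math.cos(theta) * spread * 0.5, math.sin(theta) * spread * 0.5
    left = (center_x - dx, center_y - dy)
    right = (center_x + dx, center_y + dy)
    return left, right

print(stereo_reticles(0.0, 50.0, 40.0, 0.0))   # L/R sit 40 units apart around the centre reticle
```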

Notes

Pro Tools HD Automation: Pro Tools HD by default interpolates automation and smooths all values to two decimal places (for example 20.04 instead of 20.03997). Because of this, continuous performance knobs/sliders, specifically the Rotation parameter, are prone to a GUI bug if you record automation for this parameter in full rotations and play it back from the host (Pro Tools HD). The issue can occur when recorded automation crosses 180, because the value attempts to reset to -180 (or to 180 if rotating counter-clockwise).
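
If you need to repair such automation after the fact, the jump can be removed by unwrapping the recorded angle so it no longer snaps across the ±180 boundary. A small illustrative sketch operating on exported automation values (not plugin code):

```python
def unwrap_degrees(values):
    """Re-express a recorded rotation curve so it changes continuously
    instead of snapping from +180 back to -180 (or vice versa)."""
    unwrapped = [values[0]]
    for value in values[1:]:
        delta = value - unwrapped[-1]
        if delta > 180.0:        # the value wrapped around the +/-180 boundary
            delta -= 360.0
        elif delta < -180.0:
            delta += 360.0
        unwrapped.append(unwrapped[-1] + delta)
    return unwrapped

print(unwrap_degrees([170.0, 179.0, -179.0, -170.0]))  # [170.0, 179.0, 181.0, 190.0]
```
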
Automation Examples

Simply un-ramp the automation to further clean up playback of a fully rotating source:

Zoomed Out View

Zoomed In View (Bad Example)

Zoomed In View

Zoomed In View (Good Example)

Mach1 Monitor

Interface


Description

Decoding Explained

The Mach1 Monitor allows users to preview and monitor the mix as though they were head tracking in the target device/app/project. The audio output from the channel containing the Mach1 Monitor is not intended for exporting (unless using StereoSafe mode) and is only intended to aid the process of mixing and mastering your spatial audio mix. This process takes the Mach1 Spatial multichannel mix and down-mixes it with our Mach1 Spatial VVBP algorithm to output the correct stereo image based on your simulated orientation angles (Yaw, Pitch, Roll).

Plugin Parameters

The following explains all the current parameters and decoding modes of the Mach1 Monitor in your DAW/editor.

General Parameters

Yaw
Range: 0 → 360 (clockwise)
Allows users to simulate head-tracking on the yaw axis (rotating head horizontally)
Pitch
Range: -90 → 90 (bottom to top)
Allows users to simulate head-tracking on the pitch axis (nodding head up/down)
Roll
Range: -90 → 90 (top of head to left → top of head to right)
Allows users to simulate head-tracking on the roll axis (tilting head toward a shoulder)

Decode Selection

The decode selection determines how your spatial mix will be monitored and affects the available panning options in connected Mach1 Panners:

Mach1 Horizon (4ch)
A 4-channel horizontal-only configuration. Limits spatial movement to the horizontal plane. When selected, connected Mach1 Panners will hide Z-axis controls and optimize for horizontal-only movement.
Mach1 Spatial (8ch)
The standard 8-channel configuration with full 3D spatial audio support. Enables complete spatial control including height information. When selected, connected Mach1 Panners will show full 3D controls including Z-axis movement.
Mach1 Spatial (14ch)
The 14-channel configuration with full 3D spatial audio support. Enables complete spatial control including height information. When selected, connected Mach1 Panners will show full 3D controls including Z-axis movement.

Settings

Monitor Modes

Allows different decoding modes for testing your spatial mix:

Mach1 Spatial Mode
The default mode for full Mach1Spatial mixing support and full Yaw/Pitch/Roll controls of the M1-Monitor. This mode provides accurate spatial monitoring of your mix as it would be heard in the final playback environment.
StereoSafe Mode
When this mode is enabled, all Panners process audio with mono output so that the user can quickly export a stereo version that is spatially safe in all directions for 360 audio/video players that do not yet support spatial audio. The Diverge values from the mix are instead used to correct the gain ratio between all spatial components so that the mix is roughly retained, avoiding full re-mix scenarios.
Front/Back FoldDown
When enabled, this mode sums the Front/Back/Top/Bottom left channels together and the Front/Back/Top/Bottom right channels together to allow the user to quickly reference any front-to-rear differences during the mix (see the sketch after the note below). This is particularly useful for checking balance and consistency in your spatial mix.
Note: Changing monitor modes will affect how you hear your mix but does not modify the original spatial audio data unless you're recording the monitor output.
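
A minimal sketch of the Front/Back FoldDown summation described above, assuming the mix is held as named mono buffers; the channel labels below are illustrative, not the official Mach1 Spatial channel ordering:

```python
import numpy as np

def front_back_folddown(channels):
    """Sum every *-left buffer into L and every *-right buffer into R."""
    left = sum(buf for name, buf in channels.items() if name.endswith("L"))
    right = sum(buf for name, buf in channels.items() if name.endswith("R"))
    return left, right

# Hypothetical 8-channel layout labels (Front/Back x Top/Bottom x Left/Right):
mix = {name: np.zeros(512) for name in
       ["FTL", "FTR", "FBL", "FBR", "BTL", "BTR", "BBL", "BBR"]}
fold_l, fold_r = front_back_folddown(mix)
```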

Timecode Offset

The timecode offset feature helps synchronize the Mach1 Player with your DAW timeline:

Offset Configuration
Allows you to adjust the timing offset between your DAW and the Mach1 Player video playback (a timecode conversion sketch follows this list). This is crucial when:
  • Your DAW session doesn't start at 00:00:00:00
  • Your video file has a different start time than your DAW session
  • You need to compensate for any system latency
Using Timecode Offset
  • Enter the timecode where your video should start in your DAW
  • The Mach1 Player will automatically adjust its playback to match this position
  • The offset remains active until changed, ensuring consistent sync between your DAW and video
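
For reference, the offset between two timecodes can be computed as below (a sketch assuming non-drop-frame HH:MM:SS:FF timecode and a known frame rate; it does not reflect the plugin's internal handling):

```python
def timecode_to_seconds(tc, fps=24):
    """Convert non-drop-frame 'HH:MM:SS:FF' timecode to seconds."""
    hours, minutes, seconds, frames = (int(part) for part in tc.split(":"))
    return hours * 3600 + minutes * 60 + seconds + frames / fps

# Video starts 2 seconds after the session start timecode:
offset = timecode_to_seconds("01:00:02:00") - timecode_to_seconds("01:00:00:00")
print(offset)  # 2.0
```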

Mach1 Orientation Manager

Description

The Mach1 Orientation Manager is an external orientation device manager and utility designed for aggregating different headtracking methods. It functions as a central service that connects, manages, and routes orientation data from various devices to any app that has the client integration, such as the Mach1 Spatial System plugins and apps. By supporting multiple hardware connections and data transmission protocols, it enables seamless integration of orientation tracking in spatial audio workflows.

This component is also designed so that anyone can make a PR to add support for a new device or protocol via the GitHub repo:
- Codebase
- Issue Board - Project Board

System Integration
The Orientation Manager integrates with various parts of the Mach1 Spatial System:
  • Background Service: Runs as a system service or LaunchAgent
  • OSC Client: UI application for monitoring and configuration
  • Mach1 Monitor: For real-time spatial audio monitoring
  • Mach1 Player: For video monitoring with head tracking

Interface

Mach1 Orientation Client Interface

Parameters

The Mach1 Orientation Manager OSC Client interface contains several key components for controlling and monitoring orientation data:

Orientation Manager Before Device Selection

Initial client state before selecting any device

Device Selection Menu
Select from various supported orientation tracking devices

The menu will display all currently detected compatible devices. If your device isn't listed, check its connection and ensure drivers are properly installed.

Orientation Data Display
Real-time visualization of orientation data from the connected device:
  • Yaw: Horizontal rotation (left/right)
  • Pitch: Vertical rotation (up/down)
  • Roll: Lateral rotation (tilt)

This section displays the current orientation values being received from the connected device and sent to other applications. The data is processed by the background service and made available through this client interface.

Each of these values is also a button to invert (-) the polarity of that axis, depending on how the device is mounted or being used.

Recenter Orientation
Reset the current orientation to a neutral position:
  • Sets the current device position as the new zero point
  • Helps eliminate drift in orientation tracking
  • Essential for accurate spatial audio monitoring

To ensure accurate spatial orientation, press this button when you are in your desired reference position. This calibrates the system to use your current orientation as the neutral starting point.

Disconnect Device
Safely disconnect the current device from the Orientation Manager:
  • Properly closes the connection to the selected device
  • Stops the data transmission to the OSC port

Device Specific Parameters

OSC

Input OSC Address
Displays the OSC address for receiving orientation data from another app or device to the Orientation Manager, and allows you to change it for more customization.
Input OSC Port
Configure the OSC port settings for communication:
  • Set the port number for receiving OSC data
  • Must match the outgoing port on any sending application

The Orientation Manager uses OSC (Open Sound Control) protocol to send orientation data to other applications. This setting allows you to specify which port should receive the orientation data.

Supperware IMU

Chirality
Indicate whether the Supperware device's USB port is on the left or right side when mounted on the head. This is important for determining the correct orientation of the device.

Hardware

Orientation Manager Device Selection

Device selection options in the Orientation Manager

Connection Types
The Orientation Manager supports multiple data transmission methods:
  • Serial: Using RS232 protocol for wired connections
  • BLE: Bluetooth Low Energy for wireless devices
  • OSC: Open Sound Control for network-based data
  • Camera: Vision-based tracking (work in progress)
  • Emulator: For testing and debugging
Supported Hardware
The following orientation tracking devices are supported:
  • Supperware IMU: High-precision inertial measurement units
  • MetaWear/mBientLab IMUs: Bluetooth-enabled sensors
  • Waves Nx Tracker: Headphone-mounted motion tracker
  • M1 IMU: Mach1's proprietary orientation sensor
  • WitMotion IMUs: Cost-effective motion sensors (in development)
  • Custom OSC Input: Any device or app that sends OSC-formatted orientation data

Mach1 Player

The Mach1 Player is designed for playback of spatial audio content created with the Mach1 Spatial System.

Overview

Mach1 Player provides a dedicated playback environment for spatial audio content, allowing real-time head tracking and orientation control for accurate spatial audio monitoring.

Playback Options

  • Toggle between flat and spherical 360° mode with the 'Z' key
  • Enable reference angle overlay with the 'O' key
  • Switch between stereoscopic and monoscopic videos with the 'D' key
  • View player statistics by holding the 'H' key

Getting Started

Launch Mach1 Player and use the drag-and-drop interface to load your 360° video content. The player will automatically connect to Mach1 Monitor to provide orientation data and synchronize with your DAW timeline.

Mach1 Transcoder

Interface


Description & User Story

Mach1 Spatial Format Explained

Mach1 Spatial is a VVBP (Virtual Vector Based Panning), or SPS, spatial audio format that gives users complete freedom in their post-production mixing process by not forcing them to use any sonic signal-altering processing. The user is free to apply their own DSP/ASP to their audio during the mixing process, avoiding the proprietary spatial audio DSP algorithms seen in other formats or tools. With this in mind, the user is completely in control of their creative mix process and can make any decision they want, whether it results in realistic spatial fields or more creative spatial sound fields.

Transcoder Explained

Mach1 Spatial, as a virtual vector based format, has the advantage of not using any signal processing effects for pre-rendering or playing back spatial audio. Due to this major advantage, the user is able to leverage the Mach1 Spatial format as a master spatial audio format and use the Transcoder to down-mix to all other spatial or surround formats without introducing any mix-altering effects, guaranteeing the most 1:1 transcoding possible.

Format Conversion (m1-transcode)

The format conversion math applies basic coefficient changes to the channels of audio; the only processes involved are re-ordering channels to the correct output and correctly distributing audio data to the correct output channels. There are no other effects applied to the master mix.
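
Conceptually, that amounts to multiplying the input channels by a matrix of conversion coefficients; the sketch below illustrates the idea with a made-up 4-in/2-out matrix (not a real Mach1Transcode conversion matrix):

```python
import numpy as np

# Rows = output channels, columns = input channels; each coefficient says how
# much of an input channel is distributed into an output channel.
conversion = np.array([
    [0.5, 0.5, 0.0, 0.0],
    [0.0, 0.0, 0.5, 0.5],
])

def convert(input_channels):
    """input_channels shape: (num_inputs, num_samples) -> (num_outputs, num_samples)."""
    return conversion @ input_channels

print(convert(np.ones((4, 4))).shape)  # (2, 4)
```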

For more information: https://dev.mach1.tech/#mach1transcode-api

Headlocked/Static Stereo

Use the Input Static Stereo Mix field to correctly mux a master head-locked stereo track, ideal for adding music or some VO elements (as well as experimental elements) to your spatial mix. These channels will remain intact until the decoding stage, at which point the target app will sum them with the decoded stereo output from the spatial mix (if your target app supports this).

Transcoding Features

Input Options

Audio Input Files
Mach1 Spatial Input
A single multichannel file of 8 channels exported/rendered from your preferred DAW; supports aif or wav files. (The app automatically detects Pro Tools HD multichannel .wav files and reorders the channels correctly from the 7.1 channel order during the transcoding process.)
Mach1 Horizon Pairs Inputs
Drag in the 4 exported stereo files. Add the following to the end of the file names to safely encode:
_000 (front)
_090 (right)
_180 (back)
_270 (left)
The M1-Transcoder app will only display the available/relevant transcoding options for the Mach1 Horizon Pairs input type.
Video Input Files

Supports mp4 & mov video files; all conversions/encodings/transcodings use video pass-through so that nothing is re-encoded and the video streams of the files are not affected.

Output Types

Supported Format & File Types
  • Mach1 Spatial (Audio & Video)
  • Mach1 Horizon (Audio & Video)
  • Mach1 Horizon Pairs (Single Stream) (Audio & Video)
  • Mach1 Horizon Pairs / Quad-Binaural (Multiple Streams) (Audio & Video)
  • First Order Ambisonic ACNSN3D [YouTube] (Audio & Video) [Applies metadata for direct upload to YouTube depending on selected File Type]
  • First Order Ambisonic FuMa (Audio & Video)
  • Second Order Ambisonic ACNSN3D (Audio Only)
  • Second Order Ambisonic FuMa (Audio Only)
  • Third Order Ambisonic ACNSN3D (Audio Only)
  • Third Order Ambisonic FuMa (Audio Only)
  • 5.0 Surround (L,C,R,Ls,Rs) (Audio & Video)
  • 5.1 Surround (L,C,R,Ls,Rs,LFE) (Audio & Video)
  • 5.1 Surround SMPTE (L,R,C,LFE,Ls,Rs) (Audio & Video)
  • 5.1 Surround DTS (L,R,Ls,Rs,C,LFE) (Audio & Video)
  • 5.0.2 Surround (L,C,R,Ls,Rs,Lts,Rts) (Audio Only)
  • 5.1.2 Surround (L,C,R,Ls,Rs,LFE,Lts,Rts) (Audio Only)
  • 5.0.4 Surround (L,C,R,Ls,Rs,FLts,FRts,BLts,BRts) (Audio Only)
  • 5.1.4 Surround (L,C,R,Ls,Rs,LFE,FLts,FRts,BLts,BRts) (Audio Only)
  • 7.0 Surround (L,C,R,Lss,Rss,Lsr,Rsr) (Audio & Video)
  • 7.1 Surround (L,C,R,Lss,Rss,Lsr,Rsr,LFE) (Audio & Video)
  • 7.1 Surround SDDS (L,Lc,C,Rc,R,Ls,Rs,LFE) (Audio & Video)
  • 7.1.2 Surround (L,C,R,Lss,Rss,Lsr,Rsr,LFE,Lts,Rts) (Audio Only)
  • 7.1.4 Surround (L,C,R,Lss,Rss,Lsr,Rsr,LFE,FLts,FRts,BLts,BRts) (Audio Only)
  • FB360/TBE (8ch) (Audio Only)

This list is maintained here: https://dev.mach1.tech/#formats-supported

Output Configuration

The output options are dependent on what is possible per selected format as the list above shows.

Additional Features

Transcode
Runs the transcoding process on the user's input files to produce the target output encoded audio/video (applies metadata for YouTube output if selected).
Auto File Loading
Drag and drop the input audio file(s) onto the top half of the app window and the intended video input onto the bottom half to easily load the input files into the correct fields.
Transcoder Audio Loading Interface

Audio File Loading

Transcoder Video Loading Interface

Video File Loading

These features are available after a successful run of the M1-Transcoder.

Reveal In Finder
Shows the user the location of the output transcoded audio/video file in Finder.