External Endpoints

GTT allows external tools to interact with devices. The tuning of third-party audio objects can be done directly in GTT or from an external tool.

Requests from external tuning tools come to GTT, and GTT forwards them to the device. To enable this, the GTT process hosts a WCF service endpoint; external tools can connect to that endpoint and use the exposed APIs.

Refer to the contact information in the latest release notes for more details on the “External third party tool” documentation and package.

By default, External Endpoints use port 8080 for communication.

Before proceeding with the following sections, make sure you have received the following three compressed zip files. It is recommended that you use these for integration purposes.

  • HarmanReferenceTool.zip: This archive contains an executable sample tool that you can run to verify the endpoint functionality. Unzip the HarmanReferenceTool.zip file, go to the HarmanReferenceTool\Release\net6.0-windows folder, and locate ExternalTuningTool.exe. Run the exe to open the Harman Reference Tool.
  • ExternalToolCode.zip: This archive contains the Visual Studio solution for the sample tool. You can refer to this code to understand how the endpoint is accessed. Unzip the ExternalToolCode.zip file and locate the solution under ExternalToolCode\ExternalTool.
  • WcfServiceProxyLib.zip: This archive contains the proxy library DLL that should be referenced for integration with the GTT endpoint.

The “WcfServiceProxy.dll” is a .NET DLL that implements the client code for the endpoint hosted in GTT.


Setup

GTT needs a minimum setup for external endpoints to function.

  1. Right-click on GTT launcher and click on the “Run as administrator” option.

    For the external endpoint feature to work correctly, it is necessary to run GTT as an administrator.

  2. GTT should have an open project. Only external audio objects can be accessed from external tools.
    Accordingly, the project must contain at least one external audio object within the signal flow. An external audio object is defined as an object with Class ID between 9000 and 9999.
  3. GTT should be connected to the device. The device can be a virtual device or a physical board.
  4. Click on the Start/Stop button to start the External Endpoint. The same button works as a toggle switch to start and stop the endpoint.

    A license is required to use this feature. Contact the solution management team to enable the feature.

  5. Once the endpoint is hosted, third-party applications can use the proxy DLL or write their own proxy to access the WCF endpoint. For more details about the WCF proxy, refer to “About WcfServiceProxy.dll” in the GTT Third Party Tool Integration User Guide.

Supported Features

Sending and Receiving Tuning Data

To support sending and receiving tuning data, the following methods are exposed.

  • GetExternalAudioObjects: This method returns all the third-party audio objects in the device.
  • SendTuningDataAsync: This method is used to send tuning data to an audio object.
  • ReceiveTuningDataAsync: This method is used to receive tuning data from an audio object.

Sending and Receiving Control Data

To support sending and receiving control data, the following methods are exposed.

  • SendControlDataAsync: This method should be used to send control data by specifying the control ID and the control data.

    The control data supports 16.16 format.

  • ReceiveControlDataAsync: This method can be used to retrieve control data by providing the control ID of the control element to be read back.
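The 16.16 format mentioned above is a signed 32-bit fixed-point representation: 16 integer bits followed by 16 fractional bits. A minimal sketch of the encoding (illustrative Python; the byte order of the actual `byte[]` payload is defined by GTT and not shown here):

```python
def to_16_16(value: float) -> int:
    """Encode a float as a signed 32-bit 16.16 fixed-point value."""
    raw = int(round(value * (1 << 16)))  # scale by 2^16
    return raw & 0xFFFFFFFF              # two's-complement wrap into 32 bits

def from_16_16(raw: int) -> float:
    """Decode a 32-bit 16.16 fixed-point value back to float."""
    if raw & 0x80000000:                 # sign bit set -> negative value
        raw -= 1 << 32
    return raw / (1 << 16)

# 1.5 in 16.16 is 0x00018000 (integer part 1, fraction 0.5)
assert to_16_16(1.5) == 0x00018000
assert from_16_16(to_16_16(-2.25)) == -2.25
```

The resulting 32-bit value would then be packed into the data payload passed to SendControlDataAsync.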

Streaming Methods

GTT also supports streaming with the following methods.

  • EstablishSocketConnection: This method must be called first for streaming to work. The third-party tool should create a socket listener and then call this method with the port number on which it is listening; GTT will connect to that port over a socket connection.
  • DisconnectSocketConnection: This method is used to unsubscribe all the subscriptions and close the socket connection.
  • SubscribeForStreamDataAsync: This method is used to subscribe for stream data for an audio object.
  • UnSubscribeForStreamData: This method is used to stop streaming and unsubscribe from the stream data.
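The streaming handshake described above (the tool listens, then GTT connects) can be sketched as follows. This is an illustrative Python sketch: `fake_gtt` is a stand-in for GTT itself, and the real stream payload format is defined by GTT, not shown here.

```python
import socket
import threading

# 1. The external tool listens on a port of its choosing.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))   # port 0 = let the OS pick a free port
listener.listen(1)
port = listener.getsockname()[1]

# 2. The tool would now call EstablishSocketConnection(port) so that GTT
#    connects to this port. Here a stub thread stands in for GTT.
def fake_gtt():
    with socket.create_connection(("127.0.0.1", port)) as s:
        s.sendall(b"stream-data")  # stand-in for the real stream payload

t = threading.Thread(target=fake_gtt)
t.start()

# 3. The tool accepts the incoming connection and reads stream data.
conn, _ = listener.accept()
data = conn.recv(1024)
t.join()
conn.close()
listener.close()
assert data == b"stream-data"
```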

To know more about all the API methods described above, refer to the API Reference section in the GTT Third Party Tool Integration User Guide. 

External Tool Interaction

The following are the steps to integrate with GTT.

By default, GTT hosts the WCF endpoint at http://localhost:8080/XtpHandlerService.

Steps to get started with the WCF service.

  1. Discover the service using this URL with a known tool such as the Visual Studio service reference tool or any third-party tool.
  2. Create a service reference for the service.
  3. Using the service reference, call the APIs for the different operations.

Alternatively, developers can follow the “Third Party Tool Tuning Sequence Workflow” and “Third Party Tool Streaming Sequence Workflow” to write code that consumes the endpoint hosted by GTT using the proxy library (WcfServiceProxy.dll) shared and explained in the GTT Third Party Tool Integration User Guide.

API References

GTT endpoint API definitions and parameter details are provided in the GTT Third Party Tool Integration User Guide.

API: GetExternalAudioObjects

API Function:

ExternalAudioObjectResponse GetExternalAudioObjects();

Description: This function will return all the audio objects that are used in the currently open signal flow in GTT. It returns only those audio objects that have a Class ID between 9000 and 9999, that is, any audio objects listed in the external category.

API: SendTuningDataAsync

API Function:

Task SendTuningDataAsync(ExternalAudioObject audioObject, int subBlock, int offset, byte[] data);

Description: This function will apply the data payload to the ExternalAudioObject passed into the function.

API: ReceiveTuningDataAsync

API Function:

Task<XtpEndpointResponse> ReceiveTuningDataAsync(ExternalAudioObject audioObject, int subBlock, int offset, byte[] size);

Description: This function will retrieve tuning data from the ExternalAudioObject.

API: SendControlDataAsync

API Function: 

XtpEndpointResponse SendControlDataAsync(int controlId, byte[] data);

Description: This function will send control data to the control id mentioned in the function.

API: ReceiveControlDataAsync

API Function:

XtpEndpointResponse ReceiveControlDataAsync(int controlId, byte[] data);

Description: This function will get the control data from the control id that is passed into the function.

API: EstablishSocketConnection

API Function: 

XtpEndpointSocketResponse EstablishSocketConnection(int port);

Description: This function will instruct GTT to connect to the socket listening at the port passed as a parameter. The integrating application creates a socket connection, listens on a port, and sends this port number to GTT so that GTT can establish a connection and send stream data through the socket.

API: DisconnectSocketConnection

API Function: 

XtpEndpointSocketResponse DisconnectSocketConnection(int port);

Description: This function will instruct GTT to disconnect the previously established connection. The integrating application will close the socket connection.

API: SubscribeForStreamDataAsync

API Function:

Task SubscribeForStreamDataAsync(ExternalAudioObject audioObject, int subBlock, int messagesPerSecond, bool beforeCalc);

Description: This function will create a subscription for data streaming of a particular state variable of the audio object at the given subblock/offset.

This function can be used to subscribe to the streaming of particular data from an audio object. Parameters include the audio object and its subblock. There are options to set the number of messages to be streamed per second and whether the streaming data is to be retrieved before or after calc.

All of this needs to be supported by the audio object. Once the subscription is complete, the subscription ID and status are returned to the caller.

API: UnSubscribeForStreamData

API Function: 

SubscriptionResponse UnSubscribeForStreamData(Guid subscriptionId);

Description: This function will terminate the subscription that is currently running. GTT will stop the subscription that was started with the subscriptionId passed as a parameter.

Use Cases and Workarounds

Use Case 1: While starting External Endpoints from GTT, you get a notification that the port is already in use.
In this case you need to perform configuration in GTT as well as in the ExternalTuningTool.
Close the GTT application and perform the following configurations:

By default the key does not exist in the config file. When configuring the config file, you need to add the key and the required port number.

GTT configuration settings:

  1. Open the config file ‘GlobalTuningTool.exe.Config‘ from the install path ‘..\Harman\HarmanAudioworX\tools\GTT’.
  2. Under the “appSettings” section, add the key ‘HarmanExternalEndPointPort‘ and provide any value in the range 8081 to 65535.
    This key and value will be used to host the service on the provided port for communication.
  3. Launch the GTT application.

ExternalTuningTool configuration settings:

  1. Open the config file ‘ExternalTuningTool.dll.config‘ from the path ‘…\ExternalTool\ExternalTuningTool\bin\Debug\net6.0-windows’ or from ‘…\Release\net6.0-windows’.
  2. Under the “configuration” section, add an “appSettings” section, add the key ‘HarmanExternalEndPointPort‘, and provide any value in the range 8081 to 65535.
    This key and value will be used for communication.

The key name must be ‘HarmanExternalEndPointPort’ and the value must be the same in the GTT config and the ExternalTuningTool config.
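For example, the appSettings entry might look like this in both config files (port 8085 is an arbitrary value chosen from the allowed 8081 to 65535 range):

```xml
<configuration>
  <appSettings>
    <!-- Must use the same key name and value in both
         GlobalTuningTool.exe.Config and ExternalTuningTool.dll.config -->
    <add key="HarmanExternalEndPointPort" value="8085" />
  </appSettings>
</configuration>
```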

Core Objects Toolbox

The Toolbox contains the core objects that were retrieved from the xAF DLL. The objects that can be used within the core to create the device signal flow are called core objects. Each core object has its own purpose and solves parametric issues that would otherwise block routing within the core.

Core Objects are classes that are part of the Audio Core (virtual core) class and operate at a higher level than audio objects. The audio processing class itself is a core object. The relationship between core objects and Audio core is similar to that of audio objects and the Audio Processing class.

The execution order (or index) of the core object is displayed by Core Object Id. Routing determines the order in which core objects are executed within a core. The core objects that are connected to the core input will be executed first, and the core objects that are connected to the root object after that will be given the next execution order.
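The execution-order rule above can be sketched as a topological sort over the routing graph (illustrative Python with hypothetical names; feedback loops, which are treated separately in the Feedback Loop section, are not considered here):

```python
from collections import deque

def execution_order(objects, routes):
    """Order core objects so each runs after all objects feeding it.

    objects: list of core-object names; routes: (src, dst) connections.
    Objects fed only by the core input have no incoming routes, so they
    are executed first, matching the rule described above.
    """
    incoming = {o: 0 for o in objects}
    outgoing = {o: [] for o in objects}
    for src, dst in routes:
        outgoing[src].append(dst)
        incoming[dst] += 1
    ready = deque(o for o in objects if incoming[o] == 0)
    order = []
    while ready:
        obj = ready.popleft()
        order.append(obj)
        for nxt in outgoing[obj]:
            incoming[nxt] -= 1
            if incoming[nxt] == 0:
                ready.append(nxt)
    return order

# A buffer feeds the xAF instance, which feeds a splitter.
assert execution_order(
    ["splitter", "xaf_instance", "buffer"],
    [("buffer", "xaf_instance"), ("xaf_instance", "splitter")],
) == ["buffer", "xaf_instance", "splitter"]
```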

The device identification feature is enabled for audio libraries version 13 and higher.

Xaf Instance

The Xaf Instance is the core object inside which the signal flow for that instance can be created.

  • Core Object Id (execution order of the core object within the core) and Instance Id (index of the xAF instance within the core, based on execution order) are displayed as read-only fields.
  • The sample rate and block length of the instance will control signal flow within the instance. You can change the sample rate and block length of the instance in the properties section.

Further information on signal flow creation is available in the GTT Signal Flow Designer guide.

Buffer

The Buffer core object is used to convert the input block length into the required output block length. The buffer core object has an equal number of input and output channels. It can be used as a pass-through core object or, as its name suggests, to buffer samples from the input to the output. The object does not change the sample rate (it is the same at the input and the output).

If you want to connect two core objects with different block lengths, you can use a buffer core object. In that case, the input block length will be the Block Length of the first core object, and the output block length should be the Block Length of the other core object.

It can be configured as follows:

  • If the input block length is equal to the output block length, it behaves as a pass-through object (so you could have an audio core with a buffer object connecting the core input to the output)
  • Input and output block lengths must be integer multiples of each other
  • When input and output block lengths are not equal, the object handles taking in input at a lower block length and outputting it at a higher one, and vice versa. For example, it facilitates the connection of an object at block length 32 to an object at block length 64

Introducing this object into your signal flow for any case but pass through WILL result in latency at the output.
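The block-length constraint above can be expressed as a small check (illustrative Python; `valid_buffer_config` is a hypothetical helper, not a GTT API):

```python
def valid_buffer_config(in_block: int, out_block: int) -> bool:
    """A buffer can connect two block lengths only if one is an
    integer multiple of the other (equal lengths = pass-through)."""
    bigger, smaller = max(in_block, out_block), min(in_block, out_block)
    return bigger % smaller == 0

assert valid_buffer_config(32, 64)      # 32 -> 64: allowed, adds latency
assert valid_buffer_config(64, 64)      # pass-through, no latency
assert not valid_buffer_config(32, 48)  # not integer multiples: invalid
```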

Splitter

The Splitter core object is used to convert one input to multiple outputs of the same sample rate and block length.

  • This core object always has a single input.
  • To route from any core object to the splitter, the sample rate and block length of the source core object and the splitter core object must match.
  • Number of output channels for the splitter is configurable.

It is not to be confused with the Splitter audio object.

This object operates in parallel to an xAF instance NOT within it.

Merger

The Merger core object is used to merge multiple inputs into a single output of the same block length and sample rate.

  • This core object always has a single output.
  • To route from any core object to the merger, the sample rate and block length of the source core object and the merger core object must match.
  • Number of input channels for the merger is configurable.

It is not to be confused with the Merger audio object.

This object operates in parallel to an xAF instance NOT within it.

Ssrc Iir Int

A Synchronous Sample Rate Converter (SSRC) is used to convert the input sample rate to the required output sample rate.
SSRCs are core objects that can operate within an audio core. Currently there is one implementation of SRCs in Awx.

Two options are provided to convert the sample rate. Both these options are mutually exclusive.

IIR Integer Multiple SSRC

This core object implements a synchronous sample rate converter whose input sample rate / input block length and output sample rate / output block length are integer multiples of each other. This is also an infinite impulse response (IIR) implementation.

The object operates in one of two modes:

  • User Coefficients mode
  • Predefined Coefficients mode

Before we get into the details, there are some common configuration parameters between the two.

  • The input block length needs to be set by the user.
  • The Biquad filter topology. Currently 2 topologies are exposed.
    • Direct Form I
    • Direct Form II

User Coefficients mode: In this mode, the user has to provide the input and output sample rates. Input and output sample rates should not be equal. The Number of Biquads field is read-only.
The user has to import the coefficients by clicking the “Import Co-efficients” button. Based on the number of coefficients in the file, the Number of Biquads is updated.

Validations for User Coefficients mode: The Input and Output sample rates cannot be the same. Validation is shown when the same values are entered.

After adding a new “Ssrc Iir Int” object and selecting “User Coefficients Mode”, if the coefficients are not imported, the following message will be displayed on various operations such as “Save”, “Edit Device”, “Copy Core Objects”, and “Paste Core Objects”. After importing coefficients, the user can perform the required operation.

Predefined Coefficients mode: In this mode, the xAF dll is used to read the input sample rate, output sample rate, and the number of biquads. When a value in the combo box is selected, the xAF dll is also used to fetch the corresponding coefficients.

Biquad coefficients have to be re-imported whenever the mode is switched between Predefined Coefficients mode and User Coefficients mode.

For these pre-defined coefficients, the quality measures are as follows:

  • Signal to noise ratio: 80 dB
  • Total harmonic distortion: 2e-3f
  • Spurious free dynamic Range: 59 dB
  • Total harmonic distortion plus noise: -60 dB
  • Frequency response flatness: 3 dB

Output block length (displayed as a read-only field) = (Output sample rate / Input sample rate) × Input block length.
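As a worked example of the formula above (illustrative Python; `output_block_length` is a hypothetical helper, not a GTT API):

```python
def output_block_length(in_rate: int, out_rate: int, in_block: int) -> int:
    """Output block length = (output sample rate / input sample rate)
    * input block length, per the SSRC formula above."""
    return out_rate * in_block // in_rate

# Upsampling 48 kHz -> 96 kHz with an input block of 32 samples:
assert output_block_length(48000, 96000, 32) == 64
# Downsampling 96 kHz -> 48 kHz with an input block of 64 samples:
assert output_block_length(96000, 48000, 64) == 32
```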

Float to Fixed

The Float to Fixed core object accepts audio buffers that are in floating-point format and outputs buffers that are in fixed-point format (16-bit, 24-bit, 32-bit, etc.).

  • The number of channels is configurable. Number of input channels = number of output channels.
  • The user can configure the scalar value to indicate which fixed-point format is required. This scalar value is multiplied by the floating-point input samples to convert them to fixed point.
    For example, to convert from float to 32-bit fixed point, this scalar value must be:
    ((1 << (32-1)) - 1) = 2,147,483,647
  • To route from any core object to the Float2Fixed object, the sample rate and block length of the source core object and the Float2Fixed core object must match.

Float To Fixed core object is enabled for audio libraries version 16 and greater.
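The scalar arithmetic for both conversion directions (multiplication here, and the reciprocal used by the Fixed to Float object below) can be sketched as follows. Illustrative Python with hypothetical helper names, operating on single samples rather than audio buffers:

```python
def fixed_scalar(bits: int) -> int:
    """Scalar for a given fixed-point width: (1 << (bits-1)) - 1."""
    return (1 << (bits - 1)) - 1

def float_to_fixed(sample: float, bits: int = 32) -> int:
    """Float to Fixed: multiply the float sample by the scalar."""
    return int(round(sample * fixed_scalar(bits)))

def fixed_to_float(sample: int, bits: int = 32) -> float:
    """Fixed to Float: multiply by the reciprocal of the scalar."""
    return sample / fixed_scalar(bits)

# For 32-bit fixed point the scalar is 2,147,483,647.
assert fixed_scalar(32) == 2_147_483_647
assert float_to_fixed(1.0) == 2_147_483_647
# A round trip recovers the original value up to quantization error.
assert abs(fixed_to_float(float_to_fixed(0.5)) - 0.5) < 1e-9
```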

Fixed to Float

The Fixed to Float core object accepts audio buffers that are in fixed-point format (16-bit, 24-bit, 32-bit, etc.) and outputs buffers that are in floating-point format.

  • The number of channels is configurable. Number of input channels = number of output channels.
  • The user can configure the scalar value to suit the fixed-point format of the input samples. The reciprocal of this scalar value is multiplied by the fixed-point input samples to convert them to floating point.
    For example, to convert from 32-bit fixed point to float, this scalar value must be:
    ((1 << (32-1)) - 1) = 2,147,483,647
  • To route from any core object to the Fixed2Float object, the sample rate and block length of the source core object and the Fixed2Float core object must match.

Fixed To Float core object is enabled for audio libraries version 16 and greater.

Nan Detector

The NaN (Not a Number) Detector core object detects NaN in the input samples and informs the platform using an xTP command if a NaN is found. The xTP command reports the core ID, core object instance ID, and channel index, so that the platform can react accordingly by muting or resetting states. The input samples are copied to the output without any other processing. The number of output channels is always the same as the number of input channels.

  • The number of input channels is user configurable, ranges from 1 to 255, and equals the number of output channels.
  • This core object’s block length and sample rate are the same at both the input and output side.
  • Block length is configurable in the range of 4 to 4096 samples.
  • Sample rate is configurable in the range of 8 kHz to 192 kHz.

To route from any core object to the NaN Detector, the sample rate and block length of the source core object and the NaN Detector core object must match.

NaN Detector core object is enabled for audio libraries version 19 and greater.

Core Objects Validation

When GTT is loaded with a version of the xAF library lower than 13 and a user tries to open a device view that contains core objects other than an xAF instance, the following error message is shown.

Aside from the xAF instance, every other core object will be shown in red.

Device Operations

This section explains the device operation features available at the bottom of the Device Designer workspace.

Load Device Config

When there are no core objects in the cores, the Load Device Config option is enabled. The Physical Cores and Virtual Cores (available within a Physical Core) as well as their connection points that were fetched from the target during device discovery are visible after the device has been loaded. Since they are read from the device, the number of virtual cores and their connection points cannot be changed.
You can connect core objects by dragging them onto the virtual cores. Alternatively, you can select the “Load Device Config” option, which will read the target’s device routing information as well as the layout and routing between core objects and display it on the GUI.

In the device view, a property section has been added to display and change the properties of chosen Core or Core Objects. When the virtual core or core object is selected, their respective properties are shown on the screen.

If you don’t clear the device view first, you won’t be able to save or access SFD.

The device identification feature is enabled for audio libraries version 13 and higher.

Connect Blocks

You can auto-connect selected devices, cores, and core objects using Connect Blocks.

Choose at least two cores and click Connect Blocks.

You can auto-connect the following components.

  • Device – Virtual Core(s)
  • Virtual Core(s) – Virtual Core(s)
  • Virtual Core – Core Objects (within selected Virtual Core)
  • Core Object(s) (within same Virtual Core)

Edit Device

In the Device File Editor you can modify the device configuration and then perform the “Save Device Template” or “Update Device” operation.

Send Device Config

You can create your own configuration and write this configuration to the target device. The Send Device Config option sends the device configuration (Core Objects, Device Routing, and Virtual Core routing) to the target device.
Before sending the device configuration, all the input and output connection points of the core objects must be connected. If any of the pins are not connected, an error message is displayed.

The device view data in GTT (number of cores, physical cores, connection points, sample rate, and block length) and the data in the flash file on the target device should match before the device configuration is sent. If the data doesn’t match, the following transmission error is shown.

If any changes are made to the configuration, the Send Device Config operation must be completed before sending the signal flow.

The device identification feature is enabled for audio libraries version 13 and higher.

Feedback Loop

In the device view, feedback connections are allowed. You can connect the output of any core object as input to any other core object.

Core object Id defines the order of execution of core objects. Execution order of any core object depends on the IO dependency. IO side dependency is not considered when there are feedback loops.

Self-loop is not supported as feedback.

In the image, the connection highlighted in green is a feedback connection.

Send Signal Flow

The Send Signal Flow option sends the configuration of the signal flow design to the target device. You can also use this to test how the target device responds to specific test signals. In a test scenario, you can configure specific test signals and send them to the amplifier.

Export

Using the Export option in the device view, you can export the device configuration data and the signal flow design details.

*.Core files are created for each Virtual Core available in the device. One *.route file is created for the device routing data, one *.mcd file is generated for the master control data, and one *.SFD file is generated per instance per core.

  • One core file per virtual core in the signal flow, containing:
    • The core objects within the virtual core.
    • The routing within those objects.
    • The destination of the output pins of the virtual core.
  • One signal flow file per xAF instance. This is the same legacy file.
  • One input device routing file. This basically describes how the device input buffers are connected to the virtual cores and/or the device outputs.

Control IDs

The Control IDs are used to configure Custom Control IDs. You can add, edit, export, and import Custom Control IDs data. For more details, refer to Configure Control IDs.

Device View

The Device View is used to view and modify detailed information and settings for a specific device. Double-click on the desired device from the list to open a device template in the Device View.

Related Topics

Device Object Properties

A device is a combination of four layers: Device Layer, Physical Layer, Virtual Core Layer, and Core Object Layer. When you select any of the layers, you will see the properties of the selected layer on the right side.

Device Layer: Device layer properties include Device Name, Hardware, and Software version.

Device layer properties are not editable.


Physical Core Layer: Physical Core layer properties include Physical Core Name, Physical Core Type, and MIPS.

The “Physical Core Name” property in the Physical Core layer properties can be changed, but the “Physical Core Type” and “MIPS” properties are non-editable.

By default, the “Physical Core Name” property value is the same as the “Physical Core Type”. To change the physical core name, enter a value for “Physical Core Name”; the updated value is reflected in the device view. Then click the save button to save the changes.

You can update the Physical Core Type only from the Device File Editor window.

If you leave the Physical Core Name field empty, GTT will ask you to enter a valid name for “Physical Core Name”.

Virtual Core Layer: Virtual Core layer includes the following properties:

  • Core ID: Displays the core ID.
  • Core Name: Displays the core name.
  • Data Format: Displays the data format type.
  • Task Priority: Displays the task priority value.
  • Queue Size: Size of the message queue.
  • Guard Time: Time to avoid message processing as a percentage of interrupt time.
  • Ramp Time (ms): Duration between two processing states (in ms), when the Core Object processing state is enabled.
  • Core Object Processing State: Enable or disable the processing state for Core objects in the core.
  • Streaming: Enable or disable streaming. Enabling the streaming option allows you to set “State Variables”, “Probe Points “, and “Level Meters”.

Queue Size and Guard Time will be supported only for Devices with audio library version “O” release and above or “M+2” release and above.

Enable Core Object Processing State and Ramp Time will be supported only for Devices with audio library version “O” release and above.

Processing state for Core objects will be applied only if Enable Core Object Processing State was enabled before Send Device Configuration.

Enable Probe Points and the Number of Probe Points will be supported only for Devices with audio library version S release and above.

Core Object Layer: Each core object has different properties. For more details refer to the ToolBox.

Magnifier Options


  1. Fit to Window: Clicking this button changes the current view to the size of your device view window.
  2. Zoom to 100%: Clicking this button returns the view to 100% zoom.
  3. Zoom In: Click this button (+) to zoom in in gradual increments.
  4. Zoom Slider: Slide to the desired zoom percentage.
  5. Zoom Out: Click this button (-) to zoom out in gradual decrements.

Undo and Redo Operation

The undo and redo feature allows you to reverse or redo previous actions.

  • Undo: The undo feature allows you to reverse the previous action by restoring the design state to a previous design state.
  • Redo: The redo feature allows you to re-apply an action that was undone.

Undo/Redo operation is supported for the following actions:

  • Adding/ Removing core objects.
  • Adding, removing, and changing connections.
  • Changing core and core object position.
  • Changing core and core object properties.

The scope of undo/redo is within the selected device.

Undo/redo action will not restore the tuning data state.

This feature is limited to 1000 actions.

When a new manual action (dragging a new object, changing a connection or position, etc.) is performed, all existing redo records are cleared. As a result, it is not possible to redo any previous actions.

Properties Panel


  • Class Name: Displays the type of device. This property is read-only.
  • Audio Library Version: Display the audio library version used in the selected device. You can change the audio library version. On the property panel, click on the audio library version, select the desired audio library version, and click save.

    If a device has a signal flow with audio objects created in an older version, compatible audio objects will upgrade automatically. Non-compatible audio objects in Signal Flow Designer will be highlighted in blue or displayed as a warning in the compiler; these audio objects can be upgraded using the audio object context menu. In the following cases, device association with a DLL cannot be changed.

    • If a device’s signal flow is open.
    • If you have a monitoring window open, such as a streaming or profiler window.

    A change in dll association for one device has no effect on other device instances.

  • Device Id: Enter the ID of the selected device. Make sure the device ID is unique.
  • Node Address: The Node address of the selected device. Each device has a unique node address assigned to it.
  • Name: Name of the device.
  • MaxTuningDataSize: Maximum size of the tuning data.
  • System Context: Set the system context to Init or Runtime.

Memory

The Memory window presents the CPU memory of cores, core objects and audio objects of the device in a single multi-level grid.

Memory profiling data of cores and core objects is fetched from the device (hardware) using xTP Commands, and the memory of each audio object is fetched based on its memory latency configurations.

Overhead memory consumed by the core and instance is calculated and displayed as ‘Framework Memory’. You can optimize the signal flow or adjust latency based on this information.

Apart from memory profiling data, the class size of each audio object is also fetched from the device using xTP Commands and displayed in the Memory window from the X release (24.x.x.xxxx) audio library onwards.

Memory window is only enabled if the device xAF dll version is 18.x.x.xxx or higher.

Before starting the Memory window, the signal flow should be flashed.

If the memory latency configuration is updated, the signal flow should be flashed again and the memory window should be restarted.

Memory profiling data of non-xAF instance core objects (Buffer, Splitter, etc.) is available only from the X release (24.x.x.xxxx) audio library onwards.

Launch Memory Profiling

Steps to launch Memory profiling:

  1. Select the device node and click Memory. This opens the Memory window for the selected device.

When the Memory window opens, it will show a collapsible grid with core, instance, and audio objects.

  • The physical core memory values displayed are the sum of its virtual cores.
  • xTP Commands retrieve virtual core and core object memory from the device.
  • Memory latency configurations are used to fetch audio-object memory.
  • Overhead Memory consumed by the core and instance is calculated and displayed as ‘Framework Memory’.
  • xTP Commands are used to retrieve audio object class size based on the block ID of the audio object.

  • Expand All: Expands all rows of the collapsible grid.
  • Collapse All: Collapses all rows of the collapsible grid.
  • Export to CSV: Click the Export option to export the memory data of the device to a CSV file. The exported file will have a Framework Memory row for each Virtual core, Instance, Audio Object, and Physical core, along with the columns PhysicalCoreName, VirtualCoreName, CoreObjectName, ObjectType, ObjectName, BlockId, AO Class Size, and Level1 to Level16, as per the below image.
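
For reference, the exported column layout described above can be assembled as a header row. The sketch below only restates the column names from the text; the exact file produced by GTT may differ.

```python
import csv
import io

# Column layout of the memory CSV export described above;
# the exact export produced by GTT may differ.
columns = (
    ["PhysicalCoreName", "VirtualCoreName", "CoreObjectName",
     "ObjectType", "ObjectName", "BlockId", "AO Class Size"]
    + [f"Level{i}" for i in range(1, 17)]  # Level1 .. Level16
)

# Render the header row as it would appear in the CSV file.
buf = io.StringIO()
csv.writer(buf).writerow(columns)
header_line = buf.getvalue().strip()
```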

MIPS

The MIPS window presents the CPU load of cores, core objects and audio objects of the connected device.

MIPS profiling data is fetched from the device (hardware) using xTP Commands and the user can optimize signal flow based on this information.

MIPS window is enabled only when the device xAF dll version is 18.x.x.xxx or higher.

Signal flow should be flashed before launching MIPS.

MIPS profiling data of non-xAF instance core objects (Buffer, Splitter, etc.) is available only from the X release (24.x.x.xxxx) audio library onwards.

Launch MIPS Profiling

Steps to launch MIPS profiling:

  1. Select the device node and click MIPS. This opens the MIPS window for the selected device.
    The MIPS measurement on command (0x64000501) and the Audio Object level MIPS measurement off command (0x64000504) will be sent while opening the MIPS window, and a progress window will be seen as per the below screenshot.

The following command will be displayed in the xTP Log Viewer.

Summary Tab

  • Presents virtual core and core object MIPS data (Average MIPS and Maximum MIPS) retrieved from the device using xTP Commands.
  • MIPS data displayed on the physical core are the aggregated values of its virtual cores.
  • Audio Object level MIPS measurement off command (0x64000504) will be sent while switching to the Summary tab from the Instance tab and a progress window will be seen as per the below screenshot.

The following command will be displayed in the xTP Log Viewer.

Instance Tabs

  • A new tab corresponding to the selected instance will be loaded, displaying audio-object MIPS data retrieved via the xTP Command.
  • For compound audio objects, the inner audio objects are displayed alongside the compound audio object.
  • Audio Object level MIPS measurement on command (0x64000503) will be sent while opening a new Instance tab or switching to the Instance tab from the Summary tab and a progress window will be seen as per the below screenshot.

The following command will be displayed in the xTP Log Viewer.

Reset

When you click the Reset option, the MIPS numbers for the selected tab are reset on the device; the Reset command (0x64000500) will be sent and a progress window will be displayed as per the below screenshot.

The following command will be displayed in the xTP Log Viewer.

Refresh MIPS: When you click the Refresh MIPS option, the MIPS data for the current tab will be refreshed.

Export to CSV: When you select the Export option, the MIPS data for the current tab will be exported to a CSV file. The CSV format is shown below.

Summary Tab
Instance Tab

Closing MIPS Window: When you click the cross (x) to close the MIPS window, the MIPS measurement off command (0x64000502) will be sent and a progress window will be displayed as per the below screenshot.

The following command will be displayed in the xTP Log Viewer.
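
For quick reference, the MIPS-related xTP command IDs quoted throughout this section can be collected in one place. The mapping below is purely illustrative (it is not a GTT API); only the IDs and their triggers come from the text above.

```python
# MIPS-related xTP command IDs quoted in this section.
# The dictionary itself is illustrative; only the IDs come from the manual.
MIPS_XTP_COMMANDS = {
    "reset": 0x64000500,              # sent when Reset is clicked
    "measurement_on": 0x64000501,     # sent when the MIPS window opens
    "measurement_off": 0x64000502,    # sent when the MIPS window is closed
    "ao_measurement_on": 0x64000503,  # sent when an Instance tab is opened
    "ao_measurement_off": 0x64000504, # sent when switching back to Summary
}
```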

Link Window

The Linking Window is designed to assist you by reducing the number of audio parameter configurations. It enables you to organize the filters and channels. When you set one item in a group, the remaining items in the group will have the same value.

By default, linking is enabled for all groups. If you want to disable a specific group link, click its toggle button. GTT stores the group linking status in the project file.

If you want to disable linking for all groups, select the Disable Linking checkbox.

Create a New Group

Steps to create a new group:

  1. Open the Linking Window, expand the task, and drag and drop the object into the right-side section under Groups. A new group is created; expand it to view the added object.

Renaming Groups: Double-click on a group name to rename it.

Only groups can be renamed; audio objects cannot be renamed.

Removing Objects: Click on the remove icon to remove the object. This will also delete all child objects.

Removing Groups: Click the remove icon to the right of the group. This will also delete all child objects.

Add Object to Existing Group

Steps to add object to an existing group:

  1. Open the Linking Window, expand the task, and drag and drop the object into the target group in the right-side section. Expand the group to view the added object.

If an object cannot be added to a specific group, the color of that group will change to grey.

If the object is added to the group, the color of the group will change to blue.

Linking Rules

  • Each of the audio objects can be part of only one group.
  • If a child audio object is part of a group, the parent element cannot be part of that group.
  • Groups can contain only one type of audio object. For example, you cannot link a Biquad with a Delay. Each group can contain more than one AO.
  • Objects in groups are linked according to their order. For example, if you link two EQ channels, the first Biquad from the first channel will be linked with the first Biquad from the second channel.
  • Link changes take effect live. You do not need to close the window for the changes to apply.
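
The order-based rule above amounts to positional pairing. The sketch below illustrates it with hypothetical object names (this is not GTT code): the first Biquad of channel 1 is linked with the first Biquad of channel 2, and so on.

```python
# Positional pairing, as in the EQ example above.
# Object names are hypothetical, for illustration only.
channel1 = ["ch1_biquad0", "ch1_biquad1", "ch1_biquad2"]
channel2 = ["ch2_biquad0", "ch2_biquad1", "ch2_biquad2"]

# Objects in a group are linked according to their order.
links = list(zip(channel1, channel2))
# links[0] pairs ch1_biquad0 with ch2_biquad0
```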

Controller

The controller window is used to send instance commands.

The controller feature is enabled only when the device xAF dll version is lower than 18.x.x.xxx.

Steps to setup controller:

  1. Enter a valid Core Id and Instance Id and click on Get Status to get the current available slot of the device. The response from the device will be displayed in the Response section.
  2. Click Save to save the current instance data on the device to the memory slot entered.
  3. Click Load to load the data from the memory slot to the device RAM.

If there is an error in the device connection, or if an invalid Core Id and/or Instance Id is entered, the error message “Request failed!!!! Please make sure […]” will be displayed.

Configuring Preset Controller

The preset controller is the central place for managing and organizing how presets are loaded in your signal flow. It also contains related features such as creating .set files, and storing and recalling the available parameter sets.
A “Slot” is a grouping one level above parameter sets. You can create multiple such slots and perform actions on them such as creating set files, store, recall, etc.
In addition to GTT functionality, it is also possible to send xTP commands to the device: there are xTP commands to send the slot map and to load a slot onto the device. You need to export all .set files and manually flash them onto the amp.

To know more about the Preset Controller functionality, refer to the topics below.

Limitations in Preset Controller

  • After the set map is configured, if parameter sets are deleted, the generic sets window must be reopened to see the changes.
  • Only one generic sets window can be open at a time. If a new window has to be opened for a new device, the currently open window will be closed automatically (changes made will be retained).
  • Parameter sets are project-specific. Generic sets are device-specific.
  • Only basic validations are done; illegal values are not yet handled. For example, entering a string value for fade in/out is considered illegal and will not be handled.
  • Signal flow and presets saved on the device should be in sync.
  • The slot map and set files export is offline. Make sure that these files are in sync with GTT for better visualization.
  • All the preset configuration, slot map, and preset data are sent together in “Send To Device”. There is no option to send these data individually.
  • For any changes in presets or maps, you are expected to send all data (config, map, and presets).

Virtual device naming conventions: when the “Maintain Folder Structure” checkbox is unchecked, the set files should always follow the “preset[preset id]” naming convention, for example preset0, preset1, etc.

When exporting the slot map file for Virtual Device usage, the name should always be sect262144.flash.
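
The naming conventions above can be sanity-checked before flashing. The helper functions below are hypothetical (not part of GTT); only the preset naming pattern and the sect262144.flash file name come from the text.

```python
import re

# Set files must be named preset0, preset1, ... when
# "Maintain Folder Structure" is unchecked.
SET_FILE_PATTERN = re.compile(r"^preset\d+$")

# The slot map file exported for Virtual Device usage must be
# named sect262144.flash.
SLOT_MAP_NAME = "sect262144.flash"

def is_valid_set_file(name: str) -> bool:
    """Check a set file name against the preset<id> convention."""
    return bool(SET_FILE_PATTERN.match(name))

def is_valid_slot_map(name: str) -> bool:
    """Check the exported slot map file name for Virtual Device usage."""
    return name == SLOT_MAP_NAME
```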

If set groups overlap for a given audio object, the one loaded last overrides the first. Because the order is not guaranteed, overlapping groups are not recommended.