Advanced Design Time Configuration

Additional Variable Configuration

Additional configuration variables are optional parameters, used when a specific object/algorithm needs configuration parameters beyond the default ones. By default, objects have configuration for (as mentioned above):

  • Channels
  • Elements
  • Modes

An object can choose whether to utilize all of the default variables. However, if the object needs more configuration variables, that is where additional variables come in. A good example is the parameter biquad audio object:

  • Channels are used
  • Elements represent the number of biquads
  • Object Mode is selected from the drop-down

However, the parameter biquad object also allows the user to select the filter topology and whether ramping is needed or not. Those two are represented with additional variables. These are fully customizable by the objects.

Adding Additional Variables to Audio Object

The object needs to inform the toolbox about the number of additional variables it requires. This is already described above as part of the object description. Besides the number of additional variables, the object needs to provide a description of each additional variable. The audio object developer can provide the following:

  • Label for additional variable (example: filter enable or disable)
  • Data type for the additional variable

The string data type is not supported, as it would add extra bytes to the flash memory. Strings are used only in GTT and are not required on the target.

  • Defaults & Range
    • Min
    • Max
    • Default value
  • The dimension for each additional variable
  • Data order – describes how the data must be ordered, e.g. ascending or descending
  • Dimension description
    • Label
    • Size of each dimension
    • Axis start index (always Float, irrespective of the data type)
    • Axis increment (always Float, irrespective of the data type)

Starting with the R release, the sizes of dynamic additional variables (NOT the count of variables, but the size of each variable) can change based on user inputs.

To enable this functionality, set the static metadata parameter isAddVarUpdateRequired to true (see the Static Metadata section for details).

Here are the restrictions and features (a short sketch follows the list):

  1. You can access the following members to change your size –
    1. m_NumElements
    2. m_NumAudioIn
    3. m_NumAudioOut
  2. You can only refer to these if they are true inputs (i.e., you are NOT setting them in getObjectIo)
    1. Example: your mask says m_NumElements and m_NumAudioOut are NOT configurable by the user (so they are considered derived values).
    2. You cannot utilize m_NumElements and m_NumAudioOut for changing additional variable sizes – this would create a two-way dependency, because additional variables are INPUTS to getObjectIo.
    3. You can use m_NumAudioIn freely.
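
A minimal sketch of the allowed case, based on the getAdditionalSfdVarsDescription() example later in this document (the cast type of m_Size is an assumption):

// Sketch: inside getAdditionalSfdVarsDescription(), size an additional variable
// from m_NumAudioIn, which is a true user input in this configuration.
static CAudioObjectToolbox::addVarsSize varSize[1] =
{
    // size, label, start index, increment
    {1, "Max Gain per channel(dB)", 0, 1}
};

varSize[0].m_Size = static_cast<xUInt32>(m_NumAudioIn);  // size follows the number of input channels
theVar.mP_MaddVarsSize = varSize;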

Below is an example of an object that has four additional variables.

  • First additional variable
    • Label : "Gain Vs Frequency"
    • Data type : Float
    • Defaults : Min Value = 0.5 Max Value = 1.0 and Default value = 0.75
    • Number of dimensions = NUM_DIMENTION_VAR1 (2)
    • Data order : xAF_NONE (no specific order required)
    • Dimension details :
      • 1st dimension:
        • Size of 1st dimension : SIZE_ADDVAR_1_XAXIS (10)
        • Label : “Gain“
        • Axis start index : 0
        • Axis increment : 1
      • 2nd dimension:
        • Size of 2nd dimension : SIZE_ADDVAR_1_YAXIS (20)
        • Label : “Frequency“
        • Axis start index : 0
        • Axis increment : 1
  • Second additional variable
    • Label : "Enable Disable Filter"
    • Data type : Int8 / char
    • Defaults : Min Value = 0 Max Value = 1 and Default value = 0
    • Number of dimensions = 1
    • Data order : xAF_NONE (no specific order required)
    • Dimension details :
      • 1st dimension
        • Size of 1st dimension : 10
        • Label : “ inputFiltEnable“
        • Axis start index : 0
        • Axis increment : 1
  • Third additional variable
    • Label : "Min Max"
    • Data type : Int
    • Defaults : Min Value = 0 Max Value = 30 and Default value = 5
    • Number of dimensions = 1
    • Data order : xAF_ASCENDING (data has to be entered in ascending order)
    • Dimension details :
      • 1st dimension
        • Size of 1st dimension : 2
        • Label : "1st – Min, 2nd – Max"
        • Axis start index : 0
        • Axis increment : 1
  • Fourth additional variable
    • Label : "Gain"
    • Data type : Int
    • Defaults : Min Value = -30 Max Value = 20 and Default value = 1
    • Number of dimensions = 1
    • Data order : xAF_NONE (no specific order required)
    • Dimension details :
      • 1st dimension
        • Size of 1st dimension : 1
        • Label : "Gain"
        • Axis start index : 0
        • Axis increment : 1

The example can be referred to here

An example of the result in the SFD is shown below, with the configuration of additional variable 1:

Audio Object description file for tuning and control

Once a signal flow design is complete, SFD calls the following three Audio Object API functions, getXmlSVTemplate(), getXmlObjectTemplate(), and getXmlFileInfo(), to generate XML that describes the parameter memory layout for tuning purposes and state memory layout for control and debug purposes. This data depends on the object configuration designed in the signal flow.

These functions are enabled only when generating the XML file on a PC. The getXmlSVTemplate() function is called once and used for state variable templates, a single parameter, or control value. This state variable template can be reused in the object template or even the device description. The getXmlObjectTemplate() creates an object template that can be reused in another object template or in the device description. The getXmlFileInfo() uses the block ID assigned by the SFD and the HiQnet address of an object. HiQnet ID of the StateVariable must be unique in an object – even across hierarchical levels.

This data describes the parameter memory layout for tuning purposes and state memory layout for control purposes:

unsigned int CAudioObjectToolbox::getXmlSVTemplate(tTuningInfo* info, char* buffer, unsigned int maxLen){}
unsigned int CAudioObjectToolbox::getXmlObjectTemplate(tTuningInfo* info, char* buffer, unsigned int maxLen){}
unsigned int CAudioObjectToolbox::getXmlFileInfo(tTuningInfo* info, char* buffer, unsigned int maxLen){}

This data must precisely describe the memory layout of the object. Here are some general guidelines:

  • Each object should start with a new HiQnet block value.
  • Each object should have a unique block ID value.
    • Block ID refers to an entire audio object. How sub-blocks are used depends on the object developer and is tied to how the developer writes the tuneXTP function. For example, in a Biquad that contains multiple filters and multiple channels, each sub-block can refer to the multiple filters on one channel. Alternatively, each sub-block can refer to one filter in the Biquad.
  • This file and tuning are directly related and should be implemented or laid out in the same order.
  • Each parameter or state value in an object that the developer wants to expose to the user should be wrapped and described in the segment.
  • Category should be set to 'Tuning' for parameter memory and to 'State' for control memory or state memory.

To ease the generation of this data, xAF provides XML helper functions. These functions can be used when writing the getXmlSVTemplate() and getXmlObjectTemplate() functions.

The helper function is shown below for an example where getXmlSVTemplate() for a Delay object is written using writeSvTemplate():

For more XML helper functions:

  • Internal customers: Refer to XafXmlHelper.h and XafXmlHelper.cpp.
  • External customers: Contact Harman.

For more information and details, please check the Device Description File specification guide.

Examples of the XML functions that need to be written are shown in the Audio Object Examples section.

Audio Object AO Switch Processing State

This function is called from the CAudioProcessing class whenever an XTP command is received to switch the audio object processing state. This function configures the ramping-related variables and also the function pointer for the method to be called on every subsequent audio interrupt.

void CAudioObject::aoSwitchProcState(int state, int prevState);

Audio Object Processing States

The audio objects can be set to one of the following states from the GTT:

  1. Normal (default state on boot-up)
  2. Bypass
  3. Mute
  4. Stop

These options are available to all regular audio objects with an equal number of input and output channels. For source objects like the Waveform generator, only the Normal and Mute states are allowed. This feature is not available to interface objects such as Audio-in/out and Control-in/out. For compound audio objects, the selected state is applied to all inner audio objects.

The following tasks are carried out each time an audio interrupt is received, depending on the state (a sketch of the behaviour follows the list):

  • Normal: Normal operation with update of necessary internal states of the audio object; normal output.
  • Bypass: Normal operation with update of necessary internal states of the audio object; input channel buffer data copied to the output channel buffers.
  • Mute: Normal operation with update of necessary internal states of the audio object; output channel buffers cleared to zero.
  • Stop: Input channel buffer data copied to the output channel buffers (no update of internal states).
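
As a rough, self-contained illustration only (toy types and names, not the xAF implementation), the per-interrupt behaviour for the four states can be summarized as:

// Sketch: per-interrupt handling for one channel of blockLength samples.
// calc stands for the object's algorithm, which also updates internal states.
#include <algorithm>

enum eProcState { NORMAL, BYPASS, MUTE, STOP };

void processInterrupt(eProcState state, const float* in, float* out, int blockLength,
                      void (*calc)(const float*, float*, int))
{
    switch (state)
    {
        case NORMAL:                                    // algorithm output
            calc(in, out, blockLength);
            break;
        case BYPASS:                                    // states updated, then input copied to output
            calc(in, out, blockLength);
            std::copy(in, in + blockLength, out);
            break;
        case MUTE:                                      // states updated, then output cleared
            calc(in, out, blockLength);
            std::fill(out, out + blockLength, 0.0f);
            break;
        case STOP:                                      // no state update, input copied to output
            std::copy(in, in + blockLength, out);
            break;
    }
}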

Ramping

To ensure a smooth transition across states, linear ramping is provided with a ramp-up or ramp-down time of 50 ms. Ramping is not provided for transitions involving the Bypass state; the individual audio object needs to support that itself.

For transitions between the Normal and Stop states, the output is first ramped down from the present state to the mute state and then ramped up to the target state.
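
For orientation, a minimal sketch of the ramp bookkeeping described above (the names are illustrative and the actual framework computation may differ):

// Sketch: a linear ramp over a fixed 50 ms, expressed in audio blocks.
// With sampleRate = 48000 and blockLength = 64 this gives roughly 37 blocks,
// i.e. the ramp function is called that many times before the target state is reached.
struct RampInfo
{
    int   numRampBlocks;   // how many interrupts the ramp spans
    float stepPerBlock;    // linear gain change applied per block
};

RampInfo computeRamp(int sampleRate, int blockLength, float rampTimeSec = 0.05f)
{
    RampInfo r;
    r.numRampBlocks = static_cast<int>((rampTimeSec * static_cast<float>(sampleRate)) / static_cast<float>(blockLength));
    if (r.numRampBlocks < 1) { r.numRampBlocks = 1; }
    r.stepPerBlock = 1.0f / static_cast<float>(r.numRampBlocks);  // ramps the gain between 0.0 and 1.0
    return r;
}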

Audio Object Bypass

This function is called every time an audio interrupt is received and when the audio object is in “BYPASS” processing state. The calc() function is called from here to get the internal states of the audio object updated. Subsequently the data from the input audio buffers are copied to the output audio buffers (overwriting the generated output data through the calc process).

This function takes pointers to input and output audio streams and is called by the CAudioProcessing class when an audio interrupt is received.

void CAudioObject::bypass(float** inputs, float** outputs)
{
}

Audio Object Mute

This function is called every time an audio interrupt is received while the audio object is in the "MUTE" processing state. The calc() function is called from here to get the internal states of the audio object updated. Subsequently, the output audio buffers are cleared to zero (overwriting the output data generated by the calc process).

This function takes pointers to input and output audio streams and is called by the CAudioProcessing class when an audio interrupt is received.

void CAudioObject::mute(xAFAudio** inputs, xAFAudio** outputs);

Audio Object Stop

This function is called every time an audio interrupt is received while the audio object is in the "STOP" processing state. The data from the input audio buffers is copied to the output audio buffers without calling calc(), so the internal states of the audio object are not updated. This function is used to save cycles.

This function takes pointers to input and output audio streams and is called by the CAudioProcessing class when an audio interrupt is received.

void CAudioObject::stop(xAFAudio** inputs, xAFAudio** outputs);

Audio Object Ramp-Up

This function is called every time an audio interrupt is received while the audio object is in the transition state of switching from the "MUTE" state to the "NORMAL / STOP" processing state. The calc() function is called from here with "NORMAL / STOP" as the active state, and the output is ramped up linearly. The ramp-up time is fixed at 50 ms. The ramp step and the number of times this function needs to be called are computed at the start of the ramp period.

This function takes pointers to input and output audio streams and is called by the CAudioProcessing class when an audio interrupt is received.

void CAudioObject::rampUp(xAFAudio** inputs, xAFAudio** outputs);

Audio Object Ramp-Down

This function is called every time an audio interrupt is received while the audio object is in the transition state of switching from the "NORMAL / STOP" state to the "MUTE" processing state. The calc() function is called from here with NORMAL / STOP as the active state, and the output is ramped down linearly. The ramp-down time is fixed at 50 ms. The ramp step and the number of times this function needs to be called are computed at the start of the ramp period.

This function takes pointers to input and output audio streams and is called by the CAudioProcessing class when an audio interrupt is received.

void CAudioObject::rampDown(xAFAudio** inputs, xAFAudio** outputs);

Audio Object Ramp Down-Up

This function is called every time an audio interrupt is received while the audio object is in the transition state of switching from "NORMAL" to "STOP" or from "STOP" to "NORMAL". The transition is in two parts – a ramp down from the present state to the MUTE state followed by a ramp up from the MUTE state to the target state. The calc() function is called from here with the present state as the active state during ramp down and the target state as the active state during ramp up. Linear ramping is applied, and the ramp-down time and ramp-up time are fixed at 50 ms each. The ramp step and the number of times this function needs to be called are computed at the start of the ramp period.

This function takes pointers to input and output audio streams and is called by the CAudioProcessing class when an audio interrupt is received.

void CAudioObject::rampDownUp(xAFAudio** inputs, xAFAudio** outputs);

Audio object Examples

A new audio object will implement the following functions depending on functionality. See the header files for the associated classes for detailed comments.

In the class which inherits CAudioObject – i.e., CYourAudioObject.cpp

Abstract Methods (required implementation)

xUInt32 CAudioObject::getSize() const 

Virtual Methods (optional implementation – depending on object features). This is not a complete list but contains the major virtual methods.

void      CAudioObject::init()
void      CAudioObject::calc(xFloat32** inputs, xFloat32** outputs)
void      CAudioObject::tuneXTP(xSInt32 subblock, xSInt32 startMemBytes, xSInt32 sizeBytes, xBool shouldAttemptRamp)
void      CAudioObject::controlSet(xSInt32 index, xFloat32 value)
xAF_Error CAudioObject::controlSet(xSInt32 index, xUInt32 sizeBytes, const void * const pValues)
void      CAudioObject::assignAdditionalConfig()
xInt8*    CAudioObject::getSubBlockPtr(xUInt16 subBlock)
xSInt32   CAudioObject::getSubBlockSize(xUInt16 subBlock)

In the class which inherits CAudioObjectToolbox – i.e., CYourAudioObjectToolbox.cpp

const CAudioObjectToolbox::tObjectDescription*          CAudioObjectToolbox::getObjectDescription()
const CAudioObjectToolbox::tModeDescription*            CAudioObjectToolbox::getModeDescription(xUInt32 mode)
const CAudioObjectToolbox::additionalSfdVarDescription* CAudioObjectToolbox::getAdditionalSfdVarsDescription(xUInt32 index)
xAF_Error                                               CAudioObjectToolbox::getObjectIo(ioObjectConfigOutput* configOut)
xUInt32                                                 CAudioObjectToolbox::getXmlSVTemplate(tTuningInfo* info, xInt8* buffer, xUInt32 maxLen)
xUInt32                                                 CAudioObjectToolbox::getXmlObjectTemplate(tTuningInfo* info, xInt8* buffer, xUInt32 maxLen)
xUInt32                                                 CAudioObjectToolbox::getXmlFileInfo(tTuningInfo* info, xInt8* buffer, xUInt32 maxLen)
void                                                    CAudioObjectToolbox::createStaticMetadata()
void                                                    CAudioObjectToolbox::createDynamicMetadata(ioObjectConfigInput& configIn, ioObjectConfigOutput& configOut)

In the class which inherits CMemoryRecordProperties – i.e., CYourAudioObjectMemRecs.cpp

xUInt8 CMemoryRecordProperties::getMemRecords(xAF_memRec* memTable, xAF_memRec& scratchRecord, xInt8 target, xInt8 format)

Source code for the AwxAudioObjExt audio object can be found in the HarmanAudioworX installation folder. The paths for the source code are:
Program Files\Harman\HarmanAudioworX\ext-reference-algorithms\external\inc
Program Files\Harman\HarmanAudioworX\ext-reference-algorithms\external\src

The code snippets for the source and include files are provided in the following sections for reference.

General Guidelines

The topics below describe general guidelines for developing an audio object.

Hardware Abstraction

Hardware abstraction is done at the level of the audio object functions, such as calc() and init().

Audio Object Function Level Abstraction

When the audio object function implementations are significantly different across platforms, the platform-specific functions should be placed in separate cpp files (one file per core). A new cpp file needs to be introduced for every object that has any core-specific code. The object still has one header file across all platforms. DLL-specific functions should be part of the Win32-specific files.

Folder Structure

The folder structure will look like the following:

Override Defines

Each object will also have an “OBJECT”_OVERRIDE define. This define may be in the object header. However, for the xAF basic audio objects and the core objects, these defines have been combined into a build\processor\“PLATFORM”\objectOverride“PLATFORM“.cmake file that is included in the build process.

For non-Sharc platforms, the API-function-specific defines are available in AudioFramework.h and the Biquad-specific defines are available in private\src\framework\filter\CMakeLists.txt:

For example, build\processor\armv8a\objectOverrideArmv8a.cmake will look like the following:

For SHARC-based platforms, it is not possible to define the override macro as a logical expression of individual API macros as done above. Here the logical operation is done separately with the math directive and subsequently assigned to the overriding macro. Both the API-function-specific defines and the Biquad-specific defines are available in the file build\processor\sharc\objectOverrideSharc.cmake:

The FIR object has a Sharc-specific implementation of init() and calc(). The FIR_OVERRIDE define will be the union of the XAF_INIT and XAF_CALC defines, as given below:

Files

In the generic implementation of the object, every function that is overridden in optimization files should be surrounded by an ifdef check. Using the example above, the generic file FIR.cpp would look like this:
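
As a rough illustration (not the actual xAF source; the guard form and the CFIR class name are assumptions based on the description above), the generic file might guard each overridable function like this:

// FIR.cpp (sketch) – generic implementation, compiled on every platform.
// A function is compiled here only when the platform does not override it.
#if !(FIR_OVERRIDE & XAF_INIT)
void CFIR::init()
{
    // generic, platform-independent initialization
}
#endif

#if !(FIR_OVERRIDE & XAF_CALC)
void CFIR::calc(xAFAudio** inputs, xAFAudio** outputs)
{
    // generic, platform-independent FIR processing
}
#endif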

The Sharc file – FIRSharc.cpp – in this case should override both init() and calc():
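
Again as a sketch under the same assumptions, the Sharc-specific file would provide the overriding definitions:

// FIRSharc.cpp (sketch) – Sharc-specific overrides of both functions.
#if (FIR_OVERRIDE & XAF_INIT)
void CFIR::init()
{
    // Sharc-optimized initialization
}
#endif

#if (FIR_OVERRIDE & XAF_CALC)
void CFIR::calc(xAFAudio** inputs, xAFAudio** outputs)
{
    // Sharc-optimized FIR processing
}
#endif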

Build System

Based on the target processor, the build system should always include the generic cpp implementation and the processor specific implementation. For example, when compiling for Sharc for the xAF basic audio objects:

Generic Files

SHARC files

Hardware Abstraction for Header files

This hardware abstraction separates platform-specific member variables and member functions out of the audio object header file and keeps them in a hardware abstraction class. The implementation of this class is done in a platform-specific .cpp file. The audio object accesses the hardware abstraction class members by forward declaring the class in the audio object header and instantiating it as a class member. Memory allocation and assignment for this instance are done in initMemRecord and init of the object. Sometimes it may be necessary to access audio object data members inside the hardware abstraction class. This requirement is handled by declaring a back pointer in the hardware abstraction class and assigning it to the audio object pointer in the constructor. The implementation for the ToneControl audio object is complete, and code snippets for it are given below.

Hardware Abstraction Class

Below is an implementation example of the hardware abstraction class for the ToneControl C66 implementation. Here m_ToneCtrlInst is the back pointer used to access ToneControl data members inside the ToneControl hardware abstraction class; it is initialized in the constructor with the CToneControl pointer.
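
A minimal sketch of the pattern (class and member names other than m_ToneCtrlInst and CToneControl are assumptions; the real C66 implementation differs):

// ToneControlHal.h (sketch): platform-specific members live here, not in the AO header.
class CToneControl;                      // forward declaration of the audio object

class CToneControlHal
{
public:
    explicit CToneControlHal(CToneControl* owner)
        : m_ToneCtrlInst(owner)          // back pointer, assigned in the constructor
    {
    }

    void process();                      // implemented in the platform-specific .cpp file

private:
    CToneControl* m_ToneCtrlInst;        // back pointer used to reach AO data members
    // platform-specific members (e.g. optimized state, DMA handles) go here
};

// ToneControl.h (sketch): the audio object only forward declares the class and holds a pointer,
// which is allocated and assigned in initMemRecord()/init().
class CToneControlHal;
// ... inside class CToneControl : public CAudioObject ...
//     CToneControlHal* m_Hal;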

Instance of Hardware Abstraction class in Audio Object

Memory Enums

Memory Enums shall be used for clarity and to avoid errors when allocating the memory required by the object. For example, the AudioToControl AO enum and getMemRecords() method are presented below.
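
As a generic, hedged illustration of the pattern (the enum entries are placeholders, not the actual AudioToControl values, and the return value of getMemRecords() is assumed to be the number of records used):

// Sketch: one enum names every memory record the object requests, so the same
// symbolic index is used when filling the memory table and when picking up the
// allocated pointers later in init().
enum memoryRecords { PARAM, COEFF, NUM_MEM_RECORDS };

xUInt8 CYourAudioObjectMemRecs::getMemRecords(xAF_memRec* memTable, xAF_memRec& scratchRecord,
                                              xInt8 target, xInt8 format)
{
    // Fill memTable[PARAM], memTable[COEFF], ... with the size/alignment/placement
    // information required by the object (the xAF_memRec fields are not shown here).
    return NUM_MEM_RECORDS;
}

// Later, in init():
//   m_Params = static_cast<xFloat32*>(m_MemRecPtrs[PARAM]);
//   m_Coeffs = static_cast<xFloat32*>(m_MemRecPtrs[COEFF]);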

Overlays

Instead of dereferencing parameter and state memory as m_Params[0] and m_States[0], overlays can be used where applicable. The same concept can be applied to coefficient memory if needed. The example below is presented for the gain object that contains three tunable parameters: gain value, invert, and mute.

Below is an example of how the parameters above can be used during tuning:
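
A minimal sketch of the overlay idea (the struct and helper names are illustrative, not the actual gain object code):

// Sketch: overlay struct that maps named fields onto the raw parameter memory.
struct tGainParams
{
    xFloat32 gainValue;   // gain in dB
    xUInt32  invert;      // 0 = normal, 1 = inverted
    xUInt32  mute;        // 0 = unmuted, 1 = muted
};

// In init(): interpret the PARAM memory record through the overlay instead of raw indexing.
//   m_GainParams = reinterpret_cast<tGainParams*>(m_MemRecPtrs[PARAM]);

// In tuneXTP(): the fields can then be read by name rather than as m_Params[0], m_Params[1], ...
//   xFloat32 coeff = (0u == m_GainParams->mute) ? dbToLinear(m_GainParams->gainValue) : 0.0f;  // dbToLinear is hypothetical
//   if (0u != m_GainParams->invert) { coeff = -coeff; }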

Example 1 – AwxAudioObjExt.cpp

/*!
*   file      AwxAudioObjExt.cpp
*   brief     Simple example Audio object for building outside from the xAF repo- Source file
*   details   Implements a simple example functionality
*   details   Project    Extendable Audio Framework
*   copyright Harman/Becker Automotive Systems GmbH
*             2022
*             All rights reserved
*   author    xAF Team
*/

/*!
*   xaf mandatory includes
*/
#include "AwxAudioObjExt.h"
#include "XafMacros.h"
#include "vector.h"

VERSION_STRING_AO(AwxAudioObjExt, AWXAUDIOOBJEXT);
AO_VERSION(AwxAudioObjExt, AWXAUDIOOBJEXT);

/** here you can add all required include files required for    
    the core functionality of your objects
**/

#define MAX_CONFIG_MIN_GAIN_dB              (0.0f)
#define MAX_CONFIG_MAX_GAIN_dB              (30.0f)
#define MAX_GAIN_DEFAULT_GAIN_dB            (10.0f)
#define CONTROL_GAIN_MIN                    (-128.0f)
#define GAINDB_CONVERSION_FACTOR             (0.05f)

CAwxAudioObjExt::CAwxAudioObjExt()
    : m_Coeffs(NULL)
    , m_Params(NULL), m_MemBlock(NULL), m_EnMemory(DISABLE_BLOCK)
{
}

CAwxAudioObjExt::~CAwxAudioObjExt()
{
}

void CAwxAudioObjExt::init()
{   
    m_Params = static_cast<xFloat32*>(m_MemRecPtrs[PARAM]);
    m_Coeffs = static_cast<xFloat32*>(m_MemRecPtrs[COEFF]);
	
	if (ENABLE_BLOCK == m_EnMemory)
	{
		m_MemBlock = static_cast<xFloat32*>(m_MemRecPtrs[FLOATARRAY]);
	}
	if (static_cast<xUInt32>(GAIN_WITH_CONTROL) == m_Mode)
	{
		m_NumControlIn = 1;
		m_NumControlOut = 0;
	}
	else
	{
		m_NumControlIn = 0;
		m_NumControlOut = 0;
	}
}

void CAwxAudioObjExt::assignAdditionalConfig()
{
	xInt8*  addVars8Ptr = reinterpret_cast<xInt8*>(m_AdditionalSFDConfig);
	//Assigning additional configuration variable "Abstracted Tuning Memory".
	if (static_cast<void*>(NULL) != m_AdditionalSFDConfig)
	{
		m_EnMemory = addVars8Ptr[m_NumAudioIn * sizeof(xFloat32)];
	}
}

xFloat32 CAwxAudioObjExt::getMaxGain(xSInt32 index)
{
	xFloat32*  addVars32Ptr = reinterpret_cast<xFloat32*>(m_AdditionalSFDConfig);
	xFloat32 value = addVars32Ptr[index];
	return value;
}

xInt8* CAwxAudioObjExt::getSubBlockPtr(xUInt16 subBlock)
{
    xInt8* ptr = NULL;
  
    // this is just an example of how memory could be split by an AO developer. There is no strict rule
    // how memory has to be split for each subblock
    
    switch(subBlock)
    {
        case 0:
        ptr = reinterpret_cast<xInt8*>(m_Params);
        break;

        case 1:
        ptr = reinterpret_cast<xInt8*>(m_MemBlock);
        break;
        
        default:
        // by default we will return a null ptr, hence wrong subBlock was provided
        break;
    }

    return ptr;
}

xSInt32 CAwxAudioObjExt::getSubBlockSize(xUInt16 subBlock)
{
    xSInt32 subBlockSize = 0;
    
    // this is just an example of how memory could be split by an AO developer. There is no strict rule
    // how memory has to be split for each subblock
    switch(subBlock)
    {
        case 0:
        subBlockSize = static_cast<xSInt32>(sizeof(xFloat32)) * static_cast<xSInt32>(m_NumAudioIn) * NUM_PARAMS_PER_CHANNEL;
        break;

        case 1:
        subBlockSize = (nullptr != m_MemBlock) ? (static_cast<xSInt32>(sizeof(xFloat32)) * FLOAT_ARRAY_SIZE) : 0;
        break;
        
        default:
        // by default we will return a size of 0, since a wrong subBlock was provided
        break;
    }
    return subBlockSize;
}

void CAwxAudioObjExt::calc(xAFAudio** inputs, xAFAudio** outputs)
{
	if (static_cast<xInt8>(ENABLE_BLOCK) == m_EnMemory)
	{
		xSInt32 numAudioIn = static_cast<xSInt32>(m_NumAudioIn);
		for (xSInt32 i = 0; i < numAudioIn; i++)
		{
			// for example if m_MemBlock[0] is to mute all channels
			xFloat32 factor = m_Coeffs[i] * m_MemBlock[0];
			scalMpy(factor, inputs[i], outputs[i], static_cast<xSInt32>(m_BlockLength));
		}
	}
	else
	{
		xSInt32 numAudioIn = static_cast<xSInt32>(m_NumAudioIn);
		for (xSInt32 i = 0; i < numAudioIn; i++)
		{
			scalMpy(m_Coeffs[i], inputs[i], outputs[i], static_cast<xSInt32>(m_BlockLength));
		}
	}
}

void CAwxAudioObjExt::calcGain(xSInt32 channelIndex, xFloat32 gainIndB)
{
	xFloat32 maxGainIndB = getMaxGain(channelIndex);

	LIMIT(gainIndB, CONTROL_GAIN_MIN, maxGainIndB);
	m_Params[channelIndex * NUM_PARAMS_PER_CHANNEL] = gainIndB;

	xUInt32* mutePtr = reinterpret_cast<xUInt32*>(&m_Params[(channelIndex * NUM_PARAMS_PER_CHANNEL) + 1u]);

	m_Coeffs[channelIndex] = (0 == *mutePtr) ? powf(MAX_GAIN_DEFAULT_GAIN_dB, gainIndB * GAINDB_CONVERSION_FACTOR) : 0.f;
}

void CAwxAudioObjExt::tuneXTP(xSInt32 subBlock, xSInt32 offsetBytes, xSInt32 sizeBytes, xBool shouldAttemptRamp)
{
	if(0 == subBlock)
	{
		xUInt32 channelu = static_cast<xUInt32>(offsetBytes) >> 2u;
		xSInt32 channel = static_cast<xSInt32>(channelu) / NUM_PARAMS_PER_CHANNEL;
		while (sizeBytes > 0)
		{
			calcGain(channel, m_Params[channel * NUM_PARAMS_PER_CHANNEL]);
			sizeBytes -= static_cast<xSInt32>(NUM_PARAMS_PER_CHANNEL * sizeof(xFloat32));
			channel++;
		}
	}
	else if(1 == subBlock)
	{ // handle float array related here
	  // values are available in m_MemBlock
	}
	else
	{
	}
} 

xSInt32 CAwxAudioObjExt::controlSet(xSInt32 index, xFloat32 value)
{
	if ((0 == index) && (static_cast<xUInt32>(GAIN_WITH_CONTROL) == m_Mode))
	{
		xSInt32 numAudioIn = static_cast<xSInt32>(m_NumAudioIn);
		for (xSInt32 i = 0; i < numAudioIn; i++)
		{
			calcGain(i, value);
		}
	}
	return 0;
}

xUInt32 CAwxAudioObjExt::getSize() const
{
    return sizeof(*this);
}

Example 2 – AwxAudioObjExtToolbox.cpp

/*!
*   file      AwxAudioObjExtToolbox.cpp
*   brief     AwxAudioObjExt Toolbox Source file
*   details   Implements the AwxAudioObjExt signal design API
*   details   Project    Extendable Audio Framework
*   copyright Harman/Becker Automotive Systems GmbH
*             2020
*             All rights reserved
*   author    xAF Team
*/

/*!
*   xaf mandatory includes to handle the toolbox related data
*/
#include "AwxAudioObjExtToolbox.h"
#include "XafXmlHelper.h"
#include "AudioObjectProperties.h"
#include "AwxAudioObjExt.h"
#include "XafMacros.h"
#include "AudioObject.h"



// the revision number may be different for specific targets
#define MIN_REQUIRED_XAF_VERSION            (RELEASE_U)
// mode specific defines
#define AUDIO_IN_OUT_MIN                    (1)
#define AUDIO_IN_OUT_MAX                    (255)
#define EST_MEMORY_CONSUMPTION_NA           (0)         // memory consumption not available/measured
#define EST_CPU_LOAD_CONSUMPTION_NA         (0.f)       // CPU load not available/measured
#define CONTROL_GAIN_MAX                   (30.0f)
#define CONTROL_GAIN_MIN                   (-128.0f)
#define CONTROL_GAIN_IN_LABEL              "Gain"
#define MAX_CONFIG_MIN_GAIN_dB              (-12.0f)
#define MAX_CONFIG_MAX_GAIN_dB              (30.0f)
#define MAX_GAIN_DEFAULT_GAIN_dB            (10.0f)

CAwxAudioObjExtToolbox::CAwxAudioObjExtToolbox()
{
}

CAwxAudioObjExtToolbox::~CAwxAudioObjExtToolbox()
{
}

static CAudioObjectToolbox::additionalSfdVarDescription theVar;
const CAwxAudioObjExtToolbox::additionalSfdVarDescription* CAwxAudioObjExtToolbox::getAdditionalSfdVarsDescription(xUInt32 index)

{
    CAudioObjectToolbox::additionalSfdVarDescription* ptr = &theVar;
    static CAudioObjectToolbox::MinMaxDefault addVar1 = { MAX_CONFIG_MIN_GAIN_dB ,MAX_CONFIG_MAX_GAIN_dB ,MAX_GAIN_DEFAULT_GAIN_dB }; //Min,max,default values for additional config variable
    static CAudioObjectToolbox::MinMaxDefault addVar2 = { 0, 1, 0 }; //Min,max,default values
    
    static CAudioObjectToolbox::addVarsSize m_AddtionalVarSize1[NUM_DIMENSION_VAR] =
    {
        // size, label, start index, increment
        {1, "Max Gain per channel(dB)", 0, 1}
    };
    static CAudioObjectToolbox::addVarsSize m_AddtionalVarSize2[NUM_DIMENSION_VAR] =
    {
        // size, label, start index, increment
        {1, "Disable: 0nEnable  : 1", 0, 1}
    };

     if (index == 0)
     {
       theVar.mP_Label = "Max Gain per channel";
       theVar.m_DataType = xAF_FLOAT_32;
       theVar.mP_RangeSet = &addVar1;
       theVar.m_Dimension = 1;
       theVar.m_DataOrder = xAF_NONE;
       //size varies according to the number of channels
       m_AddtionalVarSize1[0].m_Size = static_cast<xUInt32>(m_NumAudioIn);
       theVar.mP_MaddVarsSize = m_AddtionalVarSize1;
     }
     else if (index == 1)
     {
         theVar.mP_Label = "Abstracted Tuning Memory";
         theVar.m_DataType = xAF_UCHAR;
         theVar.mP_RangeSet = &addVar2;
         theVar.m_Dimension = 1;
         theVar.m_DataOrder = xAF_NONE;
         theVar.mP_MaddVarsSize = m_AddtionalVarSize2;
     }
     else
     {
         /* Invalid index. Return NULL */
         ptr = NULL;
     }
     return ptr;
}


const CAudioObjectToolbox::tObjectDescription* CAwxAudioObjExtToolbox::getObjectDescription()
{
    static const CAudioObjectToolbox::tObjectDescription descriptions =
    {
        1, 1, 0, 0, "AwxAudioObjExt", "Simple Object to start with for 3rd party/external objects integration", "External", AWX_EXT_NUM_ADD_VARS, AWX_EXT_NUM_MODES
    };
    return &descriptions;
}


const CAudioObjectToolbox::tModeDescription* CAwxAudioObjExtToolbox::getModeDescription(xUInt32 mode)
{
    static const CAudioObjectToolbox::tModeDescription modeDescription[AWX_EXT_NUM_MODES] =
    {
        {"Gain", "No control input", 0, 0, "", CFG_NCHANNEL},
        {"GainWithControl", "One gain control input pin gets added", 0, 0, "", CFG_NCHANNEL},
    };
    return (mode < (sizeof(modeDescription) / sizeof(tModeDescription))) ? &modeDescription[mode] : static_cast<tModeDescription*>(NULL);
}


xAF_Error CAwxAudioObjExtToolbox::getObjectIo(ioObjectConfigOutput* configOut)
{
    if (static_cast<xUInt32>(GAIN_WITH_CONTROL) == m_Mode)
    {
        configOut->numControlIn = 1;
        configOut->numControlOut = 0;
    }
    else 
    {
        configOut->numControlIn = 0;
        configOut->numControlOut = 0;
    }

    return xAF_SUCCESS;
}


xUInt32 CAwxAudioObjExtToolbox::getXmlObjectTemplate(tTuningInfo* info, xInt8* buffer, xUInt32 maxLen)
{
    initiateNewBufferWrite(buffer, maxLen);
    
    // add your tuning parameters here in order to show up in the tuning tool
    // the number of params specified here, depends on the number to be tuned in GTT
    // in this example we are exposing the tuning parameters as arrays split into 2 subblocks

    //template 1
    xSInt32 numAudioIn = static_cast<xSInt32>(m_NumAudioIn);
    xSInt32 blockID = info->Global_Object_Count; // unique to this instance
    xUInt32 id = 0;

    for (xSInt32 i = 0; i < numAudioIn; i++)
    {
        //Max value for each channel to show up shall be specified by this function
        xFloat32 gainval = getMaxGain(i);
        xAFOpenLongXMLTag("Object");
        string templateName = string("").append("AwxAudioObjExtTuneTemplate").append(xAFIntToString(i + 1)).append(xAFIntToString(blockID));
        xAFAddFieldToXMLTag("Key", templateName.c_str());
        xAFEndLongXMLTag();
        xAFWriteQuickXmlTag("ExplorerIcon", "Object");
        xAFWriteXmlTag("StateVariables", XML_OPEN);
        xAFWriteStateVariable("Gain",                        // name
                              id,                            // id
                              NULL,                          // control law
                              "dB",                          // unit type
                              DataTypes[xAF_FLOAT],          // data type
                              -128.0,                        // min
                              gainval,                       // max
                              0.0,                           // default
                              0u,                            // offset
                              NULL,                          // encode value
                              NULL,                          // decode value
                              DataTypeConverters[xAF_FLOAT], // bit converter
                              false                          // disable streaming
                              );
        id++;
        xAFWriteStateVariable("Mute",                        // name
                              id,                            // id
                              NULL,                          // control law
                              NULL,                          // unit type
                              DataTypes[xAF_UINT],           // data type
                              0.0,                           // min
                              1.0,                           // max
                              0.0,                           // default
                              4u,                            // offset
                              NULL,                          // encode value
                              NULL,                          // decode value
                              DataTypeConverters[xAF_UINT],  // bit converter
                              false                          // disable streaming
                              );
        id++;
        xAFWriteXmlTag("StateVariables", XML_CLOSE);
        xAFWriteXmlTag("Object", XML_CLOSE);
    }

    //template 2
    //It shall show up in the State Variable Explorer if the additional configuration "Abstracted Tuning Memory" is set to 1 (enabled).
    if (static_cast<xInt8>(ENABLE_BLOCK) == m_EnMemory)
    {
        xAFOpenLongXMLTag("Object");
        xAFAddFieldToXMLTag("Key", "AwxAudioObjExtArrayTemplate");
        xAFEndLongXMLTag();
        xAFWriteQuickXmlTag("ExplorerIcon", "Object");
        xAFWriteXmlTag("StateVariables", XML_OPEN);
        xAFWriteStateVariableBuffer(/* name = */         "FloatArray",
                                    /* id = */           id,
                                    /* type = */         FLOATARRAY_SV,
                                    /* size = */         FLOAT_ARRAY_SIZE,
                                    /* streamIdx = */    id,
                                    /* minVal = */       -1000.0,
                                    /* maxVal = */       1000.0,
                                    /* defaultVal = */   0.0,
                                    /* offset = */       0,
                                    /* isStreamable = */ false);
        xAFWriteXmlTag("StateVariables", XML_CLOSE);
        xAFWriteXmlTag("Object", XML_CLOSE);
    }

    return finishWritingToBuffer();
}

xUInt32 CAwxAudioObjExtToolbox::getXmlFileInfo(tTuningInfo* info, xInt8* buffer, xUInt32 maxLen)
{
    initiateNewBufferWrite(buffer, maxLen);

    xUInt32 hiqnetInc = 0u;
    xUInt8 subBlock = 0u;
    xSInt32 blockID = info->Global_Object_Count; // unique to this instance

    xAFWriteObject(info->Name, static_cast(info->Global_Object_Count), hiqnetInc, static_cast(info->HiQNetVal), 0);

    xAFWriteXmlTag("Objects", XML_OPEN);
    xSInt32 numAudioIn = static_cast<xSInt32>(m_NumAudioIn); // m_NumAudioIn == m_NumAudioOut
    
    xAFWriteXmlObjectContainer("Gains", hiqnetInc, subBlock, PARAM_CATEGORY);
    xAFWriteXmlTag("Objects", XML_OPEN);
    for(xSInt32 i = 0; i < numAudioIn; i++)
    {
        string templateName = string("").append("AwxAudioObjExtTuneTemplate").append(xAFIntToString(i+1)).append(xAFIntToString(blockID));
        // 8 is related here to the internal memory layout where a "state/tuning" is related to a block of N bytes and within this needs to be offset
        xAFWriteXmlObjectTemplateInstance(templateName.c_str(), string("Ch").append(xAFIntToString(i+1)).c_str(), i * 8, 1 + i);
        hiqnetInc++;
    }
    xAFWriteXmlTag("Objects", XML_CLOSE);
    xAFWriteXmlTag("Object", XML_CLOSE); //gains

    if (static_cast<xInt8>(ENABLE_BLOCK) == m_EnMemory)
    {
        xAFWriteXmlObjectBlockOffset("AwxAudioObjExtArrayTemplate", "FloatArrayMemory", subBlock, static_cast(info->HiQNetVal), hiqnetInc, PARAM_CATEGORY);
        hiqnetInc++;
    }
    xAFWriteXmlTag("Objects", XML_CLOSE);

    xAFWriteXmlTag("Object", XML_CLOSE);

    return finishWritingToBuffer();
}


void CAwxAudioObjExtToolbox::createStaticMetadata()
{
    m_StaticMetadata.minReqXafVersion = static_cast(MIN_REQUIRED_XAF_VERSION);

    setAudioObjectVersion(AWXAUDIOOBJEXT_VERSION_MAJOR, AWXAUDIOOBJEXT_VERSION_MINOR, AWXAUDIOOBJEXT_VERSION_REVISION);
    setTuningVersion     (AWXAUDIOOBJEXT_TUNING_VERSION_MAJOR, AWXAUDIOOBJEXT_TUNING_VERSION_MINOR);

    m_StaticMetadata.supDataFormats.push_back(xAF_DATATYPE_FLOAT);

    //creation/release date
    setCreationDate(2022, 8, 4);

    //Simple AO supports in-place computation
    m_StaticMetadata.inPlaceComputationEnabled = true;
    //This flag allows to set whether the object dynamically updates its additional vars based on input params
    m_StaticMetadata.isAddVarUpdateRequired = true;
}

void CAwxAudioObjExtToolbox::createDynamicMetadata(ioObjectConfigInput& configIn, ioObjectConfigOutput& configOut)
{
    metaDataControlDescription ctrlDesc;
    // define audio in metadata
    m_DynamicMetadata.audioIn.Min = AUDIO_IN_OUT_MIN;
    m_DynamicMetadata.audioIn.Max = AUDIO_IN_OUT_MAX;

    // define audio out metadata
    m_DynamicMetadata.audioOut.Min = AUDIO_IN_OUT_MIN;
    m_DynamicMetadata.audioOut.Max = AUDIO_IN_OUT_MAX;

    switch (configIn.mode)
    {
        case static_cast<xUInt32>(GAIN_WITH_CONTROL) :
        // define control in min, max and label values for the control pin
            
        ctrlDesc.Min = CONTROL_GAIN_MIN;
        ctrlDesc.Max = CONTROL_GAIN_MAX;
        ctrlDesc.Label = CONTROL_GAIN_IN_LABEL;
        m_DynamicMetadata.controlIn.push_back(ctrlDesc);
        break;

        default:
        break;
    }

    m_DynamicMetadata.estMemory = EST_MEMORY_CONSUMPTION_NA;
    m_DynamicMetadata.estMIPS = EST_CPU_LOAD_CONSUMPTION_NA;
}

Example 4 – AwxAudioObjExt.h

// ============================================================
// (C) 2017 Harman International Industries, Incorporated.
// Confidential & Proprietary. All Rights Reserved.
// ============================================================

/**
*   file       AwxAudioObjExt.h
*   brief      Simple demo object Audio object to start a development of a new audio object- Header file
*   details    Project    Extendable Audio Framework
*   copyright  Harman/Becker Automotive Systems GmbH
*              2017
*              All rights reserved
*   author     xAF Team
*   date       Nov 28, 2023
*/

#ifndef AWXAUDIOOBJEXT_H
#define AWXAUDIOOBJEXT_H

/*!
*   xaf mandatory includes
*/
#include "AudioObject.h"

#define AWXAUDIOOBJEXT_VERSION_MAJOR             (0x01)
#define AWXAUDIOOBJEXT_VERSION_MINOR             (0x01)
#define AWXAUDIOOBJEXT_VERSION_REVISION          (0x06)
#define AWXAUDIOOBJEXT_TUNING_VERSION_MAJOR      (0x01)
#define AWXAUDIOOBJEXT_TUNING_VERSION_MINOR      (0x00)

/** here you can add all required include files required for    
    the core functionality of your objects
**/

#define AWX_AUDIO_OBJ_EXT_NUM_PARAMS         1
#define FLOAT_ARRAY_SIZE                     10
#define NUM_DIMENSION_VAR                    1

/**
*    brief Simple example object to provide a starting point for a new audio object 
*/
class CAwxAudioObjExt : public CAudioObject
{
public:
    static AOVersion version;
    CAwxAudioObjExt();
    virtual ~CAwxAudioObjExt();

    /**
    *   Refer AudioObject.h for description
    */

	/*
     *   It returns the class size of the given audio object.
     */
    xUInt32 getSize() const OVERRIDE;

	/*
     This function initializes all the object variables and parameters. In this method, the object shall initialize all its memory to appropriate values.
    */
    void init() OVERRIDE;
    void calc(xAFAudio** inputs, xAFAudio** outputs) OVERRIDE;

	/**
     *    brief  This method is called when an object receives updated tuning data.
     *    param  subblock               index selects the subblock
     *    param  offsetBytes               points the offset in param memory (bytes)
     *    param  sizeBytes                   number of parameters to be updated (bytes)
     *    param  shouldAttemptRamp      whether this data should be applied instantly, or the AO should attempt ramping
     */
    void tuneXTP(xSInt32 subblock, xSInt32 adrBytes, xSInt32 sizeBytes, xBool shouldAttemptRamp) OVERRIDE;

	/**
	*    brief  Retrieves pointer to the start of the subblock
	*    param  subBlock subblock number
	*    return start address of the subblock
	*/
    xInt8* getSubBlockPtr(xUInt16 subBlock) OVERRIDE;

	/**
	*    Returns the size of the sub block indicated by 'subBlock'
	*    param  subBlock the ID of the state subBlock we want to get the size of
	*    return size of subBlock
	*/

    xSInt32 getSubBlockSize(xUInt16 subblock) OVERRIDE;

	/**
    *   Assigns the additional configuration as the object requires.
    */
	void assignAdditionalConfig() OVERRIDE;

	/**
     *    Control method to set the new value
     *    param index - pin index of the object's control input we are writing to
     *    param value - value we are writing
     */
	xSInt32 controlSet(xSInt32 index, xFloat32 param) OVERRIDE;

	/**
     *    brief  It reads the array of applied gain values through additional configuration "Max Gain per channel" for each channel.
     *    param    index  denotes the channel index
     *    return   Gain value read for each channel.
     */
    xFloat32 getMaxGain(xSInt32 index);

	enum AddnlVars {MAX_GAIN_PER_CHANNEL, THIRD_PARTY_MEM_BLK , AWX_EXT_NUM_ADD_VARS };
	enum Modes { GAIN, GAIN_WITH_CONTROL, AWX_EXT_NUM_MODES };
	enum MemAccess { DISABLE_BLOCK, ENABLE_BLOCK };
	enum memoryRecords { PARAM, COEFF, FLOATARRAY, NUM_MEM_RECORDS } memRecs;
    enum PARAMS { NUM_PARAMS_PER_CHANNEL = 2 };
	xInt8 m_EnMemory;

protected:
    xFloat32* m_Coeffs;                                         ///< internal pointer to COEFF memrec
    xFloat32* m_Params;                                         ///< internal pointer to PARAM memrec
	xFloat32* m_MemBlock;                                       ///< internal pointer to FLOATARRAY memrec

private:
	/**
     *    brief  for each channel, checks the gain limits and calculates gain coefficient
     *    param    channel     channel index
     *    param    gainIndB    gain in dB
     */
    void calcGain(xSInt32 channelIndex, xFloat32 gainIndB);
};
#endif //AWXAUDIOOBJEXT_H


Audio Object Class

This section provides a description of the base class. The tables below show the class members and methods of the CAudioObject class that a developer would need to use.

CAudioObject Members

  • m_Owner – This is the audio processing class that ‘owns’ this audio object.
  • m_MemRecPtrs – This is an array which holds the address of the start of each memory record.
  • tObjectProperties – This is a struct containing the object properties:
    • Object type
    • Number of audio inputs
    • Number of audio outputs
    • Number of elements
    • Mode
    • Name of the audio object
    • Block ID
    • *AdditionalVars
    • SizeofAdditionalVars
    • NumMemRecords
    • *MemRecordsInfo
  • m_NumAudioIn – This is the number of audio input channels.
  • m_NumAudioOut – This is the number of audio output channels.
  • m_NumElements – This is the number of elements (e.g., filters, taps) per channel.
  • m_Mode – This is the audio object mode. For example, a mode with a value of zero could represent a matrix mixer that operates on linear gains, while mode one could represent a mixer that operates on a logarithmic scale.
  • m_AdditionalSFDConfig – This is a pointer (void) to the additional data an object requires for configuration.
  • m_BlockLength – This is the block length in samples.
  • m_Type – This is the audio object type, defined in the object properties.
  • m_Name – This is the name of the audio object.
  • m_BlockID – This is the ID of the block in a specific signal flow.
  • m_NumControlIn – This is the number of control data input channels.
  • m_NumControlOut – This is the number of control data output channels.
  • m_ControlConfig – A list of audio objects and their control input channel numbers, to which the current audio object’s control output channels are connected in order. There are two elements for each control output channel:
    • the destination audio object
    • the destination control input channel number

CAudioObject Methods

  • Constructor – This sets the following:
    • number of input and output audio channels
    • number of elements
    • object operation mode
    • processing block length
    • sample rate
    • address
    • memory table
  • assignAdditionalConfig() – This dereferences the m_AdditionalVariables pointer to use the additional configuration parameters as needed.
  • getSubBlockPtr() – Retrieves a pointer to the start of the subblock in the audio object.
  • getSubBlockSize() – Returns the size (in bytes) of the sub-block indicated by ‘subBlock’, where subBlock is the ID of the state sub-block whose size we want to get.
  • init() – This initializes all internal variables and parameters. This is called by CAudioProcessing::initAudioObjects().
  • calc() – This function implements the module functionality or algorithm that runs every audio interrupt. Before this function is called, the m_Inputs and m_Outputs objects must be set by the CAudioProcessing object. This is called by CAudioProcessing::calcProcessing() for every frame interval.
  • tuneXTP() – This performs any required operations after the parameter memory is updated. This is called by CAudioProcessing::setAudioObjectTuning() and is triggered by the tuning tool.
  • setControlOut() – This is a helper function for writing a value to one of the object’s outputs.
  • controlSet() – This is called when controls like volume, bass, fade, RPM, and throttle are changed. These variables should live in state memory.
  • getXmlSVTemplate() – This function implements the generation of state variable templates used in the Device Description File on the computer.
  • getXmlObjectTemplate() – This function implements the generation of object templates used in the Device Description File on the computer.
  • getXmlFileInfo() – This function generates the Device.ddf file through the SFD. This function is enabled only when generating Device Description Files on the computer.
  • getStateMemForLiveStreamingPtr() – This function returns the address and length of the state variable for live streaming.

Metadata

Metadata is design-time information about the audio objects, used to describe their features and attributes. Metadata is stored in the audio object code. This information can be used to convey memory usage or to check compatibility between audio objects.
It also provides the tool with constraints on parameters, and with information describing controls and audio channels.

There are three types of metadata:

  • Dynamic: accepts the configuration parameters being considered and provides data specific to those parameters.
  • Static: constant data that does not take in any parameters.
  • Real-time: metadata specific to a connected target device.

Static Metadata

The Static Metadata represents data that will not change based on configuration. It is provided as is.

There are two API methods related to this feature.

  • createStaticMetadata
  • getStaticMetadata
virtual void createStaticMetadata();
staticMetadata getStaticMetadata() { return m_StaticMetadata; }

Create Static Metadata

This method is intended to be overwritten by each instance of AudioObject. The goal is to populate the protected member m_StaticMetadata. There is an example of how to do this in AudioObject.cpp. The basic audio objects included within xAF also implement this method appropriately.

This method should be overridden by any object updating to the new API. Here are the relevant details:

  • minReqXafVersion – set this to an integer which is related to the major version of xAF. (ACDC == 1, Beatles == 2, etc)
  • isExtendedObjIdRequired – false for most objects. This flag enables support for more than 256 subblocks.
  • supSampleRates – list of all supported samples rates (leave blank if there are no restrictions)
  • supBlockSizes – list of all supported block sizes (leave blank if there are no restrictions)
  • supDataFormats – list of supported calcObject data formats (leave blank if there are no restrictions)
  • audioObjectVersion – Condensed three part version number. Created with helper method :
    • void setAudioObjectVersion(unsigned char major, unsigned char minor, unsigned char revision)
    • It is up to audio object to determine how to manage these versions.
  • tuningVersion – Condensed two part version number. Created with helper method :
    • void setTuningVersion(unsigned char major, unsigned char minor)
    • These version numbers must only be changed when appropriate.
    • Follow these rules:
      • Increment the minor version when the new release has *additional* tuning but previous tuning data can still be loaded successfully.
      • Increment the major version when the new release is not compatible at all with previous tuning data.
      • If tuning structure does not change, do not change this version
  • authorList – fill with list of authors if desired
  • creationDay – date of creation for the object
  • certified – whether this object has undergone certification
  • inPlaceComputationEnabled – whether this object requires input and output buffers to be the same (see below)
  • isAddVarUpdateRequired – whether the tuning tool should assume additional vars can change any time main object parameters are updated.
    • Set this to true if your additional variable sizes are based in some way on inputs.
    • Example: Number of Input channels is configurable by the user – and the first additional variable size is always equal to the number of input channels.

Get Static Metadata

This method simply returns a copy of m_StaticMetadata. It is not virtual.

In-Place Computation

This will be deprecated here and moved to dynamic metadata, to support a target/core specific implementation. The static member will be kept for backward compatibility reasons only and will be deprecated after some time.
Do not use this struct member in future implementations!

Dynamic Metadata

Dynamic metadata creation is similar to static, but it accepts arguments for the creation process, hence the name. The object receives all configuration data being considered (in most cases this would be by GTT) and writes relevant information to the member m_DynamicMetadata in response.

virtual void createDynamicMetadata(ioObjectConfigInput& configIn, ioObjectConfigOutput& configOut);
dynamicMetadata getDynamicMetadata() { return m_DynamicMetadata; }

Create Dynamic Metadata

createDynamicMetadata is called after a successful call to getObjectIo. It can further restrict values in the ioObjectConfigOutput struct. All required information is passed in with configIn.

  • audioIn & audioOut – instances of type metaDataDescription which label and set restrictions for inputs and outputs. Label need not be specified if a generic label will suffice. (eg: Input 1) If not, supply a label for each input and output.
  • controlIn & controlOut – are vectors of type metaDataControlDescription which label and specify value ranges for each control input. The number of controls is dictated by other. parameters, so we don’t have to bound the min and max. Note: Min and Max are not enforced, they are only informative to the user.
  • estMemory – Estimated memory consumption for the current configuration (in bytes).
  • estMIPS – estimated consumption of processor (in millions of cycles per second, so not really MIPS).

Get Dynamic Metadata

This method simply returns a copy of m_DynamicMetadata. Note that it is not virtual.

Description of Structures

These structures are used by the tool during signal design. The input configuration struct holds all *attempted* parameters, the output struct is used to constrain audio inputs and outputs and report the correct number of control inputs and outputs.

In-Place Computation

This feature allows Audio Objects to give the GTT the ability to operate in in-place computation mode. In this mode, the audio object uses the same buffers for input and output. This option has been moved from static metadata to dynamic metadata to support a kernel-based decision. This allows the AO developer to decide, based on the target architecture, whether or not it is beneficial to run the calc function in-place.

GTT analyses the signal flow and calculates the number of buffers required for the given signal flow. If isInplaceComputationSupported is set by the audio object developer, GTT tells the framework to allocate input buffers only.

The isInplaceComputationSupported flag can be checked in the audio object’s dynamic metadata.

For example,  Gain configured for 6 channels :

  • If isInplaceComputationSupported is not set, it will use a total of 12 buffers.
  • If isInplaceComputationSupported is set, it will use a total of 6 buffers.

     

The in-place computation feature has the following benefits:

  • reduces flash size.
  • reduces the number of IO streams, which improves memory performance on embedded targets.

Current Limitations and Additional Conditions

An audio object is considered for in-place computation only if it satisfies the following three conditions:

  • Audio Object should have dynamic metadata flag isInplaceComputationSupported set to true for the selected core type.
  • Audio Object should have equal number of input and output pins.
  • All audio pins should be connected.

Debug and Monitoring

A number of features are planned for debugging and monitoring; currently, live streaming is implemented and is described below.

Live streaming of state variable or state memory

To enable live streaming for a particular state variable, the following steps need to be performed by the audio object.

1. The XML section of the state variable has to be updated to convey to GTT that the state variable is streamable. The optional variable after the bit converter has to be set to true to enable streaming of the state variable.
The code snippet from the CTemplate::getXmlObjectTemplate function conveys to GTT that the state variable "State1Value" is enabled for streaming by setting the optional variable after the bit converter to true.
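
Based on the xAFWriteStateVariable call shown in Example 2 above (the surrounding values here are placeholders, not the actual CTemplate code), enabling streaming for a state variable might look like:

xAFWriteStateVariable("State1Value",                 // name
                      id,                            // id
                      NULL,                          // control law
                      NULL,                          // unit type
                      DataTypes[xAF_FLOAT],          // data type
                      -1000.0,                       // min (placeholder)
                      1000.0,                        // max (placeholder)
                      0.0,                           // default (placeholder)
                      0u,                            // offset (placeholder)
                      NULL,                          // encode value
                      NULL,                          // decode value
                      DataTypeConverters[xAF_FLOAT], // bit converter
                      true                           // optional flag after the bit converter: true enables streaming
                      );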

2. For uploading data from the framework to GTT, the following public functions have to be overridden or implemented:

  • CAudioObject::getStateMemForLiveStreamingPtr
  • CAudioObject::getDataFormatForLiveStreamingPtr
CAudioObject::getStateMemForLiveStreamingPtr() takes in 4 arguments: streamIndex, subBlockId, a pointer to hold the memory address of the state variable (stateMem), and the number of bytes to be streamed (len).

The streamIndex and subBlockId are passed into the audio object to enable the calculation of which channel of the state variable is to be streamed. Based on this calculation, the audio object has to update the variables stateMem and len.

The code snippet from CTemplate::getStateMemForLiveStreamingPtr shows an example implementation. In this example code, sub-blocks are not used for the state variables and hence subBlockId always has to be zero. The first state variable (streamIndex 0) is mute, which is not enabled for streaming in the XML file. The state variables with streamIndex 1 to 6 are enabled for streaming in the XML file, and stateMem and len are updated for them.

The len argument of getStateMemForLiveStreamingPtr conveys how many bytes are going to be streamed. If only one float value is streamed, len is 4. If n float values need to be streamed, len has to be 4 times n. The audio object also has to make sure that len bytes are allocated for the state variable and that the starting address is assigned to the stateMem variable.

CAudioObject::getDataFormatForLiveStreamingPtr() takes in 2 arguments: streamIndex and subBlockId.

The streamIndex and subBlockId are passed into the audio object so it can return the data format of the state variable to be streamed.

The code snippet from CTemplate::getDataFormatForLiveStreamingPtr shows an example implementation. In this example code, sub-blocks are not used for the state variables and hence subBlockId always has to be zero. The first state variable (streamIndex 0) is mute, which is not enabled for streaming in the XML file. The state variables with streamIndex 1 to 6 are enabled for streaming in the XML file, and the corresponding data format is returned from this function.
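
Putting step 2 together, a hedged sketch of such an override (the class name, parameter types and the pointer-based signature are assumptions; the real CTemplate code differs):

// Sketch: return the address and length of the state memory to stream.
// streamIndex 0 (mute) is not streamable; indices 1 to 6 map to per-channel state values.
void CTemplateLikeObject::getStateMemForLiveStreamingPtr(xUInt32 streamIndex, xUInt32 subBlockId,
                                                         void** stateMem, xUInt32* len)
{
    (void)subBlockId;                       // sub-blocks are not used here, so this is always 0
    if ((streamIndex >= 1u) && (streamIndex <= 6u))
    {
        *stateMem = &m_States[streamIndex]; // m_States: illustrative per-channel state array
        *len      = sizeof(xFloat32);       // one float value => 4 bytes
    }
    else
    {
        *stateMem = NULL;                   // not enabled for streaming
        *len      = 0u;
    }
}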

3. Data from the framework to GTT is sent based on the commands-per-second value of the state variable, which is sent from GTT.

The framework does the following calculation to decide on which call it needs to send data to GTT.

  • Number of blocks per second = SampleRate / BlockLength
  • Blocks per message = Number of blocks per second / commands per second

The amount of data to be sent is based on the following calculation.

  • Bytes per message = Header size + len
    where

    • Header size is 5.
    • len is in bytes.

The framework sends Bytes per message amount of data to GTT for every Blocks per message.

Example #1:
SampleRate = 48000, BlockLength = 64, len = 4 and commands per second = 10

Number of blocks per second = 48000 / 64 = 750
Blocks per message = 750 / 10 = 75
Bytes per message = 5 + 4 = 9
The framework sends 9 bytes of data to GTT for every 75th block.
Bytes per second = (9 * 10) bytes per sec = 90 bytes per sec

Example #2:
SampleRate = 48000, BlockLength = 64, len = 128 and commands per second = 6

Number of blocks per second = 48000 / 64 = 750
Blocks per message = 750 / 6 = 125
Bytes per message = 5 + 128 = 133
The framework sends 133 bytes of data to GTT for every 125th block.
Bytes per second = (133 * 6) bytes per sec = 798 bytes per sec