Audio Object AO Switch Processing State

This function is called from the CAudioProcessing class whenever an XTP command is received to switch the audio object processing state. It configures the ramping-related variables and the function pointer for the method to be called on every subsequent audio interrupt.

void CAudioObject::aoSwitchProcState(int state, int prevState);

Audio Object Processing States

The audio objects can be set to one of the following states from the GTT:

  1. Normal (default state on boot-up)
  2. Bypass
  3. Mute
  4. Stop

These options are available to all regular audio objects with an equal number of input and output channels. For source objects such as the waveform generator, only the Normal and Mute states are allowed. This feature is not available to interface objects such as Audio-in/out and Control-in/out. For compound audio objects, the selected state is applied to all inner audio objects.

The following tasks are carried out for each state every time an audio interrupt is received (a dispatch sketch follows the list):

  • Normal: Normal operation with update of necessary internal states of the audio object; normal output.
  • Bypass: Normal operation with update of necessary internal states of the audio object; input channel buffer data copied to the output channel buffers.
  • Mute: Normal operation with update of necessary internal states of the audio object; output channel buffers cleared to zero.
  • Stop: Input channel buffer data copied to the output channel buffers (no update of internal states).
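
The exact implementation is product-specific; the sketch below only illustrates the dispatch described above. Only aoSwitchProcState() and the per-state methods (calc, bypass, mute, stop) are taken from this document; m_ProcFunc and the AO_STATE_* values are assumed names used purely for illustration.

// Hypothetical sketch only - not the actual framework code.
typedef void (CAudioObject::*ProcFunc)(xAFAudio** inputs, xAFAudio** outputs);

void CAudioObject::aoSwitchProcState(int state, int prevState)
{
    // 1) configure the ramping-related variables (ramp step, number of ramp blocks)
    // 2) select the method executed on every subsequent audio interrupt
    switch (state)
    {
        case AO_STATE_BYPASS: m_ProcFunc = &CAudioObject::bypass; break; // calc(), then copy input to output
        case AO_STATE_MUTE:   m_ProcFunc = &CAudioObject::mute;   break; // calc(), then clear the outputs
        case AO_STATE_STOP:   m_ProcFunc = &CAudioObject::stop;   break; // copy input to output, no calc()
        default:              m_ProcFunc = &CAudioObject::calc;   break; // Normal processing
    }
}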

Ramping

To ensure a smooth transition across states, linear ramping is provided with a ramp-up or ramp-down time of 50 ms. Ramping is not provided for transitions involving the Bypass state; the individual audio object needs to support this itself.

For transitions between the Normal and Stop states, the output is first ramped down from the present state to the mute state and then ramped up to the target state.
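
As an illustration only (not xAF code), the sketch below shows how a 50 ms linear ramp could be derived from the sample rate and block length; all names are assumptions.

// Hypothetical sketch: how the 50 ms linear ramp could be set up.
struct RampSetup
{
    int   numRampBlocks;   // how many audio interrupts the ramp spans
    float stepPerSample;   // linear gain increment applied per sample
};

static RampSetup setupLinearRamp(float sampleRate, int blockLength)
{
    const float rampTimeSec = 0.05f;                                       // fixed 50 ms
    const int   rampSamples = static_cast<int>(rampTimeSec * sampleRate);  // e.g. 2400 samples @ 48 kHz

    RampSetup setup;
    setup.numRampBlocks = (rampSamples + blockLength - 1) / blockLength;   // round up to whole blocks
    setup.stepPerSample = 1.0f / static_cast<float>(rampSamples);          // 0.0 -> 1.0 over 50 ms
    return setup;
}
// A Normal <-> Stop transition chains a ramp-down followed by a ramp-up,
// so it takes 2 x 50 ms in total.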

Audio Object Bypass

This function is called every time an audio interrupt is received while the audio object is in the “BYPASS” processing state. The calc() function is called from here so that the internal states of the audio object are updated. Subsequently, the data from the input audio buffers is copied to the output audio buffers (overwriting the output data generated by the calc process).

This function takes pointers to input and output audio streams and is called by the CAudioProcessing class when an audio interrupt is received.

void CAudioObject::bypass(xAFAudio** inputs, xAFAudio** outputs);
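
A minimal sketch of the bypass behaviour described above is shown below; the loop, the memcpy usage and the buffer sizing are assumptions, and only calc(), m_NumAudioIn and m_BlockLength come from this document.

#include <cstring>   // std::memcpy

// Hypothetical sketch - the real implementation may differ.
void CAudioObject::bypass(xAFAudio** inputs, xAFAudio** outputs)
{
    calc(inputs, outputs);                      // keep the internal states updated
    for (int ch = 0; ch < static_cast<int>(m_NumAudioIn); ch++)
    {
        // overwrite the generated output with the input data
        std::memcpy(outputs[ch], inputs[ch], m_BlockLength * sizeof(xAFAudio));
    }
}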

Audio Object Mute

This function is called every time an audio interrupt is received while the audio object is in the “MUTE” processing state. The calc() function is called from here so that the internal states of the audio object are updated. Subsequently, the output audio buffers are cleared to zero (overwriting the output data generated by the calc process).

This function takes pointers to input and output audio streams and is called by the CAudioProcessing class when an audio interrupt is received.

void CAudioObject::mute(xAFAudio** inputs, xAFAudio** outputs);

Audio Object Stop

This function is called every time an audio interrupt is received while the audio object is in the “STOP” processing state. The data from the input audio buffers is copied to the output audio buffers without calling calc(), so the internal states of the audio object are not updated. This function is used to save cycles.

This function takes pointers to input and output audio streams and is called by the CAudioProcessing class when an audio interrupt is received.

void CAudioObject::stop(xAFAudio** inputs, xAFAudio** outputs);

Audio Object Ramp-Up

This function is called every time an audio interrupt is received while the audio object is in the transition state of switching from the “MUTE” state to the “NORMAL / STOP” processing state. The calc() function is called from here with “NORMAL / STOP” as the active state and the output is ramped up linearly. The ramp-up time is fixed at 50 ms. The ramp step and the number of times this function needs to be called are computed at the start of the ramp period.

This function takes pointers to input and output audio streams and is called by the CAudioProcessing class when an audio interrupt is received.

void CAudioObject::rampUp(xAFAudio** inputs, xAFAudio** outputs);

Audio Object Ramp-Down

This function is called every time an audio interrupt is received while the audio object is in the transition state of switching from the “NORMAL / STOP” state to the “MUTE” processing state. The calc() function is called from here with “NORMAL / STOP” as the active state and the output is ramped down linearly. The ramp-down time is fixed at 50 ms. The ramp step and the number of times this function needs to be called are computed at the start of the ramp period.

This function takes pointers to input and output audio streams and is called by the CAudioProcessing class when an audio interrupt is received.

void CAudioObject::rampDown(xAFAudio** inputs, xAFAudio** outputs);

Audio Object Ramp Down-Up

This function is called every time an audio interrupt is received while the audio object is in the transition state of switching from “NORMAL” to “STOP” or from “STOP” to “NORMAL”. The transition is in two parts: ramp down from the present state to the MUTE state, followed by ramp up from the MUTE state to the target state. The calc() function is called from here with the present state as the active state during ramp-down and the target state as the active state during ramp-up. Linear ramping is applied, and the ramp-down and ramp-up times are fixed at 50 ms each. The ramp step and the number of times this function needs to be called are computed at the start of the ramp period.

This function takes pointers to input and output audio streams and is called by the CAudioProcessing class when an audio interrupt is received.

void CAudioObject::rampDownUp(xAFAudio** inputs, xAFAudio** outputs);

Example 4 – AwxAudioObjExt.h

// ============================================================
// (C) 2017 Harman International Industries, Incorporated.
// Confidential & Proprietary. All Rights Reserved.
// ============================================================

/**
*   file       AwxAudioObjExt.h
*   brief      Simple demo audio object to start development of a new audio object - Header file
*   details    Project    Extendable Audio Framework
*   copyright  Harman/Becker Automotive Systems GmbH
*              2017
*              All rights reserved
*   author     xAF Team
*   date       Nov 28, 2023
*/

#ifndef AWXAUDIOOBJEXT_H
#define AWXAUDIOOBJEXT_H

/*!
*   xaf mandatory includes
*/
#include "AudioObject.h"

#define AWXAUDIOOBJEXT_VERSION_MAJOR             (0x01)
#define AWXAUDIOOBJEXT_VERSION_MINOR             (0x01)
#define AWXAUDIOOBJEXT_VERSION_REVISION          (0x06)
#define AWXAUDIOOBJEXT_TUNING_VERSION_MAJOR      (0x01)
#define AWXAUDIOOBJEXT_TUNING_VERSION_MINOR      (0x00)

/** here you can add all the include files required for
    the core functionality of your objects
**/

#define AWX_AUDIO_OBJ_EXT_NUM_PARAMS         1
#define FLOAT_ARRAY_SIZE                     10
#define NUM_DIMENSION_VAR                    1

/**
*    brief Simple example object to provide a starting point for a new audio object 
*/
class CAwxAudioObjExt : public CAudioObject
{
public:
    static AOVersion version;
    CAwxAudioObjExt();
    virtual ~CAwxAudioObjExt();

    /**
    *   Refer AudioObject.h for description
    */

	/*
     *   It returns the class size of the given audio object.
     */
    xUInt32 getSize() const OVERRIDE;

	/*
     This function initializes all the object variables and parameters. In this method, the object shall initialize all its memory to appropriate values.
    */
    void init() OVERRIDE;
    void calc(xAFAudio** inputs, xAFAudio** outputs) OVERRIDE;

	/**
     *    brief  This method is called when an object receives updated tuning data.
     *    param  subblock               index of the subblock to be updated
     *    param  offsetBytes            offset into the parameter memory (bytes)
     *    param  sizeBytes              number of parameter bytes to be updated
     *    param  shouldAttemptRamp      whether this data should be applied instantly or whether the AO should attempt ramping
     */
    void tuneXTP(xSInt32 subblock, xSInt32 offsetBytes, xSInt32 sizeBytes, xBool shouldAttemptRamp) OVERRIDE;

	/**
	*    brief  Retrieves pointer to the start of the subblock
	*    param  subBlock subblock number
	*    return start address of the subblock
	*/
    xInt8* getSubBlockPtr(xUInt16 subBlock) OVERRIDE;

	/**
	*    Returns the size of the sub block indicated by 'subBlock'
	*    param  subBlock the ID of the state subBlock we want to get the size of
	*    return size of subBlock
	*/

    xSInt32 getSubBlockSize(xUInt16 subblock) OVERRIDE;

	/**
    *   Assigns the additional configuration as the object requires.
    */
	void assignAdditionalConfig() OVERRIDE;

	/**
     *    Control method to set the new value
     *    param index - pin index of the object's control input we are writing to
     *    param value - value we are writing
     */
	xSInt32 controlSet(xSInt32 index, xFloat32 param) OVERRIDE;

	/**
     *    brief  Reads the per-channel maximum gain value provided through the additional configuration "Max Gain per channel".
     *    param    index  denotes the channel index
     *    return   Maximum gain value for the given channel.
     */
    xFloat32 getMaxGain(xSInt32 index);

	enum AddnlVars {MAX_GAIN_PER_CHANNEL, THIRD_PARTY_MEM_BLK , AWX_EXT_NUM_ADD_VARS };
	enum Modes { GAIN, GAIN_WITH_CONTROL, AWX_EXT_NUM_MODES };
	enum MemAccess { DISABLE_BLOCK, ENABLE_BLOCK };
	enum memoryRecords { PARAM, COEFF, FLOATARRAY, NUM_MEM_RECORDS } memRecs;
    enum PARAMS { NUM_PARAMS_PER_CHANNEL = 2 };
	xInt8 m_EnMemory;

protected:
    xFloat32* m_Coeffs;                                         ///< internal pointer to COEFF memrec
    xFloat32* m_Params;                                         ///< internal pointer to PARAM memrec
	xFloat32* m_MemBlock;                                       ///< internal pointer to FLOATARRAY memrec

private:
	/**
     *    brief  for each channel, checks the gain limits and calculates gain coefficient
     *    param    channel     channel index
     *    param    gainIndB    gain in dB
     */
    void calcGain(xSInt32 channelIndex, xFloat32 gainIndB);
};
#endif //AWXAUDIOOBJEXT_H


Example 2 – AwxAudioObjExtToolbox.cpp

/*!
*   file      AwxAudioObjExtToolbox.cpp
*   brief     AwxAudioObjExt Toolbox Source file
*   details   Implements the AwxAudioObjExt signal design API
*   details   Project    Extendable Audio Framework
*   copyright Harman/Becker Automotive Systems GmbH
*             2020
*             All rights reserved
*   author    xAF Team
*/

/*!
*   xaf mandatory includes to handle the toolbox related data
*/
#include "AwxAudioObjExtToolbox.h"
#include "XafXmlHelper.h"
#include "AudioObjectProperties.h"
#include "AwxAudioObjExt.h"
#include "XafMacros.h"
#include "AudioObject.h"



// the revision number may be different for specific targets
#define MIN_REQUIRED_XAF_VERSION            (RELEASE_U)
// mode specific defines
#define AUDIO_IN_OUT_MIN                    (1)
#define AUDIO_IN_OUT_MAX                    (255)
#define EST_MEMORY_CONSUMPTION_NA           (0)         // memory consumption not available/measured
#define EST_CPU_LOAD_CONSUMPTION_NA         (0.f)       // cpu load not available/measured
#define CONTROL_GAIN_MAX                   (30.0f)
#define CONTROL_GAIN_MIN                   (-128.0f)
#define CONTROL_GAIN_IN_LABEL              "Gain"
#define MAX_CONFIG_MIN_GAIN_dB              (-12.0f)
#define MAX_CONFIG_MAX_GAIN_dB              (30.0f)
#define MAX_GAIN_DEFAULT_GAIN_dB            (10.0f)

CAwxAudioObjExtToolbox::CAwxAudioObjExtToolbox()
{
}

CAwxAudioObjExtToolbox::~CAwxAudioObjExtToolbox()
{
}

static CAudioObjectToolbox::additionalSfdVarDescription theVar;
const CAwxAudioObjExtToolbox::additionalSfdVarDescription* CAwxAudioObjExtToolbox::getAdditionalSfdVarsDescription(xUInt32 index)

{
    CAudioObjectToolbox::additionalSfdVarDescription* ptr = &theVar;
    static CAudioObjectToolbox::MinMaxDefault addVar1 = { MAX_CONFIG_MIN_GAIN_dB ,MAX_CONFIG_MAX_GAIN_dB ,MAX_GAIN_DEFAULT_GAIN_dB }; //Min,max,default values for additional config variable
    static CAudioObjectToolbox::MinMaxDefault addVar2 = { 0, 1, 0 }; //Min,max,default values
    
    static CAudioObjectToolbox::addVarsSize m_AddtionalVarSize1[NUM_DIMENSION_VAR] =
    {
        // size, label, start index, increment
        {1, "Max Gain per channel(dB)", 0, 1}
    };
    static CAudioObjectToolbox::addVarsSize m_AddtionalVarSize2[NUM_DIMENSION_VAR] =
    {
        // size, label, start index, increment
        {1, "Disable: 0nEnable  : 1", 0, 1}
    };

     if (index == 0)
     {
       theVar.mP_Label = "Max Gain per channel";
       theVar.m_DataType = xAF_FLOAT_32;
       theVar.mP_RangeSet = &addVar1;
       theVar.m_Dimension = 1;
       theVar.m_DataOrder = xAF_NONE;
       //size varies according to the number of channels
       m_AddtionalVarSize1[0].m_Size = static_cast<xUInt32>(m_NumAudioIn);
       theVar.mP_MaddVarsSize = m_AddtionalVarSize1;
     }
     else if (index == 1)
     {
         theVar.mP_Label = "Abstracted Tuning Memory";
         theVar.m_DataType = xAF_UCHAR;
         theVar.mP_RangeSet = &addVar2;
         theVar.m_Dimension = 1;
         theVar.m_DataOrder = xAF_NONE;
         theVar.mP_MaddVarsSize = m_AddtionalVarSize2;
     }
     else
     {
         /* Invalid index. Return NULL */
         ptr = NULL;
     }
     return ptr;
}


const CAudioObjectToolbox::tObjectDescription* CAwxAudioObjExtToolbox::getObjectDescription()
{
    static const CAudioObjectToolbox::tObjectDescription descriptions =
    {
        1, 1, 0, 0, "AwxAudioObjExt", "Simple Object to start with for 3rd party/external objects integration", "External", AWX_EXT_NUM_ADD_VARS, AWX_EXT_NUM_MODES
    };
    return &descriptions;
}


const CAudioObjectToolbox::tModeDescription* CAwxAudioObjExtToolbox::getModeDescription(xUInt32 mode)
{
    static const CAudioObjectToolbox::tModeDescription modeDescription[AWX_EXT_NUM_MODES] =
    {
        {"Gain", "No control input", 0, 0, "", CFG_NCHANNEL},
        {"GainWithControl", "One gain control input pin gets added", 0, 0, "", CFG_NCHANNEL},
    };
    return (mode < (sizeof(modeDescription) / sizeof(tModeDescription))) ? &modeDescription[mode] : static_cast<tModeDescription*>(NULL);
}


xAF_Error CAwxAudioObjExtToolbox::getObjectIo(ioObjectConfigOutput* configOut)
{
    if (static_cast<xUInt32>(GAIN_WITH_CONTROL) == m_Mode)
    {
        configOut->numControlIn = 1;
        configOut->numControlOut = 0;
    }
    else 
    {
        configOut->numControlIn = 0;
        configOut->numControlOut = 0;
    }

    return xAF_SUCCESS;
}


xUInt32 CAwxAudioObjExtToolbox::getXmlObjectTemplate(tTuningInfo* info, xInt8* buffer, xUInt32 maxLen)
{
    initiateNewBufferWrite(buffer, maxLen);
    
    // add your tuning parameters here in order to show up in the tuning tool
    // the number of params specified here, depends on the number to be tuned in GTT
    // in this example we are exposing the tuning parameters as arrays split into 2 subblocks

    //template 1
    xSInt32 numAudioIn = static_cast<xSInt32>(m_NumAudioIn);
    xSInt32 blockID = info->Global_Object_Count; // unique to this instance
    xUInt32 id = 0;

    for (xSInt32 i = 0; i < numAudioIn; i++)
    {
        // The max value to show up for each channel shall be specified by this function
        xFloat32 gainval = getMaxGain(i);
        xAFOpenLongXMLTag("Object");
        string templateName = string("").append("AwxAudioObjExtTuneTemplate").append(xAFIntToString(i + 1)).append(xAFIntToString(blockID));
        xAFAddFieldToXMLTag("Key", templateName.c_str());
        xAFEndLongXMLTag();
        xAFWriteQuickXmlTag("ExplorerIcon", "Object");
        xAFWriteXmlTag("StateVariables", XML_OPEN);
        xAFWriteStateVariable("Gain",                          // name
                              id,                              // id
                              NULL,                            // control law
                              "dB",                            // unit type
                              DataTypes[xAF_FLOAT],            // data type
                              -128.0,                          // min
                              gainval,                         // max
                              0.0,                             // default
                              0u,                              // offset
                              NULL,                            // encode value
                              NULL,                            // decode value
                              DataTypeConverters[xAF_FLOAT],   // bit converter
                              false                            // disable streaming
                              );
        id++;
        xAFWriteStateVariable("Mute",                          // name
                              id,                              // id
                              NULL,                            // control law
                              NULL,                            // unit type
                              DataTypes[xAF_UINT],             // data type
                              0.0,                             // min
                              1.0,                             // max
                              0.0,                             // default
                              4u,                              // offset
                              NULL,                            // encode value
                              NULL,                            // decode value
                              DataTypeConverters[xAF_UINT],    // bit converter
                              false                            // disable streaming
                              );
        id++;
        xAFWriteXmlTag("StateVariables", XML_CLOSE);
        xAFWriteXmlTag("Object", XML_CLOSE);
    }

    //template 2
    //It shows up in the State Variable Explorer if the additional configuration "Abstracted Tuning Memory" is set to 1 (enabled).
    if (static_cast<xInt8>(ENABLE_BLOCK) == m_EnMemory)
    {
        xAFOpenLongXMLTag("Object");
        xAFAddFieldToXMLTag("Key", "AwxAudioObjExtArrayTemplate");
        xAFEndLongXMLTag();
        xAFWriteQuickXmlTag("ExplorerIcon", "Object");
        xAFWriteXmlTag("StateVariables", XML_OPEN);
        xAFWriteStateVariableBuffer(/* name = */         "FloatArray",
                                    /* id = */           id,
                                    /* type = */         FLOATARRAY_SV,
                                    /* size = */         FLOAT_ARRAY_SIZE,
                                    /* streamIdx = */    id,
                                    /* minVal = */       -1000.0,
                                    /* maxVal = */       1000.0,
                                    /* defaultVal = */   0.0,
                                    /* offset = */       0,
                                    /* isStreamable = */ false);
        xAFWriteXmlTag("StateVariables", XML_CLOSE);
        xAFWriteXmlTag("Object", XML_CLOSE);
    }

    return finishWritingToBuffer();
}


xUInt32 CAwxAudioObjExtToolbox::getXmlFileInfo(tTuningInfo* info, xInt8* buffer, xUInt32 maxLen)
{
    initiateNewBufferWrite(buffer, maxLen);
    xUInt32 hiqnetInc = 0u;
    xUInt8 subBlock = 0u;
    xSInt32 blockID = info->Global_Object_Count; // unique to this instance

    xAFWriteObject(info->Name, static_cast<xUInt32>(info->Global_Object_Count), hiqnetInc, static_cast<xUInt32>(info->HiQNetVal), 0);

    xAFWriteXmlTag("Objects", XML_OPEN);
    xSInt32 numAudioIn = static_cast<xSInt32>(m_NumAudioIn); // m_NumAudioIn == m_NumAudioOut
    
    xAFWriteXmlObjectContainer("Gains", hiqnetInc, subBlock, PARAM_CATEGORY);
    xAFWriteXmlTag("Objects", XML_OPEN);
    for(xSInt32 i = 0; i < numAudioIn; i++)
    {
        string templateName = string("").append("AwxAudioObjExtTuneTemplate").append(xAFIntToString(i+1)).append(xAFIntToString(blockID));
        xAFWriteXmlObjectTemplateInstance(templateName.c_str(), string("Ch").append(xAFIntToString(i+1)).c_str(), i * 8, 1 + i);
        // 8 is related here to the internal memory layout where a "state/tuning" is related to a block of N bytes and within this needs to be offset
        hiqnetInc++;
    }
    xAFWriteXmlTag("Objects", XML_CLOSE);
    xAFWriteXmlTag("Object", XML_CLOSE); //gains

    if (static_cast<xInt8>(ENABLE_BLOCK) == m_EnMemory)
    {
        xAFWriteXmlObjectBlockOffset("AwxAudioObjExtArrayTemplate", "FloatArrayMemory", subBlock, static_cast<xUInt32>(info->HiQNetVal), hiqnetInc, PARAM_CATEGORY);
        hiqnetInc++;
    }
    xAFWriteXmlTag("Objects", XML_CLOSE);

    xAFWriteXmlTag("Object", XML_CLOSE);

    return finishWritingToBuffer();
}


void CAwxAudioObjExtToolbox::createStaticMetadata()
{
    m_StaticMetadata.minReqXafVersion = static_cast<xUInt32>(MIN_REQUIRED_XAF_VERSION);

    setAudioObjectVersion(AWXAUDIOOBJEXT_VERSION_MAJOR, AWXAUDIOOBJEXT_VERSION_MINOR, AWXAUDIOOBJEXT_VERSION_REVISION);
    setTuningVersion     (AWXAUDIOOBJEXT_TUNING_VERSION_MAJOR, AWXAUDIOOBJEXT_TUNING_VERSION_MINOR);

    m_StaticMetadata.supDataFormats.push_back(xAF_DATATYPE_FLOAT);

    //creation/release date
    setCreationDate(2022, 8, 4);

    //Simple AO supports in-place computation
    m_StaticMetadata.inPlaceComputationEnabled = true;
    //This flag allows to set whether the object dynamically updates its additional vars based on input params
    m_StaticMetadata.isAddVarUpdateRequired = true;
}

void CAwxAudioObjExtToolbox::createDynamicMetadata(ioObjectConfigInput& configIn, ioObjectConfigOutput& configOut)
{
    metaDataControlDescription ctrlDesc;
    // define audio in metadata
    m_DynamicMetadata.audioIn.Min = AUDIO_IN_OUT_MIN;
    m_DynamicMetadata.audioIn.Max = AUDIO_IN_OUT_MAX;

    // define audio out metadata
    m_DynamicMetadata.audioOut.Min = AUDIO_IN_OUT_MIN;
    m_DynamicMetadata.audioOut.Max = AUDIO_IN_OUT_MAX;

    switch (configIn.mode)
    {
        case static_cast<xUInt32>(GAIN_WITH_CONTROL):
        // define control in min, max and label values for the control pin
            
        ctrlDesc.Min = CONTROL_GAIN_MIN;
        ctrlDesc.Max = CONTROL_GAIN_MAX;
        ctrlDesc.Label = CONTROL_GAIN_IN_LABEL;
        m_DynamicMetadata.controlIn.push_back(ctrlDesc);
        break;

        default:
        break;
    }

    m_DynamicMetadata.estMemory = EST_MEMORY_CONSUMPTION_NA;
    m_DynamicMetadata.estMIPS = EST_CPU_LOAD_CONSUMPTION_NA;
}

Example 1 – AwxAudioObjExt.cpp

/*!
*   file      AwxAudioObjExt.cpp
*   brief     Simple example audio object for building outside of the xAF repo - Source file
*   details   Implements a simple example functionality
*   details   Project    Extendable Audio Framework
*   copyright Harman/Becker Automotive Systems GmbH
*             2022
*             All rights reserved
*   author    xAF Team
*/

/*!
*   xaf mandatory includes
*/
#include "AwxAudioObjExt.h"
#include "XafMacros.h"
#include "vector.h"

VERSION_STRING_AO(AwxAudioObjExt, AWXAUDIOOBJEXT);
AO_VERSION(AwxAudioObjExt, AWXAUDIOOBJEXT);

/** here you can add all the include files required for
    the core functionality of your objects
**/

#define MAX_CONFIG_MIN_GAIN_dB              (0.0f)
#define MAX_CONFIG_MAX_GAIN_dB              (30.0f)
#define MAX_GAIN_DEFAULT_GAIN_dB            (10.0f)
#define CONTROL_GAIN_MIN                    (-128.0f)
#define GAINDB_CONVERSION_FACTOR             (0.05f)

CAwxAudioObjExt::CAwxAudioObjExt()
    : m_Coeffs(NULL)
    , m_Params(NULL), m_MemBlock(NULL), m_EnMemory(DISABLE_BLOCK)
{
}

CAwxAudioObjExt::~CAwxAudioObjExt()
{
}

void CAwxAudioObjExt::init()
{   
    m_Params = static_cast<xFloat32*>(m_MemRecPtrs[PARAM]);
    m_Coeffs = static_cast<xFloat32*>(m_MemRecPtrs[COEFF]);
	
	if (ENABLE_BLOCK == m_EnMemory)
	{
		m_MemBlock = static_cast<xFloat32*>(m_MemRecPtrs[FLOATARRAY]);
	}
	if (static_cast<xUInt32>(GAIN_WITH_CONTROL) == m_Mode)
	{
		m_NumControlIn = 1;
		m_NumControlOut = 0;
	}
	else
	{
		m_NumControlIn = 0;
		m_NumControlOut = 0;
	}
}

void CAwxAudioObjExt::assignAdditionalConfig()
{
	xInt8*  addVars8Ptr = reinterpret_cast<xInt8*>(m_AdditionalSFDConfig);
	//Assigning additional configuration variable "Abstracted Tuning Memory".
	if (static_cast<void*>(NULL) != m_AdditionalSFDConfig)
	{
		m_EnMemory = addVars8Ptr[m_NumAudioIn * sizeof(xFloat32)];
	}
}

xFloat32 CAwxAudioObjExt::getMaxGain(xSInt32 index)
{
	xFloat32*  addVars32Ptr = reinterpret_cast<xFloat32*>(m_AdditionalSFDConfig);
	xFloat32 value = addVars32Ptr[index];
	return value;
}

xInt8* CAwxAudioObjExt::getSubBlockPtr(xUInt16 subBlock)
{
    xInt8* ptr = NULL;
  
    // this is just an example of how memory could be split by an AO developer. There is no strict rule
    // how memory has to be split for each subblock
    
    switch(subBlock)
    {
        case 0:
        ptr = reinterpret_cast<xInt8*>(m_Params);
        break;

        case 1:
        ptr = reinterpret_cast<xInt8*>(m_MemBlock);
        break;
        
        default:
        // by default we will return a null ptr, hence wrong subBlock was provided
        break;
    }

    return ptr;
}

xSInt32 CAwxAudioObjExt::getSubBlockSize(xUInt16 subBlock)
{
    xSInt32 subBlockSize = 0;
    
    // this is just an example of how memory could be split by an AO developer. There is no strict rule
    // how memory has to be split for each subblock
    switch(subBlock)
    {
        case 0:
        subBlockSize = static_cast<xSInt32>(sizeof(xFloat32)) * static_cast<xSInt32>(m_NumAudioIn) * NUM_PARAMS_PER_CHANNEL;
        break;

        case 1:
        subBlockSize = (nullptr != m_MemBlock) ? (static_cast<xSInt32>(sizeof(xFloat32)) * FLOAT_ARRAY_SIZE) : 0;
        break;
        
        default:
        // by default we return a size of zero, hence a wrong subBlock was provided
        break;
    }
    return subBlockSize;
}

void CAwxAudioObjExt::calc(xAFAudio** inputs, xAFAudio** outputs)
{
	if (static_cast<xInt8>(ENABLE_BLOCK) == m_EnMemory)
	{
		xSInt32 numAudioIn = static_cast<xSInt32>(m_NumAudioIn);
		for (xSInt32 i = 0; i < numAudioIn; i++)
		{
			// for example, m_MemBlock[0] could be used to mute all channels
			xFloat32 factor = m_Coeffs[i] * m_MemBlock[0];
			scalMpy(factor, inputs[i], outputs[i], static_cast<xSInt32>(m_BlockLength));
		}
	}
	else
	{
		xSInt32 numAudioIn = static_cast<xSInt32>(m_NumAudioIn);
		for (xSInt32 i = 0; i < numAudioIn; i++)
		{
			scalMpy(m_Coeffs[i], inputs[i], outputs[i], static_cast<xSInt32>(m_BlockLength));
		}
	}
}

void CAwxAudioObjExt::calcGain(xSInt32 channelIndex, xFloat32 gainIndB)
{
	xFloat32 maxGainIndB = getMaxGain(channelIndex);

	LIMIT(gainIndB, CONTROL_GAIN_MIN, maxGainIndB);
	m_Params[channelIndex * NUM_PARAMS_PER_CHANNEL] = gainIndB;

	xUInt32* mutePtr = reinterpret_cast<xUInt32*>(&m_Params[(channelIndex * NUM_PARAMS_PER_CHANNEL) + 1u]);

	m_Coeffs[channelIndex] = (0 == *mutePtr) ? powf(MAX_GAIN_DEFAULT_GAIN_dB, gainIndB * GAINDB_CONVERSION_FACTOR) : 0.f;
}

void CAwxAudioObjExt::tuneXTP(xSInt32 subBlock, xSInt32 offsetBytes, xSInt32 sizeBytes, xBool shouldAttemptRamp)
{
	if(0 == subBlock)
	{
		xUInt32 channelu = static_cast<xUInt32>(offsetBytes) >> 2u;
		xSInt32 channel = static_cast<xSInt32>(channelu) / NUM_PARAMS_PER_CHANNEL;
		while (sizeBytes > 0)
		{
			calcGain(channel, m_Params[channel * NUM_PARAMS_PER_CHANNEL]);
			sizeBytes -= static_cast<xSInt32>(NUM_PARAMS_PER_CHANNEL * sizeof(xFloat32));
			channel++;
		}
	}
	else if(1 == subBlock)
	{ // handle float array related here
	  // values are available in m_MemBlock
	}
	else
	{
	}
} 

xSInt32 CAwxAudioObjExt::controlSet(xSInt32 index, xFloat32 value)
{
	if ((0 == index) && (static_cast<xUInt32>(GAIN_WITH_CONTROL) == m_Mode))
	{
		xSInt32 numAudioIn = static_cast<xSInt32>(m_NumAudioIn);
		for (xSInt32 i = 0; i < numAudioIn; i++)
		{
			calcGain(i, value);
		}
	}
	return 0;
}

xUInt32 CAwxAudioObjExt::getSize() const
{
    return sizeof(*this);
}

General Guidelines

The topics below describe general guidelines for developing audio objects.

Hardware Abstraction

Hardware abstraction is done at the audio object function level, for functions such as calc() and init().

Audio Object Function Level Abstraction

When the audio object function implementations differ significantly across platforms, platform-specific functions should be placed in separate .cpp files (one file per core). A new .cpp file needs to be introduced for every object that has any core-specific code. The object still has a single header file across all platforms. DLL-specific functions should be part of the Win32-specific files.

Folder Structure

The folder structure will look like the following:

Override Defines

Each object also has an “OBJECT”_OVERRIDE define. This define may be placed in the object header. However, for the xAF basic audio objects and the core objects, these defines have been combined into the build\processor\“PLATFORM”\objectOverride“PLATFORM”.cmake file that is included in the build process.

For non-SHARC platforms, the API function-specific defines are available in AudioFramework.h and the Biquad-specific defines are available in private\src\framework\filter\CMakeLists.txt:

For example, build\processor\armv8a\objectOverrideArmv8a.cmake will look like the following:

For SHARC-based platforms, it is not possible to define the override macro as a logical expression of individual API macros as done above. Here the logical operation is done separately with the math directive and subsequently assigned to the overriding macro. Both the API function-specific defines and the Biquad-specific defines are available in the file build\processor\sharc\objectOverrideSharc.cmake:

The FIR object has a SHARC-specific implementation of init() and calc(). The FIR_OVERRIDE define is the union of the XAF_INIT and XAF_CALC defines, as given below:

Files

In the generic implementation of the object, every function that is overridden in optimization files should be surrounded by an ifdef check. Using the example above, the generic file FIR.cpp would look like this:
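
A hedged sketch of this pattern is shown below. The class name CFIR, the include, and the exact guard expressions are assumptions; only FIR_OVERRIDE, XAF_INIT and XAF_CALC are named in this document.

// FIR.cpp - generic implementation (illustrative sketch, not the shipped file)
#include "FIR.h"

#if !(FIR_OVERRIDE & XAF_INIT)
void CFIR::init()
{
    // generic, platform-independent initialization
}
#endif

#if !(FIR_OVERRIDE & XAF_CALC)
void CFIR::calc(xAFAudio** inputs, xAFAudio** outputs)
{
    // generic, platform-independent FIR filtering
}
#endif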

The Sharc file – FIRSharc.cpp – in this case should override both init() and calc():
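
Under the same assumptions as the sketch above, the SHARC file provides the complementary branches:

// FIRSharc.cpp - SHARC-specific overrides (illustrative sketch)
#include "FIR.h"

#if (FIR_OVERRIDE & XAF_INIT)
void CFIR::init()
{
    // SHARC-optimized initialization
}
#endif

#if (FIR_OVERRIDE & XAF_CALC)
void CFIR::calc(xAFAudio** inputs, xAFAudio** outputs)
{
    // SHARC-optimized FIR filtering
}
#endif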

Build System

Based on the target processor, the build system should always include the generic cpp implementation and the processor specific implementation. For example, when compiling for Sharc for the xAF basic audio objects:

Generic Files

SHARC files

Hardware Abstraction for Header files

This hardware abstraction separates platform-specific member variables and member functions from the audio object header file and keeps them in a hardware abstraction class. This class is implemented in a platform-specific .cpp file. The hardware abstraction class members are accessed in the audio object by forward declaring the class in the header and instantiating it as a class member. Memory allocation and assignment for this instance are done in initMemRecord and init of the object. Sometimes the hardware abstraction class needs to access audio object data members. This is handled by declaring a back pointer in the hardware abstraction class and assigning it to the audio object pointer in the constructor. The implementation for the ToneControl audio object is complete, and code snippets for it are given below.

Hardware Abstraction Class

Below is an example implementation of the hardware abstraction class for the ToneControl C66 implementation. Here m_ToneCtrlInst is the back pointer used to access ToneControl data members from within the ToneControl hardware abstraction class; it is initialized in the constructor with the CToneControl pointer.
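
A hedged sketch of the back-pointer pattern follows; only m_ToneCtrlInst and CToneControl are taken from the text above, all other class and member names are assumptions.

// Illustrative sketch, not the actual ToneControl C66 code.
class CToneControl;                          // forward declaration of the audio object

class CToneControlHal                        // hardware abstraction class, implemented in a
{                                            // platform-specific .cpp (here: C66)
public:
    explicit CToneControlHal(CToneControl* owner)
        : m_ToneCtrlInst(owner)              // back pointer set in the constructor
    {
    }

    void process(xAFAudio** inputs, xAFAudio** outputs);   // platform-specific processing

private:
    CToneControl* m_ToneCtrlInst;            // used to reach CToneControl data members
    // platform-specific member variables are kept here, out of ToneControl.h
};

// ToneControl.h only forward declares CToneControlHal and holds a pointer to it;
// the instance memory is allocated and assigned in initMemRecord()/init() of the object.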

Instance of Hardware Abstraction class in Audio Object

Memory Enums

Memory Enums shall be used for clarity and to avoid errors when allocating the memory required by the object. For example, the AudioToControl AO enum and getMemRecords() method are presented below.
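
As a hedged illustration of the pattern (the record names, sizes and the surrounding class are assumptions, not the actual AudioToControl code; the xAF_memRec fields follow the table in the Audio Object Memory section):

// Illustrative sketch only - enum values index the memory table instead of magic numbers.
enum exampleMemRecords { PARAM_REC, COEFF_REC, NUM_RECORDS };

xUInt8 CExampleAudioObjectMemRecs::getMemRecords(xAF_memRec* memTable, xAF_memRec& scratchRecord,
                                                 xInt8 target, xInt8 format)
{
    // persistent tuning parameters (sizes here are purely illustrative and could also
    // depend on target/format, m_NumElements, block length, ...)
    memTable[PARAM_REC].memType    = COEFFCIENT_MEM;
    memTable[PARAM_REC].size       = m_NumAudioIn * NUM_PARAMS_PER_CHANNEL * sizeof(xFloat32);
    memTable[PARAM_REC].alignment  = 8;
    memTable[PARAM_REC].memLatency = 1;               // 1 = low latency ... 5 = high latency

    // persistent coefficient memory, one linear gain per channel
    memTable[COEFF_REC].memType    = COEFFCIENT_MEM;
    memTable[COEFF_REC].size       = m_NumAudioIn * sizeof(xFloat32);
    memTable[COEFF_REC].alignment  = 8;
    memTable[COEFF_REC].memLatency = 1;

    // shared scratch memory for temporary calculations within one audio interrupt
    scratchRecord.memType          = SCRATCH_MEM;
    scratchRecord.size             = m_BlockLength * sizeof(xFloat32);
    scratchRecord.alignment        = 8;
    scratchRecord.memLatency       = 1;

    return NUM_RECORDS;                               // number of records filled into memTable
}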

Overlays

Instead of dereferencing parameter and state memory as m_Params[0] and m_States[0], overlays can be used where applicable. The same concept can be applied to coefficient memory if needed. The example below is presented for the gain object that contains three tunable parameters: gain value, invert, and mute.
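
A hedged sketch of such an overlay follows; the struct and member names are assumptions, while PARAM and m_MemRecPtrs follow the AwxAudioObjExt example.

// Illustrative sketch of a parameter overlay for the gain object named above.
typedef struct
{
    xFloat32 gainIndB;   // gain value in dB
    xUInt32  invert;     // 0 = normal polarity, 1 = inverted
    xUInt32  mute;       // 0 = unmuted, 1 = muted
} tGainParams;

void CGain::init()
{
    // overlay the PARAM memory record once instead of indexing m_Params[0], m_Params[1], ...
    m_GainParams = reinterpret_cast<tGainParams*>(m_MemRecPtrs[PARAM]);   // one entry per channel
}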

Below is an example of how the parameters above can be used during tuning:
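
Still as a hypothetical sketch building on the tGainParams overlay above (CGain, m_GainParams and m_Coeffs are assumed members; powf() comes from <math.h>):

// Illustrative tuning handler using the named overlay fields.
void CGain::tuneXTP(xSInt32 subBlock, xSInt32 offsetBytes, xSInt32 sizeBytes, xBool shouldAttemptRamp)
{
    xSInt32 channel = offsetBytes / static_cast<xSInt32>(sizeof(tGainParams));

    // read the freshly written, named fields instead of raw m_Params[] offsets
    xFloat32 linear = powf(10.0f, m_GainParams[channel].gainIndB * 0.05f);   // dB -> linear
    if (0u != m_GainParams[channel].mute)
    {
        linear = 0.0f;                                                       // muted channel
    }
    m_Coeffs[channel] = (0u != m_GainParams[channel].invert) ? -linear : linear;
}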

Audio object Examples

A new audio object will implement the following functions depending on functionality. See the header files for the associated classes for detailed comments.

In the class which inherits CAudioObject – ie: CYourAudioObject.cpp

Abstract Methods (required implementation)

xUInt32 CAudioObject::getSize() const 

Virtual Methods (optional implementation – depending on object features). This is not a complete list but contains the major virtual methods.

void      CAudioObject::init()
void      CAudioObject::calc(xFloat32** inputs, xFloat32** outputs)
void      CAudioObject::tuneXTP(xSInt32 subblock, xSInt32 startMemBytes, xSInt32 sizeBytes, xBool shouldAttemptRamp)
void      CAudioObject::controlSet(xSInt32 index, xFloat32 value)
xAF_Error CAudioObject::controlSet(xSInt32 index, xUInt32 sizeBytes, const void * const pValues)
void      CAudioObject::assignAdditionalConfig()
xInt8*    CAudioObject::getSubBlockPtr(xUInt16 subBlock)
xSInt32   CAudioObject::getSubBlockSize(xUInt16 subBlock)

In the class which inherits CAudioObjectToolbox – ie: CYourAudioObjectToolbox.cpp

const CAudioObjectToolbox::tObjectDescription*          CAudioObjectToolbox::getObjectDescription()
const CAudioObjectToolbox::tModeDescription*            CAudioObjectToolbox::getModeDescription(xUInt32 mode)
const CAudioObjectToolbox::additionalSfdVarDescription* CAudioObjectToolbox::getAdditionalSfdVarsDescription(xUInt32 index)
xAF_Error                                               CAudioObjectToolbox::getObjectIo(ioObjectConfigOutput* configOut)
xUInt32                                                 CAudioObjectToolbox::getXmlSVTemplate(tTuningInfo* info, xInt8* buffer, xUInt32 maxLen)
xUInt32                                                 CAudioObjectToolbox::getXmlObjectTemplate(tTuningInfo* info, xInt8* buffer, xUInt32 maxLen)
xUInt32                                                 CAudioObjectToolbox::getXmlFileInfo(tTuningInfo* info, xInt8* buffer, xUInt32 maxLen)
void                                                    CAudioObjectToolbox::createStaticMetadata()
void                                                    CAudioObjectToolbox::createDynamicMetadata(ioObjectConfigInput& configIn, ioObjectConfigOutput& configOut)

In the class which inherits CMemoryRecordProperties – ie: CYourAudioObjectMemRecs.cpp

xUInt8 CMemoryRecordProperties::getMemRecords(xAF_memRec* memTable, xAF_memRec& scratchRecord, xInt8 target, xInt8 format)

The source code for the AwxAudioObjExt audio object can be found in the HarmanAudioworX installation folder. The paths for the source code are:
Program Files\Harman\HarmanAudioworX\ext-reference-algorithms\external\inc
Program Files\Harman\HarmanAudioworX\ext-reference-algorithms\external\src

The code snippets for the source and include files are provided in the corresponding example sections for reference.

Metadata

Metadata is design-time information about the audio objects used to describe their features and attributes. Metadata is stored in the audio object code. This information can be used to convey memory usage or to check compatibility between audio objects.
It also provides the tool with constraints on parameters, and with information describing controls and audio channels.

There are three types of metadata:

  • Dynamic: Dynamic metadata accepts configuration parameters and provides data specific to those parameters.
  • Static: Static metadata is constant; it does not take any parameters.
  • Real-time: Real-time metadata is specific to a connected target device.

Static Metadata

The Static Metadata represents data that will not change based on configuration. It is provided as is.

There are two API methods related to this feature.

  • createStaticMetadata
  • getStaticMetadata
virtual void createStaticMetadata();
staticMetadata getStaticMetadata() { return m_StaticMetadata; }

Create Static Metadata

This method is intended to be overridden by each AudioObject implementation. The goal is to populate the protected member m_StaticMetadata. There is an example of how to do this in AudioObject.cpp. The basic audio objects included with xAF also implement this method appropriately.

This method should be overridden by any object updating to the new API. Here are the relevant details:

  • minReqXafVersion – set this to an integer which is related to the major version of xAF. (ACDC == 1, Beatles == 2, etc)
  • isExtendedObjIdRequired – false for most objects. This flag enables support for more than 256 subblocks.
  • supSampleRates – list of all supported sample rates (leave blank if there are no restrictions)
  • supBlockSizes – list of all supported block sizes (leave blank if there are no restrictions)
  • supDataFormats – list of supported calcObject data formats (leave blank if there are no restrictions)
  • audioObjectVersion – Condensed three-part version number. Created with helper method:
    • void setAudioObjectVersion(unsigned char major, unsigned char minor, unsigned char revision)
    • It is up to the audio object to determine how to manage these versions.
  • tuningVersion – Condensed two-part version number. Created with helper method:
    • void setTuningVersion(unsigned char major, unsigned char minor)
    • These version numbers must only be changed when appropriate!
    • Follow these rules:
      • Increment the minor version when a new release has *additional* tuning but previous tuning data can still be loaded successfully.
      • Increment the major version when the new release is not at all compatible with previous tuning data.
      • If the tuning structure does not change, do not change this version.
  • authorList – fill with list of authors if desired
  • creationDay – date of creation for the object
  • certified – whether this object has undergone certification
  • inPlaceComputationEnabled – whether this object requires input and output buffers to be the same (see below)
  • isAddVarUpdateRequired – whether the tuning tool should assume additional vars can change any time main object parameters are updated.
    • Set this to true if your additional variable sizes are based in some way on inputs.
    • Example: Number of Input channels is configurable by the user – and the first additional variable size is always equal to the number of input channels.

Get Static Metadata

This method simply returns a copy of m_StaticMetadata. It is not virtual.

In-Place Computation

This option is being deprecated here and moved to dynamic metadata, to support a target/core-specific implementation. The static member is kept for backward compatibility reasons only and will be removed after some time.
Do not use this struct member in future implementations!

Dynamic Metadata

Dynamic metadata creation is similar to static metadata creation, but it accepts arguments for the creation process, hence the name. The object receives all configuration data being considered (in most cases from GTT) and writes the relevant information to the member m_DynamicMetadata in response.

virtual void createDynamicMetadata(ioObjectConfigInput& configIn, ioObjectConfigOutput& configOut);
dynamicMetadata getDynamicMetadata() { return m_DynamicMetadata; }

Create Dynamic Metadata

createDynamicMetadata is called after a successful call to getObjectIo. It can further restrict values in the ioObjectConfigOutput struct. All required information is passed in with configIn.

  • audioIn & audioOut – instances of type metaDataDescription which label and set restrictions for inputs and outputs. Label need not be specified if a generic label will suffice. (eg: Input 1) If not, supply a label for each input and output.
  • controlIn & controlOut – are vectors of type metaDataControlDescription which label and specify value ranges for each control input. The number of controls is dictated by other. parameters, so we don’t have to bound the min and max. Note: Min and Max are not enforced, they are only informative to the user.
  • estMemory – Estimated memory consumption for the current configuration (in bytes).
  • estMIPS – estimated consumption of processor (in millions of cycles per second, so not really MIPS).

Get Dynamic Metadata

This method simply returns a copy of m_DynamicMetadata. Note that it is not virtual.

Description of Structures

These structures are used by the tool during signal design. The input configuration struct holds all *attempted* parameters, the output struct is used to constrain audio inputs and outputs and report the correct number of control inputs and outputs.

In-Place Computation

This feature allows audio objects to tell the GTT that they can operate in in-place computation mode. In this mode, the audio object uses the same buffers for input and output. This option has been moved from static metadata to dynamic metadata to support a kernel-based decision. This allows the AO developer to decide, based on the target architecture, whether or not it is beneficial to run the calc function in-place.

GTT analyses the signal flow and calculates the number of buffers required for it. If isInplaceComputationSupported is set by the audio object developer, GTT tells the framework to allocate only the input buffers.

The isInplaceComputationSupported flag can be checked in the audio object’s dynamic metadata.

For example, for a Gain object configured for 6 channels:

  • If isInplaceComputationSupported is not set, it will use a total of 12 buffers.
  • If isInplaceComputationSupported is set, it will use a total of 6 buffers.

     

The in-place computation feature has the following benefits:

  • reduces flash size.
  • reduces the number of IO streams, which improves memory performance on embedded targets.

Current Limitations and Additional Conditions

An audio object is considered for in-place computation only if it satisfies the following three conditions:

  • Audio Object should have dynamic metadata flag isInplaceComputationSupported set to true for the selected core type.
  • Audio Object should have equal number of input and output pins.
  • All audio pins should be connected.

Audio Object Class

This section provides a description of the base class. The tables below show the class members and methods of CAudioObject class that a developer would need to use.

CAudioObject Members

Member Description
m_Owner – This is the audio processing class that ‘owns’ this audio object.
m_MemRecPtrs – This is an array holding the address of the start of each memory record.
tObjectProperties – This is a struct containing the object properties:

  • Object type
  • Number of audio inputs
  • Number of audio outputs
  • Number of elements
  • Mode
  • Name of the audio object
  • Block ID
  • *AdditionalVars
  • SizeofAdditionalVars
  • NumMemRecords
  • *MemRecordsInfo
m_NumAudioIn – This is the number of audio input channels.
m_NumAudioOut – This is the number of audio output channels.
m_NumElements – This is the number of elements (e.g., filters, taps) per channel.
m_Mode – This is the audio object mode. For example, mode zero could represent a matrix mixer that operates on linear gains, while mode one could represent a mixer that operates on a logarithmic scale.
m_AdditionalSFDConfig – This is a pointer (void) to the additional data an object requires for configuration.
m_BlockLength – This is the block length in samples.
m_Type – This is the audio object type, defined in the object properties.
m_Name – This is the name of the audio object.
m_BlockID – This is the ID of the block in a specific signal flow.
m_NumControlIn – This is the number of control data input channels.
m_NumControlOut – This is the number of control data output channels.
m_ControlConfig – A list of audio objects and their control input channel numbers, to which the current audio object’s control output channels are connected in order. There are two elements for each control output channel:

  • the destination audio object
  • the destination control input channel number

CAudioObject Methods

Method Description
Constructor – This sets the following:

  • number of input and output audio channels
  • number of elements
  • object operation mode
  • processing block length
  • sample rate
  • address
  • memory table
assignAdditionalConfig() – This dereferences the m_AdditionalVariables pointer to use the additional configuration parameters as needed.
getSubBlockPtr() – Retrieves a pointer to the start of the subblock in the audio object.
getSubBlockSize() – Returns the size (in bytes) of the subblock indicated by ‘subBlock’; subBlock is the ID of the state subblock whose size is requested.
init() – This initializes all internal variables and parameters. It is called by CAudioProcessing::initAudioObjects().
calc() – This function implements the module functionality or algorithm that runs every audio interrupt. Before this function is called, the m_Inputs and m_Outputs objects are set by the CAudioProcessing object. It is called by CAudioProcessing::calcProcessing() for every frame interval.
tuneXTP() – This performs any required operations after the parameter memory is updated. It is called by CAudioProcessing::setAudioObjectTuning() and is triggered by the tuning tool.
setControlOut() – This is a helper function for writing a value to one of the object’s outputs.
controlSet() – This is called when controls like volume, bass, fade, RPM, and throttle are changed. These variables should live in state memory.
getXmlSVTemplate() – This function implements the generation of state variable templates used in the Device Description File on the computer.
getXmlObjectTemplate() – This function implements the generation of object templates used in the Device Description File on the computer.
getXmlFileInfo() – This function generates the Device.ddf file through the SFD. This function is enabled only when generating Device Description Files on the computer.
getStateMemForLiveStreamingPtr() – This function returns the address and length of the state variable for live streaming.

Audio Object Memory

 

API

The API below is used to fill memory records according to the given target and data format:

xUInt8 getMemRecords(xAF_memRec* memTable, xAF_memRec& scratchRecord, xInt8 target, xInt8 format);
  • getMemRecords() is called by the GTT when it needs to know how many memory records each object contains or requires, and their type, latency and size, which depend on the target and data format.
  • The memory record sizes could depend on these object variables: m_NumElements, m_NumAudioIn, m_NumAudioOut, BlockLength, SampleRate, additional configuration data, etc.

By default, the getMemRecords() method returns zero records and does not fill the provided memTable. If your object does not require any dynamic memory, you do not need to override this method.

Memory Configuration on GTT

In the SFD, when an audio object is dragged into the panel, GTT calls the getMemRecords() API to collect the memory record details from the object and update the memory latency table. This is repeated for all the AOs of the SFD. Once the design is complete, it shall be saved.

The user can open the memory latency editor and edit the latency levels. After saving the latency levels, if the object properties are edited and saved again, the user will lose the saved latency levels for that object, because GTT calls getMemRecords() again, which resets the latency levels to their defaults.


Scratch memory does not need to be allocated for each audio object, since all the audio objects can use the same scratch memory. The GTT calculates the maximum scratch memory size, maximum alignment and minimum latency requested by the audio objects, and puts them into the AudioProcessing chunk.

The elements of this table must be of the data type xAF_MemRec, which is shown below:

Member Description
alignment – This is the required alignment for the memory region.
memType – This is the type of the memory to be allocated. memType can be:

  • COEFFCIENT_MEM – This memory is used to store both tuning parameters and variables/buffers for internal use by the audio objects.
  • SCRATCH_MEM – This memory is only used for temporary calculations.
size – This is the size of the memory to be allocated.
memLatency – This can take values from one to five, where one is low latency and five is high latency.

While sending the signal flow to the target device, the memory record details are sent as part of the audio processing chunk and the audio object chunk in the signal flow write command.

Memory allocation in framework

The memory records are part of the flash file, so the platform needs to parse the flash file using the DSFD parser, which provides the audio processing properties and audio object properties structures.

The platform needs to register the platform-dependent memory allocator and deallocator in the constructor CAudioProcessing::CAudioProcessing().

Then call CAudioProcessing::initFramework(), passing the audio processing and audio object properties structures. This allocates the requested memory using the platform-dependent allocator and calls the CAudioObject::setRecordPointers() (which sets m_MemRecPtrs) and CAudioObject::init() APIs for every object.

Audio Object Memory Declaration and Usage

There are two types of memory used when allocating memory for audio objects.

  • Scratch Memory: Scratch memory is non-persistent memory that is used by an audio object only during an audio interrupt. Data in the scratch memory is not guaranteed to remain unchanged from one audio interrupt to the next. Up to and including Deep Purple, each object is restricted to one scratch memory record. Developers can determine the size, alignment and latency level required.
  • Coefficient Memory: Coefficient memory is any other type of persistent memory that is required by the audio object. There is no limit to the number of records developers can create. Developers can configure the size, alignment and latency levels required for each record and they can be different for every record.

Audio Buffers: The audio buffers provided in the calc function are intended for reading (inputs) and writing (outputs) only, i.e. they should not be used for intermediate calculations. If intermediate buffers are required, scratch memory must be requested. The reason for this is that the buffer per channel is not guaranteed to be unique for unconnected pins. For unconnected pins on the input side, xAF provides a single buffer (filled with zeros) that is shared by all audio objects in an xAF instance. It is expensive (MIPS-wise) to clear this buffer each time, so it is important not to write data to this buffer; leave it untouched, as it will be read by all audio objects with unconnected pins. For unconnected output pins, xAF allocates a single “dummy” buffer for all unconnected pins, rather than a unique buffer per channel.
For audio objects that support in-place processing and have all pins connected, xAF will assign the same input and output buffer per channel; if there are unconnected pins, the input and output buffers will be different. The audio object developer should therefore not blindly rely on the input and output buffers being the same or different. It is highly recommended to implement proper checks if the algorithm requires any of the above input/output buffer constellations.
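
A minimal illustration of such a check inside calc() follows, assuming that comparing the per-channel pointers is sufficient for the algorithm in question.

// Illustrative sketch of a defensive check; not framework code.
void CYourAudioObject::calc(xAFAudio** inputs, xAFAudio** outputs)
{
    for (xSInt32 ch = 0; ch < static_cast<xSInt32>(m_NumAudioIn); ch++)
    {
        if (inputs[ch] == outputs[ch])
        {
            // in-place: do not assume the input stays intact while the output is written
        }
        else
        {
            // out-of-place: never write intermediate results to inputs[ch] - it may be
            // the shared zero buffer of an unconnected input pin
        }
    }
}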

Audio Object Configuration

Before any audio flow design can start, the design tool needs to know about the audio objects and how to interact with them. All objects must provide the information in the structures below, and expose it to the tool through DLL calls. This DLL is generated by the Visual Studio solution, VirtualAmp.

This code will not be compiled in embedded libraries. It will only be compiled in the toolbox library targeted for GTT.