ALDialog API

Part of NAOqi Audio. See also: Overview, QiChat, Tutorials.


Namespace : AL

#include <alproxies/aldialogproxy.h>

Methods

std::string ALDialogProxy::loadTopic(const std::string& topicPath)

Loads a topic file, then exports and compiles the corresponding context files so that they are ready to be used by the ASR engine. Returns the name of the topic declared in the file.

Note that loading a topic file can be both CPU-intensive and time-consuming.

Parameters:
  • topicPath – Full path of the topic file to load.
Returns:

Name of the topic loaded. Syntax errors in the topic file are raised as exceptions.
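
A minimal usage sketch, assuming the NAOqi C++ SDK is installed and a robot is reachable; the address "nao.local" and the topic path are illustrative:

```cpp
#include <alproxies/aldialogproxy.h>
#include <iostream>
#include <stdexcept>

int main() {
  // Robot address and port are assumptions; adapt them to your setup.
  AL::ALDialogProxy dialog("nao.local", 9559);
  try {
    // loadTopic returns the topic name declared inside the .top file.
    const std::string topic =
        dialog.loadTopic("/home/nao/topics/greetings_enu.top");
    std::cout << "Loaded topic: " << topic << std::endl;
  } catch (const std::exception& e) {
    // Syntax errors in the topic file surface here as exceptions.
    std::cerr << "Failed to load topic: " << e.what() << std::endl;
  }
  return 0;
}
```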

void ALDialogProxy::unloadTopic(const std::string& topicName)

Unloads the specified topic and frees the associated memory. Any further loadTopic call on an unloaded topic requires a new export and compilation.

Parameters:
  • topicName – Topic name as given by loadTopic.
void ALDialogProxy::activateTopic(const std::string& topicName)

Adds the specified topic to the list of topics currently used by the dialog engine to parse inputs. Several topics can be active at the same time, but only one of them generates proposals; that topic is said to have the focus.

Parameters:
  • topicName – Topic name as given by loadTopic.
void ALDialogProxy::deactivateTopic(const std::string& topicName)

Removes the specified topic from the list of topics currently used by the dialog engine to parse inputs. Several topics can be active at the same time, but only one of them generates proposals; that topic is said to have the focus.

Parameters:
  • topicName – Topic name as given by loadTopic.
void ALDialogProxy::setLanguage(const std::string& language)

Sets the language of the dialog engine.

Parameters:
  • language

    English name of the language to set.

    Example: ‘French’.

    For the complete list of supported languages, see: List of supported Languages.

void ALDialogProxy::setFocus(const std::string& topicName)

Several topics can be active at the same time, but only one of them is used to generate proposals; this topic is said to have the focus. A call to this function forces the focus to the specified topic. After this call, proposals are generated from this topic, and inputs are parsed through this topic first. However, if a “User Rule” of a different active topic matches, the focus automatically moves to that topic.

Parameters:
  • topicName – Topic name as given by loadTopic.
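A typical startup sequence combining setLanguage, loadTopic, activateTopic and setFocus might look like the following sketch (robot address and topic paths are assumptions):

```cpp
#include <alproxies/aldialogproxy.h>

int main() {
  AL::ALDialogProxy dialog("nao.local", 9559);  // address is an assumption
  dialog.setLanguage("English");

  // Both topics parse inputs, but only the focused one makes proposals.
  const std::string greetings =
      dialog.loadTopic("/home/nao/topics/greetings_enu.top");
  const std::string weather =
      dialog.loadTopic("/home/nao/topics/weather_enu.top");
  dialog.activateTopic(greetings);
  dialog.activateTopic(weather);

  // Proposals now come from the weather topic, until a User Rule of
  // another active topic matches and the focus moves there.
  dialog.setFocus(weather);
  return 0;
}
```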
void ALDialogProxy::activateTag(const std::string& tagName, const std::string& topicName)

Activates the tag in the specified topic.

Parameters:
  • tagName – Name of the tag to activate.
  • topicName – Name of the topic.
void ALDialogProxy::deactivateTag(const std::string& tagName, const std::string& topicName)

Deactivates the tag in the specified topic.

Parameters:
  • tagName – Name of the tag to deactivate.
  • topicName – Name of the topic.
void ALDialogProxy::gotoTopic(const std::string& topicName)

Sets the focus to the topic then says the first activated proposal of the topic (if any).

Parameters:
  • topicName – Name of the topic that will get the focus.
void ALDialogProxy::setConcept(const std::string& conceptName, const std::string& language, const std::vector<std::string>& content)

Dynamically sets the content of the specified concept for the specified language with the given word list. Speech recognition must be running when setConcept is called; otherwise the recognition engine is not updated with the new concept content.

Parameters:
  • conceptName – Name of the concept to set.
  • language – Language containing the concept, as a locale code (e.g. jpj, frf, enu).
  • content – Word or sentence list (vector of std::string in C++).
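A sketch of filling a dynamic concept, assuming speech recognition is already running; the concept name and word list are illustrative:

```cpp
#include <alproxies/aldialogproxy.h>
#include <string>
#include <vector>

int main() {
  AL::ALDialogProxy dialog("nao.local", 9559);  // address is an assumption

  // Speech recognition must be running, otherwise the ASR engine is
  // not updated with the new content. A topic could then reference the
  // concept in QiChat, e.g.: u:(I like ~fruits) good choice!
  std::vector<std::string> fruits;
  fruits.push_back("apple");
  fruits.push_back("banana");
  fruits.push_back("orange");
  dialog.setConcept("fruits", "enu", fruits);
  return 0;
}
```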
void ALDialogProxy::setASRConfidenceThreshold(const float& threshold)

Sets the minimum confidence threshold used to validate the output of the Automatic Speech Recognition (ASR) engine.

Parameters:
  • threshold – Threshold (from 0.0 to 1.0). 0.5 by default.
float ALDialogProxy::getASRConfidenceThreshold()

Gets the minimum confidence threshold currently used to validate the output of the Automatic Speech Recognition (ASR) engine.

Returns:Threshold (from 0.0 to 1.0). 0.5 by default.
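A short sketch of tuning the threshold (robot address is an assumption):

```cpp
#include <alproxies/aldialogproxy.h>
#include <iostream>

int main() {
  AL::ALDialogProxy dialog("nao.local", 9559);  // address is an assumption

  // Raise the threshold above the 0.5 default to reject
  // low-confidence recognition results.
  dialog.setASRConfidenceThreshold(0.6f);
  std::cout << dialog.getASRConfidenceThreshold() << std::endl;
  return 0;
}
```

A higher threshold reduces false matches at the cost of ignoring more borderline utterances.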
void ALDialogProxy::forceOutput()

Forces the robot to say a proposal, if any is available.

void ALDialogProxy::forceInput(const std::string& input)

Inputs are normally provided by the Speech Recognition engine and the event system of the robot. A call to this function will stimulate the dialog engine with the given input as if this input had been given by the ASR engine.

Parameters:
  • input – Input to match
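forceInput is convenient for testing topics from a desktop without speaking to the robot; a sketch (address is an assumption):

```cpp
#include <alproxies/aldialogproxy.h>

int main() {
  AL::ALDialogProxy dialog("nao.local", 9559);  // address is an assumption

  // The engine reacts exactly as if the ASR had recognized "hello",
  // so a rule such as u:(hello) hi there! would fire.
  dialog.forceInput("hello");
  return 0;
}
```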
std::vector<std::string> ALDialogProxy::getLoadedTopics(const std::string& language)

Gets the list of topics loaded in the given language.

Parameters:
  • language – Language of the topics.
Returns:List of loaded topic names.
std::vector<std::string> ALDialogProxy::getActivatedTopics()

Gets the list of activated topics in the current language.

std::vector<std::string> ALDialogProxy::setVariablePath(const std::string& topicName, const std::string& eventName, const std::string& path)

Changes the name of an event at runtime.

Parameters:
  • topicName – Topic that contains the event.
  • eventName – Original event name.
  • path – New event name.
std::vector<int> ALDialogProxy::getUserList()

Gets the list of user IDs.

Returns:List of user IDs (int).
void ALDialogProxy::openSession(int id)

Opens a new dialog session. A call to this function restores the user’s dialog variables from the dialog database into ALMemory.

Parameters:
  • id – ID of the user.
void ALDialogProxy::closeSession()

Closes the current dialog session and stores all the related ALMemory variables in the database dedicated to ALDialog.

void ALDialogProxy::insertUserData(const std::string& variableName, const std::string& variableValue, const int& userID)

Directly inserts a variable into the user dialog database.

Parameters:
  • variableName – Variable name.
  • variableValue – Variable value.
  • userID – User ID.
std::string ALDialogProxy::getUserData(const std::string& variableName, const int& userID)

Gets a variable value from the user dialog database.

Parameters:
  • variableName – Variable name.
  • userID – User ID.
Returns:

Variable value (string).

std::vector<std::string> ALDialogProxy::getUserDataList(const int& userID)

Gets the list of all variables of a user.

Parameters:
  • userID – User ID.
Returns:

Variable name list.
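The session and user-data calls above combine as in this sketch (user ID, variable name and value are illustrative):

```cpp
#include <alproxies/aldialogproxy.h>

int main() {
  AL::ALDialogProxy dialog("nao.local", 9559);  // address is an assumption

  // Restore user 1's dialog variables from the database into ALMemory.
  dialog.openSession(1);

  // Store a variable for that user; later, getUserData("favorite_color", 1)
  // retrieves the stored value.
  dialog.insertUserData("favorite_color", "blue", 1);

  // Persist the session variables back to the dialog database.
  dialog.closeSession();
  return 0;
}
```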

void ALDialogProxy::compileAll()

Can optionally be called after a series of loadTopic calls to build the dialog model and the speech recognition model. If it is not called, compilation occurs once at runtime.
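
A sketch of batching the compilation up front (paths are illustrative):

```cpp
#include <alproxies/aldialogproxy.h>

int main() {
  AL::ALDialogProxy dialog("nao.local", 9559);  // address is an assumption

  // Load several topics, then pay the compilation cost once up front
  // instead of at the first runtime use.
  dialog.loadTopic("/home/nao/topics/greetings_enu.top");
  dialog.loadTopic("/home/nao/topics/weather_enu.top");
  dialog.compileAll();
  return 0;
}
```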

void ALDialogProxy::startPush()

The dialog engine starts automatically making proposals. After an answer, it automatically says a proposal from the available topics, trying first the topic that currently has the focus, then the other topics.

void ALDialogProxy::stopPush()

The dialog engine stops automatically making proposals.

void ALDialogProxy::startApp(const std::string& UNUSED_name, const AL::ALValue& val, const std::string& UNUSED_message)

Asks the Dialog engine to stop and start an application.

Note: if autonomous Life is running, use ALAutonomousLifeProxy::switchFocus() instead.

Parameters:
  • UNUSED_name – Unused.
  • val – Application to start.
  • UNUSED_message – Unused.
void ALDialogProxy::resetAll()

Resets the status of all topics: all proposals are able to be used again.

void ALDialogProxy::startUpdate(const std::string& UNUSED_name, const AL::ALValue& val, const std::string& UNUSED_message)

Updates the dialog model and the speech recognition model at runtime (experimental). The parameters are unused; the signature matches the ALMemory callback API.

void ALDialogProxy::generateSentences(const std::string& destination, const std::string& topic, const std::string& language)

Generates all possible input sentences in a text file.

Parameters:
  • destination – Destination file path.
  • topic – Source topic.
  • language – Source topic language.
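A sketch of dumping a topic’s input coverage to a file (paths and names are illustrative):

```cpp
#include <alproxies/aldialogproxy.h>

int main() {
  AL::ALDialogProxy dialog("nao.local", 9559);  // address is an assumption

  // Write every input sentence the topic can match into a text file;
  // useful for reviewing topic coverage before deployment.
  dialog.generateSentences("/home/nao/sentences_enu.txt", "greetings", "enu");
  return 0;
}
```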
void ALDialogProxy::tell(const std::string& input)

Deprecated since version 1.18: use ALDialogProxy::forceInput() instead.

void ALDialogProxy::connectionChanged()

Deprecated since version 1.22: unstable and inconclusive trial. Do not use.

void ALDialogProxy::endOfUtteranceCallback()

Deprecated since version 1.22: unstable and inconclusive trial. Do not use.

void ALDialogProxy::eventReceived()

Deprecated since version 1.22: unstable and inconclusive trial. Do not use.

void ALDialogProxy::packageInstalled()

Deprecated since version 1.22: unstable and inconclusive trial. Do not use.

void ALDialogProxy::setPushMode()

Deprecated since version 1.22: unstable and inconclusive trial. Do not use.

void ALDialogProxy::statusChanged()

Deprecated since version 1.22: unstable and inconclusive trial. Do not use.

void ALDialogProxy::wordRecognized()

Deprecated since version 1.22: unstable and inconclusive trial. Do not use.

void ALDialogProxy::wordsRecognizedCallback()

Deprecated since version 1.22: unstable and inconclusive trial. Do not use.

Events

Event: "Dialog/Answered"
callback(std::string eventName, std::string value, std::string subscriberIdentifier)

Raised each time the robot says an output.

Contains the last robot output.

Subscribing to this event starts ALDialog.

Example

u:(what did you say before) I said $Dialog/Answered
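Outside QiChat, the last robot output can also be read from ALMemory under the same key; a sketch assuming an ALMemory proxy (address is an assumption):

```cpp
#include <alproxies/almemoryproxy.h>
#include <iostream>
#include <string>

int main() {
  AL::ALMemoryProxy memory("nao.local", 9559);  // address is an assumption

  // The event value is stored in ALMemory under the event name.
  std::string lastOutput = memory.getData("Dialog/Answered");
  std::cout << lastOutput << std::endl;
  return 0;
}
```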
Event: "Dialog/Failure"
callback(std::string eventName, std::string value, std::string subscriberIdentifier)

Raised when 3 Dialog/Fallback or 3 Dialog/NotUnderstood have been consecutively raised.

Event: "Dialog/Fallback"
callback(std::string eventName, std::string value, std::string subscriberIdentifier)

Raised when fallback matches.

Event: "Dialog/IsStarted"
callback(std::string eventName, std::string value, std::string subscriberIdentifier)

True if ALDialog is started.

Event: "Dialog/LastInput"
callback(std::string eventName, std::string value, std::string subscriberIdentifier)

Raised each time the robot catches a human input.

Contains the last human input.

Example

u:(hello) $Dialog/LastInput
Event: "Dialog/NotSpeaking5"
callback(std::string eventName, std::string value, std::string subscriberIdentifier)

Human hasn’t talked for 5 seconds.

Dialog/NotSpeaking5, Dialog/NotSpeaking10, Dialog/NotSpeaking15 and Dialog/NotSpeaking20 are raised when the human hasn’t talked for 5, 10, 15 and 20 seconds respectively.

Example

u:(e:Dialog/NotSpeaking10) are you still there?
Event: "Dialog/NotSpeaking10"
callback(std::string eventName, std::string value, std::string subscriberIdentifier)

Human hasn’t talked for 10 seconds.

Event: "Dialog/NotSpeaking15"
callback(std::string eventName, std::string value, std::string subscriberIdentifier)

Human hasn’t talked for 15 seconds.

Event: "Dialog/NotSpeaking20"
callback(std::string eventName, std::string value, std::string subscriberIdentifier)

Human hasn’t talked for 20 seconds.

Event: "Dialog/NotUnderstood"
callback(std::string eventName, std::string value, std::string subscriberIdentifier)

Raised when the robot heard some human input but no rule has matched.

Example

u:(e:Dialog/NotUnderstood) sorry, I didn't understand.
Event: "Dialog/NotUnderstood2"
callback(std::string eventName, std::string value, std::string subscriberIdentifier)

Raised when the robot failed to understand 2 human inputs in a row.

Event: "Dialog/NotUnderstood3"
callback(std::string eventName, std::string value, std::string subscriberIdentifier)

Raised when the robot failed to understand 3 human inputs in a row.

Event: "Dialog/SameRule"
callback(std::string eventName, std::string value, std::string subscriberIdentifier)

Raised when a rule is triggered twice in a row. Allows you to avoid a repetition.