Internet Engineering Task Force                     Saravanan Shanmugham
Internet-Draft                                        Cisco Systems Inc.
draft-ietf-speechsc-mrcpv2-00                         September 23, 2003
Expires: March 23, 2004

           Media Resource Control Protocol Version 2 (MRCPv2)

Status of this Memo

This document is an Internet-Draft and is in full conformance with all
provisions of Section 10 of RFC 2026.

Internet-Drafts are working documents of the Internet Engineering Task
Force (IETF), its areas, and its working groups.  Note that other
groups may also distribute working documents as Internet-Drafts.

Internet-Drafts are draft documents valid for a maximum of six months
and may be updated, replaced, or obsoleted by other documents at any
time.  It is inappropriate to use Internet-Drafts as reference
material or to cite them other than as "work in progress."

The list of current Internet-Drafts can be accessed at
http://www.ietf.org/ietf/1id-abstracts.txt

The list of Internet-Draft Shadow Directories can be accessed at
http://www.ietf.org/shadow.html.

Copyright Notice

Copyright (C) The Internet Society (2003).  All Rights Reserved.

Abstract

This document describes a proposal for a Media Resource Control
Protocol Version 2 (MRCPv2) and aims to meet the requirements
specified in the SPEECHSC working group requirements document.  It is
based on the Media Resource Control Protocol (MRCP), also called
MRCPv1, developed jointly by Cisco Systems, Inc., Nuance
Communications, and Speechworks Inc.

The MRCPv2 protocol controls media service resources such as speech
synthesizers, recognizers, signal generators, signal detectors, and
fax servers over a network.  This protocol depends on a session
management protocol such as the Session Initiation Protocol (SIP) to

S. Shanmugham, et. al.                                            Page 1
MRCPv2 Protocol                                                 May 2003

establish a separate MRCPv2 control session between the client and the
media server.  It also depends on SIP to establish the media pipe and
associated parameters between the media source or sink and the media
server.
Once this is done, the MRCPv2 protocol exchange can happen over the
control session established above, allowing the client to command and
control the media processing resources that may exist on the media
server.

Table of Contents

   Status of this Memo..............................................1
   Copyright Notice.................................................1
   Abstract.........................................................1
   Table of Contents................................................2
   1.      Introduction.............................................4
   2.      Architecture.............................................5
   2.1.    MRCPv2 Media Resources...................................6
   2.2.    Server and Resource Addressing...........................7
   3.      MRCPv2 Protocol Basics...................................7
   3.1.    Connecting to the Media Server...........................7
   3.2.    Managing Resource Control Channels.......................8
   3.3.    Media Streams and RTP Ports.............................13
   3.4.    MRCPv2 Message Transport................................13
   4.      Notational Conventions..................................14
   5.      MRCPv2 Specification....................................14
   5.1.    Request.................................................15
   5.2.    Response................................................16
   5.2.1.  Status Codes............................................17
   5.3.    Event...................................................17
   5.4.    Message Headers.........................................18
   5.4.1.  Channel-Identifier......................................19
   5.4.2.  Active-Request-Id-List..................................19
   5.4.3.  Proxy-Sync-Id...........................................20
   5.4.4.  Accept-Charset..........................................20
   5.4.5.  Content-Type............................................20
   5.4.6.  Content-Id..............................................20
   5.4.7.  Content-Base............................................21
   5.4.8.  Content-Encoding........................................21
   5.4.9.  Content-Location........................................21
   5.4.10. Content-Length..........................................22
   5.4.11. Cache-Control...........................................22
   5.4.12. Logging-Tag.............................................23
   6.      Resource Discovery......................................24
   7.      Speech Synthesizer Resource.............................25
   7.1.    Synthesizer State Machine...............................25
   7.2.    Synthesizer Methods.....................................26
   7.3.    Synthesizer Events......................................26
   7.4.    Synthesizer Header Fields...............................26
   7.4.1.  Jump-Target.............................................27
   7.4.2.  Kill-On-Barge-In........................................27
   7.4.3.  Speaker Profile.........................................28
   7.4.4.  Completion Cause........................................28
   7.4.5.  Voice-Parameters........................................28
   7.4.6.  Prosody-Parameters......................................29
   7.4.7.  Vendor Specific Parameters..............................30
   7.4.8.  Speech Marker...........................................30
   7.4.9.  Speech Language.........................................30
   7.4.10. Fetch Hint..............................................30
   7.4.11. Audio Fetch Hint........................................31
   7.4.12. Fetch Timeout...........................................31
   7.4.13. Failed URI..............................................31
   7.4.14. Failed URI Cause........................................31
   7.4.15. Speak Restart...........................................31
   7.4.16. Speak Length............................................32
   7.5.    Synthesizer Message Body................................32
   7.5.1.  Synthesizer Speech Data.................................32
   7.6.    SET-PARAMS..............................................34
   7.7.    GET-PARAMS..............................................35
   7.8.    SPEAK...................................................35
   7.9.    STOP....................................................37
   7.10.   BARGE-IN-OCCURRED.......................................38
   7.11.   PAUSE...................................................39
   7.12.   RESUME..................................................40
   7.13.   CONTROL.................................................41
   7.14.   SPEAK-COMPLETE..........................................42
   7.15.   SPEECH-MARKER...........................................43
   8.      Speech Recognizer Resource..............................44
   8.1.    Recognizer State Machine................................44
   8.2.    Recognizer Methods......................................45
   8.3.    Recognizer Events.......................................45
   8.4.    Recognizer Header Fields................................45
   8.4.1.  Confidence Threshold....................................47
   8.4.2.  Sensitivity Level.......................................47
   8.4.3.  Speed Vs Accuracy.......................................47
   8.4.4.  N Best List Length......................................48
   8.4.5.  No Input Timeout........................................48
   8.4.6.  Recognition Timeout.....................................48
   8.4.7.  Waveform URL............................................48
   8.4.8.  Completion Cause........................................49
   8.4.9.  Recognizer Context Block................................50
   8.4.10. Recognition Start Timers................................50
   8.4.11. Vendor Specific Parameters..............................50
   8.4.12. Speech Complete Timeout.................................51
   8.4.13. Speech Incomplete Timeout...............................51
   8.4.14. DTMF Interdigit Timeout.................................52
   8.4.15. DTMF Term Timeout.......................................52
   8.4.16. DTMF-Term-Char..........................................52
   8.4.17. Fetch Timeout...........................................52
   8.4.18. Failed URI..............................................53
   8.4.19. Failed URI Cause........................................53
   8.4.20. Save Waveform...........................................53
   8.4.21. New Audio Channel.......................................53
   8.4.22. Speech Language.........................................53
   8.5.    Recognizer Message Body.................................54
   8.5.1.  Recognizer Grammar Data.................................54
   8.5.2.  Recognizer Result Data..................................57
   8.5.3.  Recognizer Context Block................................57
   8.6.    SET-PARAMS..............................................58
   8.7.    GET-PARAMS..............................................58
   8.8.    DEFINE-GRAMMAR..........................................59
   8.9.    RECOGNIZE...............................................62
   8.10.   STOP....................................................64
   8.11.   GET-RESULT..............................................65
   8.12.   START-OF-SPEECH.........................................66
   8.13.   RECOGNITION-START-TIMERS................................67
   8.14.   RECOGNITION-COMPLETE....................................67
   8.15.   DTMF Detection..........................................68
   9.      Examples................................................68
   10.     Reference Documents.....................................75
   11.     Appendix................................................76
   ABNF Message Definitions........................................76
   Full Copyright Statement........................................81
   Authors' Addresses..............................................81

1. Introduction

The MRCPv2 protocol is designed to provide a mechanism for a client
device requiring audio/video stream processing to control media
processing resources on the network.  Such media processing resources
include speech recognition engines, speech synthesis engines, and
speaker verification or speaker identification engines.  This allows a
vendor to implement distributed Interactive Voice Response platforms
such as VoiceXML [7] browsers.

This protocol is designed to leverage and build upon session
management protocols such as the Session Initiation Protocol (SIP) and
the Session Description Protocol (SDP).  The SIP protocol described in
[2] defines session control messages used during the setup and tear
down stages of a SIP session.  In addition, a SIP re-INVITE can be
used during a SIP session to change the characteristics of the
session, generally to create or delete media/control channels or to
change the properties of existing media/control channels related to
the SIP session.  In this SIP exchange, SDP is used to describe the
parameters of the media pipe associated with that session.

The MRCPv2 protocol depends on SIP and SDP to create the session and
set up the media channels to the media server.  It also depends on SIP
and SDP to establish an MRCPv2 control channel between the client and
the server for every media processing resource that the client
requires for that session.  The MRCPv2 protocol exchange between the
client and the media resource can then happen on that control channel.
The MRCPv2 protocol exchange happening on this control channel does
not change the state of the SIP session, the media, or the other
parameters of the session that SIP initiated.  It merely controls and
affects the state of the media processing resource associated with
that MRCPv2 channel.
The MRCPv2 protocol defines the messages to control the different
media processing resources and the state machines required to guide
their operation.  It also describes how these messages are carried
over a transport layer such as TCP or SCTP.

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
"SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY" and "OPTIONAL" in this
document are to be interpreted as described in RFC 2119 [9].

2. Architecture

The system consists of a client that requires the generation or
processing of media streams and a media resource server that has the
resources or engines to process or generate these streams.  The client
establishes a session with the server, using SIP and SDP, to use its
media processing resources.  The MRCPv2 media server is addressed by a
SIP URI.

The session management protocol (SIP) uses SDP with the offer/answer
model described in RFC 3264 to describe and set up the MRCPv2 control
channels.  Separate MRCPv2 control channels are needed to control the
different media processing resources associated with that session.
Within a SIP session, the individual resource control channels for the
different resources are added or removed through the SDP offer/answer
model and the SIP re-INVITE dialog.

The server, through the SDP exchange, provides the client with a
unique channel identifier and a TCP port number.  The client MAY then
open a new TCP connection with the server on this port number.
Multiple MRCPv2 channels can share a TCP connection between the client
and the server.  Every MRCPv2 message exchanged between the client and
the server carries the specified channel identifier, which MUST be
unique among all MRCPv2 control channels that are active on that
server.  The client can use this channel to control the media
processing resource associated with it.
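The connection-sharing rule above (several MRCPv2 channels multiplexed over one TCP connection, with channel identifiers unique on the server) can be sketched in Python. The ChannelRegistry class and its method names are illustrative assumptions of this sketch, not part of the protocol.

```python
# Hypothetical client-side sketch: bind MRCPv2 channels to shared TCP
# connections, keyed by the server's (host, port) pair.

class ChannelRegistry:
    def __init__(self):
        self._connections = {}   # (host, port) -> connection object
        self._channels = {}      # channel identifier -> (host, port)

    def open_channel(self, channel_id, host, port, connect=None):
        """Bind channel_id to a connection, reusing one if it already
        exists.  Channel identifiers MUST be unique, so a duplicate is
        rejected."""
        if channel_id in self._channels:
            raise ValueError("channel identifiers must be unique")
        key = (host, port)
        if key not in self._connections:
            # 'connect' stands in for socket.create_connection in a
            # real client; a placeholder object is used when omitted.
            self._connections[key] = connect(host, port) if connect else object()
        self._channels[channel_id] = key
        return self._connections[key]

    def connection_for(self, channel_id):
        """Look up the shared connection that carries this channel."""
        return self._connections[self._channels[channel_id]]
```

Opening a second channel to the same server port returns the same connection object, mirroring how multiple control channels share one TCP pipe.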
The session management protocol (SIP) will also establish media pipes
between the client (or the source/sink of media) and the media server,
using SDP m-lines.  These media pipes MUST be shared by all the media
processing resources under that SIP session; a media processing
resource MUST NOT define its own separate media pipe.

    MRCPv2 client                   MRCPv2 Media Resource Server
   |--------------------|          |-----------------------------|
   ||------------------||          ||---------------------------||
   || Application Layer||          || TTS  | ASR  |  SV  |  SI  ||
   ||------------------||          ||Engine|Engine|Engine|Engine||
   ||Media Resource API||          ||---------------------------||
   ||------------------||          || Media Resource Management ||
   ||  SIP  |  MRCPv2  ||          ||---------------------------||
   || Stack |          ||          ||   SIP   |     MRCPv2      ||
   ||       |          ||          ||  Stack  |                 ||
   ||------------------||          ||---------------------------||
   ||   TCP/IP Stack   ||--MRCPv2--||       TCP/IP Stack        ||
   ||                  ||          ||                           ||
   ||------------------||---SIP----||---------------------------||
   |--------------------|          |-----------------------------|
             |                          /
             |          SIP            /
             |                        /
    |-------------------|    RTP    /
    | Media Source/Sink |----------/
    |-------------------|

2.1. MRCPv2 Media Resources

The MRCPv2 media server may offer one or more of the following media
processing resources to its clients.

Speech Recognition
     The media server may offer speech recognition engines that the
     client can allocate and control to recognize the spoken input
     contained in the audio stream.

Speech Synthesis
     The media server may offer speech synthesis engines that the
     client can allocate and control to generate synthesized voice
     into the audio stream.

Speaker Recognition
     The media server may offer speaker recognition engines that the
     client can allocate and control to identify the speaker from the
     voice in the audio stream.
Speaker Verification
     The media server may offer speaker verification engines that the
     client can allocate and control to verify and authenticate a
     speaker based on his or her voice.

2.2. Server and Resource Addressing

The MRCPv2 server as a whole is a generic SIP server, and the MRCPv2
media processing resources it offers are addressed through a specific
SIP URI registered by the server.

Example:
   sip:mrcpv2@mediaserver.com

3. MRCPv2 Protocol Basics

MRCPv2 requires the use of a transport layer protocol such as TCP or
SCTP to guarantee reliable sequencing and delivery of MRCPv2 control
messages between the client and the server.  One or more TCP or SCTP
connections between the client and the server can be shared between
different MRCPv2 channels to the server.  The individual messages
carry the channel identifier to differentiate messages on different
channels.

The message format for MRCPv2 is text based, with mechanisms to carry
embedded binary data.  This allows data like recognition grammars,
recognition results, and synthesizer speech markup to be carried in
the MRCPv2 message between the client and the server resource.  The
protocol does not address session and media establishment and
management; it relies on SIP and SDP to do this.

3.1. Connecting to the Media Server

The MRCPv2 protocol depends on a session establishment and management
protocol such as SIP, in conjunction with SDP.  The client finds and
reaches an MRCPv2 server across the SIP network using the INVITE and
other SIP dialog exchanges.  The SDP offer/answer model over SIP is
used to establish a resource control channel for each resource.  It is
also used to establish media pipes between the source or sink of audio
and the media server.

Example 1: Opening a session to the media server.  This does not
allocate any resource control channels yet.
C->S:  INVITE sip:mresources@mediaserver.com SIP/2.0
       Max-Forwards: 70
       To: MediaServer
       From: sarvi ;tag=1928301774
       Call-ID: a84b4c76e66710
       CSeq: 314159 INVITE
       Contact:
       Content-Type: application/sdp
       Content-Length: 142

       v=0
       o=sarvi 2890844526 2890842807 IN IP4 126.16.64.4
       s=SDP Seminar
       i=A session for processing media
       c=IN IP4 224.2.17.12/127

S->C:  SIP/2.0 200 OK
       To: MediaServer
       From: sarvi ;tag=1928301774
       Call-ID: a84b4c76e66710
       CSeq: 314159 INVITE
       Contact:
       Content-Type: application/sdp
       Content-Length: 131

       v=0
       o=sarvi 2890844526 2890842807 IN IP4 126.16.64.4
       s=SDP Seminar
       i=A session for processing media
       c=IN IP4 224.2.17.12/127

C->S:  ACK sip:mrcpv2@mediaserver.com SIP/2.0
       Max-Forwards: 70
       To: MediaServer ;tag=a6c85cf
       From: Sarvi ;tag=1928301774
       Call-ID: a84b4c76e66710
       CSeq: 314160 ACK
       Content-Length: 0

3.2. Managing Resource Control Channels

The client needs a separate MRCPv2 resource control channel to control
each media processing resource under the SIP session.  Each resource
control channel is identified by a hexadecimal channel identifier.
The last two digits of the channel identifier form the resource type
identifier, which identifies the type of media processing resource the
channel is talking to.  MRCPv2 defines the following types of media
processing resources.

   Resource Type Identifier   Resource Description
   00                         Reserved
   01                         Speech Recognition
   02                         Speech Synthesis
   03                         Speaker Identification
   04                         Speaker Verification
   05-FF                      Reserved for future use

Other resource types can be defined in the future through a separate
document.

The SIP INVITE or re-INVITE dialog exchange, and the SDP offer/answer
exchange it carries, contains m-lines describing the resource control
channels the client wants to allocate.  There should be one SDP m-line
for each MRCPv2 resource that needs to be controlled.
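The channel-identifier layout and per-resource m-line rules in this section can be sketched as follows. The mnemonic resource names used as dictionary keys are assumptions of this sketch; the draft defines only the two-digit type identifiers.

```python
# Two-digit resource type identifiers from the table above; the
# mnemonic key names are illustrative, not defined by the draft.
RESOURCE_TYPE_IDS = {
    "speech-recognition":     "01",
    "speech-synthesis":       "02",
    "speaker-identification": "03",
    "speaker-verification":   "04",
}

def offer_control_mlines(resources):
    """Build the client's SDP offer m-lines: media type "control",
    transport "mrcp", port 0 in the offer, and the resource type
    identifier as the format field."""
    return ["m=control 0 mrcp " + RESOURCE_TYPE_IDS[r] for r in resources]

def resource_type_of(channel_id):
    """The last two hex digits of a channel identifier give the
    resource type it controls."""
    return channel_id[-2:].upper()
```

For the synthesizer channel answered as 32AECB23433802 in the examples below, `resource_type_of` yields "02", the Speech Synthesis type.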
This m-line will have a media type field of "control" and a transport
type field of "mrcp".  The port number field of the m-line MUST
contain 0 in the SDP offer from the client and MUST contain the TCP
listen port on the server in the SDP answer.  The client MAY then set
up a TCP connection to that server port or share an already
established connection to that port.  The format field of the m-line
MUST contain the resource type identifier in the SDP offer and MUST
contain the full channel identifier for the control channel in the SDP
answer.

When the client wants to add a media processing resource to the
session, it should initiate a re-INVITE dialog.  The SDP offer/answer
exchange contained in this SIP dialog will carry an additional control
channel m-line for the new resource to be allocated.  The media
server, on seeing the new m-line, will allocate the resource and
respond with a corresponding m-line in the SDP answer.

When the client wants to de-allocate a resource from the session, it
should initiate a SIP re-INVITE dialog with the media server and MUST
drop the corresponding m-line from the SDP description of the session.

Example 2: This exchange continues from example 1 and adds a resource
control channel for a synthesizer.  Since a synthesizer generates an
audio stream, this interaction also creates a receive-only audio
stream on which the server can send audio.

C->S:  INVITE sip:mresources@mediaserver.com SIP/2.0
       Max-Forwards: 70
       To: MediaServer
       From: sarvi ;tag=1928301774
       Call-ID: a84b4c76e66710
       CSeq: 314161 INVITE
       Contact:
       Content-Type: application/sdp
       Content-Length: 142

       v=0
       o=sarvi 2890844526 2890842808 IN IP4 126.16.64.4
       s=SDP Seminar
       i=A session for processing media
       c=IN IP4 224.2.17.12/127
       m=control 0 mrcp 02
       m=audio 49170 RTP/AVP 0 96
       a=rtpmap:0 pcmu/8000
       a=recvonly

S->C:  SIP/2.0 200 OK
       To: MediaServer
       From: sarvi ;tag=1928301774
       Call-ID: a84b4c76e66710
       CSeq: 314161 INVITE
       Contact:
       Content-Type: application/sdp
       Content-Length: 131

       v=0
       o=sarvi 2890844526 2890842808 IN IP4 126.16.64.4
       s=SDP Seminar
       i=A session for processing media
       c=IN IP4 224.2.17.12/127
       m=control 32416 mrcp 32AECB23433802
       m=audio 48260 RTP/AVP 0 96
       a=rtpmap:0 pcmu/8000
       a=sendonly

C->S:  ACK sip:mrcp@mediaserver.com SIP/2.0
       Max-Forwards: 70
       To: MediaServer ;tag=a6c85cf
       From: Sarvi ;tag=1928301774
       Call-ID: a84b4c76e66710
       CSeq: 314162 ACK
       Content-Length: 0

Example 3: This exchange continues from example 2 and allocates an
additional resource control channel for a recognizer.  Since a
recognizer needs to receive an audio stream for recognition, this
interaction also updates the audio stream to sendrecv, making it a
two-way audio stream.

C->S:  INVITE sip:mresources@mediaserver.com SIP/2.0
       Max-Forwards: 70
       To: MediaServer
       From: sarvi ;tag=1928301774
       Call-ID: a84b4c76e66710
       CSeq: 314163 INVITE
       Contact:
       Content-Type: application/sdp
       Content-Length: 142
       v=0
       o=sarvi 2890844526 2890842809 IN IP4 126.16.64.4
       s=SDP Seminar
       i=A session for processing media
       c=IN IP4 224.2.17.12/127
       m=control 0 mrcp 01
       m=control 0 mrcp 02
       m=audio 49170 RTP/AVP 0 96
       a=rtpmap:0 pcmu/8000
       a=rtpmap:96 telephone-event/8000
       a=fmtp:96 0-15
       a=sendrecv

S->C:  SIP/2.0 200 OK
       To: MediaServer
       From: sarvi ;tag=1928301774
       Call-ID: a84b4c76e66710
       CSeq: 314163 INVITE
       Contact:
       Content-Type: application/sdp
       Content-Length: 131

       v=0
       o=sarvi 2890844526 2890842809 IN IP4 126.16.64.4
       s=SDP Seminar
       i=A session for processing media
       c=IN IP4 224.2.17.12/127
       m=control 32416 mrcp 32AECB23433801
       m=control 32416 mrcp 32AECB23433802
       m=audio 48260 RTP/AVP 0 96
       a=rtpmap:0 pcmu/8000
       a=rtpmap:96 telephone-event/8000
       a=fmtp:96 0-15
       a=sendrecv

C->S:  ACK sip:mrcp@mediaserver.com SIP/2.0
       Max-Forwards: 70
       To: MediaServer ;tag=a6c85cf
       From: Sarvi ;tag=1928301774
       Call-ID: a84b4c76e66710
       CSeq: 314164 ACK
       Content-Length: 0

Example 4: This exchange continues from example 3 and de-allocates the
recognizer channel.  Since the recognizer no longer needs to receive
an audio stream, this interaction also updates the audio stream back
to recvonly.
C->S:  INVITE sip:mresources@mediaserver.com SIP/2.0
       Max-Forwards: 70
       To: MediaServer
       From: sarvi ;tag=1928301774
       Call-ID: a84b4c76e66710
       CSeq: 314163 INVITE
       Contact:
       Content-Type: application/sdp
       Content-Length: 142

       v=0
       o=sarvi 2890844526 2890842809 IN IP4 126.16.64.4
       s=SDP Seminar
       i=A session for processing media
       c=IN IP4 224.2.17.12/127
       m=control 0 mrcp 02
       m=audio 49170 RTP/AVP 0 96
       a=rtpmap:0 pcmu/8000
       a=recvonly

S->C:  SIP/2.0 200 OK
       To: MediaServer
       From: sarvi ;tag=1928301774
       Call-ID: a84b4c76e66710
       CSeq: 314163 INVITE
       Contact:
       Content-Type: application/sdp
       Content-Length: 131

       v=0
       o=sarvi 2890844526 2890842809 IN IP4 126.16.64.4
       s=SDP Seminar
       i=A session for processing media
       c=IN IP4 224.2.17.12/127
       m=control 32416 mrcp 32AECB23433802
       m=audio 48260 RTP/AVP 0 96
       a=rtpmap:0 pcmu/8000
       a=sendonly

C->S:  ACK sip:mrcp@mediaserver.com SIP/2.0
       Max-Forwards: 70
       To: MediaServer ;tag=a6c85cf
       From: Sarvi ;tag=1928301774
       Call-ID: a84b4c76e66710
       CSeq: 314164 ACK
       Content-Length: 0

3.3. Media Streams and RTP Ports

A single set of RTP/RTCP ports is set up under a SIP session between
the source of audio and the media server.  This is done using the SDP
offer/answer model as well.  This media pipe is shared among all the
different media processing resources that may be needed for that
session.  The individual resource instances allocated on the server
under the same SIP session feed from or to that single RTP stream.
The media pipes are set up as recvonly, sendonly or sendrecv,
depending on which resource channels are active and whether they need
to receive or generate the media stream.  The client can send multiple
media streams towards the server, differentiated by different
synchronization source (SSRC) identifier values.
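The direction rules above, as exercised in examples 2 through 4, can be sketched as a small helper. The function name is illustrative, and the "inactive" fallback for a session with no audio-producing or audio-consuming resource is an assumption not covered by the draft.

```python
def client_audio_direction(has_synthesizer, has_recognizer):
    """Pick the direction attribute for the client's audio m-line.

    A synthesizer streams audio to the client (a=recvonly in the
    client's offer, as in examples 2 and 4); a recognizer consumes
    audio from the client (a=sendonly); with both active the stream is
    two-way (a=sendrecv, as in example 3).  The "inactive" case is an
    assumption of this sketch.
    """
    if has_synthesizer and has_recognizer:
        return "a=sendrecv"
    if has_synthesizer:
        return "a=recvonly"
    if has_recognizer:
        return "a=sendonly"
    return "a=inactive"
```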
Similarly, the server resources can use multiple synchronization
source (SSRC) identifier values to differentiate media streams
originating from the individual media processing resources, if more
than one exists.  The individual resources may, on the other hand,
work together to send just one stream to the client.  This is up to
the implementation of the media server.

3.4. MRCPv2 Message Transport

The MRCPv2 resource messages defined in this document are transported
over a TCP or SCTP pipe between the client and the server.  The setup
of this TCP pipe and the resource control channel is discussed in
Section 3.2.  Multiple resource control channels between a client and
a server that belong to different SIP sessions can share one or more
TCP pipes between them.  Each MRCPv2 message carries the MRCPv2
channel identifier in its Channel-Identifier header field, which
SHOULD be used to differentiate MRCPv2 messages from different
resource channels.  All MRCPv2 based media servers MUST support TCP as
a transport and MAY support SCTP.

Example 1:

C->S:  SPEAK 543257 MRCP/2.0
       Channel-Identifier: 32AECB23433802
       Voice-gender: neutral
       Voice-category: teenager
       Prosody-volume: medium
       Content-Type: application/synthesis+ssml
       Content-Length: 104

       You have 4 new messages.  The first is from Stephanie Williams
       and arrived at 3:45pm.  The subject is ski trip

S->C:  MRCP/2.0 543257 200 IN-PROGRESS
       Channel-Identifier: 32AECB23433802

S->C:  SPEAK-COMPLETE 543257 COMPLETE MRCP/2.0
       Channel-Identifier: 32AECB23433802

Most examples from here on show only the MRCPv2 messages and do not
show the SIP messages and headers that may have been used to establish
the MRCPv2 control channel.

4. Notational Conventions

Since many of the definitions and much of the syntax are identical to
HTTP/1.1, this specification only points to the sections where they
are defined rather than copying them.
For brevity, [HX.Y] is to be taken to refer to Section X.Y of the
current HTTP/1.1 specification (RFC 2616 [1]).

All the mechanisms specified in this document are described in both
prose and an augmented Backus-Naur Form (ABNF) similar to that used in
[H2.1].  It is described in detail in RFC 2234 [3], with the
difference that this MRCPv2 specification maintains the "1#" notation
for comma-separated lists.  The complete message format in ABNF form
is provided in the Appendix and is the normative format definition.

5. MRCPv2 Specification

The MRCPv2 PDU is textual, using the ISO 10646 character set in the
UTF-8 encoding (RFC 2044) to allow many different languages to be
represented.  However, to assist in compact representations, MRCPv2
also allows other character sets such as ISO 8859-1 to be used when
desired.  The MRCPv2 protocol headers and field names use only the
US-ASCII subset of UTF-8.  Internationalization applies only to
certain fields like grammars, results, and speech markup, and not to
MRCPv2 as a whole.

Lines are terminated by CRLF, but receivers SHOULD be prepared to also
interpret CR and LF by themselves as line terminators.  Some
parameters in the PDU may contain binary data or a record spanning
multiple lines.  Such fields have a length value associated with the
parameter, which indicates the number of octets immediately following
the parameter.

All MRCPv2 messages, responses and events MUST carry the
Channel-Identifier header field so that the server or client can
differentiate messages from the different control channels that may
share the same TCP connection.

The MRCPv2 message set consists of requests from the client to the
server, responses from the server to the client, and asynchronous
events from the server to the client.  All these messages consist of a
start-line, one or more header fields (also known as "headers"), an
empty line (i.e.
a line with nothing preceding the CRLF) indicating the end of the
header fields, and an optional message body.

   generic-message  =  start-line
                       message-header
                       CRLF
                       [ message-body ]

   start-line       =  request-line | response-line | event-line

   message-header   =  1*(generic-header | resource-header)

   resource-header  =  recognizer-header | synthesizer-header

The message-body contains resource-specific and message-specific data
that needs to be carried between the client and server as a MIME
entity.  The information contained here and the actual MIME types used
to carry the data are specified later, when addressing the specific
messages.  If a message contains data in the message body, the header
fields will contain content headers indicating the MIME type and
encoding of the data in the message body.

5.1. Request

An MRCPv2 request consists of a request line followed by zero or more
parameters as part of the message headers, and an optional message
body containing data specific to the request message.

The request message from a client to the server includes, within the
first line, the method to be applied, an identifier for that request,
and the version of the protocol in use.

   request-line = method-name SP request-id SP mrcp-version CRLF

The request-id field is a unique identifier created by the client and
sent to the server.  The server resource should use this identifier in
its response to this request.  If the request does not complete with
the response, future asynchronous events associated with this request
MUST carry the request-id.

   request-id = 1*DIGIT

The method-name field identifies the specific request that the client
is making to the server.  Each resource supports a certain list of
requests, or methods, that can be issued to it; these are addressed in
later sections.

   method-name = synthesizer-method | recognizer-method

The mrcp-version field is the MRCPv2 protocol version that is being
used by the client.
   mrcp-version = "MRCP" "/" 1*DIGIT "." 1*DIGIT

5.2. Response

After receiving and interpreting the request message, the server
resource responds with an MRCPv2 response message.  It consists of a
status line optionally followed by a message body.

   response-line = mrcp-version SP request-id SP status-code SP
                   request-state CRLF

The mrcp-version field used here is similar to the one used in the
request line and indicates the version of the MRCPv2 protocol running
on the server.

The request-id used in the response MUST match the one sent in the
corresponding request message.

The status-code field is a 3-digit code representing the success,
failure or other status of the request.

The request-state field indicates whether the job initiated by the
request is PENDING, IN-PROGRESS or COMPLETE.  The COMPLETE status
means that the request was processed to completion and that there will
be no more events from that resource to the client with that
request-id.  The PENDING status means that the job has been placed on
a queue and will be processed in first-in-first-out order.  The
IN-PROGRESS status means that the request is being processed and is
not yet complete.  A PENDING or IN-PROGRESS status indicates that
further event messages will be delivered with that request-id.

   request-state = "COMPLETE" | "IN-PROGRESS" | "PENDING"

5.2.1. Status Codes

The status codes are classified as Success (2xx) codes and Failure
(4xx) codes.

5.2.1.1. Success 2xx

   200 Success
   201 Success with some optional parameters ignored

5.2.1.2. Failure 4xx

   401 Method not allowed
   402 Method not valid in this state
   403 Unsupported parameter
   404 Illegal value for parameter
   405 Not found (e.g., the resource URI is not initialized or does
       not exist)
   406 Mandatory parameter missing
   407 Method or operation failed (e.g., grammar compilation failed in
       the recognizer; detailed cause codes MAY be available through a
       resource-specific header field)
   408 Unrecognized or unsupported message entity
   409 Unsupported Parameter Value
   421-499 Resource-specific Failure codes

5.3. Event

The server resource may need to communicate a change in state, or the occurrence of a certain event, to the client. These messages are used when a request does not complete immediately and the response returns a status of PENDING or IN-PROGRESS. The intermediate results and events of the request are indicated to the client through event messages from the server. An event carries the request-id of the request that is in progress and generating the event, along with a request-state value. The request-state is COMPLETE if the request is done and this was the last event; otherwise it is IN-PROGRESS.

   event-line = event-name SP request-id SP request-state SP
                mrcp-version CRLF

The mrcp-version used here is identical to the one used in the request/response line and indicates the version of the MRCPv2 protocol running on the server.

The request-id used in the event should match the one sent in the request that caused this event.

The request-state indicates whether the request/command causing this event is complete or still in progress, and takes the same values as described in Section 5.2. The final event will contain a COMPLETE status indicating the completion of the request.

The event-name identifies the nature of the event generated by the media resource. The set of valid event names depends on the resource generating it; these are addressed in later sections.

   event-name = synthesizer-event | recognizer-event

5.4. Message Headers

MRCPv2 header fields, which include general-header (Section 5.5) and resource-specific-header (Sections 7.4 and 8.4), follow the same generic format as that given in Section 3.1 of RFC 822 [8]. Each header field consists of a name followed by a colon (":") and the field value. Field names are case-insensitive.
The field value MAY be preceded by any amount of LWS, though a single SP is preferred. Header fields can be extended over multiple lines by preceding each extra line with at least one SP or HT.

   message-header = field-name ":" [ field-value ]
   field-name     = token
   field-value    = *( field-content | LWS )
   field-content  = <the OCTETs making up the field-value
                     and consisting of either *TEXT or
                     combinations of token, separators,
                     and quoted-string>

The field-content does not include any leading or trailing LWS: linear white space occurring before the first non-whitespace character of the field-value or after the last non-whitespace character of the field-value. Such leading or trailing LWS MAY be removed without changing the semantics of the field value. Any LWS that occurs between field-content MAY be replaced with a single SP before interpreting the field value or forwarding the message downstream.

The order in which header fields with differing field names are received is not significant. However, it is "good practice" to send general-header fields first, followed by request-header or response-header fields, and ending with the entity-header fields.

Multiple message-header fields with the same field-name MAY be present in a message if and only if the entire field-value for that header field is defined as a comma-separated list [i.e., #(values)]. It MUST be possible to combine the multiple header fields into one "field-name: field-value" pair, without changing the semantics of the message, by appending each subsequent field-value to the first, each separated by a comma. The order in which header fields with the same field-name are received is therefore significant to the interpretation of the combined field value, and thus a proxy MUST NOT change the order of these field values when a message is forwarded.
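As a non-normative illustration of the folding and combining rules above, the following Python sketch unfolds continuation lines and merges repeated list-valued fields. The helper names are this example's own, not part of the protocol:

```python
import re

def unfold_and_split(raw):
    """Unfold header continuation lines (leading SP/HT) and split each
    header into a (lowercased-name, value) pair."""
    unfolded = re.sub(r"\r\n[ \t]+", " ", raw)
    pairs = []
    for line in unfolded.split("\r\n"):
        if line:
            name, _, value = line.partition(":")
            pairs.append((name.strip().lower(), value.strip()))
    return pairs

def combine_duplicates(pairs):
    """Merge repeated fields into one comma-separated value, preserving
    the order in which the values were received."""
    merged = {}
    for name, value in pairs:
        merged[name] = merged[name] + "," + value if name in merged else value
    return merged
```

A proxy built this way preserves the received value order, as the combining rule requires.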
Generic Headers

   generic-header = channel-identifier
                  | active-request-id-list
                  | proxy-sync-id
                  | content-id
                  | content-type
                  | content-length
                  | content-base
                  | content-location
                  | content-encoding
                  | cache-control
                  | logging-tag

All headers in MRCPv2 are case-insensitive, consistent with the HTTP and SIP protocol header definitions.

5.4.1. Channel-Identifier

All MRCPv2 methods, responses, and events MUST contain the Channel-Identifier header field. The value of this field is a hexadecimal string allocated by the media server when the control channel is added to the session through an SDP offer/answer exchange. The last 2 digits of the Channel-Identifier field specify one of the media processing resource types listed in Section 3.2.

5.4.2. Active-Request-Id-List

In a request, this field indicates the list of request-ids to which the request should apply. This is useful when there are multiple requests that are PENDING or IN-PROGRESS and the client wants the request to apply to one or more of them specifically.

In a response, this field returns the list of request-ids that the operation modified, or that were in progress or had just completed. There could be one or more requests that returned a request-state of PENDING or IN-PROGRESS. When a method affecting one or more PENDING or IN-PROGRESS requests is sent from the client to the server, the response MUST contain the list of request-ids that were affected in this header field. The active-request-id-list is only used in requests and responses, not in events.

For example, if a STOP request with no active-request-id-list is sent to a synthesizer resource (a wildcard STOP) that has one or more SPEAK requests in the PENDING or IN-PROGRESS state, all SPEAK requests MUST be cancelled, including the one IN-PROGRESS, and the response to the STOP request would contain, in the active-request-id-list, the request-ids of all the SPEAK requests that were terminated.
In this case, no SPEAK-COMPLETE or RECOGNITION-COMPLETE events will be sent for these terminated requests.

   active-request-id-list = "Active-Request-Id-List" ":"
                            request-id *("," request-id) CRLF

5.4.3. Proxy-Sync-Id

When any server resource generates a barge-in-able event, it generates a unique tag and sends it as a header field in an event to the client. The client then acts as a proxy to the server resource and sends a BARGE-IN-OCCURRED method to the synthesizer server resource with the Proxy-Sync-Id it received from the server resource. When the recognizer and synthesizer resources are part of the same session, they may choose to work together to achieve quicker interaction and response. Here, the Proxy-Sync-Id helps the resource receiving the event, proxied by the client, to decide whether this event has already been processed through a direct interaction of the resources.

   proxy-sync-id = "Proxy-Sync-Id" ":" 1*ALPHA CRLF

5.4.4. Accept-Charset

See [H14.2]. This specifies the acceptable character set for entities returned in the response or events associated with this request. This is useful for specifying the character set to use in the NLSML results of a RECOGNITION-COMPLETE event.

5.4.5. Content-Type

See [H14.17]. Note that the content types suitable for MRCPv2 are restricted to speech markup, grammar, recognition results, etc., and are specified later in this document. The multipart content type "multipart/mixed" is supported to communicate several of the above-mentioned contents, in which case the body parts cannot contain any MRCPv2-specific headers.

5.4.6. Content-Id

This field contains an ID or name for the content, by which it can be referred to. The definition of this field is available in RFC 2111 and is needed in multipart messages. In MRCPv2, whenever the content needs to be stored, by either the client or the server, it is stored associated with this ID.
Such content can be referenced during the session in URI form using the session: URI scheme described in a later section.

5.4.7. Content-Base

The content-base entity-header field MAY be used to specify the base URI for resolving relative URLs within the entity.

   content-base = "Content-Base" ":" absoluteURI CRLF

Note, however, that the base URI of the contents within the entity-body may be redefined within that entity-body. An example of this would be a multipart MIME entity, which in turn can have multiple entities within it.

5.4.8. Content-Encoding

The content-encoding entity-header field is used as a modifier to the media-type. When present, its value indicates what additional content codings have been applied to the entity-body, and thus what decoding mechanisms must be applied in order to obtain the media-type referenced by the content-type header field. Content-encoding is primarily used to allow a document to be compressed without losing the identity of its underlying media type.

   content-encoding = "Content-Encoding" ":" 1#content-coding CRLF

Content coding is defined in [H3.5]. An example of its use is

   Content-Encoding: gzip

If multiple encodings have been applied to an entity, the content codings MUST be listed in the order in which they were applied.

5.4.9. Content-Location

The content-location entity-header field MAY be used to supply the resource location for the entity enclosed in the message when that entity is accessible from a location separate from the requested resource's URI.

   content-location = "Content-Location" ":"
                      ( absoluteURI | relativeURI ) CRLF

The content-location value is a statement of the location of the resource corresponding to this particular entity at the time of the request. The media server MAY use this header field to optimize certain operations.
When providing this header field, the entity being sent should not have been modified from what was retrieved from the content-location URI.

For example, if the client provided a grammar markup inline, and it had previously retrieved it from a certain URI, that URI can be provided as part of the entity, using the content-location header field. This allows a resource such as the recognizer to look into its cache to see whether this grammar was previously retrieved, compiled, and cached, in which case it might optimize by using the previously compiled grammar object.

If the content-location is a relative URI, the relative URI is interpreted relative to the content-base URI.

5.4.10. Content-Length

This field contains the length of the content of the message body (i.e., after the double CRLF following the last header field). Unlike HTTP, it MUST be included in all messages that carry content beyond the header portion of the message. If it is missing, a default value of zero is assumed. It is interpreted according to [H14.13].

5.4.11. Cache-Control

If the media server plans on implementing caching, it MUST adhere to the cache-correctness rules of HTTP 1.1 (RFC 2616) when accessing and caching HTTP URIs. In particular, the expires and cache-control headers of the cached URI or document must be honored and always take precedence over the Cache-Control defaults set by this header field. The cache-control directives are used to define the default caching algorithms on the media server for the session or request. The scope of the directive is based on the method on which it is sent. If the directives are sent on a SET-PARAMS method, they SHOULD apply to all requests for documents the media server may make during that session. If the directives are sent on any other message, they MUST apply only to the document requests the media server needs to make for that method.
An empty cache-control header on the GET-PARAMS method is a request for the media server to return the current cache-control directive settings on the server.

   cache-control   = "Cache-Control" ":" 1#cache-directive CRLF
   cache-directive = "max-age" "=" delta-seconds
                   | "max-stale" "=" delta-seconds
                   | "min-fresh" "=" delta-seconds
   delta-seconds   = 1*DIGIT

Here, delta-seconds is a time value to be specified as an integer number of seconds, represented in decimal, after the time that the message response or data was received by the media server. These directives allow the media server to override the basic expiration mechanism.

max-age
   Indicates that the client is OK with the media server using a
   response whose age is no greater than the specified time in
   seconds. Unless a max-stale directive is also included, the
   client is not willing to accept the media server using a stale
   response.

min-fresh
   Indicates that the client is willing to accept the media server
   using a response whose freshness lifetime is no less than its
   current age plus the specified time in seconds. That is, the
   client wants the media server to use a response that will still
   be fresh for at least the specified number of seconds.

max-stale
   Indicates that the client is willing to accept the media server
   using a response that has exceeded its expiration time. If
   max-stale is assigned a value, then the client is willing to
   accept the media server using a response that has exceeded its
   expiration time by no more than the specified number of seconds.
   If no value is assigned to max-stale, then the client is willing
   to accept the media server using a stale response of any age.
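A non-normative Python sketch of how a media server might apply these three directives to a cached entry. The function name and signature are this example's own, and the valueless form of max-stale (accept any staleness) is omitted for brevity:

```python
def may_use_cached(age, freshness_lifetime,
                   max_age=None, max_stale=None, min_fresh=None):
    """Decide whether a cached entry satisfies the client's directives.

    age and freshness_lifetime are in seconds, as tracked by the media
    server for the cached response; a directive of None means "absent".
    """
    if max_age is not None and age > max_age:
        return False
    remaining = freshness_lifetime - age  # seconds of freshness left
    if min_fresh is not None and remaining < min_fresh:
        return False
    if remaining < 0:
        # Entry is stale; acceptable only within a max-stale allowance.
        return max_stale is not None and -remaining <= max_stale
    return True
```

For example, an entry that expired 10 seconds ago is rejected by default but accepted under max-stale=20.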
The media server cache MAY be requested to use a stale response/data without validation, but only if this does not conflict with any "MUST"-level requirements concerning cache validation (e.g., a "must-revalidate" cache-control directive) in the HTTP 1.1 specification pertaining to the URI.

If both the MRCPv2 cache-control directive and the cached entry on the media server include "max-age" directives, then the lesser of the two values is used for determining the freshness of the cached entry for that request.

5.4.12. Logging-Tag

This header field MAY be sent as part of a SET-PARAMS/GET-PARAMS method to set the logging tag for logs generated by the media server. Once set, the value persists until a new value is set or the session ends. The MRCPv2 server should provide a mechanism to subset its output logs so that system administrators can examine or extract only the log file portion during which the logging tag was set to a certain value.

MRCPv2 clients using this feature should take care to ensure that no two clients specify the same logging tag. In the event that two clients specify the same logging tag, the effect on the MRCPv2 server's output logs is undefined.

   logging-tag = "Logging-Tag" ":" 1*ALPHA CRLF

6. Resource Discovery

The capabilities of media server resources can be discovered using the SIP OPTIONS method, requesting the capabilities of the media server. The media server should respond to such a request with an SDP description of its capabilities according to RFC 3264. The MRCPv2 capabilities are described by an m-line containing the media type "control", the transport type "MRCP", and a format field containing the supported resource identifier. The SDP description SHOULD also contain m-lines describing the audio capabilities and the coders the server supports.

Example 4: The client uses the SIP OPTIONS method to query the capabilities of the MRCPv2 media server.
   C->S: OPTIONS sip:mrcp@mediaserver.com SIP/2.0
         Max-Forwards: 70
         To:
         From: Sarvi ;tag=1928301774
         Call-ID: a84b4c76e66710
         CSeq: 63104 OPTIONS
         Contact:
         Accept: application/sdp
         Content-Length: 0

   S->C: SIP/2.0 200 OK
         To: ;tag=93810874
         From: Sarvi ;tag=1928301774
         Call-ID: a84b4c76e66710
         CSeq: 63104 OPTIONS
         Contact:
         Allow: INVITE, ACK, CANCEL, OPTIONS, BYE
         Accept: application/sdp
         Accept-Encoding: gzip
         Accept-Language: en
         Supported: foo
         Content-Type: application/sdp
         Content-Length: 274

         v=0
         o=sarvi 2890844526 2890842807 IN IP4 126.16.64.4
         s=SDP Seminar
         i=A session for processing media
         c=IN IP4 224.2.17.12/127
         m=control 0 mrcp 01
         m=control 0 mrcp 02
         m=audio 0 RTP/AVP 0 1 3
         a=rtpmap:0 PCMU/8000
         a=rtpmap:1 1016/8000
         a=rtpmap:3 GSM/8000

7. Speech Synthesizer Resource

This resource is capable of converting text provided by the client into a speech stream generated in real time. Depending on the implementation and capability of this resource, the client can control parameters such as voice characteristics, speaker speed, etc.

The synthesizer resource is controlled by MRCPv2 requests from the client. Similarly, the resource can respond to these requests or generate asynchronous events to the client to indicate certain conditions during the processing of the stream.

7.1. Synthesizer State Machine

The synthesizer maintains state because it needs to correlate MRCPv2 requests from the client. The state transitions shown below describe the states of the synthesizer and reflect the request at the head of the queue. A SPEAK request in the PENDING state can be deleted or stopped by a STOP request without affecting the state of the resource.
   Idle                   Speaking                Paused
   State                  State                   State
     |                       |                       |
     |---------SPEAK-------->|              |--------|
     |<------STOP------------|              CONTROL  |
     |<----SPEAK-COMPLETE----|              |------->|
     |<----BARGE-IN-OCCURRED-|                       |
     |           |--------|  |                       |
     |           CONTROL     |---------PAUSE-------->|
     |           |------->|  |<--------RESUME--------|
     |                       |             |---------|
     |                       |             PAUSE     |
     |                       |             |-------->|
     |--------|              |---------|             |
     | BARGE-IN-OCCURRED     | SPEECH-MARKER         |
     |------->|              |<--------|             |
     |---------|             |            |--------| |
     | STOP    |             |            | SPEAK  | |
     |-------->|             |            |------->| |
     |<------------------STOP-----------------------|

7.2. Synthesizer Methods

The synthesizer supports the following methods.

   synthesizer-method = "SET-PARAMS"
                      | "GET-PARAMS"
                      | "SPEAK"
                      | "STOP"
                      | "PAUSE"
                      | "RESUME"
                      | "BARGE-IN-OCCURRED"
                      | "CONTROL"

7.3. Synthesizer Events

The synthesizer may generate the following events.

   synthesizer-event = "SPEECH-MARKER"
                     | "SPEAK-COMPLETE"

7.4. Synthesizer Header Fields

A synthesizer message may contain header fields carrying request options and information that augment the request, response, or event the message is associated with.

   synthesizer-header = jump-target       ; Section 7.4.1
                      | kill-on-barge-in  ; Section 7.4.2
                      | speaker-profile   ; Section 7.4.3
                      | completion-cause  ; Section 7.4.4
                      | voice-parameter   ; Section 7.4.5
                      | prosody-parameter ; Section 7.4.6
                      | vendor-specific   ; Section 7.4.7
                      | speech-marker     ; Section 7.4.8
                      | speech-language   ; Section 7.4.9
                      | fetch-hint        ; Section 7.4.10
                      | audio-fetch-hint  ; Section 7.4.11
                      | fetch-timeout     ; Section 7.4.12
                      | failed-uri        ; Section 7.4.13
                      | failed-uri-cause  ; Section 7.4.14
                      | speak-restart     ; Section 7.4.15
                      | speak-length      ; Section 7.4.16

   Parameter          Support    Methods/Events/Responses
   jump-target        MANDATORY  SPEAK, CONTROL
   logging-tag        MANDATORY  SET-PARAMS, GET-PARAMS
   kill-on-barge-in   MANDATORY  SPEAK
   speaker-profile    OPTIONAL   SET-PARAMS, GET-PARAMS, SPEAK,
                                 CONTROL
   completion-cause   MANDATORY  SPEAK-COMPLETE
   voice-parameter    MANDATORY  SET-PARAMS, GET-PARAMS, SPEAK,
                                 CONTROL
   prosody-parameter  MANDATORY  SET-PARAMS, GET-PARAMS, SPEAK,
                                 CONTROL
   vendor-specific    MANDATORY  SET-PARAMS, GET-PARAMS
   speech-marker      MANDATORY  SPEECH-MARKER
   speech-language    MANDATORY  SET-PARAMS, GET-PARAMS, SPEAK
   fetch-hint         MANDATORY  SET-PARAMS, GET-PARAMS, SPEAK
   audio-fetch-hint   MANDATORY  SET-PARAMS, GET-PARAMS, SPEAK
   fetch-timeout      MANDATORY  SET-PARAMS, GET-PARAMS, SPEAK
   failed-uri         MANDATORY  Any
   failed-uri-cause   MANDATORY  Any
   speak-restart      MANDATORY  CONTROL
   speak-length       MANDATORY  SPEAK, CONTROL

7.4.1. Jump-Target

This parameter MAY be specified in a CONTROL method to control the jump size for moving forward or rewinding backward in an active SPEAK request. A "+" or "-" indicates a value relative to what is currently being played. This parameter MAY also be specified in a SPEAK request to indicate an offset into the speech markup from which the SPEAK request should start speaking. The speech length units supported depend on the synthesizer implementation. If it does not support a unit or the operation, the resource SHOULD respond with a status code of 404 "Illegal Value for Parameter".

   jump-target          = "Jump-Size" ":" speech-length-value CRLF
   speech-length-value  = numeric-speech-length | text-speech-length
   text-speech-length   = 1*ALPHA SP "Tag"
   numeric-speech-length= ("+" | "-") 1*DIGIT SP numeric-speech-unit
   numeric-speech-unit  = "Second" | "Word" | "Sentence" | "Paragraph"

7.4.2. Kill-On-Barge-In

This parameter MAY be sent as part of the SPEAK method to enable kill-on-barge-in support. If enabled, the SPEAK method is
interrupted by DTMF input detected by a signal detector resource, or by the start of speech sensed or recognized by the speech recognizer resource.

   kill-on-barge-in = "Kill-On-Barge-In" ":" boolean-value CRLF
   boolean-value    = "true" | "false"

If the recognizer or signal detector resource is on the same server as the synthesizer, the server should be intelligent enough to recognize their interactions by their common MRCPv2 channel identifier (ignoring the last 2 hexadecimal digits) and have them work with each other to provide kill-on-barge-in support.

The client needs to send a BARGE-IN-OCCURRED method to the synthesizer resource when it receives a barge-in-able event from the recognizer resource or signal detector resource. These resources MAY be local or distributed. If this field is not specified, the value defaults to "true".

7.4.3. Speaker-Profile

This parameter MAY be part of a SET-PARAMS/GET-PARAMS or SPEAK request from the client to the server and specifies the profile of the speaker by a URI, which may reference a set of voice parameters such as gender, accent, etc.

   speaker-profile = "Speaker-Profile" ":" uri CRLF

7.4.4. Completion-Cause

This header field MUST be specified in a SPEAK-COMPLETE event coming from the synthesizer resource to the client. It indicates the reason behind the SPEAK request's completion.

   completion-cause = "Completion-Cause" ":" 1*DIGIT SP 1*ALPHA CRLF

   Cause-Code  Cause-Name            Description
   000         normal                SPEAK completed normally.
   001         barge-in              SPEAK request was terminated
                                     because of barge-in.
   002         parse-failure         SPEAK request terminated because
                                     of a failure to parse the speech
                                     markup text.
   003         uri-failure           SPEAK request terminated because
                                     access to one of the URIs failed.
   004         error                 SPEAK request terminated
                                     prematurely due to a synthesizer
                                     error.
   005         language-unsupported  Language not supported.

7.4.5. Voice-Parameters
This set of parameters defines the voice of the speaker.

   voice-parameter = "Voice-" voice-param-name ":"
                     voice-param-value CRLF

The voice-param-name is any one of the attribute names of the voice element specified in W3C's Speech Synthesis Markup Language Specification [10]. The voice-param-value is any one of the value choices of the corresponding voice element attribute specified in that document.

These header fields MAY be sent in a SET-PARAMS/GET-PARAMS request to define/get default values for the entire session, or MAY be sent in a SPEAK request to define default values for that SPEAK request. Furthermore, these attributes can be part of the speech text marked up in SML.

These voice parameter header fields can also be sent in a CONTROL method to affect a SPEAK request in progress and change its behavior on the fly. If the synthesizer resource does not support this operation, it should respond to the client with a status of unsupported.

7.4.6. Prosody-Parameters

This set of parameters defines the prosody of the speech.

   prosody-parameter = "Prosody-" prosody-param-name ":"
                       prosody-param-value CRLF

The prosody-param-name is any one of the attribute names of the prosody element specified in W3C's Speech Synthesis Markup Language Specification [10]. The prosody-param-value is any one of the value choices of the corresponding prosody element attribute specified in that document.

These header fields MAY be sent in a SET-PARAMS/GET-PARAMS request to define/get default values for the entire session, or MAY be sent in a SPEAK request to define default values for that SPEAK request. Furthermore, these attributes can be part of the speech text marked up in SML.

The prosody parameter header fields in the SET-PARAMS or SPEAK request only apply if the speech data is of type text/plain and does not use a speech markup format.
These prosody parameter header fields MAY also be sent in a CONTROL method to affect a SPEAK request in progress and change its behavior on the fly. If the synthesizer resource does not support this operation, it should respond to the client with a status of unsupported.

7.4.7. Vendor-Specific Parameters

This set of headers allows the client to set vendor-specific parameters.

   vendor-specific         = "Vendor-Specific-Parameters" ":"
                             vendor-specific-av-pair
                             *(";" vendor-specific-av-pair) CRLF
   vendor-specific-av-pair = vendor-av-pair-name "="
                             vendor-av-pair-value

This header MAY be sent in the SET-PARAMS/GET-PARAMS method and is used to set vendor-specific parameters on the server side. The vendor-av-pair-name can be any vendor-specific field name and conforms to the XML vendor-specific attribute naming convention. The vendor-av-pair-value is the value to set the attribute to and needs to be quoted.

When asking the server for the current value of these parameters, this header can be sent in the GET-PARAMS method with the list of vendor-specific attribute names to get, separated by semicolons.

7.4.8. Speech-Marker

This header field contains a marker tag that may be embedded in the speech data. Most speech markup formats provide mechanisms to embed marker fields within the speech text. The synthesizer generates SPEECH-MARKER events when it reaches these marker fields. This field SHOULD be part of the SPEECH-MARKER event and contains the marker tag values.

   speech-marker = "Speech-Marker" ":" 1*ALPHA CRLF

7.4.9. Speech-Language

This header field specifies the default language of the speech data if the language is not specified within the data itself. The value of this header field should follow RFC 1766. This header MAY occur in SPEAK, SET-PARAMS, or GET-PARAMS requests.

   speech-language = "Speech-Language" ":" 1*ALPHA CRLF

7.4.10.
Fetch-Hint

When the synthesizer needs to fetch documents or other resources, such as speech markup or audio files, this header field controls the corresponding URI access properties. It defines when the synthesizer should retrieve content from the server. A value of "prefetch" indicates that a file may be downloaded when the request is received, whereas "safe" indicates a file that should only be downloaded when actually needed. The default value is "prefetch". This header field MAY occur in SPEAK, SET-PARAMS, or GET-PARAMS requests.

   fetch-hint = "Fetch-Hint" ":" 1*ALPHA CRLF

7.4.11. Audio-Fetch-Hint

When the synthesizer needs to fetch documents or other resources, such as speech audio files, this header field controls the corresponding URI access properties. It defines whether the synthesizer can attempt to optimize speech by pre-fetching audio. The value is either "safe", to say that audio is only fetched when it is needed and never before; "prefetch", to permit but not require the platform to pre-fetch the audio; or "stream", to allow it to stream the audio fetches. The default value is "prefetch". This header field MAY occur in SPEAK, SET-PARAMS, or GET-PARAMS requests.

   audio-fetch-hint = "Audio-Fetch-Hint" ":" 1*ALPHA CRLF

7.4.12. Fetch-Timeout

When the synthesizer needs to fetch documents or other resources, such as speech audio files, this header field controls the timeout for those fetches. It defines the synthesizer timeout, specified in milliseconds, for resources the media server may need to fetch from the network. The default value is platform-dependent. This header field MAY occur in SPEAK, SET-PARAMS, or GET-PARAMS requests.

   fetch-timeout = "Fetch-Timeout" ":" 1*DIGIT CRLF

7.4.13. Failed-URI

When a synthesizer method needs the synthesizer to fetch or access a URI and the access fails, the media server SHOULD provide the failed URI in this header field in the method response.

   failed-uri = "Failed-URI" ":" Url CRLF

7.4.14.
Failed-URI-Cause

When a synthesizer method needs the synthesizer to fetch or access a URI and the access fails, the media server SHOULD provide the URI-specific or protocol-specific response code through this header field in the method response. This field has been defined as alphanumeric to accommodate all protocols, some of which might have a response string instead of a numeric response code.

   failed-uri-cause = "Failed-URI-Cause" ":" 1*ALPHA CRLF

7.4.15. Speak-Restart

When a CONTROL jump-backward request is issued to a currently speaking synthesizer resource and the jump extends beyond the start of the speech, the current SPEAK request restarts from the beginning of its speech data, and the response to the CONTROL request contains this header indicating the restart. This header MAY occur in the CONTROL response.

   speak-restart = "Speak-Restart" ":" boolean-value CRLF

7.4.16. Speak-Length

This parameter MAY be specified in a CONTROL method to control the length of speech to speak, relative to the current speaking point in the currently active SPEAK request. A "-" value is illegal in this field. If a field with a Tag unit is specified, then the media server must speak until the tag is reached or the SPEAK request completes, whichever comes first. This parameter MAY also be specified in a SPEAK request to indicate the length to speak from the speech data, relative to the point at which the SPEAK request starts. The speech length units supported depend on the synthesizer implementation. If it does not support a unit or the operation, the resource SHOULD respond with a status code of 404 "Illegal Value for Parameter".
   speak-length         = "Speak-Length" ":" speech-length-value CRLF
   speech-length-value  = numeric-speech-length | text-speech-length
   text-speech-length   = 1*ALPHA SP "Tag"
   numeric-speech-length= ("+" | "-") 1*DIGIT SP numeric-speech-unit
   numeric-speech-unit  = "Second" | "Word" | "Sentence" | "Paragraph"

7.5. Synthesizer Message Body

A synthesizer message may contain additional information associated with the method, response, or event in its message body.

7.5.1. Synthesizer Speech Data

Marked-up text for the synthesizer to speak is specified as a MIME entity in the message body. The message to be spoken by the synthesizer can be specified inline, by embedding the data in the message body, or by reference, by providing a URI to the data. In either case, the data and the format used to mark up the speech need to be supported by the media server.

All media servers MUST support plain text speech data and W3C's Speech Synthesis Markup Language [10] at a minimum, and hence MUST support the MIME types text/plain and application/synthesis+ssml at a minimum.

If the speech data needs to be specified by URI reference, the MIME type text/uri-list is used to specify the one or more URIs that list what needs to be spoken. If a list of speech URIs is specified, the speech data provided by each URI must be spoken in the order in which the URIs are specified.

If the data to be spoken consists of a mix of URIs and inline speech data, the multipart/mixed MIME type is used, embedding MIME blocks of type text/uri-list, application/synthesis+ssml, or text/plain. The character set and encoding used in the speech data may be specified according to standard MIME type definitions.

The multipart MIME block can also contain actual audio data in .wav or Sun audio format. This is used when the client has audio clips that it may have recorded and stored in memory or on a local device, and it needs to play them as part of the SPEAK request.
The audio MIME parts can be sent by the client as part of the multipart MIME block. This audio is referenced in the speech markup data that forms another part of the multipart MIME block, according to the multipart/mixed MIME type specification.

Example 1:

   Content-Type: text/uri-list
   Content-Length: 176

   http://www.example.com/ASR-Introduction.sml
   http://www.example.com/ASR-Document-Part1.sml
   http://www.example.com/ASR-Document-Part2.sml
   http://www.example.com/ASR-Conclusion.sml

Example 2:

   Content-Type: application/synthesis+ssml
   Content-Length: 104

   You have 4 new messages. The first is from Stephanie Williams
   and arrived at 3:45pm. The subject is ski trip

Example 3:

   Content-Type: multipart/mixed; boundary="--break"

   --break
   Content-Type: text/uri-list
   Content-Length: 176

   http://www.example.com/ASR-Introduction.sml
   http://www.example.com/ASR-Document-Part1.sml
   http://www.example.com/ASR-Document-Part2.sml
   http://www.example.com/ASR-Conclusion.sml

   --break
   Content-Type: application/synthesis+ssml
   Content-Length: 104

   You have 4 new messages. The first is from Stephanie Williams
   and arrived at 3:45pm. The subject is ski trip

   --break

7.6. SET-PARAMS

The SET-PARAMS method, from the client to the server, tells the synthesizer resource to define default synthesizer context parameters, such as voice characteristics and prosody. If the server accepted and set all parameters, it MUST return a response status of 200. If it chose to ignore some optional parameters, it MUST return 201.

If some of the parameters being set are unsupported or have illegal values, the server accepts and sets the remaining parameters, MUST respond with a response status of 403 or 404, and MUST include in the response the header fields that could not be set.

Example:

   C->S: SET-PARAMS 543256 MRCP/2.0
         Channel-Identifier: 32AECB23433802
         Voice-gender: female
IETF-Draft Page 34 MRCPv2 Protocol May 2003 Voice-category: adult Voice-variant: 3 S->C:MRCP/2.0 543256 200 COMPLETE Channel-Identifier: 32AECB23433802 7.7. GET-PARAMS The GET-PARAMS method, from the client to server, asks the synthesizer resource for its current synthesizer context parameters, like voice characteristics and prosody etc. The client SHOULD send the list of parameter it wants to read from the server by listing a set of empty parameter header fields. If a specific list is not specified then the server SHOULD return all the settable parameters including vendor-specific parameters and their current values. The wild card use can be very intensive as the number of settable parameters can be large depending on the vendor. Hence it is RECOMMENDED that the client does not use the wildcard GET-PARAMS operation very often. Example: C->S:GET-PARAMS 543256 MRCP/2.0 Channel-Identifier: 32AECB23433802 Voice-gender: Voice-category: Voice-variant: Vendor-Specific-Parameters:com.mycorp.param1; com.mycorp.param2 S->C:MRCP/2.0 543256 200 COMPLETE Channel-Identifier: 32AECB23433802 Voice-gender:female Voice-category: adult Voice-variant: 3 Vendor-Specific-Parameters:com.mycorp.param1="Company Name"; com.mycorp.param2="124324234@mycorp.com" 7.8. SPEAK The SPEAK method from the client to the server provides the synthesizer resource with the speech text and initiates speech synthesis and streaming. The SPEAK method can carry voice and prosody header fields that define the behavior of the voice being synthesized, as well as the actual marked-up text to be spoken. If specific voice and prosody parameters are specified as part of the speech markup text, it will take precedence over the values specified in the header fields and those set using a previous SET- PARAMS request. S Shanmugham, et. al. IETF-Draft Page 35 MRCPv2 Protocol May 2003 When applying voice parameters there are 3 levels of scope. 
The highest precedence goes to values specified within the speech
markup text, followed by those specified in the header fields of the
SPEAK request (which apply to that SPEAK request only), followed by
the session default values, which can be set using the SET-PARAMS
request and apply to the whole session going forward.

If the resource was idle when the SPEAK request arrived and the
request is being actively processed, the resource responds with a
success status code and a request-state of IN-PROGRESS. If the
resource is in the speaking or paused state, i.e. it is in the middle
of processing a previous SPEAK request, the status returns success
and a request-state of PENDING. This means that this SPEAK request is
in queue and will be processed after the currently active SPEAK
request is completed. For the synthesizer resource, this is the only
request that can return a request-state of IN-PROGRESS or PENDING.
When the text to be synthesized has been spoken completely, the
resource issues a SPEAK-COMPLETE event with the request-id of the
SPEAK message and a request-state of COMPLETE.

Example:
   C->S:SPEAK 543257 MRCP/2.0
        Channel-Identifier: 32AECB23433802
        Voice-gender: neutral
        Voice-category: teenager
        Prosody-volume: medium
        Content-Type: application/synthesis+ssml
        Content-Length: 104

        <?xml version="1.0"?>
        <speak>
        You have 4 new messages. The first is from Stephanie Williams
        and arrived at 3:45pm. The subject is ski trip
        </speak>

   S->C:MRCP/2.0 543257 200 IN-PROGRESS
        Channel-Identifier: 32AECB23433802

   S->C:SPEAK-COMPLETE 543257 COMPLETE MRCP/2.0
        Channel-Identifier: 32AECB23433802
        Completion-Cause: 000 normal

7.9. STOP

The STOP method from the client to the server tells the resource to
stop speaking, if it is speaking something.

The STOP request can be sent with an active-request-id-list header
field to stop the zero or more specific SPEAK requests that may be
in queue, and returns a response code of 200 (Success).
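The IN-PROGRESS/PENDING queueing described above, together with the STOP semantics of this section, can be sketched as a small state model. The class and method names here are illustrative and not part of the protocol; this is a sketch of the rules, not a server implementation.

```python
# Sketch of synthesizer request-state rules: a new SPEAK is
# IN-PROGRESS when the resource is idle, PENDING otherwise; STOP
# terminates queued/active requests and reports their request-ids.

class SynthesizerQueue:
    def __init__(self):
        self.active = None   # request-id of the IN-PROGRESS SPEAK
        self.pending = []    # request-ids of PENDING SPEAKs, in order

    def speak(self, request_id):
        """Return the request-state to send in the SPEAK response."""
        if self.active is None:
            self.active = request_id
            return "IN-PROGRESS"
        self.pending.append(request_id)
        return "PENDING"

    def stop(self, request_ids=None):
        """Terminate the listed requests (or all if None).

        Returns the terminated ids, i.e. the contents of the
        Active-Request-Id-List header in the STOP response. If the
        active request was stopped, the next pending SPEAK, if any,
        becomes IN-PROGRESS."""
        targets = set(request_ids) if request_ids is not None else None
        stopped = []
        if self.active is not None and (targets is None
                                        or self.active in targets):
            stopped.append(self.active)
            self.active = None
        kept = []
        for rid in self.pending:
            if targets is None or rid in targets:
                stopped.append(rid)
            else:
                kept.append(rid)
        self.pending = kept
        if self.active is None and self.pending:
            self.active = self.pending.pop(0)  # promote next request
        return stopped
```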
If no active-request-id-list header field is sent in the STOP
request, it terminates all outstanding SPEAK requests.

If a STOP request successfully terminated one or more PENDING or
IN-PROGRESS SPEAK requests, then the response contains an
active-request-id-list header field listing the request-ids of the
SPEAK requests that were terminated. Otherwise, there is no
active-request-id-list header field in the response. No
SPEAK-COMPLETE events are sent for these terminated requests.

If a SPEAK request that was IN-PROGRESS and speaking was stopped,
the next pending SPEAK request, if any, becomes IN-PROGRESS and
moves to the speaking state. If a SPEAK request that was IN-PROGRESS
and in the paused state was stopped, the next pending SPEAK request,
if any, becomes IN-PROGRESS and moves to the paused state.

Example:
   C->S:SPEAK 543258 MRCP/2.0
        Channel-Identifier: 32AECB23433802
        Content-Type: application/synthesis+ssml
        Content-Length: 104

        <?xml version="1.0"?>
        <speak>
        You have 4 new messages. The first is from Stephanie Williams
        and arrived at 3:45pm. The subject is ski trip
        </speak>

   S->C:MRCP/2.0 543258 200 IN-PROGRESS
        Channel-Identifier: 32AECB23433802

   C->S:STOP 543259 MRCP/2.0
        Channel-Identifier: 32AECB23433802

   S->C:MRCP/2.0 543259 200 COMPLETE
        Channel-Identifier: 32AECB23433802
        Active-Request-Id-List: 543258

7.10. BARGE-IN-OCCURRED

The BARGE-IN-OCCURRED method is a mechanism for the client to
communicate a barge-in-able event it detects to the speech resource.

This event is useful in two scenarios:

1. The client has detected some event, like DTMF digits or another
   barge-in-able event, and wants to communicate that to the
   synthesizer.

2. The recognizer resource and the synthesizer resource are on
   different servers, in which case the client MUST act as a proxy,
   receive the event from the recognition resource, and then send a
   BARGE-IN-OCCURRED method to the synthesizer.
In such cases, the BARGE-IN-OCCURRED method also carries a
Proxy-Sync-Id header field received from the resource generating the
original event.

If a SPEAK request is active with kill-on-barge-in enabled, and the
BARGE-IN-OCCURRED event is received, the synthesizer should stop
streaming out audio. It should also terminate any speech requests
queued behind the currently active one, irrespective of whether they
have barge-in enabled or not. If a barge-in-able prompt was playing
and was terminated, the response MUST contain the request-ids of all
SPEAK requests that were terminated in its active-request-id-list
header field. There will be no SPEAK-COMPLETE events generated for
these requests.

If the synthesizer and the recognizer are on the same server, they
could be optimized for a quicker kill-on-barge-in response by having
the recognizer and synthesizer interact directly, based on an MRCPv2
channel identifier ignoring the last 2 hexadecimal digits. In these
cases, the client MUST still proxy the recognition event through a
BARGE-IN-OCCURRED method, but the synthesizer resource may have
already stopped and sent a SPEAK-COMPLETE event with a barge-in
completion cause code. If no SPEAK requests were terminated as a
result of the BARGE-IN-OCCURRED method, the response is still a 200
success but MUST NOT contain an active-request-id-list header field.

Example:
   C->S:SPEAK 543258 MRCP/2.0
        Channel-Identifier: 32AECB23433802
        Voice-gender: neutral
        Voice-category: teenager
        Prosody-volume: medium
        Content-Type: application/synthesis+ssml
        Content-Length: 104

        <?xml version="1.0"?>
        <speak>
        You have 4 new messages. The first is from Stephanie Williams
        and arrived at 3:45pm. The subject is ski trip
        </speak>

   S->C:MRCP/2.0 543258 200 IN-PROGRESS
        Channel-Identifier: 32AECB23433802

   C->S:BARGE-IN-OCCURRED 543259 MRCP/2.0
        Channel-Identifier: 32AECB23433802
        Proxy-Sync-Id: 987654321

   S->C:MRCP/2.0 543259 200 COMPLETE
        Channel-Identifier: 32AECB23433802
        Active-Request-Id-List: 543258

7.11. PAUSE

The PAUSE method from the client to the server tells the resource to
pause speech, if it is speaking something. If a PAUSE method is
issued on a session when a SPEAK is not active, the server SHOULD
respond with a status of 402 "Method not valid in this state". If a
PAUSE method is issued on a session when a SPEAK is active and
paused, the server SHOULD respond with a status of 200 "Success". If
a SPEAK request was active, the server MUST return an
active-request-id-list header field with the request-id of the SPEAK
request that was paused.

Example:
   C->S:SPEAK 543258 MRCP/2.0
        Channel-Identifier: 32AECB23433802
        Voice-gender: neutral
        Voice-category: teenager
        Prosody-volume: medium
        Content-Type: application/synthesis+ssml
        Content-Length: 104

        <?xml version="1.0"?>
        <speak>
        You have 4 new messages. The first is from Stephanie Williams
        and arrived at 3:45pm. The subject is ski trip
        </speak>

   S->C:MRCP/2.0 543258 200 IN-PROGRESS
        Channel-Identifier: 32AECB23433802

   C->S:PAUSE 543259 MRCP/2.0
        Channel-Identifier: 32AECB23433802

   S->C:MRCP/2.0 543259 200 COMPLETE
        Channel-Identifier: 32AECB23433802
        Active-Request-Id-List: 543258

7.12. RESUME

The RESUME method from the client to the server tells a paused
synthesizer resource to continue speaking. If a RESUME method is
issued on a session when a SPEAK is not active, the server SHOULD
respond with a status of 402 "Method not valid in this state". If a
RESUME method is issued on a session when a SPEAK is active and
speaking (i.e. not paused), the server SHOULD respond with a status
of 200 "Success".
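Several responses in these sections carry an Active-Request-Id-List header naming the affected SPEAK requests. A minimal parsing sketch follows; it assumes the value is one or more request-ids separated by commas (the examples in this document show only a single id, so the multi-id form is an assumption here), and the function name is illustrative.

```python
# Sketch: parse an Active-Request-Id-List header line into a list of
# integer request-ids. Comma separation is assumed, not normative.

def parse_active_request_id_list(line: str):
    name, _, value = line.partition(":")
    if name.strip().lower() != "active-request-id-list":
        raise ValueError("not an Active-Request-Id-List header")
    return [int(tok) for tok in value.split(",") if tok.strip()]
```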
If a SPEAK request was active, the server MUST return an
active-request-id-list header field with the request-id of the SPEAK
request that was resumed.

Example:
   C->S:SPEAK 543258 MRCP/2.0
        Channel-Identifier: 32AECB23433802
        Voice-gender: neutral
        Voice-category: teenager
        Prosody-volume: medium
        Content-Type: application/synthesis+ssml
        Content-Length: 104

        <?xml version="1.0"?>
        <speak>
        You have 4 new messages. The first is from Stephanie Williams
        and arrived at 3:45pm. The subject is ski trip
        </speak>

   S->C:MRCP/2.0 543258 200 IN-PROGRESS
        Channel-Identifier: 32AECB23433802

   C->S:PAUSE 543259 MRCP/2.0
        Channel-Identifier: 32AECB23433802

   S->C:MRCP/2.0 543259 200 COMPLETE
        Channel-Identifier: 32AECB23433802
        Active-Request-Id-List: 543258

   C->S:RESUME 543260 MRCP/2.0
        Channel-Identifier: 32AECB23433802

   S->C:MRCP/2.0 543260 200 COMPLETE
        Channel-Identifier: 32AECB23433802
        Active-Request-Id-List: 543258

7.13. CONTROL

The CONTROL method from the client to the server tells a synthesizer
that is speaking to modify what it is speaking on the fly. This
method is used to make the synthesizer jump forward or backward in
what it is speaking, change the speaking rate and speaker
parameters, etc. It affects the active or IN-PROGRESS SPEAK request.
Depending on the implementation and capability of the synthesizer
resource, it may allow this operation or one or more of its
parameters.

When a CONTROL to jump forward is issued and the operation goes
beyond the end of the active SPEAK method's text, the request
succeeds. A SPEAK-COMPLETE event follows the response to the CONTROL
method. If there are more SPEAK requests in the queue, the
synthesizer resource will continue to process the next SPEAK method.

When a CONTROL to jump backward is issued and the operation jumps to
the beginning of the speech data of the active SPEAK request, the
response to the CONTROL request contains the speak-restart header
field.
These two behaviors can be used to rewind or fast-forward across
multiple speech requests, if the client wants to break up a speech
markup text into multiple SPEAK requests.

If a SPEAK request was active when the CONTROL method was received,
the server MUST return an active-request-id-list header field with
the request-id of the SPEAK request that was active.

Example:
   C->S:SPEAK 543258 MRCP/2.0
        Channel-Identifier: 32AECB23433802
        Voice-gender: neutral
        Voice-category: teenager
        Prosody-volume: medium
        Content-Type: application/synthesis+ssml
        Content-Length: 104

        <?xml version="1.0"?>
        <speak>
        You have 4 new messages. The first is from Stephanie Williams
        and arrived at 3:45pm. The subject is ski trip
        </speak>

   S->C:MRCP/2.0 543258 200 IN-PROGRESS
        Channel-Identifier: 32AECB23433802

   C->S:CONTROL 543259 MRCP/2.0
        Channel-Identifier: 32AECB23433802
        Prosody-rate: fast

   S->C:MRCP/2.0 543259 200 COMPLETE
        Channel-Identifier: 32AECB23433802
        Active-Request-Id-List: 543258

   C->S:CONTROL 543260 MRCP/2.0
        Channel-Identifier: 32AECB23433802
        Jump-Size: -15 Words

   S->C:MRCP/2.0 543260 200 COMPLETE
        Channel-Identifier: 32AECB23433802
        Active-Request-Id-List: 543258

7.14. SPEAK-COMPLETE

This is an event message from the synthesizer resource to the client
indicating that the SPEAK request was completed. The request-id
header field matches the request-id of the SPEAK request that
initiated the speech that just completed. The request-state field
should be COMPLETE, indicating that this is the last event with that
request-id and that the request with that request-id is now
complete. The completion-cause header field specifies the cause code
pertaining to the status and reason of request completion, such as
whether the SPEAK completed normally or because of an error,
kill-on-barge-in, etc.
Example:
   C->S:SPEAK 543260 MRCP/2.0
        Channel-Identifier: 32AECB23433802
        Voice-gender: neutral
        Voice-category: teenager
        Prosody-volume: medium
        Content-Type: application/synthesis+ssml
        Content-Length: 104

        <?xml version="1.0"?>
        <speak>
        You have 4 new messages. The first is from Stephanie Williams
        and arrived at 3:45pm. The subject is ski trip
        </speak>

   S->C:MRCP/2.0 543260 200 IN-PROGRESS
        Channel-Identifier: 32AECB23433802

   S->C:SPEAK-COMPLETE 543260 COMPLETE MRCP/2.0
        Channel-Identifier: 32AECB23433802
        Completion-Cause: 000 normal

7.15. SPEECH-MARKER

This is an event generated by the synthesizer resource to the client
when it hits a marker tag in the speech markup it is currently
processing. The request-id field in the header matches the
request-id of the SPEAK request that initiated the speech. The
request-state field should be IN-PROGRESS, as the speech is not yet
complete and there is more to be spoken. The actual speech marker
tag hit, describing where the synthesizer is in the speech markup,
is returned in the speech-marker header field.

Example:
   C->S:SPEAK 543261 MRCP/2.0
        Channel-Identifier: 32AECB23433802
        Voice-gender: neutral
        Voice-category: teenager
        Prosody-volume: medium
        Content-Type: application/synthesis+ssml
        Content-Length: 104

        <?xml version="1.0"?>
        <speak>
        You have 4 new messages. The first is from Stephanie Williams
        and arrived at 3:45pm. The subject is ski trip
        </speak>

   S->C:MRCP/2.0 543261 200 IN-PROGRESS
        Channel-Identifier: 32AECB23433802

   S->C:SPEECH-MARKER 543261 IN-PROGRESS MRCP/2.0
        Channel-Identifier: 32AECB23433802
        Speech-Marker: here

   S->C:SPEECH-MARKER 543261 IN-PROGRESS MRCP/2.0
        Channel-Identifier: 32AECB23433802
        Speech-Marker: ANSWER

   S->C:SPEAK-COMPLETE 543261 COMPLETE MRCP/2.0
        Channel-Identifier: 32AECB23433802
        Completion-Cause: 000 normal

8. Speech Recognizer Resource

The speech recognizer resource is capable of receiving an incoming
voice stream and providing the client with an interpretation of what
was spoken in textual form.

8.1. Recognizer State Machine

The recognizer resource is controlled by MRCPv2 requests from the
client. Similarly, the resource can respond to these requests or
generate asynchronous events to the client to indicate certain
conditions during the processing of the stream. Hence the recognizer
maintains states to correlate MRCPv2 requests from the client. The
state transitions are described below.

Idle                   Recognizing               Recognized
State                  State                     State
 |                       |                          |
 |---------RECOGNIZE---->|---RECOGNITION-COMPLETE-->|
 |<------STOP------------|<-----RECOGNIZE-----------|
 |                       |                          |
 |              |--------|              |-----------|
 |              | START-OF-SPEECH       | GET-RESULT|
 |              |------->|              |---------->|
 |                       |                          |
 |          |------------|                          |
 |          | RECOGNITION-START-TIMERS              |
 |          |----------->|                          |
 |                       |                          |
 |-----------|           |                          |
 | DEFINE-GRAMMAR        |                          |
 |<----------|           |                          |
 |                       |                          |
 |-------|               |                          |
 | STOP  |               |                          |
 |<------|               |                          |
 |                       |                          |
 |<-------------------STOP--------------------------|
 |<-------------------DEFINE-GRAMMAR----------------|

8.2. Recognizer Methods

The recognizer supports the following methods.

recognizer-Method = SET-PARAMS
                  | GET-PARAMS
                  | DEFINE-GRAMMAR
                  | RECOGNIZE
                  | GET-RESULT
                  | RECOGNITION-START-TIMERS
                  | STOP

8.3. Recognizer Events

The recognizer may generate the following events.

recognizer-Event = START-OF-SPEECH
                 | RECOGNITION-COMPLETE

8.4. Recognizer Header Fields

A recognizer message may contain header fields containing request
options and information to augment the Method, Response or Event
message it is associated with.

recognizer-header = confidence-threshold       ; Section 8.4.1
                  | sensitivity-level          ; Section 8.4.2
                  | speed-vs-accuracy          ; Section 8.4.3
                  | n-best-list-length         ; Section 8.4.4
                  | no-input-timeout           ; Section 8.4.5
                  | recognition-timeout        ; Section 8.4.6
                  | waveform-url               ; Section 8.4.7
                  | completion-cause           ; Section 8.4.8
                  | recognizer-context-block   ; Section 8.4.9
                  | recognizer-start-timers    ; Section 8.4.10
                  | vendor-specific            ; Section 8.4.11
                  | speech-complete-timeout    ; Section 8.4.12
                  | speech-incomplete-timeout  ; Section 8.4.13
                  | dtmf-interdigit-timeout    ; Section 8.4.14
                  | dtmf-term-timeout          ; Section 8.4.15
                  | dtmf-term-char             ; Section 8.4.16
                  | fetch-timeout              ; Section 8.4.17
                  | failed-uri                 ; Section 8.4.18
                  | failed-uri-cause           ; Section 8.4.19
                  | save-waveform              ; Section 8.4.20
                  | new-audio-channel          ; Section 8.4.21
                  | speech-language            ; Section 8.4.22

Parameter                  Support    Methods/Events
confidence-threshold       MANDATORY  SET-PARAMS, GET-PARAMS,
                                      RECOGNIZE, GET-RESULT
sensitivity-level          Optional   SET-PARAMS, GET-PARAMS,
                                      RECOGNIZE
speed-vs-accuracy          Optional   SET-PARAMS, GET-PARAMS,
                                      RECOGNIZE
n-best-list-length         Optional   SET-PARAMS, GET-PARAMS,
                                      RECOGNIZE, GET-RESULT
no-input-timeout           MANDATORY  SET-PARAMS, GET-PARAMS,
                                      RECOGNIZE
recognition-timeout        MANDATORY  SET-PARAMS, GET-PARAMS,
                                      RECOGNIZE
waveform-url               MANDATORY  RECOGNITION-COMPLETE
completion-cause           MANDATORY  DEFINE-GRAMMAR, RECOGNIZE,
                                      RECOGNITION-COMPLETE
recognizer-context-block   Optional   SET-PARAMS, GET-PARAMS
recognizer-start-timers    MANDATORY  RECOGNIZE
vendor-specific            MANDATORY  SET-PARAMS, GET-PARAMS
speech-complete-timeout    MANDATORY  SET-PARAMS, GET-PARAMS,
                                      RECOGNIZE
speech-incomplete-timeout  MANDATORY  SET-PARAMS, GET-PARAMS,
                                      RECOGNIZE
dtmf-interdigit-timeout    MANDATORY  SET-PARAMS, GET-PARAMS,
                                      RECOGNIZE
dtmf-term-timeout          MANDATORY  SET-PARAMS, GET-PARAMS,
                                      RECOGNIZE
dtmf-term-char             MANDATORY  SET-PARAMS, GET-PARAMS,
                                      RECOGNIZE
fetch-timeout              MANDATORY  SET-PARAMS, GET-PARAMS,
                                      RECOGNIZE, DEFINE-GRAMMAR
failed-uri                 MANDATORY  DEFINE-GRAMMAR response,
                                      RECOGNITION-COMPLETE
failed-uri-cause           MANDATORY  DEFINE-GRAMMAR response,
                                      RECOGNITION-COMPLETE
save-waveform              MANDATORY  SET-PARAMS, GET-PARAMS,
                                      RECOGNIZE
new-audio-channel          MANDATORY  RECOGNIZE
speech-language            MANDATORY  SET-PARAMS, GET-PARAMS,
                                      RECOGNIZE, DEFINE-GRAMMAR

8.4.1. Confidence Threshold

When a recognition resource recognizes or matches a spoken phrase
with some portion of the grammar, it associates a confidence level
with that conclusion. The confidence-threshold parameter tells the
recognizer resource what confidence level should be considered a
successful match. This is an integer from 0-100 indicating the
recognizer's confidence in the recognition. If the recognizer
determines that its confidence in all its recognition results is
less than the confidence threshold, then it MUST return no-match as
the recognition result. This header field MAY occur in RECOGNIZE,
SET-PARAMS or GET-PARAMS. The default value for this field is
platform specific.

confidence-threshold = "Confidence-Threshold" ":" 1*DIGIT CRLF

8.4.2. Sensitivity Level

To filter out background noise and not mistake it for speech, the
recognizer may support a variable level of sound sensitivity. The
sensitivity-level parameter allows the client to set this value on
the recognizer. This header field MAY occur in RECOGNIZE, SET-PARAMS
or GET-PARAMS. A higher value for this field means higher
sensitivity. The default value for this field is platform specific.

sensitivity-level = "Sensitivity-Level" ":" 1*DIGIT CRLF

8.4.3. Speed Vs Accuracy

Depending on the implementation and capability of the recognizer
resource, it may be tunable towards performance or accuracy. Higher
accuracy may mean more processing and higher CPU utilization, and
hence fewer calls per media server, and vice versa.
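The confidence-threshold rule of Section 8.4.1 can be sketched as a small filter over an n-best result list: results below the threshold are dropped, and if nothing survives the recognizer reports no-match. The result representation and function name here are illustrative, not from the specification.

```python
# Sketch: apply the confidence-threshold rule to an n-best list.
# nbest is a list of (text, confidence 0-100) pairs, best first.

def apply_confidence_threshold(nbest, threshold):
    """Return ("success", surviving results) or ("no-match", [])."""
    kept = [(text, conf) for text, conf in nbest if conf >= threshold]
    if not kept:
        # Confidence in all results is below the threshold:
        # the recognizer MUST return no-match.
        return "no-match", []
    return "success", kept
```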
This aspect of the resource can be tuned by the speed-vs-accuracy
header field. This header field MAY occur in RECOGNIZE, SET-PARAMS
or GET-PARAMS. A higher value for this field means higher speed. The
default value for this field is platform specific.

speed-vs-accuracy = "Speed-Vs-Accuracy" ":" 1*DIGIT CRLF

8.4.4. N Best List Length

When the recognizer matches an incoming stream with the grammar, it
may come up with more than one alternative match because of
confidence levels in certain words or conversation paths. If this
header field is not specified, by default, the recognition resource
returns only the best match above the confidence threshold. The
client, by setting this parameter, can ask the recognition resource
to send it more than one alternative. All alternatives must still be
above the confidence-threshold. A value greater than one does not
guarantee that the recognizer will send the requested number of
alternatives. This header field MAY occur in RECOGNIZE, SET-PARAMS
or GET-PARAMS. The minimum value for this field is 1. The default
value for this field is 1.

n-best-list-length = "N-Best-List-Length" ":" 1*DIGIT CRLF

8.4.5. No Input Timeout

When recognition is started and there is no speech detected for a
certain period of time, the recognizer can send a
RECOGNITION-COMPLETE event to the client and terminate the
recognition operation. The no-input-timeout header field sets this
timeout value. The value is in milliseconds. This header field MAY
occur in RECOGNIZE, SET-PARAMS or GET-PARAMS. The value for this
field ranges from 0 to MAXTIMEOUT, where MAXTIMEOUT is platform
specific. The default value for this field is platform specific.

no-input-timeout = "No-Input-Timeout" ":" 1*DIGIT CRLF

8.4.6. Recognition Timeout

When recognition is started and there is no match for a certain
period of time, the recognizer can send a RECOGNITION-COMPLETE event
to the client and terminate the recognition operation. The
recognition-timeout header field sets this timeout value. The value
is in milliseconds. The value for this field ranges from 0 to
MAXTIMEOUT, where MAXTIMEOUT is platform specific. The default value
is 10 seconds. This header field MAY occur in RECOGNIZE, SET-PARAMS
or GET-PARAMS.

recognition-timeout = "Recognition-Timeout" ":" 1*DIGIT CRLF

8.4.7. Waveform URL

If the save-waveform header field is set to true, the recognizer
MUST record the incoming audio stream of the recognition into a file
and provide a URI for the client to access it. This header MUST be
present in the RECOGNITION-COMPLETE event if the save-waveform
header field was set to true. The URL value of the header MUST be
NULL if some error condition prevented the server from recording.
Otherwise, the URL generated by the server SHOULD be globally unique
across the server and all its recognition sessions. The URL SHOULD
be available until the session is torn down.

waveform-url = "Waveform-URL" ":" Url CRLF

8.4.8. Completion Cause

This header field MUST be part of a RECOGNITION-COMPLETE event
coming from the recognizer resource to the client. It indicates the
reason behind the RECOGNIZE method completion. This header field
MUST be sent in the DEFINE-GRAMMAR and RECOGNIZE responses if they
return with a failure status and a COMPLETE state.
completion-cause = "Completion-Cause" ":" 1*DIGIT SP 1*ALPHA CRLF

Cause-Code  Cause-Name               Description
000         success                  RECOGNIZE completed with a
                                     match, or DEFINE-GRAMMAR
                                     succeeded in downloading and
                                     compiling the grammar
001         no-match                 RECOGNIZE completed, but no
                                     match was found
002         no-input-timeout         RECOGNIZE completed without a
                                     match due to a no-input-timeout
003         recognition-timeout      RECOGNIZE completed without a
                                     match due to a
                                     recognition-timeout
004         gram-load-failure        RECOGNIZE failed due to a
                                     grammar load failure
005         gram-comp-failure        RECOGNIZE failed due to a
                                     grammar compilation failure
006         error                    RECOGNIZE request terminated
                                     prematurely due to a recognizer
                                     error
007         speech-too-early         RECOGNIZE request terminated
                                     because speech was too early
008         too-much-speech-timeout  RECOGNIZE request terminated
                                     because speech was too long
009         uri-failure              Failure accessing a URI
010         language-unsupported     Language not supported

8.4.9. Recognizer Context Block

This parameter MAY be sent as part of the SET-PARAMS or GET-PARAMS
request. If the GET-PARAMS method contains this header field with no
value, it is a request to the recognizer to return the recognizer
context block. The response to such a message MAY contain a
recognizer context block as a message entity. If the server returns
a recognizer context block, the response MUST contain this header
field and its value MUST match the content-id of that entity.

If the SET-PARAMS method contains this header field, it MUST also
contain a message entity carrying the recognizer context data and a
content-id matching this header field. This content-id should match
the content-id that came with the context data during the GET-PARAMS
operation.

recognizer-context-block = "Recognizer-Context-Block" ":" 1*ALPHA
                           CRLF

8.4.10. Recognition Start Timers

This parameter MAY be sent as part of the RECOGNIZE request.
A value of false tells the recognizer to start recognition but not
to start the no-input timer yet. The recognizer should not start the
timers until the client sends a RECOGNITION-START-TIMERS request to
the recognizer. This is useful in the scenario where the recognizer
and synthesizer engines are not part of the same session. Here, when
a kill-on-barge-in prompt is being played, the client wants the
RECOGNIZE request to be simultaneously active so that it can detect
and implement kill-on-barge-in, but does not want the recognizer to
start the no-input timers until the prompt is finished. The default
value is "true".

recognizer-start-timers = "Recognizer-Start-Timers" ":"
                          boolean-value CRLF

8.4.11. Vendor Specific Parameters

This set of header fields allows the client to set vendor-specific
parameters.

vendor-specific = "Vendor-Specific-Parameters" ":"
                  vendor-specific-av-pair
                  *[";" vendor-specific-av-pair] CRLF

vendor-specific-av-pair = vendor-av-pair-name "="
                          vendor-av-pair-value

This header field can be sent in the SET-PARAMS method and is used
to set vendor-specific parameters on the server. The
vendor-av-pair-name can be any vendor-specific field name and
conforms to the XML vendor-specific attribute naming convention. The
vendor-av-pair-value is the value to set the attribute to, and needs
to be quoted. When asking the server for the current values of these
parameters, this header field can be sent in the GET-PARAMS method
with the list of vendor-specific attribute names to get, separated
by a semicolon. This header field MAY occur in SET-PARAMS or
GET-PARAMS.

8.4.12. Speech Complete Timeout

This header field specifies the length of silence required following
user speech before the speech recognizer finalizes a result (either
accepting it or throwing a nomatch event).
The speech-complete-timeout value is used when the recognizer
currently has a complete match of an active grammar, and specifies
how long it should wait for more input before declaring a match. By
contrast, the incomplete timeout is used when the speech is an
incomplete match to an active grammar. The value is in milliseconds.

speech-complete-timeout = "Speech-Complete-Timeout" ":" 1*DIGIT CRLF

A long speech-complete-timeout value delays the result completion
and therefore makes the computer's response slow. A short
speech-complete-timeout may lead to an utterance being broken up
inappropriately. Reasonable complete timeout values are typically in
the range of 0.3 seconds to 1.0 seconds. The value for this field
ranges from 0 to MAXTIMEOUT, where MAXTIMEOUT is platform specific.
The default value for this field is platform specific. This header
field MAY occur in RECOGNIZE, SET-PARAMS or GET-PARAMS.

8.4.13. Speech Incomplete Timeout

This header field specifies the required length of silence following
user speech after which a recognizer finalizes a result. The
incomplete timeout applies when the speech prior to the silence is
an incomplete match of all active grammars. In this case, once the
timeout is triggered, the partial result is rejected (with a nomatch
event). The value is in milliseconds. The value for this field
ranges from 0 to MAXTIMEOUT, where MAXTIMEOUT is platform specific.
The default value for this field is platform specific.

speech-incomplete-timeout = "Speech-Incomplete-Timeout" ":" 1*DIGIT
                            CRLF

The speech-incomplete-timeout also applies when the speech prior to
the silence is a complete match of an active grammar, but where it
is possible to speak further and still match the grammar. By
contrast, the complete timeout is used when the speech is a complete
match to an active grammar and no further words can be spoken.

A long speech-incomplete-timeout value delays the result completion
and therefore makes the computer's response slow. A short
speech-incomplete-timeout may lead to an utterance being broken up
inappropriately.

The speech-incomplete-timeout is usually longer than the
speech-complete-timeout to allow users to pause mid-utterance (for
example, to breathe).

This header field MAY occur in RECOGNIZE, SET-PARAMS or GET-PARAMS.

8.4.14. DTMF Interdigit Timeout

This header field specifies the inter-digit timeout value to use
when recognizing DTMF input. The value is in milliseconds. The value
for this field ranges from 0 to MAXTIMEOUT, where MAXTIMEOUT is
platform specific. The default value is 5 seconds. This header field
MAY occur in RECOGNIZE, SET-PARAMS or GET-PARAMS.

dtmf-interdigit-timeout = "DTMF-Interdigit-Timeout" ":" 1*DIGIT CRLF

8.4.15. DTMF Term Timeout

This header field specifies the terminating timeout to use when
recognizing DTMF input. The value is in milliseconds. The value for
this field ranges from 0 to MAXTIMEOUT, where MAXTIMEOUT is platform
specific. The default value is 10 seconds. This header field MAY
occur in RECOGNIZE, SET-PARAMS or GET-PARAMS.

dtmf-term-timeout = "DTMF-Term-Timeout" ":" 1*DIGIT CRLF

8.4.16. DTMF-Term-Char

This header field specifies the terminating DTMF character for DTMF
input recognition. The default value is NULL, which is specified as
an empty header field. This header field MAY occur in RECOGNIZE,
SET-PARAMS or GET-PARAMS.

dtmf-term-char = "DTMF-Term-Char" ":" CHAR CRLF

8.4.17. Fetch Timeout

When the recognizer needs to fetch grammar documents, this header
field controls URI access properties. It defines the recognizer
timeout for completing the fetch of the resources the media server
needs from the network. The value is in milliseconds. The value for
this field ranges from 0 to MAXTIMEOUT, where MAXTIMEOUT is platform
specific.
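The interaction between the two silence timers of Sections 8.4.12 and 8.4.13 can be sketched as a small selection rule: which timeout finalizes a result depends on whether the speech so far is a complete match and whether the grammar allows further input. The function and flag names below are illustrative, not part of the protocol.

```python
# Sketch: choose which silence timeout (ms) finalizes a recognition
# result, per the speech-complete/speech-incomplete timeout rules.

def silence_timeout_ms(complete_match: bool, can_continue: bool,
                       speech_complete_timeout: int,
                       speech_incomplete_timeout: int) -> int:
    """Return the silence duration after which the result finalizes."""
    if complete_match and not can_continue:
        # Complete match and no further words can be spoken:
        # the (shorter) complete timeout applies.
        return speech_complete_timeout
    # Incomplete match, or a complete match that could still be
    # extended: the (usually longer) incomplete timeout applies.
    return speech_incomplete_timeout
```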
The default value for this field is platform specific. This header
field MAY occur in RECOGNIZE, SET-PARAMS or GET-PARAMS.

fetch-timeout = "Fetch-Timeout" ":" 1*DIGIT CRLF

8.4.18. Failed URI

When a recognizer method needs the recognizer to fetch or access a
URI and the access fails, the media server SHOULD provide the failed
URI in this header field in the method response.

failed-uri = "Failed-URI" ":" Url CRLF

8.4.19. Failed URI Cause

When a recognizer method needs the recognizer to fetch or access a
URI and the access fails, the media server SHOULD provide the URI-
or protocol-specific response code through this header field in the
method response. This field has been defined as alphanumeric to
accommodate all protocols, some of which might have a response
string instead of a numeric response code.

failed-uri-cause = "Failed-URI-Cause" ":" 1*ALPHA CRLF

8.4.20. Save Waveform

This header field allows the client to indicate to the recognizer
that it MUST save the audio stream that was recognized. The
recognizer MUST then record the recognized audio and make it
available to the client in the form of a URI returned in the
waveform-url header field in the RECOGNITION-COMPLETE event. If
there was an error in recording the stream, or the audio clip is
otherwise not available, the recognizer MUST return an empty
waveform-url header field. The default value for this field is
"false".

save-waveform = "Save-Waveform" ":" boolean-value CRLF

8.4.21. New Audio Channel

This header field MAY be specified in a RECOGNIZE message and allows
the client to tell the media server that, from this point on, it
will be sending audio data from a new audio source, channel or
speaker. If the recognition resource had collected any line
statistics or information, it MUST discard them and start afresh for
this RECOGNIZE.
This helps in the case where the client may want to reuse an open
recognition session with the media server for multiple telephone
calls.

new-audio-channel = "New-Audio-Channel" ":" boolean-value CRLF

8.4.22. Speech Language

This header field specifies the language of the recognition grammar
data within a session or request, if it is not specified within the
data. The value of this header field should follow RFC 1766. This
header field MAY occur in DEFINE-GRAMMAR, RECOGNIZE, SET-PARAMS or
GET-PARAMS requests.

speech-language = "Speech-Language" ":" 1*ALPHA CRLF

8.5. Recognizer Message Body

A recognizer message may carry additional data associated with the
method, response or event. The client may send the grammar to be
recognized in DEFINE-GRAMMAR or RECOGNIZE requests. When the grammar
is sent in the DEFINE-GRAMMAR method, the server should be able to
download, compile and optimize the grammar. The RECOGNIZE request
MUST contain a list of grammars that need to be active during the
recognition. The server resource may send the recognition results in
the RECOGNITION-COMPLETE event or the GET-RESULT response. This data
is carried in the message body of the corresponding MRCPv2 message.

8.5.1. Recognizer Grammar Data

Recognizer grammar data from the client to the server can be
provided inline or by reference. Either way, it is carried as MIME
entities in the message body of the MRCPv2 request message. The
grammar, whether specified inline or by reference, is the grammar to
match against in the recognition process, and is specified in one of
the standard grammar specification formats such as the W3C's XML or
ABNF forms, or Sun's Java Speech Grammar Format. All media servers
MUST support the W3C's XML-based grammar markup format [12]
(MIME-type application/grammar+xml) and SHOULD support the ABNF form
(MIME-type application/grammar).
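As an illustration only (not part of the specification), the following
Python sketch shows one way a client might frame an inline grammar as
the MIME entity carried in a DEFINE-GRAMMAR or RECOGNIZE body. The
header names come from this document; the helper function itself and
the example grammar string are hypothetical.

```python
def make_grammar_entity(grammar_xml: str, content_id: str) -> str:
    """Wrap an inline XML grammar as a MIME entity for an MRCPv2 body."""
    body_len = len(grammar_xml.encode("utf-8"))
    headers = [
        "Content-Type: application/grammar+xml",
        f"Content-Id: {content_id}",
        f"Content-Length: {body_len}",
    ]
    # Headers, a blank line, then the entity body.
    return "\r\n".join(headers) + "\r\n\r\n" + grammar_xml

entity = make_grammar_entity("<grammar/>", "request1@form-level.store")
```

A grammar stored this way can later be referenced with the
"session:request1@form-level.store" URI instead of being resent.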
When a grammar is specified inline in the message, the client MUST
provide a content-id for that grammar as part of the content
headers. The server MUST store the grammar associated with that
content-id for the duration of the session. A stored grammar can be
overwritten by defining a new grammar with the same content-id.
Grammars that have been associated with a content-id can be
referenced through a special "session:" URI scheme. Example:

session:help@root-level.store

If grammar data needs to be specified by external URI reference, the
MIME-type text/uri-list is used to list the one or more URIs that
specify the grammar data. All media servers MUST support the HTTP
URI access mechanism.

If the data to be defined consists of a mix of URI and inline
grammar data, the multipart/mixed MIME-type is used to embed the
MIME blocks for text/uri-list, application/grammar or
application/grammar+xml. The character set and encoding used in the
grammar data may be specified according to standard MIME-type
definitions.

When more than one grammar URI or inline grammar block is specified
in the message body of the RECOGNIZE request, it is an active list
of grammar alternatives to listen for. The ordering of the list
implies the precedence of the grammars, with the first grammar in
the list having the highest precedence.

Example 1:
Content-Type: application/grammar+xml
Content-Id: request1@form-level.store
Content-Length: 104

<?xml version="1.0"?>
<grammar xml:lang="en-US" version="1.0" root="request">
   <rule id="yes">
      <one-of>
         <item xml:lang="fr-CA">oui</item>
         <item xml:lang="en-US">yes</item>
      </one-of>
   </rule>
   <rule id="request">
      may I speak to
      <one-of xml:lang="fr-CA">
         <item>Michel Tremblay</item>
         <item>Andre Roy</item>
      </one-of>
   </rule>
   <rule id="people1">
      <token> Robert </token>
   </rule>
   <rule id="people2">
      <one-of>
         <item xml:lang="en-US">Robert</item>
         <item xml:lang="fr-CA">Robert</item>
      </one-of>
   </rule>
</grammar>
Example 2:
Content-Type: text/uri-list
Content-Length: 176

session:help@root-level.store
http://www.example.com/Directory-Name-List.grxml
http://www.example.com/Department-List.grxml
http://www.example.com/TAC-Contact-List.grxml
session:menu1@menu-level.store

Example 3:
Content-Type: multipart/mixed; boundary="--break"

--break
Content-Type: text/uri-list
Content-Length: 176

http://www.example.com/Directory-Name-List.grxml
http://www.example.com/Department-List.grxml
http://www.example.com/TAC-Contact-List.grxml
--break
Content-Type: application/grammar+xml
Content-Id: request1@form-level.store
Content-Length: 104

<?xml version="1.0"?>
<grammar xml:lang="en-US" version="1.0" root="request">
   <rule id="yes">
      <one-of>
         <item xml:lang="fr-CA">oui</item>
         <item xml:lang="en-US">yes</item>
      </one-of>
   </rule>
   <rule id="request">
      may I speak to
      <one-of xml:lang="fr-CA">
         <item>Michel Tremblay</item>
         <item>Andre Roy</item>
      </one-of>
   </rule>
   <rule id="people1">
      <token> Robert </token>
   </rule>
   <rule id="people2">
      <one-of>
         <item xml:lang="en-US">Robert</item>
         <item xml:lang="fr-CA">Robert</item>
      </one-of>
   </rule>
</grammar>
--break

8.5.2. Recognizer Result Data

Recognition result data from the server is carried in the MRCPv2
message body of the RECOGNITION-COMPLETE event or the GET-RESULT
response message as MIME entities. All media servers MUST support
W3C's Natural Language Semantics Markup Language (NLSML) [11] as the
default standard for returning recognition results back to the
client, and hence MUST support the MIME-type application/x-nlsml.

Example 1:
Content-Type: application/x-nlsml
Content-Length: 104

<?xml version="1.0"?>
<result>
   <interpretation>
      <instance name="Person">
         <Person>
            <Name> Andre Roy </Name>
         </Person>
      </instance>
      <input> may I speak to Andre Roy </input>
   </interpretation>
</result>

C->S:DEFINE-GRAMMAR 543257 MRCP/2.0
Channel-Identifier: 32AECB23433801
Content-Type: application/grammar+xml
Content-Id: request1@form-level.store
Content-Length: 104

<?xml version="1.0"?>
<grammar xml:lang="en-US" version="1.0" root="request">
   <rule id="yes">
      <one-of>
         <item xml:lang="fr-CA">oui</item>
         <item xml:lang="en-US">yes</item>
      </one-of>
   </rule>
   <rule id="request">
      may I speak to
      <one-of xml:lang="fr-CA">
         <item>Michel Tremblay</item>
         <item>Andre Roy</item>
      </one-of>
   </rule>
</grammar>

S->C:MRCP/2.0 543257 200 COMPLETE
Channel-Identifier: 32AECB23433801
Completion-Cause: 000 success

C->S:DEFINE-GRAMMAR 543258 MRCP/2.0
Channel-Identifier: 32AECB23433801
Content-Type: application/grammar+xml
Content-Id: helpgrammar@root-level.store
Content-Length: 104

<?xml version="1.0"?>
<grammar xml:lang="en-US" version="1.0" root="request">
   <rule id="request">
      I need help
   </rule>
</grammar>

S->C:MRCP/2.0 543258 200 COMPLETE
Channel-Identifier: 32AECB23433801
Completion-Cause: 000 success

C->S:DEFINE-GRAMMAR 543259 MRCP/2.0
Channel-Identifier: 32AECB23433801
Content-Type: application/grammar+xml
Content-Id: request2@field-level.store
Content-Length: 104

<?xml version="1.0"?>
<grammar xml:lang="en-US" version="1.0" root="request">
   <rule id="request">
      please
      <one-of>
         <item>move the window</item>
         <item>open a file</item>
      </one-of>
   </rule>
   <rule id="command">
      <one-of>
         <item>open</item>
         <item>close</item>
         <item>delete</item>
         <item>move</item>
      </one-of>
      the
      <one-of>
         <item>a window</item>
         <item>file</item>
         <item>menu</item>
      </one-of>
   </rule>
</grammar>

S->C:MRCP/2.0 543259 200 COMPLETE
Channel-Identifier: 32AECB23433801
Completion-Cause: 000 success

C->S:RECOGNIZE 543260 MRCP/2.0
Channel-Identifier: 32AECB23433801
N-Best-List-Length: 2
Content-Type: text/uri-list
Content-Length: 176

session:request1@form-level.store
session:request2@field-level.store
session:helpgrammar@root-level.store

S->C:MRCP/2.0 543260 200 IN-PROGRESS
Channel-Identifier: 32AECB23433801

S->C:START-OF-SPEECH 543260 IN-PROGRESS MRCP/2.0
Channel-Identifier: 32AECB23433801

S->C:RECOGNITION-COMPLETE 543260 COMPLETE MRCP/2.0
Channel-Identifier: 32AECB23433801
Completion-Cause: 000 success
Waveform-URL: http://web.media.com/session123/audio.wav
Content-Type: application/x-nlsml
Content-Length: 276

<?xml version="1.0"?>
<result>
   <interpretation>
      <instance name="Person">
         <Person>
            <Name> Andre Roy </Name>
         </Person>
      </instance>
      <input> may I speak to Andre Roy </input>
   </interpretation>
</result>

8.9. RECOGNIZE

The RECOGNIZE method from the client to the server tells the
recognizer to start recognition and provides it with a grammar to
match against. The RECOGNIZE method can carry parameters to control
the sensitivity, confidence level and the level of detail in results
provided by the recognizer. These parameters override the current
defaults set by a previous SET-PARAMS method.

If the resource is in the recognition state, the server MUST respond
to the RECOGNIZE request with a failure status.

If the resource is in the idle state and was able to successfully
start the recognition, the server MUST return a success code and a
request-state of IN-PROGRESS. This means that the recognizer is
active and that the client should expect further events with this
request-id.

If the resource could not start a recognition, it MUST return a
failure status code of 407, and the response MUST contain a
completion-cause header field describing the cause of failure.
For the recognizer resource, this is the only request that can
return a request-state of IN-PROGRESS, meaning that recognition is
in progress. When the recognition completes, by matching one of the
grammar alternatives, by a time-out without a match, or for some
other reason, the recognizer resource MUST send the client a
RECOGNITION-COMPLETE event with the result of the recognition and a
request-state of COMPLETE.

For large grammars that can take a long time to compile, and for
grammars that are used repeatedly, the client could issue a
DEFINE-GRAMMAR request with the grammar ahead of time. In such a
case the client can issue the RECOGNIZE request and reference the
grammar through the "session:" special URI. This also applies in
general if the client wants to restart recognition with a previous
inline grammar.

Note that since the audio and the messages are carried over separate
communication paths, there may be a race condition between the start
of the flow of audio and the receipt of the RECOGNIZE method. For
example, if audio flow is started by the client at the same time as
the RECOGNIZE method is sent, either the audio or the RECOGNIZE will
arrive at the recognizer first. As another example, the client may
choose to continuously send audio to the media server and signal the
media server to recognize using the RECOGNIZE method. A number of
mechanisms exist to resolve this condition, and the mechanism chosen
is left to the implementers of recognizer media servers.

Example:
C->S:RECOGNIZE 543257 MRCP/2.0
Channel-Identifier: 32AECB23433801
Confidence-Threshold: 90
Content-Type: application/grammar+xml
Content-Id: request1@form-level.store
Content-Length: 104

<?xml version="1.0"?>
<grammar xml:lang="en-US" version="1.0" root="request">
   <rule id="yes">
      <one-of>
         <item xml:lang="fr-CA">oui</item>
         <item xml:lang="en-US">yes</item>
      </one-of>
   </rule>
   <rule id="request">
      may I speak to
      <one-of xml:lang="fr-CA">
         <item>Michel Tremblay</item>
         <item>Andre Roy</item>
      </one-of>
   </rule>
</grammar>

S->C:MRCP/2.0 543257 200 IN-PROGRESS
Channel-Identifier: 32AECB23433801

S->C:START-OF-SPEECH 543257 IN-PROGRESS MRCP/2.0
Channel-Identifier: 32AECB23433801

S->C:RECOGNITION-COMPLETE 543257 COMPLETE MRCP/2.0
Channel-Identifier: 32AECB23433801
Completion-Cause: 000 success
Waveform-URL: http://web.media.com/session123/audio.wav
Content-Type: application/x-nlsml
Content-Length: 276

<?xml version="1.0"?>
<result>
   <interpretation>
      <instance name="Person">
         <Person>
            <Name> Andre Roy </Name>
         </Person>
      </instance>
      <input> may I speak to Andre Roy </input>
   </interpretation>
</result>

8.10. STOP

The STOP method from the client to the server tells the resource to
stop recognition if one is active. If a RECOGNIZE request is active
and the STOP request successfully terminated it, then the response
header contains an active-request-id-list header field containing
the request-id of the RECOGNIZE request that was terminated. In this
case, no RECOGNITION-COMPLETE event will be sent for the terminated
request. If there was no recognition active, then the response MUST
NOT contain an active-request-id-list header field. Either way, the
response MUST contain a status of 200 (Success).

Example:
C->S:RECOGNIZE 543257 MRCP/2.0
Channel-Identifier: 32AECB23433801
Confidence-Threshold: 90
Content-Type: application/grammar+xml
Content-Id: request1@form-level.store
Content-Length: 104

<?xml version="1.0"?>
<grammar xml:lang="en-US" version="1.0" root="request">
   <rule id="yes">
      <one-of>
         <item xml:lang="fr-CA">oui</item>
         <item xml:lang="en-US">yes</item>
      </one-of>
   </rule>
   <rule id="request">
      may I speak to
      <one-of xml:lang="fr-CA">
         <item>Michel Tremblay</item>
         <item>Andre Roy</item>
      </one-of>
   </rule>
</grammar>

S->C:MRCP/2.0 543257 200 IN-PROGRESS
Channel-Identifier: 32AECB23433801

C->S:STOP 543258 MRCP/2.0
Channel-Identifier: 32AECB23433801

S->C:MRCP/2.0 543258 200 COMPLETE
Channel-Identifier: 32AECB23433801
Active-Request-Id-List: 543257

8.11. GET-RESULT

The GET-RESULT method from the client to the server can be issued
when the recognizer is in the recognized state. This request allows
the client to retrieve results for a completed recognition. This is
useful if the client decides it wants more alternatives or more
information. When the media server receives this request, it should
re-compute and return the results according to the recognition
constraints provided in the GET-RESULT request. The GET-RESULT
request could specify constraints like a different
confidence-threshold or n-best-list-length. This feature is
optional, and the automatic speech recognition (ASR) engine may
return a status of unsupported feature.

Example:
C->S:GET-RESULT 543257 MRCP/2.0
Channel-Identifier: 32AECB23433801
Confidence-Threshold: 90

S->C:MRCP/2.0 543257 200 COMPLETE
Channel-Identifier: 32AECB23433801
Content-Type: application/x-nlsml
Content-Length: 276

<?xml version="1.0"?>
<result>
   <interpretation>
      <instance name="Person">
         <Person>
            <Name> Andre Roy </Name>
         </Person>
      </instance>
      <input> may I speak to Andre Roy </input>
   </interpretation>
</result>

8.12. START-OF-SPEECH

This is an event from the recognizer to the client indicating that
it has detected speech. This event is useful in implementing
kill-on-barge-in scenarios when the synthesizer resource is in a
different session than the recognizer resource and hence is not
aware of an incoming audio source. In these cases, it is up to the
client to act as a proxy and issue the BARGE-IN-OCCURRED method to
the synthesizer resource. The recognizer resource also sends a
unique proxy-sync-id in the header for this event, which the client
passes to the synthesizer in the BARGE-IN-OCCURRED method. This
event should be generated irrespective of whether the synthesizer
and recognizer are in the same media server or not.

8.13. RECOGNITION-START-TIMERS

This request is sent from the client to the recognition resource
when it knows that a kill-on-barge-in prompt has finished playing.
This is useful in the scenario where the recognition and synthesizer
engines are not in the same session. Here, when a kill-on-barge-in
prompt is being played, you want the RECOGNIZE request to be
simultaneously active so that it can detect and implement
kill-on-barge-in.
But at the same time you don't want the recognizer to start the
no-input timers until the prompt is finished. The
recognizer-start-timers header field in the RECOGNIZE request allows
the client to say whether the timers should be started or not. The
recognizer should not start the timers until the client sends a
RECOGNITION-START-TIMERS method to the recognizer.

8.14. RECOGNITION-COMPLETE

This is an event from the recognizer resource to the client
indicating that the recognition completed. The recognition result is
sent in the MRCPv2 body of the message. The request-state field MUST
be COMPLETE, indicating that this is the last event with that
request-id, and that the request with that request-id is now
complete. The recognizer context still holds the results and the
audio waveform input of that recognition until the next RECOGNIZE
request is issued. A URL to the audio waveform MAY be returned to
the client in a waveform-url header field in the
RECOGNITION-COMPLETE event. The client can use this URL to retrieve
or play back the audio.

Example:
C->S:RECOGNIZE 543257 MRCP/2.0
Channel-Identifier: 32AECB23433801
Confidence-Threshold: 90
Content-Type: application/grammar+xml
Content-Id: request1@form-level.store
Content-Length: 104

<?xml version="1.0"?>
<grammar xml:lang="en-US" version="1.0" root="request">
   <rule id="yes">
      <one-of>
         <item xml:lang="fr-CA">oui</item>
         <item xml:lang="en-US">yes</item>
      </one-of>
   </rule>
   <rule id="request">
      may I speak to
      <one-of xml:lang="fr-CA">
         <item>Michel Tremblay</item>
         <item>Andre Roy</item>
      </one-of>
   </rule>
</grammar>

S->C:MRCP/2.0 543257 200 IN-PROGRESS
Channel-Identifier: 32AECB23433801

S->C:START-OF-SPEECH 543257 IN-PROGRESS MRCP/2.0
Channel-Identifier: 32AECB23433801

S->C:RECOGNITION-COMPLETE 543257 COMPLETE MRCP/2.0
Channel-Identifier: 32AECB23433801
Completion-Cause: 000 success
Waveform-URL: http://web.media.com/session123/audio.wav
Content-Type: application/x-nlsml
Content-Length: 276

<?xml version="1.0"?>
<result>
   <interpretation>
      <instance name="Person">
         <Person>
            <Name> Andre Roy </Name>
         </Person>
      </instance>
      <input> may I speak to Andre Roy </input>
   </interpretation>
</result>

8.15. DTMF Detection

Digits received as DTMF tones are delivered to the automatic speech
recognition (ASR) engine in the RTP stream according to RFC 2833.
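For illustration, the following sketch decodes the 4-byte RFC 2833
telephone-event payload in which those DTMF digits arrive: one byte of
event code, one byte holding the end bit, a reserved bit and the
volume, and a two-byte duration in network byte order. The decoder
itself is a non-normative example, not something this specification
defines.

```python
import struct

# Event codes 0-15 map to the sixteen DTMF symbols (RFC 2833).
DTMF_EVENTS = "0123456789*#ABCD"

def decode_telephone_event(payload: bytes) -> dict:
    """Decode one RFC 2833 telephone-event payload."""
    event, flags, duration = struct.unpack("!BBH", payload[:4])
    return {
        "digit": DTMF_EVENTS[event] if event < 16 else None,
        "end": bool(flags & 0x80),   # E bit: end of the event
        "volume": flags & 0x3F,      # low 6 bits: volume in -dBm0
        "duration": duration,        # in RTP timestamp units
    }

# A '5' digit with the end bit set, volume 10, duration 800 units:
info = decode_telephone_event(bytes([5, 0x80 | 10, 0x03, 0x20]))
```

A recognizer without such a decoder would instead have to run tone
detection directly on the decoded audio samples.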
The automatic speech recognizer (ASR) needs to support RFC 2833 to
recognize digits. If it does not support RFC 2833, it will have to
process the audio stream and extract the tones from it.

9. Examples:

The following is an example of a typical MRCPv2 session of speech
synthesis and recognition between a client and a server.

Opening a session to the MRCPv2 server. This exchange does not
allocate a resource or set up media. It simply establishes a SIP
session with the MRCPv2 server.

C->S: INVITE sip:mresources@mediaserver.com SIP/2.0
Max-Forwards: 70
To: MediaServer
From: sarvi ;tag=1928301774
Call-ID: a84b4c76e66710
CSeq: 314159 INVITE
Contact:
Content-Type: application/sdp
Content-Length: 142

v=0
o=sarvi 2890844526 2890842807 IN IP4 126.16.64.4
s=SDP Seminar
i=A session for processing media
c=IN IP4 224.2.17.12/127

S->C: SIP/2.0 200 OK
To: MediaServer
From: sarvi ;tag=1928301774
Call-ID: a84b4c76e66710
CSeq: 314159 INVITE
Contact:
Content-Type: application/sdp
Content-Length: 131

v=0
o=sarvi 2890844526 2890842807 IN IP4 126.16.64.4
s=SDP Seminar
i=A session for processing media
c=IN IP4 224.2.17.12/127

C->S: ACK sip:mrcp@mediaserver.com SIP/2.0
Max-Forwards: 70
To: MediaServer ;tag=a6c85cf
From: Sarvi ;tag=1928301774
Call-ID: a84b4c76e66710
CSeq: 314160 ACK
Content-Length: 0

The client requests the server to create a synthesizer resource
control channel to do speech synthesis. This also adds a media pipe
to send the generated speech.
C->S: INVITE sip:mresources@mediaserver.com SIP/2.0
Max-Forwards: 70
To: MediaServer
From: sarvi ;tag=1928301774
Call-ID: a84b4c76e66710
CSeq: 314161 INVITE
Contact:
Content-Type: application/sdp
Content-Length: 142

v=0
o=sarvi 2890844526 2890842808 IN IP4 126.16.64.4
s=SDP Seminar
i=A session for processing media
c=IN IP4 224.2.17.12/127
m=control 0 mrcp 02
m=audio 49170 RTP/AVP 0 96
a=rtpmap:0 pcmu/8000
a=recvonly

S->C: SIP/2.0 200 OK
To: MediaServer
From: sarvi ;tag=1928301774
Call-ID: a84b4c76e66710
CSeq: 314161 INVITE
Contact:
Content-Type: application/sdp
Content-Length: 131

v=0
o=sarvi 2890844526 2890842808 IN IP4 126.16.64.4
s=SDP Seminar
i=A session for processing media
c=IN IP4 224.2.17.12/127
m=control 32416 mrcp 32AECB23433802
m=audio 48260 RTP/AVP 0
a=rtpmap:0 pcmu/8000
a=sendonly

C->S: ACK sip:mrcp@mediaserver.com SIP/2.0
Max-Forwards: 70
To: MediaServer ;tag=a6c85cf
From: Sarvi ;tag=1928301774
Call-ID: a84b4c76e66710
CSeq: 314162 ACK
Content-Length: 0

This exchange allocates an additional resource control channel for a
recognizer. Since a recognizer would need to receive an audio stream
for recognition, this interaction also updates the audio stream to
sendrecv, making it a 2-way audio stream.
C->S: INVITE sip:mresources@mediaserver.com SIP/2.0
Max-Forwards: 70
To: MediaServer
From: sarvi ;tag=1928301774
Call-ID: a84b4c76e66710
CSeq: 314163 INVITE
Contact:
Content-Type: application/sdp
Content-Length: 142

v=0
o=sarvi 2890844526 2890842809 IN IP4 126.16.64.4
s=SDP Seminar
i=A session for processing media
c=IN IP4 224.2.17.12/127
m=control 0 mrcp 01
m=control 0 mrcp 02
m=audio 49170 RTP/AVP 0 96
a=rtpmap:0 pcmu/8000
a=rtpmap:96 telephone-event/8000
a=fmtp:96 0-15
a=sendrecv

S->C: SIP/2.0 200 OK
To: MediaServer
From: sarvi ;tag=1928301774
Call-ID: a84b4c76e66710
CSeq: 314163 INVITE
Contact:
Content-Type: application/sdp
Content-Length: 131

v=0
o=sarvi 2890844526 2890842809 IN IP4 126.16.64.4
s=SDP Seminar
i=A session for processing media
c=IN IP4 224.2.17.12/127
m=control 32416 mrcp 32AECB23433801
m=control 32416 mrcp 32AECB23433802
m=audio 48260 RTP/AVP 0
a=rtpmap:0 pcmu/8000
a=rtpmap:96 telephone-event/8000
a=fmtp:96 0-15
a=sendrecv

C->S: ACK sip:mrcp@mediaserver.com SIP/2.0
Max-Forwards: 70
To: MediaServer ;tag=a6c85cf
From: Sarvi ;tag=1928301774
Call-ID: a84b4c76e66710
CSeq: 314164 ACK
Content-Length: 0

An MRCPv2 SPEAK request initiates speech.

C->S:SPEAK 543257 MRCP/2.0
Channel-Identifier: 32AECB23433802
Kill-On-Barge-In: false
Voice-gender: neutral
Voice-category: teenager
Prosody-volume: medium
Content-Type: application/synthesis+ssml
Content-Length: 104

<?xml version="1.0"?>
<speak>
   <paragraph>
      <sentence>You have 4 new messages.</sentence>
      <sentence>The first is from <mark name="Stephanie"/>Stephanie
      Williams and arrived at 3:45pm.</sentence>
      <sentence>The subject is ski trip</sentence>
   </paragraph>
</speak>

S->C:MRCP/2.0 543257 200 IN-PROGRESS
Channel-Identifier: 32AECB23433802

The synthesizer hits the special marker in the message to be spoken
and faithfully informs the client of the event.

S->C:SPEECH-MARKER 543257 IN-PROGRESS MRCP/2.0
Channel-Identifier: 32AECB23433802
Speech-Marker: Stephanie

The synthesizer finishes with the SPEAK request.
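The responses and events in the flow above can be told apart purely by
their start-lines: per the ABNF in the appendix, a status-line begins
with the version token, while an event-line begins with the event
name. A small illustrative (non-normative) Python sketch of that
client-side classification:

```python
def classify_start_line(line: str):
    """Classify an MRCPv2 server-to-client start-line.

    status-line: mrcp-version SP request-id SP status-code SP request-state
    event-line:  event-name SP request-id SP request-state SP mrcp-version
    """
    parts = line.split()
    if parts[0].startswith("MRCP/"):
        # A response: ("response", request-id, status-code, request-state)
        return ("response", int(parts[1]), int(parts[2]), parts[3])
    # Otherwise an event such as SPEECH-MARKER or SPEAK-COMPLETE:
    # ("event", event-name, request-id, request-state)
    return ("event", parts[0], int(parts[1]), parts[2])

resp = classify_start_line("MRCP/2.0 543257 200 IN-PROGRESS")
evt = classify_start_line("SPEECH-MARKER 543257 IN-PROGRESS MRCP/2.0")
```

The request-id ties each event back to the request that started it,
which is how the client correlates SPEECH-MARKER and SPEAK-COMPLETE
with the original SPEAK.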
S->C:SPEAK-COMPLETE 543257 COMPLETE MRCP/2.0
Channel-Identifier: 32AECB23433802

The recognizer is issued a request to listen for the customer
choices.

C->S:RECOGNIZE 543258 MRCP/2.0
Channel-Identifier: 32AECB23433801
Content-Type: application/grammar+xml
Content-Length: 104

<?xml version="1.0"?>
<grammar xml:lang="en-US" version="1.0" root="request">
   <rule id="request">
      Can I speak to
      <one-of>
         <item>Michel Tremblay</item>
         <item>Andre Roy</item>
      </one-of>
   </rule>
</grammar>

S->C:MRCP/2.0 543258 200 IN-PROGRESS
Channel-Identifier: 32AECB23433801

The client issues the next MRCPv2 SPEAK method. When playing a
prompt to the user with kill-on-barge-in and asking for input, it is
generally RECOMMENDED that the client issue the RECOGNIZE request
ahead of the SPEAK request for optimum performance and user
experience. This way, it is guaranteed that the recognizer is online
before the prompt starts playing and the user's speech will not be
truncated at the beginning (especially for power users).

C->S:SPEAK 543259 MRCP/2.0
Channel-Identifier: 32AECB23433802
Kill-On-Barge-In: true
Content-Type: application/synthesis+ssml
Content-Length: 104

<?xml version="1.0"?>
<speak>
   <paragraph>
      <sentence>Welcome to ABC corporation.</sentence>
      <sentence>Who would you like to talk to?</sentence>
   </paragraph>
</speak>

S->C:MRCP/2.0 543259 200 IN-PROGRESS
Channel-Identifier: 32AECB23433802

Since the last SPEAK request had Kill-On-Barge-In set to "true", the
synthesizer is interrupted when the user starts speaking, and the
client is notified.

Now, since the recognition and synthesizer resources are in the same
session, they worked with each other to deliver kill-on-barge-in. If
the resources were in different sessions, it would have taken a few
more messages before the client got the SPEAK-COMPLETE event from
the synthesizer resource. Whether the synthesizer and recognizer are
in the same session or not, the recognizer MUST generate the
START-OF-SPEECH event to the client. The client would then have
blindly turned around and issued a BARGE-IN-OCCURRED method to the
synthesizer resource.
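The client-side proxying just described can be sketched as follows.
This is a hypothetical illustration: the send_to_synthesizer callable,
the request-id argument and the hard-coded synthesizer channel
identifier are assumptions for the example, not protocol elements.

```python
def on_recognizer_event(event_name: str, headers: dict,
                        next_request_id: int, send_to_synthesizer):
    """Relay a recognizer START-OF-SPEECH to the synthesizer as a
    BARGE-IN-OCCURRED request, echoing the Proxy-Sync-Id so the
    synthesizer can correlate the barge-in with this utterance."""
    if event_name != "START-OF-SPEECH":
        return None
    msg = (f"BARGE-IN-OCCURRED {next_request_id} MRCP/2.0\r\n"
           "Channel-Identifier: 32AECB23433802\r\n"
           f"Proxy-Sync-Id: {headers['Proxy-Sync-Id']}\r\n"
           "\r\n")
    send_to_synthesizer(msg)
    return msg
```

When both resources share a session, as in this example, the media
server performs this coupling internally and the client can skip it.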
The synthesizer, if kill-on-barge-in was enabled on the current
SPEAK request, would have then interrupted it and issued a
SPEAK-COMPLETE event to the client. In this example, since the
synthesizer and recognizer are in the same session, the client did
not issue the BARGE-IN-OCCURRED method to the synthesizer and
assumed that kill-on-barge-in was implemented between the two
resources in the same session. The completion-cause code indicates
whether this is a normal completion or a kill-on-barge-in
interruption.

S->C:START-OF-SPEECH 543258 IN-PROGRESS MRCP/2.0
Channel-Identifier: 32AECB23433801

S->C:SPEAK-COMPLETE 543259 COMPLETE MRCP/2.0
Channel-Identifier: 32AECB23433802
Completion-Cause: 000 normal

The recognition resource matched the spoken stream to a grammar and
generated results. The result of the recognition is returned by the
server as part of the RECOGNITION-COMPLETE event.

S->C:RECOGNITION-COMPLETE 543258 COMPLETE MRCP/2.0
Channel-Identifier: 32AECB23433801
Completion-Cause: 000 success
Waveform-URL: http://web.media.com/session123/audio.wav
Content-Type: application/x-nlsml
Content-Length: 104

<?xml version="1.0"?>
<result>
   <interpretation>
      <instance name="Person">
         <Person>
            <Name> Andre Roy </Name>
         </Person>
      </instance>
      <input> may I speak to Andre Roy </input>
   </interpretation>
</result>

When the client wants to tear down the whole session and all its
resources, it MUST issue a SIP BYE to close the SIP session. This
de-allocates all the control channels and resources allocated under
the session.

C->S:BYE sip:mrcp@mediaserver.com SIP/2.0
Max-Forwards: 70
From: Sarvi ;tag=a6c85cf
To: MediaServer ;tag=1928301774
Call-ID: a84b4c76e66710
CSeq: 231 BYE
Content-Length: 0

10. Reference Documents

[1] Fielding, R., Gettys, J., Mogul, J., Frystyk, H., Masinter, L.,
    Leach, P., and T. Berners-Lee, "Hypertext Transfer Protocol --
    HTTP/1.1", RFC 2616, June 1999.

[2] Rosenberg, J., Schulzrinne, H., Camarillo, G., Johnston, A.,
    Peterson, J., Sparks, R., Handley, M., and E. Schooler, "SIP:
    Session Initiation Protocol", RFC 3261, June 2002.

[3] Crocker, D.
    and P. Overell, "Augmented BNF for Syntax Specifications: ABNF",
    RFC 2234, November 1997.

[4] Handley, M. and V. Jacobson, "SDP: Session Description
    Protocol", RFC 2327, April 1998.

[5] Rosenberg, J. and H. Schulzrinne, "An Offer/Answer Model with
    the Session Description Protocol (SDP)", RFC 3264, June 2002.

[6] Robinson, F., Marquette, B., and R. Hernandez, "Using Media
    Resource Control Protocol with SIP", draft-robinson-mrcp-sip-00,
    (work in progress), September 2001.

[7] World Wide Web Consortium, "Voice Extensible Markup Language
    (VoiceXML) Version 2.0", (work in progress), October 2001.

[8] Crocker, D., "STANDARD FOR THE FORMAT OF ARPA INTERNET TEXT
    MESSAGES", RFC 822, August 1982.

[9] Bradner, S., "Key words for use in RFCs to Indicate Requirement
    Levels", RFC 2119, March 1997.

[10] World Wide Web Consortium, "Speech Synthesis Markup Language
     (SSML)", W3C Working Draft, 3 January 2001.

[11] World Wide Web Consortium, "Natural Language Semantics Markup
     Language (NLSML) for the Speech Interface Framework", W3C
     Working Draft, 30 May 2001.

[12] World Wide Web Consortium, "Speech Recognition Grammar
     Specification Version 1.0", W3C Candidate Recommendation, 26
     June 2002.

11. Appendix ABNF Message Definitions

generic-message = start-line
                  message-header
                  CRLF
                  [ message-body ]

start-line      = request-line / status-line / event-line

request-line    = method-name SP request-id SP mrcp-version CRLF

status-line     = mrcp-version SP request-id SP status-code SP
                  request-state CRLF

event-line      = event-name SP request-id SP request-state SP
                  mrcp-version CRLF

message-header  = 1*(generic-header / resource-header)

generic-header  = active-request-id-list
                / proxy-sync-id
                / content-id
                / content-type
                / content-length
                / content-base
                / content-location
                / content-encoding
                / cache-control
                / logging-tag

resource-header = recognizer-header
                / synthesizer-header

method-name     = synthesizer-method
                / recognizer-method

event-name      = synthesizer-event
                / recognizer-event

request-state   = "COMPLETE"
                / "IN-PROGRESS"
                / "PENDING"

synthesizer-method = "SET-PARAMS"
                / "GET-PARAMS"
                / "SPEAK"
                / "STOP"
                / "PAUSE"
                / "RESUME"
                / "BARGE-IN-OCCURRED"
                / "CONTROL"

synthesizer-event = "SPEECH-MARKER"
                / "SPEAK-COMPLETE"

synthesizer-header = jump-target
                / kill-on-barge-in
                / speaker-profile
                / completion-cause
                / voice-parameter
                / prosody-parameter
                / vendor-specific
                / speech-marker
                / speech-language
                / fetch-hint
                / audio-fetch-hint
                / fetch-timeout
                / failed-uri
                / failed-uri-cause
                / speak-restart
                / speak-length

recognizer-method = "SET-PARAMS"
                / "GET-PARAMS"
                / "DEFINE-GRAMMAR"
                / "RECOGNIZE"
                / "GET-RESULT"
                / "RECOGNITION-START-TIMERS"
                / "STOP"

recognizer-header = confidence-threshold
                / sensitivity-level
                / speed-vs-accuracy
                / n-best-list-length
                / no-input-timeout
                / recognition-timeout
                / waveform-url
                / completion-cause
                / recognizer-context-block
                / recognizer-start-timers
                / vendor-specific
                / speech-complete-timeout
                / speech-incomplete-timeout
                / dtmf-interdigit-timeout
                / dtmf-term-timeout
                / dtmf-term-char
                / fetch-timeout
                / failed-uri
                / failed-uri-cause
                / save-waveform
                / new-audio-channel
                / speech-language

mrcp-version    = "MRCP" "/" 1*DIGIT "."
1*DIGIT

request-id      = 1*DIGIT

active-request-id-list = "Active-Request-Id-List" ":" request-id
                         *("," request-id) CRLF

proxy-sync-id   = "Proxy-Sync-Id" ":" 1*ALPHA CRLF

content-base    = "Content-Base" ":" absoluteURI CRLF

content-encoding = "Content-Encoding" ":" 1#content-coding CRLF

content-location = "Content-Location" ":"
                   ( absoluteURI / relativeURI ) CRLF

cache-control   = "Cache-Control" ":" 1#cache-directive CRLF

cache-directive = "max-age" "=" delta-seconds
                / "max-stale" "=" delta-seconds
                / "min-fresh" "=" delta-seconds

logging-tag     = "Logging-Tag" ":" 1*ALPHA CRLF

jump-target     = "Jump-Size" ":" speech-length-value CRLF

speech-length-value = numeric-speech-length
                    / text-speech-length

text-speech-length = 1*ALPHA SP "Tag"

numeric-speech-length = ("+" / "-") 1*DIGIT SP numeric-speech-unit

numeric-speech-unit = "Second"
                    / "Word"
                    / "Sentence"
                    / "Paragraph"

delta-seconds   = 1*DIGIT

kill-on-barge-in = "Kill-On-Barge-In" ":" boolean-value CRLF

boolean-value   = "true" / "false"

speaker-profile = "Speaker-Profile" ":" uri CRLF

completion-cause = "Completion-Cause" ":" 1*DIGIT SP 1*ALPHA CRLF

voice-parameter = "Voice-" voice-param-name ":"
                  voice-param-value CRLF

prosody-parameter = "Prosody-" prosody-param-name ":"
                    prosody-param-value CRLF

vendor-specific = "Vendor-Specific-Parameters" ":"
                  vendor-specific-av-pair
                  *(";" vendor-specific-av-pair) CRLF

vendor-specific-av-pair = vendor-av-pair-name "="
                          vendor-av-pair-value

speech-marker   = "Speech-Marker" ":" 1*ALPHA CRLF

speech-language = "Speech-Language" ":" 1*ALPHA CRLF

fetch-hint      = "Fetch-Hint" ":" 1*ALPHA CRLF

audio-fetch-hint = "Audio-Fetch-Hint" ":" 1*ALPHA CRLF

fetch-timeout   = "Fetch-Timeout" ":" 1*DIGIT CRLF

failed-uri      = "Failed-URI" ":" Url CRLF

failed-uri-cause = "Failed-URI-Cause" ":" 1*ALPHA CRLF

speak-restart   = "Speak-Restart" ":" boolean-value CRLF

speak-length    = "Speak-Length" ":" speech-length-value CRLF

speech-length-value = numeric-speech-length
                    / text-speech-length

text-speech-length = 1*ALPHA SP "Tag"

numeric-speech-length = ("+" / "-") 1*DIGIT SP numeric-speech-unit

numeric-speech-unit = "Second"
                    / "Word"
                    / "Sentence"
                    / "Paragraph"

confidence-threshold = "Confidence-Threshold" ":" 1*DIGIT CRLF

sensitivity-level = "Sensitivity-Level" ":" 1*DIGIT CRLF

speed-vs-accuracy = "Speed-Vs-Accuracy" ":" 1*DIGIT CRLF

n-best-list-length = "N-Best-List-Length" ":" 1*DIGIT CRLF

no-input-timeout = "No-Input-Timeout" ":" 1*DIGIT CRLF

recognition-timeout = "Recognition-Timeout" ":" 1*DIGIT CRLF

waveform-url    = "Waveform-URL" ":" Url CRLF

completion-cause = "Completion-Cause" ":" 1*DIGIT SP 1*ALPHA CRLF

recognizer-context-block = "Recognizer-Context-Block" ":" 1*ALPHA
                           CRLF

recognizer-start-timers = "Recognizer-Start-Timers" ":"
                          boolean-value CRLF

speech-complete-timeout = "Speech-Complete-Timeout" ":" 1*DIGIT CRLF

speech-incomplete-timeout = "Speech-Incomplete-Timeout" ":" 1*DIGIT
                            CRLF

dtmf-interdigit-timeout = "DTMF-Interdigit-Timeout" ":" 1*DIGIT CRLF

dtmf-term-timeout = "DTMF-Term-Timeout" ":" 1*DIGIT CRLF

dtmf-term-char  = "DTMF-Term-Char" ":" CHAR CRLF

fetch-timeout   = "Fetch-Timeout" ":" 1*DIGIT CRLF

save-waveform   = "Save-Waveform" ":" boolean-value CRLF

new-audio-channel = "New-Audio-Channel" ":" boolean-value CRLF

Full Copyright Statement

Copyright (C) The Internet Society (1999). All Rights Reserved.

This document and translations of it may be copied and furnished to
others, and derivative works that comment on or otherwise explain it
or assist in its implementation may be prepared, copied, published
and distributed, in whole or in part, without restriction of any
kind, provided that the above copyright notice and this paragraph
are included on all such copies and derivative works.
However, this document itself may not be modified in any way, such
as by removing the copyright notice or references to the Internet
Society or other Internet organizations, except as needed for the
purpose of developing Internet standards in which case the
procedures for copyrights defined in the Internet Standards process
must be followed, or as required to translate it into languages
other than English.

The limited permissions granted above are perpetual and will not be
revoked by the Internet Society or its successors or assigns.

This document and the information contained herein is provided on an
"AS IS" basis and THE INTERNET SOCIETY AND THE INTERNET ENGINEERING
TASK FORCE DISCLAIMS ALL WARRANTIES, EXPRESS OR IMPLIED, INCLUDING
BUT NOT LIMITED TO ANY WARRANTY THAT THE USE OF THE INFORMATION
HEREIN WILL NOT INFRINGE ANY RIGHTS OR ANY IMPLIED WARRANTIES OF
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.

Acknowledgements

Andre Gillet (Nuance Communications)
Andrew Hunt (SpeechWorks)
Aaron Kneiss (SpeechWorks)
Brian Eberman (SpeechWorks)
Kristian Finlator (SpeechWorks)
Martin Dragomirecky (Cisco Systems Inc.)
Peter Monaco (Nuance Communications)
Pierre Forgues (Nuance Communications)
Suresh Kaliannan (Cisco Systems Inc.)
Corey Stohs (Cisco Systems Inc.)
Dan Burnett (Nuance Communications)

Authors' Addresses

Saravanan Shanmugham
Cisco Systems Inc.
170 W Tasman Drive,
San Jose, CA 95134
Email: sarvi@cisco.com