Error while trying to train a 20240228 Whisper Large v2 baseline model
When trying to train a custom speech model using a dataset containing an audio file and its transcript, the model failed to train due to an internal error. Can anyone provide any insights on how to troubleshoot this issue?
Azure speech to text batch job stuck on "Running" status with no progress percentage
This is the request: "azureRequest": { "displayName": "job_title...", "description": "job_title...", "locale": "it-it", "contentUrls": [ "{url of a wave…
TTS mispronunciations in Traditional Chinese (Mandarin)
「重考」 should be pronounced ㄔㄨㄥˊ ㄎㄠˇ, and 「假期」 should be pronounced ㄐㄧㄚˋ ㄑㄧˊ. TTS is a paid service, so please fix this as soon as possible. Thank you.
Handling connection errors in Speech SDK
Hi, we are using the Speech SDK (version 1.35.0, C++) for speech to text, calling SpeechRecognizer->StartKeywordRecognitionAsync. While the application is running we sometimes lose the connection; at other times the internet connection is fine, but we get…
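One common pattern for this situation is to listen for the SDK's Canceled event and reconnect with exponential backoff. The sketch below is stdlib-only Python illustrating the retry policy itself; the cancellation reason and error-code names in the comments (CancellationReason.Error, ConnectionFailure, etc.) are taken from the Speech SDK's enums, but treat the exact retry set as an assumption to tune for your workload:

```python
def backoff_delay(attempt: int, base: float = 1.0, cap: float = 30.0) -> float:
    # Exponential backoff for reconnect attempt `attempt` (0-based):
    # 1s, 2s, 4s, ... capped at `cap` seconds so retries never stall forever.
    return min(cap, base * (2 ** attempt))

def should_retry(cancellation_reason: str, error_code: str) -> bool:
    # In the Speech SDK, a dropped network surfaces as a Canceled event with
    # CancellationReason.Error and an error code such as ConnectionFailure.
    # Retry only transient network/service errors; give up on auth failures,
    # which will not succeed no matter how often you reconnect.
    transient = {"ConnectionFailure", "ServiceTimeout", "ServiceUnavailable"}
    return cancellation_reason == "Error" and error_code in transient
```

In the C++ SDK the same idea attaches a handler to `SpeechRecognizer::Canceled`, inspects `CancellationDetails`, sleeps for `backoff_delay(attempt)`, and calls `StartKeywordRecognitionAsync` again.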
Sample Data for different styles of Custom Neural Voices (happy, excited, sad).
I could find individual utterances for neutral speech, questions, and exclamations here: https://github.com/Azure-Samples/Cognitive-Speech-TTS/blob/master/CustomVoice/Sample%20Data/Individual%20utterances%20%2B%20matching%20script/SampleScript.txt To…
Do we need to close/suspend built-in AI voices (Ava, Andrew, Emma, Brian, etc) after using them to create a file in Audio Content Creation?
Hello, I understand that Custom Neural Voices need to be suspended after use due to their per-hour pricing. Do we also need to suspend anything after using Microsoft's built-in AI voices? I couldn't find specific information on this and want to avoid…
How to estimate the time needed to train a custom STT model?
Hey! I'm thinking about fine-tuning a STT model with Audio + human-labeled transcript data in Speech Studio. However, as I read through the docs, I can see that "If you switch to a base model that supports customization with audio data, the training…
How can I make Microsoft consider adding Faroese language to Speech Services
I need text-to-speech services for Faroese in Speech Services. How would I go about getting Microsoft to consider this request? Is there any way for me to train a custom voice for a language that doesn't yet exist in Microsoft's repository of…
How do you control pronunciation?
Recently I had a script for a programming video, and I needed the word GUID, pronounced "goo id". I tried typing it many different ways, and the only way I could get the word GUID was to type "goo hid" and then use an audio editor to remove the H sound. Azure Speech…
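The usual way to pin down a pronunciation like this is SSML's `<phoneme>` element, which Azure TTS supports, rather than creative misspellings. A minimal sketch that builds such SSML as a string (the voice name and the IPA transcription "ˈɡuːɪd" for GUID are illustrative assumptions; adjust them to taste):

```python
def ssml_with_ipa(word: str, ipa: str,
                  voice: str = "en-US-AvaMultilingualNeural") -> str:
    # Wrap a single word in a <phoneme> tag so the service speaks the given
    # IPA string instead of guessing the pronunciation from spelling.
    return (
        '<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" '
        'xml:lang="en-US">'
        f'<voice name="{voice}">'
        f'I generated a new <phoneme alphabet="ipa" ph="{ipa}">GUID</phoneme> '
        "for the record."
        "</voice></speak>"
    )

ssml = ssml_with_ipa("GUID", "ˈɡuːɪd")
```

The resulting string can be passed to `speak_ssml_async` on a synthesizer, or pasted into the SSML view in Audio Content Creation, so no post-hoc audio editing is needed.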
400 Bad Request using Whisper with AzureCliCredential
I'm trying to use Whisper with AzureCliCredential and I always get the following error: { code: 'Request is badly formated', message: 'Resource Id is badly formed: NA' } My very simple code is: import * as fs from "fs"; import {…
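"Resource Id is badly formed: NA" suggests the Speech resource's ARM resource ID never reached the service. When the Speech service is used with a Microsoft Entra ID (Azure AD) token instead of a key, the documented authorization-token format embeds that resource ID as `aad#<resource ID>#<token>`. A small stdlib-only sketch of building that token (the sample resource ID is a placeholder; whether this applies to your exact Whisper code path is an assumption to verify against your setup):

```python
def speech_authorization_token(resource_id: str, aad_token: str) -> str:
    # The Speech service expects "aad#<full ARM resource ID>#<AAD access token>"
    # when authenticating with Microsoft Entra ID instead of a resource key.
    # A missing or placeholder resource ID is what produces errors like
    # "Resource Id is badly formed: NA".
    if not resource_id.startswith("/subscriptions/"):
        raise ValueError("resource_id must be the full ARM resource ID, e.g. "
                         "/subscriptions/<sub>/resourceGroups/<rg>/providers/"
                         "Microsoft.CognitiveServices/accounts/<name>")
    return f"aad#{resource_id}#{aad_token}"
```

In JavaScript the same string would be passed to `SpeechConfig.fromAuthorizationToken(token, region)`, with the AAD token obtained from `AzureCliCredential.getToken(...)`.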
Training with mixed languages in custom STT (English & Korean)
Hi, I am working on training a Korean custom STT model, but there are a few English words mixed into the training data. Some of them are processed and accepted as training data, but others get rejected, such as winder, insulator, gripper, rewinding. Below…
Can I re-train an already deployed custom voice model with newly added data without undergoing the entire training time again (approximately 24 hours)?
Here’s the context: We set up a voice talent, added training data, trained the model, and deployed it. We've now updated the dataset with more audios and transcripts, increasing the number of utterances from 1300 to 1500. When I try to train this voice…
Speech recognition service is not working correctly
Hi, I'm using your speech service to recognize phrases spoken by a user in real time and evaluate their pronunciation. However, I am facing the following issues: if I pass the reference text and set EnableMiscue=true, then all the wrong words the user…
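For anyone reproducing this, the pronunciation-assessment settings can be expressed as a JSON string, which the Speech SDK accepts via `PronunciationAssessmentConfig(json_string=...)`. A stdlib-only sketch of building that JSON (the field names follow the SDK's documented camelCase keys; the grading/granularity choices here are just example defaults):

```python
import json

def pronunciation_assessment_json(reference_text: str,
                                  enable_miscue: bool = True) -> str:
    # With enableMiscue on, insertions and omissions relative to the reference
    # text are flagged as miscues instead of being silently aligned away —
    # which is exactly the behavior being discussed in this question.
    return json.dumps({
        "referenceText": reference_text,
        "gradingSystem": "HundredMark",
        "granularity": "Phoneme",
        "enableMiscue": enable_miscue,
    })
```

The returned string would then be applied to a recognizer with `PronunciationAssessmentConfig(json_string=...).apply_to(recognizer)` in the Python SDK.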
Why is the Isabella Multilingual voice available only in Clipchamp?
Hello, I noticed that the Isabella Multilingual voice for Thai Text to Speech is available in Clipchamp but not in Audio Content Creation. I'm interested in using this voice for my projects. I was wondering if there are any specific reasons why this…
How to output transcription on a word-level
With the provided callback function, the text is output as you describe, either after a short pause or after a maximum of 15 seconds. Is it possible to output word by word, so that the text can be seen while speaking? def…
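The Python Speech SDK exposes a `recognizing` event that fires repeatedly with a growing partial hypothesis (`evt.result.text`) while the user is still speaking; connecting a handler to it is the usual way to show text word by word. Because each interim hypothesis can revise earlier words, a handler typically diffs successive hypotheses and prints only the new suffix. A stdlib-only sketch of that diff (the event wiring described above is from the SDK; the diff logic is my own illustration):

```python
def new_words(previous: str, current: str) -> list[str]:
    # Given the last interim hypothesis and the current one, return only the
    # words that were appended (or changed), for incremental display.
    prev_words = previous.split()
    cur_words = current.split()
    # Find the common prefix; everything after it is treated as new, since
    # interim results may rewrite earlier words as more audio arrives.
    i = 0
    while i < min(len(prev_words), len(cur_words)) and prev_words[i] == cur_words[i]:
        i += 1
    return cur_words[i:]
```

In a handler this would look roughly like `recognizer.recognizing.connect(lambda evt: print(*new_words(last[0], evt.result.text)))`, updating `last[0]` after each event, while the existing `recognized` callback still delivers the final punctuated sentence.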
Azure TTS batch synthesis activity logs
Hi there, we're using Azure speech synthesis (batch, since we have content over 10mins). In the Azure Portal, I can see metrics for my speech resource but I can't see any records of past jobs. Is there any way to see these? Thanks, Tim
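The portal metrics blade doesn't list individual jobs, but submitted batch synthesis jobs can be enumerated through the batch synthesis REST API while they are retained. A stdlib-only sketch that builds the list-request URL (the endpoint path `texttospeech/batchsyntheses` and the `api-version` value are assumptions based on the current API surface — verify both against the docs for your region before relying on them):

```python
def batch_synthesis_list_url(region: str,
                             api_version: str = "2024-04-01") -> str:
    # GET this URL with your "Ocp-Apim-Subscription-Key" header to list
    # recent batch synthesis jobs (IDs, status, timestamps) for the resource.
    # Jobs are only retained for a limited time, so for durable history you
    # would log job IDs yourself or enable Azure Monitor diagnostic settings.
    return (f"https://{region}.api.cognitive.microsoft.com/texttospeech/"
            f"batchsyntheses?api-version={api_version}")
```

A real call would pass the URL to `urllib.request.urlopen` (or `requests.get`) with the subscription-key header and parse the returned JSON list of jobs.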
Bug Report: Mispronunciation of Welsh Contraction "i’w" in Azure Neural TTS
Subject: Bug Report: Mispronunciation of Welsh Contraction "i’w" in Azure Neural TTS Description: The Azure Neural TTS system is mispronouncing the Welsh contraction "i’w." Instead of producing the correct pronunciation…
I cannot find the path described in the docs: "To create a custom avatar endpoint, follow these steps: Sign in to Speech Studio. Navigate to Custom Avatar > Your project name > Train model."
I cannot find the custom avatar key after signing in to Speech Studio.
Inquiry Regarding Azure AI Speech Error
Dear Azure Support Team I recently encountered an issue while using Azure AI Speech service with recordings from the VoiceMemo app on iPhone. Specifically, when attempting to process recordings of approximately 30 minutes in length, I received the…
Speech Studio Audio Content Creation (x) Content Format and Audio Export Fail
I discovered https://speech.microsoft.com/portal and its audio content creation tile. (I think it should be the first tile, described as an "interactive batch TTS web interface.") I uploaded a file named test.txt, which has two paragraphs. For decades now,…