Good news guys. You can now use OpenAI Whisper to transcribe/translate audio into a subtitle file (90% accuracy). It came out just a few days ago. It's better than pyTranscriber.
Where did you get the numbers? For Japanese they claim a WER (word error rate) of 6.4, which would make the accuracy 93.6%. That would be exceptional (human transcribers usually have a WER of around 4). And it would be surprising, as its AI model was trained on 680,000 hours of material, but about two thirds of it were in English.
As a non-Japanese speaker it's always difficult to judge the quality, because even a well-transcribed audio can be messed up by a bad translation. For transcribing, so far there was only Google Speech-to-Text (it's the engine that all the tools like PyTranscriber, autosub etc. use). It would be interesting to see how good Whisper's transcribing is (converting audio to Japanese text) and then how good its translation is. I doubt that it performs on the same level as DeepL. But if the transcribing is good, a possible workflow could be to transcribe with Whisper and then translate the subtitles with DeepL.
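To make that workflow concrete, here is a minimal sketch of the Whisper-side half: taking the segment list Whisper returns (each segment has "start", "end" and "text", with times in seconds) and formatting it as an .srt file, which you could then run through DeepL. The sample segments below are made-up data for illustration, not real Whisper output.

```python
def to_srt_timestamp(seconds):
    """Convert seconds to the SRT HH:MM:SS,mmm timestamp format."""
    ms = int(round(seconds * 1000))
    h, rem = divmod(ms, 3_600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def segments_to_srt(segments):
    """Render a list of {"start", "end", "text"} segments as SRT text."""
    blocks = []
    for i, seg in enumerate(segments, start=1):
        blocks.append(
            f"{i}\n"
            f"{to_srt_timestamp(seg['start'])} --> {to_srt_timestamp(seg['end'])}\n"
            f"{seg['text'].strip()}\n"
        )
    return "\n".join(blocks)

# Hypothetical sample segments, shaped like Whisper's result["segments"]
sample = [
    {"start": 0.0, "end": 2.5, "text": " こんにちは"},
    {"start": 2.5, "end": 5.0, "text": " 元気ですか"},
]
print(segments_to_srt(sample))
```

The resulting .srt text could be saved to a file and its lines fed to DeepL for translation, keeping the timestamps untouched.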
From the tests conducted, it doesn't seem noticeably better or worse than PyTranscriber, and definitely not 90% accurate.
But it might be worth checking out for those looking to try something new.
Did you use the translation from Whisper? As mentioned above, it would be interesting to compare the transcribing itself. On Reddit I saw some positive tests of using Whisper to translate Japanese.
It seems to only work on Linux...
No Windows version.
You can use it on Windows too; it seems you just need a command line and Python. But there is no GUI so far.
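For anyone on Windows, a small sketch of what "command line and Python" means in practice: a helper that builds the Whisper CLI invocation so you can run it the same way on any OS. The flags used here (--model, --task, --output_dir) are from Whisper's README; the file name and model choice are just placeholders.

```python
import shlex

def build_whisper_command(audio_path, model="small", task="translate", output_dir="."):
    """Build a Whisper CLI invocation as an argument list.

    task="translate" asks Whisper to translate to English;
    task="transcribe" keeps the source language.
    """
    return [
        "whisper", audio_path,
        "--model", model,
        "--task", task,
        "--output_dir", output_dir,
    ]

cmd = build_whisper_command("episode01.mp3")
# You would run this with subprocess.run(cmd); here we just show the command.
print(" ".join(shlex.quote(part) for part in cmd))
```

On Windows this runs in any terminal once Python and Whisper are installed (`pip install openai-whisper`); Whisper writes the transcription files into the given output directory.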