Maria Sophia wrote:
It appears to me that Occam's Razor tells us that yt-dlp is fine.
I just need a modern FFMPEG (such as ffmpeg-git-full or ffmpeg-release-full).
<https://www.gyan.dev/ffmpeg/builds/>
On 17.01.2026 04:08, Maria Sophia wrote:
Maria Sophia wrote:
It appears to me that Occam's Razor tells us that yt-dlp is fine.
I just need a modern FFMPEG (such as ffmpeg-git-full or ffmpeg-release-full).
<https://www.gyan.dev/ffmpeg/builds/>
cool! I thought yt-dlp update is all you need!
ciao...
It looks like YouTube has blocked video downloads using yt-dlp. Has
anybody noticed this?
I've also tried using VPN and the same result. It says error 403
forbidden or something like that.
[download] Got error: HTTP Error 403: Forbidden. Retrying fragment 1 (1/10)...
It looks like YouTube has blocked video downloads using yt-dlp. Has
anybody noticed this?
On 17/01/2026 1:21 pm, Maria Sophia wrote:
Simon wrote:
It looks like YouTube has blocked video downloads using yt-dlp. Has
anybody noticed this?
I've also tried using VPN and the same result. It says error 403
forbidden or something like that.
[download] Got error: HTTP Error 403: Forbidden. Retrying fragment 1
(1/10)...
While I haven't downloaded YouTube videos on Windows for a while (I use the
Android NewPipe YouTube client for that), I volunteered to help you out: I
first used the Aloha VPN browser to visit this adult video page (which in
the past many browsers asked for a login):
<https://www.youtube.com/watch?v=BY82T7Q8hiw>
While this might not play in other browsers without a Google Account,
it plays just fine in the default Aloha VPN browser with VPN turned on.
Clicking your link played just fine in my SeaMonkey Suite without a
Google Account .... after I clicked 'Disallow' on a Pop-Up.
Andy Burns wrote:
what's this weird font:
It's a result of his 'random' headers; it makes Thunderbird (maybe other newsreaders too) think the text is using Chinese characters.
Content-Type: text/plain; charset=Big5
it has been pointed out that avoiding charset=big5 would be wise.
Hi Andy,
Thanks for taking a stab at why sometimes the charset header is borked.
It's off topic for this conversation, but I'll try to explain what I know.
It is absolutely not caused by the randomized headers for privacy, as they are static, although I understand why it might look related since we talked about that earlier this year. The only header causing it is the charset.
Which I don't overtly touch.
Nor do I randomize.
The charset problem only happens when I copy and paste text from outside my text editor into the editor that feeds the posting scripts, and when I
forget to (or don't think about) converting to ASCII with the Notepad++ conversion script we've discussed, which converts Unicode to ASCII.
If I type everything by hand using the keyboard's 95 characters, the charset never switches to Big5, so it must be happening in my charset-recognition binary.
The strange part is that the pasted text often looks fine in my edits.
In this case, it's simply the filespec copied over from a Windows CLI.
But most of the time the issue shows up when the source is a web page, usually in a Chrome-based web browser, which in my case is the Aloha privacy browser variant, which apparently tries to prettify text. That
means it inserts things like curly quotes, em dashes, non-breaking hyphens, zero-width spaces, and other characters that are not part of plain ASCII.
Once even one non-ASCII character slips in, the script that guesses the encoding gets confused and decides the whole post must be Big5 or whatever.
I don't set it. The binary I picked up off the Internet is doing that. Thunderbird, in your case, I think, then assumes the text is Chinese.
I added some code to inspect the characters in the message so I can track down which one is triggering the mis-detection, but I am still working on that. So I apologize, as this is a quirk of writing your own newsreader.
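A minimal Python sketch of the kind of Unicode-to-ASCII cleanup described above; the `to_ascii` name and the exact replacement table are my own illustration, not the actual Notepad++ macro:

```python
# Map "prettified" punctuation back to plain ASCII before posting, so a
# charset-guessing filter never sees a non-ASCII byte.
REPLACEMENTS = {
    "\u201c": '"', "\u201d": '"',   # curly double quotes
    "\u2018": "'", "\u2019": "'",   # curly single quotes
    "\u2014": "--", "\u2013": "-",  # em dash, en dash
    "\u2026": "...",                # ellipsis
    "\u00a0": " ",                  # non-breaking space
    "\u200b": "", "\u200c": "",     # zero-width space / non-joiner
}

def to_ascii(text: str) -> str:
    for src, dst in REPLACEMENTS.items():
        text = text.replace(src, dst)
    # Anything still outside ASCII is dropped rather than guessed at.
    return text.encode("ascii", "ignore").decode("ascii")

print(to_ascii("\u201chello\u201d \u2014 world\u2026"))  # "hello" -- world...
```

Run on a whole article before it reaches the posting script, the output is guaranteed 7-bit, so no detector can decide it is Big5.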
It looks like YouTube has blocked video downloads using yt-dlp. Has
anybody noticed this?
On Sat, 17 Jan 2026 00:20:42 +0000, Simon wrote:
It looks like YouTube has blocked video downloads using yt-dlp. Has
anybody noticed this?
I downloaded a couple of them yesterday without issues. I hadn't used
yt-dlp in a while, so I first ran "yt-dlp -U" to get the latest.
Do you by chance still have an older version?
Schugo wrote:
cool! I thought yt-dlp update is all you need!
ciao...
BTW.. what's this weird font:
Content-Type: text/plain; charset=Big5
I have no idea. It gets inserted somehow. I don't specify the font.
It happens when I copy/paste from the Windows command line for some reason.
On 1/17/2026 11:45 AM, Stan Brown wrote:
On Sat, 17 Jan 2026 00:20:42 +0000, Simon wrote:
It looks like YouTube has blocked video downloads using yt-dlp. Has
anybody noticed this?
I downloaded a couple of them yesterday without issues. I hadn't used
yt-dlp in a while, so I first ran "yt-dlp -U" to get the latest.
Do you by chance still have an older version?
Recent versions of yt_dlp are not compatible with Windows 7, either x32
or x64. ...
On 17.01.2026 22:40, David E. Ross wrote:
On 1/17/2026 11:45 AM, Stan Brown wrote:
On Sat, 17 Jan 2026 00:20:42 +0000, Simon wrote:
It looks like YouTube has blocked video downloads using yt-dlp. Has
anybody noticed this?
I downloaded a couple of them yesterday without issues. I hadn't used
yt-dlp in a while, so I first ran "yt-dlp -U" to get the latest.
Do you by chance still have an older version?
Recent versions of yt_dlp are not compatible with Windows 7, either x32
or x64. ...
huh?
https://github.com/yt-dlp/yt-dlp/releases
yt-dlp 2025.12.08 (Latest)
C:\Users\Admin\_work>yt-dlp --version
2025.12.08
Win7 Ultimate 64
Maria Sophia wrote:
The flow would be from gVIM (which only has the 95 keyboard ASCII
characters when I type but it has funky stuff when I copy/paste) to the
charset filter, to telnet to the Internet NNTP server then back to you.
Thanks to Carlos, Andy & Adam Kerman (anonymizer?), I made a change just
now so this is simply a test of whether the change makes any difference.
<https://i.postimg.cc/vHP4gjSC/charset01.jpg>
1. In my custom newsreader, I just now wiped out all headers of the prefix:
Mime-Version:
Content-Type: (this is where the charset seems to be embedded?????)
Content-Transfer-Encoding:
And, I removed the binary charset filter that I had originally installed
(which I picked up back in the 1990s or so from somewhere on the net).
2. So that we KNOW that change was made, I cloned Adam Kerman's Newsreader:
X-Newsreader: trn 4.0-test77 (Sep 1, 2010)
3. The resulting header line for a pure ASCII text became (magically?)
Date: Sat, 17 Jan 2026 23:47:47 -0500
Message-ID: <10kholk$2ej4$1@nnrp.usenet.blueworldhosting.com>
X-Newsreader: trn 4.0-test77 (Sep 1, 2010)
Previously, in the post just prior to that post, the header was:
Date: Sat, 17 Jan 2026 23:34:36 -0500
Message-ID: <10khnss$7e6$1@nnrp.usenet.blueworldhosting.com>
... Plus these, which came from my binary charset filter from the 1990s.
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Which I didn't overtly set as it was set by my binary filter that I
added about a month ago when Carlos/Andy first brought this issue up.
So, if the content is purely the 95 characters of my QWERTY USA keyboard, then it seems there is no header line for the character set, which is fine.
But now for THIS post, I will add, on purpose, Unicode and Chinese text.
a. This line includes a Chinese character: 汉
b. This line includes a Japanese character: あ
c. Funky German characters: ä, ö, ü, ß, Ä, Ö, Ü
d. Non-ASCII currencies: € (euro), £ (pound), ¥ (yen), ₩ (won), ₹ (rupee)
e. Curly quotes: “curly opening” and ”curly closing”
f. Dashes: em dash —, en dash –, figure dash ‒
g. Ellipses: …
h. Non-breaking space between these words: hello world
i. Zero-width non-joiner between these letters: a‌b
j. Zero-width space between these letters: x​y
k. A few extra oddballs: ©, ®, †, ‡, ¶, §
Schugo wrote:
The correct headers would be:
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Without, the unicode chars look broken.
I have to manually switch Decoding->Western(default) to Decoding->Unicode OR Autodetect->Japanese.
Thanks for that. I read using the same newsreader that I write, both of which use gVim as the medium, so I don't see what you see, I don't think.
But may I ask: if I statically set the headers to what you suggest, would
the problem disappear when I pasted characters that don't fit UTF-8?
Schugo wrote:
Maria Sophia wrote:
But now for THIS post, I will add, on purpose, Unicode and Chinese text.
a. This line includes a Chinese character: 汉
b. This line includes a Japanese character: あ
c. Funky German characters: ä, ö, ü, ß, Ä, Ö, Ü
d. Non-ASCII currencies: € (euro), £ (pound), ¥ (yen), ₩ (won), ₹ (rupee)
e. Curly quotes: “curly opening” and ”curly closing”
f. Dashes: em dash —, en dash –, figure dash ‒
g. Ellipses: …
h. Non-breaking space between these words: hello world
i. Zero-width non-joiner between these letters: a‌b
j. Zero-width space between these letters: x​y
k. A few extra oddballs: ©, ®, †, ‡, ¶, §
By avoiding setting content type and encoding, I suppose you'll get
varying results depending on which newsreader is interpreting it, for me
it doesn't look bad, so thunderbird is presumably guessing at UTF-8
The correct headers would be:
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Without, the unicode chars look broken.
I have to manually switch Decoding->Western(default) to Decoding->Unicode OR Autodetect->Japanese.
That option (for the user to override the encoding of received messages)
was removed a long time ago.
Carlos E.R. wrote:
I have no idea. It gets inserted somehow. I don't specify the font.
Yes you do. We discussed this and told you what to do about a month ago.
It happens when I copy/paste from the Windows command line for some reason.
You told us you used random headers, including Content-Type.
Hi Carlos,
Thanks for remembering what it used to be because it used to be that way.
But it's not that way now.
So wipe that idea out as it's not how it works in my setup.
Having said that, I will say, as I always have said, that I really do not understand how charset headers are set. I just don't understand it.
If I understood it, I'm sure this problem would never occur.
But it does. But rarely. Very rarely. Once in every 100 posts or so.
And, it's no big deal, right?
So a few characters look funny, right?
Note: I don't see what you see because I see only what gVim shows me.
We discussed this, where I have always said a few things about this
problem, one of which is I don't do anything overt in the headers.
As you (and Andy) have astutely noted, I used to use STATIC RANDOM headers. But both of you are confusing STATIC RANDOM headers with RANDOM headers. They're not the same thing. They're not even close to the same thing.
Having explained that random headers is not the same thing as static random headers, after we discussed this, I *removed* the static random headers.
In fact, I removed the entire charset header line from my dictionaries.
So I don't set the charset header at all. It gets set by a binary I filter through, which works fine most of the time, as you can tell from my posts.
I run the entire post now thru a filter I picked up somewhere, where I
don't even remember where the binary came from, but it works generally.
That's why, likely, you see my last thousand posts (or so) have a header
that seems to fit the characters, which are almost always pure ASCII.
But sometimes things other than ASCII creep in, which I can't possibly
type, so they must be coming from the places I copy & paste from.
Most of the time when I'm heavily merging sources, I run the entire text through the well-discussed Notepad++ XML macro and that cleans it up.
But sometimes I forget.
When I forget, 99 out of 100 times the binary takes care of it.
But 1 out of 100 times it doesn't.
I don't know why.
Loop back to my statement that I really don't know how charset is set.
But let's get back to the topic at hand.
I think, after a few hours of testing, that I have proven the problem that the OP originally asked about isn't really with yt-dlp but with YouTube changing how they do things (whack-a-mole as someone described it).
We're all familiar with the whack-a-mole that companies like Google and
Apple play, where our job is to find a way around the next mole to whack.
I think having the latest yt-dlp is key (but it's almost a year old), but also having a JavaScript runtime to run is of critical importance nowadays.
Also having the latest ffmpeg seems to also be critical if we want, in the end, to be able to convert the Youtube video to an MP4 of decent quality.
I'm hoping someone will copy and paste the commands that I put into this thread so that the OP could test it out, so we can confirm that hypothesis.
Have you tried it on Windows or Linux yet?
C:\> yt-dlp -t mp4 https://www.youtube.com/watch?v=BY82T7Q8hiw
It looks like YouTube has blocked video downloads using yt-dlp. Has
anybody noticed this?
.....
[download] Got error: HTTP Error 403: Forbidden. Retrying fragment 1 (1/10)...
The whole battle between YouTube and downloader developers makes me think
of Whack-a-Mole.
Andy Burns wrote:
Thanks for taking a stab at why sometimes the charset header is borked.
It's not so much a stab, as the definite cause; every message of yours
which includes "charset=big5" indicates to newsreaders that it's a
chinese message, and thunderbird (maybe others) has a different font
setting for chinese, which is why your messages look odd to us (the
microsoft SimSun and YaHei fonts render poorly).
Last time I mentioned it you claimed the message in question did not
have big5 in it, but it did. The message this time is yours with a
timestamp of
Fri, 16 Jan 2026 22:08:20 -0500
about 7 levels up from this one. You've described how your script picks
random headers from a "pool" of possible values, all it would take is
for you to exclude any messages with big5 from that source pool ...
Hi Andy,
I forgot that after our last discussion (oh, maybe a month or so ago?)
I had completely wiped out the dictionary of static random charset headers.
That header is no longer static nor random (I misspoke in a previous post).
I repeat for effect:
I removed all charset headers (every single one!) from my dictionaries.
So I don't physically set *any* charset header when I post from gVim.
The header runs through an old filter I picked up from the 1990s though.
That old binary is what is doing the big5 garbage in 1 out of 500 posts.
I think.
The reason I say "I think" is because not only do I not set the charset header myself, nor do I even see it, but I don't really understand it.
Specifically, I don't understand why it is required in the first place.
I think I only have two choices:
a. Remove the charset altogether in every case, or,
b. Find a better (more modern) filter that knows how to set it better
On 17/1/2026 8:20 am, Simon wrote:
It looks like YouTube has blocked video downloads using yt-dlp. Has
anybody noticed this?
.....
[download] Got error: HTTP Error 403: Forbidden. Retrying fragment 1
(1/10)...
Did you download the latest yt-dlp?
You have to get the most recent version because of changes made by YouTube.
(How _legal_ such downloading is is open to question of course, but the
same applies to any such you may do from YouTube itself.)
On 18.01.2026 09:26, Maria Sophia wrote:
Schugo wrote:
The correct headers would be:
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Without, the unicode chars look broken.
I have to manually switch Decoding->Western(default) to Decoding->Unicode OR Autodetect->Japanese.
Thanks for that as I read using the same newsreader that I write, both of which use gVim as the medium so I don't see what you see, I don't think.
But may I ask if I statically set the headers to what you suggest, would the problem disappear when I pasted characters that don't fit UTF-8?
ASCII 7bit (0-127) are identical. Maybe you have to convert ASCII 128-255
to their UTF8 equivalent. Never seen UTF16 or 32 in mail or web.
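Schugo's point can be checked directly in Python (a throwaway sketch, not part of anyone's newsreader):

```python
# ASCII (0-127) encodes identically in UTF-8, while Latin-1 bytes in the
# 128-255 range become multi-byte sequences when re-encoded as UTF-8.
ascii_text = "plain ASCII, 0-127 only"
assert ascii_text.encode("utf-8") == ascii_text.encode("ascii")

latin1_bytes = b"caf\xe9"  # "café" as Latin-1: 0xE9 is é
utf8_bytes = latin1_bytes.decode("latin-1").encode("utf-8")
print(utf8_bytes)  # b'caf\xc3\xa9' -- 0xE9 became the two bytes C3 A9
```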
Andy Burns wrote:
By avoiding setting content type and encoding, I suppose you'll get
varying results depending on which newsreader is interpreting it, for me
it doesn't look bad, so thunderbird is presumably guessing at UTF-8
Hi Andy,
Thanks for confirming that removing all character-set encoding from "my" outgoing headers allowed "your" incoming side to guess at "UTF-8".
In my recent response to Carlos, I learned from his kind help that gVIM may have been creating mojibake (UTF-8 interpreted as Latin-1).
The correct headers would be:
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Thanks for that suggestion where this post will have the following set:
Content-Type: text/plain; charset=UTF-8; format=flowed
Without, the unicode chars look broken.
I have to manually switch Decoding->Western(default) to Decoding->Unicode OR
Autodetect->Japanese.
That option (for the user to override the encoding of received messages)
was removed a long time ago.
It's great that you're helping Schugo. To let you know that I've inserted the charset headers above, I've cloned Schugo's newsreader line:
User-Agent: Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:128.0) Gecko/20100101 Firefox/128.0 SeaMonkey/2.53.22
If you see *that* line in my headers, it means I'm automatically
inserting the UTF-8 charset header (instead of not inserting any charset header).
Let's repeat the test where this is generated by an LLM for me:
a. This line includes a Chinese character: 汉 (U+6C49)
b. This line includes a Japanese character: あ (U+3042)
c. Funky German characters: ä (U+00E4), ö (U+00F6), ü (U+00FC), ß (U+00DF),
Ä (U+00C4), Ö (U+00D6), Ü (U+00DC)
d. Non-ASCII currencies: € (U+20AC), £ (U+00A3), ¥ (U+00A5), ₩ (U+20A9), ₹ (U+20B9)
e. Curly quotes: “ (U+201C) and ” (U+201D)
f. Dashes: — (U+2014), – (U+2013), ‒ (U+2012)
g. Ellipses: … (U+2026)
h. Non-breaking space between these words: hello world (NBSP U+00A0)
i. Zero-width non-joiner between these letters: a‌b (ZWNJ U+200C)
j. Zero-width space between these letters: x​y (ZWSP U+200B)
k. A few extra oddballs: © (U+00A9), ® (U+00AE), † (U+2020), ‡ (U+2021), ¶ (U+00B6), § (U+00A7)
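The statically-set headers discussed above can be sketched with Python's stdlib email API (my own illustration, not the poster's actual script; the subject line is hypothetical):

```python
# Build a message with an explicit UTF-8 Content-Type and 8bit
# Content-Transfer-Encoding, as Schugo suggested.
from email.message import EmailMessage

msg = EmailMessage()
msg["Subject"] = "charset test"  # hypothetical subject
msg.set_content("curly quotes: \u201chi\u201d",
                charset="utf-8", cte="8bit")

print(msg["Content-Type"])               # text/plain; charset="utf-8"
print(msg["Content-Transfer-Encoding"])  # 8bit
```

With those two headers declared, a receiving newsreader no longer has to guess the encoding.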
Mr. Man-wai Chang wrote:
(How _legal_ such downloading is is open to question of course, but the
same applies to any such you may do from YouTube itself.)
As long as you don't make money out of it nor re-publish it nor claim
ownership.
You are merely recording TV shows using a (software) VCR recorder. :)
With yt-dlp, we're also not agreeing to anything on Google's YouTube site!
If other folks can test this for the OP, we can figure out all the issues; I tested it on Windows and found that a few tweaks fixed everything. ...
Windows:
yt-dlp --js-runtime node ^
-f "bv*+ba/b" ^
--merge-output-format mp4 ^
https://www.youtube.com/watch?v=BY82T7Q8hiw
Well that depends; in the early 80's we had to deal with printers that didn't want to print GBP currency symbols but could be configured to substitute the hash symbol "#" with a pound symbol instead.
Schugo <schugo@schugo.de> wrote:
On 18.01.2026 09:26, Maria Sophia wrote:
Schugo wrote:
The correct headers would be:
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Without, the unicode chars look broken.
I have to manually switch Decoding->Western(default) to Decoding->Unicode OR
Autodetect->Japanese.
Thanks for that as I read using the same newsreader that I write, both of which use gVim as the medium so I don't see what you see, I don't think.
But may I ask if I statically set the headers to what you suggest, would the problem disappear when I pasted characters that don't fit UTF-8?
ASCII 7bit (0-127) are identical. Maybe you have to convert ASCII 128-255 to their UTF8 equivalent. Never seen UTF16 or 32 in mail or web.
128-255 are in the 8-bit range and are not ASCII. There are any number
of different 8-bit character sets in use. These must be declared. The
user must tell the system which one was used as there's no way to parse.
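Adam's point can be shown in a couple of lines of Python (an illustrative sketch only):

```python
# A bare byte has no inherent charset: the very same byte decodes to
# different characters under different 8-bit character sets, so the
# sender must declare which set was used.
raw = b"\xa3"
print(raw.decode("latin-1"))    # £ (POUND SIGN in ISO-8859-1)
print(raw.decode("iso8859-5"))  # a Cyrillic letter in ISO-8859-5
assert raw.decode("latin-1") != raw.decode("iso8859-5")
```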
On 2026-01-18 05:34, Maria Sophia wrote:[...]
Specifically, I don't understand why it is required in the first place.
I think I only have two choices:
a. Remove the charset altogether in every case, or,
And then the usenet server will either refuse your post or add one.
Adam H. Kerman wrote:
Andy Burns <usenet@andyburns.uk> wrote:
Well that depends; in the early 80's we had to deal with printers that didn't want to print GBP currency symbols but could be configured to substitute the hash symbol "#" with a pound symbol instead.
I thought teletypewriters performed "L" backspace overstrike "B".
I recognise L = from DEC compose sequences, not heard of L B
Traditionally, the cent sign wasn't on a keyboard either.
Surely it was a dot matrix? You declared your code page, then got the needed symbol.
Yes, dot matrix; these were connected to CP/M-type systems, they didn't define codepages like DOS.
Some of these code pages became ISO-8859-X character
sets.
Carlos E.R. <robin_listas@es.invalid> wrote:
On 2026-01-18 05:34, Maria Sophia wrote:[...]
Specifically, I don't understand why it is required in the first place.
I think I only have two choices:
a. Remove the charset altogether in every case, or,
And then the usenet server will either refuse your post or add one.
If the text is pure ASCII and no charset=... is declared, a news server should not set a charset=..., nor should it refuse the post, because if undeclared, us-ascii is the default character set.
Note that nearly all my posts - and probably/hopefully this one -
don't have any MIME headers (Mime-Version:, Content-Type:, Content-Transfer-Encoding:, etc.), because they're not needed.
[...]
Carlos E.R. wrote:
Loop back to my statement that I really don't know how charset is set.
But let's get back to the topic at hand.
Your vi editor will use some windows charset, always the same.
Some unicode. You just have to tell the script to that same code.
And stop playing, stop doing conversions with notepad.
Hi Carlos,
A lot of people on Usenet use only their keyboard to enter their articles.
If all I ever did was type using a keyboard, there will only be about 94 unique characters (letters, numbers, symbols, punctuation, etc.).
As for gVim, I have no idea what the characterset is so I just looked:
:set encoding? ==> this reported "encoding=latin1"
latin1 means: single-byte Western European encoding
:set fileencoding? ==> this reported nothing
nothing means the current file is using the same (latin1) encoding
:set fileencodings? ==> this reported "fileencodings=ucs-bom"
ucs-bom means UTF-16/UTF-32 with BOM (Byte Order Mark)
Apparently a BOM is a small marker placed at the beginning of a text file
that tells software which Unicode encoding the file uses.
So I found my vimrc and changed it to make it a more modern setup.
:echo $MYVIMRC --> C:\users\username\_vimrc
C:\> gvim C:\users\username\_vimrc
set encoding=utf-8
set fileencoding=utf-8
set fileencodings=utf-8,ucs-bom,latin1
Now ":set encoding" reports utf-8 by default, so Latin1 fallback will happen only when gVim thinks it's necessary. Let's see if that helps or not.
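The "UTF-8 interpreted as Latin-1" mojibake mentioned earlier in the thread is easy to reproduce (a quick illustrative check, not anyone's actual pipeline):

```python
# "ä" is the two UTF-8 bytes C3 A4; read back as Latin-1, each byte
# becomes its own character and the text turns into mojibake.
s = "ä"                                         # U+00E4
mangled = s.encode("utf-8").decode("latin-1")
print(mangled)  # Ã¤

# As long as no byte was lost, the damage is reversible:
fixed = mangled.encode("latin-1").decode("utf-8")
assert fixed == s
```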
I think, after a few hours of testing, that I have proven the problem that the OP originally asked about isn't really with yt-dlp but with YouTube
changing how they do things (whack-a-mole as someone described it).
We're all familiar with the whack-a-mole that companies like Google and
Apple play, where our job is to find a way around the next mole to whack.
I think having the latest yt-dlp is key (but it's almost a year old), but also having a JavaScript runtime to run is of critical importance nowadays.
Also having the latest ffmpeg seems to also be critical if we want, in the end, to be able to convert the YouTube video to an MP4 of decent quality.
I'm hoping someone will copy and paste the commands that I put into this thread so that the OP could test it out, so we can confirm that hypothesis.
Have you tried it on Windows or Linux yet?
C:\> yt-dlp -t mp4 https://www.youtube.com/watch?v=BY82T7Q8hiw
cer@Telcontar:~/Videos/tmp> yt-dlp -t mp4 https://www.youtube.com/watch?v=BY82T7Q8hiw
[youtube] Extracting URL: https://www.youtube.com/watch?v=BY82T7Q8hiw
[youtube] BY82T7Q8hiw: Downloading webpage
WARNING: [youtube] No supported JavaScript runtime could be found. Only deno is enabled by default; to use another runtime add --js-runtimes RUNTIME[:PATH] to your command/config. YouTube extraction without a JS runtime has been deprecated, and some formats may be missing. See https://github.com/yt-dlp/yt-dlp/wiki/EJS for details on installing one
[youtube] BY82T7Q8hiw: Downloading android sdkless player API JSON
[youtube] BY82T7Q8hiw: Downloading web safari player API JSON
WARNING: [youtube] BY82T7Q8hiw: Some web_safari client https formats have been skipped as they are missing a url. YouTube is forcing SABR streaming for this client. See https://github.com/yt-dlp/yt-dlp/issues/12482 for more details
[youtube] BY82T7Q8hiw: Downloading m3u8 information
WARNING: [youtube] BY82T7Q8hiw: Some web client https formats have been skipped as they are missing a url. YouTube is forcing SABR streaming for this client. See https://github.com/yt-dlp/yt-dlp/issues/12482 for more details
[info] BY82T7Q8hiw: Downloading 1 format(s): 301
[download] Sleeping 7.00 seconds as required by the site...
[hlsnative] Downloading m3u8 manifest
[hlsnative] Total fragments: 399
[download] Destination: Denys Davydov-@DenysDavydov-Update from Ukraine ? Great! Ruzzian Army Failed Again, USA captured one more Rus Tanker-BY82T7Q8hiw.mp4
[download] Got error: HTTP Error 403: Forbidden. Retrying fragment 1 (1/10)...
[download] Got error: HTTP Error 403: Forbidden. Retrying fragment 1 (2/10)...
[download] Got error: HTTP Error 403: Forbidden. Retrying fragment 1 (3/10)...
... ... ... ... ...
-rw-r--r-- 1 cer users 0 Jan 18 13:39 Denys Davydov-@DenysDavydov-Update from Ukraine ? Great! Ruzzian Army Failed Again, USA captured one more Rus Tanker-BY82T7Q8hiw.mp4.part
-rw-r--r-- 1 cer users 50 Jan 18 13:39 Denys Davydov-@DenysDavydov-Update from Ukraine ? Great! Ruzzian Army Failed Again, USA captured one more Rus Tanker-BY82T7Q8hiw.mp4.ytdl
Thanks for running that test for the OP. It's very illustrative.
1. No supported JavaScript runtime could be found
yt-dlp can't get normal URLs so it has been falling back to HLS.
2. "Some formats have been skipped... missing a url."
YouTube is forcing SABR streaming
3. "YouTube is forcing SABR streaming for this client."
YouTube no longer provides direct MP4/WEBM URLs for many clients.
Only HLS or DASH fragments are available.
Those fragments require JS-executed signature logic.
Without JS, yt-dlp can't compute the signatures -> fragments 403
It seems you may be dealing with the current YouTube "SABR-only" (Segmented Adaptive Bitrate) rollout colliding with an older yt-dlp build and the absence of a JavaScript runtime.
Apparently YouTube is increasingly refusing to serve classic MP4/WEBM URLs and instead forcing HLS fragments that require JS-based decryption logic. Without a JS runtime, yt-dlp can't execute that logic, so it falls back to HLS... and then the fragments 403.
From what I've determined in my hours-long testing sequence, yt-dlp now requires a JS runtime (Node, Bun, or Java) for many YouTube formats.
Without it, yt-dlp can't run the decryption code that YouTube now embeds in JS. When I inserted Node, I avoided that specific error on Windows.
You only have deno, which, apparently, is not fully supported for YouTube's current EJS extraction.
If you want to fix it, you'll need to do what I did, which was install a
real JS runtime such as Node.js, Java (OpenJDK), Bun, or GraalJS.
Worse, apparently Google is changing things daily, so we need to update to the nightly yt-dlp builds to get the latest daily SABR-related fixes.
I also needed to update ffmpeg, which is required for the muxing/merging.
For linux, I think this command will be better than the Windows command:
$ yt-dlp --js-runtime node -f "bv*+ba/b" https://www.youtube.com/watch?v=BY82T7Q8hiw
What we need, if we all want to use yt-dlp moving forward, is for
other kind, helpful people like you to test this for the OP.
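For anyone scripting the test, here is a hypothetical helper that merely assembles the yt-dlp command line used in this thread; actually running it still requires yt-dlp, a JS runtime such as Node, and ffmpeg installed, and note that the warning in the log above spells the option `--js-runtimes` on newer builds, so check your version:

```python
# Build the argv for the yt-dlp invocation discussed in this thread.
def ytdlp_argv(url, js_runtime="node"):
    return [
        "yt-dlp",
        "--js-runtime", js_runtime,       # JS runtime for signature code
        "-f", "bv*+ba/b",                 # best video + best audio, else best
        "--merge-output-format", "mp4",   # merging needs ffmpeg
        url,
    ]

print(" ".join(ytdlp_argv("https://www.youtube.com/watch?v=BY82T7Q8hiw")))
```

Passing the list to subprocess.run() avoids shell-quoting issues with the URL.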
On 18.01.2026 18:06, Adam H. Kerman wrote:
Schugo <schugo@schugo.de> wrote:
. . .
ASCII 7bit (0-127) are identical. Maybe you have to convert ASCII 128-255 to their UTF8 equivalent. Never seen UTF16 or 32 in mail or web.
128-255 are in the 8-bit range and are not ASCII. There are any number
of different 8-bit character sets in use. These must be declared. The
user must tell the system which one was used as there's no way to parse.
It's Extended ASCII. Most likely ISO 8859-1 ("ISO Latin 1"), except when there are characters in the control-codes range, then you can assume
Windows-1252. Cyrillic, Greek or Eastern languages are today all encoded
in UTF-8.
ciao..
Adam H. Kerman wrote:
There were any number of 8-bit character sets.
And a parser can't tell which set, since bytes are just bytes.
Andy Burns <usenet@andyburns.uk> wrote:
Adam H. Kerman wrote:
There were any number of 8-bit character sets.
And a parser can't tell which set, since bytes are just bytes.
Correct. Schugo was wrong. ISO-8859-1 should not be assumed. The user
needs to declare it.
On 19.01.2026 00:13, Adam H. Kerman wrote:
Andy Burns <usenet@andyburns.uk> wrote:
Adam H. Kerman wrote:
There were any number of 8-bit character sets.
And a parser can't tell which set, since bytes are just bytes.
Correct. Schugo was wrong. ISO-8859-1 should not be assumed. The user
needs to declare it.
and when it's not declared? better assume that than display nothing at all...