One of the best open-source tools out there. I'm a frequent user of Plex, Jellyfin, Tunarr, local music files, etc. I use it weekly to extract subtitles, trim videos, convert music formats, and remove audio tracks. After writing the previous paragraph, I realized I've never donated to the project; it's time to change that.
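For anyone curious, those everyday tasks are all one-liners. These are the shapes I reach for (file names and stream indices here are hypothetical; adjust `-map` to your own streams):

```shell
# Extract the first subtitle stream (assumes the input actually has one)
ffmpeg -i movie.mkv -map 0:s:0 subs.srt

# Trim 30 seconds starting at 1:00 without re-encoding (cuts land on keyframes)
ffmpeg -ss 00:01:00 -i movie.mkv -t 30 -c copy clip.mkv

# Convert a music file to MP3
ffmpeg -i song.flac -b:a 320k song.mp3

# Remove all audio tracks, keeping the video as-is
ffmpeg -i movie.mkv -an -c copy silent.mkv
```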
It's a lower-level component in so much stuff we're not even aware of.
Yep. Which is a great architecture IMO. Simple, performant and flexible: choose 3
As an FFmpeg API user (e.g. through libavcodec etc.) I would definitely not say "simple". It's constantly breaking stuff and deprecating features from one version to another, and basically requires reading the source constantly to be sure of what's happening and of which backend / API each function can operate on. Just today, when I was trying to implement Vulkan video decode in ossia score (https://ossia.io):
Copy data to or from a hw surface. At least one of dst/src must have an AVHWFramesContext attached.
int av_hwframe_transfer_data(AVFrame *dst, const AVFrame *src, int flags);
Well, unlike what the very first sentence of the comment block hints at, it is actually only implemented for host<->device copy, not device<->device, on many backends. That said, when it works, it's really great.
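For what it's worth, the CLI side of the same machinery does work once the backend cooperates. A hedged sketch of Vulkan decode with the download back to system memory (assumes a Vulkan-capable driver and that the input codec is supported by the hwaccel):

```shell
# Decode on the GPU via Vulkan, then hwdownload the frames for a software encoder
ffmpeg -init_hw_device vulkan=vk -hwaccel vulkan -hwaccel_device vk \
  -hwaccel_output_format vulkan -i input.mp4 \
  -vf "hwdownload,format=nv12" -c:v libx264 output.mp4
```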
FFmpeg is a prime single-block-everything-is-built-on xkcd example.
It's a sibling to curl in that way
Big difference for ffmpeg especially (but I imagine for curl too): it's not just one guy in Nebraska. Seems to have a very healthy community of devs involved in it.
Well, I don't know how it is these days, but it hasn't always been describable as "healthy".
> prime single-block-everything-is-built-on xkcd
Sometimes I wonder if we can vanish one single project out of existence instantly, which one would cause the most chaos.
It looks like the EU compiled a list of contenders.
https://interoperable-europe.ec.europa.eu/sites/default/file...
> Oh there's a new version of ffmpeg, I'll just quickly build it from source... no I can't wait I'll download the binary
I tend to build ffmpeg from source because package managers don't usually include support for patented codecs.
(Yes I know there are repos to get binaries for some, things like deb-multimedia.)
Building ffmpeg can be simple or complex, depending on how you configure the dependencies, whether it's dynamic or static, and of course its target outputs.
I'm currently working on a cross-platform builder that runs within Github Actions runners, but the Mac and Windows builds take up so many of my monthly minutes.
https://github.com/video-commander/ffmpeg-builder
I'm using this as part of another multimedia app I'm working on for video engineers.
I had to build from source because of that CVE that dropped. Couldn't do it, so I just wrapped the whole thing and injected my own -version command; it passed the scanners cleanly.
For anyone vaguely familiar with ffmpeg, don't sleep on this video. Quite funny, and everything from `yadif` (which I dealt with today!) to mkvtoolnix to "But then it will explode if you have an apostrophe in your file name. Because it doesn't understand that."
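Since `yadif` came up: the standard deinterlace-while-transcoding shape, for anyone who hasn't needed it yet (file names hypothetical):

```shell
# yadif outputs one frame per frame by default; yadif=1 emits one frame per
# field instead, doubling the frame rate
ffmpeg -i interlaced.ts -vf yadif -c:v libx264 -crf 20 -c:a copy progressive.mkv
```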
That entire channel is a goldmine of prescient industry humour. I don't know where he gets all of his material from to be honest.
Building ffmpeg itself from source is actually quite easy.
The hardest part IMO is getting the necessary codecs to work; this can take a little while. If you know what audio and video codecs you want and need, and if you get them installed properly, then compiling ffmpeg is really simple and straightforward. It almost always works for me, and I have compiled ffmpeg from source for 10+ or even 15+ years.
For reference purposes, my current configure options are:
./configure --prefix=/usr/ --enable-gnutls --enable-gpl --enable-libmp3lame --enable-libaom --enable-libopus --enable-libspeex --enable-libtwolame --enable-libvorbis --enable-libvpx --enable-libxvid --enable-libx264 --enable-libx265 --enable-nonfree --enable-pthreads --enable-shared --enable-version3 --extra-libs="-ldl" --disable-doc --disable-libopenjpeg --disable-libpulse --disable-static
Probably more codecs could be added, and some options may not be necessary anymore (I last changed this ... years ago, too), but this works fairly well for the most part.

One focus I have is mostly on a few .mp4 files, and for those I think you kind of want x264, x265 and so forth (I think one more codec from Google too, or so). But it is really quite trivial once you are past the codecs step. You can also start simple with just a few codecs, e.g. one good audio codec and one good video codec. One reason I like to have ffmpeg support many codecs is so I can use mpv, which in itself is really awesome; I like it more than vlc, which is also ok though.
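For completeness, the whole build around a configure line like that is the usual three steps (install prefix and job count are per-machine choices):

```shell
# Fetch the source, configure with whichever codecs you settled on, build, install
git clone https://git.ffmpeg.org/ffmpeg.git
cd ffmpeg
./configure --prefix=/usr/local --enable-gpl --enable-libx264 --enable-libmp3lame
make -j"$(nproc)"
sudo make install
```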
quite easy? like learning to draw an owl?
Getting the stock ffmpeg to compile/build might be "easy", but once you start adding additional codecs and other features that get you into dependency nightmares, "easy" is not the word I would use. I have not been able to use the stock ffmpeg since forever. For example, I see no openssl enabled in your config. I see no freetype. I see you've disabled openjpeg. Clearly, you and I use ffmpeg differently, which just goes to show your "easy" is very misleading.
"fill in the rest of the details" is the politest way I've ever seen this, and it really lames it up.
it's easy just /etc/init.apt-get/frob-set-conf --arc=0 - +/lib/syn.${SETDCONPATH}.so.4.2 even my grandma can do that
emerge ffmpeg ;)
Changelog:
ffprobe -codec option
EXIF Metadata Parsing
gfxcapture: Windows.Graphics.Capture based window/monitor capture
hxvs demuxer for HXVS/HXVT IP camera format
MPEG-H 3D Audio decoding via mpeghdec
D3D12 H.264 encoder
drawvg filter via libcairo
ffmpeg CLI tiled HEIF support
D3D12 AV1 encoder
ProRes Vulkan hwaccel
DPX Vulkan hwaccel
Rockchip H.264/HEVC hardware encoder
Add vf_scale_d3d12 filter
JPEG-XS parser
JPEG-XS decoder and encoder through libsvtjpegxs
JPEG-XS raw bitstream muxer and demuxer
IAMF Projection mode Ambisonic Audio Elements muxing and demuxing
Add vf_mestimate_d3d12 filter
xHE-AAC Mps212 decoding support (experimental)
Remove the old HLS protocol handler
Vulkan compute codec optimizations
swscale Vulkan support
LCEVC metadata bitstream filter
Add vf_deinterlace_d3d12 filter
ffprobe: only show refs field in stream section when reading frames
ProRes Vulkan encoder
LCEVC parser
LCEVC enhancement layer exporting in MPEG-TS
TIL: JPEG XS - an image and video codec that offers both visually and mathematically lossless quality for low latency implementations.
Additionally, JPEG XS compressed content is indistinguishable from the original uncompressed content.
I've had great results using JPEG-XS to transport video for colour grading in feature film & TV post production. At a 3:1 or 4:1 compression ratio it is effectively lossless.
It is patent-encumbered though, you have to pay license fees to deploy it.
Isn't the point of JPEG to have lossy compression for your photos that still looks fine? As opposed to something like PNG, which has lossless compression
"JPEG" is short for Joint Photographic Experts Group, an ISO/ITU group that creates a lot of imaging standards. The JPEG image format you're thinking of is only one of the formats they've created.
The Joint Photographic Experts Group manages many standards, generally each called "JPEG [something]". The one we most commonly call "JPEG" is just one of them.
Comment was deleted :(
Reading that it looks like the point of JPEG-XS is to have near-lossless compression for raw photo and video data while having extremely high throughput.
JPEG XS supports either near lossless or truly lossless encoding depending on encoder configuration.
> Additionally, JPEG XS compressed content is indistinguishable from the original uncompressed content.
It can be indistinguishable, as long as you stick with lossless or very low compression ratios. It falls apart at typical JPEG XL compression ratios.
Not royalty free, unfortunately.
We use JXS when latency is critical. Most h264/h265 decodes will have a 10-frame glass-to-glass delay; JXS drops that to 3 or 4, at a cost of bandwidth (our UHD JXS streams are 1.5 Gbit rather than 200 Mbit for HEVC).
That's pretty depressing to read. x264 was handling the encoding side with sub-frame latency 15 years ago, and sub-frame decoding is significantly easier. "with --tune zerolatency, single-frame VBV, and intra refresh, x264 can achieve end-to-end latency (not including transport) of under 10 milliseconds for an 800×600 video stream"
But for some reason you can't make use of that and have to burn bandwidth instead.
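For reference, the settings quoted there map onto ffmpeg's libx264 wrapper roughly like this (a sketch; the bitrate/buffer numbers and destination are illustrative only):

```shell
# Low-latency x264: no lookahead/B-frames via zerolatency, periodic intra
# refresh instead of keyframes, and a VBV buffer sized to about one frame
ffmpeg -i input.mp4 -c:v libx264 -tune zerolatency -preset ultrafast \
  -x264-params "intra-refresh=1:vbv-maxrate=4000:vbv-bufsize=200" \
  -f mpegts udp://127.0.0.1:1234
```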
A small part of the end-to-end process
https://www.obe.tv/how-to-lie-about-latency/
Bandwidth is cheap -- basically free, especially at this bitrate.
In theory it's a small part. But if you got that many frames of latency difference by changing codec, then it wasn't being a small part.
It's not that you should have gotten a magical 10ms latency glass to glass, it's that you should have been able to get 4 frames latency on h.264. But something prevented that, so I'm sad about it.
(And if you say the bandwidth was fine in your situation I won't argue, but using more than a gigabit extra is not usually thought of as free.)
Yeah, we've been deploying JPEG-XS for high bitrate streaming for a while.
A lot of our customers are moving their grading systems into data centres and streaming the images over IP back to their grading suites.
I've got it down to less than 1 frame for encode-transport-decode, but you've still got to copy the image to an SDI card and wait for that to clock out.
> gfxcapture: Windows.Graphics.Capture based window/monitor capture
> This source provides low overhead capture of application windows or entire monitors. The filter outputs hardware frames in d3d11 format; use hwdownload,format= if system memory frames are required.
This would strongly alter my plans if I were to develop an OSS Discord alternative. Chromium originally looked like a better core to start with largely due to its mature screen capture API. WebRTC is the other big thing, but there are other ways to do that. Native desktop apps (i.e., not browser based) are beginning to look much more compelling to me now.
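If it's exposed like other lavfi sources, the capture-to-file pipeline might look roughly like this. The source syntax and the pixel format are my assumptions, not the documented interface; only the `hwdownload,format=` step comes from the docs quoted above (check `ffmpeg -h filter=gfxcapture` on an actual build):

```shell
# Assumed invocation: pull d3d11 frames from the capture source, download them
# to system memory, then hand them to a software encoder
ffmpeg -f lavfi -i gfxcapture -vf "hwdownload,format=bgra" -c:v libx264 capture.mp4
```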
If you were doing this, consider cribbing from https://github.com/obsproject/obs-studio/tree/master/plugins... which offers a variety of solutions including some rather exciting looking process injection (called "game" there).
I wonder if "entire chat app functions as OBS plugin" would work? Would solve the AV streaming side of the functionality.
You could always use Windows.Graphics.Capture directly.
Wait, are you https://www.gyan.dev/ffmpeg/builds/ ?
What I want to know is how many of these were written and/or debugged using AI tools, and which ones? Using which workflow?
Because that's an actual project, with countless uses, on countless machines.
Show me the AI. I want to see what AI has generated in those.
(btw I pay religiously my Claude Code subscription plan)
Khronos published a post on the Vulkan compute codecs in FFmpeg: https://www.khronos.org/blog/video-encoding-and-decoding-wit...
Is there a 'HW' guide to show the expected performance of Vulkan compute codecs anywhere?
Just realized they tag the releases with great names in math/computing. Very cool.
I'm in the process of adding bidirectional text to bitmap subtitle conversions with Claude Code:
https://connollydavid.github.io/pgs-release/
such a fun project!
Comment was deleted :(
I wouldn't get too excited about Rockchip hw encoding. It's rkmpp based, not an upstream solution. You'd need a Rockchip kernel for this, I guess.
It's still a big deal; you had to compile ffmpeg yourself before.
How much of this release was done by corporate/big tech employees?
I have no idea how much they contribute back, but pretty much every (big) tech company that does any media transcoding uses ffmpeg.
FFmpeg is really great. The only wish I'd have is for the usage to become simpler - both for regular stuff, but also for advanced filtering.
If anyone remembers, avisynth was pretty cool back in the day. You could script video/audio manipulations, a bit like a UNIX/Linux pipe, but kind of simpler, in my opinion. FFmpeg allows many similar operations, but remembering anything here is ... hard. I'd love for the whole usage API to become much simpler, but it seems nobody on the ffmpeg dev team is considering this. :(
I can't be the only one with that wish though ...
It does not diminish ffmpeg being so great in general, but I think it could be better.
Once I got over the "-filter_complex is, well, complex" phobia, it isn't that bad. The command line makes it look daunting, to be sure. But thinking of it, as the name suggests, as "filter chains" makes it less daunting. It is still cumbersome, as everything needs to be in the command.
Debugging commands gets hairy the more complex they get but you'll get muscle memory on how to search/replace to make line breaks to make it easier, similar to breaking up gnarly SQL. The worst part about debugging is the error messages can be misleading when it interprets the filter chain incorrectly because of some issue you've typoed in there somewhere. Even those start to become recognizable as "it thinks this, which means I probably messed up this other thing instead". To be fair, I work with ffmpeg daily using some commands that make your eyes bleed. So for someone using it every now and then, the practice from repetitive use just takes longer.
Also, saving things as shell scripts helps a lot. A simple script that does the same thing with a few adjustable params can be done with $1, $2 usage or even cleaner with getopts. You can then change small things within a tested complex command.
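A minimal sketch of that pattern: a hypothetical wrapper where only the small knobs vary and the already-debugged chain stays in one place (the `echo` makes it a dry run; drop it to actually execute):

```shell
#!/bin/sh
# $1 = input, $2 = output, $3 = target width (defaults are placeholders)
in=${1:-input.mkv}
out=${2:-output.mkv}
w=${3:-1280}

# The gnarly, tested part lives in one variable; -2 keeps the aspect ratio
chain="[0:v]yadif,scale=${w}:-2[v]"

echo ffmpeg -i "$in" -filter_complex "$chain" -map "[v]" -map 0:a? -c:a copy "$out"
```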
Comment was deleted :(
One of the best uses of LLMs is to help find the right command line options for tools like ffmpeg.
Prior to LLMs, I made an ffmpeg command line builder. It definitely doesn't cover everything, but handles simple common tasks quite well.
It won't always give you a perfect answer the first time, but it's much better than memorizing the manual or interpreting a forum discussion. Haven't used it for ffmpeg, but lots of other command lines.
I find it helps if you paste in the ffmpeg manual and get the AI to use that as a source. Helps it stick to real params.
Because ffmpeg is built on the Unix chained-utility philosophy, I find AI is also good at building scripts that use it.
I would far rather look at the manual or a forum discussion, because then I know I'm getting something real. With LLMs, odds are decent that I'm getting something which doesn't actually exist, but it sure would be nice if it did.
I've had someone post a problematic ffmpeg command into a prompt to ask why it wasn't working. It didn't work so well. By the time that someone rejiggered their prompt, I had found the issue.
If you're just looking for an easier solution for encoding video ("regular stuff") then Handbrake is the go-to tool for that.
If there’s one thing I’ve entirely handed over to our AI overlords it’s the ffmpeg command line.
Comment was deleted :(
[flagged]
nice job