Show HN: Unfucked - version all changes (by any tool) - local-first/source avail
by cyrusradfar
I built unf after I pasted a prompt into the wrong agent terminal and it overwrote hours of hand-edits across a handful of files. Git couldn't help because I hadn't committed my in-progress work. I wanted something that recorded every save automatically so I could rewind to any point in time. I wanted to make it difficult for an agent to permanently screw anything up, even with an errant `rm -rf`.
unf is a background daemon that watches directories you choose (via CLI) and snapshots every text file on save. It stores file contents in an object store, tracks metadata in SQLite, and gives you a CLI to query and restore any version. The install also includes a UI to explore the history through time.
The tool skips binaries and respects `.gitignore` if one exists. The interface borrows from git, so it should feel familiar: `unf log`, `unf diff`, `unf restore`.
I say "UN-EF" vs. U.N.F., but that's for y'all to decide: I started by calling the project Unfucked and got unfucked.ai, which, if you know me and the messes I get myself into, is a fitting purchase.
The CLI command is `unf` and the Tauri desktop app is titled "Unfudged" (kid-safe name).
How it works: https://unfucked.ai/tech (summary below)
The daemon uses FSEvents on macOS and inotify on Linux. When a file changes, `unf` hashes the content with BLAKE3 and checks whether that hash already exists in the object store — if it does, it just records a new metadata entry pointing to the existing blob. If not, it writes the blob and records the entry. Each snapshot is a row in SQLite. Restores read the blob back from the object store and overwrite the file, after taking a safety snapshot of the current state first (so restoring is itself reversible).
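The hash-then-dedup step can be sketched in a few lines of shell. This is a toy model, not unf's actual code: `sha256sum` stands in for BLAKE3, a directory of files stands in for the object store, and a TSV file stands in for the SQLite table.

```shell
# Toy content-addressed store: blobs keyed by content hash, one metadata row per save.
root=$(mktemp -d)
mkdir -p "$root/objects"

snapshot() {
  file="$1"
  hash=$(sha256sum "$file" | cut -d' ' -f1)
  # Write the blob only if this content has never been seen before (dedup).
  [ -f "$root/objects/$hash" ] || cp "$file" "$root/objects/$hash"
  # Every save still gets its own metadata row pointing at the blob.
  printf '%s\t%s\t%s\n' "$(date +%s)" "$file" "$hash" >> "$root/meta.tsv"
}

echo 'server {}' > "$root/nginx.conf"
snapshot "$root/nginx.conf"
snapshot "$root/nginx.conf"   # identical content: the blob write is skipped

ls "$root/objects" | wc -l    # one blob
wc -l < "$root/meta.tsv"      # two metadata rows
```

Two saves of identical content produce one blob but two history entries, which is what makes recording every save cheap.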
There are two processes. The core daemon does the real work of managing FSEvents/inotify subscriptions across multiple watched directories and writing snapshots. A sentinel watchdog supervises it, kept alive and aligned by launchd on macOS and systemd on Linux. If the daemon crashes, the sentinel respawns it and reconciles any drift between what you asked to watch and what's actually being watched. It was hard to build the second daemon because it felt like conceding that the core wasn't solid enough, but I didn't want to ship a tool that demanded perfection to deliver on the product promise, so the sentinel is the safety net.
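On Linux, that supervision chain might look roughly like the sketch below. This is illustrative only: the unit name, binary path, and install target are assumptions, not the actual packaging.

```ini
# ~/.config/systemd/user/unf-sentinel.service (hypothetical sketch)
[Unit]
Description=unf sentinel watchdog

[Service]
# systemd keeps the sentinel alive; the sentinel in turn respawns the core
# daemon and reconciles the set of watched directories after a crash.
ExecStart=/usr/local/bin/unf-sentinel
Restart=always
RestartSec=2

[Install]
WantedBy=default.target
```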
Fingers crossed, I haven’t seen it crash in over a week of personal usage on my Mac. But I don't want to trigger "works for me" trauma.
The part I like most: in the UI, I enjoy viewing files through time. You can select a time range on a histogram of activity and filter your projects. That has been invaluable in seeing what the agent was doing.
On the CLI, the commands are composable. Everything outputs to stdout so you can pipe it into whatever you want. I use these regularly and AI agents are better with the tool than I am:
# What did my config look like before we broke it?
unf cat nginx.conf --at 1h | nginx -t -c /dev/stdin
# Grep through a deleted file
unf cat old-routes.rs --at 2d | grep "pub fn"
# Count how many lines changed in the last 10 minutes
unf diff --at 10m | grep '^[+-]' | wc -l
# Feed the last hour of changes to an AI for review
unf diff --at 1h | pbcopy
# Compare two points in time with your own diff tool
diff <(unf cat app.tsx --at 1h) <(unf cat app.tsx --at 5m)
# Restore just the .rs files that changed in the last 5 minutes
unf diff --at 5m --json | jq -r '.changes[].file' | grep '\.rs$' | xargs -I{} unf restore {} --at 5m
# Watch for changes in real time
watch -n5 'unf diff --at 30s'
What was new for me: I came to Rust in Nov. 2025, honestly because of HN enthusiasm and some FOMO. No regrets. I enjoy the language enough that I'm now working on custom clippy lints to enforce functional programming practices. This project was also my first Apple-notarized DMG, my first Homebrew tap, and my second Tauri app (first one I've shared).

Install & Usage:
> brew install cyrusradfar/unf/unfudged
Then `unf watch` in a directory. `unf help` covers the details (or ask your agent to coach).

EDIT: Folks are asking for the source; if you're interested, watch https://github.com/cyrusradfar/homebrew-unf -- I'll migrate there if you want it.
I love the website; the design, the video, the NSFW toggle, the simplicity.
I love the idea; definitely something I ran into a few times before and wish I had.
Unfortunately, I am not installing a closed-source daemon with access to the filesystem from an unknown (to me) developer. I will bookmark this and revisit in a few weeks, hoping you'll have published the source by then. :)
Totally understandable.
I didn't open up the source for this as I have a mono-repo with several experiments (and websites).
Happy to open the source up and link it from the existing website.
I've started to have an Agent migrate it out, and will review it before calling it done. Watch https://github.com/cyrusradfar/homebrew-unf
Edit: You can download the current version now: https://github.com/cyrusradfar/homebrew-unf/archive/refs/tag...
I have to agree with the previous user. I'm not brew installing a closed source daemon.
I'd have to imagine that moving this out to its own repo with Claude Code would be trivial so I don't understand the resistance.
This is a great idea. I look forward to seeing a proper repo for it.
I agree. It’s a neat idea and I’d be interested in seeing the details. A downloadable tarball is a lot better than nothing, but it still makes more work to evaluate a random project than I’m inclined to perform. It makes me assume the commit history is ugly in some way (being charitable and assuming the code itself isn’t). Hearing that it’s developed within a monorepo of unrelated projects and experiments isn’t inspiring either. Anyway, perhaps someone else will download the source and report back.
Edit: To be clear, I’m not saying any of those things are true, just that those are the first thoughts I have when someone says their source is open but makes it difficult to view. In this age in which it’s so trivial and commonplace to make source easily viewable.
I don't see the source in their tar archive.
it's just the homebrew cask and recipe.
> Edit: You can download the current version now: https://github.com/cyrusradfar/homebrew-unf/archive/refs/tag...
This does not contain the source.
Agreed on all counts. It looks great! Just can't trust it unless it's transparent.
FYI all Jetbrains IDEs include this, as long as they are open on the codebase. It's called "Local history".
I love to use the terminal, and I still do. But as much as I love to unfu*k my local nvim setup, I'd much rather pay a company to do it for me. Set up vim bindings inside JetBrains and everything comes with batteries included, along with a kick-ass debugger. While my colleagues are fighting opencode, I pointed my IDE at the correct MCP gateway and everything "just works" with more context.
Thought I'd share the data point in support of JetBrains.
On behalf of everyone who dislikes jetbrains business model, I would like to say: duly noted.
What's wrong with their business model? Pay once, get a year of free updates and keep it forever. Runs locally. Want updates? Pay discounted renewal. Seems reasonable. Their AI subscription OTOH needs work. Not worth it yet with how flaky it is.
I paid once in 2012 and now I can't download it anymore. Even if I could, it probably wouldn't work.
Interesting. On https://account.jetbrains.com/licenses/assets I have a "Download" button and "Download a code for offline activation" and "Generate legacy license key" buttons. I figured I could use one of those if I ever decide to cancel my sub, but I admit I have not tested the theory. It's possible your copy is indeed too old.
I think it only keeps history for user-edited files; agent-edited files don't seem to end up in it for me (Claude Code). Maybe it works with other agents given the proper plugins, I'm not sure.
+1 OP here, this is the problem I'm solving for. Agents use tools and may be editing in multiple places; therefore, you need to watch the file system.
vscode and its forks as well (for files it saves)
I have used savevers.vim for many years as a way to recover old versions of files.
https://www.vim.org/scripts/script.php?script_id=89
It is comparatively unsophisticated, but I need it so infrequently that it has been good enough.
I do like the idea of maintaining a complete snapshot of all history.
This is a good application for virtual filesystems. The virtual fs would capture every write in order to maintain a complete edit history. As I understand it, Google's CitC system and Meta's EdenFS work this way.
https://cacm.acm.org/research/why-google-stores-billions-of-...
https://github.com/facebook/sapling/blob/main/eden/fs/docs/O...
This is so cool to have made yourself. How would you compare this to the functionality offered by jujutsu? I love the histogram, it was the first sort of thing I wanted out of jujutsu that its UI doesn't make very easy. But with jj the filesystem tracking is built in, which is a huge advantage.
I'm not a user, but I looked at the site and it looks like jj snapshots when you run a jj command. UNF snapshots continuously.
If an AI agent rewrites 30 files and you haven't touched jj yet, jj has the before-state but none of the intermediate states. UNF* captured every save as it happened, at filesystem level.
jj is a VCS. UNF is a safety net that sits below your VCS.
- UNF* works alongside git, jj, or no VCS at all
- No workflow change. You don't adopt a new tool, it just runs in the background
- Works on files outside any repo (configs, scratch dirs, notes) as it doesn't require git.
They're complementary, not competing.

W.r.t. the histogram, this is my fav feature of the app as well. Session segmentation (still definitely not perfect) creates selectable regions to make it easier, too. The algo is in the CLI as well, for the Agent recap (rebuilding context) features.
To be fair, jujutsu has a watchman feature which uses inotify to create snapshots on file change as well. Your tool probably has a more tailored UX for handling these inter-commit changes, though, so it could still provide complementary value there.
Yes, I was thinking of the watchman integration. And I also really love the DSLs it gives you for selecting change sets and assembling log formats.
One of the use cases on their website is "the agent deleted my .env file."
jj wouldn’t help with that as it would be gitignored.
This tool doesn’t help with that either:
> The tool skips binaries and respects `.gitignore` if one exists.
In today's version of "LLMs allow a person to write thousands of lines of code to replace built-in Unix tools":
cd /your/dir || exit 1
inotifywait -mr -e modify,create,delete . |
while read -r _; do
  git add -A && git commit -m "auto-$(date +%s)" --allow-empty
done
There are 8+ billion people on the planet, computing has been around a while now, and some REALLY smart people have published tools for computers. Ask yourself, "am I the first person to try to solve this problem?" Odds are, one or more people have had this problem in the past and there's probably a nifty solution that does what you want.
OP here - it's hard to attempt to read and respond to this in good faith.
I think it would be dishonest if I didn't share that your approach to discourse here isn't a productive way of asking what insights I'm bringing.
If that's your concern, I agree I can't claim that nothing exists to solve pieces of the puzzle in different ways. I did my research and was happy that I could get a domain that explained the struggle -- namely unfucked.ai/unfudged.io -- moreover I do feel there are many pieces and nuances to the experience which give pause to folks who create versioning tools.
I'm open to engaging if you have a question or comment that doesn't diminish my motives, assume I must operate in your worldview ("problems can only be solved once"), or discourage people from trying new things and learning.
Look, I'm grateful that you stopped by and hope you'll recognize I'm doing my best to manage my own sadness that my children have to exist in a world where folks think this is how we should address strangers.
> assumes I must operate in your world view "problems can only be solved once"
I never claimed anyone else has to agree with this. That's why people are allowed different opinions.

Nobody ought to give a damn what I think; the only opinion that matters about you is your own.
But just like I won't ask you to adopt my view, I also won't go around patting people on the back for TODO apps.
My opinion: people ought to spend more time contributing to solving genuine problems. The world needs more of that, and less "I built a TODO app" or "Here's my bespoke curl wrapper".
This is happening so frequently I just wrote a blog about it to vent my frustration:
https://gavinray97.github.io/blog/llm-build-cheaper-than-sea...
My comment is not meant as a shallow dismissal of the author's work, but rather about what seems to be a growing, systemic issue.
Only works in a git directory, and one might want to use git only for manual version control and another tool for automatic.
Then replace git with "rsync" or "borg". But I don't see how running "git init" in a directory you have "days of work" accumulated in is a sticking point.
Git is a convenient implementation detail.
The core loop of "watch a directory for changes, create a delta-only/patch-based snapshot" has been a solved few-liner in bash for a long time...
Something something Dropbox
There are a huge number of people coming into agentic coding with no real background in software dev, no real understanding of git, and even devs with years of experience will readily reach for convenience and polish even when they could otherwise implement it themselves, see: Vercel's popularity.
Create a branch, squash the branch manually when you want, and merge things.
or `git reset --soft main` and then deal with the commits
or have 2 .git directories. Just add `--git-dir=.git-backups` (or whatever you want to name it) to the git commit.
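That two-git-dirs idea can be sketched like so. It's a minimal demonstration assuming git is installed; the `.git-backups` name, file names, and identity flags are illustrative.

```shell
# A shadow repo in .git-backups records auto-snapshots without touching .git.
work=$(mktemp -d)
cd "$work"
git init -q                         # the repo you manage by hand
git --git-dir=.git-backups init -q  # the shadow repo for automatic snapshots

echo 'fn main() {}' > main.rs
git --git-dir=.git-backups --work-tree=. add main.rs
git --git-dir=.git-backups --work-tree=. \
    -c user.name=auto -c user.email=auto@localhost \
    commit -qm "auto-$(date +%s)"

git --git-dir=.git-backups log --oneline | wc -l   # one snapshot commit
```

The hand-managed history in `.git` never sees the auto-commits, so the usual log stays clean.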
Yep, I’ve needed something like this a few times. Even when trying to be careful to commit every step to a feature branch, I’ve still found myself asking for code fixes or updates in a single iteration and kicking myself when I didn’t just commit the damn thing. This will be a nice safety net.
Thank you! That's great to hear.
I spent a bit of time being baffled that nothing existed that does this. Then I realized that, until Agents, the velocity of changes wasn't as quick and errors were rarer.
Thank you for pointing out a problem that I have (which I do!) and had been solving with Time Machine and by trying to make myself commit more frequently - and for providing a solution! Looks very cool, too. If I close the terminal I started --watch in, will the watch continue?
Writing this, I wanted to ask if the desktop app includes the CLI, but it says so on your website :-) Thanks for thinking ahead so far, but then picking us up here and now so we can easily follow along into an unf* future!
Looking forward to trying it.
Yes, I worked on that a lot, so once you say watch, it watches until you stop it, including through closing terminals, computer power-off, etc. It should restart on reboot, but -- test it yourself and tell me if I'm wrong :)
> unf watch
# reboot
> unf list
It should say it's still watching your directory. If it stays crashed or something else goes wrong, ping me at support at v1.co. Just one human with two machines at my home can't replicate all configurations...
v1.co nice domain!
This is a real problem! Sounds like you and dura landed on similar solutions: https://github.com/tkellogg/dura
Keep it up!
A useful idea!
Alternative - version files and catalog those versions (most of the work, with "Unfucked", appears to be catalog management), building it on top of a Versioning File System.
E.g. NILFS, a log-structured file system that logs every block change (realtime)
more:
- NILFS https://en.wikipedia.org/wiki/NILFS
- topic https://en.wikipedia.org/wiki/Versioning_file_system
haha the NSFW toggle is crazy
Ha, the only feedback I needed :) I spent far too much time on the Unicorn exploding properly...
ZFS snapshots can be used to similar effect, basically for free.
Or btrfs for that matter. I'm doing something similar with btrfs. Used zfs for a while, but the external repositories kept getting out of sync with the distribution kernel, so system updates required manual intervention. That annoyed the heck out of me over time. Switched back to btrfs, which has been working fine for the last year. 10 or so years earlier I still had data corruption and bugs with btrfs.
Don't change-notification-based mechanisms suffer from potentially reading a half-written file? Or do you do something more clever?
There's a 3-second debounce. Don't hold me to that timeframe, that's the default now.
It doesn't read the file the instant the OS fires the event. It accumulates events and waits for 3 seconds of silence before reading. So if an editor does write-tmp → rename (atomic save), or a tool writes in chunks, we only read after the dust settles.
I accept that if the editor crashes mid-write there are cases where you capture a corrupted state, but there was never a good state to save then, so arguably you'd just restore to what's on file and remove the corrupt partial write.
It's not bulletproof against a program that holds a file open and writes to it continuously for more than 3 seconds, but in practice that doesn't happen with text files by Agent tools or IDEs.
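That accumulate-then-flush behavior can be modeled in a few lines of shell. This is a toy sketch, not unf's implementation: events arrive as lines on stdin, and a snapshot fires only after `quiet` seconds of silence (via bash's `read -t` timeout).

```shell
# Debounce lines on stdin: flush once per burst, after `quiet` seconds of silence.
debounce() {
  quiet="${1:-3}"
  while IFS= read -r _; do
    count=1
    # Drain further events until the stream goes quiet (or closes).
    while IFS= read -r -t "$quiet" _; do
      count=$((count + 1))
    done
    echo "snapshot after $count event(s)"
  done
}

# Three rapid-fire saves collapse into a single snapshot.
printf 'save\nsave\nsave\n' | debounce 1
```

A write-tmp-then-rename save shows up as a burst of events, so the reader only runs once the dust settles, as described above.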
Feel free to follow up for clarity.
I have used fossil in a similar way, also local and sqlite-based. Admittedly you have to add files to it first, but setting it running via cron was simple enough. Though it wasn't because I let an AI access all my stuff.
I use ZFS 15-minute snapshots for this. NixOS+ZFS makes this too easy.
Excellent idea. Looking forward to trying it. Any way to install it without brew?
what's your hardware/os?
Debian Linux.
If this is on Homebrew then it is on macOS.
Why not just use Time Machine?
The article explains that.
this seems insanely useful and well thought out. kinda surprised something like it doesn’t already exist. def useful in the age of agents
Great idea, terrible name. Honestly this sort of stuff reinforces the idea that tech types lack social skills and maturity. NB I'm fine with vulgarity in its place (UK Viz reader here), but potentially professional tools aren't that place. Edit: I notice the blueness seems to have been deprecated in naming.
+1 for the open source comments.
In your examples the framing of use cases against agent screw-ups is contemporary and well-chosen.
Best of luck with the project as you make it more usable.
This would be great as a VSCode(ium) extension.
OP here --
I could build an extension for the UI vs a Tauri app, and it could help you install the CLI if you don't have it. Would that meet your needs?
That said, the fidelity of an OS-level daemon can't really be replicated from within an app process.
Some use cases are better served by a system-wide process, I agree, but when I think source code, I think VSCodium. It is about configuration and starting/stopping. I don't mind the browser based web UI, but I do mind having to babysit one more (albeit super useful) tool. I'd rather have it as a VSCodium extension that would AUTOMATICALLY start when I load a workspace, configure the watched directory from that workspace, and stop when I close the workspace. So instead of me spending my attention on babysitting UNF, through VSCodium, UNF would just follow me wherever I go with zero configuration needed.
You really shouldn't need to babysit UNF. It feels like git.
One install, one init, and then it just works. It shouldn't stop across restarts or crashes.
love the idea of this, but echoing others... closed source daemon with access to all files is a 100% non-starter.
OP here, responded on the topic: https://news.ycombinator.com/item?id=47185781
Where is the source? I'm not going to rely on or trust anything this important to code I can't read.
OP here, responded to a similar concern here: https://news.ycombinator.com/item?id=47185781
This is not something I would ever use. The idea of giving a probabilistic model the permission to run commands with full access to my filesystem, and at the very least not reviewing and approving everything it does, is bonkers to me.
But I'm amused by the people asking for the source code. You trust a tool from a giant corporation with not only your local data, but with all your data on external services as well, yet trusting a single developer with a fraction of this is a concern? (:
I don’t think that’s as crazy as you do. Corporations are supposed to have checks and balances in place, safeguards, policies. Individuals might have none of these.
Just use Jujutsu
Is this open source or source available?
OP here, responded to similar question here: https://news.ycombinator.com/item?id=47185781
Why not just fuckin commit!?
You end up with a lot of small dumb commits, and you have to do so manually between nearly every LLM interaction.
I do this but i certainly see the appeal of something better
No, you don't 'have to do so manually'. All agents can run 'git commit' for you. If you end up with too many commits for your taste, squash on merge or before push: `git reset --soft HEAD~3; git commit -m "Squashed 3 commits"`
why did you make it so complicated? magit has a `magit-wip-mode` that just silently creates refs in git intermittently so you can just use the reflog to get things back.
This was designed for any file save.
From what I know (correct me) magit-wip-mode hooks into editor saves. UNF hooks into the filesystem.
magit-wip-mode is great if your only risk is your own edits in Emacs. UNF* exists because that's no longer the only risk; agents are rewriting codebases/docs and they don't use Emacs.
So this is Time Machine, but with extra steps? </s>
OP here - grateful you gave it a look, but I want to clarify that TM can't be used for this use case.
UNF is one install command + unf watch to protect a repo on every file change, takes 30s.
Time Machine snapshots hourly, not on every change, so you can lose real work between snapshots. This may have changed, or I missed something, but I reviewed that app to see if it was possible.
And while tmutil exists, it wasn't designed to be invoked mid-workflow by an agent. UNF* captures every write and is built to be part of the recovery loop.
[flagged]
"Please don't post shallow dismissals, especially of other people's work. A good critical comment teaches us something."
https://news.ycombinator.com/newsguidelines.html
Edit: you did it more than once in this thread - the other case was https://news.ycombinator.com/item?id=47183957. Can you please stop posting like this? It's not what this site is for, and destroys what it is for.
Acknowledged.
[flagged]
Appreciate that perspective and assumed some folks would feel that way.
I am more interested in testing if folks have the problem and like the shape of the solution, before I try to decide on the model to sustain it. Open Source to me is saying -- "hey do you all want to help me build this?"
I'm not even at the point of knowing if it should exist, so why start asking people to help without that validation?
I work(ed) with OSS projects that have terrible times sustaining themselves, and I don't default to it because of that trauma.
Thanks for stopping by.
"Local history" is a very popular feature in the JetBrains IDEs (just search HN comments), and I remember similar tools appearing on HN several times in the past (for example https://news.ycombinator.com/item?id=29784238), so clearly there is demand for such functionality (or at least was in the past, when almost all code edits were manual).
Well, some kind of transparency would be good indeed. Open source doesn't mean open contribution.
OP here, responded to open source q here: https://news.ycombinator.com/item?id=47185781