I think as an industry LLM assisted programming is largely amplifying an existing gap: commercial software is often under much higher speed constraints and much lower correctness constraints than the sorts of software that can be created by deeply understanding a software system.
Obviously this depends on the industry; aviation software that runs in regulated environments is probably under even higher correctness constraints than what Sussman discusses. Accounting software too needs to balance books correctly or risk fines. But most software only needs to be mostly correct. New features and better UX are more useful for the users of many systems than fixing tail-frequency bugs.
Personally I do hobby code on systems I understand from scratch. Writing for older, well documented retro hardware for example is very fun. Writing things from simple abstractions is also fun. But my speed in doing this is something I know is not commercially viable and that's fine by me.
Many fields have a commercial side with a much lower quality bar and a much higher output-speed bar than their hobby equivalents. Wedding photography, voice acting: the quotidian demands on these crafts are far lower than what an Ansel Adams piece in a museum is held to.
> New features and better UX are more useful for the users of many systems than fixing tail-frequency bugs
If nobody notices a bug, does it matter?
There are whole classes of technically-bugs that simply never happen under the real-world operating constraints of a software system. The infamous example is the memory-leak story from the Microsoft blog[1]: you don't need to fix memory leaks on a rocket that goes boom (or runs out of fuel) after five minutes of flight time. Just add the extra memory chip and move on.
Engineering is all about building to real world constraints. Don't build a suspension bridge where a sturdy plank will do.
[1] https://devblogs.microsoft.com/oldnewthing/20180228-00/?p=98...
It's not a bug if someone designed it to work that way though - a no-op allocator is an allocator too, and can also be used on short-running processes (even outside of things that go boom).
It's pleasant to read an article that genuinely seems written by a person; warts and all. It doesn't matter that it repeats some of its points. Actually maybe that's the point. I hope more people try this.
This was the most delightful thing I've read in a long time.
This article helped me understand something I've been grappling with for a while.
I've been looking for the optimal game development environment for a while.
That basically boils down to having batteries included. (I have the opposite of Jonathan Blow's situation; I need to be able to get up and running in a few hours for game jams.)
But there seems to be this tension between convenience and control. Either APIs are low level or they are high level.
(A notable exception is the canvas API, which leaves both groups dissatisfied :)
The article made me realize those are basically two separate groups of people. There's people who want total understanding and total control. And there's people who want to Do Thing With Computer.
I am not sure if it's possible to design one system they would both be happy with.
But that made me realize, I had the same idea about GUIs 20 years ago.
On Mac you usually don't get many options. On Windows you usually get too many options.
A rare few applications let you switch between two modes. "Just Do Thing" and "airplane cockpit". There's usually a gear icon or something like that which shows you all the extra options.
I wonder what that might look like for an API.
"Progressive disclosure" is the name of the UX principle that aims to provide a continuum between the user need for simplicity versus fine-grained control.
Also known as build the cockpit and use it to build things you want to do with the computer
There are a few emotional trigger points that LLMs seem to cause in programmers and this is a common one -- the need for deep, first-principles understanding that LLMs make obsolete.
One thing that gets me in a lot of pieces like this is that they kind of assume people have no agency: that now that these tools exist, we won't be able to help ourselves but use them despite our better judgement.
The broader topic which I don't see discussed so much are values. If you value deep understanding, well you should continue programming and learning in such a way. In some cases you may just want to use a language model to spin up a quick tool or PoC. And there is an entire grey area in between. It's a value judgement to decide what you use LLMs for, as much as what you don't.
> we won't be able to help ourselves but use them despite our better judgement.
Who is "we", exactly? Programmers are very rarely the people making money in businesses that develop software; in fact they typically represent a massive expense. So in a huge portion of cases, "we the programmers" will be told what to do, in the sense that they have to use LLMs to increase their productivity.
When looking at what we tell LLMs to do, you realize there are a lot of cases where humans have less agency than they think.
> There are a few emotional trigger points that LLMs seem to cause in programmers and this is a common one -- the need for deep, first-principles understanding that LLMs make obsolete.
Is it also an "emotional trigger point" that causes people to treat their hunches as facts?
> deep, first-principles understanding that LLMs make obsolete.
I don’t think that’s the case. I agree with the rest of what you wrote. But it’s not a value out of thin air. You need understanding, unless all you ever do is “spin up a quick tool or PoC”. And even then it depends on what you want to quickly use the tool for, or what concept you want to prove.
Idk, as someone who has done LLM-driven development of fairly complex things (type systems, memory allocation gymnastics, etc.), I don't think the need to understand what's going on from first principles has really gone away. If I just want some isolated thing to work I can vibe code with no understanding, but there's no way to get coherence between behaviour, performance characteristics, purity, etc. without fully understanding the problem space. The LLM just saves a shitload of time on grunt work.
Of course if you're building some crud app it's all already tread ground, and you probably can just throw a prompt at an LLM and get something acceptable out.
>Of course if you're building some crud app it's all already tread ground, and you probably can just throw a prompt at an LLM and get something acceptable out.
This is what I think most people who haven't had boring CRUD jobs just don't get: deep technical knowledge goes to waste when all you're struggling with is dumb stuff like bad database design and basic security vulnerabilities everywhere. It was all done by people who are no longer there and were just in it for the paycheck. But nobody who is good takes these jobs, because the pay is too low compared to what they can get elsewhere.
I'm sure all of this is true if you are teaching at MIT or are working anywhere near people who have gone there though.
Feelings aside[1], a large part of it is about having management above you. Take that plus the ever-present online nagging about productivity. If the latter is true then, well, it’s not like there is a choice.
My degree is in math, I love Dijkstra, and I think a lot of my colleagues have often created more work than necessary for themselves by treating pieces of code empirically when they could have got a more precise understanding by spending an hour reading it carefully.
However, I think the most fascinating thing about Dijkstra is how wrong he turned out to be in his prediction that an empirical approach would not scale.
I suspect that approaching programming like Dijkstra might have paid off long-term, but it was rarely a good deal in the short term, both for bad reasons (the empirical approach is a quicker and cheaper way to create buggy software that we can sell and claim as achievements on our performance reviews) and valid reasons (the unreliability of humans and hardware ultimately forces us to approach real computer systems, which are always a composite of hardware, software, and humans, empirically anyway.)
As they say in a somewhat different context: worse is better
I have experienced that people who understand the problem and simply solve it do not pass interviews. I've experienced this from both sides of the interview and I was just as confused by my fellow interviewers as I was by the companies I interviewed at. It seems like companies would prefer you to wrap everything in classes, spread the code across several files, and make N+1 copies of data, instead of writing 50-100 lines to actually solve the task.
When documentation is lacking, if the third-party library is source-available, then reading the source code is my next step. This almost never fails me. Source code is usually understandable; generally speaking, many brilliant people have put in decades of effort to make source code and the tools we use that way. I've long considered the ability to understand source code by sitting down and reading it the most important asset a developer could have. Given that, I wonder how the latest trend seemingly against understanding source code will turn out. Maybe it will turn out to be less important, who knows. But it is really weird to live through.
Access to source code is enormously useful for understanding the behavior of any API you have to call.
I feel weirdly fortunate I came up working on open source CMSes in the late 00's --- the docs were typically woefully inadequate, but it taught me to read source and then drop debuggers and trace to confirm my theories about how the source behaved. Often when I see programmers hit a wall, they are for some reason stuck on poking at the underlying code like a black box instead of just trying to understand it. And understanding complex code is hard intellectual work, but to me it's always been more fun than the guess and test of pretending the source is closed.
This excellent article reminded me a lot of something I tried to get at a while back:
https://journal.stuffwithstuff.com/2010/11/26/the-biology-of...
When I wrote that article, it didn't seem to resonate with anyone at the time. I've been thinking about it more lately in the era of LLMs.
i've been thinking about this a lot the past few days, particularly about how coding agents can _help_ with understanding
i've been using opencode/opus to help with debugging lately, and it (he?) will happily dive into the source code of a dependency, the dependency's dependencies, and the C code that it's binding to, all the way down to reading the libusb driver code and explaining what is going on where
whether or not i could have figured that all out on my own is beside the point; i wouldn't have taken the time on a tight deadline to dig in deep. i would have done some poking and experiments and shipped a hacky workaround
The Android example is a bummer. After this many years and generations of GUI frameworks, you should not have to experiment nor dig into source code to learn how to do something as simple as laying out your widgets.
The docs should have examples for that kind of thing.
Note that the author is describing a custom View subclass, which is a couple of steps beyond "laying out your widgets". There used to be pretty comprehensive examples on the developer site for how to do View invalidation properly (calling requestLayout willy-nilly isn't always it).
Nowadays, they learned from many years and generations of GUI frameworks and... made a new GUI framework, Jetpack Compose UI.
I can't imagine programming without understanding, aka vibe coding. Hence I will never vibe code.
The two don’t have to be mutually exclusive. You can let the agent code and you review it, or vice versa. No different from being a team lead where you don’t write all the code, or even review each and every line of code, but you have a very firm grasp of the code base.
on the contrary, reviewing LLM code and human code is very different. LLMs don't learn. if a human makes a mistake i can teach them to avoid that mistake in the future. if an LLM makes a mistake, all i can do is fix it over and over again. the dynamics are fundamentally different. some people may prefer to work with a machine, but i don't, i prefer to work with humans.
for me this is similar to the difference of using FOSS vs closed source software. if there is a problem on my linux machine, i can potentially fix it, on windows or mac i just can't.
both closed source software and working with LLMs make me feel helpless. whereas using FOSS or working with humans is empowering.
i get that not everyone feels that way, and that's fine. for my part i'll just stay away from LLM generated code.
In theory, vibe coding and understanding don't have to be mutually exclusive, but in practice, I think that the people who have the discipline to actually maintain their understanding of a codebase are few. I've code reviewed things from people who claim they are reviewing what comes out of the LLM carefully, and talked to them about the code, and while they think they understand the code, they simply don't, which becomes abundantly clear when I try to explain the problems I find in the code.
What do you do to learn a new programming construct? What did you do to learn programming? Didn't you write
    #include <stdio.h>

    int main() {
        printf("Hello World");
        return 0;
    }
while having no idea what 'stdio.h' is? When you first learn anything you don't really understand it; that takes longer. When you learn woodworking you won't know why you have to hold the saw a certain way. When you learn chemistry you don't know how we know atoms exist. You start by doing the fun stuff and fill in the gaps later.
You can use vibe coding for learning. It is very effective.
Choosing not to know what stdio.h means is willful ignorance; an LLM has little to do with that chosen ignorance. It is a choice, made because "hey, it works on my machine!" and, when I pushed it, nobody seemed to mind.
What a time to be alive. Actively choosing to reject knowledge because "what the fuck does it matter anyways".
Funny you should mention hello world. Kernighan and Ritchie presented it in TCPL as a little anatomical diagram of close to the smallest possible functional C program with the different parts labelled. The first line is labelled "include information about the standard library". What this means in detail is explained in that chapter. Furthermore, if you were compiling on a Unix system, stdio.h was readily available as /usr/include/stdio.h. Curious people could open it up using more or vi and see what was inside. There was no shortage of curious people back then.
The process of "going through the motions" of writing and compiling a program without even a small understanding of what it all meant was a later innovation, perhaps done as a classroom exercise in an introductory CS course for impatient freshmen or similar.
Not in my version: https://pasteboard.co/DWUSR9qHf8It.jpg
no. it was the first question I asked and was given a satisfactory explanation (along the lines of, "this adds things to your program that help it write text to the screen.")
That's not even remotely satisfactory if we're talking about understanding what we're doing
-- me, two months ago
After vibe coding: I can't understand how I could deal with coding before.
I was relieved to read (skim) through all of that without it reaching some LLM conclusion like, “of course we have a better vantage point to understand now than thirty years ago... the LLMs can understand for us”.
In the world that the AI bros want for us, understanding has become a hobby.
The problem with that is - AI isn't a developer.
By that I mean, it's fabulous for taking the input I give it, processing it, and returning a collection of tokens that it has found in its training data that do what is being asked.
It's regurgitating fragments of prior work. I have zero complaint about that; it's just that, as a developer, you now need to understand what those fragments combined do, and whether that really fits your actual desire or not.
To put it into old people's terms "You got the answer from Stack Overflow? Was that the code from one of the answers.... or the question?"
Yep, it is a luxury. Nowadays I use AI for work and my productivity increases. However, I don't learn much from the tasks, because I get more of them now that the team is down to half its size. Understanding is a luxury now.
Interesting piece. Reading it I found myself skimming; the same point was being made repeatedly, albeit showing that as the author grew as a developer, so did the complexity of the software they were using.
One thing screamed in my head while reading it, though. Yes, we can go look at the library being used, read the code, and understand what it's actually doing (this is one of the reasons I like Go so much; no matter who the upstream author is, it's generally clear what they're doing, though there are always going to be authors that obfuscate the f*ck out of code, no matter the language). But then there are systems like Netflix: hundreds of microservices running together in ways that people have NFI what it's all doing.
It just doesn't fit into one person's head anymore.
So while a single head can manage the data pathways for some subset of the overall system, and might even get right down to the metal, the sheer size of the system means they only have a partial view, and abstractions (in the form of C4 diagrams) only show how complex the beast has become.
> It just doesn't fit into one person's head anymore.
True, but I think it doesn't have to, at least not everything at the same time.
You can still usually understand the ins and outs of a specific component/service/module/etc with some time - e.g. if you have to develop or maintain that component.
Alternatively, you can also try to understand certain data or action paths throughout all components of the system - that's what OP did with the layout bug: They were trying to understand how Android's relayouting logic worked, so they managed to get a mostly complete picture of all the pieces that are involved in that specific functionality. But they probably didn't bother to learn the rest of Android's UI renderer or other unrelated components with the same thoroughness.
I think this kind of "selective understanding", where you make conscious decisions about which parts you want to understand and which you treat like a semi-predictable black box, works well in practice.