notes on software development and ai

Just some random notes on the use of large language models to assist in coding. I have not worked with an agent so I cannot really comment on that. I like to interact with the text, the actual characters, that form a program. I am not that fond of verbose programming languages and I am not that interested in getting help writing the code. It would remove some of the fun and satisfaction of coding. I have used some AIs as better search engines and that has been quite useful for some purposes and quite useless for others. I don’t really mind people using tools to generate and write code. If they like it, good for them I guess. I use some myself when working in verbose languages like Java or Kotlin. Things I do to pay the bills… So on to some observations.

Increased speed

There seems to be a general opinion that software development needs to be done faster and that it would be good if coders could speed up a bit. I think this is a mistake. Most organizations that have been around for more than 3-5 years have systems they call legacy - systems that are hard to maintain and evolve. This typically happens when a series of initiatives - often with different sets of developers - has enhanced the system with new features. (Also note that when I write systems I mean it in an abstract way. It can just as well be a forest of microservices as a monolith. It will just be a different kind of mess.) What if the developers of these legacy systems had moved a bit slower and made sure that the code was of high quality, well tested and maintainable? Perhaps the need to replace that legacy system wouldn’t be as high. I would argue that moving a bit slower and with quality in mind will make us move faster in the long run. Also - when you think about it - large language models are trained on the code that makes up the software we have today, so using them will likely just produce more systems of the same kind. Which leads me on to…

How does this really work?

I am no expert in large language models but I understand that they need to train on real things to produce reasonable outcomes. So if you get help writing some code it will likely be similar to code that has already been written by a bunch of developers somewhere (who will not get any credit for this). I guess this is fine if you want results fast, but for any organization that wants to create an edge over competitors or just do something unique, that won’t happen this way.

Who is responsible?

It doesn’t feel right to me that people are shipping code that they could not have written themselves. Who is responsible for that piece of code? When it doesn’t work there needs to be a human taking responsibility for it. I don’t want to do that with code I don’t understand. (In a distant future with sentient AIs this may be different of course. Then they can have legal status of their own.)

Ending notes

It may sound like I am complaining but I am not. I just see that - with or without the help of LLMs - we are going to keep writing mediocre systems in the future as well. For me this is great - I don’t mind messing around with legacy code - I find it kind of fun actually. I also don’t mind having a job for some years more. What we could do, instead of trying and failing to replace ourselves with LLMs, is to actually raise the level of abstraction of programming languages. That would increase both speed and quality at the same time.
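
To show what I mean by abstraction level, here is a minimal, made-up Kotlin sketch (the Order type and the function names are hypothetical, just for illustration). The first version spells out how to loop; the second only states what we want:

    // Hypothetical domain type, only for this example.
    data class Order(val amount: Double, val paid: Boolean)

    // Lower level of abstraction: manual iteration and mutable state.
    fun totalPaidImperative(orders: List<Order>): Double {
        var sum = 0.0
        for (order in orders) {
            if (order.paid) {
                sum += order.amount
            }
        }
        return sum
    }

    // Higher level of abstraction: declare the what, not the how.
    fun totalPaidDeclarative(orders: List<Order>): Double =
        orders.filter { it.paid }.sumOf { it.amount }

Both compute the same thing, but the second leaves the mechanics to the language and its standard library. That kind of lift, applied broadly, is the speed and quality gain I am after.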

Over and out for now.

written by fredrik at 2025-09-22
