You don't need a 4.0 to graduate. And even if you got one, a lot of grades are based on tests, not projects. You could just memorize your way through if you were dedicated enough.
It's not really that hard to get a degree in engineering if your only goal is the degree itself.
That does seem to depend on countries and universities.
I do have to say I was appalled by some of the tests I had as an exchange student in the US (I won't name the university in question, but it's ranked around 60 in the US rankings). I remember a computer graphics test where a lot of questions were of the type "Which companies created the consortium maintaining the OpenGL specification?"... it was fully possible to obtain a passing grade just by rote memorization of facts. So I have no trouble believing that in the US it's possible at some unis to get a software engineering degree without understanding or critical thinking.
Mate, have you never had to deal with over-confident graduates who think they've got the complete answers, but, in reality, they only have a sliver of the whole picture in their minds?
Grade inflation and schools passing kids who should fail to game metrics and keep collecting student loans is a problem. I wouldn't consider hiring anybody from my alma mater who didn't score a standard deviation or more above the mean on the tests.
Unis are, imo, irrelevant in the context of software production. I'd take someone who didn't finish or dropped out, provided they can answer the question below.
The only thing worth asking people is: what have you produced? Within this one question is so much detail that any other artifact is moot.
>Unis are, imo, irrelevant in the context of software production. I'd take someone who didn't finish or dropped out, provided they can answer the question below.
What you'd take is irrelevant if the HR/recruiter doing the initial screening of resumes is looking at an oversupply of candidates with degrees.
Hiring is broken in many ways. Candidates without degrees are faring even worse now at the initial recruiter screening stage due to the poor market.
In my EU country, academic inflation is so bad, due to free education and everyone being psyopped onto the path of academia, that not having an MSc is basically a red flag to companies when applying for a SW job: most candidates have one, which means you're expected to have one too if you want to get a job.
OP should have put "engineers" in double quotes. Many software developers like to describe themselves as engineers although they don't have an actual engineering degree. A lot of software development resembles plumbing more than engineering, so most devs don't really need an engineering degree anyway, but they should be more honest about what they're actually doing and not try to elevate themselves with fancy titles.
You are, of course, right that the idea that someone could finish a serious engineering degree without being able to think is ridiculous.
Most of the "Software Engineering" curricula I've seen is catered towards "getting a job as a programmer", and is mostly focused on languages, frameworks and outdated processes.
As an engineer in another discipline, there's no engineering there.
I would rank like this: Computer Science > Self Taught > Software Engineering.
I even saw a "post-grad in React" lately.
Backend-wise it's the same; it comes and goes with fashion and with whatever the companies that have influence at the university recommend.
I remember people in college bragging that they're learning Angular. I was like, is this an engineering or physics thing, angular dynamics? No, it's a web framework with a ton of boilerplate that my LLM deals with now.
Well idk what an actual software engineering program would teach that you can't learn better on your own or on the job. Formal CS education teaches things that simultaneously help with the job and also can't be learned there. But some people just don't have grit, whichever path they took.
I graduated in 2006 in CS, and I had at least 5 or 6 software development classes. We also had electives, which included DB design and algorithms. Many of the higher-level classes allowed us to use any language of our choice as well.
I was self-taught since I was 15, so most of these classes were just review for me. I met lots of people that didn't know how to code as seniors (and never ended up getting a job in their field).
Yep. Way more than half of the people I interview can't even do a very basic FizzBuzz, even with guidance. Those are people with a degree, job experience and reference letters.
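For anyone who hasn't run into it, FizzBuzz is about the smallest screening exercise there is, which is what makes that failure rate so damning. A minimal sketch in C (one of many acceptable variants):

```c
#include <stdio.h>

/* FizzBuzz: print 1..100, substituting "Fizz" for multiples of 3,
   "Buzz" for multiples of 5, and "FizzBuzz" for multiples of both. */
int main(void) {
    for (int i = 1; i <= 100; i++) {
        if (i % 15 == 0)
            printf("FizzBuzz\n");
        else if (i % 3 == 0)
            printf("Fizz\n");
        else if (i % 5 == 0)
            printf("Buzz\n");
        else
            printf("%d\n", i);
    }
    return 0;
}
```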
A degree is passing the test. Not all degree programs get into more advanced topics nor do they necessarily require that someone is able to work through how to solve a problem that they haven't seen before.
--
A lot of students (and developers out there too) are able to follow instructions and pass the test.
A smaller portion of them are able to break a task down into "this is what I need to do to accomplish it".
Even fewer are able to work through identifying the cause of a problem they haven't seen before and figure out what the solution to that problem is.
--
... There are also a lot of people out there that aren't even able to fall into the first group without copying and pasting from another source. I've seen the "stack sort" (https://xkcd.com/1185/, https://gkoberger.github.io/stacksort/) at work, professionally. People copying and pasting from Stack Overflow (back in the day) without understanding what they're writing.
Now, they do it with AI. Take the contents of the Jira description, paste it into some text box, submit the new code as a PR, take the feedback from the PR and paste it back into the box and repeat that a few times. I've seen PRs with "you're absolutely correct, here are the updates you requested" be sent back to me for review again.
This is not a new thing. AI didn't cause it, but AI is exacerbating the issue in professional programming by making the people who are little more than some meat between one text box and another (yes, I'm being a bit harsh there) and the people who need instructions but don't understand design more "productive", while overwhelming the more senior developers.
... And this also becomes a set of permanent training wheels on developers who might be able to learn more if they had to do it. That applies at all levels. One needs to practice without training wheels and learn from mistakes to get better.
I don't know, but I can point at more than half of the people I work with who can't think, and every time they try to, it takes a whole group of people who can think to undo their mess. They all have degrees and I don't.
So what does that tell me?
Better yet, for about 30% of them, taking the LLM slop directly would have yielded better outcomes; having them slop something out nets terrible slop. At least pure LLM slop I can reshape, because even the LLM won't do something that stupid.
I've seen it happen multiple times. Engineering degrees are no different than the vast majority of degrees in that if you are good at the read-and-regurgitate cycle, you can make it through. Not only can you make it through, but you can do it with a very respectable GPA.
They come out with a large dictionary of keywords in their arsenal, but no idea how to put them into practice. Some are able to put it into practice and tie it all together. As they see practical examples of those keywords in the real world, it starts falling like dominoes, and at an accelerating rate.
For some, it never goes much beyond keywords. The dominoes fall, but it is slow, and they stop falling for extended periods of time. Not many mature engineering organizations can tolerate that sort of progression rate. They usually don't last very long at any one place, until they find a company where they can blend into the background due to a combination of company culture and low-complexity systems being worked on.
The practice of software engineering is not what they teach in university.
I would say that today's graduates are a bit better than a few decades ago, but there are still many graduating who are just not good at writing computer software and don't really have the aptitude for it (or maybe the interest in getting good). That's what happens when the people coming into the pipeline mostly want to make money and the institution is mostly a degree factory.
Not being able to think properly seems to be the real issue. That's one of the reasons the SE domain is mostly in ruins. AI won't help; it will only delay a bigger mess.
Ever since the standard office setup went from offices or cubicles to bullpens and hot desks, there has been less and less time to think, and all of that is a management decision to ship things as fast as possible.
The tech industry lost the plot when SCRUM Masters and AGILE coaches became highly paid con-men wasting everyone's time and adding no value while raking in the cash. AI doesn't impact something already broken.
60's, 70's, 80's, 90's, basically before Google and Meta found out that ads and money printing run the world, back when the tech industry was run by nerds with mullets, New Balance sneakers and khaki shorts.
Oracle, HP, Microsoft, Cisco, IBM, Apple, Xerox and countless other names were internally bureaucratic and political in the 80's and 90's. Like famously so.
Every single one of those companies you mentioned was lean, agile and run by skilled motivated nerds with mullets and thick glasses in the beginning when they started in a garage.
And every single major company becomes bureaucratic and political after 30+ years in the business when the original founders are long retired, and the Wall Street friendly beancounters take over, caring only about the quarterly reports.
You are changing your argument by adding this: "when they started in a garage."
'Lean agile' tech companies are by far the exception, not the rule.
Look at OpenAI and Anthropic, both fairly new companies that are excessively political already. This 'garage stage' of lacking politics is a myth; read old stories about Microsoft: when it was 15 people, it was political.
>You are changing your argument by adding this: "when they started in a garage."
No, you are.
You first asked: "When was tech not bureaucratic and political?"
To which I replied "in the 60's, 70's, 80's, 90's when they started in garages".
What did you fail to understand here?
>Look at OpenAI and Anthropic, both fairly new companies that are excessively political already.
Everything becomes political when you tell them they're worth trillions if they only play the right tune. Money brings out the worst in people. SW companies didn't make trillions decades ago.
What you actually wrote in the comment four hours ago:
>60's, 70's, 80's, 90's, basically before Google and Meta found out that ads and money printing run the world
Your lie just now:
>To which I replied "in the 60's, 70's, 80's, 90's when they started in garages".
---
>What did you fail to understand here?
Nothing, because you never said it. Wild behavior.
You literally just quoted me saying, two comments above, "You are changing your argument by adding this: "when they started in a garage."" and now you pretend otherwise.
Now you're pretending I never said it and acting like you didn't read it.
Are you unable to understand an argument made by adding the context of two sentences from two consecutive comments following up on each other (which you yourself quoted and said changed the argument), or are you just a troll acting in bad faith, pretending you can't understand just to score a cheap gotcha?
>Wild behavior.
Yes you have, which is why I'll stop replying to you now, to protect my sanity. Jesus Christ.
You made up a quote you never actually said and insisted that you said it; argument over, you lose. And no, you can't take little pieces of several of your comments and smash them together and pretend like that was the context all along. Bizarre behavior. Please read more about how this site works, this isn't acceptable.
Compilers are a layer of abstraction that we can ask another human about. Some human is there taking care of it. Until we get to the point where we trust AI with our survival it would be good to be able to audit the entire stack.
Have you tried to sift through a whole lot of vibe coded slop? It's really mentally draining to see all of the really bad techniques they fall back on just to brute force a solution.
I'm just saying that I already see that people are outsourcing all the thinking to the models - not only code generation and reviews, but even design - the part that "senior engineers" without imagination think only they are capable of doing.
And putting aside the vanishing skill, there is also an issue of volume.
It's worrying how much trust is being put in those systems. And my worry is not about the job anymore, but about our future in general.
It's a bit of a weird place to be in as a senior engineer who has spent 2 decades perfecting his craft.
So, on one hand, I'm also kinda sad at how quickly we've thrown the guardrails away, but on the other -- it's... Well. It's just work.
Turns out, no one ever really cared how elegant or robust our code was and how clever we were to think up some design or other, or that we had an eye on the future; just that it worked well enough to enable X business process / sale / whatever.
And now we're basically commoditised, even if the quality isn't great, more people can solve these problems. So, being honest, I think a lot of my pushback is just a kinda internal rebellion against admitting that actually, we're not all that special after all.
I'm just glad I got to spend 20 years doing my hobby professionally, got paid really well for it, and often times was forced to solve complicated problems no one else could -- that kept me from boredom.
I think the shift we are seeing now, as (previously) knowledge workers, is that work becomes a lot more like manual labour than what we've really been doing up until now. When there's no "I don't know" anymore, then you're not really doing knowledge work, right?
I guess I'll just ride the wave, spew out LLM crap at work, and save the craft for some personal projects; I'll certainly have the capacity now that work is a no-op.
Yeah, but the thing is, it's not "just work". Software now has really big impact on the world and actual lives.
In a corporate world, we are typically detached from real world consequences and looking at people around me, people really don't think about such things - but I do. And I really care, because "relaxed" standards might result in errors that amount to stuff like identity thefts, or stolen money, shit like this, even on the smallest scale.
Obviously we can't prevent everything, but it seems like we, as an industry, decided to collectively YOLO and stop giving a shit at all. And personally I don't like that it is me who is losing sleep over this, while people who happily delegate all their thinking to LLMs sleep better than ever now.
Yeah, that's a tough spot to be in; I think, though, your responsibility really ends with you at work, unless you're very high up on the management chain.
Keep it simple, right: in everything you do, make things a bit better than you found them. It's enough. You're never going to win the fight to get everyone (or maybe even ANYONE, depending how messed up your org is) to care; so why lose sleep over things you can't change?
At least, that's what I started doing some years ago, having by then lost lots of those fights, and I'm sleeping fine again.
I think those of us who have years of experience under our belt are safe. If we're older, the knowledge is ingrained, and atrophy of this knowledge will be limited because it's already "imprinted" onto our brains.
Our futures are safe in this sense; in fact it's even beneficial, as we may be the last generation to have these skills. Humanity's future, on the other hand, is another open question.
You could say the same thing about compiled code, actually it's worse because anything a compiler spits out is very hard to understand even for those who understand assembly.
You don't need to look at the entire program at the assembly level to figure out parts that you want to optimise or prove for correctness. You do need to look at all the code the LLM generates in order to understand it.
You can learn to understand the patterns that compilers spit out and there are many tools out there to aid in that understanding. You can't learn to understand what an LLM spits out because by design it is non-deterministic and will vary in form and function for each pull of the lever.
You can learn to understand how high level concepts in code map down to assembly language and how compilers transform constructs in one language to another. You can't know that about LLMs, because they generate non-deterministic output based on processing of huge low-precision tables.
It's not even a close comparison.
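To make the "pull of the lever" point concrete, here's a toy sketch (my own illustration, not from this thread; the three-token vocabulary and probabilities are made up): decoding with a temperature above zero samples the next token from a probability distribution, so repeated runs of the same prompt can legitimately produce different output.

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* A toy "language model": three tokens with fixed probabilities. */
static const char *vocab[] = {"for", "while", "goto"};
static const double probs[] = {0.5, 0.3, 0.2};

/* Draw one token from the distribution (inverse-CDF sampling). */
static const char *sample_token(void) {
    double r = (double)rand() / RAND_MAX;
    double cumulative = 0.0;
    for (int i = 0; i < 3; i++) {
        cumulative += probs[i];
        if (r < cumulative)
            return vocab[i];
    }
    return vocab[2]; /* guard against floating-point rounding */
}

int main(void) {
    srand((unsigned)time(NULL)); /* different seed, different "answer", every run */
    for (int i = 0; i < 5; i++)
        printf("%s ", sample_token());
    printf("\n");
    return 0;
}
```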
I dunno, I'd rather proofread (or better yet just test) LLM-generated code than have to reason about assembly. You can't just look at part of the assembly to prove that the rest is right, especially if it's hand-written, or maybe just -O3. But anyway compilers are not what come to mind when someone mentions LLM coding.
I agree that the problem is volume, even more so than correctness.
All that LLMs and other generative models have done is enable an order of magnitude more stuff to be created cheaply. This then puts the onus and cost on the consumer of that output, hence why everyone is exhausted after a day of work that just involves looking over output. This volume of output will cause people to stop looking at all of the output and just trust the randomly generated code, and in time the quality will suffer.
It's only your opinion that is provably false.
First, there are still people who don't like high level languages and don't use them, because they find assembly better.
Second, I personally work in a field where I need to consult the source of truth, the actual binary, and not the high level source code - precisely because the high level of abstraction obscures the real mechanics of the software, and someone needs to debug and clean up the mess left by "high level thinkers".
High level programming languages are only an illusion (albeit a good one), but good engineers remember that the illusion is an illusion.
When people communicate they speak in terms of the overwhelming generality of reality. There's always at least one guy that is an extreme exception.
I can tell you this, the person you're replying to comes from the overwhelming majority/generality. You, on the other hand, are that one guy.
Of course even my comment is a bit general. You're not "one" guy literally. But you are an extreme minority that is small enough such that common English vernacular in software does not refer to you.
Not a compliment. I'm saying you're speaking from an incredibly obscure perspective because you took what the other person said way too literally and pedantically.
You can write unambiguous (UB-free) code and the compiler's output will be deterministic. There will even be a spec that explains how your source maps to your program's behavior. An LLM has neither.
Also, if you need to control performance, you still need to know how CPU cache and branch prediction work, both of which exist at the abstraction level of assembly.
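A small C illustration of the determinism point above (mine, not the commenter's): unsigned arithmetic is fully specified by the standard, so its behavior is identical under every conforming compiler, while signed overflow is undefined behavior, exactly the kind of ambiguity the spec lets you avoid.

```c
#include <limits.h>
#include <stdio.h>

int main(void) {
    /* Well-defined: unsigned arithmetic wraps modulo 2^N, so this
       prints 0 under any conforming compiler at any -O level. */
    unsigned int u = UINT_MAX;
    u += 1;
    printf("%u\n", u);

    /* By contrast, signed overflow is undefined behavior: a compiler
       may assume it never happens, and the spec's source-to-behavior
       mapping no longer holds.
       int i = INT_MAX; i += 1;   <- don't do this */
    return 0;
}
```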
I suspect there are at least as many programmers working at the ASM level today as there ever were - they're a lower proportion, but the total number of programmers has increased dramatically.
I wonder if this sort of trend will continue?
(A competent assembly programmer can go miles around a competent high-level programmer, that's still true in 2026...)
Explained by LLM: It is 100% true that no human alive can write 1000 lines of assembly better than GCC or LLVM.
It is also still 100% true, right now in 2026, that a truly competent assembly programmer can write 10 lines of assembly that will beat any compiler on earth by a factor of 2x, 3x, even 5x.
The entire industry looked at this situation, and somehow concluded the exact wrong lesson: "humans should never write assembly". Instead of the correct lesson: "humans should almost never write assembly".
At a high level of abstraction, the product owner can talk to the LLM directly by themselves. The "engineers" will have abstracted themselves out of a job.
This isn't just another translation layer, though. It's squishy and stochastic. It's more like saying "managers think at a higher level of abstraction". Which is true, but it's not the same as compiled code.
GenAI is like a non-deterministic compiler. Just like your manager's reports except with less logical thinking skill. I'd argue this is still problematic.
The comparison to compilers doesn't make sense. When your job is to build software at work, you don't throw away the code after and commit a binary. But more importantly, an LLM is not a compiler.
I think there are engineers that can’t think without AI. But the best think with it. Unfortunately, we are now living in a day and age where simply ignoring AI is no longer an option.
There were always engineers who didn’t think and depended on crutches around them like senior engineers and politicizing the perf cycle. Most people got into this because their parents told them it makes a lot of money, and they never had the drive and curiosity to develop the passion required to truly think through the problems in computing and computer science. They will continue to use crutches to survive. Those that are driven by the problems for the problems will continue to think and use AI as a tool for leverage. This is no different than any other assistive technology.
Hard disagree. I feel like I'm thinking a lot more now because I have so many parallel projects going on at the same time. AI has allowed me to really, truly create in a way that I've never done before. Yes, my coding skills probably aren't as sharp as they used to be, but my system design skills are at an all time high. Don't blame the tool.
What part do you disagree with? It sounds like you don’t disagree with either the title of the article or its contents.
> In talking to engineering management across tech industry heavy-weights, it's apparent that software engineering is starting to split people into two nebulous groups:
> The first group will use A.I. to remove drudgery, move faster, and spend more time on the parts of the job that actually matter i.e. framing problems, making tradeoffs, spotting risks, creating clarity, and producing original insight.
There is already research literally showing that, on average, it is a net loss for focus, learning and critical thinking skills.
> Yes, my coding skills probably aren't as sharp as they used to be
If not the tool, then who's to blame? It's very clear that people who rely on LLMs for coding lose their skills. Just because you have a lot of parallel tasks going at once doesn't mean you're producing quality work. Who's reviewing it? Are you just blindly trusting it?
But is the debate about "fleshing out a system spec" or "the ability to come up with, plan and explore various ideas to solve problems elegantly on a budget"? I think these two sides are always conflated as one when discussing LLM impact on users.
I work with others who have made this same claim. For those people, when I observed their work during demo days the unmentioned thing is that they were going to the AI for system design questions as well. This was framed as "just using it as a sounding board" but what was actually done was not merely a sounding board but instead was asking for solutions. Anchoring bias being what it is, these felt like good ideas and they kept them.
It's the feeling of having done a lot of thinking for themselves without having actually done so.
I actually have gone to the AI repeatedly for system design solutions.
Daily.
I think only twice have I agreed with it.
Like the way it will always give you code if you ask, even if the code is crap, it will always give you a design if you ask. Won't be a good design, though.
If 1% of people using the tool end up like you, and 99% end up drooling invalids, I think it would be insane to not blame the tool. If a tool that's incompatible with humans isn't to blame for that incompatibility, what is to blame for the harm done? Human nature? The point of a tool is to be used by humans.
For how many different parallel projects can you really keep a proper mental model in your head at one time? Or put in enough effort to seriously consider all aspects? I think the number varies between simple and more complex projects. But still, could that number be lower than many think it is?
It really depends on who you consider the "many" to be. I've seen people who claim they can meaningfully iterate on 10 projects simultaneously, and I'm skeptical of that. My personal experience is that my decisions are noticeably degraded at 3-4 parallel workstreams, and with even the simplest projects I'm non-functional past 6.
But I can juggle 2 workstreams in a day easily, and I can trivially swap projects in and out of the "hot path" as demanded by prioritization or blockers; before LLM coding both of those were a lot harder.
So you'll have a beautifully designed system with rotting bones? A system constrained to the same patterns seen in training data. Not terrible, good enough.
I don't know, I don't doubt you're more productive. Broadly so. But the depth and rigor I think may be missing, as the article suggests.
As an aside, I suppose it's a good time for those nearing the end of their careers, those who no longer need to learn, to cash out and go all in on AI.
Nearly certainly. Just turns out that depth and rigour matter a lot less than I would've hoped. Depressing, really.
The real question is whether you'd be able to continue doing your work if someone took your toys away and said "here's a nickel, kid, go buy yourself a real computer". I'm not referring to whether you'd be able to keep up your productivity, since it is clear you couldn't, just like a carpenter with a nail gun works faster than one with a hammer and a bucket'o'nails. Could you do the work, starting with the design, followed by boilerplate and finishing with a working system? The carpenter could, albeit slower, since his tools only speed up the mechanics of his work. Coding agents do much more than that: they take away part of the mental modelling which goes into creating a working system. The fancier the tool, the more work it takes out of your hands.
Say that the aforementioned toy thief comes by in a year or two, after the operating systems (etc.) you're targeting have undergone a few releases with breaking changes. A number of APIs have been removed, others have been deprecated and new ones have been added. You were used to telling the agent to 'make it work on ${older_versions} as well as ${newest_version}', but now you're sitting there with a keyboard at your fingertips and that stupid cursor merrily blinking away on the screen. How long would it take you to become productive again? What if the toy thief waits 5 years before making his heist? What if the models end up rebelling or sink into depression and the government calls upon you to save your economic sector?
When cars first appeared it took quite some knowledge and experience to even get the things started, let alone to keep them running. Modern cars are far better in all respects, and as a result modern drivers often don't have a clue what to do when the 'Check Engine' light appears. More recent cars actively resist attempts by their owners to fix problems since this is considered 'too dangerous' - which can be true in the case of electric cars. That's the cost of progress; it is often worth it, but it does make sense to realise what it would take to go back in time to the days when we coded our software outside in the rain, uphill both ways, with only a cup of water to quench our thirst. In the dark. With wolves howling in the woods. OK, you get my drift.
Will there be something like 'software preppers' who prepare for the 'AIpocalypse' by keeping their laptops in shielded containers while studiously chugging along without any artificial assistance? Probably. As a hobby, at least, just like there are 'survivalist preppers' who make surviving some physical apocalypse their goal in some way or other.
Becoming dependent on a technology is to be expected. I'm pretty sure 95% of us are dependent on packaged meat and don't know how to hunt.
I wasn't really interested in asking Claude myself, because I wasn't really able to verify the claims being made so it's just noise. I'd hoped that the person who had written the code and put it up for review would be able to.
That's substantively different than going from assembly to C.
Every time things change, the change itself is different.
I remember some of my earlier issues with various languages. `Dim A, B as Int`, in VisualBasic one of them is an Int the other is a Variant, in REALbasic (now Xojo) they're both Int. `MyClass *foo = nil; [foo bar];` isn't an error in ObjC because sending a message to nil is a no-op.
Or how, back when I was a complete beginner, if I forgot a semicolon in Metrowerks, the compiler would tell me about errors on every line after (but not including!) the one where I forgot the semicolon.
"Docs say", "Compiler says", "StackOverflow says", "Wikipedia says"; either this tool is good enough or it isn't; it not being good enough means we're still paid to do the thing it can't do, that only stops when nobody needs to because it can do the thing. The overlap, when people lean on it before the paint is dry, is just a time for quick-and-dirty. LLMs are in the wet-paint/quick-and-dirty phase. You could get suff done by copy-pasting code you didn't understand from StackOverflow, but you couldn't build a career from that alone. LLMs are better than StackOverflow, but still not a full replacement for SWeng, not yet.
In answer to the headline - it's not, no more than calculators stopped people from thinking.
It's changing the way we think, and reason.
Speaking as a BE-focused Go developer, I'm now working with a TypeScript FE, using AI to guide me, but it scares the shit out of me because I don't understand what it's suggesting, forcing me to learn what is being presented and the other options.
No different to asking for help on IRC or StackOverflow - for decades people have asked and blindly accepted the answers from those sources, only to later discover that they have bought a footgun.
The speed at which AI is able to gather the answers from StackOverflow coupled with its "I know what I am talking about" tone/attitude does fool people at first, just like the over-confident half assed engineers we have always had to deal with.
Unlike those human sources, we can forcefully push back on AI and it will (usually) take the feedback on board and bring the actual solution forward.
Thus proving the engineer steering it still has to know what they are doing/looking at.
I think if anyone is looking for a concise way to talk about the problems with LLM and agentic coding, it's this. People say AI assisted coding but for much of what I've seen (and tried), it's the tool, gateway, and interface to some people's work now.
No, AI is not creating that group of people. They already existed. They were the people who would google for StackOverflow snippets and copy+paste them without even reading the entire snippet, much less understanding it. Same people, new tool.
100% agree. The key difference now, though, is that it's no longer a 'sink or swim immediately' situation - which used to be a forcing function against intellectual laziness where it was a choice.
Many people by now have probably seen a teammate who used to be a good SWE, now spamming slop code that puts all the real work on the reviewer. That's the "second group."
Tell them no. That's what I do. I have rejected multiple PRs that were too large and lacked proper design or alignment upfront. With code being so cheap, rejecting it should be just as cheap. Set cultural standards that devs need to review their code before asking for reviews. Etc etc
Of course this has to be done by people with seniority who are respected. If you don't have that, you're in a losing battle.
I don't think it has the effect you think it has. No-one takes a rejection personally anymore since it's so easy to just tell an AI to fix the comments. So a rejection does not make them rethink like it would have back in the day.
Even back in the day, code review was often more of a helping exercise than gatekeeping. You can only gatekeep so much before they find someone who doesn't. Which is actually a good thing because a lot of people are pedantic about code in ways that also contradict other reviewers, but in my case I just want it to not do clearly wrong things.
The tool works better than StackOverflow, and I expect it will eventually improve enough that such people become as "productive" as the intelligent and conscientious engineer of today.
Skills you don't need, atrophy. Skills you need, don't. It's very simple, and the "you won't have the skills you used to need but don't need any more!" line of reasoning is tired and invalid.
That's not how it works, unfortunately. Skills you use stay fresh, skills you don't practice get rusty and fade away. You might need things you aren't using anymore.
If you never walk, your legs get weak, you gain weight, your aerobic system loses capacity, and you lose the ability to walk. You don't need it, you say, because you have your car and your mobility scooter and you'll always have these things. Your crutches don't make you weaker, you can still do everything the walkers can do, you say.
Good luck with the nature hike!
Even my colleagues who cheated their way through uni still needed critical thinking to do that and get away with cheating without being caught.
People might hate this but being a good cheat requires a lot of critical thinking.
>It's not really that hard to get a degree in engineering if your only goal is the degree itself.
(Take home) projects are easier than ever thanks to AI. In the past, you at least had to track down some person to do the work for you.