rechargedaily 52 minutes ago [-]
Interesting data point from the other direction — while CEOs see no productivity impact, engineers are feeling the pressure of AI regardless of actual output gains. In our burnout survey, AI pressure to do more has emerged as a top-four burnout driver in 2026. That finding didn't exist two years ago.
So even if AI isn't delivering the productivity CEOs expect, it's still extracting a cost from engineers in the form of heightened expectations and stress.
If anyone wants to add their data:
https://docs.google.com/forms/d/e/1FAIpQLSdu-1Sa6oPvhDtFtBuK...
Live results:
rechargedaily.co/state-of-burnout-2026
prh8 19 hours ago [-]
My company has pushed engineering all-in for AI in the last few months
Our stock price has also gone down 70% in the last few months
Naturally, we're pivoting our platform to put AI front and center
dehrmann 19 hours ago [-]
These aren't related in the way you think they are. Stock price reacts quickly to broader market trends, but more slowly for company-specific trends where revenue is likely stable. The impact of AI in engineering work will take months to show up in the product, probably a year after that for customers and the market to take notice. An AI product is a different thing entirely.
Eddy_Viscosity2 18 hours ago [-]
Did they try to be a blockchain-first company when that was all the rage? Making NFTs and whatnot. Is your CEO just a trend-follower?
Grimblewald 19 hours ago [-]
The beatings will continue until moral improves
salawat 15 hours ago [-]
Don't you mean morale? Businesses are basically amoral by desi....ooooooooh. I see what you did there.
davebren 19 hours ago [-]
Are businesses all running on sunk cost fallacy now? These findings have been coming out for a while but it doesn't seem to change anything.
flextheruler 17 hours ago [-]
It seems like that because economic bubbles can last a lot longer than just 3 years. We are also in one of the longest credit cycles ever (2009–present), which has exacerbated this behavior.
grebc 18 hours ago [-]
They’ll say no but really… you know.
gozucito 19 hours ago [-]
I believe the lack of quick, evident profit increases is partly a failure of imagination, or a failure to understand that AI agents are different from people: more impressive or faster in some ways, but much, much less reliable in others.
The evolution of harnesses like Claude Code or open cause, and metaharnesses like Ralph loops, gas town, claws, etc., will progressively allow for better results and abilities even if models stopped evolving, and if the Mythos eval numbers are to be believed, there is still no hard ceiling to be felt yet.
At the same time, small models like Qwen that can run in a PC's VRAM/unified RAM are becoming more useful.
I predict that having more and more loops within loops within loops, and layers of cloud/local models of different capabilities, will solve a great many limitations of LLMs today... at the cost of speed and token count.
We've never had a tool that is at the same time so unreliable and complicated as GenAI before. It will take us a minute to figure out how to use it properly.
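The loops-within-loops idea above can be sketched as a tiny model cascade: try a cheap local model first and escalate to a bigger remote one only when a check rejects the draft. Everything here is a stand-in I made up for illustration (the model and verifier functions are invented stubs, not any real API):

```python
# Hypothetical sketch of layering local and cloud models.
# All three "model" functions below are invented stubs.

def local_model(prompt: str) -> str:
    # stand-in for a small on-device model (e.g. a quantized Qwen)
    return f"draft answer to: {prompt}"

def cloud_model(prompt: str) -> str:
    # stand-in for a larger, slower, more expensive hosted model
    return f"careful answer to: {prompt}"

def verifier(answer: str) -> bool:
    # stand-in for any acceptance check (tests, rubric, critic model)
    return answer.startswith("careful")

def cascade(prompt: str):
    """Return (answer, tier); escalate only when the verifier rejects."""
    draft = local_model(prompt)
    if verifier(draft):
        return draft, "local"
    # escalation costs more time and tokens, as noted above
    return cloud_model(prompt), "cloud"

answer, tier = cascade("summarize the release notes")
```

The point of the sketch is the trade the comment describes: more layers buy reliability, paid for in latency and token count.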
belZaah 13 hours ago [-]
Unlikely. There’s no change in operating-profit-per-employee trends for major software companies like Alphabet since GenAI became a thing. But MS employees are now generating three times more profit than they were before Nadella took over. Clearly leadership can make a difference, but there is no visible impact after several years of the technology being available. I can’t imagine a technology that shows no economic impact at all while we figure it out. There ought to be _something_. Yes, big companies have inertia, but Nadella showed clear results in a year.
EPWN3D 2 hours ago [-]
That's not apples to apples due to Microsoft's massive force reductions and Azure's massive growth.
slopinthebag 18 hours ago [-]
Actually I think the opposite - we will learn that the most important thing is the ability to manage context and steer these models, instead of using a Rube Goldberg machine. Some of the top-performing agent harnesses on Terminal Bench provide literally one tool, tmux, and outperform Claude Code et al. To me, the most important thing by far in getting reasonable output from these machines is what you put into them.
I wish anytime someone used the word "productivity" there was an accompanying definition.
zihotki 11 hours ago [-]
Productivity per dollar doesn't increase because, at maturity levels 1 and 2, the costs of inference and the extra team load (PR quantity and size) eat up all the gains. Only at level 3 can one see an actual productivity impact. Most companies are between levels 1 and 2, where only costs are rising.
Levels: 0 - no AI; 1 - AI enabled (copilots); 2 - AI assisted (autonomous agent pipelines not on your PC); 3 - AI measured.
ritcgab 17 hours ago [-]
They all know that, and we all know that.
So we are all in this "scheme".
charlie90 17 hours ago [-]
Has anyone studied the converse? Not using AI leading to loss of productivity? I feel like AI is no longer a "gain" but rather simply a requirement to compete.
jdlshore 17 hours ago [-]
Productivity gain or loss is in comparison to something else. In the article, “using AI” is compared to “not using AI.” So the question is: what converse do you want to study? “Not using AI” compared to what?
cmiles8 19 hours ago [-]
AI isn’t going away, but it’s also clear the much promised impacts aren’t there and aren’t coming anytime soon. A bit like the claims a few years back that we’d all have self driving cars by now.
The most likely outcome is an AI bubble correction that will be somewhat painful and wipe out many/most AI startups, followed by AI settling into day to day in a way that’s useful and found in many places, but not world-as-we-know-it-ending like the AI bros predict.
ua709 18 hours ago [-]
If AI just means automation, then sure. We absolutely need more automation, and if LLMs are not the mechanism then something else had better be. More automation is the lifeblood of our industry. But are LLMs a game changer, or today's fuzzy logic? [1] Time will tell...
[1] https://www.electronicdesign.com/technologies/embedded/digit...
P.S. I'm not saying fuzzy logic doesn't have applications, I know rice cookers are a thing, but I think it's safe to say we have other options for controlling non-linear systems these days.
negura 15 hours ago [-]
> the much promised impacts aren’t there and aren’t coming anytime soon
At least according to industry analysts, the thesis at the moment is that reasoning models (which loop over their own output and backtrack if necessary) will bring fidelity close to 100% and find novel solutions not present in the training dataset. But they consume more tokens and require more compute, and the infra for that is still being built, so the outlook for those impacts is ~2030.
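The loop-over-your-own-output-and-backtrack behavior can be illustrated with a minimal sketch, with the model call stubbed out (the `model` and `critique` functions are invented for illustration, not any real API). Note how each failed attempt's critique is appended to the prompt, which is why these models burn many more tokens per answer:

```python
# Minimal reason-check-backtrack loop; model call is a stub.

def model(prompt: str) -> str:
    # stand-in: "solves" only once the prompt carries enough critiques
    return "42" if prompt.count("critique:") >= 2 else "not sure"

def critique(answer: str):
    # stand-in acceptance check; None means the answer passes
    return None if answer == "42" else "answer was not concrete"

def solve(question: str, max_attempts: int = 5):
    prompt, tokens_used = question, 0
    for attempt in range(1, max_attempts + 1):
        answer = model(prompt)
        tokens_used += len(prompt.split())  # crude token accounting
        feedback = critique(answer)
        if feedback is None:
            return answer, attempt, tokens_used
        # backtrack: retry with the critique folded into the prompt
        prompt += f"\ncritique: {feedback}"
    return None, max_attempts, tokens_used

answer, attempts, tokens = solve("what is 6 * 7?")
```

The token counter grows with every retry, which is the cost side of the "~2030" outlook: more fidelity, bought with more compute per answer.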
palmotea 11 hours ago [-]
> AI isn’t going away, but it’s also clear the much promised impacts aren’t there and aren’t coming anytime soon.
Even if it doesn't result in increased productivity, AI can still take the fun out of the job (goodbye coding, hello code reviews all day).
newyankee 19 hours ago [-]
We do have self-driving cars, with Waymo data showing it is clearly better than human drivers in certain markets like Phoenix. It is human regulations, laws, and general societal unease that are preventing a total rapid change. In fact, a robotaxi-only urban area that is continuously mapped might be feasible today, and could probably even reduce the number of cars needed for the population, making it accessible to many more.
afavour 19 hours ago [-]
As a counterpoint, Waymo conducted a pilot in NYC then abandoned the permit for it:
https://www.thecity.nyc/2026/04/06/waymo-driverless-cars-tes...
Phoenix is probably about as good a location as you could get for a self driving car. It’s not yet clear how wide their success will be outside of that niche.
cmiles8 19 hours ago [-]
AI has the same problem. It’s not that it doesn’t work, but that folks just aren’t all that interested in adopting it at scale. Tech makes this “build it and they will come” error a lot. The tech is quite good, but it’s all the non tech aspects of this that are why it’s not getting impact at scale.
acdha 16 hours ago [-]
The tech is good but not as good as advertised: note how Microsoft is simultaneously running ads saying Copilot can run your business and claiming it’s only for entertainment purposes in the EULA? Self-driving vehicles have a similar struggle where the manufacturers talk about the capabilities but aren’t willing to sign a legal agreement accepting liability for errors except in the easiest situations (and in the case of Waymo, only with pliable governments and control so they could immediately halt operations in the event of a major problem).
That’s more “build part of it, say you built all of it, and wonder why they don’t come”.
civvv 19 hours ago [-]
You’re generalizing too much here. One of the biggest problems with LLMs today is in fact that they are not at the level being advertised. This is not solely a case of regulation standing in the way of a «revolution».
oblio 19 hours ago [-]
> certain markets like Phoenix
So, basically the easiest robotaxi market on the planet? Call me when it works in Bucharest, Mumbai, Istanbul, Cairo, etc.
For software, the 80% of effort needed to finish the remaining 20% of items is the hardest part, and hardware is even harder.
grebc 18 hours ago [-]
Ever driven in Bali?
nothinkjustai 19 hours ago [-]
No, it’s actually the same issue with AI in a lot of cases. In perfect conditions it can work reliably, but outside of that it falls apart in a way humans don’t.
namr2000 19 hours ago [-]
This has not been my experience with Waymo. I spent a total of about 3.5 hours riding in Waymos in LA when I was visiting, and their robustness to very unusual situations absolutely floored me.
I am sure you can find truly out-of-distribution cases where the car will make a mistake, but the data shows that this is more rare than a human driver making a mistake.
acdha 16 hours ago [-]
How many times did they need remote assistance? Those teams aren’t driving remotely but Waymo doesn’t pay for entire groups to exist without need.
nothinkjustai 19 hours ago [-]
No, it’s actually the same issue with AI in a lot of cases. In perfect conditions it can work reliably, but outside of that it falls apart.
hsuduebc2 19 hours ago [-]
Has there been any recent technology that really delivered on its general promise?
grebc 17 hours ago [-]
Starlink is pretty darn good.
somewhereoutth 19 hours ago [-]
It depends on whether, post-correction, it is worth anyone's money to keep training new frontier models. It could be that it isn't, so we are left with models that were trained in the bubble but are now increasingly out of date, or (open?) models that are trained much more cheaply somehow, with a consequent lack of utility.
cmiles8 19 hours ago [-]
Good point. At some point there will be a reality check for the giant pile of burning cash that is new model training.
beloch 19 hours ago [-]
There's an interesting race happening here.
On one side, there is the usual process of figuring out how to properly use this new tech. It is to be expected that some experimentation is necessary to figure out what applications AI boosts productivity for and what applications it doesn't. There is unusually strong evangelism pushing AI into everything, so the negatives are going to be salient and may make it hard to spot some of the successes.
On the other side is something a little bit new: Deliberate enshittification. OpenAI and others no doubt saw the power crunch coming years in advance, yet it's still happening and is, ostensibly, the reason why prices are starting to go up. This was not unexpected. It's the business model. Build to the capacity that is cheaply available while offering your customers a sweetheart deal to get them addicted, and then jack up the prices when the competition has no cheap power to build upon. The result is locked in customers and locked out competition.
On one side, you have people learning when AI is appropriate and how to use it efficiently. On the other side, you have a small number of AI companies trying to extract every last bit of value so that any productivity gains wind up in their owners' pockets. Will the gains of more appropriately applying AI be entirely wiped out by enshittification?
Simulacra 20 hours ago [-]
Then why the layoffs???
advael 19 hours ago [-]
Partially a contracting real economy following overhiring early in the decade, partially an attempt to discipline labor, and partially a pretty profound disconnect from both market pressures and concrete metrics that comes from a business model centered more on stock value and funding raises than on revenue per se.
TheOtherHobbes 19 hours ago [-]
We've been moving to faith-based markets for decades - markets where belief and hope almost entirely replace quantifiable economic activity.
cyb_ 4 hours ago [-]
Capital reallocation.
In other words, moving money/spend from non-AI projects to AI projects/costs. This includes trimming the bottom X% of performers to reallocate that money, too.
In most cases, it is not about current productivity or AI doing people's jobs.
wildrhythms 20 hours ago [-]
Outsourcing to India and the Philippines
cmiles8 19 hours ago [-]
There’s always a bit of that going on, but ironically, if AI does result in mass labor replacement, India and the Philippines are likely to be ground zero where workforces get wiped out first. They’re rife with the kind of work that AI is, in theory, getting very good at.
plaguuuuuu 10 hours ago [-]
I've always held the view that successfully using AI requires more knowledge and skill, as the blast radius of poor engineering decisions or a lack of domain knowledge is way larger.
I just cannot see WITCH doing this without compounding the usual problems with outsourcing. I've seen some horrors. Can't wait for contractors wielding unprecedented chaos.
Simulacra 13 minutes ago [-]
Good point, successfully using AI takes skill. If you'll please pardon me, I don't think your average GenZ knows how to properly use it. It takes someone who grew up with technology, who understands the fundamentals of technology, who understands the fundamentals of computational decision-making, that can really make use of ChatGPT etc. Someone raised on App Tap culture just isn't equipped to fully appreciate the technology. Not that they can't, it's just… The vast majority of them are hopeless with this.
bilekas 19 hours ago [-]
Because there was bloat, and AI was a good scapegoat.
pragmatic 15 hours ago [-]
Reducing opex to invest more into capex (at least for companies that can like MSFT etc)?
ivankra 18 hours ago [-]
Trend following - everyone's jumping in. And a bad economy.
cmiles8 19 hours ago [-]
Typical bad management decisions that came home to roost. It’s a lot easier to say “AI productivity improvements” than for the CEO to say “I’m cleaning up terrible performance on my part and a lot of bad business decisions.”
fzeroracer 19 hours ago [-]
To juice the next quarter. Extreme short-term thinking has become the norm at every business I've worked at and every business I'm aware of, so upper management has no issue cutting teams right down to the bone.
It's why software has become far more unstable. There's nobody around to actually maintain it.
coldtea 17 hours ago [-]
The economy is shit. They make the layoffs, but instead of saying they're scaling down, they present it as AI-related productivity gains.
Just spin for not-exactly-bright small-time stockholders.
antisthenes 18 hours ago [-]
Ah yes, first the return to office, now being forced to use AI in 50%+ of projects. Will the ingenuity of modern executives never cease?
expedition32 19 hours ago [-]
Dutch AI would just demand a 3 day workweek.
jnaina 18 hours ago [-]
Spanish AI would require all AI systems to pause for 2 hours after lunch hour
leosanchez 12 hours ago [-]
And not work during LaLiga games ?
throwuxiytayq 20 hours ago [-]
they’re holding it wrong.
tcp_handshaker 20 hours ago [-]
They should have asked AI CEOs
ChrisArchitect 17 hours ago [-]
Repost from February; many referencing the same NBER report.
Some related discussions recently and months ago:
90% of CEOs Say AI Changed Nothing. The Other 10% Have a PR Team
https://news.ycombinator.com/item?id=47766164
Majority of CEOs report zero payoff from AI splurge
https://news.ycombinator.com/item?id=46696636
This article is underlining the stark contrast between the viewpoints of “AI Enthusiasts” and everyone else.
Don’t get me wrong, I use these tools daily. That being said I’m having a very hard time finding where the productivity gains are.
I imagine I’m far from alone in that search and when you pair that with the constant marketing and glowing “analysis” from some of the enthusiasts about how this technology is “solving coding” or “changing the face of security” or even leading to AGI it starts to tickle that part of my brain where I keep blockchain, NFTs and copper bracelets.
So TLDR: the tech is good, but the hype-slaves and their masters are killing it by overpromising and underdelivering.
runako 19 hours ago [-]
Not the OP, but there are likely many tens/hundreds of thousands of people using AI daily because their management requires it. Management tracks AI usage by employee and uses it as a KPI. You want to keep your job, you use AI. You want a bonus, you use AI a lot.
This is simultaneously one of the easier management KPIs for employees to hit and one of the least meaningful.
https://www.wsj.com/tech/ai/ai-work-use-performance-reviews-...
you know a tool is good when your boss hinges your career on you using it
ua709 18 hours ago [-]
I think a lot of the disconnect in the programming world is we treat all programming as equivalent and it's not.
There really are many programming jobs that are rote, and I have no problem believing that an LLM-based tool can learn the pattern and regurgitate it with the tweak du jour. In those jobs, LLMs probably do increase productivity.
But there are other programming jobs that are not rote, where there is no pattern to learn because you haven't done the thing yet. There, LLMs aren't any more useful than a normal base library would be, and if you're already good at using a library of code, they're not a productivity booster and are often, in my experience, a hindrance.
Another point is that the prompt actually forces the engineer to spend a moment thinking about what they're doing and to make some kind of plan. Before AI tools, way too many programmers jumped straight into problems without thinking about what they were doing, figuring they could code their way out of anything, and ended up stuck in some cul-de-sac having to backtrack. If they had just stopped and made a basic plan, they wouldn't have had that issue. Forcing engineers who wouldn't otherwise make a plan to do so before they start could definitely be a productivity booster for them.
ytoawwhra92 19 hours ago [-]
> Don’t get me wrong, I use these tools daily. That being said I’m having a very hard time finding where the productivity gains are.
So why are you using the tools? Personal curiosity? Workplace mandate?
I've made measurably more and faster progress on both professional and personal projects since adopting these tools. Sometimes assisted is less productive than unassisted, but the net gain is pretty obvious to me.
ofjcihen 19 hours ago [-]
Honestly? It allows me to be lazy.
throwaway422432 19 hours ago [-]
This.
An AI is like delegating to the junior programmer you don't have. You spend 5 minutes writing the spec rather than an hour coding.
It's usually something you could do yourself, and just can't get motivated to type out the code in the moment.
sph 11 hours ago [-]
No, it allows you to procrastinate on thinking, which is very different from laziness.
ofjcihen 9 hours ago [-]
Eh, no, it’s laziness :)
ytoawwhra92 18 hours ago [-]
That's a productivity gain in my book.
slopinthebag 19 hours ago [-]
Yep...same
I use the tools, but I'm under no delusions that I'm not just being lazy. I could just do it myself, and in some cases it would take roughly the same amount of time, but I can scroll TikTok while it dutifully churns out code.
grebc 19 hours ago [-]
I don’t like the tools personally, and find the reversion of every sort of interface to a chat interface a huge loss for UI - but for the love of all things holy, why are you using them if they don’t provide any benefit?
andrekandre 19 hours ago [-]
> for the love of all things holy why are using them if they don’t provide any benefit?
like most tech trends: fomo and hype
tbf, there is some benefit there, but it's much more nuanced than the hype suggests (as usual)
bluefirebrand 19 hours ago [-]
> Don’t get me wrong, I use these tools daily. That being said I’m having a very hard time finding where the productivity gains are
I'm really struggling to understand why you would use them that much if you aren't sure they are more productive. Is it just a more enjoyable workflow for you?
I ask because I find AI assisted workflows extremely painful. Constantly pulling me out of flow, like driving in gridlock traffic.
ofjcihen 19 hours ago [-]
It allows me to be lazy honestly.
That and using it like a search engine feels a little like having good Google back.
nothinkjustai 20 hours ago [-]
Did the hype cycle not have an impact on employment with the various layoffs? Or is this an admission that the layoffs were for other reasons and were just attributed to AI?
I’m not surprised about productivity though. Efficiency gains are limited by the actual bottlenecks. And truthfully, I think people are deluding themselves a bit about how effective vibe coding is and how much faster they are actually moving when you consider developers still need to form an understanding of the codebase and its systems.
Outside of coding, is there really a use case for LLMs that has the potential to make big efficiency gains? Idk.
smalltorch 19 hours ago [-]
I've found the best way for me to wield it is as a tool to build tools. I would never in a million years have been able to code. But I've used it to replace things I was paying hefty monthly subscriptions for....
So I'm not actually being more productive, but I've cut my costs significantly to do the same things I could do before.
slopinthebag 19 hours ago [-]
I thought I would do this, but of all the vibe coded tools I've built, I think I still use...one. The rest are just not worth the upkeep relative to the utility, or are either broken functionally or in their UX and I can't be arsed to put the effort into making them good. Which brings up why these tools didn't really exist in the first place.
Of course ymmv, and if you find yourself paying subscriptions for stuff you can replace with vibe coded apps, all power to you.
10sunbee 17 hours ago [-]
[dead]
lumost 19 hours ago [-]
A lot of organizations live in some game theoretic equilibrium that prevents cost improvements from being metabolized by the org without burning the cost elsewhere.
For example, consider a commodity business for software product X. All vendors of this product had their costs for developing new product reduced by a factor of 100 overnight. They could increase their profits, lower their price, or re-invest the dividend. In software, the buyer usually buys on quality - so they all re-invest.
Now they are spending the same amount on product development, for the same price tag, and earning the same profit - but they might be shipping much faster.