Coding assistants promise to become one of the most popular AI applications. Above all, they should free programmers from tedious routine tasks so they can focus on what really matters. There is a large selection of suitable assistants – but how well do they actually work, and what are they not suited for? In this interview, Dr. Alexander Schatten, the new iX cover author, explains what software developers have to pay attention to in practice.
Dr. Alexander Schatten is a senior researcher at SBA Research, a management consultant and a podcaster: podcast.zukunft-denken.eu
Why don't software developers need to fear for their jobs despite AI code assistants?

I don’t know if they don’t have to fear for their jobs (in the long term). In the short and medium term, it seems to me that AI tools have the same effect as other complex tools in the software lifecycle: competent programmers are empowered and become better at their jobs, while those with mediocre or poor qualifications simply work faster and (to put it very simply) produce worse results.
For a long time we have had too few competent programmers and too many who are overwhelmed by the complex challenges of their work. What’s worse is that most of them are not even aware of it (consider the Dunning-Kruger effect). In large projects, this often leads to big, systemic and widespread errors that are difficult to fix.
If AI is used strategically in the wrong way in a company, it helps these mistakes happen even faster and on a larger scale.
What are AI assistants particularly good at when it comes to programming?
On the one hand, they help with relatively simple boilerplate code; on the other, in the hands of capable programmers, they can make it easier to get started with new problems in a prototypical way.
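As a purely illustrative sketch (not from the interview): the kind of boilerplate that assistants typically handle well looks something like the following Python data class, with hypothetical fields and simple serialization helpers.

```python
from dataclasses import dataclass, asdict
import json


@dataclass
class Customer:
    # Hypothetical fields, chosen only to illustrate typical boilerplate
    customer_id: int
    name: str
    email: str

    def to_json(self) -> str:
        # Serialize the record to a JSON string
        return json.dumps(asdict(self))

    @classmethod
    def from_json(cls, raw: str) -> "Customer":
        # Rebuild the record from a JSON string
        return cls(**json.loads(raw))
```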
AI can also be very helpful in supporting quality assurance measures, for example with testing or writing documentation.
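A hedged example of what such support can look like: for a small, invented helper function, an assistant might propose unit tests along these lines, which a human still has to review.

```python
import re
import unittest


def slugify(title: str) -> str:
    # Turn a title into a URL-friendly slug
    slug = title.strip().lower()
    slug = re.sub(r"[^a-z0-9]+", "-", slug)
    return slug.strip("-")


class SlugifyTests(unittest.TestCase):
    # Tests of the kind an assistant might suggest; they still need human review
    def test_spaces_become_hyphens(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_special_characters_are_dropped(self):
        self.assertEqual(slugify("  C++ & Rust!  "), "c-rust")

    def test_empty_input(self):
        self.assertEqual(slugify(""), "")


if __name__ == "__main__":
    unittest.main()
```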
You can also get help when working on parts of the code you don’t know: “Explain this method/this regular expression to me.” However, it should always be added that these explanations are by no means error-free. You should always be reasonably able to assess the AI’s statements yourself.
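To make the “explain this regular expression” case concrete, here is a small, hypothetical example; the commented breakdown is the kind of explanation one would ask an assistant for, and exactly the kind of answer that still needs checking.

```python
import re

# A common (simplified) pattern for ISO dates such as "2024-07-15";
# the comments spell out the explanation one would expect from an assistant.
ISO_DATE = re.compile(
    r"^"                       # start of the string
    r"\d{4}"                   # four-digit year
    r"-"                       # literal hyphen
    r"(0[1-9]|1[0-2])"         # month 01-12
    r"-"                       # literal hyphen
    r"(0[1-9]|[12]\d|3[01])"   # day 01-31 (does not validate month length)
    r"$"                       # end of the string
)

print(bool(ISO_DATE.match("2024-07-15")))  # True
print(bool(ISO_DATE.match("2024-13-01")))  # False, month 13 does not exist
```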
At what point do they become weak – or fail altogether?
At the moment, they are rarely able to keep an overview of large code bases or technical side effects. But this may change as the technology improves.
The main problem is that when major interventions are made by AI, you need to be confident that they will be done correctly. For example, if AI has to do a major refactoring, it is no longer easy to adequately test the results.
But if it’s quick and easy, it’s tempting to do it, despite the risks.
Mr. Schatten, thank you very much for your answers! Four detailed articles in the new July iX explain the situation around coding assistants in depth. In addition to the tool overview, we show what the generated code is good for. All topics in the new issue can be found here. iX 7/2024 is available now in the Heise shop and at kiosks.
In the “Three Questions and Answers” series, iX wants to get to the core of today’s IT challenges – whether it’s the perspective of the user in front of the PC, the manager or the administrator in their everyday work. Do you have suggestions from your daily practice or that of your users? Which topics would you like to read about, briefly and to the point? Then feel free to write to us or leave a comment in the forum.
(for)
