Study investigates how writing skills and CS achievement relate to the ability to vibe code when controlling for domain-general cognitive skills. Credit: Proceedings of the 2026 CHI Conference on Human Factors in Computing Systems (2026). DOI: 10.1145/3772318.3791666
The new trend of "vibe
coding" allows people to program software without writing a single line of
code. Now, a new study by ETH Zurich published in the Proceedings of the 2026 CHI
Conference on Human Factors in Computing Systems has shown that users
who want to develop apps and programs successfully with AI need not only a
capacity for clear written expression, but also a basic knowledge of computer
science.
For a long time, coding apps or computer programs was reserved for experts with a command of complex programming languages such as Python or Java. Thanks to AI tools such as Claude Code, Cursor or Lovable, however, this is no longer the case. Non-experts can now also become software developers by using natural language to describe how an app or program should work. An AI interprets these instructions, also known as prompts, and generates the program code in the background. The term "vibe coding" has caught on to describe this new approach to programming.
But does that mean that programming with AI agents is now something anyone can do? In a new study, ETH researchers Sverrir Thorgeirsson, Theo Weidmann and Professor Zhendong Su investigated which skills affect people's success at vibe coding. Their finding: as well as the ability to express themselves clearly in writing, people need a basic knowledge of computer science to develop apps or programs with AI that actually work.
People who understand how apps work are at an advantage
For the study, the researchers recruited 100 Zurich students who had completed at least an introductory course in computer science and already had some experience with AI-assisted programming. The students were tasked with using an AI agent to recreate an existing meal-planning app, to add new functions to an app for organizing their own university courses, and to replicate an abstract application with no discernible purpose. They also wrote a short essay on a specialist topic familiar to them and completed tests of their computer science knowledge and general cognitive ability.
The three researchers found that the participants' knowledge of computer science had the greatest impact on how well they completed the tasks. This effect held even when the researchers controlled for differences in the students' general cognitive ability, although, as the study only investigates correlations, it wasn't possible to determine exactly why this was the case.
The researchers suspect, however, that people with a better understanding of how programs work can give an AI more effective instructions, even without seeing the code itself. "Our understanding is that good computer scientists can plan an app's structure more precisely and debug potential errors faster. They're also more likely to know relevant technical terms in order to direct the AI agent more precisely," explains Theo Weidmann, a doctoral student of computer science at the Advanced Software Technologies Lab of ETH Zurich.
Better results with clear and structured prompts
In the study, the authors also
found a significant correlation between success at vibe coding and the
students' general writing skills. Weidmann attributes this to the fact that,
in vibe coding, writing the prompts becomes a form of coding in
itself. "People who formulate clear and structured prompts achieve better
results, while unclear or imprecise wording is more likely to lead to defective
software."
The three researchers were surprised to find that students who use large language models particularly frequently in their everyday lives fared worse not only at writing essays, but also at vibe coding. The reasons for this couldn't be conclusively established in the correlational study. The authors suggest that frequent use of large language models may weaken people's ability to express themselves; conversely, it could also be that students who are less proficient at writing are more likely to reach for AI tools.
AI corrects already-correct code
Coding with AI was also the subject of another recently published study by ETH researchers. ETH Professor Martin Vechev and his team investigated how good common AI agents are at handling code that is actually already correct. Fixing code is one of the key potential applications of AI in software development.
The results are sobering: in more than 70% of cases, the AI agents "corrected" code even though it contained no errors. "Common AI agents suggest fixes to what is already correct code, which means that we still need improvements in AI technology. This is also a reminder that human experts must continue to check AI-generated code rather than relying on AI alone," explains Vechev, adding that there's still work to do before some aspects of software development can be fully automated with AI.
Provided by ETH Zurich
Source: What skills do people need to successfully program with AI?
