AI Code: Trusted by Few, Used by Many?
13 Jan
Summary
- 96% of developers doubt AI-generated code's correctness.
- AI code usage grew from 6% to 42% in one year.
- Developers often use personal accounts for AI tools.

The overwhelming majority of developers express low confidence in AI-generated code, with 96% saying they don't fully trust its functional correctness. Despite this skepticism, AI-assisted coding has grown dramatically: AI's share of developers' code rose from 6% in 2023 to an estimated 42%, and is projected to reach 65% by 2027, indicating rapid integration of AI tools into development workflows.
Verifying AI-generated code remains a critical concern. While 59% of developers report spending moderate to substantial effort checking AI output, 38% find it takes longer than reviewing human-written code. That caution appears warranted: research suggests AI tools produce 1.7 times more issues, including major ones, than human developers. A common observation is that AI-generated code often looks correct but isn't.
Beyond functional correctness, security and data exposure are major worries. Over one-third of developers use personal accounts for AI tools, a figure that climbs for specific assistants such as ChatGPT (52%) and Perplexity (63%). This practice risks exposing confidential company information. Accordingly, data exposure and security vulnerabilities rank among developers' top concerns, underscoring the need for robust verification and secure usage policies.
