Over the past few years, the evolution of AI-driven tools like GitHub’s Copilot and other large language models (LLMs) has promised to revolutionise programming. By leveraging deep learning, these tools can generate code, suggest solutions, and even troubleshoot issues in real time, saving developers hours of work. While the productivity benefits are obvious, there is growing concern that these tools may also have unintended consequences for the quality and skill set of programmers.

  • Snarwin@fedia.io · 2 months ago

    If the compiler produces a program that doesn’t match your description, you can debug the compiler. Can you debug an LLM?

    • beeng@discuss.tchncs.de · edited · 2 months ago

      Why wouldn’t a compiled program match your description (code)? Because the compiler is broken?? Compiled programs always match their description (code).

      So it’s more likely that your translation from idea to function is wrong.

      Re-read your description and step through it slowly: what did you assume that was wrong, or where did you introduce a mistake or typo? Sounds like I can do this in natural language just as well as in Rust.

      You can say that LLMs are not deterministic in what they produce, but that has nothing to do with making a programmer worse at their job.

      If you can’t translate your idea into a function and test its output to check it’s what you want, then you are a bad programmer.
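
      To make that concrete, here’s a minimal Rust sketch of that idea-to-function-to-test loop (the function name and values are made up purely for illustration):

      ```rust
      // Idea, in natural language: "sum only the even numbers in a list."
      // Translation of the idea into a function:
      fn sum_evens(nums: &[i32]) -> i32 {
          nums.iter().filter(|&&n| n % 2 == 0).sum()
      }

      #[cfg(test)]
      mod tests {
          use super::*;

          // Test that the output is actually what I wanted,
          // not just what I happened to write.
          #[test]
          fn sums_only_even_numbers() {
              assert_eq!(sum_evens(&[1, 2, 3, 4]), 6);
              assert_eq!(sum_evens(&[]), 0);
          }
      }
      ```

      If the test fails, the bug is in my translation of the idea, not in the compiler. Same deal whether the "description" is Rust or a prompt.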