• @agent_flounder
    3 points · 9 months ago

    A big part of it seems to be manipulating the metric? So, like, devs writing tests for more parts of the code base, but tests that are written to always pass.
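
    Something like this, I imagine (a made-up pytest-style sketch; apply_discount is just an illustrative function). The test executes the code, so every line it touches counts as covered, but it asserts nothing and can never fail:

    ```python
    def apply_discount(price: float, percent: float) -> float:
        # Deliberately buggy: adds the discount instead of subtracting it.
        return price * (1 + percent / 100)

    def test_apply_discount():
        # Coverage-gaming "test": executes the function (its lines count
        # as covered) but asserts nothing, so it passes despite the bug.
        apply_discount(100.0, 10.0)

    def test_apply_discount_honestly():
        # A meaningful test pins the behaviour down and catches the bug.
        assert apply_discount(100.0, 10.0) == 90.0
    ```

    Run under a coverage tool, both tests boost the number; only the second one can actually catch the bug.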

    • @sushibowl@feddit.nl
      4 points · 9 months ago

      Yes, of course. Fundamentally the end goal is to improve the app’s quality. However, “quality” isn’t directly measurable. So someone observes that as test coverage goes up, bugs tend to decrease, and as bugs decrease, app quality tends to go up. They make code coverage a KPI and start putting pressure on developers to increase it.

      The problem is that once people are pressured into optimizing a certain number, they will get very creative at doing so. And this creativity often breaks the measure’s relationship with the actual underlying quality we were trying to improve. This is Goodhart’s law in action: when a measure becomes a target, it ceases to be a good measure.

        • @sushibowl@feddit.nl
          3 points · 9 months ago

          Test coverage is the percentage of your application’s code that is exercised by the automated tests.

          Usually this is measured in lines of code. You run the automated tests, then for every line of code, you track whether it’s executed or not. If 20% of lines were never executed during the test run, your test coverage is 80%.
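
          Here’s a minimal sketch of that bookkeeping using Python’s sys.settrace hook (assumes Python 3.10+ for co_lines(); shipping_cost and tracer are made-up names for illustration). Real tools like coverage.py are far more robust, but the idea is the same:

          ```python
          import sys

          def shipping_cost(weight: float, express: bool) -> float:
              if express:
                  return 10 + weight * 2
              return 5 + weight  # never executed by the "test" below

          executed: set[int] = set()

          def tracer(frame, event, arg):
              # Record the line number of every line executed in shipping_cost.
              if event == "line" and frame.f_code is shipping_cost.__code__:
                  executed.add(frame.f_lineno)
              return tracer  # keep tracing inside called functions

          sys.settrace(tracer)
          shipping_cost(3, express=True)  # the entire automated "test run"
          sys.settrace(None)

          # Every line number the function's bytecode maps to (Python 3.10+).
          all_lines = {line for _, _, line in shipping_cost.__code__.co_lines()
                       if line is not None}
          hit = len(executed & all_lines)
          print(f"coverage: {hit}/{len(all_lines)} lines = {hit / len(all_lines):.0%}")
          ```

          The non-express path (return 5 + weight) never runs during the test, so it shows up as a missed line, exactly the kind of place a bug could hide.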

          Software teams often aspire to high coverage, because lines that are never executed during testing are a good place for bugs to hide. However, it’s generally acknowledged that this isn’t a foolproof way to get rid of bugs, and reaching 100% coverage can be more effort than it’s worth. Often you have critical code sections that should be covered by multiple tests, and unimportant sections that are unlikely to fail; a single coverage percentage treats both the same.