Let’s do them together
Static code scanners
A genre of programs called “static code scanners” reads code to identify violations of rules.
This blog identifies some Java naming conventions considered “anti-patterns”, such as:
“no method names beginning with get”
“no class names ending in -er”
“no method names starting with a verb”
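As an illustrative sketch (the class and method names here are made up, not taken from the blog’s rules verbatim), this is the kind of naming those rules flag, alongside a compliant alternative:

```java
// Hypothetical illustration of the naming anti-patterns above.
public class Naming {

    // Flagged by the rules above: "get" prefix, verb-first method name.
    // int getCount() { return this.count; }

    private final int count;

    Naming(int count) {
        this.count = count;
    }

    // Compliant alternative: a noun-style accessor with no "get" prefix.
    int count() {
        return this.count;
    }

    public static void main(String[] args) {
        Naming naming = new Naming(3);
        System.out.println(naming.count());
    }
}
```

The class name itself (“Naming”, not “NamingChecker”) also avoids the “-er” suffix rule.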
The list of such products includes some that work on a wide range of programming languages:
SonarQube tracks summary statistics about scan results over time.
Qulice combines several scanners to apply over 900 rules to Java code alone.
These programs can be invoked as part of a “Continuous Integration/Continuous Deployment” toolchain that stops a branch from being deployed if that branch doesn’t meet all the rules.
This rather Draconian approach makes sense to some people because each piece of new code needs to work with existing code.
Where is the creativity?
Some may bristle at this “take it or leave it” approach.
Does that stifle creativity?
Personally, I think the ends justify the means.
I think creativity is merely shifted, perhaps to the arrangement of classes, to the UI, and to other aspects machines cannot currently fathom.
When all code is known to follow a certain set of rules, the code is more maintainable.
There’s another, perhaps future benefit.
Automated refactoring of the entire code base at once can occur with less worry and work.
What’s more, when code is inevitably generated by machines, the scanners will be there to catch their errors, and thus accelerate results.
Thus, scanners ensure the conditions for speed and rapid adoption of innovation.
Who can know them all?
The concern for organizations is how to “wire in” the rules as code is being typed.
To keep bad-quality code out from its inception, developers can invoke static code scanners on their local machines before committing their code to the team gauntlet.
Coding violations can often be traced to the training developers received. In an effort to simplify concepts for learning, the examples in tutorials are often not “production-worthy”. Nevertheless, those examples get reused out of habit.
For an automated scanner to be a patient tutor, it needs to explain how to do it correctly – how to correct the errant code – rather than simply complaining and dismissing it.
And that’s where live human tutoring is helpful – to provide the nurturing, the explanation of “why” in a way that the learner would best understand.
MY PROPOSAL: A wiki with an entry explaining each rule, with links to explanations of the underlying knowledge. Such a public forum is where debates about the merits of each rule can take place.
I think that where understanding abounds, acceptance will flourish.
Empathetic, specific, and kind feedback?
Some time back, a book named High Tech, High Touch popularized the concept (as I understand it) that the more technology we use, the more genuine personal attention we need.
The feedback from scanners is impartial, and does not take into account personality conflicts and prejudices.
This, I think, is where automated scanners can enhance pair programming.
When one developer introduces a known-bad piece of code, the other doesn’t have to say a word; the scanner does the rejecting.
This way, feedback cannot be perceived as a personal attack and thus cause animosity.
Discussions about code can then rise above judging whether someone is a good person by whether they use spaces or tabs.
Try it - install
Assuming you have Java installed, install Maven. On a Mac:
brew install maven
Clone the Qulice repository and navigate into it. We’ll use the tool to check itself:
git clone https://github.com/teamed/qulice.git
cd qulice
Add the plugin dependency in the project’s pom.xml file.
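As a sketch, the Qulice Maven plugin is typically declared in pom.xml like this (the version number here is an assumption; check the project’s releases for the current one):

```xml
<!-- Hypothetical pom.xml fragment: wiring in the Qulice Maven plugin.
     The version shown is an assumption; use the latest release. -->
<build>
  <plugins>
    <plugin>
      <groupId>com.qulice</groupId>
      <artifactId>qulice-maven-plugin</artifactId>
      <version>0.22.0</version>
      <executions>
        <execution>
          <goals>
            <goal>check</goal>
          </goals>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>
```

With that in place, running the plugin’s check goal (mvn qulice:check) should scan the code and fail the build on any violation.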