Find some consensus


Bob Jacobsen
 

TL;DR: Skip to below the line to find a requested action item.

For a very long time, the JMRI development community was able to collaborate while still allowing people to have their own working styles. For well over a decade, the basic idea was that _additive_ changes didn't have to be perfect; that "code wins"[1] meant that contributions with warts were still contributions to be accepted; and that, because JMRI took a stone-soup approach to development, _subtractive_ changes were not OK. People stepped up and made this all work, mostly by making improvements to the things they cared most about. I've written about this approach a couple of times.[2]

In the last year or so, this has broken down. In the large, there are _disagreements_ about the relative emphasis of the JMRI apps vs JMRI as a library; getting features out to users vs working on the structure and quality of the code; procedures and toolsets vs individual styles and individual developer responsibility. I specifically mean _disagreements_ in the sense of "Person X thinks Person Y's method is wrong and shouldn't be", not "differences" in the form of "X does it one way and Y does it another".

For too long, I've been spending my hobby time in the middle of _all_ of the disagreements I mentioned in that paragraph, and others.

Most recently, I've spent the majority of my hobby time over three weeks trying to resolve what started as a simple request for a holiday[3][4] from a deprecation process that (a) I found excessive and pointless, but (b) I had been complying with because a rough agreement had existed to put it in place.

After three weeks of trying to work out the details of somebody's suggestion of a more formal, more complex, but perhaps more powerful release methodology, that effort now seems to have fallen apart in a paroxysm of absolutism.[5]

The comments in the last part of that[5] discussion indicate that there really is a problem here that _I've_ been contributing to. I think I've been trying to hold a middle ground, but it's clear that some people don't agree with me on that.

Reviewing a bunch of recent discussions about this[6] surfaced more disagreements beyond the release process.

- People who are concerned about code quality metrics add to the workload of people who don't value those metrics in the same way. On the other hand, some are concerned about getting immediate features and fixes out to users, regardless of the quality issues in how that code is written, and don't seem to want to spend the time to (learn to) write better code. (And others get stuck in the middle, trying to update other people's code for metrics that they never advocated in the first place.) And there are disagreements about the cost/benefit of the issues raised by the automated checking and the need for additional checks.

- People want to add tools, perhaps even make them "primary" tools, at the cost of taking away tools that others already use (and others then have to spend time maintaining tools they don't use or know much about).

- Some people want to mandate PR reviews before merging, but consider it dictatorial when they don't get to merge the subtractive change they want.

- When doing reviews, people want stylistic changes, and get unhappy when the original author of the PR doesn't prioritize those.

I also believe I've been trying to hold the middle ground on several of those[7] too, but it's also clear that people don't agree with that either. So it's time for me to stop, and now I have.

------------------------------

I think it's time for you to find some consensus around how to work as a group. And by “you”, I mean _all_ the people on this list other than me; by "find" I mean "negotiate and document"; by "consensus" I mean a range of ways of working that people will either find a way to accept or need to leave. That range might be tight, might be loose, might be a mix; that’s something for you to negotiate and document.

As you may have guessed from the discussion above, I'm tired of being caught in the middle of this. I'm not interested in being called "dictatorial" ever again.

So it's up to you guys, as a collective, to find that consensus. I'm not going to take part in the discussion. Figure it out, write it down, and start working together. Or not; that's up to you.

I'm going back to working on SP Coast Line signals. I got a big envelope full of copies of 1953 prototype schematics back in mid-June as a present, and I’m really looking forward to opening it and digging in.

Bob


[1] The oldest use I found on jmriusers groups.io shows "code wins" already in use as shorthand in a 2009 post about Layout Editor structure:
https://groups.io/g/jmriusers/message/41336
I'm sure there were older ones.

[2] A recent longer explanation of the approach, complete with cartoon, is in https://jmri-developers.groups.io/g/jmri/message/654

[3] https://jmri-developers.groups.io/g/jmri/message/3630
[4] https://jmri-developers.groups.io/g/jmri/message/3816

[5] See the recent part of the discussion in PR 8832 "Update release documentation" https://github.com/JMRI/JMRI/pull/8832

[6] Specifically, see the PRs and Issues linked at the end of PR 8832[5]

[7] Approximately 80% of my latest PRs have been attempts to address other people's ideas of improvements. I haven't had time since early May to make a commit (not a PR, a _local_ _commit_) on the signaling code that I really want to work on.


Bob Jacobsen
@BobJacobsen


Paul Bender
 

All,

Inasmuch as I am at least partly the cause of this message from Bob, I want to address a couple of his points and add a few of the things I think are important, along with why, because the why really does matter to this discussion.

On 7/16/20 12:01 AM, Bob Jacobsen wrote:
Most recently, I've spent the majority of my hobby time over three weeks trying to resolve what started as a simple request for a holiday[3][4] from a deprecation process that (a) I found excessive and pointless, but (b) I had been complying with because a rough agreement had existed to put it in place.
Just for the record, because I never stated it explicitly: I was perfectly fine with the deprecation holiday resulting in version 5 coming out at Christmas time. As with Bob, there are certain tasks that need to be done in order to clean up a few things in the code, mostly violations of our long-standing architecture, but those tasks can't be done because methods and/or classes need to be deprecated first.

- People who are concerned about code quality metrics add to the workload of people who don't value those metrics in the same way. On the other hand, some are concerned about getting immediate features and fixes out to users, regardless of the quality issues in how that code is written, and don't seem to want to spend the time to (learn to) write better code. (And others get stuck in the middle, trying to update other people's code for metrics that they never advocated in the first place.) And there are disagreements about the cost/benefit of the issues raised by the automated checking and the need for additional checks.
For my own part in all of this, I really only care about one metric for
JMRI, and that is test coverage.  I've spent a LOT of time on improving
test coverage. 

Why did I start down the test coverage rabbit hole? Like Bob, I find myself doing a lot of work cleaning up code, fixing partially completed projects, most of which I didn't start myself. It was while working on one of those projects maybe 5 years ago that I accidentally broke multiple things because, in my naive thinking at the time, "this can't possibly break anything". That was a driver to improve the test coverage from an abysmal 10% or so to the 50% or so we have now (still not enough, but better. I don't think we'll ever hit 100%, but I'd like to see us hit the 70s...).

Now, why tests:

1) Tests are documentation. If you have a test, you know what the code should do in the cases covered by the tests.

2) Tests can give you confidence that things won't break when you make a
change.

3) Tests give you confidence that users will find fewer bugs (and if a
user finds a bug, it's a good idea to write a test to make sure it
doesn't happen again).
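As a concrete illustration of point 3, here is a minimal sketch of what such a regression test could look like under JUnit 5 (JMRI has moved to JUnit 5, as noted later in this thread). The class under test and the bug it guards against are hypothetical, purely for illustration:

import org.junit.jupiter.api.Assertions;
import org.junit.jupiter.api.Test;

public class TurnoutNameFormatterTest {

    // Hypothetical class under test, stubbed here so the example
    // is self-contained; this is not actual JMRI code.
    static class TurnoutNameFormatter {
        String normalize(String name) {
            return name.trim(); // trims the ends, keeps embedded spaces
        }
    }

    // Guards against a (made-up) previously reported bug where
    // system names with embedded spaces were silently mangled.
    @Test
    public void testSystemNameWithSpacesIsPreserved() {
        TurnoutNameFormatter formatter = new TurnoutNameFormatter();
        Assertions.assertEquals("LT 42", formatter.normalize("LT 42"),
                "normalize() must not strip embedded spaces");
    }
}

Once a test like that is merged, the bug cannot silently come back; CI fails instead of a user finding it.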

But, admittedly, we have too many broken tests (flaky tests are broken tests...) and those need to be fixed (not disabled, not pushed off into a corner, but really fixed).

- Some people want to mandate PR reviews before merging, but consider it dictatorial when they don't get to merge the subtractive change they want.
There may be differences of opinion here as to what constitutes a subtractive change. One PR I was personally offended by was one I saw as an additive change.

However, let's talk about why I want to advocate for PR reviews:

1) We don't have enough test coverage

2) People make mistakes.  We are all human.  Code reviews mean you get a
second set of eyes on the code, which may very well catch an error
before it makes it into the hands of users.

3) We get more people familiar with more parts of the code.

4) We can all learn from each other.

But also, we have two classes of developers in JMRI:

Maintainers, who can approve their own PRs.

Everyone else (members of the development team or just other contributors), who must have a maintainer approve their PRs.

Before we moved to GitHub, effectively everyone was a maintainer. I would never advocate going back to that chaos.

In practice, however, Bob has been just about the only person who reviews contributions from developers who are not maintainers. When I was a maintainer, I did take some of that load, but I am no longer a maintainer, so I can't do that now.

The suggestion that we require a review on every PR levels the playing field, so to speak, and takes some of the load off Bob. Every member of the development team should be expected to review PRs and make suggestions that prevent errors from getting to users. (Style comments should only be allowed if the style makes the code unclear.)

I think it's time for you to find some consensus around how to work as a group. And by “you”, I mean _all_ the people on this list other than me; by "find" I mean "negotiate and document"; by "consensus" I mean a range of ways of working that people will either find a way to accept or need to leave. That range might be tight, might be loose, might be a mix; that’s something for you to negotiate and document.

As you may have guessed from the discussion above, I'm tired of being caught in the middle of this. I'm not interested in being called "dictatorial" ever again.
I don't want Bob to be in the middle of all this.  I have long thought
that we put too much work on Bob, or that he takes on too much work
himself.  I don't think that is good for Bob, and I don't think that's
good for the JMRI developer community.

I also want to see the developer community continue to grow as it has
for the 18 years (give or take a little) that I have been involved in
the project.  Bob is a big part of that, and has always been the glue at
the center of the community.

Now, I also want to add one thing I want us to move to, which is, at
least in part, a reason Bob mentioned the primary tool discussion
(though this isn't about tools, this is about process).

I want to see us move to a modular build system, and here are several
reasons why:

1) We're compiling and testing an approximately 400,000-line monolithic program every time we run the program through CI (and sometimes when we build locally). By dividing the program into modules, you can frequently reduce the amount of code that must be recompiled every time (i.e. if I'm working on LayoutEditor, why should any part of the CI process ever need to recompile the layout connections in jmri.jmrix?).

2) We can use the modular build to enforce the architecture we have agreed on (i.e. if I'm writing something in jmri, I can't refer to apps, because jmri isn't supposed to depend on apps, so it doesn't know apps exists). A sketch of what that could look like follows below.

3) We may actually be able to improve test reliability (because if each module is built and tested individually, it is much harder for other tests to cause issues).

This does complicate the assembly (into a distribution) process, but that configuration is effectively a one-time task.
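To make point 2 concrete: if the modules were expressed with the Java Platform Module System (one possible mechanism; JMRI is not built this way today, and the module name here is purely illustrative), the layering would be enforced by the compiler itself:

// Hypothetical module descriptor for a split-out JMRI core module.
// Because there is no "requires apps" line, any code in this module
// that tries to import from apps simply fails to compile -- the
// toolchain enforces the architecture instead of a human reviewer.
module jmri.core {
    requires java.desktop;   // Swing/AWT, used throughout JMRI
    exports jmri;            // the public API packages
}

The same effect can be had from a multi-module Maven or Gradle build, where each module only sees the dependencies it declares.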

That's all for now.

Paul


Ken Cameron
 

I grew up with the ideas Bob described about additive (new) things not requiring the same standard as existing code. That allowed new features to come to the users quickly. Yes, it usually meant a number of bugs, but the users seemed to understand that new features were like alpha code and should be treated with suspicion. Later the code could be improved, particularly once it was shown that users were actually using it and it wasn't just a coding exercise.

Subtractive code was considered a solution of last resort. That was what gave us the really long deprecation cycle. The build-up of this is what prompted the idea of a holiday to get to a cleaner slate. Yes, bad ideas should die eventually, but we need to make sure the good-idea way is available and a transition path is documented. The best example of this would be how the default manager is found for different types. Those simple one-line 'find this' and 'replace with that' instructions have gotten lots of use over the years.

From watching and listening, it seems that we have two types of contributors to the project: those who live in the world of coding and those who do not. The former get uptight about style and testing; the latter generally don't understand those things. Documenting things like style and testing in a way the non-formal coders understand has been a gap in this project. With the right guides and examples, I think many of these folks could come closer to the expectations of the full-time coders. But I think we would benefit more if we don't have absolute standards for these things. Meaning: cut some slack to those who have a hard enough time with the coding itself, and maybe help them along using positive reinforcement. Some of the tone seems to have gotten rather negative over the years.

I think that Bob's idea of a deprecation holiday turned into a classic example of requirements creep. The simple concept quickly gained a whole lot of extra weight and baggage, which just as quickly seemed to erase the initial gains.

I recall ages ago that 'lint-free code' was a submission goal for some projects. The problem was that it didn't seem to help the code really work. The opposite side also exists: code that fails some tests may still be a considerable improvement over the existing code.

I still look at some of the code submissions, as I'm curious to see how something fixes an issue. Years ago it was easy to pull each file into my local copy via a side-by-side view of the changes and learn some of the other coding ideas and styles. I found it helpful to watch first, code later, as a way to learn how I should be doing things. I'm not as sure the current generation of tools makes it as simple as it used to be. Here I see the multiple code branches clouding the view of the code. If you aren't looking at the right branch at the right time, you end up with the wrong results. Some of the latest ideas about this will not be understood by the many contributors who are not lifelong programmers.

My last comment is that we should manage this in a way that doesn't lose Bob as a resource, the way some other projects have when they became more noise than substance.

-Ken Cameron, Member JMRI Dev Team
www.jmri.org
www.fingerlakeslivesteamers.org
www.cnymod.org
www.syracusemodelrr.org


Dan Boudreau
 

I don't believe in consensus, and I don't believe you can make everyone happy, but I do believe in leadership, and I support any direction Bob J. wants to take JMRI. Bob J. is the glue that holds JMRI together; anyone who doesn't understand that should leave. I'm sorry to hear that he's not enjoying his current role as part of the JMRI team.

I also agree with Paul B. that test coverage is a good thing. We're now over 50% coverage, much better than years ago. I just wish we could figure out how to make the damn tests more reliable; even my own tests fail for reasons I don't understand.

Change is good, and the worst that can happen is that we have to return to the previous process, which I truly believe we won't.

I still find it amazing that contributors of various talents can produce a product as good as JMRI. If we need to progress to the next level and require peer reviews or whatever, so be it.

I'm here to have fun and enjoy coding with other like-minded folks. Let's not make this a job; the pay is awful.

I'm happy with the current process, I was happy with the process 12 years ago, and hopefully I'll be happy with whatever comes next. I don't have a strong opinion about what needs to be done going forward; I'll leave that to others.

Dan

Lead, follow or get out of the way.

 

danielb987
 

I think it would be good to have PR reviews. I'm not a maintainer, only a developer, but I try to read most of the PRs, and there are three main reasons for me to do it:

*) I learn a lot about Java, design patterns and coding practices by reading other people's code.

*) I learn about how JMRI works. By reading Issues and PRs and what problems the PRs try to solve and how the PRs solve it, I learn more about how JMRI is structured.

*) I learn about where JMRI is heading. By reading PRs, I get a glimpse of what JMRI is going to be tomorrow.


Also, people grow if they get responsibility. If nobody trusts me, I don't need to take responsibility. If people see me as a leader, I need to step up to show that I deserve the trust. There may be someone who can't handle the responsibility, and then the project leader needs to withdraw that trust, but for most people it works fine.


One really important factor is tone: how we treat one another. A "thank you" or an apology means a lot. I have sometimes failed to keep a good tone, but I try to apologize when that happens. The absolute most important factor in why I'm still active as a JMRI developer is the encouragement Bob J has given me from time to time. Harsh words lead to the opposite.


I have made several comments on PRs. Those comments are almost never requests, but thoughts. Sometimes I'm right, sometimes I'm wrong. But I have learned a lot from the responses I have gotten to my comments on the PRs.

Daniel


Klaus Killinger
 

Here are my thoughts and documentation on an impressive appeal.

If a matured concept is not ready to start because some steps still have to be done, why not do it later?
I'm not a strategic planner or architect or designer, so that's just my impression. In my day job I was a team member and was used to following strategic decisions. Mostly such decisions made sense to me, following them was easy, and I was glad to do it. And so it is with JMRI.

Failures happen and are human. Some happen simply because of misunderstandings. Dealing with them depends on values and expectations.
This also includes the way we communicate. It's my conviction that a few people who lack empathy are not capable of upsetting a community. Of course it is hard for the one who is affected. A subject-specific and human consensus is not easy to achieve and needs much energy, i.e. time.

Creating good code is a challenge for me. So I appreciate the automated checking, and I try to draw the right conclusions from it. The reviews I get (and appreciate as well) tell me I'm not always accurate. I'm a contributor because I see a chance to add value. Sometimes new ideas come faster than implementations. So I do what I can do, and I love what I do.

Klaus




Paul Bender
 

All,

On 7/17/20 4:11 AM, Klaus Killinger wrote:
Failures happen and are human. Some happen simply because of misunderstandings. Dealing with them depends on values and expectations. This also includes the way we communicate. It's my conviction that a few people who lack empathy are not capable of upsetting a community. Of course it is hard for the one who is affected. A subject-specific and human consensus is not easy to achieve and needs much energy, i.e. time.
Something about what Klaus said here really hits home for me. Maybe what we're missing is good communication, from multiple parties (I'll include myself here), and perhaps that is what we need to work on more than anything else as a team.

My team at work communicates with daily scrum meetings and frequent pair
programming sessions (yes, even in this COVID-19 world, where we all
work remotely).  I don't think that's practical for JMRI, given the
geographic separation.

Maybe e-mail isn't the right way for us to communicate any longer?

Maybe having communication primarily through issues and pull requests on GitHub isn't right either? We seem to have transitioned to that over the last few years.

Maybe we need to find some other way to communicate. One of the other open source groups I work with uses Slack channels for communication. Perhaps that would be better?

Now, to summarize the rest of what I'm hearing in this thread (just to play it back, not to add any commentary):

1) Bob is really the center of this project for most of us, and we'd
like to keep it that way

2) We want to find ways we can reduce Bob's load. 

3) Reviews on pull requests are good, but people are indifferent about
requiring them.

4) Testing is good, let's keep doing that, but let's work on the reliability.

5) Most of us just want to continue making good contributions to JMRI.

6) Perhaps we need to work on our team communication skills.

Does that sum up the conversation so far?  Anything to add to the list?


Paul


danielb987
 

On 2020-07-19 04:43, Paul Bender wrote:
Now, to summarize the rest of what I'm hearing in this thread (just to
play it back, not to add any commentary)
1) Bob is really the center of this project for most of us, and we'd
like to keep it that way
2) We want to find ways we can reduce Bob's load. 
3) Reviews on pull requests are good, but people are indifferent about
requiring them.
4) Testing is good, let's keep doing that, but let's work on the reliability.
5) Most of us just want to continue making good contributions to JMRI.
6) Perhaps we need to work on our team communication skills.
Does that sum up the conversation so far?  Anything to add to the list?
Yes, I think so.

Daniel


Steve Todd
 

On Sat, Jul 18, 2020 at 07:43 PM, Paul Bender wrote:
1) Bob is really the center of this project for most of us, and we'd like to keep it that way
2) We want to find ways we can reduce Bob's load. 
3) Reviews on pull requests are good, but people are indifferent about requiring them.
4) Testing is good, let's keep doing that, but let's work on the reliability.
5) Most of us just want to continue making good contributions to JMRI.
6) Perhaps we need to work on our team communication skills.
This is a good list; I agree completely with all points except #3.
On #3, "indifferent" doesn't describe my take.
I DO completely agree with the need for some sort of "peer review", but I'm very concerned about how that will work, given some of the "communication issues" and "lack of empathy" mentioned.
This seems to be a critical change, much needed to keep BobJ from shouldering all of the burden, but simply spreading the same disagreements around to more people is not a solution.
Without structure, I fear the harshest voices will "win" and the project will "lose", every time.
The reality is that JMRI is an extremely complex entity, and very few of us have the ability (or the time) to grasp more than an isolated segment at a time.
Somehow, we need to be more accepting of "good, but not perfect" changes, which helps contributors feel valued. A little grace goes a long way.

And #1 should be bolded, large font, emphasized, whatever, to make it clear how important BobJ is to the past, present and future of this project. He's the only reason I'm still involved, and I doubt I'm the only person who feels that way.

--SteveT


danielb987
 

I fully agree with what you write.

About peer review:

I think it may work, but it requires that people be willing to do peer review even on parts they don't fully understand. There are parts that only one or a couple of people fully understand, and if peer review required full knowledge, development of those parts would be almost impossible.

But if we are all allowed to do peer review even when we don't fully understand the code, I think it may work. And if I'm uncertain, I can wait a day or two to see if someone else does a peer review, and if not, I can do it.

If we all can do peer reviews, and nobody can veto a PR, we can get around the harshest voice. If you create a PR and I disagree with it and give a bad review, but Paul Bender does his review and approves it, the PR can be merged.

I think it's important that nobody has the right to veto a PR for this to work.

Daniel



Bob M.
 

I agree in general with Paul's summary points.

A few comments:

- On pull-request reviews: It seems to me that the various JMRI parts generally have one or more "persons of expertise" or one or more "persons of deep JMRI coding experience". And some concepts perhaps have one or more persons with "deep experience" - I'm thinking of things like certain coding practices. But no one "knows it all", and few JMRI contributors truly know the mapping of concept to expert. While GitHub's mechanisms can try to propose reviewers when a pull request is submitted, it might make sense to maintain and publish a list of the various JMRI concepts/systems/implementations and the associated "person(s) of expertise". I know that such a list would have influenced reviewer choice on some of my past PRs! **Perhaps discussion of this idea needs to be taken to a new thread.**

- Like many other JMRI contributors, I am _not_ a programmer by trade or training. I have little knowledge of many of the formal techniques, procedures and terms of modern programming. It seems futile for me to weigh in on procedures and policies with respect to things like test methodologies or documentation mechanisms or deprecation strategies. I place my trust in those with the appropriate backgrounds to propose, discuss, and implement the policies and procedures and support scripts and web configuration and... And I view it as my personal responsibility to study as needed to make proper use of those mechanisms, policies and procedures, to the best of my abilities.

- I am a believer in testing and test cases - my career has convinced me of the benefits. I strongly support the goal of resolving the "unstable" test cases, and would help if I had a clue about the common failure modes and their common solutions, at least for those parts of the code where I feel I have some strong level of experience.

- I will do what I can to help make JMRI better, within the limitations and opportunities which circumstance presents to me.

Regards,
Bob M.


Andrew Crosland
 


------ Original Message ------
From: "Steve Todd" <mstevetodd@...>
Sent: 19/07/2020 22:44:34
Subject: Re: [jmri-developers] Find some consensus

The reality is that JMRI is an extremely complex entity, and very few of us have the ability (or the time) to grasp more than an isolated segment at a time. 
This is certainly where I struggle. Randall was very helpful recently when I needed to make some changes outside my own area of knowledge, where I could have made a few blunders by not realising the subtleties of what was going on.

Somehow, we need to be more accepting of "good, but not perfect" changes, which helps contributors feel valued. A little grace goes a long way.

In general I think this is the case, is it not? I know my own code is not perfect :)

Andrew


--
Andrew Crosland


Andrew Crosland
 

------ Original Message ------
From: "Bob M." <jawhugrps@...>
To: jmri@jmri-developers.groups.io
Sent: 20/07/2020 01:52:42
Subject: Re: [jmri-developers] Find some consensus

- Like many other JMRI contributors, I am _not_ a programmer by trade or training. I have little knowledge of many of the formal techniques, procedures and terms of modern programming. It seems futile for me to weigh-in on procedures and policies with respect to things like test methodologies or documentation mechanisms or deprecation strategies. I place my trust in those with the appropriate backgrounds to propose, discuss, and implement the policies and procedures and support scripts and web configuration and... And I view it as my personal responsibility to study as needed to make proper use of those mechanisms, polcies and procedures, to the best of my abilities.
I am in 100% agreement here. Hardware and low-level firmware are really my thing. I have had exposure to numerous languages and methodologies over the years, but never enough to become an expert.

- I am a believer in testing and test-cases - my career has convinced me of the benefits. I strongly support the goal of resolving the "unstable" testcases, and would help if I had a clue of the common failure-modes and their common solutions, at least for those parts of the code where I feel that I have some strong level of experience.
This is another area where expertise counts. I am keen to write good test cases but suffer from the "I'll do it later when the code is working" attitude. Having CI-mandatory tests for every class is a good thing. I will often cut, paste and adapt code from a similar test class. In doing so I sometimes feel others are as guilty as me of writing only the most basic test class, one that just calls the constructor, and never revisiting it later. In my case I think I need a deeper understanding of unit testing and how to write good test cases, e.g. knowing what facilities are available for running/monitoring tests and logging results. Is there a good primer for JUnit (is that what we use now), and for how to monitor test results and log them?

Andrew




--
Andrew Crosland


Klaus Killinger
 

I would also like to agree with the list. Good comments have now made the list even better and more precise.

#2 is important because of the high importance of #1.
Could the new release methodology (https://github.com/JMRI/JMRI/issues/8831) be a load reduction? Fewer releases to build, less load?

I think that #3 becomes easier over time as #4 improves.

Klaus


On 19.07.2020 at 23:44, Steve Todd wrote:

On Sat, Jul 18, 2020 at 07:43 PM, Paul Bender wrote:

1) Bob is really the center of this project for most of us, and we'd like
to keep it that way
2) We want to find ways we can reduce Bob's load.
3) Reviews on pull requests are good, but people are indifferent about
requiring them.
4) Testing is good, let's keep doing that, but let's work on the reliability.
5) Most of us just want to continue making good contributions to JMRI.
6) Perhaps we need to work on our team communication skills.


Paul Bender
 


On Jul 20, 2020, at 5:54 AM, Andrew Crosland <andrew@...> wrote:
In my case I think I need a deeper understanding of unit testing and how to write good test cases, e.g. knowing what facilities are available for running/monitoring tests and logging results. Is there a good primer for JUnit (is that what we use now), and for how to monitor test results and log them?

We have some documentation on testing JMRI (https://www.jmri.org/help/en/html/doc/Technical/JUnit.shtml), but I just skimmed through it and it is outdated due to the change to JUnit 5.

I’ll work on updating that today, and see if I can find some good tutorials on testing in general to add as links.

Paul


danielb987
 

Thanks. Please add links to the Javadoc of Jemmy. The link "Jemmy Javadoc" in the section "Using Jemmy" doesn't seem to work.

Daniel



danielb987
 

Some thoughts about testing:

I start with writing tests for small simple methods. It's often easier to write tests for simple methods than complex ones; it gives me experience writing tests, and by making the simple parts less fragile, the more complex parts get less fragile as a result. (If the bricks fall apart, the house will fall apart.)


I find it very useful to look at the coverage. For example, I run:

mvn test -Dtest="jmri.jmrit.beantable.LogixNGTableActionTest,jmri.jmrit.logixng.**.*Test"

to test my code and then run:

ant coveragereport

to generate the coverage report. That creates the folder "coveragereport" with one subfolder for each package in JMRI. For example, the file "coveragereport/jmri/Conditional.html" shows the coverage of the class jmri.Conditional.

The coverage report lets me see which parts of the code already have tests and which parts don't.


If the code has some constant values that are important, for example:

public final static int TURNOUT_THROWN = 2;
public final static int TURNOUT_CLOSED = 4;

I write tests that check the values of these, to prevent the values from being changed by mistake. See the class jmri.ConditionalTest as an example of that. I do this for constants that must stay constant over time, for example if the value is stored in XML files or used in communication with a microcontroller. This has the effect that if somebody really wants to change these constants, they have to change the constant in two places and are therefore forced to think a second time about what they are doing. A minimal sketch of such a test is below.
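Here is a minimal sketch of such a constant-guard test under JUnit 5, using the two constants above. The Constants holder class here is just a stand-in for wherever the values actually live (the real example is jmri.ConditionalTest, as mentioned); the point is that the expected values are deliberately written out as literals:

import org.junit.jupiter.api.Assertions;
import org.junit.jupiter.api.Test;

public class TurnoutConstantsTest {

    // Stand-in for the real class holding the constants.
    static class Constants {
        public final static int TURNOUT_THROWN = 2;
        public final static int TURNOUT_CLOSED = 4;
    }

    // These values are persisted (e.g. in XML files), so they must
    // not drift between releases. Literals on purpose: changing a
    // constant now requires touching two places.
    @Test
    public void testConstantsAreStable() {
        Assertions.assertEquals(2, Constants.TURNOUT_THROWN);
        Assertions.assertEquals(4, Constants.TURNOUT_CLOSED);
    }
}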


And last but not least: every single test counts! I did some work on improving coverage of Logix, mostly testing small simple methods in the Logix classes. That work got far from complete, but at least the Logix code is less fragile now than it was before.

Daniel


Pete Cressman
 

I wish to add my gratitude for all the work Bob has done for us over the years. He was always tolerant of my ignorance and ineptitude, and was kind when offering corrective advice. My regret is for taking too much of his time on things I should have figured out for myself. For this I sincerely apologize.

On the matter of consensus, I support Paul's list.

In the past I often felt that the PR reviews were authoritarian enforcement of an ever-expanding set of rules. Even though at the time I knew they were not intended that way, I could not escape the impression that these arbitrary requirements had to be met in order to get my code merged.

In fairness, there were several occasions when the reviews pointed out things that improved the quality of my code. More often, however, they were expressions of style and preference.

But now times have changed, and I'm afraid the joys of 'code wins' are gone, from the days when JMRI was a playground to experiment with ideas and get the reactions of users in cycles of response and request.

Now JMRI is a brand, a product, that must meet the quality expectations of other brands and products, e.g. the NMRA, Kalmbach Media, model railroad manufacturers and a huge user community.

So I definitely support the use of peer review of PRs, and I will pay much closer attention to the links under the 'Techniques and Standards' sidebar on the Developers page. But I think I will restrict my efforts to the repair and improvement of the current feature set.

Best Regards,
Pete Cressman


Paul Bender
 




On Jul 20, 2020, at 3:44 PM, Pete Cressman <pete_cressman@...> wrote:

In the past I often felt that the PR reviews were authoritarian enforcement of an ever-expanding set of rules. Even though at the time I knew they were not intended that way, I could not escape the impression that these arbitrary requirements had to be met in order to get my code merged.

In fairness, there were several occasions when the reviews pointed out things that improved the quality of my code. More often, however, they were expressions of style and preference.

I want to get away from that style-and-preference nitpicking. To the extent that I may have been to blame for some of that, I apologize.

My rule of thumb, something I tried to instill in my students when I was teaching, was that you have to make it work first, then you can make it pretty.

(One of my coworkers expresses the same idea as "working, right, fast", which means make it work, then make it stylistically correct, and lastly look at efficiency.)

But now times have changed, and I'm afraid the joys of 'code wins' are gone, from the days when JMRI was a playground to experiment with ideas and get the reactions of users in cycles of response and request.

This is absolutely not what I want to hear in this thread.  I still think there is a lot of room to play with new ideas.

We don't want to break existing code, but I still think we want to encourage developers (new and old) with new ideas. This is how we grow to meet the future needs of model railroaders.

As an example, Daniel, with his Analog and Digital IO classes, has found a way to introduce a pair of new object types into the system. Will they result in any new hardware types or new uses? It is hard to say right now, but the idea is now out there and we should encourage its growth.

Some of us old timers here may have lost sight of that, and this thread has really brought that into focus for me.

Now JMRI is a brand, a product, that must meet the quality expectations of other brands and products; i.e. the NMRA, Kalmbach Media, Model Railroad manufacturers and a huge user community.

While all this is true, JMRI is also how some of us spend some or all of our hobby time.  It is our way of giving back to the community.

Paul


Paul Bender
 

All,

I want to come back to something Daniel said about peer review:
On Jul 19, 2020, at 7:18 PM, danielb987 <db123@...> wrote:
But if we are all allowed to do peer review even when we don't fully understand the code, I think it may work. And if I'm uncertain, I can wait a day or two to see if someone else does a peer review, and if not, I can do it.

If we all can do peer reviews, and nobody can veto a PR, we can get around the harshest voice. If you create a PR and I disagree with it and give a bad review, but Paul Bender does his review and approves it, the PR can be merged.

I think it's important that nobody has the right to veto a PR for this to work.
The mechanics of this may not work, at least not using the built-in tools.

GitHub lets you require one or more approving reviews before merging into protected branches, but if there is a review requesting changes, then the PR cannot be merged.

Perhaps there is a GitHub integration/action we can use to apply custom rules to the process.

Now, that said, IF we are going to require reviews (and I say "if" because there are still at least two voices I would like to hear from: Randall Wood and Dave Sand), then we need to determine the rules of engagement, so to speak. In other words, what constitutes something that should have changes requested, and what should just appear as a comment?

As I said in another response on this thread, my rule of thumb is that you have to make it work first; then you can make it pretty.

What that means to me for code reviews:
1) Walk through the code and see if it does what the developer writing it says it should. If you can't figure it out, ask questions. Those questions could come in the form of a "changes requested" review, but they could also be a comment. If you use a "changes requested" review here, make sure to re-review once the explanation is made OR changes are made to make the code more readable.

2) If you are capable of testing the code with real hardware, download it and try it out. If it doesn’t work, put up a changes requested review that includes how you made it fail.

Paul