About silos and hierarchies in software development

Disclaimer: this is NOT a rant about people. In most situations, all the devs I know want to deliver good work. This is a rant about organisations that impose such structures while calling themselves “an agile company”.

To give you some context: a digital product, sold online as a subscription. The application in my scenario is the usual admin portal to manage customers and get an overview of their payment situation (balance, etc.).
The application is built and maintained by a frontend team. The team uses the GraphQL API built and maintained by a backend team. Every team has a team lead, and above all of them there is at least one further lead. (Of course there is also a lot of other middle management.)

Some time ago somebody must have decided to include in the API a field called “total” containing the balance of the customer so that it can be displayed in the portal. Obviously I cannot know what happened (I’m just a user of this product), but the fact is, this total was implemented as an integer. Do you see the problem? We are talking about money displayed on a website, about a balance which is almost never a whole number. This small mistake made the whole feature unusable.

Point 1: Devs implement technical requests instead of improving the product 
I don’t know whether the developer who implemented this made an error by not thinking about what this total should represent, or whether he or she simply didn’t have any e-commerce experience, but that is not my point. My point is that this person was obviously not involved in the discussion about this feature, about why it is needed and what the benefit is. I can see with my mind’s eye how this feature was turned into code: the team lead, software lead (xyz lead) decided that this task had to be done. The task didn’t refer to the customer benefit; it stripped everything down to “include a new property called total having as its value the sum of some other numbers”. I can see it because I have had a lot of meetings like this. I once delivered a string to another team, and this string was sometimes a URL and sometimes a name. But I did that in a company which didn’t call itself agile.

Point 2: No chance for feedback, no chance for commitment to the product
Again: I wasn’t there when this feature was requested and built, I can only imagine that this is what happened, but it really doesn’t matter. It is not about a specific company or specific people, but about the ability to deliver features instead of just some lines of code sold as a product. Back to my “total”: this code was reviewed, integrated, deployed to development, then to some in-between stages and finally to production. NOBODY in this whole chain asked themselves whether the new field included in a public(!) API was implemented as it should be. And I would bet that nobody from the frontend team was asked to review the API to see whether their needs could be fulfilled.

Point 3: Power play and information hiding make teams artificially slow (and kill innovation and the wish to commit to the product they build)
If this structure weren’t built on power and position and titles, then the first person observing the error could have talked directly to the developer in the team responsible for the feature and had it corrected. They could have changed it in a few minutes (this was the first person noticing the error, ergo nobody was using it yet) and everybody would have been happy. But not if you have leads of every kind who must be involved in everything (because that is why they have their position, isn’t it?). Then somebody young and enthusiastic wanting to deliver a good product creates a JIRA ticket. In a week or two this ticket will eventually be discussed (by the leads, of course) and analyzed, and it will eventually be moved forward in the backlog – or not. It doesn’t matter anyway, because the frontend team had a deadline and they had to solve their problem somehow.

Epilogue: the culture of “talk only to the leads” kills cooperation between teams
At this moment I finally understood the reason behind another annoying behavior in the admin panel: the balance is calculated in the frontend and equals the sum of the displayed items. I needed some time to discover this and kept wondering WTF… Now I can see what happened: the total in the API was not a total (only the integer part of the balance), and the ticket had to be finished, so somebody had the idea to build a total by adding up the values of the displayed items. Unfortunately this was a very short-sighted idea, because it only works if you have fewer than 25 payments, the default number of items per page. Otherwise you can use the calculator app to add up the single totals on every page…
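In plain code, that workaround amounts to something like the following sketch (the names are made up; the logic is what the admin panel appears to do):

// Hypothetical sketch of the frontend workaround: the displayed "balance"
// is only the sum of the payments on the current page, not the real balance.
const pageSize = 25;        // default number of items per page
const allPayments = [];     // in reality: every payment of the customer
const currentPage = allPayments.slice(0, pageSize);
const displayedBalance = currentPage.reduce((sum, payment) => sum + payment.amount, 0);
// Correct only while the customer has at most 25 payments; after that the
// displayed balance and the real balance silently diverge.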

All this is wrong on so many levels! For every person involved it is a lose-lose situation.

What do you think? Is it just me arguing for a better “habitat for devs”, or is it time for this kind of structure to disappear?

Continuous Delivery Is a Journey – Part 3

In the first part I described why I think that continuous delivery is important for an adequate developer experience, and in the second part I drew a rough picture of how we implemented it in a product development effort spanning five teams. Now it is time to discuss the biggest impact – and the biggest benefit – regarding the development of the product itself.

Why do more and more companies, technical and non-technical people, want to change towards an agile organisation? Maybe because the decision makers have understood that waterfall is rarely purposeful? There are a lot of motives – beside the rather dumb one “because everybody else does it” – and I think there are two intertwined reasons: the speed at which the digital world changes and the ever increasing complexity of the businesses we try to automate.

Companies and people have finally started to accept that they don’t know what their customers need. They have started to feel that the customer – and the market – has become more and more demanding regarding the quality of the solutions they get. This means that until Skynet is born (sorry, I couldn’t resist 😁) we software developers, product owners, UX designers, etc. have to decide which solution is best to solve the problems of that specific business, and we have to decide fast.

We have to deliver fast, get feedback fast, learn and adapt to the consequences even faster. We have to do all this without downtime, without breaking existing features and – most importantly for many of us – without getting a heart attack every time we deploy to production.

IMHO these are the most important reasons why every product development team should invest in CI/CD.

The last missing piece of the jigsaw, which allows us to deliver features fast (and continuously) without disturbing anybody and without losing control over how and when features are released, is called the feature toggle.

A feature toggle[1] (also feature switch, feature flag, feature flipper, conditional feature, etc.) is a technique in software development that attempts to provide an alternative to maintaining multiple source-code branches (known as feature branches), such that a feature can be tested even before it is completed and ready for release. A feature toggle is used to hide, enable or disable the feature during run time. For example, during the development process, a developer can enable the feature for testing and disable it for other users.[2]

Wikipedia

The concept is really simple: a feature stays hidden until somebody or something decides that it is allowed to be used.

// Show or hide the entry point of a feature depending on its toggle state.
function useNewFeature(featureId) {
  const element = document.getElementById(featureId);
  const feature = config.getFeature(featureId);
  // The disabled state hides the feature's entry point from the user.
  element.style.display = feature.isEnabled ? 'block' : 'none';
}

As you can see, implementing a feature toggle is really that simple. Adopting the concept will take some effort though:

  • Strive for only one toggle (one if) per feature. At the beginning this will be hard or even impossible to achieve, but it is very important to define it as a mid-term goal. Having only one toggle per feature means the code is highly decoupled and very well structured.
  • Place this (main) toggle at the entry point (a button, a new form, a new API endpoint), the first interaction point with the user (person or machine); in the disabled state it should hide this entry point.
  • The enabled state of the toggle should lead to new services (in the microservice world), new arguments or new functions, all of them implementing the behavior for feature.enabled == true (see the sketch after this list). This will lead to code duplication, and yes, that is totally OK. I look at it as a very careful refactoring without changing the initial code. Implementing a new feature should not break or eliminate existing features. The tests (all kinds of them) should be organized similarly: in different files, as duplicated versions, implemented for each state.
the different states of the toggle lead to clearly separated paths
  • Through the toggle you gain real freedom to make mistakes or to build the wrong feature. At the same time you can always enable the feature and show it to the product owner or the stakeholders. This means the feedback loop is reduced to a minimum.
  • This freedom has a price, of course: after the feature is implemented, the feedback is collected and the decision to enable the feature has been made, the source code must be cleaned up: all code for feature.enabled == false must be removed. This is why it is so important to create the different paths in a way that makes the risk of introducing a bug virtually zero. We want to reduce the workload, not increase it.
  • Toggles don’t have to be temporary: business toggles (e.g. some premium features or a “maintenance mode”) can stay forever. It is important to define beforehand what kind of toggle is needed, because business toggles will always be part of your source code. The default value for this kind of toggle should be false.
  • The default value for temporary toggles should be true: activated during development, deactivated on production.
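A minimal sketch of what such clearly separated paths could look like (the checkout names are invented for illustration):

// The toggle decides at the single entry point which implementation runs;
// the old and the new code never mix.
function renderCheckout(config) {
  if (config.getFeature('new-checkout').isEnabled) {
    return renderNewCheckout();    // new path, intentionally duplicated code
  }
  return renderLegacyCheckout();   // existing behavior, left untouched
}

function renderLegacyCheckout() { /* existing, unchanged code */ }
function renderNewCheckout() { /* new feature, developed behind the toggle */ }

Once the feature is accepted, the if statement and renderLegacyCheckout are deleted and only the new path remains.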

One piece of advice regarding the tooling: start small; a config map in Kubernetes, a database table or a JSON file somewhere will suffice. Later on new requirements will appear, like notifying the client UI when a toggle changes or allowing the product owner to decide when a feature is enabled. That will be the moment to think about the next steps, but for now it is more important to adopt this workflow, adopt this mindset of discipline to keep the source code clean, learn the techniques for organizing the code base and ENJOY HAVING CONTROL over the impact of deployments, feature decisions and stress!
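To illustrate the “start small” advice: the whole toggle store can be a single JSON file (or a config map mounted as one). This is only a sketch with made-up names, not a specific library:

// toggles.json, e.g. mounted from a Kubernetes config map:
// { "new-checkout": false, "maintenance-mode": false }
const fs = require('fs');

const toggles = JSON.parse(fs.readFileSync('toggles.json', 'utf8'));

const config = {
  getFeature(featureId) {
    // business toggles are listed explicitly (default false); unknown temporary
    // toggles default to true (remember to deactivate them on production)
    const isEnabled = featureId in toggles ? toggles[featureId] : true;
    return { isEnabled };
  },
};

module.exports = config;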

That’s it, I have shared all of my thoughts regarding this subject: your journey of delivering continuously can start (or continue 😉) now.

P.S. It is time for the one sentence about feature branches:
Feature toggles will never work with feature branches. Period. This means you have to decide: move to trunk-based development or forget continuous delivery.

P.P.S. For most languages there are feature toggle libraries, frameworks, even platforms; it is not necessary to write a new one. There are libraries for all kinds of complexity in how the state can be calculated (account state, persons, roles, time settings), just pick one.

Update:

As pointed out by Gergely on Twitter, there is a very good article on Martin Fowler’s blog describing the different kinds of feature toggles and the power of this technique extensively: Feature Toggles (aka Feature Flags)

Continuous Delivery Is a Journey – Part 2

After describing the context a little bit in part one, it is time to look at the individual steps the source code must pass through in order to be delivered to the customers. (I’m sorry, but it is a quite long part 🙄)

The very first step starts with pushing all current commits to master (if you work with feature branches you will probably encounter a new level of self-made complexity which I don’t intend to discuss here).

This action triggers the first checks and quality gates, like licence validation and unit tests. If all checks are “green”, the new version of the software is saved to the repository manager and tagged as “latest”.

Successful push leads to a new version of my service/pkg/docker image

At this moment continuous integration is done, but the features are far from being usable by any customer. I have a first piece of feedback that I didn’t break any tests or other basic constraints, but that’s all, because nobody can use the features: nothing is deployed anywhere yet.

Well, let Jenkins execute the next step: deployment to the Kubernetes environment called integration (a.k.a. development).

Continuous delivery to the first environment including the execution of first acceptance tests

At this moment all my changes are tested to see whether they work together with the currently integrated features developed by my colleagues and whether the new features are evolving in the right direction (or are done and ready for acceptance).

This is not bad, but what if I want to be sure that I didn’t break the “platform”? What if I don’t want to disturb everybody else working on the same product because I made some mistakes – but I still want to be a human, ergo be able to make mistakes 😉? This means that the behavioral and structural changes introduced by my commits should be tested before they land on integration.

These obviously have to be a different set of tests. They should test whether the whole system (composed of a few microservices, each having its own data persistence, and one or more UI apps) is working as expected, is resilient, is secure, etc.

At this point the power of Kubernetes (k8s) and ksonnet came as a huge help. Having k8s in place (and having the infrastructure as code) it is almost a no-brainer to set up a new environment, wire up the individual systems in isolation and execute the system tests against it. This needs not only the k8s part as code but also the resources deployed and running on it. With ksonnet every service, deployment, ingress configuration (which manages external access to the services in a cluster) and config map can be defined and configured as code. ksonnet not only supports deploying to different environments but also offers the possibility to compare them. There are a lot of tools offering these possibilities, not only ksonnet. It is important to choose a fitting tool, and it is even more important to invest the time and effort to configure everything as code. This is a must-have in order to achieve real automation and continuous deployment!

Good developer experience also means simplified continuous deployment

I will not include any ksonnet examples here; they have great documentation. What is important to realize is the opportunity offered by such an approach: if everything is code, then every change can be checked in. Everything checked in can be observed/monitored, can trigger pipelines and/or events, can be reverted, can be commented on – and, the feature that helped us in our solution, can be tagged.

What happens in continuous delivery? A change in the VCS triggers a pipeline, the fitting version of the source code is loaded (either as source code, like the ksonnet files, or as a package or docker image), the configured quality gate checks are verified (the runtime environment is wired up, the specs with the referenced version are executed) and in case of success the artifact is tagged as “thumbs up” and promoted to the next environment. We started doing this manually to gather enough experience to automate the process.

Deploy manually the latest resources from integration to the review stage

If you have all this working you have finished the part with the biggest effort. Now it is time to automate and generalize the individual steps. After continuous integration the only changes will occur in the ksonnet repo (all other source code changes happen before), which is called the deployment repo here.

Roll out, test and eventually roll back the system ready for review

I think this post is already too long. In the next part (I think it will be the last one) I would like to write about the last essential method, how to deploy to production without annoying anybody (no secret here, this is what feature toggles were invented for 😉), and about some open questions and decisions we encountered on our journey.

Every graphic was created with plantuml, thank you very much!

to be continued …

Continuous Delivery Is a Journey – Part 1

Last year my colleagues and I had the pleasure of spending two days with @hamvocke and @diegopeleteiro from @thoughtworks reviewing the platform we had created. One essential part of our discussions was about CI/CD, described like this: “Think about continuous delivery as a journey. Imagine every git push lands on production. This is your target, this is what your CD should enable.”

Even if (or maybe because) this thought scared the hell out of us, it became our vision for the next few months, because we saw the great opportunities we would gain if we were able to work this way.

Let me describe the context we were working in:

  • Four business teams, 100% self-organized, each owning 1…n Self-contained Systems (SCS), creating microservices running as Docker containers, orchestrated with Kubernetes, hosted on AWS.
  • Boundaries (as in Domain Driven Design) defined based on the business we were in.
  • Each team having full ownership of and full accountability for its part of the business (represented by its SCS).
  • Basic heuristics regarding source code organisation: “share nothing” about business logic, “share everything” about utility functions (in OSS manner), about the experiences you have made, the lessons you have learned, the errors you have made.
  • Ensuring code quality and software quality is 100% the team’s responsibility.
  • You build it, you run it.
  • One Platform-as-a-Service team to enable these business teams to deliver features fast.
  • GitLab as VCS, Jenkins as build server, Nexus as package repository.
  • Trunk-based development, no cherry picking, “roll fast forward” over roll back.
Teams
4 Business Teams + 1 Platform-as-a-Service Team = One Product

The architecture we chose was meant to support our organisation: independent teams, able to work and deliver features fast and independently. They should decide themselves when and what they deploy. In order to achieve this we defined a few rules regarding inter-system communication. The most important ones are:

  • Event-driven architecture: no synchronous communication, only asynchronous communication via the Domain Event Bus (a rough sketch follows below)
  • Non-blocking systems: every SCS must remain functional (possibly with a reduced feature set) even if all the other systems are down
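As an illustration of these two rules, inter-system communication could look like the sketch below; the in-memory bus and the event names are invented for the example, a real setup would use a proper message broker:

// Minimal in-memory stand-in for the Domain Event Bus (hypothetical API).
const handlers = {};
const eventBus = {
  subscribe: (type, handler) => { (handlers[type] = handlers[type] || []).push(handler); },
  publish: (type, event) => { (handlers[type] || []).forEach((handle) => handle(event)); },
};

// The invoicing SCS reacts to the event whenever it consumes it.
eventBus.subscribe('order-placed', (event) => {
  console.log(`create draft invoice for order ${event.orderId}`);
});

// The ordering SCS publishes the event and moves on; it never waits for an answer,
// so it keeps working even when the invoicing SCS is down.
eventBus.publish('order-placed', { orderId: '42', occurredAt: new Date().toISOString() });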

We had only a couple of exceptions to these rules. One example: authentication doesn’t really make sense in an asynchronous manner.

Working in self-organized, independent teams is a really cool thing. But

with great power there must also come great responsibility

Uncle Ben to his nephew

Even though we had set some guardrails regarding the overall architecture, the teams still had ownership of their internal architecture decisions. Since at the beginning we didn’t have continuous delivery in place, every team alone was responsible for deploying its systems. Due to the missing automation we were not only predestined to make human errors, we were also blind to the couplings between our services. (And of course we spent a lot of time doing stuff manually instead of letting Jenkins or GitLab or some other tool do it for us 🤔)

One example: every one of our systems had at least one React app and a GraphQL API as the main communication (read/write/subscribe) channel. One of the best things about GraphQL is the possibility to include the GraphQL schema in the React app and this way have the API interface definition included in the client application.

Isn’t this cool? It can be. Or it can lead to some very smelly behavior, to really tight coupling and to the inability to deploy the app and the API independently. And just like my friend @etiennedi says: “If two services cannot be deployed independently they aren’t two services!”
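To make the coupling tangible, here is a hedged sketch (the type is invented) of what bundling the schema into the client means: the React app carries its own copy of the API contract, so every breaking schema change forces app and API to be deployed together:

// A copy of the API schema lives inside the client bundle.
import { buildSchema } from 'graphql';

const schemaSDL = `
  type Customer {
    id: ID!
    balance: Float!  # if the API changes this field, the bundled copy is already stale
  }
`;

export const apiSchema = buildSchema(schemaSDL);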

This was the first lesson we learned on this journey: if you don’t have a CD pipeline, you will most probably hide the flaws of your design.

One can surely ask “what is the problem with manual deployment?” – nothing, if you only have a few services to handle, if everyone in your team knows about these couplings and dependencies and is able to execute the very precise deployment steps needed to minimize the downtime. But otherwise? This method doesn’t scale, this method is not very professional – and the biggest problem: this method ignores the possibilities offered by Kubernetes to safely roll out, take down, or scale everything you have built.

Having an automated, standardized CD pipeline as described at the beginning – with the goal that every commit lands on production within a few seconds – forces everyone to think about the consequences of their commits, to write backwards-compatible code, to become a more considerate developer.

to be continued …

Base your decisions on heuristics and not on gut feeling

As developers we very often tackle problems which can be solved in various ways. It is OK not to know how to solve a problem. The real question is: how do we decide which way to go? 😯

In these situations I often have a feeling rather than a concrete logical reason for my decisions. These gut feelings are in most cases correct – but that doesn’t help me if I want to discuss them with others. It is not enough to KNOW something. If you are not a nerd from the 80s (working alone in a den), it is crucial to be able to formulate, explain and share the thoughts leading to those decisions.

I finally found a solution for this problem when I attended the session by Mathias Verraes about Design Heuristics at KanDDDinsky.

The biggest takeaway seems like a no-brainer, but it makes a huge difference: formulate and visualize your heuristics so that you can talk about concrete ideas instead of having to memorize everything that was said – or what you think was said.

Using this methodology …

  • … unfounded opinions like “I think this is good and this is bad” won’t be discussed. The question is why something is good or bad.
  • … loops back to subjects already discussed are avoided
  • … the participants can see all criteria at once
  • … the participants can weight the heuristics and thus find the probably best solution

What is necessary for this method? Actually nothing but a whiteboard and/or some stickies. And maybe taking some time beforehand to list your design heuristics. These are mine (for now):

  • Is this a solution for my problem?
  • Do I have to build it or can I buy it?
  • Can it be rolled out without breaking either my features or anything else outside my control?
  • Does it break any architecture rules or clean code rules? Do I have a valid reason to break these rules?
  • Can it lead to security leaks?
  • Is it over-engineered?
  • Is it much too simple, does it feel like a shortcut?
  • If it is a shortcut, can it be corrected in the near future without having to throw everything away? In other words: does my shortcut drive my code in the right direction, only in a more shallow way?
  • Does this solution introduce a new stack, i.e. a new unknown complexity?
  • Is it fast enough (for now and the near future)?
  • … to be continued 🙂

The video of the talk can be found here. It was a workshop disguised as a talk (thanks again Mathias!!); we could have continued for another hour if it weren’t for the cold beer waiting 🙂

My KanDDDinsky distilled

KanDDDinsky

The second edition of “KanDDDinsky – The art of business software” took place on the 18th and 19th of October 2018. For me it was the best conference I have attended in a long time: the talks I went to formed a coherent picture, and the speakers sometimes made me feel like I was at an Open Space, an UnConference. It felt like a great community event with the right number of people, the right amount of knowledge and enough time for great discussions during the two days.

These are my takeaways and notes:

Michael Feathers “The Design of Names and Spaces” (Keynote)

  1. Do not be dogmatic; sometimes allow the ubiquitous language to drive you to the right data structure – but sometimes it is better to make the decisions the other way around.
  2. Build robust systems, follow Postel’s Law

Be liberal in what you accept, and conservative in what you send.

If you ask me, this principle shouldn’t only be applied to software development…

Kenny Baas-Schwegler – Crunching ‘real-life stories’ with DDD Event Storming and combining it with BDD

I learned so much from Kenny that I had to write it up in a separate blog post.

Update: the video of this talk can be seen here

Kevlin Henney – What Do You Mean?

This talk was extremely entertaining and informative; you should watch it once it is published. Kevlin addressed so many thoughts around software development that it is impossible to pick just one message. And yes: the sentence “It’s only semantics” still makes me angry!

Codified Knowledge
It is not semantics, it is meaning that we turn into code

Here is the video to watch.

Herendi Zsofia – Encouraging DDD Curiosity as a Product Owner

It was interesting to see a product owner talking about her efforts to get the developers interested in the domain. It felt somewhat curious because we were at a DDD conference – I’m sure everybody present was already interested in building the right features, fitting the domain and the problem – but of course we are only a minority among the coding people. She belongs to the equally clear minority of product owners who are openly interested in DDD. Thank you!

Mathias Verraes – Design Heuristics

This session was so informative that I had to write a separate post about all the things I learned.

J. B. Rainsberger – Some Underrated Elements of Success for the Modern Programmer

J.B. is my oldest “twitter-pal” and in the past 5+ years we discussed about everything from tests to wine or how to find whipped cream in a Romanian shopping center. But: we never met in person 😥  I am really happy that Marco and Janek fixed this for me!

The talk was just as I expected: clear, accurate, very informative. Here is a small subset of the tips shared by J.B.

Save energy not time!

There are talks which cannot be distilled. J. B.’s talk was exactly one of those. I really encourage everybody to invest the 60 minutes and watch it here.

Statistics #womenInTech

I had the feeling there were a lot of women at the conference, even if they represented “only” 10% (20 out of 200) of the participants. But still: 5-6 years ago I was mostly alone, and that is not the case anymore. This is great; I really think that something has changed in the last few years!

Event Storming with Specifications by Example

Event Storming is a technique defined and refined by Alberto Brandolini (@ziobrando). I fully agree with this statement about the method: Event Storming is, for now, “The smartest approach to collaboration beyond silo boundaries”.

I don’t want to explain what Event Storming is; the concept has been around in the IT world for a few years already and there are a lot of articles and videos explaining the basics. What I want to emphasize is WHY we need to learn and apply this technique:

The knowledge of the product experts may differ from the assumptions of the developers
KanDDDinsky 2018 – Kenny Baas-Schwegler

On 18-19.10.2018 I had the opportunity not only to hear a great talk about Event Storming but also to take part in a two-hour hands-on session, all of this powered by KanDDDinsky (for me the best conference I visited this year) and by @kenny_baas (and @use case driven and @brunoboucard). In the last few years I have participated in a few Event Storming sessions, mostly at community events, twice at cleverbridge, but this time it was different. Maybe ES is like unit testing: you have to practice and reflect on what went well and what must be improved. Anyway, this time I learned and observed a few rules and principles that were new to me, and saw their effects on the outcome. This is what I want to share here.

  1. You need a facilitator.
    Both ES sessions I was part of at cleverbridge ended in frustration. All participants were willing to try it out, but we had nobody to keep the chaos under control. Because, as Kenny said, “There will be chaos, this is guaranteed.” And this is OK; we – devs, product owners, sales people, etc. – have to learn fast to understand each other without learning the job of the “other party” or writing a glossary (I tried that already and it didn’t help 😐 ). We also need somebody who is able to feel and steer the dynamics in the room.


    The tweets were written during a discussion about who could be a good facilitator. You can read the whole thread on Twitter if you like. Another good article, summarizing the first impressions of @mathiasverraes as a facilitator, is this one.

  2. Explain AND visualize the rules beforehand.
    I will skip the basics for now, like the necessity of a very long free wall and the fact that the events should visualize the business process evolving over time.
    These are the additional rules I learned in the hands-on session:

      1. No dev-talk! The developer is per se a species able to transform EVERYTHING into patterns and techniques and tables and columns, and this ability is not helpful when we want to find out whether we can solve a problem together. Dev-speak drives the discussion towards technical “solvability” based on the current technical constraints, like the architecture. With ES we want to create or deepen our ubiquitous language, and this surely does not include the word “Message Bus” 😉
      2. Every discussion should happen at the board. There will be a lot of discussions, and we tend to talk a lot about opinions and feelings. This won’t happen if we keep the discussion about the business processes and events which are visualized in front of us – on the board.
      3. No discussions about persons who are not in the room. Discussing what we think other people might think is not productive and cannot lead to real results. Do not waste time on it; time is too short anyway.
      4. Open questions occurring during the storming should not be discussed (see the point above) but marked prominently with a red sticky. Do not waste time.
      5. Do not discuss everything, look for the money! The most important goal is to generate benefit, not to create the most beautiful design!

Tips for the Storming:

  • “One person, one sharpie, one set of stickies”: everybody has important things to say, nobody should stay away from the board and the discussions.
  • Start by describing the business process, the business rules, the eventually consistent business decisions (aka policies) and the other constraints you – or the product owner to whom the business “belongs” – would like to model, and write the most important information somewhere visible to everybody.
  • Explain how ES works: every business-relevant event should be placed on a timeline and should be formulated in the past tense. Business-relevant is everything somebody (Kibana is not a person, sorry 😉 ) would like to know about.
  • Explain the rules and the legend (you need a color legend to be able to read the results later).
  • Give the participants time (we had 15 minutes) to write every business event they consider important on orange stickies. Also write the business rules (the wide dark red ones) and the product decisions (the wide pink ones) on stickies and put them where they apply: the rules before the event, the policies after the event that triggers them.
  • Start putting the stickies on the wall, throw away the duplicates, discuss and maybe reformulate the rest. After you are done, try to tell the story based on what you can read on the wall. After that, read the stickies from the end to the start. With these methods you should be able to discover whether you have gaps or used wrong assumptions while modelling the process you wanted to describe.
  • Mark known processes (like “manual process”) with the same stickies as the policies and do not waste time discussing them further.
  • Start discussing the open questions. Almost always there are different ways to answer these questions, and if you cannot decide in a few seconds, postpone the decision. But as a default: decide to create the event and measure how often it happens, so that later on you can make the right decision!
    Event Storming – measure now, decide later


    Another good article on this topic is this one from @thinkb4coding

At this point we could have continued with the process to find aggregates and bounded contexts, but we didn’t. Instead we switched the methodology to Specification by Example – in my opinion a really good idea!

Specifications
Event Storming enhanced with Specifications by Example

We prioritized the rules and policies, and for the most important ones we defined examples – just like we do when we discuss a feature and try to find the algorithm.

Example: in our ticket reservation business we had a rule saying “no overbooking, one ticket per seat”. In order to find the algorithm we defined different examples:

  • 4 tickets should be reserved and there are 5 tickets left
  • 4 tickets should be reserved and there are 3 tickets left
  • 4 tickets should be reserved and all tickets are already reserved.

With this last step we can verify whether our ideas and assumptions will work out, and we can gain even more insights into the business rules and business policies we defined – and all this not as a developer writing if-else blocks but together with the other stakeholders. At the same time, the non-techie people will understand in the future what impact these rules and decisions have on the product we build together. Having the specifications already defined is another great benefit, as these are the acceptance tests which will be built by the developers and read and used by the product owner.
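A hedged sketch of how these three examples could end up as acceptance tests (written in Jest-style JavaScript; the reserve function, its result shape and the decision to reject partial reservations are assumptions made only for the illustration):

// Hypothetical reservation logic, only here to keep the sketch self-contained.
function reserve({ requested, available }) {
  // assumption: no partial reservations, either all requested seats or none
  return { reserved: requested <= available ? requested : 0 };
}

describe('rule: no overbooking, one ticket per seat', () => {
  it('reserves 4 tickets when 5 tickets are left', () => {
    expect(reserve({ requested: 4, available: 5 }).reserved).toBe(4);
  });

  it('reserves nothing when only 3 tickets are left', () => {
    expect(reserve({ requested: 4, available: 3 }).reserved).toBe(0);
  });

  it('reserves nothing when all tickets are already reserved', () => {
    expect(reserve({ requested: 4, available: 0 }).reserved).toBe(0);
  });
});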

You can read more about the example and the results on the blog of Kenny Baas-Schwegler.

I hope I have covered everything and have succeeded in reproducing the most important learnings of the two days (I tend to overlook things, thinking “it is obvious”). If not: feel free to ask, I will be happy to answer 🙂

Happy Storming!

Update: we had our first event storming and it was good!

Unfortunately we didn’t get to define the examples (not enough time). Most of the rules described above were accepted really well (explain the rules, create a legend for the stickies, flag everything out of scope as an open question). Where I as facilitator need more training is in keeping the discussion ON the board and not beside it. I also have a few new takeaways:

  • The PO describes his feature and gives answers, but he doesn’t write stickies. The main goal is to share his vision. This means he should test whether we have understood the same vision. As a bonus, he should complete his own understanding of the feature through the questions which come up during the storming.
  • One color means one action/meaning. We had policies and processes on the same red stickies and this was misleading.
  • If you have a really complex domain (like e-commerce for SaaS products in our case) or really complex features, start with one happy-path example. Define this example and create the event “stream” with it. At the end you should still add the other, not-so-happy-path examples.

Colleague bashing – surprise, it doesn’t help!

At every conference my colleagues and I attend, the topic of team culture pops up sooner or later as the cause of many or all problems. When we describe how we work, we inevitably end up at the statement “a self-organized, cross-functional organisation without a boss who has THE final say is naive and not realistic”. “You surely have a boss somewhere, you just don’t know it!” was one of the wildest answers we have heard recently, only because the other person was unable to process this picture: five self-organized teams, without bosses, without a CTO, without project managers, without any rules and requirements dumped on us from outside, without deadlines we would have to accept without objection. Instead, with self-imposed deadlines, with budgets, with freedom and responsibility in equal measure.

I am not talking about the law and what is written on paper: of course we have a CTO, a Head of Development and a CFO in the company; they just don’t decide when, what and how we do something. They define the frame within which the company management invests in the product and the endeavour, but the rest is done by us: POs, Scrum Masters and developers, together.

We have been working in this constellation for more than a year, and we can add about six months of lead time before that, until we were able to start this project based on Conway’s Law.

“Organizations which design systems […] are constrained to produce designs which are copies of the communication structures of these organizations.” [Wikipedia]

Conversely (and freely translated) this means: “as your organisation is, so will your product and your code be structured”. So we worked on our organisation. The goal was to build a responsible team that is free to dream, in order to build a new, great product without imposed shackles.

We now have this team, we now live this dream – which of course also has its shadows; after all, life is no pony farm :). The difference is: they are our problems, and we don’t dodge them, we solve them together.

Before you say “that is a stroke of luck, it normally doesn’t happen like that”, I would disagree. It didn’t just happen to us either; we worked on it (for roughly six months) and we keep working on it continuously. The trick, the key to this organisation, is an open feedback culture.

What does that mean, and how did we achieve it?

  • We learned to give and to receive feedback – and yes, that is not so easy. These are the rules:
    • All statements are subjective: “Yesterday, when I did the review, I saw this and that. I find that not good enough/dangerous for the following reasons. I could imagine that doing it this or that way could bring us to the goal faster.” You notice: never say YOU, everything in the first person, without preconceived opinions or assumptions.
    • All statements come with concrete examples. Statements like “I believe, I have the feeling, etc.” are opinions, not facts. You have to find an example, otherwise the feedback is not “admissible”.
    • Feedback is always formulated constructively. It doesn’t help to say what is bad; it is much more important to say what one should work on, e.g. “I know from my own experience that pair programming is very helpful in such cases.”
    • The person receiving the feedback has to listen without justifying themselves. They decide for themselves what they do with the feedback. Everybody who wants to improve will try to take the feedback to heart and work on themselves. You don’t have to prescribe that!
  • One-on-ones: these are feedback rounds between two people in a team, at the beginning together with the Scrum Master, until people get used to the phrasing (in the beginning we laughed the whole idea off), and later on just the two of them. Each time only in one direction (only one person receives feedback) and, for example, one week later in the other direction. The result is that by now we don’t need appointments anymore; we do it automatically, every time there is something to give feedback about.
  • Team feedback: this is the last stage and follows the same rules. It is held not only between teams but also between groups/guilds, like the POs or the architecture owners.

That’s it. For over a year I haven’t heard sentences like “it was the idiots from the other team who messed everything up” or “they won’t get it done anyway” or “why should I care, they checked in the bug”. And this working atmosphere gives you wings! (sorry for the copyright violation 😉 )

10 years of Open Space – my retrospective

Workshop day:

For a few years now there has been the option to extend the Open Space by one workshop day – in case two days of nerd talk are not enough for you 😉

This time I chose “Tensorflow: Programming Neural Networks” with Sören Stelzer – and it was great. Although it is a very difficult topic (the word voodoo came up more than once), I now know enough about machine learning and neural networks to get started properly. Let me put it this way: I now know what I know and, above all, what I don’t know and how we have to proceed. And you cannot expect more from a workshop. I also think that Sören is a great asset for our community, which has to keep evolving just like the IT world out there. Thank you very much for your commitment!

Actually, a big thank you to all trainers who get involved in community events!!

Insights from the next 48 hours – clustered:

Agile data-driven development – this was my own session (meaning I proposed the topic and was the topic owner, but that was it as far as duties go).

I wanted to hear tips and ideas on how to organize your work according to Scrum when you tackle topics like reporting, where the features are based on large amounts of data. It is one thing to write a test setup for two possible situations, and a completely different one to describe the whole variety of situations in reporting.

Takeaways:

  • We will have to live with the fact that our features, tests and expectations are eventually consistent 😀 What matters is that we make assumptions which we treat as “the truth” for a start.
  • Commission user labs.
  • Building measurements in long before their evaluation is OK and does not break the concept “every feature must have business value” – even if the real business value can only be evaluated in two years.
  • Aha moment: in the world of business teams there is no separate specialist department. I am in the reporting team, ergo I am the specialist department. (I like that; ugly word 😎 )

Pitfalls with React

  • Our internationalisation concept is right (split the texts by modules/areas/etc., have one common area, load everything into the state via an API).
  • Package recommendation: react-intl
  • Consider the topic as early as possible; later on it can really hurt.
  • DevTool recommendation: https://github.com/crysislinux/chrome-react-perf to see the performance of the individual React components.
  • (es)lint recommendation to avoid circular references: “import/no-internal-modules” (thanks @kjiellski)

When can Scrum work?

  • When there is the possibility to react to feedback, i.e. the developers are not resources but creative people.
  • The team in which I have the honour of helping to shape our product, and @cleverbridge, are leading the way when it comes to agile working.

People

  • You can take part in drinking games without drinking.
  • Dreaming at night that your partner has disappointed you and then being mad at him for the whole day is a women’s thing (confirmed by @AHirschmueller and @timur_zanagar) 😀

Addendum: I almost forgot that

  • thanks to @agross we had a super valuable session about dotfiles
  • DDD is currently being ruined by certification, serverless by hype
  • with the session by @a_mirmohammadi on the “Anonyme Abnehmer” (anonymous weight losers), the @devopenspace has definitely arrived in the category “there is nothing that doesn’t work here”