
Responsibility inversion

Responsibility inversion is an extremely common management anti-pattern. It occurs when a person in a more senior role (in terms of hierarchy or experience) does not delegate sufficient responsibility to those more junior than themselves, even though those juniors possess the relevant skills, ability or experience.

With the following diagram in mind, consider a typical career progression path, with a manager and their report(s) as an example:

Responsibility inversion principle

In the Venn diagrams above, the upper circles represent the skills and experience of the more senior role, and the lower circles represent those of the more junior role. The intersections represent the overlap in skills and experience.

The Responsibility Inversion Principle asserts that if you prevent someone from using their abilities, they will begin to lose them (shown on the left), whereas if they are allowed to exercise those abilities under the guidance of someone more senior, they’ll actually grow them.

One common example of an inverted responsibility situation you may encounter is the much-lambasted “ivory tower architect”. Instead of focusing on the higher-level concerns outside the logical scope of the senior developer role, they’ll spend time dictating all but the lowest-level aspects of their skill set. They do not trust or empower developers to make use of their experience – removing the opportunity for them to create or influence solutions, designs or any related decisions about the software they will be expected to deliver. Instead, developers are spoon-fed the results of the architect’s own (isolated) activity.

If you’ve employed talented people, this inversion will be a massive source of frustration which will slowly erode their skills (through lack of use), quality of work and output if left unchecked. The more astute will leave to pursue opportunities elsewhere long before it comes to that…

Talent retention aside (although you’ll be aware how big an issue that can be), the inverted approach doesn’t scale! Taking on as much as possible and delegating as little as possible creates a huge bottleneck that isn’t always obvious from outside the team. Time that should be invested in the important, higher-level areas is instead rushed, just to keep feeding work to their reports (compounding the frustration of those who could have done a better job, given the chance).

There are many potential reasons for this situation. It can emerge from a perceived lack of skills in the more junior employees – or from a lack of trust. It can also come from a desire of the senior person to (continue to) be seen as valuable or irreplaceable to the business – or simply as a result of unconscious actions (in ignorance of what they’re doing). Whatever the root cause, the result can be toxic in the long term.

Regardless of the role within an organisation, you will usually find a substantial amount of overlap in the overall skills and experience required to carry out the related day-to-day responsibilities, yet many organisational structures place people in silos – as if those skill sets were entirely mutually exclusive. Whilst this loses some economy of scale in larger structures, some delineation is required for effective coordination of activity – “somebody’s gotta be the ensign”.

Some career paths have evolved to avoid this pitfall fairly naturally – such as the route from junior to lead software developer. One might expect to begin their career as a support developer, then progress into a senior developer role and (eventually) beyond – leading teams or perhaps moving into architecture. In a healthy organisation, there’s usually an implicit requirement in senior development roles to support and mentor more junior developers. The juniors will be coached and assigned progressively larger and more difficult pieces of work. All the work they produce is reviewed by the senior, who will then give appropriate and constructive feedback – highlighting the areas where the junior can improve. Over time, they will grow to fill the shoes of their mentor.

It’s important to note that using the delegation approach in no way diminishes the more senior role. The skills you learn by communicating your experience, explaining complex concepts and being able to divide and distribute large undertakings of work help set you up for your next challenge.

So how does this relate to Agile software development or scaled Agile? An Agile self-governing team not only promotes the vertical progression described above, but can also provide a great way for people to expand their skills horizontally – from the additional experience gained in embedded team roles, rather than being limited by a traditional silo position. You’ll find that people naturally gravitate to areas where they have the aptitude and interest. When people are given the opportunity to grow like this, you may be pleasantly surprised how much your team starts to achieve.

The role of an Agile architect

“Architecture is about the important stuff. Whatever that is.”

That’s what Martin Fowler told us way back in 2003. If you’re interested in the field of software architecture, it’s probably a quote you’re already familiar with. It’s often repeated, but it doesn’t really help explain what architecture is – or why it’s important. If we continue reading the same article, Fowler goes on to highlight a quote that goes a little further, from Ralph Johnson:

“Architecture is the decisions that you wish you could get right early in a project, but that you are not necessarily more likely to get them right than any other.”

Software development would be oh-so-easy if there was a reliable way to create a completely fit for purpose and efficient design up-front. Agile emerged partly from a realisation that our understanding of what we need will always change over the course of our development efforts. That could just mean adding more detail to our understanding or changing it substantially – that’s why we actively strive to respond to change in the way we work.

Even if we’re in the unlikely situation where we have a complete understanding of what we want now, there is very little chance we’ll know exactly how to solve all the problems we’re going to face getting there. We could spend lots of time up-front, acknowledging that there are risks with a completely predictive approach – or we could spend very little (or no) time on up-front design, reacting to problems as they become known. Many Agile projects choose the latter, produce less-than-elegant solutions as a result – and are rightly criticised for it…

How much forethought is enough?

We can’t accurately predict the future, so it seems a little unwise to rely on our precision in that area. Likewise, an entirely reactive approach is probably not going to be very efficient either – especially if we’re forced to rework large areas of our solution on a regular basis. So, we’re faced with a choice about how much time (and to what level of detail) we should invest in the architecture of our solution up-front. Talking about Agile Modelling, Scott Ambler described his “just barely good enough” approach:

I like to say they are just barely good enough (JBGE). I make the following critical points about a model or document (an artifact) being just barely good enough:

  • It is actually the most effective possible
  • It is situational
  • It does not imply low quality
  • It changes over time
  • It comes sooner than you think

All fairly self-explanatory, but in relation to his last point, he expands his explanation to include a very astute observation:

“Traditionalists seem to assume that significant investment in modeling, and corresponding specification, will continue to add value over time.”

This is a common assumption I’ve encountered many times myself. Funnily enough, in situations where the delivery of a project is delayed (or even derailed) by poor architectural decisions emerging after new information surfaces part-way through the delivery process, the response is often: “Go back, spend more time up-front and this time do it right!”

If the previous attempt exposed the only show-stopper your design will ever encounter, there’s a chance you’ll now be equipped with the information required to address it. The problem with that attitude is that in complex systems (and many simpler ones) there’s likely to be a whole host of potential show-stoppers – many in competition with each other. So what is the answer?

My advice echoes Scott’s – do just enough. What that looks like is highly situational, it doesn’t mean you can accept low quality and it may have to change over time. The traditionalist assumption was that prolonged investment will continue to add value – it kinda does, but the returns are ever diminishing. There’s a sweet spot: beyond a certain point, the time you spend outweighs the value you add – that’s the time to stop!

The Agile architect

In the same article from Martin Fowler we cited at the start of this one, he goes on to define two types of architect. The first is described as:

The person who makes all the important decisions. The architect does this because a single mind is needed to ensure a system’s conceptual integrity, and perhaps because the architect doesn’t think that the team members are sufficiently skilled to make those decisions. Often, such decisions must be made early on so that everyone else has a plan to follow.

Fowler’s first description is often associated (though sometimes unfairly) with traditional architect job titles – people who sit above lowly developers, dictating their every move from their ivory towers. That doesn’t sound very Agile to me… In contrast, the second type is described as:

This kind of architect must be very aware of what’s going on in the project, looking out for important issues and tackling them before they become a serious problem. When I see an architect like this, the most noticeable part of the work is the intense collaboration. In the morning, the architect programs with a developer, trying to harvest some common locking code. In the afternoon, the architect participates in a requirements session, helping explain the technical consequences of some of their ideas in nontechnical terms – such as development costs.

This time, we have someone actively collaborating with the other members of their team. They have an awareness of the overall project and spend time working on both high and low-level problems. They get their hands dirty when required, but are also able to communicate with anyone less technical. This person sounds like someone useful to have around – and could fit into an Agile environment very well.

If you have someone producing a design in a silo, who does not spend time with the people who are building or requesting that change, then you have an ivory tower architect. In a nutshell, collaboration is the mark of an Agile architect.

Pulling it together

  • Start with a simple design, because at the beginning, you won’t have enough of a design to support all the software.
  • Have a more and more robust design as the project goes on, because you can’t make progress with insufficient design.
  • Wind up with a fully robust design, capable of supporting the whole project and its future needs.

Source: http://ronjeffries.com/xprog/blog/context-my-foot/

From the point you embark upon your mission to deliver the Product Owner’s vision in an Agile way, you should be collaboratively refining your understanding of what is required. You then agree the relative priority of those requirements and begin thinking about ways to implement them. Early on, your team should have some idea about what the minimum viable product (MVP) might look like. From there, you need to start articulating how you are going to build it.

With any complex problem, there is very rarely just one “right way” to solve it – and those solutions aren’t going to be perfect. There’s always going to be some form of unavoidable trade-off. A good architect should have enough experience to highlight where some of these trade-offs may occur (and what the potential impact could be). Armed with that information, what you should be doing is finding a solution that could work, then quickly proving whether or not it is going to be (just barely) good enough. Importantly, that means good enough for what your product needs to do now – not what it may need to do later.

To give you a related example, I once attended a talk given by George Berkowski about his book “How to Build a Billion Dollar App”. He described how a team had spent months of long hours and late nights to get an app finished and ready for its launch. After a huge push they got it finished and it launched – then no one downloaded it! At the time, he reflected that they could have just created a single-page website instead, to see if anyone ever actually clicked the download link. If, as it turned out, no one did, they could have saved a huge amount of time and money. This example was originally given to illustrate the need to prove market demand for an idea, rather than any technical design concern, but the lesson applies beyond the business context.

If you identify that your application component really has to handle 1000 concurrent requests from day one, that will impact the architecture of your application. Just letting your team blindly hack away without any discussion is unlikely to be effective – but don’t be tempted to try to predict every bottleneck and point of contention either.

Instead, formulate a plan to prove your proposed initial design can handle those 1000 requests in an identified test scenario. Then, continue working together with the (embedded or prescriptive) developer, test and other roles – leveraging their expertise to bring that idea to life. If one way doesn’t work, fine – move on to the next solution candidate and continue working until you’ve satisfied your minimum acceptance criteria.
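To make that concrete, a throwaway smoke test is often all the “proof” you need at this stage. The sketch below is one minimal way of doing it in Python (the aiohttp library, the endpoint URL, the status-code check and the request count are purely illustrative assumptions on my part, not something prescribed by any framework): it fires the agreed number of concurrent requests at the candidate design and reports the success rate and the slowest response, giving you empirical evidence rather than a prediction.

import asyncio
import time

import aiohttp  # third-party async HTTP client; any equivalent tool would do

TARGET_URL = "http://localhost:8080/health"  # hypothetical endpoint for the component under test
CONCURRENT_REQUESTS = 1000                   # the acceptance criterion we agreed to prove

async def fetch(session: aiohttp.ClientSession) -> tuple[int, float]:
    # Issue one request and record its status code and elapsed time.
    start = time.perf_counter()
    async with session.get(TARGET_URL) as response:
        await response.read()
        return response.status, time.perf_counter() - start

async def main() -> None:
    # Fire all requests at once and collect the results (exceptions included).
    async with aiohttp.ClientSession() as session:
        results = await asyncio.gather(
            *(fetch(session) for _ in range(CONCURRENT_REQUESTS)),
            return_exceptions=True,
        )
    ok = [r for r in results if not isinstance(r, Exception) and r[0] == 200]
    print(f"{len(ok)}/{CONCURRENT_REQUESTS} requests succeeded")
    if ok:
        print(f"slowest successful response: {max(t for _, t in ok):.2f}s")

if __name__ == "__main__":
    asyncio.run(main())

If the numbers fall short, that’s useful information too – it tells you which solution candidate to discard before anyone has invested weeks in it.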

Remember that meeting the business acceptance criteria doesn’t necessarily mean the solution is good enough (as the focus is very often solely at the functional level). Your team may still be faced with some refactoring required to clean the code up to a maintainable state – or you might have some edge-cases or other functionality to implement. The important thing is the team has proven a solution for the current situation.

A huge benefit of this approach is that you’ll probably learn about several pinch-points and possible limitations along the way. Often this is information you would not have predicted in advance, but you are now left in a very good position – better able to explain the consequences of any change introduced later. For example, if the Product Owner subsequently wanted to add more functionality to a key user journey, where the change may affect performance, you will have detailed knowledge of many of the likely bottlenecks. This knowledge is based on empirical evidence (from your work in that area), not a crystal ball. You can help the Product Owner accurately assess whether the value added by any change is going to be worth the cost to develop it going forward. That level of understanding is a great place for any product team to get to.

We are not chickens, nor are we pigs!


I frequently see Scrum teams continuing to use “Chickens and Pigs” to describe someone’s relationship to the team. Despite the potential negative connotations, I often read new articles and documentation still using this terminology – and there doesn’t seem to be much sign of this slowing down…

Now, it may surprise some of you to know that the “Chicken and Pig” terminology was deliberately removed from the Scrum Guide way back in 2011 – yet people continue to use it. So, what do these animal names relate to in the real world of product development?

Participants

Not pigs!

This is your Agile (product or feature / “Scrum”) team – the people who will collaboratively work towards the team’s goal. Your team may occasionally include a number of specialists, consultants and subject matter experts for portions of your product’s delivery (many scaled Agile frameworks acknowledge these as “secondary roles”). If you participate, you’re a participant – simple.

Observers

Not chickens!

This is anyone your team is likely to consult or inform. They may be stakeholders, accountable for the successful delivery of your product, but they are not responsible for implementing it (the DAD framework specifically uses “stakeholder” to refer to someone materially impacted by the success of the product). They could also be the PMO, finance or some other function providing wider governance beyond the team’s “what and how” responsibility. All these observers may have influence over the product’s development, but they aren’t (actively) participating.

Interactions

If you’re going to attend a daily stand-up, it should be to participate, not just observe. Stating “I’m a chicken!” when it’s your turn to give an update just isn’t helpful. However, always remember that Agile requires collaboration to achieve its goals – so don’t attend if you’re going to try to dictate your agenda (if you have the authority, communicate your priorities via the Product Owner instead, for them to add to the team’s backlog where appropriate).

Don’t derail the process. Let the team be responsible for the level of detail it needs to get things done. Trust them to do their job, while they trust you to do yours.

Clear Responsibility

If we are dismissive of any possible offence or hurt feelings from those involved (poor little Jimmy doesn’t like being labelled as a dirty, ignorant pig), then what is the problem with referring to people as chickens and pigs?

To some, it may sound unprofessional, while to others it may simply be the Zeitgeist. Personally, I don’t like overloading common, well-understood words with non-intuitive meanings (and we’re already pushing our luck with “Agile”).

Most importantly, one thing Agile is frequently accused of is obfuscating common sense. That is exactly what these terms were doing – replacing words understood outside the software development and product management context with an ambiguous, industry-specific usage. That’s two legs baaaad – we’re not farm animals!