Predicting the Unpredictable

I came to this book anticipating a coherent discussion of estimating on modern software development projects. The subtitle, Pragmatic Approaches to Estimating Project Schedule and Cost, says I'd find practical advice on how to estimate software development.

The Introduction opens with the standard project questions – how much will this cost and when will we be done? These questions are critical to any business spending money to produce a product, since time is money. You can buy technical solutions, you can get more people, but you can't buy back lost time.

In the second paragraph there is an obvious statement: the problem with these questions is that they are predictions. Then it follows with I don't know about you, but my crystal ball is opaque. It's (sic – should be I've) never been good at predictions.

This indicates to me that the author doesn't actually know how to estimate, but intends to tell readers how to estimate, starting from a misunderstanding of what an estimate is and how it is produced.

There are more observations about estimates changing and estimates expiring. This is correct. Estimates get updated with actual information, changes in future plans, discovered risks, etc. Estimates age out with this new information.

Chapter 2 starts with a naïve definition of estimating software projects: estimates are guesses. The dictionary definition of a guess is used for an estimate. The trouble is that a dictionary is usually not a good source for probability and statistics terms. Estimating is part of probabilistic decision-making.

One useful definition of an estimate is finding a value that is close enough to the right answer, usually with some thought or calculation involved. Another is an approximate calculation of some value of interest – for example, how much will this cost and when will we be done?

A broad definition is:

Estimation (or estimating) is the process of finding an estimate, or approximation, which is a value that is usable for some purpose even if input data may be incomplete, uncertain, or unstable. The value is nonetheless usable because it is derived from the best information available. Typically, estimation involves using the value of a statistic derived from a sample to estimate the value of a corresponding population parameter. The sample provides information that can be projected, through various formal or informal processes, to determine a range most likely to describe the missing information. An estimate that turns out to be incorrect will be an overestimate if the estimate exceeded the actual result, and an underestimate if the estimate fell short of the actual result.
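That sampling idea is easy to make concrete. A minimal sketch (with illustrative numbers, not taken from the book) of estimating a mean task duration and a 90% confidence interval from a sample of past durations:

```python
# Minimal sketch of statistical estimation: use a sample of past task
# durations (illustrative numbers) to estimate the population mean and a
# 90% confidence interval around that estimate.
import numpy as np
from scipy import stats

sample = np.array([3.5, 5.0, 4.2, 6.8, 4.9, 5.5, 7.1, 4.4])  # days, illustrative

mean = sample.mean()
sem = stats.sem(sample)  # standard error of the mean
ci_low, ci_high = stats.t.interval(0.90, df=len(sample) - 1, loc=mean, scale=sem)

print(f"Estimated mean duration: {mean:.1f} days "
      f"(90% CI: {ci_low:.1f} to {ci_high:.1f} days)")
```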

This chapter confuses the accuracy of an estimate with the precision of an estimate. Accuracy and precision are defined in terms of systematic and random errors. The more common definition associates accuracy with systematic errors and precision with random errors. Another definition, advanced by ISO, associates trueness with systematic errors and precision with random errors and defines accuracy as the combination of both trueness and precision.


According to ISO 5725-1, the general term accuracy is used to describe the closeness of a measurement to the true value. When the term is applied to sets of measurements of the same measurand, it involves a component of random error and a component of systematic error. In this case, trueness is the closeness of the mean of a set of measurement results to the actual (true) value and precision is the closeness of agreement among a set of results.

To make this really formal, here's the math for accuracy and precision. Accuracy is the proportion of true results (both true positives and true negatives) among the total number of cases examined.

$$\text{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}$$

Precision is the proportion of true positives among all positive results.

$$\text{Precision} = \frac{TP}{TP + FP}$$

Chapter 2 ends with a description of an example of bad management. Making and accepting estimates without assessing their precision and accuracy – the variances of the estimated value – is simply bad management. It's naïve management that turns bad when actions are taken on the basis of that naïve estimate.

Just an aside: optimistic, most likely, and pessimistic estimates are OK, but they are not allowed in the domain I work in. They are subject to optimism bias and produce large variances depending on the order in which the questions are asked.
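For reference, the usual way a three-point estimate is collapsed into an expected value and a spread is the PERT weighting. A minimal sketch, with illustrative numbers not taken from the book:

```python
# Minimal sketch of a three-point (PERT) estimate: combine optimistic, most
# likely, and pessimistic values into an expected duration and a standard
# deviation. Values are illustrative.
def pert_estimate(optimistic, most_likely, pessimistic):
    mean = (optimistic + 4 * most_likely + pessimistic) / 6.0
    std_dev = (pessimistic - optimistic) / 6.0
    return mean, std_dev

mean, sd = pert_estimate(optimistic=10, most_likely=15, pessimistic=30)  # days
print(f"Expected duration ~{mean:.1f} days, standard deviation ~{sd:.1f} days")
```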

Chapter 3 starts with another piece of misinformation. An Order of Magnitude estimate is a number that is within 10X of the actual value. This is an estimate that is ±100% – not very useful in practice. The term rough can cover tight or broad ranges.

The core issue here is the question of why we estimate. The popular software developer answer is because we have been asked, or because we have always done estimates.

The actual answer is

because making decisions in the presence of uncertainty about the future outcomes of those decisions is the realm of microeconomics. Making these decisions requires – mandates, actually – estimating these impacts.

So in the end, estimates are for those providing the money. Any sense that estimates are a waste needs to be confirmed with those providing the money. This is not to say that estimates and estimating are not done poorly, manipulated, misused, or used to abuse people. But from the development point of view, it’s not your money.

Chapter 4 starts with the common lament of software developers – we’ve never done this before so how can we possibly estimate our work?

Well, if you haven't done this before, go find someone who has. It's that simple and that hard. The notion on the first page of Chapter 4 about tools like SLIM and COCOMO seems a bit narrow-minded. As one who uses those tools, as well as SEER and QSM, I can attest to their measurable value, accuracy, and precision, and I wonder if the author has applied them in a mature development environment. These tools require skill, experience, and most of all calibration. The conjecture that they require substantial (no units of measure given) time that takes away from the team learning to work together begs the question of what value is at risk. Applying SEER to $1.2B of national-asset software development is much different than applying a tool to $200K of website development. The problem with this book – at least up to Chapter 4 – is that no domain is defined in which the advice is applicable.

Next comes the notion that estimates in software are like estimates in construction. Providing a single-point estimate is, again, bad management – don't do that. By the way, construction involves innovation as well. Estimating new and innovative ways to pour concrete foundations for nuclear power stations is estimating in the presence of unknowns. Estimating the construction of fusion power development is actually rocket science. I've done both in recent years. These types of phrases appear to come from people who have not actually worked in those domains, and they are being used as red herrings.

At the end of Chapter 4, we're back to practices that apply no matter the domain. Inch-pebbles and small stories all fit into a critical success factor for all projects, no matter the domain.

How long are you willing to wait before you find out you’re late?

The answer to this question defines the sample time for producing outcomes. Not too long, not too short. This is the Nyquist sampling interval.
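As a rough analogy to the Nyquist criterion, the status-sampling period should be no longer than half the shortest interval over which a meaningful slip can develop. A sketch of that rule of thumb, with an illustrative number:

```python
# Rule-of-thumb sketch (an analogy to the Nyquist criterion): sample status at
# least twice as often as the shortest interval over which a meaningful slip
# can develop. The number is illustrative.
shortest_meaningful_slip_days = 10   # e.g., the smallest work package duration
max_sampling_interval_days = shortest_meaningful_slip_days / 2.0
print(f"Report progress at least every {max_sampling_interval_days:.0f} days")
```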

Chapter 5 starts out with some generalizations that just aren't true. Estimating time and budget is always possible. We do it all the time. The answer to the question of budget and time is how much margin is needed. This requires a model of the work and of the uncertainties – both reducible and irreducible. This is the standard launch date problem. It can be a literal launch date, a date that can't be missed – either a product launch or a physical system launch. We can fly to Mars in a 3-week window roughly every 26 months – be there, with your newly developed, never-been-done-before autonomous rendezvous and dock software.

The 4 steps in §5.1 are logical steps, except for the last sentence in #4, that estimates are guesses. They can be a guess, but that's bad estimating. Don't guess; apply good estimating practices. The rest of Chapter 5 gets better, although the notion of exclusive tradeoffs in §5.2 ignores the purpose of margin. It's not a trade between features, time, cost, and quality. The very purpose of estimating is to provide schedule margin, cost margin, management reserve, and technical reserve, and to establish the probabilistic values for each of those and for their combinations.

For simple projects that is too much. For complex, enterprise, software-intensive systems, that kind of analysis, planning, and execution is mandatory.

The enterprise project class needs to show up on time, with the needed capabilities, for the planned cost, and the planned effectiveness, performance, reliability, and all the other …illities needed for success.[2]

Chapter 6 starts with advice on how to actually estimate. Make stories small is good advice anywhere. In our domain, we have the 44-day rule: no work can cross more than one accounting period. This limits exposure to not knowing what done looks like. In the small-agile world, 44 days (2 working months) sounds huge. In a software-intensive system – even one using agile – it's a short time. Building DO-178 compliant flight software is not the same as building web pages for the shoe store ordering application. So yes, decompose the work into visible chunks that can be sized in one of several ways.

The author uses the term SWAG (Scientific Wild Ass Guess). SWAGs are not estimates; SWAGs are bad estimates. There are much easier ways, with much more accurate results, than guessing out of your ass. One starts with a binary search method, as shown in How to Estimate Almost Any Software Deliverable. This way you can stop guessing and start estimating using proven methods.
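The binary search idea can be sketched as interval halving: bracket the deliverable between a size you are confident is too low and one you are confident is too high, then repeatedly ask which side of the midpoint the work falls on. A hypothetical sketch (the judgment function stands in for a real expert's yes/no answers):

```python
# Hypothetical sketch of binary-search estimating: bracket the effort between
# a clearly-too-low and a clearly-too-high bound, then halve the interval by
# asking "is it more than the midpoint?" until the range is tight enough.
def binary_search_estimate(low, high, is_more_than, tolerance):
    """low/high in hours; is_more_than(x) is the expert's yes/no judgment."""
    while (high - low) > tolerance:
        midpoint = (low + high) / 2.0
        if is_more_than(midpoint):
            low = midpoint
        else:
            high = midpoint
    return low, high

# Example: the lambda stands in for an expert who believes the work is
# about 120 hours.
low, high = binary_search_estimate(
    low=40, high=400,
    is_more_than=lambda hours: 120 > hours,
    tolerance=20,
)
print(f"Estimate range: {low:.0f} to {high:.0f} hours")
```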

In §6.4.1 there is the notion of using past performance and past velocity as a measure of the future. There is a fundamental flaw in this common agile approach to estimating.

Using past velocity for the future only works if the future is like the past. Second, it only works if the variance in that past velocity is narrow enough to provide a credible forecast of the future velocity. Below is a simple example of some past performance numbers. They can be stories or anything else you want to forecast into the future. Care is needed to assess how the variances of the past will likely be expressed in the future. This is called the Flaw of Averages, and there is a book of the same title. The colored bands on the right of the chart are the 80% and 90% confidence ranges of the possible outcomes. These, from the past data, are 45% swings from the mean (average). Not good confidence when spending other people's money.

The chart below is produced by a simple R script from past performance data and shows the possible ranges of the future, given the past.
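The post references a simple R script; a roughly equivalent sketch in Python (using statsmodels, with a made-up velocity history) might look like this:

```python
# Sketch of forecasting future velocity from past performance with an ARIMA
# model and confidence bands. The velocity history is illustrative.
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

velocity = pd.Series([23, 31, 18, 27, 35, 22, 29, 25, 33, 20, 28, 26])

model = ARIMA(velocity, order=(1, 0, 0))  # simple AR(1); real data may need a different order
fit = model.fit()

forecast = fit.get_forecast(steps=4)
print(forecast.predicted_mean)            # point forecast for the next 4 periods
print(forecast.conf_int(alpha=0.20))      # 80% confidence band
```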

[ARIMA forecast chart of past performance with confidence bands]

Chapter 6 also touches on one of my favorite topics. Rolling Wave Planning is a standard process on our Software Intensive Systems. [1] We can't know what's going to happen beyond the planning horizon, so detailed planning is not possible. This, of course, is a fallacy when you have past performance.

Chapter 7 speaks to various estimating models. The cone of uncertainty is the first example. The sentence This is a Gaussian distribution is not mathematically true. No cost or schedule variance model can be normally distributed. To be normally distributed, all the random variables in the population represented by the distribution must be I.I.D. – Independent and Identically Distributed. This means there is no coupling between the random variables. For any non-trivial project that can never be the case. Cost and schedule distribution functions are long-tailed – asymmetric.

The suggestion that projects stay at 90% complete for a long time has nothing to do with the shape of the probability distribution of the possible durations or costs. Like a few other authors in agile, this author may not be familiar with the underlying statistical mathematics of estimates, so it's forgivable that concepts are included without consideration of what math is actually taking place. Projects are coupled stochastic processes – networks of dependent activities, where each node in the network is itself a stochastic process, a collection of random variables representing the evolution of some system of random values over time. These random variables interact with each other and may evolve over time. This is why estimating is hard, but it is also why Monte Carlo simulation tools are powerful solutions to the estimating problem.
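A minimal sketch of why coupled, uncertain activities produce long-tailed completion distributions: simulate a tiny network where task C can only start once both A and B finish, each with right-skewed durations. The network and numbers are hypothetical.

```python
# Minimal Monte Carlo sketch of a coupled task network (numbers hypothetical):
# C starts only when both A and B finish, so total duration = max(A, B) + C.
# Right-skewed (lognormal) durations yield a long-tailed completion distribution.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

a = rng.lognormal(mean=np.log(10), sigma=0.4, size=n)   # task A, ~10 days
b = rng.lognormal(mean=np.log(12), sigma=0.5, size=n)   # task B, ~12 days
c = rng.lognormal(mean=np.log(8),  sigma=0.3, size=n)   # task C, ~8 days

total = np.maximum(a, b) + c                             # merge point, then C

print(f"Mean: {total.mean():.1f} days, median: {np.median(total):.1f} days")
print(f"80th percentile: {np.percentile(total, 80):.1f} days")
print(f"P(finish within 25 days): {(total <= 25).mean():.0%}")
```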

In Chapter 8, §8.1 gives me heartburn. The notion that we're inventing software says we've never done this before, which really says we don't know what we're doing and we'll have to invent the solution. Would you hire anyone to solve a complex problem for you who has not done it, or something like it, before? Or would you go get a reference design? What happened to all the patterns books and reference designs? Model View Controller was my rock when building process control systems and their user interfaces. So this section is a common repeat of the current agile approach – all software development is new. Which means it's new to me, so I don't know how to estimate. Go find someone who does.

There are some good suggestions in this section. The multiple references to Troy Magennis's book Forecasting and Simulating Software Development Projects lead me to believe his book is what you should read, rather than this one. That aside, Chapter 8 has the beginnings of a description of estimating processes. There is a wealth of information at USC's Center for Systems and Software Engineering on estimating processes. Read some of those first.

Chapter 9 starts with the same red herring phrase – perfect estimate. There is no such thing. All estimates are probabilistic, with measures of accuracy and precision, as described above. The phrase is common among those who don't seem to have the math skills for actually making estimates. This is not a criticism; it is an observation.

The notion of stop estimating and start producing begs the question – produce what? Does the poor agile project manager described here have any idea of what done looks like? If no one has any idea of what done looks like, how is he (I assumed a he) going to get to done? What problems will be encountered along the way? How will progress be measured – other than by the passage of time and the spending of money? This is not only bad project management, it is bad business management.

This is where this book becomes disconnected from the reality of the business of writing software for money.

Management is obligated to know how much this will cost and when it will be done to some level of confidence determined by the governance process of the business.

Open-ended spending is usually not a good way to stay in business. Having little or no confidence about when the spending will stop is not good business. Having little or no confidence in when the needed capabilities for that spend will arrive is not good business. What the author is describing on page 40 is low maturity, inexperienced management of other people’s money.

Chapter 10 opens with the loaded question Do your estimates provide value? And it repeats the frequent doublespeak of the #NoEstimates advocates: no estimates doesn't literally mean no estimates. Well, explain this, Lucy – from the original poster of the #NoEstimates hashtag:

[Screenshot of the original #NoEstimates tweet]

It seems pretty clear that No Estimates means we can make decisions with no estimates. At this point in the book, the author has lost me. This oxymoron – that no estimates somehow means estimates – has reduced the conversation to essentially nonsense.

The approach of breaking down the work into atomic (singular) outcomes is a nice way to decompose all the work. But this takes effort. It takes analysis. It sometimes takes deep understanding of the future. This is physical estimating by revealing all the work down to an atomic unit, or maybe a second-unit, level. With that in hand you don't need to estimate. You've got a visible list of all the work, sized in singular measures. Just add them up and that's the Estimate to Complete.
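A sketch of that arithmetic, with a hypothetical list of atomic work items and a hypothetical throughput figure:

```python
# Hypothetical sketch: once the work is decomposed into atomic items of known
# size, the Estimate to Complete is the sum, and a duration follows from
# observed throughput.
atomic_items_hours = [6, 4, 8, 5, 7, 6, 4, 9, 5, 6]     # illustrative sizes

estimate_to_complete = sum(atomic_items_hours)          # total remaining effort
team_hours_per_week = 30                                # illustrative throughput

weeks = estimate_to_complete / team_hours_per_week
print(f"ETC: {estimate_to_complete} hours, roughly {weeks:.1f} weeks of work")
```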

But how long will that take? Can it even be done? I really wanted to finish the book and the review, so I skipped the rest of Chapter 10.

Chapter 11 starts to lay out how to produce an estimate but falls back into the naïve mathematics of those unfamiliar with the probabilistic processes of projects. The use of 90% confidence in June 1 and 100% confidence in August 1 tells me there is no understanding of probability and statistics here. There is no such thing as 100% confidence in anything in the software business. A 90% confidence number is unheard of. And without margins and management reserve those numbers are essentially – and I'll be considered rude here – pure nonsense.

This is the core problem with estimating, even in our domain. There is little understanding of the mathematics of project work. Troy's book is a good start, but it has some issues as well. Those of us who work in a domain that lives, and many times dies, by estimates have all learned that naïve approaches, like those described here, are at the root of the smell of dysfunction so popularly cited by the #NoEstimates advocates.

Chapter 12 finally arrives at the core process of all good estimating – probabilistic scheduling. Unfortunately, an example used by the author is not a software development project but a book-writing project.

The core concept in estimating cost and schedule of a software project is the probabilistic behavior of the work and the accumulation of the variance in that work to produce a confidence of completing on or before a need date.

The notion at the end of this chapter – that it doesn't matter what kind of life cycle you use, the further out the dates are, the less you know – is not actually true in practice. In the probabilistic scheduling model, the future is less known, but that future can be modeled in many ways. In all cases, a Monte Carlo simulation is used to model this future to show the probability of completing on or before the need date.

While the author used Troy's book in an earlier chapter, it would have been useful to use it here as well, where he shows how to model the entire project, not just the close-in work.

Chapter 13 opens with a short description of the principal failure mode of all projects: you don't know what done looks like. The agile notion of starting work and letting the requirements emerge is seriously flawed in the absence of some tangible description of done in terms of capabilities. What is this project trying to produce in terms of capabilities for the customer? Don't know? Don't start. The only way to start in this condition is to have the customer pay to discover the answer to what does done look like?

Even in the pure science world – and I know something about pure science in the particle physics world – there is a research goal: some planned outcome for the discovery effort that convinces the funding agency it will get something back in the future. Research grants have a stated goal. Building software for money is rarely research. It is development. It's the D in R&D.

So before you can know something about when you'll be done, you must know what done looks like in units of measure meaningful to the decision makers. With that information, you can start asking and answering questions about the attributes of done: how fast, how reliable, how big, and what measures of effectiveness, measures of performance, key performance parameters, technical performance measures, and all the other …ilities must be satisfied before starting, during execution, and when everything changes all the time – which it will.

In Chapter 13 there is a good checklist (pg. 50) that connects to other project management methods we use in our domain: the Integrated Master Plan / Integrated Master Schedule, where physical percent complete uses Quantifiable Backup Data to state where we are in the plan, based on tangible evidence of progress against that plan. So Chapter 13 is a winner all around.

[1] A Software Intensive System is one where software contributes essential influences to the design, construction, deployment and evolution of the system as a whole. http://www2.cs.uni-paderborn.de/cs/ag-schaefer/Lehre/Lehrveranstaltungen/Vorlesungen/SoftwareEngineeringForSoftwareIntensiveSystems/WS0506/SEfSIS-I.pdf

[2] …illities are all the things that the resulting system needs to do to be successful beyond the tangible requirements http://www.dtic.mil/ndia/2011system/13166_WillisWednesday.pdf

Source of the article on HerdingCats

Having “no company strategy” is one of the biggest issues facing product managers, according to a recent survey of over 600 product people. After all, how can you set a reasonable direction for your product when you don’t know where your company is headed?

It’s an issue that confronted me recently, when I started work with a higher education provider in the UK.

A few months on, we've found we've been able to solve it. And it landed pretty well – we've been given the green light (and extra cash) to deliver it!

Here’s how we did it.

Step 1. Define Your Key Objective

Initially, the project was presented to me as something to “increase the number of people who apply to, and join the university”. While these numbers may be useful to measure, they’re also vanity metrics, much like website visitors for an e-commerce website.

Here’s why.

UK universities typically get paid by the student every three or four months, receiving the first payment about three weeks into the first semester. So a student is of no commercial value until this point. Getting paid enables the university to deliver its mission of providing education services and helping people into employment.

So, as a minimum, our key objective had to be something like, “to increase the number of people who pay their first tuition fee instalment”. But we felt that wasn’t enough really, because if the student leaves after the first semester then the university misses out on a significant amount of revenue – around 89% for a three-year course.
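As a rough sanity check on that figure (assuming one instalment per term, so roughly nine instalments over a three-year course), a student who leaves after paying only the first instalment forfeits about

(9 − 1) / 9 ≈ 0.89, or roughly 89% of the course revenue.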

The key objective, therefore, had to be focused on retention. Something like, “to increase the number of people who complete their studies with the university”. And you could go a step further and add, “…and enter their chosen field of employment”, given that this is typically a student’s end goal and hence a factor in their likelihood to recommend the university.

What we were talking about, of course, was customer lifetime value (LTV) – a term that’s widely used in SaaS and subscription-based businesses – and Net Promoter Score (NPS).

We made one final tweak – to focus on the percentage of applicants rather than total numbers, as it was more within our control – and went with “to increase the percentage of applicants who complete their studies with the university, and enter their chosen field of employment”.

The key objective we defined as part of our product strategy

In hindsight, we basically answered two questions to determine our key objective. These were, why does the company exist (i.e. what’s its mission)? And what needs to happen to allow the company to keep working towards its mission?

Step 2. Define Your Target Customer

The university had a number of distinct customer segments spanning across qualification levels (e.g. undergraduate, postgraduate), study type (e.g. full-time, part-time), demographics (e.g. age, residency) and more. It would have been nigh-on impossible to try to create something for every combination from day one.

Fortunately, there was enough data available on the student population and the market to be able to determine which combinations were the most significant. A lot of this information was freely available online, for example, Universities UK’s Higher education in numbers report, which gave us the rich insight displayed below.

We learned that an overwhelming number of students are undergraduates…

…and choose to study full-time.

Likewise, a high number of students came from the UK, which was important because the application process differs slightly depending where the student is coming from.

So, based on this, we decided to focus our efforts on full-time, undergraduate students, who came from the UK, with a view to expanding to all segments as soon as possible.

Step 3. Map the Steps to Your Key Objective

To understand where the existing experience could be improved and where we should focus first, we mapped out the milestones a student must go through to reach our key objective. In other words, we mapped out a conversion funnel.

Here are the milestones we came up with.

The milestones a typical student will go through before reaching the key objective

These milestones could also be used as lead metrics, to help determine whether a student is making meaningful progress towards the key objective, which in this case could take over three years to achieve.

Step 4. Collect the Data

Next, we cobbled together data from a variety of sources and populated the conversion funnel. We didn’t have useful data for the final step (entering chosen employment) so we left it out and made a request to start collecting it.

We ended up with something like this.

The number of people at each stage of the funnel, and as a percentage of the total number of applicants (figures are illustrative only)
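A quick sketch of the funnel arithmetic, with purely made-up stage names and counts (the original figures were illustrative too), reporting each stage as a share of total applicants and of the previous stage:

```python
# Illustrative conversion-funnel sketch: stage names and counts are made up.
funnel = [
    ("Apply",               10_000),
    ("Receive offer",        7_000),
    ("Accept offer",         5_500),
    ("Enrol",                4_800),
    ("Pay first instalment", 4_500),
    ("Complete studies",     3_600),
]

total_applicants = funnel[0][1]
previous = total_applicants
for stage, count in funnel:
    print(f"{stage:22s} {count:6d}  "
          f"{count / total_applicants:6.1%} of applicants  "
          f"{count / previous:6.1%} of previous stage")
    previous = count
```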

At this point, there was still no way of telling what was good or bad so we gathered benchmarks for each of the figures based on competitor and sector averages, where known, as well as any internal year-on-year trends.

This gave us a number of areas to investigate further, for example, the withdrawal rates during the first year, which were among the highest in the sector, and the “application” to “offer” rates, which were notably lower than competitor averages.

Step 5. Determine the “Why” Behind the “What”

Analysing data was great for telling us what was happening but it didn’t tell us why. So, we took the outputs of the steps above and laid out each one as a question. Then we dug deeper.

You could use an infinite number of methods here but we focused on three things: speaking with people (colleagues and students) to understand what happens at different milestones and why, analysing reams of secondary research and consumer reports, and scouring the largest UK student forum, The Student Room (TSR).

The Student Room was particularly useful. We found thousands of people in our target customer segment openly discussing the same questions we’d laid out, from why they wanted to go to university to how they decide between institutions. The legwork was in finding the answers and drawing logical conclusions, and Google’s Site Search function helped with this.

We used Google’s site search function to trawl The Student Room
An example of one of the more helpful threads on The Student Room – a survey showing why people choose to go to university

To help us draw conclusions from the research, we created a mind map. This had our key objective at the centre, our most important questions surrounding it, and then any insight and best-guess answers coming off as branches. By the end, it covered most of the stuff you’d expect to find on something like a Product Vision Board, from the market and customer needs to internal objectives and product requirements.

We built a mind map to help draw conclusions from the data and research we were gathering

We were able to deduce that a number of the “problematic” areas actually had more to do with the perception of the university, which was way beyond the scope of this work to change (though it was noted in our recommendations). Improving the areas later in the funnel, as well as internal efficiency, was perhaps more within our control. We then formed a number of hypotheses about how we might achieve our key objective and agreed specific targets (as percentage point increases). This gave us the focus we needed to proceed.

Step 6. Scope the Solution

The next step – and perhaps the simplest – was to think about a solution.

Based on what we’d learned so far, we layered in high-level user experience designs alongside the conversion funnel.

We layered in high-level user experience designs alongside the conversion funnel, which provided a useful template

Three or four distinct – but connected – products emerged from our first pass of the experience design, for example, an application product and a separate customer support product. This in turn gave us ideas for the high-level architecture, team structures and skills needed.

Then, using the insight we’d gathered plus some additional technical discovery, we were able to form a view on the relative priority of the products and features and a rough Now, Next, Later-style product roadmap.

We also created a Now, Next, Later-style product roadmap based on what we’d learnt so far

All that was left was to share it with senior management and get the go-ahead…

So, How Did it Turn Out?

Surprisingly well, actually. We were successful in “selling” our vision and strategy, and were allocated funds to deliver it. And because we’d involved a number of teams in the process, our peers were (and still are) generally supportive too.

The products and services that are delivered will inevitably be quite different from our early designs. That’s fine, at least we have overcome one of the biggest hurdles to corporate innovation – the urge to procrastinate and do nothing.

Perhaps the most valuable part of this work, however, was the template and process that were created – connecting company strategy (objectives), data and insights to the product strategy, and then seamlessly to the user experience and what’s delivered. This is something I’ve personally struggled to do in the past, having wrestled with tools like the Business Model Canvas and Product Vision Board. Similar to these tools though, the Product Funnel (as we now call it) can continually be updated as the team inevitably learn more – everything is stuck on with Post-it Notes and Blu Tack after all.

Want to give it a try? Download the Product Funnel template.

The post How to Create a Product Strategy Without a Clear Company Strategy appeared first on Mind the Product.

A zeal for different opinions concerning religion, concerning government, and many other points… have, in turn, divided mankind into parties, inflamed them with mutual animosity, and rendered them much more disposed to vex and oppress each other than to cooperate for their common good. – James Madison


Source of the article on HerdingCats

In Software Value of something is totally unrelated to Cost

This is a popular fallacy in the #NoEstimates advocates' vocabulary. Let's look at the principles of cost, price, and value to the customer from the point of view of the business funding the development of the software, the customer buying the software, and the financial models needed to know whether the firm producing the software can stay in business over time.

Knowing the difference between cost, price, and value in the software development domain is critical to producing profitability for the project or product, and the firm.

First some definitions:

  • The cost of our product or service is the amount the firm spends to produce it.
  • The price is our financial reward to our balance sheet top line when our customers buy our product or service.
  • The value is what our customer believes the product or service is worth to them in some monetary measure that matches or exceeds what they paid to receive that benefit.

Let’s Look at a Non-Software Example First 


The cost for a plumber to fix a leaking pipe at our house could be $25 for travel and materials, plus an hour's labor at $70, for $95 in cost to fix the leak. The value of the service to me – who may have water leaking all over the kitchen – is far greater than that $95 cost. So the plumber may decide to charge a total of $150, leaving the plumber with $55 of profit.

The gross margin for the plumber is

  • Cost of Goods Sold (COGS)(a service as well as material in this case) – labor (assuming salary paid to the employee), materials, travel = $95
  • Revenue = $150
  • Margin = 36.6%
  • Profit after expenses = $55

The price the plumber charges me should be in line with the value of the benefits I receive. This is true for plumbing as well as software. Both provide value to the customer.

But pricing must also consider the prices the competitors charge for similar functionality. Our plumber knows us and has worked at our house before. He’s a local guy, so has low overhead and gives us a friends and family rate.

To maximize profitability for any product or service, be it a plumbing fix or a software product,  we must determine:

  • What benefits do our customers gain from using our product or service?
  • What are they willing to pay for this benefit?
  • How can those benefits be monetized in some way so they can compare value versus price?
  • What are the criteria our customers use to make buying decisions – for example, the Features and Functions, the convenience of procurement, performance or reliability?
  • What value does our customer place on receiving the benefits we provide through the software?

Usually, the price reflects the value we provide. It must cover our cost if we hope to make a profit and stay in business. The decision process here usually starts with some target margin, compared to the industry in general and to local conditions – high labor rates, cost of operations, and other costs.

This means covering the fixed and variable costs to produce the product or service, and then assessing what the market price could be above these costs to determine the margin needed to stay in business.

Every business needs to cover its costs to make a profit. Working out costs accurately is an essential part of working out pricing. These costs are covered under two headings:

  • Fixed costs are those that are always there, regardless of how much or how little we sell, for example, building rent, salaries, and business capital rates.
  • Variable costs are those that rise as the sales increase, such as additional IT assets or facilities, extra labor.

Let’s stop here and restate this concept

Value cannot be determined for our software product without knowing the cost to produce that product. No matter how many times the original quote at the top of this post is repeated, it's simply not true.

When the price is set, it must be higher than the variable cost of producing the product or service. Each sale will then make a contribution towards covering our fixed costs and moving us along the path to making a profit.

For example, a software firm has variable costs of $18,000 for each product sold and total fixed costs of $400,000 a year that must be covered.

If the software company sells 80 instances of the software a year, it needs a contribution towards the fixed costs of at least $5,000 per instance ($400,000 divided by 80) to avoid a loss.

Using this structure, the framework for setting different price levels can be assessed:

  • If the software is priced at $18,000 (the variable cost per instance), each installation makes no contribution toward fixed costs, so none of the fixed costs are covered
  • Selling 80 installs at $18,000 means a loss of $400,000 per year since none of the fixed costs are covered.
  • Selling the software at $23,000 results in breaking even, assuming the target 80 installs are sold (80 contributions of $5,000 per license = $400,000, i.e. the fixed costs)
  • Selling software licenses at $24,000 results in a profit, assuming 80 licenses are sold (80 contributions of $6,000 = $480,000, i.e. $80,000 over the fixed costs).
  • If more or fewer than 80 licenses are sold, profits are correspondingly higher or lower.
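A sketch of the contribution arithmetic above, using the same illustrative numbers:

```python
# Sketch of the break-even arithmetic from the example above: $18,000 variable
# cost per instance, $400,000 fixed costs per year, 80 instances sold.
fixed_costs = 400_000
variable_cost_per_instance = 18_000
instances_sold = 80

def annual_profit(price):
    contribution_per_instance = price - variable_cost_per_instance
    return contribution_per_instance * instances_sold - fixed_costs

for price in (18_000, 23_000, 24_000):
    print(f"Price ${price:,}: annual profit ${annual_profit(price):,}")

# Break-even price = variable cost + fixed costs spread over expected volume
break_even_price = variable_cost_per_instance + fixed_costs / instances_sold
print(f"Break-even price: ${break_even_price:,.0f}")
```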

Cost-Plus Versus Value-Based Pricing 


There are two basic approaches to pricing our products and services: cost-plus and value-based pricing. The best choice depends on our type of business, what influences our customers to buy, and the nature of our competition.

Cost-Plus Pricing 


This takes the cost of producing our product or service and adds an amount that we need to make a profit. This is usually expressed as a percentage of the cost – the gross margin.

It is generally more suited to businesses that deal with large volumes or which operate in markets dominated by competition on price.

But cost-plus pricing ignores our image and market positioning. And hidden costs are easily forgotten, so our true profit per sale is often lower than we realize.

Value-Based Pricing 


This focuses on the price we believe customers are willing to pay, based on the benefits our product or service offers them.

Value-based pricing depends on the strength of the benefits we can prove we offer to our customers.

If we have clearly-defined benefits that give us an advantage over our competitors, we can charge according to the value we offer customers. While this approach can prove very profitable, it can alienate potential customers who are driven only by price and can also draw in new competitors.

So another conjecture heard from #NoEstimates advocates – focus on value, not on cost – must be assessed from the managerial finance point of view, informed by the market dynamics of the offering.

In The End The Original Quote is a Fallacy


No value can be determined until we know the price of that value. That price cannot be determined until we know the cost to produce that value – the fixed and variable costs.

This is an immutable principle of business managerial finance, and it must be followed if we have any hope of staying in business.


Source of the article on HerdingCats

Risk identification during early design phases of complex systems is commonly implemented but often fails to identify events and circumstances that challenge program performance. Inefficiencies in cost and schedule estimates are usually held accountable for cost and schedule overruns, but the true root cause is often the realization of programmatic risks. A deeper understanding of frequent risk identification trends and biases pervasive during system design and development is needed, for it would lead to the improved execution of existing identification processes and methods. 

Risk management means building a model of the risk, a model of the impact of the risk on the program, and a model for handling the risk. Since it is a risk, the corrective or preventive action has not occurred yet.

Probabilistic Risk Assessment (PRA) is the basis of these models and provides the Probability of Project Success. Probabilities result from uncertainty and are central to the analysis of the risk. Scenarios and model assumptions, with model parameters, are based on current knowledge of the behavior of the system under a given set of uncertainty conditions.
The source of uncertainty must be identified and characterized, and its impact on program success modeled and understood, so decisions can be made about the corrective and preventive actions needed to increase the Probability of Project Success.

Since risk is the outcome of Uncertainty, distinguishing between the types of uncertainty in the definition and management of risk on complex systems is useful when building risk assessment and management models. 

  • Epistemic uncertainty ‒ from the Greek επιστηµη (episteme), is uncertainty from the lack of knowledge of a quantity or process in the system or an environment. Epistemic uncertainty is represented by a range of values for parameters, a range of workable models, the level of model detail, multiple expert interpretations, or statistical confidence. The accumulation of information and implementation of actions reduce epistemic uncertainty to eliminate or reduce the likelihood and/or impact of risk. This uncertainty is modeled as a subjective assessment of the probability of our knowledge and the probability of occurrence of an undesirable event.

Incomplete knowledge about some characteristics of the system or its environment is the primary source of Epistemic uncertainty.

  • Aleatory uncertainty ‒ from the Latin alea (a single die) is the inherent variability associated with a physical system or environment. Aleatory uncertainty comes from an inherent randomness, natural stochasticity, environmental or structural variation across space and time in the properties or behavior of the system under study.  The accumulation of more data or additional information cannot reduce aleatory uncertainty. This uncertainty is modeled as a stochastic process of an inherently random physical model. The projected impact of the risk produced by Aleatory uncertainty can be managed through cost, schedule, and/or technical margin.

Naturally occurring variations associated with the physical system are primary sources of Aleatory uncertainty.

There is a third uncertainty found on some projects.

  • Ontological Uncertainty ‒ is attributable to the complete lack of knowledge of the states of a system. This is sometimes labeled an Unknowable Risk. Ontological uncertainty cannot be measured directly.

Separating Aleatory and Epistemic Uncertainty for Risk Management 


Knowing the percentage of reducible versus irreducible uncertainty is needed to construct a credible risk model.
Without this separation, it is not clear which uncertainty is reducible and which is irreducible, and that inhibits the design of the corrective and preventive actions needed to increase the probability of program success.

Separating the uncertainty types increases the clarity of risk communication, making it clear which type of uncertainty can be reduced and which types cannot be reduced. For the latter (irreducible risk), only margin can be used to protect the program from the uncertainty. 

As uncertainty increases, the ability to precisely measure the uncertainty is reduced to where a direct estimate of the risk can no longer be assessed through a mathematical model. While a decision in the presence of uncertainty must still be made, deep uncertainty and poorly characterized risks lead to the absence of data and risk models in many domains. 

Epistemic Uncertainty Creates Reducible Risk

The risk created by Epistemic Uncertainty represents resolvable knowledge, with elements expressed as a probabilistic uncertainty of a future value related to a loss in a future period of time.  Awareness of this lack of knowledge provides the opportunity to reduce this uncertainty through direct corrective or preventive actions. 

Epistemic uncertainty, and the risk it creates, is modeled by defining the probability that the risk will occur, the time frame in which that probability is active, the probability of an impact or consequence from the risk when it does occur, and finally the probability of the residual risk when the handling of that risk has been applied.
Epistemic uncertainty statements define and model these event‒based risks:

  • If‒Then ‒ if we miss our next milestone then the program will fail to achieve its business value during the next quarter.
  • Condition‒Concern ‒ our subcontractor has not provided enough information for us to status the schedule, and our concern is the schedule is slipping and we do not know it.
  • Condition‒Event‒Consequence ‒ our status shows there are some tasks behind schedule, so we could miss our milestone, and the program will fail to achieve its business value in the next quarter.

For these types of risks, an explicit or an implicit risk handling plan is needed. The word handling is used with special purpose. “We Handle risks” in a variety of ways. Mitigation is one of those ways. In order to mitigate the risk, new effort (work) must be introduced into the schedule. We are buying down the risk, or we are retiring the risk by spending money and/or consuming time to reduce the probability of the risk occurring. Or we could be spending money and consuming time to reduce the impact of the risk when it does occur. In both cases, actions are taken to address the risk.

Reducible Cost Risk

Reducible cost risk is often associated with unidentified reducible technical risks, changes in technical requirements, and their propagation that impacts cost. Understanding the uncertainty in cost estimates supports decision making for setting targets and contingencies, risk treatment planning, and the selection of options in the management of program costs. Before reducible cost risk can be analyzed, the cost structure must be understood. Cost risk analysis goes beyond capturing the cost of WBS elements or the content of the Product Roadmap in the Basis of Estimate and the Cost Estimating Relationships. This involves:

  • Development of quantitative modeling of integrated cost and schedule, incorporating the drivers of reducible uncertainty in quantities, rates and productivities, and the recording of these drivers in the Risk Register.
  • Determining how cost and schedule uncertainty can be integrated into the analysis of the cost risk model.
  • Performing sensitivity analysis to provide an understanding of the effects of reducible uncertainty and the allocation of contingency amounts across the program.

Reducible Schedule Risk

While there is significant variability, for every 10% in Schedule Growth there is a corresponding 12% Cost Growth. 
Schedule Risk Analysis (SRA) is an effective technique to connect the risk information of program activities to the baseline schedule, to provide sensitivity information of individual program activities to assess the potential impact of uncertainty on the final program duration and cost.
Schedule risk assessment is performed in 4 steps:

  1. Baseline Schedule ‒ Construct a credible activity network compliant with GAO‒16‒89G, “Schedule Assessment Guide: Best Practices for Project Schedule.”
  2. Define Reducible Uncertainties ‒ for activity durations and cost distributions from the Risk Register and assign these to work activities affected by the risk and/or the work activities assigned to reduce the risk.
  3. Run Monte‒Carlo simulations ‒ for the schedule using the assigned Probability Distribution Functions (PDFs), using the Min/Max values of the distribution, for each work activity in the Integrated Master Schedule.
  4. Interpret Simulation Results ‒ using data produced by the Monte Carlo Simulation

Reducible Technical Risk

Technical risk is the impact on a program, system, or entire infrastructure when the outcomes of engineering development do not work as expected, do not provide the needed technical performance, or create higher than planned risk to the performance of the system. Failure to identify or properly manage this technical risk results in performance degradation, security breaches, system failures, increased maintenance time, significant technical debt, and additional cost and time for the end item deliverables of the program.

Reducible Cost Estimating Risk

Reducible cost estimating risk is dependent on technical, schedule, and programmatic risks, which must be assessed to provide an accurate picture of program cost. Cost risk estimating assessment addresses the cost, schedule, and technical risks that impact the cost estimate. To quantify these cost impacts from the reducible risk, sources of risk need to be identified. This assessment is concerned with three sources of risk and ensures that the model calculating the cost also accounts for these risks: 

  • The risk inherent in the cost estimating method. The Standard Error of the Estimate (SEE), confidence intervals, and prediction intervals.
  • The risk inherent in technical and programmatic processes. The technology’s maturity, design, and engineering, integration, manufacturing, schedule, and complexity.
  • The risk inherent in the correlation between WBS elements, which decides to what degree one WBS element’s change in cost is related to another and in which direction. WBS elements within the project have positive correlations with each other, and the cumulative effect of this positive correlation increases the range of the costs.

Unidentified reducible Technical Risks are often associated with Reducible Cost and Schedule risk.

Aleatory Uncertainty Creates Irreducible Risk

Aleatory uncertainty, and the risk it creates, comes not from a lack of information but from the naturally occurring processes of the system. For aleatory uncertainty, more information cannot be bought, nor can specific risk reduction actions be taken to reduce the uncertainty and the resulting risk. The objective of identifying and managing aleatory uncertainty is to be prepared to handle the impacts when the risk is realized.

The method for handling these impacts is to provide margin for this type of risk, including cost, schedule, and technical margin.

Using the standard project management definition, Margin is the difference between the maximum possible value and the maximum expected value, and it is separate from Contingency. Contingency is the difference between the current best estimate and the maximum expected estimate. For systems under development, the technical resources and the technical performance values carry both margin and contingency.

Schedule Margin should be used to cover the naturally occurring variances in how long it takes to do the work. Cost Margin is held to cover the naturally occurring variances in the price of something being consumed in the program. The technical margin is intended to cover the naturally occurring variation of technical products.

Aleatory uncertainty and the resulting risk are modeled with a Probability Distribution Function (PDF) that describes the possible values the process can take and the probability of each value. The PDF for the possible durations of the work in the program can be determined. Knowledge about the aleatory uncertainty can be gained through Reference Class Forecasting and past performance modeling. This new information allows us to update – adjust – our model: past performance on similar work provides information about our future performance. But the underlying processes are still random, and the new information simply creates a new aleatory uncertainty PDF.

The first step in handling irreducible uncertainty is the creation of margin – schedule margin, cost margin, and technical margin – to protect the program from the risk of irreducible uncertainty. Margin is defined as the allowance in the budget or programmed schedule to account for uncertainties and risks.
Margin needs to be quantified by:

  • Identifying WBS elements that contribute to margin.
  • Identifying uncertainty and risk that contributes to margin.

Irreducible Schedule Risk

Programs are over budget and behind schedule, to some extent because uncertainties are not accounted for in schedule estimates. Research and practice are now addressing this problem, often by using Monte Carlo methods to simulate the effect of variances in work package costs and durations on total cost and date of completion. However, many such program risk approaches ignore the significant impact of probabilistic correlation on work package cost and duration predictions.

Irreducible schedule risk is handled with Schedule Margin which is defined as the amount of added time needed to achieve a significant event with an acceptable probability of success.  Significant events are major contractual milestones or deliverables. 

With minimal or no margins in schedule, technical, or cost present to deal with unanticipated risks, successful acquisition is susceptible to cost growth and cost overruns. 

The Project Manager owns the schedule margin. It does not belong to the client, nor can it be negotiated away by the business management team or the customer. This is the primary reason to CLEARLY identify the Schedule Margin in the Integrated Master Schedule. It is there to protect the program deliverable(s). Schedule margin is not allocated to over-running tasks; rather, it is planned to protect the end item deliverables.

The schedule margin should protect the delivery date of major contract events or deliverables. This is done with a Task in the IMS that has no budget (BCWS). The duration of this Task is derived from Reference Classes or Monte Carlo Simulation of aleatory uncertainty that creates a risk to the event or deliverable. 
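A sketch of one common way to size such a margin task, assuming simulated completion durations are available (for example, from the Monte Carlo run described above): take the difference between a protected confidence level and the median duration. Numbers are illustrative.

```python
# Illustrative sketch: size the schedule margin task as the difference between
# a protected confidence level (e.g., the 80th percentile completion) and the
# median duration from Monte Carlo results.
import numpy as np

rng = np.random.default_rng(7)
simulated_durations = rng.lognormal(mean=np.log(200), sigma=0.15, size=50_000)  # days

p50 = np.percentile(simulated_durations, 50)
p80 = np.percentile(simulated_durations, 80)

schedule_margin = p80 - p50
print(f"Median finish: {p50:.0f} days, 80% confidence finish: {p80:.0f} days")
print(f"Schedule margin task duration: ~{schedule_margin:.0f} days")
```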

The Integrated Master Schedule (or Product Roadmap and Release Plan), with margin to protect against the impact aleatory uncertainty, represents the most likely and realistic risk‒based plan to deliver the needed capabilities of the program. 


Source of the article on HerdingCats

I came across The Order of Time in Science, 4 May 2018. It looked interesting on the surface from the review.

The author also wrote Seven Brief Lessons on Physics, which I haven't read.

 This book approaches some fundamental questions about the nature of the universe. 

Things like: "Why do we remember the past and not the future?" "Do we exist in time or does time exist in us?" "What does it really mean when we say time 'passes'?"

The book is an easy read, with only a few simple equations, but it is a powerful read asking and answering questions about the nature of time, our true lack of understanding of time, and how this lack of understanding impacts our larger understanding of the universe.


Source of the article on HerdingCats

Agree, and focus more on discovery since in *delivery* you have 4 problems: 1. Requirements will change, 2. Requirements are never complete 3. It’s impossible to gather all requirements in beginning 4. You don’t have enough time or $$$ to do everything

This quote is typically the basis of proposing agile software development over traditional software development. While there is some truth in the principles stated there, there is a fundamental flaw.

If the development work has no deadline, no not-to-exceed budget, and no Minimal Viable Capabilities – in the MVP sense, meaning that without those Capabilities and their Features we cannot Go Live on the needed date for the needed budget – then the phrases in the quote may be applicable.

But if the development work is a Project – a fixed period of performance (with margin), a fixed budget (with margin), and a fixed set of Capabilities – then the question is: can agile be used to develop the software?

The answer is, of course, it can, and it is done every single day in many domains I work in – ranging from embedded flight control systems for winged vehicles and orbiting spacecraft to Software Intensive Systems of Systems for industrial, business, insurance, and financial applications.

But these developments are not Products in the sense of the term used by agilists. They are Projects in the sense of the term defined by PMI:

A project is a temporary endeavor undertaken to create a unique product, service or result. A project is temporary in that it has a defined beginning and end in time, and therefore defined scope and resources.

When an agile advocate says software development is a product, not a project, and provides all the reasons why the thought process needs to switch from projects to products, they may be unfamiliar with how many businesses actually work. In the product development world, the funding, recording of revenue, and management of staffing at the financial level are handled as a Project. FASB 86 is an example of how cost and revenue for internal software development are recorded on the balance sheet.

Financial Accounting Standards Board (FASB) Statement No. 86, Accounting for the Costs of Computer Software to Be Sold, Leased, or Otherwise Marketed, applies to the costs of both internally developed and produced software and purchased software to be sold, leased, or otherwise marketed.

Projects build Products on the Balance Sheet. Developers may be working on a Product, but the CFO is recording the work as a Project that has a bounded period of performance, a bounded budget, a bounded set of Capabilities, from which revenue will be generated.

So those loud voices shouting software is product development, may very well hold that view from their position in the firm. But for those signing the paychecks, that is not likely the view.

What Does This Mean for Agile? 


Agile shouldn't care. Agile produces useful working software at the end of every Sprint. If that software is not put into the hands of those paying for it until some release date, those writing the software shouldn't care. The software is still useful – likely useful to the Staging and Pre-Production manager. The software is ready for use by someone internal or external to the firm; regardless of the end user's location, the software is still ready for use.

Separating the project from the product is a developer’s point of view; it is not a business point of view.

When you hear that they are separate and that we need to move to a product point of view, you’re likely talking to a coder who hasn’t taken a Managerial Finance class at his educational institution. For those of us on the Finance and Business Operations side of writing software for money, it’s a moot point.

Go write code that every few weeks produces value for the next step in the process. That next step could be the end user in some distant land, or it could be the Systems Integration Testing staff down the hall, who in turn produce useful outcomes for the User Acceptance Testing staff across campus, who in turn release the working UAT code to the customer around the world.

If an agile development team of 6 people sits in the same room with their end customer, who will start using the working software every few weeks for their business, there is no difference in principle from the team of 6 people who have never met their customer except through the Product Owner who comes from downtown every 3rd day to sit with the team, and who has a go-live date for the working system this coming December (7 months from now). On that date, the customer will have been trained, all the external and internal interfaces to other systems will have been end-to-end verified and validated, and a new system will be available to those strangers who didn’t even know something new was coming.

So before listening to any conjecture about how agile should or should not be done, establish a context: the domain, the business and technical processes, and the external and internal business, technical, and financial governance processes.

You’re not going live with working software every few weeks for a DO-178 flight control system in the same way you can go live every few hours for a sports photo-sharing web system. Both ends of the spectrum can and do use agile software development processes. Don’t confuse the business process with the everyday software development processes.


Source of the article on HerdingCats

There’s a fallacy used by some in the software development business that estimates are not needed to make decisions in the presence of uncertainty. It turns out, of course, that this can only be true if the world we live in is deterministic.

But we don’t live in a deterministic world; we live in a non-deterministic world. Determinism was debunked long ago. It was built on a house of sand – the originators of determinism just didn’t know it. In the same way, the suggestion that we don’t need to estimate while spending other people’s money is a house built on sand.

Leibniz (1702): “There is no doubt that a man could make a machine which was capable of walking around a town for a time, and of turning precisely at the corners of certain streets. And an incomparably more perfect, although still limited, mind could foresee and avoid an incomparably greater number of obstacles. And this being so, if this world were, as some think it is, only a combination of a finite number of atoms which interact in accordance with mechanical laws, it is certain that a finite mind could be sufficiently exalted as to understand and predict with certainty everything that will happen in a given period. This mind could then not only make a ship capable of getting itself to a certain port, by first giving it the route, the direction, and the requisite equipment, but it could also build a body capable of simulating a man.”

Leibniz’s claim was later made more famous by Laplace, whose hypothetical intellect – now called Laplace’s demon – knowing all the forces of nature and the positions of all things, could in principle predict the entire future. Of course, Laplace had no math to support his position; that came later in the form of differential equations. But Laplace’s statements are the foundation of classical mechanics.

For each system in classical mechanics, there are equations of motion of the form:

d²r/dt² = F(r)

which have a unique solution for given initial conditions:

r(t₀) = r₀ and dr/dt(t₀) = v₀
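
To make the deterministic claim concrete, here is a minimal sketch – assuming a simple harmonic-oscillator force F(r) = −k·r, chosen only for illustration and not part of the original post – showing that once r(t₀) and v(t₀) are fixed, the whole trajectory is fixed too:

```python
# A minimal sketch of classical determinism: integrate d²r/dt² = F(r)
# for an assumed force F(r) = -k*r (a harmonic oscillator, used only
# as an example). The same initial conditions always produce the same
# trajectory -- that is the deterministic claim.

def simulate(r0, v0, k=1.0, dt=0.001, steps=10_000):
    r, v = r0, v0
    for _ in range(steps):
        a = -k * r        # F(r), with mass m = 1
        v += a * dt       # update velocity
        r += v * dt       # update position (semi-implicit Euler)
    return r, v

# Identical initial conditions produce an identical final state, every run.
print(simulate(1.0, 0.0))
print(simulate(1.0, 0.0))
```

Run it twice and the outputs match exactly – there is no uncertainty anywhere in the model, which is precisely the condition project work never satisfies.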

Before moving on: the machine capable of walking around a town that Leibniz imagined has come to pass. So now, what about determinism and that machine?

Of course, the machine in the video wouldn’t be able to take a step without falling if it didn’t have a probabilistic, feedforward, adaptive closed-loop control system. Closed-loop control systems actively control the system based on state feedback. Open-loop control systems execute a fixed sequence of control inputs without any feedback. Here’s a nice paper as an example of control systems: “A Probabilistic Approach to Mixed Open-loop and Closed-loop Control, with Application to Extreme Autonomous Driving.”
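
The difference between the two is easy to see in a toy sketch – not the approach from that paper, just a hypothetical one-dimensional plant with a random disturbance. The open-loop controller replays a pre-computed command sequence; the closed-loop controller corrects itself from the measured state:

```python
import random

# Toy contrast of open-loop vs closed-loop control of a 1-D position.
# The "plant" is disturbed by noise at every step; the open-loop controller
# replays a fixed command sequence, the closed-loop controller corrects
# itself using the measured state.

def plant(x, u):
    return x + u + random.gauss(0.0, 0.05)   # state update plus disturbance

def open_loop(target, steps=50):
    x, u = 0.0, target / steps               # commands computed up front
    for _ in range(steps):
        x = plant(x, u)                      # feedback is never used
    return x

def closed_loop(target, steps=50, gain=0.2):
    x = 0.0
    for _ in range(steps):
        u = gain * (target - x)              # proportional feedback
        x = plant(x, u)
    return x

# Average miss distance over many runs: feedback keeps the error small.
trials = 2_000
print("open-loop mean error  :",
      sum(abs(1.0 - open_loop(1.0)) for _ in range(trials)) / trials)
print("closed-loop mean error:",
      sum(abs(1.0 - closed_loop(1.0)) for _ in range(trials)) / trials)
```

With the disturbance present, the open-loop result wanders off target while the closed-loop result stays near it – the same reason a walking machine, or a project, needs feedback rather than a fixed plan executed blindly.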

Putting this to work on Projects 


In our domain, the role I usually participate in is called Program Planning and Controls. These planning and controls activities always take place in the presence of uncertainty, which of course creates risk. The presence of uncertainty makes the system indeterminate. To manage in the presence of uncertainty and the resulting risks, we need specific processes and practices based on principles.

Since all project work operates in the presence of uncertainty – reducible and irreducible – and the managers of these projects need to make decisions in the presence of these uncertainties, we need to make estimates to inform our decision-making process.

Here’s an example of the probability distribution of a cost estimate for a project.

The 50th percentile cost is $2,296,898. That means there is a 50/50 chance the project will cost more than that or less than that. That number is the median – the middle of the range. If we want to know the number at 80% confidence, it is $2,333,153. That says there is an 80% confidence that the cost of the project will be $2.3M or less.

[Figure: Cost Probabilities]
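
Percentiles like these come straight from a simulated cost distribution. Here is a minimal sketch of how such numbers are produced – the task ranges below are made-up illustrations, not the data behind the figures:

```python
import random

# A minimal Monte Carlo sketch of a cost estimate: each task cost is drawn
# from an assumed triangular (low, most likely, high) distribution; the
# totals across many trials give the 50th and 80th percentile costs.
# The task numbers here are illustrative, not the project in the figures.

tasks = [  # (low, most likely, high) in dollars
    (400_000, 550_000, 900_000),
    (300_000, 450_000, 800_000),
    (500_000, 700_000, 1_100_000),
    (250_000, 350_000, 600_000),
]

def one_trial():
    return sum(random.triangular(lo, hi, ml) for lo, ml, hi in tasks)

random.seed(42)
totals = sorted(one_trial() for _ in range(10_000))

def percentile(p):
    return totals[int(p / 100 * (len(totals) - 1))]

print(f"50th percentile cost: ${percentile(50):,.0f}")
print(f"80th percentile cost: ${percentile(80):,.0f}")
```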

When we speak about the schedule, we can use the same terminology. In this case, there is an 80% confidence the project will complete on or before 11/06/2015. 

[Figure: Schedule Probabilities]

These graphs are generated from a Monte Carlo simulation tool applied to a resource-loaded Integrated Master Schedule (IMS). If there is any doubt as to why you MUST create a resource-loaded schedule, please put those doubts away. These two graphs are the source for the Joint Confidence Level, showing the overall probability distributions for both cost and schedule for the project.

This picture should convince anyone that the “joint” probability of completing on or before the target date (I didn’t say what that was) and at or below the planned budget (I didn’t say what that was either) needs to be modeled in a way that would be considered credible.
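
The Joint Confidence Level is simply the fraction of simulated futures that meet both targets at once. A sketch – with illustrative distributions and targets rather than the project’s actual data – looks like this:

```python
import random

# A sketch of a Joint Confidence Level (JCL) calculation: run many trials of
# a correlated (cost, duration) pair and count the fraction that come in at
# or under BOTH the target budget and the target duration. The distributions
# and targets below are illustrative assumptions, not project data.

TARGET_COST = 2_400_000      # dollars
TARGET_DURATION = 320        # working days

def one_trial():
    drift = random.gauss(0.0, 0.10)                        # shared risk driver
    cost = 2_200_000 * (1.0 + drift + random.gauss(0.0, 0.05))
    duration = 300 * (1.0 + drift + random.gauss(0.0, 0.04))
    return cost, duration

random.seed(7)
trials = [one_trial() for _ in range(20_000)]
jcl = sum(c <= TARGET_COST and d <= TARGET_DURATION for c, d in trials) / len(trials)
print(f"Joint Confidence Level: {jcl:.0%}")   # probability of meeting both targets
```

Because cost and duration share risk drivers, the joint probability is lower than either the cost confidence or the schedule confidence alone – which is why the JCL, not the two separate curves, is the credible number.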

[Figure: Cost and Finish JCL]

Outside of a small community of trained and experienced risk analysts, this is rarely, if ever, done. Most IT projects have a made-up budget and schedule. Their strategy is based almost entirely on HOPE, with no underlying assessment of the statistical and probabilistic drivers of cost and schedule, let alone the technical aspects coupled with risk.

And we wonder why IT projects come in late and over budget 


It turns out, of course, that large IT programs have the same problems – poor estimating, politically motivated estimating, naive estimating, etc. But they are required to produce data like this, so we know when it is bogus – sometimes early enough to fix it, sometimes not.

We only wish it were as simple as many would have us believe. But sadly it is one of those wicked problems that we simply have to manage in the presence of uncertainty.

But no credible decision can be made in the presence of uncertainty without making an estimate of the outcome of that decision.

 


Source of the article on HerdingCats

“Uncertainty is an essential and non-negotiable part of a forecast. … Sometimes an honest and accurate expression of the uncertainty is what has the potential to save [big things]. … However, there is another reason to quantify the uncertainty carefully and explicitly. It is essential to scientific progress, especially under Bayes’s theorem.” – The Signal and the Noise: Why So Many Predictions Fail – but Some Don’t, Nate Silver, via Musings on Project Management, John Goodpasture, author of Project Management: The Agile Way.

Since all project work operates in the presence of uncertainty, any decision that needs to be made on the project needs to be informed by an estimate. This uncertainty is further complicated by scarce resources, changing demands on the project team, changing conditions in the market or project domain, variances in productivity, unanticipated defects, and other stochastic processes found on all projects.

Nate Silver cautions us:

In project work, one rarely sees all the data point toward one precise conclusion. Real data is noisy—even if the theory is perfect, the strength of the signal will vary. And under Bayes’s theorem, no theory is perfect. Rather, it is a work in progress, always subject to further refinement and testing. This is what skepticism is all about.
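
That refinement under Bayes’s theorem can be shown with a toy example – the throughput hypotheses and sprint observations below are invented for illustration, not taken from any project:

```python
# A toy Bayesian refinement of an estimate, in the spirit of the quote above:
# start with a prior over the team's true throughput (features per sprint),
# then update it with each observed sprint. All numbers are illustrative.

from math import exp

hypotheses = [4, 5, 6, 7, 8]                  # candidate true throughputs
prior = {h: 1 / len(hypotheses) for h in hypotheses}

def likelihood(observed, h, spread=1.0):
    # Gaussian-shaped likelihood of seeing `observed` if the truth were `h`.
    return exp(-((observed - h) ** 2) / (2 * spread ** 2))

observations = [5, 6, 5, 7]                   # noisy sprint-by-sprint actuals

posterior = dict(prior)
for obs in observations:
    unnormalized = {h: posterior[h] * likelihood(obs, h) for h in hypotheses}
    total = sum(unnormalized.values())
    posterior = {h: p / total for h, p in unnormalized.items()}

for h in hypotheses:
    print(f"throughput {h}: prior {prior[h]:.2f} -> posterior {posterior[h]:.2f}")
```

Each sprint’s actuals shift weight toward the throughput values most consistent with what was observed – the estimate is refined, never declared perfect.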

And we should be skeptical about the data from our projects. Reducible and Irreducible uncertainties abound. These create risks to cost, schedule, and technical performance. So the starting point to dealing with these uncertainties is …

Risk Management is How Adults Manage Projects – Tim Lister

And of course, managing risk requires making estimates. 


And while you’re reading John’s book, read Agile!: The Good, the Hype and the Ugly, which speaks to the unsubstantiated conjectures, carnival hucksters, and other purveyors of out-and-out fallacies about agile processes, estimates, planning, testing, and the finance and economics of software development. Names provided on request.


Source of the article on HerdingCats