spec_by_ex_case_studies
The team started taking business stakeholders through different Cucumber scenarios, not only to verify edge cases but also to identify which scenarios were important, reducing the scope and avoiding just-in-case code. A test used to say, “User enters 100 in box id.” Now you would say, “The user enters a valid amount.” A valid amount would be defined in a separate test. Once you have it written, you don’t have to test it explicitly in every other test. A test for valid amounts would also try negative numbers, letters, and so on, but it abstracted that checking away from every other test. This was quite a big step.

In general, the uSwitch team doesn’t track many technical project metrics; they only look at lead time and throughput. They’re much more focused on the business performance of the system and on the value added by a feature. The entire development process is now driven by the expected business value of features. Instead of big plans and large releases, they build small increments, release them often, and monitor whether each increment added value to the business. Because their business model depends on immediate web conversion rates, they can perform this kind of evaluation easily.

There was a lot of frustration on both sides. We asked, “How come they aren’t reviewing the tests? How come they aren’t valuing the tests?” At the same time, the business team was frustrated: “How come the developers have a passing test, but it doesn’t work?” They did not believe in those tests.

“Fantasy state” is the term I kept using to let other developers know that I didn’t trust a test that wasn’t using the correct entry point. Another symptom of fantasy state is “thick fixtures”; fixtures shouldn’t have much logic in them and should be pretty lightweight. Using personas helped us find the right level of abstraction to identify the appropriate entry point into our application.
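The “valid amount” abstraction described above can be sketched in pytest-style Python. This is a hypothetical illustration, not the team’s actual code: the rule that an amount must be a positive whole number, and the names `is_valid_amount` and the tests, are all assumptions made for the sketch.

```python
# Hypothetical sketch: validity rules for an amount are specified once,
# in their own test, instead of being repeated in every other test.

def is_valid_amount(text):
    """Assumed rule: a valid amount is a positive whole number."""
    return text.isdigit() and int(text) > 0

def test_valid_amount_rules():
    # The ONLY place that probes the edge cases: negatives, letters, zero.
    assert is_valid_amount("100")
    assert not is_valid_amount("-5")
    assert not is_valid_amount("letters")
    assert not is_valid_amount("0")

def test_transfer_with_a_valid_amount():
    # Other tests simply say "a valid amount" and rely on the test above,
    # instead of re-trying negatives and letters themselves.
    amount = "100"
    assert is_valid_amount(amount)
```

The point of the design is the single source of truth: when the definition of “valid amount” changes, only `test_valid_amount_rules` and the helper change, while every other scenario keeps reading at the business level.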
Before we used personas, we often picked an inappropriate entry point, which led us to heavy fixtures that were prone to fantasy state.

Typically, we use bond proceeds to fund private student loans. However, we changed our business model and made the entire funding portion of the system configurable, so that we could use lenders to provide funds and continue to provide loans to students. It was a dramatic overhaul of a core piece of the system. Before this new funding requirement, our system didn’t even have the concept of a lender, because we could assume Iowa Student Loan was the lender. We were able to repurpose our existing acceptance tests to say, “OK, here’s our funding requirement.” For all of the tests we had, we discussed the impact and provided funding so they would still work. We had some interesting discussions based on scenarios where there is no more funding available, or funding is available but not for this school or this lender. So we had some edge cases for these requirements, but the work was really about making the new funding model more flexible and configurable.

The developers were the only ones who wrote executable specifications, and they understood that this didn’t give them the benefits they expected. To improve communication and collaboration, everyone had to be involved. A single group of people was unable to do that on their own. We learned that we should try to keep fixtures as simple as possible and that duplication is bad. The automation layer is code, like any other code.

The customer thought completely differently about the application when they saw the user interface. When we started writing acceptance tests for the UI, they had much more in them than the ones written for the domain, so the domain code had to be changed. But the customer assumed that that part was done: they had their FitNesse tests there, they drove them, and they were passing. People assumed that the back end would handle everything that the UI mockup screens had on them.
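The configurable funding model discussed above, including the two edge cases the team debated (no more funding available, and funding not available for a particular school), can be sketched as follows. This is purely illustrative: the `Lender` class, `find_funding`, and all the figures are invented for the sketch and are not the actual system design.

```python
# Hypothetical sketch: funding is driven by lender configuration instead of
# assuming a single hard-coded lender.

class Lender:
    def __init__(self, name, funds, schools):
        self.name = name
        self.funds = funds        # funding still available from this lender
        self.schools = schools    # schools this lender will fund

def find_funding(lenders, school, amount):
    """Return the first configured lender able to fund the loan, or None."""
    for lender in lenders:
        if school in lender.schools and lender.funds >= amount:
            return lender
    return None

# The old single-lender assumption becomes just one configuration entry.
lenders = [Lender("Iowa Student Loan", funds=10_000, schools={"State U"})]

assert find_funding(lenders, "State U", 5_000).name == "Iowa Student Loan"
# Edge case: funding exists, but not enough of it.
assert find_funding(lenders, "State U", 50_000) is None
# Edge case: funding exists, but not for this school.
assert find_funding(lenders, "Other College", 5_000) is None
```

Because the lender list is plain data, the repurposed acceptance tests only need a set-up step that provides funding, rather than rewrites of every scenario.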
Sometimes the back end didn’t support queries or data retrieval in a form that was usable to the front end.

It was a struggle at the beginning to say that it’s OK for a developer to write a test, because testers thought that they did a much better job of testing. I think they come from a completely different perspective. Actually, I’ve found since then that when a developer and a tester talk about a test together, it comes out significantly better than if one of them writes it on their own.

Getting people to work together not only helped them address bottlenecks in the process but also resulted in better specifications, because different people approached the same problem from different angles. Collaboration helped both groups share knowledge and gradually build trust in each other, which made the process much more efficient in the long term.

Most of the bugs testers found before were unit-level bugs. You spent all your time on those and didn’t have time for anything else.

The biggest benefit from this is getting us to talk together so that we have a mutual understanding of the requirements. That’s more important than test automation. After we saw the benefits of collaboration, the product owner got excited about it as well and heard about Acceptance Test-Driven Development. We had a vague idea at first that we could write acceptance tests ahead of time so that they could be the requirements. What changed over time is how much detail we need to put into tests up front and how many test cases are enough. I’m a tester; I could probably test something forever and keep thinking of things to test, but we have only two weeks. So we had to figure out how to internalize the risk analysis and say: Here are the tests we really need; here are the really important parts of the story that have to work.
Building a living documentation system to share the knowledge helped the development team learn about the business processes, and it gave the business users visibility into what they were actually doing. Writing things down exposes inconsistencies and gaps. In this case, it made people think harder about what they were actually doing from a business perspective.

We get to what we actually intended to build quicker, because we use the same language in the tests as we do when we decide what to build and go through the process of understanding our customers. That helps reduce communication issues. We aspire to never having a situation where developers turn around and say: What we built works; you just didn’t ask for the right thing.

We spent too long testing trivial bits of the user interface, because that was easy to do. We didn’t spend enough time digging into edge cases and alternative paths through the application.

A model based on mistrust creates adversarial situations and requires a lot of bureaucracy to run. Supposedly, requirements have to go through sign-off because users want to ensure that what the analysts will do is right; in truth, sign-off is required so analysts can’t be blamed for functional gaps later on. Because everyone needs to know what’s going on, specifications go through change management; really, this ensures that nobody can be blamed for not telling others about a change. It’s said that code is frozen for testing to provide testers with a more stable environment; this also guarantees that developers can’t be blamed for cheating while the system was being tested. On the face of it, all these systems are in place to provide better quality. In reality, they’re only alibi generators.

After talking to teams who formalized a preparation phase in different ways, I have learned that collaboration on examples is a two-step process. In the first step, someone prepares the basic examples.
In the second step, these examples are discussed with the team and extended. The goal of the preparation phase is to ensure that basic questions are answered and that there’s a suggested format for examples when the team starts to discuss them. All of this can be done by one or two people, making the larger workshop much more effective.

Many teams found that, at the start, big workshops were useful as a means to transfer domain knowledge and align the expectations of developers, testers, business analysts, and stakeholders. But the majority of teams stopped running big workshops after a while, because they discovered that they’re hard to coordinate and cost too much in terms of people’s time.

Specifications that talk about business processes are worth much more over the long term. Business users can participate in documenting business processes and provide much better feedback than they would on acceptance tests that pertain to software. Some things, such as usability, can never be properly automated, but we can still try to validate parts of specifications frequently. This addresses the problem of specifying things that are hard to automate, an issue that many teams avoid.

Top 5:
- Collaboration on requirements builds trust between stakeholders and delivery team members
- Collaboration requires preparation
- There are many different ways to collaborate
- Looking at the end goal as business process documentation is a useful model
- Long-term value comes from living documentation

The more information you have, the better decisions you can make, so investments that produce information become much more compelling, because they enable your team to make better decisions. One of the profound insights from viewing software in this manner is that information flows in the opposite direction of work product.
That is, if software is created by performing analysis, programming, testing, and deploying, then information flows in the opposite direction: from deploying to testing to programming to analysis. For example, when a defect is found in testing, that is a piece of new information that is communicated to development, and possibly to analysis, depending on the type of defect. Why is this a profound insight? If we want to maximize information, then we should perform the later development steps, such as testing and deployment, as soon as possible, to make that information available.