SpecificationByExample  
Updated Mar 29, 2015 by luis.d.c...@gmail.com

By relying on customers to give them a list of user stories, use cases, or other relevant information, software delivery teams are asking their customers to design a solution.

The business users focus on communicating the intent of the desired feature and the value they expect to get out of it. This helps everyone understand what’s needed. The team can then suggest a solution that’s cheaper, faster, and easier to deliver or maintain than what the business users would come up with on their own.

Instead of relying on a single person to get the specifications right in isolation, successful delivery teams collaborate with the business users to specify the solution. People coming from different backgrounds have different ideas and use their own experience-based techniques to solve problems. Technical experts know how to make better use of the underlying infrastructure or how emerging technologies can be applied. Testers know where to look for potential issues, and the team should work to prevent those issues. All this information needs to be captured when designing specifications.

Key examples must be concise to be useful. By refining the specification, successful teams remove extraneous information and create a concrete and precise context for development and testing. They define the target with the right amount of detail to implement and verify it. They identify what the software is supposed to do, not how it does it.

Living documentation is a reliable and authoritative source of information on system functionality that anyone can access. It’s as reliable as the code but much easier to read and understand. Support staff can use it to find out what the system does and why. Developers can use it as a target for development. Testers can use it for testing. Business analysts can use it as a starting point when analyzing the impact of a requested change of functionality. It also provides free regression testing.

From my experience, changing the parts that are outdated doesn’t contribute significantly to cost. Often, cost is the result of time spent on finding what needs to be changed.

With so little time to complete a delivery phase, we need to eliminate as much unnecessary work as possible. Common problems that require fixing are rework, duplicated tasks caused by miscommunication, time wasted working back from code in order to understand the system, and time spent repeatedly executing the same tests manually.

The Batman is a dedicated person who jumps in to solve urgent issues and resolve important bugs, while the rest of the team keeps working on new functionality.

In the fixture code I put a little thing that told me when people were running tests on their machines. That group was pretty shy. I used this to find out when people weren’t running the tests, so I could go and talk to them, see what was wrong, and ask whether they had any problems.

Story champion: having one ensures that there’s a single point of contact for a story—so if you have an issue with a story, you can talk to the story champion.

• Keep executable specifications in a version control system
• Get sign-off on exported living documentation
• Get sign-off on scope (project manager), not specifications (business analyst, tester, and programmer)
• Get sign-off on “slimmed-down use cases”

If a validation fails and you change the code, that means you found and fixed a problem. If a validation fails and you have to change the specification, that means it wasn’t written properly.

Business rules should be much more stable than the technology that implements them. Watch out for executable specifications that change frequently, and look for ways to write them better.

A boomerang is a story or a product backlog item that comes back into the process less than a month after it was released. The team thought it was done, but it needs rework. But it’s not a boomerang when the business later extends existing requirements to incorporate innovation as the product evolves.

If your testers are lagging behind development, you’re doing something wrong. A similar warning sign is misaligned analysis. Some teams start analysis ahead of the relevant iteration, but they still have regular intervals and flow. Analyzing too much up front, analyzing things that won’t be implemented immediately, or being late with analysis when details are needed are signs that the process is wrong.

the biggest source of waste in software development is just-in-case code—software that was written without being needed.

Watch out for people who implement more than what was agreed on and specified with examples. Another good way to avoid just-in-case code is by discussing not only what you want to deliver but also what’s out of scope.

Shotgun surgery is a classic programming antipattern (also called a code smell) that occurs when a small change to one class requires cascading changes in several related classes. The same telling sign applies to living documentation: if a single change in production code requires you to change many executable specifications, you’re doing something wrong.

The F-16 was successful because the design provided a better and cheaper solution than what the customer asked for.

Most of the business users and customers I work with are inclined to present requirements as solutions; rarely do they discuss goals they want to achieve or the specific nature of problems that need solutions.

Defining scope plays an important role in the process of building the right software. If you get the scope wrong, the rest is just painting the corpse. -Gojko Adzic

Asking business users to provide the scope is, in effect, relying on individuals who have no experience with designing software to give us a high-level solution.

understanding why someone needs a particular application and how they’ll use it often leads to better solutions.

Asking business users to prioritize features or even business goals works better than asking them to prioritize low-level stories or tasks.

When goals are hard to pin down, a useful place to start is the expected outputs of the system: Investigate why they’re needed and how the software can provide them. Once you nail down expected outputs, you can focus your work on fulfilling the requirements that come with them. Analyzing why those outputs are required leads to formulating the goals of the project.

A great way to obtain the right scope for a goal is to firmly place the responsibility for a solution on the development team.

The business users provide direction for the “As a” and “In order to” statements, whereas the developers provide content for the “I want” statement.

Instead of a technical feature specification, we should ask for a high-level example of how a feature would be useful. This will point us towards the real problem.

Asking why something is needed can sound like a challenge and might put the other person in a defensive position, especially in larger organizations. Asking how something would be useful starts a discussion without challenging anyone’s authority.

Asking for alternative solutions can make whoever is asking for a feature think twice about whether the proposed solution is the best one. It should also start a discussion about alternatives with the delivery team.

Splitting larger user stories into smaller ones that can be delivered individually is good practice. Looking at higher-level stories is still required in order to know when we’re done. Instead of a flat, linear backlog, we need a hierarchical backlog to look at both levels. Lower-level specifications and tests will tell us that we’ve delivered the correct logic in parts; a higher-level acceptance test will tell us that all those parts work together as expected.

Specification by Example won’t work if we write documents in isolation, even if we implement all the other patterns described in this book.

A failure to collaborate on defining specifications and writing acceptance tests is guaranteed to lead to tests that are costly to maintain.

When developers wrote specifications in isolation, those documents ended up being too closely tied to the software design and hard to understand. If testers wrote them in isolation, the documents were organized in a way that was hard to maintain. In contrast, successful teams quickly moved on to more collaborative work models.

split stories until everybody agrees

A Three Amigos meeting is often sufficient to get good feedback from different perspectives. Compared to larger specification workshops, it doesn’t ensure a shared understanding across the entire team, but it’s easier to organize than larger meetings and doesn’t need to be scheduled up front.

Involve stakeholders, including end users.

When a project has many interested parties, all the requirements are often funneled through a single person, typically called a product owner. This works well for scope and prioritization but not specifications.

Prioritization must always be done by one person, but clarification can be done by the team itself.

If possible, include the actual stakeholders in the collaboration on specifications. This will ensure that you get the right information from an authoritative or dependable source and reduce the need for up-front analysis.

Our project manager, who’s effectively the product owner, will have prepared in advance the stories that he wants us to play. He and the business analyst already had them for the next iteration up on the board, and the business analyst went through them, preparing the acceptance tests.

if the team finds that they don’t have enough information to write the executable specifications, someone has to provide analysis earlier. That someone doesn’t necessarily have to be a business analyst or a subject matter expert. It could be a tester or a developer.

Do initial analysis before going to stakeholders; prepare “just enough” examples up front.

Complex specifications are hard to understand, so most people won’t be able to identify functional gaps and inconsistencies in such specifications.

Whether you decide to have someone work one week ahead to prepare initial examples or hold an introductory meeting to identify open questions, remember that the goal is to prepare for the discussion later, not replace it.

Specification by Example relies heavily on collaboration between business users and delivery team members.

The balance between the work done in preparation and the work done during collaboration depends on several factors: the maturity of the product, the level of domain knowledge in the delivery team, typical change request complexity, process bottlenecks, and availability of business users.

Everyone invents their own examples, but there’s nothing to ensure that these examples are even consistent, let alone complete. In software development, this is why the end result is often different from what was expected at the beginning. To avoid this, we have to prevent misinterpretation between different roles and maintain one source of truth.

Instead of starting to develop an incomplete story only to see it blow up in the middle of an iteration, we can flush such problems out during the collaboration on specifications while we can still address them—and when the business users are still available.

Feedback exercises: when someone suggests a special case after a story has been discussed, the person running the workshop should ask the participants to write down how they think the system should work.

Don’t have yes/no answers in your examples

Classes of equivalence (such as “less than 10”) or variables can create an illusion of shared understanding. Without choosing a concrete example, different people might, for example, be unclear on whether negative values are included or left out.
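A minimal sketch of why concrete examples beat equivalence classes: the bulk-discount rule below and its boundary of 10 are invented for illustration, but they show how concrete values force the negative and boundary cases into the open.

```python
# Hypothetical rule: orders of 10 units or more get a bulk discount.
# A class like "less than 10" hides questions that concrete examples
# surface immediately: what about exactly 10? zero? negative values?

def bulk_discount_applies(quantity: int) -> bool:
    if quantity < 0:
        raise ValueError("quantity cannot be negative")
    return quantity >= 10

# Concrete key examples, one per interesting boundary:
assert bulk_discount_applies(10) is True   # exactly at the boundary
assert bulk_discount_applies(9) is False   # just below
assert bulk_discount_applies(0) is False   # empty order
try:
    bulk_discount_applies(-1)              # negative: explicitly rejected
    assert False, "expected ValueError"
except ValueError:
    pass
```

Each row answers a question that the phrase “less than 10” leaves ambiguous.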

Ask for an alternative way to check the functionality. “How else would you be able to test this?” is a good question to kick off that discussion. Bas Vodde also suggests asking, “Is there anything else that would happen?”

Invented, simplified, or abstracted examples won’t have enough detail or exhibit enough variation for this.

To reduce the risk of legacy data surprising the team late in the iteration, try to use realistic data from the existing legacy system in the examples instead of specifying completely new cases.

A common mistake teams make when starting out with Specification by Example is to illustrate requirements using complex and convoluted examples. They focus on capturing realistic examples in precise detail and create huge, confusing tables with dozens of columns and rows. Examples like these make it hard to evaluate consistency and completeness of specifications.

Avoid the temptation to explore every combinatorial possibility

When illustrating using examples, look for examples that move the discussion forward and improve understanding.

I strongly advise against discarding any examples suggested as edge cases without discussion. If someone suggests an edge case example that the others consider to have been covered already, there might be two possible reasons: either the person making the suggestion doesn’t understand the existing examples, or they have genuinely found something that breaks the existing description that the others don’t see.

Looking for missing concepts and raising the level of abstraction

What are nonfunctional requirements? Characteristics such as performance, usability, or response times are often called nonfunctional because they aren’t related to isolated functionality.

Performance is critical for their data-archiving tools, so they make sure to express the performance requirements in detail, collected in the form “The system has to import X records within Y minutes on Z CPUs.” Remember that “faster than the current system” isn’t a good performance requirement. Tell people exactly how much faster and in what way.
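A requirement in the “X records within Y minutes” form can be turned into a measurable automated check. This sketch uses an illustrative stand-in importer and an invented budget; the point is that the numbers are explicit, not relative to some previous system.

```python
import time

# "100,000 records within 60 seconds" as a check, rather than
# "faster than the current system". Importer and budget are illustrative.

def import_records(records):
    return [r.upper() for r in records]      # stand-in for real import work

RECORD_COUNT = 100_000
BUDGET_SECONDS = 60                          # the agreed X-records-in-Y budget

records = [f"record-{i}" for i in range(RECORD_COUNT)]
start = time.monotonic()
imported = import_records(records)
elapsed = time.monotonic() - start

assert len(imported) == RECORD_COUNT
assert elapsed < BUDGET_SECONDS, f"took {elapsed:.1f}s, budget {BUDGET_SECONDS}s"
```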

Try the QUPER model:
• Utility is the point where a product moves from unusable to usable. For example, the utility point for the startup time of a mobile phone is one minute.
• Differentiation describes when the feature starts to develop a competitive advantage that will influence marketing. For example, the differentiation point for mobile phone startup is five seconds.
• Saturation is where the increase in quality becomes overkill. It makes no difference to the user whether a phone takes half a second or one second to start, making one second a possible saturation point for mobile phone startup.

We build what we call a “vertical slice” as early on in the process as we can, typically at the end of our pre-production phase. This vertical slice is a small section of the game (e.g., one level, part of a level, the game introduction) and is to final (shippable) quality. This is usually supplemented by a “horizontal slice,” i.e., a broad slice of the whole game but blocked out and in low fidelity, to give an idea of the scale and breadth of the game. You can get a lot of use out of reference or concept art to illustrate the visual look and fidelity of the final product and employ people specifically for this, to produce high quality artwork that shows how the game will look. Instead of trying to quantify features that have an elusive quality, Supermassive Games builds a reference example against which team members can compare their work.

A specification should be:
• Precise and testable
• A true specification, not a script
• About business functionality, not about software design
• Self-explanatory
• Focused
• In domain language

Capturing acceptance criteria with scripts instead of specifications costs a lot of time in the long term.

Focus on what it should do rather than how it should work.

Executable specifications guide us in delivering the right business functionality. Technical tests ensure that we look at low-level technical quality aspects of the system. We need both, but we shouldn’t mix them.

Instead of dwelling on user interface details, it’s more useful to think about user journeys through the website. When specifying collaboratively, invest time in parts of the specifications in proportion to their importance to the business. Items that are important and risky should be explored in detail. Those that aren’t that important might not need to be specified so precisely.

Many teams made the mistake of putting all the configuration and setup for all prerequisites into the specification. Although this makes the specification explicit and complete from a conceptual perspective, it can also make it difficult to read and understand.

Push the responsibility for creating a valid object to the automation layer.

If a key attribute of an example matches the default value provided by the automation layer, it’s still wise to specify it explicitly, although it can be omitted.
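The idea of pushing valid-object creation into the automation layer can be sketched as a small builder with sensible defaults. The `Account` type and `valid_account` helper are hypothetical names, not from the book; the point is that a specification states only the attributes it cares about.

```python
from dataclasses import dataclass, replace

# Hypothetical domain object; names are illustrative.
@dataclass(frozen=True)
class Account:
    username: str = "default-user"
    balance: int = 0
    active: bool = True

def valid_account(**overrides) -> Account:
    """Automation-layer helper: builds a valid Account, letting a
    specification mention only the attributes that matter to it."""
    return replace(Account(), **overrides)

# A specification about overdrafts mentions only the balance;
# everything else is a valid default supplied by the automation layer.
overdrawn = valid_account(balance=-50)
assert overdrawn.balance == -50
assert overdrawn.active is True   # irrelevant detail, defaulted
```

When a default happens to be the key attribute of an example, stating it explicitly in the specification (as the note above advises) keeps the example self-explanatory.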

Instead of letting different jargons emerge, Eric Evans suggested developing a common language as a basis for a shared understanding of the domain

• Don’t just use the first set of examples directly; refine the specification from them.

When automation is done along with implementation, developers have to design the system to make it testable. When automation is delegated to testers or consultants, developers don’t take care to implement the system in a way that makes it easy to validate. This leads to more costly and more difficult automation. It also causes tests to slip into the next iteration, interrupting the flow when problems come back.

Creating an executable specification from existing manual test scripts might seem to be a logical thing to do when starting out. Such scripts already describe what the system does and the testers are running them anyway, so automation will surely help, right? Not really—in fact, this is one of the most common failure patterns.

Instead of plainly automating manual test scripts, think about what the script is testing and describe that with a group of independent, focused tests. This will significantly reduce the automation overhead and maintenance costs.

Many tools for automating executable specifications allow us to integrate with software below the user interface. This reduces the cost of maintenance, makes the automation easier to implement, and provides quicker feedback. But business users and testers might not trust such automation initially. Without seeing the screens moving with their own eyes, they don’t believe that the right code is actually being exercised.

Executable specifications should generally be automated through the user interface only as a last resort, because user interface automation slows down feedback and significantly increases the complexity of the automation layer.

Your test suite is a first-class part of the code that needs to be maintained as much as the regular code of the application. I now think of acceptance tests as first class and the production code itself as less than first class. The tests are a canonical description of what the application does.

What typically happens on projects is that a junior programmer is assigned to write the tests and the test system. However, automated test systems are difficult to get right. Junior programmers tend to choose the wrong approximations and build something less reliable.

Specifications with examples—those that end up in the living documentation— are much longer lived than the production code. A good living documentation system is crucial when completely rewriting production code in a better technology. It will outlive any code.

Describe validation processes in the automation layer

Don’t replicate business logic in the test automation layer. Don’t fake state; fantasy state is prone to bugs and has a higher maintenance cost. Use the real system to create your state. We had a bunch of tests break; we looked at them and discovered that with this new approach, our existing tests exposed bugs.

Don’t check business logic through the user interface

Browser automation libraries are often slow and lock user profiles, so only one such check can run at any given time on a single machine.

A quick manual check can give the team a level of confidence in the system that’s acceptable to their customers; automation would cost much more than the time it would save long term. Other good examples of checks that are probably not worth automating are intuitiveness and assessing how good something looks or how easy it is to use.

User interface tests were task oriented (click, point) and therefore tightly coupled to the implementation of the GUI, rather than activity oriented. There was a lot of duplication in tests. FitNesse tests were organized according to the way the UI was set up, so when the UI was updated, all these tests had to be updated; the translation from conceptual to technical changed. A small change to the GUI, adding a ribbon control, broke everything. There was no way we could update the tests.

If testing the UI, specify user interface functionality at a higher level of abstraction, and check only UI functionality with UI specifications. For example, one team’s UI tests check for mandatory form fields and functionality implemented in JavaScript, while all their business logic specifications are automated below the user interface.

Avoid recorded UI tests

Avoid using prepopulated data. When: specifying logic that’s not data-driven.

Setting up databases by prepopulating a standard baseline data set almost always causes a lot of pain. It becomes hard to understand what the data is, why it is there, and what it is being used for. When tests fail, it’s hard to know why. As the data is shared, tests influence each other. People get confused very quickly. This is a premature optimization. Write tests to be data agnostic.
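Writing tests to be data agnostic, as suggested above, can be as simple as having each test generate its own uniquely named data instead of assuming a shared baseline record exists. In this sketch an in-memory dict stands in for the database; the function names are illustrative.

```python
import uuid

# Stand-in for the real database.
db = {}

def create_user(username: str) -> None:
    if username in db:
        raise ValueError("username taken")
    db[username] = {"orders": []}

def test_new_user_has_no_orders():
    # Unique name per run: no collision with shared baseline data,
    # and no dependence on what that data happens to contain.
    username = f"user-{uuid.uuid4().hex}"
    create_user(username)
    assert db[username]["orders"] == []

test_new_user_has_no_orders()
```

Because the test creates everything it needs, it neither breaks when the shared data set changes nor influences other tests through it.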

Try using prepopulated reference data. When: data-driven systems.

At the beginning, for the sake of getting the tests to run, we weren’t doing things well. Setup and teardown were in the test, and they were so cluttered. We started centralizing the database setup and enforced change control on top of that. Tests just did checks; we didn’t bother with entering data in the tests. This made the tests much faster and much easier to read and manage.

Prepopulate only reference data that doesn’t change.

To get faster feedback from their executable specifications, a background generator pulls the full context of an object (a representative example) from the real database into an in-memory database for testing.

We automate specifications to get fast feedback, but our primary goal should be to create executable specifications that are easily accessible and human readable, not just to automate a validation process. Once our specifications are executable, we can validate them frequently to build a living documentation system.

The automation layer should define how something is tested; specifications should define what is to be tested.

Don’t rely too much on existing data if you don’t have to.

Botts’ Dots.

Identify unstable tests using CI test history. Seeing the test history helped me focus my efforts to increase stability; it showed me which groups of tests were failing most often so that I could fix them first.
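Ranking tests by historical failures takes only a few lines, assuming the CI history can be exported as (test name, passed) pairs; the sample data below is invented.

```python
from collections import Counter

# Hypothetical CI history: (test_name, passed) per execution.
history = [
    ("checkout_total", False), ("checkout_total", True),
    ("login", True), ("login", True),
    ("search", False), ("search", False), ("search", True),
]

# Count failures per test and rank the most unstable first.
failures = Counter(name for name, passed in history if not passed)
most_unstable = failures.most_common()
assert most_unstable[0] == ("search", 2)   # fix this group first
```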

Without a dedicated environment, it’s hard to know whether a test failed because there’s a bug, whether someone changed something on the test environment, or whether the system is just unstable. A dedicated environment eliminates unplanned changes and mitigates the risk of unstable environments.

In large enterprises with complex networks of systems, a team might work on only one part of the workflow, and its test system will talk to the test systems of other teams. The problem is that the other teams have to do their own work and testing so their test servers might not be always available, reliable, or correct.

A risk with test doubles is that the real system will evolve over time, and the double will no longer reflect the realistic functionality. To avoid that, be sure to check periodically whether the double still does what the original system is supposed to do. This is particularly important when the double is representing a third-party system over which you have no control.
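One way to run that periodic check is a shared contract test: the same set of assertions exercises both the double (in every build) and the real system (on a schedule). The tax-service example below is invented to illustrate the shape.

```python
def tax_service_contract(service):
    """Behaviors any tax service, real or fake, must exhibit
    (rates here are illustrative)."""
    assert service.rate("standard") == 0.20
    assert service.rate("reduced") == 0.05

class FakeTaxService:
    """Test double used by the executable specifications."""
    RATES = {"standard": 0.20, "reduced": 0.05}

    def rate(self, band: str) -> float:
        return self.RATES[band]

# Run against the double in every build; schedule the same contract
# function against the real service periodically (real client not shown).
tax_service_contract(FakeTaxService())
```

If the real system drifts, the scheduled run fails and the double can be brought back in line before it starts lying to the specifications.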

They selectively turned off access to some services based on the goal of each executable specification. This made their tests significantly faster but still involved the minimal set of real external systems in each test. The solution didn’t completely protect them from external influences, but it made troubleshooting a lot easier. If a test failed, it was clear which external dependency might have influenced it.

With larger groups of teams, especially if they’re spread across several sites, this can cause problems to accumulate. If one team breaks the database, the other teams won’t be able to validate their changes until the problem gets fixed. It might take several hours to find out that there’s a problem, determine what it is, fix it, and rerun the tests to confirm that it’s fixed. A broken build will always be someone else’s problem, and very soon the continuous-validation test pack will always be broken. At that point, we might as well just stop running the tests. (Much like component CI: only if it’s successful, push to the project CI.)

Run most tests only in the local team environments; slower tests don’t run on the central environment, so it can provide quick feedback.

If we create a user during our test, the test might fail the next time we run it because of a unique constraint on the username in the database. If we run that test inside a transaction and roll back at the end of the test, the user won’t be stored, so the two test executions will be independent.
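The rollback pattern can be demonstrated with SQLite’s in-memory database: the user created inside the transaction never persists, so repeated runs of the same test stay independent.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT PRIMARY KEY)")
conn.commit()

def run_test_in_transaction(conn):
    conn.execute("INSERT INTO users VALUES ('alice')")
    count = conn.execute("SELECT COUNT(*) FROM users").fetchone()[0]
    assert count == 1       # visible inside the transaction...
    conn.rollback()         # ...and discarded afterwards

run_test_in_transaction(conn)
run_test_in_transaction(conn)   # second run: no unique-constraint clash
assert conn.execute("SELECT COUNT(*) FROM users").fetchone()[0] == 0
```

In a real suite this begin/rollback bracket would live in the automation layer, as the next note advises, not in the specifications themselves.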

keep transaction control outside the specifications. Database transaction control is a crosscutting concern that’s best implemented in the automation layer, not in the description of executable specifications.

Run quick checks on reference data before executing the workflow

Asynchronous processing has been a real headache for us. We do a lot of background processing that is asynchronous for performance reasons, and we ran into a lot of problems because the tests work instantly. The background processing hadn’t happened by the time the test moved to the next step.

A common mistake with asynchronous systems is to wait a fixed time for something to happen. A symptom of this is a test step such as “Wait 10 seconds.” If a test unconditionally waits one minute for a process that actually finishes in 10 seconds, we delay the feedback unnecessarily by 50 seconds. Small delays might not be an issue for individual tests, but they accumulate in test packs. With 20 such tests, we delay the feedback for the entire test pack by more than 15 minutes, which makes a lot of difference.

Wait for an event to happen, not for a set period of time to elapse. This will make tests much more reliable and not delay the feedback any longer than required.
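Waiting for an event rather than a fixed period usually means polling a completion predicate with a deadline: the test returns as soon as the condition holds and only consumes the full timeout in the failure case. A minimal sketch, with the simulated job timing invented for illustration:

```python
import time

def wait_until(predicate, timeout=5.0, interval=0.05):
    """Poll `predicate` until it's true or `timeout` seconds elapse."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if predicate():
            return True
        time.sleep(interval)
    return predicate()   # one last check at the deadline

start = time.monotonic()
finished_at = start + 0.2   # simulated async job finishing after 200 ms
done = wait_until(lambda: time.monotonic() >= finished_at, timeout=5.0)
assert done
assert time.monotonic() - start < 5.0   # far less than the full timeout
```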

If you turn off asynchronous processing for functional tests, remember to write additional technical tests to verify that asynchronous processing works. Although this might sound like doubling the work, it isn’t.

Don’t test too many things at the same time with one big executable specification.

Clearly separate the business logic from the infrastructure code (for example, pushing to a queue, writing to the database).

By isolating complicated business logic tests from the infrastructure, we improve reliability.

Introducing business time as a concept also solves the problem of expiring data. For example, tests that use expiring contracts might work nicely for six months and then suddenly start failing when the contracts expire. If the system supports business time changes, we could ensure that those contracts never expire and reduce the cost of maintenance.
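Business time can be sketched as an injectable clock that the system consults instead of the real date, so test data written once stays valid forever. The names below are illustrative, not from the book.

```python
from datetime import date

class FixedClock:
    """Test clock pinned to a chosen business date."""
    def __init__(self, today: date):
        self._today = today

    def today(self) -> date:
        return self._today

def contract_expired(expiry: date, clock) -> bool:
    # The system asks the injected clock, never the real calendar.
    return clock.today() > expiry

clock = FixedClock(date(2011, 6, 1))       # frozen business date
contract_expiry = date(2011, 12, 31)       # test data written once
assert contract_expired(contract_expiry, clock) is False   # never expires
```

In production the same interface is backed by the real calendar; only the tests pin it.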

Instead of a big set of executable specifications that takes six hours to run, I’d rather have 12 smaller sets that each take no longer than 30 minutes. Generally, I break those apart according to functional areas.

Avoid using in-memory databases for testing When: Data-driven systems

Mixing end-to-end integration tests and functional acceptance tests might not be the best idea (as mentioned earlier in this chapter), but if you really want to do that, then use the real database and look for ways to speed it up.

Many teams organized their executable specifications into two or three groups based on the speed of execution. At RainStor, for example, they have to run some of the tests with very large data sets to check the system performance. They have a functional pack that runs after every build and takes less than one hour. They also run customer scenarios overnight, with realistic data obtained from customers, and long-running suites every weekend.

A big problem with delayed execution of slow tests is that any issues in those tests won’t get discovered and fixed quickly. If an overnight test pack fails, we find out about that in the morning, try to fix it, and get the results the next morning. Such slow feedback can make the overnight build break frequently, which can mask additional problems introduced during the day.

A good idea to keep the overnight packs stable is to move any failing tests into a separate test pack.

Only add a test to the overnight pack once it has been stable for a long time.

When the current iteration pack is split from the rest of the tests, we can safely include in it even the tests for the functionality that was planned but not yet implemented.

• Fail pack (tests that fail and need clarification)
• Iteration pack (might fail frequently)
• Fast pack (should run in under an hour)
• Slow pack (executed periodically, on demand)
• Overnight pack (stable but slow)

If some tests have to run in isolation and can’t be executed in parallel, it’s worth splitting them out into a separate pack as well.

The commit build ran all unit tests and static analysis within 3 minutes. If that passed, sequential boxes executed the tests that couldn’t run in parallel; then 23 virtual machines ran acceptance tests in parallel, and after that a performance test kicked off. The specifications typically ran in between 8 and 20 minutes. At the end, if enough of the tests passed, the QA instance was available to deploy to, to run smoke tests and exploratory tests and provide feedback to development. If the commit build failed, everyone had to stop (a complete embargo) and fix it.

Some continuous build systems, such as TeamCity, now offer test execution on EC2 as a standard feature. This makes it even easier to use computing clouds for continuous validation. There are also emerging services that offer automation through cloud services, such as Sauce Labs, which might be worth investigating.

If a test spends a long time in the fail pack, that might be an argument to drop the related functionality because nobody cares about it.

Some teams don’t move failing tests into a separate pack, but they disable them so that a known failing test won’t break the overall validation. The problem with this approach is that it’s easy to forget about such disabled tests.

People were turning tests off because we needed a decision, or because we were writing a new test and weren’t sure how an old test fit in. There were conversations that never got followed up on, or people just forgot to turn the test back on. Sometimes tests were turned off because people were still working on the functionality and there was no code behind it yet.

Don’t use many small specifications to describe a single feature. If someone has to read 10 different specifications to understand how a feature works, it’s time to think about reorganizing the documentation.

In the course of adding functionality to the system, we sometimes end up with similar specifications that have only minor differences. Step back and look at what your specifications describe from a higher level of abstraction. Once we’ve identified a higher-level concept, a whole set of specifications can typically be replaced with a single specification that focuses only on the attributes that are different.
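That consolidation can be sketched as a single table-driven specification: three near-identical specs (one per tax band, in this invented example) collapse into one table that varies only the attribute that actually differs.

```python
def price_with_tax(net: float, band: str) -> float:
    # Hypothetical pricing rule; rates are illustrative.
    rates = {"standard": 0.20, "reduced": 0.05, "zero": 0.0}
    return round(net * (1 + rates[band]), 2)

# One specification, one row per meaningful difference:
examples = [
    # band        net     expected gross
    ("standard", 100.0, 120.0),
    ("reduced",  100.0, 105.0),
    ("zero",     100.0, 100.0),
]
for band, net, expected in examples:
    assert price_with_tax(net, band) == expected
```

The shared setup and workflow appear once; only the differing attributes remain visible, which is exactly what makes the higher-level concept easy to review.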

Avoid using technical automation concepts in tests
When: Stakeholders aren’t technical

One of the biggest challenges for many teams was keeping the structure and the language of their living documentation consistent.

Evolving a language for the living documentation system is a great way to create and maintain the ubiquitous language.

Some teams described user stories through personas, especially when developing websites. In those cases, the specification language can come from the activities that different personas can perform.

Personas helped us because they made us think about how the system needs to behave from the perspective of a user. There were a bunch of positive side effects of using personas that we didn’t anticipate—for example, personas became test helper components that were able to interact with our system at a more appropriate entry point. (Not applicable every time, especially if it’s a batch processing system.)

Building a domain language that was consistent and bound well to the automation was impossible without guidance. Testers wrote things that developers had to rephrase. Sometimes this was because the way testers were writing them down was unclear or not easy to automate. When a tester had already written a lot, we had to rephrase a lot of things. If we tried to automate the first example immediately, we would notice right away that it was not easy to do. It’s like pair programming compared to doing code reviews afterwards: in pair programming, your pair tells you immediately if they think you are doing something wrong. In a review, you say: Yes, next time I’ll do it differently, but this time let’s leave it like this.

Some teams have built a separate documentation area for their building blocks. At Iowa Student Loan, they have a page with all the personas. It doesn’t have any assertions, but instead shows which specification building blocks are already available. The page is built from the underlying automation code, creating a living dictionary of the living documentation.
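Because the dictionary page is generated from the automation code itself, it can never drift out of date. A rough sketch of the idea — the step names, docstrings, and the `given_` naming convention are all illustrative, not taken from the Iowa Student Loan implementation:

```python
import inspect

# Two example building blocks; the docstring is the human-readable description.
def given_a_vip_customer():
    """A customer enrolled in the VIP program."""

def given_a_cart_with_books(count):
    """A shopping cart holding the given number of books."""

def living_dictionary(namespace):
    """Build a 'living dictionary' page by listing every 'given_' building
    block found in the namespace, paired with its docstring description."""
    entries = []
    for name, obj in sorted(namespace.items()):
        if name.startswith("given_") and inspect.isfunction(obj):
            entries.append(f"{name}: {inspect.getdoc(obj)}")
    return entries
```

Calling `living_dictionary(globals())` after loading the automation modules yields an always-current list of available building blocks, with no assertions attached.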

If we have to spend hours trying to piece together the big picture from hundreds of seemingly unrelated files every time we want to understand how something works, we might as well read the programming language code. To get the most out of living documentation, information has to be easy to find.

Reorganize stories by functional areas

Organize along UI navigation routes (my vote: maybe integrate it on the page)
When: Documenting user interfaces
This approach is intuitive for systems with clearly defined navigational routes, such as back-office applications. But it might cause maintenance problems if the UI navigational routes change often.

Organize along business processes
When: End-to-end use case traceability required

Use tags instead of URLs when referring to executable specifications
When: You need traceability of specifications
Avoid referring to a particular specification in the living documentation system directly, because that prevents you from reorganizing the documentation later. Metadata, tags, or keywords that you can dynamically search for are much better for external links.
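The idea can be shown with a toy in-memory index: external documents link to a tag query rather than to a file path, so specifications can be moved or renamed freely. The spec titles and tag names below are invented for illustration:

```python
# A tiny tag index; in practice the metadata would live in the spec files
# themselves and be indexed by the living documentation tool.
SPECS = [
    {"title": "Free delivery for VIP customers", "tags": {"delivery", "vip", "loyalty"}},
    {"title": "VIP program registration",        "tags": {"vip", "registration"}},
    {"title": "Special-offer notifications",     "tags": {"loyalty", "marketing"}},
]

def find_by_tag(tag):
    """Resolve an external link dynamically: return every specification
    carrying the tag, instead of hard-coding a URL to one file."""
    return [spec["title"] for spec in SPECS if tag in spec["tags"]]
```

A link that searches for the `vip` tag keeps working no matter how the two VIP specifications are later reorganized, whereas a direct URL to either file would break.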

If a test is too complicated, it’s telling you something about the system. Workflow testing was very painful. There was too much going on, and the tests were very complicated. Developers started asking why the tests were so complicated. It turned out that the workflows were overcomplicated because each department didn’t really know what the others were doing. Tests helped because they put everything together, so people could see that another department was doing validations and handling errors as well. The whole thing got reduced to something much simpler.

Markus Gärtner points out that long setups signal bad API design: When you notice a long setup, think about the user of your API and the things you’re creating. It will become someone’s job to deal with your complicated API. Do you really want that?

Cooper suggested looking at living documentation as an alternative user interface to the system. If this interface is hard to write and maintain, the real user interface will also be hard to write and maintain.

Business goal: Increase repeat sales to existing customers by 50% over the next 12 months.

User stories for a basic loyalty system
• In order to be able to do direct marketing of products to existing customers, as a marketing manager I want customers to register personal details by joining a VIP program.
• In order to entice existing customers to register for the VIP program, as a marketing manager I want the system to offer free delivery on certain items to VIP customers.
• In order to save money, as an existing customer I want to receive information on available special offers.

Key examples: Free delivery
• VIP customer with five books in the cart gets free delivery.
• VIP customer with four books in the cart doesn’t get free delivery.
• Regular customer with five books in the cart doesn’t get free delivery.
• VIP customer with five washing machines in the cart doesn’t get free delivery.
• VIP customer with five books and a washing machine in the cart doesn’t get free delivery.
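Taken together, these five key examples pin down one rule: the customer must be a VIP, the cart must hold at least five items, and every item must be a book. A minimal sketch of that rule (the function name and cart representation are illustrative), with each key example checked against it:

```python
def free_delivery(is_vip, cart):
    """Free delivery rule reconstructed from the key examples:
    VIP customer, at least five items, and books only — a single
    washing machine disqualifies the whole cart."""
    return (
        is_vip
        and len(cart) >= 5
        and all(item == "book" for item in cart)
    )

# Each of the five key examples, verified against the rule:
assert free_delivery(True, ["book"] * 5)                             # five books, VIP
assert not free_delivery(True, ["book"] * 4)                         # only four books
assert not free_delivery(False, ["book"] * 5)                        # regular customer
assert not free_delivery(True, ["washing machine"] * 5)              # no books at all
assert not free_delivery(True, ["book"] * 5 + ["washing machine"])   # mixed cart
```

Note how small the rule is once the examples are concise: each example varies exactly one attribute, so each line of the implementation is justified by one example.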

Barbara: The next thing on the list is free delivery. We have arranged a deal with Manning to offer free delivery on their books. The basic example is this: if a user purchases a Manning book, say Specification by Example, the shopping cart will offer free delivery. Any questions?
David, a developer, spots a potential functional gap. He asks: Is this free delivery to anywhere? What if a customer lives on an island off South America? That free delivery will cost us much more than we earn from the books.
Barbara: No, this isn’t worldwide, just domestic.
Tessa, a tester, asks for another example. She says: The first thing I’d check when this comes in for testing is that we don’t offer free delivery for all books. Can we add one more case to show that the free delivery is offered only for Manning books?
Barbara: Sure. For example, Agile Testing was published by Addison-Wesley. If a user buys that, the shopping cart won’t offer free delivery. I think this is relatively simple; there isn’t a lot more to it. Can anyone think of any other example? Can we play around with the data to make it invalid?
David: There aren’t any numerical boundary conditions, but we could play with the list in the shopping cart. For example, what happens if I buy both Agile Testing and Specification by Example?
Barbara: You get free delivery for both books. As long as a Manning book is in the shopping cart, you get free delivery.
David: I see. But what if I buy Specification by Example and a fridge? That delivery would be much more expensive than our earnings from the book.
Barbara: That might be a problem. I didn’t talk about that with Owen. I’ll have to get back to you on this. Any other concerns?
David: Not apart from that.
Barbara: OK. Do we have enough information to start working, apart from the fridge problem?
David and Tessa: Yes.
Barbara: Great. I’ll get back to you on that fridge problem early next week.

A specification should list only the key representative examples. This will help to keep the specification short and easy to understand. The key examples typically include the following: • A representative example illustrating each important aspect of business functionality. Business users, analysts, or customers will typically define these. • An example illustrating each important technical edge case, such as technical boundary conditions. Developers will typically suggest such examples when they’re concerned about functional gaps or inconsistencies. Business users, analysts, or customers will define the correct expected behavior. • An example illustrating each particularly troublesome area of the expected implementation, such as cases that caused bugs in the past and boundary conditions that might not be explicitly illustrated by previous examples. Testers will typically suggest these, and business users, analysts, or customers will define the correct behavior.

• Business rule level—What is this test demonstrating or exercising? For example: Free delivery is offered to customers who order two or more books.
• User workflow level—How can a user exercise the functionality through the UI, on a higher activity level? For example: Put two books in a shopping cart, enter address details, and verify that delivery options include free delivery.
• Technical activity level—What are the technical steps required to exercise individual workflow steps? For example: Open the shop home page, log in with “testuser” and “testpassword,” go to the “/book” page, click the first image with the “book” CSS class, wait for the page to load, click the Buy Now link, and so on.
Specifications should be described at the business rule level. The automation layer should handle the workflow level by combining blocks composed at the technical activity level. Such tests will be easy to understand, efficient to write, and relatively inexpensive to maintain.
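The three levels can be sketched as layers of code: the technical activities at the bottom, a workflow class composing them, and a test that reads purely at the business rule level. This is a simplified sketch — the browser is stubbed in memory, and all class and method names are invented, not from the book:

```python
# Technical activity level (stubbed; a real suite would drive a browser or API).
class FakeBrowser:
    def __init__(self):
        self.cart = []

    def click_add_book(self):
        self.cart.append("book")

# Workflow level: composes technical activities into user-meaningful actions.
class Shop:
    def __init__(self, browser):
        self.browser = browser
        self.address = None

    def put_books_in_cart(self, count):
        for _ in range(count):
            self.browser.click_add_book()

    def enter_address(self, address):
        self.address = address

    def delivery_options(self):
        # Business rule from the example above: two or more books earn free delivery.
        options = ["standard"]
        if len(self.browser.cart) >= 2:
            options.append("free delivery")
        return options

# Business rule level: the specification reads at this level only.
def test_free_delivery_for_two_or_more_books():
    shop = Shop(FakeBrowser())
    shop.put_books_in_cart(2)
    shop.enter_address("221B Baker Street")
    assert "free delivery" in shop.delivery_options()
```

If the UI changes, only the technical layer is touched; if a workflow changes, only the middle layer; the business-rule-level specification stays stable.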

“Yeah, we’re always talking about following your passion, but we’re all part of the flow of history … you’ve got to put something back into the flow of history that’s going to help your community, help other people … so that 20, 30, 40 years from now … people will say, this person didn’t just have a passion, he cared about making something that other people could benefit from.” -Steve Jobs
