
OSCARS v6 Testing


We use this document to share testing ideas; describe the testing environment, approach, requirements and scenarios; track testing issues; and keep records of test reports.



Testing Environment

The current OSCARS 0.6 testing environment runs on three 32-bit (i386) CentOS 5.5 Linux virtual machines, each with 4GB memory and 40GB storage space. The VM hosts are odev-vm-16, 17 and 18.es.net. On each host, OSCARS 0.6 is installed under the root context with the following settings:

  • $OSCARS_DIST=/home/rootxy/oscars-0.6
  • $OSCARS_HOME=/usr/local/oscars
  • JAVA_Version=1.6.0_22

A single-domain configuration is set up on odev-vm-18. The available topology files include the following:

  • testdomain[1..N].xml : simple test domain topology #1 .. #N
  • esnet-tedb.xml : production ESnet topology
  • internet2-ion-tedb.xml : production I2 ION topology
  • uslhcnet-tedb.xml : production USLHCnet topology

TODO: Multi-domain configuration


Testing Approach

To achieve good testing coverage for OSCARS v0.6, we follow a Requirement -> Scenario -> Run -> Report procedure.

1 Requirement

Every developer in the group can add a Test Requirement to this document. A test requirement covers a unique aspect or test case of the code. For example, we can have a requirement for "testing VLAN translation in a single-domain setting".

2 Scenario

A Test Scenario is designed to satisfy a testing requirement. Usually we derive multiple test scenarios from each requirement. For the above "testing VLAN translation in a single-domain setting" requirement, for instance, we can have the following scenarios:

  • specifictag-to-specifictag translation
  • anytag-to-specifictag translation
  • specifictag-to-anytag translation
  • anytag-to-anytag translation

These scenarios can be further split into finer ones with specific topology conditions. For example, we can have the condition "No common VLAN in path, translation enabled," which means the topology is configured so that the tested paths have no common (continuous) VLAN and VLAN translation is enabled on some links along the paths.

3 Run

"Test Run" describes the procedure to realize the test scenarios. At this point we will focus on System-Level testing and will describe how to run tests for both Intra-domain and Inter-domain settings.

System-Level Intra-domain Testing

The workflow is as follows:

1. Develop a "Test Topology File". We can use a really simple topology or a complicated production topology, depending on the testing scenario.

2. Configure the topology file with specific "Topology Conditions". Each specific topology condition corresponds to a testing scenario. Example "Topology Conditions" currently identified are:
  • Common VLAN in path, translation enabled
  • No common VLAN in path, translation enabled

3. Determine the running method based on the defined test scenario:
  • "clean start": Clean up initial states, including emptying the database.
  • "serial" vs. "simultaneous": The former sets up, modifies and tears down each circuit sequentially, so there are no simultaneous circuits in the system. The latter requests simultaneous circuits.
  • "random": Randomly pick the source, destination, start time and duration for simultaneous requests.
  • "saturate": Fill up the system according to predefined criteria, including bandwidth range, average duration, maximum number of circuits, etc.

4. Run the defined tests by sending reserve, modify and cancel requests to the system via the WBUI and/or the OSCARS API client, either manually or with automated scripts (see the sketch after this list). Every circuit is manually CANCELLED before it FINISHES so that running states and logging messages are also collected for the teardown process. For each test scenario, we run more than one test to assure accurate results.
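To illustrate step 4, here is a minimal sketch of a scripted serial run. The submit_request() and record_result() helpers are hypothetical stand-ins for the OSCARS API client invocation and the report table; only the circuit parameters are taken from the example report below.

```perl
#!/usr/bin/perl
# Minimal sketch of a scripted "serial" run: each circuit is reserved,
# modified and cancelled in turn. submit_request() and record_result()
# are hypothetical stand-ins, not part of the actual test suite.
use strict;
use warnings;

my @circuits = (
    { src => 'node1:ge-1/1/1', dst => 'node1:ge-1/1/2',
      bandwidth => 100, src_vlan => 2500, dst_vlan => 'any' },
    { src => 'node2:ge-1/1/1', dst => 'node2:ge-1/1/2',
      bandwidth => 100, src_vlan => 2500, dst_vlan => 'any' },
);

for my $c (@circuits) {
    # Serial execution: finish one circuit's lifecycle before the next.
    my $gri = submit_request(reserve => $c);                     # expect ACTIVE
    submit_request(modify => { gri => $gri, bandwidth => 50 });  # expect MODIFIED
    # Cancel before the scheduled end so teardown states are logged too.
    submit_request(cancel => { gri => $gri });                   # expect CANCELLED
    record_result($c, $gri);
}

# Stand-ins so the sketch runs; a real run would invoke the API client
# and parse its output for the GRI and resulting state.
sub submit_request { my ($op, $args) = @_; print "$op request sent\n"; return 'gri-0001'; }
sub record_result  { my ($c, $gri) = @_; }
```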

System-Level Inter-domain Testing

The procedure starts with setting up a multi-domain environment and developing a set of topology files for the domains. The remaining steps are similar to the intra-domain case. TODO: More details?

4 Report

At the end of the procedure we collect and present test results.

We record our testing results in a report table, as shown below.

| Code Tag/Version | | | | | Test Date | | | |
|:--|:--|:--|:--|:--|:--|:--|:--|:--|
| Topology File(s) | testdomain1-1.xml | | | | Topology Condition | common VLANs in path; translation enabled | | |
| Source | Bandwidth | VLAN | Destination | VLAN | Expected end states | Results | Notes | Reference |
| Test Scenario | specifictag-to-anytag : serial | | | | | | | |
| node1:ge-1/1/1 | 100 | 2500 | node1:ge-1/1/2 | any | Active/Modified/Cancelled | Success | | |
| ... | | | | | | | | |
| node2:ge-1/1/1 | 100 | 2500 | node2:ge-1/1/2 | any | Active/Modified/Cancelled | Setup failed | | 1 |
| ... | | | | | | | | |

  • Each table corresponds to a test topology file (single-domain tests) or a set of files (multi-domain tests).
  • Topology condition describes the specific configurations associated with the topology file(s).
  • The test circuit bandwidth, source and destination VLANs (specific or 'any') and expected states are determined before the test run. The end state can be ACTIVE/CANCELLED, indicating ACTIVE after setup and CANCELLED after teardown, or it can be an expected FAILED.
  • At run time, we collect end states from setup, modify and teardown processes. The states are recorded in the Results column.
  • We also watch for any failure, exception and error messages that show up in the log files under $OSCARS_HOME/logs/. A summary of these messages and other observations is recorded in the Notes column. Additional comments can be placed outside the table and referenced by the numbers marked in the Reference column.
  • The above table also shows how to present individual test results in rows that are grouped by test scenarios.

Testing Requirements

Baseline Tests

We identify the baseline test cases in the following table. In each case, a test requirement is presented and refers to one or more testing scenarios described in the next section. We can use this cross-reference to verify testing coverage for every testing requirement.

| Testing Requirement | Testing Scenarios |
|:--|:--|
| Single circuit can be set up, modified and torn down when resources are available (common VLAN and sufficient bandwidth). | 1.1 |
| Negative tests: no connectivity, no requested VLAN, insufficient bandwidth. | 4.1~4.3 |
| Oversubscribing: simultaneous requests to create negative results. | 4.1~4.3 5.1~5.3 |
| Testing both multi-hop paths and paths with only a single-node cross-connect. | 1.6 1.7 4.3 |
| Testing layer 2 and layer 3 services. | TBD |
| VLAN translation: translation on topologies with and without a common/continuous VLAN available. | 1.2~1.5 2.2~2.5 4.2 |
| Two-domain and three-domain testing for the above cases. | 3.1~3.3 5.1~5.3 |
| Test domains with different VLAN filters. | 3.1~3.3 5.1~5.3 |
| Test domains with different PSS configs. | 3.1~3.3 5.1~5.3 |
| Test with both WBUI and OSCARS API client (using scripts) for the above cases. | TBD |

Backward Compatibility Tests

| Testing Requirement | Testing Scenarios |
|:--|:--|
| v5 Client -- v6 Server | 6.1 6.2 |
| v6 Client -- v5 Server | 6.3 6.4 7.1 7.2 |
| v5 Server -- v6 Server (IDCP) | 6.1~6.4 7.1 7.2 |

Operation & Maintenance Tests

| Testing Requirement | Testing Scenarios |
|:--|:--|
| Recovery from individual module crash | 9.1 9.2 |
| Graceful shutdown and restart of the whole system | 9.3~9.6 |
| Change of topology with the system running | TBD |
| Cleanup of circuit and topology states upon setup/teardown failures | 9.5 9.6 |
| Impact from loss of upstream or downstream IDC in inter-domain provisioning | 9.5 9.6 |

Stress Tests

| Testing Requirement | Testing Scenarios |
|:--|:--|
| Excessive simultaneous requests | 4.1~4.3 5.1~5.3 8.1 8.2 |
| Evaluate response time and resource computation efficiency under heavy loads | 8.1 |
| Evaluate multi-domain IDCP handling under heavy request loads | 8.1 |
| Create a large number of history records in the database to evaluate system performance (not necessarily from simultaneous requests). | 8.2 |

Regression Tests

We can start with a select suite of test cases from the baseline tests, run by automated test scripts. Initially these may serve only as smoke tests. The list will be expanded as development of the automated testing scripts progresses.

The script can be run as a cron job to automatically update from the svn repository, compile the code and run the baseline tests.
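As a sketch only (the actual script lives in the repository), such a cron-driven driver could look like the following. The checkout path, Maven build command and log naming are assumptions; test_main.pl is the test driver described under Automated Testing Scripts below.

```perl
#!/usr/bin/perl
# Sketch of a nightly regression driver, e.g. run from cron as:
#   0 2 * * * perl /home/rootxy/oscars-0.6/run_regression.pl
# The script name, path, build command and log naming are illustrative.
use strict;
use warnings;
use POSIX qw(strftime);

my $workdir = '/home/rootxy/oscars-0.6';            # $OSCARS_DIST
chdir $workdir or die "cannot chdir to $workdir: $!";

# Update the working copy and rebuild before testing.
system('svn', 'update') == 0            or die "svn update failed\n";
system('mvn', 'clean', 'install') == 0  or die "build failed\n";

# Run the baseline suite and keep a dated log for the weekly report.
my $stamp = strftime('%Y%m%d', localtime);
system("perl test_main.pl > regression-$stamp.log 2>&1") == 0
    or warn "baseline tests reported failures; see regression-$stamp.log\n";
```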

Here is an initial list of suggested test cases for regression testing.

| Testing Requirement | Testing Scenarios |
|:--|:--|
| Single circuit can be set up, modified and torn down when resources are available (common VLANs and sufficient bandwidth). | 1.1 |
| Negative tests: no connectivity, no requested VLAN, insufficient bandwidth. | 4.1~4.3 |
| Testing both multi-hop paths and paths with only a single-node cross-connect. | 1.6 1.7 4.3 |
| VLAN translation: translation on topologies with and without a common/continuous VLAN available in path. | 1.2~1.5 2.2~2.5 4.2 |
| Two-domain testing with one domain having an EoMPLS network setting (mpls=1) and eomplsPSS and the other having an Ethernet network setting and dragonPSS (stub mode only). | 3.1~3.3 5.1~5.3 |

TODO: Add more requirements below


Testing Scenarios

Here we define the test scenarios to meet the above requirements. These are high-level descriptions. We can refer to detailed designs in separate wiki pages or attached documents.

Topology Condition (1)

Single domain with EoMPLS network setting; common VLANs in path with sufficient bandwidth; L2SC edge links with VLAN translation enabled and PSC trunk links.

  • Test Scenario (1.1): specific_vlan_tag-to-specific_vlan_tag : serial-execution : no-translation
  • Test Scenario (1.2): specific_vlan_tag-to-specific_vlan_tag : serial-execution : translation (src_tag!=dst_tag)
  • Test Scenario (1.3): any_vlan_tag-to-any_vlan_tag : serial-execution
  • Test Scenario (1.4): specific_vlan_tag-to-any_vlan_tag : serial-execution
  • Test Scenario (1.5): any_vlan_tag-to-specific_vlan_tag : serial-execution
  • Test Scenario (1.6): specific_vlan_tag-to-specific_vlan_tag : serial-execution : single-node-path
  • Test Scenario (1.7): any_vlan_tag-to-any_vlan_tag : serial-execution : single-node-path
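Each scenario label above is a colon-separated list of attributes: the VLAN tag pattern at the two endpoints, the execution mode, and optional modifiers such as translation or path shape. As a purely illustrative sketch, a scenario could be encoded for the automated scripts as a small data structure; the field names here are hypothetical, not the actual fields of the test suite's Perl modules.

```perl
# Hypothetical encoding of Test Scenario (1.1); the keys are illustrative.
my %scenario_1_1 = (
    condition   => 1,            # Topology Condition (1)
    src_vlan    => 'specific',   # fixed tag requested at the source
    dst_vlan    => 'specific',   # fixed tag requested at the destination
    execution   => 'serial',     # one circuit at a time
    translation => 0,            # src_tag == dst_tag, no VLAN translation
);
```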

Topology Condition (2)

Single domain with EoMPLS network setting; no common VLANs in path, with sufficient bandwidth; L2SC edge links with VLAN translation enabled and PSC trunk links.

  • Test Scenario (2.1): specific_vlan_tag-to-specific_vlan_tag : serial-execution : no-translation
  • Test Scenario (2.2): specific_vlan_tag-to-specific_vlan_tag : serial-execution : translation (src_tag!=dst_tag)
  • Test Scenario (2.3): any_vlan_tag-to-any_vlan_tag : serial-execution
  • Test Scenario (2.4): specific_vlan_tag-to-any_vlan_tag : serial-execution
  • Test Scenario (2.5): any_vlan_tag-to-specific_vlan_tag : serial-execution

Topology Condition (3)

Two peering domains, one with EoMPLS network setting + eomplsPSS and the other with Ethernet network setting + dragonPSS; common VLAN in path with sufficient bandwidth on both intra- and inter-domain links; L2SC edge links with VLAN translation enabled and PSC trunk links.

  • Test Scenario (3.1): specific_vlan_tag-to-specific_vlan_tag : serial-execution : no-translation : inter-domain
  • Test Scenario (3.2): specific_vlan_tag-to-specific_vlan_tag : serial-execution : translation (src_tag!=dst_tag) : inter-domain
  • Test Scenario (3.3): any_vlan_tag-to-any_vlan_tag : serial-execution : inter-domain
  • Test Scenario (3.4): specific_vlan_tag-to-any_vlan_tag : serial-execution : inter-domain
  • Test Scenario (3.5): any_vlan_tag-to-specific_vlan_tag : serial-execution : inter-domain
  • Test Scenario (3.6): specific_vlan_tag-to-specific_vlan_tag : serial-execution : inter-domain : single-node-path-in-both-domains

Topology Condition (4)

Single domain with EoMPLS network setting; bottleneck link in path with very small bandwidth and a very limited number of VLANs; L2SC edge links with VLAN translation enabled and PSC trunk links.

  • Test Scenario (4.1): specific_vlan_tag-to-specific_vlan_tag : simultaneous-execution-to-saturate : mixed translation and no-translation
  • Test Scenario (4.2): mixed specific_vlan_tag-to-specific_vlan_tag and any_vlan_tag-to-any_vlan_tag : simultaneous-execution-to-saturate : no-translation
  • Test Scenario (4.3): specific_vlan_tag-to-specific_vlan_tag : simultaneous-execution-to-saturate : no-translation : single-node-path with one end on the bottleneck link

Topology Condition (5)

Three peering domains in a linear topology, two with EoMPLS network setting + eomplsPSS and one with Ethernet network setting + dragonPSS; bottleneck intra- and inter-domain links; L2SC edge links with VLAN translation enabled and PSC trunk links.

  • Test Scenario (5.1): specific_vlan_tag-to-specific_vlan_tag : simultaneous-execution-to-saturate : mixed translation and no-translation : inter-domain path
  • Test Scenario (5.2): mixed specific_vlan_tag-to-specific_vlan_tag and any_vlan_tag-to-any_vlan_tag : simultaneous-execution-to-saturate : no-translation : inter-domain path
  • Test Scenario (5.3): specific_vlan_tag-to-specific_vlan_tag : simultaneous-execution-to-saturate : no-translation : single-hop-path-on-bottleneck-interdomain-link

Topology Condition (6)

Two peering domains, one with v0.6 + EoMPLS network setting + eomplsPSS and the other with v0.5.3 + Ethernet network setting + dragonPSS; common VLAN in path with sufficient bandwidth on both intra- and inter-domain links; L2SC edge links with VLAN translation enabled and PSC trunk links.

  • Test Scenario (6.1): specific_vlan_tag-to-specific_vlan_tag : v0.5-api-client-at-v0.6-domain : serial-execution : translation and no-translation : inter-domain path
  • Test Scenario (6.2): any_vlan_tag-to-any_vlan_tag : v0.5-api-client-at-v0.6-domain : serial-execution : translation and no-translation : inter-domain path
  • Test Scenario (6.3): specific_vlan_tag-to-specific_vlan_tag : v0.6-api-client-at-v0.5.3-domain : serial-execution : translation and no-translation : inter-domain path
  • Test Scenario (6.4): any_vlan_tag-to-any_vlan_tag : v0.6-api-client-at-v0.5.3-domain : serial-execution : translation and no-translation : inter-domain path

Topology Condition (7)

Two peering domains, one with v0.6 + EoMPLS network setting + eomplsPSS and the other with v0.5.3 + Ethernet network setting + dragonPSS; common VLAN in path with sufficient bandwidth on both intra- and inter-domain links; L2SC edge links with VLAN translation enabled and PSC trunk links.

  • Test Scenario (7.1): specific_vlan_tag-to-specific_vlan_tag : v0.6-api-client-at-v0.5.3-domain : serial-execution : translation and no-translation : inter-domain path
  • Test Scenario (7.2): any_vlan_tag-to-any_vlan_tag : v0.6-api-client-at-v0.5.3-domain : serial-execution : translation and no-translation : inter-domain path

Topology Condition (8)

Three peering domains in a linear topology, two configured with EoMPLS network setting + eomplsPSS and one with v0.5.3 + Ethernet network setting + dragonPSS; common VLAN in path with sufficient bandwidth on both intra- and inter-domain links; L2SC edge links with VLAN translation enabled and PSC trunk links.

  • Test Scenario (8.1): mixed specific_vlan_tag-to-specific_vlan_tag and any_vlan_tag-to-any_vlan_tag : simultaneous-execution-to-saturate : translation and no-translation : intra-domain and inter-domain paths : continue-random-execution-and-monitor $
  • Test Scenario (8.2): mixed specific_vlan_tag-to-specific_vlan_tag and any_vlan_tag-to-any_vlan_tag : simultaneous-execution-to-saturate : translation and no-translation : intra-domain and inter-domain paths : schedule-for-ten-thousand-circuits $$ : continue-random-execution-and-monitor $

$ The circuits saturate the network; provisioning then continues, trying to add more circuits. Meanwhile, monitor the response time at the API client and the time of state transitions for these events: ACCEPTED, INCREATE, INSETUP and ACTIVE.

$$ Use advance scheduling to add a sufficient number of circuits with relatively short durations (<15 mins) to the system.

Topology Condition (9)

Three peering domains (1, 2 and 3) in a linear topology, two configured with EoMPLS network setting + eomplsPSS and one with v0.5.3 + Ethernet network setting + dragonPSS; common VLAN in path with sufficient bandwidth on both intra- and inter-domain links; L2SC edge links with VLAN translation enabled and PSC trunk links.

  • Test Scenario (9.1): specific_vlan_tag-to-specific_vlan_tag : serial-execution : no-translation : intra-domain path : crash-individual-module-during-active &
  • Test Scenario (9.2): specific_vlan_tag-to-specific_vlan_tag : serial-execution : no-translation : intra-domain path : crash-individual-module-during-setup &
  • Test Scenario (9.3): specific_vlan_tag-to-specific_vlan_tag : serial-execution : no-translation : intra-domain path : restart-oscars-during-active
  • Test Scenario (9.4): specific_vlan_tag-to-specific_vlan_tag : serial-execution : no-translation : intra-domain path : restart-oscars-during-setup
  • Test Scenario (9.5): specific_vlan_tag-to-specific_vlan_tag : serial-execution : no-translation : inter-domain path : restart-oscars-during-active &&
  • Test Scenario (9.6): specific_vlan_tag-to-specific_vlan_tag : serial-execution : no-translation : inter-domain path : restart-oscars-during-setup &&

& Restart individual modules, including topoBridge, RM, PCE, authZ, authN, lookupService and WBUI (not the Coordinator).

&& Restart the IDC, covering the source, middle and destination domain cases. Shutting down OSCARS during setup, after the PSS has set up a circuit but before setup is confirmed by the peer domains, creates a teardown-upon-failure case.

TODO: Add more test scenario descriptions below


Automated Testing Scripts

The test suite is in the subversion repository. It consists of a test driver, test_main.pl, and Perl modules in the Lib subdirectory, one for each of the Topology Conditions. In addition to the predefined tests, there is a template file, SimpleTest.pm, that can be copied and used to define new tests.

Tests are defined by choosing the type of test to run (single reservation test, batch reservation test) and adding the source and destination URNs.
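Since SimpleTest.pm is a template, defining a new test amounts to copying it and filling in the test type and the endpoint URNs. Below is a minimal sketch of what such a module might contain; the package layout, field names and run() hook are assumptions for illustration, not the actual template interface.

```perl
package Lib::MyNewTest;
# Hypothetical test module modeled on the SimpleTest.pm template;
# the constructor fields and run() hook are illustrative assumptions.
use strict;
use warnings;

sub new {
    my ($class) = @_;
    my $self = {
        type      => 'single',   # 'single' or 'batch' reservation test
        src_urn   => 'urn:ogf:network:domain=testdomain-1.net:node=node-1:port=port-1:link=link-1',
        dst_urn   => 'urn:ogf:network:domain=testdomain-1.net:node=node-2:port=port-1:link=link-1',
        bandwidth => 100,        # Mbps
        vlan      => 'any',
    };
    return bless $self, $class;
}

sub run {
    my ($self) = @_;
    # test_main.pl would call this to submit the reservation(s) and
    # compare the resulting states against the expected end states.
}

1;
```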

Tests are run on two testbeds. The odev testbed consists of:

| Host | OSCARS Version | Topology File | Domain |
|:--|:--|:--|:--|
| odev-vm-16.es.net | v0.6 | | testdomain-1.net |
| odev-vm-17.es.net | v0.6 | | testdomain-2.net |
| odev-vm-18.es.net | v0.6 | | testdomain-3.net |
| odev-vm-25.es.net | v0.5.4 | tedb-inter.xml | testdomain-4.net |

Inter-version testing is done on the I2 testbed, which consists of:

| Host | OSCARS Version | Topology File | Domain |
|:--|:--|:--|:--|
| idcdev6 | v0.6 | | testdomain-2.net |
| idcdev5 | v0.5 | tedb-inter.xml | testdomain-4.net |

These testbeds are updated regularly to provide quick turnaround as the subversion repository is updated and issues are fixed and verified. The testing revisions are tagged and can be found here.

More information on the testing topologies is listed below.


Testing Issues

This section references the Issues assigned to developers/testers to request testing input for requirements and to develop testing-related tools and features. These Issues should be labelled "OSCARS-V6-Testing".

The following issues were identified through manual and automated testing.

Issue 143: Assign review task to request inputs for testing requirements from developers.

Issue 144: Assign feature task to Nils to develop automated testing scripts.

Issue 145: V0.6 not backward compatible with V5.3 topology files.

Issue 146: V0.6 not backward compatible with V5.3 topology files

Issue 155: OSCARS memory usage

Issue 156: Template files need to get copied during clean install

Issue 157: Multidomain reservations fail

Issue 159: Crash during stress test

Issue 162: GUI Reservation Error

Issue 169: Updates to bandwidth parameters in topology file do not get picked up by Oscars.

Issue 170: idc-useradd failure

Issue 171: EoMPLS PSS cannot call coordinator

Issue 173: StubPSS won't start after update to revision 9451

Issue 178: Reservation scheduler scan frequency.

Issue 181: OSCARSService (api) will not stay running

Issue 186: Start time on web interface should be synchronized with scanInterval in ResourceManager/conf/config.yaml

Issue 187: Reservations not ending on time

Issue 189: Install fails when run without -DskipTests

Issue 190: Error on Interdomain connect

Issue 194: Provide Example Circuits for Testers and Users to Run

The set of use cases is posted on the oscars-idc wiki and can be found here

Issue 205: Wrong port displayed when wsnbroker starts

Issue 206: Error connecting to 06 to 05

Issue 208: NPE in queryReservation from 05 to 06

Issue 209: Modify reservation GUI error.

Issue 210: Reservations Tab Error when All Status Selected

Issue 211: values in vlan text box on reservations tab cause no results to be displayed

Issue 212: User field on Reservations tab has no effect

Issue 213: PRODUCTION Mode, non root user, testServers.sh reports Lookup service as not running when it is running

Issue 214: Exception when making reservation in PRODUCTION mode when OSCARS run as non root user

Issue 215: Log files duplicated

Issue 216: PRODUCTION context, config file problem, 'no public block'

Issue 219: vm-25 cannot access wsnotify port 9013 on vm-17

Issue 220: Trunk: Config files missing a couple lines

Issue 222: idc-wbuiaccess script hangs

Issue 223: NPE making intra-domain circuit.

Issue 224: Error message from layer 3 reservation request

Issue 226: Need a way to close 'stuck' reservations

Issue 227: Vlan field on GUI Reservations tab not working.

Issue 228: Link ids widget on GUI reservations tab not working

Issue 230: oscars-idclist not working in production mode

Issue 231: idc-localdomainview script error

Issue 233: testServers.sh gives wrong message for restarting NotificationBridgeService

Issue 235: Install fails looking for wrong artifacts (0.0.1-SNAPSHOT jars instead of rel-06-20110603 jars)

Issue 237: Tests for Topology Conditions 6, 7 and 8 depend on v0.6/v0.5 interoperability

Issue 238: Production and SDK testing

Issue 239: Reservation tab link id list functionality not working

Issue 242: Services not starting after revision 9690

Issue 249: Unable to make multi domain reservation in PRODUCTION mode

Issue 250: WSNBrokerService not starting in PRODUCTION mode

Issue 251: Override Status button security issue

Issue 252: For consideration: User can set the x509 subject and issuer to arbitrary value.

Issue 254: Large Database Problems

Issue 255: createRes.sh failing in production mode (needed for automated testing)

Issue 257: Inter version bandwidth displayed with wrong units

Issue 268: Interdomain reservations get "stuck" when sending multiple concurrent reservations

Issue 270: Multi node interdomain modify fails on localhost or fails to affect remote IDC

Issue 272: Problem reading v0.5 topology files

TODO: Add more issue references below


Testing Reports

Here we keep records of test results from running the test scenarios defined above. These are indexed by testing date, a summary of the topology file(s) and condition, and the requirement/scenario names. We can refer to detailed reports in separate wiki pages or attached spreadsheets.

Additional Testing

As OSCARS features are developed, new tests are run to monitor changes. As testing has progressed, an OscarsFAQ has been created.

A set of Use Cases has been developed for testing GUI functionality.

Some manual tests are done to help find bugs that are not documented in the weekly test results below:

  • Malformed user input
  • Privilege escalation

Simulator

The simulator is designed to mimic human interaction and to stress the IDC. Each local port in a given topology attempts to make a circuit with another local port in that topology. The reservations are spaced pseudorandomly an adjustable number of minutes apart, and each reservation lasts a pseudorandom, tester-adjustable number of minutes. The source ports are chosen pseudorandomly, and this process is repeated n times. This is what is used for the stress tests; a sketch of the request loop follows.
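The sketch below assumes a list of local ports and a submit_reservation() wrapper around the API client; both helpers are stand-ins, not the simulator's actual code.

```perl
#!/usr/bin/perl
# Sketch of the simulator's request loop. load_local_ports() and
# submit_reservation() are stand-ins for the topology parse and the
# OSCARS API client call; the bounds are tester-adjustable.
use strict;
use warnings;

my @ports       = load_local_ports();
my $iterations  = 50;   # repeat the process n times
my $max_gap_min = 5;    # max minutes between requests
my $max_dur_min = 15;   # max reservation duration in minutes

for (1 .. $iterations) {
    # Pick a pseudorandom source and a different destination port.
    my $src = $ports[ int rand @ports ];
    my $dst = $src;
    $dst = $ports[ int rand @ports ] while $dst eq $src;

    my $duration = 1 + int rand $max_dur_min;   # minutes
    submit_reservation($src, $dst, $duration);

    # Space the requests a pseudorandom number of minutes apart.
    sleep 60 * (1 + int rand $max_gap_min);
}

# Stand-ins so the sketch runs on its own.
sub load_local_ports   { return map { "port-$_" } 1 .. 8 }
sub submit_reservation { my ($s, $d, $m) = @_; print "reserve $s -> $d for $m min\n" }
```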

Stress Tests

The stress tests create mixed translation and no-translation tests that are both local and across multiple domains simultaneously. Some of the circuits are made with limited resources. These characteristics help satisfy the requirements of topology conditions 4, 5 and eventually 8, when that functionality is available. These tests are done in addition to the testing scenarios. The results of these tests are currently reviewed manually.

Version Interoperability Tests

Each topology has three intra-domain nodes and one extra-domain node. These are used to specify multi-hop routes between the v0.6 and v0.5.4 IDCs. Interoperability testing is done on the I2 testbed.

  • Test that v0.6 reads v0.5 topology files correctly (Issue 272: Problem reading v0.5 topology files).
  • Topology files used in testing: tedb-inter_compat_1.xml, tedb-inter_compat_2.xml

ION Tests

ion-test-checklist-20120622.txt

Example Client Tests

The source tree contains two example clients, SimpleOSCARSClient and oscars-client.

  • TBD

Weekly Reports

oscarsv6-testing-results-20120111.xlsx

oscarsv6-testing-results-20111213.xlsx

oscarsv6-testing-results-20111207.xlsx

oscarsv6-testing-results-20111128.xlsx

oscarsv6-testing-results-20111115.xlsx

oscarsv6-testing-results-20111107.xlsx

oscarsv6-testing-results-20111031.xlsx

oscarsv6-testing-results-20111024.xlsx

oscarsv6-testing-results-20111017.xlsx

oscarsv6-testing-results-20111002.xlsx Tests postponed due to Issue 268.

oscarsv6-testing-results-20110927.xlsx

oscarsv6-testing-results-20110919.xlsx

oscarsv6-testing-results-20110912.xlsx

oscarsv6-testing-results-20110908.xlsx

oscarsv6-testing-results-20110830.xlsx

Tests above this line contain the full suite of Topology Conditions

oscarsv6-testing-results-20110823.xlsx

Tests above this line are run in PRODUCTION mode

oscarsv6-testing-results-20110815.xlsx

oscarsv6-testing-results-20110808.xlsx

Note to oscars-dev mailing list 08/03/2011

I'll be running tests on topology conditions 4 and 5 from now on. The tests request simultaneous circuits that fit a specific scenario, requesting 14 to 16 circuits at once. The number of simultaneous circuits is restricted by the topology files; for example, there are only so many ports with a limited VLAN range.

To address this limitation I have been running stress tests that request more simultaneous reservations, and I will start posting those results.

Of the remaining topology conditions to test, numbers 6, 7 and 8 can be run once we have v0.6/v0.5 interoperability working.

The simultaneous circuit tests are currently verified manually by viewing the circuits from the GUI. This method was chosen because having the tests grep through list.sh output would be too expensive.

The test listed under topology condition number 9 requires crashing individual modules and restarting oscars during setup and active states. This test will have to be done manually. It will be difficult for developers to reproduce exact test conditions. I think the goal of this test is to verify that reservations that are in the database persist through an outage.

oscarsv6-testing-results-20110801.xlsx

oscarsv6-testing-results-20110729.xlsx

Tests above this line will no longer include the set of baseline tests, because those are based on a topology that is not used in the test conditions, and the test conditions provide coverage of the baseline tests. This eliminates the need to change the topology/domain on an IDC to run the baseline tests.

oscarsv6-testing-results-20110721.xlsx

oscarsv6-testing-results-20110711.xlsx

oscarsv6-testing-results-20110704.xlsx

oscarsv6-testing-results-20110627.xlsx

oscarsv6-testing-results-20110613.xlsx

ResourceManager Scan Interval Test

A 10-minute scan interval that scans for new reservations was added. Tests were run to measure OSCARS performance with different scan frequencies.

scan-frequencies_06_09_2011.xlsx

Issue 178, which the scan interval tests addressed, has been resolved, and the tests are no longer necessary.

oscarsv6-testing-results-20110607.xlsx

oscarsv6-testing-results-20110509.xlsx

oscarsv6-testing-results-20110503.xlsx

oscarsv6-testing-results-20110426.xlsx

oscarsv6-testing-results-20110420.xlsx

oscarsv6-testing-results-20110412.xlsx

oscarsv6-testing-results-20110404.xlsx

oscarsv6-testing-results-20110328.xlsx

oscarsv6-testing-results-20110321.xlsx

oscarsv6-testing-results-20110318.xlsx

oscarsv6-testing-results-20110309.xlsx

oscarsv6-testing-results-20110304.xlsx

oscarsv6-testing-results-20110223.xlsx

oscarsv6-testing-results-20110217.xlsx

oscarsv6-testing-results-20110208.xlsx

oscarsv6-testing-results-20110126.xlsx

oscarsv6-testing-results-20110119.xlsx

Topologies

The odev testbed is set up so that testdomain-2.net on odev-vm-17 is the center of a hub. It connects to testdomain-1.net on odev-vm-16 and testdomain-3.net on odev-vm-18. There is also a path to a v0.5.4 IDC, testdomain-4.net, on odev-vm-25.

All topologies have three nodes except for testdomain-2.net, which has five. In all domains, nodes 1 and 2 are intra-domain nodes and the following node(s) are inter-domain.

The third node of each branch topology points to testdomain-2.net. The numbering is as follows:

  • testdomain-1.net node 3 -> testdomain-2.net node 3
  • testdomain-3.net node 4 -> testdomain-2.net node 4
  • testdomain-4.net node 5 -> testdomain-2.net node 5

Each node contains ports:

  • with translation
  • without translation
  • with VLANs < 2000
  • with VLANs > 2000
  • with limited bandwidth
  • to the same node
  • combinations of the above

This allows for simulations of the test conditions.

testdomain-1.net

testdomain-2.net

testdomain-3.net

testdomain-4.net

tedb-inter.xml

tedb-intra.xml

v0.5 testbed topologies

TODO: Add more reports below