Plugins Reference

Task Component

Charts

EmbeddedChart [Chart]

Chart for embedding custom HTML as a complete chart.

Example of usage:

self.add_output(
    complete={
        "title": "Embedding link to example.com",
        "chart_plugin": "EmbeddedChart",
        "data": "<a href='example.com'>"
                "To see external logs follow this link"
                "</a>"
    }
)

Platform: default

Module: rally.task.processing.charts

EmbeddedExternalChart [Chart]

Chart for embedding an external HTML page as a complete chart.

Example of usage:

self.add_output(
    complete={
        "title": "Embedding external html page",
        "chart_plugin": "EmbeddedExternalChart",
        "data": "https://example.com"
    }
)

Platform: default

Module: rally.task.processing.charts

Lines [Chart]

Display results as generic chart with lines.

This plugin processes additive data and displays it in the HTML report as a linear chart with the X axis bound to the iteration number. Complete output data is displayed as a linear chart as well, without any processing.

Examples of using this plugin in Scenario, for saving output data:

self.add_output(
    additive={"title": "Additive data as stacked area",
              "description": "Iterations trend for foo and bar",
              "chart_plugin": "Lines",
              "data": [["foo", 12], ["bar", 34]]},
    complete={"title": "Complete data as stacked area",
              "description": "Data is shown as stacked area, as-is",
              "chart_plugin": "Lines",
              "data": [["foo", [[0, 5], [1, 42], [2, 15], [3, 7]]],
                       ["bar", [[0, 2], [1, 1.3], [2, 5], [3, 9]]]],
              "label": "Y-axis label text",
              "axis_label": "X-axis label text"})

Platform: default

Module: rally.task.processing.charts

Pie [Chart]

Display results as pie, calculate average values for additive data.

This plugin processes additive data and calculates average values. Both additive and complete data are displayed in the HTML report as a pie chart.

Examples of using this plugin in Scenario, for saving output data:

self.add_output(
    additive={"title": "Additive output",
              "description": ("Pie with average data "
                              "from all iterations values"),
              "chart_plugin": "Pie",
              "data": [["foo", 12], ["bar", 34], ["spam", 56]]},
    complete={"title": "Complete output",
              "description": "Displayed as a pie, as-is",
              "chart_plugin": "Pie",
              "data": [["foo", 12], ["bar", 34], ["spam", 56]]})

Platform: default

Module: rally.task.processing.charts

StackedArea [Chart]

Display results as stacked area.

This plugin processes additive data and displays it in the HTML report as a stacked area with the X axis bound to the iteration number. Complete output data is displayed as a stacked area as well, without any processing.

Keys “description”, “label” and “axis_label” are optional.

Examples of using this plugin in Scenario, for saving output data:

self.add_output(
    additive={"title": "Additive data as stacked area",
              "description": "Iterations trend for foo and bar",
              "chart_plugin": "StackedArea",
              "data": [["foo", 12], ["bar", 34]]},
    complete={"title": "Complete data as stacked area",
              "description": "Data is shown as stacked area, as-is",
              "chart_plugin": "StackedArea",
              "data": [["foo", [[0, 5], [1, 42], [2, 15], [3, 7]]],
                       ["bar", [[0, 2], [1, 1.3], [2, 5], [3, 9]]]],
              "label": "Y-axis label text",
              "axis_label": "X-axis label text"})

Platform: default

Module: rally.task.processing.charts

StatsTable [Chart]

Calculate statistics for additive data and display it as table.

This plugin processes additive data and composes statistics that are displayed as a table in the HTML report.

Examples of using this plugin in Scenario, for saving output data:

self.add_output(
    additive={"title": "Statistics",
              "description": ("Table with statistics generated "
                              "from all iterations values"),
              "chart_plugin": "StatsTable",
              "data": [["foo stat", 12], ["bar", 34], ["spam", 56]]})

Platform: default

Module: rally.task.processing.charts

Table [Chart]

Display complete output as a table; cannot be used for additive data.

Use this plugin to display complete output data in the HTML report as a table. This plugin cannot be used for additive data because it does not contain any processing logic.

Examples of using this plugin in Scenario, for saving output data:

self.add_output(
    complete={"title": "Arbitrary Table",
              "description": "Just show columns and rows as-is",
              "chart_plugin": "Table",
              "data": {"cols": ["foo", "bar", "spam"],
                       "rows": [["a row", 1, 2], ["b row", 3, 4],
                                ["c row", 5, 6]]}})

Platform: default

Module: rally.task.processing.charts

TextArea [Chart]

Arbitrary text.

This plugin processes complete data and displays the output as text in the HTML report.

Examples of using this plugin in Scenario, for saving output data:

self.add_output(
    complete={"title": "Script Inline",
              "chart_plugin": "TextArea",
              "data": ["first output", "second output",
                       "third output"]]})

Platform: default

Module: rally.task.processing.charts

Contexts

dummy_context [Context]

Dummy context.

Platform: default

Parameters:

  • fail_cleanup (bool) [ref]
  • fail_setup (bool) [ref]
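
For illustration, this context could be enabled in the context section of a task configuration like the following sketch (the flag values are arbitrary examples):

"context": {
    "dummy_context": {
        "fail_setup": false,
        "fail_cleanup": false
    }
}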

Module: rally.plugins.common.contexts.dummy

Hook Actions

sys_call [Hook Action]

Performs system call.

Platform: default

Parameters:

  • str [ref]

    Command to execute.

Module: rally.plugins.common.hook.sys_call

Hook Triggers

event [Hook Trigger]

Triggers hook on specified event and list of values.

Platform: default

Note

One of the following groups of parameters should be provided.

Option 1 of parameters:

Trigger the hook at specified seconds after the start of the workload.

  • at (list) [ref]

    Elements of the list should follow format(s) described below:

    • Type: int. Format:

      {
          "minimum": 0,
          "type": "integer"
      }
      
  • unit [ref]

    Set of expected values: ‘time’.

Option 2 of parameters:

Trigger the hook on specific iterations.

  • at (list) [ref]

    Elements of the list should follow format(s) described below:

    • Type: int. Format:

      {
          "minimum": 1,
          "type": "integer"
      }
      
  • unit [ref]

    Set of expected values: ‘iteration’.
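
As an illustration of Option 2, the trigger arguments might look like the following sketch (the iteration numbers are hypothetical):

{
    "unit": "iteration",
    "at": [1, 5, 10]
}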

Module: rally.plugins.common.hook.triggers.event

periodic [Hook Trigger]

Periodically triggers hook with specified range and step.

Platform: default

Note

One of the following groups of parameters should be provided.

Option 1 of parameters:

Periodically trigger the hook based on elapsed time after the start of the workload.

  • start (int) [ref]

    Min value: 0.

  • step (int) [ref]

    Min value: 1.

  • end (int) [ref]

    Min value: 1.

  • unit [ref]

    Set of expected values: ‘time’.

Option 2 of parameters:

Periodically trigger the hook based on iterations.

  • start (int) [ref]

    Min value: 1.

  • step (int) [ref]

    Min value: 1.

  • end (int) [ref]

    Min value: 1.

  • unit [ref]

    Set of expected values: ‘iteration’.
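
As an illustration of Option 1, the trigger arguments might look like this sketch (values are hypothetical): fire every 10 seconds between seconds 0 and 60 of the workload.

{
    "unit": "time",
    "start": 0,
    "step": 10,
    "end": 60
}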

Module: rally.plugins.common.hook.triggers.periodic

SLAs

failure_rate [SLA]

Failure rate minimum and maximum in percent.

Platform: default

Parameters:

  • max (float) [ref]

    Min value: 0.0.

    Max value: 100.0.

  • min (float) [ref]

    Min value: 0.0.

    Max value: 100.0.
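
A minimal sketch of how this SLA might appear in the sla section of a task configuration (the bounds are arbitrary example values):

"sla": {
    "failure_rate": {"min": 0.0, "max": 10.0}
}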

Module: rally.plugins.common.sla.failure_rate

max_avg_duration [SLA]

Maximum average duration of one iteration in seconds.

Platform: default

Parameters:

  • float [ref]

    Min value: 0.0.

Module: rally.plugins.common.sla.max_average_duration

max_avg_duration_per_atomic [SLA]

Maximum average duration of one iteration's atomic actions in seconds.

Platform: default

Parameters:

A dictionary is expected. Keys should follow the pattern(s) described below.

  • . (str)* [ref]

    The name of atomic action.
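
Since a dictionary keyed by atomic action name is expected, a task-configuration sketch might look like this (the action name and threshold are hypothetical):

"sla": {
    "max_avg_duration_per_atomic": {
        "some_atomic_action": 1.5
    }
}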

Module: rally.plugins.common.sla.max_average_duration_per_atomic

max_seconds_per_iteration [SLA]

Maximum time for one iteration in seconds.

Platform: default

Parameters:

  • float [ref]

    Min value: 0.0.

Module: rally.plugins.common.sla.iteration_time

outliers [SLA]

Limit the number of outliers (iterations that take too much time).

The outliers are detected automatically using the computation of the mean and standard deviation (std) of the data.

Platform: default

Parameters:

  • max (int) [ref]

    Min value: 0.

  • min_iterations (int) [ref]

    Min value: 3.

  • sigmas (float) [ref]

    Min value: 0.0.
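
A sketch of this SLA in a task configuration, with example thresholds (at most one outlier, detected with 3 sigmas over at least 10 iterations):

"sla": {
    "outliers": {"max": 1, "min_iterations": 10, "sigmas": 3.0}
}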

Module: rally.plugins.common.sla.outliers

performance_degradation [SLA]

Calculates performance degradation based on iteration time.

This SLA plugin finds minimum and maximum duration of iterations completed without errors during Rally task execution. Assuming that minimum duration is 100%, it calculates performance degradation against maximum duration.

Platform: default

Parameters:

  • max_degradation (float) [ref]

    Min value: 0.0.
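
A sketch of this SLA in a task configuration, assuming a hypothetical 50% limit on degradation:

"sla": {
    "performance_degradation": {"max_degradation": 50.0}
}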

Module: rally.plugins.common.sla.performance_degradation

Scenarios

Dummy.dummy [Scenario]

Do nothing and sleep for the given number of seconds (0 by default).

Dummy.dummy can be used for testing the performance of different ScenarioRunners and Rally's ability to store a large amount of results.

Platform: default

Parameters:

  • sleep [ref]

    Idle time of method (in seconds).
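
A minimal task-configuration sketch for this scenario (the runner settings are arbitrary example values):

{
    "Dummy.dummy": [
        {
            "args": {"sleep": 0.5},
            "runner": {"type": "constant", "times": 10, "concurrency": 2}
        }
    ]
}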

Module: rally.plugins.common.scenarios.dummy.dummy

Dummy.dummy_exception [Scenario]

Throws an exception.

Dummy.dummy_exception is used for testing whether exceptions are processed properly by the task engine, and for analyzing Rally's capabilities for storing and displaying results.

Platform: default

Parameters:

  • size_of_message [ref]

    Int size of the exception message

  • sleep [ref]

    Idle time of method (in seconds).

  • message [ref]

    Message of the exception

Module: rally.plugins.common.scenarios.dummy.dummy

Dummy.dummy_exception_probability [Scenario]

Throws an exception with given probability.

Dummy.dummy_exception_probability is used for testing whether exceptions are processed properly by the task engine, and for analyzing Rally's capabilities for storing and displaying results.

Platform: default

Parameters:

  • exception_probability [ref]

    Sets how likely it is that an exception will be thrown. Float between 0 and 1: 0=never, 1=always.

Module: rally.plugins.common.scenarios.dummy.dummy

Dummy.dummy_output [Scenario]

Generate dummy output.

This scenario generates an example of output data.

Platform: default

Parameters:

  • random_range [ref]

    Max int limit for generated random values

Module: rally.plugins.common.scenarios.dummy.dummy

Dummy.dummy_random_action [Scenario]

Sleep random time in dummy actions.

Platform: default

Parameters:

  • actions_num [ref]

    Int number of actions to generate

  • sleep_min [ref]

    Minimal time to sleep, numeric seconds

  • sleep_max [ref]

    Maximum time to sleep, numeric seconds

Module: rally.plugins.common.scenarios.dummy.dummy

Dummy.dummy_random_fail_in_atomic [Scenario]

Randomly fail atomic actions in a dummy scenario.

Can be used to test the processing of atomic action failures.

Platform: default

Parameters:

  • exception_probability [ref]

    Probability with which atomic actions fail in this dummy scenario (0 <= p <= 1)

Module: rally.plugins.common.scenarios.dummy.dummy

Dummy.dummy_timed_atomic_actions [Scenario]

Run some sleepy atomic actions for SLA atomic action tests.

Platform: default

Parameters:

  • number_of_actions [ref]

    Int number of atomic actions to create

  • sleep_factor [ref]

    Int multiplier for number of seconds to sleep

Module: rally.plugins.common.scenarios.dummy.dummy

Dummy.failure [Scenario]

Raise errors in some iterations.

Platform: default

Parameters:

  • sleep [ref]

    Float iteration sleep time in seconds

  • from_iteration [ref]

    Int iteration number which starts range of failed iterations

  • to_iteration [ref]

    Int iteration number which ends range of failed iterations

  • each [ref]

    Int cyclic number of iterations at which an error is actually raised within the selected range. For example, each=3 will raise an error in every 3rd iteration.
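
For illustration, the scenario arguments might be set as in this sketch (example values): fail every 2nd iteration between iterations 5 and 10.

"args": {
    "sleep": 0.1,
    "from_iteration": 5,
    "to_iteration": 10,
    "each": 2
}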

Module: rally.plugins.common.scenarios.dummy.dummy

HttpRequests.check_random_request [Scenario]

Executes random HTTP requests from provided list.

This scenario takes a random URL from the list of requests and raises an exception if the response differs from the expected one.

Platform: default

Parameters:

  • requests [ref]

    List of request dicts

  • status_code [ref]

    Expected response code; it is used only if a status code is not specified in the request itself.
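
A sketch of the scenario arguments with hypothetical URLs; each request dict may carry its own status_code, otherwise the top-level one is used:

"args": {
    "requests": [
        {"url": "https://example.com", "method": "GET"},
        {"url": "https://example.com/missing", "method": "GET", "status_code": 404}
    ],
    "status_code": 200
}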

Module: rally.plugins.common.scenarios.requests.http_requests

HttpRequests.check_request [Scenario]

Standard way for testing web services using HTTP requests.

This scenario is used to make a request and check the response against the expected one.

Platform: default

Parameters:

  • url [ref]

    Url for the Request object

  • method [ref]

    Method for the Request object

  • status_code [ref]

    Expected response code

  • kwargs [ref]

    Optional additional request parameters
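
A sketch of the scenario arguments with hypothetical values:

"args": {
    "url": "https://example.com",
    "method": "GET",
    "status_code": 200
}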

Module: rally.plugins.common.scenarios.requests.http_requests

Scenario Runners

constant [Scenario Runner]

Creates constant load executing a scenario a specified number of times.

This runner will place a constant load on the cloud under test by executing each scenario iteration without pausing between iterations up to the number of times specified in the scenario config.

The concurrency parameter of the scenario config controls the number of concurrent iterations which execute during a single scenario in order to simulate the activities of multiple users placing load on the cloud under test.
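
For example, a runner section like the following sketch (example values) executes 100 iterations, 10 of them at a time:

"runner": {
    "type": "constant",
    "times": 100,
    "concurrency": 10
}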

Platform: default

Parameters:

  • max_cpu_count (int) [ref]

    The maximum number of processes to create load from.

    Min value: 1.

  • timeout (float) [ref]

    Operation’s timeout.

  • concurrency (int) [ref]

    The number of parallel iteration executions.

    Min value: 1.

  • times (int) [ref]

    Total number of iteration executions.

    Min value: 1.

Module: rally.plugins.common.runners.constant

constant_for_duration [Scenario Runner]

Creates constant load executing a scenario for an interval of time.

This runner will place a constant load on the cloud under test by executing each scenario iteration without pausing between iterations until a specified interval of time has elapsed.

The concurrency parameter of the scenario config controls the number of concurrent iterations which execute during a single scenario in order to simulate the activities of multiple users placing load on the cloud under test.
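
For example, a runner section like this sketch (example values) keeps 5 concurrent iterations running for 5 minutes:

"runner": {
    "type": "constant_for_duration",
    "duration": 300,
    "concurrency": 5
}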

Platform: default

Parameters:

  • duration (float) [ref]

    The number of seconds during which to generate a load. If the duration is 0, the scenario will run once per parallel execution.

    Min value: 0.0.

  • timeout (float) [ref]

    Operation’s timeout.

    Min value: 1.

  • concurrency (int) [ref]

    The number of parallel iteration executions.

    Min value: 1.

Module: rally.plugins.common.runners.constant

rps [Scenario Runner]

Scenario runner that does the job with specified frequency.

Every single scenario iteration is executed with specified frequency (runs per second) in a pool of processes. The scenario will be launched for a fixed number of times in total (specified in the config).

An example of a rps scenario is booting 1 VM per second. This execution type is thus very helpful in understanding the maximal load that a certain cloud can handle.
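
For example, a runner section like this sketch launches roughly 20 iterations per second until 2000 have started; the frequency itself is given here by an rps key, which is an assumption based on the plugin name and is not listed among the parameters below:

"runner": {
    "type": "rps",
    "rps": 20,
    "times": 2000
}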

Platform: default

Parameters:

  • max_cpu_count (int) [ref]

    Min value: 1.

  • max_concurrency (int) [ref]

    Min value: 1.

  • timeout (float) [ref]
  • times (int) [ref]

    Min value: 1.

Module: rally.plugins.common.runners.rps

serial [Scenario Runner]

Scenario runner that executes scenarios serially.

Unlike scenario runners that execute in parallel, the serial scenario runner executes scenarios one-by-one in the same Python interpreter process as Rally. This allows you to execute a scenario without introducing any concurrent operations, and to interactively debug the scenario from the same command that you use to start Rally.

Platform: default

Parameters:

  • times (int) [ref]

    Min value: 1.

Module: rally.plugins.common.runners.serial

Task Exporters

elastic [Task Exporter]

Exports task results to ElasticSearch 2.x, 5.x or 6.x clusters.

The exported data includes:

  • Task basic information such as title, description, status, deployment uuid, etc. See rally_task_v1_data index.
  • Workload information such as scenario name and configuration, runner type and configuration, start time of the load, success rate, SLA details in case of errors, etc. See rally_workload_v1_data index.
  • Separate documents for all atomic actions. See rally_atomic_action_data_v1 index.

The destination can be a remote server; in that case, specify it as the URL of the ElasticSearch cluster.

Or documents can be dumped to a file; in that case, the destination should be a file path, for example:

/home/foo/bar.txt

If the destination is empty, http://localhost:9200 will be used.

Platform: default

Module: rally.plugins.common.exporters.elastic.exporter

html [Task Exporter]

Generates task report in HTML format.

Platform: default

Module: rally.plugins.common.exporters.html

html-static [Task Exporter]

Generates task report in HTML format with embedded JS/CSS.

Platform: default

Module: rally.plugins.common.exporters.html

json [Task Exporter]

Generates task report in JSON format.

Platform: default

Module: rally.plugins.common.exporters.json_exporter

junit-xml [Task Exporter]

Generates task report in JUnit-XML format.

An example of the report (All dates, numbers, names appearing in this example are fictitious. Any resemblance to real things is purely coincidental):

<testsuites>
  <!--Report is generated by Rally 0.10.0 at 2017-06-04T05:14:00-->
  <testsuite id="task-uu-ii-dd"
             errors="0"
             failures="1"
             skipped="0"
             tests="2"
             time="75.0"
             timestamp="2017-06-04T05:14:00">
    <testcase classname="CinderVolumes"
              name="list_volumes"
              id="workload-1-uuid"
              time="29.9695231915"
              timestamp="2017-06-04T05:14:44" />
    <testcase classname="NovaServers"
              name="list_keypairs"
              id="workload-2-uuid"
              time="5"
              timestamp="2017-06-04T05:15:15">
      <failure>ooops</failure>
    </testcase>
  </testsuite>
</testsuites>

Platform: default

Module: rally.plugins.common.exporters.junit

Validators

args-spec [Validator]

Scenario arguments validator

Platform: default

Module: rally.plugins.common.validators

check_constant [Validator]

Additional schema validation for constant runner

Platform: default

Module: rally.plugins.common.runners.constant

check_rps [Validator]

Additional schema validation for rps runner

Platform: default

Module: rally.plugins.common.runners.rps

enum [Validator]

Checks that parameter is in a list.

Ensure a parameter has the right value. This value needs to be defined in a list of accepted values.

Platform: default

Parameters:

  • param_name [ref]

    Name of parameter to validate

  • values [ref]

    List of values accepted

  • missed [ref]

    Allow the parameter to be missing (i.e. treat it as optional)

  • case_insensitive [ref]

    Ignore case in enum values

Module: rally.plugins.common.validators

es_exporter_destination [Validator]

Validates the destination for ElasticSearch exporter.

When the destination is an ElasticSearch cluster, its version should be 2.* or 5.*.

Platform: default

Module: rally.plugins.common.exporters.elastic.exporter

file_exists [Validator]

Validator checks that the parameter is a proper path to a file accessible with the proper mode.

Ensure a file exists and can be accessed with the specified mode. Note that the path to the file will be expanded before access checking.

Platform: default

Parameters:

  • param_name [ref]

    Name of parameter to validate

  • mode [ref]

    Access mode to test for. This should be one of:

    • os.F_OK (file exists)
    • os.R_OK (file is readable)
    • os.W_OK (file is writable)
    • os.X_OK (file is executable)

    If multiple modes are required they can be added, e.g.:

    mode=os.R_OK+os.W_OK

  • required [ref]

    Boolean indicating whether this argument is required.

Module: rally.plugins.common.validators

jsonschema [Validator]

JSON schema validator

Platform: default

Module: rally.plugins.common.validators

map_keys [Validator]

Check that parameter contains specified keys.

Platform: default

Parameters:

  • param_name [ref]

    Name of parameter to validate

  • required [ref]

    List of all required keys

  • allowed [ref]

    List of all allowed keys

  • additional [ref]

    Whether additional keys are allowed. If a list of allowed keys is specified, defaults to False; otherwise defaults to True

  • missed [ref]

    Allow the parameter to be missing (i.e. treat it as optional)

Module: rally.plugins.common.validators

number [Validator]

Checks that the parameter is a number that passes the specified condition.

Ensure a parameter is within the range [minval, maxval]. This is a closed interval so the end points are included.

Platform: default

Parameters:

  • param_name [ref]

    Name of parameter to validate

  • minval [ref]

    Lower endpoint of valid interval

  • maxval [ref]

    Upper endpoint of valid interval

  • nullable [ref]

    Allow the parameter to be unspecified, or to be None

  • integer_only [ref]

    Only accept integers

Module: rally.plugins.common.validators

required_contexts [Validator]

Validator checks if required contexts are specified.

Platform: default

Parameters:

  • contexts [ref]

    List of strings and tuples with context names that should be specified. A tuple represents 'at least one of the listed contexts'.

Module: rally.plugins.common.validators

required_param_or_context [Validator]

Validator checks that either the required parameter or the required context is specified.

Platform: default

Parameters:

  • param_name [ref]

    Name of parameter

  • ctx_name [ref]

    Name of context

Module: rally.plugins.common.validators

required_params [Validator]

Scenario required parameter validator.

This allows us to search for required parameters in a sub-dict of the config.

Platform: default

Parameters:

  • subdict [ref]

    Sub-dict of "config" to search. If not defined, the search is performed in "config" itself

  • params [ref]

    List of required parameters

Module: rally.plugins.common.validators

required_platform [Validator]

Validates the specification of the required platform for the workload.

Platform: default

Parameters:

  • platform [ref]

    Name of the platform

Module: rally.common.validation

restricted_parameters [Validator]

Validates that the given parameters are not set.

Platform: default

Parameters:

  • param_names [ref]

    Parameter or parameters list to be validated.

  • subdict [ref]

    Sub-dict of "config" to search for param_names. If not defined, the search is performed in "config" itself

Module: rally.plugins.common.validators

Verification Component

Verification Reporters

html [Verification Reporter]

Generates verification report in HTML format.

Platform: default

Module: rally.plugins.common.verification.reporters

html-static [Verification Reporter]

Generates verification report in HTML format with embedded JS/CSS.

Platform: default

Module: rally.plugins.common.verification.reporters

json [Verification Reporter]

Generates verification report in JSON format.

An example of the report (All dates, numbers, names appearing in this example are fictitious. Any resemblance to real things is purely coincidental):

{"verifications": {
    "verification-uuid-1": {
        "status": "finished",
        "skipped": 1,
        "started_at": "2001-01-01T00:00:00",
        "finished_at": "2001-01-01T00:05:00",
        "tests_duration": 5,
        "run_args": {
            "pattern": "set=smoke",
            "xfail_list": {"some.test.TestCase.test_xfail":
                               "Some reason why it is expected."},
            "skip_list": {"some.test.TestCase.test_skipped":
                              "This test was skipped intentionally"},
        },
        "success": 1,
        "expected_failures": 1,
        "tests_count": 3,
        "failures": 0,
        "unexpected_success": 0
    },
    "verification-uuid-2": {
        "status": "finished",
        "skipped": 1,
        "started_at": "2002-01-01T00:00:00",
        "finished_at": "2002-01-01T00:05:00",
        "tests_duration": 5,
        "run_args": {
            "pattern": "set=smoke",
            "xfail_list": {"some.test.TestCase.test_xfail":
                               "Some reason why it is expected."},
            "skip_list": {"some.test.TestCase.test_skipped":
                              "This test was skipped intentionally"},
        },
        "success": 1,
        "expected_failures": 1,
        "tests_count": 3,
        "failures": 1,
        "unexpected_success": 0
    }
 },
 "tests": {
    "some.test.TestCase.test_foo[tag1,tag2]": {
        "name": "some.test.TestCase.test_foo",
        "tags": ["tag1","tag2"],
        "by_verification": {
            "verification-uuid-1": {
                "status": "success",
                "duration": "1.111"
            },
            "verification-uuid-2": {
                "status": "success",
                "duration": "22.222"
            }
        }
    },
    "some.test.TestCase.test_skipped[tag1]": {
        "name": "some.test.TestCase.test_skipped",
        "tags": ["tag1"],
        "by_verification": {
            "verification-uuid-1": {
                "status": "skipped",
                "duration": "0",
                "details": "Skipped until Bug: 666 is resolved."
            },
            "verification-uuid-2": {
                "status": "skipped",
                "duration": "0",
                "details": "Skipped until Bug: 666 is resolved."
            }
        }
    },
    "some.test.TestCase.test_xfail": {
        "name": "some.test.TestCase.test_xfail",
        "tags": [],
        "by_verification": {
            "verification-uuid-1": {
                "status": "xfail",
                "duration": "3",
                "details": "Some reason why it is expected.\n\n"
                    "Traceback (most recent call last): \n"
                    "  File "fake.py", line 13, in <module>\n"
                    "    yyy()\n"
                    "  File "fake.py", line 11, in yyy\n"
                    "    xxx()\n"
                    "  File "fake.py", line 8, in xxx\n"
                    "    bar()\n"
                    "  File "fake.py", line 5, in bar\n"
                    "    foo()\n"
                    "  File "fake.py", line 2, in foo\n"
                    "    raise Exception()\n"
                    "Exception"
            },
            "verification-uuid-2": {
                "status": "xfail",
                "duration": "3",
                "details": "Some reason why it is expected.\n\n"
                    "Traceback (most recent call last): \n"
                    "  File "fake.py", line 13, in <module>\n"
                    "    yyy()\n"
                    "  File "fake.py", line 11, in yyy\n"
                    "    xxx()\n"
                    "  File "fake.py", line 8, in xxx\n"
                    "    bar()\n"
                    "  File "fake.py", line 5, in bar\n"
                    "    foo()\n"
                    "  File "fake.py", line 2, in foo\n"
                    "    raise Exception()\n"
                    "Exception"
            }
        }
    },
    "some.test.TestCase.test_failed": {
        "name": "some.test.TestCase.test_failed",
        "tags": [],
        "by_verification": {
            "verification-uuid-2": {
                "status": "fail",
                "duration": "4",
                "details": "Some reason why it is expected.\n\n"
                    "Traceback (most recent call last): \n"
                    "  File "fake.py", line 13, in <module>\n"
                    "    yyy()\n"
                    "  File "fake.py", line 11, in yyy\n"
                    "    xxx()\n"
                    "  File "fake.py", line 8, in xxx\n"
                    "    bar()\n"
                    "  File "fake.py", line 5, in bar\n"
                    "    foo()\n"
                    "  File "fake.py", line 2, in foo\n"
                    "    raise Exception()\n"
                    "Exception"
            }
        }
    }
 }
}

Platform: default

Module: rally.plugins.common.verification.reporters

junit-xml [Verification Reporter]

Generates verification report in JUnit-XML format.

An example of the report (All dates, numbers, names appearing in this example are fictitious. Any resemblance to real things is purely coincidental):

<testsuites>
  <!--Report is generated by Rally 0.8.0 at 2002-01-01T00:00:00-->
  <testsuite id="verification-uuid-1"
             tests="9"
             time="1.111"
             errors="0"
             failures="3"
             skipped="0"
             timestamp="2001-01-01T00:00:00">
    <testcase classname="some.test.TestCase"
              name="test_foo"
              time="8"
              timestamp="2001-01-01T00:01:00" />
    <testcase classname="some.test.TestCase"
              name="test_skipped"
              time="0"
              timestamp="2001-01-01T00:02:00">
      <skipped>Skipped until Bug: 666 is resolved.</skipped>
    </testcase>
    <testcase classname="some.test.TestCase"
              name="test_xfail"
              time="3"
              timestamp="2001-01-01T00:03:00">
      <!--It is an expected failure due to: something-->
      <!--Traceback:
HEEELP-->
    </testcase>
    <testcase classname="some.test.TestCase"
              name="test_uxsuccess"
              time="3"
              timestamp="2001-01-01T00:04:00">
      <failure>
          It is an unexpected success. The test should fail due to:
          It should fail, I said!
      </failure>
    </testcase>
  </testsuite>
  <testsuite id="verification-uuid-2"
             tests="99"
             time="22.222"
             errors="0"
             failures="33"
             skipped="0"
             timestamp="2002-01-01T00:00:00">
    <testcase classname="some.test.TestCase"
              name="test_foo"
              time="8"
              timestamp="2001-02-01T00:01:00" />
    <testcase classname="some.test.TestCase"
              name="test_failed"
              time="8"
              timestamp="2001-02-01T00:02:00">
      <failure>HEEEEEEELP</failure>
    </testcase>
    <testcase classname="some.test.TestCase"
              name="test_skipped"
              time="0"
              timestamp="2001-02-01T00:03:00">
      <skipped>Skipped until Bug: 666 is resolved.</skipped>
    </testcase>
    <testcase classname="some.test.TestCase"
              name="test_xfail"
              time="4"
              timestamp="2001-02-01T00:04:00">
      <!--It is an expected failure due to: something-->
      <!--Traceback:
HEEELP-->
    </testcase>
  </testsuite>
</testsuites>

Platform: default

Module: rally.plugins.common.verification.reporters