<?xml version="1.0" standalone="no"?>
<!DOCTYPE s1 SYSTEM "sbk:/style/dtd/document.dtd">
<!--
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements. See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership. The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
-->
<s1 title="Testing Design/Standards">
<ul>
<li><link anchor="overview-tests">Overview of Testing concepts</link></li>
<li><link anchor="standards-api-tests">Standards for API Tests</link></li>
<li><link anchor="standards-xsl-tests">Standards for Stylesheet Tests</link></li>
<li><link anchor="testing-links">Links to other testing sites</link></li>
</ul>

  <anchor name="overview-tests"/>
  <s2 title="Overview of Testing concepts">
    <p>While an overview of software testing in general is outside
    the scope of this document, here are some of the
    concepts and background behind the Xalan testing effort.</p>
    <gloss>
      <label>A quick glossary of Xalan testing terms:</label><item></item>
      <label>What is a test?</label>
      <item>The word 'test' is overused, and can refer to a number
      of things.  It can be an API test, which is usually a Java
      class that verifies the behavior of Xalan by calling its APIs.
      It can be a stylesheet test, which is normally an .xsl stylesheet
      file with a matching .xml data file, and often has an expected
      output file with a .out extension.</item>
      <label>What kinds of tests does Xalan have?</label>
      <item>There are several ways to categorize the tests currently
      used in Xalan: API tests and testlets, which are specific tests
      for detailed areas of the API in Xalan; Conformance Tests,
      with stylesheets in the tests\conf directory that each test
      conformance with a specific part of the XSLT spec, and which are
      run automatically by a test driver; performance tests, which
      are a set of stylesheets specifically designed to show the
      performance of a processor in various ways, and which are run
      automatically by a test driver; and contributed tests, which are
      stored in tests\contrib, where anyone is invited to submit their
      own favorite stylesheets that we can use to test future Xalan
      releases.  There are also a few specific tests of extensions, as well
      as a small but growing suite of individual Bugzilla bug regression tests.
      We are working on better documentation and
      structure for the tests.</item>
      <label>What is a test result?</label>
      <item>While most people view tests as having a simple boolean
      pass/fail result, I've found it more useful to have a range of
      results from our tests. Briefly, they include INCP or incomplete
      tests; PASS tests, where everything went correctly; FAIL tests,
      where something obviously didn't go correctly; ERRR tests, where
      something failed in an unexpected way; and AMBG or ambiguous tests,
      where the test appears to have completed but the output results
      haven't been verified to be correct yet.
      <link anchor="overview-tests-results">See a full description of test results.</link></item>
      <label>How are test results stored/displayed?</label>
      <item>Xalan tests all use
      <jump href="apidocs/org/apache/qetest/Reporter.html">Reporter</jump>s and
      <jump href="apidocs/org/apache/qetest/Logger.html">Logger</jump>s to store their results.
      By default, most Reporters send output to a ConsoleLogger (so you
      can see what's happening as the test runs) and to an XMLFileLogger
      (which stores its results on disk).  The logFile input to a test
      (generally on the command line or in a .properties file)
      determines where it will produce its MyTestResults.xml file, which
      is the complete report of what the test did, as saved to disk by
      its XMLFileLogger.  You can
      then use <link idref="run" anchor="how-to-view-results">viewResults.xsl</link>
      to pretty-print the results into a MyTestResults.html
      file that you can view in your browser.  We are working on other
      stylesheets to output results in different formats.
      (A brief sketch of this pattern appears after this glossary.)
      </item>
      <label>What are your file/test naming conventions?</label>
      <item>See the sections below for <link anchor="standards-api-tests">API test naming</link> and
      <link anchor="standards-xsl-tests">stylesheet file naming</link> conventions.</item>
    </gloss>
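
    <p>To make the pattern concrete, here is a minimal sketch of a test
    reporting through a Reporter.  The flow (<code>testCaseInit</code>,
    <code>check*()</code>, <code>testCaseClose</code>) follows the
    descriptions in this document, but the exact signatures in
    org.apache.qetest may differ, and <code>doSomeWork()</code> is a
    hypothetical helper:</p>
    <source>
// A minimal sketch of a test reporting through a Reporter; the flow
// (testCaseInit, check*, testCaseClose) follows this document, but the
// exact signatures in org.apache.qetest may differ.
import org.apache.qetest.Reporter;

public class SampleTest
{
    public void runTest(Reporter reporter)
    {
        reporter.testCaseInit("Verify basic behavior");

        String actual = doSomeWork();  // hypothetical helper
        if ("expected".equals(actual))
            reporter.checkPass("doSomeWork returned the expected value");
        else
            reporter.checkFail("doSomeWork returned: " + actual);

        // Without this close call, the testCase would roll up as
        // INCP (incomplete), as described below.
        reporter.testCaseClose();
    }

    private String doSomeWork() { return "expected"; }
}
    </source>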

    <anchor name="overview-tests-results"/>
    <p>Xalan tests will report one of several results, as detailed below.
    Note that the framework automatically rolls up the results for
    any individual test file: a testCase's result is calculated from
    any test points or <code>check*()</code> calls within that testCase;
    a testFile's result is calculated from the results of its testCases
    (a sketch of this roll-up calculation follows the list below).</p>
    <ul>
    <li>INCP/incomplete: all tests start out as incomplete.  If a test never calls
    a <code>check*()</code> method (i.e. never officially verifies a test
    point), then its result will be incomplete. This is important for cases
    where a test file begins running, and then causes some unexpected
    error that exits the test.
    <br/>Some other test harnesses will erroneously
    report this test as passing, since it never actually reported that
    anything failed.  For Xalan, this may also be reported if a test
    calls <code>testFileInit</code> or <code>testCaseInit</code>, but
    never calls the corresponding <code>testFileClose</code> or <code>testCaseClose</code>.
    See <jump href="apidocs/org/apache/qetest/Logger.html#INCP">Logger.INCP</jump></li>

    <li>PASS: the test ran to completion and all test points verified correctly.
    This is obviously a good thing. A test will only pass if it has at least one
    test point that passes and has no other kinds of test points (i.e. fail,
    ambiguous, or error).
    See <jump href="apidocs/org/apache/qetest/Logger.html#PASS">Logger.PASS</jump></li>

    <li>AMBG/ambiguous: the test ran to completion but at least one test point
    could not verify its data because it could not find the 'gold'
    data to verify against.  This test neither passes nor fails,
    but exists somewhere in the middle.
    <br/>The usual solution is to
    manually compare the actual output the test produced, verify
    that it is correct, and then check in the output as the 'gold'
    or expected data.  Then when you next run the test, it should pass.
    A test is ambiguous if at least one test point is ambiguous and
    it has no fail or error test points; this means that a test with
    both ambiguous and pass test points will roll up to be ambiguous.
    See <jump href="apidocs/org/apache/qetest/Logger.html#AMBG">Logger.AMBG</jump></li>

    <li>FAIL: the test ran to completion but at least one test point
    did not verify correctly.  This is normally used for cases where
    we attempt to validate a test point but get the wrong answer:
    for example, if we call setData(3), then call getData and get a '2' back.
    <br/>In most cases, a test should be able to continue normally after a FAIL
    result, and the rest of the results should be valid.
    A test will fail if at least one test point fails and
    it has no error test points; thus a fail always takes precedence
    over a pass or ambiguous result.
    See <jump href="apidocs/org/apache/qetest/Logger.html#FAIL">Logger.FAIL</jump></li>

    <li>ERRR/error: the test ran to completion but at least one test point
    had an error or did not verify correctly. This is normally used for
    cases where we attempt to validate a test point, but something unexpected
    happens: for example, if we call setData(3), and calling getData throws
    an exception.
    <br/>Although the difference seems subtle, it can be a useful
    diagnostic, since a test that reports an ERRR may not necessarily be able
    to continue normally.  In Xalan API tests, we often use this code if
    some setup routines for a testCase fail, meaning that the rest of the
    test case probably won't work properly.
    <br/>A test will report an ERRR result if at least one test point is ERRR;
    thus an ERRR result takes precedence over any other kind of result.
    Note that calling <code>Reporter.logErrorMsg()</code> will not cause
    an error result; it will merely log the message.  You generally must
    call <code>checkErr</code> directly to cause an ERRR result.
    See <jump href="apidocs/org/apache/qetest/Logger.html#ERRR">Logger.ERRR</jump></li>

    </ul>
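    <p>The precedence rules above amount to a simple roll-up calculation:
    ERRR beats FAIL, FAIL beats AMBG, AMBG beats PASS, and a test with no
    verified test points stays INCP.  The sketch below restates those rules
    in Java for illustration only; it is not the actual framework code, and
    the numeric values are arbitrary (the real constants live in
    <jump href="apidocs/org/apache/qetest/Logger.html">Logger</jump>):</p>
    <source>
// Illustrative re-statement of the roll-up rules described above;
// this is not the actual org.apache.qetest framework code, and the
// numeric values are arbitrary.
public class ResultRollup
{
    // Result codes, ordered by precedence (higher wins).
    public static final int INCP = 0;  // incomplete
    public static final int PASS = 1;
    public static final int AMBG = 2;  // ambiguous
    public static final int FAIL = 3;
    public static final int ERRR = 4;  // error

    /** Roll a set of test point results up into one overall result. */
    public static int rollup(int[] testPoints)
    {
        // A test with no verified test points stays incomplete.
        int overall = INCP;
        for (int point : testPoints)
        {
            // ERRR beats FAIL beats AMBG beats PASS beats INCP,
            // matching the precedence described in this document.
            if (point > overall)
                overall = point;
        }
        return overall;
    }

    public static void main(String[] args)
    {
        // A pass plus an ambiguous point rolls up to ambiguous.
        System.out.println(rollup(new int[] { PASS, AMBG }));  // prints 2
    }
}
    </source>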
  </s2>

  <anchor name="standards-api-tests"/>
  <s2 title="Standards for API Tests">
    <p>In progress. The overall Java testing framework, the test drivers,
    and the specific API tests all have a number of design decisions detailed
    in the javadoc
    <jump href="apidocs/org/apache/qetest/package-summary.html">here</jump> and
    <jump href="apidocs/org/apache/qetest/xsl/package-summary.html">here</jump>.</p>
    <p>Naming conventions: obviously we follow basic Java coding
    standards, as well as some specific standards that apply to Xalan
    or to testing in general.  Comments appreciated.</p>
    <gloss>
      <label>Some naming conventions currently used:</label><item></item>
      <label>*Test.java/.class</label>
      <item>As in 'ConformanceTest', 'PerformanceTest', etc.: a single,
      automated test file designed to be run from the command line or
      from a testing harness.  This may be used in the future by
      automated test discovery mechanisms.</item>
      <label>*Testlet.java/.class</label>
      <item>As in '<jump href="apidocs/org/apache/qetest/xsl/StylesheetTestlet.html">StylesheetTestlet</jump>', 'PerformanceTestlet', etc.: a single,
      automated testlet designed to be run from the command line or
      from a testing harness.  Testlets are generally focused on one
      or a very few test points, and usually are data-driven.  A testlet
      defines a single test case algorithm, and relies on the caller
      (or *TestletDriver) to provide it with the data point(s) to use
      in its test, including gold comparison info.</item>
      <label>*Datalet.java/.class</label>
      <item>As in '<jump href="apidocs/org/apache/qetest/xsl/StylesheetDatalet.html">StylesheetDatalet</jump>': a single set of test data for
      a Testlet to execute.  Separating a specific set of data from the
      testing algorithm to use with the data makes it easy to write
      and run large sets of data-driven tests (see the sketch after
      this glossary).</item>
      <label>*APITest.java/.class</label>
      <item>As in 'TransformerAPITest', etc.: a single,
      automated test file designed to be run from the command line or
      from a testing harness, specifically providing test coverage of
      a number of APIs.  Instead of performing the same kind of generic
      processing/transformations over a whole directory tree of files, these
      *APITests attempt to validate the API functionality itself: e.g. when
      you call setFoo(1), you should expect that getFoo() will return 1.
      </item>
      <label>XSL*.java/.class</label>
      <item>Files that are specific to XSL(T) and XML concepts in
      general, but not necessarily specific to Xalan itself.  That is, these
      files may generally need org.xml.sax.* or org.w3c.dom.* to compile, but
      usually should not need org.apache.xalan.* to compile.</item>
      <label>Logging*.java/.class</label>
      <item>Various testing implementations of common error handler,
      URI resolver, and other classes.  These generally do not implement
      much functionality of the underlying classes, but simply log
      everything that happens to them to a Logger, for later analysis.
      Thus we can hook a LoggingErrorHandler up to a Transformer, run a
      stylesheet with known errors through it, and then go back and validate
      that the Transformer logged the appropriate errors with this service.</item>
      <label>QetestUtils.java/.class</label>
      <item>A simple static utility class with a few general-purpose
      utility methods for testing.</item>
    </gloss>
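    <p>As a rough sketch of the Testlet/Datalet separation noted in the
    glossary above, the hypothetical classes below show the shape of the
    pattern; the real org.apache.qetest interfaces have more members and
    different signatures:</p>
    <source>
// A minimal sketch of the Testlet/Datalet separation; these are
// simplified, hypothetical classes, not the real org.apache.qetest
// interfaces.

// A Datalet is just a bag of data describing one test case.
class SimpleDatalet
{
    public String input;     // e.g. path to an .xml data file
    public String expected;  // e.g. path to a 'gold' .out file
}

// A Testlet encodes a single test algorithm, applied to any Datalet.
class SimpleTestlet
{
    /** Run the one test algorithm against the supplied data. */
    public boolean execute(SimpleDatalet datalet)
    {
        // A real testlet would transform datalet.input and compare the
        // output against datalet.expected, reporting PASS/FAIL through
        // a Reporter; here we only show the shape of the flow.
        System.out.println("Testing " + datalet.input
                           + " against " + datalet.expected);
        return true;
    }
}

public class TestletSketch
{
    public static void main(String[] args)
    {
        // A driver feeds many Datalets through one Testlet; this is
        // what makes large, data-driven suites cheap to extend.
        SimpleTestlet testlet = new SimpleTestlet();
        String[][] cases = { { "foo.xml", "foo.out" }, { "bar.xml", "bar.out" } };
        for (String[] c : cases)
        {
            SimpleDatalet d = new SimpleDatalet();
            d.input = c[0];
            d.expected = c[1];
            testlet.execute(d);
        }
    }
}
    </source>
    <p>Because the algorithm lives in the Testlet and the data lives in the
    Datalet, adding a new test case is just a matter of adding a new set of
    data.</p>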
    <p>Please: if you plan to submit Java API tests, use the existing framework
    as <link idref="submit" anchor="write-API-tests">described</link>.</p>
  </s2>

  <anchor name="standards-xsl-tests"/>
  <s2 title="Standards for Stylesheet Tests">
    <p>In progress. See the <link idref="submit" anchor="write-xsl-tests">discussion about OASIS</link> for an overview.</p>
    <p>Currently, the basic standards for Conformance and related
    tests are to provide similarly-named
    *.xml and *.xsl files, and a proposed *.out 'gold' or expected
    output file.  The basenames of the files should start with the name
    of the parent directory the files are in.  Thus if you had a new
    test you wanted to contribute about the 'foo' feature, you might
    submit a set of files like so:</p>
    <p>All under <code>xml-xalan\test\tests</code>:<br/>
      <code>contrib\foo\foo.xml</code><br/>
      <code>contrib\foo\foo.xsl</code><br/>
      <code>contrib-gold\foo\foo.out</code><br/><br/>
      You could then run this test through the Conformance test driver like:<br/>
      <code>cd xml-xalan\test</code><br/>
      <code>build contrib -Dqetest.category=foo</code><br/>
    </p>
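    <p>Conceptually, what the driver does with these files is transform
    foo.xml by foo.xsl and compare the output against the 'gold' file.  The
    sketch below shows that flow using the standard JAXP API with a naive
    byte-for-byte comparison; the real driver (StylesheetTestlet and
    friends) adds Reporter-based logging and smarter, configurable
    comparisons:</p>
    <source>
// A conceptual sketch of what the test driver does with these files;
// this is not the actual driver code.
import java.io.File;
import java.nio.file.Files;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

public class GoldCompareSketch
{
    public static void main(String[] args) throws Exception
    {
        File out = new File("foo_actual.out");

        // Run the stylesheet over the data file.
        Transformer transformer = TransformerFactory.newInstance()
            .newTransformer(new StreamSource(new File("contrib/foo/foo.xsl")));
        transformer.transform(new StreamSource(new File("contrib/foo/foo.xml")),
                              new StreamResult(out));

        // Naive byte-for-byte comparison against the gold file; the
        // real framework uses smarter, configurable comparisons.
        byte[] actual = Files.readAllBytes(out.toPath());
        byte[] gold = Files.readAllBytes(new File("contrib-gold/foo/foo.out").toPath());
        System.out.println(java.util.Arrays.equals(actual, gold) ? "PASS" : "AMBG/FAIL");
    }
}
    </source>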
    <p>Tests using Xalan Extensions may be found under test/tests/extensions and are separated
    into directories by language:<br/>
    <gloss>
      <label>test/tests/extensions/library</label>
        <item>Stylesheets for extensions implemented natively in Xalan; these only
        have .xsl and .xml files for the test</item>
      <label>test/tests/extensions/java</label>
        <item>Stylesheets for extensions implemented in Java; these are run by
        a .java file that uses an ExtensionTestlet to run</item>
      <label>test/tests/extensions/javascript</label>
        <item>Stylesheets for extensions implemented in JavaScript; these include
        only a .xsl and .xml file but require
        <jump href="http://xml.apache.org/xalan-j/extensions.html#supported-lang">bsf.jar and js.jar</jump> in the classpath</item>
    </gloss>
    </p>

  </s2>

  <anchor name="testing-links"/>
  <s2 title="Links to other testing sites">
    <p>A few quick links to other websites about software quality
    engineering/assurance.  No endorsement, express or implied, should
    be inferred from any of these links, but hopefully they'll be
    useful for a few of you.</p>
    <p>One note: I've commonly found two basic
    kinds of sites about software testing: ones for IS/IT types,
    and ones for software engineers.  The first kind deals with testing
    or verifying the deployment or integration of business software
    systems, certification exams for MS or Novell networks, ISO
    certification for your company, etc.  The second kind (which I
    find more interesting) deals with testing software applications
    themselves, i.e. the testing ISVs do to their own software before
    selling it in the market.  So far, there seem to be a lot more
    IS/IT 'testing' sites than there are application 'testing' sites.</p>
    <ul>
    <li><jump href="http://www.soft.com/Institute/HotList/index.html">Software Research Institute HotList</jump>
    This is a pretty good laundry list of top-level links for software testing</li>
    <li><jump href="http://www.swquality.com/users/pustaver/index.shtml">SWQuality site; plenty of links</jump></li>
    <li><jump href="http://www.stickyminds.com/">StickyMinds</jump></li>
    <li><jump href="http://www.sqe.com/press/index.asp">SQE</jump></li>
    </ul>
  </s2>
</s1>