27.4. unittest — Unit testing framework¶
Source code: Lib/unittest/__init__.py
(If you are already familiar with the basic concepts of testing, you might want to skip to the list of assert methods.)
The unittest unit testing framework was originally inspired by JUnit and has a similar flavor as major unit testing frameworks in other languages. It supports test automation, sharing of setup and shutdown code for tests, aggregation of tests into collections, and independence of the tests from the reporting framework.
To achieve this, unittest supports some important concepts in an object-oriented way:
- test fixture
  A test fixture represents the preparation needed to perform one or more tests, and any associated cleanup actions. This may involve, for example, creating temporary or proxy databases, directories, or starting a server process.
- test case
  A test case is the individual unit of testing. It checks for a specific response to a particular set of inputs. unittest provides a base class, TestCase, which may be used to create new test cases.
- test suite
  A test suite is a collection of test cases, test suites, or both. It is used to aggregate tests that should be executed together.
- test runner
  A test runner is a component which orchestrates the execution of tests and provides the outcome to the user. The runner may use a graphical interface, a textual interface, or return a special value to indicate the results of executing the tests.
See also
- doctest module
  Another test-support module with a very different flavor.
- Simple Smalltalk Testing: With Patterns
  Kent Beck's original paper on testing frameworks using the pattern shared by unittest.
- Nose and py.test
  Third-party unittest frameworks with a lighter-weight syntax for writing tests. For example: assert func(10) == 42.
- The Python Testing Tools Taxonomy
  An extensive list of Python testing tools including functional testing frameworks and mock object libraries.
- Testing in Python Mailing List
  A special-interest group for discussion of testing, and testing tools, in Python.
The script Tools/unittestgui/unittestgui.py
in the Python source distribution is
a GUI tool for test discovery and execution. This is intended largely for ease of use
for those new to unit testing. For production environments it is
recommended that tests be driven by a continuous integration system such as
Buildbot, Jenkins
or Hudson.
27.4.1. Basic example¶
The unittest module provides a rich set of tools for constructing and running tests. This section demonstrates that a small subset of the tools suffice to meet the needs of most users.

Here is a short script to test three string methods:
import unittest

class TestStringMethods(unittest.TestCase):

    def test_upper(self):
        self.assertEqual('foo'.upper(), 'FOO')

    def test_isupper(self):
        self.assertTrue('FOO'.isupper())
        self.assertFalse('Foo'.isupper())

    def test_split(self):
        s = 'hello world'
        self.assertEqual(s.split(), ['hello', 'world'])
        # check that s.split fails when the separator is not a string
        with self.assertRaises(TypeError):
            s.split(2)

if __name__ == '__main__':
    unittest.main()
A testcase is created by subclassing unittest.TestCase. The three individual tests are defined with methods whose names start with the letters test. This naming convention informs the test runner about which methods represent tests.

The crux of each test is a call to assertEqual() to check for an expected result; assertTrue() or assertFalse() to verify a condition; or assertRaises() to verify that a specific exception gets raised. These methods are used instead of the assert statement so the test runner can accumulate all test results and produce a report.
The setUp()
and tearDown()
methods allow you
to define instructions that will be executed before and after each test method.
They are covered in more detail in the section Organizing test code.
The final block shows a simple way to run the tests. unittest.main() provides a command-line interface to the test script. When run from the command line, the above script produces an output that looks like this:
...
----------------------------------------------------------------------
Ran 3 tests in 0.000s
OK
Passing the -v option to your test script will instruct unittest.main() to enable a higher level of verbosity, and produce the following output:
test_isupper (__main__.TestStringMethods) ... ok
test_split (__main__.TestStringMethods) ... ok
test_upper (__main__.TestStringMethods) ... ok
----------------------------------------------------------------------
Ran 3 tests in 0.001s
OK
The above examples show the most commonly used unittest features which are sufficient to meet many everyday testing needs. The remainder of the documentation explores the full feature set from first principles.
27.4.2. Command-Line Interface¶
The unittest module can be used from the command line to run tests from modules, classes or even individual test methods:
python -m unittest test_module1 test_module2
python -m unittest test_module.TestClass
python -m unittest test_module.TestClass.test_method
You can pass in a list with any combination of module names, and fully qualified class or method names.

Test modules can be specified by file path as well:
python -m unittest tests/test_something.py
This allows you to use the shell filename completion to specify the test module. The file specified must still be importable as a module. The path is converted to a module name by removing the '.py' and converting path separators into '.'. If you want to execute a test file that isn't importable as a module you should execute the file directly instead.
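The path-to-module conversion described above can be sketched as a small helper; this is a simplified illustration of the rule, not the loader's actual implementation:

```python
import os

def path_to_module_name(path):
    """Mimic how a test file path becomes an importable module name:
    strip the '.py' suffix and turn path separators into dots."""
    if path.endswith('.py'):
        path = path[:-3]
    # Handle both the platform separator and '/' for portability.
    return path.replace(os.sep, '.').replace('/', '.')

print(path_to_module_name('tests/test_something.py'))  # tests.test_something
```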
You can run tests with more detail (higher verbosity) by passing in the -v flag:
python -m unittest -v test_module
When executed without arguments Test Discovery is started:
python -m unittest
For a list of all the command-line options:
python -m unittest -h
Changed in version 3.2: In earlier versions it was only possible to run individual test methods and not modules or classes.
27.4.2.1. Command-line options¶
unittest supports these command-line options:
- -b, --buffer¶
  The standard output and standard error streams are buffered during the test run. Output during a passing test is discarded. Output is echoed normally on test fail or error and is added to the failure messages.
- -c, --catch¶
  Control-C during the test run waits for the current test to end and then reports all the results so far. A second Control-C raises the normal KeyboardInterrupt exception. See Signal Handling for the functions that provide this functionality.
- -f, --failfast¶
  Stop the test run on the first error or failure.
- -k¶
  Only run test methods and classes that match the pattern or substring. This option may be used multiple times, in which case all test cases that match any of the given patterns are included.
  Patterns that contain a wildcard character (*) are matched against the test name using fnmatch.fnmatchcase(); otherwise simple case-sensitive substring matching is used.
  Patterns are matched against the fully qualified test method name as imported by the test loader.
  For example, -k foo matches foo_tests.SomeTest.test_something and bar_tests.SomeTest.test_foo, but not bar_tests.FooTest.test_something.
- --locals¶
  Show local variables in tracebacks.
New in version 3.2: The command-line options -b, -c and -f were added.
New in version 3.5: The command-line option --locals.
New in version 3.7: The command-line option -k.
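The -k matching behaviour described above can be sketched with fnmatch.fnmatchcase(), which unittest uses for wildcard patterns. This is a simplified illustration of the matching rule, not the loader's actual code:

```python
import fnmatch

def matches(pattern, test_name):
    """Wildcard patterns use fnmatch; otherwise -k falls back to
    simple case-sensitive substring matching."""
    if '*' in pattern:
        return fnmatch.fnmatchcase(test_name, pattern)
    return pattern in test_name

print(matches('foo', 'bar_tests.SomeTest.test_foo'))       # True  (substring)
print(matches('foo', 'bar_tests.FooTest.test_something'))  # False (case-sensitive)
print(matches('*FooTest*', 'bar_tests.FooTest.test_one'))  # True  (wildcard)
```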
The command line can also be used for test discovery, for running all of the tests in a project or just a subset.
27.4.3. Test Discovery¶
New in version 3.2.
Unittest supports simple test discovery. In order to be compatible with test discovery, all of the test files must be modules or packages (including namespace packages) importable from the top-level directory of the project (this means that their filenames must be valid identifiers).
Test discovery is implemented in TestLoader.discover(), but can also be used from the command line. The basic command-line usage is:
cd project_directory
python -m unittest discover
Note
As a shortcut, python -m unittest is the equivalent of python -m unittest discover. If you want to pass arguments to test discovery the discover sub-command must be used explicitly.
The discover sub-command has the following options:
- -v, --verbose¶
  Verbose output
- -s, --start-directory directory¶
  Directory to start discovery (. default)
- -p, --pattern pattern¶
  Pattern to match test files (test*.py default)
- -t, --top-level-directory directory¶
  Top level directory of project (defaults to start directory)

The -s, -p, and -t options can be passed in as positional arguments in that order. The following two command lines are equivalent:
python -m unittest discover -s project_directory -p "*_test.py"
python -m unittest discover project_directory "*_test.py"
As well as being a path it is possible to pass a package name, for example
myproject.subpackage.test
, as the start directory. The package name you
supply will then be imported and its location on the filesystem will be used
as the start directory.
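The same discovery can be driven programmatically through TestLoader.discover(), which returns a TestSuite. The throwaway directory below is created only so the sketch is self-contained and runnable:

```python
import os
import tempfile
import unittest

# Build a disposable project directory with one test file so the
# discovery call below has something to find (illustrative setup).
project_dir = tempfile.mkdtemp()
with open(os.path.join(project_dir, 'test_sample.py'), 'w') as f:
    f.write(
        "import unittest\n"
        "class SampleTest(unittest.TestCase):\n"
        "    def test_ok(self):\n"
        "        self.assertTrue(True)\n"
    )

# Roughly equivalent to: python -m unittest discover -s project_dir -p "test*.py"
loader = unittest.TestLoader()
suite = loader.discover(start_dir=project_dir, pattern='test*.py')
print(suite.countTestCases())  # 1
unittest.TextTestRunner(verbosity=0).run(suite)
```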
Warning
Test discovery loads tests by importing them. Once test discovery has found
all the test files from the start directory you specify it turns the paths
into package names to import. For example foo/bar/baz.py
will be
imported as foo.bar.baz
.
If you have a package installed globally and attempt test discovery on a different copy of the package then the import could happen from the wrong place. If this happens test discovery will warn you and exit.
If you supply the start directory as a package name rather than a path to a directory then discover assumes that whichever location it imports from is the location you intended, so you will not get the warning.
Test modules and packages can customize test loading and discovery through the load_tests protocol.
Changed in version 3.4: Test discovery supports namespace packages.
27.4.4. Organizing test code¶
The basic building blocks of unit testing are test cases: single scenarios that must be set up and checked for correctness. In unittest, test cases are represented by unittest.TestCase instances. To make your own test cases you must write subclasses of TestCase or use FunctionTestCase.
The testing code of a TestCase instance should be entirely self-contained, such that it can be run either in isolation or in arbitrary combination with any number of other test cases.

The simplest TestCase subclass will simply implement a test method (i.e. a method whose name starts with test) in order to perform specific testing code:
import unittest

class DefaultWidgetSizeTestCase(unittest.TestCase):
    def test_default_widget_size(self):
        widget = Widget('The widget')
        self.assertEqual(widget.size(), (50, 50))
Note that in order to test something, we use one of the assert*()
methods provided by the TestCase
base class. If the test fails, an
exception will be raised with an explanatory message, and unittest
will identify the test case as a failure. Any other exceptions will be
treated as errors.
Tests can be numerous, and their set-up can be repetitive. Luckily, we
can factor out set-up code by implementing a method called
setUp()
, which the testing framework will automatically
call for every single test we run:
import unittest

class WidgetTestCase(unittest.TestCase):
    def setUp(self):
        self.widget = Widget('The widget')

    def test_default_widget_size(self):
        self.assertEqual(self.widget.size(), (50,50),
                         'incorrect default size')

    def test_widget_resize(self):
        self.widget.resize(100,150)
        self.assertEqual(self.widget.size(), (100,150),
                         'wrong size after resize')
Note
The order in which the various tests will be run is determined by sorting the test method names with respect to the built-in ordering for strings.
If the setUp()
method raises an exception while the test is
running, the framework will consider the test to have suffered an error, and
the test method will not be executed.
Similarly, we can provide a tearDown()
method that tidies up
after the test method has been run:
import unittest

class WidgetTestCase(unittest.TestCase):
    def setUp(self):
        self.widget = Widget('The widget')

    def tearDown(self):
        self.widget.dispose()
If setUp()
succeeded, tearDown()
will be
run whether the test method succeeded or not.
Such a working environment for the testing code is called a
test fixture. A new TestCase instance is created as a unique
test fixture used to execute each individual test method. Thus
setUp()
, tearDown()
, and __init__()
will be called once per test.
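The per-test fixture behaviour described above can be observed directly: each test method runs on its own TestCase instance, so state created in setUp() never leaks between tests. A small illustrative sketch:

```python
import unittest

class FreshInstanceTest(unittest.TestCase):
    def setUp(self):
        # A fresh instance is created for every test, so this list
        # always starts empty; tests cannot see each other's state.
        self.items = []

    def test_append_one(self):
        self.items.append(1)
        self.assertEqual(self.items, [1])

    def test_starts_empty(self):
        # Passes regardless of whether test_append_one ran first.
        self.assertEqual(self.items, [])

suite = unittest.TestLoader().loadTestsFromTestCase(FreshInstanceTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # True
```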
It is recommended that you use TestCase implementations to group tests together
according to the features they test. unittest
provides a mechanism for
this: the test suite, represented by unittest
’s
TestSuite
class. In most cases, calling unittest.main()
will do
the right thing and collect all the module’s test cases for you and execute
them.
However, should you want to customize the building of your test suite, you can do it yourself:
def suite():
    suite = unittest.TestSuite()
    suite.addTest(WidgetTestCase('test_default_widget_size'))
    suite.addTest(WidgetTestCase('test_widget_resize'))
    return suite

if __name__ == '__main__':
    runner = unittest.TextTestRunner()
    runner.run(suite())
You can place the definitions of test cases and test suites in the same modules
as the code they are to test (such as widget.py
), but there are several
advantages to placing the test code in a separate module, such as
test_widget.py
:
- The test module can be run standalone from the command line.
- The test code can more easily be separated from shipped code.
- There is less temptation to change test code to fit the code it tests without a good reason.
- Test code should be modified much less frequently than the code it tests.
- Tested code can be refactored more easily.
- Tests for modules written in C must be in separate modules anyway, so why not be consistent?
- If the testing strategy changes, there is no need to change the source code.
27.4.5. Re-using old test code¶
Some users will find that they have existing test code that they would like to
run from unittest
, without converting every old test function to a
TestCase
subclass.
For this reason, unittest
provides a FunctionTestCase
class.
This subclass of TestCase
can be used to wrap an existing test
function. Set-up and tear-down functions can also be provided.
Given the following test function:
def testSomething():
    something = makeSomething()
    assert something.name is not None
    # ...
one can create an equivalent test case instance as follows, with optional set-up and tear-down methods:
testcase = unittest.FunctionTestCase(testSomething,
                                     setUp=makeSomethingDB,
                                     tearDown=deleteSomethingDB)
Note
Even though FunctionTestCase
can be used to quickly convert an
existing test base over to a unittest
-based system, this approach is
not recommended. Taking the time to set up proper TestCase
subclasses will make future test refactorings infinitely easier.
In some cases, the existing tests may have been written using the doctest
module. If so, doctest
provides a DocTestSuite
class that can
automatically build unittest.TestSuite
instances from the existing
doctest
-based tests.
27.4.6. Skipping tests and expected failures¶
New in version 3.1.
Unittest supports skipping individual test methods and even whole classes of tests. In addition, it supports marking a test as an "expected failure," a test that is broken and will fail, but shouldn't be counted as a failure on a TestResult.
Skipping a test is simply a matter of using the skip()
decorator
or one of its conditional variants.
Basic skipping looks like this:
class MyTestCase(unittest.TestCase):

    @unittest.skip("demonstrating skipping")
    def test_nothing(self):
        self.fail("shouldn't happen")

    @unittest.skipIf(mylib.__version__ < (1, 3),
                     "not supported in this library version")
    def test_format(self):
        # Tests that work for only a certain version of the library.
        pass

    @unittest.skipUnless(sys.platform.startswith("win"), "requires Windows")
    def test_windows_support(self):
        # windows specific testing code
        pass
This is the output of running the example above in verbose mode:
test_format (__main__.MyTestCase) ... skipped 'not supported in this library version'
test_nothing (__main__.MyTestCase) ... skipped 'demonstrating skipping'
test_windows_support (__main__.MyTestCase) ... skipped 'requires Windows'
----------------------------------------------------------------------
Ran 3 tests in 0.005s
OK (skipped=3)
Classes can be skipped just like methods:
@unittest.skip("showing class skipping")
class MySkippedTestCase(unittest.TestCase):
    def test_not_run(self):
        pass
TestCase.setUp()
can also skip the test. This is useful when a resource
that needs to be set up is not available.
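Skipping from setUp() might look like the following sketch; the resource check is a hypothetical placeholder:

```python
import unittest

def resource_available():
    # Hypothetical availability check; pretend the resource is down.
    return False

class ResourceTest(unittest.TestCase):
    def setUp(self):
        if not resource_available():
            # skipTest() raises SkipTest, so the test method never runs.
            self.skipTest("external resource not available")

    def test_uses_resource(self):
        self.fail("never reached when the resource is missing")

suite = unittest.TestLoader().loadTestsFromTestCase(ResourceTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(len(result.skipped))  # 1
```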
Expected failures use the expectedFailure()
decorator.
class ExpectedFailureTestCase(unittest.TestCase):
    @unittest.expectedFailure
    def test_fail(self):
        self.assertEqual(1, 0, "broken")
It’s easy to roll your own skipping decorators by making a decorator that calls
skip()
on the test when it wants it to be skipped. This decorator skips
the test unless the passed object has a certain attribute:
def skipUnlessHasattr(obj, attr):
    if hasattr(obj, attr):
        return lambda func: func
    return unittest.skip("{!r} doesn't have {!r}".format(obj, attr))
The following decorators implement test skipping and expected failures:
- @unittest.skip(reason)¶
  Unconditionally skip the decorated test. reason should describe why the test is being skipped.
- @unittest.skipIf(condition, reason)¶
  Skip the decorated test if condition is true.
- @unittest.skipUnless(condition, reason)¶
  Skip the decorated test unless condition is true.
- @unittest.expectedFailure¶
  Mark the test as an expected failure. If the test fails when run, the test is not counted as a failure.
- exception unittest.SkipTest(reason)¶
  This exception is raised to skip a test.
  Usually you can use TestCase.skipTest() or one of the skipping decorators instead of raising this directly.
Skipped tests will not have setUp()
or tearDown()
run around them.
Skipped classes will not have setUpClass()
or tearDownClass()
run.
Skipped modules will not have setUpModule()
or tearDownModule()
run.
27.4.7. Distinguishing test iterations using subtests¶
New in version 3.4.
When some of your tests differ only in some very small details, for instance some parameters, unittest allows you to distinguish them inside the body of a test method using the subTest() context manager.
For example, the following test:
class NumbersTest(unittest.TestCase):

    def test_even(self):
        """
        Test that numbers between 0 and 5 are all even.
        """
        for i in range(0, 6):
            with self.subTest(i=i):
                self.assertEqual(i % 2, 0)
will produce the following output:
======================================================================
FAIL: test_even (__main__.NumbersTest) (i=1)
----------------------------------------------------------------------
Traceback (most recent call last):
File "subtests.py", line 32, in test_even
self.assertEqual(i % 2, 0)
AssertionError: 1 != 0
======================================================================
FAIL: test_even (__main__.NumbersTest) (i=3)
----------------------------------------------------------------------
Traceback (most recent call last):
File "subtests.py", line 32, in test_even
self.assertEqual(i % 2, 0)
AssertionError: 1 != 0
======================================================================
FAIL: test_even (__main__.NumbersTest) (i=5)
----------------------------------------------------------------------
Traceback (most recent call last):
File "subtests.py", line 32, in test_even
self.assertEqual(i % 2, 0)
AssertionError: 1 != 0
Without using a subtest, execution would stop after the first failure,
and the error would be less easy to diagnose because the value of i
wouldn’t be displayed:
======================================================================
FAIL: test_even (__main__.NumbersTest)
----------------------------------------------------------------------
Traceback (most recent call last):
File "subtests.py", line 32, in test_even
self.assertEqual(i % 2, 0)
AssertionError: 1 != 0
27.4.8. Classes and functions¶
This section describes in depth the API of unittest
.
27.4.8.1. Test cases¶
- class unittest.TestCase(methodName='runTest')¶
  Instances of the TestCase class represent the logical test units in the unittest universe. This class is intended to be used as a base class, with specific tests being implemented by concrete subclasses. This class implements the interface needed by the test runner to allow it to drive the tests, and methods that the test code can use to check for and report various kinds of failure.
  Each instance of TestCase will run a single base method: the method named methodName. In most uses of TestCase, you will neither change the methodName nor reimplement the default runTest() method.
  Changed in version 3.2: TestCase can be instantiated successfully without providing a methodName. This makes it easier to experiment with TestCase from the interactive interpreter.
  TestCase instances provide three groups of methods: one group used to run the test, another used by the test implementation to check conditions and report failures, and some inquiry methods allowing information about the test itself to be gathered.
  Methods in the first group (running the test) are:
- setUp()¶
  Method called to prepare the test fixture. This is called immediately before calling the test method; other than AssertionError or SkipTest, any exception raised by this method will be considered an error rather than a test failure. The default implementation does nothing.
- tearDown()¶
  Method called immediately after the test method has been called and the result recorded. This is called even if the test method raised an exception, so the implementation in subclasses may need to be particularly careful about checking internal state. Any exception, other than AssertionError or SkipTest, raised by this method will be considered an additional error rather than a test failure (thus increasing the total number of reported errors). This method will only be called if the setUp() succeeds, regardless of the outcome of the test method. The default implementation does nothing.
- setUpClass()¶
  A class method called before tests in an individual class run. setUpClass is called with the class as the only argument and must be decorated as a classmethod():

    @classmethod
    def setUpClass(cls):
        ...

  See Class and Module Fixtures for more details.
  New in version 3.2.
- tearDownClass()¶
  A class method called after tests in an individual class have run. tearDownClass is called with the class as the only argument and must be decorated as a classmethod():

    @classmethod
    def tearDownClass(cls):
        ...

  See Class and Module Fixtures for more details.
  New in version 3.2.
- run(result=None)¶
  Run the test, collecting the result into the TestResult object passed as result. If result is omitted or None, a temporary result object is created (by calling the defaultTestResult() method) and used. The result object is returned to run()'s caller.
  The same effect may be had by simply calling the TestCase instance.
  Changed in version 3.3: Previous versions of run did not return the result. Neither did calling an instance.
- skipTest(reason)¶
  Calling this during a test method or setUp() skips the current test. See Skipping tests and expected failures for more information.
  New in version 3.1.
- subTest(msg=None, **params)¶
  Return a context manager which executes the enclosed code block as a subtest. msg and params are optional, arbitrary values which are displayed whenever a subtest fails, allowing you to identify them clearly.
  A test case can contain any number of subtest declarations, and they can be arbitrarily nested.
  See Distinguishing test iterations using subtests for more information.
  New in version 3.4.
- debug()¶
  Run the test without collecting the result. This allows exceptions raised by the test to be propagated to the caller, and can be used to support running tests under a debugger.
The TestCase class provides several assert methods to check for and report failures. The following table lists the most commonly used methods (see the tables below for more assert methods):

Method                       Checks that              New in
assertEqual(a, b)            a == b
assertNotEqual(a, b)         a != b
assertTrue(x)                bool(x) is True
assertFalse(x)               bool(x) is False
assertIs(a, b)               a is b                   3.1
assertIsNot(a, b)            a is not b               3.1
assertIsNone(x)              x is None                3.1
assertIsNotNone(x)           x is not None            3.1
assertIn(a, b)               a in b                   3.1
assertNotIn(a, b)            a not in b               3.1
assertIsInstance(a, b)       isinstance(a, b)         3.2
assertNotIsInstance(a, b)    not isinstance(a, b)     3.2

All the assert methods accept a msg argument that, if specified, is used as the error message on failure (see also longMessage). Note that the msg keyword argument can be passed to assertRaises(), assertRaisesRegex(), assertWarns(), assertWarnsRegex() only when they are used as a context manager.
- assertEqual(first, second, msg=None)¶
  Test that first and second are equal. If the values do not compare equal, the test will fail.
  In addition, if first and second are the exact same type and one of list, tuple, dict, set, frozenset or str or any type that a subclass registers with addTypeEqualityFunc() the type-specific equality function will be called in order to generate a more useful default error message (see also the list of type-specific methods).
  Changed in version 3.1: Added the automatic calling of type-specific equality function.
  Changed in version 3.2: assertMultiLineEqual() added as the default type equality function for comparing strings.
- assertNotEqual(first, second, msg=None)¶
  Test that first and second are not equal. If the values do compare equal, the test will fail.
- assertTrue(expr, msg=None)¶
- assertFalse(expr, msg=None)¶
  Test that expr is true (or false).
  Note that this is equivalent to bool(expr) is True and not to expr is True (use assertIs(expr, True) for the latter). This method should also be avoided when more specific methods are available (e.g. assertEqual(a, b) instead of assertTrue(a == b)
), because they provide a better error message in case of failure.
- assertIs(first, second, msg=None)¶
- assertIsNot(first, second, msg=None)¶
  Test that first and second evaluate (or don't evaluate) to the same object.
  New in version 3.1.
- assertIsNone(expr, msg=None)¶
- assertIsNotNone(expr, msg=None)¶
  Test that expr is (or is not) None.
  New in version 3.1.
- assertIn(first, second, msg=None)¶
- assertNotIn(first, second, msg=None)¶
  Test that first is (or is not) in second.
  New in version 3.1.
- assertIsInstance(obj, cls, msg=None)¶
- assertNotIsInstance(obj, cls, msg=None)¶
  Test that obj is (or is not) an instance of cls (which can be a class or a tuple of classes, as supported by isinstance()). To check for the exact type, use assertIs(type(obj), cls).
  New in version 3.2.
It is also possible to check the production of exceptions, warnings, and log messages using the following methods:

Method                                         Checks that                                                      New in
assertRaises(exc, fun, *args, **kwds)          fun(*args, **kwds) raises exc
assertRaisesRegex(exc, r, fun, *args, **kwds)  fun(*args, **kwds) raises exc and the message matches regex r    3.1
assertWarns(warn, fun, *args, **kwds)          fun(*args, **kwds) raises warn                                   3.2
assertWarnsRegex(warn, r, fun, *args, **kwds)  fun(*args, **kwds) raises warn and the message matches regex r   3.2
assertLogs(logger, level)                      The with block logs on logger with minimum level                 3.4
- assertRaises(exception, callable, *args, **kwds)¶
- assertRaises(exception, msg=None)
  Test that an exception is raised when callable is called with any positional or keyword arguments that are also passed to assertRaises(). The test passes if exception is raised, is an error if another exception is raised, or fails if no exception is raised. To catch any of a group of exceptions, a tuple containing the exception classes may be passed as exception.
  If only the exception and possibly the msg arguments are given, return a context manager so that the code under test can be written inline rather than as a function:

    with self.assertRaises(SomeException):
        do_something()

  When used as a context manager, assertRaises() accepts the additional keyword argument msg.
  The context manager will store the caught exception object in its exception attribute. This can be useful if the intention is to perform additional checks on the exception raised:

    with self.assertRaises(SomeException) as cm:
        do_something()

    the_exception = cm.exception
    self.assertEqual(the_exception.error_code, 3)

  Changed in version 3.1: Added the ability to use assertRaises() as a context manager.
  Changed in version 3.2: Added the exception attribute.
  Changed in version 3.3: Added the msg keyword argument when used as a context manager.
- assertRaisesRegex(exception, regex, callable, *args, **kwds)¶
- assertRaisesRegex(exception, regex, msg=None)
  Like assertRaises() but also tests that regex matches on the string representation of the raised exception. regex may be a regular expression object or a string containing a regular expression suitable for use by re.search(). Examples:

    self.assertRaisesRegex(ValueError, "invalid literal for.*XYZ'$",
                           int, 'XYZ')

  Or:

    with self.assertRaisesRegex(ValueError, 'literal'):
        int('XYZ')

  New in version 3.1: under the name assertRaisesRegexp.
  Changed in version 3.2: Renamed to assertRaisesRegex().
  Changed in version 3.3: Added the msg keyword argument when used as a context manager.
- assertWarns(warning, callable, *args, **kwds)¶
- assertWarns(warning, msg=None)
  Test that a warning is triggered when callable is called with any positional or keyword arguments that are also passed to assertWarns(). The test passes if warning is triggered and fails if it isn't. Any exception is an error. To catch any of a group of warnings, a tuple containing the warning classes may be passed as warnings.
  If only the warning and possibly the msg arguments are given, return a context manager so that the code under test can be written inline rather than as a function:

    with self.assertWarns(SomeWarning):
        do_something()

  When used as a context manager, assertWarns() accepts the additional keyword argument msg.
  The context manager will store the caught warning object in its warning attribute, and the source line which triggered the warnings in the filename and lineno attributes. This can be useful if the intention is to perform additional checks on the warning caught:

    with self.assertWarns(SomeWarning) as cm:
        do_something()

    self.assertIn('myfile.py', cm.filename)
    self.assertEqual(320, cm.lineno)

  This method works regardless of the warning filters in place when it is called.
  New in version 3.2.
  Changed in version 3.3: Added the msg keyword argument when used as a context manager.
- assertWarnsRegex(warning, regex, callable, *args, **kwds)¶
- assertWarnsRegex(warning, regex, msg=None)
  Like assertWarns() but also tests that regex matches on the message of the triggered warning. regex may be a regular expression object or a string containing a regular expression suitable for use by re.search(). Example:

    self.assertWarnsRegex(DeprecationWarning,
                          r'legacy_function\(\) is deprecated',
                          legacy_function, 'XYZ')

  Or:

    with self.assertWarnsRegex(RuntimeWarning, 'unsafe frobnicating'):
        frobnicate('/etc/passwd')

  New in version 3.2.
  Changed in version 3.3: Added the msg keyword argument when used as a context manager.
- assertLogs(logger=None, level=None)¶
  A context manager to test that at least one message is logged on the logger or one of its children, with at least the given level.
  If given, logger should be a logging.Logger object or a str giving the name of a logger. The default is the root logger, which will catch all messages.
  If given, level should be either a numeric logging level or its string equivalent (for example either "ERROR" or logging.ERROR). The default is logging.INFO.
  The test passes if at least one message emitted inside the with block matches the logger and level conditions, otherwise it fails.
  The object returned by the context manager is a recording helper which keeps tracks of the matching log messages. It has two attributes:

  - records¶
    A list of logging.LogRecord objects of the matching log messages.

  - output¶
    A list of str objects with the formatted output of matching messages.

  Example:

    with self.assertLogs('foo', level='INFO') as cm:
        logging.getLogger('foo').info('first message')
        logging.getLogger('foo.bar').error('second message')
    self.assertEqual(cm.output, ['INFO:foo:first message',
                                 'ERROR:foo.bar:second message'])

  New in version 3.4.
There are also other methods used to perform more specific checks, such as:
Method                      Checks that                                                                   New in
assertAlmostEqual(a, b)     round(a-b, 7) == 0
assertNotAlmostEqual(a, b)  round(a-b, 7) != 0
assertGreater(a, b)         a > b                                                                         3.1
assertGreaterEqual(a, b)    a >= b                                                                        3.1
assertLess(a, b)            a < b                                                                         3.1
assertLessEqual(a, b)       a <= b                                                                        3.1
assertRegex(s, r)           r.search(s)                                                                   3.1
assertNotRegex(s, r)        not r.search(s)                                                               3.2
assertCountEqual(a, b)      a and b have the same elements in the same number, regardless of their order  3.2
- assertAlmostEqual(first, second, places=7, msg=None, delta=None)¶
- assertNotAlmostEqual(first, second, places=7, msg=None, delta=None)¶
  Test that first and second are approximately (or not approximately) equal by computing the difference, rounding to the given number of decimal places (default 7), and comparing to zero. Note that these methods round the values to the given number of decimal places (i.e. like the round() function) and not significant digits.
  If delta is supplied instead of places then the difference between first and second must be less or equal to (or greater than) delta.
  Supplying both delta and places raises a TypeError.
  Changed in version 3.2: assertAlmostEqual() automatically considers almost equal objects that compare equal. assertNotAlmostEqual() automatically fails if the objects compare equal. Added the delta keyword argument.
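The difference between the places and delta forms described above can be sketched as follows; the numbers are invented for illustration:

```python
import unittest

class AlmostEqualTest(unittest.TestCase):
    def test_places(self):
        # round(0.1 + 0.2 - 0.3, 7) == 0, so this passes despite
        # floating-point representation error.
        self.assertAlmostEqual(0.1 + 0.2, 0.3, places=7)

    def test_delta(self):
        # With delta, the absolute difference must be <= delta.
        self.assertAlmostEqual(100.0, 100.4, delta=0.5)
        with self.assertRaises(AssertionError):
            self.assertAlmostEqual(100.0, 100.4, delta=0.3)

suite = unittest.TestLoader().loadTestsFromTestCase(AlmostEqualTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # True
```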
- assertGreater(first, second, msg=None)¶
- assertGreaterEqual(first, second, msg=None)¶
- assertLess(first, second, msg=None)¶
- assertLessEqual(first, second, msg=None)¶
  Test that first is respectively >, >=, < or <= than second depending on the method name. If not, the test will fail:

    >>> self.assertGreaterEqual(3, 4)
    AssertionError: "3" unexpectedly not greater than or equal to "4"

  New in version 3.1.
- assertRegex(text, regex, msg=None)¶
- assertNotRegex(text, regex, msg=None)¶
  Test that a regex search matches (or does not match) text. In case of failure, the error message will include the pattern and the text (or the pattern and the part of text that unexpectedly matched). regex may be a regular expression object or a string containing a regular expression suitable for use by re.search().
  New in version 3.1: under the name assertRegexpMatches.
  Changed in version 3.2: The method assertRegexpMatches() has been renamed to assertRegex().
  New in version 3.2: assertNotRegex().
  New in version 3.5: The name assertNotRegexpMatches is a deprecated alias for assertNotRegex().
- assertCountEqual(first, second, msg=None)¶
  Test that sequence first contains the same elements as second, regardless of their order. When they don't, an error message listing the differences between the sequences will be generated.
  Duplicate elements are not ignored when comparing first and second. It verifies whether each element has the same count in both sequences. Equivalent to assertEqual(Counter(list(first)), Counter(list(second))) but works with sequences of unhashable objects as well.
  New in version 3.2.
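A quick illustration of the order-insensitive but duplicate-sensitive behaviour described above:

```python
import unittest

class CountEqualDemo(unittest.TestCase):
    def test_counts(self):
        # Order is ignored...
        self.assertCountEqual([1, 2, 2, 3], [3, 2, 1, 2])
        # ...but duplicate counts are not.
        with self.assertRaises(AssertionError):
            self.assertCountEqual([1, 2, 2], [1, 2])

suite = unittest.TestLoader().loadTestsFromTestCase(CountEqualDemo)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # True
```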
The
assertEqual()
method dispatches the equality check for objects of the same type to different type-specific methods. These methods are already implemented for most of the built-in types, but it’s also possible to register new methods usingaddTypeEqualityFunc()
:-
addTypeEqualityFunc
(typeobj, function)¶ Registers a type-specific method called by
assertEqual()
to check if two objects of exactly the same typeobj (not subclasses) compare equal. function must take two positional arguments and a third msg=None keyword argument just as assertEqual()
does. It must raise self.failureException(msg)
when inequality between the first two parameters is detected – possibly providing useful information and explaining the inequalities in details in the error message.3.1 版新加入.
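A hedged sketch of registering a comparer for a hypothetical Point type; the Point class and the assertPointEqual helper exist only for this example:

```python
import unittest

class Point:
    """Hypothetical value type used only for this sketch."""
    def __init__(self, x, y):
        self.x, self.y = x, y

class PointTest(unittest.TestCase):
    def setUp(self):
        # Register the comparer; it is consulted only when both
        # operands are exactly of type Point (not subclasses).
        self.addTypeEqualityFunc(Point, self.assertPointEqual)

    def assertPointEqual(self, first, second, msg=None):
        if (first.x, first.y) != (second.x, second.y):
            raise self.failureException(
                msg or '%r != %r as Points'
                % ((first.x, first.y), (second.x, second.y)))

    def test_equal_points(self):
        self.assertEqual(Point(1, 2), Point(1, 2))

    def test_unequal_points(self):
        with self.assertRaises(self.failureException):
            self.assertEqual(Point(1, 2), Point(3, 4))
```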
The list of type-specific methods automatically used by
assertEqual()
are summarized in the following table. Note that it’s usually not necessary to invoke these methods directly.

Method | Used to compare | New in
assertMultiLineEqual(a, b) | strings | 3.1
assertSequenceEqual(a, b) | sequences | 3.1
assertListEqual(a, b) | lists | 3.1
assertTupleEqual(a, b) | tuples | 3.1
assertSetEqual(a, b) | sets or frozensets | 3.1
assertDictEqual(a, b) | dicts | 3.1
-
assertMultiLineEqual
(first, second, msg=None)¶ Test that the multiline string first is equal to the string second. When not equal a diff of the two strings highlighting the differences will be included in the error message. This method is used by default when comparing strings with
assertEqual()
.3.1 版新加入.
-
assertSequenceEqual
(first, second, msg=None, seq_type=None)¶ Tests that two sequences are equal. If a seq_type is supplied, both first and second must be instances of seq_type or a failure will be raised. If the sequences are different an error message is constructed that shows the difference between the two.
This method is not called directly by
assertEqual()
, but it’s used to implement assertListEqual()
and assertTupleEqual()
.3.1 版新加入.
-
assertListEqual
(first, second, msg=None)¶ -
assertTupleEqual
(first, second, msg=None)¶ Tests that two lists or tuples are equal. If not, an error message is constructed that shows only the differences between the two. An error is also raised if either of the parameters are of the wrong type. These methods are used by default when comparing lists or tuples with
assertEqual()
.3.1 版新加入.
-
assertSetEqual
(first, second, msg=None)¶ Tests that two sets are equal. If not, an error message is constructed that lists the differences between the sets. This method is used by default when comparing sets or frozensets with
assertEqual()
.Fails if either of first or second does not have a
set.difference()
method.3.1 版新加入.
-
assertDictEqual
(first, second, msg=None)¶ Test that two dictionaries are equal. If not, an error message is constructed that shows the differences in the dictionaries. This method will be used by default to compare dictionaries in calls to
assertEqual()
.3.1 版新加入.
Finally the
TestCase
provides the following methods and attributes:-
fail
(msg=None)¶ Signals a test failure unconditionally, with msg or
None
for the error message.
-
failureException
¶ This class attribute gives the exception raised by the test method. If a test framework needs to use a specialized exception, possibly to carry additional information, it must subclass this exception in order to 「play fair」 with the framework. The initial value of this attribute is
AssertionError
.
-
longMessage
¶ This class attribute determines what happens when a custom failure message is passed as the msg argument to an assertXYY call that fails.
True
is the default value. In this case, the custom message is appended to the end of the standard failure message. When set to False
, the custom message replaces the standard message.The class setting can be overridden in individual test methods by assigning an instance attribute, self.longMessage, to
True
orFalse
before calling the assert methods.The class setting gets reset before each test call.
3.1 版新加入.
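A small sketch of both settings; the class-level value is the default True, and one test overrides it per-instance before calling the assert method. Names are illustrative:

```python
import unittest

class MessageExample(unittest.TestCase):
    longMessage = True   # the default: the custom msg is appended

    def test_appended_message(self):
        with self.assertRaises(self.failureException) as cm:
            self.assertEqual(1, 2, 'custom note')
        text = str(cm.exception)
        self.assertIn('1 != 2', text)       # standard message kept
        self.assertIn('custom note', text)  # custom message appended

    def test_replaced_message(self):
        self.longMessage = False            # per-test override
        with self.assertRaises(self.failureException) as cm:
            self.assertEqual(1, 2, 'custom note')
        # Only the custom message remains.
        self.assertEqual(str(cm.exception), 'custom note')
```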
-
maxDiff
¶ This attribute controls the maximum length of diffs output by assert methods that report diffs on failure. It defaults to 80*8 characters. Assert methods affected by this attribute are
assertSequenceEqual()
(including all the sequence comparison methods that delegate to it),assertDictEqual()
andassertMultiLineEqual()
.Setting
maxDiff
to None
means that there is no maximum length of diffs.3.2 版新加入.
Testing frameworks can use the following methods to collect information on the test:
-
countTestCases
()¶ Return the number of tests represented by this test object. For
TestCase
instances, this will always be1
.
-
defaultTestResult
()¶ Return an instance of the test result class that should be used for this test case class (if no other result instance is provided to the
run()
method).For
TestCase
instances, this will always be an instance ofTestResult
; subclasses ofTestCase
should override this as necessary.
-
id
()¶ Return a string identifying the specific test case. This is usually the full name of the test method, including the module and class name.
-
shortDescription
()¶ Returns a description of the test, or
None
if no description has been provided. The default implementation of this method returns the first line of the test method’s docstring, if available, orNone
.3.1 版更變: In 3.1 this was changed to add the test name to the short description even in the presence of a docstring. This caused compatibility issues with unittest extensions and adding the test name was moved to the
TextTestResult
in Python 3.2.
-
addCleanup
(function, *args, **kwargs)¶ Add a function to be called after
tearDown()
to cleanup resources used during the test. Functions will be called in reverse order to the order they are added (LIFO). They are called with any arguments and keyword arguments passed intoaddCleanup()
when they are added.If
setUp()
fails, meaning thattearDown()
is not called, then any cleanup functions added will still be called.3.1 版新加入.
-
doCleanups
()¶ This method is called unconditionally after
tearDown()
, or after setUp()
if setUp()
raises an exception.It is responsible for calling all the cleanup functions added by
addCleanup()
. If you need cleanup functions to be called prior to tearDown()
then you can call doCleanups()
yourself. doCleanups()
pops methods off the stack of cleanup functions one at a time, so it can be called at any time.3.1 版新加入.
-
-
class
unittest.
FunctionTestCase
(testFunc, setUp=None, tearDown=None, description=None)¶ This class implements the portion of the
TestCase
interface which allows the test runner to drive the test, but does not provide the methods which test code can use to check and report errors. This is used to create test cases using legacy test code, allowing it to be integrated into a unittest
-based test framework.
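A minimal sketch; legacy_upper_check stands in for pre-existing test code that predates unittest:

```python
import unittest

def legacy_upper_check():
    # Legacy test code: reports failure by raising an exception.
    assert 'abc'.upper() == 'ABC'

testcase = unittest.FunctionTestCase(
    legacy_upper_check, description='legacy upper() check')
```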
27.4.8.1.1. Deprecated aliases¶
For historical reasons, some of the TestCase
methods had one or more
aliases that are now deprecated. The following table lists the correct names
along with their deprecated aliases:
Method Name | Deprecated alias | Deprecated alias
assertEqual() | failUnlessEqual | assertEquals
assertNotEqual() | failIfEqual | assertNotEquals
assertTrue() | failUnless | assert_
assertFalse() | failIf |
assertRaises() | failUnlessRaises |
assertAlmostEqual() | failUnlessAlmostEqual | assertAlmostEquals
assertNotAlmostEqual() | failIfAlmostEqual | assertNotAlmostEquals
assertRegex() | | assertRegexpMatches
assertNotRegex() | | assertNotRegexpMatches
assertRaisesRegex() | | assertRaisesRegexp

3.1 版後已棄用: the fail* aliases listed in the second column.
3.2 版後已棄用: the assert* aliases listed in the third column.
3.2 版後已棄用:
assertRegexpMatches
andassertRaisesRegexp
have been renamed toassertRegex()
andassertRaisesRegex()
.3.5 版後已棄用: the
assertNotRegexpMatches
name in favor ofassertNotRegex()
.
27.4.8.2. Grouping tests¶
-
class
unittest.
TestSuite
(tests=())¶ This class represents an aggregation of individual test cases and test suites. The class presents the interface needed by the test runner to allow it to be run as any other test case. Running a
TestSuite
instance is the same as iterating over the suite, running each test individually.If tests is given, it must be an iterable of individual test cases or other test suites that will be used to build the suite initially. Additional methods are provided to add test cases and suites to the collection later on.
TestSuite
objects behave much like TestCase
objects, except they do not actually implement a test. Instead, they are used to aggregate tests into groups of tests that should be run together. Some additional methods are available to add tests toTestSuite
instances:-
addTests
(tests)¶ Add all the tests from an iterable of
TestCase
andTestSuite
instances to this test suite.This is equivalent to iterating over tests, calling
addTest()
for each element.
TestSuite
shares the following methods withTestCase
:-
run
(result)¶ Run the tests associated with this suite, collecting the result into the test result object passed as result. Note that unlike
TestCase.run()
, TestSuite.run()
requires the result object to be passed in.
-
debug
()¶ Run the tests associated with this suite without collecting the result. This allows exceptions raised by the test to be propagated to the caller and can be used to support running tests under a debugger.
-
countTestCases
()¶ Return the number of tests represented by this test object, including all individual tests and sub-suites.
-
__iter__
()¶ Tests grouped by a
TestSuite
are always accessed by iteration. Subclasses can lazily provide tests by overriding__iter__()
. Note that this method may be called several times on a single suite (for example when counting tests or comparing for equality) so the tests returned by repeated iterations beforeTestSuite.run()
must be the same for each call iteration. AfterTestSuite.run()
, callers should not rely on the tests returned by this method unless the caller uses a subclass that overridesTestSuite._removeTestAtIndex()
to preserve test references.3.2 版更變: In earlier versions the
TestSuite
accessed tests directly rather than through iteration, so overriding__iter__()
wasn’t sufficient for providing tests.3.4 版更變: In earlier versions the
TestSuite
held references to eachTestCase
afterTestSuite.run()
. Subclasses can restore that behavior by overridingTestSuite._removeTestAtIndex()
.
In the typical usage of a
TestSuite
object, the run()
method is invoked by a TestRunner
rather than by the end-user test harness.-
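The aggregation described above can be sketched as follows; MathTests and the suite layout are illustrative:

```python
import unittest

class MathTests(unittest.TestCase):
    def test_add(self):
        self.assertEqual(1 + 1, 2)

    def test_mul(self):
        self.assertEqual(2 * 3, 6)

# Aggregate an individual case and a nested suite.
suite = unittest.TestSuite()
suite.addTest(MathTests('test_add'))
suite.addTests([unittest.TestSuite([MathTests('test_mul')])])

# Unlike TestCase.run(), the result object must be passed in.
result = unittest.TestResult()
suite.run(result)
```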
27.4.8.3. Loading and running tests¶
-
class
unittest.
TestLoader
¶ The
TestLoader
class is used to create test suites from classes and modules. Normally, there is no need to create an instance of this class; the unittest
module provides an instance that can be shared as unittest.defaultTestLoader
. Using a subclass or instance, however, allows customization of some configurable properties.TestLoader
objects have the following attributes:-
errors
A list of the non-fatal errors encountered while loading tests. Not reset by the loader at any point. Fatal errors are signalled by the relevant method raising an exception to the caller. Non-fatal errors are also indicated by a synthetic test that will raise the original error when run.
3.5 版新加入.
TestLoader
objects have the following methods:-
loadTestsFromTestCase
(testCaseClass)¶ Return a suite of all test cases contained in the
TestCase
-derived testCaseClass
.A test case instance is created for each method named by
getTestCaseNames()
. By default these are the method names beginning with test
. If getTestCaseNames()
returns no methods, but the runTest()
method is implemented, a single test case is created for that method instead.
-
loadTestsFromModule
(module, pattern=None)¶ Return a suite of all test cases contained in the given module. This method searches module for classes derived from
TestCase
and creates an instance of the class for each test method defined for the class.備註
While using a hierarchy of
TestCase
-derived classes can be convenient in sharing fixtures and helper functions, defining test methods on base classes that are not intended to be instantiated directly does not play well with this method. Doing so, however, can be useful when the fixtures are different and defined in subclasses.If a module provides a
load_tests
function it will be called to load the tests. This allows modules to customize test loading. This is the load_tests protocol. The pattern argument is passed as the third argument to load_tests
.3.2 版更變: Support for
load_tests
added.3.5 版更變: The undocumented and unofficial use_load_tests default argument is deprecated and ignored, although it is still accepted for backward compatibility. The method also now accepts a keyword-only argument pattern which is passed to
load_tests
as the third argument.
-
loadTestsFromName
(name, module=None)¶ Return a suite of all test cases given a string specifier.
The specifier name is a 「dotted name」 that may resolve either to a module, a test case class, a test method within a test case class, a
TestSuite
instance, or a callable object which returns aTestCase
orTestSuite
instance. These checks are applied in the order listed here; that is, a method on a possible test case class will be picked up as 「a test method within a test case class」, rather than 「a callable object」.For example, if you have a module
SampleTests
containing aTestCase
-derived classSampleTestCase
with three test methods (test_one()
,test_two()
, andtest_three()
), the specifier'SampleTests.SampleTestCase'
would cause this method to return a suite which will run all three test methods. Using the specifier'SampleTests.SampleTestCase.test_two'
would cause it to return a test suite which will run only thetest_two()
test method. The specifier can refer to modules and packages which have not been imported; they will be imported as a side-effect.The method optionally resolves name relative to the given module.
3.5 版更變: If an
ImportError
orAttributeError
occurs while traversing name then a synthetic test that raises that error when run will be returned. These errors are included in the errors accumulated by self.errors.
-
loadTestsFromNames
(names, module=None)¶ Similar to
loadTestsFromName()
, but takes a sequence of names rather than a single name. The return value is a test suite which supports all the tests defined for each name.
-
getTestCaseNames
(testCaseClass)¶ Return a sorted sequence of method names found within testCaseClass; this should be a subclass of
TestCase
.
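A short sketch of the loader behaviour described above; SampleTests and its method names are illustrative:

```python
import unittest

class SampleTests(unittest.TestCase):
    def test_beta(self):
        pass

    def test_alpha(self):
        pass

    def helper(self):
        pass  # no 'test' prefix, so not collected

loader = unittest.TestLoader()
# Names come back sorted, and only 'test'-prefixed methods qualify.
names = loader.getTestCaseNames(SampleTests)
suite = loader.loadTestsFromTestCase(SampleTests)
```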
-
discover
(start_dir, pattern='test*.py', top_level_dir=None)¶ Find all the test modules by recursing into subdirectories from the specified start directory, and return a TestSuite object containing them. Only test files that match pattern will be loaded. (Using shell style pattern matching.) Only module names that are importable (i.e. are valid Python identifiers) will be loaded.
All test modules must be importable from the top level of the project. If the start directory is not the top level directory then the top level directory must be specified separately.
If importing a module fails, for example due to a syntax error, then this will be recorded as a single error and discovery will continue. If the import failure is due to
SkipTest
being raised, it will be recorded as a skip instead of an error.If a package (a directory containing a file named
__init__.py
) is found, the package will be checked for aload_tests
function. If this exists then it will be called package.load_tests(loader, tests, pattern)
. Test discovery takes care to ensure that a package is only checked for tests once during an invocation, even if the load_tests function itself calls loader.discover
.If
load_tests
exists then discovery does not recurse into the package; load_tests
is responsible for loading all tests in the package. The pattern is deliberately not stored as a loader attribute so that packages can continue discovery themselves. top_level_dir is stored so
load_tests
does not need to pass this argument in to loader.discover()
.start_dir can be a dotted module name as well as a directory.
3.2 版新加入.
3.4 版更變: Modules that raise
SkipTest
on import are recorded as skips, not errors. Discovery works for namespace packages. Paths are sorted before being imported so that execution order is the same even if the underlying file system’s ordering is not dependent on file name.3.5 版更變: Found packages are now checked for
load_tests
regardless of whether their path matches pattern, because it is impossible for a package name to match the default pattern.
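A hedged sketch of discovery against a throwaway directory created just for the example; all file, directory, and class names are illustrative:

```python
import os
import tempfile
import textwrap
import unittest

# Create a throwaway project directory holding one test module.
project_dir = tempfile.mkdtemp()
module_path = os.path.join(project_dir, 'test_discover_demo.py')
with open(module_path, 'w') as f:
    f.write(textwrap.dedent('''\
        import unittest

        class DemoTest(unittest.TestCase):
            def test_ok(self):
                self.assertTrue(True)
    '''))

loader = unittest.TestLoader()
# start_dir doubles as the top-level directory here because the
# test module sits at the top of the project.
suite = loader.discover(start_dir=project_dir, pattern='test*.py')
```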
The following attributes of a
TestLoader
can be configured either by subclassing or assignment on an instance:-
testMethodPrefix
¶ String giving the prefix of method names which will be interpreted as test methods. The default value is
'test'
.This affects
getTestCaseNames()
and all theloadTestsFrom*()
methods.
-
sortTestMethodsUsing
¶ Function to be used to compare method names when sorting them in
getTestCaseNames()
and all theloadTestsFrom*()
methods.
-
suiteClass
¶ Callable object that constructs a test suite from a list of tests. No methods on the resulting object are needed. The default value is the
TestSuite
class.This affects all the
loadTestsFrom*()
methods.
-
testNamePatterns
¶ List of Unix shell-style wildcard test name patterns that test methods have to match to be included in test suites (see
-v
option).If this attribute is not
None
(the default), all test methods to be included in test suites must match one of the patterns in this list. Note that matches are always performed using fnmatch.fnmatchcase()
, so unlike patterns passed to the -v
option, simple substring patterns will have to be converted using *
wildcards.This affects all the
loadTestsFrom*()
methods.3.7 版新加入.
-
-
class
unittest.
TestResult
¶ This class is used to compile information about which tests have succeeded and which have failed.
A
TestResult
object stores the results of a set of tests. The TestCase
and TestSuite
classes ensure that results are properly recorded; test authors do not need to worry about recording the outcome of tests.Testing frameworks built on top of
unittest
may want access to the TestResult
object generated by running a set of tests for reporting purposes; a TestResult
instance is returned by the TestRunner.run()
method for this purpose.TestResult
instances have the following attributes that will be of interest when inspecting the results of running a set of tests:-
errors
¶ A list containing 2-tuples of
TestCase
instances and strings holding formatted tracebacks. Each tuple represents a test which raised an unexpected exception.
-
failures
¶ A list containing 2-tuples of
TestCase
instances and strings holding formatted tracebacks. Each tuple represents a test where a failure was explicitly signalled using theTestCase.assert*()
methods.
-
skipped
¶ A list containing 2-tuples of
TestCase
instances and strings holding the reason for skipping the test.3.1 版新加入.
-
expectedFailures
¶ A list containing 2-tuples of
TestCase
instances and strings holding formatted tracebacks. Each tuple represents an expected failure of the test case.
-
unexpectedSuccesses
¶ A list containing
TestCase
instances that were marked as expected failures, but succeeded.
-
testsRun
¶ The total number of tests run so far.
-
buffer
¶ If set to true,
sys.stdout
and sys.stderr
will be buffered in between startTest()
and stopTest()
being called. Collected output will only be echoed onto the real sys.stdout
and sys.stderr
if the test fails or errors. Any output is also attached to the failure / error message.3.2 版新加入.
-
failfast
¶ If set to true
stop()
will be called on the first failure or error, halting the test run.3.2 版新加入.
-
tb_locals
¶ If set to true then local variables will be shown in tracebacks.
3.5 版新加入.
-
wasSuccessful
()¶ Return
True
if all tests run so far have passed, otherwise returnsFalse
.3.4 版更變: Returns
False
if there were anyunexpectedSuccesses
from tests marked with theexpectedFailure()
decorator.
-
stop
()¶ This method can be called to signal that the set of tests being run should be aborted by setting the
shouldStop
attribute toTrue
.TestRunner
objects should respect this flag and return without running any additional tests.For example, this feature is used by the
TextTestRunner
class to stop the test framework when the user signals an interrupt from the keyboard. Interactive tools which provideTestRunner
implementations can use this in a similar manner.
The following methods of the
TestResult
class are used to maintain the internal data structures, and may be extended in subclasses to support additional reporting requirements. This is particularly useful in building tools which support interactive reporting while tests are being run.-
startTest
(test)¶ Called when the test case test is about to be run.
-
stopTest
(test)¶ Called after the test case test has been executed, regardless of the outcome.
-
startTestRun
()¶ Called once before any tests are executed.
3.1 版新加入.
-
stopTestRun
()¶ Called once after all tests are executed.
3.1 版新加入.
-
addError
(test, err)¶ Called when the test case test raises an unexpected exception. err is a tuple of the form returned by
sys.exc_info()
:(type, value, traceback)
.The default implementation appends a tuple
(test, formatted_err)
to the instance’serrors
attribute, where formatted_err is a formatted traceback derived from err.
-
addFailure
(test, err)¶ Called when the test case test signals a failure. err is a tuple of the form returned by
sys.exc_info()
:(type, value, traceback)
.The default implementation appends a tuple
(test, formatted_err)
to the instance’sfailures
attribute, where formatted_err is a formatted traceback derived from err.
-
addSuccess
(test)¶ Called when the test case test succeeds.
The default implementation does nothing.
-
addSkip
(test, reason)¶ Called when the test case test is skipped. reason is the reason the test gave for skipping.
The default implementation appends a tuple
(test, reason)
to the instance’sskipped
attribute.
-
addExpectedFailure
(test, err)¶ Called when the test case test fails, but was marked with the
expectedFailure()
decorator.The default implementation appends a tuple
(test, formatted_err)
to the instance’sexpectedFailures
attribute, where formatted_err is a formatted traceback derived from err.
-
addUnexpectedSuccess
(test)¶ Called when the test case test was marked with the
expectedFailure()
decorator, but succeeded.The default implementation appends the test to the instance’s
unexpectedSuccesses
attribute.
-
addSubTest
(test, subtest, outcome)¶ Called when a subtest finishes. test is the test case corresponding to the test method. subtest is a custom
TestCase
instance describing the subtest.If outcome is
None
, the subtest succeeded. Otherwise, it failed with an exception where outcome is a tuple of the form returned bysys.exc_info()
:(type, value, traceback)
.The default implementation does nothing when the outcome is a success, and records subtest failures as normal failures.
3.4 版新加入.
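Extending TestResult for additional reporting can be sketched like this; LoggingResult and its events list are illustrative, and each override calls up to the base implementation so the standard bookkeeping still happens:

```python
import unittest

class LoggingResult(unittest.TestResult):
    """Illustrative TestResult subclass that records an event log."""
    def __init__(self):
        super().__init__()
        self.events = []

    def startTest(self, test):
        super().startTest(test)
        self.events.append(('start', test.id()))

    def addSuccess(self, test):
        super().addSuccess(test)
        self.events.append(('ok', test.id()))

class Passing(unittest.TestCase):
    def test_pass(self):
        pass

result = LoggingResult()
Passing('test_pass').run(result)
```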
-
-
class
unittest.
TextTestResult
(stream, descriptions, verbosity)¶ A concrete implementation of
TestResult
used by theTextTestRunner
.3.2 版新加入: This class was previously named
_TextTestResult
. The old name still exists as an alias but is deprecated.
-
unittest.
defaultTestLoader
¶ Instance of the
TestLoader
class intended to be shared. If no customization of theTestLoader
is needed, this instance can be used instead of repeatedly creating new instances.
-
class
unittest.
TextTestRunner
(stream=None, descriptions=True, verbosity=1, failfast=False, buffer=False, resultclass=None, warnings=None, *, tb_locals=False)¶ A basic test runner implementation that outputs results to a stream. If stream is
None
, the default,sys.stderr
is used as the output stream. This class has a few configurable parameters, but is essentially very simple. Graphical applications which run test suites should provide alternate implementations. Such implementations should accept**kwargs
as the interface to construct runners changes when features are added to unittest.By default this runner shows
DeprecationWarning
,PendingDeprecationWarning
,ResourceWarning
andImportWarning
even if they are ignored by default. Deprecation warnings caused by deprecated unittest methods are also special-cased and, when the warning filters are 'default'
or 'always'
, they will appear only once per-module, in order to avoid too many warning messages. This behavior can be overridden using Python’s -Wd
or -Wa
options (see Warning control) and leaving warnings to None
.3.2 版更變: Added the
warnings
argument.3.2 版更變: The default stream is set to
sys.stderr
at instantiation time rather than import time.3.5 版更變: Added the tb_locals parameter.
-
_makeResult
()¶ This method returns the instance of
TestResult
used byrun()
. It is not intended to be called directly, but can be overridden in subclasses to provide a customTestResult
._makeResult()
instantiates the class or callable passed in theTextTestRunner
constructor as theresultclass
argument. It defaults toTextTestResult
if noresultclass
is provided. The result class is instantiated with the following arguments:stream, descriptions, verbosity
-
run
(test)¶ This method is the main public interface to the TextTestRunner. This method takes a
TestSuite
orTestCase
instance. ATestResult
is created by calling_makeResult()
and the test(s) are run and the results printed to stdout.
-
-
unittest.
main
(module='__main__', defaultTest=None, argv=None, testRunner=None, testLoader=unittest.defaultTestLoader, exit=True, verbosity=1, failfast=None, catchbreak=None, buffer=None, warnings=None)¶ A command-line program that loads a set of tests from module and runs them; this is primarily for making test modules conveniently executable. The simplest use for this function is to include the following line at the end of a test script:
if __name__ == '__main__':
    unittest.main()
You can run tests with more detailed information by passing in the verbosity argument:
if __name__ == '__main__':
    unittest.main(verbosity=2)
The defaultTest argument is either the name of a single test or an iterable of test names to run if no test names are specified via argv. If not specified or
None
and no test names are provided via argv, all tests found in module are run.The argv argument can be a list of options passed to the program, with the first element being the program name. If not specified or
None
, the values ofsys.argv
are used.The testRunner argument can either be a test runner class or an already created instance of it. By default
main
callssys.exit()
with an exit code indicating success or failure of the tests run.The testLoader argument has to be a
TestLoader
instance, and defaults todefaultTestLoader
.main
supports being used from the interactive interpreter by passing in the argumentexit=False
. This displays the result on standard output without callingsys.exit()
>>> from unittest import main
>>> main(module='test_module', exit=False)
The failfast, catchbreak and buffer parameters have the same effect as the same-name command-line options.
The warnings argument specifies the warning filter that should be used while running the tests. If it’s not specified, it will remain
None
if a-W
option is passed to python (see Warning control), otherwise it will be set to'default'
.Calling
main
actually returns an instance of theTestProgram
class. This stores the result of the tests run as theresult
attribute.3.1 版更變: The exit parameter was added.
3.2 版更變: The verbosity, failfast, catchbreak, buffer and warnings parameters were added.
3.4 版更變: The defaultTest parameter was changed to also accept an iterable of test names.
27.4.8.3.1. load_tests Protocol¶
3.2 版新加入.
Modules or packages can customize how tests are loaded from them during normal
test runs or test discovery by implementing a function called load_tests
.
If a test module defines load_tests
it will be called by
TestLoader.loadTestsFromModule()
with the following arguments:
load_tests(loader, standard_tests, pattern)
where pattern is passed straight through from loadTestsFromModule
. It
defaults to None
.
It should return a TestSuite
.
loader is the instance of TestLoader
doing the loading.
standard_tests are the tests that would be loaded by default from the
module. It is common for test modules to only want to add or remove tests
from the standard set of tests.
The third argument is used when loading packages as part of test discovery.
A typical load_tests
function that loads tests from a specific set of
TestCase
classes may look like:
import unittest

test_cases = (TestCase1, TestCase2, TestCase3)

def load_tests(loader, tests, pattern):
    suite = unittest.TestSuite()
    for test_class in test_cases:
        tests = loader.loadTestsFromTestCase(test_class)
        suite.addTests(tests)
    return suite
If discovery is started in a directory containing a package, either from the
command line or by calling TestLoader.discover()
, then the package
__init__.py
will be checked for load_tests
. If that function does
not exist, discovery will recurse into the package as though it were just
another directory. Otherwise, discovery of the package’s tests will be left up
to load_tests
which is called with the following arguments:
load_tests(loader, standard_tests, pattern)
This should return a TestSuite
representing all the tests
from the package. (standard_tests
will only contain tests
collected from __init__.py
.)
Because the pattern is passed into load_tests
the package is free to
continue (and potentially modify) test discovery. A 『do nothing』
load_tests
function for a test package would look like:
import os.path

def load_tests(loader, standard_tests, pattern):
    # top level directory cached on loader instance
    this_dir = os.path.dirname(__file__)
    package_tests = loader.discover(start_dir=this_dir, pattern=pattern)
    standard_tests.addTests(package_tests)
    return standard_tests
3.5 版更變: Discovery no longer checks package names for matching pattern due to the impossibility of package names matching the default pattern.
27.4.9. Class and Module Fixtures¶
Class and module level fixtures are implemented in TestSuite
. When
the test suite encounters a test from a new class then tearDownClass()
from the previous class (if there is one) is called, followed by
setUpClass()
from the new class.
Similarly if a test is from a different module from the previous test then
tearDownModule
from the previous module is run, followed by
setUpModule
from the new module.
After all the tests have run the final tearDownClass
and
tearDownModule
are run.
Note that shared fixtures do not play well with [potential] features like test parallelization and they break test isolation. They should be used with care.
The default ordering of tests created by the unittest test loaders is to group
all tests from the same modules and classes together. This will lead to
setUpClass
/ setUpModule
(etc) being called exactly once per class and
module. If you randomize the order, so that tests from different modules and
classes are adjacent to each other, then these shared fixture functions may be
called multiple times in a single test run.
Shared fixtures are not intended to work with suites with non-standard
ordering. A BaseTestSuite
still exists for frameworks that don’t want to
support shared fixtures.
If there are any exceptions raised during one of the shared fixture functions
the test is reported as an error. Because there is no corresponding test
instance an _ErrorHolder
object (that has the same interface as a
TestCase
) is created to represent the error. If you are just using
the standard unittest test runner then this detail doesn’t matter, but if you
are a framework author it may be relevant.
27.4.9.1. setUpClass and tearDownClass¶
These must be implemented as class methods:
import unittest

class Test(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        cls._connection = createExpensiveConnectionObject()

    @classmethod
    def tearDownClass(cls):
        cls._connection.destroy()
If you want the setUpClass
and tearDownClass
on base classes called
then you must call up to them yourself. The implementations in
TestCase
are empty.
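Calling up to the base class can be sketched as follows; BaseFixture and DerivedTests are illustrative:

```python
import unittest

class BaseFixture(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        cls.resource = 'shared'

class DerivedTests(BaseFixture):
    @classmethod
    def setUpClass(cls):
        # Without this explicit call, cls.resource is never set; the
        # implementation in TestCase itself is empty, so the chain
        # ends safely.
        super().setUpClass()
        cls.extra = cls.resource + '-extra'

    def test_fixture(self):
        self.assertEqual(self.extra, 'shared-extra')
```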
If an exception is raised during a setUpClass then the tests in the class are not run and the tearDownClass is not run. Skipped classes will not have setUpClass or tearDownClass run. If the exception is a SkipTest exception then the class will be reported as having been skipped instead of as an error.
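A sketch of skipping a whole class from setUpClass; the availability check here is hypothetical:

```python
import io
import unittest

class OptionalFeatureTest(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        # Hypothetical guard: pretend an optional dependency is missing.
        feature_available = False
        if not feature_available:
            raise unittest.SkipTest("optional feature not available")

    def test_feature(self):
        self.fail("never reached")

suite = unittest.TestLoader().loadTestsFromTestCase(OptionalFeatureTest)
result = unittest.TextTestRunner(stream=io.StringIO()).run(suite)
# The class is reported as skipped, not as an error.
```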
27.4.9.2. setUpModule and tearDownModule¶
These should be implemented as functions:
def setUpModule():
    createConnection()

def tearDownModule():
    closeConnection()
If an exception is raised in a setUpModule then none of the tests in the module will be run and the tearDownModule will not be run. If the exception is a SkipTest exception then the module will be reported as having been skipped instead of as an error.
27.4.10. Signal Handling¶
New in version 3.2.
The -c/--catch command-line option to unittest, along with the catchbreak parameter to unittest.main(), provide more friendly handling of control-C during a test run. With catch break behavior enabled control-C will allow the currently running test to complete, and the test run will then end and report all the results so far. A second control-c will raise a KeyboardInterrupt in the usual way.
The control-c handling signal handler attempts to remain compatible with code or tests that install their own signal.SIGINT handler. If the unittest handler is called but isn't the installed signal.SIGINT handler, i.e. it has been replaced by the system under test and delegated to, then it calls the default handler. This will normally be the expected behavior by code that replaces an installed handler and delegates to it. For individual tests that need unittest control-c handling disabled the removeHandler() decorator can be used.
There are a few utility functions for framework authors to enable control-c handling functionality within test frameworks.
- unittest.installHandler()¶
Install the control-c handler. When a signal.SIGINT is received (usually in response to the user pressing control-c) all registered results have stop() called.
- unittest.registerResult(result)¶
Register a TestResult object for control-c handling. Registering a result stores a weak reference to it, so it doesn't prevent the result from being garbage collected.
Registering a TestResult object has no side-effects if control-c handling is not enabled, so test frameworks can unconditionally register all results they create independently of whether or not handling is enabled.
- unittest.removeResult(result)¶
Remove a registered result. Once a result has been removed then stop() will no longer be called on that result object in response to a control-c.
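A minimal sketch of the registration API. Since registering has no side-effects while control-c handling is not enabled (as noted above), this is safe to run without installing the handler:

```python
import unittest

result = unittest.TestResult()

# Registering stores only a weak reference to the result; harmless
# when the control-c handler is not installed.
unittest.registerResult(result)

# removeResult reports whether the result was actually registered.
print(unittest.removeResult(result))  # True
print(unittest.removeResult(result))  # False: already removed
```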
- unittest.removeHandler(function=None)¶
When called without arguments this function removes the control-c handler if it has been installed. This function can also be used as a test decorator to temporarily remove the handler while the test is being executed:

@unittest.removeHandler
def test_signal_handling(self):
    ...