Running Pytest from the Command Line


Test code

from collections import namedtuple
Task = namedtuple('Task', ['summary', 'owner', 'done', 'id'])
# Give every Task field a default value via __new__.__defaults__
Task.__new__.__defaults__ = (None, None, False, None)


def test_default():
    """A Task created with no arguments equals one created with the
    defaults set by Task.__new__.__defaults__ = (None, None, False, None)."""
    t1 = Task()
    t2 = Task(None, None, False, None)
    assert t1 == t2


def test_member_access():
    """Fields can be accessed by name; unset fields keep their defaults."""
    t = Task('buy milk', 'brian')
    assert t.summary == 'buy milk'
    assert t.owner == 'brian'
    assert (t.done, t.id) == (False, None)


def test_asdict():
    """_asdict() converts a Task to an equivalent dict."""
    t_task = Task('do something', 'okken', True, 21)
    t_dict = t_task._asdict()
    expected_dict = {'summary': 'do something',
                     'owner': 'okken',
                     'done': True,
                     'id': 21}
    assert t_dict == expected_dict


def test_replace():
    """_replace() returns a copy with the given fields updated."""
    t_before_replace = Task('finish book', 'brian', False)
    t_after_replace = t_before_replace._replace(id=10, done=True)
    t_expected = Task('finish book', 'brian', True, 10)
    assert t_after_replace == t_expected

Pytest execution rules


The pytest command accepts optional options and one or more file or directory arguments. If neither is given, pytest searches the current directory and its subdirectories for test files and runs the tests it finds there. If one or more file or directory names are given, pytest collects and runs all the tests under them, walking each directory and its subdirectories recursively.
The process by which pytest locates test files and test cases is called test discovery. Tests are discovered when they follow these naming rules:
  • Test files should be named test_(something).py or (something)_test.py
  • Test functions and test class methods should be named test_(something)
  • Test classes should be named Test(something)
Running the code file above produces the following result:
    D:\PythonPrograms\PythonPytest\TestScripts>pytest test_two.py
    ============================= test session starts =============================
    platform win32 -- Python 3.7.2, pytest-4.0.2, py-1.8.0, pluggy-0.12.0
    rootdir: D:\PythonPrograms\PythonPytest\TestScripts, inifile:
    plugins: allure-adaptor-1.7.10
    collected 4 items
    
    test_two.py ....                                                         [100%]
    
    ========================== 4 passed in 0.04 seconds ===========================
    
  • The first line of the output shows the operating system, Python version, and the versions of pytest and its core packages: platform win32 -- Python 3.7.2, pytest-4.0.2, py-1.8.0, pluggy-0.12.0
  • The second line shows the root directory used for collection and the configuration file; there is no configuration file in this example, so inifile is empty: rootdir: D:\PythonPrograms\PythonPytest\TestScripts, inifile:
  • The third line lists the installed pytest plugins
  • The fourth line, collected 4 items, means four test functions were found
  • test_two.py .... shows the test file name; each dot after it marks a passing test. Besides the dot, a test may show F (failure), E (error, an unexpected exception), s (skip), x (xfail, expected to fail and did fail), or X (xpass, expected to fail but actually passed, which does not match the expectation)
  • 4 passed in 0.04 seconds shows the overall result and the run time
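The discovery rules above can be modeled in a few lines of Python. This is a simplified sketch following the article's naming conventions, not pytest's actual collector (whose patterns are configurable):

```python
from fnmatch import fnmatch

def is_test_file(filename):
    """Match test_(something).py or (something)_test.py."""
    return fnmatch(filename, "test_*.py") or fnmatch(filename, "*_test.py")

def is_test_function(name):
    """Match functions and methods named test_(something)."""
    return name.startswith("test_")

def is_test_class(name):
    """Match classes named Test(something)."""
    return name.startswith("Test")

print(is_test_file("test_two.py"))       # True
print(is_test_file("mytests.py"))        # False
print(is_test_function("test_replace"))  # True
print(is_test_class("TestUM"))           # True
```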

    Running a single test case


    Use the command pytest -v path/filename::test_function_name; the result is as follows:
    D:\PythonPrograms\Python_Pytest\TestScripts>pytest -v test_two.py::test_replace
    ============================= test session starts =============================
    platform win32 -- Python 3.7.2, pytest-4.0.2, py-1.8.0, pluggy-0.12.0 -- c:\python37\python.exe
    cachedir: .pytest_cache
    rootdir: D:\PythonPrograms\Python_Pytest\TestScripts, inifile:
    plugins: allure-adaptor-1.7.10
    collected 1 item
    
    test_two.py::test_replace PASSED                                         [100%]
    
    ========================== 1 passed in 0.04 seconds ===========================
    

    The command-line forms are as follows. To run a test function within a module:
    pytest test_mod.py::test_func
    

    To run a test method of a class within a module:
    pytest test_mod.py::TestClass::test_method
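Both forms use pytest's node ID syntax: segments separated by "::" naming the file, an optional class, and the test. A small illustrative parser (a hypothetical helper for explanation, not part of pytest's API):

```python
def parse_node_id(node_id):
    """Split a pytest node ID like 'test_mod.py::TestClass::test_method'
    into (file, class_or_None, test_name)."""
    parts = node_id.split("::")
    if len(parts) == 2:   # module-level test function
        return parts[0], None, parts[1]
    if len(parts) == 3:   # method on a test class
        return parts[0], parts[1], parts[2]
    raise ValueError("unexpected node ID: %r" % node_id)

print(parse_node_id("test_mod.py::test_func"))
print(parse_node_id("test_mod.py::TestClass::test_method"))
```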
    

    Running a single test module

    pytest test_module.py
    

    Running all tests in a directory

    pytest test/
    

    The --collect-only option


    Before running a batch of tests you may want to know which cases would actually run. The --collect-only option covers this scenario: it collects the tests without executing them, as the following run shows:
    D:\PythonPrograms\Python_Pytest\TestScripts>pytest --collect-only
    ============================= test session starts =============================
    platform win32 -- Python 3.7.2, pytest-4.0.2, py-1.8.0, pluggy-0.12.0
    rootdir: D:\PythonPrograms\Python_Pytest\TestScripts, inifile:
    plugins: allure-adaptor-1.7.10
    collected 17 items
    
    (pytest lists each collected item here as <Module ...>, <Class ...> and
     <Function ...> entries, without running any of them)
    
    ======================== no tests ran in 0.09 seconds =========================
    

    The -k option


    This option selects the tests to run with an expression. It is handy when a test name is unique, or when several test names share a prefix or suffix. The run below shows the result:
    D:\PythonPrograms\Python_Pytest\TestScripts>pytest -k "asdict or default" --collect-only
    ============================= test session starts =============================
    platform win32 -- Python 3.7.2, pytest-4.0.2, py-1.8.0, pluggy-0.12.0
    rootdir: D:\PythonPrograms\Python_Pytest\TestScripts, inifile:
    plugins: allure-adaptor-1.7.10
    collected 17 items / 15 deselected
    
    (the two selected items, test_default and test_asdict, are listed here as
     <Function ...> entries under <Module test_two.py>)
    
    ======================== 15 deselected in 0.06 seconds ========================
    

    The output shows that combining -k with --collect-only lets you check which test methods a given expression would select. Next, remove --collect-only from the command line and use -k alone to run test_default and test_asdict:
    D:\PythonPrograms\Python_Pytest\TestScripts>pytest -k "asdict or default"
    ============================= test session starts =============================
    platform win32 -- Python 3.7.2, pytest-4.0.2, py-1.8.0, pluggy-0.12.0
    rootdir: D:\PythonPrograms\Python_Pytest\TestScripts, inifile:
    plugins: allure-adaptor-1.7.10
    collected 17 items / 15 deselected
    
    test_two.py ..                                                           [100%]
    
    =================== 2 passed, 15 deselected in 0.07 seconds ===================
    

    If you name your test cases carefully, -k lets you run a whole group of them with a single expression; expressions may contain and, or, and not.
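Conceptually, -k treats each identifier in the expression as a substring match against the test name and then evaluates the resulting boolean expression. A rough sketch of that idea (a toy model; pytest's real matcher also considers class names and extra keywords):

```python
def matches_k(expression, test_name):
    """Toy model of -k: every plain identifier in the expression becomes
    True if it is a substring of the test name; the and/or/not expression
    that remains is then evaluated. Tokens must be whitespace-separated."""
    evaluated = []
    for tok in expression.split():
        if tok in ("and", "or", "not"):
            evaluated.append(tok)
        else:
            evaluated.append(str(tok in test_name))
    return eval(" ".join(evaluated))

print(matches_k("asdict or default", "test_asdict"))   # True
print(matches_k("asdict or default", "test_replace"))  # False
print(matches_k("not default", "test_default"))        # False
```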

    The -m option


    This option tags and groups tests so that only the marked cases run, which gives you a way to execute a subset of the suite. As shown in the following code, add a mark to the two test methods from earlier:
    
    import pytest

    @pytest.mark.run_these_cases
    def test_member_access():
        """Fields can be accessed by name; unset fields keep their defaults."""
        t = Task('buy milk', 'brian')
        assert t.summary == 'buy milk'
        assert t.owner == 'brian'
        assert (t.done, t.id) == (False, None)
    
    @pytest.mark.run_these_cases
    def test_asdict():
        """_asdict() converts a Task to an equivalent dict."""
        t_task = Task('do something', 'okken', True, 21)
        t_dict = t_task._asdict()
        expected_dict = {'summary': 'do something',
                         'owner': 'okken',
                         'done': True,
                         'id': 21}
        assert t_dict == expected_dict
    

    Run the command pytest -v -m run_these_cases; the result is as follows:
    D:\PythonPrograms\Python_Pytest\TestScripts>pytest -v -m run_these_cases
    ============================= test session starts =============================
    platform win32 -- Python 3.7.2, pytest-4.0.2, py-1.8.0, pluggy-0.12.0 -- c:\python37\python.exe
    cachedir: .pytest_cache
    rootdir: D:\PythonPrograms\Python_Pytest\TestScripts, inifile:
    plugins: allure-adaptor-1.7.10
    collected 17 items / 15 deselected
    
    test_two.py::test_member_access PASSED                                   [ 50%]
    test_two.py::test_asdict PASSED                                          [100%]
    
    =================== 2 passed, 15 deselected in 0.07 seconds ===================
    
    

    The -m option also accepts an expression containing several mark names, for example -m "mark1 and mark2", -m "mark1 and not mark2", or -m "mark1 or mark2".
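The mark-and-filter idea can be imitated in plain Python: attach a label to each function, then select only the labeled ones. The decorator below is a stand-in for pytest.mark used purely for illustration, not the real API:

```python
def mark(name):
    """Toy decorator that records a mark name on the function object
    (an imitation of @pytest.mark.<name>, not pytest itself)."""
    def deco(func):
        func.marks = getattr(func, "marks", set()) | {name}
        return func
    return deco

@mark("run_these_cases")
def test_member_access(): pass

@mark("run_these_cases")
def test_asdict(): pass

def test_default(): pass   # unmarked, so it gets deselected

collected = [test_member_access, test_asdict, test_default]
selected = [f.__name__ for f in collected
            if "run_these_cases" in getattr(f, "marks", set())]
print(selected)  # ['test_member_access', 'test_asdict']
```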

    The -x option


    Pytest runs every test case it collects. When an assertion fails, or the test triggers an unhandled exception, that test case stops: pytest marks it as failed and continues with the remaining cases. When debugging, however, you usually want the whole session to stop at the first failure. The -x option supports this scenario, as the following run shows:
    E:\Programs\Python\Python_Pytest\TestScripts>pytest -x
    =============================== test session starts ==============================
    platform win32 -- Python 3.7.3, pytest-4.5.0, py-1.8.0, pluggy-0.11.0
    rootdir: E:\Programs\Python\Python_Pytest\TestScripts
    plugins: allure-pytest-2.6.3
    collected 17 items                                                                                                                                                                                                                     
    
    test_asserts.py ...F
    
    ================================== FAILURES ===================================
    _________________________________ test_add4 ___________________________________
    
        def test_add4():
    >       assert add(17,22) >= 50
    E       assert 39 >= 50
    E        +  where 39 = add(17, 22)
    
    test_asserts.py:34: AssertionError
    ======================== warnings summary ====================================
    c:\python37\lib\site-packages\_pytest\mark\structures.py:324
      c:\python37\lib\site-packages\_pytest\mark\structures.py:324: PytestUnknownMarkWarning: Unknown pytest.mark.run_these_cases - is this a typo?  You can register custom marks to avoid this warning - for details, see https://docs.pyt
    est.org/en/latest/mark.html
        PytestUnknownMarkWarning,
    
    -- Docs: https://docs.pytest.org/en/latest/warnings.html
    ===================== 1 failed, 3 passed, 1 warnings in 0.41 seconds ====================
    

    Although 17 test cases were collected, only 4 were executed: 3 passed, 1 failed, and the run stopped at that failure. Without the -x option, the run looks like this:
    
    E:\Programs\Python\Python_Pytest\TestScripts>pytest --tb=no
    =============================== test session starts ================================
    platform win32 -- Python 3.7.3, pytest-4.5.0, py-1.8.0, pluggy-0.11.0
    rootdir: E:\Programs\Python\Python_Pytest\TestScripts
    plugins: allure-pytest-2.6.3
    collected 17 items                                                                                                                                                                                                                     
    
    test_asserts.py ...F..F                                                                                                                                                                                                          [ 41%]
    test_fixture1.py ..                                                                                                                                                                                                              [ 52%]
    test_fixture2.py ..                                                                                                                                                                                                              [ 64%]
    test_one.py .F                                                                                                                                                                                                                   [ 76%]
    test_two.py ....                                                                                                                                                                                                                 [100%]
    
    ============================ warnings summary =================================
    c:\python37\lib\site-packages\_pytest\mark\structures.py:324
      c:\python37\lib\site-packages\_pytest\mark\structures.py:324: PytestUnknownMarkWarning: Unknown pytest.mark.run_these_cases - is this a typo?  You can register custom marks to avoid this warning - for details, see https://docs.pyt
    est.org/en/latest/mark.html
        PytestUnknownMarkWarning,
    
    -- Docs: https://docs.pytest.org/en/latest/warnings.html
    ================= 3 failed, 14 passed, 1 warnings in 0.31 seconds ========================
    

    This time all 17 collected test cases ran: 14 passed and 3 failed. The --tb=no option suppresses the error tracebacks; use it when you only want to see the results rather than pages of failure detail.

    The --maxfail=num option


    -x stops the whole session at the first failure. What if we want to stop only after a given number of failures? The --maxfail option supports this scenario, as the following run shows:
    
    E:\Programs\Python\Python_Pytest\TestScripts>pytest --maxfail=2 --tb=no
    ================================ test session starts ====================================
    platform win32 -- Python 3.7.3, pytest-4.5.0, py-1.8.0, pluggy-0.11.0
    rootdir: E:\Programs\Python\Python_Pytest\TestScripts
    plugins: allure-pytest-2.6.3
    collected 17 items                                                                                                                                                                                                                     
    
    test_asserts.py ...F..F
    
    ============================= warnings summary ====================================
    c:\python37\lib\site-packages\_pytest\mark\structures.py:324
      c:\python37\lib\site-packages\_pytest\mark\structures.py:324: PytestUnknownMarkWarning: Unknown pytest.mark.run_these_cases - is this a typo?  You can register custom marks to avoid this warning - for details, see https://docs.pyt
    est.org/en/latest/mark.html
        PytestUnknownMarkWarning,
    
    -- Docs: https://docs.pytest.org/en/latest/warnings.html
    =========================== 2 failed, 5 passed, 1 warnings in 0.22 seconds ===================
    

    Of the 17 collected test cases, 7 were executed; the run stopped as soon as the failure count reached 2.
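The stopping behavior of -x and --maxfail boils down to a counter over the collected tests. A sketch of that loop, using made-up pass/fail data shaped like the runs above:

```python
def run_until_maxfail(results, maxfail):
    """Model of -x / --maxfail: walk the collected results in order and
    stop once `maxfail` failures have been seen. `results` is a list of
    (test_name, passed) pairs; returns the names actually executed."""
    executed, failures = [], 0
    for name, passed in results:
        executed.append(name)
        if not passed:
            failures += 1
            if failures >= maxfail:
                break
    return executed

# Hypothetical suite mirroring the ...F..F pattern seen above.
collected = [("t1", True), ("t2", True), ("t3", True), ("t4", False),
             ("t5", True), ("t6", True), ("t7", False), ("t8", True)]
print(len(run_until_maxfail(collected, maxfail=1)))  # 4 (same as -x)
print(len(run_until_maxfail(collected, maxfail=2)))  # 7
```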

    The --tb=style option


    Command and parameters     Description
    pytest --showlocals        show local variables in tracebacks
    pytest -l                  show local variables (shortcut)
    pytest --tb=auto           (default) 'long' tracebacks for the first and last entry, but 'short' style for the other entries
    pytest --tb=long           exhaustive, informative traceback formatting
    pytest --tb=short          shorter traceback format
    pytest --tb=line           only one line per failure
    pytest --tb=native         Python standard library formatting
    pytest --tb=no             no traceback at all
    pytest --full-trace        causes very long traces to be printed on error (longer than --tb=long)

    -v (--verbose)


    -v, --verbose: increase verbosity.

    -q (--quiet)


    -q, --quiet: decrease verbosity.

    --lf (--last-failed)


    --lf, --last-failed: rerun only the tests that failed at the last run (or all if none failed)
    E:\Programs\Python\Python_Pytest\TestScripts>pytest --lf --tb=no
    ========================== test session starts ========================================
    platform win32 -- Python 3.7.3, pytest-4.5.0, py-1.8.0, pluggy-0.11.0
    rootdir: E:\Programs\Python\Python_Pytest\TestScripts
    plugins: allure-pytest-2.6.3
    collected 9 items / 6 deselected / 3 selected                                                                                                                                                                                          
    run-last-failure: rerun previous 3 failures (skipped 7 files)
    test_asserts.py FF                                                                                                                                                                                                               [ 66%]
    test_one.py F                                                                                                                                                                                                                    [100%]
    
    ======================== 3 failed, 6 deselected in 0.15 seconds ============================
    

    --ff (--failed-first)


    --ff, --failed-first: run all tests but run the last failures first. This may re-order tests and thus lead to repeated fixture setup/teardown
    E:\Programs\Python\Python_Pytest\TestScripts>pytest --ff --tb=no
    ================================= test session starts ==================================
    platform win32 -- Python 3.7.3, pytest-4.5.0, py-1.8.0, pluggy-0.11.0
    rootdir: E:\Programs\Python\Python_Pytest\TestScripts
    plugins: allure-pytest-2.6.3
    collected 17 items                                                                                                                                                                                                                     
    run-last-failure: rerun previous 3 failures first
    test_asserts.py FF                                                                                                                                                                                                               [ 11%]
    test_one.py F                                                                                                                                                                                                                    [ 17%]
    test_asserts.py .....                                                                                                                                                                                                            [ 47%]
    test_fixture1.py ..                                                                                                                                                                                                              [ 58%]
    test_fixture2.py ..                                                                                                                                                                                                              [ 70%]
    test_one.py .                                                                                                                                                                                                                    [ 76%]
    test_two.py ....                                                                                                                                                                                                                 [100%]
    ======================== warnings summary ==========================================
    c:\python37\lib\site-packages\_pytest\mark\structures.py:324
      c:\python37\lib\site-packages\_pytest\mark\structures.py:324: PytestUnknownMarkWarning: Unknown pytest.mark.run_these_cases - is this a typo?  You can register custom marks to avoid this warning - for details, see https://docs.pyt
    est.org/en/latest/mark.html
        PytestUnknownMarkWarning,
    
    -- Docs: https://docs.pytest.org/en/latest/warnings.html
    ================= 3 failed, 14 passed, 1 warnings in 0.25 seconds ==========================
    

    -s and --capture=method


    -s is equivalent to --capture=no; the result is as follows:
    (venv) D:\Python_Pytest\TestScripts>pytest -s
    ============================= test session starts ============================================
    platform win32 -- Python 3.7.3, pytest-4.0.2, py-1.8.0, pluggy-0.12.0
    rootdir: D:\Python_Pytest\TestScripts, inifile:
    plugins: allure-adaptor-1.7.10
    collected 18 items                                                                                                                                                                                                                        
    
    test_asserts.py ...F...F
    test_fixture1.py
    
    setup_module================>
    setup_function------>
    test_numbers_3_4
    .teardown_function--->
    setup_function------>
    test_strings_a_3
    .teardown_function--->
    teardown_module=============>
    
    test_fixture2.py
    
    setup_class=========>
    setup_method----->>
    setup----->
    test_numbers_5_6
    .teardown-->
    teardown_method-->>
    setup_method----->>
    setup----->
    test_strings_b_2
    .teardown-->
    teardown_method-->>
    teardown_class=========>
    
    test_one.py .F
    test_two.py ....
    
    ========================= FAILURES =============================================
    _____________________________ test_add4 ______________________________________________
    
       @pytest.mark.aaaa
       def test_add4():
    >       assert add(17, 22) >= 50
    E       assert 39 >= 50
    E        +  where 39 = add(17, 22)
    
    test_asserts.py:36: AssertionError
    _____________________________________________________________________________________________________________ test_not_true ______________________________________________________________________________________________________________
    
       def test_not_true():
    >       assert not is_prime(7)
    E       assert not True
    E        +  where True = is_prime(7)
    
    test_asserts.py:70: AssertionError
    _______________________________________ test_not_equal ________________________________________________
    
       def test_not_equal():
    >       assert (1, 2, 3) == (3, 2, 1)
    E       assert (1, 2, 3) == (3, 2, 1)
    E         At index 0 diff: 1 != 3
    E         Use -v to get the full diff
    
    test_one.py:9: AssertionError
    ================================== 3 failed, 15 passed in 0.15 seconds =================================
    
    

    --capture=method: set the per-test capturing method, one of fd|sys|no.
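What "capturing" means can be illustrated with the standard library: pytest normally intercepts a test's stdout and only shows it on failure, while -s lets it through. A minimal sketch of the idea (not pytest's implementation, which by default captures at the file-descriptor level):

```python
import io
from contextlib import redirect_stdout

def run_captured(test_func, capture=True):
    """Toy model of pytest's output capturing: with capture on (the
    default), whatever the test prints is swallowed into a buffer; with
    -s / --capture=no it goes straight to the terminal."""
    if not capture:                 # behaves like `pytest -s`
        test_func()
        return ""
    buf = io.StringIO()
    with redirect_stdout(buf):      # behaves like the default capturing
        test_func()
    return buf.getvalue()           # shown by pytest only on failure

def noisy_test():
    print("setup_function------>")

print(repr(run_captured(noisy_test, capture=True)))
```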

    -l (--showlocals)


    -l, --showlocals: show locals in tracebacks (disabled by default).

    The --durations=N option


    --durations=N: show N slowest setup/test durations (N=0 for all). This is mostly used when tuning test code: it lists the N slowest test cases, or everything in descending order when N is 0.
    (venv) D:\Python_Pytest\TestScripts>pytest --duration=5
    ============================= test session starts =============================
    platform win32 -- Python 3.7.3, pytest-4.0.2, py-1.8.0, pluggy-0.12.0
    rootdir: D:\Python_Pytest\TestScripts, inifile:
    plugins: allure-adaptor-1.7.10
    collected 18 items                                                             
    
    test_asserts.py ...F...F                                                 [ 44%]
    test_fixture1.py ..                                                      [ 55%]
    test_fixture2.py ..                                                      [ 66%]
    test_one.py .F                                                           [ 77%]
    test_two.py ....                                                         [100%]
    
    ================================== FAILURES ===================================
    __________________________________ test_add4 __________________________________
    
        @pytest.mark.aaaa
        def test_add4():
    >       assert add(17, 22) >= 50
    E       assert 39 >= 50
    E        +  where 39 = add(17, 22)
    
    test_asserts.py:36: AssertionError
    ________________________________ test_not_true ________________________________
    
        def test_not_true():
    >       assert not is_prime(7)
    E       assert not True
    E        +  where True = is_prime(7)
    
    test_asserts.py:70: AssertionError
    _______________________________ test_not_equal ________________________________
    
        def test_not_equal():
    >       assert (1, 2, 3) == (3, 2, 1)
    E       assert (1, 2, 3) == (3, 2, 1)
    E         At index 0 diff: 1 != 3
    E         Use -v to get the full diff
    
    test_one.py:9: AssertionError
    ========================== slowest 5 test durations ===========================
    0.01s call     test_asserts.py::test_add4
    
    (0.00 durations hidden.  Use -vv to show these durations.)
    ===================== 3 failed, 15 passed in 0.27 seconds =====================
    
    

    The output ends with the hint (0.00 durations hidden. Use -vv to show these durations.). Adding -vv gives the following result:
    (venv) D:\Python_Pytest\TestScripts>pytest --duration=5 -vv
    ============================= test session starts =============================
    platform win32 -- Python 3.7.3, pytest-4.0.2, py-1.8.0, pluggy-0.12.0 -- c:\python37\python.exe
    cachedir: .pytest_cache
    rootdir: D:\Python_Pytest\TestScripts, inifile:
    plugins: allure-adaptor-1.7.10
    collected 18 items                                                             
    
    test_asserts.py::test_add PASSED                                         [  5%]
    test_asserts.py::test_add2 PASSED                                        [ 11%]
    test_asserts.py::test_add3 PASSED                                        [ 16%]
    test_asserts.py::test_add4 FAILED                                        [ 22%]
    test_asserts.py::test_in PASSED                                          [ 27%]
    test_asserts.py::test_not_in PASSED                                      [ 33%]
    test_asserts.py::test_true PASSED                                        [ 38%]
    test_asserts.py::test_not_true FAILED                                    [ 44%]
    test_fixture1.py::test_numbers_3_4 PASSED                                [ 50%]
    test_fixture1.py::test_strings_a_3 PASSED                                [ 55%]
    test_fixture2.py::TestUM::test_numbers_5_6 PASSED                        [ 61%]
    test_fixture2.py::TestUM::test_strings_b_2 PASSED                        [ 66%]
    test_one.py::test_equal PASSED                                           [ 72%]
    test_one.py::test_not_equal FAILED                                       [ 77%]
    test_two.py::test_default PASSED                                         [ 83%]
    test_two.py::test_member_access PASSED                                   [ 88%]
    test_two.py::test_asdict PASSED                                          [ 94%]
    test_two.py::test_replace PASSED                                         [100%]
    
    ================================== FAILURES ===================================
    __________________________________ test_add4 __________________________________
    
        @pytest.mark.aaaa
        def test_add4():
    >       assert add(17, 22) >= 50
    E       assert 39 >= 50
    E        +  where 39 = add(17, 22)
    
    test_asserts.py:36: AssertionError
    ________________________________ test_not_true ________________________________
    
        def test_not_true():
    >       assert not is_prime(7)
    E       assert not True
    E        +  where True = is_prime(7)
    
    test_asserts.py:70: AssertionError
    _______________________________ test_not_equal ________________________________
    
        def test_not_equal():
    >       assert (1, 2, 3) == (3, 2, 1)
    E       assert (1, 2, 3) == (3, 2, 1)
    E         At index 0 diff: 1 != 3
    E         Full diff:
    E         - (1, 2, 3)
    E         ?  ^     ^
    E         + (3, 2, 1)
    E         ?  ^     ^
    
    test_one.py:9: AssertionError
    ========================== slowest 5 test durations ===========================
    0.00s setup    test_one.py::test_not_equal
    0.00s setup    test_fixture1.py::test_strings_a_3
    0.00s setup    test_asserts.py::test_add3
    0.00s call     test_fixture2.py::TestUM::test_strings_b_2
    0.00s call     test_asserts.py::test_in
    ===================== 3 failed, 15 passed in 0.16 seconds =====================
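The --durations report boils down to sorting the recorded (phase, test, seconds) entries and keeping the N slowest. A sketch with made-up timings:

```python
def slowest(durations, n):
    """Model of --durations=N: sort (phase, test, seconds) records by
    time, descending, and keep the N slowest (N=0 keeps everything)."""
    ranked = sorted(durations, key=lambda rec: rec[2], reverse=True)
    return ranked if n == 0 else ranked[:n]

# Hypothetical timing records, shaped like pytest's durations output.
timings = [
    ("call",  "test_asserts.py::test_add4",  0.010),
    ("setup", "test_one.py::test_not_equal", 0.004),
    ("call",  "test_asserts.py::test_in",    0.002),
    ("setup", "test_asserts.py::test_add3",  0.001),
]
for phase, test, secs in slowest(timings, 2):
    print("%.2fs %-5s %s" % (secs, phase, test))
```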
    

    The -r option


    -r produces a short summary report; it is combined with one or more of the following characters:
    Option    Description
    f         failed
    E         error
    s         skipped
    x         xfailed
    X         xpassed
    p         passed
    P         passed with output
    a         all except pP
    A         all
    For example, if you only want to see the failed and skipped tests, you can run it like this:
    
    (venv) E:\Python_Pytest\TestScripts>pytest -rfs
    ============================= test session starts =============================
    platform win32 -- Python 3.7.3, pytest-4.0.2, py-1.8.0, pluggy-0.12.0
    rootdir: E:\Python_Pytest\TestScripts, inifile:
    plugins: allure-adaptor-1.7.10
    collected 18 items                                                             
    
    test_asserts.py ...F...F                                                 [ 44%]
    test_fixture1.py ..                                                      [ 55%]
    test_fixture2.py ..                                                      [ 66%]
    test_one.py .F                                                           [ 77%]
    test_two.py ....                                                         [100%]
    
    ================================== FAILURES ===================================
    __________________________________ test_add4 __________________________________
    
        @pytest.mark.aaaa
        def test_add4():
    >       assert add(17, 22) >= 50
    E       assert 39 >= 50
    E        +  where 39 = add(17, 22)
    
    test_asserts.py:36: AssertionError
    ________________________________ test_not_true ________________________________
    
        def test_not_true():
    >       assert not is_prime(7)
    E       assert not True
    E        +  where True = is_prime(7)
    
    test_asserts.py:70: AssertionError
    _______________________________ test_not_equal ________________________________
    
        def test_not_equal():
    >       assert (1, 2, 3) == (3, 2, 1)
    E       assert (1, 2, 3) == (3, 2, 1)
    E         At index 0 diff: 1 != 3
    E         Use -v to get the full diff
    
    test_one.py:9: AssertionError
    =========================== short test summary info ===========================
    FAIL test_asserts.py::test_add4
    FAIL test_asserts.py::test_not_true
    FAIL test_one.py::test_not_equal
    ===================== 3 failed, 15 passed in 0.10 seconds =====================
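The -r characters act as a filter over per-test outcomes. A sketch of that lookup, using hypothetical result data (pytest's real summary lines differ slightly in wording):

```python
# The -r report characters as a lookup table, matching the option list above.
REPORT_CHARS = {
    "f": "failed", "E": "error", "s": "skipped", "x": "xfailed",
    "X": "xpassed", "p": "passed", "P": "passed with output",
}

def summary_lines(outcomes, chars):
    """Model of -r<chars>: keep only the outcomes whose category was
    requested, e.g. chars='fs' for failed + skipped (like `pytest -rfs`)."""
    wanted = {REPORT_CHARS[c] for c in chars}
    return ["%s %s" % (outcome.upper(), test)
            for test, outcome in outcomes if outcome in wanted]

results = [("test_asserts.py::test_add4", "failed"),
           ("test_two.py::test_replace", "passed"),
           ("test_one.py::test_not_equal", "failed")]
print(summary_lines(results, "fs"))
```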
    
    

    Getting the full list of options with pytest --help


    Typing pytest --help on the command line prints the usage line, usage: pytest [options] [file_or_dir] [file_or_dir] [...], followed by every option and its description:
    C:\Users\Administrator>pytest --help
    usage: pytest [options] [file_or_dir] [file_or_dir] [...]
    positional arguments:
      file_or_dir
    general:
      -k EXPRESSION         only run tests which match the given substring
                            expression. An expression is a python evaluatable
                            expression where all names are substring-matched
                            against test names and their parent classes. Example:
                            -k 'test_method or test_other' matches all test
                            functions and classes whose name contains
                            'test_method' or 'test_other', while -k 'not
                            test_method' matches those that don't contain
                            'test_method' in their names. Additionally keywords
                            are matched to classes and functions containing extra
                            names in their 'extra_keyword_matches' set, as well as
                            functions which have names assigned directly to them.
      -m MARKEXPR           only run tests matching given mark expression.
                            example: -m 'mark1 and not mark2'.
      --markers             show markers (builtin, plugin and per-project ones).
      -x, --exitfirst       exit instantly on first error or failed test.
      --maxfail=num         exit after first num failures or errors.
      --strict              marks not registered in configuration file raise
                            errors.
      -c file               load configuration from `file` instead of trying to
                            locate one of the implicit configuration files.
      --continue-on-collection-errors
                            Force test execution even if collection errors occur.
      --rootdir=ROOTDIR     Define root directory for tests. Can be relative path:
                            'root_dir', './root_dir', 'root_dir/another_dir/';
                            absolute path: '/home/user/root_dir'; path with
                            variables: '$HOME/root_dir'.
      --fixtures, --funcargs
                            show available fixtures, sorted by plugin appearance
                            (fixtures with leading '_' are only shown with '-v')
      --fixtures-per-test   show fixtures per test
      --import-mode={prepend,append}
                            prepend/append to sys.path when importing test
                            modules, default is to prepend.
      --pdb                 start the interactive Python debugger on errors or
                            KeyboardInterrupt.
      --pdbcls=modulename:classname
                            start a custom interactive Python debugger on errors.
                            For example:
                            --pdbcls=IPython.terminal.debugger:TerminalPdb
      --trace               Immediately break when running each test.
      --capture=method      per-test capturing method: one of fd|sys|no.
      -s                    shortcut for --capture=no.
      --runxfail            run tests even if they are marked xfail
      --lf, --last-failed   rerun only the tests that failed at the last run (or
                            all if none failed)
      --ff, --failed-first  run all tests but run the last failures first. This
                            may re-order tests and thus lead to repeated fixture
                            setup/teardown
      --nf, --new-first     run tests from new files first, then the rest of the
                            tests sorted by file mtime
      --cache-show          show cache contents, don't perform collection or tests
      --cache-clear         remove all cache contents at start of test run.
      --lfnf={all,none}, --last-failed-no-failures={all,none}
                            change the behavior when no test failed in the last
                            run or no information about the last failures was
                            found in the cache
      --sw, --stepwise      exit on test fail and continue from last failing test
                            next time
      --stepwise-skip       ignore the first failing test but stop on the next
                            failing test
      --allure_severities=SEVERITIES_SET
                            Comma-separated list of severity names. Tests only
                            with these severities will be run. Possible values
                            are:blocker, critical, minor, normal, trivial.
      --allure_features=FEATURES_SET
                            Comma-separated list of feature names. Run tests that
                            have at least one of the specified feature labels.
      --allure_stories=STORIES_SET
                            Comma-separated list of story names. Run tests that
                            have at least one of the specified story labels.
    
    reporting:
      -v, --verbose         increase verbosity.
      -q, --quiet           decrease verbosity.
      --verbosity=VERBOSE   set verbosity
      -r chars              show extra test summary info as specified by chars
                            (f)ailed, (E)error, (s)skipped, (x)failed, (X)passed,
                            (p)passed, (P)passed with output, (a)all except pP.
                            Warnings are displayed at all times except when
                            --disable-warnings is set
      --disable-warnings, --disable-pytest-warnings
                            disable warnings summary
      -l, --showlocals      show locals in tracebacks (disabled by default).
      --tb=style            traceback print mode (auto/long/short/line/native/no).
      --show-capture={no,stdout,stderr,log,all}
                            Controls how captured stdout/stderr/log is shown on
                            failed tests. Default is 'all'.
      --full-trace          don't cut any tracebacks (default is to cut).
      --color=color         color terminal output (yes/no/auto).
      --durations=N         show N slowest setup/test durations (N=0 for all).
      --pastebin=mode       send failed|all info to bpaste.net pastebin service.
      --junit-xml=path      create junit-xml style report file at given path.
      --junit-prefix=str    prepend prefix to classnames in junit-xml output
      --result-log=path     DEPRECATED path for machine-readable result log.
    
    collection:
      --collect-only        only collect tests, don't execute them.
      --pyargs              try to interpret all arguments as python packages.
      --ignore=path         ignore path during collection (multi-allowed).
      --deselect=nodeid_prefix
                            deselect item during collection (multi-allowed).
      --confcutdir=dir      only load conftest.py's relative to specified dir.
      --noconftest          Don't load any conftest.py files.
      --keep-duplicates     Keep duplicate tests.
      --collect-in-virtualenv
                            Don't ignore tests in a local virtualenv directory
      --doctest-modules     run doctests in all .py modules
      --doctest-report={none,cdiff,ndiff,udiff,only_first_failure}
                            choose another output format for diffs on doctest
                            failure
      --doctest-glob=pat    doctests file matching pattern, default: test*.txt
      --doctest-ignore-import-errors
                            ignore doctest ImportErrors
      --doctest-continue-on-failure
                            for a given doctest, continue to run after the first
                            failure
    
    test session debugging and configuration:
      --basetemp=dir        base temporary directory for this test run.(warning:
                            this directory is removed if it exists)
      --version             display pytest lib version and import information.
      -h, --help            show help message and configuration info
      -p name               early-load given plugin (multi-allowed). To avoid
                            loading of plugins, use the `no:` prefix, e.g.
                            `no:doctest`.
      --trace-config        trace considerations of conftest.py files.
      --debug               store internal tracing debug information in
                            'pytestdebug.log'.
      -o OVERRIDE_INI, --override-ini=OVERRIDE_INI
                            override ini option with "option=value" style, e.g.
                            `-o xfail_strict=True -o cache_dir=cache`.
      --assert=MODE         Control assertion debugging tools. 'plain' performs no
                            assertion debugging. 'rewrite' (the default) rewrites
                            assert statements in test modules on import to provide
                            assert expression information.
      --setup-only          only setup fixtures, do not execute tests.
      --setup-show          show setup of fixtures while executing tests.
      --setup-plan          show what fixtures and tests would be executed but
                            don't execute anything.
    
    pytest-warnings:
      -W PYTHONWARNINGS, --pythonwarnings=PYTHONWARNINGS
                            set which warnings to report, see -W option of python
                            itself.
    
    logging:
      --no-print-logs       disable printing caught logs on failed tests.
      --log-level=LOG_LEVEL
                            logging level used by the logging module
      --log-format=LOG_FORMAT
                            log format as used by the logging module.
      --log-date-format=LOG_DATE_FORMAT
                            log date format as used by the logging module.
      --log-cli-level=LOG_CLI_LEVEL
                            cli logging level.
      --log-cli-format=LOG_CLI_FORMAT
                            log format as used by the logging module.
      --log-cli-date-format=LOG_CLI_DATE_FORMAT
                            log date format as used by the logging module.
      --log-file=LOG_FILE   path to a file when logging will be written to.
      --log-file-level=LOG_FILE_LEVEL
                            log file logging level.
      --log-file-format=LOG_FILE_FORMAT
                            log format as used by the logging module.
      --log-file-date-format=LOG_FILE_DATE_FORMAT
                            log date format as used by the logging module.
    
    reporting:
      --alluredir=DIR       Generate Allure report in the specified directory (may
                            not exist)
    
    
    [pytest] ini-options in the first pytest.ini|tox.ini|setup.cfg file found:
    
      markers (linelist)       markers for test functions
      empty_parameter_set_mark (string) default marker for empty parametersets
      norecursedirs (args)     directory patterns to avoid for recursion
      testpaths (args)         directories to search for tests when no files or dire
      console_output_style (string) console output: classic or with additional progr
      usefixtures (args)       list of default fixtures to be used with this project
      python_files (args)      glob-style file patterns for Python test module disco
      python_classes (args)    prefixes or glob names for Python test class discover
      python_functions (args)  prefixes or glob names for Python test function and m
      xfail_strict (bool)      default for the strict parameter of xfail markers whe
      junit_suite_name (string) Test suite name for JUnit report
      junit_logging (string)   Write captured log messages to JUnit report: one of n
      doctest_optionflags (args) option flags for doctests
      doctest_encoding (string) encoding used for doctest files
      cache_dir (string)       cache directory path.
      filterwarnings (linelist) Each line specifies a pattern for warnings.filterwar
      log_print (bool)         default value for --no-print-logs
      log_level (string)       default value for --log-level
      log_format (string)      default value for --log-format
      log_date_format (string) default value for --log-date-format
      log_cli (bool)           enable log display during test run (also known as "li
      log_cli_level (string)   default value for --log-cli-level
      log_cli_format (string)  default value for --log-cli-format
      log_cli_date_format (string) default value for --log-cli-date-format
      log_file (string)        default value for --log-file
      log_file_level (string)  default value for --log-file-level
      log_file_format (string) default value for --log-file-format
      log_file_date_format (string) default value for --log-file-date-format
      addopts (args)           extra command line options
      minversion (string)      minimally required pytest version
    
    environment variables:
      PYTEST_ADDOPTS           extra command line options
      PYTEST_PLUGINS           comma-separated plugins to load during startup
      PYTEST_DISABLE_PLUGIN_AUTOLOAD set to disable plugin auto-loading
      PYTEST_DEBUG             set to enable debug tracing of pytest's internals
    
    
    to see available markers type: pytest --markers
    to see available fixtures type: pytest --fixtures
    (shown according to specified file_or_dir or current dir if not specified; fixtures with leading '_' are only shown with the '-v' option)
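
The selection options listed above (`--collect-only`, `-k`, etc.) can also be driven programmatically. As a minimal sketch (assuming pytest is installed; the file and test names here are invented for illustration), `pytest.main()` accepts the same arguments the command line does, as a list of strings:

```python
import pathlib
import tempfile
import textwrap

import pytest

# Write a throwaway test module into a temporary directory.
with tempfile.TemporaryDirectory() as d:
    test_file = pathlib.Path(d) / "test_sample.py"
    test_file.write_text(textwrap.dedent("""\
        def test_ok():
            assert 1 + 1 == 2

        def test_other():
            assert "a" in "abc"
    """))

    # --collect-only: discover tests without executing them (exit code 0 on success).
    collect_rc = int(pytest.main(["--collect-only", "-q", str(test_file)]))

    # -k: run only tests whose names match the keyword expression;
    # here only test_ok matches "ok", test_other is deselected.
    select_rc = int(pytest.main(["-q", "-k", "ok", str(test_file)]))

print("collect:", collect_rc, "select:", select_rc)
```

Both calls return pytest's exit code (0 when collection or the selected tests succeed), which is the same value the shell sees after `pytest --collect-only` or `pytest -k ok`.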