@klamping
Last active January 23, 2024
Revisions

  1. klamping revised this gist Jan 23, 2024. 1 changed file with 6 additions and 0 deletions.
    ### Don't use random database/API data

    Using random data, while good for manual testing, is awful for automated testing. See the previous point.

    ---

    ### Don't use automation to discover new bugs. Limit the scope of tests

    Similar to the above, automated tests need a limited scope in order to protect debuggability.
  2. klamping revised this gist Dec 28, 2023. 1 changed file with 9 additions and 0 deletions.
    Tests need to be reliable most of all, and easy to debug second of all. An unreliable test is an absolute pain.

    ### Optimize tests for readability and debugging

    - That's where you'll spend the majority of your time (and frustration)
    - Your tests should not make you think

    ---

    ### Magic is real, and it hurts!

    Avoid magic in code. Magic means logic, logic means bugs. You don't want to debug your tests!
    Don't do special code tricks. Code to a third-grade level. Be overly verbose.
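
    As a sketch of the difference (the data and totals are made up, not from the gist): both versions below compute the same thing, but only the verbose one is pleasant to step through in a debugger.

    ```javascript
    // A sketch with made-up data: both versions compute the same total,
    // but only the verbose one is easy to step through line by line.
    const items = [
      { price: 10, qty: 2 },
      { price: 15, qty: 2 }
    ];

    // "Clever" (avoid in tests): dense, and a debugger lands on one big expression
    const totalClever = items.reduce((sum, { price, qty }) => sum + price * qty, 0);

    // Third-grade level (prefer in tests): obvious, easy to log at each step
    let totalVerbose = 0;
    for (const item of items) {
      totalVerbose = totalVerbose + item.price * item.qty;
    }
    ```

    The loop is longer, but when a test fails you can log each intermediate value without unpacking a one-liner.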

    ---

    ### Avoid conditionals in your tests

    Conditionals in your tests mean that different test runs will execute different test code. This is a sign that there's dangerous "logic" in your tests, which means you need to test your tests. Anytime you're spending effort testing your tests... that's just not a good thing.
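
    As an illustration, with a made-up `menuItemsFor` helper standing in for a real page: instead of one test that branches on its data, write two tests with fixed inputs and fixed expectations.

    ```javascript
    // Hypothetical page model for illustration: which nav items a user sees
    function menuItemsFor(user) {
      const items = ['Home', 'Profile'];
      if (user.isAdmin) {
        items.push('Admin Panel');
      }
      return items;
    }

    // BAD: one test that branches on its input; different runs execute
    // different assertions, so the test itself needs testing:
    // if (user.isAdmin) { expect admin link } else { expect no admin link }

    // GOOD: two separate checks, each with a hard-coded input and a fixed expectation
    const adminMenu = menuItemsFor({ isAdmin: true });
    const memberMenu = menuItemsFor({ isAdmin: false });
    ```

    Each test now always executes the same code path, so a failure points at exactly one scenario.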
  3. klamping revised this gist Dec 13, 2023. 1 changed file with 6 additions and 0 deletions.

    ---

    ### Avoid conditionals in your tests

    Conditionals in your tests mean that different test runs will execute different test code. This is a sign that there's dangerous "logic" in your tests, which means you need to test your tests. Anytime you're spending effort testing your tests... that's just not a good thing.

    ---

    ### Don't use text-based selectors if the content is dynamic/comes from a database.
  4. klamping revised this gist Dec 11, 2023. 1 changed file with 7 additions and 1 deletion.
    This seems like a neat idea, but don't. Introducing random data means tests that aren't reproducible, unless you specifically know that it was the random data that did it (you won't). There are two exceptions to this:

    - You need unique data, like for IDs and such. If you do, make sure there are strict restrictions on the data, so that the data generated will always use a safe subset of characters (e.g., no umlaüts).
    - You always use the same seed for data generation, so data stays the same between test runs. This is simple to do with Chance.js
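
    One way to get the "same seed every run" behavior without any library is a tiny seeded PRNG; the helpers below are a sketch (mulberry32, my names, not from the gist), and Chance.js gives you the same thing via `new Chance(seed)`.

    ```javascript
    // A sketch (helper names are mine): a tiny seeded PRNG (mulberry32) so
    // "random" test data is identical on every run.
    function mulberry32(seed) {
      return function () {
        seed = (seed + 0x6D2B79F5) | 0;
        let t = Math.imul(seed ^ (seed >>> 15), 1 | seed);
        t = (t + Math.imul(t ^ (t >>> 7), 61 | t)) ^ t;
        return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
      };
    }

    // Generated IDs stay inside a strict character subset (no umlauts)
    function makeId(rand, length = 8) {
      const chars = 'abcdefghijklmnopqrstuvwxyz0123456789';
      let id = '';
      for (let i = 0; i < length; i++) {
        id += chars[Math.floor(rand() * chars.length)];
      }
      return id;
    }

    const idA = makeId(mulberry32(42)); // same seed...
    const idB = makeId(mulberry32(42)); // ...same "random" id, every run
    ```

    Because the seed is fixed, a failing run can be reproduced exactly, which is the whole point.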

    ---

    ### Don't use random database/API data

    Using random data, while good for manual testing, is awful for automated testing. See the previous point.
  5. klamping revised this gist Nov 7, 2023. 1 changed file with 13 additions and 2 deletions.
    ### Optimize tests for readability and debugging

    - That's where you'll spend the majority of your time (and frustration)
    - Your tests should not make you think
    - Test failures will happen at the worst time. Don't make it even more stressful

    ### `describe` the test, `it` the steps

    UI test runners did a bad job. They were built for unit tests, not UI tests. Unit tests are about testing functions; UI tests are about testing user flows. User flows are a looooot longer than function tests. Test runners are not built for long test cases. So, let's work around it and alter how we use test runners to fit our needs. Describe the test case, and use 'it' for individual steps. This means you need to be careful about parallelization and using things like 'only'/'skip', but the test reporting is much cleaner/easier.

    Note that there is an issue with doing this: it causes the 'retry' functionality to break. If one test relies on another and you 'retry' the failed test, it will absolutely fail, because the previous test won't be re-run.

    ---

    ### Don't use Random data generators

    This seems like a neat idea, but don't. Introducing random data means tests that aren't reproducible, unless you specifically know that it was the random data that did it (you won't). There are two exceptions to this:

    - You need unique data, like for IDs and such. If you do, make sure there are strict restrictions on the data, so that the data generated will always use a safe subset of characters (e.g., no umlaüts).
    - You always use the same seed for data generation, so data stays the same between test runs. This is simple to do with Chance.js
  6. klamping revised this gist Sep 19, 2023. 1 changed file with 10 additions and 2 deletions.

    ---

    ### Don't hide assertions in utility functions

    - We also have the specific content we're looking for in the assertion, versus a variable we have to hunt for
    - We're not mixing actions with assertions. If `openDetails` fails, we have a very narrow set of things to look at. If `checkUserDetails` fails, we have to look at everything. We're only abstracting as much as is useful.

    ---

    ### Smaller, Concise tests

    Debugging a long test is, well, long. You have to play through all the steps needed to get to the failure point. And if it's an intermittent failure (it usually is), then that takes even longer.

    Sometimes this is tough, because you have to move through a longer flow to get to a certain state, but the more you can keep your tests small, the easier they will be to debug.

    ---

    ### `describe` the test, `it` the steps

    UI test runners did a bad job. They were built for unit tests, not UI tests. Unit tests are about testing functions; UI tests are about testing user flows. User flows are a looooot longer than function tests. Test runners are not built for long test cases. So, let's work around it and alter how we use test runners to fit our needs. Describe the test case, and use 'it' for individual steps. This means you need to be careful about parallelization and using things like 'only'/'skip', but the test reporting is much cleaner/easier.
  7. klamping revised this gist Sep 15, 2023. 1 changed file with 7 additions and 1 deletion.
    - Our test tells a much more detailed, specific story of what's going on, versus just pointing to an abstract function that takes more thought to understand
    - If an assertion fails, we are in the exact spot in the test where we need to be.
    - We also have the specific content we're looking for in the assertion, versus a variable we have to hunt for
    - We're not mixing actions with assertions. If `openDetails` fails, we have a very narrow set of things to look at. If `checkUserDetails` fails, we have to look at everything. We're only abstracting as much as is useful.

    ### Smaller, Concise tests

    Debugging a long test is, well, long. You have to play through all the steps needed to get to the failure point. And if it's an intermittent failure (it usually is), then that takes even longer.

    Sometimes this is tough, because you have to move through a longer flow to get to a certain state, but the more you can keep your tests small, the easier they will be to debug.
  8. klamping created this gist Sep 12, 2023.
    ## Optimize tests for readability and debugging

    - That's where you'll spend the majority of your time (and frustration)
    - Your tests should not make you think
    - Test failures will happen at the worst time. Don't make it even more stressful

    ---

    ### Prefer hard-coded values in assertions, versus dynamic content

    - It sure is fun to write a generated test
    - It sure isn't fun to debug a generated test
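
    As a small illustration with a hypothetical `formatPrice` helper (not from the gist): if the expected value is generated with the same logic as the code under test, a shared bug passes silently, while a hard-coded literal catches it.

    ```javascript
    // Hypothetical function under test, for illustration only
    function formatPrice(cents) {
      return '$' + (cents / 100).toFixed(2);
    }

    // BAD: the "expected" value is computed with the same logic as the
    // code under test, so a shared bug in that logic passes silently
    const generatedExpected = '$' + (1999 / 100).toFixed(2);

    // GOOD: a hard-coded literal a human wrote down and can read at a glance
    const actual = formatPrice(1999);
    // assert that actual is exactly the literal string '$19.99'
    ```

    The literal also doubles as documentation: anyone reading the test knows exactly what the UI should show.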

    ---

    ### Don't use text-based selectors if the content is dynamic/comes from a database.

    ```
    const companyName = page
      .locator('.job-listing-fields')
      .getByText('Company Name');
    await expect(companyName).toBeVisible();
    ```

    Issues:
    - If 'Company Name' has a bug like a typo (or the wrong company is showing), then the failure is `Can't find element with text "Company Name"`
    - Compare this to checking the text first: `Expected 'Company Name', instead got 'Company Anme'`
    - Which one is easier to debug? With a text-based, dynamic selector, you don't know where to look.
    - Typos/incorrect content are a common UI issue. Write your tests in a way that helps debug/identify these issues


    ```
    const firstCompany = page.getByTestId('company-name').nth(0);
    await expect(firstCompany).toHaveText('Company Name');
    ```

    Scenario:
    - We have a list of items. Current functionality is that when you add an item, it goes to the bottom of the list. We build our tests on this assumption and they run great
    - New change. We now sort the list alphabetically. This change is made in the backend, with no related front-end changes.
    - Suddenly, our staging tests are breaking, failing builds and causing headaches. The test only breaks when the item gets renamed (and resorted automatically). We look at the content, and it appears fine. It takes significant effort to realize that sorting is breaking the locator, because we're focused on the content existing, not on the content being in the wrong place.

    ---

    ### Don't hide assertions

    ```
    const checkUserDetails = async (page, user, itemIndex) => {
      const userContainer = page.locator('.user-details').nth(itemIndex);
      const username = userContainer.getByTestId('username');
      const age = userContainer.getByTestId('age');
      await expect(username).toHaveText(user.firstName + ' ' + user.lastName);
      // Expand the user details so we can see age
      await userContainer.getByTestId('expand-details').click();
      // wait for the content to appear
      await age.waitFor({ state: 'visible' });
      await expect(age).toHaveText(user.age);
    };

    test('User Details', async ({ page }) => {
      const user = {
        firstName: 'Bob',
        lastName: 'Dole',
        age: '22'
      };
      await checkUserDetails(page, user, 1);
    });
    ```

    If the assertion fails, your debugger is placed into the 'checkUserDetails' function, and you have to work your way out of it to figure out where things went wrong, and what data it was looking for. Instead, be less DRY and hard-code these things:

    ```
    const getUserDetailsRow = (page, itemIndex) => {
      const container = page.locator('.user-details').nth(itemIndex);
      const username = container.getByTestId('username');
      const age = container.getByTestId('age');
      const openDetails = async () => {
        await container.getByTestId('expand-details').click();
        // wait for the content to appear
        await age.waitFor({ state: 'visible' });
      };
      return {
        container,
        username,
        age,
        openDetails
      };
    };

    test('User Details', async ({ page }) => {
      const userDetailsRow = getUserDetailsRow(page, 1);
      await expect(userDetailsRow.username).toHaveText('Bob Dole');
      await userDetailsRow.openDetails();
      await expect(userDetailsRow.age).toHaveText('22');
    });
    ```

    - Our test tells a much more detailed, specific story of what's going on, versus just pointing to an abstract function that takes more thought to understand
    - If an assertion fails, we are in the exact spot in the test where we need to be.
    - We also have the specific content we're looking for in the assertion, versus a variable we have to hunt for
    - We're not mixing actions with assertions. If `openDetails` fails, we have a very narrow set of things to look at. If `checkUserDetails` fails, we have to look at everything. We're only abstracting as much as is useful.