
@limkhashing
Created August 28, 2025 07:48
Prompt Engineering

Android Module Unit Test Generation Standard

Prompt Usage Instructions

Use these specific prompt formats to leverage this comprehensive testing framework:

For New Test Classes (Testing Brand New Classes)

Generate unit tests for {ClassName} using #file:unit-test-generation-prompt.md

Example:

Generate unit tests for PaymentValidator using #file:unit-test-generation-prompt.md

For Existing Test Classes (Adding Missing Tests)

Generate missing tests in {TestClassName} using #file:unit-test-generation-prompt.md

Example:

Generate missing tests in BookingHelperTest using #file:unit-test-generation-prompt.md

For Specific Method Testing (Targeted Test Generation)

Generate missing tests in {TestClassName} for method "{methodName}" using #file:unit-test-generation-prompt.md

Example:

Generate missing tests in FlightSearchPresenterTest for method "validateBooking" using #file:unit-test-generation-prompt.md

Shortcut Commands (Quick Usage)

For faster typing, use these shortcut formats:

Shortcut 1: New Test Class

1. {ClassName} #file:unit-test-generation-prompt.md

Example:

1. PaymentValidator #file:unit-test-generation-prompt.md

Shortcut 2: Existing Test Class (Add Missing Tests)

2. {TestClassName} #file:unit-test-generation-prompt.md

Example:

2. BookingHelperTest #file:unit-test-generation-prompt.md

Shortcut 3: Specific Method Testing

3. {TestClassName} {methodName} #file:unit-test-generation-prompt.md

Example:

3. FlightSearchPresenterTest validateBooking #file:unit-test-generation-prompt.md

Note:

  • Numbers 1, 2, 3 indicate the command type (new class, existing class, specific method)
  • Replace {ClassName}, {TestClassName}, and {methodName} with your actual class and method names
  • The AI will automatically follow the 7-step workflow and generate comprehensive, high-quality unit tests following [Your Project] mobile application standards

Overview

This document provides a comprehensive 7-step workflow for generating unit tests for Android modules following established conventions and patterns. This standard can be applied to any module within the [Your Project] mobile application.

7-Step Unit Test Generation Workflow

Step 1: Analysis of Logic Changes in Actual Class

Step 2: Plan Test Coverage Strategy

Step 3: Apply Test Implementation Plan

Step 4: Optimize Test Implementation

Step 5: Run and Debug Tests

Step 6: Verify All Tests Pass

Step 7: Generate HTML Coverage Report


STEP 1: ANALYSIS OF LOGIC CHANGES IN ACTUAL CLASS

1.1 Identify What Has Changed

Before writing any tests, thoroughly analyze the actual class to understand what has changed or what needs testing:

New Class Analysis (For Completely New Classes)

When testing a brand new class:

# Examine the new class structure
cat src/main/java/path/to/NewClass.kt

Document:

  • All public methods and their signatures
  • Constructor parameters and dependencies
  • Business logic flows and conditional branches
  • Error handling mechanisms
  • Integration points with other classes
  • Observable streams and async operations

Existing Class Modification Analysis

When adding tests to existing functionality:

# Check what methods/logic have been added or modified
git diff HEAD~1 src/main/java/path/to/YourClass.kt

Key Questions to Answer:

  • What new public methods were added?
  • What existing methods had logic changes?
  • Were new parameters added to existing methods?
  • Are there new conditional branches or logic paths?
  • Were new dependencies or collaborators introduced?
  • Are there new feature flags or configuration changes?

Impact Assessment Matrix

Create a matrix to document the changes:

| Component      | Change Type     | Methods Affected  | Dependencies Added    | Logic Complexity |
|----------------|-----------------|-------------------|-----------------------|------------------|
| NewMethod      | Addition        | validatePayment() | PaymentValidator      | Medium           |
| ExistingMethod | Parameter Added | searchFlights()   | MealPreferenceService | Low              |
| ExistingMethod | Logic Branch    | processBooking()  | FeatureFlagService    | High             |

1.2 Business Logic Flow Analysis

Identify All Logic Paths

For each method, map out all possible execution paths:

// Example: Analyze this method
fun processBooking(booking: BookingData): Observable<BookingResult> {
    return if (featureFlag.isNewFlowEnabled()) {
        // Path A: New flow logic
        newBookingProcessor.process(booking)
            .doOnSuccess { saveToSession(it) }
            .doOnError { logError(it) }
    } else {
        // Path B: Legacy flow logic  
        legacyBookingProcessor.process(booking)
            .map { convertToNewFormat(it) }
    }
}

Document Required Test Scenarios:

  • βœ… Path A: Feature flag enabled + success case
  • βœ… Path A: Feature flag enabled + error case
  • βœ… Path B: Feature flag disabled + success case
  • βœ… Path B: Feature flag disabled + error case

Error Handling Analysis

Identify all error scenarios that need testing:

  • Network timeouts and failures
  • Invalid input validation
  • Business rule violations
  • Database operation failures
  • Authentication/authorization errors
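These scenarios are easiest to enumerate in tests when errors are modeled explicitly. A minimal pure-Kotlin sketch, where the `BookingError` hierarchy and `classify` mapping are illustrative examples rather than project code:

```kotlin
import java.net.SocketTimeoutException

// Illustrative only: modeling error scenarios as a sealed hierarchy makes each
// path an explicit, individually testable case.
sealed class BookingError {
    object NetworkTimeout : BookingError()
    data class InvalidInput(val reason: String) : BookingError()
    object Unknown : BookingError()
}

// Maps low-level exceptions to the domain error a test scenario asserts on.
fun classify(throwable: Throwable): BookingError = when (throwable) {
    is SocketTimeoutException -> BookingError.NetworkTimeout
    is IllegalArgumentException -> BookingError.InvalidInput(throwable.message ?: "unknown")
    else -> BookingError.Unknown
}

check(classify(SocketTimeoutException()) == BookingError.NetworkTimeout)
check(classify(IllegalArgumentException("missing origin")) == BookingError.InvalidInput("missing origin"))
check(classify(RuntimeException()) == BookingError.Unknown)
```

Each branch then backs one `given…` test method, e.g. the timeout branch corresponds to a `…_givenNetworkTimeout_…` test.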

STEP 2: PLAN TEST COVERAGE STRATEGY

2.1 Determine Test Class Strategy

Decision Matrix: New vs Existing Test Class

Create NEW Test Class When:

  • βœ… Testing a completely new class
  • βœ… Testing a new module or component
  • βœ… Major refactoring that changes the entire class structure
  • βœ… The existing test class is already very large (>50 test methods)

Append to EXISTING Test Class When:

  • βœ… Adding tests for new methods in an existing class
  • βœ… Testing new logic paths in existing methods
  • βœ… Adding edge cases for existing functionality
  • βœ… Testing new parameters or variations of existing methods
  • βœ… Adding feature flag variations to existing logic

Test Class Naming Convention

// For new classes
{ClassName}Test.kt

// Examples:
PaymentValidatorTest.kt
FlightSearchHelperTest.kt
BookingSessionProviderTest.kt

2.2 Test Method Planning

Test Method Naming Strategy

All test methods MUST follow the three-part naming convention with optimized length:

{functionName}_{givenCondition}_{expectedResult}

Naming Convention Optimization Rules

CRITICAL: Balance conciseness with completeness. Avoid both overly long names and names that omit critical conditions.

βœ… GOOD Examples (Balanced - Clear and Complete):

validatePayment_givenValidCard_returnsTrue
validatePayment_givenNullCard_returnsFalse
validatePayment_givenExpiredCard_returnsFalse
validatePayment_givenInvalidCvv_returnsFalse
processBooking_givenValidDataAndNewFlag_savesToSession
processBooking_givenValidDataAndLegacyFlag_usesLegacyFlow
processBooking_givenNetworkError_handlesGracefully
searchFlights_givenValidOriginAndDestination_returnsResults
searchFlights_givenEmptyOriginButValidDestination_returnsError
isPriceChanged_givenSamePriceAndCurrency_returnsFalse
isPriceChanged_givenDifferentPriceButSameCurrency_returnsTrue

❌ AVOID (Too Long - Redundant Details):

validatePaymentMethodWithCreditCardInformation_givenValidCreditCardWithCorrectCVVAndNotExpired_returnsTrueAndProcessesSuccessfully
processBookingDataWithPassengerInformationAndFlightDetails_givenValidBookingDataWithAllRequiredFieldsPopulated_savesDataToSessionProviderSuccessfully

❌ AVOID (Too Short - Missing Critical Conditions):

validatePayment_givenCard_returnsTrue          // Missing: what makes the card valid?
processBooking_givenData_saves                 // Missing: what type of data? saves where?
searchFlights_givenInput_returnsResults        // Missing: what kind of input? valid or invalid?
isPriceChanged_givenPrice_returnsTrue          // Missing: compared to what? what changed?
saveBooking_givenTrip_saves                    // Missing: what type of trip? one-way/return?

Optimization Guidelines:

  • Include key distinguishing conditions: givenValidOriginAndDestination vs givenEmptyOrigin
  • Specify important context: givenNewFlag vs givenLegacyFlag for feature flag tests
  • Use abbreviations for common terms: Param instead of Parameter, Config instead of Configuration
  • Drop redundant words: Use givenNull instead of givenNullValue
  • Use domain-specific terms: ExpiredCard instead of CardWithExpiredDate
  • Keep essential differentiators: givenSamePriceAndCurrency vs givenDifferentPriceButSameCurrency
  • Avoid combining unrelated conditions: Don't use givenValidInput if you need to test specific validation aspects separately

Critical Edge Case Coverage Requirements

MANDATORY: Cover critical edge cases to avoid crashes, incorrect logic, and broken flows

For each method, ensure tests cover these critical scenarios:

  • Null input handling: Test with null parameters to prevent NullPointerException
  • Empty collection handling: Test with empty lists, arrays, or sets
  • Boundary values: Test minimum/maximum values, zero values, negative numbers
  • Invalid state conditions: Test when objects are in unexpected states
  • Network/IO failures: Test timeout, connection errors, and data corruption
  • Concurrent access issues: Test thread safety where applicable
  • Memory constraints: Test with large datasets or memory-limited scenarios
  • Configuration edge cases: Test missing configurations, invalid settings
  • Business rule violations: Test data that violates business logic constraints
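As a concrete illustration of the first two bullets, here is a null-safe helper and its edge-case checks in plain Kotlin (the function and its name are hypothetical, not from the project):

```kotlin
// Hypothetical helper: picks the first usable airport code from an optional list.
// Null input and empty collections degrade to null instead of throwing.
fun firstAirportCode(codes: List<String>?): String? =
    codes?.firstOrNull { it.isNotBlank() }

check(firstAirportCode(null) == null)         // null input: no NullPointerException
check(firstAirportCode(emptyList()) == null)  // empty collection handled
check(firstAirportCode(listOf("", "SIN")) == "SIN")
```

Each `check` above maps directly to one test method, e.g. `firstAirportCode_givenNullList_returnsNull`.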

Code Modification Restrictions

CRITICAL RULE: DO NOT MODIFY OR ADJUST ACTUAL LOGIC CLASSES

  • βœ… ONLY modify or adjust test classes
  • ❌ NEVER modify the class under test (CUT)
  • ❌ NEVER modify dependencies or collaborator classes
  • ❌ NEVER modify data models or DTOs to "make tests easier"
  • ❌ NEVER add public methods to classes just for testing
  • ❌ NEVER change access modifiers (private β†’ public) for testing

If tests reveal issues in the actual class:

  1. Document the issues in test comments
  2. Report findings to the development team
  3. Work around the issues using proper mocking techniques
  4. Focus on testing the current behavior "as-is"
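To make the "as-is" rule concrete, here is a small sketch (class and method names invented for illustration): private logic stays private and is exercised only through the public surface, rather than widening its access modifier for the test.

```kotlin
// Illustrative class under test: `normalize` is private and must stay private.
class AirportCodeFormatter {
    fun display(code: String): String = normalize(code)
    private fun normalize(code: String): String = code.trim().uppercase()
}

// The private branch is covered indirectly via the public method:
check(AirportCodeFormatter().display(" sin ") == "SIN")
check(AirportCodeFormatter().display("LAX") == "LAX")
```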

Planning Template

For each method in the class, create a test plan:

// Method: validatePayment(card: CreditCard): Boolean
// Test Plan:
// 1. validatePayment_givenValidCard_returnsTrue
// 2. validatePayment_givenExpiredCard_returnsFalse  
// 3. validatePayment_givenNullCard_returnsFalse               // Edge case: null input
// 4. validatePayment_givenInvalidCVV_returnsFalse
// 5. validatePayment_givenEmptyCardNumber_returnsFalse        // Edge case: empty string
// 6. validatePayment_givenFutureExpiryDate_returnsTrue        // Boundary case
// 7. validatePayment_givenExactExpiryDate_handlesCorrectly    // Boundary case

// Method: processBooking(booking: BookingData): Observable<BookingResult>
// Test Plan:
// 1. processBooking_givenValidBookingAndNewFlagEnabled_usesNewFlow
// 2. processBooking_givenValidBookingAndNewFlagDisabled_usesLegacyFlow
// 3. processBooking_givenNetworkError_handlesErrorGracefully
// 4. processBooking_givenInvalidBooking_returnsValidationError
// 5. processBooking_givenNullBooking_handlesGracefully           // Edge case: null input
// 6. processBooking_givenEmptyBookingData_handlesGracefully      // Edge case: empty data
// 7. processBooking_givenTimeoutError_doesNotCrash              // Edge case: timeout
// 8. processBooking_givenLargeBookingData_handlesEfficiently    // Edge case: large data
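The two boundary cases in the validatePayment plan hinge on how the exact expiry month is treated. A minimal sketch of that comparison, assuming (as is common, though this is an assumption, not project logic) that a card expiring in the current month is still accepted; the helper name is illustrative:

```kotlin
import java.time.YearMonth

// Assumed rule: a card is valid through the end of its expiry month.
fun isCardNotExpired(expiry: YearMonth, today: YearMonth): Boolean =
    !expiry.isBefore(today)

check(isCardNotExpired(YearMonth.of(2026, 1), YearMonth.of(2025, 8)))   // future expiry
check(isCardNotExpired(YearMonth.of(2025, 8), YearMonth.of(2025, 8)))   // exact boundary month
check(!isCardNotExpired(YearMonth.of(2025, 7), YearMonth.of(2025, 8)))  // past expiry
```

Pinning the boundary down in a test like `validatePayment_givenExactExpiryDate_handlesCorrectly` documents whichever rule the class actually implements.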

2.3 Mock and Dependency Planning

Dependency Analysis

List all dependencies that need mocking:

// Dependencies identified in YourClass constructor:
class YourClass(
    private val bookingProvider: BookingProvider,          // Mock needed
    private val sessionProvider: SessionProvider,          // Mock needed  
    private val featureFlag: FeatureFlag,                 // Mock needed
    private val schedulerConfig: SchedulerConfiguration    // Mock needed
)

Mock Configuration Strategy

Plan the mock setup for different test scenarios:

// Common mocks (setup in @Before)
@Mock lateinit var bookingProvider: BookingProvider
@Mock lateinit var sessionProvider: SessionProvider  
@Mock lateinit var featureFlag: FeatureFlag

// Test-specific mock behaviors
// For success scenarios: `when`(provider.getData()).thenReturn(Observable.just(data))
// For error scenarios: `when`(provider.getData()).thenReturn(Observable.error(exception))
// For feature flags: `when`(featureFlag.isEnabled()).thenReturn(true/false)

2.4 Test Data Planning

Test Data Requirements

Plan what test data objects you'll need:

// Common test data (created in @Before or helper methods)
private lateinit var validBookingData: BookingData
private lateinit var invalidBookingData: BookingData
private lateinit var sampleFlightViewModel: FlightViewModelV2
private lateinit var expectedResult: BookingResult

// Helper method planning
private fun createValidBookingData(): BookingData { ... }
private fun createExpectedFlightSearchParams(): FlightSearchParams { ... }

STEP 3: APPLY TEST IMPLEMENTATION PLAN

3.1 Test Class Structure Implementation

Standard Test Class Template

Use this template when implementing new test classes:

@RunWith(MockitoJUnitRunner::class)
class {ClassName}Test {

    @InjectMocks
    lateinit var subject: {ClassName}

    @Mock
    lateinit var dependency1: Dependency1Type

    @Mock
    lateinit var dependency2: Dependency2Type

    // Test data
    private lateinit var sampleData: DataType

    @Before
    fun setup() {
        // Initialize common test data
        sampleData = createSampleData()
        
        // Configure common mock behaviors
        // (declare a schedulerConfiguration mock above if the class uses RxJava schedulers)
        `when`(schedulerConfiguration.ioScheduler()).thenReturn(Schedulers.trampoline())
        `when`(schedulerConfiguration.mainScheduler()).thenReturn(Schedulers.trampoline())

        // Attach the view if the subject is a presenter (declare a view mock above)
        subject.setView(view)
    }

    @Test
    fun methodName_givenCondition_expectedResult() {
        // Arrange
        // Act  
        // Assert
    }
}

3.2 Test Implementation by Component Type

Presenter Tests

Focus on:

  • View interactions and state changes
  • Business logic coordination
  • Error handling and user feedback
  • Navigation flows
  • Firebase logging
  • Lifecycle management
@Test
fun onSearchClicked_givenValidInput_proceedsToFlightSearch() {
    // Arrange
    `when`(recentAirportStore.get().saveRecentAirports(any(), any(), any()))
        .thenReturn(Observable.just(true))
    
    // Act
    subject.onSearchClicked("SIN", "LAX")
    
    // Assert
    verify(view).proceedToFlightSearch()
    verify(scopeManager.get()).releaseFlightSearchComponent()
}

Provider Tests

Focus on:

  • Data transformation
  • API call handling
  • Error scenarios
  • Caching behavior
  • Observable streams
@Test
fun getSavedFlights_givenDatabaseError_returnsError() {
    // Arrange
    `when`(database.getSavedFlights()).thenReturn(Observable.error(DatabaseException()))
    
    // Act
    val result = subject.getSavedFlights().test()
    
    // Assert
    result.assertError(DatabaseException::class.java)
}

Factory Tests

Focus on:

  • Object creation correctness
  • Property mapping accuracy
  • Null handling
  • Default value assignment
@Test
fun create_givenValidFlightData_mapsAllProperties() {
    // Arrange
    val flightViewModel = createSampleFlightViewModel()
    
    // Act
    val result = subject.create(flightViewModel, searchParams)
    
    // Assert
    assertThat(result.flightId).isEqualTo(flightViewModel.flightId)
    assertThat(result.tripType).isEqualTo(searchParams.tripType)
}

Helper/Utility Tests

Focus on:

  • Algorithm correctness
  • Edge cases
  • Input validation
  • Output formatting
@Test
fun formatDuration_given4Hours_returnsFormattedString() {
    // Arrange
    `when`(context.getString(R.string.duration_format, "4", "0"))
        .thenReturn("4 hours")
    
    // Act
    val result = subject.formatDuration(14400)
    
    // Assert
    assertThat(result).isEqualTo("4 hours")
}
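The `14400` input maps to the `"4", "0"` format arguments because the helper presumably splits total seconds into whole hours and leftover minutes. A sketch of that arithmetic (the function name is illustrative, not the real implementation):

```kotlin
// 14400 s = 4 h 0 min: integer-divide by 3600 for hours, remainder / 60 for minutes.
fun splitDuration(totalSeconds: Int): Pair<Int, Int> {
    val hours = totalSeconds / 3600
    val minutes = (totalSeconds % 3600) / 60
    return hours to minutes
}

check(splitDuration(14400) == Pair(4, 0))  // the "4 hours" case above
check(splitDuration(5400) == Pair(1, 30))  // a mixed hours-and-minutes case
```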

Validator Tests

Focus on:

  • Validation logic accuracy
  • Required field checking
  • Business rule enforcement
  • Error condition handling
@Test
fun isComplete_givenMissingRequiredField_returnsFalse() {
    // Arrange
    val passenger = createPassengerWithMissingField()
    
    // Act
    val result = subject.isComplete(passenger)
    
    // Assert
    assertThat(result).isFalse()
}

3.3 Critical Implementation Rules

NEVER Use any() in Verify Statements

❌ WRONG - This will cause Mockito errors:

verify(sessionProvider).saveData(any())
verify(converter).convert(any())

βœ… CORRECT - Use specific mock objects:

@Mock
private lateinit var mockData: DataModel

@Test
fun testMethod_givenValidData_savesData() {
    // Act
    subject.processData(mockData)
    
    // Assert - Use the actual mock object, not any()
    verify(sessionProvider).saveData(mockData)
}

Complex Object Verification Pattern

For complex nested objects, create expected objects explicitly:

private fun createExpectedFlightSearchParams(): FlightSearchParams {
    return FlightSearchParams(
        departure = FlightSearchParams.FlightSearchSegment(
            airportCode = "SIN",
            date = expectedDepartureDate
        ),
        arrival = FlightSearchParams.FlightSearchSegment(
            airportCode = "LAX", 
            date = expectedArrivalDate
        ),
        passengerConfiguration = expectedPassengerConfig,
        cabinClass = CabinClass.ECONOMY
    )
}

STEP 4: OPTIMIZE TEST IMPLEMENTATION

4.1 Test Code Analysis and Optimization

After completing the initial test implementation, perform a comprehensive analysis to optimize the test code for readability, reusability, and maintainability while preserving all test logic.

Optimization Scanning Checklist

CRITICAL RULE: Maintain identical test logic. Optimization must not change test behavior or coverage.

4.1.1 Code Duplication Analysis

Scan for duplicate code patterns and extract them into reusable methods:

❌ BEFORE (Duplicated Setup):

@Test
fun saveBooking_givenOneWayTrip_savesToSession() {
    // Arrange
    val flightViewModel = FlightViewModelV2().apply {
        flightId = "SQ123"
        departureDate = "2023-12-01"
        arrivalDate = null
        tripType = TripType.ONE_WAY
        origin = "SIN"
        destination = "LAX"
    }
    
    // Act & Assert...
}

@Test
fun saveBooking_givenReturnTrip_savesToSession() {
    // Arrange
    val flightViewModel = FlightViewModelV2().apply {
        flightId = "SQ123"
        departureDate = "2023-12-01"
        arrivalDate = "2023-12-08"
        tripType = TripType.RETURN
        origin = "SIN"
        destination = "LAX"
    }
    
    // Act & Assert...
}

βœ… AFTER (Extracted Helper Methods):

@Test
fun saveBooking_givenOneWayTrip_savesToSession() {
    // Arrange
    val flightViewModel = createOneWayFlightViewModel()
    
    // Act & Assert...
}

@Test
fun saveBooking_givenReturnTrip_savesToSession() {
    // Arrange
    val flightViewModel = createReturnFlightViewModel()
    
    // Act & Assert...
}

private fun createOneWayFlightViewModel(): FlightViewModelV2 {
    return createBaseFlightViewModel().apply {
        tripType = TripType.ONE_WAY
        arrivalDate = null
    }
}

private fun createReturnFlightViewModel(): FlightViewModelV2 {
    return createBaseFlightViewModel().apply {
        tripType = TripType.RETURN
        arrivalDate = "2023-12-08"
    }
}

private fun createBaseFlightViewModel(): FlightViewModelV2 {
    return FlightViewModelV2().apply {
        flightId = "SQ123"
        departureDate = "2023-12-01"
        origin = "SIN"
        destination = "LAX"
    }
}

4.1.2 Mock Configuration Optimization

Consolidate repetitive mock setups:

❌ BEFORE (Repeated Mock Setup):

@Test
fun testMethod1() {
    `when`(schedulerConfig.ioScheduler()).thenReturn(Schedulers.trampoline())
    `when`(schedulerConfig.mainScheduler()).thenReturn(Schedulers.trampoline())
    `when`(dateFormatter.format(any())).thenReturn("2023-12-01")
    // Test logic...
}

@Test
fun testMethod2() {
    `when`(schedulerConfig.ioScheduler()).thenReturn(Schedulers.trampoline())
    `when`(schedulerConfig.mainScheduler()).thenReturn(Schedulers.trampoline())
    `when`(dateFormatter.format(any())).thenReturn("2023-12-01")
    // Test logic...
}

βœ… AFTER (Centralized Setup):

@Before
fun setup() {
    setupCommonMocks()
    setupTestData()
}

private fun setupCommonMocks() {
    `when`(schedulerConfig.ioScheduler()).thenReturn(Schedulers.trampoline())
    `when`(schedulerConfig.mainScheduler()).thenReturn(Schedulers.trampoline())
    `when`(dateFormatter.format(any())).thenReturn("2023-12-01")
}

private fun setupTestData() {
    baseFlightViewModel = createBaseFlightViewModel()
    validBookingData = createValidBookingData()
}

4.1.3 Assertion Pattern Optimization

Create reusable assertion methods for complex verifications:

❌ BEFORE (Repeated Assertion Logic):

@Test
fun testMethod1() {
    // Act
    subject.saveBooking(bookingData)
    
    // Assert
    val captor = argumentCaptor<CslSession>()
    verify(sessionProvider).saveSession(captor.capture())
    assertThat(captor.firstValue.flightId).isEqualTo("SQ123")
    assertThat(captor.firstValue.origin).isEqualTo("SIN")
    assertThat(captor.firstValue.destination).isEqualTo("LAX")
}

@Test
fun testMethod2() {
    // Act
    subject.saveBooking(differentBookingData)
    
    // Assert
    val captor = argumentCaptor<CslSession>()
    verify(sessionProvider).saveSession(captor.capture())
    assertThat(captor.firstValue.flightId).isEqualTo("SQ456")
    assertThat(captor.firstValue.origin).isEqualTo("LAX")
    assertThat(captor.firstValue.destination).isEqualTo("SIN")
}

βœ… AFTER (Reusable Assertion Method):

@Test
fun testMethod1() {
    // Act
    subject.saveBooking(bookingData)
    
    // Assert
    verifySavedSession("SQ123", "SIN", "LAX")
}

@Test
fun testMethod2() {
    // Act
    subject.saveBooking(differentBookingData)
    
    // Assert
    verifySavedSession("SQ456", "LAX", "SIN")
}

private fun verifySavedSession(expectedFlightId: String, expectedOrigin: String, expectedDestination: String) {
    val captor = argumentCaptor<CslSession>()
    verify(sessionProvider).saveSession(captor.capture())
    val savedSession = captor.firstValue
    assertThat(savedSession.flightId).isEqualTo(expectedFlightId)
    assertThat(savedSession.origin).isEqualTo(expectedOrigin)
    assertThat(savedSession.destination).isEqualTo(expectedDestination)
}

4.2 Readability Enhancement

4.2.1 Test Method Organization

Group related tests and add descriptive comments:

@RunWith(MockitoJUnitRunner::class)
class SaveAndCompareLoadingHelperTest {

    // === Test Subject and Dependencies ===
    @InjectMocks
    lateinit var subject: SaveAndCompareLoadingHelper
    
    @Mock
    lateinit var flightConverter: FlightModelConverterV2
    // ... other mocks

    // === Test Data ===
    private lateinit var baseFlightViewModel: FlightViewModelV2
    private lateinit var validBookingData: BookingData

    // === Setup Methods ===
    @Before
    fun setup() { /* ... */ }

    // === Save Booking Data Tests ===
    @Test
    fun saveBookingData_givenOneWayTrip_savesToSession() { /* ... */ }
    
    @Test
    fun saveBookingData_givenReturnTrip_savesToSession() { /* ... */ }

    // === Price Comparison Tests ===
    @Test
    fun isPriceChanged_givenSamePrice_returnsFalse() { /* ... */ }
    
    @Test
    fun isPriceChanged_givenDifferentPrice_returnsTrue() { /* ... */ }

    // === Helper Methods ===
    private fun createOneWayFlightViewModel(): FlightViewModelV2 { /* ... */ }
    
    private fun verifySavedSession(expectedData: BookingData) { /* ... */ }
}

4.2.2 Constants and Magic Numbers

Extract magic numbers and strings into meaningful constants:

❌ BEFORE (Magic Numbers):

@Test
fun testMethod() {
    val price1 = BigDecimal("299.50")
    val price2 = BigDecimal("350.75")
    // Test logic...
}

βœ… AFTER (Named Constants):

companion object {
    private val ECONOMY_PRICE = BigDecimal("299.50")
    private val BUSINESS_PRICE = BigDecimal("350.75")
    private const val SAMPLE_FLIGHT_ID = "SQ123"
    private const val ORIGIN_AIRPORT = "SIN"
    private const val DESTINATION_AIRPORT = "LAX"
}

@Test
fun testMethod() {
    val price1 = ECONOMY_PRICE
    val price2 = BUSINESS_PRICE
    // Test logic...
}

4.3 Maintainability Improvements

4.3.1 Test Data Builders

Create builder patterns for complex test objects:

class FlightViewModelBuilder {
    private var flightId: String = "SQ123"
    private var origin: String = "SIN"
    private var destination: String = "LAX"
    private var tripType: TripType = TripType.ONE_WAY
    private var departureDate: String = "2023-12-01"
    private var arrivalDate: String? = null

    fun withFlightId(flightId: String) = apply { this.flightId = flightId }
    fun withOrigin(origin: String) = apply { this.origin = origin }
    fun withDestination(destination: String) = apply { this.destination = destination }
    fun withTripType(tripType: TripType) = apply { this.tripType = tripType }
    fun withReturnDate(arrivalDate: String) = apply { this.arrivalDate = arrivalDate }

    fun build(): FlightViewModelV2 {
        return FlightViewModelV2().apply {
            this.flightId = this@FlightViewModelBuilder.flightId
            this.origin = this@FlightViewModelBuilder.origin
            this.destination = this@FlightViewModelBuilder.destination
            this.tripType = this@FlightViewModelBuilder.tripType
            this.departureDate = this@FlightViewModelBuilder.departureDate
            this.arrivalDate = this@FlightViewModelBuilder.arrivalDate
        }
    }
}

// Usage in tests:
@Test
fun testMethod() {
    val flightViewModel = FlightViewModelBuilder()
        .withFlightId("SQ456")
        .withTripType(TripType.RETURN)
        .withReturnDate("2023-12-08")
        .build()
    
    // Test logic...
}

4.4 Post-Optimization Validation

Critical Validation Steps

After optimization, ensure:

  1. Logic Preservation: All test logic remains exactly the same
  2. Coverage Maintenance: No reduction in test coverage
  3. Compilation: All tests still compile without errors
  4. Naming Consistency: All optimized names follow the three-part convention
  5. Readability: Code is more readable and self-documenting

Quick Validation Commands

# Verify tests still compile after optimization
./gradlew :{module}:compileDebugUnitTestKotlin

# Verify tests still pass after optimization  
./gradlew :{module}:testDebugUnitTest --tests "*{TestClassName}*"

MANDATORY CHECK: If any test fails after optimization, immediately revert changes and re-optimize more carefully to preserve exact test behavior.


STEP 5: RUN AND DEBUG TESTS

5.1 Compilation Verification

First: Ensure Tests Compile

./gradlew :{module-name}:compileDebugUnitTestKotlin

Common Compilation Errors and Fixes

Type Mismatches:

// Common issue: String vs Int for resource IDs
// ❌ Wrong
fareFamilyName = "Economy Standard" // Should be Int for some ViewModels

// βœ… Correct  
fareFamilyName = 2131689472 // Int resource ID

Import Issues with Nested Classes:

// ❌ Wrong
import com.singaporeair.mobile.booking.FlightSearchSegment

// βœ… Correct
import com.singaporeair.mobile.booking.FlightSearchParams
// Then use: FlightSearchParams.FlightSearchSegment

5.2 Test Execution and Debugging

Run Specific Test Class

./gradlew :{module-name}:testDebugUnitTest --tests "*{TestClassName}*"

Common Runtime Errors and Solutions

Mockito ArgumentMatcher Errors:

any(...) must not be null
InvalidUseOfMatchersException

Fix: Use typed matchers in stubbing, specific objects in verify:

// ❌ Wrong
`when`(converter.convert(any())).thenReturn(result)
verify(sessionProvider).saveData(any())

// βœ… Correct
`when`(converter.convert(any<FlightViewModelV2>())).thenReturn(result)
verify(sessionProvider).saveData(specificMockObject)

Unnecessary Stubbing Warnings: When you see "Unnecessary stubbings detected":

  1. Remove unused stubs (preferred):

    // Remove lines like this if the test doesn't reach that code path:
    // `when`(helper.formatDate(...)).thenReturn(...)
  2. Use lenient mocking when a shared stub is only reached on some paths:

    // Per-stub, compatible with the JUnit 4 MockitoJUnitRunner used in this guide:
    lenient().`when`(helper.formatDate(any())).thenReturn("2023-12-01")

    // Or class-wide, with the JUnit 5 Mockito extension:
    @MockitoSettings(strictness = Strictness.LENIENT)

RxJava Scheduler Issues:

@Before
fun setup() {
    `when`(schedulerConfiguration.ioScheduler()).thenReturn(Schedulers.trampoline())
    `when`(schedulerConfiguration.mainScheduler()).thenReturn(Schedulers.trampoline())
}

5.3 Debugging Command Sequence

When tests fail, follow this debugging sequence:

  1. Check Compilation:

    ./gradlew :{module}:compileDebugUnitTestKotlin
  2. Run Specific Test:

    ./gradlew :{module}:testDebugUnitTest --tests "*YourTestClass*" --info
  3. Get Verbose Output:

    ./gradlew :{module}:testDebugUnitTest --tests "*YourTestClass*" --stacktrace --debug

STEP 6: VERIFY ALL TESTS PASS

6.1 Complete Test Suite Validation

Run Full Test Suite for Module

./gradlew :{module-name}:testDebugUnitTest

Success Criteria Checklist

  • All tests compile without errors
  • All tests execute without runtime failures
  • No unnecessary Mockito stubbing warnings
  • All verify() statements use concrete objects, not any()
  • Date formatting matches expected patterns
  • Mock objects are properly configured for all code paths
  • No test interference or flaky behavior

6.2 Integration Validation

Ensure No Regression

When adding tests to existing classes:

  1. Run existing tests first:

    ./gradlew :{module}:testDebugUnitTest --tests "*ExistingTestClass*"
  2. Add new tests incrementally

  3. Validate full integration:

    ./gradlew :{module}:testDebugUnitTest

Performance Check

Ensure tests run efficiently:

  • Individual test methods should complete in <5 seconds
  • Full test class should complete in <60 seconds
  • No memory leaks or excessive resource usage
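The per-test budget can be spot-checked directly in plain Kotlin. A minimal sketch, where the timed body is just a placeholder standing in for a test method and the 5-second threshold mirrors the guideline above:

```kotlin
import kotlin.system.measureTimeMillis

// Placeholder workload standing in for a single test method's body.
val elapsedMs = measureTimeMillis {
    (1..1_000).sum()
}

// Mirrors the "<5 seconds per test method" guideline above.
check(elapsedMs < 5_000) { "test body exceeded the 5-second budget: ${elapsedMs}ms" }
```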

STEP 7: GENERATE HTML COVERAGE REPORT

7.1 Create Comprehensive HTML Coverage Report

After all tests pass successfully, generate a professional HTML coverage report:

Report Template Structure

Use this exact HTML template format for all test coverage reports to maintain consistency:

<!DOCTYPE html>
<html>
<head>
    <title>Test Coverage Report - {ClassName}</title>
    <style>
        body { 
            font-family: 'Segoe UI', Tahoma, Geneva, Verdana, sans-serif; 
            margin: 0; 
            padding: 0; 
            background-color: #f8f9fa; 
        }
        .container { 
            max-width: 1200px; 
            margin: 0 auto; 
            padding: 20px; 
        }
        .header { 
            background: linear-gradient(135deg, #667eea 0%, #764ba2 100%); 
            color: white; 
            padding: 30px; 
            border-radius: 10px; 
            text-align: center; 
            margin-bottom: 30px; 
            box-shadow: 0 4px 6px rgba(0, 0, 0, 0.1); 
        }
        .header h1 { 
            margin: 0; 
            font-size: 2.5em; 
            font-weight: 300; 
        }
        .header p { 
            margin: 10px 0 0 0; 
            opacity: 0.9; 
        }
        .metrics { 
            display: grid; 
            grid-template-columns: repeat(auto-fit, minmax(300px, 1fr)); 
            gap: 20px; 
            margin-bottom: 30px; 
        }
        .metric-card { 
            background: white; 
            padding: 25px; 
            border-radius: 10px; 
            box-shadow: 0 2px 10px rgba(0, 0, 0, 0.1); 
            border-left: 5px solid #28a745; 
        }
        .metric-card h3 { 
            margin: 0 0 15px 0; 
            color: #333; 
            font-size: 1.3em; 
        }
        .metric-card p { 
            margin: 8px 0; 
            font-size: 1.1em; 
        }
        .content-section { 
            background: white; 
            padding: 25px; 
            border-radius: 10px; 
            margin-bottom: 20px; 
            box-shadow: 0 2px 10px rgba(0, 0, 0, 0.1); 
        }
        .content-section h3 { 
            margin: 0 0 20px 0; 
            color: #333; 
            font-size: 1.4em; 
            border-bottom: 2px solid #f0f0f0; 
            padding-bottom: 10px; 
        }
        .content-section ul { 
            list-style: none; 
            padding: 0; 
        }
        .content-section li { 
            padding: 8px 0; 
            border-bottom: 1px solid #f8f9fa; 
            font-size: 1.05em; 
        }
        .content-section li:last-child { 
            border-bottom: none; 
        }
        .success { 
            color: #28a745; 
            font-weight: 600; 
        }
        .coverage-high { 
            color: #28a745; 
            font-weight: 700; 
            font-size: 1.2em; 
        }
        .emoji { 
            font-size: 1.2em; 
            margin-right: 8px; 
        }
        .footer { 
            text-align: center; 
            margin-top: 40px; 
            padding: 20px; 
            color: #6c757d; 
            font-style: italic; 
        }
    </style>
</head>
<body>
    <div class="container">
        <div class="header">
            <h1>Test Coverage Report</h1>
            <h2>{ClassName}</h2>
            <p>Generated on: {timestamp}</p>
        </div>
        
        <div class="metrics">
            <div class="metric-card">
                <h3><span class="emoji">πŸ“Š</span>Test Execution Summary</h3>
                <p><strong>{totalTests}</strong> Total Tests Executed</p>
                <p class="success"><strong>100%</strong> Pass Rate</p>
                <p><strong>0</strong> Failed Tests</p>
                <p><strong>0</strong> Skipped Tests</p>
            </div>
            <div class="metric-card">
                <h3><span class="emoji">🎯</span>Coverage Metrics</h3>
                <p class="coverage-high"><strong>~{coveragePercent}%</strong> Line Coverage</p>
                <p class="coverage-high"><strong>100%</strong> Method Coverage</p>
                <p><strong>{methodsCovered}/{totalMethods}</strong> Methods Tested</p>
                <p><strong>{branchesCovered}</strong> Logic Branches Covered</p>
            </div>
        </div>
        
        <div class="content-section">
            <h3><span class="emoji">βœ…</span>Test Cases Executed</h3>
            <ul>
                {testMethodsList}
            </ul>
        </div>
        
        <div class="content-section">
            <h3><span class="emoji">🧩</span>Logic Paths Covered</h3>
            <ul>
                <li>βœ… Valid input processing and success scenarios</li>
                <li>βœ… Error handling and exception scenarios</li>
                <li>βœ… Edge cases and boundary condition validation</li>
                <li>βœ… Null and empty input handling</li>
                <li>βœ… Feature flag and configuration variations</li>
                <li>βœ… Integration point and dependency testing</li>
                <li>βœ… Observable stream success and error flows</li>
                <li>βœ… Mock interaction verification patterns</li>
            </ul>
        </div>
        
        <div class="content-section">
            <h3><span class="emoji">πŸ”§</span>Code Quality & Standards</h3>
            <ul>
                <li>βœ… All tests follow three-part naming convention (method_condition_result)</li>
                <li>βœ… Proper mock verification patterns implemented (no any() in verify statements)</li>
                <li>βœ… Comprehensive assertion coverage with meaningful test data</li>
                <li>βœ… RxJava streams properly tested with TestObserver patterns</li>
                <li>βœ… AndroidX Test framework compliance and best practices</li>
                <li>βœ… Mockito strict stubbing validation passed</li>
                <li>βœ… No test interdependencies or flaky test behaviors</li>
                <li>βœ… Proper test data setup and teardown management</li>
            </ul>
        </div>
        
        <div class="content-section">
            <h3><span class="emoji">πŸ“‹</span>Test Generation Summary</h3>
            <ul>
                <li><strong>Original Tests:</strong> {originalTestCount} existing test methods</li>
                <li><strong>New Tests Added:</strong> {newTestCount} additional test methods</li>
                <li><strong>Total Coverage:</strong> {totalTests} comprehensive test scenarios</li>
                <li><strong>Quality Score:</strong> <span class="success">A+ (Excellent)</span></li>
                <li><strong>Compliance:</strong> <span class="success">100% [Your Project] Mobile Standards</span></li>
                <li><strong>Maintainability:</strong> <span class="success">High - Clear naming and structure</span></li>
                <li><strong>Integration:</strong> <span class="success">Seamless with existing test suite</span></li>
            </ul>
        </div>
        
        <div class="footer">
            <p>Generated using [Your Project] Mobile Android Unit Test Generation Standard v2.0</p>
            <p>All tests validated against established patterns and coding conventions</p>
        </div>
    </div>
</body>
</html>

7.2 Generate and Open Report

Create the HTML Report File

// Generate the report by filling the template placeholders with actual metrics.
// generateHtmlReport is a helper you implement for this purpose; it substitutes
// {ClassName}, {totalTests}, {coveragePercent}, etc. into the template above.
val reportContent = generateHtmlReport(
    className = "YourClassName",
    totalTests = actualTestCount,
    coveragePercent = estimatedCoverage,
    testMethods = listOfTestMethodNames,
    timestamp = LocalDateTime.now()
)

// Save to file (requires java.io.File and java.time.LocalDateTime)
File("test-coverage-report-{ClassName}.html").writeText(reportContent)

Open Report in Browser

# Open the generated HTML report (macOS; use xdg-open on Linux or start on Windows)
open test-coverage-report-{ClassName}.html

7.3 Report Content Requirements

Essential Metrics to Include

  1. Quantitative Metrics:

    • Total tests executed: {X} tests
    • Pass rate: 100%
    • Estimated line coverage: ~{X}%
    • Method coverage: 100%
  2. Qualitative Analysis:

    • Business logic path coverage
    • Error scenario validation
    • Edge case handling
    • Integration point testing
    • Mock interaction verification
  3. Quality Assessment:

    • Test naming convention compliance
    • Mock usage patterns and verification quality
    • Test data management effectiveness
    • Assertion coverage and quality

Example Success Report Content

πŸ“Š Test Metrics Summary
- 14 Total Tests βœ…
- 100% Pass Rate βœ…  
- ~95% Line Coverage βœ…
- 100% Method Coverage βœ…

🧩 Logic Paths Covered
βœ… Valid one-way trip processing
βœ… Valid return trip processing  
βœ… Invalid date handling
βœ… Edge case validation
βœ… Error scenario testing
βœ… Feature flag variations

🎯 Test Quality Assessment
βœ… Three-part naming convention followed
βœ… Proper mock verification patterns
βœ… Comprehensive assertion coverage
βœ… Realistic test data usage
βœ… No any() usage in verify statements

REFERENCE MATERIALS

Module Structure Analysis

Example: Main Booking Module Components

The booking module consists of the following major components (adapt for your specific module):

  • Flight Search: Flight search functionality including flexible dates, passenger selection, and search parameters
  • Flight Selection: Flight selection logic and trip summary
  • Passenger Details: Passenger information handling, validation, and management
  • Review Booking: Booking review, seat selection, and modifications
  • Payment Integration: Payment processing and validation
  • Save and Compare Flights: Flight saving and comparison functionality
  • CIB (One-way Round-trip Booking): CIB-specific booking flows
  • Session Management: Booking session handling and extension
  • Message Handling: Booking-related messaging and notifications

Test Categories Found

  1. Presenter Tests: MVP pattern presenter logic testing
  2. Provider Tests: Data provider and service layer testing
  3. Factory Tests: Object creation and transformation testing
  4. Helper Tests: Utility and helper class testing
  5. Converter Tests: Data conversion and mapping testing
  6. Validator Tests: Input validation and business rule testing

Naming Convention Standards

Test Method Naming Pattern

All test methods MUST follow the three-part naming convention separated by underscores:

{functionName}_{givenCondition}_{expectedResult}

Examples from Existing Tests

  • onViewResumed_givenCIBFeatureFlagIsTrue_seesDialog
  • getSavedFlights_givenNoSavedFlights_verifyDisplayNoFlightsView
  • deleteFlight_givenError_verifyViewDeleteSavedFlightNotCalled
  • onSearchClicked_givenExceptionOnStore_doesNotProceedToFlightSearch
  • checkFlightAvailability_givenFlightNotAvailableAndFlagOff_clearsDestinationAirport
  • getIsComplete_typeIsAdultAndAllMandatoryFieldsArePresent_returnsTrue

Function Name Conventions

  • Use the actual method name being tested
  • For lifecycle methods: onViewResumed, onViewDestroy, setUp
  • For click handlers: onSearchClicked, onUpdateSavedFlightClick
  • For getters: getSavedFlights, getIsComplete
  • For business logic: checkFlightAvailability, deleteFlight

Condition Conventions

  • given{Condition}: Describes the input state or parameters
  • givenNo{Entity}: When entity is empty/null
  • given{Flag}True/False: For feature flag conditions
  • givenError: When error conditions are tested
  • givenException: When exception handling is tested

Result Conventions

  • returns{Value}: For methods returning values
  • verify{Action}Called: For verifying method calls
  • verify{Action}NotCalled: For verifying methods are not called
  • shows{View/Dialog}: For UI state changes
  • clears{Field}: For field clearing actions
  • proceeds{Action}: For navigation or flow continuation
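The three-part convention above is strict enough to check mechanically. A minimal sketch in plain Kotlin (the helper name and regex are illustrative assumptions, not part of the standard):

```kotlin
// Hypothetical helper: checks that a test method name follows the
// {functionName}_{givenCondition}_{expectedResult} convention, i.e. exactly
// three camelCase parts separated by underscores.
private val testNamePattern = Regex("^[a-z][A-Za-z0-9]*_[a-z][A-Za-z0-9]*_[a-z][A-Za-z0-9]*$")

fun followsNamingConvention(testName: String): Boolean =
    testNamePattern.matches(testName)

fun main() {
    // Names from the examples above pass the check
    println(followsNamingConvention("deleteFlight_givenError_verifyViewDeleteSavedFlightNotCalled")) // true
    // A legacy-style name with only one part fails it
    println(followsNamingConvention("testDeleteFlight")) // false
}
```

A check like this could be wired into a custom lint rule or a review script, but here it simply makes the convention precise.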

Mock Configuration Patterns

Lazy Dependencies

Many classes use Lazy dependencies:

@Mock
protected lateinit var dependency: Lazy<DependencyType>

@Before
fun setup() {
    `when`(dependency.get()).thenReturn(mock(DependencyType::class.java))
}

RxJava Testing

Always configure schedulers for synchronous testing:

@Before
fun setup() {
    `when`(schedulerConfiguration.ioScheduler()).thenReturn(Schedulers.trampoline())
    `when`(schedulerConfiguration.mainScheduler()).thenReturn(Schedulers.trampoline())
}

Observable Testing

Use RxJava TestObserver for stream testing:

@Test
fun getData_givenSuccessfulCall_emitsData() {
    // Arrange
    val expectedData = createTestData()
    `when`(provider.getData()).thenReturn(Observable.just(expectedData))
    
    // Act
    val testObserver = subject.getData().test()
    
    // Assert
    testObserver.assertComplete()
    testObserver.assertValue(expectedData)
    testObserver.assertNoErrors()
}

Common Test Data Setup

Date Handling

private lateinit var departureDate: LocalDate
private lateinit var returnDate: LocalDate

@Before
fun setup() {
    departureDate = LocalDate.of(2018, 11, 19)
    returnDate = LocalDate.of(2018, 11, 29)
    
    `when`(dateFormatter.formatLocalDate("2018-11-19", "yyyy-MM-dd"))
        .thenReturn(departureDate)
}

Airport Data

@Mock
private lateinit var originAirport: Airport
@Mock 
private lateinit var destinationAirport: Airport

@Before
fun setup() {
    `when`(airportProvider.findAirport(FLIGHT_SEARCH, "SIN"))
        .thenReturn(Observable.just(AirportSearchResult(true, originAirport)))
}

Passenger Data

private lateinit var passengerCountModel: PassengerCountModel

@Before
fun setup() {
    passengerCountModel = PassengerCountModel(
        adultCount = 2,
        childCount = 1, 
        infantCount = 0
    )
}

Feature Flag Testing

Many tests include feature flag conditions:

@Test
fun performAction_givenFeatureFlagEnabled_executesNewFlow() {
    // Arrange
    `when`(featureFlag.isNewFlowEnabled()).thenReturn(true)
    
    // Act
    subject.performAction()
    
    // Assert
    verify(view).showNewFlowView()
}

@Test  
fun performAction_givenFeatureFlagDisabled_executesLegacyFlow() {
    // Arrange
    `when`(featureFlag.isNewFlowEnabled()).thenReturn(false)
    
    // Act
    subject.performAction()
    
    // Assert
    verify(view).showLegacyFlowView()
}

Firebase Analytics Testing

Test Firebase event logging:

@Test
fun onBookingComplete_givenSuccessfulBooking_logsFirebaseEvent() {
    // Act
    subject.onBookingComplete()
    
    // Assert
    verify(firebaseLogProvider).logEvent(FirebaseEventType.BOOKING_COMPLETE)
}

Code Coverage Expectations

Minimum Coverage Requirements

  • Business Logic Classes: 90%+ line coverage
  • Presenters: 85%+ line coverage
  • Providers: 85%+ line coverage
  • Helpers/Utilities: 95%+ line coverage
  • Validators: 95%+ line coverage
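To measure these thresholds rather than estimate them, the JaCoCo Gradle plugin can produce line and branch coverage reports. A minimal build.gradle.kts fragment, assuming a standard Android module setup (task names and source wiring vary by project and AGP version, so treat this as a starting point):

```kotlin
// build.gradle.kts (module) - illustrative JaCoCo wiring, not a drop-in config
plugins {
    jacoco
}

tasks.register<JacocoReport>("jacocoDebugUnitTestReport") {
    dependsOn("testDebugUnitTest")
    reports {
        html.required.set(true)  // HTML report under build/reports/jacoco
        xml.required.set(false)
    }
    // executionData, classDirectories and sourceDirectories must be pointed at
    // the debug unit test outputs; the exact paths depend on your AGP version.
}
```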

Coverage Focus Areas

  1. All public methods should be tested
  2. Error handling paths must be covered
  3. Feature flag variations should be tested
  4. Lifecycle methods should be verified
  5. Observable streams and their error cases

Quick Reference for Module Adaptation

When using this standard for a specific module:

  1. Replace module references: Change :{module-name} to your actual module name (e.g., :booking, :flights, :check-in)
  2. Adapt component examples: Update the module components section to reflect your specific module's structure
  3. Customize test data: Create module-specific test data setup methods
  4. Module-specific patterns: Add any module-specific testing patterns or requirements
  5. Update imports: Ensure all import statements reflect your module's package structure

Gradle Commands Template:

# Compilation
./gradlew :{your-module}:compileDebugUnitTestKotlin

# Test execution
./gradlew :{your-module}:testDebugUnitTest --tests "*{YourTestClass}*"

# Full module test suite
./gradlew :{your-module}:testDebugUnitTest

APPENDIX: ADVANCED PATTERNS AND TROUBLESHOOTING

Verification Patterns

Method Call Verification

verify(view).showLoadingView()
verify(view, never()).showErrorView()
verify(view, times(2)).updateView(expectedViewModel) // use a specific object, never any()

InOrder Verification

For testing sequence of operations:

@Test
fun performAction_givenValidInput_callsMethodsInCorrectOrder() {
    // Arrange
    val inOrder = inOrder(view, provider)
    
    // Act
    subject.performAction()
    
    // Assert
    inOrder.verify(view).showLoadingView()
    inOrder.verify(provider).processData()
    inOrder.verify(view).hideLoadingView()
}

Argument Verification

verify(provider).saveData(argThat { 
    it.id == expectedId && it.name == expectedName 
})

Error Handling Test Patterns

Exception Testing

@Test
fun processData_givenNetworkError_showsErrorMessage() {
    // Arrange
    val networkError = NetworkException()
    `when`(provider.getData()).thenReturn(Observable.error(networkError))
    
    // Act
    subject.processData()
    
    // Assert - verify with a specific argument, not any()
    verify(view).showErrorMessage(networkError.message)
    verify(view, never()).showSuccessView()
}

Timeout Testing

@Test
fun getData_givenTimeout_handlesTimeoutGracefully() {
    // Arrange
    `when`(provider.getData()).thenReturn(Observable.never())
    
    // Act & Assert
    subject.getData()
        .test()
        .awaitDone(5, TimeUnit.SECONDS)
        .assertNotComplete()
}

Best Practices

Test Organization

  1. Group related tests in nested classes when appropriate
  2. Use descriptive test names that clearly indicate the scenario
  3. Keep tests focused on single behaviors
  4. Use parameterized tests for multiple similar scenarios
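Point 4 above can be prototyped without any test framework: a table-driven loop runs many similar scenarios through one assertion. A plain-Kotlin sketch (in a real suite you would use JUnit's parameterized runner; the data class, validation rule, and case values here are illustrative):

```kotlin
// One row per scenario: inputs plus the expected outcome
data class PassengerCase(val adults: Int, val infants: Int, val valid: Boolean)

// Illustrative business rule: at least one adult, and each infant
// must be accompanied by an adult
fun isValidPassengerMix(adults: Int, infants: Int): Boolean =
    adults > 0 && infants <= adults

fun main() {
    val cases = listOf(
        PassengerCase(adults = 1, infants = 0, valid = true),
        PassengerCase(adults = 2, infants = 2, valid = true),
        PassengerCase(adults = 1, infants = 2, valid = false),
        PassengerCase(adults = 0, infants = 0, valid = false)
    )
    // A single assertion exercises every row; the failure message names the row
    for (case in cases) {
        check(isValidPassengerMix(case.adults, case.infants) == case.valid) {
            "Failed for $case"
        }
    }
    println("All ${cases.size} cases passed")
}
```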

Mock Management

  1. Create mocks for all external dependencies
  2. Use lenient() for mocks that may not be called in all test scenarios
  3. Verify only the interactions that are relevant to the test
  4. Use verifyNoMoreInteractions() carefully to avoid brittle tests

Data Management

  1. Create test data in setup when used across multiple tests
  2. Use factory methods for complex object creation
  3. Keep test data minimal but sufficient for the test scenario
  4. Use meaningful values rather than random data
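Point 2 above in a minimal sketch: a factory with default arguments lets each test override only the field it cares about (the BookingData shape mirrors the example class later in this appendix; the default values are illustrative):

```kotlin
data class BookingData(
    val id: Int,
    val passengerCount: Int,
    val origin: String,
    val destination: String
)

// Factory method: sensible defaults keep each test short; named arguments
// override only what the scenario cares about
fun createBookingData(
    id: Int = 1,
    passengerCount: Int = 2,
    origin: String = "SIN",
    destination: String = "LAX"
) = BookingData(id, passengerCount, origin, destination)

fun main() {
    val default = createBookingData()
    val soloTrip = createBookingData(passengerCount = 1) // one relevant override
    println(default.origin)          // SIN
    println(soloTrip.passengerCount) // 1
}
```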

Assertion Strategies

  1. Use AssertJ for fluent assertions when available
  2. Test both positive and negative scenarios
  3. Verify not just return values but also side effects
  4. Include edge cases and boundary conditions

Example Complete Test Class

@RunWith(MockitoJUnitRunner::class)
class BookingHelperTest {

    @InjectMocks
    lateinit var subject: BookingHelper

    @Mock
    lateinit var bookingProvider: BookingProvider
    
    @Mock
    lateinit var validator: BookingValidator
    
    @Mock 
    lateinit var sessionProvider: BookingSessionProvider

    private lateinit var sampleBookingData: BookingData
    
    @Before
    fun setup() {
        sampleBookingData = BookingData(
            id = 1,
            passengerCount = 2,
            origin = "SIN",
            destination = "LAX"
        )
    }

    @Test
    fun validateBooking_givenValidData_returnsTrue() {
        // Arrange
        `when`(validator.isValid(sampleBookingData)).thenReturn(true)
        
        // Act
        val result = subject.validateBooking(sampleBookingData)
        
        // Assert
        assertThat(result).isTrue()
        verify(validator).isValid(sampleBookingData)
    }

    @Test
    fun validateBooking_givenInvalidData_returnsFalse() {
        // Arrange
        `when`(validator.isValid(sampleBookingData)).thenReturn(false)
        
        // Act
        val result = subject.validateBooking(sampleBookingData)
        
        // Assert
        assertThat(result).isFalse()
    }

    @Test
    fun processBooking_givenValidData_savesToSession() {
        // Arrange
        `when`(validator.isValid(sampleBookingData)).thenReturn(true)
        `when`(sessionProvider.saveBooking(sampleBookingData)).thenReturn(Observable.just(true))
        
        // Act
        subject.processBooking(sampleBookingData)
        
        // Assert - Use specific mock object, NOT any()
        verify(sessionProvider).saveBooking(sampleBookingData)
    }
}

CRITICAL REQUIREMENTS SUMMARY

🎯 MANDATORY WORKFLOW:

  1. STEP 1: Analyze actual class changes thoroughly
  2. STEP 2: Plan comprehensive test coverage strategy
  3. STEP 3: Implement tests following established patterns
  4. STEP 4: Optimize test implementation for readability and maintainability
  5. STEP 5: Run and debug tests until all pass
  6. STEP 6: Verify complete test suite integrity
  7. STEP 7: Generate HTML coverage report and open in browser

⚑ CRITICAL RULES:

  • Always follow the three-part naming convention
  • NEVER use any() in verify statements - use specific mock objects
  • Set up common test data in the setup method
  • Test both success and failure scenarios
  • Include feature flag variations where applicable
  • MANDATORY: Run and verify tests after generation
  • MANDATORY: Fix compilation and runtime errors immediately
  • MANDATORY: Generate and open HTML coverage report
  • Never consider test generation complete until all 7 steps are successful

πŸ”§ SUCCESS CRITERIA:

  • βœ… Compilation passes without errors
  • βœ… All tests execute successfully
  • βœ… 90%+ line coverage achieved
  • βœ… HTML report generated and opened
  • βœ… No regression in existing test suite
  • βœ… Follow established patterns and conventions

This comprehensive workflow ensures high-quality, maintainable unit tests that integrate seamlessly with the existing [Your Project] mobile application codebase.
