Use these specific prompt formats to leverage this comprehensive testing framework:
Generate unit tests for {ClassName} using #file:unit-test-generation-prompt.md
Example:
Generate unit tests for PaymentValidator using #file:unit-test-generation-prompt.md
Generate missing tests in {TestClassName} using #file:unit-test-generation-prompt.md
Example:
Generate missing tests in BookingHelperTest using #file:unit-test-generation-prompt.md
Generate missing tests in {TestClassName} for method "{methodName}" using #file:unit-test-generation-prompt.md
Example:
Generate missing tests in FlightSearchPresenterTest for method "validateBooking" using #file:unit-test-generation-prompt.md
For faster typing, use these shortcut formats:
1. {ClassName} #file:unit-test-generation-prompt.md
Example:
1. PaymentValidator #file:unit-test-generation-prompt.md
2. {TestClassName} #file:unit-test-generation-prompt.md
Example:
2. BookingHelperTest #file:unit-test-generation-prompt.md
3. {TestClassName} {methodName} #file:unit-test-generation-prompt.md
Example:
3. FlightSearchPresenterTest validateBooking #file:unit-test-generation-prompt.md
Note:
- Numbers 1, 2, 3 indicate the command type (new class, existing class, specific method)
- Replace {ClassName}, {TestClassName}, and {methodName} with your actual class and method names
- The AI will automatically follow the 6-step workflow and generate comprehensive, high-quality unit tests following [Your Project] mobile application standards
This document provides a comprehensive 6-step workflow for generating unit tests for Android modules following established conventions and patterns. This standard can be applied to any module within the [Your Project] mobile application.
Before writing any tests, thoroughly analyze the actual class to understand what has changed or what needs testing:
When testing a brand new class:
# Examine the new class structure
cat src/main/java/path/to/NewClass.kt
Document:
- All public methods and their signatures
- Constructor parameters and dependencies
- Business logic flows and conditional branches
- Error handling mechanisms
- Integration points with other classes
- Observable streams and async operations
When adding tests to existing functionality:
# Check what methods/logic have been added or modified
git diff HEAD~1 src/main/java/path/to/YourClass.kt
Key Questions to Answer:
- What new public methods were added?
- What existing methods had logic changes?
- Were new parameters added to existing methods?
- Are there new conditional branches or logic paths?
- Were new dependencies or collaborators introduced?
- Are there new feature flags or configuration changes?
Create a matrix to document the changes:
| Component | Change Type | Methods Affected | Dependencies Added | Logic Complexity |
|---|---|---|---|---|
| NewMethod | Addition | validatePayment() | PaymentValidator | Medium |
| ExistingMethod | Parameter Added | searchFlights() | MealPreferenceService | Low |
| ExistingMethod | Logic Branch | processBooking() | FeatureFlagService | High |
For each method, map out all possible execution paths:
// Example: Analyze this method
fun processBooking(booking: BookingData): Observable<BookingResult> {
return if (featureFlag.isNewFlowEnabled()) {
// Path A: New flow logic
newBookingProcessor.process(booking)
.doOnSuccess { saveToSession(it) }
.doOnError { logError(it) }
} else {
// Path B: Legacy flow logic
legacyBookingProcessor.process(booking)
.map { convertToNewFormat(it) }
}
}
Document Required Test Scenarios:
- ✅ Path A: Feature flag enabled + success case
- ✅ Path A: Feature flag enabled + error case
- ✅ Path B: Feature flag disabled + success case
- ✅ Path B: Feature flag disabled + error case
Identify all error scenarios that need testing:
- Network timeouts and failures
- Invalid input validation
- Business rule violations
- Database operation failures
- Authentication/authorization errors
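The error categories above can be modeled in plain Kotlin (no mocking framework needed) to reason about which failure each code path must absorb. This is a minimal sketch: `fetchFlights` and the `BookingError` hierarchy are hypothetical, not part of the project's actual API.

```kotlin
// Hypothetical failure types a booking call might surface (illustrative only)
sealed class BookingError : Exception() {
    object NetworkTimeout : BookingError()
    object InvalidInput : BookingError()
    object BusinessRuleViolation : BookingError()
}

// A hypothetical method under test: each error category maps to a distinct Result failure
fun fetchFlights(origin: String?): Result<List<String>> = when {
    origin == null -> Result.failure(BookingError.InvalidInput)
    origin == "DOWN" -> Result.failure(BookingError.NetworkTimeout) // simulated outage
    origin.length != 3 -> Result.failure(BookingError.BusinessRuleViolation)
    else -> Result.success(listOf("SQ123"))
}

fun main() {
    // One assertion per error scenario in the checklist above
    check(fetchFlights(null).exceptionOrNull() is BookingError.InvalidInput)
    check(fetchFlights("DOWN").exceptionOrNull() is BookingError.NetworkTimeout)
    check(fetchFlights("SINGAPORE").exceptionOrNull() is BookingError.BusinessRuleViolation)
    check(fetchFlights("SIN").isSuccess)
}
```

In a real test class these branches would be driven through mocks (e.g. `Observable.error(...)`), but enumerating them as distinct failure types first makes it harder to forget a scenario.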
Create NEW Test Class When:
- ✅ Testing a completely new class
- ✅ Testing a new module or component
- ✅ Major refactoring that changes the entire class structure
- ✅ The existing test class is already very large (>50 test methods)
Append to EXISTING Test Class When:
- ✅ Adding tests for new methods in an existing class
- ✅ Testing new logic paths in existing methods
- ✅ Adding edge cases for existing functionality
- ✅ Testing new parameters or variations of existing methods
- ✅ Adding feature flag variations to existing logic
// For new classes
{ClassName}Test.kt
// Examples:
PaymentValidatorTest.kt
FlightSearchHelperTest.kt
BookingSessionProviderTest.kt
All test methods MUST follow the three-part naming convention with optimized length:
{functionName}_{givenCondition}_{expectedResult}
CRITICAL: Balance conciseness with completeness - avoid both overly long names AND missing critical conditions
✅ GOOD Examples (Balanced - Clear and Complete):
validatePayment_givenValidCard_returnsTrue
validatePayment_givenNullCard_returnsFalse
validatePayment_givenExpiredCard_returnsFalse
validatePayment_givenInvalidCvv_returnsFalse
processBooking_givenValidDataAndNewFlag_savesToSession
processBooking_givenValidDataAndLegacyFlag_usesLegacyFlow
processBooking_givenNetworkError_handlesGracefully
searchFlights_givenValidOriginAndDestination_returnsResults
searchFlights_givenEmptyOriginButValidDestination_returnsError
isPriceChanged_givenSamePriceAndCurrency_returnsFalse
isPriceChanged_givenDifferentPriceButSameCurrency_returnsTrue
❌ AVOID (Too Long - Redundant Details):
validatePaymentMethodWithCreditCardInformation_givenValidCreditCardWithCorrectCVVAndNotExpired_returnsTrueAndProcessesSuccessfully
processBookingDataWithPassengerInformationAndFlightDetails_givenValidBookingDataWithAllRequiredFieldsPopulated_savesDataToSessionProviderSuccessfully
❌ AVOID (Too Short - Missing Critical Conditions):
validatePayment_givenCard_returnsTrue // Missing: what makes the card valid?
processBooking_givenData_saves // Missing: what type of data? saves where?
searchFlights_givenInput_returnsResults // Missing: what kind of input? valid or invalid?
isPriceChanged_givenPrice_returnsTrue // Missing: compared to what? what changed?
saveBooking_givenTrip_saves // Missing: what type of trip? one-way/return?
Optimization Guidelines:
- Include key distinguishing conditions: givenValidOriginAndDestination vs givenEmptyOrigin
- Specify important context: givenNewFlag vs givenLegacyFlag for feature flag tests
- Use abbreviations for common terms: Param instead of Parameter, Config instead of Configuration
- Drop redundant words: use givenNull instead of givenNullValue
- Use domain-specific terms: ExpiredCard instead of CardWithExpiredDate
- Keep essential differentiators: givenSamePriceAndCurrency vs givenDifferentPriceButSameCurrency
- Avoid combining unrelated conditions: don't use givenValidInput if you need to test specific validation aspects separately
MANDATORY: Cover critical edge cases to prevent crashes, incorrect logic, and broken flows
For each method, ensure tests cover these critical scenarios:
- Null input handling: Test with null parameters to prevent NullPointerException
- Empty collection handling: Test with empty lists, arrays, or sets
- Boundary values: Test minimum/maximum values, zero values, negative numbers
- Invalid state conditions: Test when objects are in unexpected states
- Network/IO failures: Test timeout, connection errors, and data corruption
- Concurrent access issues: Test thread safety where applicable
- Memory constraints: Test with large datasets or memory-limited scenarios
- Configuration edge cases: Test missing configurations, invalid settings
- Business rule violations: Test data that violates business logic constraints
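The first few items in this checklist (null input, empty/invalid input, boundary values) can be illustrated with a plain-Kotlin sketch. `parsePassengerCount` is a hypothetical helper invented for this example; the point is that every edge input gets an explicit assertion rather than relying on the happy path.

```kotlin
// Hypothetical helper under test (illustrative): parses a passenger count,
// falling back to a safe default and clamping to the valid 1..9 range
fun parsePassengerCount(raw: String?): Int {
    val n = raw?.trim()?.toIntOrNull() ?: return 1 // null/garbage input -> safe default
    return n.coerceIn(1, 9)                        // boundary values clamped, never crash
}

fun main() {
    check(parsePassengerCount(null) == 1)  // null input handling
    check(parsePassengerCount("") == 1)    // empty input handling
    check(parsePassengerCount("abc") == 1) // invalid input
    check(parsePassengerCount("0") == 1)   // lower boundary
    check(parsePassengerCount("-3") == 1)  // negative number
    check(parsePassengerCount("9") == 9)   // upper boundary
    check(parsePassengerCount("42") == 9)  // above maximum
}
```

In a JUnit test class each `check` would become its own named test method (e.g. `parsePassengerCount_givenNull_returnsDefault`), keeping one failure per edge case.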
CRITICAL RULE: DO NOT MODIFY OR ADJUST ACTUAL LOGIC CLASSES
- ✅ ONLY modify or adjust test classes
- ❌ NEVER modify the class under test (CUT)
- ❌ NEVER modify dependencies or collaborator classes
- ❌ NEVER modify data models or DTOs to "make tests easier"
- ❌ NEVER add public methods to classes just for testing
- ❌ NEVER change access modifiers (private → public) for testing
If tests reveal issues in the actual class:
- Document the issues in test comments
- Report findings to the development team
- Work around the issues using proper mocking techniques
- Focus on testing the current behavior "as-is"
For each method in the class, create a test plan:
// Method: validatePayment(card: CreditCard): Boolean
// Test Plan:
// 1. validatePayment_givenValidCard_returnsTrue
// 2. validatePayment_givenExpiredCard_returnsFalse
// 3. validatePayment_givenNullCard_returnsFalse // Edge case: null input
// 4. validatePayment_givenInvalidCVV_returnsFalse
// 5. validatePayment_givenEmptyCardNumber_returnsFalse // Edge case: empty string
// 6. validatePayment_givenFutureExpiryDate_returnsTrue // Boundary case
// 7. validatePayment_givenExactExpiryDate_handlesCorrectly // Boundary case
// Method: processBooking(booking: BookingData): Observable<BookingResult>
// Test Plan:
// 1. processBooking_givenValidBookingAndNewFlagEnabled_usesNewFlow
// 2. processBooking_givenValidBookingAndNewFlagDisabled_usesLegacyFlow
// 3. processBooking_givenNetworkError_handlesErrorGracefully
// 4. processBooking_givenInvalidBooking_returnsValidationError
// 5. processBooking_givenNullBooking_handlesGracefully // Edge case: null input
// 6. processBooking_givenEmptyBookingData_handlesGracefully // Edge case: empty data
// 7. processBooking_givenTimeoutError_doesNotCrash // Edge case: timeout
// 8. processBooking_givenLargeBookingData_handlesEfficiently // Edge case: large data
List all dependencies that need mocking:
// Dependencies identified in YourClass constructor:
class YourClass(
private val bookingProvider: BookingProvider, // Mock needed
private val sessionProvider: SessionProvider, // Mock needed
private val featureFlag: FeatureFlag, // Mock needed
private val schedulerConfig: SchedulerConfiguration // Mock needed
)
Plan the mock setup for different test scenarios:
// Common mocks (setup in @Before)
@Mock lateinit var bookingProvider: BookingProvider
@Mock lateinit var sessionProvider: SessionProvider
@Mock lateinit var featureFlag: FeatureFlag
// Test-specific mock behaviors
// For success scenarios: `when`(provider.getData()).thenReturn(Observable.just(data))
// For error scenarios: `when`(provider.getData()).thenReturn(Observable.error(exception))
// For feature flags: `when`(featureFlag.isEnabled()).thenReturn(true/false)
Plan what test data objects you'll need:
// Common test data (created in @Before or helper methods)
private lateinit var validBookingData: BookingData
private lateinit var invalidBookingData: BookingData
private lateinit var sampleFlightViewModel: FlightViewModelV2
private lateinit var expectedResult: BookingResult
// Helper method planning
private fun createValidBookingData(): BookingData { ... }
private fun createExpectedFlightSearchParams(): FlightSearchParams { ... }
Use this template when implementing new test classes:
@RunWith(MockitoJUnitRunner::class)
class {ClassName}Test {
@InjectMocks
lateinit var subject: {ClassName}
@Mock
lateinit var dependency1: Dependency1Type
@Mock
lateinit var dependency2: Dependency2Type
// Test data
private lateinit var sampleData: DataType
@Before
fun setup() {
// Initialize common test data
sampleData = createSampleData()
// Configure common mock behaviors
`when`(schedulerConfiguration.ioScheduler()).thenReturn(Schedulers.trampoline())
`when`(schedulerConfiguration.mainScheduler()).thenReturn(Schedulers.trampoline())
// Set up view if presenter
subject.setView(view)
}
@Test
fun methodName_givenCondition_expectedResult() {
// Arrange
// Act
// Assert
}
}
Focus on:
- View interactions and state changes
- Business logic coordination
- Error handling and user feedback
- Navigation flows
- Firebase logging
- Lifecycle management
@Test
fun onSearchClicked_givenValidInput_proceedsToFlightSearch() {
// Arrange
`when`(recentAirportStore.get().saveRecentAirports(any(), any(), any()))
.thenReturn(Observable.just(true))
// Act
subject.onSearchClicked("SIN", "LAX")
// Assert
verify(view).proceedToFlightSearch()
verify(scopeManager.get()).releaseFlightSearchComponent()
}
Focus on:
- Data transformation
- API call handling
- Error scenarios
- Caching behavior
- Observable streams
@Test
fun getSavedFlights_givenDatabaseError_returnsError() {
// Arrange
`when`(database.getSavedFlights()).thenReturn(Observable.error(DatabaseException()))
// Act
val result = subject.getSavedFlights().test()
// Assert
result.assertError(DatabaseException::class.java)
}
Focus on:
- Object creation correctness
- Property mapping accuracy
- Null handling
- Default value assignment
@Test
fun create_givenValidFlightData_mapsAllProperties() {
// Arrange
val flightViewModel = createSampleFlightViewModel()
// Act
val result = subject.create(flightViewModel, searchParams)
// Assert
assertThat(result.flightId).isEqualTo(flightViewModel.flightId)
assertThat(result.tripType).isEqualTo(searchParams.tripType)
}
Focus on:
- Algorithm correctness
- Edge cases
- Input validation
- Output formatting
@Test
fun formatDuration_given4Hours_returnsFormattedString() {
// Arrange
`when`(context.getString(R.string.duration_format, "4", "0"))
.thenReturn("4 hours")
// Act
val result = subject.formatDuration(14400)
// Assert
assertThat(result).isEqualTo("4 hours")
}
Focus on:
- Validation logic accuracy
- Required field checking
- Business rule enforcement
- Error condition handling
@Test
fun isComplete_givenMissingRequiredField_returnsFalse() {
// Arrange
val passenger = createPassengerWithMissingField()
// Act
val result = subject.isComplete(passenger)
// Assert
assertThat(result).isFalse()
}
❌ WRONG - This will cause Mockito errors:
verify(sessionProvider).saveData(any())
verify(converter).convert(any())
✅ CORRECT - Use specific mock objects:
@Mock
private lateinit var mockData: DataModel
@Test
fun testMethod_givenValidData_savesData() {
// Act
subject.processData(mockData)
// Assert - Use the actual mock object, not any()
verify(sessionProvider).saveData(mockData)
}
For complex nested objects, create expected objects explicitly:
private fun createExpectedFlightSearchParams(): FlightSearchParams {
return FlightSearchParams(
departure = FlightSearchParams.FlightSearchSegment(
airportCode = "SIN",
date = expectedDepartureDate
),
arrival = FlightSearchParams.FlightSearchSegment(
airportCode = "LAX",
date = expectedArrivalDate
),
passengerConfiguration = expectedPassengerConfig,
cabinClass = CabinClass.ECONOMY
)
}
After completing the initial test implementation, perform a comprehensive analysis to optimize the test code for readability, reusability, and maintainability while preserving all test logic.
CRITICAL RULE: Maintain identical test logic - optimization must not change test behavior or coverage
Scan for duplicate code patterns and extract them into reusable methods:
❌ BEFORE (Duplicated Setup):
@Test
fun saveBooking_givenOneWayTrip_savesToSession() {
// Arrange
val flightViewModel = FlightViewModelV2().apply {
flightId = "SQ123"
departureDate = "2023-12-01"
arrivalDate = null
tripType = TripType.ONE_WAY
origin = "SIN"
destination = "LAX"
}
// Act & Assert...
}
@Test
fun saveBooking_givenReturnTrip_savesToSession() {
// Arrange
val flightViewModel = FlightViewModelV2().apply {
flightId = "SQ123"
departureDate = "2023-12-01"
arrivalDate = "2023-12-08"
tripType = TripType.RETURN
origin = "SIN"
destination = "LAX"
}
// Act & Assert...
}
✅ AFTER (Extracted Helper Methods):
@Test
fun saveBooking_givenOneWayTrip_savesToSession() {
// Arrange
val flightViewModel = createOneWayFlightViewModel()
// Act & Assert...
}
@Test
fun saveBooking_givenReturnTrip_savesToSession() {
// Arrange
val flightViewModel = createReturnFlightViewModel()
// Act & Assert...
}
private fun createOneWayFlightViewModel(): FlightViewModelV2 {
return createBaseFlightViewModel().apply {
tripType = TripType.ONE_WAY
arrivalDate = null
}
}
private fun createReturnFlightViewModel(): FlightViewModelV2 {
return createBaseFlightViewModel().apply {
tripType = TripType.RETURN
arrivalDate = "2023-12-08"
}
}
private fun createBaseFlightViewModel(): FlightViewModelV2 {
return FlightViewModelV2().apply {
flightId = "SQ123"
departureDate = "2023-12-01"
origin = "SIN"
destination = "LAX"
}
}
Consolidate repetitive mock setups:
❌ BEFORE (Repeated Mock Setup):
@Test
fun testMethod1() {
`when`(schedulerConfig.ioScheduler()).thenReturn(Schedulers.trampoline())
`when`(schedulerConfig.mainScheduler()).thenReturn(Schedulers.trampoline())
`when`(dateFormatter.format(any())).thenReturn("2023-12-01")
// Test logic...
}
@Test
fun testMethod2() {
`when`(schedulerConfig.ioScheduler()).thenReturn(Schedulers.trampoline())
`when`(schedulerConfig.mainScheduler()).thenReturn(Schedulers.trampoline())
`when`(dateFormatter.format(any())).thenReturn("2023-12-01")
// Test logic...
}
✅ AFTER (Centralized Setup):
@Before
fun setup() {
setupCommonMocks()
setupTestData()
}
private fun setupCommonMocks() {
`when`(schedulerConfig.ioScheduler()).thenReturn(Schedulers.trampoline())
`when`(schedulerConfig.mainScheduler()).thenReturn(Schedulers.trampoline())
`when`(dateFormatter.format(any())).thenReturn("2023-12-01")
}
private fun setupTestData() {
baseFlightViewModel = createBaseFlightViewModel()
validBookingData = createValidBookingData()
}
Create reusable assertion methods for complex verifications:
❌ BEFORE (Repeated Assertion Logic):
@Test
fun testMethod1() {
// Act
subject.saveBooking(bookingData)
// Assert
val captor = argumentCaptor<CslSession>()
verify(sessionProvider).saveSession(captor.capture())
assertThat(captor.firstValue.flightId).isEqualTo("SQ123")
assertThat(captor.firstValue.origin).isEqualTo("SIN")
assertThat(captor.firstValue.destination).isEqualTo("LAX")
}
@Test
fun testMethod2() {
// Act
subject.saveBooking(differentBookingData)
// Assert
val captor = argumentCaptor<CslSession>()
verify(sessionProvider).saveSession(captor.capture())
assertThat(captor.firstValue.flightId).isEqualTo("SQ456")
assertThat(captor.firstValue.origin).isEqualTo("LAX")
assertThat(captor.firstValue.destination).isEqualTo("SIN")
}
✅ AFTER (Reusable Assertion Method):
@Test
fun testMethod1() {
// Act
subject.saveBooking(bookingData)
// Assert
verifySavedSession("SQ123", "SIN", "LAX")
}
@Test
fun testMethod2() {
// Act
subject.saveBooking(differentBookingData)
// Assert
verifySavedSession("SQ456", "LAX", "SIN")
}
private fun verifySavedSession(expectedFlightId: String, expectedOrigin: String, expectedDestination: String) {
val captor = argumentCaptor<CslSession>()
verify(sessionProvider).saveSession(captor.capture())
val savedSession = captor.firstValue
assertThat(savedSession.flightId).isEqualTo(expectedFlightId)
assertThat(savedSession.origin).isEqualTo(expectedOrigin)
assertThat(savedSession.destination).isEqualTo(expectedDestination)
}
Group related tests and add descriptive comments:
@RunWith(MockitoJUnitRunner::class)
class SaveAndCompareLoadingHelperTest {
// === Test Subject and Dependencies ===
@InjectMocks
lateinit var subject: SaveAndCompareLoadingHelper
@Mock
lateinit var flightConverter: FlightModelConverterV2
// ... other mocks
// === Test Data ===
private lateinit var baseFlightViewModel: FlightViewModelV2
private lateinit var validBookingData: BookingData
// === Setup Methods ===
@Before
fun setup() { /* ... */ }
// === Save Booking Data Tests ===
@Test
fun saveBookingData_givenOneWayTrip_savesToSession() { /* ... */ }
@Test
fun saveBookingData_givenReturnTrip_savesToSession() { /* ... */ }
// === Price Comparison Tests ===
@Test
fun isPriceChanged_givenSamePrice_returnsFalse() { /* ... */ }
@Test
fun isPriceChanged_givenDifferentPrice_returnsTrue() { /* ... */ }
// === Helper Methods ===
private fun createOneWayFlightViewModel(): FlightViewModelV2 { /* ... */ }
private fun verifySavedSession(expectedData: BookingData) { /* ... */ }
}
Extract magic numbers and strings into meaningful constants:
❌ BEFORE (Magic Numbers):
@Test
fun testMethod() {
val price1 = BigDecimal("299.50")
val price2 = BigDecimal("350.75")
// Test logic...
}
✅ AFTER (Named Constants):
companion object {
private val ECONOMY_PRICE = BigDecimal("299.50")
private val BUSINESS_PRICE = BigDecimal("350.75")
private const val SAMPLE_FLIGHT_ID = "SQ123"
private const val ORIGIN_AIRPORT = "SIN"
private const val DESTINATION_AIRPORT = "LAX"
}
@Test
fun testMethod() {
val price1 = ECONOMY_PRICE
val price2 = BUSINESS_PRICE
// Test logic...
}
Create builder patterns for complex test objects:
class FlightViewModelBuilder {
private var flightId: String = "SQ123"
private var origin: String = "SIN"
private var destination: String = "LAX"
private var tripType: TripType = TripType.ONE_WAY
private var departureDate: String = "2023-12-01"
private var arrivalDate: String? = null
fun withFlightId(flightId: String) = apply { this.flightId = flightId }
fun withOrigin(origin: String) = apply { this.origin = origin }
fun withDestination(destination: String) = apply { this.destination = destination }
fun withTripType(tripType: TripType) = apply { this.tripType = tripType }
fun withReturnDate(arrivalDate: String) = apply { this.arrivalDate = arrivalDate }
fun build(): FlightViewModelV2 {
return FlightViewModelV2().apply {
this.flightId = this@FlightViewModelBuilder.flightId
this.origin = this@FlightViewModelBuilder.origin
this.destination = this@FlightViewModelBuilder.destination
this.tripType = this@FlightViewModelBuilder.tripType
this.departureDate = this@FlightViewModelBuilder.departureDate
this.arrivalDate = this@FlightViewModelBuilder.arrivalDate
}
}
}
// Usage in tests:
@Test
fun testMethod() {
val flightViewModel = FlightViewModelBuilder()
.withFlightId("SQ456")
.withTripType(TripType.RETURN)
.withReturnDate("2023-12-08")
.build()
// Test logic...
}
After optimization, ensure:
- Logic Preservation: All test logic remains exactly the same
- Coverage Maintenance: No reduction in test coverage
- Compilation: All tests still compile without errors
- Naming Consistency: All optimized names follow the three-part convention
- Readability: Code is more readable and self-documenting
# Verify tests still compile after optimization
./gradlew :{module}:compileDebugUnitTestKotlin
# Verify tests still pass after optimization
./gradlew :{module}:testDebugUnitTest --tests "*{TestClassName}*"
MANDATORY CHECK: If any test fails after optimization, immediately revert changes and re-optimize more carefully to preserve exact test behavior.
./gradlew :{module-name}:compileDebugUnitTestKotlin
Type Mismatches:
// Common issue: String vs Int for resource IDs
// ❌ Wrong
fareFamilyName = "Economy Standard" // Should be Int for some ViewModels
// ✅ Correct
fareFamilyName = 2131689472 // Int resource ID
Import Issues with Nested Classes:
// ❌ Wrong
import com.singaporeair.mobile.booking.FlightSearchSegment
// ✅ Correct
import com.singaporeair.mobile.booking.FlightSearchParams
// Then use: FlightSearchParams.FlightSearchSegment
./gradlew :{module-name}:testDebugUnitTest --tests "*{TestClassName}*"
Mockito ArgumentMatcher Errors:
any(...) must not be null
InvalidUseOfMatchersException
Fix: Use typed matchers in stubbing, specific objects in verify:
// ❌ Wrong
`when`(converter.convert(any())).thenReturn(result)
verify(sessionProvider).saveData(any())
// ✅ Correct
`when`(converter.convert(any<FlightViewModelV2>())).thenReturn(result)
verify(sessionProvider).saveData(specificMockObject)
Unnecessary Stubbing Warnings: When you see "Unnecessary stubbings detected":
- Remove unused stubs (preferred):
// Remove lines like this if the test doesn't reach that code path:
// `when`(helper.formatDate(...)).thenReturn(...)
- Use lenient mocking for complex classes:
@MockitoSettings(strictness = Strictness.LENIENT)
RxJava Scheduler Issues:
@Before
fun setup() {
`when`(schedulerConfiguration.ioScheduler()).thenReturn(Schedulers.trampoline())
`when`(schedulerConfiguration.mainScheduler()).thenReturn(Schedulers.trampoline())
}When tests fail, follow this debugging sequence:
1. Check compilation:
./gradlew :{module}:compileDebugUnitTestKotlin
2. Run the specific test:
./gradlew :{module}:testDebugUnitTest --tests "*YourTestClass*" --info
3. Get verbose output:
./gradlew :{module}:testDebugUnitTest --tests "*YourTestClass*" --stacktrace --debug
./gradlew :{module-name}:testDebugUnitTest
- All tests compile without errors
- All tests execute without runtime failures
- No unnecessary Mockito stubbing warnings
- All verify() statements use concrete objects, not any()
- Date formatting matches expected patterns
- Mock objects are properly configured for all code paths
- No test interference or flaky behavior
When adding tests to existing classes:
1. Run existing tests first:
./gradlew :{module}:testDebugUnitTest --tests "*ExistingTestClass*"
2. Add new tests incrementally
3. Validate full integration:
./gradlew :{module}:testDebugUnitTest
Ensure tests run efficiently:
- Individual test methods should complete in <5 seconds
- Full test class should complete in <60 seconds
- No memory leaks or excessive resource usage
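The per-test time budget above can be enforced mechanically. In JUnit 4 the usual tool is the `timeout` parameter of `@Test`; the stdlib-only sketch below shows the same idea with `measureTimeMillis`. `buildSearchIndex` is a hypothetical workload, and the 5-second threshold mirrors the guideline above but should be tuned for your CI hardware.

```kotlin
import kotlin.system.measureTimeMillis

// Hypothetical workload standing in for a real method under test (illustrative only)
fun buildSearchIndex(size: Int): Map<Int, String> =
    (0 until size).associateWith { "flight-$it" }

fun main() {
    val elapsed = measureTimeMillis {
        repeat(100) { buildSearchIndex(10_000) }
    }
    // Fail loudly if the work exceeds the <5 s per-test budget
    check(elapsed < 5_000) { "too slow: ${elapsed}ms" }
    println("completed in ${elapsed}ms")
}
```

In a test class, `@Test(timeout = 5000)` achieves the same guard without manual timing and also catches hangs, not just slow code.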
After all tests pass successfully, generate a professional HTML coverage report:
Use this exact HTML template format for all test coverage reports to maintain consistency:
<!DOCTYPE html>
<html>
<head>
<title>Test Coverage Report - {ClassName}</title>
<style>
body {
font-family: 'Segoe UI', Tahoma, Geneva, Verdana, sans-serif;
margin: 0;
padding: 0;
background-color: #f8f9fa;
}
.container {
max-width: 1200px;
margin: 0 auto;
padding: 20px;
}
.header {
background: linear-gradient(135deg, #667eea 0%, #764ba2 100%);
color: white;
padding: 30px;
border-radius: 10px;
text-align: center;
margin-bottom: 30px;
box-shadow: 0 4px 6px rgba(0, 0, 0, 0.1);
}
.header h1 {
margin: 0;
font-size: 2.5em;
font-weight: 300;
}
.header p {
margin: 10px 0 0 0;
opacity: 0.9;
}
.metrics {
display: grid;
grid-template-columns: repeat(auto-fit, minmax(300px, 1fr));
gap: 20px;
margin-bottom: 30px;
}
.metric-card {
background: white;
padding: 25px;
border-radius: 10px;
box-shadow: 0 2px 10px rgba(0, 0, 0, 0.1);
border-left: 5px solid #28a745;
}
.metric-card h3 {
margin: 0 0 15px 0;
color: #333;
font-size: 1.3em;
}
.metric-card p {
margin: 8px 0;
font-size: 1.1em;
}
.content-section {
background: white;
padding: 25px;
border-radius: 10px;
margin-bottom: 20px;
box-shadow: 0 2px 10px rgba(0, 0, 0, 0.1);
}
.content-section h3 {
margin: 0 0 20px 0;
color: #333;
font-size: 1.4em;
border-bottom: 2px solid #f0f0f0;
padding-bottom: 10px;
}
.content-section ul {
list-style: none;
padding: 0;
}
.content-section li {
padding: 8px 0;
border-bottom: 1px solid #f8f9fa;
font-size: 1.05em;
}
.content-section li:last-child {
border-bottom: none;
}
.success {
color: #28a745;
font-weight: 600;
}
.coverage-high {
color: #28a745;
font-weight: 700;
font-size: 1.2em;
}
.emoji {
font-size: 1.2em;
margin-right: 8px;
}
.footer {
text-align: center;
margin-top: 40px;
padding: 20px;
color: #6c757d;
font-style: italic;
}
</style>
</head>
<body>
<div class="container">
<div class="header">
<h1>Test Coverage Report</h1>
<h2>{ClassName}</h2>
<p>Generated on: {timestamp}</p>
</div>
<div class="metrics">
<div class="metric-card">
<h3><span class="emoji">📊</span>Test Execution Summary</h3>
<p><strong>{totalTests}</strong> Total Tests Executed</p>
<p class="success"><strong>100%</strong> Pass Rate</p>
<p><strong>0</strong> Failed Tests</p>
<p><strong>0</strong> Skipped Tests</p>
</div>
<div class="metric-card">
<h3><span class="emoji">🎯</span>Coverage Metrics</h3>
<p class="coverage-high"><strong>~{coveragePercent}%</strong> Line Coverage</p>
<p class="coverage-high"><strong>100%</strong> Method Coverage</p>
<p><strong>{methodsCovered}/{totalMethods}</strong> Methods Tested</p>
<p><strong>{branchesCovered}</strong> Logic Branches Covered</p>
</div>
</div>
<div class="content-section">
<h3><span class="emoji">✅</span>Test Cases Executed</h3>
<ul>
{testMethodsList}
</ul>
</div>
<div class="content-section">
<h3><span class="emoji">🧩</span>Logic Paths Covered</h3>
<ul>
<li>✅ Valid input processing and success scenarios</li>
<li>✅ Error handling and exception scenarios</li>
<li>✅ Edge cases and boundary condition validation</li>
<li>✅ Null and empty input handling</li>
<li>✅ Feature flag and configuration variations</li>
<li>✅ Integration point and dependency testing</li>
<li>✅ Observable stream success and error flows</li>
<li>✅ Mock interaction verification patterns</li>
</ul>
</div>
<div class="content-section">
<h3><span class="emoji">🔧</span>Code Quality & Standards</h3>
<ul>
<li>✅ All tests follow three-part naming convention (method_condition_result)</li>
<li>✅ Proper mock verification patterns implemented (no any() in verify statements)</li>
<li>✅ Comprehensive assertion coverage with meaningful test data</li>
<li>✅ RxJava streams properly tested with TestObserver patterns</li>
<li>✅ AndroidX Test framework compliance and best practices</li>
<li>✅ Mockito strict stubbing validation passed</li>
<li>✅ No test interdependencies or flaky test behaviors</li>
<li>✅ Proper test data setup and teardown management</li>
</ul>
</div>
<div class="content-section">
<h3><span class="emoji">📈</span>Test Generation Summary</h3>
<ul>
<li><strong>Original Tests:</strong> {originalTestCount} existing test methods</li>
<li><strong>New Tests Added:</strong> {newTestCount} additional test methods</li>
<li><strong>Total Coverage:</strong> {totalTests} comprehensive test scenarios</li>
<li><strong>Quality Score:</strong> <span class="success">A+ (Excellent)</span></li>
<li><strong>Compliance:</strong> <span class="success">100% [Your Project] Mobile Standards</span></li>
<li><strong>Maintainability:</strong> <span class="success">High - Clear naming and structure</span></li>
<li><strong>Integration:</strong> <span class="success">Seamless with existing test suite</span></li>
</ul>
</div>
<div class="footer">
<p>Generated using [Your Project] Mobile Android Unit Test Generation Standard v2.0</p>
<p>All tests validated against established patterns and coding conventions</p>
</div>
</div>
</body>
</html>
// Generate report with actual metrics
val reportContent = generateHtmlReport(
className = "YourClassName",
totalTests = actualTestCount,
coveragePercent = estimatedCoverage,
testMethods = listOfTestMethodNames,
timestamp = LocalDateTime.now()
)
// Save to file
File("test-coverage-report-{ClassName}.html").writeText(reportContent)
# Open the generated HTML report
open test-coverage-report-{ClassName}.html
Quantitative Metrics:
- Total tests executed: {X} tests
- Pass rate: 100%
- Estimated line coverage: ~{X}%
- Method coverage: 100%
Qualitative Analysis:
- Business logic path coverage
- Error scenario validation
- Edge case handling
- Integration point testing
- Mock interaction verification
Quality Assessment:
- Test naming convention compliance
- Mock usage patterns and verification quality
- Test data management effectiveness
- Assertion coverage and quality
📊 Test Metrics Summary
- 14 Total Tests ✅
- 100% Pass Rate ✅
- ~95% Line Coverage ✅
- 100% Method Coverage ✅
🧩 Logic Paths Covered
✅ Valid one-way trip processing
✅ Valid return trip processing
✅ Invalid date handling
✅ Edge case validation
✅ Error scenario testing
✅ Feature flag variations
🎯 Test Quality Assessment
✅ Three-part naming convention followed
✅ Proper mock verification patterns
✅ Comprehensive assertion coverage
✅ Realistic test data usage
✅ No any() usage in verify statements
The booking module consists of the following major components (adapt for your specific module):
- Flight Search: Flight search functionality including flexible dates, passenger selection, and search parameters
- Flight Selection: Flight selection logic and trip summary
- Passenger Details: Passenger information handling, validation, and management
- Review Booking: Booking review, seat selection, and modifications
- Payment Integration: Payment processing and validation
- Save and Compare Flights: Flight saving and comparison functionality
- CIB (One-way Round-trip Booking): CIB-specific booking flows
- Session Management: Booking session handling and extension
- Message Handling: Booking-related messaging and notifications
- Presenter Tests: MVP pattern presenter logic testing
- Provider Tests: Data provider and service layer testing
- Factory Tests: Object creation and transformation testing
- Helper Tests: Utility and helper class testing
- Converter Tests: Data conversion and mapping testing
- Validator Tests: Input validation and business rule testing
All test methods MUST follow the three-part naming convention separated by underscores:
{functionName}_{givenCondition}_{expectedResult}
- `onViewResumed_givenCIBFeatureFlagIsTrue_seesDialog`
- `getSavedFlights_givenNoSavedFlights_verifyDisplayNoFlightsView`
- `deleteFlight_givenError_verifyViewDeleteSavedFlightNotCalled`
- `onSearchClicked_givenExceptionOnStore_doesNotProceedToFlightSearch`
- `checkFlightAvailability_givenFlightNotAvailableAndFlagOff_clearsDestinationAirport`
- `getIsComplete_typeIsAdultAndAllMandatoryFieldsArePresent_returnsTrue`
- Use the actual method name being tested
- For lifecycle methods: `onViewResumed`, `onViewDestroy`, `setUp`
- For click handlers: `onSearchClicked`, `onUpdateSavedFlightClick`
- For getters: `getSavedFlights`, `getIsComplete`
- For business logic: `checkFlightAvailability`, `deleteFlight`
Given-condition patterns:
- `given{Condition}`: Describes the input state or parameters
- `givenNo{Entity}`: When the entity is empty or null
- `given{Flag}True/False`: For feature flag conditions
- `givenError`: When error conditions are tested
- `givenException`: When exception handling is tested

Expected-result patterns:
- `returns{Value}`: For methods returning values
- `verify{Action}Called`: For verifying method calls
- `verify{Action}NotCalled`: For verifying methods are not called
- `shows{View/Dialog}`: For UI state changes
- `clears{Field}`: For field clearing actions
- `proceeds{Action}`: For navigation or flow continuation
Many classes use Lazy dependencies:

```kotlin
@Mock
protected lateinit var dependency: Lazy<DependencyType>

@Before
fun setup() {
    `when`(dependency.get()).thenReturn(mock(DependencyType::class.java))
}
```

Always configure schedulers for synchronous testing:
```kotlin
@Before
fun setup() {
    `when`(schedulerConfiguration.ioScheduler()).thenReturn(Schedulers.trampoline())
    `when`(schedulerConfiguration.mainScheduler()).thenReturn(Schedulers.trampoline())
}
```

Use RxJava TestObserver for stream testing:
```kotlin
@Test
fun getData_givenSuccessfulCall_emitsData() {
    // Arrange
    val expectedData = createTestData()
    `when`(provider.getData()).thenReturn(Observable.just(expectedData))
    // Act
    val testObserver = subject.getData().test()
    // Assert
    testObserver.assertComplete()
    testObserver.assertValue(expectedData)
    testObserver.assertNoErrors()
}
```

```kotlin
private lateinit var departureDate: LocalDate
private lateinit var returnDate: LocalDate

@Before
fun setup() {
    departureDate = LocalDate.of(2018, 11, 19)
    returnDate = LocalDate.of(2018, 11, 29)
    `when`(dateFormatter.formatLocalDate("2018-11-19", "yyyy-MM-dd"))
        .thenReturn(departureDate)
}
```

```kotlin
@Mock
private lateinit var originAirport: Airport

@Mock
private lateinit var destinationAirport: Airport

@Before
fun setup() {
    `when`(airportProvider.findAirport(FLIGHT_SEARCH, "SIN"))
        .thenReturn(Observable.just(AirportSearchResult(true, originAirport)))
}
```

```kotlin
private lateinit var passengerCountModel: PassengerCountModel

@Before
fun setup() {
    passengerCountModel = PassengerCountModel(
        adultCount = 2,
        childCount = 1,
        infantCount = 0
    )
}
```

Many tests include feature flag conditions:
```kotlin
@Test
fun performAction_givenFeatureFlagEnabled_executesNewFlow() {
    // Arrange
    `when`(featureFlag.isNewFlowEnabled()).thenReturn(true)
    // Act
    subject.performAction()
    // Assert
    verify(view).showNewFlowView()
}

@Test
fun performAction_givenFeatureFlagDisabled_executesLegacyFlow() {
    // Arrange
    `when`(featureFlag.isNewFlowEnabled()).thenReturn(false)
    // Act
    subject.performAction()
    // Assert
    verify(view).showLegacyFlowView()
}
```

Test Firebase event logging:

```kotlin
@Test
fun onBookingComplete_givenSuccessfulBooking_logsFirebaseEvent() {
    // Act
    subject.onBookingComplete()
    // Assert
    verify(firebaseLogProvider).logEvent(FirebaseEventType.BOOKING_COMPLETE)
}
```

- Business Logic Classes: 90%+ line coverage
- Presenters: 85%+ line coverage
- Providers: 85%+ line coverage
- Helpers/Utilities: 95%+ line coverage
- Validators: 95%+ line coverage
- All public methods should be tested
- Error handling paths must be covered
- Feature flag variations should be tested
- Lifecycle methods should be verified
- Observable streams and their error cases
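The coverage targets above can also be enforced automatically instead of being checked by hand. A minimal sketch using the standard Gradle JaCoCo plugin in Kotlin DSL (the task name is hypothetical, and your project's coverage tooling may differ; Android modules additionally need `executionData` and `classDirectories` pointed at the debug unit test outputs):

```kotlin
// build.gradle.kts (module level) - illustrative configuration only
plugins {
    jacoco
}

tasks.register<JacocoCoverageVerification>("verifyUnitTestCoverage") {
    dependsOn("testDebugUnitTest")
    violationRules {
        rule {
            // Fail the build when line coverage drops below 90%
            limit {
                counter = "LINE"
                minimum = "0.90".toBigDecimal()
            }
        }
    }
}
```

Wiring this task into CI makes the "90%+ line coverage" success criterion a build failure rather than a manual review step.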
When using this standard for a specific module:
- Replace module references: Change `:{module-name}` to your actual module name (e.g., `:booking`, `:flights`, `:check-in`)
- Adapt component examples: Update the module components section to reflect your specific module's structure
- Customize test data: Create module-specific test data setup methods
- Module-specific patterns: Add any module-specific testing patterns or requirements
- Update imports: Ensure all import statements reflect your module's package structure
Gradle Commands Template:

```shell
# Compilation
./gradlew :{your-module}:compileDebugUnitTestKotlin

# Test execution
./gradlew :{your-module}:testDebugUnitTest --tests "*{YourTestClass}*"

# Full module test suite
./gradlew :{your-module}:testDebugUnitTest
```

```kotlin
verify(view).showLoadingView()
verify(view, never()).showErrorView()
verify(view, times(2)).updateView(any())
```

For testing sequence of operations:
```kotlin
@Test
fun performAction_givenValidInput_callsMethodsInCorrectOrder() {
    // Arrange
    val inOrder = inOrder(view, provider)
    // Act
    subject.performAction()
    // Assert
    inOrder.verify(view).showLoadingView()
    inOrder.verify(provider).processData()
    inOrder.verify(view).hideLoadingView()
}
```

```kotlin
verify(provider).saveData(argThat {
    it.id == expectedId && it.name == expectedName
})
```

```kotlin
@Test
fun processData_givenNetworkError_showsErrorMessage() {
    // Arrange
    `when`(provider.getData()).thenReturn(Observable.error(NetworkException()))
    // Act
    subject.processData()
    // Assert
    verify(view).showErrorMessage(any())
    verify(view, never()).showSuccessView()
}
```

```kotlin
@Test
fun getData_givenTimeout_handlesTimeoutGracefully() {
    // Arrange
    `when`(provider.getData()).thenReturn(Observable.never())
    // Act & Assert
    subject.getData()
        .test()
        .awaitDone(5, TimeUnit.SECONDS)
        .assertNotComplete()
}
```

- Group related tests in nested classes when appropriate
- Use descriptive test names that clearly indicate the scenario
- Keep tests focused on single behaviors
- Use parameterized tests for multiple similar scenarios
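The parameterized-test idea can be sketched as a table of input/expected pairs, which keeps every scenario visible at a glance. A minimal data-driven sketch in plain Kotlin (the passenger-count rule below is hypothetical; in JUnit you would typically use a parameterized runner rather than a hand-rolled loop):

```kotlin
// Hypothetical validation rule: passenger count must be between 1 and 9
fun isValidPassengerCount(count: Int): Boolean = count in 1..9

fun main() {
    // Each case pairs an input with its expected result
    val cases = listOf(
        0 to false,   // below minimum
        1 to true,    // lower boundary
        9 to true,    // upper boundary
        10 to false   // above maximum
    )
    for ((input, expected) in cases) {
        check(isValidPassengerCount(input) == expected) {
            "isValidPassengerCount($input) should be $expected"
        }
    }
    println("All ${cases.size} cases passed")
}
```

Note how the boundary values (1 and 9) get explicit rows; a table format makes missing edge cases easy to spot in review.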
- Create mocks for all external dependencies
- Use `lenient()` for mocks that may not be called in all test scenarios
- Verify only the interactions that are relevant to the test
- Use `verifyNoMoreInteractions()` carefully to avoid brittle tests
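The `lenient()` call matters when a shared `setup()` method stubs calls that only some tests exercise; under Mockito's strict-stubs mode, the unused stub would otherwise fail those tests with `UnnecessaryStubbingException`. A sketch, assuming strict stubbing is enabled and using placeholder mock names:

```kotlin
import org.mockito.Mockito.lenient
import org.mockito.Mockito.`when`

@Before
fun setup() {
    // Lenient stub: some tests in this class never call isNewFlowEnabled(),
    // and lenient() stops strict stubbing from flagging that as an error
    lenient().`when`(featureFlag.isNewFlowEnabled()).thenReturn(false)

    // Regular (strict) stub: every test is expected to exercise this
    `when`(schedulerConfiguration.ioScheduler()).thenReturn(Schedulers.trampoline())
}
```

Prefer strict stubs by default and reach for `lenient()` only for genuinely shared setup, so unused stubbing still surfaces as a signal of dead test code.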
- Create test data in setup when used across multiple tests
- Use factory methods for complex object creation
- Keep test data minimal but sufficient for the test scenario
- Use meaningful values rather than random data
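Factory methods with default parameters keep complex object creation in one place while letting each test override only the field it cares about. A minimal sketch (the `BookingData` fields mirror the example elsewhere in this document, but this particular factory is hypothetical):

```kotlin
data class BookingData(
    val id: Int,
    val passengerCount: Int,
    val origin: String,
    val destination: String
)

// Factory with meaningful defaults; tests override only the relevant field
fun createBookingData(
    id: Int = 1,
    passengerCount: Int = 2,
    origin: String = "SIN",
    destination: String = "LAX"
) = BookingData(id, passengerCount, origin, destination)

fun main() {
    // A test about passenger limits only needs to vary passengerCount
    val oversizedBooking = createBookingData(passengerCount = 10)
    check(oversizedBooking.passengerCount == 10)
    check(oversizedBooking.origin == "SIN") // defaults still apply
}
```

Because callers name only the fields they change, adding a new field to `BookingData` later means updating one factory instead of every test.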
- Use AssertJ for fluent assertions when available
- Test both positive and negative scenarios
- Verify not just return values but also side effects
- Include edge cases and boundary conditions
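AssertJ's fluent style chains related checks on a single value and produces richer failure messages than separate JUnit asserts. A sketch assuming AssertJ is on the test classpath (`result` and `savedFlights` are placeholder values, not names from this codebase):

```kotlin
import org.assertj.core.api.Assertions.assertThat

// Chained checks on one value read as a single sentence
assertThat(result.passengerCount)
    .isGreaterThan(0)
    .isLessThanOrEqualTo(9)

// Collection assertions verify size and content together,
// covering the side effect (saved flights) as well as the return value
assertThat(savedFlights)
    .hasSize(2)
    .extracting("origin")
    .containsExactly("SIN", "SIN")
```

When one chained assertion fails, AssertJ reports the actual and expected values in context, which is usually enough to diagnose the failure without a debugger.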
```kotlin
@RunWith(MockitoJUnitRunner::class)
class BookingHelperTest {

    @InjectMocks
    lateinit var subject: BookingHelper

    @Mock
    lateinit var bookingProvider: BookingProvider

    @Mock
    lateinit var validator: BookingValidator

    @Mock
    lateinit var sessionProvider: BookingSessionProvider

    private lateinit var sampleBookingData: BookingData

    @Before
    fun setup() {
        sampleBookingData = BookingData(
            id = 1,
            passengerCount = 2,
            origin = "SIN",
            destination = "LAX"
        )
    }

    @Test
    fun validateBooking_givenValidData_returnsTrue() {
        // Arrange
        `when`(validator.isValid(sampleBookingData)).thenReturn(true)
        // Act
        val result = subject.validateBooking(sampleBookingData)
        // Assert
        assertThat(result).isTrue()
        verify(validator).isValid(sampleBookingData)
    }

    @Test
    fun validateBooking_givenInvalidData_returnsFalse() {
        // Arrange
        `when`(validator.isValid(sampleBookingData)).thenReturn(false)
        // Act
        val result = subject.validateBooking(sampleBookingData)
        // Assert
        assertThat(result).isFalse()
    }

    @Test
    fun processBooking_givenValidData_savesToSession() {
        // Arrange
        `when`(validator.isValid(sampleBookingData)).thenReturn(true)
        `when`(sessionProvider.saveBooking(sampleBookingData)).thenReturn(Observable.just(true))
        // Act
        subject.processBooking(sampleBookingData)
        // Assert - use the specific mock object, NOT any()
        verify(sessionProvider).saveBooking(sampleBookingData)
    }
}
```

🎯 MANDATORY WORKFLOW:
- STEP 1: Analyze actual class changes thoroughly
- STEP 2: Plan comprehensive test coverage strategy
- STEP 3: Implement tests following established patterns
- STEP 4: Optimize test implementation for readability and maintainability
- STEP 5: Run and debug tests until all pass
- STEP 6: Verify complete test suite integrity
- STEP 7: Generate HTML coverage report and open in browser
⚡ CRITICAL RULES:
- Always follow the three-part naming convention
- NEVER use any() in verify statements - use specific mock objects
- Set up common test data in the setup method
- Test both success and failure scenarios
- Include feature flag variations where applicable
- MANDATORY: Run and verify tests after generation
- MANDATORY: Fix compilation and runtime errors immediately
- MANDATORY: Generate and open HTML coverage report
- Never consider test generation complete until all 7 steps are successful
🧐 SUCCESS CRITERIA:
- ✅ Compilation passes without errors
- ✅ All tests execute successfully
- ✅ 90%+ line coverage achieved
- ✅ HTML report generated and opened
- ✅ No regression in existing test suite
- ✅ Follow established patterns and conventions
This comprehensive workflow ensures high-quality, maintainable unit tests that integrate seamlessly with the existing [Your Project] mobile application codebase.