Release V9.2.7
System Improvements
Comprehensive system enhancements to improve operational stability, security, and overall functionality:
- Terminal Password Rotation Control
- Selective Terminal Password Rotation Based on Version (UP-563)
- System Monitoring Enhancement for Stalled Jobs
- Webhook Notification Display Optimization
- Improved Error Handling in Extended Statement Table Generation (UP-687)
- Improved AVS Response Code Mapping for TSYS Integration (GA-1058)
- Enhanced Error Logging in Reporting API (UP-692)
Merchant Statement Processing Improvements
Enhanced merchant statement management features for better financial reporting and account management:
- Improved Merchant Statement Hold Management
System Performance Improvements
Performance optimizations to enhance system stability and responsiveness:
- Merchant Statement Query Optimization (UP-643)
- Optimized Merchant Statement Extended Query Performance (UP-709)
API Improvements
Major API enhancements providing improved validation, error handling, and integration capabilities:
- Increased Validation Limits for Onboarding API Estimate Fields
- Enhanced TSYS Integration with Country of Origin Support (GA-1366)
- Onboarding API Validation Improvements (GA-1446)
- Improved Error Messages for Date Validation in API (GA-900)
- Expiration Date Validation in Account Verification Requests (GA-1403)
Bug Fixes
Critical fixes to ensure proper system operation and data accuracy:
- Fixed Passthrough Fees Total Calculation in Deposit Statement (UP-624)
- Volume Markup Fee Calculation Fix (GA-1459)
- Corrected fee charging during scheme transition (UP-700)
- Fixed Mastercard Card Type Preservation in Elavon-EU Integration
- Fixed D48 error handling for contact cards on proxy integration (GA-1430)
Transaction Processing Improvements
Significant enhancements to strengthen transaction security, reliability, and processing capabilities:
- Automatic Transaction Void for System Timeout (UP-475)
- Improved TSYS File Processing for Cost Calculation
Terminal Password Rotation Control
Context: Terminals could occasionally lose connection to the gateway after months of operation due to issues with the automatic password rotation mechanism, which runs every 90 days.
Solution: Implemented a system-level setting that allows administrators to enable or disable terminal password rotation functionality through the UI, preventing connection failures caused by password rotation issues.
Impact: System administrators can now control terminal password rotation behavior from a centralized UI, allowing them to quickly disable rotation when issues arise without requiring hotfixes or server code changes.
- Implementation Details
- Testing & Validation
- Database Changes
System Changes Overview
- Database Changes: Added new system property 'unipay.system.terminal-password-rotation-enabled' with default value 'true'
- Configuration Changes: Added checkbox control in System Settings UI
- Code Changes: Modified terminal credential generation logic to check the property value before performing password rotation
- Filesystem Changes: None
- Data Migration: None
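The code change listed above can be sketched as follows. This is an illustrative sketch only: the class name, the settings lookup, and the password generator are hypothetical stand-ins, since the release notes only describe the behavior (read the system property, default to enabled when the row is absent, skip rotation when disabled).

```java
import java.util.Map;

// Hypothetical sketch of the rotation gate; real UniPay classes and the
// iapp_settings lookup API may differ.
class PasswordRotationGate {
    static final String ROTATION_PROP = "unipay.system.terminal-password-rotation-enabled";

    private final Map<String, String> settings; // stand-in for the iapp_settings lookup

    PasswordRotationGate(Map<String, String> settings) {
        this.settings = settings;
    }

    /** Returns the password to send back on a get-credentials request. */
    String resolvePassword(String currentPassword) {
        // Default to 'true' for backward compatibility when the row is absent.
        boolean rotationEnabled = Boolean.parseBoolean(
                settings.getOrDefault(ROTATION_PROP, "true"));
        if (!rotationEnabled) {
            // Rotation disabled: echo the terminal's current password back.
            return currentPassword;
        }
        return generateNewPassword();
    }

    String generateNewPassword() {
        return java.util.UUID.randomUUID().toString();
    }
}
```

This mirrors the "Disable Password Rotation" test scenario below: with the setting off, the terminal receives its current password unchanged.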
Update Actions Analysis
Update actions are not required for this improvement.
- All changes are implemented through standard code deployment and database delta
- The property automatically defaults to enabled (true) for backward compatibility
- The change doesn't alter existing system behavior unless explicitly configured
- No manual configuration is needed after deployment
Implementation Actions
- Access the System Settings form to configure terminal password rotation
- Navigate to System Perspective > System > Settings > Modify Settings
- Locate the "Terminal Password Rotation Enabled" checkbox in the General tab
- Uncheck the box to disable password rotation, check to enable it
Configuration and Access Steps
- Log in to the UniPay administrative interface
- Navigate to: System Perspective > System > Settings > Modify Settings
- Locate the "Terminal Password Rotation Enabled" checkbox in the General tab, System section
- Toggle the checkbox to change the setting
- Click Save to apply changes
Test Scenarios
- Default Setting Verification
  - Steps: Check the initial state of the "Terminal Password Rotation Enabled" setting on a fresh installation
  - Expected result: The checkbox should be checked (enabled) by default
- Disable Password Rotation
  - Steps: Uncheck "Terminal Password Rotation Enabled", save settings, and initiate a get-credentials request from a terminal
  - Expected result: The terminal receives its current password as the new password (no rotation occurs)
- Enable Password Rotation
  - Steps: Check "Terminal Password Rotation Enabled", save settings, and initiate a get-credentials request from a terminal
  - Expected result: The terminal receives a newly generated password (rotation occurs)
- Terminal Initial Setup with Rotation Disabled
  - Steps: Uncheck "Terminal Password Rotation Enabled", save settings, and initialize a new terminal
  - Expected result: The terminal should successfully initialize with a new password despite rotation being disabled
Common Issues
- Terminal connection failures may occur if password rotation is re-enabled after being disabled for an extended period
- Password rotation status changes do not affect already-established terminal sessions until next authentication
- For terminals suffering from the L01 error, manual password reset may still be required before disabling rotation
- The setting affects all terminals system-wide and cannot be applied to individual terminals or merchant accounts
Potential Impact Areas
- UI Components: System Settings form displays the new checkbox control for password rotation
- Reports & Analytics: No direct impact on reports or analytics
- APIs & Integrations: Terminal authentication process checks this setting when processing get-credentials requests
- Database Queries: Simple lookup of the system setting during terminal authentication
- Business Logic: Password rotation logic now checks the system setting before generating a new password
- Performance: Negligible impact with a simple additional boolean check during authentication
Schema Updates
- Added new row to the iapp_settings table with name='unipay.system.terminal-password-rotation-enabled', value='true', scope_cl='S', scope_code=0
- No structural changes to database tables or columns
- No data type changes or constraints added
Rollback Possibility Assessment
Database changes can be safely rolled back if needed.
- Changes are limited to a single configuration row in the settings table
- No destructive operations or complex data transformations
- Rollback would only require removing or updating the setting row
- No cascading effects on other database objects or data
Improved Merchant Statement Hold Management
Context: Previously, when remittance holds were enabled for merchants, statements were automatically generated in "Posted" status without the possibility for review or regeneration.
Solution: Implemented configurable behavior for merchant statements with active holds to allow manual review prior to posting.
Impact: Operators can now enable a system setting that forces merchant statements with active holds to be placed in "Pending" status for review instead of being automatically posted.
System Changes Overview
- Database Changes: Added new system property unipay.business.remittance-hold-review-required to the iapp_settings table
- Configuration Changes: Added new UI control in System Settings under "Hold Settings" section
- Code Changes: Modified merchant statement processing logic to check hold status and configuration setting
- Filesystem Changes: None
- Data Migration: Added default configuration with value 'false' for backward compatibility
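The modified statement-processing decision can be sketched as follows. Names here are illustrative, not the actual UniPay classes; only the decision described in the notes is modeled (hold present and review setting on, go to "Pending"; otherwise keep the previous auto-post behavior).

```java
// Hypothetical sketch of the statement-status decision for merchants
// with active remittance holds; real class names may differ.
enum StatementStatus { PENDING, POSTED }

class HoldStatementPolicy {
    /**
     * With the review setting enabled, statements for merchants with an
     * active remittance hold are placed in PENDING for manual review;
     * otherwise the previous behavior (automatic posting) applies.
     */
    static StatementStatus initialStatus(boolean merchantHasActiveHold,
                                         boolean reviewRequiredSetting) {
        if (merchantHasActiveHold && reviewRequiredSetting) {
            return StatementStatus.PENDING;
        }
        return StatementStatus.POSTED;
    }
}
```

Because the setting defaults to 'false', the second branch is taken everywhere until an operator opts in, matching the backward-compatibility note above.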
Update Actions Analysis
No update actions required as changes are implemented through standard deployment mechanisms.
- Changes are applied through standard SQL delta scripts
- Default configuration maintains backward compatibility
- Feature is opt-in via configuration setting
- No manual data migration required
Implementation Actions
- Deploy standard update package
- Configure system setting as needed via System Settings UI
- Verify proper behavior with test merchants
- Monitor statement processing after enabling the feature
Configuration and Access Steps
- Navigate to: Administration → System Settings
- Locate the "Hold Settings" section
- Set "Statement Review Required" checkbox to enable/disable the feature
- Save configuration changes
Test Scenarios
- Verify Default Configuration
  - Steps: Check system settings after installation
  - Expected result: The "Statement Review Required" option should be unchecked (false) by default
- Verify Statement Status with Setting Disabled
  - Steps: Create a merchant with active hold, generate statement with configuration disabled
  - Expected result: Statement should be automatically posted (previous behavior)
- Verify Statement Status with Setting Enabled
  - Steps: Enable "Statement Review Required" setting, create a merchant with active hold, generate statement
  - Expected result: Statement should be placed in "Pending" status for review
- Verify Statement Review Process
  - Steps: With setting enabled, generate statement for merchant with hold, then review and approve
  - Expected result: Statement should successfully transition through review process to "Posted" status
Common Issues
- Setting changes may require cache refresh to take effect
- Existing statements aren't affected by changing the setting
- Merchants without active holds aren't affected by this setting
- Statements with zero amount aren't affected by the review requirement
Potential Impact Areas
- UI Components: Statement management screens will show more statements in "Pending" status
- Reports & Analytics: Processing time statistics may change as statements require manual review
- APIs & Integrations: Status changes in statement lifecycle may affect integration points
- Database Queries: No significant impact on performance of database queries
- Business Logic: Statement processing flow is modified based on configuration setting
- Performance: Minimal impact limited to statement generation process
Schema Updates
- Added new setting row to the iapp_settings table:

  INSERT INTO `iapp_settings`
    (`name`, `value`, `scope_cl`, `scope_code`)
  VALUES
    ('unipay.business.remittance-hold-review-required', 'false', 'S', 0);
Rollback Possibility Assessment
Database changes can be safely rolled back if necessary.
- The change only adds a single setting row that can be deleted
- No schema modifications are involved
- No data transformations are performed
- No cascading effects on other system components
- Setting defaults to 'false' for backward compatibility
Automatic Transaction Void for System Timeout
Context: During system performance issues, timeout errors between UniBroker and UniPay could result in duplicate transactions when merchants retry payments after receiving error responses.
Solution: Implemented automatic void functionality for sale, sale-auth, and credit transactions that experience connection timeouts, with retry mechanisms to ensure transaction cleanup.
Impact: Merchants now receive specific timeout error messages (D41) indicating transaction was automatically voided, preventing duplicate charges and reducing chargebacks.
System Changes Overview
- Database Changes: None
- Configuration Changes: Added new response code D41 and corresponding error message
- Code Changes: Enhanced UniBroker request processing, added timeout handling with automatic void
- Filesystem Changes: None
- Data Migration: None
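The control flow described above (timeout, automatic void, retry on void failure, D41 returned to the caller) can be sketched as follows. The void call and the retry scheduler are stand-ins, not UniPay or UniBroker APIs; only the sequencing follows the notes.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Supplier;

// Illustrative sketch of the automatic-void handling; real components differ.
class TimeoutVoidHandler {
    static final String TIMEOUT_CODE = "D41";

    // Stand-in for a scheduler that retries the void after 5 minutes.
    final List<String> retryQueue = new ArrayList<>();

    /** Handles a sale/sale-auth/credit request that timed out downstream. */
    String onTimeout(String transactionId, Supplier<Boolean> voidAttempt) {
        boolean voided = voidAttempt.get();
        if (!voided) {
            // First void attempt failed: schedule a retry (per the notes, after 5 minutes).
            retryQueue.add(transactionId);
        }
        // Either way the merchant receives the specific timeout decline code,
        // signaling that the transaction was (or will be) voided.
        return TIMEOUT_CODE;
    }
}
```

The key point for integrators is that D41 is returned whether or not the first void succeeds, so a retried payment after D41 will not duplicate the charge.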
Update Actions Analysis
Update actions are not required for this implementation.
- All changes are implemented through standard code deployment
- The feature uses existing void functionality with new handling logic
- No manual configuration changes or database updates needed
- System will automatically apply the new behavior after deployment
Implementation Actions
- Deploy updated code to UniBroker and UniPay components
- Verify system logs for proper operation after deployment
- Monitor transaction processing during the initial deployment period
- No restart of services is required as changes will take effect with normal request processing
Configuration and Access Steps
- No special configuration is required; functionality is active by default
- To monitor: Check transaction logs for transactions with response code D41
- For review: Examine voided transactions in Transaction Management → Transaction List
- Use API testing tools to simulate timeout conditions and validate void behavior
Test Scenarios
- Timeout During Sale Transaction
  - Steps: Process sale transaction and simulate timeout between UniBroker and UniPay
  - Expected result: Transaction should be automatically voided and response code D41 returned
- Timeout During Sale-Auth Transaction
  - Steps: Process sale-auth transaction and simulate timeout between UniBroker and UniPay
  - Expected result: Transaction should be automatically voided and response code D41 returned
- Timeout During Credit Transaction
  - Steps: Process credit transaction and simulate timeout between UniBroker and UniPay
  - Expected result: Transaction should be automatically voided and response code D41 returned
- Retry After Initial Void Failure
  - Steps: Simulate timeout followed by void attempt failure
  - Expected result: System should schedule a retry void attempt after 5 minutes
Common Issues
- Timeout simulation may be challenging in test environments
- Initial void attempt may succeed, preventing observation of retry mechanism
- HAProxy layer may return HTTP 500 status rather than timeout exception
- Manual void operations may interfere with automatic void process
Potential Impact Areas
- UI Components: Transaction details should show void records for timed-out transactions
- Reports & Analytics: Transaction reports should correctly reflect voided status of timed-out transactions
- APIs & Integrations: Integrations expecting specific error codes should be updated to handle D41 response
- Database Queries: No impact on database structure, only transaction status changes
- Business Logic: Enhanced error handling improves transaction integrity during system issues
- Performance: Minimal performance impact with asynchronous void processing
Schema Updates
- No database schema changes are required for this implementation
- The feature uses existing transaction tables and void functionality
- All changes are implemented at the application code level
Rollback Possibility Assessment
Database rollback is not applicable for this implementation as no database changes are made.
- This feature only affects application behavior, not database structure
- Transaction data follows standard void processes which are already rollback-compatible
- No data migration or transformation is performed
- All changes are contained within application code
Improved TSYS File Processing for Cost Calculation
Context: Due to changes in TSYS file formats, the existing system incorrectly imported processing cost data. Specifically, interchange fees were imported from daily TDDF files, but other fee categories were either missing or only partially present.
Solution: Implemented a dual-file processing approach for TSYS reconciliation files. The system now extracts interchange fees from daily TDDF files and all other expense categories from monthly RESID files.
Impact: Merchants now receive comprehensive and accurate processing cost calculations, with all fee categories properly imported and assigned to the correct merchant accounts.
System Changes Overview
- Database Changes: Added new tables for RESID file data storage and modified existing tables to support selective data import
- Configuration Changes: Disabled aggregation for reconciliation/tsys provider and removed file splitter component
- Code Changes: Refactored file processing pipeline to handle both TDDF and RESID formats
- Filesystem Changes: No changes to filesystem structure
- Data Migration: No data migration required for existing records
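The dual-file routing rule above can be sketched as a simple category filter. This is a simplification for illustration: file parsing and fee records are reduced to category codes, and the class name is hypothetical; the authoritative split (TDDF supplies interchange only, RESID supplies everything else) comes from the notes.

```java
import java.util.List;
import java.util.stream.Collectors;

// Illustrative sketch of which fee categories each TSYS file type
// is authoritative for; real parsing and record types are omitted.
class TsysCostRouter {
    enum FileType { TDDF, RESID }

    static final String INTERCHANGE_CATEGORY = "010";

    /** Keeps only the fee categories the given file type should import. */
    static List<String> acceptedCategories(FileType type, List<String> categories) {
        return categories.stream()
                .filter(c -> type == FileType.TDDF
                        ? INTERCHANGE_CATEGORY.equals(c)    // daily TDDF: interchange only
                        : !INTERCHANGE_CATEGORY.equals(c))  // monthly RESID: all other fees
                .collect(Collectors.toList());
    }
}
```

Filtering at import time also prevents double-counting when both files cover the same period, which the "Combined Data Verification" scenario below exercises.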
Update Actions Analysis
No Update Actions are required for this implementation.
- All changes are implemented through standard database scripts and code modifications
- The deployment process automatically applies the necessary configuration changes
- No manual intervention is needed during the update process
- Existing data remains compatible with the new processing logic
Implementation Actions
- Ensure proper configuration of TSYS provider profiles for correct merchant identification
- Verify that both TDDF and RESID files are being placed in the correct FTP locations
- Confirm merchant mapping between backend merchant IDs and internal system IDs
- Monitor initial file processing after deployment to verify correct data import
Configuration and Access Steps
- Configure a cards-realtime/tsys profile with a valid "Merchant Number (Backend)" value
- Ensure the reconciliation/tsys processor is properly configured
- Place both TDDF and RESID test files in the appropriate FTP location
- Navigation: Administration → Service Providers → Processing Profiles → reconciliation/tsys
Test Scenarios
- TDDF File Processing
  - Steps: Upload a TDDF file to the configured FTP location and trigger the reconciliation job
  - Expected result: Only interchange fees (category 010) are imported into the merchant_processing_cost table
- RESID File Processing
  - Steps: Upload a RESID file to the configured FTP location and trigger the reconciliation job
  - Expected result: All fee categories except interchange (010) are imported into the merchant_processing_cost table
- Merchant Mapping Verification
  - Steps: Process files containing multiple merchant records and verify merchant assignment
  - Expected result: Costs are correctly assigned to the appropriate merchants based on backend merchant IDs
- Combined Data Verification
  - Steps: Process both TDDF and RESID files for the same time period and merchants
  - Expected result: The system correctly combines interchange data from TDDF with other fees from RESID
Common Issues
- Merchant mapping failures if "Merchant Number (Backend)" is not properly configured in merchant profiles
- File format recognition issues if filename patterns don't match expected formats
- Expense category duplication if the same RESID file is processed multiple times
- Merchant identification failures if merchant IDs in files don't match system records
Potential Impact Areas
- UI Components: No impact to UI components
- Reports & Analytics: Improved data quality in merchant processing cost reports
- APIs & Integrations: No changes to external APIs
- Database Queries: Modified reconciliation data storage and retrieval patterns
- Business Logic: Enhanced fee processing and categorization logic
- Performance: Improved processing efficiency through simplified file handling and removal of unnecessary aggregation
Schema Updates
- Added new tables for storing RESID file data (tsys_monthly_processing_cost, tsys_monthly_processing_cost_batch)
- Modified existing configuration tables to disable aggregation and file splitting
- Updated processing_profile settings for reconciliation/tsys processors
- Added support for merchant mapping between backend IDs and system IDs
Rollback Possibility Assessment
Rollback is possible for these database changes.
- The changes involve configuration updates that can be reverted
- No destructive modifications to existing data structures were made
- A rollback script has been prepared to restore previous configuration values
- The system can continue processing with the previous logic without data loss
- No complex data transformations were implemented that would prevent reverting to the previous state
We recommend scheduling this deployment during non-business hours in the client's time zone, as potential offline periods may extend beyond estimates due to unforeseen circumstances.
Selective Terminal Password Rotation Based on Version
Context: Terminals with older application versions occasionally lost connection to the gateway when the system attempted to rotate passwords, particularly if power was interrupted during password update.
Solution: Implemented version-based logic that applies password rotation only to terminals with application version 5.2.3 or higher, while maintaining a consistent password for older terminal versions.
Impact: Improved system stability and eliminated connection issues for terminals with older firmware, while maintaining proper security measures for terminals with newer versions capable of handling password rotation.
System Changes Overview
- Database Changes: No direct database schema changes required
- Configuration Changes: No configuration property changes needed
- Code Changes: Modified terminal password rotation logic in TmsGetCredentialsIngenicoStrategy class to check terminal version
- Filesystem Changes: None
- Data Migration: None required
Update Actions Analysis
Update Actions are not required for this improvement as the changes are handled entirely through code without requiring manual intervention.
- The implementation involves only application logic changes
- No database structure modifications are needed
- No configuration changes required
- The change is automatically applied during normal operation
Implementation Actions
- Added logic to check terminal application version in terminal_diagnostics table
- Implemented comparison against minimum required version (5.2.3)
- Modified password handling for terminals below minimum version to return the same password
- Maintained existing password rotation logic for terminals with version 5.2.3 and higher
- Fixed error handling in get-credentials request processing
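The version gate described above can be sketched as follows. This is an assumed implementation, not the actual TmsGetCredentialsIngenicoStrategy code: it tolerates a leading "V" (a format variation the Common Issues note calls out) and falls back to standard rotation when the version is missing or malformed, matching the "Version Information Missing" test scenario.

```java
// Hypothetical sketch of the terminal-version gate for password rotation.
class VersionGate {
    static final int[] MIN_VERSION = {5, 2, 3};

    /** True when the terminal may receive a newly rotated password. */
    static boolean rotationAllowed(String reportedVersion) {
        if (reportedVersion == null) return true; // missing info: fall back to rotation
        String v = reportedVersion.trim();
        if (v.startsWith("V") || v.startsWith("v")) v = v.substring(1); // "V5.2.3" vs "5.2.3"
        String[] parts = v.split("\\.");
        if (parts.length != 3) return true; // malformed: fall back to rotation
        try {
            for (int i = 0; i < 3; i++) {
                int n = Integer.parseInt(parts[i]);
                if (n != MIN_VERSION[i]) return n > MIN_VERSION[i];
            }
            return true; // exactly 5.2.3
        } catch (NumberFormatException e) {
            return true; // malformed: fall back to rotation
        }
    }
}
```

Terminals below the threshold keep receiving their existing password; terminals at or above 5.2.3 follow the standard rotation logic.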
Configuration and Access Steps
- No special configuration is required for testing
- Testing requires terminals with different application versions (below 5.2.3 and 5.2.3 or higher)
- Use terminal diagnostics to verify terminal firmware version
- Navigation: Terminal Management → Terminal Details → Diagnostics
Test Scenarios
- Terminal Version Below 5.2.3
  - Steps: Initiate terminal diagnostics on a terminal with version below 5.2.3, monitor get-credentials requests
  - Expected result: The server should return the same password that was sent in the request without generating a new one
- Terminal Version 5.2.3 or Higher
  - Steps: Initiate terminal diagnostics on a terminal with version 5.2.3 or higher, monitor get-credentials requests
  - Expected result: The server should generate and return a new password following the standard rotation logic
- Version Information Missing
  - Steps: Simulate a scenario where terminal version information is missing or malformed
  - Expected result: The system should default to standard password rotation behavior as a fallback
- Terminal Restart After Password Update
  - Steps: Perform diagnostics to trigger get-credentials, restart terminal, perform another operation
  - Expected result: Terminal should maintain connection and operate correctly after restart
Common Issues
- Terminal firmware version may not be properly reported in diagnostics
- Version format variations may need special handling (e.g., "V5.2.3" vs "5.2.3")
- Diagnostic information may be missing for terminals that haven't reported recently
- When testing with older terminal versions, ensure you're using appropriate terminal application builds
Potential Impact Areas
- UI Components: No direct impact on UI components
- Reports & Analytics: No impact on reporting functionality
- APIs & Integrations: Improved stability for terminal integration endpoints
- Database Queries: Increased queries to terminal_diagnostics table during get-credentials processing
- Business Logic: Modified password handling logic for different terminal versions
- Performance: Minimal performance impact, additional version check during get-credentials requests
Schema Updates
- No database schema changes are required for this implementation
- The feature utilizes existing database structure
- No new tables or columns were added
- No modifications to existing schema were needed
Rollback Possibility Assessment
Database rollback is not applicable for this change as no database schema modifications were made.
- No database structure changes were implemented
- No data modifications requiring rollback capabilities
- The change is entirely in application logic
- In case of issues, the code can be reverted without database changes
System Monitoring Enhancement for Stalled Jobs
Context: The system previously lacked an automated mechanism to notify administrators about JobMessage entities stuck in processing state, leading to delayed detection and resolution of problematic jobs.
Solution: Implemented a new monitoring section in the existing 15-minute audit notification system that identifies and reports jobs remaining in the PROCESSING status for more than 15 minutes.
Impact: System administrators now receive regular notifications about stalled jobs, enabling faster detection and resolution of stuck processes without manual monitoring or intervention.
System Changes Overview
- Database Changes: None
- Configuration Changes: None
- Code Changes: Added new section to the SystemAuditAction class for monitoring stalled JobMessage entities
- Filesystem Changes: None
- Data Migration: None
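The check added to the audit cycle can be sketched as follows. The JobMessage record here is a minimal stand-in for the real entity, and the class name is hypothetical; the threshold (over 15 minutes in PROCESSING) and the duration format shown in the notification (e.g. 00:16:20) follow the notes.

```java
import java.time.Duration;
import java.time.Instant;
import java.util.List;
import java.util.stream.Collectors;

// Illustrative sketch of the stalled-job detection run every 15 minutes.
class StalledJobMonitor {
    record JobMessage(long id, String name, String status, Instant startedAt) {}

    static final Duration THRESHOLD = Duration.ofMinutes(15);

    /** Jobs still in PROCESSING for longer than the threshold. */
    static List<JobMessage> findStalled(List<JobMessage> jobs, Instant now) {
        return jobs.stream()
                .filter(j -> "PROCESSING".equals(j.status()))
                .filter(j -> Duration.between(j.startedAt(), now).compareTo(THRESHOLD) > 0)
                .collect(Collectors.toList());
    }

    /** Formats elapsed time like the durations shown in the notification. */
    static String formatDuration(Duration d) {
        return String.format("%02d:%02d:%02d",
                d.toHours(), d.toMinutesPart(), d.toSecondsPart());
    }
}
```

Because the duration is recomputed on every cycle, a job stuck for 30 minutes reports a larger value in the second notification than in the first, as the "Duration Calculation Verification" scenario below checks.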
Update Actions Analysis
No update actions required for this change.
- The implementation is contained within the standard application code
- All changes can be deployed through standard deployment procedures
- No manual configuration or database changes are needed
- The feature activates automatically after deployment
Implementation Actions
- Deploy the application with the standard deployment process
- Verify that the 15-minute System Audit Notification includes the new "Stalled Jobs" section
- Check that stalled jobs appear correctly in the notification when present
- No additional post-deployment actions are required
Configuration and Access Steps
- No special configuration is needed as the feature is enabled by default
- The feature works with the existing audit notification system
- Notifications will be sent to the same recipients configured for system audit notifications
- Navigation: System audit notifications are delivered via email based on existing notification settings
Test Scenarios
- Basic Functionality Verification
  - Steps: Deploy the application and wait for at least two 15-minute notification cycles
  - Expected result: The "Stalled Jobs" section appears in the system audit notification email
- Stalled Job Detection
  - Steps: Create a test job that will remain in PROCESSING status for >15 minutes, then wait for the next notification cycle
  - Expected result: The stalled job appears in the notification with correct ID, name, and duration
- Duration Calculation Verification
  - Steps: Create a job that will remain in PROCESSING for 30+ minutes and verify multiple notifications
  - Expected result: The reported duration increases correctly between notifications (e.g., from 00:16:20 to 00:31:20)
- No Stalled Jobs Scenario
  - Steps: Ensure no jobs are in PROCESSING status for >15 minutes, then check notifications
  - Expected result: The "Stalled Jobs" section appears but contains no entries
Common Issues
- Time zone differences may impact how durations are displayed or calculated
- Job status changes occurring between notification cycles may cause jobs to appear/disappear
- High volume of stalled jobs could make the notification email very large
- Check that email clients properly display the tabular format of the notification
Potential Impact Areas
- UI Components: No impact as this feature doesn't affect any UI components
- Reports & Analytics: Provides additional operational data through the notification system
- APIs & Integrations: No impact on external APIs or integrations
- Database Queries: Minimal impact with a new lightweight query that runs every 15 minutes
- Business Logic: No impact on core business processes or transaction processing
- Performance: Negligible performance impact due to the lightweight query operating on indexed fields
Schema Updates
- No database schema changes are required for this feature
- The implementation uses existing database tables and structures
- No new tables, columns, or indexes are created
Rollback Possibility Assessment
Rollback is possible without database concerns.
- This feature doesn't modify any database structures
- No data transformations or migrations are performed
- Rollback would only require reverting the code changes
- No data loss would occur during rollback
Webhook Notification Display Optimization
Context: The system previously displayed webhook notification errors grouped by merchant accounts, which was not informative for troubleshooting specific endpoint issues. Additionally, there was no time limitation, resulting in outdated notifications being displayed.
Solution: Implemented a new grouping mechanism based on webhook URLs instead of merchant accounts and added response message information to provide more context about failures. Added a one-month data retention filter to ensure only relevant notifications are displayed.
Impact: Administrators can now more effectively analyze and troubleshoot webhook notification issues by identifying problematic endpoints directly. The focused view with detailed error messages significantly improves problem resolution efficiency.
System Changes Overview
- Database Changes: No structural changes to database tables
- Configuration Changes: None
- Code Changes: Modified SystemAuditAction query to group webhook notifications by URL instead of merchant account
- Filesystem Changes: None
- Data Migration: None required
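The new grouping and retention filter can be sketched as follows. The notification record is a simplified stand-in for the stored webhook data, and "one month" is approximated here as 30 days, which is an assumption; only the grouping key (URL rather than merchant account) and the cutoff follow the notes.

```java
import java.time.Instant;
import java.time.temporal.ChronoUnit;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Illustrative sketch of grouping webhook failures by endpoint URL.
class WebhookErrorReport {
    record Notification(String url, String status, Instant createdAt) {}

    /** Failed-notification counts per endpoint URL, limited to the past month. */
    static Map<String, Long> failedCountsByUrl(List<Notification> all, Instant now) {
        Instant cutoff = now.minus(30, ChronoUnit.DAYS); // assumption: month = 30 days
        return all.stream()
                .filter(n -> n.createdAt().isAfter(cutoff))   // retention filter
                .filter(n -> "FAILED".equals(n.status()))
                .collect(Collectors.groupingBy(Notification::url,
                        Collectors.counting()));
    }
}
```

Grouping by URL surfaces a single misbehaving endpoint as one row with an aggregate count, instead of scattering it across every merchant account that uses it.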
Update Actions Analysis
Update actions are not required for this implementation.
- All changes are implemented through standard code deployment
- The modification affects only the query and display logic
- No configuration or database structure changes are needed
- Existing data will be automatically displayed in the new format
Implementation Actions
- Deploy updated code to the target environment
- Verify section 309 of audit reports displays webhook notifications grouped by URL
- Confirm the one-month data retention filter is working correctly
- Test the display of response messages for failed notifications
Configuration and Access Steps
- No special configuration is required
- Navigate to: Audit → System Audit
- Locate section 309 "Webhook Errors"
Test Scenarios
- Basic Display Verification
  - Steps: Access the System Audit section and locate section 309
  - Expected result: Webhook errors should be grouped by URL instead of merchant account
- Column Verification
  - Steps: Examine the table columns in section 309
  - Expected result: Table should display URL, Failed Count, Unprocessed Count, and Failed Response Message columns
- Time Limitation Verification
  - Steps: Compare current data with historical records
  - Expected result: Only webhook notifications from the past month should be displayed
- Error Message Display
  - Steps: Examine entries with failed status
  - Expected result: Response messages should be displayed for failed webhook notifications
Common Issues
- No historical data visible if all notifications are older than one month
- Different response messages for the same URL will appear as separate entries
- Long URLs may appear truncated in the display
- Complex response messages might have formatting issues
Potential Impact Areas
- UI Components: Displays in section 309 of the System Audit report will show the new column structure and grouping
- Reports & Analytics: Manual exports of webhook error data will contain the new field structure
- APIs & Integrations: No impact on external APIs or integrations
- Database Queries: Query performance for section 309 data retrieval is optimized with the new grouping
- Business Logic: No changes to the core business logic of webhook processing
- Performance: Query performance is improved due to the time-based filtering of data
Schema Updates
- No structural database schema changes
- Only query logic has been modified to retrieve and group data differently
- No new tables, columns, or indexes were created
Rollback Possibility Assessment
Database rollback is not applicable for this change.
- The implementation involves only query logic changes
- No structural database modifications were made
- No data migration or transformation occurred
- All changes are contained within application code
Fixed Passthrough Fees Total Calculation in Deposit Statement
Context: In the Deposit Statement on Merchant Console, the "Passthrough Fees" section was displaying incorrect total values. For example, entries with values of -0.54 and -0.42 were showing a total of -95.75 instead of the correct -0.96.
Solution: Implemented proper division logic for P4 section values in the ListboxFooterConverter class by applying an additional division factor of 100 specifically for Passthrough Fees calculations.
Impact: Merchants now see accurate total values in the Passthrough Fees section of Deposit Statements on the Merchant Console, ensuring consistency between the UI display and PDF reports.
- Implementation Details
- Testing & Validation
- Database Changes
System Changes Overview
- Database Changes: None
- Configuration Changes: None
- Code Changes: Modified the ListboxFooterConverter class to properly handle P4 section calculations
- Filesystem Changes: None
- Data Migration: None
Update Actions Analysis
No update actions are required for this change.
- The fix is implemented through standard Java code changes
- The change affects only the UI calculation logic with no database schema modifications
- The deployment can be executed through normal release procedures
- The existing automatic update mechanisms fully cover the necessary changes
Implementation Actions
- Added section type detection in the ListboxFooterConverter class
- Implemented proper division factor (100) for P4 section values
- Maintained compatibility with other section calculations
- Applied division after the sum calculation to ensure accuracy
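A minimal sketch of the corrected footer calculation, assuming the section-type constant and method signature (the real logic lives in ListboxFooterConverter):

```java
import java.util.List;

public class PassthroughFooterSketch {
    // P4 (Passthrough Fees) values arrive scaled by 100, so the footer total
    // needs one extra division by 100 applied after summing the entries.
    static double sectionTotal(String sectionType, List<Long> rawValues) {
        long sum = rawValues.stream().mapToLong(Long::longValue).sum();
        // Divide after the sum, not per entry, to avoid rounding drift.
        return "P4".equals(sectionType) ? sum / 100.0 : sum;
    }
}
```

With the entries from the example above (-0.54 and -0.42, stored as -54 and -42), `sectionTotal("P4", ...)` yields -0.96 instead of the previously inflated figure.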
Configuration and Access Steps
- Log in to Merchant Console as a merchant or admin user
- Navigate to: Reports → Deposit Statement
- Select a merchant with P4 section entries in their statement
- Open a specific Deposit Statement with Passthrough Fees entries
Test Scenarios
- Verify Passthrough Fees Total Calculation
- Steps: Open Deposit Statement with multiple Passthrough Fees entries, check the Total field
- Expected result: Total value should correctly sum all Passthrough Fees entries with proper decimal placement
- Compare UI vs PDF Report Values
- Steps: View the same Deposit Statement in UI and PDF format, compare the Passthrough Fees Total values
- Expected result: Total values should match exactly between UI and PDF formats
- Verify Calculation with Negative Values
- Steps: Check a Deposit Statement with negative Passthrough Fees values
- Expected result: The Total should correctly sum negative values with proper sign and decimal placement
- Verify Other Statement Sections
- Steps: Check calculations in other statement sections (non-P4)
- Expected result: Total calculations in other sections should remain correct and unaffected
Common Issues
- Ensure browser cache is cleared when testing to avoid displaying outdated calculations
- Check statements with both positive and negative Passthrough Fees values
- Verify values across different merchant accounts to ensure consistent behavior
- Check behavior for edge cases with very small or very large fee values
Potential Impact Areas
- UI Components: The Total value display in Passthrough Fees section has been corrected, with no impact on other UI elements
- Reports & Analytics: PDF reports already showed correct values; UI now matches PDF output
- APIs & Integrations: No impact on external APIs as this was a display-only issue
- Database Queries: No impact as this was a UI calculation issue only
- Business Logic: No changes to core business logic or fee calculations
- Performance: Minimal performance impact with negligible additional processing time
Schema Updates
- No database schema changes were required for this fix
- No data structure modifications were implemented
- No migration scripts were created
Rollback Possibility Assessment
A database rollback is not applicable for this change.
- The fix involves only UI calculation logic in Java code
- No database modifications were made
- No data transformations were performed
- In the unlikely event of issues, a code rollback would be sufficient
Increased Validation Limits for Onboarding API Estimate Fields
Context: The Onboarding API had restrictive validation limits on transaction amount and volume estimate fields, preventing the creation of merchant accounts that process high-volume transactions.
Solution: Increased the maximum allowed values for estimates.annualDirectDebitVolume and estimates.maxTransactionAmount fields to support higher transaction volumes and amounts in the Onboarding API.
Impact: Merchants with high transaction volumes can now be onboarded without hitting validation limits, allowing for the creation of accounts that process larger transaction amounts and higher annual volumes.
- Implementation Details
- Testing & Validation
- Database Changes
System Changes Overview
- Database Changes: No schema changes required; only validation logic updates
- Configuration Changes: Updated validation limits for annualDirectDebitVolume and maxTransactionAmount fields
- Code Changes: Modified validation logic and arithmetic operations to handle larger values and prevent overflow
- Filesystem Changes: None
- Data Migration: None required
Update Actions Analysis
Update Actions are not required for this improvement as changes are implemented through standard code deployment.
- All changes are contained within the application code and do not require database schema modifications
- The validation limit changes can be deployed through the standard release process
- No manual configuration changes are needed as the limits are defined in the code
- The changes maintain backward compatibility with existing data
Implementation Actions
- Updated field validation limits in the Onboarding API
- Modified arithmetic operations to use long type for calculations to prevent integer overflow
- Added proper handling for combined values that might exceed integer limits
- Implemented overflow protection in volume calculations
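The overflow protection reduces to a widening cast before the addition, sketched here with the field names from the API description above:

```java
public class VolumeOverflowSketch {
    // Summing two int estimates can exceed Integer.MAX_VALUE (2147483647);
    // widening to long before the addition keeps the combined figure correct.
    static long combinedAnnualVolume(int annualDirectDebitVolume, int annualCardsVolume) {
        return (long) annualDirectDebitVolume + annualCardsVolume;
    }
}
```

For two values at the new maximum of 2147482646, the combined volume is 4294965292; the old int addition would have wrapped to a negative number.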
Configuration and Access Steps
- No special configuration is needed to enable this functionality
- Access the feature through standard Onboarding API and UI
- Create a merchant test account with large transaction volume values
- Navigation: Onboarding → Create New → Business Details
Test Scenarios
- API Validation Limit Testing
- Steps: Send an Onboarding API request with estimates.annualDirectDebitVolume and estimates.maxTransactionAmount values greater than the previous limits but less than the new maximum (2147482646)
- Expected result: Request is accepted without validation errors
- UI Value Entry Testing
- Steps: Enter large values for Annual Direct Debit Volume and Max Transaction Amount fields through the Onboarding UI
- Expected result: Values are accepted without validation errors
- Combined Volume Calculation Testing
- Steps: Create a merchant with large values for both annual direct debit volume and annual cards volume that combined exceed Integer.MAX_VALUE
- Expected result: System handles the overflow correctly and displays appropriate values
- Previously Rejected Merchant Retry
- Steps: Retry onboarding a previously rejected merchant application that had exceeded the old limits
- Expected result: Merchant application processes successfully with the higher values
Common Issues
- Validation error messages may appear both next to the field and as a popup when entering values above the maximum limit
- Large values may be displayed with truncated formatting in some UI screens
- Combined calculations of annual volume figures should be monitored for correct display
- If using values close to the maximum limit, ensure accurate validation without rounding issues
Potential Impact Areas
- UI Components: Forms displaying transaction amount and volume fields need to handle larger numeric values correctly
- Reports & Analytics: Reports aggregating transaction volumes must handle the increased values without formatting issues
- APIs & Integrations: Third-party integrations that consume the estimates data should be compatible with larger values
- Database Queries: Queries filtering or sorting by these fields must handle the larger values correctly
- Business Logic: Logic combining direct debit and card volumes should correctly handle potential overflow conditions
- Performance: No significant performance impact expected as the data type size remains unchanged
Schema Updates
- No database schema changes are required for this improvement
- The existing database fields maintain their original data types
- Only the validation logic in the application code has been modified
Rollback Possibility Assessment
Rollback is possible for these changes as they do not modify the database structure or existing data.
- No database migration scripts are involved in this implementation
- The changes only affect validation rules in the application code
- Existing data remains compatible with both old and new validation limits
- In case of issues, the code can be reverted to use the previous validation limits
Volume Markup Fee Calculation Fix
Context: Volume Markup Fees were not being applied to merchants with zero transaction volume, even when their fee setup included a range starting at $0.
Solution: Modified the fee calculation logic to properly apply Volume Markup Fees for merchants with no transaction volume when they have an active fee setup.
Impact: Merchants with zero transaction volume will now be correctly charged the appropriate Volume Markup Fee as defined in their fee configuration.
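A simplified sketch of the corrected behavior, with illustrative record and method names (the actual change is in the MerchantStatementCreator component):

```java
import java.math.BigDecimal;
import java.util.List;

public class VolumeMarkupSketch {
    record FeeRange(BigDecimal from, BigDecimal to, BigDecimal fee) {}

    // The fix: a zero monthly volume no longer short-circuits the range lookup,
    // so a range starting at $0 now matches merchants with no transactions.
    static BigDecimal monthlyFee(BigDecimal volume, List<FeeRange> ranges, boolean active) {
        if (!active) return BigDecimal.ZERO; // deactivated merchants are not charged
        return ranges.stream()
                .filter(r -> volume.compareTo(r.from()) >= 0 && volume.compareTo(r.to()) <= 0)
                .map(FeeRange::fee)
                .findFirst()
                .orElse(BigDecimal.ZERO);
    }
}
```

With the $0–$10,000 range from the test scenarios below set to $79, a merchant with zero volume is now charged $79.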
- Implementation Details
- Testing & Validation
- Database Changes
System Changes Overview
- Database Changes: None
- Configuration Changes: None
- Code Changes: Modified fee calculation logic in the MerchantStatementCreator component
- Filesystem Changes: None
- Data Migration: None
Update Actions Analysis
Update Actions are not required for this change.
- The fix is implemented through standard code changes
- No database schema modifications are needed
- No configuration adjustments are required
- Changes will be applied automatically during standard deployment
Implementation Actions
- Deployed with standard release procedure
- No manual intervention required
- No data migration needed
- No configuration changes required
Configuration and Access Steps
- Ensure test merchants are configured with Volume Markup Fee settings
- Set up Volume Markup Fee ranges starting from $0 in fee configuration
- Create test merchants with zero transaction volume
- Navigation: Admin → Merchants → Merchant Management → [Select Merchant] → Fees → Volume Markup
Test Scenarios
- Zero Volume with Fee Configuration
- Steps: Configure Volume Markup Fee with $0-$10000 range set to $79, ensure merchant has no transactions for a month, generate statement
- Expected result: Statement should include a $79 Volume Markup Fee
- Deactivated Merchant with Zero Volume
- Steps: Configure Volume Markup Fee, deactivate merchant, ensure no transactions for a month, generate statement
- Expected result: No Volume Markup Fee should be charged for periods after deactivation
- New Merchant with Zero Volume
- Steps: Configure Volume Markup Fee for a newly created merchant, ensure no transactions for first month, generate statement
- Expected result: No fees for first month, fees charged from second month
- Merchant with Partial Month Activity
- Steps: Configure Volume Markup Fee, deactivate merchant mid-month, generate statement
- Expected result: Full Volume Markup Fee should be charged for that month
Common Issues
- Fee not appearing in statements if merchant was inactive during the entire statement period
- Incorrect fee amount if fee setup was changed during the statement period
- Reconciliation statement showing zero amount when fee should be applied
- Statement generation timing affecting fee calculation
Potential Impact Areas
- UI Components: Fee summaries and reconciliation statements in the merchant portal will now display correct fee amounts
- Reports & Analytics: Financial reports will show updated fee amounts for merchants with zero volume
- APIs & Integrations: API responses for fee information will reflect the corrected fee calculations
- Database Queries: No impact on database queries as the change is purely in business logic
- Business Logic: Fee calculation process is modified to include zero-volume scenarios
- Performance: No significant performance impact as the calculation overhead is minimal
Schema Updates
- No database schema changes required
- No new tables, columns, or indexes added
- No data structure modifications
Rollback Possibility Assessment
This change can be safely rolled back if necessary.
- The change is limited to business logic implementation only
- No destructive database operations are performed
- No data transformations occur that would prevent reversal
- Rolling back would simply revert to previous fee calculation behavior
Improved AVS Response Code Mapping for TSYS Integration
Context: The system was incorrectly mapping some AVS response codes received from TSYS, particularly when no address information was submitted with a transaction.
Solution: Updated the mapping logic in the code_mapping database table to properly interpret and display AVS response codes from TSYS according to their official specifications.
Impact: Merchants will now receive accurate AVS response codes and descriptions, allowing for proper evaluation of address verification results during transaction processing.
- Implementation Details
- Testing & Validation
- Database Changes
System Changes Overview
- Database Changes: Updated 18 entries in the code_mapping table to correct AVS response code mappings for TSYS integration
- Configuration Changes: No configuration file changes required
- Code Changes: No application code changes required
- Filesystem Changes: None
- Data Migration: None
Update Actions Analysis
No update actions are required as the changes can be executed through standard database delta deployment.
- The changes are limited to mapping entries in the code_mapping table
- The delta script handles all necessary updates through SQL UPDATE statements
- Standard deployment mechanism covers all the necessary changes
- No data structure modifications or complex transformations involved
Implementation Actions
- Deploy database delta script to update code_mapping table entries
- Verify updated mappings in production environment after deployment
- No additional manual steps required during implementation
- No application restart needed as mappings are loaded dynamically
Configuration and Access Steps
- No special configuration is required to test this improvement
- This change affects AVS response processing for TSYS integration
- To verify, process transactions without address information
- Navigation: Transactions → Transaction Search → View Details → AVS Response field
Test Scenarios
- Transaction Without Address Data
- Steps: Process a transaction without providing any address information
- Expected result: AVS code should return "C0" with description "AVS was not requested" instead of "4F"
- Transaction With Invalid Address
- Steps: Process a transaction with an address that doesn't match the cardholder's information
- Expected result: Appropriate AVS code should be returned based on the specific mismatch type
- Transaction With Matching Address
- Steps: Process a transaction with an address that matches the cardholder's information
- Expected result: AVS codes indicating a match should be returned with appropriate descriptions
- International Transaction
- Steps: Process a transaction with international address information
- Expected result: Proper geographical context should be included in the AVS response description
Common Issues
- Cached mapping data might require system restart in some environments
- Transactions processed during update might show inconsistent results
- Custom reporting based on specific AVS codes may need adjustments
- Merchants using post-processing rules with AVS codes should be notified of changes
Potential Impact Areas
- UI Components: Transaction details screen showing AVS response codes
- Reports & Analytics: Reports containing AVS verification statistics
- APIs & Integrations: API responses including AVS result information
- Database Queries: Queries filtering transactions by AVS response codes
- Business Logic: Post-processing rules using AVS verification results
- Performance: No performance impact expected
Schema Updates
- Updated 18 entries in the code_mapping table for TSYS AVS response codes
- Modified the response_code field for specific mapping entries
- Updated provider_response_message with more detailed and accurate descriptions
- No structural changes to database tables or indexes
Rollback Possibility Assessment
Rollback is possible if needed.
- Changes only affect data mapping and not structural elements
- A rollback script has been prepared to restore original mapping values
- No data loss would occur during rollback
- Transaction processing would continue with previous AVS code mapping
Note: Schedule these database changes during non-business hours in the client's time zone, since any offline period may run longer than estimated due to unforeseen circumstances.
Improved Error Handling in Extended Statement Table Generation
Context: Previously, the statement extended table generation process silently suppressed errors, which delayed error detection and resolution.
Solution: Modified the error handling mechanism in the SynchronizeStatementDataCuratorTask Java class to properly propagate exceptions instead of suppressing them.
Impact: System errors in extended table generation are now immediately visible, allowing faster issue detection and resolution through standard monitoring tools.
- Implementation Details
- Testing & Validation
- Database Changes
System Changes Overview
- Database Changes: None
- Configuration Changes: None
- Code Changes: Modified error handling in SynchronizeStatementDataCuratorTask Java class
- Filesystem Changes: None
- Data Migration: None
Update Actions Analysis
No update actions are required for this implementation.
- Changes were implemented through standard code modifications
- No database schema or configuration changes were needed
- Existing error monitoring and logging systems will capture the propagated errors
- The implementation is fully compatible with the existing deployment process
Implementation Actions
- Removed try-catch block that was suppressing exceptions in SynchronizeStatementDataCuratorTask
- Simplified code by removing redundant error handling logic
- Removed unnecessary merchantAccountCode parameter from the task constructor
- Modified the direct service method call pattern to allow exceptions to propagate
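The before/after shape of the change, in a hypothetical sketch (the service interface is a stand-in for the real statement-generation call):

```java
public class CuratorTaskSketch {
    interface StatementService { void generateExtendedTables(); }

    private final StatementService service;
    CuratorTaskSketch(StatementService service) { this.service = service; }

    // Before: the catch block swallowed failures, so the job looked successful
    // and broken extended tables went unnoticed until much later.
    void runBefore() {
        try {
            service.generateExtendedTables();
        } catch (Exception e) {
            // suppressed -- nothing reached logs or monitoring
        }
    }

    // After: exceptions propagate to the job framework, which logs the full
    // stack trace and marks the job as failed for monitoring to pick up.
    void runAfter() {
        service.generateExtendedTables();
    }
}
```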
Configuration and Access Steps
- No special configuration is required
- This is a system internal change with no UI component
- The change affects background processing jobs only
- Navigation: Not applicable - this is a background process change
Test Scenarios
- Verify Normal Statement Processing
- Steps: Trigger statement processing through standard processes
- Expected result: Statement processing completes successfully with extended tables generated
- Verify Error Handling
- Steps: Monitor job execution logs after statement processing
- Expected result: Any errors during extended table generation are properly logged with full stack traces
- Verify System Monitoring
- Steps: Check system monitoring tools when failures occur
- Expected result: Job failures are properly reported in monitoring dashboards
- Verify Job Recovery
- Steps: Fix any underlying issues causing failures and restart failed jobs
- Expected result: Jobs should resume and complete successfully after issue resolution
Common Issues
- Check server logs for any unexpected exceptions in statement processing
- Verify job status in administration console after job completion
- Ensure monitoring alerts are properly configured to detect job failures
- Validate that scheduled jobs complete within expected timeframes
Potential Impact Areas
- UI Components: No impact on UI components
- Reports & Analytics: Extended tables for merchant statements will generate more reliably
- APIs & Integrations: No impact on external APIs or integrations
- Database Queries: No change to database query patterns or performance
- Business Logic: Statement processing core logic remains unchanged
- Performance: No significant performance impact expected
Schema Updates
- No database schema changes in this implementation
- No data structure modifications required
- No new tables or columns added
Rollback Possibility Assessment
This change can be rolled back safely if needed.
- No database changes were made that would require rollback
- Implementation is limited to code-level error handling changes
- Previous error suppression behavior can be restored if necessary
- No data integrity concerns associated with error handling modification
Enhanced Error Logging in Reporting API
Context: The Reporting API was generating excessive error logs due to invalid locale values in requests, causing difficulty in identifying critical system issues.
Solution: Implemented improved validation for locale parameters and extracted locale resolution logic into a dedicated method with better error handling.
Impact: Reduced error log spam ensures better system monitoring and faster identification of genuine issues.
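The extracted resolution step might look like the following sketch (method name and default locale are assumptions; Locale.forLanguageTag is used because it degrades gracefully instead of throwing):

```java
import java.util.Locale;

public class LocaleResolutionSketch {
    private static final Locale DEFAULT = Locale.US; // assumed fallback

    // Resolve a caller-supplied locale string, falling back to a default for
    // null, blank, or unparseable input instead of logging an exception.
    static Locale resolve(String tag) {
        if (tag == null || tag.isBlank()) return DEFAULT;
        Locale parsed = Locale.forLanguageTag(tag.replace('_', '-'));
        // forLanguageTag never throws; garbage input yields an empty language
        return parsed.getLanguage().isEmpty() ? DEFAULT : parsed;
    }
}
```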
- Implementation Details
- Testing & Validation
- Database Changes
System Changes Overview
- Database Changes: None
- Configuration Changes: None
- Code Changes: Enhanced validation and error handling in ReportServiceHelper class
- Filesystem Changes: None
- Data Migration: None
Update Actions Analysis
No Update Actions are required for this change.
- The changes are limited to code-level improvements in error handling
- All modifications are contained within the standard deployment process
- No database schema modifications are introduced
- No configuration settings need to be manually updated
Implementation Actions
- Deploy the updated code to the target environment
- Monitor error.log file to verify reduction in locale-related errors
- No system restart is required as the changes only affect runtime behavior
- No additional configuration is needed
Configuration and Access Steps
- No specific configuration is needed
- The fix applies to all report generation functionality
- To validate, execute any report that utilizes the Reporting API
- Navigation: Management → Reports
Test Scenarios
- Report Generation with Valid Locale
- Steps: Generate any report with a valid locale parameter
- Expected result: Report generates successfully with no locale errors in logs
- Report Generation with Null Locale
- Steps: Attempt to generate a report with a null locale value
- Expected result: Report generates with the default locale, no errors in error.log
- Report Generation with Invalid Locale Format
- Steps: Attempt to generate a report with an incorrectly formatted locale
- Expected result: Report generates with the default locale, no errors in error.log
- Verify Error Log File
- Steps: Check error.log after executing reports with various locale settings
- Expected result: No "IllegalArgumentException: Invalid locale format" errors present
Common Issues
- Ensure proper testing on all report types to validate consistent behavior
- Verify integration with external systems that may provide locale parameters
- Check that reports with date/time formatting display correctly with default locale
- Check performance impact of locale handling on report generation time
Potential Impact Areas
- UI Components: No impact on UI components
- Reports & Analytics: All reports should continue functioning correctly with improved error handling
- APIs & Integrations: External systems integrating with Reporting API will benefit from more robust locale handling
- Database Queries: No impact on database queries
- Business Logic: Improved error handling during report parameter processing
- Performance: Slight improvement due to more efficient locale handling and reduced error logging
Schema Updates
- No database schema changes are included in this update
- No data migration is required
- No table structure modifications
Rollback Possibility Assessment
Database rollback is not applicable as no database changes were made.
- This change only affects application code
- No data structures were modified
- No stored procedures were changed
- No data transformations were performed
Enhanced TSYS Integration with Country of Origin Support
Context: TSYS began enforcing the countryOfOriginCd field for merchant onboarding applications with specific MCC codes (9211, 9222, 9311, 9399, 9402, 9405, 9406), causing application failures without proper error messaging.
Solution: Implemented support for the countryOfOriginCd field in TSYS integration by adding the field to the database structure and automatically populating it with the ISO numeric code (840) for US-based merchants.
Impact: Onboarding applications for government agencies and other merchants with specified MCC codes now process successfully through TSYS integration, eliminating the "S99 Field is required" error that previously caused application failures.
- Implementation Details
- Testing & Validation
- Database Changes
System Changes Overview
- Database Changes: Added COUNTRY_OF_ORIGIN_CD VARCHAR(3) column to TSYS_PROVISIONING_TRANSACTION table
- Configuration Changes: No configuration changes required
- Code Changes: Enhanced ProvisioningTsysTransformer to populate the country code field automatically for US-based merchants
- Filesystem Changes: None
- Data Migration: None required, as the new field is nullable
Update Actions Analysis
Update actions are not required for this implementation because:
- The database changes are handled automatically through the standard delta mechanism
- The new field is implemented with nullable characteristics, ensuring backward compatibility
- The changes only enhance existing functionality without modifying core business processes
- The automatic population of the field requires no manual input or configuration
Implementation Actions
- Deploy the database delta script to add the COUNTRY_OF_ORIGIN_CD column
- Deploy application code with updated TSYS integration logic
- Monitor onboarding applications, particularly those with government agency MCC codes
- Verify successful TSYS integration by checking for absence of S99 error messages
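The population rule reduces to a small conditional, sketched here (the actual mapping is applied inside ProvisioningTsysTransformer when the TSYS request is built):

```java
public class CountryOfOriginSketch {
    // US merchants get the ISO 3166-1 numeric code for the United States (840);
    // for all other countries the nullable field is left unset.
    static String countryOfOriginCd(String merchantCountryCode) {
        return "US".equals(merchantCountryCode) ? "840" : null;
    }
}
```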
Configuration and Access Steps
- No special configuration needed for testing
- Create a new onboarding application with Business Type set to "Government Agency"
- Set the MCC code to one of the affected codes: 9211, 9222, 9311, 9399, 9402, 9405, or 9406
- Submit the application through the standard onboarding process
- Navigation: Merchants > Onboarding > Applications > Create Application
Test Scenarios
- US Government Agency Onboarding
- Steps: Create an onboarding application with US address, Business Type as "Government Agency", and MCC code 9399
- Expected result: Application successfully submits to TSYS, countryOfOriginCd is populated with "840" value
- Non-US Government Agency Onboarding
- Steps: Create an onboarding application with non-US address, Business Type as "Government Agency", and MCC code 9399
- Expected result: Application successfully submits to TSYS, countryOfOriginCd is not populated
- Standard US Business Onboarding
- Steps: Create an onboarding application with US address, Business Type as "LLC", and standard MCC code
- Expected result: Application successfully submits to TSYS, countryOfOriginCd is populated with "840" value
- Existing Applications Re-submission
- Steps: Find a previously failed application with "S99 Field is required" error, resubmit the application
- Expected result: Application successfully processes without the previous error
Common Issues
- Ensure correct MCC code selection for government agencies
- Verify country information is properly set in merchant details
- Check TSYS response logs for any validation errors
- Monitor integration logs for proper XML generation
Potential Impact Areas
- UI Components: No impact on UI components as the enhancement is backend-only
- Reports & Analytics: No impact on reporting functionality
- APIs & Integrations: Improved reliability of TSYS integration for merchant onboarding
- Database Queries: Minimal impact with addition of one nullable field
- Business Logic: Enhanced validation logic for government agency merchants
- Performance: No significant performance impact expected
Schema Updates
- Added COUNTRY_OF_ORIGIN_CD VARCHAR(3) column to TSYS_PROVISIONING_TRANSACTION table
- Field is nullable to maintain backward compatibility
- No indexes or constraints added
Rollback Possibility Assessment
Database changes can be safely rolled back if necessary.
- The added column doesn't affect existing data integrity
- No data transformations are performed on existing records
- Column is nullable, so no constraints are violated
- No cascading effects to other tables or systems
Note: Schedule these database structure changes during non-business hours in the client's time zone, since any offline period may run longer than estimated due to unforeseen circumstances.
Onboarding API Validation Improvements
Context: The previous implementation of Onboarding API required unnecessary fields for Government Agency business types and Canadian merchants, forcing users to submit dummy data for these fields.
Solution: Modified the validation logic in Onboarding API to make certain fields optional based on business type and country, eliminating the need to provide irrelevant information.
Impact: Simplified the onboarding process for Government Agency businesses and Canadian merchants, reducing data entry requirements and improving the user experience.
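Conceptually, the new rules can be sketched as a conditional required-field check (names and simplified boolean inputs are assumptions; the real logic lives in OnboardingIntegrityValidator):

```java
import java.util.ArrayList;
import java.util.List;

public class OnboardingValidationSketch {
    // Government Agency (GA) applications need only basic officer contact data,
    // and Canadian merchants are exempt from SSN requirements.
    static List<String> missingFields(String ownershipType, String countryCode,
                                      boolean hasOwnerData, boolean hasOfficerSsn) {
        boolean ga = "GA".equals(ownershipType);
        boolean ca = "CA".equals(countryCode);
        List<String> missing = new ArrayList<>();
        if (!ga && !hasOwnerData) missing.add("owners");
        if (!ga && !ca && !hasOfficerSsn) missing.add("officer.socialSecurity");
        return missing;
    }
}
```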
- Implementation Details
- Testing & Validation
- Database Changes
System Changes Overview
- Database Changes: No schema changes; only validation logic modifications
- Configuration Changes: No configuration changes required
- Code Changes: Modified validation logic in OnboardingIntegrityValidator and related classes
- Filesystem Changes: None
- Data Migration: None required
Update Actions Analysis
No update actions are required for this change as all modifications are implemented through standard code deployment.
- Changes are contained within application code and don't require database schema modifications
- No configuration files need to be manually updated
- Validation logic changes are handled automatically upon deployment
- No data migration scripts are needed
Implementation Actions
- Deploy updated code with modified validation rules
- Verify validation behavior in test environment
- Update documentation to reflect new validation requirements
- Train support staff on new validation behavior
Configuration and Access Steps
- No special configuration is required to enable this feature
- Changes apply automatically to all new onboarding applications
- Testing should focus on specific business types and country combinations
- Navigation: Application Management → Onboarding → New Application
Test Scenarios
- Government Agency Business Type
  - Steps: Create a new onboarding application with business.ownershipStructureType = GA
  - Expected result: Only officer name, title, phone, and email are required; other fields are optional
- Canadian Merchant Validation
  - Steps: Create a new onboarding application with business.countryCode = CA
  - Expected result: SSN/socialSecurity fields are optional for all business types
- Government Agency UI Field Visibility
  - Steps: Create a new onboarding application with Government Agency type and open the Owners tab
  - Expected result: TaxID field is hidden on the Owners tab
- Onboarding API Request Validation
  - Steps: Submit an API request for Government Agency with minimal required fields
  - Expected result: Request is accepted without requiring owner data or officer SSN/address/DOB
Common Issues
- Validation errors may still appear if using old API requests with missing required fields
- Ensure all required fields for officers (name, title, phone, email) are provided
- For non-GA business types, standard validation rules still apply
- Canadian merchants still require all standard fields except SSN/socialSecurity
Potential Impact Areas
- UI Components: The onboarding wizard form now conditionally displays field requirements
- Reports & Analytics: No impact on reports or analytics functionality
- APIs & Integrations: Onboarding API now accepts different field combinations based on business type and country
- Database Queries: No impact on query structure or performance
- Business Logic: Validation rules now consider business type and country when determining required fields
- Performance: Negligible impact on system performance
Schema Updates
- No database schema changes were made in this enhancement
- Changes are limited to application validation logic
- Existing database tables and fields are used without modification
Rollback Possibility Assessment
Database rollback is not applicable as no database structure changes were made.
- No tables or columns were added, modified, or removed
- No data transformations were performed
- No constraints or indexes were altered
- Changes are isolated to application code validation logic
Corrected fee charging during scheme transition
Context: When merchants were switched from Demand-Demand fee scheme to Demand-Cycle scheme, the system would incorrectly regenerate Reconciliation Statements for prior periods. This resulted in duplicate fee charges being applied to merchant accounts.
Solution: Modified the fee scheme transition logic to properly set the last_statement_date field when switching from Demand-Demand to other schemes. The system now copies the last_deposit_statement_date value to the last_statement_date field during the transition.
Impact: Merchants transitioning between fee schemes will no longer experience duplicate charges for previous periods. This prevents financial discrepancies and improves the reliability of the fee processing system.
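The transition rule above can be sketched in a few lines. The class and field names below are assumptions for illustration; the actual fix lives in UniCoreFeesHandler, which is not reproduced here.

```java
import java.time.LocalDate;

// Illustrative sketch of the fee-scheme transition rule; names are assumed,
// not the actual UniCoreFeesHandler implementation.
public class FeeSchemeTransition {

    public static class Merchant {
        public String feeScheme;                  // e.g. "DEMAND_DEMAND", "DEMAND_CYCLE"
        public LocalDate lastDepositStatementDate;
        public LocalDate lastStatementDate;
    }

    /** Applies a scheme change, seeding last_statement_date when leaving Demand-Demand. */
    public static void changeScheme(Merchant m, String newScheme) {
        boolean leavingDemandDemand =
                "DEMAND_DEMAND".equals(m.feeScheme) && !"DEMAND_DEMAND".equals(newScheme);
        if (leavingDemandDemand) {
            // Without this, the reconciliation job sees an empty last_statement_date
            // and regenerates statements (and fees) for periods before the transition.
            m.lastStatementDate = m.lastDepositStatementDate;
        }
        m.feeScheme = newScheme;
    }
}
```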
- Implementation Details
- Testing & Validation
- Database Changes
System Changes Overview
- Database Changes: No direct schema changes, only data value updates to the last_statement_date field
- Configuration Changes: None
- Code Changes: Modified fee scheme transition logic in UniCoreFeesHandler
- Filesystem Changes: None
- Data Migration: None
Update Actions Analysis
Update actions are not required for this implementation.
- The fix is fully implemented through code changes
- No database schema modifications were needed
- No configuration changes are required
- No manual data updates are necessary for the fix to function
Implementation Actions
- Updated the UniCoreFeesHandler.java class to set last_statement_date when changing from Demand-Demand to cycle-based schemes
- Added condition to detect fee scheme transitions from Demand-Demand to Demand-Cycle or Cycle-Cycle
- Implemented copy of last_deposit_statement_date to last_statement_date during transitions
- Added the change to both merchant creation and merchant update flows
Configuration and Access Steps
- Create a test merchant account with the Demand-Demand fee scheme
- Generate several transactions to establish fee history
- Navigate to: Merchants → Merchant Details → Fee Settings
Test Scenarios
- Transition from Demand-Demand to Demand-Cycle
  - Steps: Change merchant fee scheme from Demand-Demand to Demand-Cycle
  - Expected result: The system should set last_statement_date equal to last_deposit_statement_date
- Verify statement generation after transition
  - Steps: Wait for the next reconciliation cycle to execute
  - Expected result: Only statements for periods after the transition date should be generated
- Check fee charges after transition
  - Steps: Review merchant account activity and reconciliation statements
  - Expected result: No duplicate charges for periods before the transition should appear
- Test with a new merchant account
  - Steps: Create a new merchant with Demand-Cycle fee scheme
  - Expected result: The system should properly initialize last_statement_date
Common Issues
- Verify no statements are generated for periods prior to the transition date
- Confirm that the last_statement_date is correctly populated during scheme changes
- Check that no duplicate fee charges appear in merchant account activity
- Ensure reconciliation process correctly honors the last_statement_date value
Potential Impact Areas
- UI Components: No impact on UI components
- Reports & Analytics: Fee reports should show correct amounts without duplications
- APIs & Integrations: API requests that change fee schemes will apply the same fixes automatically
- Database Queries: Queries that check last_statement_date will now find proper values after scheme transitions
- Business Logic: Fee calculation will no longer process duplicate charges for historical periods
- Performance: No significant performance impact expected
Schema Updates
- No schema changes were required for this fix
- Only runtime updates to existing fields (last_statement_date) are involved
- No new tables, columns, or relationships were added
Rollback Possibility Assessment
This change can be safely rolled back if needed.
- No destructive database changes were implemented
- The fix only affects how a field is populated during specific operations
- Rollback would only affect future fee scheme transitions, not existing data
- No data transformation is involved that would cause information loss
Optimized Merchant Statement Extended Query Performance
Context: The merchant statement generation system was performing additional database queries to retrieve submission dates from statistics tables, causing unnecessary database load and slower response times.
Solution: Added direct storage of submission dates in the extended statement details table, eliminating the need for additional queries to statistics tables.
Impact: Improved performance when viewing extended merchant statements and reduced database load during statement generation and retrieval operations.
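The denormalization described above amounts to copying the submission date into the detail row at generation time, so reads no longer join against the statistics tables. The sketch below illustrates that idea with assumed names; the real statement-generation code is not shown in these notes.

```java
import java.time.LocalDateTime;
import java.util.ArrayList;
import java.util.List;

// Minimal sketch of the denormalization: the submission date is written into
// the extended statement detail row (new submission_date column) when the
// statement is generated. Names are illustrative assumptions.
public class ExtendedStatementDetailWriter {

    public record Submission(long id, LocalDateTime submissionDate) {}
    public record ExtendedDetail(long submissionId, LocalDateTime submissionDate) {}

    public static List<ExtendedDetail> buildDetails(List<Submission> submissions) {
        List<ExtendedDetail> details = new ArrayList<>();
        for (Submission s : submissions) {
            // Copy the date onto the detail row so later reads avoid the join.
            details.add(new ExtendedDetail(s.id(), s.submissionDate()));
        }
        return details;
    }
}
```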
- Implementation Details
- Testing & Validation
- Database Changes
System Changes Overview
- Database Changes: Added submission_date column to the merchant_statement_extended_detail table
- Configuration Changes: None
- Code Changes: Modified query logic to use the new field instead of joining with statistics tables
- Filesystem Changes: None
- Data Migration: Populated the new field with data from existing submission records
Update Actions Analysis
No update actions are required for this improvement as all changes are handled through standard deployment mechanisms.
- The database schema change is implemented through delta scripts
- Data migration is performed automatically during deployment
- Application code changes are deployed through standard mechanisms
- No manual configuration changes are needed
Implementation Actions
- Deploy database schema changes
- Execute data migration for existing records
- Deploy updated application code
- Schedule deployment during off-hours due to potential database impact
Configuration and Access Steps
- No special configuration is required after deployment
- Verify extended statement access through standard interfaces
- Navigation: Management → Reports → Extended Statement
Test Scenarios
- Statement Generation Verification
  - Steps: Generate a new merchant statement with multiple submissions
  - Expected result: Statement details should include correct submission dates
- Existing Statement Date Accuracy
  - Steps: Access previously generated statements and verify submission dates
  - Expected result: All historic submission dates should be accurately displayed
- Performance Improvement Validation
  - Steps: Compare query execution time before and after the change
  - Expected result: Reduced query execution time for statement retrieval operations
- Statement Report Filtering
  - Steps: Filter statements by date ranges
  - Expected result: Filtering should work correctly and display appropriate submissions
Common Issues
- Query execution plans may need to be refreshed on some database configurations
- High-volume merchants might experience longer initial migration times
- Database load monitoring is recommended during the first statement generation
- Temporary performance impact during index creation
Potential Impact Areas
- UI Components: No visible changes to UI components, all changes are internal
- Reports & Analytics: Merchant statement reports will retrieve data more efficiently with no functional changes
- APIs & Integrations: No changes to API interfaces or integration points
- Database Queries: Improved efficiency for statement detail retrieval queries
- Business Logic: Submission date determination now uses direct field access instead of joins
- Performance: Reduced database load and faster statement retrieval operations
Schema Updates
- Added submission_date column (DATETIME type) to the merchant_statement_extended_detail table
- Created temporary migration process to populate the new field
- Added appropriate indexes to support query patterns
Rollback Possibility Assessment
Database changes in this release can be rolled back if necessary.
- Changes are additive in nature (column addition)
- No destructive operations are performed on existing data
- Original data sources remain intact
- Rollback scripts have been provided
Note: We recommend scheduling the deployment during non-business hours due to the database schema modifications. While the changes are non-destructive, there might be temporary performance impacts during the migration process.
Fixed Mastercard Card Type Preservation in Elavon-EU Integration
Context: In the Elavon-EU integration, Mastercard transactions were incorrectly displayed as UnionPay credit cards when void operations were performed or when E02 decline responses were received.
Solution: Modified the card type detection logic to preserve the original Mastercard card type during void operations and E02 decline responses, ensuring correct classification throughout the transaction lifecycle.
Impact: Transactions performed with Mastercard cards now correctly maintain their card type classification during all operations, allowing proper filtering, reporting, and tokenization based on the actual card type.
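The guard described above can be sketched as follows. This is not the actual RetailElavonEisopAuthorizationBackTransformer code; the method and parameter names are assumptions that merely illustrate the rule that an already-known card brand must survive void operations and E02 declines.

```java
// Hedged sketch: never overwrite a known card brand on a void or E02 decline.
public class CardBrandResolver {

    /** Returns the brand to record for a response. */
    public static String resolveBrand(String existingBrand, String responseBrand,
                                      boolean isVoid, String responseCode) {
        // Preserve the brand captured at authorization time for voids and E02s,
        // instead of trusting the (possibly wrong) brand in the back response.
        boolean preserveExisting = existingBrand != null
                && (isVoid || "E02".equals(responseCode));
        return preserveExisting ? existingBrand : responseBrand;
    }
}
```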
- Implementation Details
- Testing & Validation
- Database Changes
System Changes Overview
- Database Changes: None required for implementation. A post-update SQL script will update existing affected transactions.
- Configuration Changes: None
- Code Changes: Modified conditional check in the RetailElavonEisopAuthorizationBackTransformer class to prevent brand redefinition during void operations
- Filesystem Changes: None
- Data Migration: Post-update SQL script to correct existing affected transactions
Update Actions Analysis
Update actions are required for this fix because existing data must be corrected:
- The system needs to identify and correct previously affected Mastercard transactions that were incorrectly labeled as UnionPay
- This cannot be implemented through standard code changes as it requires updating existing transaction records
- The update must be performed with careful database modification to ensure system integrity
- Correcting the token values requires direct database interaction which is not possible through standard deployment mechanisms
Implementation Actions
- Modified the transaction processing logic to use more specific response code validation
- Updated the RetailElavonEisopAuthorizationBackTransformer class to prevent Mastercard to UnionPay conversion during void/decline
- Created a post-update SQL script to correct existing affected transactions
- Applied conditional logic to ensure only cards beginning with '5' (Mastercard) are corrected
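The last step above, restricting the corrective script to cards beginning with '5', could look like the sketch below. The helper name and masked-PAN format are hypothetical; only the "starts with '5'" condition comes from the notes.

```java
// Illustrative filter for the post-update correction: only relabel records
// recorded as UnionPay whose card number starts with '5' (a Mastercard range).
public class CorrectionFilter {

    public static boolean shouldRelabelAsMastercard(String maskedPan, String recordedBrand) {
        return "UNIONPAY".equals(recordedBrand)
                && maskedPan != null
                && maskedPan.startsWith("5");
    }
}
```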
Configuration and Access Steps
- No special configuration is needed
- No specific settings changes are required
- Standard authorization credentials are sufficient
- Navigation: Transaction Reports → Search Transactions
Test Scenarios
- Mastercard Sale Transaction
  - Steps: Process a regular sale transaction using a Mastercard
  - Expected result: Transaction is properly recorded with Mastercard card type
- Mastercard Void Operation
  - Steps: Process a sale transaction with Mastercard, then void the transaction
  - Expected result: Void transaction retains Mastercard card type classification
- Mastercard Transaction with E02 Decline
  - Steps: Simulate a transaction that receives an E02 decline response
  - Expected result: Transaction maintains Mastercard card type despite E02 response
- Transaction Search Filtering
  - Steps: Search for transactions using Mastercard as filter criteria
  - Expected result: All Mastercard transactions appear in results, including voids and declines
Common Issues
- In some cases, transactions voided prior to this fix may still show as UnionPay until the database update script is executed
- Automatic retries on declined transactions may need monitoring to verify correct card type preservation
- Token references will be corrected for transactions that have been updated by the script
- Report filtering on card type may show different results after the fix is applied
Potential Impact Areas
- UI Components: Transaction details screen, transaction reports, and search results will now display correct card types
- Reports & Analytics: Reports aggregating by card type will show proper Mastercard transaction counts
- APIs & Integrations: Applications consuming transaction data will receive correct card type information
- Database Queries: Queries filtering by card type will return more accurate results
- Business Logic: Card-type dependent business rules will now apply correctly to all Mastercard transactions
- Performance: Post-update SQL script execution may create temporary database load
Schema Updates
- No schema changes are required for this fix
- The correction is implemented through application logic changes
- Post-update SQL script will update data in existing tables without structural changes
- No new fields, tables, or indexes are needed
Rollback Possibility Assessment
Rollback of this fix is technically possible but not recommended:
- The code changes can be reverted to restore previous behavior
- However, data corrected by the post-update SQL script will not automatically revert
- Manual data correction would be required to restore the previous state
- Reverting would reintroduce the reported issue, affecting reporting and tokenization
It's recommended to schedule the deployment of this fix during non-business hours in the client's time zone, as potential offline periods may extend beyond estimates due to unforeseen circumstances.
Fixed D48 error handling for contact cards on proxy integration
Context: The system incorrectly triggered D48 error codes for contact cards during transactions on proxy integration, causing the PIN pad to disappear from the terminal screen.
Solution: Implemented enhanced validation logic in the RetailProxyAuthorizationBackTransformer to check both card entry mode and PIN code presence before applying D48 error codes.
Impact: Contact card transactions now process correctly without unnecessary PIN verification prompts, eliminating the issue of missing PIN pad after D48 responses.
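The enhanced validation can be sketched as a simple two-condition check. The names below are assumptions, not the actual RetailProxyAuthorizationBackTransformer code: D48 (PIN verification required) should only be applied for contactless entry when no PIN has already been captured.

```java
// Sketch of the D48 guard: both card entry mode and PIN presence are checked
// before the error code is applied. Names are illustrative assumptions.
public class D48Guard {

    /** Returns true only when a D48 (PIN required) response should be surfaced. */
    public static boolean shouldApplyD48(String entryMode, boolean pinPresent) {
        // Contact (inserted) cards handle PIN entry via the chip flow, so
        // forcing D48 there made the PIN pad disappear from the terminal.
        boolean contactless = "CONTACTLESS".equals(entryMode);
        return contactless && !pinPresent;
    }
}
```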
- Implementation Details
- Testing & Validation
- Database Changes
System Changes Overview
- Database Changes: None
- Configuration Changes: None
- Code Changes: Modified RetailProxyAuthorizationBackTransformer.java to add card entry mode verification for D48 error handling
- Filesystem Changes: None
- Data Migration: None
Update Actions Analysis
No update actions are required as the changes can be executed through standard deployment.
- The modification involves only Java code changes in the application logic
- No database schema modifications are needed
- No configuration file changes are required
- The change is automatically applied when the system is redeployed
Implementation Actions
- Deploy the updated application code to production environment
- Monitor transaction processing after deployment, especially focusing on contact card transactions
- Verify correct PIN pad behavior across different terminal types
- No additional configuration or manual actions required post-deployment
Configuration and Access Steps
- Ensure testing environment includes proxy integration configuration
- Set up test terminals with both contact and contactless card capabilities
- Configure test amounts that trigger D48 error codes (for contactless cards)
- Navigation: Terminal Application → Card Payment → Contact/Insert Card
Test Scenarios
- Contact Card Transaction Validation
  - Steps: Perform transaction with contact card using amount that would typically trigger D48
  - Expected result: Transaction processes without D48 error, PIN pad not requested unnecessarily
- Contactless Card Transaction Validation
  - Steps: Perform transaction with contactless card using amount that triggers D48
  - Expected result: D48 is triggered correctly, PIN pad displayed for verification
- Contact Card with Different Entry Modes
  - Steps: Test transactions with contact cards using different entry modes (chip, fallback swipe)
  - Expected result: No D48 errors occur regardless of entry mode for contact cards
- Error Handling Verification
  - Steps: Cancel transaction during processing for both contact and contactless cards
  - Expected result: System handles cancellation gracefully without UI freezing issues
Common Issues
- Potential intermittent behavior during high transaction volumes
- Possible variation in behavior across different terminal models
- Terminal application version compatibility considerations
- Network connectivity issues may affect error handling display
Potential Impact Areas
- UI Components: PIN pad display and transaction confirmation screens show correctly for all card types
- Reports & Analytics: No impact on reporting or analytics functionality
- APIs & Integrations: Proxy integration handling of response codes is improved
- Database Queries: No impact on database queries or performance
- Business Logic: Transaction processing logic for different card types is now more accurate
- Performance: Minor improvement in error handling efficiency, no significant performance impact
Schema Updates
- No database schema changes are included in this release
- No new tables, columns, or indexes were created
- No data manipulation required
Rollback Possibility Assessment
Rollback is possible as no database changes are involved.
- The change involves only application code
- No data structures are modified
- Transaction data integrity is not affected
- Previous code version can be reinstated if necessary
Improved Error Messages for Date Validation in API
Context: When freezing a subscription through the API, users needed to provide an effective date that matched the billing date, but received unclear error messages when validation failed.
Solution: Enhanced the API response with clearer error messages that explain the relationship between the effective date and billing date, including specific examples of valid dates.
Impact: API users now receive more informative error messages when date validation fails, making it easier to understand the specific requirements for subscription freeze operations.
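One way the valid-date examples in the F32 message could be produced is sketched below. The class and method names are hypothetical; the real subscription service's API is not shown in these notes, and only the F32 code and the "effective date must match billing date" rule come from them.

```java
import java.time.DayOfWeek;
import java.time.LocalDate;
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of example-date generation for the F32 error message.
public class FreezeDateExamples {

    /** Next few valid effective dates for a weekly billing cycle. */
    public static List<LocalDate> weeklyExamples(DayOfWeek billingDay,
                                                 LocalDate from, int count) {
        List<LocalDate> examples = new ArrayList<>();
        LocalDate d = from;
        while (examples.size() < count) {
            if (d.getDayOfWeek() == billingDay) {
                examples.add(d);
            }
            d = d.plusDays(1);
        }
        return examples;
    }

    /** Assembles an F32-style message that includes concrete valid dates. */
    public static String f32Message(List<LocalDate> examples) {
        return "F32: effectiveDate must match the billing date; valid examples: " + examples;
    }
}
```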
- Implementation Details
- Testing & Validation
- Database Changes
System Changes Overview
- Database Changes: None
- Configuration Changes: None
- Code Changes: Modified error handling for date validation in the subscription service
- Filesystem Changes: None
- Data Migration: None
Update Actions Analysis
No update actions are required for this implementation.
- Changes are implemented through standard code deployment
- New error message format is handled automatically by the system
- No manual configuration changes needed
- No database schema or data migrations required
Implementation Actions
- Added new error code F32 for more descriptive date validation errors
- Added new relationship type "must match" for improved error context
- Implemented example date generation to show valid dates in error messages
- Modified subscription freeze validation to use the new error format
Configuration and Access Steps
- No special configuration is required for this feature
- Testing requires access to the API for subscription operations
- Prepare test subscription accounts with various billing cycles for validation
- Navigation: N/A (API functionality only)
Test Scenarios
- Monthly Subscription Freeze with Invalid Date
  - Steps: Send a freeze request with an effective date that doesn't match the monthly billing day
  - Expected result: Error message F32 showing the effective date must match the billing date, with examples of valid dates
- Weekly Subscription Freeze with Invalid Date
  - Steps: Send a freeze request with an effective date that doesn't match the weekly billing day
  - Expected result: Error message F32 showing the effective date must match the billing day of week, with examples
- Quarterly/Semi-Annual/Annual Subscription with Invalid Date
  - Steps: Send a freeze request with an effective date that doesn't match the billing day
  - Expected result: Error message showing the effective date must match the billing date with appropriate examples
- Valid Date for Subscription Freeze
  - Steps: Send a freeze request with an effective date that matches the next billing date
  - Expected result: Request processes successfully without error
Common Issues
- Legacy integrations might expect the previous error format (F27)
- Different date formats in requests may affect validation behavior
- Remember that date validation is based on billing cycle (monthly/weekly)
- Edge cases around month boundaries (e.g., billing on 31st) need special attention
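One common way to handle the 31st-of-month edge case noted above is to clamp the billing day to the last day of shorter months. This is an assumption about the billing rule for illustration only; the notes do not specify how the service resolves it.

```java
import java.time.LocalDate;
import java.time.YearMonth;

// Sketch of day-of-month clamping for billing dates (assumed rule, not
// confirmed by the release notes): billing on the 31st falls on Feb 29 in
// a leap-year February, Feb 28 otherwise, Apr 30, and so on.
public class BillingDayClamp {

    public static LocalDate billingDateFor(YearMonth month, int billingDayOfMonth) {
        int day = Math.min(billingDayOfMonth, month.lengthOfMonth());
        return month.atDay(day);
    }
}
```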
Potential Impact Areas
- UI Components: No impact on UI components
- Reports & Analytics: No impact on reports or analytics
- APIs & Integrations: External integrations that handle error responses may need to be updated to recognize the new error format
- Database Queries: No impact on database queries
- Business Logic: Subscription freeze validation logic has been updated to provide more informative errors
- Performance: No impact on performance
Schema Updates
- No database schema changes required for this implementation
- No tables, fields, or constraints have been modified
- No indexes or stored procedures have been affected
Rollback Possibility Assessment
Database rollback is not applicable as no database changes were implemented.
- No database structures were modified
- No data migrations were performed
- No database configuration changes were made
- Feature is entirely implemented in application code
Expiration Date Validation in Account Verification Requests
Context: During the COVID-19 pandemic, the system was configured to accept expired payment cards to accommodate temporary card extension policies implemented by US banks. This temporary measure was no longer necessary but remained active in the system.
Solution: Restored standard validation of payment card expiration dates for all Processing API operations, including account verification requests, ensuring expired cards are properly rejected with appropriate error messages.
Impact: The system now verifies that payment cards have not expired when processing account verification requests and other API operations, providing clearer error messages to merchants when expired cards are used.
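The restored check can be sketched as follows, assuming the MMYY format mentioned later in this section and the common convention that a card stays valid through the last day of its expiry month. The method name is an assumption; only the V51 error code comes from the notes.

```java
import java.time.YearMonth;

// Hedged sketch of the restored expiry validation: a card in MMYY format is
// treated as valid through the end of its expiry month.
public class ExpiryValidator {

    /** Returns null if the card is valid, or "V51" if it has expired as of 'now'. */
    public static String validate(String mmyy, YearMonth now) {
        int month = Integer.parseInt(mmyy.substring(0, 2));
        int year = 2000 + Integer.parseInt(mmyy.substring(2, 4)); // two-digit year
        YearMonth expiry = YearMonth.of(year, month);
        return expiry.isBefore(now) ? "V51" : null;
    }
}
```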
- Implementation Details
- Testing & Validation
- Database Changes
System Changes Overview
- Database Changes: None
- Configuration Changes: None
- Code Changes: Updated validation logic for card expiration dates in Processing API and UI components
- Filesystem Changes: None
- Data Migration: None
Update Actions Analysis
No update actions are required for this implementation.
- The changes are implemented through standard code modifications
- No database structure or configuration changes were needed
- No existing data requires migration or transformation
- All changes will be automatically applied during the standard deployment process
Implementation Actions
- Modified validation logic to verify that card expiration dates are in the future
- Added new error code V51 for expired cards with message "[ObjectName] has expired"
- Updated UI components to calculate expiration dates based on the current year
- Improved error handling for payment pages and redirects
Configuration and Access Steps
- No special configuration is required to enable this validation
- The validation is automatically applied to all relevant API endpoints
- This validation affects account-verification, sale, sale-auth, credit, tokenization and other card-based operations
- Navigation: Transactions → Process Transaction
Test Scenarios
- Account Verification with Expired Card
  - Steps: Submit an account verification request with a card that has an expiration date in the past
  - Expected result: Request is rejected with error code V51 "Card has expired"
- Account Verification with Valid Card
  - Steps: Submit an account verification request with a card that has a future expiration date
  - Expected result: Request is processed successfully
- Sale Transaction with Expired Card
  - Steps: Attempt to process a sale transaction with a card that has an expiration date in the past
  - Expected result: Transaction is rejected with error code V51 "Card has expired"
- PaymentOption Creation with Expired Card
  - Steps: Attempt to create a payment option with a card that has an expiration date in the past
  - Expected result: Request is rejected with error code V51 "Card has expired"
Common Issues
- Verify format of expiration date in requests follows MMYY pattern
- Ensure the system date is correctly set on the server
- Check if payment processor is correctly interpreting the expiration date format
- Confirm proper error message display on hosted payment pages
Potential Impact Areas
- UI Components: Form validation behavior for card expiration date fields
- Reports & Analytics: None
- APIs & Integrations: All integrations processing card-based transactions must handle the V51 error code
- Database Queries: None
- Business Logic: Card validation process for all payment operations
- Performance: None
Schema Updates
- No database schema changes were required for this implementation
- The validation is handled entirely through application logic
- No new tables, fields, or indexes were created or modified
Rollback Possibility Assessment
Database rollback is not applicable for this change.
- No database changes were made as part of this implementation
- The modifications are limited to application code only
- No data migration or transformation was performed
- In case of issues, standard code rollback procedures would be sufficient